GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 32,845,800 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-05-05T09:45:00.000 | 0 | 2 | 0 | What does eigenfaces training set have to look like? | 30,049,490 | 1.2 | python,opencv,training-data | I later found the answer and would like to share it if someone will be facing the same challenges.
You need pictures only of the different people you are trying to recognise. I created my training set with 30 images of every person (6 persons) and figured out that histogram equalisation can play an important role when creating the training set and later when recognising faces. With histogram equalisation, model accuracy was greatly increased (a short equalisation sketch follows this row). Another thing to consider is eye-axis alignment, so that all pictures have their eye axis aligned before they enter face recognition. | I am using Python and OpenCV to create face recognition with Eigenfaces. I stumbled on a problem, since I don't know how to create the training set.
Do I need multiple faces of the people I want to recognize (myself, for example), or do I need a lot of different faces to train my model?
First I tried training my model with 10 pictures of my face and 10 pictures of ScarJo face, but my prediction was not working well.
Now I'm trying to train my model with 20 different faces (mine is one of them).
Am I doing it wrong and if so what am I doing wrong? | 0 | 1 | 247 |
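For reference, a minimal sketch of the preprocessing step the answer above recommends (histogram equalisation of greyscale training images with OpenCV). The file names are placeholders and the exact pipeline is an assumption:

import cv2

# Hypothetical file name; in practice, loop over all images in your training set.
img = cv2.imread('person1_01.jpg')

# Eigenfaces operate on single-channel images, so convert to greyscale first.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Histogram equalisation spreads the intensity range; per the answer above,
# this noticeably improved recognition accuracy.
equalized = cv2.equalizeHist(gray)

cv2.imwrite('person1_01_eq.jpg', equalized)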
0 | 56,863,216 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2015-05-05T14:50:00.000 | 3 | 4 | 0 | How to list all scikit-learn classifiers that support predict_proba() | 30,056,331 | 0.148885 | python,scikit-learn | AdaBoostClassifier
BaggingClassifier
BayesianGaussianMixture
BernoulliNB
CalibratedClassifierCV
ComplementNB
DecisionTreeClassifier
ExtraTreeClassifier
ExtraTreesClassifier
GaussianMixture
GaussianNB
GaussianProcessClassifier
GradientBoostingClassifier
KNeighborsClassifier
LabelPropagation
LabelSpreading
LinearDiscriminantAnalysis
LogisticRegression
LogisticRegressionCV
MLPClassifier
MultinomialNB
NuSVC
QuadraticDiscriminantAnalysis
RandomForestClassifier
SGDClassifier
SVC
_BinaryGaussianProcessClassifierLaplace
_ConstantPredictor | I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically? | 0 | 1 | 8,716 |
0 | 72,497,753 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2015-05-05T14:50:00.000 | 0 | 4 | 0 | How to list all scikit-learn classifiers that support predict_proba() | 30,056,331 | 0 | python,scikit-learn | If you are interested in a specific type of estimator (say, a classifier), you could go with:
import sklearn
estimators = sklearn.utils.all_estimators(type_filter="classifier")
for name, class_ in estimators:
    if hasattr(class_, 'predict_proba'):
        print(name) | I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically? | 0 | 1 | 8,716 |
0 | 30,084,402 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-05T15:02:00.000 | 0 | 1 | 0 | How to get the same thresholds values for both functions precision_recall_curve and roc_curve in sklearn.metrics | 30,056,603 | 0 | python,scikit-learn,roc,precision-recall | The threshold values have two major differences.
The orders are different. roc_curve has thresholds in decreasing order, while precision_recall_curve has thresholds in increasing order.
The numbers are different. In roc_curve, n_thresholds = len(np.unique(probas_pred)), while in precision_recall_curve the number n_thresholds = len(np.unique(probas_pred)) - 1. In the latter, the smallest threshold value from roc_curve is not included. At the same time, the last precision and recall values are 1. and 0. respectively, with no corresponding threshold. Therefore, the numbers of items for tpr, fpr, precision and recall are the same.
So, back to your question, how to make a table to include tpr, fpr, precision and recall with corresponding thresholds? Here are the steps:
Discard the last precision and recall values
Reverse the precision and recall values
Compute the precision and recall values corresponding to the lowest threshold value from the thresholds of roc_curve
Put all the values into the same table | I need to make a table with the TPR and FPR values, as well as precision and recall. I am using the roc_curve and precision_recall_curve functions from sklearn.metrics package in python. My problem is that every function give me a different vector for the thresholds, and I need only one, to merge the values as columns in a single table. Could anyone help me?
Thanks in advance | 0 | 1 | 1,042 |
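Rather than reconciling the two different threshold vectors returned by scikit-learn, one workable alternative is to build the table directly from a single shared set of thresholds. A sketch, assuming y_true holds 0/1 labels and probas_pred the predicted scores, both as NumPy arrays:

import numpy as np

# Shared thresholds: every distinct predicted score, in decreasing order.
thresholds = np.unique(probas_pred)[::-1]

rows = []
for t in thresholds:
    pred = probas_pred >= t
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    tpr = tp / (tp + fn)          # TPR is the same quantity as recall
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp)
    rows.append((t, tpr, fpr, precision, tpr))
# Each row: threshold, TPR, FPR, precision, recall -- ready to put in one table.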
0 | 30,063,676 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-05-05T21:21:00.000 | 1 | 1 | 0 | how to extract the relative colour intensity in a black and white image in python? | 30,063,430 | 0.197375 | python,arrays,image,image-processing,colors | The image is being opened as a color image, not as a black and white one. The shape is 181x187x3 because of that: the 3 is there because each pixel is an RGB value. Quite often images in black and white are actually stored in an RGB format. For an image image, if np.all(image[:,:,0]==image[:,:,1]) and so on, then you can just choose to use any of them (eg, image[:,:,0]). Alternatively, you could take the mean with np.mean(image,axis=2).
Note too that the range of values will depend on the format, and so depending upon what you mean by color intensity, you may need to normalize them. In the case of a jpeg, they are probably uint8s, so you may want image[:,:,0].astype('float')/255 or something similar. | Suppose I have got a black an white image, how do I convert the colour intensity at each point into a numerical value that represents its relativity intensity?
I checked somewhere on the web and found the following:
Intensity = np.asarray(PIL.Image.open('test.jpg'))
What's the difference between asarray and array?
Besides, the shape of the array Intensity is '181L, 187L, 3L'. The size of the image test.jpg is 181x187, so what does the extra '3' represent?
And are there any other better ways of extracting the colour intensity of an image?
thank you. | 0 | 1 | 473 |
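A short sketch of the approach described in the answer above, assuming test.jpg is the image in question and that its three channels are (near-)identical:

import numpy as np
import PIL.Image

img = np.asarray(PIL.Image.open('test.jpg'))   # shape (height, width, 3) for an RGB file

# Collapse the three (identical) channels into one intensity value per pixel.
intensity = img.mean(axis=2)

# Normalise 8-bit values so each pixel holds its relative intensity in [0, 1].
relative_intensity = intensity.astype('float') / 255.0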
0 | 30,066,780 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-05-06T01:22:00.000 | 1 | 1 | 0 | Does numpy use spatial locality in memory while doing matrix multiplication? | 30,065,905 | 0.197375 | python,numpy,matrix,matrix-multiplication | Numpy dot is implemented in multiarraymodule.c as PyArray_MatrixProduct2. The implementation it actually uses is dependent upon a number of factors.
If you have numpy linked to a BLAS implementation, your dtypes are all double, cdouble, float, or cfloat, and your arrays have 2 or fewer dimensions each, then numpy hands off the array to the BLAS implementation. What it does is dependent upon the package you're using.
Otherwise, no, it doesn't do this. However, at least on my machine, doing this (or just a dot product in general) with a transpose and einsum is ten times slower than just using dot, because dot pushes to BLAS. | While multiplying large matrices (say A and B, A.dot(B)), does numpy use spatial locality by computing the transpose of the B and using row wise multiplication, or does it access the elements of B in column-wise fashion which would lead to many cache misses. I have observed that memory bandwidth is becoming a bottleneck when I run multiple instances of the same program. For example, if I run 4 independent instances of a program which does matrix multiplication (for large matrices) on a 20 core machine, I only see a 2.3 times speedup. | 0 | 1 | 312 |
0 | 30,067,640 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2015-05-06T03:39:00.000 | 14 | 3 | 0 | Python - What are the major improvement of Pandas over Numpy/Scipy | 30,067,051 | 1.2 | python,numpy,pandas,scipy,data-analysis | Pandas is not particularly revolutionary and does use the NumPy and SciPy ecosystem to accomplish its goals along with some key Cython code. It can be seen as a simpler API to the functionality, with the addition of key utilities like joins and simpler group-by capabilities that are particularly useful for people with table-like data or time series. But, while not revolutionary, Pandas does have key benefits.
For a while I had also perceived Pandas as just utilities on top of NumPy for those who liked the DataFrame interface. However, I now see Pandas as providing these key features (this is not comprehensive):
Array of Structures (independent-storage of disparate types instead of the contiguous storage of structured arrays in NumPy) --- this will allow faster processing in many cases.
Simpler interfaces to common operations (file-loading, plotting, selection, and joining / aligning data) make it easy to do a lot of work in little code.
Index arrays which mean that operations are always aligned instead of having to keep track of alignment yourself.
Split-Apply-Combine is a powerful way of thinking about and implementing data-processing
However, there are downsides to Pandas:
Pandas is basically a user-interface library and not particularly suited for writing library code. The "automatic" features can lull you into repeatedly using them even when you don't need to and slowing down code that gets called over and over again.
Pandas typically takes up more memory as it is generous with the creation of object arrays to solve otherwise sticky problems of things like string handling.
If your use-case is outside the realm of what Pandas was designed to do, it gets clunky quickly. But, within the realms of what it was designed to do, Pandas is powerful and easy to use for quick data analysis. | I have been using numpy/scipy for data analysis. I recently started to learn Pandas.
I have gone through a few tutorials and I am trying to understand what are the major improvement of Pandas over Numpy/Scipy.
It seems to me that the key idea of Pandas is to wrap up different numpy arrays in a Data Frame, with some utility functions around it.
Is there something revolutionary about Pandas that I just stupidly missed? | 0 | 1 | 3,241 |
0 | 30,096,156 | 0 | 0 | 0 | 0 | 2 | false | 7 | 2015-05-06T03:39:00.000 | 1 | 3 | 0 | Python - What are the major improvement of Pandas over Numpy/Scipy | 30,067,051 | 0.066568 | python,numpy,pandas,scipy,data-analysis | A main point is that it introduces new data structures like dataframes, panels etc. and has good interfaces to other structures and libs. So in general it's more a great extension to the Python ecosystem than an improvement over other libs. For me it's a great tool among others like numpy and bcolz. Often I use it to reshape my data and get an overview before starting to do data mining etc. | I have been using numpy/scipy for data analysis. I recently started to learn Pandas.
I have gone through a few tutorials and I am trying to understand what are the major improvement of Pandas over Numpy/Scipy.
It seems to me that the key idea of Pandas is to wrap up different numpy arrays in a Data Frame, with some utility functions around it.
Is there something revolutionary about Pandas that I just stupidly missed? | 0 | 1 | 3,241 |
0 | 30,114,151 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-07T23:50:00.000 | 0 | 1 | 0 | gradient at the minimum in fmin_l_bfgs_b | 30,113,538 | 0 | python,optimization,scipy,gradient | What you are getting is the gradient of the cost function with respect to each parameter, in turn.
To picture it, suppose there were only two parameters, x and y. The cost function is a surface z as a function of x and y.
The optimization is finding a minimum point on that surface.
That's where the gradients with respect to both x and y are zero (or close to it).
If either gradient is not zero, you are not at a minimum, and you would descend further.
As a further point, you could well be interested in the curvature, or second derivative, because high curvature means a narrow (precise) minimum, while low curvature means a nearly flat minimum, with very uncertain estimates.
The second derivative in the x,y case would not be a 2-vector, but a 2x2-matrix (called a "Hessian", just to snow your friends).
You might want to think about why it's a 2x2-matrix. | I am using fmin_l_bfgs_b for a bounded minimization on 4 parameters.
I would like to inspect the gradient at the minimum of the cost function and for this I call the d['grad'] parameter as described in the documentation of fmin_l_bfgs_b. My problem is that d['grad'] is an array of size 4 looking like:
'grad': array([ 8.38440428e-05, -5.72697445e-04, 3.21875859e-03,
-2.21115926e+00])
I would expect it to be a single value close to zero. Does this have something to do with the number of the parameters I am using for the minimization (4)..? Not what I would expect but any help would be appreciated. | 0 | 1 | 255 |
0 | 30,130,970 | 0 | 0 | 1 | 0 | 1 | false | 7 | 2015-05-08T18:12:00.000 | 0 | 3 | 0 | Binary storage of floating point values (between 0 and 1) using less than 4 bytes? | 30,130,277 | 0 | python,numpy,scipy | Use struct.pack() with the f type code to get them into 4-byte packets. | I need to store a massive numpy vector to disk. Right now the vector that I am trying to store is ~2.4 billion elements long and the data is float64. This takes about 18GB of space when serialized out to disk.
If I use struct.pack() with float32 (4 bytes) I can reduce it to ~9GB. I don't need anywhere near this amount of precision, and disk space is quickly going to become an issue, as I expect the number of values I need to store could grow by an order of magnitude or two.
I was thinking that if I could access the first 4 significant digits I could store those values in an int and only use 1 or 2 bytes of space. However, I have no idea how to do this efficiently. Does anyone have any idea or suggestions? | 0 | 1 | 934 |
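One hedged sketch of the idea raised in the question above: since the values lie in [0, 1] and only about four significant digits are needed, they can be quantised to 16-bit integers (2 bytes per value) on disk and de-quantised on load. This is an illustration, not the exact code from the thread:

import numpy as np

def quantize(values):
    # Map floats in [0, 1] onto the full uint16 range: 2 bytes per value,
    # with a worst-case rounding error of about 8e-6.
    return np.round(values * 65535).astype(np.uint16)

def dequantize(packed):
    return packed.astype(np.float64) / 65535.0

vec = np.random.rand(1000)              # stand-in for the huge vector
quantize(vec).tofile('vector_u16.bin')  # 2 bytes per element on disk

restored = dequantize(np.fromfile('vector_u16.bin', dtype=np.uint16))
assert np.allclose(vec, restored, atol=1e-4)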
0 | 31,457,037 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-05-09T17:08:00.000 | 2 | 1 | 0 | Number of parameters in MCMC | 30,142,780 | 0.379949 | python,bayesian,pymc,mcmc | I recently ran (successfully) a model with 2,958 parameters. It was on a 8 Gb Windows machine. You should be fine with 750. | I want to sample from my posterior distribution using the pymc package.
I am wondering if there is a limit on the number of dimensions such algorithm can handle. My log likelihood is the sum of 3 Gaussians and 1 mixture of Gaussians. I have approx 750 parameters in my model. Can pymc handle such a big number of parameters? | 0 | 1 | 267 |
0 | 30,153,269 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-10T10:03:00.000 | 0 | 2 | 0 | I need to find the distribution of data, which is from a retail chain network. No distribution fits the data | 30,150,071 | 0 | python,r,distribution,frequency-distribution,goodness-of-fit | Have you tried transforming the data? Simulate multiple transformations and take the best approximation to a distribution amenable for statistical inference. | I need to find the distribution of data, which is from a retail chain network( demand of product across all stores). I tried to fit distribution using EasyFit (which has 82 distribution to check the best distributions) but no distribution fits the data. What can be done? Is there any way to find if the data distribution is a sum or convolution of multiple distribution? I have removed the spikes or seasonality or promotional data from the dataset but still no distribution fits. | 0 | 1 | 714 |
0 | 30,209,202 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-11T20:39:00.000 | 1 | 1 | 0 | Opencv Blur just within Circle | 30,177,229 | 0.197375 | python,opencv | It doesn't look like OpenCV's blurring and filtering functions allow masking the input. I suggest applying the filter on a Rect around the circular portion, then assigning the elements of the blurred sub-matrix to the original image while masking out the elements that do not correspond to the circle. | How does one blur a circular portion of an image in the Python bindings of OpenCV? Can one apply blurring to an image without making new images? | 0 | 1 | 566 |
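A sketch of the masked-blur idea from the answer above: blur a copy (or just the bounding rectangle), then copy the blurred pixels back only where a circular mask is set. The centre and radius are placeholder values:

import cv2
import numpy as np

img = cv2.imread('input.jpg')
blurred = cv2.GaussianBlur(img, (25, 25), 0)

# Circular mask: 255 inside the circle, 0 elsewhere.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.circle(mask, (200, 150), 80, 255, -1)   # centre (x, y) and radius are assumptions

# Write the blurred pixels back into the original array only inside the circle,
# so no separate output image is needed.
img[mask == 255] = blurred[mask == 255]
cv2.imwrite('output.jpg', img)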
0 | 30,196,520 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-12T15:43:00.000 | 0 | 1 | 0 | Create a 2D grid of hexagonal cells using lat/lon coordinates in Python | 30,195,646 | 0 | python,multidimensional-array,matplotlib-basemap | Maybe try zip?
Calling zip(a,b) where a and b are some iterable things will return a new array of tuples along the lines of [ (a[0], b[0]) , (a[1],b[1]) , ... , (a[n], b[n]) ] where n is the number of things in the lists.
You could match up the lat/lon into pairs first and then pair them with the temperature. | I have 3 1D-arrays (lat, lon and temperature) and would like to plot the data using Basemap in python. However, Basemap seems to need 2D-arrays to be able to plot the data according to the latitudes and longitudes I have.
How would I do that?
Thanks for you help! | 0 | 1 | 451 |
0 | 30,209,508 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-12T23:38:00.000 | 1 | 1 | 0 | Does it matter if I set random_state same for RandomForestClassifier() and train_test_split()? | 30,203,229 | 0.197375 | python,scikit-learn | Nothing will go wrong if you set the same seed. | In scikit-learn, RandomForestClassifier() and train_test_split() both have a random_state parameter.
Statistically, does it matter if I set them to be the same seed? Will that be wrong? Thanks. | 0 | 1 | 105 |
0 | 33,983,181 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-13T00:42:00.000 | 0 | 1 | 0 | Product of elements of scipy sparse matrix | 30,203,785 | 0 | python,scipy,sparse-matrix | If you are using any plugin that is named as "infinite posts scroll" or "jetpack" or any thing similar delete it. | I can find sum of all (non-zero) elements in scipy sparse matrix by mat.sum(), but how can I find their product? There's no mat.prod() method. | 0 | 1 | 44 |
0 | 30,208,490 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-13T02:58:00.000 | 1 | 1 | 0 | SVM find member of outside training set | 30,204,877 | 0.197375 | python,opencv,svm | The classic SVM partitions the n-dimensional feature space with planes. That means every point in space falls in one of the partitions and therefore belongs to one of the trained classes; there is no outlier detection.
However, there is also the concept of a one-class SVM that tries to encapsulate the "known" space and classifies points into "known" and "unknown". The libSVM package also has probabilities; you could try to analyse whether that helps. You could also try other classification concepts to detect outliers, like nearest neighbour. | I am trying to do multiclass classification with SVM in OpenCV (using OpenCV for Python). Let's say I have 5 classes and train them well. I have tested it and got good results.
The problem appears when an object from a 6th class comes into this classification. Although I haven't trained this class before, the object (which comes from the 6th class) is recognized as an object from one of the classes I trained before (it is classified as a member of the 1st, 2nd, etc. class), while the machine should say it doesn't know which class it belongs to.
My idea is to do the classification in two passes. First a binary classification, with all of the samples as the training set. Second, I classify it into the multiclass model.
But the problem is: how should I find the negative samples for the first classification when I don't know the other objects (say, those coming from a 6th or 7th class)? Can anybody help me, what should I do? Which samples should I use as negative samples? Is this a good idea or a bad one? Is there another way to solve this problem? | 0 | 1 | 87 |
0 | 30,227,232 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-05-13T21:33:00.000 | 0 | 3 | 0 | Indexing tensor with binary matrix in numpy | 30,225,539 | 0 | python,numpy,matrix | C = A[:,:,0]*(B==0) + A[:,:,1]*(B==1) should work. You can generalize this as np.sum([A[:,:,k]*(B==k) for k in np.arange(A.shape[-1])], axis=0) if you need to index more planes. | I have a tensor A such that A.shape = (32, 19, 2) and a binary matrix B such that B.shape = (32, 19). Is there a one-line operation I can perform to get a matrix C, where C.shape = (32, 19) and C(i,j) = A[i, j, B[i,j]]?
Essentially, I want to use B as an indexing matrix, where if B[i,j] = 1 I take A[i,j,1] to form C(i,j). | 0 | 1 | 338 |
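For reference, the same selection can also be written with integer (fancy) indexing; a small sketch checking it against the formula from the answer above:

import numpy as np

A = np.random.rand(32, 19, 2)
B = np.random.randint(0, 2, size=(32, 19))

# The answer's formulation: weight each plane by the matching mask.
C1 = A[:, :, 0] * (B == 0) + A[:, :, 1] * (B == 1)

# Equivalent one-liner using B directly as the index into the last axis.
rows = np.arange(A.shape[0])[:, None]
cols = np.arange(A.shape[1])
C2 = A[rows, cols, B]

assert np.allclose(C1, C2)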
0 | 30,242,748 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-05-14T16:36:00.000 | 0 | 1 | 0 | Pandas is blocking python when it reads big file | 30,242,603 | 1.2 | python,pandas | I believe the file you are loading is far to large reference what your computer can handle. Unless you need to load all the data at once, I would try to load the data based on what you need at that time e.g., load data based on a specific criteria then run your program over those specifics. This should help with the two things. First, you will be able to load your data and run your program. Second, your program will run faster as it is only working on subsets of data at a time. | I am reading a 5 G file on my 8G memory MacBook : pd.read_csv(filepath).
I see the memory usage going to 12 G (orange, then red), and then suddenly the memory usage drops back to 6G, and then slowly goes back up.... And my script doesn't deliver anything, not even an exit error....
What can be happening? It seems like Python is completely blocked (the fans are very quiet...)
0 | 30,254,159 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2015-05-15T07:32:00.000 | 0 | 4 | 0 | Pickling pandas dataframe multiplies by 5 the file size | 30,253,976 | 0 | python,csv,pandas,pickle | Don't load an 800 MB file into memory. It will increase your loading time. Pickled objects also take more time to load. Instead, store the CSV file as a sqlite3 table (sqlite3 comes along with Python), and then query the table every time depending upon your need. | I am reading an 800 Mb CSV file with pandas.read_csv, and then use the original Python pickle.dump(dataframe) to save it. The result is a 4 Gb pkl file, so the CSV size is multiplied by 5.
I expected pickle to compress data rather than extend it. Also, I can gzip the CSV file, which compresses it to 200 Mb, dividing it by 4.
I want to speed up the loading time of my program, and thought that pickling would help, but considering disk access is the main bottleneck, I now understand that I would rather have to compress the files and then use the compression option from pandas.read_csv to speed up the loading time.
Is that correct?
Is it normal that pickling pandas dataframe extend the data size?
How do you speed up loading time usually?
Up to what data size would you load with pandas? | 0 | 1 | 7,611 |
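A sketch of the compression route discussed in the question above (compressed CSV instead of a plain pickle); whether it is actually faster depends on your disk and pandas version:

import pandas as pd

df = pd.read_csv('data.csv')

# Write a gzip-compressed CSV (supported in recent pandas versions).
df.to_csv('data.csv.gz', index=False, compression='gzip')

# Reading it back decompresses on the fly, trading some CPU for much less disk I/O.
df2 = pd.read_csv('data.csv.gz', compression='gzip')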
0 | 30,278,960 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-16T17:22:00.000 | 0 | 1 | 0 | Python reads CSV on OS X differently than Windows? | 30,278,630 | 0 | python,csv | It turns out there was just a problem in the csv file. What happened was that I was running the same file, which caused another header row to be entered. When there is more than one header row, the script tries to convert something like "ind_sharpe" into a float, which obviously will not work and breaks the program.
The solution: overwrite header rows, don't stack them. | I have a Python script that writes and reads multiple CSV files. On OS X, in order to perform any calculations on any values in the CSV file, I have to convert the value with float(d['var_returns']). This works perfectly fine on OS X and the entire script works as expected.
However, using the exact same code on Windows, I get:
ValueError: could not convert string to float: var_returns
I was wondering why this is happening, and how I can fix it? The typical value in var_returns would be 0.050244079 | 0 | 1 | 32 |
0 | 30,279,862 | 0 | 0 | 0 | 0 | 3 | false | 99 | 2015-05-16T19:19:00.000 | 4 | 5 | 0 | Apache Spark: How to use pyspark with Python 3 | 30,279,783 | 0.158649 | python,python-3.x,apache-spark | Have a look into the file. The shebang line probably points to the 'env' binary, which searches the path for the first compatible executable.
You can change python to python3, change the shebang to hardcode the python3 binary directly, or execute the script directly with python3 and omit the shebang line. | I built Spark 1.4 from the GH development master, and the build went through fine. But when I do a bin/pyspark I get the Python 2.7.9 version. How can I change this?
0 | 32,094,874 | 0 | 0 | 0 | 0 | 3 | false | 99 | 2015-05-16T19:19:00.000 | 155 | 5 | 0 | Apache Spark: How to use pyspark with Python 3 | 30,279,783 | 1 | python,python-3.x,apache-spark | Just set the environment variable:
export PYSPARK_PYTHON=python3
in case you want this to be a permanent change add this line to pyspark script. | I built Spark 1.4 from the GH development master, and the build went through fine. But when I do a bin/pyspark I get the Python 2.7.9 version. How can I change this? | 0 | 1 | 93,816 |
0 | 38,938,668 | 0 | 0 | 0 | 0 | 3 | false | 99 | 2015-05-16T19:19:00.000 | 9 | 5 | 0 | Apache Spark: How to use pyspark with Python 3 | 30,279,783 | 1 | python,python-3.x,apache-spark | 1. Edit your profile: vim ~/.profile
2. Add this line to the file: export PYSPARK_PYTHON=python3
3. Execute the command: source ~/.profile
4. Run ./bin/pyspark | I built Spark 1.4 from the GH development master, and the build went through fine. But when I do a bin/pyspark I get the Python 2.7.9 version. How can I change this? | 0 | 1 | 93,816 |
0 | 30,296,559 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2015-05-18T05:43:00.000 | 3 | 3 | 0 | How to write .csv file in Python? | 30,295,848 | 1.2 | python,file,pandas,permissions | Better to give the complete path for your output csv file. It may be that you are checking in the wrong folder. | I am running the following: output.to_csv("hi.csv") where output is a pandas dataframe.
My variables all have values but when I run this in iPython, no file is created. What should I do? | 0 | 1 | 3,754 |
0 | 30,336,990 | 0 | 0 | 0 | 0 | 2 | true | 5 | 2015-05-19T16:28:00.000 | 4 | 2 | 0 | Get Colorbar Ticks | 30,331,208 | 1.2 | python,matplotlib | Calling the locator for the colorbar instance should return the tick locations.
colorbar.locator(). | I am using a colorbar in matplotlib and because I want my graph to be more sensitive I take the square root of each value. This gives me square rooted values on my colorbar, but now I want to scale the ticks labels back to the real values. I am thusly having a hard time doing. I see that colorbar has a set_ticks function but I need to be able to get my ticks in the first place to do this generally. Is there an easy way to do this that I am not seeing, or some other way around this? | 0 | 1 | 984 |
0 | 49,789,479 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2015-05-19T16:28:00.000 | 0 | 2 | 0 | Get Colorbar Ticks | 30,331,208 | 0 | python,matplotlib | In matplotlib 2.1 you may use method colorbar.get_ticks(). | I am using a colorbar in matplotlib and because I want my graph to be more sensitive I take the square root of each value. This gives me square rooted values on my colorbar, but now I want to scale the ticks labels back to the real values. I am thusly having a hard time doing. I see that colorbar has a set_ticks function but I need to be able to get my ticks in the first place to do this generally. Is there an easy way to do this that I am not seeing, or some other way around this? | 0 | 1 | 984 |
0 | 30,369,767 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-05-21T09:23:00.000 | 0 | 2 | 0 | How do I write a variable that includes multiple values and commas to a csv file? | 30,369,641 | 0 | python,csv | For a variable that holds multiple values I would use an array and loop through it with a for-each loop or something similar. That way every value stays properly separated regardless of punctuation.
for example the variable might hold jimmy,5,250,james and I want to write this to a CSV file then start a new line and write the next batch of variables in the loop again in the same fashion.
outFile = open('outputfile.csv','w')
csvFile_out = csv.writer(outFile, delimiter=',')
csvFile_out.writerows(valuesvariable) | 0 | 1 | 1,136 |
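A sketch of the loop described in the answer above, assuming each record has already been collected as a list of values (the data and file name are placeholders):

import csv

records = [['jimmy', 5, 250, 'james'],
           ['sally', 3, 120, 'susan']]   # placeholder data

with open('outputfile.csv', 'w', newline='') as out_file:
    writer = csv.writer(out_file, delimiter=',')
    for record in records:
        # writerow() takes one sequence per call and writes it as one CSV line.
        writer.writerow(record)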
0 | 30,377,236 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-05-21T14:45:00.000 | 1 | 1 | 0 | .Spyder opens a figure instead of absence of plt.show() | 30,377,127 | 1.2 | python,matplotlib,spyder | You are likely running in IPython within Spyder. IPython automatically runs plt.show(). Press F6 (default), and see if "execute in current python or iPython console" is selected. If so, check the console: the tab should show a blue icon with "iP" and have 'kernel' (or similar) following it.
Simple fix: switch to "execute in dedicated python console" or open a new python console (tab header "python 1")
Source: I had the same problem. | I know of a problem in Spyder. When I work with matplotlib.pyplot, it automatically shows the figure without any kind of plt.show(). So when I make many different figures, it always shows them all on the same one.
I recently made a program which saves one figure per iteration in a for loop; it also opens a figure each time, and at the end the loop crashes before finishing because of too many figures being shown.
I would like to know if anyone else has seen such problems, because I've encountered them on many computers running Spyder | 0 | 1 | 441 |
0 | 36,701,858 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2015-05-22T13:07:00.000 | 2 | 5 | 0 | Visualizing an LDA model, using Python | 30,397,550 | 0.07983 | python,data-visualization,lda,topic-modeling | Word clouds are popular ways of visualizing topic distributions. To generate a word cloud in python consider cloning the wordcloud library. | I have a LDA model with the 10 most common topics in 10K documents. Now it's just an overview of the words with corresponding probability distribution for each topic.
I was wondering if there is something available for python to visualize these topics? | 0 | 1 | 12,860 |
0 | 30,441,330 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2015-05-25T14:57:00.000 | 0 | 3 | 0 | Concatenate large files in sql-like way with limited RAM | 30,441,107 | 0 | python,file,memory,merge | As @Marc B said, reading one row at a time is the solution.
About the join I would do the following (pseudocode: I don't know python).
"Select distinct Model from A" on first file A.csv
Read all rows, search for Model field and collect distinct values in a list/array/map
"Select distinct Model from B" on second file B.csv
Same operation as 1, but using another list/array/map
Find matching models
Compare the two lists/arrays/maps finding only matching models (they will be part of the join)
Do the join
Reading rows of file A which match a model, read all the rows of file B which match the same model and write a file C with the join result. Do this for all models.
Note: it's not particularly optimized.
For point 2, just choose a subset of matching models and/or read only part of the rows of file A and/or B with matching models (a pandas-based sketch follows this row).
There is another large B.csv file (~15 Gb) with Vendor, Name and Model columns.
Two questions:
1) How can I create result file that combines all columns from A.csv and corresponding Vendor and Name from B.csv (join on Model). The trick is - how to do it when my RAM is 4 Gb only, and I'm using python.
2) How can I create a sample (say, 1 Gb) result file that combines random subsample from A.csv (all columns) joined with Vendor and Name from B.csv. The trick is, again, in 4 Gb of RAM.
I know how to do it in pandas, but 4 Gb is a limiting factor I can't overcome ( | 0 | 1 | 862 |
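A hedged sketch of the per-model join described above, using pandas in chunks so that neither file has to fit into 4 GB of RAM. Column names follow the question; the chunk size is an arbitrary assumption, and the lookup step assumes the number of distinct models is small enough to hold in memory:

import pandas as pd

chunk_size = 10 ** 6   # tune to your RAM

# 1) Build a Model -> (Vendor, Name) lookup from B.csv, chunk by chunk.
parts = []
for chunk in pd.read_csv('B.csv', usecols=['Model', 'Vendor', 'Name'],
                         chunksize=chunk_size):
    parts.append(chunk.drop_duplicates(subset='Model'))
lookup = pd.concat(parts).drop_duplicates(subset='Model')

# 2) Stream A.csv and join each chunk against the lookup, appending to the result.
#    For the ~1 GB random sample, add e.g. chunk = chunk.sample(frac=0.2) before merging.
first = True
for chunk in pd.read_csv('A.csv', chunksize=chunk_size):
    chunk.merge(lookup, on='Model', how='left').to_csv(
        'C.csv', mode='w' if first else 'a', header=first, index=False)
    first = False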
0 | 61,126,661 | 0 | 0 | 0 | 0 | 1 | false | 57 | 2015-05-25T23:03:00.000 | -2 | 6 | 0 | Python, Pandas : Return only those rows which have missing values | 30,447,083 | -0.066568 | python,pandas,missing-data | If you are looking for a quicker way to find the total number of missing rows in the dataframe, you can use this:
sum(df.isnull().values.any(axis=1)) | While working in Pandas in Python...
I'm working with a dataset that contains some missing values, and I'd like to return a dataframe which contains only those rows which have missing data. Is there a nice way to do this?
(My current method to do this is an inefficient "look to see what index isn't in the dataframe without the missing values, then make a df out of those indices.") | 0 | 1 | 100,635 |
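The usual idiom for the request in the question (a sketch; df stands in for the DataFrame with missing values):

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, np.nan, 3], 'b': [4, 5, np.nan]})

# Keep only the rows that contain at least one missing value.
rows_with_missing = df[df.isnull().any(axis=1)]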
0 | 50,763,111 | 0 | 0 | 1 | 0 | 1 | false | 6 | 2015-05-26T05:05:00.000 | 0 | 3 | 0 | Read .sph files in Python | 30,449,860 | 0 | python,audio | You can read sph files via audioread with ffmpeg codecs. | I am working on a project where I need to extract the Mel-Frequency Cepstral Coefficients (MFCC) from audio signals. The first step for this process is to read the audio file into Python.
The audio files I have are stored in a .sph format. I am unable to find a method to read these files directly into Python. I would like to have the sampling rate, and a NumPy array with the data, similar to how wav read works.
Since the audio files I will be dealing with are large in size, I would prefer not to convert to .wav format for reading. Could you please suggest a possible method to do so? | 0 | 1 | 3,923 |
0 | 30,470,049 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2015-05-26T13:03:00.000 | 2 | 1 | 0 | Comparing datasets to nonstandard probability distributions in Python | 30,459,398 | 1.2 | python,scipy,statistics,probability,statistical-test | (1) "Is it from distribution X" is generally a question which can be answered a priori, if at all; a statistical test for it will only tell you "I have a large sample / not a large sample", which may be true but not too useful. If you are trying to classify new data into one distribution or another, my advice is to look at it as a classification problem and use your constructed pdf's to compute p(class | data) = p(data | class) p(class) / p(data) where the key part p(data | class) is your histogram. Maybe you can say more about your problem domain.
(2) You could apply the Kolmogorov-Smirnov test, but it's really pointless, as mentioned above. | I have a few large sets of data which I have used to create non-standard probability distributions (using numpy.histogram to bin the data, and scipy.interpolate's interp1d function to interpolate the resulting curves). I have also created a function which can sample from these custom PDFs using the scipy.stats package.
My goal is to see how varying the size of my samples changes the goodness of fit to both the distributions they came from, and the other PDFs as well, and determine how large a sample is necessary to completely determine whether it came from one or other of my custom PDFs.
To do this I've gathered that I need to use some sort of nonparametric statistical analysis, i.e. seeing whether a set of data has been drawn from a provided probability distribution. Doing a bit of research, it seems like the Anderson-Darling test is ideal for this, however its implementation in python (scipy.stats.anderson) seems to only be usable for preset probability distributions such as normal, exponential, etc.
So my question is: given my many nonstandard PDFs (or CDFs if necessary, or the data I used to create them) what is the best way to work out how well a set of sample data fits each model in Python? If it is the Anderson-Darling test, is there some way of defining a custom PDF to test against?
Thanks. Any help is much appreciated. | 0 | 1 | 452 |
0 | 30,506,462 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2015-05-27T18:42:00.000 | 5 | 1 | 0 | Calculating real world co-ordinates using stereo images in Python and OpenCV | 30,490,625 | 1.2 | python,opencv,computer-vision,coordinates,stereo-3d | 1) Case of no rotation, only translation parallel to the horizontal axis of the image plane, cameras with equal focal lengths.
Denote with "f" the common focal length. Denote with "b" the baseline of the stereo pair, namely the distance between the cameras' optical centers. Given a 3D point P, visible in both cameras at horizontal image coordinates x_left and x_right, denote with "d" their disparity, namely the difference d = x_left - x_right.
By elementary geometry it then follows that the depth z_left of P in the left camera coordinates is:
z_left = b * f / d.
2) Any other case (unequal focal lengths, differences in other intrinsic parameters, non-linear lens distortion, inter-camera rotation, translation not parallel to the x axis, etc.):
Don't bother, use OpenCV, | I'm working on calculating the real world coordinates of an object in a scene by using a pair of stereo images. The images are simulations of perfect pinhole cameras so there is no distortion to correct and there is no rotation. I know OpenCV has a bunch of functions to calibrate stereo cameras and create disparity maps, but if all I want to calculate is the coordinates of one point, is there a simple way to do that? | 0 | 1 | 1,838 |
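The case-1 formula above wrapped in a small helper; the baseline can be in any length unit (the result comes out in the same unit), and the focal length must be expressed in pixels:

def depth_from_disparity(x_left, x_right, baseline, focal_length_px):
    """Depth of a point seen at pixel columns x_left / x_right in a rectified,
    translation-only stereo pair: z = b * f / d."""
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("Zero disparity: the point is effectively at infinity.")
    return baseline * focal_length_px / disparity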
0 | 30,640,900 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2015-05-27T22:18:00.000 | 1 | 2 | 0 | Does using sparse matrices make an algorithm slower or faster in Sklearn? | 30,494,067 | 0.099668 | python,machine-learning,scikit-learn | Depends on your data
Memory consumption.
If your data is dense, a dense representation needs d*sizeof(double) bytes for your data (i.e. usually d * 8 bytes). A sparse representation usually needs sparsity*d*(sizeof(int)+sizeof(double)). Depending on your programming language and code quality, it can also be much more due to memory management overhead. A typical Java implementation adds 8 bytes of overhead, and will round to 8 bytes size; so sparse vectors may easily use 16 + sparsity * d * 24 bytes. then.
If your sparsity is 1, this means a sparse representation needs 50% more memory. I guess the memory tradeoff in practise will be somewhere around 50% sparsity; and if your implementation isn't carefull optimized, maybe even 30% - so 1 out of 3 values should be a zero.
Memory consumption is usually a key problem. The more memory you use, the more pagefaults and cache misses your CPU will have, which can have a big impact on performance (which is why e.g. BLAS perform large matrix multiplications in block sizes optimized for your CPU caches).
Optimizations and SIMD.
Dense vector code (e.g. BLAS) is usually much better optimized than sparse operations. In particular, SIMD (single instruction, multiple data) CPU instructions usually only work with dense data.
Random access.
Many algorithms may need random access to vectors. If your data is represented as a double[] array, random access is O(1). If your data is a sparse vector, random access usually is O(sparsity*d), i.e. you will have to scan the vector to check if there is a value present. It may thus be beneficial to transpose the matrix for some operations, and work with sparse columns instead of sparse rows.
On the other hand, some algorithms may exactly benefit from this. But many implementations have such optimizations built in, and will take care of this automatically. Sometimes you also have different choices available. For example APRIORI works on rows, and thus will work well with row-sparse data. Eclat on the other hand is an algorithm to solve the same problem, but it first transforms all data into a row-sparse form, then even computes column differences to further optimize.
Code complexity.
Code to process sparse data usually is much more complex. In particular, it cannot make use of SSE and similar fast CPU instructions easily. It is one of the reasons why sparse matrix multiplications are much slower than dense operations - optimizing these operations without knowing certain characteristics of your data is surprisingly hard. :-( | I have a large but sparse train data. I would like to use it with ExtraTreeClassifier. I am not sure considering computational time whether I need to use sparse csr_matrix or the raw data. Which version of the data runs faster with that classifier and can we generalize its answer to all sparse capable models? | 0 | 1 | 982 |
0 | 30,640,578 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2015-05-27T22:18:00.000 | 1 | 2 | 0 | Does using sparse matrices make an algorithm slower or faster in Sklearn? | 30,494,067 | 0.099668 | python,machine-learning,scikit-learn | If your data are sparse, the extra tree classifier will be faster with a csc_matrix. In doubt, I would suggest you to benchmark with both version.
All algorithms should benefit from using the appropriate sparse format if your data are sufficiently sparse. For instance, algorithms based on dot product will be a lot faster with sparse data. | I have a large but sparse train data. I would like to use it with ExtraTreeClassifier. I am not sure considering computational time whether I need to use sparse csr_matrix or the raw data. Which version of the data runs faster with that classifier and can we generalize its answer to all sparse capable models? | 0 | 1 | 982 |
0 | 30,553,182 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-28T07:47:00.000 | 1 | 2 | 0 | Bounded optimization using the Hessian matrix (scipy) | 30,500,311 | 0.099668 | python,numpy,optimization,scipy | l-bfgs-b does a bounded optimisation. Like any quasi-Newton method it approximates the Hessian. But this is often better than using the real Hessian. | I am trying to optimize a function of a small number of variables (somewhere from 2 to 10). What I am trying to do is calculate the minimum of the function on a bounded hypercube
[0,1] x [0,1] x ... x [0,1]
The calculation of the function, its gradient and its hessian is al relatively simple, quick and accurate.
Now, my problem is this:
Using scipy, I can use either scipy.optimize.minimize(..., method='Newton-CG') or scipy.optimize.minimize(..., method='TNC') to calculate the minimum of the function, however:
The first method uses the Hessian matrix, but I cannot set the bounds for the variables I am optimizing
The second method allows me to set bounds on the variables, but the method does not use the Hessian.
Is there any method that will use both? | 0 | 1 | 923 |
0 | 63,025,280 | 0 | 0 | 0 | 0 | 4 | false | 95 | 2015-05-28T22:52:00.000 | 1 | 19 | 0 | How do I set the driver's python version in spark? | 30,518,362 | 0.010526 | python,apache-spark,pyspark | I had the same problem, just forgot to activate my virtual environment. | I'm using spark 1.4.0-rc2 so I can use python 3 with spark. If I add export PYSPARK_PYTHON=python3 to my .bashrc file, I can run spark interactively with python 3. However, if I want to run a standalone program in local mode, I get an error:
Exception: Python in worker has different version 3.4 than that in
driver 2.7, PySpark cannot run with different minor versions
How can I specify the version of python for the driver? Setting export PYSPARK_DRIVER_PYTHON=python3 didn't work. | 0 | 1 | 165,951 |
0 | 46,797,446 | 0 | 0 | 0 | 0 | 4 | false | 95 | 2015-05-28T22:52:00.000 | 6 | 19 | 0 | How do I set the driver's python version in spark? | 30,518,362 | 1 | python,apache-spark,pyspark | I came across the same error message and I have tried three ways mentioned above. I listed the results as a complementary reference to others.
Changing the PYTHON_SPARK and PYTHON_DRIVER_SPARK values in spark-env.sh did not work for me.
Changing the values inside the Python script using os.environ["PYSPARK_PYTHON"]="/usr/bin/python3.5" and
os.environ["PYSPARK_DRIVER_PYTHON"]="/usr/bin/python3.5" did not work for me either.
Changing the value in ~/.bashrc works like a charm.
Exception: Python in worker has different version 3.4 than that in
driver 2.7, PySpark cannot run with different minor versions
How can I specify the version of python for the driver? Setting export PYSPARK_DRIVER_PYTHON=python3 didn't work. | 0 | 1 | 165,951 |
0 | 50,399,085 | 0 | 0 | 0 | 0 | 4 | false | 95 | 2015-05-28T22:52:00.000 | 9 | 19 | 0 | How do I set the driver's python version in spark? | 30,518,362 | 1 | python,apache-spark,pyspark | I just faced the same issue and these are the steps that I follow in order to provide Python version. I wanted to run my PySpark jobs with Python 2.7 instead of 2.6.
Go to the folder where $SPARK_HOME is pointing to (in my case is /home/cloudera/spark-2.1.0-bin-hadoop2.7/)
Under folder conf, there is a file called spark-env.sh. In case you have a file called spark-env.sh.template you will need to copy the file to a new file called spark-env.sh.
Edit the file and write the next three lines
export PYSPARK_PYTHON=/usr/local/bin/python2.7
export PYSPARK_DRIVER_PYTHON=/usr/local/bin/python2.7
export SPARK_YARN_USER_ENV="PYSPARK_PYTHON=/usr/local/bin/python2.7"
Save it and launch your application again :)
In that way, if you download a new Spark standalone version, you can set the Python version which you want to run PySpark to. | I'm using spark 1.4.0-rc2 so I can use python 3 with spark. If I add export PYSPARK_PYTHON=python3 to my .bashrc file, I can run spark interactively with python 3. However, if I want to run a standalone program in local mode, I get an error:
Exception: Python in worker has different version 3.4 than that in
driver 2.7, PySpark cannot run with different minor versions
How can I specify the version of python for the driver? Setting export PYSPARK_DRIVER_PYTHON=python3 didn't work. | 0 | 1 | 165,951 |
0 | 30,518,974 | 0 | 0 | 0 | 0 | 4 | true | 95 | 2015-05-28T22:52:00.000 | 34 | 19 | 0 | How do I set the driver's python version in spark? | 30,518,362 | 1.2 | python,apache-spark,pyspark | You need to make sure the standalone project you're launching is launched with Python 3. If you are submitting your standalone program through spark-submit then it should work fine, but if you are launching it with python make sure you use python3 to start your app.
Also, make sure you have set your env variables in ./conf/spark-env.sh (if it doesn't exist you can use spark-env.sh.template as a base.) | I'm using spark 1.4.0-rc2 so I can use python 3 with spark. If I add export PYSPARK_PYTHON=python3 to my .bashrc file, I can run spark interactively with python 3. However, if I want to run a standalone program in local mode, I get an error:
Exception: Python in worker has different version 3.4 than that in
driver 2.7, PySpark cannot run with different minor versions
How can I specify the version of python for the driver? Setting export PYSPARK_DRIVER_PYTHON=python3 didn't work. | 0 | 1 | 165,951 |
0 | 30,529,135 | 0 | 0 | 0 | 0 | 2 | false | 13 | 2015-05-29T11:49:00.000 | 2 | 3 | 0 | numpy: difference between NaN and masked array | 30,528,852 | 0.132549 | python,numpy,nan | From what I understand NaN represents anything that is not a number, while a masked array marks missing values OR values that are numbers but are not valid for your data set.
I hope that helps. | In numpy there are two ways to mark missing values: I can either use a NaN or a masked array. I understand that using NaNs is (potentially) faster while masked array offers more functionality (which?).
I guess my question is if/ when should I use one over the other?
What is the use case of np.NaN in a regular array vs. a masked array?
I am sure the answer must be out there but I could not find it... | 0 | 1 | 2,235 |
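A tiny illustration of the two representations; both give the mean of the valid entries, they just mark "missing" differently:

import numpy as np

data = np.array([1.0, 2.0, np.nan, 4.0])

# NaN route: use the NaN-aware reductions.
print(np.nanmean(data))              # 2.333...

# Masked-array route: mask the invalid entries, then use ordinary reductions.
masked = np.ma.masked_invalid(data)
print(masked.mean())                 # 2.333...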
0 | 52,755,040 | 0 | 0 | 0 | 0 | 2 | false | 13 | 2015-05-29T11:49:00.000 | 1 | 3 | 0 | numpy: difference between NaN and masked array | 30,528,852 | 0.066568 | python,numpy,nan | Keep in mind that strange np.nan behaviours, mentioned by jrmyp, include unexpected results for example when using functions of the statsmodels (e.g. ttest) or numpy module (e.g. average). From experience, most those functions have workarounds for NaNs, but it has the potential of driving you mad for a while. This seems like a reason to mask arrays whenever possible. | In numpy there are two ways to mark missing values: I can either use a NaN or a masked array. I understand that using NaNs is (potentially) faster while masked array offers more functionality (which?).
I guess my question is if/ when should I use one over the other?
What is the use case of np.NaN in a regular array vs. a masked array?
I am sure the answer must be out there but I could not find it... | 0 | 1 | 2,235 |
0 | 30,576,274 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2015-05-29T17:38:00.000 | 0 | 2 | 0 | Python access violation | 30,535,810 | 1.2 | python,visual-studio,python-3.x,matplotlib,ptvs | All of the attempts sound futile...
Even removing VS is challenging stuff... and, as people suggest, changing the OS is the most reliable way to get rid of VS in the presence of such anomalies regarding its libraries...
So... I changed the OS... and installed the VS and PTVS again... | Here is a Python 3.4 user, in VS 2013 and PTVS...
I'd written a program to plot something by Matplotlib... The output had been generating and everything was ok...
So, I closed VS and now I've opened it again after an hour, running the very script, but this time as soon as I press F5, a window appears and says Python has stopped working...
There is a short log in the output window, which asserts that:
The program '[9952] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'.
Who could decrypt this error, please?!...
Kind Regards
.........................................
Edit:
I just tested again with no change... The error has been changed to:
The program '[11284] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'.
Debug shows that when the program points to the drawing command of the matloptlib, i.e. plt.show(), this crash will happen... | 0 | 1 | 659 |
0 | 30,536,536 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2015-05-29T17:38:00.000 | -1 | 2 | 0 | Python access violation | 30,535,810 | -0.099668 | python,visual-studio,python-3.x,matplotlib,ptvs | It seems to be a problem with your python and PTVS. Try to remove every .pyc file and have another go at it | Here is a Python 3.4 user, in VS 2013 and PTVS...
I'd written a program to plot something by Matplotlib... The output had been generating and everything was ok...
So, I closed VS and now I've opened it again after an hour, running the very script, but this time as soon as I press F5, a window appears and says Python has stopped working...
There is a short log in the output window, which asserts that:
The program '[9952] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'.
Who could decrypt this error, please?!...
Kind Regards
.........................................
Edit:
I just tested again with no change... The error has been changed to:
The program '[11284] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'.
Debug shows that when the program points to the drawing command of the matloptlib, i.e. plt.show(), this crash will happen... | 0 | 1 | 659 |
0 | 30,674,814 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-05-29T19:56:00.000 | 0 | 1 | 0 | matplotlib discrete data versus continuous function | 30,538,062 | 0 | python,function,matplotlib,dataset | At the end the solution has been easier than I thought. I had simply to define the continuous variable through the discrete data, as, for example:
w=x/y,
then define the function as already said:
exfunct=w**4
and finally plot the "continuous-discrete" function:
plt.plot(x,x/exfunct,'k-',color='red',lw=2)
I hope this can be useful. | I need to plot a ratio between a function introduced thorough a discrete data set, imported from a text file, for example:
x,y,z=np.loadtxt('example.txt',usecols=(0,1,2),unpack=True),
and a continuous function defined using the np.arange command, for example:
w=np.arange(0,0.5,0.01)
exfunct=w**4.
Clearly, solutions as
plt.plot(w,1-(x/w),'k--',color='blue',lw=2) as well
plt.plot(y,1-(x/w),'k--',color='blue',lw=2)
do not work. Despite having looked for the answer in the site (and outside it), I can not find any solution to my problem. Should I fit the discrete data set, to obtain a continuous function, and then define it in the same interval as the "exfunct"? Any suggestion? Thank you a lot. | 0 | 1 | 1,808 |
0 | 30,564,059 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2015-06-01T00:10:00.000 | 8 | 7 | 0 | How to generate random points in a circular distribution | 30,564,015 | 1 | python,distribution,point | FIRST ANSWER:
An easy solution would be to do a check to see if the result satisfies your equation before proceeding.
Generate x, y (there are ways to randomize into a select range)
Check if ((x−500)^2 + (y−500)^2 < 250000) is true
if not, regenerate.
The only downside would be inefficiency.
SECOND ANSWER:
OR, you could do something similar to riemann sums like for approximating integrals. Approximate your circle by dividing it up into many rectangles. (the more rectangles, the more accurate), and use your rectangle algorithm for each rectangle within your circle. | I am wondering how i could generate random numbers that appear in a circular distribution.
I am able to generate random points in a rectangular distribution such that the points are generated within the square of (0 <= x < 1000, 0 <= y < 1000):
How would I go about generating the points within a circle such that:
(x−500)^2 + (y−500)^2 < 250000 ? | 0 | 1 | 31,584 |
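A sketch of the rejection approach from the first answer, using the 1000x1000 square and radius-500 circle from the question:

import random

def random_point_in_circle(cx=500, cy=500, r=500):
    # Draw uniform points in the bounding square until one lands inside the
    # circle; on average about 21% of draws are rejected (1 - pi/4).
    while True:
        x = random.uniform(0, 1000)
        y = random.uniform(0, 1000)
        if (x - cx) ** 2 + (y - cy) ** 2 < r ** 2:
            return x, y

points = [random_point_in_circle() for _ in range(100)]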
0 | 30,615,998 | 0 | 1 | 0 | 0 | 1 | false | 6 | 2015-06-03T09:06:00.000 | 0 | 2 | 0 | Generate random permutation of huge list (in Python) | 30,615,536 | 0 | python,algorithm,iterator,permutation | This is a generic issue and rather not a Python-specific. In most languages, even when iterators are used for using structures, the whole structure is kept in memory. So, iterators are mainly used as "functional" tools and not as "memory-optimization" tools.
In python, a lot of people end up using a lot of memory due to having really big structures (dictionaries etc.). However, all the variables-objects of the program will be stored in memory in any way. The only solution is the serialization of the data (save in filesystem, Database etc.).
So, in your case, you could create a customized function that would create the list of the permutations. But, instead of adding each element of the permutation to a list, it would save the element either in a file (or in a database with the corresponding structure). Then, you would be able to retrieve one-by-one each permutation from the file (or the database), without bringing the whole list in memory.
However, as mentioned before, you will have to always know in which permutation you currently are. In order to avoid retrieving all the created permutations from Database (which would create the same bottleneck), you could have an index for each place holding the symbol used in the previously generated permutation (and create the permutations adding the symbols and a predefined sequence). | I'd like to create a random permutation of the numbers [1,2,...,N] where N is a big number. So I don't want to store all elements of the permutation in memory, but rather iterate over the elements of my particular permutation without holding former values in memory.
Any idea how to do that in Python? | 0 | 1 | 1,933 |
0 | 30,648,685 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-06-04T11:12:00.000 | 5 | 1 | 0 | how to drop dataframe in pandas? | 30,642,356 | 1.2 | python,pandas,dataframe | Generally creating a new object and binding it to a variable will allow the deletion of any object the variable previously referred to. del, mentioned in @EdChum's comment, removes both the variable and any object it referred to.
This is an over-simplification, but it will serve. | Tips are there for dropping column and rows depending on some condition.
But I want to drop the whole dataframe created in pandas.
like in R : rm(dataframe) or in SQL: drop table
This will help to release the ram utilization. | 0 | 1 | 9,398 |
0 | 30,650,664 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-06-04T15:15:00.000 | 1 | 1 | 0 | In python, can I see if a file lives on an HD or an SSD? | 30,647,758 | 0.197375 | python,numpy,h5py,solid-state-drive,memory-mapping | cat /sys/block/sda/queue/rotational is a good way of finding out if your hard drive is a SSD or a hard disk. You can also slightly change this command in order to get other useful information like cat /sys/block/sdb/queue/rotational. | I want to randomly access the elements of a large array (>7GB) that I load into Python as a either an HDF5 dataset (h5py.Dataset), or a memory-mapped array (numpy.memmap).
If this file lives on a spinning-platter HD, these random accesses take forever, for obvious reasons.
Is there a way to check (assert) that the file in question lives on an SSD, before attempting these random accesses?
I am running python in Linux (Ubuntu 14.04). I don't mind non-cross-platform solutions. | 0 | 1 | 889 |
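A minimal sketch of the Linux check described in the answer above, assuming you already know which block device (e.g. sda) holds the file:
def is_rotational(device="sda"):
    # /sys/block/<dev>/queue/rotational is "1" for spinning disks, "0" for SSDs
    with open("/sys/block/%s/queue/rotational" % device) as f:
        return f.read().strip() == "1"

assert not is_rotational("sda"), "file appears to live on a spinning disk"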
0 | 30,713,394 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-06-04T22:35:00.000 | 0 | 1 | 0 | Creating and Storing Multi-Dimensional Array in a netCDF File | 30,655,378 | 0 | python,arrays,numpy,multidimensional-array,netcdf | After talking to a few people where I work we came up with this solution:
First we made an array of zeroes using the following argument:
array1=np.zeros((28,5,24,4))
Then appended this array by specifying where in the array we wanted to change:
array1[:,0,0,0]=list1
This inserted the values of the list into the first entry in the array.
Next to write the array to a netCDF file, I created a netCDF in the same program I made the array, made a single variable and gave it values like this:
netcdfvariable[:]=array1
Hope that helps anyone who finds this. | This question has potentially two parts but maybe only one if the first part can be encapsulated by the second. I am using python with numpy and netCDF4
First:
I have four lists of different variable values (hereafter referred to elevation values) each of which has a length of 28. These four lists are one set of 5 different latitude values of which are one set of the 24 different time values.
So 24 times...each time with 5 latitudes...each latitude with four lists...each list with 28 values.
I want to create an array with the following dimensions (elevation, latitude, time, variable)
In words, I want to be able to specify which of the four lists I access, which index in the list, and a specific time and latitude. So an index into this array would look like this:
array(0,1,2,3) where 0 specifies the first index of the 4th list (specified by the 3), 1 specifies the 2nd latitude, 2 specifies the 3rd time, and the output is the value at that point.
I won't include my code for this part since literally the only things of mention are the lists
list1=[...]
list2=[...]
list3=[...]
list4=[...]
How can I do this, is there an easier structure for the array, or is there anything else I am missing?
Second:
I have created a netCDF file with variables with these four dimensions. I need to set those variables to the array structure made above. I have no idea how to do this and the netCDF4 documentation does a 1-d array in a fairly cryptic way. If the arrays can be made directly into the netCDF file bypassing the need to use numpy first, by all means show me how.
Thanks! | 0 | 1 | 1,886 |
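A minimal sketch of the two steps in the answer above using the netCDF4 package; the file, dimension and variable names are placeholders:
import numpy as np
from netCDF4 import Dataset

array1 = np.zeros((28, 5, 24, 4))
# e.g. fill the first list for latitude 0, time 0: array1[:, 0, 0, 0] = list1

nc = Dataset("output.nc", "w")
for name, size in [("elevation", 28), ("latitude", 5), ("time", 24), ("variable", 4)]:
    nc.createDimension(name, size)
var = nc.createVariable("data", "f8", ("elevation", "latitude", "time", "variable"))
var[:] = array1        # write the whole numpy array into the netCDF variable
nc.close()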
0 | 30,662,556 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-06-05T08:42:00.000 | 3 | 1 | 0 | Is there any implementation of hstack in gnumpy | 30,662,065 | 1.2 | python,numpy | You can use gnumpy.concatenate. For 1D arrays you need to reshape to 2D first. | I want to transfer python codes in CPU to GPU, but I failed to find the numpy function hstack in gnumpy. Who can give me some hints to implement adding some extra rows to a existing matrix(garray) like hstack in numpy. Thank you. | 0 | 1 | 75 |
0 | 30,667,520 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-06-05T11:31:00.000 | 1 | 1 | 0 | What is the fastest way to stream a large csv file? | 30,665,447 | 0.197375 | python,csv,pandas | In pandas.read_csv you can use the "chunksize" option. If you do, the object returned by pandas is an iterator (of type TextFileReader) which, when iterated over, yields a DataFrame of at most chunksize rows at a time (I hadn't realized the option existed until I read the source code...). | I've compared the built-in csv reader with Pandas's read_csv. The former is significantly slower. However, I need to stream csv files due to memory limitations. What streaming csv reader is as fast, or almost as fast, as Pandas? | 0 | 1 | 1,231
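A short sketch of the chunksize option mentioned in the answer above; the file name, chunk size and process() function are placeholders:
import pandas as pd

for chunk in pd.read_csv("big_file.csv", chunksize=100000):
    # each chunk is a DataFrame with at most 100000 rows
    process(chunk)     # replace with your own per-chunk logic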
0 | 30,684,782 | 0 | 1 | 0 | 1 | 1 | false | 2 | 2015-06-06T11:32:00.000 | 0 | 2 | 0 | Fastest way to store and retrieve arrays | 30,682,311 | 0 | python,sql,arrays,database,nosql | It is not so big for numpy arrays if your integers fit in 8 bits: a = numpy.ones((int(17e6), 128), dtype=numpy.uint8) is created in less than a second on my computer, but the uint16 version is already difficult, and the uint64 version crashed. | I'm working on a project where I have to store about 17 million 128-dimensional integer arrays e.g [1, 2, 1, 0, ..., 2, 6, 4] and I'm trying to figure out what's the best way to do it.
The perfect solution would be one that makes it fast to both store and retrieve the arrays, since I need to access ALL of them to make calculations. With such a vast amount of data, I obviously can't store them all in memory in order to make calculations, so accessing batches of arrays should be as fast as possible.
I'm working in Python.
What do you recommend ? Using a DB (SQL vs NOSQL ?), storing it in a text file, using python's Pickle? | 0 | 1 | 1,207 |
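Building on the answer above, a hedged sketch that keeps the 17M x 128 array on disk with np.memmap, assuming the integers fit in uint8 (swap in uint16/uint32 if they do not):
import numpy as np

n_vectors, dim = 17000000, 128
feats = np.memmap("features.dat", dtype=np.uint8, mode="w+", shape=(n_vectors, dim))
feats[0] = [1, 2, 1, 0] + [0] * 124          # write one 128-dimensional vector
batch = np.array(feats[1000000:1001000])     # read back a batch without loading the whole file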
0 | 30,711,755 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-06-07T00:21:00.000 | 1 | 1 | 0 | Load Modules in Wing IDE | 30,688,828 | 0.197375 | python,wing-ide | It is probably due to your code being in a file named numpy.py If you do this then 'import numpy' may import your module and not numpy. This depends on what's on the Python Path and possibly current directory, which probably explains why it works outside of Wing. | I am new to Python and am having trouble loading numpy in Wing IDE. I can load the module and use it fine in the command line but not in Wing IDE. Below is what I am seeing:
code:
import numpy as np
a=np.arange(15)
result:
[evaluate numpy.py]
Traceback (most recent call last):
File "C:\Users[my name]\Documents\Python\practice\numpy.py", line 2, in <module>
builtins.NameError: name 'arange' is not defined
I have also tried to use the help() command:
code:
help(np)
result:
Help on module numpy:
NAME
numpy
FILE
c:\users[my name]\documents\python\practice\numpy.py | 0 | 1 | 1,987 |
0 | 30,704,359 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-06-08T08:18:00.000 | 1 | 1 | 0 | Python 2.7 and 3.4: Libraries inaccessible across versions | 30,704,214 | 1.2 | python,python-2.7,python-3.x,compatibility | When you install Python packages using apt-get, you're relying on the distribution package manager. The Ubuntu convention is to prefix Python 2 packages with python-, and Python 3 packages with python3-.
This distinction is necessary because Python 3 introduced some incompatible changes from Python 2. It's thus not possible to simply recompile (most) packages for Python 3, meaning both need to be made available.
Alternatively, you can use the Python package manager, pip (or pip3). The catch with this is that some packages (like scipy) require certain compiler toolchains which you might not have installed.
It's generally a good idea to stick to either apt-get or pip for a particular machine. You probably won't have issues if you mix them, but it's better to be consistent. | I am new to Python. I am running Ubuntu 14.04, and I have both Python 2.7 and 3.4 on it.
I want to use the newer 3.x version, with the NumPy, SciPy, and NLTK libraries. I set the Python REPL path to Python 3.x in the ~/.bash_aliases file like so:
alias python=python3
After this I installed several libs, including python-numpy, python-scipy, and python-matplotlib.
$ sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
Unfortunately, I am facing issues since I am guessing that the libraries got installed for the older 2.7 version of Python; I am unable to access the libraries using the 3.4 REPL.
import numpy
ImportError: No module named 'numpy'
However, I am able to access the libraries using the older version:
$ /usr/bin/python2.7
How do I get this this work? | 0 | 1 | 154 |
0 | 30,713,767 | 0 | 0 | 0 | 1 | 1 | true | 9 | 2015-06-08T15:20:00.000 | 8 | 2 | 0 | store numpy array in mysql | 30,713,062 | 1.2 | python,mysql,arrays,numpy | You could use ndarray.dumps() to pickle it to a string then write it to a BLOB field? Recover it using numpy.loads() | My use case is simple, i have performed some kind of operation on image and the resulting feature vector is a numpy object of shape rowX1000(what i mean to say is that the row number can be variable but column number is always 1000)
I want to store this numpy array in MySQL. No operations will be performed on the array. The query will be simple: given an image name, return the whole feature vector. So is there any way in which the array can be stored (something like a magic container which encapsulates the array, is put into the table, and on retrieval pops the array back out)?
I want to do this in Python. If possible, support it with a short code snippet showing how to put the data in the MySQL database. | 0 | 1 | 7,366
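A hedged sketch of the dumps()/loads() idea from the accepted answer; the table, columns and connection details are made up, and any DB-API-compliant driver (MySQLdb, mysql.connector, ...) works the same way, provided the target column is a BLOB:
import numpy as np
import MySQLdb

vec = np.random.randint(0, 255, size=(3, 1000))    # rows x 1000 feature vector
blob = vec.dumps()                                 # pickled bytes, as suggested above

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="features")
cur = conn.cursor()
cur.execute("INSERT INTO vectors (image_name, data) VALUES (%s, %s)", ("img001.jpg", blob))
conn.commit()

cur.execute("SELECT data FROM vectors WHERE image_name = %s", ("img001.jpg",))
restored = np.loads(cur.fetchone()[0])             # back to the original ndarray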
0 | 30,721,305 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-06-08T18:32:00.000 | 1 | 1 | 0 | Simple Python Median Filter for time series | 30,716,541 | 1.2 | python,filtering,time-series,signal-processing,noise-reduction | Load the data using any method you prefer. I see that your file can be treated as csv format, therefore you could use numpy.genfromtxt('file.csv', delimiter=',') function.
Use the scipy function for median filtering: scipy.signal.medfilt(data, window_len). Keep in mind that window length must be odd number.
Save the results to a file. You can do it for example by using the numpy.savetxt('out.csv', data, delimiter=',') function. | I have a time series in a log file having the following form (timestamp, value) :
1433787443, -60
1433787450, -65
1433787470, -57
1433787483, -70
Is there any available python code/library that takes as input the log file and a window size, apply a median filter to the time series to remove noise and outliers, and outputs the filtered signal to a new file ? | 0 | 1 | 4,246 |
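A minimal end-to-end sketch of the three steps listed in the answer above; the file names and window length are just examples:
import numpy as np
from scipy.signal import medfilt

data = np.genfromtxt("signal.log", delimiter=",")      # columns: timestamp, value
smoothed = data.copy()
smoothed[:, 1] = medfilt(data[:, 1], kernel_size=5)    # window length must be odd
np.savetxt("filtered.csv", smoothed, delimiter=",", fmt="%.0f")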
0 | 30,724,975 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2015-06-09T06:05:00.000 | 0 | 1 | 0 | how to import file csv without using bulk insert query? | 30,724,143 | 1.2 | python,mysql,sql,sql-server,csv | My answer is to work with bulk-insert.
1. Make sure you have the bulkadmin permission on the server.
2. Use a SQL-authentication login for the bulk-insert operation (for me, Windows-authentication logins mostly haven't worked).
so far this is my query but it use bulk insert :
bulk insert [dbo].[TEMP] from
'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt' with
(firstrow=2,fieldterminator = '~', rowterminator = ' '); | 0 | 1 | 777 |
0 | 30,748,701 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-06-09T15:38:00.000 | 3 | 2 | 0 | Data exchange - Python and Fortran | 30,736,698 | 0.291313 | python-2.7,fortran | I would interface directly between Python and Fortran. It is relatively straightforward to allocate memory using Numpy and pass a pointer through to Fortran. You use iso_c_binding to write C-compatible wrapper routines for your Fortran routines and ctypes to load the Fortran .dll and call the wrappers. If you're interested I can throw together a simple example (but I am busy right this moment). | We are developing a scientific application which has the interface in python 2.7 and the computation routines written in Intel Visual Fortran. Reading the source files is done using python, then only the required data for computations has to be passed to standalone Fortran algorithms. Once the computations done, the data has to be read by python once again.
Using formatted text files seems to be taking too long and not efficient. Further, we would like to have a standard intermediate format. There can be about 20 arrays and those are huge (if written to formatted text, the file is about 500 MB).
Q1. In a similar situation where Python and Fortran data exchange is necessary. What would be recommended way of interaction? (e.g.: writing an intermediate data to be read by the other or calling Fortran from within Python or using numpy to create compatible arrays or etc.)
Q2. If writing intermediate structures is recommended, What format is good for data exchange? (We came across CDF, NETCdf, binary streaming, but didn't try any so far.) | 0 | 1 | 325 |
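A hedged sketch of the ctypes route suggested in the first answer; the library name (libcompute.so) and routine name (compute) are hypothetical stand-ins for your iso_c_binding wrapper:
import ctypes
import numpy as np

lib = ctypes.CDLL("./libcompute.so")                 # Fortran code built as a shared library
lib.compute.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
lib.compute.restype = None

x = np.ascontiguousarray(np.random.rand(1000), dtype=np.float64)
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
lib.compute(ptr, ctypes.c_int(x.size))               # Fortran reads/writes x in place
This avoids writing any intermediate file at all; if files are still preferred, NumPy's .npy format or netCDF are compact, standard choices.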
0 | 30,737,140 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2015-06-09T15:41:00.000 | 2 | 1 | 0 | Restrict MKL optimized scipy to single thread | 30,736,765 | 1.2 | python,multithreading,numpy,intel,intel-mkl | Upon further investigation it looks like you can set the environment variable MKL_NUM_THREADS to achieve this. | I just installed an Intel MKL-optimized version of scipy and when running my benchmarks, I got a remarkable speedup with it. I then looked closer and found out it was running on 20 cores ... how do I restrict it to single-threaded mode? Is there a way I could have installed it in single-threaded mode by default, while leaving the option open to run on a specified number of cores?
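A small sketch of the accepted answer: the variable has to be set before NumPy/SciPy load MKL, so either export it in the shell or set it at the very top of the script:
import os
os.environ["MKL_NUM_THREADS"] = "1"     # must happen before numpy/scipy are imported
import numpy as np
import scipy.linalg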
0 | 43,092,480 | 0 | 0 | 0 | 0 | 1 | true | 13 | 2015-06-09T20:50:00.000 | 2 | 6 | 0 | Argmax of each row or column in scipy sparse matrix | 30,742,572 | 1.2 | python,scipy,sparse-matrix | From scipy version 0.19, both csr_matrix and csc_matrix support argmax() and argmin() methods. | scipy.sparse.coo_matrix.max returns the maximum value of each row or column, given an axis. I would like to know not the value, but the index of the maximum value of each row or column. I haven't found a way to make this in an efficient manner yet, so I'll gladly accept any help. | 0 | 1 | 4,988 |
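A small example of the methods mentioned in the accepted answer (scipy >= 0.19); a coo_matrix can be converted with .tocsr() first:
import numpy as np
from scipy.sparse import csr_matrix

m = csr_matrix(np.array([[0, 3, 0],
                         [5, 0, 2]]))
print(m.argmax(axis=1))     # column index of the max in each row
print(m.argmax(axis=0))     # row index of the max in each column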
0 | 47,763,498 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-06-10T00:37:00.000 | 0 | 3 | 0 | Gensim LDA - Default number of iterations | 30,745,184 | 0 | python,topic-modeling,gensim | The default number of iterations = 50 | I wish to know the default number of iterations in gensim's LDA (Latent Dirichlet Allocation) algorithm. I don't think the documentation talks about this. (Number of iterations is denoted by the parameter iterations while initializing the LdaModel ). Thanks ! | 0 | 1 | 6,280 |
0 | 30,751,250 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-06-10T08:19:00.000 | 0 | 3 | 0 | Binary to CSV record Converstion | 30,750,849 | 0 | python,csv,binary,data-conversion,hexdump | To turn the binary into meaningful strings, we must know the binary protocol; we can't resolve the binary out of thin air. | Hi Folks, I have been working on a python module which will convert a binary string into a CSV record. A 3rd party application does this usually, however I'm trying to build this logic into my code. The records before and after conversion are as follows:
CSV Record After Conversion
0029.6,000.87,002.06,0029.2,0010.6,0010.0,0002.1,0002.3,00120,00168,00054,00111,00130,00000,00034,00000,00000,00039,00000,0313.1,11:09:01,06-06-2015,00000169
I'm trying to figure out the conversion logic that has been used by the 3rd party tool, if anyone can help me with some clues regarding this, it would be great! One thing I have analysed is that each CSV value corresponds to an unsigned short in the byte stream. TIA, cheers! | 0 | 1 | 3,548 |
0 | 30,752,427 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2015-06-10T08:19:00.000 | 1 | 3 | 0 | Binary to CSV record Converstion | 30,750,849 | 1.2 | python,csv,binary,data-conversion,hexdump | As already mentioned, without knowing the binary protocol it will be difficult to guess the exact encoding that is being used. There may be special case logic that is not apparent in the given data.
It would be useful to know the name of the 3rd party application or a least what field this relates to. Anything to give an idea as to what the encoding could be.
The following are clues as you requested on how to proceed:
The end of the CSV shows a date, this can be seen at the start
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00
64 00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00
00 27 00 00 00
The end value 169 (hex A9) is suspiciously in between the next two hex values
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00
64 00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00
00 27 00 00 00
"00039," could refer to the last 4 digits
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00 64
00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00 00
27 00 00 00
or:
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00 64
00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00 00
27 00 00 00 ....or 27 00 00 00
...you guess two bytes are used so perhaps the others are separate 0 value fields.
"00034," could refer to:
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00 64
00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00
00 27 00 00 00
and so on... simply convert some of the decimal numbers into hex and search for possible locations in the data. Consider that fields might be big or little endian or a combination thereof.
You should take a look at the struct python library which can be useful in dealing with such conversions once you know the formatting that is being used.
With more data examples the above theories could then be tested. | Hi Folks, I have been working on a python module which will convert a binary string into a CSV record. A 3rd party application does this usually, however I'm trying to build this logic into my code. The records before and after conversion are as follows:
CSV Record After Conversion
0029.6,000.87,002.06,0029.2,0010.6,0010.0,0002.1,0002.3,00120,00168,00054,00111,00130,00000,00034,00000,00000,00039,00000,0313.1,11:09:01,06-06-2015,00000169
I'm trying to figure out the conversion logic that has been used by the 3rd party tool, if anyone can help me with some clues regarding this, it would be great! One thing I have analysed is that each CSV value corresponds to an unsigned short in the byte stream. TIA, cheers! | 0 | 1 | 3,548 |
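Following the accepted answer's pointer to the struct library, a hedged sketch of decoding a run of unsigned shorts; the file name, endianness, offset and count are guesses to be adjusted once the real protocol is known:
import struct

with open("record.bin", "rb") as f:
    raw = f.read()

values = struct.unpack_from("<20H", raw, 0)    # 20 little-endian unsigned shorts from offset 0
print(values)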
0 | 30,756,777 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-06-10T11:10:00.000 | 2 | 1 | 0 | Compilation difference in Spyder IDE and Python intrepeter | 30,754,748 | 1.2 | python,spyder | Spyder 2.2.5 is an older version (the latest version is 2.3.4). When it starts, it automatically imports numpy and matplotlib. The regular Python interpreter needs an explicit import numpy as np in order to define an array, A=np.array([[1,2,3], [4,5,6]]). | This problem may seem a little strange, but a while (about 1-2 weeks) ago I wrote a little Python script which I tested and everything worked just fine. Today when I was taking lines from this latter script, the lines would run without any error in the Spyder IDE Python console, but when I try to put those same lines in a new .py file, Spyder gives me errors!
So I tried to compile the old script again, then I got errors!
A few examples to maybe clear things up:
I load and image using PIL Image: im = Image.open("test.jpg")
then in the Spyder console I can do: im.layers which gives me the number of color channels. Even though this attribute doesn't exist in PIL Image docs!
But using this same attribute in a python file would give an error!
Using: a = array( [ [ 1, 2, 3], [4, 5, 6], [...] ] ) I can create a 2D array (or matrix). This is possible through Spyder, but not regular Python interpreter (which leads to NameError: global name 'array' is not defined
)!
And a few more examples like these.
Could anyone help me understand what is going on, knowing that I'm sort of a Python noob?
Python version: 2.7.6 | GCC 4.8.2 | Spyder 2.2.5 | 0 | 1 | 1,345 |
0 | 30,769,647 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-06-10T22:10:00.000 | 0 | 1 | 0 | Baum-Welch many possible observations | 30,768,182 | 1.2 | python,hidden-markov-models | This sounds like the standard HMM scaling problem. Have a look at "A Tutorial on Hidden Markov Models ..." (Rabiner, 1989), section V.A "Scaling".
Briefly, you can rescale alpha at each time to sum to 1, and rescale beta using the same factor as the corresponding alpha, and everything should work. | I have implemented the baum-welch algorithm in python but I am now encountering a problem when attempting to train HMM (hidden markov model) parameters A,B, and pi. The problem is that I have many observation sequences Y = (Y_1=y_1, Y_2=y_2,...,Y_t=y_t). And each observation variable Y_t can take on K possible values, K=4096 in my case. Luckily I only have two states N=2, but my emission matrix B is N by K so 2 rows by 4096 columns.
Now when you initialize B, each row must sum to 1. Since there are 4096 values in each of the two rows, the numbers are very small. So small that when I go to compute alpha and beta their rows eventually approach 0 as t increases. This is a problem because you cannot compute gamma as it tries to compute x/0 or 0/0. How can I run the algorithm without it crashing and without permanently altering my values? | 0 | 1 | 869 |
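A minimal sketch of the scaled forward pass from Rabiner's section V.A referenced in the answer; beta is rescaled with the same factors, and the log-likelihood is recovered from the scale factors (all names are generic placeholders):
import numpy as np

def forward_scaled(A, B, pi, obs):
    # A: (N,N) transitions, B: (N,K) emissions, pi: (N,) initial probs, obs: list of symbol indices
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    c = np.zeros(T)                      # scale factors, reused when scaling beta
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = alpha[t - 1].dot(A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    log_likelihood = np.log(c).sum()     # instead of a product that underflows
    return alpha, c, log_likelihood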
0 | 30,814,337 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-06-13T01:03:00.000 | 0 | 1 | 0 | How to make overlaid contour plots with python? | 30,814,133 | 0 | python,contour | You can save each layer into a PNG file with a transparent background and overlay them in Photoshop, Gimp or ImageMagick. | Using Python, how to create two(or more) color contour plots, each with their own color map, but overlaid into a single image with the same x and y axes? | 0 | 1 | 67 |
0 | 30,821,067 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-06-13T16:23:00.000 | 1 | 1 | 0 | How to find out what function I am calling | 30,820,896 | 1.2 | python | Step in the function using pdb.
use pdb.set_trace() some where before you are calling train method.
Something like this
import pdb; pdb.set_trace()
classifier = NaiveBayesClassifier.train(training_set)
When you debug, stop at the line where you call the train method.
Press s to step into the function. This will take you inside the train function. From there you can debug normally.
classifier = NaiveBayesClassifier.train(training_set)
and I would like to debug the code inside the train() function. The problem is that if I add print statements or pdb calls nothing changes.
I am importing this:
from nltk.classify.naivebayes import NaiveBayesClassifier
but even if I change something in nltk/classify/naivebayes.py nothing happens. I can also delete all the content of this file and I still have a working output. So I suppose that the function I am calling is somewhere else, but I cannot find it.
Is there a way to check where my function call is actually going? I am quite confused. | 0 | 1 | 50 |
0 | 30,865,729 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2015-06-16T08:03:00.000 | 1 | 2 | 0 | Python 2.7 Anaconda Pandas error(Ubuntu 14.04) | 30,861,956 | 1.2 | python,python-2.7,ubuntu,pandas | So, the solution was essentially to create a virtual environment and install the needed packages independently. Some issues with dependencies on my system, I believe. | I'm doing a data science course on udemy using python 2.7, running Anaconda. My OS is Ubuntu 14.04.
I'm getting the following error running with the pandas module:
Traceback (most recent call last):
File "/home/flyveren/PycharmProjects/Udemy/15_DataFrames.py", line 13, in <module>
nfl_frame = pd.read_clipboard()
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/clipboard.py", line 51, in read_clipboard
return read_table(StringIO(text), **kwargs)
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 474, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 260, in _read
return parser.read()
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 721, in read
ret = self._engine.read(nrows)
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 1170, in read
data = self._reader.read(nrows)
File "pandas/parser.pyx", line 769, in pandas.parser.TextReader.read (pandas/parser.c:7544)
File "pandas/parser.pyx", line 791, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7784)
File "pandas/parser.pyx", line 844, in pandas.parser.TextReader._read_rows (pandas/parser.c:8401)
File "pandas/parser.pyx", line 831, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:8275)
File "pandas/parser.pyx", line 1742, in pandas.parser.raise_parser_error (pandas/parser.c:20691)
pandas.parser.CParserError: Error tokenizing data. C error: Expected 11 fields in line 5, saw 12
I've tried conda uninstall pandas and subsequently conda install pandas again to see, however with the same result. The package is there, it tells me an error if I uninstall and try to run the code again with missing package, but it gives this error when it's properly installed.
Anyone knows what's up? | 0 | 1 | 332 |
0 | 40,701,927 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-06-16T08:03:00.000 | 1 | 2 | 0 | Python 2.7 Anaconda Pandas error(Ubuntu 14.04) | 30,861,956 | 0.099668 | python,python-2.7,ubuntu,pandas | I watch same lecture at udemy and face same problem.
I change my browser from internet explorer to chrome. (I'm using windows7 & VS2013 with PTVS)
Then, error does not occur.
However, delimeter has some problem.
Space should not be used as delimeter according to lecture, however, it does.
So, result is not perfect. | I'm doing a data science course on udemy using python 2.7, running Anaconda. My OS is Ubuntu 14.04.
I'm getting the following error running with the pandas module:
Traceback (most recent call last):
File "/home/flyveren/PycharmProjects/Udemy/15_DataFrames.py", line 13, in <module>
nfl_frame = pd.read_clipboard()
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/clipboard.py", line 51, in read_clipboard
return read_table(StringIO(text), **kwargs)
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 474, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 260, in _read
return parser.read()
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 721, in read
ret = self._engine.read(nrows)
File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 1170, in read
data = self._reader.read(nrows)
File "pandas/parser.pyx", line 769, in pandas.parser.TextReader.read (pandas/parser.c:7544)
File "pandas/parser.pyx", line 791, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7784)
File "pandas/parser.pyx", line 844, in pandas.parser.TextReader._read_rows (pandas/parser.c:8401)
File "pandas/parser.pyx", line 831, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:8275)
File "pandas/parser.pyx", line 1742, in pandas.parser.raise_parser_error (pandas/parser.c:20691)
pandas.parser.CParserError: Error tokenizing data. C error: Expected 11 fields in line 5, saw 12
I've tried conda uninstall pandas and subsequently conda install pandas again to see, however with the same result. The package is there, it tells me an error if I uninstall and try to run the code again with missing package, but it gives this error when it's properly installed.
Anyone knows what's up? | 0 | 1 | 332 |
0 | 30,869,829 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-06-16T13:27:00.000 | 2 | 2 | 0 | Generate Random Sentence From Grammar or Ngrams? | 30,868,980 | 1.2 | python,nltk,n-gram,sentence | If I'm getting it right and if the purpose is to test yourself on the vocabulary you already have learned, then another approach could be taken:
Instead of going through the difficult labor of NLG (Natural Language Generation), you could create a search program that goes online, reads news feeds or even simply Wikipedia, and finds sentences with only the words you have defined.
In any case, for what you want, you will have to create lists of words that you have learned. You could then create search algorithms for sentences that contain only / nearly only these words.
That would have the major advantage of testing yourself on real sentences, as opposed to artificially-constructed ones (which are likely to sound not quite right in a number of cases).
An app like this would actually be a great help for learning a foreign language. If you did it nicely I'm sure a lot of people would benefit from using it. | I am writing a program that should spit out a random sentence of a complexity of my choosing. As a concrete example, I would like to aid my language learning by spitting out valid sentences of a grammar structure and using words that I have already learned. I would like to use python and nltk to do this, although I am open to other ideas.
It seems like there are a couple of approaches:
Define a grammar file that uses the grammar and lexicon I know about, and then generate all valid sentences from this list, then selecting a random answer.
Load in corpora to train ngrams, which then can be used to construct a sentence.
Am I thinking about this correctly? Is one approach preferred over the other? Any tips are appreciated. Thanks! | 0 | 1 | 1,204 |
0 | 30,887,058 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-06-17T08:35:00.000 | 0 | 2 | 0 | How does result changes by using .distinct() in spark? | 30,886,340 | 0 | python,apache-spark | I think when you only apply map to FIRST_RDD (the logs) you get SECOND_RDD, and the count of this SECOND_RDD will be equal to the count of FIRST_RDD. But if you use distinct on SECOND_RDD, the count will decrease to the number of distinct tuples present in SECOND_RDD. | I was working with an Apache log file, and I created an RDD of (day, host) tuples from each log line. The next step was to group by host and then display the result.
I used distinct() after mapping the first RDD into (day, host) tuples. When I don't use distinct I get a different result than when I do. So how does the result change when using distinct() in Spark? | 0 | 1 | 172
0 | 30,902,169 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2015-06-17T20:44:00.000 | 1 | 4 | 0 | How to parse CSV file and search by item in first column | 30,902,045 | 0.049958 | python | You can loop through csv.reader(). It will return rows, and each row is a list. Compare the first element of the list, i.e. row[0]; if it is one of the names you want, add the row to an output list. | I have a CSV file with over 4,000 lines formatted like...
name, price, cost, quantity
How do I trim my CSV file so only the 20 names I want remain? I am able to parse/trim the CSV file, I am coming up blank on how to search column 1. | 0 | 1 | 145 |
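A minimal sketch of the loop described in the answer above, keeping only rows whose first column matches one of the names you care about (names and file paths are examples):
import csv

wanted = set(["Acme Corp", "Widget Co"])       # the ~20 names to keep

with open("products.csv") as src, open("trimmed.csv", "w") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        if row and row[0].strip() in wanted:
            writer.writerow(row)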
0 | 30,938,653 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-06-19T12:01:00.000 | 3 | 2 | 0 | NaiveBayes classifier handling different data types in python | 30,937,667 | 1.2 | python,scikit-learn,gaussian,naivebayes | Yes, you will need to convert the strings to numerical values
The naive Bayes classifier cannot handle strings, as there is no way a string can enter a mathematical equation.
If your strings have some ordinal "scalar value", for example "large, medium, small", you might want to encode them as "3, 2, 1".
However, if your strings are things without order, such as colours or names, you should instead assign binary variables, with one variable per colour or name, provided there are not too many.
For example, if you are classifying cars and they can be red, blue or green, you can define the variables 'Red', 'Blue', 'Green' that take the values 0/1, depending on the colour of the car. | I am trying to implement a Naive Bayes classifier in Python. My attributes are of different data types: strings, int, float, Boolean, ordinal.
I could use the Gaussian Naive Bayes classifier (sklearn.naive_bayes in Python), but I do not know how the different data types are to be handled. The classifier throws an error stating it cannot handle data types other than int or float.
One way I can think of is encoding the strings as numerical values. But I also wonder how well the classifier would perform if I do this. | 0 | 1 | 3,764
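A short sketch of the two encodings described in the answer, with made-up columns: an ordered attribute mapped to 1/2/3 and an unordered one expanded into 0/1 dummy columns before fitting GaussianNB:
import pandas as pd
from sklearn.naive_bayes import GaussianNB

df = pd.DataFrame({"size":   ["small", "large", "medium", "small"],
                   "colour": ["red", "blue", "green", "red"],
                   "label":  [0, 1, 1, 0]})

df["size"] = df["size"].map({"small": 1, "medium": 2, "large": 3})   # ordinal -> scalar
X = pd.get_dummies(df[["size", "colour"]])                           # colour -> 0/1 columns
clf = GaussianNB().fit(X, df["label"])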
0 | 54,027,432 | 0 | 1 | 0 | 0 | 1 | false | 14 | 2015-06-20T04:16:00.000 | 6 | 3 | 0 | AttributeError: 'TimedeltaProperties' object has no attribute 'years' in Pandas | 30,950,198 | 1 | python,datetime,pandas | This workaround gets you closer.
round((df["Accident Date"] - df["Iw Date Of Birth"]).dt.days / 365, 1) | In Pandas, why does a TimedeltaProperties object have no attribute 'years'?
After all, the datetime object has this property.
It seems like a very natural thing for an object that is concerned with time to have. Especially if it already has an hours, seconds, etc attribute.
Is there a workaround so that my column, which is full of values like
10060 days,
can be converted to years? Or better yet, just converted to an integer representation for years? | 0 | 1 | 19,665 |
0 | 30,961,062 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2015-06-21T02:07:00.000 | 2 | 1 | 0 | LZ 77, 78 algorithm for ECG Compression | 30,960,617 | 0.379949 | python,c,matlab,lzw,lz77 | Reconsider your choice of the LZ 77/78 algorithms. ECG waves look similar but they are not binary identical so the dictionary-based compression algorithms don't provide ideal results.
Complicated algorithms can hardly be expressed in a few lines of code. | I am interested in implementing LZ algorithms for the compression of ECG signals and want to optimize the code for a microcontroller,
so that it is entropy-efficient and takes less time to compress and decompress the ECG signal. I am totally stuck on how to go about achieving this. I am open to any programming language.
I have searched the internet for source code and found a very long implementation which is difficult to understand in a short period of time.
Any suggestion...? | 0 | 1 | 339 |
0 | 56,899,773 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2015-06-22T09:13:00.000 | 0 | 3 | 0 | Find the tf-idf score of specific words in documents using sklearn | 30,976,120 | 0 | python,scikit-learn,tf-idf | @kinkajou, no, TF and IDF are not the same, but they belong to the same algorithm: TF-IDF, i.e. term frequency-inverse document frequency. | I have code that runs a basic TF-IDF vectorizer on a collection of documents, returning a sparse matrix of D x F where D is the number of documents and F is the number of terms. No problem.
But how do I find the TF-IDF score of a specific term in the document? i.e. is there some sort of dictionary between terms (in their textual representation) and their position in the resulting sparse matrix? | 0 | 1 | 11,861 |
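One way to get the term-to-column mapping the question asks about, sketched with scikit-learn's TfidfVectorizer (vocabulary_ maps each term to its column in the D x F matrix):
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog sat on the cat"]   # toy corpus
vec = TfidfVectorizer()
X = vec.fit_transform(docs)              # D x F sparse matrix

col = vec.vocabulary_["cat"]             # column index of the term "cat"
print(X[1, col])                         # tf-idf score of "cat" in document 1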
0 | 31,006,818 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-06-22T18:40:00.000 | 0 | 1 | 0 | Drawing layout with constraints in networkx | 30,987,425 | 0 | python,networkx | I ended up forking the spring layout and adding a hold_dim parameter, so that while updating the positions only x or only y is changed. | I wonder whether networkx.drawing can hold one dimension fixed during the layout optimization, given a predefined position array.
Lets say you want to optimize the layout of a graph and have the x dimension of the positions already given, so you only want to optimize the y directions of the vertices.
So far I've only noticed, that one can hold positions of certain vertices, but then of course non of those are being moved.
In the Python package grandalf, they have DicgoLayout, so I'd expect something similar in networkx. | 0 | 1 | 380 |
0 | 31,486,223 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-06-22T21:54:00.000 | 1 | 2 | 0 | How to make data points in a 3D python scatter plot look like "discs" instead of "spheres" | 30,990,501 | 1.2 | python,matplotlib,3d,data-visualization | I actually was able to do this using the matplotlib.patches library, creating a patch for every data point, and then making it whatever shape I wanted with the help of mpl_toolkits.mplot3d.art3d. | In a standard 3D python plot, each data point is, by default, represented as a sphere in 3D. For the data I'm plotting, the z-axis is very sensitive, while the x and y axes are very general, so is there a way to make each point on the scatter plot spread out over the x and y direction as it normally would with, for example, s=500, but not spread at all along the z-axis? Ideally this would look like a set of stacked discs, rather than overlapping spheres.
Any ideas? I'm relatively new to python and I don't know if there's a way to make custom data points like this with a scatter plot. | 0 | 1 | 612 |
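A hedged sketch of the patches approach the answer describes, using Circle patches lifted into 3D with art3d (the sample points are made up):
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from mpl_toolkits.mplot3d import Axes3D   # registers the 3d projection in older matplotlib
import mpl_toolkits.mplot3d.art3d as art3d

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
for x, y, z in [(1.0, 2.0, 0.2), (2.0, 1.0, 0.5), (1.5, 1.5, 0.8)]:
    disc = Circle((x, y), radius=0.3, alpha=0.6)   # flat disc, no spread along z
    ax.add_patch(disc)
    art3d.pathpatch_2d_to_3d(disc, z=z, zdir="z")
ax.set_xlim(0, 3); ax.set_ylim(0, 3); ax.set_zlim(0, 1)
plt.show()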
0 | 30,993,345 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-06-23T02:38:00.000 | 0 | 2 | 0 | Read many files in parallel with a specific sample rate | 30,992,964 | 0 | python,file,python-2.7 | Open 10 files.
Read 1 record from each file (or 10, the question is not clear).
Use these records.
Wait until the current 5 second interval elapses.
Go to 2. | I have 10 CSV files with a million records. I want to read the 10 files in parallel, but at a specific rate (10 records per 5 sec). What is an efficient way to do so? I am using Windows, in case someone suggests using the OS scheduler. | 0 | 1 | 90
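A rough sketch of that loop, assuming ten files named file0.csv ... file9.csv, one record per file per 5-second tick, and a placeholder handle() for your own processing:
import csv
import time

readers = [csv.reader(open("file%d.csv" % i)) for i in range(10)]

while readers:
    started = time.time()
    batch = []
    for reader in list(readers):
        try:
            batch.append(next(reader))
        except StopIteration:
            readers.remove(reader)       # this file is exhausted
    handle(batch)                        # your own processing of up to 10 records
    time.sleep(max(0, 5 - (time.time() - started)))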
0 | 31,009,436 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-06-23T17:16:00.000 | 2 | 1 | 0 | When writing large data into .csv file, is it better to open and close file often? | 31,009,284 | 1.2 | python,csv | I believe the write will immediately write to the disk, so there isn't any benefit that I can see from closing and reopening the file. The file isn't stored in memory when it's opened, you just get essentially a pointer to the file, and then load or write a portion of it at a time.
Edit
To be more explicit, no, opening a large file will not use a large amount of memory. Similarly writing a large amount of data will not use a large amount of memory as long as you don't hold the data in memory after it has been written to the file. | I am writing a program with a while loop, which would write giant amount of data into a csv file. There maybe more than 1 million rows.
Considering running time, memory usage, debugging and so on, what is the better option between the two:
open a CSV file, keep it open and write line by line, until the 1 million all written
Open a file, write about 100 lines, close(), open again, write about 100 lines, ......
I guess I just want to know would it take more memories if we're to keep the file open all the time? And which one will take longer?
I can't run the code to compare because I'm using a VPN for the code, and testing through testing would cost too much $$ for me. So just some rules of thumb would be enough for this thing. | 0 | 1 | 2,830 |
0 | 31,022,952 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-06-24T09:09:00.000 | 0 | 4 | 0 | How to save a dataframe as a csv file with '/' in the file name | 31,022,379 | 0 | python,pandas | You can not use any of these chars in the file name ;
/:*?\"| | I want to save a dataframe to a .csv file with the name '123/123', but it will split it into two strings if I just type df.to_csv('123/123.csv').
Does anyone know how to keep the slash in the name of the file? | 0 | 1 | 373
0 | 31,058,586 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2015-06-25T18:49:00.000 | 0 | 2 | 0 | How to model particle bouncing off of a curved surface in 3D? | 31,058,446 | 1.2 | python,3d,geometry,computational-geometry | Get the normal vector of the plane. (you can make a cross product between two non parallel vectors in the plane for that).
Take the velocity vector components relative to the normal vector and:
Make parallel component negative
Keep perpendicular components the same.
Following an approach similar to @jacdeh:
Normalize the normal vector
Make the inner product of velocity with unit normal (that is the scalar speed against the surface)
Multiply the inner product with unit normal (that is the parallel component I mentioned).
Subtract 2 times that component from velocity and this is the result. | I am trying to simulate a particle bouncing off the sides of a cylinder (inside) or any closed curved surface in 3 dimensions.
At the moment of interaction with the surface, I have the position vector, the velocity vector, and the plane tangent to the surface at the intersection, and I am looking to derive the new velocity vector.
Currently coding in Python, but pseudocode/general algorithm would be immensely helpful as well. | 0 | 1 | 222 |
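The accepted recipe in a few lines of NumPy; t1 and t2 are any two non-parallel vectors spanning the tangent plane at the intersection point:
import numpy as np

def reflect(velocity, t1, t2):
    n = np.cross(t1, t2)
    n = n / np.linalg.norm(n)                        # unit normal of the tangent plane
    return velocity - 2.0 * np.dot(velocity, n) * n  # flip the normal component only

v_new = reflect(np.array([1.0, -2.0, 0.5]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))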
0 | 31,058,674 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2015-06-25T18:49:00.000 | 0 | 2 | 0 | How to model particle bouncing off of a curved surface in 3D? | 31,058,446 | 0 | python,3d,geometry,computational-geometry | Compute normal vector as cross product of two tangent vectors to your surface
Normalize it to get unit normal
Compute inner product (dot product) of minus velocity and normal vector
Multiply inner product with unit normal to get projection on normal
Subtract the projection from minus velocity (this leaves the negated tangential component)
Subtract that from the projection to get the new velocity vector | I am trying to simulate a particle bouncing off the sides of a cylinder (inside) or any closed curved surface in 3 dimensions.
At the moment of interaction with the surface, I have the position vector, the velocity vector, and the plane tangent to the surface at the intersection, and I am looking to derive the new velocity vector.
Currently coding in Python, but pseudocode/general algorithm would be immensely helpful as well. | 0 | 1 | 222 |
0 | 31,062,641 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2015-06-25T23:43:00.000 | 6 | 1 | 0 | Determine where a class is imported from in python | 31,062,620 | 1.2 | python,scikit-learn | Use the __module__ attribute, i.e.: Ridge.__module__
If you want to know it from an instance of the class: inst.__class__.__module__
If you need the module object (not just the name as string): sys.modules[Ridge.__module__] | Is there any way to determine where a class is coming from in python (especially sklearn)? I want to determine if a class is from sklearn.linear_models or sklearn.ensemble.
As an example, I would like to be able to determine if Ridge() is a member of sklearn.linear_model.
The fit function is a bit different depending on the model so formulas fed to each via patsy need to be different. | 0 | 1 | 62 |
0 | 31,088,239 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-06-26T18:30:00.000 | 0 | 1 | 0 | Multi-Axis Graph using the OpenPyxl library in Python | 31,079,888 | 0 | python,excel,parsing,openpyxl | This will be possible in version 2.3. | I'm writing a parsing script in python and am curious as to how I could create a graph with two axis's under different data ranges.
Is this possible in openpyxl? | 0 | 1 | 196 |
0 | 52,132,286 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-06-27T18:41:00.000 | 0 | 1 | 0 | Cant import matplotlib on python 2.7.5 | 31,092,232 | 0 | python,matplotlib,import,module,importerror | Try pip uninstall matplotlib then python -mpip install matplotlib see if it helps. Otherwise, i'd recommend using a virtual environment which you can install using sudo easy_install virtualenv or pip install virtualenv This way you could just install and use matplotlib by following these steps:
Create a new virtual environment : user@yourmac$ virtualenv .env (.env can be any name or directory)
Activate it by doing : user@yourmac$ source .env/bin/activate
Then once it's activated just do a pip install matplotlib and use it to your satisfaction.
Hopefully this should solve your problem. | I installed matplotlib using (pip install --user matplotlib) and the installation was successful. when I try to import it using (import matplotlib) in python shell I get this error:
Traceback (most recent call last):
File "", line 1, in
import matplotlib
ImportError: No module named matplotlib
I couldn't find another question similar to mine because I'm not using anaconda. | 0 | 1 | 3,341 |
0 | 31,280,601 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-06-27T22:05:00.000 | 0 | 3 | 0 | Efficient multiple precision numerical arrays | 31,093,986 | 0 | python,numpy,mpmath,gmpy | As far as I'm aware, there is no existing Python library that supports vectorized array operations on multiple precision values. There's unfortunately no particularly efficient way to use multiple precision values within a numpy ndarray, and it's extremely unlikely that there ever will be, since multiple precision values are incompatible with numpy's basic array model.
Each element in a floating point numpy ndarray takes up the same number of bytes, so the array can be represented in terms of the memory address of the first element, the dimensions, and a regular byte offset (or stride) between consecutive array elements.
This scheme has significant performance benefits - adjacent array elements are located at adjacent memory addresses, so sequential reads/writes to an array benefit from better locality of reference. Striding is also very important for usability, since it allows you to do things like operating on views of the same array without creating new copies in memory. When you do x[::2], you are really just doubling the stride over the first axis of the array, such that you address every other element.
By contrast, an array containing multiple precision values would have to contain elements of unequal size, since higher precision values would take up more bytes than low-precision values. A multiple precision array therefore cannot be regularly strided, and loses out on the benefits mentioned above.
In addition to the problems with constructing arrays, even plain arithmetic on multiple precision scalars is likely to be much slower than for floating point scalars. Almost all modern processors have specialized floating point units, whereas multiple precision arithmetic must be implemented in software rather than hardware.
I suspect that these performance issues might be a big part of the reason why there isn't already a Python library that provides the functionality you're looking for. | Numpy is a library for efficient numerical arrays.
mpmath, when backed by gmpy, is a library for efficient multiprecision numbers.
How do I put them together efficiently? Or is it already efficient to just use a Numpy array with mpmath numbers?
It doesn't make sense to ask for "as efficient as native floats", but you can ask for it to be close to the efficiency of equivalent C code (or, failing that, Java/C# code). In particular, an efficient array of multi-precision numbers would mean that you can do vectorized operations and not have to look up, say, __add__ a million times in the Global Interpreter.
Edit: To the close voter: My question is about an efficient way of putting them together. The answer in the possible duplicate specifically points out that the naive approach is not efficient.
Having a numpy array of dtype=object can be a little misleading, because the powerful numpy machinery that makes operations with the standard dtypes super fast is now taken care of by the objects' default Python operators, which means that the speed will not be there anymore
0 | 31,097,467 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-06-28T06:33:00.000 | 2 | 1 | 0 | call python from matlab2015a | 31,096,849 | 1.2 | python,matlab | Matlab comes with its own Python installation, which is located in your Matlab installation directory; these 3rd party libraries are probably not installed there.
Go to the Python folder inside the Matlab installation directory, search for pip and use it to install the libraries you need. | Matlab 2015a supports calling Python, but it seems it can only call the standard library from Matlab. If I want to import other libraries such as numpy, scipy or sklearn, what should I do? And can I execute a Python script directly? Unfortunately, the official Matlab documentation does not give enough explanation. If anyone can explain, I would really appreciate it!
0 | 48,433,206 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2015-06-28T09:40:00.000 | 2 | 3 | 0 | Solving system using linalg with constraints | 31,098,228 | 0.132549 | python,numpy,matrix,equation-solving | You could add a row consisting of ones to A and add one to B. After that use
result = linalg.lstsq(A, B)[0]
Or you can replace one of A's rows with a row consisting of ones and set the corresponding value in B to one in the same row. Then use
result = linalg.solve(A, B) | I want to solve some system in the form of matrices using linalg, but the resulting solutions should sum up to 1. For example, suppose there are 3 unknowns, x, y, z. After solving the system their values should sum up to 1, like .3, .5, .2. Can anyone please tell me how I can do that?
Currently, I am using something like result = linalg.solve(A, B), where A and B are matrices. But this doesn't return solutions in the range [0, 1]. | 0 | 1 | 9,737 |
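The first suggestion above as a sketch, taking A and B to be the arrays from the question; note the sum-to-one constraint is only enforced in the least-squares sense (weight the extra row heavily if it must hold more tightly), and nothing here forces the values into [0, 1]:
import numpy as np

A_aug = np.vstack([A, np.ones(A.shape[1])])   # extra equation: x1 + ... + xn = 1
B_aug = np.append(B, 1.0)
result = np.linalg.lstsq(A_aug, B_aug)[0]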
0 | 31,130,479 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-06-30T05:37:00.000 | 2 | 3 | 0 | Does it make sense to use R to read in and search through extremely large CSV file? | 31,130,348 | 0.132549 | python,r,database,csv | Depending how much data is in each column and if you're planning to do statistical analysis, I would definitely go with R. If no analysis then python with pandas is a good solution. Do not use office for those files, it'll give you a headache.
If you're brave and your data is going to increase, implement MongoDB with either R or python depending on previous need. | I have a CSV file with approximately 700 columns and 10,000 rows. Each of these columns contains attribute information for the object in column 1 of each row. I would like to search through this "database" for specific records that match a set of requirements based on their attribute information.
For instance, one column contains state information in the 2 letter abbreviation form. Another column might contain an acronym referring to a certain geographic characteristic. Suppose I'm looking for all rows where the state is NY, and the acronym in GRG.
What packages should I use to handle this work/data anlaysis in R?
If there are no good packages in R, for handling such a large dataset, what should I look to using?
I am familiar with R, Python, Office and some SQL commands.
Edit: I am not going to modify the dataset, but record (print out or create a subset from) the results of the querying. I'll have a total of 10-12 queries at first to determine if this dataset truly serves my need. But I may possibly have hundreds of queries later - at which point I'd like to switch from manual querying of the dataset to an automated querying (if possible). | 0 | 1 | 104 |
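A small pandas sketch of the kind of query described; the column names state and geo_code are placeholders for whatever the real header row contains:
import pandas as pd

df = pd.read_csv("attributes.csv")
subset = df[(df["state"] == "NY") & (df["geo_code"] == "GRG")]
subset.to_csv("ny_grg.csv", index=False)       # record the matching rows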
0 | 31,134,116 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-06-30T07:13:00.000 | 1 | 1 | 0 | A file format writable by python, readable as a Dataframe in Spark | 31,131,855 | 0.197375 | python,scala,apache-spark | If your data doesn't have newlines in then a simple text-based format such as TSV is probably best.
If you need to include binary data then a separated format like protobuf makes sense - anything for which a hadoop InputFormat exists should be fine. | I have python scripts (no Spark here) producing some data files, that I want to be readable easily as Dataframes in a scala/spark application.
What's the best choice ? | 0 | 1 | 57 |
0 | 31,546,622 | 0 | 1 | 0 | 0 | 1 | false | 7 | 2015-06-30T15:46:00.000 | 1 | 2 | 0 | Sort (sorting) order of files in notebook list | 31,142,810 | 0.099668 | ipython-notebook,jupyter | If you are handy with jQuery, you can edit IPython's static files and customize the "tree" page you are referring to.
On my Ubuntu machine I could manipulate the page by editing the file /usr/local/lib/python2.7/dist-packages/IPython/html/static/tree/js/notebooklist.js | How can I change the sort order of files in the notebook list? I would like it to be alphabetical for all file types and not case sensitive. Directories can appear before files or within the files.
Currently my list is sorted as follows:
- Directories (starting with upper case)
- directories (starting with lower case)
- IPython notebooks (starting with upper case)
- ipython noteooks (starting with lower case)
- Other files (starting with upper case)
- other files (staring with lower case)
I'm using Jupyter 3.0.0-f75fda4 | 0 | 1 | 2,153 |
0 | 31,157,260 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2015-06-30T21:11:00.000 | 4 | 5 | 0 | How to automatically label a cluster of words using semantics? | 31,148,582 | 0.158649 | python,r,nlp,nltk,wordnet | Your best bet is probably is to label the clusters manually, especially if there are few of them. This a difficult problem even for humans to solve, because you might need a domain expert. Anyone claiming they could do that automatically and reliably (except in some very limited domains) is probably running a startup and trying to get your business.
Also, going through the clusters yourself will have benefits. 1) you may discover you had the wrong number of clusters (k parameter) or that there was too much junk in the input to begin with. 2) you will gain qualitative insight into what is being talked about and what topic there are in the data (which you probably can't know before looking at the data). Therefore, label manually if qualitative insight is what you are after. If you need quantitative result too, you could then train a classifier on the manually labelled topics to 1) predict topics for the rest of the clusters, or 2) for future use, if you repeat the clustering, get new data, ... | The context is : I already have clusters of words (phrases actually) resulting from kmeans applied to internet search queries and using common urls in the results of the search engine as a distance (co-occurrence of urls rather than words if I simplify a lot).
I would like to automatically label the clusters using semantics, in other words I'd like to extract the main concept surrounding a group of phrases considered together.
For example - sorry for the subject of my example - if I have the following bunch of queries : ['my husband attacked me','he was arrested by the police','the trial is still going on','my husband can go to jail for harrassing me ?','free lawyer']
My study deals with domestic violence, but clearly this cluster is focused on the legal aspect of the problem so the label could be "legal" for example.
I am new to NPL but I have to precise that I don't want to extract words using POS tagging (or at least this is not the expected final outcome but maybe a necessary preliminary step).
I read about Wordnet for sense desambiguation and I think that might be a good track, but I don't want to calculate similarity between two queries (since the clusters are the input) nor obtain the definition of one selected word thanks to the context provided by the whole bunch of words (which word to select in this case ?). I want to use the whole bunch of words to provide a context (maybe using synsets or categorization with the xml structure of the wordnet) and then summarize the context in one or few words.
Any ideas ? I can use R or python, I read a little about nltk but I don't find a way to use it in my context. | 0 | 1 | 3,398 |
0 | 31,191,159 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-07-02T15:12:00.000 | 1 | 1 | 0 | How to control feature subsetting in random forest in scikit-learn? | 31,188,415 | 0.197375 | python-3.x,scikit-learn,random-forest | Short version: This is all you.
I assume by "subsetting features for every node" you are referring to the random selection of a subset of samples and possibly features used to train individual trees in the forest. If that's what you mean, then you aren't building a random forest; you want to make a nonrandom forest of particular trees.
One way to do that is to build each DecisionTreeClassifier individually using your carefully specified subset of features, then use the VotingClassifier to combine the trees into a forest. (That feature is only available in 0.17/dev, so you may have to build your own, but it is super simple to build a voting classifier estimator class.) | I am trying to change the way the random forest algorithm subsets features for every node. The original algorithm, as implemented in scikit-learn, subsets randomly. I want to define which subset is used for every new node, chosen from several predefined subsets. Is there a direct way in scikit-learn to control this? If not, is there any way to modify the scikit-learn code itself, and if so, which function in the source code do you think should be updated?
0 | 31,197,249 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2015-07-02T19:10:00.000 | 3 | 2 | 0 | Efficient algorithm for finding pairs of nearby, parallel streets in a map | 31,192,752 | 0.291313 | python,algorithm,openstreetmap | Your idea of processing the segments in bins is not bad. You do need to think through what happens to road segments that traverse bin boundaries.
Another idea is to Hough transform all the road segments. The infinite line that each segment lies on corresponds to a point in 2d Hough space: the polar angle of the line is one axis and the distance to the origin of the line's nearest point is the other. The transformation from two points on a line to a Hough point is simple algebra.
Now you can detect nearly co-linear road segments by using a closest-point-pair algorithm. Happily this can be done in O(n log n) expected time, e.g. using a k-d tree: insert all the points in the tree, then use the standard k-d tree query to find each point's nearest neighbor. Sort the pair distances and take a prefix of the result as pairs to consider, stopping where the pairs are too far apart to meet your criterion of "nearby and parallel". There are O(n) such nearest-neighbor pairs. (A rough Python sketch of this follows below.)
All that's left is to filter out segment pairs that - though nearly co-linear - don't overlap. These segments lie on or near different parts of the same infinite line, but they're not of interest. This is just a little more algebra.
There are reasonably good Wikipedia articles on all of the algorithms mentioned here. Look them up if they're not familiar. | I am working on a Python program that processes map data from Openstreetmap, and I need to be able to identify pairs of streets (ways) that are close to each other and parallel. Right now, the basic algorithm I'm using is quite inefficient:
Put all of the streets (Street objects) into a large list
Find every possible pair of two streets in the list using nested for loops; for each pair, draw a rectangle around the two streets and calculate the angle at which each street is oriented.
If the rectangles overlap, the overlapping area is big enough, and the angles are similar, the two streets in the pair are considered parallel and close to each other.
This works well for small maps, but with large maps the biggest problem is obviously the huge number of pairs to iterate through, since there could be thousands of streets in a city. I want to be able to run the program on a large area (like a city) without having to split the area into smaller pieces.
One idea I'm thinking of is sorting the list of streets by latitude or longitude, and only comparing pairs of streets that are within, say, 50 positions away from each other in the list. It would probably be more efficient but it still doesn't seem very elegant; is there any better way?
Each Street is composed of Node objects, and I can easily retrieve both the Node objects and the lat/long position of each Node. I can also easily retrieve the angle at which a street is oriented. | 0 | 1 | 750 |
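A rough Python sketch of the Hough-plus-nearest-neighbour idea from that answer. It assumes the segments have already been projected from lat/long to a flat x/y coordinate system in metres, the angle scale factor and search radius are made-up tuning knobs, and the wrap-around of the angle at pi is glossed over. The overlap filter from the last step of the answer is not included, so the pairs found here are only candidates.

    import numpy as np
    from scipy.spatial import cKDTree

    def hough_point(x1, y1, x2, y2):
        # Angle of the line's normal, folded into [0, pi).
        theta = (np.arctan2(y2 - y1, x2 - x1) + np.pi / 2.0) % np.pi
        # Signed perpendicular distance from the origin to the infinite line.
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)
        return theta, rho

    # Placeholder segments: (x1, y1, x2, y2) in projected metres.
    segments = np.array([
        [0.0, 0.0, 100.0, 0.0],
        [0.0, 5.0, 120.0, 5.0],   # parallel to the first and 5 m away
        [0.0, 0.0, 0.0, 80.0],    # perpendicular to the first
    ])

    points = np.array([hough_point(*seg) for seg in segments])

    # The two axes have different units (radians vs metres), so scale the angle.
    # This factor is arbitrary and would need tuning on real data.
    ANGLE_SCALE = 200.0
    scaled = np.column_stack([points[:, 0] * ANGLE_SCALE, points[:, 1]])

    tree = cKDTree(scaled)
    # Index pairs whose infinite lines are close in Hough space:
    # similar orientation and similar offset from the origin.
    candidate_pairs = tree.query_pairs(r=20.0)
    print(sorted(candidate_pairs))   # expect [(0, 1)]; segment 2 is filtered out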
0 | 31,213,214 | 0 | 1 | 0 | 0 | 1 | false | 6 | 2015-07-03T08:29:00.000 | 1 | 2 | 0 | NumPy or Dictionary? | 31,202,124 | 0.099668 | python,sorting,numpy,dictionary,scikit-learn | As you mention in the comments you don't know the size of the words/tweets matrix that you will eventually obtain, so that makes using an array a cumbersome solution.
It feels more natural to use a dictionary here, for the reasons you noted.
The keys of the dictionary will be the words in the tweets, and the values can be lists with (tweet_id, term_frequency) elements.
Eventually you might want to do something else (e.g. classification) with your term frequencies. I suspect this is why you want to use a numpy array from the start.
It should not be too hard to convert the dictionary to a numpy array afterwards though, if that is what you wish to do.
However note that this array is likely to be both very big (1M * number of words) and very sparse, which means it will contain mostly zeros.
Because this numpy array would spend a lot of memory storing zeros, you might want to look at a data structure that is more memory-efficient for sparse matrices (see scipy.sparse). A small sketch of this route follows below.
Hope this helps. | I have to deal with a large data-set. I need to store the term frequency of each sentence, which I can do either using a dictionary list or using a NumPy array.
But I will have to sort and append (in case the word already exists). Which will be better in this case? | 0 | 1 | 1,345
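A small sketch of the dictionary-then-sparse-matrix route suggested in that answer. The tweets, the whitespace tokenisation, and the ID assignment are placeholder choices.

    from collections import Counter, defaultdict
    from scipy.sparse import coo_matrix

    # Placeholder corpus; the real input would be the full tweet set.
    tweets = ["the cat sat", "the cat ate the fish", "fish swim"]

    # word -> list of (tweet_id, term_frequency), as the answer suggests.
    postings = defaultdict(list)
    for tweet_id, text in enumerate(tweets):
        for word, count in Counter(text.split()).items():
            postings[word].append((tweet_id, count))

    # Later, convert to a sparse tweets-by-words matrix without ever densifying it.
    vocab = {word: col for col, word in enumerate(sorted(postings))}
    rows, cols, vals = [], [], []
    for word, entries in postings.items():
        for tweet_id, count in entries:
            rows.append(tweet_id)
            cols.append(vocab[word])
            vals.append(count)

    tf = coo_matrix((vals, (rows, cols)), shape=(len(tweets), len(vocab)))
    print(tf.toarray())   # tiny example only; a real matrix should stay sparse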
0 | 31,219,138 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-07-04T09:30:00.000 | 0 | 1 | 0 | Is there any python code for CreditGrades model? | 31,219,085 | 0 | python-3.x | I guess I can try symbolic differentiation. I can also use SymPy together with NumPy. (A small sketch of the numeric-differentiation part follows below.)
Any Python code is okay.
Also, I will need numeric differentiation for the hedge ratio.
Any suggestions on numeric differentiation in Python?
Thanks a lot. | 0 | 1 | 140 |
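A small sketch of the numeric-differentiation part only. The CreditGrades pricing formula itself is not reproduced; credit_spread below is a placeholder stand-in for the model's spread as a function of the equity price, and the step size is an arbitrary choice.

    import numpy as np

    def credit_spread(equity_price):
        # Placeholder for a real CreditGrades spread function of the equity price;
        # the actual model formula is not implemented here.
        return 0.02 * np.exp(-equity_price / 50.0)

    def hedge_ratio(spread_fn, s, h=1e-4):
        # Central finite difference: d(spread) / d(equity price).
        return (spread_fn(s + h) - spread_fn(s - h)) / (2.0 * h)

    print(hedge_ratio(credit_spread, s=40.0))

If the spread formula is written symbolically, sympy.diff followed by sympy.lambdify gives an exact derivative that can still be evaluated with NumPy, which is the SymPy-with-NumPy route mentioned in the answer.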