Dataset columns (dtype, observed range or string length):
GUI and Desktop Applications (int64): 0 to 1
A_Id (int64): 5.3k to 72.5M
Networking and APIs (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
Other (int64): 0 to 1
Database and SQL (int64): 0 to 1
Available Count (int64): 1 to 13
is_accepted (bool): 2 classes
Q_Score (int64): 0 to 1.72k
CreationDate (string): length 23
Users Score (int64): -11 to 327
AnswerCount (int64): 1 to 31
System Administration and DevOps (int64): 0 to 1
Title (string): length 15 to 149
Q_Id (int64): 5.14k to 60M
Score (float64): -1 to 1.2
Tags (string): length 6 to 90
Answer (string): length 18 to 5.54k
Question (string): length 49 to 9.42k
Web Development (int64): 0 to 1
Data Science and Machine Learning (int64): 1 to 1
ViewCount (int64): 7 to 3.27M

Sample rows (each row below lists its metadata, then the title, IDs and tags, the answer text, and the question text):
A_Id: 61,413,025 | is_accepted: false | Q_Score: 1,569 | Users Score: 4 | AnswerCount: 15 | Available Count: 1 | CreationDate: 2013-04-11T08:14:00.000 | other topic flags: none
Title:
How do I get the row count of a Pandas DataFrame?
Q_Id: 15,943,769 | Score: 0.053283 | Tags: python,pandas,dataframe
Answer (the question it addresses follows):
Either of this can do it (df is the name of the DataFrame): Method 1: Using the len function: len(df) will give the number of rows in a DataFrame named df. Method 2: using count function: df[col].count() will count the number of rows in a given column col. df.count() will give the number of rows for all the columns.
How do I get the number of rows of a pandas dataframe df?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 3,270,072
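A minimal sketch of the row-counting options mentioned in the answer above; the toy DataFrame is only for illustration.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})  # hypothetical toy frame

print(len(df))          # number of rows
print(df.shape[0])      # same, via the (rows, columns) tuple
print(df["a"].count())  # non-null values in one column
print(df.count())       # non-null count for every column
```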
A_Id: 59,262,156 | is_accepted: false | Q_Score: 3 | Users Score: 3 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-04-11T19:21:00.000 | other topic flags: Python Basics and Environment
Title:
easy_install and pip giving errors when trying to install numpy
Q_Id: 15,957,071 | Score: 0.291313 | Tags: python,numpy,pip,easy-install
Answer (the question it addresses follows):
I was facing the same error while installing the requirements for my django project. This worked for me. Upgrade your setuptools version via pip install --upgrade setuptools and run the command for installing the packages again.
I am running Python 2.7.2 on my machine. I am trying to install numpy with easy_install and pip, but none of them are able to do so. So, when I try: sudo easy_install-2.7 numpy I get this error: "The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available." Moreover, when I try with pip: sudo pip-2.7 install numpy I get this error: RuntimeError: Broken toolchain: cannot link a simple C program Is there any fix available for this?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 5,561
A_Id: 15,998,577 | is_accepted: false | Q_Score: 5 | Users Score: 5 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-04-13T15:44:00.000 | other topic flags: none
Title:
Using sklearn and Python for a large application classification/scraping exercise
Q_Id: 15,989,610 | Score: 0.462117 | Tags: python,scrapy,classification,scikit-learn
Answer (the question it addresses follows):
Use the HashingVectorizer and one of the linear classification modules that supports the partial_fit API for instance SGDClassifier, Perceptron or PassiveAggresiveClassifier to incrementally learn the model without having to vectorize and load all the data in memory upfront and you should not have any issue in learning a classifier on hundreds of millions of documents with hundreds of thousands (hashed) features. You should however load a small subsample that fits in memory (e.g. 100k documents) and grid search good parameters for the vectorizer using a Pipeline object and the RandomizedSearchCV class of the master branch. You can also fine tune the value of the regularization parameter (e.g. C for PassiveAggressiveClassifier or alpha for SGDClassifier) using the same RandomizedSearchCVor a larger, pre-vectorized dataset that fits in memory (e.g. a couple of millions of documents). Also linear models can be averaged (average the coef_ and intercept_ of 2 linear models) so that you can partition the dataset, learn linear models independently and then average the models to get the final model.
I am working on a relatively large text-based web classification problem and I am planning on using the multinomial Naive Bayes classifier in sklearn in python and the scrapy framework for the crawling. However, I am a little concerned that sklearn/python might be too slow for a problem that could involve classifications of millions of websites. I have already trained the classifier on several thousand websites from DMOZ. The research framework is as follows: 1) The crawler lands on a domain name and scrapes the text from 20 links on the site (of depth no larger than one). (The number of tokenized words here seems to vary between a few thousand to up to 150K for a sample run of the crawler) 2) Run the sklearn multionmial NB classifier with around 50,000 features and record the domain name depending on the result My question is whether a Python-based classifier would be up to the task for such a large scale application or should I try re-writing the classifier (and maybe the scraper and word tokenizer as well) in a faster environment? If yes what might that environment be? Or perhaps Python is enough if accompanied with some parallelization of the code? Thanks
Web Development: 1 | Data Science and Machine Learning: 1 | ViewCount: 940
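A rough sketch of the out-of-core setup the answer above describes (HashingVectorizer plus a linear model trained with partial_fit). The batch generator and label names are hypothetical stand-ins for the crawler output; exact scikit-learn defaults vary by version.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, no fit needed
clf = SGDClassifier()                             # linear model supporting partial_fit
classes = ["relevant", "other"]                   # hypothetical label set

def batches():
    """Stand-in generator yielding (texts, labels) chunks streamed from the crawler."""
    yield (["some scraped page text", "another page"], ["relevant", "other"])

for texts, labels in batches():
    X = vectorizer.transform(texts)               # hash features for this chunk only
    clf.partial_fit(X, labels, classes=classes)   # incremental update, data never all in memory
```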
A_Id: 15,990,493 | is_accepted: false | Q_Score: 14 | Users Score: 12 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-04-13T17:00:00.000 | other topic flags: Python Basics and Environment
Title:
List of lists vs dictionary
Q_Id: 15,990,456 | Score: 1 | Tags: python,arrays,list,math,dictionary
Answer (the question it addresses follows):
When the keys of the dictionary are 0, 1, ..., n, a list will be faster, since no hashing is involved. As soon as the keys are not such a sequence, you need to use a dict.
In Python, are there any advantages / disadvantages of working with a list of lists versus working with a dictionary, more specifically when doing numerical operations with them? I'm writing a class of functions to solve simple matrix operations for my linear algebra class. I was using dictionaries, but then I saw that numpy uses list of lists instead, so I guess there must be some advantages in it. Example: [[1,2,3],[4,5,6],[7,8,9]] as opposed to {0:[1,2,3],1:[4,5,6],2:[7,8,9]}
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 20,631
A_Id: 16,064,603 | is_accepted: false | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-04-17T15:07:00.000 | other topic flags: none
Title:
Alternative inputs to SciPy Radial Basis Functions
Q_Id: 16,063,698 | Score: 0 | Tags: python,scipy
Answer (the question it addresses follows):
After looking through the source for the SciPy function, I will just subclass it and override init where the individual inputs are combined into an array anyway.
I am trying to generate a radial basis function where the input variables are defined at runtime. The SciPy.interpolate.Rbf function seems to request discrete lists for each input and output variable, eg: rbf(x,y,z). This restricts you to defining fixed variables before hand. I have tried unsuccessfully to pass a list or array of variables to the Rbf function (eg. rbf(list(x,...)) with no success. Has anyone else found a solution to this problem using this Rbf library? I would like to avoid switching to a different library or rewriting one if possible. Is there a way to generate discrete variables at runtime to feed into a function?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 512
A_Id: 16,225,932 | is_accepted: true | Q_Score: 4 | Users Score: 5 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-04-19T19:32:00.000 | other topic flags: Python Basics and Environment
Title:
Multiplying Columns by Scalars in Pandas
Q_Id: 16,112,209 | Score: 1.2 | Tags: python,pandas
Answer (the question it addresses follows):
As Wouter said, the recommended method is to convert the dict to a pandas.Series and multiple the two objects together: result = df * pd.Series(myDict)
Suppose I have a pandas DataFrame with two columns named 'A' and 'B'. Now suppose I also have a dictionary with keys 'A' and 'B', and the dictionary points to a scalar. That is, dict['A'] = 1.2 and similarly for 'B'. Is there a simple way to multiply each column of the DataFrame by these scalars? Cheers!
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 9,702
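A small sketch of the column-by-scalar multiplication suggested in the answer above; the frame and multiplier dict are made up for illustration.

```python
import pandas as pd

df = pd.DataFrame({"A": [1.0, 2.0], "B": [3.0, 4.0]})
scalars = {"A": 1.2, "B": 0.5}            # hypothetical per-column scalars

result = df * pd.Series(scalars)           # aligns on column names
# equivalently: df.mul(pd.Series(scalars), axis=1)
```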
A_Id: 16,133,372 | is_accepted: false | Q_Score: 1 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-04-21T16:07:00.000 | other topic flags: none
Title:
Python Pylab subplot bug?
Q_Id: 16,133,206 | Score: 0.099668 | Tags: python,matplotlib,plot
Answer (the question it addresses follows):
I found out that subplot() should be called before the plot(), issue resolved.
I am plotting some data using pylab and everything works perfect as I expect. I have 6 different graphs to plot and I can individually plot them in separate figures. But when I try to subplot() these graphs, the last one (subplot(3,2,6)) doesn't show anything. What confuses me is that this 6th graph is drawn perfectly when put in a separate figure but not in the subplot - with identical configurations. Any ideas what may be causing the problem ?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 375
A_Id: 16,149,290 | is_accepted: false | Q_Score: 0 | Users Score: 0 | AnswerCount: 4 | Available Count: 2 | CreationDate: 2013-04-22T14:06:00.000 | other topic flags: Python Basics and Environment
Title:
Saving python data for an application
Q_Id: 16,149,187 | Score: 0 | Tags: python,file,numpy
Answer (the question it addresses follows):
I had this problem long ago so i dont have the code near to show you, but i used a binary write in a tmp file to get that done. EDIT: Thats is, pickle is what i used. Thanks SpankMe and RoboInventor
I need to save multiple numpy arrays along with the user input that was used to compute the data these arrays contain in a single file. I'm having a hard time finding a good procedure to use to achieve this or even what file type to use. The only thing i can think of is too put the computed arrays along with the user input into one single array and then save it using numpy.save. Does anybody know any better alternatives or good file types for my use?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 106
A_Id: 16,149,283 | is_accepted: false | Q_Score: 0 | Users Score: 2 | AnswerCount: 4 | Available Count: 2 | CreationDate: 2013-04-22T14:06:00.000 | other topic flags: Python Basics and Environment
Title:
Saving python data for an application
Q_Id: 16,149,187 | Score: 0.099668 | Tags: python,file,numpy
Answer (the question it addresses follows):
How about using pickle and then storing pickled array objects in a storage of your choice, like database or files?
I need to save multiple numpy arrays along with the user input that was used to compute the data these arrays contain in a single file. I'm having a hard time finding a good procedure to use to achieve this or even what file type to use. The only thing i can think of is too put the computed arrays along with the user input into one single array and then save it using numpy.save. Does anybody know any better alternatives or good file types for my use?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 106
A_Id: 16,401,807 | is_accepted: false | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-04-23T14:07:00.000 | other topic flags: Other
Title:
how to make minepy.MINE run faster?
Q_Id: 16,171,519 | Score: 0.197375 | Tags: python,python-2.7
Answer (the question it addresses follows):
first, use the latest version of minepy. Second, you can use a smaller value of "alpha" parameter, say 0.5 or 0.45. In this way, you will reduce the computational time in despite of characteristic matrix accuracy. Davide
I have a numerical matrix of 2500*2500. To calculate the MIC (maximal information coefficient) for each pair of vectors, I am using minepy.MINE, but this is taking forever, can I make it faster?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 425
A_Id: 16,180,974 | is_accepted: false | Q_Score: 90 | Users Score: 2 | AnswerCount: 3 | Available Count: 1 | CreationDate: 2013-04-23T23:35:00.000 | other topic flags: none
Title:
Drawing average line in histogram (matplotlib)
Q_Id: 16,180,946 | Score: 0.132549 | Tags: python,matplotlib,axis
Answer (the question it addresses follows):
I would look at the largest value in your data set (i.e. the histogram bin values) multiply that value by a number greater than 1 (say 1.5) and use that to define the y axis value. This way it will appear above your histogram regardless of the values within the histogram.
I am drawing a histogram using matplotlib in python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram? I am using the plot() command, but not sure how to draw a vertical line (i.e. what value should I give for the y-axis? thanks!
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 127,120
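A minimal sketch of drawing a vertical line at the mean over a histogram, which is the usual way to solve the question above (axvline spans the full y-axis, so no y value needs to be chosen). The random data is a stand-in.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)             # hypothetical dataset

plt.hist(data, bins=30)
plt.axvline(data.mean(), color="k", linestyle="--", label="mean")
plt.legend()
plt.show()
```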
A_Id: 16,186,125 | is_accepted: false | Q_Score: 1 | Users Score: 1 | AnswerCount: 6 | Available Count: 1 | CreationDate: 2013-04-24T05:21:00.000 | other topic flags: Python Basics and Environment
Title:
How to generate a list of all possible alphabetical combinations based on an input of numbers
Q_Id: 16,183,941 | Score: 0.033321 | Tags: python,algorithm,alphabetical
Answer (the question it addresses follows):
Think of it as a tree. Suppose you are given "1261". Build a tree with it as the root, where each node branches into (left, right): the left child maps the next single digit directly, and the right child maps the next two digits as a combined value. For 1261: 1261 -> (1(261), 12(61)); 1 is the left node (direct map -> A) and 12 is the right node (combined map 1,2 -> L). Expanding: (A(261), L(61)) -> (A(2(61), 26(1)), L(6(1))) -> (A(B(6(1)), Z(1)), L(F(1))) -> (A(B(F(A)), Z(A)), L(F(A))). Once every leaf is reached, print all paths from the root to a leaf; each path is one possible combination, in this case ABFA, AZA, LFA. So once the tree is constructed, printing all root-to-leaf paths gives you the required output.
I have just come across an interesting interview style type of question which I couldn't get my head around. Basically, given a number to alphabet mapping such that [1:A, 2:B, 3:C ...], print out all possible combinations. For instance "123" will generate [ABC, LC, AW] since it can be separated into 12,3 and 1,23. I'm thinking it has to be some type of recursive function where it checks with windows of size 1 and 2 and appending to a previous result if it's a valid letter mapping. If anyone can formulate some pseudo/python code that'd be much appreciated.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 1,582
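A short recursive sketch of the digit-to-letter decoding asked about above, trying windows of one and two digits at each step (equivalent to walking the tree described in the answer); the function name is my own.

```python
def letter_combos(digits):
    """Return all letter decodings of a digit string under 1->A ... 26->Z."""
    if not digits:
        return [""]
    results = []
    for size in (1, 2):                      # windows of one or two digits
        if size > len(digits):
            break
        head, rest = digits[:size], digits[size:]
        if not head.startswith("0") and 1 <= int(head) <= 26:
            letter = chr(ord("A") + int(head) - 1)
            results += [letter + tail for tail in letter_combos(rest)]
    return results

print(letter_combos("123"))   # ['ABC', 'AW', 'LC']
```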
A_Id: 16,223,497 | is_accepted: false | Q_Score: 3 | Users Score: -1 | AnswerCount: 2 | Available Count: 2 | CreationDate: 2013-04-25T16:57:00.000 | other topic flags: Python Basics and Environment
Title:
Python numpy - Reproducibility of random numbers
Q_Id: 16,220,585 | Score: -0.099668 | Tags: python,random,numpy,prng
Answer (the question it addresses follows):
If reproducibility is very important to you, I'm not sure I'd fully trust any PRNG to always produce the same output given the same seed. You might consider capturing the random numbers in one phase, saving them for reuse; then in a second phase, replay the random numbers you've captured. That's the only way to eliminate the possibility of non-reproducibility -- and it solves your current problem too.
We have a very simple program (single-threaded) where we we do a bunch of random sample generation. For this we are using several calls of the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called. Now I want to set a seed in the beginning s.th. multiple runs of my program should yield the same result. For this I'm using an instance of the numpy class RandomState. While this is the case in the beginning, at some time the results become different and this is why I'm wondering. When I am doing everything correctly, having no concurrency and thereby a linear call of the functions AND no other random number generator involded, why does it not work?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 1,368
A_Id: 16,296,438 | is_accepted: true | Q_Score: 3 | Users Score: 5 | AnswerCount: 2 | Available Count: 2 | CreationDate: 2013-04-25T16:57:00.000 | other topic flags: Python Basics and Environment
Title:
Python numpy - Reproducibility of random numbers
Q_Id: 16,220,585 | Score: 1.2 | Tags: python,random,numpy,prng
Answer (the question it addresses follows):
Okay, David was right. The PRNGs in numpy work correctly. Throughout every minimal example I created, they worked as they are supposed to. My problem was a different one, but finally I solved it. Do never loop over a dictionary within a deterministic algorithm. It seems that Python orders the items arbitrarily when calling the .item() function for getting in iterator. So I am not that disappointed that this was this kind of error, because it is a useful reminder of what to think about when trying to do reproducible simulations.
We have a very simple program (single-threaded) where we we do a bunch of random sample generation. For this we are using several calls of the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called. Now I want to set a seed in the beginning s.th. multiple runs of my program should yield the same result. For this I'm using an instance of the numpy class RandomState. While this is the case in the beginning, at some time the results become different and this is why I'm wondering. When I am doing everything correctly, having no concurrency and thereby a linear call of the functions AND no other random number generator involded, why does it not work?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 1,368
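A minimal sketch of the two points made in the accepted answer above: seed a RandomState once, and avoid relying on dictionary iteration order inside a deterministic simulation; the parameter dict is a made-up example.

```python
import numpy as np

rng = np.random.RandomState(42)      # fixed seed -> reproducible stream of samples
print(rng.normal(size=3))
print(rng.random_sample(2))

params = {"b": 2.0, "a": 1.0}        # hypothetical simulation parameters
for key in sorted(params):           # fixed iteration order, independent of dict internals
    rng.normal(loc=params[key])      # every run consumes random numbers in the same order
```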
A_Id: 16,231,323 | is_accepted: false | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | Available Count: 2 | CreationDate: 2013-04-26T07:29:00.000 | other topic flags: none
Title:
train nltk classifier for just one label
Q_Id: 16,230,984 | Score: 0 | Tags: python,machine-learning,nlp,classification,nltk
Answer (the question it addresses follows):
I see two questions How to train the system? Can the system consist of "sci-fi" and "others"? The answer to 2 is yes. Having a 80% confidence threshold idea also makes sense, as long as you see with your data, features and algorithm that 80% is a good threshold. (If not, you may want to consider lowering it if not all sci-fi movies are being classified as sci-fi, or lowering it, if too many non-sci-fi movies are being categorized as sci-fi.) The answer to 1 depends on the data you have, the features you can extract, etc. Jared's approach seems reasonable. Like Jared, I'd also to emphasize the importance of enough and representative data.
I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and i am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how to train if you have only one label. Say I have a bunch of movie plot outlines, and I am only interested in fishing out movies from the sci-fi genre. Can I train a classifier to only recognize sci-fi plots, en say f.i. if classification confidence is > 80%, then put it in the sci-fi group, otherwise, just ignore it. Hope somebody can clarify, thank you,
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 441
A_Id: 16,231,216 | is_accepted: true | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | Available Count: 2 | CreationDate: 2013-04-26T07:29:00.000 | other topic flags: none
Title:
train nltk classifier for just one label
Q_Id: 16,230,984 | Score: 1.2 | Tags: python,machine-learning,nlp,classification,nltk
Answer (the question it addresses follows):
You can simply train a binary classifier to distinguish between sci-fi and not sci-fi So train on the movie plots that are labeled as sci-fi and also on a selection of all other genres. It might be a good idea to have a representative sample of the same size for the other genres such that not all are of the romantic comedy genre, for instance.
I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and i am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how to train if you have only one label. Say I have a bunch of movie plot outlines, and I am only interested in fishing out movies from the sci-fi genre. Can I train a classifier to only recognize sci-fi plots, en say f.i. if classification confidence is > 80%, then put it in the sci-fi group, otherwise, just ignore it. Hope somebody can clarify, thank you,
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 441
A_Id: 16,236,700 | is_accepted: true | Q_Score: 0 | Users Score: 4 | AnswerCount: 3 | Available Count: 1 | CreationDate: 2013-04-26T12:37:00.000 | other topic flags: none
Title:
list instead of separated arguments in python
Q_Id: 16,236,652 | Score: 1.2 | Tags: python
Answer (the question it addresses follows):
Well, to convert your example, you would use np.random.normal(x, *P). However, np.random.normal(x,a,b,c,...,z) wouldn't actually work. Maybe you meant another function?
I am trying to use function np.random.curve_fit(x,a,b,c,...,z) with a big but fixed number of fitting parameters. Is it possible to use tuples or lists here for shortness, like np.random.curve_fit(x,P), where P=(a,b,c,...,z)?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 77
A_Id: 54,499,731 | is_accepted: false | Q_Score: 24 | Users Score: 0 | AnswerCount: 3 | Available Count: 1 | CreationDate: 2013-04-26T22:19:00.000 | other topic flags: none
Title:
Clustering words based on Distance Matrix
Q_Id: 16,246,066 | Score: 0 | Tags: python,cluster-computing,scikit-learn,hierarchical-clustering
Answer (the question it addresses follows):
Recommend to take a look at agglomerative clustering.
My objective is to cluster words based on how similar they are with respect to a corpus of text documents. I have computed Jaccard Similarity between every pair of words. In other words, I have a sparse distance matrix available with me. Can anyone point me to any clustering algorithm (and possibly its library in Python) which takes distance matrix as input ? I also do not know the number of clusters beforehand. I only want to cluster these words and obtain which words are clustered together.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 28,598
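A sketch of agglomerative clustering driven directly by a precomputed distance matrix, which is one way to act on the recommendation in the answer above; here it uses SciPy's hierarchical clustering rather than a scikit-learn estimator, and the 4x4 matrix is a made-up stand-in for the Jaccard distances.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# hypothetical symmetric distance matrix for 4 words
D = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.85, 0.9],
              [0.9, 0.85, 0.0, 0.1],
              [0.8, 0.9, 0.1, 0.0]])

Z = linkage(squareform(D), method="average")        # condensed distances in, linkage matrix out
labels = fcluster(Z, t=0.5, criterion="distance")   # cut the dendrogram at distance 0.5
print(labels)                                       # cluster id per word, no k needed up front
```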
A_Id: 16,273,799 | is_accepted: false | Q_Score: 1 | Users Score: 1 | AnswerCount: 4 | Available Count: 1 | CreationDate: 2013-04-29T07:30:00.000 | other topic flags: Python Basics and Environment
Title:
Solving a linear equation in one variable
Q_Id: 16,273,351 | Score: 0.049958 | Tags: c++,python,algorithm,linear-algebra
Answer (the question it addresses follows):
The first thing is to parse the string, to identify the various tokens (numbers, variables and operators), so that an expression tree can be formed by giving operator proper precedences. Regular expressions can help, but that's not the only method (grammar parsers like boost::spirit are good too, and you can even run your own: its all a "find and recourse"). The tree can then be manipulated reducing the nodes executing those operation that deals with constants and by grouping variables related operations, executing them accordingly. This goes on recursively until you remain with a variable related node and a constant node. At the point the solution is calculated trivially. They are basically the same principles that leads to the production of an interpreter or a compiler.
What would be the most efficient algorithm to solve a linear equation in one variable given as a string input to a function? For example, for input string: "x + 9 – 2 - 4 + x = – x + 5 – 1 + 3 – x" The output should be 1. I am considering using a stack and pushing each string token onto it as I encounter spaces in the string. If the input was in polish notation then it would have been easier to pop numbers off the stack to get to a result, but I am not sure what approach to take here. It is an interview question.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 3,057
A_Id: 31,707,656 | is_accepted: false | Q_Score: 26 | Users Score: 1 | AnswerCount: 10 | Available Count: 2 | CreationDate: 2013-04-29T21:43:00.000 | other topic flags: Python Basics and Environment
Title:
Generate random number between 0.1 and 1.0. Python
Q_Id: 16,288,749 | Score: 0.019997 | Tags: python,random,floating-point
Answer (the question it addresses follows):
Try random.randint(1, 10)/100.0
I'm trying to generate a random number between 0.1 and 1.0. We can't use rand.randint because it returns integers. We have also tried random.uniform(0.1,1.0), but it returns a value >= 0.1 and < 1.0, we can't use this, because our search includes also 1.0. Does somebody else have an idea for this problem?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 37,037
A_Id: 16,289,924 | is_accepted: false | Q_Score: 26 | Users Score: 1 | AnswerCount: 10 | Available Count: 2 | CreationDate: 2013-04-29T21:43:00.000 | other topic flags: Python Basics and Environment
Title:
Generate random number between 0.1 and 1.0. Python
Q_Id: 16,288,749 | Score: 0.019997 | Tags: python,random,floating-point
Answer (the question it addresses follows):
The standard way would be random.random() * 0.9 + 0.1 (random.uniform() internally does just this). This will return numbers between 0.1 and 1.0 without the upper border. But wait! 0.1 (aka ¹/₁₀) has no clear binary representation (as ⅓ in decimal)! So You won't get a true 0.1 anyway, simply because the computer cannot represent it internally. Sorry ;-)
I'm trying to generate a random number between 0.1 and 1.0. We can't use rand.randint because it returns integers. We have also tried random.uniform(0.1,1.0), but it returns a value >= 0.1 and < 1.0, we can't use this, because our search includes also 1.0. Does somebody else have an idea for this problem?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 37,037
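A small sketch of two ways to include the upper endpoint, in the spirit of the two answers above: a discrete grid where 0.1 and 1.0 are both reachable, and a continuous variant that flips the open end of random() to the bottom.

```python
import random

value = random.randint(1, 10) / 10.0       # one of 0.1, 0.2, ..., 1.0 (both endpoints possible)

value2 = 1.0 - random.random() * 0.9       # continuous in (0.1, 1.0], since random() is in [0.0, 1.0)
```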
A_Id: 16,319,018 | is_accepted: false | Q_Score: 2 | Users Score: -1 | AnswerCount: 3 | Available Count: 1 | CreationDate: 2013-05-01T11:37:00.000 | other topic flags: none
Title:
Sample integers from truncated geometric distribution
Q_Id: 16,317,420 | Score: -0.066568 | Tags: python,math
Answer (the question it addresses follows):
The simple answer is: pick a random number from geometric distribution and return mod n. Eg: random.geometric(p)%n P(x) = p(1-p)^x+ p(1-p)^(x+n) + p(1-p)^(x+2n) .... = p(1-p)^x *(1+(1-p)^n +(1-p)^(2n) ... ) Note that second part is a constant for a given p and n. The first part is geometric.
What is a good way to sample integers in the range {0,...,n-1} according to (a discrete version of) the exponential distribution? random.expovariate(lambd) returns a real number from 0 to positive infinity. Update. Changed title to make it more accurate.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 2,007
A_Id: 16,320,580 | is_accepted: false | Q_Score: 10 | Users Score: 0 | AnswerCount: 3 | Available Count: 2 | CreationDate: 2013-05-01T14:49:00.000 | other topic flags: none
Title:
Quantifying randomness
Q_Id: 16,320,412 | Score: 0 | Tags: python,random
Answer (the question it addresses follows):
You can use some mapping to convert strings to numeric and then apply standard tests like Diehard and TestU01. Note that long sequences of samples are needed (typically few MB files will do)
I've come up with 2 methods to generate relatively short random strings- one is much faster and simpler and the other much slower but I think more random. Is there a not-super-complicated method or way to measure how random the data from each method might be? I've tried compressing the output strings (via zlib) figuring the more truly random the data, the less it will compress but that hasn't proved much.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 740
A_Id: 16,328,721 | is_accepted: false | Q_Score: 10 | Users Score: 0 | AnswerCount: 3 | Available Count: 2 | CreationDate: 2013-05-01T14:49:00.000 | other topic flags: none
Title:
Quantifying randomness
Q_Id: 16,320,412 | Score: 0 | Tags: python,random
Answer (the question it addresses follows):
An outcome is considered random if it can't be predicted ahead of time with certainty. If it can be predicted with certainty it is considered deterministic. This is a binary categorization, outcomes either are deterministic or random, there aren't degrees of randomness. There are, however, degrees of predictability. One measure of predictability is entropy, as mentioned by EMS. Consider two games. You don't know on any given play whether you're going to win or lose. In game 1, the probability of winning is 1/2, i.e., you win about half the time in the long run. In game 2, the odds of winning are 1/100. Both games are considered random, because the outcome isn't a dead certainty. Game 1 has greater entropy than game 2, because the outcome is less predictable - while there's a chance of winning, you're pretty sure you're going to lose on any given trial. The amount of compression that can be achieved (by a good compression algorithm) for a sequence of values is related to the entropy of the sequence. English has pretty low entropy (lots of redundant info both in the relative frequency of letters and the sequences of words that occur as groups), and hence tends to compress pretty well.
I've come up with 2 methods to generate relatively short random strings- one is much faster and simpler and the other much slower but I think more random. Is there a not-super-complicated method or way to measure how random the data from each method might be? I've tried compressing the output strings (via zlib) figuring the more truly random the data, the less it will compress but that hasn't proved much.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 740
A_Id: 16,332,805 | is_accepted: false | Q_Score: 6 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-05-01T21:18:00.000 | other topic flags: none
Title:
How many features can scikit-learn handle?
Q_Id: 16,326,699 | Score: 0.099668 | Tags: python,numpy,machine-learning,scipy,scikit-learn
Answer (the question it addresses follows):
Some linear model (Regression, SGD, Bayes) will probably be your best bet if you need to train your model frequently. Although before you go running any models you could try the following 1) Feature reduction. Are there features in your data that could easily be removed? For example if your data is text or ratings based there are lots known options available. 2) Learning curve analysis. Maybe you only need a small subset of your data to train a model, and after that you are only fitting to your data or gaining tiny increases in accuracy. Both approaches could allow you to greatly reduce the training data required.
I have a csv file of [66k, 56k] size (rows, columns). Its a sparse matrix. I know that numpy can handle that size a matrix. I would like to know based on everyone's experience, how many features scikit-learn algorithms can handle comfortably?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 2,840
A_Id: 17,346,232 | is_accepted: true | Q_Score: 7 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-05-03T17:13:00.000 | other topic flags: none
Title:
Render a mayavi scene with a large pipeline faster
Q_Id: 16,364,311 | Score: 1.2 | Tags: python,mayavi
Answer (the question it addresses follows):
The general principle is that vtk objects have a lot of overhead, and so you for rendering performance you want to pack as many things into one object as possible. When you call mlab convenience functions like points3d it creates a new vtk object to handle that data. Thus iterating and creating thousands of single points as vtk objects is a very bad idea. The trick of temporarily disabling the rendering as in that other question -- the "right" way to do it is to have one VTK object that holds all of the different points. To set the different points as different colors, give scalar values to the vtk object. x,y,z=np.random.random((3,100)) some_data=mlab.points3d(x,y,z,colormap='cool') some_data.mlab_source.dataset.point_data.scalars=np.random.random((100,)) This only works if you can adequately represent the color values you need in a colormap. This is easy if you need a small finite number of colors or a small finite number of simple colormaps, but very difficult if you need completely arbitrary colors.
I am using mayavi.mlab to display 3D data extracted from images. The data is as follows: 3D camera parameters as 3 lines in the x, y, x direction around the camera center, usually for about 20 cameras using mlab.plot3d(). 3D coloured points in space for about 4000 points using mlab.points3d(). For (1) I have a function to draw each line for each camera seperately. If I am correct, all these lines are added to the mayavi pipeline for the current scene. Upon mlab.show() the scene takes about 10 seconds to render all these lines. For (2) I couldn't find a way to plot all the points at once with each point a different color, so at the moment I iterate with mlab.points3d(x,y,z, color = color). I have newer waited for this routine to finish as it takes to long. If I plot all the points at once with the same color, it takes about 2 seconds. I already tried to start my script with fig.scene.disable_render = True and resetting fig.scene.disable_render = False before displaying the scene with mlab.show(). How can I display my data with mayavi within a reasonable waiting time?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 1,606
A_Id: 16,382,307 | is_accepted: true | Q_Score: 0 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-05-05T06:30:00.000 | other topic flags: Python Basics and Environment
Title:
How do I integrate a multivariable function?
Q_Id: 16,382,019 | Score: 1.2 | Tags: python,algorithm,math,integral
Answer (the question it addresses follows):
It depends on your context and the performance criteria. I assume that you are looking for a numerical approximation (as opposed to a algebraic integration) A Riemann Sum is the standard 'educational' way of numerically calculating integrals but several computationally more efficient algorithms exist.
I have a function f(x_1, x_2, ..., x_n) where n >= 1 that I would like to integrate. What algorithm should I use to provide a decently stable / accurate solution? I would like to program it in Python so any open source examples are more than welcome! (I realize that I should use a library but this is just a learning exercise.)
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 270
A_Id: 61,941,548 | is_accepted: false | Q_Score: 249 | Users Score: 5 | AnswerCount: 8 | Available Count: 2 | CreationDate: 2013-05-06T10:35:00.000 | other topic flags: none
Title:
Delete the first three rows of a dataframe in pandas
Q_Id: 16,396,903 | Score: 0.124353 | Tags: python,pandas
Answer (the question it addresses follows):
inp0= pd.read_csv("bank_marketing_updated_v1.csv",skiprows=2) or if you want to do in existing dataframe simply do following command
I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 412,922
A_Id: 52,984,033 | is_accepted: false | Q_Score: 249 | Users Score: 9 | AnswerCount: 8 | Available Count: 2 | CreationDate: 2013-05-06T10:35:00.000 | other topic flags: none
Title:
Delete the first three rows of a dataframe in pandas
Q_Id: 16,396,903 | Score: 1 | Tags: python,pandas
Answer (the question it addresses follows):
A simple way is to use tail(-n) to remove the first n rows df=df.tail(-3)
I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 412,922
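A minimal sketch of a few equivalent ways to drop the first n rows, including the tail(-n) form from the answer above; the toy frame is only for illustration.

```python
import pandas as pd

df = pd.DataFrame({"x": range(6)})

df1 = df.iloc[3:]              # positional slicing: everything after the first three rows
df2 = df.tail(-3)              # same result, as in the answer above
df3 = df.drop(df.index[:3])    # by label, via the index
```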
A_Id: 16,444,230 | is_accepted: false | Q_Score: 6 | Users Score: 1 | AnswerCount: 3 | Available Count: 2 | CreationDate: 2013-05-07T18:54:00.000 | other topic flags: none
Title:
Data structure options for efficiently storing sets of integer pairs on disk?
Q_Id: 16,426,469 | Score: 0.066568 | Tags: python,c,data-structures,integer
Answer (the question it addresses follows):
How about using one hash table or B-tree per bucket? On-disk hashtables are standard. Maybe the BerkeleyDB libraries (availabe in stock python) will work for you; but be advised that they since they come with transactions they can be slow, and may require some tuning. There are a number of choices: gdbm, tdb that you should all give a try. Just make sure you check out the API and initialize them with appropriate size. Some will not resize automatically, and if you feed them too much data their performance just drops a lot. Anyway, you may want to use something even more low-level, without transactions, if you have a lot of changes. A pair of ints is a long - and most databases should accept a long as a key; in fact many will accept arbitrary byte sequences as keys.
I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of "similar") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the specific similarity is for purposes of my analysis, just what bucket it's in. For example, if documents 15378 and 3278 are 52% similar, the ordered pair (3278, 15378) gets stored in the [0.5,0.6) bucket. Documents sometimes get either added or removed from the corpus after initial analysis, so corresponding pairs get added to or removed from the buckets as needed. I'm looking at strategies for storing these lists of ID pairs. We found a SQL database (where most of our other data for this project lives) to be too slow and too large disk-space-wise for our purposes, so at the moment we store each bucket as a compressed list of integers on disk (originally zlib-compressed, but now using lz4 instead for speed). Things I like about this: Reading and writing are both quite fast After-the-fact additions to the corpus are fairly straightforward to add (a bit less so for lz4 than for zlib because lz4 doesn't have a framing mechanism built in, but doable) At both write and read time, data can be streamed so it doesn't need to be held in memory all at once, which would be prohibitive given the size of our corpora Things that kind of suck: Deletes are a huge pain, and basically involve streaming through all the buckets and writing out new ones that omit any pairs that contain the ID of a document that's been deleted I suspect I could still do better both in terms of speed and compactness with a more special-purpose data structure and/or compression strategy So: what kinds of data structures should I be looking at? I suspect that the right answer is some kind of exotic succinct data structure, but this isn't a space I know very well. Also, if it matters: all of the document IDs are unsigned 32-bit ints, and the current code that handles this data is written in C, as Python extensions, so that's probably the general technology family we'll stick with if possible.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 704
A_Id: 16,444,440 | is_accepted: false | Q_Score: 6 | Users Score: 1 | AnswerCount: 3 | Available Count: 2 | CreationDate: 2013-05-07T18:54:00.000 | other topic flags: none
Title:
Data structure options for efficiently storing sets of integer pairs on disk?
Q_Id: 16,426,469 | Score: 0.066568 | Tags: python,c,data-structures,integer
Answer (the question it addresses follows):
Why not just store a table containing stuff that was deleted since the last re-write? This table could be the same structure as your main bucket, maybe with a Bloom filter for quick membership checks. You can re-write the main bucket data without the deleted items either when you were going to re-write it anyway for some other modification, or when the ratio of deleted items:bucket size exceeds some threshold. This scheme could work either by storing each deleted pair alongside each bucket, or by storing a single table for all deleted documents: I'm not sure which is a better fit for your requirements. Keeping a single table, it's hard to know when you can remove an item unless you know how many buckets it affects, without just re-writing all buckets whenever the deletion table gets too large. This could work, but it's a bit stop-the-world. You also have to do two checks for each pair you stream in (ie, for (3278, 15378), you'd check whether either 3278 or 15378 has been deleted, instead of just checking whether pair (3278, 15378) has been deleted. Conversely, the per-bucket table of each deleted pair would take longer to build, but be slightly faster to check, and easier to collapse when re-writing the bucket.
I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of "similar") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the specific similarity is for purposes of my analysis, just what bucket it's in. For example, if documents 15378 and 3278 are 52% similar, the ordered pair (3278, 15378) gets stored in the [0.5,0.6) bucket. Documents sometimes get either added or removed from the corpus after initial analysis, so corresponding pairs get added to or removed from the buckets as needed. I'm looking at strategies for storing these lists of ID pairs. We found a SQL database (where most of our other data for this project lives) to be too slow and too large disk-space-wise for our purposes, so at the moment we store each bucket as a compressed list of integers on disk (originally zlib-compressed, but now using lz4 instead for speed). Things I like about this: Reading and writing are both quite fast After-the-fact additions to the corpus are fairly straightforward to add (a bit less so for lz4 than for zlib because lz4 doesn't have a framing mechanism built in, but doable) At both write and read time, data can be streamed so it doesn't need to be held in memory all at once, which would be prohibitive given the size of our corpora Things that kind of suck: Deletes are a huge pain, and basically involve streaming through all the buckets and writing out new ones that omit any pairs that contain the ID of a document that's been deleted I suspect I could still do better both in terms of speed and compactness with a more special-purpose data structure and/or compression strategy So: what kinds of data structures should I be looking at? I suspect that the right answer is some kind of exotic succinct data structure, but this isn't a space I know very well. Also, if it matters: all of the document IDs are unsigned 32-bit ints, and the current code that handles this data is written in C, as Python extensions, so that's probably the general technology family we'll stick with if possible.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 704
A_Id: 16,495,361 | is_accepted: false | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-05-08T08:45:00.000 | other topic flags: System Administration and DevOps
Title:
openCV install Error using brew
Q_Id: 16,436,260 | Score: 0 | Tags: python,macos,opencv
Answer (the question it addresses follows):
Try using macports it builds opencv including python bindings without any issue. I have used this for osx 10.8.
i am trying to install opencv on my MacbookPro OSX 10.6.8 (snow leopard) and Xcode version is 3.2.6 and result of "which python" is Hong-Jun-Choiui-MacBook-Pro:~ teemo$ which python /Library/Frameworks/Python.framework/Versions/2.7/bin/python and i am suffering from this below.. Linking CXX shared library ../../lib/libopencv_contrib.dylib [ 57%] Built target opencv_contrib make: * [all] Error 2 Full log is here link by "brew install -v opencv" 54 248 246 33:7700/log.txt any advice for me? i just need opencv lib for python.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 447
A_Id: 16,446,014 | is_accepted: false | Q_Score: 3 | Users Score: 0 | AnswerCount: 3 | Available Count: 1 | CreationDate: 2013-05-08T13:30:00.000 | other topic flags: none
Title:
Machine Learning in Python - Get the best possible feature-combination for a label
Q_Id: 16,442,055 | Score: 0 | Tags: python,machine-learning,nltk
Answer (the question it addresses follows):
You could compute the representativeness of each feature to separate the classes via feature weighting. The most common method for feature selection (and therefore feature weighting) in Text Classification is chi^2. This measure will tell you which features are better. Based on this information you can analyse the specific values that are best for every case. I hope this helps. Regards,
My Question is as follows: I know a little bit about ML in Python (using NLTK), and it works ok so far. I can get predictions given certain features. But I want to know, is there a way, to display the best features to achieve a label? I mean the direct opposite of what I've been doing so far (put in all circumstances, and get a label for that) I try to make my question clear via an example: Let's say I have a database with Soccer games. The Labels are e.g. 'Win', 'Loss', 'Draw'. The Features are e.g. 'Windspeed', 'Rain or not', 'Daytime', 'Fouls committed' etc. Now I want to know: Under which circumstances will a Team achieve a Win, Loss or Draw? Basically I want to get back something like this: Best conditions for Win: Windspeed=0, No Rain, Afternoon, Fouls=0 etc Best conditions for Loss: ... Is there a way to achieve this?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 879
A_Id: 16,950,633 | is_accepted: false | Q_Score: 6 | Users Score: -3 | AnswerCount: 4 | Available Count: 1 | CreationDate: 2013-05-08T19:46:00.000 | other topic flags: none
Title:
Randomized stratified k-fold cross-validation in scikit-learn?
Q_Id: 16,448,988 | Score: -0.148885 | Tags: python,machine-learning,scikit-learn,cross-validation
Answer (the question it addresses follows):
As far as I know, this is actually implemented in scikit-learn. """ Stratified ShuffleSplit cross validation iterator Provides train/test indices to split data in train test sets. This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made by preserving the percentage of samples for each class. Note: like the ShuffleSplit strategy, stratified random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets. """
Is there any built-in way to get scikit-learn to perform shuffled stratified k-fold cross-validation? This is one of the most common CV methods, and I am surprised I couldn't find a built-in method to do this. I saw that cross_validation.KFold() has a shuffling flag, but it is not stratified. Unfortunately cross_validation.StratifiedKFold() does not have such an option, and cross_validation.StratifiedShuffleSplit() does not produce disjoint folds. Am I missing something? Is this planned? (obviously I can implement this by myself)
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 9,847
A_Id: 16,471,982 | is_accepted: false | Q_Score: 10 | Users Score: 1 | AnswerCount: 6 | Available Count: 1 | CreationDate: 2013-05-09T21:51:00.000 | other topic flags: Python Basics and Environment
Title:
Generating numbers with Gaussian function in a range using python
Q_Id: 16,471,763 | Score: 0.033321 | Tags: python,gaussian
Answer (the question it addresses follows):
If you have a small range of integers, you can create a list with a gaussian distribution of the numbers within that range and then make a random choice from it.
I want to use the gaussian function in python to generate some numbers between a specific range giving the mean and variance so lets say I have a range between 0 and 10 and I want my mean to be 3 and variance to be 4 mean = 3, variance = 4 how can I do that ?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 36,597
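For the question above, a different technique than the list-sampling idea in the answer is a truncated normal distribution; this sketch uses scipy.stats.truncnorm under the stated mean 3, variance 4, and range [0, 10], and is only one possible interpretation of "gaussian numbers in a range".

```python
from scipy.stats import truncnorm

mean, std = 3.0, 2.0          # variance 4 -> standard deviation 2
low, high = 0.0, 10.0

# truncnorm takes its bounds in standard-deviation units relative to the mean
a, b = (low - mean) / std, (high - mean) / std
samples = truncnorm.rvs(a, b, loc=mean, scale=std, size=1000)
```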
A_Id: 16,524,872 | is_accepted: true | Q_Score: 4 | Users Score: 5 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-05-11T22:51:00.000 | other topic flags: none
Title:
Residuals of Random Forest Regression (Python)
Q_Id: 16,502,445 | Score: 1.2 | Tags: python,python-2.7,numpy,scipy,scikit-learn
Answer (the question it addresses follows):
There is no function for that, as we like to keep the interface very simple. You can just do y - rf.predict(X)
When using RandomForestRegressor from Sklearn, how do you get the residuals of the regression? I would like to plot out these residuals to check the linearity.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 2,682
A_Id: 16,521,114 | is_accepted: false | Q_Score: 6 | Users Score: 4 | AnswerCount: 3 | Available Count: 1 | CreationDate: 2013-05-12T17:36:00.000 | other topic flags: none
Title:
Python / Scipy - implementing optimize.curve_fit 's sigma into optimize.leastsq
Q_Id: 16,510,227 | Score: 0.26052 | Tags: python,scipy,curve-fitting,least-squares
Answer (the question it addresses follows):
Assuming your data are in arrays x, y with yerr, and the model is f(p, x), just define the error function to be minimized as (y-f(p,x))/yerr.
I am fitting data points using a logistic model. As I sometimes have data with a ydata error, I first used curve_fit and its sigma argument to include my individual standard deviations in the fit. Now I switched to leastsq, because I needed also some Goodness of Fit estimation that curve_fit could not provide. Everything works well, but now I miss the possibility to weigh the least sqares as "sigma" does with curve_fit. Has someone some code example as to how I could weight the least squares also in leastsq? Thanks, Woodpicker
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 4,123
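A sketch of the weighting described in the answer above: give leastsq a residual function that divides by the per-point uncertainty. The logistic-style model, the synthetic data, and the starting guess are all hypothetical.

```python
import numpy as np
from scipy.optimize import leastsq

def model(p, x):                      # hypothetical logistic-style model
    a, b, c = p
    return a / (1.0 + np.exp(-b * (x - c)))

def residuals(p, x, y, yerr):
    return (y - model(p, x)) / yerr   # weight each residual by its standard deviation

x = np.linspace(0, 10, 50)
y = model([1.0, 1.5, 5.0], x) + np.random.normal(0, 0.02, x.size)
yerr = np.full_like(y, 0.02)

p0 = [1.0, 1.0, 4.0]                  # initial guess
p_best, cov, info, msg, ier = leastsq(residuals, p0, args=(x, y, yerr), full_output=True)
```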
A_Id: 16,583,343 | is_accepted: false | Q_Score: 0 | Users Score: 5 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-05-16T05:44:00.000 | other topic flags: none
Title:
Stochastic Gradient Boosting giving unpredictable results
Q_Id: 16,579,775 | Score: 0.462117 | Tags: python,machine-learning,scikit-learn,scikits
Answer (the question it addresses follows):
First, a couple of remarks: the name of the algorithm is Gradient Boosting (Regression Trees or Machines) and is not directly related to Stochastic Gradient Descent you should never evaluate the accuracy of a machine learning algorithm on you training data, otherwise you won't be able to detect the over-fitting of the model. Use: sklearn.cross_validation.train_test_split to split X and y into a X_train, y_train for fitting and X_test, y_test for scoring instead. Now to answer your question, GBRT models are indeed non deterministic models. To get deterministic / reproducible runs, you can pass random_state=0 to seed the pseudo random number generator (or alternatively pass max_features=None but this is not recommended). The fact that you observe such big variations in your training error is weird though. Maybe your output signal if very correlated with a very small number of informative features and most other features are just noise? You could try to fit a RandomForestClassifier model to your data and use the computed feature_importance_ array to discard noisy features and help stabilize your GBRT models.
I'm using the Scikit module for Python to implement Stochastic Gradient Boosting. My data set has 2700 instances and 1700 features (x) and contains binary data. My output vector is 'y', and contains 0 or 1 (binary classification). My code is, gb = GradientBoostingClassifier(n_estimators=1000,learn_rate=1,subsample=0.5) gb.fit(x,y) print gb.score(x,y) Once I ran it, and got an accuracy of 1.0 (100%), and sometimes I get an accuracy of around 0.46 (46%). Any idea why there is such a huge gap in its performance?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 1,888
A_Id: 42,148,893 | is_accepted: false | Q_Score: 15 | Users Score: 5 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-05-16T19:48:00.000 | other topic flags: none
Title:
Pandas Convert 'NA' to NaN
Q_Id: 16,596,188 | Score: 0.462117 | Tags: python,pandas,bioinformatics
Answer (the question it addresses follows):
Just ran into this issue--I specified a str converter for the column instead, so I could keep na elsewhere: pd.read_csv(... , converters={ "file name": str, "company name": str})
I just picked up Pandas to do with some data analysis work in my biology research. Turns out one of the proteins I'm analyzing is called 'NA'. I have a matrix with pairwise 'HA, M1, M2, NA, NP...' on the column headers, and the same as "row headers" (for the biologists who might read this, I'm working with influenza). When I import the data into Pandas directly from a CSV file, it reads the "row headers" as 'HA, M1, M2...' and then NA gets read as NaN. Is there any way to stop this? The column headers are fine - 'HA, M1, M2, NA, NP etc...'
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 11,645
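A small sketch of another way to keep the protein name 'NA' from being parsed as missing, by disabling pandas' default NA strings when reading the CSV; the inline CSV is a made-up stand-in for the real matrix file.

```python
import pandas as pd
from io import StringIO

csv = StringIO("protein,HA,M1,NA\nHA,0,1,2\nM1,1,0,3\nNA,2,3,0\n")  # hypothetical matrix

df = pd.read_csv(csv, index_col=0, keep_default_na=False, na_values=[""])
print(df.index.tolist())   # ['HA', 'M1', 'NA'] -- 'NA' survives as a string
```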
A_Id: 21,592,812 | is_accepted: false | Q_Score: 3 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-05-16T23:58:00.000 | other topic flags: none
Title:
Python 3.3 pandas, pip-3.3
Q_Id: 16,599,357 | Score: 0 | Tags: python,pandas,pip
Answer (the question it addresses follows):
Thanks, I just had the same issue with Angstrom Linux on the BeagleBone Black board and the easy_install downgrade solution solved it. One thing I did need to do, is after installing easy_install using opkg install python-setuptools I then had to go into the easy_install file (located in /usr/bin/easy_install) and change the top line from #!/usr/bin/python-native/python to #!/usr/bin/python this fixed easy_install so it would detect python on the BeagleBone and then I could run your solution.
So, I'm trying to install pandas for Python 3.3 and have been having a really hard time- between Python 2.7 and Python 3.3 and other factors. Some pertinent information: I am running Mac OSX Lion 10.7.5. I have both Python 2.7 and Python 3.3 installed, but for my programming purposes only use 3.3. This is where I'm at: I explicitly installed pip-3.3 and can now run that command to install things. I have XCode installed, and have also installed the command line tools (from 'Preferences'). I have looked through a number of pages through Google as well as through this site and haven't had any luck getting pandas to download/download and install. I have tried downloading the tarball, 'cd' into the downloaded file and running setup.py install, but to no avail. I have downloaded and installed EPD Free, and then added 'Library/Framework/Python.framework/Versions/Current/bin:${PATH} to .bash_profile - still doesn't work. I'm not sure where to go frome here...when I do pip-3.3 install pandas terminal relates that There was a problem confirming the ssl certificate: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:547)> and so nothing ends up getting downloaded or installed, for either pandas, or I also tried to the same for numpy as I thought that could be a problem, but the same error was returned.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 2,260
A_Id: 16,607,943 | is_accepted: false | Q_Score: 0 | Users Score: 0 | AnswerCount: 4 | Available Count: 1 | CreationDate: 2013-05-17T03:42:00.000 | other topic flags: none
Title:
How do I convert a 2D numpy array into a 1D numpy array of 1D numpy arrays?
Q_Id: 16,601,049 | Score: 0 | Tags: python,arrays,numpy,nested,vectorization
Answer (the question it addresses follows):
I think it makes little sense to use numpy arrays to do that, just think you're missing out on all the advantages of numpy.
In other words, each element of the outer array will be a row vector from the original 2D array.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 5,758
A_Id: 16,652,325 | is_accepted: false | Q_Score: 2 | Users Score: 1 | AnswerCount: 2 | Available Count: 2 | CreationDate: 2013-05-18T14:20:00.000 | other topic flags: none
Title:
Interpolaton algorithm to correct a slight clock drift
Q_Id: 16,625,298 | Score: 0.099668 | Tags: python,scipy,numeric,numerical-methods
Answer (the question it addresses follows):
Before you can ask the programming question, it seems to me you need to investigate a more fundamental scientific one. Before you can start picking out particular equations to fit badfastclock to goodslowclock, you should investigate the nature of the drift. Let both clocks run a while, and look at their points together. Is badfastclock bad because it drifts linearly away from real time? If so, a simple quadratic equation should fit badfastclock to goodslowclock, just as a quadratic equation describes the linear acceleration of a object in gravity; i.e., if badfastclock is accelerating linearly away from real time, you can deterministically shift badfastclock toward real time. However, if you find that badfastclock is bad because it is jumping around, then smooth curves -- even complex smooth curves like splines -- won't fit. You must understand the data before trying to manipulate it.
I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the clock drift. I am looking to interpolate the sampled data to correct for this so that I 'fit' the high frequency data to the low-frequency. I need to do this 'real time' - with no more than the latency of a few low-frequency samples. I recognise that there is a wide range of interpolation algorithms - and, among those I've considered, a spline based approach looks most promising for this data. I'm working in Python - and have found the scipy.interpolate package - though I could see no obvious way to use it to 'stretch' n samples to correct a small timing error. Am I overlooking something? I am interested in pointers to either a suitable published algorithm, or - ideally - a Python library function to achieve this sort of transform. Is this supported by SciPy (or anything else)? UPDATE... I'm beginning to realise that what, at first, seemed a trivial problem isn't as straightforward as I first thought. I am no-longer convinced that naive use of splines will suffice. I've also realised that my problem can be better described without reference to 'clock drift'... like this: A single random variable is sampled at two different frequencies - one low and one high, with no common divisor - e.g. 5hz and 144hz. If we assume sample 0 is identical at both sample rates, sample 1 @5hz falls between samples 28 amd 29. I want to construct a new series - at 720hz, say - that fits all the known data points "as smoothly as possible". I had hoped to find an 'out of the box' solution.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 426
A_Id: 16,708,058 | is_accepted: false | Q_Score: 2 | Users Score: 0 | AnswerCount: 2 | Available Count: 2 | CreationDate: 2013-05-18T14:20:00.000 | other topic flags: none
Title:
Interpolaton algorithm to correct a slight clock drift
Q_Id: 16,625,298 | Score: 0 | Tags: python,scipy,numeric,numerical-methods
Answer (the question it addresses follows):
Bsed on your updated question, if the data is smooth with time, just place all the samples in a time trace, and interpolate on the sparse grid (time).
I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the clock drift. I am looking to interpolate the sampled data to correct for this so that I 'fit' the high frequency data to the low-frequency. I need to do this 'real time' - with no more than the latency of a few low-frequency samples. I recognise that there is a wide range of interpolation algorithms - and, among those I've considered, a spline based approach looks most promising for this data. I'm working in Python - and have found the scipy.interpolate package - though I could see no obvious way to use it to 'stretch' n samples to correct a small timing error. Am I overlooking something? I am interested in pointers to either a suitable published algorithm, or - ideally - a Python library function to achieve this sort of transform. Is this supported by SciPy (or anything else)? UPDATE... I'm beginning to realise that what, at first, seemed a trivial problem isn't as straightforward as I first thought. I am no-longer convinced that naive use of splines will suffice. I've also realised that my problem can be better described without reference to 'clock drift'... like this: A single random variable is sampled at two different frequencies - one low and one high, with no common divisor - e.g. 5hz and 144hz. If we assume sample 0 is identical at both sample rates, sample 1 @5hz falls between samples 28 amd 29. I want to construct a new series - at 720hz, say - that fits all the known data points "as smoothly as possible". I had hoped to find an 'out of the box' solution.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 426
A_Id: 18,478,963 | is_accepted: false | Q_Score: 2 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-05-18T22:19:00.000 | other topic flags: none
Title:
Can I link numpy with AMD's gpu accelerated blas library
Q_Id: 16,629,529 | Score: 0 | Tags: python,numpy,opencl,gpgpu
Answer (the question it addresses follows):
If memory servers, pyCuda at least, probably also pyOpenCL can work with numPy
I reconized numpy can link with blas, and I thought of why not using gpu accelerated blas library. Did anyone use to do so?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 2,411
A_Id: 16,657,039 | is_accepted: false | Q_Score: 0 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | CreationDate: 2013-05-20T19:25:00.000 | other topic flags: Python Basics and Environment
Title:
Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access?
Q_Id: 16,656,850 | Score: 0.099668 | Tags: python,arrays,2d,large-data
Answer (the question it addresses follows):
It would help to know more about your data, and what kind of access you need to provide. How fast is "fast enough" for you? Just to be clear, "7M" means 7,000,000 right? As a quick answer without any of that information, I have had positive experiences working with redis and tokyo tyrant for fast read access to large amounts of data, either hundreds of megabytes or gigabytes.
Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access? I want to write each element once and read many times. Thanks
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 390
A_Id: 46,957,388 | is_accepted: false | Q_Score: 32 | Users Score: 9 | AnswerCount: 4 | Available Count: 3 | CreationDate: 2013-05-21T03:47:00.000 | other topic flags: Python Basics and Environment
Title:
Difference between plt.close() and plt.clf()
Q_Id: 16,661,790 | Score: 1 | Tags: python,matplotlib
Answer (the question it addresses follows):
I think it is worth mentioning that plt.close() releases the memory, thus is preferred when generating and saving many figures in one run. Using plt.clf() in such case will produce a warning after 20 plots (even if they are not going to be shown by plt.show()): More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory.
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? The calculation I'm running takes a very long time and it would be very time consuming to test it out.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 49,996
A_Id: 44,976,331 | is_accepted: false | Q_Score: 32 | Users Score: 2 | AnswerCount: 4 | Available Count: 3 | CreationDate: 2013-05-21T03:47:00.000 | other topic flags: Python Basics and Environment
Title:
Difference between plt.close() and plt.clf()
Q_Id: 16,661,790 | Score: 0.099668 | Tags: python,matplotlib
Answer (the question it addresses follows):
plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots. plt.close() closes a window, which will be the current window, if not specified otherwise.
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? The calculation I'm running takes a very long time and it would be very time consuming to test it out.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 49,996
A_Id: 16,661,815 | is_accepted: true | Q_Score: 32 | Users Score: 41 | AnswerCount: 4 | Available Count: 3 | CreationDate: 2013-05-21T03:47:00.000 | other topic flags: Python Basics and Environment
Title:
Difference between plt.close() and plt.clf()
Q_Id: 16,661,790 | Score: 1.2 | Tags: python,matplotlib
Answer (the question it addresses follows):
plt.close() will close the figure window entirely, where plt.clf() will just clear the figure - you can still paint another plot onto it. It sounds like, for your needs, you should be preferring plt.clf(), or better yet keep a handle on the line objects themselves (they are returned in lists by plot calls) and use .set_data on those in subsequent iterations.
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? The calculation I'm running takes a very long time and it would be very time consuming to test it out.
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 49,996
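A minimal sketch of the save-in-a-loop pattern discussed in the answers above, closing each figure so old lines and memory do not accumulate; the random data and file names are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt

for i in range(50):
    fig, ax = plt.subplots()
    ax.plot(np.random.randn(100))
    fig.savefig(f"plot_{i:03d}.png")
    plt.close(fig)        # release the figure instead of only clearing it
```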
A_Id: 16,689,864 | is_accepted: true | Q_Score: 1 | Users Score: 5 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-05-22T10:41:00.000 | other topic flags: none
Title:
Alternative to Matlab cell data-structure in Python
Q_Id: 16,689,681 | Score: 1.2 | Tags: python,matlab,data-structures,cell
Answer (the question it addresses follows):
Have you considered a list of numpy.arrays?
I have a Matlab cell array, each of whose cells contains an N x M matrix. The value of M varies across cells. What would be an efficient way to represent this type of a structure in Python using numpy or any standard Python data-structure?
Web Development: 0 | Data Science and Machine Learning: 1 | ViewCount: 1,299
A_Id: 16,698,292 | is_accepted: true | Q_Score: 8 | Users Score: 7 | AnswerCount: 1 | Available Count: 1 | CreationDate: 2013-05-22T16:49:00.000 | other topic flags: none
Title:
Comparing computer vision libraries in python
16,697,391
1.2
python,opencv,image-processing,computer-vision,scikit-learn
I have worked mainly with OpenCV and also with scikit-image. I would say that OpenCV is more focused on computer vision (classification, feature detection and extraction, ...), but lately scikit-image has been improving rapidly. I found that some algorithms perform faster under OpenCV; however, in most cases I find working with scikit-image much easier, as OpenCV's documentation is quite cryptic. Since the OpenCV 2.x bindings work with numpy, as does scikit-image, I would consider using both libraries and taking the best from each of them. At least that is what I have done in my last project.
I want to decide about a Python computer vision library. I had used OpenCV in C++, and like it very much. However this time I need to develop my algorithm in Python. My short list has three libraries: 1- OpenCV (Python wrapper) 2- PIL (Python Image Processing Library) 3- scikit-image Would you please help me to compare these libraries? I use numpy, scipy, scikit-learn in the rest of my code. The performance and ease of use is an important factor, also, portability is an important factor for me. Thanks for your help
0
1
2,421
0
16,710,763
0
0
0
0
2
false
0
2013-05-23T09:35:00.000
0
4
0
Breadth First Search or Depth First Search?
16,710,374
0
python,algorithm,graph
When you want to find the shortest path you should use BFS and not DFS, because BFS explores the closest nodes first: when you reach your goal you know for sure that you used the shortest path and you can stop searching. DFS, by contrast, explores one branch at a time, so when you reach your goal you can't be sure that there isn't another, shorter path via another branch. So you should use BFS. If your graph has different weights on its edges, then you should use Dijkstra's algorithm, which is an adaptation of BFS for weighted graphs, but don't use it if you don't have weights. Some people may recommend the Floyd-Warshall algorithm, but it is a very bad idea for a graph this large. A short BFS sketch follows below.
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better? Thank you for your time
0
1
1,736
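A minimal BFS sketch along the lines of the first answer above, for an unweighted directed graph stored as an adjacency dict; the toy graph is an assumption.

from collections import deque

def shortest_path_length(graph, start, goal):
    # Breadth-first search: number of edges on a shortest path, or None.
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path_length(graph, "a", "d"))   # 2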
0
16,710,738
0
0
0
0
2
false
0
2013-05-23T09:35:00.000
0
4
0
Breadth First Search or Depth First Search?
16,710,374
0
python,algorithm,graph
If there are no weights on the edges of the graph, a simple breadth-first search can be done, where you visit nodes iteratively and check whether any newly reached node equals the destination node. If the edges have weights, Dijkstra's algorithm and the Bellman-Ford algorithm are the things you should be looking at, depending on the time and space complexity you are targeting.
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better? Thank you for your time
0
1
1,736
0
16,735,119
0
1
0
0
1
false
0
2013-05-24T09:41:00.000
0
3
0
What is a good way to merge non intersecting sets in a list to end up with denseley packed sets?
16,731,960
0
python,algorithm
I am not sure whether it will give an optimal solution, but would simply repeatedly merging the two largest non-overlapping sets not work?
I'm currently doing this by using a sort of a greedy algorithm by iterating over the sets from largest to smallest set. What would be a good algorithm to choose if i'm more concerned about finding the best solution rather than efficiency? Details: 1) Each set has a predefined range 2) My goal is to end up with a lot of densely packed sets rather than reducing the total number of sets. Example: Suppose the range is 8. The sets might be: [1,5,7], [2,6], [3,4,5], [1,2] , [4], [1] A good result would be [1,5,7,2,6,4], [3,4,5,1,2], [1]
0
1
124
0
18,938,559
0
0
0
0
1
false
2
2013-05-25T01:12:00.000
1
1
1
celery.chord gives IndexError: list index out of range error in celery version 3.0.19
16,745,487
0.197375
python,runtime-error,celery
This is an error that occurs when a chord header has no tasks in it. Celery tries to access the tasks in the header using self.tasks[0] which results in an index error since there are no tasks in the list.
Has anyone seen this error in celery (a distribute task worker in Python) before? Traceback (most recent call last): File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task R = retval = fun(*args, **kwargs) File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 415, in protected_call return self.run(*args, **kwargs) File "/home/mcapp/lister/lister/tasks/init.py", line 69, in update_playlist_db video_update(videos) File "/home/mcapp/lister/lister/tasks/init.py", line 55, in video_update chord(tasks)(update_complete.s(update_id=update_id, update_type='db', complete=True)) File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/canvas.py", line 464, in call _chord = self.type File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/canvas.py", line 461, in type return self._type or self.tasks[0].type.app.tasks['celery.chord'] IndexError: list index out of range This particular version of celery is 3.0.19, and happens when the celery chord feature is used. We don't think there is any error in our application, as 99% of the time our code works correctly, but under heavier loads this error would happen. We are trying to find out if this is an actual bug in our application or a celery bug, any help would be greatly appreciated.
0
1
859
0
16,752,052
0
0
0
0
2
false
2
2013-05-25T17:13:00.000
0
2
0
Split a weighted graph into n graphs to minimize the sum of weights in each graph
16,751,995
0
python,algorithm
Remove the k-1 edges with the highest weights.
Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.
0
1
161
0
16,752,049
0
0
0
0
2
false
2
2013-05-25T17:13:00.000
1
2
0
Split a weighted graph into n graphs to minimize the sum of weights in each graph
16,751,995
0.099668
python,algorithm
What you are searching for is called weighted max-cut.
Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.
0
1
161
0
16,766,609
0
1
0
0
2
false
0
2013-05-27T05:05:00.000
0
3
0
Reading certain lines of a string
16,766,587
0
python,list,printing,lines
sed -n 200,300p, perhaps, for 200 to 300 inclusive; adjust the numbers by ±1 if exclusive or whatever?
Hi I am trying to read a csv file into a double list which is not the problem atm. What I am trying to do is just print all the sL values between two lines. I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be written to print all values between these two lines that would be the same as typing sL out individually all the way from 200 to 300
0
1
56
0
16,766,726
0
1
0
0
2
false
0
2013-05-27T05:05:00.000
0
3
0
Reading certain lines of a string
16,766,587
0
python,list,printing,lines
If it is a specific column ranging between 200 and 300, use the filter() function: new_array = filter(lambda x: 200 <= x['column'] <= 300, sl)
Hi I am trying to read a csv file into a double list which is not the problem atm. What I am trying to do is just print all the sL values between two lines. I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be written to print all values between these two lines that would be the same as typing sL out individually all the way from 200 to 300
0
1
56
0
16,823,062
0
1
0
0
1
true
1
2013-05-29T18:04:00.000
1
2
0
Pylab after upgrading
16,820,903
1.2
python,matplotlib,python-3.3
I suspect you need to install python3-matplotlib, python3-numpy, etc. python-matplotlib is the Python 2 version.
Today I upgraded to Xubuntu 13.04 which comes with Python 3.3. Before that, I was working with Pyton 3.2, which was working perfectly fine. When running my script under Python 3.3, I get an ImportError: No module named 'pylab' in import pylab. Running in Python 3.2, which I reinstalled, throws ImportError: cannot import name multiarray in import numpy. Scipy, numpy and matplotlib are, recording to apt, on the newest version. I don't have much knowledge about this stuff. Do you have any recommendations on how to get my script to work again, preferably on Python 3.2? Thanks in advance, Katrin Edit: We solved the problem: Apparently, there where a lot of fragments / pieces of the packages in different paths, as I installed from apt, pip as well as manually. After deleting all packages and installing them only via pip, everything works fine. Thank you very much for the help!
0
1
312
0
60,213,042
0
1
0
0
2
false
133
2013-05-31T08:23:00.000
-1
10
0
How do I convert strings in a Pandas data frame to a 'date' data type?
16,852,911
-0.019997
python,date,pandas
Try converting one of the rows into a timestamp using the pd.to_datetime function, and then use .map to apply that formula to the entire column. A short sketch follows below.
I have a Pandas data frame, one of the column contains date strings in the format YYYY-MM-DD For e.g. '2013-10-28' At the moment the dtype of the column is object. How do I convert the column values to Pandas date format?
0
1
297,830
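A short sketch of the conversion discussed above, applying pd.to_datetime to the whole column directly (a simpler variant of the per-row .map idea); the column name and values are assumptions.

import pandas as pd

df = pd.DataFrame({"date": ["2013-10-28", "2013-10-29", "2013-10-30"]})
print(df["date"].dtype)                          # object (plain strings)

df["date"] = pd.to_datetime(df["date"], format="%Y-%m-%d")
print(df["date"].dtype)                          # datetime64[ns]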
0
33,577,649
0
1
0
0
2
false
133
2013-05-31T08:23:00.000
25
10
0
How do I convert strings in a Pandas data frame to a 'date' data type?
16,852,911
1
python,date,pandas
Now you can do df['column'].dt.date Note that for datetime objects, if you don't see the hour when they're all 00:00:00, that's not pandas. That's iPython notebook trying to make things look pretty.
I have a Pandas data frame, one of the column contains date strings in the format YYYY-MM-DD For e.g. '2013-10-28' At the moment the dtype of the column is object. How do I convert the column values to Pandas date format?
0
1
297,830
0
16,877,799
0
1
0
0
1
true
1
2013-06-01T21:29:00.000
2
1
0
Scikit-Learn n_jobs Multiprocessing Locked To One Core
16,877,448
1.2
python,scikit-learn
Found it, there's a note on the svm page that if you enable verbose settings multiprocessing may break. Disabling verbose fixed it
Trying to use gridsearch CV with multiple jobs via the n_jobs argument and I can see using htop that they're all launched and running but they all get stuck/assigned on the same core using 25% each (I'm using a 4 core machine). I'm on Ubuntu 12.10 and I'm running the latest master pulled from github. Anyone know how to fix this? Thanks!
0
1
391
0
16,893,364
0
0
0
0
1
true
0
2013-06-03T08:54:00.000
2
1
0
How to save 32/64 bit grayscale floats to TIFF with matplotlib?
16,893,102
1.2
python,matplotlib,tiff
Using matplotlib to export to TIFF will use PIL anyway. As far as I know, matplotlib has native support only for PNG, and uses PIL to convert to other file formats. So when you are using matplotlib to export to TIFF, you can use PIL immediately.
I'm trying to save some arrays as TIFF with matplotlib, but I'm getting 24 bit RGB files instead with plt.imsave(). Can I change that without resorting to the PIL? It's quite important for me to keep everything in pure matplotlib.
0
1
792
0
69,644,920
0
1
0
0
2
false
22
2013-06-04T05:00:00.000
2
4
0
Delete a group after pandas groupby
16,910,114
0.099668
python,pandas
It is quite simple: use the filter function with a lambda expression: df_filtered = df.groupby('name').filter(lambda x: (x.name == 'cond1' or ... other conditions)). Note that if you want to use more than one condition you need to put it in brackets (), and you will get back a DataFrame, not a GroupBy object. A short sketch follows below.
Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name.
0
1
27,093
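A small sketch of the filter approach above, dropping every row that belongs to one named group; the frame and the group name are made up.

import pandas as pd

df = pd.DataFrame({"name": ["a", "a", "b", "c"], "value": [1, 2, 3, 4]})

# Every row in a group shares the same "name", so the first row identifies it.
kept = df.groupby("name").filter(lambda g: g["name"].iloc[0] != "b")
print(kept)

# Equivalent boolean-mask version without groupby:
kept2 = df[df["name"] != "b"]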
0
58,868,271
0
1
0
0
2
false
22
2013-06-04T05:00:00.000
0
4
0
Delete a group after pandas groupby
16,910,114
0
python,pandas
Should be easy: df.drop(index='group_name',inplace=True)
Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name.
0
1
27,093
0
16,927,877
0
0
0
0
1
false
0
2013-06-04T19:19:00.000
0
2
0
Sorting using Map-Reduce - Possible approach
16,925,802
0
python,sorting,hadoop,bigdata,hadoop-streaming
I'll assume that you are looking for a total sort order without a secondary sort for all your rows. I should also mention that 'better' is never a good question, since there is typically a trade-off between time and space, and in Hadoop we tend to think in terms of space rather than time unless you use products that are optimized for time (Teradata has the capability of putting databases in memory for Hadoop use). Of the two possible approaches you mention, I think only one would work within the Hadoop infrastructure: number 2. Since Hadoop leverages many nodes to do one job, sorting becomes a little trickier to implement, and we typically want the 'shuffle and sort' phase of MapReduce to take care of the sorting, since distributed sorting is at the heart of the programming model. At the point when the 59th variable is generated, you would want to sample the distribution of that variable so that you can send it through the framework and then merge as you mentioned. Consider the case where the distribution of x contains 80% of your values: this might send 80% of your data to one reducer, which would do most of the work. This assumes, of course, that some keys will be grouped in the sort and shuffle phase, which would be the case unless you programmed them to be unique. It's up to the programmer to set up partitioners that evenly distribute the load by sampling the key distribution. If, on the other hand, we were to sort in memory, then we could accomplish the same thing during reduce, but there are inherent scalability issues, since the sort is only as good as the amount of memory available on the node currently running it, and it dies off quickly when it starts to use HDFS to look for the rest of the data that did not fit into memory. And if you ignored the sampling issue you will likely run out of memory unless all your key-value pairs are evenly distributed and you understand the memory capacity within your data.
I have a large dataset with 500 million rows and 58 variables. I need to sort the dataset using one of the 59th variable which is calculated using the other 58 variables. The variable happens to be a floating point number with four places after decimal. There are two possible approaches: The normal merge sort While calculating the 59th variables, i start sending variables in particular ranges to to particular nodes. Sort the ranges in those nodes and then combine them in the reducer once i have perfectly sorted data and now I also know where to merge what set of data; It basically becomes appending. Which is a better approach and why?
0
1
889
0
16,969,259
0
1
0
0
2
false
0
2013-06-06T18:12:00.000
1
2
0
I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions?
16,969,190
0.099668
python,matrix
Read the input till Ctrl+d, split by newline symbols first and then split the results by spaces.
Entering arbitrary sized matrices to manipulate them using different operations.
0
1
51
0
16,971,642
0
1
0
0
2
false
0
2013-06-06T18:12:00.000
1
2
0
I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions?
16,969,190
0.099668
python,matrix
Think about who is using this programme, and how, then develop an interface which meets those needs.
Entering arbitrary sized matrices to manipulate them using different operations.
0
1
51
0
16,982,178
0
1
0
0
1
true
0
2013-06-07T10:15:00.000
2
1
0
Scikit-Learn windows Installation error : python 2.7 required which was not found in the registry
16,981,708
1.2
python,python-2.7,scikit-learn,enthought,epd-python
Enthought Canopy 1.0.1 does not register the user's Python installation as the main one for the system. This has been fixed and will work in the upcoming release.
I have installed Enthought Canopy 32 - bit which comes with python 2.7 32 bit . And I downloaded windows installer scikit-learn-0.13.1.win32-py2.7 .. My machine is 64 bit. I could'nt find 64 bit scikit learn installer for intel processor, only AMD is available. Python 2.7 required which was not found in the registry is the error message I get when I try to run the installer. How do I solve this?
0
1
494
0
17,032,461
0
0
0
0
1
false
0
2013-06-08T16:10:00.000
0
1
0
New sort criteria "random" for Plone 4 old style collections
17,001,402
0
python,collections,plone,zope
There is no random sort criteria. Any randomness will need to be done in custom application code.
is there any best practice for adding a "random" sort criteria to the old style collection in Plone? My versions: Plone 4.3 (4305) CMF 2.2.7 Zope 2.13.19
0
1
109
0
17,070,022
0
0
0
0
1
false
6
2013-06-11T20:48:00.000
3
1
0
How can i distribute processing of minibatch kmeans (scikit-learn)?
17,053,548
0.53705
python,machine-learning,multiprocessing,scikit-learn
I don't think this is possible. You could implement something with OpenMP inside the minibatch processing. I'm not aware of any parallel minibatch k-means procedures; parallelizing stochastic gradient descent procedures is somewhat hairy. Btw, the n_jobs parameter in KMeans only distributes the different random initializations afaik. (A short partial_fit sketch, relevant to the question, follows below.)
In Scikit-learn , K-Means have n_jobs but MiniBatch K-Means is lacking it. MBK is faster than KMeans but at large sample sets we would like it distribute the processing across multiprocessing (or other parallel processing libraries). Is MKB's Partial-fit the answer?
0
1
1,981
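On the question's mention of partial_fit, a minimal sketch of feeding chunks to MiniBatchKMeans incrementally; note that this runs in a single process, so it is not the multi-core n_jobs behaviour asked about. The data and chunk size are placeholders.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(0)
X = rng.rand(10000, 5)                      # stand-in for a large sample set

mbk = MiniBatchKMeans(n_clusters=8, random_state=0)
for start in range(0, X.shape[0], 1000):
    mbk.partial_fit(X[start:start + 1000])  # one mini-batch at a time

print(mbk.predict(X[:10]))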
0
17,054,932
0
1
0
0
3
true
63
2013-06-11T20:56:00.000
40
5
0
How do you stop numpy from multithreading?
17,053,671
1.2
python,multithreading,numpy
Set the MKL_NUM_THREADS environment variable to 1. As you might have guessed, this environment variable controls the behavior of the Math Kernel Library which is included as part of Enthought's numpy build. I just do this in my startup file, .bash_profile, with export MKL_NUM_THREADS=1. You should also be able to do it from inside your script to have it be process specific; a short sketch of that variant follows below.
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all. I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
0
1
25,964
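A minimal sketch of the in-script variant of the accepted answer above. The environment variable must be set before numpy is imported; which one matters depends on the BLAS build, so both common ones are set here as an assumption.

import os

# Must happen before importing numpy, or the BLAS thread pool is already up.
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np

a = np.random.rand(2000, 2000)
b = np.dot(a, a.T)                 # now runs on a single core
print(b.shape)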
0
21,673,595
0
1
0
0
3
false
63
2013-06-11T20:56:00.000
12
5
0
How do you stop numpy from multithreading?
17,053,671
1
python,multithreading,numpy
In more recent versions of numpy I have found it necessary to also set NUMEXPR_NUM_THREADS=1. In my hands, this is sufficient without setting MKL_NUM_THREADS=1, but under some circumstances you may need to set both.
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all. I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
0
1
25,964
0
48,665,619
0
1
0
0
3
false
63
2013-06-11T20:56:00.000
52
5
0
How do you stop numpy from multithreading?
17,053,671
1
python,multithreading,numpy
Hopefully this fixes all scenarios and systems you may be on. 1. Use numpy.__config__.show() to see if you are using OpenBLAS or MKL. From this point on there are a few ways you can do this. 2.1. The terminal route: export OPENBLAS_NUM_THREADS=1 or export MKL_NUM_THREADS=1. 2.2. (This is my preferred way) In your python script import os and add the line os.environ['OPENBLAS_NUM_THREADS'] = '1' or os.environ['MKL_NUM_THREADS'] = '1'. NOTE: when setting os.environ[VAR] the number of threads must be a string! Also, you may need to set this environment variable before importing numpy/scipy. There are probably other options besides OpenBLAS or MKL, but step 1 will help you figure that out.
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all. I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
0
1
25,964
0
21,414,260
0
0
0
0
1
false
7
2013-06-12T02:33:00.000
0
4
0
Parsing a txt file into a dictionary to write to csv file
17,056,818
0
python,csv,file-io
I know this is an older question so maybe you have long since solved it, but I think you are approaching this in a more complex way than is needed. I figure I'll respond in case someone else has the same problem and finds this. If you are doing things this way because you do not have a software key, it might help to know that the E-Merge and E-DataAid programs for eprime don't require a key. You only need the key for editing build files. Whoever provided you with the .txt files should probably have an install disk for these programs. If not, it is available on the PST website (I believe you need a serial code to create an account, but not certain). Eprime generally creates a .edat file that matches the content of the text file you have posted an example of. Sometimes though, if eprime crashes, you don't get the edat file and only have the .txt. Luckily you can generate the edat file from the .txt file. Here's how I would approach this issue: If you do not have the edat files available, first use E-DataAid to recover the files. Then, presuming you have multiple participants, you can use E-Merge to merge all of the edat files together for all participants who completed this task. Open the merged file. It might look a little chaotic depending on how much you have in the file. You can go to Tools -> Arrange Columns. This will show a list of all your variables. Adjust so that only the desired variables are in the right-hand box. Hit OK. Then you should have something resembling your end goal, which can be exported as a csv. If you have many procedures in the program you might at this point have lines that just have startup info and NULL in the locations where your variables of interest are. You can fix this by going to Tools -> Filter and creating a filter to eliminate those lines.
Eprime outputs a .txt file like this: *** Header Start *** VersionPersist: 1 LevelName: Session Subject: 7 Session: 1 RandomSeed: -1983293234 Group: 1 Display.RefreshRate: 59.654 *** Header End *** Level: 2 *** LogFrame Start *** MeansEffectBias: 7 Procedure: trialProc itemID: 7 bias1Answer: 1 *** LogFrame End *** Level: 2 *** LogFrame Start *** MeansEffectBias: 2 Procedure: trialProc itemID: 2 bias1Answer: 0 I want to parse this and write it to a .csv file but with a number of lines deleted. I tried to create a dictionary that took the text appearing before the colon as the key and the text after as the value: {subject: [7, 7], bias1Answer : [1, 0], itemID: [7, 2]} def load_data(filename): data = {} eprime = open(filename, 'r') for line in eprime: rows = re.sub('\s+', ' ', line).strip().split(':') try: data[rows[0]] += rows[1] except KeyError: data[rows[0]] = rows[1] eprime.close() return data for line in open(fileName, 'r'): if ':' in line: row = line.strip().split(':') fullDict[row[0]] = row[1] print fullDict both of the scripts below produce garbage: {'\x00\t\x00M\x00e\x00a\x00n\x00s\x00E\x00f\x00f\x00e\x00c\x00t\x00B\x00i\x00a\x00s\x00': '\x00 \x005\x00\r\x00', '\x00\t\x00B\x00i\x00a\x00s\x002\x00Q\x00.\x00D\x00u\x00r\x00a\x00t\x00i\x00o\x00n\x00E\x00r\x00r\x00o\x00r\x00': '\x00 \x00-\x009\x009\x009\x009\x009\x009\x00\r\x00' If I could set up the dictionary, I can write it to a csv file that would look like this!!: Subject itemID ... bias1Answer 7 7 1 7 2 0
0
1
1,142
0
23,725,918
0
0
1
0
1
false
8
2013-06-12T21:09:00.000
4
7
0
Embed python into fortran 90
17,075,418
0.113791
python,fortran,embed
There is a very easy way to do this using f2py. Write your python method and add it as an input to your Fortran subroutine. Declare it in both the cf2py hook and the type declaration as EXTERNAL and also as its return value type, e.g. REAL*8. Your Fortran code will then have a pointer to the address where the python method is stored. It will be SLOW AS MOLASSES, but for testing out algorithms it can be useful. I do this often (I port a lot of ancient spaghetti Fortran to python modules...) It's also a great way to use things like optimised Scipy calls in legacy fortran
I was looking at the option of embedding python into fortran90 to add python functionality to my existing fortran90 code. I know that it can be done the other way around by extending python with fortran90 using the f2py from numpy. But, i want to keep my super optimized main loop in fortran and add python to do some additional tasks / evaluate further developments before I can do it in fortran, and also to ease up code maintenance. I am looking for answers for the following questions: 1) Is there a library that already exists from which I can embed python into fortran? (I am aware of f2py and it does it the other way around) 2) How do we take care of data transfer from fortran to python and back? 3) How can we have a call back functionality implemented? (Let me describe the scenario a bit....I have my main_fortran program in Fortran, that call Func1_Python module in python. Now, from this Func1_Python, I want to call another function...say Func2_Fortran in fortran) 4) What would be the impact of embedding the interpreter of python inside fortran in terms of performance....like loading time, running time, sending data (a large array in double precision) across etc. Thanks a lot in advance for your help!! Edit1: I want to set the direction of the discussion right by adding some more information about the work I am doing. I am into scientific computing stuff. So, I would be working a lot on huge arrays / matrices in double precision and doing floating point operations. So, there are very few options other than fortran really to do the work for me. The reason i want to include python into my code is that I can use NumPy for doing some basic computations if necessary and extend the capabilities of the code with minimal effort. For example, I can use several libraries available to link between python and some other package (say OpenFoam using PyFoam library).
0
1
9,062
0
63,390,537
0
0
0
0
1
false
415
2013-06-13T23:05:00.000
2
13
0
How to reversibly store and load a Pandas dataframe to/from disk
17,098,654
0.03076
python,pandas,dataframe
Another quite fresh test with to_pickle(). I have 25 .csv files in total to process and the final dataframe consists of roughly 2M items. (Note: besides loading the .csv files, I also manipulate some data and extend the data frame with new columns.) Going through all 25 .csv files and creating the dataframe takes around 14 seconds; loading the whole dataframe from a .pkl file takes less than 1 second. A short round-trip sketch follows below.
Right now I'm importing a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?
0
1
432,843
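A small sketch of the pickle round-trip described above; the file name and frame are assumptions.

import pandas as pd

df = pd.DataFrame({"a": range(5), "b": list("vwxyz")})

df.to_pickle("frame.pkl")            # one-time cost after building the frame
df2 = pd.read_pickle("frame.pkl")    # fast reload on later runs

assert df.equals(df2)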
0
17,101,084
0
1
1
0
1
false
3
2013-06-14T01:39:00.000
3
1
0
PyPy and efficient arrays
17,099,850
0.53705
python,arrays,numpy,pypy
array.array is a memory-efficient array. It packs bytes/words etc. together, so there are only a few bytes of extra overhead for the entire array. The one place where numpy can use less memory is when you have a sparse array (and are using one of the sparse array implementations). If you are not using sparse arrays, you simply measured it wrong. array.array also doesn't have a packed bool type, so you can implement that as a wrapper around an array.array('I') or a bytearray(), or even just use bit masks with a Python long.
My project currently uses NumPy, only for memory-efficient arrays (of bool_, uint8, uint16, uint32). I'd like to get it running on PyPy which doesn't support NumPy. (failed to install it, at any rate) So I'm wondering: Is there any other memory-efficient way to store arrays of numbers in Python? Anything that is supported by PyPy? Does PyPy have anything of it's own? Note: array.array is not a viable solution, as it uses a lot more memory than NumPy in my testing.
0
1
919
0
17,117,416
0
0
0
0
1
false
0
2013-06-14T16:55:00.000
0
2
0
Finding log-likelihood in a restricted boltzmann machine
17,113,613
0
python,machine-learning,artificial-intelligence,neural-network
Assume you have v visible units and h hidden units, and v < h. The key idea is that once you've fixed all the values for each visible unit, the hidden units are independent. So you loop through all 2^v subsets of visible unit activations. Then computing the likelihood for the RBM with this particular activated visible subset is tractable, because the hidden units are independent[1]. So then loop through each hidden unit, and add up the probability of it being on and off conditioned on your subset of visible units. Then multiply out all of those summed on/off hidden probabilities to get the probability of that particular subset of visible units. Add up all subsets and you are done. The problem is that this is exponential in v. If v > h, just "transpose" your RBM, pretending the hidden are visible and vice versa. A short brute-force sketch follows below. [1] The hidden units can't influence each other, because their influence would have to go through the visible units (no h to h connections), but you've fixed the visible units.
I have been researching RBMs for a couple months, using Python along the way, and have read all your papers. I am having a problem, and I thought, what the hey? Why not go to the source? I thought I would at least take the chance you may have time to reply. My question is regarding the Log-Likelihood in a Restricted Boltzmann Machine. I have read that finding the exact log-likelihood in all but very small models is intractable, hence the introduction of contrastive divergence, PCD, pseudo log-likelihood etc. My question is, how do you find the exact log-likelihood in even a small model? I have come across several definitions of this formula, and all seem to be different. In Tielemen’s 2008 paper “Training Restricted Boltzmann Machines using Approximations To the Likelihood Gradient”, he performs a log-likelihood version of the test to compare to the other types of approximations, but does not say the formula he used. The closest thing I can find is the probabilities using the energy function over the partition function, but I have not been able to code this, as I don’t completely understand the syntax. In Bengio et al “Representation Learning: A Review and New Perspectives”, the equation for the log-likelihood is: sum_t=1 to T (log P(X^T, theta)) which is equal to sum_t=1 to T(log * sum_h in {0,1}^d_h(P(x^(t), h; theta)) where T is training examples. This is (14) on page 11. The only problem is that none of the other variables are defined. I assume x is the training data instance, but what is the superscript (t)? I also assume theta are the latent variables h, W, v… But how do you translate this into code? I guess what I’m asking is can you give me a code (Python, pseudo-code, or any language) algorithm for finding the log-likelihood of a given model so I can understand what the variables stand for? That way, in simple cases, I can find the exact log-likelihood and then compare them to my approximations to see how well my approximations really are.
0
1
1,642
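A minimal sketch of the brute-force computation described in the answer above for a tiny binary RBM: the hidden units are summed out analytically via the free energy F(v) = -b.v - sum_j log(1 + exp(c_j + v.W_j)), and the partition function is obtained by enumerating all 2^v visible configurations. The weights here are random placeholders, not a trained model.

import itertools
import numpy as np

rng = np.random.RandomState(0)
n_visible, n_hidden = 4, 3                    # small enough for exact enumeration
W = 0.1 * rng.randn(n_visible, n_hidden)      # visible-to-hidden weights
b = np.zeros(n_visible)                       # visible biases
c = np.zeros(n_hidden)                        # hidden biases

def free_energy(v):
    # F(v) = -b.v - sum_j log(1 + exp(c_j + v.W_j)); the hidden units sum out.
    return -np.dot(b, v) - np.sum(np.logaddexp(0, c + np.dot(v, W)))

# Partition function: sum over all 2^n_visible configurations.
all_v = [np.array(bits) for bits in itertools.product([0, 1], repeat=n_visible)]
log_Z = np.logaddexp.reduce([-free_energy(v) for v in all_v])

def exact_log_likelihood(data):
    # Mean log p(v) over the rows of `data` (each row a binary visible vector).
    return np.mean([-free_energy(v) - log_Z for v in data])

print(exact_log_likelihood(np.array([[0, 1, 0, 1], [1, 1, 0, 0]])))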
0
17,133,162
0
0
0
0
1
false
2
2013-06-15T15:35:00.000
4
1
0
SVM Multiclass Classification using Scikit Learn - Code not completing
17,125,247
0.664037
python,python-2.7,machine-learning,svm,scikit-learn
First, for text data you don't need a non-linear kernel, so you should use an efficient linear SVM solver such as LinearSVC or PassiveAggressiveClassifier instead. The SMO algorithm of SVC / libsvm is not scalable: the complexity is more than quadratic, which in practice often makes it useless for datasets larger than 5000 samples. Also, to deal with the class imbalance you might want to try subsampling class 2 and class 3 so that they have at most twice the number of samples of class 1. A short sketch of the linear-SVM route follows below.
I have a text data labelled into 3 classes and class 1 has 1% data, class 2 - 69% and class 3 - 30%. Total data size is 10000. I am using 10-fold cross validation. For classification, SVM of scikit learn python library is used with class_weight=auto. But the code for 1 step of 10-fold CV has been running for 2 hrs and has not finished. This implies that for code will take at least 20 hours for completion. Without adding the class_weight=auto, it finishes in 10-15min. But then, no data is labelled of class 1 in the output. Is there some way to achieve solve this issue ?
0
1
2,914
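A minimal sketch of the linear-SVM suggestion above with class weighting on TF-IDF features; recent scikit-learn spells the option class_weight='balanced' (older releases used 'auto'). The toy corpus is made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great product", "awful service", "ok experience", "really awful", "great!"]
labels = [1, 2, 3, 2, 1]

clf = make_pipeline(TfidfVectorizer(),
                    LinearSVC(class_weight="balanced"))   # reweights rare classes
clf.fit(texts, labels)
print(clf.predict(["awful product"]))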
0
52,143,806
0
0
0
0
1
false
2
2013-06-15T15:35:00.000
1
2
0
Get first element of Pandas Series of string
17,125,248
0.099668
python,string,pandas,series
Get the Series head(), then access the first value: df1['tweet'].head(1).item(). Or use the Series tolist() method, then slice the 0th element: df.height.tolist() gives [94, 170], so df.height.tolist()[0] gives 94. (Note that Python indexing is 0-based, but head() counts rows, so head(1) means the first row.)
I think I have a relatively simply question but am not able to locate an appropriate answer to solve the coding problem. I have a pandas column of string: df1['tweet'].head(1) 0 besides food, Name: tweet I need to extract the text and push it into a Python str object, of this format: test_messages = ["line1", "line2", "etc"] The goal is to classify a test set of tweets and therefore believe the input to: X_test = tfidf.transform(test_messages) is a str object.
0
1
8,156
0
17,141,432
0
0
0
0
1
false
0
2013-06-17T03:36:00.000
0
1
0
Unable to export pandas dataframe into excel file
17,140,080
0
python-2.7,pandas,xls
The problem you are facing is that your excel has a character that cannot be decoded to unicode. It was probably working before but maybe you edited this xls file somehow in Excel/Libre. You just need to find this character and either get rid of it or replace it with the one that is acceptable.
I am trying to export dataframe to .xls file using to_excel() method. But while execution it was throwing an error: "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 892: ordinal not in range(128)". Just few moments back it was working fine. The code I used is: :csv2.to_excel("C:\\Users\\shruthi.sundaresan\\Desktop\\csat1.xls",sheet_name='SAC_STORE_DATA',index=False). csv2 is the dataframe. Why does this kind of error happens and how to avoid this is in the future?
0
1
474
0
17,154,439
0
0
0
0
1
true
1
2013-06-17T18:11:00.000
2
1
0
Pyplot polar scatter plot color for sign
17,154,006
1.2
python,colors,matplotlib,scatter
I'm not sure if this is the "proper" way to do this, but you could programmatically split your data into two subsets: one containing the positive values and the second containing the negative values. Then you can call the plot function twice, specifying the color you want for each subset. It's not an elegant solution, but a solution nonetheless; a short sketch follows below.
I have a pyplot polar scatter plot with signed values. Pyplot does the "right" thing and creates only a positive axis, then reflects negative values to look as if they are a positive value 180 degrees away. But, by default, pyplot plots all points using the same color. So positive and negative values are indistinguishable. I'd like to easily tell positive values at angle x from negative values at angle (x +/- 180), with positive values red and negative values blue. I've made no progress creating what should be a very simple color map for this situation. Help?
0
1
785
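A small sketch of the split-by-sign idea above: call scatter twice on a polar axes, one colour per subset (matplotlib still reflects the negative radii by 180 degrees). The data are random placeholders.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
theta = rng.uniform(0, 2 * np.pi, 50)
r = rng.uniform(-1, 1, 50)                  # signed radial values

pos = r >= 0
ax = plt.subplot(111, projection="polar")
ax.scatter(theta[pos], r[pos], color="red", label="positive")
ax.scatter(theta[~pos], r[~pos], color="blue", label="negative")
ax.legend()
plt.show()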
0
20,356,186
0
1
0
0
1
false
3
2013-06-17T18:34:00.000
0
3
0
How can scipy.weave.inline be used in a MPI-enabled application on a cluster?
17,154,381
0
python,scipy,cluster-computing,mpi
One quick workaround is to use a local directory on each node (e.g. /tmp as Wesley said), but use one MPI task per node, if you have the capacity.
If scipy.weave.inline is called inside a massive parallel MPI-enabled application that is run on a cluster with a home-directory that is common to all nodes, every instance accesses the same catalog for compiled code: $HOME/.pythonxx_compiled. This is bad for obvious reasons and leads to many error messages. How can this problem be circumvented?
0
1
282
0
17,181,542
0
0
0
0
1
false
2
2013-06-18T23:01:00.000
2
1
0
change detection on video frames
17,180,409
0.379949
python,opencv,image-processing,frame,python-imaging-library
For effects like zooming in and out, optical flow seems the best choice. Search for research papers on "Shot Detection" for other possible approaches. As for the techniques you mention, did you apply some form of noise reduction before using them?
I have a sequence of actions taking place on a video like, say "zooming in and zooming out" a webpage. I want to catch the frames that had a visual change from a some previous frame and so on. Basically, want to catch the visual difference happening in the video. I have tried using feature detection using SURF. It just detects random frames and does not detect most of the times. I have also tried, histograms and it does not help. Any directions and pointers? Thanks in advance,
0
1
371
0
17,212,232
0
0
0
0
1
false
1
2013-06-20T09:13:00.000
0
1
0
to measure Contact Angle between 2 edges in an image
17,209,762
0
opencv,python-2.7
Simplify edges A and B into line equations (using only the last few pixels of each). Get the line equations of the two lines (form y = mx + b). Get the angle orientation of each line, θ = atan|1/m|. Subtract the two angles from each other. Make sure to handle the special case of infinite slope, and do some simple math to keep the final angle in (0, π). A short numeric sketch follows below.
i need to find out contact angle between 2 edges in an image using open cv and python so can anybody suggest me how to find it? if not code please let me know algorithm for the same.
0
1
531
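A small numeric sketch of the steps above, assuming each edge is already available as arrays of pixel coordinates (for example from an OpenCV edge detector); the points here are synthetic. It fits a line to the last few pixels of each edge and takes the difference of the two orientations; a vertical edge would need the arctan2 form instead of a slope fit.

import numpy as np

def edge_angle(xs, ys, n_last=10):
    # Fit y = m*x + b to the last few pixels and return the line orientation.
    m, _ = np.polyfit(xs[-n_last:], ys[-n_last:], 1)
    return np.arctan(m)                     # radians, in (-pi/2, pi/2)

# Synthetic edges: one roughly along y = 0.2x, the other along y = 1.5x.
x = np.arange(30, dtype=float)
edge_a = (x, 0.2 * x + 0.05 * np.random.RandomState(0).randn(30))
edge_b = (x, 1.5 * x + 0.05 * np.random.RandomState(1).randn(30))

angle = abs(edge_angle(*edge_a) - edge_angle(*edge_b))   # already in [0, pi)
print(np.degrees(angle))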
0
18,653,827
0
0
0
0
1
true
1
2013-06-20T15:44:00.000
0
1
0
finding corresponding pixels before and after scipy.ndimage.interpolate.rotate
17,218,051
1.2
python,indexing,scipy,rotatetransform,correspondence
After some time of debugging, I realized that depending on the angle - typically under and over n*45 degrees - scipy adds a row and a column to the output image. a simple test of the angle adding one to the indices solved my problem. I hope this can help the future reader of this topic.
I hope this hasn't already been answered but I haven't found this anywhere. My problem is quite simple : I'm trying to compute an oblic profile of an image using scipy. One great way to do it is to : locate the segment along which I want my profile by giving the beginning and the end of my segment, extract the minimal image containing the segment, compute the angle from which my image should be rotated to get the desired profile along a nice raw, extract said raw. That's the theory. I'm stuck right now on (4), because my profile should fall in raw number array.shape[0]/2, but rotate seems to add sometimes lines of zeros below the data and columns on the left. The correct raw number can thus be shifted... Has anyone any idea how to get the right raw number, for example using the correspondence matrix used by scipy ? Thanks.
0
1
131
0
17,220,530
0
0
0
0
2
false
0
2013-06-20T16:51:00.000
0
2
0
MATLAB to web app
17,219,344
0
python,django,matlab,web-applications,octave
You could always just host the MATLAB code and sample .mat on a website for people to download and play with on their own machines if they have a MATLAB license. If you are looking at having some sort of embedded app on your website you are going to need to rewrite your code in another language. The project sounds doable in python using the packages you mentioned however hosting it online will not be as simple as running a program from your command line. Django would help you build a website but I do not think that it will allow you to just run a python script in the browser.
Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What would be the easiest/most efficient way of doing this? I have experience programming in Python and little experience programming in Java. Here are the options that I have considered: 1. MATLAB Builder JA (too expensive) 2. Rewrite entire MATLAB function into Java (not experienced enough in Java) 3. Implement MATLAB file using mlabwrapper and using Django to deploy into web app. (having a lot of trouble installing mlabwrapper onto OSX) 4. Rewrite MATLAB function into Python using SciPy, NumPy, and matlibplot and then using Django. I do not have any experience with Django but I am willing to learn it. Can someone point me in the right direction?
1
1
934
0
17,224,492
0
0
0
0
2
false
0
2013-06-20T16:51:00.000
1
2
0
MATLAB to web app
17,219,344
0.099668
python,django,matlab,web-applications,octave
A cheap and somewhat easy way (with limited functionality) would be: Install MATLAB on your server, or use the MATLAB Compiler to create a stand-alone executable (not sure if that comes with your version of MATLAB or not). If you don't have the compiler and can't install MATLAB on your server, you could always go to a freelancing site such as elance.com and pay someone $20 to compile your code for you into a windows exe file. Either way, the end goal is to make your MATLAB function callable from the command line (the server will be doing the calling). You could make your input arguments the slider value and the .mat files you want to open, and the compiled version of MATLAB will know how to handle this. Once you do that, have the code create a plot and save an image of it (using getframe or other figure export tools; check out the FEX). Have your server output this image to the client. Tah-dah, you have a crappy low-cost workaround! I hope this helps; if not, I apologize!
Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What would be the easiest/most efficient way of doing this? I have experience programming in Python and little experience programming in Java. Here are the options that I have considered: 1. MATLAB Builder JA (too expensive) 2. Rewrite entire MATLAB function into Java (not experienced enough in Java) 3. Implement MATLAB file using mlabwrapper and using Django to deploy into web app. (having a lot of trouble installing mlabwrapper onto OSX) 4. Rewrite MATLAB function into Python using SciPy, NumPy, and matlibplot and then using Django. I do not have any experience with Django but I am willing to learn it. Can someone point me in the right direction?
1
1
934
0
17,241,104
0
0
0
0
1
false
289
2013-06-21T17:25:00.000
75
8
0
How do I convert a pandas Series or index to a Numpy array?
17,241,004
1
python,pandas
You can use df.index to access the index object and then get the values in a list using df.index.tolist(). Similarly, you can use df['col'].tolist() for Series.
Do you know how to get the index or column of a DataFrame as a NumPy array or python list?
0
1
558,431
0
17,257,084
0
0
0
0
1
true
1
2013-06-23T01:56:00.000
2
1
0
Inverse of a Matrix in Python
17,257,056
1.2
python,numpy,matrix-inverse
It may very well have to do with the smallness of the values in the matrix. Some matrices that are not, in fact, mathematically singular (with a zero determinant) are totally singular from a practical point of view, in that the math library one is using cannot process them properly. Numerical analysis is tricky, as you know, and how well it deals with such situations is a measure of the quality of a matrix library.
While trying to compute inverse of a matrix in python using numpy.linalg.inv(matrix), I get singular matrix error. Why does it happen? Has it anything to do with the smallness of the values in the matrix. The numbers in my matrix are probabilities and add up to 1.
0
1
7,873
0
17,270,654
0
1
0
0
2
false
2
2013-06-24T07:39:00.000
1
2
0
how do you distinguish numpy arrays from Python's built-in objects
17,270,293
0.099668
python,numpy,naming-conventions
numpy arrays and lists should occupy similar syntactic roles in your code and as such I wouldn't try to distinguish between them by naming conventions. Since everything in python is an object the usual naming conventions are there not to help distinguish type so much as usage. Data, whether represented in a list or a numpy.ndarray has the same usage. I agree that it's awkward that eg. + means different things for lists and arrays. I implicitly deal with this by never putting anything like numerical data in a list but rather always in an array. That way I know if I want to concatenate blocks of data I should be using numpy.hstack. That said, there are definitely cases where I want to build up a list through concatenation and turn it into a numpy array when I'm done. In those cases the code block is usually short enough that it's clear what's going on. Some comments in the code never hurt.
PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase). It seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as "+" actually mean something totally different. Does anyone have any naming conventions to help with this?
0
1
415
0
17,270,547
0
1
0
0
2
false
2
2013-06-24T07:39:00.000
2
2
0
how do you distinguish numpy arrays from Python's built-in objects
17,270,293
0.197375
python,numpy,naming-conventions
You may use a prefix np_ for numpy arrays, thus distinguishing them from other variables.
PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase). It seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as "+" actually mean something totally different. Does anyone have any naming conventions to help with this?
0
1
415
0
17,300,620
0
1
0
0
1
false
7
2013-06-25T14:43:00.000
1
4
0
Python Sort On The Fly
17,300,419
0.049958
python,algorithm,sorting
Have a list of 20 tuples initialised with less than the minimum result of the calculation and two indices of -1. On calculating a result, append it to the results list with the indices of the pair that produced it, sort on the value only, and trim the list to length 20. This should be reasonably efficient as you only ever sort a list of length 21. A heap-based sketch follows below.
I am thinking about a problem I haven't encountered before and I'm trying to determine the most efficient algorithm to use. I am iterating over two lists, using each pair of elements to calculate a value that I wish to sort on. My end goal is to obtain the top twenty results. I could store the results in a third list, sort that list by absolute value, and simply slice the top twenty, but that is not ideal. Since these lists have the potential to become extremely large, I'd ideally like to only store the top twenty absolute values, evicting old values as a new top value is calculated. What would be the most efficient way to implement this in python?
0
1
911
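A minimal sketch of the keep-only-the-best-20 idea discussed above, using a size-bounded min-heap instead of repeated sorting; the scoring function and input lists are placeholders. heapq.nlargest over a generator expression is an equivalent one-liner.

import heapq

list_a = range(500)
list_b = range(500)

def score(a, b):
    return (a * 31 + b * 17) % 997 - 500    # stand-in for the real calculation

top = []                                    # min-heap of (abs_value, a, b)
for a in list_a:
    for b in list_b:
        item = (abs(score(a, b)), a, b)
        if len(top) < 20:
            heapq.heappush(top, item)
        elif item > top[0]:
            heapq.heapreplace(top, item)    # evict the current smallest

best_twenty = sorted(top, reverse=True)     # only 20 items left to sort
print(best_twenty[:3])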
0
17,347,945
0
0
0
0
1
true
44
2013-06-26T09:07:00.000
84
4
0
How can I check if a Pandas dataframe's index is sorted
17,315,881
1.2
python,pandas
How about: df.index.is_monotonic
I have a vanilla pandas dataframe with an index. I need to check if the index is sorted. Preferably without sorting it again. e.g. I can test an index to see if it is unique by index.is_unique() is there a similar way for testing sorted?
0
1
16,454
0
17,370,686
0
0
0
0
1
true
2
2013-06-27T21:46:00.000
0
1
0
Are pandas Panels as efficient as multi-indexed DataFrames?
17,353,773
1.2
python,pandas,data-analysis
They have a similar storage mechanism, and only really differ in the indexing scheme. Performance-wise they should be similar. There is more support (code-wise) for multi-level DataFrames as they are more often used. In addition, Panels have different slicing semantics, so dtype guarantees are different.
I am wondering whether there is any computational or storage disadvantage to using Panels instead of multi-indexed DataFrames in pandas. Or are they the same behind the curtain?
0
1
169
0
17,371,090
0
1
0
0
1
false
4
2013-06-28T18:08:00.000
1
2
0
numpy array memory allocation
17,371,059
0.099668
python,numpy
Numpy in general is more efficient if you pre-allocate the size. If you know you're going to be populating an M x N matrix, create it first and then populate it, as opposed to using appends, for example. While the list does have to be created, a lot of the improvement in efficiency comes from acting on that structure: reading/writing/computations/etc. A short pre-allocation sketch follows below.
From what I've read about Numpy arrays, they're more memory efficient that standard Python lists. What confuses me is that when you create a numpy array, you have to pass in a python list. I assume this python list gets deconstructed, but to me, it seems like it defeats the purpose of having a memory efficient data structure if you have to create a larger inefficient structure to create the efficient one. Does numpy.zeros get around this?
0
1
4,676
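A short sketch of the pre-allocation point made above: create the array once with numpy.zeros and fill it in place rather than building a Python list first; the shape is illustrative.

import numpy as np

rows, cols = 1000, 50

out = np.zeros((rows, cols))          # allocated once, no intermediate list
for i in range(rows):
    out[i, :] = np.arange(cols) * i   # fill row by row

print(out.shape, out.dtype)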
0
19,917,484
0
0
0
0
1
false
0
2013-06-29T05:07:00.000
1
1
0
What IS a .fits file, as in, what is a .fits array?
17,376,904
0.197375
python,arrays,astronomy,fits
A FITS file consists of header-data units. A header-data unit contains an ASCII-type header with keyword-value-comment triples plus either binary FITS tables or (hyperdimensional) image cubes. Each entry in a table of a binary FITS table may itself contain hyperdimensional image cubes. An array is some slice through some dimensions of any of these cubes. Now, as a shortcut to images stored in the first (a.k.a. primary) header-data unit, many viewers allow you to indicate in square brackets some indices of windows into these images (which in most common cases is based on the equivalent support in the cfitsio library). A short sketch of reading one follows below.
I'm basically trying to plot some images based on a given set of parameters of a .fits file. However, this made me curious: what IS a .fits array? When I type in img[2400,3456] or some random values in the array, I get some output. I guess my question is more conceptual than code-based, but, it boils down to this: what IS a .fits file, and what do the arrays and the outputs represent?
0
1
119
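As a concrete illustration of the answer above, assuming the astropy library and a hypothetical file name, this opens a FITS file and indexes the primary image the way the question does.

from astropy.io import fits

with fits.open("example.fits") as hdul:    # hypothetical file name
    hdul.info()                            # list the header-data units
    header = hdul[0].header                # keyword/value/comment cards
    img = hdul[0].data                     # primary HDU image as a numpy array
    print(img.shape)
    print(img[2400, 3456])                 # one pixel value, as in the question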
0
17,416,531
0
0
0
0
1
true
2
2013-07-02T02:21:00.000
2
1
0
Multiplication of Multidimensional matrices (arrays) in Python
17,416,448
1.2
python,numpy,linear-algebra,multidimensional-array
Let's say you're trying to use a Markov chain to model English sentence syntax. Your transition matrix will give you the probability of going from one part of speech to another part of speech. Now let's suppose that we're using a 3rd-order Markov model. This would give us the probability of going from state 123 to 23X, where X is a valid state. The Markov transition matrix would be N^3 x N, which is still a 2-dimensional matrix regardless of the dimensionality of the states themselves. If you're generating the probability distributions based on empirical evidence, then, in this case, there are going to be states with probability 0. If you're worried about sparsity, perhaps arrays are not the best choice. Instead of using an array of arrays, perhaps you should use a dictionary of dictionaries. Or, if you have many transition matrices, an array of dictionaries of dictionaries. EDIT (based off comment): You're right, that is more complicated. Nonetheless, for any state, (i,j), there exists a probability distribution for going to the next state, (m,n). Hence, we have our "outer" dictionary, whose keys are all the possible states. Each key (state) points to a value that is a dictionary, which holds the probability distribution for that state.
First of all, I am aware that matrix and array are two different data types in NumPy. But I put both in the title to make it a general question. If you are editing this question, please feel free to remove one. Ok, here is my question, Here is an edit to the original question. Consider a Markov Chain with a 2 dimensional state vector x_t=(y_t,z_t) where y_t and z_t are both scalars. What is the best way of representing/storing/manipulating transition matrix of this Markov Chain? Now, what I explained is a simplified version of my problem. My Markov Chain state vector is a 5*1 vector. Hope this clarifies
0
1
846
0
17,456,347
0
1
0
0
1
false
0
2013-07-03T19:09:00.000
1
4
0
Python Matching License Plates
17,456,233
0.049958
python,comparison
What you're asking about is a fuzzy search, from what it sounds like. Instead of checking string equality, you can check whether the two strings being compared have a Levenshtein distance of 1 or less. Levenshtein distance is basically a fancy way of saying how many insertions, deletions or changes it will take to get from word A to B. This should account for small typos. A short sketch follows below. Hope this is what you were looking for.
I am working on a traffic study and I have the following problem: I have a CSV file that contains time-stamps and license plate numbers of cars for a location and another CSV file that contains the same thing. I am trying to find matching license plates between the two files and then find the time difference between the two. I know how to match strings but is there a way I can find matches that are close maybe to detect user input error of the license plate number? Essentially the data looks like the following: A = [['09:02:56','ASD456'],...] B = [...,['09:03:45','ASD456'],...] And I want to find the time difference between the two sightings but say if the data was entered slightly incorrect and the license plate for B says 'ASF456' that it will catch that
0
1
1,137
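A minimal sketch of the fuzzy-matching idea above, with a tiny Levenshtein implementation and a distance threshold of 1; the sample records mirror the ones shown in the question.

from datetime import datetime

def levenshtein(s, t):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]

A = [["09:02:56", "ASD456"]]
B = [["09:03:45", "ASF456"]]

fmt = "%H:%M:%S"
for time_a, plate_a in A:
    for time_b, plate_b in B:
        if levenshtein(plate_a, plate_b) <= 1:          # tolerate one typo
            delta = datetime.strptime(time_b, fmt) - datetime.strptime(time_a, fmt)
            print(plate_a, plate_b, delta)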