Column summary: GUI and Desktop Applications (int64, 0–1); A_Id (int64, 5.3k–72.5M); Networking and APIs (int64, 0–1); Python Basics and Environment (int64, 0–1); Other (int64, 0–1); Database and SQL (int64, 0–1); Available Count (int64, 1–13); is_accepted (bool, 2 classes); Q_Score (int64, 0–1.72k); CreationDate (string, length 23); Users Score (int64, -11 to 327); AnswerCount (int64, 1–31); System Administration and DevOps (int64, 0–1); Title (string, length 15–149); Q_Id (int64, 5.14k–60M); Score (float64, -1 to 1.2); Tags (string, length 6–90); Answer (string, length 18–5.54k); Question (string, length 49–9.42k); Web Development (int64, 0–1); Data Science and Machine Learning (int64, always 1); ViewCount (int64, 7–3.27M).

GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 9,300,400 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2012-02-15T05:16:00.000 | 1 | 2 | 0 | NLTK certainty measure? | 9,288,221 | 1.2 | python,classification,nltk,probability | I am not sure about the NLTK implementation of Naive Bayes, but the Naive Bayes algorithm outputs probabilities of class membership. However, they are horribly calibrated.
If you want good measures of certainty, you should use a different classification algorithm. Logistic regression will do a decent job at producing calibrated estimates. | In NLTK, if I write a NaiveBayes classifier for say movie reviews (determining if positive or negative), how can I determine the classifier "certainty" when classify a particular review? That is, I know how to run an 'accuracy' test on a given test set to see the general accuracy of the classifier. But is there anyway to have NLTk output its certainess? (perhaps on the basis on the most informative features...)
Thanks
A | 0 | 1 | 524 |
0 | 9,300,932 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-02-15T05:16:00.000 | 1 | 2 | 0 | NLTK certainty measure? | 9,288,221 | 0.099668 | python,classification,nltk,probability | nltk.classify.util.log_likelihood. For this problem, you can also try measuring the results by precision, recall, F-score at the token level, that is, scores for positive and negative respectively. | In NLTK, if I write a NaiveBayes classifier for say movie reviews (determining if positive or negative), how can I determine the classifier "certainty" when classify a particular review? That is, I know how to run an 'accuracy' test on a given test set to see the general accuracy of the classifier. But is there anyway to have NLTk output its certainess? (perhaps on the basis on the most informative features...)
Thanks
A | 0 | 1 | 524 |
0 | 9,297,835 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-02-15T16:57:00.000 | 0 | 2 | 0 | 2d random walk in python - drawing hypotenuse from distribution | 9,297,679 | 0 | python,random,trigonometry,hypotenuse | If you have a hypotenuse in the form of a line segment, then you have two points. From two points in the form P0 = (x0, y0) P1 = (x1, y1) you can get the x and y displacements by subtracting x0 from x1 and y0 from y1.
If your hypotenuse is actually a vector in a polar coordinate plane, then yes, you'll have to take the sin of the angle and multiply it by the magnitude of the vector to get the y displacement and likewise with cos for the x displacement. | I'm writing a simple 2d brownian motion simulator in Python. It's obviously easy to draw values for x displacement and y displacement from a distribution, but I have to set it up so that the 2d displacement (ie hypotenuse) is drawn from a distribution, and then translate this to new x and y coordinates. This is probably trivial and I'm just too far removed from trigonometry to remember how to do it correctly. Am I going to need to generate a value for the hypotenuse and then translate it into x and y displacements with sin and cos? (How do you do this correctly?) | 0 | 1 | 996 |
0 | 9,298,238 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2012-02-15T16:57:00.000 | 1 | 2 | 0 | 2d random walk in python - drawing hypotenuse from distribution | 9,297,679 | 1.2 | python,random,trigonometry,hypotenuse | This is best done by using polar coordinates (r, theta) for your distributions (where r is your "hypotenuse")), and then converting the result to (x, y), using x = r cos(theta) and y = r sin(theta). That is, select r from whatever distribution you like, and then select a theta, usually from a flat, 0 to 360 deg, distribution, and then convert these values to x and y.
Going the other way around (i.e., constructing correlated (x, y) distributions that gave a direction independent hypotenuse) would be very difficult. | I'm writing a simple 2d brownian motion simulator in Python. It's obviously easy to draw values for x displacement and y displacement from a distribution, but I have to set it up so that the 2d displacement (ie hypotenuse) is drawn from a distribution, and then translate this to new x and y coordinates. This is probably trivial and I'm just too far removed from trigonometry to remember how to do it correctly. Am I going to need to generate a value for the hypotenuse and then translate it into x and y displacements with sin and cos? (How do you do this correctly?) | 0 | 1 | 996 |
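A minimal sketch of the polar-to-Cartesian step described in the accepted answer above, assuming a recent NumPy; the exponential step-length distribution is only a placeholder, since the thread does not fix a particular distribution for the hypotenuse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw the step length r (the "hypotenuse") from some distribution, and the
# direction theta from a flat 0..2*pi distribution, then convert to x and y.
r = rng.exponential(scale=1.0)        # placeholder distribution for the step length
theta = rng.uniform(0.0, 2.0 * np.pi)
dx = r * np.cos(theta)
dy = r * np.sin(theta)
print(dx, dy)
```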
0 | 9,349,317 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-02-18T11:15:00.000 | 1 | 1 | 0 | Pybrain: Completely linear network | 9,340,677 | 0.197375 | python,neural-network,backpropagation,forecasting,pybrain | Try applying log() to the price-attribute - then scale all inputs and outputs to [-1..1] - of course, when you want to get the price from the network-output you'll have to reverse log() with exp() | I am currently trying to create a Neural Network with pybrain for stock price forecasting. Up to now I have only used Networks with a binary output. For those Networks sigmoid inner layers were sufficient but I don't think this would be the right approach for Forecasting a price.
The problem is, that when I create such a completely linear network I always get an error like
RuntimeWarning: overflow encountered in square while backprop training.
I already scaled down the inputs. Could it be due to the size of my training sets (50000 entries per training set)?
Has anyone done something like this before? | 0 | 1 | 1,055 |
0 | 9,355,945 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-02-20T02:25:00.000 | 1 | 2 | 0 | What are the best algorithms for Word-Sense-Disambiguation | 9,355,460 | 0.099668 | python,nlp,nltk,text-processing | Well, WSD is an open problem (since it's language... and AI...), so currently each of those claims are in some sense valid. If you are engaged in a domain-specific project, I think you'd be best served by a statistical method (Support Vector Machines) if you can find a proper corpus. Personally, if you're using python, unless you're attempting to do some significant original research, I think you should just use the NLTK module to accomplish whatever you're trying to do. | What are the best algorithms for Word-Sense-Disambiguation
I read a lot of posts, and each one proves in a research document that a specific algorithm is the best, this is very confusing.
I just come up with 2 realizations 1-Lesk Algorithm is deprecated, 2-Adapted Lesk is good but not the best
Please if anybody based on his (Experience) know any other good algorithm that give accuracy up to say 70% or more please mention it . and if there's a link to any Pseudo Code for the algorithm it'll be great, I'll try to implement it in Python or Java . | 0 | 1 | 1,301 |
0 | 17,582,671 | 0 | 0 | 0 | 0 | 3 | false | 34 | 2012-02-20T17:56:00.000 | 20 | 7 | 0 | Missing values in scikits machine learning | 9,365,982 | 1 | python,machine-learning,scikit-learn,missing-data,scikits | I wish I could provide a simple example, but I have found that RandomForestRegressor does not handle NaN's gracefully. Performance gets steadily worse when adding features with increasing percentages of NaN's. Features that have "too many" NaN's are completely ignored, even when the nan's indicate very useful information.
This is because the algorithm will never create a split on the decision "isnan" or "ismissing". The algorithm will ignore a feature at a particular level of the tree if that feature has a single NaN in that subset of samples. But, at lower levels of the tree, when sample sizes are smaller, it becomes more likely that a subset of samples won't have a NaN in a particular feature's values, and a split can occur on that feature.
I have tried various imputation techniques to deal with the problem (replace with mean/median, predict missing values using a different model, etc.), but the results were mixed.
Instead, this is my solution: replace NaN's with a single, obviously out-of-range value (like -1.0). This enables the tree to split on the criteria "unknown-value vs known-value". However, there is a strange side-effect of using such out-of-range values: known values near the out-of-range value could get lumped together with the out-of-range value when the algorithm tries to find a good place to split. For example, known 0's could get lumped with the -1's used to replace the NaN's. So your model could change depending on if your out-of-range value is less than the minimum or if it's greater than the maximum (it could get lumped in with the minimum value or maximum value, respectively). This may or may not help the generalization of the technique, the outcome will depend on how similar in behavior minimum- or maximum-value samples are to NaN-value samples. | Is it possible to have missing values in scikit-learn ? How should they be represented? I couldn't find any documentation about that. | 0 | 1 | 39,190 |
0 | 48,199,308 | 0 | 0 | 0 | 0 | 3 | false | 34 | 2012-02-20T17:56:00.000 | 1 | 7 | 0 | Missing values in scikits machine learning | 9,365,982 | 0.028564 | python,machine-learning,scikit-learn,missing-data,scikits | When you run into missing values on input features, the first order of business is not how to impute the missing. The most important question is WHY SHOULD you. Unless you have clear and definitive mind what the 'true' reality behind the data is, you may want to curtail urge to impute. This is not about technique or package in the first place.
Historically we resorted to tree methods like decision trees mainly because some of us at least felt that imputing missing to estimate regression like linear regression, logistic regression, or even NN is distortive enough that we should have methods that do not require imputing missing 'among the columns'. The so-called missing informativeness. Which should be familiar concept to those familiar with, say, Bayesian.
If you are really modeling on big data, besides talking about it, the chance is you face large number of columns. In common practice of feature extraction like text analytics, you may very well say missing means count=0. That is fine because you know the root cause. The reality, especially when facing structured data sources, is you don't know or simply don't have time to know the root cause. But your engine forces to plug in a value, be it NAN or other place holders that the engine can tolerate, I may very well argue your model is as good as you impute, which does not make sense.
One intriguing question is : if we leave missingness to be judged by its close context inside the splitting process, first or second degree surrogate, does foresting actually make the contextual judgement a moot because the context per se is random selection? This, however, is a 'better' problem. At least it does not hurt that much. It certainly should make preserving missingness unnecessary.
As a practical matter, if you have large number of input features, you probably cannot have a 'good' strategy to impute after all. From the sheer imputation perspective, the best practice is anything but univariate. Which is in the contest of RF pretty much means to use the RF to impute before modeling with it.
Therefore, unless somebody tells me (or us), "we are not able to do that", I think we should enable carrying forward missing 'cells', entirely bypassing the subject of how 'best' to impute. | Is it possible to have missing values in scikit-learn ? How should they be represented? I couldn't find any documentation about that. | 0 | 1 | 39,190 |
0 | 18,020,591 | 0 | 0 | 0 | 0 | 3 | false | 34 | 2012-02-20T17:56:00.000 | 11 | 7 | 0 | Missing values in scikits machine learning | 9,365,982 | 1 | python,machine-learning,scikit-learn,missing-data,scikits | I have come across very similar issue, when running the RandomForestRegressor on data. The presence of NA values were throwing out "nan" for predictions. From scrolling around several discussions, the Documentation by Breiman recommends two solutions for continuous and categorical data respectively.
Calculate the Median of the data from the column(Feature) and use
this (Continuous Data)
Determine the most frequently occurring Category and use this
(Categorical Data)
According to Breiman the random nature of the algorithm and the number of trees will allow for the correction without too much effect on the accuracy of the prediction. This I feel would be the case if the presence of NA values is sparse, a feature containing many NA values I think will most likely have an affect. | Is it possible to have missing values in scikit-learn ? How should they be represented? I couldn't find any documentation about that. | 0 | 1 | 39,190 |
0 | 9,367,777 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2012-02-20T20:01:00.000 | 0 | 1 | 0 | Optimizer/minimizer for integer argument | 9,367,630 | 0 | python,numpy,scipy | There is no general solution for this problem. If you know the properties of the function it should be possible to deduce some bounds for the variables and then test all combinations. But that is not very efficient.
You could approximate a solution with scipy.optimize.leastsq and then round the results to integers. The quality of the result of course depends on the structure of the function. | Does anybody know a python function (proven to work and having its description in internet) which able to make minimum search for a provided user function when argument is an array of integers?
Something like
scipy.optimize.fmin_l_bfgs_b
scipy.optimize.leastsq
but for integers | 0 | 1 | 569 |
0 | 9,375,030 | 0 | 0 | 0 | 0 | 1 | true | 8 | 2012-02-21T09:17:00.000 | 8 | 1 | 0 | Why has the numpy random.choice() function been discontinued? | 9,374,885 | 1.2 | python,numpy,scipy | random.choice is as far as I can tell part of python itself, not of numpy. Did you import random?
Update: numpy 1.7 added a new function, numpy.random.choice. Obviously, you need numpy 1.7 for it.
Update2: it seems that in unreleased numpy 2.0, this was temporarily called numpy.random.sample. It has been renamed back. Which is why when using unreleased versions, you really should have a look at the API (pydoc numpy.random) and changelogs. | I've been working with numpy and needed the random.choice() function. Sadly, in version 2.0 it's not in the random or the random.mtrand.RandomState modules. Has it been excluded for a particular reason? There's nothing in the discussion or documentation about it!
For info, I'm running Numpy 2.0 on python 2.7 on mac os. All installed from the standard installers provided on the sites.
Thanks! | 0 | 1 | 6,270 |
0 | 62,791,902 | 0 | 1 | 0 | 0 | 1 | false | 23 | 2012-02-22T13:29:00.000 | 0 | 3 | 0 | How much memory is used by a numpy ndarray? | 9,395,758 | 0 | python,arrays,memory,numpy,floating-point | I gauss, easily, we can compute by print(a.size // 1024 // 1024, a.dtype)
it is similar to how much MB is uesd, however with the param dtype, float=8B, int8=1B ... | Does anybody know how much memory is used by a numpy ndarray? (with let's say 10,000,000 float elements). | 0 | 1 | 14,086 |
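A quick sketch of the arithmetic behind the question, assuming NumPy: the data buffer of an ndarray takes roughly size times itemsize bytes (plus a small constant overhead for the array object itself).

```python
import numpy as np

a = np.zeros(10000000, dtype=np.float64)  # 10,000,000 float elements

# Each float64 element occupies 8 bytes, so the buffer is about 80 MB.
print(a.size * a.itemsize)   # 80000000 bytes
print(a.nbytes / 1024 ** 2)  # ~76.3 MiB
```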
0 | 9,419,462 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2012-02-23T18:53:00.000 | 1 | 4 | 0 | have local numpy override global | 9,419,327 | 1.2 | python,numpy,centos | Python searches the path in order, so simply put the directory where you installed your NumPy first in the path.
You can check numpy.version.version to make sure you're getting the version you want. | I'm using a server where I don't have administrative rights and I need to use the latest version of numpy. The system administrator insists that he cannot update the global numpy to the latest version, so I have to install it locally.
I can do that without trouble, but how do I make sure that "import numpy" results in the newer local install to be imported, as opposed to the older global version? I can adjust my PYTHONPATH, but I will want to use some of the global imports as well so I can't exclude all the global packages.
I'm on CentOS 6, by the way.
Thanks! | 0 | 1 | 1,857 |
0 | 9,455,730 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-02-26T18:05:00.000 | 4 | 2 | 0 | RAM requirements for matrix processing | 9,455,651 | 1.2 | python,matrix | In theory, an element of {0, 1} should consume at most 1 bit per cell. That means 8 cells per byte or 1192092895 megabytes or about one petabyte, which is too much, unless you are google :) Not to mention, even processing (or saving) such matrix would take too much time (about a year I'd say).
You said that in many cases you won't even need matrix so large. So, you can create smaller matrix at start (10,000 x 10,000) and then double the size every time enlargment is needed, copying old contents.
If your matrix is sparse (has much much more 1's than 0's or vice-versa), then it is much more efficient to store just coordinates where ones are in some efficient data structure, depending what operations (search, data access) you need.
Side note: In many languages, you have to take proper care for that to be true, for example in C, even if you specify variable as boolean, it still takes one byte, 8 times as much as needed. | So I'm designing a matrix for a computer vision project and believe I have one of my calculations wrong. Unfortunately, I'm not sure where it's wrong.
I was considering creating a matrix that was 100,000,000 x 100,000,000 with each 'cell' containing a single integer (1 or 0). If my calculations are correct, it would take 9.53674316 × 10^9 MB. Is that correct?!?
My next question is, if it IS correct, are there ways to reduce memory requirements to a more realistic level while still keeping the matrix the same size? Of course, there is a real possibility I won't actually need a matrix that size but this is absolute worse case scenario (as put forth by a friend). The size seems ridiculous to me since we'd be covering such a small distance at a time.
Thanks1
Anthony | 0 | 1 | 793 |
0 | 9,478,656 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2012-02-28T08:05:00.000 | 2 | 3 | 0 | How do i fill "holes" in an image? | 9,478,347 | 0.132549 | python,image-processing,interpolation,mask,astronomy | What you want is not interpolation at all. Interpolation depends on the assumption that data between known points is roughly contiguous. In any non-trivial image, this will not be the case.
You actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIMP through the GIMP-resynthesize plugin. These filters are extremely advanced and to try to re-implement them is insane. A better choice would be to figure out how to use GIMP-resynthesize in your program instead. | I have photo images of galaxies. There are some unwanted data on these images (like stars or aeroplane streaks) that are masked out. I don't just want to fill the masked areas with some mean value, but to interpolate them according to surrounding data. How do i do that in python?
We've tried various functions in SciPy.interpolate package: RectBivariateSpline, interp2d, splrep/splev, map_coordinates, but all of them seem to work in finding new pixels between existing pixels, we were unable to make them fill arbitrary "hole" in data. | 0 | 1 | 5,262 |
0 | 9,500,809 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-02-29T14:02:00.000 | 1 | 1 | 0 | Eclipse editor doesn't recognize Scipy content | 9,500,524 | 1.2 | python,eclipse,scipy | Try to recreate your project in PyDev and add these new libraries. | I just installed Scipy and Numpy on my machine and added them to the System Library option in eclipse.
Now the program runs fine, but eclipse editor keeps giving this red mark on the side says "Unresolved import".
I guess I didn't configure correctly.
Any one know how to fix this ?
Thanks. | 0 | 1 | 611 |
0 | 9,523,685 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-03-01T20:31:00.000 | 1 | 4 | 0 | Random number generation with C++ or Python | 9,523,570 | 0.049958 | c++,python,random,simulation,probability | At least in C++, rand is sometimes rather poor quality, so code should rarely use it for anything except things like rolling dice or shuffling cards in children's games. In C++ 11, however, a set of random number generator classes of good quality have been added, so you should generally use them by preference.
Seeding based on time can work fine under some circumstances, but not if you want to make it difficult for somebody else to duplicate the same series of numbers (e.g., if you're generating nonces for encryption). Normally, you want to seed only once at the beginning of the program, at least in a single-threaded program. With multithreading, you frequently want a separate seed for each thread, in which case you need each one to start out unique to prevent generating the same sequences in all threads. | I heard that computation results can be very sensitive to choice of random number generator.
1 I wonder whether it is relevant to program own Mersenne-Twister or other pseudo-random routines to get a good number generator. Also, I don't see why I should not trust native or library generators as random.uniform() in numpy, rand() in C++. I understand that I can build generators on my own for distributions other than uniform (inverse repartition function methor, polar method). But is it evil to use one built-in generator for uniform sampling?
2 What is wrong with the default 'time' seed? Should one re-seed and how frequently in a code sample (and why)?
3 Maybe you have some good links on these topics!
--edit More precisely, I need random numbers for multistart optimization routines, and for uniform space sample to initialize some other optimization routine parameters. I also need random numbers for Monte Carlo methods (sensibility analysis). I hope the precisions help figure out the scope of question. | 0 | 1 | 3,275 |
0 | 9,567,221 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2012-03-05T11:45:00.000 | 1 | 3 | 0 | Cassandra/Pycassa: Getting random rows | 9,566,060 | 1.2 | python,cassandra,uuid,pycassa | You might be able to do this by making a get_range request with a random start key (just a random string), and a row_count of 1.
From memory, I think the finish key would need to be the same as start, so that the query 'wraps around' the keyspace; this would normally return all rows, but the row_count will limit that.
Haven't tried it but this should ensure you get a single result without having to know exact row keys. | Is there a possibility to retrieve random rows from Cassandra (using it with Python/Pycassa)?
Update: With random rows I mean randomly selected rows! | 0 | 1 | 2,264 |
0 | 9,570,191 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2012-03-05T16:23:00.000 | 0 | 4 | 0 | Blank Values in Excel File From CSV (not just rows) | 9,570,157 | 0 | python,excel,csv | Can you see the missing values when you open the CSV with wordpad? If so, then Python or any other scripting language should see them too. | I'm currently opening CSV files in Excel with multiple columns, where values will only appear if a number changes. For example, the data may ACTUALLY be: 90,90,90,90,91. But it will only appear as 90,,,,91. I'd really like the values in between to be filled with 90s. Is there anyway python could help with this? I really appreciate your help! | 0 | 1 | 2,496 |
0 | 9,570,477 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2012-03-05T16:23:00.000 | 0 | 4 | 0 | Blank Values in Excel File From CSV (not just rows) | 9,570,157 | 0 | python,excel,csv | You can also do this entirely in excel:
Select column (or whatever range you're working with), then go to Edit>Go To (Ctrl+G) and click Special.
Check Blanks & click OK.
This will select only the empty cells within the list.
Now type the = key, then up arrow and ctrl-enter.
This will put a formula in every blank cell to equal the cell above it. You could then copy ^ paste values only to get rid of the formulas. | I'm currently opening CSV files in Excel with multiple columns, where values will only appear if a number changes. For example, the data may ACTUALLY be: 90,90,90,90,91. But it will only appear as 90,,,,91. I'd really like the values in between to be filled with 90s. Is there anyway python could help with this? I really appreciate your help! | 0 | 1 | 2,496 |
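The answers above stay inside Wordpad and Excel; as an alternative sketch in Python (not taken from the thread), pandas' forward fill performs the same repair, turning 90,,,,91 into 90,90,90,90,91. The column names here are invented for the example.

```python
import io
import pandas as pd

# Hypothetical two-column CSV where the first column only records changes.
csv = io.StringIO("value,other\n90,a\n,b\n,c\n,d\n91,e\n")
df = pd.read_csv(csv)

# Forward-fill carries the last seen value into the blank cells.
df["value"] = df["value"].ffill()
print(df)
```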
0 | 9,597,117 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-03-07T03:43:00.000 | 4 | 4 | 0 | Clustering using k-means in python | 9,595,494 | 0.197375 | python,tags,cluster-analysis,data-mining,k-means | Since the data you have is binary and sparse (in particular, not all users have tagged all documents, right)? So I'm not at all convinced that k-means is the proper way to do this.
Anyway, if you want to give k-means a try, have a look at the variants such as k-medians (which won't allow "half-tagging") and convex/spherical k-means (which supposedly works better with distance functions such as cosine distance, which seems a lot more appropriate here). | I have a document d1 consisting of lines of form user_id tag_id.
There is another document d2 consisting of tag_id tag_name
I need to generate clusters of users with similar tagging behaviour.
I want to try this with k-means algorithm in python.
I am completely new to this and cant figure out how to start on this.
Can anyone give any pointers?
Do I need to first create different documents for each user using d1 with his tag vocabulary?
And then apply k-means algorithm on these documents?
There are like 1 million users in d1. I am not sure I am thinking in right direction, creating 1 million files ? | 0 | 1 | 2,252 |
0 | 9,646,517 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2012-03-08T01:31:00.000 | 1 | 1 | 0 | Solve linear system in Python without NumPy | 9,611,746 | 1.2 | python,numpy,scipy,jython,linear-algebra | As suggested by @talonmies' comment, the real answer to this is 'find an equivalent Java package.' | I have to solve linear equations system using Jython, so I can't use Num(Sci)Py for this purpose. What are the good alternatives? | 0 | 1 | 2,868 |
0 | 11,955,915 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2012-03-09T22:36:00.000 | 0 | 4 | 0 | Python Pandas: can't find numpy.core.multiarray when importing pandas | 9,641,916 | 0 | python,numpy,pandas | @user248237:
I second Keith's suggestion that its probably a 32/64 bit compatibility issue. I ran into the same problem just this week while trying to install a different module. Check the versions of each of your modules and make everything matches. In general, I would stick to the 32 bit versions -- not all modules have official 64 bit support. I uninstalled my 64 bit version of python and replaced it with a 32 bit one, reinstalled the modules, and haven't had any problems since. | I'm trying to get my code (running in eclipse) to import pandas.
I get the following error: "ImportError: numpy.core.multiarray failed to import"when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1 | 0 | 1 | 5,833 |
0 | 12,003,130 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2012-03-09T22:36:00.000 | 1 | 4 | 0 | Python Pandas: can't find numpy.core.multiarray when importing pandas | 9,641,916 | 0.049958 | python,numpy,pandas | Just to make sure:
Did you install pandas from the sources ? Make sure it's using the version of NumPy you want.
Did you upgrade NumPy after installing pandas? Make sure to recompile pandas, as there can be some changes in the ABI (but w/ that version of NumPy, I doubt it's the case)
Are you calling pandas and/or Numpy from their source directory ? Bad idea, NumPy tends to choke on that. | I'm trying to get my code (running in eclipse) to import pandas.
I get the following error: "ImportError: numpy.core.multiarray failed to import"when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1 | 0 | 1 | 5,833 |
0 | 12,007,981 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2012-03-09T22:36:00.000 | 1 | 4 | 0 | Python Pandas: can't find numpy.core.multiarray when importing pandas | 9,641,916 | 0.049958 | python,numpy,pandas | Try to update to numpy version 1.6.1. Helped for me! | I'm trying to get my code (running in eclipse) to import pandas.
I get the following error: "ImportError: numpy.core.multiarray failed to import"when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1 | 0 | 1 | 5,833 |
0 | 11,876,131 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-03-10T07:29:00.000 | 1 | 1 | 0 | Interpolation of large 2d masked array | 9,644,735 | 0.197375 | python,matrix,numpy,scipy,interpolation | Very late, but...
I have a problem similar to yours, and am getting the segmentation fault with bisplines, and also memory error with rbf (in which the "thin_plate" function works great for me.
Since my data is unstructured but is created in a structured manner, I use downsampling to half or one third of the density of data points, so that I can use Rbf. What I advise you to do is (very inefficient, but still better than not doing at all) to subdivide the matrix in many overlapping regions, then create rbf interpolators for each region, then when you interpolate one point you choose the appropriate interpolator.
Also, if you have a masked array, you could still perform interpolation in the unmasked array, then apply the mask on the result. (well actually no, see the comments)
Hope this helps somebody | I have a numpy masked matrix. And wanted to do interpolation in the masked regions.
I tried the RectBivariateSpline but it didn't recognize the masked regions as masked and used those points also to interpolate. I also tried the bisplrep after creating the X,Y,Z 1d vectors. They were each of length 45900. It took a lot of time to calculate the Bsplines. And finally gave a Segmentation fault while running bisplev .
The 2d matrix is of the size 270x170.
Is there any way to make RectBivariateSpline not to include the masked regions in interpolation? Or is there any other method?
bisplrep was too slow.
Thanking you,
indiajoe
UPDATE :
When the grid is small the scipy.interpolate.Rbf with 'linear' function is doing reasonable job. But it gives error when the array is large.
Is there any other function which will allow me to interpolate and smooth my matrix?
I have also concluded the following. Do correct me if I am wrong.
1) RectBivariateSpline requires perfect filled matrix and hence masked matrices cannot be used. | 0 | 1 | 1,646 |
0 | 9,651,522 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-03-10T22:18:00.000 | 1 | 2 | 0 | Image spot detection in Python | 9,650,670 | 0.099668 | python,image-processing,computer-vision | I do not know if there is a library but you could segment these areas using a simple thresholding segmentation algorithm. Say, you want to find red spots. Extract the red channel from the image, select a threshold, and eliminate pixels that are below the threshold. The resulting pixels are your spots. To find a suitable threshold you can build the image's red channel histogram and find the valley there. The lowest point in the valley is the threshold that you could use. If there are more than one valley, smooth the histogram until there is one valley and two peaks. You can use a Gaussian function to smooth the histogram. To find the spots from the remaining pixels you can use the labeling algorithm and then find the connected components in the graph that the labeling algorithm produced. Yes, it is simple. :) | I have millions of images containing every day photos. I'm trying to find a way to pick out those in which some certain colours are present, say, red and orange, disregarding the shape or object. The size may matter - e.g., at least 50x50 px.
Is there an efficient and lightweight library for achieving this? I know there is OpenCV and it seems quite powerful, but would it be too bloated for this task? It's a relatively simple task, right?
Thanks | 0 | 1 | 2,147 |
0 | 9,675,452 | 0 | 0 | 1 | 0 | 1 | false | 2 | 2012-03-12T21:56:00.000 | 2 | 4 | 0 | Running m-files from Python | 9,675,386 | 0.099668 | python,matlab | You can always start matlab as separate subprocess and collect results via std.out/files. (see subprocess package). | pymat doesnt seem to work with current versions of matlab, so I was wondering if there is another equivalent out there (I havent been able to find one). The gist of what would be desirable is running an m-file from python (2.6). (and alternatives such as scipy dont fit since I dont think they can run everything from the m-file).
Thanks in advance! | 0 | 1 | 5,746 |
0 | 9,739,828 | 0 | 1 | 0 | 0 | 1 | true | 6 | 2012-03-15T15:33:00.000 | 7 | 1 | 0 | How do I tell pandas to parse a particular column as a datetime object, but not make it an index? | 9,723,000 | 1.2 | python,parsing,datetime,pandas | Pass dateutil.parser.parse (or another datetime conversion function) in the converters argument to read_csv | I have a csv file where one of the columns is a date/time string. How do I parse it correctly with pandas? I don't want to make that column the index. Thanks!
Uri | 0 | 1 | 1,378 |
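A sketch of the converters approach from the answer above, with parse_dates noted as the built-in shortcut; the column names are invented for the example.

```python
import io
import dateutil.parser
import pandas as pd

csv = io.StringIO("id,when,value\n1,2012-03-15 09:30:00,4.2\n2,2012-03-16 10:15:00,5.1\n")

# Pass a datetime conversion function for the column via converters, as the
# answer suggests; since index_col is not set, "when" stays a regular column.
df = pd.read_csv(csv, converters={"when": dateutil.parser.parse})
# Built-in shortcut: pd.read_csv(..., parse_dates=["when"])
print(df)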
0 | 9,731,658 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2012-03-16T03:36:00.000 | 2 | 2 | 0 | Creating a thread for each operation or a some threads for various operations? | 9,731,496 | 1.2 | python,multithreading,matrix,distributed | You will probably get the best performance if you use one thread for each CPU core available to the machine running your application. You won't get any performance benefit by running more threads than you have processors.
If you are planning to spawn new threads each time you perform a matrix multiplication then there is very little hope of your multi-threaded app ever outperforming the single-threaded version unless you are multiplying really huge matrices. The overhead involved in thread creation is just too high relative to the time required to multiply matrices. However, you could get a significant performance boost if you spawn all the worker threads once when your process starts and then reuse them over and over again to perform many matrix multiplications.
For each pair of matrices you want to multiply you will want to load the multiplicand and multiplier matrices into memory once and then allow all of your worker threads to access the memory simultaneously. This should be safe because those matrices will not be changing during the multiplication.
You should also be able to allow all the worker threads to write their output simultaneously into the same output matrix because (due to the nature of matrix multiplication) each thread will end up writing its output to different elements of the matrix and there will not be any contention.
I think you should distribute the rows between threads by maintaining an integer NextRowToProcess that is shared by all of the threads. Whenever a thread is ready to process another row it calls InterlockedIncrement (or whatever atomic increment operation you have available on your platform) to safely get the next row to process. | For a class project I am writing a simple matrix multiplier in Python. My professor has asked for it to be threaded. The way I handle this right now is to create a thread for every row and throw the result in another matrix.
What I wanted to know if it would be faster that instead of creating a thread for each row it creates some amount threads that each handles various rows.
For example: given Matrix1 100x100 * Matrix2 100x100 (matrix sizes can vary widely):
4 threads each handling 25 rows
10 threads each handling 10 rows
Maybe this is a problem of fine tuning or maybe the thread creation process overhead is still faster than the above distribution mechanism. | 0 | 1 | 251 |
0 | 9,760,852 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2012-03-16T09:05:00.000 | 2 | 1 | 0 | scikits confusion matrix with cross validation | 9,734,403 | 1.2 | python,machine-learning,scikits,scikit-learn | You can either use an aggregate confusion matrix or compute one for each CV partition and compute the mean and the standard deviation (or standard error) for each component in the matrix as a measure of the variability.
For the classification report, the code would need to be modified to accept 2 dimensional inputs so as to pass the predictions for each CV partitions and then compute the mean scores and std deviation for each class. | I am training a svm classifier with cross validation (stratifiedKfold) using the scikits interfaces. For each test set (of k), I get a classification result. I want to have a confusion matrix with all the results.
Scikits has a confusion matrix interface:
sklearn.metrics.confusion_matrix(y_true, y_pred)
My question is how should I accumulate the y_true and y_pred values. They are arrays (numpy). Should I define the size of the arrays based on my k-fold parameter? And for each result I should add the y_true and y-pred to the array ???? | 0 | 1 | 4,075 |
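A sketch of the aggregate-matrix option from the answer above, written against the current scikit-learn API (the thread predates sklearn.model_selection) and using the iris data purely as a stand-in.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
y_true_all, y_pred_all = [], []

# Accumulate the true and predicted labels from every test fold, then build
# a single confusion matrix over all of them.
for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
    clf = SVC().fit(X[train_idx], y[train_idx])
    y_true_all.extend(y[test_idx])
    y_pred_all.extend(clf.predict(X[test_idx]))

print(confusion_matrix(y_true_all, y_pred_all))
```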
0 | 9,760,985 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-03-18T17:46:00.000 | 0 | 2 | 0 | How to get related topics from a present wikipedia article? | 9,760,636 | 0 | python,keyword,wikipedia,topic-maps | You can scrape the categories if you want. If you're working with python, you can read the wikitext directly from their API, and use mwlib to parse the article and find the links.
A more interesting but harder to implement approach would be to create clusters of related terms, and given the list of terms extracted from an article, find the closest terms to them. | I am writing a user-app that takes input from the user as the current open wikipedia page. I have written a piece of code that takes this as input to my module and generates a list of keywords related to that particular article using webscraping and natural language processing.
I want to expand the functionality of the app by providing in addition to the keywords that i have identified, a set of related topics that may be of interest to the user. Is there any API that wikipedia provides that will do the trick. If there isn't, Can anybody Point me to what i should be looking into (incase i have to write code from scratch). Also i will appreciate any pointers in identifying any algorithm that will train the machine to identify topic maps. I am not seeking any paper but rather a practical implementation of something basic
so to summarize,
I need a way to find topics related to current article in wikipedia (categories will also do)
I will also appreciate a sample algorithm for training a machine to identify topics that usually are related and clustered.
ps. please be specific because i have researched through a number of obvious possibilities
appreciate it thank you | 0 | 1 | 1,060 |
0 | 9,771,616 | 0 | 0 | 0 | 0 | 3 | true | 8 | 2012-03-19T10:04:00.000 | 11 | 3 | 0 | How to save ctypes objects containing pointers | 9,768,218 | 1.2 | python,ctypes,pickle | Python has no way of doing that automatically for you:
You will have to build code to pick all the desired Data yourself, putting them in a suitable Python data structure (or just adding the data in a unique bytes-string where you will know where each element is by its offset) - and then save that object to disk.
This is not a "Python" problem - it is exactly a problem Python solves for you when you use Python objects and data. When coding in C or lower level, you are responsible to know not only where your data is, but also, the length of each chunk of data (and allocate memory for each chunk, and free it when done, and etc). And this is what you have to do in this case.
Your data structure should give you not only the pointers, but also the length of the data in each pointed location (in a way or the other - if the pointer is to another structure, "size_of" will work for you) | I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.
How can I save the ctypes object and what the pointers are pointing to for later use?
I tried
scipy.io.savemat => TypeError: Could not convert object to array
cPickle => ctypes objects containing pointers cannot be pickled | 0 | 1 | 14,249 |
0 | 41,899,145 | 0 | 0 | 0 | 0 | 3 | false | 8 | 2012-03-19T10:04:00.000 | 1 | 3 | 0 | How to save ctypes objects containing pointers | 9,768,218 | 0.066568 | python,ctypes,pickle | To pickle a ctypes object that has pointers, you would have to define your own __getstate__/__reduce__ methods for pickling and __setstate__ for unpickling. More information in the docs for pickle module. | I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.
How can I save the ctypes object and what the pointers are pointing to for later use?
I tried
scipy.io.savemat => TypeError: Could not convert object to array
cPickle => ctypes objects containing pointers cannot be pickled | 0 | 1 | 14,249 |
0 | 9,768,597 | 0 | 0 | 0 | 0 | 3 | false | 8 | 2012-03-19T10:04:00.000 | 0 | 3 | 0 | How to save ctypes objects containing pointers | 9,768,218 | 0 | python,ctypes,pickle | You could copy the data into a Python data structure and dereference the pointers as you go (using the contents attribute of a pointer). | I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.
How can I save the ctypes object and what the pointers are pointing to for later use?
I tried
scipy.io.savemat => TypeError: Could not convert object to array
cPickle => ctypes objects containing pointers cannot be pickled | 0 | 1 | 14,249 |
0 | 9,802,282 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-03-21T08:48:00.000 | 3 | 1 | 0 | python/numpy: Using own data structure with np.allclose() ? Where to look for the requirements / what are they? | 9,801,235 | 1.2 | python,numpy,magic-methods,type-coercion | The term "arraylike" is used in the numpy documentation to mean "anything that can be passed to numpy.asarray() such that it returns an appropriate numpy.ndarray." Most sequences with proper __len__() and __getitem__() methods work okay. Note that the __getitem__(i) must be able to accept a single integer index in range(len(self)), not just a list of indices as you seem to indicate. The result from this __getitem__(i) must either be an atomic value that numpy knows about, like a float or an int, or be another sequence as above. Without more details about your Matrix Product State implementation, that's about all I can help. | I'm implementing a Matrix Product State class, which is some kind of special tensor decomposition scheme in python/numpy for fast algorithm prototyping.
I don't think that there already is such a thing out there, and I want to do it myself to get a proper understanding of the scheme.
What I want to have is that, if I store a given tensor T in this format as T_mps, I can access the reconstructed elements by T_mps[ [i0, i1, ..., iL] ]. This is achieved by the getitem(self, key) method and works fine.
Now I want to use numpy.allclose(T, mps_T) to see if my decomposition is correct.
But when I do this I get a type error for my own type:
TypeError: function not supported for these types, and can't coerce safely to supported types
I looked at the documentation of allclose and there it is said, that the function works for "array like" objects. Now, what is this "array like" concept and where can I find its specification ?
Maybe I'm better off, implementing my own allclose method ? But that would somewhat be reinventing the wheel, wouldn't it ?
Appreciate any help
Thanks in advance | 0 | 1 | 416 |
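A minimal sketch of what "array like" means in practice for np.asarray, and hence for np.allclose: a plain sequence exposing __len__ and an integer-index __getitem__, as the answer describes. The class below is invented purely for illustration.

```python
import numpy as np

class MyVector:
    """Toy array-like: __len__ plus __getitem__ accepting a single integer index."""

    def __init__(self, values):
        self._values = list(values)

    def __len__(self):
        return len(self._values)

    def __getitem__(self, i):
        return self._values[i]

v = MyVector([1.0, 2.0, 3.0])
# np.allclose coerces v through np.asarray, which walks the sequence protocol.
print(np.allclose(v, [1.0, 2.0, 3.0 + 1e-12]))  # True
```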
0 | 9,820,025 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-03-22T07:22:00.000 | 1 | 1 | 0 | Python import works in interpreter, doesn't work in script Numpy/Matplotlib | 9,817,995 | 1.2 | python,path,numpy,matplotlib | You'll generally need to install numpy, matplotlib etc once for every version of python you use, as it will install itself to the specific 'python2.x/site-packages' directory.
Is the above output generated from a 2.6 or 2.7 session? If it's a 2.6 session, then yes, pointing your PYTHONPATH at 2.7 won't work - numpy includes compiled C code (e.g. the multiarray.so file) which will have been built against a specific version of python.
If you don't fancy maintaining two sets of packages, I'd recommend installing numpy, matplotlib etc all for version 2.7, removing that PYTHONPATH setting, and making sure that both scripts and interpreter sessions use version 2.7.
If you want to keep both versions you'll just have to install each packages twice (and you'll probably still wnat to undo your PTYHONPATH change) | I'm on OSX Snow Leopard and I run 2.7 in my scripts and the interpreter seems to be running 2.6
Before I was able to import numpy but then I would get an error when trying to import matplotlib so I went looking for a solution and updated my PYTHONPATH variable, but I think I did it incorrectly and have now simply screwed everything up.
This is what I get when I try and import numpy in my script:
Traceback (most recent call last):
File "./hh_main.py", line 5, in
import numpy
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/init.py", line 137, in
import add_newdocs
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/add_newdocs.py", line 9, in
from numpy.lib import add_newdoc
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/lib/init.py", line 4, in
from type_check import *
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/lib/type_check.py", line 8, in
import numpy.core.numeric as _nx
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/core/init.py", line 5, in
import multiarray
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/core/multiarray.so, 2): Symbol not found: _PyCapsule_Import
Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/core/multiarray.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site->packages/numpy/core/multiarray.so
Furthermore this is what I get from sys.path in the interpreter:
['', '/Users/joshuaschneier/Documents/python_files', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload']
And this is my PYTHONPATH which I guess I updated wrong:
:/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
Thanks for any help. | 0 | 1 | 2,566 |
0 | 9,873,885 | 0 | 1 | 0 | 0 | 1 | false | 12 | 2012-03-26T14:05:00.000 | 3 | 5 | 0 | Choose m evenly spaced elements from a sequence of length n | 9,873,626 | 0.119427 | python,algorithm | Use a loop (int i=0; i < m; i++)
Then to get the indexes you want, Ceil(i*m/n). | I have a vector/array of n elements. I want to choose m elements.
The choices must be fair / deterministic -- equally many from each subsection.
With m=10, n=20 it is easy: just take every second element.
But how to do it in the general case? Do I have to calculate the LCD? | 0 | 1 | 11,243 |
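A hedged sketch of one common indexing scheme for this problem (stepping through index space in increments of n/m); it reproduces the every-second-element case from the question, though other rounding conventions are possible.

```python
def evenly_spaced(seq, m):
    # Map i = 0..m-1 onto indices spread across the full range 0..n-1.
    n = len(seq)
    return [seq[(i * n) // m] for i in range(m)]

print(evenly_spaced(list(range(20)), 10))  # [0, 2, 4, ..., 18] -- every second element
```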
0 | 12,095,050 | 0 | 1 | 0 | 0 | 1 | false | 33 | 2012-03-27T20:38:00.000 | 14 | 8 | 0 | Pickle alternatives | 9,897,345 | 1 | python,serialization | Pickle is actually quite fast so long as you aren't using the (default) ASCII protocol. Just make sure to dump using protocol=pickle.HIGHEST_PROTOCOL. | I am trying to serialize a large (~10**6 rows, each with ~20 values) list, to be used later by myself (so pickle's lack of safety isn't a concern).
Each row of the list is a tuple of values, derived from some SQL database. So far, I have seen datetime.datetime, strings, integers, and NoneType, but I might eventually have to support additional data types.
For serialization, I've considered pickle (cPickle), json, and plain text - but only pickle saves the type information: json can't serialize datetime.datetime, and plain text has its obvious disadvantages.
However, cPickle is pretty slow for data this large, and I'm looking for a faster alternative. | 0 | 1 | 35,941 |
0 | 9,927,957 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-03-29T14:42:00.000 | 3 | 3 | 0 | Reading csv in python pandas and handling bad values | 9,927,711 | 0.197375 | python,numpy,pandas | You can pass a custom list of values to be treated as missing using pandas.read_csv . Alternately you can pass functions to the converters argument. | I am using pandas to read a csv file. The data are numbers but stored in the csv file as text. Some of the values are non-numeric when they are bad or missing. How do I filter out these values and convert the remaining data to integers.
I assume there is a better/faster way than looping over all the values and using isdigit() to test for them being numeric.
Does pandas or numpy have a way of just recognizing bad values in the reader? If not, what is the easiest way to do it? Do I have to specific the dtypes to make this work? | 0 | 1 | 3,639 |
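A sketch of the na_values route mentioned in the answer above; the bad-value tokens and column names are made up for the example.

```python
import io
import pandas as pd

csv = io.StringIO("a,b\n1,2\nbad,4\n5,n/a\n")

# Tokens listed in na_values are read in as NaN instead of text.
df = pd.read_csv(csv, na_values=["bad", "n/a"])
print(df.dropna().astype(int))  # keep only the fully numeric rows
```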
0 | 9,993,744 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2012-03-29T16:05:00.000 | 0 | 1 | 0 | Numpy needs the ucs2 | 9,929,170 | 1.2 | linux,unicode,numpy,python-2.7,ucs | I suggest that a quick solution to these sort of complications is that you use the Enthought Python Distribpotion (EPD) on Linux which includes a wide range of extensions. Cheers. | I have installed Numpy using ActivePython and when I try to import numpy module, it is throwing the following error:
ImportError:
/opt/ActivePython-2.7/lib/python2.7/site-packages/numpy/core/multiarray.so:
undefined symbol: PyUnicodeUCS2_FromUnicode
I am fairly new to python, and I am not sure what to do. I appreciate if you could point me to the right direction.
Should I remove python and configure its compilation with the
"--enable-unicode=ucs2" or "--with-wide-unicode" option?
Cheers
OS: Fedora 16, 64bit;
Python version: Python 2.7.2 (default, Mar 26 2012, 10:29:24);
The current compile Unicode version: ucs4 | 0 | 1 | 1,398 |
0 | 28,609,198 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-03-30T08:13:00.000 | 2 | 3 | 0 | ZeroMQ PUB/SUB filtering and performance | 9,939,238 | 0.132549 | python,zeromq | From the ØMQ guide:
From ZeroMQ v3.x, filtering happens at the publisher side when using a connected protocol (tcp:// or ipc://). Using the epgm:// protocol, filtering happens at the subscriber side. In ZeroMQ v2.x, all filtering happened at the subscriber side. | I am trying to implement a broker using zeromq PUB/SUB(python eventlets). zeromq 2.1 does not seem to implement filtering at publisher and all messages are broadcasted to all subscribers which inturn apply filter. Is there some kind of workaround to achieve filtering at publisher. If not how bad is the performance if there are ~25 publishers and 25 subscribers exchanging msgs @ max rate of 200 msgs per second where msg_size ~= 5K through the broker.
Are there any opensource well-tested zero-mq broker implementations.?? | 0 | 1 | 4,347 |
0 | 9,982,670 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-04-02T16:37:00.000 | 1 | 4 | 0 | Thinning contour lines in a binary image | 9,980,270 | 0.049958 | python,image-processing,binary | A combination of erosion and dilation (and vice versa) on a binary image can help to get rid of salt n pepper like noise leaving small lines intact. Keywords are 'rank order filters' and 'morphological filters'. | I have a binary image with contour lines and need to purify each contour line of all unnecessary pixels, leaving behind a minimally connected line.
Can somebody give me a source, code example or further information for this kind of problem and where to search for help, please? | 0 | 1 | 6,610 |
0 | 10,003,296 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2012-04-04T00:08:00.000 | 6 | 2 | 0 | Python determinant calculation(without the use of external libraries) | 10,003,232 | 1 | python,matrix,linear-algebra | The rule of Sarrus is only a mnemonic for solving 3x3 determinants, and won't be as helpful moving beyond that size.
You should investigate the Leibniz formula for calculating the determinant of an arbitrarily large square matrix. The nice thing about this formula is that the determinant of an n*n matrix is that it can be determined in terms of a combination of the determinants of some of its (n-1)*(n-1) sub-matrices, which lends itself nicely to a recursive function solution.
If you can understand the algorithm behind the Leibniz formula, and you have worked with recursive functions before, it will be straightforward to translate this in to code (Python, or otherwise), and then you can find the determinant of 4x4 matrices and beyond! | I'm making a small matrix operations library as a programming challenge to myself(and for the purpose of learning to code with Python), and I've come upon the task of calculating the determinant of 2x2, 3x3 and 4x4 matrices.
As far as my understanding of linear algebra goes, I need to implement the Rule of Sarrus in order to do the first 2, but I don't know how to tackle this Pythonically or for matrices of larger size. Any hints, tips or guides would be much appreciated. | 0 | 1 | 3,244 |
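A short sketch of recursive cofactor (Laplace) expansion along the first row, in the spirit of the answer above; it works for square matrices of any size given as nested lists, although it is exponential-time and only sensible for small matrices.

```python
def det(m):
    # Base case: 1x1 matrix.
    if len(m) == 1:
        return m[0][0]
    # Expand along the first row: alternating sign * entry * determinant of the minor.
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[2, 0, 1], [0, 3, 0], [0, 0, 4]]))  # 24
```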
0 | 10,054,525 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2012-04-04T07:11:00.000 | 1 | 2 | 0 | Creating a corpus from data in a custom format | 10,006,467 | 1.2 | python,nlp,nltk | You don't need to input the files yourself or to provide words and sents methods.
Read in your corpus with PlaintextCorpusReader, and it will provide those for you.
The corpus reader constructor accepts arguments for the path and filename pattern of the files, and for the input encoding (be sure to specify it).
The constructor also has optional arguments for the sentence and word tokenization functions, so you can pass it your own method to break up the text into sentences. If word and sentence detection is really simple, i.e., if the | character has other uses, you can configure a tokenization function from the nltk's RegexpTokenizer family, or you can write your own from scratch. (Before you write your own, study the docs and code or write a stub to find out what kind of input it's called with.)
If recognizing sentence boundaries is non-trivial, you can later figure out how to train the nltk's PunktSentenceTokenizer, which uses an unsupervized statistical algorithm to learn which uses of the sentence terminator actually end a sentence.
If the configuration of your corpus reader is fairly complex, you may find it useful to create a class that specializes PlaintextCorpusReader. But much of the time that's not necessary. Take a look at the NLTK code to see how the gutenberg corpus is implemented: It's just a PlainTextCorpusReader instance with appropriate arguments for the constructor. | I have hundreds of files containing text I want to use with NLTK. Here is one such file:
বে,বচা ইয়াণ্ঠা,র্চা ঢার্বিত তোখাটহ নতুন, অ প্রবঃাশিত।
তবে ' এ বং মুশায়েরা ' পত্রিব্যায় প্রকাশিত তিনটি লেখাই বইযে
সংব্যজান ব্যরার জনা বিশেষভাবে পরিবর্ধিত। পাচ দাপনিকেব
ড:বন নিয়ে এই বই তৈরি বাবার পরিব্যল্পনাও ম্ভ্রাসুনতন
সামন্তেরই। তার আর তার সহকারীদেব নিষ্ঠা ছাডা অল্প সময়ে
এই বই প্রব্যাশিত হতে পারত না।,তাঁদের সকলকে আমাধ
নমস্কার জানাই।
বতাব্যাতা শ্রাবন্তা জ্জাণ্ণিক
জানুয়ারি ২ ণ্ট ণ্ট ৮
Total characters: 378
Note that each line does not contain a new sentence. Rather, the sentence terminator - the equivalent of the period in English - is the '।' symbol.
Could someone please help me create my corpus? If imported into a variable MyData, I would need to access MyData.words() and MyData.sents(). Also, the last line should not appear in the corpus (it merely contains a character count).
Please note that I will need to run operations on data from all the files at once.
Thanks in advance! | 0 | 1 | 272 |
0 | 10,043,383 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2012-04-06T10:46:00.000 | 2 | 1 | 0 | Segmentation fault Python | 10,042,429 | 1.2 | python,segmentation-fault,enthought | Python only SEGFAULTs if
There is an error in native extension (DLL) code that has been loaded
The virtual machine itself has bugs (it does not)
Run Python in -vvv mode to see more information about import issues.
You probably need to recompile the modules you need against the Python build you are using. Native extensions are not compatible across Python major versions or architectures (32-bit vs. 64-bit).
You can also use gdb to extract a C stack trace, which tells you exactly where and why the interpreter crashes.
These are only tips on what you should do; because the problem is specific to your configuration and not reproducible, people can only tell you how to troubleshoot it further. Since the troubleshooting methods given here may be too advanced, I simply recommend reinstalling everything.
Before that I installed Python2.7 and installed other modules (e.g. opencv).
Enthought establishes itself as the default python.
It is called 7.2, but it is Python 2.7.
Now if I want to import cv in the Enthought Python, it always gives me a segmentation fault error.
Is there any way to import cv in the Enthought Python?
That would be awesome.
Also, installing any new module into Enthought seems to produce the same error.
Any solution for that would be great.
Thanks! | 0 | 1 | 4,794 |
0 | 10,724,685 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-04-09T06:20:00.000 | 0 | 1 | 0 | Spyder plotting blocks console commands | 10,069,680 | 0 | python,matlab,matplotlib,ipython,spyder | I was having a similar (I think) problem. Make sure your interpreter is set to execute in the current interpreter (default, should allow for interactive plotting). If it's set to execute in a new dedicated python interpreter make sure that interact with the python interpreter after execution is selected. This solved the problem for me. | Whenever I execute a plt.show() in an Ipython console in spyderlib, the console freezes until I close the figure window. This only occurs in spyderlib and the blocking does occur when I run ipython --pylab or run ipython normally and call plt.ion() before plotting. I've tried using plt.draw(), but nothing happens with that command.
plt.ion() works for ipython, but when I run the same command in spyder it seems not to plot anything at all (plt.show() no longer works).
Environment Details:
Python 2.6.5, Qt 4.6.2, PyQt4 (API v2) 4.7.2 on Linux | 0 | 1 | 1,329 |
0 | 20,901,570 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2012-04-09T09:57:00.000 | 0 | 2 | 0 | Can we search content(text) within images using plone 4.1? | 10,071,609 | 1.2 | python,plone | Best is to use collective.DocumentViewer with various options to select from | How can we search content(text) within images using plone 4.1. I work on linux Suppose an image say a sample.jpg contains text like 'Happy Birthday', on using search 'Birthday' I should get the contents i.e sample.jpg | 0 | 1 | 232 |
0 | 10,076,295 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-04-09T16:20:00.000 | 4 | 1 | 0 | scipy: significance of the return values of spearmanr (correlation) | 10,076,222 | 1.2 | python,statistics,scipy,correlation | It's up to you to choose the level of significance (alpha). To be coherent you shall choose it before running the test. The function will return you the lowest alpha you can choose for which you reject the null hypothesis (H0) [reject H0 when p-value < alpha or equivalently -p-value>-alpha].
You therefore know that the lowest value for which you reject the null hypothesis (H0) is p-value (2.3569040685361066e-65). Therefore being p-value incredibly small your null hypothesis is rejected for any relevant level of alpha (usually alpha = 0.05). | The output of spearmanr (Spearman correlation) of X,Y gives me the following:
Correlation: 0.54542821980327882
P-Value: 2.3569040685361066e-65
where len(X)=len(Y)=800.
My questions are as follows:
0) What is the confidence (alpha?) here?
1) If correlation coefficient > alpha, the hypothesis of the correlation being a coincidence is rejected, thus there is correlation. Is this true?
Thanks in advance.. | 1 | 1 | 1,006 |
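For the record above, a small illustration of how the returned p-value is compared against a chosen alpha (the 0.05 threshold is the usual convention, not something the function picks for you); the arrays are made-up stand-ins for X and Y:

import numpy as np
from scipy.stats import spearmanr

x = np.random.rand(800)
y = 0.5 * x + np.random.rand(800)   # stand-in data with a monotone relation

rho, p_value = spearmanr(x, y)
alpha = 0.05                        # chosen before running the test
if p_value < alpha:
    print("Reject H0: correlation is significant, rho = %.3f" % rho)
else:
    print("Cannot reject H0 at alpha = %.2f" % alpha)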
0 | 39,361,043 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2012-04-10T05:54:00.000 | -3 | 2 | 0 | python numpy sort eigenvalues | 10,083,772 | -0.291313 | python,numpy | np.linalg.eig will often return complex values. You may want to consider using np.sort_complex(eig_vals). | I am using linalg.eig(A) to get the eigenvalues and eigenvectors of a matrix. Is there an easy way to sort these eigenvalues (and associated vectors) in order? | 0 | 1 | 13,966 |
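A common idiom for the sorting asked about in the record above, assuming a real symmetric matrix for simplicity (for complex eigenvalues you would sort on, e.g., the real part, or use np.sort_complex as suggested in the answer):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
eig_vals, eig_vecs = np.linalg.eig(A)

idx = np.argsort(eig_vals)          # ascending order; use idx[::-1] for descending
eig_vals = eig_vals[idx]
eig_vecs = eig_vecs[:, idx]         # eigenvectors are the columns, so reorder columns

print(eig_vals)
print(eig_vecs)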
0 | 10,101,163 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2012-04-11T04:01:00.000 | 0 | 4 | 0 | How do I initialize a one-dimensional array of two-dimensional elements in Python? | 10,099,619 | 0 | python,arrays | Using the construct
[[0,0]]*3
works just fine and returns the following:
[[0, 0], [0, 0], [0, 0]] | I want to initialize an array that has X two-dimensional elements. For example, if X = 3, I want it to be [[0,0], [0,0], [0,0]]. I know that [0]*3 gives [0, 0, 0], but how do I do this for two-dimensional elements? | 0 | 1 | 1,407 |
0 | 10,099,628 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2012-04-11T04:01:00.000 | 0 | 4 | 0 | How do I initialize a one-dimensional array of two-dimensional elements in Python? | 10,099,619 | 0 | python,arrays | I believe that it's [[0,0],]*3 | I want to initialize an array that has X two-dimensional elements. For example, if X = 3, I want it to be [[0,0], [0,0], [0,0]]. I know that [0]*3 gives [0, 0, 0], but how do I do this for two-dimensional elements? | 0 | 1 | 1,407 |
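Both answers above produce the right-looking value, but one caveat that is easy to verify and worth keeping in mind: [[0,0]]*3 repeats references to the same inner list, so a list comprehension is the safer construction when the elements will be modified. A tiny demonstration:

X = 3

aliased = [[0, 0]] * X              # three references to a single inner list
aliased[0][0] = 99
print(aliased)                      # [[99, 0], [99, 0], [99, 0]]

independent = [[0, 0] for _ in range(X)]
independent[0][0] = 99
print(independent)                  # [[99, 0], [0, 0], [0, 0]]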
0 | 10,108,983 | 0 | 0 | 1 | 0 | 1 | false | 20 | 2012-04-11T14:47:00.000 | 1 | 5 | 0 | Detecting geographic clusters | 10,108,368 | 0.039979 | python,r,geolocation,cran | A few ideas:
Ad-hoc & approximate: The "2-D histogram". Create arbitrary "rectangular" bins, of the degree width of your choice, assign each bin an ID. Placing a point in a bin means "associate the point with the ID of the bin". Upon each add to a bin, ask the bin how many points it has. Downside: doesn't correctly "see" a cluster of points that straddle a bin boundary; and: bins of "constant longitudinal width" actually are (spatially) smaller as you move north.
Use the "Shapely" library for Python. Follow its stock example for "buffering points", and do a cascaded union of the buffers. Look for globs over a certain area, or that "contain" a certain number of original points. Note that Shapely is not intrinsically "geo-savvy", so you'll have to add corrections if you need them.
Use a true DB with spatial processing. MySQL, Oracle, Postgres (with PostGIS), MSSQL all (I think) have "Geometry" and "Geography" datatypes, and you can do spatial queries on them (from your Python scripts).
Each of these has different costs in dollars and time (in the learning curve)... and different degrees of geospatial accuracy. You have to pick what suits your budget and/or requirements. | I have a R data.frame containing longitude, latitude which spans over the entire USA map. When X number of entries are all within a small geographic region of say a few degrees longitude & a few degrees latitude, I want to be able to detect this and then have my program then return the coordinates for the geographic bounding box. Is there a Python or R CRAN package that already does this? If not, how would I go about ascertaining this information? | 0 | 1 | 5,580 |
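A rough sketch of the first ("2-D histogram") idea from the answer above, using numpy's histogram2d; the bin width and the threshold for calling a bin a cluster are arbitrary assumptions, and the coordinates here are random stand-ins:

import numpy as np

# lon/lat arrays for the points (stand-in random data spanning the continental US)
lon = np.random.uniform(-125, -65, 10000)
lat = np.random.uniform(25, 50, 10000)

bin_deg = 2.0                                   # bin width in degrees (assumption)
lon_edges = np.arange(-125, -65 + bin_deg, bin_deg)
lat_edges = np.arange(25, 50 + bin_deg, bin_deg)

counts, lon_edges, lat_edges = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])

threshold = 50                                  # the "X entries" cutoff (assumption)
dense = np.argwhere(counts >= threshold)
for i, j in dense:
    # bounding box of each dense bin
    print(lon_edges[i], lon_edges[i + 1], lat_edges[j], lat_edges[j + 1])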
0 | 10,177,394 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2012-04-11T19:20:00.000 | 4 | 2 | 0 | Implementing alternative forms of LDA | 10,112,500 | 0.379949 | python,r,nlp,text-mining,lda | My friend's response is below, pardon the language please.
First I wrote up a Python implementation of the collapsed Gibbs sampler seen here (http://www.pnas.org/content/101/suppl.1/5228.full.pdf+html) and fleshed out here (http://cxwangyi.files.wordpress.com/2012/01/llt.pdf). This was slow as balls.
Then I used a Python wrapping of a C implementation of this paper (http://books.nips.cc/papers/files/nips19/NIPS2006_0511.pdf). Which is fast as f*ck, but the results are not as great as one would see with NMF.
But NMF implementations I've seen, with scitkits, and even with the scipy sparse-compatible recently released NIMFA library, they all blow the f*ck up on any sizable corpus. My new white whale is a sliced, distributed implementation of the thing. This'll be non-trivial. | I am using Latent Dirichlet Allocation with a corpus of news data from six different sources. I am interested in topic evolution, emergence, and want to compare how the sources are alike and different from each other over time. I know that there are a number of modified LDA algorithms such as the Author-Topic model, Topics Over Time, and so on.
My issue is that very few of these alternate model specifications are implemented in any standard format. A few are available in Java, but most exist as conference papers only. What is the best way to go about implementing some of these algorithms on my own? I am fairly proficient in R and jags, and can stumble around in Python when given long enough. I am willing to write the code, but I don't really know where to start and I don't know C or Java. Can I build a model in JAGS or Python just having the formulas from the manuscript? If so, can someone point me at an example of doing this? Thanks. | 0 | 1 | 2,734 |
0 | 10,144,447 | 0 | 1 | 1 | 0 | 1 | false | 1 | 2012-04-13T15:19:00.000 | 1 | 3 | 0 | How fast are nested python generators? | 10,143,637 | 0.066568 | python,generator | "Nested" iterators amount to the composition of the functions that the iterators implement, so in general they pose no particularly novel performance considerations.
Note that because generators are lazy, they also tend to cut down on memory allocation as compared with repeatedly allocating one sequence to transform into another. | Okay, so I probably shouldn't be worrying about this anyway, but I've got some code that is meant to pass a (possibly very long, possibly very short) list of possibilities through a set of filters and maps and other things, and I want to know if my implementation will perform well.
As an example of the type of thing I want to do, consider this chain of operations:
get all numbers from 1 to 100
keep only the even ones
square each number
generate all pairs [i, j] with i in the list above and j in [1, 2, 3, 4, 5]
keep only the pairs where i + j > 40
Now, after doing all this nonsense, I want to look through this set of pairs [i, j] for a pair which satisfies a certain condition. Usually, the solution is one of the first entries, in which case I don't even look at any of the others. Sometimes, however, I have to consume the entire list, and I don't find the answer and have to throw an error.
I want to implement my "chain of operations" as a sequence of generators, i.e., each operation iterates through the items generated by the previous generator and "yields" its own output item by item (a la SICP streams). That way, if I never look at the last 300 entries of the output, they don't even get processed. I know that itertools provides things like imap and ifilter for doing many of the types of operations I would want to perform.
My question is: will a series of nested generators be a major performance hit in the cases where I do have to iterate through all possibilities? | 0 | 1 | 1,253 |
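A sketch of the lazy pipeline described in the question above, written with generator expressions (itertools.imap/ifilter would be the Python 2 spellings of the same thing); the final predicate is a made-up stand-in for the real condition:

numbers = (n for n in range(1, 101))             # 1 to 100
evens = (n for n in numbers if n % 2 == 0)       # keep only the even ones
squares = (n * n for n in evens)                 # square each number
pairs = ((i, j) for i in squares for j in (1, 2, 3, 4, 5))
candidates = ((i, j) for (i, j) in pairs if i + j > 40)

def satisfies(pair):                             # stand-in for the real condition
    i, j = pair
    return i % 7 == 0 and j == 3

result = next((p for p in candidates if satisfies(p)), None)
if result is None:
    raise ValueError("no pair satisfies the condition")
print(result)

Nothing past the first satisfying pair is ever generated, which is the laziness the question is after.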
0 | 10,144,991 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-04-13T16:03:00.000 | 1 | 2 | 0 | Emails and Map Reduce Job | 10,144,325 | 0.099668 | python,map,hadoop,mapreduce,reduce | Yea, you need to use Hadoop Streaming if you want to write Python code for running MapReduce jobs | I'm just starting out with Hadoop and writing some Map Reduce jobs. I was looking for help on writing an MR job in Python that allows me to take some emails and put them into HDFS so I can search on the text or attachments of the email?
Thank you! | 0 | 1 | 251 |
0 | 10,155,633 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-04-14T17:04:00.000 | 0 | 1 | 0 | Deciding input values to DBSCAN algorithm | 10,155,542 | 0 | python,cluster-analysis,dbscan | It is often pretty hard to estimate DBSCAN's parameters.
Have you thought about the OPTICS algorithm? In that case you only need Min_samples, which corresponds to the minimal cluster size.
Otherwise, for DBSCAN I've done it in the past by trial and error: try some values and see what happens. A general rule to follow is that if your dataset is noisy you should use a larger Min_samples value, and it is also correlated with the number of dimensions (10 in this case).
My dataset consists of 14k users with each user represented by 10 features.
I am unable to decide what exactly to keep as the value of Min_samples and epsilon as input
How should I decide that?
The similarity measure is Euclidean distance. (Hence it becomes even tougher to decide.) Any pointers?
0 | 12,068,757 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2012-04-15T00:37:00.000 | 0 | 3 | 0 | Importing Confusion Pandas | 10,158,613 | 0 | python,pandas | I had the same error. I did not build pandas myself, so I thought I should not get this error, as mentioned on the pandas site. So I was confused about how to resolve this error.
The pandas site says that matplotlib is an optional dependency, so I didn't install it initially. But interestingly, after installing matplotlib the error disappeared. I am not sure what effect it had.
It found something! | I had pandas 0.71 before today. I tried to update and I simply ran the .exe file supplied by the website.
Now I tried "import pandas" but then it gives me an error
ImportError: C extensions not built: if you installed already verify that you are not importing from the source directory.
I am new to python and pandas in general. Anything will help.
thanks, | 0 | 1 | 2,351 |
0 | 11,630,790 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2012-04-15T00:37:00.000 | 1 | 3 | 0 | Importing Confusion Pandas | 10,158,613 | 0.066568 | python,pandas | Had the same issue. Resolved by checking dependencies - make sure you have numpy > 1.6.1 and python-dateutil > 1.5 installed. | I had 0.71 pandas before today. I tried to update and I simply ran the .exe file supplied by the website.
Now I tried "import pandas" but then it gives me an error
ImportError: C extensions not built: if you installed already verify that you are not importing from the source directory.
I am new to python and pandas in general. Anything will help.
thanks, | 0 | 1 | 2,351 |
0 | 10,210,348 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2012-04-18T08:52:00.000 | 3 | 1 | 0 | Customizing csv output in htsql | 10,205,990 | 0.53705 | python,sql,htsql | If you want TAB as a delimiter, use tsv format (e.g. /query/:tsv instead of /query/:csv).
There is no way to specify the encoding other than UTF-8. You can reencode the output manually on the client. | I would like to know if somebody knows a way to customize the csv output in htsql, and especially the delimiter and the encoding?
I would like to avoid iterating over each result and find a way through configuration and/or extensions.
Thanks in advance.
Anthony | 0 | 1 | 170 |
0 | 10,248,052 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2012-04-20T08:10:00.000 | 1 | 2 | 0 | When to and when not to use map() with multiprocessing.Pool, in Python? case of big input values | 10,242,525 | 0.099668 | python,dictionary,parallel-processing,multiprocessing,large-data | I hit a similar issue: parallelizing calculations on a big dataset. As you mentioned multiprocessing.Pool.map pickles the arguments. What I did was to implement my own fork() wrapper that only pickles the return values back to the parent process, hence avoiding pickling the arguments. And a parallel map() on top of the wrapper. | Is it efficient to calculate many results in parallel with multiprocessing.Pool.map() in a situation where each input value is large (say 500 MB), but where input values general contain the same large object? I am afraid that the way multiprocessing works is by sending a pickled version of each input value to each worker process in the pool. If no optimization is performed, this would mean sending a lot of data for each input value in map(). Is this the case? I quickly had a look at the multiprocessing code but did not find anything obvious.
More generally, what simple parallelization strategy would you recommend so as to do a map() on say 10,000 values, each of them being a tuple (vector, very_large_matrix), where the vectors are always different, but where there are say only 5 different very large matrices?
PS: the big input matrices actually appear "progressively": 2,000 vectors are first sent along with the first matrix, then 2,000 vectors are sent with the second matrix, etc. | 0 | 1 | 2,008 |
0 | 10,308,680 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-04-24T18:56:00.000 | 1 | 1 | 0 | python, scikits-learn: which learning methods support sparse feature vectors? | 10,304,280 | 1.2 | python,machine-learning,scikits,scikit-learn | We don't have that yet. You have to read the docstrings of the individual classes for now.
Anyway, non-linear models do not tend to work better than linear models for high-dimensional sparse data such as text documents (and they can overfit more easily).
Does anyone have a list of learning methods that are currently implemented with sparse array support in scikits-learn? | 0 | 1 | 691 |
0 | 27,016,762 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2012-04-24T21:04:00.000 | 3 | 4 | 0 | Quantile/Median/2D binning in Python | 10,305,964 | 0.148885 | python,numpy,statistics,scipy | I'm just trying to do this myself, and it sounds like you want the command "scipy.stats.binned_statistic_2d", from which you can find the mean, median, standard deviation or any user-defined function of the third parameter, given the bins.
I realise this question has already been answered, but I believe this is a good built-in solution. | Do you know a quick/elegant Python/Scipy/Numpy solution for the following problem:
You have a set of x, y coordinates with associated values w (all 1D arrays). Now bin x and y onto a 2D grid (size BINSxBINS) and calculate quantiles (like the median) of the w values for each bin, which should at the end result in a BINSxBINS 2D array with the required quantiles.
This is easy to do with some nested loops, but I am sure there is a more elegant solution.
Thanks,
Mark | 0 | 1 | 5,849 |
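For the record above, a minimal use of the function named in the answer; the sample data and the BINS value are stand-ins:

import numpy as np
from scipy.stats import binned_statistic_2d

x = np.random.rand(10000)
y = np.random.rand(10000)
w = np.random.rand(10000)        # the associated values

BINS = 20
medians, x_edges, y_edges, binnumber = binned_statistic_2d(
    x, y, w, statistic='median', bins=BINS)

print(medians.shape)             # (BINS, BINS) array of per-bin medians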
0 | 10,327,841 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-04-24T22:58:00.000 | 0 | 2 | 0 | Most efficient language to implement tensor factorization for Web Application | 10,307,173 | 0 | python,c++,mysql,django,large-data | Python is just fine. I am a Python person. I do not know C++ personally. However, during my research of python the creator of mathematica stated himself that python is equally as powerful as mathematica. Python is used in many highly accurate calculations. (i. e. engineering software, architecture work, etc. . .) | I have implemented Tensor Factorization Algorithm in Matlab. But, actually, I need to use it in Web Application.
So I implemented web site on Django framework, now I need to merge it with my Tensor Factorization algorithm.
For those who are not familiar with tensor factorization, you can think there are bunch of multiplication, addition and division on large matrices of size, for example 10 000 x 8 000. In tensor factorization case we do not have matrices, instead we have 3-dimensional(for my purpose) arrays.
By the way, I m using MySQL as my database.
I am considering to implement this algorithm in Python or in C++. But I can't be sure which one is better.
Do you have any idea about efficiency of Python and C++ when processing on huge data set? Which one is better? Why? | 0 | 1 | 291 |
0 | 10,352,335 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2012-04-27T13:23:00.000 | 3 | 2 | 0 | best way to extend python / numpy performancewise | 10,351,450 | 1.2 | python,c,numpy | I would say it depends on your skills/experience and your project.
If this is a one-off task and you are proficient in C/C++ and have already written Python wrappers, then write your own extension and interface it.
If you are going to work with Numpy on other projects, then go for the Numpy C-API; it's extensive and rather well documented, but it is also quite a lot of documentation to process.
At least I had a lot of difficulty processing it, but then again I suck at C.
If you're not really sure, go with Cython; it is far less time consuming and the performance is in most cases very good. (My choice.)
From my point of view you need to be a good C coder to do better than Cython with the two previous approaches, and it will be much more complex and time consuming.
So, are you a great C coder?
Also it might be worth your while to look into pycuda or some other GPGPU stuff if you're looking for performance, depending on your hardware of course. | As there are a multitude of ways to write binary modules for Python, I was hoping those of you with experience could advise on the best approach if I wish to improve the performance of some segments of the code as much as possible.
As I understand, one can either write an extension using the python/numpy C-api, or wrap some already written pure C/C++/Fortran function to be called from the python code.
Naturally, tools like Cython are the easiest way to go, but I assume that writing the code by hand gives better control and provides better performance.
The question, and it may be too general, is which approach to use. Write a C or C++ extension? Wrap external C/C++ functions or use callbacks to python functions?
I write this question after reading chapter 10 in Langtangen's "Python scripting for computational science" where there is a comparison of several methods to interface between python and C. | 0 | 1 | 1,000 |
0 | 10,395,730 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2012-05-01T09:12:00.000 | 1 | 1 | 0 | Numpy cannot be accessed in sub directories | 10,395,691 | 1.2 | python,numpy | then it does not recognize the module zeroes in the program
Make sure you don't have a file called numpy.py in your subdirectory. If you do, it would shadow the "real" numpy module and cause the symptoms you describe. | I have used import numpy as np in my program and when I try to execute np.zeroes to create a numpy array then it does not recognize the module zeroes in the program.
This happens when I execute in the subdirectory where the python program is.
If I copy it root folder and execute, then it shows the results.
Can someone guide me as to why is this happening and what can I do to get the program executed in the subdirectory it self?
Thanks | 0 | 1 | 73 |
0 | 10,409,722 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2012-05-02T07:46:00.000 | 1 | 4 | 0 | Finding a specific index in a binary image in linear time? | 10,409,674 | 0.049958 | python,image-processing,numpy,boolean,python-imaging-library | Depending on the size of your blob, I would say that dramatically reducing the resolution of your image may achieve what you want.
Reduce it to a 1/10 resolution, find the one white pixel, and then you have a precise idea of where to search for the centroid. | I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible?
Two lines of thought so far:
Make use of the numpy.where() function
Sum the values in each column and row, then find where the max value is based on those numbers... but is there a quick and efficient way to do this? This might just be a case of me being relatively new to python. | 0 | 1 | 1,130 |
0 | 10,409,877 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2012-05-02T07:46:00.000 | 2 | 4 | 0 | Finding a specific index in a binary image in linear time? | 10,409,674 | 0.099668 | python,image-processing,numpy,boolean,python-imaging-library | The centroid's coordinates are arithmetic means of coordinates of the points.
If you want the linear solution, just go pixel by pixel and compute the mean of each coordinate over the white pixels; that's the centroid.
There is probably no way you can make it better than linear in the general case. However, if your circular object is much smaller than the image, you can speed it up by searching for it first (sampling a number of random pixels, or a grid of pixels, if you know the blob is big enough) and then using BFS or DFS to find all the white points.
Two lines of thought so far:
Make use of the numpy.where() function
Sum the values in each column and row, then find where the max value is based on those numbers... but is there a quick and efficient way to do this? This might just be a case of me being relatively new to python. | 0 | 1 | 1,130 |
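A short version of the mean-of-coordinates approach from the answer above, assuming the image is a numpy array with the blob marked as 255 (the example blob is synthetic):

import numpy as np

img = np.zeros((480, 640), dtype=np.uint8)
img[200:260, 300:360] = 255                 # stand-in "blob"

ys, xs = np.where(img == 255)               # row/column indices of white pixels
centroid_row = ys.mean()
centroid_col = xs.mean()
print(centroid_row, centroid_col)           # centre of the square blob

This is a single pass over the pixels, i.e. linear in the image size.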
0 | 10,421,975 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-05-02T20:14:00.000 | 0 | 1 | 0 | Manipulator/camera calibration issue (linear algebra oriented) | 10,420,966 | 0 | python,linear-algebra,robotics,calibration | You really need four data points to characterize three independent axes of movement.
Can you add some other constraints, i.e. are the manipulator axes orthogonal to each other, even if not fixed relative to the stage's axes? Do you know the manipulator's alignment roughly, even if not exactly?
What takes the most time - moving the stage to re-center? Can you move the manipulator and stage at the same time? How wide is the microscope's field of view? How much distance-distortion is there near the edges of the view - does it actually have to be re-centered each time to be accurate? Maybe we could come up with a reverse-screen-distortion mapping instead? | I'm working on a research project involving a microscope (with a camera connected to the view port; the video feed is streamed to an application we're developing) and a manipulator arm. The microscope and manipulator arm are both controlled by a Luigs & Neumann control box (very obsolete - the computer interfaces with it with a serial cable and its response time is slowwww.) The microscope can be moved in 3 dimensions; X, Y, and Z, whose axes are at right angles to one another. When the box is queried, it will return decimal values for the position of each axis of each device. Each device can be sent a command to move to a specific position, with sub-micrometer precision.
The manipulator arm, however, is adjustable in all 3 dimensions, and thus there is no guarantee that any of its axes are aligned at right angles. We need to be able to look at the video stream from the camera, and then click on a point on the screen where we want the tip of the manipulator arm to move to. Thus, the two coordinate systems have to be calibrated.
Right now, we have achieved calibration by moving the microscope/camera's position to the tip of the manipulator arm, setting that as the synchronization point between the two coordinate systems, and moving the manipulator arm +250um in the X direction, moving the microscope to the tip of the manipulator arm at this new position, and then using the differences between these values to define a 3d vector that corresponds to the distance and direction moved by the manipulator, per unit in the microscope coordinate system. This is repeated for each axis of the manipulator arm.
Once this data is obtained, in order to move the manipulator arm to a specific location in the microscope coordinate system, a system of equations can be solved by the program which determines how much it needs to move the manipulator in each axis to move it to the center point of the screen. This works pretty reliably so far.
The issue we're running into here is that due to the slow response time of the equipment, it can take 5-10 minutes to complete the calibration process, which is complicated by the fact that the tip of the manipulator arm must be changed occasionally during an experiment, requiring the calibration process to be repeated. Our research is rather time sensitive and this creates a major bottleneck in the process.
My linear algebra is a little patchy, but it seems like if we measure the units traveled by the tip of the manipulator arm per unit in the microscope coordinate system and have this just hard coded into the program (for now), it might be possible to move all 3 axes of the manipulator a specific amount at once, and then to derive the vectors for each axis from this information. I'm not really sure how to go about doing this (or if it's even possible to do this), and any advice would be greatly appreciated. If there's any additional information you need, or if you need clarification on anything please let me know. | 0 | 1 | 303 |
0 | 10,428,163 | 0 | 1 | 1 | 0 | 3 | false | 1 | 2012-05-03T08:42:00.000 | 2 | 3 | 0 | Stop Python from using more than one cpu | 10,427,900 | 0.132549 | python,multithreading,parallel-processing,cpu-usage | Your code might be calling some functions that use C/C++/etc. underneath. In that case, multiple threads may be used.
Are you calling any libraries that are only python bindings to some more efficiently implemented functions? | I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu.
However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so.
Interesting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python.
If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.
UPDATE:
My friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is.
There's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu. | 0 | 1 | 1,390 |
0 | 10,429,302 | 0 | 1 | 1 | 0 | 3 | false | 1 | 2012-05-03T08:42:00.000 | 1 | 3 | 0 | Stop Python from using more than one cpu | 10,427,900 | 0.066568 | python,multithreading,parallel-processing,cpu-usage | You can always set your process affinity so it runs on only one CPU. Use the "taskset" command on Linux, or Process Explorer on Windows.
This way, you should be able to tell whether your script has the same performance using one CPU or more.
However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so.
Interesting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python.
If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.
UPDATE:
My friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is.
There's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu. | 0 | 1 | 1,390 |
0 | 10,445,816 | 0 | 1 | 1 | 0 | 3 | false | 1 | 2012-05-03T08:42:00.000 | 1 | 3 | 0 | Stop Python from using more than one cpu | 10,427,900 | 0.066568 | python,multithreading,parallel-processing,cpu-usage | Could it be that your code uses SciPy or other numeric library for Python that is linked against Intel MKL or another vendor provided library that uses OpenMP? If the underlying C/C++ code is parallelised using OpenMP, you can limit it to a single thread by setting the environment variable OMP_NUM_THREADS to 1:
OMP_NUM_THREADS=1 python myscript.py
Intel MKL for sure is parallel in many places (LAPACK, BLAS and FFT functions) if linked with the corresponding parallel driver (the default link behaviour) and by default starts as many compute threads as is the number of available CPU cores. | I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu.
However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so.
Interesting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python.
If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script.
UPDATE:
My friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is.
There's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu. | 0 | 1 | 1,390 |
0 | 10,481,152 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-05-07T11:08:00.000 | 1 | 2 | 0 | system swaps before the memory is full | 10,481,008 | 0.099668 | python,linux,matplotlib,archlinux | One thing to take into account for the huge numpy array is that you are not touching it. Memory is allocated lazily by default by the kernel. Try writing some values in that huge array and then check for swapping behaviour. | My program plots a large number of lines (~200k) with matplotlib which is pretty greedy for memory. I usually have about 1.5G of free memory before plotting. When I show the figures, the system starts swapping heavily when there's still about 600-800M of free RAM. This behavior is not observed when, say, creating a huge numpy array, it just takes all the available memory instantaneously. It would be nice to figure out whether this is a matplotlib or system problem.
I'm using 64-bit Arch Linux.
UPD: The swappiness level is set to 10. Tried setting it to 0, as DoctororDrive suggested, but same thing. However, other programs seem to be OK with filling almost all the memory before the swap is used.
0 | 10,859,872 | 0 | 1 | 0 | 0 | 1 | true | 6 | 2012-05-07T21:26:00.000 | 4 | 4 | 0 | What are good libraries for creating a python program for (visually appealing) 3D physics simulations/visualizations? | 10,489,377 | 1.2 | python,3d,visualization,physics,simulation | 3D support for python is fairly weak compared to other languages, but with the way that most of them are built, the appearance of the program is far more mutable than you might think. For instance, you talked about Vpython, while many of their examples are not visually appealing, most of them are also from previous releases, the most recent release includes both extrusions, materials, and skins, which allow you to customize your appearance much moreso than before.
It is probably also worth noting that it is simply not possible to make render-quality images in real time (Cycles is a huge step in that direction, but it's still not quite there). I believe most of your issue here is that you are looking for something that technology is simply not capable of yet; however, if you are willing to take on the burden of making your simulation look visually appealing, VPython (which is a gussied-up version of PyOpenGL) is probably your best bet.
Blender: The most powerful python graphics program available, however it is made for graphic design and special effects, though it has very complex physics running underneath it, Blender is not made for physics simulations. Self contained.
Panda3D: A program very often compared to Blender, however mostly useful for games. The game engine is nicer to work with than Blender's, but the render quality is far lower, as is the feature-richness. Self contained
Ogre: A library that was very popular for game development back in the day, with a lot of powerful functionality, especially for creating game environments. Event handling is also very well implemented. Can be made to integrate with other libraries, but with difficulty.
VPython: A library intended for physics simulations that removes a lot of the texture mapping and rendering power compared to the other methods, however this capability is still there, as VPython is largely built from OpenGL, which is one of the most versatile graphics libraries around. As such, VPython also is very easy to integrate with other libraries.
PyOpenGL: OpenGL for Python. OpenGL is one of the most widely used graphics libraries, and is without a doubt capable of producing some of the nicest visuals on this list (except for Blender, which is in a class of its own); however, it will not be easy to do so. PyOpenGL is very bare bones, and while the functionality is there, it will be harder to implement than anything else. Plays very well with other libraries, but only if you know what you're doing.
I've looked at Vpython but the simulations I have seen look ugly, I want them to be visually appealing. It also looks like an old library. For 3D programming I've seen suggestions of using Panda3D and python-ogre but I'm not sure if it is really suited for exact simulations. Also, I would prefer a library that combines well with other libraries (E.g. pygame does not combine so well with other libraries). | 0 | 1 | 5,810 |
0 | 10,516,123 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2012-05-09T12:17:00.000 | 0 | 4 | 0 | document classification using naive bayes in python | 10,515,907 | 0 | python,nltk,document-classification | There could be many reasons for the classifier not working, and there are many ways to tweak it.
did you train it with enough positive and negative examples?
how did you train the classifier? did you give it every word as a feature, or did you also add more features for it to train on(like length of the text for example)?
what exactly are you trying to classify? does the specified classification have specific words that are related to it?
So the question is rather broad. Maybe if you give more details you could get more relevant suggestions. | I'm doing a project on document classification using a naive Bayes classifier in Python. I have used the nltk Python module for the same. The docs are from the Reuters dataset. I performed preprocessing steps such as stemming and stopword elimination and proceeded to compute tf-idf of the index terms. I used these values to train the classifier but the accuracy is very poor (53%). What should I do to improve the accuracy?
0 | 21,133,966 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2012-05-09T12:17:00.000 | 1 | 4 | 0 | document classification using naive bayes in python | 10,515,907 | 1.2 | python,nltk,document-classification | A few points that might help:
Don't use a stoplist, it lowers accuracy (but do remove punctuation)
Look at word features, and take only the top 1000 for example. Reducing dimensionality will improve your accuracy a lot;
Use bigrams as well as unigrams - this will up the accuracy a bit.
You may also find that alternative weighting techniques such as log(1 + TF) * log(IDF) will improve accuracy. Good luck! | I'm doing a project on document classification using a naive Bayes classifier in Python. I have used the nltk Python module for the same. The docs are from the Reuters dataset. I performed preprocessing steps such as stemming and stopword elimination and proceeded to compute tf-idf of the index terms. I used these values to train the classifier but the accuracy is very poor (53%). What should I do to improve the accuracy?
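A rough sketch of the "top 1000 word features" suggestion from the answer above, using nltk's classifier; the tiny documents list is a toy stand-in for the real Reuters documents, and FreqDist.most_common is the NLTK 3 spelling of that API:

import nltk

# Toy stand-in: list of (list_of_tokens, label) pairs, prepared elsewhere in practice.
documents = [(["good", "great", "fun"], "pos"),
             (["bad", "awful", "boring"], "neg"),
             (["great", "story"], "pos"),
             (["boring", "plot"], "neg")] * 50

all_words = nltk.FreqDist(w.lower() for tokens, label in documents for w in tokens)
top_words = [w for w, _ in all_words.most_common(1000)]   # keep only the top 1000 words

def doc_features(tokens):
    token_set = set(t.lower() for t in tokens)
    return {('contains(%s)' % w): (w in token_set) for w in top_words}

featuresets = [(doc_features(tokens), label) for tokens, label in documents]
train_set, test_set = featuresets[100:], featuresets[:100]

classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))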
0 | 10,535,067 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2012-05-10T13:24:00.000 | 0 | 1 | 0 | How to have multiple y axis on a line graph in Python | 10,534,950 | 0 | python,graph | The generic answer is to write a method that allows you to scale the y value for each data set to lay on the graph the way you want it. Then all your data points will have a y value on the same scale, and you can label the Y axis based upon how you define your translation for each data set. | I want to make a line graph with multiple sets of data on the same graph. But they all scale differently so will need individual y axis scales.
What code will put each variable on a separate axis? | 0 | 1 | 150 |
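A minimal sketch for the record above using matplotlib's twinx, which is the usual way to give a second series its own y scale (the data is made up):

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(100)
y1 = np.sin(x / 10.0)            # small-magnitude series
y2 = 1000 * x                    # large-magnitude series

fig, ax1 = plt.subplots()
ax1.plot(x, y1, 'b-')
ax1.set_ylabel('series 1', color='b')

ax2 = ax1.twinx()                # second axes sharing the same x axis
ax2.plot(x, y2, 'r-')
ax2.set_ylabel('series 2', color='r')

plt.show()

For more than two series you can call twinx repeatedly and offset the extra spines, but two scales already covers the common case.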
0 | 10,546,350 | 0 | 1 | 0 | 0 | 2 | false | 18 | 2012-05-11T05:36:00.000 | 1 | 6 | 0 | creating pandas data frame from multiple files | 10,545,957 | 0.033321 | python,pandas | I might try to concatenate the files before feeding them to pandas. If you're in Linux or Mac you could use cat, otherwise a very simple Python function could do the job for you. | I am trying to create a pandas DataFrame and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the DataFrame.
Not sure what's the way to append to current DataFrame in pandas or is there a way for pandas to suck a list of files into a DataFrame. | 0 | 1 | 24,945 |
0 | 10,563,786 | 0 | 1 | 0 | 0 | 2 | false | 18 | 2012-05-11T05:36:00.000 | 3 | 6 | 0 | creating pandas data frame from multiple files | 10,545,957 | 0.099668 | python,pandas | Potentially horribly inefficient but...
Why not use read_csv, to build two (or more) dataframes, then use join to put them together?
That said, it would be easier to answer your question if you provide some data or some of the code you've used thus far. | I am trying to create a pandas DataFrame and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the DataFrame.
Not sure what's the way to append to current DataFrame in pandas or is there a way for pandas to suck a list of files into a DataFrame. | 0 | 1 | 24,945 |
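One way to do what both answers above hint at, assuming all files share the same column structure; file_names is a placeholder list:

import pandas as pd

file_names = ['data1.csv', 'data2.csv', 'data3.csv']     # placeholder paths

frames = [pd.read_csv(f) for f in file_names]
df = pd.concat(frames, ignore_index=True)                # one DataFrame, renumbered index
print(df.shape)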
0 | 10,569,942 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2012-05-13T06:47:00.000 | 0 | 2 | 0 | Sorting a list with elements containing dictionary values | 10,569,853 | 0 | python,list,sorting,dictionary | have you already tried
sorted(list_for_sorting, key=dictionary_you_wrote.__getitem__)
? | I'm trying to make a sorting system with card ranks and their values are obtained from a separate dictionary. In a simple deck of 52 cards, we have 2 to Ace ranks, in this case I want a ranking system where 0 is 10, J is 11, Q is 12, K is 13, A is 14 and 2 is 15 where 2 is the largest valued rank. The thing is, if there is a list where I want to sort rank cards in ASCENDING order according to the numbering system, how do I do so?
For example, here is a list, [3,5,9,7,J,K,2,0], I want to sort the list into [3,5,7,9,0,J,K,2]. I also made a dictionary for the numbering system as {'A': 14, 'K': 13, 'J': 11, 'Q': 12, '0': 10, '2': 15}.
THANKS | 0 | 1 | 198 |
0 | 10,596,347 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-05-14T05:26:00.000 | 1 | 1 | 0 | How to use Products.csvreplicata 1.1.7 with Products.PressRoom to export PressContacts in Plone 4.1 | 10,577,866 | 1.2 | python,plone | Go to Site setup / CSV Replicata tool, and select PressRoom content(s) as exportable (and then select the schemata you want to be considered during import/export). | How to use Products.csvreplicata 1.1.7 with Products.PressRoom 3.18 to export PressContacts to csv in Plone 4.1? Or is there any other product to import/export all the PressRoom contacts into csv. | 1 | 1 | 153 |
0 | 71,463,872 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2012-05-14T08:04:00.000 | 0 | 10 | 0 | How to generate negative random value in python | 10,579,518 | 0 | python,random | If you want to generate 2 random integers between 2 negative values than print(f"{-random.randint(1, 5)}") can also do the work. | I am starting to learn python, I tried to generate random values by passing in a negative and positive number. Let say -1, 1.
How should I do this in python? | 0 | 1 | 43,543 |
0 | 10,583,784 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2012-05-14T12:19:00.000 | 1 | 3 | 0 | How can I dynamically generate class instances with single attributes read from flat file in Python? | 10,583,195 | 0.066568 | python,oop | I have a .csv file
You're in luck; CSV support is built right in, via the csv module.
Do you suggest creating a class dictionary for accessing every instance?
I don't know what you think you mean by "class dictionary". There are classes, and there are dictionaries.
But I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they?
Numbers can't be instance names, but they certainly can be dictionary keys.
You don't want to create "instance names" dynamically anyway (assuming you're thinking of having each in a separate variable or something gross like that). You want a dictionary. So just let the IDs be keys.
I miss pointers! :(
I really, honestly, can't imagine how you expect pointers to help here, and I have many years of experience with C++. | I apologise if this question has already been asked.
I'm really new to Python programming, and what I need to do is this:
I have a .csv file in which each line represent a person and each column represents a variable.
This .csv file comes from an agent-based C++ simulation I have done.
Now, I need to read each line of this file and for each line generate a new instance of the class Person(), passing as arguments every variable line by line.
My problem is this: what is the most pythonic way of generating these agents while keeping their unique ID (which is one of the attributes I want to read from the file)? Do you suggest creating a class dictionary for accessing every instance? But I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they? I miss pointers! :(
I am sure there is a pythonic solution I cannot see, as I still have to rewire my mind a bit to think in pythonic ways...
Thank you very much, any help would be greatly appreciated!
And please remember that this is my first project in python, so go easy on me! ;)
EDIT:
Thank you very much for your answers, but I still haven't got an answer on the main point: how to create an instance of my class Person() for every line in my csv file. I would like to do that automatically! Is it possible?
Why do I need this? Because I need to create networks of these people with networkx and I would like to have "agents" linked in a network structure, not just dictionary items. | 0 | 1 | 725 |
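A sketch of the dictionary-keyed-by-ID approach suggested in the answer above; the Person class, the 'people.csv' filename and the 'id' column name are assumptions about the data, not something stated in the original:

import csv

class Person(object):
    def __init__(self, uid, name, age):
        self.uid = uid
        self.name = name
        self.age = int(age)

people = {}                                   # unique ID -> Person instance
with open('people.csv') as f:
    for row in csv.DictReader(f):             # assumes a header row: id,name,age
        people[row['id']] = Person(row['id'], row['name'], row['age'])

print(people['42'].name)                      # look an agent up by its ID (assuming it exists)

networkx will happily accept either the IDs or the Person instances themselves as graph nodes, since nodes only need to be hashable.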
0 | 10,608,972 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-05-15T19:16:00.000 | 0 | 2 | 0 | Modular serialization with pickle (Python) | 10,607,350 | 0 | python,pickle | Metaprogramming is strong in Python; Python classes are extremely malleable. You can alter them after declaration all the way you want, though it's best done in a metaclass (decorator). More than that, instances are malleable, independently of their classes.
A 'reference to a place' is often simply a string. E.g. a reference to object's field is its name. Assume you have multiple node references inside your node object. You could have something like {persistent_id: (object, field_name),..} as your unresolved references table, easy to look up. Similarly, in lists of nodes 'references to places' are indices.
BTW, could you use a key-value database for graph storage? You'd be able to pull nodes by IDs without waiting. | I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now.
I thought i could manage this with metaprogramming in Python. But it seems that metaprogramming is not strong enough in Python.
Here's what i do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity come in. Each time i pickle something it starts from one of those instances and i never pickle two of them at the same time. Whenever there is a reference to another instance, accessible by the root object, I replace this reference by a persistant_id, thus ensuring that i won't have two of them in the same pickling stream. The problem comes when unpickling the stream. I can found a persistant_id of an instance which is not loaded yet. When this is the case, i have to wait for the target instance to be loaded before allowing access to it. And i don't see anyway to do that :
1/ I tried to build an accessor which get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration, I can't assign them to the unpickled object.
2/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python : one can't keep reference to a place (a field, or a variable), it is only possible to keep a reference to a value.
My problem may not be clear. I'm still looking for a clear formulation. I tried other things like using explicit references which would be instances of some "Reference" class. It isn't very convenient though.
Do you have any idea how to implement modular serialisation with pickle ? Would i have to change internal behaviour of Unpickler to be able to remember places where i need to load the remaining of the object graph ? Is there another library more suitable to achieve similar results ? | 0 | 1 | 289 |
0 | 10,608,783 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-05-15T19:16:00.000 | 0 | 2 | 0 | Modular serialization with pickle (Python) | 10,607,350 | 0 | python,pickle | Here's how I think I would go about this.
Have a module level dictionary mapping persistent_id to SpecialClass objects. Every time you initialise or unpickle a SpecialClass instance, make sure that it is added to the dictionary.
Override SpecialClass's __getattr__ and __setattr__ method, so that specialobj.foo = anotherspecialobj merely stores a persistent_id in a dictionary on specialobj (let's call it specialobj.specialrefs). When you retrieve specialobj.foo, it finds the name in specialrefs, then finds the reference in the module-level dictionary.
Have a module level check_graph function which would go through the known SpecialClass instances and check that all of their specialrefs were available. | I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now.
I thought i could manage this with metaprogramming in Python. But it seems that metaprogramming is not strong enough in Python.
Here's what i do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity come in. Each time i pickle something it starts from one of those instances and i never pickle two of them at the same time. Whenever there is a reference to another instance, accessible by the root object, I replace this reference by a persistant_id, thus ensuring that i won't have two of them in the same pickling stream. The problem comes when unpickling the stream. I can found a persistant_id of an instance which is not loaded yet. When this is the case, i have to wait for the target instance to be loaded before allowing access to it. And i don't see anyway to do that :
1/ I tried to build an accessor which get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration, I can't assign them to the unpickled object.
2/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python : one can't keep reference to a place (a field, or a variable), it is only possible to keep a reference to a value.
My problem may not be clear. I'm still looking for a clear formulation. I tried other things like using explicit references which would be instances of some "Reference" class. It isn't very convenient though.
Do you have any idea how to implement modular serialisation with pickle ? Would i have to change internal behaviour of Unpickler to be able to remember places where i need to load the remaining of the object graph ? Is there another library more suitable to achieve similar results ? | 0 | 1 | 289 |
0 | 54,660,955 | 0 | 0 | 0 | 0 | 1 | false | 70 | 2012-05-17T12:48:00.000 | 1 | 20 | 0 | Python / Pandas - GUI for viewing a DataFrame or Matrix | 10,636,024 | 0.01 | python,user-interface,pandas,dataframe | I've also been searching very simple gui. I was surprised that no one mentioned gtabview.
It is easy to install (just pip3 install gtabview), and it loads data blazingly fast.
I recommend using gtabview if you are not using spyder or Pycharm. | I'm using the Pandas package and it creates a DataFrame object, which is basically a labeled matrix. Often I have columns that have long string fields, or dataframes with many columns, so the simple print command doesn't work well. I've written some text output functions, but they aren't great.
What I'd really love is a simple GUI that lets me interact with a dataframe / matrix / table. Just like you would find in a SQL tool. Basically a window that has a read-only spreadsheet like view into the data. I can expand columns, page up and down through long tables, etc.
I would suspect something like this exists, but I must be Googling with the wrong terms. It would be great if it is pandas specific, but I would guess I could use any matrix-accepting tool. (BTW - I'm on Windows.)
Any pointers?
Or, conversely, if someone knows this space well and knows this probably doesn't exist, any suggestions on if there is a simple GUI framework / widget I could use to roll my own? (But since my needs are limited, I'm reluctant to have to learn a big GUI framework and do a bunch of coding for this one piece.) | 0 | 1 | 100,991 |
0 | 10,661,419 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2012-05-19T00:38:00.000 | 4 | 1 | 0 | Generating animated SVG with python | 10,661,381 | 1.2 | python,svg,slice,animated | The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing.
Well, yes. That's how SVG animation works; it takes the current objects in the image and applies transformations to them. If you want a "movie" then you will need to make a video from the images. | I have been using the svgwrite library to generate a sequence of svg images. I would like to turn this sequence of images into an animated svg. The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing. Is it possible to use the time slices I have to generate an animated svg or am I stuck rasterizing them and creating a video from the images. Thanks! | 0 | 1 | 1,810 |
0 | 12,110,615 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-05-19T22:24:00.000 | 2 | 2 | 0 | Python numpy memmap matrix multiplication | 10,669,270 | 0.197375 | python,numpy,memory-management,matrix-multiplication,large-data | You might try to use np.memmap and compute the 10x10 output matrix one element at a time.
So you just load the first row of the first matrix and the first column of the second, and then compute np.sum(row1 * col1). | I'm trying to perform an ordinary matrix multiplication between two huge matrices (10 x 25,000,000).
My memory runs out when I do so. How could I use numpy's memmap to be able to handle this?
Is this even a good idea? I'm not so worried about the speed of the operation; I just want the result, even if it means waiting some time. Thank you in advance!
8 GB RAM, i7-2617M @ 1.5 GHz, Windows 7 64-bit. I'm using the 64-bit version of everything: Python (2.7), NumPy, SciPy.
Edit1:
Maybe h5py is a better option? | 0 | 1 | 1,031 |
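A rough sketch of the element-at-a-time approach suggested here, assuming both operands already live on disk as float64 memmaps; the file names and shapes are illustrative:

```python
import numpy as np

n, k = 10, 25_000_000
A = np.memmap("A.dat", dtype=np.float64, mode="r", shape=(n, k))
B = np.memmap("B.dat", dtype=np.float64, mode="r", shape=(k, n))

C = np.empty((n, n))
for i in range(n):
    row = np.asarray(A[i, :])           # one ~200 MB row in memory at a time
    for j in range(n):
        C[i, j] = np.dot(row, B[:, j])  # streams one column of B per product
print(C)                                # the full 10x10 result
```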
0 | 10,688,691 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2012-05-21T16:01:00.000 | 2 | 1 | 0 | Installed NumPy successfully, but not accessible with virtualenv | 10,688,601 | 0.379949 | python,ubuntu,numpy,virtualenv | You have to install it inside of your virtual environment. The easiest way to do this is:
source [virtualenv]/bin/activate
pip install numpy | I have successfully installed NumPy on Ubuntu; however, when inside a virtualenv, NumPy is not available. I must be missing something obvious, but I do not understand why I cannot import NumPy when using Python from a virtualenv. Can anyone help? I am using Python 2.7.3 as my system-wide Python and inside my virtualenv. Thanks in advance for the help. | 0 | 1 | 394
0 | 10,727,166 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-05-23T20:12:00.000 | 2 | 2 | 0 | How can I speed up the training of a network using my GPU? | 10,727,140 | 0.197375 | python,neural-network,gpu,pybrain | Unless PyBrain is designed for that, you probably can't.
You might want to try running your trainer under PyPy if you aren't already -- it's significantly faster than CPython for some workloads. Perhaps this is one of those workloads. :) | I was wondering if there is a way to use my GPU to speed up the training of a network in PyBrain. | 0 | 1 | 2,082 |
0 | 10,768,872 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-05-26T18:36:00.000 | 0 | 2 | 0 | How to calculate estimation for monotonically growing sequence in python? | 10,768,817 | 0 | python,math,numpy,scipy | If the sequence does not have a lot of noise, just use the latest point and the point at about 1/3 of the current one, then estimate your line from those two. Otherwise, do something more complicated, like a least-squares fit over the latter half of the sequence.
If you search on Google, there are a number of code samples for doing the latter, and some modules that may help. (I'm not a Python programmer, so I can't give a meaningful recommendation for the best one.) | I have a monotonically growing sequence of integers. For example
seq=[(0, 0), (1, 5), (10, 20), (15, 24)].
And an integer value greater than the largest argument in the sequence (a > seq[-1][0]). I want to estimate the value corresponding to the given argument. The sequence grows nearly linearly, and earlier values are less important than later ones. Nevertheless, I can't simply take the last 2 points and calculate the new value, because mistakes are very likely and the curve may change its slope.
Can anyone suggest a simple solution for this kind of task in Python? | 0 | 1 | 254 |
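A small sketch of the least-squares suggestion, fitting a straight line to the latter half of the sequence with numpy.polyfit and evaluating it at the new argument (the example numbers are illustrative):

```python
import numpy as np

seq = [(0, 0), (1, 5), (10, 20), (15, 24)]
a = 22  # argument greater than seq[-1][0]

x, y = zip(*seq)
half = len(seq) // 2
slope, intercept = np.polyfit(x[half:], y[half:], 1)  # weight only the recent points
estimate = slope * a + intercept
print(estimate)
```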
0 | 10,790,083 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2012-05-28T19:56:00.000 | 3 | 2 | 0 | Phrase corpus for sentimental analysis | 10,789,834 | 0.291313 | python,nlp,nltk | In this case, the word not modifies the meaning of the phrase expected to win, reversing it. To identify this, you would need to POS-tag the sentence and apply the negative adverb not to the (I think) verb phrase as a negation. I don't know whether there is a corpus that would tell you that not is this type of modifier, however. | Good day,
I'm attempting to write a sentiment analysis application in Python (using a naive Bayes classifier) with the aim of categorizing phrases from news as positive or negative.
And I'm having a bit of trouble finding an appropriate corpus for that.
I tried using "General Inquirer" (http://www.wjh.harvard.edu/~inquirer/homecat.htm) which works OK but I have one big problem there.
Since it is a word list, not a phrase list, I observe the following problem when trying to label the following sentence:
He is not expected to win.
This sentence is categorized as being positive, which is wrong. The reason for that is that "win" is positive, but "not" does not carry any meaning since "not win" is a phrase.
Can anyone suggest either a corpus or a work around for that issue?
Your help and insight are greatly appreciated. | 0 | 1 | 1,434
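One common workaround for the negation problem described above is to mark every token that follows a negation word before training the classifier, so that "win" and "win_NEG" become separate features; a hand-rolled sketch (not from the answer):

```python
NEGATORS = {"not", "no", "never", "n't"}
END_OF_SCOPE = {".", ",", ";", "!", "?"}

def mark_negation(tokens):
    """Append _NEG to every token between a negator and the next punctuation mark."""
    out, negated = [], False
    for tok in tokens:
        if tok.lower() in NEGATORS:
            negated = True
            out.append(tok)
        elif tok in END_OF_SCOPE:
            negated = False
            out.append(tok)
        else:
            out.append(tok + "_NEG" if negated else tok)
    return out

print(mark_negation("He is not expected to win .".split()))
# ['He', 'is', 'not', 'expected_NEG', 'to_NEG', 'win_NEG', '.']
```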
0 | 10,802,164 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2012-05-29T13:20:00.000 | 0 | 3 | 0 | Saving large Python arrays to disk for re-use later --- hdf5? Some other method? | 10,800,039 | 0 | python,database,arrays,save,hdf5 | I would use a single file with a fixed record length for this use case. No specialised DB solution (which seems overkill to me in this case): just plain old struct (see the documentation for the struct module) and read()/write() on a file. If you have just millions of entries, everything should work nicely in a single file of some dozens or hundreds of MB (which is hardly too large for any file system). You also have random access to subsets in case you need that later. | I'm currently rewriting some Python code to make it more efficient, and I have a question about saving Python arrays so that they can be re-used / manipulated later.
I have a large amount of data, saved in CSV files. Each file contains time-stamped values of the data I am interested in, and I have reached the point where I have to deal with tens of millions of data points. The data set has got so large that the processing time is excessive and inefficient: the way the current code is written, the entire data set has to be reprocessed every time some new data is added.
What I want to do is this:
Read in all of the existing data to python arrays
Save the variable arrays to some kind of database/file
Then, the next time more data is added I load my database, append the new data, and resave it. This way only a small number of data need to be processed at any one time.
I would like the saved data to be accessible to further python scripts but also to be fairly "human readable" so that it can be handled in programs like OriginPro or perhaps even Excel.
My question is: what's the best format to save the data in? HDF5 seems like it might have all the features I need, but would something like SQLite make more sense?
EDIT: My data is one-dimensional. I essentially have 30 arrays which are (millions, 1) in size. If it wasn't for the fact that there are so many points, CSV would be an ideal format! I am unlikely to want to look up single entries; more likely I will want to plot small subsets of the data (e.g. the last 100 hours, or the last 1000 hours). | 0 | 1 | 1,123
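A hedged sketch of the fixed-record-length idea from this answer, using the struct module to append (timestamp, value) pairs and read back an arbitrary slice; the file name and record layout are assumptions:

```python
import struct

RECORD = struct.Struct("<dd")  # 16 bytes: float64 timestamp, float64 value

def append_records(path, records):
    with open(path, "ab") as f:
        for ts, val in records:
            f.write(RECORD.pack(ts, val))

def read_slice(path, start, count):
    with open(path, "rb") as f:
        f.seek(start * RECORD.size)        # random access by record index
        data = f.read(count * RECORD.size)
    return [RECORD.unpack_from(data, i * RECORD.size)
            for i in range(len(data) // RECORD.size)]

append_records("series.bin", [(0.0, 1.5), (1.0, 2.5), (2.0, 3.5)])
print(read_slice("series.bin", 1, 2))   # e.g. [(1.0, 2.5), (2.0, 3.5)] on a fresh file
```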
0 | 10,817,026 | 0 | 0 | 0 | 0 | 2 | true | 3 | 2012-05-29T13:20:00.000 | 2 | 3 | 0 | Saving large Python arrays to disk for re-use later --- hdf5? Some other method? | 10,800,039 | 1.2 | python,database,arrays,save,hdf5 | HDF5 is an excellent choice! It has a nice interface, is widely used (in the scientific community at least), many programs support it (MATLAB, for example), and there are libraries for C, C++, Fortran, Python, ... It has a complete toolset for displaying the contents of an HDF5 file. If you later want to do complex MPI calculations on your data, HDF5 has support for concurrent reads/writes. It's very well suited to handling very large datasets. | I'm currently rewriting some Python code to make it more efficient, and I have a question about saving Python arrays so that they can be re-used / manipulated later.
I have a large amount of data, saved in CSV files. Each file contains time-stamped values of the data I am interested in, and I have reached the point where I have to deal with tens of millions of data points. The data set has got so large that the processing time is excessive and inefficient: the way the current code is written, the entire data set has to be reprocessed every time some new data is added.
What I want to do is this:
Read in all of the existing data to python arrays
Save the variable arrays to some kind of database/file
Then, the next time more data is added I load my database, append the new data, and resave it. This way only a small number of data need to be processed at any one time.
I would like the saved data to be accessible to further python scripts but also to be fairly "human readable" so that it can be handled in programs like OriginPro or perhaps even Excel.
My question is: what's the best format to save the data in? HDF5 seems like it might have all the features I need, but would something like SQLite make more sense?
EDIT: My data is one-dimensional. I essentially have 30 arrays which are (millions, 1) in size. If it wasn't for the fact that there are so many points, CSV would be an ideal format! I am unlikely to want to look up single entries; more likely I will want to plot small subsets of the data (e.g. the last 100 hours, or the last 1000 hours). | 0 | 1 | 1,123
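A companion sketch for the HDF5 route, assuming h5py is available: each array is stored as a resizable dataset so new points can be appended without rewriting the file (names are illustrative):

```python
import h5py
import numpy as np

def append_to_dataset(path, name, new_data):
    with h5py.File(path, "a") as f:
        if name not in f:
            f.create_dataset(name, data=new_data, maxshape=(None,), chunks=True)
        else:
            ds = f[name]
            old = ds.shape[0]
            ds.resize(old + new_data.shape[0], axis=0)
            ds[old:] = new_data

append_to_dataset("store.h5", "sensor_1", np.random.rand(1000))
with h5py.File("store.h5", "r") as f:
    last_hundred = f["sensor_1"][-100:]   # read only the slice you need
```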
0 | 10,807,433 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-05-29T19:22:00.000 | 3 | 4 | 0 | Change color weight of raw image file | 10,805,356 | 1.2 | java,python,image-processing,rgb | Typical white-balance issues are caused by differing proportions of red, green, and blue in the makeup of the light illuminating a scene, or differences in the sensitivities of the sensors to those colors. These errors are generally linear, so you correct for them by multiplying by the inverse of the error.
Suppose you measure a point you expect to be perfectly white, and its RGB values are (248,237,236) i.e. pink. If you multiply each pixel in the image by (248/248,248/237,248/236) you will end up with the correct balance.
You should definitely ensure that your Bayer filter is producing the proper results first, or the base assumption of linear errors will be incorrect. | I am working on a telescope project and we are testing the CCD. Whenever we take pictures, things are slightly pink-tinted, and we need true color to correctly image galactic objects. I am planning on writing a small program in Python or Java to change the color weights, but how can I access the weight of each color in a raw data file (it is rgb.bin)?
We are using a Bayer matrix algorithm to convert monochromatic files to color files, and I would imagine the problem is coming from there, but I would like to fix it with a small color-correcting program.
Thanks! | 0 | 1 | 1,116 |
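A hedged sketch of the per-channel correction described in the accepted answer, applied with NumPy to raw interleaved RGB data; the dtype, image shape and the measured white point (248, 237, 236) are assumptions taken from the example above:

```python
import numpy as np

# Load the raw interleaved RGB data (assumed 8-bit, with known dimensions)
height, width = 1024, 1280
raw = np.fromfile("rgb.bin", dtype=np.uint8).reshape(height, width, 3)

# Gains computed from a patch that should be pure white, e.g. measured as (248, 237, 236)
white = np.array([248.0, 237.0, 236.0])
gains = white.max() / white            # (1.0, 1.046, 1.051): boost the weak channels

balanced = np.clip(raw * gains, 0, 255).astype(np.uint8)
balanced.tofile("rgb_balanced.bin")
```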
0 | 39,332,002 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2012-05-30T10:19:00.000 | 3 | 2 | 0 | Using scipy to perform discrete integration of the sample | 10,814,353 | 0.291313 | python,integration,scipy,integral | There is only one function in SciPy that does cumulative integration: scipy.integrate.cumtrapz(), which does what you want as long as you don't specifically need the Simpson rule or another method. For that, you can, as suggested, always write the loop on your own. | I am trying to port from LabVIEW to Python.
In LabVIEW there is a function, "Integral x(t) VI", that takes a set of samples as input, performs a discrete integration of the samples, and returns a list of values (the areas under the curve) according to Simpson's rule.
I tried to find an equivalent function in scipy, e.g. scipy.integrate.simps, but those functions return the summed integral across the set of samples, as a float.
How do I get the list of integrated values as opposed to the sum of the integrated values?
Am I just looking at the problem the wrong way around? | 0 | 1 | 26,770 |
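A short example of the cumtrapz() suggestion; passing initial=0 makes the output the same length as the input (note this is the trapezoidal rule, not Simpson's, as the answer points out):

```python
import numpy as np
from scipy.integrate import cumtrapz

t = np.linspace(0.0, 2.0 * np.pi, 200)
samples = np.sin(t)

running_integral = cumtrapz(samples, t, initial=0)  # one value per sample
print(running_integral[-1])  # ~0: the full integral of sin over one period
```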
0 | 11,115,655 | 0 | 1 | 0 | 0 | 1 | false | 18 | 2012-06-01T01:12:00.000 | 1 | 4 | 0 | Python multiprocessing design | 10,843,240 | 0.049958 | python,iteration,multiprocessing,gdal | As Python is not really meant for intensive number-crunching, I typically start by converting the time-critical parts of a Python program to C/C++, which speeds things up a lot.
Also, Python's multithreading is not very good: the interpreter holds a global lock (the GIL) for all kinds of things, so even when you use the threads that Python offers, things won't get faster. Threads are useful for applications where they spend most of their time waiting for things like IO.
When writing a C module, you can manually release the global lock while processing your data (and then, of course, not touch any Python objects).
It takes some practice to use the C API, but it's clearly structured and much easier to work with than, for example, the Java native API.
See 'extending and embedding' in the Python documentation.
This way you can write the time-critical parts in C/C++ and the less performance-critical parts, where programming speed matters more, in Python... | I have written an algorithm that takes geospatial data and performs a number of steps. The input data are a shapefile of polygons and covariate rasters for a large raster study area (~150 million pixels). The steps are as follows:
Sample points from within polygons of the shapefile
For each sampling point, extract values from the covariate rasters
Build a predictive model on the sampling points
Extract covariates for target grid points
Apply predictive model to target grid
Write predictions to a set of output grids
The whole process needs to be iterated a number of times (say 100), but each iteration currently takes more than an hour when processed in series. For each iteration, the most time-consuming parts are steps 4 and 5. Because the target grid is so large, I've been processing it a block (say 1000 rows) at a time.
I have a 6-core CPU with 32 GB RAM, so within each iteration I had a go at using Python's multiprocessing module with a Pool object to process a number of blocks simultaneously (steps 4 and 5) and then write the output (the predictions) to the common set of output grids using a callback function that calls a global output-writing function. This seems to work, but it is no faster (actually, it's probably slower) than processing each block in series.
So my question is: is there a more efficient way to do it? I'm interested in the multiprocessing module's Queue class, but I'm not really sure how it works. For example, I'm wondering if it's more efficient to have a queue that carries out steps 4 and 5 and then passes the results to another queue that carries out step 6. Or is this even what Queue is for?
Any pointers would be appreciated. | 0 | 1 | 2,043 |
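A hedged sketch of one way to restructure steps 4-6: a Pool of workers handles the per-block covariate extraction and prediction, while all grid writing stays in the parent process so the output datasets are never touched concurrently (process_block and write_block are placeholder implementations, not code from the question):

```python
import multiprocessing as mp

def process_block(block_rows):
    """Steps 4 and 5 for one block: extract covariates and apply the fitted model."""
    start, stop = block_rows
    predictions = [0.0] * (stop - start)      # placeholder for the real model output
    return block_rows, predictions

def write_block(block_rows, predictions):
    """Step 6: write one block of predictions; runs only in the parent process."""
    print("wrote rows", block_rows, "->", len(predictions), "values")

def run_iteration(blocks, n_workers=6):
    with mp.Pool(n_workers) as pool:
        # imap_unordered hands back blocks as soon as any worker finishes one,
        # so step 6 stays serialized without idling the workers.
        for block_rows, predictions in pool.imap_unordered(process_block, blocks):
            write_block(block_rows, predictions)

if __name__ == "__main__":
    run_iteration([(i, i + 1000) for i in range(0, 10000, 1000)])
```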
0 | 10,857,757 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2012-06-01T14:01:00.000 | 1 | 1 | 0 | xlrd - append data to already opened workbook | 10,851,726 | 0.197375 | python,xlrd,xlwt | Not directly. xlutils can use xlrd and xlwt to copy a spreadsheet, and appending to a "to be written" worksheet is straightforward. I don't think reading the open spreadsheet is a problem -- but xlwt will not write to the open book/sheet.
You might write an Excel VBA macro to draw the graphs. In principle, I think a macro from a command workbook could close your stock workbook, invoke your Python code to copy and update, open the new spreadsheet, and maybe run the macro to re-draw the graphs.
Another approach is to use matplotlib for the graphs. I'd think a sleep loop could wake up every n seconds, grab the new CSV data, append it to your "big" CSV data, and re-draw the graph. Taking this approach keeps you in Python and should make things a lot easier, IMHO. Disclosure: my Python is better than my VBA. | I am trying to write a Python program for appending live stock quotes from a CSV file to an Excel file (which is already open) using xlrd and xlwt.
The task is summarised below.
From my stock-broker's application, a CSV file is continually being updated on my hard disk.
I wish to write a program which, when run, would append the new data from the CSV file to an Excel file that is kept open (I wonder whether it is possible to read & write an open file).
I wish to keep the file open because I will be having stock-charts in it.
Is it possible? If yes, how? | 0 | 1 | 1,299 |
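A hedged sketch of the xlrd/xlwt/xlutils route mentioned in the answer: read the new CSV rows, copy the existing workbook, append the rows and save; as noted above, the Excel file must be closed while this runs, and the paths and sheet index are illustrative:

```python
import csv
import xlrd
from xlutils.copy import copy

def append_quotes(xls_path, csv_path, out_path):
    book = xlrd.open_workbook(xls_path, formatting_info=True)
    next_row = book.sheet_by_index(0).nrows          # first empty row
    writable = copy(book)                            # xlwt workbook we can write to
    sheet = writable.get_sheet(0)

    with open(csv_path) as f:
        for row in csv.reader(f):
            for col, value in enumerate(row):
                sheet.write(next_row, col, value)
            next_row += 1

    writable.save(out_path)

append_quotes("quotes.xls", "live_feed.csv", "quotes_updated.xls")
```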
0 | 10,875,787 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-06-04T00:09:00.000 | 0 | 2 | 0 | "Cloning" a corpus in NLTK? | 10,874,994 | 0 | python,nlp,nltk,corpus | Why don't you define a new corpus by copying the definition of movie_reviews in nltk.corpus? You can do this as often as you want with new directories: copy the directory structure and replace the files. | I'm attempting to create my own corpus in NLTK. I've been reading some of the documentation on this and it seems rather complicated... all I want to do is "clone" the movie reviews corpus but with my own text. Now, I know I can just change the files in the movie reviews corpus to my own... but that limits me to working with just one such corpus at a time (i.e. I'd have to continually be swapping files). Is there any way I could just clone the movie reviews corpus?
thanks
Alex | 0 | 1 | 301 |
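A hedged sketch of the suggestion above, pointing NLTK's CategorizedPlaintextCorpusReader at a directory laid out like movie_reviews (pos/ and neg/ subfolders of plain-text files); the path is illustrative:

```python
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

# Directory layout mirrors movie_reviews: my_corpus/pos/*.txt and my_corpus/neg/*.txt
my_reviews = CategorizedPlaintextCorpusReader(
    root="/path/to/my_corpus",
    fileids=r"(pos|neg)/.*\.txt",
    cat_pattern=r"(pos|neg)/.*",
)

print(my_reviews.categories())             # ['neg', 'pos']
print(my_reviews.words(categories="pos")[:10])
```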
0 | 10,886,261 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2012-06-04T18:07:00.000 | 2 | 6 | 0 | Multiply each pixel in an image by a factor | 10,885,984 | 0.066568 | python,image-processing,python-imaging-library,rgb,pixel | As a basic optimization, it may save a little time if you create 3 lookup tables, one each for R, G, and B, to map the input value (0-255) to the output value (0-255). Looking up an array entry is probably faster than multiplying by a decimal value and rounding the result to an integer. Not sure how much faster.
Of course, this assumes that the values should always map the same. | I have an image that is created using a Bayer filter, and the colors are slightly off. I need to multiply the R, G and B of each pixel by a certain factor (a different factor for each of R, G and B) to get the correct color. I am using the Python Imaging Library and of course writing in Python. Is there any way to do this efficiently?
Thanks! | 0 | 1 | 12,396 |
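A hedged sketch of the lookup-table idea using PIL's Image.point(), which applies one 256-entry table per band; the correction factors and file names are illustrative:

```python
from PIL import Image

factors = (1.00, 1.05, 1.08)   # example R, G, B correction factors

# Build one 256-entry lookup table per band; point() expects them concatenated.
lut = []
for factor in factors:
    lut.extend(min(255, int(round(i * factor))) for i in range(256))

img = Image.open("frame.png").convert("RGB")
corrected = img.point(lut)
corrected.save("frame_corrected.png")
```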