The records below list one question-and-answer pair per row (rows wrap across several lines), with pipe-separated fields in the following column order:

| Column | Type | Values |
|---|---|---|
| GUI and Desktop Applications | int64 | 0 to 1 |
| A_Id | int64 | 5.3k to 72.5M |
| Networking and APIs | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| Other | int64 | 0 to 1 |
| Database and SQL | int64 | 0 to 1 |
| Available Count | int64 | 1 to 13 |
| is_accepted | bool | 2 classes |
| Q_Score | int64 | 0 to 1.72k |
| CreationDate | string | lengths 23 to 23 |
| Users Score | int64 | -11 to 327 |
| AnswerCount | int64 | 1 to 31 |
| System Administration and DevOps | int64 | 0 to 1 |
| Title | string | lengths 15 to 149 |
| Q_Id | int64 | 5.14k to 60M |
| Score | float64 | -1 to 1.2 |
| Tags | string | lengths 6 to 90 |
| Answer | string | lengths 18 to 5.54k |
| Question | string | lengths 49 to 9.42k |
| Web Development | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 1 to 1 |
| ViewCount | int64 | 7 to 3.27M |
0 | 64,900,468 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2012-06-04T18:07:00.000 | 0 | 6 | 0 | Multiply each pixel in an image by a factor | 10,885,984 | 0 | python,image-processing,python-imaging-library,rgb,pixel | If the type is numpy.ndarray, just img = np.uint8(img*factor) | I have an image that is created by using a Bayer filter and the colors are slightly off. I need to multiply R, G and B of each pixel by a certain factor (a different factor for R, G and B each) to get the correct color. I am using the Python Imaging Library and of course writing in Python. Is there any way to do this efficiently?
Thanks! | 0 | 1 | 12,396 |
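
The answer above points at NumPy; a minimal sketch of the per-channel correction, not from the thread, with made-up factor values and np.clip added to avoid overflow before casting back to 8-bit:

```python
import numpy as np
from PIL import Image

# Load the image as a float array of shape (H, W, 3).
img = np.asarray(Image.open("bayer_photo.png"), dtype=np.float64)

factors = np.array([1.10, 0.95, 1.30])   # hypothetical R, G, B correction factors
corrected = np.clip(img * factors, 0, 255).astype(np.uint8)

Image.fromarray(corrected).save("corrected.png")
```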
0 | 12,700,150 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-06-04T20:23:00.000 | 1 | 2 | 1 | QueryFrame very slow on Windows | 10,887,836 | 0.099668 | python,windows,performance,opencv | I had the same issue and found out that it is caused by prolonged exposure. It may be the case that the Windows drivers increased the exposure to increase the brightness of the picture. Try pointing your camera at a light source, or manually set a decreased exposure. | I have built a simple webcam recorder on Linux which works quite well.
I get ~25fps video and good audio.
I am porting the recorder to Windows (Win7) and while it works, it is unusable.
The QueryFrame function takes a bit more than 350 ms, i.e. about 2.5 fps.
The code is in python but the problem really seems to be the lib call.
I tested on the same machine with the same webcam (a logitech E2500).
On Windows, I installed OpenCV v2.2. I cannot check right now, but the version might be a bit higher on Ubuntu.
Any idea what could be the problem?
Edit: I've just installed OpenCV 2.4 and I get the same slow speed. | 0 | 1 | 565 |
0 | 10,901,418 | 0 | 0 | 0 | 0 | 3 | false | 12 | 2012-06-05T16:09:00.000 | 0 | 6 | 0 | May near seeds in random number generation give similar random numbers? | 10,900,852 | 0 | python,random,seed | First: define similarity. Next: code a similarity test. Then: check for similarity.
With only a vague description of similarity it is hard to check for it. | I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?
I think it doesn't change anything, but I'm using python
Edit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers? | 0 | 1 | 2,871 |
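
The first answer's advice (define similarity, then code a test for it) is easy to make concrete. A minimal sketch, not from the thread, that compares the streams produced by two adjacent seeds with a plain Pearson correlation:

```python
import random

def stream(seed, n=10000):
    rng = random.Random(seed)            # a separate generator per seed
    return [rng.random() for _ in range(n)]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

a, b = stream(1), stream(2)              # sequential seeds, as in the question
print(correlation(a, b))                 # should be close to 0 for a decent PRNG
```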
0 | 10,904,377 | 0 | 0 | 0 | 0 | 3 | false | 12 | 2012-06-05T16:09:00.000 | 0 | 6 | 0 | May near seeds in random number generation give similar random numbers? | 10,900,852 | 0 | python,random,seed | What kind of simulation are you doing?
For simulation purposes your argument is valid (depending on the type of simulation) but if you implement it in an environment other than simulation, then it could be easily hacked if it requires that there are security concerns of the environment based on the generated random numbers.
If you are simulating the outcome of a machine whether it is harmful to society or not then the outcome of your results will not be acceptable. It requires maximum randomness in every way possible and I would never trust your reasoning. | I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?
I think it doesn't change anything, but I'm using python
Edit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers? | 0 | 1 | 2,871 |
0 | 10,905,149 | 0 | 0 | 0 | 0 | 3 | false | 12 | 2012-06-05T16:09:00.000 | 0 | 6 | 0 | May near seeds in random number generation give similar random numbers? | 10,900,852 | 0 | python,random,seed | To quote the documentation from the random module:
General notes on the underlying Mersenne Twister core generator:
The period is 2**19937-1.
It is one of the most extensively tested generators in existence.
I'd be more worried about my code being broken than my RNG not being random enough. In general, your gut feelings about randomness are going to be wrong - the human mind is really good at finding patterns, even if they don't exist.
As long as you know your results aren't going to be 'secure' due to your lack of random seeding, you should be fine. | I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?
I think it doesn't change anything, but I'm using python
Edit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers? | 0 | 1 | 2,871 |
0 | 10,955,138 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2012-06-06T18:43:00.000 | 0 | 2 | 0 | Recommendation system - using different metrics | 10,920,199 | 0 | python,metrics,recommendation-engine,personalization,cosine-similarity | Recommender systems in the land of research generally work on a scale of 1 - 5. It's quite nice to get such an explicit signal from a user. However I'd imagine the reality is that most users of your system would never actually give a rating, in which case you have nothing to work with.
Therefore I'd track page views but also try and incorporate some explicit feedback mechanism (1-5, thumbs up or down etc.)
Your algorithm will have to take this into consideration. | I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.
My question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way? | 0 | 1 | 614 |
0 | 10,956,591 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2012-06-06T18:43:00.000 | 0 | 2 | 0 | Recommendation system - using different metrics | 10,920,199 | 0 | python,metrics,recommendation-engine,personalization,cosine-similarity | For recommendation system, there are two problems:
how to quantify the user's interest in a certain item based on the numbers you collected
how to use the quantified interest data to recommend new items to the user
I guess you are more interested in the first problem.
To solve the first problem, you need either a linear combination or some other, fancier function to combine all the numbers. There is really no single universal function for all systems. It heavily depends on the type of your users and your items. If you want a high-quality recommendation system, you need to have some data to do machine learning to train your functions.
For the second problem, it's somewhat the same thing, plus you need to analyze all the items to extract some relationships between them. You can google "Netflix prize" for some interesting info. | I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.
My question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way? | 0 | 1 | 614 |
0 | 10,921,702 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2012-06-06T20:22:00.000 | 0 | 2 | 0 | trouble with installing epdfree | 10,921,655 | 0 | python,enthought | The problem is that you don't have the scipy library installed, which is a completely different library from epdfree.
You can install it with apt-get on Linux, I guess, or by going to their website:
www.scipy.org | I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.)
After the install on one machine, I run python and try to import scipy. Everything goes fine.
On the other machine, I follow all the same steps as far as I can tell, but when I try to import scipy, I am told “ImportError: No module named scipy”.
As far as I can tell, I am doing everything the same on the two machines. I installed from the same script, I run the python in the epdfree installation directory, everything I can think of.
Does anyone have any idea what would keep “import scipy” from working on one machine while it works fine on the other? Thanks. | 0 | 1 | 242 |
0 | 10,922,030 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2012-06-06T20:22:00.000 | 1 | 2 | 0 | trouble with installing epdfree | 10,921,655 | 1.2 | python,enthought | Well, turns out there was one difference. File permissions were being set differently on the two machines. I installed epdfree as su on both machines. On the second machine, everything was locked out when I tried to run it without going under "su". Now my next task is to find out why the permissions were set differently. I guess it's a difference in umask settings? Well, this I won't bother anyone with. But feel free to offer an answer if you want to! Thanks. | I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.)
After the install on one machine, I run python and try to import scipy. Everything goes fine.
On the other machine, I follow all the same steps as far as I can tell, but when I try to import scipy, I am told “ImportError: No module named scipy”.
As far as I can tell, I am doing everything the same on the two machines. I installed from the same script, I run the python in the epdfree installation directory, everything I can think of.
Does anyone have any idea what would keep “import scipy” from working on one machine while it works fine on the other? Thanks. | 0 | 1 | 242 |
0 | 69,231,502 | 0 | 0 | 0 | 0 | 1 | false | 44 | 2012-06-12T13:02:00.000 | 0 | 9 | 0 | "Converting" Numpy arrays to Matlab and vice versa | 10,997,254 | 0 | python,matlab,numpy | In latest R2021a, you can pass a python numpy ndarray to double() and it will convert to a native matlab matrix, even when calling in console the numpy array it will suggest at the bottom "Use double function to convert to a MATLAB array" | I am looking for a way to pass NumPy arrays to Matlab.
I've managed to do this by storing the array into an image using scipy.misc.imsave and then loading it using imread, but this of course causes the matrix to contain values between 0 and 256 instead of the 'real' values.
Taking the product of this matrix divided by 256, and the maximum value in the original NumPy array gives me the correct matrix, but I feel that this is a bit tedious.
is there a simpler way? | 0 | 1 | 75,113 |
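
The answers above handle the MATLAB side; on the Python side, a simpler intermediate than the image workaround the question describes is a .mat file written with scipy.io.savemat, which MATLAB loads natively. A minimal sketch, not from the answer above; the file and variable names are made up:

```python
import numpy as np
from scipy.io import savemat

a = np.random.rand(4, 3)                 # the array to hand over to MATLAB
savemat("for_matlab.mat", {"arr": a})    # in MATLAB: s = load('for_matlab.mat'); s.arr
```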
0 | 11,027,196 | 0 | 0 | 0 | 0 | 3 | true | 4 | 2012-06-13T01:42:00.000 | 6 | 3 | 0 | What should I worry about if I compress float64 array to float32 in numpy? | 11,007,169 | 1.2 | python,numpy,floating-point,compression | It is unlikely that a simple transformation will reduce error significantly, since your distribution is centered around zero.
Scaling can have effect in only two ways: One, it moves values away from the denormal interval of single-precision values, (-2^-126, 2^-126). (E.g., if you multiply by, say, 2^123, values that were in [2^-249, 2^-126) are mapped to [2^-126, 2^-3), which is outside the denormal interval.) Two, it changes where values lie in each “binade” (interval from one power of two to the next). E.g., your maximum value is 20, where the relative error may be 1/2 ULP / 20, where the ULP for that binade is 16*2^-23 = 2^-19, so the relative error may be 1/2 * 2^-19 / 20, about 4.77e-8. Suppose you scale by 32/20, so values just under 20 become values just under 32. Then, when you convert to float, the relative error is at most 1/2 * 2^-19 / 32 (or just under 32), about 2.98e-8. So you may reduce the error slightly.
With regard to the former, if your values are nearly normally distributed, very few are in (-2^-126, 2^-126), simply because that interval is so small. (A trillion samples of your normal distribution almost certainly have no values in that interval.) You say these are scientific measurements, so perhaps they are produced with some instrument. It may be that the machine does not measure or calculate finely enough to return values that range from 2^-126 to 20, so it would not surprise me if you have no values in the denormal interval at all. If you have no values in the single-precision denormal range, then scaling to avoid that range is of no use.
With regard to the latter, we see a small improvement is available at the end of your range. However, elsewhere in your range, some values are also moved to the high end of a binade, but some are moved across a binade boundary to the small end of a new binade, resulting in increased relative error for them. It is unlikely there is a significant net improvement.
On the other hand, we do not know what is significant for your application. How much error can your application tolerate? Will the change in the ultimate result be unnoticeable if random noise of 1% is added to each number? Or will the result be completely unacceptable if a few numbers change by as little as 2^-200?
What do you know about the machinery producing these numbers? Is it truly producing numbers more precise than single-precision floats? Perhaps, although it produces 64-bit floating-point values, the actual values are limited to a population that is representable in 32-bit floating-point. Have you performed a conversion from double to float and measured the error?
There is still insufficient information to rule out these or other possibilities, but my best guess is that there is little to gain by any transformation. Converting to float will either introduce too much error or it will not, and transforming the numbers first is unlikely to alter that. | This is a particular kind of lossy compression that's quite easy to implement in numpy.
I could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error.
Other than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value?
Would I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?
I'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the "20" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the "-20 to 20" eliminates a lot of concerns about really large values. | 0 | 1 | 1,086 |
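
The direct comparison the question describes (float64 versus its float32 round trip) is cheap to run; a minimal sketch with synthetic data matching the question's N(0, 4) description:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 4.0, size=500_000)           # roughly the distribution described
back = a.astype(np.float32).astype(np.float64)   # "compress", then reconstruct

abs_err = np.abs(back - a)
rel_err = abs_err / np.abs(a)
print(abs_err.max(), rel_err.max())              # relative error stays below 2**-24 (about 6e-8)
```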
0 | 11,007,250 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2012-06-13T01:42:00.000 | 2 | 3 | 0 | What should I worry about if I compress float64 array to float32 in numpy? | 11,007,169 | 0.132549 | python,numpy,floating-point,compression | The exponent for float32 is quite a lot smaller (or bigger in the case of negative exponents), but assuming all you numbers are less than that you only need to worry about the loss of precision. float32 is only good to about 7 or 8 significant decimal digits | This is a particular kind of lossy compression that's quite easy to implement in numpy.
I could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error.
Other than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value?
Would I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?
I'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the "20" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the "-20 to 20" eliminates a lot of concerns about really large values. | 0 | 1 | 1,086 |
0 | 11,019,850 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2012-06-13T01:42:00.000 | 7 | 3 | 0 | What should I worry about if I compress float64 array to float32 in numpy? | 11,007,169 | 1 | python,numpy,floating-point,compression | The following assumes you are using standard IEEE-754 floating-point operations, which are common (with some exceptions), in the usual round-to-nearest mode.
If a double value is within the normal range of float values, then the only change that occurs when the double is rounded to a float is that the significand (fraction portion of the value) is rounded from 53 bits to 24 bits. This will cause an error of at most 1/2 ULP (unit of least precision). The ULP of a float is 2^-23 times the greatest power of two not greater than the float. E.g., if a float is 7.25, the greatest power of two not greater than it is 4, so its ULP is 4*2^-23 = 2^-21, about 4.77e-7. So the error when a double in the interval [4, 8) is converted to float is at most 2^-22, about 2.38e-7. For another example, if a float is about .03, the greatest power of two not greater than it is 2^-6, so the ULP is 2^-29, and the maximum error when converting to float is 2^-30.
Those are absolute errors. The relative error is less than 2^-24, which is 1/2 ULP divided by the smallest the value could be (the smallest value in the interval for a particular ULP, so the power of two that bounds it). E.g., for each number x in [4, 8), we know the number is at least 4 and error is at most 2^-22, so the relative error is at most 2^-22/4 = 2^-24. (The error cannot be exactly 2^-24 because there is no error when converting an exact power of two from float to double, so there is an error only if x is greater than four, so the relative error is less than, not equal to, 2^-24.) When you know more about the value being converted, e.g., it is nearer 8 than 4, you can bound the error more tightly.
If the number is outside the normal range of a float, errors can be larger. The maximum finite floating-point value is 2^128 - 2^104, about 3.40e38. When you convert a double that is 1/2 ULP (of a float; doubles have finer ULP) more than that or greater to float, infinity is returned, which is, of course, an infinite absolute error and an infinite relative error. (A double that is greater than the maximum finite float but is greater by less than 1/2 ULP is converted to the maximum finite float and has the same errors discussed in the previous paragraph.)
The minimum positive normal float is 2^-126, about 1.18e-38. Numbers within 1/2 ULP of this (inclusive) are converted to it, but numbers less than that are converted to a special denormalized format, where the ULP is fixed at 2^-149. The absolute error will be at most 1/2 ULP, 2^-150. The relative error will depend significantly on the value being converted.
The above discusses positive numbers. The errors for negative numbers are symmetric.
If the value of a double can be represented exactly as a float, there is no error in conversion.
Mapping the input numbers to a new interval can reduce errors in specific situations. As a contrived example, suppose all your numbers are integers in the interval [2^48, 2^48 + 2^24). Then converting them to float would lose all information that distinguishes the values; they would all be converted to 2^48. But mapping them to [0, 2^24) would preserve all information; each different input would be converted to a different result.
Which map would best suit your purposes depends on your specific situation. | This is a particular kind of lossy compression that's quite easy to implement in numpy.
I could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error.
Other than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value?
Would I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?
I'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the "20" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the "-20 to 20" eliminates a lot of concerns about really large values. | 0 | 1 | 1,086 |
0 | 11,028,685 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-06-14T00:37:00.000 | 0 | 1 | 0 | In rtree, how can I specify the threshold for float equality testing? | 11,025,297 | 0 | python,indexing,spatial-index,spatial-query,r-tree | Actually it does not need to have a threshold to handle ties. They just happen.
Assuming you have the data points (1.,0.) and (0.,1.) and query point (0.,0.), any implementation I've seen of Euclidean distance will return the exact same distance for both, without any threshold. | In rtree, how can I specify the threshold for float equality testing?
When checking nearest neighbours, rtree can return more than the specified number of results, as if two points are equidistant, it returns both them. To check this equidistance, it must have some threshold since the distances are floats. I want to be able to control this threshold. | 0 | 1 | 236 |
0 | 11,077,060 | 0 | 0 | 0 | 0 | 2 | false | 202 | 2012-06-18T04:45:00.000 | 61 | 3 | 0 | What are the differences between Pandas and NumPy+SciPy in Python? | 11,077,023 | 1 | python,numpy,scipy,pandas | Numpy is required by pandas (and by virtually all numerical tools for Python). Scipy is not strictly required for pandas but is listed as an "optional dependency". I wouldn't say that pandas is an alternative to Numpy and/or Scipy. Rather, it's an extra tool that provides a more streamlined way of working with numerical and tabular data in Python. You can use pandas data structures but freely draw on Numpy and Scipy functions to manipulate them. | They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis. | 0 | 1 | 135,225 |
0 | 11,077,215 | 0 | 0 | 0 | 0 | 2 | true | 202 | 2012-06-18T04:45:00.000 | 327 | 3 | 0 | What are the differences between Pandas and NumPy+SciPy in Python? | 11,077,023 | 1.2 | python,numpy,scipy,pandas | pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book. | They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis. | 0 | 1 | 135,225 |
0 | 11,172,695 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-06-18T13:27:00.000 | 4 | 2 | 0 | Support Vector Regression with High Dimensional Output using python's libsvm | 11,083,921 | 0.379949 | python,machine-learning,svm,regression,libsvm | libsvm might not be the best tool for this task.
The problem you describe is called multivariate regression, and usually for regression problems, SVM's are not necessarily the best choice.
You could try something like group lasso (http://www.di.ens.fr/~fbach/grouplasso/index.htm - matlab) or sparse group lasso (http://spams-devel.gforge.inria.fr/ - seems to have a python interface), which solve the multivariate regression problem with different types of regularization. | I would like to ask if anyone has an idea or example of how to do support vector regression in python with high dimensional output( more than one) using a python binding of libsvm? I checked the examples and they are all assuming the output to be one dimensional. | 0 | 1 | 3,847 |
0 | 11,098,023 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2012-06-19T06:00:00.000 | 0 | 1 | 1 | How to read other files in hadoop jobs? | 11,095,220 | 1.2 | python,hadoop | The problem was solved by adding the needed file with the -file option, or with the file= option in the conf file. | I need to read in a dictionary file to filter content specified in the hdfs_input, and I have uploaded it to the cluster using the put command, but I don't know how to access it in my program.
I tried to access it using path on the cluster like normal files, but it gives the error information: IOError: [Errno 2] No such file or directory
Besides, is there any way to maintain only one copy of the dictionary for all the machines that runs the job ?
So what's the correct way of access files other than the specified input in hadoop jobs? | 0 | 1 | 91 |
0 | 11,101,558 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-06-19T11:33:00.000 | 3 | 1 | 0 | how to save a boolean numpy array to textfile in python? | 11,100,066 | 0.53705 | python,numpy,save,boolean | Thats correct, bools are integers, so you can always go between the two.
import numpy as np
arr = np.array([True, True, False, False])
np.savetxt("test.txt", arr, fmt="%5i")
That gives a file with 1 1 0 0 | The following saves floating values of a matrix into textfiles
numpy.savetxt('bool',mat,fmt='%f',delimiter=',')
How to save a boolean matrix ? what is the fmt for saving boolean matrix ? | 0 | 1 | 3,549 |
0 | 42,273,797 | 0 | 0 | 0 | 0 | 1 | false | 55 | 2012-06-19T18:11:00.000 | 2 | 4 | 0 | Adding two pandas dataframes | 11,106,823 | 0.099668 | python,pandas | Both of the above approaches - fillna(0) and a direct addition - would give you NaN values if the two frames have different structures.
It's better to use fill_value:
df.add(other_df, fill_value=0) | I have two dataframes, both indexed by timeseries. I need to add the elements together to form a new dataframe, but only if the index and column are the same. If the item does not exist in one of the dataframes then it should be treated as a zero.
I've tried using .add but this sums regardless of index and column. Also tried a simple combined_data = dataframe1 + dataframe2 but this give a NaN if both dataframes don't have the element.
Any suggestions? | 0 | 1 | 102,462 |
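
A quick illustration of the df.add(other_df, fill_value=0) suggestion, with two hypothetical frames whose time indexes only partially overlap:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1.0, 2.0, 3.0]},
                   index=pd.date_range("2012-01-01", periods=3))
df2 = pd.DataFrame({"a": [10.0, 20.0], "b": [5.0, 5.0]},
                   index=pd.date_range("2012-01-02", periods=2))

combined = df1.add(df2, fill_value=0)   # an entry missing on one side counts as 0
print(combined)                         # entries missing on both sides stay NaN
```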
0 | 11,109,571 | 0 | 1 | 0 | 0 | 3 | false | 5 | 2012-06-19T21:13:00.000 | 1 | 4 | 0 | Can csv data be made lazy? | 11,109,524 | 0.049958 | python,csv,clojure,lazy-evaluation | The csv module does load the data lazily, one row at a time. | Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?
I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python. | 0 | 1 | 2,302 |
0 | 11,109,589 | 0 | 1 | 0 | 0 | 3 | false | 5 | 2012-06-19T21:13:00.000 | 2 | 4 | 0 | Can csv data be made lazy? | 11,109,524 | 0.099668 | python,csv,clojure,lazy-evaluation | Python's reader or DictReader are generators. A row is produced only when the object's next() method is called. | Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?
I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python. | 0 | 1 | 2,302 |
0 | 11,109,568 | 0 | 1 | 0 | 0 | 3 | false | 5 | 2012-06-19T21:13:00.000 | 6 | 4 | 0 | Can csv data be made lazy? | 11,109,524 | 1 | python,csv,clojure,lazy-evaluation | The csv module's reader is lazy by default.
It will read a line in at a time from the file, parse it to a list, and return that list. | Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?
I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python. | 0 | 1 | 2,302 |
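
Since csv.reader already yields one parsed row at a time, a Clojure-style lazy sequence of sequences is just a generator wrapped around it; a minimal sketch (the file name is made up):

```python
import csv

def lazy_rows(path):
    """Yield each CSV row as a list, reading the file one line at a time."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            yield row

rows = lazy_rows("big.csv")     # nothing has been read yet
first = next(rows, None)        # reads and parses only the first row
```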
0 | 60,879,363 | 0 | 0 | 1 | 0 | 1 | false | 21 | 2012-06-21T15:18:00.000 | 0 | 6 | 0 | FileStorage for OpenCV Python API | 11,141,336 | 0 | c++,python,image-processing,opencv | pip install opencv-contrib-python for video support to install specific version use pip install opencv-contrib-python | I'm currently using FileStorage class for storing matrices XML/YAML using OpenCV C++ API.
However, I have to write a Python Script that reads those XML/YAML files.
I'm looking for existing OpenCV Python API that can read the XML/YAML files generated by OpenCV C++ API | 0 | 1 | 16,840 |
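
For the question itself, newer cv2 bindings expose a FileStorage class that reads the XML/YAML files written by the C++ API; a minimal sketch, where the node name "my_matrix" is an assumption about how the file was written:

```python
import cv2

fs = cv2.FileStorage("data.yml", cv2.FILE_STORAGE_READ)
mat = fs.getNode("my_matrix").mat()   # the matrix comes back as a NumPy array
fs.release()
print(mat.shape)
```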
0 | 11,155,580 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2012-06-22T10:02:00.000 | 1 | 3 | 0 | 0/1 Knapsack with few variables: which algorithm? | 11,154,101 | 0.066568 | python,algorithm,knapsack-problem | You can use a pseudopolynomial algorithm, which uses dynamic programming, if the sum of the weights is small enough. You just calculate whether you can get weight X with the first Y items, for each X and Y.
This runs in time O(NS), where N is the number of items and S is the sum of the weights.
Another possibility is to use a meet-in-the-middle approach.
Partition items into two halves and:
For the first half take every possible combination of items (there are 2^(N/2) possible combinations in each half) and store its weight in some set.
For the second half take every possible combination of items and check whether there is a combination in first half with suitable weight.
This should run in O(2^(N/2)) time. | I have to implement the solution to a 0/1 Knapsack problem with constraints.
My problem will have in most cases few variables (~ 10-20, at most 50).
I recall from university that there are a number of algorithms that in many cases perform better than brute force (I'm thinking, for example, of a branch and bound algorithm).
Since my problem is relatively small, I'm wondering if there is an appreciable advantage in terms of efficiency when using a sophisticated solution as opposed to brute force.
If it helps, I'm programming in Python. | 0 | 1 | 1,291 |
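
A minimal sketch of a pseudopolynomial DP along the lines of the first answer, here maximizing value rather than just testing which weights are reachable (the weights, values and capacity are made up):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack by DP over capacities; O(n * capacity) time."""
    best = [0] * (capacity + 1)                # best[c] = max value using capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # iterate downwards so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([3, 4, 5, 9], [2, 3, 4, 10], 12))   # -> 12 (take the items of weight 3 and 9)
```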
0 | 57,324,664 | 0 | 0 | 0 | 0 | 1 | false | 104 | 2012-06-23T12:15:00.000 | 4 | 13 | 0 | NumPy style arrays for C++? | 11,169,418 | 0.061461 | c++,arrays,python-3.x,numpy,dynamic-arrays | Use LibTorch (PyTorch frontend for C++) and be happy. | Are there any C++ (or C) libs that have NumPy-like arrays with support for slicing, vectorized operations, adding and subtracting contents element-by-element, etc.? | 0 | 1 | 81,916 |
0 | 11,187,374 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-06-25T09:57:00.000 | 3 | 2 | 0 | The best way to export openerp data to csv file using python | 11,187,086 | 0.291313 | python,export-to-excel,openerp,export-to-csv | Why not use the OpenERP client itself?
You can go for xlwt if you really need to write a Python program to generate it. | Which is the best way to export OpenERP data to a csv/xls file using Python, so that I can schedule it in OpenERP (I can't use the client-side exporting)?
using csv python package
using xlwt python package
or any other package?
And also how can I dynamically provide the path and name to save this newly created csv file | 0 | 1 | 4,385 |
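
A minimal sketch of the plain-csv option with the output path supplied at run time; the row data and directory are made-up placeholders, and wiring it into an OpenERP scheduled action is left out:

```python
import csv
import os
from datetime import datetime

def export_rows(rows, out_dir, base_name="export"):
    """Write rows (a list of lists) to a timestamped CSV file and return its path."""
    fname = "%s_%s.csv" % (base_name, datetime.now().strftime("%Y%m%d_%H%M%S"))
    path = os.path.join(out_dir, fname)
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

print(export_rows([["id", "name"], [1, "Partner A"]], "/tmp"))
```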
0 | 11,198,804 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-06-25T22:29:00.000 | 1 | 2 | 0 | ABM under python with advanced visualization | 11,198,288 | 1.2 | python,netlogo,pycuda,agent-based-modeling,mayavi | You almost certainly do not want to use CUDA unless you are running into a significant performance problem. In general, CUDA is best used for solving floating-point linear algebra problems. If you are looking for a framework built around parallel computations, I'd look towards OpenCL, which can take advantage of GPUs if needed.
In terms of visualization, I'd strongly suggest targeting a specific data interchange format and then letting some other program do the rendering for you. The only reason I'd use something like VTK is if for some reason you need more control over the visualization process or you are looking for a real-time solution. | Sorry if this all seems nooby and unclear, but I'm currently learning NetLogo to model agent-based collective behavior and would love to hear some advice on alternative software choices. My main thing is that I'd very much like to take advantage of PyCUDA since, from what I understand, it enables parallel computation. However, does that mean I still have to write the numerical script in some other environment and implement the visuals in yet another one?
If so, my questions are:
What numerical package should I use? PyEvolve, DEAP, or something else? It appears that PyEvolve is no longer being developed and DEAP is just a wrapper on the outdated(?) EAP.
Graphic-wise, I find mayavi2 and vtk promising. The problem is, none of the numerical package seems to bind to these readily. Is there no better alternative than to save the numerical output to datafile and feed them into, say, mayavi2?
Another option is to generate the data via Netlogo and feed them into a graphing package from (2). Is there any disadvantage to doing this?
Thank you so much for shedding light on this confusion. | 0 | 1 | 1,399 |
0 | 11,217,921 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-06-27T00:39:00.000 | 1 | 2 | 0 | Split quadrilateral into sub-regions of a maximum area | 11,217,855 | 0.099668 | python,math,geometry,gis | You could recursively split the quad in half on the long sides until the resulting area is small enough. | It is pretty easy to split a rectangle/square into smaller regions and enforce a maximum area of each sub-region. You can just divide the region into regions with sides length sqrt(max_area) and treat the leftovers with some care.
With a quadrilateral however I am stumped. Let's assume I don't know the angle of any of the corners. Let's also assume that all four points are on the same plane. Also, I don't need for the the small regions to be all the same size. The only requirement I have is that the area of each individual region is less than the max area.
Is there a particular data structure I could use to make this easier?
Is there an algorithm I'm just not finding?
Could I use quadtrees to do this? I'm not incredibly versed in trees but I do know how to implement the structure.
I have GIS work in mind when I'm doing this, but I am fairly confident that that will have no impact on the algorithm to split the quad. | 0 | 1 | 1,141 |
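
A minimal sketch of the recursive bisection the answer suggests, not from the thread: the quad is four (x, y) corners in order, area comes from the shoelace formula, and each split connects the midpoints of the longer pair of opposite edges. It assumes simple, non-degenerate quads.

```python
def area(poly):
    """Shoelace formula for a simple polygon given as [(x, y), ...]."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def split_quad(quad, max_area):
    """Recursively bisect quad (corners a, b, c, d in order) until every piece is small enough."""
    if area(quad) <= max_area:
        return [quad]
    a, b, c, d = quad
    if dist(a, b) + dist(c, d) >= dist(b, c) + dist(d, a):
        m1, m2 = midpoint(a, b), midpoint(c, d)      # cut across the longer pair of edges
        halves = [(a, m1, m2, d), (m1, b, c, m2)]
    else:
        m1, m2 = midpoint(b, c), midpoint(d, a)
        halves = [(a, b, m1, m2), (m2, m1, c, d)]
    return [piece for half in halves for piece in split_quad(half, max_area)]

pieces = split_quad([(0, 0), (10, 0), (12, 8), (-1, 7)], max_area=5.0)
print(len(pieces), max(area(p) for p in pieces))
```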
0 | 11,267,361 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-06-29T18:32:00.000 | 1 | 2 | 0 | Storing text mining data | 11,267,143 | 0.099668 | python,database,data-mining,text-mining | Why not have simple SQL tables?
Tables:
documents, with a primary key of id or file name or something
observations, with a foreign key into documents and the term (indexed on both fields, probably unique)
The array approach you mentioned seems like a slow way to get at terms.
With SQL you can easily allow new terms to be added to the observations table.
Easy to aggregate and even do trending stuff by aggregating by date if the documents table includes a timestamp. | I am looking to track topic popularity on a very large number of documents. Furthermore, I would like to give recommendations to users based on topics, rather than the usual bag of words model.
To extract the topics I use natural language processing techniques that are beyond the point of this post.
My question is how should I persist this data so that:
I) I can quickly fetch trending data for each topic (in principle, every time a user opens a document, the topics in that document should go up in popularity)
II) I can quickly compare documents to provide recommendations (here I am thinking of using clustering techniques)
More specifically, my questions are:
1) Should I go with the usual way of storing text mining data? meaning storing a topic occurrence vector for each document, so that I can later measure the euclidean distance between different documents.
2) Some other way?
I am looking for specific python ways to do this. I have looked into SQL and NoSQL databases, and also into pytables and h5py, but I am not sure how I would go about implementing a system like this. One of my concerns is how can I deal with an ever growing vocabulary of topics?
Thank you very much | 0 | 1 | 612 |
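
A minimal sketch of the two-table layout the answer describes, using the standard-library sqlite3 module; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect("topics.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS documents (
    id        INTEGER PRIMARY KEY,
    name      TEXT UNIQUE,
    opened_at TIMESTAMP
);
CREATE TABLE IF NOT EXISTS observations (
    doc_id INTEGER REFERENCES documents(id),
    topic  TEXT,
    freq   INTEGER DEFAULT 1,
    UNIQUE (doc_id, topic)
);
CREATE INDEX IF NOT EXISTS idx_obs_topic ON observations(topic);
""")
conn.commit()

# Trending: total observations per topic, most popular first.
for topic, total in conn.execute(
        "SELECT topic, SUM(freq) FROM observations GROUP BY topic ORDER BY 2 DESC"):
    print(topic, total)
```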
0 | 11,289,709 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-07-02T07:50:00.000 | 1 | 3 | 0 | How to compare image with python | 11,289,652 | 0.066568 | python,image,compare | If you want to check whether they are binary-equal, you can compute a checksum on them and compare it. If you want to check whether they are similar in some other way, it will be more complicated and definitely would not fit into a simple answer posted on Stack Overflow. It just depends on how you define similarity, but in any case it would require good programming skills and a lot of code. | I'm looking for an algorithm to compare two images (I work with Python).
I found the PIL library, numpy/scipy and opencv. I know how to convert to greyscale or binary and how to make a histogram; that's OK, but I don't know what I have to do with the two images to say "yes they're similar // they're probably similar // they don't match".
Do you know the right way to go about it? | 0 | 1 | 1,355 |
0 | 11,616,216 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2012-07-02T07:52:00.000 | 0 | 3 | 0 | .EXE installer crashes when installing Python modules: IPython, Pandas and Matplotlib | 11,289,670 | 0 | python,numpy,matplotlib,ipython,pandas | This happened to me too. It works if you right click and 'Run As Administrator' | I have recently installed numpy due to ease using the exe installer for Python 2.7. However, when I attempt to install IPython, Pandas or Matplotlib using the exe file, I consistently get a variant of the following error right after the installation commeces (pandas in the following case):
pandas-0.8.0.win32-py2.7[1].exe has stopped working
The problem caused the program to stop working correctly: Windows will close the program and notify you if a solution is available.
NumPy just worked fine when I installed it. This is extremely frustrating and I would appreciate any insight.
Thanks | 0 | 1 | 1,016 |
0 | 11,312,776 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2012-07-02T17:06:00.000 | 1 | 1 | 0 | concatenating TimeSeries of different lengths using Pandas | 11,298,097 | 0.197375 | python,dataframe,concat,pandas | If the Series are in a dict data, you need only do:
frame = DataFrame(data)
That puts things into a DataFrame and unions all the dates. If you want to fill values forward, you can call frame = frame.fillna(method='ffill'). | I am using pandas in python. I have several Series indexed by dates that I would like to concat into a single DataFrame, but the Series are of different lengths because of missing dates etc. I would like the dates that do match up to match up, but where there is missing data for it to be interpolated or just use the previous date or something like that. What is the easiest way to do this? | 0 | 1 | 1,369 |
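
A small, self-contained illustration of the answer; the series names and dates are made up, and .ffill() is the same operation as the fillna(method='ffill') mentioned above:

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0, 4.0],
               index=pd.to_datetime(["2012-01-01", "2012-01-02", "2012-01-04"]))
s2 = pd.Series([10.0, 30.0],
               index=pd.to_datetime(["2012-01-01", "2012-01-03"]))

frame = pd.DataFrame({"s1": s1, "s2": s2})   # the union of both date indexes, NaN where missing
frame = frame.ffill()                        # carry the previous date's value forward
print(frame)
```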
0 | 11,359,378 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-07-05T19:31:00.000 | 1 | 2 | 0 | Change numpy.seterr defaults? | 11,351,264 | 0.099668 | python,numpy | There is no configuration file for this. You will have to call np.seterr() yourself. | I'd like to change my seterr defaults to be either all 'warn' or all 'ignore'. This can be done interactively by doing np.seterr(all='ignore'). Is there a way to make it a system default? There is no .numpyrc as far as I can tell; is there some other configuration file where these defaults can be changed?
(I'm using numpy 1.6.1)
EDIT: The problem was not that numpy's default settings had changed, as I had incorrectly suspected, but that another code, pymc, was changing things that are normally ignore or warn to raise, causing all sorts of undesired and unexpected crashes. | 0 | 1 | 880 |
0 | 11,355,186 | 0 | 1 | 0 | 0 | 1 | false | 19 | 2012-07-05T19:32:00.000 | 2 | 3 | 0 | nltk tokenization and contractions | 11,351,290 | 0.132549 | python,nlp,nltk | Because the number of contractions are very minimal, one way to do it is to search and replace all contractions to it full equivalent (Eg: "don't" to "do not") and then feed the updated sentences into the wordpunct_tokenizer. | I'm tokenizing text with nltk, just sentences fed to wordpunct_tokenizer. This splits contractions (e.g. 'don't' to 'don' +" ' "+'t') but I want to keep them as one word. I'm refining my methods for a more measured and precise tokenization of text, so I need to delve deeper into the nltk tokenization module beyond simple tokenization.
I'm guessing this is common and I'd like feedback from others who've maybe had to deal with the particular issue before.
edit:
Yeah, this is a general, splattershot question, I know.
Also, as a novice to nlp, do I need to worry about contractions at all?
EDIT:
The SExprTokenizer or TreeBankWordTokenizer seems to do what I'm looking for for now. | 0 | 1 | 12,956 |
0 | 11,401,559 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-07-06T12:39:00.000 | 1 | 1 | 0 | Append extras informations to Series in Pandas | 11,362,376 | 0.197375 | python,pandas | Right now there is not an easy way to maintain metadata on pandas objects across computations.
Maintaining metadata has been an open discussion on github for some time now but we haven't had to time code it up.
We'd welcome any additional feedback you have (see pandas on github) and would love to accept a pull-request if you're interested in rolling your own. | Is it possible to customize Serie (in a simple way, and DataFrame by the way :p) from pandas to append extras informations on the display and in the plots? A great thing will be to have the possibility to append informations like "unit", "origin" or anything relevant for the user that will not be lost during computations, like the "name" parameter. | 0 | 1 | 148 |
0 | 68,635,240 | 0 | 1 | 0 | 0 | 1 | false | 14 | 2012-07-08T01:03:00.000 | 0 | 2 | 0 | How do I change the font size of the scale in matplotlib plots? | 11,379,910 | 0 | python,matplotlib | simply put, you can use the following command to set the range of the ticks and change the size of the ticks
import matplotlib.pyplot as plt
set the range of ticks for x-axis and y-axis
plt.yticks(range(0,24,2))
plt.xticks(range(0,24,2))
change the size of ticks for x-axis and y-axis
plt.yticks(fontsize=12,)
plt.xticks(fontsize=12,) | While plotting using Matplotlib, I have found how to change the font size of the labels.
But, how can I change the size of the numbers in the scale?
For clarity, suppose you plot x^2 from (x0,y0) = 0,0 to (x1,y1) = (20,20).
The scale in the x-axis below maybe something like
0 1 2 ... 20.
I want to change the font size of such scale of the x-axis. | 0 | 1 | 44,462 |
0 | 11,462,417 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-07-12T20:28:00.000 | 5 | 1 | 0 | NLTK multiple feature sets in one classifier? | 11,460,115 | 1.2 | python,nlp,nltk | NLTK classifiers can work with any key-value dictionary. I use {"word": True} for text classification, but you could also use {"contains(word)": 1} to achieve the same effect. You can also combine many features together, so you could have {"word": True, "something something": 1, "something else": "a"}. What matters most is that your features are consistent, so you always have the same kind of keys and a fixed set of possible values. Numeric values can be used, but the classifier isn't smart about them - it will treat numbers as discrete values, so that 99 and 100 are just as different as 1 and 100. If you want numbers to be handled in a smarter way, then I recommend using scikit-learn classifiers. | In NLTK, using a naive bayes classifier, I know from examples its very simply to use a "bag of words" approach and look for unigrams or bigrams or both. Could you do the same using two completely different sets of features?
For instance, could I use unigrams and length of the training set (I know this has been mentioned once on here)? But of more interest to me would be something like bigrams and "bigrams" or combinations of the POS that appear in the document?
Is this beyond the power of the basic NLTK classifier?
Thanks
Alex | 0 | 1 | 1,502 |
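
A minimal sketch of mixing feature types in one dictionary, along the lines the answer describes; the feature names and the tiny training set are made up:

```python
import nltk

def features(words):
    feats = {"length": len(words)}                       # a document-level feature
    for w in words:
        feats["word(%s)" % w.lower()] = True             # unigram presence
    for a, b in zip(words, words[1:]):
        feats["bigram(%s %s)" % (a.lower(), b.lower())] = True
    return feats

train = [(features("do not buy this".split()), "neg"),
         (features("really great product".split()), "pos")]
classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("great stuff".split())))
```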
0 | 11,517,362 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2012-07-17T06:33:00.000 | 0 | 1 | 0 | Fast access and update integer matrix or array in Python | 11,517,143 | 1.2 | python,performance | What you ask is quite of a problem. Different data structures have different properties. In general, if you need quick access, do not use lists! They have linear access time, which means, the more you put in them, the longer it will take in average to access an element.
You could perhaps use numpy? That library has matrices that can be accessed quite fast, and can be reshaped on the fly. However, if you want to add or delete rows, it will might be a bit slow because it generally reallocates (thus copies) the entire data. So it is a trade off.
If you are gonna have so many internal arrays of different sizes, perhaps you could have a dictionary that contains the internal arrays. I think if it is indexed by integers it will be much faster than a list. Then, the internal arrays could be created with numpy. | I will need to create array of integer arrays like [[0,1,2],[4,4,5,7]...[4,5]]. The size of internal arrays changeable. Max number of internal arrays is 2^26. So what do you recommend for the fastest way for updating this array.
When I use list = [[]] * 2**26, initialization is very fast but updating is very slow. Instead I use
list = []; for i in range(2**26): list.append([]).
Now initialization is slow, update is fast. For example, for 16777216 internal array and 0.213827311993 avarage number of elements on each array for 2^26-element array it takes 1.67728900909 sec. It is good but I will work much bigger datas, hence I need the best way. Initialization time is not important.
Thank you. | 0 | 1 | 497 |
0 | 11,522,576 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2012-07-17T12:15:00.000 | 0 | 3 | 0 | Process 5 million key-value data in python.Will NoSql solve? | 11,522,232 | 0 | python,nosql | If this is just a one-time process, you might want to just setup an EC2 node with more than 1G of memory and run the python scripts there. 5 million items isn't that much, and a Python dictionary should be fairly capable of handling it. I don't think you need Hadoop in this case.
You could also try to optimize your scripts by reordering the items in several runs, than running over the 5 files synchronized using iterators so that you don't have to keep everything in memory at the same time. | I would like to get the suggestion on using No-SQL datastore for my particular requirements.
Let me explain:
I have to process the five csv files. Each csv contains 5 million rows and also The common id field is presented in each csv.So, I need to merge all csv by iterating 5 million rows.So, I go with python dictionary to merge all files based on the common id field.But here the bottleneck is you can't store the 5 million keys in memory(< 1gig) with python-dictionary.
So, I decided to use No-Sql.I think It might be helpful to process the 5 million key value storage.Still I didn't have clear thoughts on this.
Anyway, we can't reduce the iteration, since each of the five CSVs has to be iterated over to update the values.
Are there simple steps to go about this?
If this is the way, could you suggest a NoSQL datastore to process the key-value pairs?
Note: some of the values are of list type as well. | 0 | 1 | 347 |
0 | 51,053,926 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2012-07-17T17:48:00.000 | 0 | 2 | 0 | OpenCV: Converting from NumPy to IplImage in Python | 11,528,009 | 0 | python,opencv | Two ways to do it:
img = cv2.imread(img_path)
img_buf = cv2.imencode('.jpg', img)[1].tobytes()
Or just read the image file directly:
img_buf = open(img_path, 'rb').read() | I have an image that I load using cv2.imread(). This returns an NumPy array. However, I need to pass this into a 3rd party API that requires the data in IplImage format.
I've scoured everything I could and I've found instances of converting from IplImage to CvMat,and I've found some references to converting in C++, but not from NumPy to IplImage in Python. Is there a function that is provided that can do this conversion? | 0 | 1 | 10,288 |
0 | 11,564,801 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-07-18T14:41:00.000 | 1 | 2 | 0 | How to determine set of connected line from an array in python | 11,543,991 | 0.099668 | python,numpy,nearest-neighbor | I don't know of anything which provides the functionality you desire out of the box. If you have already written the logic, and it is just slow, have you considered Cython-ing your code. For simple typed looping operations you could get a significant speedup. | I have an array that looks something like:
[0 x1 0 0 y1 0 z1
0 0 x2 0 y2 0 z2
0 0 x3 0 0 y3 z3
0 0 x4 0 0 y4 z4
0 x5 0 0 0 y5 z5
0 0 0 0 y6 0 0]
I need to determine set of connected line (i.e. line that connects to the points [x1,x2,x3..], [y1,y2,y3...], [z1,z2,z3..]) from the array and then need to find maximum value in each line i.e. max{x1,x2,x3,...}, max{y1,y2,y3..} etc. i was trying to do nearest neighbor search using kdtree but it return the same array. I have array of the size (200 x 8000). is there any easier way to do this? Thx. | 0 | 1 | 526 |
0 | 11,585,354 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2012-07-20T18:18:00.000 | 0 | 1 | 0 | How to efficiently convert dates in numpy record array? | 11,584,856 | 0 | python,numpy | I could be wrong, but it seems to me like your issue is having repeated occurrences, thus doing the same conversion more times than necessary. IF that interpretation is correct, the most efficient method would depend on how many repeats there are. If you have 100,000 repeats out of 1.7 million, then writing 1.6 million to a dictionary and checking it 1.7 million times might not be more efficient, since it does 1.6+1.7million read/writes. However, if you have 1 million repeats, then returning an answer (O(1)) for those rather than doing the conversion an extra million times would be much faster.
All-in-all, though, python is very slow and you might not be able to speed this up much at all, given that you are using 1.7 million inputs. As for numpy functions, I'm not that well versed in it either, but I believe there's some good documentation for it online. | I have to read a very large (1.7 million records) csv file to a numpy record array. Two of the columns are strings that need to be converted to datetime objects. Additionally, one column needs to be the calculated difference between those datetimes.
At the moment I made a custom iterator class that builds a list of lists. I then use np.rec.fromrecords to convert it to the array.
However, I noticed that calling datetime.strptime() so many times really slows things down. I was wondering if there was a more efficient way to do these conversions. The times are accurate to the second within the span of a date. So, assuming that the times are uniformly distributed (they're not), it seems like I'm doing 20x more conversions that necessary (1.7 million / (60 X 60 X 24).
Would it be faster to store converted values in a dictionary {string dates: datetime obj} and first check the dictionary, before doing unnecessary conversions?
Or should I be using numpy functions (I am still new to the numpy library)? | 0 | 1 | 420 |
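
The dictionary cache the question proposes is only a few lines; a minimal sketch, where the format string is an assumption about how the dates look:

```python
from datetime import datetime

_cache = {}

def parse_ts(s, fmt="%Y-%m-%d %H:%M:%S"):
    """strptime with a dictionary cache, so each distinct string is parsed only once."""
    try:
        return _cache[s]
    except KeyError:
        return _cache.setdefault(s, datetime.strptime(s, fmt))

a = parse_ts("2012-07-20 18:18:01")
b = parse_ts("2012-07-20 18:18:01")   # second call is a dict lookup, not a parse
print(a is b)
```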
0 | 11,610,785 | 0 | 0 | 0 | 0 | 1 | true | 20 | 2012-07-23T06:24:00.000 | 9 | 3 | 0 | Is there a C/C++ API for python pandas? | 11,607,387 | 1.2 | python,c,api,pandas | All the pandas classes (TimeSeries, DataFrame, DatetimeIndex etc.) have pure-Python definitions so there isn't a C API. You might be best off passing numpy ndarrays from C to your Python code and letting your Python code construct pandas objects from them.
If necessary you could use PyObject_CallFunction etc. to call the pandas constructors, but you'd have to take care of accessing the names from module imports and checking for errors. | I'm extracting mass data from a legacy backend system using C/C++ and move it to Python using distutils. After obtaining the data in Python, I put it into a pandas DataFrame object for data analysis. Now I want to go faster and would like to avoid the second step.
Is there a C/C++ API for pandas to create a DataFrame in C/C++, add my C/C++ data and pass it to Python? I'm thinking of something that is similar to numpy C API.
I already thougth of creating numpy array objects in C as a workaround but i'm heavily using timeseries data and would love to have the TimeSeries and date_range objects as well. | 0 | 1 | 28,186 |
0 | 42,165,467 | 0 | 0 | 0 | 0 | 1 | false | 96 | 2012-06-24T00:50:00.000 | 1 | 6 | 0 | Large, persistent DataFrame in pandas | 11,622,652 | 0.033321 | python,pandas,sas | You can use PyTables rather than a pandas DataFrame.
It is designed for large data sets, and its file format is HDF5.
So the processing time is relatively fast.
However, when running some tests today, I was surprised that python ran out of memory when trying to pandas.read_csv() a 128mb csv file. It had about 200,000 rows and 200 columns of mostly numeric data.
With SAS, I can import a csv file into a SAS dataset and it can be as large as my hard drive.
Is there something analogous in pandas?
I regularly work with large files and do not have access to a distributed computing network. | 0 | 1 | 76,790 |
0 | 18,619,898 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2012-07-28T22:20:00.000 | 0 | 4 | 0 | Junk characters (smart quotes, etc.) in output file | 11,705,114 | 0 | python,mysql,vim,encoding,smart-quotes | Are all these "junk" characters in the range <80> to <9F>? If so, it's highly likely that they're Microsoft "Smart Quotes" (Windows-125x encodings). Someone wrote up the text in Word or Outlook, and copy/pasted it into a Web application. Both Latin-1 and UTF-8 regard these characters as control characters, and the usual effect is that the text display gets cut off (Latin-1) or you see a ?-in-black-diamond-invalid-character (UTF-8).
Note that Word and Outlook, and some other MS products, provide a UTF-8 version of the text for clipboard use. Instead of <80> to <9F> codes, Smart Quotes characters will be proper multibyte UTF-8 sequences. If your Web page is in UTF-8, you should normally get a proper UTF-8 character instead of the Smart Quote in Windows-125x encoding. Also note that this is not guaranteed behavior, but "seems to work pretty consistently". It all depends on a UTF-8 version of the text being available, and properly handled (i.e., you didn't paste into, say, gvim on the PC, and then copy/paste into a Web text form). This may well also work for various PC applications, so long as they are looking for UTF-8-encoded text. | I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like <92>,<89>, <94> etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128) | 0 | 1 | 1,903 |
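A small sketch of the decode-then-normalize step described above; the byte string is a made-up example of what might come back from MySQL:

# bytes like 0x92/0x93/0x94 are Windows-1252 smart quotes, so decode as cp1252
raw = b"He said \x93hello\x94, didn\x92t he?"
text = raw.decode('cp1252')

# either write `text` out as UTF-8, or map the fancy punctuation to plain ASCII
ascii_map = {u'\u2018': "'", u'\u2019': "'", u'\u201c': '"',
             u'\u201d': '"', u'\u2013': '-', u'\u2014': '--'}
clean = ''.join(ascii_map.get(ch, ch) for ch in text)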
0 | 11,736,635 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2012-07-31T04:28:00.000 | 2 | 2 | 0 | Detecting Halo in Images | 11,733,106 | 0.197375 | python,image-processing,machine-learning,computer-vision | If I understand you correctly then you have complete black images with white borders?
In this case I think the easiest approach is to compute a histogram of the intensity values of the pixels, i.e. how „dark/bright” is the overall image. I guess that the junk images are significantly darker than the non-junk images. You can then filter the images based on their histogram. For that you have to choose a threshold: Every image darker than this threshold is considered as junk.
If this approach is too fuzzy, you can easily improve it. For example: just compute the histogram of the inner image without the edges, because this makes the histogram much darker in comparison to non-junk images.
Here's my issue: I have a bunch of images of black and white images of places around a city. Due to some problems with the camera system, some images contain nothing but a black image with a white vignette around the edge. This vignette is noisy and non-uniform (sometimes it can be on both sides, other times only one).
What are some good ways I can go about detecting these frames? I would just need to be able to write a bit.
My image set is huge, so I would need this to be an automated process and in the end it should use Python since it needs to integrate into my existing code.
I was thinking some sort of machine learning algorithm but I'm not sure what to do beyond that. | 0 | 1 | 1,550 |
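A rough sketch of the histogram/threshold heuristic, assuming PIL and NumPy are available; the two thresholds are arbitrary starting points to tune on a sample of known junk frames:

import numpy as np
from PIL import Image

def looks_like_junk(path, dark_level=30, dark_fraction=0.85):
    # a vignette frame is almost entirely black apart from the bright border
    img = np.asarray(Image.open(path).convert('L'))       # greyscale values 0-255
    h, w = img.shape
    core = img[h // 10: -h // 10, w // 10: -w // 10]       # ignore the noisy edge
    return np.mean(core < dark_level) > dark_fraction      # mostly-black core => junk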
0 | 52,399,670 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2012-08-02T00:30:00.000 | 3 | 4 | 0 | rpy2: Convert FloatVector or Matrix back to a Python array or list? | 11,769,471 | 0.148885 | python,rpy2 | In the latest version of rpy2, you can simply do this in a direct way:
import numpy as np
array=np.array(vector_R) | I'm using rpy2 and I have this issue that's bugging me: I know how to convert a Python array or list to a FloatVector that R (thanks to rpy2) can handle within Python, but I don't know if the opposite can be done, say, I have a FloatVector or Matrix that R can handle and convert it back to a Python array or list...can this be done?
Thanks in advance! | 0 | 1 | 5,269 |
0 | 11,788,967 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2012-08-03T03:40:00.000 | 1 | 2 | 0 | Importing Numpy into functions | 11,788,950 | 0.099668 | python,numpy,python-import | You have to import modules in every file in which you use them. Does that answer your question? | I do not know the right way to import modules.
I have a main file which initializes the code, does some preliminary calculations etc.
I also have 5 functions f1, f2, ... f5. The main code and all functions need Numpy.
If I define all functions in the main file, the code runs fine.
(Importing with : import numpy as np)
If I put the functions in a separate file, I get an error:
Error : Global name 'linalg' is not defined.
What is the right way to import modules such that the functions f1 - f5 can access the Numpy functionality? | 0 | 1 | 9,291 |
0 | 11,790,050 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-08-03T05:42:00.000 | 2 | 1 | 0 | What's a good way to output matplotlib graphs on a PHP website? | 11,789,917 | 1.2 | php,python,matplotlib | You could modify your python script so it outputs an image (image/jpeg) instead of saving it to a file. Then use the tag as normal, but pointing directly to the python script. Your php wouldn't call the python script at all, It would just include it as the src of the image. | I have a python script that can output a plot using matplotlib and command line inputs.
What I'm doing right now is making the script print the location/filename of the generated plot image, and then when PHP sees it, it outputs an img tag to display it.
The python script deletes images that are older than 20 minutes when it runs. It seems like too much of a workaround, and I'm wondering if there's a better solution. | 0 | 1 | 2,549
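A hypothetical CGI-style plot.py illustrating the 'serve the image directly' idea (the img tag's src would point straight at the Python script); the script name and the plotted data are placeholders:

#!/usr/bin/env python
import io
import sys

import matplotlib
matplotlib.use('Agg')                     # headless backend for a web server
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [3, 1, 4, 1])       # placeholder data

buf = io.BytesIO()
fig.savefig(buf, format='png')

sys.stdout.write('Content-Type: image/png\r\n\r\n')
sys.stdout.flush()
sys.stdout.buffer.write(buf.getvalue())   # Python 3; plain sys.stdout on Python 2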
0 | 11,809,642 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-08-04T04:49:00.000 | 2 | 2 | 0 | Heat map generator of a floor plan image | 11,805,983 | 0.197375 | matlab,python-3.x,heatmap,color-mapping | One way to do this would be:
1) Load in the floor plan image with Matlab or NumPy/matplotlib.
2) Use some built-in edge detection to locate the edge pixels in the floor plan.
3) Form a big list of (x,y) locations where an edge is found in the floor plan.
4) Plot your heat map
5) Scatterplot the points of the floor plan as an overlay.
It sounds like you know how to do each of these steps individually, so all you'll need to do is look up some stuff on how to overlay plots onto the same axis, which is pretty easy in both Matlab and matplotlib.
If you're unfamiliar, the right commands look at are things like meshgrid and surf, possibly contour and their Python equivalents. I think Matlab has a built-in for Canny edge detection. I believe this was more difficult in Python, but if you use the PIL library, the Mahotas library, the scikits.image library, and a few others tailored for image manipulation, it's not too bad. SciPy may actually have an edge filter by now though, so check there first.
The only sticking point will be if your (x,y) data for the temperature are not going to line up with the (x,y) pixel locations in the image. In that case, you'll have to play around with some x-scale factor and y-scale factor to transform your heat map's coordinates into pixel coordinates first, and then plot the heat map, and then the overlay should work.
This is a fairly low-tech way to do it; I assume you just need a quick and dirty plot to illustrate how something's working. This method does have the advantage that you can change the style of the floorplan points easily, making them larger, thicker, thinner, different colors, or transparent, depending on how you want it to interact with the heat map. However, to do this for real, use GIMP, Inkscape, or Photoshop and overlay the heatmap onto the image after the fact. | I want to generate a heat map image of a floor. I have the following things:
A black & white .png image of the floor
A three column array stored in Matlab.
-- The first two columns indicate the X & Y coordinates of the floorpan image
-- The third coordinate denotes the "temperature" of that particular coordinate
I want to generate a heat map of the floor that will show the "temperature" strength in those coordinates. However, I want to display the heat map on top of the floor plan so that the viewers can see which rooms lead to which "temperatures".
Is there any software that does this job? Can I use Matlab or Python to do this?
Thanks,
Nazmul | 0 | 1 | 4,036 |
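A possible matplotlib/SciPy sketch of steps 1, 4 and 5 above; 'floorplan.png' and 'measurements.txt' are placeholder file names, and any scaling between pixel and measurement coordinates is left out:

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

floor = plt.imread('floorplan.png')
data = np.loadtxt('measurements.txt')             # assumed export of the 3-column array
x, y, temp = data[:, 0], data[:, 1], data[:, 2]

# interpolate the scattered temperatures onto a regular grid
xi = np.linspace(x.min(), x.max(), 200)
yi = np.linspace(y.min(), y.max(), 200)
Xi, Yi = np.meshgrid(xi, yi)
Ti = griddata((x, y), temp, (Xi, Yi), method='linear')

plt.imshow(floor, cmap='gray', extent=[x.min(), x.max(), y.min(), y.max()])
plt.contourf(Xi, Yi, Ti, 20, alpha=0.5)           # translucent heat map on top
plt.colorbar()
plt.show()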
0 | 11,826,057 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2012-08-06T08:15:00.000 | 0 | 2 | 0 | Python For OpenCV2.4 | 11,824,697 | 0 | python,opencv | Have you run 'make install' or 'sudo make install'? While not absolutely necessary, it copies the generated binaries to your system paths. | I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help. | 0 | 1 | 329 |
0 | 11,824,855 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2012-08-06T08:15:00.000 | 2 | 2 | 0 | Python For OpenCV2.4 | 11,824,697 | 1.2 | python,opencv | You should either copy the cv2 library to a location in your PYTHONPATH or add your current directory to the PYTHONPATH. | I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help. | 0 | 1 | 329 |
0 | 13,551,794 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2012-08-07T11:08:00.000 | 3 | 1 | 0 | Matplotlib with TkAgg error: PyEval_RestoreThread: null tstate on save_fig() - do I need threads enabled? | 11,844,628 | 1.2 | python,c,matplotlib | Finally resolved this - so going to explain what occurred for the sake of Googlers!
This only happened when using third-party libraries like numpy or matplotlib, but it was actually related to an error elsewhere in my code. As part of the software I wrote, I was extending the Python interpreter following the same basic pattern as shown in the Python C API documentation.
At the end of this code, I called the Py_DECREF function on some of the Python objects I had created along the way. My mistake was that I was calling this function on borrowed references, which should not be done.
This caused the software to crash with the error above when it reached the Py_Finalize command that I used to clean up. Removing the DECREF on the borrowed references fixed this error. | I'm puzzling over an embedded Python 2.7.2 interpreter issue. I've embedded the interpreter in a Visual C++ 2010 application and it essentially just calls user-written scripts.
My end-users want to use matplotlib - I've already resolved a number of issues relating to its dependence on numpy - but when they call savefig(), the application crashes with:
Fatal Python Error: PyEval_RestoreThread: NULL tstate
This isn't an issue running the same script using the standard Python 2.7.2 interpreter, even using the same site-packages, so it seems to definitely be something wrong with my embedding. I call Py_Initialize() - do I need to do something with setting up Python threads?
I can't quite get the solution from other questions here to work, but I'm more concerned that this is symptomatic of a wider problem in how I'm setting up the Python interpreter. | 0 | 1 | 2,495 |
0 | 71,602,253 | 0 | 0 | 0 | 0 | 1 | false | 46 | 2012-08-09T10:15:00.000 | 0 | 3 | 0 | Slice Pandas DataFrame by Row | 11,881,165 | 0 | python,pandas,slice | If you just need to get the top rows; you can use df.head(10) | I am working with survey data loaded from an h5-file as hdf = pandas.HDFStore('Survey.h5') through the pandas package. Within this DataFrame, all rows are the results of a single survey, whereas the columns are the answers for all questions within a single survey.
I am aiming to reduce this dataset to a smaller DataFrame including only the rows with a certain depicted answer on a certain question, i.e. with all the same value in this column. I am able to determine the index values of all rows with this condition, but I can't find how to delete these rows or make a new df with these rows only. | 0 | 1 | 92,185
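What the asker describes is plain boolean row selection rather than head(); a small sketch where the store key 'survey', the column 'q17' and the answer 'yes' are made-up names:

import pandas as pd

hdf = pd.HDFStore('Survey.h5')
df = hdf['survey']                     # placeholder key for the stored frame
smaller = df[df['q17'] == 'yes']       # keep only rows with the chosen answer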
0 | 11,890,470 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2012-08-09T19:17:00.000 | 15 | 3 | 0 | Is there an equivalent of the Python range function in MATLAB? | 11,890,437 | 1 | python,matlab | Yes, there is the : operator. The command -10:5:11 would produce the vector [-10, -5, 0, 5, 10]; | Is there an equivalent MATLAB function for the range() function in Python?
I'd really like to be able to type something like range(-10, 11, 5) and get back [-10, -5, 0, 5, 10] instead of having to write out the entire range by hand. | 0 | 1 | 21,375 |
0 | 11,903,766 | 0 | 1 | 0 | 0 | 1 | false | 18 | 2012-08-10T13:50:00.000 | -1 | 3 | 0 | Find the set difference between two large arrays (matrices) in Python | 11,903,083 | -0.066568 | python,numpy,set-difference | I'm not sure what you are going for, but this will get you a boolean array of where 2 arrays are not equal, and will be numpy fast:
import numpy as np
a = np.random.randn(5, 5)
b = np.random.randn(5, 5)
a[0,0] = 10.0
b[0,0] = 10.0
a[1,1] = 5.0
b[1,1] = 5.0
c = ~(a-b==0)
print c
[[False True True True True]
[ True False True True True]
[ True True True True True]
[ True True True True True]
[ True True True True True]] | I have two large 2-d arrays and I'd like to find their set difference taking their rows as elements. In Matlab, the code for this would be setdiff(A,B,'rows'). The arrays are large enough that the obvious looping methods I could think of take too long. | 0 | 1 | 13,283 |
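For the actual setdiff(A, B, 'rows') behaviour, one simple approach is to hash each row as a tuple; a sketch (not necessarily the fastest possible, but the set lookups avoid a quadratic loop):

import numpy as np

def setdiff_rows(A, B):
    # rows of A that never appear as a row of B
    b_rows = set(map(tuple, B))
    return np.array([row for row in A if tuple(row) not in b_rows])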
0 | 58,766,819 | 0 | 1 | 0 | 0 | 1 | false | 19 | 2012-08-10T23:24:00.000 | 0 | 3 | 0 | Best Machine Learning package for Python 3x? | 11,910,481 | 0 | python,python-3.x,machine-learning,scikit-learn | Old question, Scikit-Learn is supported by Python3 now. | I was bummed out to see that scikit-learn does not support Python 3...Is there a comparable package anyone can recommend for Python 3? | 0 | 1 | 3,322 |
0 | 11,926,117 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-08-12T20:48:00.000 | 1 | 3 | 0 | What are some good methods for detecting movement using a camera? (opencv) | 11,925,782 | 0.066568 | python,opencv | Please go through the book Learning OpenCV: Computer Vision with the OpenCV Library
It has theory as well as example codes. | I am looking for some methods for detecting movement. I've tried two of them. One method is to have background frame that has been set on program start and other frames are compared (threshold) to that background frame. Other method is to compare current frame (let's call that frame A) with frame-1 (frame before A). None of these methods are great. I want to know other methods that work better. | 0 | 1 | 637 |
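Beyond the two approaches already tried, a common refinement is to blur before differencing and then threshold the result; a rough cv2 sketch where the camera index and both thresholds are guesses to be tuned:

import cv2

cap = cv2.VideoCapture(0)                               # camera index is a guess
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)                      # per-pixel change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 500:                    # enough changed pixels
        print("movement detected")
    prev = gray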
0 | 12,489,308 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-08-14T02:02:00.000 | 1 | 1 | 0 | cv.KMeans2 clustering indices inconsistent | 11,944,796 | 1.2 | python,opencv,cluster-analysis,k-means | Since k-means is a randomized approach, you will probably encounter this problem even when analyzing the same frame multiple times.
Try to use the previous frame's cluster centers as initial centers for k-means. This may make the ordering stable enough for you, and it may even significantly speed up k-means (assuming that the green spots don't move too fast).
Alternatively, just try reordering the means so that they are closest to the previous image's means. | So I have a video with 3 green spots on it. These spots have a bunch of "good features to track" around their perimeter.
The spots are very far away from each other so using KMeans I am easily able to identify them as separate clusters.
The problem comes in that the ordering of the clusters changes from frame to frame. In one frame a particular cluster is the first in the output list. In the next frame it is the second in the output list.
It is making for a difficult time measuring angles.
Has anyone come across this or can think of a fix other than writing extra code to compare each list to the list of the previous frame? | 0 | 1 | 232 |
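A sketch of the 'reorder to match the previous frame' idea; it assumes the three spots stay far apart, so a greedy nearest-centre match is unambiguous:

import numpy as np
from scipy.spatial.distance import cdist

def match_to_previous(prev_centers, centers, labels):
    # for every previous centre, pick the nearest centre from this frame
    mapping = cdist(prev_centers, centers).argmin(axis=1)
    relabeled = np.empty_like(labels)
    for prev_idx, new_idx in enumerate(mapping):
        relabeled[labels == new_idx] = prev_idx         # keep last frame's numbering
    return centers[mapping], relabeled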
0 | 11,998,662 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-08-16T00:57:00.000 | 1 | 2 | 0 | efficient way to resize numpy or dataset? | 11,979,316 | 0.099668 | python,numpy,h5py | NumPy arrays are not designed to be resized. It's doable, but wasteful in terms of memory (because you need to create a second array larger than your first one, then fill it with your data... That's two arrays you have to keep) and of course in terms of time (creating the temporary array).
You'd be better off starting with lists (or regular arrays, as suggested by @HYRY), then convert to ndarrays when you have a chunk big enough.
The question is, when do you need to do the conversion ? | I want to understand the effect of resize() function on numpy array vs. an h5py dataset. In my application, I am reading a text file line by line and then after parsing the data, write into an hdf5 file. What would be a good approach to implement this. Should I add each new row into a numpy array and keep resizing (increasing the axis) for numpy array (eventually writing the complete numpy array into h5py dataset) or should I just add each new row data into h5py dataset directly and thus resizing the h5py dataset in memory. How does resize() function affects the performance if we keep resizing after each row? Or should I resize after every 100 or 1000 rows?
There can be around 200,000 lines in each dataset.
Any help is appreciated. | 0 | 1 | 1,987 |
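A sketch of the 'collect rows in a Python list, flush in blocks' suggestion using an h5py dataset that can grow; the file names, block size, column count and the placeholder parser are all assumptions:

import h5py
import numpy as np

ncols = 10                                   # assumed number of parsed values per line

def parse(line):
    # placeholder parser: replace with your real line-parsing logic
    return [float(v) for v in line.split()]

with h5py.File('out.h5', 'w') as f:
    dset = f.create_dataset('data', shape=(0, ncols),
                            maxshape=(None, ncols), chunks=(1000, ncols))
    block = []
    for line in open('input.txt'):
        block.append(parse(line))
        if len(block) == 1000:               # resize in blocks, not per row
            n = dset.shape[0]
            dset.resize(n + len(block), axis=0)
            dset[n:] = np.asarray(block)
            block = []
    if block:                                # flush whatever is left
        n = dset.shape[0]
        dset.resize(n + len(block), axis=0)
        dset[n:] = np.asarray(block)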
0 | 12,011,024 | 0 | 0 | 0 | 0 | 3 | true | 7 | 2012-08-17T02:48:00.000 | 2 | 3 | 0 | Is it possible to use complex numbers as target labels in scikit learn? | 11,999,147 | 1.2 | python,numpy,scipy,scikit-learn | So far I discovered that most classifiers, like linear regressors, will automatically convert complex numbers to just the real part.
kNN and RadiusNN regressors, however, work well - since they do a weighted average of the neighbor labels and so handle complex numbers gracefully.
Using a multi-target classifier is another option, however I do not want to decouple the x and y directions since that may lead to unstable solutions as Colonel Panic mentions, when both results come out close to 0.
I will try other classifiers with complex targets and update the results here. | I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.
I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.
Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets? | 0 | 1 | 2,406 |
0 | 12,003,586 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2012-08-17T02:48:00.000 | 1 | 3 | 0 | Is it possible to use complex numbers as target labels in scikit learn? | 11,999,147 | 0.066568 | python,numpy,scipy,scikit-learn | Good question. How about transforming angles into a pair of labels, viz. x and y co-ordinates. These are continuous functions of angle (cos and sin). You can combine the results from separate x and y classifiers for an angle? $\theta = \sign(x) \arctan(y/x)$. However that result will be unstable if both classifiers return numbers near zero. | I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.
I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.
Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets? | 0 | 1 | 2,406 |
0 | 12,004,759 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2012-08-17T02:48:00.000 | 4 | 3 | 0 | Is it possible to use complex numbers as target labels in scikit learn? | 11,999,147 | 0.26052 | python,numpy,scipy,scikit-learn | Several regressors support multidimensional regression targets. Just view the complex numbers as 2d points. | I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.
I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.
Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets? | 0 | 1 | 2,406 |
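One way to combine the answers above without complex numbers at all is to regress on (cos theta, sin theta) as a two-column target and recover the angle with arctan2; a sketch in which the training and test data are synthetic placeholders for the asker's own:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
X_train = rng.rand(200, 3)                            # placeholder features
theta_train = rng.uniform(-np.pi, np.pi, 200)         # placeholder angles
X_test = rng.rand(10, 3)

# encode each rotation angle as a point on the unit circle
Y_train = np.column_stack([np.cos(theta_train), np.sin(theta_train)])

model = KNeighborsRegressor(n_neighbors=5, weights='distance')
model.fit(X_train, Y_train)

pred = model.predict(X_test)
theta_pred = np.arctan2(pred[:, 1], pred[:, 0])       # back to an angle in (-pi, pi]
# note: if both predicted components are near zero the angle is ill-defined,
# which is the instability mentioned in the second answer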
0 | 12,014,071 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2012-08-17T22:34:00.000 | 1 | 3 | 0 | Python muliple deepcopy behaviors | 12,014,042 | 1.2 | python,deep-copy | If there are no other objects referenced in graph (just simple fields), then copy.copy(graph) should make a copy, while copy.deepcopy(manager) should copy the manager and its graphs, assuming there is a list such as manager.graphs.
But in general you are right, the copy module does not have this flexibility, and for slightly fancy situations you'd probably need to roll your own. | Suppose I have two classes, say Manager and Graph, where each Graph has a reference to its manager, and each Manager has references to a collection of graphs that it owns. I want to be able to do two things
1) Copy a graph, which performs a deepcopy except that the new graph references the same manager as the old one.
2) Copy a manager, which creates a new manager and also copies all the graphs it owns.
What is the best way to do this? I don't want to have to roll my own deepcopy implementation, but the standard copy.deepcopy doesn't appear to provide this level of flexibility. | 0 | 1 | 1,009 |
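One way to get both behaviours without rewriting deepcopy itself is to override __deepcopy__ on each class; a sketch under the assumption that the classes look roughly like the description (a Graph holding a manager reference and some data, a Manager holding a list of graphs):

import copy

class Graph(object):
    def __init__(self, manager, data):
        self.manager = manager
        self.data = data

    def __deepcopy__(self, memo):
        # case 1: deep-copy the contents but keep pointing at the same manager
        return Graph(self.manager, copy.deepcopy(self.data, memo))

class Manager(object):
    def __init__(self):
        self.graphs = []

    def __deepcopy__(self, memo):
        # case 2: a new manager that owns fresh copies of all its graphs
        new = Manager()
        memo[id(self)] = new
        for g in self.graphs:
            new.graphs.append(Graph(new, copy.deepcopy(g.data, memo)))
        return new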
0 | 12,079,563 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-08-20T21:13:00.000 | 0 | 1 | 1 | Data analysis using MapReduce in MongoDb vs a Distributed Queue using Celery & RabbitMq | 12,045,278 | 0 | python,mongodb,mapreduce,celery,distributed-computing | It's impossible to say without benchmarking for certain, but my intuition leans toward doing more calculations in Python rather than mapreduce. My main concern is that mapreduce is single-threaded: One MongoDB process can only run one Javascript function at a time. It can, however, serve thousands of queries simultaneously, so you can take advantage of that concurrency by querying MongoDB from multiple Python processes. | I am currently working on a project which involves performing a lot of statistical calculations on many relatively small datasets. Some of these calculations are as simple as computing a moving average, while others involve slightly more work, like Spearman's Rho or Kendell's Tau calculations.
The datasets are essentially a series of arrays packed into a dictionary, whose keys relate to a document id in MongoDb that provides further information about the subset. Each array in the dictionary has no more than 100 values. The dictionaries, however, may be infinitely large. In all reality however, around 150 values are added each year to the dictionary.
I can use mapreduce to perform all of the necessary calculations. Alternately, I can use Celery and RabbitMQ on a distributed system, and perform the same calculations in python.
My question is this: which avenue is most recommended or best-practice?
Here is some additional information:
I have not benchmarked anything yet, as I am just starting the process of building the scripts to compute the metrics for each dataset.
Using a celery/rabbitmq distributed queue will likely increase the number of queries made against the Mongo database.
I do not envision the memory usage of either method being a concern, unless the number of simultaneous tasks is very large. The majority of the tasks themselves are merely taking an item within a dataset, loading it, doing a calculation, and then releasing it. So even if the amount of data in a dataset is very large, not all of it will be loaded into memory at one time. Thus, the limiting factor, in my mind, comes down to the speed at which mapreduce or a queued system can perform the calculations. Additionally, it is dependent upon the number of concurrent tasks.
Thanks for your help! | 0 | 1 | 944 |
0 | 12,150,625 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2012-08-27T22:31:00.000 | 0 | 1 | 0 | Numpy/Scipy modulus function | 12,150,513 | 0 | python,numpy,scipy | According to the doc, np.mod(x1,x2)=x1-floor(x1/x2)*x2. The problem here is that you are working with very small values, a dark domain where floating point errors (truncation...) happen quite often and results are often unpredictable...
I don't think you should spend a lot of time worrying about that. | The Numpy 'modulus' function is used in a code to check if a certain time is an integral multiple of the time-step.
But some weird behavior is seen.
numpy.mod(121e-12,1e-12) returns 1e-12
numpy.mod(60e-12,1e-12) returns 'a very small value' (compared to 1e-12).
If you play around with numpy.mod(x, 1e-12) for x between 122e-12 and 126e-12, it randomly gives 0 or 1e-12.
Can someone please explain why?
Thanks much | 0 | 1 | 4,240 |
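Since the real goal is 'is this time an integral multiple of the time step', a more robust check is to divide first and compare against the nearest integer; a small sketch:

import numpy as np

dt = 1e-12
t = np.array([60e-12, 121e-12, 125e-12])

n = t / dt                               # how many steps each time corresponds to
is_multiple = np.isclose(n, np.rint(n))  # avoids taking a float modulus entirely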
0 | 12,151,167 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2012-08-27T23:27:00.000 | 0 | 2 | 0 | How do I shuffle in Python for a column (years) with keeping the corrosponding column values? | 12,150,908 | 0 | python-2.7 | Are you familiar with NumPy ? Once you have your data in a numpy ndarray, it's a breeze to shuffle the rows while keeping the column orders, without the hurdle of creating many temporaries.
You could use a function like np.genfromtxt to read your data file and create a ndarray with different named fields. You could then use the np.random.shuffle function to reorganize the rows. | I have a text file with five columns. First column has year(2011 to 2040), 2nd has Tmax, 3rd has Tmin, 4th has Precip, and fifth has Solar for 30 years. I would like to write a python code which shuffles the first column (year) 10 times with remaining columns having the corresponding original values in them, that is: I want to shuffle year columns only for 10 times so that year 1 will have the corresponding values. | 0 | 1 | 1,120 |
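A small sketch of the genfromtxt/shuffle suggestion; 'weather.txt' and the output names are placeholders, and whole rows are permuted so each year keeps its own Tmax/Tmin/Precip/Solar values:

import numpy as np

data = np.genfromtxt('weather.txt',
                     names=['year', 'tmax', 'tmin', 'precip', 'solar'])

for i in range(10):
    np.random.shuffle(data)              # permutes whole rows of the structured array
    np.savetxt('shuffled_%d.txt' % i,
               np.column_stack([data[name] for name in data.dtype.names]))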
0 | 12,161,433 | 0 | 0 | 1 | 0 | 2 | true | 0 | 2012-08-28T14:12:00.000 | 2 | 2 | 0 | Computing the null space of a large matrix | 12,161,182 | 1.2 | c++,python,c,algorithm,matrix | The manner to avoid trashing CPU caches greatly depends on how the matrix is stored/loaded/transmitted, a point that you did not address.
There are a few generic recommendations:
divide the problem into worker threads addressing contiguous rows per threads
increment pointers (in C) to traverse rows and keep the count on a per-thread basis
consolidate the per-thread results at the end of all worker threads.
If your matrix cells are made of bits (instead of bytes, ints, or arrays) then you can read words (either 4-byte or 8-byte on 32-bit/64-bit platforms) to speed up the count.
There are too many questions left unanswered in the problem description to give you any further guidance. | I'm looking for the fastest algorithm/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated! | 0 | 1 | 766 |
0 | 12,161,500 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2012-08-28T14:12:00.000 | -1 | 2 | 0 | Computing the null space of a large matrix | 12,161,182 | -0.099668 | c++,python,c,algorithm,matrix | In what kind of data structure is your matrix represented?
If you use an element list to represent the matrix, i.e. "column, row, value" tuple for one matrix element, then the solution would be just count the number of the tuples (subtracted by the matrix size) | I'm looking for the fastest algorithm/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated! | 0 | 1 | 766 |
0 | 12,170,479 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2012-08-28T20:49:00.000 | 2 | 1 | 0 | How to broadcast to a multiindex | 12,167,324 | 0.379949 | python,arrays,pandas,multi-index | If you just want to do simple arithmetic operations, I think something like A.div(B, level='date') should work.
Alternatively, you can do something like B.reindex(A.index, level='date') to manually match the indices. | I have two pandas arrays, A and B, that result from groupby operations. A has a 2-level multi-index consisting of both quantile and date. B just has an index for date.
Between the two of them, the date indices match up (within each quantile index for A).
Is there a standard Pandas function or idiom to "broadcast" B such that it will have an extra level to its multi-index that matches the first multi-index level of A? | 0 | 1 | 1,335 |
0 | 12,610,165 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2012-08-29T08:16:00.000 | 5 | 2 | 0 | MatPlotLib with Sublime Text 2 on OSX | 12,173,541 | 1.2 | python,matplotlib,sublimetext2,sublimetext | I had the same problem and the following fix worked for me:
1 - Open Sublime Text 2 -> Preferences -> Browse Packages
2 - Go to the Python folder, select file Python.sublime-build
3 - Replace the existing cmd line for this one:
"cmd": ["/Library/Frameworks/Python.framework/Versions/Current/bin/python", "$file"],
Then click CMD+B and your script with matplotlib stuff will work. | I want to use matplotlib from my Sublime Text 2 directly via the build command.
Does anybody know how I accomplish that? I'm really confused about the whole multiple python installations/environments. Google didn't help.
My python is installed via homebrew and in my terminal (which uses brew python), I have no problem importing matplotlib from there. But Sublime Text shows me an import Error (No module named matplotlib.pyplot).
I have installed Matplotlib via EPD Free. The main matplotlib .dmg installer refused to install it on my disk, because no system version 2.7 was found. I have given up trying to understand the whole thing. I just want it to work.
And, I have to say, for every bit of joy Python brings with it, the whole business of installations, versions, paths, and environments is a real hassle.
Besides help with this specific problem, I would appreciate any helpful link for understanding this environment mess. | 0 | 1 | 3,245
0 | 12,199,026 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2012-08-29T10:01:00.000 | 13 | 1 | 0 | scikit-learn GMM produce positive log probability | 12,175,404 | 1.2 | python,machine-learning,scikit-learn,mixture-model | Positive log probabilities are okay.
Remember that the GMM computed probability is a probability density function (PDF), so can be greater than one at any individual point.
The restriction is that the PDF must integrate to one over the data domain.
If the log probability grows very large, then the inference algorithm may have reached a degenerate solution (common with maximum likelihood estimation if you have a small dataset).
To check that the GMM algorithm has not reached a degenerate solution, you should look at the variances for each component. If any of the variances is close to zero, then this is bad. As an alternative, you should use a Bayesian model rather than maximum likelihood estimation (if you aren't doing so already). | I am using Gaussian Mixture Model from python scikit-learn package to train my dataset , however , I fount that when I code
-- G=mixture.GMM(...)
-- G.fit(...)
-- G.score(some features)
the resulting log probability is a positive real number... why is that?
isn't log probability guaranteed to be negative?
I get it. What the Gaussian Mixture Model returns to us is the log probability "density" instead of the probability "mass", so a positive value is totally reasonable.
If the covariance matrix is nearly singular, then the GMM will not perform well, and generally it means the data is not good for such a generative task | 0 | 1 | 5,301
0 | 12,618,627 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-08-29T17:55:00.000 | 0 | 1 | 1 | Using Numpy and SciPy on Apache Pig | 12,183,759 | 0 | python,numpy,scipy,apache-pig | You can stream through a (C)Python script that imports scipy.
I am for instance using this to cluster data inside bags, using import scipy.cluster.hierarchy | I want to write UDFs in Apache Pig. I'll be using Python UDFs.
My issue is that I have tons of data to analyse and need packages like NumPy and SciPy. But since they don't have Jython support, I can't use them along with Pig.
Is there a substitute? | 0 | 1 | 524
0 | 12,185,246 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-08-29T19:26:00.000 | 3 | 1 | 0 | Python - Numpy matrix multiplication | 12,185,117 | 1.2 | python,matrix,numpy,python-2.7,matrix-multiplication | In numpy convention, the transpose of X is represented by X.T and you're in luck, X.T is just a view of the original array X, meaning that no copy is done. | I am trying to optimize (memory-wise) the multiplication of X and its transpose X'.
Does anyone know if numpy's matrix multiplication takes into consideration that X' is just the transpose of X? What I mean is: does it detect this and therefore not create the object X', but just work on the cols/rows of X to produce the product? Thank you for any help on this!
J. | 0 | 1 | 505 |
0 | 12,187,770 | 0 | 0 | 0 | 0 | 1 | true | 13 | 2012-08-29T21:47:00.000 | 9 | 4 | 0 | Johansen cointegration test in python | 12,186,994 | 1.2 | python,statistics,pandas,statsmodels | statsmodels doesn't have a Johansen cointegration test. And, I have never seen it in any other python package either.
statsmodels has VAR and structural VAR, but no VECM (vector error correction models) yet.
update:
As Wes mentioned, there is now a pull request for Johansen's cointegration test for statsmodels. I have translated the matlab version in LeSage's spatial econometrics toolbox and wrote a set of tests to verify that we get the same results.
It should be available in the next release of statsmodels.
update 2:
The test for cointegration coint_johansen was included in statsmodels 0.9.0 together with the vector error correction models VECM.
(see also 3rd answer) | I can't find any reference on funcionality to perform Johansen cointegration test in any Python module dealing with statistics and time series analysis (pandas and statsmodel). Does anybody know if there's some code around that can perform such a test for cointegration among time series? | 0 | 1 | 18,067 |
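With a recent statsmodels (0.9.0 or later, per the update above) the call looks roughly like this; the synthetic cointegrated series are just for illustration, and lr1/cvt are, if memory serves, the trace statistics and their critical values:

import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

np.random.seed(0)
walk = np.cumsum(np.random.randn(500))                 # shared stochastic trend
series_a = walk + 0.1 * np.random.randn(500)
series_b = 2.0 * walk + 0.1 * np.random.randn(500)     # cointegrated with series_a
data = np.column_stack([series_a, series_b])           # (nobs, k)

result = coint_johansen(data, det_order=0, k_ar_diff=1)
print(result.lr1)   # trace statistics for r<=0, r<=1, ...
print(result.cvt)   # 90/95/99% critical values for each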
0 | 12,205,078 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2012-08-29T21:59:00.000 | 1 | 5 | 0 | Open Source Scientific Project - Use Python 2.6 or 2.7? | 12,187,115 | 0.039979 | python,numpy,version,scipy,optparse | I personally use Debian stable for my own projects so naturally I gravitate toward what the distribution uses as the default Python installation. For Squeeze (current stable), it's 2.6.6 but Wheezy will use 2.7.
Why is this relevant? Well, as a programmer there are a number of times I wish I had access to new features from more recent versions of Python, but Debian in general is so conservative that I find it's a good metric for covering a wider audience who may be running an older OS.
Since Wheezy probably will become stable by the end of the year (or earlier next year), I'll be moving to 2.7 as well. | I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7.
I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this?
This would also clear up whether or not to use optparse when making my scripts.
Edit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice! | 0 | 1 | 169 |
0 | 12,187,327 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2012-08-29T21:59:00.000 | 2 | 5 | 0 | Open Source Scientific Project - Use Python 2.6 or 2.7? | 12,187,115 | 0.07983 | python,numpy,version,scipy,optparse | If you intend to distribute this code, your answer depends on your target audience, actually. A recent stint in some private sector research lab showed me that Python 2.5 is still often use.
Another example: EnSight, a commercial package for 3D visualization/manipulation, ships with Python 2.5 (and NumPy 1.3 or 1.4, if I'm not mistaken).
For a personal project, I'd shoot for 2.7. For a larger audience, I'd err towards 2.6. | I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7.
I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this?
This would also clear up whether or not to use optparse when making my scripts.
Edit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice! | 0 | 1 | 169 |
0 | 12,187,140 | 0 | 1 | 0 | 0 | 3 | true | 2 | 2012-08-29T21:59:00.000 | 9 | 5 | 0 | Open Source Scientific Project - Use Python 2.6 or 2.7? | 12,187,115 | 1.2 | python,numpy,version,scipy,optparse | If everything you need would work with 2.7 I would use it, no point staying with 2.6. Also, .format() works a bit nicer (no need to specify positions in the {} for the arguments to the formatting directives).
FWIW, I usually use 2.7 or 3.2 and every once in a while I end up porting some code to my Linux box which still runs 2.6.5 and the format() thing is annoying enough :)
2.7 has been around enough to be supported well - and 3.x is hopefully getting there too. | I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7.
I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this?
This would also clear up whether or not to use optparse when making my scripts.
Edit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice! | 0 | 1 | 169 |
0 | 12,187,989 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2012-08-29T23:16:00.000 | 0 | 1 | 1 | How can I stream data, on my Mac, from a bluetooth source using R? | 12,187,795 | 1.2 | python,macos,r,bluetooth | there is a strong probability that you can enumerate the bluetooth as serial port for the bluetooth and use pyserial module to communicate pretty easily...
but if this device does not enumerate serially you will have a very large headache trying to do this...
see if there are any COM ports available; if there are, it's almost definitely enumerating as a serial connection | I have a device that is connected to my Mac via bluetooth. I would like to use R (or maybe Python, but R is preferred) to read the data real-time and process it. Does anyone know how I can do the data streaming using R on a Mac?
Cheers | 0 | 1 | 377 |
0 | 12,212,637 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-08-31T09:14:00.000 | 4 | 1 | 0 | Cassandra get_range_slice | 12,212,321 | 1.2 | c#,java,c++,python,cassandra | The columns for each row will be returned in sorted order, sorted by the column key, depending on you comparator_type. The row ordering will depend on your partitioner, and if you use the random partitioner, the rows will come back in a 'random' order.
In Cassandra, it is possible for each row to have a different set of columns, so you should really read the column key before using the value. This will depend on the data you have inserted into you cluster. | When get_range_slice returns, in what order are the columns returned? Is it random or the order in which the columns were created? Is it best practice to iterate through all resulting columns for each row and compare the column name prior to using the value or can one just index into the returning array? | 0 | 1 | 161 |
0 | 14,346,374 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-09-02T02:01:00.000 | 0 | 2 | 0 | How to make a 2d map with perlin noise python | 12,232,901 | 0 | python,pyglet,terrain,perlin-noise | You could also use 1d perlin noise to calculate the radius from each point to the "center" of the island. It should be really easy to implement, but it will make more circular islands, and won't give each point different heights. | I have been experimenting on making a random map for a top down RPG I am making in Python. (and Pyglet) So far I have been making island by starting at 0,0 and going in a random direction 500 times (x+=32 or y -=32 sort of thing) However this doesn't look like a real image very much so I had a look at the Perlin Noise approach. How would I get a randomly generated map out of this :/ (preferably an island) and is it better than the random direction method? | 0 | 1 | 5,966 |
0 | 12,243,670 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2012-09-03T04:37:00.000 | 1 | 1 | 0 | What algorithms i can use from machine learning or Artificial intelligence which i can show via web site | 12,242,054 | 1.2 | python,web,machine-learning,artificial-intelligence | I assume you are mostly concerned with a general approach to implementing AI in a web context, and not in the details of the AI algorithms themselves. Any computable algorithm can be implemented in any turing complete language (i.e.all modern programming languages). There's no special limitations for what you can do on the web, it's just a matter of representation, and keeping track of session-specific data, and shared data. Also, there is no need to shy away from "calculation" and "graph based" algorithms; most AI-algorithms will be either one or the other (or indeed both) - and that's part of the fun.
For example, as an overall approach for a neural net, you could:
Implement a standard neural network using python classes
Possibly train the set with historical data
Load the state of the net on each request (i.e. from a pickle)
Feed a part of the request string (i.e. a product-ID) to the net, and output the result (i.e. a weighted set of other products, like "users who clicked this, also clicked this")
Also, store the relevant part of the request (i.e. the product-ID) in a session variable (i.e. "previousProduct"). When a new request (i.e. for another product) comes in from the same user, strengthen/create the connection between the first product and the next.
Save the state of the net between each request (i.e. back to pickle)
That's just one, very general example. But keep in mind - there is nothing special about web-programming in this context, except keeping track of session-specific data, and shared data. | I am die hard fan of Artificial intelligence and machine learning. I don't know much about them but i am ready to learn. I am currently a web programmer in PHP , and I am learning python/django for a website.
Now as this AI field is very wide and there are countless algorithms I don't know where to start.
But eventually my main target is to use whichever algorithms, like genetic algorithms, neural networks, or optimization, can be programmed in a web application to show some stuff.
For example: recommendation of items on amazon.com.
Now what I want is that on my personal site I have a demo of each algorithm, where I can click run and show someone what that algorithm can do.
So can anyone please advise which algorithms I should study for web-based applications?
I see a lot of examples in the scikit-learn Python library, but they are very calculation- and graph-based.
I don't think I can use them from a web point of view.
Any ideas on how I should go about this? | 1 | 1 | 1,093
0 | 12,283,724 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-09-03T10:14:00.000 | 0 | 3 | 0 | Integrating a function using non-uniform measure (python/scipy) | 12,245,859 | 0 | python,numpy,scipy,probability,numerical-methods | Another possibility would be to integrate x -> f(H(x)) where H is the inverse of the cumulative distribution of your probability distribution.
[This is because of a change of variable: replacing y=CDF(x) and noting that p(x)=CDF'(x) yields the change dy=p(x)dx, and thus int{f(x)p(x)dx} == int{f(x)dy} == int{f(H(y))dy} with H the inverse of CDF.] | I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx in [a,b] implicitly uses the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential).
I can do it myself, using np.random.* but then
I miss the optimizations available in scipy.integrate.quad. Or maybe all those optimizations assume the uniform density?
I need to do the error estimation myself, which is not trivial. Or maybe it is? Maybe the error is just the variance of sum(f(x))/n?
Any ideas? | 0 | 1 | 545 |
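A sketch of three possible routes (fold the density into the quad integrand, use the distribution's expect method, or Monte Carlo with a standard-error estimate), here with an exponential density as an example:

import numpy as np
from scipy import integrate, stats

f = lambda x: np.sqrt(x)                 # function whose expectation we want
p = stats.expon(scale=1.0)               # the sampling density

# 1) fold the density into the integrand
val, err = integrate.quad(lambda x: f(x) * p.pdf(x), 0, np.inf)

# 2) let scipy handle it (it knows the support of the distribution)
val2 = p.expect(f)

# 3) Monte Carlo; the error estimate is the standard error of the mean
x = p.rvs(size=100000)
mc, mc_err = f(x).mean(), f(x).std(ddof=1) / np.sqrt(x.size)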
0 | 12,268,227 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-09-03T10:14:00.000 | 0 | 3 | 0 | Integrating a function using non-uniform measure (python/scipy) | 12,245,859 | 0 | python,numpy,scipy,probability,numerical-methods | Just for the sake of brevity, 3 ways were suggested for calculating the expected value of f(x) under the probability p(x):
Assuming p is given in closed-form, use scipy.integrate.quad to evaluate f(x)p(x)
Assuming p can be sampled from, sample N values x=P(N), then evaluate the expected value by np.mean(f(X)) and the error by np.std(f(X))/np.sqrt(N)
Assuming p is available at stats.norm, use stats.norm.expect(f)
Assuming we have the CDF(x) of the distribution rather than p(x), calculate H=Inverse[CDF] and then integrate f(H(x)) using scipy.integrate.quad | I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx in [a,b] implicitly use the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential).
I can do it myself, using np.random.* but then
I miss the optimizations available in scipy.integrate.quad. Or maybe all those optimizations assume the uniform density?
I need to do the error estimation myself, which is not trivial. Or maybe it is? Maybe the error is just the variance of sum(f(x))/n?
Any ideas? | 0 | 1 | 545 |
0 | 12,277,912 | 0 | 1 | 0 | 0 | 1 | false | 8 | 2012-09-05T09:02:00.000 | 1 | 3 | 0 | python clear csv file | 12,277,864 | 0.066568 | python,csv | The Python csv module is only for reading and writing whole CSV files but not for manipulating them. If you need to filter data from file then you have to read it, create a new csv file and write the filtered rows back to new file. | how can I clear a complete csv file with python. Most forum entries that cover the issue of deleting row/columns basically say, write the stuff you want to keep into a new file. I need to completely clear a file - how can I do that? | 0 | 1 | 41,353 |
0 | 12,363,743 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-09-10T11:25:00.000 | 0 | 1 | 0 | Inverted color marks in matplotlib | 12,350,693 | 0 | python,matplotlib | I post here a schematic approach to solving your problem without any real Python code; it might help though.
When actually plotting, you need to store all the bars and markers in two lists, which will enable you to access them later.
For each element, bar and marker, you can get the color.
For each marker you can find whether it is overlapping or inside a bar; for example, you could use nxutils.pntpoly() [point in polygon test] from matplotlib itself.
Now you can decide on the best color. If you know the color of the bar in RGB format you can
calculate the complementary color of the marker using some simple rules you can define.
Once you have the color, use the set_color() method or the appropriate method the object has. | I am using matplotlib to draw a bar chart with many different colors. I also draw a number of markers on the plot with scatter.
Since I am already using many different colors for the bars, I do not want to use a separate contrasting color for the marks, as that would add a big limit to the color space I can choose my bars from.
Therefore, the question is whether it is possible to have scatter draw marks, not with a given color, but with a color that is the inverse of the color that happens to be behind any given mark, wherever it is placed.
Also, note that the marks may fully overlap bars, partly overlap bars, or not overlap a bar at all. | 0 | 1 | 385 |
0 | 12,474,645 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-09-18T03:43:00.000 | 0 | 1 | 0 | Fast algorithm comparing unsorted data | 12,470,094 | 1.2 | python,sql,dna-sequence,genome | probably what you want is called "de novo assembly"
an approach would be to calculate N-mers, and use these in an index
nmers will become more important if you need partial matches / mismatches
if billion := 1E9, python might be too weak
also note that 18 bases * 2 bits := 36 bits of information to enumerate them. That is just above 32 bits and could fit into 64 bits. Hashing / bit-fiddling might be an option | I have data that needs to stay in the exact sequence it is entered in (genome sequencing) and I want to search approximately one billion nodes of around 18 members each to locate patterns.
Obviously speed is an issue with this large of a data set, and I actually don't have any data that I can currently use as a discrete key, since the basis of the search is to locate and isolate (but not remove) duplicates.
I'm looking for an algorithm that can go through the data in a relatively short amount of time to locate these patterns and similarities, and I can work out the regex expressions for comparison, but I'm not sure how to get a faster search than O(n).
Any help would be appreciated.
Thanks | 0 | 1 | 386 |
0 | 12,473,913 | 0 | 0 | 0 | 0 | 1 | true | 17 | 2012-09-18T08:52:00.000 | 25 | 1 | 0 | What does matplotlib `imshow(interpolation='nearest')` do? | 12,473,511 | 1.2 | python,image-processing,numpy,matplotlib | interpolation='nearest' simply displays an image without trying to interpolate between pixels if the display resolution is not the same as the image resolution (which is most often the case). It will result an image in which pixels are displayed as a square of multiple pixels.
There is no relation between interpolation='nearest' and the grayscale image being displayed in color. By default imshow uses the jet colormap to display an image. If you want it to be displayed in greyscale, call the gray() method to select the gray colormap. | I use imshow function with interpolation='nearest' on a grayscale image and get a nice color picture as a result, looks like it does some sort of color segmentation for me, what exactly is going on there?
I would also like to get something like this for image processing, is there some function on numpy arrays like interpolate('nearest') out there?
EDIT: Please correct me if I'm wrong, it looks like it does simple pixel clustering (clusters are colors of the corresponding colormap) and the word 'nearest' says that it takes the nearest colormap color (probably in the RGB space) to decide to which cluster the pixel belongs. | 0 | 1 | 22,970 |
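A tiny sketch of the two points above (nearest-neighbour display plus an explicit gray colormap); the random array stands in for the grayscale image:

import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(8, 8)               # stand-in for the grayscale image
plt.imshow(img, cmap=plt.cm.gray, interpolation='nearest')
plt.colorbar()
plt.show()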
0 | 12,497,790 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-09-19T15:09:00.000 | 3 | 2 | 0 | Using NumPy in Pyramid | 12,497,545 | 0.291313 | python,numpy,pyramid | If the array is something that can be shared between threads then you can store it in the registry at application startup (config.registry['my_big_array'] = ??). If it cannot be shared then I'd suggest using a queuing system with workers that can always have the data loaded, probably in another process. You can hack this by making the value in the registry be a threadlocal and then storing a new array in the variable if one is not there already, but then you will have a copy of the array per thread and that's really not a great idea for something that large. | I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use.
Right now my application is a single page and I am using a single view callable.
The array will be loaded from disk and will not change. | 0 | 1 | 409 |
0 | 12,497,850 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2012-09-19T15:09:00.000 | 2 | 2 | 0 | Using NumPy in Pyramid | 12,497,545 | 0.197375 | python,numpy,pyramid | I would just load it in the obvious place in the code, where you need to use it (in your view, I guess?) and see if you have performance problems. It's better to work with actual numbers than try to guess what's going to be a problem. You'll usually be surprised by the reality.
If you do see performance problems, assuming you don't need a copy for each of multiple threads, try just loading it in the global scope after your imports. If that doesn't work, try moving it into its own module and importing that. If that still doesn't help... I don't know what then. | I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use.
Right now my application is a single page and I am using a single view callable.
The array will be loaded from disk and will not change. | 0 | 1 | 409 |
0 | 42,903,054 | 0 | 1 | 0 | 0 | 1 | false | 22 | 2012-09-20T01:20:00.000 | 1 | 5 | 0 | Save session in IPython like in MATLAB? | 12,504,951 | 0.039979 | python,ipython,pandas | There is also a magic command, history, that can be used to write all the commands/statements given by user.
Syntax : %history -f file_name.
Also %save file_name start_line-end_line, where start_line is the starting line number and end_line is the ending line number. Useful in case of a selective save.
%run can be used to execute the commands in the saved file | It would be useful to save the session variables which could be loaded easily into memory at a later stage. | 0 | 1 | 10,066 |
0 | 12,536,067 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-09-21T16:54:00.000 | 1 | 2 | 0 | How interpolate 3D coordinates | 12,534,813 | 1.2 | python,r,3d,interpolation,splines | By "compact manifold" do you mean a lower dimensional function like a trajectory or a surface that is embedded in 3d? You have several alternatives for the surface-problem in R depending on how "parametric" or "non-parametric" you want to be. Regression splines of various sorts could be applied within the framework of estimating mean f(x,y) and if these values were "tightly" spaced you may get a relatively accurate and simple summary estimate. There are several non-parametric methods such as found in packages 'locfit', 'akima' and 'mgcv'. (I'm not really sure how I would go about statistically estimating a 1-d manifold in 3-space.)
Edit: But if I did want to see a 3D distribution and get an idea of whether is was a parametric curve or trajectory, I would reach for package:rgl and just plot it in a rotatable 3D frame.
If you are instead trying to form the convex hull (for which the word interpolate is probably the wrong choice), then I know there are 2-d solutions and suspect that searching would find 3-d solutions as well. Constructing the right search strategy will depend on specifics whose absence the 2 comments so far reflects. I'm speculating that attempting to model lower and higher order statistics like the 1st and 99th percentile as a function of (x,y) could be attempted if you wanted to use a regression effort to create boundaries. There is a quantile regression package, 'rq' by Roger Koenker that is well supported. | I have data points in x,y,z format. They form a point cloud of a closed manifold. How can I interpolate them using R-Project or Python? (Like polynomial splines) | 0 | 1 | 1,960 |
0 | 12,616,286 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2012-09-24T12:49:00.000 | 0 | 1 | 0 | Trouble installing scipy on Mac OSX Lion | 12,565,351 | 1.2 | numpy,python-2.7,scipy | EPD distribution saved the day. | I've installed new instance of python-2.7.2 with brew. Installed numpy from pip, then from sources. I keep getting
numpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s) ['/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/lib/npy-pkg-config/npymath.ini']
when I try to install scipy, either from sources or by pip, and it drives me mad.
SciPy's binary installer tells me that Python 2.7 is required and that I don't have it (I have 2 versions installed). | 0 | 1 | 253
1 | 12,638,921 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2012-09-25T09:37:00.000 | 2 | 2 | 0 | How do you display a 2D numpy array in glade-3 ? | 12,580,198 | 1.2 | python,arrays,numpy,gtk,glade | In the end i decided to create a buffer for the pixels using:
self.pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,0,8,1280,1024)
I then set the image from the pixel buffer:
self.liveImage.set_from_pixbuf(self.pixbuf) | I'm making live video GUI using Python and Glade-3, but I'm finding it hard to convert the Numpy array that I have into something that can be displayed in Glade. The images are in black and white with just a single value giving the brightness of each pixel. I would like to be able to draw over the images in the GUI so I don't know whether there is a specific format I should use (bitmap/pixmap etc) ?
Any help would be much appreciated! | 0 | 1 | 760 |
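A hedged sketch of how a grayscale frame could be packed into a pixbuf like the one created in the answer above; it assumes PyGTK 2.x, a uint8 frame, and the availability of gtk.gdk.pixbuf_new_from_data (gray_to_pixbuf and frame are made-up names):
import numpy as np
import gtk

def gray_to_pixbuf(gray):
    # gray: 2-D uint8 array of shape (height, width) holding brightness values.
    rgb = np.dstack([gray, gray, gray])          # replicate the channel to RGB
    height, width = gray.shape
    return gtk.gdk.pixbuf_new_from_data(
        rgb.tostring(), gtk.gdk.COLORSPACE_RGB,
        False, 8, width, height, width * 3)      # rowstride = width * 3 bytes

# Then, for each new frame:
# self.liveImage.set_from_pixbuf(gray_to_pixbuf(frame))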
0 | 12,584,253 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2012-09-25T11:36:00.000 | 1 | 2 | 0 | Check Arrayfire Array against NaNs | 12,582,140 | 0.099668 | python,arrayfire | bottleneck is worth looking into. Its authors have performed several optimizations over the numpy.nanxxx functions which, in my experience, make them around 5x faster than numpy. | I'm using Arrayfire on Python and I can't use the af.sum() function, since my input array has NaNs in it and the call would return NaN as the sum.
Using numpy.nansum/numpy.nan_to_num is not an option due to speed problems.
I just need a way to convert those NaNs to floating-point zeros in Arrayfire. | 0 | 1 | 299
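For reference, two hedged NumPy-side sketches of what the answer above suggests: bottleneck's NaN-aware reduction and a plain NaN-to-zero replacement (the array a is a placeholder):
import numpy as np
import bottleneck as bn

a = np.array([1.0, np.nan, 2.0, np.nan, 3.0])

total = bn.nansum(a)                       # NaN-aware sum, typically faster than np.nansum
cleaned = np.where(np.isnan(a), 0.0, a)    # NaNs replaced by floating-point zeros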
0 | 12,594,030 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-09-26T02:30:00.000 | 0 | 1 | 0 | Pandas Data Reconcilation | 12,593,759 | 1.2 | python,pandas | Try DataFrame.duplicated and DataFrame.drop_duplicates | I need to reconcile two separate dataframes. Each row within the two dataframes has a unique id that I am using to match the two dataframes. Without using a loop, how can I reconcile one dataframe against another and vice-versa?
I tried merging the two dataframes on an index (the unique id), but the problem I run into is when there are duplicate rows of data. Is there a way to identify the duplicate rows, put them into an array, or export them to a CSV?
Your help is much appreciated. Thanks. | 0 | 1 | 745 |
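A minimal pandas sketch of the duplicated/drop_duplicates approach suggested in the answer above; the toy frames and column names are made up, and it assumes a reasonably recent pandas:
import pandas as pd

df_a = pd.DataFrame({'id': [1, 2, 3], 'amount': [10, 20, 30]})
df_b = pd.DataFrame({'id': [2, 3, 4], 'amount': [20, 30, 40]})

both = pd.concat([df_a, df_b])
dupes = both[both.duplicated(keep=False)]      # rows that appear in both frames
dupes.to_csv('duplicates.csv', index=False)    # export them for review
unmatched = both.drop_duplicates(keep=False)   # rows that fail to reconcile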
0 | 12,606,715 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2012-09-26T16:12:00.000 | 1 | 2 | 0 | f2py speed with array ordering | 12,606,027 | 0.099668 | python,performance,numpy,f2py | There shouldn't be any slow-down. Since NumPy 1.6, most ufuncs (i.e., the basic 'universal' functions) take an optional order argument allowing a user to specify the memory layout of the result: by default it's 'K', meaning that the element ordering of the inputs is matched as closely as possible.
So, everything should be taken care of under the hood.
At worst, you could always switch from one order to another with the order parameter of np.array (but that will copy your data and is probably not worth it). | I'm writing some code in Fortran (via f2py) to gain some speed, because there is a large amount of calculation that would be quite cumbersome to do in pure Python.
I was wondering if creating NumPy arrays with order='F' (Fortran order) will slow down the main Python code with respect to the classical C-style order. | 0 | 1 | 502
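A small NumPy sketch illustrating the point made in the answer above: ufuncs accept either memory layout and give the same result, and you can convert explicitly when a particular order is needed (the array size is arbitrary):
import numpy as np

a_c = np.ascontiguousarray(np.random.rand(1000, 1000))   # C (row-major) order
a_f = np.asfortranarray(a_c)                              # Fortran (column-major) copy

print(a_c.flags['C_CONTIGUOUS'], a_f.flags['F_CONTIGUOUS'])   # True True
print(np.allclose(np.sin(a_c), np.sin(a_f)))                  # same result either way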
0 | 12,699,435 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2012-10-02T22:30:00.000 | 2 | 1 | 0 | Can normal algos run on PyOpenGL? | 12,699,376 | 1.2 | python,opengl | Is PyOpenGL the answer?
No, at least not in the way you expect. If your GPU supports OpenGL 4.3 you could use compute shaders in OpenGL, but those are not written in Python.
As for wanting to "simply run a 'vanilla' python script ported to the GPU": that's not how GPU computing works. You have to write the shaders or computation kernels in a special language, either OpenCL or OpenGL compute shaders or, specific to NVIDIA, CUDA.
Python would then just deliver the framework for getting the GPU computation running. | I want to write an algorithm that would benefit from the GPU's superior hashing capability over the CPU.
Is PyOpenGL the answer? I don't want to use drawing tools, but simply run a "vanilla" python script ported to the GPU.
I have an ATI/AMD GPU if that means anything. | 0 | 1 | 389 |
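Since the answer above points to OpenCL as the route that works on an ATI/AMD GPU, here is a hedged, minimal PyOpenCL sketch (not the answerer's code; the kernel just squares a vector and assumes a working OpenCL driver is installed):
import numpy as np
import pyopencl as cl

a = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()        # picks an available device, e.g. the AMD GPU
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

src = """
__kernel void square(__global const float *x, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = x[gid] * x[gid];
}
"""
prg = cl.Program(ctx, src).build()

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.square(queue, a.shape, None, a_buf, out_buf)   # launch one work-item per element

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)            # copy the result back to the host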
0 | 12,711,895 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2012-10-03T15:27:00.000 | 1 | 3 | 0 | categorizing items in a list with python | 12,711,743 | 0.066568 | python,list | I assume that the data are noisy, in the sense that the field could contain just about anything, written in free form. The main difficulty here is going to be defining the mapping between your input data and your categories, and that is going to involve, first of all, looking through the data.
I suggest that you look at what you have, and draw up a list of mappings from input occupations to categories. You can then use pretty much any tool (and if you're using Excel, stick with Excel) to apply that mapping to each row. Some rows will not fall into any category. You should look at them, and figure out if that is because your mapping is inadequate (e.g. you didn't think of how to deal with veterinarians), or if it is because the data are noisy. If it's noise, you can either deal with the remainder by hand, or try to use some other technique to categorise the data, e.g. regular expressions or some kind of natural language processing library.
Once you have figured out what your problem cases are, come back and ask us about them, with sample data, and the code you have been using.
If you can't even take the first step in figuring out how to run the mapping, do some research, try to write something, then come back with a specific question about that. | Currently I have a list of 110,000 donors in Excel. One of the pieces of information they give to us is their occupation. I would like to condense this list down to say 10 or 20 categories that I define.
Normally I would just chug through this, going line by line, but since I have to do this for a year's worth of data, I don't really have the time to go line by line through 1,000,000+ rows.
Is there any way to define my 10 or 20 categories and then have Python sort it out from there?
Update:
The data is poorly formatted. People self-populate a field either online or on a slip of paper and then mail it in to a data processing company. There is a great deal of variance: CEO, Chief Executive, Executive Office, and the list goes on.
I used a SORT/UNIQ command and found that my list has ~13,000 different professions. | 0 | 1 | 1,266
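A hedged Python sketch of the keyword-mapping approach recommended in the answer above; the file names, the assumption that the occupation is the last column, and the keyword-to-category table are all illustrative, not part of the original question:
import csv

# Hypothetical keyword -> category mapping; grow this as you inspect the data.
CATEGORY_KEYWORDS = {
    'ceo': 'Executive', 'chief executive': 'Executive', 'executive': 'Executive',
    'teacher': 'Education', 'professor': 'Education',
    'nurse': 'Healthcare', 'physician': 'Healthcare', 'doctor': 'Healthcare',
}

def categorize(occupation):
    text = occupation.lower()
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in text:
            return category
    return 'Uncategorized'   # review these rows by hand or add more rules

with open('donors.csv') as src, open('donors_categorized.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        writer.writerow(row + [categorize(row[-1])])   # append the category to each row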