The dataset schema for the examples below (column name, dtype, and the observed value range or string length):

| Column | Dtype | Min | Max |
| --- | --- | --- | --- |
| GUI and Desktop Applications | int64 | 0 | 1 |
| A_Id | int64 | 5.3k | 72.5M |
| Networking and APIs | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| Database and SQL | int64 | 0 | 1 |
| Available Count | int64 | 1 | 13 |
| is_accepted | bool | 2 classes | |
| Q_Score | int64 | 0 | 1.72k |
| CreationDate | string (length) | 23 | 23 |
| Users Score | int64 | -11 | 327 |
| AnswerCount | int64 | 1 | 31 |
| System Administration and DevOps | int64 | 0 | 1 |
| Title | string (length) | 15 | 149 |
| Q_Id | int64 | 5.14k | 60M |
| Score | float64 | -1 | 1.2 |
| Tags | string (length) | 6 | 90 |
| Answer | string (length) | 18 | 5.54k |
| Question | string (length) | 49 | 9.42k |
| Web Development | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 1 | 1 |
| ViewCount | int64 | 7 | 3.27M |

The example rows that follow are rendered field by field, in the column order listed above.
0
14,396,884
0
0
0
0
1
true
0
2013-01-18T10:13:00.000
1
1
0
Setting the SVM discriminant value in PyML
14,396,632
1.2
python,svm,pyml
Roundabout way of doing it below: Use the result.getDecisionFunction() method and choose according to your own preference. Returns a list of values like: [-1.0000000000000213, -1.0000000000000053, -0.9999999999999893] Better answers still appreciated.
I'm using PyML's SVM to classify reads, but would like to set the discriminant to a higher value than the default (which I assume is 0). How do I do it? Ps. I'm using a linear kernel with the liblinear-optimizer if that matters.
0
1
80
0
15,023,264
0
1
0
0
2
true
8
2013-01-18T14:27:00.000
2
3
0
Spyder default module import list
14,400,993
1.2
python,import,module,spyder
The startup script for Spyder is in site-packages/spyderlib/scientific_startup.py. Carlos' answer would also work, but this is what I was looking for.
I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list? In case it makes a difference, I'm using the Spyder configuration provided by the Python(X,Y) Windows installer.
0
1
15,729
0
14,401,134
0
1
0
0
2
false
8
2013-01-18T14:27:00.000
-2
3
0
Spyder default module import list
14,400,993
-0.132549
python,import,module,spyder
If Spyder is executed as a Python script by the python binary, then you should be able to simply edit Spyder's Python sources and include the modules you need. You should take a look at how it is actually executed on startup.
I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list? In case it makes a difference, I'm using the Spyder configuration provided by the Python(X,Y) Windows installer.
0
1
15,729
0
14,443,106
0
0
0
0
1
true
10
2013-01-21T03:21:00.000
10
3
0
Can we load pandas DataFrame in .NET ironpython?
14,432,059
1.2
python,.net,pandas,ironpython,python.net
No, Pandas is pretty well tied to CPython. Like you said, your best bet is to do the analysis in CPython with Pandas and export the result to CSV.
Can we load a pandas DataFrame in .NET space using iron python? If not I am thinking of converting pandas df into a csv file and then reading in .net space.
0
1
8,863
0
35,618,939
0
1
0
0
1
false
2
2013-01-22T21:11:00.000
0
2
0
Is stemming used when gensim creates a dictionary for tf-idf model?
14,468,078
0
python,nlp,gensim
I was also struggling with the same case. To overcome it, I first stemmed the documents using NLTK and then processed them with gensim. That is probably an easier and handier way to perform your task.
I am using Gensim python toolkit to build tf-idf model for documents. So I need to create a dictionary for all documents first. However, I found Gensim does not use stemming before creating the dictionary and corpus. Am I right ?
0
1
940
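The answer above ("Is stemming used when gensim creates a dictionary for tf-idf model?") says to stem with NLTK before handing documents to gensim. A minimal sketch of that pipeline, assuming the small hypothetical `documents` list below:

```python
from gensim import corpora
from gensim.models import TfidfModel
from nltk.stem import PorterStemmer

documents = ["Human machine interface for lab computer applications",
             "A survey of user opinion of computer system response time"]  # hypothetical input

stemmer = PorterStemmer()
texts = [[stemmer.stem(tok) for tok in doc.lower().split()] for doc in documents]

dictionary = corpora.Dictionary(texts)                 # built from the stemmed tokens
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors
tfidf = TfidfModel(corpus)                             # tf-idf model over the stemmed corpus
```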
0
23,838,980
0
0
0
0
2
false
30
2013-01-25T15:55:00.000
-9
4
0
What's the maximum size of a numpy array?
14,525,344
-1
arrays,numpy,python-2.7,size,max
It is indeed related to the system maximum address length, to say it simply, 32-bit system or 64-bit system. Here is an explanation for these questions, originally from Mark Dickinson Short answer: the Python object overhead is killing you. In Python 2.x on a 64-bit machine, a list of strings consumes 48 bytes per list entry even before accounting for the content of the strings. That's over 8.7 Gb of overhead for the size of array you describe. On a 32-bit machine it'll be a bit better: only 28 bytes per list entry. Longer explanation: you should be aware that Python objects themselves can be quite large: even simple objects like ints, floats and strings. In your code you're ending up with a list of lists of strings. On my (64-bit) machine, even an empty string object takes up 40 bytes, and to that you need to add 8 bytes for the list pointer that's pointing to this string object in memory. So that's already 48 bytes per entry, or around 8.7 Gb. Given that Python allocates memory in multiples of 8 bytes at a time, and that your strings are almost certainly non-empty, you're actually looking at 56 or 64 bytes (I don't know how long your strings are) per entry. Possible solutions: (1) You might do (a little) better by converting your entries from strings to ints or floats as appropriate. (2) You'd do much better by either using Python's array type (not the same as list!) or by using numpy: then your ints or floats would only take 4 or 8 bytes each. Since Python 2.6, you can get basic information about object sizes with the sys.getsizeof function. Note that if you apply it to a list (or other container) then the returned size doesn't include the size of the contained list objects; only of the structure used to hold those objects. Here are some values on my machine.
I'm trying to create a matrix containing 2 708 000 000 elements. When I try to create a numpy array of this size it gives me a value error. Is there any way I can increase the maximum array size? a=np.arange(2708000000) ValueError Traceback (most recent call last) ValueError: Maximum allowed size exceeded
0
1
82,332
0
14,525,604
0
0
0
0
2
false
30
2013-01-25T15:55:00.000
19
4
0
What's the maximum size of a numpy array?
14,525,344
1
arrays,numpy,python-2.7,size,max
You're trying to create an array with 2.7 billion entries. If you're running 64-bit numpy, at 8 bytes per entry, that would be 20 GB in all. So almost certainly you just ran out of memory on your machine. There is no general maximum array size in numpy.
I'm trying to create a matrix containing 2 708 000 000 elements. When I try to create a numpy array of this size it gives me a value error. Is there any way I can increase the maximum array size? a=np.arange(2708000000) ValueError Traceback (most recent call last) ValueError: Maximum allowed size exceeded
0
1
82,332
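A quick way to check the arithmetic in the two answers above ("What's the maximum size of a numpy array?"): the limit is memory rather than an array-size cap, so estimate the footprint before allocating. The int32 line assumes 32-bit values are enough for your data:

```python
import numpy as np

n = 2_708_000_000
print(n * np.dtype(np.int64).itemsize / 1e9)   # ~21.7 GB with the default int64
print(n * np.dtype(np.int32).itemsize / 1e9)   # ~10.8 GB if int32 is enough

# a = np.arange(n, dtype=np.int32)   # only uncomment on a machine with enough RAM
```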
0
14,535,721
0
0
0
0
1
false
0
2013-01-26T09:36:00.000
1
2
0
How would I go about finding all possible permutations of a 4x4 matrix with static corner elements?
14,535,650
0.099668
python,math,matrix,permutation,itertools
Just pull the placed numbers out of the permutation set. Then insert them into their proper position in the generated permutations. For your example you'd take out 1, 16, 4, 13. Permute on (2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15), for each permutation, insert 1, 16, 4, 13 where you have pre-selected to place them.
So far I have been using python to generate permutations of matrices for finding magic squares. So what I have been doing so far (for 3x3 matrices) is that I find all the possible permutations of the set {1,2,3,4,5,6,7,8,9} using itertools.permutations, store them as a list and do my calculations and print my results. Now I want to find out magic squares of order 4. Since finding all permutations means 16! possibilities, I want to increase efficiency by placing likely elements in the corner, for example 1, 16 on diagonal one corners and 4, 13 on diagonal two corners. So how would I find permutations of set {1,2....16} where some elements are not moved is my question
0
1
2,502
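A sketch of the approach in the answer above ("How would I go about finding all possible permutations of a 4x4 matrix with static corner elements?"): permute only the free values and splice the pinned corner values back into their flat positions. The corner placement follows the example in the question (1 and 16 on one diagonal, 4 and 13 on the other):

```python
from itertools import permutations

fixed = {0: 1, 15: 16, 3: 4, 12: 13}                        # flat index -> pinned value
free_vals = [v for v in range(1, 17) if v not in fixed.values()]
free_slots = [i for i in range(16) if i not in fixed]

def boards():
    for perm in permutations(free_vals):
        board = [0] * 16
        for slot, val in fixed.items():
            board[slot] = val
        for slot, val in zip(free_slots, perm):
            board[slot] = val
        yield [board[i:i + 4] for i in range(0, 16, 4)]      # as 4x4 rows

print(next(boards()))
```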
0
17,913,278
0
0
0
0
1
false
17
2013-01-26T18:15:00.000
3
3
0
Pandas Drop Rows Outside of Time Range
14,539,992
0.197375
python,pandas
You can also do: rng = pd.date_range('1/1/2000', periods=24, freq='H') ts = pd.Series(pd.np.random.randn(len(rng)), index=rng) ts.ix[datetime.time(10):datetime.time(14)] Out[4]: 2000-01-01 10:00:00 -0.363420 2000-01-01 11:00:00 -0.979251 2000-01-01 12:00:00 -0.896648 2000-01-01 13:00:00 -0.051159 2000-01-01 14:00:00 -0.449192 Freq: H, dtype: float64 DataFrame works same way.
I am trying to go through every row in a DataFrame index and remove all rows that are not between a certain time. I have been looking for solutions but none of them separate the Date from the Time, and all I want to do is drop the rows that are outside of a Time range.
0
1
19,605
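A self-contained version of the idea in the answer above ("Pandas Drop Rows Outside of Time Range"), using Series.between_time instead of the long-removed .ix indexer:

```python
import datetime
import numpy as np
import pandas as pd

rng = pd.date_range("2000-01-01", periods=24, freq="h")
ts = pd.Series(np.random.randn(len(rng)), index=rng)

# keep only rows whose clock time falls between 10:00 and 14:00
subset = ts.between_time(datetime.time(10), datetime.time(14))
print(subset)
```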
0
14,544,871
0
1
0
0
1
false
0
2013-01-27T05:55:00.000
4
1
0
find a value other than a root with fsolve in python's scipy
14,544,838
0.664037
python,scipy,root,solver
This is easy if you change your definition of f(x). e.g. if you want f(x) = 5, define your function: g(x) = f(x) - 5 = 0
I know how I can solve for a root in python using scipy.optimize.fsolve. I have a function defined f = lambda : -1*numpy.exp(-x**2) and I want to solve for x setting the function to a certain nonzero. For instance, I want to solve for x using f(x) = 5. Is there a way to do this with fsolve or would I need to use another tool in scipy? In other words, I'm looking for something analogous to Maple's fsolve.
0
1
380
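The g(x) = f(x) - c trick from the answer above ("find a value other than a root with fsolve in python's scipy"), using the f from the question. Note that f(x) = 5 has no real solution for this particular f (its range is [-1, 0)), so the sketch solves for -0.5 instead:

```python
import numpy as np
from scipy.optimize import fsolve

f = lambda x: -1 * np.exp(-x**2)
target = -0.5                        # a value actually in f's range
g = lambda x: f(x) - target

root = fsolve(g, x0=1.0)             # initial guess 1.0
print(root, f(root))                 # f(root) is ~ -0.5
```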
0
14,545,631
0
0
0
0
2
true
3
2013-01-27T08:11:00.000
4
3
0
numpy vs native Python - most efficient way
14,545,602
1.2
python,performance,numpy
In general, it probably matters most (efficiency-wise) to avoid conversions between the two. If you're mostly using non-numpy functions on data, then they'll be internally operating using standard Python data types, and thus using numpy arrays would be inefficient due to the need to convert back and forth. Similarly, if you're using a lot of numpy functions to manipulate data, transforming it all back to basic Python types in between would also be inefficient. As far as choosing functions goes, use whichever one was designed to operate on the form your data is already in - e.g. if you already have a numpy array, use the numpy functions on it; similarly, if you have a basic Python data type, use the Python functions on it. numpy's functions are going to be optimized for working with numpy's data types.
For a lot of functions, it is possible to use either native Python or numpy to proceed. This is the case for math functions, that are available with Python native import math, but also with numpy methods. This is also the case when it comes to arrays, with narray from numpy and pythons list comprehensions, or tuples. I have two questions relative to these features that are in Python and also in numpy in general, if method is available in native Python AND numpy, which of both solutions would you prefer ? In terms of efficiency ? Is it different and how Python and numpy would differ in their proceeding? More particularly, regarding arrays, and basic functions that are dealing with arrays, like sort, concatenate..., which solution is more efficient ? What makes the efficiency of the most efficient solution? This is very open and general question. I guess that will not impact my code greatly, but I am just wondering.
0
1
1,039
0
14,545,646
0
0
0
0
2
false
3
2013-01-27T08:11:00.000
1
3
0
numpy vs native Python - most efficient way
14,545,602
0.066568
python,performance,numpy
When there's a choice between working with NumPy arrays and numeric lists, the former are typically faster. I don't quite understand the second question, so I won't try to address it.
For a lot of functions, it is possible to use either native Python or numpy to proceed. This is the case for math functions, that are available with Python native import math, but also with numpy methods. This is also the case when it comes to arrays, with narray from numpy and pythons list comprehensions, or tuples. I have two questions relative to these features that are in Python and also in numpy in general, if method is available in native Python AND numpy, which of both solutions would you prefer ? In terms of efficiency ? Is it different and how Python and numpy would differ in their proceeding? More particularly, regarding arrays, and basic functions that are dealing with arrays, like sort, concatenate..., which solution is more efficient ? What makes the efficiency of the most efficient solution? This is very open and general question. I guess that will not impact my code greatly, but I am just wondering.
0
1
1,039
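A rough illustration of the advice in the two answers above ("numpy vs native Python - most efficient way"): use the functions that match the container you already have, because the mixed cases pay for conversion or for Python-level iteration. Timings vary by machine; this is only a sketch:

```python
import timeit
import numpy as np

lst = list(range(100_000))
arr = np.arange(100_000)

print(timeit.timeit(lambda: sum(lst), number=100))      # list + builtin sum
print(timeit.timeit(lambda: np.sum(arr), number=100))   # ndarray + numpy sum
print(timeit.timeit(lambda: np.sum(lst), number=100))   # pays the list -> array conversion
print(timeit.timeit(lambda: sum(arr), number=100))      # iterates the ndarray in Python
```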
0
14,552,819
0
1
1
0
1
false
13
2013-01-27T19:48:00.000
3
3
0
Embed R code in python
14,551,472
0.197375
python,r
When I need to do R calculations, I usually write R scripts, and run them from Python using the subprocess module. The reason I chose to do this was because the version of R I had installed (2.16 I think) wasn't compatible with RPy at the time (which wanted 2.14). So if you already have your R installation "just the way you want it", this may be a better option.
I need to make computations in a python program, and I would prefer to make some of them in R. Is it possible to embed R code in python ?
0
1
13,823
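The subprocess route described in the answer above ("Embed R code in python"), sketched with subprocess.run. It assumes Rscript is on PATH; the script and argument names are placeholders:

```python
import subprocess

result = subprocess.run(
    ["Rscript", "analysis.R", "input.csv"],    # your R script plus its arguments
    capture_output=True, text=True, check=True,
)
print(result.stdout)                            # whatever the R script printed
```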
0
14,756,113
0
0
0
0
1
false
0
2013-01-28T10:00:00.000
0
2
0
Can a neural network recognize a screen and replicate a finite set of actions?
14,559,547
0
python,neural-network,image-recognition,online-algorithm
This is not entirely correct. A 3-layer feedforward MLP can theoretically replicate any CONTINUOUS function. If there are discontinuities, then you need a 4th layer. Since you are dealing with pixelated screens and such, you probably would need to consider a fourth layer. Finally, if you are looking at circular shapes, etc., then a radial basis function (RBF) network may be more suitable.
I learned, that neural networks can replicate any function. Normally the neural network is fed with a set of descriptors to its input neurons and then gives out a certain score at its output neuron. I want my neural network to recognize certain behaviours from a screen. Objects on the screen are already preprocessed and clearly visible, so recognition should not be a problem. Is it possible to use the neural network to recognize a pixelated picture of the screen and make decisions on that basis? The amount of training data would be huge of course. Is there way to teach the ANN by online supervised learning? Edit: Because a commenter said the programming problem would be too general: I would like to implement this in python first, to see if it works. If anyone could point me to a resource where i could do this online-learning thing with python, i would be grateful.
0
1
888
0
14,575,243
0
0
0
0
3
true
26
2013-01-29T00:28:00.000
38
3
0
Python statistics package: difference between statsmodel and scipy.stats
14,573,728
1.2
python,scipy,scikits,statsmodels
Statsmodels has scipy.stats as a dependency. Scipy.stats has all of the probability distributions and some statistical tests. It's more like library code in the vein of numpy and scipy. Statsmodels on the other hand provides statistical models with a formula framework similar to R and it works with pandas DataFrames. There are also statistical tests, plotting, and plenty of helper functions in statsmodels. Really it depends on what you need, but you definitely don't have to choose one. They have different aims and strengths.
I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats. One thing that I know is those with scikits namespace are specific "branches" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python? Thanks. --EDIT-- I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.
0
1
16,292
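A tiny illustration of the division of labour the accepted answer above describes ("Python statistics package: difference between statsmodel and scipy.stats"): a test from scipy.stats next to an R-style formula model from statsmodels on a pandas DataFrame. Synthetic data, current APIs assumed:

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=100)})
df["y"] = 2.0 * df["x"] + rng.normal(scale=0.5, size=100)

print(stats.ttest_1samp(df["x"], popmean=0.0))     # scipy.stats: distributions and tests
print(smf.ols("y ~ x", data=df).fit().params)      # statsmodels: formula-based model
```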
0
14,574,087
0
0
0
0
3
false
26
2013-01-29T00:28:00.000
-1
3
0
Python statistics package: difference between statsmodel and scipy.stats
14,573,728
-0.066568
python,scipy,scikits,statsmodels
I think THE statistics package is numpy/scipy. It works also great if you want to plot your data using matplotlib. However, as far as I know, matplotlib doesn't work with Python 3.x yet.
I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats. One thing that I know is those with scikits namespace are specific "branches" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python? Thanks. --EDIT-- I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.
0
1
16,292
0
14,575,672
0
0
0
0
3
false
26
2013-01-29T00:28:00.000
5
3
0
Python statistics package: difference between statsmodel and scipy.stats
14,573,728
0.321513
python,scipy,scikits,statsmodels
I try to use pandas/statsmodels/scipy for my work on a day-to-day basis, but sometimes those packages come up a bit short (LOESS, anybody?). The problem with the RPy module is (last I checked, at least) that it wants a specific version of R that isn't current---my R installation is 2.16 (I think) and RPy wanted 2.14. So either you have to have two parallel installations of R, or you have to downgrade. (If you don't have R installed, then you can just install the correct version of R and use RPy.) So when I need something that isn't in pandas/statsmodels/scipy I write R scripts, and run them with the subprocess module. This lets me interact with R as little as possible (which I really don't like programming in), but I can still leverage all the stuff that R has that the Python packages don't. The lesson is that there isn't ever one solution to any problem---you have to assemble a whole bunch of parts that are all useful to you (and maybe write some of your own), in a way that you understand, to solve problems. (R aficionados will disagree, of course!)
I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats. One thing that I know is those with scikits namespace are specific "branches" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python? Thanks. --EDIT-- I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.
0
1
16,292
0
14,600,682
0
0
0
0
1
true
4
2013-01-29T10:25:00.000
1
1
0
How to grab matplotlib plot as html in ipython notebook?
14,580,684
1.2
python,pandas,matplotlib,jupyter-notebook
OK, if you go that route, this answer (stackoverflow.com/a/5314808/243434) on how to capture matplotlib figures as inline PNGs may help. – @crewbum To prevent duplication of plots, try running with pylab disabled (double-check your config files and the command line). – @crewbum This last change requires a restart of the notebook: ipython notebook --pylab (NB: no inline).
I have an IPython Notebook that is using Pandas to back-test a rule-based trading system. I have a function that accepts various scalars and functions as parameters and outputs a stats pack as some tables and a couple of plots. For automation, I want to be able to format this nicely into a "page" and then call the function in a loop while varying the inputs and have it output a number of pages for comparison, all from a single notebook cell. The approach I am taking is to create IpyTables and then call _repr_html_(), building up the HTML output along the way so that I can eventually return it from the function that runs the loop. How can I capture the output of the plots this way - matplotlib subplot objects don't seem to implement _repr_html_()? Feel free to suggest another approach entirely that you think might equally solve the problem. TIA
0
1
2,790
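A sketch of the figure-to-inline-PNG capture that the comments in the answer above point to ("How to grab matplotlib plot as html in ipython notebook?"): render the figure into a buffer and embed it as a base64 <img>, closing the figure to avoid the duplicate-display problem:

```python
import base64
import io
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 2, 5])

buf = io.BytesIO()
fig.savefig(buf, format="png", bbox_inches="tight")
plt.close(fig)                                   # prevents a second inline display
b64 = base64.b64encode(buf.getvalue()).decode("ascii")
html = '<img src="data:image/png;base64,{}"/>'.format(b64)
```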
0
15,783,554
0
0
0
0
1
false
7
2013-01-30T22:20:00.000
-2
6
0
Easy way to implement a Root Raised Cosine (RRC) filter using Python & Numpy
14,614,966
-0.066568
python,numpy,scipy,signal-processing
SciPy will support any filter. Just calculate the impulse response and use any of the appropriate scipy.signal filter/convolve functions.
SciPy/Numpy seems to support many filters, but not the root-raised cosine filter. Is there a trick to easily create one rather than calculating the transfer function? An approximation would be fine as well.
0
1
13,131
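The answer above ("Easy way to implement a Root Raised Cosine (RRC) filter using Python & Numpy") boils down to: build the impulse response yourself, then let scipy.signal apply it. A generic sketch of that second step only; the taps here are a plain windowed placeholder, not an actual root-raised-cosine design:

```python
import numpy as np
from scipy import signal

taps = np.hanning(101)                 # stand-in impulse response, replace with real RRC taps
taps /= taps.sum()
x = np.random.randn(10_000)            # hypothetical input signal

y1 = np.convolve(x, taps, mode="same")
y2 = signal.lfilter(taps, [1.0], x)    # FIR filtering with the same taps
```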
0
14,635,675
0
1
0
0
1
true
0
2013-01-31T21:33:00.000
3
1
0
Python vector transformation (normalize, scale, rotate etc.)
14,635,549
1.2
python,math,vector-graphics
No, the standard is numpy. I wouldn't think of it as overkill; think of it as a very well written and tested library, even if you do just need a small portion of it. All the basic vector & matrix operations are implemented efficiently (falling back to C and Fortran), which makes it fast and memory efficient. Don't make your own, use numpy.
I'm about to write my very own scaling, rotation, normalization functions in python. Is there a convenient way to avoid this? I found NumPy, but it kind-a seems like an overkill for my little 2D-needs. Are there basic vector operations available in the std python libs?
0
1
1,480
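The kind of small 2D vector helpers the answer above ("Python vector transformation") says numpy already covers: normalisation, scaling and rotation with plain ndarray operations:

```python
import numpy as np

v = np.array([3.0, 4.0])
unit = v / np.linalg.norm(v)                      # normalize
scaled = 2.5 * v                                  # scale

theta = np.deg2rad(30)                            # rotate by 30 degrees
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = rot @ v
```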
0
20,688,782
0
0
0
0
1
true
0
2013-01-31T21:42:00.000
2
1
0
GAE MapReduce, How to write Multiple Outputs
14,635,693
1.2
python,google-app-engine,mapreduce
I don't think such functionality exists (yet?) in the GAE Mapreduce library. Depending on the size of your dataset, and the type of output required, you can small-time-investment hack your way around it by co-opting the reducer as another output writer. For example, if one of the reducer outputs should go straight back to the datastore, and another output should go to a file, you could open a file yourself and write the outputs to it. Alternatively, you could serialize and explicitly store the intermediate map results to a temporary datastore using operation.db.Put, and perform separate Map or Reduce jobs on that datastore. Of course, that will end up being more expensive than the first workaround. In your specific key-value example, I'd suggest writing to a Google Cloud Storage File, and postprocessing it to split it into three files as required. That'll also give you more control over final file names.
I have a data set which I do multiple mappings on. Assuming that I have 3 key-values pair for the reduce function, how do I modify the output such that I have 3 blobfiles - one for each of the key value pair? Do let me know if I can clarify further.
0
1
139
0
42,075,989
0
0
0
0
1
false
11
2013-01-31T23:16:00.000
1
4
0
A-star search in numpy or python
14,636,918
0.049958
python,numpy,a-star
No, there is no A* search in Numpy.
i tried searching stackoverflow for the tags [a-star] [and] [python] and [a-star] [and] [numpy], but nothing. i also googled it but whether due to the tokenizing or its existence, i got nothing. it's not much harder than your coding-interview tree traversals to implement. but, it would be nice to have a correct efficient implementation for everyone. does numpy have A*?
0
1
15,316
0
14,655,846
0
0
0
0
3
true
0
2013-02-01T21:55:00.000
0
3
0
Is there a way to generate a random variate from a non-standard distribution without computing CDF?
14,655,681
1.2
c++,python,algorithm,random,montecarlo
Acceptance/rejection: find a function that is always higher than the pdf. Generate two random variates: the first one you scale to calculate the value, the second you use to decide whether to accept or reject the choice. Rinse and repeat until you accept a value. Sorry I can't be more specific, but I haven't done it for a while. It's a standard algorithm, but I'd personally implement it from scratch, so I'm not aware of any implementations.
I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution. I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow. So I guess my question has two parts: Is there a method/algorithm to quickly generate finite, discrete random variates without using the CDF? Is there a Python module and/or a C++ library which already has this functionality?
0
1
374
0
14,657,373
0
0
0
0
3
false
0
2013-02-01T21:55:00.000
0
3
0
Is there a way to generate a random variate from a non-standard distribution without computing CDF?
14,655,681
0
c++,python,algorithm,random,montecarlo
Indeed acceptance/rejection is the way to go if you know your pdf analytically. Let's call it f(x). Find a pdf g(x) such that there exists a constant c with c.g(x) > f(x), and such that you know how to simulate a variable with pdf g(x). For example, as you work with a distribution with finite support, a uniform will do: g(x) = 1/(size of your domain) over the domain. Then draw a couple (G, U) such that G is simulated with pdf g(x), and U is uniform on [0, c.g(G)]. Then, if U < f(G), accept G as your variable. Otherwise draw again. The G you finally accept will have f as its pdf. Note that the constant c determines the efficiency of the method: the smaller c, the more efficient you will be; basically you will need on average c drawings to get the right variable. Better to pick a function g that is simple enough (don't forget you need to draw variables using g as a pdf) but with the smallest possible c.
I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution. I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow. So I guess my question has two parts: Is there a method/algorithm to quickly generate finite, discrete random variates without using the CDF? Is there a Python module and/or a C++ library which already has this functionality?
0
1
374
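A sketch of the acceptance/rejection recipe in the two answers above ("Is there a way to generate a random variate from a non-standard distribution without computing CDF?"), for a discrete pmf on a finite support with a uniform proposal. The support and weights are made-up placeholders; for a finite discrete pmf that is already normalised, numpy.random.Generator.choice(support, p=probs) is the simpler route:

```python
import numpy as np

rng = np.random.default_rng()

support = np.arange(10)                      # hypothetical finite support
weights = (support + 1.0) ** -1.5            # unnormalised pmf f(k)
n = len(support)
c = weights.max() * n                        # envelope constant so c*g(k) >= f(k), g uniform

def draw():
    while True:
        k = rng.integers(n)                  # proposal from g (uniform over the support)
        u = rng.uniform(0.0, c / n)          # uniform on [0, c*g(k)]
        if u < weights[k]:                   # accept with probability f(k) / (c*g(k))
            return support[k]

samples = [draw() for _ in range(1000)]
```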
0
18,890,513
0
0
0
0
3
false
0
2013-02-01T21:55:00.000
0
3
0
Is there a way to generate a random variate from a non-standard distribution without computing CDF?
14,655,681
0
c++,python,algorithm,random,montecarlo
If acceptance/rejection is also too inefficient, you could try a Markov chain Monte Carlo method. These generate a sequence of samples, each one dependent on the previous one, so by skipping blocks of them you can subsample and obtain a more or less independent set. They only need the PDF, or even just a multiple of it. Usually they work with fixed distributions, but they can also be adapted to slowly changing ones.
I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution. I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow. So I guess my question has two parts: Is there a method/algorithm to quickly generate finite, discrete random variates without using the CDF? Is there a Python module and/or a C++ library which already has this functionality?
0
1
374
0
14,665,480
0
0
0
0
1
false
0
2013-02-02T19:03:00.000
0
2
0
Taking data from a text file and putting it into a numpy array
14,665,379
0
python,numpy
numpy.loadtxt() is the function you are looking for. It returns a two-dimensional array.
I need some help taking data from a .txt file and putting it into an array. I have a very rudimentary understanding of Python, and I have read through the documentation sited in threads relevant to my problem, but after hours of attempting to do this I still have not been able to get anywhere. The data in my file looks like this: 0.000000000000000000e+00 7.335686114232199684e-02 1.999999999999999909e-07 7.571960558042964973e-01 3.999999999999999819e-07 9.909475704320810374e-01 5.999999999999999728e-07 3.412754086075696081e-01 I used numpy.genfromtxt, but got the following output: array(nan) Could you tell me what the proper way to do this is?
0
1
132
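Minimal use of numpy.loadtxt for the two-column, whitespace-separated file shown in the question above ("Taking data from a text file and putting it into a numpy array"); "data.txt" is a placeholder filename:

```python
import numpy as np

data = np.loadtxt("data.txt")       # shape (n_rows, 2) for the file in the question
t, y = data[:, 0], data[:, 1]       # first and second columns
```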
0
14,727,117
0
0
0
0
2
true
0
2013-02-03T21:55:00.000
1
2
0
Counting Cars in OpenCV + Python
14,677,763
1.2
python,video,image-processing,opencv,computer-vision
I guess you are detecting the cars in each frame and creating a new bounding box each time a car is detected. This would explain the many increments of your variable. You have to find a way to figure out if the car detected in one frame is the same car from the frame before (if you had a car detected in the previous frame). You might be able to achieve this by simply comparing the bounding box distances between two frames; if the distance is less than a threshold value, you can say that it's the same car from the previous frame. This way you can track the cars. You could increment the counter variable when the detected car leaves the camera's field of view (exits the frame). The tracking procedure I proposed here is very simple, try searching for "object tracking" to see what else you can use (maybe have a look at OpenCV's KLT tracking).
ive got this big/easy problem that i need to solve but i cant.. What im trying to do is to count cars on a highway, and i actually can detect the moving cars and put bounding boxes on them... but when i try to count them, i simply cant. I tried making a variable (nCars) and increment everytime the program creates a bounding box, but that seems to increment to many times.. The question is: Whats the best way to count moving cars/objects? PS: I Dont know if this is a silly question but im going nutts.... Thanks for everything (: And im new here but i know this website for some time (: Its great!
0
1
1,748
0
14,678,095
0
0
0
0
2
false
0
2013-02-03T21:55:00.000
0
2
0
Counting Cars in OpenCV + Python
14,677,763
0
python,video,image-processing,opencv,computer-vision
You should use an SQLite database to store the cars' information.
ive got this big/easy problem that i need to solve but i cant.. What im trying to do is to count cars on a highway, and i actually can detect the moving cars and put bounding boxes on them... but when i try to count them, i simply cant. I tried making a variable (nCars) and increment everytime the program creates a bounding box, but that seems to increment to many times.. The question is: Whats the best way to count moving cars/objects? PS: I Dont know if this is a silly question but im going nutts.... Thanks for everything (: And im new here but i know this website for some time (: Its great!
0
1
1,748
0
25,715,719
0
0
0
0
1
false
127
2013-02-04T13:59:00.000
14
13
0
Adding meta-information/metadata to pandas DataFrame
14,688,306
1
python,pandas
Just ran into this issue myself. As of pandas 0.13, DataFrames have a _metadata attribute on them that does persist through functions that return new DataFrames. Also seems to survive serialization just fine (I've only tried json, but I imagine hdf is covered as well).
Is it possible to add some meta-information/metadata to a pandas DataFrame? For example, the instrument's name used to measure the data, the instrument responsible, etc. One workaround would be to create a column with that information, but it seems wasteful to store a single piece of information in every row!
0
1
59,446
0
14,718,635
0
1
0
0
1
false
3
2013-02-05T22:55:00.000
2
1
0
Natural Language Processing - Similar to ngram
14,718,543
0.379949
python,nlp,nltk,n-gram
You might want to look into word sense disambiguation (WSD), it is the problem of determining which "sense" (meaning) of a word is activated by the use of the word in a particular context, a process which appears to be largely unconscious in people.
I'm currently working on a NLP project that is trying to differentiate between synonyms (received from Python's NLTK with WordNet) in a context. I've looked into a good deal of NLP concepts trying to find exactly what I want, and the closest thing I've found is n-grams, but its not quite a perfect fit. Suppose I am trying to find the proper definition of the verb "box". "Box" could mean "fight" or "package"; however, somewhere else in the text, the word "ring" or "fighter" appears. As I understand it, an n-gram would be "box fighter" or "box ring", which is rather ridiculous as a phrase, and not likely to appear. But on a concept map, the "box" action might be linked with a "ring", since they are conceptually related. Is n-gram what I want? Is there another name for this? Any help on where to look for retrieving such relational data? All help is appreciated.
0
1
1,365
0
14,734,299
0
1
0
0
1
false
2
2013-02-06T16:06:00.000
2
3
0
python changes format numbers ndarray many digits
14,733,471
0.132549
python,numpy,string-formatting,multidimensional-array
Try numpy.set_printoptions() -- there you can e.g. specify the number of digits that are printed and suppress the scientific notation. For example, numpy.set_printoptions(precision=8,suppress=True) will print 8 digits and no "...e+xx".
I'm a beginner in python and easily get stucked and confused... When I read a file which contains a table with numbers with digits, it reads it as an numpy.ndarray Python is changing the display of the numbers. For example: In the input file i have this number: 56143.0254154 and in the output file the number is written as: 5.61430254e+04 but i want to keep the first format in the output file. i tried to use the string.format or locale.format functions but it doesn't work Can anybody help me to do this? Thanks! Ruxy
0
1
1,536
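The print-options call from the answer above ("python changes format numbers ndarray many digits"), plus numpy.savetxt's fmt argument, which is the matching control when the array is written to an output file:

```python
import numpy as np

a = np.array([[56143.0254154, 1.5],
              [2.0, 3.0]])

np.set_printoptions(precision=8, suppress=True)   # no ...e+04 notation when printing
print(a)
np.savetxt("out.txt", a, fmt="%.7f")              # fixed-point format in the output file
```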
0
14,754,539
0
0
0
0
1
false
2
2013-02-07T03:13:00.000
2
5
0
Interpolation Function
14,742,893
0.07983
python,matlab,pandas,gps
What you can do is use the interp1 function. It fits the numbers onto a new x series by interpolation. For example, if you have x=[1 3 5 6 10 12] and y=[15 20 17 33 56 89] and you want to fill in values for x1=[1 2 3 4 5 6 7 ... 12], you type y1=interp1(x,y,x1).
This is probably a very easy question, but all the sources I have found on interpolation in Matlab are trying to correlate two values, all I wanted to benefit from is if I have data which is collected over an 8 hour period, but the time between data points is varying, how do I adjust it such that the time periods are equal and the data remains consistent? Or to rephrase from the approach I have been trying: I have GPS lat,lon and Unix time for these points; what I want to do is take the lat and lon at time 1 and time 3 and for the case where I don't know time 2, simply fill it with data from time 1 - is there a functional way to do this? (I know in something like Python Pandas you can use fill) but I'm unsure of how to do this in Matlab.
0
1
705
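The NumPy equivalent of the MATLAB interp1 call in the answer above ("Interpolation Function"): resample irregularly spaced samples onto an even grid with numpy.interp, using the answer's example numbers:

```python
import numpy as np

t = np.array([1, 3, 5, 6, 10, 12], dtype=float)    # irregular sample times
y = np.array([15, 20, 17, 33, 56, 89], dtype=float)

t_new = np.arange(1, 13)                           # regular grid 1..12
y_new = np.interp(t_new, t, y)                     # linear interpolation at t_new
print(y_new)
```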
0
14,827,656
1
0
0
0
2
false
2
2013-02-12T05:48:00.000
0
4
0
Search in Large data set
14,826,245
0
python,data-structures
I'd give you a code sample if I better understood what your current data structures look like, but this sounds like a job for a pandas dataframe groupby (in case you don't feel like actually using a database as others have suggested).
I have a list of user:friends (50,000) and a list of event attendees (25,000 events and list of attendees for each event). I want to find top k friends with whom the user goes to the event. This needs to be done for each user. I tried traversing lists but is computationally very expensive. I am also trying to do it by creating weighted graph.(Python) Let me know if there is any other approach.
0
1
177
0
14,826,472
1
0
0
0
2
false
2
2013-02-12T05:48:00.000
0
4
0
Search in Large data set
14,826,245
0
python,data-structures
Can you do something like this? I'm assuming the friends of a user are relatively few, and that the events attended by a particular user are also far fewer than the total number of events. So keep a boolean vector of attended events for each friend of the user. Take dot products; the friend with the maximum value is the one who most closely resembles the user. Again, before you do this you will have to filter some events to keep the size of your vectors manageable.
I have a list of user:friends (50,000) and a list of event attendees (25,000 events and list of attendees for each event). I want to find top k friends with whom the user goes to the event. This needs to be done for each user. I tried traversing lists but is computationally very expensive. I am also trying to do it by creating weighted graph.(Python) Let me know if there is any other approach.
0
1
177
0
14,864,547
0
0
0
0
1
false
22
2013-02-13T21:06:00.000
22
2
0
sklearn logistic regression with unbalanced classes
14,863,125
1
python,scikit-learn,classification
Have you tried passing class_weight="auto" to your classifier? Not all classifiers in sklearn support this, but some do. Check the docstrings. Also you can rebalance your dataset by randomly dropping negative examples and / or over-sampling positive examples (+ potentially adding some slight gaussian feature noise).
I'm solving a classification problem with sklearn's logistic regression in python. My problem is a general/generic one. I have a dataset with two classes/result (positive/negative or 1/0), but the set is highly unbalanced. There are ~5% positives and ~95% negatives. I know there are a number of ways to deal with an unbalanced problem like this, but have not found a good explanation of how to implement properly using the sklearn package. What I've done thus far is to build a balanced training set by selecting entries with a positive outcome and an equal number of randomly selected negative entries. I can then train the model to this set, but I'm stuck with how to modify the model to then work on the original unbalanced population/set. What are the specific steps to do this? I've poured over the sklearn documentation and examples and haven't found a good explanation.
0
1
18,285
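The class_weight route from the answer above ("sklearn logistic regression with unbalanced classes"); in current scikit-learn the keyword value is "balanced" (older releases spelled it "auto"). Synthetic data with roughly 5% positives, as in the question:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(int)          # roughly 5% positives

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))                    # predicted class probabilities
```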
0
14,877,368
0
0
0
0
1
true
1
2013-02-14T06:40:00.000
1
1
0
customize the default toolbar icon images of a matplotlib graph
14,869,145
1.2
python,matplotlib
I suspect that exactly what you will have to do will depend on your GUI toolkit. The code that you want to look at is in matplotlib/lib/matplotlib/backends, and you want to find the class that sub-classes NavigationToolbar2 in whichever backend you are using.
i want to change the default icon images of a matplotplib. even when i replaced the image with the same name and size from the image location i.e. C:\Python27\Lib\site-packages\matplotlib\mpl-data\images\home.png its still plotting the the graphs with the same default images. If I need to change the code of the image location in any file kindly direct me to the code and the code segment.
0
1
888
0
14,877,671
0
0
0
0
1
false
0
2013-02-14T13:01:00.000
4
2
0
How to Train Single-Object Recognition?
14,875,450
0.379949
python,language-agnostic,machine-learning,object-recognition,pybrain
First, a note regarding the classification method to use. If you intend to use the image pixels themselves as features, neural network might be a fitting classification method. In that case, I think it might be a better idea to train the same network to distinguish between the various objects, rather than using a separate network for each, because it would allow the network to focus on the most discriminative features. However, if you intend to extract synthetic features from the image and base the classification on them, I would suggest considering other classification methods, e.g. SVM. The reason is that neural networks generally have many parameters to set (e.g. network size and architecture), making the process of building a classifier longer and more complicated. Specifically regarding your NN-related questions, I would suggest using a feedforward network, which is relatively easy to build and train, with a softmax output layer, which allows assigning probabilities to the various classes. In case you're using a single network for classification, the question regarding negative examples is irrelevant; for each class, other classes would be its negative examples. If you decide to use different networks, you can use the same counter-examples (i.e. other classes), but as a rule of thumb, I'd suggest showing no more than 2-10 negative examples per positive example. EDIT: based on the comments below, it seems the problem is to decide how fitting is a given image (drawing) to a given concept, e.g. how similar to a tree is the the user-supplied tree drawing. In this case, I'd suggest a radically different approach: extract visual features from each drawing, and perform knn classification, based on all past user-supplied drawings and their classifications (possibly, plus a predefined set generated by you). You can score the similarity either by the nominal distance to same-class examples, or by the class distribution of the closest matches. I know that this is not neccessarily what you're asking, but this seems to me an easier and more direct approach, especially given the fact that the number of examples and classes is expected to constantly grow.
I was thinking of doing a little project that involves recognizing simple two-dimensional objects using some kind of machine learning. I think it's better that I have each network devoted to recognizing only one type of object. So here are my two questions: What kind of network should I use? The two I can think of that could work are simple feed-forward networks and Hopfield networks. Since I also want to know how much the input looks like the target, Hopfield nets are probably not suitable. If I use something that requires supervised learning and I only want one output unit that indicates how much the input looks like the target, what counter-examples should I show it during the training process? Just giving it positive examples I'm pretty sure won't work (the network will just learn to always say 'yes'). The images are going to be low resolution and black and white.
0
1
2,529
0
14,884,277
0
0
0
0
1
true
0
2013-02-14T21:18:00.000
1
1
0
Read Vector from Text File
14,884,214
1.2
python,numpy
Just use loadtxt and reshape (or ravel) the resulting array.
I have a text file with a bunch of number that contains newlines every 32 entries. I want to read this file a a column vector using Numpy. How can I use numpy.loadtxt and ignore the newlines such that the generated array is of size 1024x1 and not 32x32?
0
1
174
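What the answer above ("Read Vector from Text File") suggests, spelled out: let loadtxt read the 32-values-per-line block, then flatten it. "numbers.txt" is a placeholder filename:

```python
import numpy as np

vec = np.loadtxt("numbers.txt").ravel()     # flat vector of 1024 values
col = vec.reshape(-1, 1)                    # or as an explicit 1024x1 column
```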
0
14,900,251
0
1
0
0
1
false
1
2013-02-15T16:35:00.000
2
1
0
how to find the integral of a matrix exponential in python
14,899,139
0.379949
python,matrix,numpy,integration,exponential
Provided A has the right properties, you could transform it to the diagonal form A0 by calculating its eigenvectors and eigenvalues. In the diagonal form, the solution is sol = [exp(A0*b) - exp(A0*a)] * inv(A0), where A0 is the diagonal matrix with the eigenvalues and inv(A0) just contains the inverses of the eigenvalues on its diagonal. Finally, you transform the solution back by multiplying it with the matrix of eigenvectors from the left and its inverse from the right: eigvecs * sol * inv(eigvecs).
I have a matrix of the form, say e^(Ax) where A is a square matrix. How can I integrate it from a given value a to another value bso that the output is a corresponding array?
0
1
1,283
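A direct way to check the eigendecomposition recipe in the answer above ("how to find the integral of a matrix exponential in python"), using the closed form integral_a^b exp(A x) dx = A^{-1} (exp(A b) - exp(A a)), valid when A is invertible. The matrix here is an arbitrary example:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # hypothetical invertible square matrix
a, b = 0.0, 1.0

integral = np.linalg.solve(A, expm(A * b) - expm(A * a))   # A^{-1} (e^{Ab} - e^{Aa})
print(integral)
```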
0
14,966,265
0
0
0
0
1
false
1
2013-02-18T22:31:00.000
2
2
0
Does the EPD Free distribution use MKL?
14,946,512
0.197375
lapack,blas,enthought,intel-mkl,epd-python
The EPD Free 7.3 installers do not include MKL. The BLAS/LAPACK libraries which they use are ATLAS on Linux & Windows and Accelerate on OSX.
According to the Enthought website, the EPD Python distribution uses MKL for numpy and scipy. Does EPD Free also use MKL? If not does it use another library for BLAS/LAPACK? I am using EPD Free 7.3-2 Also, what library does the windows binary installer for numpy that can be found on scipy.org use?
0
1
538
0
14,999,516
0
0
0
0
1
true
0
2013-02-21T09:20:00.000
1
1
0
error in Python gradient measurement
14,998,497
1.2
python,scipy
The standard error of a linear regression is derived from the standard deviation of the residuals obtained by subtracting the fitted model from your data points; what scipy.stats.linregress reports is the standard error of the estimated slope. It indicates how well your data points can be fitted by a linear model.
I need to fit a straight line to my data to find out if there is a gradient. I am currently doing this with scipy.stats.linregress. I'm a little confused though, because one of the outputs of linregress is the "standard error", but I'm not sure how linregress calculated this, as the uncertainty of your data points is not given as an input. Surely the uncertainty on the data points influence how uncertain the given gradient is? Thank you!
0
1
340
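scipy.stats.linregress in action for the question above ("error in Python gradient measurement"); the stderr field it returns is the standard error of the fitted slope, computed from the scatter of the residuals. Synthetic data:

```python
import numpy as np
from scipy import stats

x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0 + np.random.normal(scale=2.0, size=x.size)

res = stats.linregress(x, y)
print(res.slope, res.stderr)     # fitted gradient and its standard error
```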
0
15,011,126
0
0
0
0
1
true
8
2013-02-21T17:44:00.000
6
2
0
Artificial life with neural networks
15,008,875
1.2
python,artificial-intelligence,neural-network,artificial-life
If the environment is benign enough (e.g it's easy enough to find food) then just moving randomly may be a perfectly viable strategy and reproductive success may be far more influenced by luck than anything else. Also consider unintended consequences: e.g if offspring is co-sited with its parent then both are immediately in competition with each other in the local area and this might be sufficiently disadvantageous to lead to the death of both in the longer term. To test your system, introduce an individual with a "premade" neural network set up to steer the individual directly towards the nearest food (your model is such that such a thing exists and is reasobably easy to write down, right? If not, it's unreasonable to expect it to evolve!). Introduce that individual into your simulation amongst the dumb masses. If the individual doesn't quickly dominate, it suggests your simulation isn't set up to reinforce such behaviour. But if the individual enjoys reproductive success and it and its descendants take over, then your simulation is doing something right and you need to look elsewhere for the reason such behaviour isn't evolving. Update in response to comment: Seems to me this mixing of angles and vectors is dubious. Whether individuals can evolve towards the "move straight towards nearest food" behaviour must rather depend on how well an atan function can be approximated by your network (I'm sceptical). Again, this suggests more testing: set aside all the ecological simulation and just test perturbing a population of your style of random networks to see if they can evolve towards the expected function. (simpler, better) Have the network output a vector (instead of an angle): the direction the individual should move in (of course this means having 2 output nodes instead of one). Obviously the "move straight towards food" strategy is then just a straight pass-through of the "direction towards food" vector components, and the interesting thing is then to see whether your random networks evolve towards this simple "identity function" (also should allow introduction of a readymade optimised individual as described above). I'm dubious about the "fixed amount of food" too. (I assume you mean as soon as a red dot is consumed, another one is introduced). A more "realistic" model might be to introduce food at a constant rate, and not impose any artificial population limits: population limits are determined by the limitations of food supply. e.g If you introduce 100 units of food a minute and individuals need 1 unit of food per minute to survive, then your simulation should find it tends towards a long term average population of 100 individuals without any need for a clamp to avoid a "population explosion" (although boom-and-bust, feast-or-famine dynamics may actually emerge depending on the details).
I am trying to build a simple evolution simulation of agents controlled by neural network. In the current version each agent has feed-forward neural net with one hidden layer. The environment contains fixed amount of food represented as a red dot. When an agent moves, he loses energy, and when he is near the food, he gains energy. Agent with 0 energy dies. the input of the neural net is the current angle of the agent and a vector to the closest food. Every time step, the angle of movement of each agent is changed by the output of its neural net. The aim of course is to see food-seeking behavior evolves after some time. However, nothing happens. I don't know if the problem is the structure the neural net (too simple?) or the reproduction mechanism: to prevent population explosion, the initial population is about 20 agents, and as the population becomes close to 50, the reproduction chance approaches zero. When reproduction does occur, the parent is chosen by going over the list of agents from beginning to end, and checking for each agent whether or not a random number between 0 to 1 is less than the ratio between this agent's energy and the sum of the energy of all agents. If so, the searching is over and this agent becomes a parent, as we add to the environment a copy of this agent with some probability of mutations in one or more of the weights in his neural network. Thanks in advance!
0
1
2,480
0
15,016,437
0
0
0
0
1
false
1
2013-02-22T03:09:00.000
1
2
0
Pandas: Attaching Descriptive Dict() to Hierarchical Index (i.e. CountryCode and CountryName)
15,016,187
0.099668
python,pandas
I think the simplest solution is to split this into two columns in your DataFrame, one for country_code and one for country_name (you could name them something else). When you print or graph you can select which column is used.
Is there anyway to attach a descriptive version to an Index Column? For Example, I use ISO3 CountryCode's to merge from different data sources 'AUS' -> Australia etc. This is very convenient for merging different data sources, but when I want to print the data I would like the description version (i.e. Australia). I am imagining a dictionary attached to the Index Column of 'CountryCode' (where CountryCode is Key and CountryName is Value) and a flag that will print the Value instead of the Key which is used for data manipulation. Is the best solution to generate my own Dictionary() and then when it comes time to print or graph to then merge the country names in? This is ok, except it would be nice for ALL of the dataset information to be carried within the dataframe object.
0
1
102
0
15,020,070
0
0
0
0
1
false
11
2013-02-22T06:54:00.000
3
3
0
Python SciPy convolve vs fftconvolve
15,018,526
0.197375
python,scipy,fft,convolution
FFT fast convolution via the overlap-add or overlap save algorithms can be done in limited memory by using an FFT that is only a small multiple (such as 2X) larger than the impulse response. It breaks the long FFT up into properly overlapped shorter but zero-padded FFTs. Even with the overlap overhead, O(NlogN) will beat M*N in efficiency for large enough N and M.
I know generally speaking FFT and multiplication is usually faster than direct convolve operation, when the array is relatively large. However, I'm convolving a very long signal (say 10 million points) with a very short response (say 1 thousand points). In this case the fftconvolve doesn't seem to make much sense, since it forces a FFT of the second array to the same size of the first array. Is it faster to just do direct convolve in this case?
0
1
10,751
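The overlap-add scheme described in the answer above ("Python SciPy convolve vs fftconvolve") is available directly as scipy.signal.oaconvolve (SciPy 1.4+); a scaled-down comparison with fftconvolve and direct convolution for the long-signal, short-kernel case:

```python
import numpy as np
from scipy import signal

x = np.random.randn(100_000)                 # long signal (scaled down from 10 million)
h = np.random.randn(1_000)                   # short impulse response

y1 = signal.oaconvolve(x, h, mode="full")    # overlap-add FFT convolution
y2 = signal.fftconvolve(x, h, mode="full")   # one big FFT
y3 = np.convolve(x, h, mode="full")          # direct convolution
print(np.allclose(y1, y2), np.allclose(y1, y3))
```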
0
15,038,477
0
0
0
0
1
true
12
2013-02-22T10:05:00.000
15
3
0
How to encode a categorical variable in sklearn?
15,021,521
1.2
python,machine-learning,scikit-learn
DictVectorizer is the recommended way to generate a one-hot encoding of categorical variables; you can use the sparse argument to create a sparse CSR matrix instead of a dense numpy array. I usually don't care about multicollinearity and I haven't noticed a problem with the approaches that I tend to use (i.e. LinearSVC, SGDClassifier, tree-based methods). It shouldn't be a problem to patch the DictVectorizer to drop one column per categorical feature - you simply need to remove one term from DictVectorizer.vocabulary at the end of the fit method. (Pull requests are always welcome!)
I'm trying to use the car evaluation dataset from the UCI repository and I wonder whether there is a convenient way to binarize categorical variables in sklearn. One approach would be to use the DictVectorizer of LabelBinarizer but here I'm getting k different features whereas you should have just k-1 in order to avoid collinearization. I guess I could write my own function and drop one column but this bookkeeping is tedious, is there an easy way to perform such transformations and get as a result a sparse matrix?
0
1
22,439
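One-hot encoding with DictVectorizer as the accepted answer above recommends ("How to encode a categorical variable in sklearn?"); sparse=True yields a scipy.sparse CSR matrix. The rows are toy stand-ins for the car-evaluation records:

```python
from sklearn.feature_extraction import DictVectorizer

rows = [{"buying": "vhigh", "doors": "2", "safety": "low"},
        {"buying": "low",   "doors": "4", "safety": "high"}]   # hypothetical records

vec = DictVectorizer(sparse=True)
X = vec.fit_transform(rows)            # CSR matrix, one column per (feature, category) pair
print(vec.feature_names_)
```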
1
15,036,718
0
0
0
0
1
false
1
2013-02-23T03:25:00.000
0
3
0
2D Python list will have random results
15,036,694
0
python,list
Your board is getting multiple references to the same array. You need to replace the * 10 with another list comprehension.
I have created a 10 by 10 game board. It is a 2D list, with another list of 2 inside. I used board = [[['O', 'O']] * 10 for x in range(1, 11)]. So it will produce something like ['O', 'O'] ['O', 'O']... ['O', 'O'] ['O', 'O']... Later on I want to set a single cell to have 'C' I use board.gameBoard[animal.y][animal.x][0] = 'C' board being the class the gameBoard is in, and animal is a game piece, x & y are just ints. Some times it will work and the specified cell will become ['C', 'O'], other times it will fill the entire row with ['C', 'O']['C', 'O']['C', 'O']['C', 'O'] Does anyone know why that might be happening?
0
1
168
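The fix the answer above points at ("2D Python list will have random results"): [['O', 'O']] * 10 makes every cell in a row refer to the same inner list, so build each cell with its own list in a nested comprehension:

```python
board_bad  = [[['O', 'O']] * 10 for _ in range(10)]               # each row holds 10 references to one cell
board_good = [[['O', 'O'] for _ in range(10)] for _ in range(10)]

board_good[2][3][0] = 'C'    # changes exactly one cell
board_bad[2][3][0] = 'C'     # appears to change the whole row
```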
0
15,042,390
0
1
0
0
1
false
8
2013-02-23T14:34:00.000
0
2
0
How to efficiently compute the cosine similarity between millions of strings
15,041,647
0
java,python,algorithm,divide-and-conquer,cosine-similarity
Work with the transposed matrix. That is what Mahout does on Hadoop to do this kind of task fast (or just use Mahout). Essentially, computing cosine similarity the naive way is bad, because you end up computing a lot of 0 * something. Instead, you are better off working in columns and leaving out all the zeros there.
I need to compute the cosine similarity between strings in a list. For example, I have a list of over 10 million strings, each string has to determine similarity between itself and every other string in the list. What is the best algorithm I can use to efficiently and quickly do such task? Is the divide and conquer algorithm applicable? EDIT I want to determine which strings are most similar to a given string and be able to have a measure/score associated with the similarity. I think what I want to do falls in line with clustering where the number of clusters are not initially known.
0
1
2,043
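A sparse-matrix take on the advice in the answer above ("How to efficiently compute the cosine similarity between millions of strings"): vectorise the strings and let sparse products skip the zero terms. This is a small-scale sketch, not a recipe for the full 10-million-string problem:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

docs = ["the cat sat", "the cat sat on the mat", "a completely different string"]

tfidf = TfidfVectorizer().fit_transform(docs)      # L2-normalised sparse rows
sims = linear_kernel(tfidf[0:1], tfidf).ravel()    # cosine similarity of doc 0 vs all docs
print(sims)
```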
0
15,068,825
0
0
0
0
1
false
1
2013-02-25T13:59:00.000
0
3
0
representation of a number as multiplication of its factors
15,068,698
0
python
I know one... If you're using Python, you can use a dictionary to simplify the storage. You'll have to check every prime up to the square root of the number. Now, suppose p^k divides your number n; your task, I suppose, is to find k. Here's the method: int c = 0; int temp = n; while (temp % p == 0) { temp /= p; c++; } The above is C++ code, but you'll get the idea: at the end of this loop you'll have c = k. And yeah, the link given by will is a perfect Python implementation of the same algorithm.
I want to represent a number as the product of its factors.The number of factors that are used to represent the number should be from 2 to number of prime factors of the same number(this i s the maximum possible number of factors for a number). for example taking the number 24: representation of the number as two factors multiplication are 2*12, 8*3, 6*4 and so on..., representation of the number as three factors multiplication are 2*2*6, 2*3*4 and so on..., representation of the number as four factors multiplication(prime factors alone) are 2*2*2*3. please help me get some simple and generic algorithm for this
0
1
1,491
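A small recursive generator for the "product of factors" representations the question above asks for ("representation of a number as multiplication of its factors"); factors are kept non-decreasing so each representation appears once:

```python
def factorizations(n, start=2):
    """Yield every way to write n as a non-decreasing product of factors >= start."""
    for d in range(start, int(n ** 0.5) + 1):
        if n % d == 0:
            for rest in factorizations(n // d, d):
                yield (d,) + rest
    yield (n,)

for combo in factorizations(24):
    if len(combo) >= 2:                      # skip the trivial single-factor form
        print(" * ".join(map(str, combo)))   # 2*2*2*3, 2*2*6, 2*3*4, 2*12, 3*8, 4*6
```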
0
15,107,442
0
0
0
0
1
false
17
2013-02-25T16:48:00.000
5
3
0
keep/slice specific columns in pandas
15,072,005
0.321513
python,pandas
If your column names have information that you can filter for, you could use df.filter(regex='name*'). I am using this to filter between my 189 data channels from a1_01 to b3_21 and it works fine.
I know about these column slice methods: df2 = df[["col1", "col2", "col3"]] and df2 = df.ix[:,0:2] but I'm wondering if there is a way to slice columns from the front/middle/end of a dataframe in the same slice without specifically listing each one. For example, a dataframe df with columns: col1, col2, col3, col4, col5 and col6. Is there a way to do something like this? df2 = df.ix[:, [0:2, "col5"]] I'm in the situation where I have hundreds of columns and routinely need to slice specific ones for different requests. I've checked through the documentation and haven't seen something like this. Have I overlooked something?
0
1
28,966
0
15,079,557
0
0
0
0
2
false
2
2013-02-26T00:15:00.000
4
2
0
Linear time algorithm to compute cartesian product
15,079,069
0.379949
python,algorithm,cartesian-product
There are mn results; the minimum work you have to do is write each result to the output. So you cannot do better than O(mn).
I was asked in an interview to come up with a solution with linear time for cartesian product. I did the iterative manner O(mn) and a recursive solution also which is also O(mn). But I could not reduce the complexity further. Does anyone have ideas on how this complexity can be improved? Also can anyone suggest an efficient recursive approach?
0
1
1,149
0
20,166,514
0
0
0
0
2
false
2
2013-02-26T00:15:00.000
0
2
0
Linear time algorithm to compute cartesian product
15,079,069
0
python,algorithm,cartesian-product
The question that comes to my mind reading this is, "Linear with respect to what?" Remember that in mathematics, all variables must be defined to have meaning. Big-O notation is no exception. Simply saying an algorithm is O(n) is meaningless if n is not defined. Assuming the question was meaningful, and not a mistake, my guess is that they wanted you to ask for clarification. Another possibility is that they wanted to see how you would respond when presented with an impossible situation.
I was asked in an interview to come up with a solution with linear time for cartesian product. I did the iterative manner O(mn) and a recursive solution also which is also O(mn). But I could not reduce the complexity further. Does anyone have ideas on how this complexity can be improved? Also can anyone suggest an efficient recursive approach?
0
1
1,149
0
15,090,586
0
0
0
0
1
false
1
2013-02-26T10:56:00.000
0
1
0
contourf result differs when switching axes
15,087,303
0
python,matplotlib,axes
The problem was the sampling. Although the arrays have the same size, the stepsize in the plot is not equal for x and y axis.
I am plotting a contourmap. When first plotting I noticed I had my axes wrong. So I switched the axes and noticed that the structure of both plots is different. On the first plot the axes and assignments are correct, but the structure is messy. On the second plot it is the other way around. Since it's a square matrix I don't see why there should be a sampling issue. Transposing the matrix with z-values or the meshgrid of x and y does not help either. Whatever way I plot x and y correctly it keeps looking messy. Does anybody here know any more ideas which I can try or what might solve it?
0
1
50
0
15,111,407
0
0
0
0
3
false
5
2013-02-27T11:42:00.000
7
4
0
what is a reason to use ndarray instead of python array
15,111,230
1
python,numpy,multidimensional-array
There are at least two main reasons for using NumPy arrays: NumPy arrays require less space than Python lists. So you can deal with more data in a NumPy array (in-memory) than you can with Python lists. NumPy arrays have a vast library of functions and methods unavailable to Python lists or Python arrays. Yes, you can not simply convert lists to NumPy arrays and expect your code to continue to work. The methods are different, the bool semantics are different. For the best performance, even the algorithm may need to change. However, if you are looking for a Python replacement for Matlab, you will definitely find uses for NumPy. It is worth learning.
I built a class with some iteration over incoming data. The data are in array form without the use of numpy objects. In my code I often use .append to create another array. At some point I changed one of the big arrays (1000x2000) to a numpy.array. Now I get error after error. I started to convert all of the arrays into ndarrays, but methods like .append do not work any more. I started to have problems with pointing to rows, columns or cells, and have to rebuild all the code. I tried to google an answer to the question: "what is the advantage of using ndarray over a normal array" but I can't find a sensible answer. Can you say when I should start to use ndarrays, and whether in your practice you use both of them or stick to one only? Sorry if the question is at a novice level, but I am new to Python; I'm just trying to move from Matlab and want to understand the pros and cons. Thanks
0
1
1,532
0
15,111,278
0
0
0
0
3
true
5
2013-02-27T11:42:00.000
8
4
0
what is a reason to use ndarray instead of python array
15,111,230
1.2
python,numpy,multidimensional-array
NumPy and Python arrays share the property of being efficiently stored in memory. NumPy arrays can be added together, multiplied by a number, you can calculate, say, the sine of all their values in one function call, etc. As HYRY pointed out, they can also have more than one dimension. You cannot do this with Python arrays. On the other hand, Python arrays can indeed be appended to. Note that NumPy arrays can however be concatenated together (hstack(), vstack(),…). That said, NumPy arrays are mostly meant to have a fixed number of elements. It is common to first build a list (or a Python array) of values iteratively and then convert it to a NumPy array (with numpy.array(), or, more efficiently, with numpy.frombuffer(), as HYRY mentioned): this allows mathematical operations on arrays (or matrices) to be performed very conveniently (simple syntax for complex operations). Alternatively, numpy.fromiter() might be used to construct the array from an iterator. Or loadtxt() to construct it from a text file.
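A short, hypothetical sketch of that workflow — append to a plain list while iterating, then convert once; numpy.fromiter is shown as the alternative mentioned above:

    import numpy as np

    values = []
    for i in range(2000):              # stand-in for the incoming data loop
        values.append(i * 0.5)

    arr = np.array(values)             # one conversion at the end
    print(arr.mean(), np.sin(arr[:3])) # whole-array math in single calls

    # numpy.fromiter builds the array directly from an iterator:
    arr2 = np.fromiter((i * 0.5 for i in range(2000)), dtype=float)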
I built a class with some iteration over incoming data. The data are in array form without the use of numpy objects. In my code I often use .append to create another array. At some point I changed one of the big arrays (1000x2000) to a numpy.array. Now I get error after error. I started to convert all of the arrays into ndarrays, but methods like .append do not work any more. I started to have problems with pointing to rows, columns or cells, and have to rebuild all the code. I tried to google an answer to the question: "what is the advantage of using ndarray over a normal array" but I can't find a sensible answer. Can you say when I should start to use ndarrays, and whether in your practice you use both of them or stick to one only? Sorry if the question is at a novice level, but I am new to Python; I'm just trying to move from Matlab and want to understand the pros and cons. Thanks
0
1
1,532
0
53,073,528
0
0
0
0
3
false
5
2013-02-27T11:42:00.000
1
4
0
what is a reason to use ndarray instead of python array
15,111,230
0.049958
python,numpy,multidimensional-array
Another great advantage of using NumPy arrays over built-in lists is the fact that NumPy has a C API that allows native C and C++ code to access NumPy arrays directly. Hence, many Python libraries written in low-level languages like C are expecting you to work with NumPy arrays instead of Python lists. Reference: Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython
I built a class with some iteration over incoming data. The data are in array form without the use of numpy objects. In my code I often use .append to create another array. At some point I changed one of the big arrays (1000x2000) to a numpy.array. Now I get error after error. I started to convert all of the arrays into ndarrays, but methods like .append do not work any more. I started to have problems with pointing to rows, columns or cells, and have to rebuild all the code. I tried to google an answer to the question: "what is the advantage of using ndarray over a normal array" but I can't find a sensible answer. Can you say when I should start to use ndarrays, and whether in your practice you use both of them or stick to one only? Sorry if the question is at a novice level, but I am new to Python; I'm just trying to move from Matlab and want to understand the pros and cons. Thanks
0
1
1,532
0
15,143,804
0
1
0
0
1
false
3
2013-02-28T18:45:00.000
6
2
0
Is there any documentation on the interdependencies between packages in the scipy, numpy, pandas, scikit ecosystem? Python
15,143,253
1
python,numpy,scipy,pandas,scikit-learn
AFAIK, here is the dependency structure (numpy is a dependency of everything): scipy builds on numpy, scikit-learn builds on numpy and scipy, and pandas builds on numpy.
Is there any documentation on the interdependencies and relationship between packages in the the scipy, numpy, pandas, scikit ecosystem?
0
1
198
0
15,155,284
0
0
0
0
1
true
2
2013-03-01T09:44:00.000
3
1
0
svm.sparse.SVC taking a lot of time to get trained
15,154,690
1.2
python,svm,scikit-learn
Try using sklearn.svm.LinearSVC. This also has a linear kernel, but the underlying implementation is liblinear, which is known to be faster. With that in mind, your data set isn't very small, so even this classifier might take a while. Edit after first comment: In that case I think you have several options, none of which is perfect: The non-solution option: call it a day and hope that training of svm.sparse.SVC has finished by tomorrow morning. If you can, buy a better computer. The cheat option: give up on probabilities. You haven't told us what your problem is, so they may not be essential. The back-against-the-wall option: if you absolutely need probabilities and things must run faster, use a different classifier. Options include sklearn.naive_bayes.*, sklearn.linear_model.LogisticRegression, etc. These will be much faster to train, but the price you pay is somewhat reduced accuracy.
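As a rough illustration of the suggestion (placeholder data; your X would be the existing CSR matrix), a minimal LinearSVC sketch — note that it exposes decision_function margins rather than predict_proba:

    from sklearn.svm import LinearSVC
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=2000, n_features=50)   # placeholder data
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    print(clf.decision_function(X[:3]))   # margins, not probabilities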
I am trying to train svm.sparse.SVC in scikit-learn. Right now the dimension of the feature vectors is around 0.7 million and the number of feature vectors being used for training is 20k. I am providing input using csr sparse matrices as only around 500 dimensions are non-zero in each feature vector. The code is running since the past 5 hours. Is there any estimate on how much time it will take? Is there any way to do the training faster? Kernel is linear.
0
1
1,802
0
29,629,579
0
0
0
0
1
false
1
2013-03-01T12:25:00.000
-1
2
0
Where do I add a scale factor to the Essential Matrix to produce a real world translation value
15,157,756
-0.099668
python,opencv,image-processing
I have the same problem. I think with a monocular camera you may need an object with known 3D coordinates. That may help.
I'm working with OpenCV and python and would like to obtain the real world translation between two cameras. I'm using a single calibrated camera which is moving. I've already worked through feature matching, calculation of F via RANSAC, and calculation of E. To get the translation between cameras, I think I can use: w, u, vt = cv2.SVDecomp and then my t vector could be: t = u[:,2] An example output is: [[ -1.16399893 9.78967574 1.40910252] [ -7.79802049 -0.26646268 -13.85252956] [ -2.67690676 13.89538682 0.19209676]] t vector: [ 0.81586158 0.0750399 -0.57335756] I think I understand how the translation is not in real world scale so I need to provide that scale somehow if I want a real world translation. If I do know the distance between the cameras, can I just apply it directly to my t vector by multiplication? I think I'm missing something here...
0
1
726
0
15,181,340
0
0
0
0
1
false
1
2013-03-02T17:45:00.000
0
3
0
Alternative to support vector machine classifier in python?
15,177,490
0
python,opencv,machine-learning,scikit-learn,classification
If the images that belong to the same class are the results of transformations of some starting image, you can increase your training set size by applying transformations to your labeled examples. For example, if you are doing character recognition, affine or elastic transformations can be used. P. Simard, in "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis", describes it in more detail. In the paper he uses Neural Networks, but the same applies for SVM.
I have to make comparisons between 155 image feature vectors. Every feature vector has 5 features. My images are divided into 10 classes. Unfortunately, I need at least 100 images per class to use a support vector machine. Is there any alternative?
0
1
1,430
0
15,200,968
0
0
0
0
1
false
11
2013-03-04T11:32:00.000
8
2
0
Calculate inverse of a function--Library
15,200,560
1
python,c,math
As has already been mentioned, not all functions are invertible. In some cases imposing additional constraints helps: think about the inverse of sin(x). Once you are sure your function has a unique inverse, solve the equation f(x) = y. The solution gives you the inverse, y(x). In python, look for nonlinear solvers from scipy.optimize.
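A small illustrative sketch of that approach with scipy.optimize (the function f and the bracket [-10, 10] are assumptions for the example; your function must be monotonic on the bracket for the inverse to be unique):

    from scipy.optimize import brentq

    def f(x):
        return x**3 + x          # monotonic on the chosen domain, so invertible

    def f_inverse(y, lo=-10.0, hi=10.0):
        # solve f(x) - y = 0 for x inside [lo, hi]
        return brentq(lambda x: f(x) - y, lo, hi)

    print(f_inverse(f(2.5)))     # ~2.5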
Is there any library available to compute the inverse of a function? To be more specific, given a function y=f(x) and a domain, is there any library which can output x=f^-1(y)? Sadly I cannot use Matlab/Mathematica in my application; I am looking for a C/Python library.
0
1
36,620
0
15,319,789
0
0
0
0
1
false
2
2013-03-07T21:42:00.000
0
1
0
creating arabic corpus
15,282,336
0
python,nlp,nltk,sentiment-analysis,rapidminer
Well, I think that rapidminer is very interesting and can handle this task. It contains several operators dealing with text mining. Also, it allows the creation of new operators with high fluency.
I'm doing sentiment analysis for the Arabic language and I want to create my own corpus. To do that, I collected 300 statuses from Facebook and classified them into positive and negative. Now I want to tokenize these statuses in order to obtain a list of words, then generate unigrams, bigrams and trigrams and use cross-fold validation. I'm using NLTK in Python for the moment — is this software able to do this task for the Arabic language, or would RapidMiner be better to work with? What do you think? I'm also wondering how to generate the bigrams and trigrams and use cross-fold validation. Is there any idea?
0
1
1,057
0
15,306,292
0
1
0
0
2
false
5
2013-03-09T01:45:00.000
1
4
0
How to calculate all interleavings of two lists?
15,306,231
0.049958
python,algorithm
As suggested by @airza, the itertools module is your friend. If you want to avoid using encapsulated magical goodness, my hint is to use recursion. Start playing out the process of generating the lists in your mind, and when you notice you're doing the same thing again, try to find the pattern. For example: Take the first element from the first list. Either take the 2nd, or the first from the other list. Either take the 3rd, or the 2nd if you didn't, or another one from the other list. ... Okay, that is starting to look like there's some greater logic we're not using. I'm just incrementing the numbers. Surely I can find a base case that works in terms of "the first remaining element" instead of naming higher and higher elements? Play with it. :)
I want to create a function that takes in two lists, the lists are not guaranteed to be of equal length, and returns all the interleavings between the two lists. Input: Two lists that do not have to be equal in size. Output: All possible interleavings between the two lists that preserve the original list's order. Example: AllInter([1,2],[3,4]) -> [[1,2,3,4], [1,3,2,4], [1,3,4,2], [3,1,2,4], [3,1,4,2], [3,4,1,2]] I do not want a solution. I want a hint.
0
1
1,387
0
15,307,850
0
1
0
0
2
false
5
2013-03-09T01:45:00.000
0
4
0
How to calculate all interleavings of two lists?
15,306,231
0
python,algorithm
You can try something a little closer to the metal and more elegant(in my opinion) iterating through different possible slices. Basically step through and iterate through all three arguments to the standard slice operation, removing anything added to the final list. Can post code snippet if you're interested.
I want to create a function that takes in two lists, the lists are not guaranteed to be of equal length, and returns all the interleavings between the two lists. Input: Two lists that do not have to be equal in size. Output: All possible interleavings between the two lists that preserve the original list's order. Example: AllInter([1,2],[3,4]) -> [[1,2,3,4], [1,3,2,4], [1,3,4,2], [3,1,2,4], [3,1,4,2], [3,4,1,2]] I do not want a solution. I want a hint.
0
1
1,387
0
15,329,656
0
0
0
0
1
false
1
2013-03-11T00:01:00.000
0
3
0
How can I efficiently get all divisors of X within a range if I have X's prime factorization?
15,329,256
0
c++,python,algorithm,primes,prime-factoring
As Malvolio was (indirectly) getting at, I personally wouldn't find a use for prime factorization if you want to find factors in a range. I would start at int t = (int)(sqrt(n)) and then decrement until: 1. t is a factor; 2. t or n/t has reached the range (a flag) and then both have left the range. Or, if your range is relatively small, check the values in the range themselves.
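A small Python sketch of that square-root walk (the numbers in the example are the ones from the question; the lo/hi bounds are assumed inclusive here):

    import math

    def divisors_in_range(n, lo, hi):
        found = []
        for t in range(math.isqrt(n), 0, -1):
            if n % t == 0:
                for d in (t, n // t):         # both members of the divisor pair
                    if lo <= d <= hi:
                        found.append(d)
        return sorted(set(found))

    print(divisors_in_range(100, 23, 49))     # [25]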
So I have algorithms (easily searchable on the net) for prime factorization and divisor acquisition but I don't know how to scale it to finding those divisors within a range. For example all divisors of 100 between 23 and 49 (arbitrary). But also something efficient so I can scale this to big numbers in larger ranges. At first I was thinking of using an array that's the size of the range and then use all the primes <= the upper bound to sieve all the elements in that array to return an eventual list of divisors, but for large ranges this would be too memory intensive. Is there a simple way to just directly generate the divisors?
0
1
1,052
0
15,330,625
0
1
0
0
2
true
1
2013-03-11T03:01:00.000
2
3
0
NumPy array size issue
15,330,521
1.2
python,numpy,scipy
To answer your second question: tuples in Python can have any length n, i.e. you can have a 1-, 2-, 3-, ..., n-element tuple. Due to syntax, the way you represent a 1-element tuple is ('element',), where the trailing comma is mandatory. If you have ('element') then this is just simply the expression inside the parentheses. So (3) + 4 == 7, but (3,) + 4 raises a TypeError. Likewise ('element') == 'element'. To answer your first question: you're more than likely doing something wrong with passing the array around. There is no reason for the NumPy array to misrepresent itself without some type of mutation to the array.
I have a NumPy array that is of size (3, 3). When I print shape of the array within __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ). Why does this happen? Also, what does it mean for a tuple to have its last element unspecified? That is, shouldn't (3, ) be an invalid tuple in the first place?
0
1
1,032
0
15,330,600
0
1
0
0
2
false
1
2013-03-11T03:01:00.000
2
3
0
NumPy array size issue
15,330,521
0.132549
python,numpy,scipy
A tuple like this: (3, ) means that it's a tuple with a single element (a single dimension, in this case). That's the correct syntax - with a trailing , because if it looked like this: (3) then Python would interpret it as a number surrounded by parenthesis, not a tuple. It'd be useful to see the actual code, but I'm guessing that you're not passing the entire array, only a row (or a column) of it.
I have a NumPy array that is of size (3, 3). When I print shape of the array within __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ). Why does this happen? Also, what does it mean for a tuple to have its last element unspecified? That is, shouldn't (3, ) be an invalid tuple in the first place?
0
1
1,032
0
15,330,640
0
0
0
0
1
false
0
2013-03-11T03:08:00.000
0
2
0
Searching multiple words in many blocks of data
15,330,568
0
c++,python
Why not consider using multithreading to compute and store the results? Make an array with size equal to the number of blocks, have each thread count the matches in one block, and then have the thread write the result to the corresponding entry in the array. Later on you sort the array in decreasing order and you get the result.
I have to search about 100 words in blocks of data (20000 blocks approximately) and each block consists of about 20 words. The blocks should be returned in the decreasing order of the number of matches. The brute force technique is very cumbersome because you have to search for all the 100 words one by one and then combine the number of related searches in a complicated manner. Is there any other algorithm which allows to search multiple words at the same time and store the number of matching words? Thank you
0
1
77
0
15,370,151
0
0
0
0
3
false
6
2013-03-12T19:07:00.000
0
5
0
python - saving numpy array to a file (smallest size possible)
15,369,985
0
python,numpy,scipy
If you don't mind installing additional packages (for both python and c++), you can use BSON (Binary JSON).
Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program. What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program? Thank you for the help. I appreciate any guidance I can get. EDIT: There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in
0
1
7,265
0
15,370,191
0
0
0
0
3
false
6
2013-03-12T19:07:00.000
1
5
0
python - saving numpy array to a file (smallest size possible)
15,369,985
0.039979
python,numpy,scipy
numpy.ndarray.tofile and numpy.fromfile are useful for direct binary output/input from Python. std::ostream::write and std::istream::read are useful for binary output/input in C++. You should be careful about endianness if the data are transferred from one machine to another.
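A minimal Python-side sketch of the binary round trip (the dtype, shape and file name are assumptions that a C++ reader would have to share as a fixed contract):

    import numpy as np

    data = np.random.rand(100, 363).astype(np.float64)
    data.tofile("data.bin")                      # raw native-endian float64 stream

    # Reading it back in Python; a C++ reader would use istream::read with the
    # same dtype/shape contract.
    back = np.fromfile("data.bin", dtype=np.float64).reshape(100, 363)
    assert np.array_equal(data, back)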
Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program. What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program? Thank you for the help. I appreciate any guidance I can get. EDIT: There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in
0
1
7,265
0
19,226,920
0
0
0
0
3
false
6
2013-03-12T19:07:00.000
1
5
0
python - saving numpy array to a file (smallest size possible)
15,369,985
0.039979
python,numpy,scipy
Use an HDF5 file; they are really simple to use through h5py and you can set a compression flag. Note that HDF5 also has a C++ interface.
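A hedged h5py sketch with gzip compression (the file and dataset names are arbitrary):

    import numpy as np
    import h5py

    arr = np.zeros((1000, 2000))                 # mostly-zero data compresses well
    with h5py.File("data.h5", "w") as f:
        f.create_dataset("values", data=arr, compression="gzip")

    with h5py.File("data.h5", "r") as f:
        back = f["values"][:]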
Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program. What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program? Thank you for the help. I appreciate any guidance I can get. EDIT: There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in
0
1
7,265
0
15,425,560
0
0
0
0
1
false
0
2013-03-15T05:37:00.000
3
3
0
Pandas: More Efficient .map() function or method?
15,425,492
0.197375
python,pandas
There isn't, but if you want to only apply to unique values, just do that yourself. Get mySeries.unique(), then use your function to pre-calculate the mapped alternatives for those unique values and create a dictionary with the resulting mappings. Then use pandas map with the dictionary. This should be about as fast as you can expect.
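A rough sketch of that pre-computation; guess_country here is a stand-in for the real (slow) matcher from the question:

    import pandas as pd

    def guess_country(name):            # stand-in for the real (slow) matcher
        return name.strip().title()

    s = pd.Series(["Austral", "austral ", "Germany", "Austral"] * 3)
    lookup = {name: guess_country(name) for name in s.unique()}   # slow call once per unique value
    cleaned = s.map(lookup)             # cheap dictionary lookup per row
    print(cleaned.head())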
I am using a rather large dataset of ~37 million data points that are hierarchically indexed into three categories country, productcode, year. The country variable (which is the countryname) is rather messy data consisting of items such as: 'Austral' which represents 'Australia'. I have built a simple guess_country() that matches letters to words, and returns a best guess and confidence interval from a known list of country_names. Given the length of the data and the nature of hierarchy it is very inefficient to use .map() to the Series: country. [The guess_country function takes ~2ms / request] My question is: Is there a more efficient .map() which takes the Series and performs map on only unique values? (Given there are a LOT of repeated countrynames)
0
1
2,544
0
18,670,288
0
0
0
0
1
false
2
2013-03-17T16:29:00.000
1
1
0
Choosing the threshold values for hysteresis
15,463,191
0.197375
python,opencv,image-processing,computer-vision
Such image statistics as mean, std etc. are not sufficient to answer the question, and canny may not be the best approach; it all depends on characteristics of the image. To learn about those characteristics and approaches, you may google for a survey of image segmentation / edge detection methods. And this kind of problems often involve some pre-processing and post-processing steps.
I'm trying to choose the best parameters for the hysteresis phase in the canny function of OpenCV. I found some similar questions in stackoverflow but they didn't solve my problem. So far I've found that there are two main approaches: Compute the mean and standard deviation and set the thresholds as: lowT = mean - std, highT = mean + std. Compute the median and set the thresholds as: 0.6*median, 1.33*median. However, neither of these thresholds is the best fit for my data. Manually, I've found that lowT=100, highT=150 are the best values. The data (gray-scale image) has the following properties: median=202.0, mean=206.6283375, standard deviation=35.7482520742. Does anybody know what the problem is, or where I can find more information about this?
0
1
2,035
0
15,743,100
0
0
0
0
1
true
1
2013-03-18T07:06:00.000
2
2
0
Ensamble methods with scikit-learn
15,471,372
1.2
python,machine-learning,scikit-learn
Do you just want to do majority voting? This is not implemented afaik. But as I said, you can just average the predict_proba scores. Or you can use LabelBinarizer of the predictions and average those. That would implement a voting scheme. Even if you are not interested in the probabilities, averaging the predicted probabilities might be more robust than doing a simple voting. This is hard to tell without trying out, though.
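A rough sketch of soft voting by averaging predict_proba (the two base models and the toy data are just examples, not a prescribed setup):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    models = [LogisticRegression(), SVC(probability=True)]
    for m in models:
        m.fit(X, y)

    avg_proba = np.mean([m.predict_proba(X) for m in models], axis=0)
    combined_pred = avg_proba.argmax(axis=1)     # class with the highest average probability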
Is there any way to combine different classifiers into one in sklearn? I found the sklearn.ensemble package. It contains different models, like AdaBoost and RandomForest, but they use decision trees under the hood and I want to use different methods, like SVM and logistic regression. Is it possible with sklearn?
0
1
694
0
15,513,160
0
0
0
0
1
false
0
2013-03-19T23:12:00.000
0
2
0
How should I divide a large (~50Gb) dataset into training, test, and validation sets?
15,512,276
0
python,numpy,dataset
You could assign a unique sequential number to each row, then choose a random sample of those numbers, then serially extract each relevant row to a new file.
I have a large dataset. It's currently in the form of uncompressed numpy array files that were created with numpy.array.tofile(). Each file is approximately 100000 rows of 363 floats each. There are 192 files totalling 52 Gb. I'd like to separate a random fifth of this data into a test set, and a random fifth of that test set into a validation set. In addition, I can only train on 1 Gb at a time (limitation of GPU's onboard memory), so I need to randomize the order of all the data so that I don't introduce a bias by training on the data in the order it was collected. My main memory is 8 Gb in size. Can anyone recommend a method of randomizing and partitioning this huge dataset?
0
1
756
0
15,514,922
0
0
0
0
2
false
0
2013-03-20T03:15:00.000
1
2
0
Finding points in space closer than a certain value
15,514,641
0.099668
python,performance,algorithm,numpy,kdtree
The first thing that comes to my mind is: if we calculate the distance between each pair of atoms in the set it will take O(N^2) operations, which is very slow. What about introducing a static orthogonal grid with some cell size (for example, close to the distance you are interested in) and then determining the atoms belonging to each cell of the grid (this takes O(N) operations)? After this procedure you can reduce the time needed to search for neighbours.
In an python application I'm developing I have an array of 3D points (of size between 2 and 100000) and I have to find the points that are within a certain distance from each other (say between two values, like 0.1 and 0.2). I need this for a graphic application and this search should be very fast (~1/10 of a second for a sample of 10000 points) As a first experiment I tried to use the scipy.spatial.KDTree.query_pairs implementation, and with a sample of 5000 point it takes 5 second to return the indices. Do you know any approach that may work for this specific case? A bit more about the application: The points represents atom coordinates and the distance search is useful to determine the bonds between atoms. Bonds are not necessarily fixed but may change at each step, such as in the case of hydrogen bonds.
0
1
772
0
15,514,859
0
0
0
0
2
true
0
2013-03-20T03:15:00.000
5
2
0
Finding points in space closer than a certain value
15,514,641
1.2
python,performance,algorithm,numpy,kdtree
Great question! Here is my suggestion: Divide each coordinate by your "epsilon" value of 0.1/0.2/whatever and round the result to an integer. This creates a "quotient space" of points where distance no longer needs to be determined using the distance formula, but simply by comparing the integer coordinates of each point. If all coordinates are the same, then the original points were within approximately the square root of three times epsilon from each other (for example). This process is O(n) and should take 0.001 seconds or less. (Note: you would want to augment the original point with the three additional integers that result from this division and rounding, so that you don't lose the exact coordinates.) Sort the points in numeric order using dictionary-style rules and considering the three integers in the coordinates as letters in words. This process is O(n * log(n)) and should take certainly less than your 1/10th of a second requirement. Now you simply proceed through this sorted list and compare each point's integer coordinates with the previous and following points. If all coordinates match, then both of the matching points can be moved into your "keep" list of points, and all the others can be marked as "throw away." This is an O(n) process which should take very little time. The result will be a subset of all the original points, which contains only those points that could be possibly involved in any bond, with a bond being defined as approximately epsilon or less apart from some other point in your original set. This process is not mathematically exact, but I think it is definitely fast and suited for your purpose.
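For illustration, a hypothetical Python sketch of the integer-grid step (with the common refinement of also checking the 26 adjacent cells rather than only identical cells):

    from collections import defaultdict
    import numpy as np

    eps = 0.2
    points = np.random.rand(10000, 3)            # stand-in for atom coordinates

    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple((p // eps).astype(int))].append(i)

    def neighbours(cell):
        # the cell itself plus its 26 adjacent cells
        x, y, z = cell
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    yield (x + dx, y + dy, z + dz)

    # Candidate partners for atom 0 are only the atoms in nearby cells,
    # not all N points.
    cell_0 = tuple((points[0] // eps).astype(int))
    candidates = [j for c in neighbours(cell_0) for j in cells.get(c, []) if j != 0]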
In an python application I'm developing I have an array of 3D points (of size between 2 and 100000) and I have to find the points that are within a certain distance from each other (say between two values, like 0.1 and 0.2). I need this for a graphic application and this search should be very fast (~1/10 of a second for a sample of 10000 points) As a first experiment I tried to use the scipy.spatial.KDTree.query_pairs implementation, and with a sample of 5000 point it takes 5 second to return the indices. Do you know any approach that may work for this specific case? A bit more about the application: The points represents atom coordinates and the distance search is useful to determine the bonds between atoms. Bonds are not necessarily fixed but may change at each step, such as in the case of hydrogen bonds.
0
1
772
0
15,540,786
0
1
0
0
2
false
5
2013-03-21T06:10:00.000
0
2
0
Are there any downsides to using virtualenv for scientific python and machine learning?
15,540,640
0
python-2.7,virtualenv,scientific-computing
There's no performance overhead to using virtualenv. All it's doing is using different locations in the filesystem. The only "overhead" is the time it takes to set it up. You'd need to install each package in your virtualenv (numpy, pandas, etc.)
I have received several recommendations to use virtualenv to clean up my python modules. I am concerned because it seems too good to be true. Has anyone found downside related to performance or memory issues in working with multicore settings, starcluster, numpy, scikit-learn, pandas, or iPython notebook.
0
1
1,022
0
15,540,795
0
1
0
0
2
true
5
2013-03-21T06:10:00.000
3
2
0
Are there any downsides to using virtualenv for scientific python and machine learning?
15,540,640
1.2
python-2.7,virtualenv,scientific-computing
Virtualenv is the best and easiest way to keep some sort of order when it comes to dependencies. Python is really behind Ruby (bundler!) when it comes to dealing with installing and keeping track of modules. The best tool you have is virtualenv. So I suggest you create a virtualenv directory for each of your applications, put together a file where you list all the 'pip install' commands you need to build the environment and ensure that you have a clean repeatable process for creating this environment. I think that the nature of the application makes little difference. There should not be any performance issue since all that virtualenv does is to load libraries from a specific path rather than load them from the directory where they are saved by default. In any case (this may be completely irrelevant), but if performance is an issue, then perhaps you ought to be looking at a compiled language. Most likely though, any performance bottlenecks could be improved with better coding.
I have received several recommendations to use virtualenv to clean up my python modules. I am concerned because it seems too good to be true. Has anyone found downside related to performance or memory issues in working with multicore settings, starcluster, numpy, scikit-learn, pandas, or iPython notebook.
0
1
1,022
0
15,578,952
0
0
0
0
1
true
18
2013-03-22T16:35:00.000
24
1
0
How do you improve matplotlib image quality?
15,575,466
1.2
python,graph,matplotlib
You can save the images in a vector format so that they will be scalable without quality loss. Such formats are PDF and EPS. Just change the extension to .pdf or .eps and matplotlib will write the correct image format. Remember LaTeX likes EPS and PDFLaTeX likes PDF images. Although most modern LaTeX executables are PDFLaTeX in disguise and convert EPS files on the fly (same effect as if you included the epstopdf package in your preamble, which may not perform as well as you'd like). Alternatively, increase the DPI, a lot. These are the numbers you should keep in mind: 300dpi: plain paper prints 600dpi: professional paper prints. Most commercial office printers reach this in their output. 1200dpi: professional poster/brochure grade quality. I use these to adapt the quality of PNG figures in conjunction with figure's figsize option, which allows for correctly scaled text and graphics as you improve the quality through dpi.
I am using a python program to produce some data, plotting the data using matplotlib.pyplot and then displaying the figure in a latex file. I am currently saving the figure as a .png file but the image quality isn't great. I've tried changing the DPI in matplotlib.pyplot.figure(dpi=200) etc but this seems to make little difference. I've also tried using differnet image formats but they all look a little faded and not very sharp. Has anyone else had this problem? Any help would be much appreciated
0
1
18,876
0
15,643,468
0
0
0
1
2
false
2
2013-03-23T22:45:00.000
0
3
0
How to use NZ Loader (Netezza Loader) through Python Script?
15,592,980
0
python,netezza
You need to get the nzcli installed on the machine that you want to run nzload from - your sysadmin should be able to put it on your unix/linux application server. There's a detailed process for setting it all up, caching the passwords, etc - the sysadmin should be able to do that too. Once it is set up, you can create NZ control files to point to your data files and execute a load. The Netezza Data Loading guide has detailed instructions on how to do all of this (it can be obtained through IBM). You can do it through Aginity as well if you have the CREATE EXTERNAL TABLE privilege - you can do an INSERT INTO FROM EXTERNAL ... REMOTESOURCE ODBC to load the file from an ODBC connection.
I have a huge csv file which contains millions of records and I want to load it into a Netezza DB using a python script. I have tried a simple insert query but it is very, very slow. Can anyone point me to some example python script or give me some idea of how I can do this? Thank you
0
1
4,583
0
17,522,337
0
0
0
1
2
false
2
2013-03-23T22:45:00.000
1
3
0
How to use NZ Loader (Netezza Loader) through Python Script?
15,592,980
0.066568
python,netezza
You can use nz_load4 to load the data. This is a support utility located in /nz/support/contrib/bin and its syntax is the same as nzload. By default nz_load4 will load the data using 4 threads, and you can go up to 32 threads by using the -tread option. For more details use nz_load4 -h. This will create log files based on the number of threads.
I have a huge csv file which contains millions of records and I want to load it into a Netezza DB using a python script. I have tried a simple insert query but it is very, very slow. Can anyone point me to some example python script or give me some idea of how I can do this? Thank you
0
1
4,583
0
15,623,721
0
0
0
0
1
true
1
2013-03-25T16:24:00.000
1
1
0
Matplotlib fullscreen not working
15,619,825
1.2
python,numpy,matplotlib,scipy
SOLVED - My problem was that I was not up to the latest version of Matplotlib. I did the following steps to get fullscreen working in Matplotlib with Ubuntu 12.10. Uninstalled matplotlib with sudo apt-get remove python-matplotlib Installed build dependencies for matplotlib sudo apt-get build-dep python-matplotlib Installed matplotlib 1.2 with pip sudo pip install matplotlib Set matplotlib to use the GTK backend with matplotlib.rcParams['backend'] = 'GTK' Used keyboard shortcut 'f' when the plot was onscreen and it worked!
I am trying desperately to make a fullscreen plot in matplotlib on Ubuntu 12.10. I have tried everything I can find on the web. I need my plot to go completely fullscreen, not just maximized. Has anyone ever gotten this to work? If so, could you please share how? Thanks.
0
1
1,451
0
15,638,779
0
0
0
0
1
true
2
2013-03-26T12:11:00.000
2
2
0
Efficient way to do a rolling linear regression
15,636,796
1.2
python,matlab,numpy,linear-regression,rolling-computation
No, there is NO function that will do a rolling regression, returning all the statistics you wish, doing it efficiently. That does not mean you can't write such a function. To do so would mean multiple calls to a tool like conv or filter. This is how a Savitzky-Golay tool would work, which DOES do most of what you want. Make one call for each regression coefficient. Use of updating and downdating tools to reuse/modify the previous regression estimates will not be as efficient as the calls to conv, since you only need to factorize a linear system ONCE when you then do the work with conv. Anyway, there is no need to do an update, as long as the points are uniformly spaced in the series. This is why Savitzky-Golay works.
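In that spirit, a hedged numpy sketch of a rolling slope/intercept built from windowed sums via convolution (the window length w is arbitrary here):

    import numpy as np

    def rolling_regression(x, y, w):
        ones = np.ones(w)
        sx  = np.convolve(x, ones, "valid")        # rolling sums over each window
        sy  = np.convolve(y, ones, "valid")
        sxy = np.convolve(x * y, ones, "valid")
        sxx = np.convolve(x * x, ones, "valid")
        slope = (w * sxy - sx * sy) / (w * sxx - sx ** 2)
        intercept = (sy - slope * sx) / w
        return slope, intercept

    x = np.arange(100, dtype=float)
    y = 3.0 * x + 2.0 + np.random.randn(100)
    print(rolling_regression(x, y, 4)[0][:5])      # slopes should hover around 3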
I have two vectors x and y, and I want to compute a rolling regression for those, e.g. on (x(1:4),y(1:4)), (x(2:5),y(2:5)), ... Is there already a function for that? The best algorithm I have in mind for this is O(n), but applying separate linear regressions on every subarray would be O(n^2). I'm working with Matlab and Python (numpy).
0
1
4,368
0
15,638,712
0
1
0
0
1
false
4
2013-03-26T13:44:00.000
6
2
0
calculating mean and standard deviation of the data which does not fit in memory using python
15,638,612
1
python,statistics
Sounds like a math question. For the mean, you know that you can take the mean of a chunk of data, and then take the mean of the means. If the chunks aren't the same size, you'll have to take a weighted average. For the standard deviation, you'll have to calculate the variance first. I'd suggest doing this alongside the calculation of the mean. For variance, you have Var(X) = Avg(X^2) - Avg(X)^2. So compute the average of your data, and the average of your (data^2). Aggregate them as above, and then take the difference. Then the standard deviation is just the square root of the variance. Note that you could do the whole thing with iterators, which is probably the most efficient.
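A minimal sketch of that streaming computation; in practice each chunk would be read from disk, here they are just slices of a random array:

    import numpy as np

    n = 0
    total = 0.0
    total_sq = 0.0
    data = np.random.randn(10000)
    for chunk in np.array_split(data, 10):       # pretend each chunk is one file read
        n += chunk.size
        total += chunk.sum()
        total_sq += (chunk ** 2).sum()

    mean = total / n
    std = np.sqrt(total_sq / n - mean ** 2)      # Var(X) = E[X^2] - E[X]^2
    print(mean, std)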
I have a lot of data stored on disk in large arrays. I can't load everything into memory at once. How could one calculate the mean and the standard deviation?
0
1
6,151
0
15,791,843
0
0
0
0
1
false
2
2013-04-03T14:41:00.000
0
2
0
Grouping data by frequency
15,790,467
0
python,group-by,timestamp,time-series
There are several ways to approach this, but you're effectively "binning" on the times. I would approach it in a few steps: You don't want to parse the time yourself with string manipulation, it will blow up in your face; trust me! Parse out the timestamp into a datetime object (google should give you a pretty good answer). Once you have that, you can do lots of fun stuff like compare two times. Now that you have the datetime objects, you can start to "bin" them. I'm going to assume the records are in order. Start with the first record's time of "03/04/2013 13:37:20" and create a new datetime object at "03/04/2013 13:37:00" [hint: set seconds=0 on the datetime object you read in]. This is the start of your first "bin". Now add one minute to your start datetime [hint: endDT = startDT + timedelta(seconds=60)], that's the end of your first bin. Now start going through your records checking if the record is less than your endDT, if it is, add it to a list for that bin. If the record is greater than your endDT, you're in the next bin. To start the new bin, add one minute to your endDT and create a new list to hold those items and keep chugging along in your loop. Once you go through the loop, you can run max/min/avg on the lists. Ideally, you'll store the lists in a dictionary that looks like {datetimeObject : [34, 23, 45, 23]}. It'll make printing and sorting easy. This isn't the most efficient/flexible/cool way to do it, but I think it's probably the most helpful to start with.
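A rough sketch of the binning idea, here floored to 5-minute buckets; the timestamp format is assumed to be day/month/year as in the question's sample:

    from collections import defaultdict
    from datetime import datetime

    rows = [("99", "03/04/2013 13:37:20"), ("82", "03/04/2013 13:42:23")]  # sample data
    bins = defaultdict(list)
    for level, ts in rows:
        t = datetime.strptime(ts, "%d/%m/%Y %H:%M:%S")
        bucket = t.replace(minute=t.minute - t.minute % 5, second=0)       # floor to 5 minutes
        bins[bucket].append(int(level))

    for start, values in sorted(bins.items()):
        print(start, sum(values), sum(values) / len(values), max(values), min(values))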
I did a code which generates random numbers below and i save them in a csv which look like below, I am trying to play around and learn the group by function. I would like for instance do the sum or average of those group by timestamp. I am new in Python, i cannot find anywhere to start though. Ulitmately i would like to do the same but for 1min or 5min (every 5min starting from 00:00:00, not enough data in my example below but that would do something like 13:35:00 to 13:40:00 and the next one 13:40:00 included to 13:45:00 excluded, etc), i think i could figure out the 1min in extracting the minute part from the timestamp but the 5min seems complex. Not asking for a copy paste of a code, but i have no idea where to start to be honest. Level Timestamp 99 03/04/2013 13:37:20 98 03/04/2013 13:37:20 98 03/04/2013 13:37:20 99 03/04/2013 13:37:20 105 03/04/2013 13:37:20 104 03/04/2013 13:37:20 102 03/04/2013 13:37:21 102 03/04/2013 13:37:21 103 03/04/2013 13:37:22 82 03/04/2013 13:37:23 83 03/04/2013 13:37:23 82 03/04/2013 13:37:23 83 03/04/2013 13:37:23 54 03/04/2013 13:37:24 55 03/04/2013 13:37:24 54 03/04/2013 13:37:24 55 03/04/2013 13:37:24 56 03/04/2013 13:37:25 57 03/04/2013 13:37:25
0
1
584
0
16,271,420
0
0
0
0
1
false
0
2013-04-03T15:52:00.000
0
1
0
openstreet maps: dynamically retrieve shp files in python
15,792,073
0
python,openstreetmap,shapefile,mapnik
Dynamic retrieval of data from shapefiles is not suggested for large applications. The best practice is to dump the shapefile into a database like postgres (shp2pgsql), generate the map using mapnik, and tile it using TileCache.
I manage to install mapnik for python and am able to render maps using a provided shp file. Is there a possibility to retrieve dynamically the shape file for the map I want to render (given coordinates) from python? or do I need to download the whole OSM files, and import them into my own database? thanks
0
1
286
0
15,847,067
0
1
0
0
2
false
4
2013-04-05T11:12:00.000
0
3
0
Cannot seem to install pandas for python 2.7 on windows
15,832,445
0
python,windows,installation,pandas
After you have installed python check to see if the appropriate path variables are set by typing the following at the command line: echo %PATH% if you do not see something like: C:\Python27;C:\Python27\Scripts on the output (probably with lots of other paths) then type this: set PATH=%PATH%;C:\Python27\;C:\Python27\Scripts Then try installing the 32-bit pandas executable.
Sorry if this has been answered somewhere already, I couldn't find the answer. I have installed python 2.7.3 onto a windows 7 computer. I then downloaded the pandas-0.10.1.win-amd64-py2.7.exe and tried to install it. I have gotten past the first window, but then it states "Python 2.7 is required, which was not found in the registry". I then get the option to put the path in to find python, but I cannot get it to work. How would I fix this? Sorry for the silly question. Thanks. ~Kututo
0
1
3,609
0
25,210,272
0
1
0
0
2
false
4
2013-04-05T11:12:00.000
2
3
0
Cannot seem to install pandas for python 2.7 on windows
15,832,445
0.132549
python,windows,installation,pandas
I faced the same issue. Here is what worked: I changed the PATH to include C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts\; uninstalled the 64-bit numpy and pandas; installed the 32-bit win 2.7 numpy and pandas; I also had to install dateutil and pytz. pandas and numpy now import and work fine.
Sorry if this has been answered somewhere already, I couldn't find the answer. I have installed python 2.7.3 onto a windows 7 computer. I then downloaded the pandas-0.10.1.win-amd64-py2.7.exe and tried to install it. I have gotten past the first window, but then it states "Python 2.7 is required, which was not found in the registry". I then get the option to put the path in to find python, but I cannot get it to work. How would I fix this? Sorry for the silly question. Thanks. ~Kututo
0
1
3,609
0
16,186,805
0
0
0
0
1
true
19
2013-04-05T15:29:00.000
8
2
0
Making pyplot.hist() first and last bins include outliers
15,837,810
1.2
python,numpy,matplotlib
No. Looking at matplotlib.axes.Axes.hist and the direct use of numpy.histogram I'm fairly confident in saying that there is no smarter solution than using clip (other than extending the bins that you histogram with). I'd encourage you to look at the source of matplotlib.axes.Axes.hist (it's just Python code, though admittedly hist is slightly more complex than most of the Axes methods) - it is the best way to verify this kind of question.
pyplot.hist() documentation specifies that when setting a range for a histogram "lower and upper outliers are ignored". Is it possible to make the first and last bins of a histogram include all outliers without changing the width of the bin? For example, let's say I want to look at the range 0-3 with 3 bins: 0-1, 1-2, 2-3 (let's ignore cases of exact equality for simplicity). I would like the first bin to include all values from minus infinity to 1, and the last bin to include all values from 2 to infinity. However, if I explicitly set these bins to span that range, they will be very wide. I would like them to have the same width. The behavior I am looking for is like the behavior of hist() in Matlab. Obviously I can numpy.clip() the data and plot that, which will give me what I want. But I am interested if there is a builtin solution for this.
0
1
7,794
0
15,887,123
0
0
0
0
3
false
3
2013-04-07T21:34:00.000
0
3
0
Feature Selection in dataset containing both string and numerical values?
15,868,108
0
python,machine-learning,weka,rapidminer,feature-selection
Feature selection algorithms assign weights to different features based on their impact on the classification. To the best of my knowledge, the feature types do not make a difference when computing the weights. I suggest converting string features to numerical ones based on their ASCII codes or any other technique. Then you can use the existing feature selection algorithms in RapidMiner.
Hi I have big dataset which has both strings and numerical values ex. User name (str) , handset(str), number of requests(int), number of downloads(int) ,....... I have around 200 such columns. Is there a way/algorithm which can handle both strings and integers during feature selection ? Or how should I approach this issue. thanks
0
1
1,683
0
17,920,216
0
0
0
0
3
false
3
2013-04-07T21:34:00.000
0
3
0
Feature Selection in dataset containing both string and numerical values?
15,868,108
0
python,machine-learning,weka,rapidminer,feature-selection
I've used Weka Feature Selection and although the attribute evaluator methods I've tried can't handle string attributes, you can temporarily remove them in Preprocess > Filter > Unsupervised > Attribute > RemoveType, then perform the feature selection and, later, include the strings again to do the classification.
Hi I have big dataset which has both strings and numerical values ex. User name (str) , handset(str), number of requests(int), number of downloads(int) ,....... I have around 200 such columns. Is there a way/algorithm which can handle both strings and integers during feature selection ? Or how should I approach this issue. thanks
0
1
1,683
0
16,003,658
0
0
0
0
3
false
3
2013-04-07T21:34:00.000
0
3
0
Feature Selection in dataset containing both string and numerical values?
15,868,108
0
python,machine-learning,weka,rapidminer,feature-selection
There are a set of operators you could use in the Attribute Weighting group within RapidMiner. For example, Weight By Correlation or Weight By Information Gain. These will assess how much weight to give an attribute based on its relevance to the label (in this case the download flag). The resulting weights can then be used with the Select by Weights operator to eliminate those that are not needed. This approach considers attributes by themselves. You could also build a classification model and use the forward selection operators to add more and more attributes and monitor performance. This approach will consider the relationships between attributes.
Hi I have big dataset which has both strings and numerical values ex. User name (str) , handset(str), number of requests(int), number of downloads(int) ,....... I have around 200 such columns. Is there a way/algorithm which can handle both strings and integers during feature selection ? Or how should I approach this issue. thanks
0
1
1,683
0
15,892,422
0
0
0
0
1
false
0
2013-04-08T01:28:00.000
1
2
0
Random Forest - Predict using less estimators
15,869,919
0.099668
python,limit,scikit-learn,prediction,random-forest
Once trained, you can access these via the "estimators_" attribute of the random forest object.
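A hedged sketch of both ideas — averaging only the first k trees, and collecting every tree's individual output (toy data, arbitrary k):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    k = 10
    partial_pred = np.mean([tree.predict(X) for tree in rf.estimators_[:k]], axis=0)
    per_tree = [tree.predict(X[:1]) for tree in rf.estimators_]   # every single tree's output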
I've trained a Random Forest (regressor in this case) model using scikit-learn (Python), and I would like to plot the error rate on a validation set based on the number of estimators used. In other words, is there a way to predict using only a portion of the estimators in your RandomForestRegressor? Using predict(X) will give you the predictions based on the mean of every single tree's results. Is there a way to limit the usage of the trees? Or, eventually, to get each single output for each single tree in the forest?
0
1
1,415
0
52,104,659
0
0
0
0
2
false
340
2013-04-08T12:41:00.000
-2
5
0
What is the difference between ndarray and array in numpy?
15,879,315
-0.07983
python,arrays,numpy,multidimensional-array,numpy-ndarray
I think with np.array() you can only create C-ordered arrays even though you mention the order; when you check using np.isfortran() it says False. But with np.ndarray(), when you specify the order, it creates the array based on the order provided.
What is the difference between ndarray and array in Numpy? And where can I find the implementations in the numpy source code?
0
1
148,479
0
15,879,428
0
0
0
0
2
false
340
2013-04-08T12:41:00.000
66
5
0
What is the difference between ndarray and array in numpy?
15,879,315
1
python,arrays,numpy,multidimensional-array,numpy-ndarray
numpy.array is a function that returns a numpy.ndarray. There is no object type numpy.array.
What is the difference between ndarray and array in Numpy? And where can I find the implementations in the numpy source code?
0
1
148,479
0
15,904,277
0
1
0
0
1
true
75
2013-04-09T14:02:00.000
135
2
0
matplotlib bar graph black - how do I remove bar borders
15,904,042
1.2
python,graph,matplotlib,border
Set the edgecolor to "none": bar(..., edgecolor = "none")
I'm using pyplot.bar but I'm plotting so many points that the color of the bars is always black. This is because the borders of the bars are black and there are so many of them that they are all squished together so that all you see is the borders (black). Is there a way to remove the bar borders so that I can see the intended color?
0
1
72,444
0
15,957,090
0
0
0
0
1
false
3
2013-04-09T15:40:00.000
1
2
0
how does ImageFilter in PIL normalize the pixel values between 0 and 255 after filtering with Kernel or mask
15,906,368
0.099668
image,image-processing,python-2.7,python-imaging-library
The answer above by Mark states his theory regarding what happens when a zero-summing kernel is used with the scale argument 0, None, or not passed/mentioned. Now, talking about how PIL handles calculated pixel values after applying the kernel, scale and offset, which are not in the [0, 255] range: my theory about how it normalizes the calculated pixel value is that it simply does the following: any resulting value <= 0 becomes 0 and anything > 255 becomes 255.
How does ImageFilter in PIL normalize the pixel values (not the kernel) between 0 and 255 after filtering with a kernel or mask? (Especially a zero-summing kernel like ( -1,-1,-1,0,0,0,1,1,1 )) my code was like: import Image import ImageFilter Horiz = ImageFilter.Kernel((3, 3), (-1,-2,-1,0,0,0,1,2,1), scale=None, offset=0) #sobel mask im_fltd = myimage.filter(Horiz)
0
1
1,691
0
15,915,785
0
1
0
0
1
false
0
2013-04-10T01:08:00.000
0
2
0
Creating many arrays at once
15,915,255
0
python,arrays,os.walk
Python's lists are dynamic, you can change their length on-the-fly, so just store them in a list. Or if you wanted to reference them by name instead of number, use a dictionary, whose size can also change on the fly.
I am currently working with hundreds of files, all of which I want to read in and view as a numpy array. Right now I am using os.walk to pull all the files from a directory. I have a for loop that goes through the directory and will then create the array, but it is not stored anywhere. Is there a way to create arrays "on the go" or to somehow allocate a certain amount of memory for empty arrays?
0
1
57
0
15,925,696
0
0
0
0
1
false
1
2013-04-10T11:03:00.000
0
1
0
using python for opencv
15,924,060
0
python,opencv
You should have a look at Boost.Python. It might help you bind the C++ functions you need to Python.
There are many functions in OpenCV 2.4 that are not available from Python. Please advise me how to convert the C++ functions so that I can use them in Python 2.7. Thanks in advance.
0
1
61
0
15,949,294
0
0
0
0
1
true
5
2013-04-10T23:02:00.000
5
1
0
How to Extend Scipy Sparse Matrix returned by sklearn TfIdfVectorizer to hold more features
15,938,025
1.2
python-2.7,scipy,sparse-matrix,scikit-learn
I think the easiest would be to create a new sparse matrix with your custom features and then use scipy.sparse.hstack to stack the features. You might also find the "FeatureUnion" from the pipeline module helpful.
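A minimal sketch of stacking extra columns onto the tf-idf matrix without densifying anything (the "domain features" here are made-up numbers):

    import numpy as np
    import scipy.sparse as sp
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["first document", "second longer document", "third one"]
    X_tfidf = TfidfVectorizer().fit_transform(docs)             # (3, n_terms) CSR

    domain = np.array([[1.0], [0.0], [2.5]])                    # one extra feature per doc
    X_combined = sp.hstack([X_tfidf, sp.csr_matrix(domain)]).tocsr()
    print(X_combined.shape)                                     # (3, n_terms + 1)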
I am working on a text classification problem using scikit-learn classifiers and text feature extractor, particularly TfidfVectorizer class. The problem is that I have two kinds of features, the first are captured by the n-grams obtained from TfidfVectorizer and the other are domain specific features that I extract from each document. I need to combine both features in a single feature vector for each document; to do this I need to update the scipy sparse matrix returned by TfidfVectorizer by adding a new dimension in each row holding the domain feature for this document. However, I can't find a neat way to do this, by neat I mean not converting the sparse matrix into a dense one since simply it won't fit in memory. Probably I am missing a feature in scikit-learn or something, since I am new to both scipy and scikit-learn.
0
1
692