GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 17,457,967 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2013-07-03T20:21:00.000 | 1 | 3 | 0 | Converting Pandas Dataframe types | 17,457,418 | 0.066568 | python,numpy,pandas | df = df.convert_objects(convert_numeric=True) will work in most cases.
I should note that this copies the data. It would be preferable to get it to a numeric type on the initial read. If you post your code and a small example, someone might be able to help you with that. | I have a pandas dataFrame created through a mysql call which returns the data as object type.
The data is mostly numeric, with some 'na' values.
How can I cast the type of the dataFrame so the numeric values are appropriately typed (floats) and the 'na' values are represented as numpy NaN values? | 0 | 1 | 4,908 |
0 | 18,509,671 | 0 | 0 | 1 | 0 | 1 | false | 3 | 2013-07-03T20:24:00.000 | 0 | 3 | 0 | Large training and testing data in libsvm | 17,457,460 | 0 | python,c++,svm,libsvm | easy.py is a script for training and evaluating a classifier. It does a meta-training for the SVM parameters with grid.py. In grid.py there is a parameter "nr_local_worker" which defines the number of threads. You might wish to increase it (check processor load). | I'm using Libsvm in a 5x2 cross validation to classify a very huge amount of data, that is, I have 47k samples for training and 47k samples for testing in 10 different configurations.
I usually use the Libsvm's script easy.py to classify the data, but it's taking so long, I've been waiting for results for more than 3 hours and nothing, and I still have to repeat this procedure more 9 times!
does anybody know how to use the libsvm faster with a very huge amount of data? does the C++ Libsvm functions work faster than the python functions? | 0 | 1 | 3,432 |
0 | 17,471,244 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-07-04T07:28:00.000 | 1 | 1 | 0 | An issue with generating a random graph with given degree sequence: time consuming or some error? | 17,464,274 | 1.2 | python,networkx | That algorithm's run time could get very long for some degree sequences. And it is not guaranteed to produce a graph. Depending on your end use you might consider using the configuration_model(). Though it doesn't sample graphs uniformly at random and might produce parallel edges and self loops it will always finish. | I have an empirical network with 585 nodes and 5,441 edges. This is a scale-free network with max node degree of 179 and min node degree of 1. I am trying to create an equivalent random graph (using random_degree_sequence_graph from networkx), but my python just keeps running. I did similar exercise fro the network with 100 nodes - it took just a second to create a random graph. But with 585 nodes it takes forever. The result of is_valid_degree_sequence command is True. Is it possible that python goes into some infinite loop with my degree sequence or does it actually take a very long time (more than half hour) to create a graph of such size? Please let me know if anyone has had any experience with this. I am using Python 2.7.4.
Thanks in advance! | 0 | 1 | 532 |
0 | 17,488,424 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-07-05T10:10:00.000 | 0 | 1 | 1 | Installing python (same version) on accident twice | 17,486,322 | 1.2 | python,scipy,reinstall | How about sudo port uninstall python27? | I accidentally installed python 2.7 again on my mac (mountain lion), when trying to install scipy using macports:
sudo port install py27-scipy
---> Computing dependencies for py27-scipy
---> Dependencies to be installed: SuiteSparse gcc47 cctools cctools-headers llvm-3.3 libffi llvm_select cloog gmp isl gcc_select
ld64 libiconv libmpc mpfr libstdcxx ppl glpk zlib py27-nose
nosetests_select py27-setuptools python27 bzip2 db46 db_select
gettext expat ncurses libedit openssl python_select sqlite3 py27-numpy
fftw-3 swig-python swig pcre
I am still using my original install of python (and matplotlib and numpy etc), without scipy. How do I remove this new version? It is taking up ~2Gb space. | 0 | 1 | 609 |
0 | 42,853,468 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2013-07-08T11:50:00.000 | 0 | 2 | 0 | How to pass Unicode title to matplotlib? | 17,525,882 | 0 | python,matplotlib,unicode,python-2.x | In Python3, there is no need to worry about all that troublesome UTF-8 problems.
One note: you will need to set a Unicode font before plotting.
matplotlib.rc('font', family='Arial') | Can't get the titles right in matplotlib:
'technologieën in °C' gives: technologieÃn in ÃC
Possible solutions already tried:
u'technologieën in °C' doesn't work
neither does: # -*- coding: utf-8 -*- at the beginning of the code-file.
Any solutions? | 0 | 1 | 6,904 |
0 | 28,424,354 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-07-08T21:36:00.000 | 1 | 2 | 0 | How can i reduce memory usage of Scikit-Learn Vectorizers? | 17,536,394 | 0.099668 | python,numpy,machine-learning,scipy,scikit-learn | One way to overcome the inability of HashingVectorizer to account for IDF is to index your data into elasticsearch or lucene and retrieve termvectors from there using which you can calculate Tf-IDF. | TFIDFVectorizer takes so much memory ,vectorizing 470 MB of 100k documents takes over 6 GB , if we go 21 million documents it will not fit 60 GB of RAM we have.
So we go for HashingVectorizer, but we still need to know how to distribute the hashing vectorizer. Fit and partial fit do nothing, so how do we work with a huge corpus? | 0 | 1 | 4,523 |
0 | 17,538,468 | 0 | 1 | 0 | 0 | 2 | false | 11 | 2013-07-08T22:03:00.000 | 1 | 5 | 0 | Scala equivalent of Python help() | 17,536,758 | 0.039979 | python,scala,equivalent | Similarly, IDEA has its "Quick Documentation Look-up" command, which works for Scala as well as Java (-Doc) JARs and source-code documentation comments. | I'm proficient in Python but a noob at Scala. I'm about to write some dirty experiment code in Scala, and came across the thought that it would be really handy if Scala had a function like help() in Python. For example, if I wanted to see the built-in methods for a Scala Array I might want to type something like help(Array), just like I would type help(list) in Python. Does such a thing exist for Scala? | 0 | 1 | 2,972 |
0 | 55,942,086 | 0 | 1 | 0 | 0 | 2 | false | 11 | 2013-07-08T22:03:00.000 | 0 | 5 | 0 | Scala equivalent of Python help() | 17,536,758 | 0 | python,scala,equivalent | In Scala, you can try using the below (similar to the one we have in Python):
help(RDD1) in python will give you the rdd1 description with full details.
Scala > RDD1.[tab]
On hitting tab you will find the list of options available to the specified RDD1, similar option you find in eclipse . | I'm proficient in Python but a noob at Scala. I'm about to write some dirty experiment code in Scala, and came across the thought that it would be really handy if Scala had a function like help() in Python. For example, if I wanted to see the built-in methods for a Scala Array I might want to type something like help(Array), just like I would type help(list) in Python. Does such a thing exist for Scala? | 0 | 1 | 2,972 |
0 | 17,581,063 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2013-07-10T15:19:00.000 | 2 | 1 | 0 | pycuda.gpuarray.dot() very slow at first call | 17,574,547 | 0.379949 | python,cuda,pycuda,mailing-list | One reason would be that Pycuda is compiling the kernel before uploading it. As far as I remember, though, that should happen only the very first time it executes it.
One solution could be to "warm up" the kernel by executing it once and then start the profiling procedure. | I have a working conjungate gradient method implementation in pycuda, that I want to optimize. It uses a self written matrix-vector-multiplication and the pycuda-native gpuarray.dot and gpuarray.mul_add functions
Profiling the program with kernprof.py/line_profiler returned most time (>60%) till convergence spend in one gpuarray.dot() call. (About .2 seconds)
All following calls of gpuarray.dot() take about 7 microseconds. All calls have the same type of input vectors (size: 400 doubles)
Is there any reason why? I mean in the end it's just a constant, but it is making the profiling difficult.
I wanted to ask the question at the pycuda mailing list. However I wasn't able to subscribe with an @gmail.com adress. If anyone has either an explanation for the strange .dot() behavior or my inability to subscribe to that mailing list please give me a hint ;) | 0 | 1 | 748 |
0 | 17,587,872 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2013-07-11T06:34:00.000 | 0 | 3 | 0 | Python - Combing data from different .csv files. into one | 17,586,573 | 0 | python,csv,python-2.7 | you can use os.listdir() to get list of files in directory | I need some help from python programmers to solve the issue I'm facing in processing data:-
I have .csv files placed in a directory structure like this:-
-MainDirectory
Sub directory 1
sub directory 1A
fil.csv
Sub directory 2
sub directory 2A
file.csv
sub directory 3
sub directory 3A
file.csv
Instead of going into each directory and accessing the .csv files, I want to run a script that can combine the data of the all the sub directories.
Each file has the same type of header. And I need to maintain 1 big .csv file with one header only and all the .csv file data can be appended one after the other.
I have the python script that can combine all the files in a single file but only when those files are placed in one folder.
Can you help to provide a script that can handle the above directory structure? | 0 | 1 | 1,781 |
0 | 17,659,174 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-07-15T16:12:00.000 | 0 | 1 | 0 | How to read out point position of a given plot in matplotlib?(without using mouse) | 17,658,836 | 1.2 | python,matplotlib | Just access the input data that you used to generate the plot. Either this is a mathematical function which you can just evaluate for a given x or this is a two-dimensional data set which you can search for any given x. In the latter case, if x is not contained in the data set, you might want to interpolate or throw an error. | The case is, I have a 2D array and can convert it to a plot. How can I read the y value of a point with given x? | 0 | 1 | 37 |
0 | 17,667,109 | 0 | 0 | 0 | 0 | 1 | true | 14 | 2013-07-16T02:26:00.000 | 11 | 3 | 0 | Finding N closest numbers | 17,667,022 | 1.2 | python,algorithm | collect all the values to create a single ordered sequence, with each element tagged with the array it came from:
0(0), 2(1), 3(2), 4(0), 6(1), ... 12(3), 13(2)
then create a window across them, starting with the first (0(0)) and ending it at the first position that makes the window span all the arrays (0(0) -> 7(3))
then roll this window by incrementing the start of the window by one, and increment the end of the window until you again have a window that covers all elements.
then roll it again: (2(1), 3(2), 4(0), ... 7(3)), and so forth.
at each step keep track of the difference between the largest and the smallest. Eventually you find the one with the smallest window. I have the feeling that in the worst case this is O(n^2) but that's just a guess. | Have a 2-dimensional array, like -
a[0] = [ 0 , 4 , 9 ]
a[1] = [ 2 , 6 , 11 ]
a[2] = [ 3 , 8 , 13 ]
a[3] = [ 7 , 12 ]
Need to select one element from each of the sub-array in a way that the resultant set of numbers are closest, that is the difference between the highest number and lowest number in the set is minimum.
The answer to the above will be = [ 9 , 6 , 8 , 7 ].
Have made an algorithm, but don't feel its a good one.
What would be a efficient algorithm to do this in terms of time and space complexity?
EDIT - My Algorithm (in python)-
INPUT - Dictionary : table{}
OUTPUT - Dictionary : low_table{}
#
N = len(table)
for word_key in table:
    for init in table[word_key]:
        temp_table = copy.copy(table)
        del temp_table[word_key]
        per_init = copy.copy(init)
        low_table[init]=[]
        for ite in range(N-1):
            min_val = 9999
            for i in temp_table:
                for nums in temp_table[i]:
                    if min_val > abs(init-nums):
                        min_val = abs(init-nums)
                        del_num = i
                        next_num = nums
            low_table[per_init].append(next_num)
            init = (init+next_num)/2
            del temp_table[del_num]
lowest_val = 99
lowest_set = []
for x in low_table:
    low_table[x].append(x)
    low_table[x].sort()
    mini = low_table[x][-1]-low_table[x][0]
    if mini < lowest_val:
        lowest_val = mini
        lowest_set = low_table[x]
print lowest_set | 0 | 1 | 667 |
0 | 17,716,804 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2013-07-18T07:04:00.000 | 3 | 1 | 0 | Random number indexing past inputs Python | 17,716,737 | 1.2 | python,random | Create 2 variables that contain the lowest and highest possible values. Whenever you get a response, store it in the appropriate variable. Make the RNG pick a value between the two. | I'm having a little bug in my program where you give the computer a random number to try to guess, and a range to guess between and the amount of guesses it has. After the computer generates a random number, it asks you if it is your number, if not, it asks you if it is higher or lower than it. My problem is, if your number is 50, and it generates 53, you would say "l" or "lower" or something that starts with "l". Then it would generate 12 or something, you would say "higher" or something, and it might give you 72. How could I make it so that it remembers to be lower than 53? | 0 | 1 | 53 |
0 | 19,030,208 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-07-19T07:21:00.000 | 0 | 2 | 0 | MCMC implementation in Python | 17,740,281 | 0 | python,c,statistics,mcmc | The simplest way to explore the distribution of A is to generate samples based on the samples of B, C, and D, using your rule. That is, for each iteration, draw one value of B, C, and D from their respective sample sets, independently, with repetition, and calculate A = B*C/D.
If the sample sets for B, C, and D have the same size, I recommend generating a sample for A of the same size. Much fewer samples would result in loss of information, much more samples would not gain much. And yes, even though many samples will not be drawn, I still recommend drawing with repetition. | I have the following problem:
There are 12 samples around 20000 elements each from unknown distributions (sometimes the distributions are not uni-modal so it's hard to automatically estimate an analytical family of the distributions).
Based on these distributions I compute different quantities. How can I explore the distribution of the target quantity in the most efficient (and simplest) way?
To be absolutely clear, here's a simple example: quantity A is equal to B*C/D
B,C,D are distributed according to unknown laws but I have samples from their distributions and based on these samples I want to compute the distribution of A.
So in fact what I want is a tool to explore the distribution of the target quantity based on samples of the variables.
I know that there are MCMC algorithms to do that. But does anybody know a good implementation of an MCMC sampler in Python or C? Or are there any other ways to solve the problem?
Maxim | 0 | 1 | 838 |
0 | 17,756,813 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-07-19T23:01:00.000 | 2 | 1 | 0 | Python Process using only 1.6 GB RAM Ubuntu 32 bit in Numpy Array | 17,756,791 | 0.379949 | python,numpy,ubuntu-12.04,32-bit | A 32-bit OS can only address up to around 4gb of ram, while a 64-bit OS can take advantage of a lot more ram (theoretically 16.8 million terabytes). Since your OS is 32-bit, your OS can only take advantage of 4gb, so your other 4gb isn't used.
The other 64-bit machine doesn't have the 4gb ram limit, so it can take advantage of all of its installed ram.
These limits come from the fact that a 32-bit machine can only store memory addresses (pointers) of 32 bits, so there are 2^32 different possible memory locations that the computer can identify. Similarly, a 64-bit machine can identify 2^64 different possible memory locations, so it can address 2^64 different bytes. | I have a program for learning Artificial Neural Network and it takes a 2-d numpy array as training data. The size of the data array I want to use is around 300,000 x 400 floats. I can't use chunking here because the library I am using (DeepLearningTutorials) takes a single numpy array as training data.
The code shows MemoryError when the RAM usage is around 1.6Gb by this process(I checked it in system monitor) but I have a total RAM of 8GB. Also, the system is Ubuntu-12.04 32-bit.
I checked the answers of other similar questions, but somewhere it says that there is nothing like allocating memory to your python program, and somewhere the answer is not clear as to how to increase the process memory.
One interesting thing is I am running the same code on a different machine and it can take a numpy array of almost 1,500,000 x 400 floats without any problem. The basic configurations are similar except that the other machine is 64-bit and this one is 32-bit.
Could someone please give some theoretical answer as to why there is so much difference in this or is this the only reason for my problem? | 0 | 1 | 452 |
0 | 54,003,707 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2013-07-20T17:12:00.000 | 5 | 4 | 0 | pandas dataframe group year index by decade | 17,764,619 | 0.244919 | python,pandas | if your Data Frame has Headers say : DataFrame ['Population','Salary','vehicle count']
Make your index as Year: DataFrame=DataFrame.set_index('Year')
Use the below code to resample the data into decades of 10 years; it also gives you the sum of all other columns within that decade:
dataframe=dataframe.resample('10AS').sum() | suppose I have a dataframe with index as monthly timestep, I know I can use dataframe.groupby(lambda x:x.year) to group monthly data into yearly and apply other operations. Is there some way I could quickly group them, let's say by decade?
thanks for any hints. | 0 | 1 | 31,949 |
0 | 17,768,482 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-07-20T23:55:00.000 | 0 | 1 | 0 | Co-clustering algorithm in python | 17,767,807 | 0 | python,machine-learning,scipy,scikit-learn,unsupervised-learning | The fastest clustering algorithm I know of does this:
Repeat O(log N) times:
C = M x X
Where X is N x dim and M is clus x N...
If your clusters are not "flat"...
Perform f(X) = ... This just projects X onto some "flat" space... | Are there implementations available for any co-clustering algorithms in python? The scikit-learn package has k-means and hierarchical clustering but seems to be missing this class of clustering. | 0 | 1 | 720 |
0 | 20,057,520 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2013-07-22T03:04:00.000 | 3 | 3 | 0 | Un-normalized Gaussian curve on histogram | 17,779,316 | 0.197375 | python,matplotlib,histogram,gaussian | Another way of doing this is to find the normalized fit and multiply the normal distribution with (bin_width*total length of data)
this will un-normalize your normal distribution | I have data which is of the gaussian form when plotted as histogram. I want to plot a gaussian curve on top of the histogram to see how good the data is. I am using pyplot from matplotlib. Also I do NOT want to normalize the histogram. I can do the normed fit, but I am looking for an Un-normalized fit. Does anyone here know how to do it?
Thanks!
Abhinav Kumar | 0 | 1 | 12,313 |
0 | 17,786,438 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2013-07-22T09:01:00.000 | 3 | 1 | 0 | Constraints on fitting parameters with Python and ODRPACK | 17,783,481 | 1.2 | python,scipy,curve-fitting | I'm afraid that the older FORTRAN-77 version of ODRPACK wrapped by scipy.odr does not incorporate constraints. ODRPACK95 is a later extension of the original ODRPACK library that predates the scipy.odr wrappers, and it is unclear that we could legally include it in scipy. There is no explicit licensing information for ODRPACK95, only the general ACM TOMS non-commercial license. | I'm using the ODRPACK library in Python to fit some 1d data. It works quite well, but I have one question: is there any possibility to make constraints on the fitting parameters? For example if I have a model y = a * x + b and for physical reasons parameter a can by only in range (-1, 1). I've found that such constraints can be done in original Fortran implementation of the ODRPACK95 library, but I can't find how to do that in Python.
Of course, I can implement my functions such that they will return very big values, if the fitting parameters are out of bounds and chi squared will be big too, but I wonder if there is a right way to do that. | 0 | 1 | 780 |
0 | 17,813,855 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-07-23T14:25:00.000 | 2 | 1 | 0 | An optimal algorithm for the weighted set cover issue? | 17,813,029 | 0.379949 | python,algorithm,set,cover | If you want an exponential algorithm, just try every subset of the set of packages and take the cheapest one that contains all the things you need. | Sorry about the title, SO wasn't allowing the word "problem" in it. I have the following problem:
I have packages of things I want to sell, and each package has a price. When someone requests things X, Y and Z, I want to look through all the packages, some of which contain more than one item, and give the user the combination of packages that will cover their things and have the minimal price.
For example, I might suggest [(X, Y), (Z, Q)] for $10, since [(X), (Y), (Z)] costs $11. Since these are prices, I can't use the greedy weighted set cover algorithm, because two people getting the same thing for different prices would be bad.
However, I haven't been able to find a paper (or anything) detailing an optimal algorithm for the weighted set cover problem. Can someone help with an implementation (I'm using Python), a paper, or even a high-level description of how it works?
The packages I have are in the hundreds, and the things are 5-6, so running time isn't really an issue.
Thanks! | 0 | 1 | 1,472 |
0 | 17,822,258 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2013-07-23T18:13:00.000 | 20 | 1 | 0 | Is there any way to add points to KD tree implementation in Scipy | 17,817,889 | 1 | python,scipy,kdtree | The problem with k-d-trees is that they are not designed for updates.
While you can somewhat easily insert objects (if you use a pointer based representation, which needs substantially more memory than an array-based tree), and do deletions with tricks such as tombstone messages, doing such changes will degrade the performance of the tree.
I am not aware of a good method for incrementally rebalancing a k-d-tree. For 1-dimensional trees you have red-black-trees, B-trees, B*-trees, B+-trees and such things. These don't obviously work with k-d-trees because of the rotating axes and thus different sorting. So in the end, with a k-d-tree, it may be best to just collect changes, and from time to time do a full tree rebuild. Then at least this part of the tree will be quite good.
However, there exists a similar structure (that in my experiments often outperforms the k-d-tree!): the R*-tree. Instead of performing binary splits, it uses rectangular bounding boxes to collect objects, and a lot of thought was put into making the tree a dynamic data structure. This is also where the R*-tree performs much better than the R-tree: it has a much more clever split for kNN search, and it performs incremental rebalancing to improve its structure. | I have a set of points for which I want to construct KD Tree. After some time I want to add few more points to this KDTree periodically. Is there any way to do this in scipy implementation | 0 | 1 | 6,831 |
0 | 17,822,815 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2013-07-23T23:17:00.000 | 1 | 1 | 0 | Specify DataType using read_table() in Pandas | 17,822,595 | 1.2 | python,pandas | I believe we enabled this in 0.12
you can pass str,np.str_,object in place of an S4
which all convert to object dtype in any event
or after you read it in
df['year'].astype(object) | I am loading a text file into pandas, and have a field that contains year. I want to make sure that this field is a string when pulled into the dataframe.
I can only seem to get this to work if I specify the exact length of the string using the code below:
df = pd.read_table('myfile.tsv', dtype={'year':'S4'})
Is there a way to do this without specifying length? I will need to perform this action on different columns that vary in length. | 0 | 1 | 4,022 |
0 | 17,845,330 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-07-24T20:58:00.000 | 0 | 1 | 0 | Fast processing | 17,844,688 | 0 | python,igraph | As far as I understand, you wont have access to the C backend from Python. What about storing the sorted edge in an attribute of the vertices eg in g.vs["sortedOutEdges"] ? | In python and igraph I have many nodes with high degree. I always need to consider the edges from a node in order of their weight. It is slow to sort the edges each time I visit the same node. Is there some way to persuade igraph to always give the edges from a node in weight sorted order, perhaps by some preprocessing? | 0 | 1 | 71 |
0 | 58,946,534 | 0 | 0 | 0 | 0 | 1 | false | 78 | 2013-07-26T06:04:00.000 | 19 | 6 | 0 | Is there a parameter in matplotlib/pandas to have the Y axis of a histogram as percentage? | 17,874,063 | 1 | python,pandas,matplotlib | I know this answer is 6 years later but to anyone using density=True (the substitute for the normed=True), this is not doing what you might want to. It will normalize the whole distribution so that the area of the bins is 1. So if you have more bins with a width < 1 you can expect the height to be > 1 (y-axis). If you want to bound your histogram to [0;1] you will have to calculate it yourself. | I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib.
Thanks | 0 | 1 | 86,215 |
0 | 17,900,531 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2013-07-27T16:38:00.000 | 0 | 2 | 0 | Represent a tree hierarchy using an Excel spreadsheet to be easily parsed by Python CSV reader? | 17,900,112 | 0 | python,excel,csv,tree,hierarchy | If spreadsheet is a must in this solution, hierarchy can be represented by indents on the Excel side (empty cells at the beginnings of rows), one row per node/leaf. On the Python side, one can parse them to tree structure (of course, one needs to filter out empty rows and some other exceptions). Node type can be specified on it's own column. For example, it could even be the first non-empty cell.
I guess, hierarchy level is limited (say, max 8 levels), otherwise Excel is not good idea at all.
Also, there is a library called openpyxl, which can help reading Excel files directly, without user needing to convert them to CSV (it adds usability to the overall approach).
Another approach is to put a level number in the first cell. The number should never be incremented by 2 or more.
Yet another approach is to use some IDs for each node and each node leaf would need to specify parent's id. But this is not very user-friendly. | I have a non-technical client who has some hierarchical product data that I'll be loading into a tree structure with Python. The tree has a variable number of levels, and a variable number nodes and leaf nodes at each level.
The client already knows the hierarchy of products and would like to put everything into an Excel spreadsheet for me to parse.
What format can we use that allows the client to easily input and maintain data, and that I can easily parse into a tree with Python's CSV? Going with a column for each level isn't without its hiccups (especially if we introduce multiple node types) | 0 | 1 | 13,410 |
0 | 61,605,458 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-07-28T03:06:00.000 | 0 | 3 | 0 | Pandas import error | 17,904,600 | 0 | python-2.7,pandas,easy-install | Pandas does not work with Python 2.7, you will need Python 3.6 or higher | I tried installing pandas using easy_install and it claimed that it successfully installed the pandas package in my Python Directory.
I switch to IDLE and try import pandas and it throws me the following error -
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import pandas
File "C:\Python27\lib\site-packages\pandas-0.12.0-py2.7-win32.egg\pandas\__init__.py", line 6, in <module>
from . import hashtable, tslib, lib
File "numpy.pxd", line 157, in init pandas.hashtable (pandas\hashtable.c:20282)
ValueError: numpy.dtype has the wrong size, try recompiling
Please help me diagnose the error.
FYI: I have already installed the numpy package | 0 | 1 | 26,604 |
0 | 17,938,780 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-07-29T13:32:00.000 | 0 | 5 | 0 | Dividing an array into partitions NOT evenly sized, given the points where each partition should start or end, in python | 17,925,460 | 0 | python,arrays,slice | For each range in your limits list create an empty list plus one for the overflow values as a tupple with the max value in the list and the min value for that list, the last one will have a max on None
For each value in the values list run through your tupples until you find the one that your value is > min and < max or the max is None.
When you find the right list append the value to it and go on to the next. | How do I divide a list into smaller not evenly sized intervals, give the ideal initial and final values of each interval?
I have a list of 16383 items. I also have a separate list of the values at which each interval should end and the following should enter.
I would need to use the given intervals to assign each element to the partition it belongs to, depending on its value.
I have tried reading stuff, but I encountered only the case when given the original list, people split it into evenly sized partitions...
Thanks
Blaise | 0 | 1 | 141 |
0 | 17,951,581 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-07-30T14:57:00.000 | 0 | 1 | 0 | Passing 2D argument into numpy.optimize.fmin error | 17,950,492 | 0 | python,optimization,scipy | It seems it is in fact impossible to pass a 2D list to numpy.optimize.fmin. However flattening the input f was not that much of a problem and while it makes the code slightly uglier, the optimisation now works.
Interestingly I also coded the optimisation in Matlab which does take 2D inputs to its fminsearch function. Both programs give the same output (y). | I currently have a function PushLogUtility(p,w,f) that I am looking to optimise w.r.t f (2xk) list for fixed p (9xk list) and w (2xk) list.
I am using the scipy.optimize.fmin function but am getting errors I believe because f is 2-dimensional. I had written a previous function LogUtility(p,q,f) passing a 1-dimensional input and it worked.
One option it seems is to write the p, w and f into 1-dimensional lists but this would be time-consuming and less readable. Is there any way to make fmin optimise a function with a 2D input? | 0 | 1 | 380 |
0 | 17,971,361 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-07-31T04:04:00.000 | 1 | 2 | 0 | OpenCV Python 3.3 | 17,961,391 | 1.2 | python,opencv | You'll have to install all the libraries you want to use together with OpenCV for Python 2.7. This is not much of a problem, you can do it with pip in one line, or choose one of the many pre-built scientific Python packages. | I have Python 3.3 and 2.7 installed on my computer
For Python 3.3, I installed many libraries like numpy, scipy, etc
Since I also want to use opencv, which only supports python 2.7 so far, I installed opencv under Python 2.7.
Hey, here comes the problem, what if I want to import numpy as well as cv in the same script? | 0 | 1 | 1,449 |
0 | 18,023,402 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-08-02T18:02:00.000 | 1 | 1 | 0 | Disambiguation of Names using Edit Distance | 18,023,356 | 0.197375 | python,levenshtein-distance | Create a dictionary keyed by zipcode, with lists of company names as the values. Now you only have to match company names per zipcode, a much smaller search space. | I have a huge list of company names and a huge list of zipcodes associated with those names. (>100,000).
I have to output similar names (for example, AJAX INC and AJAX are the same company, I have chosen a threshold of 4 characters for edit distance), but only if their corresponding zipcodes match too.
The trouble is that I can put all these company names in a dictionary, and associate a list of zipcode and other characteristics with that dictionary key. However, then I have to match each pair, and with O(n^2), it takes forever. Is there a faster way to do it? | 0 | 1 | 164 |
0 | 47,751,572 | 0 | 0 | 0 | 0 | 2 | false | 170 | 2013-08-06T20:18:00.000 | 116 | 7 | 0 | How to estimate how much memory a Pandas' DataFrame will need? | 18,089,667 | 1 | python,pandas | Here's a comparison of the different methods - sys.getsizeof(df) is simplest.
For this example, df is a dataframe with 814 rows, 11 columns (2 ints, 9 objects) - read from a 427kb shapefile
sys.getsizeof(df)
>>> import sys
>>> sys.getsizeof(df)
(gives results in bytes)
462456
df.memory_usage()
>>> df.memory_usage()
...
(lists each column at 8 bytes/row)
>>> df.memory_usage().sum()
71712
(roughly rows * cols * 8 bytes)
>>> df.memory_usage(deep=True)
(lists each column's full memory usage)
>>> df.memory_usage(deep=True).sum()
(gives results in bytes)
462432
df.info()
Prints dataframe info to stdout. Technically these are kibibytes (KiB), not kilobytes - as the docstring says, "Memory usage is shown in human-readable units (base-2 representation)." So to get bytes would multiply by 1024, e.g. 451.6 KiB = 462,438 bytes.
>>> df.info()
...
memory usage: 70.0+ KB
>>> df.info(memory_usage='deep')
...
memory usage: 451.6 KB | I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? Just trying to get a better feel of data frames and memory... | 0 | 1 | 124,007 |
0 | 18,089,887 | 0 | 0 | 0 | 0 | 2 | false | 170 | 2013-08-06T20:18:00.000 | 10 | 7 | 0 | How to estimate how much memory a Pandas' DataFrame will need? | 18,089,667 | 1 | python,pandas | Yes there is. Pandas will store your data in 2 dimensional numpy ndarray structures grouping them by dtypes. ndarray is basically a raw C array of data with a small header. So you can estimate it's size just by multiplying the size of the dtype it contains with the dimensions of the array.
For example: if you have 1000 rows with 2 np.int32 and 5 np.float64 columns, your DataFrame will have one 2x1000 np.int32 array and one 5x1000 np.float64 array which is:
4bytes*2*1000 + 8bytes*5*1000 = 48000 bytes | I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? Just trying to get a better feel of data frames and memory... | 0 | 1 | 124,007 |
0 | 18,339,341 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2013-08-09T10:36:00.000 | 0 | 1 | 0 | nearest k neighbours that satisfy conditions (python) | 18,144,810 | 0 | python,constraints,nearest-neighbor,kdtree | If you are looking for the neighbours in a line of sight, couldn't you use a method like
cKDTree.query_ball_point(self, x, r, p, eps)
which allows you to query the KDTree for neighbours that are inside a radius of size r around the x array points.
Unless I misunderstood your question, it seems that the line of sight is known and is equivalent to this r value. | I have a slight variant on the "find k nearest neighbours" algorithm which involves rejecting those that don't satisfy a certain condition and I can't think of how to do it efficiently.
What I'm after is to find the k nearest neighbours that are in the current line of sight. Unfortunately scipy.spatial.cKDTree doesn't provide an option for searching with a filter to conditionally reject points.
The best algorithm I can come up with is to query for n nearest neighbours and if there aren't k that are in the line of sight then query it again for 2n nearest neighbours and repeat. Unfortunately this would mean recomputing the n nearest neighbours repeatedly in the worst cases. The performance hit gets worse the more times I have to repeat this query. On the other hand setting n too high is potentially wasteful if most of the points returned aren't needed.
The line of sight changes frequently so I can't recompute the cKDTree each time either. Any suggestions? | 0 | 1 | 631 |
0 | 18,153,590 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-08-09T15:19:00.000 | 1 | 2 | 0 | FFT resolution bandwidth | 18,150,150 | 0.099668 | python,numpy,fft | The bandwidth of each FFT result bin is inversely proportional to the length of the FFT window. For a wider bandwidth per bin, use a shorter FFT. If you have more data, then Welch's method can be used with sequential STFT windows to get an average estimate. | I have a numpy fft for a large number of samples. How do I reduce the resolution bandwidth, so that it will show me fewer frequency bins, with averaged power output? | 0 | 1 | 921 |
0 | 18,201,497 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2013-08-10T17:07:00.000 | 3 | 2 | 0 | Efficient nearest neighbour search for sparse matrices | 18,164,348 | 0.291313 | python,scipy,scikit-learn,nearest-neighbor | You can try to transform your high-dimensional sparse data to low-dimensional dense data using TruncatedSVD then do a ball-tree. | I have a large corpus of data (text) that I have converted to a sparse term-document matrix (I am using scipy.sparse.csr.csr_matrix to store sparse matrix). I want to find, for every document, top n nearest neighbour matches. I was hoping that NearestNeighbor routine in Python scikit-learn library (sklearn.neighbors.NearestNeighbor to be precise) would solve my problem, but efficient algorithms that use space partitioning data structures such as KD trees or Ball trees do not work with sparse matrices. Only brute-force algorithm works with sparse matrices (which is infeasible in my case as I am dealing with large corpus).
Is there any efficient implementation of nearest neighbour search for sparse matrices (in Python or in any other language)?
Thanks. | 0 | 1 | 3,890 |
0 | 18,180,322 | 0 | 1 | 0 | 0 | 1 | false | 6 | 2013-08-12T04:47:00.000 | 0 | 6 | 0 | finding a set of ranges that a number fall in | 18,179,680 | 0 | python,algorithm | How about,
sort by first column O(n log n)
binary search to find indices that are out of range O(log n)
throw out values out of range
sort by second column O(n log n)
binary search to find indices that are out of range O(log n)
throw out values out of range
you are left with the values in range
This should be O(n log n)
You can sort rows and cols with np.sort and a binary search should only be a few lines of code.
If you have lots of queries, you can save the first sorted copy for subsequent calls but not the second. Depending on the number of queries, it may turn out to be better to do a linear search than to sort then search. | I have a 200k lines list of number ranges like start_position,stop position.
The list includes all kinds of overlaps in addition to nonoverlapping ones.
the list looks like this
[3,5]
[10,30]
[15,25]
[5,15]
[25,35]
...
I need to find the ranges that a given number fall in. And will repeat it for 100k numbers.
For example if 18 is the given number with the list above then the function should return
[10,30]
[15,25]
I am doing it in a overly complicated way using bisect, can anybody give a clue on how to do it in a faster way.
Thanks | 0 | 1 | 3,636 |
0 | 18,321,537 | 0 | 1 | 0 | 0 | 1 | true | 7 | 2013-08-16T21:44:00.000 | 3 | 1 | 0 | SciPy 0.12.0 and Numpy 1.6.1 - numpy.core.multiarray failed to import | 18,282,568 | 1.2 | python,numpy,scipy | So it seems that the cause of the error was incompatibility between scipy 0.12.0 and the much older numpy 1.6.1.
There are two ways to fix this - either to upgrade numpy (to ~1.7.1) or to downgrade scipy (to ~0.10.1).
If ArcGIS 10.2 specifically requires Numpy 1.6.1, the easiest option is to downgrade scipy. | I just installed ArcGIS v10.2 64bit background processing which installs Python 2.7.3 64bit and NumPy 1.6.1. I installed SciPy 0.12.0 64bit to the same Python installation.
When I opened my Python interpreter I was able to successfully import arcpy, numpy, and scipy. However, when I tried to import scipy.ndimage I got an error that said numpy.core.multiarray failed to import. Everything I have found online related to this error references issues between scipy and numpy and suggest upgrading to numpy 1.6.1. I'm already at numpy 1.6.1.
Any ideas how to deal with this? | 0 | 1 | 6,801 |
0 | 18,315,125 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-08-19T13:23:00.000 | -1 | 2 | 0 | Selecting random rows with python and writing to a new file | 18,314,913 | -0.099668 | python,random-sample | The basic procedure is this:
1. Open the input file
This can be accomplished with the basic builtin open function.
2. Open the output file
You'll probably use the same method that you chose in step #1, but you'll need to open the file in write mode.
3. Read the input file to a variable
It's often preferable to read the file one line at a time, and operate on that one line before reading the next, but if memory is not a concern, you can also read the entire thing into a variable all at once.
4. Choose selected lines
There will be any number of ways to do this, depending on how you did step #3, and your requirements. You could use filter, or a list comprehension, or a for loop with an if statement, etc. The best way depends on the particular constraints of your goal.
5. Write the selected lines
Take the selected lines you've chosen in step #4 and write them to the file.
6. Close the files
It's generally good practice to close the files you've opened to prevent resource leaks. | I need to open a csv file, select 1000 random rows and save those rows to a new file. I'm stuck and can't see how to do it. Can anyone help? | 0 | 1 | 11,279 |
0 | 21,596,301 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2013-08-21T08:30:00.000 | 0 | 2 | 0 | OpenCv2: Using HoughLinesP raises " is not a numpy array" | 18,352,493 | 0 | python,opencv | For me, it wasn't working when the environment was of ROS Fuerte but it worked when the environment was of ROS Groovy.
As Alexandre had mentioned above, it must be the problem with the opencv2 versions. Fuerte had 2.4.2 while Groovy had 2.4.6 | Using HoughLinesP raises "<'unknown'> is not a numpy array", but my array is really a numpy array.
It works on one of my computer, but not on my robot... | 0 | 1 | 158 |
0 | 18,353,075 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2013-08-21T08:30:00.000 | 2 | 2 | 0 | OpenCv2: Using HoughLinesP raises " is not a numpy array" | 18,352,493 | 0.197375 | python,opencv | Found it:
I don't have the same opencv version on my robot and on my computer !
For the records calling HoughLinesP:
works fine on 2.4.5 and 2.4.6
leads to "<unknown> is not a numpy array" with version $Rev: 4557 $ | Using HoughLinesP raises "<'unknown'> is not a numpy array", but my array is really a numpy array.
It works on one of my computer, but not on my robot... | 0 | 1 | 158 |
0 | 71,715,493 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2013-08-23T02:45:00.000 | 0 | 6 | 0 | K-th order neighbors in graph - Python networkx | 18,393,842 | 0 | python,networkx,adjacency-list | Yes,you can get a k-order ego_graph of a node
subgraph = nx.ego_graph(G,node,radius=k)
then neighbors are nodes of the subgraph
neighbors= list(subgraph.nodes()) | I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops.
I looked at networkx and the only function relevant was neighbors. However, this just returns the order 1 neighbors. For higher order, we need to iterate to determine the full set. I believe there should be a more efficient way of accessing K-th order neighbors in networkx.
Is there a function which efficiently returns the K-th order neighbors, without incrementally building the set?
EDIT: In case there exist other graph libraries in Python which might be useful here, please do mention those. | 0 | 1 | 10,646 |
0 | 21,031,826 | 0 | 0 | 0 | 0 | 2 | true | 11 | 2013-08-23T02:45:00.000 | 27 | 6 | 0 | K-th order neighbors in graph - Python networkx | 18,393,842 | 1.2 | python,networkx,adjacency-list | You can use:
nx.single_source_shortest_path_length(G, node, cutoff=K)
where G is your graph object. | I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops.
I looked at networkx and the only function relevant was neighbors. However, this just returns the order 1 neighbors. For higher order, we need to iterate to determine the full set. I believe there should be a more efficient way of accessing K-th order neighbors in networkx.
Is there a function which efficiently returns the K-th order neighbors, without incrementally building the set?
EDIT: In case there exist other graph libraries in Python which might be useful here, please do mention those. | 0 | 1 | 10,646 |
0 | 18,456,597 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2013-08-27T01:35:00.000 | 1 | 6 | 0 | Retrieve 10 random lines from a file | 18,455,589 | 0.033321 | python,numpy | It is possible to do the job with one pass and without loading the entire file into memory as well. Though the code itself is going to be much more complicated and mostly unneeded unless the file is HUGE.
The trick is the following:
Suppose we only need one random line: first save the first line into a variable, then for the i-th line, replace the currently saved line with probability 1/i. Return the saved line when reaching the end of the file.
For 10 random lines, have a list of 10 elements and do the process 10 times for each line in the file. | I have a text file which is 10k lines long and I need to build a function to extract 10 random lines each time from this file. I already found how to generate random numbers in Python with numpy and also how to open a file but I don't know how to mix it all together. Please help. | 0 | 1 | 3,957 |
1 | 18,474,882 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-08-27T17:55:00.000 | 1 | 1 | 0 | AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer' | 18,472,394 | 0.197375 | python,macos,matplotlib,wxpython | _idletimer is likely to be a private, possibly implementation specific member of one of the classes - since you do not include the code or context I can not tell you which.
In general anything that starts with an _ is private and if it is not your own, and specific to the local class, should not be used by your code as it may change or even disappear when you rely on it. | I am creating a GUI program using wxPython. I am also using matplotlib to graph some data. This data needs to be animated. To animate the data I am using the FuncAnimate function, which is part of the matplotlib package.
When I first started to write my code I was using a PC, running windows 7. I did my initial testing on this computer and everything was working fine. However my program needs to be cross platform. So I began to run some test using a Mac. This is where I began to encounter an error. As I explained before, in my code I have to animate some data. I programmed it such that the user has the ability to play and pause the animation. Now when the user pauses the animation I get the following error: AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer'. Now I find this to be very strange because like I said I ran this same code on a PC and never got this error.
I was wondering if anyone could explain to me what is meant by this _idletimer error and what are possible causes for this. | 0 | 1 | 287 |
0 | 18,511,409 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-08-29T12:36:00.000 | 2 | 1 | 0 | What does extent do within imshow()? | 18,511,206 | 1.2 | python,matplotlib,histogram2d | Extent defines the images max and min of the horizontal and vertical values. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max]. | Im wanting to use imshow() to create an image of a 2D histogram. However on several of the examples ive seen the 'extent' is defined. What does 'extent' actually do and how do you choose what values are appropriate? | 0 | 1 | 128 |
0 | 18,537,377 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-08-29T19:19:00.000 | 1 | 1 | 0 | DBSCAN with potentially imprecise lat/long coordinates | 18,519,356 | 1.2 | python,algorithm,cluster-analysis,data-mining,dbscan | Note that DBSCAN doesn't actually need the distances.
Look up Generalized DBSCAN: all it really uses is a "is a neighbor of" relationship.
If you really need to incorporate uncertainty, look up the various DBSCAN variations and extensions that handle imprecise data explicitly. However, you may get pretty much the same results just by choosing a threshold for epsilon that is somewhat reasonable. There is room for choosing a larger epsilon than the one you deem adequate: if you want to use epsilon = 1km, and you assume your data is imprecise on the range of 100m, then use 1100m as epsilon instead.
After a little investigation, I discovered that the reason for this was because the lat/long geotags (which were generated from the phone's GPS) are pretty imprecise. When I looked at the location accuracy of each photo, I discovered that they ranged widely (I've seen a margin of error of up to 600 meters) and that when you take the location accuracy into account, these two sets of photos are within a nearby distance in terms of lat/long.
Is there any way to account for margin of error in lat/long when you're doing DBSCAN?
(Note: I'm not sure if this question is as articulate as it should be, so if there's anything I can do to make it more clear, please let me know.) | 0 | 1 | 723 |
0 | 20,780,282 | 0 | 0 | 0 | 0 | 2 | false | 14 | 2013-09-03T14:39:00.000 | 0 | 5 | 0 | How I can use cv2.ellipse? | 18,595,099 | 0 | python,opencv | these parameters should be integer, or it will raise TypeError | OpenCV2 for python have 2 function
[Function 1]
Python: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) → None
[Function 2]
Python: cv2.ellipse(img, box, color[, thickness[, lineType]]) → None
I want to use [Function 1]
But when I use this Code
cv2.ellipse(ResultImage, Circle, Size, Angle, 0, 360, Color, 2, cv2.CV_AA, 0)
It raise
TypeError: ellipse() takes at most 5 arguments (10 given)
Could you help me? | 0 | 1 | 13,427 |
0 | 28,592,694 | 0 | 0 | 0 | 0 | 2 | false | 14 | 2013-09-03T14:39:00.000 | 7 | 5 | 0 | How I can use cv2.ellipse? | 18,595,099 | 1 | python,opencv | Make sure all the ellipse parameters are int otherwise it raises "TypeError: ellipse() takes at most 5 arguments (10 given)". Had the same problem and casting the parameters to int, fixed it.
Please note that in Python, you should round the number first and then use int(), since int function will cut the number:
x = 2.7 , int(x) will be 2 not 3 | OpenCV2 for python have 2 function
[Function 1]
Python: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) → None
[Function 2]
Python: cv2.ellipse(img, box, color[, thickness[, lineType]]) → None
I want to use [Function 1]
But when I use this Code
cv2.ellipse(ResultImage, Circle, Size, Angle, 0, 360, Color, 2, cv2.CV_AA, 0)
It raise
TypeError: ellipse() takes at most 5 arguments (10 given)
Could you help me? | 0 | 1 | 13,427 |
0 | 18,641,432 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-09-05T14:02:00.000 | 3 | 1 | 0 | Pytables, HDF5 Attribute setting and deletion, | 18,638,461 | 1.2 | python,hdf5,pytables | HDF5 attribute access is notoriously slow. HDF5 is really built for and around the array data structure. Things like groups and attributes are great helpers but they are not optimized.
That said while attribute reading is slow, attribute writing is even slower. Therefore, it is always worth the extra effort to do what you suggest. Check if the attribute exists and if it has the desired value before writing it. This should give you a speed boost as compared to just writing it out every time.
Luckily, the effect on memory of attributes -- both on disk and in memory -- is minimal. This is because ALL attributes on a node fit into 64 kb of special metadata space. If you try to write more than 64 kb worth of attributes, HDF5 and PyTables will fail.
I hope this helps. | I am working a lot with pytables and HDF5 data and I have a question regarding the attributes of nodes (the attributes you access via pytables 'node._v_attrs' property).
Assume that I set such an attribute of an hdf5 node. I do that over and over again, setting a particular attribute
(1) always to the same value (so overall the value stored in the hdf5file does not change qualitatively)
(2) always with a different value
How are these operations in terms of speed and memory? What I mean is the following, does setting the attribute really imply deletion of the attribute in the hdf5 file and adding a novel attribute with the same name as before? If so, does that mean every time I reset an existing attribute the size of the hdf5 file is slightly increased and keeps slowly growing until my hard disk is full?
If this is true, would it be more beneficial to check before I reset whether I have case (1) [and I should not store at all but compare data to the attribute written on disk] and only reassign if I face case (2) [i.e. the attribute value in the hdf5file is not the one I want to write to the hdf5 file].
Thanks a lot and best regards,
Robert | 0 | 1 | 986 |
0 | 18,647,689 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2013-09-05T21:12:00.000 | 1 | 2 | 0 | How to create a .py file within canopy? | 18,646,039 | 0.099668 | python,enthought | Umair, ctrl + n or File > Python File will do what you want.
Best,
Jonathan | I am using Enthought canopy for data analysis. I didn't find any option to create a .py file to write my code and save it for later use. I tried File> New >IPython Notebook, wrote my code and saved it. But the next time I opened it within Canopy editor, it wasn't editable. I need something like a Python shell where you just open a 'New Window', write all your code and press F5 to execute it. It could be saved for later use as well. Although pandas and numpy work in canopy editor, they are not recognized by Python shell (whenever I write import pandas as pd, it says no module named pandas). Please help. | 0 | 1 | 1,009 |
0 | 19,578,868 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2013-09-05T21:12:00.000 | 1 | 2 | 0 | How to create a .py file within canopy? | 18,646,039 | 0.099668 | python,enthought | Let me add that if you need to open the file, even if it's a text file but you want to be able to run it as a Python file (or whatever language format) just look at the bottom of the Canopy window and select the language you want to use. In some cases it may default to just text. Click it and select the language you want. Once you've done that, you'll see that the run button will be active and the command appear in their respective color. | I am using Enthought canopy for data analysis. I didn't find any option to create a .py file to write my code and save it for later use. I tried File> New >IPython Notebook, wrote my code and saved it. But the next time I opened it within Canopy editor, it wasn't editable. I need something like a Python shell where you just open a 'New Window', write all your code and press F5 to execute it. It could be saved for later use as well. Although pandas and numpy work in canopy editor, they are not recognized by Python shell (whenever I write import pandas as pd, it says no module named pandas). Please help. | 0 | 1 | 1,009 |
0 | 18,687,144 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-09-07T07:35:00.000 | 1 | 2 | 0 | How to get a series of random points in a specific range of distances from a reference point and generate xyz coordinates | 18,670,974 | 0.099668 | python | Generically you can populate your points in two ways:
1) use random to create the coordinates for your points within the outer bounds of the solution. If a given random point falls outside the outer limit or inside the inner limit, reject it and draw a new one.
2) You can do it using polar coordinates: generate a random distance between the inner and outer bound and a yaw rotation. In 3d, you'd have to use two rotations, one for yaw and another for pitch. This avoids the need for rejecting points.
You can simplify the code for both by generating all the points in a circle (or sphere) around the origin (0,0,0) instead of in place. Then move the whole set of points to the correct location by adding its position to the position of each point. | I have to generate various points with their xyz coordinates and then calculate the distances.
Let's say I want to create random point coordinates 2.5 cm away from point a in all directions, so that I can calculate the mutual distances and angles from a to all generated points (the red points).
I want to remove the redundant points: all those which do not satisfy my criteria or which share the same position.
for example, I know the coordinates for the two points a (-10, 12, 2) and b (-9, 11, 5). The distance between a and b is 5 cm.
The question is: How can I generate the red points' coordinates. I knew how to calculate the distance and angle. So far, I have tried the following calculation:
I am not able to define the points randomly.
I found a few solutions that don't work.
Any help would be appreciated. | 0 | 1 | 5,337 |
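A possible numpy sketch of the polar/spherical approach from the answer: draw random unit directions, scale them by a radius, and shift by the reference point. The reference point a and the 2.5 cm distance are taken from the question; everything else is illustrative.
import numpy as np

def random_points_around(center, r_min, r_max, n):
    center = np.asarray(center, dtype=float)
    d = np.random.normal(size=(n, 3))            # random directions ...
    d /= np.linalg.norm(d, axis=1)[:, None]      # ... normalized to unit length
    r = np.random.uniform(r_min, r_max, size=n)  # random radii in [r_min, r_max]
    return center + r[:, None] * d

a = np.array([-10.0, 12.0, 2.0])
pts = random_points_around(a, 2.5, 2.5, 20)      # exactly 2.5 cm away from a
dists = np.linalg.norm(pts - a, axis=1)          # all approximately 2.5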
0 | 18,719,287 | 0 | 0 | 0 | 0 | 1 | true | 14 | 2013-09-09T16:49:00.000 | 19 | 3 | 0 | Proximity Matrix in sklearn.ensemble.RandomForestClassifier | 18,703,136 | 1.2 | python,scikit-learn,random-forest | We don't implement proximity matrix in Scikit-Learn (yet).
However, this could be done by relying on the apply function provided in our implementation of decision trees. That is, for all pairs of samples in your dataset, iterate over the decision trees in the forest (through forest.estimators_) and count the number of times they fall in the same leaf, i.e., the number of times apply give the same node id for both samples in the pair.
Hope this helps. | I'm trying to perform clustering in Python using Random Forests. In the R implementation of Random Forests, there is a flag you can set to get the proximity matrix. I can't seem to find anything similar in the python scikit version of Random Forest. Does anyone know if there is an equivalent calculation for the python version? | 0 | 1 | 6,625 |
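A rough sketch of the approach the answer describes (counting shared leaves), assuming a scikit-learn version where the forest itself exposes apply; with older versions you would loop over forest.estimators_ and call each tree's apply, as the answer says. The make_classification data is only there to make the snippet self-contained.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=50, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = forest.apply(X)                          # (n_samples, n_trees) leaf ids
n = X.shape[0]
prox = np.zeros((n, n))
for t in range(leaves.shape[1]):
    same_leaf = leaves[:, t][:, None] == leaves[:, t][None, :]
    prox += same_leaf                             # count trees where i and j share a leaf
prox /= leaves.shape[1]                           # proximity values in [0, 1]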
0 | 18,735,714 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2013-09-10T14:09:00.000 | 1 | 2 | 0 | Sort out bad pictures of a dataset (k-means, clustering, sklearn) | 18,721,204 | 0.099668 | python,computer-vision,cluster-analysis,scikit-learn,k-means | K-means is not very robust to noise; and your "bad pictures" probably can be considered as such. Furthermore, k-means doesn't work too well for sparse data; as the means will not be sparse.
You may want to try other, more modern, clustering algorithms that can handle this situation much better. | I'm testing some things in image retrieval and I was thinking about how to sort out bad pictures of a dataset. For example, there are mostly pictures of houses, and in between there is a picture of people and some of cars. In the end I want to get only the houses.
At the Moment my approach looks like:
computing descriptors (Sift) of all pictures
clustering all descriptors with k-means
creating histograms of the pictures by computing the euclidean distance between the cluster centers and the descriptors of a picture
clustering the histograms again.
At this moment I have got a first sort (which isn't really good). Now my idea is to take all pictures which are clustered to a center with len(center) > 1 and cluster them again and again. The result would be that the pictures which are particular to a center get sorted out. Maybe it's enough to fit the result again to the same k-means without clustering again?!
The result isn't satisfying, so maybe someone has a better idea.
For Clustering etc. I'm using k-means of scikit learn. | 0 | 1 | 550 |
0 | 18,735,840 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2013-09-10T14:09:00.000 | 1 | 2 | 0 | Sort out bad pictures of a dataset (k-means, clustering, sklearn) | 18,721,204 | 0.099668 | python,computer-vision,cluster-analysis,scikit-learn,k-means | I don't have the solution to your problem but here is a sanity check to perform prior to the final clustering, to check that the kind of features you extracted is suitable for your problem:
extract the histogram features for all the pictures in your dataset
compute the pairwise distances of all the pictures in your dataset using the histogram features (you can use sklearn.metrics.pairwise_distances)
np.argsort the raveled distance matrix to find the indices of the 20 closest pairs of distinct pictures according to your features (you have to filter out the zero-valued diagonal elements of the distance matrix) and do the same to extract the 20 farthest pairs of pictures based on your histogram features.
Visualize (for instance with plt.imshow) the pictures of top closest pairs and check that they are all pairs that you would expect to be very similar.
Visualize the pictures of the farthest pairs and check that they are all very dissimilar.
If one of those 2 checks fails, then it means that a histogram of bag-of-SIFT words is not suited to your task. Maybe you need to extract other kinds of features (e.g. HoG features) or reorganize the way you extract the clusters of SIFT descriptors, maybe using a pyramidal pooling structure to extract info on the global layout of the pictures at various scales. | I'm testing some things in image retrieval and I was thinking about how to sort out bad pictures of a dataset. For example, there are mostly pictures of houses, and in between there is a picture of people and some of cars. In the end I want to get only the houses.
At the Moment my approach looks like:
computing descriptors (Sift) of all pictures
clustering all descriptors with k-means
creating histograms of the pictures by computing the euclidean distance between the cluster centers and the descriptors of a picture
clustering the histograms again.
At this moment I have got a first sort (which isn't really good). Now my idea is to take all pictures which are clustered to a center with len(center) > 1 and cluster them again and again. The result would be that the pictures which are particular to a center get sorted out. Maybe it's enough to fit the result again to the same k-means without clustering again?!
The result isn't satisfying, so maybe someone has a better idea.
For Clustering etc. I'm using k-means of scikit learn. | 0 | 1 | 550 |
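A small sketch of that sanity check; H stands in for the (n_pictures, n_clusters) histogram matrix and is random here only so the snippet runs.
import numpy as np
from sklearn.metrics import pairwise_distances

H = np.random.rand(30, 50)                       # placeholder histogram features
D = pairwise_distances(H, metric="euclidean")
iu = np.triu_indices_from(D, k=1)                # upper triangle: distinct pairs only
order = np.argsort(D[iu])
closest = list(zip(iu[0][order[:20]], iu[1][order[:20]]))
farthest = list(zip(iu[0][order[-20:]], iu[1][order[-20:]]))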
0 | 18,815,103 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-09-11T14:26:00.000 | 0 | 1 | 0 | Deleting Lines of a Plot Which has Plotted with axvspan() | 18,743,895 | 1.2 | python,matplotlib,interactive-mode | Ok I have found the necessary functions. I used dir() function to find methods. axvspan() returns a matplotlib.patches.Polygon result. This type of data has set_visible method, using it as x.set_visible(0) I removed the lines and shapes. | I am doing some animating plots with ion()function. I want to draw and delete some lines. I found out axvspan() function, I can plot the lines and shapes with it as I want. But as long as I am doing an animation I also want to delete that lines and shapes. I couldn't find a way to delete them. | 0 | 1 | 159 |
0 | 18,777,073 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2013-09-13T01:48:00.000 | 1 | 3 | 0 | Python 2.7 - ImportError: No module named Image | 18,776,988 | 0.066568 | python,windows,opencv,installation | Try adding the Python 2.7 directory to your Windows PATH.
Do the following steps:
Open System Properties (Win+Pause) or My Computer and right-click then Properties
Switch to the Advanced tab
Click Environment Variables
Select PATH in the System variables section
Click Edit
Add python's path to the end of the list (the paths are separated by semicolons).
Example: C:\Windows;C:\Windows\System32;C:\Python27 | Recently, I have been studying OpenCV to detect and recognize faces using C++. In order to execute the source code demonstration from the OpenCV website I need to run Python to crop an image first. Unfortunately, the error message is 'ImportError: No module named Image' when I run the Python script (this script is provided by the OpenCV website). I installed "python-2.7.amd64" and downloaded "PIL-1.1.7.win32-py2.7" to install the Image library. However, the error message is 'Python version 2.7 required, which was not found in the registry'. And then, I downloaded the script written by Joakim Löw for Secret Labs AB / PythonWare to update the registry on my computer. But the error message is "Unable to register. You probably have another Python installation".
I spent one month searching for this issue on the internet but I cannot find the answer. Please help me resolve it.
Thanks,
Tran Dang Bao | 0 | 1 | 22,097 |
0 | 18,778,542 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-09-13T04:21:00.000 | 0 | 2 | 0 | Cash flow diagram in python | 18,778,266 | 0 | python,image,matplotlib | If you simply need arrows pointing up and down, use Unicode arrows like "↑" and "↓". This would be really simple if rendering in a browser. | I need to make a very simple image that will illustrate a cash flow diagram based on user input. Basically, I just need to make an axis and some arrows facing up and down and proportional to the value of the cash flow. I would like to know how to do this with matplot. | 0 | 1 | 875 |
0 | 36,248,111 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-09-13T16:55:00.000 | 0 | 1 | 0 | How to plot a heatmap of a big matrix with matplotlib (45K * 446) | 18,791,469 | 1.2 | python,matplotlib,bigdata,heatmap | I solved by downsampling the matrix to a smaller matrix.
I decided to try two methodologies:
supposing I want to down-sample a matrix of 45k rows to a matrix of 1k rows, the first methodology is to take one row value every 45 rows
the other methodology is, to down-sample 45k rows to 1k rows, to group the 45k rows into 1k groups (each composed of 45 adjacent rows) and to take the average of each group as the representative row
Hope it helps. | I am trying to plot a heatmap of a big microarray dataset (45K rows per 446 columns).
Using pcolor from matplotlib I am unable to do it because my PC easily runs out of memory (more than 8 GB).
I'd prefer to use python/matplotlib instead of R as a personal preference.
Any way to plot heatmaps in an efficient way?
Thanks | 0 | 1 | 1,446 |
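A numpy sketch of the second (averaging) methodology from the answer; the random matrix stands in for the real 45K x 446 microarray data.
import numpy as np

data = np.random.rand(45000, 446)                # stand-in for the real matrix
factor = 45                                      # 45k rows -> 1k rows
usable = (data.shape[0] // factor) * factor      # drop any leftover rows
small = data[:usable].reshape(-1, factor, data.shape[1]).mean(axis=1)
# small.shape == (1000, 446), cheap enough for pcolor/imshow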
0 | 35,315,482 | 0 | 1 | 0 | 0 | 1 | false | 6 | 2013-09-18T21:17:00.000 | 1 | 1 | 0 | How to share Ipython notebook kernels? | 18,882,510 | 0.197375 | ipython,ipython-notebook | When I have a long notebook, I create functions from my code and hide them in Python modules, which I then import in the notebook.
That way I can have huge chunks of code hidden in the background, and my notebook stays smaller and easier to handle. | I have some very large IPython (1.0) notebooks, which I find very unwieldy to work with. I want to split the large notebook into several smaller ones, each covering a specific part of my analysis. However, the notebooks need to share data and (unpickleable) objects.
Now, I want these notebooks to connect to the same kernel. How do I do this? How can I change the kernel to which a notebook is connected? (And any ideas how to automate this step?)
I don't want to use the parallel computing mechanism (which would be a trivial solution), because it would add much code overhead in my case. | 0 | 1 | 1,689 |
0 | 18,897,337 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-09-19T04:49:00.000 | 0 | 1 | 0 | Repeat rows in files in wakari | 18,886,383 | 0 | python | cat dataset.csv dataset.csv dataset.csv dataset.csv > bigdata.csv | In wakari, how do I download a CSV file and create a new CSV file with each of the rows in the original file repeated N number of times in the new CSV file. | 0 | 1 | 61 |
0 | 18,908,045 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2013-09-20T02:37:00.000 | 0 | 2 | 0 | Difference between a numpy array and a multidimensional list in Python? | 18,907,998 | 0 | python,arrays,list,numpy,multidimensional-array | Numpy is an extension, and demands that all the objects on it are of the same type , defined on creation. It also provides a set of linear algebra operations. Its more like a mathematical framework for python to deal with Numeric Calculations (matrix, n stuffs). | After only briefly looking at numpy arrays, I don't understand how they are different than normal Python lists. Can someone explain the difference, and why I would use a numpy array as opposed to a list? | 0 | 1 | 2,487 |
0 | 18,910,584 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-09-20T06:26:00.000 | 0 | 1 | 0 | Analyze Text to find patterns and useful information | 18,910,200 | 0 | python,nlp | Try xlrd Python Module to read and process excel sheets.
I think an appropriate implementation using this module is an easy way to solve your problem. | to provide some context: Issues in an application are logged in an excel sheet and one of the columns in that sheet contains the email communication between the user (who had raised the issue) and the resolve team member. There are bunch of other columns containing other useful information. My job is to find useful insights from this data for Business.
Find out what type of issue it was, e.g. was it a training issue for the user, an access issue, etc. This would mean that I analyze the mail text and figure out, by some means, the type of issue.
How many email conversations have happened for one issue?
Is it a repeat issue?
There are other simple statistical problems e.g. How many issues per week etc...
I read that NLP with Python can be solution to my problems. I also looked at Rapidminer for the same.
Now my Question is
a. "Am I on the right track?, Is NLP(Natural Language Processing) the solution to these problems?"
b. If yes, then how to start.. I have started reading book on NLP with Python, but that is huge, any specific areas that I should concentrate on and can start my analysis?
c. How is Rapidminer tool? Can it answer all of these questions? The data volume is not too huge (may be 100000 rows)... looks like it is quite easy to build a process in rapidminer, hence started on it...
Appreciate any suggestions!!! | 0 | 1 | 637 |
0 | 20,853,862 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2013-09-24T05:49:00.000 | 2 | 4 | 0 | How to install scikit-learn for Portable Python? | 18,973,863 | 0.099668 | python-2.7,scikit-learn,portable-python | you can easily download SciKit executable, extract it with python, copy SciKit folder and content to c:\Portable Python 2.7.5.1\App\Lib\site-packages\ and you'll have SciKit in your portable python.
I just had this problem and solved it this way. | While I am trying to install scikit-learn for my portable python, it says "Python 2.7 is not found in the registry". In the next window, it does ask for an installation path, but I am able neither to copy-paste the path nor to write it manually. Otherwise, please suggest some other alternative for portable Python which has numpy, scipy and scikit-learn by default. Please note that I don't have administrative rights on the system, so a portable version is preferred.
0 | 44,480,375 | 0 | 0 | 0 | 0 | 1 | false | 12 | 2013-09-25T01:33:00.000 | 0 | 2 | 0 | Python Svmlight Error: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future | 18,994,787 | 0 | python-2.7,scipy,svmlight | I also met this problem when I assigned numbers to a matrix.
like this:
Qmatrix[list2[0], list2[j]] = 1
the component may be a non-integer number, so I changed to this:
Qmatrix[int(list2[0]), int(list2[j])] = 1
and the warning was removed | I'm running python 2.7.5 with scikit_learn-0.14 on my Mac OSX Mountain Lion.
Every time I run a svmlight command, however, I get the following warning:
DeprecationWarning: using a non-integer number instead of an integer will result in an error >in the future | 0 | 1 | 34,473 |
0 | 38,722,056 | 0 | 1 | 0 | 0 | 2 | false | 21 | 2013-09-26T13:14:00.000 | 0 | 4 | 0 | How to check that the anaconda package was properly installed | 19,029,333 | 0 | python,macos,numpy,installation,anaconda | I don't think the existing answer answers your specific question (about installing packages within Anaconda). When I install a new package via conda install <PACKAGE>, I then run conda list to ensure the package is now within my list of Anaconda packages. | I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded/installed Anaconda 1.7.
However, when I type in:
import numpy as np
I get an error telling me that there is no such module. I assume this has to do with the location of the installation, but I can't figure out how to:
A. Check that everything is actually installed properly
B. Check the location of the installation.
Any pointers would be greatly appreciated!
Thanks | 0 | 1 | 69,121 |
0 | 41,600,022 | 0 | 1 | 0 | 0 | 2 | false | 21 | 2013-09-26T13:14:00.000 | 1 | 4 | 0 | How to check that the anaconda package was properly installed | 19,029,333 | 0.049958 | python,macos,numpy,installation,anaconda | Though the question is not relevant to Windows environment, FYI for windows. In order to use anaconda modules outside spyder or in cmd prompt, try to update the PYTHONPATH & PATH with C:\Users\username\Anaconda3\lib\site-packages.
Finally, restart the command prompt.
Additionally, sublime has a plugin 'anaconda' which can be used for sublime to work with anaconda modules. | I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded/installed Anaconda 1.7.
However, when I type in:
import numpy as np
I get an error telling me that there is no such module. I assume this has to do with the location of the installation, but I can't figure out how to:
A. Check that everything is actually installed properly
B. Check the location of the installation.
Any pointers would be greatly appreciated!
Thanks | 0 | 1 | 69,121 |
0 | 19,042,578 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-09-27T01:53:00.000 | 1 | 1 | 0 | How to enforce scipy.optimize.fmin_l_bfgs_b to use 'dtype=float32' | 19,041,486 | 1.2 | python,optimization,scipy,gpu,multidimensional-array | I am not sure you can ever do it. fmin_l_bfgs_b is provided not by pure python code, but by an extension (a wrapper around FORTRAN code). On Win32/64 platforms it can be found at \scipy\optimize\_lbfgsb.pyd. What you want may only be possible if you can compile the extension differently or modify the FORTRAN code. If you check that FORTRAN code, it has double precision all over the place, which is basically float64. I am not sure just changing them all to single precision will do the job.
Among the other optimization methods, cobyla is also provided by FORTRAN. Powell's methods too. | I am trying to optimize functions with GPU calculation in Python, so I prefer to store all my data as ndarrays with dtype=float32.
When I am using scipy.optimize.fmin_l_bfgs_b, I notice that the optimizer always passes a float64 (on my 64bit machine) parameter to my objective and gradient functions, even when I pass a float32 ndarray as the initial search point x0. This is different when I use the cg optimizer scipy.optimize.fmin_cg, where when I pass in a float32 array as x0, the optimizer will use float32 in all consequent objective/gradient function invocations.
So my question is: can I enforce scipy.optimize.fmin_l_bfgs_b to optimize on float32 parameters like in scipy.optimize.fmin_cg?
Thanks! | 0 | 1 | 1,678 |
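If the goal is only to keep the heavy objective/gradient work in float32 (e.g. on the GPU), one hedged workaround, not a way to change the FORTRAN code itself, is to cast inside the callback; the quadratic objective and gradient here are placeholders.
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def f_and_grad(x64):
    x = np.asarray(x64, dtype=np.float32)        # do the expensive work in float32
    f = float(np.sum(x ** 2))                    # placeholder objective
    g = 2.0 * x                                  # placeholder gradient
    return f, np.asarray(g, dtype=np.float64)    # hand float64 back to the optimizer

x0 = np.zeros(5, dtype=np.float32)
xopt, fval, info = fmin_l_bfgs_b(f_and_grad, x0)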
0 | 28,295,797 | 0 | 0 | 0 | 0 | 1 | true | 19 | 2013-09-27T19:18:00.000 | 20 | 2 | 0 | Specifying the line width of the legend frame, in matplotlib | 19,058,485 | 1.2 | python,matplotlib | For the width: legend.get_frame().set_linewidth(w)
For the color: legend.get_frame().set_edgecolor("red") | In matplotlib, how do I specify the line width and color of a legend frame? | 0 | 1 | 13,797 |
0 | 61,194,900 | 0 | 0 | 0 | 0 | 1 | false | 184 | 2013-09-28T20:10:00.000 | 6 | 11 | 0 | Drop columns whose name contains a specific string from pandas DataFrame | 19,071,199 | 1 | python,pandas,dataframe | This method does everything in place. Many of the other answers create copies and are not as efficient:
df.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True) | I have a pandas dataframe with the following column names:
Result1, Test1, Result2, Test2, Result3, Test3, etc...
I want to drop all the columns whose name contains the word "Test". The numbers of such columns is not static but depends on a previous function.
How can I do that? | 0 | 1 | 184,940 |
0 | 21,080,034 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-09-30T07:19:00.000 | 0 | 1 | 0 | How to resize y axis of a dendogram | 19,088,527 | 0 | python,scipy,hierarchical-clustering,dendrogram | If you're really only interested in the distance proportions between the fusions, you could
adapt your input linkage (cut off an offset in the third column of the linkage matrix). This will distort the absolute cophenetic distances, of course.
do some normalization of your input data, before clustering it
Or you
manipulate the dendrogram axes / adapt limits (I didn't try that) | I am using scipy.cluster.hierarchy as sch to draw a dendrogram after making a hierarchical clustering. The problem is that the clustering happens at the top of the dendrogram, between 0.8 and 1.0, which is the similarity degree on the y axis. How can I "cut" all the graph from 0 to 0.6 where nothing "interesting" graphically is happening? | 0 | 1 | 362
0 | 19,616,987 | 0 | 0 | 0 | 0 | 1 | false | 26 | 2013-10-01T03:48:00.000 | 6 | 4 | 0 | How to compute scipy sparse matrix determinant without turning it to dense? | 19,107,617 | 1 | python,numpy,scipy,linear-algebra,sparse-matrix | The "standard" way to solve this problem is with a cholesky decomposition, but if you're not up to using any new compiled code, then you're out of luck. The best sparse cholesky implementation is Tim Davis's CHOLMOD, which is licensed under the LGPL and thus not available in scipy proper (scipy is BSD). | I am trying to figure out the fastest method to find the determinant of sparse symmetric and real matrices in python. using scipy sparse module but really surprised that there is no determinant function. I am aware I could use LU factorization to compute determinant but don't see a easy way to do it because the return of scipy.sparse.linalg.splu is an object and instantiating a dense L and U matrix is not worth it - I may as well do sp.linalg.det(A.todense()) where A is my scipy sparse matrix.
I am also a bit surprised why others have not faced the problem of efficient determinant computation within scipy. How would one use splu to compute determinant?
I looked into pySparse and scikits.sparse.chlmod. The latter is not practical for me right now - it needs package installations, and I am also not sure how fast the code is before I go to all the trouble.
Any solutions? Thanks in advance. | 0 | 1 | 6,098 |
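For what it's worth, a hedged sketch of getting the determinant's magnitude out of splu without densifying anything: L has a unit diagonal, so |det(A)| is the product of U's diagonal entries (recovering the sign would also need the parity of the row/column permutations, which is omitted here). The toy matrix is only to make the snippet runnable.
import numpy as np
from scipy.sparse import identity, rand as sprand
from scipy.sparse.linalg import splu

A = (sprand(500, 500, density=0.01) + 10 * identity(500)).tocsc()   # toy sparse matrix
lu = splu(A)
log_abs_det = np.sum(np.log(np.abs(lu.U.diagonal())))               # log |det(A)|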
0 | 19,220,952 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-10-02T05:12:00.000 | 1 | 1 | 0 | How to detect start of raw-rgb video frame? | 19,130,365 | 0.197375 | python,video,rgb,gstreamer | If this really is raw rgb video, there is no (realistic) way to detect the start of the frame. I would assume your video would come as whole frames, so one buffer == one frame, and hence no need for such detection. | I have raw-rgb video coming from PAL 50i camera. How can I detect the start of frame, just like I would detect the keyframe of h264 video, in gstreamer? I would like to do that for indexing/cutting purposes. | 0 | 1 | 234 |
0 | 19,389,797 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-10-04T01:40:00.000 | 1 | 1 | 0 | Scipy or pandas for sparse matrix computations? | 19,171,822 | 0.197375 | python,numpy,matrix,pandas | After some research I found that both pandas and Scipy have structures to represent sparse matrix efficiently in memory. But none of them have out of box support for compute similarity between vectors like cosine, adjusted cosine, euclidean etc. Scipy support this on dense matrix only. For sparse, Scipy support dot products and others linear algebra basic operations. | I have to compute massive similarity computations between vectors in a sparse matrix. What is currently the best tool, scipy-sparse or pandas, for this task? | 0 | 1 | 839 |
0 | 19,185,124 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-10-04T15:22:00.000 | 4 | 1 | 0 | Is there a standard way to work with numerical probability density functions in Python? | 19,184,975 | 0.664037 | python,random,numpy,scipy,distribution | How about numpy.convolve? It takes two arrays, rather than two functions, which seems ideal for your use. I'll also mention the ECDF function in the statsmodels package in case you really want to turn your observations into (step) functions. | I have a continuous random variable given by its density distribution function or by cumulative probability distribution function.
The distribution functions are not analytical. They are given numerically (for example as a list of (x,y) values).
One of the things that I would like to do with these distributions is to find a convolution of two of them (to have a distribution of a sum of two random properties).
I do not want to write my own function for that if there is already something standard and tested. Does anybody know if it is the case? | 0 | 1 | 287 |
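A small sketch of the numpy.convolve route for densities tabulated on a common uniform grid; the two Gaussian-shaped arrays are placeholders for the numerically given pdfs, and multiplying by the grid spacing keeps the result normalized.
import numpy as np

dx = 0.01
x = np.arange(-5, 5, dx)
pdf_a = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)        # placeholder tabulated density
pdf_b = np.exp(-(x - 1) ** 2 / 2) / np.sqrt(2 * np.pi)  # placeholder tabulated density

pdf_sum = np.convolve(pdf_a, pdf_b) * dx                # density of the sum of the two variables
x_sum = x[0] + x[0] + dx * np.arange(pdf_sum.size)      # grid the convolution lives on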
0 | 19,217,476 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2013-10-06T20:09:00.000 | 0 | 4 | 0 | How do I use Gimp / OpenCV Color to separate images into coloured RGB layers? | 19,213,407 | 0 | python,opencv,rgb,gimp | Since the blue, green and red images each have only 1 channel, they are basically gray-scale images.
If you want to add colors to dog_blue.jpg, for example, then you create a 3-channel image and copy the contents into all the channels, or do cvCvtColor(src,dst,CV_GRAY2BGR). Now you will be able to add colors to it as it has become a 3-channel image. | I have a JPG image, and I would like to find a way to:
Decompose the image into red, green and blue intensity layers (8 bit per channel).
Colorise each of these now 'grayscale' images with its appropriate color
Produce 3 output images in appropriate color, of each channel.
For example if I have an image:
dog.jpg
I want to produce:
dog_blue.jpg dog_red.jpg and dog_green.jpg
I do not want grayscale images for each channel. I want each image to be represented by its correct color.
I have managed to use the decompose function in gimp to get the layers, but each one is grayscale and I can't seem to add color to it.
I am currently using OpenCV and Python bindings for other projects so any suitable code that side may be useful if it is not easy to do with gimp | 0 | 1 | 2,596 |
0 | 55,448,010 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2013-10-06T20:09:00.000 | 0 | 4 | 0 | How do I use Gimp / OpenCV Color to separate images into coloured RGB layers? | 19,213,407 | 0 | python,opencv,rgb,gimp | In the BGR image, you have three channels. When you split the channels using the split() function, like B,G,R=cv2.split(img), each of B, G, R becomes a single-channel (monochannel) image. So you need to add two extra channels of zeros to make each a 3-channel image that is only activated for its specific color channel. | I have a JPG image, and I would like to find a way to:
Decompose the image into red, green and blue intensity layers (8 bit per channel).
Colorise each of these now 'grayscale' images with its appropriate color
Produce 3 output images in appropriate color, of each channel.
For example if I have an image:
dog.jpg
I want to produce:
dog_blue.jpg dog_red.jpg and dog_green.jpg
I do not want grayscale images for each channel. I want each image to be represented by its correct color.
I have managed to use the decompose function in gimp to get the layers, but each one is grayscale and I can't seem to add color to it.
I am currently using OpenCV and Python bindings for other projects so any suitable code that side may be useful if it is not easy to do with gimp | 0 | 1 | 2,596 |
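A short OpenCV sketch of what both answers describe (keep one channel, zero the other two); the file names come from the question and are otherwise arbitrary.
import cv2
import numpy as np

img = cv2.imread("dog.jpg")                  # OpenCV loads images as BGR
b, g, r = cv2.split(img)
zeros = np.zeros_like(b)
cv2.imwrite("dog_blue.jpg",  cv2.merge([b, zeros, zeros]))
cv2.imwrite("dog_green.jpg", cv2.merge([zeros, g, zeros]))
cv2.imwrite("dog_red.jpg",   cv2.merge([zeros, zeros, r]))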
0 | 19,221,774 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-10-07T09:52:00.000 | 6 | 2 | 0 | How many columns in pandas, python? | 19,221,694 | 1 | python,pandas | You get an out of memory error because you run out of memory, not because there is a limit on the number of columns. | Does anyone know the maximum number of columns in pandas, python?
I have just created a pandas dataframe with more than 20,000 columns, but I got a memory error.
Thanks a lot | 0 | 1 | 2,587 |
0 | 19,228,974 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-10-07T15:10:00.000 | 0 | 1 | 0 | macport and FINK | 19,228,380 | 0 | python,macos,port,fink | In terms of how your Python interpreter works, no: there is no negative effect on having Fink Python as well as MacPorts Python installed on the same machine, just as there is no effect from having multiple installations of Python by anything. | I have a mac server and I have both FINK and macport installation of python/numpy/scipy
I was wondering if having both will affect the other? In terms of memory leaks/unusual results?
In case you are wondering why both ? Well I like FINK but macports allows me to have python2.4 which FINK does not provide (yes I needed an old version for a piece of code I have)
I wonder this since I tried to use homebrew once and it complained about the machine having port and FINK (I did not realize that port provided python2.4 so was looking at homebrew but when I realized port did give 2.4 I abandoned it) | 0 | 1 | 421 |
0 | 19,235,077 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2013-10-07T21:17:00.000 | 0 | 2 | 0 | Linked Matrix Implementation in Python? | 19,234,950 | 0 | python,list,matrix,linked-list | There's more than one way to interpret this, but one option is:
Have a single "head" node at the top-left corner and a "tail" node at the bottom-right. There will then be row-head, row-tail, column-head, and column-tail nodes, but these are all accessible from the overall head and tail, so you don't need to keep track of them, and they're already part of the linked matrix, so they don't need to be part of a separate linked list.
(Of course a function that builds up an RxC matrix of zeroes will probably have local variables representing the current row's head/tail, but that's not a problem.) | I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are suppose to create a linked matrix with references to a north, south, east, and west node. I am at a loss of how to implement this. A persistent problem that bothers me is the head node and tail node. The user inputs the number of rows and the number of columns. Should I have multiple head nodes then at the beginning of each row and multiple tail nodes at the end of each row? If so, should I store the multiple head/tail nodes in a list?
Thank you. | 0 | 1 | 1,333 |
0 | 19,237,061 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2013-10-07T21:17:00.000 | 0 | 2 | 0 | Linked Matrix Implementation in Python? | 19,234,950 | 0 | python,list,matrix,linked-list | It really depends on what options you want/need to efficiently support.
For instance, a singly linked list with only a head pointer can be a stack (insert and remove at the head). If you add a tail pointer you can insert at either end, but only remove at the head (stack or queue). A doubly linked list can support insertion or deletion at either end (deque). If you try to implement an operation that your data structure is not designed for you incur an O(N) penalty.
So I would start with a single pointer to the (0,0) element and then start working on the operations your instructor asks for. You may find you need additional pointers, you may not. My guess would be that you will be fine with a single head pointer. | I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are suppose to create a linked matrix with references to a north, south, east, and west node. I am at a loss of how to implement this. A persistent problem that bothers me is the head node and tail node. The user inputs the number of rows and the number of columns. Should I have multiple head nodes then at the beginning of each row and multiple tail nodes at the end of each row? If so, should I store the multiple head/tail nodes in a list?
Thank you. | 0 | 1 | 1,333 |
0 | 43,874,003 | 0 | 0 | 0 | 0 | 2 | false | 9 | 2013-10-08T19:46:00.000 | 1 | 5 | 0 | Python - how to normalize time-series data | 19,256,930 | 1 | python,time-series | The solutions given are good for a series that is neither incremental nor decremental (stationary). In financial time series (or any other series with a bias) the formula given is not right. The series should first be detrended, or a scaling should be performed based on the latest 100-200 samples.
And if the time series doesn't come from a normal distribution (as is the case in finance), it is advisable to apply a non-linear function (a standard CDF function, for example) to compress the outliers.
Aronson and Masters book (Statistically sound Machine Learning for algorithmic trading) uses the following formula ( on 200 day chunks ):
V = 100 * N( 0.5 * (X - F50) / (F75 - F25) ) - 50
Where:
X : data point
F50 : mean of the latest 200 points
F75 : percentile 75
F25 : Percentile 25
N : normal CDF | I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of normalizing the data. That is, making all of the time-series examples fall between a certain region e.g [0,100]. Can anyone tell me how this can be done in python | 0 | 1 | 20,885 |
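A tentative pandas/scipy rendering of that formula on rolling 200-sample windows (F50 is taken as the rolling mean, as the answer defines it; the random-walk series is only a stand-in, and the rolling API assumes a reasonably recent pandas):
import numpy as np
import pandas as pd
from scipy.stats import norm

x = pd.Series(np.random.randn(1000).cumsum())    # stand-in for a price-like series
w = 200
f50 = x.rolling(w).mean()
f75 = x.rolling(w).quantile(0.75)
f25 = x.rolling(w).quantile(0.25)
v = 100 * norm.cdf(0.5 * (x - f50) / (f75 - f25)) - 50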
0 | 21,486,466 | 0 | 0 | 0 | 0 | 2 | false | 9 | 2013-10-08T19:46:00.000 | 0 | 5 | 0 | Python - how to normalize time-series data | 19,256,930 | 0 | python,time-series | I'm not going to give the Python code, but the definition of normalizing, is that for every value (datapoint) you calculate "(value-mean)/stdev". Your values will not fall between 0 and 1 (or 0 and 100) but I don't think that's what you want. You want to compare the variation. Which is what you are left with if you do this. | I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of normalizing the data. That is, making all of the time-series examples fall between a certain region e.g [0,100]. Can anyone tell me how this can be done in python | 0 | 1 | 20,885 |
0 | 64,969,412 | 0 | 1 | 0 | 0 | 1 | false | 61 | 2013-10-11T02:25:00.000 | 2 | 7 | 0 | How to (intermittently) skip certain cells when running IPython notebook? | 19,309,287 | 0.057081 | python,ipython,ipython-notebook,ipython-magic | The simplest way to keep Python code in a Jupyter notebook cell from running: I temporarily convert those cells to markdown. | I usually have to rerun (most parts of) a notebook when I reopen it, in order to get access to previously defined variables and go on working.
However, sometimes I'd like to skip some of the cells, which have no influence to subsequent cells (e.g., they might comprise a branch of analysis that is finished) and could take very long time to run. These cells can be scattered throughout the notebook, so that something like "Run All Below" won't help much.
Is there a way to achieve this?
Ideally, those cells could be tagged with some special flags, so that they could be "Run" manually, but would be skipped when "Run All".
EDIT
%%cache (ipycache extension) as suggested by @Jakob solves the problem to some extent.
Actually, I don't even need to load any variables (which can be large but unnecessary for following cells) when re-run, only the stored output matters as analyzing results.
As a work-around, put %%cache folder/unique_identifier to the beginning of the cell. The code will be executed only once and no variables will be loaded when re-run unless you delete the unique_identifier file.
Unfortunately, all the output results are lost when re-run with %%cache...
EDIT II (Oct 14, 2013)
The master version of ipython+ipycache now pickles (and re-displays) the codecell output as well.
For rich display outputs including Latex, HTML(pandas DataFrame output), remember to use IPython's display() method, e.g., display(Latex(r'$\alpha_1$')) | 0 | 1 | 40,298 |
0 | 19,317,456 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-10-11T11:15:00.000 | 1 | 1 | 0 | Is there a way to see column 'grams' of the TfidfVectoririzer output? | 19,316,788 | 1.2 | python,scikit-learn,tf-idf | Use the get_feature_names method, as specified in the comments by larsmans | I want to visualize the "words/grams" used in the columns of the TfidfVectorizer output in the python-scikit library. Is there a way?
I tried to convert the csr matrix to an array, but cannot see a header composed of the grams. | 0 | 1 | 31
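A tiny sketch of that: after fitting, get_feature_names lines the grams up with the columns of the sparse output (in much newer scikit-learn releases the method was renamed get_feature_names_out). The toy documents are arbitrary.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(docs)          # sparse matrix, one column per gram
print(vec.get_feature_names())       # column j of X corresponds to this gram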
0 | 19,352,360 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-10-14T01:23:00.000 | -1 | 2 | 0 | Sample from weighted histogram | 19,352,225 | -0.099668 | python,numpy,histogram,sample | You need to refine your problem statement better. For example, if your array has only 1 row, what do you expect. If your array has 20,000 rows what do you expect? ... | I have a 2 column array, 1st column weights and the 2 nd column values which I am plotting using python. I would like to draw 20 samples from this weighted array, proportionate to their weights. Is there a python/numpy command which does that? | 0 | 1 | 1,324 |
0 | 19,356,586 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2013-10-14T08:15:00.000 | 1 | 3 | 0 | How can I implement a data structure that preserves order and has fast insertion/removal? | 19,355,986 | 0.066568 | python,data-structures,python-3.x,deque | Using doubly-linked lists in Python is a bit uncommon. However, your own proposed solution of a doubly-linked list and a dictionary has the correct complexity: all the operations you ask for are O(1).
I don't think there is in the standard library a more direct implementation. Trees might be nice theoretically, but also come with drawbacks, like O(log n) or (precisely) their general absence from the standard library. | I'm looking for a data structure that preserves the order of its elements (which may change over the life of the data structure, as the client may move elements around).
It should allow fast search, insertion before/after a given element, removal of a given element, lookup of the first and last elements, and bidirectional iteration starting at a given element.
What would be a good implementation?
Here's my first attempt:
A class deriving from both collections.abc.Iterable and collections.abc.MutableSet that contains a linked list and a dictionary. The dictionary's keys are elements, values are nodes in the linked list. The dictionary would handle search for a node given an element. Once an element is found, the linked list would handle insertion before/after, deletion, and iteration. The dictionary would be updated by adding or deleting the relevant key/value pair. Clearly, with this approach the elements must be hashable and unique (or else, we'll need another layer of indirection where each element is represented by an auto-assigned numeric identifier, and only those identifiers are stored as keys).
It seems to me that this would be strictly better in asymptotic complexity than either list or collections.deque, but I may be wrong. [EDIT: Wrong, as pointed out by @roliu. Unlike list or deque, I would not be able to find an element by its numeric index in O(1). As of now, it is O(N) but I am sure there's some way to make it O(log N) if necessary.] | 0 | 1 | 708 |
0 | 19,374,300 | 0 | 1 | 0 | 0 | 1 | false | 116 | 2013-10-15T06:02:00.000 | 7 | 6 | 0 | Assigning a variable NaN in python without numpy | 19,374,254 | 1 | python,constants,nan | You can do float('nan') to get NaN. | Most languages have a NaN constant you can use to assign a variable the value NaN. Can python do this without using numpy? | 0 | 1 | 180,714 |
0 | 19,378,238 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-10-15T09:52:00.000 | 0 | 4 | 0 | Python: Merging two arbitrary data structures | 19,378,143 | 0 | python | If you know one structure is always a subset of the other, then just iterate the superset and in O(n) time you can check element by element whether it exists in the subset and if it doesn't, put it there. As far as I know there's no magical way of doing this other than checking it manually element by element. Which, as I said, is not bad as it can be done with O(n) complexity. | I am looking to efficiently merge two (fairly arbitrary) data structures: one representing a set of default values and one representing overrides. Example data below. (Naively iterating over the structures works, but is very slow.) Thoughts on the best approach for handling this case?
_DEFAULT = { 'A': 1122, 'B': 1133, 'C': [ 9988, { 'E': [ { 'F': 6666, }, ], }, ], }
_OVERRIDE1 = { 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }
_ANSWER1 = { 'A': 1122, 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }
_OVERRIDE2 = { 'C': [ 6543, { 'E': [ { 'G': 9876, }, ], }, ], }
_ANSWER2 = { 'A': 1122, 'B': 1133, 'C': [ 6543, { 'E': [ { 'F': 6666, 'G': 9876, }, ], }, ], }
_OVERRIDE3 = { 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }
_ANSWER3 = { 'A': 1122, 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }
This is an example of how to run the tests:
(The dictionary update doesn't work; it's just a stub function.)
import itertools
def mergeStuff( default, override ):
    # This doesn't work
    result = dict( default )
    result.update( override )
    return result
def main():
    for override, answer in itertools.izip( _OVERRIDES, _ANSWERS ):
        result = mergeStuff(_DEFAULT, override)
        print('ANSWER: %s' % (answer) )
        print('RESULT: %s\n' % (result) ) | 0 | 1 | 1,800
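One hedged way to flesh out the element-by-element idea from the answer is a recursive merge; the exact semantics below (override wins at leaves, dicts merged by key, lists merged positionally) are inferred from the examples in the question and may need adjusting.
def merge(default, override):
    if isinstance(default, dict) and isinstance(override, dict):
        out = dict(default)
        for key, value in override.items():
            out[key] = merge(default[key], value) if key in default else value
        return out
    if isinstance(default, list) and isinstance(override, list):
        out = list(override)
        for i, value in enumerate(default):
            if i < len(out):
                out[i] = merge(value, out[i])    # merge matching positions
        return out
    return override                              # scalars: the override wins

# e.g. merge(_DEFAULT, _OVERRIDE1) reproduces _ANSWER1 above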
0 | 19,423,078 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-10-17T09:22:00.000 | 0 | 2 | 0 | how to draw a nonlinear function using matplotlib? | 19,422,749 | 0 | python,matplotlib | my 2 cents:
x^3+y^3+y^2+2xy^2=0
y^2=-x^3-y^3-2xy^2
y^2>0 => -x^3-y^3-2xy^2>0 => x^3+y^3+2xy^2<0 =>
x(x^2+2y^2)+y^3<0 => x(x^2+2y^2)<-y^3 => (x^2+2y^2)<-y^3/x
0<(x^2+2y^2) => 0<-y^3/x => 0>y^3/x =>
(x>0 && y<0) || (x<0 && y>0)
your graph will span across the 2nd and 4th quadrants | I would like to draw the curve a generic cubic function using matplotlib. I want to draw curves that are defined by functions such as: x^3 + y^3 + y^2 + 2xy^2 = 0. Is this possible to do? | 0 | 1 | 2,800 |
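Independently of the sign analysis above, one hedged way to actually draw such an implicit curve in matplotlib is to evaluate the expression on a grid and plot its zero contour; the plotting range is arbitrary.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 400)
y = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(x, y)
F = X ** 3 + Y ** 3 + Y ** 2 + 2 * X * Y ** 2
plt.contour(X, Y, F, levels=[0])     # the zero level set is the curve
plt.show()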
0 | 41,857,586 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-10-17T20:06:00.000 | 0 | 1 | 0 | Python Pandas Excel Display | 19,436,220 | 0 | python,excel,pandas | It sounds to me like your python code is inserting a carriage return either before or after the value.
I've replicated this behavior in Excel 2016 and can confirm that the cell appears blank, but does contain a value.
Furthermore, I've verified that using the text to columns will parse the carriage return out. | I used the Python Pandas library as a wrap-around instead of using SQL. Everything worked perfectly, except when I open the output excel file, the cells appear blank, but when I click on the cell, I can see the value in the cell above. Additionally, Python and Stata recognize the value in the cell, even though the eye cannot see it. Furthermore, if I do "text to columns", then the values in the cell become visible to the eye.
Clearly it's a pain to go through every column and click "text to columns", and I'm wondering the following:
(1) Why is the value not visible to the eye when it exists in the cell?
(2) What's the easiest way to make all the values visible to the eye aside from the cumbersome "text to columns" for all columns approach?
(3) I did a large number of tests to make sure the non-visible values in the cells in fact worked in analysis. Is my assumption that the non-visible values in the cells will always be accurate, true?
Thanks in advance for any help you can provide! | 0 | 1 | 311 |
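If the stray-carriage-return hypothesis from the answer is right, one hedged fix on the Python side is to strip those characters before writing; the tiny DataFrame and output file name are only for illustration.
import pandas as pd

df = pd.DataFrame({"name": ["alice\r", "bob\r\n"]})   # stand-in with stray line breaks
clean = df.applymap(lambda v: v.replace("\r", "").replace("\n", " ")
                    if isinstance(v, str) else v)
clean.to_excel("out.xlsx", index=False)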
0 | 58,662,181 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2013-10-20T06:48:00.000 | 0 | 3 | 0 | Pandas DataFrame.reset_index for columns | 19,474,693 | 0 | python,pandas | Transpose df, reset index, and transpose again.
df.T.reset_index().T | Is there a reset_index equivalent for the column headings? In other words, if the column names are a MultiIndex, how would I drop one of the levels? | 0 | 1 | 4,607
0 | 69,479,472 | 0 | 0 | 0 | 0 | 1 | false | 29 | 2013-10-21T03:04:00.000 | 1 | 4 | 0 | Python random sample of two arrays, but matching indices | 19,485,641 | 0.049958 | python,random,numpy | Using the numpy.random.randint function, you generate a list of random numbers, meaning that you can select certain datapoints twice. | I have two numpy arrays x and y, which have length 10,000.
I would like to plot a random subset of 1,000 entries of both x and y.
Is there an easy way to use the lovely, compact random.sample(population, k) on both x and y to select the same corresponding indices? (The y and x vectors are linked by a function y(x) say.)
Thanks. | 0 | 1 | 15,029 |
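A small numpy alternative that avoids the duplicate-selection issue mentioned in the answer: draw one set of indices without replacement and apply it to both arrays. The arrays here are stand-ins.
import numpy as np

x = np.random.rand(10000)
y = np.sin(x)                                        # y is linked to x by some function
idx = np.random.choice(x.size, size=1000, replace=False)
x_sub, y_sub = x[idx], y[idx]                        # matching entries from both arrays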
0 | 19,501,335 | 0 | 1 | 0 | 0 | 1 | false | 10 | 2013-10-21T17:46:00.000 | 1 | 6 | 0 | How do I ONLY round a number/float down in Python? | 19,501,279 | 0.033321 | python,integer,rounding | I'm not sure whether you want math.floor, math.trunc, or int, but... it's almost certainly one of those functions, and you can probably read the docs and decide more easily than you can explain enough for us to decide for you. | I will have this random number generated e.g 12.75 or 1.999999999 or 2.65
I want to always round this number down to the nearest integer whole number so 2.65 would be rounded to 2.
Sorry for asking but I couldn't find the answer after numerous searches, thanks :) | 0 | 1 | 42,133 |
0 | 19,510,028 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2013-10-22T05:00:00.000 | 4 | 1 | 0 | How to merge specific axes without ambuigity with numpy.ndarray | 19,509,314 | 0.664037 | python,numpy,reshape | Using reshape is never ambiguous. It doesn't change the memory-layout of the data.
Indexing is always done using the strides determined by the shape.
The right-most axis has stride 1, while the axes to the left have strides given by the product of the sizes to their right.
That means for you: as long as you collect neighboring axes, it will do the "right" thing. | Basically I want to reshape tensors represented by numpy.ndarray.
For example, I want to do something like this (latex notation)
A_{i,j,k,l,m,n,p} -> A_{i,jk,lm,np}
or
A_{i,j,k,l,m,n,p} -> A_{ij,k,l,m,np}
where A is an ndarray. i,j,k,... denotes the original axes.
so the new axis 2 becomes the "flattened" version of axis 2 and 3, etc. If I simply use numpy.reshape, I don't think it knows what axes I want to merge, so it seems ambiguous and error prone.
Is there any neat way of doing this rather than creating another ndarray manually? | 0 | 1 | 802 |
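A tiny sketch of what the answer means in practice; the axis sizes are arbitrary, and only neighboring axes are merged, so the result is unambiguous.
import numpy as np

A = np.random.rand(2, 3, 4, 5, 6, 7, 8)              # axes i, j, k, l, m, n, p
i, j, k, l, m, n, p = A.shape
B = A.reshape(i, j * k, l * m, n * p)                 # A_{i,jk,lm,np}
C = A.reshape(i * j, k, l, m, n * p)                  # A_{ij,k,l,m,np}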
0 | 19,540,052 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-10-22T17:50:00.000 | 4 | 2 | 0 | Get most common colours in an image using OpenCV | 19,524,905 | 0.379949 | python,opencv | I would transform the images to the HSV color space and then compute a histogram of the H values. Then, take the bins with the largest values. | Hi I'm using Opencv and I want to find the n most common colors of an image using x sensitivity. How could I do this? Are there any opencv functions to do this?
Cheers!
*Note: this isn't homework, i'm just using opencv for fun! | 0 | 1 | 4,153 |
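A minimal sketch of the HSV-histogram idea from the answer; the file name and the choice of n are placeholders.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])   # hue histogram (OpenCV hue runs 0-179)
n = 5
top_hues = np.argsort(hist.ravel())[::-1][:n]             # the n most common hue bins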
0 | 19,532,207 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-10-23T03:18:00.000 | 1 | 3 | 0 | Most efficient way to store data on drive | 19,532,159 | 0.066568 | python,sqlite,csv | I would write all the lines to one file. For 10,000 lines it's probably not worthwhile, but you can pad all the lines to the same length - say 1000 bytes.
Then it's easy to seek to the nth line, just multiply n by the line length | baseline - I have CSV data with 10,000 entries. I save this as 1 csv file and load it all at once.
alternative - I have CSV data with 10,000 entries. I save this as 10,000 CSV files and load it individually.
Approximately how much more inefficient is this computationally. I'm not hugely interested in memory concerns. The purpose of the alternative method is because I frequently need to access subsets of the data and don't want to have to read the entire array.
I'm using python.
Edit: I can use other file formats if needed.
Edit1: SQLite wins. Amazingly easy and efficient compared to what I was doing before. | 0 | 1 | 1,356 |
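For completeness, a rough sketch of the padded fixed-width idea from the answer (SQLite, as the edit says, is probably the easier route); the file name, row data and 1000-byte width are made up.
LINE_LEN = 1000
rows = ["1,2,3", "4,5,6", "7,8,9"]                 # stand-in CSV rows

with open("data.bin", "wb") as f:
    for row in rows:
        f.write(row.encode().ljust(LINE_LEN, b" ")[:LINE_LEN])

def read_row(n):
    with open("data.bin", "rb") as f:
        f.seek(n * LINE_LEN)                       # jump straight to the nth row
        return f.read(LINE_LEN).decode().rstrip()

print(read_row(1))                                 # -> "4,5,6"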
0 | 19,650,488 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-10-25T06:02:00.000 | 0 | 3 | 0 | How to find eigenvectors and eigenvalues without numpy and scipy? | 19,582,197 | 0 | python,linear-algebra | Writing a program to solve an eigenvalue problem is about 100 times as much work as fixing the library mismatch problem. | I need to calculate eigenvalues and eigenvectors in python. numpy and scipy do not work. They both write Illegal instruction (core dumped). I found out that to resolve the problem I need to check my blas/lapack. So, I thought that may be an easier way is to write/find a small function to solve the eigenvalue problem. Does anybody know if such solutions exist? | 0 | 1 | 12,872 |
0 | 19,674,497 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-10-25T13:59:00.000 | 0 | 1 | 0 | Installing scikits.bvp_solver | 19,591,907 | 0 | python,scipy,enthought,scikits,canopy | You can try to download the package tar.gz and use easy_install . Or you can unpack the package and use the standard way of python setup.py install. I believe both ways require a fortran compiler. | I need to use scikits.bvp_solver in python.
I currently use Canopy as my standard Python interface, where this package isn't available. Is there another available package for solving boundary value problems? I have also tried downloading using macports but the procedure sticks when it tries building gcc48 dependency. | 0 | 1 | 611 |
0 | 19,611,533 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-10-26T19:52:00.000 | 2 | 1 | 0 | Find a point in 3d collinear with 2 other points | 19,611,177 | 0.379949 | python,algorithm,3d,point | Given 2 points, (x1,y1,z1) and (x2,y2,z2), you can take the difference between the two, so you end up with (x2-x1,y2-y1,z2-z1). Take the norm of this (i.e. take the distance between the original 2 points), and divide (x2-x1,y2-y1,z2-z1) by that value. You now have a vector with the same slope as the line between the first 2 points, but it has magnitude one, since you normalized it (by dividing by its magnitude). Then add/subtract that vector to one of the original points to get your final answer. | I need to write a script in python which, given the coordinates of 2 points in 3d space, finds a collinear point at a distance of 1 unit from one of the given points. This third point must lie between the two given points.
I think I will manage with scripting but I am not really sure how to calculate it from mathematical point of view. I found some stuff on google, but they do not answer my question.
Thanks for any advice. | 0 | 1 | 1,351 |
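A short numpy version of that recipe (it assumes the two points are at least 1 unit apart, so the new point really lies between them); the coordinates are arbitrary.
import numpy as np

p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([3.0, 4.0, 0.0])
direction = (p2 - p1) / np.linalg.norm(p2 - p1)    # unit vector from p1 toward p2
p3 = p1 + 1.0 * direction                          # 1 unit from p1, on the segment p1-p2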