GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 21,508,062 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2014-02-02T06:58:00.000 | 3 | 1 | 0 | Plotting dictionaries within a dictionary in Myplotlib python | 21,507,956 | 1.2 | python,matplotlib | Creating sample data
In [3]: data = {'title1': {10:20, 4:10}, 'title2':{8:10, 9:20, 10:30}}
In [4]: data
Out[4]: {'title1': {4: 10, 10: 20}, 'title2': {8: 10, 9: 20, 10: 30}}
Iterating over data; creating x and y for each title and plotting it in new figure
In [5]: for title, data_dict in data.iteritems():
...: x = data_dict.keys()
...: y = data_dict.values()
...: plt.figure()
...: plt.plot(x,y)
...: plt.title(title)
If you are not using IPython
plt.show() | I need help plotting a dictionary, below is the data sample data set. I want to create a graph where x:y are (x,y) coordinates and title'x' would be the title of the graph.. I want to create individual graphs for each data set so one for title1':{x:y, x:y}, another one for title2:{x:y, x:y}....and so on.
Any help would be greatly appreciated. Thank you.
data = {'title1':{x:y, x:y},title2:{x:y,x:y,x:y},'title3':{x:y,x:y}....} | 0 | 1 | 6,297 |
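A self-contained, Python 3 version of the transcript in the answer above (same made-up sample data; .items() replaces the Python 2 .iteritems()):

```python
import matplotlib.pyplot as plt

# hypothetical sample data in the same shape as the question: {title: {x: y, ...}}
data = {'title1': {10: 20, 4: 10}, 'title2': {8: 10, 9: 20, 10: 30}}

for title, data_dict in data.items():   # .items() on Python 3
    xs = sorted(data_dict)              # sort keys so the line is drawn left to right
    ys = [data_dict[x] for x in xs]
    plt.figure()                        # one new figure per title
    plt.plot(xs, ys)
    plt.title(title)

plt.show()                              # needed outside IPython, as the answer notes
```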
0 | 21,532,842 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-02-03T16:39:00.000 | 1 | 2 | 0 | Handling K-means with large dataset 6gb with scikit-learn? | 21,532,724 | 0.099668 | python,scikit-learn | Clustering is not in itself that well-defined a problem (a 'good' clustering result depends on your application) and k-means algorithm only gives locally optimal solutions based on random initialization criteria. Therefore I doubt that the results you would get from clustering a random 2GB subsample of the dataset would be qualitatively different from the results you would get clustering over the entire 6GB. I would certainly try clustering on the reduced dataset as a first port of call. Next options are to subsample more intelligently, or do multiple training runs with different subsets and do some kind of selection/ averaging across multiple runs. | I am using scikit-learn. I want to cluster a 6gb dataset of documents and find clusters of documents.
I only have about 4Gb ram though. Is there a way to get k-means to handle large datasets in scikit-learn?
Thank you, Please let me know if you have any questions. | 0 | 1 | 2,177 |
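A minimal sketch of the subsampling strategy suggested in the answer above (the array, the subsample fraction and the number of clusters are stand-ins for the real 6 GB document vectors):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100000, 50)          # stand-in for the real document vectors

rng = np.random.RandomState(0)
idx = rng.choice(X.shape[0], size=X.shape[0] // 3, replace=False)  # random subsample

km = KMeans(n_clusters=20, random_state=0).fit(X[idx])  # fit on the reduced dataset
labels_for_everything = km.predict(X)                    # assign all points to the fitted centers
```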
0 | 21,613,541 | 0 | 0 | 0 | 0 | 2 | true | 4 | 2014-02-06T19:54:00.000 | 0 | 5 | 0 | Determine if determinant is exactly zero | 21,612,677 | 1.2 | python,math,numpy,linear-algebra | As the entries in the matrices are either 1 or 0 the smallest non-zero absolute value of a determinant is 1. So there is no need to fear a true non-zero value that is very close to 0.
Alternatively one can apparently use sympy to get an exact answer. | I have a lot of 10 by 10 (0,1)-matrices and I would like to determine which have determinant exactly 0 (that is which are singular). Using scipy.linalg.det I get a floating point number which I have to test to see if it is close to zero. Is it possible to do the calculation exactly so I can be sure I am not finding false positives?
On the other hand, maybe there is some guarantee about the smallest eigenvalue which can be used to make sure the floating point method never makes a false positive? | 0 | 1 | 2,357 |
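The sympy route mentioned in the answer above gives an exact integer determinant, so the zero test needs no floating-point tolerance (the random matrix is just a stand-in for one of the 10 by 10 (0,1)-matrices):

```python
import numpy as np
import sympy

M = np.random.randint(0, 2, size=(10, 10))   # hypothetical (0,1)-matrix
det = sympy.Matrix(M.tolist()).det()          # exact, integer-arithmetic determinant
is_singular = (det == 0)
```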
0 | 21,613,054 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2014-02-06T19:54:00.000 | 3 | 5 | 0 | Determine if determinant is exactly zero | 21,612,677 | 0.119427 | python,math,numpy,linear-algebra | You can use Gaussian elimination to bring the matrix to a triangular form.
Since your elements are all 0 or 1, the calculation even using floating point arithmetic will be exact (you are only multiplying/dividing/adding/subtracting by -1, 0 and 1, which is exact).
The determinant is then 0 if one element of the diagonal is zero and nonzero otherwise.
So for this specific algorithm (Gaussian elimination), calculation of the determinant will be exact even in floating point arithmetic.
This algorithm also should be pretty efficient. It can even be implemented using integers, which is faster and shows even in a more obvious way that the problem is exactly solvable.
EDIT: the point is, that an algorithm which operates on the 0,1 matrix can be exact. It depends on the algorithm. I would check how det() is implemented and maybe, there is no issue with numerical noise, and, in fact, you could just test for det(M) == 0.0 and get neither false negatives nor false positives. | I have a lot of 10 by 10 (0,1)-matrices and I would like to determine which have determinant exactly 0 (that is which are singular). Using scipy.linalg.det I get a floating point number which I have to test to see if it is close to zero. Is it possible to do the calculation exactly so I can be sure I am not finding false positives?
On the other hand, maybe there is some guarantee about the smallest eigenvalue which can be used to make sure the floating point method never makes a false positive? | 0 | 1 | 2,357 |
0 | 21,645,757 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-02-07T16:38:00.000 | 0 | 1 | 0 | The predict method shows standardized probability? | 21,633,136 | 0 | python-2.7,probability,scikit-learn,prediction,adaboost | Do you mean you get probabilities per sample that are 1/n_classes on average? That's necessarily the case; the probabilities reported by predict_proba are the conditional class probability distribution P(y|X) over all values for y. To produce different probabilities, perform any necessary computations according to your probability model. | I'm using the AdaBoostClassifier in Scikit-learn and always get an average probability of 0.5 regardless of how unbalanced the training sets are. The class predictions (predict_) seems to give correct estimates, but these aren't reflected in the predict_probas method which always average to 0.5.
If my "real" probability is 0.02, how do I transform the standardized probability to reflect that proportion? | 0 | 1 | 213 |
0 | 24,217,870 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-02-08T00:08:00.000 | 1 | 2 | 0 | Error when trying to sum an array by block's | 21,640,028 | 0.099668 | python,arrays,numpy | In case anyone else has a similar problem but the chosen answer doesn't solve it, one possibility could be that in Python3, some index or integer quantity fed into a np function is an expression using '/' for example n/2, which ought to be '//'. | I have a large dataset stored in a numpy array (A) I am trying to sum by block's using:
B=numpy.add.reduceat(numpy.add.reduceat(A, numpy.arange(0, A.shape[0], n),axis=0), numpy.arange(0, A.shape[1], n), axis=1)
it works fine when I try it on a test array, but with my data I get the following message:
TypeError: Cannot cast array data from dtype('float64') to dtype('int32') according to the rule 'safe'
Does someone now how to handle this?
Thanks for the help. | 0 | 1 | 4,600 |
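Following the answer's point about '/' versus '//' under Python 3, a sketch of the block sum with explicitly integer index arrays (the block size n is an assumption):

```python
import numpy as np

A = np.random.rand(100, 100)
n = 10                                   # block size; A.shape[0] // n blocks per axis
# np.arange with integer start/step yields int indices; an expression like A.shape[0] / 2
# would be a float under Python 3 and trigger the "safe" casting error, A.shape[0] // 2 would not.
idx0 = np.arange(0, A.shape[0], n)
idx1 = np.arange(0, A.shape[1], n)
B = np.add.reduceat(np.add.reduceat(A, idx0, axis=0), idx1, axis=1)
```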
0 | 21,656,184 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2014-02-08T21:53:00.000 | -2 | 2 | 0 | nltk interface to stanford parser | 21,652,251 | -0.197375 | python,nlp,nltk,stanford-nlp | There is no module named stanford in NLTK.You can store output of stanford parser and make use of it through python program. | I am getting problems to access Stanford parser through python NLTK (they developed an interface for NLTK)
import nltk.tag.stanford
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named stanford | 0 | 1 | 5,543 |
0 | 21,668,162 | 0 | 0 | 0 | 0 | 1 | true | 10 | 2014-02-10T01:17:00.000 | 1 | 2 | 0 | Converting large SAS dataset to hdf5 | 21,667,547 | 1.2 | python,pandas,sas,hdf5 | I haven't had much luck with this in the past. We (where I work) just use Tab separated files for transport between SAS and Python -- and we do it a lot.
That said, if you are on Windows, you can attempt to set up an ODBC connection and write the file that way.
I tried using MySQL as a bridge between the two, but I got some Out of range errors when transferring, plus it was incredibly slow. I also tried export from SAS in Stata .dta format, but SAS (9.3) exports in an old Stata format that is not compatible with read_stat() in pandas. I also tried the sas7bdat package, but from the description it has not been widely tested so I'd like to load the datasets another way and compare the results to make sure everything is working properly.
Extra details: the datasets I'm looking to convert are those from CRSP, Compustat, IBES and TFN from WRDS. | 0 | 1 | 2,301 |
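A sketch of the tab-separated transport the answer describes, then loading into pandas and writing HDF5 (the file names, the SAS missing-value codes to map, and the HDF5 key are assumptions):

```python
import pandas as pd

# read a tab-separated export from SAS, mapping the SAS special missing codes to NaN
df = pd.read_csv("crsp_export.tsv", sep="\t",
                 na_values=[".", ".E", ".C"], low_memory=False)

# write to HDF5 (requires PyTables) for later use with pandas
df.to_hdf("crsp.h5", key="crsp", mode="w", format="table")
```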
0 | 23,423,563 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2014-02-10T11:11:00.000 | 0 | 4 | 0 | Nearest Neighbors in Python given the distance matrix | 21,675,570 | 0 | python,machine-learning,scipy,scikit-learn | Want to add to ford's answer that you have to do like this
metric = DistanceMetric.get_metric('pyfunc',func=/your function name/)
You cannot just put your own function as the second argument, you must name the argument as "func" | I have to apply Nearest Neighbors in Python, and I am looking ad the scikit-learn and the scipy libraries, which both require the data as input, then will compute the distances and apply the algorithm.
In my case I had to compute a non-conventional distance, therefore I would like to know if there is a way to directly feed the distance matrix. | 0 | 1 | 5,868 |
0 | 21,687,176 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2014-02-10T19:18:00.000 | 2 | 3 | 0 | How to install numpy in OSX properly? | 21,685,980 | 1.2 | python,macos,numpy | Using the built-in python for OS X is not recommended and will likely cause more headaches in the future (assuming it's not behind your current problems).
Assuming your python is fine, there's still the issue of getting numpy working. In my experience, installing numpy with pip will often run into problems.
In addition to CT Zhu's advice, if you just want numpy and python, the Enthought distribution is quite good and free for students.
Also getting Homebrew working is a good idea and, because it's quite well supported, is not hard. With homebrew, installing numpy is as easy as brew install numpy -- and it makes installing other packages that also often don't install right with pip (sklearn, scipy, etc) easy too. | I'm using the built in python version in OSX, I also installed pip by sudo easy_install pip and secondly I installed numpy by sudo pip install numpy.
However, when I run any python file which uses numpy I get an error message like:
Import error: No module named numpy
Like numpy isn't installed in system. When I called locate numpy I found out most of outputs tell numpy is installed at: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/Python/numpy
How can I get it to work? | 0 | 1 | 273 |
0 | 59,060,580 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2014-02-12T09:34:00.000 | 0 | 2 | 0 | replace rows in a pandas data frame | 21,723,830 | 0 | python,pandas,dataframe | If you are replacing the entire row then you can just use an index and not need row,column slices.
...
data.loc[2]=5,6 | I want to start with an empty data frame and then add to it one row each time.
I can even start with a 0 data frame data=pd.DataFrame(np.zeros(shape=(10,2)),column=["a","b"]) and then replace one line each time.
How can I do that? | 0 | 1 | 63,571 |
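A short sketch of the label-based row assignment from the answer, using the constructor from the question (note columns=, not column=):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(np.zeros((10, 2)), columns=["a", "b"])

data.loc[2] = 5, 6      # replace the row with label 2
data.loc[10] = 7, 8     # assigning to a new label appends a new row
```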
0 | 21,734,013 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2014-02-12T14:12:00.000 | 1 | 2 | 0 | PyObjC: How can one use NSCoding to implement python pickling? | 21,730,339 | 0.099668 | python,pickle,nscoding,pyobjc | PyObjC does support writing Python objects to a (keyed) archive (that is, any object that can be pickled implements NSCoding).
That’s probably the easiest way to serialize arbitrary graphs of Python and Objective-C objects.
As I wrote in the comments for another answer I ran into problems when trying to find a way to implement pickle support for any object that implements NSCoding due to incompatibilities in how NSArchiver and pickle traverse the object graph (IIRC primarily when restoring the archive). | Title says it all. It seems like it ought be possible (somehow) to implement python-side pickling for PyObjC objects whose Objective-C classes implement NSCoding without re-implementing everything from scratch. That said, while value-semantic members would probably be straightforward, by-reference object graphs and conditional coding might be tricky. How might you get the two sides to "collaborate" on the object graph parts? | 0 | 1 | 261 |
0 | 21,733,669 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2014-02-12T14:12:00.000 | 0 | 2 | 0 | PyObjC: How can one use NSCoding to implement python pickling? | 21,730,339 | 0 | python,pickle,nscoding,pyobjc | Shouldn't it be pretty straightforward?
On pickling, call encodeWithCoder on the object using an NSArchiver or something. Have pickle store that string.
On unpickling, use NSUnarchiver to create an NSObject from the pickled string. | Title says it all. It seems like it ought be possible (somehow) to implement python-side pickling for PyObjC objects whose Objective-C classes implement NSCoding without re-implementing everything from scratch. That said, while value-semantic members would probably be straightforward, by-reference object graphs and conditional coding might be tricky. How might you get the two sides to "collaborate" on the object graph parts? | 0 | 1 | 261 |
0 | 21,740,743 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-02-12T21:50:00.000 | 1 | 1 | 0 | Best format to pack data for correlation determination? | 21,740,498 | 1.2 | python,csv,scipy,correlation | Each dataset is a column and all the datasets combined to make a CSV. It get read as a 2D array by numpy.genfromtxt() and then call numpy.corrcoef() to get correlation coefficients.
Note: you should also consider the same data layout, but using pandas. Read CSV into a dataframe by pandas.read_csv() and get the correlation coefficients by .corr() | I'm using a Java program to extract some data points, and am planning on using scipy to determine the correlation coefficients. I plan on extracting the data into a csv-style file. How should I format each corresponding dataset, so that I can easily read it into scipy? | 1 | 1 | 73 |
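Both routes from the answer, sketched with an assumed comma-separated file holding one dataset per column:

```python
import numpy as np
import pandas as pd

# numpy route: read the 2D array and correlate the columns
data = np.genfromtxt("datasets.csv", delimiter=",", skip_header=1)
corr_np = np.corrcoef(data, rowvar=False)   # rowvar=False: columns are the datasets

# pandas route: same layout, correlation matrix via .corr()
df = pd.read_csv("datasets.csv")
corr_pd = df.corr()
```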
0 | 21,762,285 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2014-02-13T18:09:00.000 | 0 | 2 | 0 | Python CSV reader start at line_num | 21,762,173 | 0 | python,csv | If I were doing this I think I would add a marker line after each read - before the file is saved again , then I would read the file in as a string , split on the marker, convert back to a list and feed the list to the process. | I need to read a CSV with a couple million rows. The file grows throughout the day. After each time I process the file (and zip each row into a dict), I start the process over again, except creating the dict only for the new lines.
In order to get to the new lines though, I have to iterate over each line with CSV reader and compare the line number to my 'last line read' number (as far as I know).
Is there a way to just 'skip' to that line number? | 0 | 1 | 1,904 |
0 | 21,765,862 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-02-13T21:16:00.000 | 4 | 2 | 0 | How to read a large image in chunks in python? | 21,765,647 | 1.2 | python,image,image-processing,numpy,pytables | You can use numpy.memmap and let the operating system decide which parts of the image file to page in or out of RAM. If you use 64-bit Python the virtual memory space is astronomic compared to the available RAM. | I'm trying to compute the difference in pixel values of two images, but I'm running into memory problems because the images I have are quite large. Is there way in python that I can read an image lets say in 10x10 chunks at a time rather than try to read in the whole image? I was hoping to solve the memory problem by reading an image in small chunks, assigning those chunks to numpy arrays and then saving those numpy arrays using pytables for further processing. Any advice would be greatly appreciated.
Regards,
Berk | 0 | 1 | 3,371 |
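A rough sketch of the numpy.memmap idea from the answer above. It assumes the two images are available as raw binary files of known dtype and shape; real image formats would first need converting, or a reader that supports windowed access:

```python
import numpy as np

shape = (40000, 40000)                                   # assumed image dimensions
a = np.memmap("image_a.raw", dtype=np.uint8, mode="r", shape=shape)
b = np.memmap("image_b.raw", dtype=np.uint8, mode="r", shape=shape)
diff = np.memmap("diff.raw", dtype=np.int16, mode="w+", shape=shape)

chunk = 1024                                             # process a band of rows at a time
for start in range(0, shape[0], chunk):
    stop = min(start + chunk, shape[0])
    diff[start:stop] = a[start:stop].astype(np.int16) - b[start:stop]
diff.flush()                                             # make sure the result hits disk
```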
0 | 21,773,776 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-02-14T07:45:00.000 | 0 | 2 | 0 | Detecting certain columns and deleting these | 21,773,514 | 0 | python,pandas | In pandas it would be del df['columnname']. | I have a dataframe where some columns (not row) are like ["","","",""].
Those columns with that characteristic I would like to delete.
Is there an efficient way of doing that? | 0 | 1 | 72 |
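One way to spell out the answer above for whole columns that contain only empty strings (the example frame is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["", "", ""], "c": ["x", "", "y"]})

empty_cols = [col for col in df.columns if (df[col] == "").all()]
for col in empty_cols:
    del df[col]          # as in the answer: del df['columnname']
```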
0 | 21,775,708 | 0 | 1 | 1 | 0 | 2 | false | 7 | 2014-02-14T08:03:00.000 | -1 | 5 | 0 | Making a Python unit test that never runs in parallel | 21,773,821 | -0.039979 | python,unit-testing,python-unittest | The problem is that the name of config_custom.csv should itself be a configurable parameter. Then each test can simply look for config_custom_<nonce>.csv, and any number of tests may be run in parallel.
Cleanup of the overall suite can just clear out config_custom_*.csv, since we won't be needing any of them at that point. | tl;dr - I want to write a Python unittest function that deletes a file, runs a test, and the restores the file. This causes race conditions because unittest runs multiple tests in parallel, and deleting and creating the file for one test messes up other tests that happen at the same time.
Long Specific Example:
I have a Python module named converter.py and it has associated tests in test_converter.py. If there is a file named config_custom.csv in the same directory as converter.py, then the custom configuration will be used. If there is no custom CSV config file, then there is a default configuration built into converter.py.
I wrote a unit test using unittest from the Python 2.7 standard library to validate this behavior. The unit test in setUp() would rename config_custom.csv to wrong_name.csv, then it would run the tests (hopefully using the default config), then in tearDown() it would rename the file back the way it should be.
Problem: Python unit tests run in parallel, and I got terrible race conditions. The file config_custom.csv would get renamed in the middle of other unit tests in a non-deterministic way. It would cause at least one error or failure about 90% of the time that I ran the entire test suite.
The ideal solution would be to tell unittest: Do NOT run this test in parallel with other tests, this test is special and needs complete isolation.
My work-around is to add an optional argument to the function that searches for config files. The argument is only passed by the test suite. It ignores the config file without deleting it. Actually deleting the test file is more graceful, that is what I actually want to test. | 0 | 1 | 2,425 |
0 | 34,140,669 | 0 | 1 | 1 | 0 | 2 | false | 7 | 2014-02-14T08:03:00.000 | 0 | 5 | 0 | Making a Python unit test that never runs in parallel | 21,773,821 | 0 | python,unit-testing,python-unittest | The best testing strategy would be to make sure your testing on disjoint data sets. This will bypass any race conditions and make the code simpler. I would also mock out open or __enter__ / __exit__ if your using the context manager. This will allow you to fake the event that a file doesn't exist. | tl;dr - I want to write a Python unittest function that deletes a file, runs a test, and the restores the file. This causes race conditions because unittest runs multiple tests in parallel, and deleting and creating the file for one test messes up other tests that happen at the same time.
Long Specific Example:
I have a Python module named converter.py and it has associated tests in test_converter.py. If there is a file named config_custom.csv in the same directory as converter.py, then the custom configuration will be used. If there is no custom CSV config file, then there is a default configuration built into converter.py.
I wrote a unit test using unittest from the Python 2.7 standard library to validate this behavior. The unit test in setUp() would rename config_custom.csv to wrong_name.csv, then it would run the tests (hopefully using the default config), then in tearDown() it would rename the file back the way it should be.
Problem: Python unit tests run in parallel, and I got terrible race conditions. The file config_custom.csv would get renamed in the middle of other unit tests in a non-deterministic way. It would cause at least one error or failure about 90% of the time that I ran the entire test suite.
The ideal solution would be to tell unittest: Do NOT run this test in parallel with other tests, this test is special and needs complete isolation.
My work-around is to add an optional argument to the function that searches for config files. The argument is only passed by the test suite. It ignores the config file without deleting it. Actually deleting the test file is more graceful, that is what I actually want to test. | 0 | 1 | 2,425 |
0 | 21,791,001 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-02-14T22:43:00.000 | 3 | 2 | 0 | filter pandas dataframe for timedeltas | 21,790,816 | 0.291313 | python-2.7,pandas | for the 60 days you're looking to compare to, create a timedelta object of that value timedelta(days=60) and use that for the filter. and if you're already getting timedelta objects from the subtraction, recasting it to a timedelta seems unnecessary.
and finally, make sure you check the signs of the timedeltas you're comparing. | I got a pandas dataframe, containing timestamps 'expiration' and 'date'.
I want to filter for rows with a certain maximum delta between expiration and date.
When doing fr.expiration - fr.date I obtain timedelta values, but don't know how
to get a filter criteria such as fr[timedelta(fr.expiration-fr.date)<=60days] | 0 | 1 | 2,238 |
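The comparison the answers describe, spelled out with the column names from the question (the sample dates are made up):

```python
import pandas as pd

fr = pd.DataFrame({
    "date":       pd.to_datetime(["2014-01-01", "2014-02-01"]),
    "expiration": pd.to_datetime(["2014-02-15", "2014-06-01"]),
})

# keep rows whose expiration is at most 60 days after the date
within_60 = fr[(fr.expiration - fr.date) <= pd.Timedelta(days=60)]
```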
0 | 21,824,056 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-02-15T20:07:00.000 | 2 | 2 | 0 | Computing K-means clustering on Location data in Python | 21,802,946 | 0.197375 | python,scikit-learn,cluster-analysis,data-mining,k-means | Is the data already in vector space e.g. gps coordinates? If so you can cluster on it directly, lat and lon are close enough to x and y that it shouldn't matter much. If not, preprocessing will have to be applied to convert it to a vector space format (table lookup of locations to coords for instance). Euclidean distance is a good choice to work with vector space data.
To answer the question of whether they played music in a given location, you first fit your kmeans model on their location data, then find the "locations" of their clusters using the cluster_centers_ attribute. Then you check whether any of those cluster centers are close enough to the locations you are checking for. This can be done using thresholding on the distance functions in scipy.spatial.distance.
It's a little difficult to provide a full example since I don't have the dataset, but I can provide an example given arbitrary x and y coords instead if that's what you want.
Also note KMeans is probably not ideal as you have to manually set the number of clusters "k" which could vary between people, or have some more wrapper code around KMeans to determine the "k". There are other clustering models which can determine the number of clusters automatically, such as meanshift, which may be more ideal in this case and also can tell you cluster centers. | I have a dataset of users and their music plays, with every play having location data. For every user i want to cluster their plays to see if they play music in given locations.
I plan on using the sci-kit learn k-means package, but how do I get this to work with location data, as opposed to its default, euclidean distance?
An example of it working would really help me! | 0 | 1 | 976 |
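A sketch of the recipe in the accepted answer: fit k-means on one user's coordinates, then threshold the distances from the cluster centers to the locations of interest (k, the threshold and all coordinates are assumptions):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

plays = np.random.rand(500, 2) * [0.5, 0.5] + [52.0, 4.0]   # one user's (lat, lon) plays
venues = np.array([[52.1, 4.2], [52.4, 4.4]])               # locations to check against

km = KMeans(n_clusters=5, random_state=0).fit(plays)
dists = cdist(km.cluster_centers_, venues)                  # centers x venues distances

threshold = 0.05                                            # "close enough", here in degrees
plays_near_venue = (dists < threshold).any(axis=0)          # one True/False flag per venue
```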
0 | 21,825,022 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2014-02-15T20:07:00.000 | 5 | 2 | 0 | Computing K-means clustering on Location data in Python | 21,802,946 | 1.2 | python,scikit-learn,cluster-analysis,data-mining,k-means | Don't use k-means with anything other than Euclidean distance.
K-means is not designed to work with other distance metrics (see k-medians for Manhattan distance, k-medoids aka. PAM for arbitrary other distance functions).
The concept of k-means is variance minimization. And variance is essentially the same as squared Euclidean distances, but it is not the same as other distances.
Have you considered DBSCAN? sklearn should have DBSCAN, and it should by now have index support to make it fast. | I have a dataset of users and their music plays, with every play having location data. For every user i want to cluster their plays to see if they play music in given locations.
I plan on using the sci-kit learn k-means package, but how do I get this to work with location data, as opposed to its default, euclidean distance?
An example of it working would really help me! | 0 | 1 | 976 |
0 | 21,828,293 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2014-02-17T10:42:00.000 | 3 | 2 | 0 | Is it possible to run Python's scikit-learn algorithms over Hadoop? | 21,826,863 | 0.291313 | python,hadoop,machine-learning,bigdata,scikit-learn | Look out for jpype module. By using jpype you can run Mahout Algorithms and you will be writing code in Python. However I feel this won't be the best of solution. If you really want massive scalability than go with Mahout directly. I practice, do POC's, solve toy problems using scikit-learn, however when I need to do massive big data clustering and so on than I go Mahout. | I know it is possible to use python language over Hadoop.
But is it possible to use scikit-learn's machine learning algorithms on Hadoop ?
If the answer is no, is there some machine learning library for python and Hadoop ?
Thanks for your Help. | 0 | 1 | 5,638 |
0 | 21,840,597 | 1 | 0 | 0 | 0 | 1 | false | 0 | 2014-02-17T18:47:00.000 | 2 | 1 | 0 | least cpu-expensive way to find the two most (and least) remote vertices of a graph [igraph] | 21,836,929 | 0.379949 | python,igraph,shortest-path | For the first question, you can find all shortest paths, and then choose between the pairs making up the longest distances.
I don't really understand the second question. If you are searching for unweighted paths, then every pair of vertices at both ends of an edge have the minimum distance (1). That is, if you don't consider paths to the vertices themselves, these have length zero, by definition. | In igraph, what's the least cpu-expensive way to find:
the two most remote vertices (in term of shortest distances form one another) of a graph. Unlike the farthest.points() function, which chooses the first found pair of vertices with the longest shortest distance if more than one pair exists, I'd like to randomly select this pair.
same thing with the closest vertices of a graph.
Thanks! | 0 | 1 | 99 |
0 | 59,014,783 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-02-18T07:13:00.000 | 1 | 3 | 0 | i have python 33 but unable to import numpy and matplotlib package | 21,846,661 | 0.066568 | python,numpy,matplotlib | I would suggest that first uninstall numpy and matplotlib using pip uninstall command then install again using pip install from python command line terminal and restart your system. | I am unable to import numpy and matplotlib package in python33. I am getting this error. I have tried to install this two packages but unable to import. I am getting the following error:
import numpy
Traceback (most recent call last):
File "", line 1, in
import numpy
ImportError: No module named 'numpy'
import matplotlib
Traceback (most recent call last):
File "", line 1, in
import matplotlib
ImportError: No module named 'matplotlib' | 0 | 1 | 275 |
0 | 71,455,335 | 0 | 0 | 0 | 0 | 1 | false | 233 | 2014-02-19T15:00:00.000 | 1 | 7 | 0 | warning about too many open figures | 21,884,271 | 0.028564 | python,python-3.x,matplotlib | matplotlib by default keeps a reference of all the figures created through pyplot. If a single variable used for storing matplotlib figure (e.g "fig") is modified and rewritten without clearing the figure, all the plots are retained in RAM memory. Its important to use plt.cla() and plt.clf() instead of modifying and reusing the fig variable. If you are plotting thousands of different plots and saving them without clearing the figure, then eventually your RAM will get exhausted and program will get terminated. Clearing the axes and figures have a significant impact on memory usage if you are plotting many figures. You can monitor your RAM consumption in task manager (Windows) or in system monitor (Linux). First your RAM will get exhausted, then the OS starts consuming SWAP memory. Once both are exhausted, the program will get automatically terminated. Its better to clear figures, axes and close them if they are not required. | In a script where I create many figures with fix, ax = plt.subplots(...), I get the warning RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory.
However, I don't understand why I get this warning, because after saving the figure with fig.savefig(...), I delete it with fig.clear(); del fig. At no point in my code, I have more than one figure open at a time. Still, I get the warning about too many open figures. What does that mean / how can I avoid getting the warning? | 0 | 1 | 164,804 |
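A sketch of the pattern that avoids the warning: plt.close(fig) releases the figure from pyplot's registry, which fig.clear() and del fig alone do not:

```python
import matplotlib.pyplot as plt

for i in range(100):
    fig, ax = plt.subplots()
    ax.plot(range(10))
    fig.savefig(f"figure_{i:03d}.png")
    plt.close(fig)   # removes the figure from pyplot's bookkeeping, so memory is freed
```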
0 | 21,894,459 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-02-19T22:28:00.000 | 0 | 2 | 0 | SciPy - Constrained Minimization derived from a Directed Graph | 21,893,973 | 0 | python,algorithm,graph,scipy,mathematical-optimization | The prohibition against self-flows makes some instances of this problem infeasible (e.g., one node that has in- and out-flows of 1). Otherwise, a reasonably sparse solution with at most one self-flow always can be found as follows. Initialize two queues, one for the nodes with positive out-flow from lowest ID to highest and one for the nodes with positive in-flow from highest ID to lowest. Add a flow from the front node of the first queue to the front node of the second, with quantity equal to the minimum of the out-flow of the former and the in-flow of the latter. Update the out- and in-flows to their residual values and remove the exhausted node(s) from their queues. Since the ID of the front of the first queue increases, and the ID of the front of the second queue decreases, the only node that self-flows is the one where the ID numbers cross.
Minimizing the total flow is trivial; it's constant. Finding the sparsest solution is NP-hard; there's a reduction from subset sum where each of the elements being summed has a source node with that amount of out-flow, and two more sink nodes have in-flows, one of which is equal to the target sum. The subset sum instance is solvable if and only if no source flows to both sinks. The algorithm above is a 2-approximation.
To get rid of the self-flow on that one bad node sparsely: repeatedly grab a flow not involving the bad node and split it into two, via the bad node. Stop when we exhaust the self-flow. This fails only if there are no flows left that don't use the bad node and there is still a self-flow, in which case the bad node has in- and out-flows that sum to more than the total flow, a necessary condition for the existence of a solution. This algorithm is a 4-approximation in sparsity. | I'm looking for a solution to the following graph problem in order to perform graph analysis in Python.
Basically, I have a directed graph of N nodes where I know the following:
The sum of the weights of the out-edges for each node
The sum of the weights of the in-edges for each node
Following from the above, the sum of the sum across all nodes of the in-edges equals the sum of the sum of out-edges
No nodes have edges with themselves
All weights are positive (or zero)
However, I know nothing about to which nodes a given node might have an edge to, or what the weights of any edges are
Represented as a weighted adjacency matrix, I know the column sums and row sums but not the value of the edges themselves. I've realized that there is not a unique solution to this problem (Does anyone how to prove that, given the above, there is an assured solution?). However, I'm hoping that I can at least arrive at a solution to this problem that minimizes the sum of the edge weights or maximizes the number of 0 edge weights or something along those lines (Basically, out of infinite choices, I'd like the most 'simple' graph).
I've thought about representing it as:
Min Sum(All Edge Weights) s.t. for each node, the sum of its out-edge weights equals the known sum of these, and the sum of its in-edge weights equals the known sum of these. Additionally, constrained such that all weights are >= 0
I'm primarily using this for data analysis in Scipy and Numpy. However, using their constrained minimization techniques, I'll end up with approximately 2N^2-2N constraints from the edge-weight sum portion, and N constraints from the positive portion. I'm worried this will be unfeasible for large data sets. I could have up to 500 nodes. Is this a feasible solution using SciPy's fmin_cobyla? Is there another way to layout this problem / another solver in Python that would be more efficient?
Thanks so much! First post on StackOverflow. | 0 | 1 | 200 |
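A sketch of the two-queue construction described in the answer above (the flow dictionaries are a made-up example; it assumes total in-flow equals total out-flow, as stated in the question):

```python
from collections import deque

def sparse_flows(out_flow, in_flow):
    """Greedy construction from the answer: ascending-ID senders, descending-ID receivers."""
    senders = deque(sorted(n for n, f in out_flow.items() if f > 0))
    receivers = deque(sorted((n for n, f in in_flow.items() if f > 0), reverse=True))
    out_left, in_left = dict(out_flow), dict(in_flow)
    edges = []
    while senders and receivers:
        u, v = senders[0], receivers[0]
        amount = min(out_left[u], in_left[v])
        edges.append((u, v, amount))        # at most one u == v self-flow can appear,
        out_left[u] -= amount               # at the node where the ID orders cross
        in_left[v] -= amount
        if out_left[u] == 0:
            senders.popleft()
        if in_left[v] == 0:
            receivers.popleft()
    return edges

print(sparse_flows({1: 3, 2: 2, 3: 1}, {1: 1, 2: 2, 3: 3}))
```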
0 | 21,900,644 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2014-02-20T01:06:00.000 | 5 | 2 | 0 | NumPy Array Copy-On-Write | 21,896,030 | 1.2 | python,numpy,copy-on-write | Copy-on-write is a nice concept, but explicit copying seems to be "the NumPy philosophy". So personally I would keep the "readonly" solution if it isn't too clumsy.
But I admit having written my own copy-on-write wrapper class. I don't try to detect write access to the array. Instead the class has a method "get_array(readonly)" returning its (otherwise private) numpy array. The first time you call it with "readonly=False" it makes a copy. This is very explicit, easy to read and quickly understood.
If your copy-on-write numpy array looks like a classical numpy array, the reader of your code (possibly you in 2 years) may have a hard time. | I have a class that returns large NumPy arrays. These arrays are cached within the class. I would like the returned arrays to be copy-on-write arrays. If the caller ends up just reading from the array, no copy is ever made. This will case no extra memory will be used. However, the array is "modifiable", but does not modify the internal cached arrays.
My solution at the moment is to make any cached arrays readonly (a.flags.writeable = False). This means that if the caller of the function may have to make their own copy of the array if they want to modify it. Of course, if the source was not from cache and the array was already writable, then they would duplicate the data unnecessarily.
So, optimally I would love something like a.view(flag=copy_on_write). There seems to be a flag for the reverse of this UPDATEIFCOPY which causes a copy to update the original once deallocated.
Thanks! | 0 | 1 | 2,415 |
0 | 21,919,317 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2014-02-20T20:23:00.000 | 1 | 3 | 0 | How to label certain x values | 21,918,718 | 0.066568 | python,matplotlib,plot,weather | Matplotlib xticks are your friend. Will allow you to set where the ticks appear.
As for date formatting, make sure you're using dateutil objects, and you'll be able to handle the formatting. | I want to plot weather data over the span of several days every half hour, but I only want to label the days at the start as a string in the format 'mm/dd/yy'. I want to leave the rest unmarked.
I would also want to control where such markings are placed along the x axis, and control the range of the axis.
I also want to plot multiple sets of measurements taken over different intervals on the same figure. Therefore being able to set the axis and plot the measurements for a given day would be best.
Any suggestions on how to approach this with matplotlib? | 0 | 1 | 2,934 |
0 | 21,919,748 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2014-02-20T20:23:00.000 | 2 | 3 | 0 | How to label certain x values | 21,918,718 | 0.132549 | python,matplotlib,plot,weather | You can use a DayLocator as in: plt.gca().xaxis.set_major_locator(dt.DayLocator())
And DateFormatter as in: plt.gca().xaxis.set_major_formatter(dt.DateFormatter("%d/%m/%Y"))
Note: import matplotlib.dates as dt | I want to plot weather data over the span of several days every half hour, but I only want to label the days at the start as a string in the format 'mm/dd/yy'. I want to leave the rest unmarked.
I would also want to control where such markings are placed along the x axis, and control the range of the axis.
I also want to plot multiple sets of measurements taken over different intervals on the same figure. Therefore being able to set the axis and plot the measurements for a given day would be best.
Any suggestions on how to approach this with matplotlib? | 0 | 1 | 2,934 |
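Combining the two answers above: a DayLocator places a tick only at the start of each day and a DateFormatter produces the 'mm/dd/yy' labels (the half-hourly data here is synthetic):

```python
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

times = pd.date_range("2014-02-20", periods=4 * 48, freq="30min")   # 4 days, half-hourly
temps = 10 + 5 * np.sin(np.linspace(0, 8 * np.pi, len(times)))

fig, ax = plt.subplots()
ax.plot(times, temps)
ax.xaxis.set_major_locator(mdates.DayLocator())                      # tick at each day start
ax.xaxis.set_major_formatter(mdates.DateFormatter("%m/%d/%y"))
ax.set_xlim(times[0], times[-1])                                     # control the axis range
plt.show()
```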
0 | 21,947,575 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2014-02-21T23:49:00.000 | -1 | 2 | 0 | Extract a certain part of a string after a key phrase using pandas? | 21,947,487 | -0.099668 | python,string,pandas,extract | This will grab the number 10 and put it in a variable called yards.
x = "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)."
yards = (x.split("for ")[-1]).split(" yards")[0] | I have an NFL dataset with a 'description' column with details about the play. Each successful pass and run play has a string that's structured like:
"(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)."
How do I locate/extract the number after "for" in the string, and place it in a new column? | 0 | 1 | 4,497 |
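A pandas variant of the answer's split, using a regular expression to pull the number after "for" into a new column (the 'description' column name comes from the question; the sample rows and the exact pattern are assumptions):

```python
import pandas as pd

df = pd.DataFrame({"description": [
    "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham).",
    "(9:18) K.Moreno left tackle for 3 yards (P.Chung).",
]})

# capture the (possibly negative) number immediately before "yard"/"yards"
df["yards"] = df["description"].str.extract(r"for (-?\d+) yards?", expand=False).astype(float)
```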
0 | 21,987,220 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2014-02-24T11:24:00.000 | 2 | 3 | 0 | Detect an arc from an image contour or edge | 21,986,356 | 0.132549 | python,opencv,image-processing | If you think that there will not be any change in the shape (i mean arc won't become line or something like this) then you can have a look a Generalized Hough Transform (GHT) which can detect any shape you want.
Cons:
There is no direct function in the OpenCV library for GHT, but you can find several source-code implementations on the internet.
It is sometimes slow but can become fast if you set the parameters properly.
It won't be able to detect the shape if the shape changes. For example, I tried to detect squares using GHT and got good results, but when the squares were not perfect squares (i.e. rectangles or something like that), it didn't detect them.
Is there any algorithm in Open CV which can tell us that the detected contour ( or edge from canny edge is an arc or an approximation of an arc)
Any help on how this would be possible in OpenCV with Python or even a general approach would be very helpful
Thanks | 0 | 1 | 5,077 |
0 | 22,008,350 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2014-02-24T11:24:00.000 | 1 | 3 | 0 | Detect an arc from an image contour or edge | 21,986,356 | 0.066568 | python,opencv,image-processing | You can do it this way:
Convert the image to edges using canny filter.
Make the image binary using threshold function there is an option for regular threshold, otsu or adaptive.
Find contours with sufficient length (findContours function)
Iterate all the contours and try to fit ellipse (fitEllipse function)
Validate fitted ellipses by radius.
Check if detected ellipse is good fit - checking how much of the contour pixels are on the detected ellipse.
Select the best one.
You can try to increase the speed using RANSAC each time selecting 6 points from binarized image and trying to fit. | I am trying to detect arcs inside an image. The information that I have for certain with me is the radius of the arc. I can try and maybe get the centre of the circle whose arc I want to identify.
Is there any algorithm in Open CV which can tell us that the detected contour ( or edge from canny edge is an arc or an approximation of an arc)
Any help on how this would be possible in OpenCV with Python or even a general approach would be very helpful
Thanks | 0 | 1 | 5,077 |
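A rough sketch of the Canny, findContours, fitEllipse pipeline listed in the answer above (file name, thresholds and the known radius are assumptions; fitEllipse needs at least five contour points):

```python
import cv2

img = cv2.imread("arcs.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
edges = cv2.Canny(img, 50, 150)
contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2]

known_radius = 40.0                                        # the radius the question says is known
arc_candidates = []
for c in contours:
    if len(c) < 5:                                         # fitEllipse needs >= 5 points
        continue
    (cx, cy), (w, h), angle = cv2.fitEllipse(c)
    radius = (w + h) / 4.0                                 # mean semi-axis as a radius estimate
    if abs(radius - known_radius) < 5.0:                   # validate the fitted ellipse by radius
        arc_candidates.append(((cx, cy), radius))
```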
0 | 21,998,600 | 0 | 0 | 0 | 0 | 1 | true | 11 | 2014-02-24T20:07:00.000 | 18 | 2 | 0 | Changing what the ends of whiskers represent in matplotlib's boxplot function | 21,997,897 | 1.2 | python,matplotlib | To get the whiskers to appear at the min and max of the data, set the whis parameter to an arbitrarily large number. In other words: boxplots = ax.boxplot(myData, whis=np.inf).
The whis kwarg is a scaling factor of the interquartile range. Whiskers are drawn to the outermost data points within whis * IQR away from the quartiles.
Now that v1.4 is out:
In matplotlib v1.4, you can say: boxplots = ax.boxplot(myData, whis=[5, 95]) to set the whiskers at the 5th and 95th percentiles. Similarly, you'll be able to say boxplots = ax.boxplot(myData, whis='range') to set the whiskers at the min and max.
Note: you could probably modify the artists contained in the boxplots dictionary returned by the ax.boxplot method, but that seems like a huge hassle | I understand that the ends of whiskers in matplotlib's box plot function extend to max value below 75% + 1.5 IQR and minimum value above 25% - 1.5 IQR. I would like to change it to represent max and minimum values of the data or the 5th and 95th quartile of the data. Is is possible to do this? | 0 | 1 | 8,736 |
0 | 22,001,369 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-02-24T23:12:00.000 | 1 | 1 | 0 | Pandas read csv data type | 22,001,176 | 0.197375 | python,csv,pandas | Just do
str(int(float('2.09228E+14'))) which should give you '209228000000000' | I'm trying to read a csv with pandas using the read_csv command. However, one of my columns is a 15 digit number which is read in as a float and then truncated to exponential notation. So the entries in this column become 2.09228E+14 instead of the 15 digit number I want. I've tried reading it as a string, but I get '2.09228E+14' instead of the number. Any suggestions? | 0 | 1 | 1,866 |
0 | 22,024,672 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-02-25T19:45:00.000 | 0 | 1 | 0 | Large dataset - no error - but it wont run - python memory issue? | 22,024,577 | 0 | python,numpy | The exit code of the python process should reveal the reason for the process exiting. In the event of an adverse condition, the exit code will be something other than 0. If you are running in a Bash shell or similar, you can run "echo $?" in your shell after running Python to see its exit status.
If the exit status is indeed 0, try putting some print statements in your code to trace the execution of your program. In any case, you would do well to post your code for better feedback.
Good luck! | So I am trying to run various large images which gets put into an array using numpy so that I can then do some calculations. The calculations get done per image and the opening and closing of each image is done in a loop. I a have reached a frustration point because I have no errors in the code (well none to my knowledge nor any that python is complaining about), and as a matter of fact my code runs for one loop, and then it simply does not run for the second, third, or other loops.
I get no errors! No memory error, no syntax error, no nothing. I have used Spyder and even IDLE, and it simply runs all the calculations sometimes only for one image, sometimes for two, then it just quits the loop (again WITH NO ERROR) as if it had completed running for all images (when it has only ran for one/two images).
I am assuming its a memory error? - I mean it runs one loop , sometimes two, but never the rest? -- so ...
I have attempted to clear the tracebacks using this:
sys.exc_clear()
sys.exc_traceback = sys.last_traceback = None
I have also even tried to delete each variable when I am done with it
ie. del variable
However, nothing seems to fix it --
Any ideas of what could be wrong would be appreciated! | 0 | 1 | 107 |
0 | 22,026,711 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2014-02-25T21:17:00.000 | 5 | 3 | 0 | Python: Fast and efficient way of writing large text file | 22,026,393 | 0.321513 | python,python-2.7,file-io,dataframe,string-concatenation | Unless you are running into a performance issue, you can probably write to the file line by line. Python internally uses buffering and will likely give you a nice compromise between performance and memory efficiency.
Python buffering is different from OS buffering and you can specify how you want things buffered by setting the buffering argument to open. | I have a speed/efficiency related question about python:
I need to write a large number of very large R dataframe-ish files, about 0.5-2 GB sizes. This is basically a large tab-separated table, where each line can contain floats, integers and strings.
Normally, I would just put all my data in numpy dataframe and use np.savetxt to save it, but since there are different data types it can't really be put into one array.
Therefore I have resorted to simply assembling the lines as strings manually, but this is a tad slow. So far I'm doing:
1) Assemble each line as a string
2) Concatenate all lines as single huge string
3) Write string to file
I have several problems with this:
1) The large number of string-concatenations ends up taking a lot of time
2) I run of of RAM to keep strings in memory
3) ...which in turn leads to more separate file.write commands, which are very slow as well.
So my question is: What is a good routine for this kind of problem? One that balances out speed vs memory-consumption for most efficient string-concatenation and writing to disk.
... or maybe this strategy is simply just bad and I should do something completely different?
Thanks in advance! | 0 | 1 | 15,877 |
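A sketch of the line-by-line, buffered writing the answer recommends (the row layout, buffer size and file name are assumptions):

```python
import csv

# a generator of mixed-type rows stands in for the real data
rows = ({"id": i, "value": i * 0.5, "name": "row%d" % i} for i in range(10**6))

# a 4 MB buffer lets Python batch the many small writes, so per-line calls stay cheap
with open("big_table.tsv", "w", buffering=4 * 1024 * 1024, newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    writer.writerow(["id", "value", "name"])
    for r in rows:
        writer.writerow([r["id"], r["value"], r["name"]])
```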
0 | 22,044,916 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-02-26T00:21:00.000 | 2 | 1 | 0 | Scipy: Fitting Data with Two Dimensional Error | 22,029,142 | 1.2 | python,scipy | Try scipy.odr. It allows to specify weights/errors in both input and response variable. | So I already know how to use scipy.optimize.curve_fit for normal fitting needs, but what do I do if both my x data and my y data both have error bars? | 0 | 1 | 104 |
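A minimal scipy.odr sketch for the errors-in-both-variables fit the answer points to (the straight-line model and the toy data are assumptions):

```python
import numpy as np
from scipy import odr

def line(beta, x):                  # model: y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
x_err = np.full_like(x, 0.1)        # uncertainties on x
y_err = np.full_like(y, 0.2)        # uncertainties on y

data = odr.RealData(x, y, sx=x_err, sy=y_err)
result = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()
print(result.beta, result.sd_beta)  # fitted parameters and their standard errors
```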
0 | 65,142,140 | 0 | 0 | 0 | 0 | 1 | false | 394 | 2014-02-26T20:55:00.000 | 2 | 8 | 0 | Difference between numpy.array shape (R, 1) and (R,) | 22,053,050 | 0.049958 | python,numpy,matrix,multidimensional-array | The data structure of shape (n,) is called a rank 1 array. It doesn't behave consistently as a row vector or a column vector which makes some of its operations and effects non intuitive. If you take the transpose of this (n,) data structure, it'll look exactly same and the dot product will give you a number and not a matrix.
The vectors of shape (n,1) or (1,n) row or column vectors are much more intuitive and consistent. | In numpy, some of the operations return in shape (R, 1) but some return (R,). This will make matrix multiplication more tedious since explicit reshape is required. For example, given a matrix M, if we want to do numpy.dot(M[:,0], numpy.ones((1, R))) where R is the number of rows (of course, the same issue also occurs column-wise). We will get matrices are not aligned error since M[:,0] is in shape (R,) but numpy.ones((1, R)) is in shape (1, R).
So my questions are:
What's the difference between shape (R, 1) and (R,). I know literally it's list of numbers and list of lists where all list contains only a number. Just wondering why not design numpy so that it favors shape (R, 1) instead of (R,) for easier matrix multiplication.
Are there better ways for the above example? Without explicitly reshape like this: numpy.dot(M[:,0].reshape(R, 1), numpy.ones((1, R))) | 0 | 1 | 193,239 |
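A short illustration of the shapes being discussed, plus the usual ways to keep the second dimension without an explicit reshape:

```python
import numpy as np

M = np.arange(12).reshape(4, 3)
R = M.shape[0]

col = M[:, 0]                     # shape (R,)   -- a 1-D ("rank 1") array
col_2d = M[:, [0]]                # shape (R, 1) -- indexing with a list keeps the dimension
col_2d_b = M[:, 0, np.newaxis]    # shape (R, 1) -- same, via np.newaxis (a.k.a. None)

outer = np.dot(col_2d, np.ones((1, R)))   # shapes now align for the product in the question
```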
0 | 22,161,505 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-02-27T08:23:00.000 | -2 | 2 | 0 | write and read on real time pytables | 22,062,837 | -0.197375 | python,real-time,pytables | This is definitely possible. It is especially easy if you only have one process in 'w' and multiple processes in 'r' mode. Just make sure in your 'w' process to flush() the file and/or the datasets occasionally. If you do this, the 'r' process will be able to see the data. | I am not sure if what I am thinking would be possible, I would need the help from someone experienced working with HDF5/PyTables.
The scenario would be like this:
Let's say that we have a process, or a machine, or a connection, etc., acquiring data and storing it in HDF5/PyTables format. I will call it the store software.
Would it be possible to have another piece of software, which I will call the analysis software, running at the same time?
If it helps, the store software and the analysis software would be totally independent, even written in different languages.
My doubt is this: if the store program is writing the PyTable in mode='w', can the analysis program access it in mode='r' at the same time and read some data to perform some basic analysis, averages, etc.?
The basic idea is to be able to analyze data stored in a PyTable in real time.
Of course any other proposed solution would be appreciated. | 0 | 1 | 996 |
0 | 22,080,238 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2014-02-27T20:42:00.000 | 6 | 1 | 0 | How to transpose a matrix without using numpy or zip (or other imports) | 22,079,882 | 1.2 | python,matrix,transpose | [[row[i] for row in data] for i in range(len(data[0]))] | How do you transpose a matrix without using numpy or zip or other imports?
I thought this was the answer but it does not work if the matrix has only 1 row...
[[row[i] for row in data] for i in range(len(data[1]))] | 0 | 1 | 4,612 |
0 | 22,084,546 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-02-28T01:33:00.000 | 1 | 3 | 0 | Python: Create a graph with defined number of edges per node | 22,084,435 | 0.066568 | python,networkx | It seems to me you should
decide how many nodes you will have
generate the number of links per node in your desired distribution - make sure the sum is even
start randomly connecting pairs of nodes until all link requirements are satisfied
There are a few more constraints - no pair of nodes should be connected more than once, no node should have more than (number of nodes - 1) links, maybe you want to ensure the graph is fully connected - but basically that's it. | How I can create a graph with
-predefined number of connections for each node, say 3
-given distribution of connections (say Poisson distribution with given mean)
Thanks | 0 | 1 | 1,817 |
0 | 60,347,465 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-02-28T09:37:00.000 | -1 | 2 | 0 | Pandas: DataReader in combination with ISIN identification | 22,091,306 | -0.099668 | python,pandas,stock | Forget about Python. There is absolutely no way to convert an ISIN to a Ticker Symbol. You have completely misunderstood the wikipedia page. | I'm trying to compute some portfolio statistics using Python Pandas, and I am looking for a way to query stock data with DataReader using the ISIN (International Securities Identification Number).
However, as far as I can see, DataReader is not compatible with such ids, although both YahooFinance and GoogleFinance can handle such queries.
How can I use DataReader with stock ISINs? | 0 | 1 | 3,894 |
0 | 22,109,817 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2014-02-28T23:58:00.000 | 2 | 1 | 0 | set capstyle of spines for pdf backend | 22,108,095 | 1.2 | python,matplotlib | I don't think it's possible. I did a little bit of the backend's work in my main script, setting up a RendererPdf (defined in backend_pdf.py) and conatining a GraphicsContextPdf which is a GraphicsContextBase which keeps a capstyle, intialized as butt. As confirmed by grep, this is the only place where butt is hardcoded as a capstyle. After some ipython debugging, I've found that a new GraphicsContextPdf or 'gc' is generated each time a patch is drawn (c.f. patches.py:392, called by way of a necessary fig.draw() in the main script), and the settings for the new gc (again initialized as butt) are incorporated into the original RendererPdf's gc. So everything gets a butt capstyle. Line2D objects are not patches, so they can maintain a particular capstyle. | Figures rendered with the PDF backend have a 'butt' capstyle in my reader. (If I zoom at the corner of a figure in a pdf, I do not see a square corner, but the overlap of shortened lines.) I would like either a 'round' or 'projecting' (what matplotlib calls the 'square' capstyle) cap. Thus the Spine objects are in question, and a Spine is a Patch is an Artist, none of which seem to have anything like the set_solid_capstyle() of Line2D, so I'm not sure how or where to force a particular capstyle, or if it's even possible. | 0 | 1 | 176 |
0 | 22,123,913 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2014-03-02T00:52:00.000 | 4 | 2 | 0 | What's the difference between using libSVM in sci-kit learn, or e1070 in R, for training and using support vector machines? | 22,122,506 | 1.2 | python,r,machine-learning,scikit-learn,svm | I do not have experiece with e1070, however from googling it it seems that it either uses or is based on LIBSVM (I don't know enough R to determine which from the cran entry). Scilearnkit also uses LIBSVM.
In both cases the model is going to be trained by LIBSVM. Speed, scalability, variety of options available is going to be exactly the same, and in using SVMs with these libraries the main limitations you will face are the limitations of LIBSVM.
I think that giving further advice is going to be difficult unless you clarify a couple of things in your question: what is your objective? Do you already know LIBSVM? Is this a learning project? Who is paying for your time? Do you feel more comfortable in Python or in R? | Recently I was contemplating the choice of using either R or Python to train support vector machines.
Aside from the particular strengths and weaknesses intrinsic to both programming languages, I'm wondering if there is any heuristic guidelines for making a decision on which way to go, based on the packages themselves.
I'm thinking in terms of speed of training a model, scalability, availability of different kernels, and other such performance-related aspects.
Given some data sets of different sizes, how could one decide which path to take?
I apologize in advance for such a possibly vague question. | 0 | 1 | 381 |
0 | 22,189,863 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2014-03-02T00:52:00.000 | 0 | 2 | 0 | What's the difference between using libSVM in sci-kit learn, or e1070 in R, for training and using support vector machines? | 22,122,506 | 0 | python,r,machine-learning,scikit-learn,svm | Sometime back I had the same question. Yes, both e1070 and scikit-learn use LIBSVM. I have experience with e1070 only.
But there are some areas where R is better. I have read in the past that Python does not handle categorical features properly (at least not right out of the box). This could be a big deal for some.
I also prefer R's formula interface. And some of the nice data manipulation packages.
Python is definitely better for general purpose programming and scikit-learn aids in using a single programming language for all tasks. | Recently I was contemplating the choice of using either R or Python to train support vector machines.
Aside from the particular strengths and weaknesses intrinsic to both programming languages, I'm wondering if there is any heuristic guidelines for making a decision on which way to go, based on the packages themselves.
I'm thinking in terms of speed of training a model, scalability, availability of different kernels, and other such performance-related aspects.
Given some data sets of different sizes, how could one decide which path to take?
I apologize in advance for such a possibly vague question. | 0 | 1 | 381 |
0 | 22,165,531 | 0 | 0 | 0 | 0 | 1 | true | 11 | 2014-03-03T08:58:00.000 | 2 | 4 | 0 | The equivalent function of Matlab imfilter in Python | 22,142,369 | 1.2 | python,matlab,scipy | Using the functions scipy.ndimage.filters.correlate and scipy.ndimage.filters.convolve | I know the equivalent functions of conv2 and corr2 of MATLAB are scipy.signal.correlate and scipy.signal.convolve. But the function imfilter has the property of dealing with the outside the bounds of the array. Like as symmetric, replicate and circular. Can Python do that things | 0 | 1 | 12,599 |
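A sketch of the boundary-mode correspondence implied by the answer, using the modern scipy.ndimage import path rather than scipy.ndimage.filters (the exact mapping of imfilter options to ndimage modes is an assumption worth checking against the docs):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)          # stand-in image
kernel = np.ones((3, 3)) / 9.0        # simple averaging kernel

out_symmetric = ndimage.correlate(img, kernel, mode="reflect")   # ~ imfilter 'symmetric'
out_replicate = ndimage.correlate(img, kernel, mode="nearest")   # ~ imfilter 'replicate'
out_circular = ndimage.correlate(img, kernel, mode="wrap")       # ~ imfilter 'circular'
```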
0 | 37,731,398 | 0 | 0 | 0 | 0 | 5 | false | 1 | 2014-03-03T10:02:00.000 | 0 | 5 | 0 | Examples on N-D arrays usage | 22,143,644 | 0 | python,arrays,numpy | They are very applicable in scientific computing. Right now, for instance, I am running simulations which output data in a 4D array: specifically
| Time | x-position | y-position | z-position |.
Almost every modern spatial simulation will use multidimensional arrays, along with programming for computer games. | I was surprised when I started learning numpy that there are N dimensional arrays. I'm a programmer and all I thought that nobody ever use more than 2D array. Actually I can't even think beyond a 2D array. I don't know how think about 3D, 4D, 5D arrays or more. I don't know where to use them.
Can you please give me examples of where 3D, 4D, 5D ... etc arrays are used? And if one used numpy.sum(array, axis=5) for a 5D array would what happen? | 0 | 1 | 141 |
0 | 22,146,242 | 0 | 0 | 0 | 0 | 5 | false | 1 | 2014-03-03T10:02:00.000 | 0 | 5 | 0 | Examples on N-D arrays usage | 22,143,644 | 0 | python,arrays,numpy | There are so many examples... The way you are trying to represent it is probably wrong, let's take a simple example:
You have boxes and a box stores N items in it. You can store up to 100 items in each box.
You've organized the boxes in shelves. A shelf allows you to store M boxes. You can identify each box by a index.
All the shelves are in a warehouse with 3 floors. So you can identify any shelf using 3 numbers: the row, the column and the floor.
A box is then identified by: row, column, floor and the index in the shelf.
An item is identified by: row, column, floor, index in shelf, index in box.
Basically, one way (not the best one...) to model this problem would be to use a 5D array. | I was surprised when I started learning numpy that there are N dimensional arrays. I'm a programmer and all I thought that nobody ever use more than 2D array. Actually I can't even think beyond a 2D array. I don't know how think about 3D, 4D, 5D arrays or more. I don't know where to use them.
Can you please give me examples of where 3D, 4D, 5D ... etc arrays are used? And if one used numpy.sum(array, axis=5) for a 5D array would what happen? | 0 | 1 | 141 |
0 | 22,144,505 | 0 | 0 | 0 | 0 | 5 | false | 1 | 2014-03-03T10:02:00.000 | 1 | 5 | 0 | Examples on N-D arrays usage | 22,143,644 | 0.039979 | python,arrays,numpy | A few simple examples are:
A n x m 2D array of p-vectors represented as an n x m x p 3D matrix, as might result from computing the gradient of an image
A 3D grid of values, such as a volumetric texture
These can even be combined in the case of a gradient of a volume in which case you get a 4D matrix
Staying with the graphics paradigm, adding time adds an extra dimension, so a time-variant 3D gradient texture would be 5D
numpy.sum(array, axis=5) is not valid for a 5D-array (as axes are numbered starting at 0) | I was surprised when I started learning numpy that there are N dimensional arrays. I'm a programmer and all I thought that nobody ever use more than 2D array. Actually I can't even think beyond a 2D array. I don't know how think about 3D, 4D, 5D arrays or more. I don't know where to use them.
Can you please give me examples of where 3D, 4D, 5D ... etc arrays are used? And if one used numpy.sum(array, axis=5) for a 5D array would what happen? | 0 | 1 | 141 |
0 | 22,144,263 | 0 | 0 | 0 | 0 | 5 | false | 1 | 2014-03-03T10:02:00.000 | 0 | 5 | 0 | Examples on N-D arrays usage | 22,143,644 | 0 | python,arrays,numpy | For example, a 3D array could be used to represent a movie, that is a 2D image that changes with time.
For a given time, the first two axes would give the coordinate of a pixel in the image, and the corresponding value would give the color of this pixel, or a grey scale level. The third axis would then represent time. For each time slot, you have a complete image.
In this example, numpy.sum(array, axis=2) would integrate the exposure in a given pixel. If you think about a film taken in low light conditions, you could think of doing something like that to be able to see anything. | I was surprised when I started learning numpy that there are N dimensional arrays. I'm a programmer and all I thought that nobody ever use more than 2D array. Actually I can't even think beyond a 2D array. I don't know how think about 3D, 4D, 5D arrays or more. I don't know where to use them.
Can you please give me examples of where 3D, 4D, 5D ... etc arrays are used? And if one used numpy.sum(array, axis=5) for a 5D array would what happen? | 0 | 1 | 141 |
0 | 22,144,157 | 0 | 0 | 0 | 0 | 5 | false | 1 | 2014-03-03T10:02:00.000 | 0 | 5 | 0 | Examples on N-D arrays usage | 22,143,644 | 0 | python,arrays,numpy | Practical applications are hard to come up with but I can give you a simple example for 3D.
Imagine taking a 3D world (a game or simulation for example) and splitting it into equally sized cubes. Each cube could contain a specific value of some kind (a good example is temperature for climate modelling). The matrix can then be used for further operations (simple ones like calculating its Transpose, its Determinant etc...).
I recently had an assignment which involved modelling fluid dynamics in a 2D space. I could have easily extended it to work in 3D and this would have required me to use a 3D matrix instead.
You may wish to also further extend matrices to cater for time, which would make them 4D. In the end, it really boils down to the specific problem you are dealing with.
As an end note however, 2D matrices are still used for 3D graphics (You use a 4x4 augmented matrix). | I was surprised when I started learning numpy that there are N dimensional arrays. I'm a programmer and all I thought that nobody ever use more than 2D array. Actually I can't even think beyond a 2D array. I don't know how think about 3D, 4D, 5D arrays or more. I don't know where to use them.
Can you please give me examples of where 3D, 4D, 5D ... etc arrays are used? And if one used numpy.sum(array, axis=5) for a 5D array would what happen? | 0 | 1 | 141 |
0 | 25,162,895 | 0 | 0 | 0 | 0 | 1 | true | 14 | 2014-03-03T20:04:00.000 | 10 | 2 | 0 | Pandas MultiIndex versus Panel | 22,156,258 | 1.2 | python,pandas | In my practice, the strongest, easiest-to-see difference is that a Panel needs to be homogeneous in every dimension. If you look at a Panel as a stack of Dataframes, you cannot create it by stacking Dataframes of different sizes or with different indexes/columns. You can indeed handle more non-homogeneous type of data with multiindex.
So the first choice has to be made based on how your data is to be organized. | Using Pandas, what are the reasons to use a Panel versus a MultiIndex DataFrame?
I have personally found significant difference between the two in the ease of accessing different dimensions/levels, but that may just be my being more familiar with the interface for one versus the other. I assume there are more substantive differences, however. | 0 | 1 | 2,444 |
0 | 22,512,378 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-03-03T22:00:00.000 | 2 | 1 | 0 | ArtistAnimation vs FuncAnimation matplotlib animation matplotlib.animation | 22,158,395 | 0.379949 | python,animation,matplotlib | I think you are right, although it is simple to go from a list to a function (just iterate over it) or back (store function values in an array).
So it really doesn't matter too much, but you can pick the one that best suits your code, as you described.
(Personally I find ArtistAnimation to be the most convenient)
If your result is very large, it might be good to use FuncAnimation so you don't need to store your data. MPL still stores its own copy for plotting, but this factor of two might make a difference.
According to the documentation the use of each of them is:
.ArtistAnimation:
Before calling this function, all plotting should have taken place and the relevant artists saved.
FuncAnimation
Makes an animation by repeatedly calling a function func, passing in (optional) arguments in fargs.
So it appears to me that ArtistAnimation is useful when you have already the whole array, list or set of whatever object you want to make an animation from. FuncAnimation in the other hand seems to be more useful whenever you have a function that is able to give it your next result.
Is my intuition above correct about this? My question in general is when is more convenient to use one or the other.
Thanks in advance | 0 | 1 | 2,205 |
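A minimal side-by-side sketch of the two APIs (illustrative only; the sine-wave data is made up): ArtistAnimation consumes a list of pre-built artists, while FuncAnimation calls a callback per frame.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

x = np.linspace(0, 2 * np.pi, 100)

# ArtistAnimation: every frame is plotted up front and the artists are stored
fig1, ax1 = plt.subplots()
frames = [[ax1.plot(x, np.sin(x + p), color='b')[0]]
          for p in np.linspace(0, 2 * np.pi, 30)]
ani1 = animation.ArtistAnimation(fig1, frames, interval=50)

# FuncAnimation: frames are produced on demand by a function
fig2, ax2 = plt.subplots()
line, = ax2.plot(x, np.sin(x))
def update(p):
    line.set_ydata(np.sin(x + p))
    return line,
ani2 = animation.FuncAnimation(fig2, update, frames=np.linspace(0, 2 * np.pi, 30), interval=50)

plt.show()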
0 | 22,161,688 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2014-03-03T22:48:00.000 | 5 | 3 | 0 | IPython & matplotlib config profiles and files | 22,159,215 | 1.2 | python,matplotlib,ipython | We (IPython) have kind of gone back and forth on the best location for config on Linux. We used to always use ~/.ipython, but then we switched to ~/.config/ipython, which is the XDG-specified location (more correct, for a given value of correct), while still checking both. In IPython 2, we're switching back to ~/.ipython by default, to make it more consistent across the different platforms we support.
However, I don't think it should have been using ~/.config on a Mac - it should always have been ~/.ipython there. | Over time, I have seen IPython (and equivalently matplotlib) using two locations for config files:
~/.ipython/profile_default/
~/.config/ipython/profile_default
which is the right one? Do these packages check both?
In case it matters, I am using Anaconda on OS X and on Linux | 0 | 1 | 555 |
0 | 22,161,946 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2014-03-03T22:48:00.000 | 2 | 3 | 0 | IPython & matplotlib config profiles and files | 22,159,215 | 0.132549 | python,matplotlib,ipython | As far as matplotlib is concerned, on OS X the config file (matplotlibrc) will be looked for first in the current directory, then in ~/.matplotlib, and finally in INSTALL/matplotlib/mpl-data/matplotlibrc, where INSTALL is the Python site-packages directory. With a standard install of Python from python.org, this is /Library/Frameworks/Python.framework/Versions/X.Y/lib/pythonX.Y/site-packages, where X.Y is the version you're using, like 2.7 or 3.3. | Over time, I have seen IPython (and equivalently matplotlib) using two locations for config files:
~/.ipython/profile_default/
~/.config/ipython/profile_default
which is the right one? Do these packages check both?
In case it matters, I am using Anaconda on OS X and on Linux | 0 | 1 | 555 |
0 | 22,159,481 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-03-03T22:58:00.000 | 2 | 1 | 0 | (Text Classification) Handling same words but from different documents [TFIDF ] | 22,159,351 | 1.2 | python,text,machine-learning,classification,tf-idf | First, let's get some terminology clear. A term is a word-like unit in a corpus. A token is a term at a particular location in a particular document. There can be multiple tokens that use the same term. For example, in my answer, there are many tokens that use the term "the". But there is only one term for "the".
I think you are a little bit confused. TF-IDF style weighting functions specify how to make a per term score out of the term's token frequency in a document and the background token document frequency in the corpus for each term in a document. TF-IDF converts a document into a mapping of terms to weights. So more tokens sharing the same term in a document will increase the corresponding weight for the term, but there will only be one weight per term. There is no separate score for tokens sharing a term inside the doc. | So I'm making a python class which calculates the tfidf weight of each word in a document. Now in my dataset I have 50 documents. In these documents many words intersect, thus having multiple same word features but with different tfidf weight. So the question is how do I sum up all the weights into one singular weight? | 0 | 1 | 722 |
0 | 22,185,324 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-03-04T23:17:00.000 | 4 | 1 | 0 | Determine sparsity of sparse matrix ( Lil matrix ) | 22,185,277 | 1.2 | python,scipy,sparse-matrix | m.nnz is the number of nonzero elements in the matrix m; dividing it by the total number of elements (the product of m.shape) gives the density, and one minus that gives the sparsity. | I have a large sparse matrix, implemented as a lil sparse matrix from SciPy. I just want a statistic for how sparse the matrix is once populated. Is there a method to find this out? | 0 | 1 | 157
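For example (a small sketch with made-up entries):
from scipy.sparse import lil_matrix

m = lil_matrix((1000, 1000))
m[0, 0] = 1.0
m[10, 20] = 2.0

total = m.shape[0] * m.shape[1]
density = m.nnz / float(total)   # fraction of entries that are non-zero
sparsity = 1.0 - density
print(m.nnz, density, sparsity)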
0 | 22,188,201 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-03-05T03:46:00.000 | 3 | 3 | 0 | Estimate probability that two random integers between 0 and k are relatively prime | 22,188,097 | 0.197375 | python,primes | Without a complete enumeration of the relative primeness of all numbers between 0 and k (a huge task and one that grows as the square of k) you can make an estimate by selecting a relatively large number of random pairs (p of them) and determine whether they are relatively prime.
The assumption is that as the sample size increases the proportion of relative primes tends towards the required probability value (i.e. if you take 10,000 sampled pairs and you find that 7,500 of them are relatively prime then you'd estimate the probability of relative primeness at 0.75).
in Python random.randint(0, k) selects a (pseudo-)random integer between 0 and k. | By generating and checking p random pairs.
Somewhat confused on how to go about doing this. I know I could make an algorithm that determines whether or not two integers are relatively prime. I am also having difficulty understanding what generating and checking p random pairs means. | 0 | 1 | 519 |
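A sketch of that sampling approach (the helper name coprime_fraction is my own; pairs are drawn from 1..k to avoid the gcd(0, 0) edge case):
import random
try:
    from math import gcd        # Python 3.5+
except ImportError:
    from fractions import gcd   # Python 2

def coprime_fraction(k, p=10000):
    # Estimate P(gcd(a, b) == 1) from p random pairs drawn uniformly from 1..k
    hits = sum(1 for _ in range(p)
               if gcd(random.randint(1, k), random.randint(1, k)) == 1)
    return hits / float(p)

print(coprime_fraction(1000))    # tends towards 6/pi^2 ~ 0.61 for large k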
0 | 22,214,465 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-03-06T03:20:00.000 | 0 | 2 | 0 | Given an array length 1 or more of ints, return the smallest value in the array | 22,214,166 | 0 | python-2.7 | Is this ok? using C
#include <stdio.h>

/* Reads `nums` integers from stdin and returns the smallest one. */
int my_min(int nums)
{
    int i, min = 0;
    int vals[nums];   /* C99 variable-length array (the original int min[N] shadowed min) */
    for (i = 0; i < nums; i++)
    {
        scanf("%d", &vals[i]);
        if (i == 0)
        {
            min = vals[0];
        }
        else if (min > vals[i])
        {
            min = vals[i];
        }
    }
    return min;
} | Given an array length 1 or more of ints, return the smallest value in the array.
my_min([10, 3, 5, 6]) -> 3
The program starts with def my_min(nums): | 0 | 1 | 58 |
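Since the question itself asks for Python 2.7 with the signature def my_min(nums), a minimal Python version (separate from the C sketch in the answer above) could be:
def my_min(nums):
    smallest = nums[0]
    for n in nums[1:]:
        if n < smallest:
            smallest = n
    return smallest

print(my_min([10, 3, 5, 6]))   # -> 3 (the built-in min(nums) does the same)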
0 | 22,274,333 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-03-08T19:45:00.000 | 1 | 1 | 0 | one colormap for multiple subplots with different maximum values | 22,274,186 | 1.2 | python,matplotlib,color-mapping | imshow takes two arguments, vmin and vmax, for the color scale. You can do what you want by passing the same vmin and vmax to both of your subplots.
To find vmin, take the minimum over all the values in both datasets (and likewise the maximum for vmax).
Thanks,
Alice | 0 | 1 | 105 |
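A sketch of that suggestion (the data here is made up): compute a shared vmin/vmax over both arrays and pass them to each imshow call.
import numpy as np
import matplotlib.pyplot as plt

a = np.random.rand(10, 10)               # maximum around 1
b = 10.0 / 9.0 * np.random.rand(10, 10)  # maximum around 10/9

vmin = min(a.min(), b.min())
vmax = max(a.max(), b.max())

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(a, vmin=vmin, vmax=vmax)
im2 = ax2.imshow(b, vmin=vmin, vmax=vmax)
fig.colorbar(im2, ax=[ax1, ax2])   # one colorbar valid for both panels
plt.show()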
0 | 22,281,914 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-03-09T07:37:00.000 | 0 | 3 | 0 | Grouping a list in python specifically | 22,279,611 | 0 | python,sorting | I would use a dictionary with the first element as key.
Also look into ordered dictionaries. | Hi is there anyway to group this list such that it would return a string(first element) and a list within a tuple for each equivalent first element?
ie.,
[('106', '1', '1', '43009'), ('106', '1', '2', '43179'), ('106', '1', '4', '43619'), ('171', '1', '3', '59111'), ('171', '1', '4', '57089'), ('171', '1', '5', '57079'), ('184', '1', '18', '42149'), ('184', '1', '19', '12109'), ('184', '1', '20', '12099')]
becomes :
[('106',[('106', '1', '1', '43009'), ('106', '1', '2', '43179'), ('106', '1', '4', '43619')]),
('171',[('171', '1', '3', '59111'), ('171', '1', '4', '57089'), ('171', '1', '5', '57079')]),
('184'[(('184', '1', '18', '42149'), ('184', '1', '19', '12109'), ('184', '1', '20', '12099')])] | 0 | 1 | 53 |
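A sketch of the dictionary approach suggested in the answer (using an OrderedDict so the original order of first elements is kept; input shortened for brevity):
from collections import OrderedDict

rows = [('106', '1', '1', '43009'), ('106', '1', '2', '43179'),
        ('171', '1', '3', '59111'), ('184', '1', '18', '42149')]

groups = OrderedDict()
for row in rows:
    groups.setdefault(row[0], []).append(row)

result = list(groups.items())
# -> [('106', [('106', '1', '1', '43009'), ('106', '1', '2', '43179')]),
#     ('171', [('171', '1', '3', '59111')]),
#     ('184', [('184', '1', '18', '42149')])]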
0 | 22,284,670 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-03-09T14:35:00.000 | 0 | 1 | 0 | Python averaging with ndarrays, csvfile data | 22,283,494 | 0 | python,arrays,numpy | I think you're certainly on the right track (because python together with numpy is a great combination for this task), but in order to do what you want to do, you do need some basic programming skills. I'll assume you at least know a little about working in an interactive python shell and how to import modules etc. :-)
Then probably the easiest approach is to only have a single numpy array: one that contains the sum of the data in your files. After that it's just a matter of dividing by the number of files that you have.
So for example you could follow the following approach:
loop over all files in a folder with a for-loop and the os.listdir method
check if the file belongs to the data collection, for example by using something like str.endswith('.csv')
convert the filename to a full path by using os.path.join
read the data to a numpy array with numpy.loadtxt
add this data to the array containing the sums, which is initialized with np.zeros before the loop
keep a count of how many files you processed
after the loop, calculate the means by dividing the sums by the number of files processed | In a folder, I have a number of .csv files (count varies) each of which has 5 rows and 1200 columns of numerical data(float).
Now I want to average the data in these files (i.e. R1C1 of files gives one averaged value in a resulting file, and so on for every position (R2C2 of all files gives one value in the same position of resulting file etc.).
How do I sequentially input all files in that folder into a couple of arrays; what functions in numpy can be used to just find the mean among the files (now arrays) that have been read into these arrays. Is there a better way to this? New to computing, appreciate any help. | 0 | 1 | 59 |
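A sketch following the steps in the answer above (the folder name, delimiter and 5x1200 shape are assumptions taken from the question):
import os
import numpy as np

folder = 'data'                  # assumed location of the .csv files
total = np.zeros((5, 1200))
count = 0
for name in os.listdir(folder):
    if name.endswith('.csv'):
        total += np.loadtxt(os.path.join(folder, name), delimiter=',')
        count += 1

mean = total / count             # element-wise average over all files
np.savetxt('result.csv', mean, delimiter=',')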
0 | 22,323,918 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-03-10T18:46:00.000 | 1 | 1 | 0 | Using Python parser to sniff delimiter Spammed to STDOUT | 22,308,688 | 1.2 | python,python-2.7,pandas | This is a 'bug' in that I think this is a debugging message.
To work-around, pass engine='python' to disable the message. | When using pandas.read_csv setting sep = None for automatic delimiter detection, the message Using Python parser to sniff delimiter is printed to STDOUT. My code calls this function often so this greatly annoys me, how can I prevent this from happening short of going into the source and deleting the print statement.
This is with pandas 0.13.1, Python 2.7.5 | 0 | 1 | 113 |
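For example (the filename is an assumption):
import pandas as pd

# engine='python' still sniffs the delimiter but skips the message
df = pd.read_csv('data.txt', sep=None, engine='python')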
0 | 22,346,834 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-03-12T09:10:00.000 | 1 | 1 | 0 | Compare Numpy and Matlab code that uses random permutation | 22,346,684 | 0.197375 | python,matlab,random,numpy,permutation | This is a common issue. While the random number generator is identical, the function which converts your random number stream into a random permutation is different. There is no specified standard algorithm which describes the expected result.
To solve this issue, you have to use the same library in both tools. | I'm having problems to compare the output of two code because of random number state.
I'm comparing the MATLAB randperm function with the output of the equivalent numpy.random.permutation function but, even if I've set the seed to the same value with a MATLAB rand('twister',0) and a python numpy.random.seed(0) I'm obtaining different permutations.
I've to say that the result of MATLAB's rand and numpy numpy.random.rand are the same if the seed are set like above. | 0 | 1 | 1,337 |
0 | 22,360,726 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-03-12T18:17:00.000 | 0 | 1 | 0 | Python Neurolab - fixing output range | 22,360,412 | 1.2 | python,neural-network | Simply use a standard sigmoid/logistic activation function on the output neuron. sigmoid(x) > 0 forall real-valued x so that should do what you want.
By default, many neural network libraries will use either linear or symmetric sigmoid outputs (which can go negative).
Just note that it takes longer to train networks with a standard sigmoid output function. It's usually better in practice to let the values go negative and instead transform the outputs from the network into the range [0,1] after the fact (shift up by the minimum, divide by the range (aka max-min)). | I am learning some model based on examples ${((x_{i1},x_{i2},....,x_{ip}),y_i)}_{i=1...N}$ using a neural network of Feed Forward Multilayer Perceptron (newff) (using python library neurolab). I expect the output of the NN to be positive for any further simulation of the NN.
How can I make sure that the results of simulation of my learned NN are always positive?
(how I do it in neurolab?) | 0 | 1 | 948 |
0 | 22,375,190 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-03-13T06:28:00.000 | 1 | 1 | 0 | Panel truncate error: tuple object has no attribute 'year' | 22,370,760 | 1.2 | python,pandas | In dateutil 2.2 there was an internal API change. Pandas 0.12 shows this bug as it relies on this API.
Pandas >= 0.13 works around, or you can downgrade to dateutil 2.1 | I am running code on two separate machines, it works on one machine and not on the other. I have a Pandas panel object x and I am using x.truncate('2002-01-01'). It works on one machine and not the other.
The error thrown is DateParseError: 'tuple' object has no attribute 'year'.
I have some inkling there is something wrong with the dateUtil package upgrade but didn't know if there's a better fix than backwardating the install. | 0 | 1 | 179 |
0 | 22,389,392 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-03-13T15:39:00.000 | 0 | 2 | 0 | How can I create a subset of the 'most dissimilar' arrays from a set of possible combinations? | 22,383,642 | 0 | python,arrays,math,numpy,combinations | Your algorithm could look like this:
Keep the last X (say 10) of the combinations that have been used in a list of some sort.
Pick Y (say 10) combinations randomly.
Analyze each of the Y combinations against the last X combinations to find the most dissimilar combination. This would involve writing a method that would generate a score as to how dissimilar the combination is with each of the last X combinations. Average the score and pick the one that is most dissimilar. | Say I have an array of shape (32,).
Each element can have one of four int values:0 to 3
If I wanted to create an array for each possible combination I would have 432 ( approximately 1.84 x 1019) arrays - this is overly burdensome.
Is there a straightforward way to pick fewer arrays, say 1 x 106, by picking the 'most dissimilar' combinations?
By 'most dissimilar' I mean avoiding arrays that are different by one (or few) values and picking arrays that have many dissimilar values.
Also, if there is an area of mathematics that I should be looking at to improve my description please let me know. | 0 | 1 | 120 |
0 | 22,419,620 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-03-15T02:54:00.000 | 1 | 1 | 0 | Scikit-learn, random forests - How many samples does each tree contain? | 22,418,958 | 0.197375 | python,scikit-learn,random-forest | I believe RandomForestClassifier will use the entire training set to build each tree. Typically building each tree involves selecting the features which have the most predictive power (the ones which create the largest 'split'), and having more data makes computing that more accurate.
I'm having trouble finding how many samples scikit-learn pulls by default. Does anyone know? | 0 | 1 | 280 |
0 | 22,436,696 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-03-16T11:55:00.000 | 1 | 1 | 0 | Statistical analysis of .h5 files (SPSS?) | 22,436,515 | 0.197375 | python,r,hdf5,statistics,h5py | Is there a way to convert the data without losing any information?
If the HDF5 data is regular enough, you can just load it in Python or R and save it out again as CSV (or even SPSS .sav format if you're a bit more adventurous and/or care about performance).
Why doesn't SPSS support h5 anyway?
Who knows. It probably should. Oh well.
Do you think it is worthwhile to learn programming in R?
If you find SPSS useful, you may also find R useful. Since you mentioned Python, you may find that useful too, but it's more of a general-purpose language: more flexible, but less focused on math and stats.
Would R give me the same arsenal of methods as I have in SPSS?
Probably, depending on exactly what you're doing. R has most stuff for math and stats, including some fairly esoteric and/or new algorithms in installable packages. It has a few things Python doesn't have (yet), but Python also covers most of the bases for many users. | I have two sets of data in separated .h5 files (Hierarchical Data Format 5, HDF5), obtained with python scripts, and I would like to perform statistical analysis to find correlations between them. My experience here is limited; I don't know any R.
I would like to load the data into SPSS, but SPSS doesn't seem to support .h5. What would be the best way to go here? I can write everything to a .csv file, but I would lose the names of the variables. Is there a way to convert the data without losing any information? And why doesn't SPSS support h5 anyway?
I am aware of the existence of the Rpy module. Do you think it is worthwhile to learn programming in R? Would this give me the same arsenal of methods as I have in SPSS?
Thank you for your input! | 0 | 1 | 398 |
0 | 22,519,952 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2014-03-16T12:51:00.000 | 2 | 1 | 0 | Import nested Json into cassandra | 22,437,058 | 0.379949 | java,python,json,cassandra,cassandra-cli | If you don't need to be able to query individual items from the json structure, just store the whole serialized string into one column.
If you do need to be able to query individual items, I suggest using one of the collection types: list, set, or map. As far as typing goes, I would leave the value as text or blob and rely on json to handle the typing. In other words, json encode the values before inserting and then json decode the values when reading. | I have list of nested Json objects which I want to save into cassandra (1.2.15).
However the constraint I have is that I do not know the column family's column data types before hand i.e each Json object has got a different structure with fields of different datatypes.
So I am planning to use dynamic composite type for creating a column family.
So I would like to know if there is an API or suggest some ideas on how to save such Json object list into cassandra.
Thanks | 0 | 1 | 1,459 |
0 | 22,440,992 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-03-16T18:23:00.000 | 1 | 2 | 0 | How to pick a random element in an np array only if it isn't a certain value | 22,440,923 | 0.099668 | python,arrays,numpy | If you're willing to accept probabilistic times and you have fewer than 50% ignored values, you can just retry until you have an acceptable value.
If you can't, you're going to have to go over the entire array at least once to know which values to ignore, but that takes n memory. | I'm using python.
I have what might be an easy question, though I can't see it.
If I have an array x = array([1.0,0.0,1.5,0.0,6.0]) and y = array([1,2,3,4,5])
I'm looking for an efficient way to pick randomly between 1.0,1.5,6.0 ignoring all zeros while retaining the index for comparison with another array such as y. So if I was to randomly pick 6.0 I could still relate it to y[4].
The reason for the efficient bit is that eventually I want to pick between possibly 10 values from an array of 1000+ with the rest zero. Depending on how intensive the other calculations become depends on the size of the array though it could easily become much larger than 1000.
Thanks | 0 | 1 | 1,746 |
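One concrete way to do this in a single pass, shown here as a sketch (a different approach from the retry idea in the answer above):
import numpy as np

x = np.array([1.0, 0.0, 1.5, 0.0, 6.0])
y = np.array([1, 2, 3, 4, 5])

candidates = np.flatnonzero(x)       # indices where x is non-zero
i = np.random.choice(candidates)     # random index among those
print(x[i], y[i])                    # value and its matching y entry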
0 | 22,673,856 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-03-17T17:03:00.000 | 0 | 1 | 0 | Python with openCV on MAC crashes | 22,460,645 | 1.2 | eclipse,macos,opencv,python-2.7,pydev | Ok, it's working now. Here is what I did:
Install Python and every package I need for it with Macports
Set the Macports version as standard
Adjust PATH and PYTHONPATH
Reboot (not sure if needed)
Remove old interpreter and libs in Eclipse
Choose the new Python installation as Interpreter in Eclipse
Confirm the new libs in Eclipse
Restart Eclipse
Done | My final goal is to use Python scripts with SciPy, NumPy, Theano and openCV libraries to write code for a machine learning application. Everything worked so far apart from the openCV.
I am trying to install openCV 2.4.8 to use in Python projects in my Eclipse Kepler installation on my MBA running Mac OSX 10.9.2. I have the PyDef plugin v2.7 and a installation of Anaconda v1.9.1.
Here is what I did to install opencv:
sudo port selfupdate
sudo port upgrade outdated
sudo port install opencv
Then I realized that I can't use it that way in Python and did another:
sudo port install opencv +python27
Ok, then I had another Python installation and I added it to my PYTHONPATH in Eclipse>Preferences>PyDev>Interpreter-Python>Libraries.
Before the installation I got an error in the line import cv2, and everything else looked promising. Now this error disappeared but I get other errors when using any functions or variables of cv2. For example I get two errors in this line: cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
Also Python crashes and has to be restarted when I run a simple test program which worked fine before.
With this PYTHONPATH everything works but I have no openCV:
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pyObjC
/Library/Python/2.7/site-packages/
/Users/xxx/anaconda/lib/python2.7/site-packages
When I add this new folder to the PYTHONPATH...
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
... openCV seems to work but I have the crashes and the other issue described above.
So, can anyone tell me what the problem is and what I can do to make this work?
Thanks for reading this so far and any help/hint you can provide! Please don't be too harsh, I am, as you can probably easily see just a beginner. | 0 | 1 | 611 |
0 | 22,463,625 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-03-17T17:15:00.000 | 4 | 2 | 0 | scikit learn creation of dummy variables | 22,460,948 | 1.2 | python,machine-learning,scikit-learn | For which algorithms in scikit-learn is this transformation into dummy variables necessary? And for those algorithms that aren't, it can't hurt, right?
All algorithms in sklearn with the notable exception of tree-based methods require one-hot encoding (also known as dummy variables) for nominal categorical variables.
Using dummy variables for categorical features with very large cardinalities might hurt tree-based methods, especially randomized tree methods by introducing a bias in the feature split sampler. Tree-based method tend to work reasonably well with a basic integer encoding of categorical features. | In scikit-learn, which models do I need to break categorical variables into dummy binary fields?
For example, if the column is political-party, and the values are democrat, republican and green, for many algorithms, you have to break this into three columns where each row can only hold one 1, and all the rest must be 0.
This avoids enforcing an ordinality that doesn't exist when discretizing [democrat, republican and green] => [0, 1, 2], since democrat and green aren't actually "farther" away than any other pair.
For which algorithms in scikit-learn is this transformation into dummy variables necessary? And for those algorithms that aren't, it can't hurt, right? | 0 | 1 | 3,572 |
0 | 22,663,453 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-03-18T18:51:00.000 | 0 | 2 | 0 | Time signal shifted in amplitude, FIR filter with scipy.signal | 22,488,460 | 1.2 | python,scipy,filtering | So finally I adapted one filter to get the zerofrequency and another bandpassfilter to get the 600 Hz frequency. Passzero has to be true just for the zerofrequency then it works.
I'm not yet happy with the phase delay but I'm working on it.
1) bandpass around 600 Hz:
taps_bp = bp_fir(ntaps, lowcut, highcut, fs)
Function for the bandpass filter:
import scipy.signal
def bp_fir(ntaps, lowcut, highcut, fs, window='hamming'):
    nyq = 0.5 * fs
    # cutoffs normalised by the Nyquist frequency; pass_zero=False gives a bandpass
    taps = scipy.signal.firwin(ntaps, [lowcut / nyq, highcut / nyq], window=window, pass_zero=False)
    return taps
2) zero-frequency (DC) filter:
taps_zerofrequency = zero_fir(ntaps, 1, fs)
Function for the zero-frequency filter:
def zero_fir(ntaps, zerofreq, fs, window='hamming'):
    nyq = 0.5 * fs
    # pass_zero=True keeps the band around DC (a lowpass with cutoff zerofreq)
    taps = scipy.signal.firwin(ntaps, [zerofreq / nyq], window=window, pass_zero=True)
    return taps | I am implementing a bandpass filter in Python using scipy.signal (using the firwin function). My original signal consists of two frequencies (w_1=600Hz, w_2=800Hz). There might be a lot more frequencies, that's why I need a bandpass filter.
In this case I want to filter the frequency band around 600 Hz, so I took 600 +/- 20Hz as cutoff frequencies. When I implemented the filter and reproduced the signal in the time domain using lfilter the frequency is fine.
The amplitude is reproduced in the right magnitude as well. But the problem is that the signal is shifted in the y-direction. For example: s(t) = s_1(t) + s_2(t) with s_1(t) = sin(w_1 t) + 3 and s_2(t) = sin(w_2 t) returns a filtered signal which varies around 0 instead of around [2, 4].
0 | 22,522,819 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-03-19T12:45:00.000 | 1 | 2 | 0 | Weibull Censored Data | 22,506,268 | 0.099668 | python,scipy,weibull | If I understand correctly, then this requires estimation with censored data.
None of the scipy.stats.distribution will directly estimate this case. You need to combine the likelihood function of the non-censored and the likelihood function of the censored observations.
You can use the pdf and the cdf, or better sf, of the scipy.stats.distributions for the two parts. Then, you could just use scipy optimize to minimize the negative log-likelihood, or try the GenericLikelihoodModel in statsmodels if you are also interested in the uncertainty of the parameter estimates. | I'm currently working with some lifetime data that corresponds to the Installation date and Failure date of units. The data is field data, so I do have a major number of suspensions (units that haven't presented a failure yet). I would like to make some Weibull analysis with this data using Scipy stats library (fitting the data to a weibull curve and obtaining the parameters of the distribution for instance). I'm quite new to Python and Scipy so I can't find a way to include the suspended data in any avaiable Weibull distribution (dweibull, exponweibull, minweibull, maxweibull). Is there a easy way to work with suspensions? I would not like to recriate the wheel, but I'm having difficulties in estimating the parameters of the Weibull from my data. Can anyone help me?
Thanks a lot! | 0 | 1 | 1,430 |
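A rough sketch of the combined likelihood described in the answer (parameter names, starting values and the sample data are made up; failures contribute log-pdf terms, suspensions contribute log-survival terms):
import numpy as np
from scipy import stats, optimize

failures = np.array([120.0, 250.0, 400.0])       # observed failure times
suspensions = np.array([300.0, 500.0, 500.0])    # right-censored (still running)

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    ll = stats.weibull_min.logpdf(failures, shape, scale=scale).sum()
    ll += stats.weibull_min.logsf(suspensions, shape, scale=scale).sum()
    return -ll

res = optimize.minimize(neg_log_lik, x0=[1.5, 300.0], method='Nelder-Mead')
print(res.x)   # estimated (shape, scale)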
0 | 22,562,740 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2014-03-20T14:48:00.000 | 3 | 2 | 0 | How to deal with indeterminate form in Python | 22,536,589 | 0.291313 | python,numpy,complex-numbers | The short answer is that the C99 standard (Annex G) on complex number arithmetic recognizes only a single complex infinity (think: Riemann sphere). (inf, nan) is one representation for it, and (-inf, 6j) is another, equivalent representation. | At some point in my python script, I require to make the calculation: 1*(-inf + 6.28318530718j). I understand why this will return -inf + nan*j since the imaginary component of 1 is obviously 0, but I would like the multiplication to have the return value of -inf + 6.28318530718j as would be expected. I also want whatever solution to be robust to any of these kinds of multiplications. Any ideas?
Edit:
A Complex multiplication like x*y where x = (a+ib) and y = (c+id) I assume is handled like (x.real*y.real-x.imag*y.imag)+1j*(x.real*y.imag+x.imag*y.real) in python as this is what the multiplication comes down to mathematically. Now if say x=1.0 and y=-inf+1.0j then the result will contain nan's as inf*0 will be undefined. I want a way for python to interpret * so that the return value to this example will be -inf+1.0j. It seems unnecessary to have to define my own multiplication operator (via say a function cmultiply(x,y)) such that I get the desired result. | 0 | 1 | 584 |
0 | 22,584,181 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2014-03-21T14:31:00.000 | 6 | 2 | 0 | Pandas dataset into an array for modelling in Scikit-Learn | 22,562,540 | 1 | python,pandas,scikit-learn | Pandas DataFrames are very good at acting like Numpy arrays when they need to. If in doubt, you can always use the values attribute to get a Numpy representation (df.values will give you a Numpy array of the values in DataFrame df. | Can we run scikit-learn models on Pandas DataFrames or do we need to convert DataFrames into NumPy arrays? | 0 | 1 | 9,121 |
0 | 22,591,329 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-03-23T12:18:00.000 | 1 | 1 | 0 | Can anyone in detail explain how cv and cv2 are different and what makes cv2 better and faster than cv? | 22,590,811 | 1.2 | python,c++,opencv,numpy | there is no question at all, - use cv2
the old cv api, which wraps IplImage and CvMat, is being phased out and will no longer be available in the next release of OpenCV
the newer cv2 api uses numpy arrays for almost anything, so you can easily combine it with scipy, matplotlib, etc. | I've recently started using openCV in python. I've come across various posts comparing cv and cv2 and with an overview saying how cv2 is based on numpy and makes use of an array (cvMat) as opposed to cv makes use of old openCV bindings that was using Iplimage * (correct me if i'm wrong).
However I would really like know how basic techniques (Iplimage* and cvMat) differ and why later is faster and better and how that being used in cv and cv2 respectively makes difference in terms of performance.
Thanks. | 0 | 1 | 230 |
0 | 22,595,047 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-03-23T16:03:00.000 | 1 | 1 | 0 | Python widget for real time plotting | 22,593,328 | 0.197375 | python,user-interface,matplotlib,tkinter,wxpython | Tkinter, which is part of python, comes with a canvas widget that can be used for some simple plotting. It can draw lines and curves, and one datapoint every couple of seconds is very easy for it to handle. | Is there a minimalistic python module out there that I can use to plot real time data that comes in every 2-3 seconds?
I've tried matplotlib but I'm having a couple errors trying to get it to run so I'm not looking for something as robust and with many features. | 0 | 1 | 133 |
0 | 48,487,656 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-03-23T21:22:00.000 | 0 | 2 | 0 | Making scikit-learn train on all training data after cross-validation | 22,597,239 | 0 | python,scikit-learn | My recommendation is to not use the cross-validation split that had the best performance. That could potentially give you problems with high bias. After all, the performance just happened to be good because there was a fold used for testing that just happened to match the data used for training. When you generalize it to the real world, that probably won't happen.
A strategy I got from Andrew Ng is to have a train, dev, and test sets. I would first split your dataset into a test and train set. Then use cross fold validation on your training set, where effectively the training set will be split into training and dev sets. Do cross fold validation to validate your model and store the precision and recall and other metrics to build a ROC curve. Average the values and report those. You can also tune the hyperparameters using your dev set as well.
Next, train the model with the entire training set, then validate the model with your hold out test set. | I'm using scikit-learn to train classifiers. I want also to do cross validation, but after cross-validation I want to train on the entire dataset. I found that cross_validation.cross_val_score() just returns the scores.
Edit: I would like to train the classifier that had the best cross-validation score with all of my data. | 0 | 1 | 1,088 |
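A minimal sketch of the usual pattern (dataset and estimator chosen arbitrarily): cross-validate to estimate generalisation, then refit the same estimator on all the data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score   # sklearn.cross_validation in older scikit-learn

iris = load_iris()
X, y = iris.data, iris.target

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated estimate

clf.fit(X, y)   # finally train the very same estimator on all available data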
0 | 22,609,701 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-03-24T12:03:00.000 | 1 | 2 | 0 | what library is able to extract SIFT features in Python? | 22,608,905 | 0.099668 | python-2.7,computer-vision,python-module | OpenCV is free to use.
But SIFT itself, as an algorithm, is patented, so even if you made your own implementation of SIFT, not based on Lowe's code, you still could not use it in a commercial application. So, unless you have a license for SIFT, no library with it is free.
But you can consult with patent lawyers - some countries, like Russia, do not allow algorithms to be patented - so you can use SIFT inside such a country.
0 | 23,098,414 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2014-03-24T12:03:00.000 | 2 | 2 | 0 | what library is able to extract SIFT features in Python? | 22,608,905 | 1.2 | python-2.7,computer-vision,python-module | I would like to suggest VLFeat, another open source vision library. It also has a python wrapper. The implementation of SIFT in VLFeat is modified from the original algorithm, but I think the performance is good. | In python which library is able to extract SIFT visual descriptors? I know opencv has an implementation but it is not free to use and skimage does not include SIFT particularly. | 0 | 1 | 2,214 |
0 | 22,619,589 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-03-24T20:04:00.000 | 2 | 2 | 0 | How can i check in numpy if a binary image is almost all black? | 22,619,506 | 0.197375 | python,opencv,image-processing,numpy,scikit-image | Here is a list of ideas I can think of:
get the np.sum() and if it is lower than a threshold, then consider it almost black
calculate np.mean() and np.std() of the image, an almost black image is an image that has low mean and low variance | How can i see in if a binary image is almost all black or all white in numpy or scikit-image modules ?
I thought about numpy.all function or numpy.any but i do not know how neither for a total black image nor for a almost black image. | 0 | 1 | 1,554 |
0 | 22,619,838 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2014-03-24T20:04:00.000 | 2 | 2 | 0 | How can i check in numpy if a binary image is almost all black? | 22,619,506 | 1.2 | python,opencv,image-processing,numpy,scikit-image | Assuming that all the pixels really are ones or zeros, something like this might work (not at all tested):
import numpy as np

def is_sorta_black(arr, threshold=0.8):
    # fraction of white (1) pixels; above (1 - threshold) we call it "not black"
    tot = float(np.sum(arr))
    if tot / arr.size > (1 - threshold):
        print("is not black")
        return False
    else:
        print("is kinda black")
        return True | How can i see in if a binary image is almost all black or all white in numpy or scikit-image modules ?
I thought about numpy.all function or numpy.any but i do not know how neither for a total black image nor for a almost black image. | 0 | 1 | 1,554 |
0 | 22,698,775 | 0 | 0 | 0 | 0 | 1 | false | 25 | 2014-03-27T20:37:00.000 | 4 | 2 | 0 | How to sort 2D array (numpy.ndarray) based to the second column in python? | 22,698,687 | 0.379949 | python,arrays,sorting,numpy | sorted(Data, key=lambda row: row[1]) should do it. | I'm trying to convert all my codes to Python. I want to sort an array which has two columns so that the sorting must be based on the 2th column in the ascending order. Then I need to sum the first column data (from first line to, for example, 100th line). I used "Data.sort(axis=1)", but it doesn't work. Does anyone have any idea to solve this problem? | 0 | 1 | 75,440 |
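If the data is a NumPy array rather than a plain list of rows, argsort does the same job (a NumPy-specific alternative to the sorted() call in the answer):
import numpy as np

Data = np.array([[5.0, 3.0], [1.0, 9.0], [7.0, 2.0]])
Data_sorted = Data[Data[:, 1].argsort()]   # rows ordered by the second column, ascending
col_sum = Data_sorted[:100, 0].sum()       # sum of first-column values of the first 100 rows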
0 | 22,731,897 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-03-28T01:16:00.000 | 1 | 2 | 0 | Clustering a list of dates | 22,702,428 | 0.099668 | python-2.7,numpy,scipy,cluster-analysis | k-means is exclusively for coordinates. And more precisely: for continuous and linear values.
The reason is the mean functions. Many people overlook the role of the mean for k-means (despite it being in the name...)
On non-numerical data, how do you compute the mean?
There exist some variants for binary or categorial data. IIRC there is k-modes, for example, and there is k-medoids (PAM, partitioning around medoids).
It's unclear to me what you want to achieve overall... your data seems to be 1-dimensional, so you may want to look at the many questions here about 1-dimensional data (as the data can be sorted, it can be processed much more efficiently than multidimensional data).
In general, even if you projected your data into unix time (seconds since 1.1.1970), k-means will likely only return mediocre results for you. The reason is that it will try to make the three intervals have the same length.
Do you have any reason to suspect that "before", "during" and "after" have the same duration? If not, don't use k-means.
You may however want to have a look at KDE; and plot the estimated density. Once you have understood the role of density for your task, you can start looking at appropriate algorithms (e.g. take the derivative of your density estimation, and look for the largest increase / decrease, or estimate an "average" level, and look for the longest above-average interval). | I have a list of dates I'd like to cluster into 3 clusters. Now, I can see hints that I should be looking at k-means, but all the examples I've found so far are related to coordinates, in other words, pairs of list items.
I want to take this list of dates and append them to three separate lists indicating whether they were before, during or after a certain event. I don't have the time for this event, but that's why I'm guessing it by breaking the date/times into three groups.
Can anyone please help with a simple example on how to use something like numpy or scipy to do this? | 0 | 1 | 7,440 |
0 | 24,051,792 | 1 | 0 | 0 | 0 | 1 | false | 1 | 2014-03-28T17:48:00.000 | 0 | 1 | 0 | Fix the seed for the community module in Python that uses networkx module | 22,719,863 | 0 | python,networkx | I had to change the seed inside every class I used. | I am using the community module to extract communities from a networkx graph. For the community module, the order in which the nodes are processed makes a difference. I tried to set the seed of random to get consistent results but that is not working. Any idea on how to do this?
thanks | 0 | 1 | 165 |
0 | 22,724,963 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-03-28T18:17:00.000 | 1 | 2 | 0 | Can I select rows based on group size with pandas? Or do I have to use SQL? | 22,720,349 | 1.2 | python,pandas | I've made it work with records.groupby('product_name').filter(lambda x: len(x['url']) == 1). Note that simply using len(x) doesn't work. With a dataframe with more than two columns (which is probably most of the real-life dataframes), one has to specify a column for x: any column, except the one to group by with. Also, this code initially didn't work for me because my index on the dataframe was not unique. I'm not sure why this should interfere with the function of filtering, but it did. After reindexing the dataframe, I finally got it to work. | With pandas I can do grouping using df.groupby('product_name').size(). But if I'm only interested rows whose "product_name" is unique, i.e. those records with groupby.size equal to one, how can I filter the df to see only such rows? In other words, can I perform filtering on a database using pandas, based on the number of times an attribute occurs in the database? (I could do that with SQL alright.) | 0 | 1 | 2,961 |
0 | 22,730,167 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2014-03-29T09:21:00.000 | 4 | 3 | 0 | How to calculate exp(x) for really big integers in Python? | 22,729,223 | 0.26052 | python,math,numpy,artificial-intelligence | @Paul already gave you the answer for computational question
However - from neural network point of view your problem is indication that you are doing something wrong. There is no reasonable use of neural networks, where you have to compute such number. You seem to forget about at least one of:
Input data scaling/normalization/standarization
initialization of weights within small bounds
regularization term which keeps weights small when the size of network grows
all these elements are basic and crucial parts of working with neural networks. I recommend to have a look at Neural Networks and Learning Machines by Haykin. | I'm using a sigmoid function for my artificial neural network. The value that I'm passing to the function ranges from 10,000 to 300,000. I need a high-precision answer because that would serve as the weights of the connection between the nodes in my artificial neural network. I've tried looking in numpy but no luck. Is there a way to compute the e^(-x) | 0 | 1 | 3,244 |
0 | 22,737,241 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2014-03-29T21:15:00.000 | 6 | 4 | 0 | one-dimensional array shapes (length,) vs. (length,1) vs. (length) | 22,737,000 | 1 | python,arrays,math,numpy | In Python, (length,) is a tuple, with one 1 item. (length) is just parenthesis around a number.
In numpy, an array can have any number of dimensions, 0, 1, 2, etc. You are asking about the difference between 1 and 2 dimensional objects. (length,1) is a 2 item tuple, giving you the dimensions of a 2d array.
If you are used to working with MATLAB, you might be confused by the fact that there, all arrays are 2 dimensional or larger. | When I check the shape of an array using numpy.shape(), I sometimes get (length,1) and sometimes (length,). It looks like the difference is a column vs. row vector... but It doesn't seem like that changes anything about the array itself [except some functions complain when I pass an array with shape (length,1)].
What is the difference between these two?
Why isn't the shape just, (length)? | 0 | 1 | 18,706 |
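A short illustration of the difference and how to convert between the two shapes (a sketch):
import numpy as np

a = np.zeros(5)          # shape (5,)  -> 1-D array
b = np.zeros((5, 1))     # shape (5, 1) -> 2-D array with a single column

print(a.shape, a.ndim)   # (5,) 1
print(b.shape, b.ndim)   # (5, 1) 2

print(b.ravel().shape)         # (5,)    flatten back to 1-D
print(a[:, np.newaxis].shape)  # (5, 1)  turn 1-D into a column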
0 | 61,132,626 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2014-03-29T21:15:00.000 | 0 | 4 | 0 | one-dimensional array shapes (length,) vs. (length,1) vs. (length) | 22,737,000 | 0 | python,arrays,math,numpy | A vector in Python is actually a two-dimensional array. It's just a coincidence that the number of rows is 1 (for row vectors), or the number of columns is 1 (for column vectors).
By contrast, a one-dimensional array is not a vector (neither a row vector nor a column vector). To understand this, think a concept in geometry, scalar. A scalar only has one attribute, which is numerical. By contrast, a vector has two attributes, number and direction. Fortunately, in linear algebra, vectors also have "directions", although only two possible directions - either horizontal or vertical (unlike infinite possible directions in geometry). A one-dimensional array only has numerical meaning - it doesn't show which direction this array is pointing to. This is why we need two-dimensional arrays to describe vectors. | When I check the shape of an array using numpy.shape(), I sometimes get (length,1) and sometimes (length,). It looks like the difference is a column vs. row vector... but It doesn't seem like that changes anything about the array itself [except some functions complain when I pass an array with shape (length,1)].
What is the difference between these two?
Why isn't the shape just, (length)? | 0 | 1 | 18,706 |
0 | 22,766,449 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-03-31T14:17:00.000 | 0 | 1 | 0 | Installation of Compatible version of Numpy and Scipy on Abaqus 6.13-2 with python 2.6.2 | 22,764,021 | 0 | python,numpy,scipy | What you should do is: install python 2.6.2 separately onto your system (it looks like you are using windows, right?), and then install scipy corresponding to python 2.6.2, and then copy the site-packages to the abaqus folder.
Note that 1) you can't use matplotlib due to the tkinter problem; 2) the numpy is already coming with abaqus so you don't need to install it by hand. | Can anyone give inputs/clue/direction on installation of compatible version of numpy and scipy in abaqus python 2.6.2?
I tried installing numpy-1.6.2, numpy-1.7.1 and numpy-1.8.1, but all give an error about being unable to find vcvarsall.bat, because there is no module named msvccompiler. Based on some of the answers, I verified the Visual Studio version and it is 2008.
Could anyone please give direction on this? | 0 | 1 | 2,294 |
0 | 22,776,862 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-04-01T02:24:00.000 | 1 | 1 | 0 | How to know name of the person in the image? | 22,775,681 | 1.2 | python-2.7,opencv | You can use the filename of the image for that purpose. All you need to do is keep the filenames stored somewhere in your application, alongside the Mat objects. | I implemented face recognition algorithm in raspberry pi(python 2.7 is was used), i have many sets of faces, if the captured face is one in database then the face is detected(i am using eigen faces algo). My question is can i know whose face(persons name) is detected? (can we have sort of tags to image and display name corresponding to it when the face is detected) Note: OpenCV used | 0 | 1 | 599 |
0 | 22,799,245 | 0 | 0 | 0 | 0 | 1 | false | 20 | 2014-04-02T00:05:00.000 | 3 | 3 | 0 | Keep finite entries only in Pandas | 22,799,208 | 0.197375 | python,pandas | You can use .dropna() after a DF[DF==np.inf]=np.nan, (unless you still want to keep the NANs and only drop the infs) | In Pandas, I can use df.dropna() to drop any NaN entries. Is there anything similar in Pandas to drop non-finite (e.g. Inf) entries? | 0 | 1 | 18,662 |
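For example (a sketch), replacing the infinities first and then dropping:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.inf, 2.0, -np.inf, np.nan]})
clean = df.replace([np.inf, -np.inf], np.nan).dropna()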
0 | 22,817,669 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2014-04-02T16:27:00.000 | 3 | 2 | 0 | Pip doesn’t know where numpy is installed | 22,817,533 | 0.291313 | python,python-2.7,numpy,pip,python-packaging | Maybe run deactivate if you are running virtualenv? | Trying to uninstall numpy. I tried pip uninstall numpy. It tells me that it isn't installed. However, numpy is still installed at /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy.
How can I make sure pip finds the numpy package? | 0 | 1 | 7,578 |
0 | 22,848,834 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2014-04-03T13:34:00.000 | 3 | 2 | 0 | What are the operational limits of Rpy2? | 22,839,403 | 1.2 | python,rpy2 | Bring the presumed limitations on.
Rpy2 is, at its lower level (the rpy2.rinterface level), exposing a very large part of the R C-API. Technically, one can do more with rpy2 than one can from R itself (writing a C extension for R would possibly be the only way to catch up). As an amusing fact, doing "R stuff" from rpy2 can be faster than doing the same from R itself (see the rpy2 documentation benchmarking the access of elements in an R vector).
The higher level in rpy2 (the rpy2.robjects level) adds a layer that makes "doing R stuff" more "pythonic" (although by surrendering the performance claim mentioned above). R packages look like Python modules, there are classes such as Formula, Factor, etc. so that all R objects appear as Python classes, there is a conversion system that lets complex R structures be mapped to Python objects automatically (see the example with lme4 in the rpy2 documentation), invalid R variable names are translated on the fly ('.' is a valid character for variable names in R), and Python docstrings are created on the fly from the R documentation.
Problem is, there are many R packages that appeal to me as a social science person.
Can Rpy2 allow full use of any general arbitrary r package, or is there a catch. How well does it work in practice?
If Rpy2 is too limited, I'd unfortunately have to branch over to r, but I would rather not, because of the extra overhead.
Thanks
Tai | 0 | 1 | 488 |
0 | 23,772,908 | 0 | 1 | 0 | 0 | 1 | false | 7 | 2014-04-05T07:47:00.000 | 18 | 4 | 0 | Error installing scipy library through pip on python 3: "compile failed with error code 1" | 22,878,109 | 1 | python,python-3.x,scipy,pip | I was getting the same thing when using pip, I went to the install and it pointed to the following dependencies.
sudo apt-get install python python-dev libatlas-base-dev gcc gfortran g++ | I'm trying to install scipy library through pip on python 3.3.5. By the end of the script, i'm getting this error:
Command /usr/local/opt/python3/bin/python3.3 -c "import setuptools, tokenize;file='/private/tmp/pip_build_root/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record /tmp/pip-9r7808-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/tmp/pip_build_root/scipy
Storing debug log for failure in /Users/dan/.pip/pip.log | 0 | 1 | 11,676 |
0 | 22,897,471 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-04-06T17:12:00.000 | 0 | 2 | 0 | Perform n linear regressions, simultaneously | 22,897,243 | 0 | python,pandas,linear-regression | As far as I know, there is no way to put this all at once into the optimized Fortran library, LAPACK, since each regression is its own independent optimization problem.
Note that the loop over the few columns takes no time relative to the regressions themselves, which you need to fully compute because each regression is an isolated linear algebra problem... so I don't think there is much time to save here...
I have x - a 100 row by 5 column Pandas DataFrame
For i=0,...,4 I want to regress y[:,i] against x[:,i].
I know how to do it using a loop.
But is there a way to vectorise the linear regression, so that I don't have the loop in there? | 0 | 1 | 472 |
0 | 62,341,726 | 0 | 0 | 0 | 0 | 2 | false | 275 | 2014-04-06T19:24:00.000 | 4 | 16 | 0 | Filtering Pandas DataFrames on dates | 22,898,824 | 0.049958 | python,datetime,pandas,filtering,dataframe | You could just select the time range by doing: df.loc['start_date':'end_date'] | I have a Pandas DataFrame with a 'date' column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months.
What is the best way to achieve this? | 0 | 1 | 624,303 |
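A boolean-mask sketch of that (the column name 'date' comes from the question; the sample data and cut-off date are made up):
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2014-04-10', '2014-05-20', '2014-07-01'])})

start = pd.Timestamp('2014-04-06')        # "today"
end = start + pd.DateOffset(months=2)     # two months ahead
within_two_months = df[(df['date'] >= start) & (df['date'] < end)]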