Dataset columns (name, dtype, observed value range or string-length range):

  GUI and Desktop Applications       int64    0 to 1
  A_Id                               int64    5.3k to 72.5M
  Networking and APIs                int64    0 to 1
  Python Basics and Environment      int64    0 to 1
  Other                              int64    0 to 1
  Database and SQL                   int64    0 to 1
  Available Count                    int64    1 to 13
  is_accepted                        bool     2 classes
  Q_Score                            int64    0 to 1.72k
  CreationDate                       string   lengths 23 to 23
  Users Score                        int64    -11 to 327
  AnswerCount                        int64    1 to 31
  System Administration and DevOps   int64    0 to 1
  Title                              string   lengths 15 to 149
  Q_Id                               int64    5.14k to 60M
  Score                              float64  -1 to 1.2
  Tags                               string   lengths 6 to 90
  Answer                             string   lengths 18 to 5.54k
  Question                           string   lengths 49 to 9.42k
  Web Development                    int64    0 to 1
  Data Science and Machine Learning  int64    1 to 1
  ViewCount                          int64    7 to 3.27M

Records (one per answer) follow; each lists its metadata, then the Answer and Question text verbatim.
Title: Topic-based text and user similarity
A_Id 15,066,821 | Q_Id 12,713,797 | CreationDate 2012-10-03T17:32:00.000 | is_accepted false | Q_Score 3 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 1 | Topic flags: all 0
Tags: python,numpy,recommendation-engine,topic-modeling,gensim
Answer, then Question:
My tricks are using a search engine such as ElasticSearch, and it works very well, and in this way we unified the api of all our recommend systems. Detail is listed as below: Training the topic model by your corpus, each topic is an array of words and each of the word is with a probability, and we take the first 6 most probable words as a representation of a topic. For each document in your corpus, we can inference a topic distribution for it, the distribution is an array of probabilities for each topic. For each document, we generate a fake document with the topic distribution and the representation of the topics, for example the size of the fake document is about 1024 words. For each document, we generate a query with the topic distribution and the representation of the topics, for example the size of the query is about 128 words. All preparation is finished as above. When you want to get a list of similar articles or others, you can just perform a search: Get the query for your document, and then perform a search by the query on your fake documents. We found this way is very convenient.
I am looking to compute similarities between users and text documents using their topic representations. I.e. each document and user is represented by a vector of topics (e.g. Neuroscience, Technology, etc) and how relevant that topic is to the user/document. My goal is then to compute the similarity between these vectors, so that I can find similar users, articles and recommended articles. I have tried to use Pearson Correlation but it ends up taking too much memory and time once it reaches ~40k articles and the vectors' length is around 10k. I am using numpy. Can you imagine a better way to do this? or is it inevitable (on a single machine)? Thank you
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,318

Title: How can I call scikit-learn classifiers from Java?
A_Id 50,292,755 | Q_Id 12,738,827 | CreationDate 2012-10-05T02:50:00.000 | is_accepted false | Q_Score 35 | Users Score 1 | Score 0.033321 | AnswerCount 6 | Available Count 1 | Topic flags: all 0
Tags: java,python,jython,scikit-learn
Answer, then Question:
I found myself in a similar situation. I'll recommend carving out a classifier microservice. You could have a classifier microservice which runs in python and then expose calls to that service over some RESTFul API yielding JSON/XML data-interchange format. I think this is a cleaner approach.
I have a classifier that I trained using Python's scikit-learn. How can I use the classifier from a Java program? Can I use Jython? Is there some way to save the classifier in Python and load it in Java? Is there some other way to use it?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 42,287

Title: User profiling for topic-based recommender system
A_Id 14,583,682 | Q_Id 12,763,608 | CreationDate 2012-10-06T20:31:00.000 | is_accepted false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | Topic flags: all 0
Tags: python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling
Answer, then Question:
"represent a user as the aggregation of all the documents viewed" : that might work indeed, given that you are in linear spaces. You can easily add all the documents vectors in one big vector. If you want to add the ratings, you could simply put a coefficient in the sum. Say you group all documents rated 2 in a vector D2, rated 3 in D3 etc... you then simply define a user vector as U=c2*D2+c3*D3+... You can play with various forms for c2, c3, but the easiest approach would be to simply multiply by the rating, and divide by the max rating for normalisation reasons. If your max rating is 5, you could define for instance c2=2/5, c3=3/5 ...
I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users. I trained a latent semantic indexing model, using gensim, on the wikipedia corpus. This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, of course, users have a history of viewed articles, as well as ratings of articles. So my question is: how to represent the users? An idea I had is the following: represent a user as the aggregation of all the documents viewed. But how to take into account the rating? Any ideas? Thanks
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 602

Title: User profiling for topic-based recommender system
A_Id 12,764,041 | Q_Id 12,763,608 | CreationDate 2012-10-06T20:31:00.000 | is_accepted false | Q_Score 2 | Users Score 1 | Score 0.099668 | AnswerCount 2 | Available Count 2 | Topic flags: all 0
Tags: python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling
Answer, then Question:
I don't think that's working with lsa. But you maybe could do some sort of k-NN classification, where each user's coordinates are the documents viewed. Each object (=user) sends out radiation (intensity is inversely proportional to the square of the distance). The intensity is calculated from the ratings on the single documents. Then you can place a object (user) in in this hyperdimensional space, and see what other users give the most 'light'. But: Can't Apache Lucene do that whole stuff for you?
I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users. I trained a latent semantic indexing model, using gensim, on the wikipedia corpus. This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, of course, users have a history of viewed articles, as well as ratings of articles. So my question is: how to represent the users? An idea I had is the following: represent a user as the aggregation of all the documents viewed. But how to take into account the rating? Any ideas? Thanks
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 602

Title: Algorithms for Mining Tuples of Data on huge sample space
A_Id 12,807,402 | Q_Id 12,803,495 | CreationDate 2012-10-09T15:31:00.000 | is_accepted false | Q_Score 1 | Users Score 1 | Score 0.099668 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,data-mining,graph-algorithm,recommendation-engine,apriori
Answer, then Question:
For Apriori, you do not need to have tuples or vectors. It can be implemented with very different data types. The common data type is a sorted item list, which could as well look like 1 13 712 1928 123945 191823476 stored as 6 integers. This is essentially equivalent to a sparse binary vector and often very memory efficient. Plus, APRIORI is actually designed to run on data sets too large for your main memory! Scalability of APRIORI is a mixture of the number of transactions and the number of items. Depending of how they are, you might prefer different data structures and algorithms.
I read that Apriori algorithm is used to fetch association rules from the dataset like a set of tuples. It helps us to find the most frequent 1-itemsets, 2-itemsets and so-on. My problem is bit different. I have a dataset, which is a set of tuples, each of varying size - as follows : (1, 234, 56, 32) (25, 4575, 575, 464, 234, 32) . . . different size tuples The domain for entries is huge, which means that I cannot have a binary vector for each tuple, that tells me if item 'x' is present in tuple. Hence, I do not see Apriori algorithm fitting here. My target is to answer questions like : Give me the ranked list of 5 numbers, that occur with 234 most of the time Give me the top 5 subsets of size 'k' that occur most frequently together Requirements : Exact representation of numbers in output (not approximate), Domain of numbers can be thought of as 1 to 1 billion. I have planned to use the simple counting methods, if no standard algorithm fits here. But, if you guys know about some algorithm that can help me, please let me know
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 631

Title: Can anyone provide me with some clustering examples?
A_Id 12,810,026 | Q_Id 12,808,050 | CreationDate 2012-10-09T20:39:00.000 | is_accepted false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,scipy,cluster-analysis
Answer, then Question:
The second is what clustering is: group objects that are somewhat similar (and that could be images). Clustering is not a pure imaging technique. When processing a single image, it can for example be applied to colors. This is a quite good approach for reducing the number of colors in an image. If you cluster by colors and pixel coordinates, you can also use it for image segmentation, as it will group pixels that have a similar color and are close to each other. But this is an application domain of clustering, not pure clustering.
I am having a hard time understanding what scipy.cluster.vq really does!! On Wikipedia it says Clustering can be used to divide a digital image into distinct regions for border detection or object recognition. on other sites and books it says we can use clustering methods for clustering images for finding groups of similar images. AS i am interested in image processing ,I really need to fully understand what clustering is . So Can anyone show me simple examples about using scipy.cluster.vq with images??
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,663

Title: randomly choose pair (i, j) with probability P[i, j] given stochastic matrix P
A_Id 12,810,655 | Q_Id 12,810,499 | CreationDate 2012-10-10T00:56:00.000 | is_accepted false | Q_Score 2 | Users Score 1 | Score 0.066568 | AnswerCount 3 | Available Count 1 | Topic flags: all 0
Tags: python,numpy,statistics
Answer, then Question:
Here's a simple algorithm in python that does what you are expecting. Let's take for example a single dimension array P equal to [0.1,0.3,0.4,0.2]. The logic can be extended to any number of dimensions. Now we set each element to the sum of all the elements that precede it: P => [0, 0.1, 0.4, 0.8, 1] Using a random generator, we generate numbers that are between 0 and 1. Let's say x = 0.2. Using a simple binary search, we can determine that x is between the first element and the second element. We just pick the first element for this value of x. If you look closely, the chance that 0 =< X < 0.1 is 0.1. The chance that 0.1 =< x < 0.4 is 0.3 and so on. For the 2D array, it is better to convert it to a 1D array, even though, you should be able to implement a 2D array binary search algorithm.
I have numpy two dimension array P such that P[i, j] >= 0 and all P[i, j] sums to one. How to choose pair of indexes (i, j) with probability P[i, j] ? EDIT: I am interested in numpy build function. Is there something for this problem? May be for one dimensional array?
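As an aside, here is a minimal sketch of the cumulative-sum-plus-binary-search approach the answer above describes, applied directly to a 2-D NumPy array; the example matrix is made up, numpy.searchsorted does the binary search, and numpy.unravel_index maps the flat index back to (i, j):

```python
import numpy as np

def sample_pair(P, rng=np.random):
    """Draw one (i, j) with probability P[i, j], assuming P sums to 1."""
    cdf = np.cumsum(P.ravel())          # flatten, then cumulative sums
    x = rng.uniform(0.0, cdf[-1])       # cdf[-1] guards against round-off
    k = np.searchsorted(cdf, x)         # binary search for the bin containing x
    return np.unravel_index(k, P.shape)

P = np.array([[0.1, 0.3],
              [0.4, 0.2]])
print(sample_pair(P))
```

np.random.choice(P.size, p=P.ravel()) followed by np.unravel_index is an equivalent built-in route on reasonably recent NumPy versions.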
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,166

Title: What are ngram counts and how to implement using nltk?
A_Id 12,821,336 | Q_Id 12,821,201 | CreationDate 2012-10-10T14:01:00.000 | is_accepted false | Q_Score 14 | Users Score -1 | Score -0.049958 | AnswerCount 4 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python,nlp,nltk
Answer, then Question:
I don't think there is a specific method in nltk to help with this. This isn't tough though. If you have a sentence of n words (assuming you're using word level), get all ngrams of length 1-n, iterate through each of those ngrams and make them keys in an associative array, with the value being the count. Shouldn't be more than 30 lines of code, you could build your own package for this and import it where needed.
I've read a paper that uses ngram counts as feature for a classifier, and I was wondering what this exactly means. Example text: "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam" I can create unigrams, bigrams, trigrams, etc. out of this text, where I have to define on which "level" to create these unigrams. The "level" can be character, syllable, word, ... So creating unigrams out of the sentence above would simply create a list of all words? Creating bigrams would result in word pairs bringing together words that follow each other? So if the paper talks about ngram counts, it simply creates unigrams, bigrams, trigrams, etc. out of the text, and counts how often which ngram occurs? Is there an existing method in python's nltk package? Or do I have to implement a version of my own?
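For illustration, a small sketch of the counting the answer above outlines, using nltk.util.ngrams with collections.Counter on word-level tokens (plain whitespace splitting is assumed here; swap in a real tokenizer as needed):

```python
from collections import Counter
from nltk.util import ngrams

text = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam"
tokens = text.lower().split()        # word-level "units"

counts = Counter()
for n in (1, 2, 3):                  # unigram, bigram and trigram counts
    counts.update(ngrams(tokens, n))

print(counts.most_common(5))
```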
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 21,407

Title: How to read tbf file with STAMP encryption
A_Id 19,391,151 | Q_Id 12,830,437 | CreationDate 2012-10-11T00:25:00.000 | is_accepted false | Q_Score 3 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: c#,python,sql,database,sml
Answer, then Question:
Have you looked in using something like FileViewerPro? Its free download tool in order to open files. Also, what windows programs have you tried so far, like notepad, excel?
I have Toronto Stock Exchange stock data in a maxtor hard drive. The data is a TBF file with .dat and .pos components. The .dat file contains all the Stamp format transmission information in binary format. I can read .pos file using R. It has 3 column with numbers, which make no sense to me. The data is information on stock and I think it is the result of Streambase. I need to get 2007 price, value, and etc. information on some stocks that I am interested in. Could you please suggest any way to read the data? Should I use some particular software to make sense of this data?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 188

Title: Python: OpenCV can not be loaded on windows xp
A_Id 12,930,212 | Q_Id 12,899,513 | CreationDate 2012-10-15T16:02:00.000 | is_accepted true | Q_Score 1 | Users Score 5 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,opencv,windows-xp,py2exe
Answer, then Question:
The problem comes from 4 dlls which are copyied by py2exe: msvfw32.dll msacm32.dll, avicap32.dll and avifil32.dll As I am building on Vista, I think that it forces the use of Vista dlls on Windows XP causing some mismatch when trying to load it. I removed these 4 dlls and everything seems to work ok (in this case it use the regular system dlls.)
I have a Python app built with Python, OpenCv and py2exe. When I distribute this app and try to run it on a windows XP machine, I have an error on startup due to error loading cv2.pyd (opencv python wrapper) I looked at cv2.pyd with dependency walker and noticed that some dlls are missing : ieshims.dll and wer.dll. Unfortunately copying these libs doesn't solve the issues some other dlls are missing or not up-to-date. Any idea?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 941

Title: numpy unique without sort
A_Id 12,926,989 | Q_Id 12,926,898 | CreationDate 2012-10-17T03:58:00.000 | is_accepted true | Q_Score 36 | Users Score 62 | Score 1.2 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,numpy
Answer, then Question:
You can do this with the return_index parameter: >>> import numpy as np >>> a = [4,2,1,3,1,2,3,4] >>> np.unique(a) array([1, 2, 3, 4]) >>> indexes = np.unique(a, return_index=True)[1] >>> [a[index] for index in sorted(indexes)] [4, 2, 1, 3]
How can I use numpy unique without sorting the result but just in the order they appear in the sequence? Something like this? a = [4,2,1,3,1,2,3,4] np.unique(a) = [4,2,1,3] rather than np.unique(a) = [1,2,3,4] Use naive solution should be fine to write a simple function. But as I need to do this multiple times, are there any fast and neat way to do this?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 32,836

Title: Matplotlib savefig into different pages of a PDF
A_Id 12,938,704 | Q_Id 12,938,568 | CreationDate 2012-10-17T16:07:00.000 | is_accepted false | Q_Score 12 | Users Score 1 | Score 0.066568 | AnswerCount 3 | Available Count 1 | Topic flags: all 0
Tags: python,pdf,pagination,matplotlib
Answer, then Question:
I suspect that there is a more elegant way to do this, but one option is to use tempfiles or StringIO to avoid making traditional files on the system and then you can piece those together.
I have a lengthy plot, composed o several horizontal subplots organized into a column. When I call fig.savefig('what.pdf'), the resulting output file shows all the plots crammed onto a single page. Question: is there a way to tell savefig to save on any number (possibly automatically determined) of pdf pages? I'd rather avoid multiple files and then os.system('merge ...'), if possible.
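Besides the tempfile route suggested in the answer above, matplotlib itself ships a multi-page PDF backend; a small sketch with placeholder figure contents:

```python
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

with PdfPages('what.pdf') as pdf:
    for k in range(3):                  # one figure, and hence one page, per group of subplots
        fig, ax = plt.subplots()
        ax.plot(range(10), [v ** (k + 1) for v in range(10)])
        pdf.savefig(fig)                # appends the figure as a new PDF page
        plt.close(fig)
```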
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 16,939

Title: Python: Perform an operation on each pixel of a 2-d array simultaneously
A_Id 13,009,064 | Q_Id 12,968,446 | CreationDate 2012-10-19T06:20:00.000 | is_accepted false | Q_Score 0 | Users Score 1 | Score 0.099668 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,image,filter,numpy,scipy
Answer, then Question:
Even if python did provide functionality to apply an operation to an NxM array without looping over it, the operation would still not be executed simultaneously in the background since the amount of instructions a CPU can handle per cycle is limited and thus no time could be saved. For your use case this might even be counterproductive since the fields in your arrays proably have dependencies and if you don't know in what order they are accessed this will most likely end up in a mess. Hugues provided some useful links about parallel processing in Python, but be careful when accessing the same data structure such as an array with multiple threads at the same time. If you don't synchronize the threads they might access the same part of the array at the same time and mess things up. And be aware, the amount of threads that can effectively be run in parallel is limited by the number of processor cores.
I want to apply a 3x3 or larger image filter (gaussian or median) on a 2-d array. Though there are several ways for doing that such as scipy.ndimage.gaussian_filter or applying a loop, I want to know if there is a way to apply a 3x3 or larger filter on each pixel of a mxn array simultaneously, because it would save a lot of time bypassing loops. Can functional programming be used for the purpose?? There is a module called scipy.ndimage.filters.convolve, please tell whether it is able to perform simultaneous operations.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 854

Title: Panda3D and Python, render only one frame and other questions
A_Id 15,449,900 | Q_Id 12,981,607 | CreationDate 2012-10-19T19:58:00.000 | is_accepted false | Q_Score 1 | Users Score 2 | Score 0.197375 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,3d,rendering,panda3d
Answer, then Question:
You can use a buffer with setOneShot enabled to make it render only a single frame. You can start Panda3D without a window by setting the "window-type" PRC variable to "none", and then opening an offscreen buffer yourself. (Note: offscreen buffers without a host window may not be supported universally.) If you set "window-type" to "offscreen", base.win will actually be a buffer (which may be a bit easier than having to set up your own), after which you can call base.graphicsEngine.render_frame() to render a single frame while avoiding the overhead of the task manager. You have to call it twice because of double-buffering. Yes. Panda3D is used by Disney for some of their MMORPGs. I do have to add that Panda's high-level networking interfaces are poorly documented. You can calculate the transformation from an object to the camera using nodepath.get_transform(base.cam), which is a TransformState object that you may optionally convert into a matrix using ts.get_mat(). This is surprisingly fast, since Panda maintains a composition cache for transformations so that this doesn't have to happen multiple times. You can get the projection matrix (from view space to clip space) using lens.get_projection_mat() or the inverse using lens.get_projection_mat_inv(). You may also access the individual vertex data using the Geom interfaces, this is described to detail in the Panda3D manual. You can use set_color to change the base colour of the object (replacing any vertex colours), or you can use set_color_scale to tint the objects, ie. applying a colour that is multiplied with the existing colour. You can also apply a Material object if you use lights and want to use different colours for the diffuse, specular and ambient components.
I would like to use Panda3D for my personal project, but after reading the documentation and some example sourcecodes, I still have a few questions: How can I render just one frame and save it in a file? In fact I would need to render 2 different images: a single object, and a scene of multiple objects including the previous single object, but just one frame for each and they both need to be saved as image files. The application will be coded in Python, and needs to be very scalable (be used by thousand of users). Would Panda3D fit the bill here? (about my program in Python, it's almost a constant complexity so no problem here, and 3D models will be low-poly and about 5 to 20 per scene). I need to calculate the perspective projection of every object to the camera. Is it possible to directly access the vertexes and faces (position, parameters, etc..)? Can I recolor my 3D objects? I need to set a simple color for the whole object, but a different color per object. Is it possible? Please also note that I'm quite a newbie in the field of graphical and game development, but I know some bits of 3D modelling and 3D theory, as well as computer imaging theory. Thank you for reading me. PS: My main alternative currently is to use Soya3D or PySoy, but they don't seem to be very actively developped nor optimized, so although they would both have a smaller memory footprints, I don't know if they would really perform faster than Panda3D since they're not very optimized...
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,225

Title: Data interpolation in python
A_Id 12,990,987 | Q_Id 12,990,315 | CreationDate 2012-10-20T16:17:00.000 | is_accepted false | Q_Score 3 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python,graph,plot,interpolation
Answer, then Question:
If you use matplotlib, you can just call plot(X1, Y1, 'bo', X2, Y2, 'r+'). Change the formatting as you'd like, but it can cope with different lengths just fine. You can provide more than two without any issue.
I have four one dimensional lists: X1, Y1, X2, Y2. X1 and Y1 each have 203 data points. X2 and Y2 each have 1532 data points. X1 and X2 are at different intervals, but both measure time. I want to graph Y1 vs Y2. I can plot just fine once I get the interpolated data, but can't think of how to interpolate data. I've thought and researched this a couple hours, and just can't figure it out. I don't mind a linear interpolation, but just can't figure out a way.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,501

Title: Calling Python from Stata
A_Id 13,016,728 | Q_Id 13,014,789 | CreationDate 2012-10-22T15:35:00.000 | is_accepted false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,merge,python-3.x,stata
Answer, then Question:
Type "help shell" in Stata. What you want to do is shell out from Stata, call Python, and then have Stata resume whatever you want it to do after the Python script has completed.
This is probably very easy, but after looking through documentation and possible examples online for the past several hours I cannot figure it out. I have a large dataset (a spreadsheet) that gets heavily cleaned by a DO file. In the DO file I then want to save certain variables of the cleaned data as a temp .csv run some Python scripts, that produce a new CSV and then append that output to my cleaned data. If that was unclear here is an example. After cleaning my data set (XYZ) goes from variables A to Z with 100 observations. I want to take variables A and D through F and save it as test.csv. I then want to run a python script that takes this data and creates new variables AA to GG. I want to then take that information and append it to the XYZ dataset (making the dataset now go from A to GG with 100 observations) and then be able to run a second part of my DO file for analysis. I have been doing this manually and it is fine but the file is going to start changing quickly and it would save me a lot of time.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,897

Title: NLTK: Document Classification with numeric score instead of labels
A_Id 15,627,502 | Q_Id 13,015,593 | CreationDate 2012-10-22T16:22:00.000 | is_accepted false | Q_Score 8 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,nltk
Answer, then Question:
This is a very late answer, but perhaps it will help someone. What you're asking about is regression. Regarding Jacob's answer, linear regression is only one way to do it. However, I agree with his recommendation of scikit-learn.
In the light of a project I've been playing with Python NLTK and Document Classification and the Naive Bayes classifier. As I understand from the documentation, this works very well if your different documents are tagged with either pos or neg as a label (or more than 2 labels) The documents I'm working with that are already classified don't have labels, but they have a score, a floating point between 0 and 5. What I would like to do is build a classifier, like the movies example in the documentation, but that would predict the score of a piece of text, rather than the label. I believe this is mentioned in the docs but never further explored as 'probabilities of numeric features' I am not a language expert nor a statistician so if someone has an example of this lying around I would be most grateful if you would share this with me. Thanks!
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,239

Title: How to automatically detect if image is of high quality?
A_Id 13,019,636 | Q_Id 13,018,968 | CreationDate 2012-10-22T20:05:00.000 | is_accepted false | Q_Score 1 | Users Score 3 | Score 0.291313 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,image-processing
Answer, then Question:
You are making this way too hard. I handled this in production code by generating a histogram of the image, throwing away outliers (1 black pixel doesn't mean that the whole image has lots of black; 1 white pixel doesn't imply a bright image), then seeing if the resulting distribution covered a sufficient range of brightnesses. In stats terms, you could also see if the histogram approximates a Gaussian distribution with a satisfactorily large standard deviation. If the whole image is medium gray with a tiny stddev, then you have a low contrast image - by definition. If the mean is approximately medium-gray but the stddev covers brightness levels from say 20% to 80%, then you have a decent contrast. But note that neither of these approaches require anything remotely resembling machine learning.
I want an algorithm to detect if an image is of high professional quality or is done with poor contrast, low lighting etc. How do I go about designing such an algorithm. I feel that it is feasible, since if I press a button in picassa it tries to fix the lighting, contrast and color. Now I have seen that in good pictures if I press the auto-fix buttons the change is not that high as in the bad images. Could this be used as a lead? Please throw any ideas at me. Also if this has already been done before, and I am doing the wheel invention thing, kindly stop me and point me to previous work. thanks much,
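A rough sketch of the histogram/standard-deviation check described in the answer above; the thresholds are made up and would need tuning against real images:

```python
import numpy as np
from PIL import Image

def looks_low_contrast(path, clip=0.01, min_spread=0.5, min_std=30.0):
    """Heuristic contrast check: outliers clipped, then brightness spread and stddev tested."""
    gray = np.asarray(Image.open(path).convert('L'), dtype=np.float64)
    lo, hi = np.percentile(gray, [100 * clip, 100 * (1 - clip)])  # ignore extreme pixels
    spread = (hi - lo) / 255.0        # fraction of the brightness range actually used
    return spread < min_spread or gray.std() < min_std
```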
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 3,360

Title: Using custom Pipeline for Cross Validation scikit-learn
A_Id 13,057,566 | Q_Id 13,057,113 | CreationDate 2012-10-24T20:24:00.000 | is_accepted true | Q_Score 2 | Users Score 3 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,machine-learning,scikit-learn
Answer, then Question:
You probably need to derive from the KMeans class and override the following methods to use your vocabulary logic: fit_transform will only be called on the train data transform will be called on the test data Maybe class derivation is not alway the best option. You can also write your own transformer class that wraps calls to an embedded KMeans model and provides the fit / fit_transform / transform API that is expected by the Pipeline class for the first stages.
I would like to be use GridSearchCV to determine the parameters of a classifier, and using pipelines seems like a good option. The application will be for image classification using Bag-of-Word features, but the issue is that there is a different logical pipeline depending on whether training or test examples are used. For each training set, KMeans must run to produce a vocabulary that will be used for testing, but for test data no KMeans process is run. I cannot see how it is possible to specify this difference in behavior for a pipeline.
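A sketch of the second option the answer above mentions: a small transformer class that learns the visual vocabulary in fit (training folds only) and quantizes descriptors in transform, so it can sit in front of a classifier inside a Pipeline handed to GridSearchCV. The class name and the per-image descriptor format are assumptions:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import KMeans

class BagOfVisualWords(BaseEstimator, TransformerMixin):
    def __init__(self, n_words=100):
        self.n_words = n_words

    def fit(self, X, y=None):
        # X: list of (n_descriptors_i, n_features) arrays, one per image
        self.kmeans_ = KMeans(n_clusters=self.n_words).fit(np.vstack(X))
        return self

    def transform(self, X):
        # histogram of vocabulary assignments per image
        hist = np.zeros((len(X), self.n_words))
        for i, descriptors in enumerate(X):
            words, counts = np.unique(self.kmeans_.predict(descriptors), return_counts=True)
            hist[i, words] = counts
        return hist
```

Wrapping it as Pipeline([('bow', BagOfVisualWords()), ('svm', LinearSVC())]) and grid-searching over bow__n_words then refits the vocabulary on each training split automatically.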
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,601

Title: Fast algorithm to detect main colors in an image?
A_Id 13,062,863 | Q_Id 13,060,069 | CreationDate 2012-10-25T00:50:00.000 | is_accepted false | Q_Score 8 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python,algorithm,colors,python-imaging-library
Answer, then Question:
K-means is a good choice for this task because you know number of main colors beforehand. You need to optimize K-means. I think you can reduce your image size, just scale it down to 100x100 pixels or so. Find the size on witch your algorithm works with acceptable speed. Another option is to use dimensionality reduction before k-means clustering. And try to find fast k-means implementation. Writing such things in python is a misuse of python. It's not supposed to be used like this.
Does anyone know a fast algorithm to detect main colors in an image? I'm currently using k-means to find the colors together with Python's PIL but it's very slow. One 200x200 image takes 10 seconds to process. I've several hundred thousand images.
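A sketch of the downscale-then-cluster idea from the answer above; scikit-learn's KMeans is an assumption here (scipy.cluster.vq.kmeans or any faster implementation would slot in the same way):

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def main_colors(path, n_colors=3, size=(100, 100)):
    img = Image.open(path).convert('RGB').resize(size)       # shrink first, as suggested
    pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)
    km = KMeans(n_clusters=n_colors).fit(pixels)
    return km.cluster_centers_.round().astype(int)            # one RGB triple per main color
```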
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 4,441

Title: Choice of technology for loading large CSV files to Oracle tables
A_Id 14,449,025 | Q_Id 13,061,800 | CreationDate 2012-10-25T04:54:00.000 | is_accepted false | Q_Score 3 | Users Score 1 | Score 0.066568 | AnswerCount 3 | Available Count 2 | Topic flags: Database and SQL 1, rest 0
Tags: python,csv,etl,sql-loader,smooks
Answer, then Question:
Create a process / script that will call a procedure to load csv files to external Oracle table and another script to load it to the destination table. You can also add cron jobs to call these scripts that will keep track of incoming csv files into the directory, process it and move the csv file to an output/processed folder. Exceptions also can be handled accordingly by logging it or sending out an email. Good Luck.
I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience. I want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then populate around 6-7 stagings tables in Oracle using this XML. The data needs to be populated such that the elements of the XML and eventually the rows of the table come from multiple CSV files. So for e.g. an element A would have sub-elements coming data from CSV file 1, file 2 and file 3 etc. I have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server. Options I am considering, Smooks - However the problem is that Smooks serializes one CSV at a time and I cant afford to hold on to the half baked java beans til the other CSV files are read since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated written to disk as XML. SQLLoader - I could skip the XML creation all together and load the CSV directly to the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables updating the records after the first file. b. Apply some translation rules while loading the staging tables. Python script to convert the CSV to XML. SQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need). Thanks in advance. If someone can point me in the right direction or give me some insights from his/her personal experience it will help me make an informed decision. regards, -v- PS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,011

Title: Choice of technology for loading large CSV files to Oracle tables
A_Id 13,062,737 | Q_Id 13,061,800 | CreationDate 2012-10-25T04:54:00.000 | is_accepted true | Q_Score 3 | Users Score 2 | Score 1.2 | AnswerCount 3 | Available Count 2 | Topic flags: Database and SQL 1, rest 0
Tags: python,csv,etl,sql-loader,smooks
Answer, then Question:
Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.
I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience. I want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then populate around 6-7 stagings tables in Oracle using this XML. The data needs to be populated such that the elements of the XML and eventually the rows of the table come from multiple CSV files. So for e.g. an element A would have sub-elements coming data from CSV file 1, file 2 and file 3 etc. I have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server. Options I am considering, Smooks - However the problem is that Smooks serializes one CSV at a time and I cant afford to hold on to the half baked java beans til the other CSV files are read since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated written to disk as XML. SQLLoader - I could skip the XML creation all together and load the CSV directly to the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables updating the records after the first file. b. Apply some translation rules while loading the staging tables. Python script to convert the CSV to XML. SQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need). Thanks in advance. If someone can point me in the right direction or give me some insights from his/her personal experience it will help me make an informed decision. regards, -v- PS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,011

Title: Multiprocessing scikit-learn
A_Id 13,084,224 | Q_Id 13,068,257 | CreationDate 2012-10-25T12:10:00.000 | is_accepted false | Q_Score 10 | Users Score 11 | Score 1 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,multithreading,numpy,machine-learning,scikit-learn
Answer, then Question:
For linear models (LinearSVC, SGDClassifier, Perceptron...) you can chunk your data, train independent models on each chunk and build an aggregate linear model (e.g. SGDClasifier) by sticking in it the average values of coef_ and intercept_ as attributes. The predict method of LinearSVC, SGDClassifier, Perceptron compute the same function (linear prediction using a dot product with an intercept_ threshold and One vs All multiclass support) so the specific model class you use for holding the average coefficient is not important. However as previously said the tricky point is parallelizing the feature extraction and current scikit-learn (version 0.12) does not provide any way to do this easily. Edit: scikit-learn 0.13+ now has a hashing vectorizer that is stateless.
I got linearsvc working against training set and test set using load_file method i am trying to get It working on Multiprocessor enviorment. How can i get multiprocessing work on LinearSVC().fit() LinearSVC().predict()? I am not really familiar with datatypes of scikit-learn yet. I am also thinking about splitting samples into multiple arrays but i am not familiar with numpy arrays and scikit-learn data structures. Doing this it will be easier to put into multiprocessing.pool() , with that , split samples into chunks , train them and combine trained set back later , would it work ? EDIT: Here is my scenario: lets say , we have 1 million files in training sample set , when we want to distribute processing of Tfidfvectorizer on several processors we have to split those samples (for my case it will only have two categories , so lets say 500000 each samples to train) . My server have 24 cores with 48 GB , so i want to split each topics into number of chunks 1000000 / 24 and process Tfidfvectorizer on them. Like that i would do to Testing sample set , as well as SVC.fit() and decide(). Does it make sense? Thanks. PS: Please do not close this .
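A minimal sketch of the coefficient-averaging idea described in the answer above; the chunking, features and hyperparameters are placeholders:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def fit_chunk(X, y):
    """Train one independent linear model on a single chunk of the data."""
    return SGDClassifier(loss='hinge').fit(X, y)

def average_models(models):
    """Merge per-chunk models by averaging their linear parameters."""
    merged = models[0]
    merged.coef_ = np.mean([m.coef_ for m in models], axis=0)
    merged.intercept_ = np.mean([m.intercept_ for m in models], axis=0)
    return merged

# chunks = [(X1, y1), (X2, y2), ...] built by splitting the training data;
# each fit_chunk call can run in its own process via multiprocessing.Pool.
# clf = average_models([fit_chunk(X, y) for X, y in chunks])
```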
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 11,435

Title: Rotations in 3D
A_Id 13,242,515 | Q_Id 13,136,828 | CreationDate 2012-10-30T10:15:00.000 | is_accepted false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python,3d,geometry
Answer, then Question:
I'll assume the "geometry library for python" already answered in the comments on the question. So once you have a transformation that takes 'a' parallel to 'b', you'll just apply it to 'c' The vectors 'a' and 'b' uniquely define a plane. Each vector has a canonical representation as a point difference from the origin, so you have three points: the head of 'a', the head of 'b', and the origin. First compute this plane. It will have an equation in the form Ax + By + Cz = 0. A normal vector to this plane defines both the axis of rotation and the sign convention for the direction of rotation. All you need is one normal vector to the plane, since they're all collinear. You can solve for such a vector by picking at two non-collinear vectors in the plane and taking the dot product with the normal vector. This gives a pair of linear equations in two variables that you can solve with standard methods such as Cramer's rule. In all of these manipulations, if any of A, B, or C are zero, you have a special case to handle. The angle of the rotation is given by the cosine relation for the dot product of 'a' and 'b' and their lengths. The sign of the angle is determined by the triple product of 'a', 'b', and the normal vector. Now you've got all the data to construct a rotation matrix in one of the many canonical forms you can look up.
I have three vectors in 3D a,b,c. Now I want to calculate a rotation r that when applied to a yields a result parallel to b. Then the rotation r needs to be applied to c. How do I do this in python? Is it possible to do this with numpy/scipy?
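A NumPy sketch of the construction the answer above walks through, written with Rodrigues' rotation formula; newer SciPy versions also offer scipy.spatial.transform.Rotation for the same job:

```python
import numpy as np

def rotation_taking(a, b):
    """Rotation matrix that maps the direction of a onto the direction of b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                            # rotation axis (unnormalized), |v| = sin(angle)
    c = np.dot(a, b)                              # cos(angle)
    if np.allclose(v, 0):                         # a and b already parallel or anti-parallel
        if c > 0:
            return np.eye(3)
        axis = np.cross(a, [1.0, 0.0, 0.0])       # any axis perpendicular to a
        if np.allclose(axis, 0):
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)   # 180-degree rotation about that axis
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])              # cross-product matrix of v
    return np.eye(3) + K + K.dot(K) * (1 - c) / np.dot(v, v)   # Rodrigues' formula

a, b, c_vec = np.random.rand(3, 3)   # three example vectors
R = rotation_taking(a, b)
print(R.dot(a))                      # parallel to b
print(R.dot(c_vec))                  # the same rotation applied to c
```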
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,659

Title: Is it possible to create a numpy matrix with 10 rows and 0 columns?
A_Id 13,150,059 | Q_Id 13,150,020 | CreationDate 2012-10-31T01:28:00.000 | is_accepted false | Q_Score 1 | Users Score 2 | Score 0.197375 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,numpy
Answer, then Question:
Add columns to ndarray(or matrix) need full copy of the content, so you should use other method such as list or the array module, or create a large matrix first, and fill data in it.
My objective is to start with an "empty" matrix and repeatedly add columns to it until I have a large matrix.
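A short sketch of the collect-then-stack pattern the answer above recommends; the column contents are placeholders:

```python
import numpy as np

cols = []                               # grow a cheap Python list, not the array
for k in range(5):                      # whatever loop produces your columns
    cols.append(np.arange(10) * k)

M = np.column_stack(cols)               # single allocation at the end
print(M.shape)                          # (10, 5)
```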
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 255

Title: Feature extraction for butterfly images
A_Id 13,162,109 | Q_Id 13,151,428 | CreationDate 2012-10-31T04:47:00.000 | is_accepted true | Q_Score 5 | Users Score 2 | Score 1.2 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,image-processing,opencv,image-segmentation
Answer, then Question:
Are you willing to write your own image processing logic? Your best option will likely be to optimize the segmentation/feature extraction for your problem, instead of using previous implementations like opencv meant for more general use-cases. An option that I've found to work well in noisy/low-contrast environments is to use a sliding window (i.e. 10x10 pixels) and build a gradient orientation histogram. From this histogram you can recognize the presence of more dominant edges (they accumulate in the histogram) and their orientations (allowing for detection for things like corners) and see the local maximum/minimums. (I can give more details if needed) If your interested in segmentation as a whole AND user interaction is possible, I would recommend graph cut or grab cut. In graph cut users would be able to fine tune the segmentation. Grab cut is already in opencv, but may result in the same problems as it takes a single input from the user, then automatically segments the image.
I have a set of butterfly images for training my system to segment a butterfly from a given input image. For this purpose, I want to extract the features such as edges, corners, region boundaries, local maximum/minimum intensity etc. I found many feature extraction methods like Harris corner detection, SIFT but they didn't work well when the image background had the same color as that of the butterfly's body/boundary color. Could anyone please tell whether there is any good feature extraction method which works well for butterfly segmentation? I'm using the Python implementation of OpenCV.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 2,622

Title: Looking for a specific python gui module to perform the following task
A_Id 13,156,356 | Q_Id 13,151,907 | CreationDate 2012-10-31T05:45:00.000 | is_accepted false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,graph,matplotlib,tkinter,wxwidgets
Answer, then Question:
You can do what you want with Tkinter, though there's no specific widget that does what you ask. There is a general purpose canvas widget that allows you to draw objects (rectangles, circles, images, buttons, etc), and it's pretty easy to add the ability to drag those items around.
I am looking for a GUI python module that is best suited for the following job: I am trying to plot a graph with many columns (perhaps hundreds), each column representing an individual. The user should be able to drag the columns around and drop them onto different columns to switch the two. Also, there are going to be additional dots drawn on the columns and by hovering over those dots, the user should see the values corresponding to those dots. What is the best way to approach this?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 244

Title: How to use BaseMap with chaco plots
A_Id 16,198,408 | Q_Id 13,190,187 | CreationDate 2012-11-02T06:08:00.000 | is_accepted false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,matplotlib-basemap,chaco
Answer, then Question:
Chaco and matplotlib are completely different tools. Basemap has been built on top of matplotlib so it is not possible to add a Basemap map on a Chaco plot. I'm afraid I couldn't find any mapping layer to go with Chaco. Is there a reason you cannot use matplotlib for you plot?
I had developed scatter and lasso selection plots with Chaco. Now, I need to embed a BaseMap [with few markers on a map] onto the plot area side by side. I created a BaseMap and tried to add to the traits_view; but it is failing with errors. Please give me some pointers to achieve the same.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 156

Title: Forecast Package from R in Python
A_Id 13,198,574 | Q_Id 13,197,097 | CreationDate 2012-11-02T14:22:00.000 | is_accepted true | Q_Score 1 | Users Score 5 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,time-series,forecasting
Answer, then Question:
Yes, you could use the [no longer developed or extended, but maintained] package RPy, or you could use the newer package RPy2 which is actively developed. There are other options too, as eg headless network connections to Rserve.
I found forecast package from R the best solution for time series analysis and forecasting. I want to use it in Python. Could I use rpy and after take the forecast package in Python?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,673

Title: Big-O of list slicing
A_Id 13,203,622 | Q_Id 13,203,601 | CreationDate 2012-11-02T21:59:00.000 | is_accepted false | Q_Score 61 | Users Score 10 | Score 1 | AnswerCount 3 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python,list,big-o
Answer, then Question:
For a list of size N, and a slice of size M, the iteration is actually only O(M), not O(N). Since M is often << N, this makes a big difference. In fact, if you think about your explanation, you can see why. You're only iterating from i_1 to i_2, not from 0 to i_1, then I_1 to i_2.
Say I have some Python list, my_list which contains N elements. Single elements may be indexed by using my_list[i_1], where i_1 is the index of the desired element. However, Python lists may also be indexed my_list[i_1:i_2] where a "slice" of the list from i_1 to i_2 is desired. What is the Big-O (worst-case) notation to slice a list of size N? Personally, if I were coding the "slicer" I would iterate from i_1 to i_2, generate a new list and return it, implying O(N), is this how Python does it? Thank you,
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 47,961

Title: Backward integration in time using scipy odeint
A_Id 13,229,534 | Q_Id 13,227,115 | CreationDate 2012-11-05T06:38:00.000 | is_accepted false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python-2.7,scipy
Answer, then Question:
You can make a change of variables s = t_0 - t, and integrate the differential equation with respect to s. odeint doesn't do this for you.
Is it possible to integrate any Ordinary Differential Equation backward in time using scipy.integrate.odeint ? If it is possible, could someone tell me what should be the arguement 'time' in 'odeint.
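A sketch of the substitution s = t_0 - t that the answer above describes, with a made-up right-hand side f:

```python
import numpy as np
from scipy.integrate import odeint

def f(y, t):                       # example ODE: dy/dt = -y
    return -y

t0, y_at_t0 = 5.0, 2.0

def g(y, s):                       # with s = t0 - t, dy/ds = -f(y, t0 - s)
    return -f(y, t0 - s)

s = np.linspace(0.0, 5.0, 51)      # s = 0..5 corresponds to t = 5..0
y = odeint(g, y_at_t0, s)
t = t0 - s                         # time axis of the backward solution
```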
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 4,017

Title: How to load only specific columns from csv file into a DataFrame
A_Id 13,236,277 | Q_Id 13,236,098 | CreationDate 2012-11-05T16:20:00.000 | is_accepted true | Q_Score 9 | Users Score 2 | Score 1.2 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,pandas,csv
Answer, then Question:
There's no default way to do this right now. I would suggest chunking the file and iterating over it and discarding the columns you don't want. So something like pd.concat([x.ix[:, cols_to_keep] for x in pd.read_csv(..., chunksize=200)])
Suppose I have a csv file with 400 columns. I cannot load the entire file into a DataFrame (won't fit in memory). However, I only really want 50 columns, and this will fit in memory. I don't see any built in Pandas way to do this. What do you suggest? I'm open to using the PyTables interface, or pandas.io.sql. The best-case scenario would be a function like: pandas.read_csv(...., columns=['name', 'age',...,'income']). I.e. we pass a list of column names (or numbers) that will be loaded.
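Two hedged variants: newer pandas releases grew a usecols argument that does exactly this, and the chunked recipe from the answer above can be written without the long-deprecated .ix indexer. The file and column names below are hypothetical:

```python
import pandas as pd

wanted = ['name', 'age', 'income']          # hypothetical column names

# Newer pandas: parse only the listed columns.
df = pd.read_csv('data.csv', usecols=wanted)

# Chunked fallback in the spirit of the answer above.
df2 = pd.concat(chunk[wanted] for chunk in pd.read_csv('data.csv', chunksize=200))
```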
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 7,641

Title: How to keep leading zeros in a column when reading CSV with Pandas?
A_Id 58,968,554 | Q_Id 13,250,046 | CreationDate 2012-11-06T11:27:00.000 | is_accepted false | Q_Score 76 | Users Score 3 | Score 0.099668 | AnswerCount 6 | Available Count 1 | Topic flags: all 0
Tags: python,pandas,csv,types
Answer, then Question:
You Can do This , Works On all Versions of Pandas pd.read_csv('filename.csv', dtype={'zero_column_name': object})
I am importing study data into a Pandas data frame using read_csv. My subject codes are 6 numbers coding, among others, the day of birth. For some of my subjects this results in a code with a leading zero (e.g. "010816"). When I import into Pandas, the leading zero is stripped of and the column is formatted as int64. Is there a way to import this column unchanged maybe as a string? I tried using a custom converter for the column, but it does not work - it seems as if the custom conversion takes place before Pandas converts to int.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 62,070

Title: Sort a list of ints and floats with negative and positive values?
A_Id 29,600,848 | Q_Id 13,318,611 | CreationDate 2012-11-10T02:17:00.000 | is_accepted false | Q_Score 2 | Users Score 2 | Score 0.197375 | AnswerCount 2 | Available Count 2 | Topic flags: Python Basics and Environment 1, rest 0
Tags: sorting,python-2.7,absolute-value
Answer, then Question:
I had the same problem. The answer: Python will sort numbers by the absolute value if you have them as strings. So as your key, make sure to include an int() or float() argument. My working syntax was data = sorted(data, key = lambda x: float(x[0])) ...the lambda x part just gives a function which outputs the thing you want to sort by. So it takes in a row in my list, finds the float 0th element, and sorts by that.
I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. i.e, [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work as it seems to sort in descending order by absolute value. Any ideas?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 5,933

Title: Sort a list of ints and floats with negative and positive values?
A_Id 65,985,248 | Q_Id 13,318,611 | CreationDate 2012-11-10T02:17:00.000 | is_accepted false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | Topic flags: Python Basics and Environment 1, rest 0
Tags: sorting,python-2.7,absolute-value
Answer, then Question:
In addition to doublefelix,below code gives the absolute order to me from string. siparis=sorted(siparis, key=lambda sublist:abs(float(sublist[1])))
I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. i.e, [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work as it seems to sort in descending order by absolute value. Any ideas?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 5,933

Title: Creating a 5D array in Python
A_Id 13,345,287 | Q_Id 13,321,042 | CreationDate 2012-11-10T10:01:00.000 | is_accepted false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,numpy
Answer, then Question:
If I understand correctly, every pixel in the gray image is mapped to a single pixel in N other images. In that case, the map array is numpy.zeros((i.shape[0], i.shape[1], N, 2), dtype=numpy.int32) since you need to store 1 x and 1 y coordinate into each other N arrays, not the full Nth array every time. Using integer indices will further reduce memory use. Then result[y,x,N,0] and result[y,x,N,1] are the y and x mappings into the Nth image.
I have a gray image in which I want to map every pixel to N other matrices of size LxM.How do I initialize such a matrix?I tried result=numpy.zeros(shape=(i_size[0],i_size[1],N,L,M)) for which I get the Value Error 'array is too big'.Can anyone suggest an alternate method?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 3,179

Title: Smoothing in python NLTK
A_Id 13,397,869 | Q_Id 13,356,348 | CreationDate 2012-11-13T06:20:00.000 | is_accepted false | Q_Score 4 | Users Score 2 | Score 0.379949 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,nltk,smoothing
Answer, then Question:
I'd suggest to replace all the words with low (specially 1) frequency to <unseen>, then train the classifier in this data. For classifying you should query the model for <unseen> in the case of a word that is not in the training data.
I am using Naive Bayes classifier in python for text classification. Is there any smoothing methods to avoid zero probability for unseen words in python NLTK? Thanks in advance!
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,430

Title: Is it possible to use in Python the svm_model, generated in matlab?
A_Id 13,445,709 | Q_Id 13,383,684 | CreationDate 2012-11-14T17:09:00.000 | is_accepted false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,matlab,libsvm
Answer, then Question:
Normally you would just call a method in libsvm to save your model to a file. You then can just use it in Python using their svm.py. So yes, you can - it's all saved in libsvm format.
Is it possible to use in Python the svm_model, generated in matlab? (I use libsvm)
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 77

Title: How to expose an NLTK based ML(machine learning) Python Script as a Web Service?
A_Id 13,399,425 | Q_Id 13,394,969 | CreationDate 2012-11-15T09:51:00.000 | is_accepted false | Q_Score 4 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | Topic flags: all 0
Tags: python,machine-learning,cherrypy
Answer, then Question:
NLTK based system tends to be slow at response per request, but good throughput can be achieved given enough RAM.
Let me explain what I'm trying to achieve. In the past while working on Java platform, I used to write Java codes(say, to push or pull data from MySQL database etc.) then create a war file which essentially bundles all the class files, supporting files etc and put it under a servlet container like Tomcat and this becomes a web service and can be invoked from any platform. In my current scenario, I've majority of work being done in Java, however the Natural Language Processing(NLP)/Machine Learning(ML) part is being done in Python using the NLTK, Scipy, Numpy etc libraries. I'm trying to use the services of this Python engine in existing Java code. Integrating the Python code to Java through something like Jython is not that straight-forward(as Jython does not support calling any python module which has C based extensions, as far as I know), So I thought the next option would be to make it a web service, similar to what I had done with Java web services in the past. Now comes the actual crux of the question, how do I run the ML engine as a web service and call the same from any platform, in my current scenario this happens to be Java. I tried looking in the web, for various options to achieve this and found things like CherryPy, Werkzeug etc but not able to find the right approach or any sample code or anything that shows how to invoke a NLTK-Python script and serve the result through web, and eventually replicating the functionality Java web service provides. In the Python-NLTK code, the ML engine does a data-training on a large corpus(this takes 3-4 minutes) and we don't want the Python code to go through this step every time a method is invoked. If I make it a web service, the data-training will happen only once, when the service starts and then the service is ready to be invoked and use the already trained engine. Now coming back to the problem, I'm pretty new to this web service things in Python and would appreciate any pointers on how to achieve this .Also, any pointers on achieving the goal of calling NLTK based python scripts from Java, without using web services approach and which can deployed on production servers to give good performance would also be helpful and appreciable. Thanks in advance. Just for a note, I'm currently running all my code on a Linux machine with Python 2.6, JDK 1.6 installed on it.
Web Development 1 | Data Science and Machine Learning 1 | ViewCount 1,090

Title: Convert an image RGB->Lab with python
A_Id 60,718,937 | Q_Id 13,405,956 | CreationDate 2012-11-15T20:49:00.000 | is_accepted false | Q_Score 49 | Users Score 0 | Score 0 | AnswerCount 5 | Available Count 1 | Topic flags: all 0
Tags: python,numpy,scipy,python-imaging-library,color-space
Answer, then Question:
At the moment I haven't found a good package to do that. You have to bear in mind that RGB is a device-dependent colour space so you can't convert accurately to XYZ or CIE Lab if you don't have a profile. So be aware that many solutions where you see converting from RGB to CIE Lab without specifying the colour space or importing a colour profile must be carefully evaluated. Take a look at the code under the hood most of the time they assume that you are dealing with sRGB colour space.
What is the preferred way of doing the conversion using PIL/Numpy/SciPy today?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 55,003

Title: pandas dataframe, copy by value
A_Id 13,420,016 | Q_Id 13,419,822 | CreationDate 2012-11-16T15:43:00.000 | is_accepted true | Q_Score 19 | Users Score 41 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,pandas
Answer, then Question:
All functions in Python are "pass by reference", there is no "pass by value". If you want to make an explicit copy of a pandas object, try new_frame = frame.copy().
I noticed a bug in my program and the reason it is happening is because it seems that pandas is copying by reference a pandas dataframe instead of by value. I know immutable objects will always be passed by reference but pandas dataframe is not immutable so I do not see why it is passing by reference. Can anyone provide some information? Thanks! Andrew
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 17,836

Title: Does performance differ between Python or C++ coding of OpenCV?
A_Id 66,955,473 | Q_Id 13,432,800 | CreationDate 2012-11-17T17:14:00.000 | is_accepted false | Q_Score 92 | Users Score 3 | Score 0.119427 | AnswerCount 5 | Available Count 2 | Topic flags: Other 1, rest 0
Tags: c++,python,performance,opencv
Answer, then Question:
Why choose? If you know both Python and C++, use Python for research using Jupyter Notebooks and then use C++ for implementation. The Python stack of Jupyter, OpenCV (cv2) and Numpy provide for fast prototyping. Porting the code to C++ is usually quite straight-forward.
I aim to start opencv little by little but first I need to decide which API of OpenCV is more useful. I predict that Python implementation is shorter but running time will be more dense and slow compared to the native C++ implementations. Is there any know can comment about performance and coding differences between these two perspectives?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 76,398

Title: Does performance differ between Python or C++ coding of OpenCV?
A_Id 13,432,830 | Q_Id 13,432,800 | CreationDate 2012-11-17T17:14:00.000 | is_accepted false | Q_Score 92 | Users Score 6 | Score 1 | AnswerCount 5 | Available Count 2 | Topic flags: Other 1, rest 0
Tags: c++,python,performance,opencv
Answer, then Question:
You're right, Python is almost always significantly slower than C++ as it requires an interpreter, which C++ does not. However, that does require C++ to be strongly-typed, which leaves a much smaller margin for error. Some people prefer to be made to code strictly, whereas others enjoy Python's inherent leniency. If you want a full discourse on Python coding styles vs. C++ coding styles, this is not the best place, try finding an article. EDIT: Because Python is an interpreted language, while C++ is compiled down to machine code, generally speaking, you can obtain performance advantages using C++. However, with regard to using OpenCV, the core OpenCV libraries are already compiled down to machine code, so the Python wrapper around the OpenCV library is executing compiled code. In other words, when it comes to executing computationally expensive OpenCV algorithms from Python, you're not going to see much of a performance hit since they've already been compiled for the specific architecture you're working with.
I aim to start opencv little by little but first I need to decide which API of OpenCV is more useful. I predict that Python implementation is shorter but running time will be more dense and slow compared to the native C++ implementations. Is there any know can comment about performance and coding differences between these two perspectives?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 76,398

Title: K-Means plus plus implementation
A_Id 13,436,279 | Q_Id 13,436,032 | CreationDate 2012-11-17T23:55:00.000 | is_accepted true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,colors,cluster-computing,k-means
Answer, then Question:
You can use a vector quantisation. You can make a list of each pixel and each adjacent pixel in x+1 and y+1 direction and pick the difference and plot it along a diagonale. Then you can calculate a voronoi diagram and get the mean color and compute a feature vector. It's a bit more effectice then to use a simple grid based mean color.
My score was to get the most frequent color in a image, so I implemented a k-means algorithm. The algorithm works good, but the result is not the one I was waiting for. So now I'm trying to do some improvements, the first I thought was to implement k-means++, so I get a beter position for the inicial clusters centers. First I select a random point, but how can I select the others. I mean how I define the minimal distance between them. Any help for this? Thanks
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 1,239

Title: Scipy / Numpy Reimann Sum Height
A_Id 13,463,491 | Q_Id 13,460,428 | CreationDate 2012-11-19T19:08:00.000 | is_accepted false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,numpy,scipy
Answer, then Question:
For computing Reimann sums you could look into numpy.cumsum(). I am not sure if you can do a surface or only an array with this method. However, you could always loop through all the rows of your terrain and store each row in a two dimensional array as you go. Leaving you with an array of all the terrain heights.
I am working on a visualization that models the trajectory of an object over a planar surface. Currently, the algorithm I have been provided with uses a simple trajectory function (where velocity and gravity are provided) and Runge-Kutta integration to check n points along the curve for a point where velocity becomes 0. We are discounting any atmospheric interaction. What I would like to do it introduce a non-planar surface, say from a digital terrain model (raster). My thought is to calculate a Reimann sum at each pixel and determine if the offset from the planar surface is equal to or less than the offset of the underlying topography from the planar surface. Is it possible, using numpy or scipy, to calculate the height of a Reimann rectangle? Conversely, the area of the rectangle (midpoint is fine) would work, as I know the width nd can calculate the height.
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 329

Title: Installing (and using) numpy without access to a compiler or binaries
A_Id 13,467,084 | Q_Id 13,466,939 | CreationDate 2012-11-20T05:08:00.000 | is_accepted true | Q_Score 2 | Users Score 3 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: Python Basics and Environment 1, rest 0
Tags: python,numpy
Answer, then Question:
a compiler is unavailable, and no pre-built binaries can be installed This... makes numpy impossible. If you cannot install numpy binaries, and you cannot compile numpy source code, then you are left with no options.
Assuming performance is not an issue, is there a way to deploy numpy in a environment where a compiler is unavailable, and no pre-built binaries can be installed? Alternatively, is there a pure-python numpy implementation?
Web Development 0 | Data Science and Machine Learning 1 | ViewCount 435

Title: Numpy save file is larger than the original
A_Id 13,491,927 | Q_Id 13,491,731 | CreationDate 2012-11-21T10:55:00.000 | is_accepted true | Q_Score 0 | Users Score 2 | Score 1.2 | AnswerCount 1 | Available Count 1 | Topic flags: all 0
Tags: python,numpy
Answer, then Question:
Have you looked at the way floats are represented in text before and after? You might have a line "1.,2.,3." become "1.000000e+0, 2.000000e+0,3.000000e+0" or something like that, the two are both valid and both represent the same numbers. More likely, however, is that if the original file contained floats as values with relatively few significant digits (for example "1.1, 2.2, 3.3"), after you do normalization and scaling, you "create" more digits which are needed to represent the results of your math but do not correspond to real increase in precision (for example, normalizing the sum of values to 1.0 in the last example gives "0.1666666, 0.3333333, 0.5"). I guess the short answer is that there is no guarantee (and no requirement) for floats represented as text to occupy any particular amount of storage space, or less than the maximum possible per float; it can vary a lot even if the data remains the same, and will certainly vary if the data changes.
I'm extracting a large CSV file (200Mb) that was generated using R with Python (I'm the one using Python). I do some tinkering with the file (normalization, scaling, removing junk columns, etc) and then save it again using numpy's savetxt with ',' as the delimiter to keep the CSV format. Thing is, the new file is almost twice as large as the original (almost 400Mb). The original data as well as the new one are only arrays of floats. If it helps, it looks as if the new file has really small values that need exponential notation, which the original did not have. Any idea why this is happening?
0
1
280
0
13,520,565
0
0
0
0
1
false
3
2012-11-22T21:37:00.000
1
1
0
Peak detection in Python
13,520,319
0.197375
python,r,time-series
Calculate the derivative of your sample points: for example, for every 5 points (THRESHOLD!) calculate the slope of those five points with the least squares method (search on wiki if you don't know what it is; any linear regression function uses it). When this slope is almost (THRESHOLD!) zero, there is a peak.
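A rough sketch of that sliding-window slope idea using numpy.polyfit for the least-squares fit; the series, window size and threshold are made-up values, and note that a near-zero slope flags troughs as well as peaks:
import numpy as np

x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.05 * np.random.randn(200)     # hypothetical noisy time series

win, eps = 5, 1e-2                              # window size and "almost zero" slope threshold
flat = []
for i in range(len(y) - win):
    slope = np.polyfit(x[i:i + win], y[i:i + win], 1)[0]   # least-squares slope of the window
    if abs(slope) < eps:
        flat.append(i + win // 2)               # candidate peak (or trough) location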
In time series we can find peak (min and max values). There are algorithms to find peaks. My question is: In python are there libraries for peak detection in time series data? or something in R using RPy?
0
1
1,640
0
13,530,687
0
0
0
0
2
true
2
2012-11-23T09:57:00.000
0
2
0
Tracking a multicolor object
13,526,654
1.2
python,opencv,tracking
For a foolproof track you need to combine more than one method... following are some of the hints... if you have prior knowledge of the object then you can use template matching... but template matching is a little process intensive... if you are using a GPU then you might have some benefit... from your write-up I presume the external light varies to a lesser extent... so on that ground you can use the goodFeaturesToTrack function of opencv and use optical flow to track only those points found by goodFeaturesToTrack in the next frames of the video... if the background is stable except for some brightness variation and the object is moving comparatively more than the background, then you can subtract the previous frame from the present frame to get the position of the moving object... this is a kind of fast and easy change detection technique... filtering contours based on area is a good idea but try to add some more features to the filtering criteria... i.e. you can try filtering based on ellipticity, aspect ratio of the bounding box etc... lastly... if you have any prior knowledge about the motion path of the object you can use a kalman filter... if the background is almost invariant or only varies to some extent then you can try a gaussian mixture model to model the background... while the changing ball is your foreground...
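A small sketch of the goodFeaturesToTrack + optical-flow hint, run on two synthetic frames; the frame contents and parameter values are invented for illustration only:
import numpy as np
import cv2

# two synthetic grayscale frames: a bright square that moves 5 px to the right
prev = np.zeros((120, 160), np.uint8); prev[40:80, 40:80] = 255
curr = np.zeros((120, 160), np.uint8); curr[40:80, 45:85] = 255

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=20, qualityLevel=0.01, minDistance=5)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
ok = status.ravel() == 1
displacement = p1[ok] - p0[ok]                  # per-feature motion between the two frames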
I want to track a multicolored object(4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed and finally adding up filtered images. Then I filter the contours based on area. This method is quite stable most of the time but when the external light varies a bit, the object is not recognized as the hue values are getting messed up and it is getting difficult to track the object. Also since I am filtering the contours based on area I often have false positives and the object is not being tracked properly sometimes. Do you have any suggestion for getting rid of these problems. Could I use some other method to track it instead of filtering individually on colors and then adding up the images and searching for contours?
0
1
475
0
13,534,342
0
0
0
0
2
false
2
2012-11-23T09:57:00.000
0
2
0
Tracking a multicolor object
13,526,654
0
python,opencv,tracking
You might try having multiple or an infinite number of models of the object depending upon the light sources available, and then classifying your object as either the object with one of the light sources or not the object. Note: this is a machine learning-type approach to the problem. Filtering with a Kalman, extended Kalman filter, or particle filter (depending on your application) would be a good idea, so that you can have a "memory" of the recently tracked features and have expectations for the next tracked color/feature in the near term (i.e. if you just saw the object, there is a high likelihood that it hasn't disappeared in the next frame). In general, this is a difficult problem that I have run into a few times doing robotics research. The only robust solution is to learn models and to confirm or deny them with what your system actually sees. Any number of machine learning approaches should work, but the easiest would probably be support vector machines. The most robust would probably be something like Gaussian Processes (if you want to do an infinite number of models). Good luck and don't get too frustrated; this is not an easy problem!
I want to track a multicolored object(4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed and finally adding up filtered images. Then I filter the contours based on area. This method is quite stable most of the time but when the external light varies a bit, the object is not recognized as the hue values are getting messed up and it is getting difficult to track the object. Also since I am filtering the contours based on area I often have false positives and the object is not being tracked properly sometimes. Do you have any suggestion for getting rid of these problems. Could I use some other method to track it instead of filtering individually on colors and then adding up the images and searching for contours?
0
1
475
0
13,544,567
0
0
0
0
1
true
5
2012-11-24T16:25:00.000
1
4
0
how to create random single source random acyclic directed graphs with negative edge weights in python
13,543,069
1.2
python,random,graph,networkx,bellman-ford
I noticed that the generated graphs always have exactly one sink vertex, which is the first vertex. You can reverse the direction of all edges to get a graph with a single source vertex.
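A quick sketch of that reversal trick; the graph size, the weight range, and the Bellman-Ford call are illustrative choices and assume a reasonably recent networkx:
import random
import networkx as nx

G = nx.gnc_graph(10, seed=42)        # growing-network DAG; the first node ends up as the sink
G = G.reverse()                      # flip every edge so that node 0 becomes the single source
for u, v in G.edges():
    G[u][v]["weight"] = random.uniform(-5, 10)    # negative edge weights are allowed

dist = nx.single_source_bellman_ford_path_length(G, 0)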
I want to do an execution time analysis of the Bellman-Ford algorithm on a large number of graphs, and in order to do that I need to generate a large number of random DAGs with the possibility of having negative edge weights. I am using networkx in python. There are a lot of random graph generators in the networkx library, but which one will return a directed graph with edge weights and a single source vertex? I am using networkx.generators.directed.gnc_graph() but that does not quite guarantee to return only a single source vertex. Is there a way to do this with or even without networkx?
0
1
4,642
1
13,585,375
0
0
0
0
1
true
1
2012-11-27T13:04:00.000
1
1
0
What is PyOpenGL's "context specific data"?
13,584,900
1.2
python,opengl,ctypes,pyopengl
Are there any scenarios I'm missing? Buffer mappings obtained through glMapBuffer
PyOpenGL docs say: Because of the way OpenGL and ctypes handle, for instance, pointers, to array data, it is often necessary to ensure that a Python data-structure is retained (i.e. not garbage collected). This is done by storing the data in an array of data-values that are indexed by a context-specific key. The functions to provide this functionality are provided by the OpenGL.contextdata module. When exactly is it the case? One situation I've got in my mind is client-side vertex arrays back from OpenGL 1, but they have been replaced by buffer objects for years. A client side array isn't required any more after a buffer object is filled (= right after glBufferData returns, I pressume). Are there any scenarios I'm missing?
0
1
176
0
13,593,942
0
1
0
0
1
true
22
2012-11-27T20:38:00.000
18
2
0
python pandas dataframe thread safe?
13,592,618
1.2
python,thread-safety,pandas
The data in the underlying ndarrays can be accessed in a threadsafe manner, and modified at your own risk. Deleting data would be difficult as changing the size of a DataFrame usually requires creating a new object. I'd like to change this at some point in the future.
I am using multiple threads to access and delete data in my pandas dataframe. Because of this, I am wondering is pandas dataframe threadsafe?
0
1
17,987
0
13,595,084
0
1
0
0
2
true
0
2012-11-27T23:15:00.000
0
2
0
numpy for 64 bit windows
13,594,953
1.2
windows,numpy,python-2.7,64-bit
It should work if you're using 32-bit Python. If you're using 64-bit Python you'll need 64-bit Numpy.
I have read several related posts about installing numpy for python version 2.7 on a 64 bit windows7 OS. Before I try these, does anybody know if the 32bit version will work on a 64bit system?
0
1
380
0
33,553,807
0
1
0
0
2
false
0
2012-11-27T23:15:00.000
0
2
0
numpy for 64 bit windows
13,594,953
0
windows,numpy,python-2.7,64-bit
If you are getting it from pip and you want a 64 bit version of NumPy, you need MSVS 2008. pip needs to compile the NumPy module with the same compiler that the Python binary was compiled with. The last I checked (this summer), python's build.py on Windows only supported up to that version of MSVS. Probably because build.py isn't updated for compilers which are not clearly available for free as compile-only versions. There is an "Express" version of MSVS 2010, 2012 and 2013 (which would satisfy that requirement). But I am not sure if there is a dedicated repository for them and if they have a redistribution license. If there is, then the only problem is that no one got around to upgrading build.py to support the newer versions of MSVS.
I have read several related posts about installing numpy for python version 2.7 on a 64 bit windows7 OS. Before I try these, does anybody know if the 32bit version will work on a 64bit system?
0
1
380
0
13,615,685
0
0
0
0
1
false
53
2012-11-28T11:21:00.000
1
5
0
Feature Selection and Reduction for Text Classification
13,603,882
0.039979
python,nlp,svm,sentiment-analysis,feature-extraction
Linear svm is recommended for high dimensional features. Based on my experience the ultimate limitation of SVM accuracy depends on the positive and negative "features". You can do a grid search (or in the case of linear svm you can just search for the best cost value) to find the optimal parameters for maximum accuracy, but in the end you are limited by the separability of your feature-sets. The fact that you are not getting 90% means that you still have some work to do finding better features to describe your members of the classes.
I am currently working on a project, a simple sentiment analyzer such that there will be 2 and 3 classes in separate cases. I am using a corpus that is pretty rich in the means of unique words (around 200.000). I used bag-of-words method for feature selection and to reduce the number of unique features, an elimination is done due to a threshold value of frequency of occurrence. The final set of features includes around 20.000 features, which is actually a 90% decrease, but not enough for intended accuracy of test-prediction. I am using LibSVM and SVM-light in turn for training and prediction (both linear and RBF kernel) and also Python and Bash in general. The highest accuracy observed so far is around 75% and I need at least 90%. This is the case for binary classification. For multi-class training, the accuracy falls to ~60%. I need at least 90% at both cases and can not figure how to increase it: via optimizing training parameters or via optimizing feature selection? I have read articles about feature selection in text classification and what I found is that three different methods are used, which have actually a clear correlation among each other. These methods are as follows: Frequency approach of bag-of-words (BOW) Information Gain (IG) X^2 Statistic (CHI) The first method is already the one I use, but I use it very simply and need guidance for a better use of it in order to obtain high enough accuracy. I am also lacking knowledge about practical implementations of IG and CHI and looking for any help to guide me in that way. Thanks a lot, and if you need any additional info for help, just let me know. @larsmans: Frequency Threshold: I am looking for the occurrences of unique words in examples, such that if a word is occurring in different examples frequently enough, it is included in the feature set as a unique feature. @TheManWithNoName: First of all thanks for your effort in explaining the general concerns of document classification. I examined and experimented all the methods you bring forward and others. I found Proportional Difference (PD) method the best for feature selection, where features are uni-grams and Term Presence (TP) for the weighting (I didn't understand why you tagged Term-Frequency-Inverse-Document-Frequency (TF-IDF) as an indexing method, I rather consider it as a feature weighting approach). Pre-processing is also an important aspect for this task as you mentioned. I used certain types of string elimination for refining the data as well as morphological parsing and stemming. Also note that I am working on Turkish, which has different characteristics compared to English. Finally, I managed to reach ~88% accuracy (f-measure) for binary classification and ~84% for multi-class. These values are solid proofs of the success of the model I used. This is what I have done so far. Now working on clustering and reduction models, have tried LDA and LSI and moving on to moVMF and maybe spherical models (LDA + moVMF), which seems to work better on corpus those have objective nature, like news corpus. If you have any information and guidance on these issues, I will appreciate. I need info especially to setup an interface (python oriented, open-source) between feature space dimension reduction methods (LDA, LSI, moVMF etc.) and clustering methods (k-means, hierarchical etc.).
0
1
30,760
0
13,612,350
0
0
0
0
1
true
1
2012-11-28T17:38:00.000
1
1
0
Creating a haar classifier using opencv_traincascade
13,611,126
1.2
python,opencv,machine-learning,computer-vision,object-detection
It looks like you first need to determine what features you would like to train your classifier on, as the haar classifier benefits from those extra features. From there you will need to train the classifier; this requires you to get a lot of images that contain cars and a lot that do not, run the training over them, and let it tweak the decision values it is shooting for so that it classifies as well as it can with your selected features. To get a better classifier you will have to figure out the order of your features and the optimal order to put them together, to further dive into the object and determine if it is in fact what you are looking for. Again this will require a lot of examples for your particular features and your problem as a whole.
I am having a little bit of trouble creating a haar classifier. I need to build up a classifier to detect cars. At the moment I made a program in python that reads in an image, I draw a rectangle around the area the object is in, Once the rectangle is drawn, it outputs the image name, the top left and bottom right coordinates of the rectangle. I am unsure of where to go from here and how to actually build up the classifier. Can anyone offer me any help? EDIT* I am looking for help on how to use the opencv_traincascade. I have looked at the documentation but I can't quite figure out how to use it to create the xml file to be used in the detection program.
0
1
1,867
0
13,637,244
0
0
0
0
1
true
6
2012-11-29T23:05:00.000
3
2
0
ElasticSearch: EdgeNgrams and Numbers
13,636,419
1.2
python,elasticsearch,django-haystack
if you're using the edgeNGram tokenizer, then it will treat "EdgeNGram 12323" as a single token and then apply the edgeNGram'ing process on it. For example, if min_grams=1 max_grams=4, you'll get the following tokens indexed: ["E", "Ed", "Edg", "Edge"]. So I guess this is not what you're really looking for - consider using the edgeNGram token filter instead: If you're using the edgeNGram token filter, make sure you're using a tokenizer that actually tokenizes the text "EdgeNGram 12323" to produce two tokens out of it: ["EdgeNGram", "12323"] (standard or whitespace tokenizer will do the trick). Then apply the edgeNGram filter next to it. In general, edgeNGram will take "12323" and produce tokens such as "1", "12", "123", etc...
Any ideas on how EdgeNgram treats numbers? I'm running haystack with an ElasticSearch backend. I created an indexed field of type EdgeNgram. This field will contain a string that may contain words as well as numbers. When I run a search against this field using a partial word, it works how it's supposed to. But if I put in a partial number, I'm not getting the result that I want. Example: I search for the indexed field "EdgeNgram 12323" by typing "edgen" and I'll get the index returned to me. If I search for that same index by typing "123" I get nothing. Thoughts?
0
1
2,671
0
13,645,588
0
1
0
0
1
false
2
2012-11-29T23:33:00.000
3
1
0
Fast Fourier Transform (fft) with Time Associated Data Python
13,636,758
0.53705
python,numpy,scipy,fft
If the data is not uniformly sampled (i.e. Tx[i]-Tx[i-1] is not constant), then you cannot do an FFT on it. Here's an idea: If you have a pretty good idea of the bandwidth of the signal, then you could create a resampled version of the DFT basis vectors R. I.e. the complex sinusoids evaluated at the Tx times. Then solve the linear system x = A*z: where x is your observation, z is the unknown frequency content of the signal, and A is the resampled DFT basis. Note that A may not actually be a basis depending on the severity of the non-uniformity. It will almost certainly not be an orthogonal basis like the DFT.
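A minimal numpy sketch of that idea, with made-up sample times, a made-up frequency grid, and a least-squares solve (none of this is code from the question):
import numpy as np

Tx = np.sort(np.random.rand(64)) * 10.0                    # non-uniform sample times
X = np.sin(2 * np.pi * 0.8 * Tx)                           # observed signal at those times

freqs = np.linspace(0.0, 2.0, 128)                         # grid covering the assumed bandwidth
A = np.exp(2j * np.pi * freqs[None, :] * Tx[:, None])      # "resampled DFT basis", one column per frequency
z, *_ = np.linalg.lstsq(A, X, rcond=None)                  # least-squares solve of x = A*z
psd = np.abs(z) ** 2                                       # rough power spectral density estimate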
I have data and a time 'value' associated with it (Tx and X). How can I perform a fast Fourier transform on my data. Tx is an array I have and X is another array I have. The length of both arrays are of course the same and they are associated by Tx[i] with X[i] , where i goes from 0 to len(X). How can I perform a fft on such data to ultimately achieve a Power Spectral Density plot frequency against |fft|^2.
0
1
1,901
0
53,256,590
0
0
0
0
2
false
123
2012-11-30T18:38:00.000
-4
7
0
How can I filter lines on load in Pandas read_csv function?
13,651,117
-1
python,pandas
You can specify the nrows parameter: import pandas as pd; df = pd.read_csv('file.csv', nrows=100). This code works well in version 0.20.3.
How can I filter which lines of a CSV to be loaded into memory using pandas? This seems like an option that one should find in read_csv. Am I missing something? Example: we've a CSV with a timestamp column and we'd like to load just the lines that with a timestamp greater than a given constant.
0
1
95,461
0
60,026,814
0
0
0
0
2
false
123
2012-11-30T18:38:00.000
4
7
0
How can I filter lines on load in Pandas read_csv function?
13,651,117
0.113791
python,pandas
If the filtered range is contiguous (as it usually is with time(stamp) filters), then the fastest solution is to hard-code the range of rows. Simply combine skiprows=range(1, start_row) with nrows=end_row parameters. Then the import takes seconds where the accepted solution would take minutes. A few experiments with the initial start_row are not a huge cost given the savings on import times. Notice we kept header row by using range(1,..).
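For illustration, a hedged sketch of that combination; the file name and row numbers are placeholders, and nrows counts the rows to read once the skipped block is past:
import pandas as pd

start_row, end_row = 1000, 2000
df = pd.read_csv("file.csv",
                 skiprows=range(1, start_row),     # keep the header row (row 0)
                 nrows=end_row - start_row)        # rows to read after the skipped block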
How can I filter which lines of a CSV to be loaded into memory using pandas? This seems like an option that one should find in read_csv. Am I missing something? Example: we've a CSV with a timestamp column and we'd like to load just the lines that with a timestamp greater than a given constant.
0
1
95,461
0
13,852,311
0
0
0
0
1
false
26
2012-11-30T20:54:00.000
4
7
0
How do I Pass a List of Series to a Pandas DataFrame?
13,653,030
0.113791
python,pandas
Check out DataFrame.from_items too
I realize Dataframe takes a map of {'series_name':Series(data, index)}. However, it automatically sorts that map even if the map is an OrderedDict(). Is there a simple way to pass a list of Series(data, index, name=name) such that the order is preserved and the column names are the series.name? Is there an easy way if all the indices are the same for all the series? I normally do this by just passing a numpy column_stack of series.values and specifying the column names. However, this is ugly and in this particular case the data is strings not floats.
0
1
51,167
0
13,663,566
0
1
0
0
1
true
0
2012-12-01T20:12:00.000
0
1
0
Efficient Hadoop Word counting for large file
13,663,294
1.2
python,hadoop,hadoop-streaming
The most efficient way to do this is to maintain a hash map of word frequency in your mappers, and flush them to the output context when they reach a certain size (say 100,000 entries). Then clear out the map and continue (remember to flush the map in the cleanup method too). If you still truly have hundreds of millions of words, then you'll either need to wait a long time for the reducers to finish, or increase your cluster size and use more reducers.
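A minimal Hadoop-streaming-style mapper sketch of that flush idea; the 100,000-entry threshold is just the example figure from above:
import sys

counts, FLUSH_AT = {}, 100000

def flush():
    # emit the partial counts and start over
    for word, n in counts.items():
        print("%s\t%d" % (word, n))
    counts.clear()

for line in sys.stdin:
    for word in line.split():
        counts[word] = counts.get(word, 0) + 1
    if len(counts) >= FLUSH_AT:
        flush()
flush()    # the "cleanup" flush for whatever is left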
I want to implement a hadoop reducer for word counting. In my reducer I use a hash table to count the words. But if my file is extremely large, the hash table will use an extreme amount of memory. How can I address this issue? (E.g. a file with 10 million lines means each reducer receives 100 million words; how can it count the words when a hash table requires 100 million keys?) My current implementation is in python. Is there a smart way to reduce the amount of memory?
0
1
442
0
13,701,036
0
0
0
0
2
false
1
2012-12-04T10:39:00.000
0
3
0
boolean indexing on index (instead of dataframe)
13,701,035
0
python,pandas
I see two ways of getting this, both of which look like a detour – which makes me think there must be a better way which I'm overlooking. Converting the MultiIndex into columns: df[df.reset_index()["B"] == 2] Swapping the name I want to use to the start of the MultiIndex and then use lookup by index: df.swaplevel(0, "B").ix[2]
When I have a pandas.DataFrame df with columns ["A", "B", "C", "D"], I can filter it using constructions like df[df["B"] == 2]. How do I do the equivalent of df[df["B"] == 2], if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby(["A", "B"]).mean() or df.set_index(["A", "B"]))
0
1
148
0
13,755,051
0
0
0
0
2
true
1
2012-12-04T10:39:00.000
1
3
0
boolean indexing on index (instead of dataframe)
13,701,035
1.2
python,pandas
I would suggest either: df.xs(2, level='B') or df[df.index.get_level_values('B') == val] I'd like to make the syntax for the latter operation a little nicer.
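A tiny self-contained example of both selections, on a made-up frame:
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, 2, 1, 2], "C": [10, 20, 30, 40]})
g = df.groupby(["A", "B"]).mean()                    # MultiIndex on (A, B)

print(g.xs(2, level="B"))                            # cross-section at B == 2
print(g[g.index.get_level_values("B") == 2])         # boolean mask built from the index level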
When I have a pandas.DataFrame df with columns ["A", "B", "C", "D"], I can filter it using constructions like df[df["B"] == 2]. How do I do the equivalent of df[df["B"] == 2], if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby(["A", "B"]).mean() or df.set_index(["A", "B"]))
0
1
148
0
13,774,224
0
0
0
1
1
false
0
2012-12-07T23:52:00.000
0
1
0
Plotting data using Flot and MySQL
13,772,857
0
python,mysql,flot
Install an httpd server, install PHP, and write a PHP script to fetch the data from the database and render it as a webpage. This is a fairly elaborate request, with relatively few details given. More information will allow us to give better answers.
So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows: Database: testDB Table: sensors Each record contains 3 fields. The first column is an auto-incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor data. It is in the following format: 23432.32 112343.3 53454.322 34563.32 76653.44 000.000 333.2123 I am completely lost on how to complete this project. I have read many pages showing examples but don't really understand them. They provide source code, but I am not sure where that code goes. I installed httpd on my server and that is where I stand. Does anyone know of a good how-to from beginning to end that I could follow? Or could someone post a good step by step for me to follow? Thanks for your help
0
1
428
0
37,094,880
0
0
0
0
1
false
14
2012-12-08T23:33:00.000
4
2
0
Finding the indices of the top three values via argmin() or min() in python/numpy without mutation of list?
13,783,071
0.379949
python,list,numpy,min
numpy.argpartition(cluster, 3) would be much more effective.
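For example, with random stand-in data:
import numpy as np

errors = np.random.rand(16000)
idx = np.argpartition(errors, 3)[:3]          # indices of the three smallest values (unordered)
idx = idx[np.argsort(errors[idx])]            # optionally order them smallest-first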
So I have this list called sumErrors that's 16000 rows and 1 column, and this list is already presorted into 5 different clusters. And what I'm doing is slicing the list for each cluster and finding the index of the minimum value in each slice. However, I can only find the first minimum index using argmin(). I don't think I can just delete the value, because otherwise it would shift the slices over, and the indices are what I use to recover the original IDs. Does anyone know how to get argmin() to spit out the indices of the lowest three? Or perhaps a more optimal method? Maybe I should just assign ID numbers, but I feel like there may be a more elegant method.
0
1
11,034
0
13,795,874
0
0
0
0
1
true
13
2012-12-10T05:47:00.000
25
2
0
Numpy error: Singular matrix
13,795,682
1.2
python,numpy
A singular matrix is one that is not invertible. This means that the system of equations you are trying to solve does not have a unique solution; linalg.solve can't handle this. You may find that linalg.lstsq provides a usable solution.
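A small illustration with an obviously singular system (the numbers are made up):
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])        # singular: the second row is twice the first
b = np.array([3.0, 6.0])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution instead of solve()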
What does the error Numpy error: Matrix is singular mean specifically (when using the linalg.solve function)? I have looked on Google but couldn't find anything that made it clear when this error occurs.
0
1
60,944
0
19,329,962
0
0
0
0
1
false
4
2012-12-12T13:52:00.000
2
2
0
FFT in Numpy (Python) when N is not a power of 2
13,841,296
0.197375
python,numpy,fft
In my experience the algorithms don't do automatic padding, or at least some of them don't. For example, running the scipy.signal.hilbert method on a signal that wasn't of length == a power of two took about 45 seconds. When I padded the signal myself with zeros to such a length, it took 100ms. YMMV but it's something to double check basically any time you run a signal processing algorithm.
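A sketch of padding to the next power of two before the transform; the signal length is arbitrary:
import numpy as np

x = np.random.randn(6000)                     # length is not a power of two
n = 1 << (len(x) - 1).bit_length()            # next power of two (8192 here)
X = np.fft.fft(x, n=n)                        # numpy zero-pads the input up to n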
My question is about the algorithm which is used in Numpy's FFT function. The documentation of Numpy says that it uses the Cooley-Tukey algorithm. However, as you may know, this algorithm works only if the number N of points is a power of 2. Does numpy pad my input vector x[n] in order to calculate its FFT X[k]? (I don't think so, since the number of points I have in the output is also N). How could I actually "see" the code which is used by numpy for its FFT function? Cheers!
0
1
6,896
0
13,858,423
0
1
1
0
1
false
5
2012-12-13T03:51:00.000
2
3
0
I want Python as front end, Fortran as back end. I also want to make fortran part parallel - best strategy?
13,852,646
0.132549
python,arrays,parallel-processing,fortran,f2py
An alternative approach to VladimirF's suggestion could be to set up the two parts as a client-server construct, where your Python part could talk to the Fortran part using sockets. Though this comes with the burden of implementing some protocol for the interaction, it has the advantage that you get a clean separation and can even go on running them on different machines with an interaction over the network. In fact, with this approach you could even do the embarrassingly parallel part, by spawning as many instances of the Fortran application as needed and feeding them all with different data.
I have a python script that I hope will do roughly this: call some particle positions into an array, run an algorithm over all 512^3 positions to distribute them to an NxNxN matrix, feed that matrix back to python, and use plotting in python to visualise the matrix (i.e. mayavi). First I have to write it in serial, but ideally I want to parallelize step 2 to speed up computation. What tools/strategy might get me started? I know Python and Fortran well but not much about how to connect the two for my particular problem. At the moment I am doing everything in Fortran then loading my python program - I want to do it all at once. I've heard of f2py but I want to get experienced people's opinions before I go down one particular rabbit hole. Thanks Edit: The thing I want to make parallel is 'embarrassingly parallel' in that it is just a loop of N particles and I want to get through that loop as quickly as possible.
0
1
929
0
13,875,710
0
0
0
0
1
false
0
2012-12-14T09:09:00.000
1
2
0
Efficient way to find number of distinct elements in a list
13,875,584
0.099668
python,python-3.x,k-means
One way is to sort your list and then run over the elements, comparing each one to the previous one. If they are not equal, add 1 to your "distinct counter". The scan itself is O(n), and for the sorting you can use whichever algorithm you prefer, such as quick sort or merge sort, but I guess there is a sorting routine available in the lib you use. Another option is to create a hash table (or set) and add all the elements; the number of distinct keys it ends up holding is the number of distinct elements, since repeated elements will not be inserted twice. Each insertion is O(1) on average, so maybe this is the better solution. Good luck! Hope this helps, Dídac Pérez
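Both options in a couple of lines, with a toy cluster list standing in for your Union-Find array:
clusters = [1, 2, 2, 4, 5, 5, 5]

distinct = len(set(clusters))                               # hash-based count
s = sorted(clusters)                                        # sort-and-scan count
distinct_scan = 1 + sum(a != b for a, b in zip(s, s[1:]))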
I'm trying to do K-Means Clustering using Kruskal's Minimum Spanning Tree Algorithm. My original design was to run the full-length Kruskal algorithm of the input and produce an MST, after which delete the last k-1 edges (or equivalently k-1 most expensive edges). Of course this is the same as running Kruskal algorithm and stopping it just before it adds its last k-1 edges. I want to use the second strategy i.e instead of running the full length Kruskal algorithm, stop it just after the number of clusters so far equals K. I'm using Union-Find data structure and using a list object in this Union-Find data structure. Each vertex on this graph is represented by its current cluster on this list e.g [1,2,3...] means vertices 1,2,3 are in their distinct independent clusters. If two vertices are joined their corresponding indices on the list data structure are updated to reflect this. e.g merging vertices 2 and 3 leaves the list data object as [1,2,2,4,5.....] My strategy is then every time two nodes are merged, count the number of DISTINCT elements in the list and if it equals the number of desired clusters, stop. My worry is that this may not be the most efficient option. Is there a way I could count the number of distinct objects in a list efficiently?
0
1
296
0
13,921,674
0
0
0
0
1
true
104
2012-12-17T20:27:00.000
165
2
0
Python - Dimension of Data Frame
13,921,647
1.2
python,pandas
df.shape, where df is your DataFrame.
New to Python. In R, you can get the dimension of a matrix using dim(...). What is the corresponding function in Python Pandas for their data frame?
0
1
147,970
0
13,986,712
0
0
0
0
1
true
2
2012-12-21T01:14:00.000
0
1
0
sklearn.svm.SVC doesn't give the index of support vectors for sparse dataset?
13,982,983
1.2
python,machine-learning,libsvm,scikit-learn,scikits
Not without going into the cython code, I am afraid. This has been on the todo list for way too long. Any help with it would be much appreciated. It shouldn't be too hard, I think.
sklearn.svm.SVC doesn't give the index of support vectors for sparse dataset. Is there any hack/way to get the index of SVs?
0
1
265
0
15,980,514
0
1
0
0
1
false
0
2012-12-21T11:18:00.000
0
1
0
Why are my Pylot graphs blank?
13,989,166
0
python,numpy,matplotlib,pylot
I had the exact same problem. I spent some time on it today debugging a few things, and I realized the problem in my case was that the data collected to plot the charts wasn't correct and needed to be adjusted. What I did was just change the time from absolute to relative and dynamically adjust the range of the axis. I'm not that good in python and so my code doesn't look that good.
I'm using Pylot 1.26 with Python 2.7 on Windows 7 64bit having installed Numpy 1.6.2 and Matplotlib 1.1.0. The test case executes and produces a report but the response time graph is empty (no data) and the throughput graph is just one straight line. I've tried the 32 bit and 64 bit installers but the result is the same.
0
1
682
0
15,859,052
0
0
0
0
1
false
4
2012-12-24T10:35:00.000
0
1
0
update U V data for matplotlib streamplot
14,020,155
0
python,matplotlib,scipy
I suspect the answer is no, because if you change the vectors, it would need to re-compute the stream lines. The objects returned by streamline are a line and patch collections, which know nothing about the vectors. To get this functionality would require writing a new class to wrap everything up and finding a sensible way to re-use the existing objects. The best bet is to use cla() (as suggested by dmcdougall) to clear your axes and just re-plot them. A slightly less drastic approach would be to just remove the artists added by streamplot.
After plotting streamlines using 'matplotlib.streamplot' I need to change the U V data and update the plot. For imshow and quiver there are the functions 'set_data' and 'set_UVC', respectively. There does not seem to be any similar function for streamlines. Is there any way to still update the data or get similar functionality?
0
1
1,159
0
14,048,691
0
1
0
0
1
false
2
2012-12-26T01:23:00.000
2
2
0
OpenCV anonymous/guaranteed unique window
14,035,161
0.197375
python,opencv
In modules/highgui/src/window_w32.cpp (or in some other file if you are not using windows - look at void cv::namedWindow( const string& winname, int flags ) in ...src/window.cpp) there is a function static CvWindow* icvFindWindowByName( const char* name ) which is probably what you need, but it's internal, so the authors of OpenCV for some reason didn't want others to use it (or didn't consider that someone may need it). I think that the best option is to use the system API to find out whether a window with a specific name exists. Alternatively, use something that is almost impossible to be a window name, for example current time in ms + user name + random number + random string (yeah, I know that the window name "234564312cyriel123234123dgbdfbddfgb#$%grw$" is not beautiful).
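One hedged way to build such a practically-unique name from Python is a UUID; the "tmp-" prefix and the flags are arbitrary choices:
import uuid
import cv2

win = "tmp-" + uuid.uuid4().hex               # collision is practically impossible
cv2.namedWindow(win, cv2.WINDOW_NORMAL)
# ... show the temporary content and collect the user input here ...
cv2.destroyWindow(win)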
quite new to OpenCV so please bear with me: I need to open up a temporary window for user input, but I need to be certain it won't overwrite a previously opened window. Is there a way to open up either an anonymous window, or somehow create a guaranteed unique window name? Obviously a long random string would be pretty safe, but that seems like a hack. P.S. I'm using the python bindings at the moment, but If you want to write a response in c/c++ that's fine, I'm familiar with them.
0
1
343
0
14,070,812
0
0
0
0
1
false
7
2012-12-28T13:53:00.000
1
4
0
Calculating Point Density using Python
14,070,565
0.049958
python
Yes, you do have edges, and they are the distances between the nodes. In your case, you have a complete graph with weighted edges. Simply derive the distance from each node to each other node -- which gives you O(N^2) time complexity -- and use both nodes and edges as input to one of the approaches you found. It happens, though, that your problem seems to be more of an analysis problem than anything else; you should try to run some clustering algorithm on your data, like K-means, that clusters nodes based on a distance function, in which you can simply use the euclidean distance. The result of this algorithm is exactly what you'll need, as you'll have clusters of close elements, you'll know which and how many elements are assigned to each group, and you'll be able, according to these values, to generate the coefficient you want to assign to each node. The only concern worth pointing out here is that you'll have to determine how many clusters -- the k in k-means -- you want to create.
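A short scikit-learn sketch of that clustering idea; the coordinates, the cluster count, and the density-to-weight rule are placeholders, and scikit-learn is an assumed extra dependency:
import numpy as np
from sklearn.cluster import KMeans

pts = np.random.rand(500, 2)                          # stand-in X/Y coordinates
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pts)

sizes = np.bincount(km.labels_)                       # points per cluster
weights = sizes[km.labels_] / sizes.max()             # per-point weight from its cluster's density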
I have a list of X and Y coordinates from geodata of a specific part of the world. I want to assign each coordinate, a weight, based upon where it lies in the graph. For Example: If a point lies in a place where there are a lot of other nodes around it, it lies in a high density area, and therefore has a higher weight. The most immediate method I can think of is drawing circles of unit radius around each point and then calculating if the other points lie within in and then using a function, assign a weight to that point. But this seems primitive. I've looked at pySAL and NetworkX but it looks like they work with graphs. I don't have any edges in the graph, just nodes.
0
1
12,400
0
14,126,790
0
0
0
0
1
false
5
2013-01-02T17:14:00.000
1
2
0
Performance/standard using 1d vs 2d vectors in numpy
14,126,201
0.099668
python,matlab,numpy,linear-algebra
In matlab (for historical reasons, I would argue) the basic type is an M-by-N array (matrix), so that scalars are 1-by-1 arrays and vectors are either N-by-1 or 1-by-N arrays. (Memory layout is always Fortran style.) This "limitation" is not present in numpy: you have true scalars, and ndarrays can have as many dimensions as you like. (Memory layout can be C- or Fortran-contiguous.) For this reason there is no preferred (standard) practice. It is up to you, according to your application, to choose the one which better suits your needs.
Is there a standard practice for representing vectors as 1d or 2d ndarrays in NumPy? I'm moving from MATLAB which represents vectors as 2d arrays.
0
1
1,655
0
26,791,595
0
0
0
0
1
false
0
2013-01-05T20:49:00.000
-2
4
0
Test for statistically significant difference between two arrays
14,176,280
-0.099668
python,arrays,numpy,statistics,scipy
Go to MS Excel. If you don't have it, your work does; there are alternatives. Enter the array of numbers in an Excel worksheet. Run the formula in the entry field, =TTEST (array1,array2,tail). One tail is one, two tail is two... easy peasy. It's a simple Student's t and I believe you may still need a t-table to interpret the statistic (internet). Yet it's quick for on-the-fly comparison of samples.
I have two 2-D arrays with the same shape (105,234) named A & B essentially comprised of mean values from other arrays. I am familiar with Python's scipy package, but I can't seem to find a way to test whether or not the two arrays are statistically significantly different at each individual array index. I'm thinking this is just a large 2D paired T-test, but am having difficulty. Any ideas or other packages to use?
0
1
7,302
0
28,408,552
0
0
0
0
1
false
3
2013-01-07T07:14:00.000
1
4
0
Microarray hierarchical clustering and PCA with python
14,191,487
0.049958
python,bioinformatics,pca,biopython,hierarchical-clustering
I recommend using R Bioconductor and free software like Expander and MeV. A good, flexible choice is the Cluster software with TreeView. You can also run R and STATA or JMP from your Python code and completely automate your data management.
I'm trying to analyze microarray data using hierarchical clustering of the microarray columns (results from the individual microarray replicates) and PCA. I'm new to python. I have python 2.7.3, biopython, numpy, matplotlib, and networkx. Are there functions in python or biopython (similar to MATLAB's clustergram and mapcaplot) that I can use to do this?
0
1
1,029
0
14,223,556
0
0
0
0
1
true
1
2013-01-07T17:50:00.000
1
1
0
heterogeneous data logging and analysis
14,201,284
1.2
python,logging,numpy,matplotlib
Another option for storage could be using hdf5 or pytables. Depending on how you structure the data, with pytables you can query the data at key "points". As noted in the comments, I don't think an off-the-shelf solution exists.
I'm using python to prototype the algorithms of a computer vision system I'm creating. I would like to be able to easily log heterogeneous data, for example: images, numpy arrays, matplotlib plots, etc, from within the algorithms, and do that using two keys, one for the current frame number and another to describe the logged object. Then I would like to be able to browse all the data from a web browser. Finally, I would like to be able to easily process the logs to generate summaries, for example retrieve the key "points" for all the frame numbers and calculate some statistics on them. My intention is to use this logging subsystem to facilitate debugging the behaviour of the algorithms and produce summaries for benchmarking. I'm set to create this subsystem myself but I thought to ask first if someone has already done something similar. Does anybody know of any python package that I can use to do what I ask? otherwise, does anybody have any advice on which tools to use to create this myself?
0
1
281
1
14,233,016
0
0
0
0
1
false
0
2013-01-09T09:53:00.000
2
1
0
2D image projections to 3D Volume
14,232,451
0.379949
python,image-processing,3d,2d
There are several things you can mean, I think none of which currently exists in free software (but I may be wrong about that), and they differ in how hard they are to implement: First of all, "a 3D volume" is not a clear definition of what you want. There is not one way to store this information. A usual way (for computer games and animations) is to store it as a mesh with textures. Getting the textures is easy: you have the photographs. Creating the mesh can be really hard, depending on what exactly you want. You say your object looks like a cylinder. If you want to just stitch your images together and paste them as a texture over a cylindrical mesh, that should be possible. If you know the angles at which the images are taken, the stitching will be even easier. However, the really cool thing that most people would want is to create any mesh, not just a cylinder, based on the stitching "errors" (which originate from the parallax effect, and therefore contain information about the depth of the pictures). I know Autodesk (the makers of AutoCAD) have a web-based tool for this (named 123-something), but they don't let you put it into your own program; you have to use their interface. So it's fine for getting a result, but not as a basis for a program of your own. Once you have the mesh, you'll need a viewer (not view first, save later; it's the other way around). You should be able to use any 3D drawing program, for example Blender can view (and edit) many file types.
I am looking for a library, example or similar that allows me to load a set of 2D projections of an object and then convert it into a 3D volume. For example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it. The object I need to convert is very similar to a cylinder (so the program doesn't have to 'understand' what type of object it is).
0
1
1,288
0
14,236,501
0
0
0
0
1
false
1
2013-01-09T13:31:00.000
2
2
0
Advantage of metropolis hastings or MonteCarlo methods over a simple grid search?
14,236,371
0.197375
python,montecarlo
When the search space becomes larger, it can become infeasible to do an exhaustive search. So we turn to Monte Carlo methods out of necessity.
I have a relatively simple function with three unknown input parameters for which I only know the upper and lower bounds. I also know what the output Y should be for all of my data. So far I have done a simple grid search in python, looping through all of the possible parameter combinations and returning those results where the error between Y predicted and Y observed is within a set limit. I then look at the results to see which set of parameters performs best for each group of samples, look at the trade-off between parameters, see how outliers effect the data etc.. So really my questions is - whilst the grid search method I'm using is a bit cumbersome, what advantages would there be in using Monte Carlo methods such as metropolis hastings instead? I am currently researching into MCMC methods, but don’t have any practical experience in using them and, in this instance, can’t quite see what might be gained. I’d greatly appreciate any comments or suggestions Many Thanks
0
1
983
0
14,242,912
0
1
0
0
1
false
1
2013-01-09T17:18:00.000
0
1
0
use / load new python module without installation
14,242,764
0
python,numpy,scipy,python-module
Use the --user option to easy_install or setup.py to indicate where the installation is to take place. It should point to a directory where you have write access. Once the module has been built and installed, you then need to set the environmental variable PYTHONPATH to point to that location. When you next run the python command, you should be able to import the module.
I am totally new to Python, and I have to use some modules in my code, like numpy and scipy, but I have no permission on my hosting to install new modules using easy_install or pip (and of course I don't know how to install new modules in a directory where I do have permission [I have SSH access]). I have downloaded numpy and used from numpy import * but it doesn't work. I also tried the same thing with scipy: from scipy import *, but it also doesn't work. How can I load / use new modules in Python without installing them [numpy, scipy, ...]?
0
1
1,476
0
34,036,255
0
0
0
0
3
false
76
2013-01-10T09:08:00.000
15
6
0
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
14,254,203
1
python,machine-learning,data-mining,classification,scikit-learn
The simple answer: multiply the results!! It's the same. Naive Bayes is based on applying Bayes’ theorem with the “naive” assumption of independence between every pair of features - meaning you calculate the Bayes probability dependent on a specific feature without holding the others - which means that the algorithm multiplies each probability from one feature with the probability from the second feature (and we totally ignore the denominator - since it is just a normalizer). So the right answer is: calculate the probability from the categorical variables, calculate the probability from the continuous variables, and multiply 1. and 2.
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
0
1
29,596
0
69,929,209
0
0
0
0
3
false
76
2013-01-10T09:08:00.000
0
6
0
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
14,254,203
0
python,machine-learning,data-mining,classification,scikit-learn
You will need the following steps: Calculate the probability from the categorical variables (using predict_proba method from BernoulliNB) Calculate the probability from the continuous variables (using predict_proba method from GaussianNB) Multiply 1. and 2. AND Divide by the prior (either from BernoulliNB or from GaussianNB since they are the same) AND THEN Divide 4. by the sum (over the classes) of 4. This is the normalisation step. It should be easy enough to see how you can add your own prior instead of using those learned from the data.
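A hedged sketch of those steps on synthetic data; the feature split, the labels, and the sizes are all invented for illustration:
import numpy as np
from sklearn.naive_bayes import BernoulliNB, GaussianNB

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=300)                      # fake binary labels
X_cat = rng.integers(0, 2, size=(300, 4))             # fake categorical (binary) columns
X_cont = rng.normal(loc=y[:, None], size=(300, 2))    # fake continuous columns

bnb = BernoulliNB().fit(X_cat, y)
gnb = GaussianNB().fit(X_cont, y)

prior = np.bincount(y) / len(y)                                       # the shared class prior
post = bnb.predict_proba(X_cat) * gnb.predict_proba(X_cont) / prior   # steps 1-4
post /= post.sum(axis=1, keepdims=True)                               # step 5: normalisation
pred = post.argmax(axis=1)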
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
0
1
29,596
0
14,255,284
0
0
0
0
3
true
76
2013-01-10T09:08:00.000
74
6
0
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
14,254,203
1.2
python,machine-learning,data-mining,classification,scikit-learn
You have at least two options: Transform all your data into a categorical representation by computing percentiles for each continuous variables and then binning the continuous variables using the percentiles as bin boundaries. For instance for the height of a person create the following bins: "very small", "small", "regular", "big", "very big" ensuring that each bin contains approximately 20% of the population of your training set. We don't have any utility to perform this automatically in scikit-learn but it should not be too complicated to do it yourself. Then fit a unique multinomial NB on those categorical representation of your data. Independently fit a gaussian NB model on the continuous part of the data and a multinomial NB model on the categorical part. Then transform all the dataset by taking the class assignment probabilities (with predict_proba method) as new features: np.hstack((multinomial_probas, gaussian_probas)) and then refit a new model (e.g. a new gaussian NB) on the new features.
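A compact sketch of the second option on synthetic data; the encoded categorical / continuous split and the label generation are invented for illustration:
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                      # stand-in class labels
X_cat = rng.integers(0, 3, size=(200, 3))             # stand-in count-encoded categorical columns
X_cont = rng.normal(loc=y[:, None], size=(200, 2))    # stand-in continuous columns

mnb = MultinomialNB().fit(X_cat, y)
gnb = GaussianNB().fit(X_cont, y)

# class-assignment probabilities become the new feature space, then refit a new Gaussian NB
X_new = np.hstack((mnb.predict_proba(X_cat), gnb.predict_proba(X_cont)))
final = GaussianNB().fit(X_new, y)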
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
0
1
29,596
0
14,260,955
0
0
0
0
1
false
4
2013-01-10T15:06:00.000
18
8
0
How to randomly generate decreasing numbers in Python?
14,260,923
1
python,random,python-2.7,numbers
I would generate a list of n random numbers then sort them highest to lowest.
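For instance, a one-liner along those lines (the range and count follow the example in the question):
import random

nums = sorted(random.sample(range(1, 101), 5), reverse=True) + [0]   # e.g. [96, 57, 43, 23, 9, 0]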
I'm wondering if there's a way to generate decreasing numbers within a certain range? I want to program to keep outputting until it reaches 0, and the highest number in the range must be positive. For example, if the range is (0, 100), this could be a possible output: 96 57 43 23 9 0 Sorry for the confusion from my original post
0
1
5,706
0
59,647,574
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
-2
16
0
"Large data" workflows using pandas
14,262,433
-0.024995
python,mongodb,pandas,hdf5,large-data
At the moment I am working "like" you, just on a lower scale, which is why I don't have a PoC for my suggestion. However, I seem to find success in using pickle as a caching system and outsourcing execution of various functions into files - executing these files from my command / main file; for example I use a prepare_use.py to convert object types and split a data set into test, validation and prediction data sets. How does your caching with pickle work? I use strings in order to access pickle files that are dynamically created, depending on which parameters and data sets were passed (with that I try to capture and determine if the program was already run, using .shape for the data set and a dict for the passed parameters). Respecting these measures, I get a string to try to find and read a .pickle file and can, if found, skip processing time in order to jump to the execution I am working on right now. Using databases I encountered similar problems, which is why I found joy in using this solution; however - there are many constraints for sure - for example storing huge pickle sets due to redundancy. Updating a table from before to after a transformation can be done with proper indexing - validating information opens up a whole other book (I tried consolidating crawled rent data and stopped using a database after 2 hours basically - as I would have liked to jump back after every transformation process). I hope my 2 cents help you in some way. Greetings.
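A bare-bones version of that pickle-cache idea; the cache file name and the cached computation are placeholders, not part of the workflow described above:
import os
import pickle

def cached(path, compute):
    # reuse the pickle when it exists, otherwise compute and store it
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute()
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

step1 = cached("step1.pickle", lambda: sum(range(10 ** 6)))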
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
0
1
341,120
0
29,910,919
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
21
16
0
"Large data" workflows using pandas
14,262,433
1
python,mongodb,pandas,hdf5,large-data
One more variation Many of the operations done in pandas can also be done as a db query (sql, mongo) Using a RDBMS or mongodb allows you to perform some of the aggregations in the DB Query (which is optimized for large data, and uses cache and indexes efficiently) Later, you can perform post processing using pandas. The advantage of this method is that you gain the DB optimizations for working with large data, while still defining the logic in a high level declarative syntax - and not having to deal with the details of deciding what to do in memory and what to do out of core. And although the query language and pandas are different, it's usually not complicated to translate part of the logic from one to another.
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
0
1
341,120
0
20,690,383
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
167
16
0
"Large data" workflows using pandas
14,262,433
1
python,mongodb,pandas,hdf5,large-data
I think the answers above are missing a simple approach that I've found very useful. When I have a file that is too large to load into memory, I break it up into multiple smaller files (either by rows or by columns). Example: in the case of 30 days' worth of trading data of ~30GB, I break it into one file per day of ~1GB each. I then process each file separately and aggregate the results at the end. One of the biggest advantages is that it allows parallel processing of the files (either multiple threads or processes). The other advantage is that file manipulation (like adding or removing dates in the example) can be accomplished with regular shell commands, which is not possible with more advanced/complicated file formats. This approach doesn't cover all scenarios, but it is very useful in a lot of them.
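As a rough sketch of this pattern (the file layout and the `date`/`price` column names are placeholders, not taken from the original post), the splitting and per-file processing might look like this with pandas and multiprocessing:

```python
import os
from multiprocessing import Pool

import pandas as pd

def split_by_day(path):
    """Split one large CSV into per-day files (assumes a 'date' column)."""
    for chunk in pd.read_csv(path, parse_dates=["date"], chunksize=1_000_000):
        for day, part in chunk.groupby(chunk["date"].dt.date):
            out = f"trades_{day}.csv"
            # Append, writing the header only when the file is first created.
            part.to_csv(out, mode="a", header=not os.path.exists(out), index=False)

def daily_stats(path):
    """Process one small file independently; cheap enough to run in parallel."""
    df = pd.read_csv(path)
    return path, df["price"].mean()

if __name__ == "__main__":
    split_by_day("all_trades.csv")
    files = sorted(f for f in os.listdir(".") if f.startswith("trades_"))
    with Pool() as pool:                      # one worker per CPU core
        results = pool.map(daily_stats, files)
    print(dict(results))
```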
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
0
1
341,120
0
19,739,768
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
72
16
0
"Large data" workflows using pandas
14,262,433
1
python,mongodb,pandas,hdf5,large-data
If your datasets are between 1 and 20GB, you should get a workstation with 48GB of RAM. Then Pandas can hold the entire dataset in RAM. I know it's not the answer you're looking for here, but doing scientific computing on a notebook with 4GB of RAM isn't reasonable.
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
0
1
341,120
0
14,271,696
0
1
0
0
1
false
1
2013-01-11T01:18:00.000
1
2
0
Pandas storing 1000's of dataframe objects
14,270,163
0.099668
python,object,pandas,dataframe,storage
Redis with redis-py is one solution. Redis is really fast and there are nice Python bindings. PyTables, as mentioned above, is a good choice as well: PyTables is built on HDF5 and is also very fast.
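As a rough illustration of the Redis route (assuming a Redis server on localhost; the key and column names are made up for the example), checking a frame in and out could look like this:

```python
import pickle

import pandas as pd
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis server

df = pd.DataFrame({"measurement": [1.2, 1.4, 1.1], "batch": ["A", "A", "B"]})

# Store the frame under a key; pickling keeps dtypes and the index intact.
r.set("spc:frame:line1", pickle.dumps(df))

# Any process with access to the server can check it back out.
restored = pickle.loads(r.get("spc:frame:line1"))
print(restored.equals(df))  # True
```

For the library-style "one owner at a time" behaviour, a per-frame lock built on redis-py's lock helper (or a plain SETNX key) could serve as a simple check-out mechanism, though that part is left out of the sketch.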
I am working on a large project that does SPC analysis and have thousands of different, unrelated DataFrame objects. Does anyone know of a module for storing objects in memory? I could use a Python dictionary, but I would like more elaborate and functional mechanisms, like locking, thread safety, tracking who has an object, a waiting list, etc. I was thinking of creating something that behaves like my local public library system, the way it checks books in and out to one owner at a time.
0
1
1,701
0
14,309,807
0
0
0
0
1
true
1
2013-01-13T14:27:00.000
0
1
0
Feature importance based on extremely randomize trees and feature redundancy
14,304,420
1.2
python-2.7,scikit-learn
Maybe you could extract the top n most important features and then compute pairwise Spearman's or Pearson's correlations just for those, in order to detect redundancy only among the most informative features; computing all pairwise feature correlations might not be feasible, since the cost is quadratic in the number of features. There might be more clever ways to do the same by exploiting the statistics of the relative occurrences of the features as nodes in the decision trees, though.
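A small sketch of that idea on synthetic data (the duplicated column is added only to make the redundancy visible; nothing here comes from the original setup):

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X = np.hstack([X, X[:, [0]]])          # duplicate feature 0 to create redundancy

clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)

# Keep only the n most important features and correlate just those.
n = 5
top = np.argsort(clf.feature_importances_)[::-1][:n]
rho, _ = spearmanr(X[:, top])          # n x n rank-correlation matrix

corr = pd.DataFrame(rho, index=top, columns=top)
# Off-diagonal values close to +/-1 flag (near-)redundant feature pairs.
print(corr.round(2))
```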
I am using the scikit-learn Extremely Randomized Trees algorithm to get information about the relative feature importances, and I have a question about how "redundant features" are ranked. If I have two features that are identical (redundant) and important to the classification, the extremely randomized trees cannot detect the redundancy of the features. That is, both features get a high ranking. Is there any other way to detect that two features are actually redundant?
0
1
203
0
14,309,992
0
0
0
0
1
true
0
2013-01-13T22:23:00.000
2
1
0
How to save memory for a large python array?
14,308,889
1.2
python,arrays
Given your description, a sparse representation may not be very useful to you. There are many other options, though. Make sure your values are represented using the smallest data type possible; the example you show above is best represented as single-byte integers, and reading into a NumPy array or Python array will give you good control over the data type. You can trade memory for performance by only reading part of the data at a time. If you rewrite the entire dataset as binary instead of CSV, then you can use mmap to access the file as if it were already in memory (this would also make it faster to read and write). If you really need the entire dataset in memory (and it really doesn't fit), then some sort of compression may be necessary. Sparse matrices are an option (as larsmans mentioned in the comments, both SciPy and pandas have sparse matrix implementations), but these will only help if the fraction of zero-valued entries is large. Better compression options will depend on the nature of your data; consider breaking the array into chunks and compressing those with a fast compression algorithm like RLE, SZIP, etc.
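A tiny sketch of the first two ideas, dtype control plus an optional sparse conversion (the file name is a placeholder; note that with int8 values and only ~50% zeros the CSR form may not actually save anything, since every stored entry also needs an index):

```python
import numpy as np
from scipy import sparse

# Read the CSV straight into single-byte integers instead of float64.
arr = np.loadtxt("grid.csv", delimiter=",", dtype=np.int8)   # file name assumed
print(f"dense int8: {arr.nbytes / 1e6:.1f} MB")

# If most entries are zero, a sparse matrix stores only the non-zeros.
mat = sparse.csr_matrix(arr)
sparse_bytes = mat.data.nbytes + mat.indices.nbytes + mat.indptr.nbytes
print(f"sparse csr: {sparse_bytes / 1e6:.1f} MB")
```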
I read a large Python array in from a CSV file (20332 * 17009) on a Windows 7 64-bit machine with 12 GB of RAM. The array has values in about half of its positions, like the example below, and I only need the entries that have values for the analysis, rather than the whole array. [0 0 0 0 0 0 0 0 0 3 8 0 0 4 2 7 0 0 0 0 5 2 0 0 0 0 1 0 0 0] I am wondering: is it possible to ignore the 0 values for the analysis and save memory? Thanks in advance!
0
1
733
0
44,592,825
0
0
0
0
1
false
18
2013-01-16T16:57:00.000
1
3
0
Python Pandas - Deleting multiple series from a data frame in one command
14,363,640
0.066568
python,pandas
You can also specify a list of columns to keep with the usecols option in pandas.read_table. This speeds up the loading process as well.
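For example (the column and file names are made up):

```python
import pandas as pd

keep = ["account_id", "balance", "credit_limit"]

# Option 1: never load the unwanted columns in the first place.
df = pd.read_table("export.txt", usecols=keep)

# Option 2: if the frame is already in memory, select what to keep
# instead of deleting the rest one by one.
df = df[keep]
```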
In short ... I have a Python Pandas data frame that is read in from an Excel file using 'read_table'. I would like to keep a handful of the series from the data and purge the rest. I know that I can just delete what I don't want one-by-one using 'del data['SeriesName']', but what I'd rather do is specify what to keep instead of specifying what to delete. If the simplest answer is to copy the existing data frame into a new data frame that only contains the series I want, and then delete the existing frame in its entirety, I would be satisfied with that solution ... but if that is indeed the best way, can someone walk me through it? TIA ... I'm a newb to Pandas. :)
0
1
30,755
0
14,369,860
0
0
0
0
2
false
3
2013-01-16T23:21:00.000
0
3
0
How do I make large datasets load quickly in Python?
14,369,696
0
python,performance,data-mining,pdb,large-data
Write a script that does the selects and the object-relational conversions, then pickles the data to a local file. Your development script will start by unpickling the data and proceeding from there. If the data is significantly smaller than physical RAM, you can memory-map a file shared between two processes and write the pickled data to memory.
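A minimal caching sketch of the pickle approach; `run_slow_queries` is a stand-in for whatever loading code you already have:

```python
import os
import pickle

CACHE = "dataset.pkl"

def load_data(force_reload=False):
    """Load from the pickle cache if present; otherwise run the slow load once."""
    if not force_reload and os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    data = run_slow_queries()          # placeholder for your SQLite/CSV loading
    with open(CACHE, "wb") as f:
        pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
    return data
```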
I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds for data to load. Loading data streams (e.g. from a SQLite database) sometimes works, but not in all situations -- if I need to go back into a dataset often, I'd rather pay the upfront time cost of loading the data. My best solution so far is subsampling the data until I'm happy with my final script. Does anyone have a better solution/design practice? My "ideal" solution would involve using the Python debugger (pdb) cleverly so that the data remains loaded in memory, I can edit my script, and then resume from a given point.
0
1
806
0
63,300,344
0
0
0
0
2
false
3
2013-01-16T23:21:00.000
0
3
0
How do I make large datasets load quickly in Python?
14,369,696
0
python,performance,data-mining,pdb,large-data
A Jupyter notebook allows you to load a large dataset into a memory-resident data structure, such as a pandas DataFrame, in one cell. Then you can operate on that data structure in subsequent cells without having to reload the data.
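The pattern is simply this (file and column names are invented for the example):

```python
# cell 1 -- run once; the DataFrame stays alive in the notebook kernel
import pandas as pd
df = pd.read_csv("big_file.csv")

# cell 2 -- edit and re-run freely without paying the load cost again
summary = df.groupby("category")["value"].describe()
summary
```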
I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds for data to load. Loading data streams (e.g. from a SQLite database) sometimes works, but not in all situations -- if I need to go back into a dataset often, I'd rather pay the upfront time cost of loading the data. My best solution so far is subsampling the data until I'm happy with my final script. Does anyone have a better solution/design practice? My "ideal" solution would involve using the Python debugger (pdb) cleverly so that the data remains loaded in memory, I can edit my script, and then resume from a given point.
0
1
806
0
14,386,145
0
0
0
0
1
true
6
2013-01-17T18:42:00.000
1
2
0
Scipy Binary Closing - Edge Pixels lose value
14,385,921
1.2
python,image,image-processing,numpy,scipy
Operations that involve information from neighboring pixels, such as closing, will always have trouble at the edges. In your case, this is very easy to get around: just process subimages that are slightly larger than your tiling, and keep only the good parts when stitching them back together.
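A sketch of that overlap-and-crop idea with scipy.ndimage (the tile and margin sizes are arbitrary; the margin just has to be at least as large as the closing structure):

```python
import numpy as np
from scipy import ndimage

def closed_in_tiles(image, tile=1024, margin=16):
    """Binary closing applied per tile, with an overlap margin so the kept
    interior matches the result of closing the whole image at once."""
    out = np.zeros_like(image)
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            # Expand the tile by the margin, clipped to the image bounds.
            r0, c0 = max(r - margin, 0), max(c - margin, 0)
            r1 = min(r + tile + margin, image.shape[0])
            c1 = min(c + tile + margin, image.shape[1])
            closed = ndimage.binary_closing(image[r0:r1, c0:c1])
            # Keep only the interior part that corresponds to this tile.
            out[r:r + tile, c:c + tile] = closed[r - r0:r - r0 + tile,
                                                 c - c0:c - c0 + tile]
    return out
```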
I am attempting to fill holes in a binary image. The image is rather large, so I have broken it into chunks for processing. When I use the scipy.ndimage.morphology.binary_fill_holes function, it fills larger holes that belong in the image. So I tried using scipy.ndimage.morphology.binary_closing, which gave the desired result of filling small holes in the image. However, when I put the chunks back together to create the entire image, I end up with seam lines because the binary_closing function removes any values from the border pixels of each chunk. Is there any way to avoid this effect?
0
1
2,670
0
14,389,347
0
0
1
0
1
true
1
2013-01-17T22:26:00.000
5
2
0
Compress large python objects
14,389,279
1.2
python,memory,numpy,compression
Incremental (de)compression should be done with zlib.{de,}compressobj() so that memory consumption can be minimized. Additionally, higher compression ratios can be attained for most data by using bz2 instead.
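A sketch of the streaming idea: write the object to an uncompressed temporary file first (for a sparse matrix, something like scipy.sparse.save_npz would do), then compress that file block by block so the whole serialized blob is never held in memory at once. The names and block size below are arbitrary:

```python
import zlib

def stream_compress(src_path, dst_path, level=9, block=16 * 1024 * 1024):
    """Compress src_path into dst_path one block at a time.

    Only ~16MB of uncompressed data is in memory at any moment;
    bz2.BZ2Compressor can be dropped in the same way for a better ratio.
    """
    comp = zlib.compressobj(level)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block)
            if not chunk:
                break
            dst.write(comp.compress(chunk))
        dst.write(comp.flush())

# stream_compress("matrix.npz", "matrix.npz.zlib")
```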
I am trying to compress a huge Python object (~15GB) and save it to disk. Due to requirement constraints I need to compress this file as much as possible. I am presently using zlib.compress(9). My main concern is that the memory used during compression exceeds what I have available on the system (32GB), and going forward the size of the object is expected to increase. Is there a more efficient/better way to achieve this? Thanks. Update: Also note that the object I want to save is a sparse numpy matrix, and that I am serializing the data before compressing, which also increases the memory consumption. Since I do not need the Python object after it is serialized, would gc.collect() help?
0
1
1,086