Dataset schema (column name, dtype, and value range or string-length range):

Column                              Dtype    Min         Max
GUI and Desktop Applications        int64    0           1
A_Id                                int64    5.3k        72.5M
Networking and APIs                 int64    0           1
Python Basics and Environment       int64    0           1
Other                               int64    0           1
Database and SQL                    int64    0           1
Available Count                     int64    1           13
is_accepted                         bool     2 classes
Q_Score                             int64    0           1.72k
CreationDate                        string   23 chars    23 chars
Users Score                         int64    -11         327
AnswerCount                         int64    1           31
System Administration and DevOps    int64    0           1
Title                               string   15 chars    149 chars
Q_Id                                int64    5.14k       60M
Score                               float64  -1          1.2
Tags                                string   6 chars     90 chars
Answer                              string   18 chars    5.54k chars
Question                            string   49 chars    9.42k chars
Web Development                     int64    0           1
Data Science and Machine Learning   int64    1           1
ViewCount                           int64    7           3.27M

Each record below is shown as a metadata header followed by its Title, Tags, Answer and Question fields; the answer text precedes the question text, matching the column order above.
---
Title: Uninstall opencv 2.4.9 and install 3.0.0
Q_Id 24,598,160 | A_Id 24,598,296 | CreationDate 2014-07-06T16:54:00.000 | Topics: Data Science and Machine Learning
Q_Score 5 | Users Score 3 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 18,476
Tags: python,opencv,ubuntu,uninstallation
Answer:
The procedure depends on whether you built OpenCV from source with CMake, or snatched it from a repository. From a repository: sudo apt-get purge libopencv* will cleanly remove all traces. Substitute libopencv* as appropriate in case you were using an unofficial PPA. From source: if you still have the files generated by CMake (the directory from where you executed sudo make install), cd there and run sudo make uninstall. Otherwise, you can either rebuild them with the exact same configuration and use the above command, or recall your CMAKE_INSTALL_PREFIX (/usr/local by default) and remove everything with opencv in its name within that directory tree.
I'm using OpenCV on Ubuntu 14.04, but some of the functions I require, particularly in the cv2 library (cv2.drawMatches, cv2.drawMatchesKnn), do not work in 2.4.9. How do I uninstall 2.4.9 and install 3.0.0 from their git? I know the procedure for installing 3.0.0, but how do I make sure that 2.4.9 gets completely removed from disk?
---
Title: Genetic Algorithm in Optimization of Events
Q_Id 24,615,687 | A_Id 24,616,160 | CreationDate 2014-07-07T16:46:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 3 | Available Count 2 | ViewCount 335
Tags: python,artificial-intelligence,genetic-algorithm
Answer:
To start, let's make sure I understand your problem. You have a set of sample data, each element containing a time series of a binary variable (we'll call it V). When V is set to true, a function (A, B, or C) is applied which returns V to its false state. You would like to apply a genetic algorithm to determine which function (or solution) will return V to false in the least amount of time. If this is the case, I would stay away from GAs. GAs are typically used for some kind of function optimization / tuning. In general, the underlying assumption is that what you permute is under your control during the algorithm's application (i.e., you are modifying parameters used by the algorithm that are independent of the input data). In your case, my impression is that you just want to find out which of your (I assume) static functions performs best in a wide variety of cases. If you don't feel your current dataset provides a decent approximation of your true input distribution, you can always sample from it and permute the values to see what happens; however, this would not be a GA. Having said all of this, I could be wrong. If anyone has used GAs in verification like this, please let me know; I'd certainly be interested in learning about it.
I'm a data analysis student and I'm starting to explore genetic algorithms. I'm trying to solve a problem with a GA, but I'm not sure how to formulate it. Basically, I have a variable whose state is 0 or 1 (0 means it is in the normal range of values, 1 means it is in a critical state). When the state is 1, I can apply three solutions (call them A, B and C), and for each solution I know when it was applied and when the state of the variable returned to 0. So for each critical event I have the solution applied, the time interval (in minutes) from the critical event to the application of the solution, and the time interval (in minutes) from the application of the solution until the event returned to 0. I want to use a genetic algorithm to find which solution is best (and fastest) for a critical event, and, if possible, to rank the solutions, so that if in the future one solution can't be applied I can always apply the second best, for example. I'm thinking of developing the solution in Python, since I'm new to GAs. Edit: specifying the problem (responding to AMack). Yes, it's more or less that, but with some nuances. For example, function A can be more suitable for returning the variable to its normal state, but because other problems exist with the variable, more than one solution is applied. So in the data I receive for an event of V, sometimes 3 or 4 functions are applied, but only 1 or 2 of them are specialized for the problem I want to analyze. My objective is to provide decision support on which solution to use when a given problem appears. But the optimal solution can be more than one, because for some events function A acts very fast, while in another case of the same event function A doesn't produce a fast response and function C is better. So in the end I want a solution that indicates the best solutions for the problem, not only the fastest, because the fastest in the majority of cases is sometimes not the fastest for the same issue with a different background.
---
Title: Genetic Algorithm in Optimization of Events
Q_Id 24,615,687 | A_Id 24,616,007 | CreationDate 2014-07-07T16:46:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 1 | Score 0.066568 | is_accepted false | AnswerCount 3 | Available Count 2 | ViewCount 335
Tags: python,artificial-intelligence,genetic-algorithm
Answer:
I'm unsure what your question is, but here are the elements you need for any GA: a population of initial "genomes"; a ranking function; and some form of mutation, crossover within the genome, and reproduction. If a critical event is always the same, your GA should work very well. That being said, if you have a different critical event but the same genome, you will run into trouble. GAs evolve functions towards the best possible solution for a set of conditions. If you constantly run the GA so that it may adapt to each unique situation, you will find a greater degree of adaptability, but have a speed issue. You have a distinct advantage using Python because string manipulation (what you'll probably use for the genome) is easy; however, Python is slow. If the genome is short, the initial population is small, and there are very few generations, this shouldn't be a problem. You lose possibly better solutions that way, but it will be significantly faster. Have fun.
(Same question as the previous record; see Q_Id 24,615,687 above.)
---
Title: Embedded charts in PyCharm IPython console
Q_Id 24,638,043 | A_Id 28,782,077 | CreationDate 2014-07-08T17:26:00.000 | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score 8 | Users Score 2 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 2,490
Tags: python,matplotlib,pycharm
Answer:
It doesn't look like you can do it: PyCharm does not use IPython's qtconsole, but either a plain text console (when you open the "Python console" tab in PyCharm) or the IPython notebook (when you open a *.ipynb file). Moreover, PyCharm is written in Java, while for an interactive plot Matplotlib needs a direct connection to, and knowledge of, the underlying graphics toolkit in use. Matplotlib doesn't support any Java-based backend, so I guess PyCharm would need to "bridge" the native underlying toolkit.
Is there a way to allow embedded Matplotlib charts in the IPython console that is activated within PyCharm? I'm looking for similar behavior to what can be done with the QT console version of IPython, i.e. ipython qtconsole --matplotlib inline
---
Title: What is the best stemming method in Python?
Q_Id 24,647,400 | A_Id 54,384,472 | CreationDate 2014-07-09T07:12:00.000 | Topics: Data Science and Machine Learning
Q_Score 43 | Users Score 5 | Score 0.141893 | is_accepted false | AnswerCount 7 | Available Count 1 | ViewCount 74,285
Tags: python,nltk,stemming
Answer:
Stemming is all about removing suffixes (usually only suffixes; as far as I have tried, none of the nltk stemmers can remove a prefix, let alone infixes). So we can fairly call stemming a dumb, not-so-intelligent program: it doesn't check whether a word has a meaning before or after stemming. For example, if you try to stem "xqaing", although it is not a word, the stemmer will remove "-ing" and give you "xqa". So, in order to use a smarter system, one can use a lemmatizer. Lemmatizers use well-formed lemmas (words) in the form of WordNet and dictionaries, so they always take and return a proper word. However, they are slow, because they go through all words in order to find the relevant one.
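A minimal sketch of the two behaviors the answer contrasts (assumes nltk is installed and the WordNet data has been downloaded via nltk.download):

    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()
    for word in ["poodle", "article", "leaves", "easily"]:
        # the stemmer strips suffixes blindly; the lemmatizer looks up WordNet
        print(word, stemmer.stem(word), lemmatizer.lemmatize(word, pos="n"))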
I tried all the nltk methods for stemming, but they give me weird results with some words. Examples: they often cut the ends of words when they shouldn't (poodle => poodl, article => articl), or don't stem very well (easily and easy are not stemmed to the same word; leaves, grows and fairly are not stemmed). Do you know of other stemming libraries in Python, or a good dictionary? Thank you
---
Title: How to filter out all grayscale pixels in an image?
Q_Id 24,663,825 | A_Id 24,663,901 | CreationDate 2014-07-09T21:07:00.000 | Topics: Data Science and Machine Learning
Q_Score 1 | Users Score 1 | Score 0.197375 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 1,335
Tags: python,opencv,image-processing
Answer:
Filter out grayscale, or filter in the allowed colors. I don't know whether the range of colors or the range of grayscale values is larger, but maybe whitelisting instead of blacklisting is helpful here.
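A vectorized sketch of the "equal channels" test the questioner looped over, assuming a BGR uint8 frame as produced by OpenCV (the tolerance value is illustrative):

    import numpy as np

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder BGR frame

    def colored_pixel_mask(frame, tol=10):
        b, g, r = (frame[..., i].astype(int) for i in range(3))
        # a pixel is "grayscale" when its three channels are (nearly) equal
        gray = (np.abs(b - g) <= tol) & (np.abs(g - r) <= tol) & (np.abs(b - r) <= tol)
        return ~gray  # True where a pixel is colored, i.e. of interest

    mask = colored_pixel_mask(frame)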
I am working on a project which involves using a thermal video camera to detect objects of a certain temperature. The output I am receiving from the camera is an image where the pixels of interest (within the specified temperature range) are colored yellow-orange depending on intensity, and all other pixels are grayscale. I have tried using cv2.inRange() to filter for the colored pixels, but results have been spotty, as the pixel colors I have been provided with in the color lookup tables do not match those actually output by the camera. I figured then that it would be easiest for me to just filter out all grayscale pixels, as then I will be left with only colored pixels (pixels of interest). I have tried looping through each pixel of each frame of the video and checking to see if each channel has the same intensity, but this takes far too long to do. Is there a better way to filter out all grayscale pixels than this?
---
Title: Locally save data from remote iPython notebook
Q_Id 24,681,509 | A_Id 30,565,768 | CreationDate 2014-07-10T16:14:00.000 | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score 3 | Users Score 0 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 671
Tags: python,ipython-notebook
Answer:
Maybe some combination of cPickle and bash magic for scp?
I'm using an ipython notebook that is running on a remote server. I want to save data from the notebook (e.g. a pandas dataframe) locally. Currently I'm saving the data as a .csv file on the remote server and then move it over to my local machine via scp. Is there a more elegant way directly from the notebook?
---
Title: Reading Large File from non local disk in Python
Q_Id 24,687,248 | A_Id 24,687,460 | CreationDate 2014-07-10T22:19:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 214
Tags: python,memory,local-storage,large-files
Answer:
Copying a file is sequentially reading it and saving it in another place. The performance of an application may vary depending on the data access patterns, the ratio of computation to I/O time, network latency and network bandwidth. If you execute your script once and read through the file sequentially, it's the same as copying the file, except you perform computations on it instead of saving it. Even if you process small chunks of data, it probably gets buffered. In fact, in the case of a single execution, if you copy first you actually read the same data twice: once for the copy, and once from the copy for the computation. If you execute your script multiple times, then you need to check what your data throughput is. For example, if you have Gigabit Ethernet, then 1 Gbit/s is 125 MB/s; if you process data slower than that, you are not limited by the bandwidth. Network latency comes into play when you send multiple requests for small chunks of data. Upon a read request you send a network request and get the data back in a finite time; this is the latency. If you make a request for one big chunk of data you "pay" the latency cost once; if you ask 1000 times for 1/1000 of the big chunk you will need to "pay" it 1000 times. However, this is probably abstracted from you by the network file system, and in the case of a sequential read it will get buffered; it would manifest itself when jumping randomly around a file and reading small chunks of it. You can check what you are limited by by measuring how many bytes you process per second and comparing it to the limits of the hardware. If it's close to HDD speed (which in your case I bet it is not), you are bound by disk I/O. If it's lower, close to the network bandwidth, you are limited by network I/O. If it's even lower, you are bound either by the processing speed of the data or by network I/O latency. However, if you were bound by I/O you should see a difference between the two approaches, so if you are seeing the same results it's computation.
Sorry if this topic has already been covered; I didn't find it. I am trying to read with Python a bunch of large csv files (>300 MB) that are not located on a local drive. I am not an expert in programming, but I know that if you copy a file to a local drive first, reading it should take less time (or am I wrong?). The thing is that I tested both methods and the computation times are similar. Am I missing something? Can someone explain / give me a good method to read those files as fast as possible? For copying to the local drive I am using shutil.copy2; for reading the file: for each line in MyFile. Thanks a lot for your help, Christophe
---
Title: python equivalent of qnorm, qf and qchi2 of R
Q_Id 24,695,174 | A_Id 42,065,440 | CreationDate 2014-07-11T10:03:00.000 | Topics: Data Science and Machine Learning
Q_Score 31 | Users Score 48 | Score 1 | is_accepted false | AnswerCount 2 | Available Count 1 | ViewCount 32,670
Tags: python,r,scipy
Answer:
The equivalent of the R pnorm() function is scipy.stats.norm.cdf() in Python; the equivalent of the R qnorm() function is scipy.stats.norm.ppf().
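The same .ppf (percent point function) pattern covers the other two functions the asker mentions; a short sketch (the quantiles and degrees-of-freedom values are illustrative):

    from scipy import stats

    stats.norm.ppf(0.975)             # qnorm(0.975)
    stats.f.ppf(0.95, dfn=3, dfd=10)  # qf(0.95, 3, 10)
    stats.chi2.ppf(0.95, df=4)        # qchi2(0.95, 4)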
I need the quantiles of some distributions in Python. In R it is possible to compute these values using the qf, qnorm and qchi2 functions. Is there any Python equivalent of these R functions? I have been looking at scipy but I did not find anything.
---
Title: Evaluating convergence of SGD classifier in scikit learn
Q_Id 24,707,836 | A_Id 24,708,214 | CreationDate 2014-07-11T22:58:00.000 | Topics: Data Science and Machine Learning
Q_Score 1 | Users Score 3 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 822
Tags: python,scikit-learn
Answer:
This a known limitation of the current implementation of scikit-learn's SGD classifier, there is currently no automated convergence check on that model. You can set verbose=1 to get some feedback when running though.
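Since there is no built-in check, a manual loop over partial_fit can serve as one; a sketch (the tolerance and epoch cap are illustrative, and newer scikit-learn spells the log loss "log_loss"):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(loss="log_loss", penalty="elasticnet")

    def fit_until_stable(clf, X, y, classes, max_epochs=1000, tol=1e-4):
        previous = None
        for epoch in range(max_epochs):
            clf.partial_fit(X, y, classes=classes)
            current = clf.coef_.copy()
            # stop once the largest coefficient change falls below tol
            if previous is not None and np.abs(current - previous).max() < tol:
                return epoch
            previous = current
        return max_epochs

    # usage: fit_until_stable(clf, X_train, y_train, classes=[0, 1])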
Is there any automated way to evaluate convergence of the SGDClassifier? I'm trying to run an elastic net logit in python and am using scikit learn's SGDClassifier with log loss and elastic net penalty. When I fit the model in python, I get all zeros for my coefficients. When I run glmnet in R, I get significant non-zero coefficients. After some twiddling I found that the scikit learn coefficients approach the R coefficients after around 1000 iterations. Is there any method that I'm missing in scikit learn to iterate until the change in coefficients is relatively small (or a max amount of iterations has been performed), or do I need to do this myself via cross-validation.
---
Title: Can sklearn random forest directly handle categorical features?
Q_Id 24,715,230 | A_Id 66,810,359 | CreationDate 2014-07-12T16:54:00.000 | Topics: Data Science and Machine Learning
Q_Score 71 | Users Score 1 | Score 0.039979 | is_accepted false | AnswerCount 5 | Available Count 2 | ViewCount 72,780
Tags: python,scikit-learn,random-forest,one-hot-encoding
Answer:
Maybe you can use 1-4 to replace these four colors, i.e., put the number rather than the color name in that column. The column with numbers can then be used in the models.
Say I have a categorical feature, color, which takes the values ['red', 'blue', 'green', 'orange'], and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them. I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.
---
Title: Can sklearn random forest directly handle categorical features?
Q_Id 24,715,230 | A_Id 35,471,754 | CreationDate 2014-07-12T16:54:00.000 | Topics: Data Science and Machine Learning
Q_Score 71 | Users Score 16 | Score 1 | is_accepted false | AnswerCount 5 | Available Count 2 | ViewCount 72,780
Tags: python,scikit-learn,random-forest,one-hot-encoding
Answer:
You have to turn the categorical variable into a series of dummy variables. Yes, I know it's annoying and seems unnecessary, but that is how sklearn works. If you are using pandas, use pd.get_dummies; it works really well.
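The pd.get_dummies call the answer recommends, applied to the color column from the question:

    import pandas as pd

    df = pd.DataFrame({"color": ["red", "blue", "green", "orange"]})
    dummies = pd.get_dummies(df["color"], prefix="color")  # four 0/1 columns
    X = pd.concat([df.drop(columns="color"), dummies], axis=1)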
(Same question as the previous record; see Q_Id 24,715,230 above.)
---
Title: PySpark Drop Rows
Q_Id 24,718,697 | A_Id 24,736,966 | CreationDate 2014-07-13T01:08:00.000 | Topics: Data Science and Machine Learning
Q_Score 28 | Users Score 1 | Score 0.033321 | is_accepted false | AnswerCount 6 | Available Count 1 | ViewCount 49,327
Tags: python,apache-spark,pyspark
Answer:
Personally I think just using a filter to get rid of this stuff is the easiest way. But per your comment I have another approach. Glom the RDD so each partition is an array (I'm assuming you have one file per partition, and each file has the offending row on top) and then just skip the first element (this is with the Scala API): data.glom().map(x => for (elem <- x.drop(1)) { /* do stuff */ }) — x is an array, so this just skips index 0. Keep in mind that one of the big features of RDDs is that they are immutable, so naturally removing a row is a tricky thing to do. UPDATE: Better solution: rdd.mapPartitions(x => for (elem <- x.drop(1)) { /* do stuff */ }). Same as the glom approach, but without the overhead of putting everything into an array, since x is an iterator in this case.
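A PySpark version of the answer's (Scala) mapPartitions approach, under the same one-header-row-per-partition assumption (the input path is illustrative):

    from pyspark import SparkContext

    sc = SparkContext.getOrCreate()
    rdd = sc.textFile("data.csv")

    def drop_first_in_partition(iterator):
        next(iterator, None)  # skip the partition's first row (the header)
        return iterator

    rows = rdd.mapPartitions(drop_first_in_partition)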
how do you drop rows from an RDD in PySpark? Particularly the first row, since that tends to contain column names in my datasets. From perusing the API, I can't seem to find an easy way to do this. Of course I could do this via Bash / HDFS, but I just want to know if this can be done from within PySpark.
---
Title: Program gets stuck at finding the Contour while using Open CV
Q_Id 24,732,112 | A_Id 24,732,678 | CreationDate 2014-07-14T08:09:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 111
Tags: python,opencv,camera,detection,hsv
Answer:
You can use a while loop and check if the blob region is not null and then find contours! it would be helpful if you posted your code. We can explain the answer in a better way then.
I recently started using Python and I've been working on an Open CV based project for over a month now. I am using Simple Thresholding to detect a coloured blob and I have thresholded the HSV values to detect the blob. All works well, but when the blob goes out of the FOV of the camera, the program gets stuck. I was wondering if there could be a while/if condition that I can add at the top of the loop in order to skip the whole loop in case the blob goes outside FOV of the camera and then enter the loop when the blob returns. Would really appreciate your help on this one! Cheers.
---
Title: Updated Bokeh to 0.5.0, now plots all previous versions of graph in one window
Q_Id 24,739,390 | A_Id 24,967,653 | CreationDate 2014-07-14T14:52:00.000 | Topics: Data Science and Machine Learning
Q_Score 9 | Users Score 10 | Score 1 | is_accepted false | AnswerCount 2 | Available Count 1 | ViewCount 2,333
Tags: python,plot,bokeh
Answer:
as of 0.5.1 there is now bokeh.plotting.reset_output that will clear all output_modes and state. This is especially useful in situations where a new interpreter is not started in between executions (e.g., Spyder and the notebook)
Before I updated, I would run my script and output the html file. There would be my one plot in the window. I would make changes to my script, run it, output the html file, look at the new plot. Then I installed the library again to update it using conda. I made some changes to my script, ran it again, and the output file included both the plot before I made some changes AND a plot including the changes. I ran the script again out of curiosity. Three plots in the one file! Ran it again. Four! Deleted the html file (instead of overwriting). Five! Changed the name of the output html file. Six! I even tried changing the name of the script. The plots just keep piling up. What's going on? Why is it plotting every version of the graph I've ever made?
---
Title: Updating a NaiveBayes Classifier (in scikit-learn) over time
Q_Id 24,744,409 | A_Id 24,757,540 | CreationDate 2014-07-14T19:33:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 2 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 346
Tags: python,scikit-learn
Answer:
Use the partial_fit method on the naive Bayes estimator.
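A sketch of the daily-update loop with partial_fit (MultinomialNB shown; the day's arrays are random placeholders, and classes must be passed on the first call):

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    X_day1, y_day1 = np.random.randint(0, 5, (100, 20)), np.random.randint(0, 2, 100)
    X_day2, y_day2 = np.random.randint(0, 5, (30, 20)), np.random.randint(0, 2, 30)

    clf = MultinomialNB()
    clf.partial_fit(X_day1, y_day1, classes=[0, 1])  # first call needs all classes
    clf.partial_fit(X_day2, y_day2)                  # later days: just the new batch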
I'm building a NaiveBayes classifier using scikit-learn, and so far things are going well if I have a set body of data to train. However, for the particular project I'm working on, there will be new data coming in every day that ideally would be part of the training set. I'm aware that you can pickle the classifier to store it for later use, but is there any way to "update" the classifier with new data? Re-training the classifier from scratch every day is obviously an option, but that would require drawing a lot of historical data each time, for a growing time period.
---
Title: Convert CudaNdarraySharedVariable to TensorVariable
Q_Id 24,744,701 | A_Id 24,765,554 | CreationDate 2014-07-14T19:49:00.000 | Topics: Data Science and Machine Learning
Q_Score 1 | Users Score 1 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 505
Tags: python,machine-learning,neural-network,gpu,theano
Answer:
For a plain CudaNdarray variable, something like this should work: x_new = theano.tensor.TensorVariable(CudaNdarrayType([False] * tensor_dim)); f = theano.function([x_new], x_new); converted_x = f(x).
I'm trying to convert a pylearn2 GPU model to a CPU compatible version for prediction on a remote server -- how can I convert CudaNdarraySharedVariable's to TensorVariable's to avoid an error calling cuda code on a GPU-less machine? The experimental theano flag unpickle_gpu_to_cpu seems to have left a few CudaNdarraySharedVariable's hanging around (specifically model.layers[n].transformer._W).
---
Title: Manipulating Large Amounts of Image Data in Python
Q_Id 24,761,787 | A_Id 24,762,074 | CreationDate 2014-07-15T15:09:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 2 | Available Count 1 | ViewCount 258
Tags: python,image
Answer:
It is possible, if you use NumPy and especially numpy.memmap to store the image data. That way the image data looks as if it were in memory but lives on disk, via the mmap mechanism. The nice thing is that numpy.memmap arrays are no more difficult to handle than ordinary arrays. There is some performance overhead, as all memmap arrays are saved to disk. The arrays could be described as "disk-backed arrays", i.e. the data is also kept in RAM as long as possible. This means that if you access some data array very often, it is most likely in memory, and there is no disk read overhead. So, keep your metadata in a dict in memory, but memmap your bigger arrays: this is probably the easiest way. However, make sure you have a 64-bit Python in use, as the 32-bit one runs out of addresses at 2 GiB. Of course, there are a lot of ways to compress the image data. If your data is compressible, then you might consider using compression to save memory.
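A minimal numpy.memmap sketch of the disk-backed-array idea (file name, shape and dtype are illustrative):

    import numpy as np

    hist = np.memmap("bird_histograms.dat", dtype="float64",
                     mode="w+", shape=(1000, 256))
    # write one histogram row; it is indexed like an ordinary ndarray
    hist[0] = np.histogram(np.random.rand(100), bins=256)[0]
    hist.flush()  # make sure changes hit the disk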
I have a large number of images of different categories, e.g. "Cat", "Dog", "Bird". The images have some hierarchical structure, like a dict. So for example the key is the animal name and the value is a list of animal images, e.g. animalPictures[animal][index]. I want to manipulate each image (e.g. compute histogram) and then save the manipulated data in an identical corresponding structure, e.g. animalPictures['bird'][0] has its histogram stored in animalHistograms['bird'][0]. The only issue is I do not have enough memory to load all the images, perform all the manipulations, and create an additional structure of the transformed images. Is it possible to load an image from disk, manipulate the image, and stream the data to a dict on disk? This way I could work on a per-image basis and not worry about loading everything into memory at once. Thanks!
---
Title: Converting a folder of Excel files into CSV files/Merge Excel Workbooks
Q_Id 24,785,824 | A_Id 24,785,891 | CreationDate 2014-07-16T16:20:00.000 | Topics: Database and SQL; Data Science and Machine Learning
Q_Score 1 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 5 | Available Count 1 | ViewCount 2,934
Tags: python,csv,xlrd,xlsxwriter
Answer:
Look at OpenOffice's Python library. I suspect OpenOffice would support MS document files, though. Python has no native support for Excel files.
I have a folder with a large number of Excel workbooks. Is there a way to convert every file in this folder into a CSV file using Python's xlrd, xlutiles, and xlsxWriter? I would like the newly converted CSV files to have the extension '_convert.csv'. OTHERWISE... Is there a way to merge all the Excel workbooks in the folder to create one large file? I've been searching for ways to do both, but nothing has worked...
---
Title: How to find and graph the intersection of 3+ circles with Matplotlib
Q_Id 24,793,636 | A_Id 24,797,554 | CreationDate 2014-07-17T01:51:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 1 | Score 0.197375 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 1,241
Tags: python,graph,matplotlib,geometry,intersection
Answer:
Maybe you should try something more analytical? It should not be very difficult: Find the circle pairs whose distance is less than the sum of their radii; they intersect. Calculate the intersection angles by simple trigonometry. Draw a polygon (path) by using a suitably small delta angle in both cases (half of the polygon comes from one circle, the other half from the other circle. Collect the paths to a PathCollection None of the steps should be very long or difficult.
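For comparison, an analytical route using shapely, a library the answer does not mention (so this is only an assumption about available tools); the circle centers and radius are illustrative:

    from functools import reduce
    from shapely.geometry import Point

    # buffer() turns each center point into a polygonal disc
    circles = [Point(0, 0).buffer(2), Point(1, 0).buffer(2), Point(0.5, 1).buffer(2)]
    common = reduce(lambda a, b: a.intersection(b), circles)
    if not common.is_empty:
        xs, ys = common.exterior.xy  # outline, ready for matplotlib's fill()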
I'm working on a problem that involves creating a graph which shows the areas of intersection of three or more circles (each circle is the same size). I have many sets of circles, each set containing at least three circles. I need to graph the area common to the interior of each and every circle in the set, if it even exists. If there is no area where all the circles within the set intersect, I have nothing to graph. So the final product is a graph with little "slivers" of intersecting circles all over. I already have a solution for this written in Python with matplotlib, but it doesn't perform very well. This wasn't an issue before, but now I need to apply it to a larger data set so I need a better solution. My current approach is basically a test-and-check brute force method: I check individual points within an area to see if they are in that common intersection (by checking distance from the point to the center of each circle). If the point meets that criteria, I graph it and move on. Otherwise, I just don't graph it and move on. So it works, but it takes forever. Just to clarify, I don't scan through every point in the entire plane for each set of circles. First, I narrow my "search" area to a rectangle tightly bounded around the first two (arbitrarily chosen) circles in the set, and then test-and-check each point in there. I was thinking it would be nice if there were a way for me to graph each circle in a set (say there 5 circles in the set), each with an alpha value of 0.1. Then, I could go back through and only keep the areas with an alpha value of 0.5, because that's the area where all 5 circles intersect, which is all I want. I can't figure out how to implement this using matplotlib, or using anything else, for that matter, without resorting to the same brute force test-and-check strategy. I'm also familiar with Java and C++, if anyone has a good idea involving those languages. Thank you!
---
Title: Produce a PMML file for the Nnet model in python
Q_Id 24,875,008 | A_Id 26,639,275 | CreationDate 2014-07-21T21:21:00.000 | Topics: Data Science and Machine Learning
Q_Score 1 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 382
Tags: python,neural-network,pmml
Answer:
Finally I found my own solution: I wrote my own PMML parser and scorer. PMML is very much the same as XML, so it's easy to build and retrieve fields accordingly. If anyone needs more information, please comment below. Thanks, Raghu.
I have a model (neural network) in Python which I want to convert into a PMML file. I have tried the following: 1) py2pmml — not able to find the source code for this. 2) In R — PMML in R works fine, but my model is in Python (I can't rerun the data in R to generate the same model there), so it does not work for my dataset. 3) Now I am trying to use Augustus to make the PMML file, but Augustus has examples of using an already-built PMML file, not of making one. I am not able to find proper examples of how to use Augustus in Python to customize the model. Any suggestion will be good. Thanks in advance. GGR
---
Title: What activation function to use or modifications to make when neural network gives same output on regression with PyBrain?
Q_Id 24,894,231 | A_Id 24,902,112 | CreationDate 2014-07-22T17:44:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 1 | Score 1.2 | is_accepted true | AnswerCount 2 | Available Count 1 | ViewCount 815
Tags: python,machine-learning,neural-network,pybrain
Answer:
Neural nets are not stable when fed input data on arbitrary scales (such as between approximately 0 and 1000 in your case). If your output units are tanh they can't even predict values outside the range -1 to 1, or 0 to 1 for logistic units! You should try recentering/scaling the data, making it have mean zero and unit variance; this is called standard scaling in the data science community. Since it is a lossless transformation, you can revert back to your original scale once you've trained the net and predicted on the data. Additionally, a linear output unit is probably the best, as it makes no assumptions about the output space, and I've found tanh units to do much better on recurrent neural networks in low-dimensional input/hidden/output nets.
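A sketch of the standardize-then-invert workflow the answer recommends, using scikit-learn's StandardScaler (any mean/std scaling would do; the targets and the "prediction" below are placeholders):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    y = np.random.uniform(0, 1340, size=(720, 1))  # placeholder targets
    scaler = StandardScaler()
    y_scaled = scaler.fit_transform(y)             # zero mean, unit variance
    # ... train the network on y_scaled and collect predictions y_pred_scaled ...
    y_pred_scaled = y_scaled                       # stand-in for network output
    y_pred = scaler.inverse_transform(y_pred_scaled)  # back to the original scale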
I have a neural network with one input, three hidden neurons and one output. I have 720 input and corresponding target values, 540 for training, 180 for testing. When I train my network using Logistic Sigmoid or Tan Sigmoid function, I get the same outputs while testing, i.e. I get same number for all 180 output values. When I use Linear activation function, I get NaN, because apparently, the value gets too high. Is there any activation function to use in such a case? Or any improvements to be done? I can update the question with details and code if required.
---
Title: Is it possible to create grouping of input cells in IPython Notebook?
Q_Id 24,895,714 | A_Id 36,859,613 | CreationDate 2014-07-22T19:07:00.000 | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score 14 | Users Score 2 | Score 0.099668 | is_accepted false | AnswerCount 4 | Available Count 1 | ViewCount 6,408
Tags: python,ipython,ipython-notebook
Answer:
Latest version of Ipython/Jupyter notebook allows selection of multiple cells using shift key which can be useful for batch operations such as copy, paste, delete, etc.
When I do data analysis on IPython Notebook, I often feel the need to move up or down several adjacent input cells, for better flow of the analysis story. I'd expected that once I'd create a heading, all cells under that heading would move together if I move the heading. But this is not the case. Any way I can do this? Edit: To clarify, I can of course move cells individually, and the keyboard shortcuts are handy; but what I'm looking for is a way to group cells so that I can move (or even delete) them all together.
---
Title: sklearn: Have an estimator that filters samples
Q_Id 24,896,178 | A_Id 24,897,058 | CreationDate 2014-07-22T19:31:00.000 | Topics: Data Science and Machine Learning
Q_Score 12 | Users Score 5 | Score 0.462117 | is_accepted false | AnswerCount 2 | Available Count 1 | ViewCount 2,829
Tags: python,scikit-learn
Answer:
The scikit-learn transformer API is made for changing the features of the data (in nature and possibly in number/dimension), but not for changing the number of samples. Any transformer that drops or adds samples is, as of the existing versions of scikit-learn, not compliant with the API (possibly a future addition if deemed important). So in view of this it looks like you will have to work your way around standard scikit-learn API.
I'm trying to implement my own Imputer. Under certain conditions, I would like to filter some of the train samples (that I deem low quality). However, since the transform method returns only X and not y, and y itself is a numpy array (which I can't filter in place to the best of my knowledge), and moreover - when I use GridSearchCV- the y my transform method receives is None, I can't seem to find a way to do it. Just to clarify: I'm perfectly clear on how to filter arrays. I can't find a way to fit sample filtering on the y vector into the current API. I really want to do that from a BaseEstimator implementation so that I could use it with GridSearchCV (it has a few parameters). Am I missing a different way to achieve sample filtration (not through BaseEstimator, but GridSearchCV compliant)? is there some way around the current API?
---
Title: How to convert standardized outputs of a neural network back to original scale?
Q_Id 24,912,521 | A_Id 24,912,790 | CreationDate 2014-07-23T13:59:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted true | AnswerCount 2 | Available Count 1 | ViewCount 1,506
Tags: python,machine-learning,neural-network
Answer:
Assuming the mean and standard deviation of the targets are mu and sigma, the normalized value of a target y should be (y-mu)/sigma. In that case if you get an output y', you can move it back to original scale by converting y' -> mu + y' * sigma.
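The same transformation written out with numpy (y is a placeholder target vector):

    import numpy as np

    y = np.random.uniform(0, 1340, size=720)  # placeholder targets
    mu, sigma = y.mean(), y.std()
    y_norm = (y - mu) / sigma         # forward: standardize
    y_restored = mu + y_norm * sigma  # inverse: recover the original scale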
In my neural network, I have inputs varying from 0 to 719, and targets varying from 0 to 1340. So, I standardize the inputs and targets by standard scaling such that the mean is 0 and variance is 1. Now, I calculate the outputs using back-propagation. All my outputs lie between -2 and 2. How do I convert these outputs to the original scale, i.e. lying in the range (0,1340)? EDIT: I have 1 input, 5 hidden neurons and 1 output. I have used logistic sigmoid activation function. I did the standard scaling by taking mean and then dividing by standard deviation. In particular, my output lies between -1.28 and 1.64.
---
Title: How to overcome version incompatibility with Abaqus and Numpy (Python's library)?
Q_Id 24,935,230 | A_Id 25,251,918 | CreationDate 2014-07-24T13:33:00.000 | Topics: Data Science and Machine Learning
Q_Score 2 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 318
Tags: python,numpy,nlopt,abaqus
Answer:
I have similar problems. As an (annoying) workaround I usually write out the important data to text files using the regular Python. Afterwards, using a bash script, I start a second Python (a different version) to further analyse the data (matplotlib etc.).
I want to run an external library of python called NLopt within Abaqus through python. The issue is that the NLopt I found is compiled against the latest release of Numpy, i.e. 1.9, whereas Abaqus 6.13-2 is compiled against Numpy 1.4. I tried to replace the Numpy folder under the site-packages under the Abaqus installation folder with the respective one of version 1.9 that I created externally through an installation of Numpy 1.9 over Python 2.6 (version that Abaqus uses). Abaqus couldn't even start so I guess that such approach is incorrect. Are there any suggestions on how to overcome such issue? Thanks guys
---
Title: One-hot encoding of large dataset with scikit-learn
Q_Id 24,966,984 | A_Id 25,012,484 | CreationDate 2014-07-26T02:41:00.000 | Topics: Data Science and Machine Learning
Q_Score 2 | Users Score 3 | Score 0.291313 | is_accepted false | AnswerCount 2 | Available Count 1 | ViewCount 3,004
Tags: python,scikit-learn
Answer:
There is no way around finding out which possible values your categorical features can take, which probably implies that you have to go through your data fully once in order to obtain a list of unique values of your categorical variables. After that it is a matter of transforming your categorical variables to integer values and setting the n_values= kwarg in OneHotEncoder to an array corresponding to the number of different values each variable can take.
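A sketch of pre-declaring the category values so every batch is encoded consistently; note that the n_values keyword in the answer is from older scikit-learn, and current versions spell this categories= on OneHotEncoder:

    from sklearn.preprocessing import OneHotEncoder

    all_colors = ["red", "blue", "green", "orange"]  # found in a first full pass
    enc = OneHotEncoder(categories=[all_colors], handle_unknown="error")
    enc.fit([["red"]])                           # cheap once categories are fixed
    batch = enc.transform([["blue"], ["orange"]])  # sparse matrix per mini-batch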
I have a large dataset which I plan to do logistic regression on. It has lots of categorical variables, each having thousands of features which I am planning to use one hot encoding on. I will need to deal with the data in small batches. My question is how to make sure that one hot encoding sees all the features of each categorical variable during the first run?
---
Title: When should I use numpy?
Q_Id 24,971,400 | A_Id 24,971,512 | CreationDate 2014-07-26T13:17:00.000 | Topics: Data Science and Machine Learning
Q_Score 1 | Users Score 1 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 669
Tags: python,numpy
Answer:
Long answer short: when you need to do huge mathematical operations, like vector multiplications, which would otherwise require writing lots of loops and leave your code unreadable yet still inefficient, you should use NumPy. A few key benefits: NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically); changing the size of an ndarray creates a new array and deletes the original, so it is more memory efficient. The elements in a NumPy array are all required to be of the same data type, and thus are the same size in memory (the exception: one can have arrays of Python, including NumPy, objects, thereby allowing for arrays of different-sized elements). NumPy arrays facilitate advanced mathematical and other types of operations on large amounts of data; typically, such operations are executed more efficiently and with less code than is possible using Python's built-in sequences. A growing plethora of scientific and mathematical Python-based packages use NumPy arrays; though these typically support Python-sequence input, they convert such input to NumPy arrays prior to processing, and they often output NumPy arrays. In other words, in order to efficiently use much (perhaps even most) of today's scientific/mathematical Python-based software, just knowing how to use Python's built-in sequence types is insufficient; one also needs to know how to use NumPy arrays. Vector operations come in handy in NumPy: you don't need to write loops, yet the code stays Pythonic. It also supports an object-oriented approach.
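A small example of the loop-free style the answer describes (the values are illustrative):

    import numpy as np

    prices = np.array([1.2, 3.4, 2.2, 5.0])
    qty = np.array([10, 2, 7, 1])
    total = np.dot(prices, qty)  # sum of prices[i] * qty[i], no explicit loop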
I'm a newbie to Python, and recently I heard some people say that numpy is a good module for dealing with huge amounts of data. I'm curious what numpy can do for us in daily work. As far as I know, most of us are not scientists or researchers, so under what circumstances can numpy bring us benefit? Can you share a good practice with me?
---
Title: Efficiently grouping a list of coordinates points by location in Python
Q_Id 24,985,127 | A_Id 24,985,330 | CreationDate 2014-07-27T20:05:00.000 | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score 8 | Users Score 3 | Score 1.2 | is_accepted true | AnswerCount 3 | Available Count 1 | ViewCount 7,135
Tags: python,algorithm,grid
Answer:
You can hash all coordinate points (e.g. using dictionary structure in python) and then for each coordinate point, hash the adjacent neighbors of the point to find pairs of points that are adjacent and "merge" them. Also, for each point you can maintain a pointer to the connected component that that point belongs to (using the dictionary structure), and for each connected component you maintain a list of points that belong to the component. Then, when you hash a neighbor of a point and find a match, you merge the two connected component sets that the points belong to and update the group pointers for all new points in the union set. You can show that you only need to hash all the neighbors of all points just once and this will find all connected components, and furthermore, if you update the pointers for the smaller of the two connected components sets when two connected component sets are merged, then the run-time will be linear in the number of points.
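A breadth-first sketch of the hashing idea (a set plays the role of the dictionary and the group pointers; runtime is linear in the number of points):

    from collections import deque

    def group_adjacent(points):
        remaining = set(points)
        groups = []
        while remaining:
            queue = deque([remaining.pop()])
            group = []
            while queue:
                x, y = queue.popleft()
                group.append((x, y))
                # hash lookups on the 4 neighbors merge adjacent points
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nb in remaining:
                        remaining.remove(nb)
                        queue.append(nb)
            groups.append(group)
        return groups

    print(group_adjacent([(0, 0), (0, 1), (5, 5)]))  # -> two groups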
Given a list of X,Y coordinate points on a 2D grid, what is the most efficient algorithm to create a list of groups of adjacent coordinate points? For example, given a list of points making up two non-adjacent squares (3x3) on a grid (15x15), the result of this algorithm would be two groups of points corresponding to the two squares. I suppose you could do a flood fill algorithm, but this seems overkill and not very efficient for a large 2D array of say 1024 size.
---
Title: Parallel exact matrix diagonalization with Python
Q_Id 25,004,564 | A_Id 25,005,318 | CreationDate 2014-07-28T21:21:00.000 | Topics: Data Science and Machine Learning
Q_Score 1 | Users Score 1 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 3,153
Tags: python,numpy,scipy,linear-algebra,numerical-methods
Answer:
For symmetric sparse matrix eigenvalue/eigenvector finding, you may use scipy.sparse.linalg.eigsh. It uses ARPACK behind the scenes, and there are parallel ARPACK implementations. AFAIK, SciPy can be compiled with one if your scipy installation uses the serial version. However, this is not a good answer, if you need all eigenvalues and eigenvectors for the matrix, as the sparse version uses the Lanczos algorithm. If your matrix is not overwhelmingly large, then just use numpy.linalg.eigh. It uses LAPACK or BLAS and may use parallel code internally. If you end up rolling your own, please note that SciPy/NumPy does all the heavy lifting with different highly optimized linear algebra packages, not in pure Python. Due to this the performance and degree of parallelism depends heavily on the libraries your SciPy/NumPy installation is compiled with. (Your question does not reveal if you just want to have parallel code running on several processors, or on several computers. Also, the size of your matrix has a big impact on the best method. So, this answer may be completely off-the-mark.)
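The sparse path mentioned at the start of the answer, for a few extreme eigenpairs (matrix size, density and k are illustrative):

    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    A = sp.random(1000, 1000, density=0.001, format="csr")
    A = (A + A.T) / 2                        # symmetrize
    vals, vecs = eigsh(A, k=6, which="SA")   # six smallest-algebraic eigenvalues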
Is anyone aware of an implemented version (perhaps using scipy/numpy) of parallel exact matrix diagonalization (equivalently, finding the eigensystem)? If it helps, my matrices are symmetric and sparse. I would hate to spend a day reinventing the wheel. EDIT: My matrices are at least 10,000x10,000 (but, preferably, at least 20 times larger). For now, I only have access to a 4-core Intel machine (with hyperthreading, so 2 processes per core), ~3.0Ghz each with 12GB of RAM. I may later have access to a 128-core node ~3.6Ghz/core with 256GB of RAM, so single machine/multiple cores should do it (for my other parallel tasks, I have been using multiprocessing). I would prefer for the algorithms to scale well. I do need exact diagonalization, so scipy.sparse routines are not be good for me (tried, didn't work well). I have been using numpy.linalg.eigh (I see only single core doing all the computations). Alternatively (to the original question): is there an online resource where I can find out more about compiling SciPy so as to insure parallel execution?
---
Title: How can sklearn select categorical features based on feature selection
Q_Id 25,020,482 | A_Id 25,036,779 | CreationDate 2014-07-29T16:35:00.000 | Topics: Data Science and Machine Learning
Q_Score 6 | Users Score 6 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 3,078
Tags: python,scikit-learn,feature-selection
Answer:
You can't. The feature selection routines in scikit-learn will consider the dummy variables independently of each other. This means they can "trim" the domains of categorical variables down to the values that matter for prediction.
My question is: I want to run feature selection on data with several categorical variables. I have used get_dummies in pandas to generate all the sparse columns for these categorical variables. My question is how sklearn can know that a specific group of sparse columns actually belongs to one feature and select/drop them all together. For example, I have a variable called city. There are New York, Chicago and Boston, three levels for that variable, so the sparse matrix looks like: [1,0,0] [0,1,0] [0,0,1]. How can I inform sklearn that these three "columns" actually belong to one feature, city, so it won't end up choosing New York and deleting Chicago and Boston? Thank you so much!
---
Title: Sorting multiple columns in excel via xlwt for python
Q_Id 25,024,437 | A_Id 25,032,965 | CreationDate 2014-07-29T20:32:00.000 | Topics: Database and SQL; Data Science and Machine Learning
Q_Score 1 | Users Score -1 | Score -0.197375 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 1,410
Tags: python,excel,xlwt
Answer:
You will get the data from queries, right? Then you will write it to an Excel file with xlwt. Just before writing, you can sort it. If you can show us your code, then maybe I can optimize it; otherwise you have to follow wnnmaw's advice and do it in a more complicated way.
I'm using python to write a report which is put into an excel spreadshet. There are four columns, namely: Product Name | Previous Value | Current Value | Difference When I am done putting in all the values I then want to sort them based on Current Value. Is there a way I can do this in xlwt? I've only seen examples of sorting a single column.
---
Title: Performing a rolling vector auto regression with two variables X and Y, with time lag on X in Python
Q_Id 25,057,063 | A_Id 25,069,634 | CreationDate 2014-07-31T10:52:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 149
Tags: python,signals,filtering,time-series
Answer:
There are some rather quick ways. I assume you are only interested in the slope and average of the signal Y. In order to calculate these, you need to have: sum(Y) sum(X) sum(X.X) sum(X.Y) All sums are over the samples in the window. When you have these, the average is: sum(Y) / n and the slope: (sum(X.Y) - sum(X) sum(Y) / n) / (sum(X.X) - sum(X)^2 / n) To make a quick algorithm it is worth noting that all of these except for sum(X.Y) can be calculated in a trivial way from either X or Y. The rolling sums are very fast to calculate as they are cumulative sums of differences of two samples ("incoming to window" minus "outgoing from the window"). Only sum(X.Y) needs to be calculated separately for each time step. All these operations can be vectorized, even though the time lag is probably easier to write as a loop without any notable performance hit. This way you will be able to calculate tens of millions of regressions per second. Is that fast enough?
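A numpy sketch of the rolling-sum formulas above (window size 30 as in the question; the one-sample lag pairs Y(t) with X(t-1)):

    import numpy as np

    def rolling_beta(X, Y, w=30):
        x, y = X[:-1], Y[1:]  # pair X(t-1) with Y(t)

        def rsum(a):  # rolling window sums via a cumulative sum
            c = np.cumsum(np.insert(a, 0, 0.0))
            return c[w:] - c[:-w]

        Sx, Sy = rsum(x), rsum(y)
        Sxx, Sxy = rsum(x * x), rsum(x * y)
        # slope a for each window, per the formula in the answer
        return (Sxy - Sx * Sy / w) / (Sxx - Sx ** 2 / w)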
I have to perform linear regressions on a rolling window on Y and a time lagged version of X, ie finding Y(t) = aX(t-1) + b. The window size is fixed at 30 samples. I want to return a numpy array of all the beta coefficients. Is there a quick way of doing this? I read about the Savitsky-Golay filter, but it regresses only X with the time lagged version of itself. Thanks!
---
Title: 2D Interpolation with periodic boundary conditions
Q_Id 25,087,111 | A_Id 32,809,719 | CreationDate 2014-08-01T19:11:00.000 | Topics: Data Science and Machine Learning
Q_Score 7 | Users Score 1 | Score 0.049958 | is_accepted false | AnswerCount 4 | Available Count 1 | ViewCount 3,228
Tags: python,interpolation
Answer:
Another function that could work is scipy.ndimage.interpolation.map_coordinates. It does spline interpolation with periodic boundary conditions. It does not directly provide derivatives, but you could calculate them numerically.
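A minimal call with periodic boundaries (grid and query points are illustrative; recent scipy exposes the function directly under scipy.ndimage):

    import numpy as np
    from scipy.ndimage import map_coordinates

    grid = np.random.rand(64, 64)   # the field sampled on a grid
    rows = np.array([10.3, 5.5])    # off-grid query coordinates
    cols = np.array([63.9, 0.2])
    vals = map_coordinates(grid, [rows, cols], order=3, mode="wrap")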
I'm running a simulation on a 2D space with periodic boundary conditions. A continuous function is represented by its values on a grid. I need to be able to evaluate the function and its gradient at any point in the space. Fundamentally, this isn't a hard problem -- or to be precise, it's an almost already solved problem. The function can be interpolated using a cubic spline with scipy.interpolate.RectBivariateSpline. The reason it's almost solved is that RectBivariateSpline cannot handle periodic boundary conditions, nor can anything else in scipy.interpolate, as far as I can figure out from the documentation. Is there a python package that can do this? If not, can I adapt scipy.interpolate to handle periodic boundary conditions? For instance, would it be enough to put a border of, say, four grid elements around the entire space and explicitly represent the periodic condition on it? [ADDENDUM] A little more detail, in case it matters: I am simulating the motion of animals in a chemical gradient. The continuous function I mentioned above is the concentration of a chemical that they are attracted to. It changes with time and space according to a straightforward reaction/diffusion equation. Each animal has an x,y position (which cannot be assumed to be at a grid point). They move up the gradient of attractant. I'm using periodic boundary conditions as a simple way of imitating an unbounded space.
---
Title: What is the data structure in python that can contain multiple pandas data frames?
Q_Id 25,096,357 | A_Id 25,110,368 | CreationDate 2014-08-02T15:00:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 1 | Score 0.066568 | is_accepted false | AnswerCount 3 | Available Count 1 | ViewCount 104
Tags: python,pandas
Answer:
I haven't done much with Panels, but what exactly is the functionality that you need? Is there a reason a simple python list wouldn't work? Or, if you want to refer by name and not just by list position, a dictionary?
I want to write a function to return several data frames (different dims) and put them into a larger "container" and then select each from the "container" using indexing. I think I want to find some data structure like list in R, which can have different kinds of objects. What can I use to do this?
---
Title: Spyder, Python IDE startup code crashing GUI
Q_Id 25,101,081 | A_Id 25,123,660 | CreationDate 2014-08-03T02:31:00.000 | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score 1 | Users Score 1 | Score 0.197375 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 604
Tags: python,numpy,scipy,ipython,spyder
Answer:
You may find Spyder's array editor better suited for large arrays than the qt console.
I am using Spyder from the Anaconda scientific package set (3.x) and consistently work with very large arrays. I want to be able to see these arrays in my console window so I use these two commands: set_printoptions(linewidth=1000) to set the maximum characters displayed on a single line to 1000 and: set_printoptions(threshold='nan') to prevent truncation of large arrays. Putting these two lines of code into the startup option as such set_printoptions(linewidth=1000),set_printoptions(threshold='nan') causes Spyder to hang and crash upon a new session of ipython in the console. Is there a way to run these lines of code without having me type them all the time. Also, the console window only allows me to scroll up to a certain point then stops. This can be a problem when I want to view large arrays. Is there any way to increase the scroll buffer? (Note, I'm very new to Python having just switched over from MATLAB).
---
Title: Benefits of Pytables / databases over file system for data organization?
Q_Id 25,110,089 | A_Id 25,122,607 | CreationDate 2014-08-03T23:40:00.000 | Topics: Database and SQL; Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 1 | Available Count 1 | ViewCount 332
Tags: python,csv,organization,pytables
Answer:
First of all, I am a big fan of Pytables, because it helped me manage huge data files (20GB or more per file), which I think is where Pytables plays out its strong points (fast access, built-in querying etc.). If the system is also used for archiving, the compression capabilities of HDF5 will reduce space requirements and reduce network load for transfer. I do not think that 'reproducing' your file system inside an HDF5 file has advantages (happy to be told I'm wrong on this). I would suggest a hybrid approach: keep the normal filesystem structure and put the experimental data in hdf5 containers with all the meta-data. This way you keep the flexibility of your normal filesystem (access rights, copying, etc.) and can still harness the power of pytables if you have bigger files where memory is an issue. Pulling the data from HDF5 into normal pandas or numpy is very cheap, so your 'normal' work flow shouldn't suffer.
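A sketch of the suggested hybrid: keep the folder layout, but store sweeps inside an HDF5 container via pandas (paths and keys are illustrative; pandas' HDF5 support requires PyTables):

    import pandas as pd

    df = pd.read_csv("2014-08-03/cell1/sweep01.csv")
    df.to_hdf("2014-08-03/cell1.h5", key="sweep01",
              complevel=9, complib="zlib")  # compressed on disk
    sweep = pd.read_hdf("2014-08-03/cell1.h5", key="sweep01")  # cheap to pull back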
I'm currently in the process of trying to redesign the general workflow of my lab, and am coming up against a conceptual roadblock that is largely due to my general lack of knowledge in this subject. Our data currently is organized in a typical file system structure along the lines of: Date\Cell #\Sweep # where for a specific date there are generally multiple Cell folders, and within those Cell folders there are multiple Sweep files (these are relatively simple .csv files where the recording parameters are saved separated in .xml files). So within any Date folder there may be a few tens to several hundred files for recordings that day organized within multiple Cell subdirectory folders. Our workflow typically involves opening multiple sweep files within a Cell folder, averaging them, and then averaging those with data points from other Cell folders, often across multiple days. This is relatively straightforward to do with the Pandas and Numpy, although there is a certain “manual” feel to it when remotely accessing folders saved to the lab server. We also, on occasion, run into issues because we often have to pull in data from many of these files at once. While this isn’t usually an issue, the files can range between a few MBs to 1000s of MBs in size. In the latter case we have to take steps to not load the entire file into memory (or not load multiple files at once at the very least) to avoid memory issues. As part of this redesign I have been reading about Pytables for data organization and for accessing data sets that may be too large to store within memory. So I guess my 2 main questions are If the out-of-memory issues aren’t significant (i.e. that utility wouldn’t be utilized often), are there any significant advantages to using something like Pytables for data organization over simply maintaining a file system on a server (or locally)? Is there any reason NOT to go the Pytables database route? We are redesigning our data collection as well as our storage, and one option is to collect the data directly into Pandas dataframes and save the files in the HDF5 file type. I’m currently weighing the cost/benefit of doing this over the current system of the data being stored into csv files and then loaded into Pandas for analysis later on. My thinking is that by creating a database vs. the filesystem we current have we may 1. be able to reduce (somewhat anyway) file size on disk through the compression that hdf5 offers and 2. accessing data may overall become easier because of the ability to query based on different parameters. But my concern for 2 is that ultimately, since we’re usually just opening an entire file, that we won’t be utilizing that to functionality all that much – that we’d basically be performing the same steps that we would need to perform to open a file (or a series of files) within a file system. Which makes me wonder whether the upfront effort that this would require is worth it in terms of our overall workflow.
---
Title: Using Pickle vs database for loading large amount of data?
Q_Id 25,122,947 | A_Id 25,288,486 | CreationDate 2014-08-04T16:07:00.000 | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score 1 | Users Score -1 | Score 1.2 | is_accepted true | AnswerCount 1 | Available Count 1 | ViewCount 811
Tags: python,database,computer-vision,pickle
Answer:
Use a database because it allows you to query faster. I've done this before. I would suggest against using cPickle. What specific implementation are you using?
I have previously saved a dictionary which maps image_name -> list of feature vectors, with the file being ~32 Gb. I have been using cPickle to load the dictionary in, but since I only have 8 GB of RAM, this process takes forever. Someone suggested using a database to store all the info, and reading from that, but would that be a faster/better solution than reading a file from disk? Why?
---
Title: How to check if a random 3D object surface is flat in Python
Q_Id 25,138,508 | A_Id 25,142,706 | CreationDate 2014-08-05T12:11:00.000 | Topics: Data Science and Machine Learning
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | AnswerCount 2 | Available Count 1 | ViewCount 435
Tags: python,image,3d,transform
Answer:
Firstly, all lines in 3d correspond to an equation; secondly, all lines in 3d that lie on a particular plane for part of their length correspond to equations that belong to a set of linear equations that share certain features, which you would need to determine. The first thing you should do is identify the four corners of the supposed plane - they will have x, y or z values more extreme than the other points. Then check that the lines between the corners have equations in the set - three points in 3d always define a plane, four points may not. Then you should 'plot' the points of two parallel sides using the appropriate linear equations. All the other points in the supposed plane will be 'on' lines (whose equations are also in the set) that are perpendicular between the two parallel sides. The two end points of a perpendicular line on the sides will define each equation. The crucial thing to remember when determining whether a point is 'on' a line is that it may not be, even if the supposed plane was inputted as a plane. This is because x, y and z values as generated by an equation will be rounded so as correspond to 'real' points as defined by the resolution that the graphics program allows. Therefore you must allow for a (very small) discrepancy between where a point 'should be' and where it actually is - this may be just one pixel (or whatever unit of resolution is being used). To look at it another way - a point may be on a perpendicular between two sides but not on a perpendicular between the other two solely because of a rounding error with one of the two equations. If you want to test for a 'bumpy' plane, for whatever reason, just increase the discrepancy allowed. If you post a carefully worded question about the set of equations for the lines in a plane on math.stackexchange.com someone may know more about it.
I used micro CT (it generates a kind of 3D image object) to evaluate my samples, which were shaped like cones. However, the main surface, which should be flat, cannot always be placed parallel to the surface of the image stacks. To perform the transform, first of all I have to find a way to identify the flat surface. Therefore I learnt Python to read the image data into a numpy array. Then I realized that I totally have no clue how to achieve the idea in a mathematical way. Any idea, suggestion, or even package would be much appreciated.
0
1
435
0
67,782,295
0
0
0
0
1
false
42
2014-08-05T18:09:00.000
-2
4
0
TFIDF for Large Dataset
25,145,552
-0.099668
python,lucene,nlp,scikit-learn,tf-idf
- The lengths of the documents
- The number of terms in common
- Whether the terms are common or unusual
- How many times each term appears
I have a corpus which has around 8 million news articles, I need to get the TFIDF representation of them as a sparse matrix. I have been able to do that using scikit-learn for relatively lower number of samples, but I believe it can't be used for such a huge dataset as it loads the input matrix into memory first and that's an expensive process. Does anyone know, what would be the best way to extract out the TFIDF vectors for large datasets?
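For reference, one commonly suggested out-of-core approach (a sketch, not part of the answer above, and assuming scikit-learn is acceptable) is HashingVectorizer, which is stateless and can stream documents one at a time, followed by TfidfTransformer; note that the non_negative parameter applies to scikit-learn of this era (newer versions use alternate_sign=False instead), and article_paths is a hypothetical list of file paths:

from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

def stream_articles():
    for path in article_paths:          # hypothetical: one file per article
        yield open(path).read()

counts = HashingVectorizer(n_features=2**20, non_negative=True).transform(stream_articles())
tfidf = TfidfTransformer().fit_transform(counts)   # sparse TF-IDF matrix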
0
1
28,481
0
25,178,533
0
0
0
1
1
false
0
2014-08-06T18:55:00.000
0
1
0
Python pandas module openpxyl version issue
25,168,058
0
python,pandas,openpyxl,versions
The best thing would be to remove the version of openpyxl you installed and let Pandas take care of it.
My installed version of the python(2.7) module pandas (0.14.0) will not import. The message I receive is this: UserWarning: Installed openpyxl is not supported at this time. Use >=1.6.1 and <2.0.0. Here's the problem - I already have openpyxl version 1.8.6 installed so I can't figure out what the problem might be! Does anybody know where the issue may lie? Do I need a different combination of versions?
0
1
114
0
25,170,275
0
1
0
0
1
false
0
2014-08-06T20:22:00.000
1
2
0
error in import pandas after installing it using pip
25,169,506
0.099668
python,pandas
Try to locate your pandas lib in /python*/lib/site-packages, and add that directory to your sys.path.
I installed pandas using pip and get the following message "pip install pandas Requirement already satisfied (use --upgrade to upgrade): pandas in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages Cleaning up..." When I load up python and try to import pandas, it says module not found. Please help
0
1
1,176
0
25,414,604
0
0
0
0
1
false
1
2014-08-07T20:56:00.000
0
1
0
How to test the default NLTK NER chunker's accuracy on own corpus?
25,192,029
0
python,nltk
Read in the chunked portion of your corpus and convert it into the format that the NLTK expects, i.e. as a list of shallow Trees. Once you have it in this form, you can pass it to the evaluate() method just like you would pass the "gold standard" examples. The evaluate method will strip off the chunks, run your text through the chunker, and compare the result with the chunks you supplied to calculate accuracy.
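If it helps, a rough sketch of that flow — the CoNLL strings, the chunk types, and the pickled chunker path below are assumptions to adapt to your corpus annotation and NLTK version:

import nltk
from nltk.chunk import conllstr2tree

# gold_strings: your hand-chunked sentences in CoNLL IOB format (assumed).
gold = [conllstr2tree(s, chunk_types=('PERSON', 'ORGANIZATION', 'GPE'))
        for s in gold_strings]

# The default NE chunker that ne_chunk loads internally (path may vary by version).
chunker = nltk.data.load('chunkers/maxent_ne_chunker/english_ace_multiclass.pickle')
print(chunker.evaluate(gold))   # prints a ChunkScore with precision/recall/F-measure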
How to test the default NLTK NER chunker's accuracy on own corpus? I've tagged a percentage of my own corpus. I'm curious if it's possible to use the default NLTK tagger to see accuracy rate on this corpus? I already know about the ne_chunker.evaluate() function, but it's not immediately clear to me how to input in my own corpus for evaluation (rather than the gold standard corpus)
0
1
269
0
25,233,675
0
0
0
0
1
false
0
2014-08-10T22:16:00.000
1
2
0
Large Array of binary data
25,233,539
0.099668
python
Another possibility is to represent the last axis of 20 bits as a single 32 bit integer. This way a 5000x5000 array would suffice.
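A sketch of that packing idea with NumPy: fold the 20 boolean layers into the low bits of one uint32 per (row, col) cell, so a 5000x5000x20 volume costs ~100 MB instead of 2 GB (function names are illustrative):

import numpy as np

n = 5000
packed = np.zeros((n, n), dtype=np.uint32)   # bit k of each cell = layer k

def set_bit(packed, row, col, layer, value):
    mask = np.uint32(1 << layer)
    if value:
        packed[row, col] |= mask
    else:
        packed[row, col] &= ~mask

def get_layer(packed, layer):
    # Full 5000x5000 boolean slice for one layer along the third axis.
    return ((packed >> layer) & 1).astype(bool)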
I'm working with a large 3 dimensional array of data that is binary, each value is one of two possible values. I currently have this data stored in the numpy array as int32 objects that are either 1 or 0. It works fine for small arrays but eventually i will need to make the array 5000x5000x20, which I can't even get close to without getting "Memory Error". Does anyone have any suggestions for a better way to do this? I am really hoping that I can keep it all together in one data structure because I will need to access slices of it along all three axes.
0
1
119
0
25,242,915
0
0
0
0
1
true
0
2014-08-11T07:36:00.000
0
1
0
Python hierarchical clustering visualization dump [scipy]
25,238,028
1.2
python,scipy,cluster-analysis,hierarchical-clustering
The solution was that scipy has its own built-in function to turn a linkage matrix into a binary tree: scipy.cluster.hierarchy.to_tree(matrix).
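For anyone landing here, a minimal usage sketch (toy data assumed):

import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

X = np.random.rand(10, 3)            # toy observations
Z = linkage(X, method='average')     # the linkage matrix
root = to_tree(Z)                    # binary tree of ClusterNode objects
print(root.id, root.dist)            # each node exposes .id, .dist, .left, .right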
Recently I was visualizing my datasets using the python modules scikit and scipy hierarchical clustering and dendrogram. The dendrogram method draws a graph for me, and now I need to export this tree as a graph in my code. I am wondering whether there is any way to get this data. Any help would be really appreciated. Thanks.
0
1
682
0
25,298,846
0
1
0
0
1
true
0
2014-08-13T13:52:00.000
1
1
0
Python NLTK tokenizing text using already found bigrams
25,288,032
1.2
python,python-2.7,nlp,nltk
The way topic modelers usually pre-process text with n-grams is to connect them by underscore (say, topic_modeling or white_house). You can do that when identifying the bigrams themselves. And don't forget to make sure that your tokenizer does not split by underscore (Mallet does if token-regex is not set explicitly). P.S. NLTK's native bigram collocation finder is super slow - if you want something more efficient, look around if you haven't yet, or create your own based on, say, Dunning (1993).
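For what it's worth, NLTK does ship a tokenizer for exactly this kind of multi-word joining — MWETokenizer — which glues known token sequences with a separator. A sketch (the bigram list here is a made-up example; you would feed in your own collocations):

from nltk.tokenize import MWETokenizer

bigrams = [('machine', 'learning'), ('white', 'house')]   # your found bigrams
tokenizer = MWETokenizer(bigrams, separator='_')
print(tokenizer.tokenize('the white house uses machine learning'.split()))
# ['the', 'white_house', 'uses', 'machine_learning']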
Background: I got a lot of text that has some technical expressions, which are not always standard. I know how to find the bigrams and filter them. Now, I want to use them when tokenizing the sentences. So words that should stay together (according to the calculated bigrams) are kept together. I would like to know if there is a correct way to doing this within NLTK. If not, I can think of various non efficient ways of rejoining all the broken words by checking dictionaries.
0
1
317
0
63,317,500
0
0
0
0
1
false
374
2014-08-17T17:52:00.000
0
9
0
How can I display full (non-truncated) dataframe information in HTML when converting from Pandas dataframe to HTML?
25,351,968
0
python,html,pandas
For those who like to reduce typing (i.e., everyone!): pd.set_option('max_colwidth', None) does the same thing
I converted a Pandas dataframe to an HTML output using the DataFrame.to_html function. When I save this to a separate HTML file, the file shows truncated output. For example, in my TEXT column, df.head(1) will show The film was an excellent effort... instead of The film was an excellent effort in deconstructing the complex social sentiments that prevailed during this period. This rendition is fine in the case of a screen-friendly format of a massive Pandas dataframe, but I need an HTML file that will show complete tabular data contained in the dataframe, that is, something that will show the latter text element rather than the former text snippet. How would I be able to show the complete, non-truncated text data for each element in my TEXT column in the HTML version of the information? I would imagine that the HTML table would have to display long cells to show the complete data, but as far as I understand, only column-width parameters can be passed into the DataFrame.to_html function.
1
1
438,196
1
44,932,618
0
1
0
0
1
false
8
2014-08-24T03:57:00.000
1
4
0
Using Anaconda Python 3.4 with PyQt5
25,468,397
0.049958
matplotlib,anaconda,python-3.4,pyqt5
I use Anaconda with Python v2.7.x, and Qt5 doesn't work. The work-around I found was Tools -> Preferences -> Python console -> External modules -> Library: PySide
I have an existing PyQt5/Python3.4 application that works great, and would now like to add "real-time" data graphing to it. Since matplotlib installation specifically looks for Python 3.2, and NumPhy / ipython each have there own Python version requirements, I thought I'd use a python distribution to avoid confusion. But out of all the distros (pythonxy, winpython, canopy epd) Anaconda is the only one that supports Python 3.4, however it only has PyQt 4.10.4. Is there a way I can install Anaconda, and use matplotlib from within my existing PyQt5 gui app? Would I be better off just using another charting package (pyqtgraph, pyqwt, guiqwt, chaco, etc) that might work out of the box with PyQt5/Python3.4?
0
1
15,580
0
71,228,716
0
0
0
0
2
false
27
2014-08-25T12:23:00.000
0
7
0
How to convert a 16-bit to an 8-bit image in OpenCV?
25,485,886
0
python,numpy,opencv,image-processing
This is the simplest way I found: img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
I have a 16-bit grayscale image and I want to convert it to an 8-bit grayscale image in OpenCV for Python to use it with various functions (like findContours etc.). How can I do this in Python?
0
1
66,029
0
63,122,556
0
0
0
0
2
false
27
2014-08-25T12:23:00.000
1
7
0
How to convert a 16-bit to an 8-bit image in OpenCV?
25,485,886
0.028564
python,numpy,opencv,image-processing
Yes you can in Python. To get the expected result, choose a method based on how you want the values mapped from uint16 to uint8. For instance, if you do img8 = (img16/256).astype('uint8'), values below 256 are mapped to 0; if you do img8 = img16.astype('uint8'), values are wrapped modulo 256 (e.g. 257 becomes 1). In the LUT method as described and corrected above, you have to define the mapping.
I have a 16-bit grayscale image and I want to convert it to an 8-bit grayscale image in OpenCV for Python to use it with various functions (like findContours etc.). How can I do this in Python?
0
1
66,029
0
25,508,739
0
0
0
0
1
false
32
2014-08-26T14:34:00.000
4
4
0
Fastest way to parse large CSV files in Pandas
25,508,510
0.197375
python,pandas
One thing to check is the actual performance of the disk system itself. Especially if you use spinning disks (not SSDs), your practical disk read speed may be one of the explaining factors for the performance. So, before doing too much optimization, check if reading the same data into memory (by, e.g., mydata = open('myfile.txt').read()) takes an equivalent amount of time. (Just make sure you do not get bitten by disk caches; if you load the same data twice, the second time it will be much faster because the data is already in the RAM cache.) See the update below before believing what I write underneath. If your problem is really the parsing of the files, then I am not sure if any pure Python solution will help you. As you know the actual structure of the files, you do not need to use a generic CSV parser. There are three things to try, though: the Python csv package and csv.reader, NumPy genfromtxt, and NumPy loadtxt. The third one is probably fastest if you can use it with your data; at the same time it has the most limited set of features. (Which actually may make it fast.) Also, the suggestions given to you in the comments by crclayton, BKay, and EdChum are good ones. Try the different alternatives! If they do not work, then you will have to write something in a compiled language (either compiled Python or, e.g., C). Update: I do believe what chrisb says below, i.e. the pandas parser is fast. Then the only way to make the parsing faster is to write an application-specific parser in C (or another compiled language). Generic parsing of CSV files is not straightforward, but if the exact structure of the file is known there may be shortcuts. In any case, parsing text files is slow, so if you can ever translate it into something more palatable (HDF5, NumPy array), loading will be limited only by the I/O performance.
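A sketch of that HDF5 route (file names, key, and column names are placeholders): pay the parsing cost once, then reload at close to I/O speed.

import pandas as pd

df = pd.read_csv('big.csv', parse_dates=['date'])   # slow: done once
df.to_hdf('big.h5', 'df', mode='w')                 # binary on-disk format

df = pd.read_hdf('big.h5', 'df')                    # fast on subsequent loads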
I am using pandas to analyse large CSV data files. They are around 100 megs in size. Each load from csv takes a few seconds, and then more time to convert the dates. I have tried loading the files, converting the dates from strings to datetimes, and then re-saving them as pickle files. But loading those takes a few seconds as well. What fast methods could I use to load/save the data from disk?
0
1
36,848
0
25,533,547
0
0
0
0
1
false
2
2014-08-27T16:32:00.000
0
3
0
Many independent pseudorandom graphs each with same arbitrary y for any input x
25,532,502
0
python,algorithm,random,python-3.4
Well, you're probably going to need to come up with some more detailed requirements, but yes, there are ways: pre-populate a dictionary with however many terms in the series you require for a given seed and then at run-time simply look the nth term up; or, if you're not fussed about the seed values and/or do not require some n terms for any given seed, then find an O(1) way of generating different seeds and only use the first term in each series. Otherwise, you may want to stop using the built-in Python functionality and devise your own (more predictable) algorithm. EDIT wrt the new info: OK - so I also looked at your profile, and so you are doing something (musical?) other than any new crypto thing. If that's the case, then it's unfortunately a mixed blessing, because while you don't require security, you also still won't want (audible) patterns appearing, so you unfortunately probably do still need a strong PRNG. "One of the transformers that I want adds a random value to the input y coordinate depending on the input x coordinate" - it's not yet clear to me if there is actually any real requirement for y to depend upon x... "Now say that I want two different instances of the transformer that adds random values to y. My question is about my options for making this new random transformer give different values than the first one." - because here, I'm getting the impression that all you really require is for two different instances to be different in some random way. But, assuming you have some object containing the tuple (x, y) and you really do want a transform function to randomly vary y for the same x, and you want an untransform function to quickly undo any transform operations, then why not just keep a stack of the state changes throughout the lifetime of any single instance of an object, and then in the untransform implementation just pop the last transformation off the stack?
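As an alternative sketch of the "same random value for the same x" behavior asked about, without replaying n calls: derive a fresh seed from (instance seed, x) on every call (class name and value ranges are illustrative assumptions; use integer seeds so hash() is stable across runs):

import random

class RandomYTransformer(object):
    def __init__(self, seed):
        self.seed = seed            # distinct integer per transformer instance

    def _offset(self, x):
        # Deterministic: the same (seed, x) pair always yields the same offset.
        return random.Random(hash((self.seed, x))).uniform(-1.0, 1.0)

    def transform(self, x, y):
        return x, y + self._offset(x)

    def untransform(self, x, y):
        return x, y - self._offset(x)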
By 'graph' I mean 'function' in the mathematical sense, where you always find one unchanging y value per x value. Python's random.Random class's seed behaves as the x-coordinate of a random graph and each new call to random.random() gives a new random graph with all new x-y mappings. Is there a way to directly refer to random.Random's nth graph, or in other words, the nth value in a certain seed's series without calling random.random() n times? I am making a set of classes that I call Transformers that take any (x,y) coordinates as input and output another pair of (x,y) coordinates. Each transformer has two methods: transform and untransform. One of the transformers that I want adds a random value to the input y coordinate depending on the the input x coordinate. Say that I then want this transformer to untransform(x, y), now I need to subtract the same value I added from y if x is the same. This can be done by setting the seed to the same value it had when I added to y, so acting like the x value. Now say that I want two different instances of the transformer that adds random values to y. My question is about my options for making this new random transformer give different values than the first one.
0
1
172
0
25,554,002
0
1
0
0
1
false
0
2014-08-28T16:35:00.000
1
2
0
Numpy array element-by-element comparison optimization
25,553,781
0.099668
python,optimization,numpy
It computes max(a) once, then it compares the (scalar) result against each (scalar) element in a, and creates a bool-array for the result.
Let a be a numpy array of length n. Does the statement a == max(a) calculate the expression max(a) n times or just once?
0
1
70
0
25,562,736
0
0
0
0
1
true
1
2014-08-29T06:09:00.000
6
2
0
How to return 'negative' of a value in pandas dataframe?
25,562,570
1.2
python,pandas
Just use the negative sign on the column directly. For instance, if your DataFrame has a column "A", then -df["A"] gives the negatives of those values.
In pandas, is there any function that returns the negative of the values in a column?
0
1
2,315
0
25,594,611
0
0
0
0
1
true
2
2014-08-31T16:14:00.000
1
4
0
Generate random matrix with every number 0..k
25,593,876
1.2
python,algorithm
You want to generate a random n*m matrix of integers 1..k with every integer used, and no integer used twice in any row. And you want to do it efficiently. If you just want to generate a reasonable answer, reasonably quickly, you can generate the rows by taking a random selection of elements and putting them into a random order. numpy.random.choice (with replace=False) and numpy.random.shuffle can do that. You will avoid the duplicate element issue. If you fail to use all of your elements, then what you can do is randomly "evolve" this towards a correct solution. At every step, identify all elements that are repeated more than once in the matrix, randomly select one, and convert it to an as-yet-unused integer from 1..k. This will not cause duplications within rows, and will in at most k steps give you a matrix of the desired form. Odds are that this is a good enough answer for what you want, and it is what you should do. But it is imperfect - matrices of this form do not all happen with exactly equal probability. (In particular, ones with lots of elements appearing only once show up slightly more often than they should.) If you need a perfectly even distribution, then you're going to have to do a lot more work. To get there you need a bit of theory. If you have that theory, then you can understand the answer as, "Do a dynamic programming solution forwards to find a count of all possible solutions, then run that backwards making random decisions to identify a random solution." Odds are that you don't have that theory. I'm not going to give a detailed explanation of that theory; I'll just outline what you do. You start with the trivial statement, "There are n!/(n-m)! ways in which I could have a matrix with 1 row satisfying my condition using m of the k integers, and none which use more." For i from 1..n, for j from m to k, you figure out the count of ways in which you could build i rows using j of the k integers. You ALSO keep track of how many of those ways came from which previous values of j for the previous row. (You'll need that later.) This step can be done in a double loop. Note that the value in the table you just generated for j=k and i=n is the number of matrices that satisfy all of your conditions. We'll build a random one from the bottom up. First you generate a random row for the last row of your matrix - all are equally likely. For each row until you get to the top, you use the table you built to randomly decide how many of the elements that you used in the last row you generated will never be used again. Randomly decide which those elements will be. Generate a random row from the integers that you are still using. When you get to the top you'll have chosen a random matrix meeting your description, with no biases in how it was generated.
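A sketch of the "generate then evolve" recipe from the first part of this answer, in pure Python for clarity (0-based values to match the question; as noted above, the resulting distribution is not perfectly uniform):

import random
from collections import Counter

def random_matrix(n, m, k):
    """n x m matrix over 0..k-1: no repeats within a row, every value used."""
    assert n * m >= k and m <= k
    rows = [random.sample(range(k), m) for _ in range(n)]
    counts = Counter(v for row in rows for v in row)
    missing = [v for v in range(k) if counts[v] == 0]
    random.shuffle(missing)
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if not missing:
                return rows
            if counts[v] > 1:          # v appears elsewhere too: safe to replace
                u = missing.pop()      # u is unused anywhere, so no row duplicate
                counts[v] -= 1
                counts[u] += 1
                rows[i][j] = u
    return rows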
Given an integer k, I am looking for a pythonic way to generate a nxm matrix (or nested list) which has every integer from 0..k-1 but no integer appears more than once in each row. Currently I'm doing something like this random.sample(list(combinations(xrange(k), m)), n) but this does not guarantee every number from 0..k-1 is included, only that no integer appears more than once in each row. Also this has combinatorial complexity which is obviously undesirable. Thanks.
0
1
407
0
25,607,452
0
0
0
0
1
false
0
2014-09-01T08:54:00.000
1
1
0
Multiple files or single files into HDFStore
25,602,155
0.197375
python,pandas,hdfstore
These are the differences:

Multiple files:
- when using multiple files you can only corrupt a single file when writing (e.g. if you have a power failure while writing)
- you can parallelize writing with multiple files (note - never, ever try to parallelize with a single file, as this will corrupt it!!!)

Single file:
- grouping of logical sets

IMHO the advantages of multiple files outweigh using a single file, as you can easily replicate the grouping properties by using sub-directories.
I am converting 100 csv files into dataframes and storing them in an HDFStore. What are the pros and cons of a - storing the csv file as 100 different HDFStore files? b - storing all the csv files as separate items in a single HDFStore? Other than performance issues, I am asking the question as I am having stability issues and my HDFStore files often get corrupted. So, for me, there is a risk associated with a single HDFStore. However, I am wondering if there are benefits to having a single store.
0
1
202
0
25,691,635
0
0
0
0
1
true
0
2014-09-02T15:30:00.000
0
1
0
Installing matplotlib via pip on Ubuntu 12.04
25,627,100
1.2
python,matplotlib,ubuntu-12.04
Okay, the problem was in the gcc version. While building and creating the wheel of the package, pip uses the system gcc (whose version is 4.7.2). I'm using python from a virtualenv, which was built with gcc 4.4.3. So the version of the libstdc++ library is different between IPython and the one that pip used. As always there are two solutions (or even more): pass the LD_PRELOAD environment variable with the correct libstdc++ before entering IPython, or use the same version of gcc when creating the wheel as when building the virtualenv. I preferred the latter. Thank you all.
I'm trying to use matplotlib on Ubuntu 12.04. So I built a wheel with pip: python .local/bin/pip wheel --wheel-dir=wheel/ --build=build/ matplotlib Then successfully installed it: python .local/bin/pip install --user --no-index --find-links=wheel/ --build=build/ matplotlib But when I'm trying to import it in ipython ImportError occures: In [1]: import matplotlib In [2]: matplotlib.get_backend() Out[2]: u'agg' In [3]: import matplotlib.pyplot ImportError Traceback (most recent call last) /place/home/yefremat/ in () ----> 1 import matplotlib.pyplot /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/pyplot.py in () 32 from matplotlib import docstring 33 from matplotlib.backend_bases import FigureCanvasBase ---> 34 from matplotlib.figure import Figure, figaspect 35 from matplotlib.gridspec import GridSpec 36 from matplotlib.image import imread as _imread /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/figure.py in () 38 import matplotlib.colorbar as cbar 39 ---> 40 from matplotlib.axes import Axes, SubplotBase, subplot_class_factory 41 from matplotlib.blocking_input import BlockingMouseInput, BlockingKeyMouseInput 42 from matplotlib.legend import Legend /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/init.py in () 2 unicode_literals) 3 ----> 4 from ._subplots import * 5 from ._axes import * /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/_subplots.py in () 8 from matplotlib import docstring 9 import matplotlib.artist as martist ---> 10 from matplotlib.axes._axes import Axes 11 12 import warnings /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/_axes.py in () 36 import matplotlib.ticker as mticker 37 import matplotlib.transforms as mtransforms ---> 38 import matplotlib.tri as mtri 39 import matplotlib.transforms as mtrans 40 from matplotlib.container import BarContainer, ErrorbarContainer, StemContainer /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/tri/init.py in () 7 import six 8 ----> 9 from .triangulation import * 10 from .tricontour import * 11 from .tritools import * /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/tri/triangulation.py in () 4 import six 5 ----> 6 import matplotlib._tri as _tri 7 import matplotlib._qhull as _qhull 8 import numpy as np ImportError: /home/yefremat/.local/lib/python2.7/site-packages/matplotlib/_tri.so: undefined symbol: _ZNSt8__detail15_List_node_base9_M_unhookEv May be I'm doing somethig wrong? Or may be there is a way to turn off gui support of matplotlib? Thanks in advance.
0
1
520
0
48,203,281
0
1
0
0
1
false
164
2014-09-03T13:53:00.000
35
6
0
Python: Convert timedelta to int in a dataframe
25,646,200
1
python,pandas,timedelta
Timedelta objects have read-only instance attributes .days, .seconds, and .microseconds.
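In a DataFrame specifically, the .dt accessor exposes the same attribute column-wise (a small sketch with made-up column names; requires a reasonably recent pandas):

import pandas as pd

df = pd.DataFrame({'td': pd.to_timedelta(['7 days 23:29:00', '2 days 01:00:00'])})
df['day_int'] = df['td'].dt.days    # integer days: 7, 2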
I would like to create a column in a pandas data frame that is an integer representation of the number of days in a timedelta column. Is it possible to use 'datetime.days' or do I need to do something more manual? timedelta column 7 days, 23:29:00 day integer column 7
0
1
343,961
0
25,750,072
0
0
0
0
1
true
4
2014-09-05T20:56:00.000
1
1
0
Inspecting or turning off Numpy/SciPy Parallelization
25,693,870
1.2
python,numpy,parallel-processing,scipy,scikit-learn
Indeed BLAS, or in my case OpenBLAS, was performing the parallelization. The solution was to set the environment variable OMP_NUM_THREADS to 1. Then all is right with the world.
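For reference, a minimal way to set this from within Python — it must happen before NumPy (and thus BLAS) is first imported, otherwise set it in the shell:

import os
os.environ['OMP_NUM_THREADS'] = '1'   # must come before `import numpy`
import numpy as np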
I am running some K-Means clustering from the sklearn package. Although I am setting the parameter n_jobs = 1 as indicated in the sklearn documentation, and although a single process is running, that process will apparently consume all the CPUs on my machine. That is, in top, I can see the python job is using, say 400% on a 4 core machine. To be clear, if I set n_jobs = 2, say, then I get two python instances running, but each one uses 200% CPU, again consuming all 4 of my machine's cores. I believe the issue may be parallelization at the level of NumPy/SciPy. Is there a way to verify my assumption? Is there a way to turn off any parallelization in NumPy/SciPy, for example?
0
1
525
0
25,735,046
0
0
0
0
1
false
3
2014-09-06T19:09:00.000
0
2
0
Find the 'shape' of a list of numbers (straight-line/concave/convex, how many humps)
25,703,792
0
python,statistics,classification,computer-science,differentiation
How about if you difference the data (i.e., x[i+1] - x[i]) repeatedly until all the results are the same sign? For example, if you difference it twice and all the results are nonnegative, you know it's convex. Otherwise difference again and check the signs. You could set a limit, say 10 or so, beyond which you figure the sequence is too complex to characterize. Otherwise, your shape is characterized by the number of times you difference, and the ultimate sign.
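A sketch of that repeated-differencing test with NumPy (the depth cap of 10 mirrors the limit suggested above; the function name is illustrative):

import numpy as np

def shape_signature(values, max_depth=10):
    """Difference until all results share a sign; return (depth, sign),
    or None if the sequence is too complex within max_depth."""
    d = np.asarray(values, dtype=float)
    for depth in range(1, max_depth + 1):
        d = np.diff(d)
        if np.all(d >= 0):
            return depth, +1      # e.g. (2, +1) suggests a convex shape
        if np.all(d <= 0):
            return depth, -1
    return None

print(shape_signature([1, 2, 4, 5, 8, 7, 6, 4, 1]))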
This is a bit hard to explain. I have a list of integers. So, for example, [1, 2, 4, 5, 8, 7, 6, 4, 1] - which, when plotted against element number, would resemble a convex graph. How do I somehow extract this 'shape' characteristic from the list? It doesn't have to be particularly accurate - just the general shape (convex w/ one hump, concave w/ two, straight line, etc.) would be fine. I could use conditionals for every possible shape: for example, if the slope is positive up to a certain index, and negative after, it's a slope, with the skewness depending on index/list_size. Is there some cleverer, generalised way? I suppose this could be a classification problem - but is it possible without ML? Cheers.
0
1
2,040
0
25,714,220
0
1
0
0
1
false
3
2014-09-07T19:40:00.000
1
3
0
does numpy asarray() refer to original list
25,714,046
0.066568
python,numpy
Yes, it is safe to delete it if your input data consists of a list. From the documentation No copy is performed (ONLY) if the input is already an ndarray.
I have a very long list of lists and I am converting it to a numpy array using numpy.asarray(). Is it safe to delete the original list after getting this matrix, or will the newly created numpy array also be affected by this action?
0
1
1,721
0
25,714,754
0
1
0
0
1
false
21
2014-09-07T20:36:00.000
2
4
0
Find rhyme using NLTK in Python
25,714,531
0.099668
python,nltk
Use soundex or double metaphone to find out if they rhyme. NLTK doesn't seem to implement these but a quick Google search showed some implementations.
I have a poem and I want the Python code to just print those words which are rhyming with each other. So far I am able to: Break the poem sentences using wordpunct_tokenize() Clean the words by removing the punctuation marks Store the last word of each sentence of the poem in a list Generate another list using cmudict.entries() with elements as those last words and their pronunciation. I am stuck with the next step. How should I try to match those pronunciations? In all, my major task is to find out if two given words rhyme or not. If rhyme, then return True, else False.
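Since the question already has cmudict pronunciations in hand, a common sketch for the matching step is to compare the tail phonemes; the "rhyme level" of 2 is an assumption to tune, and cmudict requires a one-time nltk.download('cmudict'):

from nltk.corpus import cmudict

pron = cmudict.dict()   # word -> list of possible phoneme sequences

def rhymes(word1, word2, level=2):
    """True if any pronunciations of the two words share their last
    `level` phonemes (a crude but common rhyme test)."""
    try:
        p1, p2 = pron[word1.lower()], pron[word2.lower()]
    except KeyError:
        return False    # word not in the CMU dictionary
    return any(a[-level:] == b[-level:] for a in p1 for b in p2)

print(rhymes('cat', 'hat'))   # True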
0
1
14,269
0
25,824,846
1
0
0
0
1
true
0
2014-09-12T23:14:00.000
1
1
0
What big data solution can I use to process a huge number of input files?
25,818,198
1.2
python,amazon-ec2,bigdata,amazon-sqs
The problem with Hadoop is that when you get a very large number of files and do not combine them with CombineFileInputFormat, it makes the job less efficient. Spark doesn't seem to have a problem with this, though; I've had jobs run without problems with tens of thousands of input files and output tens of thousands of files. I haven't tried to really push the limits; not sure if there even is one!
I am currently searching for the best solution + environment for a problem I have. I'm simplifying the problem a bit, but basically: I have a huge number of small files uploaded to Amazon S3. I have a rule system that matches any input across all file content (including file names) and then outputs a verdict classifying each file. NOTE: I cannot combine the input files because I need an output for each input file. I've reached the conclusion that Amazon EMR with MapReduce is not a good solution for this. I'm looking for a big data solution that is good at processing a large number of input files and performing a rule matching operation on the files, outputting a verdict per file. Probably will have to use ec2. EDIT: clarified 2 above
1
1
148
0
25,824,686
0
1
0
0
1
true
0
2014-09-13T15:00:00.000
2
1
0
How do raise exception if all elements of numpy array are not floats?
25,824,415
1.2
python,numpy
All numbers in a numpy array have the same dtype. So you can quickly check what dtype the array has by looking at array.dtype. If this is float or float64 then every single item in the array will be of type float. Numpy can also create arrays with mixed dtypes, similar to normal Python lists, but then array.dtype is np.object; in this case anything can be in the array elements. But in my experience there are only a few cases where you actually need np.object. To check if the dtype is any of float16, float32, or float64, use: if not issubclass(array.dtype.type, numpy.float): raise TypeError('float type expected')
Just as the title says, I want to raise an exception when I send in an input A that should be an array containing floats. That is, if A contains at least one item that is not a float, it should raise a TypeError.
0
1
828
0
25,830,628
0
1
0
0
1
false
0
2014-09-14T06:08:00.000
2
2
0
Print from specific row of csv file
25,830,569
0.197375
python,csv
You need to come up with a way to detect the start and end of the relevant section of the file; the csv module does not contain any built-in mechanism for doing this by itself, because there is no general and unambiguous delimiter for the beginning and end of a particular section. I have to question the wisdom of jamming multiple CSV files together like this. Is there a reason that you can't separate the sections into individual files?
my csv file has multiple tables in a single file, for example:

name age gender
n1 10 f
n2 20 m
n3 30 m

city population
city1 10
city2 20
city3 30

How can I print from the city row to the city3 row, using the python csv module?
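For what it's worth, a sketch of one such start/end detection with the csv module (function name, file name, and the blank-row stop rule are assumptions; pass delimiter=' ' to csv.reader if your file is space-separated):

import csv

def print_section(path, header_word):
    """Print rows from the row starting with `header_word` until a
    blank row or the end of the file."""
    printing = False
    with open(path) as f:
        for row in csv.reader(f):
            if row and row[0] == header_word:
                printing = True
            elif printing and not row:
                break                 # blank row ends the section
            if printing:
                print(row)

print_section('data.csv', 'city')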
0
1
764
0
25,836,165
0
0
0
0
1
false
0
2014-09-14T17:43:00.000
1
1
0
Why is a list of cumulative frequency sums required for implementing a random word generator?
25,836,133
0.197375
python,algorithm,random,cumulative-sum,cumulative-frequency
Your approach is (also) correct, but it uses space proportional to the input text size. The approach suggested by the book uses space proportional only to the number of distinct words in the input text, which is usually much smaller. (Think about how often words like "the" appear in English text.)
I'm working on exercise 13.7 from Think Python: How to Think Like a Computer Scientist. The goal of the exercise is to come up with a relatively efficient algorithm that returns a random word from a file of words (let's say a novel), where the probability of the word being returned is correlated to its frequency in the file. The author suggests the following steps (there may be a better solution, but this is presumably the best solution for what we've covered so far in the book): Create a histogram showing {word: frequency}. Use the keys method to get a list of words in the book. Build a list that contains the cumulative sum of the word frequencies, so that the last item in this list is the total number of words in the book, n. Choose a random number from 1 to n. Use a bisection search to find the index where the random number would be inserted in the cumulative sum. Use the index to find the corresponding word in the word list. My question is this: What's wrong with the following solution? Turn the novel into a list t of words, exactly as they appear in the novel, without eliminating repeat instances or shuffling. Generate a random integer from 0 to n, where n = len(t) – 1. Use that random integer as an index to retrieve a random word from t. Thanks.
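For concreteness, the book's cumulative-sum approach takes only a few lines with the standard library (Python 3; hist below is a toy histogram standing in for step 1):

import random
from bisect import bisect
from itertools import accumulate

hist = {'the': 5, 'cat': 2, 'sat': 1}                    # word -> frequency (step 1)
words = list(hist)                                       # step 2
cumulative = list(accumulate(hist[w] for w in words))    # step 3
r = random.randrange(cumulative[-1])                     # step 4: 0 .. n-1
print(words[bisect(cumulative, r)])                      # steps 5-6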
0
1
242
0
40,950,835
0
1
0
0
1
false
1
2014-09-15T11:49:00.000
2
1
0
Numba vectorize maxing out all processors
25,847,411
0.379949
python,vectorization,numba
You can limit the number of threads that target=parallel will use by setting the NUMBA_NUM_THREADS envvar. Note that you can't change this after Numba is imported, it gets set when you first start it up. You can check whether it works by examining the value of numba.config.NUMBA_DEFAULT_NUM_THREADS
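For example (the cap must be in place before numba is imported, per the note above):

import os
os.environ['NUMBA_NUM_THREADS'] = '4'   # use 4 of the 8 cores
import numba
print(numba.config.NUMBA_NUM_THREADS)   # confirm the cap took effect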
Does anyone know if there is a way to configure anaconda such that @vectorize does not take all the processors available in the machine? For example, if I have an eight core machine, I only want @vectorize to use four cores.
0
1
430
0
25,902,242
0
0
0
0
1
false
0
2014-09-17T19:58:00.000
0
2
0
Signal feature identification
25,899,286
0
python,audio,machine-learning,signal-processing
Your points 1 and 2 are not very different: 1) is the end result of a classification problem; 2) is the feature set that you feed into the classification. What you need is a good classifier (SVM, decision trees, hierarchical classifiers, etc.) and a good set of features (pitch, formants, etc., as you mentioned).
I'm am trying to identify phonemes in voices using a training database of known ones. I'm wondering if there is a way of identifying common features within my training sample and using that to classify a new one. It seems like there are two paths: Give the process raw/normalised data and it will return similar ones Extract certain metrics such as pitch, formants etc and compare to training set My interest is the first! Any recommendations on machine learning or regression methods/algorithms?
0
1
401
0
25,961,702
0
0
0
0
1
false
0
2014-09-20T14:23:00.000
0
2
0
asymmetric regularization in machine learning libraries (e.g. scikit ) in python
25,949,733
0
python,machine-learning,scikit-learn,asymmetric,regularized
Depending on the amount of data you have and the classifier you would like to use, it might be easier to implement the loss and then use a standard solver like lbfgs or newton, or do stochastic gradient descent, if you have a lot of data. Using a simple custom solver will most likely be much slower than using scikit-learn code, but it will also be much easier to write. In particular if you are after logistic regression, for example, you would need to dig into LibLinear C code. On the other hand, I'm pretty certain that you can implement it in ~10 lines of python using lbfgs in an unoptimized way.
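To make the "roll your own with lbfgs" suggestion concrete, a sketch of logistic loss with a per-feature (hence asymmetric) L2 penalty via SciPy — the function name, label convention, and penalty-weight vector are assumptions:

import numpy as np
from scipy.optimize import minimize

def fit_asymmetric_logreg(X, y, alphas):
    """X: (n, d) features; y: labels in {-1, +1}; alphas: (d,) per-feature
    L2 weights (set an entry to 0 to leave that feature unregularized)."""
    def objective(w):
        margins = y * X.dot(w)
        log_loss = np.logaddexp(0.0, -margins).sum()   # sum of log(1 + e^{-m})
        penalty = np.sum(alphas * w ** 2)              # feature-specific L2
        return log_loss + penalty
    w0 = np.zeros(X.shape[1])
    return minimize(objective, w0, method='L-BFGS-B').x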
The problem requires me to regularize weights of selected features while training a linear classifier. I am using python SKlearn. Having googled a lot about incorporating asymmetric regularization for classifiers in SKlearn, I could not find any solution. The core library function that performs this task is provided as a DLL for windows hence modifying the existing library is not possible. Is there any machine learning library for python with this kind of flexibility? Any kind of help will be appreciated.
0
1
206
0
26,011,654
0
0
0
0
1
true
0
2014-09-21T20:21:00.000
0
1
1
"Text file busy" error for the mapper in a Hadoop streaming job execution
25,963,463
1.2
python,hadoop,mapreduce,streaming
Can you please try stopping all the daemons using 'stop-all' first and then rerun your MR job after restarting the daemons (using 'start-all')? Let's see if it helps!
I have an application that creates text files with one line each and dumps it to hdfs. This location is in turn being used as the input directory for a hadoop streaming job. The expectation is that the number of mappers will be equal to the "input file split", which is equal to the number of files in my case. Somehow all the mappers are not getting triggered, and I see a weird issue in the streaming output dump: Caused by: java.io.IOException: Cannot run program "/mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411140750872_0001/container_1411140750872_0001_01_000336/./CODE/python_mapper_unix.py": error=26, Text file busy "python_mapper.py" is my mapper file. Environment details: a 40 node aws r3.xlarge AWS EMR cluster [no other job runs on this cluster]. When this streaming jar is running, no other job is running on the cluster, hence none of the external processes should be trying to open the "python_mapper.py" file. Here is the streaming jar command: ssh -o StrictHostKeyChecking=no -i hadoop@ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar -files CODE -file CODE/congfiguration.conf -mapper CODE/python_mapper.py -input /user/hadoop/launchidlworker/input/1 -output /user/hadoop/launchidlworker/output/out1 -numReduceTasks 0
0
1
317
0
25,996,185
0
0
0
0
1
false
1
2014-09-22T02:34:00.000
0
1
0
Pybrain Reinforcement Learning dynamic output
25,965,953
0
python,pybrain,reinforcement-learning
It is certainly possible to train a neural network (based on pybrain or otherwise) to make predictions of this sort that are better than a coin toss. However, weather prediction is a very complex art, even for people who do it as their full-time profession and have been for decades. Those weather forecasters have much bigger neural networks inside their head than pybrain can simulate. If it were possible to make accurate predictions in the way you describe, then it would have been done long ago. For this reason, I wouldn't expect to do better than (or even as well as) the local weather forecaster. So, if your goal is to learn pybrain, I would pick a less complex system to model, and if your goal is to predict the weather, I would suggest www.wunderground.com.
Can you use Reinforcement Learning from Pybrain on dynamically changing output? For example, weather: let's say you have 2 attributes, Humidity and Wind, and the output will be either Rain or NO_Rain (and all attributes are either going to have a 1 for true or 0 for false in the text file I am using). Can you use Reinforcement Learning on this type of problem? The reason I ask is that sometimes even if we have humidity it does not guarantee that it's going to rain.
0
1
241
0
26,202,109
0
1
0
0
1
true
0
2014-09-24T08:11:00.000
1
1
0
Tell IPython Parallel to use Pickle again after Dill has been activated
26,011,787
1.2
ipython,pickle,ipython-parallel,dill
I'm the dill author. I don't know if IPython does anything unusual, but you can revert to pickle if you like through dill directly with dill.extend(False)… although this is a relatively new feature (not yet in a stable release). If IPython doesn't have a dv.use_pickle() (it doesn't at the moment), it should… and could just use the above to do it.
I'm developing a distributed application using IPython parallel. There are several tasks which are carried out one after another on the IPython cluster engines. One of these tasks inevitably makes use of closures. Hence, I have to tell IPython to use Dill instead of Pickle by calling dv.use_dill(). Though this should be temporarily. Is there any way to activate Pickle again once Dill is enabled? I couldn't find any function (something of the form dv.use_pickle()) which would make such an option explicit.
0
1
186
0
26,047,701
0
1
0
0
2
false
6
2014-09-25T20:07:00.000
-1
3
0
import anaconda packages to IDLE?
26,047,185
-0.066568
python,anaconda
You should try starting IDLE with the Anaconda interpreter instead. AFAIK it's too primitive an IDE for you to be able to configure which interpreter it uses. So if Anaconda doesn't ship one, use a different IDE instead, such as PyCharm, PyDev, Eric, Sublime2, Vim, or Emacs.
I installed numpy, scipy, matplotlib, etc through Anaconda. I set my PYTHONPATH environment variable to include C://Anaconda; C://Anaconda//Scripts; C://Anaconda//pkgs;. import sys sys.path shows that IDLE is searching in these Anaconda directories. conda list in the command prompt shows that all the desired packages are installed on Anaconda. But import numpy in IDLE gives me the error No module named numpy. Suggestions? How do I tell IDLE where to look for modules/packages installed via Anaconda? I feel like I'm missing something obvious but I can't find an answer on any previous Overflow questions.
0
1
7,062
0
26,067,690
0
1
0
0
2
false
6
2014-09-25T20:07:00.000
2
3
0
import anaconda packages to IDLE?
26,047,185
0.132549
python,anaconda
You need to add those directories to PATH, not PYTHONPATH, and it should not include the pkgs directory.
I installed numpy, scipy, matplotlib, etc through Anaconda. I set my PYTHONPATH environment variable to include C://Anaconda; C://Anaconda//Scripts; C://Anaconda//pkgs;. import sys sys.path shows that IDLE is searching in these Anaconda directories. conda list in the command prompt shows that all the desired packages are installed on Anaconda. But import numpy in IDLE gives me the error No module named numpy. Suggestions? How do I tell IDLE where to look for modules/packages installed via Anaconda? I feel like I'm missing something obvious but I can't find an answer on any previous Overflow questions.
0
1
7,062
0
26,089,798
1
0
0
0
1
false
2
2014-09-28T05:16:00.000
0
2
0
How to convert xml file of stack overflow dump to csv file
26,081,880
0
python-3.x,data-dump
Use one of the Python xml modules to parse the .xml file. Unless you have much more than 27 GB of RAM, you will need to do this incrementally, so limit your choices accordingly. Use the csv module to write the .csv file. Your real problem is this: CSV files are lines of fields; they represent a rectangular table. XML files, in general, can represent more complex structures: hierarchical databases, and/or multiple tables. So your real problem is to understand the data dump format well enough to extract records to write to the .csv file.
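A sketch of the incremental route with the standard library, assuming the usual Stack Exchange dump layout of one <row .../> element per record (the attribute names below are assumptions to adapt to the particular dump table):

import csv
import xml.etree.ElementTree as ET

fields = ['Id', 'PostTypeId', 'CreationDate', 'Score']   # columns you want

with open('posts.csv', 'w') as out:
    writer = csv.writer(out)
    writer.writerow(fields)
    for _, elem in ET.iterparse('Posts.xml'):
        if elem.tag == 'row':
            writer.writerow([elem.get(f, '') for f in fields])
            elem.clear()            # free memory: crucial for a 27 GB file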
I have a Stack Overflow data dump file in .xml format, nearly 27 GB, and I want to convert it to a .csv file. Could somebody please suggest tools or a Python program to convert the XML to CSV?
0
1
704
0
26,099,751
0
1
0
0
1
false
3
2014-09-29T09:06:00.000
3
1
0
python scipy ode dopri5 'larger nmax needed'
26,096,209
0.53705
python,scipy
nmax refers to the maximum number of internal steps that the solver will take. The default is 500. You can change it with the nsteps argument of the set_integrator method. E.g. ode(f).set_integrator('dopri5', nsteps=1000) (The Fortran code calls this NMAX, and apparently the Fortran name was copied to the error message in the python code for the "dopri5" solver. In the ode class API, all the solvers ("dopri5", "vode", "lsoda", etc) consistently call this solver parameter nsteps, so scipy should change the error message used in the python code to say nsteps.)
While using scipy 0.13.0, ode(f).set_integrator('dopri5'), I get the error message - larger nmax is needed. I looked for nmax in ode.py but I can't see the variable. I guess that the number of calls for integration exceeds the allowed default value. How can I increase the nmax value?
0
1
2,600
0
54,617,685
0
0
0
0
2
false
53
2014-09-29T11:21:00.000
12
9
0
Rename unnamed column pandas dataframe
26,098,710
1
python,pandas,csv
The solution can be improved as data.rename(columns={0: 'new column name'}, inplace=True). There is no need to use 'Unnamed: 0'; simply use the column number, which is 0 in this case, and then supply the 'new column name'.
My csv file has no column name for the first column, and I want to rename it. Usually, I would do data.rename(columns={'oldname':'newname'}, inplace=True), but there is no name in the csv file, just ''.
0
1
116,651
0
58,729,636
0
0
0
0
2
false
53
2014-09-29T11:21:00.000
5
9
0
Rename unnamed column pandas dataframe
26,098,710
0.110656
python,pandas,csv
Try the below code: df.columns = ['A', 'B', 'C', 'D']
My csv file has no column name for the first column, and I want to rename it. Usually, I would do data.rename(columns={'oldname':'newname'}, inplace=True), but there is no name in the csv file, just ''.
0
1
116,651
0
26,111,537
0
0
0
0
2
false
0
2014-09-29T16:21:00.000
0
2
0
method for implementing regression tree on raster data - python
26,104,434
0
python,regression,weka,raster,landsat
I have had some experience using LandSat Data for the prediction of environmental properties of soil, which seems to be somewhat related to the problem that you have described above. Although I developed my own models at the time, I could describe the general process that I went through in order to map the predicted data. For the training data, I was able to extract the LandSat values (in addition to other properties) for the spatial points where known soil samples were taken. This way, I could use the LandSat data as inputs for predicting the environmental data. A part of this data would also be reserved for testing to confirm that the trained models were not overfitting to training data and that it predicted the outputs well. Once this process was completed, it would be possible to map the desired area by getting the spatial information at each point of the desired area (matching the resolution of the desired image). From there, you should be able to input these LandSat factors into the model for prediction and the output used to map the predicted depth. You could likely just use Weka in this case to predict all of the cases, then use another tool to build the map from your estimates. I believe I whipped up some code long ago to extract each of my required factors in ArcGIS, but it's been a while since I did this. There should be some good tutorials out there that could help you in that direction. I hope this helps in your particular situation.
I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do: My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each representing the reflectance values in different Landsat bands, say [B1.tif, B2.tif, ..., B7.tif] that I want to use as my independent variables to predict lake depth. For my training data, I have a shapefile of ~6000 points of known lake depth. To create a tree, I extracted the corresponding reflectance values for each of those points, then exported that to a table. I then used that table in weka, a machine-learning software, to create a 600-branch regression tree that would predict depth values based on the set of reflectance values. But because the tree is so large, I can't write it in python manually. I ran into the python-weka-wrapper module so I can use weka in python, but have gotten stuck with the whole raster part. Since my data has an extra dimension (if converted to array, each independent variable is actually a set of ncolumns x nrows values instead of just a row of values, like in all of the examples), I don't know if it can do what I want. In all the examples for the weka-python-wrapper, I can't find one that deals with spatial data, and I think this is what's throwing me off. To clarify, I want to use the training data (which is a point shapefile/table right now but can- if necessary- be converted into a raster of the same size as the reflectance rasters, with no data in all cells except for the few points I have known depth data at), to build a regression tree that will use the reflectance rasters to predict depth. Then I want to apply that tree to the same set of reflectance rasters, in order to obtain a raster of predicted depth values everywhere. I realize this is confusing and I may not be doing the best job at explaining. I am open to other options besides just trying to implement weka in python, such as sklearn, as long as they are open source. My question is, can what I described be done? I'm pretty sure it can, as it's very similar to image classification, with the exception that the target values (depth) are continuous and not discrete classes but so far I have failed. If so, what is the best/most straight-forward method and/or are there any examples that might help? Thanks
0
1
709
0
26,123,179
0
0
0
0
2
false
0
2014-09-29T16:21:00.000
0
2
0
method for implementing regression tree on raster data - python
26,104,434
0
python,regression,weka,raster,landsat
It sounds like you are not using any spatial information to build your tree (such as information on neighboring pixels), just reflectance. So, you can apply your decision tree to the pixels as if the pixels were all in a one-dimensional list or array. A 600-branch tree for a 6000 point training data file seems like it may be overfit. Consider putting in an option that requires the tree to stop splitting when there are fewer than N points at a node or something similar. There may be a pruning factor that can be set as well. You can test different settings till you find the one that gives you the best statistics from cross-validation or a held-out test set.
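A sketch of that pixel-as-row treatment with scikit-learn (array names and shapes are assumptions; loading the rasters into NumPy is left to your GIS toolchain, and the leaf-size setting stands in for the pruning suggestion above):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# bands: (nrows, ncols, nbands) stack of reflectance rasters (assumed loaded).
# X_train: (npoints, nbands) reflectance at known-depth points; y_train: depths.
model = DecisionTreeRegressor(min_samples_leaf=10)   # limits overfitting
model.fit(X_train, y_train)

nrows, ncols, nbands = bands.shape
pred = model.predict(bands.reshape(-1, nbands))      # one row per pixel
depth_raster = pred.reshape(nrows, ncols)            # map of predicted depth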
I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do: My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each representing the reflectance values in different Landsat bands, say [B1.tif, B2.tif, ..., B7.tif] that I want to use as my independent variables to predict lake depth. For my training data, I have a shapefile of ~6000 points of known lake depth. To create a tree, I extracted the corresponding reflectance values for each of those points, then exported that to a table. I then used that table in weka, a machine-learning software, to create a 600-branch regression tree that would predict depth values based on the set of reflectance values. But because the tree is so large, I can't write it in python manually. I ran into the python-weka-wrapper module so I can use weka in python, but have gotten stuck with the whole raster part. Since my data has an extra dimension (if converted to array, each independent variable is actually a set of ncolumns x nrows values instead of just a row of values, like in all of the examples), I don't know if it can do what I want. In all the examples for the weka-python-wrapper, I can't find one that deals with spatial data, and I think this is what's throwing me off. To clarify, I want to use the training data (which is a point shapefile/table right now but can- if necessary- be converted into a raster of the same size as the reflectance rasters, with no data in all cells except for the few points I have known depth data at), to build a regression tree that will use the reflectance rasters to predict depth. Then I want to apply that tree to the same set of reflectance rasters, in order to obtain a raster of predicted depth values everywhere. I realize this is confusing and I may not be doing the best job at explaining. I am open to other options besides just trying to implement weka in python, such as sklearn, as long as they are open source. My question is, can what I described be done? I'm pretty sure it can, as it's very similar to image classification, with the exception that the target values (depth) are continuous and not discrete classes but so far I have failed. If so, what is the best/most straight-forward method and/or are there any examples that might help? Thanks
0
1
709
0
38,604,808
0
0
0
0
1
false
0
2014-10-03T11:19:00.000
0
2
0
Way to compute the value of the loss function on data for an SGDClassifier?
26,178,035
0
python,machine-learning,scikit-learn
The above answer was too short, is outdated, and might be misleading. Using the score method can only give accuracy (it's in BaseEstimator). If you want the loss function, you can either call the private function _get_loss_function (defined in BaseSGDClassifier), or access the BaseSGDClassifier.loss_functions class attribute, which will give you a dict whose entries are the callables for the loss functions (with default settings). Also, using sklearn.metrics might not give the exact loss used for minimization (due to regularization, and what exactly is being minimized), but you can compute it by hand anyway. The exact code for the loss function is defined in Cython code (sgd_fast.pyx; you can look up the code in the scikit-learn GitHub repo). I'm looking for a good way to plot the minimization progress; I will probably redirect stdout and parse the output. BTW, I'm using 0.17.1, so this is an update to the answer.
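One concrete way to monitor it without touching private APIs, if you train with loss='log' (a sketch; the chunk stream, classes list, and held-out X_val/y_val arrays are assumptions, and as noted above the metric excludes the regularization term):

from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss

clf = SGDClassifier(loss='log')
for i, (X_chunk, y_chunk) in enumerate(chunks):        # your partial-fit stream
    clf.partial_fit(X_chunk, y_chunk, classes=[0, 1])
    if i % 10 == 0:                                    # every n iterations
        print(i, log_loss(y_val, clf.predict_proba(X_val)))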
I'm using an SGDClassifier in combination with the partial fit method to train with lots of data. I'd like to monitor when I've achieved an acceptable level of convergence, which means I'd like to know the loss every n iterations on some data (possibly training, possibly held-out, maybe both). I know this information is available if I pass verbose=1 in the constructor of the classifier, but I'd like to query it programmatically rather than visually. I also know I can use the score method to get accuracy, but I'd like actual loss as measured by my chosen loss function. Does anyone know how to do this?
0
1
1,528
0
26,199,110
0
0
0
0
1
false
0
2014-10-05T02:30:00.000
0
1
0
Is there a way to take out the artist and song from the array?
26,199,088
0
python,python-2.7
Assuming this array is a string array of x strings, you can create a substring from the start of each string up to the index of the second instance of '|'.
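Or, a touch more directly with split (assuming each element is one such '|'-delimited string):

s = 'Sonny Rollins|Who Cares?|Sonny Rollins And Friends|Jazz| Various|'
artist, song = s.split('|')[:2]
print(artist, '-', song)    # Sonny Rollins - Who Cares?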
Suppose you have this type of array (Sonny Rollins|Who Cares?|Sonny Rollins And Friends|Jazz| Various|, Westminster Philharmonic Orchestra conducted by Kenneth Alwyn|Curse of the Werewolf: Prelude|Horror!|Soundtrack|1996). Is there any possible way to only take out Sonny Rollins and Who cares from the array?
1
1
27
0
29,718,799
0
0
0
0
1
false
1
2014-10-06T07:04:00.000
0
1
0
preclassified trained twitter comments for categorization
26,211,308
0
python,twitter,machine-learning,classification,nltk
"The thing is that how do I even generate/create a training data for such a huge data" - I would suggest finding a training data set that could help you with the categories you are interested in. So let's say for price-related articles, you might want to find a training data set that is all about price-related articles, and then perhaps expand it by using synonyms for key-words like "cheap" and so on. And perhaps look into sentence structure to find out whether the structure of the sentence helps with your classifier algorithm. "If not then what is the best approach to create a training data for multi-class classification of text/comments?" - key-words, pulling articles that are all about related categories, and then go from there. Lastly, I suggest being very familiar with NLTK's corpus library; this might also help you with retrieving training data. As for your last question, I'm kinda confused about what you mean by 'multiple categories to classify the comments into' - do you mean having multiple classifiers for a particular comment to belong to? So a comment can belong to one or more classifiers?
So I have some 1 million lines of twitter comments data in csv format. I need to classify them into certain categories, like whether somebody is talking about: "product longevity", "cheap/costly", "on sale/discount", etc. As you can see, I have multiple classes to classify these tweets into. The thing is, how do I even generate/create training data for such a huge dataset? Silly question, but I was wondering whether or not there are already preclassified/tagged comment datasets to train our model with? If not, then what is the best approach to create training data for multi-class classification of text/comments? While I have tried and tested NaiveBayes for sentiment classification on a smaller dataset, could you please suggest which classifier I should use for this problem (multiple categories to classify the comments into). Thanks!!!
0
1
149
0
68,602,469
0
0
0
0
1
false
11
2014-10-07T21:59:00.000
0
4
0
Python: DBSCAN in 3 dimensional space
26,246,015
0
python,cluster-analysis,dbscan
Why not just flatten the data to 2 dimensions with PCA and use DBSCAN with only 2 dimensions? Seems easier than trying to custom build something else.
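A minimal sketch of that idea with scikit-learn (note that sklearn.cluster.DBSCAN actually accepts points of any dimensionality, so the PCA step is optional; eps and min_samples below are placeholder values):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

points = np.random.rand(500, 3)  # hypothetical 3-D point cloud

# Option 1: flatten to 2 dimensions first, as suggested above.
flattened = PCA(n_components=2).fit_transform(points)
labels_2d = DBSCAN(eps=0.1, min_samples=5).fit_predict(flattened)

# Option 2: run DBSCAN directly on the 3-D points (euclidean metric).
labels_3d = DBSCAN(eps=0.1, min_samples=5).fit_predict(points)
```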
I have been searching around for an implementation of DBSCAN for 3 dimensional points without much luck. Does anyone know a library that handles this, or has any experience with doing this? I am assuming that the DBSCAN algorithm can handle 3 dimensions, by having the eps value be a radius and the distance between points measured by euclidean separation. If anyone has tried implementing this and would like to share, that would also be greatly appreciated. Thanks.
0
1
15,341
0
26,331,726
0
0
0
0
1
false
2
2014-10-08T13:41:00.000
1
2
0
Can lambdify return an array with dtype np.float128?
26,258,420
0.099668
python,numpy,sympy
If you need a lot of precision, you can try using SymPy floats, or mpmath directly (which is part of SymPy), which provides arbitrary precision. For example, sympy.Float('2.0', 100) creates a float of 2.0 with 100 digits of precision. You can use something like sympy.sin(2).evalf(100) to get 100 digits of sin(2) for instance. This will be a lot slower than numpy because it is arbitrary precision, meaning it doesn't use machine floats, and it is implemented in pure Python (whereas numpy is written in Fortran and C).
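For instance (a tiny sketch; 100 digits is just an example precision):

```python
import sympy

# A SymPy float of 2.0 carrying 100 significant digits.
two = sympy.Float('2.0', 100)

# Evaluate sin(2) to 100 digits with arbitrary-precision arithmetic.
high_precision = sympy.sin(2).evalf(100)
print(high_precision)
```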
I am solving a large non-linear system of equations and I need a high degree of numerical precision. I am currently using sympy.lambdify to convert symbolic expressions for the system of equations and its Jacobian into vectorized functions that take ndarrays as inputs and return an ndarray as outputs. By default, lambdify returns an array with dtype of numpy.float64. Is it possible to have it return an array with dtype numpy.float128? Perhaps this requires the inputs to have dtype of numpy.float128?
0
1
667
0
26,286,979
0
0
0
0
1
false
0
2014-10-09T14:19:00.000
0
3
0
How can I find the break frequencies/3dB points from bandpass filter frequency sweep data in python?
26,280,838
0
python,math,signal-processing
Assuming that you've loaded multiple readings of the PSD from the signal analyzer, try averaging them before attempting to find the band edges. If the signal isn't changing too dramatically, the averaging process might smooth away the peaks, valleys and noise within the passband, making it easier to find the edges. This is what many spectrum analyzers do to produce a smoother PSD. In case that wasn't clear: assume that each reading gives you 128 tuples of frequency and power, and that you capture 100 of these buffers of data. Now average the 100 samples from bin 0, then from bin 1, then 2, ..., up to bin 127. Now try to locate the passband edges on this averaged data; it should be easier than on any single buffer. Note that I used 100 as an example: if your data is very noisy, it may require more; if there isn't much noise, fewer. Be careful when doing the averaging. Your data is in dB. To add the samples together in order to find an average, you must first convert the dB data back to linear power, do the additions, divide to find the average, and then convert the averaged power back into dB.
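A sketch of the dB-aware averaging with NumPy (the shapes are the illustrative 100 buffers x 128 bins from above; the data here is random placeholder values):

```python
import numpy as np

readings = np.random.randn(100, 128) - 60.0  # hypothetical PSD buffers in dB

# Convert dB to linear power, average across buffers per bin,
# then convert the averaged power back to dB.
linear = 10.0 ** (readings / 10.0)
mean_linear = linear.mean(axis=0)
averaged_db = 10.0 * np.log10(mean_linear)
```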
The data that I have is stored in a 2D list where one column represents a frequency and the other column is its corresponding dB. I would like to programmatically identify the frequencies of the 3 dB points on either end of the passband. I have a few ideas on how to do this, but they all have drawbacks: (1) find the maximum point, then the average of points in the passband, then find points about 3 dB lower; (2) use the sympy library to perform numerical differentiation and identify the critical points/inflection points; (3) use a histogram/bin function to find the amplitude of the passband. Drawbacks: (1) is sensitive to spikes and I'm not quite sure how to do it; (2) I don't understand the math involved, and the data is noisy, which could lead to a lot of false positives; (3) correlating the amplitude values with list index values could be tricky. Can you think of better ideas and/or ways to implement what I have described?
0
1
1,113
0
26,303,687
0
0
0
0
1
false
0
2014-10-10T12:55:00.000
1
2
0
Algorithm for matching objects
26,299,978
0.099668
python,algorithm,pattern-matching,cluster-analysis,data-mining
If your comparison works by "create a sum of all features and find those with the closest sum", there is a simple trick to find close objects: put all objects into an array, calculate all the sums, and sort the array by sum. If you take any index, the objects close to it will now have close indices as well. So to find the 5 closest objects, you just need to look at index-5 to index+5 in the sorted array.
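A minimal sketch of that trick (the feature vectors are random placeholders):

```python
import numpy as np

objects = np.random.rand(1000, 4)   # hypothetical feature vectors
sums = objects.sum(axis=1)          # one sum per object
order = np.argsort(sums)            # object ids sorted by sum
rank = np.argsort(order)            # position of each object in the sorted order

# For object i, candidates with the closest sums sit within a small
# window of its sorted position.
i = 42
pos = rank[i]
window = order[max(0, pos - 5): pos + 6]
candidates = [j for j in window if j != i]
```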
I have 1,000 objects, each object has 4 attribute lists: a list of words, images, audio files and video files. I want to compare each object against: a single object, Ox, from the 1,000; every other object. A comparison will be something like: sum(words in common + images in common + ...). I want an algorithm that will help me find the closest 5, say, objects to Ox, and a (different?) algorithm to find the closest 5 pairs of objects. I've looked into cluster analysis and maximal matching, and they don't seem to exactly fit this scenario. I don't want to use these methods if something more apt exists, so does this look like a particular type of algorithm to anyone, or can anyone point me in the right direction to applying the algorithms I mentioned to this?
0
1
543
0
26,714,084
0
0
0
0
1
false
0
2014-10-10T16:00:00.000
1
3
0
Spotfire column title colors
26,303,493
0.066568
python,colors,crosstab,tibco,spotfire
One can color according to category by using Properties -> Color -> Add Rule, where you can see many conditions to apply to your visualization.
Spotfire 5.5. Hi, I am looking for a way to color code or group columns together in a Spotfire cross-table. I have three categories (nearest, any, all) and three columns associated with each category. Is there a way I can visually group these columns with their corresponding category? Is there a way to change column heading color? Is there a way to put a border around the three column groups? Can I display their category above the three corresponding columns? Thanks
0
1
3,241
0
26,310,892
0
0
0
0
1
false
0
2014-10-11T03:41:00.000
-1
1
0
Half-integer indices
26,310,822
-0.197375
python,indexing
You can write a __getitem__ that takes arbitrary objects as indices (floating point numbers in particular).
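For illustration, here is one possible sketch (the class name and the internal shift convention are invented for this example, not a standard recipe):

```python
import numpy as np

class FaceArray:
    """Wrap face-centered data so users index with half-integers:
    grid[2.5] is the face between cells 2 and 3. Internally, face
    i + 1/2 is stored at integer slot i; that shift is hidden here."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def _slot(self, index):
        slot = index - 0.5
        # Use a tolerance so values like 5/2 computed in floating
        # point still land on the intended face.
        if abs(slot - round(slot)) > 1e-9:
            raise IndexError("face data must be indexed at i + 1/2")
        return int(round(slot))

    def __getitem__(self, index):
        return self._data[self._slot(index)]

    def __setitem__(self, index, value):
        self._data[self._slot(index)] = value

faces = FaceArray([10.0, 20.0, 30.0])
print(faces[0.5], faces[1.5])  # 10.0 20.0
```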
I am working on a fluid dynamics simulation tool in Python. Traditionally (at least in my field), integer indices refer to the center of a cell. Some quantities are stored on the faces between cells, and in the literature are denoted by half-integer indices. In codes, however, these are shifted to integers to fit into arrays. The problem is, the shift is not always consistent: do you add a half or subtract? If you switch between codes enough, you quickly lose track of the conventions for any particular code. And honestly, there are enough quirky conventions in each code that eliminating a few would be a big help... but not if the remedy is worse than the original problem. I have thought about using even indices for cell centers and odd for faces, but that is counterintuitive. Also, it's rare for a quantity to exist on both faces and in cell centers, so you never use half of your indices. You could also implement functions plus_half(i) and minus_half(i), but that gets verbose and inelegant (at least in my opinion). And of course floating point comparisons are always problematic in case someone gets cute in how they calculate 1/2. Does anyone have a suggestion for an elegant way to implement half-integer indices in Python? I'm sure I'm not the first person to wish for this, but I've never seen it done in a simple way that is obvious to a user (without requiring the user to memorize the shift convention you've chosen). And just to clarify: I assume there is likely to be a remap step hidden from the user to get to integer indices (I intend to wrap NumPy arrays for my data grids). I'm thinking of the interface exposed to the user, rather than how the data is stored.
0
1
149
0
26,340,459
0
0
0
0
1
false
0
2014-10-13T12:17:00.000
0
3
0
How can I subset a data frame based on dates, when my dates column is not the index in Python?
26,339,828
0
python,date,subset
Assuming you're using Pandas. dfQ1 = df[(df.date > Qstartdate) & (df.date < Qenddate)]
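Building on that, a sketch of the quarterly subsets described in the question (the frame and the column name 'date' are hypothetical; the four-month "quarters" follow the question's own definition):

```python
import pandas as pd

df = pd.DataFrame({  # hypothetical frame; substitute your real data
    'date': ['2014-02-01 10:00:00', '2014-06-15 12:30:00', '2014-11-03 08:45:00'],
    'value': [1, 2, 3],
})
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S')

# Boolean masks on the month, without touching the index.
dfQ1 = df[df['date'].dt.month.between(1, 4)]
dfQ2 = df[df['date'].dt.month.between(5, 8)]
dfQ3 = df[df['date'].dt.month.between(9, 12)]
```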
I have a large dataset with a date column (which is not the index) with the following format %Y-%m-%d %H:%M:%S. I would like to create quarterly subsets of this data frame i.e. the data frame dfQ1 would contain all rows where the date was between month [1 and 4], dfQ2 would contain all rows where the date was between month [5 and 8], etc... The header of the subsets is the same as that of the main data frame. How can I do this? Thanks!
0
1
1,077
0
26,362,076
0
0
0
0
2
false
2
2014-10-14T13:24:00.000
0
3
0
Numpy's loadtxt(): OverflowError: Python int too large to convert to C long
26,362,010
0
python,numpy
You need to use a compound dtype, with a separate type per column. Or you can use np.genfromtxt without specifying any dtype, and it will be determined automatically for each column, which may give you what you need with less effort (but perhaps slightly less performance and less error checking too).
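For example (a sketch; the file path and column layout are placeholders):

```python
import numpy as np

# Option 1: let genfromtxt infer a dtype per column.
data = np.genfromtxt('feats.txt', delimiter=' ', dtype=None)

# Option 2: spell out a compound dtype, e.g. an unsigned 64-bit id
# column followed by float features.
dtype = [('node_id', np.uint64), ('f0', np.float64), ('f1', np.float64)]
data = np.genfromtxt('feats.txt', delimiter=' ', dtype=dtype)
```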
I'm trying to load a matrix from a file using numpy. When I use any dtype other than float I get this error: OverflowError: Python int too large to convert to C long The code: X = np.loadtxt(feats_file_path, delimiter=' ', dtype=np.int64 ) The problem is that my matrix has only integers and I can't use a float because the first column in the matrix are integer "keys" that refer to node ids. When I use a float, numpy "rounds" the integer id into something like 32423e^10, and I don't want this behavior. So my questions: How to solve the OverflowError? If it's not possible to solve, then how could I prevent numpy from doing that to the ids?
0
1
2,553
0
26,362,482
0
0
0
0
2
false
2
2014-10-14T13:24:00.000
0
3
0
Numpy's loadtxt(): OverflowError: Python int too large to convert to C long
26,362,010
0
python,numpy
Your number looks like it would fit in the uint64_t type, which is available if you have C99.
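In NumPy terms that type is np.uint64, so a sketch of the same idea (the file path is a placeholder; behavior may depend on your NumPy version and platform) would be:

```python
import numpy as np

# Unsigned 64-bit integers hold ids up to 2**64 - 1, so keys that
# overflow a signed C long may still fit here.
X = np.loadtxt('feats.txt', delimiter=' ', dtype=np.uint64)
```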
I'm trying to load a matrix from a file using numpy. When I use any dtype other than float I get this error: OverflowError: Python int too large to convert to C long The code: X = np.loadtxt(feats_file_path, delimiter=' ', dtype=np.int64 ) The problem is that my matrix has only integers and I can't use a float because the first column in the matrix are integer "keys" that refer to node ids. When I use a float, numpy "rounds" the integer id into something like 32423e^10, and I don't want this behavior. So my questions: How to solve the OverflowError? If it's not possible to solve, then how could I prevent numpy from doing that to the ids?
0
1
2,553
0
26,368,952
0
0
0
0
1
true
1
2014-10-14T14:51:00.000
1
1
0
Testing against NumPy/SciPy sane version pairs
26,363,853
1.2
python,testing,numpy,scipy,integration-testing
This doesn't completely answer your question, but I think the policy of scipy release management since 0.11 or earlier has been to support all of the numpy versions from 1.5.1 up to the numpy version in development at the time of the scipy release.
Testing against NumPy/SciPy includes testing against several versions of them, since there is the need to support all versions since NumPy 1.6 and SciPy 0.11. Testing all combinations would explode the build matrix in continuous integration (e.g. on travis-ci). I've searched the SciPy homepage for notes about version compatibility or sane combinations, but have not found anything useful. So my question is how to safely reduce the number of combinations while maintaining maximum testing coverage. Is it possible to find all combinations in the wild? Or are there certain dependencies between SciPy and NumPy?
0
1
69
0
26,393,748
0
0
0
0
1
false
0
2014-10-15T22:33:00.000
0
1
0
Why we calculate the pseudo-inverse instead of the inverse
26,393,254
0
python,matrix,scipy
Short answer: A positive semi-definite matrix does not have to have full rank, and thus might not be invertible using the normal inverse. Long answer: If cov does not have full rank, it has some eigenvalues equal to zero, and its inverse is not defined (because some eigenvalues of the inverse would be infinitely large). Thus, in order to be able to invert a positive semi-definite covariance matrix ("semi": not all eigenvalues are larger than zero), they use the pseudo-inverse. The latter inverts the non-zero eigenvalues and preserves the zero eigenvalues rather than inverting them to infinity. The determinant of a matrix without full rank is zero. The pseudo-determinant only considers non-zero eigenvalues, yielding a non-zero result. If, however, cov does have full rank, the results should be the same as with the usual inverse and determinant.
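A small numerical illustration of this with NumPy (the rank-deficient matrix is a made-up example):

```python
import numpy as np

# Rank-1, hence singular: the ordinary inverse does not exist.
cov = np.array([[1.0, 1.0],
                [1.0, 1.0]])

print(np.linalg.matrix_rank(cov))   # 1
print(np.linalg.det(cov))           # 0.0 (up to rounding): the determinant vanishes

# The pseudo-inverse inverts only the non-zero eigenvalues.
pinv = np.linalg.pinv(cov)
print(pinv)

# np.linalg.inv(cov) would raise LinAlgError here.
```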
I was looking into the scipy.stats.multivariate_normal function, and the documentation mentions that it uses the pseudo-inverse and pseudo-determinant: "The covariance matrix cov must be a (symmetric) positive semi-definite matrix. The determinant and inverse of cov are computed as the pseudo-determinant and pseudo-inverse, respectively, so that cov does not need to have full rank."
0
1
239
0
26,565,108
0
0
0
0
1
false
1
2014-10-18T17:10:00.000
1
2
0
Sampling parts of a vector from a gaussian mixture model
26,442,403
0.099668
python,numpy,random-sample,normal-distribution,mixture-model
Since for sampling only the relative proportions of the distribution matter, the scaling prefactor can be thrown away. For a diagonal covariance matrix, one can just use the covariance submatrix and mean subvector corresponding to the dimensions of the missing data. For a covariance with off-diagonal elements, the mean and standard deviation of the sampling gaussian will need to be adjusted (conditioned on the observed values).
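A sketch of the conditioning step for one Gaussian component with a full covariance matrix (all names and numbers below are invented for illustration; miss/obs mark the missing and observed dimensions):

```python
import numpy as np

mean = np.array([0.0, 1.0, 2.0])           # hypothetical component mean
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])

x = np.array([np.nan, 0.5, 1.5])           # first entry is missing
miss = np.isnan(x)
obs = ~miss

# Conditional distribution of the missing block given the observed one:
# mu_m + S_mo S_oo^-1 (x_o - mu_o),  S_mm - S_mo S_oo^-1 S_om
s_oo_inv = np.linalg.inv(cov[np.ix_(obs, obs)])
cond_mean = mean[miss] + cov[np.ix_(miss, obs)].dot(s_oo_inv).dot(x[obs] - mean[obs])
cond_cov = (cov[np.ix_(miss, miss)]
            - cov[np.ix_(miss, obs)].dot(s_oo_inv).dot(cov[np.ix_(obs, miss)]))

# Sample only the missing values; the observed ones stay fixed.
x[miss] = np.random.multivariate_normal(cond_mean, cond_cov)
```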
I want to sample only some elements of a vector from a sum of gaussians that is given by their means and covariance matrices. Specifically: I'm imputing data using gaussian mixture model (GMM). I'm using the following procedure and sklearn: impute with mean get means and covariances with GMM (for example 5 components) take one of the samples and sample only the missing values. the other values stay the same. repeat a few times There are two problems that I see with this. (A) how do I sample from the sum of gaussians, (B) how do I sample only part of the vector. I assume both can be solved at the same time. For (A), I can use rejection sampling or inverse transform sampling but I feel that there is a better way utilizing multivariate normal distribution generators in numpy. Or, some other efficient method. For (B), I just need to multiply the sampled variable by a gaussian that has known values from the sample as an argument. Right? I would prefer a solution in python but an algorithm or pseudocode would be sufficient.
0
1
1,040
0
26,509,674
0
0
0
0
1
true
1
2014-10-22T14:02:00.000
2
1
0
How to convert numpy array into libsvm format
26,509,319
1.2
python,arrays,numpy,svm,libsvm
The svmlight format is tailored to classification/regression problems. Therefore, the array X is a matrix with as many rows as data points in your set, and as many columns as features. y is the vector of instance labels. For example, suppose you have 1000 objects (images of bicycles and bananas, for example), featurized in 400 dimensions. X would be 1000x400, and y would be a 1000-vector with a 1 entry where there should be a bicycle, and a -1 entry where there should be a banana.
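Putting that together with dump_svmlight_file (the array sizes are the toy values from the example above; the file name is a placeholder):

```python
import numpy as np
from sklearn.datasets import dump_svmlight_file

X = np.random.rand(1000, 400)             # 1000 objects, 400 features
y = np.random.choice([-1, 1], size=1000)  # +1 bicycle, -1 banana

dump_svmlight_file(X, y, 'data.svmlight')
```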
I have a numpy array for an image and am trying to dump it into the libsvm format of LABEL I0:V0 I1:V1 I2:V2 .. IN:VN. I see that scikit-learn has a dump_svmlight_file and would like to use that if possible, since it's optimized and stable. It takes parameters of X, y, and the output file name. The values I'm thinking about would be: X: the numpy array; y: ????; file output name: self-explanatory. Would this be a correct assumption for X? I'm very confused about what I should do for y though. It appears it needs to be a feature set of some kind. I don't know how I would go about obtaining that, however. Thanks in advance for the help!
0
1
2,922
0
29,172,738
0
0
0
0
1
false
3
2014-10-23T12:24:00.000
0
2
0
Python statsmodel.api logistic regression (Logit)
26,528,019
0
python,statistics,statsmodels,logistic-regression
If the response is on the unit interval, interpreted as a probability, then in addition to loss considerations, another perspective which may help is looking at it as a Binomial outcome, i.e. as a count instead of a Bernoulli. In particular, in addition to the probabilistic response in your problem, is there any counterpart to the number of trials in each case? If there were, then the logistic regression could be reexpressed as a Binomial (count) response, where the (integer) count would be the rounded expected value, obtained as the product of the probability and the number of trials.
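For the thresholding part of the original question, a minimal sketch with statsmodels (the data here is random placeholder material; the 0.5 cutoff is an assumption you may want to tune):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data standing in for the question's training/test sets.
X = sm.add_constant(np.random.randn(100, 3))
y = (np.random.rand(100) > 0.5).astype(int)

results = sm.Logit(y, X).fit()
probs = results.predict(X)          # probabilities in [0, 1]
labels = (probs > 0.5).astype(int)  # hard 0/1 predictions
```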
So I'm trying to do a prediction using Python's statsmodels.api to do logistic regression on a binary outcome. I'm using Logit as per the tutorials. When I try to do a prediction on a test dataset, the output is in decimals between 0 and 1 for each of the records. Shouldn't it be giving me zeros and ones? Or do I have to convert these using a round function or something? Excuse the noobiness of this question; I am starting my journey.
0
1
10,827
0
26,559,152
0
0
0
0
1
false
2
2014-10-24T14:50:00.000
0
1
0
efficient different sized list comparisons
26,550,430
0
python,algorithm,memory-efficient
Nothing is going to be superfast, and there's a lot of data there (half a million comparisons, to start with), but the following should fit in your time and space budget on modern hardware. If possible, start by sorting the wordlists by length, from longest to shortest. (I don't mean sort each wordlist; the order of elements within a wordlist is irrelevant. I mean, sort the collection of wordlists so that you can process the longest wordlist first.) The only point of doing this is to allow the similarity metrics to be stored in a half-diagonal matrix instead of a full matrix, which saves half the matrix space. So if you don't know the lengths of the wordlists before you start, it's not a crisis; it just means that you'll need a bit more space. Note 1: The important thing is that the metric you propose is completely symmetric as long as no wordlist has repeated elements. Without repeated elements, the metric is simply |A⋂B|, regardless of whether A or B is longer, so when you compute the size of the intersection of A and B you can fill in the similarity matrix for both (A,B) and (B,A). Note 2: The description of the algorithm seemed confusing to me when I reread it, so I changed the word "list" to "wordlist" when it refers to one of the thousand input lists, leaving "list" to mean an ordinary Python list. Because lists can't be keys in Python dictionaries, and working on the assumption that the wordlists are implemented as Python lists, it's necessary to somehow identify each wordlist with an identifier which can be used as a key. I hope that's clear. The Algorithm: We need two auxiliary structures: one is the (half-diagonal) result matrix, keyed by pairs of wordlist identifiers, which we initialize to all 0s. The other one is a dictionary keyed by unique data element, mapping onto a list of wordlist identifiers. Then, taking each wordlist in turn, for each element in that wordlist we do the following: (1) If the element is not yet present in the dictionary, add it, mapping to a single-element list containing the current wordlist's identifier. (2) If the element is present in the dictionary but the last element in the corresponding list of ids is the current id, then we've found a repeated element. Since we don't expect repeated elements, either ignore it or issue an error message. (3) Otherwise, we've seen the element before and we have a list of identifiers of wordlists in which the element appears. For each such identifier, increment the similarity count between the current identifier and that identifier. (Note that if we scan wordlists in decreasing order of length, all the identifiers in the list correspond to wordlists which are at least as long as the current one, which is why we sorted the wordlists in the first place.) Finally, append the current identifier to the end of the list, so that the next time that data element is found, the current wordlist will be registered. That's it. The space requirement is O(N² + M), where N is the number of wordlists and M is the total size of all the wordlists. The time requirement is essentially O(M²) in the worst case, that being the case where every wordlist has exactly one element and they are all the same element. (More precisely, it's the sum of the squares of the frequencies of each unique element.)
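A compact Python sketch of the algorithm above (wordlists are plain Python lists identified by their position in the input; the half-diagonal matrix is represented as a dict keyed by ordered id pairs):

```python
from collections import defaultdict

def pairwise_counts(wordlists):
    """Count shared elements for every pair of wordlists, touching each
    element once instead of intersecting every pair directly."""
    seen_in = defaultdict(list)   # element -> ids of wordlists containing it
    counts = defaultdict(int)     # (id1, id2) with id1 < id2 -> shared count

    # Process the longest wordlists first, as described above.
    order = sorted(range(len(wordlists)),
                   key=lambda i: len(wordlists[i]), reverse=True)
    for wid in order:
        for element in wordlists[wid]:
            ids = seen_in[element]
            if ids and ids[-1] == wid:
                continue          # repeated element within one wordlist
            for other in ids:
                counts[(min(other, wid), max(other, wid))] += 1
            ids.append(wid)
    return counts

counts = pairwise_counts([['car', 'dig', 'dog', 'the'],
                          ['fish', 'the', 'dog'],
                          ['dog', 'the', 'a', 'fish', 'fry']])
print(dict(counts))  # e.g. {(0, 2): 2, (1, 2): 3, (0, 1): 2}
```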
I wish to compare around 1000 lists of varying size. Each list might have thousands of items. I want to compare each pair of lists, so potentially around 500,000 comparisons. Each comparison consists of counting how many items of the smaller list exist in the larger list (if they are the same size, pick either list). Ultimately I want to cluster the lists using these counts. I want to be able to do this for two types of data: any textual data; strings of binary digits of the same length. Is there an efficient way of doing this in Python? I've looked at LSHash and other clustering-related algorithms, but they seem to require same-length lists. TIA. An example to try to clarify what I am aiming to do: List A: car, dig, dog, the. List B: fish, the, dog. (No repeats in any list. Not sorted, although I suppose they could be fairly easily. Size of lists varies.) Result: 2, since 'dog' and 'the' are in both lists. In reality the length of each list can be thousands and there are around 1000 such lists, each having to be compared with every other. Continuing the example: List C: dog, the, a, fish, fry. Results: AB: 2, AC: 2, BC: 3.
0
1
159
0
26,591,932
0
1
0
0
1
false
0
2014-10-27T16:06:00.000
1
3
0
how to store duration in a pandas column in minutes:seconds format that allows arithmetic?
26,591,805
0.066568
python,pandas
If you don't want to convert them to datetimes but still want to do math with them, you'd most likely be best off converting them to seconds in a separate column while retaining the string format, or creating a function that converts back to the string format and applying that after any computations.
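A sketch of that approach (column names and sample values are invented):

```python
import pandas as pd

df = pd.DataFrame({'duration': ['12:05', '03:42', '00:59']})

def to_seconds(mmss):
    minutes, seconds = mmss.split(':')
    return int(minutes) * 60 + int(seconds)

def to_mmss(total):
    return '%02d:%02d' % divmod(int(total), 60)

# Keep the display strings, do arithmetic on a parallel seconds column.
df['seconds'] = df['duration'].apply(to_seconds)
total = df['seconds'].sum()
print(to_mmss(total))  # 16:46
```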
Currently I am storing duration in a pandas column using strings. For example '12:05' stands for 12 minutes and 5 seconds. I would like to convert this pandas column from string to a format that allows arithmetic, while retaining the MM:SS format. I would like to avoid storing day, hour, dates, etc.
0
1
173
0
26,625,834
0
0
0
0
1
false
1
2014-10-27T19:39:00.000
0
1
0
Using numpy with PyDev
26,595,519
0
python,eclipse,numpy,pydev
I recommend that you either use the setup.py from the downloaded archive, or download the "superpack" executable for Windows, if you work on Windows anyway. In PyDev, I overcame problems with new libraries by using the Auto Config button. If that doesn't work, another solution could be deleting and reconfiguring the Python interpreter.
Although I've been doing things with python by myself for a while now, I'm completely new to using python with external libraries. As a result, I seem to be having trouble getting numpy to work with PyDev. Right now I'm using PyDev in Eclipse, so I first tried to go to My Project > Properties > PyDev - PYTHONPATH > External Libraries > Add zip/jar/egg, similar to how I would add libraries in Eclipse. I then selected the numpy-1.9.0.zip file that I had downloaded. I tried importing numpy and using it, but I got the following error message in Eclipse: Undefined variable from import: array. I looked this up, and tried a few different things. I tried going into Window > Preferences > PyDev > Interpreters > Python Interpreters. I selected Python 3.4.0, then went to Forced Builtins > New, and entered "numpy". This had no effect, so I tried going back to Window > Preferences > PyDev > Interpreters > Python Interpreters, selecting Python 3.4.0, and then, under Libraries, choosing New Egg/Zip(s), then adding the numpy-1.9.0.zip file. This had no effect. I also tried the String Substitution Variables tab under Window > Preferences > PyDev > Interpreters > Python Interpreters (Python 3.4.0). This did nothing. Finally, I tried simply adding # @UndefinedVariable to the broken lines. When I ran it, it gave me the following error: ImportError: No module named 'numpy' What can I try to get this to work?
0
1
2,141