GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 31,226,011 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-07-04T23:30:00.000 | 1 | 2 | 0 | Random Forest for multi-label classification | 31,225,935 | 0.099668 | python,machine-learning,svm,random-forest,text-classification | It is very hard to answer this question without looking at the data in question.
SVM does have a history of working better with text classification - but machine learning by definition is context dependent.
Consider the parameters by which you are running the random forest algorithm. What are your number and depth of trees? Are you pruning branches? Are you searching a larger parameter space for SVMs and therefore more likely to find a better optimum? | I am making an application for multilabel text classification.
I've tried different machine learning algorithms.
No doubt the SVM with linear kernel gets the best results.
I have also tried the Random Forest algorithm, and the results I have obtained have been very bad: both the recall and precision are very low.
The fact that the linear kernel gives better results suggests to me that the different categories are linearly separable.
Is there any reason the Random Forest results are so low? | 0 | 1 | 2,289 |
0 | 31,234,627 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-07-04T23:30:00.000 | 2 | 2 | 0 | Random Forest for multi-label classification | 31,225,935 | 0.197375 | python,machine-learning,svm,random-forest,text-classification | The ensemble of the random forest performs well across many domains and types of data. They are excellent at reducing error from variance and don't over fit if trees are kept simple enough.
I would expect a forest to perform comparably to a SVM with a linear kernel.
The SVM will tend to overfit more because it does not benefit from being an ensemble.
If you are not using cross-validation of some kind, or at minimum measuring performance on unseen data with a test/training split, then I could see you obtaining this type of result.
Go back and make sure performance is measured on unseen data, and you'll likely see the RF performing more comparably.
Good luck. | I am making an application for multilabel text classification .
I've tried different machine learning algorithms.
No doubt the SVM with linear kernel gets the best results.
I have also tried the Random Forest algorithm, and the results I have obtained have been very bad: both the recall and precision are very low.
The fact that the linear kernel gives better results suggests to me that the different categories are linearly separable.
Is there any reason the Random Forest results are so low? | 0 | 1 | 2,289 |
0 | 38,479,294 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2015-07-10T12:44:00.000 | 1 | 2 | 0 | Matplotlib with multiprocessing freeze computer | 31,341,127 | 0.099668 | python,python-2.7,matplotlib,multiprocessing | I just had a very similar issue in which I have a class which produces plots in parallel. The first time I create a new instance of that class and run the plotting function, everything works perfectly. But if I create a new instance and plot, everything freezes.
I fixed it by writing a bash script which in turn runs a python script with the code for a single class instantiation + plot call. In other words, closing python between one plot call and the next makes a clean slate of your working environment, and the computer does not freeze anymore. This is not an optimal solution, but it's working :) | I have an issue with matplotlib and multiprocessing.
I launch a first process, where I display an image, select an area, and close the figure. Then I launch another process, where I call a graph function that is regularly updated. Up to this point, everything works fine.
Then when I try to launch another process with the SAME graph function, it freezes my whole computer, BUT the background processes still work...
I only have one of these errors (it's not always the same):
error 1 :
XIO: fatal IO error 25 (Inappropriate ioctl for device) on X server
":0.0"
after 4438 requests (4438 known processed) with 30 events remaining. XIO: fatal IO error 11 (Resource temporarily unavailable)
on X server ":0.0"
after 4443 requests (4443 known processed) with 31 events remaining. [xcb] Unknown sequence number while processing queue [xcb]
Most likely this is a multi-threaded client and XInitThreads has not
been called [xcb] Aborting, sorry about that. python:
../../src/xcb_io.c:274: poll_for_event: Assertion
`!xcb_xlib_threads_sequence_lost' failed.
error 2 :
X Error of failed request: BadIDChoice (invalid resource ID chosen
for this connection) Major opcode of failed request: 53
(X_CreatePixmap) Resource id in failed request: 0x5600299 Serial
number of failed request: 4793 Current serial number in output
stream: 4795 XIO: fatal IO error 25 (Inappropriate ioctl for device)
on X server ":0.0"
after 4788 requests (4788 known processed) with 31 events remaining. XIO: fatal IO error 25 (Inappropriate ioctl for device) on
X server ":0.0"
after 4793 requests (4793 known processed) with 32 events remaining.
The weird part is that I can totally launch several processes calling the graph function without any issue; it's the coupling with the first plot that makes it unstable.
When trying to debug, I found out that a simple fig=plt.figure() is enough to crash everything: in fact, any call to plt ...
I read here and there that you can force matplotlib to use the agg backend and that it helps with multiprocessing, but some widgets don't work with it so I would like to avoid this.
I don't really understand why using matplotlib in different processes could cause problems, so if anyone could explain the reasons and/or help me with a workaround, it would be very nice. | 0 | 1 | 1,823 |
0 | 43,903,727 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2015-07-11T22:43:00.000 | 10 | 2 | 0 | Performance difference in pandas read_table vs. read_csv vs. from_csv vs. read_excel? | 31,362,573 | 1 | python,performance,csv,pandas,dataframe | I've found that CSV and tab-delimited text (.txt) are equivalent in read and write speed, both are much faster than reading and writing MS Excel files. However, Excel format compresses the file size a lot.
For the same 320 MB CSV file (16 MB .xlsx)
(i7-7700k, SSD, running Anaconda Python 3.5.3, Pandas 0.19.2)
Using the standard convention import pandas as pd
2 seconds to read .csv df = pd.read_csv('foo.csv') (same for pd.read_table)
15.3 seconds to read .xlsx df = pd.read_excel('foo.xlsx')
10.5 seconds to write .csv df.to_csv('bar.csv', index=False)
(same for .txt)
34.5 seconds to write .xlsx df.to_excel('bar.xlsx', sheet_name='Sheet1', index=False)
To write your dataframes to tab-delimited text files you can use:
df.to_csv('bar.txt', sep='\t', index=False) | I tend to import .csv files into pandas, but sometimes I may get data in other formats to make DataFrame objects.
Today, I just found out about read_table as a "generic" importer for other formats, and wondered if there were significant performance differences between the various methods in pandas for reading .csv files, e.g. read_table, from_csv, read_excel.
Do these other methods have better performance than read_csv?
Is read_csv much different than from_csv for creating a DataFrame? | 0 | 1 | 20,524 |
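A hedged sketch of how a comparison like the one in the answer above can be timed; the file names are placeholders, and the absolute numbers will vary with machine, file size, and pandas version.

```python
import time
import pandas as pd

def timed(label, fn):
    # Crude wall-clock timing; good enough for comparing orders of magnitude.
    start = time.time()
    result = fn()
    print("%s: %.1f s" % (label, time.time() - start))
    return result

df = timed("read_csv", lambda: pd.read_csv("foo.csv"))
timed("read_excel", lambda: pd.read_excel("foo.xlsx"))
timed("to_csv", lambda: df.to_csv("bar.csv", index=False))
timed("to_excel", lambda: df.to_excel("bar.xlsx", sheet_name="Sheet1", index=False))
```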
0 | 31,362,987 | 0 | 0 | 0 | 0 | 2 | true | 15 | 2015-07-11T22:43:00.000 | 28 | 2 | 0 | Performance difference in pandas read_table vs. read_csv vs. from_csv vs. read_excel? | 31,362,573 | 1.2 | python,performance,csv,pandas,dataframe | read_table is read_csv with sep=',' replaced by sep='\t', they are two thin wrappers around the same function so the performance will be identical. read_excel uses the xlrd package to read xls and xlsx files into a DataFrame, it doesn't handle csv files.
from_csv calls read_table, so no. | I tend to import .csv files into pandas, but sometimes I may get data in other formats to make DataFrame objects.
Today, I just found out about read_table as a "generic" importer for other formats, and wondered if there were significant performance differences between the various methods in pandas for reading .csv files, e.g. read_table, from_csv, read_excel.
Do these other methods have better performance than read_csv?
Is read_csv much different than from_csv for creating a DataFrame? | 0 | 1 | 20,524 |
0 | 69,462,087 | 0 | 0 | 0 | 1 | 1 | false | 106 | 2015-07-13T13:56:00.000 | 0 | 9 | 0 | How to export a table dataframe in PySpark to csv? | 31,385,363 | 0 | python,apache-spark,dataframe,apache-spark-sql,export-to-csv | try display(df) and use the download option in the results. Please note: only 1 million rows can be downloaded with this option but its really quick. | I am using Spark 1.3.1 (PySpark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it "table") to a csv file so I can manipulate it and plot the columns. How do I export the DataFrame "table" to a csv file?
Thanks! | 0 | 1 | 340,481 |
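Two hedged sketches of common ways to get a PySpark DataFrame into csv; `table` is the DataFrame name from the question, and note that `DataFrame.write.csv` requires a newer Spark release than the 1.3.1 mentioned above.

```python
# Option 1: collect to the driver and let pandas write a single csv file.
# Only safe when the data fits in driver memory.
table.toPandas().to_csv("table.csv", index=False)

# Option 2 (Spark 2.x and later): write a distributed csv directory,
# with one part-file per partition.
table.write.csv("table_csv_dir", header=True)
```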
0 | 31,619,724 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2015-07-13T18:32:00.000 | 0 | 2 | 0 | Using semantic word representation (e.g. word2vec) to build a classifier | 31,390,838 | 0 | python,classification,word2vec | For a multi-class classification problem of sentences, doc2vec can work just fine, since context rarely changes a lot within the sentence.
If you want to use only Python, I would recommend doc2vec (for building features) followed by xgboost (for training the classifier), which has worked for me on similar problems. | I want to build a classifier for forum posts that will automatically categorize these
posts into some defined categories(so multiclass classification not only binary
classification) by using semantic word representations. For this task I want to make use
of word2vec and doc2vec and check the feasibility of using these models to support a fast
selection of training data for the classifier. At this moment I have tried both models and
they work like charm. However, as I do not want to manually label each sentence to predict
what is it describing, I want to leave this task for the word2vec or doc2vec models. So,
my question is: what algorithm can I use in Python for the classifier? (I was thinking
to apply some clustering over word2vec or doc2vec - manually label each cluster (this
would require some time and is not the best solution). Previously, I made use of
"LinearSVC"(from SVM) and OneVsRestClassifier, however, I labeled each sentence (by
manually training a vector "y_train" ) in order to predict to which class a new test
sentence would belong to. What would be a good alghorithm and method in python to use for
this type of classifier(making use of semantic word representations to train data)? | 0 | 1 | 1,109 |
0 | 31,393,600 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2015-07-13T18:32:00.000 | 2 | 2 | 0 | Using semantic word representation (e.g. word2vec) to build a classifier | 31,390,838 | 1.2 | python,classification,word2vec | The issue with things like word2vec/doc2vec and so on - actually any usupervised classifier - is that it just uses context. So, for example if I have a sentence like "Today is a hot day" and another like "Today is a cold day" it thinks hot and cold are very very similar and should be in the same cluster.
This makes it pretty bad for tagging. Either way, there is a good implementation of Doc2Vec and Word2Vec in the gensim module for Python - you can quickly use the google-news dataset's prebuilt binary and test whether you get meaningful clusters.
The other way you could try is to implement a simple lucene/solr system on your computer and begin tagging a few sentences randomly. Over time lucene/solr will suggest tags for your documents, and they do come out to be pretty decent tags if your data is not really bad.
The issue here is that the problem you're trying to solve isn't particularly easy nor completely solvable - if you have very good/clear data, then you may be able to auto classify about 80-90% of your data ... but if it is bad, you won't be able to auto classify it much. | I want to build a classifier for forum posts that will automatically categorize these
posts into some defined categories(so multiclass classification not only binary
classification) by using semantic word representations. For this task I want to make use
of word2vec and doc2vec and check the feasibility of using these models to support a fast
selection of training data for the classifier. At this moment I have tried both models and
they work like charm. However, as I do not want to manually label each sentence to predict
what is it describing, I want to leave this task for the word2vec or doc2vec models. So,
my question is: what algorithm can I use in Python for the classifier? (I was thinking
to apply some clustering over word2vec or doc2vec - manually label each cluster (this
would require some time and is not the best solution). Previously, I made use of
"LinearSVC"(from SVM) and OneVsRestClassifier, however, I labeled each sentence (by
manually training a vector "y_train" ) in order to predict to which class a new test
sentence would belong to. What would be a good alghorithm and method in python to use for
this type of classifier(making use of semantic word representations to train data)? | 0 | 1 | 1,109 |
0 | 31,399,878 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-07-14T04:30:00.000 | 0 | 1 | 0 | Cannot import python-numpy using Ubuntu | 31,397,675 | 0 | python,linux,ubuntu | try pip freeze to find whether the module numpy is installed, If its not then try pip install numpy, also check weather in the environmental variable that the /Scripts is added because that's where all the packages reside. | Currently I am trying to import numpy using the python command but I am getting the error which is:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
I have already installed the latest version of numpy and my python version is 2.7.10. | 0 | 1 | 103 |
0 | 31,429,440 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-07-14T20:51:00.000 | 0 | 1 | 0 | Python pickle dump memory error | 31,417,091 | 0 | python,list,pickle | If you really want to keep it simple and use something like pickle, the best thing is to use cPickle. This library is written in C and can handle bigger files and is faster than pickle. | I have a very large list that I want to write to file. My list is 2 dimensional, and each element of the list is a 1 dimensional list. Different elements of the 2 dimensional list has 1 dimensional lists of varying size.
When my 2D list was small, pickle dump worked great. But now it just gives me memory error.
Any suggestions on how to store and reload such arrays to disk?
Thanks! | 0 | 1 | 1,136 |
0 | 31,417,912 | 0 | 0 | 0 | 0 | 1 | true | 29 | 2015-07-14T21:12:00.000 | 16 | 7 | 0 | sklearn LogisticRegression and changing the default threshold for classification | 31,417,487 | 1.2 | python,scikit-learn,classification,regression | That is not a built-in feature. You can "add" it by wrapping the LogisticRegression class in your own class, and adding a threshold attribute which you use inside a custom predict() method.
However, some cautions:
The default threshold is actually 0. LogisticRegression.decision_function() returns a signed distance to the selected separation hyperplane. If you are looking at predict_proba(), then you are looking at logit() of the hyperplane distance with a threshold of 0.5. But that's more expensive to compute.
By selecting the "optimal" threshold like this, you are utilizing information post-learning, which spoils your test set (i.e., your test or validation set no longer provides an unbiased estimate of out-of-sample error). You may therefore be inducing additional over-fitting unless you choose the threshold inside a cross-validation loop on your training set only, then use it and the trained classifier with your test set.
Consider using class_weight if you have an unbalanced problem rather than manually setting the threshold. This should force the classifier to choose a hyperplane farther away from the class of serious interest. | I am using LogisticRegression from the sklearn package, and have a quick question about classification. I built a ROC curve for my classifier, and it turns out that the optimal threshold for my training data is around 0.25. I'm assuming that the default threshold when creating predictions is 0.5. How can I change this default setting to find out what the accuracy is in my model when doing a 10-fold cross-validation? Basically, I want my model to predict a '1' for anyone greater than 0.25, not 0.5. I've been looking through all the documentation, and I can't seem to get anywhere. | 0 | 1 | 39,588 |
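A minimal sketch of the wrapper idea from the answer above; this is not an official sklearn API, and the 0.25 threshold is just the value mentioned in the question. As the answer warns, the threshold itself should be chosen inside the cross-validation loop, not from the test set.

```python
from sklearn.linear_model import LogisticRegression

class ThresholdClassifier(object):
    """Wraps a probabilistic classifier and applies a custom decision threshold."""
    def __init__(self, estimator, threshold=0.25):
        self.estimator = estimator
        self.threshold = threshold

    def fit(self, X, y):
        self.estimator.fit(X, y)
        return self

    def predict(self, X):
        # Probability of the positive class compared against the custom threshold.
        proba = self.estimator.predict_proba(X)[:, 1]
        return (proba >= self.threshold).astype(int)

clf = ThresholdClassifier(LogisticRegression(), threshold=0.25)
```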
0 | 31,422,570 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-07-15T01:39:00.000 | 1 | 1 | 0 | How to make dataframe in pandas as numeric? | 31,420,095 | 1.2 | python,numpy,pandas | It seems that you have a list of int in your data frame. To convert it to you need to select the value inside and form data frame.
I suggest this code to convert it:
for col in df:
df[col] = df[col].apply(lambda x: x[0]) | I am learning the book Python for Data Analysis, after running the code from the book I got a pandas dataframe diversity like this:
sex F M
year
1880 [38] [14]
1881 [38] [14]
When I want to use diversity.plot() to draw some pictures, there is TypeError:
Empty 'DataFrame': no numeric data to plot
So, my question is how to deal with this dataframe to make it numeric? | 0 | 1 | 352 |
0 | 31,441,162 | 0 | 0 | 0 | 0 | 1 | true | 9 | 2015-07-15T15:42:00.000 | 7 | 2 | 0 | Using Numba with scikit-learn | 31,435,024 | 1.2 | python,scikit-learn,numba | Scikit-learn makes heavy use of numpy, most of which is written in C and already compiled (hence not eligible for JIT optimization).
Further, the LogisticRegression model is essentially LinearSVC with the appropriate loss function. I could be slightly wrong about that, but in any case, it uses LIBLINEAR to do the solving, which is again a compiled C library.
The makers of scikit-learn also make heavy use of Cython, a Python-to-C compilation system, which again results in optimized, machine-compiled code ineligible for JIT compilation. | Has anyone succeeded in speeding up scikit-learn models using numba and jit compilation? The specific models I am looking at are regression models such as Logistic Regressions.
I am able to use numba to optimize the functions I write using sklearn models, but the model functions themselves are not affected by this and are not optimized, thus not providing a notable increase in speed. Is there a way to optimize the sklearn functions?
Any info about this would be much appreciated. | 0 | 1 | 6,532 |
0 | 31,493,214 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-07-18T13:20:00.000 | 3 | 1 | 0 | How can I get randomized grid search to be more verbose? (seems stopped, but can't diagnose) | 31,491,583 | 1.2 | python,scikit-learn,random-forest,cross-validation,grid-search | As a first step, adding the verbose parameter to the RandomForestClassifier as well could let you see if the search is really stuck. It will display progress in fitting the trees (building tree 88 out of 100 ...).
I don't really know why your search got stuck, but thinking about it removing the search on n_estimators should enable you to grid search the entire space of parameters you specified here in just 8 iterations. | I'm running a relatively large job, which involves doing a randomized grid search on a dataset, which (with a small n_iter_search) already takes a long time.
I'm running it on a 64 core machine, and for about 2 hours it kept 2000 threads active working on the first folds. It then stopped reporting to stdout completely. Its last report was:
[Parallel(n_jobs=-1)]: Done 4 out of 60 | elapsed: 84.7min remaining: 1185.8min
I've noticed on htop that almost all cores are at 0%, which would not happen when training random forests. No feedback or errors from the program; if it weren't for htop I would assume it is still training. This has happened before, so it is a recurring problem. The machine is perfectly responsive and the process seems alive.
I already have verbose = 10. Any thoughts on how I can diagnose what is going on inside the RandomizedSearchCV?
The grid search I'm doing:
rfc = RandomForestClassifier(n_jobs=-1)
param_grid = {
'n_estimators': sp_randint(100, 5000),
'max_features' : ['auto', None],
'min_samples_split' : sp_randint(2, 6)
}
n_iter_search = 20
CV_rfc = RandomizedSearchCV(estimator=rfc, param_distributions=param_grid, n_iter = n_iter_search, verbose = 10,n_jobs = -1) | 0 | 1 | 6,252 |
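A hedged sketch combining the two suggestions in the answer above (a fixed, moderate n_estimators plus verbose output from the forest itself); the import path is sklearn.grid_search in 2015-era scikit-learn and sklearn.model_selection in newer releases.

```python
from scipy.stats import randint as sp_randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.grid_search import RandomizedSearchCV  # sklearn.model_selection in newer versions

# verbose on the estimator shows per-tree progress, so a stalled search becomes visible.
rfc = RandomForestClassifier(n_estimators=500, n_jobs=-1, verbose=1)

param_grid = {
    'max_features': ['auto', None],
    'min_samples_split': sp_randint(2, 6),
}

# n_jobs=1 on the search avoids nesting its parallelism inside the forest's own n_jobs=-1.
search = RandomizedSearchCV(estimator=rfc, param_distributions=param_grid,
                            n_iter=8, verbose=10, n_jobs=1)
```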
0 | 31,493,846 | 0 | 1 | 0 | 0 | 1 | true | 6 | 2015-07-18T17:24:00.000 | 4 | 1 | 0 | How to 'partially' install a Python package | 31,493,649 | 1.2 | python,numpy,pip,python-wheel | This is probably not worth the hassle but it's up to you to make that trade-off. numpy.random.choice is not implemented in Python but in a .pyx file which needs to be compiled to C using Cython.
You could refactor it and construct a new package which implements only that functionality (possibly with a few related data structures). But with recent improvements to Python wheel files, installation of numpy should be much easier than in the past. So I reckon it's easier to install numpy as it is and accept that you have it as a dependency. | I need to use a function in the numpy package, say numpy.random.choice (the standard-library function random.choice samples the list uniformly, while I want to sample from some discrete distributions).
My program will be distributed to a lot of people to develop and test. So that means they should also install numpy before they are able to run the code. I'm now trying to find a way to get rid of installing the whole numpy library.
Definitely rewriting the function myself is one solution (for example using the alias method). But I'm wondering whether there is a way to install only the part of numpy related to numpy.random.choice? | 0 | 1 | 1,071 |
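If the goal is only to avoid the numpy dependency for weighted sampling, a standard-library sketch like the following may be enough (O(n) per draw, unlike the alias method mentioned above):

```python
import bisect
import random

def weighted_choice(items, weights):
    """Draw one item with probability proportional to its weight."""
    cumulative = []
    total = 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    r = random.random() * total
    return items[bisect.bisect(cumulative, r)]

print(weighted_choice(['a', 'b', 'c'], [0.1, 0.3, 0.6]))
```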
0 | 31,497,506 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2015-07-18T22:03:00.000 | 2 | 4 | 0 | ImportError on cv2.so | 31,496,020 | 0.099668 | python,opencv,caffe | The problem has been solved by some tryings.
Since I installed under my ~/.local path, it should be noted that [include], [bin] and [lib] should all point to the local version by modifying the bashrc.
I had only changed the lib path while the other 2 paths remained unchanged and still pointed to the cluster's opencv version 2.4.9 (mine is 2.4.11). | I am trying to run fast-rcnn on a cluster, where cv2.so is not installed for public use. So I directly moved cv2.so into a PATH, but it fails with:
/lib64/libc.so.6: version `GLIBC_2.14' not found
So I have to install the opencv on my local path again, this time it says:
ImportError: /home/username/.local/lib/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv11arrowedLineERNS_3MatENS_6Point_IiEES3_RKNS_7Scalar_IdEEiiid
This really confused me, could anyone give me a hand? | 0 | 1 | 12,670 |
0 | 42,237,346 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2015-07-18T22:03:00.000 | 3 | 4 | 0 | ImportError on cv2.so | 31,496,020 | 0.148885 | python,opencv,caffe | I know this is a little late, but I just got this same error with python 2.7 and opencv 3.1.0 on Ubuntu. Turns out I had to reinstall opencv-python. Running sudo pip install opencv-python did the trick. | I am trying to run fast-rcnn on a cluster, where cv2.so is not installed for public use. So I directly move the cv2.so into a PATH, but it turns as:
/lib64/libc.so.6: version `GLIBC_2.14' not found
So I have to install the opencv on my local path again, this time it says:
ImportError: /home/username/.local/lib/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv11arrowedLineERNS_3MatENS_6Point_IiEES3_RKNS_7Scalar_IdEEiiid
This really confused me, could anyone give me a hand? | 0 | 1 | 12,670 |
0 | 31,499,284 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2015-07-19T07:02:00.000 | 6 | 1 | 0 | Performance difference between filling existing numpy array and creating a new one | 31,498,784 | 1 | python,numpy | The answer depends on the size of your arrays. While allocating a new memory region takes nearly a fixed amount of time, the time to fill this memory region grows linear with size.
But filling newly allocated memory with numpy.zeros is nearly twice as fast as filling an existing array with numpy.fill, and three times faster than item setting x[:] = 0.
So on my machine, filling vectors with less than 800 elements is faster than creating new vectors, with more than 800 elements creating new vectors gets faster. | In iterative algorithms, it is common to use large numpy arrays many times. Frequently the arrays need to be manually "reset" on each iteration. Is there a performance difference between filling an existing array (with nans or 0s) and creating a new array? If so, why? | 0 | 1 | 1,368 |
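A hedged micro-benchmark of the three options discussed in the answer above; the ~800-element crossover quoted there is machine dependent.

```python
import timeit
import numpy as np

def bench(n, repeats=10000):
    x = np.empty(n)
    def slice_assign():
        x[:] = 0.0
    t_new = timeit.timeit(lambda: np.zeros(n), number=repeats)
    t_fill = timeit.timeit(lambda: x.fill(0.0), number=repeats)
    t_set = timeit.timeit(slice_assign, number=repeats)
    print("n=%d  np.zeros=%.4fs  fill=%.4fs  x[:]=0 -> %.4fs" % (n, t_new, t_fill, t_set))

for n in (100, 1000, 100000):
    bench(n)
```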
0 | 31,504,403 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-07-19T16:55:00.000 | 0 | 1 | 0 | Distributed mapping and lookup with Spark | 31,503,613 | 0 | python,apache-spark,pyspark | Although the effect depends on your data set and operations, here are options I come up with
Optimize the text format (Avro, Kryo etc.) so that the hashmap can be built quickly
Combine the files into one HDFS file and increase the block replication factor of the file, so that many Spark executors can read the file locally
Use Spark broadcast variables for the hashmap so that executors don't need to deserialize it | I need to build a hashmap using text files and map values using that hashmap.
The files are already in HDFS.
I want to map data using this hashmap.
The text files are fairly small (I have around 10 files of a few MB each that I need to use for building the hashmap).
If the files are already on HDFS is there anything else that I can do to optimize the processing, so that building the hashmap and the lookup will happen in a distributed fashion? | 0 | 1 | 502 |
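A hedged PySpark sketch of the broadcast-variable option from the answer above; it assumes `sc` is an existing SparkContext, `big_rdd` is the data to be mapped, and the small files are tab-separated key/value pairs (the path and parsing are placeholders).

```python
# Build the hashmap on the driver from the small files (a few MB, so collect() is fine).
lookup = {}
for line in sc.textFile("hdfs:///path/to/small_files/*").collect():
    key, value = line.split("\t", 1)
    lookup[key] = value

# Ship one read-only copy to every executor instead of serializing it per task.
broadcast_lookup = sc.broadcast(lookup)

mapped = big_rdd.map(lambda record: (record, broadcast_lookup.value.get(record)))
```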
0 | 31,528,542 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-07-20T22:10:00.000 | 1 | 1 | 0 | FFT in Python with Explanations | 31,527,206 | 0.197375 | python,audio,signal-processing,fft | FFT data is in units of normalized frequency where the first point is 0 Hz and one past the last point is fs Hz. You can create the frequency axis yourself with linspace(0.0, (1.0 - 1.0/n)*fs, n). You can also use fftfreq but the components will be negative.
These are the same if n is even. You can also use rfftfreq I think. Note that this is only the "positive half" of your frequencies, which is probably what you want for audio (which is real-valued). Note that you can use rfft to just produce the positive half of the spectrum, and then get the frequencies with rfftfreq(n,1.0/fs).
Windowing will decrease sidelobe levels, at the cost of widening the mainlobe of any frequencies that are there. N is the length of your signal and you multiply your signal by the window. However, if you are looking in a long signal you might want to "chop" it up into pieces, window them, and then add the absolute values of their spectra.
"is it correct" is hard to answer. The simple approach is as you said, find the bin closest to your frequency and check its amplitude. | I have a WAV file which I would like to visualize in the frequency domain. Next, I would like to write a simple script that takes in a WAV file and outputs whether the energy at a certain frequency "F" exceeds a threshold "Z" (whether a certain tone has a strong presence in the WAV file). There are a bunch of code snippets online that show how to plot an FFT spectrum in Python, but I don't understand a lot of the steps.
I know that wavfile.read(myfile) returns the sampling rate (fs) and the data array (data), but when I run an FFT on it (y = numpy.fft.fft(data)), what units is y in?
To get the array of frequencies for the x-axis, some posters do this where n = len(data):
X = numpy.linspace(0.0, 1.0/(2.0*T), n/2)
and others do this:
X = numpy.fft.fftfreq(n) * fs)[range(n/2)]
Is there a difference between these two methods and is there a good online explanation for what these operations do conceptually?
Some of the online tutorials about FFTs mention windowing, but not a lot of posters use windowing in their code snippets. I see that numpy has a numpy.hamming(N), but what should I use as the input to that method and how do I "apply" the output window to my FFT arrays?
For my threshold computation, is it correct to find the frequency in X that's closest to my desired tone/frequency and check if the corresponding element (same index) in Y has an amplitude greater than the threshold? | 0 | 1 | 1,122 |
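A hedged sketch of the whole pipeline asked about above: read the WAV file, apply a Hamming window, take the real FFT, and compare the magnitude of the bin nearest the target frequency against a threshold. The file name, target frequency F, and threshold Z are placeholders.

```python
import numpy as np
from scipy.io import wavfile

fs, data = wavfile.read("myfile.wav")
if data.ndim > 1:                     # keep a single channel if the file is stereo
    data = data[:, 0]

n = len(data)
window = np.hamming(n)                # multiply the signal by the window before the FFT
spectrum = np.abs(np.fft.rfft(data * window))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

F, Z = 440.0, 1e6                     # example target frequency (Hz) and magnitude threshold
bin_idx = np.argmin(np.abs(freqs - F))
print("tone present:", spectrum[bin_idx] > Z)
```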
0 | 31,540,754 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2015-07-21T13:27:00.000 | 0 | 3 | 0 | Creating arrays in Python by using excel data sheet | 31,540,437 | 0 | python | I might, but as a former user of the excellent xlrd package, I really would recommend switching to pyopenxl. Quite apart from other benefits, each worksheet has a columns attribute that is a list of columns, each column being a list of cells. (There is also a rows) attribute.
Converting your code would be relatively painless as long as there isn't too much and it's reasonably well-written. I believe I've never had to do anything other than pip install openpyxl to add it to a virtual environment.
I observe that there's no code in your question, and it's harder (and more time-consuming) to write examples than point out required changes in code, so since you are an xlrd user I'm going to assume that you can take it from here. If you need help, edit the question and add your problem code. If you get through to what you want, submit it as an answer and mark it correct.
Suffice to say I recently wrote some code to extract Pandas datasets from UK government health statistics, and openpyxl was amazingly helpful in my investigations and easy to use.
Since it appears from the comments this is not a new question I'll leave it at that. | I have an excel file with 234 rows and 5 columns. I want to create an array for each column so that I can read each column separately in xlrd. Can anyone help please? | 0 | 1 | 898 |
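A minimal openpyxl sketch of the per-column access described in the answer above (the file name is a placeholder):

```python
from openpyxl import load_workbook

wb = load_workbook("data.xlsx")
ws = wb.active
# One list of cell values per column, e.g. columns[0] holds the first column.
columns = [[cell.value for cell in col] for col in ws.columns]
```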
0 | 31,581,102 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-07-23T07:10:00.000 | 1 | 2 | 0 | Creating a disk-based data structure | 31,580,478 | 1.2 | python,file,data-structures,disk | There's quite a number of problems you have to solve, some are quite straight forward and some are a little bit more elaborate, but since you want to do it yourself I don't think you minding about filling out details yourself (so I'll skip some parts).
The first simple step is to serialize and deserialize nodes (in order to be able to store them on disk at all). That could be done in an ad hoc manner by giving your nodes a serialize/deserialize method - in addition you might want the serialized data to carry a type indicator so you know which class's deserialize you should use. Note that the on-disk representation of a node must reference other nodes by file offset (either directly or indirectly).
The actual reading or writing of the data is done by ordinary (binary) file operations, but you have to seek to the right position in the file first.
The second step is to have the possibility to allocate space in the file. If you only want write-once behaviour it's quite straightforward to just grow the file, but if you want to modify the data in the file (adding and removing nodes or even replacing them) you will have to cope with situations where regions in the file are no longer in use, and either reuse these or even repack the layout of the file.
Further steps could involve making the update atomic in some sense. One solution is to have a region where you write enough information so that the update can be completed (or abandoned) if it were terminated prematurely; in its simplest form it might just be a list of idempotent operations (operations that yield the same result if you repeat them, e.g. writing particular data to a particular place in the file).
Note that while (some of) the built-in solutions do indeed handle writing and reading the entire graph to/from disk, they do not really handle the situation where you want to read only part of the graph or modify the graph very efficiently (you mostly have to read the whole graph and write the complete graph in one go). Databases are the exception, where you may read/write smaller parts of your data in a random manner. | I couldn't find any resources on this topic. There are a few questions with good answers describing solutions to problems which call for data stored on disk (pickle, shelve, databases in general), but I want to learn how to implement my own.
1) If I were to create a disk based graph structure in Python, I'd have to implement the necessary methods by writing to disk. But how do I do that?
2) One of the benefits on disk based structures is having the efficiency of the structure while working with data that might not all fit on memory. If the data does not fit in memory, only some parts of it are accessed at once. How does one access only part of the structure at once? | 0 | 1 | 1,607 |
0 | 31,582,822 | 0 | 0 | 0 | 1 | 1 | false | 4 | 2015-07-23T09:04:00.000 | 0 | 3 | 0 | How to open an excel file with multiple sheets in pandas? | 31,582,821 | 0 | python,excel,import | exFile = ExcelFile(f) #load file f
data = exFile.parse(exFile.sheet_names[0]) # this creates a dataframe out of the first sheet in the file | I have an excel file composed of several sheets. I need to load them as separate dataframes individually. What would be a function similar to pd.read_csv("") for this kind of task?
P.S. due to the size I cannot copy and paste individual sheets in excel | 0 | 1 | 15,976 |
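A hedged sketch for loading every sheet of a workbook as its own DataFrame; the file name is a placeholder, and in older pandas releases the keyword was spelled sheetname rather than sheet_name.

```python
import pandas as pd

# sheet_name=None returns a dict mapping each sheet name to its own DataFrame.
sheets = pd.read_excel("workbook.xlsx", sheet_name=None)
for name, df in sheets.items():
    print(name, df.shape)
```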
0 | 63,382,231 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2015-07-24T09:53:00.000 | 1 | 4 | 0 | How to add clipboard support to Matplotlib figures? | 31,607,458 | 0.049958 | python,matplotlib,plot,scipy,clipboard | The last comment is very useful.
Install the package with
pip install addcopyfighandler
Import the module after importing matplotlib, for instance:
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
from matplotlib.cm import get_cmap
import addcopyfighandler
Use Ctrl + C to copy the Figure to the clipboard
And enjoy. | In MATLAB, there is a very convenient option to copy the current figure to the clipboard. Although Python/numpy/scipy/matplotlib is a great alternative to MATLAB, such an option is unfortunately missing.
Can this option easily be added to Matplotlib figures? Preferably, all MPL figures should automatically benefit from this functionality.
I'm using MPL's Qt4Agg backend, with PySide. | 0 | 1 | 12,554 |
0 | 66,269,162 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2015-07-24T12:54:00.000 | 0 | 2 | 0 | Random Forest pruning | 31,611,075 | 0 | python,machine-learning,scikit-learn,random-forest,pruning | You could try ensemble pruning. This boils down to removing from your random forest a number of the decision trees that make it up.
If you remove trees at random, the expected outcome is that the performance of the ensemble will gradually deteriorate with the number of removed trees. However, you can do something more clever, like removing those trees whose predictions are highly correlated with the predictions of the rest of the ensemble and thus do not significantly modify the outcome of the whole ensemble.
Alternatively, you can train a linear classifier that uses as inputs the outputs of the individual trees, and include some kind of l1 penalty in the training to enforce sparse weights on the classifier. The weights with 0 or very small values will hint at which trees could be removed from the ensemble with a small impact on accuracy. | I have a sklearn random forest regressor. It's very heavy, 1.6 GBytes, and takes a very long time when predicting values.
I want to prune it to make it lighter. As far as I know, pruning is not implemented for decision trees and forests. I can't implement it myself since the tree code is written in C and I don't know it.
Does anyone know the solution? | 0 | 1 | 5,467 |
0 | 31,611,419 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2015-07-24T12:54:00.000 | 3 | 2 | 0 | Random Forest pruning | 31,611,075 | 0.291313 | python,machine-learning,scikit-learn,random-forest,pruning | The size of the trees can be a solution for you. Try to limit the size of the trees in the forest (max leaf noders, max depth, min samples split...). | I have sklearn random forest regressor. It's very heavy, 1.6 GBytes, and works very long time when predicting values.
I want to prune it to make it lighter. As far as I know, pruning is not implemented for decision trees and forests. I can't implement it myself since the tree code is written in C and I don't know it.
Does anyone know the solution? | 0 | 1 | 5,467 |
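A sketch of the size-limiting suggestion above; the particular numbers are only examples and need to be tuned against accuracy. Persisting the model with joblib.dump(rf, 'rf.pkl', compress=3) also shrinks the file on disk, although not the in-memory size.

```python
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(
    n_estimators=100,
    max_depth=12,         # cap the depth of every tree
    min_samples_leaf=5,   # stop splitting once leaves get small
    max_leaf_nodes=1000,  # hard cap on the number of leaves per tree
    n_jobs=-1,
)
```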
0 | 31,646,627 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-07-24T12:54:00.000 | 0 | 1 | 0 | How to identify objects related to KD Tree data? | 31,611,089 | 0 | python,tree,kdtree | A typical KD tree node contains a reference to the data point.
A KD tree that only keeps the coordinates is much less useful.
This way, you can easily identify them. | I've been studying KD Trees and KNN searching in 2D & 3D space. The thing I cannot seem to find a good explanation of is how to identify which objects are being referenced by each node of the tree.
Example would be an image comparison database. If you generated descriptors for all the images, would you push all the descriptor data on to one tree? If so, how do you know which nodes are related to which original images? If not, would you generate a tree for each image, and then do some type of KD-Tree Random Forest nearest neighbor queries to determine which trees are closest to each other in 3-D space?
The image example might not be a good use case for KD-Trees since it's highly dimensional space, but I'm more using it to help explain the question I'm asking.
Any guidance on practical applications of KD-Tree KNN queries for comparing objects is greatly appreciated.
Thanks! | 0 | 1 | 245 |
0 | 31,778,280 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-07-24T13:39:00.000 | 0 | 1 | 0 | Theano continue training | 31,612,074 | 0 | python,theano,deep-learning | When pickling models, it is always better to save the parameters and when loading re-create the shared variable and rebuild the graph out of this. This allow to swap the device between CPU and GPU.
But you can pickle Theano functions. If you do that, pickle all associated functions at the same time. Otherwise, each of them will have a different copy of the shared variables. Each call to load() will then create new shared variables if they were pickled separately. This is a limitation of pickle. | I am looking for some suggestions about how to continue training in Theano. For example, I have the following:
classifier = my_classifier()
cost = ()
updates = []
train_model = theano.function(...)
eval_model = theano.function(...)
best_accuracy = 0
while (epoch < n_epochs):
train_model()
current_accuracy = eval_model()
if current_accuracy > best_accuracy:
save classifier or save theano functions?
best_accuracy = current_accuracy
else:
load saved classifier or save theano functions?
if we saved classifier previously, do we need to redefine train_model and eval_model functions?
epoch+=1
#training is finished
save classifier
I want to save the current trained model if it has higher accuracy than previously trained models, and load the saved model later if the current trained model accuracy is lower than the best accuracy.
My questions are:
When saving, should I save the classifier, or theano functions?
If the classifier needs to be saved, do I need to redefine theano functions when loading it, since classifier is changed.
Thanks, | 0 | 1 | 246 |
0 | 53,998,316 | 0 | 1 | 0 | 0 | 1 | false | 22 | 2015-07-26T23:19:00.000 | 1 | 6 | 0 | Finding if two strings are almost similar | 31,642,940 | 0.033321 | python,regex,string | You could split the string and check to see if it contains at least one first/last name that is correct. | I want to find out if you strings are almost similar. For example, string like 'Mohan Mehta' should match 'Mohan Mehte' and vice versa. Another example, string like 'Umesh Gupta' should match 'Umash Gupte'.
Basically one string is correct and the other one is a misspelling of it. All my strings are names of people.
Any suggestions on how to achieve this.
Solution does not have to be 100 percent effective. | 0 | 1 | 17,280 |
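A standard-library sketch of this kind of fuzzy matching using difflib; the 0.8 cutoff is an arbitrary starting point to tune on real names.

```python
from difflib import SequenceMatcher

def almost_similar(a, b, cutoff=0.8):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

print(almost_similar("Mohan Mehta", "Mohan Mehte"))  # True
print(almost_similar("Umesh Gupta", "Umash Gupte"))  # True
```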
0 | 31,643,427 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-07-26T23:41:00.000 | 1 | 1 | 0 | Spatstat: Given a list of 2-d points, how to connect them into a polygon, and further make it the study region? | 31,643,100 | 0.197375 | python,r,opencv,computational-geometry,spatstat | Do you want it to be a spatstat study region (of class owin) since you have the spatstat tag on there? In that case you can just use owin(poly=x) where x is your nx2 matrix (after loading the spatstat library of course). The rows in this matrix should contain the vertices of the polygon in the order that you want them connected (that's how R knows which point to connect with which). See help(owin) for more details. | Please allow me to start the question with a simplest task:If I have four points which are vertices of a rectangle, stored in a 4x2 matrix, how can I turn this into a rectangular window? (Please do not use any special command specific to drawing rectangles as the rectangle is raised just to represent a general class of regular geometrical object)
To make things more complicated, suppose I have a nx2 matrix, how can I connect all of the n points so that it becomes a polygon? Note the object is not necessarily convex. I think the main difficulty is that, how can R know which point should be connected with which?
The reason I am asking is that I was doing some image processing on a fish, and I managed to get the body line of the fish by finding the contour with opencv in python, and output it as a nx2 csv file. When I read the csv file into R and tried to use the SpatialPolygnos in the sp package to turn this into a polygon, some very unexpected behavior happened; there seems to be a break somewhere in the middle that the polygon got cut in half, i.e. the boundary of the polygon was not connected. Is there anyway I can fix this problem?
Thank you.
Edit: Someone kindly pointed out that this is possibly a duplicate of another question: drawing polygons in R. However, the solution to that question relies on the shape being drawn being convex, and hence it makes sense to order by angles; here the shape is not necessarily convex, so that approach will not work. | 0 | 1 | 422 |
0 | 31,667,553 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-07-27T18:07:00.000 | 1 | 1 | 0 | How to save a graph that is generated by GNU Radio? | 31,660,214 | 1.2 | python,gnuradio,gnuradio-companion | The "QT GUI Frequency Sink" block will display the frequency domain representation of a signal. You can save a static image of the spectrum by accessing the control panel using center-click and choosing "Save". | I have generated the spectrogram with GNU Radio and want to save the output graph but have no idea how to do it. | 0 | 1 | 791 |
0 | 31,671,206 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-07-28T07:27:00.000 | 1 | 1 | 0 | How to set output file permissions using matplotlib savefig()? | 31,669,940 | 1.2 | python,matplotlib,permissions | matplotlib.pyplot.savefig() does not have capability to change file permissions. It has to be done afterwards with os.chmod(path, mode) for example os.chmod(fname, 0o400). | How can I specify the *nix read/write permissions for an output file (e.g. PDF) produced by the matplotlib savefig() command from within a Python script? i.e. without having to use chmod after the file has been produced. | 0 | 1 | 1,542 |
0 | 31,676,562 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-07-28T11:38:00.000 | 1 | 1 | 1 | Using mpi4py (or any python module) without installing | 31,675,214 | 0.197375 | python,python-2.7,numpy,mpi4py | Did you try pip install --user mpi4py?
However, I think the best solution would be to just talk to the people in charge of the cluster and see if they will install it. It seems pretty useless to have a cluster without mpi4py installed. | I have some parallel code I have written using the numpy and mpi4py modules. Till now I was running it on my laptop but now I want to attack bigger problem sizes by using the computing clusters at my university. The trouble is that they don't have mpi4py installed. Is there any way to use the module by copying the necessary files to my home directory in the cluster?
I tried some ways to install it without root access but that didn't work out. So I am looking for a way to use the module by just copying it to the remote machine.
I access the cluster using ssh from terminal | 0 | 1 | 283 |
0 | 31,699,669 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-07-28T21:17:00.000 | -1 | 2 | 0 | supervised dimensionality redunction/topic model using sklearn or gensim | 31,687,263 | -0.099668 | python,machine-learning,gensim,dimensionality-reduction | you can only perform dimensionality reduction in an unsupervised manner OR supervised but with different labels than your target labels.
For example you could train a logistic regression classifier with a dataset containing 100 topics. The output of this classifier (100 values) using your training data could be your dimensionality-reduced feature set. | I've got BOW vectors and I'm wondering if there's a supervised dimensionality reduction algorithm in sklearn or gensim capable of taking high-dimensional, supervised data and projecting it into a lower dimensional space which preserves the variance between these classes.
Actually I'm trying to find a proper metric for the classification/regression, and I believe using dimensionality reduction can help me. I know there are unsupervised methods, but I want to keep the label information along the way. | 0 | 1 | 468 |
0 | 32,441,545 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-07-29T06:57:00.000 | 0 | 2 | 0 | Avoid deepcopy() in Python when changing dictionary | 31,693,319 | 1.2 | python,parsing,runtime | As @dmargol1 suggested in the comments, I was able to avoid deepcopy() and copy() by instead building the graph from scratch, rather than copying and modifying it, which was actually a lot faster.
If that is possible: do it!
If copying is necessary, there are two ways. If you don't need to alter the values, copy() is the way to go, since it is a lot faster than deepcopy() (see the comment of @george-solymosi). If alteration of the values is needed, deepcopy is the only way (see the comment of @gall). | I am looking for a solution to avoid deepcopy() in my task using Python.
I am implementing a statistical dependency parser using chu-liu-edmonds algorithm. I have a graph represented as a dictionary with every head node stored as a key with each having a list containing one or more objects of the class arc as the value.
In the CLE algorithm, I need to modify the graph (contract a cycle). That means that I need to delete arc objects and heads, and add others, while I later need the original graph to expand those contracted cycles. Right now, I achieve this by deepcopying the original graph and passing it to the contract function.
Now I ran my program with cProfile and found out that everything that has to do with deepcopy is by far the part of the algorithm that takes the most time.
So my question is: Is there any way to avoid/reduce this in my situation? | 0 | 1 | 1,493 |
0 | 31,742,947 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-07-29T21:34:00.000 | 0 | 2 | 0 | random forest with specified false positive and sensitivity | 31,711,555 | 0 | python,r,machine-learning,random-forest | You can do a grid serarch over the 'regularazation' parameters to best match your target behavior.
Parameters of interest:
max depth
number of features | Using the randomForest package in R, I was able to train a random forest that minimized overall error rate. However, what I want to do is train two random forests, one that first minimizes false positive rate (~ 0) and then overall error rate, and one that first maximizes sensitivity (~1), and then overall error. Another construction of the problem would be: given a false positive rate and sensitivity rate, train two different random forests that satisfy one of the rates respectively, and then minimize overall error rate. Does anyone know if theres an r package or python package, or any other software out there that does this and or how to do this? Thanks for the help. | 0 | 1 | 1,366 |
0 | 37,935,771 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-07-31T07:20:00.000 | 0 | 1 | 0 | Regarding opencv 3.0.0 and updating sift feature module in anaconda IDE | 31,740,332 | 0 | python,opencv | This is happening because SIFT (which is considered patent-encumbered or non-free) has been moved from the opencv package to the opencv "contrib" repo. You need a version of cv2 that has been compiled specifically with the contrib included.
Alternatively in cv2, use ORB instead of SIFT. | Currently I am using a Windows 8.1 64-bit machine and Anaconda as the IDE. I am getting the error shown below. Please help me with how to update the module. import cv2 works fine but not the SIFT features.
File "C:\Users\conquistador\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/conquistador/Documents/opencv/test8.py", line 15, in sift = cv2.xfeatures2d.SIFT()
AttributeError: 'module' object has no attribute 'xfeatures2d' | 0 | 1 | 366 |
0 | 33,232,261 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-07-31T07:32:00.000 | 2 | 1 | 0 | Random number generator using Joblib | 31,740,561 | 0.379949 | python,random,parallel-processing,joblib | This is expected, although unfortunate.
The reason is that joblib (based on the standard multiprocessing Python tool) relies on forking under Unix. Forking creates the exact same processes and thus the same pseudo-random number generation.
The right way to solve this problem is to pass to the function that you are calling in parallel a seed for each call, e.g. a randomly-generated integer. That seed is then used inside the function to seed the local random number generation. | I need to generate random numbers in a function which is parallelized using Joblib. However, the random numbers generated by the cores are exactly the same.
Currently I solved the problem by assigning random seeds for different cores. Is there any simple way to solve this problem? | 0 | 1 | 591 |
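A sketch of the seed-passing fix described in the answer above: draw one seed per task in the parent process and use it to build a local RandomState inside each worker.

```python
import numpy as np
from joblib import Parallel, delayed

def draw(seed, size=3):
    rng = np.random.RandomState(seed)  # local generator, seeded explicitly
    return rng.rand(size)

seeds = np.random.randint(0, 2**31 - 1, size=4)
results = Parallel(n_jobs=4)(delayed(draw)(int(s)) for s in seeds)
```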
0 | 31,763,839 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-08-01T16:33:00.000 | 5 | 1 | 0 | Reassigning index in pandas DataFrame | 31,763,800 | 1.2 | python,pandas,dataframe | df.reset_index(drop=True, inplace=True) | I'm fairly sure this is a duplicate, but suppose I have a pandas DataFrame and I've sorted the rows based on the values of some column. Originally the indices were the integers 0, 1, …, n-1 but now they're out of order. How do I reassign these indices to be in the proper order for the new sorted DataFrame? | 0 | 1 | 3,763 |
0 | 66,423,995 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-08-03T11:29:00.000 | 0 | 1 | 0 | Scipy.generic_filter - window translation to 1D | 31,786,076 | 0 | python,scipy,ndimage | This is really late, but the data that gets passed to your filter function is a numpy array. You should just be able to reshape the data like normal
arr = arr.reshape((y, x)) | I am trying to use scipy.generic_filter to process an image. However, I need to further subset the window within the function I am applying. In other words, I need to know the process (function) used to convert the 2D window to a 1D array within the generic filter, so I can recreate the 2D array within the applied function in the right way. Does anybody know what function the scipy filter uses to reshape the 2D data to 1D? | 0 | 1 | 64 |
0 | 31,794,556 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-08-03T18:22:00.000 | 2 | 1 | 0 | Python adaptive histogram widths | 31,794,158 | 0.379949 | python,numpy | I think you should first remap your data, then create the histogram, and then interpret the histogram knowing the values have been transformed. One possibility would be to tweak the histogram tick labels so that they display mapped values.
One possible way of doing it, for example, would be:
Sort one dimension of data as an unidimensional array;
Integrate this array, so you have a cumulative distribution;
Find the steepest part of this distribution, and choose a horizontal interval corresponding to a "good" bin size for the peak of your histogram - that is, a size that gives you good resolution;
Find the size of this same interval along the vertical axis. That will give you a bin size to apply along the vertical axis;
Create the bins using the vertical span of that bin - that is, "draw" horizontal, equidistant lines to create your bins, instead of the most common way of drawing vertical ones;
That way, you'll have lots of bins where the data is denser, and fewer bins where the data is sparser.
Two things to consider:
The mapping function is the cumulative distribution of the sorted values along that dimension. This can be quite arbitrary. If the distribution resembles some well known algebraic function, you could define it mathematically and use it to perform a two-way transform between actual value data and "adaptive" histogram data;
This applies to only one dimension. Care must be taken as to how this would work if the histograms from multiple dimensions are to be combined. | I am currently working on a project where I have to bin up to 10-dimensional data. This works totally fine with numpy.histogramdd; however, I have one serious obstacle:
My parameter space is pretty large, but only a fraction is actually inhabited by data (say, maybe a few % or so...). In these regions, the data is quite rich, so I would like to use relatively small bin widths. The problem here, however, is that the RAM usage totally explodes. I see usage of 20GB+ for only 5 dimensions which is already absolutely not practical. I tried defining the grid myself, but the problem persists...
My idea would be to manually specify the bin edges, where I just use very large bin widths for empty regions in the data space. Only in regions where I actually have data, I would need to go to a finer scale.
I was wondering if anyone here knows of such an implementation already which works in arbitrary numbers of dimensions.
thanks | 0 | 1 | 740 |
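A numpy sketch of the quantile-style binning the answer above describes, applied per dimension: edges are placed where the cumulative distribution rises quickly, so dense regions get narrow bins and empty regions get wide ones (the random data is a placeholder).

```python
import numpy as np

def adaptive_edges(values, n_bins=50):
    # Equal-count bins: narrow where data is dense, wide where it is sparse.
    quantiles = np.linspace(0.0, 100.0, n_bins + 1)
    return np.unique(np.percentile(values, quantiles))

data = np.random.randn(10000, 3)
edges = [adaptive_edges(data[:, d]) for d in range(data.shape[1])]
hist, _ = np.histogramdd(data, bins=edges)
```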
0 | 31,815,098 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-08-03T23:02:00.000 | 0 | 1 | 0 | Fit a level set field to CAD data | 31,798,097 | 0 | python,autocad,euclidean-distance,freecad | You might look into the DXF file type. (drawing exchange format)
AutoCAD and most free CAD tools will read/write DXF files. | I'm looking for software that can fit a signed distance function to a vector image or output from either AutoCAD or FreeCAD. Preferably CAD data, because that output is 3D. I'm looking at coding something in either C or Python but I thought I'd check to see if there was anything out there because I couldn't find anything using google.
Thanks for the help | 0 | 1 | 122 |
0 | 31,822,646 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2015-08-05T02:39:00.000 | 0 | 4 | 0 | Logic behind Python indexing | 31,822,523 | 0 | python,indexing | In python: 0 == -0, so x[0] == x[-0].
Why is sequence indexing zero based instead of one based? It is a choice the language designer makes. Most languages I know of use 0-based indexing. XPath uses 1-based indexing for selection.
Using negative indexing is also a convention for the language. Not sure why it was chosen, but it allows for circling or looping the sequence by simple addition (subtraction) on the index. | I'm curious in Python why x[0] retrieves the first element of x while x[-1] retrieves the first element when reading in the reverse order. The syntax seems inconsistent to me since in the one case we're counting distance from the first element, whereas we don't count distance from the last element when reading backwards. Wouldn't something like x[-0] make more sense? One thought I have is that intervals in Python are generally thought of as inclusive with respect to the lower bound but exclusive for the upper bound, and so the index could maybe be interpreted as distance from a lower or upper bound element. Any ideas on why this notation was chosen? (I'm also just curious why zero indexing is preferred at all.) | 0 | 1 | 180 |
0 | 31,881,089 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2015-08-07T14:53:00.000 | 0 | 3 | 0 | Python - Pick from Array based on price with a caveat | 31,880,759 | 0 | python,arrays | From your description, it might be enough to have a counter associated with each sub-array as to how many of the items in that sub-array have already been bought. Give that you haven't shown any details as to your representation of these things, I can't give more details as to how to implement this. | I'm relatively new to Python and have already written a code to randomly select from two tables based on user input but the next function I need to create is more complex and I'm having trouble wrapping my head around.
I'm going to have some code that's going to take user input and generate an amount of money I'm going to add to a variable, lets say, wallet.
I then want to write some code that takes random objects from an array based on price.
Now here's the caveat(s). Lets say array A is chosen. In Array A there will be 3-4 other sub arrays. Within those arrays are 4 objects first, second, third, and fourth. With the first being the cheapest and the fourth being the most expensive. I want this code to NOT be able to buy object second without having bought object first. I don't want an object purchasable unless the prerequisite is also purchased.
I'm just having a hard time thinking it through (a weakness in general in programming I need to overcome) but any advice or links to a concept similar to what I'm aiming to do would be greatly appreciated. Thanks! | 0 | 1 | 45 |
0 | 31,881,413 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2015-08-07T14:53:00.000 | 0 | 3 | 0 | Python - Pick from Array based on price with a caveat | 31,880,759 | 0 | python,arrays | It's difficult to understand what you're getting at, because you're not expressing your ideas very well. You're finding this general programming difficult as programming can be considered as the precise expression of ideas.
So, in very general terms you are trying to simulate some sort of curated shopping experience. You need to:
Track a value of currency.
Manage a product catalogue.
Allow a selection of products based on value and constraints based on prior selections.
If I were doing this, I might write a class that I'd use to manage the basket. I might instantiate a basket with a budget figure and a product catalogue to select from. I might express the constraints in the catalogue, but enforce them in the basket.
I would probably use the basket (budget, tally and selections) to filter the product catalogue to highlight eligible products.
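A minimal sketch of that kind of basket class (all names and the catalogue format are made up, not from the question):
class Basket:
    def __init__(self, wallet, catalogue):
        self.wallet = wallet            # money still available
        self.catalogue = catalogue      # {name: (price, prerequisite_name_or_None)}
        self.purchased = set()
    def eligible(self):
        return [name for name, (price, prereq) in self.catalogue.items()
                if name not in self.purchased
                and price <= self.wallet
                and (prereq is None or prereq in self.purchased)]
    def buy(self, name):
        price, prereq = self.catalogue[name]
        if name in self.eligible():
            self.wallet -= price
            self.purchased.add(name)
# e.g. Basket(20, {'first': (5, None), 'second': (8, 'first')}).eligible() -> ['first']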
If multiple transactions are allowed, the basket would need to have knowledge of previous purchases and therefore which prerequisites have already been fulfilled. | I'm relatively new to Python and have already written a code to randomly select from two tables based on user input but the next function I need to create is more complex and I'm having trouble wrapping my head around.
I'm going to have some code that's going to take user input and generate an amount of money I'm going to add to a variable, lets say, wallet.
I then want to write some code that takes random objects from an array based on price.
Now here's the caveat(s). Lets say array A is chosen. In Array A there will be 3-4 other sub arrays. Within those arrays are 4 objects first, second, third, and fourth. With the first being the cheapest and the fourth being the most expensive. I want this code to NOT be able to buy object second without having bought object first. I don't want an object purchasable unless the prerequisite is also purchased.
I'm just having a hard time thinking it through (a weakness in general in programming I need to overcome) but any advice or links to a concept similar to what I'm aiming to do would be greatly appreciated. Thanks! | 0 | 1 | 45 |
0 | 31,888,691 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-08-08T01:08:00.000 | 0 | 3 | 0 | Calculating distances on grid | 31,888,624 | 0 | python,python-2.7,numpy,grid,distance | The simplest way I know of to calculate the distance between two points on a plane is using the Pythagorean theorem.
That is, picture a right angle triangle where the hypotenuse goes between the two points and the base of the triangle is parallel to the x axis and the height is parallel to the y axis. We then know that the distance (represented by the length of the hypotenuse) h adheres to the following: h^2 = a^2 + b^2, where a and b are the lengths of the two remaining sides of the triangle.
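Since the question mentions numpy, a hedged vectorised sketch of the same distance idea (array and point values are made up):
import numpy as np
ys, xs = np.mgrid[0:10, 0:10]                   # coordinates of every cell in the 10 x 10 grid
points = np.array([[1, 2], [5, 5], [9, 0]])     # the three (y, x) points
d2 = (ys[..., None] - points[:, 0])**2 + (xs[..., None] - points[:, 1])**2
nearest = d2.argmin(axis=-1)                    # index of the closest point for each cell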
It's hard to give any other help without seeing your code. Have you tried something similar yet? You need to specify your question more if you want more specific answers. | I have a 10 x 10 grid of cells (as a numpy array). I also have a list of 3 points on that grid. For each cell on the grid, I need to find the closest of the three points. I can do this in series of nested loops in python (2.7) which works but is slow (especially if I upscale to larger grids) but I suspect there is a faster way. Does anyone have any suggestions? | 0 | 1 | 2,279 |
0 | 53,851,540 | 0 | 0 | 0 | 0 | 1 | false | 33 | 2015-08-08T14:08:00.000 | -1 | 7 | 0 | Download CSV from an iPython Notebook | 31,893,930 | -0.028564 | csv,pandas,ipython-notebook | My simple approach to download all the files from the jupyter notebook would be by simply using this wonderful command
!tar cvfz my_compressed_file_name.tar.gz *
This will download all the files of the server including the notebooks.
If your server has multiple folders, you might want to use the following command. Write ../ before the * for every step up the directory tree.
tar cvfz zipname.tar.gz ../../*
Hope it helps.. | I run an iPython Notebook server, and would like users to be able to download a pandas dataframe as a csv file so that they can use it in their own environment. There's no personal data, so if the solution involves writing the file at the server (which I can do) and then downloading that file, I'd be happy with that. | 0 | 1 | 50,655 |
0 | 31,909,251 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2015-08-09T21:21:00.000 | 1 | 3 | 0 | Numpy conditional multiply data in array (if true multiply A, false multiply B) | 31,908,956 | 0.066568 | python,arrays,numpy | np.where is the answer. I spent time messing with np.place before learning that np.where existed. | Say I have a large array of values 0~255. I want every element in this array that is higher than 100 to be multiplied by 1.2, and otherwise multiplied by 0.8.
It sounded simple, but I could not find any way other than iterating through all the elements and multiplying them one by one. | 0 | 1 | 6,219
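For the example above, a short numpy sketch (the array name is hypothetical):
import numpy as np
a = np.random.randint(0, 256, size=1000)        # stand-in for the 0~255 array
result = np.where(a > 100, a * 1.2, a * 0.8)    # *1.2 where the condition holds, *0.8 elsewhere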
0 | 31,915,492 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-08-10T08:39:00.000 | 2 | 2 | 0 | Python save variable to file like save() in MATLAB | 31,915,120 | 0.197375 | python-2.7 | Use pickle.dump in Python 3.x, or cPickle.dump in Python 2.x. | I just new in python. How to save variable data to file like save command in MATLAB.
Thank you | 0 | 1 | 1,761 |
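Illustrating the answer above, a minimal pickle round trip (file and variable names are made up):
import pickle
data = {'a': 1, 'b': [1, 2, 3]}
with open('data.pkl', 'wb') as f:
    pickle.dump(data, f)       # rough equivalent of MATLAB's save
with open('data.pkl', 'rb') as f:
    data = pickle.load(f)      # rough equivalent of load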
0 | 32,591,030 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-08-11T13:18:00.000 | 1 | 1 | 0 | How to show star rating on ckan for datasets | 31,942,911 | 1.2 | python,ckan | I'm not aware of any extensions that do this.
You could write one to add this info in a dataset extra field. You may wish to store it as JSON and record the ratings given by each user.
Alternatively you could try the rating_create API function - this is old functionality which has no UI, but it may just do what you want. | I have used ckanext-qa but it seems it's not as per my requirement. I am looking for an extension by which a logged-in user can rate each dataset on CKAN from 1 to 5.
Does anybody have an idea how to do something like that? | 0 | 1 | 261
0 | 31,945,716 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-08-11T14:37:00.000 | 3 | 1 | 0 | How does one interpret class weight when working with non linear SVMs? | 31,944,799 | 1.2 | python,machine-learning,scikit-learn,svm | Using class weighting has nothing to do with a linear/non-linear kernel. It simply controls the cost of misclassifying a particular sample during training. A per-class weight simply puts a constant weight on each sample in a given class. When you use 'auto', samples get weighted inversely proportional to their class size. So if you have class A twice as big as B, then samples from A are twice as "cheap" to misclassify. This will lead to a highly balanced model structure; in particular, such an SVM tries to maximize Balanced Accuracy (BAC), not "classical" accuracy. | I'm using the Scikit-learn SVM classifier to make predictions and I'm using an RBF kernel. I have set class_weight = 'auto'. Am I right in thinking that classes that appear more often will get lower weights? Say I had two classes, A and B. If A appeared a lot more than B, does that mean that later on, when making the predictions, there will be fewer A predictions than if I hadn't set class_weight = 'auto'?
I'm pretty new to this so I'm just trying to get my head around what is happening and why. | 0 | 1 | 847 |
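A rough sketch of the inverse-frequency idea behind 'auto'/'balanced' class weights (illustrative only, not the library's exact code):
import numpy as np
y = np.array([0] * 90 + [1] * 10)            # imbalanced labels
weights = len(y) / (2.0 * np.bincount(y))    # array([0.556, 5.0]): the rare class costs more to misclassify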
0 | 31,951,440 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-08-11T20:02:00.000 | 0 | 1 | 0 | Plotting in real time | 31,950,928 | 0 | python,multithreading,csv,plot,real-time | Short answer: it depends on your use case.
If you simply want a static graph that updates in real time, then your approach is totally fine.
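A minimal sketch of that simple non-threaded approach (the CSV file name is made up):
import matplotlib.pyplot as plt
import pandas as pd
plt.ion()
while True:
    data = pd.read_csv('live.csv')            # re-read the growing file each iteration
    plt.cla()
    plt.plot(data.iloc[:, 0], data.iloc[:, 1])
    plt.pause(1.0)                            # redraw and wait before the next read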
On the other hand, threading is useful when the user needs to somehow interact with the plot (like zoom in/out, etc) because otherwise I/O operations would block the main routine that handles user interaction, so the user might experience "pauses" periodically. | In python, is it necessary to use threading when plotting in real time? In my specific case, I wish to continuously plot from a .csv that is having lines of data added to it indefinitely. Can't I just create a function that re-reads the data source with every iteration of the plot update? | 0 | 1 | 82 |
0 | 31,952,102 | 0 | 1 | 0 | 0 | 2 | false | 120 | 2015-08-11T21:08:00.000 | 127 | 3 | 0 | What exactly does numpy.exp() do? | 31,951,980 | 1 | python,numpy,statistics,exp | The exponential function is e^x where e is a mathematical constant called Euler's number, approximately 2.718281. This value has a close mathematical relationship with pi and the slope of the curve e^x is equal to its value at every point. np.exp() calculates e^x for each value of x in your input array. | I'm very confused as to what np.exp() actually does. In the documentation it says that it: "Calculates the exponential of all elements in the input array." I'm confused as to what exactly this means. Could someone give me more information to what it actually does? | 0 | 1 | 221,837 |
0 | 31,952,092 | 0 | 1 | 0 | 0 | 2 | false | 120 | 2015-08-11T21:08:00.000 | 62 | 3 | 0 | What exactly does numpy.exp() do? | 31,951,980 | 1 | python,numpy,statistics,exp | It calculates e^x for each x in your list, where e is Euler's number (approximately 2.718). In other words, np.exp(range(5)) is similar to [math.e**x for x in range(5)]. | I'm very confused as to what np.exp() actually does. In the documentation it says that it: "Calculates the exponential of all elements in the input array." I'm confused as to what exactly this means. Could someone give me more information to what it actually does? | 0 | 1 | 221,837
0 | 36,915,460 | 0 | 0 | 0 | 0 | 1 | true | 8 | 2015-08-12T20:38:00.000 | 16 | 3 | 0 | pandas - plot sorted column to increasing integer index | 31,974,942 | 1.2 | python,pandas | This will first sort the series and then plot, ignoring the index of the series:
import numpy as np
import pandas as pd
ts = pd.Series(np.random.randn(100), index=pd.date_range('1/1/2000', periods=100))
ts.sort_values().plot(use_index=False) | Let's say I have a pandas series with numerical values. What's the shortest way to plot the sorted series against an increasing integer index?
The plot should show:
x-axis: 0,1,2,3,4,...
y-axis: the sorted values of the series.
(please notice that I cannot plot it against the series' index, because the index is not necessarily an increasing index. In my case it's some id that I use for different reasons)
Thanks | 0 | 1 | 21,421 |
0 | 31,977,464 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-08-12T21:30:00.000 | 0 | 2 | 0 | gensim LDA: How can i generate topics with different words for each topic? | 31,975,754 | 0 | python,algorithm,api,lda,gensim | LDA provides for each topic and each word a probability that the topic generates that word. You can try assigning words to topics by just taking the max over all topics of the probability to generate the word. In other words if topic A generates "monkey" with probability 0.01 and topic B generates the word monkey with probability 0.02 then you can assign the word monkey to topic B. | I'm using the LDA algorithm from the gensim package to find topics in a given text.
I've been asked to ensure that the resulting topics include different words for each topic, e.g. if topic A has the word 'monkey' in it then no other topic should include the word 'monkey' in its list.
My thoughts so far: run it multiple times and each time add the previous words to the stop words list.
Since:
A) I'm not even sure whether, algorithmically/logically, it's the right thing to do.
B) I hope there's a built in way to do it that i'm not aware of.
C) This is a large database, and it takes about 20 minutes to run the LDA
each time (using the multi-core version).
Question: Is there a better way to do it?
Hope to get some help,
Thanks. | 0 | 1 | 925 |
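For the max-over-topics assignment described in the answer above, a hedged numpy sketch (the topic-word matrix is a stand-in for whatever your trained model exposes):
import numpy as np
topic_word = np.random.dirichlet(np.ones(1000), size=20)   # 20 topics x 1000 vocabulary terms
owner = topic_word.argmax(axis=0)                          # owner[w] is the only topic allowed to keep word w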
0 | 31,990,326 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2015-08-13T13:39:00.000 | 6 | 2 | 0 | Local mean filter in numpy | 31,989,903 | 1.2 | python-3.x,numpy,scipy | scipy.ndimage.filters.convolve() with a weight: np.full((3, 3, 3), 1.0/27). | I have a 512x512x512 numpy array. Is there any efficient way to perform a mean filter where every array value is substituted by all 3x3x3 local values?
We are seeking something similar to scipy.ndimage.filters.median_filter, but with the mean instead of the median. | 0 | 1 | 9,128
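A short sketch of the answer above (a smaller array so it runs quickly):
import numpy as np
from scipy import ndimage
a = np.random.rand(64, 64, 64)                               # stand-in for the 512x512x512 array
mean3 = ndimage.convolve(a, np.full((3, 3, 3), 1.0 / 27), mode='nearest')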
0 | 70,254,029 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-08-17T21:06:00.000 | 0 | 2 | 0 | In scipy, what's the point of the two different distance functions used in hierarchical clustering? | 32,059,711 | 0 | python,scipy,hierarchical-clustering | The parameter 'method' is used to measure the similarities between clusters through the hierarchical clustering process. The parameter 'metric' is used to measure the distance between two objects in the dataset.
The 'metric' is closely related to the nature of the data (e.g., you could want to use 'euclidean' distance for objects with the same number of features, or Dynamic Time Warping for time series with different durations).
The thing is that there are two ways of using the linkage function. The first parameter is y, and it can be either the data itself or a distance matrix produced previously by a given measure.
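A small sketch of the two call styles (X is a hypothetical samples-by-features array):
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist
X = np.random.rand(10, 4)
Z1 = linkage(X, method='average', metric='euclidean')          # raw data: linkage computes the distances itself
Z2 = linkage(pdist(X, metric='euclidean'), method='average')   # precomputed condensed distance matrix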
If you choose to feed 'linkage' with a distance matrix, then you won't need the 'metric' parameter, because you have already calculated all the distances between all objects. | There is one distance function I can pass to pdist to create the distance matrix that is given to linkage. There is a second distance function that I can pass to linkage as the metric.
Why are there two possible distance functions?
If they are different, how are they used? For instance, does linkage use the distances in the distance matrix for its initial iterations, i.e. to see if any two original observations should be combined into a cluster, and then use the metric function for further combinations, i.e. of two clusters or of a cluster with an original observation? | 0 | 1 | 206 |
0 | 32,062,644 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-08-18T00:14:00.000 | 3 | 1 | 0 | How to view a saved matplotlib plot | 32,061,658 | 0.53705 | python,matplotlib,graph | Vi is a text editor, and can't view images as images. The Windows Paint program should be able to view them, however, or on a Mac, Preview should work. | I have successfully saved my graphs using the plt.savefig() function in the matplotlib library.
When I try to open my graph using vi, the file is there but there are a lot of strange characters. I guess I'm viewing the code and other info rather than the visualization of the graph. How do I see the graph in its pictorial form? | 0 | 1 | 204
0 | 32,094,372 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-08-18T15:03:00.000 | 0 | 1 | 0 | Why does scikit neural network reshape my input array automatically? | 32,076,040 | 0 | python,arrays,numpy,scikit-learn,neural-network | Looking at the source, scikit will reshape input arrays if the X (input samples) you pass in when initializing is a different size from creating/splitting the dataset in the MLP backend. This is to reduce overfitting by training and validating on the same samples. | When I pass my training data to the scikit neural network it is a NumPy array of all my 24*24 image matrices. I check and this is the shape of the data: (3237, 24, 24) - 3237 24*24 images.
However, once I feed it into the neural network, I get this warning:
WARNING:sknn: - Reshaping input array from (3237, 24, 24) to (2589, 24, 24, 1).
The program still runs, I just don't understand why it is reshaping my array. | 0 | 1 | 218 |
0 | 32,119,557 | 0 | 0 | 0 | 1 | 1 | true | 1 | 2015-08-18T16:17:00.000 | 1 | 1 | 0 | Use Pyxl to read Excel spreadsheet data up to a certain row | 32,077,627 | 1.2 | python,excel,openpyxl | To be honest I'd be tempted to suggest you use openpyxl all the way if there is something that xlsxwriter doesn't do, though I think that it's formatting options are pretty extensive. The most recent version of openpyxl is as fast as xlsxwriter if lxml is installed.
However, it's worth noting that Pandas has tended to ship with an older version of openpyxl because we changed the style API.
Otherwise you can use max_row to get the highest row but this won't check for an empty row. | I created an Excel spreadsheet using Pandas and xlsxwriter, which has all the data in the right rows and columns. However, the formatting in xlsxwriter is pretty basic, so I want to solve this problem by writing my Pandas spreadsheet on top of a template spreadsheet with Pyxl.
First, however, I need to get Pyxl to only import data up to the first blank row, and to get rid of the column headings. This way I could write my Excel data from the xlsxwriter output to the template.
I have no clue how to go about this and can't find it here or in the docs. Any ideas?
How about if I want to read data from the first column after the first blank column? (I can think of a workaround for this, but it would help if I knew how) | 0 | 1 | 256 |
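For the "up to the first blank row" part of the question, a hedged openpyxl sketch (the file name is made up):
from openpyxl import load_workbook
ws = load_workbook('template.xlsx').active
rows = []
for row in ws.iter_rows(min_row=2):              # min_row=2 skips the column headings
    if all(cell.value is None for cell in row):  # stop at the first completely blank row
        break
    rows.append([cell.value for cell in row])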
0 | 62,125,080 | 0 | 0 | 0 | 0 | 1 | false | 91 | 2015-08-20T03:58:00.000 | 1 | 9 | 0 | How to implement the ReLU function in Numpy | 32,109,319 | 0.022219 | python,numpy,machine-learning,neural-network | ReLU(x) is also equal to (x + abs(x)) / 2. | I want to make a simple neural network which uses the ReLU function. Can someone give me a clue as to how I can implement the function using numpy. | 0 | 1 | 165,706
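A tiny numpy check of that identity (the input vector is made up):
import numpy as np
x = np.array([-2.0, -0.5, 0.0, 3.0])
relu = (x + np.abs(x)) / 2        # array([0., 0., 0., 3.]), same result as np.maximum(x, 0)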
0 | 32,126,371 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-08-20T18:56:00.000 | 1 | 2 | 0 | How to calculate correlation on large number of records? | 32,126,189 | 0.099668 | python,statistics,correlation,pyspark | Firstly, make sure you're applying the right formula for correlation. Remember, given vectors x and y, the correlation is ((x-mean(x)) * (y-mean(y))) / (length(x-mean(x)) * length(y-mean(y))), where * represents the dot product and length(v) is the square root of the sum of the squares of the terms in v. (I know that's silly, but noticing a mis-typed formula is a lot easier than redoing a program.)
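A small numeric sanity check of that formula (hypothetical vectors):
import numpy as np
x, y = np.random.randn(1000), np.random.randn(1000)
xc, yc = x - x.mean(), y - y.mean()
r = xc.dot(yc) / (np.sqrt((xc ** 2).sum()) * np.sqrt((yc ** 2).sum()))
# r should match np.corrcoef(x, y)[0, 1]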
Do you have a strong hunch that there should be some correlation among these columns? If you don't, then those small values are reasonable. On the other hand, if you're pretty sure that there ought to be a strong correlation, then try sampling a random 100 pairs and either finding the correlation there, or plotting them for visual inspection, which can also show you if there is correlation present. | I am trying to calculate correlation amongst three columns in a dataset. The dataset is relatively large (4 GB in size). When I calculate correlation among the columns of interest, I get small values like 0.0024, -0.0067 etc. I am not sure this result makes any sense or not. Should I sample the data and then try calculating correlation?
Any thoughts/experience on this topic would be appreciated. | 0 | 1 | 116 |
0 | 32,127,507 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-08-20T18:56:00.000 | 0 | 2 | 0 | How to calculate correlation on large number of records? | 32,126,189 | 0 | python,statistics,correlation,pyspark | There is nothing special about correlation of large data sets. All you need to do is some simple aggregation.
If you want to improve your numerical precision (remember that floating point math is lossy) you can use Kahan summation and similar techniques, in particular for values close to 0.
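In Python, math.fsum is a readily available compensated-summation routine, e.g.:
import math
vals = [1e16, 1.0, -1e16]
print(sum(vals))        # 0.0 - the 1.0 is lost to rounding
print(math.fsum(vals))  # 1.0 - compensated summation keeps it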
But maybe your data just doesn't have strong correlation?
Try visualizing a sample! | I am trying to calculate correlation amongst three columns in a dataset. The dataset is relatively large (4 GB in size). When I calculate correlation among the columns of interest, I get small values like 0.0024, -0.0067 etc. I am not sure this result makes any sense or not. Should I sample the data and then try calculating correlation?
Any thoughts/experience on this topic would be appreciated. | 0 | 1 | 116 |
0 | 32,147,241 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2015-08-20T23:19:00.000 | 3 | 1 | 0 | what is the difference between rdd.repartition() and partition size in sc.parallelize(data, partitions) | 32,130,000 | 1.2 | python,apache-spark,rdd | By calling repartition(N) spark will do a shuffle to change the number of partitions (and will by default result in a HashPartitioner with that number of partitions). When you call sc.parallelize with a desired number of partitions it splits your data (more or less) equally up amongst the slices (effectively similar to a range partitioner), you can see this in ParallelCollectionRDD inside of the slice function.
That being said, it is possible that both of these sc.parallelize(data, N) and rdd.reparitition(N) (and really almost any form of reading in data) can result in RDDs with empty partitions (its a pretty common source of errors with mapPartitions code so I biased the RDD generator in spark-testing-base to create RDDs with empty partitions). A really simple fix for most functions is just checking if you've been passed in an empty iterator and just returning an empty iterator in that case. | I was going through the documentation of spark. I got a bit confused with rdd.repartition() function and the number of partitions we pass during context initialization in sc.parallelize().
I have 4 cores on my machine, if I sc.parallelize(data, 4) everything works fine, but when I rdd.repartition(4) and apply rdd.mappartitions(fun) sometimes the partitions has no data and my function fails in such cases.
So, just wanted to understand what is the difference between these two ways of partitioning. | 0 | 1 | 2,919 |
0 | 32,138,958 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-08-21T11:00:00.000 | 0 | 4 | 0 | Finding the points nearest to a given x? | 32,138,651 | 0 | python,algorithm,function | You can compute the distance of chosen points.
1) Search for the minimum distance X value (left and right).
2) Find the points corresponding to X_MIN_LEFT and X_MIN_RIGHT. At the same time you can check the distance in Y and find the minimum Y distance.
That's it. | I need to interpolate a linear function and I can't use numpy or scipy.
I'm given these data points ((0,10), (1,4), (2,3), (3,5), (4,12)) and a point at x = 2.8. The data points form a polyline (linear between 2 coordinates).
Of course I need to use the closest x-points from the data for 2.8, which are (2,3) and (3,5), because 2.8 lies between 2 and 3.
How do I make a function to find these closest points? | 0 | 1 | 3,022
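For the example in the question, a pure-Python sketch of the bracket-and-interpolate idea described in the answer above:
pts = [(0, 10), (1, 4), (2, 3), (3, 5), (4, 12)]
def interp(x, pts):
    pts = sorted(pts)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:                       # found the bracketing pair
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
print(interp(2.8, pts))                         # 4.6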
0 | 32,236,196 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-08-21T19:46:00.000 | 2 | 1 | 0 | Truncated SVD vs Partial SVD | 32,148,219 | 0.379949 | python-2.7,scikit-learn,pca | Truncated or partial means that you only calculate a certain number of components/singular vector-value pairs (the strongest ones).
In scikit-learn parlance, "partial" usually refers to the fact that a method is online, meaning that it can be fed with partial data. The more data you give it, the better it will converge to the expected optimum.
Both can be combined, and have been, also in sklearn: sklearn.decomposition.IncrementalPCA does this. | Can somebody tell me the difference between truncated SVD as implemented in sklearn and partial SVD as implemented in, say, fbpca?
I couldn't find a definitive answer as I haven't seen anybody use truncated SVD for principal component pursuit (PCP). | 0 | 1 | 1,242 |
0 | 32,192,487 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-08-24T14:19:00.000 | 1 | 1 | 1 | Apache Spark RDD transformations with 2 elements as input | 32,184,638 | 0.197375 | python,tcp,apache-spark,pyspark,rdd | It seems like what your looking for might be best done with something like reduceByKey where you can remove the duplicates as you go for each sequence (assuming that the resulting amount of data for each sequence isn't too large, in your example it seems pretty small). Sorting the results can be done with the standard sortBy operator.
Saving the data out to HDFS is indeed done in parallel on the workers, forwarding the data to the Spark client app would create a bottleneck and sort of defeat the purpose (although if you do want to bring the data back locally you can use collect provided that the data is pretty small). | I am trying to 'follow-tcp-stream' in Hadoop sequence file that structured as follows:
i. Time stamp as key
ii. Raw Ethernet frame as value
The file contains a single TCP session, and because the record is very long, sequence-id of TCP frame overflows (which means that seq-id not necessarily unique and data cannot be sorted by seq-id because then it will get scrambled).
I use Apache Spark/Python/Scapy.
To create the TCP-stream I intended to:
1.) Filter out any non TCP-with-data frames
2.) Sort the RDD by TCP-sequence-ID (within each overflow cycle)
3.) Remove any duplicates of sequence-ID (within each overflow cycle)
4.) Map each element to TCP data
5.) Store the resulting RDD as testFile within HDFS
Illustration of operation on RDD:
input: [(time:100, seq:1), (time:101, seq:21), (time:102, seq:11), (time:103, seq:21), ... , (time:1234, seq=1000), (time:1235, seq:2), (time:1236, seq:30), (time:1237, seq:18)]
output:[(seq:1, time:100), (seq:11, time:102), (seq:21, time:101), ... ,(seq=1000, time:1234), (seq:2, time:1235), (seq:18, time:1237), (seq:30, time:1236)]
Steps 1 and 4 are obvious. The ways I came up with for solving 2 and 3 required comparisons between adjacent elements within the RDD, with the option to return any number of new elements (not necessarily 2, and without triggering any action, of course - so the code will run in parallel). Is there any way to do this? I went over the RDD class methods a few times and nothing came up.
Another issue is the storage of the RDD (step 5). Is it done in parallel? Does each node store its part of the RDD to a different Hadoop block? Or is the data first forwarded to the Spark client app, which then stores it? | 0 | 1 | 237
0 | 34,873,978 | 0 | 0 | 0 | 0 | 1 | false | 20 | 2015-08-24T14:32:00.000 | 1 | 1 | 0 | Calculating eigen values of very large sparse matrices in python | 32,184,915 | 0.197375 | python,scipy,sparse-matrix,eigenvector,eigenvalue | I agree with @pv. If your matrix S was symmetric, you could see it as a laplacian matrix of the matrix I - S. The number of connected components of I - S is the number of zero-eigenvalues of this matrix (i.e, the dimension of the space associated to eigenvalue 1 of S). You could check the number of connected components of the graph whose similarity matrix is I - S*S' for a start, e.g. with scipy.sparse.csgraph.connected_components. | I have a very large sparse matrix which represents a transition martix in a Markov Chain, i.e. the sum of each row of the matrix equals one and I'm interested in finding the first eigenvalue and its corresponding vector which is smaller than one. I know that the eigenvalues are bounded in the section [-1, 1] and they are all real (non-complex).
I am trying to calculate the values using python's scipy.sparse.eigs function, however, one of the parameters of the functions is the number of eigenvalues/vectors to estimate and every time I've increased the number of parameters to estimate, the numbers of eigenvalues which are exactly one grew as well.
Needless to say, I am using the which parameter with the value 'LR' in order to get the k largest eigenvalues, with k being the number of values to estimate.
Does anyone have an idea how to solve this problem (finding the first eigenvalue smaller than one and its corresponding vector)? | 0 | 1 | 1,334 |
0 | 32,235,511 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2015-08-26T19:37:00.000 | 4 | 2 | 0 | Normalizing a list of restaurant dishes | 32,235,272 | 1.2 | python,machine-learning,nlp,nltk | You might want look for TFIDF and cosine similarity.
There are challenging cases, however. Let's say you have the following three dishes:
Pulled pork
Pulled egg
Egg sandwich
Which of the two are you going to combine?
Pulled pork and pulled egg
Pulled egg and egg sandwich
Using TFIDF, you can find the most representative words. For example the word sandwich may happen to be in many dishes, hence not very representative. (Tuna sandwich, egg sandwich, cheese sandwich, etc.) Merging tuna sandwich and cheese sandwich may not be a good idea.
After you have the TFIDF vectors, you can use cosine similarity (on the TFIDF vectors) and maybe a static threshold to decide whether to merge them or not.
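A hedged scikit-learn sketch of that idea (the dish list is just the small example above):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
dishes = ["Pulled pork", "Pulled egg", "Egg sandwich"]
sim = cosine_similarity(TfidfVectorizer().fit_transform(dishes))
# sim[i, j] above some static threshold -> candidates for merging dish i and dish j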
Another issue also arises: when you match, what are you going to name them? (Pulled egg or egg sandwich?)
Update:
@alvas suggests using clustering after having the similarity/dissimilarity values. I think that would be a good idea. You can first create your nxn distance/similarity matrix using the cosine similarity with TFIDF vectors. And after you have the distance matrix, you can cluster them using a clustering algorithm. | I have a large data set of restaurant dishes (for example, "Pulled Pork", "Beef Brisket"...)
I am trying to "normalize" (wrong word) the dishes. I want "Pulled Pork" and "Pulled Pork Sandwich" and "Jumbo Pork Slider" all to map to a single dish, "Pulled Pork".
So far I have gotten started with NLTK using Python and had some fun playing around with frequency distributions and such.
Does anyone have a high-level strategy to approach this problem? Perhaps some keywords I could google?
Thanks | 0 | 1 | 183 |
0 | 32,281,179 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-08-28T23:23:00.000 | 2 | 1 | 0 | Computational Logistic Regression With Python, Different Sample Sizes | 32,281,130 | 1.2 | python-2.7,machine-learning,statistics,logistic-regression | How much of a problem it is depends on the nature of your data. The bigger issue will be that you simply have a huge class imbalance (50 As for every B). If you end up getting good classification accuracy anyway, then fine - nothing to do. What to do next depends on your data and the nature of the problem and what is acceptable in a solution. There really isn't a dead set "do this" answer for this question. | Currently, I am trying to implement a basic logistic regression algorithm in Python to differentiate between A vs. B.
For my training and test data, I have ~50,000 samples of A vs. 1000 samples of B. Is this a problem if I use half the data of each to train the algorithm and the other half as testing data (25000 train A, 500 train B and so on for testing accuracy).
If so, how can I overcome this problem? Should I consider resampling or doing some other "fancy stuff"? | 0 | 1 | 69
0 | 32,290,589 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2015-08-29T20:09:00.000 | 10 | 2 | 0 | Choosing between numpy.interp vs scipy.interpolate.interp1d (with kind='linear') | 32,290,233 | 1 | python,numpy,scipy | While numpy returns an array with discrete datapoints, 'interp1d' returns a function. You can use the generated function later in your code as often as you want. Furthermore, you can choose other methods than linear interpolation. | I'm trying to choose between numpy.interp vs scipy.interpolate.interp1d (with kind='linear' of course). I realize they have different interfaces but that doesn't matter much to me (I can code around either interface). I'm wondering whether there are other differences I should be aware of. Thanks. | 0 | 1 | 7,258
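A small illustration of the "returns a function" difference from the answer above (toy data):
import numpy as np
from scipy.interpolate import interp1d
x, y = np.array([0.0, 1.0, 2.0]), np.array([0.0, 10.0, 20.0])
f = interp1d(x, y, kind='linear')     # f is callable and reusable
print(f(0.5), np.interp(0.5, x, y))   # both give 5.0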
0 | 32,304,062 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-08-30T20:04:00.000 | 0 | 1 | 0 | Passing objects to Spark | 32,300,616 | 0 | python,apache-spark,cloud | If your object's aren't picklable your options are pretty limited. If you can create them on the executor side though (frequently a useful option for things like database connections), you can parallelize a regular list (e.g. maybe a list of the constructor parameters) and then use map if your dostuff function returns (picklable) values you want to use or foreach if your dostuff function is called for its side effects (like updating a database or similar). | I try to understand the capabilities of Spark, but I fail to see if the following is possible in Python.
I have some objects that are not picklable (wrapped from C++ with SWIG).
I have a list of those objects obj_list = [obj1, obj2, ...]
All those objects have a member function called .dostuff
I'd like to parallelize the following loop in Spark (in order to run it on AWS, since I don't have a big architecture internally; we could probably use multiprocessing, but I don't think I can easily send the objects over the network):
[x.dostuff() for x in obj_list]
Any pointers would be appreciated. | 0 | 1 | 335 |
0 | 32,347,770 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2015-09-02T06:00:00.000 | 4 | 3 | 0 | Simulink for Python | 32,345,638 | 0.26052 | python,matlab,simulink | Until now there is no library like Simulink in Python. The closest match is the Modelica language with OpenModelica and a python implementation JModelica. | I have been searching and found many libraries (scipy, numpy, matplotlib) for Python that lets a user easily shift from MATLAB to Python. However, I am unable to find any library that is related to the Simulink in MATLAB. I would like to know if such a library exists or something else that resembles Simulink in it's GUI and computation features. | 0 | 1 | 28,386 |
0 | 32,363,759 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2015-09-02T21:47:00.000 | 0 | 1 | 0 | openCV track object in video and obtain a better image from multiple frames | 32,363,678 | 0 | python,opencv,raspberry-pi | You need to ensure that your frame rate is fast enough to get a decent still of the moving car. When filming, each frame will most likely be blurry, and our brain pieces together the number plate on playback. Of course a blurry frame is no good for letter recognition, so is something you'll need to deal with on the hardware side, rather than software side.
Remember the old saying: Garbage in; Garbage out. | I'm working on detecting license plates with openCV, python and a raspberry pi. Most of it is already covered. What I want is to detect the ROI (Region of interest) of the plate, track it on a few frames and add those together to get a more clear and crisp image of the plate.
I want to get a better image of the plate by taking the information from several frames. I detect the plate and have a collection of plates from several frames - as many as I wish, for as long as the car is moving past the camera. How can I take all those and get a better version? | 0 | 1 | 330
0 | 32,409,317 | 0 | 0 | 0 | 0 | 3 | false | 3 | 2015-09-05T03:38:00.000 | 4 | 5 | 0 | Algo to find duplicates in a very large array | 32,409,274 | 0.158649 | javascript,java,python,c,algorithm | You can do it in O(nlog(n)):
Sort the array
find the duplicates (they will be next to each other) in one pass.
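A short Python sketch of that sort-and-scan idea:
a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
a.sort()                                        # O(n log n)
dups = {x for x, y in zip(a, a[1:]) if x == y}  # single pass over adjacent pairs
print(dups)                                     # {1, 3, 5}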
I think that is what the interviewer wanted to hear.
If you did a merge sort or a quick sort, finding the duplicates could be done during merging at essentially no extra cost.
These can be implemented "in-place", or "by-part" if the array is too large to fit in memory. | During one of my technical interviews I got this question.
I know a way to solve this problem using (in Java) a HashSet.
But I could not understand why the interviewer stressed "a very large array, let's say 10 million elements in the given array".
Do I need to change the approach? If not, what would be an efficient way to achieve this?
PS: Algo or implementation is language agnostic.
Thank you. | 0 | 1 | 3,410 |
0 | 32,409,400 | 0 | 0 | 0 | 0 | 3 | false | 3 | 2015-09-05T03:38:00.000 | 4 | 5 | 0 | Algo to find duplicates in a very large array | 32,409,274 | 0.158649 | javascript,java,python,c,algorithm | There were some key things the interviewer expected you to ask back, like: if you cannot load the whole array in memory, then how much can I load? These are the steps to solve the problem:
You need to divide the array according to how much memory is available to you.
Let's say you can load 1M numbers at a time. You split the data into k parts. You load the first 1M and build a Min Heap from it. Then remove the top and apply Heapify on the Min Heap.
Repeat the same for the other parts of the data.
Now you will have K sorted splits.
Now fetch the first number from each of the K splits and again build a Min Heap.
Now remove the top from the Min Heap and also store the value in a temporary variable, to compare with the next number when looking for duplicates.
Now fetch the next number from the same split (part) whose number was removed last time. Put that number on top of the Min Heap and apply Heapify.
Now the top of the Min Heap is your next sorted number; compare it with the temporary variable to find duplicates, and update the temporary variable if the number is not a duplicate. | During one of my technical interviews I got this question.
I know a way to solve this problem using (in Java) a HashSet.
But I could not understand why the interviewer stressed "a very large array, let's say 10 million elements in the given array".
Do I need to change the approach? If not, what would be an efficient way to achieve this?
PS: Algo or implementation is language agnostic.
Thank you. | 0 | 1 | 3,410 |
0 | 32,409,590 | 0 | 0 | 0 | 0 | 3 | true | 3 | 2015-09-05T03:38:00.000 | 3 | 5 | 0 | Algo to find duplicates in a very large array | 32,409,274 | 1.2 | javascript,java,python,c,algorithm | One thing to keep in mind is that O-notation doesn't necessarily tell you what algorithm is fastest. If one algorithm is O(n log n) and another algorithm is O(n^2), then there is some value M such that the first algorithm is faster for all n > M. But M could be much larger than the amount of data you'll ever have to deal with.
The reason I'm bringing this up is that I think a HashSet is probably still the best answer, although I'd have to profile it to find out for sure. Assuming that you aren't allowed to set up a hash table with 10 million buckets, you may still be able to set up a reasonable-sized table. Say you can create a HashSet with table size 100,000. The buckets will then be sets of objects. If n is the size of the array, the average bucket size will be n / 100000. So to see if an element is already in the HashSet, and add it if not, will take a fixed amount of time to compute the hash value, and O(n) to search the elements in the bucket if they're stored in a linear list(*). Technically, this means that the algorithm to find all duplicates is O(n^2). But since one of the n's in n^2 is for a linear list that is so much smaller than the array size (by a factor of 100000), it seems likely to me that it will still take much less time than an O(n log n) sort, for 10 million items. The value of M, the point at which the O(n log n) sort becomes faster, is likely to be much, much larger than that. (I'm just guessing, though; to find out for certain would require some profiling.)
I'd tend to lean against using a sort anyway, because if all you need to do is find duplicates, a sort is doing more work than you need. You shouldn't need to put the elements in order, just to find duplicates. That to me suggests that a sort is not likely to be the best answer.
(*) Note that in Java 8, the elements in each bucket will be in some kind of search tree, probably a red-black tree, instead of in a linear list. So the algorithm will still be O(n log n), and still probably lots faster than a sort. | During one of my technical interviews I got this question.
I know a way to solve this problem using (in Java) a HashSet.
But I could not understand why the interviewer stressed "a very large array, let's say 10 million elements in the given array".
Do I need to change the approach? If not, what would be an efficient way to achieve this?
PS: Algo or implementation is language agnostic.
Thank you. | 0 | 1 | 3,410 |
0 | 32,428,486 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-09-06T21:06:00.000 | 0 | 1 | 0 | Break into debugger on numpy exception in PTVS | 32,428,313 | 0 | python,numpy,ptvs | You can add your own exception types in the Exceptions dialog (in Debug -> Exceptions), and then check them to have it break on them. | I'm using PTVS to debug some code I've written. I'd like to get it to break into the debugger whenever a numpy exception is raised. Currently, it only breaks into the debugger when a standard Python exception is raised; whenever a numpy exception is raised, all I get is a traceback print out. | 0 | 1 | 92 |
0 | 32,554,802 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-09-07T09:02:00.000 | 0 | 2 | 0 | POS tagging - NLTK- Python | 32,434,810 | 0 | python,nltk | Your question confuses the nltk itself with nltk_data. You can't really download just part of the nltk (though you could manually trim it down, carefully, if you need to save space). But I think you're trying to avoid downloading all of the nltk data. As @barny wrote, you can see the IDs of different resources when you open the interactive nltk.download() window.
To use the treebank pos tagger, you need its pickled training tables (not the treebank corpus); you'll find them in the "Models" tab under the ID maxent_treebank_pos_tagger. (Hence: nltk.download("maxent_treebank_pos_tagger").
The FreqDist class doesn't have or need any trained model.
Neither does word_tokenize, which takes a sentence as a single string and breaks it up into words. However, you'll probably need the model for sent_tokenize, which breaks up a longer text into sentences. That's handled by the "Punkt" sentence tokenizer, and you can download its model with nltk.download("punkt").
PS. For general-purpose use, I recommend downloading everything in the "book" collection, i.e. nltk.download("book"). It's only a fraction of the total, and it lets you do most things without scrambling every so often to figure out what's missing. | I want to use word_tokenize, pos_tag, FreqDist. I don't want to download all nltk as default. I want to use nltk.download(info_or_id=''). What options I should put in info_or_id to get the POS tagging and its frequency. POS tagging - Penn Treebank POS. | 0 | 1 | 283 |
0 | 32,510,602 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-09-09T20:41:00.000 | 1 | 3 | 0 | Use Histogram data to generate random samples in scipy | 32,488,808 | 0.066568 | python,statistics,scipy | You can also use the histogram, piecewise uniform distribution directly, then you get exactly the corresponding random numbers instead of an approximation.
The inverse cdf, ppf, is piecewise linear and linear interpolation can be used to transform uniform random numbers appropriately. | Suppose I have a process where I push a button, and after a certain amount of time (from 1 to 30 minutes), an event occurs. I then run a very large number of trials, and record how long it takes the event to occur for each trial. This raw data is then reduced to a set of 30 data points where the x value is the number of minutes it took for the event to occur, and the y value is the percentage of trials which fell into that bucket. I do not have access to the original data.
How can I use this set of 30 points to identify an appropriate probability distribution which I can then use to generate representative random samples?
I feel like scipy.stats has all the tools I need built in, but for the life of me I can't figure out how to go about it. Any tips? | 0 | 1 | 2,122 |
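A hedged numpy sketch of the piecewise-linear inverse-CDF idea from the answer above (the 30 percentages are simulated stand-ins):
import numpy as np
pct = np.random.dirichlet(np.ones(30))            # stand-in for the 30 observed fractions
cdf = np.concatenate(([0.0], np.cumsum(pct)))     # cumulative distribution over the minute buckets
edges = np.arange(0.5, 31.5)                      # 31 bin edges around the whole-minute buckets
samples = np.interp(np.random.rand(1000), cdf, edges)   # 1000 representative samples in minutes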
0 | 32,490,655 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-09-09T23:12:00.000 | 4 | 2 | 0 | Change date format when importing from CSV | 32,490,561 | 1.2 | python,excel,date,csv,import-from-csv | Option 3. Import it properly
Use DATA, Get External Data, From Text and when the wizard prompts you choose the appropriate DMY combination (Step 3 of 3, Under Column data format, and Date). | I have a CSV file where the date is formatted as yy/mm/dd, but Excel is reading it wrongly as dd/mm/yyyy (e.g. 8th September 2015 is read as 15th of September 2008).
I know how to change the format that Excel outputs, but how can I change the format it uses to interpret the CSV data?
I'd like to keep it to Excel if possible, but I could work with a Python program. | 0 | 1 | 1,588 |
0 | 32,502,985 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2015-09-10T03:17:00.000 | 8 | 2 | 0 | What is the difference between sample weight and class weight options in scikit learn? | 32,492,550 | 1 | python,machine-learning,scikit-learn,classification | sample_weight and class_weight have a similar function, that is to make your estimator pay more attention to some samples.
Actual sample weights will be sample_weight * weights from class_weight.
This serves the same purpose as under/oversampling but the behavior is likely to be different: say you have an algorithm that randomly picks samples (like in random forests), it matters whether you oversampled or not.
To sum it up:
class_weight and sample_weight both do 2), option 2) is one way to handle class imbalance. I don't know of an universally recommended way, I would try 1), 2) and 1) + 2) on your specific problem to see what works best. | I have class imbalance problem and want to solve this using cost sensitive learning.
under sample and over sample
give weights to class to use a modified loss function
Question
Scikit learn has 2 options called class weights and sample weights. Is sample weight actually doing option 2) and class weight option 1)? Is option 2) the recommended way of handling class imbalance? | 0 | 1 | 12,531
0 | 46,094,876 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2015-09-10T20:40:00.000 | 1 | 1 | 0 | Does scikit-learn have Bayes Net ? If yes is there an implementation for reference | 32,511,232 | 0.197375 | python,scikit-learn,bayesian-networks | You can use Weka for classify the data using BayesNet in Python.You can train your data using Weka and save your model as XML then you can write prediction API's in python for that saved model. | I need to classify the data using BayesNet in Python. I have used scikit learn for other classifiers like Random Forests, SVM etc. I know it has Naive Bayes but I am looking for Bayesian Network alone. If anyone could help me with it it would be very helpful Also, if there is an implementation of it for reference that would be even more helpful.
Thanks | 0 | 1 | 1,288 |
0 | 32,912,189 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-09-11T08:40:00.000 | 3 | 2 | 0 | Scipy installation cygwin64 Windows10 fails at late stage | 32,519,166 | 0.291313 | python-2.7,scipy,cygwin,windows-10 | I suffered for days with the same issue. My final solution was to install scipy0.15.1: pip install scipy==0.15.1. Hope it works for you too. | Installed cygwin64, including Python 2.7, on my new computer running Windows10.
Python runs fine, adding modules like matplotlib or bitstream goes fine, but when trying to add scipy the build eventually, after about an hour, having successfully compiled lots of fortran and C/C++ files, fails with:
error: Setup script exited with error: Command "g++ -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1 -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include -Iscipy/spatial/ckdtree/src -I/usr/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c scipy/spatial/ckdtree/src/ckdtree_query.cxx -o build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/ckdtree/src/ckdtree_query.o" failed with exit status 1
I've tried both pip install and easy_install, both result in the same error.
Grateful for any hints on what to try next. | 0 | 1 | 620
0 | 32,574,361 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-09-12T23:49:00.000 | 1 | 1 | 0 | Cannot import scikit-learn | 32,544,925 | 0.197375 | python,numpy,matplotlib,scikit-learn,anaconda | The MacHD/Library/Frameworks/python.framework/versions/3.4/site-packages/sklearn is for Python 3.4 (note the 3.4 in the path) and the MacHD/Library/Python/2.7/ is for Python 2.7. The packages for each are independent of each other. | Beginner here, please be gentle! I’m receiving an error that reads
ImportError: No module named sklearn when using pycharm.
I’m trying to import matplotlib, numpy, and sklearn. I’ve downloaded scikit_learn. I’ve also downloaded anaconda.
I have “two” pythons. Looks like this…
MacHD/Library/Frameworks/python.framework/versions/3.4/site-packages/sklearn
MacHD/Library/Python/2.7/ ... in here is pip and scikit_learn
The strange thing is that matplotlib and numpy work but not sklearn. How can I figure out what's wrong? | 0 | 1 | 809 |
0 | 32,551,334 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-09-13T06:35:00.000 | 4 | 1 | 0 | Which classifiers handle missing values in scikit learn 0.16.1 | 32,546,992 | 1.2 | python,scikit-learn | RandomForests in scikit-learn don't handle missing values at the moment [as of 0.16 and upcoming 0.17], and you do need to impute the values before. | I have data with missing values and I would like to build a classifier for it. I know that scikit-learn will help you impute values for the missing data. However, in my case it is not clear this is the right thing to do or even easy. The problem is that the features in the data are correlated so it's not obvious now to do this imputation in a sensible way.
I know that in R some of the classifiers (decision trees, random forests) can directly handle missing values without your having to do any imputation.
Can any of the classifiers in scikit learn 0.16.1 do likewise and if so, how should I represent the missing values to help it?
I have read discussions on the scikit learn github about this topic but I can't work out what has actually been implemented and what hasn't. | 0 | 1 | 2,290
0 | 32,555,599 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2015-09-13T19:39:00.000 | 2 | 2 | 0 | If my entire training set of documents is class A, how can I use TF-IDF to find other documents of class A? | 32,553,806 | 1.2 | python,machine-learning,tf-idf,text-classification | What I think you have is an unsupervised learning application. Clustering. Using the combined X & Y dataset, generate clusters. Then overlay the X boundary; the boundary that contains all X samples. All items from Y in the X boundary can be considered X. And the X-ness of a given sample from Y is the distance from the X cluster centroid. Something like that. | I have a collection X of documents, all of which are of class A (the only class in which I'm interested or know anything about). I also have a much larger collection Y of documents that I know nothing about. The documents in X and Y come from the same source and have similar formats and somewhat similar subject matters. I'd like to use the TF-IDF feature vectors of the documents in X to find the documents in Y that are most likely to be of class A.
In the past, I've used TF-IDF feature vectors to build naive Bayes classifiers, but in these situations, my training set X consisted of documents of many classes, and my objective was to classify each document in Y as one of the classes seen in X.
This seems like a different situation. Here, my entire training set has the same class (I have no documents that I know are not of class A), and I'm only interested in determining if documents in Y are or are not of that class.
A classifier seems like the wrong route, but I'm not sure what the best next step is. Is there a different algorithm that can use that TF-IDF matrix to determine the likelihood that a document is of the same class?
FYI, I'm using scikit-learn in Python 2.7, which obviously made computing the TF-IDF matrix of X (and Y) simple. | 0 | 1 | 264 |
0 | 32,604,626 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2015-09-13T19:39:00.000 | 1 | 2 | 0 | If my entire training set of documents is class A, how can I use TF-IDF to find other documents of class A? | 32,553,806 | 0.099668 | python,machine-learning,tf-idf,text-classification | The easiest thing to do is what was already proposed - clustering. More specifically, you extract a single feature vector from set X and then apply K-means clustering to the whole X & Y set.
ps: Be careful not to confuse k-means with kNN (k-nearest neighbors). You are able to apply only unsupervised learning methods. | I have a collection X of documents, all of which are of class A (the only class in which I'm interested or know anything about). I also have a much larger collection Y of documents that I know nothing about. The documents in X and Y come from the same source and have similar formats and somewhat similar subject matters. I'd like to use the TF-IDF feature vectors of the documents in X to find the documents in Y that are most likely to be of class A.
In the past, I've used TF-IDF feature vectors to build naive Bayes classifiers, but in these situations, my training set X consisted of documents of many classes, and my objective was to classify each document in Y as one of the classes seen in X.
This seems like a different situation. Here, my entire training set has the same class (I have no documents that I know are not of class A), and I'm only interested in determining if documents in Y are or are not of that class.
A classifier seems like the wrong route, but I'm not sure what the best next step is. Is there a different algorithm that can use that TF-IDF matrix to determine the likelihood that a document is of the same class?
FYI, I'm using scikit-learn in Python 2.7, which obviously made computing the TF-IDF matrix of X (and Y) simple. | 0 | 1 | 264 |
0 | 32,574,697 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-09-14T21:09:00.000 | 1 | 1 | 1 | How can I reference libraries for ApacheSpark using IPython Notebook only? | 32,573,995 | 1.2 | python,apache-spark,ipython,ibm-cloud,jupyter | You cannot add 3rd party libraries at this point in the beta. This will most certainly be coming later in the beta as it's a popular requirement ;-) | I'm currently playing around with the Apache Spark Service in IBM Bluemix. There is a quick start composite application (Boilerplate) consisting of the Spark Service itself, an OpenStack Swift service for the data and an IPython/Jupyter Notebook.
I want to add some 3rd party libraries to the system and I'm wondering how this could be achieved. Using an python import statement doesn't really help since the libraries are then expected to be located on the SparkWorker nodes.
Is there a way of loading python libraries in Spark from an external source during job runtime (e.g. a Swift or ftp source)?
thanks a lot! | 0 | 1 | 157 |
0 | 32,641,163 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-09-17T22:16:00.000 | 0 | 2 | 0 | In Python, can you change how a method from class 1 acts on class 2 from within class 2? | 32,640,830 | 0 | python,numpy,multidimensional-array,scipy,subclass | No. Numpy's asarray() is coded to instantiate a regular numpy array, and you can't change that without editing asarray() or changing the caller's code to call your special method instead of asarray() | Basically I have a class which subclasses ndarray and has additional information. When I call np.asarray() on my object, it returns just the numpy array and destroys my additional information.
My question is then this: Is there a way in Python to change how np.asarray() acts on my subclass of ndarray from within my subclass? I don't want to change numpy of course, and I do not want to go through every instance where np.asarray() is called to take care of this.
Thanks in advance!
Chris | 0 | 1 | 54 |
0 | 32,664,411 | 0 | 1 | 0 | 0 | 1 | true | 8 | 2015-09-18T20:00:00.000 | 28 | 3 | 0 | How to install OpenCV on Windows and enable it for PyCharm without using the package manager | 32,660,114 | 1.2 | python,windows,opencv,installation,pycharm | I finally figured out on how to solve this issue:
Steps to follow:
Install Python 2.7.10
Install PyCharm (if you have not done it already).
Download and install the OpenCV executable.
Add OpenCV to the system path (%OPENCV_DIR% = /path/of/opencv/directory).
Go to the C:\opencv\build\python\2.7\x86 folder and copy the cv2.pyd file.
Go to the C:\Python27\DLLs directory and paste the cv2.pyd file.
Go to the C:\Python27\Lib\site-packages directory and paste the cv2.pyd file.
Go to the PyCharm IDE and open Default Settings > Python Interpreter.
Select the Python you installed in Step 1.
Install the packages numpy, matplotlib and pip in PyCharm.
Restart PyCharm.
PyCharm now has the OpenCV library installed and working.
The environment is a Windows 64-bit architecture. For Python I am using Python 2.7.10.
I have already included the OpenCV directory in the system path.
I am using python 2.7.10 interpreter for PyCharm and have installed the pip and numpy packages.
The OpenCV version is 3.0.0.
How do I enable OpenCV and make it work in PyCharm? | 0 | 1 | 66,287
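A quick sanity check, added here rather than taken from the steps above, to confirm that the copied cv2.pyd is picked up by the interpreter PyCharm uses:

# Run this from the same Python 2.7 interpreter that PyCharm is configured with.
import cv2
import numpy as np

print(cv2.__version__)                          # should print 3.0.0

img = np.zeros((100, 100, 3), dtype=np.uint8)   # synthetic image, no file needed
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(gray.shape)                               # (100, 100)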
0 | 65,357,723 | 0 | 1 | 0 | 0 | 4 | false | 24 | 2015-09-20T02:07:00.000 | 1 | 8 | 0 | Getting PyCharm to import sklearn | 32,675,024 | 0.024995 | python,scikit-learn,python-import,anaconda | SOLVED:
reinstalled Python 3.7.9 (not the latest)
installed numpy 1.17.5 (not the latest)
installed scikit-learn (latest)
sklearn works now! | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferences, project interpreter, I see all of Anaconda's packages. I've double-clicked and installed the packages scikit-learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | 0 | 1 | 67,839 |
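The answer above pins specific versions (Python 3.7.9, numpy 1.17.5). A small check like the following, run from PyCharm's configured interpreter, confirms which versions are actually in use; it is an addition for illustration, not part of the answer.

# Print the versions the active interpreter actually sees.
import sys
import numpy
import sklearn

print(sys.version)
print("numpy:", numpy.__version__)
print("scikit-learn:", sklearn.__version__)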
0 | 53,357,368 | 0 | 1 | 0 | 0 | 4 | false | 24 | 2015-09-20T02:07:00.000 | 0 | 8 | 0 | Getting PyCharm to import sklearn | 32,675,024 | 0 | python,scikit-learn,python-import,anaconda | For Mac OS:
PyCharm --> Preferences --> Project Interpreter --> double-click on pip (a new window will open with a search option) --> type 'Scikit-learn' in the search bar --> Install Packages --> once installed, close that new window --> OK on the existing window
and you are done. | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferences, project interpreter, I see all of Anaconda's packages. I've double-clicked and installed the packages scikit-learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | 0 | 1 | 67,839 |
0 | 54,702,828 | 0 | 1 | 0 | 0 | 4 | false | 24 | 2015-09-20T02:07:00.000 | 8 | 8 | 0 | Getting PyCharm to import sklearn | 32,675,024 | 1 | python,scikit-learn,python-import,anaconda | Please note that in the package search you should look for 'Scikit-learn' instead of 'sklearn'. | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferences, project interpreter, I see all of Anaconda's packages. I've double-clicked and installed the packages scikit-learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | 0 | 1 | 67,839 |
0 | 50,808,255 | 0 | 1 | 0 | 0 | 4 | false | 24 | 2015-09-20T02:07:00.000 | 1 | 8 | 0 | Getting PyCharm to import sklearn | 32,675,024 | 0.024995 | python,scikit-learn,python-import,anaconda | The same error occurred for me; I fixed it by selecting File Menu -> Default Settings -> Project Interpreter -> pressing the + button, typing 'sklearn' and pressing the Install button. Installation takes 10 to 20 seconds.
If the issue is not resolved, please check your PyCharm interpreter path. Sometimes your machine has both Python 2.7 and Python 3.6 installed and there may be a conflict when choosing one. | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferences, project interpreter, I see all of Anaconda's packages. I've double-clicked and installed the packages scikit-learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | 0 | 1 | 67,839 |
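The last answer suggests checking the PyCharm interpreter path when several Python installations are present. Printing sys.executable from inside PyCharm shows exactly which interpreter, and therefore which site-packages directory, is being used; this snippet is added for illustration.

# Run inside PyCharm to see which interpreter and module search path are active.
import sys
print(sys.executable)   # path of the interpreter PyCharm is running
print(sys.path)         # directories searched when importing sklearn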
0 | 32,709,770 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-09-21T14:22:00.000 | 0 | 1 | 0 | Face detection for two faces-Cascade opencv python | 32,697,663 | 0 | python,opencv,face-detection | Sort the detected faces by size and keep the biggest one only? | I am working on face detection using the cascade classifier in OpenCV Python. It's working fine, but I want my code to detect only one face, namely the largest face in the image. | 0 | 1 | 216
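A minimal sketch of the answer's suggestion (keep only the biggest detection). The cascade XML path and the image file name are assumptions and must point at files that exist on your machine.

# Detect all faces, then keep only the largest one by bounding-box area.
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("people.jpg")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) > 0:
    x, y, w, h = max(faces, key=lambda rect: rect[2] * rect[3])  # largest area wins
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)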
0 | 32,797,781 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-09-24T07:54:00.000 | 2 | 1 | 0 | OpenCV can't find the right version of CUDA | 32,756,140 | 0.379949 | python,c++,opencv,cuda,nvidia | Upgrading from CMake 2.8 to CMake 3.2.2 seems to have solved this particular issue.
[This answer has been added from information gleaned from comments in order to get the question off the unanswered list] | I installed OpenCV 3.0.0 but I'm having troubles any C++ or Python code using OpenCV. For testing, I went into the directory opencv-3.0.0/samples and ran cmake to build the samples. I got the following error:
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
Could NOT find CUDA: Found unsuitable version "5.5", but required is exact version "7.0" (found /usr)
However, I'm pretty sure that I have CUDA 7.0 installed, and I verified it by getting the following output from nvcc --version on the command line:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Mon_Feb_16_22:59:02_CST_2015
Cuda compilation tools, release 7.0, V7.0.27
Why might OpenCV think that I got the wrong version installed? Is there a workaround for this? | 0 | 1 | 1,214 |
0 | 55,287,554 | 0 | 1 | 0 | 0 | 2 | false | 19 | 2015-09-24T12:38:00.000 | 8 | 3 | 0 | Python scikit learn n_jobs | 32,761,556 | 1 | python,parallel-processing,scikit-learn,joblib | what is the point of using n-jobs (and joblib) if the library uses all cores anyway?
It does not: if you set n_jobs to -1, it will use all cores. If it is set to 1 or 2, it will use one or two cores only (tested with scikit-learn 0.20.3 under Linux). | This is not a real issue, but I'd like to understand:
running sklearn from Anaconda distrib on a Win7 4 cores 8 GB system
fitting a KMeans model on a 200.000 samples*200 values table.
running with n-jobs = -1: (after adding the if __name__ == '__main__': line to my script) I see the script starting 4 processes with 10 threads each. Each process uses about 25% of the CPU (total: 100%). Seems to work as expected
running with n-jobs = 1: stays on a single process (not a surprise), with 20 threads, and also uses 100% of the CPU.
My question: what is the point of using n-jobs (and joblib) if the library uses all cores anyway? Am I missing something? Is it a Windows-specific behaviour? | 0 | 1 | 46,321
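A sketch of the experiment described in the question. Note that KMeans accepted an n_jobs argument only in older scikit-learn releases (it was deprecated and removed by version 1.0), so this is version-dependent; the random data merely stands in for the 200,000 x 200 table.

# Version-dependent sketch: KMeans(n_jobs=...) only exists in older scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

if __name__ == '__main__':                 # needed on Windows for multiprocessing
    X = np.random.rand(200000, 200)

    # n_jobs=-1 runs one process per core; n_jobs=1 keeps a single process,
    # though BLAS/OpenMP threading inside numpy may still use several threads.
    km = KMeans(n_clusters=8, n_init=10, n_jobs=-1)
    km.fit(X)
    print(km.inertia_)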
0 | 59,874,972 | 0 | 1 | 0 | 0 | 2 | false | 19 | 2015-09-24T12:38:00.000 | -1 | 3 | 0 | Python scikit learn n_jobs | 32,761,556 | -0.066568 | python,parallel-processing,scikit-learn,joblib | You should either use n_jobs or joblib, don't use both simultaneously. | This is not a real issue, but I'd like to understand:
running sklearn from Anaconda distrib on a Win7 4 cores 8 GB system
fitting a KMeans model on a 200.000 samples*200 values table.
running with n-jobs = -1: (after adding the if __name__ == '__main__': line to my script) I see the script starting 4 processes with 10 threads each. Each process uses about 25% of the CPU (total: 100%). Seems to work as expected
running with n-jobs = 1: stays on a single process (not a surprise), with 20 threads, and also uses 100% of the CPU.
My question: what is the point of using n-jobs (and joblib) if the library uses all cores anyway? Am I missing something? Is it a Windows-specific behaviour? | 0 | 1 | 46,321
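One possible reading of the answer above: either pass n_jobs to the estimator, or control parallelism from outside with a joblib backend context, but not both at once. The sketch below shows the second option; it is an interpretation rather than something the answer spells out, and RandomForestClassifier is used only as an example estimator with internal joblib parallelism.

# Leave n_jobs unset on the estimator and let a joblib backend context decide.
import numpy as np
from joblib import parallel_backend
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

clf = RandomForestClassifier(n_estimators=100)   # no n_jobs set on the estimator

with parallel_backend('loky', n_jobs=2):         # joblib controls the parallelism
    clf.fit(X, y)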