| Column | Type | Values |
|---|---|---|
| GUI and Desktop Applications | int64 | 0 to 1 |
| A_Id | int64 | 5.3k to 72.5M |
| Networking and APIs | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| Other | int64 | 0 to 1 |
| Database and SQL | int64 | 0 to 1 |
| Available Count | int64 | 1 to 13 |
| is_accepted | bool | 2 classes |
| Q_Score | int64 | 0 to 1.72k |
| CreationDate | string | length 23 to 23 |
| Users Score | int64 | -11 to 327 |
| AnswerCount | int64 | 1 to 31 |
| System Administration and DevOps | int64 | 0 to 1 |
| Title | string | length 15 to 149 |
| Q_Id | int64 | 5.14k to 60M |
| Score | float64 | -1 to 1.2 |
| Tags | string | length 6 to 90 |
| Answer | string | length 18 to 5.54k |
| Question | string | length 49 to 9.42k |
| Web Development | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 1 to 1 |
| ViewCount | int64 | 7 to 3.27M |
0
37,311,742
0
0
0
0
1
false
6
2016-05-18T23:33:00.000
3
3
0
High-dimensional data structure in Python
37,311,699
0.197375
python,numpy,pandas,machine-learning,multi-index
If you need labelled arrays and pandas-like smart indexing, you can use xarray package which is essentially an n-dimensional extension of pandas Panel (panels are being deprecated in pandas in future in favour of xarray). Otherwise, it may sometimes be reasonable to use plain numpy arrays which can be of any dimensionality; you can also have arbitrarily nested numpy record arrays of any dimension.
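A minimal sketch of the xarray approach mentioned in this answer; the dimension names, sizes, and coordinate values are illustrative assumptions, not taken from the question.

```python
import numpy as np
import xarray as xr

# Hypothetical 4-D labelled array; dims and coords are made up for illustration.
data = xr.DataArray(
    np.random.rand(3, 4, 5, 6),
    dims=("time", "x", "y", "channel"),
    coords={"time": [2014, 2015, 2016], "channel": list("abcdef")},
)
# pandas-like label-based indexing on an n-dimensional cube
subset = data.sel(channel="a").mean(dim="time")
```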
What is the best way to store and analyze high-dimensional data in Python? I like pandas DataFrame and Panel, where I can easily manipulate the axes. Now I have a hyper-cube (dim >= 4) of data. I have been thinking of things like a dict of Panels or tuples as panel entries. I wonder if there is a high-dimensional panel equivalent in Python. Update 20/05/16: Thanks very much for all the answers. I have tried MultiIndex and xarray, however I am not able to comment on either of them. In my problem I will try to use ndarray instead, as I found the labels are not essential and I can save them separately. Update 16/09/16: I ended up using MultiIndex. The ways to manipulate it are pretty tricky at first, but I have kind of got used to it now.
0
1
2,444
0
37,313,404
0
0
0
0
2
false
1
2016-05-19T03:10:00.000
1
2
0
How to convert two channel audio into one channel audio
37,313,320
0.099668
python,audio
I handle this using MATLAB; Python can do the same: (left channel + right channel) / 2.0
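A minimal NumPy sketch of that formula, assuming the audio is a float array of shape (2, N) as described in the question.

```python
import numpy as np

stereo = np.random.rand(2, 44100)      # placeholder for the 2 x N audio array
mono = (stereo[0] + stereo[1]) / 2.0   # (left + right) / 2
```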
I am playing around with some audio processing in Python. Right now I have the audio as a 2 x (large number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to gray-scale, where you would average each of the color channels to create a gray pixel. Then I thought that maybe I should add them due to the superposition principle of waves (then again, averaging is just adding and dividing by two). Does anyone know the best way to do this?
0
1
3,971
0
37,313,414
0
0
0
0
2
true
1
2016-05-19T03:10:00.000
1
2
0
How to convert two channel audio into one channel audio
37,313,320
1.2
python,audio
To convert any stereo audio to mono, what I have always seen is the following: For each pair of left and right samples: Add the values of the samples together in a way that will not overflow Divide the resulting value by two Use this resulting value as the sample in the mono track - make sure to round it properly if you are converting it to an integer value from a floating point value
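A sketch of the overflow-safe variant this answer describes, for integer samples; the int16 dtype is an assumption, since the question actually works with float arrays.

```python
import numpy as np

stereo = np.zeros((2, 44100), dtype=np.int16)  # placeholder int16 stereo signal
# widen before adding so the sum cannot overflow, then divide and cast back
mono = ((stereo[0].astype(np.int32) + stereo[1].astype(np.int32)) // 2).astype(np.int16)
```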
I am playing around with some audio processing in Python. Right now I have the audio as a 2 x (large number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to gray-scale, where you would average each of the color channels to create a gray pixel. Then I thought that maybe I should add them due to the superposition principle of waves (then again, averaging is just adding and dividing by two). Does anyone know the best way to do this?
0
1
3,971
0
37,333,670
0
0
0
0
1
true
1
2016-05-19T18:40:00.000
0
1
0
Annotating bokeh chart plot
37,331,559
1.2
python,plot,bokeh,glyph
It's always possible to use .add_glyph directly, but it is a bit of a pain. The feature to add all the "glyph methods" e.g. .circle, .rect, etc. is in GitHub master, and will be available in the upcoming 0.12 release.
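A rough sketch of one way to put an asterisk above a bar using the glyph methods this answer mentions; the categories, values, and the availability of vbar/text in the installed Bokeh version are assumptions.

```python
from bokeh.plotting import figure, show

categories = ["A", "B", "C"]   # made-up data
values = [3, 7, 5]

p = figure(x_range=categories)
p.vbar(x=categories, top=values, width=0.8)
p.text(x=["B"], y=[7.2], text=["*"], text_align="center")  # marker above one bar
show(p)
```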
Is it possible to annotate or add markers of any form to Bokeh charts (specifically Bar graphs)? Say I want to add an * (asterisks) on top of a bar; is this possible?
0
1
240
0
37,352,954
0
1
0
0
1
false
0
2016-05-20T17:49:00.000
1
1
0
Python arrays for objects
37,352,895
0.197375
python,multidimensional-array
NumPy arrays actually do allow non-numerical contents, so you can just use NumPy.
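A minimal sketch using dtype=object, which is presumably what this answer means by non-numerical contents.

```python
import numpy as np

arr = np.empty((2, 3, 4), dtype=object)   # holds arbitrary Python objects
arr[1, :, 2] = "hello"                    # numpy-style indexing as asked in the question
print(arr[1, :, 2])
```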
I would like to make a numpy-like multi-dimensional array of non-numerical objects in python. I believe Numpy arrays only allow numerical values. List-of-lists are much less convenient to index- for example, I'd like to be able to ask for myarray[1,:,2] which requires much more complicated calls from lists of lists. Is there a good tool for this?
0
1
42
0
37,416,493
0
0
0
0
1
true
4
2016-05-23T08:52:00.000
4
1
0
What is the optimal topic-modelling workflow with MALLET?
37,386,595
1.2
python,r,text-mining,lda,mallet
Thank you for this thorough summary! As an alternative to topicmodels try the package mallet in R. It runs Mallet in a JVM directly from R and allows you to pull out results as R tables. I expect to release a new version soon, and compatibility with tm constructs is something others have requested. To clarify, it's a good idea for documents to be at most around 1000 tokens long (not vocabulary). Any more and you start to lose useful information. The assumption of the model is that the position of a token within a given document doesn't tell you anything about that token's topic. That's rarely true for longer documents, so it helps to break them up. Another point I would add is that documents that are too short can also be a problem. Tweets, for example, don't seem to provide enough contextual information about word co-occurrence, so the model often devolves into a one-topic-per-doc clustering algorithm. Combining multiple related short documents can make a big difference. Vocabulary curation is in practice the most challenging part of a topic modeling workflow. Replacing selected multi-word terms with single tokens (for example by swapping spaces for underscores) before tokenizing is a very good idea. Stemming is almost never useful, at least for English. Automated methods can help vocabulary curation, but this step has a profound impact on results (much more than the number of topics) and I am reluctant to encourage people to fully trust any system. Parameters: I do not believe that there is a right number of topics. I recommend using a number of topics that provides the granularity that suits your application. Likelihood can often detect when you have too few topics, but after a threshold it doesn't provide much useful information. Using hyperparameter optimization makes models much less sensitive to this setting as well, which might reduce the number of parameters that you need to search over. Topic drift: This is not a well understood problem. More examples of real-world corpus change would be useful. Looking for changes in vocabulary (e.g. proportion of out-of-vocabulary words) is a quick proxy for how well a model will fit.
Introduction I'd like to know what other topic modellers consider to be an optimal topic-modelling workflow all the way from pre-processing to maintenance. While this question consists of a number of sub-questions (which I will specify below), I believe this thread would be useful for myself and others who are interested to learn about best practices of end-to-end process. Proposed Solution Specifications I'd like the proposed solution to preferably rely on R for text processing (but Python is fine also) and topic-modelling itself to be done in MALLET (although if you believe other solutions work better, please let us know). I tend to use the topicmodels package in R, however I would like to switch to MALLET as it offers many benefits over topicmodels. It can handle a lot of data, it does not rely on specific text pre-processing tools and it appears to be widely used for this purpose. However some of the issues outline below are also relevant for topicmodels too. I'd like to know how others approach topic modelling and which of the below steps could be improved. Any useful piece of advice is welcome. Outline Here is how it's going to work: I'm going to go through the workflow which in my opinion works reasonably well, and I'm going to outline problems at each step. Proposed Workflow 1. Clean text This involves removing punctuation marks, digits, stop words, stemming words and other text-processing tasks. Many of these can be done either as part of term-document matrix decomposition through functions such as for example TermDocumentMatrix from R's package tm. Problem: This however may need to be performed on the text strings directly, using functions such as gsub in order for MALLET to consume these strings. Performing in on the strings directly is not as efficient as it involves repetition (e.g. the same word would have to be stemmed several times) 2. Construct features In this step we construct a term-document matrix (TDM), followed by the filtering of terms based on frequency, and TF-IDF values. It is preferable to limit your bag of features to about 1000 or so. Next go through the terms and identify what requires to be (1) dropped (some stop words will make it through), (2) renamed or (3) merged with existing entries. While I'm familiar with the concept of stem-completion, I find that it rarely works well. Problem: (1) Unfortunately MALLET does not work with TDM constructs and to make use of your TDM, you would need to find the difference between the original TDM -- with no features removed -- and the TDM that you are happy with. This difference would become stop words for MALLET. (2) On that note I'd also like to point out that feature selection does require a substantial amount of manual work and if anyone has ideas on how to minimise it, please share your thoughts. Side note: If you decide to stick with R alone, then I can recommend the quanteda package which has a function dfm that accepts a thesaurus as one of the parameters. This thesaurus allows to to capture patterns (usually regex) as opposed to words themselves, so for example you could have a pattern \\bsign\\w*.?ups? that would match sign-up, signed up and so on. 3. Find optimal parameters This is a hard one. I tend to break data into test-train sets and run cross-validation fitting a model of k topics and testing the fit using held-out data. Log likelihood is recorded and compared for different resolutions of topics. 
Problem: Log likelihood does help to understand how good is the fit, but (1) it often tends to suggest that I need more topics than it is practically sensible and (2) given how long it generally takes to fit a model, it is virtually impossible to find or test a grid of optimal values such as iterations, alpha, burn-in and so on. Side note: When selecting the optimal number of topics, I generally select a range of topics incrementing by 5 or so as incrementing a range by 1 generally takes too long to compute. 4. Maintenance It is easy to classify new data into a set existing topics. However if you are running it over time, you would naturally expect that some of your topics may cease to be relevant, while new topics may appear. Furthermore, it might be of interest to study the lifecycle of topics. This is difficult to account for as you are dealing with a problem that requires an unsupervised solution and yet for it to be tracked over time, you need to approach it in a supervised way. Problem: To overcome the above issue, you would need to (1) fit new data into an old set of topics, (2) construct a new topic model based on new data (3) monitor log likelihood values over time and devise a threshold when to switch from old to new; and (4) merge old and new solutions somehow so that the evolution of topics would be revealed to a lay observer. Recap of Problems String cleaning for MALLET to consume the data is inefficient. Feature selection requires manual work. Optimal number of topics selection based on LL does not account for what is practically sensible Computational complexity does not give the opportunity to find an optimal grid of parameters (other than the number of topics) Maintenance of topics over time poses challenging issues as you have to retain history but also reflect what is currently relevant. If you've read that far, I'd like to thank you, this is a rather long post. If you are interested in the suggest, feel free to either add more questions in the comments that you think are relevant or offer your thoughts on how to overcome some of these problems. Cheers
0
1
1,478
0
53,760,310
0
0
0
0
1
false
5
2016-05-24T12:15:00.000
-1
4
0
f1 score of all classes from scikits cross_val_score
37,413,302
-0.049958
python,scikit-learn,cross-validation
For individual scores of each class, use this : f1 = f1_score(y_test, y_pred, average= None) print("f1 list non intent: ", f1)
I'm using cross_val_score from scikit-learn (package sklearn.cross_validation) to evaluate my classifiers. If I use f1 for the scoring parameter, the function will return the f1-score for one class. To get the average I can use f1_weighted but I can't find out how to get the f1-score of the other class. (precision and recall analogous) The functions in sklearn.metrics have a labels parameter which does this, but I can't find anything like this in the documentation. Is there a way to get the f1-score for all classes at once or at least specify the class which should be considered with cross_val_score?
0
1
13,883
0
37,845,330
0
0
0
0
1
false
1
2016-05-24T18:17:00.000
1
1
0
H5 file with images in Python: Want to randomly select without replacement
37,421,035
0.197375
python,file,vectorization,hdf5,h5py
An elegant way for sampling without replacement is computing a random permutation of the numbers 1..N (numpy.random.permutation) and then using chunks of size M from it. Storing data in an h5py file is kind of arbitrary. You could use a single higher dimensional data set or a group containing the N two dimensional data sets. It's up to you. I would actually prefer to have the two dimensional data sets separately (gives you more flexibility) and to iterate over it using Group.iteritems.
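A sketch of the permutation idea, assuming all N images live in a single (N, H, W) dataset named "images"; the file layout and names are illustrative, not from the question.

```python
import numpy as np
import h5py

N, M = 100000, 50
with h5py.File("images.h5", "r") as f:          # hypothetical file
    order = np.random.permutation(N)            # one pass through all images, no replacement
    for start in range(0, N, M):
        idx = np.sort(order[start:start + M])   # h5py fancy indexing wants increasing indices
        batch = f["images"][idx]                # read the next M images
```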
I have familiarized myself with the basics of H5 in python. What I would like to do now is two things: Write images (numpy arrays) into an H5 file. Once that is done, be able to pick out $M$ randomly. What is meant here is the following: I would like to write a total of $N=100000$ numpy arrays (images), into one H5 file. Once that is done, then I would like to randomly select say, $M=50$ images from the H5 file at random, and read them. Then, I would like to pick another $M=50$ at random, and read them in as well, etc etc, until I have gone through all $N$ images. (Basically, sample without replacement). Is there an elegant way to do this? I am currently experimenting with having each image be stored as a separate key-value pair, but I am not sure if this is the most elegant. Another solution is to store the entire volume of $N$ images, and then randomly select from there, but I am not sure that is elegant either, as it requires me to read in the entire block.
0
1
274
0
37,522,101
0
0
0
0
1
false
5
2016-05-24T20:25:00.000
1
2
0
orderedDict vs pandas series
37,423,208
0.099668
python,python-3.x,pandas,series,ordereddictionary
Ordered dict is implemented as part of the Python collections library. These collections are very fast containers for specific use cases. If you were looking only for dictionary-related functionality (like ordering, in this case), I would go for that. But you say you are going to do deeper analysis in an area that pandas is really made for (e.g. plotting, filling missing values), so I would recommend going with pandas.Series.
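A small sketch of the comparison, reusing the weekly-sales example from the question.

```python
from collections import OrderedDict
import pandas as pd

sales = OrderedDict([("Week 1", 400), ("Week 2", 550)])
s = pd.Series(sales)      # keeps the key order of the dict
print(s.mean())           # analysis methods a plain OrderedDict does not offer
print(s.pct_change())     # e.g. week-over-week growth
```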
Still new to this, sorry if I ask something really stupid. What are the differences between a Python ordered dictionary and a pandas series? The only difference I could think of is that an orderedDict can have nested dictionaries within the data. Is that all? Is that even true? Would there be a performance difference between using one vs the other? My project is a sales forecast, most of the data will be something like: {Week 1 : 400 units, Week 2 : 550 units}... Perhaps an ordered dictionary would be redundant since input order is irrelevant compared to Week#? Again I apologize if my question is stupid, I am just trying to be thorough as I learn. Thank you! -Stephen
0
1
1,177
0
38,984,971
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
0
15
0
import error; no module named Quandl
37,426,196
0
python-2.7,importerror,quandl
I am following a YouTube tutorial where they use 'Quandl'. It should be quandl. Change it and it won't throw an error.
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
38,992,511
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
12
15
0
import error; no module named Quandl
37,426,196
1
python-2.7,importerror,quandl
Use the syntax below, all in lower case: import quandl
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
60,087,731
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
1
15
0
import error; no module named Quandl
37,426,196
0.013333
python-2.7,importerror,quandl
Check whether it exists among the installed modules by typing pip list in the command prompt; if there is no module with the name quandl, then type pip install quandl in the command prompt. Worked for me in Jupyter.
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
59,360,553
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
0
15
0
import error; no module named Quandl
37,426,196
0
python-2.7,importerror,quandl
quandl has now changed; you require an API key. Go to the site and register your email. import quandl; quandl.ApiConfig.api_key = 'your_api_key_here'; df = quandl.get('WIKI/GOOGL')
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
52,893,601
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
0
15
0
import error; no module named Quandl
37,426,196
0
python-2.7,importerror,quandl
With an Anaconda/Jupyter notebook, go to the install directory (C:\Users\<USER_NAME>\AppData\Local\Continuum\anaconda3), where <USER_NAME> is your logged-in username. Then execute in the command prompt: python -m pip install Quandl, and then import quandl
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
43,598,162
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
0
15
0
import error; no module named Quandl
37,426,196
0
python-2.7,importerror,quandl
Install quandl for version 3.1.0. Check the package path where you installed it and make sure its name is quandl, not Quandl (my previous name was Quandl, so when I used import quandl, it always said "no module named quandl"). If your package's name is Quandl, delete it and reinstall it. (I use anaconda to install my packages, it's convenient!)
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
45,563,219
0
1
0
0
7
false
20
2016-05-25T00:55:00.000
-1
15
0
import error; no module named Quandl
37,426,196
-0.013333
python-2.7,importerror,quandl
Sometimes the quandl module is present as "Quandl" in the following location: C:\Program Files (x86)\Anaconda\lib\site-packages\Quandl. But the scripts from Quandl refer to quandl in import statements. So renaming the folder Quandl to quandl worked for me. New path: "C:\Program Files (x86)\Anaconda\lib\site-packages\quandl".
I am trying to run the Quandl module in a virtualenv in which I have installed only the packages pandas and then Quandl. I am running Python 2.7.10 - I have uninstalled all other Python versions, but it is still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
0
1
49,345
0
37,480,887
0
0
0
0
1
false
0
2016-05-27T10:10:00.000
0
1
0
how to access a particular column of a particular csv file from many imported csv files
37,480,728
0
python,r,csv
you can use list.data[[1]]$name1
Suppose that I have multiple .csv files with columns of the same kind. If I wanted to access data of a particular column from a specified .csv file, how is that possible? All .csv files have been stored in list.data. For example, suppose that here list.data[1] gives me the first .csv file. How will I access a column of this file? I have tried list.data[1]$nameofthecolumn, but this is giving me null values. I am not very familiar with R. list.data[1]$name1 NULL list.data[1]$name1 NULL list.data[1] $ NULL
0
1
51
0
37,485,073
0
0
0
0
1
false
0
2016-05-27T13:36:00.000
1
1
0
Import error: Error importing scipy
37,484,932
0.197375
python-2.7,pandas,matplotlib,anaconda,importerror
The csv_test.py file you have "added by hand" tries to import scipy; as the error message says, don't do that. You can probably place your test code in a private location, without messing with the scipy installation directory. I suggest uninstalling and reinstalling at least pandas and scipy, possibly everything, to eliminate intentional and unintentional alterations.
import pandas     Traceback (most recent call last):     File "", line 1, in     File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/init.py", line 37 , in import pandas.core.config_init      File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/core/config_init.py", line 18, in     from pandas.formats.format import detect_console_encoding     File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/formats/format.py", l ine 16, in     from pandas.io.common import _get_handle, UnicodeWriter, _expand_user     File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/io/common.py", line 5 , in      import csv      File "csv.py", line 10, in      field_size_limit, \      File "csv_test.py", line 4, in      from scipy import stats      File "scipy/init.py", line 103, in     raise ImportError(msg)     ImportError: Error importing scipy: you cannot import scipy while     being in scipy source directory; please exit the scipy source      tree first, and relaunch your python intepreter. I tried to import pandas by invoking python from bash. Before doing so,I used 'which python' command to make sure which python I'm using. It interacted to me saying "/Users/...(Usersname)/anaconda/bin/python". But when I imported pandas from IPython of anaconda's distribution, successfully imported. for the record,I added csv_test.py module by hand. guess this module is doing something wrong? it just imports scipy.stats and matplotlib.pyplot. So I hope it is not affecting any other module. seems something seriously wrong is going on here,judging from Error sentence..
0
1
466
0
37,596,921
0
0
0
1
1
false
4
2016-05-27T20:59:00.000
0
3
0
loss of precision when using pandas to read excel
37,492,173
0
python,excel,pandas,dataframe,precision
Excel might be truncating your values, not pandas. If you export to .csv from Excel and are careful about how you do it, you should then be able to read the file with pandas.read_csv and maintain all of your data. pandas.read_csv also has an undocumented float_precision kwarg, which may or may not be useful.
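A sketch of the CSV route with the float_precision option this answer mentions; the file name is a placeholder.

```python
import pandas as pd

# read an Excel export saved as CSV, asking pandas for round-trip float parsing
df = pd.read_csv("export_from_excel.csv", float_precision="round_trip")
```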
I tried to use pandas to read an Excel sheet into a dataframe, but for floating point columns the data is read incorrectly. I use the function read_excel() to do the task. In Excel, the value is 225789.479905466, while in the dataframe the value is 225789.47990546614, which creates a discrepancy when I import data from Excel to a database. Does anyone face the same issue with pandas.read_excel()? I have no issue reading a csv into a dataframe. Jeremy
0
1
4,533
0
37,499,607
0
0
0
0
1
true
0
2016-05-28T10:19:00.000
0
1
0
Classification of sparse data
37,497,795
1.2
python,r,classification,data-mining,text-classification
There is nothing wrong with using this coding strategy for text and support vector machines. For your actual objective, support vector regression (SVR) may be more appropriate. Beware of the journal impact factor: it is very crude, you need to take temporal aspects into account, and much very good work is not published in journals at all.
I am struggling with the best choice for a classification/prediction problem. Let me explain the task - I have a database of keywords from abstracts for different research papers, also I have a list of journals with specified impact factors. I want to build a model for article classification based on their keywords, the result is the possible impact factor (taken just as a number without any further journal description) with a given keywords. I removed the unique keyword tags as they do not have much statistical significance so I have only keywords that are repeated 2 and more times in my abstract list (6000 keyword total). I think about dummy coding - for each article I will create a binary feature vector 6000 attributes in length - each attribute refers to presence of the keyword in the abstract and classify the whole set by SVM. I am pretty sure that this solution is not very elegant and probably also not correct, do you have any suggestions for a better deal?
0
1
539
0
37,506,002
0
0
1
0
1
false
0
2016-05-28T21:51:00.000
0
2
0
Finding multiple roots on an interval with Python
37,504,035
0
python,optimization,scipy
Depends on your function, but it might be possible to solve symbolically using SymPy. That would give all roots. It can find eigenvalues symbolically if necessary. Finding all extrema is the same as finding all roots of the derivative of your function, so it won't be any easier than finding all roots (as WarrenWeckesser mentioned). Finding all roots numerically will require using knowledge about the function. As a simple example, say you knew some minimum spacing between roots. You could try to recursively partition the interval and find roots in each. Stop after finding the maximum number of roots. But if the spacing is small, this could require many function evaluations (e.g. in the worst case when there are zero roots). The more constraints you can impose, the more you can cut down on function evaluations.
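A simplified sketch of the bracket-then-refine idea (a fixed grid rather than the recursive partition described in this answer); the sample function and grid density are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    return np.cos(3 * x) - 0.2           # stand-in for the expensive eigenvalue-based f

xs = np.linspace(0.0, 5.0, 200)           # grid spacing must be finer than the root spacing
ys = np.array([f(x) for x in xs])
roots = [brentq(f, a, b)
         for a, b, ya, yb in zip(xs[:-1], xs[1:], ys[:-1], ys[1:])
         if ya * yb < 0]                  # refine each sign-change bracket
```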
I'm looking for an efficient method to find all the roots of a function f on an interval [a,b]. The problem I have is that all the nice methods from scipy.optimize require either that f(a) and f(b) have different signs, or that I provide an initial guess x0, but I know nothing about my roots before running the code. Note: The function f is smooth (at least C1), and doesn't have a pathological behaviour [nothing like sin(1/x)]. However, it requires building a matrix A(x) and finding its eigenvalues, and is therefore time-consuming. It is expected to have between 0 and 10 roots on [a,b], whose position is completely arbitrary. I can't afford missing any of them (e.g. I can't take 100 initial guesses x0 and just hope that i'll catch all the roots). I was thinking about implementing something like this: Find all the extrema {m_1, m_2.., m_k} of f with scipy.optimize [maybe fmin, but I don't know which method is the most efficient]: Search for a minimum m_1 starting from point a [initial guess for gradient algorithm] Search for a maximum m_2 starting from point m_1 + dx [forcing the gradient algorithm to go forward] Search for a minimum m_3... If two consecutive extrema m_i and m_(i+1) have opposite signs, apply brentq on interval [m_i, m_(i+1)] to find a root. Is there a better way of solving this problem? If not, are fmin and brentq the best choices among the scipy.optimize library in order to minimize the number of calls to my function f?
0
1
1,874
0
37,531,473
0
1
0
0
1
false
0
2016-05-29T03:49:00.000
0
2
0
Error importing numpy & graphlab after installing ipython
37,505,970
0
python,python-2.7,ipython,graphlab
Please remember that console applications and GUI application do not share the same environment on OS X. This also means that if you install a Python package from the console, this would probably not be visible to PyCharm. Usually you need to install packages using PyCharm in order to be able to use them.
I have got a strange issue. I am now using graphlab/numpy to develop a project via Pycharm 5. OS is Mac OS 10.11.5. I created a p2.7 virtual environment for the project. Programme runs well. But after I install ipython, I can no longer import graphlab and numpy correctly. Error message: AttributeError: 'module' object has no attribute 'core' System keeps telling that attribute ‘core' and 'connect' are missing. But I am pretty sure that they can be found in graphlab/numpy folders, and no duplicates. Project runs all right in terminal. Now I have to uninstall ipython. Then everything is ok again. Please kindly help.
0
1
398
0
37,516,498
0
0
0
0
1
true
1
2016-05-29T08:26:00.000
1
1
0
Is there any way to use libgpuarray with Intel GPU?
37,507,758
1.2
python,gpu,gpgpu,theano
libgpuarray has been made to support OpenCL, but we don't have time to finish it. Many things work, but we don't have the time to make sure it works everywhere. In any case, you must find an OpenCL version that supports that GPU, install it and reinstall libgpuarray so that it uses it. Also, I'm not sure that GPU will give you any speed-up. Don't spend too much time trying to make it work.
I'm looking for a way to use Intel GPU as a GPGPU with Theano. I've already installed Intel OpenCL and libgpuarray, but a test code 'python -c "import pygpu;pygpu.test()"' crashed the process. And I found out devname method caused it. It seems there would be a lot more errors. Is it easy to fixed them to work well? I understand Intel OpenCL project can use GPGPU but it might not be supported by libgpuarray. Environment Windows 7 x64 Intel(R) HD Graphics 4600 Python 2.7 x86 or x64
0
1
920
0
43,774,149
0
0
0
0
1
false
0
2016-05-30T02:03:00.000
0
1
0
Backtesting with Data
37,516,730
0
python,pandas,back-testing
The problem with optimising the loop is that, say, you have 3 years of data and only 3 events that you are interested in. Then you can use event-based backtesting with only 3 iterations. The problem here is that you have to precompute the events, which needs the data anyway, and most of the time you will need statistics about your backtesting that also need the data, like max drawdown. So most backtesting frameworks will use the loop anyway, or vectorisation if you are on R/Matlab/NumPy. If you really need to optimize it, you probably need to precompute and store all the information and then just do look-ups.
In a hypothetical scenario where I have a large amount of data that is received, and is generally in chronological order upon receipt, is there any way to "play" the data forward or backward, thereby recreating the flow of new information on demand? I know that in a simplistic sense, I can always have a script (with whatever output being unimportant) that starts with a for loop that takes in whatever number of events or observations and does something, then takes in more observations, updates what was previously output with a new result, and so on. Is there a way of doing that is more scaleable than a simple for loop? Basically, any time I look into this subject I quickly find myself navigating to the subject area of High Frequency Trading, specifically algorithm efficacy by way of backtesting against historical data. While my question is about doing this in a more broad sense where our observation do not need to be stock/option/future price pips, the same principles must apply. Does anyone have such experience on how such a platform is built on a more scaleable level than just a for loop with logic below it? Another example would be health data / claims where one can see backward and forward what happened as more claims come in over time.
0
1
392
0
37,538,260
0
0
0
0
1
false
1
2016-05-31T06:11:00.000
0
1
0
giving more weight to a feature using sklearn svm
37,538,068
0
python,svm
You can create one more feature in your training data: if the name of the book contains your predefined words, then make it one, otherwise zero.
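A sketch of that extra binary feature; the keyword list, the sample titles, and the way the flag is stacked onto the n-gram matrix are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer

titles = ["Alice and the fairy garden", "A brief history of time"]   # made-up titles
keywords = {"fairy", "alice"}

ngrams = CountVectorizer(ngram_range=(1, 2)).fit_transform(titles)
flag = np.array([[float(any(w in t.lower() for w in keywords))] for t in titles])
X = hstack([ngrams, csr_matrix(flag)])   # feed X to the SVM; scale `flag` to weight it more
```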
I 'm using svm to predict the label from the title of a book. However I want to give more weight to some features pre defined. For example, if the title of the book contains words like fairy, Alice I want to label them as children's books. I'm using word n-gram svm. Please suggest how to achieve this using sklearn.
0
1
156
0
37,543,931
0
0
0
0
2
false
32
2016-05-31T10:50:00.000
0
10
0
How to replace all non-NaN entries of a dataframe with 1 and all NaN with 0
37,543,647
0
python,pandas,dataframe
Use: df.fillna(0) to fill NaN with 0.
I have a dataframe with 71 columns and 30597 rows. I want to replace all non-nan entries with 1 and the nan values with 0. Initially I tried for-loop on each value of the dataframe which was taking too much time. Then I used data_new=data.subtract(data) which was meant to subtract all the values of the dataframe to itself so that I can make all the non-null values 0. But an error occurred as the dataframe had multiple string entries.
0
1
42,508
0
65,940,835
0
0
0
0
2
false
32
2016-05-31T10:50:00.000
0
10
0
How to replace all non-NaN entries of a dataframe with 1 and all NaN with 0
37,543,647
0
python,pandas,dataframe
Generally there are two steps - substitute all non-NaN values and then substitute all NaN values. dataframe.where(~dataframe.notna(), 1) - this line will replace all non-NaN values with 1. dataframe.fillna(0) - this line will replace all NaNs with 0. Side note: if you take a look at the pandas documentation, .where replaces all values that are False - this is an important thing. That is why we use inversion to create the mask ~dataframe.notna(), by which .where() will replace values.
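A small sketch of those two steps, plus an equivalent one-liner using notna(); the sample frame is made up.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": ["x", None, "z"]})

two_step = df.where(~df.notna(), 1).fillna(0)   # the two steps from the answer
one_liner = df.notna().astype(int)              # 1 for non-NaN, 0 for NaN
```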
I have a dataframe with 71 columns and 30597 rows. I want to replace all non-nan entries with 1 and the nan values with 0. Initially I tried for-loop on each value of the dataframe which was taking too much time. Then I used data_new=data.subtract(data) which was meant to subtract all the values of the dataframe to itself so that I can make all the non-null values 0. But an error occurred as the dataframe had multiple string entries.
0
1
42,508
0
37,553,823
0
0
0
0
1
false
0
2016-05-31T17:24:00.000
0
1
0
B-splines with Scipy : can I add a datapoint without full recompute?
37,552,035
0
python,numpy,scipy,bspline
Short answer: No. Spline construction is a global process, so if you add a data point, you really need to recompute the whole spline. Which involves solving an N-by-N linear system etc. If you're adding many knots sequentially, you probably can construct a process where you're using a factorization of the colocation matrix on step n to compute things on step n+1. You'll need to carefully check the stability of this process. And splrep and friends do not give you any help here, so you'll need to write this yourself. (If you do, you might find it helpful to check the sources of interpolate.CubicSpline). But before you start on that, consider using a local interpolator instead. If all you want is to add a knot given data, then there's scipy.interpolate.insert.
I have a bspline created with scipy.interpolate.splrep with points (x_0,y_0) to (x_n,y_n). Usual story. But I would like to add a data point (x_n+1,y_n+1) and appropriate knot without recomputing the entire spline. Can anyone think of a way of doing this elegantly? I could always take the knot list returned by splrep, and add on the last knot of a smaller spline created with (x_n-2, y_n-2) to (x_n+1,y_n+1) but that seems less efficient than it could be.
0
1
56
0
43,214,847
0
0
0
0
1
false
4
2016-05-31T21:24:00.000
6
3
0
Python OpenCV import error with python 3.5
37,555,890
1
python,opencv
No need to change the Python version; you can just use pip. Open cmd (admin mode) and type pip install opencv-python
I am having some difficulties installing opencv with python 3.5. I have linked the cv files, but upon import cv2 I get an error saying ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so, 2): Symbol not found: _PyCObject_Type or more specifically: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 /Users/Jamie/Desktop/tester/test.py Traceback (most recent call last): File "/Users/Jamie/Desktop/tester/test.py", line 2, in import cv File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv.py", line 1, in from cv2.cv import * ImportError:dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so, 2): Symbol not found: _PyCObject_Type Referenced from: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so I have linked cv.py and cv2.so from location /usr/local/Cellar/opencv/2.4.12_2/lib/python2.7/site-packages correctly into /Library/Frameworks/Python.framework/Versions/3.5/bin Would anybody be able to help please? Thanks very much
0
1
12,712
0
37,556,551
0
0
0
0
1
false
0
2016-05-31T22:00:00.000
0
2
0
Is there a way to prefilter data before using read_csv to download data into pandas dataframe?
37,556,334
0
python,csv,pandas,dataframe,yelp
Typically yes: load everything, then filter your dataset. But if you really want to pre-filter, and you're on a Unix-like system, you can prefilter using grep before even starting Python. A compromise between the two is to write the prefilter using Python and pandas: this way you download the data, prefilter it (write the prefiltered data to another csv) and work with your prefiltered data. The way to go depends on the number of times you need to load the whole dataset; if you want to read it once and discard it, there's no need to prefilter, but while working on the code, if you want to test it a lot of times, prefiltering may save you some seconds. But here again, there's another possibility: use an IPython notebook, so you'll be able to load your dataset, filter it, then execute the block of code on which you're currently working any number of times on this already-loaded dataset; it's even faster than loading a prefiltered dataset. So no real answer here, it really depends on your usage and personal tastes.
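A sketch of the "prefilter once, write a smaller CSV" compromise using chunked reading; the file and column names are assumptions about the Yelp export, not taken from the question.

```python
import pandas as pd

chunks = pd.read_csv("yelp_reviews.csv", chunksize=100000)     # hypothetical file
restaurants = pd.concat(
    chunk[chunk["categories"].str.contains("Restaurants", na=False)]
    for chunk in chunks
)
restaurants.to_csv("restaurant_reviews.csv", index=False)      # reuse this smaller file later
```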
I'm working with the yelp dataset and of course it is millions of entries long so I was wondering if there was any way where you can just download what you need or do you have to pick away at it manually? For example, yelp has reviews on everything from auto repair to beauty salons but I only want the reviews on the restaurants. So do I have to read the entire thing and then drop the rows I don't need?
0
1
412
0
37,557,227
0
1
0
0
1
false
0
2016-05-31T23:24:00.000
-1
2
0
How do I get python to import pandas?
37,557,174
-0.099668
python,python-3.x,pandas,packages
Anaconda includes its own version of Python. You have to change your system environment PATH to Anaconda's instead of the former one to avoid conflict. Also, if you want to make the whole process easy, it is recommended to use PyCharm, which will ask you to choose the Python interpreter you want.
I installed Python 3.5.1 from www.python.org. Everything works great. Except that you can't install pandas using pip (it needs visualstudio to compile, which I don't have). So I installed Anaconda (www.continuum.io/downloads). Now I can see pandas as part of the list of installed modules, but when I run python programs I still get: ImportError: No module named 'pandas' How do I set up my environment to use the modules from Anaconda? Note: I have Anaconda's home directory and Library/bin on my path, as well as Python's home directory. I do not have PYTHONPATH or PYTHONHOME set, and I know I have the correct privileges to see everything.
0
1
1,823
0
37,574,806
0
0
0
0
1
false
5
2016-06-01T14:11:00.000
0
4
0
How can I make my neural network emphasize that some data is more important than the rest?
37,571,165
0
python,neural-network,tensorflow,data-science
To emphasize any elements in your input vector, you will have to give your neural network less information about the unimportant elements. Try to encode the first, less important 285 numbers into one number (or any vector size you like) with a multilayer neural network, then use that number together with the other 4 numbers as the input to a neural network. Example: v1=[1,2,3,..........285] v2=[286,287,288,289] v_out= Neural_network(input_vector=v1,neurons=[100,1]) # 100 hidden units with one output. v_final=Neural_network(input_vector=v_out,neurons=[100,1]) # 100 hidden units with one output.
I looked around online but couldn't find anything, but I may well have missed a piece of literature on this. I am running a basic neural net on a 289 component vector to produce a 285 component vector. In my input, the last 4 pieces of data are critical to change the rest of the input into the resultant 285 for the output. That is to say, the input is 285 + 4, such that the 4 morph the rest of the input into the output. But when running a neural network on this, I am not sure how to reflect this. Would I need to use convolution on the rest of the input? I want my system to emphasize the 4 data points that critically affect the other 285. I am still new to all of this, so a few pointers would be great! Again, if there is something already written on this, then that would be awesome too.
0
1
1,982
0
37,575,927
0
0
0
0
1
false
2
2016-06-01T17:32:00.000
0
2
0
What Arguments to use while doing a KS test in python with student's t distribution?
37,575,270
0
python,scipy,kolmogorov-smirnov
The args argument must be a tuple, but it can hold a single value. You can do your test using ks_statistic, pvalue = scipy.stats.kstest(x, 't', (10,)) if 10 is the degrees of freedom.
I have data regarding metallicity in stars, I want to compare it with a student's t distribution. To do this I am running a Kolmogorov-Smirnov test using scipy.stats.kstest on python KSstudentst = scipy.stats.kstest(data,"t",args=(a,b)) But I am unable to find what the arguments are supposed to be. I know the student's t requires a degree of freedom (df) parameter but what is the other parameter. Also which one of the two is the df parameter. In the documentation for scipy.stats.t.cdf the inputs are the position at which value is to be calculated and df, but in the KS test there is no sense in providing the position.
0
1
3,626
0
37,580,756
0
0
0
0
1
false
2
2016-06-01T23:14:00.000
1
4
0
numpy Boolean array representation of an integer
37,580,272
0.049958
python,numpy
Maybe it is not the easiest, but a compact way is: from numpy import array; array([i for i in bin(5)[2:]]) == '1'
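Another compact sketch using binary_repr, shown for the value 6 from the question; whether you want most- or least-significant bit first is left to the reader.

```python
import numpy as np

n = 6
msb_first = np.array(list(np.binary_repr(n))) == '1'   # array([ True,  True, False])
lsb_first = msb_first[::-1]                            # array([False,  True,  True]) as in the question
```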
What's the easiest way to produce a numpy Boolean array representation of an integer? For example, map 6 to np.array([False, True, True], dtype=np.bool).
0
1
653
0
37,593,003
0
0
0
0
1
true
1
2016-06-02T13:00:00.000
1
2
0
Can I export RapidMiner model to integrate with python?
37,592,608
1.2
python,machine-learning,scikit-learn,rapidminer
Practically, I would say no - just train your model in sklearn from the beginning if that's where you want it. Your RapidMiner model is some kind of object. The two formats you are exporting as are just storage methods. Sklearn models are a different kind of object. You can't directly save one and load it into the other. A similar example would be to ask if you can take an airplane engine and load it into a train. To do what you're asking, you'll need to take the underlying data that your classifier saved, find the format, and then figure out a way to get it in the same format as a sklearn classifier. This is dependent on what type of classifier you have. For example, if you're using a bayesian model, you could somehow capture the prior probabilities and then use those, but this isn't trivial.
I have trained a classifier model using RapidMiner after a trying a lot of algorithms and evaluate it on my dataset. I also export the model from RapidMiner as XML and pkl file, but I can't read it in my python program (scikit-learn). Is there any way to import RapidMiner classifier/model in a python program and use it to predict or classify new data in my end application?
0
1
2,438
0
37,614,696
0
0
0
0
1
false
0
2016-06-02T19:54:00.000
0
1
0
anaconda env couldn't import any of the packages
37,600,960
0
python,python-3.x,anaconda,conda
Finally, I figured out the answer. It is all about the PATH variable. It was pointing to os python rather than anaconda python. Thanks all for your time.
pip list inside conda env: pip list matplotlib (1.4.0) nose (1.3.7) numpy (1.9.1) pandas (0.15.2) pip (8.1.2) pyparsing (2.0.1) python-dateutil (2.4.1) pytz (2016.4) scikit-learn (0.15.2) scipy (0.14.0) setuptools (21.2.1) six (1.10.0) wheel (0.29.0) which python: /Users/xxx/anaconda/envs/pythonenvname/bin/python (pythonenvname)pc-xx-xx:oo xxx$ which pip /Users/xxx/anaconda/envs/pythonenvname/bin/pip python Python 3.4.4 |Anaconda custom (x86_64)| (default, Jan 9 2016, 17:30:09) [GCC 4.2.1 (Apple Inc. build 5577)] on darwin Type "help", "copyright", "credits" or "license" for more information. import pandas as pd error: sh: sysctl: command not found
0
1
204
0
48,292,419
0
0
0
0
1
false
25
2016-06-03T10:50:00.000
34
3
0
What are ways to speed up seaborns pairplot
37,612,434
1
python,performance,parallel-processing,seaborn
Rather than parallelizing, you could downsample your DataFrame to say, 1000 rows to get a quick peek, if the speed bottleneck is indeed occurring there. 1000 points is enough to get a general idea of what's going on, usually. i.e. sns.pairplot(df.sample(1000)).
I have a dataframe with 250,000 rows but 140 columns and I'm trying to construct a pair plot of the variables. I know the number of subplots is huge, as well as the time it takes to do the plots (I'm waiting for more than an hour on an i5 with 3.4 GHz and 32 GB RAM). Remembering that scikit-learn allows one to construct random forests in parallel, I was checking if this was also possible with seaborn. However, I didn't find anything. The source code seems to call the matplotlib plot function for every single image. Couldn't this be parallelised? If yes, what is a good way to start from here?
0
1
15,683
0
60,295,986
0
0
0
0
1
false
13
2016-06-03T16:06:00.000
3
4
0
PySpark computing correlation
37,618,977
0.148885
python,apache-spark,pyspark,apache-spark-sql,apache-spark-mllib
df.stat.corr("column1","column2")
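A minimal sketch of that call; the SparkSession setup and the toy data are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1.0, 2.0), (2.0, 4.1), (3.0, 6.2)], ["x", "y"])
print(df.stat.corr("x", "y"))   # Pearson correlation between the two columns
```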
I want to use pyspark.mllib.stat.Statistics.corr function to compute correlation between two columns of pyspark.sql.dataframe.DataFrame object. corr function expects to take an rdd of Vectors objects. How do I translate a column of df['some_name'] to rdd of Vectors.dense object?
0
1
31,767
0
37,682,991
0
0
0
0
1
false
1
2016-06-04T16:13:00.000
1
1
0
how to close tensorboard server with jupyter notebook
37,632,393
0.197375
python,tensorflow,jupyter-notebook,tensorboard
The jupyter stuff seems fine. In general, if you don't close TensorBoard properly, you'll find out as soon as you try to turn on TensorBoard again and it fails because port 6006 is taken. If that isn't happening, then your method is fine. As regards the logdir, passing in the top level logdir is generally best because that way you will get support for comparing multiple "runs" of the same code in TensorBoard. However, for this to work, it's important that each "run" be in its own subdirectory, e.g.: logs/run1/..tfevents.. logs/run2/..tfevents.. tensorboard --logdir=logs
what is the proper way to close tensorboard with jupyter notebook? I'm coding tensorflow on my jupyter notebook. To launch, I'm doing: 1. !tensorboard --logdir = logs/ open a new browser tab and type in localhost:6006 to close, I just do: close the tensorflow tab on my browser on jupyter notebook, I click on interrupt kernel Just wondering if this is the proper way.... BTW, in my code I set my log file as './log/log1'. when starting tensorboard, should I use --logdir = ./log or --logdir = ./log/log1? thank you very much.
0
1
2,652
0
37,693,149
0
1
0
0
1
false
4
2016-06-07T17:54:00.000
0
3
0
TFLearn pip installation bug
37,686,139
0
python,tensorflow
The last TFLearn update had a compatibility issue with old TensorFlow versions (as mrry said, caused by 'variance_scaling_initializer()', which is only compatible with TensorFlow 0.9). That error has already been fixed, so you can just update TFLearn and it should work fine with any TensorFlow version over 0.7.
I've tried installing tflearn through pip as follows pip install tflearn and now when I open python, the following happens: >>> import tflearn Traceback (most recent call last): File "<stdin>", line 1, in <module> File "//anaconda/lib/python2.7/site-packages/tflearn/__init__.py", line 22, in <module> from . import activations File "//anaconda/lib/python2.7/site-packages/tflearn/activations.py", line 7, in <module> from . import initializations File "//anaconda/lib/python2.7/site-packages/tflearn/initializations.py", line 5, in <module> from tensorflow.contrib.layers.python.layers.initializers import \ ImportError: cannot import name variance_scaling_initializer Any ideas? I'm using an anaconda installation of python.
0
1
3,984
0
65,464,031
0
1
0
0
1
false
3
2016-06-08T07:00:00.000
1
2
0
Python and OpenCV - getting the duration time of a video at certain points
37,695,376
0.099668
python,opencv,video
You can simply measure a certain position in the video in milliseconds using time_milli = cap.get(cv2.CAP_PROP_POS_MSEC) and then divide time_milli by 1000 to get the time in seconds.
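A sketch of reading the timestamp at detection time; the file name and the detection placeholder are assumptions.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")          # hypothetical input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    green_ball_found = False                 # replace with the actual detection logic
    if green_ball_found:
        seconds = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
        print("green ball at %.2f s" % seconds)
cap.release()
```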
Let's say I have made a program to detect a green ball in a video. Whenever a green ball is detected, I want to print out the duration of the video at the time the green ball is detected. Is that possible?
0
1
5,963
0
46,964,844
0
0
0
0
1
false
1
2016-06-08T12:33:00.000
0
1
0
Convert a 16-bit image from rgb to Lab without losing precision
37,702,630
0
python,image,16-bit,lab-color-space
The skimage version of rgb2lab uses floating-point input where the color range is 0-1. You can normalize your 16-bit image to this range and then use the rgb2lab routine.
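A minimal sketch of that normalization; the image itself is a placeholder.

```python
import numpy as np
from skimage.color import rgb2lab

img16 = np.zeros((64, 64, 3), dtype=np.uint16)     # stand-in for the 16-bit RGB image
lab = rgb2lab(img16.astype(np.float64) / 65535.0)  # scale to [0, 1] before converting
```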
I have a 16-bit image in ProPhoto RGB color space. For equalization I want to convert it to Lab color space and then equalize the L-channel without losing precision. I have used skimage.color.rgb2lab but this converts the image to float64. Help me!!
0
1
455
0
37,721,273
0
1
0
0
1
false
1
2016-06-09T08:26:00.000
0
1
0
Efficient representing sub-graphs (data structure) in Python
37,720,588
0
python,performance,data-structures,graph
The key point here is the data format of the graphs already generated by your algorithm. Does it construct a new graph by adding vertices and edges? Is it rewritable? Does it use a given format (matrix, adjacency list, vertex and node sets, etc.)? If you have the choice, however, because your subgraphs have a "low" cardinality and because space is not an issue, I would store subgraphs as arrays of bitmasks (the bitmask part is optional, but it is hashable and makes a compact set). A subgraph representation would then be: L, a list of node references in your global graph G (it can also be a bitmask to be used as a hash); A, an array of bitmasks (a matrix) where A[i][j] is the truth value of the edge L[i] -> L[j]. This takes advantage of the unbounded size and low space requirement of Python integers. The space complexity is O(n*n), but you get efficient traversal and can easily hash your structure.
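A sketch of the list-plus-bitmask idea as a hashable dictionary key; the helper name and the input format (node list plus edge set) are assumptions.

```python
def subgraph_key(nodes, edges):
    """nodes: sorted global node ids; edges: iterable of (u, v) pairs within `nodes`."""
    index = {n: i for i, n in enumerate(nodes)}
    rows = [0] * len(nodes)
    for u, v in edges:
        rows[index[u]] |= 1 << index[v]     # bit j of row i encodes edge nodes[i] -> nodes[j]
    return (tuple(nodes), tuple(rows))      # hashable, usable as a dict key

cache = {}
cache[subgraph_key([2, 5, 9], [(2, 5), (5, 9)])] = "expensive computation result"
```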
What is the efficient way of keeping and comparing generated sub-graphs from given input graph G in Python? Some details: Input graph G is a directed, simple graph with number of vertices varying from n=100-10000. Number of edges - it can be assumed that maximum would be 10% of complete graph (usually less) so it gives in that case maximum number of n*(n-1)/10 There is an algorithm that can generate from input graph G sub-graphs in number of hundreds/thousands. And for each sub-graph are made some (time consuming) computations. Pair "subgraph, computation results" must be stored for later use (dynamic programming approach - if given sub-graph were already processed we want to re-use its results). Because of point (2.) it would be really nice to store sub-graph/results pairs in kind of dictionary where sub-graph is a key. How it can be done efficiently? Some ideas of efficient calculation of sub-graph hash value maybe? Let's assume that memory is not a problem and I can find machine with enough memory to keep a data - so let's focus only on speed. Of course If there are already nice to use data-structures that might be helpful in this problem (like sparse matrices from scipy) they are very welcome. I just would like to know your opinions about it and maybe some hints regarding approach to this problem. I know that there are nice graph/network libraries for Python like NetworkX, igraph, graph-tool which have very efficient algorithms to process provided graph. But seems (or I could not find) efficient way to fulfill points (2. 3.)
0
1
437
0
37,729,976
0
0
0
0
1
false
1
2016-06-09T13:51:00.000
0
1
0
creating new CSV by duplicating and modifying existing records multiple times from the source CSV
37,727,899
0
python,csv,hive,apache-pig
It really depends on what you want to achieve and what hardware you use. If you need to process this file fast and you actually have a real Hadoop cluster (bigger than 1 or 2 nodes), then probably the best way would be to write a Pig script or even a simple Hadoop MapReduce job to process this file. With this approach you would get the output file on HDFS, so it would be easily accessible via Hive. On the other hand, if you have a single computer or some "toy" Hadoop cluster, processing that file using Hadoop will take much longer than simply executing a Python script on it. That's because Hadoop processing has quite a big overhead for data serialization and sending data through the network. Of course, in that case you will have to deal on your own with the fact that the input and output files may not fit into your RAM.
I am a newbie in big data. I have an assignment in which I was given a CSV file, and the date field is one of the fields in that file. The file size is only 10GB, but I need to create a much larger file, 2TB in size, for big-data practice purposes, by duplicating the file's content but incrementing the date so that the duplicated records differ from the originals, and then make the new 2TB file accessible via Hive. I need help on the best way to implement this: is it best to use Pig in Hadoop, or Python?
0
1
59
0
37,752,325
0
0
0
0
1
false
0
2016-06-10T15:00:00.000
1
1
0
K-Means Implementation in Python
37,751,430
0.197375
python,machine-learning,scikit-learn,computer-science,k-means
Before answering which is better, here is a quick reminder of the algorithm: "Choose" the number of clusters K. Initialize your first centroids. For each point, find the closest centroid according to a distance function D. When all points are attributed to a cluster, calculate the barycenter of the cluster, which becomes its new centroid. Repeat step 3 and step 4 until convergence. As stressed previously, the algorithm depends on various parameters: the number of clusters, your initial centroid positions, a distance function to calculate the distance between any point and a centroid, a function to calculate the barycenter of each new cluster, a convergence metric, ... If none of the above is familiar to you, and you want to understand the role of each parameter, I would recommend re-implementing it on low-dimensional data sets. Moreover, the implemented Python libraries might not match your specific requirements - even though they provide good tuning possibilities. If your point is to use it quickly with a big-picture understanding, you can use an existing implementation - scikit-learn would be a good choice.
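A minimal sketch of the quick, existing-implementation route with scikit-learn; the data and the number of clusters are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)                                  # toy data
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)
print(km.labels_[:10])
```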
Is it better to implement my own K-means algorithm in Python, or to use the pre-implemented K-means algorithm from Python libraries such as Scikit-Learn?
0
1
614
0
37,768,933
0
0
0
0
1
false
1
2016-06-11T19:33:00.000
1
1
0
Scikit-learn KNN(K Nearest Neighbors ) parallelize using Apache Spark
37,767,790
0.197375
python,scala,apache-spark,machine-learning,scikit-learn
Well, according to the discussion at https://issues.apache.org/jira/browse/SPARK-2336, MLlib (the Machine Learning Library for Apache Spark) does not have an implementation of KNN. You could try https://github.com/saurfang/spark-knn.
I have been working on the machine learning KNN (K Nearest Neighbors) algorithm with Python and Python's Scikit-learn machine learning API. I have created sample code with a toy dataset simply using Python and Scikit-learn, and my KNN is working fine. But as we know, the Scikit-learn API is built to work on a single machine, so once I replace my toy data with millions of records it will decrease my performance. I have searched for many options, help, and code examples that would distribute my machine learning processing in parallel using Spark with the Scikit-learn API, but I have not found any proper solution or examples. Can you please let me know how I can achieve and increase my performance with Apache Spark and Scikit-learn's K Nearest Neighbors? Thanks in advance!!
0
1
5,059
0
37,796,440
0
0
0
0
1
false
0
2016-06-13T08:50:00.000
0
1
0
Sync Choregraphe and Matlab
37,785,380
0
python,c++,matlab,nao-robot
Using NAO C++ SDK, it may be possible to make a MEX-FILE in Matlab that "listens" to NAO. Then NAO just has to raise an event in its memory (ALMemory) that Matlab would catch to start running the script.
I have a Wizard of Oz experiment using Choregraphe to make a NAO perform certain tasks, running on machine A. The participant interacting with the NAO also interacts with a machine B. When I start the experiment (in Choregraphe on machine A) I want a certain MATLAB script to start on machine B, i.e. Choregraphe will initiate the MATLAB script. Do you have any suggestions for how to do this? My programming skills are limited to MATLAB and R, while Choregraphe is well integrated with Python and C++, hence my question here on Stack. Kind Regards, KD
0
1
107
0
37,821,215
0
0
0
0
1
false
2
2016-06-14T19:36:00.000
1
2
0
Using Multiple Languages while developing a Spark application
37,820,668
0.099668
python,scala,apache-spark,pyspark
An experienced developer will be able to pick up a new language and become productive fairly quickly. I would only consider using the two languages together if: The deadlines are too tight to allow for the developer to get up to speed, The integration between the modules is quite limited (and you're confident that won't change) and There is a clear deployment strategy. I would suggest doing a small-scale test first to confirm the deployment and integration plans you have will work.
I'm working on a project with another person. My part of the project involves analytics with Spark's Machine Learning, while my teammate is using Spark Streaming to pipeline data from the source to the program and out to an interface. I am planning to use Scala since it has the best support for Spark. However, my teammate does not have any experience with Scala, and would probably prefer to use Python. Given that our parts of the program are doing two different things, would it be a good idea for us to have his Python script call my Scala executable? Or would using different languages raise complications later on?
0
1
452
0
37,846,825
0
0
0
0
3
false
0
2016-06-15T07:46:00.000
2
4
0
Neural network only learns most common training image
37,829,169
0.099668
python,neural-network,tensorflow
Trying to squeeze blood from a stone! I'm skeptical that with 4283 training examples your net will learn 62 categories...that's a big ask for such a small amount of data. Especially since your net is not a conv net...and it's forced to reduce its dimensionality to 100 at the first layer. You may as well PCA it and save time. Try this: Step 1: download an MNIST example and learn how to train and run it. Step 2: use the same MNIST network design and throw your data at it...see how it goes. You may need to pad your images. Train and then run it on your test data. Now step 3: take your fully trained step 1 MNIST model and "finetune" it by continuing to train with your data (only) and with a lower learning rate for a few epochs (ultimately determine the number of epochs by validation). Then run it on your test data again and see how it does. Look up "transfer learning"...and a "finetuning example" for your toolkit. (Note that for finetuning you need to mod the output layer of the net.) I'm not sure how big your original source images are, but you can resize them and throw a pre-trained cifar100 net at it (finetuned) or even an imagenet one if the source images are big enough. Hmmm, cifar/imagenet are for colour images...but you could replicate your greyscale to each RGB band for fun. Mark my words...these steps may "seem simple"...but if you can work through it and get some results (even if they're not great results) by finetuning with your own data, you can consider yourself a decent NN technician. One good tutorial for finetuning is on the Caffe website...flickr style (I think)...there's gotta be one for TF too. The last step is to design your own CNN...be careful when changing filter sizes--you need to understand how it affects outputs of each layer and how information is preserved/lost. I suppose another thing to do is "data augmentation" to get yourself some more data. Slight rotations/resizing/lighting...etc. TF has some nice preprocessing for doing some of this...but some will need to be done by yourself. Good luck!
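As a small illustration of the data-augmentation suggestion (not the answerer's code; it uses Keras' ImageDataGenerator, which the question never mentions, and stand-in random arrays in place of the real 20x20 images):

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Stand-ins for the real data: (n, 20, 20, 1) greyscale images, one-hot labels over 62 classes
X = np.random.rand(4283, 20, 20, 1)
y = np.eye(62)[np.random.randint(0, 62, size=4283)]

datagen = ImageDataGenerator(rotation_range=10,      # slight rotations
                             width_shift_range=0.1,  # small translations
                             height_shift_range=0.1,
                             zoom_range=0.1)         # slight resizing

# datagen.flow yields endlessly augmented batches to feed into the training loop
for X_batch, y_batch in datagen.flow(X, y, batch_size=100):
    print(X_batch.shape, y_batch.shape)
    break  # in real training, keep iterating instead of breaking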
I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hidden layers. The problem I have is that the network will very quickly train itself to only give back one answer: the most common training letter (in my case 'A'). The output is in the form of a (62, 1) vector of probabilities, one for each number and letter (upper- and lower-case). This vector is EXACTLY the same for all input images. I've then tried to remove all of the 'A's from my input data, at which point the network changed to only give back the next most common input type (an 'E'). So, is there some way to stop my network stopping at a local minima (not sure if that's the actual term)? Is this even a generic problem for neural networks, or is it just that my network is broken somehow? I'm happy to provide code if it would help. EDIT: These are the hyperparameters of my network: Input size : 400 (20x20 greyscale images) Hidden layer 1 size : 100 Hidden layer 2 size : 100 Output layer size : 62 (Alphanumeric, lower- and upper-case) Training data size : 4283 images Validation data size : 1000 images Test data size : 1000 images Batch size : 100 Learning rate : 0.5 Dropout rate : 0.5 L2 regularisation parameter : 0
0
1
379
0
37,846,462
0
0
0
0
3
false
0
2016-06-15T07:46:00.000
0
4
0
Neural network only learns most common training image
37,829,169
0
python,neural-network,tensorflow
Which optimizer are you using? If you've only tried gradient descent, try using one of the adaptive ones (e.g. adagrad/adadelta/adam).
I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hidden layers. The problem I have is that the network will very quickly train itself to only give back one answer: the most common training letter (in my case 'A'). The output is in the form of a (62, 1) vector of probabilities, one for each number and letter (upper- and lower-case). This vector is EXACTLY the same for all input images. I've then tried to remove all of the 'A's from my input data, at which point the network changed to only give back the next most common input type (an 'E'). So, is there some way to stop my network stopping at a local minima (not sure if that's the actual term)? Is this even a generic problem for neural networks, or is it just that my network is broken somehow? I'm happy to provide code if it would help. EDIT: These are the hyperparameters of my network: Input size : 400 (20x20 greyscale images) Hidden layer 1 size : 100 Hidden layer 2 size : 100 Output layer size : 62 (Alphanumeric, lower- and upper-case) Training data size : 4283 images Validation data size : 1000 images Test data size : 1000 images Batch size : 100 Learning rate : 0.5 Dropout rate : 0.5 L2 regularisation parameter : 0
0
1
379
0
37,829,933
0
0
0
0
3
false
0
2016-06-15T07:46:00.000
0
4
0
Neural network only learns most common training image
37,829,169
0
python,neural-network,tensorflow
Your learning rate is way too high. It should be around 0.01; you can experiment around that, but 0.5 is too high. With a high learning rate, the network is likely to get stuck in a configuration and output something fixed, like you observed. EDIT: It seems the real problem is the unbalanced classes in the dataset. You can try to change the loss so that less frequent examples get a higher loss, or change your sampling strategy by using balanced batches of data: when selecting the 64 examples in your batch, select randomly from the dataset but with the same probability for each class.
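A small sketch of the balanced-batch idea, assuming integer class labels in a NumPy array (names and the exact sampling scheme are illustrative, not from the answer):

import numpy as np

def balanced_batch(X, labels, batch_size, rng=np.random):
    """Sample a batch with (roughly) the same probability for each class."""
    classes = np.unique(labels)
    per_class = max(1, batch_size // len(classes))
    idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], per_class, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return X[idx], labels[idx]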
I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hidden layers. The problem I have is that the network will very quickly train itself to only give back one answer: the most common training letter (in my case 'A'). The output is in the form of a (62, 1) vector of probabilities, one for each number and letter (upper- and lower-case). This vector is EXACTLY the same for all input images. I've then tried to remove all of the 'A's from my input data, at which point the network changed to only give back the next most common input type (an 'E'). So, is there some way to stop my network stopping at a local minima (not sure if that's the actual term)? Is this even a generic problem for neural networks, or is it just that my network is broken somehow? I'm happy to provide code if it would help. EDIT: These are the hyperparameters of my network: Input size : 400 (20x20 greyscale images) Hidden layer 1 size : 100 Hidden layer 2 size : 100 Output layer size : 62 (Alphanumeric, lower- and upper-case) Training data size : 4283 images Validation data size : 1000 images Test data size : 1000 images Batch size : 100 Learning rate : 0.5 Dropout rate : 0.5 L2 regularisation parameter : 0
0
1
379
0
37,876,437
0
0
0
0
1
true
0
2016-06-15T10:33:00.000
0
1
0
Adding h5 files in a zip to use with PySpark
37,832,937
1.2
python,pyspark,caffe
Found that you can add the additional files to all the workers by using --files argument in spark-submit.
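For reference, a hedged sketch of that approach: ship the .h5 file with --files and resolve its local path on each worker via SparkFiles (file names and the app name here are placeholders):

# Submitted e.g. with:
#   spark-submit --py-files modules.zip --files weights.h5 my_app.py
from pyspark import SparkContext, SparkFiles

sc = SparkContext(appName="apollo-demo")

def init_model(_):
    path = SparkFiles.get("weights.h5")   # local path of the shipped file on this worker
    # ... load the h5 file here (e.g. with h5py) before building the net ...
    return path

print(sc.parallelize(range(2)).map(init_model).collect())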
I am using PySpark 1.6.1 for my spark application. I have additional modules which I am loading using the argument --py-files. I also have a h5 file which I need to access from one of the modules for initializing the ApolloNet. Is there any way I could access those files from the modules if I put them in the same archive? I tried this approach but it was throwing an error because the files are not there in every worker. I can think of copying the file to each of the workers but I want to know if there are better ways to do it?
1
1
84
0
37,839,374
0
0
0
0
1
false
1
2016-06-15T15:08:00.000
0
1
1
How to plot data while it's being processed
37,839,265
0
python,plot,gnuplot
You can plot the data as it is being processed, but there's a couple of issues that come along with it in terms of efficiency. Gnuplot needs to do work each time to process your data Gnuplot needs to wait for your operating system to paint your screen whenever it updates Your program needs to wait for Gnuplot to do any of this in order to go to the next step All of these will greatly impact the amount of time you spend waiting for your data. You could potentially have it run every x iterations (eg. every 5 iterations), but even this wouldn't give you much of a speed-up.
I'm in the process of converting a large (several GBs) bin file to csv format using Python so the resulting data can be plotted. I'm doing this conversion because the bin file is not in a format that a plotting tool/module could understand, so there needs to be some decoding/translation. Right now it seems like Gnuplot is the way to go for such large data size. I'm wondering if instead of waiting for the whole file to finish converting and then running Gnuplot, is there a way to plot the data as it's being processed? Perhaps I could bypass the csv file altogether. Everything I've read so far points to plotting a file with data, but I have not seen any ways of plotting/appending individual data points.
0
1
67
0
37,890,119
0
0
0
0
1
true
2
2016-06-15T20:33:00.000
0
2
0
Plotting a point cloud and moving the camera
37,845,256
1.2
python,matplotlib,gnuplot,visualization
No, gnuplot cannot really move the viewing point, for the good reason that the viewing point is at infinity: all you can do is set an angle and magnification (using set view) and an offset within the viewing window (with set origin). That means, you can move the viewing point on a sphere at infinity, but not among the points you're plotting. (Question 2 is off-topic as a software advice, but you're looking for a rendering software such as paraview)
I have a list of points given by their x, y, z coordinates. I would like to plot these points on a computer. I have managed to do this with gnuplot and with the python library matplotlib separately. However for these two solutions, it seems hard to change the 'viewing point', or the point from which the projection of the 3D point cloud to the 2D screen is done. 1) Is there any easy way to, preferably continuously, move the viewing point in gnuplot (the splot command) or with matplotlib (the plot command)? 2) What other libraries are there for which this is an easy task? EDIT: I want to move the viewing point (like the player in an first-person shooter, say), not change the viewing angles.
0
1
1,343
0
37,851,865
0
0
0
0
1
false
1
2016-06-16T06:50:00.000
1
1
0
low_memory parameter in read_csv function
37,851,796
0.197375
python,pandas,ipython,spyder
This comes from the docs themselves. Have you read them? low_memory : boolean, default True. Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless; use the chunksize or iterator parameter to return the data in chunks. (Only valid with the C parser.)
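A small illustration of the two options the docs mention (the file name and column dtype are made up):

import pandas as pd

# Option 1: disable chunked type inference so each column gets a single dtype
df = pd.read_csv("data.csv", low_memory=False)

# Option 2: state the dtype explicitly, which also avoids mixed-type columns
df = pd.read_csv("data.csv", dtype={"some_column": str})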
What does the low_memory parameter do in the read_csv function from the pandas library?
0
1
113
0
37,855,371
0
0
0
0
1
true
3
2016-06-16T09:24:00.000
4
2
0
Why does accumulate work for numpy.maximum but not numpy.argmax
37,855,059
1.2
python,numpy,numpy-ufunc
Because max is associative, but argmax is not: max(a, max(b, c)) == max(max(a, b), c) argmax(a, argmax(b, c)) != argmax(argmax(a, b), c)
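On the follow-up about an argmax accumulate, one workable (not official) NumPy recipe builds it out of maximum.accumulate itself:

import numpy as np

def argmax_accumulate(a):
    """argmax_accumulate(a)[i] == np.argmax(a[:i+1]) for every i (first occurrence wins)."""
    a = np.asarray(a)
    running_max = np.maximum.accumulate(a)
    # A position starts a new argmax only where the running maximum strictly increases.
    new_max = np.empty(len(a), dtype=bool)
    new_max[0] = True
    new_max[1:] = running_max[1:] > running_max[:-1]
    idx = np.where(new_max, np.arange(len(a)), 0)
    return np.maximum.accumulate(idx)

a = np.array([1, 5, 2, 7, 7, 3])
print(argmax_accumulate(a))  # [0 1 1 3 3 3]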
These two look like they should be very much equivalent and therefore what works for one should work for the other? So why does accumulate only work for maximum but not argmax? EDIT: A natural follow-up question is then how does one go about creating an efficient argmax accumulate in the most pythonic/numpy-esque way?
0
1
1,719
0
37,879,801
0
0
0
0
1
true
1
2016-06-17T10:42:00.000
1
1
0
Creating a 3D grid using X,Y,Z coordinates at cell centers
37,879,558
1.2
python,numpy,scipy
If your grid is regular: calculate dx = x[i+1]-x[i], dy = y[i+1]-y[i], dz = z[i+1]-z[i]. Then calculate new arrays of points: x1[i] = x[i]-dx/2, y1[i] = y[i]-dy/2, z1[i] = z[i]-dz/2. If the mesh is irregular you have to do the same, but dx, dy, dz have to be defined for every grid cell.
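A quick NumPy sketch for the regular-grid case (one axis shown; y and z are analogous, and the example values are made up, assuming sorted, evenly spaced cell centers):

import numpy as np

x_centers = np.array([0.5, 1.5, 2.5, 3.5])       # example cell-center coordinates
dx = x_centers[1] - x_centers[0]                 # regular spacing
x_corners = np.append(x_centers - dx / 2, x_centers[-1] + dx / 2)
print(x_corners)                                 # [0. 1. 2. 3. 4.]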
I have a question, I have been given x,y,z coordinate values at cell centers of a grid. I would like to create structured grid using these cell center coordinates. Any ideas how to do this?
0
1
425
0
37,905,017
0
0
0
0
1
false
13
2016-06-17T21:49:00.000
5
2
0
Pandas Series: Log Normalize
37,890,849
0.462117
python,pandas,normalization
If your data is in the range (-1;+1) (assuming you lost the minus in your question) then log transform is probably not what you need. At least from a theoretical point of view, it's obviously the wrong thing to do. Maybe your data has already been preprocessed (inadequately)? Can you get the raw data? Why do you think log transform will help? If you don't care about what is the meaningful thing to do, you can call log1p, which is the same as log(1+x) and which will thus work on (-1;∞).
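For concreteness, the log1p suggestion shown on a made-up Series (it only requires values greater than -1, so 0 and values below 1 are fine):

import numpy as np
import pandas as pd

s = pd.Series([0, 0.5, 3, 4000])   # example values in the 0-4000 range
print(np.log1p(s))                 # log(1 + x), defined at 0 and below 1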
I have a Pandas Series that needs to be log-transformed to be normally distributed. But I can't log-transform yet, because there are values equal to 0 and values below 1 (range 0-4000). Therefore I want to normalize the Series first. I have heard of StandardScaler (scikit-learn), Z-score standardization and Min-Max scaling (normalization). I want to cluster the data later; which would be the best method? StandardScaler and Z-score standardization use mean, variance etc. Can I use them on "not yet normally distributed" data?
0
1
64,605
0
64,930,005
0
0
0
0
1
false
87
2016-06-18T02:51:00.000
-2
8
0
Using Keras & Tensorflow with AMD GPU
37,892,784
-0.049958
python,python-2.7,opencl,tensorflow,keras
Technically you can if you use something like OpenCL, but Nvidia's CUDA is much better and OpenCL requires other steps that may or may not work. If you have an AMD GPU, I would recommend using something like Google Colab, which provides a free Nvidia GPU you can use when coding.
I'm starting to learn Keras, which I believe is a layer on top of Tensorflow and Theano. However, I only have access to AMD GPUs such as the AMD R9 280X. How can I setup my Python environment such that I can make use of my AMD GPUs through Keras/Tensorflow support for OpenCL? I'm running on OSX.
0
1
121,342
0
37,902,814
0
0
0
0
1
false
5
2016-06-18T10:46:00.000
1
3
0
How to represent a 3D .obj object as a 3D array?
37,896,090
0.066568
python,c++,arrays,opencv,3d
If I understand correctly, you want to create a voxel representation of 3D models? Something like the visible human displays? I would use one of the OBJ file loaders recommended above to import the model into an OpenGL program. Rotate and scale to whatever alignment you want along XYZ. Then draw the object with a fragment shader that discards any pixel with Z < 0.001 or Z >= 0.002 (or whatever resolution works - I'm just trying to explain the method). This gives you the first image slice, which you store or save. Clear and draw again this time discarding Z < 0.002 or Z >= 0.003 … Because it's the same model in the same position, all your slices will be aligned. However, are you aware that OBJ (and nearly all other 3D formats) are surface descriptions, not solid? They're hollow inside like origami models. So your 3D array representation will be mostly empty. Hope this helps.
Is there any way by which 3D models can be represented as 3D arrays? Are there any libraries that take .obj or .blend files as input and give an array representation of the same? I thought that I would slice object and export the slices to an image. I would then use those images in opencv to build arrays for each slice. In the end I would combine all the arrays of all the slices to finally get a 3D array representation of my .obj file. But I gave up halfway through because it is a painfully long process to get the image slices aligned to each other. Is there any other index based representation I could use to represent 3D models in code? A 3D array would be very convenient for my purposes.
0
1
3,543
0
37,947,823
0
0
0
0
1
true
5
2016-06-21T14:42:00.000
6
2
0
Neural Network composed of multiple activation functions
37,947,558
1.2
python,neural-network,scikits,activation-function
A neural network is just a (big) mathematical function. You could even use different activation functions for different neurons in the same layer. Different activation functions allow for different non-linearities which might work better for solving a specific function. Using a sigmoid as opposed to a tanh will only make a marginal difference. What is more important is that the activation has a nice derivative. The reason tanh and sigmoid are usually used is that for values close to 0 they act like a linear function, while for big absolute values they act more like the sign function ((-1 or 0) or 1), and they have a nice derivative. A relatively recently introduced one is the ReLU (max(x,0)), which has a very easy derivative (except at x=0), is non-linear, and importantly is fast to compute, so it is nice for deep networks with high training times. What it comes down to is that for global performance the choice here is not very important; the non-linearity and capped range are what matter. To squeeze out the last percentage points this choice will matter, but it is mostly dependent on your specific data. This choice, just like the number of hidden layers and the number of neurons inside these layers, will have to be found by cross-validation, although you could adapt your genetic operators to include these.
I am using the sknn package to build a neural network. In order to optimize the parameters of the neural net for the dataset I am using I am using an evolutionary algorithm. Since the package allows me to build a neural net where each layer has a different activation function, I was wondering if that is a practical choice, or whether I should just use one activation function per net? Does having multiple activation functions in a neural net harm, does no damage, or benefit the neural network? Also what is the maximum amount of neuron per layer I should have, and the maximum amount of layers per net should I have?
0
1
5,281
0
37,971,709
0
1
0
0
2
false
0
2016-06-21T15:39:00.000
1
2
0
spyder, numpy, anaconda : cannot import name multiarray
37,948,852
0.099668
python-2.7,numpy,spyder
I solved the problem by executing the spyder version of the python2 environment. It is located in Anaconda3\envs\python2\Scripts\spyder.exe
I am on Windows 10, 64-bit, using Anaconda 4, and I created an environment with Python 2.7 (C:/Anaconda3/envs/python2/python.exe). In this environment, I successfully installed numpy, and when I type "python", enter, "import numpy", enter, it works perfectly in the Anaconda prompt window. In Spyder however, when I open a Python console and type "import numpy", I get "cannot import name multiarray". I have obviously changed the path of the Python interpreter used by Spyder to match the python.exe of the environment I created (C:/Anaconda3/envs/python2/python.exe). I also updated the PYTHONSTARTUP to C:/Anaconda3/envs/python2/Lib/site-packages/spyderlib/scientific_startup.py. It's supposed to be the exact same Python program running, but the behavior differs. How is this possible and how do I fix it? PS: I already tried the various solutions to this error, like uninstalling numpy and reinstalling it. It shouldn't be a problem with numpy since it works just fine in the Python console of the Anaconda prompt window.
0
1
833
0
42,165,864
0
1
0
0
2
false
0
2016-06-21T15:39:00.000
1
2
0
spyder, numpy, anaconda : cannot import name multiarray
37,948,852
0.099668
python-2.7,numpy,spyder
I have encountered the same issue. I followed every possible solution stated on Stack Overflow, but no luck. The cause of the error might be the Python console. I have installed a 3.5 Anaconda, and the default console is the Python 2.7 one, which I had installed primarily with PyDev. Go to Tools > Preferences and click on "reset to defaults"; I did this and now it is working absolutely fine. It might solve the issue. Another solution is to uninstall the current Anaconda (y.x) and install the correct one matching the default console - in my case the 2.7 Anaconda instead of 3.5.
I am on Windows 10, 64-bit, using Anaconda 4, and I created an environment with Python 2.7 (C:/Anaconda3/envs/python2/python.exe). In this environment, I successfully installed numpy, and when I type "python", enter, "import numpy", enter, it works perfectly in the Anaconda prompt window. In Spyder however, when I open a Python console and type "import numpy", I get "cannot import name multiarray". I have obviously changed the path of the Python interpreter used by Spyder to match the python.exe of the environment I created (C:/Anaconda3/envs/python2/python.exe). I also updated the PYTHONSTARTUP to C:/Anaconda3/envs/python2/Lib/site-packages/spyderlib/scientific_startup.py. It's supposed to be the exact same Python program running, but the behavior differs. How is this possible and how do I fix it? PS: I already tried the various solutions to this error, like uninstalling numpy and reinstalling it. It shouldn't be a problem with numpy since it works just fine in the Python console of the Anaconda prompt window.
0
1
833
0
38,003,329
0
0
0
0
1
false
10
2016-06-21T20:49:00.000
-3
3
0
How to load one line at a time from a pickle file?
37,954,324
-0.197375
python,numpy,pickle
Thanks everyone. I ended up finding a workaround (a machine with more RAM so I could actually load the dataset into memory).
I have a large dataset: 20,000 x 40,000 as a numpy array. I have saved it as a pickle file. Instead of reading this huge dataset into memory, I'd like to only read a few (say 100) rows of it at a time, for use as a minibatch. How can I read only a few randomly-chosen (without replacement) lines from a pickle file?
0
1
14,112
0
37,971,491
0
0
0
0
1
false
0
2016-06-22T14:22:00.000
0
1
0
Neural Network converging and accurate in training, but failing in real world
37,970,895
0
python,neural-network,prediction
What Ashafix says was my first thought: you should post your training and test data, and also the data that you use for the 'real world'. Another problem could be that when you are testing, you are using only previous correct weather data (data you already have), whereas in practice you are using your own predictions together with correct weather data. You should be consistent here. PD. Sorry for my English, I'm still learning.
This is my first question on this site. I'm attempting to practice neural networks by having my program predict whether the temperature will go up or down on a given day relative to the previous day. My training data set consists of the previous ten days and whether they went up or down relative to the day before them. I'm not implying this is an effective way to predict weather, but that makes my problem even more confusing. When I train the program over 25 days (50 days ago to 25 days ago) then test it on the next 25 days (25 days ago to yesterday) I consistently get 100% accuracy in the test set. I've added an alpha for gradient descent and have around 60 hidden layers, and if I make the alpha something bigger like 0.7 the accuracy will reduce to ~40%, so I think my testing code is correct. Assuming I have a true 100% accuracy, I had the program predict tomorrow's weather, then use that and 9 days of historical to predict the day after tomorrow, and so on until I've predicted 5 days in the future. I then waited to see what the weather would be and my program was comically bad in its predictions. I ran this for a week to test, and had an accuracy of predicting the next day of about 60% and after that only around 10%. TL;DR Sorry for rambling the details, but my question is what would cause a neural network to be 100% accuracy in testing and then fail spectacularly in practice? Thanks for the help, I can post code if needed (and someone explains how to in a comment)
0
1
68
0
37,997,580
0
0
0
0
1
false
1
2016-06-23T12:48:00.000
0
3
0
Java calling python function with tensorflow graph
37,992,129
0
java,python-2.7,tensorflow
I've had the same problem, Java+Python+TensorFlow. I've ended up setting up a simple http server. If that's too slow for you, you can shave off some overhead by employing sockets directly.
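A bare-bones sketch of that HTTP approach using only the Python 2.7 standard library (getValue, the port, and the JSON shape are stand-ins, not the answerer's actual code):

import json
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer  # Python 2.7

def getValue(data):          # stand-in for the real TensorFlow inference function
    return {"output": sum(data)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.getheader('content-length')))
        result = getValue(json.loads(body))
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(result))

HTTPServer(('', 8080), PredictHandler).serve_forever()
# The Java side can then POST JSON to http://localhost:8080/ and read the response.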
So I have a neural network in TensorFlow (Python 2.7) and I need to retrieve its output using Java. I have a simple Python function getValue(input) which starts the session and retrieves the value. I am open to any suggestions. I believe Jython won't work because TensorFlow is not available for it. I need the call to be as fast as possible. JNI exists for Java calling C, so can I convert with Cython, compile, and then use JNI? Is there a way to pass the information in RAM or some other way I haven't thought of?
1
1
1,916
0
37,996,674
0
0
0
0
2
true
2
2016-06-23T16:02:00.000
3
3
0
Extremely low p-values from non-parametric tests
37,996,628
1.2
python,scipy,statistics,distribution,kolmogorov-smirnov
You do not need to worry about something going wrong with the scipy functions. P values that low just mean that it's really unlikely that your samples have the same parent populations. That said, if you were not expecting the distributions to be (that) different, now is a good time to make sure you're measuring what you think you're measuring, i.e. you are feeding in the right data to scipy.
I'm using Python's non-parametric tests to check whether two samples are consistent with being drawn from the same underlying parent populations: scipy.stats.ks_2samp (2-sample Kolmogorov-Smirnov), scipy.stats.anderson_ksamp (Anderson-Darling for k samples), and scipy.stats.ranksums (Mann-Whitney-Wilcoxon for 2 samples). My significance threshold to say that two samples are significantly different from each other is p = 0.01. If these three tests return extremely low p-values (sometimes like 10^-30 or lower), then do I need to worry about something having gone wrong with the scipy functions? Are these ridiculously small p-values reliable, and can I just report p << 0.01 (p much less than my threshold)?
0
1
1,931
0
38,006,265
0
0
0
0
2
false
2
2016-06-23T16:02:00.000
0
3
0
Extremely low p-values from non-parametric tests
37,996,628
0
python,scipy,statistics,distribution,kolmogorov-smirnov
Well, you've bumped into a well-known feature of significance tests, which is that the p-value typically goes to zero as the sample size increases without bound. If the null hypothesis is false (which can often be established a priori), then you can get as small a p-value as you wish, just by increasing the sample size. My advice is to think about what practical difference it makes that the distributions differ. Try to quantify that in terms of cost, either real (dollars) or abstract. Then devise a measurement for that.
I'm using Python's non-parametric tests to check whether two samples are consistent with being drawn from the same underlying parent populations: scipy.stats.ks_2samp (2-sample Kolmogorov-Smirnov), scipy.stats.anderson_ksamp (Anderson-Darling for k samples), and scipy.stats.ranksums (Mann-Whitney-Wilcoxon for 2 samples). My significance threshold to say that two samples are significantly different from each other is p = 0.01. If these three tests return extremely low p-values (sometimes like 10^-30 or lower), then do I need to worry about something having gone wrong with the scipy functions? Are these ridiculously small p-values reliable, and can I just report p << 0.01 (p much less than my threshold)?
0
1
1,931
0
46,231,053
0
0
0
0
2
false
0
2016-06-26T20:41:00.000
0
2
0
matplotlib: How to assign the dpi of your figure to meet some set maximum file size?
38,042,987
0
python,image,matplotlib,save
You could do a for loop over different dpi values in decreasing order; in every iteration save the image, check the file size, and delete the image if the file size is > 15 MB. Once the file size is < 15 MB, break the loop.
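A minimal sketch of that loop (the figure contents, file name, dpi candidates, and the 15 MB limit are placeholders):

import os
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(1000))

MAX_BYTES = 15 * 1024 ** 2
for dpi in (1200, 600, 300, 150, 75):          # decreasing dpi values
    fig.savefig("figure.png", dpi=dpi)
    if os.path.getsize("figure.png") <= MAX_BYTES:
        break                                  # small enough, keep this version
    os.remove("figure.png")                    # too big, try a lower dpi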
Just wondering if there is a tried and tested method for setting the dpi output of your figure in matplotlib to be governed by some maximum file size, e.g., 15MB.
0
1
80
0
38,043,037
0
0
0
0
2
false
0
2016-06-26T20:41:00.000
0
2
0
matplotlib: How to assign the dpi of your figure to meet some set maximum file size?
38,042,987
0
python,image,matplotlib,save
There can not be such a mechanism, because the file size can only be determined by actually rendering the finished drawing to a file, and that must happen after setting up the figure (where you set the DPI). How, for example, should anyone know, before rendering your curve, how well it's compressible as PNG? Or how big the PDF you might alternatively generate might be, without knowing how many lines you'll plot? Also, matplotlib potentially has a lot of different output formats. Hence, no, this is impossible.
Just wondering if there is a tried and tested method for setting the dpi output of your figure in matplotlib to be governed by some maximum file size, e.g., 15MB.
0
1
80
0
38,061,355
0
0
0
0
1
false
0
2016-06-27T18:27:00.000
0
2
0
Trying to find frequent patterns in a sequence python
38,060,783
0
python,algorithm,pattern-matching,sequence,apriori
There are a couple of business decisions you have to make before you will have a proper algorithm. The first and most important decision is what size of set you want. Clearly if {a, b, ... x} is the most frequent set, then every subset (like {a, x} or {c, d, y}) will occur with at least the same frequency. You need to know which one you need (maybe all, or any). Also, what would you do in the case of these frequencies: {a, b} with frequency 100 and {a, c, d, e, f, g} with frequency 20? Clearly the first one is more frequent, but the second is also pretty frequent and really long. One way to approach this is to iterate over all 1-element subsequences and find their frequency, then all 2-element ones, and so on. Then create some weighted score, which can be the frequency multiplied by some function based on the length of the pattern. Select the highest score.
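A tiny sketch of the first counting pass of that idea - counting unordered pairs that occur next to each other in the sequence (frozenset makes "1 2" and "2 1" the same case); longer patterns would follow the same template:

from collections import Counter

s = [1, 2, 3, 4, 1, 2, 6, 7, 8, 2, 1, 10, 11]

# Count unordered adjacent pairs: frozenset({1, 2}) covers both "1 2" and "2 1"
pair_counts = Counter(frozenset(p) for p in zip(s, s[1:]))
print(pair_counts.most_common(3))   # frozenset({1, 2}) appears 3 times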
I am trying to find frequent (ordered or unordered) patterns in a column. The column contains numeric IDs, for example: s=[1 2 3 4 1 2 6 7 8 2 1 10 11]. Here 1 2 or 2 1, taken as the same case, is the most frequent set. Please help me solve this problem. I could think of Apriori and FP algorithms, but I don't have any transactions, just a sequence.
0
1
735
0
38,065,115
0
1
0
0
1
true
0
2016-06-27T23:42:00.000
0
1
0
Python: multi-dimensional lists - appending one 2D list into another list
38,064,885
1.2
python,arrays,list,multidimensional-array
Python's fairly friendly about this sort of thing and will let you have lists as elements of lists . Here's an example of one way to do it. TableA = [['01/01/2000', '$10'], ['02/01/2000', '$11']] If you entered this straight into the python interpreter, you'd define TableA as a list with two elements. Both of these elements are also lists. If you then entered in TableA[0] you'd get ['01/01/2000', '$10']. Furthermore, by entering TableA[0][0] you'd get '01/01/2000' as that's the first element of the first list in TableA. Extending this further, you can have lists of lists of lists (and so on). First, let's define TableA and TableB. TableA = [['01/01/2000', '$10'], ['02/01/2000', '$11']] TableB = [['03/01/2000', '$13'], ['04/01/2000', '$14']] Now we can simply define BigTable as having TableA and TableB as its elements. BigTable = [TableA, TableB] Now, BigTable[0] is just TableA so BigTable[0][0][0] will be the same as TableA[0][0] If at some point down the line you realise that you want BigTable to have more lists in it, say a TableC or TableD. Just use the append function. BigTable.append(TableC) By the way, you'll probably want to have prices and dates expressed as numbers rather than strings, but it's easier to follow the example this way.
I'm looking to create a master array/list that takes several two dimensional lists and integrates them into the larger list. For example I have a TableA[] which has dates as one array/list and prices as another array/list. I have another as TableB[] which has the same. TableA[0][0] has the first date; TableA[0][1] has the first price; TableA[1][0] has the second date, and so on. I would like to create BigTabe[] that has BigTable[0][0][0] = TableA[0][0] and BigTable[1][0][0] = TableB[0][0]. Any guidance would be much appreciated. Thank you!
0
1
1,397
0
38,067,029
0
0
0
0
1
false
1
2016-06-28T00:57:00.000
0
1
0
Tensorflow Copy Weights Issue
38,065,448
0
python-3.x,neural-network,tensorflow
If you could include your code / more detail here, that would be beneficial. However, you can return the session you're using to train N1 and access it when you want to train N2.
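One common in-graph alternative (not from the answer, and assuming N1 and N2 are built under variable scopes named "n1" and "n2") is to create assign ops once and run them every 4 iterations, so the weights are copied in memory without touching the disk:

import tensorflow as tf

# Variables of both networks, grouped by the (assumed) scope names "n1" and "n2"
n1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="n1")
n2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="n2")

# One assign op per variable pair; build these once, outside the training loop
copy_ops = [tf.assign(dst, src) for src, dst in zip(n1_vars, n2_vars)]

# Inside the training loop, every 4 iterations:
#     sess.run(copy_ops)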
I am using TensorFlow 0.8 to train deep neural networks. Currently, I have a use case where I want to define two identical neural networks N1 and N2, train N1, and during the training loop copy the updated weights from N1 to N2 every 4 iterations. In fact, I know there is a way using tf.train.Saver.save() to save all N1 weights into a .ckpt file on disk, and tf.train.Saver.restore() to load those weights from the .ckpt file, which is equivalent to the copy functionality. However, this save/reload will impact the training speed, and I wonder if there are other, more efficient ways to do the copy (for example, an in-memory copy, etc.). Thanks!
0
1
539
0
38,072,512
0
0
0
0
1
false
0
2016-06-28T08:52:00.000
0
1
0
Plot 2 boxplots , each from different pandas dataframe in a figure?
38,071,436
0
python,pandas,matplotlib,boxplot
Use return_type='axes' to get data1.boxplot to return a matplotlib Axes object. Then pass that Axes object to the second call to boxplot using ax=ax. This will cause both boxplots to be drawn on the same axes. Alternatively, if you just want them plotted side by side, use matplotlib subplots.
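A short illustration of the side-by-side variant with two subplots (data1/data2 stand in for the two DataFrames from the question):

import pandas as pd
import matplotlib.pyplot as plt

data1 = pd.DataFrame({"type": ["A", "B", "C"], "activity": ["ACTIVE", "INACTIVE", "ACTIVE"],
                      "feature1": [12, 10, 9]})
data2 = pd.DataFrame({"type": ["A", "B", "C"], "activity": ["ACTIVE", "INACTIVE", "ACTIVE"],
                      "feature1": [13, 14, 15]})

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
data1.boxplot(column="feature1", by="type", ax=axes[0])      # grouped by type
data2.boxplot(column="feature1", by="activity", ax=axes[1])  # grouped by activity
plt.show()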
I want to plot boxplots for each of the dataframes side by side. Below is an example dataset.

data 1:
id | type | activity | feature1
1  | A    | ACTIVE   | 12
2  | B    | INACTIVE | 10
3  | C    | ACTIVE   | 9

data 2:
id | type | activity | feature1
1  | A    | ACTIVE   | 13
2  | B    | INACTIVE | 14
3  | C    | ACTIVE   | 15

The first boxplot should plot feature1 grouped by type, and the second boxplot should plot feature1 grouped by activity. Both plots should be placed in the same figure. Note: I do not want to do combined grouping.
0
1
865
0
38,833,074
0
0
0
0
1
false
0
2016-06-28T13:59:00.000
0
1
0
Implicit DAE Mass Matrix Python
38,078,299
0
python,matrix,ode
Since the mass matrix is singular, this is a "differential-algebraic equation". You can find off-the-shelf solvers for DAEs, such as the IDA solver from the SUNDIALS library. SUNDIALS has python bindings in the scikit.odes package.
I have a problem M*y' = f(y) that is going to be solved in Python, where M is the mass matrix, y' the derivative and y a vector, such that y1, y2 etc. refer to different points in r. Has anyone used a mass matrix on a similar problem in Python? The problem is a 2D problem in the r- and z-directions. The r-direction is discretized to reduce the problem to a 1D problem. The mass matrix is a diagonal matrix with ones and zeros on the diagonal.
0
1
680
0
54,546,005
0
0
0
0
1
false
51
2016-06-28T15:05:00.000
0
10
0
How can I implement incremental training for xgboost?
38,079,853
0
python,machine-learning,xgboost
Regarding paulperry's code: if you change one line from "train_split = round(len(train_idx) / 2)" to "train_split = len(train_idx) - 50", model 1+update2 changes from 14.2816257268 to 45.60806270012028, and a lot of "leaf=0" entries appear in the dump file. The updated model is not good when the update sample set is relatively small. For binary:logistic, the updated model is unusable when the update sample set has only one class.
The problem is that my training data cannot be placed into RAM due to its size. So I need a method which first builds one tree on the whole training data set, calculates the residuals, builds another tree, and so on (like gradient boosted trees do). Obviously, if I call model = xgb.train(param, batch_dtrain, 2) in some loop it will not help, because in that case it just rebuilds the whole model for each batch.
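As a rough illustration of the incremental idea (assuming a reasonably recent xgboost where xgb.train accepts an xgb_model argument to continue from an existing booster; the toy data, parameters, and batch loop are made up):

import numpy as np
import xgboost as xgb

params = {"objective": "reg:linear", "max_depth": 3}
booster = None
for _ in range(5):                                   # 5 incoming batches
    X = np.random.rand(1000, 10)
    y = X.sum(axis=1) + np.random.randn(1000) * 0.1  # toy regression target
    dtrain = xgb.DMatrix(X, label=y)
    # xgb_model=None on the first pass builds a fresh model;
    # afterwards training continues from the existing booster's trees.
    booster = xgb.train(params, dtrain, num_boost_round=2, xgb_model=booster)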
0
1
48,270
0
38,084,884
0
1
0
0
1
false
2
2016-06-28T17:58:00.000
0
2
0
Python 3.5.1 Unable to import numpy after update
38,083,176
0
python,numpy,pandas,anaconda
I was able to resolve this issue using conda to remove and reinstall the packages that were failing to import. I will leave the question marked unanswered to see if anyone else has a better solution, or guidance on how to prevent this in the future.
I'm running Python 3.5.1 on a Windows 7 machine. I've been using Anaconda without issue for several months now. This morning, I updated my packages (conda update --all) and now I can't import numpy (version 1.11.0) or pandas(version 0.18.1). The error I get from Python is: Syntax Error: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape. This error occurs when the import statement is executed. I'm able to import other packages, some from anaconda's bundle and some from other sources without issue. Any thoughts on how to resolve this?
0
1
604
0
38,106,001
0
0
0
0
1
true
0
2016-06-29T16:45:00.000
0
1
0
Odd results from MATLAB function called in python
38,105,605
1.2
python,matlab,python-2.7,cplex
So it turns out that on occasion the code will run even if some variables aren't specified as double, and in other cases integer division or the like produces false results. I have no idea how this correlates to the input, as it really shouldn't, but I just went and specified all variables in the relevant section of code to be doubles and that fixed it. So tl;dr: even if it runs, just enforce that all variables are doubles and the problem is solved. Really something MathWorks should fix in their Python API.
So I have a rather complicated MATLAB function (it calls a simulation that in turn calls an external optimisation suite (CPLEX or Gurobi)). For certain settings and inputs the MATLAB function and the Python call into MATLAB give the same result, but for others they differ (the correct answer is ~4500); Python sometimes returns 0.015... or 162381 - widely varying results I can't spot a pattern or correlation for. My guess would be either something with int/float/double variable conversions, or some form of memory problem. The result comes straight from CPLEX so I'm a little confused as to why it changes. On a side note, if I return a structure that contains a structure of arrays, that kills the Python kernel. That makes debugging from Python a little harder (I have pymatbridge and metakernel installed). Has anyone had similar issues with unreliable MATLAB functions called from Python? Any solution ideas other than executing MATLAB from the console and reading in a results file?
0
1
62
0
38,110,151
0
1
0
0
1
true
0
2016-06-29T20:55:00.000
0
1
0
OpenCV Python - cv2 Module not found
38,109,860
1.2
python,python-2.7,opencv
Try reinstalling it with sudo apt-get install python-opencv, but first check something you might be skipping. Make sure the script you are running in the terminal uses the same Python version/location as IDLE. Maybe your IDLE is running on a different interpreter (different location). Open IDLE and check the path of the cv2 module via cv2.__file__, or check the interpreter's search paths via sys.path. Then check the executable Python path by running the script from the terminal; it must be the same, or else you need to explicitly set the PYTHONPATH to the executable path shown in IDLE. Edit: According to the comments, the problem you are facing is with the execution path; add the IDLE execution path to the PATH environment variable on Windows. You can do it on the fly with SET PATH=%PATH%;c:\python27 in cmd (change the path to match your IDLE's location).
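For reference, a tiny diagnostic snippet along the lines of that advice - run it both in IDLE and from the terminal and compare the output:

import sys
print(sys.executable)   # which Python interpreter is actually running
print(sys.path)         # where it looks for modules

import cv2              # will fail in the environment that lacks the bindings
print(cv2.__file__)     # where cv2 was loaded from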
Even though I believe I have installed correctly OpenCV, I cannot overcome the following problem. When I start a new python project from IDLE (2.7) the cv2 module is imported successfully. If I close IDLE and try to run the .py file, an error message is displayed that says "ImportError: No module named cv2". Then if I create a clean project through IDLE it works until I close it. What could be the problem? P.S. I am using Python 2.7 and OpenCV 3.1, but tried also with 2.4.13 on Windows 10.
0
1
2,805
0
63,753,350
0
0
0
0
1
false
0
2016-06-30T06:05:00.000
1
2
0
Install python CV2 on spark cluster(data bricks)
38,115,108
0.099668
python,opencv,apache-spark,pyspark,databricks
Try installing numpy first, followed by opencv-python; it will work. Steps: Navigate to Install Library --> Select PyPI --> in Package enter numpy (after installation completes, proceed to step 2). Navigate to Install Library --> Select PyPI --> in Package enter opencv-python.
I want to install Python's cv2 library on a Spark cluster using Databricks Community Edition, and I'm going to: Workspace -> Create -> Library, as per the normal procedure, then selecting Python in the Language combobox; but in the "PyPI Package" textbox I tried "cv2" and "opencv" and had no luck. Has anybody tried this? Do you know if cv2 can be installed on the cluster through this method? And if so, which name should be used in the textbox?
0
1
1,543
0
38,124,167
0
0
0
0
1
false
1
2016-06-30T07:04:00.000
2
1
0
python sklearn: what is the different between "sklearn.preprocessing.normalize(X, norm='l2')" and "sklearn.svm.LinearSVC(penalty='l2')"
38,116,078
0.379949
python,scikit-learn
These 2 are different things and you normally need them both in order to make a good SVC model. 1) The first one means that in order to scale (normalize) the X data matrix you need to divide by the L2 norm of each column, which is just this: sqrt(sum(abs(X[:, j])**2)), where j is each column in your data matrix X. This ensures that none of the values of each column become too big, which would make it tough for some algorithms to converge. 2) Irrespective of how scaled (and small in values) your data is, there may still be outliers or some features (j) that are way too dominant, and your algorithm (LinearSVC()) may over-trust them while it shouldn't. This is where L2 regularization comes into play: it says that, apart from the function the algorithm minimizes, a cost will be applied to the coefficients so that they don't become too big. In other words, the coefficients of the model become an additional cost for the SVC cost function. How much cost? That is decided by the C (L2) value, as C*(beta[j])^2. To sum up, the first one tells by which value to divide each column of the X matrix; how much weight a coefficient should burden the cost function with is the second.
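For orientation only, the two knobs side by side in scikit-learn (toy data; C=1.0 is just the default, not a recommendation):

import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

X = np.random.rand(100, 5)
y = (X[:, 0] > 0.5).astype(int)

X_scaled = normalize(X, norm='l2')        # 1) scaling of the data matrix
clf = LinearSVC(penalty='l2', C=1.0)      # 2) L2 penalty on the model coefficients
clf.fit(X_scaled, y)
print(clf.coef_)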
Here are two methods of normalizing: 1: this one is used in the data pre-processing: sklearn.preprocessing.normalize(X, norm='l2') 2: the other method is used in the classifier: sklearn.svm.LinearSVC(penalty='l2') I want to know: what is the difference between them? And must both steps be used in a complete model? Is it right that using just one method is enough?
0
1
378
0
38,119,245
0
0
0
0
1
false
5
2016-06-30T09:24:00.000
0
2
0
Use hard drive instead of RAM in Python
38,118,942
0
python,pandas,memory,pydev
If all you need is a virtualization of the disk as a large RAM memory you might set up a swap file on the system. The kernel will then automatically swap pages in and out as needed, using heuristics to figure out what pages should be swapped and which should stay on disk.
I'd like to know if there's a method or a Python package that lets me use a large dataset without loading it entirely into RAM. I'm also using pandas for statistical functions. I need to have access to the entire dataset because many statistical functions need the entire dataset to return credible results. I'm using PyDev (with the Python 3.4 interpreter) on LiClipse with Windows 10.
0
1
5,299
0
57,539,584
0
1
0
0
2
false
6
2016-06-30T15:59:00.000
0
2
0
Pycharm debugger, view as array option
38,128,164
0
python,matlab,debugging,numpy,pycharm
You need to ensure that after you "view as array" you then enter the correct slice. I.e. if you view a color image which has shape (500, 1000, 3) as an array, the default slicing option will be image[0]. This is the first row of pixels and will appear as a (1000, 3) array. In order to see one of the three color channels you must change the slicing option to image[:, :, color], then you will see one of the three color channels slices appear as a (500, 1000) array.
First of all, sorry if it's not the place to post this question, I know it is more related to the software I'm using to program than programming itself, but I figured someone here would probably know the answer. I often use PyCharm (currently on version 2016.1.2) and its useful debugger to code in Python. I'm currently translating Matlab code to Python code and I often need to compare outputs of functions. In PyCharm's debugger, I can right click on a variable in the variable space and then press « View as array ». This gives me a nice grid view of my array (Excel kind of grid) and I can easily compare with my array in Matlab, which can also be displayed in a grid. However, sometimes, this option won't work in PyCharm and I don't know why! For example, I have a variable of type numpy.ndarray containing 137 by 60 floats and when I press « view as array », it displays the window, but instead of showing the grid, it shows « Nothing to show ». Curiously, I tried to copy the first 30 lines in another variable and this time PyCharm was able to show me the grid associated with this new variable. Usually, the number doesn't seem to be a problem. I tried to display a 500 by 500 array containing floats and it did just fine. If someone could tell me why this happens and how I can overcome this problem, I'd be very glad. Also, if anyone has another way to display a matrix in Python in an elegant way, I'd take it too since it could also help me in my task! Thanks!
0
1
2,669
0
41,962,870
0
1
0
0
2
false
6
2016-06-30T15:59:00.000
6
2
0
Pycharm debugger, view as array option
38,128,164
1
python,matlab,debugging,numpy,pycharm
I encountered the same problem when I tried to view a complex array with the 'Color' check box checked. Unchecking the check box showed the array. Maybe some inf or nan values are present in your array, which prevents showing the colored array.
First of all, sorry if it's not the place to post this question, I know it is more related to the software I'm using to program than programming itself, but I figured someone here would probably know the answer. I often use PyCharm (currently on version 2016.1.2) and its useful debugger to code in Python. I'm currently translating Matlab code to Python code and I often need to compare outputs of functions. In PyCharm's debugger, I can right click on a variable in the variable space and then press « View as array ». This gives me a nice grid view of my array (Excel kind of grid) and I can easily compare with my array in Matlab, which can also be displayed in a grid. However, sometimes, this option won't work in PyCharm and I don't know why! For example, I have a variable of type numpy.ndarray containing 137 by 60 floats and when I press « view as array », it displays the window, but instead of showing the grid, it shows « Nothing to show ». Curiously, I tried to copy the first 30 lines in another variable and this time PyCharm was able to show me the grid associated with this new variable. Usually, the number doesn't seem to be a problem. I tried to display a 500 by 500 array containing floats and it did just fine. If someone could tell me why this happens and how I can overcome this problem, I'd be very glad. Also, if anyone has another way to display a matrix in Python in an elegant way, I'd take it too since it could also help me in my task! Thanks!
0
1
2,669
0
38,130,043
0
0
0
0
1
false
0
2016-06-30T17:44:00.000
3
3
0
Python equivalent for matlab's perms
38,130,008
0.197375
python,matlab,numpy,scipy
Python's standard library has a function for this: itertools.permutations. You can call it on any iterable in Python and it returns all full-length permutations.
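A small sketch of how to get MATLAB-like output from it (the reverse-lexicographic ordering mirrors what perms produces, as stated in the question):

import itertools
import numpy as np

def perms(v):
    """All permutations of v as rows, in reverse lexicographic order."""
    return np.array(sorted(itertools.permutations(v), reverse=True))

print(perms([1, 2, 3]))
# [[3 2 1]
#  [3 1 2]
#  [2 3 1]
#  [2 1 3]
#  [1 3 2]
#  [1 2 3]]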
Is there an equivalent method in numpy or scipy for matlab's perms function? In matlab, perms returns a matrix of all possible permutations of the input in reverse lexicographical order.
0
1
865
0
38,144,150
0
0
0
0
1
false
2
2016-07-01T11:13:00.000
1
2
0
importing whole python module doesn't allow using submodules
38,143,991
0.099668
python,numpy,import,module,scikit-learn
Numpy conveniently imports its submodules in its __init__.py file and adds them to __all__. There's not much you can do about it when using a library - it either does it or not. sklearn apparently doesn't.
My question is specific to the scikit-learn Python module, but I had similar issues with matplotlib as well. When I want to use sklearn, if I just do 'import sklearn' and then call whatever submodule I need, like 'sklearn.preprocessing.scale()', I get the error "AttributeError: 'module' object has no attribute 'preprocessing'". On the other hand, when I do 'from sklearn import preprocessing' and then use 'preprocessing.scale()' it works normally. When I use other modules like NumPy, it is sufficient to just 'import numpy' and it works well. Therefore, I would like to ask if anyone can tell me why this is happening and whether I am doing something wrong? Thanks.
0
1
1,297
0
38,145,044
0
1
0
0
1
true
5
2016-07-01T11:56:00.000
7
2
0
Does Python cache repeatedly accessed files?
38,144,825
1.2
python,pandas
No, Python is just a language and doesn't really do anything on its own. A particular Python library might implement caching, but the standard functions you use to open and read files don't do so. The higher-level file-loading functions in Pandas and the CSV module don't do any caching either. The operating system might do some caching of its own, but you can't control that from within Python.
I was wondering if Python is smart enough enough to cache repeatedly accessed files, e.g. when reading the same CSV with pandas or unpickling the same file multiple times. Is this even Python's responsibility, or should the operating system take care of it?
0
1
1,776
0
38,439,059
0
0
0
0
1
true
0
2016-07-01T13:23:00.000
0
2
1
Where can i cache pandas dataframe in tornado requesthandler
38,146,607
1.2
python,caching,tornado,requesthandler
Depends on how and where you want to be able to access this cache in the future, and how you want to handle invalidation. If the CSV files don't change then this could be as simple as @functools.lru_cache or a global dict. If you need one cache shared across multiple processes then you could use something like memcached or redis, but then you'll still have some parsing overhead depending on what format you use. In any case, there's not really anything Tornado-specific about this.
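A minimal sketch of the simplest of those options (the CSV path is a placeholder; the cache is per-process and never invalidated, which matches the "files don't change" assumption):

import functools
import pandas as pd

@functools.lru_cache(maxsize=8)
def load_frame(path):
    # Parsed once per path per process; later calls return the cached DataFrame
    return pd.read_csv(path)

# Inside a tornado RequestHandler:
#     df = load_frame("data.csv")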
I want to cache a pandas DataFrame in a Tornado RequestHandler, so I don't want to repeat the pd.read_csv() for every hit to that particular URL.
0
1
906
0
57,898,590
0
0
0
0
2
false
2
2016-07-01T13:33:00.000
0
2
0
Estimation of fundamental matrix or essential matrix from feature matching
38,146,821
0
python,opencv,matrix,computer-vision
Yes, computing the fundamental matrix gives a different matrix every time, as it is defined only up to a scale factor. It is a rank-2 matrix with 7 DOF (3 rot, 3 trans, 1 scaling). The fundamental matrix is a 3x3 matrix; F33 (3rd column and 3rd row) is the scale factor. You may ask why we fix the matrix with a constant at F33: because of (x_left)^T F (x_right) = 0. This is a homogeneous equation with infinite solutions, so we add a constraint by making F33 constant.
I am estimating the fundamental matrix and the essential matrix by using the built-in functions in OpenCV. I provide input points to the function by using ORB and a brute-force matcher. These are the problems that I am facing: 1. The essential matrix that I compute from the built-in function does not match the one I find from mathematical computation using the fundamental matrix as E = K.t() * F * K. 2. As I vary the number of points used to compute F and E, the values of F and E are constantly changing. The function uses the RANSAC method. How do I know which value is the correct one? 3. I am also using a built-in function to decompose E and find the correct R and T from the 4 possible solutions. The values of R and T also change with the changing E. More concerning is the fact that the direction vector T changes without a pattern. Say it was in the X direction for one value of E; if I change the value of E, it changes to Y or Z. Why is this happening? Has anyone else had the same problem? How do I resolve this problem? My project involves taking measurements of objects from images. Any suggestions or help would be welcome!!
0
1
1,100
0
38,615,979
0
0
0
0
2
false
2
2016-07-01T13:33:00.000
1
2
0
Estimation of fundamental matrix or essential matrix from feature matching
38,146,821
0.099668
python,opencv,matrix,computer-vision
Both F and E are defined up to a scale factor. It may help to normalize the matrices, e.g. by dividing by the last element. RANSAC is a randomized algorithm, so you will get a different result every time. You can test how much it varies by triangulating the points, or by computing the reprojection errors. If the results vary too much, you may want to increase the number of RANSAC trials or decrease the distance threshold, to make sure that RANSAC converges to the correct solution.
I am estimating the fundamental matrix and the essential matrix by using the inbuilt functions in OpenCV. I provide input points to the function by using ORB and a brute-force matcher. These are the problems I am facing: 1. The essential matrix that I compute from the inbuilt function does not match the one I find from mathematical computation using the fundamental matrix as E = K.t() F K. 2. As I vary the number of points used to compute F and E, the values of F and E are constantly changing. The function uses the RANSAC method. How do I know which value is the correct one? 3. I am also using an inbuilt function to decompose E and find the correct R and T from the 4 possible solutions. The values of R and T also change with the changing E. More concerning is the fact that the direction vector T changes without a pattern: say it was in the X direction for one value of E; if I change the value of E, it changes to Y or Z. Why is this happening? Has anyone else had the same problem? How do I resolve this? My project involves taking measurements of objects from images. Any suggestions or help would be welcome!
0
1
1,100
0
38,156,630
0
1
0
0
1
false
54
2016-07-01T23:34:00.000
39
3
0
What is the difference between native int type and the numpy.int types?
38,155,039
1
python,numpy
There are several major differences. The first is that python integers are flexible-sized (at least in python 3.x). This means they can grow to accommodate any number of any size (within memory constraints, of course). The numpy integers, on the other hand, are fixed-sized. This means there is a maximum value they can hold. This is defined by the number of bytes in the integer (int32 vs. int64), with more bytes holding larger numbers, as well as whether the number is signed or unsigned (int32 vs. uint32), with unsigned being able to hold larger numbers but not able to hold negative numbers. So, you might ask, why use the fixed-sized integers? The reason is that modern processors have built-in tools for doing math on fixed-size integers, so calculations on those are much, much, much faster. In fact, python uses fixed-sized integers behind-the-scenes when the number is small enough, only switching to the slower, flexible-sized integers when the number gets too large. Another advantage of fixed-sized values is that they can be placed into consistently-sized adjacent memory blocks of the same type. This is the format that numpy arrays use to store data. The libraries that numpy relies on are able to do extremely fast computations on data in this format; in fact, modern CPUs have built-in features for accelerating this sort of computation. With the variable-sized python integers, this sort of computation is impossible because there is no way to say how big the blocks should be and no consistency in the data format. That being said, numpy is actually able to make arrays of python integers. But rather than containing the values themselves, those arrays contain references to other pieces of memory holding the actual python integers. This cannot be accelerated in the same way, so even if all the python integers fit within the fixed integer size, it still won't be accelerated. None of this is the case with Python 2. In Python 2, Python integers are fixed integers and thus can be directly translated into numpy integers. For variable-length integers, Python 2 had the long type. But this was confusing and it was decided this confusion wasn't worth the performance gains, especially when people who need performance would be using numpy or something like it anyway.
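A small illustration of the size limits; the exact overflow behaviour of the last line (silent wrap-around vs. a warning or error) depends on the NumPy version, so treat it as a sketch rather than a guarantee.

import numpy as np

# Fixed-size NumPy integers have hard limits.
print(np.iinfo(np.int32).max)   # 2147483647
print(np.iinfo(np.int64).max)   # 9223372036854775807

# Python ints grow as needed.
big = 2 ** 100
print(big)

# An array of Python ints falls back to object dtype: references, not packed values.
obj = np.array([2 ** 100, 2 ** 101], dtype=object)
print(obj.dtype)

# Arithmetic on fixed-size integers can overflow; depending on the NumPy
# version this wraps around silently or emits a RuntimeWarning.
a = np.array([2 ** 62], dtype=np.int64)
print(a * 4)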
Can you please help me understand the main differences (if any) between the native int type and the numpy.int32 or numpy.int64 types?
0
1
41,529
0
38,158,929
0
0
0
0
2
false
0
2016-07-02T05:30:00.000
0
4
0
opencv-python object detection
38,156,827
0
python,python-2.7,opencv
Your question is way too general. Feature matching is a very vast field, and the type of algorithm to use depends entirely on the object you want to detect, its environment, etc. If your object won't change its size or angle in the image, use template matching. If the object will change its size and orientation, you can use SIFT or SURF. If your object has unique color features that differ from its background, you can use the HSV method. If you have to classify a whole group of images as your object, for example all cricket bats should be detected, then you can train on a number of positive images to tell the computer what the object looks like and negative images to tell it what it doesn't; this can be done using Haar training.
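A rough template-matching sketch for the 16-bit case; the file names are placeholders, and the images are converted to float32 because matchTemplate expects 8-bit or 32-bit float input rather than 16-bit.

import cv2
import numpy as np

# File names are placeholders; IMREAD_UNCHANGED keeps the 16-bit depth.
img = cv2.imread("scene.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
templ = cv2.imread("template.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Slide the template over the image and score each position.
result = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

h, w = templ.shape[:2]
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print("best match box:", top_left, bottom_right, "score:", max_val)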
I'm a beginner in OpenCV using Python. I have many 16-bit grayscale images and need to detect the same object every time in the different images. I tried template matching in OpenCV Python but needed a different template for each image, which is not desirable. Can anyone suggest an algorithm in Python to do this efficiently?
0
1
905
0
38,674,476
0
0
0
0
2
false
0
2016-07-02T05:30:00.000
0
4
0
opencv-python object detection
38,156,827
0
python,python-2.7,opencv
You can try the sliding-window method if your object is the same in all samples.
I'm a beginner in OpenCV using Python. I have many 16-bit grayscale images and need to detect the same object every time in the different images. I tried template matching in OpenCV Python but needed a different template for each image, which is not desirable. Can anyone suggest an algorithm in Python to do this efficiently?
0
1
905
0
38,176,821
0
0
0
0
1
false
0
2016-07-04T03:42:00.000
0
1
0
Does pandas.read_csv loads all data at once?
38,176,645
0
python,pandas
The way to check is len(df). This will give you the number of rows in the DataFrame. Then you need to count the lines in the CSV file: on Linux, use wc -l; otherwise you can use an editor such as Notepad++, Nano, or Sublime Text to find the line count.
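A quick way to compare the two counts; file.csv stands in for the real file, and one line is subtracted for the header row.

import pandas as pd

df = pd.read_csv("file.csv")
print(len(df))                      # rows loaded into the DataFrame

with open("file.csv") as f:
    raw_lines = sum(1 for _ in f)
print(raw_lines - 1)                # lines in the file minus the header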
I want to know: when I use the pandas.read_csv('file.csv') function to read a CSV file, does it load all the data of file.csv into the DataFrame?
0
1
106
0
38,181,950
0
0
0
0
1
false
0
2016-07-04T06:22:00.000
0
1
0
Regression: Variable influence
38,178,028
0
python,sas,regression
In SAS, apart from the correlation (Pearson index), you can use a ranking index like the Spearman coefficient (proc corr). In addition, supposing you have the correct modules (STAT/MINER) licensed, you can use: (1) a linear (logistic) regression on standardized regressors and compare the betas; (2) a tree and compare the variables on one of the available metrics (Gini, Chi2).
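For the Python side, a rough sketch of the same three ideas (Spearman correlation, standardized-coefficient comparison, tree-based importances); the synthetic DataFrame, column names and model choices are assumptions standing in for the real data.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for the real table.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["a", "b", "c"])
y = 3 * X["a"] + 0.5 * X["b"] + rng.normal(size=200)

# 1) Rank-based (Spearman) correlation with the target.
print(X.corrwith(y, method="spearman").sort_values(ascending=False))

# 2) Betas of a regression on standardized regressors.
lr = LinearRegression().fit(StandardScaler().fit_transform(X), y)
print(pd.Series(lr.coef_, index=X.columns).abs().sort_values(ascending=False))

# 3) Tree-based variable importances.
rf = RandomForestRegressor(random_state=0).fit(X, y)
print(pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False))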
Is there any way in SAS or Python to find the most influential variables in rank order, apart from correlation? I might be missing something; any suggestion on how to interpret it would be appreciated.
0
1
64
0
38,220,858
0
0
0
0
2
false
0
2016-07-05T12:52:00.000
1
2
0
TF-IDF vectorizer doesn't work better than countvectorizer (sci-kit learn
38,203,983
0.099668
python-2.7,scikit-learn,tf-idf
There is no reason why IDF would give more information for a classification task. It performs well for search and ranking, but classification needs to capture similarity, not singularities. IDF is meant to spot the singularity of one sample versus the rest of the corpus, whereas what you are looking for is the singularity of one sample versus the other clusters. IDF smooths out the intra-cluster TF similarity.
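A minimal way to compare both vectorizers under the same classifier and scoring; the tiny corpus, labels, fold count and f1_macro metric are placeholders for the real multilabel setup.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder corpus; replace with the real documents and labels.
texts = ["cheap flight deals", "book a cheap hotel", "flight and hotel offer",
         "football match tonight", "tennis final result", "match score update",
         "hotel booking online", "live football score"]
labels = ["travel", "travel", "travel", "sport", "sport", "sport", "travel", "sport"]

for vec in (CountVectorizer(), TfidfVectorizer()):
    clf = make_pipeline(vec, LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, texts, labels, cv=2, scoring="f1_macro")
    print(type(vec).__name__, scores.mean())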
I am working on a multilabel text classification problem with 10 labels. The dataset is small, roughly 7,000 items and 7,500 labels in total. I am using Python scikit-learn and something strange came up in the results. As a baseline I started out using the CountVectorizer and was actually planning on switching to the TF-IDF vectorizer, which I thought would work better. But it doesn't: with the CountVectorizer I get an F1 score that is 0.1 higher (0.76 vs 0.65). I cannot wrap my head around why this could be the case. There are 10 categories and one is called miscellaneous; especially this one gets a much lower performance with TF-IDF. Does anyone know when TF-IDF could perform worse than counts?
0
1
1,238
0
38,204,179
0
0
0
0
2
false
0
2016-07-05T12:52:00.000
1
2
0
TF-IDF vectorizer doesn't work better than countvectorizer (sci-kit learn
38,203,983
0.099668
python-2.7,scikit-learn,tf-idf
The question is, why not? They are simply different solutions. What is your dataset, how many words does it have, how are they labelled, how do you extract your features? CountVectorizer simply counts the words; if it does a good job, so be it.
I am working on a multilabel text classification problem with 10 labels. The dataset is small, roughly 7,000 items and 7,500 labels in total. I am using Python scikit-learn and something strange came up in the results. As a baseline I started out using the CountVectorizer and was actually planning on switching to the TF-IDF vectorizer, which I thought would work better. But it doesn't: with the CountVectorizer I get an F1 score that is 0.1 higher (0.76 vs 0.65). I cannot wrap my head around why this could be the case. There are 10 categories and one is called miscellaneous; especially this one gets a much lower performance with TF-IDF. Does anyone know when TF-IDF could perform worse than counts?
0
1
1,238
0
44,310,145
0
0
0
0
1
false
2
2016-07-05T12:52:00.000
1
3
0
Encoding in .sas7dbat
38,203,988
0.066568
python,pandas,encoding,sas
read_sas from pandas seems not to like encoding="utf-8". I had a similar problem. Using SAS7BDAT('foo.sas7bdat').to_data_frame() solved the decoding issues with SAS files for me.
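A sketch of both routes; the file name foo.sas7bdat is a placeholder, and the latin-1 fallback encoding is only a guess that depends on how the SAS file was written.

import pandas as pd
from sas7bdat import SAS7BDAT

# Route 1: pandas, telling it the encoding explicitly.
df = pd.read_sas("foo.sas7bdat", format="sas7bdat", encoding="latin-1")

# Route 2: the sas7bdat package, which handles the decoding itself.
with SAS7BDAT("foo.sas7bdat") as reader:
    df2 = reader.to_data_frame()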
I am trying to import a sas dataset(.sas7bdat format) using pandas function read_sas(version 0.17) but it is giving me the following error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 12: ordinal not in range(128)
0
1
9,854
0
38,206,084
0
0
0
0
1
false
2
2016-07-05T13:10:00.000
1
2
0
Viewing a portion of a very large CSV file?
38,204,346
0.099668
python,excel,csv
If you want to do somewhat more selective fishing for particular rows, then the python csv module will allow you to read the csv file row by row into Python data structures. Consult the documentation. This may be useful if just grabbing the first hundred lines reveals nothing about many of the columns because they are blank in all those rows. So you could easily write a program in Python to read as many rows as it takes to find and write out a few rows with non-blank data in particular columns. Likewise if you want to analyze a subset of the data matching particular criteria, you can read all the rows in and write only the interesting ones out for further analysis. An alternative to csv is pandas. Bigger learning curve, but it is probably the right tool for analyzing big data. (1Gb is not very big these days).
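Two quick sketches for grabbing just the head of the file; big.csv and sample.csv are placeholder names.

import csv
import pandas as pd

# Option 1: let pandas read only the first 100 rows.
preview = pd.read_csv("big.csv", nrows=100)
print(preview.head())

# Option 2: copy the first 100 rows (plus header) into a small file
# that Excel or Notepad can open comfortably.
with open("big.csv", newline="") as src, open("sample.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for i, row in enumerate(reader):
        if i > 100:
            break
        writer.writerow(row)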
I have a ~1.0gb CSV file, and when trying to load it into Excel just to view, Excel crashes. I don't know the schema of the file, so it's difficult for me to load it into R or Python. The file contains restaurant reviews and has commas in it. How can I open just a portion of the file (say, the first 100 rows, or 1.0mb's worth) in Windows Notepad or Excel?
0
1
1,514
0
38,773,030
0
1
0
0
1
false
2
2016-07-05T16:03:00.000
1
1
0
Error in loadNamespace(name) Cairo
38,207,942
0.197375
python,r,jupyter,cairo
I had the exact same issue and fixed it by installing the Cairo package: install.packages("Cairo") followed by library(Cairo). Thanks to my colleague Bartek for providing this solution.
It's my first time using R in Jupyter. I've installed everything: Jupyter, Python, R and IRkernel. Plain typing and calculation work fine in Jupyter, but whenever I want to use a graphing library like plot or ggplot2, it shows "Error in loadNamespace(name): there is no package called 'Cairo'", followed by a traceback and a plot without a title. Please guide me on how to handle this issue.
0
1
411