Dataset schema (one row per answer; column · dtype · observed range):

GUI and Desktop Applications        int64           0 to 1
A_Id                                int64           5.3k to 72.5M
Networking and APIs                 int64           0 to 1
Python Basics and Environment       int64           0 to 1
Other                               int64           0 to 1
Database and SQL                    int64           0 to 1
Available Count                     int64           1 to 13
is_accepted                         bool            2 classes
Q_Score                             int64           0 to 1.72k
CreationDate                        stringlengths   23 to 23
Users Score                         int64           -11 to 327
AnswerCount                         int64           1 to 31
System Administration and DevOps    int64           0 to 1
Title                               stringlengths   15 to 149
Q_Id                                int64           5.14k to 60M
Score                               float64         -1 to 1.2
Tags                                stringlengths   6 to 90
Answer                              stringlengths   18 to 5.54k
Question                            stringlengths   49 to 9.42k
Web Development                     int64           0 to 1
Data Science and Machine Learning   int64           1 to 1
ViewCount                           int64           7 to 3.27M
Title: numpy.arange floating point errors
Meta: Q_Id 35,362,241 · A_Id 35,362,451 · created 2016-02-12T12:05:00.000 · answer score 1.2 · accepted · users score 1 · Q score 1 · answers 1 · views 1,265 · available count 1 · topics: Data Science and Machine Learning
Tags: python,python-2.7,numpy,floating-point
Question: Why does np.arange(5, 60, 0.1)[150] yield 19.999999999999947, but np.arange(5, 60, 0.5)[30] yield 20.0? Why does this happen?
Answer: That's because floats (most of the time) cannot represent the exact value you put in. Try print("%.25f" % np.float64(0.1)), which returns 0.1000000000000000055511151: not exactly 0.1. NumPy already provides a good workaround for almost-equal (floating point) comparisons, np.testing.assert_almost_equal, so you can test using np.testing.assert_almost_equal(20, np.arange(5, 60, 0.1)[150]). The reason your second example yields the exact value is that 0.5 can be represented as an exact float (2**(-1) = 0.5), and therefore multiplications with this value do not suffer from that floating point problem.
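A minimal sketch of the representation error and the tolerant comparison the answer recommends (values taken from the question):

```python
import numpy as np

# 0.1 has no exact binary representation, so the error accumulates over 150 steps.
print("%.25f" % np.float64(0.1))    # 0.1000000000000000055511151
print(np.arange(5, 60, 0.1)[150])   # 19.999999999999947

# Compare with a tolerance instead of exact equality.
np.testing.assert_almost_equal(np.arange(5, 60, 0.1)[150], 20)

# 0.5 is exactly representable (2**-1), so no drift appears.
print(np.arange(5, 60, 0.5)[30])    # 20.0
```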
Title: Error Importing Python Packages into Jupyter
Meta: Q_Id 35,363,795 · A_Id 35,412,636 · created 2016-02-12T13:25:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 2 · views 1,319 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,windows,python-2.7
Question: I've been working in a Jupyter IPython notebook (using Python 2.7) and haven't had any issues before this importing and installing packages. Most of my packages were installed via Anaconda. Now I'm randomly having problems importing packages that I've always been able to import. An example is below. Please help; I'm pretty new to Python, so I'm completely stuck on what the problem is and how to fix it. import pandas as pd raises: ImportError Traceback (most recent call last) ----> 1 import pandas as pd C:\Users\IBM_ADMIN\Anaconda2\lib\site-packages\pandas\__init__.py ... "pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first.".format(module)) ... ImportError: C extension: No module named numpy not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first.
Answer: Running this solved the problem: pip install scipy-0.16.1-cp27-none-win_amd64.whl. After doing this, all other packages could be re-installed and successfully imported.
Title: Making Dataframe Analysis faster
Meta: Q_Id 35,380,802 · A_Id 35,383,780 · created 2016-02-13T13:56:00.000 · answer score 0.066568 · not accepted · users score 1 · Q score 0 · answers 3 · views 69 · available count 2 · topics: Data Science and Machine Learning
Tags: python,pandas
Question: I am using three dataframes to analyze sequential numeric data, basically numeric data captured in time. There are 8 columns and 360k entries. I created three identical dataframes: one holds the raw data, the second is a "scratch pad" for analysis, and the third contains the analyzed outcome. This runs really slowly. I'm wondering if there are ways to make this analysis run faster. Would it be faster if, instead of three separate 8-column dataframes, I had one large 24-column dataframe?
Answer: Use cProfile and line_profiler to figure out where the time is being spent. To get help from others, post your real code and your real profile results. Optimization is an empirical process; the little tips people have are often counterproductive.
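A minimal profiling sketch along the lines the answer suggests; analyze() is a hypothetical stand-in for the real dataframe analysis:

```python
import cProfile
import pstats

def analyze():
    # stand-in for the real three-dataframe analysis
    return sum(i * i for i in range(10**6))

# Profile the call, then print the ten most expensive functions.
cProfile.run("analyze()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)
```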
Title: Making Dataframe Analysis faster
Meta: Q_Id 35,380,802 · A_Id 35,383,875 · created 2016-02-13T13:56:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 3 · views 69 · available count 2 · topics: Data Science and Machine Learning
Tags: python,pandas
Question: same as above (Q_Id 35,380,802).
Answer: Most probably it doesn't matter, because pandas stores each column separately anyway (a DataFrame is a collection of Series). But you might get better data locality (all data next to each other in memory) by using a single frame, so it's worth trying. Check this empirically.
Title: dumping several objects into the same file
Meta: Q_Id 35,382,336 · A_Id 35,384,823 · created 2016-02-13T16:29:00.000 · answer score 1.2 · accepted · users score 0 · Q score 0 · answers 1 · views 498 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,numpy,scipy,pickle
Question: Let's say I have a dictionary of about 100k pairs of strings, and a numpy matrix of shape (100k, 500). I would like to save them to disk in the same file. What I'm doing right now is using cPickle to dump the dictionary and scipy.io.savemat to dump the matrix; this way, the dump/load is very fast. The problem is that since I use different methods I obtain two files, and I would like to have just one file containing my two objects. How can I do this? I could cPickle them both into the same file, but cPickle is incredibly slow on big arrays.
Answer: You could use dill. dill.dump accesses and uses the dump method from numpy to store an array or matrix object, so it's stored the same way it would be if you did it directly from the method on the numpy object; you'd just dill.dump the dictionary. dill also has the ability to store pickles in a compressed format, but that's slower. As mentioned in the comments, there's also joblib, which can do the same as dill; basically, joblib leverages cloudpickle (another serializer), or can also use dill, to do the serialization. If you have a huge dictionary and don't need all of the contents at once, a better option might be klepto, which can use advanced serialization methods (from dill) to store a dict across several files on disk (or a database), with a proxy dict in memory that lets you fetch only the entries you need. All of these packages give you a fast unified dump for standard Python and also for numpy objects.
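A sketch of the single-file dump the answer describes, assuming dill is installed; the file name and sizes are hypothetical:

```python
import dill
import numpy as np

payload = {
    "labels": {"key_%d" % i: "value_%d" % i for i in range(100)},  # string pairs
    "matrix": np.random.rand(100, 500),                            # numpy matrix
}

# One file holds both objects; dill stores the array via numpy's own dump logic.
with open("payload.pkl", "wb") as f:
    dill.dump(payload, f)

with open("payload.pkl", "rb") as f:
    restored = dill.load(f)
print(restored["matrix"].shape)  # (100, 500)
```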
Title: pygal data series of different lengths
Meta: Q_Id 35,404,323 · A_Id 35,423,410 · created 2016-02-15T08:12:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 1 · views 110 · available count 1 · topics: Data Science and Machine Learning
Tags: python,pygal
Question: How can I plot multiple data series with different numbers of elements and have them fill the graph along the x axis? At the moment, if I graph a = [1,2,3,4,5] and b = [1,2,3], the b line only covers half the graph. Is this possible, or do I need to somehow combine the graphs after plotting/rendering them?
Answer: Never mind, I got it sorted: I was using the wrong graph type.
Title: Python logger.debug converting arguments to string without logging
Meta: Q_Id 35,411,265 · A_Id 35,420,774 · created 2016-02-15T13:56:00.000 · answer score 0 · not accepted · users score 0 · Q score 2 · answers 2 · views 2,623 · available count 1 · topics: Python Basics and Environment; Other; Data Science and Machine Learning
Tags: python,logging,optimization
Question: I'm optimizing a Python program that performs some sort of calculation. It uses NumPy quite extensively. The code is sprinkled with logger.debug calls (logger is the standard Python log object). When I run cProfile I see that NumPy's function that converts an array to string takes 50% of the execution time. This is surprising, since there is no handler that outputs messages at the DEBUG level, only INFO and above. Why is the logger converting its arguments to string even though nobody is going to use this string? Is there a way to prevent it (other than not performing the logger calls)?
Answer: Use logger.debug('%s', myArray) rather than logger.debug(myArray). The first argument is expected to be a format string (as all the documentation and examples show) and is not assumed to be computationally expensive; as @dwanderson points out, the formatting only actually happens if the logger is enabled for the level. Note that you're not forced to pass a format string as the first parameter: you can pass a proxy object that returns the string when str() is called on it (this is also documented). In your case, passing the array itself as the first argument is probably what triggers the array-to-string conversion. If you use logging the way it's documented, the problem you describe here shouldn't occur.
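A minimal sketch of the lazy formatting the answer describes, plus the isEnabledFor guard for arguments that are expensive even to build:

```python
import logging

logging.basicConfig(level=logging.INFO)   # DEBUG messages are filtered out
logger = logging.getLogger(__name__)

big = list(range(10**6))

# Lazy: '%s' is only expanded to str(big) if a handler accepts DEBUG.
logger.debug("array: %s", big)

# Guard for cases where even computing the argument is expensive.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("sum: %s", sum(big))
```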
Title: can I make linear regression predict like a classification?
Meta: Q_Id 35,436,599 · A_Id 35,438,322 · created 2016-02-16T15:32:00.000 · answer score 0.066568 · not accepted · users score 1 · Q score 1 · answers 3 · views 2,479 · available count 1 · topics: Data Science and Machine Learning
Tags: python,scikit-learn,linear-regression
Question: I trained a linear regression model (using sklearn with Python 3); my training set had 94 features and the class of each sample was 0 or 1. I then checked my linear regression model on the test set and it gave me these results: 1. [0.04988957], real value 0; 2. [0.00740425], real value 0; 3. [0.01907946], real value 0; 4. [0.07518938], real value 0; 5. [0.15202335], real value 0; 6. [0.04531345], real value 0; 7. [0.13394644], real value 0; 8. [0.16460608], real value 1; 9. [0.14846777], real value 0; 10. [0.04979875], real value 0. As you can see, row 8 got the highest value, but what I want is for my_model.predict(testData) to give only 0 or 1 as results. How can I do that? Does the model have a threshold or auto cutoff that I can use?
Answer: There is a linear classifier, sklearn.linear_model.RidgeClassifier(alpha=0.), that you can use for this. Setting the ridge penalty to 0 makes it do exactly the linear regression you want, and it sets the threshold to divide between classes.
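A minimal sketch of the answer's suggestion; the synthetic data is a stand-in for the 94-feature training set:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier

X, y = make_classification(n_samples=500, n_features=94, random_state=0)

# alpha=0 removes the ridge penalty (plain least squares); the classifier
# wrapper adds the 0/1 decision threshold on top of the linear fit.
clf = RidgeClassifier(alpha=0.0).fit(X, y)
print(clf.predict(X[:10]))  # hard 0/1 labels, not continuous scores
```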
Title: pandas to_csv on dataframe with a column that has dates below 1900
Meta: Q_Id 35,441,310 · A_Id 35,441,725 · created 2016-02-16T19:22:00.000 · answer score 1.2 · accepted · users score 1 · Q score 0 · answers 1 · views 136 · available count 1 · topics: Data Science and Machine Learning
Tags: python-2.7,csv,pandas
Question: I need to save to csv, but I have date values in the series that are below 1900 (e.g. Mar 1 1899), which is preventing this from happening. I get ValueError: year=1899 is before 1900; the datetime strftime() methods require year >= 1900. It seems a little absurd for a function like this to work only for dates after 1900, so I think there must be something I am missing. What is the right way of getting a csv when you're working with a dataframe that has a column with dates before the 1900s?
Answer: Upgrading pandas from 0.15.1 to 0.17.1 resolved this issue.
Title: Creating a 3d numpy array matrix using append method
Meta: Q_Id 35,464,138 · A_Id 35,464,658 · created 2016-02-17T14:21:00.000 · answer score 0.099668 · not accepted · users score 1 · Q score 0 · answers 2 · views 1,796 · available count 1 · topics: Data Science and Machine Learning
Tags: python,numpy,memory,arrays
Question: Is there a way to create a 3d numpy array by appending 2d numpy arrays? What I currently do is append my 2d numpy arrays to an initialized list of predetermined 2d numpy arrays, i.e., List=[np.zeros((600,600))]. After appending all my 2d numpy arrays I use numpy.dstack to create the 3d numpy array. I think this is not a very efficient method. Any suggestions?
Answer: By definition you cannot append anything to an array, because when the array is declared in memory it has to reserve as much space as it is going to need. What you can do is either declare an array with the known geometry and initial values and then rewrite the new values row by row, keeping a counter of the rows "appended", or double the size of the array when you run out of space.
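A sketch of the two options the answer names, preallocating with a known geometry, alongside the stack-once pattern from the question; sizes are illustrative:

```python
import numpy as np

layers = [np.zeros((600, 600)) for _ in range(10)]  # the 2-D arrays to combine

# Option 1: preallocate when the final count is known, fill layer by layer.
cube = np.empty((len(layers), 600, 600))
for i, layer in enumerate(layers):
    cube[i] = layer

# Option 2: collect in a list and stack once at the end (one allocation).
cube2 = np.stack(layers)
print(cube.shape, cube2.shape)  # (10, 600, 600) (10, 600, 600)
```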
Title: OpenCV for Python 3.5.1
Meta: Q_Id 35,466,429 · A_Id 44,714,952 · created 2016-02-17T19:53:00.000 · answer score 1 · not accepted · users score 7 · Q score 10 · answers 5 · views 79,914 · available count 2 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,python-3.x,opencv
Question: I have searched quite a bit regarding this and I've tried some of these methods myself, but I'm unable to work with OpenCV. So can anyone help me install OpenCV for Python 3.5.1? I'm using Anaconda along with PyCharm on Windows. Or is this not possible, and do I have to use Python 2.7? Thanks in advance.
Answer: For anyone who would like to install OpenCV on Python 3.5.1, use the library called opencv-contrib-python. It works for Python 3.5.1.
Title: OpenCV for Python 3.5.1
Meta: Q_Id 35,466,429 · A_Id 63,226,795 · created 2016-02-17T19:53:00.000 · answer score 0.039979 · not accepted · users score 1 · Q score 10 · answers 5 · views 79,914 · available count 2 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,python-3.x,opencv
Question: same as above (Q_Id 35,466,429).
Answer: For OS Windows 10 and Python versions 3.5.1 and 3.6, this worked for me: pip install opencv-contrib-python
Title: Can we predict time-series single-dimensional data using H2O?
Meta: Q_Id 35,472,785 · A_Id 41,201,223 · created 2016-02-18T04:23:00.000 · answer score 0.066568 · not accepted · users score 1 · Q score 1 · answers 3 · views 2,186 · available count 1 · topics: Data Science and Machine Learning
Tags: r,python-2.7,machine-learning,prediction,h2o
Question: We have hourly time series data with 2 columns: a timestamp and an error rate. We used an H2O deep learning model to learn and predict the future error rate, but it looks like it requires at least 2 features (besides the timestamp) for creating the model. Is there any way H2O can learn this type of data (time, value), with only one feature, and predict the value at a given future time?
Answer: I have tried many of the default methods inside H2O with time series data. If you treat the system as a state machine where the state variables are a series of lagged prior states, it's possible, but not entirely effective, as the prior states don't maintain their causal order. One way to alleviate this is to assign weights to each lagged state set based on time passed, similar to how an EMA gives precedence to more recent data. If you want to see how easy or effective DL/ML can be for a non-linear time series model, I would start with an easy problem to validate that the DL approach gives any improvement over a simple 1-period ARIMA/GARCH-type process. I have used this technique with varying success. What I have had success with is taking well-known non-linear time series models and improving their predictive qualities with additional factors, using the handcrafted non-linear model as an input to the DL method. It seems that certain qualities of the parameter space that I haven't manually worked out are able to supplement a decent foundation. The real question at that point is that you have now introduced immense complexity that isn't entirely understood. Is that complexity warranted when the nonlinear model already encapsulates about 95% of the information between the two stages?
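A sketch of the lagged-state framing the answer describes: turning the single (time, value) series into a supervised table with pandas; column names and values are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"error_rate": [0.10, 0.30, 0.20, 0.50, 0.40, 0.35]})

# Each row gains the previous k values as features; the current value
# becomes the target of an ordinary supervised model.
for lag in (1, 2, 3):
    df["lag_%d" % lag] = df["error_rate"].shift(lag)

supervised = df.dropna()  # rows usable as (features, target) pairs
print(supervised)
```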
Title: Removing decimals on export python
Meta: Q_Id 35,485,629 · A_Id 35,486,318 · created 2016-02-18T15:29:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 2 · views 79 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,pandas
Question: I'm using pandas to read in a list of names and phone numbers, clean that list, then export it. When I export the list, all of the phone numbers have '.0' tacked on to the end. I tried two solutions: A, round(); B, converting to integer then converting to text (which has worked in the past). For some reason, when I tried A the decimal still comes out when I export to a text file, and when I tried B I got an unexpected negative ten-digit number. Any ideas about what's happening here and/or how to fix it? Thanks!
Answer: Figured it out: specified the data type on import with dtype = {"phone": str, "other_phone": str}.
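A minimal sketch of the fix; the column names come from the answer, the csv paths are hypothetical:

```python
import pandas as pd

# Reading the phone columns as strings keeps pandas from casting them to
# float (that cast is what produces the trailing '.0' on export).
df = pd.read_csv("contacts.csv", dtype={"phone": str, "other_phone": str})
df.to_csv("contacts_clean.csv", index=False)
```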
Title: How to update an SVM model with new data
Meta: Q_Id 35,492,556 · A_Id 35,492,991 · created 2016-02-18T21:14:00.000 · answer score 1.2 · accepted · users score 12 · Q score 7 · answers 1 · views 4,307 · available count 1 · topics: Data Science and Machine Learning
Tags: python,numpy,machine-learning,computer-vision,scikit-learn
Question: I have two data sets of different sizes: 1) data set 1 is high-dimensional with 4500 samples (sketches); 2) data set 2 is low-dimensional with 1000 samples (real data). I assume that both data sets have the same distribution. I want to train a non-linear SVM model using sklearn on the first data set (as pre-training), and afterwards I want to update the model on part of the second data set (to fit the model). How can I develop this kind of update with sklearn? How can I update an SVM model?
Answer: In sklearn you can do this only for a linear kernel, using SGDClassifier (with an appropriate selection of loss/penalty terms: loss should be hinge, and penalty L2). Incremental learning is supported through the partial_fit method, and it is implemented for neither SVC nor LinearSVC. Unfortunately, in practice, fitting an SVM incrementally on such small datasets is rather useless. An SVM has an easily obtainable global solution, so you do not need pretraining of any form; in fact it should not matter at all, if you are thinking about pretraining in the neural network sense. If correctly implemented, the SVM should completely forget the previous dataset. Why not learn on the whole data in one pass? This is what an SVM is supposed to do, unless you are working with some non-convex modification of the SVM (then pretraining makes sense). To sum up: from a theoretical and practical point of view there is no point in pretraining an SVM. You can either learn only on the second dataset, or on both at the same time. Pretraining is only reasonable for methods which suffer from local minima (or hard convergence of any kind) and thus need to start near the actual solution to find a reasonable model (like neural networks); the SVM is not one of them. You can use incremental fitting (although in sklearn it is very limited) for efficiency reasons, but for such a small dataset you will be fine fitting the whole dataset at once.
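A minimal sketch of the SGDClassifier route the answer names; the synthetic data and the shared feature count of 50 are assumptions (partial_fit requires both sets to have the same number of features):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(4500, 50)), rng.integers(0, 2, 4500)  # "sketches"
X2, y2 = rng.normal(size=(1000, 50)), rng.integers(0, 2, 1000)  # "real data"

# hinge loss + L2 penalty makes SGDClassifier a linear SVM that supports
# incremental updates via partial_fit.
clf = SGDClassifier(loss="hinge", penalty="l2")
clf.partial_fit(X1, y1, classes=np.array([0, 1]))  # "pre-train"
clf.partial_fit(X2, y2)                            # update on new data
print(clf.score(X2, y2))
```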
Title: Installation of nltk and scikit-learn
Meta: Q_Id 35,522,767 · A_Id 37,745,749 · created 2016-02-20T11:45:00.000 · answer score 0.197375 · not accepted · users score 1 · Q score 0 · answers 1 · views 474 · available count 1 · topics: Data Science and Machine Learning
Tags: python,numpy,scikit-learn
Question: I have been trying to install and use scikit-learn and nltk. However, I get the following error while importing anything: Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.7/site-packages/sklearn/init.py", line 57, in from .base import clone File "/usr/local/lib/python2.7/site-packages/sklearn/base.py", line 11, in from .utils.fixes import signature File "/usr/local/lib/python2.7/site-packages/sklearn/utils/init.py", line 10, in from .murmurhash import murmurhash3_32 File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029) ValueError: numpy.dtype has the wrong size, try recompiling. I did a pip uninstall numpy followed by a pip install numpy, and also a pip uninstall scikit-learn followed by a reinstall, but the error persists.
Answer: I hit the same problem today and solved it. The cause was that I had installed NumPy manually but used pip to install the other packages. The fix: find the old version of NumPy (you can import numpy and print its path), delete that folder, then use pip to install it again.
Title: In pandas, how to get 2nd mode
Meta: Q_Id 35,531,367 · A_Id 35,531,393 · created 2016-02-21T01:51:00.000 · answer score 0.049958 · not accepted · users score 1 · Q score 0 · answers 4 · views 2,825 · available count 1 · topics: Data Science and Machine Learning
Tags: python,pandas
Question: I'm generating a summary report from a data set. I used .describe() to do the heavy work, but it doesn't generate everything I need, i.e. the second most common thing in the data set. I noticed that .mode() returns the most common value; is there an easy way to get the second most common?
Answer: Try this method: create a duplicate data set; use .mode() to find the most common value; pop all items with that value from the set; run .mode() again on the modified data set.
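A minimal sketch of the answer's drop-and-re-mode method:

```python
import pandas as pd

s = pd.Series([1, 1, 1, 2, 2, 2, 2, 3, 3, 4])

first_mode = s.mode()[0]                    # 2 (four occurrences)
second_mode = s[s != first_mode].mode()[0]  # drop the 2s, take mode again -> 1
print(first_mode, second_mode)              # 2 1
```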
Title: Include django rest pandas dataframe Index field in json response
Meta: Q_Id 35,534,039 · A_Id 38,318,621 · created 2016-02-21T08:39:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 1 · views 363 · available count 1 · topics: Web Development; Data Science and Machine Learning
Tags: python,django,django-rest-framework
Question: Background: I am using django-rest-pandas for serving json and xls. Observation: when I hit the url with format=xls, I get complete data in the downloaded file, but for format=json, the index field of the dataframe is not part of the records. Question: how can I make django-rest-pandas include the dataframe's index field in the json response? Note that the index field is present as part of the serializer (extending serializer.ModelSerializer).
Answer: One simple way would be to just reset the index in the transform_dataframe method: df = df.reset_index(). This adds a new index and turns your old index into a column, included in the output.
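A minimal sketch of what reset_index does to record-oriented JSON output, shown with plain pandas rather than django-rest-pandas:

```python
import pandas as pd

df = pd.DataFrame({"value": [10, 20]},
                  index=pd.Index(["a", "b"], name="key"))

print(df.to_json(orient="records"))
# [{"value":10},{"value":20}]  <- the index is lost

print(df.reset_index().to_json(orient="records"))
# [{"key":"a","value":10},{"key":"b","value":20}]  <- index kept as a column
```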
Title: how to convert a RGB image to its YIQ components in python, and then apply dct transform to compress it?
Meta: Q_Id 35,539,077 · A_Id 41,886,452 · created 2016-02-21T17:03:00.000 · answer score 0.197375 · not accepted · users score 1 · Q score 1 · answers 1 · views 1,415 · available count 1 · topics: Data Science and Machine Learning
Tags: python,matplotlib,dct
Question: In MATLAB we have the rgb2ntsc() function to get the YIQ components of an RGB image. Is there a similar function available in Python (in the numpy, matplotlib or scipy libraries)? Also, to apply the discrete cosine transform (to compress the image) we can use dct2() in MATLAB; is there a similar function in Python?
Answer: Are you willing to use a library outside of numpy, scipy, and matplotlib? If so, you can use skimage.color.rgb2yiq() from the scikit-image library.
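A sketch combining the answer's rgb2yiq with a 2-D DCT built from scipy's 1-D dct (the standard separable equivalent of MATLAB's dct2); assumes scikit-image and scipy are installed:

```python
from scipy.fftpack import dct
from skimage import data
from skimage.color import rgb2yiq

rgb = data.astronaut()      # sample RGB image shipped with scikit-image
yiq = rgb2yiq(rgb)          # shape (H, W, 3): Y, I, Q channels

# dct2 equivalent: 1-D DCT over rows, then over columns.
y = yiq[:, :, 0]
y_dct2 = dct(dct(y, axis=0, norm="ortho"), axis=1, norm="ortho")
print(y_dct2.shape)
```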
Title: Tensorflow Tensorboard default port
Meta: Q_Id 35,551,326 · A_Id 43,568,782 · created 2016-02-22T10:31:00.000 · answer score 1 · not accepted · users score 27 · Q score 63 · answers 3 · views 72,323 · available count 1 · topics: Data Science and Machine Learning
Tags: python,tensorflow,tensorboard
Question: Is there a way to change the default port (6006) of TensorBoard so we can open multiple TensorBoards? Maybe an option like --port="8008"?
Answer: You should provide a port flag (--port=6007). But here is how you can find it and other flags without any documentation: almost all command line tools have a -h or --help flag which shows all the flags the tool allows. By running it you will see information about the port flag, see that --logdir also accepts a comma-separated list of log directories, and see that you can inspect separate event files and tags with the --event_file and --tag flags.
Title: Ignoring NaN when interpolating grid in Python
Meta: Q_Id 35,552,667 · A_Id 35,576,424 · created 2016-02-22T11:39:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 2 · views 776 · available count 2 · topics: Data Science and Machine Learning
Tags: python,scipy,interpolation,nan
Question: I have a gridded velocity field that I want to interpolate in Python. Currently I'm using scipy.interpolate's RectBivariateSpline to do this, but I want to be able to define the edges of my field by setting certain values in the grid to NaN. However, when I do this it messes up the interpolation of the entire grid, effectively making it NaN everywhere. Apparently this is an error in the scipy fitpack, so what would be the best way to work around it? I want to keep the NaNs in the grid to handle edges and out-of-bounds later on, but I don't want them to affect the interpolation in the rest of the grid.
Answer: Spline fitting/interpolation is global, so it's likely that even a single NaN is messing up the whole mesh.
Title: Ignoring NaN when interpolating grid in Python
Meta: Q_Id 35,552,667 · A_Id 35,552,764 · created 2016-02-22T11:39:00.000 · answer score 0.099668 · not accepted · users score 1 · Q score 0 · answers 2 · views 776 · available count 2 · topics: Data Science and Machine Learning
Tags: python,scipy,interpolation,nan
Question: same as above (Q_Id 35,552,667).
Answer: All languages that implement floating point correctly (which includes Python) allow you to test for a NaN by comparing a number with itself: x is not equal to x if, and only if, x is NaN. You can use that to filter your data set accordingly.
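A minimal sketch of the self-comparison test the answer describes, used to mask out NaNs before fitting:

```python
import numpy as np

grid = np.array([[1.0, 2.0],
                 [np.nan, 4.0]])

# NaN is the only float for which x != x, so x == x marks the valid cells.
mask = grid == grid
print(mask)        # [[ True  True] [False  True]]
print(grid[mask])  # [1. 2. 4.] - the finite values, flattened
```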
Title: getting error for importing numpy at Python 3.5.1
Meta: Q_Id 35,577,179 · A_Id 48,353,933 · created 2016-02-23T12:06:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 1 · views 429 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,numpy
Question: I installed numpy-1.11.0b3 with pip install "numpy-1.11.0b3+mkl-cp35-cp35m-win32.whl". The installation succeeded, but when I write "import numpy" in the Python shell (3.5.1) I get: ImportError: No module named 'numpy'. Can anyone suggest what's wrong? Regards, Arpan Ghose
Answer: It may be that you installed pip for a lower version of Python. To check, first look at your default Python version ($ python), then check which Python your pip is linked to ($ pip --version) and see whether the two versions match. If they don't, upgrade pip ($ pip install -U pip) and then install numpy with sudo pip install numpy. Hope this helps!
Title: Fastest approach for geopandas (reading and spatialJoin)
Meta: Q_Id 35,581,528 · A_Id 35,583,196 · created 2016-02-23T15:28:00.000 · answer score 0.066568 · not accepted · users score 1 · Q score 2 · answers 3 · views 939 · available count 2 · topics: Database and SQL; Data Science and Machine Learning
Tags: python,multithreading,pandas,geopandas
Question: I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time. Now I need to make a spatial join over those points to attach a zip code to each one, and I really want to optimise this process. So I wonder: is there any relatively easy way to parallelize those computations?
Answer: I am assuming you have already implemented GeoPandas and are still finding difficulties? You can improve this by further hashing your coords data, similar to how Google hashes their search data. Some databases already provide support for these types of operations (e.g. MongoDB). Imagine if you took the first (left) digit of your coords and put each set of corresponding data into a separate SQLite file: each digit becomes a hash pointing to the correct file to look in. Your lookup time would then improve by roughly a factor of 20 (range(-9, 10)), assuming the hash lookup takes minimal time in comparison.
Title: Fastest approach for geopandas (reading and spatialJoin)
Meta: Q_Id 35,581,528 · A_Id 35,786,998 · created 2016-02-23T15:28:00.000 · answer score 1.2 · accepted · users score 1 · Q score 2 · answers 3 · views 939 · available count 2 · topics: Database and SQL; Data Science and Machine Learning
Tags: python,multithreading,pandas,geopandas
Question: same as above (Q_Id 35,581,528).
Answer: As it turned out, the most convenient solution in my case is to use the pandas.read_sql function with a specific chunksize parameter. In this case it returns a generator of data chunks, which can be effectively fed to mp.Pool().map() along with the job; in this (my) case the job consists of 1) reading the geoboundaries, 2) spatially joining the chunk, and 3) writing the chunk to the database.
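A minimal sketch of the chunked read plus worker pool the accepted answer describes; the database name, table name, and the body of process_chunk are hypothetical stand-ins:

```python
import multiprocessing as mp
import sqlite3

import pandas as pd

def process_chunk(chunk):
    # stand-in for: read geoboundaries, spatial-join the chunk, write results
    return len(chunk)

if __name__ == "__main__":
    conn = sqlite3.connect("points.db")
    # chunksize makes read_sql yield DataFrames lazily instead of one big frame.
    chunks = pd.read_sql("SELECT * FROM points", conn, chunksize=100_000)
    with mp.Pool() as pool:
        print(sum(pool.map(process_chunk, chunks)))
```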
Title: Bokeh: add a % sign to axis ticks without changing numeric value
Meta: Q_Id 35,585,580 · A_Id 35,589,047 · created 2016-02-23T18:41:00.000 · answer score 0.197375 · not accepted · users score 1 · Q score 0 · answers 1 · views 1,425 · available count 1 · topics: Data Science and Machine Learning
Tags: python,formatting,bokeh
Question: I am creating a plot in Bokeh with percentages on the y-axis. The data is represented as a percent (e.g. '99.0') as opposed to a likelihood (e.g. '0.990'). I want to add a '%' sign after each number on the axis, but when using NumeralTickFormatter(format='0 %') my values are multiplied by 100 because it expects a likelihood. I don't want to change the data representation to a likelihood, so is there some other way I can get the '%' sign to appear on the axis ticks?
Answer: To elaborate on what bigreddot proposed: PrintfTickFormatter(format='%0.0f %%') worked. One thing to note is the %% needed to properly escape the %.
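A minimal sketch of the PrintfTickFormatter fix; the plotted values are hypothetical:

```python
from bokeh.models import PrintfTickFormatter
from bokeh.plotting import figure, show

p = figure()
p.line([1, 2, 3], [97.5, 98.0, 99.0])

# %% escapes the percent sign; tick values themselves are printed unchanged.
p.yaxis.formatter = PrintfTickFormatter(format="%0.0f %%")
show(p)
```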
Title: Split numpy array into similar array based on its content
Meta: Q_Id 35,604,173 · A_Id 35,604,894 · created 2016-02-24T13:57:00.000 · answer score 0.099668 · not accepted · users score 1 · Q score 4 · answers 2 · views 279 · available count 1 · topics: Data Science and Machine Learning
Tags: python,arrays,numpy,curves,curvesmoothing
Question: I have a 2D numpy array that represents the coordinates (x, y) of a curve, and I want to split that curve into parts of the same length, obtaining the coordinates of the division points. The easiest example is a line defined by two points, for example [[0,0],[1,1]]: if I want to split it into two parts the result would be [0.5,0.5], and for three parts [[0.33,0.33],[0.67,0.67]], and so on. How can I do that for a large array where the data is less simple? I'm trying to split the array by its length, but the results aren't good.
Answer: Take the length of the line along each axis, then split as you want. Example: point 1 is [0,0] and point 2 is [1,1]; the length of the line on the X axis is 1 - 0 = 1, and the same on the Y axis. Now, if you want to split it in two, just divide those lengths and build a new array: [0,0], [.5,.5], [1,1].
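The per-axis split above only works for a straight segment. A more general sketch for an arbitrary polyline interpolates over cumulative arc length; this is a swapped-in technique, not the answer's method:

```python
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 1.0]])  # the question's two-point line
n_parts = 2

# Cumulative arc length at each vertex of the polyline.
seg = np.hypot(*np.diff(pts, axis=0).T)
s = np.concatenate([[0.0], np.cumsum(seg)])

# Sample x and y at evenly spaced arc lengths (interior points only).
target = np.linspace(0, s[-1], n_parts + 1)[1:-1]
division_points = np.column_stack(
    [np.interp(target, s, pts[:, 0]), np.interp(target, s, pts[:, 1])]
)
print(division_points)  # [[0.5 0.5]]
```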
Title: Hide wrong values of a graph
Meta: Q_Id 35,628,756 · A_Id 35,629,019 · created 2016-02-25T13:32:00.000 · answer score 0.099668 · not accepted · users score 1 · Q score 0 · answers 2 · views 21 · available count 2 · topics: Data Science and Machine Learning
Tags: python-3.x,matplotlib
Question: I am building graphics using matplotlib, and I sometimes have wrong values in my csv files. They create spikes in my graph that I would like to suppress. Also, I sometimes have lots of zeros (when the sensor is disconnected), but I would prefer the graph to show blank spaces rather than wrong zeros that could be interpreted as real values.
Answer: Forgive me, I'm not familiar with matplotlib, but I'm presuming that you're reading the csv file directly into matplotlib. If so, is there an option to read the csv file into your app as a list of ints or as a string, and then do the data validation before passing it to the library? Apologies if my idea is not applicable.
Title: Hide wrong values of a graph
Meta: Q_Id 35,628,756 · A_Id 35,629,514 · created 2016-02-25T13:32:00.000 · answer score 1.2 · accepted · users score 0 · Q score 0 · answers 2 · views 21 · available count 2 · topics: Data Science and Machine Learning
Tags: python-3.x,matplotlib
Question: same as above (Q_Id 35,628,756).
Answer: I found a way that works: I used xlim to set my max and min x values, and then set all the values I didn't want to NaN!
Title: Is it better to store Pandas Data Frames in a dictionary or in a Panel?
Meta: Q_Id 35,635,870 · A_Id 35,638,383 · created 2016-02-25T18:53:00.000 · answer score 0.197375 · not accepted · users score 1 · Q score 5 · answers 1 · views 1,554 · available count 1 · topics: Data Science and Machine Learning
Tags: python,dictionary,pandas,dataframe,panel
Question: I hope this doesn't sound like an open question for discussion; I am going to give some details of my specific case. I am new to pandas and I need to store several 2D arrays where columns represent frequencies and rows represent directions (2D wave spectra, if you are curious). Each array represents a specific time. I am storing these arrays as pandas DataFrames, but to keep them in a single object I thought of two options: 1) storing the DataFrames in a dictionary where the key is the time stamp; 2) storing the DataFrames in a pandas Panel where the item is the time stamp. The first option seems simple and has the flexibility to store arrays with different sizes, indexes and column names. The second option seems better for processing the data, since Panels have specific methods and can easily be saved or exported (e.g. to csv or pickle). Which of the two options is better suited in terms of speed, memory use, flexibility and data analysis? Regards.
Answer: I don't think you need a Panel; I recommend a nested dataframe approach instead.
Title: Rounding up a jitter in psychopy
Meta: Q_Id 35,637,501 · A_Id 35,664,986 · created 2016-02-25T20:19:00.000 · answer score 1.2 · accepted · users score 0 · Q score 0 · answers 2 · views 188 · available count 1 · topics: Data Science and Machine Learning
Tags: python,psychopy
Question: I'm building an fMRI paradigm in which a stimulus disappears when the user presses a button (up to 4 s), then there is a jitter (0-12 s), then another stimulus presentation. I'm locking the stimulus presentation to the 1 s TR of the scanner, so I'm curious how I can round the jitter time up to the nearest second. The task is initialized as: stimulus 1 (≤4 s) -- jitter (e.g. 6 s) -- stimulus 2. But if the user responds to stimulus 1 at 1.3 seconds, then the task becomes: stimulus 1 (1.3 s) -- jitter (6.7 s) -- stimulus 2. Does that make sense? Thanks for the help!
Answer: difference = 1.0 - (RT - int(RT))
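A sketch of how the one-line answer pads the jitter so the next stimulus stays on a whole-second TR boundary; the numbers come from the question's example:

```python
import math

RT = 1.3      # response time to stimulus 1, in seconds
jitter = 6.0  # planned jitter, in seconds

pad = 1.0 - (RT - int(RT))     # the answer's formula: ~0.7 s
print(RT + jitter + pad)       # 8.0 -> stimulus 2 lands on a TR tick

# Equivalent view: round the response time up to the next whole second.
print(math.ceil(RT) + jitter)  # 8.0
```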
Title: Force classifier to select a fixed number of targets
Meta: Q_Id 35,637,968 · A_Id 35,638,661 · created 2016-02-25T20:46:00.000 · answer score 1.2 · accepted · users score 1 · Q score 0 · answers 1 · views 33 · available count 1 · topics: Data Science and Machine Learning
Tags: python,machine-learning,classification,random-forest
Question: I have a dataset of events where each event has 1000 possible items, of which only 100 are correct. How do I force my classifier to select exactly 100 per event? After I run it through my training model (with 18 features, and always 100 targets per event flagged as 1), the classifier selects anywhere between 60 and 80 items instead of 100. Giving each event an event number doesn't help. I'm using Python sklearn gradient boosting and random forest methods.
Answer: Just do it yourself. Each classifier in scikit-learn gives you access to decision_function or predict_proba, both of which underlie the predict operation (predict is just the argmax of these). Thus, just select the 100 items with the highest support.
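A minimal sketch of taking the top 100 by predicted probability instead of the default 0.5 threshold; the synthetic data stands in for one event's 1000 candidate items:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=18, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

support = clf.predict_proba(X)[:, 1]  # support for class 1 per item
top100 = np.argsort(support)[-100:]   # indices of the 100 strongest items
print(top100.shape)                   # (100,)
```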
Title: Statistic to consider while analysing a value with previous set of values
Meta: Q_Id 35,645,776 · A_Id 35,646,195 · created 2016-02-26T07:20:00.000 · answer score 1.2 · accepted · users score 1 · Q score 0 · answers 1 · views 37 · available count 1 · topics: Data Science and Machine Learning
Tags: python,statistics,mathematical-optimization
Question: I am trying to find a good statistical method to compare a given value with an existing set of values. Currently I take the mean of the existing numbers and compare the given value to it; if the value is off by 50% of the mean, I say it is off the flow. I am using Python for all calculations. Is there a more efficient method? Example: 1, 4, 7, 0, 0, 0 are the existing values; their mean is 2. If the given value is 10, I would say it is off the mark. Can this be done better?
Answer: As I understand it, you want some measure of central tendency. There are three: mean, median, and mode; which one to use depends on your goals and priorities. The mean is very popular and understandable, and it has a lot of useful statistical properties; however, it is sensitive to outliers. The mode and median are not (as) influenced by outliers, but they have fewer statistical uses. Further, the median and mean you calculate may not actually be values in your data set, whereas the mode will be. Which of these considerations matter for you? Even after you pick a measure of central tendency, how will you determine when something is "too far" out of the set? In your question you do it as a plain percentage, but this might not be the best way. For most problems I would use the mean as the measure of central tendency and the standard deviation as the statistic for deciding whether a figure is "off the mark", but something else might work better for you.
Title: Find all roots of a nonlinear function
Meta: Q_Id 35,648,212 · A_Id 35,835,011 · created 2016-02-26T09:42:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 2 · views 866 · available count 1 · topics: Data Science and Machine Learning
Tags: python,numerical-methods,nonlinear-functions,nonlinear-optimization
Question: Assume I have a smooth, nonlinear function f: R^n -> R with a (known) maximum number of roots N. How can I find the roots efficiently? Right now I calculate the function on a grid over a preselected area, refine the grid where the function is below a predefined threshold, and repeat, but this does not seem very efficient: I have noticed that it is difficult to select the area correctly beforehand and to define the threshold accordingly.
Answer: You can square the function and use global optimization software to locate all the minima inside a domain, then pick those with a zero value. Stochastic multistart-based global optimization methods with clustering are quite appropriate for this task.
Title: Python packages for Spark on datanodes
Meta: Q_Id 35,655,693 · A_Id 35,658,350 · created 2016-02-26T15:32:00.000 · answer score 0.379949 · not accepted · users score 2 · Q score 0 · answers 1 · views 144 · available count 1 · topics: Data Science and Machine Learning
Tags: python,numpy,apache-spark,pyspark
Question: We want to use Python 3.x with packages like NumPy, pandas, etc. on top of Spark. We know the Python distribution with these packages needs to be present/distributed on all the datanodes for Spark to use them. Instead of setting up this Python distro on all the datanodes, will putting it on a NAS mount to which all datanodes are connected work? Thanks.
Answer: Yes, putting the packages on a NAS mount to which all the datanodes are connected will work up to dozens, perhaps 100 nodes, if you have a good NAS. However, this solution will break down at scale as all the nodes try to import the files they need: the Python import mechanism uses a lot of os.stat calls to the file system, and this can cause bottlenecks when all the nodes are trying to load the same code.
Title: How to detect objects in a video opencv with Python?
Meta: Q_Id 35,661,419 · A_Id 35,663,902 · created 2016-02-26T20:46:00.000 · answer score 0.099668 · not accepted · users score 1 · Q score 0 · answers 2 · views 948 · available count 2 · topics: Data Science and Machine Learning
Tags: python,opencv,frames
Question: I have a video containing different objects such as squares, rectangles and triangles, and I need to detect and show only the square objects; triangles and rectangles should not be displayed. I am using background subtraction, and I am able to detect all three object types and create a bounding box around them, but I cannot figure out how to display only the square objects.
Answer: Are your objects filled or just outlines? In either case, the approach I would take is to detect the vertices, either from the maximum gradient or just from the bounding box (the vertices will lie on the bounding box). Once you have the vertices, you can say whether the object is a square or a rectangle just by finding the distances between consecutive vertices.
Title: How to detect objects in a video opencv with Python?
Meta: Q_Id 35,661,419 · A_Id 35,788,514 · created 2016-02-26T20:46:00.000 · answer score 1.2 · accepted · users score 3 · Q score 0 · answers 2 · views 948 · available count 2 · topics: Data Science and Machine Learning
Tags: python,opencv,frames
Question: same as above (Q_Id 35,661,419).
Answer: You can use the following algorithm: perform background subtraction, as you're doing currently; enclose the foreground in contours (using the findContours() then drawContours() functions); enclose the obtained contours in bounding boxes (using the boundingRect() function); if the area of a bounding box is approximately equal to that of the enclosed contour, then the shape is a square or rectangle, not a triangle (a large part of the box enclosing a triangle lies outside the triangle); and if the bounding box height is approximately equal to its width, then it is a square, not a rectangle (access height and width via Rect.height and Rect.width).
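A sketch of the accepted algorithm, assuming the OpenCV 4 findContours signature and a foreground mask from the background subtractor; the 0.9 and 1.1 thresholds are illustrative choices, not from the answer:

```python
import cv2

def draw_squares(frame, fgmask):
    contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Triangles fill roughly half their box; squares/rectangles fill most of it.
        if w == 0 or h == 0 or cv2.contourArea(c) < 0.9 * w * h:
            continue
        # Width ~ height separates squares from rectangles.
        if 0.9 <= w / float(h) <= 1.1:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame
```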
Title: python pandas ... alternatives to iterrows in pandas to get next rows value (NEW)
Meta: Q_Id 35,670,348 · A_Id 35,670,515 · created 2016-02-27T13:24:00.000 · answer score 0 · not accepted · users score 0 · Q score 1 · answers 2 · views 908 · available count 1 · topics: Data Science and Machine Learning
Tags: python,pandas,apply,next
Question: I have a df in pandas: df = pd.DataFrame(['AA', 'BB', 'CC'], columns=['value']). I want to iterate over the rows of df; for each row I want the row's value and the next row's value. The desired result: 0 1 AA BB / 1 2 BB CC. I have tried a pairwise() function with itertools: def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2,s3), ..."; a, b = tee(iterable); next(b, None); return izip(a, b), and then for (i1, row1), (i2, row2) in pairwise(df.iterrows()): print i1, i2, row1["value"], row2["value"]. But it's too slow. Any idea how to achieve the output without iterrows? I would like to try pd.apply for a large dataset.
Answer: While it's not the fanciest way, I would just use a numeric iterator and access rows i and i+1.
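A sketch of the numeric-iterator idea, plus a vectorized alternative using shift (the latter is a swapped-in technique, not from the answer):

```python
import pandas as pd

df = pd.DataFrame(["AA", "BB", "CC"], columns=["value"])

# Positional access to row i and row i+1.
for i in range(len(df) - 1):
    print(i, i + 1, df["value"].iat[i], df["value"].iat[i + 1])

# Vectorized: pair each value with the next one via shift(-1).
pairs = pd.DataFrame({"cur": df["value"],
                      "next": df["value"].shift(-1)}).dropna()
print(pairs)
```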
Title: Understanding matplotlib: plt, figure, ax(arr)?
Meta: Q_Id 35,677,767 · A_Id 35,677,835 · created 2016-02-28T01:47:00.000 · answer score 0 · not accepted · users score 0 · Q score 39 · answers 3 · views 8,825 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,matplotlib,plot,figure
Question: I'm not really new to matplotlib, and I'm deeply ashamed to admit I have always used it as a tool for getting a solution as quickly and easily as possible. So I know how to get basic plots, subplots and such, and have quite a bit of code that gets reused from time to time, but I have no deep(er) knowledge of matplotlib. Recently I thought I should change this and work through some tutorials; however, I am still confused about matplotlib's plt, fig(ure) and ax(arr). What is really the difference? In most cases, for quick'n'dirty plotting, I see people using just pyplot as plt and plotting directly with plt.plot. Since I often have multiple things to plot, I frequently use f, axarr = plt.subplots(), but most times you see code putting data only into the axarr and ignoring the figure f. So my question is: what is a clean way to work with matplotlib? When should plt be used alone, and what is, or should, a figure be used for? Should subplots just contain data, or is it valid and good practice to do everything, like styling or clearing a plot, on the subplots? I hope this is not too wide-ranging; basically I am asking for some advice on the true purposes of plt <-> fig <-> ax(arr) (and when/how to use them properly). Tutorials would also be welcome. The matplotlib documentation is rather confusing to me: when one searches for something really specific, like rescaling a legend or different plot markers and colors, the official documentation is really precise, but the general information is not that good in my opinion. Too many different examples, no real explanations of the purposes; it looks more or less like a big listing of all possible API methods and arguments.
Answer: pyplot is a MATLAB-like API for those who are familiar with MATLAB and want to make quick and dirty plots; figure is the object-oriented API for those who don't care about MATLAB-style plotting. You can use either one, but perhaps not both together.
Title: Efficiency of passing arrays to a function vs unpickling the array inside the function in Python?
Meta: Q_Id 35,731,339 · A_Id 35,731,697 · created 2016-03-01T19:08:00.000 · answer score 1.2 · accepted · users score 0 · Q score 0 · answers 2 · views 45 · available count 2 · topics: Data Science and Machine Learning
Tags: python,arrays,performance
Question: Sorry in advance; I know almost nothing about efficiency, so I might need a little extra help. I have a function that calls another function and passes in pretty big arrays. Due to limitations in memory, I've just been saving the arrays to pickle files and unpickling them inside the second function. For context, I have 64 pickle files of 47 megabytes each (3 GB total), and I'm calling this function upwards of 100,000 times. I'm assuming that unpickling every time is far less efficient than unpickling once and passing the arrays, but I was wondering on the order of how much time I'd be losing by doing it this way, and whether there are more efficient approaches.
Answer: It depends on your requirements. If you pass an array to a function, Python passes a reference to that array, whereas if you pass the array in the form of unpacked arguments, the objects comprising the array are passed. If, within the function, the order of your original array is not modified, it is better to pass it as a reference, i.e. as an argument.
Title: Efficiency of passing arrays to a function vs unpickling the array inside the function in Python?
Meta: Q_Id 35,731,339 · A_Id 35,732,141 · created 2016-03-01T19:08:00.000 · answer score 0 · not accepted · users score 0 · Q score 0 · answers 2 · views 45 · available count 2 · topics: Data Science and Machine Learning
Tags: python,arrays,performance
Question: same as above (Q_Id 35,731,339).
Answer: If your memory limitations don't allow keeping all the files in memory, you will have to read the file each time and pass the list to the other function (it will be passed by reference).
Title: Matplotlib.pyplot.hist() very slow
Meta: Q_Id 35,738,199 · A_Id 60,751,250 · created 2016-03-02T03:54:00.000 · answer score 0 · not accepted · users score 0 · Q score 17 · answers 9 · views 21,439 · available count 3 · topics: Data Science and Machine Learning
Tags: python,matplotlib,histogram
Question: I'm plotting about 10,000 items in an array, with around 1,000 unique values. The plotting has been running for half an hour now. I made sure the rest of the code works. Is it really that slow? This is my first time plotting histograms with pyplot.
Answer: If you are working with pandas, make sure the data you pass to plt.hist() is a 1-d Series rather than a DataFrame. This helped me out.
Title: Matplotlib.pyplot.hist() very slow
Meta: Q_Id 35,738,199 · A_Id 58,707,409 · created 2016-03-02T03:54:00.000 · answer score 0 · not accepted · users score 0 · Q score 17 · answers 9 · views 21,439 · available count 3 · topics: Data Science and Machine Learning
Tags: python,matplotlib,histogram
Question: same as above (Q_Id 35,738,199).
Answer: For me, it took calling figure.canvas.draw() after the call to hist to update the figure immediately. hist itself was actually fast (discovered after timing it), but there was a delay of a few seconds before the figure was updated. I was calling hist inside a matplotlib callback in a Jupyter Lab cell (qt5 backend).
Title: Matplotlib.pyplot.hist() very slow
Meta: Q_Id 35,738,199 · A_Id 56,879,388 · created 2016-03-02T03:54:00.000 · answer score 0.044415 · not accepted · users score 2 · Q score 17 · answers 9 · views 21,439 · available count 3 · topics: Data Science and Machine Learning
Tags: python,matplotlib,histogram
Question: same as above (Q_Id 35,738,199).
Answer: For me, the problem was that the data type of the pd.Series, say S, was 'object' rather than 'float64'. After converting with S = np.float64(S), plt.hist(S) was very quick.
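A sketch of the dtype fix from the last answer, written with astype (an equivalent cast to the np.float64(S) conversion that answer used):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# object dtype forces slow per-element Python handling inside hist().
s = pd.Series(np.random.rand(10_000), dtype=object)

plt.hist(s.astype("float64"), bins=50)  # cast to float64 first, then plot
plt.show()
```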
Title: Does pcolormesh plot the 2D array rightside-up or upside-down?
Meta: Q_Id 35,785,178 · A_Id 35,886,350 · created 2016-03-03T23:30:00.000 · answer score 1.2 · accepted · users score 0 · Q score 0 · answers 1 · views 335 · available count 1 · topics: Data Science and Machine Learning
Tags: python,image,matplotlib,plot
Question: Every time I go to plot a 2D array in matplotlib using, for example, pcolormesh, I have the same question: is the resulting image showing the array right-side-up or upside-down? That is, is index (0, 0) at the top left of the plot or the bottom left? It's tedious to write a test every six months to remind myself. This should be clearly documented in an obvious place, like SO.
Answer: The array is plotted upside-down, meaning index (0, 0) is at the bottom left.
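A minimal sketch to see the orientation for yourself: mark cell (0, 0) and check where it lands:

```python
import matplotlib.pyplot as plt
import numpy as np

a = np.zeros((3, 4))
a[0, 0] = 1.0  # mark index (0, 0)

plt.pcolormesh(a)  # the marked cell appears at the BOTTOM left
plt.colorbar()
plt.show()

# For image-style orientation (row 0 on top), imshow defaults to origin='upper'.
```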
Title: Trouble Installing OpenFace in Python
Meta: Q_Id 35,800,893 · A_Id 35,881,615 · created 2016-03-04T16:17:00.000 · answer score 1.2 · accepted · users score 3 · Q score 2 · answers 2 · views 4,278 · available count 1 · topics: Data Science and Machine Learning
Tags: python,deep-learning,face-recognition,torch
Question: I am new to deep learning and face recognition. After searching, I found the Python package OpenFace, which applies deep learning to face recognition. From its documentation, I gather it is built on top of Torch for the neural net computation. I want to install the package in a virtual environment, so these are the steps I took: brew-installed the necessary system requirements (bash, coreutils, curl, findutils, opencv, python and boost-python); made a virtual environment and installed dlib, numpy, scipy, pandas, scikit-learn, scikit-image; cloned the openface github repository; installed Torch (curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash; git clone https://github.com/torch/distro.git torch --recursive; cd torch; ./install.sh; source install/bin/torch-activate; luarocks install csvigo; luarocks install dpnn; luarocks install nn); then cd'd to the cloned openface repo and ran python setup.py install. However, when I run python and import openface, I get: Segmentation Fault: 11. How do I fix this? Also, are there any other tutorials for installing and using OpenFace properly?
Answer: As I posted in the comments, this segfault was caused by compiling dlib with one Python version and running it with another. It was resolved by manually installing dlib rather than using their pip package.
Title: Which dtype performs better when training a randomforest in python?
Meta: Q_Id 35,801,638 · A_Id 35,953,956 · created 2016-03-04T16:52:00.000 · answer score 1.2 · accepted · users score 4 · Q score 4 · answers 1 · views 1,673 · available count 1 · topics: Data Science and Machine Learning
Tags: python,pandas,scikit-learn
Question: I was trying to train a random forest classifier in Python, but my original pandas.DataFrame contains float64, object, datetime64, int64 and bool dtypes (nearly all dtypes allowed in pandas). Is it necessary to convert a bool to float or int? For a two-value object column, should I convert it to bool, int, or float? Which would perform better, or does it not matter? Thanks!
Answer: Nearly all scikit-learn estimators will convert input data to float before running the algorithm, regardless of the original types in the array. This holds for the random forest implementation.
Title: iPython Notebook for big / complex analysis. Good idea or not?
Meta: Q_Id 35,823,618 · A_Id 35,827,519 · created 2016-03-06T05:47:00.000 · answer score 1.2 · accepted · users score 1 · Q score 3 · answers 1 · views 265 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,matplotlib,neural-network,ipython,jupyter-notebook
Question: I'm working on a project involving neural networks (using theano) with a big data set: 50,000 images of 3,072 pixels. As you may expect, the computational process gets expensive when training the neural network. I was using PyCharm to debug and write the code, but since I had some trouble using matplotlib and other libraries I decided to switch to the IPython Notebook. So far I'm just using it for dummy plots etc., but my main concern is: is it a good idea to use the IPython Notebook to run this kind of computationally expensive project? Are there any drawbacks to using the notebook instead of just running a Python script from the terminal? I researched good IDEs for data analysis and scientific computing in Python and found that the IPython Notebook is considered the best, but any other recommendations are very appreciated.
Answer: It doesn't matter: the notebook just runs a Python kernel in the background, which is no different from one you would run from the command line. The only thing you should avoid, obviously, is displaying huge amounts of data in your notebook (like plotting your whole image set at once).
Title: What is the difference between a NumPy array and a python list?
Meta: Q_Id 35,825,802 · A_Id 35,825,863 · created 2016-03-06T10:34:00.000 · answer score 1.2 · accepted · users score 10 · Q score 13 · answers 1 · views 17,191 · available count 1 · topics: Python Basics and Environment; Data Science and Machine Learning
Tags: python,arrays,list,numpy
Question: Why do we use numpy arrays in place of lists in Python? What is the main difference between them?
Answer: A NumPy array is a typed array: the array in memory stores homogeneous, densely packed numbers. A Python list is a heterogeneous list: the list in memory stores references to objects rather than the numbers themselves. This means that a Python list requires dereferencing a pointer every time the code needs to access a number, while a NumPy array can be processed directly by NumPy vector operations, which makes those operations much faster than anything you can code with a list. The drawbacks of a NumPy array are that if you need to access single items, NumPy has to box/unbox the number into a Python numeric object, which can be slow in certain situations, and that it can't hold heterogeneous data.
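A small timing sketch of the difference the answer describes; exact numbers vary by machine:

```python
import timeit

import numpy as np

lst = list(range(1_000_000))
arr = np.arange(1_000_000)

# Python-level loop over boxed objects vs. one vectorized operation
# over a densely packed buffer.
print(timeit.timeit(lambda: [x * 2 for x in lst], number=10))
print(timeit.timeit(lambda: arr * 2, number=10))
```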
Title: What is a good heuristic to detect if a column in a pandas.DataFrame is categorical?
Meta: Q_Id 35,826,912 · A_Id 35,827,413 · created 2016-03-06T12:38:00.000 · answer score 0.028564 · not accepted · users score 1 · Q score 28 · answers 7 · views 13,722 · available count 4 · topics: Data Science and Machine Learning
Tags: python,pandas,scikit-learn
Question: I've been developing a tool that automatically preprocesses data in pandas.DataFrame format. During this preprocessing step I want to treat continuous and categorical data differently; in particular, I want to be able to apply, e.g., a OneHotEncoder to only the categorical data. Now let's assume we're provided a pandas.DataFrame and have no other information about the data in it. What is a good heuristic to determine whether a column in the DataFrame is categorical? My initial thoughts: 1) if there are strings in the column (e.g., the column data type is object), then the column very likely contains categorical data; 2) if some percentage of the values in the column is unique (e.g., >=20%), then the column very likely contains continuous data. I've found 1) to work fine, but 2) hasn't panned out very well; I need better heuristics. How would you solve this problem? Edit: someone requested that I explain why 2) didn't work well. There were test cases where a column still held continuous values but didn't have many unique values, so the heuristic in 2) obviously failed. There were also cases where a categorical column had many, many unique values, e.g., passenger names in the Titanic data set; same column-type misclassification problem there.
Answer: IMO the opposite strategy, identifying categoricals, is better, because it depends on what the data is about. Technically, address data can be thought of as unordered categorical data, but usually I wouldn't use it that way. For survey data, an idea would be to look for Likert scales, e.g. 5-8 values, either strings (which would probably need hardcoded, and translated, levels to look for: "good", "bad", ".agree.", "very .*", ...) or int values in the 0-8 range plus NA. Countries and such things might also be identifiable. Age groups (".-.") might also work.
Title: What is a good heuristic to detect if a column in a pandas.DataFrame is categorical?
Meta: Q_Id 35,826,912 · A_Id 35,827,781 · created 2016-03-06T12:38:00.000 · answer score 0.141893 · not accepted · users score 5 · Q score 28 · answers 7 · views 13,722 · available count 4 · topics: Data Science and Machine Learning
Tags: python,pandas,scikit-learn
Question: same as above (Q_Id 35,826,912).
Answer: There are many places from which you could "steal" the definitions of formats that can be cast as a "number"; ##,#e-# would be one such format, just to illustrate. Maybe you'll be able to find a library to do so. I try to cast everything to numbers first, and what is left, well, there's no other way but to keep it as categorical.
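A minimal sketch of this answer's cast-first heuristic using pd.to_numeric; the 95% threshold and the helper name are illustrative choices, not from the answer:

```python
import pandas as pd

def looks_numeric(col, min_frac=0.95):
    # Coerce to numbers; values that fail become NaN. If almost everything
    # survives, treat the column as numeric, otherwise keep it categorical.
    converted = pd.to_numeric(col, errors="coerce")
    return converted.notna().mean() >= min_frac

df = pd.DataFrame({"a": ["1", "2.5", "3e-2"], "b": ["red", "blue", "red"]})
print({c: looks_numeric(df[c]) for c in df})  # {'a': True, 'b': False}
```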
Title: What is a good heuristic to detect if a column in a pandas.DataFrame is categorical?
Meta: Q_Id 35,826,912 · A_Id 35,828,098 · created 2016-03-06T12:38:00.000 · answer score 0.028564 · not accepted · users score 1 · Q score 28 · answers 7 · views 13,722 · available count 4 · topics: Data Science and Machine Learning
Tags: python,pandas,scikit-learn
Question: same as above (Q_Id 35,826,912).
Answer: I think the real question here is whether you'd like to bother the user once in a while or silently fail once in a while. If you don't mind bothering the user, maybe detecting ambiguity and raising an error is the way to go. If you don't mind failing silently, then your heuristics are OK; I don't think you'll find anything significantly better. I guess you could make this into a learning problem if you really want to: download a bunch of datasets, assume they are collectively a decent representation of all data sets in the world, and train on features over each data set / column to predict categorical vs. continuous. But of course in the end nothing can be perfect. E.g., is the column [1, 8, 22, 8, 9, 8] referring to hours of the day or to dog breeds?
0
38,108,924
0
0
0
0
4
false
28
2016-03-06T12:38:00.000
1
7
0
What is a good heuristic to detect if a column in a pandas.DataFrame is categorical?
35,826,912
0.028564
python,pandas,scikit-learn
I've been thinking about a similar problem and the more I consider it, the more it seems that this is itself a classification problem that could benefit from training a model. I bet if you examined a bunch of datasets and extracted these features for each column / pandas.Series: % floats: percentage of values that are float % int: percentage of values that are whole numbers % string: percentage of values that are strings % unique string: number of unique string values / total number % unique integers: number of unique integer values / total number mean numerical value (non numerical values considered 0 for this) std deviation of numerical values and trained a model, it could get pretty good at inferring column types, where the possible output values are: categorical, ordinal, quantitative. Side note: as far as a Series with a limited number of numerical values goes, it seems like the interesting problem would be determining categorical vs ordinal; it doesn't hurt to think a variable is ordinal if it turns out to be quantitative, right? The preprocessing steps would encode the ordinal values numerically anyway without one-hot encoding. A related problem that is interesting: given a group of columns, can you tell if they are already one-hot encoded? E.g. in the forest-cover-type-prediction kaggle contest, you would automatically know that soil type is a single categorical variable.
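A rough sketch of the per-column feature extraction described above; the feature names are illustrative, and assembling labeled training data would still be a manual step:

import pandas as pd

def column_features(s):
    # fraction of floats / ints / strings, plus a uniqueness ratio, for one pandas.Series
    n = float(len(s))
    return {
        "pct_float": sum(isinstance(v, float) for v in s) / n,
        "pct_int": sum(isinstance(v, int) for v in s) / n,
        "pct_string": sum(isinstance(v, str) for v in s) / n,
        "unique_ratio": s.nunique() / n,
    }

df = pd.DataFrame({"sex": ["m", "f", "m"], "age": [22.0, 38.0, 26.0]})
print({col: column_features(df[col]) for col in df.columns})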
I've been developing a tool that automatically preprocesses data in pandas.DataFrame format. During this preprocessing step, I want to treat continuous and categorical data differently. In particular, I want to be able to apply, e.g., a OneHotEncoder to only the categorical data. Now, let's assume that we're provided a pandas.DataFrame and have no other information about the data in the DataFrame. What is a good heuristic to use to determine whether a column in the pandas.DataFrame is categorical? My initial thoughts are: 1) If there are strings in the column (e.g., the column data type is object), then the column very likely contains categorical data 2) If some percentage of the values in the column is unique (e.g., >=20%), then the column very likely contains continuous data I've found 1) to work fine, but 2) hasn't panned out very well. I need better heuristics. How would you solve this problem? Edit: Someone requested that I explain why 2) didn't work well. There were some test cases where we still had continuous values in a column but there weren't many unique values in the column. The heuristic in 2) obviously failed in that case. There were also issues where we had a categorical column that had many, many unique values, e.g., passenger names in the Titanic data set. Same column-type misclassification problem there.
0
1
13,722
0
35,847,976
0
0
0
0
1
false
6
2016-03-06T13:30:00.000
2
4
0
How can SciKit-Learn Random Forest sub sample size may be equal to original training data size?
35,827,446
0.099668
python,scikit-learn,random-forest,subsampling
Certainly not all samples are selected for each tree. By default each sample has a 1-((N-1)/N)^N~0.63 chance of being sampled for one particular tree, roughly 0.63^2 of being sampled twice, 0.63^3 of being sampled 3 times, and so on, where N is the sample size of the training set. Each bootstrap sample is, on average, different enough from the other bootstraps that the decision trees are adequately different, so the average prediction of the trees is robust to the variance of each tree model. If the sample size were increased to 5 times the training set size, every observation would probably be present 3-7 times in each tree and the overall ensemble prediction performance would suffer.
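A quick empirical check of that inclusion rate, standalone with numpy:

import numpy as np

N = 100000
boot = np.random.randint(0, N, N)          # one bootstrap sample, drawn with replacement
print(len(np.unique(boot)) / float(N))     # ~0.632: fraction of distinct rows one tree sees
print(1 - ((N - 1.0) / N) ** N)            # the closed-form probability from above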
In the documentation of SciKit-Learn Random Forest classifier , it is stated that The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default). What I dont understand is that if the sample size is always the same as the input sample size than how can we talk about a random selection. There is no selection here because we use all the (and naturally the same) samples at each training. Am I missing something here?
0
1
3,033
0
35,836,274
0
0
0
0
1
false
0
2016-03-07T03:39:00.000
0
1
0
How is "accuracy" calculated using Libsvm - SVM_Predict.exe
35,835,787
0
python,machine-learning,svm,libsvm,prediction
The file "trainingdata.svm.prediction" is predicting the labels 1 and 0 for your set (1 means the sample was predicted to be male, 0 is female). It assumes all the labels belong to class index 0, I believe.
I am using the LIBSVM for the first time. I was able to train a data(for images) and my model is ready "trainingdata.svm.model" Now, when I run my classification against an unknown test data it is giving me two files: 1. trainingdata.svm.prediction (This file contains 1's and 0's) against my each of test data. 2. It is giving me Accuracy = 8 % THE QUESTION: 1. How do I interpret the 1s and 0s in my "trainingdata.svm.prediction". Note: I am classifying genders where 1 could be male and 0 could be female. How is Accuracy calculated? How can a program calculate accuracy since the test data is an unknown entity and we do not know the labels yet. Thanks
0
1
190
0
46,668,033
0
0
0
0
1
false
8
2016-03-07T06:09:00.000
6
1
0
HDF5 possible data corruption or loss?
35,837,243
1
python,python-2.7,hdf5,h5py,hdf
Declaration up front: I help maintain h5py, so I probably have a bias, etc. The wikipedia page has changed since the question was posted; here's what I see: Criticism Criticism of HDF5 follows from its monolithic design and lengthy specification. Though a 150-page open standard, the only other C implementation of HDF5 is just a HDF5 reader. HDF5 does not enforce the use of UTF-8, so client applications may be expecting ASCII in most places. Dataset data cannot be freed in a file without generating a file copy using an external tool (h5repack). I'd say that pretty much sums up the problems with HDF5: it's complex (but people need this complexity, see the virtual dataset support), it's got a long history with backwards compatibility as its focus, and it's not really designed to allow for massive changes in files. It's also not the best on Windows (due to how it deals with filenames). I picked HDF5 for my research because, of the available options, it had decent metadata support (HDF5 at least allows UTF-8; formats like FITS don't even have that), support for multidimensional arrays (which formats like Protocol Buffers don't really support), and it supports more than just 64 bit floats (which is very rare). I can't comment on known bugs, but I have seen corruption (this happened when I was writing to a file and linux OOM'd my script). However, this shouldn't be a concern as long as you have proper data hygiene practices (as mentioned in the hackernews link), which in your case would be to not continuously write to the same file, but to create a new file for each run. You should also not modify the file; instead, any data reduction should produce new files, and you should always back up the originals. Finally, it is worth pointing out there are alternatives to HDF5, depending on what exactly your requirements are: SQL databases may fit your needs better (and sqlite comes with Python by default, so it's easy to experiment with), as could a simple csv file. I would recommend against custom/non-portable formats (e.g. pickle and similar), as they're no more robust than HDF5 while being more complex than a csv file.
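A minimal sketch of the one-file-per-run hygiene mentioned above; the file naming scheme is made up:

import time
import h5py

fname = "run_{}.h5".format(int(time.time()))    # never reopen an old run's file for writing
with h5py.File(fname, "w") as f:                # the context manager closes the file even on errors
    f.create_dataset("data", data=list(range(100)))
    f.flush()                                   # push buffers to disk at checkpoints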
On wikipedia one can read the following criticism about HDF5: Criticism of HDF5 follows from its monolithic design and lengthy specification. Though a 150-page open standard, there is only a single C implementation of HDF5, meaning all bindings share its bugs and performance issues. Compounded with the lack of journaling, documented bugs in the current stable release are capable of corrupting entire HDF5 databases. Although 1.10-alpha adds journaling, it is backwards-incompatible with previous versions. HDF5 also does not support UTF-8 well, necessitating ASCII in most places. Furthermore even in the latest draft, array data can never be deleted. I am wondering if this is just applying to the C implementation of HDF5 or if this is a general flaw of HDF5? I am doing scientific experiments which sometimes generate Gigabytes of data and in all cases at least several hundred Megabytes of data. Obviously data loss and especially corruption would be a huge disadvantage for me. My scripts always have a Python API, hence I am using h5py (version 2.5.0). So, is this criticism relevant to me and should I be concerned about corrupted data?
0
1
1,980
0
35,880,616
0
1
0
0
1
false
0
2016-03-07T17:09:00.000
0
2
0
I am trying to read several csv files in python 2.7 and assign to a list variable
35,849,754
0
python,list,csv
OK, I should close this. My comment resolved the question above. I reformatted my input csv files to be values, each on a separate "line", so I was able to read the lines in one by one and append them to a list. This seems really sloppy and wasteful. I was hoping for a method to read a csv file and in one line assign it to a single list – not a list of lists, a single list.
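For the record, a sketch that reads the original one-line file straight into a single flat list, without reformatting it (file name taken from the question):

import csv

with open("col.csv") as f:
    row = next(csv.reader(f, skipinitialspace=True))  # the file's single line -> one list
print(row)  # e.g. ['1', '2', 'black', 'orange'] -- note the values come back as strings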
I know this has been asked many times, but when I try this, I always get a list of lists. The data in my input file (col.csv) looks like: 1,2,"black", "orange" There are NO hard returns in the data (\n)(it's a csv file, right?). When I use the csv module in python to import to a list using reader, I end up with list of lists, with the first entry, list [0][0] containing all the data. How do I import the data into a list such that each comma separated value is a single list entry? The typical method I see uses for row in..., but I don't have rows – there are no returns in the data. Sorry for such a rank amateur question.
0
1
55
0
39,803,190
0
1
0
0
1
false
4
2016-03-08T12:07:00.000
0
1
0
Issues with importing vpython for anaconda
35,866,967
0
python,anaconda,vpython
The graph functions now live in the main vpython library when using Jupyter. So, from vpython import * should be sufficient. (P.S. I'd recommend not importing * but rather importing the functions you plan to use or just import vpython.) Note however that some functions change name in the Jupyter-compatible version of VPython, so display becomes canvas and gdisplay becomes graph, and you have to explicitly use vector(x,y,z) rather than (x,y,z) and have to use obj.pos.x rather than obj.x
I try to import vpython into anaconda. It seems to work so far, but if I call from visual import * it gives me an error. However, it does work when I type from vpython import *, which is really weird since in all programs I only see the from visual import * command. Now to the real problem: I can't draw graphs. I have to call from visual.graph import * but this does not work (from vpython.graph import * doesn't work either). I am receiving the error below: ImportError Traceback (most recent call last) in () ----> 1 from visual import * ImportError: No module named visual
0
1
3,831
0
35,882,190
0
1
0
0
3
false
1
2016-03-09T02:30:00.000
0
3
0
Can someone consolidate the definition and the differences between a list, an array, a numpy array, a pandas dataframe , series?
35,881,832
0
python,arrays,numpy,pandas,dataframe
Here's a general overview (partial credit to online documentation and Mark Lutz and Wes McKinney O'Reilly books): list: General selection object available in Python's standard library. Lists are positionally ordered collections of arbitrarily typed objects, and have no fixed size. They are also mutable (str, for example, is not). numpy.ndarray: Stores a collection of items of the same type. Every item takes up the same size block of memory (not necessarily the case in a list). How each item in the array is to be interpreted is specified by a separate data-type object (dtype, not to be confused with type). Also, differently from lists, ndarrays can't have items appended in place (i.e. the .append method returns a new array with the appended items, differently from lists). A single ndarray is a vector, an ndarray of same-sized ndarrays is a 2-d array (a.k.a. matrix) and so on. You can make arbitrary n-dimensional objects by nesting. pandas.Series: A one-dimensional array-like object containing an array of data (of any dtype) and an associated array of data labels, called its index. It's basically a glorified numpy.ndarray, with labels (stored inside a Series as an Index object) for each item and some handy extra functionality. Also, a Series can contain multiple objects of different dtypes (more like a list). pandas.DataFrame: A collection of multiple Series, forming a table-like object, with a lot of very handy functionality for data analysis.
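A tiny side-by-side illustration of the four containers (the values are arbitrary):

import numpy as np
import pandas as pd

lst = [1, "two", 3.0]                          # heterogeneous, growable
mat = np.array([[1.0, 2.0], [3.0, 4.0]])       # homogeneous, fixed size -- fine as a matrix
vec = np.array([1.0, 2.0])                     # a 1-d ndarray serves as a vector
s = pd.Series([1.0, 2.0], index=["a", "b"])    # ndarray plus labels
df = pd.DataFrame({"col1": s, "col2": s * 2})  # a table of aligned Series
print(mat.dot(vec))                            # matrix-vector product
print(df)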
I am a Python beginner and getting confused by these different forms of storing data? When should one use which. Also which of these is suitable to store a matrix (and a vector)?
0
1
80
0
35,882,181
0
1
0
0
3
false
1
2016-03-09T02:30:00.000
0
3
0
Can someone consolidate the definition and the differences between a list, an array, a numpy array, a pandas dataframe , series?
35,881,832
0
python,arrays,numpy,pandas,dataframe
list - the original Python way of storing multiple values. array - a little used Python module (let's ignore it). numpy array - the closest thing in Python to the arrays, matrices and vectors used in mathematics and languages like MATLAB. dataframe, series - pandas structures, generally built on numpy, better suited for the kind of data found in tables and databases. To be more specific, you need to give us an idea of what kinds of problems you need to solve. What kind of data are you using, and what do you need to do with it? lists can change in size, and can contain a wide mix of elements. numpy.array is fixed in size, and contains a uniform type of elements. It is multidimensional, and has implemented many mathematical functions.
I am a Python beginner and getting confused by these different forms of storing data? When should one use which. Also which of these is suitable to store a matrix (and a vector)?
0
1
80
0
35,882,258
0
1
0
0
3
false
1
2016-03-09T02:30:00.000
0
3
0
Can someone consolidate the definition and the differences between a list, an array, a numpy array, a pandas dataframe , series?
35,881,832
0
python,arrays,numpy,pandas,dataframe
Lists: lists are very flexible and can hold completely heterogeneous, arbitrary data, and they can be appended to very efficiently. Array: The array.array type, on the other hand, is just a thin wrapper on C arrays. It can hold only homogeneous data, all of the same type, and so it uses only sizeof(one object) * length bytes of memory. Numpy arrays: However, if you want to do math on a homogeneous array of numeric data, then you're much better off using NumPy, which can automatically vectorize operations on complex multi-dimensional arrays. Pandas: Pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool. Pandas provides a bunch of C or Cython optimized routines that can be faster than numpy "equivalents" (e.g. reading text). For something like a dot product, pandas DataFrames are generally going to be slower than a numpy array. (FYI: taken from different web sources.)
I am a Python beginner and getting confused by these different forms of storing data? When should one use which. Also which of these is suitable to store a matrix (and a vector)?
0
1
80
0
43,664,585
0
0
0
0
1
false
1
2016-03-09T02:45:00.000
0
1
0
the input shape of array about Keras on Tensorflow
35,881,949
0
python,tensorflow,keras
With dim_ordering='tf' the expected input really is channels-last, (samples, rows, cols, channels). If (samples, channels, rows, cols) is what works for you, your layers are almost certainly running with the 'th' ordering: Keras does not pick the ordering from the backend alone but from its configured default (in newer versions the image_dim_ordering entry in ~/.keras/keras.json; older versions defaulted to 'th' even with tensorflow), unless you pass dim_ordering explicitly to each layer.
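A Keras 1.x-style sketch of a channels-last first layer, with made-up shape numbers (the exact layer signature depends on your Keras version):

from keras.models import Sequential
from keras.layers import Convolution2D

model = Sequential()
# rows, cols, channels -- matches dim_ordering='tf', regardless of the configured default
model.add(Convolution2D(32, 3, 3, dim_ordering="tf", input_shape=(64, 64, 3)))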
I have a question about the 4D tensor on keras about Convolution2D Layers. The Keras doc says: 4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'. I use 'tf', how about my input? When I use (samples, channels, rows, cols), it is ok, but when I use (samples, rows, cols, channels) as input, it has some problems.
0
1
1,225
0
35,889,624
0
0
0
0
1
false
0
2016-03-09T02:58:00.000
0
1
0
Vectorizer where or how fit information is stored?
35,882,062
0
python,scikit-learn
In the test phase you should use the same model names as you used in the training phase. In this way you will be able to use the model parameters which were derived in the training phase. Here is an example below. First give a name to your vectorizer and to your predictive algorithm (it is NB in this case): vectorizer = TfidfVectorizer() classifier = MultinomialNB() Then, use these names to vectorize and predict your data: trainingdata_counts = vectorizer.fit_transform(trainingdata.values) classifier.fit(trainingdata_counts, trainingdatalabels) testdata_counts = vectorizer.transform(testdata.values) predictions=classifier.predict(testdata_counts) This way, your code will be able to process the training and the test phases continuously.
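Since the question's train and test runs are separated in time and space, the fitted objects have to be persisted between them; a common sketch (file names are arbitrary, variable names follow the answer above):

from sklearn.externals import joblib

# at the end of the training run -- this saves the vocabulary and the fitted idf vector
joblib.dump(vectorizer, "vectorizer.pkl")
joblib.dump(classifier, "classifier.pkl")

# later, in the separate test run
vectorizer = joblib.load("vectorizer.pkl")
classifier = joblib.load("classifier.pkl")
predictions = classifier.predict(vectorizer.transform(testdata.values))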
In text mining/classification, when a vectorizer is used to transform a text into numerical features, in the training TfidfVectorizer(...).fit_transform(text) or TfidfVectorizer(...).fit(text) is used. In testing it is supposed to utilize former training info and just transform the data following the training fit. In the general case the test run(s) is completely separate from the train run. But it needs some info regarding the fit obtained during the training stage, otherwise the transformation fails with error sklearn.utils.validation.NotFittedError: idf vector is not fitted. It's not just a dictionary, it's something else. What should be saved after the training is done, to make the test stage pass smoothly? In other words, train and test are separated in time and space; how to make test work, utilizing training results? A deeper question would be what 'fit' means in a scikit-learn context, but it's probably out of scope
0
1
172
0
35,887,508
0
0
0
0
1
true
0
2016-03-09T09:09:00.000
1
1
0
Sorting words into categories in Python
35,887,212
1.2
python,machine-learning,deep-learning
For deep learning to work on this you would have to develop a large dataset, most likely manually; the largest natural language processing datasets were, in fact, created manually. But even if you were able to find a dataset a model could learn from, then a model such as gradient boosted trees would be one, among others, that is well suited to multi-class classification like this. A classic library for this is xgboost.
I have about 3,000 words and I would like to group them into about 20-50 different categories. My words are typical phrases you might find in company names. "Face", "Book", "Sales", "Force", for example. The libraries I have been looking at so far are pandas and scikit-learn. I'm wondering if there is a machine-learning or deep-learning algorithm that would be well suited for this? The topics I have been looking at are Classification: identifying which category an object belongs to, and Dimensionality Reduction: reducing the number of random variables to consider. When I search for putting words into categories on Google, it brings up kids' puzzles such as "things you do with a pencil" - draw. Or "parts of a house" - yard, room.
0
1
717
1
35,901,608
0
0
0
0
1
false
2
2016-03-09T19:48:00.000
2
1
0
Show image using OpenCV in Python on a Raspberry Pi terminal
35,901,246
0.379949
python,linux,opencv,terminal,raspberry-pi
You need to use a windowing system to display images using imshow. (That can be enabled in settings running sudo raspi-config) If you absolutely, positively need to display images without using a windowing system, consider providing an html/web interface. Two options that come to mind when serving a web interface are: Creating an HTTP video stream (serve the output image as if it's an IP camera, kind of) Stream the output matrix as a jpg blob via websockets
I get a gtk-WARNING when trying: cv2.imshow("WindowName", image) I'm using this to watch a live stream one frame at a time. Are there any alternative libraries I could use? I tried several other options like PIL and Tkinter as well as wand, but could get none of them to work, for various different reasons.
0
1
1,687
0
35,923,537
0
0
0
0
1
false
0
2016-03-10T14:57:00.000
0
1
0
Inject custom cost function for linear regression
35,919,948
0
python,python-2.7,machine-learning,scipy,scikit-learn
I suspect there is no readily available module that suits your needs. If I were you I would: partition the features into 2 groups: one for simple linear regression and another one for regularized regression. Train two models on two different (maybe overlapping?) sets of features. When you cross-validate your models, to prevent information leakage between folds, I'd suggest fixing the folds and training both models on the same fixed set of folds. On top, stack and train any other regression model.
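If you do want the penalty on only some coefficients in a single model, one hedged alternative is to minimize the custom objective directly; pen_idx and alpha are hypothetical, and Powell is chosen because the L1 term is not smooth:

import numpy as np
from scipy.optimize import minimize

def partial_lasso(X, y, pen_idx, alpha):
    def loss(w):
        resid = np.dot(X, w) - y
        # squared error plus an L1 penalty on the chosen coefficients only
        return 0.5 * np.sum(resid ** 2) + alpha * np.sum(np.abs(w[pen_idx]))
    return minimize(loss, np.zeros(X.shape[1]), method="Powell").x

X = np.random.rand(50, 4)
y = np.random.rand(50)
w = partial_lasso(X, y, pen_idx=[0, 1], alpha=1.0)  # penalize only the first two features
print(w)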
I want to run a lasso or ridge regression, but where the L1 or L2 constraint on the coefficients is on some of the coefficients, not all. Another way to say it: I would like to use my own custom cost function inside the lasso or ridge algorithm. I would like to avoid having to rewrite the whole algorithm. Is there a module in python that allows this? I looked into scipy and sckit-learn so far, but could not find that.
0
1
787
0
35,939,091
0
1
0
0
1
true
0
2016-03-11T11:18:00.000
0
1
0
matplotlib & seaborn: how to get rid of lines?
35,938,891
1.2
python,matplotlib,seaborn
Passing arguments into ax.yaxis.grid() and ax.xaxis.grid() will include or omit grids in the graphs
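For the whitegrid case specifically, a minimal sketch (the plot itself is arbitrary):

import seaborn as sns
import matplotlib.pyplot as plt

sns.set_style("whitegrid")
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [3, 1, 2])
ax.xaxis.grid(False)   # drop the vertical grid lines
ax.yaxis.grid(True)    # keep the horizontal ones
plt.show()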
I'm using the whitegrid style and it's fine except for the vertical lines in the background. I just want to retain the horizontal lines.
0
1
202
0
35,948,671
0
0
0
0
1
true
2
2016-03-11T16:01:00.000
3
1
0
random_state parameter in classification models
35,944,725
1.2
python,random,machine-learning,scikit-learn,evaluation
If the random_state affects your results it means that your model has a high variance. In the case of Random Forest this simply means that you use too small a forest and should increase the number of trees (which, due to bagging, reduces variance). In scikit-learn this is controlled by the n_estimators parameter in the constructor. Why does this happen? Each ML method tries to minimize the error, which from a mathematical perspective can usually be decomposed into bias and variance [+noise] (see the bias-variance dilemma/tradeoff). Bias is simply how far from true values your model has to end up in the expectation - this part of the error usually comes from some prior assumptions, such as using a linear model for a nonlinear problem etc. Variance is how much your results differ when you train on different subsets of data (or use different hyperparameters; in the case of randomized methods the random seed is such a parameter). Hyperparameters are initialized by us and parameters are learnt by the model itself in the training process. Finally - noise is the irreducible error coming from the problem itself (or data representation). Thus, in your case - you simply encountered a model with high variance; decision trees are well known for their extremely high variance (and small bias). Thus to reduce variance, Breiman proposed the specific bagging method, known today as Random Forest. The larger the forest - the stronger the effect of variance reduction. In particular - a forest with 1 tree has huge variance, a forest of 1000 trees is nearly deterministic for moderate size problems. To sum up, what can you do? Increase the number of trees - this has to work, and is a well understood and justified method treat random_seed as a hyperparameter during your evaluation, because this is exactly that - a piece of meta knowledge you need to fix beforehand if you do not wish to increase the size of the forest.
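A quick way to watch the variance shrink as the forest grows; X_train, X_test, y_train, y_test are assumed to exist, and the sizes are illustrative:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

for n in (10, 100, 1000):
    scores = []
    for seed in range(20):
        clf = RandomForestClassifier(n_estimators=n, random_state=seed)
        scores.append(f1_score(y_test, clf.fit(X_train, y_train).predict(X_test)))
    # the spread across seeds should fall as n_estimators rises
    print(n, np.mean(scores), np.std(scores))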
Can someone explain why the random_state parameter affects the model so much? I have a RandomForestClassifier model and want to set the random_state (for reproducibility purposes), but depending on the value I use I get very different values on my overall evaluation metric (F1 score). For example, I tried to fit the same model with 100 different random_state values and after the training and testing the smallest F1 was 0.64516129 and the largest 0.808823529. That is a huge difference. This behaviour also seems to make it very hard to compare two models. Thoughts?
0
1
743
0
35,964,006
0
1
0
0
1
false
1
2016-03-12T21:34:00.000
0
1
0
Can't find pandas in data nitro
35,963,580
0
python,pandas,datanitro
DataNitro is probably using a different copy of Python on your machine. Go to Settings in the DataNitro ribbon, uncheck "use default Python", and select the Canopy python directory manually. Then, restart Excel and see if importing works.
when I try to import pandas using the data nitro shell, I get the error that there is no module named pandas. I have pandas through the canopy distribution, but somehow the data nitro shell isn't "finding" it. I suspect this has to do with the directory in which pandas is stored, but I don't know how to "extract" pandas from that directory and put it into the appropriate directory for data nitro. Any ideas would be super appreciated. Thank you!!
0
1
193
0
47,130,715
0
1
0
0
4
false
2
2016-03-13T00:16:00.000
0
5
0
pycharm error while importing, even though it works in the terminal
35,964,994
0
python,macos,scikit-learn,pycharm,tensorflow
I had a similar problem. My code was not working on PyCharm Professional. I had PyCharm CE previously installed and it worked from there. I had configured PyCharm CE a while ago and I had forgotten what setup I used, but if issues persist, make sure that the packages are installed under Preferences > Project > Project Interpreter.
I have installed the packages TensorFlow and scikit_learn and all their dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn, the 2.7.6 launcher says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn, the 2.7.10 launcher says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow, the 2.7.6 launcher says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow, the 2.7.10 launcher says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried searching the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly from the source code, and it always gives the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
0
1
2,043
0
40,260,973
0
1
0
0
4
true
2
2016-03-13T00:16:00.000
0
5
0
pycharm error while importing, even though it works in the terminal
35,964,994
1.2
python,macos,scikit-learn,pycharm,tensorflow
At the end, I ended up creating a virtual environment, reinstalling everything in there, and calling it through pycharm. I am not entirely sure what was the problem between conda and pycharm, I probably messed up somewhere. I am now using a different virtual environment depending on the project and I am happier than ever :).
I have installed the packages TensorFlow and scikit_learn and all their dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn, the 2.7.6 launcher says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn, the 2.7.10 launcher says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow, the 2.7.6 launcher says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow, the 2.7.10 launcher says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried searching the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly from the source code, and it always gives the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
0
1
2,043
0
40,121,055
0
1
0
0
4
false
2
2016-03-13T00:16:00.000
0
5
0
pycharm error while importing, even though it works in the terminal
35,964,994
0
python,macos,scikit-learn,pycharm,tensorflow
Add 'DYLD_LIBRARY_PATH=/usr/local/cuda/lib' to the Python environment variables: Run -> Edit Configurations -> Environment variables. Hope it works.
I have installed the packages TensorFlow and scikit_learn and all their dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn, the 2.7.6 launcher says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn, the 2.7.10 launcher says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow, the 2.7.6 launcher says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow, the 2.7.10 launcher says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried searching the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly from the source code, and it always gives the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
0
1
2,043
0
38,749,298
0
1
0
0
4
false
2
2016-03-13T00:16:00.000
0
5
0
pycharm error while importing, even though it works in the terminal
35,964,994
0
python,macos,scikit-learn,pycharm,tensorflow
You should start PyCharm from the terminal: cd /usr/lib/pycharm-community/bin && ./pycharm.sh
I have installed the packages TensorFlow and scikit_learn and all their dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn, the 2.7.6 launcher says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn, the 2.7.10 launcher says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow, the 2.7.6 launcher says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow, the 2.7.10 launcher says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried searching the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly from the source code, and it always gives the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
0
1
2,043
0
44,932,978
0
0
0
0
1
false
11
2016-03-13T17:45:00.000
-1
2
0
pyspark partitioning data using partitionby
35,973,590
-0.099668
python,apache-spark,pyspark,partitioning,rdd
I recently used partitionBy. What I did was restructure my data so that all the records I want in the same partition have the same key, which in turn is a value from the data. My data was a list of dictionaries, which I converted into tuples with a key from the dictionary. Initially partitionBy was not keeping the same keys in the same partition. But then I realized the keys were strings. I cast them to int, but the problem persisted. The numbers were very large. I then mapped these numbers to small numeric values and it worked. So my takeaway was that the keys need to be small integers.
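A minimal sketch of keying by a field before partitioning; the rdd of (serial_number, student_name) records is assumed to exist already:

# key each record by student_name, then hash-partition on that key
pairs = rdd.map(lambda rec: (rec[1], rec))
partitioned = pairs.partitionBy(100)          # same hash(key) % 100 -> same partition
print(partitioned.glom().map(len).collect())  # record count per partition, for inspection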
I understand that the partitionBy function partitions my data. If I use rdd.partitionBy(100) it will partition my data by key into 100 parts, i.e. data associated with similar keys will be grouped together. Is my understanding correct? Is it advisable to have the number of partitions equal to the number of available cores? Does that make processing more efficient? What if my data is not in key,value format. Can I still use this function? Let's say my data is serial_number_of_student,student_name. In this case can I partition my data by student_name instead of the serial_number?
0
1
23,512
0
37,115,845
0
0
0
0
1
true
1
2016-03-13T19:48:00.000
0
2
0
Backpropagation with Momentum using Scikit-Learn
35,974,957
1.2
python,scikit-learn,backpropagation,momentum
If anybody ever needs an answer for this, I actually decided to run everything on a Linux VM. I then followed the instructions to install the dev version and everything(well almost) worked perfectly. Running it on Linux is way easier than Windows because you can just install the package from git and run it without having to download required software to compile it. I still struggled a little bit though.
I'm trying to use Scikit-Learn's Neural Network to classify my dataset using a Backpropagation with Momentum. I need to specify these parameters: Hidden neurons, Hidden layers, Training set, Learning rate and Momentum. I found MLPClassifier in Sklearn.neural_network package. The problem is that this package is part of Scikit-learn V0.18 which is a dev version. Is there a way I could use Scikit-Learn V0.17 to do this? Using Anaconda, but I can change that if it causes problems.
0
1
903
0
35,976,436
0
0
0
0
1
true
0
2016-03-13T21:38:00.000
0
1
0
vector changes to matrix at computation
35,976,192
1.2
python,python-2.7,matrix,vector,linear-algebra
I managed to fix it by using a temp variable, setting it to the correct size, and iterating over dsdb1. I still don't know what caused the bug.
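For what it's worth, the shapes in the question explain the error: numpy broadcasting turns a (5,1) array combined with a (5,) array into a (5,5) result. A standalone sketch of the problem and a likely fix (tau is just a placeholder scalar):

import numpy as np

tau = 0.1
b1 = np.zeros((5, 1))
dsdb1 = np.ones(5)                  # shape (5,), not (5,1)
print((b1 + tau * dsdb1).shape)     # (5, 5) -- the surprising broadcast
b1 += tau * dsdb1.reshape(-1, 1)    # make it a column first; now the shapes match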
This is probably a rookie mistake that I'm missing somewhere, but I can't for the life of me find anything related to my problem on the web. I have a vector b1 of size 5 by 1, and I have another vector dsdb1 which is also 5 by 1. When I write b1 += tau*dsdb1 I get the error "non-broadcastable output operand with shape (5,1) doesn't match the broadcast shape (5,5)" Now, neither of these is a matrix. I even deleted this line and instead printed both sizes for b1 and dsdb1. For b1 it printed (5,1) and for dsdb1 it printed (5,). tau is just a scalar. Why is it changing dsdb1 to a 5 by 5 matrix when computing?
0
1
23
0
52,134,247
0
0
0
0
1
false
8
2016-03-14T12:39:00.000
0
3
1
Using Scikit-learn google app engine
35,987,785
0
python,python-2.7,google-app-engine,scikit-learn
The newly-released 2nd Generation Python 3.7 Standard Environment (experimental) can run all modules. It's still in beta, though.
I am trying to deploy a python2.7 application on google app engine. It uses a few modules like numpy, flask, pandas and scikit-learn. Though I am able to install and use other modules, installing scikit-learn in the lib folder of the project gives the following error: Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/deploynew.py", line 6, in import sklearn File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__init__.py", line 56, in from . import __check_build File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build/__init__.py", line 46, in raise_build_error(e) File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build/__init__.py", line 41, in raise_build_error %s""" % (e, local_dir, ''.join(dir_content).strip(), msg)) ImportError: dynamic module does not define init function (init_check_build) ___________________________________________________________________________ Contents of /base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build: setup.pyc __init__.py _check_build.so setup.py __init__.pyc ___________________________________________________________________________ It seems that scikit-learn has not been built correctly. If you have installed scikit-learn from source, please do not forget to build the package before using it: run python setup.py install or make in the source directory. If you have used an installer, please check that it is suited for your Python version, your operating system and your platform. Is there any way of using scikit-learn on google app engine?
0
1
1,664
0
36,008,452
0
0
0
0
1
false
0
2016-03-15T06:15:00.000
0
1
0
How do I add a quadratic constraint using coeff in the scipsuite python interface
36,003,924
0
python,scip
This is currently not supported. You need to loop through your quadratic constraints and add them one after the other using the expression method.
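A hedged PySCIPOpt sketch of building such a constraint from a coefficient matrix; the variable bounds and the example Q are made up:

from pyscipopt import Model, quicksum

n = 3
Q = [[3.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 5.0]]  # encodes 3x^2 + y^2 + 5z^2
model = Model()
x = [model.addVar(lb=None) for _ in range(n)]            # lb=None -> unbounded below
model.addCons(quicksum(Q[i][j] * x[i] * x[j]
                       for i in range(n) for j in range(n)) <= 10.0)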
How do I add a quadratic constraint using the scip python interface? In one of the examples, I see something like model.addCons(x*x+y*y<=6) However, since I have a lot of variables(x1..xn and my constraint is of the form x'Qx<=0.2, where x is n*1 and Q is n*n), this method is rather impossible. How can I put the quadratic constraint in a python dictionary of coeffs as I do the linear constraints? (coeffs={x**2:3.0,y**2:1.0,z**2:5.0} for example if I want 3x^2+y^2+5z^2<=10)
0
1
103
0
36,019,131
0
0
0
0
1
true
3
2016-03-15T17:51:00.000
3
1
0
Identifying overfitting in a cross validated SVM when tuning parameters
36,018,586
1.2
python,scikit-learn,svm,cross-validation,grid-search
Overfitting is generally associated with high variance, meaning that the model parameters that would result from being fitted to some realized data set have a high variance from data set to data set. You collected some data, fit some model, got some parameters ... you do it again and get new data and now your parameters are totally different. One consequence of this is that in the presence of overfitting, usually the training error (the error from re-running the model directly on the data used to train it) will be very low, or at least low in contrast to the test error (running the model on some previously unused test data). One diagnostic that is suggested by Andrew Ng is to separate some of your data into a testing set. Ideally this should have been done from the very beginning, so that happening to see the model fit results inclusive of this data would never have the chance to impact your decision. But you can also do it after the fact as long as you explain so in your model discussion. With the test data, you want to compute the same error or loss score that you compute on the training data. If training error is very low, but testing error is unacceptably high, you probably have overfitting. Further, you can vary the size of your test data and generate a diagnostic graph. Let's say that you randomly sample 5% of your data, then 10%, then 15% ... on up to 30%. This will give you six different data points showing the resulting training error and testing error. As you increase the training set size (decrease testing set size), the shape of the two curves can give some insight. The test error will be decreasing and the training error will be increasing. The two curves should flatten out and converge with some gap between them. If that gap is large, you are likely dealing with overfitting, and it suggests using a large training set and trying to collect more data if possible. If the gap is small, or if the training error itself is already too large, it suggests model bias is the problem, and you should consider a different model class altogether. Note that in the above setting, you can also substitute a k-fold cross validation for the test set approach. Then, to generate a similar diagnostic curve, you should vary the number of folds (hence varying the size of the test sets). For a given value of k, for each subset used for testing, the other (k-1) subsets are used for training, and the error is averaged over each way of assigning the folds. This gives you both a training error and a testing error metric for a given choice of k. As k becomes larger, the training set sizes become bigger (for example, if k=10, then training errors are reported on 90% of the data) so again you can see how the scores vary as a function of training set size. The downside is that CV scores are already expensive to compute, and repeated CV for many different values of k makes it even worse. One other cause of overfitting can be too large a feature space. In that case, you can try to look at importance scores for each of your features. If you prune out some of the least important features and then re-do the above overfitting diagnostic and observe improvement, it's also some evidence that the problem is overfitting and you may want to use a simpler set of features or a different model class. On the other hand, if you still have high bias, it suggests the opposite: your model doesn't have enough feature space to adequately account for the variability of the data, so instead you may want to augment the model with even more features.
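scikit-learn can draw exactly that diagnostic; a minimal sketch (in older versions learning_curve lives in sklearn.learning_curve rather than sklearn.model_selection, and X, y and the SVM settings are assumed to be yours):

import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

sizes, train_scores, test_scores = learning_curve(
    SVC(kernel="rbf", C=1.0, gamma=0.1), X, y,
    cv=5, train_sizes=np.linspace(0.1, 1.0, 5))
# a persistent large gap between the two mean curves points at overfitting
print(sizes, train_scores.mean(axis=1), test_scores.mean(axis=1))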
I have an rbf SVM that I'm tuning with gridsearchcv. How do I tell if my good results are actually good results or whether they are overfitting?
0
1
1,516
0
43,882,395
0
0
0
0
1
false
0
2016-03-15T23:57:00.000
0
2
0
Appending to NumPy (Python) Array
36,024,460
0
python,arrays,numpy
You should not append to arrays if you can avoid it, due to efficiency issues. Appending means changing the allocated memory size, which can run into non-contiguous memory space, hence inefficient allocation or reallocation would be necessary. This can slow down your program a lot, especially for large arrays. If you are implementing a fixed time-step Runge-Kutta you know beforehand how many points your solution is going to have at time T. It's N = (T-t0)/h+1, where T is the final time, t0 the initial time, and h the time step. You can initialize your array with zeros (using states = np.zeros((6, N)) for your six state variables) and fill the values as you go, associating the index i with the time t[i] = t0 + i*h. This would be inside the loop: states[:,i+1] = states[:,i] + RK4_step(states[:,i]), where RK4_step(states[:,i]) is a function returning an array (column) with the variation of the state values in one step of the Runge-Kutta method. Even if your time-step is variable you should still do this, but with nonuniform times t[i]. Or, you could use scipy.integrate.odeint(), which returns the solution of an ODE at the required times.
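A compact skeleton of that preallocation pattern; rk4_step here is a placeholder for your actual stage computation, and the step sizes are made up:

import numpy as np

t0, T, h = 0.0, 10.0, 0.01
N = int(round((T - t0) / h)) + 1
states = np.zeros((6, N))                    # one column per time step
states[:, 0] = [0, 2300, 0, 0, -1600, 500]   # initial state from the question

def rk4_step(s):
    return np.zeros(6)                       # stand-in for (K1 + 2*K2 + 2*K3 + K4)/6

for i in range(N - 1):
    states[:, i + 1] = states[:, i] + rk4_step(states[:, i])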
For starters, I am doing a Runge-Kutta on a three-DOF NumPy array. My array looks like this: states = [[X], [Vx], [Y], [Vy], [Z], [Vz]] I run my Runge-Kutta, and get my four K values, which I transpose with [newaxis]. So when I try to append the new states to my states array as follows: states = append(states, states[:,i] + (K1.T + 2 * K2.T + 2 * K3.T + K4.T)/6, 1) where "i" is a counter that starts at 0 and counts up for each iteration. However, when I run my code my resulting states array is not two columns of six elements. It appears that I am appending a row vector instead of a column vector to my states array. I ran the code with two elements (X, Vx) in the column, and everything appended just fine (or at least my result made sense). I have tried forcing the result of my Runge-Kutta to be a column vector, but that messes up my calculation of the K-values. I have tried variations of my append code, and still have the same result. This is a clone of a Matlab code, and I have been unable to find anything on NumPy arrays and indexing that helps me. Any help is appreciated. Thanks. UPDATE: states[:,0] = [[0], [2300], [0], [0], [-1600], [500]] - original states[:,1] = [[2300], [2100], [0], [0], [-2100], [450]] - append states = [[0, 2300], [2300, 2100], [0, 0], [0, 0], [-1600, -2100], [500, 450]] - final These are column vectors.
0
1
536
0
36,033,515
0
1
0
0
3
false
1
2016-03-16T10:33:00.000
1
3
0
DeprecationWarning with matplotlib and dateutil
36,033,095
0.066568
python,datetime,matplotlib,python-dateutil
First of all, it is not an error; it's a warning. Second, most likely the problem is not your problem but rather a problem in Matplotlib, which needs to fix how it calls a function or some method from python-dateutil. Most likely, you can ignore this warning, and it will be fixed in the next Matplotlib version.
Since I've installed the latest version of matplotlib (1.5.1), I get the following error when I try to plot values with datetime as X. DeprecationWarning: Using both 'count' and 'until' is inconsistent with RFC 2445 and has been deprecated in dateutil. Future versions will raise an error. Has anyone met this error and knows how to correct it?
0
1
842
0
36,626,778
0
1
0
0
3
false
1
2016-03-16T10:33:00.000
1
3
0
DeprecationWarning with matplotlib and dateutil
36,033,095
0.066568
python,datetime,matplotlib,python-dateutil
Best solution: in the file matplotlib/dates.py, line 830, self.rule.set(dtstart=start, until=stop, count=self.MAXTICKS + 1) should be commented out.
Since I've installed the latest version of matplotlib (1.5.1), I get the following error when I try to plot values with datetime as X. DeprecationWarning: Using both 'count' and 'until' is inconsistent with RFC 2445 and has been deprecated in dateutil. Future versions will raise an error. Has anyone met this error and knows how to correct it?
0
1
842
0
36,423,287
0
1
0
0
3
false
1
2016-03-16T10:33:00.000
3
3
0
DeprecationWarning with matplotlib and dateutil
36,033,095
0.197375
python,datetime,matplotlib,python-dateutil
The issue has been fixed in matplotlib, but not released in a finalised version (>= 1.5.2). I had to install the current working version with pip install git+https://github.com/matplotlib/matplotlib
Since I've installed the latest version of matplotlib (1.5.1), I get the following error when I try to plot values with datetime as X. DeprecationWarning: Using both 'count' and 'until' is inconsistent with RFC 2445 and has been deprecated in dateutil. Future versions will raise an error. Has anyone met this error and knows how to correct it?
0
1
842
0
52,089,783
0
1
0
0
1
false
22
2016-03-17T03:26:00.000
30
4
0
Pandas row to json
36,051,134
1
python,json,pandas
Looping over indices is very inefficient. A faster technique: df['json'] = df.apply(lambda x: x.to_json(), axis=1)
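Building on that one-liner, writing each row out as its own file might look like this (the output naming scheme is made up):

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
for i, js in enumerate(df.apply(lambda row: row.to_json(), axis=1)):
    with open("row_{}.json".format(i), "w") as f:
        f.write(js)  # one json file per dataframe row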
I have a dataframe in pandas and my goal is to write each row of the dataframe as a new json file. I'm a bit stuck right now. My intuition was to iterate over the rows of the dataframe (using df.iterrows) and use json.dumps to dump the file but to no avail. Any thoughts?
0
1
43,885
0
36,081,837
0
0
0
0
1
false
2
2016-03-18T10:00:00.000
1
1
0
Random forest tree growing algorithm
36,081,370
0.197375
python,algorithm,machine-learning,random-forest,decision-tree
Q1: You shouldn't remove the features from M. Otherwise it will not be able to detect some types of relationships (e.g. linear relationships). Maybe you can stop earlier; with your condition it might go down to leaves with only 1 sample, which will have no statistical significance. So it's better to stop, say, when the number of samples at a leaf is <= 3. Q2: For continuous features, maybe you can bin them into groups and use those to figure out a splitting point; a common choice is shown in the sketch below.
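A standard way to get candidate thresholds for a continuous feature is the midpoints between consecutive sorted unique values, rather than one child per value (the feature values here are arbitrary):

import numpy as np

values = np.array([2.0, 7.5, 3.1, 7.5, 9.9])
u = np.unique(values)                  # sorted unique values
thresholds = (u[:-1] + u[1:]) / 2.0    # candidate binary-split points to score by information gain
print(thresholds)                      # [ 2.55  5.3   8.7 ]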
I'm doing a Random Forest implementation (for classification), and I have some questions regarding the tree growing algorithm mentioned in literature. When training a decision tree, there are 2 criteria to stop growing a tree: a. Stop when there are no more features left to split a node on. b. Stop when the node has all samples in it belonging to the same class. Based on that, 1. Consider growing one tree in the forest. When splitting a node of the tree, I randomly select m of the M total features, and then from these m features I find that one feature with maximum information gain. After I've found this one feature, say f, should I remove this feature from the feature list, before proceeding down to the children of the node? If I don't remove this feature, then this feature might get selected again down the tree. If I implement the algorithm without removing the feature selected at a node, then the only way to stop growing the tree is when the leaves of the tree become "pure". When I did this, I got the "maximum recursion depth" reached error in Python, because the tree couldn't reach that "pure" condition earlier. The RF literature even those written by Breiman say that the tree should be grown to the maximum . What does this mean? 2. At a node split, after selecting the best feature to split on (by information gain), what should be the threshold on which to split? One approach is to have no threshold, create one child node for every unique value of the feature; but I've continuous-valued features too, so that means creating one child node per sample!
0
1
586
0
36,104,821
0
0
0
0
1
false
1
2016-03-18T13:58:00.000
1
1
0
Scipy nnls algorithm does not have termination tolerance option
36,086,330
0.197375
python,scipy,spyder
You can use lsq_linear which is available in scipy version 0.17.
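A hedged sketch of lsq_linear used as an NNLS with an explicit termination tolerance; A and b are random stand-ins:

import numpy as np
from scipy.optimize import lsq_linear

A = np.random.rand(20, 5)
b = np.random.rand(20)
# bounds=(0, inf) reproduces the non-negativity constraint; tol is the stopping tolerance
res = lsq_linear(A, b, bounds=(0, np.inf), tol=1e-10)
print(res.x, np.linalg.norm(A.dot(res.x) - b))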
I am using the nnls algorithm from scipy and am shocked to find that I cannot control the final tolerance, as can be done in Matlab using TolX, i.e., the termination tolerance. Modifying the Fortran code or even reading it is extremely hard. I am only new to python since a month. I need to enforce $\|Ax - b\|_2 \leq \epsilon \|b\|_2$, where $\epsilon$ is the tolerance. How can I do this, other than writing my own nnls? I found another link that had to add an extra constraint, but that was an equality constraint!
0
1
129
0
36,143,671
0
0
0
0
1
true
1
2016-03-21T21:04:00.000
3
2
0
Spark running on EC2 vs EMR
36,141,570
1.2
python,amazon-web-services,amazon-ec2,apache-spark,amazon-emr
EMR provides simple-to-use Hadoop/Spark as a service. You just have to select the components you want installed (Spark, Hadoop), their versions, how many machines you want to use and a couple of other options, and then it installs everything for you. Since you are students I assume you don't have experience with automation tools like Ansible, Puppet or Chef and probably you never had to maintain your own Hadoop cluster. If that is the case I would definitely suggest EMR. As an experienced Hadoop/Spark user, at the same time I can tell you that it has its own limitations. When I used it 6 months ago I wanted to use the latest version of EMR (4.0 if I remember correctly) because it supported the latest version of Spark, and I had a few headaches customising it to install Java 8 instead of the provided Java 7. I believe those were the early days of their Java 8 support and they should have fixed that by now. But this is what you miss with all the "all included" solutions: flexibility, especially if you are an expert user.
We are students working on a graduation project related to Data Science; we are developing a Recommender Engine using Spark with Python (PySpark), with an Android application as the interface for the users, and we have faced a lot of roadblocks. One of them was how to keep the Spark script up and running on a cloud for fast processing and real-time results. All we know about EMR is that it's newer than EC2 and already has Hadoop installed on it. We are still having a hard time deciding which to use and what the differences are between them when dealing with Spark.
1
1
3,688
0
36,157,336
0
0
1
0
1
false
0
2016-03-22T12:59:00.000
0
1
0
How to use freqz to get response of complex FIR filter
36,155,061
0
python,numpy,scipy
Never mind, it is absolutely possible to pass a set of complex coefficients to freqz. I got confused because I tried to plot the response without specifying that I wanted the absolute value of h, which rendered the warning: ComplexWarning: Casting complex values to real discards the imaginary part. A trap for young players like myself!
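A small sketch of the whole-circle response of a complex FIR; the taps are made up, and whole=True exposes the negative-frequency half that a complex filter makes asymmetric:

import numpy as np
from scipy.signal import freqz

taps = np.exp(2j * np.pi * 0.1 * np.arange(16))  # an arbitrary complex FIR
w, h = freqz(taps, worN=512, whole=True)         # w runs over [0, 2*pi)
w = np.where(w >= np.pi, w - 2 * np.pi, w)       # wrap to [-pi, pi)
order = np.argsort(w)
mag = np.abs(h)[order]                           # abs() avoids the ComplexWarning
print(w[order][:3], mag[:3])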
I'm trying to analyse an asymmetric FIR filter with complex coefficients. I know I can use the scipy function freqz to analyse the frequency response of an FIR or IIR filter with real coefficients. At the moment I'm just using a regular FFT of the FIR filter: I use fftshift to put the negative frequencies in front of 0, then fftfreq to calculate the frequency bins, and finally I add the carrier frequency to all the frequencies in the array given by fftfreq. Anyway, I'm pretty sure that that's the wrong way.
0
1
631
0
50,800,687
0
0
0
0
1
false
40
2016-03-24T07:56:00.000
0
6
0
How to get the samples in each cluster?
36,195,457
0
python,scikit-learn,cluster-analysis,k-means
You can simply store the labels in an array and convert the array to a data frame. Then merge the data that you used to create K-means with the new data frame of clusters. Display the dataframe; now you should see each row with its corresponding cluster. If you want to list all the data for a specific cluster, use something like data.loc[data['cluster_label_name'] == 2], assuming 2 is the cluster you want right now.
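A minimal end-to-end sketch of attaching the cluster id back onto each sample (toy data):

import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({"x": [1, 2, 10, 11], "y": [1, 2, 10, 11]})
km = KMeans(n_clusters=2).fit(df[["x", "y"]])
df["cluster"] = km.labels_             # labels_ is aligned with the input row order
print(df[df["cluster"] == 1])          # every point that landed in cluster 1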
I am using the sklearn.cluster KMeans package. Once I finish the clustering, if I need to know which values were grouped together, how can I do it? Say I had 100 data points and KMeans gave me 5 clusters. Now I want to know which data points are in cluster 5. How can I do that? Is there a function where I give the cluster id and it will list out all the data points in that cluster?
0
1
62,619
0
36,207,559
0
0
0
0
1
false
2
2016-03-24T12:18:00.000
1
1
0
Running tensorflow model training from dataflow
36,200,052
0.197375
python,tensorflow,google-cloud-dataflow
There is nothing planned at this time. If you can run the Tensorflow training on a single machine (it sounds like this is what you were doing with Spark) then it should be possible to do the training within a DoFn of a Dataflow pipeline.
I am playing around with tensorflow and today I noticed that google also open-sourced a Python SDK for their dataflow. Currently when I need to train and evaluate several networks in parallel I usually either use luigi and run one model training after another, or I use spark and perform each model training within the map step. This whole data processing is just a part of the pipeline. I am wondering if there is, or if there is planned, something like performing the tensorflow model training step inside of the dataflow pipeline? Is there currently some best practice around this? Or do I have to run each model setting within the map step? I went through the documentation and for now it seems to be really vague, so I'm asking here if someone has some experience with this.
0
1
310
0
36,209,490
0
1
0
0
1
false
0
2016-03-24T20:23:00.000
1
1
0
No internet access but Pip wants to look for newer version
36,209,363
0.197375
python,matplotlib,installation,pip
On the computer with the internet, use pip wheel matplotlib to download all the wheel files needed for the installation (including dependencies). Then, on the computer without the internet, use pip install matplotlib --find-links="<directory of the wheel files>". This will install matplotlib from the local files.
I'm trying to install matplotlib on a server that is sequestered and has no internet connection. When I run pip with this command: pip.exe install matplotlib-1.5.1-cp27-none-win_amd64.whl --no-index I get this message: Could not find a version that satisfies the requirement pyparsing!=2.0.4,>=1.5.6 (from matplotlib==1.5.1) (from versions: ) No matching distribution found for pyparsing!=2.0.4,>=1.5.6 (from matplotlib==1.5.1) I'm relatively new to this and have only used pip once or twice before, but always with an internet connection. This computer is sequestered on a DoD compound and is for sandbox development only; I have to go through three layers of RDP in order to get to it. I've tried everything I can think of or find, but no luck. Any suggestions are greatly appreciated. Thanks
0
1
518
0
36,227,222
0
0
0
0
1
false
0
2016-03-25T01:44:00.000
2
1
0
H2OFrame converts dict to all zeros
36,212,815
0.379949
python,django,pandas,scikit-learn,h2o
It seems the Pandas DataFrame to H2OFrame conversion works fine outside Django, but fails inside Django. The problem might be with Django's pre_save not allowing the writing/reading of the temporary .csv file that H2O creates when ingesting a python object. A possible workaround is to explicitly write the Pandas DataFrame to a .csv file with model_data_frame.to_csv(<path>, index=False) and then import the file into H2O with h2o.import_file(<path>).
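A minimal sketch of that workaround, assuming an initialized h2o cluster and reusing the model_data_frame from the question; the path is arbitrary:

import h2o

h2o.init()  # or reuse the already-running cluster

csv_path = '/tmp/model_input.csv'  # any location the Django process can write to
model_data_frame.to_csv(csv_path, index=False)
frame = h2o.import_file(csv_path)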
I am taking input values from a Django model admin screen and, on pre_save, calling h2o to do predictions for other values and save them. Currently I convert my input from pandas (trying to work with sklearn preprocessing easily here) by using: modelH2OFrame = h2o.H2OFrame(python_obj = model_data_frame.to_dict('list')). It parses and loads. Hell, it even creates a frame with values when I do it step by step. BUT. When I run this inside of the Django pre_save, the H2OFrame comes back completely empty. Ideas for why this may be happening? Sometimes I get errors connecting to the h2o cluster, or timeouts; maybe that is a related issue? I load the H2O models in the pre_save call, do the predictions, allocate them to model fields, and then shut down the h2o cluster (all in one function).
1
1
276
0
66,138,989
0
0
0
0
2
false
27
2016-03-25T10:42:00.000
0
4
0
Parameters of detectMultiScale in OpenCV using Python
36,218,385
0
python,opencv,object-detection
The detectMultiScale function is used to detect faces. It returns a rectangle with coordinates (x, y, w, h) around each detected face. It takes three common arguments: the input image, scaleFactor, and minNeighbors. scaleFactor specifies how much the image size is reduced at each scale. In a group photo, some faces may be nearer to the camera than others, and such faces naturally appear more prominent than the ones behind; this factor compensates for that. minNeighbors specifies how many neighbours each candidate rectangle should have in order to be retained, i.e. how many neighbouring detections a rectangle needs before it is called a face. You may have to tweak these values to get the best results; they are obtained by trial and error over a specific range.
I am not able to understand the parameters passed to detectMultiScale. I know that the general syntax is detectMultiScale(image, rejectLevels, levelWeights). However, what do the parameters rejectLevels and levelWeights mean? And what are the optimal values for detecting objects? I want to use this to detect the pupil of the eye.
0
1
78,382
0
55,628,240
0
0
0
0
2
false
27
2016-03-25T10:42:00.000
20
4
0
Parameters of detectMultiScale in OpenCV using Python
36,218,385
1
python,opencv,object-detection
Amongst these parameters, you need to pay more attention to four of them: scaleFactor – Parameter specifying how much the image size is reduced at each image scale. Basically, the scale factor is used to create your scale pyramid. In more detail: your model has a fixed size defined during training, which is visible in the XML, and a face of that size is detected in the image if present. By rescaling the input image, however, a larger face can be resized to a smaller one, making it detectable by the algorithm. 1.05 is a good value for this; it means you use a small resizing step, i.e. reduce the size by only 5% per step, which increases the chance that a size matching the model is found. It also means the algorithm works more slowly, since it is more thorough. You may increase it to as much as 1.4 for faster detection, at the risk of missing some faces altogether. minNeighbors – Parameter specifying how many neighbors each candidate rectangle should have to retain it. This parameter affects the quality of the detected faces: a higher value results in fewer detections but of higher quality. 3 to 6 is a good range for it. minSize – Minimum possible object size; objects smaller than this are ignored. This parameter determines the smallest size you want to detect, and you decide it. Usually, [30, 30] is a good start for face detection. maxSize – Maximum possible object size; objects bigger than this are ignored. This determines the largest size you want to detect; again, you decide. Usually you don't need to set it manually, since the default assumes no upper limit on the size of the face.
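As a concrete illustration, here is a minimal sketch of a detection call using these parameters. It assumes the opencv-python package (which ships Haar cascade files under cv2.data.haarcascades) and an arbitrary image file; the tuning values are illustrative, not optimal for pupil detection:

import cv2

# Load a bundled frontal-face cascade; swap in an eye cascade for pupils
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

gray = cv2.cvtColor(cv2.imread('photo.jpg'), cv2.COLOR_BGR2GRAY)  # hypothetical image
faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.05,   # shrink the image by 5% at each pyramid level
    minNeighbors=5,     # require 5 neighbouring detections to keep a candidate
    minSize=(30, 30),   # ignore candidates smaller than 30x30 pixels
)
for (x, y, w, h) in faces:
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 2)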
I am not able to understand the parameters passed to detectMultiScale. I know that the general syntax is detectMultiScale(image, rejectLevels, levelWeights). However, what do the parameters rejectLevels and levelWeights mean? And what are the optimal values for detecting objects? I want to use this to detect the pupil of the eye.
0
1
78,382
0
38,489,976
0
0
0
0
1
false
0
2016-03-25T14:13:00.000
0
1
0
How to resume training from *.meta in tensorflow?
36,221,588
0
python,tensorflow
Not sure if this will work for you, but at least for DNNClassifiers you can specify the model_dir parameter when first creating the object, and training will store checkpoints and other files in that directory. You can then come back later, create another DNNClassifier specifying the same model_dir, and that will restore your pre-trained model so you can continue the training.
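A hedged sketch of the idea; the feature columns, layer sizes, and directory are made up, and the exact estimator class depends on your TensorFlow version (tf.estimator shown here):

import tensorflow as tf

feature_cols = [tf.feature_column.numeric_column('x', shape=[4])]

# First run: train() writes checkpoints and graph files into model_dir
clf = tf.estimator.DNNClassifier(hidden_units=[10, 10],
                                 feature_columns=feature_cols,
                                 model_dir='/tmp/my_model')

# Later: constructing another estimator with the same model_dir restores the
# latest checkpoint, and further train() calls resume from it
clf_resumed = tf.estimator.DNNClassifier(hidden_units=[10, 10],
                                         feature_columns=feature_cols,
                                         model_dir='/tmp/my_model')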
In the latest version of TensorFlow, when I save the model I find that two files are produced: model_xxx and model_xxx.meta. Does model_xxx.meta specify the network? Can I resume training using model_xxx and model_xxx.meta without specifying the network in the code? What about the training queue structure: is it stored in model_xxx.meta?
1
1
429
0
36,226,280
0
0
0
0
1
false
0
2016-03-25T18:42:00.000
0
1
0
Why does to_datetime produce only missing values for a float column in pandas?
36,225,950
0
python,datetime,numpy,pandas,types
The best way might be to avoid floats altogether. Preempt the conversion to numerics in read_table by specifying dtype, keeping the column in question as an object. to_datetime then handles it as intended. HT: BrenBarn in the comments.
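A minimal sketch, assuming a file with a YYYYMMDD column named Seriesname as in the question; the filename is arbitrary:

import pandas as pd

# Keep the column as strings so a missing field cannot force a float64 column
df = pd.read_table('data.txt', dtype={'Seriesname': object})
df['Seriesname'] = pd.to_datetime(df['Seriesname'],
                                  errors='coerce', format='%Y%m%d')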
I am confused by something in pandas 0.18.0. In my input csv data, a field is supposed to consist of dates as YYYYMMDD strings, but in some rows it is missing or misformatted. I want to represent this column as datetime, with dates where possible and missing values where not. I tried several options, and what got me furthest was not using parse_dates upon reading the table in (with read_table), but then coercing the conversion with pandas.to_datetime(DataFrame['Seriesname'], errors='coerce', format='%Y%m%d'). This is robust to typos where the number cannot represent a date (think '20100231', with the column imported as int64 first) and to strings that do not represent a number at all (think '2o1oo228', with the column an object upon import). What this procedure is not robust to is when the column contains only numbers but one field is empty. Then read_table imports the entire column as float64 (not int64, which has no missing value in numpy), and the conversion above produces all missing values, even for rows where the data makes sense. Is there a way around this?
0
1
61
0
64,681,838
0
0
0
0
3
false
231
2016-03-25T18:50:00.000
0
14
0
How to find which columns contain any NaN value in Pandas dataframe
36,226,083
0
python,pandas,dataframe,nan
df.isna() returns True for NaN values and False for the rest. So doing df.isna().any() will return True for any column containing a NaN and False for the rest.
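A small illustration with made-up data:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, np.nan], 'b': [3, 4]})
print(df.isna().any())                        # a: True, b: False
print(df.columns[df.isna().any()].tolist())   # ['a']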
Given a pandas dataframe containing possible NaN values scattered here and there: Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs?
0
1
303,847
0
68,703,088
0
0
0
0
3
false
231
2016-03-25T18:50:00.000
0
14
0
How to find which columns contain any NaN value in Pandas dataframe
36,226,083
0
python,pandas,dataframe,nan
import numpy as np

features_with_na = [feature for feature in dataframe.columns
                    if dataframe[feature].isnull().sum() > 0]
for feature in features_with_na:
    # .mean() gives the fraction of missing rows; multiply by 100 for a percentage
    print(feature, np.round(100 * dataframe[feature].isnull().mean(), 4), '% missing values')
print(features_with_na)

This will print the percentage of missing values for each column of the dataframe that has any, followed by the list of those columns.
Given a pandas dataframe containing possible NaN values scattered here and there: Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs?
0
1
303,847
0
47,418,973
0
0
0
0
3
false
231
2016-03-25T18:50:00.000
40
14
0
How to find which columns contain any NaN value in Pandas dataframe
36,226,083
1
python,pandas,dataframe,nan
You can use df.isnull().sum(). It shows all columns and the total NaNs of each feature.
Given a pandas dataframe containing possible NaN values scattered here and there: Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs?
0
1
303,847
0
36,270,781
0
1
0
0
1
false
1
2016-03-28T17:57:00.000
1
1
0
Easy way to classify words like "A lot", "A few", "some"
36,267,978
0.197375
python,nlp
Similarity is actually a difficult problem in NLP. I recommend you use Word2Vec and generate word embeddings for each word. Then you can compare the distance between each word pair and see if that works better than your approach. The key to improving the effectiveness of word embeddings is to pick a corpus that is large enough and specific to a domain close to your problem.
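A rough sketch of the API, assuming gensim 4.x (parameter names changed from older versions); the toy corpus below is far too small to learn useful vectors and only shows the calls:

from gensim.models import Word2Vec

sentences = [
    ["there", "were", "lots", "of", "errors"],
    ["there", "were", "many", "errors"],
    ["only", "a", "few", "errors", "remained"],
    ["some", "errors", "remained"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=0)

# Cosine similarity between the embeddings of two quantifier words
print(model.wv.similarity("lots", "many"))
print(model.wv.most_similar("few", topn=3))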
I am working on a project that needs to classify modifiers like "a lot", "a few", "lots", and "some" into minimum percentages. For example, "a lot" -> 80%. Right now I'm thinking of simply creating a large dictionary that relates these modifiers to numerical values, e.g. a few -> 15%, some -> 10%, lots -> 80%. However, this is very laborious and probably won't cover all scenarios. Is there an easier way to do this, or is there an NLP tool that already exists for this purpose, preferably in Python (or a database out there already)?
0
1
76