GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 41,795,121 | 0 | 0 | 0 | 0 | 2 | true | 5 | 2017-01-22T19:00:00.000 | 9 | 2 | 0 | Adding global attribute using xarray | 41,794,956 | 1.2 | python,netcdf,python-xarray | In Xarray, directly indexing a Dataset like hndl_nc['variable_name'] pulls out a DataArray object. To get or set attributes, index .attrs like hndl_nc.attrs['global_attribute'] or hndl_nc.attrs['global_attribute'] = 25.
You can access both variables and attributes using Python's attribute syntax like hndl_nc.variable_or_attribute_name, but this is a convenience feature that only works when the variable or attribute name does not conflict with a preexisting method or property, and cannot be used for setting. | Is there some way to add a global attribute to a netCDF file using xarray? When I do something like hndl_nc['global_attribute'] = 25, it just adds a new variable. | 0 | 1 | 6,426 |
0 | 46,549,251 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2017-01-22T19:00:00.000 | 6 | 2 | 0 | Adding global attribute using xarray | 41,794,956 | 1 | python,netcdf,python-xarray | I would add here that both Datasets and DataArrays can have attributes, both called with .attrs
e.g.
ds.attrs['global attr'] = 25
ds.variable_2.attrs['variable attr'] = 10 | Is there some way to add a global attribute to a netCDF file using xarray? When I do something like hndl_nc['global_attribute'] = 25, it just adds a new variable. | 0 | 1 | 6,426 |
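A minimal sketch of the .attrs usage described in the two answers above (the variable names here are illustrative, and writing the file assumes a netCDF backend such as netCDF4 is installed):

```python
# Minimal sketch: global vs. per-variable attributes in xarray (illustrative names).
import numpy as np
import xarray as xr

ds = xr.Dataset({"temperature": ("time", np.random.rand(4))})

ds.attrs["global_attribute"] = 25          # Dataset-level (global) attribute
ds["temperature"].attrs["units"] = "degC"  # DataArray-level (variable) attribute

print(ds.attrs)
ds.to_netcdf("with_attrs.nc")              # attributes are written to the netCDF file
```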
0 | 50,193,390 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-22T22:54:00.000 | 1 | 3 | 0 | 'Inverse' cumprod in pandas | 41,797,071 | 0.066568 | python,pandas | Just in case anyone else ends up here, let me provide a more generic answer.
Suppose your DataFrame column, Series, vector, whatever, X has n values. At an arbitrary position i you'd like to get
(X[i])*(X[i+1])*...*(X[n]),
which is equivalent to
(X[1]*X[2]*...*X[n]) / (X[1]*X[2]*...*X[i-1]).
Therefore, you may just do
inverse_cumprod = (np.prod(X) / np.cumprod(X)) * X | I have a data frame which contains dates as index and a value column storing growth percentage between consecutive dates (i.e. dates in the index). Suppose I want to compute 'real' values by setting a 100 basis at the first date of the index and then iteratively applying the % of growth. It is easy with the cumprod method.
Now, I want to set the 100 basis at the last date in the index. I thus need to compute, for each date in the index, the 'inverse' growth. Is there an easy (and pythonic) way to do this with pandas?
Regards,
Allia | 0 | 1 | 2,303 |
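A quick numerical check of the formula given above, on a toy Series of growth factors (the data is illustrative):

```python
# Verify the inverse-cumprod formula against a backwards cumulative product.
import numpy as np
import pandas as pd

X = pd.Series([1.02, 0.99, 1.05, 1.01])          # toy growth factors
inverse_cumprod = (np.prod(X) / np.cumprod(X)) * X

expected = X[::-1].cumprod()[::-1]                # cumulate from the end backwards
print(np.allclose(inverse_cumprod, expected))     # True
```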
0 | 42,111,038 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-23T12:14:00.000 | 2 | 2 | 0 | Tensorflow: Is preprocessing on TFRecord files faster than real-time data preprocessing? | 41,806,128 | 0.197375 | python,machine-learning,tensorflow,computer-vision,deep-learning | I have been wondering the same thing and have been disappointed with my during-training-time image processing performance. It has taken me a while to appreciate quite how big an overhead the image manipulation can be.
I am going to make myself a nice fat juicy preprocessed/augmented data file. Run it overnight and then come in the next day and be twice as productive!
I am using a single GPU machine and it seems obvious to me that piece-by-piece model building is the way to go. However, the workflow-maths may look different if you have different hardware. For example, on my Macbook-Pro tensorflow was slow (on CPU) and image processing was blinding fast because it was automatically done on the laptop's GPU. Now I have moved to a proper GPU machine, tensorflow is running 20x faster and the image processing is the bottleneck.
Just work out how long your augmentation/preprocessing is going to take, work out how often you are going to reuse it and then do the maths. | In Tensorflow, it seems that preprocessing could be done either during training time, when the batch is created from raw images (or data), or when the images are already static. Given that, theoretically, the preprocessing should take roughly equal time (if it is done using the same hardware), is there any practical disadvantage in doing data preprocessing (or even data augmentation) before training rather than during training in real time?
As a side question, could data augmentation even be done in Tensorflow if it was not done during training? | 0 | 1 | 2,142 |
0 | 41,815,502 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-01-23T19:07:00.000 | 4 | 2 | 0 | What are the relative advantages of extending NumPy in Cython vs Boost.Python? | 41,813,799 | 0.379949 | numpy,cython,boost-python | For small one shot problems, I tend to prefer cython, for larger integration with c++ code bases, prefer boost Python.
In part, it depends on the audience for your code. If you're working with a team with significant experience in python, but little experience of using C++, Cython makes sense. If you have a fixed code base with complex types to inter operate with, the boost python can end up being a little cheaper to get running.
Cython encourages you to write incrementally, gradually adding types as required to get extra performance and solves many of the hard packaging problems. boost Python requires a substantial effort in getting a build setup, and it can be hard to produce packages that make sense on PyPI
Cython has good built in error messages/diagnostics, but from what I've seen, the errors that come out of boost can be very hard to interpret - be kind to yourself and use a new-ish c++ compiler, preferably one known for producing readable error messages.
Don't discount alternative tools like numba (similar performance to cython with code that is Python, not just something that looks similar) and pybind11 (boost Python without boost and with better error messages) | I need to speed up some algorithms working on NumPy arrays. They will use std::vector and some of the more advanced STL data structures.
I've narrowed my choices down to Cython (which now wraps most STL containers) and Boost.Python (which now has built-in support for NumPy).
I know from my experience as a programmer that sometimes it takes months of working with a framework to uncover its hidden issues (because they are rarely used as talking points by its disciples), so your help could potentially save me a lot of time.
What are the relative advantages and disadvantages of extending NumPy in Cython vs Boost.Python? | 0 | 1 | 2,636 |
0 | 42,273,559 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-01-24T01:05:00.000 | 0 | 2 | 1 | Amazon device farm - wheel file from macosx platform not supported | 41,818,382 | 1.2 | python-2.7,opencv,numpy,aws-device-farm,python-appium | (numpy-1.12.0-cp27-cp27m-manylinux1_x86_64.whl) is numpy wheel for ubuntu.
But still Amazon device farm throws error while configuring tests with this wheel.
Basically, Device Farm validates whether the .whl file name ends with the suffix -none-any.whl.
Just renaming the file to numpy-1.12.0-cp27-none-any.whl works in device farm.
Note: This renamed file is still a non-universal python wheel. There might be a few things which are not implemented in a non-universal python wheel. This may cause some things to break. So, test to ensure all your dependencies are working fine before using this. | I am facing the following error on configuring an Appium python test in AWS device farm:
There was a problem processing your file. We found at least one wheel file wheelhouse/numpy-1.12.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl specified a platform that we do not support. Please unzip your test package and then open the wheelhouse directory, verify that names of wheel files end with -any.whl or -linux_x86_64.whl, and try again
I require numpy and opencv-python packages to run my tests.
How to get this issue fixed? | 0 | 1 | 290 |
0 | 41,834,290 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-01-24T15:06:00.000 | 0 | 1 | 0 | Stop images produced by pymc.Matplot.plot being saved | 41,831,529 | 1.2 | python,pymc | There is currently no way to plot them without being saved to disk. I would recommend only plotting a few diagnostic parameters, and specifying plot=False for the others. That would at least cut down on the volume of plots being generated. There probably should be a saveplot argument, however, I agree. | I recently started experimenting with pymc and only just realised that the images being produced by pymc.Matplot.plot, which I use to diagnose whether the MCMC has performed well, are being saved to disk. This results in images appearing wherever I am running my scripts from, and it is time consuming to clear them up. Is there a way to stop figures being saved to disk? I can't see anything clearly in the documentation. | 0 | 1 | 60 |
0 | 41,849,001 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-24T17:04:00.000 | 0 | 1 | 0 | ipython can't load module when using magic %load, but succeed when loading interactively | 41,834,141 | 0 | python-3.x,ipython | Shame on me, it was just a typo: the correct module is named sklearn.ensemble. | When I launch ipython -i script_name or load the script with %load, it fails loading sklearn.ensamble.
But it succeeds in loading and I am able to use it when I launch ipython alone and then run from sklearn.ensamble import *.
Why? | 0 | 1 | 21 |
0 | 41,836,728 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-01-24T18:50:00.000 | 0 | 1 | 0 | I have a DataFrame with some values of np.inf. How does .corr() work? | 41,836,727 | 1.2 | python,pandas | np.inf is treated the same way as np.NaN.
I replaced all the values of np.inf with np.NaN and the results were exactly the same. If there are some subtle differences, please let me know. I was looking for an answer on this and couldn't find one anywhere so I figured I would post this here. | What will happen when I use df.corr()? Will np.inf affect my results somehow? | 0 | 1 | 72 |
0 | 41,839,189 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-24T20:48:00.000 | 0 | 1 | 0 | can't install model_selection on python 2.7.12 | 41,838,726 | 0 | python-2.7,module,scikit-learn,grid-search | I just found the answer. the 0.18 sklearn has seen a number of updates. you may update your sklearn by typing "conda update scikit-learn" in your windows command line.
If it still doesn't work, you might want to update your conda/Anaconda as well:
"conda update conda" and "conda update Anaconda" | I try to run this line:
from sklearn.model_selection import GridSearchCV
but I get an ImportError (i.e. No module named model_selection) although I have installed sklearn and I can import other packages. here is my python version :
2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)]
is there a way to use "sklearn.model_selection" on my current version? | 0 | 1 | 238 |
0 | 41,846,775 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-25T08:26:00.000 | 0 | 2 | 0 | Pass data between MATLAB R2011b and Python (Windows 7) | 41,846,630 | 0 | python,matlab,python-2.7,parameter-passing,language-interoperability | Depending on what you want to do and your type of data, you could write it to a file and read from it in the other language. You could use numpy.fromfile for that in the python part. | Hello Friends,
I want to pass data between MATLAB and Python. One way would be to use matlab.engine in Python or call Python libraries from MATLAB, but this approach requires a MATLAB 2014 version, unlike mine, which is MATLAB R2011b.
So please suggest a different approach to communicate between Python and the MATLAB R2011b version.
Thanks in advance | 0 | 1 | 110 |
0 | 41,851,421 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-25T11:22:00.000 | 0 | 2 | 0 | How class_weight emphasis a class in in scikit-learn | 41,850,349 | 0 | python,scikit-learn | I'm not sure if there is a single method of treating class_weight for all the algorithms.
The way Decision Trees (and Forests) deals with this is by modifying the weights of each sample according to its class.
You can consider weighting samples as a more general case of oversampling all the minority class samples (using weights you can "oversample" fractions of samples). | I would like to know how scikit-learn puts more emphasis on a class when we use the parameter class_weight. Is it an oversampling of the minority class? | 0 | 1 | 420 |
0 | 41,864,069 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2017-01-25T23:51:00.000 | 5 | 7 | 0 | Is there a built-in KL divergence loss function in TensorFlow? | 41,863,814 | 0.141893 | python,statistics,tensorflow,entropy | I'm not sure why it's not implemented, but perhaps there is a workaround. The KL divergence is defined as:
KL(prob_a, prob_b) = Sum(prob_a * log(prob_a/prob_b))
The cross entropy H, on the other hand, is defined as:
H(prob_a, prob_b) = -Sum(prob_a * log(prob_b))
So, if you create a variable y = prob_a/prob_b, you could obtain the KL divergence by calling negative H(prob_a, y). In Tensorflow notation, something like:
KL = tf.reduce_mean(-tf.nn.softmax_cross_entropy_with_logits(prob_a, y)) | I have two tensors, prob_a and prob_b with shape [None, 1000], and I want to compute the KL divergence from prob_a to prob_b. Is there a built-in function for this in TensorFlow? I tried using tf.contrib.distributions.kl(prob_a, prob_b), but it gives:
NotImplementedError: No KL(dist_a || dist_b) registered for dist_a type Tensor and dist_b type Tensor
If there is no built-in function, what would be a good workaround? | 0 | 1 | 17,205 |
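A small NumPy check (not the poster's TensorFlow code) of the identity the workaround above relies on, KL(a||b) = cross_entropy(a, b) - entropy(a), on illustrative distributions:

```python
# Numerical check of KL(a||b) = H(a, b) - H(a) on toy probability vectors.
import numpy as np

prob_a = np.array([0.1, 0.6, 0.3])
prob_b = np.array([0.2, 0.5, 0.3])

kl = np.sum(prob_a * np.log(prob_a / prob_b))
cross_entropy = -np.sum(prob_a * np.log(prob_b))
entropy = -np.sum(prob_a * np.log(prob_a))

print(np.isclose(kl, cross_entropy - entropy))  # True
```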
0 | 41,911,859 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-27T16:49:00.000 | 2 | 2 | 0 | Real time data using sklearn | 41,899,011 | 0.197375 | python,machine-learning,scikit-learn,real-time | With most algorithms training is slow and predicting is fast. Therefore it is better to train offline using training data; and then use the trained model to predict each new case in real time.
Obviously you might decide to train again later if you acquire more/better data. However there is little benefit in retraining after every case. | I have a real time data feed of health patient data that I connect to with python. I want to run some sklearn algorithms over this data feed so that I can predict in real time if someone is going to get sick. Is there a standard way in which one connects real time data to sklearn? I have traditionally had static datasets and never an incoming stream so this is quite new to me. If anyone has sort of some general rules/processes/tools used that would be great. | 0 | 1 | 2,016 |
0 | 43,380,457 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-27T16:49:00.000 | 2 | 2 | 0 | Real time data using sklearn | 41,899,011 | 0.197375 | python,machine-learning,scikit-learn,real-time | It is feasible to train the model from a static dataset and predict classifications for incoming data with the model. Retraining the model with each new set of patient data not so much. Also breaks the train/test mode of testing a ML model.
Trained models can be saved to file and imported in the code used for real time prediction.
In python scikit learn, this is via the pickle package.
R programming saves to an rda object. saveRDS
yay... my first answering a ML question! | I have a real time data feed of health patient data that I connect to with python. I want to run some sklearn algorithms over this data feed so that I can predict in real time if someone is going to get sick. Is there a standard way in which one connects real time data to sklearn? I have traditionally had static datasets and never an incoming stream so this is quite new to me. If anyone has sort of some general rules/processes/tools used that would be great. | 0 | 1 | 2,016 |
0 | 41,908,820 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-01-27T17:41:00.000 | 0 | 2 | 0 | sample entries from a matrix while satisfying a given requirement | 41,899,930 | 0 | python,algorithm | We can do it in the following manner: first get all the (x,y) tuples (indices) of the matrix A where A[x,y]=1. Let there be k such indices. Now roll a k-sided unbiased die M times (we can simulate this by using the function randint(1,k), drawing samples from a uniform distribution). If you want samples with replacement (the same position of the matrix can be chosen multiple times) then it can be done with M invocations of the function. Otherwise, for samples without replacement (no repetitions allowed), you need to keep track of positions already selected and delete those indices from the array before throwing the die the next time.
A baseline approach is having M iterations, during each iteration, randomly sample 1, if it is of value 1, then keep it and save its position, otherwise, continue this iteration until find entry with value 1; and continue to next iteration. It seems not a good heuristic at all. | 0 | 1 | 49 |
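A vectorised sketch of the approach described above, using NumPy instead of repeated dice rolls (the matrix is a toy example and is assumed to contain at least M ones):

```python
# Sample M distinct positions holding a 1, without rejection sampling.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(5, 5))   # toy 0-1 matrix; assumes at least M ones
M = 3

ones = np.argwhere(A == 1)                                   # all (row, col) positions of 1s
picked = ones[rng.choice(len(ones), size=M, replace=False)]  # sample without replacement
print(picked)
```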
0 | 41,947,311 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-30T22:45:00.000 | 1 | 3 | 0 | Pandas plot ONLY overlap between multiple data frames | 41,946,758 | 0.066568 | python,pandas,matplotlib,ipython,jupyter-notebook | To plot only the portion of df1 whose index lies within the index range of df2, you could do something like this:
ax = df1.loc[df2.index.min():df2.index.max()].plot()
There may be other ways to do it, but that's the one that occurs to me first.
Good luck! | Found on S.O. the following solution to plot multiple data frames:
ax = df1.plot()
df2.plot(ax=ax)
But what if I only want to plot where they overlap?
Say that df1's index consists of timestamps spanning 24 hours and df2's index also consists of timestamps, spanning 12 hours within the 24 hours of df1 (but not exactly the same as df1).
If I only want to plot the 12 hours that both data frames cover, what's the easiest way to do this? | 0 | 1 | 3,052 |
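A toy example of the .loc slice suggested in the answer above, with invented date-indexed frames:

```python
# Restrict df1 to df2's index range before plotting both on one axes (illustrative data).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

idx1 = pd.date_range("2017-01-01", periods=24, freq="H")
idx2 = pd.date_range("2017-01-01 06:00", periods=12, freq="H")
df1 = pd.DataFrame({"a": np.random.randn(24)}, index=idx1)
df2 = pd.DataFrame({"b": np.random.randn(12)}, index=idx2)

ax = df1.loc[df2.index.min():df2.index.max()].plot()
df2.plot(ax=ax)
plt.show()
```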
0 | 63,090,168 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-01-31T06:38:00.000 | 0 | 2 | 0 | Converting scientific notation in Series to commas and thousands separator | 41,951,160 | 0 | python,python-3.x,pandas | You can also use
SeriesName.map('{:,}'.format) | I have a Series with Name as the index and a number in scientific notation such as 3.176154e+08. How can I convert this number to 317,615,384.61538464 with a thousands separator? I tried:
format(s, ',')
But it returns TypeError: non-empty format string passed to object.format
There are no NaNs in the data.
Thanks for your help! | 0 | 1 | 6,217 |
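A tiny demonstration of the Series formatting suggested above (the Series itself is illustrative):

```python
# Format a Series with a thousands separator instead of scientific notation.
import pandas as pd

s = pd.Series([3.176154e+08, 1.5e+03], index=["Alice", "Bob"])
print(s.map("{:,.2f}".format))   # e.g. 317,615,400.00
```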
0 | 56,069,261 | 0 | 0 | 0 | 0 | 1 | false | 21 | 2017-01-31T13:14:00.000 | 0 | 4 | 0 | Pruning in Keras | 41,958,566 | 0 | python-3.x,neural-network,keras,pruning | If you set an individual weight to zero won't that prevent it from being updated during back propagation? Shouldn't that weight remain zero from one epoch to the next? That's why you set the initial weights to nonzero values before training. If you want to "remove" an entire node, just set all of the weights on that node's output to zero and that will prevent that node from having any effect on the output throughout training. | I'm trying to design a neural network using Keras with priority on prediction performance, and I cannot get sufficiently high accuracy by further reducing the number of layers and nodes per layer. I have noticed that a very large portion of my weights are effectively zero (>95%). Is there a way to prune dense layers in hope of reducing prediction time? | 0 | 1 | 9,448 |
0 | 41,967,371 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-31T15:48:00.000 | 2 | 1 | 0 | scaling glyphs in data units (not screen units) | 41,961,680 | 0.379949 | python-3.x,bokeh | Markers (e.g. Triangle) are really meant for use as "scatter" plot markers. With the exception of Circle, they only accept screen dimensions (pixels) for size. If you need triangular regions that scale with data space range changes, your options are to use patch or patches to draw the triangles as polygons (either one at a time, or "vectorized", respectively).
Is it possible to switch the triangles to data units, so everything scales up during zoom in?
I am using bokeh version 0.12.4 and python 3.5.2 (both installed via Anaconda). | 0 | 1 | 49 |
0 | 41,965,766 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-01-31T18:48:00.000 | 3 | 1 | 1 | Repeated task execution using the distributed Dask scheduler | 41,965,253 | 1.2 | python,dask | Correct, if a task is allocated to one worker and another worker becomes free it may choose to steal excess tasks from its peers. There is a chance that it will steal a task that has just started to run, in which case the task will run twice.
The clean way to handle this problem is to ensure that your tasks are idempotent, that they return the same result even if run twice. This might mean handling your database error within your task.
This is one of those policies that are great for data intensive computing workloads but terrible for data engineering workloads. It's tricky to design a system that satisfies both needs simultaneously. | I'm using the Dask distributed scheduler, running a scheduler and 5 workers locally. I submit a list of delayed() tasks to compute().
When the number of tasks is say 20 (a number >> than the number of workers) and each task takes say at least 15 secs, the scheduler starts rerunning some of the tasks (or executes them in parallel more than once).
This is a problem since the tasks modify a SQL db and if they run again they end up raising an Exception (due to DB uniqueness constraints). I'm not setting pure=True anywhere (and I believe the default is False). Other than that, the Dask graph is trivial (no dependencies between the tasks).
Still not sure if this is a feature or a bug in Dask. I have a gut feeling that this might be related to worker stealing... | 0 | 1 | 864 |
0 | 41,968,970 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-01-31T20:49:00.000 | 0 | 2 | 0 | Database design for complex music analytics | 41,967,226 | 1.2 | python,data-modeling | I think that the hard part of the problem is that you'll probably want the stimulus (tune) data formatted differently for different queries. What I would think about doing is making a relatively simple data structure for your stimuli (tunes) and add a unique identifier to each unique tune. You could probably get away with using your dictionary structures here if your structure can fit into memory.
Then I would put your trials into a relational database with the corresponding stimulus IDs. Each trial entry in the database would have complete subject and session information.
Then, for each analysis permutation, you will do two steps to get the relevant data:
Filter the stimuli using the stimulus data structure and get a list of their corresponding IDs.
Perform a query on your trials database to get the trials with this list of IDs. You can add other parameters to your query, obviously, to filter based on subject, session, etc.
I hope that helps | I'm a researcher studying animal behavior, and I'm trying to figure out the best way to structure my data. I present short musical tunes to animals and record their responses.
The Data
Each tune consists of 1-10 notes randomly chosen from major + minor scales spanning several octaves. Each note is played for a fixed duration but played randomly within some short time window.
I then record the animal's binary response to the tune (like / dislike).
I play >500 tunes to the animal each day, for >300 days. I also combine data from >10 animals.
I also need to store variables such as trial number on each day (was it the 1st tune presented? last? etc.), and date so that I know what data points to exclude due to external issues (e.g. animal stopped responding after 100 trials or for the entire day).
The Analysis
I'm trying to uncover what sorts of musical structure in these randomly generated tunes will lead to likes/dislikes from the animal. I do this in a mostly hypothesis-driven manner, based on previous research. The queries I need to perform on my dataset are of the form: "does having more notes from the same octave increase likeability of the tune?"
I'm also performing analysis on the dataset throughout the year as data is being accumulated.
What I've tried
I combine data from all animals into a single gigantic list containing dicts. Each dict represents a single trial and its associated:
animal ID#
session ID#
trial ID#
binary response (like/dislike)
tune, which is defined by a dict. The keys are simply the notes played, and the values denote when the note is played. E.g. {'1A#':[30,100]} means a tune with just a single note, A# from 1st octave, played from 30ms to 100ms.
I save this to a single pickle file. Every day after all the animals are done, I update the pickle file. I run my data analysis roughly once per week by loading the updated pickle file.
I've been looking to re-structure my data into a database or Pandas DataFrame format because of speed of 1) serializing data and 2) querying, and 3) possible cleaner code instead of dealing with nested dicts. I initially thought that my data would naturally lend itself well to some table structure because of the trial-by-trial structure of my experiment. Unfortunately, the definition of tunes within the table seems tricky, as the tunes don't really have some fixed structure.
What would be possible alternatives in structuring my data? | 0 | 1 | 65 |
0 | 47,134,942 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-02-01T01:01:00.000 | 1 | 3 | 0 | How to deal with name column in Scikitlearn randomforest classifier. python 3 | 42,081,202 | 0.066568 | python,scikit-learn,random-forest,countvectorizer | Well, a name is a unique thing, an id of sorts. Use sklearn.preprocessing.LabelEncoder after storing the originals in a separate list. It will automatically convert the names to serial numbers.
Also, note that since it's a unique thing you should remove names when predicting.
pd.get_dummies()
2nd column contains three different types of string so, easily converted to array using
from sklearn.feature_extraction.text import CountVectorizer
No issue at all. Problem is my third and last column contains large number of names. if I try to convert using Countvectorizer it converts the names into long unreadable strings.
df['name']=Countvectorizer.fit_transform(df.name)
if I try to convert back it to dataframe as shown in other examples on stackoverflow page in this case I get this
245376 (0, 14297)\t1\n (1, 5843)\t1\n (1, 13365)...
245377 (0, 14297)\t1\n (1, 5843)\t1\n (1, 13365)...
Name: supplier_name, dtype: object
and this next code results in a Memory Error
df['name'] =pd.DataFrame(CV.fit_transform(df.name).toarray(),columns=CV.get_feature_names())
I have looked at that issue as well.
Question: is there any better way to use this name column in numeric form other than the one mentioned above? Or any other idea how to improve this so that the data fits well in the Randomforest classifier? The Dataframe is quite large, containing 123790 rows. Thank you in advance for help or suggestions. | 0 | 1 | 498 |
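A small sketch of the LabelEncoder suggestion from the answer above (the data and column name are toy stand-ins):

```python
# Encode the name column as integers while keeping the original strings around.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"name": ["acme corp", "globex", "acme corp"]})
le = LabelEncoder()
original_names = df["name"].copy()          # keep the originals for later reference
df["name"] = le.fit_transform(df["name"])
print(df["name"].tolist())                  # e.g. [0, 1, 0]
print(le.inverse_transform([0, 1]))         # recover the original strings
```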
0 | 42,181,934 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-02-02T16:38:00.000 | 1 | 1 | 0 | Single object detection keras | 42,007,591 | 1.2 | python,keras,object-detection,training-data | Your task is a so-called binary classification.
Make sure, that your final layer has got only one neuron (e.g. for Sequential model model.add(Dense(1, ... other parameters ... ))) and use the binary_crossentropy as loss function.
Hope this helps. | I want to make a system that recognizes a single object using keras. In my case I will be detecting car wheels. How do I train my system just for 1 object? I did classification task before using cats and dogs, but now its a completely different task. Do I still "classify", with class 0= wheels, class = 1 non wheels(just random images of anything)? How do I do following steps in this problem?
1) Train system for 1 object
2) Detect object(sliding window or heatmap) | 0 | 1 | 907 |
0 | 42,027,081 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-03T14:04:00.000 | -1 | 1 | 0 | Finding out installed packages in Spark | 42,026,072 | -0.197375 | python,apache-spark,pyspark | Include the package anyway to be sure, e.g. via spark-submit:
$SPARK_HOME/bin/spark-shell --packages graphframes:graphframes:0.1.0-spark1.6 | I have been at it for some time and tried everything.
I need to find out whether the package GraphFrames is included in the spark installation at my office cluster. I am using Spark version 1.5.0.
Is there a way to list all the installed packages in Spark? | 0 | 1 | 1,593 |
0 | 42,029,561 | 1 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-03T16:51:00.000 | 0 | 1 | 0 | Python: Shortest Weighted Path and Least Number of Edges | 42,029,159 | 0 | python,algorithm,graph | Instead of using floating points for weights, use tuples (weight, number_of_edges) with pairwise addition. The lowest-weight path using these new weights will have the lowest total weight and, in the case of a tie, the fewest edges.
To define these weights I would make them a subclass of tuple with __add__ redefined. Then you should be able to use your existing code. | I am using a networkx weighted graph in order to model a transportation network. I am attempting to find the shortest path in terms of the sum of weighted edges. I have used Dijkstra path in order to find this path. My problem occurs when there is a tie in terms of weighted edges. When this occurs I would always like to choose from the set of paths that tied, the path that has the least number of edges. Dijkstra path does not seem to be doing this.
Is there a way to ensure that I can choose the path with the least number of edges from a set of paths that are tied in terms of sum of weighted edges? | 0 | 1 | 711 |
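A sketch of the (weight, edge_count) tuple subclass described in the answer above. Whether an off-the-shelf shortest-path routine accepts such weights depends on its implementation (it must only add and compare them), so treat this as an illustration of the idea rather than a drop-in networkx solution:

```python
# Composite path weight: adds elementwise, compares lexicographically (ties -> fewer edges).
class PathWeight(tuple):
    def __new__(cls, weight, edges=1):
        return super().__new__(cls, (weight, edges))

    def __add__(self, other):
        return PathWeight(self[0] + other[0], self[1] + other[1])

w = PathWeight(2.5) + PathWeight(2.5) + PathWeight(1.0)
print(w)                                        # (6.0, 3)
print(PathWeight(6.0, 3) < PathWeight(6.0, 4))  # True: equal weight, fewer edges wins
```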
0 | 42,040,862 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-02-04T13:15:00.000 | -7 | 2 | 0 | Why doesn’t 'array' have an in-place sort like list does? | 42,040,813 | -1 | python,arrays,python-3.x,sorting | A list is a data structure that has characteristics which make it easy to do some things. An array is a very well understood standard data structure and isn't optimized for sorting. An array is basically a standard way of storing the product of sets of data. There hasn't ever been a notion of sorting it. | Why doesn’t the array class have a .sort()? I don't know how to sort an array directly.
The class array.array is a packed list which looks like a C array.
I want to use it because only numbers are needed in my case, but I need to be able to sort it. Is there some way to do that efficiently? | 0 | 1 | 144 |
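Since array.array has no in-place sort, one common workaround is to sort into a list and rebuild the packed array (the values here are illustrative):

```python
# Sort an array.array by rebuilding it from sorted().
from array import array

a = array("i", [5, 1, 4, 2])
a = array("i", sorted(a))   # sorted() returns a list; rebuild the packed array from it
print(a)                    # array('i', [1, 2, 4, 5])
```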
0 | 42,046,236 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-02-04T13:50:00.000 | 1 | 1 | 0 | numpy irfft by amplitude and phase spectrum | 42,041,151 | 1.2 | python,numpy,fft,ifft | If you have amplitude and phase vectors for a spectrum, you can convert them to a complex (IQ or Re,Im) vector by multiplying the cosine and sine of each phase value by its associated amplitude value (for each FFT bin with a non-zero amplitude, or vector-wise). | How to compute irfft if I have only amplitude and phase spectrum of signal? In numpy docs I've found only irfft which use fourier coefficients for this transformation. | 0 | 1 | 411 |
0 | 42,046,234 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-04T22:07:00.000 | 3 | 3 | 0 | Define logarithmic power for NumPy | 42,046,184 | 0.197375 | python,numpy,logarithm | You can use ** for exponentiation: np.log(x/y) ** 2 | I am trying to define ln2(x/y) in Python, within NumPy.
I can define ln(x) as np.log(x) but how I can define ln2(x/y)?
ln2(x/y); natural logarithm to the power of 2 | 0 | 1 | 235 |
0 | 42,048,793 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-02-05T05:01:00.000 | 1 | 2 | 0 | Text Classification - Label Pre Process | 42,048,725 | 0.099668 | python,r,nlp,preprocessor,text-classification | Manual annotation is a good option since you have a very good idea of an ideal document corresponding to your label.
However, with the large dataset size, I would recommend that you fit an LDA to the documents and look at the topics generated, this will give you a good idea of labels that you can use for text classification.
You can also use LDA for text classification eventually by finding out representative documents for your labels and then finding the closest documents to that document by a similarity metric(say cosine).
Alternatively, once you have an idea of labels, you can also assign them without any manual intervention using LDA, but then you will get restricted to unsupervised learning.
Hope this helps!
P.S. - Be sure to remove all the stopwords and use a stemmer to club together words of a similar kind, for example (managing, manage, management), at the pre-processing stage. | I have a data set of 1M+ observations of customer interactions with a call center. The text is free text written by the representative taking the call. The text is not well formatted nor is it close to being grammatically correct (a lot of short hand). None of the free text has a label on the data as I do not know what labels to provide.
Given the size of the data, would a random sample of the data (to give a high level of confidence) be reasonable first step in determining what labels to create? Is it possible not to have to manually label 400+ random observations from the data, or is there no other method to pre-process the data in order to determine the a good set of labels to use for classification?
Appreciate any help on the issue. | 0 | 1 | 501 |
0 | 42,063,332 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2017-02-05T05:01:00.000 | 1 | 2 | 0 | Text Classification - Label Pre Process | 42,048,725 | 1.2 | python,r,nlp,preprocessor,text-classification | Text Pre-Processing:
Convert all text to lower case, tokenize into unigrams, remove all stop words, and use a stemmer to normalize each token to its base word.
There are 2 approaches I can think of for classifying the documents a.k.a. the free text you spoke about. Each free text is a document:
1) Supervised classification: Take some time and randomly pick a few sample documents and assign them a category. Do this until you have multiple documents per category and all categories that you want to predict are covered.
Next, create a Tf-Idf matrix from this text. Select the top K features (tune the value of K to get the best results). Alternatively, you can use SVD to reduce the number of features by combining correlated features into one. Please bear in mind that you can use other features like the department of the customer service executive and many others also as predictors. Now train a machine learning model and test it out.
2) Unsupervised learning: If you know how many categories you have in your output variable, you can use that number as the number of clusters you want to create. Use the Tf-Idf vector from above technique and create k clusters. Randomly pick a few documents from each cluster and decide which category the documents belong to. Supposing you picked 5 documents and noticed that they belong to the category "Wanting Refund". Label all documents in this cluster to "Wanting Refund". Do this for all the remaining clusters.
The advantage of unsupervised learning is that it saves you the pain of pre-classification and data preparation, but beware of unsupervised learning. The accuracy might not be as good as supervised learning.
The 2 methods explained are an abstract overview of what can be done. Now that you have an idea, read up more on the topics and use a tool like rapidminer to achieve your task much faster. | I have a data set of 1M+ observations of customer interactions with a call center. The text is free text written by the representative taking the call. The text is not well formatted nor is it close to being grammatically correct (a lot of short hand). None of the free text has a label on the data as I do not know what labels to provide.
Given the size of the data, would a random sample of the data (to give a high level of confidence) be reasonable first step in determining what labels to create? Is it possible not to have to manually label 400+ random observations from the data, or is there no other method to pre-process the data in order to determine the a good set of labels to use for classification?
Appreciate any help on the issue. | 0 | 1 | 501 |
0 | 42,057,766 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-05T21:42:00.000 | 1 | 1 | 0 | How can we use MNIST dataset one class as an input using tensorflow? | 42,057,667 | 0.197375 | python,tensorflow | Not sure exactly what you are asking. I will answer about what I understood. In case you want to predict only one class for example digit 5 and rest of the digits. Then first you need to label your vectors in such a way that maybe label all those vectors as 'one' which has ground truth 5 and 'zero' to those vectors whose ground truth is not 5.
Then design your network with only two nodes in output, where first node will show the probability that the input vector belongs to class 'one' (or digit 5) and second node will show the probability of belonging to class 'zero'. Then just train your network.
To find accuracy, you can use simple techniques like just counting how many times you predict right, i.e. if the probability is higher than 0.5 for the right class, classify it as that class.
I hope that helps; if not, maybe it would be better if you could explain your question more precisely. | I want to find the accuracy of one class in the MNIST dataset. So how can I split it on the basis of classes? | 0 | 1 | 567 |
0 | 59,105,698 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-02-06T01:03:00.000 | 0 | 1 | 0 | how to predict only one class in tensorflow | 42,059,103 | 0 | python,tensorflow,one-hot-encoding | While preparing the data you can use numpy to set all the data points in class 5 to 1 and all the others to 0:
arr = np.where(arr == 5, 1, 0)  # 1 where the label is 5, 0 otherwise
and then you can create a binary classifier using Tensorflow to classify them while using a binary_crossentropy loss to optimize the classifier | In case you want to predict only one class. Then first you need to label your vectors in such a way that maybe label all those vectors as 'one' which has ground truth 5 and 'zero' to those vectors whose ground truth is not 5.
How can I implement this in tensorflow using python? | 0 | 1 | 164 |
0 | 42,061,940 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-02-06T06:39:00.000 | 1 | 1 | 0 | Statistics: How to identify dependent and independent variables in my dataset? | 42,061,730 | 1.2 | python,statistics | In any given data set, labeling variables as dependent or independent is arbitrary -- there is no fundamental reason that one column should be independent and another should be dependent.
That said, typically it's conventional to say that "causes" are independent variables and "effects" are dependent variables. But this business about causes and effects is arbitrary too -- often enough there are several interacting variables, with each of them "causing" the others, and each of them "affected" by the others.
The bottom line is that you should assign dependent and independent according to what you're trying to achieve. What is the most interesting or most useful variable in your data? Typically if that one is missing or has an unknown value, you'll have to estimate it from the other variables. In that case the interesting variable is the dependent variable, and all others are independent.
You'll probably get more interest in this question on stats.stackexchange.com. | I am a little bit confused in the classification of dependent and independent variables in my dataset, on which I need to make a model for prediction. Any insights or how-to's would be very helpful here.
Suppose my dataset has 40 variables. In this case, it would be very difficult to classify the variables as independent or dependent. Are there any tests in python which can help us in identifying these? | 0 | 1 | 2,782 |
0 | 42,104,038 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-06T14:28:00.000 | 0 | 1 | 0 | Combining different names in a database | 42,070,138 | 0 | python,regex,database,chess | @Ev.Kounis' solution is simple and effective; I've used it myself successfully. Most of the time, we only care about the top chess players. That's what I did:
Created a simple function like @Ev.Kounis suggests
I also scanned the player rating. For example, there were several "Carlsen" players in my database, but they wouldn't have FIDE rating over 2700.
I also search the other player in the game. If I'm interested in Garry Kasparov, he wouldn't be playing a club game with a 1600 rated opponent.
Get a better database. Chessgames and TWIC have better quality than Chessbase.
You could try regular expression, but it's unnecessary. There's a simple pattern how a player name would differ:
"Carlsen, M" == "Magnus Carlsen"
This applies to other players in the database. Save regular expression until you really have to do it. | I am studying a chess database with more than one million games. I am interested in identifying some characteristics of different players. The problem I have is that each single player appears with several identifications.
For example,
"Carlsen, M.", "Carlsen, Ma", "Carlsen, Magnus" and "Magnus Carlsen"
all correspond to player "Magnus Carlsen".
Furthermore, there are other players which share Carlsen's last name, but have different names, such as "Carlsen, Ingrid Oen" and "Carlsen, Jesper".
I need to identify all the different names in the database which correspond to each specific player and combine them. Is there any way to do that with Python? | 0 | 1 | 78 |
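A rough sketch of the "simple function" idea from the answer above. The matching rule used here (same surname plus a matching first initial) is an assumption for illustration, not a complete solution to the disambiguation problem:

```python
# Hypothetical surname + first-initial matcher for chess player name variants.
def same_player(name_a, name_b):
    def parts(name):
        if "," in name:                      # "Carlsen, Magnus" / "Carlsen, M."
            last, first = [p.strip() for p in name.split(",", 1)]
        else:                                # "Magnus Carlsen"
            pieces = name.split()
            last, first = pieces[-1], " ".join(pieces[:-1])
        return last.lower(), first.strip(". ").lower()[:1]

    (last_a, init_a), (last_b, init_b) = parts(name_a), parts(name_b)
    return last_a == last_b and (not init_a or not init_b or init_a == init_b)

print(same_player("Carlsen, M.", "Magnus Carlsen"))      # True
print(same_player("Carlsen, Jesper", "Magnus Carlsen"))  # False
```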
0 | 69,476,803 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2017-02-07T02:19:00.000 | 0 | 6 | 0 | What is the best way to build and expose a Machine Learning model REST api? | 42,080,598 | 0 | java,python,rest,machine-learning,scikit-learn | I have been experimenting with this same task and would like to add another option, not using a REST API: The format of the Apache Spark models is compatible in both the Python and Java implementations of the framework. So, you could train and build your model in Python (using PySpark), export, and import on the Java side for serving/predictions. This works well.
There are, however, some downsides to this approach:
Spark has two separate ML packages (ML and MLLib) for different data formats (RDD and dataframes)
The algorithms for training models in each of these packages are not the same (no model parity)
The models and training classes don't have uniform interfaces. So, you have to be aware of what the expected format is and might have to transform your data accordingly for both training and inference.
Pre-processing for both training and inference has to be the same, so you either need to do this on the Python side for both stages or somehow replicate the pre-processing on the Java side.
So, if you don't mind the downsides of a Rest API solution (availability, network latency), then this might be the preferable solution. | I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python.
Now I have a use case where in I want to expose a REST api which builds Machine Learning Model, and another REST api which makes the prediction. What architecture should help me to achieve the same. (An example of the same maybe a Amazon Machine Learning. They have exposed REST api for generating model and making prediction)
I searched round the internet and found following ways:
Write the whole thing in Java - ML model + REST api
Write the whole thing in Python - ML model + REST api
But playing around with Machine Learning, its models and predictions is really easier and more supported in python with libraries like sklearn, rather than Java. I would really like to use python for Machine Learning part.
I was thinking about an approach wherein I write the REST api using JAVA but use a sub-process to make python ML calls. Will that work?
Can someone help me regarding the probable architectural approaches that I can take. Also please suggest the most feasible solution.
Thanks in advance. | 1 | 1 | 5,677 |
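The "all-Python" option listed in the question can look like the minimal sketch below; the routes, file names and payload format are illustrative assumptions (it presumes flask, scikit-learn and joblib are installed and that a model was trained and saved elsewhere):

```python
# Minimal sketch: serve a pickled scikit-learn model over REST with Flask.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")              # model trained and saved elsewhere

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]    # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(port=5000)
```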
0 | 46,918,647 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2017-02-07T02:19:00.000 | 0 | 6 | 0 | What is the best way to build and expose a Machine Learning model REST api? | 42,080,598 | 0 | java,python,rest,machine-learning,scikit-learn | I'm using Node.js as my rest service and I just call out to the system to interact with my python that holds the stored model. You could always do that if you are more comfortable writing your services in JAVA, just make a call to Runtime exec or use ProcessBuilder to call the python script and get the reply back. | I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python.
Now I have a use case where in I want to expose a REST api which builds Machine Learning Model, and another REST api which makes the prediction. What architecture should help me to achieve the same. (An example of the same maybe a Amazon Machine Learning. They have exposed REST api for generating model and making prediction)
I searched round the internet and found following ways:
Write the whole thing in Java - ML model + REST api
Write the whole thing in Python - ML model + REST api
But playing around with Machine Learning, its models and predictions is really easier and more supported in python with libraries like sklearn, rather than Java. I would really like to use python for Machine Learning part.
I was thinking about an approach wherein I write the REST api using JAVA but use a sub-process to make python ML calls. Will that work?
Can someone help me regarding the probable architectural approaches that I can take. Also please suggest the most feasible solution.
Thanks in advance. | 1 | 1 | 5,677 |
0 | 42,127,532 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2017-02-07T02:19:00.000 | 0 | 6 | 0 | What is the best way to build and expose a Machine Learning model REST api? | 42,080,598 | 0 | java,python,rest,machine-learning,scikit-learn | Well, it depends on the situation in which you use python for ML.
For classification models like random forest, use your training dataset to build the tree structures and export them as a nested dict. Whatever language you used, transform the model object into a plain data structure and then you can use it anywhere.
BUT if your situation involves large-scale, real-time, distributed datasets, then as far as I know, maybe the best way is to deploy the whole ML process on servers.
Now I have a use case where in I want to expose a REST api which builds Machine Learning Model, and another REST api which makes the prediction. What architecture should help me to achieve the same. (An example of the same maybe a Amazon Machine Learning. They have exposed REST api for generating model and making prediction)
I searched round the internet and found following ways:
Write the whole thing in Java - ML model + REST api
Write the whole thing in Python - ML model + REST api
But playing around with Machine Learning, its models and predictions is really easier and more supported in python with libraries like sklearn, rather than Java. I would really like to use python for Machine Learning part.
I was thinking about an approach wherein I write the REST api using JAVA but use a sub-process to make python ML calls. Will that work?
Can someone help me regarding the probable architectural approaches that I can take. Also please suggest the most feasible solution.
Thanks in advance. | 1 | 1 | 5,677 |
0 | 42,081,531 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-02-07T03:27:00.000 | 0 | 2 | 0 | Importing images Azure Machine Learning Studio | 42,081,202 | 0 | python,azure,azure-blob-storage,azure-machine-learning-studio | yes, you should be able to do that using Python. At the very least, straight REST calls should work. | Is it possible to import images from your Azure storage account from within a Python script module as opposed to using the Import Images module that Azure ML Studio provides. Ideally I would like to use cv2.imread(). I only want to read in grayscale data but the Import Images module reads in RGB.
Can I use the BlockBlobService library as if I were calling it from an external Python script? | 0 | 1 | 775 |
0 | 42,081,957 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-02-07T04:33:00.000 | 2 | 1 | 0 | Pandas dataframe: Listing amount of people per gender in each major | 42,081,790 | 1.2 | python,pandas,dataframe | altering @VaishaliGarg's answer a little,
you can use
df.groupby(['Qgender','Qmajor']).count()
Also, if a dataframe is needed out of it, we need to add .reset_index(),
since it would otherwise be a groupby object.
df.groupby(['Qgender','Qmajor']).count().reset_index() | Sorry about the vague title, but I didn't know how to word it.
So I have a pandas dataframe with 3 columns and any amount of rows. The first column is a person's name, the second column is their major (six possible majors, always written the same), and the third column is their gender (always 'Male' or 'Female').
I was told to print out the number of people in each major, which I was able to accomplish by saying table.Qmajor.value_counts() (table being my dataframe variable name). Now I am being asked to print the amount of males and females in each major, and I have no idea where to start. Any help is appreciated.
The column names are Qnames, Qmajor, and Qgender. | 0 | 1 | 5,858 |
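A toy sketch of the per-major, per-gender counts discussed above (the data is invented; the column names follow the question):

```python
# Count students per (major, gender) with groupby/size, or as a 2-D table with crosstab.
import pandas as pd

table = pd.DataFrame({
    "Qnames":  ["Ann", "Bob", "Cat", "Dan"],
    "Qmajor":  ["CS", "CS", "Math", "Math"],
    "Qgender": ["Female", "Male", "Female", "Female"],
})

print(table.groupby(["Qmajor", "Qgender"]).size())
print(pd.crosstab(table["Qmajor"], table["Qgender"]))   # same counts as a 2-D table
```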
0 | 44,600,199 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-07T09:32:00.000 | 1 | 1 | 0 | pycharm cannot run script but can debug it | 42,086,214 | 0.197375 | python,tensorflow,pycharm | I have run into a similar error running caffe on pycharm. I think it's because of the version of Python. When I installed Python 2.7.13, it worked! | When I ran a script in PyCharm, it exited with:
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
DEBUG:tm._add: /joy, sensor_msgs/Joy, sub
Fatal Python error: (pygame parachute) Segmentation Fault
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
But when I debug it in PyCharm, the program ran without any problem.
Also, if I run the script in the ubuntu terminal, no problem occurs either. Why does this happen, or how can I debug this problem? | 0 | 1 | 1,056 |
0 | 42,093,881 | 0 | 0 | 0 | 0 | 2 | false | 7 | 2017-02-07T14:32:00.000 | 6 | 3 | 0 | Accuracy difference on normalization in KNN | 42,092,448 | 1 | python,machine-learning,scikit-learn,knn | That's a pretty good question, and is unexpected at first glance because usually a normalization will help a KNN classifier do better. Generally, good KNN performance usually requires preprocessing of data to make all variables similarly scaled and centered. Otherwise KNN will often be inappropriately dominated by scaling factors.
In this case the opposite effect is seen: KNN gets WORSE with scaling, seemingly.
However, what you may be witnessing could be overfitting. The KNN may be overfit, which is to say it memorized the data very well, but does not work well at all on new data. The first model might have memorized more data due to some characteristic of that data, but it's not a good thing. You would need to check your prediction accuracy on a different set of data than what was trained on, a so-called validation set or test set.
Then you will know whether the KNN accuracy is OK or not.
Look into learning curve analysis in the context of machine learning. Please go learn about bias and variance. It's a deeper subject than can be detailed here. The best, cheapest, and fastest sources of instruction on this topic are videos on the web, by the following instructors:
Andrew Ng, in the online coursera course Machine Learning
Tibshirani and Hastie, in the online stanford course Statistical Learning. | I had trained my model on KNN classification algorithm , and I was getting around 97% accuracy. However,I later noticed that I had missed out to normalise my data and I normalised my data and retrained my model, now I am getting an accuracy of only 87%. What could be the reason? And should I stick to using data that is not normalised or should I switch to normalized version. | 0 | 1 | 8,817 |
0 | 42,093,691 | 0 | 0 | 0 | 0 | 2 | false | 7 | 2017-02-07T14:32:00.000 | 2 | 3 | 0 | Accuracy difference on normalization in KNN | 42,092,448 | 0.132549 | python,machine-learning,scikit-learn,knn | If you use normalized feature vectors, the distances between your data points are likely to be different than when you used unnormalized features, particularly when the range of the features are different. Since kNN typically uses euclidian distance to find k nearest points from any given point, using normalized features may select a different set of k neighbors than the ones chosen when unnormalized features were used, hence the difference in accuracy. | I had trained my model on KNN classification algorithm , and I was getting around 97% accuracy. However,I later noticed that I had missed out to normalise my data and I normalised my data and retrained my model, now I am getting an accuracy of only 87%. What could be the reason? And should I stick to using data that is not normalised or should I switch to normalized version. | 0 | 1 | 8,817 |
0 | 42,110,000 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-08T05:55:00.000 | 0 | 1 | 0 | Integrate Spark SQL using Pyspark with python interpreter and pandas and Ipython notebook | 42,105,716 | 0 | python-3.x,pandas,matplotlib,pyspark,apache-spark-sql | Check out the hortonworks sandbox. It's a virtual machine with hadoop and all its components - such as spark and hdfs - installed and configured. In addition to that, there is a notebook called Zeppelin allowing you to write scripts in python or other languages.
You're also free to install python libs and access them through the notebook, even though I'm pretty sure it comes with its own data visualisation.
Note that the spark dataframe type is not compatible with the pandas one. You'll have to convert your data to a simple matrix and integrate it back into the spark or pandas type. | I want to know which interpreter is good for Python to use features like Numpy, pandas and matplotlib with the feature of an integrated Ipython notebook.
Also I want to integrate this with Apache Spark. Is it possible?
My aim is to load different tables from different sources like Oracle, MS SQL, and HDFS files and to transform them using Pyspark and SparkSQL. And then I want to use pandas/matplotlib for manipulation and visualization. | 0 | 1 | 186 |
0 | 42,115,789 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-08T08:40:00.000 | 0 | 2 | 0 | How to know the factor by which a feature affects a model's prediction | 42,108,324 | 0 | python,machine-learning,scikit-learn,decision-tree | In general - no. Decision trees work differently than that. For example it could have a rule under the hood that if feature X > 100 OR X < 10 and Y = 'some value' then the answer is Yes, if 50 < X < 70 - answer is No etc. In the instance of decision tree you may want to visualize its results and analyse the rules. With RF model it is not possible, as far as I know, since you have a lot of trees working under the hood, each has independent decision rules.
I would like to know if there is a solution to this, and I would also like a solution that is independent of the algorithm of choice. Please try to provide solutions that are not specific to decision trees but rather a general solution for all the algorithms.
If there is some way that would tell me like:
for feature x1 the relation is 0.8*x1^2
for feature x2 the relation is -0.4*x2
just so that I would be able to analyse how the output depends on the input features x1, x2 and so on.
Is it possible to find out whether a high value for a particular feature leads to a certain class, or whether a low value for the feature does? | 0 | 1 | 998
0 | 54,422,825 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-08T10:16:00.000 | 0 | 3 | 0 | How will I integrate MATLAB to TensorFlow? | 42,110,293 | 0 | python,matlab,tensorflow | I used a mex function for inference via the C++ API of TensorFlow once. That's pretty straight forward. I had to link the required TensorFlow libs statically from source though. | I want to integrate MATLAB and TensorFlow, although I can run TensorFlow native in python but I am required to use MATLAB for image processing. Can someone please help me out with this one? | 0 | 1 | 1,628 |
0 | 42,120,658 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-08T15:54:00.000 | 1 | 1 | 0 | Keras with Theano BackEnd | 42,117,777 | 0.197375 | python,machine-learning,neural-network,theano,keras | In Keras < 1.0 (I believe), one would pass the show_accuracy argument to model.fit in order to display the accuracy during training.
This method has been replaced by metrics, as you can now define custom metrics that can help you during training. One of the metrics is of course, accuracy. The changes to your code to keep the same behavior are minimum:
Remove show_accuracy from the model.fit call.
Add metrics = ["accuracy"] to the model.compile call.
And that's it. | im new to Keras in python, i got this warning message when after executing my code. I tried to search on google, but still didnt manage to solve this problem. Thank you in advance.
UserWarning: The "show_accuracy" argument is deprecated, instead you
should pass the "accuracy" metric to the model at compile time:
model.compile(optimizer, loss, metrics=["accuracy"])
warnings.warn('The "show_accuracy" argument is deprecated, ' | 0 | 1 | 223 |
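A hedged sketch of the change the answer describes; the toy model and random data are made up just so the snippet runs on its own:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data only, so the example is self-contained.
x_train = np.random.random((100, 20))
y_train = np.random.randint(2, size=(100, 1))

model = Sequential()
model.add(Dense(8, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# The deprecated show_accuracy=True flag is gone: accuracy is requested
# as a metric at compile time instead, exactly as the warning suggests.
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=10, verbose=1)
```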
0 | 46,009,804 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-02-08T16:42:00.000 | 0 | 2 | 0 | How to retrieve the filename of an image with keras flow_from_directory shuffled method? | 42,118,850 | 0 | python,machine-learning,neural-network,generator,keras | I think the only option here is to NOT shuffle the files. I have been wondering this myself and this is the only thing I could find in the docs. Seems odd and not correct... | If I don't shuffle my files, I can get the file names with generator.filenames. But when the generator shuffles the images, filenames isn't shuffled, so I don't know how to get the file names back. | 0 | 1 | 1,974 |
0 | 46,544,816 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-02-09T07:18:00.000 | 5 | 2 | 0 | Batch-major vs time-major LSTM | 42,130,491 | 0.462117 | python,tensorflow,deep-learning,lstm,recurrent-neural-network | There is no difference in what the model learns.
At timestep t, RNNs need results from t-1, therefore we need to compute things time-major. If time_major=False, TensorFlow transposes batch of sequences from (batch_size, max_sequence_length) to (max_sequence_length, batch_size)*. It processes the transposed batch one row at a time: at t=0, the first element of each sequence is processed, hidden states and outputs calculated; at t=max_sequence_length, the last element of each sequence is processed.
So if your data is already time-major, use time_major=True, which avoids a transpose. But there isn't much point in manually transposing your data before feeding it to TensorFlow.
*If you have multidimensional inputs (e.g. sequences of word embeddings: (batch_size, max_sequence_length, embedding_size)), axes 0 and 1 are transposed, leading to (max_sequence_length, batch_size, embedding_size) | Do RNNs learn different dependency patterns when the input is batch-major as opposed to time-major? | 0 | 1 | 4,154 |
0 | 42,133,075 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-02-09T08:06:00.000 | 2 | 2 | 0 | Django JSON file to Pandas Dataframe | 42,131,205 | 1.2 | python,json,django,pandas | You can also use pd.DataFrame.from_records() when you have a JSON object or a dictionary
df = pd.DataFrame.from_records([ json ]) OR df = pd.DataFrame.from_records([ dict. ])
or
you need to provide iterables for pandas dataframe:
e.g. df = pd.DataFrame({'column_1':[ values ],'column_2':[ values ]}) | I have a simple JSON in Django. I catch it with this command: data = request.body, and I want to convert it to a pandas dataframe
JSON:
{ "username":"John", "subject":"i'm good boy", "country":"UK","age":25}
I already tried pandas read_json method and json.loads from json library but it didn't work. | 1 | 1 | 820 |
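An illustrative sketch of the from_records approach from the answers above; in a Django view, body would be request.body, here it is just a JSON string:

```python
import json
import pandas as pd

body = '{"username": "John", "subject": "hello", "country": "UK", "age": 25}'

record = json.loads(body)                  # str (or bytes) -> dict
df = pd.DataFrame.from_records([record])   # one-row DataFrame
print(df)
```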
0 | 42,145,160 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-02-09T19:17:00.000 | 0 | 2 | 0 | Why do bokeh tutorials use explicit imports rather than aliases? | 42,145,097 | 0 | python,bokeh | Importing individual names from a library isn't really "contamination". What you want to avoid is doing from somelibrary import *. This is different because you don't know which names will be imported, so you can't be sure there won't be a name clash.
In contrast, doing from numpy import linspace just creates one name, linspace. It's no different from creating an ordinary variable like linspace = 2 or defining your own function with def linspace. There's no danger of unexpected name clashes because you know exactly which names you're creating in your local namespace.
Is there any reason why Bokeh deviates from this practice, and/or are there common aliases to use for Bokeh imports (e.g. import bokeh.plotting as bp)? | 0 | 1 | 545 |
0 | 42,148,230 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-02-09T22:30:00.000 | 2 | 2 | 0 | Using scipy routines outside of the GIL | 42,148,101 | 1.2 | python,scipy,cython,shared-memory,python-multithreading | Not safe. If CPython could safely run that kind of code without the GIL, we wouldn't have the GIL in the first place. | This is sort of a general question related to a specific implementation I have in mind, about whether it's safe to use python routines designed for use inside the GIL in a shared memory environment. Specifically what I'd like to do is use scipy.optimize.curve_fit on a large array inside a cython function.
The data can be expressed as a 2d numpy array (say, of floats) with the axis to be fit along and the other the serialized axis to be parallelized over. Then I'd just like to release the GIL and start looping through the data with a cython.parallel.prange (the idea being then that I can have all my cores working on fitting at once).
The main issue I can foresee is that curve_fit does not operate "in place"; it returns the fit values of the parameters (and optionally their covariance matrix) and so has to allocate that memory at some point. (Of course I also have no idea about any intermediate memory allocation the routine performs.) I'm worried about how this will operate outside the GIL with many threads working concurrently.
I realize that the answer could just be "it should work fine go try it," but I'm hoping to get some idea of what to look out for. I also realize that this question is similar to others about parallelizing scipy/numpy routines, but I think this one is worded differently in that falls within the cython scope of a C environment for python.
Thanks for any help/suggestions. | 0 | 1 | 312 |
0 | 50,688,040 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-02-10T00:59:00.000 | 0 | 1 | 0 | How to plot data from different runs on one figure in Spyder | 42,149,777 | 0 | ipython,spyder | One way that I have figured out is to define a dictionary and then record the results you want individually. Apparently, this is not the most efficient way, but it works. | What I meant by the title is that I have two different programs and I want to plot data on one figure. In Matlab there is this definition for figure handle which eventually points to a specific plot. Let's say if I call figure(1) the first time, I get a figure named ''1'' created. The second I call figure(1), instead of creating a new one, Matlab simply just plot on the previous figure named ''1''. I wondered how I can go about and do that in Spyder.
I am using Matplotlib in Spyder. I would imagine this could be easily achieved, but I simply don't know enough about this package to figure my problem out. :(
Any suggestions are appreciated! | 0 | 1 | 54 |
0 | 56,578,297 | 0 | 0 | 0 | 0 | 1 | false | 18 | 2017-02-10T10:23:00.000 | -1 | 6 | 0 | How to update model parameters with accumulated gradients? | 42,156,957 | -0.033321 | python,tensorflow,gradient | You can use Pytorch instead of Tensorflow as it allows the user to accumulate gradients during training | I'm using TensorFlow to build a deep learning model. And new to TensorFlow.
For some reason, my model has a limited batch size, and this limited batch size will give the model a high variance.
So, I want to use a trick to make the effective batch size larger. My idea is to store the gradients of each mini-batch, for example 64 mini-batches, then sum the gradients together and use the mean gradient of these 64 mini-batches of training data to update the model's parameters.
This means that for the first 63 mini-batches, the parameters are not updated, and only after the 64th mini-batch are the model's parameters updated, once.
But as TensorFlow is graph based, does anyone know how to implement this feature?
Thanks very much. | 0 | 1 | 8,795 |
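The accepted answer just points at PyTorch; below is a hedged sketch of the usual TF 1.x accumulation pattern the question describes. The tiny placeholder, variable and loss are made up so the graph builds on its own:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
w = tf.Variable(tf.zeros([4, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - 1.0))

opt = tf.train.GradientDescentOptimizer(0.01)
grads_and_vars = opt.compute_gradients(loss, var_list=[w])

# One non-trainable accumulator per variable.
accums = [tf.Variable(tf.zeros_like(v.initialized_value()), trainable=False)
          for _, v in grads_and_vars]
zero_ops = [a.assign(tf.zeros_like(a)) for a in accums]
accum_ops = [a.assign_add(g) for a, (g, _) in zip(accums, grads_and_vars)]
train_op = opt.apply_gradients(
    [(a / 64.0, v) for a, (_, v) in zip(accums, grads_and_vars)])

# In the session: run zero_ops, then accum_ops on 64 mini-batches,
# then train_op once to apply the averaged gradient.
```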
0 | 42,171,552 | 0 | 0 | 0 | 0 | 1 | true | 90 | 2017-02-11T02:24:00.000 | 183 | 5 | 0 | Get current number of partitions of a DataFrame | 42,171,499 | 1.2 | python,scala,dataframe,apache-spark,apache-spark-sql | You need to call getNumPartitions() on the DataFrame's underlying RDD, e.g., df.rdd.getNumPartitions(). In the case of Scala, this is a parameterless method: df.rdd.getNumPartitions. | Is there any way to get the current number of partitions of a DataFrame?
I checked the DataFrame javadoc (Spark 1.6) and didn't find a method for that, or did I just miss it?
(In case of JavaRDD there's a getNumPartitions() method.) | 0 | 1 | 144,311 |
0 | 42,195,147 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-12T22:00:00.000 | 0 | 3 | 0 | Can't import legacy_seq2seq from tensorflow.contrib | 42,193,779 | 0 | python,tensorflow | I'm using tf.nn.seq2seq.sequence_loss_by_example - they've moved a lot of stuff from tf.contrib to main packages. This is because they updated their code, but not their examples - if you open github - you'll see a lot of requests to fix issues related to that! | I'm using tensorflow 0.12.1 on Python 3.5.2 on a Windows 10 64bit computer. For some reason, whenever I try to import legacy_seq2seq from tensorflow.contrib, it always occurs the error: ImportError: cannot import name 'legacy_seq2seq'.
What causes the problem and how can I fix it? | 0 | 1 | 2,843 |
0 | 42,225,141 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-13T12:54:00.000 | 0 | 1 | 0 | How to divide map-reduce tasks? | 42,204,582 | 0 | python,hadoop,mapreduce,hadoop-streaming | So, you have a table with 200 columns (say T), a separate list of entries (say L) to be picked from T, and a last-24-hours filter (from the timestamp in T).
In MapReduce, the mapper receives entries from T sequentially. Before your mapper gets into map(), i.e. in setup(), put the block of code that reads L and keeps it handy (use a suitable data structure to hold the list of data). Now your code should apply two checks/conditions: 1) whether the entry from T matches an entry in L, and if so, 2) whether the data is within the 24-hour range.
Done. Your output is what you expected. No reducer is required here, at least to do this much.
Happy MapReducing. | I have a table containing 200 columns, out of which I need around 50 columns mentioned in a list,
and rows of last 24 months according to column 'timestamp'.
I'm confused about what comes under the mapper and what under the reducer.
As it is just a transformation, will it only have a mapper phase, or will the filtering of rows to the last 24 months come under the reducer? I'm not sure if this exactly utilises what map-reduce was made for.
I'm using python with hadoop streaming. | 0 | 1 | 165 |
0 | 60,818,333 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2017-02-13T15:06:00.000 | 0 | 4 | 0 | A simple way to insert a table of contents in a multiple page pdf generated using PdfPages | 42,207,211 | 0 | python,python-3.x,pandas,pdf,matplotlib | What I do sometimes is generate a HTML file with my tables as I want and after I convert to PDF file. I know this is a little harder but I can control any element on my documents. Logically, this is not a good solution if you want write many files. Another good solution is make PDF from Jupyter Notebook. | I am using Pandas to read data from some datafiles and generate a multiple-page pdf using PdfPages, in which each page contains the matplotlib figures from one datafile. It would be nice to be able to get a linked table of contents or bookmarks at each page, so that i can easily find figures corresponding to a given datafile . Is there a simple way to achieve this (for example by somehow inserting the name of the data file) in python 3.5? | 0 | 1 | 1,445 |
0 | 51,373,915 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2017-02-13T15:06:00.000 | 0 | 4 | 0 | A simple way to insert a table of contents in a multiple page pdf generated using PdfPages | 42,207,211 | 0 | python,python-3.x,pandas,pdf,matplotlib | It sounds like you want to generate fig{1, 2, ..., N}.pdf and then generate a LaTeX source file which mentions an \includegraphics for each of them, and produces a ToC. If you do scratch this particular itch, consider packaging it up for others to use, as it is a pretty generic use case. | I am using Pandas to read data from some datafiles and generate a multiple-page pdf using PdfPages, in which each page contains the matplotlib figures from one datafile. It would be nice to be able to get a linked table of contents or bookmarks at each page, so that i can easily find figures corresponding to a given datafile . Is there a simple way to achieve this (for example by somehow inserting the name of the data file) in python 3.5? | 0 | 1 | 1,445 |
0 | 42,253,328 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-02-14T11:11:00.000 | 0 | 1 | 0 | Tensorflow import error after python update | 42,224,655 | 1.2 | python-2.7,import,tensorflow,protocol-buffers,importerror | I had a similar problem. Make sure that pip and python have the same path when typing which pip and which python. If they differ, change your ~/.bash_profile so that the python path matches the pip path, and run source ~/.bash_profile.
If that doesn't work, I would try to reinstall pip and tensorflow.
I installed pip using this command:
wget https://bootstrap.pypa.io/get-pip.py
sudo python2.7 get-pip.py | I am using tensorflow with python 2.7. However, after updating python 2.7.10 to 2.7.13, I get an import error with tensorflow
File "", line 1, in
File "/Users/usrname/Library/Python/2.7/lib/python/site-
packages/tensorflow/__init__.py", line 24, in
from tensorflow.python import *
File "/Users/usrname/Library/Python/2.7/lib/python/site-
packages/tensorflow/python/__init__.py", line 63, in
from tensorflow.core.framework.graph_pb2 import *
File "/Users/usrname/Library/Python/2.7/lib/python/site-
packages/tensorflow/core/framework/graph_pb2.py", line 6, in
from google.protobuf import descriptor as _descriptor
ImportError: No module named google.protobuf
Output from pip install protobuf
Requirement already satisfied: protobuf in /usr/local/lib/python2.7/site-packages
Requirement already satisfied: setuptools in /Users/usrname/Library/Python/2.7/lib/
python/site-packages (from protobuf)
Requirement already satisfied: six>=1.9 in /Library/Python/2.7/site-packages/
six-1.10.0-py2.7.egg (from protobuf)
Requirement already satisfied: appdirs>=1.4.0 in /usr/local/lib/python2.7/site-packages
(from setuptools->protobuf)
Requirement already satisfied: packaging>=16.8 in /usr/local/lib/python2.7/site-packages
(from setuptools->protobuf)
Requirement already satisfied: pyparsing in /usr/local/lib/python2.7/site-packages
(from packaging>=16.8->setuptools->protobuf)
Output from which python:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
I believe this path changed after the python update, but not sure. A solution could possibly be to downgrade python, but this seems like a bad solution? As I work in a team, I would like to avoid reinstalling Tensorflow due to end up with different versions, but this is perhaps the way to go? Any advice?
Update: I tried to install tensorflow all over again, but the same error keeps popping up. Maybe the problem is the environment variables, as which pip returns /usr/local/bin/pip (which is different from which python)? | 0 | 1 | 781
0 | 43,529,327 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-02-14T17:11:00.000 | 0 | 3 | 0 | Import cv2: ImportError: DLL load failed: windows 7 Anaconda 4.3.0 (64-bit) Python 3.6.0 | 42,232,177 | 0 | python,opencv,dll | use python 2.7.1.0 instead of python 3, cv2 worked and dll load error fixed after using python 2.7 | I am using Anaconda 4.3.0 (64-bit) Python 3.6.0 on windows 7. I am getting the error "ImportError: DLL load failed: The specified module could not be found." for importing the package import cv2.
I have downloaded the OpenCV package, copy-pasted cv2.pyd into the Anaconda site-packages, and updated my system path to point to the OpenCV bin path to get the DLL. Still I am not able to resolve this issue.
I tried another way to install, using pip install opencv-python. Still not working.
Please need suggestions. Thank you | 0 | 1 | 2,068 |
0 | 42,255,630 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-14T17:27:00.000 | 0 | 1 | 0 | Using external pose estimates to improve stationary marker contour tracking | 42,232,500 | 0 | python,computer-vision,opencv3.0,robotics,pose-estimation | The obvious advantage of having a pose estimate is that it restricts the image region for searching your target.
Next, if your problem is occlusion, you then need to model that explicitly, rather than just try to paper it over with image processing tricks: add to your detector objective function a term that expresses what your target may look like when partially occluded. This can be either an explicit "occluded appearance" model, or implicit - e.g. using an algorithm that is able to recognize visible portions of the targets independently of the whole of it. | Suppose that I have an array of sensors that allows me to come up with an estimate of my pose relative to some fixed rectangular marker. I thus have an estimate as to what the contour of the marker will look like in the image from the camera. How might I use this to better detect contours?
The problem that I'm trying to overcome is that sometimes, the marker is occluded, perhaps by a line cutting across it. As such, I'm left with two contours that if merged, would yield the marker. I've tried opening and closing to try and fix the problem, but it isn't robust to the different types of lighting.
One approach that I'm considering is to use the predicted contour, and perform a local convolution with the gradient of the image, to find my true pose.
Any thoughts or advice? | 0 | 1 | 68 |
0 | 42,240,221 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-02-15T03:22:00.000 | 1 | 2 | 0 | Add path in Python to a Notebook | 42,240,124 | 0.099668 | python,path,ipython-notebook | It's easy question, modify \C:\Users\User\Desktop\A Student's Guide to Python for Physical Modeling by KInder and Nelson\code to C:\\Users\\User\\Desktop\\A Student's Guide to Python for Physical Modeling by KInder and Nelson\\code. | What am I doing wrong here?
I cannot add a path to my Jupyter Notebook. I am stuck. None of my attempts worked at all.
home_dir="\C:\Users\User\Desktop\"
data_dir=home_dir + "\C:\Users\User\Desktop\A Student's Guide to Python for Physical Modeling by KInder and Nelson\code"
data_set=np.loadtxt(data_dir + "HIVseries.csv", delimiter=',') | 0 | 1 | 541 |
0 | 42,244,824 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2017-02-15T04:56:00.000 | 6 | 1 | 0 | TensorFlow - Text recognition in image | 42,241,038 | 1.2 | python,tensorflow,deep-learning,text-recognition | The difficulty is that you don't know where the text is. The solution is, given an image, you need to use a sliding window to crop different part of the image, then use a classifier to decide if there are texts in the cropped area. If so, use your character/digit recognizer to tell which characters/digits they really are.
So you need to train another classifier: given a cropped image (the size of cropped images should be slightly larger than that of your text area), decide whether there is text inside.
Just construct a training set (positive samples are text areas, negative samples are other areas randomly cropped from the big images) and train it.
I am trying to recognize text in naturel scene images. I used to work with an OCR but I would like to use Deep Learning. The text has always the same format :
ABC-DEF 88:88.
What I have done is recognize every character/digit. That means I cropped the image around every character (so each picture gives me 10 characters) to build my training and test sets, and then built two convolutional neural networks. So my training set was a set of character pictures and the labels were just characters/digits.
But I want to go further. What I would like to do is just to give the full pictures and output the entire text (not one character such as in my previous model).
Thank you in advance for any help. | 0 | 1 | 13,369 |
0 | 45,752,337 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-02-15T15:49:00.000 | 0 | 2 | 0 | unable to import pyspark statistics module | 42,253,981 | 0 | python,pyspark | I have the same problem. The Python file stat.py does not seem to be in Spark 2.1.x but in Spark 2.2.x. So it seems that you need to upgrade Spark with its updated pyspark (but Zeppelin 0.7.x does not seem to work with Spark 2.2.x). | Python 2.7, Apache Spark 2.1.0, Ubuntu 14.04
In the pyspark shell I'm getting the following error:
>>> from pyspark.mllib.stat import Statistics
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named stat
Solution ?
similarly
>>> from pyspark.mllib.linalg import SparseVector
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named linalg
I have numpy installed and
>>> sys.path
['', u'/tmp/spark-2d5ea25c-e2e7-490a-b5be-815e320cdee0/userFiles-2f177853-e261-46f9-97e5-01ac8b7c4987', '/usr/local/lib/python2.7/dist-packages/setuptools-18.1-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/pyspark-2.1.0+hadoop2.7-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/py4j-0.10.4-py2.7.egg', '/home/d066537/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip', '/home/d066537/spark/spark-2.1.0-bin-hadoop2.7/python', '/home/d066537', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client'] | 0 | 1 | 1,550 |
0 | 42,260,728 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-02-15T21:36:00.000 | 3 | 2 | 0 | Writing data into a CSV within a loop Python | 42,260,538 | 1.2 | python,loops,csv,time,export-to-csv | How do I best write the data of the polygons that I create into the csv? Do I open the csv at the beginning and then write each row into the file, as I iterate over classes and images?
I suspect most folks would gather the data in a list or perhaps dictionary and then write it all out at the end. But if you don't need to do additional processing to it, yeah -- send it to disk and release the resources.
And I guess writing the data into the csv right away would result in less RAM used, right?
Yes, it would, but it's not going to impact CPU usage; it just reduces RAM usage, though it does depend on when Python garbage-collects the objects. You really shouldn't worry about details like this. Get accurate output, first and foremost.
Image ID, polygon class (1-10), Polygons
Polygons are a very long entry with starts and ends and starts etc.
The polygons are created with an algorithm, for one class at a time, for one picture at a time (429 pictures, 10 classes each).
Now my question is related to computation time and best practice: How do I best write the data of the polygons that I create into the csv? Do I open the csv at the beginning and then write each row into the file, as I iterate over classes and images?
Or should I rather save the data in a list or dictionary or something and then write the whole thing into the csv file at once?
The thing is, I am not sure how fast writing into a csv file is. Also, as the algorithm is already rather computationally expensive, I would like to save my PC the trouble of keeping all the data in RAM.
And I guess writing the data into the csv right away would result in less RAM used, right?
So you say that disc operations are slow. What exactly does that mean? When I write into the csv each row live as I create the data, does that slow down my program? So if I write a whole list into a csv file that would be faster than writing a row, then again calculating a new data row? So that would mean, that the computer waits for an action to finish before the next action gets started, right? But then still, what makes the process faster if I wait for the whole data to accumulate? Anyway the same number of rows have to be written into the csv, why would it be slower if I do it line by line? | 0 | 1 | 705 |
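A small sketch of the "write each row as it is produced" option discussed above; compute_polygons and the column names are placeholders, not the real Dstl code:

```python
import csv

def compute_polygons(image_id, class_id):
    # Placeholder for the real (expensive) polygon extraction.
    return "MULTIPOLYGON EMPTY"

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ImageId", "ClassType", "MultipolygonWKT"])
    for image_id in range(429):
        for class_id in range(1, 11):
            writer.writerow([image_id, class_id, compute_polygons(image_id, class_id)])
# Only one row lives in Python memory at a time; the OS buffers the disk writes.
```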
0 | 42,642,759 | 0 | 1 | 0 | 0 | 2 | false | 14 | 2017-02-16T10:07:00.000 | 0 | 7 | 0 | How do I resolve these tensorflow warnings? | 42,270,739 | 0 | python,tensorflow | Those are simply warnings.
They are just informing you that if you build TensorFlow from source it can be faster on your machine.
Those instructions are not enabled by default on the available builds, I think, to be compatible with as many CPUs as possible.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
I get 5 more similar warning for SSE4.1, SSE4.2, AVX, AVX2, FMA.
Despite these warnings the program seems to run fine. | 0 | 1 | 10,737 |
0 | 42,539,825 | 0 | 1 | 0 | 0 | 2 | false | 14 | 2017-02-16T10:07:00.000 | 0 | 7 | 0 | How do I resolve these tensorflow warnings? | 42,270,739 | 0 | python,tensorflow | It would seem that the PIP build for the GPU is bad as well as I get the warnings with the GPU version and the GPU installed... | I just installed Tensorflow 1.0.0 using pip. When running, I get warnings like the one shown below.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
I get 5 more similar warning for SSE4.1, SSE4.2, AVX, AVX2, FMA.
Despite these warnings the program seems to run fine. | 0 | 1 | 10,737 |
0 | 42,284,733 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-16T13:02:00.000 | 2 | 1 | 0 | How to analyse 3d mesh data(in .stl) by TensorFlow | 42,274,756 | 0.379949 | python,machine-learning,3d,tensorflow,scikit-learn | You have to first extract "features" out of your dataset. These are fixed-dimension vectors. Then you have to define labels which define the prediction. Then, you have to define a loss function and a neural network. Put that all together and you can train a classifier.
In your example, you would first need to extract a fixed dimension vector out of an object. For instance, you could extract the object and project it on a fixed support on the x, y, and z dimensions. That defines the features.
For each object, you'll need to label whether it's convex or concave. You can do that by hand, analytically, or by creating objects analytically that are known to be concave or convex. Now you have a dataset with a lot of sample pairs (object, is-concave).
For the loss function, you can simply use the negative log-probability.
Finally, a feed-forward network with some convolutional layers at the bottom is probably a good idea. | I am trying to write a script in Python to analyse an .stl data file (3D geometry) and say whether the model is convex or concave and watertight, and report other properties...
I would like to use TensorFlow, scikit-learn or another machine learning library, create a database with examples of objects with tags, and in the future add some more examples and just re-train the model for better results.
But my problem is: I don't know how to recalculate or restructure the 3D data to work with ML libraries. I have no idea.
Thank you for your help. | 0 | 1 | 1,021 |
0 | 42,292,153 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-02-17T05:36:00.000 | 1 | 2 | 0 | How to give more than one labels with an image in tensorflow? | 42,290,182 | 0.099668 | python,tensorflow,neural-network | Do they have always two labels? If so try "label1-label2" as one label.
Or simply build two networks, one for label 1 and the other for label 2.
Are they hierarchical labels? Then, check out hierarchical classifiers. | I want to implement a multitask Neural Network in tensorflow, for which I need my input as:
[image label1 label2]
which I can give to the neural network for training.
My question is, how can I associate more than one label with image in TFRecord file?
I am currently using the build_image_data.py file of the Inception model for generating the TFRecord file, but in that case there is just one label per image. | 0 | 1 | 249
0 | 42,298,590 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-02-17T05:36:00.000 | 0 | 2 | 0 | How to give more than one labels with an image in tensorflow? | 42,290,182 | 0 | python,tensorflow,neural-network | I got this working. For any one looking for reference, you can modify Example proto of build_image_data.py file and associate it with two labels. :) | I want to implememt multitask Neural Network in tensorflow, for which I need my input as:
[image label1 label2]
which I can give to the neural network for training.
My question is, how can I associate more than one label with image in TFRecord file?
I am currently using the build_image_data.py file of the Inception model for generating the TFRecord file, but in that case there is just one label per image. | 0 | 1 | 249
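Both answers above amount to putting a second label feature into the Example proto; here is a hedged TF 1.x sketch (the feature names and dummy values are mine, not from build_image_data.py):

```python
import tensorflow as tf

def _int64(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

encoded_jpeg, label1, label2 = b"fake-jpeg-bytes", 3, 7  # stand-ins for real data

example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": _bytes(encoded_jpeg),
    "image/label1": _int64(label1),
    "image/label2": _int64(label2),
}))

writer = tf.python_io.TFRecordWriter("train.tfrecord")
writer.write(example.SerializeToString())
writer.close()
```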
0 | 42,313,679 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2017-02-17T15:32:00.000 | 1 | 1 | 0 | Temporarily disable facets in Python's FacetedSearch | 42,301,719 | 1.2 | python,elasticsearch,elasticsearch-dsl,elasticsearch-py | So you just want to use the Search object's query, but not it's aggregations? In that case just call the object's search() method to get the Search object and go from there.
If you want the aggregations, but just want to skip the python-level facets calculation just use the build_search method to get the raw Search object including the aggregations. | I have created my own customised FacetedSearch class using Pythons Elasticsearch DSL library to perform search with additional filtering in def search(self). Now I would like to reuse my class to do some statistical aggregations. To stay DRY I want to reuse this class and for performance reason I would like to temporarily disable facets calculation when they are not necessary while preserving all the filtering. So question is how can I temporarily omit facets in FacetedSearch search? | 0 | 1 | 103 |
0 | 42,317,702 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-02-18T14:30:00.000 | 1 | 1 | 0 | SVD using Scikit-Learn and Gensim with 6 million features | 42,316,431 | 1.2 | python,scikit-learn,gensim,svd | I don't really see why using sparks mllib SVD would improve performance or avoid memory errors. You simply exceed the size of your RAM. You have some options to deal with that:
Reduce the dictionary size of your tf-idf (playing with max_df and min_df parameters of scikit-learn for example).
Use a hashing vectorizer instead of tf-idf.
Get more RAM (but at some point tf-idf + SVD is not scalable).
Also you should show your code sample, you might do something wrong in your python code. | I am trying to classify paragraphs based on their sentiments. I have training data of 600 thousand documents. When I convert them to Tf-Idf vector space with words as analyzer and ngram range as 1-2 there are almost 6 million features. So I have to do Singular value decomposition (SVD) to reduce features.
I have tried gensim and sklearn's SVD feature. Both work fine for feature reduction till 100 but as soon as I try for 200 features they throw memory error.
Also, I have not used the entire corpus (600 thousand documents) as training data; I have taken only 50000 documents.
50000 * 6 million and want to reduce it to 50000 * (100 to 500)
Is there any other way I can implement it in python, or do I have to implement sparks mllib SVD(written for only java and scala) ? If Yes, how much faster will it be?
System specification: 32 Gb RAM with 4 core processors on ubuntu 14.04 | 0 | 1 | 961 |
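A compact sketch of the two suggestions above (prune the tf-idf vocabulary, or use the hashing trick) with scikit-learn; the toy documents and the tiny n_components are only for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["first training paragraph", "second training paragraph", "yet another paragraph"]

# Option 1: cap the vocabulary so far fewer than 6 million features survive.
tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_df=0.95, max_features=200000)

# Option 2: fix the dimensionality up front with the hashing trick.
hasher = HashingVectorizer(ngram_range=(1, 2), n_features=2 ** 18)

X = hasher.transform(docs)
X_reduced = TruncatedSVD(n_components=2).fit_transform(X)  # use ~100-500 on the real corpus
print(X_reduced.shape)
```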
0 | 47,821,847 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-18T16:54:00.000 | -1 | 1 | 0 | Select specific MNIST classes to train a neural network in TensorFlow | 42,317,953 | -0.197375 | python,tensorflow,mnist | Found the answer i guess... one hot=True transformed the scalar into a one hot vector :) thanks for your time anyway! | currently i am looking for a way to filter specific classes out of my training dataset (MNIST) to train a neural network on different constellations, e.g. train a network only on classes 4,5,6 then train it on 0,1,2,3,4,5,6,7,8,9 to evaluate the results with the test dataset.
I'd like to do it with an argument parser via the console to choose which classes should be in my training dataset, so I can split this into mini-batches. I think I could do it by sorting via labels, but I am kinda stuck at this moment... would appreciate any tip!!!
Greetings,
Alex | 0 | 1 | 1,514 |
0 | 42,521,000 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-02-20T08:47:00.000 | 1 | 1 | 0 | CNTK 2 sorted minibatch sources | 42,339,941 | 1.2 | python,cntk | You can create two minibatch sources, one for x and one for x_mask, both with randomize=False. Then the examples will be read in the order in which they are listed in the two map files. So as long as the map files are correct and the minibatch sizes are the same for both sources you will get the images and the masks in the order you want. | does anyone know how to create or use 2 minibatch sources or inputs a sorted way? My problem is the following:
I have images named from 0 to 5000 and images named 0_mask to 5000_mask. For each image x the coressponding image x_mask is the regression image for a deconvolution output. So i need a way to tell cntk that each x corresponds to x_match and that there is no regression done between x and y_mask.
I'm well aware of the cntk convolution sample. I've seen it. The problem are the two input streams with x and x_mask.
Can i combine them and make the reference, i need it in an easy way?
Thank you in advance. | 0 | 1 | 65 |
0 | 42,718,191 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2017-02-20T16:45:00.000 | 1 | 5 | 1 | Unable to run pyspark | 42,349,980 | 0.039979 | python,pyspark | The Possible Issues faced when running Spark on Windows is, of not giving proper Path or by using Python 3.x to run Spark.
So,
Check whether the path given for Spark, i.e. /usr/local/spark, is proper or not.
Set the Python path to Python 2.x (remove Python 3.x).
Python 3.6.0 |Anaconda custom (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Traceback (most recent call last): File "c:\Spark\bin..\python\pyspark\shell.py", line 30, in import pyspark File "c:\Spark\python\pyspark__init__.py", line 44, in from pyspark.context import SparkContext File "c:\Spark\python\pyspark\context.py", line 36, in from pyspark.java_gateway import launch_gateway File "c:\Spark\python\pyspark\java_gateway.py", line 31, in from py4j.java_gateway import java_import, JavaGateway, GatewayClient File "", line 961, in _find_and_load File "", line 950, in _find_and_load_unlocked File "", line 646, in _load_unlocked File "", line 616, in _load_backward_compatible File "c:\Spark\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 18, in File "C:\Users\Eigenaar\Anaconda3\lib\pydoc.py", line 62, in import pkgutil File "C:\Users\Eigenaar\Anaconda3\lib\pkgutil.py", line 22, in ModuleInfo = namedtuple('ModuleInfo', 'module_finder name ispkg') File "c:\Spark\python\pyspark\serializers.py", line 393, in namedtuple cls = _old_namedtuple(*args, **kwargs) TypeError: namedtuple() missing 3 required keyword-only arguments: 'verbose', 'rename', and 'module'
what am I doing wrong here? | 0 | 1 | 21,967 |
1 | 51,124,327 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-02-20T16:46:00.000 | 2 | 2 | 0 | Homography and Lucas Kanade what is the difference? | 42,350,006 | 0.197375 | python-2.7,computer-vision,homography,opticalflow | Optical flow: detect motions from one frame to the next. This is either sparse (few positions of interest are tracked, such as in the LKDemo.cpp example) or dense (one motion per position for many positions(e.g. all pixels) such as Farneback demos in openCV).
Regardless of whether you have dense or sparse flow, there are different kinds of transforms that optical flow methods may try to estimate. The most common transform is translation. This is just the offset of postion from frame-to-frame. This can be visualized as vectors per frame, or as color when the flow is dense and high resolution.
One is not limited to estimating only translation per position. You can also estimate rotation, for example (how a point is rotating from frame to frame), or how it is skewed. In affine optical flow, you estimate a full affine transform per position (translation, rotation, skew and scale). Affine flow is a classical and powerful technique that is much misunderstood, and probably used far less than it should be.
Affine transforms are given most economically by a 2x3 matrix: 6 degrees of freedom, compared to the regular 2 d.o.f. of regular translational optical flow.
Leaving the topic of optical flow, An even more general family of transforms is called "Homographies" or "projective transforms". They require a 3x3 transform, and have 8 d.o.f. The affine family is not enough to describe the sort of deformation a plane undergoes, when you view it with projective distortion.
Homographies are commonly estimated from many matched points between frames. In that sense, it uses the output of regular translational optical flow(but where the affine approach is often used under the hood to improve the results).
All of this only scratches the surface... | I am using optical flow to track some features. I am a beginner and was told to follow these steps:
Match good features to track
Doing Lucas-Kanade Algorithm on them
Find homography between 1-st frame and current frame
Do camera calibration
Decompose homography map
Now what I don't understand is the homography part: you find the features and track them using Lucas-Kanade, and then the homography is used to compute the camera motion (rotation and translation) between two images. But isn't that what Lucas-Kanade does? Or does Lucas-Kanade just track them and the homography does the calculations? I am struggling to understand the difference between them. Thanks in advance. | 0 | 1 | 1,959
0 | 42,352,213 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-02-20T18:21:00.000 | 0 | 5 | 0 | Tensorflow installation on Windows 10, error 'Not a supported wheel on this platform' | 42,351,728 | 0 | windows,python-3.x,cmd,tensorflow | So are you sure you correctly downgraded your python? Run this command on command line pip -V. This should print the pip version and the python version. | This question is for a Windows 10 laptop. I'm currently trying to install tensorflow, however, when I run:
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl
I get the following error:
tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl is not a supported wheel on this platform.
I am trying to install the cpu-version only of tensorflow in an Anaconda 4.3.0 version. I had python 3.6.0 and then I downgraded to 3.5.0, none of them worked. | 0 | 1 | 3,732 |
0 | 45,426,365 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-02-21T02:09:00.000 | 1 | 2 | 0 | Scikit-learn non-negative matrix factorization (NMF) for sparse matrix | 42,357,450 | 0.099668 | python,scikit-learn,nmf | In your data matrix the missing values can be 0, but rather than storing a bunch of zeros for a very sparse matrix you would usually store a COO matrix instead, where each row is stored in CSR format.
If you are using NMF for recommendations, then you would be factorising your data matrix X by finding W and H such that W.H approximately equals X with the condition that all three matrices are non-negative. When you reconstruct this matrix X some of the missing values (where you would have stored zeros) may become non-zero and some may remain zero. At this point, in the reconstructed matrix, the values are your predictions.
So to answer your question, are they 0's or missing data in the NMF model? The NMF model once fit will contain your predicted values, so I would count them as zero. This is a method of predicting missing values in the data. | I am using Scikit-learn's non-negative matrix factorization (NMF) to perform NMF on a sparse matrix where the zero entries are missing data. I was wondering if the Scikit-learn's NMF implementation views zero entries as 0 or missing data.
Thank you! | 0 | 1 | 1,903 |
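To make the "reconstructed values are your predictions" point above concrete, a small hedged sketch with scikit-learn (the toy matrix is made up):

```python
import numpy as np
from sklearn.decomposition import NMF

# Zeros stand in for the missing entries of the sparse data matrix.
X = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])

model = NMF(n_components=2, init='random', random_state=0)
W = model.fit_transform(X)   # note: the zeros are treated as the value 0, not masked out
H = model.components_

X_hat = W.dot(H)             # reconstruction; formerly-zero cells now hold predictions
print(np.round(X_hat, 2))
```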
0 | 42,357,893 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-21T02:51:00.000 | 0 | 1 | 0 | Draw rectangle in opencv with length and breadth in cms? | 42,357,801 | 0 | python,opencv | It's dependent on the pixel-distance ratio. You can measure this by taking an image of a meter-stick and measuring its pixel width (for this example say it's 1000px). The ratio of pixels to distance is 1000px/100cm, or 10. You can now use this constant as a multiplier, so for a given length and width in cm, you just multiply by the ratio to get a pixel height and width, which can be passed into OpenCV's draw rectangle function. | I know how to draw a rectangle in opencv.
But can I choose the length and breadth to be in centimetres? | 0 | 1 | 408
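A short sketch of the pixels-per-centimetre multiplier described in the answer (the calibration constant, sizes and positions are made up):

```python
import numpy as np
import cv2

PX_PER_CM = 10.0                      # e.g. a metre stick measured 1000 px wide
width_cm, height_cm = 4.0, 2.5

img = np.zeros((400, 400, 3), dtype=np.uint8)
top_left = (50, 50)
bottom_right = (int(50 + width_cm * PX_PER_CM), int(50 + height_cm * PX_PER_CM))
cv2.rectangle(img, top_left, bottom_right, (0, 255, 0), 2)
cv2.imwrite("rect.png", img)
```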
0 | 42,635,609 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-02-21T05:33:00.000 | 0 | 2 | 0 | SIFT Input to ANN | 42,359,440 | 0 | python,opencv,neural-network,classification,sift | It will be good if you apply Normalization on each image before getting the feature extractor. | I'm trying to classify images using an Artificial Neural Network and the approach I want to try is:
Get feature descriptors (using SIFT for now)
Classify using a Neural Network
I'm using OpenCV3 and Python for this.
I'm relatively new to Machine Learning and I have the following question -
Each image that I analyse will have different number of 'keypoints' and hence different dimensions of the 2D 'descriptor' array. How do I decide the input for my ANN. For example for one sample image the descriptor shape is (12211, 128) so do I flatten this array and use it as an input, in which case I have to worry about varying input sizes for each image, or do I compute something else for the input? | 0 | 1 | 578 |
0 | 42,386,754 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-02-22T08:41:00.000 | 2 | 1 | 0 | Tensorflow: concatenating 2 tensors of shapes containing None | 42,386,493 | 0.379949 | python,tensorflow | Before asking a question I should probably try to run the code :) Using tf.concat(values=[A, B], concat_dim=3) seems to be working. | My problem is the following: I have a tensor A of shape [None, None, None, 3] ([batch_size, height, width, num_channels]) and a tensor B. At runtime it is guaranteed that A and B will have the same shape. I would like to concatenate these two tensors along num_channels axis.
PS. Note that I simplified my original problem - so having a tensor of shape [None, None, None, 6] in the first place is not an option. | 0 | 1 | 857 |
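For reference, a hedged note on the call in the answer: in TensorFlow 1.0+ the keyword is axis rather than the older concat_dim, but the idea is the same:

```python
import tensorflow as tf

A = tf.placeholder(tf.float32, shape=[None, None, None, 3])
B = tf.placeholder(tf.float32, shape=[None, None, None, 3])

C = tf.concat([A, B], axis=3)   # TF 1.0+ spelling; older releases used concat_dim
print(C.get_shape())            # (?, ?, ?, 6)
```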
0 | 42,403,483 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-02-22T19:05:00.000 | 0 | 1 | 0 | High exponent numbers with scipy.stats functions | 42,400,159 | 0 | python,pandas,scipy,double,precision | You could just write the expression for logsf directly using logs of gamma functions from scipy.special (gammaln, loggamma). And you could send a pull request implementing the logsf for the chi-square distribution. | I have a set of number that can get very small, from 1e-100, to 1e-700 and lower. The precision doesn't matter as much as the exponent.
I can load such numbers just fine using Pandas by simply providing Decimal as a converter for all such numeric columns.
The problem is, even if I use Python's Decimal, I just can't use scipy.stats.chi2.isf and similar functions since their C code explicitly uses double.
A possible workaround is that I can use log10 of the numbers. The problem here is that although there is logsf function, for chi2 it's implemented as just log(sf(...)), and will, therefore, fail when sf returns 0 where it should've returned something like 1e-600. And for isf there is no such log function at all.
I wanted to know if there is any way to work with such numbers without resorting to writing all these functions myself for Decimal. | 0 | 1 | 105
0 | 58,574,666 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-02-22T20:29:00.000 | 0 | 1 | 0 | Shape not the same after dumping to libsvm a numpy sparse matrix | 42,401,638 | 0 | python,numpy,scikit-learn,libsvm,sklearn-pandas | I suspect your last two columns consist of only 0's. When loading an libsvm file, it generally doesn't have anything indicating the number of columns. It's a sparse format of col_num:val and will learn the maximum number of columns by the highest column number observed. If you only have 0's in the last two columns, they'll get dropped in this transformation. | I have numpy sparse matrix that I dump in a libsvm format. VC was created using CountVectorizer where the size of the vocabulary is 85731
vc
<1315689x85731 sparse matrix of type '<type 'numpy.int64'>'
with 38911625 stored elements in Compressed Sparse Row format>
But when I load libsvm file back I see that the shape is different. Two columns are gone:
data[0]
<1315689x85729 sparse matrix of type '<type 'numpy.float64'>'
with 38911625 stored elements in Compressed Sparse Row format>
I have no idea why this could be happening ? I also loaded the VC sparse matrix as dmatrix. Same issue 2 columns vanish.
Hope someone with more experience could point out the issue.
Thanks | 0 | 1 | 251 |
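A small sketch reproducing the answer's explanation with scikit-learn's svmlight helpers: trailing all-zero columns disappear on reload unless n_features is pinned (the file name is arbitrary):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = csr_matrix(np.array([[1.0, 0.0, 0.0],
                         [2.0, 3.0, 0.0]]))   # last column is all zeros
y = np.array([0, 1])

dump_svmlight_file(X, y, "data.libsvm")

X_back, _ = load_svmlight_file("data.libsvm")
print(X_back.shape)                            # (2, 2): the empty column vanished

X_fixed, _ = load_svmlight_file("data.libsvm", n_features=X.shape[1])
print(X_fixed.shape)                           # (2, 3)
```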
0 | 42,406,043 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-02-23T01:36:00.000 | 0 | 1 | 0 | Selecting data from large MySQL database where value of one column is found in a large list of values | 42,405,493 | 0 | python,mysql,sql,python-3.x,pandas | I am not familiar with pandas but strictly speaking from a database point of view you could just have your panda values inserted in a PANDA_VALUES table and then join that PANDA_VALUES table with the table(s) you want to grab your data from.
Assuming you will have some indexes in place on both PANDA_VALUES table and the table with your column the JOIN would be quite fast.
Of course you will have to have a process in place to keep PANDA_VALUES tables updated as the business needs change.
Hope it helps. | I generally use Pandas to extract data from MySQL into a dataframe. This works well and allows me to manipulate the data before analysis. This workflow works well for me.
I'm in a situation where I have a large MySQL database (multiple tables that will yield several million rows). I want to extract the data where one of the columns matches a value in a Pandas series. This series could be of variable length and may change frequently. How can I extract data from the MySQL database where one of the columns of data is found in the Pandas series? The two options I've explored are:
Extract all the data from MySQL into a Pandas dataframe (using pymysql, for example) and then keep only the rows I need (using df.isin()).
or
Query the MySQL database using a query with multiple WHERE ... OR ... OR statements (and load this into Pandas dataframe). This query could be generated using Python to join items of a list with ORs.
I guess both these methods would work but they both seem to have high overheads. Method 1 downloads a lot of unnecessary data (which could be slow and is, perhaps, a higher security risk) whilst method 2 downloads only the desired records but it requires an unwieldy query that contains potentially thousands of OR statements.
Is there a better alternative? If not, which of the two above would be preferred? | 0 | 1 | 352 |
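As a middle ground between the two options in the question, a hedged sketch of a parameterized IN query built from the pandas Series (the table, column and credentials are made up):

```python
import pandas as pd
import pymysql

wanted = pd.Series([101, 205, 309])   # the values to match on

conn = pymysql.connect(host="localhost", user="user", password="pw", db="mydb")
placeholders = ", ".join(["%s"] * len(wanted))
sql = "SELECT * FROM big_table WHERE key_col IN ({})".format(placeholders)

df = pd.read_sql(sql, conn, params=list(wanted))
conn.close()
```

For very long value lists, the temporary-table JOIN suggested in the answer scales better than a huge IN clause.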
0 | 42,419,250 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-02-23T14:46:00.000 | 1 | 1 | 0 | python lbp image classification | 42,418,948 | 1.2 | python,pattern-matching,classification,svm,image-recognition | If you want to use SVM as a classifier it does not make a lot of sense to make one average histogram for male and one for female because when you train you SVM classifier you can take all the histograms into account, but if you compute the average histograms you can use a nearest neighbor classifier instead. | I am working on a personal project: gender classification (male | female) in python. I'm beginner in this domain
I computed histograms for every image in training data.
Now, to test whether a test image is male or female, is it possible to make an average histogram for male | female and compare the test histogram against it? Or must I compare all histograms with the test histogram?
If it is possible to make an average, how should I do it?
Also, is it ok to use SVM for classification?
PS. I am looking for free faces databases.
Thanks | 0 | 1 | 470 |
0 | 51,049,687 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-02-24T17:30:00.000 | 0 | 3 | 0 | How to turn off events.out.tfevents file in tf.contrib.learn Estimator | 42,444,796 | 0 | python,tensorflow | I had the same issue and was not able to find any resolution for this while the events file kept on growing in size. My understanding is that this file stores the events generated by tensorflow. I went ahead and deleted this manually.
Interestingly, it never got created again while the other files are getting updated when I run a train sequence. | When using estimator.Estimator in tensorflow.contrib.learn, after training and prediction there are these files in the modeldir:
checkpoint
events.out.tfevents.1487956647
events.out.tfevents.1487957016
graph.pbtxt
model.ckpt-101.data-00000-of-00001
model.ckpt-101.index
model.ckpt-101.meta
When the graph is complicated or the number of variables is big, the graph.pbtxt file and the events files can be very big. Is here a way to not write these files? Since model reloading only needs the checkpoint files removing them won't affect evaluation and prediction down the road. | 0 | 1 | 4,050 |
0 | 42,461,595 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-02-25T16:14:00.000 | 1 | 1 | 0 | Is there any way to only import the MNIST images with 0's and 1's? | 42,458,415 | 1.2 | tensorflow,python-3.5,mnist | Assuming you are using
from tensorflow.examples.tutorials.mnist import input_data
No, there is no function or argument in that file... What you can do is load all data, and select only the ones and zeros. | I am just starting out with tensorflow and I want to test something only on the 0's and 1's from the MNIST images. Is there a way to import only these images? | 0 | 1 | 98 |
0 | 53,147,988 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-02-25T18:07:00.000 | 0 | 2 | 0 | Why does my keras model terminate and freeze my notebook if I do more than one epoch? | 42,459,726 | 0 | python,keras | You probably have to look into more factors.
Look at the system resources, e.g. CPU, memory, disk IO. (If you use Linux, run the sar command.)
For me, I had another problem with a frozen notebook, and it turned out to be an issue of low memory.
My backend is Theano just for clarification.
There is definitely a correlation between performance and batch_size. I tried doing batch_size=1 and it took 12s of horrifying, daunting, unforgivable time out of my day to do 1 epoch. | 0 | 1 | 519 |
0 | 42,460,023 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-02-25T18:07:00.000 | 0 | 2 | 0 | Why does my keras model terminate and freeze my notebook if I do more than one epoch? | 42,459,726 | 0 | python,keras | It takes time to run through the epochs and sometimes it looks like it freezes, but it still runs and if you wait long enough it will finish. Increasing the batch size makes it run through the epochs faster. | I did 1 nb_epoch with batch sizes of 10 and it successfully completed. The accuracy rate was absolutely horrible coming in at a whopping 27%. I want to make it run on more than one epoch to see if the accuracy will, ideally, be above 80% or so, but it keeps freezing my Jupyter Notebook if I try to make it do more than one epoch. How can I fix this?
My backend is Theano just for clarification.
There is definitely a correlation between performance and batch_size. I tried doing batch_size=1 and it took 12s of horrifying, daunting, unforgivable time out of my day to do 1 epoch. | 0 | 1 | 519 |
0 | 42,461,103 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-02-25T20:12:00.000 | 11 | 1 | 0 | Subset pandas dataframe using values from two columns | 42,461,086 | 1 | python,pandas,dataframe,subset | I will answer my own question, hoping it will help someone. I tried this and it worked.
df[(df['gold']>0) & (df['silver']>0)]
Note that I have used & instead of and and I have used brackets to separate the different conditions. | I am trying to subset a pandas dataframe based on values of two columns. I tried this code:
df[df['gold']>0, df['silver']>0, df['bronze']>0] but this didn't work.
I also tried:
df[(df['gold']>0 and df['silver']>0)]. This didn't work either. I got an error saying:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
What would you suggest? | 0 | 1 | 6,839 |
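The accepted self-answer shows two conditions; the same & pattern extends to all three columns from the question, e.g. (toy data made up):

```python
import pandas as pd

df = pd.DataFrame({'gold':   [1, 0, 2, 3],
                   'silver': [2, 1, 0, 1],
                   'bronze': [0, 3, 1, 2]})

subset = df[(df['gold'] > 0) & (df['silver'] > 0) & (df['bronze'] > 0)]
print(subset)
```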
0 | 42,481,143 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-02-27T07:24:00.000 | 3 | 1 | 0 | Implement Gaussian Mixture Model using keras | 42,479,954 | 0.53705 | python-3.x,tensorflow,keras,gmm | Are you sure that it is what you want? you want to integrate a GMM into a neural network?
Tensorflow and Keras are libraries to create, train and use neural network models. The Gaussian Mixture Model is not a neural network.
0 | 43,356,794 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-27T11:10:00.000 | 1 | 1 | 0 | What is Non-Intrusive Load Monitoring or energy disaggregation or power signature analysis? | 42,484,305 | 0.197375 | python,github,dataset,energy | The aim of non-intrusive load monitoring is to obtain a breakdown of the net energy consumption of a building in terms of individual appliance consumption. There has been work on multiple algorithms so as to get this done ( with varying performance) and as always these can be written in any programming language.
NILMTK itself is written in Python and is a good toolkit to describe, analyse and integrate NILM algorithms so that they can be compared. | Does anybody know anything about NILM or power signature analysis?
Can i do non-intrusive load monitoring using python?
I got to know about one python toolkit known as NILMTK. But I need help for knowing about NILM.
If anybody know about NILM, then please guide me. Thank you. | 0 | 1 | 613 |
0 | 42,501,574 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-02-28T03:56:00.000 | 1 | 2 | 0 | How to check whether two images represent the same object when the pictures of the object come from two different sources - in OpenCV? | 42,499,927 | 0.099668 | python,opencv,image-processing,object-recognition | SIFT feature matching might produce better results than ORB. However, the main problem here is that you have only one image of each type (from the mobile camera and from the Internet). If you have a large number of images of this car model, then you can train a machine learning system using those images. Later you can submit one image of the car to the machine learning system and there is a much higher chance of it recognizing the car.
From a machine learning point of view, using only one image as the master and matching another with it is analogous to teaching a child the letter "A" using only one handwritten letter "A", and expecting him/her to recognize any handwritten letter "A" written by anyone. | Suppose I have an image of a car taken with my mobile camera and I have another image of the same car downloaded from the internet.
(For simplicity please assume that both the images contain the same side view projection of the same car.)
How can I detect that both images represent the same object, i.e. the car in this case, using OpenCV?
I've tried template matching, feature matching (ORB), etc., but those are not working and are not providing satisfactory results. | 0 | 1 | 1,709 |
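A rough OpenCV sketch of the feature-matching idea from the answer above; the file names are placeholders, and ORB is used here because SIFT may require an opencv-contrib build:

import cv2

# Placeholder file names - substitute the two car images.
img1 = cv2.imread('car_phone.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('car_internet.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with cross-checking, best matches first.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# A crude similarity signal: count the reasonably close matches.
good = [m for m in matches if m.distance < 50]
print('good matches:', len(good))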
0 | 42,510,284 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-02-28T13:27:00.000 | 2 | 1 | 0 | What algorithm to choose for binary image classification | 42,510,042 | 1.2 | python,image,binary,classification,svm | You should probably post this on Cross Validated.
But as a direct answer, you should probably look into sequence-to-sequence learners, as it has become clear to you that SVM is not the ideal solution for this.
You could look into Markov models for sequential learning if you don't want to go the deep learning route; however, neural networks have a very good track record with image classification problems.
Ideally, for sequential learning you should look into Long Short-Term Memory recurrent neural networks, and for your current dataset see if pre-training on an existing data corpus (say, CIFAR-10) may help.
So my recommendation is to give TensorFlow a try with a high-level library such as Keras/SKFlow.
Neural networks are just another tool in your machine learning repertoire, and you might as well give them a real chance.
An edit to address your comment:
Your issue there is not a lack of data for the SVM; the SVM will work well for a small dataset, as it will be easier for it to fit (or overfit) a separating hyperplane on that dataset.
As you increase your data dimensionality, keep in mind that separating it with a hyperplane becomes increasingly difficult (look up the curse of dimensionality).
However, if you are set on doing it this way, try some dimensionality reduction such as PCA.
Although here you're bound to find another face-off with neural networks, since Kohonen Self-Organizing Maps do this task beautifully, you could attempt to project your data into a lower dimension, thereby allowing the SVM to separate it with greater accuracy.
I still stand by saying you may be using the incorrect approach. | Let's say I have two arrays in a dataset:
1) The first one is an array classified as (0,1) - [0,1,0,1,1,1,0.....]
2) The second array consists of greyscale image vectors with 2500 elements each (numbers from 0 to 300). These numbers are pixels from 50*50px images. - [[13 160 239 192 219 199 4 60..][....][....][....][....]]
The size of this dataset is quite significant (~12000 elements).
I am trying to build a very basic binary classifier which will give appropriate results. Let's say I want to choose a non-deep-learning, supervised method.
Is it suitable in this case? I've already tried sklearn's SVM with various parameters, but the outcome is unacceptably inaccurate and consists mainly of 1s: [1,1,1,1,1,0,1,1,1,....]
What is the right approach? Isn't the size of the dataset enough to get a good result with a supervised algorithm? | 0 | 1 | 695 |
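A hedged sketch of the PCA-then-SVM suggestion from the answer above, using random stand-in data with the 12000 x 2500 shape described in the question:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Random stand-in data with the shapes described in the question.
X = np.random.randint(0, 255, size=(12000, 2500)).astype(float)
y = np.random.randint(0, 2, size=12000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Reduce the 2500-dimensional pixel vectors before fitting the SVM;
# class_weight='balanced' can help when the classifier keeps predicting all 1s.
clf = make_pipeline(PCA(n_components=50),
                    SVC(kernel='rbf', class_weight='balanced'))
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))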
0 | 42,522,820 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-01T03:30:00.000 | 0 | 1 | 0 | How are Python data structures handled in Spark when using PySpark? | 42,522,654 | 1.2 | python,python-2.7,apache-spark,pyspark | You can create traditional Python data objects such as arrays, lists, tuples, or dictionaries in PySpark.
You can perform most operations using Python functions in PySpark.
You can import Python libraries in PySpark and use them to process data there.
You can create an RDD and apply Spark operations to it. | I am currently self-learning Spark programming and trying to recode an existing Python application in PySpark. However, I am still confused about how we use regular Python objects in PySpark.
I understand the distributed data structures in Spark such as the RDD, DataFrame, Dataset, vector, etc. Spark has its own transformation and action operations such as .map() and .reduceByKey() to manipulate those objects. However, what if I create traditional Python data objects such as an array, list, tuple, or dictionary in PySpark? They will only be stored in the memory of my driver program node, right? If I transform them into an RDD, can I still do operations with typical Python functions?
If I have a huge dataset, can I use regular Python libraries like pandas or numpy to process it in PySpark? Will Spark only use the driver node to process the data if I directly execute a Python function on a Python object in PySpark? Or do I have to create an RDD and use Spark's operations? | 0 | 1 | 854 |
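A small PySpark sketch of the points in the answer above: a plain Python list lives only on the driver until it is parallelized into an RDD, after which ordinary Python functions run inside Spark transformations (creating the SparkContext is assumed to be necessary here, i.e. one is not already provided by a shell):

from pyspark import SparkContext

sc = SparkContext(appName='rdd-example')

local_list = [1, 2, 3, 4, 5]        # ordinary Python object on the driver
rdd = sc.parallelize(local_list)    # distributed across the executors

# Plain Python lambdas/functions are usable inside Spark transformations.
squared = rdd.map(lambda x: x * x).filter(lambda x: x > 4)
print(squared.collect())            # [9, 16, 25] - collected back to the driver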
0 | 52,912,100 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-03-01T19:02:00.000 | 0 | 3 | 0 | Why can I not import sklearn | 42,539,906 | 0 | python,scikit-learn | If someone is working via bash, here are the steps:
For Ubuntu:
sudo apt-get install python-sklearn | Why am I not able to import sklearn?
I downloaded Anaconda Navigator and it has scikit-learn in it. I even pip installed sklearn, numpy and scipy in Command Prompt and it shows that they have already been installed, but still when I import sklearn in Python (I use PyCharm for coding) it doesn't work. It says 'No module named sklearn'. | 0 | 1 | 7,012 |
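A quick diagnostic sketch (an added suggestion, not part of either answer): run it inside PyCharm to see which interpreter is actually being used and whether scikit-learn is installed for that interpreter:

import sys
print(sys.executable)  # the interpreter PyCharm is actually running

try:
    import sklearn
    print('scikit-learn version:', sklearn.__version__)
except ImportError:
    print('sklearn is not installed for this interpreter; point PyCharm at '
          'the Anaconda interpreter or install scikit-learn for this one')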
0 | 42,574,660 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-03-01T19:02:00.000 | 0 | 3 | 0 | Why can I not import sklearn | 42,539,906 | 0 | python,scikit-learn | Problem solved! I didn't know that I was supposed to change my interpreter to Anaconda's interpreter (I am fairly new to Python). Thanks for the help! | Why am I not able to import sklearn?
I downloaded Anaconda Navigator and it has scikit-learn in it. I even pip installed sklearn, numpy and scipy in Command Prompt and it shows that they have already been installed, but still when I import sklearn in Python (I use PyCharm for coding) it doesn't work. It says 'No module named sklearn'. | 0 | 1 | 7,012 |
0 | 42,558,220 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2017-03-02T04:09:00.000 | 1 | 2 | 0 | Best way to embed Jupyter/IPython notebook information | 42,546,701 | 1.2 | python,python-3.x,ipython-notebook,jupyter-notebook | A link to a gist is by far the superior option of those you have listed, as it means helpers can run your code pretty easily and debug it from there.
An alternative option is to post the code that creates your DataFrame (or at least a minimal example of it) so that we can recreate it. This is advantageous over a gist since helpers don't have to open and download the gist, because the code is in the body of the question. It is also superior in that you may later delete the gist, leaving the question useless for future reference, whereas if your code is in the body of the question then all future users can enjoy it as long as SO lives :) | I just ran across a problem when trying to ask for help using pandas DataFrames in a Jupyter notebook.
More specifically, my problem is: what is the best way to embed IPython notebook input and output in a Stack Overflow question?
Simple copy & paste breaks DataFrame output formatting so badly it becomes impossible to read.
Which would be the preferred way to handle notebooks on Stack Overflow:
screenshot
link to gist with the notebook
converting notebook to HTML and embedding it
Something else? | 0 | 1 | 432 |
0 | 42,551,702 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-02T09:02:00.000 | 0 | 1 | 0 | Give relative path of file in csv python | 42,550,910 | 0 | python,csv,hyperlink | For HYPERLINK you need to use only an absolute URL.
I have tried with absolute url like =HYPERLINK("file:///home/user/Desktop/myfolder/clusters.py") and its working fine.But can i given the relative path like
=HYPERLINK("file:///myfolder/clusters.py") because that is what my project required.User will download this csv along with some other files into his machine.So i cant give the absolute path of other files in csv. | 0 | 1 | 215 |