Dataset columns (name: dtype, value range or string length):
GUI and Desktop Applications: int64, 0 to 1
A_Id: int64, 5.3k to 72.5M
Networking and APIs: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Available Count: int64, 1 to 13
is_accepted: bool, 2 classes
Q_Score: int64, 0 to 1.72k
CreationDate: string, length 23
Users Score: int64, -11 to 327
AnswerCount: int64, 1 to 31
System Administration and DevOps: int64, 0 to 1
Title: string, length 15 to 149
Q_Id: int64, 5.14k to 60M
Score: float64, -1 to 1.2
Tags: string, length 6 to 90
Answer: string, length 18 to 5.54k
Question: string, length 49 to 9.42k
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 1 to 1
ViewCount: int64, 7 to 3.27M
0
34,163,258
0
0
0
0
1
false
0
2015-12-08T02:33:00.000
0
2
0
Review data sentiment analysis, focusing on extracting negative sentiment?
34,146,996
0
python,machine-learning,nlp,sentiment-analysis
When you annotate the sentiment, don't annotate 'Positive', 'Negative', and 'Neutral'. Instead, annotate them as either "has negative" or "doesn't have negative". Then your sentiment classification will only be concerned with how strongly the features indicate negative sentiment, which appears to be what you want.
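A minimal sketch of the binary "has negative / no negative" setup this answer suggests, using scikit-learn bag-of-words features; the example reviews, labels, and the choice of LinearSVC are invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical hand-labelled reviews: 1 = contains negative sentiment, 0 = does not.
reviews = [
    "great product, works fine",
    "mostly good but the battery died after a week",
    "terrible support, would not buy again",
    "exactly as described, very happy",
]
has_negative = [0, 1, 1, 0]

# Bag-of-words features feeding a linear classifier trained on the binary labels.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(reviews, has_negative)

print(model.predict(["good overall, but shipping was a nightmare"]))
```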
I am trying to do sentiment analysis on a review dataset. Since I care more about identifying (extracting) negative sentiment in reviews (they are unlabeled now, but I plan to manually label a few hundred or use the Alchemy API), if a review is overall neutral or positive but one part has negative sentiment, I'd like my model to lean toward classifying it as a negative review. Could someone give me advice on how to do this? I'm thinking about using bag of words/word2vec with supervised (random forest, SVM) or unsupervised learning models (k-means).
0
1
268
0
34,154,600
0
0
0
0
1
false
0
2015-12-08T10:46:00.000
0
1
0
Python Selenium infinite loop
34,153,844
0
python,loops,selenium
Your attempts variable is always less than 5 because it is never incremented, so your loop is infinite.
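For illustration, a minimal sketch of the fix this answer describes (the original code is not shown, so the variable name and the stand-in action are assumptions): the counter has to be incremented inside the loop so the condition can eventually fail.

```python
import random

attempts = 0
while attempts < 5:                    # without an increment this condition never changes
    succeeded = random.random() > 0.5  # stand-in for the real Selenium action that may fail
    if succeeded:
        break
    attempts += 1                      # the missing increment: now the loop can terminate
print("attempts used:", attempts)
```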
I'm trying to study customer behavior. Basically, I have data on customers' loyalty-points activity (e.g. how many points they have earned, how many points they have used, how recently they have used/earned points, etc.). I'm using R to conduct this analysis. I'm just wondering how I should go about segmenting customers based on the above information. I'm trying to apply the RFM concept and then use K-means to segment my customers (although I have a few more variables than just R, F, M, as I have recency, frequency, and monetary value for both points earned and points used, as well as other ratios and metrics). Is this a good way to do this? Essentially I have two objectives: 1. To segment customers. 2. Via segmenting customers, identify customer behavior (e.g. customers who spend all of their points before churning), provided that segmentation is the right method for such a task. Clustering <- kmeans(RFM_Values4, centers = 10) Please enlighten me; I need some guidance on the best methods to tackle such problems.
0
1
436
0
34,457,756
0
1
0
0
1
true
2
2015-12-09T08:17:00.000
1
1
0
Is python ggplot still being developed?
34,173,840
1.2
python,plot,graphing,python-ggplot
Yes. They are currently doing a major rewrite.
Python ggplot is great, but missing many customization options. The commit history on github for the past year does not look very promising... Does anyone know if it is still being developed?
0
1
154
0
61,137,400
0
0
0
0
2
false
19
2015-12-09T22:33:00.000
3
3
0
What's the best way to refresh TensorBoard after new events/logs were added?
34,190,298
0.197375
python,tensorflow,tensorboard
I advise always starting tensorboard with --reload_multifile True to force it to reload all event files.
What is the best way to quickly see the updated graph in the most recent event file in an open TensorBoard session? Re-running my Python app results in a new log file being created with potentially new events/graph. However, TensorBoard does not seem to notice those differences, unless restarted.
0
1
21,777
0
44,359,908
0
0
0
0
2
false
19
2015-12-09T22:33:00.000
0
3
0
What's the best way to refresh TensorBoard after new events/logs were added?
34,190,298
0
python,tensorflow,tensorboard
My issue is different. Each time I refresh 0.0.0.0:6006, the new graph keeps getting appended to the old one, which is quite annoying. After killing the process and deleting the old logs several times, I realized the issue comes from writer.add_graph(sess.graph), because I didn't reset the graph in the Jupyter notebook. After resetting, TensorBoard shows the newest graph.
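A small sketch of the reset this answer refers to, assuming the TensorFlow 1.x API that was current at the time (tf.reset_default_graph plus tf.summary.FileWriter); the ops and log directory are placeholders:

```python
import tensorflow as tf

tf.reset_default_graph()              # drop nodes accumulated by earlier notebook runs

x = tf.placeholder(tf.float32, name="x")
y = tf.square(x, name="y")

with tf.Session() as sess:
    # The graph written here no longer contains leftovers from previous cells.
    writer = tf.summary.FileWriter("./logs/run1", sess.graph)
    writer.close()
```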
What is the best way to quickly see the updated graph in the most recent event file in an open TensorBoard session? Re-running my Python app results in a new log file being created with potentially new events/graph. However, TensorBoard does not seem to notice those differences, unless restarted.
0
1
21,777
0
44,128,902
0
0
0
0
2
false
349
2015-12-10T10:19:00.000
7
16
0
How to prevent tensorflow from allocating the totality of a GPU memory?
34,199,233
1
python,tensorflow,tensorflow2.0,tensorflow2.x,nvidia-titan
Shameless plug: if you install the GPU-supported TensorFlow, the session will first allocate all GPUs whether you set it to use only the CPU or the GPU. My tip is that even if you set the graph to use the CPU only, you should set the same configuration (as answered above) to prevent the unwanted GPU occupation. And in an interactive interface like IPython or Jupyter, you should also set that configuration; otherwise it will allocate all the memory and leave almost none for others. This is sometimes hard to notice.
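For reference, a minimal sketch of the memory-capping configuration this thread is about, using the TensorFlow 1.x session API (per_process_gpu_memory_fraction caps how much GPU memory a session may claim; allow_growth is the grow-on-demand alternative); the exact fraction is only an example:

```python
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.333  # roughly 4 GB of a 12 GB Titan X
# config.gpu_options.allow_growth = True                    # or: allocate only as much as needed

# Pass the same config in scripts and in IPython/Jupyter sessions alike.
sess = tf.Session(config=config)
```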
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each. For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU. The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up. Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?
0
1
237,456
0
52,828,871
0
0
0
0
2
false
349
2015-12-10T10:19:00.000
1
16
0
How to prevent tensorflow from allocating the totality of a GPU memory?
34,199,233
0.012499
python,tensorflow,tensorflow2.0,tensorflow2.x,nvidia-titan
I tried to train U-Net on the VOC data set, but because of the huge image size the memory ran out. I tried all the above tips, even a batch size of 1, with no improvement. Sometimes the TensorFlow version also causes memory issues; try pip install tensorflow-gpu==1.8.0
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each. For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU. The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up. Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?
0
1
237,456
0
35,853,184
0
0
0
0
1
false
3
2015-12-11T22:44:00.000
2
1
0
Event files in Google Tensorflow
34,233,767
0.379949
python,tensorflow,tensorboard
The best solution from a TensorBoard perspective is to have a root directory for your experiment, e.g. ~/tensorflow/mnist_experiment, and then to create a new subdirectory for each run, e.g. ~/tensorflow/mnist_experiment/run1/... Then run TensorBoard against the root directory, and every time you invoke your code, setup the SummaryWriter pointing to a new subdirectory. TensorBoard will then interpret all of the event files correctly, and it will also make it easy to compare between your different runs.
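A sketch of the per-run layout this answer recommends, again assuming the TensorFlow 1.x summary API; the experiment path mirrors the one named above, and the run-naming scheme is just one possible convention:

```python
import os
import time
import tensorflow as tf

root_logdir = os.path.expanduser("~/tensorflow/mnist_experiment")
run_dir = os.path.join(root_logdir, "run-%d" % int(time.time()))  # fresh subdirectory per run

tf.reset_default_graph()
loss = tf.constant(0.5, name="loss")
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter(run_dir, sess.graph)
    writer.add_summary(sess.run(merged), global_step=0)
    writer.close()

# Then point TensorBoard at the root: tensorboard --logdir ~/tensorflow/mnist_experiment
```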
I am using Tensorflow to build up the Neural Network, and I would like to show training results on the Tensorboard. So far everything works fine. But I have a question on "event file" for the Tensorboard. I notice that every time when I run my python script, it generates different event files. And when I run my local server using $ python /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/tensorboard.py --logdir=/home/project/tmp/, it shows up error if there are more than 1 event files. It seems to be annoying since whenever I run my local server, I have to delete all previous event files to make it work. So I'm wondering if there is any solution to prevent this issue. I would really appreciate it.
0
1
1,651
0
44,353,399
0
0
0
0
1
false
24
2015-12-12T01:38:00.000
12
4
0
Is it possible to modify an existing TensorFlow computation graph?
34,235,225
1
python,tensorflow
Yes, tf.Graphs are built in an append-only fashion, as @mrry puts it. But there's a workaround: conceptually you can modify an existing graph by cloning it and performing the modifications needed along the way. As of r1.1, TensorFlow provides a module named tf.contrib.graph_editor which implements the above idea as a set of convenient functions.
TensorFlow graph is usually built gradually from inputs to outputs, and then executed. Looking at the Python code, the inputs lists of operations are immutable which suggests that the inputs should not be modified. Does that mean that there is no way to update/modify an existing graph?
0
1
13,045
0
34,277,060
0
0
0
0
1
false
0
2015-12-14T19:30:00.000
0
2
0
How are the points in a level curve chosen in pyplot?
34,275,096
0
python,math,matplotlib
The function is evaluated at every grid node, and compared to the iso-level. When there is a change of sign along a cell edge, a point is computed by linear interpolation between the two nodes. Points are joined in pairs by line segments. This is an acceptable approximation when the grid is dense enough.
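To make the interpolation step concrete, a toy numeric sketch (not matplotlib's actual implementation) of how the crossing point on a single cell edge could be located for an iso-level c; all the values are made up:

```python
c = 0.5                      # iso-level
f0, f1 = 0.2, 0.9            # function values at the two nodes of a grid-cell edge
x0, x1 = 3.0, 4.0            # coordinates of those nodes along the edge

if (f0 - c) * (f1 - c) < 0:  # sign change: the contour crosses this edge
    t = (c - f0) / (f1 - f0)          # linear interpolation parameter in [0, 1]
    x_cross = x0 + t * (x1 - x0)
    print("contour crosses the edge at x =", x_cross)
```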
I want to know how the contours levels are chosen in pyplot.contour. What I mean by this is, given a function f(x, y), the level curves are usually chosen by evaluating the points where f(x, y) = c, c=0,1,2,... etc. However if f(x, y) is an array A of nxn points, how do the level points get chosen? I don't mean how do the points get connected, just simply the points that correspond to A = c
0
1
976
0
35,905,473
0
0
0
0
1
false
0
2015-12-14T21:43:00.000
0
1
0
Matplotlib error in importing
34,277,148
0
python,python-2.7,matplotlib
I have the exact same problem. I'm not sure what the issue is but every once in a while, when trying to import matplotlib inside ipython I encounter this error and restarting the computer solves the issue. Maybe that would help in locating the issue?
I am using OSX El Capitan and trying to import matplotlib.pyplot when I do that I get recursive error and at the end it says "ValueError: insecure string pickle" Here is the whole log: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 4 stats = Statistics("HumanData.xlsx") 5 ----> 6 get_ipython().magic(u'matplotlib inline') 7 8 #matplotlib.pyplot.hist(stats.getActionData("Human", "Pacman", "Left")) /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in magic(self, arg_s) 2334 magic_name, _, magic_arg_s = arg_s.partition(' ') 2335 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC) -> 2336 return self.run_line_magic(magic_name, magic_arg_s) 2337 2338 ------------------------------------------------------------------------- /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in run_line_magic(self, magic_name, line) 2255 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals 2256 with self.builtin_trap: -> 2257 result = fn(*args,**kwargs) 2258 return result 2259 /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/magics/pylab.pyc in matplotlib(self, line) /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/magic.pyc in (f, *a, **k) 191 # but it's overkill for just that one bit of state. 192 def magic_deco(arg): --> 193 call = lambda f, *a, **k: f(*a, **k) 194 195 if callable(arg): /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/magics/pylab.pyc in matplotlib(self, line) 98 print("Available matplotlib backends: %s" % backends_list) 99 else: --> 100 gui, backend = self.shell.enable_matplotlib(args.gui) 101 self._show_matplotlib_backend(args.gui, backend) 102 /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in enable_matplotlib(self, gui) 3130 gui, backend = pt.find_gui_and_backend(self.pylab_gui_select) 3131 -> 3132 pt.activate_matplotlib(backend) 3133 pt.configure_inline_support(self, backend) 3134 /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/pylabtools.pyc in activate_matplotlib(backend) 272 matplotlib.rcParams['backend'] = backend 273 --> 274 import matplotlib.pyplot 275 matplotlib.pyplot.switch_backend(backend) 276 /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/pyplot.py in () 27 from cycler import cycler 28 import matplotlib ---> 29 import matplotlib.colorbar 30 from matplotlib import style 31 from matplotlib import _pylab_helpers, interactive /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/colorbar.py in () 32 import matplotlib.artist as martist 33 import matplotlib.cbook as cbook ---> 34 import matplotlib.collections as collections 35 import matplotlib.colors as colors 36 import matplotlib.contour as contour /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/collections.py in () 25 import matplotlib.artist as artist 26 from matplotlib.artist import allow_rasterization ---> 27 import matplotlib.backend_bases as backend_bases 28 import matplotlib.path as mpath 29 from matplotlib import _path /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/backend_bases.py in () 60 61 import matplotlib.tight_bbox as tight_bbox ---> 62 import matplotlib.textpath as textpath 63 from matplotlib.path import Path 64 from matplotlib.cbook import mplDeprecation, warn_deprecated /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/textpath.py in 
() 13 from matplotlib.path import Path 14 from matplotlib import rcParams ---> 15 import matplotlib.font_manager as font_manager 16 from matplotlib.ft2font import FT2Font, KERNING_DEFAULT, LOAD_NO_HINTING 17 from matplotlib.ft2font import LOAD_TARGET_LIGHT /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py in () 1418 verbose.report("Using fontManager instance from %s" % _fmcache) 1419 except: -> 1420 _rebuild() 1421 else: 1422 _rebuild() /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py in _rebuild() 1403 def _rebuild(): 1404 global fontManager -> 1405 fontManager = FontManager() 1406 if _fmcache: 1407 pickle_dump(fontManager, _fmcache) /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py in init(self, size, weight) 1041 # Load TrueType fonts and create font dictionary. 1042 -> 1043 self.ttffiles = findSystemFonts(paths) + findSystemFonts() 1044 self.defaultFamily = { 1045 'ttf': 'Bitstream Vera Sans', /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py in findSystemFonts(fontpaths, fontext) 321 fontfiles[f] = 1 322 --> 323 for f in get_fontconfig_fonts(fontext): 324 fontfiles[f] = 1 325 /Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py in get_fontconfig_fonts(fontext) 273 pipe = subprocess.Popen(['fc-list', '--format=%{file}\n'], 274 stdout=subprocess.PIPE, --> 275 stderr=subprocess.PIPE) 276 output = pipe.communicate()[0] 277 except (OSError, IOError): /Users/AhmedKhalifa/anaconda/lib/python2.7/subprocess.pyc in init(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags) 708 p2cread, p2cwrite, 709 c2pread, c2pwrite, --> 710 errread, errwrite) 711 except Exception: 712 # Preserve original exception in case os.close raises. /Users/AhmedKhalifa/anaconda/lib/python2.7/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, to_close, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) 1332 if e.errno != errno.ECHILD: 1333 raise -> 1334 child_exception = pickle.loads(data) 1335 raise child_exception 1336 /Users/AhmedKhalifa/anaconda/lib/python2.7/pickle.pyc in loads(str) 1386 def loads(str): 1387 file = StringIO(str) -> 1388 return Unpickler(file).load() 1389 1390 # Doctest /Users/AhmedKhalifa/anaconda/lib/python2.7/pickle.pyc in load(self) 862 while 1: 863 key = read(1) --> 864 dispatchkey 865 except _Stop, stopinst: 866 return stopinst.value /Users/AhmedKhalifa/anaconda/lib/python2.7/pickle.pyc in load_string(self) 970 if rep.startswith(q): 971 if len(rep) < 2 or not rep.endswith(q): --> 972 raise ValueError, "insecure string pickle" 973 rep = rep[len(q):-len(q)] 974 break ValueError: insecure string pickle Any help with that?
0
1
704
0
34,300,675
0
0
0
0
1
false
3
2015-12-15T08:37:00.000
1
2
0
Classifying sentences with overlapping words
34,284,385
0.099668
python,twitter,nltk,document-classification
I wouldn't be so quick to write off Naive Bayes. It does fine in many domains where there are lots of weak clues (as in "overlapping words"), but no absolutes. It all depends on the features you pass it. I'm guessing you are blindly passing it the usual "bag of words" features, perhaps after filtering for stopwords. Well, if that's not working, try a little harder. A good approach is to read a couple of hundred tweets and see how you know which category you are looking at. That'll tell you what kind of things you need to distill into features. But be sure to look at lots of data, and focus on the general patterns. An example (but note that I haven't looked at your corpus): Time expressions may be good clues on whether you are pre- or post-sale, but they take some work to detect. Create some features "past expression", "future expression", etc. (in addition to bag-of-words features), and see if that helps. Of course you'll need to figure out how to detect them first, but you don't have to be perfect: You're after anything that can help the classifier make a better guess. "Past tense" would probably be a good feature to try, too.
I have this CSV file which has comments (tweets, comments). I want to classify them into 4 categories, viz. Pre Sales, Post Sales, Purchased, Service query. Now the problems that I'm facing are these: There is a huge number of overlapping words between the categories, hence using Naive Bayes is failing. Tweets being only 160 chars, what is the best way to prevent words from one category falling into another? In what ways should I select features that can handle both the 160-char tweets and the somewhat lengthier Facebook comments? Please let me know of any reference link/tutorial link to follow up on the same, being a newbie in this field. Thanks
0
1
622
0
34,323,744
0
0
0
0
1
false
2
2015-12-16T18:01:00.000
0
1
0
Pandas: Read CSV: ValueError: could not convert string to float
34,319,011
0
python,csv,pandas
I found the mistake. The problem was a thousands separator. When the CSV file was written, most numbers were below one thousand and were correctly written to the CSV file. However, this one value was greater than one thousand and was written as "1,123", which pandas did not recognize as a number but as a string.
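pandas can be told about that separator directly; a hedged sketch of the fix (the file name here is a placeholder, and the other read_csv arguments are trimmed down from the question):

```python
import pandas as pd

# thousands="," makes pandas parse "1,123" as the number 1123 instead of a string.
df = pd.read_csv("data.csv", quotechar='"', thousands=",", low_memory=True)
print(df.dtypes)
```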
I'm trying to read a large and complex CSV file with pandas.read_csv. The exact command is pd.read_csv(filename, quotechar='"', low_memory=True, dtype=data_types, usecols= columns, true_values=['T'], false_values=['F']) I am pretty sure that the data types are correct. I can read the first 16 million lines (setting nrows=16000000) without problems but somewhere after this I get the following error ValueError: could not convert string to float: '1,123' As it seems, for some reason pandas thinks two columns would be one. What could be the problem? How can I fix it?
0
1
8,440
0
34,323,069
0
0
0
0
1
false
0
2015-12-16T22:08:00.000
1
2
0
Alternative to numpy's linalg.eig?
34,323,027
0.099668
python,python-2.7,numpy,pca
So if scikit's third eigenvector is (a,-b,-c,-d) then mine is (-a,b,c,d). That's completely normal. If v is an eigenvector of a matrix, then -v is an eigenvector with the same eigenvalue.
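A tiny numpy check of the claim: if v is an eigenvector of A with eigenvalue lambda, then -v satisfies A(-v) = lambda(-v) as well, so the sign of each principal component is arbitrary. The matrix below is an arbitrary symmetric example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
v = vecs[:, 0]

# Both v and -v are eigenvectors for the same eigenvalue.
print(np.allclose(A.dot(v), vals[0] * v))      # True
print(np.allclose(A.dot(-v), vals[0] * -v))    # True
```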
I have written a simple PCA code that calculates the covariance matrix and then uses linalg.eig on that covariance matrix to find the principal components. When I use scikit's PCA for three principal components I get almost the equivalent result. My PCA function outputs the third column of transformed data with flipped signs to what scikit's PCA function does. Now I think there is a higher probability that scikit's built-in PCA is correct than to assume that my code is correct. I have noticed that the third principal component/eigenvector has flipped signs in my case. So if scikit's third eigenvector is (a,-b,-c,-d) then mine is (-a,b,c,d). I might a bit shabby in my linear algebra, but I assume those are different results. The way I arrive at my eigenvectors is by computing the eigenvectors and eigenvalues of the covariance matrix using linalg.eig. I would gladly try to find eigenvectors by hand, but doing that for a 4x4 matrix (I am using iris data set) is not fun. Iris data set has 4 dimensions, so at most I can run PCA for 4 components. When I run for one component, the results are equivalent. When I run for 2, also equivalent. For three, as I said, my function outputs flipped signs in the third column. When I run for four, again signs are flipped in the third column and all other columns are fine. I am afraid I cannot provide the code for this. This is a project, kind of.
0
1
593
0
34,340,617
0
0
0
0
1
false
2
2015-12-17T15:06:00.000
1
1
1
Location of tensorflow/models.. in Windows
34,337,788
0.197375
python,windows,docker,tensorflow
If you're using one of the devel tags (:latest-devel or :latest-devel-gpu), the file should be in /tensorflow/tensorflow/models/image/imagenet/classify_image.py. If you're using the base container (b.gcr.io/tensorflow/tensorflow:latest), it's not included -- that image just has the binary installed, not a full source distribution, and classify_image.py isn't included in the binary distribution.
I have installed tensorflow on Windows using Docker, I want to go to the folder "tensorflow/models/image/imagenet" that contains "classify_image.py" python file.. Can someone please how to reach this mentioned path?
0
1
1,104
0
70,073,546
0
1
0
0
1
false
36
2015-12-18T14:15:00.000
2
2
0
Append 2D array to 3D array, extending third dimension
34,357,617
0.197375
python,arrays,numpy,append
Using np.stack should work, but the catch is that both arrays should be 2D: np.stack([A, B])
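For the exact shapes in the question ((480, 640, 3) plus (480, 640) giving (480, 640, 4)), a different sketch may be closer to what is wanted: give B a trailing channel axis and concatenate on the last axis. Note this is not the np.stack call from the answer above, which stacks whole arrays along a new leading axis.

```python
import numpy as np

A = np.zeros((480, 640, 3))
B = np.ones((480, 640))

# Add a trailing axis to B, then join along the channel axis.
C = np.concatenate((A, B[..., np.newaxis]), axis=2)
# np.dstack((A, B)) is an equivalent shortcut.
print(C.shape)   # (480, 640, 4)
```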
I have an array A that has shape (480, 640, 3), and an array B with shape (480, 640). How can I append these two as one array with shape (480, 640, 4)? I tried np.append(A,B) but it doesn't keep the dimension, while the axis option causes the ValueError: all the input arrays must have same number of dimensions.
0
1
65,088
0
34,361,488
0
1
0
0
1
false
0
2015-12-18T16:46:00.000
0
1
0
plotting 2D slice of arbitrary orientation through 3D data in matplotlib
34,360,265
0
python,matplotlib,3d
You can use numpy's roll function to rotate your plane and make it parallel to a base plane. Now you can choose your plane and plot it. The only problem is that, close to the edges, the values from one side will be added to the opposite side.
I have a 3D regular grid of data. I would like to write a routine allowing the user to specify a plane slicing through the data with arbitrary orientation and returning a contour plot of the data in the plane. Is there a ready-made way in matplotlib to do this? Could find anything in the docs.
0
1
494
0
34,377,289
0
0
0
0
1
true
1
2015-12-20T01:43:00.000
1
1
0
Adding and removing SVM parameters without having to totally retrain
34,377,210
1.2
python,scikit-learn,svm
If you are using SVC from sklearn then the answer is no. There is no way to do it; this implementation is purely batch-training based. If you are training a linear SVM using SGDClassifier from sklearn then the answer is yes, as you can simply start the optimization from the previous solution (when removing a feature, simply drop the corresponding weight; when adding one, start from any weight there).
I have a support vector machine trained on ~300,000 examples, and it takes roughly 1.5-2 hours to train this model, and I pickled(serialized) it. Currently, I want to add/remove a couple of the parameters of the model. Is there a way to do this without having to retrain the entire model? I am using sklearn in python.
0
1
44
0
34,391,185
0
0
0
0
1
false
1
2015-12-21T06:56:00.000
1
2
0
What data science programming algorithm is like Naive Bayes for continuous variables?
34,390,336
0.099668
python,algorithm,machine-learning,naivebayes,data-science
For Naive Bayes you can discretize your continuous numerical properties. For example, for "% Owner occupied housing" you split the 0-100% scale into ten partitions (0-10%, 10-20%, ..., 90-100%) and build the frequency table. For some properties you can move to binary values: unemployment rate < 30% - yes/no. Good luck learning Machine Learning :)
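A small pandas sketch of the discretization idea; the column names and values below are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "owner_occupied_pct": [12.0, 37.5, 64.2, 91.0],
    "unemployment_rate": [22.0, 35.0, 10.0, 29.0],
})

# Ten equal-width buckets over the 0-100% scale, as suggested above.
edges = list(range(0, 101, 10))
df["owner_occupied_bucket"] = pd.cut(df["owner_occupied_pct"], bins=edges)

# A binary version of another property.
df["unemployment_below_30"] = df["unemployment_rate"] < 30

print(df[["owner_occupied_bucket", "unemployment_below_30"]])
```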
I am trying to build and train a machine learning algorithm that correctly predicts which presidential candidate won in which county. I have the following information for training data: total population, median age, % bachelor's degree or higher, unemployment rate, per capita income, total households, average household size, % owner-occupied housing, % renter-occupied housing, % vacant housing, median home value, population growth, household growth, per capita income growth, winner. I am new to data science. I do know Naive Bayes is a good classifier for prediction from multiple properties. However, I read that the first step for a Naive Bayes classifier requires a frequency table. My problem is that all of the above properties are continuous numerical properties and don't fall into "Yes" or "No" categories. Do I not use a Naive Bayes classifier then? I also considered using a k-nearest-neighbor algorithm, but that doesn't look like it will be the most accurate or weight the properties correctly for me... I am looking for a supervised algorithm because I have training data. Can anyone give me any recommendations as to what algorithm to use? In addition, being new to the field, how can I figure out what algorithm to use on my own in the future?
0
1
229
0
34,568,528
0
0
0
0
1
false
6
2015-12-21T10:47:00.000
4
2
0
Difference between local and dense layers in CNNs
34,393,876
0.379949
python,convolution,deep-learning,tensorflow,conv-neural-network
I am quoting user2576346's comments under the question: As I understand, either it should be densely connected or be a convolutional layer ... No this is not true. A more accurate way to phrase that statement would be that layers are either fully connected (dense) or locally connected. A convolutional layer is an example of a locally connected layer. In general a locally connected layer is a layer in which each of its units is only connected to a local portion of the input. A convolutional layer is a special type of local layer which exhibits a spatial translation invariance as each convolutional feature detector is strided across the entire image in local receptive windows, e.g. of size 3x3 or 5x5 for example.
What is the difference between a "Local" layer and a "Dense" layer in a convolutional neural network? I am trying to understand the CIFAR-10 code in TensorFlow, and I see it uses "Local" layers instead of regular dense layers. Is there any class in TF that supports implementing "Local" layers?
0
1
2,001
0
34,479,029
0
1
0
0
1
false
0
2015-12-25T23:02:00.000
0
2
0
Python: How to give participant the possibility to an answer
34,467,177
0
python,while-loop,psychopy
You probably placed your Answerrunning = False in the wrong place. And you probably need to put a break at the end of each branch. Please explain more about what you want to do; I don't understand. If you say you need to count tries, then I guess you should have something like number_of_tries = 0 and number_of_tries += 1 somewhere in your code.
I am making an experiment, and the participant must get the possibility to correct himself when he has given the wrong answer. The goal is that the experiment goes on to the next trial when the correct answer is given. When the wrong answer is given, you get another chance. For the moment, the experiment crashes after the first trial and it always waits for the second chance answer (even when the right answer was given).
0
1
152
0
34,484,383
0
0
0
0
1
true
1
2015-12-27T18:04:00.000
1
2
0
5D tensor in Theano
34,483,277
1.2
python,theano,symbolic-computation
Theano variables do not have explicit shape information since they are symbolic variables, not numerical. Even dtensor3 = T.tensor3(T.config.floatX) does not have an explicit shape. When you type dtensor3.shape you'll get an object Shape.0 but when you do dtensor3.shape.eval() to get its value you'll get an error. For both cases however, dtensor.ndim works and prints out 5 and 3 respectively.
I was wondering how to make a 5D tensor in Theano. Specifically, I tried dtensor = T.TensorType('float32', (False,)*5). However, the only issue is that dtensor.shape returns: AttributeError: 'TensorType' object has no attribute 'shape' Whereas if I use a standard tensor type like dtensor = T.tensor3('float32'), I don't get this issue when I call dtensor.shape. Is there a way to have this not be an issue with a 5D tensor in Theano?
0
1
506
0
34,493,557
0
0
0
0
1
true
1
2015-12-28T07:14:00.000
1
1
0
Using sklearn DictVectorizer in real-time systems
34,493,556
1.2
machine-learning,categorical-data,python,scikit-learn
It depends on the learning algorithm that you are using. If you are using a method designed for sparse data sets (FTRL, FFM, linear SVM) one possible approach is the following (note that it will introduce collisions in the features and a lot of constant columns). First allocate for each element of your sample a (as large as possible) vector V, of length D. For each categorical variable, evaluate hash(var_name + "_" + var_value) % D. This gives you an integer i, and you can store V[i] = 1. Therefore, V never grows larger as new features appear. However, as soon as the number of features is large enough, some features will collide (i.e. be written to the same place) and this may result in an increased error rate... Edit: you can write your own vectorizer to avoid collisions. First, call L the current number of features. Prepare the same vector V of length 2L (this factor 2 will allow you to avoid collisions as new features arrive - at least for some time, depending on the arrival rate of new features). Starting with an empty dictionary<input_type, int>, associate to each feature an integer. If you have already seen the feature, return the int corresponding to it. If not, create a new entry with an integer corresponding to the new index. I think (but I am not sure) this is what LabelEncoder does for you.
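A compact sketch of the hashing idea using scikit-learn's FeatureHasher, which implements essentially the hash(feature) % D scheme described above; the feature strings are invented for illustration:

```python
from sklearn.feature_extraction import FeatureHasher

D = 2 ** 10  # fixed output dimensionality; it never grows as new categories appear
hasher = FeatureHasher(n_features=D, input_type="string")

# Each sample is a list of "name_value" strings for its categorical variables.
samples = [
    ["color_red", "country_FR"],
    ["color_blue", "country_DE"],
    ["color_green", "country_JP"],   # previously unseen values hash into the same space
]

X = hasher.transform(samples)
print(X.shape)   # (3, 1024), a sparse matrix
```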
Any binary one-hot encoding is aware of only values seen in training, so features not encountered during fitting will be silently ignored. For real time, where you have millions of records in a second, and features have very high cardinality, you need to keep your hasher/mapper updated with the data. How can we do an incremental update to the hasher (rather calculating the entire fit() every time we incounter a new feature-value pair)? What is the suggested approach here the tackle this?
0
1
237
0
34,502,877
0
0
0
0
1
false
0
2015-12-29T00:37:00.000
2
2
0
Python pandas: determining which "group" has the most entries
34,502,840
0.197375
python,pandas
To sort by name: df.fruit.value_counts().sort_index() To sort by counts: df.fruit.value_counts().sort_values()
Let's say that I have pandas DataFrame with a column called "fruit" that represents what fruit my classroom of kindergartners had for a morning snack. I have 20 students in my class. Breakdown would be something like this. Oranges = 7, Grapes = 3, Blackberries = 4, Bananas = 6 I used sort to group each of these fruit types, but it is grouping based on alphabetical order. I would like it to group based on the largest quantity of entries for that class of fruit. In this case, I would like Oranges to turn up first so that I can easily see that Oranges is the most popular fruit. I'm thinking that sort is not the best way to go about this. I checked out groupby but could not figure out how to use that appropriately either. Thanks in advance.
0
1
38
0
34,515,890
0
0
0
0
1
true
6
2015-12-29T10:59:00.000
11
1
0
spark.storage.memoryFraction setting in Apache Spark
34,509,593
1.2
java,python,apache-spark,hadoop-yarn
The Spark executor is set up into 3 regions. Storage - Memory reserved for caching Execution - Memory reserved for object creation Executor overhead. In Spark 1.5.2 and earlier: spark.storage.memoryFraction sets the ratio of memory set for 1 and 2. The default value is .6, so 60% of the allocated executor memory is reserved for caching. In my experience, I've only ever found that the number is reduced. Typically when a developer is getting a GC issue, the application has a larger "churn" in objects, and one of the first places for optimizations is to change the memoryFraction. If your application does not cache any data, then setting it to 0 is something you should do. Not sure why that would be specific to YARN, can you post the articles? In Spark 1.6.0 and later: Memory management is now unified. Both storage and execution share the heap. So this doesnt really apply anymore.
According to Spark documentation spark.storage.memoryFraction: Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old" generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase it if you configure your own old generation size. I found several blogs and article where it is suggested to set it to zero in yarn mode. Why is that better than set it to something close to 1? And in general, what is a reasonable value for it ?
0
1
7,480
0
34,560,260
0
0
0
0
1
true
0
2015-12-30T08:55:00.000
0
1
0
What is the PyMC3 equivalent of the 'pymc.rnormal' function?
34,526,093
1.2
python,pymc,pymc3
Found it... a bit silly of me. pymc3.Normal(mu,sd).random(), which basically just calls scipy.stats.norm
Is there a PyMC3 equivalent to the pymc.rnormal function, or has it been dropped in favor of numpy.random.normal?
0
1
93
0
34,558,742
0
0
0
0
1
true
3
2016-01-01T13:19:00.000
10
1
0
Regularization parameter and iteration of SGDClassifier in scikit-learn
34,556,476
1.2
python,machine-learning,scikit-learn
C and alpha both have the same effect. The difference is a choice of terminology. C is proportional to 1/alpha. You should use GridSearchCV to select either alpha or C the same way, but remember a higher C is more likely to overfit, where a lower alpha is more likely to overfit. L2 will produce a model with many small coefficients, where L1 will choose a model with a large number of 0 coefficients and a few large coefficients. Elastic net is a combination of the two. SGDClassifier uses stochastic gradient descent in which the data is fed through the learning algorithm sample by sample. The n_iter tells it how many passes it should make over the data. As the number of iterations goes up and the learning rate goes down, SGD becomes more like batch gradient descent, but it becomes slower as well.
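A short sketch of searching over alpha with GridSearchCV, as this answer suggests. The parameter names below follow recent scikit-learn releases (loss="log_loss" and max_iter; in the version discussed in this thread they were loss="log" and n_iter), so treat them as assumptions to adapt to your installed version; the data is a toy random set:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# Toy data standing in for a real problem.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

param_grid = {"alpha": 10.0 ** np.arange(-6, 1)}   # alpha plays the role of 1/C
clf = SGDClassifier(loss="log_loss", penalty="l2", max_iter=1000)
search = GridSearchCV(clf, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```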
Python scikit-learn's SGDClassifier() supports l1, l2, and elastic net penalties, and it seems important to find the optimal value of the regularization parameter. I was advised to use SGDClassifier() with GridSearchCV() to do this, but SGDClassifier exposes only the regularization parameter alpha. If I use loss functions such as SVM or LogisticRegression, I think there should be C instead of alpha for parameter optimization. Is there any way to set the optimal parameter in SGDClassifier() when using Logistic Regression or SVM? In addition, I have one more question about the iteration parameter n_iter: I did not understand what this parameter means. Does it work like bagging if used together with the shuffle option? So, if I use an l1 penalty and a large value of n_iter, would it work like RandomizedLasso()?
0
1
4,643
0
34,572,201
0
1
0
0
1
false
1
2016-01-03T00:21:00.000
0
1
0
How do I quote a column in a dataframe that has a period?
34,572,133
0
python,matplotlib,dataframe
You can use the indexing notation: Iris['Petal.Length']. Using . (as in Iris.Species) only works if the column name is a valid Python identifier, but [] always works, even if the column name contains spaces or other symbols.
I could not find a particular way to phrase this into a google search that returned any solid results so asking StackOverflow. Would appreciate the help all! I am using a CSV file, Iris, in Python to do some basic matplot plotting. Within Iris, I am looking to reference a particular column called Petal.Length. Normally, for any other column, like Species, I would use Iris.Species to call that particular column. However for Petal.Length, there is already a period inside meaning I cannot use Iris.Petal.Length. What is the proper syntax to call a column that already contains a period? Thanks all.
0
1
79
0
34,574,445
0
0
0
0
1
true
2
2016-01-03T07:43:00.000
1
1
0
How to save the n-d numpy array data and read it quickly next time?
34,574,396
1.2
python,numpy
If you need to re-read it quickly into numpy you could just use the cPickle module. This is going to be much faster than parsing it back from an ASCII dump (however, only a program will be able to re-read it). As a bonus, with just one instruction you can dump more than a single matrix (i.e. any data structure built with core Python and numpy arrays). Note that parsing a floating-point value from an ASCII string is a quite complex and slow operation (if implemented correctly down to the ulp).
Here is my question: I have a 3-d numpy array Data with the shape (1000, 100, 100), and I want to save it as a .txt or .csv file. How can I achieve that? My first attempt was to reshape it into a 1-d array of length 1000*100*100, convert it into a pandas.DataFrame, and then save it as a .csv file. When I want to load it next time, I reshape it back into a 3-d array. I think there must be an easier method.
0
1
53
0
38,167,204
0
0
0
0
1
false
0
2016-01-04T18:45:00.000
0
1
0
Create 3D- polynomial via numpy etc. from given coordinates
34,597,732
0
python,numpy,curve-fitting,algebra
Numpy has functions for multi-variable polynomial evaluation in the polynomial package -- polyval2d, polyval3d -- the problem is getting the coefficients. For fitting, you need the polyvander2d, polyvander3d functions that create the design matrices for the least squares fit. The multi-variable polynomial coefficients thus determined can then be reshaped and used in the corresponding evaluation functions. See the documentation for those functions for more details.
Given some coordinates in 3D (x-, y- and z-axes), what I would like to do is to get a polynomial (fifth order). I know how to do it in 2D (for example just in x- and y-direction) via numpy. So my question is: Is it possible to do it also with the third (z-) axes? Sorry if I missed a question somewhere. Thank you.
0
1
260
0
34,614,120
0
1
0
0
1
false
2
2016-01-05T14:24:00.000
0
1
0
How to wrap text within a column while writing to a csv file using python
34,613,958
0
python,django,export-to-csv
CSV is not formatted. If you want to format the text in the cells, you should consider writing a proper Excel or PDF file. Anyway, it looks like newline characters (\n or \r\n) can be used in CSV files when using a semicolon as the separator, though this may not be portable. To write an Excel file, you can use libraries like openpyxl or xlwt.
I am trying to export a file as csv. I need to wrap text for a particular column while writing in csv file. I have a too long string. i need to write it in a csv file using python. While trying to write, it doesn't write in a single cell. some of the lines are written into next rows. I need to write the whole string in a single cell. Please help me to do this.
0
1
4,140
0
59,507,979
0
0
0
0
1
false
30
2016-01-06T03:09:00.000
3
4
0
Is there easy way to grid search without cross validation in python?
34,624,978
0.148885
python,scikit-learn,random-forest,grid-search
Although the question has been solved years ago, I just found a more natural way if you insist on using GridSearchCV() instead of other means (ParameterGrid(), etc.): Create a sklearn.model_selection.PredefinedSplit(). It takes a parameter called test_fold, which is a list and has the same size as your input data. In the list, you set all samples belonging to training set as -1 and others as 0. Create a GridSearchCV object with cv="the created PredefinedSplit object". Then, GridSearchCV will generate only 1 train-validation split, which is defined in test_fold.
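A sketch of the PredefinedSplit trick described above, on toy data: -1 marks samples used only for training and 0 marks the single validation fold. The estimator and parameter grid are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, PredefinedSplit

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = (X[:, 0] > 0).astype(int)

# First 80 samples: training only (-1). Last 20: the single validation fold (0).
test_fold = np.array([-1] * 80 + [0] * 20)
cv = PredefinedSplit(test_fold)

search = GridSearchCV(RandomForestClassifier(n_estimators=50, random_state=0),
                      {"max_depth": [2, 4, None]}, cv=cv)
search.fit(X, y)
print(search.best_params_)
```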
There is an absolutely helpful class, GridSearchCV, in scikit-learn to do grid search and cross validation, but I don't want to do cross validation. I want to do grid search without cross validation and use the whole data to train. To be more specific, I need to evaluate my model made by RandomForestClassifier with the "oob score" during grid search. Is there an easy way to do it, or should I make a class by myself? The points are: I'd like to do grid search in an easy way. I don't want to do cross validation. I need to use the whole data to train (I don't want to separate it into train data and test data). I need to use the oob score for evaluation during grid search.
0
1
20,780
0
34,653,721
0
0
0
0
1
false
0
2016-01-07T07:41:00.000
3
2
0
What the function apply() in scikit-learn can do?
34,649,751
0.291313
python,machine-learning,scikit-learn
From the scikit-learn documentation: apply(X) - Apply trees in the ensemble to X, return leaf indices. This function takes input data X, and each data point (x) in it is applied to each non-linear classifier tree. After application, data point x has associated with it the leaf it ends up at for each decision tree. This leaf has its associated classes (1 if binary). apply(X) returns the above information, which is of the form [n_samples, n_estimators, n_classes]. Thus, the apply(X) function doesn't really have much to do with the Gradient Boosted Decision Tree + Logistic Regression (GBDT+LR) classification and feature-transform method by itself. It is a function for applying data to an existing classification model. I'm sorry if I have misunderstood you in any way, though a few grammar/syntax errors in your question made it harder to decipher.
In the new version of scikit-learn, there is a new function called apply() in gradient boosting. I'm really confused about it. Is it like the GBDT + LR method that Facebook has used? If so, how can we make it work like GBDT + LR?
0
1
1,064
0
34,661,470
0
0
0
0
1
true
5
2016-01-07T12:51:00.000
3
1
0
How to handle class imbalance in sklearn random forests. Should I use sample weights or class weight parameter
34,655,628
1.2
python,scikit-learn,random-forest,supervised-learning
Class weights are what you should be using. Sample weights allow you to specify a multiplier for the impact a particular sample has. Weighting a sample with a weight of 2.0 has roughly the same effect as if the point were present twice in the data (although the exact effect is estimator dependent). Class weights have the same effect, but they are used to apply a set multiplier to every sample that falls into the specified class. In terms of functionality, you could use either, but class_weight is provided for convenience so you do not have to manually weight each sample. It is also possible to combine the two, in which case the class weights are multiplied by the sample weights. One of the main uses for sample_weight on the fit() method is to allow boosting meta-algorithms like AdaBoostClassifier to operate on existing decision tree classifiers and increase or decrease the weights of individual samples as needed by the algorithm.
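A brief sketch of the class_weight usage this answer recommends. Note that the value "auto" from scikit-learn 0.16 was later renamed to "balanced"; the data below is synthetic and only mimics the 92%/8% imbalance from the question.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = (rng.rand(1000) < 0.08).astype(int)   # roughly 8% positives

# "balanced" reweights each class inversely to its frequency.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
clf.fit(X, y)

# Roughly the same effect by hand with per-sample weights:
weights = np.where(y == 1, (y == 0).sum() / max((y == 1).sum(), 1), 1.0)
clf_manual = RandomForestClassifier(n_estimators=100, random_state=0)
clf_manual.fit(X, y, sample_weight=weights)
```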
I am trying to solve a binary classification problem with a class imbalance. I have a dataset of 210,000 records in which 92% are 0s and 8% are 1s. I am using sklearn (v0.16) in Python for random forests. I see there are two parameters, sample_weight and class_weight, when constructing the classifier. I am currently using the parameter class_weight="auto". Am I using this correctly? What do class_weight and sample_weight actually do, and what should I be using?
0
1
3,318
0
37,417,767
0
0
0
0
1
false
0
2016-01-08T06:59:00.000
0
1
0
How to change the sorting order of dates in the drop down property of spotfire
34,671,202
0
python,spotfire
You can apply custom sorting to STRING columns. One way to achieve your goal is to create calculated columns for the Year and Month, and use these in your date axis. Then you can apply a custom sorting in Column Properties > Sort Order on your data table.
In Spotfire text area, for drop down property, by default sort order for date column is coming with the oldest to latest. We need to display the dates order from newest to oldest. Can you please advise. Default Order:12/29/2015 12/30/2015 12/31/2015 01/01/2016 Needed 1/1/2016 12/31/2015 12/30/2015 12/29/2015 Thanks
0
1
975
0
36,937,080
0
0
0
0
1
true
3
2016-01-08T21:11:00.000
2
1
0
Unable to do bulk indexing for large file in elasticsearch
34,686,119
1.2
java,python,elasticsearch
You have to increase the maximum content length, which is 100 MB by default. Go to elasticsearch.yml in the config folder and add/update: http.max_content_length: 300M
I am trying to do bulk indexing in elasticsearch using Python for a big file (~800MB). However, everytime I try [2016-01-08 15:06:49,354][WARN ][http.netty ] [Marvel Man] Caught exception while handling client http tra ffic, closing connection [id: 0x2d26baec, /0:0:0:0:0:0:0:1:58923 => /0:0:0:0:0:0:0:1:9200] org.jboss.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 104857600 bytes. at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:169) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeli ne.java:791) at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:135) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeli ne.java:791) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeli ne.java:791) at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Can anyone please help me understand what is happening here, and how I can solve this issue?
1
1
2,316
0
56,676,656
0
0
0
0
1
false
92
2016-01-10T06:38:00.000
12
2
0
Pandas: group by and Pivot table difference
34,702,815
1
python,pandas
It's more appropriate to use .pivot_table() instead of .groupby() when you need to show aggregates with both row and column labels. .pivot_table() makes it easy to create row and column labels at the same time and is preferable, even though you can get similar results using .groupby() with a few extra steps.
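A small illustration of the point: the same aggregate computed with .pivot_table() in one call, and with .groupby() plus an extra unstack step (the column names and values are invented):

```python
import pandas as pd

df = pd.DataFrame({"city":  ["A", "A", "B", "B"],
                   "year":  [2015, 2016, 2015, 2016],
                   "sales": [10, 12, 7, 9]})

# Row and column labels in a single call:
pivot = df.pivot_table(values="sales", index="city", columns="year", aggfunc="sum")

# The groupby route needs the extra unstack step to get the same table:
grouped = df.groupby(["city", "year"])["sales"].sum().unstack("year")

print(pivot)
print(grouped)   # same 2x2 table of summed sales
```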
I just started learning Pandas and was wondering if there is any difference between groupby() and pivot_table() functions. Can anyone help me understand the difference between them. Help would be appreciated.
0
1
55,623
0
34,736,355
0
0
0
0
1
false
2
2016-01-12T03:17:00.000
2
2
0
Choosing an sklearn pipeline for classifying user text data
34,735,016
0.197375
python,machine-learning,scikit-learn,feature-selection
Naive Bayes and MultinomialNB are the same algorithm. The difference that you get comes from the tf-idf transformation, which penalises words that occur in lots of documents in your corpus. My advice: use tf-idf and tune the sublinear_tf, binary, and normalization parameters of TfidfVectorizer for the features. Also try all kinds of different classifiers available in scikit-learn, which I suspect will give you better results if you properly tune the regularization type (penalty either l1 or l2) and the regularization parameter (alpha). If you tune them properly, I suspect you can get much better results using SGDClassifier with 'log' loss (Logistic Regression) or 'hinge' loss (SVM). The way people usually tune the parameters is through the GridSearchCV class in scikit-learn.
I'm working on a machine learning application in Python (using the sklearn module), and am currently trying to decide on a model for performing inference. A brief description of the problem: Given many instances of user data, I'm trying to classify them into various categories based on relative keyword containment. It is supervised, so I have many, many instances of pre-classified data that are already categorized. (Each piece of data is between 2 and 12 or so words.) I am currently trying to decide between two potential models: CountVectorizer + Multinomial Naive Bayes. Use sklearn's CountVectorizer to obtain keyword counts across the training data. Then, use Naive Bayes to classify data using sklearn's MultinomialNB model. Use tf-idf term weighting on keyword counts + standard Naive Bayes. Obtain a keyword count matrix for the training data using CountVectorizer, transform that data to be tf-idf weighted using sklearn's TfidfTransformer, and then dump that into a standard Naive Bayes model. I've read through the documentation for the classes use in both methods, and both seem to address my problem very well. Are there any obvious reasons for why tf-idf weighting with a standard Naive Bayes model might outperform a multinomial Naive Bayes for this type of problem? Are there any glaring issues with either approach?
0
1
2,014
0
34,787,266
0
1
0
0
1
false
1
2016-01-14T10:30:00.000
2
2
0
meaning of "." after first element in numpy array
34,787,213
0.197375
python,numpy
It has nothing to do with the array. 1. means 1.0. 1. is a float, 1 is an int.
What is the difference between numpy.array([[1., 2], [3, 4], [5, 6]]) and numpy.array([[1, 2], [3, 4], [5, 6]])? I came across code using the two different types of declaration but could not work out what the difference means.
0
1
38
0
34,788,487
0
0
0
0
2
true
1
2016-01-14T11:02:00.000
1
2
0
CSV format data manipulation: why use python scripts instead of MS excel functions?
34,787,957
1.2
python,excel,csv
Using Python is recommended in the scenarios below. Repeated action: performing a similar set of actions over a similar dataset repeatedly. For example, say you get monthly forecast data and you have to perform various slicing & dicing and plotting. Here the structure of the data and the steps of analysis are more or less the same, but the data differs every month. Using Python and pandas will save you a lot of time and also reduce manual error. Exploratory analysis: once you establish a certain familiarity with pandas, numpy and matplotlib, analysis using these Python libraries is faster and much more efficient than Excel analysis. One simple use case to justify this statement is backtracking. With pandas, you can quickly trace back and regain the dataset in its original form or an earlier analysed form. With Excel, you could get lost in a maze of analysis and be unable to backtrack to an earlier form beyond Ctrl+Z. Teaching tool: in my opinion, this is the most underutilized feature. An IPython notebook can be an excellent teaching tool and reference document for data analysis. Using it, you can efficiently transfer knowledge between colleagues rather than sharing a complicated Excel file.
I am currently working on large data sets in csv format. In some cases, it is faster to use excel functions to get the work done. However, I want to write python scripts to read/write csv and carry out the required function. In what cases would python scripts be better than using excel functions for data manipulation tasks? What would be the long term advantages?
0
1
236
0
34,788,093
0
0
0
0
2
false
1
2016-01-14T11:02:00.000
0
2
0
CSV format data manipulation: why use python scripts instead of MS excel functions?
34,787,957
0
python,excel,csv
After learning Python, you are more flexible. The operations you can do on the user interface of MS Excel are limited, whereas there are no limits if you use Python. Another benefit is that you automate the modifications, e.g. you can re-use them or re-apply them to a different dataset. The speed depends heavily on the algorithm and library you use and on the operation. You can also use VB script/macros in Excel to automate things, but usually Python is less cumbersome and more flexible.
I am currently working on large data sets in csv format. In some cases, it is faster to use excel functions to get the work done. However, I want to write python scripts to read/write csv and carry out the required function. In what cases would python scripts be better than using excel functions for data manipulation tasks? What would be the long term advantages?
0
1
236
0
34,796,644
0
0
0
0
1
false
0
2016-01-14T17:37:00.000
0
3
0
How do you find the largest/smallest number amongst several arrays?
34,796,147
0
python,arrays,numpy
Combine your arrays into one, then take the min/max along the new axis. A = np.array([a1,a2, ... , an]) A.min(axis=0), A.max(axis=0)
I'm trying to get the largest/smallest number returned out of two or more numpy.array of equal length. Since max()/min() function doesn't work on multiple arrays, this is some of the best(worst) I've come up with: max(max(a1), max(a2), max(a3), ...) / min(min(a1), min(a2), min(a3), ...) Alternately one can use numpy's maximum, but those only work for two arrays at time. Thanks in advance
0
1
143
0
53,392,066
0
0
0
0
1
false
15
2016-01-14T22:56:00.000
0
8
0
tensorflow: how to rotate an image for data augmentation?
34,801,342
0
python,tensorflow
For rotating an image or a batch of images counter-clockwise by multiples of 90 degrees, you can use tf.image.rot90(image,k=1,name=None). k denotes the number of 90 degrees rotations you want to make. In case of a single image, image is a 3-D Tensor of shape [height, width, channels] and in case of a batch of images, image is a 4-D Tensor of shape [batch, height, width, channels]
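A minimal graph-mode sketch of tf.image.rot90 for a single image, using the Session-style API current at the time; the image data here is random placeholder content.

import numpy as np
import tensorflow as tf

img = np.random.rand(32, 48, 3).astype(np.float32)   # stand-in image data
image = tf.constant(img)
rotated = tf.image.rot90(image, k=1)   # one counter-clockwise quarter turn

with tf.Session() as sess:
    out = sess.run(rotated)
    print(out.shape)   # (48, 32, 3): height and width swap after a 90 degree turn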
In tensorflow, I would like to rotate an image from a random angle, for data augmentation. But I don't find this transformation in the tf.image module.
0
1
28,208
0
34,815,680
0
0
0
0
1
true
4
2016-01-15T15:51:00.000
5
1
0
Which one is faster? Logistic regression or SVM with linear kernel?
34,814,891
1.2
python-2.7,machine-learning,svm,logistic-regression
Faster is a bit of a weird question, in part because it is hard to compare apples to apples on this, and it depends on context. LR and SVM are very similar in the linear case. The TLDR for the linear case is that Logistic Regression and SVMs are both very fast and the speed difference shouldn't normally be too large, and both could be faster/slower in certain cases. From a mathematical perspective, Logistic regression is strictly convex [its loss is also smoother] where SVMs are only convex, so that helps LR be "faster" from an optimization perspective, but that doesn't always translate to faster in terms of how long you wait. Part of this is because, computationally, SVMs are simpler. Logistic Regression requires computing the exp function, which is a good bit more expensive than just the max function used in SVMs, but computing these doesn't make the majority of the work in most cases. SVMs also have hard zeros in the dual space, so a common optimization is to perform "shrinkage", where you assume (often correctly) that a data point's contribution to the solution won't change in the near future and stop visiting it / checking its optimality. The hard zero of the SVM loss and the C regularization term in the soft margin form allow for this, where LR has no hard zeros to exploit like that. However, when you want something to be fast, you usually don't use an exact solver. In this case, the issues above mostly disappear, and both tend to learn just as quick as the other in this scenario. In my own experience, I've found Dual Coordinate Descent based solvers to be the fastest for getting exact solutions to both, with Logistic Regression usually being faster in wall clock time than SVMs, but not always (and never by more than a 2x factor). However, if you try and compare different solver methods for LRs and SVMs you may get very different numbers on which is "faster", and those comparisons won't necessarily be fair. For example, the SMO solver for SVMs can be used in the linear case, but will be orders of magnitude slower because it is not exploiting the fact that you only care are Linear solutions.
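Not from the original answer, but a rough way to see the wall-clock difference yourself is to time scikit-learn's LogisticRegression against LinearSVC on the same synthetic data; the data shape and default solvers here are arbitrary, so treat the resulting numbers as illustrative only.

import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Synthetic, roughly linearly separable data (placeholder for the real dataset)
rng = np.random.RandomState(0)
X = rng.randn(100000, 20)
y = (X[:, 0] + 0.5 * rng.randn(100000) > 0).astype(int)

for name, clf in [("LogisticRegression", LogisticRegression()),
                  ("LinearSVC", LinearSVC())]:
    start = time.time()
    clf.fit(X, y)
    print(name, "fit in %.2f s" % (time.time() - start))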
I am doing machine learning with python (scikit-learn) using the same data but with different classifiers. When I use 500k of data, LR and SVM (linear kernel) take about the same time, SVM (with polynomial kernel) takes forever. But using 5 million data, it seems LR is faster than SVM (linear) by a lot, I wonder if this is what people normally find?
0
1
4,058
0
34,821,190
0
1
0
0
1
true
1
2016-01-15T22:39:00.000
2
2
0
Length of comprehensions in Python
34,821,065
1.2
python,list-comprehension
No, this is impossible with just the length of the inputs. You can use math to determine the length by computing common prime factors, but the work involved would not improve upon just computing the results and taking the len of that, and it requires knowledge of the set contents, not just their length. After all, with just the length, {2, 3} multiplied with {2, 3} (producing {4, 6, 9}) couldn't be distinguished from {2, 3} multiplied with {10, 11}, which would produce entirely unique outputs (four total). Makes for a simple proof by contradiction; knowing the input lengths alone is insufficient to determine the length of the output, no single operation on (2, 2) can possibly produce both 3 and 4 without additional inputs.
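The counterexample above is easy to check directly in a couple of lines:

# Same input lengths (2 and 2), but different output lengths:
a = {x * y for x in {2, 3} for y in {2, 3}}    # overlapping products
b = {x * y for x in {2, 3} for y in {10, 11}}  # all products distinct
print(len(a))   # 3  (the set {4, 6, 9})
print(len(b))   # 4  (the set {20, 22, 30, 33})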
New at Python, so please... Just came across comprehensions and I understand that they are soon going to possibly ramify into perhaps dot products or matrix multiplications (although the fact that the result is a set makes them more interesting), but I at this point I want to ask whether there is any formula to determine the length of a comprehension such as: {x * y for x in {3, 4, 5} for y in {4, 5, 6}}. Evidently I don't mean for this particular one: len({x * y for x in {3, 4, 5} for y in {4, 5, 6}}) = 8, but of any general operation of this type with an element-wise multiplication of two sets, and taking as the result the set of the resultant integers (no repetitions), for any given length of x and y, consecutive integers, and known x[1] and y[1]. I understand that this question is at the crossroads of coding and math, but I am asking it here on the off chance that it happened to be a somewhat common, or well-known computational issue, since I have read that comprehensions are very widely used. It is only in this sense that I am interested in the question. Base on the comments so far, my sense is that this is not the case. EDIT: For instance, here is a pattern: If x = {1, 2, 3} the len(x * y) comprehensions is equal to 9 provided y[1] = or > 3. For example, len({x * y for x in {1, 2, 3} for y in {1111, 1112, 1113}}) = 9. So tentatively, length = length(x) * length(y), provided there is no overlap in the elements of x and y. Does it work with 4-element sets? Sure: len({x * y for x in {1, 2, 3, 4} for y in {1111, 1112, 1113, 1114}}) = 16. In fact, the integers don't need to be consecutive, just not overlap: len({x*y for x in {11,2,39} for y in {3,4,5}}) = 9. And, yes, it doesn't work... Check this out: {x * y for x in {0, 1, 3} for y in {36, 12, 4}} = {0, 4, 12, 36, 108}
0
1
95
0
34,841,121
0
0
0
0
1
true
1
2016-01-16T15:13:00.000
0
1
0
Does the Matplotlib pgf backend support transparency?
34,828,545
1.2
python,matplotlib,latex,pgf
Yes, the .pgf backend does support transparency. If the *.png and *.pdf files come out with transparency but the *.pgf does not, then it may be a problem with your viewer or your TeX packages. For me it was the package "transparent" (which enables transparent text on pictures) that clashed with pgf, even though I wasn't actually using it.
I'm currently creating graphics usind the pgf backend for matplotlib. It works very well for integrating graphs generated in python in latex. However, transparency does not seem to be supported, even though I believe this should be possible in pgf. I am currently using version 1.5.1 of matplotlib.
0
1
1,196
0
34,874,093
0
0
0
0
1
false
1
2016-01-18T14:25:00.000
0
1
0
work on distinct elements of RDD-pyspark
34,857,074
0
python,pyspark,spark-streaming,rdd
So what I did was to define a function that checks whether I have seen that name in the past, and then use .filter(myfunc) to only work with the names I want. The problem now is that in each new streaming window the function is applied from the beginning, so if I saw the name John seven times in the first window I keep it only once, but if I then see the name John five times in the second window I keep it once again. I want to keep the name John only once for the whole lifetime of the streaming application. Any thoughts on that?
I am receiving data from Kafka into a Spark Streaming application. It comes in the form of transformed DStreams. I then keep only the features I want: features=data.map(featurize), which gives me "name", "age", "whatever". I then want to keep only the name from all the data: features=data.map(featurize).map(lambda Names: Names["name"]). Now, when I print this, I get all the names coming from the streaming application, but I want to work on each one separately. More specifically, I want to check each name, and if I have already come across it in the past I want to apply a function to it; otherwise I will just continue with my application. So I want each name to be a string that I can pass into my function which checks whether a string has been seen in the past. I know that foreach will give me each RDD, but I still want to work on each name in the RDDs separately. Is there any way in pyspark to do so?
0
1
184
0
34,868,531
0
0
0
0
1
true
3
2016-01-18T17:07:00.000
3
1
0
unable to run tensorflow on multiple GPUs
34,860,281
1.2
python,gpu,tensorflow
The problem disappeared after I installed an older version (352.55) of nvidia driver.
I am running the cifar10 multi-GPU example from the tensorflow repository. I am able to utilize more than one GPUs. My ubuntu PC has two Titan X's, I see memory are fully occupied by the process on both GPUs. However, only one GPU is actually computing. I obtain no speedup. I have tried tensorflow 0.5.0 and 0.6.0 pip binaries. I have also tried compiled from source. EDIT: The problem disappeared after I installed an older version of nvidia driver.
0
1
749
0
34,869,483
0
0
0
0
1
true
0
2016-01-19T05:21:00.000
0
1
0
Python/OpenCV/Mac ImportError: numpy.core.multiarray failed to import
34,869,018
1.2
python,macos,opencv
I've found the reason: there is a file named time.py in the same folder. I'm sure that's why I failed to import numpy. Also, if I put time.py in the same folder and run python test.py, I get the message "TypeError: 'module' object is not callable". Next, without closing the console, I delete time.py and run "import numpy" again, and I get the message "ImportError: cannot import name multiarray". If I close and open the console again, it works. This time I didn't see "ImportError: numpy.core.multiarray failed to import", but it does work!
When I put my Python code in the "~/Downloads/" folder, it works. However, it failed with the message "ImportError: numpy.core.multiarray failed to import" when I put the test.py file in a deeper location like "/Git/Pyehon/.....". Why? I am running this on a Mac.
0
1
858
0
39,321,804
0
0
0
0
1
false
3
2016-01-19T07:45:00.000
1
2
0
Pandas dropna does not work as expected on a MultiIndex
34,871,128
0.099668
python,pandas
For me this actually worked: df1 = df1[pd.notnull(df1['Column Name'])]
I have a Pandas DataFrame with a multiIndex. The index consists of a date and a text string. Some of the values are NaN and when I use dropna(), the row disappears as expected. However, when I look at the index using df.index, the dropped dates are still there. This is problematic as when I use the to_panel function, the dropped dates reappear. Am I using dropna incorrectly or how can I resolve this?
0
1
1,798
0
34,883,536
0
1
0
0
1
true
0
2016-01-19T17:09:00.000
1
1
0
How do I return a random element from 2 numpy array without repeats?
34,882,862
1.2
python,python-2.7,numpy
Without seeing any code, this is what I would try. Make an identically sized 2D array of Booleans, all set to True (available) by default. When your code randomly generates an X,Y location in your 2D array, check the availability array first: If the value at that location is True (available), return the value at that location in the other array (whatever values are stored there) and then set the availability value to False. If the value at that location is False (not available), keep trying the next value in the array until you find one that is available. (Do this instead of hitting the random number generator again. The fewer elements available, the more you would have to "re-roll", which would eventually become painfully slow.) Make sense? EDIT: I can think of at least 2 other ways of doing this that might be faster or more efficient, but this is the simple version.
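A minimal NumPy sketch of the availability-mask idea described above; the example array is a placeholder, and "step forward to the next free slot" stands in for the "keep trying the next value" strategy from the answer.

import numpy as np

values = np.arange(12).reshape(3, 4)           # stand-in 2D data array
available = np.ones(values.size, dtype=bool)   # True = not yet returned

def draw_once():
    """Return a random element that has not been returned before."""
    free = np.flatnonzero(available)
    if free.size == 0:
        raise ValueError("every element has already been returned")
    # Random starting position; if it is taken, step to the next free slot
    # (wrapping around) instead of re-rolling the random generator.
    start = np.random.randint(values.size)
    idx = free[np.searchsorted(free, start) % free.size]
    available[idx] = False
    return values.flat[idx]

# Drawing values.size times returns every element exactly once
print([draw_once() for _ in range(values.size)])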
I have a very big multi-dimensional array (for example 2D) inside a for loop. I would like to return one element from this array at each iteration, and that element should not have been returned before. In other words, each element should be returned only once over the course of the iteration.
0
1
38
0
46,003,443
0
1
0
0
1
false
5
2016-01-20T21:48:00.000
0
1
0
Is there a way to tell NLTK that a certain word isn't a proper noun but a noun?
34,911,264
0
python,nlp,nltk
Summing it up, you have the following options: Correcting the tag in post-processing - a bit ugly but quick and easy. Employing an external Named Entity Recognizer (Stanford NER, as @Bob Dylan has thoughtfully suggested) - this one is more involved, particularly because Stanford NER is written in Java and is not particularly fast. Retraining a POS tagger on domain-specific data (do you have a large enough annotated dataset to use for that?). Using a WSD (Word Sense Disambiguation) approach - for a start you need a good domain dictionary to use.
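As an illustration of the first option (correcting the tag in post-processing), here is a hedged sketch; the DOMAIN_NOUNS list is a made-up example, and it assumes the standard NLTK tokenizer and tagger models are already downloaded.

import nltk

DOMAIN_NOUNS = {"MS", "ALS"}   # hypothetical abbreviations to force to plain nouns

def retag(sentence):
    """Tag a sentence, then override the tag for known domain terms."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    return [(w, "NN") if w in DOMAIN_NOUNS else (w, t) for w, t in tagged]

print(retag("His MS was diagnosed in 1999."))
# 'MS' now comes out tagged 'NN' instead of 'NNP'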
I'm doing some NLP where I'm finding out when patients were diagnosed with multiple sclerosis. I'd like to use nltk to tell me that the noun of a sentence was multiple sclerosis. Problem is, doctors frequently refer to multiple sclerosis as MS which nltk picks up as a proper noun. For example, this sentence, "His MS was diagnosed in 1999." Is tagged as: [('His', 'PRP$'), ('MS', 'NNP'), ('was', 'VBD'), ('diagnosed', 'VBN'), ('in', 'IN'), ('1999', 'CD'), ('.', '.')] MS should be a noun here. Any suggestions?
0
1
436
0
34,931,076
0
0
0
0
1
false
1
2016-01-21T17:59:00.000
0
1
0
seaborn plots the same size?
34,930,986
0
python,matplotlib,size,seaborn
Seaborn sizing options vary by plot type, which can be a bit confusing, so this is a useful universal approach. First run this: import matplotlib.pyplot as plt Then add the line plt.figure(figsize=(9, 9)) in the notebook cells for each of the plots. You can adjust the integer values as you see fit.
seaborn has a convenient keyword named size=, which aims to make the plots a certain size. However, the plots significantly differ in size depending on the xy-ticks and the axis labels. What is the best way to generate plots with exactly the same dimensions regardless of ticks and axis labels?
0
1
219
0
37,116,604
0
0
1
0
1
true
1
2016-01-22T05:06:00.000
1
3
0
Can Python's pickle/cpickle/dill speed up imports?
34,939,388
1.2
python,import,pickle,dill
The import latency is most likely due to loading the dependent shared objects of the GEOS-library. Optimising this could maybe done, but it would be very hard. One way would be to build a statically compiled custom python interpreter with all DLLs and extension modules built in. But maintaining that would be a major PITA (trust me - I do it for work). Another option is to turn your application into a service, thus only incurring the runtime-cost of starting the interpreter up once. It depends on your actual problem if this is suitable.
Can pickle/dill/cpickle be used to pickle an imported module to improve import speed? The Shapely module for example takes 5 seconds on my system to find and load all of the required dependencies, which I'd really like to avoid. Can I pickle my imports once, then reuse that pickle instead of having to do slow imports every time?
0
1
627
0
57,711,857
0
0
0
0
1
false
6
2016-01-23T20:03:00.000
3
2
0
counting the number of non-zero numbers in a column of a df in pandas/python
34,968,223
0.291313
python,numpy,pandas
Numpy's count_nonzero function is efficient for this. np.count_nonzero(df["c"])
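A quick check using the frame from the question (column c holds 2, 0, 9, 0, so the expected answer is 2):

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [0, 1, 5, 4],
                   "b": [1, 4, 8, 5],
                   "c": [2, 0, 9, 0],
                   "d": [3, 5, 6, 0],
                   "e": [5, 2, 0, 0]})

print(np.count_nonzero(df["c"]))   # 2
print((df["c"] != 0).sum())        # 2 -- a pure-pandas equivalent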
I have a df that looks something like:
   a  b  c  d  e
   0  1  2  3  5
   1  4  0  5  2
   5  8  9  6  0
   4  5  0  0  0
I would like to output the number of numbers in column c that are not zero.
0
1
21,246
0
45,060,104
0
0
0
0
1
true
1
2016-01-24T00:34:00.000
0
1
0
Training, testing, and validation sets for bidirectional LSTM (BLSTM)
34,970,818
1.2
python,neural-network,time-series,keras,recurrent-neural-network
I think this has more to do with your particular dataset than Bi-LSTMs in general. You're confusing splitting a dataset for training/testing vs. splitting a sequence in a particular sample. It seems like you have many different subjects, which constitute a different sample. For a standard training/testing split, you would split your dataset between subjects, as you suggested in the last paragraph. For any sort of RNN application, you do NOT split along your temporal sequence; you input your entire sequence as a single sample to your Bi-LSTM. So the question really becomes whether such a model is well-suited to your problem, which has multiple labels at specific points in the sequence. You can use a sequence-to-sequence variant of the LSTM model to predict which label each time point in the sequence belongs to, but again you would NOT be splitting the sequence into multiple parts.
When it comes to normal ANNs, or any of the standard machine learning techniques, I understand what the training, testing, and validation sets should be (both conceptually, and the rule-of-thumb ratios). However, for a bidirectional LSTM (BLSTM) net, how to split the data is confusing me. I am trying to improve prediction on individual subject data that consists of monitored health values. In the simplest case, for each subject, there is one long time series of values (>20k values), and contiguous parts of that time series are labeled from a set of categories, depending on the current health of the subject. For a BLSTM, the net is trained on all of the data going forwards and backwards simultaneously. The problem then is, how does one split a time series for one subject? I can't just take the last 2,000 values (for example), because they might all fall into a single category. And I can't chop the time series up randomly, because then both the learning and testing phases would be made of disjointed chunks. Finally, each of the subjects (as far as I can tell) has slightly different (but similar) characteristics. So, maybe, since I have thousands of subjects, do I train on some, test on some, and validate on others? However, since there are inter-subject differences, how would I set up the tests if I was only considering one subject to start?
0
1
1,028
0
34,978,549
0
0
0
0
1
false
1
2016-01-24T11:53:00.000
1
1
0
igraph community detection result has too much overlap
34,975,419
0.197375
python-2.7,cluster-analysis,igraph,k-means
Your approach doesn't work because the fast greedy community detection expects similarities as weights, not distances. (Actually, this is probably only one of the reasons. The other is that the community detection algorithms in igraph were designed for sparse graphs. If you have calculated all the distances between all pairs of points, your graph is dense, and these algorithms will not be suitable).
I have a series of points (long, lat). 1) Found the haversine distance between all the points. 2) Saved this to a csv file (source, destination, weight). 3) Read the csv file and generated a weighted graph (where the weight is the haversine distance). 4) Used igraph's community detection algorithm - fastgreedy. I was expecting points with low distances between them to be grouped with each other, i.e. something similar to kmeans (without the distinct partitions in space), but there was no order in my results. Question: why does the community detection algorithm not give me results similar to kmeans clustering? If I'm using the same points / distances between points, then why is there so much overlap between the communities? I'm just looking for some intuition as to why this isn't working as I expected. Thanks
0
1
347
0
34,985,405
0
1
0
0
1
false
4
2016-01-25T04:30:00.000
5
2
0
py2exe: MKL FATAL ERROR: Cannot load mkl_intel_thread.dll
34,985,134
0.462117
python,matplotlib,py2exe
Never mind! I managed to solve it, by copying the required dll from inside numpy/core, into the dist folder that py2exe creates, not outside of it.
I'm trying to compile a python program in py2exe. It is returning a bunch of missing modules, and when I run the executable, it says: "MKL FATAL ERROR: Cannot load mkl_intel_thread.dll" All my 'non-plotting' scripts work perfectly, just scripts utilizing 'matplotlib', and 'pyqtgraph' don't work. I've even found the file in Numpy/Core/mkl_intel_thread.dll, and placed it into the folder with the .exe, and it still doesn't work. Does anyone have any idea how this can be solved? I'm using Anaconda Python 3.4, and matplotlib 1.5.1
0
1
2,857
0
35,020,997
0
0
0
0
1
true
8
2016-01-25T10:40:00.000
8
1
0
Azure Machine Learning Request Response latency
34,990,561
1.2
python,azure,azure-machine-learning-studio
First, I am assuming you are doing your timing test on the published AML endpoint. When a call is made to AML, the first call must warm up the container. By default a web service has 20 containers. Each container is cold, and a cold container can cause a large (30 sec) delay. In the string returned by the AML endpoint, only count requests that have the isWarm flag set to true. Hammering the service with MANY requests (relative to how many containers you have running) can get all your containers warmed. If you are sending out dozens of requests per instance, the endpoint might be getting throttled. You can adjust the number of calls your endpoint can accept by going to manage.windowsazure.com: in the Azure ML section, from the left bar select your workspace, go to the web services tab, select your web service from the list, and adjust the number of calls with the slider. By enabling debugging on your endpoint you can get logs about the execution time of each of your modules. You can use this to determine whether a module is not running as you intended, which may add to the time. Overall, there is an overhead when using the Execute Python module, but I'd expect this request to complete in under 3 secs.
I have made an Azure Machine Learning Experiment which takes a small dataset (12x3 array) and some parameters and does some calculations using a few Python modules (a linear regression calculation and some more). This all works fine. I have deployed the experiment and now want to throw data at it from the front-end of my application. The API-call goes in and comes back with correct results, but it takes up to 30 seconds to calculate a simple linear regression. Sometimes it is 20 seconds, sometimes only 1 second. I even got it down to 100 ms one time (which is what I'd like), but 90% of the time the request takes more than 20 seconds to complete, which is unacceptable. I guess it has something to do with it still being an experiment, or it is still in a development slot, but I can't find the settings to get it to run on a faster machine. Is there a way to speed up my execution? Edit: To clarify: The varying timings are obtained with the same test data, simply by sending the same request multiple times. This made me conclude it must have something to do with my request being put in a queue, there is some start-up latency or I'm throttled in some other way.
0
1
1,088
0
34,992,250
0
0
0
0
1
true
0
2016-01-25T11:25:00.000
1
1
0
Optimizing scipy.spatial.Delaunay.find_simplex
34,991,430
1.2
python,search,scipy,triangulation,delaunay
You can try a point location test, in particular the Kirkpatrick algorithm/data structure. Basically you subdivide the mesh along both axes and re-triangulate it. A better and simpler solution is to give each triangle a color, draw the triangulation to a bitmap, and then check the color of the bitmap at the query point.
I have a set of points in a plane where each point has an associated altitude. I'm thinking of using the scipy.spatial library to compute the Delaunay triangulation of the point set and then use the result to interpolate for the points in between. The library implements a nice function that, given a point, finds the triangle it lies in. This would be particularly useful when calculating the depth map from the mesh. I assume though (please do correct me if I'm wrong) that the search function searches from the same starting point every time it is called. Since the points I will be looking for will tend to lie either on the triangle the previous one lied on or on an adjacent one, I figure that's unneccessary, but can't seem to find a way to optimize the search, other than to implement it myself. Is there a way to set the initial triangle for the search, or to optimize the depth map calculation otherwise?
0
1
651
0
35,028,696
0
0
0
0
1
false
1
2016-01-25T15:01:00.000
0
1
0
Matplotlib hexbin - get bin borders
34,995,645
0
python,matplotlib
I'm wishing to do something similar for small hexbins, and am thinking of: (1) getting the hexbin centres: hexobj_cen = hexobj.get_offsets(); lon_hex = hexobj_cen[:,0] # hexbin lon centre; lat_hex = hexobj_cen[:,1] # hexbin lat centre; (2) running a for loop (over each hexbin centre) to find the Cartesian distance (N.hypot) between that centre and all points, stored in an array, and then asking, for each hexbin centre, whether the point-to-hexbin distance is greater than some maximum distance (half the distance between two opposing vertices). It would be great if there were a standard way (within pylab.hexbin) to do this, but I also couldn't find one yet.
Is there a way to get the borders of a matplotlib.pyplot.hexbin plot? Say, i have a pd.DataFrame with spatial latitude and longitude values, which i plot in a hexbin plot. Afterwards i want to assign the corresponding bin of the hexbin grid to each instance of my DataFrame, by checking if the latitude and longitude values of an instance fall in one of the hexbin bins. Can i assign names or indices to the different bins? I have already looked in the documentation for the hexbin plot, all i can find are line properties, which describe the lines that are drawn in the plot.
0
1
399
0
35,013,791
0
0
0
0
1
false
1
2016-01-25T17:10:00.000
1
2
0
Broadcast large array in pyspark (~ 8GB)
34,998,280
0.099668
python,apache-spark,python-3.4,pyspark
This is not a problem with PySpark; it is a limit of the Spark implementation. Spark uses a Scala array to store the broadcast elements, and since the max Integer in Scala is about 2*10^9, the total number of string bytes is 2*2*10^9 = 4GB; you can view the Spark code.
In Pyspark, I am trying to broadcast a large numpy array of size around 8GB. But it fails with the error "OverflowError: cannot serialize a string larger than 4GiB". I have 15g in executor memory and 25g driver memory. I have tried using default and kyro serializer. Both didnot work and show same error. Can anyone suggest how to get rid of this error and the most efficient way to tackle large broadcast variables?
0
1
3,217
0
35,015,910
0
0
0
0
1
false
2
2016-01-25T20:04:00.000
0
3
0
Cut selected data from daily precipitation CSV files
35,001,306
0
python,csv
Apart from extracting the data, the first thing you need to do is rearrange your data. As it is now, 191 columns are added every day. To do that, the whole file needs to be parsed (probably in memory, data growing every day), data gets added to the end of each row, and everything has to be fully written to disk again. Usually, to add data to a csv, rows are added at the end of the file. No need to parse and rewrite the whole file each time. On top of that, most software to read csv files starts having problems when the number of columns gets higher. So it would be a lot better to add the daily data as rows at the end of the csv file. While we're at it: assuming the 253 x 191 is some sort of grid, or at least every cell has te same data type, this would be a great candidate for binary storage (not sure how/if Python can handle that). All data could be stored in it's binary form resulting in a fixed length field/cell. To access a field, it's position could simply be calculated and there would be no need to parse and convert all the data each time. Retrieving data would be almost instant.
I have a csv file containing daily precipitation data (253 rows and 191 columns per day), so for one year I have 191 * 365 columns. I want to extract data for a certain row and column that is my area of interest, for example row 20 and column 40 for the first day; for days 2, 3, 4 ... 365 the columns are the same distance apart. I'm new to Python. Is there any way that I can extract the data for a certain row and column for one year and store it in a new csv? Thanks
0
1
91
0
35,004,791
0
0
0
0
1
true
3
2016-01-25T23:46:00.000
2
1
0
reading a large dataset in tensorflow
35,004,619
1.2
python,deep-learning,tensorflow
The amount of pre-fetching depends on your queue capacity. If you use string_input_producer for your filenames and batch for batching, you will have 2 queues - filename queue, and prefetching queue created by batch. Queue created by batch has default capacity of 32, controlled by batch(...,capacity=) argument, therefore it can prefetch up to 32 images. If you follow outline in TensorFlow official howto's, processing examples (everything after batch) will happen in main Python thread, whereas filling up the queue will happen in threads created/started by batch/start_queue_runners, so prefetching new data and running prefetched data through the network will occur concurrently, blocking when the queue gets full or empty.
I am not quite sure about how file-queue works. I am trying to use a large dataset like imagenet as input. So preloading data is not the case, so I am wondering how to use the file-queue. According to the tutorial, we can convert data to TFRecords file as input. Now we have a single big TFRecords file. So when we specify a FIFO queue for the reader, does it mean the program would fetch a batch of data each time and feed the graph instead of loading the whole file of data?
0
1
2,670
0
38,405,970
0
0
0
0
3
false
90
2016-01-28T00:21:00.000
81
8
0
How big should batch size and number of epochs be when fitting a model?
35,050,753
1
python,machine-learning,deep-learning
Since you have a pretty small dataset (~ 1000 samples), you would probably be safe using a batch size of 32, which is pretty standard. It won't make a huge difference for your problem unless you're training on hundreds of thousands or millions of observations. To answer your questions on Batch Size and Epochs: In general: Larger batch sizes result in faster progress in training, but don't always converge as fast. Smaller batch sizes train slower, but can converge faster. It's definitely problem dependent. In general, the models improve with more epochs of training, to a point. They'll start to plateau in accuracy as they converge. Try something like 50 and plot number of epochs (x axis) vs. accuracy (y axis). You'll see where it levels out. What is the type and/or shape of your data? Are these images, or just tabular data? This is an important detail.
My training set has 970 samples and validation set has 243 samples. How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?
0
1
132,079
0
38,457,655
0
0
0
0
3
false
90
2016-01-28T00:21:00.000
11
8
0
How big should batch size and number of epochs be when fitting a model?
35,050,753
1
python,machine-learning,deep-learning
I use Keras to perform non-linear regression on speech data. Each of my speech files gives me features that are 25000 rows in a text file, with each row containing 257 real valued numbers. I use a batch size of 100, epoch 50 to train Sequential model in Keras with 1 hidden layer. After 50 epochs of training, it converges quite well to a low val_loss.
My training set has 970 samples and validation set has 243 samples. How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?
0
1
132,079
0
44,901,953
0
0
0
0
3
false
90
2016-01-28T00:21:00.000
7
8
0
How big should batch size and number of epochs be when fitting a model?
35,050,753
1
python,machine-learning,deep-learning
I used Keras to perform non-linear regression for market mix modelling. I got the best results with a batch size of 32 and epochs = 100 while training a Sequential model in Keras with 3 hidden layers. Generally a batch size of 32 or 25 is good, with epochs = 100, unless you have a large dataset. In the case of a large dataset you can go with a batch size of 10 and epochs between 50 and 100. Again, the above mentioned figures have worked fine for me.
My training set has 970 samples and validation set has 243 samples. How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?
0
1
132,079
0
35,064,268
0
0
0
0
1
true
2
2016-01-28T14:17:00.000
1
1
0
reading the last index from a csv file using pandas in python2.7
35,063,946
1.2
python-2.7,csv,pandas,pandasql
Reading the entire index column will still need to read and parse the whole file. If no fields in the file are multiline, you could scan the file backwards to find the first newline (but with a check if there is a newline past the data). The value following that newline will be your last index. Storing the last index in another file would also be a possibility, but you would have to make sure both files stay consistent. Another way would be to reserve some (fixed amount of) bytes at the beginning of the file and write (in place) the last index value there as a comment. But your parser would have to support comments, or be able to skip rows.
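A rough sketch of the "scan backwards for the last newline" idea, assuming rows never span multiple lines and the index is the first comma-separated field; the function name and separator are placeholders.

import os

def last_index(path, sep=","):
    """Return the index value on the last data row of a CSV file."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        end = f.tell()
        pos = end - 2              # start before a possible trailing newline
        while pos > 0:
            f.seek(pos)
            if f.read(1) == b"\n":
                break              # pointer now sits at the start of the last row
            pos -= 1
        if pos <= 0:
            f.seek(0)              # single-line file: the only row is the last row
        last_line = f.read().decode().strip()
    return int(last_line.split(sep)[0])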
I have a .csv file on disk, formatted so that I can read it into a pandas DataFrame easily, to which I periodically write rows. I need this database to have a row index, so every time I write a new row to it I need to know the index of the last row written. There are plenty of ways to do this: I could read the entire file into a DataFrame, append my row, and then print the entire DataFrame to memory again. This might get a bit slow as the database grows. I could read the entire index column into memory, and pick the largest value off, then append my row to the .csv file. This might be a little better, depending on how column-reading is implemented. I am curious if there is a way to just get that one cell directly, without having to read a whole bunch of extra information into memory. Any suggestions?
0
1
511
0
35,069,535
0
0
0
0
1
false
2
2016-01-28T18:40:00.000
1
2
0
Rearranging Data in Pandas
35,069,440
0.099668
python,pandas
Look into the DataFrame.pivot method
I've been looking through the documentation (and Stack Overflow) and am having trouble figuring out how to rearrange a pandas data frame the way described below. I wish to have a row where there is a column name, a row name and the value of that specific row and column. Input:
   A  B  C
X  1  2  3
Y  4  5  6
Output:
X  A  1
X  B  2
X  C  3
Y  A  4
Y  B  5
Y  C  6
Any help would be much appreciated
0
1
121
0
35,090,610
0
0
0
0
2
true
19
2016-01-28T23:33:00.000
37
6
0
How to copy/paste a dataframe from iPython into Google Sheets or Excel?
35,074,209
1.2
python,excel,google-sheets,ipython,ipython-notebook
Try using the to_clipboard() method. E.g., for a dataframe, df: df.to_clipboard() will copy said dataframe to your clipboard. You can then paste it into Excel or Google Docs.
I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython. I know how to convert results to csv and save. But then I have to dig through my computer, open the results and paste them into Excel or Google Sheets. That takes too much time. And just highlighting a resulting dataframe and copy/pasting usually completely messes up the formatting, with columns overflowing. (Not to mention the issue of long resulting dataframes being truncated when printed in iPython.) How can I easily copy/paste an iPython result into a spreadsheet?
0
1
17,289
0
66,239,699
0
0
0
0
2
false
19
2016-01-28T23:33:00.000
1
6
0
How to copy/paste a dataframe from iPython into Google Sheets or Excel?
35,074,209
0.033321
python,excel,google-sheets,ipython,ipython-notebook
Paste the output to an IDE like Atom and then paste in Google Sheets/Excel
I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython. I know how to convert results to csv and save. But then I have to dig through my computer, open the results and paste them into Excel or Google Sheets. That takes too much time. And just highlighting a resulting dataframe and copy/pasting usually completely messes up the formatting, with columns overflowing. (Not to mention the issue of long resulting dataframes being truncated when printed in iPython.) How can I easily copy/paste an iPython result into a spreadsheet?
0
1
17,289
0
39,967,831
0
1
0
0
1
false
2
2016-01-29T13:30:00.000
3
3
0
Installing OpenCV 3.1 on OS X El Capitan using Python 3.5.1
35,085,809
0.197375
python,opencv,python-3.5
For me the only working way was using conda: conda install --channel https://conda.anaconda.org/menpo opencv3 and then import it using import cv2
I have looked for a proper way to install OpenCV, but all I can find are people fudging around with Python 2.old or virtualenv or other things that are utterly irrelevant. I just want be able to run import cv2 without any import errors. How do I install OpenCV on OS X 10.11 for use with Python 3.5.1?
0
1
4,202
0
35,093,550
0
1
0
0
1
false
0
2016-01-29T18:11:00.000
0
1
0
Are there performance benchmarks of NumPy arrays in an IPython Notebook versus a .py script file?
35,091,235
0
arrays,python-3.x,numpy,ipython-notebook
I have never noticed any performance penalty (5-6 million x 8 arrays here) with IPython/Jupyter, but even if there is some small difference it is unlikely to be noticeable. A much greater speed increase with a similarly low effort would come from writing performance sensitive code in cython, adding type annotations in cython would yield even greater increases. In my own work I have observed speedups of orders of magnitude from using cython smartly.
I'm working with huge multidimensional NumPy arrays in an IPython notebook with Python3 and things are slow going. Is it appreciably quicker to convert the .ipynb file into a .py file and run via the command line?
0
1
155
0
35,121,242
0
0
0
0
1
true
39
2016-02-01T00:13:00.000
14
3
0
Reading a pickle file (PANDAS Python Data Frame) in R
35,121,192
1.2
python,r,pandas,dataframe
Edit: If you can install and use the {reticulate} package, then this answer is probably outdated. See the other answers below for an easier path. You could load the pickle in python and then export it to R via the python package rpy2 (or similar). Once you've done so, your data will exist in an R session linked to python. I suspect that what you'd want to do next would be to use that session to call R and saveRDS to a file or RAM disk. Then in RStudio you can read that file back in. Look at the R packages rJython and rPython for ways in which you could trigger the python commands from R. Alternatively, you could write a simple python script to load your data in Python (probably using one of the R packages noted above) and write a formatted data stream to stdout. Then that entire system call to the script (including the argument that specifies your pickle) can use used as an argument to fread in the R package data.table. Alternatively, if you wanted to keep to standard functions, you could use combination of system(..., intern=TRUE) and read.table. As usual, there are /many/ ways to skin this particular cat. The basic steps are: Load the data in python Express the data to R (e.g., exporting the object via rpy2 or writing formatted text to stdout with R ready to receive it on the other end) Serialize the expressed data in R to an internal data representation (e.g., exporting the object via rpy2 or fread) (optional) Make the data in that session of R accessible to another R session (i.e., the step to close the loop with rpy2, or if you've been using fread then you're already done).
Is there an easy way to read pickle files (.pkl) from Pandas Dataframe into R? One possibility is to export to CSV and have R read the CSV but that seems really cumbersome for me because my dataframes are rather large. Is there an easier way to do so? Thanks!
0
1
36,680
0
35,149,501
0
0
0
0
1
false
0
2016-02-01T04:59:00.000
2
2
0
Word clustering in python
35,123,248
0.197375
python,machine-learning,cluster-analysis,word
Word clustering will be really disappointing because the computer does not understand language. You could use levenshtein distance and then do hierarchical clustering. But: dog and fog have a distance of 1, i.e. are highly similar. dog and cat have 3 out of 3 letters different. So unless you can define a good measure of similarity, don't cluster words.
How can I cluster only words in a given set of data? I have been going through a few algorithms online, like the k-means algorithm, but they seem to be about document clustering rather than word clustering. Can anyone suggest some way to cluster only the words in a given set of data? Please note I am new to Python.
0
1
4,147
0
35,147,923
0
0
0
0
1
false
1
2016-02-01T09:27:00.000
1
1
0
How can I know if the epoch point is reached in seq2seq model?
35,126,954
0.197375
python,tensorflow,recurrent-neural-network
It looks like there is a difference between your dev and train data: global step 374600 learning rate 0.0069 step-time 1.92 perplexity 1.02 eval: bucket 0 perplexity 137268.32 Your training perplexity is 1.02 -- the model is basically perfect on the data it receives for training. But your dev perplexity is enormous, the model does not work at all for the dev set. How did it look in earlier epochs? I would suspect that there is some mismatch. Maybe the tokenization is different for train and dev? Maybe you loaded the wrong file? Maybe the sizes of the buckets from the original translation model are not appropriate for your dev data? It's hard to say without knowing more details. As to when to stop: the original translation model has an infinite training loop because it has a large data-set and capacity and could continue improving for many weeks of training. But it also lowers the learning rate when it's not improving any more, so if your learning rate is very low (as it seems to be in your case), it's a clear signal you can stop.
I am training a seq2seq model since many days on a custom parallel corpus of about a million sentences with default settings for the seq2seq model. Following is the output log which has crossed 350k steps as mentioned in the tutorial. I saw that the bucket perplexity have suddenly increased significantly the overall train perplexity is constant at 1.02 since a long time now , also the learning rate was initialized at 0.5 but now it shows about 0.007 , so the learning rate has also significantly decreased, Also the output of the system is not close to satisfactory. How can I know if the epoch point is reached and should I stop and reconfigure settings like parameter tuning and optimizer improvements? global step 372800 learning rate 0.0071 step-time 1.71 perplexity 1.02 eval: bucket 0 perplexity 91819.49 eval: bucket 1 perplexity 21392511.38 eval: bucket 2 perplexity 16595488.15 eval: bucket 3 perplexity 7632624.78 global step 373000 learning rate 0.0071 step-time 1.73 perplexity 1.02 eval: bucket 0 perplexity 140295.51 eval: bucket 1 perplexity 13456390.43 eval: bucket 2 perplexity 7234450.24 eval: bucket 3 perplexity 3700941.57 global step 373200 learning rate 0.0071 step-time 1.69 perplexity 1.02 eval: bucket 0 perplexity 42996.45 eval: bucket 1 perplexity 37690535.99 eval: bucket 2 perplexity 12128765.09 eval: bucket 3 perplexity 5631090.67 global step 373400 learning rate 0.0071 step-time 1.82 perplexity 1.02 eval: bucket 0 perplexity 119885.35 eval: bucket 1 perplexity 11166383.51 eval: bucket 2 perplexity 27781188.86 eval: bucket 3 perplexity 3885654.40 global step 373600 learning rate 0.0071 step-time 1.69 perplexity 1.02 eval: bucket 0 perplexity 215824.91 eval: bucket 1 perplexity 12709769.99 eval: bucket 2 perplexity 6865776.55 eval: bucket 3 perplexity 5932146.75 global step 373800 learning rate 0.0071 step-time 1.78 perplexity 1.02 eval: bucket 0 perplexity 400927.92 eval: bucket 1 perplexity 13383517.28 eval: bucket 2 perplexity 19885776.58 eval: bucket 3 perplexity 7053727.87 global step 374000 learning rate 0.0071 step-time 1.85 perplexity 1.02 eval: bucket 0 perplexity 46706.22 eval: bucket 1 perplexity 35772455.34 eval: bucket 2 perplexity 8198331.56 eval: bucket 3 perplexity 7518406.42 global step 374200 learning rate 0.0070 step-time 1.98 perplexity 1.03 eval: bucket 0 perplexity 73865.49 eval: bucket 1 perplexity 22784461.66 eval: bucket 2 perplexity 6340268.76 eval: bucket 3 perplexity 4086899.28 global step 374400 learning rate 0.0069 step-time 1.89 perplexity 1.02 eval: bucket 0 perplexity 270132.56 eval: bucket 1 perplexity 17088126.51 eval: bucket 2 perplexity 15129051.30 eval: bucket 3 perplexity 4505976.67 global step 374600 learning rate 0.0069 step-time 1.92 perplexity 1.02 eval: bucket 0 perplexity 137268.32 eval: bucket 1 perplexity 21451921.25 eval: bucket 2 perplexity 13817998.56 eval: bucket 3 perplexity 4826017.20 And when will this stop ?
0
1
972
0
35,132,831
0
0
0
0
1
false
4
2016-02-01T14:06:00.000
0
2
0
Use a different estimator based on value
35,132,569
0
python,feature-selection,supervised-learning
I personally am new to Python, but I would use a list data type. I would then do a membership check against the list you just wrote. Then, if the membership check is true, run/use the random forest regressor; if false, use/run the other regressor.
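scikit-learn has no built-in "switch estimator on a feature value", so a common way to sketch this is simply to split the data and fit one model per group; the column name gender, the feature list, and the choice of LinearRegression for the other group are assumptions for illustration only.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def fit_by_gender(df, features, target):
    """Fit one regressor per gender group (illustrative column names)."""
    male = df[df["gender"] == "male"]
    other = df[df["gender"] != "male"]
    return {
        "male": RandomForestRegressor(n_estimators=100).fit(male[features], male[target]),
        "other": LinearRegression().fit(other[features], other[target]),
    }

def predict_row(models, row, features):
    """Route a single row to the model trained for its gender group."""
    key = "male" if row["gender"] == "male" else "other"
    return models[key].predict(row[features].values.reshape(1, -1))[0]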
What I'm trying to do is build a regressor based on a value in a feature. That is to say, I have some columns where one of them is more important (let's suppose it is gender) (of course it is different from the target value Y). I want to say: - If the gender is Male then use the randomForest regressor - Else use another regressor Do you have any idea about if this is possible using sklearn or any other library in python?
0
1
47
0
35,172,568
0
1
0
0
2
false
53
2016-02-02T06:21:00.000
1
4
0
Tensorflow python : Accessing individual elements in a tensor
35,146,444
0.049958
python,python-2.7,tensorflow
You simply can't get the value of the 0th element of [[1,2,3]] without run()-ning or eval()-ing an operation that would get it, because before you 'run' or 'eval' you have only a description of how to get this inner element (TF uses symbolic graphs/calculations). So even if you were to use tf.gather/tf.slice, you would still have to get the values of these operations via eval/run. See @mrry's answer.
This question is with respect to accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3] (This can be performed using .eval() or sess.run()) but it takes longer when the size of the tensor is huge) Is there any method to do the same faster? Thanks in Advance.
0
1
79,845
0
35,148,137
0
1
0
0
2
false
53
2016-02-02T06:21:00.000
1
4
0
Tensorflow python : Accessing individual elements in a tensor
35,146,444
0.049958
python,python-2.7,tensorflow
I suspect it's the rest of the computation that takes time, rather than accessing one element. Also, the result might require a copy from whatever memory it is stored in, so if it's on the graphics card it will need to be copied back to RAM first before you get access to your element. If this is the case, you might skip it by adding a TensorFlow operation that takes the first element and only returns that.
This question is with respect to accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3] (This can be performed using .eval() or sess.run()) but it takes longer when the size of the tensor is huge) Is there any method to do the same faster? Thanks in Advance.
0
1
79,845
0
35,153,245
0
0
0
0
1
false
0
2016-02-02T11:21:00.000
0
2
0
Take input of arbitrary size in theano
35,152,052
0
python-2.7,theano
You need to do some data formatting. The input size of a NN is constant, so if the images for your CNN have different sizes, you need to resize them to your input size before feeding them in. It's like a person being too close to or too far away from a painting: your field of view is constant, so in order to see everything clearly you need to adjust your distance from the image.
I built a CNN in Theano. The input is many images, but their sizes are different, while the elements of a numpy.array must all have the same size. How can I use them as the input? Thanks a lot.
0
1
61
0
35,159,902
0
1
0
0
1
true
1
2016-02-02T17:16:00.000
1
2
0
Python set number of arguments to capture
35,159,748
1.2
python
Is there a way use the def foo(*x) notation to let python know it needs a certain range of number of arguments? Nope. Also, scipy.optimize.curve_fit ultimately gets its argument count information from f.__code__.co_argcount, not co_nlocals or n_locals (which doesn't exist).
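A quick illustration of where that count comes from and why *args functions report zero:

def model(x, a, b, c):
    return a * x ** 2 + b * x + c

print(model.__code__.co_argcount)      # 4 -- x plus the three fit parameters

def var_model(*params):
    pass

print(var_model.__code__.co_argcount)  # 0 -- *args are not counted as arguments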
I was working on a project where I was doing regressions and I wanted to use scipy.optimize.curve_fit, which takes a function and tries to find the right parameters for it. The odd part was that it was never told how many parameters the function took. Eventually we guessed that it used foo.__code__.co_nlocals, but in the case where we would have used it I needed 33 arguments. To the question: is there a way to use the def foo(*x) notation to let Python know it needs a certain range of numbers of arguments? Something like def foo(*x[:32])? I'm sure I will never use this in any real code, but it would be interesting to know.
0
1
40
0
50,763,868
0
0
0
0
1
false
7
2016-02-02T21:04:00.000
0
4
0
Theano Dimshuffle equivalent in Google's TensorFlow?
35,163,789
0
python,numpy,theano,tensorflow
tf.transpose is probably what you are looking for. It takes an arbitrary permutation of the axes.
I have seen that transpose and reshape together can help, but I don't know how to use them. E.g. dimshuffle(0, 'x') - what is its equivalent using transpose and reshape? Or is there a better way? Thank you.
0
1
4,682
0
36,043,728
0
0
0
0
1
true
1
2016-02-03T17:16:00.000
0
1
1
Creating Contingency Solution Output File for PSS/E using Python 2.7
35,183,538
1.2
python,python-2.7
@Magalhaes, the auxiliary files *.sub, *.mon and *.con are input files. You have to write them; PSSE doesn't generate them. Your recording shows that you defined a bus subsystem twice, generated a *.dfx from existing auxiliary files, ran an AC contingency solution, then generated an *.acc report. So when you did this recording, you must have started with already existing auxiliary files.
I'm using python to interact with PSS/E (siemens software) and I'm trying to create *.acc file for pss/e, from python. I can do this easily using pss/e itself: 1 - create *.sub, *.mon, *.con files 2 - create respective *.dfx file 3 - and finally create *.acc file The idea is to perform all these 3 tasks automatically, using python. So, using the record tool from pss/e I get this code: psspy.bsys(0,0,[ 230., 230.],1,[1],0,[],0,[],0,[]) psspy.bsys(0,0,[ 230., 230.],1,[1],0,[],0,[],0,[]) psspy.dfax([1,1],r"""PATH\reports.sub""",r"""PATH\reports.mon""",r"""PATH\reports.con""",r"""PATH\reports.dfx""") psspy.accc_with_dsp_3( 0.5,[0,0,0,1,1,2,0,0,0,0,0],r"""IEEE""",r"""PATH\reports.dfx""",r"""PATH\reports.acc""","","","") psspy.accc_single_run_report_4([1,1,2,1,1,0,1,0,0,0,0,0],[0,0,0,0,6000],[ 0.5, 5.0, 100.0,0.0,0.0,0.0, 99999.],r"""PATH\reports.acc""") It happens that when I run this code on python, the *.sub, *.mon, *.con and *.dfx files are not created thus the API accc_single_run_report_4() reports an error. Can anyone tell me why these files aren't being created with this code? Thanks in advance for your time
0
1
1,112
0
35,185,050
0
0
0
0
1
false
4
2016-02-03T18:27:00.000
2
2
0
Shape Mismatch Numpy
35,184,815
0.197375
python,numpy
In numpy, (10, 1), (10,) are not the same at all: (10, 1) is a two dimensional array, with a single column. (10, ) is a one dimensional array If you have an array a, and print out len(a.shape), you'll see the difference.
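A short demonstration; the conversion lines at the end (ravel, reshape(-1), squeeze) go beyond the answer but are the usual ways to collapse a (10, 1) array to (10,):

import numpy as np

col = np.ones((10, 1))   # 2-D: ten rows, one column
vec = np.ones(10)        # 1-D: ten elements

print(col.shape, len(col.shape))   # (10, 1) 2
print(vec.shape, len(vec.shape))   # (10,) 1

# Common ways to go from (10, 1) to (10,):
print(col.ravel().shape)       # (10,)
print(col.reshape(-1).shape)   # (10,)
print(np.squeeze(col).shape)   # (10,)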
I am continuously getting the error "(shapes (10, 1), (10,) mismatch)" when doing a NumPy operation and I am somewhat confused. Wouldn't (10, 1) and (10,) be identical shapes? And if for whatever reason this is not valid, is there a way to convert (10, 1) to (10,)? I cannot seem to find it in the NumPy documentation. Thanks
0
1
1,521
0
35,184,973
0
0
0
0
1
false
1
2016-02-03T18:31:00.000
1
1
0
Limiting the number of GB to read in read_csv in Pandas
35,184,894
0.197375
python-3.x,pandas
You can pass nrows=number_of_rows_to_read to your read_csv function to limit the lines that are read.
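A hedged sketch of both options; the file name is a placeholder, and the chunked loop uses in-memory size as a rough proxy for the ~10 MB on-disk target, so the cut-off is only approximate.

import pandas as pd

# Option 1: read only a fixed number of rows
small = pd.read_csv("huge_file.csv", nrows=100000)

# Option 2: stream in chunks and stop once roughly ~10 MB has been accumulated
pieces = []
bytes_read = 0
for chunk in pd.read_csv("huge_file.csv", chunksize=50000):
    pieces.append(chunk)
    bytes_read += chunk.memory_usage(deep=True).sum()
    if bytes_read > 10 * 1024 ** 2:
        break
df = pd.concat(pieces)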
I often work with csv files that are 100s of GB in size. Is there any way to tell read_csv to only read a fixed number of MB from a csv file? Update: It looks like chunks and chunksize can be used for this, but the documentation looks a bit slim here. What would be an example of how to do this with a real csv file? (e.g. say a 100GB file, read only rows up to approximately ~10MB)
0
1
779
0
35,213,773
0
0
0
0
1
false
7
2016-02-04T23:19:00.000
5
4
0
NumPy calculate square of norm 2 of vector
35,213,592
0.244919
python,numpy,inner-product
I don't know if the performance is any good, but (a**2).sum() calculates the right value and has the non-repeated argument you want. You can replace a with some complicated expression without binding it to a variable, just remember to use parentheses as necessary, since ** binds more tightly than most other operators: ((a-b)**2).sum()
I have vector a. I want to calculate np.inner(a, a) But I wonder whether there is prettier way to calc it. [The disadvantage of this way, that if I want to calculate it for a-b or a bit more complex expression, I have to do that with one more line. c = a - b and np.inner(c, c) instead of somewhat(a - b)]
0
1
38,727
0
35,238,504
0
0
0
0
1
true
1
2016-02-05T16:37:00.000
1
1
0
Matplotlib spectrogram versus STFT
35,229,136
1.2
python,matplotlib,fft,spectrogram
The redundancy is because you input a strictly real signal to your FFT, so the DFT result is complex conjugate (Hermitian) symmetric. This redundancy is due to the fact that all the imaginary components of strictly real input are zero. But the output of the DFT can include non-zero imaginary components to indicate phase, so the DFT result has to be conjugate symmetric in order for all the imaginary components in the result to cancel out between the two halves (same magnitudes, but opposite phases), indicating strictly real input. Also, the lower 257 bins of the basis transform will have 512 degrees of (scalar) freedom, just like the input. However, a spectrogram throws away all phase information, so it can only display 257 unique values (magnitude-only). If you input a complex (quadrature, for instance) signal to a DFT, then there would likely be no Hermitian redundancy, and you would have 1024 degrees of freedom from a length-512 DFT. If you want an image height of 512 (given real input), try an FFT size of 1024.
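The 512-to-257 relationship is easy to verify numerically with NumPy:

import numpy as np

x = np.random.randn(512)     # strictly real signal
X = np.fft.fft(x)

# Hermitian symmetry: bin k is the conjugate of bin N-k
print(np.allclose(X[1:256], np.conj(X[-1:-256:-1])))   # True

# rfft keeps only the non-redundant half: N//2 + 1 = 257 bins
print(np.fft.rfft(x).shape)   # (257,)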
I'm currently computing the spectrogram with the matplotlib. I specify NFFT=512 but the resulting image has a height of 257. I then tried to just do a STFT (short time fourier transform) which gives me 512 dimensional vectors (as expected). If I plot the result of the STFT I can see that half of the 512 values are just mirrored so really I only get 257 values (like the matplotlib). Can somebody explain to me why that is the case? I always thought of the FT as a basis transform, why would it introduce this redundancy? Thank you.
0
1
1,806
0
35,237,068
0
0
0
0
1
false
1
2016-02-06T03:00:00.000
0
1
0
tensorflow no module named example.tutorials.mnist.input_data
35,236,851
0
python-2.7,tensorflow
@mkarlovitz Looks like /Library/Python/2.7/site-packages/ is not in the list of paths Python is looking in. To see what paths Python uses to find packages, do the following (you can use the command line for this): 1. import sys 2. sys.path (this prints the list of paths). If /Library/Python/2.7/site-packages/ is not in the above list, add it as follows in the Python file/script you are executing: 1. import sys 2. sys.path.append('/Library/Python/2.7/site-packages/')
I've installed tensor flow on Mac OS X. Successfully ran simple command line test. Now trying the first tutorial. Fail on the first python line: [python prompt:] import tensorflow.examples.tutorials.mnist.input_data Traceback (most recent call last): File "", line 1, in ImportError: No module named examples.tutorials.mnist.input_data But the file seems to be there: new-host-4:~ karlovitz$ ls /Library/Python/2.7/site-packages/tensorflow/examples/tutorials/mnist/ BUILD fully_connected_feed.py mnist.py mnist_with_summaries.py init.py input_data.py mnist_softmax.py
0
1
5,529
0
35,248,119
0
0
0
0
1
false
8
2016-02-06T03:34:00.000
9
1
0
What is the difference between xgboost, extratreeclassifier, and randomforrestclasiffier?
35,237,044
1
python,random-forest,xgboost,kaggle
Extra-trees (ET), aka extremely randomized trees, is quite similar to random forest (RF). Both are bagging methods that aggregate fully grown decision trees. At each split, RF considers only a subset of the features (e.g. a third of them), but it evaluates every possible break point within those features and picks the best. ET, however, evaluates only a few random break points and picks the best of those. ET can bootstrap the samples given to each tree or use all samples; RF must use bootstrapping to work well. xgboost is an implementation of gradient boosting and works with decision trees, typically smaller ones. Each tree is trained to correct the residuals of the previously trained trees. Gradient boosting can be more difficult to train, but can achieve lower model bias than RF. For noisy data, bagging is likely to be the most promising; for low-noise data with complex structure, boosting is likely to be the most promising.
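As a rough sketch of trying all three side by side (this assumes scikit-learn and the xgboost package are installed; the dataset and hyperparameters are made up):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    'RandomForest': RandomForestClassifier(n_estimators=200, random_state=0),
    'ExtraTrees': ExtraTreesClassifier(n_estimators=200, random_state=0),
    'XGBoost': XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1),
}

# Cross-validated accuracy gives a crude comparison of the three ensembles
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())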
I am new to all these methods and am trying to get a simple answer, or perhaps a pointer to a high-level explanation somewhere on the web. My googling has only returned Kaggle sample code. Are ExtraTrees and RandomForest essentially the same? And xgboost uses boosting when it chooses the features for any particular tree, i.e. it samples the features. But then how do the other two algorithms select their features? Thanks!
0
1
2,360
0
35,237,949
0
0
0
0
2
false
0
2016-02-06T05:53:00.000
0
3
0
Python Pandas largest number
35,237,874
0
python-2.7,pandas
If the operations are done in the pydata stack (numpy/pandas), you're limited to fixed-precision numbers, up to 64 bit. Perhaps keep the arbitrary-precision numbers as strings?
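A short sketch of the overflow and of one possible workaround (keeping Python's arbitrary-precision ints in an object column):

import pandas as pd

df = pd.DataFrame({'x': [1456]})

# int64 wraps around silently, so the result is wrong (often negative)
print((df['x'] ** 15).iloc[0])

# Python ints survive when the column holds objects rather than int64
print((df['x'].astype(object) ** 15).iloc[0])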
I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas data frame and then try to take the 15th power, I get a negative number. Example: 1456 ** 15 = 280169351358921184433812095498240410552501272576L, yet when a similar operation is performed in pandas I get negative values. Is there a limit on the size of number that pandas can hold, and how can we change this limit?
0
1
112
0
35,237,988
0
0
0
0
2
false
0
2016-02-06T05:53:00.000
0
3
0
Python Pandas largest number
35,237,874
0
python-2.7,pandas
I was able to overcome this by changing the data type from int to float; doing this gives the answer 290 ** 15 = 8.629189e+36, which is good enough for my exercise.
I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas data frame and then try to take the 15th power, I get a negative number. Example: 1456 ** 15 = 280169351358921184433812095498240410552501272576L, yet when a similar operation is performed in pandas I get negative values. Is there a limit on the size of number that pandas can hold, and how can we change this limit?
0
1
112
0
45,540,101
1
0
0
0
1
false
4
2016-02-06T17:03:00.000
0
3
0
calculate indegree centralization of graph with python networkx
35,243,795
0
python,networkx,graph-theory
This answer has been taken from a Google Groups discussion on the issue (in the context of using R) and helps clarify the maths, taken together with the answer above: Freeman's approach measures "the average difference in centrality between the most central actor and all others". This 'centralization' is exactly captured in the mathematical formula sum(max(x) - x) / (length(x) - 1), where x refers to any centrality measure. That is, if you want to calculate the degree centralization of a network, x simply has to capture the vector of all degree values in the network. To compare various centralization measures, it is best to use standardized centrality measures, i.e. the centrality values should always be smaller than 1 (best position in any possible network) and greater than 0 (worst position). If you do so, the centralization will also lie in the range [0,1]. For degree, e.g., the 'best position' is to have an edge to all other nodes (i.e. incident edges = number of nodes minus 1) and the 'worst position' is to have no incident edge at all.
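One possible translation of that formula into Python with networkx, implementing exactly the sum(max(x) - x) / (length(x) - 1) form quoted above (the example graph is made up):

import networkx as nx

def centralization(values):
    # sum(max(x) - x) / (len(x) - 1) over the node-level centralities
    values = list(values)
    return sum(max(values) - v for v in values) / (len(values) - 1)

G = nx.DiGraph()
G.add_edges_from([(1, 2), (1, 3), (2, 3), (4, 3)])

# Standardized in-/out-degree centralities per node, then the graph-level value
print(centralization(nx.in_degree_centrality(G).values()))
print(centralization(nx.out_degree_centrality(G).values()))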
I have a graph and want to calculate its indegree and outdegree centralization. I tried to do this by using python networkx, but there I can only find a method to calculate indegree and outdegree centrality for each node. Is there a way to calculate in- and outdegree centralization of a graph in networkx?
0
1
2,646
0
35,256,493
0
0
0
0
1
false
3
2016-02-07T00:54:00.000
3
4
0
Tensorflow compat modules issues?
35,248,476
0.148885
python,tensorflow
You're most likely using an older version of TensorFlow. I just noticed that some of our install docs still link to 0.5 -- try upgrading to 0.6 or to head. I'll fix the docs soon, but in the meantime, if you installed via pip, you can just change the 0.5 to 0.6 in the path. If you're building from source, just check out the appropriate release tag (or head).
Getting the following error when working through the IPython notebooks of Google's TensorFlow Udacity course: AttributeError: 'module' object has no attribute 'compat' Trying to call: tf.compat.as_str(f.read(name)).split() Running on Ubuntu 14.04 and wondering if this is an early TensorFlow bug or just me being stupid. :P
0
1
4,727
0
35,255,569
0
0
0
0
1
true
0
2016-02-07T06:47:00.000
0
1
0
Processing array larger than memory for training a neural net in python
35,250,611
1.2
python,machine-learning,neural-network,large-data
What you are probably looking for is minibatching. In general, many methods of training neural nets are gradient based, and since your loss function is a function of the training set, so is the gradient. As you said, it may exceed your memory. Luckily, for additive loss functions (and most you will ever use are additive) one can prove that you can substitute full gradient descent with stochastic (or minibatch) gradient descent and still converge to a local minimum. Nowadays it is very common practice to use batches of 32, 64 or 128 rows, which are easy to fit in memory. Such networks can actually converge to a solution faster than ones trained with the full gradient, since you make N / 128 updates per pass over the dataset instead of just one. Even if each update is rather rough, as a combination they work pretty well.
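A minimal sketch of such a minibatch loop over a memory-mapped feature file (the file name, the shape and the train_step function are assumptions):

import numpy as np

# Features stored on disk; np.memmap reads only the slices you touch
X = np.memmap('features.dat', dtype='float32', mode='r', shape=(10**9, 20))

batch_size = 128
for start in range(0, X.shape[0], batch_size):
    batch = np.asarray(X[start:start + batch_size])   # copy just this chunk into RAM
    # train_step(batch)   # hypothetical: one gradient update per minibatch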
I am trying to train a neural net (backprop + gradient descent) in Python with features I am constructing on top of the Google Books 2-grams (English). It will end up being around a billion rows of data with 20 features per row. This will easily exceed my memory, so using in-memory arrays such as numpy is not an option, as that requires loading the complete training set. I looked into memory mapping in numpy, which could solve the problem for the input layer (which is read-only), but I will also need to store and manipulate the internal layers of the net, which requires extensive data reads/writes. Given the size of the data, performance is crucial in this process, since it could save days of processing for me. Is there a way to train the model without having to load the complete training set in memory for each iteration of cost (loss) minimization?
0
1
597
0
47,066,621
0
0
0
0
1
false
0
2016-02-09T04:02:00.000
0
2
0
How to cluster a time series using KMeans in python
35,283,654
0
python,numpy,pandas,machine-learning,scikit-learn
You can add more features based on the raw data, using methods like RFM analysis (RFM = recency, frequency, monetary). For example: how often has the user logged in? When did the user last log in?
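A small sketch of building such per-user features with pandas (the column names and the toy clickstream are assumptions about the data):

import pandas as pd

clicks = pd.DataFrame({
    'uid': [1, 1, 2, 2, 2],
    'timestamp': pd.to_datetime(['2016-01-01', '2016-01-05',
                                 '2016-01-02', '2016-01-03', '2016-01-04']),
})

now = clicks['timestamp'].max()
grouped = clicks.groupby('uid')['timestamp']
features = pd.DataFrame({
    'recency': (now - grouped.max()).dt.days,   # days since last activity
    'frequency': grouped.count(),               # number of recorded actions
})
print(features)   # one row per user, ready for KMeans after scaling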
So I have a data in the form [UID obj1 obj2..] x timestamp and I want to cluster this data in python using kmeans from sklearn. Where should I start? EDIT: So basically I'm trying to cluster users based on clickstream data, and classify them based on usage patterns.
0
1
2,413
0
35,295,661
0
0
0
0
1
false
0
2016-02-09T15:11:00.000
3
1
0
How to drop a pandas dataframe after storing in database
35,295,491
0.53705
python,pandas
del dataframe will unpollute your namespace and let the memory be freed, while dataframe = None will only free the memory (the name itself remains bound). Hope that helps!
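For completeness, a tiny sketch (the DataFrame here is a stand-in for whatever was written to the database):

import gc
import pandas as pd

df = pd.DataFrame({'a': range(10**6)})
# ... store df in the database ...

del df          # drop the name so the object can be reclaimed
gc.collect()    # optionally nudge the garbage collector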
How do I drop pandas dataframes after I have stored them in a database? I can only find ways to drop columns or rows from a dataframe, but how can I drop a complete dataframe to free my computer's memory?
0
1
86
0
35,321,747
0
0
0
0
1
true
1
2016-02-10T15:01:00.000
1
1
0
Modify kmeans algorithm for 1d array where order matters
35,318,602
1.2
python,cluster-analysis,data-mining,k-means
K-means is about minimizing the least squares. Among its largest drawbacks (there are many) is that you need to know k. Why do you want to inherit this drawback? Instead of hacking k-means into not ignoring the order, why don't you instead look at time series segmentation and change detection approaches, which are much more appropriate for this problem? E.g. split your time series if abs(x[i] - x[i-1]) > stddev, where stddev is the standard deviation of your data set, or the standard deviation of the last 10 samples. (In the above series, the standard deviation is about 3, so it would split as [1,2,2], [8,9], [0,0,0,1,1,1], because the change from 0 to 1 is not significant.)
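A sketch of that splitting rule on the series from the question (the global standard deviation as threshold is one possible choice; a trailing-window std would work the same way):

import numpy as np

x = [1, 2, 2, 8, 9, 0, 0, 0, 1, 1, 1]
threshold = np.std(x)               # about 3 for this series

segments = [[x[0]]]
for prev, cur in zip(x[:-1], x[1:]):
    if abs(cur - prev) > threshold:
        segments.append([cur])      # large jump: start a new segment
    else:
        segments[-1].append(cur)    # small change: extend the current one

print(segments)   # [[1, 2, 2], [8, 9], [0, 0, 0, 1, 1, 1]]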
I want to find groups in a one-dimensional array where order/position matters. I tried to use numpy's kmeans2, but it works only when I have numbers in increasing order. I have to maximize the average difference between neighbouring sub-arrays. For example: if I have the array [1,2,2,8,9,0,0,0,1,1,1] and I want to get 4 groups, the result should be something like [1,2,2], [8,9], [0,0,0], [1,1,1]. Is there a way to do it in better than O(n^k)? Answer: I ended up with a modified dendrogram, where I merge neighbours only.
0
1
112
0
35,329,441
0
0
0
0
1
false
1
2016-02-10T22:29:00.000
1
2
0
How do I calculate linear trend for a multi-dimensional array in Python
35,327,272
0.099668
python,arrays,scipy,trend
I would look into numpy.polyfit but I'm not sure what performance gain it has over scipy.stats.linregress. It's pretty fast from my experience. You might have to do some math on your own to get r and p values from residuals and covariance matrix.
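A sketch of how the whole grid could be fit in one call with numpy.polyfit (the array sizes are made up; r and p values would still need to be derived separately):

import numpy as np

data = np.random.randn(120, 50, 60)          # (time, lat, lon), made-up data
t = np.arange(data.shape[0])

flat = data.reshape(data.shape[0], -1)       # (time, lat*lon)
slope, intercept = np.polyfit(t, flat, 1)    # one linear fit per column, vectorised

slope = slope.reshape(data.shape[1:])        # back to (lat, lon)
intercept = intercept.reshape(data.shape[1:])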
I've got a 3d array of shape (time, latitude, longitude). I'd like to calculate the linear trend at each lon/lat point. I know I can simply loop over all points and use scipy.stats.linregress at each point. However, that gets quite slow for large arrays. The scipy function "detrend" can calculate and remove linear trends for n-dimensional arrays, and is really fast. But I can't find any method to just calculate the trends. Does anyone know of a fast way to calculate slope, intercept, r value and p value for each point on a large grid? Any help/suggestion is greatly appreciated! Cheers Joakim
0
1
1,476
0
35,330,365
0
0
0
0
1
false
5
2016-02-11T03:29:00.000
3
3
0
Why Numpy sometimes omits the dimension of an array
35,330,282
0.197375
python,arrays,numpy
Numpy does not omit the dimension of an array. It is a library built for multidimensional arrays (not just 1d or 2d arrays), so it makes very clear distinctions between arrays of different dimensions (it cannot assume that any array is just a degenerate form of a higher-dimensional one, because the number of possible dimensions is conceptually unbounded). An array with shape (81, 1) is a 2d array whose second dimension has length 1. An array with shape (81,) is just a 1d array. When you write C[:,0], you are referring to a column. Therefore, if you write C[:,0] = X, you are assigning X to one of the columns of C (which happens to be the only one), and so you are not changing the dimension of C. If you write C = X, then you are rebinding the name C to the 1d array X, and therefore the dimension of C changes.
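A short demonstration of the shapes involved and of reshaping without a zeros() buffer:

import numpy as np

X = np.ones(81)             # shape (81,): a 1-D array
Y = np.ones((81, 1))        # shape (81, 1): a 2-D column

print((X * Y).shape)                   # (81, 81), broadcasting a row against a column
print((X[:, np.newaxis] * Y).shape)    # (81, 1), elementwise as intended

C = X.reshape(81, 1)        # the same data viewed as a column, no zeros() needed
print(C.shape)              # (81, 1)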
I am a beginner user of Python. I used to work with Matlab intensively and am now shifting to Python, and I have a question about the dimension of an array. I import numpy, create an array X, and then use built-in functions, like sum, to play with my array. Eventually, when I check the dimension of X, X.shape outputs (81,). The number 81 is what I expected, but I also expected the 2nd dimension to be 1 rather than just omitted. This makes me uncomfortable, even though when I directly type X it outputs correctly, i.e., one column and the figures in X are all as expected. Then when I use another array Y, which literally has Y.shape output (81,1), and I type X*Y, which I expected to give an array of dimension (81,1), I instead see an array of dimension (81,81). I don't know what the underlying mechanism is that produces this result. The way I solve this problem is very clumsy: I first create a new array C = zeros((81,1)), so C literally has dimension (81,1), then I assign X to C by typing C[:,0] = X, and then C.shape = (81,1). Note that if I type C = X, then C.shape = (81,), which goes back to my problem. So I can solve my problem, but I am sure there is a better method, and I also don't understand why Python produces something like (81,), with the 2nd dimension omitted.
0
1
2,639
0
35,343,669
0
0
1
0
1
true
0
2016-02-11T14:09:00.000
1
1
0
Raspberry pi matrix multiplication
35,341,566
1.2
python,c,raspberry-pi,matrix-multiplication,raspberry-pi2
Mathematica is part of the standard Raspbian distribution. It should be able to multiply matrices.
What matrix multiplication library would you recommend for the Raspberry Pi 2? I'm thinking about BLAS or NumPy; what do you think? I'm also wondering if there is an external hardware module for matrix multiplication available. Thank you!
0
1
357