GUI and Desktop Applications
int64
0
1
A_Id
int64
5.3k
72.5M
Networking and APIs
int64
0
1
Python Basics and Environment
int64
0
1
Other
int64
0
1
Database and SQL
int64
0
1
Available Count
int64
1
13
is_accepted
bool
2 classes
Q_Score
int64
0
1.72k
CreationDate
stringlengths
23
23
Users Score
int64
-11
327
AnswerCount
int64
1
31
System Administration and DevOps
int64
0
1
Title
stringlengths
15
149
Q_Id
int64
5.14k
60M
Score
float64
-1
1.2
Tags
stringlengths
6
90
Answer
stringlengths
18
5.54k
Question
stringlengths
49
9.42k
Web Development
int64
0
1
Data Science and Machine Learning
int64
1
1
ViewCount
int64
7
3.27M
0
44,512,629
0
0
0
0
1
false
0
2017-06-11T18:29:00.000
0
1
0
Stanford NLP Output Formatting
44,487,269
0
java,python,stanford-nlp
If you are using the command line you can use -outputFormat text to get a human readable version or -outputFormat json to get a json version. In Java code you can use edu.stanford.nlp.pipeline.StanfordCoreNLP.prettyPrint() or edu.stanford.nlp.pipeline.StanfordCoreNLP.jsonPrint() to print out an Annotation.
Using the Stanford NLP, I want my text to go through lemmatization and coreference resolution. So for an input.txt: "Stanford is located in California. It is a great University, founded in 1891." I would want the output.txt: "Stanford be located in California. Stanford be a great University, found in 1891." I am also looking to get a table where the first column consists of the name-entities that were recognized in the text, and the second column is the name class they were identified as. Thus, for the example sentence above, it would be something like: 1st Column 2nd Column Stanford Location, Organization California Location Thus, in the table, the name-entities would occur only once. There's nothing I was able to find online about manipulating the default xml output or making direct changes to the input text file using the NLP. Could you give me any tips on how to go about this?
0
1
784
0
50,835,528
0
1
0
0
1
false
4
2017-06-12T03:35:00.000
0
2
0
How to loop over all but last column in pandas dataframe + indexing?
44,491,067
0
python,pandas,dataframe,indexing
A simple way would be to use slicing with iloc all but last column would be: df.iloc[:,:-1] all but first column would be: df.iloc[:,1:]
Let's day I have a pandas dataframe df where the column names are the corresponding indices, so 1, 2, 3,....len(df.columns). How do I loop through all but the last column, so one before len(df.columns). My goal is to ultimately compare the corresponding element in each row for each of the columns with that of the last column. Any code with be helpful! Thank you!
0
1
5,300
0
44,530,760
0
0
0
0
1
false
2
2017-06-12T13:20:00.000
0
1
0
Tensorflow major difference in loss between machines
44,500,526
0
python,machine-learning,tensorflow,keras,autoencoder
The dataset I used was a single .mat file, created by using scipy's savemat and loaded with loadmat. It was created on my Macbook and distributed via scp to the other machines. It turned out that the issue was with this .mat file (I do not know exactly what though). I have switched away from the .mat file and everything is fine now.
I have written a Variational Auto-Encoder in Keras using Tensorflow as backend. As optimizer I use Adam, with a learning rate of 1e-4 and batch size 16. When I train the net on my Macbook's CPU (Intel Core i7), the loss value after one epoch (~5000 minibatches) is a factor 2 smaller than after the first epoch on a different machine running Ubuntu. For the other machine I get the same result on both CPU and GPU (Intel Xeon E5-1630 and Nvidia GeForce GTX 1080). Python and the libraries I'm using have the same version number. Both machines use 32 bit floating points. If I use a different optimizer (eg rmsprop), the significant difference between machines is still there. I'm setting np.random.seed to eliminate randomness. My net outputs logits (I have linear activation in the output layer), and the loss function is tf.nn.sigmoid_cross_entropy_with_logits. On top of that, one layer has a regularizer (the KL divergence between its activation, which are params of a Gaussian distribution, and a zero mean Gauss). What could be the cause of the major difference in loss value?
0
1
203
0
45,828,029
0
0
0
0
1
false
1
2017-06-12T16:16:00.000
0
1
0
CNTK & Python: How to do reflect or symmetric padding instead of zero padding?
44,504,140
0
python,padding,cntk
There is a new pad operation (in master; will be released with CNTK 2.2) that supports reflect and symmetric padding.
In the cntk.layers package we have the option to do zero padding: pad (bool or tuple of bools, defaults to False) – if False, then the filter will be shifted over the “valid” area of input, that is, no value outside the area is used. If pad=True on the other hand, the filter will be applied to all input positions, and positions outside the valid region will be considered containing zero. Use a tuple to specify a per-axis value. But how can I use other types of padding like reflect or symmetric padding? Is it possible to integrate my own padding criterion in the cntk.layers? I'm a beginner in cntk and really grateful for every help.
0
1
177
0
44,608,170
0
1
0
0
1
true
0
2017-06-13T07:25:00.000
0
1
0
How to tokenize and tag those tokenized strings from my own custom dictionary using python nltk?
44,514,898
1.2
python-3.x,dictionary,nltk
I hope this is what you are looking for https://github.com/sujitpal/nltk-examples/tree/master/src/cener
I am new to python. I have to build a chatbot using python nltk -- my use case and expected output is this: I have a custom dictionary of some categories (shampoo,hair,lipstick,face wash), some brands (lakme,l'oreal,matrix), some entities ((hair concern: dandruff, hair falling out), (hair type: oily hair, dry hair), (skin type: fair skin, dark skin, dusky skin), etc.). I want to buy shampoo for hair falling out and dry hair or Show me best lipsticks for fair skin and office wear How do I extract values by category: shampoo, hair concern: hair falling out, hair type: dry hair I am using python nltk.
0
1
203
0
44,518,795
0
0
0
0
1
true
1
2017-06-13T07:58:00.000
2
1
0
Tensorflow resize_image_with_crop_or_pad
44,515,532
1.2
python,tensorflow
Let's suppose that you got images that's a [n, W, H] numpy nd-array, in which n is the number of images and W and H are the width and the height of the images. Convert images to a tensor, in order to be able to use tensorflow functions: tf_images = tf.constant(images) Convert tf_images to the image data format used by tensorflow (thus from n, W, H to n, H, W) tf_images = tf.transpose(tf_images, perm=[0,2,1]) In tensorflow, every image has a depth channell, thus altough you're using grayscale images, we have to add the depth=1 channell. tf_images = tf.expand_dims(tf_images, 2) Now you can use tf.image.resize_image_with_crop_or_pad to resize the batch (that how has a shape of [n, H, W, 1] (4-d tensor)): resized = tf.image.resize_image_with_crop_or_pad(tf_images,height,width)
I want to call tf.image.resize_image_with_crop_or_pad(images,height,width) to resize my input images. As my input images are all in form as 2-d numpy array of pixels, while the image input of resize_image_with_crop_or_pad must be 3-d or 4-d tensor, it will cause an error. What should I do?
0
1
2,981
0
44,535,536
0
0
0
0
1
false
1
2017-06-13T09:13:00.000
0
1
0
CNTK: The new clone do not match the cloned inputs of the clonee Block Function
44,517,122
0
python,runtime-error,cntk
This line cloneModel.parameters[0] = cloneModel.parameters[0]*4 tries to replace the first parameter with an expression (a CNTK graph) that multiplies the parameter by 4. I don't think that's the intent here. Rather, you want to do the above on the .value attribute of the parameter. Try this instead: cloneModel.parameters[0].value = cloneModel.parameters[0].value*4
I have trained a model in CNTK. Then I clone it and change some parameters; when I try to test the quantized model, I get RuntimeError: Block Function 'softplus: -> Unknown': Inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Parameter('alpha', [], []), Constant('Constant70740', [], [])' of the new clone do not match the cloned inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Constant('Constant70740', [], []), Parameter('alpha', [], [])' of the clonee Block Function. I have no idea what this error means or how to fix it. Do you have any ideas? P.S. I clone and edit the model by doing clonedModel = model.clone(cntk.ops.CloneMethod.clone) cloneModel.parameters[0].value = cloneModel.parameters[0].value*4 then when I try to use cloneModel I get that error above.
0
1
115
0
44,526,860
0
0
0
0
1
false
0
2017-06-13T16:10:00.000
0
1
0
Generate a random nonlinear function going through given points in python
44,526,642
0
python-2.7,nonlinear-functions
It looks more like a math problem to me here, since you ask "how to start". you know that a function's plot is just a lot of points (x, y) where y=f(x). And I know that for any two pairs of points (not vertically aligned), I have an infinity of second-degree functions (parabolas) going through these two points. they are given by y=ax^2+bx+c You want the parabola to go through your 2 points, so you can substitute x and y for each of the 2 points, that will give you 2 equations (where a, b and c are the unknown) . Then you can add a random point (I would suggest on the y-axis : (0; r) ). This will give you a third equation. With these 3 equations, solve for a, b and c. (in function of r) now, for any value of r, you will have some a, b and c that define a parabola going through your 2 known points. Once you understand how to solve this math problem, the python part is completely independant.
I have two given points (3.0, 3.2) and (7.0, 4.59) . My job here is very simple but I don't even know how to start. I just need to plot 4 nonlinear functions that go through these two points. Did somebody have a similar problem before? How does one even start?
0
1
591
0
44,553,514
0
0
0
0
1
false
0
2017-06-14T16:21:00.000
1
2
0
Find 'modern' nltk words corpus
44,550,004
0.099668
python,nltk,corpus
Rethink your approach. Any collection of English texts will have a "long tail" of words that you have not seen before. No matter how large a dictionary you amass, you'll be removing words that are not "non-English". And to what purpose? Leave them in, they won't spoil your classification. If your goal is to remove non-English text, do it at the sentence or paragraph level using a statistical approach, e.g. ngram models. They work well and need minimal resources.
I'm building a text classifier that will classify text into topics. In the first phase of my program as a part of cleaning the data, I remove all the non-English words. For this I'm using the nltk.corpus.words.words() corpus. The problem with this corpus is that it removes 'modern' English words such as Facebook, Instagram etc. Does anybody know another, more 'modern' corpus which I can replace or union with the present one? I prefer nltk corpus but I'm open to other suggestions. Thanks in advance
0
1
507
0
44,554,231
0
0
0
0
1
false
0
2017-06-14T20:25:00.000
0
1
0
Import a column from excel into python and run autocorrelation on it
44,554,135
0
python
You can use Pandas to import a CSV file with the pd.read_csv function.
I have a 1 column excel file. I want to import all the values it has in a variable x (something like x=[1,2,3,4.5,-6.....]), then use this variable to run numpy.correlate(x,x,mode='full') to get autocorrelation, after I import numpy. When I manually enter x=[1,2,3...], it does the job fine, but when I try to copy paste all the values in x=[], it gives me a NameError: name 'NO' is not defined. Can someone tell me how to go around doing this?
0
1
57
0
55,951,963
0
0
0
0
1
false
6
2017-06-14T22:06:00.000
1
1
0
Python multiprocessing tool vs Py(Spark)
44,555,485
0.197375
python,scikit-learn,multiprocessing,pyspark,cluster-computing
True, Spark does have the limitations you have mentioned, that is you are bounded in the functional spark world (spark mllib, dataframes etc). However, what it provides vs other multiprocessing tools/libraries is the automatic distribution, partition and rescaling of parallel tasks. Scaling and scheduling spark code becomes an easier task than having to program your custom multiprocessing code to respond to larger amounts of data + computations.
A newbie question, as I get increasingly confused with pyspark. I want to scale an existing python data preprocessing and data analysis pipeline. I realize if I partition my data with pyspark, I can't treat each partition as a standalone pandas data frame anymore, and need to learn to manipulate with pyspark.sql row/column functions, and change a lot of existing code, plus I am bound to spark mllib libraries and can't take full advantage of more mature scikit-learn package. Then why would I ever need to use Spark if I can use multiprocessing tools for cluster computing and parallelize tasks on existing dataframe?
0
1
2,667
0
44,561,811
0
0
0
0
1
false
0
2017-06-15T06:02:00.000
0
2
0
Equivalent method in Java for np.random.uniform()
44,559,717
0
java,python,random,distribution,uniform-distribution
Just use Random rnd = new Random(); rnd.nextInt(int boundary); rnd.nextDouble(double boundary); rnd.next(); If you want a list of randoms the best way is probably ti write your own little method, just use an array and fill it with a for loop.
I'm trying port a python code into java and am stuck at one place. Is there any method in java that is equivalent to numpy.random.uniform() in python?
1
1
674
0
45,875,329
0
1
0
0
1
false
1
2017-06-15T13:37:00.000
1
1
0
Problems using Tensorflow in PyCharm-keep getting ImportError
44,569,033
0.197375
python-3.x,tensorflow,pycharm,importerror
Since you have in the log Library not loaded: @rpath/libcublas.8.0.dylib I would say you've installed TF with CUDA support but didn't install CUDA libraries properly. Try to install TF CPU only.
for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. Process finished with exit code 1
0
1
223
0
44,624,518
0
0
0
0
1
false
2
2017-06-15T14:18:00.000
0
2
0
Memory Issues Using Keras Convolutional Network
44,569,938
0
python,memory,computer-vision,keras,convolution
While using train_generator(), you should also set the max_q_size parameter. It's set at 10 by default, which means you're loading in 10 batches while using only 1 (since train_generator() was designed to stream data from outside sources that can be delayed like network, not to save memory). I'd recommend setting max_q_size=1for your purposes.
I am very new to ML using Big Data and I have played with Keras generic convolutional examples for the dog/cat classification before, however when applying a similar approach to my set of images, I run into memory issues. My dataset consists of very long images that are 10048 x1687 pixels in size. To circumvent the memory issues, I am using a batch size of 1, feeding in one image at a time to the model. The model has two convolutional layers, each followed by max-pooling which together make the flattened layer roughly 290,000 inputs right before the fully-connected layer. Immediately after running however, Memory usage chokes at its limit (8Gb). So my questions are the following: 1) What is the best approach to process computations of such size in Python locally (no Cloud utilization)? Are there additional python libraries that I need to utilize?
0
1
841
0
44,589,585
0
0
0
0
1
false
0
2017-06-16T06:25:00.000
0
2
0
How to Save Plotted Graph Data into Output Data File in Python
44,582,210
0
python,csv,matplotlib,output
If you plotted the data using numpy array, you can use numpy.savetxt.
Using matplotlib.pyplot, I plotted multiple wave functions w.r.t time series, showing the waves in multiple vertical axes, and output the graph in jpg using savefig. I want to know the easiest way in which I can output all wave functions into a single output data file maybe in CSV or DAT in rows and columns.
0
1
2,703
0
44,651,045
0
0
0
0
1
true
0
2017-06-16T11:10:00.000
0
1
0
Accessing Input Layer data in Tensorflow/Keras
44,587,813
1.2
python,tensorflow,deep-learning,keras,keras-layer
At the time it seems to be impossible to actually access the data within the symbolic tensor. It also seems unlikely that such functionality will be added in the future since in the Tensorflow page it says: A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output. Keras allows for the creation of personalized layers. However, these are limited by the available backend operations. As such, it is simply not possible to access the batch data.
I am trying to replicate a neural network for depth estimation. The original authors have taken a pre-trained network and added between the fully connected layer and the convolutional layer a 'Superpixel Pooling Layer'. In this layer, the convolutional feature maps are upsampled and the features per superpixel are averaged. My problem is that in order to successfully achieve this, I need to calculate the superpixels per image. How can I access the data being used by keras/tensorflow during batch processing to perform SLIC oversegmentation? I considered splitting the tasks and working by pieces i.e. feed the images into the convolutional network. Process the outputs separately and then feed them into a fully connected layer. However, this makes further training of the network impossible.
0
1
486
0
44,609,082
0
0
0
0
1
false
4
2017-06-16T20:33:00.000
2
2
0
How to use model.fit_generator in keras
44,597,555
0.197375
python,deep-learning
They are useful for on-the-fly augmentations, which the previous poster mentioned. This however is not neccessarily restricted to generators, because you can fit for one epoch and then augment your data and fit again. What does not work with fit is using too much data per epoch though. This means that if you have a dataset of 1 TB and only 8 GB of RAM you can use the generator to load the data on the fly and only hold a couple of batches in memory. This helps tremendously on scaling to huge datasets.
When and how should I use fit_generator? What is the difference between fit and fit_generator?
0
1
3,009
0
44,600,391
0
0
0
0
1
true
2
2017-06-17T02:15:00.000
4
1
0
tf-idf : should I do normalization of documents length
44,600,170
1.2
python,normalization,word,tf-idf
Generally you want to do whatever gives you the best cross validated results on your data. If all you are doing to compare them is taking cosine similarity then you have to normalize the vectors as part of the calculation but it won't affect the score on account of varying document lengths. Many general document retrieval systems consider shorter documents to be more valuable but this is typically handled as a score multiplier after the similarities have been calculated. Oftentimes ln(TF) is used instead of raw TF scores as a normalization feature because differences between seeing a term 1and 2 times is way more important than the difference between seeing a term 100 and 200 times; it also keeps excessive use of a term from dominating the vector and is typically much more robust.
When using TF-IDF to compare Document A, B I know that length of document is not important. But compared to A-B, A-C in this case, I think the length of document B, C should be the same length. for example Log : 100 words Document A : 20 words Document B : 30 words Log - A 's TF-IDF score : 0.xx Log - B 's TF-IDF score : 0.xx Should I do normalization of document A,B? (If the comparison target is different, it seems to be a problem or wrong result)
0
1
2,290
0
44,638,348
0
0
0
0
1
true
0
2017-06-18T12:29:00.000
1
1
0
Is Torch7 defined-by-run like Pytorch?
44,614,977
1.2
python,lua,torch,pytorch
No, Torch7 use static computational graphs, as in Tensorflow. It is one of the major differences between PyTorch and Torch7.
Pytorch have Dynamic Neural Networks (defined-by-run) as opposed to Tensorflow which have to compile the computation graph before run. I see that both Torch7 and PyTorch depend on TH, THC, THNN, THCUNN (C library). Does Torch7 have Dynamic Neural Networks (defined-by-run) feature ?
0
1
79
0
44,617,764
0
0
0
0
1
true
0
2017-06-18T16:51:00.000
1
2
0
Python vertical stack not working
44,617,331
1.2
python,numpy
Since X is a numpy array, you can do X.shape instead of the repeated len. I expect it to show (13934, 74). I expect Y.shape to be (13934,). It's a 1d array, which is why Y[0] is a number, numpy.int64. And since it is 1d, transpose (swapping axes) doesn't do anything. (this isn't MATLAB where everything has at least 2 dimensions.) It looks like you want to create an array that has shape (13934, 75). To do that you'll need to add a dimension to Y. Y[:,None] is a concise way of doing that. The shape of that is (13934,1), which will concatenate with X. If that None syntax is puzzling, try, Y.reshape(-1,1) (or reshape(13934,1)).
I have a matrix X which has len(X) equal to 13934 and len(X[i]), for all i, equal to 74, and I have an array Y which has len(Y) equal to 13934 and len(Y[i]) equal to TypeError: object of type 'numpy.int64' has no len() for all i. When I try np.vstack((X,Y)) or result = np.concatenate((X, Y.T), axis=1) I get ValueError: all the input array dimensions except for the concatenation axis must match exactly What is the problem? When I print out Y it says array([1,...], dtype=int64) and when I print out X it says array([data..]) with no dtype. Could this be the problem? I tried converting them both to float32 by doing X.view('float32') and this did not help.
0
1
203
0
44,623,114
0
0
0
0
1
false
2
2017-06-18T23:14:00.000
1
2
0
Keras: Is there any way to "pop()" the top layers?
44,620,403
0.099668
python,tensorflow,keras
Keras pop() removes the last (aka top) layer, not the bottom one. I suggest you use model.summary() to print out the list of layers and than subsequently use pop() until only the necessary layers are left.
In Keras there is a feature called pop() that lets you remove the bottom layer of a model. Is there any way to remove the top layer of a model? I have a fully saved pre-trained Variational Autoencoder and am trying to only load the decoder (the bottom four layers). I am using Keras with a Tensorflow backend.
0
1
1,067
0
70,358,084
0
0
0
0
1
false
1
2017-06-19T16:34:00.000
0
3
0
Tensorflow - Euclidean Distance of Points in Matrix
44,635,695
0
python,tensorflow
Define a function to calculate distances calc_distance = lambda f, g: tf.norm(f-g, axis=1, ord='euclidean') Pass your n*m vector to the function, example: P = tf.constant([[1, 2], [3, 4], [2, 1], [0, 2], [2, 3]], dtype=tf.float32) distances = calc_distance(P[:-1:], P[1::]) print(distances) <tf.Tensor: shape=(4,), dtype=float32, numpy=array([2.8284273, 3.1622777, 2.2360682, 2.2360682], dtype=float32)>
I have a n*m tensor that basically represents m points in n dimensional euclidean space. I wanted calculate the pairwise euclidean distance between each consecutive point. That is, if my column vectors are the points a, b, c, etc., I want to calculate euc(a, b), euc(b, c), etc. The result would be an m-1 length 1D-tensor with each pairwise euclidean distance. Anyone know who this can be performed in TensorFlow?
0
1
2,501
0
49,830,684
0
0
0
0
2
false
5
2017-06-19T17:47:00.000
0
2
0
Training and validating on images with different resolution in Keras
44,636,877
0
python,validation,machine-learning,neural-network,keras
You need to make sure that your network input is of shape (None,None,3), which means your network accepts an input color image of arbitrary size.
I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validation data to be of the same size as the input. Instead, I'm hoping to be able to validate on entire images (not patches) so that I can validate on my entire validation set and compare the results to other methods I've used so far. One solution I found was to alternate between fit() and evaluate() each epoch. However, I was hoping to be able to observe these results using Tensorboard. Since evaluate() doesn't take in callbacks, this solution isn't ideal. Does anybody have a good way validating on full-resolution images while training on patches?
0
1
888
0
45,889,288
0
0
0
0
2
false
5
2017-06-19T17:47:00.000
0
2
0
Training and validating on images with different resolution in Keras
44,636,877
0
python,validation,machine-learning,neural-network,keras
You could use fit generator instead of fit and provide a different generator for validation set. As long as the rest of your network is agnostic to the image size, (e.g, fully convolutional layers), you should be fine.
I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validation data to be of the same size as the input. Instead, I'm hoping to be able to validate on entire images (not patches) so that I can validate on my entire validation set and compare the results to other methods I've used so far. One solution I found was to alternate between fit() and evaluate() each epoch. However, I was hoping to be able to observe these results using Tensorboard. Since evaluate() doesn't take in callbacks, this solution isn't ideal. Does anybody have a good way validating on full-resolution images while training on patches?
0
1
888
0
44,937,787
0
0
0
0
1
false
0
2017-06-19T20:10:00.000
0
1
0
TensorFlow extracting columns
44,639,106
0
python,tensorflow
I can't comment on the question because of low rep, so using an answer instead. Can you clarify your question a bit, maybe with a small concrete example using very small tensors? What are the "columns" you are referring to? You say that you want to keep 50 columns (presumably 50 numbers) per image. If so, the (10, 50) shape seems like what you want - it has 50 numbers for each image in the batch. The (10, 50, 20, 3) shape you mention would allocate 50 numbers to each "image_column x channel". That is 20*3*50 = 3000 numbers per image. How do you want to construct them from the 50 that you have? Also, can you give a link to tf.batch_nd(). I did not find anything similar and relevant.
I have a tensor of shape (10, 100, 20, 3). Basically, it can be thought of as a batch of images. So the image height is 100 and width is 20 and channel depth is 3. I have run some computations to generate a set of 10*50 indices corresponding to 50 columns I would like to keep per image in the batch. The indices are stored in a tensor of shape (10, 50). I would like to end up with a tensor of shape (10, 50, 20, 3). I have looked into tf.batch_nd() but I can't figure out the semantics for how indices are actually used. Any thoughts?
0
1
78
0
44,662,310
0
0
0
0
1
false
3
2017-06-20T20:18:00.000
0
1
0
Pip and/or installing the .pyd of library to site-packages leads "import" of library to DLL load faliure
44,662,278
0
python,opencv,dll,pip
Use the zip, extract it, and run sudo python3 setup.py install if you are on Mac or Linux. If on Windows, open cmd or Powershell in Admin mode and then run py -3.6 setup.py install, after cding to the path of the zip. If on Linux, you also have to run sudo apt-get install python-opencv. Maybe on Mac you have to use Homebrew, but I am not sure.
I attempted to install Opencv for python two ways, A) Downloading the opencv zip, then copying cv2.pyd to /Python36/lib/site-packages. B) undoing that, and using "pip install opencv-python" /lib/site-packages is definitly the place where python is loading my modules, as tensorflow and numpy are there, but any attempt to "import cv2" leads to "ImportError: DLL Load Failed: The specified module could not be found" I am at a loss, any help appreciated. And yes i have tried reinstalling VC redist 2015
0
1
4,066
0
44,698,955
0
0
0
0
1
true
1
2017-06-22T11:51:00.000
4
1
0
What are the limits of vectorization?
44,698,632
1.2
python,arrays,numpy,vectorization
For elementwise multiplication it does not matter, and flattening the array does not change a thing. Remember: Arrays, no matter their dimension, are saved linearly in RAM. If you flatten the array before multiplication, you are only changing the way NumPy presents the data to you, the data in RAM is never touched. Multiplying the 1D or the 100D data is exactly the same operation.
I'm working with multidimensional matrices (~100 dimensions or so, see below why). My matrix are NumPy arrays and I mainly multiply them with each other. Does NumPy care (with respect to speed or accuracy) in what form I ask it to multiply these matrices? I.e. would it make sense to reshape them into a linear array before performing the multiplication? I did some own test with random matrices, and it seemed to be irrelevant, but would like to have some theoretical insight into this. I guess there is a limit to how large matrices can be and how large they can be, before Python becomes slow handling them. Is there a way to find this limit? I have several species (biology) and want to assign each of these species a fitness. Then I want to see how these different finesses affect the outcome of competition. And I want to check for all possible fitness combinations of all species. My matrices have many dimensions, but all dimensions are quite small.
0
1
115
0
45,310,833
0
0
0
0
1
true
0
2017-06-22T12:49:00.000
1
2
0
XGBoost - Feature selection using XGBRegressor
44,699,889
1.2
python,xgboost
Finally I have solved this issue by: model.booster().get_score(importance_type='weight')
I am trying to perform features selection (for regression tasks) by XGBRegressor(). More precisely, I would like to know: If there is something like the method feature_importances_, utilized with XGBClassifier, which I could use for regression. If the XGBoost's method plot_importance() is reliable when it is used with XGBRegressor()
0
1
2,264
0
61,737,353
0
0
0
0
1
false
3
2017-06-23T01:31:00.000
0
1
0
Uninstall/upgrade tensorflow failed: __init__.cpython-35.pyc not found
44,711,726
0
python,tensorflow,installation
Posting my own answer (alternative) here in case someone overlooked comment: I forced to delete the package in the python/lib/site-packages/ and reinstalled the tensorflow-gpu, and it seems working well. Though I solve this problem via such alternate I would still like to know the root cause and long-term fix for this.
I previously installed tensorflow-gpu 0.12.0rc0 with Winpython-3.5.2, and when I tried to upgrade or uninstall it to install the newer version using both the Winpython Control Panel and pip, I got the following error: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'c:\\users\\moliang\\downloads\\winpython-64bit-3.5.2.3qt5\\python-3.5.2.amd64\\lib\\site-packages\\tensorflow\\contrib\\session_bundle\\testdata\\saved_model_half_plus_two\\variables\\__pycache__\\__init__.cpython-35.pyc' I installed the tensorflow-gpu 0.12.0rc0 through Winpython-3.5.2 pip, and the __init__.cpython-35.pyc does exist at the correct directory. So I don't understand how this error could happen? And it prevents me from getting the new version.
0
1
508
0
44,736,370
0
0
0
0
1
true
2
2017-06-23T08:14:00.000
3
1
0
KernelPCA produces NaNs
44,716,368
1.2
python,machine-learning,scikit-learn
The NaNs are produced because the eigenvalues (self.lambdas_) of the input matrix are negative which provoke the ValueError as the square root does not operate with negative values. The issue might be overcome by setting KernelPCA(remove_zero_eig=True, ...) but such action would not preserve the original dimensionality of the data. Using this parameter is a last resort as the model's results may be skewed. Actually, it has been stated negative eigenvalues indicate a model misspecification, which is obviously bad. Possible solution for evading that fact without corroding the dimensionality of the data with remove_zero_eig parameter might be reducing the quantity of the original features, which are greatly correlated. Try to build the correlation matrix and see what those values are. Then, try to omit the redundant features and fit the KernelPCA() again.
After applying KernelPCA to my data and passing it to a classifier (SVC) I'm getting the following error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). and this warning while performing KernelPCA: RuntimeWarning: invalid value encountered in sqrt X_transformed = self.alphas_ * np.sqrt(self.lambdas_) Looking at the transformed data I've found several nan values. It makes no difference which kernel I'm using. I tried cosine, rbf and linear. But what's interesting: My original data only contains values between 0 and 1 (no inf or nan), it's scaled with MinMaxScaler Applying standard PCA works, which I thought to be the same as KernelPCA with linear kernel. Some more facts: My data is high dimensional ( > 8000 features) and mostly sparse. I'm using the newest version of scikit-learn, 18.2 Any idea how to overcome this and what could be the reason?
0
1
622
0
45,011,256
0
0
0
0
1
true
1
2017-06-23T14:08:00.000
0
1
0
VGG16 Training new dataset: Why VGG16 needs label to have shape (None,2,2,10) and how do I train mnist dataset with this network?
44,723,464
1.2
python,machine-learning,deep-learning,keras
Low accuracy is caused by the problem in layers. I just modified my network and obtained .7496 accuracy.
I was trying to train CIFAR10 and MNIST dataset on VGG16 network. In my first attempt, I got an error which says shape of input_2 (labels) must be (None,2,2,10). What information does this structure hold in 2x2x10 array because I expect input_2 to have shape (None, 10) (There are 10 classes in both my datasets). I tried to expand dimensions of my labels from (None,10) to (None,2,2,10). But I am sure this is not the correct way to do it since I obtain a very low accuracy (around 0.09) (I am using keras, Python3.5)
0
1
212
0
44,740,700
0
0
0
0
1
true
0
2017-06-24T19:25:00.000
2
1
0
how to preserve number of records in word2vec?
44,740,161
1.2
python-3.x,nlp,word2vec
If you are splitting each entry into a list of words, that's essentially 'tokenization'. Word2Vec just learns vectors for each word, not for each text example ('record') – so there's nothing to 'preserve', no vectors for the 45,000 records are ever created. But if there are 26,000 unique words among the records (after applying min_count), you will have 26,000 vectors at the end. Gensim's Doc2Vec (the ' Paragraph Vector' algorithm) can create a vector for each text example, so you may want to try that. If you only have word-vectors, one simplistic way to create a vector for a larger text is to just add all the individual word vectors together. Further options include choosing between using the unit-normed word-vectors or raw word-vectors of many magnitudes; whether to then unit-norm the sum; and whether to otherwise weight the words by any other importance factor (such as TF/IDF). Note that unless your documents are very long, this is a quite small training set for either Word2Vec or Doc2Vec.
I have 45000 text records in my dataframe. I wanted to convert those 45000 records into word vectors so that I can train a classifier on the word vector. I am not tokenizing the sentences. I just split the each entry into list of words. After training word2vec model with 300 features, the shape of the model resulted in only 26000. How can I preserve all of my 45000 records ? In the classifier model, I need all of those 45000 records, so that it can match 45000 output labels.
0
1
277
0
51,706,173
0
0
0
0
1
false
32
2017-06-24T22:39:00.000
4
3
0
pandas timestamp series to string?
44,741,587
0.26052
python,arrays,pandas,vector
Following on from VinceP's answer, to convert a datetime Series in-place do the following: df['Column_name']=df['Column_name'].astype(str)
I am new to python (coming from R), and I am trying to understand how I can convert a timestamp series in a pandas dataframe (in my case this is called df['timestamp']) into what I would call a string vector in R. is this possible? How would this be done? I tried df['timestamp'].apply('str'), but this seems to simply put the entire column df['timestamp'] into one long string. I'm looking to convert each element into a string and preserve the structure, so that it's still a vector (or maybe this a called an array?)
0
1
102,326
0
44,765,840
0
0
0
0
1
false
0
2017-06-26T15:55:00.000
0
2
0
supervised tag suggestion for documents
44,763,743
0
python,machine-learning,nlp,text-classification
I'm currently working on something similar, besides what @Joonatan Samuel suggested I would encourage you to do careful preprocessing and considerations. If you want two or more tags for documents you could train several model : one model per tag. You need to consider if there will be enough cases for each model (tag) If you have a lot of tags, you could run into a problem with document-tag cases like above. Stick to most common tag prediction don't try to predict all tags.
I have thousands of documents with associated tag information. However i also have many documents without tags. I want to train a model on the documents WITH tags and then apply the trained classifier to the UNTAGGED documents; the classifier will then suggest the most appropriate tags for each UNTAGGED document. I have done quite a lot of research and there doesn't seem to be a SUPERVISED implementation to document tag classification. I know NLTK, gensim, word2vec and other libraries will be useful for this problem. I will be coding the project in Python. Any help would be greatly appreciated.
0
1
461
0
44,777,825
0
0
0
0
1
true
0
2017-06-27T09:30:00.000
0
1
0
sklearn::TypeError: Wrong type for parameter `n_values`. Expected 'auto', int or array of ints, got
44,776,786
1.2
python,arrays,numpy,scikit-learn
n_values should only contain domain sizes for categorical values completely skipping out the non-categorical columns in the data matrix. Therefore if [True, False, True] format is used, the size should correspond to the number of True values in the array or if indices are used the two arrays should be of the same size. So there should be no None values but also no 0s, -1s or any other ways to encode real-valued variables in the n_values array.
I am passing in a hardcoded list / tuple (tried both) when initialising the OneHotEncoder and I get this error during fit_transform , not using numpy types anywhere (well except for the data matrix itself). The only thing is that some of the values in that array are None because I am also using categorical_features to specify a mask (as in some of the features are real-valued and I want them to stay real-valued. My n_values looks like [1, 2, 3, None, 5] or (1, 2, 3, None, 5) and my categorical_features looks like [0, 1, 2, 4] though I have also tried: [True, True, True, False, True]. The documentation does not present any actual examples with the mask on. EDIT: So, I tried replacing None with zeroes and this issue went away but now I get: ValueError: Shape mismatch: if n_values is an array, it has to be of shape (n_features,). Whether I wrap my mask array with np.array or not (and when I do the shape is indeed the same as (n_features,)) I get this same error (though interestingly it does not complain about it being a numpy array anymore as long as there are no None values in it.
0
1
1,561
0
45,102,844
0
0
0
0
1
false
1
2017-06-27T12:21:00.000
1
2
0
Generating np.einsum evaluation graph
44,780,195
0.099668
python,numpy,scipy,numpy-einsum
First, why do you need B to be 2-dim? Why not just np.einsum('ab , b -> a', A, B)? Now the actual question: It's not exactly what you want, but by using smart choices for A and B you can make this visible. e.g. A = [[1,10],[100,1000]] and B = [1,2], which gives np.einsum('ab , b -> a', A, B) = [21,2100] and it's quite obvious what has happend. More general versions are a little bit more complicated (but hopefully not necessary). The idea is to use different potences of primes (especially useful are 2 and 5, as they align to easy readyble number in dezimal system). In case you want to sum over more than one dimesion you might consider taking primes (2,3,5,7 etc) and then convert the result into another number system. In case you sume over two dims-> 30-ary system 3 dims (2,3,5,7)-> 210-ary system
I was planning to teach np.einsum to colleagues, by hoping to show how it would be reduced to multiplications and summations. So, instead of numerical data, I thought to use alphabet chars. in the arrays. Say, we have A (2X2) as [['a', 'b'], ['c', 'd']] and B (2X1) as [['e'], ['f']] We could use einsum to create a matrix C, say like: np.einsum('ab , bc -> ac', A, B). What I'd like to see is: it return the computation graph: something like: a*c + ..., etc. Ofcourse, np.einsum expects numerical data and would give an error if given the above code to run.
0
1
193
0
44,783,384
0
0
0
0
1
false
0
2017-06-27T14:29:00.000
1
1
0
sklearn warning message whenever I run tensorflow on terminal
44,782,916
0.197375
python,scikit-learn
It is not an error message, it is simply a warning that a module cross_validation has been transmitted from sklearn.cross_validation to sklearn.model_selection.. It is not a problem at all. If you are still eager to fix it, then you should find out what snippet of code tries to import sklearn.cross_validation and alter it to sklearn.model_selection. If you check both sklearn.cross_validation and sklearn.model_selection, you will see that they contain the same methods. Again, it is not an error.
Every time I run a tensorflow file on terminal, this warning pops up before the file runs. I have checked my version of sklearn and it is 0.18.1. How do you make this message to not appear? Thank you. anaconda2/envs/tensorflow/lib/python2.7/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20. "This module will be removed in 0.20.", DeprecationWarning)
0
1
187
0
46,108,215
0
0
0
0
1
false
0
2017-06-28T05:34:00.000
3
1
0
Installing seaborn on Pyspark
44,794,347
0.53705
python-2.7,pyspark,seaborn
Generally, for plotting, you need to move all the data points to the master node (using functions like collect() ) before you can plot. PLotting is not possible while the data is still distributed in memory.
I am using Apache Pyspark with Jupyter notebook. In one of the machine learning tutorials, the instructors were using seaborn with pyspark. How can we install and use third party libraries like Seaborn on the Apache Spark (rather Pyspark)?
0
1
824
0
44,799,700
0
0
0
0
1
true
0
2017-06-28T09:53:00.000
3
1
0
When run a tensorflow session in iPython, GPU memory usage remain high when exiting iPython
44,799,200
1.2
python,tensorflow,ipython
Control+Z doesn't quit a process, it stops it (use fg to bring it back up). If some computation is running in a forked process, it may not stop with the main process (I'm no OS guy, this is just my intuition). In any case, properly quitting iPython (e.g. by Control+D or by running exit()) should solve the problem. If you need to interrupt a running command, first hit Control+C, then run exit().
I think it's some sort of bug. The problem is quite simple: launch ipython import Tensorflow and run whatever session type nvidia-smi in bash (see really high gpu memory usage, related process name, etc) control+z quit ipython type nvidia-smi in bash (still! really high GPU memory usage, and the same process name, strangely, these processes are not killed!) I guess iPython failed to clean Tensorflow variables or graphs when exiting. Is there any way I can clean the GPU memory without restart my machine? System: Ubuntu 14.04 Python: Python3.5 IPython: IPython6.0.0
0
1
449
0
44,814,853
0
0
0
1
1
true
0
2017-06-28T13:34:00.000
1
1
0
Python/Pandas/BigQuery: How to efficiently update existing tables with a lot of new time series data?
44,804,051
1.2
python,pandas,google-bigquery,google-cloud-platform,gsutil
Consider breaking up your data into daily tables (or partitions). Then you only need to upload the CVS from the current day. The script you have currently defined otherwise seems reasonable. Extract your new day of CSVs from your source of timeline data. Gzip them for fast transfer. Copy them to GCS. Load the new CVSs into the current daily table/partition. This avoids the need to delete existing tables and reduces the amount of data and processing that you need to do. As a bonus, it is easier to backfill a single day if there is an error in processing.
I have one program that downloads time series (ts) data from a remote database and saves the data as csv files. New ts data is appended to old ts data. My local folder continues to grow and grow and grow as more data is downloaded. After downloading new ts data and saving it, I want to upload it to a Google BigQuery table. What is the best way to do this? My current work-flow is to download all of the data to csv files, then to convert the csv files to gzip files on my local machine and then to use gsutil to upload those gzip files to Google Cloud Storage. Next, I delete whatever tables are in Google BigQuery and then manually create a new table by first deleting any existing table in Google BigQuery and then creating a new one by uploading data from Google Cloud Storage. I feel like there is room for significant automation/improvement but I am a Google Cloud newbie. Edit: Just to clarify, the data that I am downloading can be thought of downloading time series data from Yahoo Finance. With each new day, there is fresh data that I download and save to my local machine. I have to uploading all of the data that I have to Google BigQUery so that I can do SQL analysis on it.
0
1
642
0
44,804,374
0
0
0
0
1
true
0
2017-06-28T13:42:00.000
1
1
0
Pandas: Reading CSV files with different delimiters - merge error
44,804,235
1.2
python,csv,pandas,merge,delimiter
In short : no, you do not need similar delimiters within your files to merge pandas Dataframes - in fact, once data has been imported (which requires setting the right delimiter for each of your files), the data is placed in memory and does not keep track of the initial delimiter (you can see this by writing down your imported dataframes to csv using the .to_csv method : the delimiter will always be , by default). Now, in order to understand what is going wrong with your merge, please post more details about your data and the code your are using to perform the operation.
I have 4 separate CSV files that I wish to read into Pandas. I want to merge these CSV files into one dataframe. The problem is that the columns within the CSV files contain the following: , ; | and spaces. Therefore I have to use different delimiters when reading the different CSV files and do some transformations to get them in the correct format. Each CSV file contains an 'ID' column. When I merge my dataframes, it is not done correctly and I get 'NaN' in the column which has been merged. Do you have to use the same delimiter in order for the dataframes to merge properly?
0
1
891
0
44,805,286
0
1
0
0
1
false
19
2017-06-28T14:11:00.000
1
2
0
numpy: "size" vs. "shape" in function arguments?
44,804,965
0.099668
python,numpy
Because you are working with a numpy array, which was seen as a C array, size refers to how big your array will be. Moreover, if you can pass np.zeros(10) or np.zeros((10)). While the difference is subtle, size passed this way will create you a 1D array. You can give size=(n1, n2, ..., nn) which will create an nD array. However, because python users want multi-dimensional arrays, array.reshape allows you to get from 1D to an nD array. So, when you call shape, you get the N dimension shape of the array, so you can see exactly how your array looks like. In essence, size is equal to the product of the elements of shape. EDIT: The difference in name can be attributed to 2 parts: firstly, you can initialise your array with a size. However, you do not know the shape of it. So size is only for total number of elements. Secondly, how numpy was developed, different people worked on different parts of the code, giving different names to roughly the same element, depending on their personal vision for the code.
I noticed that some numpy operations take an argument called shape, such as np.zeros, whereas some others take an argument called size, such as np.random.randint. To me, those arguments have the same function and the fact that they have different names is a bit confusing. Actually, size seems a bit off since it really specifies the .shape of the output. Is there a reason for having different names, do they convey a different meaning even though they both end up being equal to the .shape of the output?
0
1
17,837
0
44,830,094
0
0
0
0
1
false
2
2017-06-29T15:09:00.000
0
2
0
find non-monotonical rows in dataframe
44,828,905
0
python,pandas
"Quick" in terms of what resource? If you want programming ease, then simply make a new frame resulting from subtracting adjacent columns. Any entry of zero or negative value is your target. If you need execution speed, do note that adjacent differences are still necessary: all you can save is the overhead of finding multiple violations in a given row. However, unless you have a particularly wide data frame, it's likely that you'll lose more in short-circuiting than you'll gain by the saved subtractions. Also note that a processor with matrix operations or other parallelism will be fast enough with the whole data frame, that the checking will cost you significant time.
I have a pandas dataframe with Datetime as index. The index is generally monotonically increasing however there seem to be a few rows don't follow this tread. Any quick way to identify these unusual rows?
0
1
759
0
44,835,396
0
0
0
0
2
true
1
2017-06-29T21:16:00.000
2
3
0
Reading file with huge number of columns in python
44,835,126
1.2
python,file-handling
csv is very inefficient for storing large datasets. You should convert your csv file into a better suited format. Try hdf5 (h5py.org or pytables.org), it is very fast and allows you to read parts of the dataset without fully loading it into memory.
I have a huge file csv file with around 4 million column and around 300 rows. File size is about 4.3G. I want to read this file and run some machine learning algorithm on the data. I tried reading the file via pandas read_csv in python but it is taking long time for reading even a single row ( I suspect due to large number of columns ). I checked few other options like numpy fromfile, but nothing seems to be working. Can someone please suggest some way to load file with many columns in python?
0
1
1,206
0
44,835,474
0
0
0
0
2
false
1
2017-06-29T21:16:00.000
3
3
0
Reading file with huge number of columns in python
44,835,126
0.197375
python,file-handling
Pandas/numpy should be able to handle that volume of data no problem. I hope you have at least 8GB of RAM on that machine. To import a CSV file with Numpy, try something like data = np.loadtxt('test.csv', dtype=np.uint8, delimiter=',') If there is missing data, np.genfromtext might work instead. If none of these meet your needs and you have enough RAM to hold a duplicate of the data temporarily, you could first build a Python list of lists, one per row using readline and str.split. Then pass that to Pandas or numpy, assuming that's how you intend to operate on the data. You could then save it to disk in a format for easier ingestion later. hdf5 was already mentioned and is a good option. You can also save a numpy array to disk with numpy.savez or my favorite the speedy bloscpack.(un)pack_ndarray_file.
I have a huge file csv file with around 4 million column and around 300 rows. File size is about 4.3G. I want to read this file and run some machine learning algorithm on the data. I tried reading the file via pandas read_csv in python but it is taking long time for reading even a single row ( I suspect due to large number of columns ). I checked few other options like numpy fromfile, but nothing seems to be working. Can someone please suggest some way to load file with many columns in python?
0
1
1,206
0
60,381,721
0
1
0
0
1
false
3
2017-06-29T21:35:00.000
1
2
0
Pandas - List of Dataframe Names?
44,835,358
0.099668
python,list,pandas
%who_ls DataFrame This is all dataframes loaded in memory as a list all_df_in_mem = %who_ls DataFrame
I've done a lot of searching and can't find anything related. Is there a built-in function to automatically generate a list of Pandas dataframes that I've created? For example, I've created three dataframes: df1 df2 df3 Now I want a list like: df_list = [df1, df2, df3] so I can iterate through it.
0
1
5,562
0
44,935,654
0
0
0
0
2
true
2
2017-06-29T22:49:00.000
1
2
0
Installing rpy2 to work with R 3.4.0 on OSX
44,836,123
1.2
r,conda,python-3.6,rpy2,libiconv
I uninstalled rpy2 and reinstalled with --verborse. I then found ld: warning: ignoring file /opt/local/lib/libpcre.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libpcre.dylib ld: warning: ignoring file /opt/local/lib/liblzma.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/liblzma.dylib ld: warning: ignoring file /opt/local/lib/libbz2.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libbz2.dylib ld: warning: ignoring file /opt/local/lib/libz.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libz.dylib ld: warning: ignoring file /opt/local/lib/libiconv.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libiconv.dylib ld: warning: ignoring file /opt/local/lib/libicuuc.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libicuuc.dylib ld: warning: ignoring file /opt/local/lib/libicui18n.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libicui18n.dylib ld: warning: ignoring file /opt/local/Library/Frameworks/R.framework/R, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/Library/Frameworks/R.framework/R So I supposed the reason is the architecture incompatibility of the libiconv in opt/local, causing make to fall back onto the outdate libiconv in usr/lib. This is strange because my machine should be running on x86_64 not i386. I then tried export ARCHFLAGS="-arch x86_64" and reinstalled libiconv using port. This resolved the problem.
I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8). I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6) as my interpreter, I get the error: Traceback (most recent call last): File "/Users/vincentliu/PycharmProjects/magic/rpy2tester.py", line 1, in from rpy2 import robjects File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/init.py", line 16, in import rpy2.rinterface as rinterface File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/init.py", line 92, in from rpy2.rinterface._rinterface import (baseenv, ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libiconv.2.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so Reason: Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Which first seemed like a problem caused by Anaconda, and so I remove all Anaconda-related files but the problem persists. I then uninstalled rpy2, reinstalled Anaconda and used conda install rpy2 to install, which also installs R version 3.3.2 through Anaconda. I can then change the interpreter to /anaconda/bin/python and can use rpy2 fine, but I couldn't use the R packages I care about because they need R version 3.4 and higher. Apparently, the oldest version Anaconda can install is 3.3.2, so is there any way I can use rpy2 with R version 3.4? I can see two general solutions to this problem. One is to install rpy2 through conda and then somehow change its depending R to the 3.4 one in the system. Another solution is to resolve the error Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 After much struggling, I've found no good result with either.
0
1
1,095
0
53,839,320
0
0
0
0
2
false
2
2017-06-29T22:49:00.000
0
2
0
Installing rpy2 to work with R 3.4.0 on OSX
44,836,123
0
r,conda,python-3.6,rpy2,libiconv
I had to uninstall the version pip installed and install from source (python setup.py install) using the download from https://bitbucket.org/rpy2/rpy2/downloads/. FWIW, not using Anaconda at all either.
I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8). I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources. However, when I use pip3 install rpy2 to install, and use the python 3.6.1 (in /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6) as my interpreter, I get the error: Traceback (most recent call last): File "/Users/vincentliu/PycharmProjects/magic/rpy2tester.py", line 1, in from rpy2 import robjects File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/__init__.py", line 16, in import rpy2.rinterface as rinterface File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/__init__.py", line 92, in from rpy2.rinterface._rinterface import (baseenv, ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libiconv.2.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so Reason: Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Which at first seemed like a problem caused by Anaconda, so I removed all Anaconda-related files, but the problem persists. I then uninstalled rpy2, reinstalled Anaconda and used conda install rpy2 to install, which also installs R version 3.3.2 through Anaconda. I can then change the interpreter to /anaconda/bin/python and can use rpy2 fine, but I couldn't use the R packages I care about because they need R version 3.4 and higher. Apparently, the latest version Anaconda can install is 3.3.2, so is there any way I can use rpy2 with R version 3.4? I can see two general solutions to this problem. One is to install rpy2 through conda and then somehow point it at the system's R 3.4. The other is to resolve the error Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0. After much struggling, I've found no good result with either.
0
1
1,095
0
44,852,474
0
0
0
0
1
false
0
2017-06-30T09:30:00.000
0
1
0
How to get the random forest threshold from an h2o random forest object
44,843,175
0
python,random-forest,h2o
You could download and take a look at the POJO, which lists all the thresholds used for the model: h2o.download_pojo(model, path=u'', get_jar=True, jar_name=u'')
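A hedged sketch of that call, assuming `model` is an already-trained H2ORandomForestEstimator on a running cluster (the output directory name here is made up):

import h2o

# Writes the model's .java POJO source (plus h2o-genmodel.jar) into ./pojo_out;
# the generated source spells out every split feature and threshold per tree,
# which can then be ported to C++ by hand.
h2o.download_pojo(model, path="./pojo_out", get_jar=True)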
I have an h2o random forest in Python. How to extract for each tree the threshold of each features ? My aim is to implement this random forest in c++ Thanks !
0
1
235
0
44,855,284
0
0
0
0
1
false
1
2017-06-30T17:37:00.000
1
2
0
using tensorflow fill method to create a tensor of certain datatype
44,852,137
0.099668
python,numpy,tensorflow,tensor
You can either provide a fill value of the datatype you want your resulting tensor to be, or cast the tensor afterwards. tf.fill((3, 3), 0.0) # will be a float32 tf.cast(tf.fill((3, 3), 0), tf.float32) # also float32 The first one is better because it uses fewer operations in the graph.
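A minimal sketch of both options under the TF 1.x API used in this answer:

import tensorflow as tf

a = tf.fill((3, 3), 0.0)                                  # float32, inferred from the value
b = tf.fill((3, 3), tf.constant(0.0, dtype=tf.float64))   # float64 via a typed fill value
c = tf.cast(tf.fill((3, 3), 0), tf.float16)               # int32 fill, then an extra cast op

with tf.Session() as sess:
    print(sess.run([a, b, c]))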
I am trying to use the tf.fill() method to create a tensor of different data types(float16,float32,float64) similar to what you can do with numpy.full(). would tf.constant() be a suitable substitution? or should I create my fill values to be of the data type I want them to be then plug it into the value holder inside tf.fill()
0
1
2,486
0
44,857,779
0
0
0
0
1
true
0
2017-07-01T02:56:00.000
1
1
0
Using a custom threshold value with tf.contrib.learn.DNNClassifier?
44,856,964
1.2
python,machine-learning,tensorflow,neural-network
The tf.contrib.learn.DNNClassifier class has a method called predict_proba which returns the probabilities belonging to each class for the given inputs. Then you can use something like tf.round(prob + (0.5 - thres)) for binary thresholding at a custom cutoff thres (tf.round on its own corresponds to the default 0.5 threshold).
I'm working on a binary classification problem and I'm using the tf.contrib.learn.DNNClassifier class within TensorFlow. When invoking this estimator for only 2 classes, it uses a threshold value of 0.5 as the cutoff between the 2 classes. I'd like to know if there's a way to use a custom threshold value since this might improve the model's accuracy. I've searched all around the web and apparently there isn't a way to do this. Any help will be greatly appreciated, thank you.
0
1
476
0
44,858,027
0
0
0
0
1
false
4
2017-07-01T06:19:00.000
3
4
0
how to get random pixel index from binary image with value 1 in python?
44,857,970
0.148885
python,random,pixel
I'd suggest making a list of coordinates of all non-zero pixels (by checking all pixels in the image), then using random.shuffle on the list and taking the first 100 elements.
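A small sketch of that approach; the toy image below is an assumption standing in for the real 2000x2000 array:

import random
import numpy as np

img = np.zeros((2000, 2000), dtype=np.uint8)   # stand-in for the real binary image
img[np.random.randint(0, 2000, 500), np.random.randint(0, 2000, 500)] = 1

# All (row, col) coordinates whose value is 1, shuffled; keep the first 100.
coords = list(zip(*np.nonzero(img)))
random.shuffle(coords)
sample = coords[:100]
print(sample[:5])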
I have a binary image of large size (2000x2000). In this image most of the pixel values are zero and some of them are 1. I need to get only 100 randomly chosen pixel coordinates with value 1 from image. I am beginner in python, so please answer.
0
1
3,545
0
44,862,175
0
0
0
0
1
false
5
2017-07-01T14:24:00.000
1
2
0
Reading an excel with pandas basing on columns' colors
44,861,989
0.099668
python,excel,pandas
This cannot be done in pandas alone. You will need to use another library to read the xlsx file and determine which columns are white. I'd suggest using the openpyxl library. Then your script will follow these steps: Open the xlsx file Read and filter the data (you can access the cell color) and save the results Create a pandas dataframe Edit: Switched xlrd to openpyxl as xlrd is no longer actively maintained
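A rough sketch of those steps, assuming the file is called data.xlsx and that "white" means the header cell has no fill or an explicitly white fill (adjust the test to your workbook):

import openpyxl
import pandas as pd

wb = openpyxl.load_workbook("data.xlsx")
ws = wb.active

keep = []
for col in ws.iter_cols(min_row=1, max_row=1):   # inspect only the header row
    cell = col[0]
    # "No fill" or an explicitly white fill counts as a white column here.
    if cell.fill.fill_type is None or cell.fill.start_color.rgb in ("FFFFFFFF", "00FFFFFF"):
        keep.append(cell.col_idx - 1)            # 0-based position for iloc

df = pd.read_excel("data.xlsx").iloc[:, keep]    # pandas dataframe of white columns only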
I have an xlsx file, with columns with various coloring. I want to read only the white columns of this excel in python using pandas, but I have no clues on hot to do this. I am able to read the full excel into a dataframe, but then I miss the information about the coloring of the columns and I don't know which columns to remove and which not.
0
1
7,328
0
44,864,201
0
0
0
0
1
true
0
2017-07-01T17:18:00.000
1
1
0
Matplotlib is incorrectly rendering axis labels in sans-serif when using LaTeX
44,863,610
1.2
python,matplotlib,latex
I was importing the seaborn package after setting the matplotlib rcParams, which overwrote values such as the font family. Calling rcParams.update(params) after importing seaborn fixes the problem.
I am using matplotlib.rc('text', usetex=True); matplotlib.rc('font', family='serif') to set my font to serif with LaTeX. This works for the tick labels, however the plot title and axis lables are typeset using the sans-serif CMS S 12 computer modern variant. From what I have found on the web, most people seem to have trouble using the sans-serif font. For me the opposite is the case, I cannot get the serif font to work properly. I have tried a hacky solution of setting the sans-serif font to Computer Modern, which unfortunately does not work either.
0
1
184
0
44,871,723
0
1
0
0
1
false
0
2017-07-02T13:26:00.000
1
1
0
In spyder how to get back default view of running a code in Ipython console
44,871,312
0.197375
python-3.x,anaconda,spyder
(Spyder developer here) Please use the Variable Explorer to visualize Numpy arrays and Pandas DataFrames. That's its main purpose.
Hi, on running a code in the console I am getting the display as: runfile('C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing/praCTICE.py', wdir='C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing') and on viewing a small matrix it is showing up as array([['France', 44.0, 72000.0], ['Spain', 27.0, 48000.0], ['Germany', 30.0, 54000.0], ..., ['France', 48.0, 79000.0], ['Germany', 50.0, 83000.0], ['France', 37.0, 67000.0]], dtype=object) Even though the matrix is pretty small, how do I change this to get the default view when I run my code in the IPython console? I installed the latest version of anaconda.
0
1
272
0
44,875,134
0
0
0
0
1
true
0
2017-07-02T16:56:00.000
1
1
0
How can the perplexity of a language model be between 0 and 1?
44,873,156
1.2
python,tensorflow,language-model,sequence-to-sequence,perplexity
This does not make a lot of sense to me. Perplexity is calculated as 2^entropy, and the (cross-)entropy is non-negative, so perplexity is at least 1. So your results which are < 1 do not make sense. I would suggest you take a look at how your model calculates the perplexity, because I suspect there might be an error.
In Tensorflow, I'm getting outputs like 0.602129 or 0.663941. It appears that values closer to 0 imply a better model, but it seems like perplexity is supposed to be calculated as 2^loss, which implies that loss is negative. This doesn't make any sense.
0
1
298
0
44,884,458
0
0
0
0
1
false
0
2017-07-03T10:19:00.000
0
1
0
python multiprocessing (using pytable) misses some results from the queue in the final output
44,883,116
0
python,queue,multiprocessing,pytables
It's really hard to help you without code. But I think that if you want to find "thin" places in your code, you have to add logging. As I understand it, one iteration of your worker has to create 268 Series that become columns in the final dataframe. If these Series are the same shape, then it seems that the issue is in the queue/worker handoff, and you should log every step of it that you can.
Before I state my question, let me put my constraint - I can't post the code as it is related to my job and they don't allow it. So this is just a survey query to see if somebody has seen similar issues. I have a python multiprocessing set up where the workers do the work and put the result in a queue. A special writer worker then accumulates the results from the queue. These results are simple pandas Series. The accumulator puts the results into a pandas dataframe and writes it to a pytable on the disk. The issue is that I randomly see that sometimes a few results are missing in the dataframe, e.g. out of 268 expected columns I will get 267. This has happened around 10 out of 80 times in the last three months. The cure is - simply rerun the code (which means recalculate everything) and it works 100% the second time. I have ensured that there is no error in the calculations, so my guess is that it is related to multiprocessing or pytable data writing. Any hints are appreciated. Sorry for not being able to put the code.
0
1
54
0
44,894,389
0
0
0
0
1
true
2
2017-07-03T21:47:00.000
2
1
0
Measure Volatility or Stability Of Lists of Floating Point Numbers
44,894,250
1.2
python,math,statistics,volatility
You could use the standard deviation of the list divided by the mean of the list. Those measures have the same units so their quotient will be a pure number, without a unit. This scales the variability (standard deviation) to the size of the numbers (mean). The main difficulty with this is for lists that have both positive and negative numbers, like your List B. The mean could end up being an order of magnitude less that the numbers, exaggerating the stability measure. Worse, the mean could end up being zero, making the measure undefined. I cannot think of any correction that would work well in all cases. The "stability" of a list with both positive and negative numbers is very doubtful and would depend on the context, so I doubt that any general stability measure would work well in all such cases. You would need a variety for different situations.
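For what it's worth, this ratio is known as the coefficient of variation; a quick sketch on the example lists:

import numpy as np

def stability(values):
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    if mean == 0:
        return float("inf")   # the undefined case discussed above
    return values.std() / abs(mean)

print(stability([100, 101, 103, 99, 98]))                # small -> stable
print(stability([0.3, 0.1, -0.2, 0.1]))                  # mixed signs inflate the measure
print(stability([0.00003, 0.00002, 0.00007, 0.00008]))   # comparable despite tiny values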
Wonder if anyone can help. I have a set of lists of numbers, around 300 lists in the set, each list of around 200 numbers. What I wish to calculate is the "relative stability" of each list. For example: List A: 100,101,103,99,98 - the range is v small - so stable. List B: 0.3, 0.1, -0.2, 0.1 - again, v small range, so stable. List C: 0.00003, 0.00002, 0.00007, 0.00008 - stable. Now, I could use standard deviation - but the values returned by the standard deviation will be relative to the values within each list. So std for list C would be tiny in comparison to std for list A - and therefore numerically not give me a comparable measure of stability/volatility enabling me to meaningfully ask: if list A more or less stable than list C? So, I wondered if anyone had any suggestions for a measure that will be comparable across such lists? Many thanks for any help. R
0
1
361
0
44,899,478
0
0
0
0
1
false
1
2017-07-04T07:04:00.000
0
3
0
Merging 2 dataframes on Pandas
44,899,119
0
python,pandas
Instead of df1.merge(...) try: pd.merge(left=df1, right=df2, on='e', how='inner')
Sorry I have a very simple question. So I have two dataframes that look like Dataframe 1: columns: a b c d e f g h Dataframe 2: columns: e ef I'm trying to join Dataframe 2 on Dataframe 1 at column e, which should yield columns: a b c d e ef g h or columns: a b c d e f g h ef However: df1.merge(df2, how = 'inner', on = 'e') yields a blank dataframe when I print it out. 'outer' merge only extends the dataframe vertically (like using an append function). Would appreciate some help thank you!
0
1
87
0
44,903,656
0
1
0
0
1
false
1
2017-07-04T10:02:00.000
0
3
1
how to install python package on azure hdinsight pyspark3 kernel?
44,902,885
0
python,azure,pyspark,jupyter-notebook,azure-hdinsight
Have you tried installing using pip? In some cases where you have both Python 2 and Python 3, you have to run pip3 instead of just pip to invoke pip for Python 3.
I would like to install python 3.5 packages so they would be available in Jupyter notebook with pyspark3 kernel. I've tried to run the following script action: #!/bin/bash source /usr/bin/anaconda/envs/py35/bin/activate py35 sudo /usr/bin/anaconda/envs/py35/bin/conda install -y keras tensorflow theano gensim but the packages get installed on python 2.7 and not in 3.5
0
1
2,656
0
47,966,038
0
0
0
0
1
false
1
2017-07-05T00:27:00.000
1
1
0
Python - Fitting a polynomial (multi-dimension) through X points
44,915,500
0.197375
python,scikit-learn,regression,polynomials
Your question is ill defined. If you want, say, 14 features of 34 possible, which 14 should that be? In your place, I would generate a redundant number of features and then use a feature selection algorithm. It would be a sparse model (like Lasso) or a feature elimination algorithm (like RFE).
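A hedged sketch of that idea with scikit-learn, using made-up data of the shape mentioned in the question (27 points in 3D):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

X = np.random.rand(27, 3)   # e.g. 27 points in 3 dimensions
y = np.random.rand(27)

# Generate redundant polynomial features, then let the sparse model prune them.
model = make_pipeline(PolynomialFeatures(degree=4),
                      Lasso(alpha=1e-3, max_iter=100000))
model.fit(X, y)
print("features kept:", np.sum(model.named_steps["lasso"].coef_ != 0))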
I've been using scikit learn, very neat. Fitting a polynomial curve through points (X-Y) is pretty easy. If I have N points, I can chose a polynomial of order N-1 and the curve will fit the points perfectly. If my X vector has several dimension, in scikit-learn I can build a pipeline with a PolynomialFeatures and a LinearRegression. Where I am stuck is having a bit more flexibility with the numbers of features created by PolynomialFeatures (which is not an input); I'd like to be able to specify the total amount of features, with the end goal to have a polynomial that goes through all the points. E.g. in 3D (X has 3 columns), if I have 27 points (square matrix of 3X3X3); I'd like to limit the number of features to 27. (PolynomialFeatures does have an attribute of powers_, but it can't be set. Exploring the source also does not seem to show anything specific). Any ideas? Cheers, N
0
1
300
0
44,964,036
0
0
0
0
1
true
0
2017-07-05T10:42:00.000
0
1
0
Tensorflow GPU cuDNN: How do I load cuDNN libraries?
44,923,993
1.2
python,python-3.x,tensorflow,gpu
A fresh install is the key, but there are some important points: 1. Install the CUDA 8.0 toolkit 2. Install cuDNN version 5.1 (not 6.0) 3. Build from source (bazel) and configure tensorflow to use CUDA The above steps worked for me! Hope it helps someone.
I am trying to use tensorflow with gpu and installed the CUDA 8.0 toolkit and cuDNN v5.1 libraries as described on the nvidia website. But when I try to import tensorflow as a module in python3.5, it does not load the cuDNN libraries (outputs nothing, just loads the tensorflow module). And I do not observe any speedup in processing (I get the same speed as with the CPU) when using the GPU.
0
1
784
0
44,937,815
0
0
0
0
1
true
1
2017-07-05T23:35:00.000
3
1
0
What are use cases for *not* resetting a groupby index in pandas
44,937,573
1.2
python,pandas
When you perform a groupby/agg operation, it is natural to think of the result as a mapping from the groupby keys to the aggregated scalar values. If we were using plain Python, a dict would be the natural data structure to hold such a mapping from keys to values. Since we are using Pandas, a Series is the natural data structure. Its index would hold the keys, and the Series values would be the aggregated scalars. If there is more than one aggregated value for each key, then the natural data structure to use would be a DataFrame. The advantage of holding the keys in an index rather than a column is that looking up values based on index labels is an O(1) operation, whereas looking up values based on a value in a column is an O(n) operation. Since the result of a groupby/agg operation fits naturally into a Series or DataFrame with groupby keys as the index, and since indexes have this special fast lookup property, it is better to return the result in this form by default.
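A tiny illustration of the fast index lookup:

import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "a", "b"], "val": [1, 2, 3, 4]})
means = df.groupby("key")["val"].mean()   # Series indexed by the group keys
print(means.loc["a"])                     # fast label-based lookup -> 2.0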
When working with groupby on a pandas DataFrame instance, I have never not used either as_index=False or reset_index(). I cannot actually think of any reason why I wouldn't do so. Because my behavior is not the pandas default (indeed, because the groupby index exists at all), I suspect that there is some functionality of pandas that I am not taking advantage of. Can anyone describe cases where it would be advantageous to not reset the index?
0
1
40
0
46,657,731
0
0
0
0
1
false
0
2017-07-06T02:12:00.000
2
2
0
Is there a python implementation of a Face shape detector?
44,938,737
0.197375
python-3.x,computer-vision,opencv3.0,dlib
Dear past version of self. With deep learning (Convolutional Neural Networks), this becomes a trivial problem. You can retrain Google's Inception V3 model to classify faces into the 5 face shapes you mentioned in your question. With just 500 training images you can attain an accuracy of 98%.
Is there an OpenCV-python, dlib or any python 3 implementation of a face shape detector (diamond, oblong, square)?
0
1
2,005
0
44,950,403
0
0
0
0
1
false
1
2017-07-06T03:16:00.000
0
2
0
TensorFlow RandomForest vs Deep learning
44,939,210
0
python,machine-learning,tensorflow,neural-network,random-forest
A useful rule when you begin training models is not to start with the more complex methods; begin, for example, with a linear model, which you will be able to understand and debug more easily. If you continue with the current methods, some ideas: Check the initial weight values (initialize them from a normal distribution) As a previous poster said, lower the learning rate Do some additional checking on the data: look for NaNs and outliers, since the current models could be more sensitive to noise. Remember: garbage in, garbage out.
I am using TensorFlow for training model which has 1 output for the 4 inputs. The problem is of regression. I found that when I use RandomForest to train the model, it quickly converges and also runs well on the test data. But when I use a simple Neural network for the same problem, the loss(Random square error) does not converge. It gets stuck on a particular value. I tried increasing/decreasing number of hidden layers, increasing/decreasing learning rate. I also tried multiple optimizers and tried to train the model on both normalized and non-normalized data. I am new to this field but the literature that I have read so far vehemently asserts that the neural network should marginally and categorically work better than the random forest. What could be the reason behind non-convergence of the model in this case?
0
1
2,520
0
44,947,979
0
1
0
0
1
false
0
2017-07-06T10:05:00.000
0
2
0
Installing data science packages to vanilla python
44,945,850
0
python,machine-learning,data-science
as suggested by @DavidG, the following solution worked: Download the whl file use cmd window and go to the download folder and then install like below: C:\Users\XXXXXXXX>cd C:\Users\XXXXXXXX\Documents\Python Packages C:\Users\XXXXXXXX\Documents\Python Packages>pip install numpy-1.13.0+mkl-cp36-cp36m-win32.whl Processing c:\users\XXXXXXXX\documents\python packages\numpy-1.13.0+mkl-cp36-cp 36m-win32.whl Installing collected packages: numpy Found existing installation: numpy 1.13.0 Uninstalling numpy-1.13.0: Successfully uninstalled numpy-1.13.0 Successfully installed numpy-1.13.0+mkl C:\Users\XXXXXXXX\Documents\Python Packages>pip install scipy-0.19.1-cp36-cp36m -win32.whl Processing c:\users\XXXXXXXX\documents\python packages\scipy-0.19.1-cp36-cp36m- win32.whl Requirement already satisfied: numpy>=1.8.2 in c:\users\XXXXXXXX\appdata\local\ programs\python\python36-32\lib\site-packages (from scipy==0.19.1) Installing collected packages: scipy Successfully installed scipy-0.19.1 C:\Users\XXXXXXXX\Documents\Python Packages>
How to download necessary python packages for data analysis (e.g. pandas,scipy,numpy etc) and machine learning packages (sci-kit learn for starter, tensorflow for deeplearning if possible etc) without using github or anaconda? Our client has permitted us to install python 3.6 and above (32-bit) in our terminals for data analysis and machine learning projects but we cannot access github due to security restrictions and also cannot download anaconda bundle. Please provide suitable weblinks and instructions.
0
1
931
0
45,007,258
0
0
0
0
1
true
0
2017-07-06T10:44:00.000
1
1
0
Installing Tensorflow for Python 2.7 for Keras and CoreML conversion on Windows 10
44,946,737
1.2
python-2.7,tensorflow,windows-10,keras,coreml
A non-optimal solution (the only one I found), in my opinion, is to install a Linux virtual machine. I used VirtualBox for it. Then, it is very easy to download Anaconda and Python 2, as well as the right versions of the packages. For example, you can download Tensorflow 1.1.0 using the following command: $ pip install -I tensorflow==1.1.0.
I am currently working on an artificial neural network model with Keras for image recognition and I want to convert it using CoreML. Unfortunately, I have been working with Python3 and CoreML only works with Python 2.7 at the moment. Moreover, Tensorflow for Python 2.7 does not seem to be supported by Windows... So my only hope is to find a way to install it. I saw some tips using Docker Toolbox but I did not catch it and I failed when trying this solution, even though it looks like the only thing that works. So, is there any quite simple way to install Tensorflow for Python 2.7 on Windows 10? Thank you very much!
0
1
1,130
0
44,966,279
0
0
0
0
1
false
0
2017-07-06T22:07:00.000
2
1
0
Many to one LSTM input shape
44,959,636
0.379949
python-3.x,keras,lstm
In the first layer of the model you should define input_shape=(n_timesteps, n_features). So in your case input_shape=(25, 10). Your actual input to the model will have shape (1000, 25, 10). You should also use keras.utils.to_categorical to convert your labels to one-hot-encoded vectors, so that they become vectors of length X, where X is your number of classes. Every element will be equal to zero, except the one corresponding to the class. Hope this helps!
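A minimal many-to-one sketch under the shapes discussed; the class count and layer size here are assumptions:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.utils import to_categorical

n_classes = 5                                    # assumption: replace with your count
X = np.random.rand(1000, 25, 10)                 # 1000 samples, 25 timesteps, 10 features
y = to_categorical(np.random.randint(0, n_classes, 1000), n_classes)

model = Sequential()
model.add(LSTM(64, input_shape=(25, 10)))        # many-to-one: only the last output is kept
model.add(Dense(n_classes, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32)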
My input data has 10 features and it is taken at 25 different timestamps. My output data consists of class labels. So, basically, I am having a many to one classification problem. I want to implement an LSTM for this problem. Total training data consists of 10000 data points. How should the input and output format (shape) for this LSTM network be?
0
1
248
0
44,965,968
0
0
0
0
2
false
0
2017-07-07T04:11:00.000
0
2
0
Image preprocessing of finetune in ResNet
44,962,433
0
python,deep-learning
In my honest opinion, people overstate the impact of image preprocessing. The only truly important thing is that the test data is similar in value scale to the training data. There are some theoretical benefits to having a pre-normalized dataset, with the usage of batch normalization, but in practice it never made much of a difference (2-4% accuracy). If you have a running model and you are trying to get those last few % of accuracy without having to increase the number of parameters, then I would suggest tweaking this for your use-case. In my opinion there is no single method that works for every use-case, but a good starting point is to use the same preprocessing as ImageNet, because the features will be similar to the ones produced for the ImageNet classification.
I want to finetune the ResNet50 ImageNet pretrained model, and I have a few questions about image preprocessing for finetuning. In ImageNet preprocessing, we need to subtract the pixel mean ([103.939, 116.779, 123.68]). When I use my dataset to finetune, should I subtract the mean of ImageNet or the mean of my own data? I do see many people rescale the data to [0,1], but the pretrained (ImageNet) model uses images scaled to [0,255]. Why do people do that? Is it reasonable?
0
1
2,606
0
44,983,413
0
0
0
0
2
false
0
2017-07-07T04:11:00.000
0
2
0
Image preprocessing of finetune in ResNet
44,962,433
0
python,deep-learning
I would try both. Subtracting your mean makes sense because generally one tries to get mean 0. Subtracting image net mean makes sense because you want the network as a feature extractor. If you change something that early in the feature extractor it could be that it doesn't work at all. Just like the mean 0 thing, it is generally seen as a desirable property to have features within a fixed range or with a fixed standard deviation. Again, I can't really tell you what is better but you can easily try it. My guess is that there aren't too big differences. Most important: Make sure you apply the same preprocessing steps to your training / testing / evaluation data.
I want to finetune the ResNet50 ImageNet pretrained model, and I have a few questions about image preprocessing for finetuning. In ImageNet preprocessing, we need to subtract the pixel mean ([103.939, 116.779, 123.68]). When I use my dataset to finetune, should I subtract the mean of ImageNet or the mean of my own data? I do see many people rescale the data to [0,1], but the pretrained (ImageNet) model uses images scaled to [0,255]. Why do people do that? Is it reasonable?
0
1
2,606
0
44,968,146
0
0
0
0
1
false
0
2017-07-07T09:49:00.000
0
1
0
Tensorflow:Using a trained model in C++
44,967,751
0
python,c++,tensorflow
Why would you want to train the model in C++? Tensorflow's core libraries are in C++. I think you mean use the trained model in C++? Once you've trained a model and exported it (assuming you have the .pb file), you can use the model for prediction. There's no way to retrain an exported model.
I have a model build in Python using keras and tensorflow. I want to export the model and use it for training in C++. I am using TF1.2 and used the tf.train.export_metagraph to export my graph. I am not exactly sure on how to proceed in using the model in C++ for training. Thanks :)
0
1
192
0
44,982,417
0
0
0
0
1
false
0
2017-07-08T04:05:00.000
0
1
0
Numpy Float128 Polyfit
44,982,378
0
python,arrays,numpy
You may use a logarithmic version of your variables (np.log10), so when dealing with something like 1e-200 you will work with -200 instead, avoiding the overflow and gaining numerical stability.
I'm using numpy's polyfit to find a best fit curve for a set of data. However, numpy's polyfit returns an array of float64 and because the calculated coefficients are so large/small (i.e. 1e-200), it's returning an overflow error that's encountered in multiply : RuntimeWarning: overflow encountered in multiply scale = NX.sqrt((lhs*lhs).sum(axis=0)) I've tried casting the initial array to be float128, but that does not seem to work. Is there any way around this overflow issue / any way to handle such large coefficients?
0
1
281
0
44,983,209
0
0
0
0
1
false
11
2017-07-08T06:16:00.000
22
1
0
importing numpy in hackerrank competitions
44,983,165
1
python,numpy
I have run into the same issue on HackerRank. A number of their challenges do support NumPy--indeed, a handful require it. Either import numpy or the idiomatic import numpy as np will work just fine on those. I believe you're simply trying to use numpy where they don't want you to. Because it's not part of the standard library, HackerRank would need to intentionally provide it. Where they do not, you will need to substitute lower-level, non-numpy code as a result.
I want to use numpy module for solving problems on hackerrank. But, when I imported numpy, it gave me the following error. ImportError: No module named 'numpy'. I understand that this might be a very trivial question. But, I am a beginner in programming. Any help is highly appreciated.
0
1
19,903
0
44,986,475
0
0
0
0
1
false
0
2017-07-08T12:46:00.000
0
1
0
NLTK: No value returned when searching the CMU dictionary based on syllable value
44,986,375
0
python-3.x,nltk
My bad, I realized the mistake. I had swapped the position of "pron" with "word", thereby causing this problem. The corrected code is: p3 = [(pron[0] + '-' + pron[2], word) for word, pron in entries if pron[0] == 'P' and len(pron) == 3]
I am practicing the nltk examples from the "Natural language processing in Python" book. While trying to get the words that start with syllable "p" and of syllable length 3 from cmu dictionary (one of the examples provided in chapter 2), I am not getting any values returned. I am using Python 3. Below is the code: entries = nltk.corpus.cmudict.entries() p3 = [(pron[0] + '-' + pron[2], word) for pron, word in entries if pron[0] == 'P' and len(pron) == 3] But no value returned: p3 = [] However, I know that the value exist. See below: [(word, pron[0] + '-' + pron[2]) for word, pron in entries if word == 'perch'] [('perch', 'P-CH')]
0
1
414
0
45,001,380
0
0
0
0
1
false
2
2017-07-09T18:45:00.000
2
1
0
Best practice for groupby on Parquet file
44,999,814
0.379949
python,pyspark,parquet,dask
If you are doing a groupby-aggregation with a known aggregation like count or mean then your partitioning won't make that much of a difference. This should be relatively fast regardless. If you are doing a groupby-apply with a non-trivial apply function (like running an sklearn model on each group) then you will have a much faster experience if you store your data so that the grouping column is sorted in parquet. Edit: That being said, even though groupby-count doesn't especially encourage smart partitioning it's still nice to switch to Parquet. You'll find that you can read the relevant columns much more quickly. As a quick disclaimer, dask.dataframe doesn't currently use the count statistics within parquet to accelerate queries, except by filtering within the read_parquet function and to help identify sorted columns.
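A rough sketch of the Parquet round trip, with assumed column names (group_key, value); whether dask exploits the sorted index for your exact aggregation may vary, and to_parquet needs fastparquet or pyarrow installed:

import dask.dataframe as dd

ddf = dd.read_csv("data_*.csv")
ddf = ddf.set_index("group_key")          # sorts/partitions on the key (expensive, done once)
ddf.to_parquet("data.parquet")

# Later: read back only the needed column, then aggregate by the index.
ddf2 = dd.read_parquet("data.parquet", columns=["value"])
counts = ddf2.groupby(ddf2.index).count().compute()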
We have a 1.5BM records spread out in several csv files. We need to groupby on several columns in order to generate a count aggregate. Our current strategy is to: Load them into a dataframe (using Dask or pyspark) Aggregate columns in order to generate 2 columns as key:value (we are not sure if this is worthwhile) Save file as Parquet Read the Parquet file (Dask or pyspark) and run a groupby on the index of the dataframe. What is the best practice for an efficient groupby on a Parquet file? How beneficial is it to perform the groupby on the index rather then on a column (or a group of columns)? We understand that there is a partition that can assist - but in our case we need to groupby on the entire dataset - so we don't think it is relevant.
0
1
1,447
0
66,233,233
0
0
0
1
1
false
15
2017-07-10T03:15:00.000
1
2
0
How to use matplotlib to plot pyspark sql results
45,003,301
0.099668
python,pandas,matplotlib,pyspark-sql
For small data, you can use .select() and .collect() on the pyspark DataFrame. collect will give a python list of pyspark.sql.types.Row, which can be indexed. From there you can plot using matplotlib without Pandas, however using Pandas dataframes with df.toPandas() is probably easier.
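A sketch of the toPandas route; the query, table and column names are assumptions:

import matplotlib.pyplot as plt
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.sql("SELECT x, y FROM some_table")   # hypothetical SQL result

pdf = df.toPandas()                             # only sensible for small results
pdf.plot(x="x", y="y", kind="bar")
plt.show()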
I am new to pyspark. I want to plot the result using matplotlib, but not sure which function to use. I searched for a way to convert sql result to pandas and then use plot.
0
1
30,940
0
45,005,490
0
0
0
0
1
false
0
2017-07-10T05:46:00.000
0
2
0
How to classify both sentiment and genres from movie reviews using CNN Tensorflow
45,004,514
0
python-3.x,tensorflow,neural-network,deep-learning,data-science
You can treat this as a multi-label problem, and append the sentiment and the tone labels together. Now, since the network has to predict multiple outputs (2 in this case), you need to use an activation function like sigmoid and not softmax. And your prediction can be made using tf.round(tf.sigmoid(logits)), since the raw logits first need to be squashed into probabilities.
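A rough TF 1.x sketch of the sigmoid multi-label setup; the feature size and the 7-label layout (2 sentiment + 5 mood bits) are assumptions:

import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 128])   # assumed input feature size
labels = tf.placeholder(tf.float32, [None, 7])       # 2 sentiment bits + 5 mood bits

logits = tf.layers.dense(features, 7)                # one logit per label
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
preds = tf.round(tf.sigmoid(logits))                 # each label predicted independently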
I am trying to classify sentiment on movie review and predict the genres of that movie based on the review itself. Now Sentiment is a Binary Classification problem where as Genres can be Multi-Label Classification problem. Another example to clarify the problem is classifying Sentiment of a sentence and also predicting whether the tone of the sentence is happy, sarcastic, sad, pitiful, angry or fearful. More to that is, I want to perform this classification using Tensorflow CNN. My problem is in structuring the y_label and training the data such that the output helps me retrieve Sentiment as well as the genres. Eg Data Y Label: [[0,1],[0,1,0,1,0]] for sentiment as Negative and mood as sarcastic and angry How do you suggest I tackle this?
0
1
254
0
45,031,046
0
0
0
0
1
false
0
2017-07-11T09:39:00.000
0
2
0
Reverse a matrix with tensorflow
45,030,827
0
python,tensorflow,matrix-inverse,bigdata
You mean you need to swap rows and columns? If that's the case then you might use tf.transpose.
I'm a beginner in big data. I have learned Python. I want to reverse a matrix with tensorflow (an n*n matrix as input); my boss wants it done with tensorflow, and I want to do it without the adjoint matrix. Help me, please. Thank you in advance. <3
0
1
435
0
45,034,487
0
1
0
0
1
false
7
2017-07-11T12:11:00.000
0
3
0
Override `import` for more sophisticated module import
45,034,266
0
python,import
Short answer is NO... But you could and should catch ImportError for when the module is not there, and handle it then. Otherwise, replacing all import statements with something else is the way to go.
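One common pattern for the MagicMock fallback the question already describes, kept in a single module that everything else imports from:

try:
    import matplotlib
except ImportError:
    import warnings
    from unittest.mock import MagicMock
    warnings.warn("matplotlib is not installed; plotting features are disabled")
    matplotlib = MagicMock()   # absorbs attribute access and calls silently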
Is it possible to somehow override import so that I can do some more sophisticated operations on a module before it gets imported? As an example: I have a larger application that uses matplotlib for secondary features that are not vital for the overall functionality of the application. In case matplotlib is not installed, I just want to mock the functionality so that the import and all calls to matplotlib functions appear to be working, just without actually doing anything. A simple warning should then just indicate that the module is not installed, though that fact would not impair the core functionality of the application. I already have an import function that, in the case that matplotlib is not installed, returns a MagicMock object instead of the actual module, which just mimics the behavior of the matplotlib API. So, all import matplotlib... or from matplotlib import... statements should then be automatically overridden by the corresponding function call. I could replace all import and from ... import expressions by hand, but there are a lot of them. I'd rather have this functionality automatically by overriding import. Is that possible?
0
1
2,180
0
45,047,193
0
0
0
0
1
false
1
2017-07-11T21:51:00.000
0
1
0
pandas - reading dynamically named file within a .zip
45,045,043
0
python,pandas
You can ask the archive itself for the name of the file it contains instead of hard-coding it, using ZipFile.namelist: xlsx_name = zipfile.namelist()[0], then df = pd.read_excel(zipfile.open(xlsx_name), header=1, names=cols). Note that pd.read_excel has no compression keyword; that belongs to pd.read_csv.
I am creating a new dataframe in pandas as below: df = pd.read_excel(zipfile.open('zipfilename 2017-06-28.xlsx'), header=1, names=cols) The single .xlsx within the .zip is dynamically named (so changes based on the date). This means I need to change the name of the .xlsx in my code each time I open the .zip to account for the dynamically named .xlsx. Is there a way to make pandas read the file within the .zip, regardless of the name of the file? Or to return the name of the .xlsx within the line of code somehow? Thanks
0
1
71
0
45,052,125
0
0
0
0
1
false
1
2017-07-12T07:15:00.000
0
1
0
how to re-train Saved linear regression ML model in pyspark when new data is coming
45,050,839
0
python,machine-learning,pyspark
I don't think so. You use pyspark.ml.regression.GeneralizedLinearRegression to train, and then you get a pyspark.ml.regression.GeneralizedLinearRegressionModel; that is what you have saved. AFAIK, the model can't be refitted; you have to run the regression fit again to get a new model.
I trained a linear regression model using pyspark ml and save it.now i want to re-train it on the bases of new data batch.. is it possible??
0
1
126
0
45,056,341
0
1
0
0
1
true
0
2017-07-12T11:10:00.000
1
1
0
Installing miniconda for theano with gpuarray: as root or as user?
45,056,037
1.2
python,conda
Anaconda and miniconda are designed to be installed by each user individually, into each users $HOME/miniconda directory. If you installed it as a shared install as root, all users would need to access /root/miniconda. Also, environments will be created in $HOME/miniconda/envs, so environments of several people will interfere with each other (plus the whole issue of permissions, file ownership etc.). Bottom line: Don't install it as root, install it as yourself. Any third party dependencies you'd still install as root using apt-get, but once they're installed they're accessible by everyone, no matter if they use miniconda or not.
I've always used virtualenv(wrapper) for my python needs, but now I'm considering trying conda for new projects, mainly because theano docs "strongly" recommend it, and hoping that it will save me some hassle with pygpu config. I'm on linux mint 16( I guess, kernel in uname is from ubuntu 14.04) and there are no system packages for conda/miniconda so I'll have to use their shell script for installation. Now I have a dilemma - should I install as my user or as root? What is likely to give me less hassle in the future (given that I'm going to use (nvidia) GPU for computation).
0
1
94
0
45,061,095
0
0
0
0
1
false
0
2017-07-12T14:24:00.000
0
1
0
is the Matlab radon() function a "circular" radon transform?
45,060,419
0
python,matlab
Matlab's radon() function is not circular. This was the problem. Although the output image sizes do still differ, I am getting essentially the result I want.
I am trying to translate some matlab code to python. In the matlab code, I have a radon transform function. I start with a 146x146 image, feed it into the radon() function, and get a 211x90 image. When I feed the same image into my python radon() function, I get a 146x90 image. The documentation for the python radon () function says it is a circular radon transform. Is the matlab function also circular? Why are these returning different shaped images and how can I get the outputs to match?
0
1
269
0
45,395,282
0
0
0
1
1
false
0
2017-07-12T15:00:00.000
0
3
0
Is Google Cloud Datastore or Google BigQuery better suited for analytical queries?
45,061,306
0
python,pandas,google-cloud-datastore,google-bigquery,google-cloud-platform
As far as I can tell there is no support for Datastore in Pandas. This might affect your decision.
Currently we are uploading the data retrieved from vendor APIs into Google Datastore. Wanted to know what is the best approach with data storage and querying the data. I will be need to query millions of rows of data and will be extracting custom engineered features from the data. So wondering whether I should load the data into BigQuery directly and query it for faster processing or store it in Datastore and then move it to BigQuery for querying?. I will be using pandas for performing statistics on stored data.
0
1
577
0
45,063,449
0
0
0
0
1
false
0
2017-07-12T16:45:00.000
2
1
0
Using pandas, how can I return the number of times an element appears in a column?
45,063,425
0.379949
python,pandas
Use df['your column name'].value_counts()['your value name'].
I have a pandas df with 5 columns, one of them being State. I want to find the number of times each state appears in the State column. I'm guessing I might need to use groupby, but I haven't been able to figure out the exact command.
0
1
354
0
45,070,245
0
0
0
0
1
false
2
2017-07-13T01:49:00.000
0
2
0
Machine learning to classify company names to their industries
45,070,186
0
python,machine-learning,text-classification,multilabel-classification
Not sure what you want. If the point is to use just company names, maybe break names into syllables/phonemes, and train on that data. If the point is to use Word2Vec, I'd recommend pulling the Wikipedia page for each company (easier to automate than an 'about me').
What I'm trying to do is to ask the user to input a company name, for example Microsoft, and be able to predict that it is in the Computer Software industry. I have around 150 000 names and 60+ industries. Some of the names are not English company names. I have tried training a Word2Vec model using Gensim based on company names only and averaged up the word vectors before feeding it into SKlearn's logistic regression but had terrible results. My questions are: Has anyone tried these kind of tasks? Googling on short text classification shows me results on classifying short sentences instead of pure names. If anyone had tried this before, mind sharing a few keywords or research papers regarding this task? Would it be better if I have a brief description for each company instead of only using their names? How much would it help for my Word2Vec model rather than using only the company names?
0
1
2,136
0
45,084,990
0
0
0
0
1
false
4
2017-07-13T08:40:00.000
0
1
0
what the differences between tf.train.Saver().restore() and tf.saved_model.loader
45,075,568
0
python,tensorflow
During training, restoring from checkpoints, etc, you want to use the saver. You only want to use the saved model if you're loading your exported model for inference.
I'd like to know the differences between tf.train.Saver().restore() and tf.saved_model.loader(). As far as I know, tf.train.Saver().restore() restores the previously saved variables from the checkpoint file; and tf.saved_model.loader() loads the graph def from the pb file. But I have no idea about when I should choose restore() or loader()?
0
1
323
0
45,105,205
0
0
0
0
1
false
0
2017-07-13T20:43:00.000
0
1
0
Visual properties of unselected glyphs in Bokeh based on what is selected
45,090,562
0
python,bokeh,glyph
In order to avoid tripling (or quintupling) memory usage in the browser, Bokeh only supports setting "single values" for non-selection colors and alphas. That is, non-selection properties can't be vectorized by pointing them at a ColumnDataSource column. So there's only two options I can think of: Split the glyphs into different groups of glyphs that each have a different nonselection_color. This might be feasible if you only have a few groups. Of course now you have to partition your data to have e.g. five calls to p.circle instead of one, but it would entirely avoid JS. Use a tiny amount of JavaScript in a CustomJS callback. You can have an additional column in the CDS that provides the non-selected colors. When a selection happens, the CustomJS callback switches the glyph's normal color field to point to the other column, and when a selection is cleared, changes it back to the "normal" field.
I have a glyph that's a series of Circles. I want to click on one point and change the colour / alpha of the unselected glyphs such that each unselected glyph has a custom colour based on it's relationship with the selected point. For example, I'd want the closest points to the selected point to have alpha near to 1 and the furthest to have alpha near to 0. I've seen other questions where the unselected glyphs have different alphas, but the alphas are independent of what is selected. Is it possible to do this without using JavaScript? Edited for more details: The specific dataset I'm working on is a dataset of a bike sharing system, with data on trips made between specific stations. When I click on a specific station, I want to show the destination stations to which users go to when they start from the station selected. For n stations, the data thus has a n * n format: for each station, we have the probability of going to every other station. Ideally, this probability will be the alpha of the unselected stations, such the most popular destinations would have alpha near to 1, and the less popular ones an alpha near to 0.
0
1
195
0
45,106,390
0
0
0
0
1
true
0
2017-07-14T15:06:00.000
1
1
0
Best way to differentiate an array of indices vs a boolean mask
45,106,240
1.2
python,numpy
You can check the dtype, or iterate through and check whether the values fall outside the set {True, False} (and likewise outside {0, 1}). Boolean masks must be the same shape as the array they are intended to index into, so that's another check. But there's no hard and fast way to distinguish a priori whether an array consisting only of values in {0, 1} is one or the other without additional knowledge.
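A sketch of those checks; note the deliberately undecided middle case:

import numpy as np

def classify_index(idx, target):
    idx = np.asarray(idx)
    if idx.dtype == np.bool_:
        return "mask"                    # unambiguous: boolean dtype
    if idx.shape == target.shape and set(np.unique(idx)) <= {0, 1}:
        return "ambiguous"               # 0/1 integers of the same shape: could be either
    return "fancy index"

a = np.arange(5)
print(classify_index(np.array([True, False, True, False, True]), a))  # mask
print(classify_index(np.array([0, 2, 4]), a))                         # fancy index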
If I am given an array of indices but I don't know whether it is a regular index array or a boolean mask, what is the best way to determine which it is?
0
1
57
0
69,032,680
0
0
0
0
2
false
17
2017-07-14T15:15:00.000
1
2
0
PyCharm: Is there a way to make the "data view" auto update when dataframe is changed?
45,106,431
0.099668
python,pycharm
If you put a cursor on the text field just below the displayed dataframe and hit Enter, it'll update itself.
When you select "View as DataFrame" in the variables pane it has a nice spreadsheet like view of the DataFrame. That said, as the DataFrame itself changes, the Data View does not auto update and you need to reclick the View as DataFrame to see it again. Is there a way to make PyCharm autoupdate this? Seems like such a basic feature.
0
1
835
0
66,154,095
0
0
0
0
2
false
17
2017-07-14T15:15:00.000
1
2
0
PyCharm: Is there a way to make the "data view" auto update when dataframe is changed?
45,106,431
0.099668
python,pycharm
Unfortunately, no. The only thing you can do is use 'watches' to watch the variable and open when you want it. It requires a lot of background processing and memory usage to display the dataframe.
When you select "View as DataFrame" in the variables pane it has a nice spreadsheet like view of the DataFrame. That said, as the DataFrame itself changes, the Data View does not auto update and you need to reclick the View as DataFrame to see it again. Is there a way to make PyCharm autoupdate this? Seems like such a basic feature.
0
1
835
0
45,115,173
0
0
0
0
1
false
0
2017-07-14T21:10:00.000
1
1
0
statsmodel fractional logit model
45,111,640
0.197375
python,python-2.7,statistics
I assume fractional Logit in the question refers to using the Logit model to obtain the quasi-maximum likelihood for continuous data within the interval (0, 1) or [0, 1]. The discrete models in statsmodels like GLM, GEE, and Logit, Probit, Poisson and similar in statsmodels.discrete, do not impose an integer condition on the response or endogenous variable. So those models can be used for fractional or positive continuous data. The parameter estimates are consistent if the mean function is correctly specified. However, the covariance for the parameter estimates are not correct under quasi-maximum likelihood. The sandwich covariance is available with the fit argument, cov_type='HC0'. Also available are robust sandwich covariance matrices for cluster robust, panel robust or autocorrelation robust cases. eg. result = sm.Logit(y, x).fit(cov_type='HC0') Given that the likelihood is not assumed to be correctly specified, the reported statistics based on the resulting maximized log-likelihood, i.e. llf, ll_null and likelihood ratio tests are not valid. The only exceptions are multinomial (logit) models which might impose the integer constraint on the explanatory variable, and might or might not work with compositional data. (The support for compositional data with QMLE is still an open question because there are computational advantages to only support the standard cases.)
can anyone let me know what is the method of estimating the parameters in fractional logit model in statsmodel package of python? And can anyone refer me the specific part of the source code of fractional logit model?
0
1
1,645
0
46,335,988
0
0
0
0
1
true
10
2017-07-15T07:16:00.000
2
1
0
difference in predictions between model.predict() and model.predict_generator() in keras
45,115,582
1.2
python,keras,prediction
@petezurich Thanks for your comment. Calling generator.reset() before model.predict_generator() and turning off shuffle in flow_from_directory fixed the problem.
When I use model.predict_generator() on my test_set (images) I am getting a different prediction, and when I use model.predict() on the same test set I am getting a different set of predictions. For using model.predict_generator I followed the below steps to create a generator: ImageDataGenerator (no arguments here) and used flow_from_directory with shuffle = False. There are no augmentations nor preprocessing of images (normalization, zero-centering etc.) while training the model. I am working on a binary classification problem involving dogs and cats (from kaggle). On the test set, I have 1000 cat images, and by using model.predict_generator() I am able to get 87% accuracy, i.e. 870 images are classified correctly. But while using model.predict I am getting 83% accuracy. This is confusing because both should give identical results, right? Thanks in advance :)
0
1
5,757
0
45,118,234
0
0
0
0
1
false
1
2017-07-15T11:49:00.000
6
1
0
Can Django work well with pandas and numpy?
45,117,857
1
python,django,pandas,numpy
You can use any framework to do so. If you have worked with Python before, I can recommend Django, since you keep the same (clear Python) syntax throughout your project. This is good because you keep the same logic everywhere, but it should not be your major concern when it comes to choosing the right framework for your needs. So, for example, if you are a top Ruby on Rails developer, I would not suggest learning Django just because of pandas. In general: a lot of packages/libraries are written in other languages but you will still be able to use them in Django/Python. So, for example, the famous Elasticsearch search backend has its roots in Java but is still used in a lot of Django apps. It also goes the other way around: Celery is written in Python but can be used from Node.js or PHP. There are hundreds of examples, but I think you get the point. I hope that I brought some light into the darkness. If you have questions, please leave them in the comments.
I am trying to build a web application that requires intensive mathematical calculations. Can I use Django to populate python charts and pandas dataframe?
1
1
7,691
0
45,171,965
0
0
0
0
1
true
0
2017-07-16T06:56:00.000
1
1
0
MXNet - what is python equivalent of getting scala's mxnet networkExecutor.gradDict("data")
45,125,919
1.2
python,scala,mxnet
How about grad_dict in executor? It returns a dictionary representation of the gradient arrays.
Trying to understand some Scala's code on network training in MXNet. I believe you can access gradient on the executor in Scala by calling networkExecutor.gradDict("data"), what would be equivalent of it in Python MXNet? Thanks!
0
1
64
0
45,154,694
0
0
0
0
2
false
8
2017-07-17T21:47:00.000
0
3
0
How to dynamically freeze weights after compiling model in Keras?
45,154,180
0
python,tensorflow,neural-network,keras,theano
Can you use tf.stop_gradient to conditionally freeze weights?
I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding how to freeze weights properly is necessary here and that's what I'm struggling with. During the generator training time the discriminator weights might not be updated. I would like to freeze and unfreeze discriminator alternately for training generator and discriminator alternately. The problem is that setting trainable parameter to false on discriminator model or even on its' weights doesn't stop model to train (and weights to update). On the other hand when I compile the model after setting trainable to False the weights become unfreezable. I can't compile the model after each iteration because that negates the idea of whole training. Because of that problem it seems that many Keras implementations are bugged or they work because of some non-intuitive trick in old version or something.
0
1
7,178
0
47,122,897
0
0
0
0
2
false
8
2017-07-17T21:47:00.000
0
3
0
How to dynamically freeze weights after compiling model in Keras?
45,154,180
0
python,tensorflow,neural-network,keras,theano
Maybe your adversarial net (generator plus discriminator) is written as a 'Model'. In that case, even if you set d.trainable=False, only the standalone d net is set non-trainable; the d inside the whole adversarial net is still trainable. You can call d_on_g.summary() before and after setting d.trainable=False and you will see what I mean (pay attention to the trainable parameter counts).
I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding how to freeze weights properly is necessary here and that's what I'm struggling with. During the generator training time the discriminator weights might not be updated. I would like to freeze and unfreeze discriminator alternately for training generator and discriminator alternately. The problem is that setting trainable parameter to false on discriminator model or even on its' weights doesn't stop model to train (and weights to update). On the other hand when I compile the model after setting trainable to False the weights become unfreezable. I can't compile the model after each iteration because that negates the idea of whole training. Because of that problem it seems that many Keras implementations are bugged or they work because of some non-intuitive trick in old version or something.
0
1
7,178
0
54,628,350
0
0
0
0
1
false
4
2017-07-17T22:41:00.000
0
1
0
Scikit learn API xgboost allow for online training?
45,154,751
0
python,machine-learning,scikit-learn,xgboost
I don't think the sklearn wrapper has an option to incrementally train a model. The feat can be achieved to some extent using the warm_start parameter. But, the sklearn wrapper for XGBoost doesn't have that parameter. So, if you want to go for incremental training you might have to switch to the official API version of xgboost.
According to the API, it seems like the normal xgboost interface allows for this option: xgboost.train(params, dtrain, num_boost_round=10, evals=(), obj=None, feval=None, maximize=False, early_stopping_rounds=None, evals_result=None, verbose_eval=True, xgb_model=None, callbacks=None, learning_rates=None). In this option, one can input xgb_model to allow continued training on the same model. However, I'm using the scikit learn API of xgboost so I can put the classifier in a scikit pipeline, along with other nice tools such as random search for hyperparameter tuning. So does anyone know of any (albeit hacky) way of allowing online training for the scikitlearn api for xgboost?
0
1
571
0
45,161,655
0
0
0
0
1
false
0
2017-07-17T22:53:00.000
0
1
0
OpenCV Python, filter edges to only include those connected to a specific pixel
45,154,854
0
python,opencv,edge-detection
The result of the Hough Line transform is an array of (rho, theta) parameter pairs. The equation of the line represented by the pair is y + x/tan(theta) - rho/sin(theta) = 0 (equivalently, x*cos(theta) + y*sin(theta) = rho). You can check whether the (x, y) coordinates of the point satisfy this condition to find lines that pass through the point (practically, test against a small tolerance instead of exactly 0).
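A numeric sketch of that check for the (rho, theta) pairs returned by cv2.HoughLines; the tolerance value is an assumption to tune:

import numpy as np

def lines_through_point(lines, px, py, tol=2.0):
    kept = []
    for rho, theta in lines[:, 0]:   # cv2.HoughLines output has shape (N, 1, 2)
        # Distance of (px, py) from the line x*cos(theta) + y*sin(theta) = rho.
        if abs(px * np.cos(theta) + py * np.sin(theta) - rho) < tol:
            kept.append((rho, theta))
    return kept

# e.g. lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)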
I've got a script that uses the Canny method and probabilistic Hough transform to identify line segments in an image. I need to be able to filter out all line segments that are NOT connected to a specific pixel. How would one tackle this problem?
0
1
276
0
45,156,763
0
0
0
1
1
false
0
2017-07-17T23:23:00.000
2
1
0
How to import CSV to an existing table on BigQuery using columns names from first row?
45,155,117
0.379949
python,google-bigquery,import-from-csv
When you import a CSV into BigQuery the columns will be mapped in the order the CSV presents them - the first row (titles) won't have any effect in the order the subsequent rows are read. To be noted, if you were importing JSON files, then BigQuery would use the name of each column, ignoring the order.
I have a python script that execute a gbq job to import a csv file from Google cloud storage to an existing table on BigQuery. How can I set the job properties to import to the right columns provided in the first row of the csv file? I set parameter 'allowJaggedRows' to TRUE, but it import columns in order regardless of column names in the header of csv file.
0
1
3,297