Dataset columns (type and value range / string length):

| Column | Type | Range / length |
|---|---|---|
| GUI and Desktop Applications | int64 | 0 - 1 |
| A_Id | int64 | 5.3k - 72.5M |
| Networking and APIs | int64 | 0 - 1 |
| Python Basics and Environment | int64 | 0 - 1 |
| Other | int64 | 0 - 1 |
| Database and SQL | int64 | 0 - 1 |
| Available Count | int64 | 1 - 13 |
| is_accepted | bool | 2 classes |
| Q_Score | int64 | 0 - 1.72k |
| CreationDate | string | length 23 |
| Users Score | int64 | -11 - 327 |
| AnswerCount | int64 | 1 - 31 |
| System Administration and DevOps | int64 | 0 - 1 |
| Title | string | length 15 - 149 |
| Q_Id | int64 | 5.14k - 60M |
| Score | float64 | -1 - 1.2 |
| Tags | string | length 6 - 90 |
| Answer | string | length 18 - 5.54k |
| Question | string | length 49 - 9.42k |
| Web Development | int64 | 0 - 1 |
| Data Science and Machine Learning | int64 | 1 - 1 |
| ViewCount | int64 | 7 - 3.27M |

Sample rows (fields appear in the column order above):

GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 36,281,252 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-03-29T03:43:00.000 | 0 | 2 | 0 | Stratified sampling for Random forest -Python | 36,275,005 | 0 | python,scikit-learn,classification,random-forest | You can use parameter class_weight .
Weights associated with classes in the form {class_label: weight}
You can give more weight to your small class and find the best weights using cross-validation.
For example, class_weight={1: 10, 0: 1} gives more weight to the class labeled 1. | I'm building a random forest classification model with the response variable split being 98%(False)-2%(True). I'm using Scikit Learn's RandomForest classifier for this.
What is the best way to handle this unbalanced data and avoid oversampling? | 0 | 1 | 1,888 |
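A minimal sketch of the class_weight idea from the answer above, written against the current scikit-learn API; X and y stand in for the real feature matrix and 98%/2% labels, and the weight grid is only an example to tune by cross-validation.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Search a few candidate weights for the rare class by cross-validation.
param_grid = {"class_weight": [{0: 1, 1: 5}, {0: 1, 1: 10}, {0: 1, 1: 25}, "balanced"]}
search = GridSearchCV(RandomForestClassifier(n_estimators=200),
                      param_grid, scoring="f1", cv=5)
search.fit(X, y)              # X, y: your features and imbalanced labels (placeholders)
print(search.best_params_)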
0 | 54,748,830 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-03-29T03:43:00.000 | 0 | 2 | 0 | Stratified sampling for Random forest -Python | 36,275,005 | 0 | python,scikit-learn,classification,random-forest | In newer versions of sklearn's random forest classifier, you can simply set class_weight="balanced". | I'm building a random forest classification model with the response variable split being 98%(False)-2%(True). I'm using Scikit Learn's RandomForest classifier for this.
What is the best way to handle this unbalanced data and avoid oversampling? | 0 | 1 | 1,888 |
0 | 36,298,305 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-03-29T15:25:00.000 | 0 | 1 | 0 | Sklearn multi-task: Input data not 3-dimensional? | 36,288,578 | 0 | python,machine-learning,scikit-learn | I don't see why you'd want X to vary for each task: the point of multitask learning is that the same feature space is used to represent instances for multiple tasks which can be mutually informative. I get that you may not have ground truth y for all instances for all tasks, though this is currently assumed in the scikit-learn implementation. | I have one huge data matrix X, of which subsets of rows correspond to different tasks that are related but also have different idiosyncratic properties.
Thus I want to train a Multi-Task model with some regularization and chose sklearn's linear_model MultiTaskElasticNet function.
I am confused with the inputs of fitting the model. It says that both the X and the Y matrix are 2-dimensional. The 2nd dimension in Y corresponds to the number of tasks. That makes sense, but in my understanding the X matrix should be 3-dimensional right? In that way I have selected which subsets of my data correspond to different tasks as I know that in advance (obviously).
Does someone know how to enter my data correctly for this scikit-learn module?
Thank you! | 0 | 1 | 511 |
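A toy sketch of the shapes the answer above describes for MultiTaskElasticNet: one shared 2-D X for all tasks and a 2-D Y with one column per task (all sizes here are made up).

```python
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

rng = np.random.RandomState(0)
n_samples, n_features, n_tasks = 100, 20, 3
X = rng.randn(n_samples, n_features)        # one feature matrix shared by every task
Y = X @ rng.randn(n_features, n_tasks)      # one target column per task

model = MultiTaskElasticNet(alpha=0.1).fit(X, Y)
print(model.coef_.shape)                    # (n_tasks, n_features)
```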
0 | 36,304,549 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-03-29T17:42:00.000 | 0 | 2 | 0 | How may I calculate Accuracy in NLTK KMeans Clustering | 36,291,392 | 0 | python,machine-learning,nltk,cluster-analysis,k-means | Precision, Recall, and thus the F-measure are inappropriate for cluster analysis. Clustering is not classification, and clusters are not classes!
Common measures for clustering (if you are trying to compare with existing labels, which does not make a whole lot of sense - if you already know the classes, then use classification and not clustering) are the Adjusted Rand Index and its variants. | I am trying to use NLTK's KMeans Clustering Algorithm.
It is generally going fine.
I want to use the Metrics package of NLTK to determine precision, recall and F-measure.
I searched for some examples on the web and in other references, but I am still without a clue.
If anyone could kindly cite an example or reference, that would help.
Thanks in Advance. | 0 | 1 | 2,455 |
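If reference labels are available, the Adjusted Rand Index mentioned in the answer above can be computed directly with scikit-learn (NLTK's k-means assignments are just a list of integers); the label lists below are illustrative only.

```python
from sklearn.metrics import adjusted_rand_score

true_labels    = [0, 0, 1, 1, 2, 2]   # known classes, if you have them
cluster_labels = [1, 1, 0, 0, 2, 2]   # output of the k-means run

# 1.0 because the two partitions agree, even though the label ids differ
print(adjusted_rand_score(true_labels, cluster_labels))
```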
0 | 36,292,344 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2016-03-29T18:25:00.000 | 0 | 2 | 0 | I need to find the angle between two sets of Roll and Yaw angles | 36,292,230 | 0 | python,math,trigonometry,angle,euler-angles | Suppose u(1), u(2), ..., u(m), v are all unit vectors. You want to determine i such that the angle between u(i) and v is maximized. This is equivalent to finding the i such that np.dot(u(i), v) is minimized. So if you have a matrix U where the rows are the u(i), you can simply do i = np.argmin(np.dot(U, v)) to find the i that has the angle between u(i) and v maximized. | I have a sensor attached to a drill. The sensor outputs orientation in heading roll and pitch. From what I can tell these are intrinsic rotations in that order. The Y axis of the sensor is parallel to the longitudinal axis of the drill bit. I want to take a set of outputs from the sensor and find the maximum change in orientation from the final orientation.
Since the drill bit will be spinning about the pitch axis, I believe it can be neglected.
My first thought would be to try to convert heading and roll to unit vectors assuming pitch is 0. Once I have their vectors, call them v and vf, the angle between them would be
Θ = arccos(v · vf)
It should then be fairly straightforward to have Python calculate Θ for a given set of orientations and pull the largest out.
My question is: is there a simpler way to do this using Python, and, if not, what is the most efficient way to convert these intrinsic rotations to unit vectors? | 0 | 1 | 914 |
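A small numpy sketch of the argmin-of-dot-products idea from the answer above; U is assumed to be an (m, 3) array of unit vectors built from the sensor readings and vf the unit vector of the final orientation (both placeholders).

```python
import numpy as np

# U: (m, 3) unit vectors for each reading; vf: unit vector of the final orientation
cosines = U @ vf
i_worst = np.argmin(cosines)                        # smallest dot product = largest angle
theta_max = np.arccos(np.clip(cosines[i_worst], -1.0, 1.0))
print(np.degrees(theta_max))
```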
0 | 36,353,073 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-04-01T08:19:00.000 | 0 | 1 | 0 | Querying a dataframe if column and values are of different types | 36,351,403 | 0 | python,pandas | You have many options, but I think they can be summarized. I couldn't tell which one would make more sense to you without more context.
Convert numeric strings to numbers
If you are afraid of issues with floats, convert only integers.
If you want to keep your data as is, store the converted values in a different column / object and use it just for filtering.
If you want to keep the data types in the filtered data, filter the converted data and use the filtered index to subset the original data.
Convert numbers to strings (same considerations as above)
Filter by both the numbers in the lookup list and their string representation. | I am writing a function which takes a pandas df, column name and a list of values and gives the filtered df. This function uses df.query() internally.
In one specific case, I have a dataframe which has a column in which both integers and strings are present. My function should filter this df on a list whose elements are all integers. At the moment, I get an empty df as strings can't be compared to int. Even though in the dataframe and lookup list are same - for eg. '345' & 345.
What is a general way to handle this in pandas? I could coerce the list of integers to strings but I would like to stay away from that. This is because I want my function to be able to handle non-integral values as well. I am not sure if coercing to strings would be safe then: for eg. for floats. | 0 | 1 | 54 |
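One hedged way to implement the "convert for filtering only" option listed above: build a numeric shadow of the column with pandas and keep the original data untouched (df and 'col' are placeholder names).

```python
import pandas as pd

lookup = [345, 1200]                                  # integer lookup list
as_num = pd.to_numeric(df["col"], errors="coerce")    # '345' -> 345, 'text' -> NaN
filtered = df[as_num.isin(lookup)]                    # original dtypes preserved in the result
```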
0 | 36,371,050 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2016-04-01T17:17:00.000 | 0 | 4 | 0 | Slice error when using MultiRNNCell | 36,362,190 | 0 | python,tensorflow | Seem like an invalid argument into embedding_rnn_decoder.
Maybe try to change enc_state:
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state[-1], e_cell, vocab_size, output_projection=(W, b), feed_previous=False) | I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how declare my MultiRNNCell
Code:
e_cell = rnn_cell.GRUCell(self.rnn_size)
e_cell = rnn_cell.MultiRNNCell([e_cell] * 2)
Later on I use it from inside seq2seq.embedding_rnn_decoder as follows
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state, e_cell, vocab_size, output_projection=(W, b), feed_previous=False)#
On doing this I get the following error
Error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 1024
[[Node: en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sigmoid_2, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/begin, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/size)]]
[[Node: en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient/_1230 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_11541_en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Has anyone seen a similar error? Any pointers? | 0 | 1 | 1,345 |
0 | 36,794,050 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2016-04-01T17:17:00.000 | 0 | 4 | 0 | Slice error when using MultiRNNCell | 36,362,190 | 0 | python,tensorflow | I've got a problem similar to yours: {tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 40}.
I also use rnn_cell.GRUCell(self.rnn_size).
I want to share my experience; maybe it's helpful.
Here is how I fixed it.
I wanted to use a GRU cell and a basic RNN cell, so I adapted a program from others that was written for an LSTM cell.
The difference between LSTM and GRU/BasicRNN is the state_size.
For the LSTM cell: def state_size(self): return 2 * self._num_units
For the GRU/BasicRNN cell: def state_size(self): return self._num_units
Therefore the shape of the state matrix is different, and the tensor does not fit the op. I advise you to check whether your code contains tf.slice. | I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how I declare my MultiRNNCell
Code:
e_cell = rnn_cell.GRUCell(self.rnn_size)
e_cell = rnn_cell.MultiRNNCell([e_cell] * 2)
Later on I use it from inside seq2seq.embedding_rnn_decoder as follows
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state, e_cell, vocab_size, output_projection=(W, b), feed_previous=False)#
On doing this I get the following error
Error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 1024
[[Node: en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sigmoid_2, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/begin, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/size)]]
[[Node: en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient/_1230 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_11541_en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Has anyone seen a similar error? Any pointers? | 0 | 1 | 1,345 |
0 | 38,411,187 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2016-04-01T17:17:00.000 | 0 | 4 | 0 | Slice error when using MultiRNNCell | 36,362,190 | 0 | python,tensorflow | This problem is occurring because you have doubled your GRU cell but your initial vector is not doubled.
If your initial_vector size is [batch_size, 50],
then use initial_vector = tf.concat(1, [initial_vector, initial_vector]).
Now pass this to the decoder as the initial vector. | I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how I declare my MultiRNNCell
Code:
e_cell = rnn_cell.GRUCell(self.rnn_size)
e_cell = rnn_cell.MultiRNNCell([e_cell] * 2)
Later on I use it from inside seq2seq.embedding_rnn_decoder as follows
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state, e_cell, vocab_size, output_projection=(W, b), feed_previous=False)#
On doing this I get the following error
Error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 1024
[[Node: en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sigmoid_2, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/begin, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/size)]]
[[Node: en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient/_1230 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_11541_en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Has anyone seen a similar error? Any pointers? | 0 | 1 | 1,345 |
0 | 36,363,565 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-01T18:34:00.000 | 1 | 1 | 0 | spark python product top 5 numbers from a file | 36,363,502 | 0.197375 | python,apache-spark | You distribute the data you read among nodes.
Every node finds its 5 local maxima.
You combine all the local maxima and keep the 5 largest of them,
which is the answer. | Total noob question: I have a file that contains a number on each line; there are approximately 5 million rows, each with a different number. How do I find the top 5 values in the file using Spark and Python? | 0 | 1 | 54 |
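In practice PySpark already ships the local-maxima-then-combine pattern the answer above describes; a sketch, assuming sc is an active SparkContext and numbers.txt holds one number per line.

```python
rdd = sc.textFile("numbers.txt").map(float)
print(rdd.top(5))                                 # the 5 largest values
# equivalently: rdd.takeOrdered(5, key=lambda x: -x)
```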
0 | 53,074,632 | 0 | 1 | 0 | 0 | 1 | false | 32 | 2016-04-01T21:18:00.000 | 0 | 5 | 0 | How to delete an object from a numpy array without knowing the index | 36,365,990 | 0 | python,list,numpy | arr = np.array(['a','b','c','d','e','f'])
Then
arr = arr[arr != 'e']  # boolean mask keeps every element except 'e' and stays a numpy array
I have seen that it is possible using the index of the object using the np.delete function, but I'm looking for a way to do it having the object but not its index.
Example:
[a,b,c,d,e,f]
x = e
I would like to delete x. | 0 | 1 | 51,262 |
0 | 61,442,480 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2016-04-03T00:16:00.000 | 0 | 2 | 0 | Sklearn PCA is pca.components_ the loadings? | 36,380,183 | 0 | python,scikit-learn,pca | This previous answer is mostly correct except about the loadings. components_ is in fact the loadings, as the question asker originally stated. The result of the fit_transform function will give you the principal components (the transformed/reduced matrix). | Sklearn PCA is pca.components_ the loadings? I am pretty sure it is, but I am trying to follow along a research paper and I am getting different results from their loadings. I can't find it within the sklearn documentation. | 0 | 1 | 9,712 |
0 | 36,386,315 | 0 | 0 | 0 | 0 | 2 | true | 8 | 2016-04-03T00:16:00.000 | 13 | 2 | 0 | Sklearn PCA is pca.components_ the loadings? | 36,380,183 | 1.2 | python,scikit-learn,pca | pca.components_ is the orthogonal basis of the space your projecting the data into. It has shape (n_components, n_features). If you want to keep the only the first 3 components (for instance to do a 3D scatter plot) of a datasets with 100 samples and 50 dimensions (also named features), pca.components_ will have shape (3, 50).
I think what you call the "loadings" is the result of the projection for each sample into the vector space spanned by the components. Those can be obtained by calling pca.transform(X_train) after calling pca.fit(X_train). The result will have shape (n_samples, n_components), that is (100, 3) for our previous example. | Sklearn PCA is pca.components_ the loadings? I am pretty sure it is, but I am trying to follow along a research paper and I am getting different results from their loadings. I can't find it within the sklearn documentation. | 0 | 1 | 9,712 |
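A short sketch of the two objects discussed above, on made-up data: components_ holds the basis (what the question calls loadings), while transform() returns the projected samples.

```python
import numpy as np
from sklearn.decomposition import PCA

X_train = np.random.randn(100, 50)        # 100 samples, 50 features (toy data)
pca = PCA(n_components=3).fit(X_train)

print(pca.components_.shape)              # (3, 50): the basis / loadings
print(pca.transform(X_train).shape)       # (100, 3): projected samples
```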
0 | 36,381,235 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-04-03T01:36:00.000 | 0 | 1 | 0 | How to cancel the huge negative effect of my training data distribution on subsequent neural network classification function? | 36,380,696 | 1.2 | python,machine-learning,neural-network | Assuming the NN is trained using mini-batches, it is possible to simulate (instead of generate) an evenly distributed training data by making sure each mini-batch is evenly distributed.
For example, assuming a 3-class classification problem and a minibatch size=30, construct each mini-batch by randomly selecting 10 samples per class (with repetition, if necessary). | I need to train my network on a data that has a normal distribution, I've noticed that my neural net has a very high tendency to only predict the most occurring class label in a csv file I exported (comparing its prediction with the actual label).
What are some suggestions (except cleaning the data to produce an evenly distributed training data), that would help my neural net to not go and only predict the most occurring label?
UPDATE: Just wanted to mention that, indeed the suggestions made in the comment sections worked. I, however, found out that adding an extra layer to my NN, mitigated the problem. | 0 | 1 | 82 |
0 | 36,395,078 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-04-04T03:38:00.000 | 1 | 2 | 0 | Pandas indexing confusion | 36,394,194 | 0.099668 | python,pandas | Use df.iloc[1] to select the second row of the dataframe (it uses zero based indexing). To select the second column, use df.iloc[:, 1] (the : is slice notation to select all rows). | While looking at indexing in pandas, I had some questions which should be simple enough. If df is a sufficiently long DataFrame, then df[1:2] gives the second row, however, df[1] gives an error and df[[1]] gives the second column. Why is that? | 0 | 1 | 432 |
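A tiny example of the positional indexers mentioned in the answer above:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

second_row = df.iloc[1]        # second row, returned as a Series
second_col = df.iloc[:, 1]     # second column, all rows
one_row_df = df[1:2]           # slicing returns a one-row DataFrame
```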
0 | 36,516,264 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-04T19:00:00.000 | 0 | 1 | 0 | Can the model object for a learner be exported with joblib? | 36,415,572 | 0 | python,orange | I don't understand what "exported with joblib" refers to, but you can save trained Orange models by pickling them, or with Save Classifier widget if you are using the GUI. | I'm evaluating orange as a potential solution to helping new entrants into data science to get started. I would like to have them save out model objects created from different algorithms as pkl files similar to how it is done in scikit-learn with joblib or pickle. | 0 | 1 | 61 |
0 | 36,431,809 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-04-05T16:09:00.000 | 0 | 5 | 0 | filter numpy array of datetimes by frequency of occurance | 36,431,659 | 0 | python,datetime,numpy,pandas,filtering | Sort your array
Count contiguous occurrences by going through it once, & filter for frequency >= 20
The running time is O(nlog(n)) whereas your list comprehension was probably O(n**2)... that makes quite a difference on 2 million entries.
Depending on how your data is structured, you might be able to sort only the axis and data you need from the numpy array that holds it. | I have an array of over 2 million records, each record has a 10 minutes resolution timestamp in datetime.datetime format, as well as several other values in other columns.
I only want to retain the records which have timestamps that occur 20 or more times in the array. What's the fastest way to do this? I've got plenty of RAM, so I'm looking for processing speed.
I've tried [].count() in a list comprehension but started to lose the will to live waiting for it to finish. I've also tried numpy.bincount() but tragically it doesn't like datetime.datetime
Any suggestions would be much appreciated.
Thanks! | 0 | 1 | 964 |
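One sorted-counting variant of the answer above, expressed with numpy; timestamps and records are placeholders for the 10-minute timestamps and the full table of rows they belong to.

```python
import numpy as np

# timestamps: 1-D array aligned with `records`, one timestamp per record
uniq, inverse, counts = np.unique(timestamps, return_inverse=True, return_counts=True)
mask = counts[inverse] >= 20          # True where the timestamp occurs 20+ times
frequent = records[mask]
```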
0 | 36,454,679 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-04-05T16:09:00.000 | 0 | 5 | 0 | filter numpy array of datetimes by frequency of occurance | 36,431,659 | 0 | python,datetime,numpy,pandas,filtering | Thanks for all of your suggestions.
I ended up doing something completely different with dictionaries in the end and found it much faster for the processing that I required.
I created a dictionary with a unique set of timestamps as the keys and empty lists as the values and then looped once through the unordered list (or array) and populated the value lists with the values that I wanted to count.
Thanks again! | I have an array of over 2 million records, each record has a 10 minutes resolution timestamp in datetime.datetime format, as well as several other values in other columns.
I only want to retain the records which have timestamps that occur 20 or more times in the array. What's the fastest way to do this? I've got plenty of RAM, so I'm looking for processing speed.
I've tried [].count() in a list comprehension but started to lose the will to live waiting for it to finish. I've also tried numpy.bincount() but tragically it doesn't like datetime.datetime
Any suggestions would be much appreciated.
Thanks! | 0 | 1 | 964 |
0 | 36,471,599 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-07T08:17:00.000 | 0 | 2 | 0 | Indexing a Numpy Array (4 dimensions) | 36,470,440 | 1.2 | python,arrays,numpy,matrix,indexing | I solved it by using np.squeeze(x) to remove the singleton dimensions. | I have a numpy array which is (1, 2048, 1, 1). I need to assign the first two dimensions to another numpy array which is (1, 2048), but I am confused on how to index it correctly. Hope you can help! | 0 | 1 | 96 |
0 | 36,484,420 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-04-07T15:08:00.000 | 0 | 2 | 0 | State Estimation of Steady Kalman Filter | 36,480,233 | 0 | python,math,simulation,kalman-filter | Steady state KF requires the initial state matches the steady state covariance. Otherwise, the KF could diverge. You can start using the steady state KF when the filter enters the steady state.
The steady-state Kalman filter can be used for systems with a multi-dimensional state. | I am working with a discrete Kalman Filter on a system.
x(k+1)=A_k x(k)+B_k u(k)
y(k)=C_k x(k)
I have estimated the state from the available noised y(k), which one is generated from the same system state equations with Reference Trajectory of the state. Then I have tested it with wrong initial state x0 and a big initial co-variance (simulation 1). I have noticed that the KF works very well, after a few steps the gain k quickly converges to a very small value near zero. I think it is may be caused by the process noise Q. I have set it small because the Q stands for the accuracy of the model.
Now I want to modify it to a steady state Kalman Filter. I used the steady gain from simulation-1 as constant instead of the calculation in every iteration. And then, the five equations can be simplified to one equation:
x^(k+1) = (I - KC) A x^(k) + (I - KC) B u(k) + K y(k+1)
I want to test it with same initial state and co-variance matrix as the one in simulation-1. But the result is very different from reference trajectory and even the result of simulation-1. I have tested it with the co-variance matrix p_infi, which is solved from the Discrete Riccati Equation:
k_infi = p_infi*C' / (C*p_infi*C' + R)
This neither works.
I am wondering-
How should I apply steady state KF and how should I set the initial state for it?
Is steady state KF used for scalar system?
Should I use it with LQ-controller or some others? | 0 | 1 | 764 |
0 | 36,666,450 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-04-07T15:08:00.000 | 1 | 2 | 0 | State Estimation of Steady Kalman Filter | 36,480,233 | 1.2 | python,math,simulation,kalman-filter | Let me first simplify the discussion to a filter with a fixed transition matrix, A rather then A_k above. When the Kalman filter reaches steady-state in this case, one can extract the gains and make a fixed-gain filter that utilizes the steady-state Kalman gains. That filter is not a Kalman filter, it is a fixed-gain filter. It's start-up performance will generally be worse than the Kalman filter. That is the price one pays for replacing the Kalman gain computation with fixed gains.
A fixed-gain filter has no covariance (P) and no Q or R.
Given A, C and Q, the steady-state gains can be computed directly. Using the discrete Kalman filter model, and setting the a-posteriori covarance matrix equal to the propagated a-posteriori covariance matrix from the prior measurement cycle one has:
P = (I - KC) (A P A^T + Q)
Solving that equation for K results in the steady-state Kalman gains for fixed A, Q and C.
Where is R? Well it has no role in the steady-state gain computation because the measurement noise has been averaged down in steady-state. Steady-state means the state estimate is as good as it can get with the amount of process noise (Q) we have.
If A is time-varying, much of this discussion does not hold. There is no assurance that a Kalman filter will reach steady-state. There may be no Pinf.
Another implicit assumption in using the steady-state gains from a Kalman filter in a fixed gain filter is that all of the measurements will remain available at the same rates. The Kalman filter is robust to measurement loss, the fixed-gain filter is not. | I am working with discrete Kalman Filter on a system.
x(k+1)=A_k x(k)+B_k u(k)
y(k)=C_k x(k)
I have estimated the state from the available noised y(k), which one is generated from the same system state equations with Reference Trajectory of the state. Then I have tested it with wrong initial state x0 and a big initial co-variance (simulation 1). I have noticed that the KF works very well, after a few steps the gain k quickly converges to a very small value near zero. I think it is may be caused by the process noise Q. I have set it small because the Q stands for the accuracy of the model.
Now I want to modify it to a steady state Kalman Filter. I used the steady gain from simulation-1 as constant instead of the calculation in every iteration. And then, the five equations can be simplified to one equation:
x^(k+1) = (I - KC) A x^(k) + (I - KC) B u(k) + K y(k+1)
I want to test it with same initial state and co-variance matrix as the one in simulation-1. But the result is very different from reference trajectory and even the result of simulation-1. I have tested it with the co-variance matrix p_infi, which is solved from the Discrete Riccati Equation:
k_infi = p_infi*C' / (C*p_infi*C' + R)
This neither works.
I am wondering-
How should I apply steady state KF and how should I set the initial state for it?
Is steady state KF used for scalar system?
Should I use it with LQ-controller or some others? | 0 | 1 | 764 |
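For the constant-matrix case discussed in the answer above, the steady-state covariance and gain can be computed in one shot with scipy; the matrices below are placeholders, and note the transposes that turn the control-form Riccati solver into the filtering form.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # example system matrices
C = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-4])                 # process noise covariance
R = np.array([[1e-2]])                    # measurement noise covariance

P = solve_discrete_are(A.T, C.T, Q, R)           # steady-state (a-priori) covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)     # K = P C' (C P C' + R)^-1
```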
0 | 36,489,509 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-07T16:32:00.000 | 1 | 2 | 0 | Ignore all mouse clicks on a matplotlib plot | 36,482,154 | 0.099668 | python,matplotlib,event-handling,mouseevent | I feel that this might be more easily resolved by altering the hardware - can you temporarily unplug the mouse, or tape over the track pad to stop people fiddling with it?
I suggest this because your crashing script will always process mouse-clicks in some way, and if you don't know what's causing the crashes then you may be better off just ensuring that there are no clicks. | I've recently built a python script that interacts with an Arduino and a piece of hardware that uses LIDAR to map out a room. Everything works great, but anytime you click on the plot that is generated with maptotlib, the computer freaks out and crashes the script that is running. This is partly because I was given a $300 computer to run this on, so it's not very powerful. However, I feel like even a $300 computer should be able to handle a mouse click.
How can I ignore mouse clicks entirely with matplotlib so that the computer doesn't freak out and crash the script?
If that's not the correct solution, what might be a better solution?
Edit: This is an interactive plotting session (sort of, I just replace the old data with the new data, there is no plot.ion() command called). So, I cannot just save the plot and show it. The Arduino transmits data constantly. | 0 | 1 | 175 |
0 | 36,650,716 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-09T16:01:00.000 | 0 | 1 | 0 | Already trained HMM model for word recognition | 36,519,225 | 1.2 | python,speech-recognition,cmusphinx,htk,autoencoder | I am not aware of any decoder that could help you. Speech recognition software does not work this way.
Usually such thing requires custom implementation for dynamic beam search. That is not a huge task, maybe 100 lines of code. It also depends on what your phonetic decoder produces. Is it phonetic lattice (ideally) or is it a 1-best result with scores or simply 1-best result without scores.
In case you have a proper lattice you might want to try openfst toolkit where you convert LM and dictionary to FST, then compose with lattice FST and then use fstbestpath to find the best path. Still, instead of all those phonetic conversions you can simply write a dynamic search.
In their projects, Baidu also convert speech to letters and then use a language model to fix the letter sequence. But they say that without the language model it works equally well. | I've implemented a phoneme classifier using an autoencoder (given an audio file array, it returns all the recognized phonemes). I want to extend this project so that word recognition is possible. Does there exist an already trained HMM model (in English) that will recognize a word given a list of phonemes?
Thanks everybody. | 0 | 1 | 354 |
0 | 36,532,299 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-04-10T16:05:00.000 | 2 | 2 | 0 | pixel value change after image rotate | 36,532,089 | 0.197375 | python,opencv,rotation | Yes, it is possible for the initial pixel value not to be found in the transformed image.
To understand why this would happen, remember that pixels are not infinitely small dots, but they are rectangles with horizontal and vertical sides, with small but non-zero width and height.
After a 13 degrees rotation, these rectangles (which have constant color inside) will not have their sides horizontal and vertical anymore.
Therefore an approximation needs to be made in order to represent the rotated image using pixels of constant color, with sides horizontal and vertical. | Is it possible the value pixel of image is change after image rotate? I rotate an image, ex, I rotate image 13 degree, so I pick a random pixel before the image rotate and say it X, then I brute force in image has been rotate, and I not found pixel value as same as X. so is it possible the value pixel can change after image rotate? I rotate with opencv library in python.
Any help would be appreciated. | 0 | 1 | 1,412 |
0 | 36,532,213 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-04-10T16:05:00.000 | -1 | 2 | 0 | pixel value change after image rotate | 36,532,089 | 1.2 | python,opencv,rotation | If you just rotate the same image plane the image pixels will remain same. Simple maths | Is it possible the value pixel of image is change after image rotate? I rotate an image, ex, I rotate image 13 degree, so I pick a random pixel before the image rotate and say it X, then I brute force in image has been rotate, and I not found pixel value as same as X. so is it possible the value pixel can change after image rotate? I rotate with opencv library in python.
Any help would be appreciated. | 0 | 1 | 1,412 |
0 | 37,579,943 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-04-11T23:47:00.000 | 0 | 1 | 1 | de-Bazel-ing TensorFlow Serving | 36,561,231 | 0 | python,tensorflow,tensorflow-serving | You are close, you need to update the environment as they do in this script
.../serving/bazel-bin/tensorflow_serving/example/mnist_export
I printed out the environment update, did it manually
export PYTHONPATH=...
then I was able to import tensorflow_serving | While I admire, and am somewhat baffled by, the documentation's commitment to mediating everything related to TensorFlow Serving through Bazel, my understanding of it is tenuous at best. I'd like to minimize my interaction with it.
I'm implementing my own TF Serving server by adapting code from the Inception + TF Serving tutorial. I find the BUILD files intimidating enough as it is, and rather than slogging through a lengthy debugging process, I decided to simply edit BUILD to refer to the .cc file, in lieu of also building the python stuff which (as I understand it?) isn't strictly necessary.
However, my functional installation of TF Serving can't be imported into python. With normal TensorFlow you build a .whl file and install it that way; is there something similar you can do with TF Serving? That way I could keep the construction and exporting of models in the realm of the friendly python interactive shell rather than editing it, crossing all available fingers, building in bazel, and then /bazel-bin/path/running/whatever.
Simply adding the directory to my PYTHONPATH has so far been unsuccessful.
Thanks! | 0 | 1 | 443 |
0 | 70,480,217 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-04-12T14:09:00.000 | 0 | 2 | 0 | Data Preprocessing Python | 36,575,776 | 0 | python,pandas,machine-learning | Always split your data into train and test sets to prevent overfitting.
If some of your features have a large scale and some don't, you should standardize the data. Make sure to fit the standardization on the train set only, so as not to cause overfitting.
You also have to look for missing data and replace or remove it.
If less than 0.5% of the data in a column is missing you can use 'dropna'; otherwise you have to replace it with something (zero, the mean, the previous value, ...).
You also have to check for outliers, for example with a boxplot.
Outliers are points that are significantly different from the other data in the same group and can affect your predictions in machine learning.
It is also best to check for multicollinearity.
If some features are correlated with each other, we have multicollinearity, which can cause wrong predictions from our model.
Finally, some of the columns might be categorical and should be converted to numerical values before using your data. | I have a DataFrame in Python and I need to preprocess my data. Which is the best method to preprocess the data, knowing that some variables have a huge scale and others don't? The data doesn't have huge deviance either. I tried the preprocessing.Scale function and it works, but I'm not sure at all if it is the best method to prepare for the machine learning algorithms. | 0 | 1 | 1,399 |
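A condensed sketch of the split / standardize / clean steps listed in the answer above, using scikit-learn and pandas; df and 'target' are placeholder names.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = df.dropna()                                    # or impute instead of dropping
X, y = df.drop(columns="target"), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)              # fit on the training set only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)
```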
0 | 36,582,371 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-12T19:26:00.000 | 1 | 2 | 0 | Numpy group scalars into arrays | 36,582,318 | 1.2 | python,arrays,numpy | You can do U[:, None, :] to add a new dimension to the array. | I have a numpy array U with shape (20, 50): 20 spatial points, in a space of 50 dimensions.
How can I transform it into a (20, 1, 50) array, i.e. 20 rows, 1 column, and each element is a 50 dimension point? Kind of encapsulating each row as a numpy array.
Context
The point is that I want to expand the array along the columns (actually, replicating the same array along the columns X times) using numpy.concatenate. But if I would do it straight away I would not get the result I want.
E.g., if I would expand it once along the columns, I would get an array with shape (20, 100). But what I would like is to access each element as a 50-dimensional point, so when I expand it I would expect to have a new U' with shape (20, 2, 50). | 0 | 1 | 71 |
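A quick sketch of the reshape-then-concatenate idea from the answer above, for the (20, 50) array described in the question:

```python
import numpy as np

U = np.random.randn(20, 50)                  # stand-in for the real array
U3 = U[:, None, :]                           # shape (20, 1, 50)
U_rep = np.concatenate([U3, U3], axis=1)     # shape (20, 2, 50)
# or, for k copies: np.repeat(U3, k, axis=1)
print(U3.shape, U_rep.shape)
```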
0 | 59,394,797 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-04-13T16:32:00.000 | 1 | 2 | 0 | Python function such as max() doesn't work in pyspark application | 36,604,460 | 0.099668 | python,pyspark | If you get this error even after verifying that you have NOT used from pyspark.sql.functions import *, then try the following:
Use import builtins as py_builtin
And then correspondingly call it with the same prefix.
Eg: py_builtin.max()
*Adding David Arenburg's and user3610141's comments as an answer, as that is what helped me fix my problem in Databricks, where there was a name collision between pyspark's min() and max() and the Python built-ins. | The Python function max(3,6) works under the pyspark shell. But if it is put in an application and submitted, it will throw an error:
TypeError: _() takes exactly 1 argument (2 given) | 0 | 1 | 2,764 |
0 | 36,609,298 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2016-04-13T18:15:00.000 | 1 | 1 | 0 | In python, If I perform an fft on complex data, then irfft only the positive frequencies, how does that affect the data? | 36,606,390 | 1.2 | python,numpy,fft,ifft | What you are doing is perfectly fine. You are generating the analytic signal to accommodate the negative frequencies in the same way a discrete Hilbert transform would. You will have some scaling issues - you need to double all the non-DC and non-Nyquist signals in the real frequency portion of the FFT results.
Some practical concerns are that this method imparts a delay of the window size, so if you are trying to do this in real-time you should probably examine using a FIR Hilbert transformer and the appropriate sums. The delay will be the group delay of the Hilbert transformer in that case.
Another item of concern is that you need to remember that the DC component of your signal will also shift along with all the other frequencies. As such I would recommend that you demean the data (save the value) before shifting, zero out the DC bin after you FFT the data (to remove whatever frequency component ended up in the DC bin), then add the mean back to preserve the signal levels at the end. | So I am trying to perform a frequency shift on a set of real valued points. In order to achieve a frequency shift, one has to multiply the data by a complex exponential, making the resulting data complex. If I multiply by just a cosine I get results at both the sum and difference frequencies. I want just the sum or the difference.
What I have done is multiply the data by a complex exponential, use fft.fft() to compute the fft, then used fft.irfft() on only the positive frequencies to obtain a real valued dataset that has only a sum or difference shift in frequency. This seems to work great, but I want to know if there are any cons to doing this, or maybe a more appropriate way of accomplishing the same goal. Thanks in advance for any help you can provide! | 0 | 1 | 581 |
0 | 69,174,336 | 0 | 0 | 0 | 0 | 1 | false | 63 | 2016-04-13T18:43:00.000 | 0 | 4 | 0 | How to set in pandas the first column and row as index? | 36,606,931 | 0 | python,python-3.x,pandas | Maybe try df = pd.read_csv(header = 0) | When I read in a CSV, I can say pd.read_csv('my.csv', index_col=3) and it sets the third column as index.
How can I do the same if I have a pandas dataframe in memory? And how can I say to use the first row also as an index? The first column and row are strings, rest of the matrix is integer. | 0 | 1 | 160,606 |
0 | 36,617,627 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-14T07:05:00.000 | 0 | 3 | 0 | Suggestions on Feature selection techniques? | 36,615,987 | 0 | python-3.x,machine-learning,data-analysis,feature-selection,data-science | You are already doing a lot of preprocessing. The only additional step I recommend is to normalize the values after PCA. Then your data should be ready to be fed into your learning algorithm.
Or do you want to avoid PCA? If the correlation between your features is not too strong, this might be ok. Then skip PCA and just normalize the values. | I am a student and a beginner in Machine Learning. I want to do feature selection of columns. My dataset is 50000 x 370 and it is a binary classification problem.
First I removed the columns with std. deviation = 0, then I removed duplicate columns. After that I checked the top 20 features with the highest ROC curve area. What should be the next step apart from doing PCA? Can anybody give a sequence of steps to be followed for feature selection? | 0 | 1 | 541 |
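A sketch of chaining the post-PCA normalization suggested in the answer above into a pipeline, so the test fold is always transformed exactly like the training fold; the component count is arbitrary and X_train is a placeholder for the 50000 x ~370 matrix.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

pipe = make_pipeline(StandardScaler(), PCA(n_components=20), StandardScaler())
X_reduced = pipe.fit_transform(X_train)     # X_train: the raw feature matrix (placeholder)
```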
0 | 36,628,839 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-04-14T15:17:00.000 | 1 | 1 | 0 | Converting a matrix into an image in Python | 36,627,362 | 1.2 | python,image,numpy,matrix,matplotlib | Solved using scipy library
import scipy.misc
...(code)
scipy.misc.imsave(name,array,format)
or
scipy.misc.imsave('name.ext',array) where ext is the extension and hence determines the format at which the image will be stored. | I would like to save a numpy matrix as a .png image, but I cannot do so using the matplotlib since I would like to retain its original size (which apperently the matplotlib doesn't do so since it adds the scale and white background etc). Anyone knows how I can go around this problem using numpy or the PIL please? Thanks | 0 | 1 | 617 |
0 | 36,644,976 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-14T17:40:00.000 | 1 | 2 | 0 | RDD from label Array and data Array in python/spark | 36,630,260 | 0.099668 | python,apache-spark,pyspark | Spark have a function takeSample which can merge two RDD in to an RDD. | I have two python arrays of the same length. They are generated from reading two separate text files. One represents labels; let it be called "labelArray". The other is an array of data arrays; let it be called "dataArray". I want to turn them into an RDD object of LabeledPoint. How can I do this? | 0 | 1 | 1,437 |
0 | 38,653,850 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-15T05:37:00.000 | 0 | 1 | 0 | G++ not detected | 36,639,002 | 0 | python,machine-learning,g++,theano | On Windows, you need to install mingw to support g++. Usually, it is advisable to use Anaconda distribution to install Python. Theano works with Python3.4 or older versions. You can use conda install command to install mingw. | I am working on some neural networks. But my dataset is same 95 features and about 120 datasets.
While importing Theano I get a warning that g++ is not detected and that this will degrade performance.
Will this affect even a small dataset?
I will have 2-3 hidden layers.
The shape of my neural network will be (95, 200, 200, 4).
I hope to hear from you. | 0 | 1 | 112 |
0 | 36,647,234 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-15T12:27:00.000 | 1 | 1 | 0 | Python: Deep neural networks | 36,647,169 | 0.197375 | python,machine-learning,neural-network,deep-learning,nolearn | Try increasing the number of hidden units and the learning rate. The power of neural networks comes from the hidden layers. Depending on the size of your dataset, the number of hidden layers can go upto a few thousands. Also, please elaborate on the kind, and number of features you're using. If the feature set is small, you're better off using SVMs and RandomForests instead of neural networks. | I am currently working on some project related to machine learning.
I extracted some features from the object.
So I trained and tested those features with NB, SVM and other classification algorithms and got results of about 70 to 80%.
When I trained the same features with neural networks using nolearn.dbn and then tested them, I got only about 25% correctly classified. I had 2 hidden layers.
I still don't understand what is wrong with neural networks.
I hope to have some help.
Thanks | 0 | 1 | 473 |
0 | 68,226,844 | 0 | 0 | 0 | 0 | 1 | false | 27 | 2016-04-16T17:45:00.000 | 1 | 5 | 0 | How to create a series of numbers using Pandas in Python | 36,667,548 | 0.039979 | python,python-3.x,pandas,range,series | try pd.Series([0 for i in range(20)]).
It will create a pd series with 20 rows | I am new to python and have recently learnt to create a series in python using Pandas. I can define a series eg: x = pd.Series([1, 2, 3, 4, 5]) but how to define the series for a range, say 1 to 100 rather than typing all elements from 1 to 100? | 0 | 1 | 69,375 |
0 | 60,068,593 | 0 | 0 | 0 | 0 | 1 | false | 17 | 2016-04-16T19:06:00.000 | 0 | 4 | 0 | Change default GPU in TensorFlow | 36,668,467 | 0 | python,tensorflow | If you want to run your code on the second GPU,it assumes that your machine has two GPUs, You can do the following trick.
open Terminal
open tmux by typing tmux (you can install it by sudo apt-get install tmux)
run this line of code in tmux: CUDA_VISIBLE_DEVICES=1 python YourScript.py
Note: By default, tensorflow uses the first GPU, so with above trick, you can run your another code on the second GPU, separately.
Hope it would be helpful!! | Based on the documentation, the default GPU is the one with the lowest id:
If you have more than one GPU in your system, the GPU with the lowest
ID will be selected by default.
Is it possible to change this default from command line or one line of code? | 0 | 1 | 32,954 |
0 | 36,686,758 | 0 | 1 | 0 | 0 | 1 | false | 9 | 2016-04-18T04:11:00.000 | 1 | 3 | 0 | Ignoring non-numerical string values in pandas dataframe | 36,685,347 | 0.066568 | python,pandas | you can use df._get_numeric_data() directly. | I have a DataFrame in which a column might have three kinds of values, integers (12331), integers as strings ('345') or some other string ('text').
Is there a way to drop all rows with the last kind of string from the dataframe, and convert the first kind of string into integers? Or at least some way to ignore the rows that cause type errors if I'm summing the column.
This dataframe is from reading a pretty big CSV file (25 GB), so I'd like some solution that would work when reading in chunks. | 0 | 1 | 17,034 |
0 | 36,693,072 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-04-18T07:35:00.000 | 0 | 1 | 0 | similarity measure scikit-learn document classification | 36,687,929 | 1.2 | python-2.7,scikit-learn,text-classification | As with most supervised learning algorithms, Random Forest Classifiers do not use a similarity measure, they work directly on the feature supplied to them. So decision trees are built based on the terms in your tf-idf vectors.
If you want to use similarity then you will have to compute a similarity matrix for your documents and use this as your features. | I am doing some work in document classification with scikit-learn. For this purpose, I represent my documents in a tf-idf matrix and feed a Random Forest classifier with this information, works perfectly well. I was just wondering which similarity measure is used by the classifier (cosine, euclidean, etc.) and how I can change it. Haven't found any parameters or informatin in the documentation.
Thanks in advance! | 0 | 1 | 350 |
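If you do want similarity-based features as the answer above suggests, one hedged option is to hand the classifier a document-to-document cosine similarity matrix built from the tf-idf vectors; docs is a placeholder list of raw texts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tfidf = TfidfVectorizer().fit_transform(docs)     # docs: list of document strings
sim = cosine_similarity(tfidf)                    # (n_docs, n_docs) similarity matrix
# `sim` (or selected columns of it) can then be used as the feature matrix.
```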
0 | 36,728,171 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-04-19T14:27:00.000 | 4 | 2 | 1 | ./build/tools/caffe: No such file or directory | 36,721,348 | 0.379949 | bash,python-2.7,machine-learning,neural-network,deep-learning | Follow the below instructions and see if it works:
Open a terminal
cd to caffe root directory
Make sure the file caffe exists by listing them using ls ./build/tools
If the file is not present, type make. Running step 3 will list the file now.
Type ./build/tools/caffe, No such file error shouldn't get triggered this time. | I have a question regarding the command for running the training in Linux. I am using GoogleNet model in caffe framework for binary classification of my images. I used the following command to train my dataset
./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt
But I received this error
bash: ./build/tools/caffe: No such file or directory
How can I resolve this error? Any suggestions would be of great help. | 0 | 1 | 7,193 |
0 | 36,724,914 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2016-04-19T14:27:00.000 | 2 | 2 | 1 | ./build/tools/caffe: No such file or directory | 36,721,348 | 1.2 | bash,python-2.7,machine-learning,neural-network,deep-learning | You should specify absolute paths to all your files and commands, to be on the safer side. If /home/user/build/tools/caffe train still doesn't work, check if you have a build directory in your caffe root. If not, then use /home/user/tools/caffe train instead. | I have a question regarding the command for running the training in Linux. I am using GoogleNet model in caffe framework for binary classification of my images. I used the following command to train my dataset
./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt
But I received this error
bash: ./build/tools/caffe: No such file or directory
How can I resolve this error? Any suggestions would be of great help. | 0 | 1 | 7,193 |
0 | 44,015,767 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-19T17:25:00.000 | 0 | 2 | 0 | scanning plot through a large data file using python | 36,725,361 | 0 | python-2.7,matplotlib,plot | Something that has worked for me in a similar problem (time varying heat-maps) was to run a batch job of producing several thousands such plots over night, saving each as a separate image. At 10s a figure, you can produce 3600 in 10h. You can then simply scan through the images which could provide you with the insight you're looking for. | I have a large (10-100GB) data file of 16-bit integer data, which represents a time series from a data acquisition device. I would like to write a piece of python code that scans through it, plotting a moving window of a few seconds of this data. Ideally, I would like this to be as continuous as possible.
The data is sampled at 4MHz, so to plot a few seconds of data involves plotting ~10 million data points on a graph. Unfortunately I cannot really downsample since the features I want to see are sparse in the file.
matplotlib is not really designed to do this. It is technically possible, and I have a semi-working matplotlib solution which allows me to plot any particular time window, but it's far too slow and cumbersome to do a continuous scan of incrementally changing data - redrawing the figure takes several seconds, which is far too long.
Can anyone suggest a python package or approach do doing this? | 0 | 1 | 373 |
0 | 36,814,562 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-20T08:50:00.000 | 0 | 1 | 0 | Implementations of algorithms without imputation of missing values | 36,738,514 | 0 | algorithm,python-2.7,machine-learning,scikit-learn,missing-data | AFAIK scikit-learn doesn't have ML algorithms that can work with missing values without preprocessing them first. R does though. | I would like to know if there are any implementations of machine learning algorithms in python which can work even if there are missing values in the dataset. Please note that I don't want algorithms imputing the missing values first.(I could have done that using the Imputer package ). I would like to know about the implementation of algorithms which work even if there are missing values present in the dataset without imputation. | 0 | 1 | 76 |
0 | 54,928,184 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-04-20T13:54:00.000 | 0 | 3 | 0 | Building your own NLP API | 36,746,071 | 0 | python,node.js,nlp,chatbot | Two things to think about are: How are you planning on handling the generation side of things? Entity extraction and classification are going to be useful for the Natural language understanding (NLU) side of things, but generation can be tricky in itself.
Another thing to think about is that the training and development of the pipeline of these models is often a separate problem form the deployment. The fact that you want to use node suggests that you already know about deploying software, I think. But remember that deploying large machine learning models in a pipeline can be complicated, and I suspect that these API's may offer neatly packaged pipelines for you. | I'm building a chatbot and I'm new to NLP.
(api.ai & AlchemyAPI are too expensive for my use case. And wit.ai seems to be buggy and constantly changing at the moment.)
For the NLP experts, how easily can I replicate their services locally?
My vision so far (with node, but open to Python):
entity extraction via StanfordNER
intent via NodeNatural's LogisticRegressionClassifier
training UI with text and validate/invalidate buttons (any prebuilt tools for this?)
Are entities and intents all I'll need for a chatbot? How good will NodeNatural/StanfordNER be compared to NLP-as-a-service? What headaches am I not seeing? | 0 | 1 | 2,286 |
0 | 36,749,915 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-20T14:09:00.000 | 0 | 1 | 0 | Use error message with numpy.testing.assert_raises() | 36,746,446 | 0 | python,unit-testing,numpy | These functions are implemented in numpy/testing/utils.py. Studying that code may be your best option.
I see that assert_raises passes the task on to nose.tools.assert_raises(*args,**kwargs). So it depends on what that does. And if I recall use of this in other modules correctly, you are usually more interested in the error message raised by the Error, as opposed to displaying your own. Remember, unittests are more for your own diagnostic purposes, not as a final user-friendly tool.
assert_equal is a complex function that tests various kinds of objects, and builds the error message accordingly. It may including information of the objects.
Choices in this part of the code were determined largely by what has been useful to the developers. They are written primarily to test the numpy code itself. So being systematic is not a priority. | Contrary to np.testing.assert_equal(), np.testing.assert_raises() does not accept an err_msg parameter. Is there a clean way to display an error message when this assert fails?
More generally, why do some assert_* methods accept this parameter, while some others don't? | 0 | 1 | 173 |
0 | 36,754,207 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2016-04-20T15:13:00.000 | 3 | 1 | 0 | 1 producer, 1 consumer, only 1 piece of data to communicate, is queue an overkill? | 36,748,120 | 1.2 | python,python-3.x,pandas,multiprocessing,interprocess | To me, the most important thing you mentioned is this:
It is VERY CRITICAL that the consumer catches every single DataFrame the producer produces.
So, let's suppose you used a variable to store the DataFrame. The producer would set it to the produced value, and the consumer would just read it. That would work very fine, I guess.
But what would happen if somehow the consumer got blocked by more than one producing cycle? Then some old value would be overwritten before reading. And that's why I think a (thread-safe) queue is the way to go almost "by definition".
Besides, beware of premature optimization. If it works for your case, excellent. If some day, for some other case, performance comes to be a problem, only then you should spend the extra work, IMO. | This question is related to Python Multiprocessing. I am asking for a suitable interprocess communication data-structure for my specific scenario:
My scenario
I have one producer and one consumer.
The producer produces a single fairly small panda Dataframe every 10-ish secs, then the producer puts it on a python.multiprocess.queue.
The consumer is a GUI polling that python.multiprocess.queue every 100ms. It is VERY CRITICAL that the consumer catches every single DataFrame the producer produces.
My thinking
python.multiprocess.queue is serving the purpose (I think), and amazingly simple to use! (praise the green slithereen lord!). But I am clearly not utilizing queue's full potential with only one producer one consumer and a max of one item on the queue. That leads me to believe that there is simpler thing than queue. I tried to search for it, I got overwhelmed by options listed in: python 3.5 documentation: 18. Interprocess Communication and Networking. I am also suspecting there may be a way not involving interprocess communication data-structure at all for my need.
Please Note
Performance is not very important
I will stick with multiprocessing for now, instead of multithreading.
My Question
Should I be content with queue? or is there a more recommended way? I am not a professional programmer, so I insist on doing things the tried and tested way.
I also welcome any suggestions of alternative ways of approaching my problem.
Thanks | 0 | 1 | 300 |
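A bare-bones sketch of the producer / polling-consumer pattern discussed above; make_dataframe and handle are hypothetical stand-ins for the real production and GUI-update code.

```python
import multiprocessing as mp
import queue  # only for the Empty exception

def producer(q):
    while True:
        df = make_dataframe()       # hypothetical: builds the small DataFrame every ~10 s
        q.put(df)

def poll(q):
    # called by the GUI every ~100 ms; drains everything that arrived since the last poll
    try:
        while True:
            handle(q.get_nowait())  # hypothetical GUI update
    except queue.Empty:
        pass

if __name__ == "__main__":
    q = mp.Queue()
    mp.Process(target=producer, args=(q,), daemon=True).start()
```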
0 | 36,783,492 | 1 | 0 | 0 | 0 | 1 | false | 1 | 2016-04-20T15:52:00.000 | 0 | 1 | 0 | Pandas: datareader unable to get historical stock data | 36,749,105 | 0 | python-2.7,pandas,datareader,google-finance,pandas-datareader | That URL is a 404 - pandas isn't at fault, maybe just check the URL? Perhaps they're on different exchanges with different google finance support. | I found that some of the stock exchanges is not supported for datareader. Example, Singapore. Any workaround?
query = web.DataReader(("SGX:BLA"), 'google', start, now) returns this error:
IOError: after 3 tries, Google did not return a 200 for url 'http://www.google.com/finance/historical?q=SGX%3ABLA&startdate=Jan+01%2C+2015&enddate=Apr+20%2C+2016&output=csv
It works for IDX indonesia
query = web.DataReader(("IDX:CASS"), 'google', start, now) | 1 | 1 | 850 |
0 | 36,758,704 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-20T23:47:00.000 | 0 | 1 | 0 | Can you use counts in sklearn logistic regression input? | 36,757,158 | 0 | python,scikit-learn,logistic-regression,bernoulli-probability | If they are categorical, you should provide a binarized version of them. I don't know how that code in R works, but you should always binarize your categorical features, because you have to emphasize that each value of a feature is not related to the others; i.e. for a feature "blood_type" with possible values 1, 2, 3, 4 your classifier must learn that 2 is not related to 3, and 4 is not related to 1 in any sense. This is achieved by binarization.
If you have too many features after binarization, you can reduce the dimensionality of the binarized dataset with FeatureHasher or more sophisticated methods like PCA. | So, I know that in R you can provide data for a logistic regression in this form:
model <- glm( cbind(count_1, count_0) ~ [features] ..., family = 'binomial' )
Is there a way to do something like cbind(count_1, count_0) with sklearn.linear_model.LogisticRegression? Or do I actually have to provide all those duplicate rows? (My features are categorical, so there would be a lot of redundancy.) | 0 | 1 | 308 |
0 | 36,870,610 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-21T03:24:00.000 | 1 | 1 | 0 | tensorflow sequence to sequence without softmax | 36,759,037 | 1.2 | python,tensorflow | The model_with_buckets() function in seq2seq.py returns 2 tensors: the output and the losses. The outputs variable contains the raw output of the decoder that you're looking for (that would normally be fed to the softmax). | I was using Tensorflow sequence to sequence example code. for some reason, I don't want to add softmax to output. instead, I want to get the raw output of decoder without softmax. I was wondering if anyone know how to do it based on sequence to sequence example code? Or I need to create it from scratch or modify the the seq2seq.py (under the /tensorflow/tensorflow/python/ops/seq2seq.py)?
Thank you | 0 | 1 | 389 |
0 | 36,780,531 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-04-21T20:12:00.000 | 2 | 3 | 0 | remove known exact row in huge csv | 36,779,522 | 0.132549 | python,r,csv | use sed '2636759d' file.csv > fixedfile.csv
As a test for a 40,001 line 1.3G csv, removing line 40,000 this way takes 0m35.710s. The guts of the python solution from @en_Knight (just stripping the line and writing to a temp file) is ~ 2 seconds faster for this same file.
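That streaming idea is only a few lines in Python (a minimal sketch; the target row number comes from the question and is counted 1-based):
bad_row = 2636759
with open('file.csv') as src, open('fixedfile.csv', 'w') as dst:
    for i, line in enumerate(src, start=1):
        if i != bad_row:
            dst.write(line)   # every line except the unwanted one is copied through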
edit OK sed (or some implementations) may not work (based on feedback from questioner)
In plain bash, to remove row n from a file of N rows, file.csv, you can do head -[n-1] file.csv > file_fixed.csv followed by tail -[N-n] file.csv >> file_fixed.csv (in both commands the expression in brackets is replaced by a plain number).
To do this, though you need to know N. The python solution is better... | I have a ~220 million row, 7 column csv file. I need to remove row 2636759.
This file is 7.7GB, more than will fit in memory. I'm most familiar with R, but could also do this in python or bash.
I can't read or write this file in one operation. What is the best way to build this file incrementally on disk, instead of trying to do this all in memory?
I've tried to find this on SO but have only been able to find how to do this with files that are small enough to read/write in memory, or with rows that are at the beginning of the file. | 0 | 1 | 1,468 |
0 | 36,822,368 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-23T22:32:00.000 | 3 | 1 | 0 | How does Support Vector Machine deal with confusing feature vectors? | 36,817,217 | 1.2 | python,machine-learning,svm,feature-extraction | Yes, it will affect the performance of the SVM. It seems your test vectors are just scaled versions of your training vectors. The SVM has no way of knowing that the scaling is irrelevant in your case (unless you present it with a lot of differently scaled training vectors).
A common practice for feature vectors where the scaling is irrelevant is to scale all the test and train vectors to a common length. | Imagine I have the following feature vectors:
Training vectors:
Class 1:
[ 3, 5, 4, 2, 0, 3, 2],
[ 33, 50, 44, 22, 0, 33, 20]
Class 2:
[ 1, 2, 3, 1, 0, 0, 4],
[ 11, 22, 33, 11, 0, 0, 44]
Testing vectors:
Class 1:
[ 330, 550, 440, 220, 0, 330, 200]
Class 2:
[ 110, 220, 333, 111, 0, 0, 444]
I am using SVM, which learns from the training vectors and then classifies the test samples.
As you can see the feature vectors have very different dimensions: the training set features are very low value numbers and the test set vectors are very high value numbers.
My question is whether it is confusing for SVM to learn from such feature vectors?
Of course when I do vector scaling the difference is still there:
for example after applying standardScaler() on the feature vectors for Class 1:
Training:
[ 0.19 1.53 0.86 -0.48 -1.82 0.19 -0.48]
[ 20.39 31.85 27.80 12.99 -1.82 20.39 11.64]
Test:
[ 220.45 368.63 294.54 146.35 -1.82 220.45 132.88]
Basically, this is a real world problem, and I am asking this since I have developed a way to pre-scale those feature vectors for my particular case.
So after I would use my pre-scaling method, the feature vectors for Class 1 would become:
Training:
[ 3. 5. 4. 2. 0. 3. 2.]
[ 2.75 4.16666667 3.66666667 1.83333333 0. 2.75
1.66666667]
Test:
[ 2.84482759 4.74137931 3.79310345 1.89655172 0. 2.84482759
1.72413793]
which makes them very similar in nature.
This looks even better when standardScaler() is applied onto the pre-scaled vectors:
Training:
[ 0.6 1. 0.8 0.4 0. 0.6 0.4]
[ 0.55 0.83333333 0.73333333 0.36666667 0. 0.55
0.33333333]
Test:
[ 0.56896552 0.94827586 0.75862069 0.37931034 0. 0.56896552
0.34482759]
The ultimate question is whether my pre-scaling method is going to help the SVM in any way? This is more of a theoretical question, any insight into this is appreciated. | 0 | 1 | 62 |
0 | 36,827,834 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-04-24T18:24:00.000 | 0 | 6 | 0 | Python/Numpy fast way to selecting every nth chunk in list | 36,827,155 | 0 | python,arrays,list,numpy,slice | A simple list comprehension can do the job:
[ L[i] for i in range(len(L)) if i%3 != 2 ]
For chunks of size n
[ L[i] for i in range(len(L)) if i%(n+1) != n ] | Edited for the confusion in the problem, thanks for the answers!
My original problem was that I have a list [1,2,3,4,5,6,7,8], and I want to select every chunk of size x with a gap of one. So if I want to select every other chunk of size 2, the outcome would be [1,2,4,5,7,8]. A chunk size of three would give me [1,2,3,5,6,7].
I've searched a lot on slicing and I couldn't find a way to select chunks instead of elements. Making multiple slice operations and then joining and sorting seems a little too expensive. The input can either be a python list or numpy ndarray. Thanks in advance. | 0 | 1 | 1,225
0 | 44,995,504 | 0 | 0 | 0 | 0 | 2 | false | 18 | 2016-04-25T17:17:00.000 | 5 | 7 | 0 | What numbers that I can put in numpy.random.seed()? | 36,847,022 | 0.141893 | python,numpy-random | What is normally called a random number sequence in reality is a "pseudo-random" number sequence because the values are computed using a deterministic algorithm and probability plays no real role.
The "seed" is a starting point for the sequence and the guarantee is that if you start from the same seed you will get the same sequence of numbers. This is very useful for example for debugging (when you are looking for an error in a program you need to be able to reproduce the problem and study it, a non-deterministic program would be much harder to debug because every run would be different). | I have noticed that you can put various numbers inside of numpy.random.seed(), for example numpy.random.seed(1), numpy.random.seed(101). What do the different numbers mean? How do you choose the numbers? | 0 | 1 | 20,730 |
0 | 60,006,676 | 0 | 0 | 0 | 0 | 2 | false | 18 | 2016-04-25T17:17:00.000 | 0 | 7 | 0 | What numbers that I can put in numpy.random.seed()? | 36,847,022 | 0 | python,numpy-random | One very specific answer: np.random.seed can take values from 0 and 2**32 - 1, which interestingly differs from random.seed which can take any hashable object. | I have noticed that you can put various numbers inside of numpy.random.seed(), for example numpy.random.seed(1), numpy.random.seed(101). What do the different numbers mean? How do you choose the numbers? | 0 | 1 | 20,730 |
0 | 36,857,440 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2016-04-26T06:09:00.000 | 1 | 3 | 0 | OpenCV to recognize image using python | 36,856,532 | 1.2 | python,opencv | Regarding your question about the haar cascades. You can use them to classify the images the way you want:
Train two haar cascades, one for cars and one for bikes. Both cascades will return a value of how certain they are that the image contains the object they were trained for. If both are uncertain, the image probably contains nothing. Otherwise you take the class with the higher certainty as the content of the image.
I am using OpenCV 3.1 and python 2.7.
I have 5 images of bikes and 5 images of cars.
I want to find out, given any image, whether it is a car or a bike.
On the internet I found out that we can train using haar cascades,
but most of the examples train only one class: the user will train only on car images,
and then with a query image they will try to find out whether it is a car or not,
but I want to check if it is a car, a bike, or nothing.
I want to match images based on shape of objects.
Another option I was thinking of is to take the query image and compare it with the stored images, giving the result depending upon similarity. But I know this would take a longer time, which would not be good.
Are there any better options? There is also template matching, but I don't know which option would be better for this kind of solution since I don't have much knowledge about OpenCV. | 0 | 1 | 1,017
0 | 36,976,133 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-26T08:49:00.000 | 0 | 1 | 0 | How to extract the frequencies associated with fft2 values in numpy? | 36,859,840 | 1.2 | python,numpy,fft | Yes. Apply fftfreq to each spatial vector (x and y) separately. Then create a meshgrid from those frequency vectors.
Note that you need to use fftshift if you want the typical representation (zero frequencies in center of spatial spectrum) to both the output and your new spatial frequencies (before using meshgrid). | I know that for fft.fft, I can use fft.fftfreq. But there seems to be no such thing as fft.fftfreq2. Can I somehow use fft.fftfreq to calculate the frequencies in 2 dimensions, possibly with meshgrid? Or is there some other way? | 0 | 1 | 343 |
0 | 36,864,312 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-04-26T11:49:00.000 | 0 | 1 | 0 | sklearn's feature importances_ | 36,864,116 | 0 | python,scikit-learn | It means those words are "strongly associated" with one of the responses, in your case probably illegal(1). Depending on your classifier, the exact technical definition of strongly associated will vary. It could be the joint probability of the word and response, P(X='theft', Y='illegal'), or it could be the conditional probabilityP(X='theft' | Y='illegal').
Intuitively, whenever these terms appear in a document, the probability of that document belonging to the illegal category is increased. | I am just curious on the interpretation of sklearn's feature_importances_ attribute. I know that the features with highests coefficients are the features that would highly predict the outcome. My question is - Are these the features strongly predictive to return a 1 (or yes) or not necessarily? (Supervised Learning - Binary response - yes(1) or no(0)).
For example, after building the predictive model, I found out that these words are the top features - insider-trading, theft, embezzlement, investment. The response is 'illegal'(1) or 'legal'(0).
Does it mean that when a certain text has those words, there's a huge chance it's illegal or not necessarily? And, it just simply means that the value of these words would lead to a strong prediction (either illegal or legal). Appreciate any answer to such. | 0 | 1 | 213 |
0 | 36,866,111 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-04-26T12:23:00.000 | 1 | 2 | 0 | Defining dtype of df.to_sparse() result | 36,864,863 | 0.099668 | python,pandas | In short. No.
You see, dtypes is not a pandas controlled entity. Dtypes is typically a numpy thing.
Dtypes are not controllable in any way, they are automagically asserted by numpy and can only change when you change the data inside the dataframe or numpy array.
That being said, the typical reason for ending up with a float instead of an int as a dtype is because of the introduction of NaN values into the series or numpy array. This is a pandas gotcha some say. I personally would argue it is due to the (too) close coupling between pandas and numpy.
In general, dtypes should never be trusted for anything, they are incredibly unreliable. I think everyone working with numpy/pandas would live a better life if they were never exposed to dtypes at all.
If you really really hate floats, the only other option for you as far as I know is to use string representations, which of course causes even more problems in most cases. | I have a dataframe df which is sparse and for memory efficiency I wish to convert it using to_sparse()
However it seems that the new representation ends up with the dtype=float64, even when my df is dtype=int8.
Is there a way specify the data type/ prevent auto conversion to dtype=float64 when using to_sparse() ? | 0 | 1 | 141 |
0 | 36,892,239 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-26T14:29:00.000 | 1 | 1 | 0 | How to use agglometative clustering with 1 dimensional array valueset? | 36,867,924 | 1.2 | python,arrays,scikit-learn | Make the array into a column: use x[:, np.newaxis] instead of x | I have around 7k samples and 11 features which I concentrated into one. This concentrated value I call ResVal and is a weighted sum of previous features. Then I gathered these ResVals into 1D array.
Now I want to cluster this results with AgglomerativeClustering but console complains about 1D array.
How can I fix it and get cluster results by line number? | 0 | 1 | 28 |
0 | 36,876,832 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-26T21:48:00.000 | 1 | 1 | 0 | Using cluster analysis as alternative to point in polygon assignment | 36,876,389 | 1.2 | python,scikit-learn | Discriminant analysis (a.k.a. supervised classification) is the way to go. You fit the model using the coordinates of the points and the information on the area they belong to. As a result, you obtain a model you can use to predict the area for new points as they become known. Linear discriminant analysis is one of the simplest algorithms. | I'm interested in approaching the point-in-polygon assignment problem from another direction. I have a dataframe containing a series of coordinates, known to be in a certain polygon (administrative area). I have other dataframes with coordinates not assigned to any admin area. Would using SciKit offer an alternate means to assign these to the admin areas?
Example:
I know (x, y) point 1 is in admin area a if (x, y) point 2 is within specified radius of point (1, i) can assign it to the same admin area. Does this approach sound viable? | 0 | 1 | 152 |
0 | 36,878,371 | 0 | 0 | 0 | 0 | 1 | false | 24 | 2016-04-27T00:19:00.000 | 1 | 5 | 0 | Python: Add a column to numpy 2d array | 36,878,089 | 0.039979 | python,arrays,numpy | I think the numpy method column_stack is more interesting because you do not need to create a column numpy array to stack it in the matrix of interest. With the column_stack you just need to create a normal numpy array. | I have a 60000 by 200 numpy array. I want to make it 60000 by 201 by adding a column of 1's to the right. (so every row is [prev, 1])
Concatenate with axis = 1 doesn't work because it seems like concatenate requires all input arrays to have the same dimension.
How should I do this? I can't find any existing useful answer, and most of the answers about this were written a few years ago so things might be different now. | 0 | 1 | 64,625 |
0 | 60,298,484 | 0 | 0 | 0 | 0 | 1 | false | 44 | 2016-04-27T15:24:00.000 | 0 | 4 | 0 | How to get a normal distribution within a range in numpy? | 36,894,191 | 0 | python,numpy,random,machine-learning,normal-distribution | You can subdivide your target range into equal partitions, calculate the integral of the normal density over each partition (and over the whole range), and then sample uniformly within each partition, choosing partitions in proportion to their areas.
The integration step can be done in Python, for example:
from scipy.integrate import quad_vec
from scipy.stats import norm
quad_vec(norm.pdf, 1, 4, points=[0.5, 2.5, 3, 4], full_output=True) | In a machine learning task, we need a group of random numbers drawn from a normal distribution with a bound. We can get a normal distribution number with np.random.normal() but it doesn't offer any bound parameter. I want to know how to do that? | 0 | 1 | 43,386
0 | 36,908,550 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-04-27T15:31:00.000 | 1 | 2 | 0 | What is a good way to extract dominant colors from image without the shadow? | 36,894,358 | 0.099668 | python,opencv,image-processing,machine-learning,computer-vision | If the shadows cover a significant part of the image then this problem is non-trivial.
If the shadow is a small fraction of the area you're interested though you could try using k-medoids instead of k-means and as Piglet mentioned using a different color space with separate chromaticity and luminance channels may help. | Is it possible to extract the 'true' color of building façade from a photo/ a set of similar photos and removing the distraction of shadow? Currently, I'm using K-means clustering to get the dominant colors, however, it extracts darker colors (if the building is red, then the 1st color would be dark red) as there are lots of shadow areas in real photos.
Any suggestions are greatly appreciated!
Thanks in advance! | 0 | 1 | 383 |
0 | 36,946,734 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-27T21:32:00.000 | 1 | 2 | 0 | obtaining the min value of time_diff for a Pandas Dataframe | 36,901,311 | 0.099668 | python-2.7,pandas | Sorry for the garbled question. in order to do a groupby for a timedelta value the best way is to do a pd.numeric on the 'timedelta value' and once the results are obtained we can again do a pd.to_timedelta on it. | I have a Pandas Dataframe which has a field txn['time_diff']
Send_Agent Pay_Agent Send_Time Pay_Time score \
0 AKC383903 AXX100000 2014-08-19 18:52:35 2015-05-01 22:08:39 1
1 AWA280699 AXX100000 2014-08-19 19:32:18 2015-05-01 17:12:32 1
2 ALI030170 ALI030170 2014-08-26 10:11:40 2015-05-01 22:20:09 1
3 AKC403474 AXX100000 2014-08-19 20:35:53 2015-05-01 21:27:12 1
4 AED002616 AED002616 2014-09-28 18:37:32 2015-05-01 14:06:17 1
5 ALI030170 ALI030170 2014-08-20 05:08:03 2015-05-01 21:29:43 1
6 ADA414187 ADA414187 2014-09-26 17:46:24 2015-05-01 21:37:51 1
7 AWA042396 AWA042396 2014-08-27 12:07:11 2015-05-01 17:39:31 1
8 AED002616 AED002616 2014-08-23 04:53:03 2015-05-01 13:33:12 1
9 ALA500685 AXX100000 2014-08-27 16:41:26 2015-05-01 19:01:52 1
10 AWA263407 AXX100000 2014-08-27 18:04:24 2015-05-01 10:39:14 1
11 ACH928457 ACH928457 2014-08-28 10:26:41 2015-05-01 11:55:59 1
time_diff
0 255 days 03:16:04
1 254 days 21:40:14
2 248 days 12:08:29
3 255 days 00:51:19
4 214 days 19:28:45
5 254 days 16:21:40
6 217 days 03:51:27
7 247 days 05:32:20
8 251 days 08:40:09
9 247 days 02:20:26
10 246 days 16:34:50
11 246 days 01:29:18
txn['time_diff'].min() works fine. But txn['time_diff'].groupby(txn['Send_Agent']).min() gives me the output in seconds
Send_Agent
A03010016 86546000000000
A03020048 53056000000000
A10001087 113459000000000
A11120030 680136000000000
A11120074 787844000000000
A11120106 1478045000000000
A11120117 2505686000000000
A11120227 923508000000000
A11120294 1460320000000000
A11120304 970226000000000
A11120393 3787969000000000
A11120414 2499079000000000
A11120425 65753000000000
A11140016 782269000000000
But I want it in terms of days , hours , mins.
I did the following
txn = txn.astype(str)
Time_diff_min = txn['time_diff'].groupby(txn['Send_Agent']).min()
The output I get is in the right format but is erroneous and is fetching the "first" value it finds for that "groupby"
In [15]: Time_diff_min = txn['time_diff'].groupby(txn['Send_Agent']).min()
In [16]: Time_diff_min
Out[16]:
Send_Agent
A03010016 1 days 00:02:26.000000000
A03020048 0 days 14:44:16.000000000
A10001087 1 days 07:30:59.000000000
A11120030 13 days 06:29:35.000000000
A11120074 9 days 02:50:44.000000000
A11120106 17 days 02:34:05.000000000
A11120117 29 days 00:01:26.000000000
A11120227 10 days 16:31:48.000000000
A11120294 16 days 21:38:40.000000000
A11120304 11 days 05:30:26.000000000
A11120393 43 days 20:12:49.000000000
A11120414 28 days 22:11:19.000000000
A11120425 0 days 18:15:53.000000000
A11140016 9 days 01:17:49.000000000
A11140104 0 days 15:33:06.000000000
A11140126 1 days 18:36:07.000000000
A11140214 23 days 02:30:07.000000000
Also
Time_diff_min = txn['time_diff']..min().groupby(txn['Send_Agent'])
throws an error that I cannot groupby on a timedelta | 0 | 1 | 110 |
0 | 65,979,818 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-04-28T16:19:00.000 | 0 | 5 | 0 | Cosine distance of vector to matrix | 36,920,262 | 0 | python,vectorization,cosine-similarity | Below worked for me, have to provide correct signature
import numpy as np
from scipy.spatial.distance import cosine
def cosine_distances(embedding_matrix, extracted_embedding):
return cosine(embedding_matrix, extracted_embedding)
cosine_distances = np.vectorize(cosine_distances, signature='(m),(d)->()')
cosine_distances(corpus_embeddings, extracted_embedding)
In my case
corpus_embeddings is a (10000,128) matrix
extracted_embedding is a 128-dimensional vector | In python, is there a vectorized efficient way to calculate the cosine distance of a sparse array u to a sparse matrix v, resulting in an array of elements [1, 2, ..., n] corresponding to cosine(u,v[0]), cosine(u,v[1]), ..., cosine(u, v[n])? | 0 | 1 | 3,904 |
0 | 36,930,521 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-28T22:54:00.000 | 1 | 1 | 0 | Can I get "inertia" for sklearn Birch clusters? | 36,926,819 | 0.197375 | python,scikit-learn,cluster-analysis | Nothing is free, and you don't want algorithms to perform unnecessary computations.
Inertia is only sensible for k-means (and even then, do not compare different values of k), and it's simply the variance sum of the data. I.e. compute the mean of every cluster, then the squared deviations from it. Don't compute distances, the equation is simply ((x-mu)**2).sum() | Scikit-learn MiniBatchKMeans has an inertia field that can be used to see how tight clusters are. Does the Birch clustering algorithm have an equivalent? There does not seem to be in the documentation.
If there is no built-in way to check this measurement, does it make sense to find the average Euclidean distance for each point's closest neighbor in each cluster, then find the mean of those average distances? | 0 | 1 | 886
0 | 36,928,073 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-04-29T00:05:00.000 | 0 | 1 | 0 | Scipy zoom with complex values | 36,927,432 | 0 | python,numpy,scipy,complex-numbers,zooming | This is not a good answer but it seems to work quite well. Instead of using the default parameters for the zoom method, I'm using order=0. I then proceed to deal with the real and imaginary part separately, as described in my question. This seems to reduce the artifacts although some smaller artifacts remain. It is by no means perfect and if somebody has a better answer, I would be very interested. | I have a numpy array of values and I wanted to scale (zoom) it. With floats I was able to use scipy.ndimage.zoom but now my array contains complex values which are not supported by scipy.ndimage.zoom. My workaround was to separate the array into two parts (real and imaginary) and scale them independently. After that I add them back together. Unfortunately this produces a lot of tiny artifacts in my 'image'. Does somebody know a better way? Maybe there also exists a python library for this? I couldn't find one.
Thank you! | 0 | 1 | 215 |
0 | 36,951,224 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-04-29T16:25:00.000 | 1 | 1 | 0 | scipy.test() results in errors | 36,943,283 | 1.2 | python-2.7,scipy | All these are in weave, which is not used anywhere else in scipy itself. So unless you're using weave directly, you're likely OK. And there is likely no reason to use weave in new code anyway. | Having some problems with scipy. Installed latest version using pip (0.17.0). Run scipy.test() and I'm getting the following errors. Are they okay to ignore? I'm using python 2.7.6.
Thanks for your help.
======================================================================
ERROR: test_add_function_ordered (test_catalog.TestCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 477, in test_add_function_ordered
q.add_function('f',string.upper)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 833, in add_function
self.add_function_persistent(code,function)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 849, in add_function_persistent
cat = get_catalog(cat_dir,mode)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
======================================================================
ERROR: test_add_function_persistent1 (test_catalog.TestCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 466, in test_add_function_persistent1
q.add_function_persistent('code',i)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 849, in add_function_persistent
cat = get_catalog(cat_dir,mode)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
======================================================================
ERROR: test_get_existing_files2 (test_catalog.TestCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 394, in test_get_existing_files2
q.add_function('code', os.getpid)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 833, in add_function
self.add_function_persistent(code,function)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 849, in add_function_persistent
cat = get_catalog(cat_dir,mode)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
======================================================================
ERROR: test_create_catalog (test_catalog.TestGetCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 286, in test_create_catalog
cat = catalog.get_catalog(pardir,'c')
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
Ran 20343 tests in 138.416s
FAILED (KNOWNFAIL=98, SKIP=1679, errors=4) | 0 | 1 | 82 |
0 | 36,959,985 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2016-04-30T19:59:00.000 | 3 | 1 | 0 | Efficient Matrix-Vector Multiplication: Multithreading directly in Python vs. using ctypes to bind a multithreaded C function | 36,959,589 | 1.2 | python,c,multithreading,linear-algebra,hdf5 | Hardware
As Sven Marnach wrote in the comments, your problem is most likely I/O bound since disk access is orders of magnitude slower than RAM access.
So the fastest way is probably to have a machine with enough memory to keep the whole matrix multiplication and the result in RAM. It would save lots of time if you read the matrix only once.
Replacing the harddisk with an SSD would also help, because that can read and write a lot faster.
Software
Barring that, for speeding up reads from disk, you could use the mmap module. This should help, especially once the OS figures out you're reading pieces of the same file over and over and starts to keep it in the cache.
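For the row-wise computation itself, a minimal single-process sketch (assuming the matrix can be opened with h5py; the dataset name 'A' is illustrative):
import numpy as np
import h5py
def matvec(path, v, chunk=2000):
    with h5py.File(path, 'r') as f:
        A = f['A']                              # stays on disk, shape (n_rows, n_cols)
        out = np.empty(A.shape[0])
        for start in range(0, A.shape[0], chunk):
            block = A[start:start + chunk, :]   # only this slab is read into RAM
            out[start:start + chunk] = block.dot(v)
    return out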
Since the calculation can be done by row, you might benefit from using numpy in combination with a multiprocessing.Pool for that calculation. But only really if a single process cannot use all available disk read bandwidth. | I have a simple problem: multiply a matrix by a vector. However, the implementation of the multiplication is complicated because the matrix is 18 gb (3000^2 by 500).
Some info:
The matrix is stored in HDF5 format. It's Matlab output. It's dense so no sparsity savings there.
I have to do this matrix multiplication roughly 2000 times over the course of my algorithm (MCMC Bayesian Inversion)
My program is a combination of Python and C, where the Python code handles most of the MCMC procedure: keeping track of the random walk, generating perturbations, checking MH Criteria, saving accepted proposals, monitoring the burnout, etc. The C code is simply compiled into a separate executable and called when I need to solve the forward (acoustic wave) problem. All communication between the Python and C is done via the file system. All this is to say I don't already have ctype stuff going on.
The C program is already parallelized using MPI, but I don't think that's an appropriate solution for this MV multiplication problem.
Our program is run mainly on linux, but occasionally on OSX and Windows. Cross-platform capabilities without too much headache is a must.
Right now I have a single-thread implementation where the python code reads in the matrix a few thousand lines at a time and performs the multiplication. However, this is a significant bottleneck for my program since it takes so darn long. I'd like to multithread it to speed it up a bit.
I'm trying to get an idea of whether it would be faster (computation-time-wise, not implementation time) for python to handle the multithreading and to continue to use numpy operations to do the multiplication, or to code an MV multiplication function with multithreading in C and bind it with ctypes.
I will likely do both and time them since shaving time off of an extremely long running program is important. I was wondering if anyone had encountered this situation before, though, and had any insight (or perhaps other suggestions?)
As a side question, I can only find algorithmic improvements for nxn matrices for m-v multiplication. Does anyone know of one that can be used on an mxn matrix? | 0 | 1 | 342 |
0 | 37,810,021 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-05-03T10:20:00.000 | 5 | 2 | 0 | How to interface Python with Qlikview for data visualization? | 37,001,538 | 1.2 | python,scikit-learn,tableau-api,qlikview | There's no straightforward route to calling Python from QlikView. I have used this:
Create a Python program that outputs CSV (or any file format that QlikView can read)
Invoke your Python program from the QlikView script: EXEC python3 my_program.py > my_output.csv
Read the output into QlikView: LOAD * FROM my_output.csv (...)
Note that the EXEC command requires the privilege "Can Execute External Programs" on the Settings tab of the script editor. | I am using Scikit-Learn and Pandas libraries of Python for Data Analysis.
How to interface Python with data visualization tools such as Qlikview? | 0 | 1 | 8,577 |
0 | 37,005,546 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-03T13:28:00.000 | 0 | 1 | 0 | How to close pandas.scatter_matrix() figure | 37,005,545 | 0 | python,pandas,matplotlib | After a bit of investigation, I realized that I could just use:
plt.close()
with no argument to close the current figure, or:
plt.close('all')
to close all of the opened figures. | I'm hitting MemoryError: In RendererAgg: Out of memory when I plot several pandas.scatter_matrix() figures.
Normally I use:
plt.close(fig)
to close matplotlib figures, so that I release the memory used, but pandas.scatter_matrix() does not return a matplotlib figure, rather it returns the axes object. For example:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
ax = pd.scatter_matrix(df, alpha=0.2)
How do I close this figure? | 0 | 1 | 521 |
0 | 37,056,824 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-05-05T17:23:00.000 | 2 | 1 | 0 | How can I create an AI for tic tac toe in Python using ANN and genetic algorithm? | 37,056,608 | 0.379949 | python,neural-network,artificial-intelligence,genetic-algorithm,tic-tac-toe | Yes, this is possible. But you have to tell your AI the rules of the game, beforehand (well, that's debatable, but it's ostensibly better if you do so - it'll define your search space a little better).
Now, the vanilla tic-tac-toe game is far too simple - a minmax search will more than suffice. Scaling up the dimensionality or the size of the board does make the case for more advanced algorithms, but even so, the search space is quite simple (the algebraic nature of the dimensionality increase leads to a slight transformation of the search space, which should still be tractable by simpler methods).
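For reference, a plain minimax for the 3x3 game fits in a few lines (a sketch; the board is a list of 9 cells holding 'X', 'O' or None, and the function returns the best achievable score for the player to move):
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
def winner(b):
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None
def minimax(b, player):
    w = winner(b)
    if w is not None:
        return 1 if w == player else -1
    moves = [i for i, c in enumerate(b) if c is None]
    if not moves:
        return 0                                    # draw
    other = 'O' if player == 'X' else 'X'
    return max(-minimax(b[:i] + [player] + b[i+1:], other) for i in moves)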
If you really want to throw a heavy machine learning technique at a problem, take a second look at chess (Deep Blue really just brute forced the sucker). Arimaa is interesting for this application as well. You might also consider looking at Go (perhaps start with some of the work done on AlphaGo)
That's my two cents' worth | I'm very interested in the field of machine learning and recently I got the idea for a project for the next few weeks.
Basically I want to create an AI that can beat every human at Tic Tac Toe. The algorithm must be scalable for every n*n board size, and maybe even for other dimensions (for a 3D analogue of the game, for example).
Also I don't want the algorithm to know anything of the game in advance: it must learn on its own. So no hardcoded ifs, and no supervised learning.
My idea is to use an Artificial Neural Network for the main algorithm itself, and to train it through the use of a genetic algorithm. So I have to code only the rules of the game, and then each population, battling with itself, should learn from scratch.
It's a big project, and I'm not an expert on this field, but I hope, with such an objective, to learn lots of things.
First of all, is that possible? I mean, is it possible to reach a good result within a reasonable amount of time?
Are there good libraries in Python that I can use for this project? And is Python a suitable language for this kind of project? | 0 | 1 | 700 |
0 | 46,785,026 | 0 | 1 | 0 | 0 | 2 | false | 34 | 2016-05-05T22:08:00.000 | 4 | 14 | 0 | Trouble with TensorFlow in Jupyter Notebook | 37,061,089 | 0.057081 | python,tensorflow,jupyter | Here is what I did to enable tensorflow in Anaconda -> Jupyter.
Install Tensorflow using the instructions provided at
Go to /Users/username/anaconda/env and ensure Tensorflow is installed
Open the Anaconda navigator and go to "Environments" (located in the left navigation)
Select "All" in teh first drop down and search for Tensorflow
If its not enabled, enabled it in the checkbox and confirm the process that follows.
Now open a new Jupyter notebook and tensorflow should work | I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine.
My question is: how can I also have it work in the Jupyter notebooks?
This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python. | 0 | 1 | 87,389 |
0 | 67,094,115 | 0 | 1 | 0 | 0 | 2 | false | 34 | 2016-05-05T22:08:00.000 | -1 | 14 | 0 | Trouble with TensorFlow in Jupyter Notebook | 37,061,089 | -0.014285 | python,tensorflow,jupyter | Open an Anaconda Prompt screen: (base) C:\Users\YOU>conda create -n tf tensorflow
After the environment is created type: conda activate tf
Prompt moves to (tf) environment, that is: (tf) C:\Users\YOU>
then install Jupyter Notebook in this (tf) environment:
conda install -c conda-forge jupyterlab - jupyter notebook
Still in (tf) environment, that is type
(tf) C:\Users\YOU>jupyter notebook
The notebook screen starts!!
A New notebook then can import tensorflow
FROM THEN ON
To open a session
click Anaconda prompt,
type conda activate tf
the prompt moves to tf environment
(tf) C:\Users\YOU>
then type (tf) C:\Users\YOU>jupyter notebook | I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine.
My question is: how can I also have it work in the Jupyter notebooks?
This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python. | 0 | 1 | 87,389 |
0 | 37,104,201 | 0 | 0 | 0 | 0 | 2 | true | 8 | 2016-05-06T13:55:00.000 | 3 | 4 | 0 | Run model in reverse in Keras | 37,074,244 | 1.2 | python,machine-learning,neural-network,keras | There is no such thing as "running a neural net in reverse", as a generic architecture of neural net does not define any not-forward data processing. There is, however, a subclass of models which do - the generative models, which are not a part of keras right now. The only thing you can do is to create a network which somehow "simulates" the generative process you are interested in. But this is paricular model specific method, and has no general solution. | I'm currently playing around with the Keras framework. And have done some simple classification tests, etc. I'd like to find a way to run the network in reverse, using the outputs as inputs and vice versa. Any way to do this? | 0 | 1 | 4,095 |
0 | 51,939,867 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2016-05-06T13:55:00.000 | 0 | 4 | 0 | Run model in reverse in Keras | 37,074,244 | 0 | python,machine-learning,neural-network,keras | What you are looking for, I think, is the "Auto-Associative" neural network. it has an input of n dimensions, several layers, one of which is the "middle layer" of m dimensions, and then several more layers leading to an output layer which has the same number of dimensions as the input layer, n.
The key here is that m is much smaller than n.
How it works is that you train the network to recreate the input, exactly at the output. Then you cut the network into two halves. The front half goes from n to m dimensions (encoding the input into a smaller space). The back half goes from m dimensions to n dimensions (decoding, or "reverse" if you will).
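A minimal sketch of that split using the Keras functional API (the layer sizes here are illustrative):
from keras.layers import Input, Dense
from keras.models import Model
n, m = 784, 32                                    # input width and bottleneck width
inp = Input(shape=(n,))
code = Dense(m, activation='relu')(inp)           # front half: n -> m
out = Dense(n, activation='sigmoid')(code)        # back half: m -> n
autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
# after autoencoder.fit(X, X, ...), cut it in two:
encoder = Model(inp, code)                        # forward direction
latent = Input(shape=(m,))
decoder = Model(latent, autoencoder.layers[-1](latent))   # the "reverse" direction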
Very useful for encryption, compression, unsupervised learning, etc. | I'm currently playing around with the Keras framework. And have done some simple classification tests, etc. I'd like to find a way to run the network in reverse, using the outputs as inputs and vice versa. Any way to do this? | 0 | 1 | 4,095 |
0 | 37,092,686 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-05-07T11:31:00.000 | 0 | 1 | 0 | Method to combine multiple svm classifiers (or "any ML classifier" by using scikit-learn. "decision-feature classifiers" | 37,087,996 | 1.2 | python,machine-learning,computer-vision,scikit-learn | First of all - the idea of training separate models is rather bad. Unless you have very good reasons to do so (some external limitations you cannot ignore) you should not do so. Why? Because you are effectively losing information: you are unable to model complex dependencies between the signals feeding the two classifiers. Training everything jointly gives the statistical method the ability to choose which data to use when; this way it can model, for example, that for some particular types of data it will use one part of the input and for others - the rest. When you build independent classifiers you bias the whole process, as these classifiers have "no idea" that the remaining ones exist.
Having said that, here is the solution assuming that somehow you cannot learn a joint model. In such a scenario (where your models are kind of black boxes which convert your input representation to decision functions) the basic idea is to simply treat them as preprocessors and fit a new model on top, that's all. In other words, you have your data point x split into feature vectors x1,x2,...,xk and k different models mi you built previously, thus you use them to construct a preprocessing method f(x) = [m1(x1), m2(x2), ..., mk(xk)], and this is just a single point in R^k space, which can now be fed into a new classifier that learns how to combine this information. The problematic part is that, due to your very specific process, you now need a new training set to learn the combination rule, as using the same data that was used to construct the mi can easily lead to overfitting. To fight that, people sometimes use heuristic approaches instead - a priori assuming that these models are already good enough, they construct an ensemble of them, which either votes for classes (for example weighted by their certainty) or builds a whole system around them. I would still argue that you should not go this way in the first place; if you have to, go with learning the combination rule on new data, and finally, if you cannot do any of the above, settle for some heuristic ensemble technique. | I extract multiple feature vectors from different sensors, and I trained these features by using SVM individually. My question is whether there is any method to combine these classifiers in a way to obtain a better result.
thanks in advance | 0 | 1 | 662 |
0 | 37,092,622 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-05-07T17:27:00.000 | 3 | 1 | 0 | Additive Smoothing for Dataframe Pandas | 37,091,587 | 1.2 | python,pandas,machine-learning,smoothing,naivebayes | Additive smoothing is just a basic mathematical operation, requiring few additions and division - there is no "special" function for that, you simply write a one-liner operating on particular columns of your dataframe. | I have a large dataframe in Pandas with lots of zeros.
I want to apply additive smoothing but instead of writing it from scratch, I am wondering if there is any better way of producing a "smoothed" dataframe in Pandas. Thanks! | 0 | 1 | 1,278 |
0 | 37,101,713 | 0 | 0 | 0 | 0 | 1 | true | 21 | 2016-05-08T14:49:00.000 | 39 | 4 | 0 | What to download in order to make nltk.tokenize.word_tokenize work? | 37,101,114 | 1.2 | python,nltk | You are right. You need Punkt Tokenizer Models. It has 13 MB and nltk.download('punkt') should do the trick. | I am going to use nltk.tokenize.word_tokenize on a cluster where my account is very limited by space quota. At home, I downloaded all nltk resources by nltk.download() but, as I found out, it takes ~2.5GB.
This seems a bit overkill to me. Could you suggest what are the minimal (or almost minimal) dependencies for nltk.tokenize.word_tokenize? So far, I've seen nltk.download('punkt') but I am not sure whether it is sufficient and what is the size. What exactly should I run in order to make it work? | 0 | 1 | 57,023 |
0 | 37,102,874 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-08T15:12:00.000 | 0 | 1 | 0 | predicting new non-standardized data with classifier trained on standardized data | 37,101,361 | 0 | python,scikit-learn | To solve this problem you should use a pipeline. The first stage there is scaling, and the second one is your model. Then you can pickle the whole pipeline and have fun with your new data. | I have some data with say, L features. I have standardized them using StandardScaler() by doing a fit_transform on X_train. Now while predicting, i did clf.predict(scaler.transform(X_test)). So far so good... now if I want to pickle the model for later reuse, how would I go about predicting on the new data in future with this saved model ? the new (future) data will not be standardized and I didn't pickle the scaler.
Is there anything else that I have to do before pickling the model the way I am doing it right now (to be able to predict on non-standardized data)?
reddit post: https://redd.it/4iekc9
Thanks. :) | 0 | 1 | 51 |
0 | 51,513,887 | 0 | 0 | 0 | 0 | 1 | false | 95 | 2016-05-09T03:04:00.000 | 4 | 10 | 0 | How to add regularizations in TensorFlow? | 37,107,223 | 0.07983 | python,neural-network,tensorflow,deep-learning | I tested tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) and tf.losses.get_regularization_loss() with one l2_regularizer in the graph, and found that they return the same value. By observing the value's quantity, I guess reg_constant has already make sense on the value by setting the parameter of tf.contrib.layers.l2_regularizer. | I found in many available neural network code implemented using TensorFlow that regularization terms are often implemented by manually adding an additional term to loss value.
My questions are:
Is there a more elegant or recommended way of regularization than doing it manually?
I also find that get_variable has an argument regularizer. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularization term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES. Will that collection be automatically used by TensorFlow (e.g. used by optimizers when training)? Or is it expected that I should use that collection by myself? | 0 | 1 | 80,148
0 | 37,126,992 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-09T22:41:00.000 | 0 | 1 | 0 | Need to append bias term when using `sklearn` models? | 37,126,576 | 0 | python,machine-learning,scikit-learn | No, you do not add any biases, models define biases in their own way. What you learned during course is generic, although not perfect - solution. It matters for models such as SVM, which should not ever have appended "1"s, as then this bias would get regularized, which is simply wrong for SVMs. Thus, while this is nice theoretical trick to show that you can actually create methods completely ignoring bias, in practise - it is often treated in a specific way, and scikit-learn does it for you. | In my machine learning class, we have learned about appending a 1 to each sample's feature vector when using many machine learning models to account for bias. For example, if we are doing linear regression and a sample has features f_1, f_2, ..., f_d, we need to add a "fake" feature value of 1 to allow for the regression function to not have to pass through the origin.
When using sklearn models, do you need to do this yourself, or do their implementations do it for you? Specifically, I'm interested in whether or not this is necessary when using any of their regression models or their SVM models. | 0 | 1 | 1,556 |
0 | 37,155,400 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2016-05-10T03:40:00.000 | 1 | 2 | 0 | Keras, best way to save state when optimizing | 37,128,886 | 1.2 | python,keras | You could create a tar archive containing the weights and the architecture, as well as a pickle file containing the optimizer state returned by model.optimizer.get_state(). | I was just wondering what is the best way to save the state of a model while it it optimizing. I want to do this so I can run it for a while, save it, and come back to it some time later. I know there is a function to save the weights and another function to save the model as JSON. During learning I would need to save both the weights and the parameters of the model. This includes parameters like the momentum and learning rate. Is there a way to save both the model and weights in the same file. I read that it is not considered good practice to use pickle. Also would the momentums for the graident decent be included with the models JSON or in the weights? | 0 | 1 | 4,558 |
0 | 37,142,380 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-10T12:36:00.000 | 0 | 1 | 0 | Manually update ratings in recomender system | 37,138,777 | 0 | python,machine-learning,recommendation-engine,rating,collaborative-filtering | Based on your comment above, I would manipulate the Number of times they purchased the product field. You need to basically transform the Number of times they purchased the product field into an implicit rating field. I would maybe scale the product rating system to 1-5. If they press the don't like the product button, the rating is a 1, if they press the like the product button, they get a 5. If they have bought the product frequently, it's a 5, otherwise it starts at a 3 on the first purchase and scales up to a 4 then 5, based on your data. If they have never bought the product AND have never rated the product, it's a null value, so won't contribute to ratings. | I developed a recommender system using Matrix Factorization in Python. The ratings are in the range [1-5]. It works very well. This system is made for client advisors rather than clients themselves. Hence, the system recommends some products to the client advisor and then this one decides which products he's gonna recommend to his client.
In my application I want to have 2 additional buttons: relevant, irrelevant. Thus, for each recommendation the client advisor would press the button irrelevant if the recommendation is not good but its rating is high and he would press the button relevant if the recommendation is good but its rating is low.
The problem is that I can't figure out how to update the ratings when one of the buttons is pressed. Please give me some idea about how to handle that feature. I insist on having only two buttons (relevant and irrelevant); the client advisor can't modify the rating himself.
Thank you very much. | 0 | 1 | 62 |
0 | 37,176,185 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-11T03:42:00.000 | 2 | 4 | 0 | How to auto-discover a lagging of time-series data in scikit-learn and classify using time-series data | 37,152,723 | 0.099668 | python,scikit-learn,time-series,quantitative-finance | No. there is not a way, in Python, using sci-kit, to automatically lag all of these time-series to find what time-series (if any) tend to lag other data. You'll have to write some code. | I currently have a giant time-series array with times-series data of multiple securities and economic statistics.
I've already written a function to classify the data, using sci-kit learn, but the function only uses non-lagged time-series data.
Is there a way, in Python, using sci-kit, to automatically lag all of these time-series to find what time-series (if any) tend to lag other data?
I'm working on creating a model using historic data to predict future performance. | 0 | 1 | 2,999 |
0 | 37,247,216 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-11T18:32:00.000 | 2 | 1 | 0 | How to rotate images in Caffe on-the-fly for training set augmentation? | 37,170,740 | 0.379949 | python,deep-learning,caffe | You can make use of Python Layer to do the same. The usage of a Python Layer is demonstrated in caffe_master/examples/py_caffe/. Here you could make use of a python script as the input layer to your network. You could describe the behavior of rotations in this layer. | I know that there is a "mirror" parameter in the default data layer, but is there a way to do arbitrary rotations (really, I would just like to do multiples of 90 degrees), preferably in Python? | 0 | 1 | 792 |
0 | 39,967,880 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-05-11T23:20:00.000 | 1 | 1 | 0 | module object has no attribute 'fblas' error when running theano.test() in Canopy Python | 37,174,808 | 1.2 | python-2.7,theano,enthought,canopy | That's possibly because you installed an old version of Theano package. Try upgrade it or install the newest version by pip install theano. | I could not get Theano running in my system in Enthought canopy Python. When I give import theano and test run, I get the following error.
import blas
File "/Users/rajesh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site- packages/theano/tensor/blas.py", line 135, in
numpy.dtype('float32'):scipy.linalg.blas.fblas.sgemv,
AttributeError: 'module' object has no attribute 'fblas'
Can you please guide me the direction to resolve this ? | 0 | 1 | 342 |
0 | 37,185,292 | 0 | 0 | 0 | 0 | 1 | false | 33 | 2016-05-12T10:44:00.000 | 4 | 3 | 0 | Find out if/which BLAS library is used by Numpy | 37,184,618 | 0.26052 | python,c++,macos,numpy,blas | numpy.show_config() just tells that info is not available on my Debian Linux.
However /usr/lib/python3/dist-packages/scipy/lib has a subdirectory for blas which may tell you what you want. There are a couple of test programs for BLAS in subdirectory tests.
Hope this helps. | I use numpy and scipy in different environments (MacOS, Ubuntu, RedHat).
Usually I install numpy by using the package manager that is available (e.g., mac ports, apt, yum).
However, if you don't compile Numpy manually, how can you be sure that it uses a BLAS library? Using mac ports, ATLAS is installed as a dependency. However, I am not sure if it is really used. When I perform a simple benchmark, the numpy.dot() function requires approx. 2 times the time than a dot product that is computed using the Eigen C++ library. I am not sure if this is a reasonable result. | 0 | 1 | 28,680 |
0 | 37,212,468 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-05-13T10:25:00.000 | 2 | 2 | 0 | Smart algorithm for finding the divisors of a binomial coefficient | 37,207,589 | 1.2 | algorithm,python-3.x,discrete-mathematics,binomial-coefficients | First you could start with the fact that : C(n,k) = (n/k) C(n-1,k-1).
You can prouve that C(n,k) is divisible by n/gcd(n,k).
If n is prime then n divides C(n,k).
Check Kummer's theorem: if p is a prime number, n a positive number, and k a positive number with 0 < k < n, then the greatest exponent r for which p^r divides C(n,k) is the number of carries needed in the subtraction n-k in base p.
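That theorem translates directly into code: counting the borrows of the base-p subtraction n-k is the same as counting the carries when adding k and n-k in base p (a minimal sketch):
def exponent_in_binomial(p, n, k):
    carries, carry = 0, 0
    a, b = k, n - k
    while a or b or carry:
        s = a % p + b % p + carry
        carry = 1 if s >= p else 0
        carries += carry
        a //= p
        b //= p
    return carries          # exponent of prime p in C(n, k)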
Let us suppose that n>4 :
if p>n then p cannot divide C(n,k) because in base p, n and k are only one digit wide → no carry in the subtraction
so we have to check for prime divisors in [2;n]. As C(n,k)=C(n,n-k) we can suppose k≤n/2 and n/2≤n-k≤n
for the prime divisors in the range [n/2;n] we have n/2 < p≤n, or equivalently p≤n<2p. We have p≥2 so p≤n < p² which implies that n has exactly 2 digits when written in base p and the first digit has to be 1. As k≤n/2 < p, k can only be one digit wide. Either the subtraction has one carry, which happens exactly when n-k < p, in which case p divides C(n,k); or the subtraction has no carry and p does not divide C(n,k).
The first result is :
every prime number in [n-k;n] is a prime divisor of C(n,k) with exponent 1.
no prime number in [n/2;n-k] is a prime divisor of C(n,k).
in [sqrt(n); n/2] we have 2p≤n< p², n is exactly 2 digits wide in base p, k< n implies k has at most 2 digits. Two cases: only one carry on no carry at all. A carry exists only if the last digit of n is greater than the last digit of p iif n modulo p < k modulo p
The second result is :
For every prime number p in [sqrt(n);n/2]
p divides C(n;k) with exponent 1 iff n mod p < k mod p
p does not divide C(n;k) iff n mod p ≥ k mod p
in the range [2; sqrt(n)] we have to check all the prime numbers. It's only in this range that a prime divisor will have an exponent greater than 1 | I'm interested in tips for my algorithm that I use to find out the divisors of a very large number, more specifically "n over k" or C(n, k). The number itself can range very high, so it really needs to take time complexity into the 'equation' so to say.
The formula for n over k is n! / (k!(n-k)!) and I understand that I must try to exploit the fact that factorials are kind of 'recursive' somehow - but I haven't yet read much discrete mathematics, so the problem is both of a mathematical and a programming nature.
I guess what I'm really looking for are just some tips heading me in the right direction - I'm really stuck. | 0 | 1 | 846 |
0 | 37,321,401 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2016-05-14T00:24:00.000 | 8 | 1 | 0 | Using Python multiprocessing on an HPC cluster | 37,221,133 | 1 | python-3.x,multiprocessing,distributed-computing,hpc | Unfortunately I wasn't able to find an answer in the community. However, through experimentation, I was able to better isolate the problem and find a workable solution.
The problem arises from the nature of Python's multiprocessing implementation. When a Pool object is created (i.e. the manager class that controls the processing cores for the parallel work), a new Python run-time is started for each core. There are multiple places in my code where the multiprocessing package is used and a Pool object instantiated... every function that requires it creates a Pool object as needed and then joins and terminates before exiting. Therefore, if I call the function 3 times in the code, 8 instances of Python are spun up and then closed 3 times. On a single machine, the overhead of this was not significant at all compared to the computational load of the functions... however on the HPC it was absurdly high.
I re-architected the code so that a Pool object is created at the very beginning of the calling process and then passed to each function as needed. It is closed, joined, and terminated at the end of the overall process.
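A stripped-down illustration of that restructuring (the kernel and the task lists below are placeholders, not our real code):

    from multiprocessing import Pool

    def kernel(x, y):                 # stand-in for the real per-task computation
        return x * y

    def heavy_step(pool, tasks):      # every function receives the pool instead of creating its own
        return pool.starmap(kernel, tasks)

    if __name__ == '__main__':
        pool = Pool()                 # worker processes are spawned exactly once
        try:
            first = heavy_step(pool, [(1, 2), (3, 4)])
            second = heavy_step(pool, [(5, 6), (7, 8)])
        finally:
            pool.close()              # closed and joined once, at the very end
            pool.join()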
We found that the bulk of the time was spent in the creation of the Pool object on each node. This was an improvement though because it was only being created once! We then realized that the underlying problem was that multiple nodes were trying to access Python at the same time in the same place from over the network (it was only installed on the head node). We installed Python and the application on all nodes, and the problem was completely fixed.
This solution was the result of trial and error... unfortunately our knowledge of cluster computing is pretty low at this point. I share this answer in the hopes that it will be critiqued so that we can obtain even more insight. Thank you for your time. | I am running a Python script on a Windows HPC cluster. A function in the script uses starmap from the multiprocessing package to parallelize a certain computationally intensive process.
When I run the script on a single non-cluster machine, I obtain the expected speed boost. When I log into a node and run the script locally, I obtain the expected speed boost. However, when the job manager runs the script, the speed boost from multiprocessing is either completely mitigated or, sometimes, even 2x slower. We have noticed that memory paging occurs when the starmap function is called. We believe that this has something to do with the nature of Python's multiprocessing, i.e. the fact that a separate Python interpreter is kicked off for each core.
Since we had success running from the console from a single node, we tried to run the script with HPC_CREATECONSOLE=True, to no avail.
Is there some kind of setting within the job manager that we should use when running Python scripts that use multiprocessing? Is multiprocessing just not appropriate for an HPC cluster? | 0 | 1 | 1,975 |
0 | 37,228,094 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-14T14:37:00.000 | 0 | 1 | 0 | Proper way of loading large amounts of image data | 37,227,938 | 0 | python,deep-learning | Why not add a preprocessing step, where you would either (a) physically move the images to folders associated with bucket and/or rename them, or (b) first scan through all images (headers only) to build the in-memory table of image filenames and their sizes/buckets, and then the random sampling step would be quite simple to implement. | For a Deep Learning application I am building, I have a dataset of about 50k grayscale images, ranging from about 300*2k to 300*10k pixels. Loading all this data into memory is not possible, so I am looking for a proper way to handle reading in random batches of data. One extra complication with this is, I need to know the width of each image before building my Deep Learning model, to define different size-buckets within the data (for example: [2k-4k, 4k-6k, 6k-8k, 8k-10k].
Currently, I am working with a smaller dataset and just load each image from a png file, bucket them by size and start learning. When I want to scale up this is no longer possible.
To train the model, each batch of the data should be (ideally) fully random from a random bucket. A naive way of doing this would be saving the sizes of the images beforehand, and just loading each random batch when it is needed. However, this would result in a lot of extra loading of data and not very efficient memory management.
Does anyone have a suggestion how to handle this problem efficiently?
Cheers! | 0 | 1 | 811 |
0 | 37,234,653 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-05-15T02:59:00.000 | 1 | 1 | 0 | How to do gradient descent for not all variables in tensorflow | 37,234,114 | 1.2 | python-2.7,tensorflow | To lock the ones that you don't want to train you can use tf.Variable(..., trainable=False) | In tensorflow, tf.train.GradientDescentOptimizer does gradient descent for all variables in default. Can i just do gradient descent for only a few of my variables and 'lock' the others? | 0 | 1 | 206 |
0 | 40,741,488 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-05-15T09:36:00.000 | 0 | 1 | 0 | Use GPU installation of tensorflow/cuda in spyder under ubuntu 14.04 | 37,236,677 | 0 | python,anaconda,tensorflow,spyder | To solve your issue you have 3 options here:
1. Just start Spyder from the terminal.
2. Move the PATH variable definition from .bash_profile to the session init scripts.
3. Duplicate your PATH in Spyder's run configuration. | I am running ubuntu 14.04 with an anaconda2 installation and would like to use tensorflow in combination with CUDA. So far the steps I performed are:
Installed CUDA 7.5 and cudnn
Installed tensorflow (GPU version) through a DEB package. Note that I don't want to use the conda package of tensorflow since that one is not the GPU version.
Added Anaconda, CUDA and cudnn to path.
Created a conda environment for tensorflow (conda create -n tensorflow python=2.7)
Now if I start python or IDLE from the terminal, I can import tensorflow and it will find all the CUDA dependencies, great!
...however, if I start ipython or spyder from the same terminal, running "import tensorflow as tf" gives me a cold-hearted "ImportError: No module named tensorflow".
My question: How can I get ipython and spyder to find the tensorflow library just like in an IDLE and a python instance? | 0 | 1 | 1,863 |
0 | 37,246,380 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2016-05-16T02:22:00.000 | 0 | 3 | 0 | How to calculate even distance along an interpolated path (Python2.7)? | 37,245,832 | 0 | python,path,geometry | I'm sure there's an elegant way to do this with pandas, but until then, here's a simple idea if you can live with some error. You can do this a few different ways but here's the gist of it:
Treat each tuple as a node in a linked list. Define the desired length, D, between each point. As you move through the list, if the next node is not a distance D from the current node, adjust its x,y coordinates accordingly (or insert/delete nodes as necessary) so that it is a distance D from the current node along the line segments that connect the nodes.
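Here is one plain-Python sketch of that walk — instead of literally rewiring a linked list it steps along the existing segments and emits a point every D units of path length:

    import math

    def resample(points, D):
        # Return points spaced D apart (measured along the polyline), starting from points[0].
        out = [points[0]]
        need = D                                   # distance left before the next sample
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            walked = 0.0
            while seg - walked >= need:            # the next sample lands on this segment
                walked += need
                t = walked / seg
                out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
                need = D
            need -= (seg - walked)                 # carry the shortfall into the next segment
        return out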
Like I said, you'll have to live with some error because your original points will be adjusted/deleted. If you generate points to create more resolution prior to this, you can probably lessen the error. | I have a (long) list of x,y tuples that collectively describe a path (ie. mouse-activity sampled at a constant rate despite non-constant velocities).
My goal is to animate that path at a constant rate. So I have segments that are curved, and segments that are straight, and the delta-d between any two points isn't guaranteed to be the same.
Given data like:
[(0,0), (0,2), (4,6).... ] where the length of that list is ~1k-2k points, is there any way besides brute-force counting line-segment lengths between each point and then designating every n-length a "frame"? | 0 | 1 | 159 |
0 | 37,246,950 | 0 | 0 | 0 | 0 | 2 | true | 4 | 2016-05-16T02:22:00.000 | 1 | 3 | 0 | How to calculate even distance along an interpolated path (Python2.7)? | 37,245,832 | 1.2 | python,path,geometry | If you use Numpy arrays to represent your data then you can vectorise the computation. That's as efficient as you're going to get. | I have a (long) list of x,y tuples that collectively describe a path (ie. mouse-activity sampled at a constant rate despite non-constant velocities).
My goal is to animate that path at a constant rate. So I have segments that are curved, and segments that are straight, and the delta-d between any two points isn't guaranteed to be the same.
Given data like:
[(0,0), (0,2), (4,6).... ] where the length of that list is ~1k-2k points, is there any way besides brute-force counting line-segment lengths between each point and then designating every n-length a "frame"? | 0 | 1 | 159 |
0 | 37,246,905 | 0 | 1 | 0 | 1 | 1 | false | 1 | 2016-05-16T03:38:00.000 | 1 | 3 | 0 | Python, computationally efficient data storage methods | 37,246,342 | 0.066568 | python,sql,arrays,mongodb,database | Have you considered HDF5? It's very efficient for numerical data, and is supported by both Python and Matlab. | I am retrieving structured numerical data (float 2-3 decimal spaces) via http requests from a server. The data comes in as sets of numbers which are then converted into an array/list. I want to then store each set of data locally on my computer so that I can further operate on it.
Since there are very many of these data sets which need to be collected, simply writing each data set that comes in to a .txt file does not seem very efficient. On the other hand, I am aware that there are various solutions such as MongoDB, Python-to-SQL interfaces, etc., but I'm unsure which one I should use and which would be the most appropriate and efficient for this scenario.
Also the database that is created must be able to interface and be queried from different languages such as MATLAB. | 0 | 1 | 989 |
0 | 37,435,858 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-05-16T13:22:00.000 | 1 | 1 | 0 | Theano : MissingGxx , g++ not available | 37,255,003 | 0.197375 | python,iis,gcc,g++,theano | I've solved the problem , I had two g++ executables in my WinPython environment at following paths
1. WinPythonDir\python-2.7.10.amd64\Scripts\g++.exe
2. WinPythonDir\python-2.7.10.amd64\share\mingwpy\bin\g++.exe
Spyder used the correct one (2) and IIS seems to use the one mentioned in 1. I explicitly added path to 2 in my PATH env variable on IIS. Spyder didn't have 2 in PATH (that's strange) but it used the one mentioned in 2 (I confirmed that after logging in Theano files).
After this my MissingGxx error was gone, but now Theano was unable to create its compilation directory: IIS runs under the System profile and Theano uses that profile to generate the compile_dir path. It was somewhere under C:\Windows\System32\config\systemprofile\Theano\compile_dir and IIS didn't have rights to it (Spyder uses my local USERPROFILE). I changed the default_base_compiledir path in Theano's configdefaults.py and gave IIS rights to access and modify it. I wasn't able to assign rights to the previous compiledir under the System profile because that location is pretty sensitive and the OS restricted me from doing so.
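If you'd rather not patch configdefaults.py, the same redirection can usually be done from outside Theano through THEANO_FLAGS (or a .theanorc file) — the directory below is only an example and must be writable by the IIS application-pool identity:

    import os
    os.environ['THEANO_FLAGS'] = 'base_compiledir=D:\\theano_compile'  # must be set before importing theano
    import theano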
PS : I copied PATH by doing
echo %PATH%
from my WinPython console and concatenated g++ path mentioned at 2 with it and added to PATH variable on IIS because WinPython PATH variable didn't have 2 in it. | I've been trying to deploy my Python Flask App that uses Conv nets using Theano on local IIS. When I try to load a pickled Neural Network , I get following Errors
Unable to Create compiledir.
I solved this by changing compiledir path in configdefaults.py and giving read/write rights to IIS on that directory. Now compiledir gets created.
Now I'm getting MissingGXX error "g++ not available! We can't compile
c code.". G++ is there in my PythonFolder\Scripts and I've added this
path to my environmental variable PATH.
I just want to know what causes this error. Is it because Theano can't find g++ and it's all about path issues, or does it have something to do with the compiledir lock?
PS: I can run the code from my Winpython console and everything works fine. I've seen the contents of %PATH% and %PYTHONPATH% from my Win python console and replicated the same on my deployed IIS web App.
Here's the header of the stack trace :
(MissingGXX('The following error happened while compiling the node',
Shape_i{0}(input.input), '\n', "g++ not available! We can't compile c
code.", '[Shape_i{0}(input.input)]'), , ( | 0 | 1 | 1,208 |
0 | 37,767,201 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-05-16T16:38:00.000 | 2 | 2 | 0 | How slicing and ellipsis works in numpy? | 37,258,911 | 0.197375 | python,numpy,slice,ellipsis | arr[:,:,1] is fancy indexing used by numpy that selects the first element of the last column in arr. Fancy indexing can only be used in numpy arrays and not in python's traditional lists.
Also, as pointed out in the comments, a[,:,:,] is a syntax error.
It is helpful because it lets you select columns easily. | I have been reading some very old Numpy documentation and found a weird notation which eludes my understanding. The documentation says a[i:...] is a shortcut for a[i,:,:,:].
The documentation being old is very vague and I would welcome any comments.
Thanks,
Prerit | 0 | 1 | 1,494 |
0 | 37,280,915 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-05-17T13:33:00.000 | 1 | 4 | 0 | Dynamic Programming: Mission Per Day, Scheduling for Maximum Profit | 37,277,713 | 0.049958 | python,algorithm,dynamic-programming | Dynamically build a table of dimensions 6 x n. The entry table[w_i][d_j] will denote the maximal reachable value when Bob has worked for the last i days consecutively (including today) and it is day j.
The first column is easy to fill in:
table[1][0] = x_0 if Bob decides to work on the first day, all other values are 0 (table[0][0] => Bob doesn't work on the first day, table[2..5][0] => Bob can't work for multiple consecutive days on day 1.)
Go on to complete the table column-by-column according to the following rules:
The maximum value on day j with 0 consecutive days of work is the maximum of any value of the previous day and not working today:
table[0][d_j] = max{ table[0..5][d_j-1] }
The maximum value on day j with 1 consecutive day of work is the maximum of the previous 2 days with no consecutive days of work plus x_j. (It never makes sense to rest more than 2 days, as we could have just worked the day(s) in between.):
table[1][d_j] = max{ table[0][d_j-2], table[0][d_j-1] } + x[d_j]
Otherwise, table[w_i][d_j] = table[w_i-1][d_j-1] + x[d_j].
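A compact Python sketch of that fill (a slight variant of the rules above — resting today may follow any streak from yesterday — which reaches the same optimum):

    def max_profit(x):
        n, NEG = len(x), float('-inf')
        table = [[NEG] * n for _ in range(6)]   # table[w][d]: best value with a work streak of w ending on day d
        table[0][0], table[1][0] = 0, x[0]
        for d in range(1, n):
            table[0][d] = max(table[w][d - 1] for w in range(6))      # rest on day d
            for w in range(1, 6):
                if table[w - 1][d - 1] != NEG:
                    table[w][d] = table[w - 1][d - 1] + x[d]          # work on day d, extending the streak
        return max(table[w][n - 1] for w in range(6))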
The solution will be the maximum value in the last column. | The problem:
There is a set of n days that Bob is planning to work, and on each day i there is a mission; each mission lasts exactly one day, must be done on the day i on which it is given, and pays Bob x_i dollars. Bob cannot work more than 5 consecutive missions at a time. That is, he must take at least 1 rest day every 5 days.
Given numbers x_1...x_n, on which days should Bob perform missions, and on which days should he rest, in order to make as much money as possible while never working more than 5 days in a row? Your solution should be O(n).
My issue:
I am having trouble coming up with the recurrence for this problem. I have been thinking about this problem for a long time. My original thought was to let p[i] = max{x_i + x_i-1 + .... + x_i-4}, where p[i] is the max profit earnable from days i-4 to i. However, I realize, one, that this does take in to account that the optimal solution might have two consecutive work days, and two, I am not building off previous solutions.
My Question: Can anyone give me insight on understanding the structure of this problem? I feel like I am just not understanding the key properties that would make the solution easy to see.
Thanks in advance! | 0 | 1 | 766 |