GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 50,958,242 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-03-02T19:17:00.000 | 1 | 1 | 0 | How can you update a pyfile in the middle of a PySpark shell session? | 42,564,069 | 0.197375 | python,apache-spark,pyspark | I don't think it's feasible during an interactive session. You will have to restart your session to use the modified module. | Within an interactive pyspark session you can import python files via sc.addPyFile('file_location'). If you need to make changes to that file and save them, is there any way to "re-broadcast" the updated file without having to shut down your spark session and start a new one?
Simply adding the file again doesn't work. I'm not sure if renaming the file works, but I don't want to do that anyway.
As far as I can tell from the spark documentation there is only a method to add a pyfile, not update one. I'm hoping that I missed something!
Thanks | 0 | 1 | 1,140 |
0 | 42,592,884 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2017-03-04T06:12:00.000 | 0 | 3 | 0 | Data structure: Top K ordered dictionary keys by value | 42,592,803 | 0 | python,dictionary,data-structures,heap | If your data will not fit in memory, you need to be particularly mindful of how it's stored. Is it in a database, a flat file, a csv file, JSON, or what?
If it is in a "rectangular" file format, you might do well to simply use a standard *nix sorting utility, and then just read in the first k lines. | I have a very large dictionary with entries of the form {(Tuple) : [int, int]}. For example, dict = {(1.0, 2.1):[2,3], (2.0, 3.1):[1,4],...} that cannot fit in memory.
I'm only interested in the top K values in this dictionary sorted by the first element in each key's value. Is there a data structure that would allow me to keep only the largest K key-value pairs? As an example, I only want 3 values in my dictionary. I can put in the following key-value pairs; (1.0, 2.1):[2,3], (2.0, 3.1):[1,4], (3.1, 4.2):[8,0], (4.3, 4.1):[1,1] and my dictionary would be: (3.1, 4.2):[8,0], (1.0, 2.1):[2,3], (2.0, 3.1):[1,4] (in case of key-value pairs with the same first element, the second element will be checked and the largest key-value pair based on the second element will be kept) | 0 | 1 | 698 |
0 | 42,611,673 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-05T09:22:00.000 | 0 | 2 | 0 | Get the Foreground in opencv | 42,606,584 | 0 | python,opencv | There is no way that your camera or software will be able to look at a flat image and decide what is foreground and what is background. Is that parrot sitting on a perch and staring at the camera or is it a picture of a parrot on the wall?
In the past I've made a collection of frames from the camera and formed a reference image by taking the median value of every pixel. Hopefully, this is now an image that can be compared with every subsequent frame, and subtracting the two can be used to isolate where change has occurred. The difference isn't what you want but can be turned into a mask that will select what you want from the frame in question. | I'm trying to create a program that removes the background and gets the foreground in color. For example if a face appears in front of my webcam I need to get the face only. I tried using BackgroundSubtractorMOG in opencv 3. But that didn't solve my problem. Can anyone tell me where to look or what to use. I'm a newbie in opencv.
P.S. I use opencv3 in python | 0 | 1 | 713 |
0 | 42,610,453 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-05T15:10:00.000 | 1 | 1 | 0 | model evaluation with "train_test_split" not static? | 42,610,000 | 0.197375 | python,machine-learning,scikit-learn | You can set the random_state parameter to some constant value to reproduce data splits. On the other hand, it's generally a good idea to test exactly what you are trying to know - i.e. run your training at least twice with different random states and compare the results. If they differ a lot, it's a sign that something is wrong and your solution is not reliable. | According to the resources online, the "train_test_split" function from the sklearn.cross_validation module returns data in a random state.
Does this mean that if I train a model with the same data twice, I am getting two different models, since the training data points used in the learning process are different in each case?
In practice can the accuracy of such two models differ a lot? Is that a possible scenario? | 0 | 1 | 86 |
0 | 62,075,227 | 0 | 0 | 0 | 0 | 1 | true | 11 | 2017-03-05T16:02:00.000 | 6 | 1 | 0 | Sklearn Model (Python) with NodeJS (Express): how to connect both? | 42,610,590 | 1.2 | python,node.js,scikit-learn,child-process | My recommendation: write a simple python web service (personally recommend flask) and deploy your ML model. Then you can easily send requests to your python web service from your node back-end. You wouldn't have a problem with the initial model loading; it is done once at app startup, and then you're good to go
DO NOT GO FOR SCRIPT EXECUTIONS AND CHILD PROCESSES!!! I just wrote it in bold-italic all caps just to be sure you wouldn't do that. Believe me... it can potentially go very, very south, with all those zombie processes upon job termination and other stuff. Let's just say it's not the standard way to do that.
You need to think about multi-request handling. I think flask now has it by default
I am just giving you general hints because your problem has been generally introduced. | I have a web server using NodeJS - Express and I have a Scikit-Learn (machine learning) model pickled (dumped) in the same machine.
What I need is to demonstrate the model by sending/receiving data from it to the server. I want to load the model on startup of the web server and keep "listening" for data inputs. When it receives data, it executes a prediction and sends it back.
I am relatively new to Python. From what I've seen I could use a "Child Process" to execute that. I also saw some modules that run Python scripts from Node.
The problem is I want to load the model once and let it be for as long as the server is on. I don't want to keep loading the model every time due to its size. What is the best way to do that?
The idea is running everything in a AWS machine.
Thank you in advance. | 1 | 1 | 3,643 |
0 | 42,655,243 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-05T18:43:00.000 | 0 | 1 | 0 | Python - Closeness of two values on a logarithmic scale | 42,612,439 | 0 | python,math | So the solution I have is :
linear_closeness = 1 - (difference / max_deviation)
exponential_closeness = 10^linear_closeness / 10
This is suitable for me. I am open to better solutions. | I have two time-value series (using pandas) and would like to represent the "closeness" of the last value in each series with regard to the other on a logarithmic scale between 0 and 1, 0 being very far away and 1 being the same.
I am not sure how to approach this and any help would be appreciated. | 0 | 1 | 101 |
0 | 47,734,394 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-06T12:35:00.000 | 0 | 2 | 0 | How to combine a Self-organising map and a multilayer perceptron in python | 42,625,825 | 0 | python,neural-network,cluster-analysis,image-recognition,self-organizing-maps | I have been wondering if there is any mileage to training a separate supervised neural network for the inputs which map to each node in the SOM. You'd then have separate supervised learning on the subset of the input data mapping to each SOM node. The networks attached to each node would perhaps be smaller and more easily trained than one large network which had to deal with the whole input space. There may also be benefit from including input vectors which map to the adjacent SOM nodes.
Is anyone aware of this being the subject of research? | I am working on an image recognition project in python. I have read in journals that if the clustering performed by a self-organizing map (SOM) is input into a supervised neural network, the accuracy of image recognition improves as opposed to the supervised network on its own. I have tried this myself by using the SOM to perform clustering and then using the coordinates of the winning neuron after each iteration as input to a multilayer perceptron from Keras. However, the accuracy is very poor.
What output of SOM should be used as input to a multilayer perceptron? | 0 | 1 | 1,430 |
0 | 42,777,643 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-06T12:35:00.000 | 1 | 2 | 0 | How to combine a Self-organising map and a multilayer perceptron in python | 42,625,825 | 0.099668 | python,neural-network,cluster-analysis,image-recognition,self-organizing-maps | Another way to use SOM is for vector quantisation. Rather than using the winning SOM coordinates, use the codebook values of the winning neuron. Not sure which articles you are reading, but I would have said that SOM into MLP will only provide better accuracy in certain cases. Also, you will need to choose parameters like dimensionality and map size wisely.
For image processing, I would have said that Autoencoders or Convolutional Neural Networks (CNNs) are more cutting-edge alternatives to SOM to investigate if you're not set on the SOM + MLP architecture. | I am working on an image recognition project in python. I have read in journals that if the clustering performed by a self-organizing map (SOM) is input into a supervised neural network, the accuracy of image recognition improves as opposed to the supervised network on its own. I have tried this myself by using the SOM to perform clustering and then using the coordinates of the winning neuron after each iteration as input to a multilayer perceptron from Keras. However, the accuracy is very poor.
What output of SOM should be used as input to a multilayer perceptron? | 0 | 1 | 1,430 |
0 | 42,632,755 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-03-06T17:56:00.000 | 4 | 2 | 0 | Can Convolution2D work on rectangular images? | 42,632,411 | 1.2 | python,image,keras,convolution | No issues with a rectangle image... Everything will work properly as for square images. | Let's say I have a 360px by 240px image. Instead of cropping my (already small) image to 240x240, can I create a convolutional neural network that operates on the full rectangle? Specifically using the Convolution2D layer.
I ask because every paper I've read doing CNNs seems to have square input sizes, so I wonder if what I propose will be OK, and if so, what disadvantages I may run into. Are all the settings (like border_mode='same') going to work the same? | 0 | 1 | 2,468 |
0 | 43,152,282 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-07T06:29:00.000 | 1 | 1 | 0 | mod.predict gives more columns than expected | 42,641,657 | 1.2 | python,mxnet | Using "y = mod.predict(val_iter,num_batch=1)" instead of "y = mod.predict(val_iter)", then you can get only one batch labels. For example,if you batch_size is 10, then you will only get the 10 labels. | I am using MXNet on IRIS dataset which has 4 features and it classifies the flowers as -'setosa', 'versicolor', 'virginica'. My training data has 89 rows. My label data is a row vector of 89 columns. I encoded the flower names into number -0,1,2 as it seems mx.io.NDArrayIter does not accept numpy ndarray with string values. Then I tried to predict using
re = mod.predict(test_iter)
I get a result which has the shape 14 * 10.
Why am I getting 10 columns when I have only 3 labels, and how do I map these results to my labels? The result of predict is shown below:
[[ 0.11760861 0.12082944 0.1207106 0.09154381 0.09155304 0.09155869
0.09154817 0.09155204 0.09154914 0.09154641] [ 0.1176083 0.12082954 0.12071151 0.09154379 0.09155323 0.09155825
0.0915481 0.09155164 0.09154923 0.09154641] [ 0.11760829 0.1208293 0.12071083 0.09154385 0.09155313 0.09155875
0.09154838 0.09155186 0.09154932 0.09154625] [ 0.11760861 0.12082901 0.12071037 0.09154388 0.09155303 0.09155875
0.09154829 0.09155209 0.09154959 0.09154641] [ 0.11760896 0.12082863 0.12070955 0.09154405 0.09155299 0.09155875
0.09154839 0.09155225 0.09154996 0.09154646] [ 0.1176089 0.1208287 0.1207095 0.09154407 0.09155297 0.09155882
0.09154844 0.09155232 0.09154989 0.0915464 ] [ 0.11760896 0.12082864 0.12070941 0.09154408 0.09155297 0.09155882
0.09154844 0.09155234 0.09154993 0.09154642] [ 0.1176088 0.12082874 0.12070983 0.09154399 0.09155302 0.09155872
0.09154837 0.09155215 0.09154984 0.09154641] [ 0.11760852 0.12082904 0.12071032 0.09154394 0.09155304 0.09155876
0.09154835 0.09155209 0.09154959 0.09154631] [ 0.11760963 0.12082832 0.12070873 0.09154428 0.09155257 0.09155893
0.09154856 0.09155177 0.09155051 0.09154671] [ 0.11760966 0.12082829 0.12070868 0.09154429 0.09155258 0.09155892
0.09154858 0.0915518 0.09155052 0.09154672] [ 0.11760949 0.1208282 0.12070852 0.09154446 0.09155259 0.09155893
0.09154854 0.09155205 0.0915506 0.09154666] [ 0.11760952 0.12082817 0.12070853 0.0915444 0.09155261 0.09155891
0.09154853 0.09155206 0.09155057 0.09154668] [ 0.1176096 0.1208283 0.12070892 0.09154423 0.09155267 0.09155882
0.09154859 0.09155172 0.09155044 0.09154676]] | 0 | 1 | 158 |
0 | 42,657,727 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-07T17:04:00.000 | 0 | 1 | 0 | Do I need to use a bin packing algorithm, or knapsack? | 42,654,075 | 1.2 | python,algorithm,dynamic-programming,knapsack-problem,bin-packing | This task can be reduced to solving several knapsack problems. To solve them, the principle of greedy search is usually used, and the number of cuts is the criterion of the search.
The first obvious step of the algorithm is checking the balance.
The second step is to sort the arrays of bars and chocolate needs, which will simplify further calculations. This implements the principle of greedy search.
The third obvious step is to find and use all the bars, the sizes of which coincide with the needs.
The next step is to find and use all combinations of the bars that satisfy the needs. This task requires a "greedy" search in descending order of needs, which continues in the further calculations. This criterion is not optimal, but it allows us to form a basic solution.
If not all the children have received chocolate, then cuts become unavoidable. The search should be done in descending order of the bar sizes. First, one should check all possibilities of giving the cut bars to two children at once, then the same but with one existing bar used, etc.
After that there is an obvious variant, "one cut - one need", which allows forming the base solution. But if computational resources remain, they can be used first to calculate options of the type "two cuts - three needs", etc.
Further optimization consists of returning to the earlier steps and calculating the following variants. | Here's the problem statement:
I have m chocolate bars, of integer length, and n children who
want integer amounts of chocolate. Where the total chocolate needs of
the children are less than or equal to the total amount of chocolate
you have. You need to write an algorithm that distributes chocolate to
the children by making the least number of cuts to the bars.
For example, for M = {1,3,7}, N = {1,3,4}, the least number of cuts would be 1.
I don't have any formal experience with algorithms, could anyone give me any hints on what I should start reading to tackle this problem in an efficient way? | 0 | 1 | 770 |
0 | 42,697,952 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-07T19:40:00.000 | 1 | 2 | 0 | How can I make matplotlib plot rendering faster | 42,656,915 | 0.099668 | python,matplotlib,plot,pyqt4 | Two possible solutions:
Don't show a scatter plot, but a hexbin plot instead.
Use blitting.
(In case someone wonders about the quality of this answer; mind that the questioner specifically asked for this kind of structure in the comments below the question.) | I want to work with a scatter plot within a FigureCanvasQTAgg. The scatter plot may have 50,000 or more data points. The user wants to draw a polygon in the plot to select the data points within the polygon. I've realized that by setting points via mouse clicks and connecting them with lines using Axis.plot(). When the user has set all the points, the polygon is drawn. Each time I add a new point I call FigureCanvasQTAgg.draw() to render the current version of the plot. This is slow, because the scatter plot has so much data.
Is there a way to make this faster? | 0 | 1 | 1,682 |
0 | 42,658,405 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-07T21:05:00.000 | 1 | 1 | 0 | Speeding up exponential moving average in python | 42,658,330 | 1.2 | python,pandas | by definition, these are functions that are computationally intensive on huge datasets.
So there is very little hope to speed this up. Something you can try is to save the corresponding series as a .csv, do the smoothing in Pandas, and then merge back to your huge dataframe.
Sometimes that can help as carrying a large dataframe in memory back and forth is costly. | I found pandas ewm function quite slow when applied to huge data. Is there any way to speed this up or use alternative functions for exponential weighted moving averages? | 0 | 1 | 416 |
0 | 42,665,677 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-08T06:32:00.000 | 0 | 1 | 0 | Can Tensorflow be used to detect if a particular feature exists in an image? | 42,664,493 | 1.2 | python,python-2.7,machine-learning,tensorflow | Yes and no. Tensorflow is a graph computation library mostly suited for neural networks.
You can create a neural network that determines if a face is in the image or not... You can even search for existing implementations that use Tensorflow...
There is no default Haar feature based cascade classifier in Tensorflow... | For example, using OpenCV and haarcascade_frontal_face.xml, we can predict if a face exists in an image. I would like to know if such a thing (detecting an object of interest) is possible with Tensorflow and if yes, how? | 0 | 1 | 540 |
0 | 42,684,721 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-03-08T08:20:00.000 | 4 | 2 | 0 | Sklearn K means Clustering convergence | 42,666,255 | 1.2 | python,scikit-learn,k-means | You have access to the n_iter_ field of the KMeans class, it gets set after you call fit (or other routines that internally call fit.
Not your fault for overlooking that, it's not part of the documentation, I just found it by checking the source code ;) | I am trying to construct clusters out of a set of data using the Kmeans algorithm from SkLearn. I want to know how one can determine whether the algorithm actually converged to a solution for one's data.
We feed in the tol parameter to define the tolerance for convergence, but there is also a max_iter parameter that defines the number of iterations the algorithm will do for each run. I get that the algorithm may not always converge within max_iter iterations. So is there any attribute or function that I can access to know if the algorithm converged before the max_iter iterations? | 0 | 1 | 2,975 |
0 | 42,676,591 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-08T15:48:00.000 | 0 | 1 | 0 | How to handle huge volume of data in limited RAM | 42,675,835 | 0 | python,memory-management | This question reminds me of the early 80's. Memory used to be expensive and we invented swapping. The (high-level part of the) OS sees more memory than is actually present, and pages are copied to disk. When one is needed another one is swapped out, and the page is copied back into memory. Performance is awful, but at least it works.
Your question is rather broad, but a rule of thumb says that if you can process your data in batches, explicitly loading batches of data is much more efficient, but if the algorithm is too complex or requires actions on any data at any moment, just let the swap take care of it.
So add a swap file significantly larger than the memory you think you need (with the given sizes, I would try 100 or 200 Gbytes), start the processing before leaving the office, and you could have results the next morning. | I have to process a large volume of data (feature maps of individual layers for around 4000 images) which grows to more than 50 GB at some point. The processing involves some calculation after which an approximately 2MB file is written to the HDD.
Since the free RAM is around 40GB, my process crashes after some point. Can anyone suggest a better approach to either divide or process this 50GB of data such that the computation can be done within the available RAM, e.g. some in-memory compression approach?
I am just looking for hints to the possible approaches to this problem.
Thanks | 0 | 1 | 313 |
0 | 45,650,018 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-09T05:19:00.000 | 0 | 1 | 0 | How does graph_def store the shape info of input and output of a node | 42,687,327 | 1.2 | python,tensorflow,deep-learning | Each node in the graph_def doesn't contain the shape of its output tensor; after importing the graph_def into memory (with tf.import_graph_def), the shape of each tensor in the graph is automatically determined | I downloaded the inception v3 model (a frozen graph) from the website and imported it into a session, then I found that the shapes of inputs and outputs of all nodes in this graph_def are already fully known, but when I freeze my own graph containing tf.Examples queues as inputs, the batch_size info seems to be lost and replaced with a ?. My question is how I can fix or change the unknown shape when I try to freeze a graph?
edit:
the node.attr of some nodes in graph_def contains the shape info but why not all nodes? | 0 | 1 | 438 |
0 | 42,703,588 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-09T19:07:00.000 | 0 | 2 | 0 | overflow in exp(x) in python | 42,703,506 | 0 | python-3.x,numpy | What is happening is that you are overflowing the register such that it is overwriting itself. You have exceeded the maximum value that can be stored in the register. You will need to use a different datatype that will most likely not be compatible with exp(.). You might need a custom function that works with 64-bit Integers. | I'm using function: numpy.log(1+numpy.exp(z))
for small values of z (1-705) it gives the identity result (1-705, as expected),
but for larger values of z from 710+ it gives infinity and throws the error "runtimeWarning: overflow encountered in exp" | 0 | 1 | 3,068 |
0 | 42,713,556 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-10T00:40:00.000 | 0 | 2 | 0 | How to remove both duplicated values from two csv files in apache spark? | 42,708,129 | 0 | python,csv,apache-spark,pyspark | thx for @himanshulllTian's great answer. And I want so say some more. If you have several columns in your file; then you just want to remove record based on the key column. Also, I don't know whether your csv files have same schema. Here is one way to deal with this situation. Let me borrow the example from himanshulllTian.
First, let's find the records that share a key: val dupKey = df1.join(df2, "key").select("key"). Then we can find the part we hope to remove in each dataframe: val rmDF1 = df1.join(dupKey, "key"). Finally, the same except action: val newDF1 = df1.except(rmDF1).
This may be trivial, but it works. Hope this helps. | Newbie to apache spark. What I want to do is to remove both of the duplicated keys from two csv files. I have tried dropDuplicates() and distinct() but all they do is remove one value. For example if key = 1010 appears in both the csv files, I want both of them gone. How do I do this? | 0 | 1 | 368 |
0 | 42,708,813 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-10T01:10:00.000 | 0 | 1 | 0 | Pandas read_csv fails | 42,708,388 | 0 | python,csv,pandas | add this : lineterminator = ':' | I am using pandas read_csv to open a csv file 1327x11. The first 265 rows are only 4 columns wide. Here is row 1 to 5
DWS_LENS1.converter,"-300.0,5.5; -0.1,5.5; 10.0,-5.5; 300.0,-5.5",(mass->volts),: DWS_LENS1.mass_dependent,false,:
DWS_LENS1.voltage.reading,-5.12642,V,:
DWS_LENS1.voltage.target,-4.95000,V,:
DWS_LENS2.converter,"-300.0,20.0; -10.0,20.0; 10.0,-20.0; 300.0,-20.0",(mass->volts),:
and here are some other rows :
157955,SAMPLE,,,,1760.5388,,,,:
,:
Summary,:
,:
Analyte,H3O+ (ppb),NO+ (ppb),O2+ (ppb),O- (ppb),OH- (ppb),:
toluene,1872.7367,,,,,:
isobutane,,1945.7385,,,,:
hexafluorobenzene,,,1951.0644,2121.6486,,:
tetrafluorobenzene,,,,,1599.5802,:
I receive Error tokenizing data. C error: Expected 4 fields in line 266, saw 11
I tried
df=pd.read_csv(test,error_bad_lines=False)
but it skips most rows and returns a 491x4 table.
If I use pd.read_csv(test,delim_whitespace=True,error_bad_lines=False)
I obtain a 1300x4 table but it fails splitting some data.
How can I have the 11 columns back? | 0 | 1 | 1,382 |
0 | 42,730,875 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-10T05:36:00.000 | 0 | 2 | 0 | Interpolate with lmfit? | 42,711,002 | 0 | python,interpolation,curve-fitting,lmfit | There is not a built-in way to automatically interpolate with lmfit. With an lmfit Model, you provide the array of independent values at which the Model should be evaluated, and an array of data to compare to that model.
You're free to interpolate or smooth the data or perform some other transformation (I sometimes Fourier transform data and model to emphasize some frequencies), but you'll have to include that as part of the model. | I am trying to fit a curve with lmfit but the data set I'm working with does not contain a lot of points and this makes the resulting fit look jagged instead of curved.
I'm simply using the line:
out = mod.fit(SV, pars, x=VR)
were VR and SV are the coordinates of the points I'm trying to fit.
I've tried using scipy.interpolate.UnivariateSpline and then fitted the resulting data, but I want to know if there is a built-in or faster way to do this.
Thank you | 0 | 1 | 424 |
0 | 42,711,798 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-10T06:00:00.000 | 2 | 2 | 0 | How to get the pseudo inverse of a huge diagonal matrix in python? | 42,711,310 | 1.2 | python,numpy,matrix | Just take the reciprocals of the nonzero elements. You can check with a smaller diagonal matrix that this is what pinv does. | If I have a diagonal matrix whose diagonal is 100Kx1, how can I get its pseudo inverse?
I won't be able to diagonalise the matrix and then get the inverse like I would do for a small matrix, so this won't work:
np.linalg.pinv(np.diag(D)) | 0 | 1 | 945 |
0 | 42,725,593 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-10T17:53:00.000 | 1 | 2 | 0 | Fastest way to compare every element with every other in np array of strings | 42,724,795 | 0.099668 | python,numpy,numpy-ufunc | tl;dr
You don't want that.
Details
First let's note that you're actually building a triangular matrix: for the first element, compare it to the rest of the elements, then repeat recursively to the rest. You don't use the triangularity, though. You just cut off the diagonal (each element is always equal to itself) and merge the rows into one list in your example.
If you sort your source list, you won't need to compare each element to the rest of the elements, only to the next element. You'd have to keep the position with the element as a tuple, to keep track of it after sorting.
You would sort the list of pairs in O(n log n) time, then scan it and find all the matches in O(n) time. Both sorting and finding the matches are simple and quick in your case.
After that, you'd have to create your 'bit vector', which is O(n^2) long. It would contain len(your vector) ** 2 elements, or 57600 million elements for a 240k-element vector. Even if you represented each element as one bit, it would take about 57.6 Gbit, or roughly 7.2 GBytes of memory.
Likely you don't want that. I suggest that you find a list of pairs in O(n log n) time, sort it by both first and second position in O(n log n) time, too, and recreate any portion of your desired bitmap by looking at that list of pairs; binary search would really help. Provided that you have much fewer matches than pairs of elements, the result may even fit in RAM. | I have a numpy array of strings, some duplicated, and I'd like to compare every element with every other element to produce a new vector of 1's and 0's indicating whether each pair (i,j) is the same or different.
e.g. ["a","b","a","c"] -> 12-element (4*3) vector [1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1]
Is there a way to do this quickly in numpy without a double loop through all pairs of elements? My array has ~240,000 elements, so it's taking a terribly long time to do it the naive way.
I'm aware of numpy.equal.outer, but apparently numpy.equal is not implemented on strings, so it seems like I'll need some more clever way to compare them. | 0 | 1 | 2,678 |
0 | 42,750,534 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-12T17:11:00.000 | 0 | 1 | 0 | Understanding the None output when using pandas.DataFrame.apply | 42,750,479 | 1.2 | python,pandas,dataframe,apply | The None occurs because the print() function doesn't return any value and apply() expects the function to return something.
If you want to print the data frame, just use print(df), or if you need some other format, tell us what you are trying to get as the printed output. | I'm trying to use the pandas.DataFrame.apply function. My actual code performs similarly to the example below. At the end of the output it outputs "None" for each row in the dataframe. This behavior causes an error in the function I'm passing through apply.
df = pd.DataFrame({"one": range(0,5), "two": range(0,5)})
df.apply(print, axis=1)
Why does it behave this way? What is the None coming from?
How can I alter/control this behavior? | 0 | 1 | 1,727 |
0 | 42,768,183 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-13T10:28:00.000 | 0 | 1 | 0 | export graph prototxt from tensorflow summary | 42,761,359 | 0 | python,tensorflow | The graph is available by calling tf.get_default_graph. You can get it in GraphDef format by doing graph.as_graph_def(). | When using tensorflow, the graph is logged in the summary file, which I "abuse" to keep track of the architecture modifications.
But that means every time I need to use tensorboard to visualise and view the graph.
Is there a way to write out such a graph prototxt in code or export this prototxt from summary file from tensorboard?
Thanks for your answer! | 0 | 1 | 421 |
0 | 42,765,548 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-13T13:11:00.000 | 2 | 2 | 1 | How to export PATH for sublime build tool? | 42,764,539 | 1.2 | python,tensorflow,sublimetext2,sublimetext3,sublimetext | Ok I got it:
The problem is that the LD_LIBRARY_PATH variable was missing. I only exported it in .bashrc.
When I add
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64\
${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
to ~/.profile it's working (don't forget to restart).
It also works if I start sublime from terminal with subl which passes all the variables. | I wanted to create a new "build tool" for sublime text, so that I can run my python scripts with an anaconda env with tensorflow. On my other machines this works without a problem, but on my ubuntu machine with GPU support I get an error.
I think this is due to the missing paths. The path provided in the error message doesn't contain the cuda paths, although I've included them in .bashrc.
Update
I changed ~/.profile to export the paths. But tensorflow still won't start from sublime. Running my script directly from terminal is no problem.
I get ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory
So somehow the GPU stuff (cuda?) can not be found
Thanks | 0 | 1 | 668 |
0 | 42,772,141 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-13T19:31:00.000 | 0 | 2 | 0 | Holoviews: AttributeError: 'Image' object has no attribute 'set' | 42,771,938 | 0 | python,bokeh,holoviews | There are some changes in bokeh 0.12.4, which are incompatible with HoloViews 1.6.2. We will be releasing holoviews 1.7.0 later this month, until then you have the option to downgrading to bokeh 0.12.3 or upgrading to the latest holoviews dev release with:
conda install -c ioam/label/dev holoviews
or
pip install https://github.com/ioam/holoviews/archive/v1.7dev7.zip | I have tried to run the Holoviews examples from the Holoviews website.
I have:
bokeh 0.12.4.
holoviews 1.6.2 py27_0 conda-forge
However, following any of the tutorials I get an error such as the following and am unable to debug:
AttributeError: 'Image' object has no attribute 'set'.
Is anyone able to guide me as to how to fix this?
Cheers
Ed | 0 | 1 | 997 |
0 | 50,535,897 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2017-03-14T03:46:00.000 | 5 | 3 | 0 | Why autocompletion options in Spyder 3.1 are not fully working in the Editor? | 42,777,430 | 0.321513 | python,autocomplete,spyder | Autocomplete was not working for me at all.
So, I tried Tools -> Reset Spyder to factory defaults and it worked. | Running on Mac Sierra, the autocompletion in Spyder (from Anaconda distribution), seems quite erratic. When used from the Ipython console, works as expected. However, when used from the editor (which is my main way of writing), is erratic. The autocompletion works (i.e. when pressing TAB a little box appears showing options) for some modules, such as pandas or matplotlib. So writing 'pd.' and hitting TAB, gets the box with options as expected. However, this does not happen with many other objects: for example, after defining a dataframe named 'df', typing 'df.' TAB shows nothing. In the Ipython console, 'df.' TAB would show the available procedures for that dataframe, such as groupby, and also its columns, etc..
So the question is threefold. First, is there any particular configuration that should be enabled to get this to work? I don't think so, given some time spent googling, but just wanna make sure. Second, could someone state what is the official word on what works and what doesn't in terms of autocompletion (e.g. what particular modules do work from the editor, and which ones doesn't?). Finally, what are the technical aspects of the differences between the editor and the Ipython console in the performance of the autocompletion with Spyder? I read something about Jedi vs. PsychoPy modules, so got curious (however, please keep in mind that although I have scientific experience, I am relatively new to computation, so please keep it reasonably simple for an educated but not expert person).
UPDATE: As a side question, it would be great to know why is the autocompletion better in Rodeo (another IDE). It is more new, has way fewer overall options than Spyder, but the autocompletion works perfectly in the editor. | 0 | 1 | 13,872 |
0 | 46,160,256 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2017-03-14T03:46:00.000 | 5 | 3 | 0 | Why autocompletion options in Spyder 3.1 are not fully working in the Editor? | 42,777,430 | 0.321513 | python,autocomplete,spyder | Autocompletion works correctly if there are NO white spaces in the project working directory path. | Running on Mac Sierra, the autocompletion in Spyder (from Anaconda distribution), seems quite erratic. When used from the Ipython console, works as expected. However, when used from the editor (which is my main way of writing), is erratic. The autocompletion works (i.e. when pressing TAB a little box appears showing options) for some modules, such as pandas or matplotlib. So writing 'pd.' and hitting TAB, gets the box with options as expected. However, this does not happen with many other objects: for example, after defining a dataframe named 'df', typing 'df.' TAB shows nothing. In the Ipython console, 'df.' TAB would show the available procedures for that dataframe, such as groupby, and also its columns, etc..
So the question is threefold. First, is there any particular configuration that should be enabled to get this to work? I don't think so, given some time spent googling, but just wanna make sure. Second, could someone state what is the official word on what works and what doesn't in terms of autocompletion (e.g. what particular modules do work from the editor, and which ones doesn't?). Finally, what are the technical aspects of the differences between the editor and the Ipython console in the performance of the autocompletion with Spyder? I read something about Jedi vs. PsychoPy modules, so got curious (however, please keep in mind that although I have scientific experience, I am relatively new to computation, so please keep it reasonably simple for an educated but not expert person).
UPDATE: As a side question, it would be great to know why is the autocompletion better in Rodeo (another IDE). It is more new, has way fewer overall options than Spyder, but the autocompletion works perfectly in the editor. | 0 | 1 | 13,872 |
0 | 42,932,979 | 0 | 0 | 0 | 0 | 2 | true | 66 | 2017-03-14T11:41:00.000 | 30 | 6 | 0 | tf.nn.conv2d vs tf.layers.conv2d | 42,785,026 | 1.2 | python,tensorflow | For convolution, they are the same. More precisely, tf.layers.conv2d (actually _Conv) uses tf.nn.convolution as the backend. You can follow the calling chain of: tf.layers.conv2d>Conv2D>Conv2D.apply()>_Conv>_Conv.apply()>_Layer.apply()>_Layer.\__call__()>_Conv.call()>nn.convolution()... | Is there any advantage in using tf.nn.* over tf.layers.*?
Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so. | 0 | 1 | 35,607 |
0 | 53,683,545 | 0 | 0 | 0 | 0 | 2 | false | 66 | 2017-03-14T11:41:00.000 | 7 | 6 | 0 | tf.nn.conv2d vs tf.layers.conv2d | 42,785,026 | 1 | python,tensorflow | All of these other replies talk about how the parameters are different, but actually, the main difference of tf.nn and tf.layers conv2d is that for tf.nn, you need to create your own filter tensor and pass it in. This filter needs to have the size of: [kernel_height, kernel_width, in_channels, num_filters]
Essentially, tf.nn is lower level than tf.layers. Unfortunately, this answer is not applicable anymore is tf.layers is obselete | Is there any advantage in using tf.nn.* over tf.layers.*?
Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so. | 0 | 1 | 35,607 |
0 | 42,902,517 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-03-15T12:48:00.000 | 0 | 1 | 0 | Python vs C++ Tensorflow inferencing | 42,810,240 | 0 | python,c++,tensorflow,benchmarking,inference | Write in the language that you are familiar with, in a way that you can maintain.
If it takes you a day longer to write it in the "faster" language, but only saves a minute of runtime, then it'll have to run 24*60 times to have caught up, and multiple times more than that to have been economical. | Is it really worth it to implement a C++ code for loading an already trained model and then fetch it instead of using Python?.
I was wondering this because as far as I understand, Tensorflow for python is C++ behind the scenes (as it is for numpy). So if one ends up basically having a Python program fetching the model loaded with a Python module, it would perform pretty similarly to using a module in C++, right?
Is there any benchmark? I wasn't able to find anything supporting this theory.
Thanks! | 0 | 1 | 789 |
0 | 42,817,461 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-15T17:59:00.000 | 0 | 3 | 0 | How do I save numpy arrays such that they can be loaded later appropriately? | 42,817,337 | 0 | python,arrays,numpy,save | How about ndarray's .tofile() method? To read use numpy.fromfile(). | I have a code which outputs an N-length Numpy array at every iteration.
Eg. -- theta = [ 0, 1, 2, 3, 4 ]
I want to be able to save the arrays to a text file or .csv file dynamically such that I can load the data file later and extract appropriately which array corresponds to which iteration. Basically, it should be saved in an ordered fashion.
I am assuming the data file would look something like this:-
0 1 2 3 4
1 2 3 4 5
2 3 4 5 6 ... (Random output)
I thought of using np.c_ but I don't want to overwrite the file at every iteration and if I simply save the terminal output as > output.txt, it saves as arrays including the brackets. I don't know how to read such a text file.
Is there a proper method to do this, i.e. write and read the data? | 0 | 1 | 741 |
0 | 42,821,370 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-15T20:24:00.000 | 1 | 1 | 0 | Python's XGBRegressor vs R's XGBoost | 42,819,987 | 1.2 | python,r,machine-learning,scikit-learn,xgboost | Since XGBoost uses decision trees under the hood, it can give you slightly different results between fits if you do not fix the random seed so that the fitting procedure becomes deterministic.
You can do this via set.seed in R and numpy.random.seed in Python.
Noting Gregor's comment, you might want to set the nthread parameter to 1 to achieve full determinism. | I'm using python's XGBRegressor and R's xgb.train with the same parameters on the same dataset and I'm getting different predictions.
I know that XGBRegressor uses 'gbtree' and I've made the appropriate comparison in R, however, I'm still getting different results.
Can anyone lead me in the right direction on how to differentiate the 2 and/or find R's equivalent of python's XGBRegressor?
Sorry if this is a stupid question, thank you. | 0 | 1 | 1,321 |
0 | 42,841,898 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2017-03-16T13:44:00.000 | 3 | 2 | 0 | Which comes first in order of implementation: POS Tagging or Lemmatisation? | 42,835,852 | 1.2 | python,nlp,nltk,pos-tagger,lemmatization | Part of speech is important for lemmatisation to work, as words can have different meanings depending on their part of speech. Using this information, lemmatisation will return the base form or lemma. So, it would be better if the POS tagging implementation is done first.
The main idea behind lemmatisation is to group different inflected forms of a word into one. For example, go, going, gone and went will become just one - go. But to derive this, lemmatisation would have to know the context of a word - whether the word is a noun or verb etc.
So, the lemmatisation function can take the word and the part of speech as input and return the lemma after processing the information. | If I wanted to make a NLP Toolkit like NLTK, which features would I implement first after tokenisation and normalisation. POS Tagging or Lemmatisation? | 0 | 1 | 519 |
0 | 42,875,622 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-03-16T13:44:00.000 | 2 | 2 | 0 | Which comes first in order of implementation: POS Tagging or Lemmatisation? | 42,835,852 | 0.197375 | python,nlp,nltk,pos-tagger,lemmatization | Sure make the POS Tagger first. If you do lemmatisation first you could lose the best possible classification of words when doing the POS Tagger, especially in languages where ambiguity is commonplace, as it is in Portuguese. | If I wanted to make a NLP Toolkit like NLTK, which features would I implement first after tokenisation and normalisation. POS Tagging or Lemmatisation? | 0 | 1 | 519 |
0 | 42,857,362 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2017-03-17T05:02:00.000 | 4 | 1 | 0 | How to repartition a dataframe into fixed sized partitions? | 42,849,572 | 1.2 | python,dataframe,dask | Short answer is probably "no, there is no way to do this without looking at the data". The reason here is that the structure of the graph depends on the values of your lazy partitions. For example we'll have a different number of nodes in the graph depending on your total datasize. | I have a dask dataframe created from delayed functions which is comprised of randomly sized partitions. I would like to repartition the dataframe into chunks of size (approx) 10000.
I can calculate the correct number of partitions with np.ceil(df.size/10000) but that seems to immediately compute the result?
IIUC to compute the result it would have had to read all the dataframes into memory which would be very inefficient. I would instead like to specify the whole operation as a dask graph to be submitted to the distributed scheduler so no calculations should be done locally.
Is there some way to specify npartitions without having it immediately compute all the underlying delayed functions? | 0 | 1 | 298 |
0 | 42,853,495 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-17T06:06:00.000 | 0 | 2 | 0 | How can i get 'y' rather than 'yhat' from predicted data by using fbprophet? | 42,850,344 | 0 | python,facebook | If I'm not mistaken, 'y' is the data you're using yo fit with, i.e. the input to prophet. 'yhat' is the mean(or median?) of the predicted distribution. | I can use fbprophet (in python) to get some predicted data, but it just includes 't', 'yhat', 'yhat_upper', 'yhat_lower' and or so rather than 'y' which I also want to acquire.
At present I think I can't get the value of 'y' from the predicted data because Prophet doesn't work for predicting the future value like 'y'.
Am I predicting in the wrong way? | 0 | 1 | 475 |
0 | 42,870,291 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-17T23:21:00.000 | 1 | 1 | 0 | Text recognition and detection using TensorFlow | 42,868,546 | 0.197375 | python,tensorflow,deep-learning,text-classification,text-recognition | To group elements on a page, like paragraphs of text and images, you can use some clustering algorithm and/or blob detection with some thresholds.
You can use Radon transform to recognize lines and detect skew of a scanned page.
I think that for character separation you will have to mess with fonts. Some polynomial matching/fitting or something. (this is a very wild guess for now, don't take it seriously).
But a similar approach would allow you to get the character out of the line and recognize it in the same step.
As for recognition, once you have a character, there is a nice trigonometric trick of comparing angles of the character to the angles stored in a database.
Works great on handwriting too.
I am not an expert on how page segmentation exactly works, but it seems that I am on my way to becoming one. I am just working on a project that includes it.
So give me a month and I'll be able to tell you more. :D
Anyway, you should go and read Tesseract code to see how HP and Google did it there. It should give you pretty good ideas.
Good luck! | I am working on a text recognition project.
I have built a classifier using TensorFlow to predict digits, but I would like to implement a more complex text recognition algorithm using text localization and text segmentation (separating each character); however, I didn't find an implementation for those parts of the algorithm.
So, do you know some algorithms/implementations/tips I can use, with TensorFlow, to localize text and do text segmentation in natural scene pictures (specifically, to localize and segment text on scoreboards in sports pictures)?
Thank you very much for any help. | 0 | 1 | 2,116 |
0 | 43,602,549 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-03-18T23:34:00.000 | 0 | 2 | 0 | Cannot import name -> No matching distribution when trying to install | 42,881,154 | 0 | python,pip,installation,importerror | Met the same problem. Fixed by installing the pybrain from github:
pip install https://github.com/pybrain/pybrain/archive/0.3.3.zip | I have no experience with Python. Just trying to run a program I downloaded from GitHub. Had a lot of problems trying to run it. After adding PATH, and installing a few modules(?), I got stuck:
When I type in "python filename.py", I get the error:
ImportError: cannot import name: 'SequentialDataSet'
I got the same error with different names. Just typed in "pip install (name)" to fix it. Got the same error again with another name, installed, but now I'm stuck with the error:
Could not find a version that satisfies the requirement
SequentialDataSet (from versions: ). No matching distribution found
for SequentialDataSet.
Let me know if there is any info you need
Thanks | 0 | 1 | 2,259 |
0 | 52,922,702 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2017-03-19T03:29:00.000 | -2 | 3 | 0 | Efficient calculation of euclidean distance | 42,882,604 | -0.132549 | python,algorithm,python-3.x,euclidean-distance | I had the same issue before, and it worked for me once I normalized the values. So try to normalize the data before calculating the distance. | I have a MxN array, where M is the number of observations and N is the dimensionality of each vector. From this array of vectors, I need to calculate the mean and minimum euclidean distance between the vectors.
In my mind, this requires me to calculate (M choose 2) distances, which is an O(n*min(k, n-k)) algorithm. My M is ~10,000 and my N is ~1,000, and this computation takes ~45 seconds.
Is there a more efficient way to compute the mean and min distances? Perhaps a probabilistic method? I don't need it to be exact, just close. | 0 | 1 | 1,648 |
0 | 42,885,380 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-19T06:28:00.000 | 1 | 2 | 0 | PyCaffe output layer for testing binary classification model | 42,883,603 | 0.099668 | python,deep-learning,caffe,convolution,pycaffe | SigmoidWithLoss layer outputs a single number per batch representing the loss w.r.t the ground truth labels.
On the other hand, Sigmoid layer outputs a probability value for each input in the batch. This output does not require the ground truth labels to be computed.
If you are looking for the probability per input, you should be looking at the output of the Sigmoid layer | I fine-tuned vgg-16 for binary classification. I used the sigmoidLoss layer as the loss function.
To test the model, I coded a python file in which I loaded the model with an image and got the output using :
out = net.forward()
My doubt is whether I should take the output from the Sigmoid or the SigmoidLoss layer.
And what is the difference between the 2 layers?
My output will actually be the probability of the input image being class 1. | 0 | 1 | 329 |
1 | 42,889,299 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-19T08:17:00.000 | 0 | 1 | 0 | Image Registration accuracy evaluation (Hausdroff distance) using SimpleITK without segmenting the image | 42,884,375 | 0 | python,image,itk,elastix,simpleitk | If I understand your question correctly, you want the impossible: to have Hausdorff distance measure as if the image were segmented, but without segmenting it because the segmentation is hard. | I have registered two images, let's say fixed and moving are Registered. After registration I want to measure overlap ratio etc.
SimpleITK has overlap measure filters, and to use overlap_measures_filter.Execute(fixed, moving) and hausdroff_measures_filter.Execute() we need to segment the image and we need labels as input. But the image is hard to segment using just thresholding or connected component filters.
Now the question is, how can we evaluate registration accuracy using SimpleITK with just the fixed image and the registered image (without segmenting and labeling the image)? | 0 | 1 | 363 |
0 | 62,689,216 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2017-03-19T11:56:00.000 | 1 | 6 | 0 | how to get opencv_contrib module in anaconda | 42,886,286 | 0.033321 | python,opencv,anaconda,conda | The question is old but I thought I would update the answer with the latest information. My Anaconda version is 2019.10 and the build channel is py_37_0. I used pip install opencv-python==3.4.2.17 and pip install opencv-contrib-python==3.4.2.17. Now they are also visible as installed packages in Anaconda navigator and I am able to use patented methods like SIFT etc.
I need that module for
matches = flann.knnMatch(des1,des2,k=2)
to run correctly
error thrown is
cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Also I am using Anaconda openCV version 3, and strictly don't want to switch to lower versions
P.S. The option of editing the file cv2.cpp, as suggested in many places, is not available with Anaconda. | 0 | 1 | 29,093 |
0 | 44,329,928 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2017-03-19T11:56:00.000 | 14 | 6 | 0 | how to get opencv_contrib module in anaconda | 42,886,286 | 1 | python,opencv,anaconda,conda | I would recommend installing pip in your anaconda environment and then just doing: pip install opencv-contrib-python. This comes with opencv and opencv-contrib. | Can anyone tell me commands to get the contrib module for anaconda
I need that module for
matches = flann.knnMatch(des1,des2,k=2)
to run correctly
error thrown is
cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Also I am using Anaconda openCV version 3, and strictly don't want to switch to lower versions
P.S. The option of editing the file cv2.cpp, as suggested in many places, is not available with Anaconda. | 0 | 1 | 29,093 |
0 | 42,890,197 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-19T17:58:00.000 | 0 | 1 | 0 | construct 3d diagonal tensor using 2d tensor | 42,890,105 | 0 | python,numpy,matrix,tensorflow,broadcast | Ok, I figured it out. tf.matrix_diag() does the trick... | Given A = [[1,2],[3,4],[5,6]]. How to use tf.diag() to construct a 3d tensor where each stack is a 2d diagonal matrix using the values from A? So the output should be B = [[[1,0],[0,2]],[[3,0],[0,4]],[[5,0],[0,6]]]. I want to use this as my Gaussian covariance matrices. | 0 | 1 | 153 |
0 | 57,453,505 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2017-03-20T03:19:00.000 | 1 | 1 | 0 | faster numpy array copy; multi-threaded memcpy? | 42,895,292 | 0.197375 | python,arrays,multithreading,numpy,memcpy | If you are certain that the types/memory layout of both arrays are identical, this might give you a speedup: memoryview(A)[:] = memoryview(B) This should be using memcpy directly and skips any checks for numpy broadcasting or type conversion rules. | Suppose we have two large numpy arrays of the same data type and shape, of size on the order of GB's. What is the fastest way to copy all the values from one into the other?
When I do this using normal notation, e.g. A[:] = B, I see exactly one core on the computer at maximum effort doing the copy for several seconds, while the others are idle. When I launch multiple workers using multiprocessing and have them each copy a distinct slice into the destination array, such that all the data is copied, using multiple workers is faster. This is true regardless of whether the destination array is a shared memory array or one that becomes local to the worker. I can get a 5-10x speedup in some tests on a machine with many cores. As I add more workers, the speed does eventually level off and even slow down, so I think this achieves being memory-performance bound.
I'm not suggesting using multiprocessing for this problem; it was merely to demonstrate the possibility of better hardware utilization.
Does there exist a python interface to some multi-threaded C/C++ memcpy tool?
Update (03 May 2017)
When it is possible, using multiple python processes to move data can give major speedup. I have a scenario in which I already have several small shared memory buffers getting written to by worker processes. Whenever one fills up, the master process collects this data and copies it into a master buffer. But it is much faster to have the master only select the location in the master buffer, and assign a recording worker to actually do the copying (from a large set of recording processes standing by). On my particular computer, several GB can be moved in a small fraction of a second by concurrent workers, as opposed to several seconds by a single process.
Still, this sort of setup is not always (or even usually?) possible, so it would be great to have a single python process able to drop into a multi-threaded memcpy routine... | 0 | 1 | 1,300 |
0 | 43,516,507 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2017-03-20T17:18:00.000 | 0 | 3 | 0 | Can I use Cucumber to test an application that uses more than one language? | 42,909,952 | 0 | java,python,hadoop,cucumber,cucumber-jvm | To use cucumber to test desktop applications you can use specflow which uses a framework in visual studio called teststack.white. Just google on cucumber specflow, teststack.white, etc and you should be able to get on track | I'm currently part of a team working on a Hadoop application, parts of which will use Spark, and parts of which will use Java or Python (for instance, we can't use Sqoop or any other ingest tools included with Hadoop and will be implementing our own version of this). I'm just a data scientist so I'm really only familiar with the Spark portion, so apologies for the lack of detail or if this question just sucks in general - I just know that the engineering team needs both Java and Python support. I have been asked to look into using Cucumber (or any other BDD framework) for acceptance testing our app front to back once we're further along. I can't find any blogs, codebases, or other references where cucumber is being used in a polyglot app, and barely any where Hadoop is being used. Would it be possible to test our app using Cucumber or any other existing BDD framework? We already plan to do unit and integration testing via JUnit/PyUnit/etc as well. | 0 | 1 | 994 |
0 | 42,931,219 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2017-03-20T17:18:00.000 | 0 | 3 | 0 | Can I use Cucumber to test an application that uses more than one language? | 42,909,952 | 0 | java,python,hadoop,cucumber,cucumber-jvm | The feature files would be written using Gherkin. Gherkin looks the same if you are using Java or Python. So in theory, you are able to execute the same specifications from both Java end Python. This would, however, not make any sense. It would just be a way to implement the same behaviour in two different languages and therefore two different places. The only result would be duplication and miserable developers.
What you can do is to use BDD and Gherkin to drive the implementation. But different behaviour in different languages. This will lead you to use two different sets of features. That is possible and probably a good idea given the context you describe. | I'm currently part of a team working on a Hadoop application, parts of which will use Spark, and parts of which will use Java or Python (for instance, we can't use Sqoop or any other ingest tools included with Hadoop and will be implementing our own version of this). I'm just a data scientist so I'm really only familiar with the Spark portion, so apologies for the lack of detail or if this question just sucks in general - I just know that the engineering team needs both Java and Python support. I have been asked to look into using Cucumber (or any other BDD framework) for acceptance testing our app front to back once we're further along. I can't find any blogs, codebases, or other references where cucumber is being used in a polyglot app, and barely any where Hadoop is being used. Would it be possible to test our app using Cucumber or any other existing BDD framework? We already plan to do unit and integration testing via JUnit/PyUnit/etc as well. | 0 | 1 | 994 |
0 | 42,915,731 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-20T22:38:00.000 | 1 | 1 | 0 | Week Number in python for regression | 42,915,249 | 1.2 | python,date,format,regression | You will need to convert the string describing the week into an integer you can use as the abscissa (x-coordinate, or independent variable). Pick a "zero point", such as FY2012 WK 52, so that FY2013 WK 01 translates to the integer 1.
I don't think DateTime handles this conversion; you might have to code the translation yourself: parse the string into year and week integers, and compute the abscissa from that:
52*(year-2013) + week
You might also want to keep a dictionary of those translations, as well as a reverse list (week => FY_week) for output labelling.
Does that move you toward a solution you can implement? | My current data set is by Fiscal Week. It is in this format "FY2013 WK 2". How can I format it, so that I can use a regression model on it and predict a value for let's say "FY2017 WK 2".
Should I treat Fiscal Week as a categorical value and use dmatrices? | 0 | 1 | 94 |
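A small sketch of the translation the answer describes, taking FY2012 WK 52 as the zero point; the flat 52-week fiscal year is an assumption, so adjust if some fiscal years contain 53 weeks:
import re

def week_abscissa(label, base_year=2013):
    # "FY2013 WK 2" -> 2, "FY2017 WK 2" -> 210 (zero point is FY2012 WK 52)
    year, week = map(int, re.match(r"FY(\d{4}) WK (\d+)", label).groups())
    return 52 * (year - base_year) + week

labels = ["FY2013 WK 2", "FY2017 WK 2"]
reverse = {week_abscissa(s): s for s in labels}   # reverse map for output labelling
print(sorted(reverse))                            # [2, 210]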
0 | 43,289,954 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-21T12:26:00.000 | 0 | 1 | 0 | How to create an udf for hive using python with 3rd party package like sklearn? | 42,927,141 | 0 | python,hive,package,udf | I recently started looking into this approach and I feel like the problem is not about to get all the 'hive nodes' having sklearn on them (as you mentioned above), I feel like it is rather a compatibility issue than 'sklearn node availability' one. I think sklearn is not (yet) designed to run as a parallel algorithm such that large amount of data can be processed in a short time.
What I'm trying to do, as an approach, is to communicate python to 'hive' through 'pyhive' (for example) and implement the necessary sklearn libraries/calls within that code. The rough assumption here that this 'sklearn-hive-python' code will run in each node and deal with the data at the 'map-reduce' level.
I cannot say this is the right solution or correct approach (yet) but this is what I can conclude after searching for sometime. | I know how to create a hive udf with transform and using, but I can't use sklearn because not all the node in hive cluster has sklearn.
I have an anaconda2.tar.gz with sklearn, What should I do ? | 0 | 1 | 314 |
0 | 42,931,849 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-21T15:30:00.000 | 0 | 1 | 0 | Use SVM model trained in Matlab for classification in python | 42,931,469 | 0 | python,matlab,scikit-learn,svm | I guess you understand how SVM works, so what I would do is to train the model again in python just on the support vectors you found rather than on all the original training data and the result should remain the same (as if you trained it on the full data), since the support vectors are the "interesting" vectors in the data, that are sitting on the boundaries. | I have a SVM model trained in MATLAB (using 6 features) for which I have:
Support Vectors [337 x 6]
Alpha [337 x 1]
Bias
Kernel Function: @rbf_kernel
Kernel Function Args = 0.9001
GroupNames [781 x 1]
Support Vector Indices [337 x 1]
Scale Data containing:
shift [1 x 6]
scale factor [1 x 6]
These above are all data that I am able to load in python.
Now I would like to use this model in python without retraining to perform classification in python. In particular I would like to create a SVM model in python from the support vector generated in MATLAB
Is it possible? How? Any help would be very appreciated!
I can't retrain it in python because I don't have the training data (and labels) anymore. | 0 | 1 | 519 |
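If retraining is impossible, the decision function can in principle be rebuilt directly from the exported quantities. The sketch below makes assumptions about MATLAB's svmtrain conventions (scaled input = (x + shift) .* scaleFactor, RBF sigma = 0.9001, alphas already signed by class), so verify the sign and scaling against a few samples with known labels before trusting it:
import numpy as np

def rbf_decision(X, support_vectors, alpha, bias, sigma, shift, scale_factor):
    Xs = (X + shift) * scale_factor                        # reproduce MATLAB's autoscaling
    d2 = ((Xs[:, None, :] - support_vectors[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))                   # rbf kernel with the exported sigma
    return K.dot(alpha) + bias                             # sign of the score picks the class

# scores = rbf_decision(X_new, sv, alpha.ravel(), bias, 0.9001, shift, scale_factor)
# labels = np.where(scores >= 0, group_a, group_b)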
0 | 42,932,214 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-21T15:43:00.000 | 0 | 1 | 0 | How can I stop OpenMDAO evaluating at a given location early | 42,931,794 | 1.2 | python,openmdao | Depending on your setup, you can raise an error inside the component that will kill the run. Then you just change the input and start up the next run. Alternately, modify your wrapper for the subsequent code so that if it sees a NaN it skips running and just reports a garbage number that's easily identifiable. | I am using OpenMDAO 1.7.3 for an optimization problem on a map.
My parameters are the coordinates on this map. The first thing I do is interpolating the height at this location from a map in one component. Then some more complex calculations follow in other components.
If OpenMDAO chooses a location outside the boundaries of the map I will get a height of NaN. I already know that there is no additional information to be gained from the rest of this optimization step. How can I make OpenMDAO move on to the next evaluation point as soon as possible, before doing the more complex calculations?
In my case the other calculations (in an external program) will even fail if they encounter a NaN, so I have to check the value before calling them in each of the components and assign NaN outputs for each of them. Is there a better way to do that? | 0 | 1 | 55 |
0 | 42,968,126 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-23T04:51:00.000 | 3 | 3 | 0 | Adding a jar file to pyspark after context is created | 42,967,472 | 0.197375 | python,apache-spark,jar | sparksession._jsc.addJar does the job. | I am using pyspark from a notebook and I do not handle the creation of the SparkSession.
I need to load a jar containing some functions I would like to use while processing my rdds. This is something which you can easily do using --jars which I cannot do in my particular case.
Is there a way to access the spark scala context and call the addJar method? I tried to use the JavaGateway (sparksession._jvm...) but have not been successful so far. Any idea?
Thanks
Guillaume | 0 | 1 | 4,578 |
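For reference, a hedged sketch of what the accepted call looks like from a notebook where the session already exists; the jar paths are placeholders, and note that _jsc is a private attribute, so it may change between Spark versions:
# `spark` is the SparkSession created for you by the notebook
spark._jsc.addJar("/path/to/my-functions.jar")
# equivalently, through the underlying SparkContext:
spark.sparkContext._jsc.addJar("hdfs:///jars/my-functions.jar")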
0 | 62,708,019 | 0 | 1 | 0 | 0 | 1 | false | 47 | 2017-03-23T14:03:00.000 | -2 | 7 | 0 | Anaconda version with Python 3.5 | 42,978,349 | -0.057081 | python,anaconda | It is very simple, first, you need to be inside the virtualenv you created, then to install a specific version of python say 3.5, use Anaconda, conda install python=3.5
In general you can do this for any python package you want
conda install package_name=package_version | I want to install tensorflow with python 3.5 using anaconda but I don't know which anaconda version has python 3.5. When I go to anaconda download page am presented with Anaconda 4.3.1 which has either version 3.6 or 2.7 of python | 0 | 1 | 123,158 |
0 | 43,007,775 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-23T18:08:00.000 | 0 | 1 | 0 | Does updating one Bokeh ColumnDataSource affect the entire document? | 42,983,784 | 0 | python,bokeh | As of Bokeh 0.12.5 update messages are:
triggered immediately for a given property change
granular (are not batched in any way)
So, updating model.foo triggers an immediate message sent to the browser, and that message only pertains to the corresponding model.foo in the browser.
The Bokeh protocol allows for batching updates, but this capability is not really used anywhere yet. It is an open feature request to allow for some kind of batching that would delay sending update messages until some grouping of them could be collected. | I have a Bokeh document with many plots/models, each of which has its own ColumnDataSource. If I update one ColumnDataSource does that trigger updates to all of my models or only to the models to which the changed source is relevant?
I ask because I have a few models, some of which are complex and change slowly and others which are simple and change quickly. I want to know if it makes sense performance-wise to scale the update frequencies on a per-plot basis or if I have to actually have different documents for this to be effective.
I am running a Bokeh server application | 0 | 1 | 247 |
0 | 42,996,091 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-03-23T20:48:00.000 | 1 | 2 | 0 | Selectively Iterate over Tensor | 42,986,686 | 0.197375 | python,machine-learning,tensorflow,keras,conv-neural-network | The general approach is to use binary masks. Tensorflow provides several boolean functions such as tf.equal and tf.not_equal. For selecting only entries which are equal to a certain value, you could use tf.equal and then multiply the loss tensor by the obtained binary mask. | I am currently building a CNN with Keras and need to define a custom loss function. I would only like to consider specific parts of my data in the loss and ignore others based on a certain parameter value. But, I am having trouble iterating over the Tensor objects that the Keras loss function expects.
Is there a simple way for me to compute the mean squared error between two Tensors, only looking at selected values in the Tensor?
For example, each Tensor in my case represents a 2D 16x16 grid, with each cell having 2 parameters - shape (16, 16, 2). I only want to compare cells where one of their parameters is equal to 1. | 0 | 1 | 707 |
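A sketch of such a masked loss using the Keras backend; it assumes (this is an assumption about the data layout) that the first of the two parameters in y_true marks the cells to keep when it equals 1:
from keras import backend as K

def masked_mse(y_true, y_pred):
    # 1.0 for cells whose first parameter is 1 in the ground truth, 0.0 elsewhere
    mask = K.cast(K.equal(y_true[..., 0], 1.0), K.floatx())        # shape (batch, 16, 16)
    per_cell = K.sum(K.square(y_true - y_pred), axis=-1)           # squared error per cell
    return K.sum(mask * per_cell) / (K.sum(mask) + K.epsilon())    # mean over selected cells only

# model.compile(optimizer="adam", loss=masked_mse)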
0 | 42,995,646 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-03-24T08:49:00.000 | 1 | 2 | 0 | piecewise linear interpolation function in python | 42,995,027 | 0.099668 | python,interpolation,coefficients | If you're doing linear interpolation you can just use the formula that the line from point (x0, y0) to (x1, y1) the line that interpolates them is given by y - y0 = ((y0 - y1)/(x0 - x1)) * (x - x0). You can take 2 element slices of your list using the slice syntax; for example to get [2.5, 3.4] you would use x[1:3].
Using the slice syntax you can then implement the linear interpolation formula to calculate the coefficients of the linear polynomial interpolations. | I'm fairly new to programming and thought I'd try writing a piecewise linear interpolation function. (perhaps which is done with numpy.interp or scipy.interpolate.interp1d)
Say I am given data as follows: x= [1, 2.5, 3.4, 5.8, 6] y=[2, 4, 5.8, 4.3, 4]
I want to design a piecewise interpolation function that will give the coefficients of all the linear polynomial pieces between 1 and 2.5, 2.5 to 3.4 and so on using Python.
Of course matlab has the interp1 function which does this, but I'm using python and I want to do exactly the same job as matlab; python only gives the values but not the linear polynomials' coefficients! (in matlab we could get this with pp.coefs).
but how to get pp.coefs in python numpy.interp ? | 0 | 1 | 4,195 |
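numpy.interp only evaluates the interpolant, but the per-segment coefficients are easy to compute directly; a short sketch using the data from the question (note that MATLAB's pp.coefs, if I recall its convention, expresses each piece relative to its left break point, so its intercepts differ from the global form below):
import numpy as np

x = np.array([1, 2.5, 3.4, 5.8, 6])
y = np.array([2, 4, 5.8, 4.3, 4])

slopes = np.diff(y) / np.diff(x)          # one slope per segment
intercepts = y[:-1] - slopes * x[:-1]     # y = slope * x + intercept on [x[i], x[i+1]]

for i, (m, c) in enumerate(zip(slopes, intercepts)):
    print("[%g, %g]: y = %.4f*x + %.4f" % (x[i], x[i + 1], m, c))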
0 | 42,999,562 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-24T10:55:00.000 | 1 | 1 | 0 | assign new dimension of length one | 42,997,677 | 1.2 | python,python-xarray | You can use xarray.concat to achieve this:
da = xarray.DataArray(0, coords={"x": 42})
xarray.concat((da,), dim="x") | I have a DataArray for which da.dims==(). I can assign a coordinate da.assign_coords(foo=42). I would like to add a corresponding dimension with length one, such that da.dims==("foo",) and the corresponding coordinate would be foo=[42]. I cannot use assign_coords(foo=[42]), as this results in the error message cannot add coordinates with new dimensions to a DataArray.
How do I assign a new dimension of length one to a DataArray? I could do something like DataArray(da.values.reshape([1]), dims="foo", coords={"foo": [42]}) but I wonder if there is a method that does not require copying the entire object. | 0 | 1 | 281 |
0 | 43,012,808 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2017-03-24T14:12:00.000 | 2 | 3 | 0 | How should I interpret the output of numpy.fft.rfft2? | 43,001,729 | 0.132549 | python,numpy,fft | Also note the ordering of the coefficients in the fft output:
According to the doc: by default the 1st element is the coefficient for the 0 frequency component (effectively the sum or mean of the array), and starting from the 2nd we have coefficients for the positive frequencies in increasing order; starting from n/2+1 they are for negative frequencies in decreasing order. To have a view of the frequencies for a length-10 array:
np.fft.fftfreq(10)
the output is:
array([ 0. , 0.1, 0.2, 0.3, 0.4, -0.5, -0.4, -0.3, -0.2, -0.1])
Use np.fft.fftshift(cf), where cf=np.fft.fft(array), the output is shifted so that it corresponds to this frequency ordering:
array([-0.5, -0.4, -0.3, -0.2, -0.1, 0. , 0.1, 0.2, 0.3, 0.4])
which makes more sense for plotting.
In the 2D case it is the same. And the fft2 and rfft2 difference is as explained by others. | Obviously the rfft2 function simply computes the discrete fft of the input matrix. However how do I interpret a given index of the output? Given an index of the output, which Fourier coefficient am I looking at?
I am especially confused by the sizes of the output. For an n by n matrix, the output seems to be an n by (n/2)+1 matrix (for even n). Why does a square matrix ends up with a non-square fourier transform? | 0 | 1 | 5,101 |
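A small sketch that makes the shape question concrete: for real input, the negative-frequency half of the last axis is the complex conjugate of the positive half, so rfft2 only stores the n//2 + 1 non-redundant columns:
import numpy as np

n = 8
a = np.random.rand(n, n)

full = np.fft.fft2(a)                    # shape (8, 8): all frequencies
half = np.fft.rfft2(a)                   # shape (8, 5): last axis truncated to n//2 + 1

print(full.shape, half.shape)
print(np.allclose(full[:, : n // 2 + 1], half))   # True: rfft2 is the kept half of fft2
print(np.fft.fftfreq(n))     # frequencies along the first (full) axis
print(np.fft.rfftfreq(n))    # non-negative frequencies kept along the last axis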
0 | 43,018,809 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-03-25T16:15:00.000 | 0 | 4 | 0 | Python How to make array in array? | 43,018,769 | 0 | python,arrays,list,numpy,types | You can't make that array. Arrays in numpy are similar to matrices in math. They have to be m rows, each having n columns. Use a list of lists, or a list of np.arrays | I have a numpy array Z1, Z2, Z3:
Z1 = [1,2,3]
Z2 = [4,5]
Z3 = [6,7,8,9]
I want new numpy array Z that have Z1, Z2, Z3 as array like:
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
I used np.append, hstack, vstack, insert, concatenate ...but all I failed.
There is only 2 case:
Z = [1,2,3,4,5,6,7,8,9]
or ERROR
so I made a list Z first, and append list Z1, Z2, Z3 and then convert list Z into numpy array Z.
BUT
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'list'>
I want to do not use 'while' or 'for'. Help me please.. | 0 | 1 | 1,326 |
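For the record, the closest thing to a real "array of ragged arrays" is an object-dtype array; a short sketch (no for/while over the data, though ragged rows still cannot behave like a rectangular 2-D array):
import numpy as np

Z1, Z2, Z3 = np.array([1, 2, 3]), np.array([4, 5]), np.array([6, 7, 8, 9])

Z = np.empty(3, dtype=object)     # object array: each slot may hold a different-length array
Z[0], Z[1], Z[2] = Z1, Z2, Z3

print(type(Z), type(Z[0]))        # <class 'numpy.ndarray'> <class 'numpy.ndarray'>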
0 | 43,039,088 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-03-25T16:15:00.000 | 0 | 4 | 0 | Python How to make array in array? | 43,018,769 | 0 | python,arrays,list,numpy,types | everybody thanks! The answers are a little bit different from what I want, but eventually I solved it without using 'for' or 'while'.
First, I made "numpy array" Z1, Z2, Z3 and put them into "list" Z. So there are arrays inside a list.
Second, I converted "list" Z into "numpy array" Z. That gives the array-in-array structure I want. | I have a numpy array Z1, Z2, Z3:
Z1 = [1,2,3]
Z2 = [4,5]
Z3 = [6,7,8,9]
I want new numpy array Z that have Z1, Z2, Z3 as array like:
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
I used np.append, hstack, vstack, insert, concatenate ...but all I failed.
There is only 2 case:
Z = [1,2,3,4,5,6,7,8,9]
or ERROR
so I made a list Z first, and append list Z1, Z2, Z3 and then convert list Z into numpy array Z.
BUT
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'list'>
I want to do not use 'while' or 'for'. Help me please.. | 0 | 1 | 1,326 |
0 | 43,021,757 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-25T20:29:00.000 | 0 | 1 | 0 | Can training and evaluation sets be the same in predictive analytics? | 43,021,624 | 1.2 | python,scikit-learn,anaconda,data-mining,prediction | The training set and the evaluation set must be different. The whole point of having an evaluation set is to guard against over-fitting.
In this case what you should do is take, say, 100,000 customers, picked at random. Then use the data to try and learn what it is about customers that makes them likely to purchase A. Then use the remaining 40,000 to test how well your model works. | I'm creating a model to predict the probability that customers will buy product A in a department store that sells products A through Z. The store has its own credit card with demographic and transactional information for 140,000 customers.
There is a subset of customers (say 10,000) who currently buy A. The goal is to learn from these 10,000 customers and score the remaining 130,000 with their probability to buy A, then target the ones with the highest scores with marketing campaigns to increase A sales.
How should I define my training and evaluation sets?
Training set:
Should it be only the 10,000 who bought A or the whole 140k customers?
Evaluation set: (where the model will be used in production)
I believe this should be the 130k who haven't bought A.
The question about time:
Another alternative is to take a photograph of the database last year, use it as a training set, then take the database as it is today and evaluate all customer's with the model created with last year's info. Is this correct? When should I do this?
Which option is correct for all sets? | 0 | 1 | 55 |
0 | 43,032,478 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-26T13:29:00.000 | 1 | 1 | 0 | How to get confidence levels(probabilities) for DNN Regressor in Tensorflow | 43,029,358 | 0.197375 | python,tensorflow | In terminal, type help(tf.contrib.learn.DNNRegressor). There you will see the object has methods such as predict() which returns predicted scores.
DNNRegressor does regression, not classification, so you don't get a probability distribution over classes. | For DNN Classifier there is a method predict_proba to get the probabilities, whereas for DNN Regressor it is not there. Please help. | 0 | 1 | 827 |
0 | 43,035,582 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2017-03-26T21:24:00.000 | 3 | 4 | 0 | Why doesn't Python come pre-built with required libraries like pandas, numpy etc | 43,034,716 | 0.148885 | python,anaconda,package,python-packaging,canopy | This is a bit like asking "Why doesn't every motor come with a car around it?"
While a car without a motor is pretty useless, the inverse doesn't hold: Most motors aren't even used for cars. Of course one could try selling a complete car to people who want to have a generator, but they wouldn't buy it.
Also the people designing cars might not be the best to build a motor and vice versa.
Similarly with python. Most python distributions are not used with numpy, scipy or pandas. Distributing python with those packages would create a massive overhead.
However, there is of course a strong demand for prebuilt distributions which combine those modules with a respective python and make sure everything interacts smoothly. Some examples are Anaconda, Canopy, python(x,y), winpython, etc. So an end user who simply wants a car that runs, best chooses one of those, instead of installing everything from scratch. Other users who do want to always have the newest version of everything might choose to tinker them together themselves. | What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of design to keep the 'core' separate from additional functionality, still in that case it should at least come 'pre-imported' as soon as you start Python.
Where can I find such distributions if they exist? | 0 | 1 | 1,040 |
0 | 43,035,255 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2017-03-26T21:24:00.000 | -1 | 4 | 0 | Why doesn't Python come pre-built with required libraries like pandas, numpy etc | 43,034,716 | -0.049958 | python,anaconda,package,python-packaging,canopy | PyPi currently has over 100,000 libraries available. I'm sure someone thinks each of these is important.
Why do you need or want to pre-load libraries, considering how easy a pip install is especially in a virtual environment? | What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of design to keep the 'core' separate from additional functionality, still in that case it should at least come 'pre-imported' as soon as you start Python.
Where can I find such distributions if they exist? | 0 | 1 | 1,040 |
0 | 44,992,636 | 0 | 1 | 0 | 0 | 3 | false | 7 | 2017-03-27T04:22:00.000 | 0 | 5 | 0 | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 43,037,903 | 0 | python,scikit-learn | Just closed the Spyder editor and restarted. This Issue got fixed. | I was trying to import sklearn.model_selection with Jupiter Notebook under anaconda environment with python 3.5, but I was warned that I didn't have "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection.
I reinstalled sklearn and scipy, but still received the same error message. May I have some advice? | 0 | 1 | 18,428 |
0 | 55,497,890 | 0 | 1 | 0 | 0 | 3 | false | 7 | 2017-03-27T04:22:00.000 | 1 | 5 | 0 | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 43,037,903 | 0.039979 | python,scikit-learn | The same error appeared when I tried to import hmm from hmmlearn, I reinstalled scipy and it worked. Hope this can be helpful.(I have tried updating all of the packages involved to solve the problem, but did not work. My computer system is ubuntu 16.04, with anaconda3 installed.) | I was trying to import sklearn.model_selection with Jupiter Notebook under anaconda environment with python 3.5, but I was warned that I didn't have "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection.
I reinstalled sklearn and scipy, but still received the same error message. May I have some advice? | 0 | 1 | 18,428 |
0 | 43,158,642 | 0 | 1 | 0 | 0 | 3 | true | 7 | 2017-03-27T04:22:00.000 | 10 | 5 | 0 | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 43,037,903 | 1.2 | python,scikit-learn | I came across exactly the same problem just now. After I updated scikit-learn and tried to import sklearn.model_selection, the ImportError appeared.
I just restarted anaconda and ran it again.
It worked. Don't know why. | I was trying to import sklearn.model_selection with Jupiter Notebook under anaconda environment with python 3.5, but I was warned that I didn't have "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection.
I reinstalled sklearn and scipy, but still received the same error message. May I have some advice? | 0 | 1 | 18,428 |
0 | 53,089,089 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-27T08:56:00.000 | 0 | 3 | 0 | Using quantopian for data analysis | 43,041,964 | 0 | python-3.x,zipline | You can get data for non-NYSE stocks as well like Nasdaq securities. Screens are also available by fundamentals(market, exchange, market cap). These screens can limit stocks analyzed from the broad universe. | I want to know were Quantopian gets data from?
If I want to do an analysis on a stock market other than NYSE, will I get the data? If not, can I manually upload the data so that I can run my algorithms on it. | 0 | 1 | 312 |
0 | 43,051,434 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-27T13:42:00.000 | 0 | 2 | 0 | How to determine size in bytes of H2O frame in Python? | 43,048,126 | 0 | python,h2o | This refers to 2-4 times the size of the file on disk, so rather than looking at the memory in Python, look at the original file size. Also, the 2-4x recommendation varies by algorithm (GLM & DL will requires less memory than tree-based models). | I am loading Spark dataframes into H2O (using Python) for building machine learning models. It has been recommended to me that I should allocate an H2O cluster with RAM 2-4x as big as the frame I will be training on, so that the analysis fits comfortably within memory. But I don't know how to precisely estimate the size of an H2O frame.
So supposing I have an H2O frame already loaded into Python, how do I actually determine its size in bytes? An approximation within 10-20% is fine. | 0 | 1 | 699 |
0 | 43,056,493 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-27T14:50:00.000 | 0 | 1 | 0 | Best way to save a CNN's weights in order to reuse them | 43,049,673 | 0 | python,tensorflow,deep-learning | Tensorflow provides a way to save your model: tensorflow.org/api_docs/python/tf/train/Saver. Your friend should then also use Tensorflow to load them. The language you load / save with doesn't affect how it is saved - if you save them in Tensorflow in Python they can be read in Tensorflow in C++. | I would like to save weights (and biases) from a CNN that I implemented and trained from scratch using Tensorflow (Python API).
Now I would like to save these weights in a file and share it with someone so he can use my network. But since I have a lot of weights I don't know. How can/should I do that? Is there a format recommended to do that? | 0 | 1 | 383 |
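A minimal sketch of the checkpoint mechanism the answer points to (TF 1.x tf.train.Saver); the variable and the path are placeholders for the real CNN graph:
import tensorflow as tf

w = tf.Variable(tf.random_normal([5, 5, 3, 32]), name="conv1_w")   # stand-in for the CNN weights
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training ...
    saver.save(sess, "./my_cnn_model.ckpt")   # writes .index / .data / .meta files you can share

# the recipient rebuilds the same graph and calls:
# saver.restore(sess, "./my_cnn_model.ckpt")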
0 | 43,058,780 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-28T00:36:00.000 | 0 | 1 | 0 | Slow datetime parsing in Pandas | 43,058,703 | 0 | python,performance,csv,pandas,datetime | and 2. In my experience, if processing time is not critical for your study (say you process the data once and then you run your analysis) then I would recommend you parse the dates using pd.to_datetime() and others after you have read in the data.
Anything that will help Pandas reduce the set of possibilities about the types in your data WILL speed up the processing. That makes sense: the more precise you are, the faster the processing will be. | These questions about datetime parsing in pandas.read_csv() are all related.
Question 1
The parameter infer_datetime_format is False by default. Is it safe to set it to True? In other words, how accurately can Pandas infer date formats? Any insight into its algorithm would be helpful.
Question 2
Loading a CSV file with over 450,000 rows took over 10 mins when I ran pd.read_csv("file.csv", parse_dates = ["Start", "End"])
However it took only 20 seconds when I added the parameters dayfirst = True and infer_datetime_format = True. Yet if either was False, it took over 10 mins.
Why must both be True in order to speed up datetime parsing? If one is False but not the other, shouldn't it take strictly between 20 sec and 10 mins? (The answer to this question may well be the algorithm, as in Question 1.)
Question 3
Since dayfirst = True, infer_datetime_format = True speeds up datetime parsing, why is it not the default setting? Is it because Pandas cannot accurately infer date formats? | 0 | 1 | 527 |
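One hedged illustration of the fast path suggested in the answer: if the exact format is known, skipping inference entirely is usually quickest (the format string below is an assumption; replace it with whatever the Start/End columns actually use):
import pandas as pd

# slow when formats must be guessed row by row
df = pd.read_csv("file.csv", parse_dates=["Start", "End"],
                 dayfirst=True, infer_datetime_format=True)

# usually fastest: read as strings, then convert with an explicit format
df = pd.read_csv("file.csv")
for col in ("Start", "End"):
    df[col] = pd.to_datetime(df[col], format="%d/%m/%Y %H:%M")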
0 | 44,095,963 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-03-28T04:50:00.000 | 1 | 5 | 0 | ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights | 43,060,827 | 0.039979 | python,tensorflow | Adding reuse = tf.get_variable_scope().reuse to BasicLSTMCell is OK to me. | I run ptb_word_lm.py provided by Tensorflow 1.0, but it shows this message:
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights:
'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell'; and the cell was
not constructed as BasicLSTMCell(..., reuse=True). To share the
weights of an RNNCell, simply reuse it in your second calculation, or
create a new one with the argument reuse=True.
Then I modify the code, add 'reuse=True' to BasicLSTMCell, but it show this message:
ValueError: Variable Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/weights does not
exist, or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
How could I solve this problems? | 0 | 1 | 3,069 |
0 | 45,557,687 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-03-28T04:50:00.000 | 0 | 5 | 0 | ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights | 43,060,827 | 0 | python,tensorflow | You can trying to add scope='lstmrnn' in your tf.nn.dynamic_rnn() function. | I run ptb_word_lm.py provided by Tensorflow 1.0, but it shows this message:
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights:
'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell'; and the cell was
not constructed as BasicLSTMCell(..., reuse=True). To share the
weights of an RNNCell, simply reuse it in your second calculation, or
create a new one with the argument reuse=True.
Then I modify the code, add 'reuse=True' to BasicLSTMCell, but it show this message:
ValueError: Variable Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/weights does not
exist, or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
How could I solve this problems? | 0 | 1 | 3,069 |
0 | 43,080,891 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-28T22:00:00.000 | 2 | 1 | 0 | Sklearn predict function | 43,080,722 | 1.2 | python,machine-learning,scikit-learn,scikits | i'th output is prediction for i'th input. Whatever you passed to .predict is a collection of objects, and the ordering of predictions is the same as the ordering of data passed in. | I am using sklearn's Linear Regression ML model in Python to predict. The predict function returns an array with a lot of floating point numbers, (which is correct) but I don't quite understand what the floating point numbers represent. Is it possible to map them back?
For context, I am trying to predict sales of a product (label) from stocks available. The predict function returns a large array of floating point numbers. How do I know what each floating point number represents?
For instance, the array is like [11.5, 12.0, 6.1,..]. It seems 6.1 is the sales qty, but what stock quantity is it associated with? | 0 | 1 | 4,905
0 | 43,098,199 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-03-29T11:40:00.000 | 5 | 3 | 0 | How to add new nodes / neurons dynamically in tensorflow | 43,092,454 | 1.2 | python,machine-learning,tensorflow,neural-network,artificial-intelligence | Instead of creating a whole new graph you might be better off creating a graph which has initially more neurons than you need and mask it off by multiplying by a non-trainable variable which has ones and zeros. You can then change the value of this mask variable to allow effectively new neurons to act for the first time. | If I want to add new nodes to on of my tensorflow layers on the fly, how can I do that?
For example if I want to change the amount of hidden nodes from 10 to 11 after the model has been training for a while. Also, assume I know what value I want the weights coming in and out of this node/neuron to be.
I can create a whole new graph, but is there a different/better way? | 0 | 1 | 2,690 |
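A rough TF 1.x sketch of the masking idea in the accepted answer (sizes and names here are made up): over-allocate the layer, zero out the unused units with a non-trainable mask, and flip a mask entry to 1 when you want a "new" neuron to start acting:
import tensorflow as tf

n_in, max_units, active = 4, 16, 10

x = tf.placeholder(tf.float32, [None, n_in])
W = tf.Variable(tf.random_normal([n_in, max_units]))
b = tf.Variable(tf.zeros([max_units]))
mask = tf.Variable([1.0] * active + [0.0] * (max_units - active), trainable=False)

hidden = tf.nn.relu(tf.matmul(x, W) + b) * mask         # masked-off units contribute nothing

enable_unit_10 = tf.scatter_update(mask, [10], [1.0])   # run once to "add" an 11th active neuron
# the new unit's incoming/outgoing weights can likewise be set with assign/scatter ops if known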
0 | 43,114,845 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-29T16:27:00.000 | 1 | 1 | 0 | Apache Spark: Pre requisite questions | 43,099,139 | 0.197375 | java,python,scala,ubuntu,hadoop | You need to install Hadoop 2.7 separately, in addition to whatever you are installing.
Java version is fine.
The mentioned configuration should work with scala 2.12.1. | I am about to install Apache Spark 2.1.0 on Ubuntu 16.04 LTS. My goal is a standalone cluster, using Hadoop, with Scala and Python (2.7 is active)
Whilst downloading I get the choice: Prebuilt for Hadoop 2.7 and later (File is spark-2.1.0-bin-hadoop2.7.tgz)
Does this package actually include HADOOP 2.7 or does it need to be installed separately (first I assume)?
I have Java JRE 8 installed (Needed for other tasks). As the JDK 8 also seems to be a pre requisite as well, I also did a ' sudo apt install default-jdk', which indeed shows as installed:
default-jdk/xenial,now 2:1.8-56ubuntu2 amd64 [installed]
Checking java -version however doesn't show the JDK:
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
Is this sufficient for the installation? Why doesn't it also show the JDK?
I want to use Scala 2.12.1. Does this version work well with the Spark2.1/Hadoop 2.7 combination or is another version more suitable?
Is the Scala SBT package also needed?
Been going back and forth trying to get everything working, but am stuck at this point.
Hope somebody can shed some light :) | 0 | 1 | 730 |
0 | 43,239,777 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-29T17:29:00.000 | 1 | 1 | 1 | undefined symbol: cudnnCreate in ubuntu google cloud vm instance | 43,100,290 | 0.197375 | python,tensorflow,ubuntu-16.04,cudnn | Answering my own question: The issue was not that the library was not installed, the library installed was the wrong version hence it could not find it. In this case it was cudnn 5.0. However even after installing the right version it still didn't work due to incompatibilities between versions of driver, CUDA and cudnn. I solved all this issues by re-installing everything including the driver taking into account tensorflow libraries requisites. | I'm trying to run a tensorflow python script in a google cloud vm instance with GPU enabled. I have followed the process for installing GPU drivers, cuda, cudnn and tensorflow. However whenever I try to run my program (which runs fine in a super computing cluster) I keep getting:
undefined symbol: cudnnCreate
I have added the next to my ~/.bashrc
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/extras/CUPTI/lib64:/usr/local/cuda-8.0/lib64"
export CUDA_HOME="/usr/local/cuda-8.0"
export PATH="$PATH:/usr/local/cuda-8.0/bin"
but still it does not work and produces the same error | 0 | 1 | 304 |
0 | 43,103,742 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-29T19:28:00.000 | 0 | 2 | 0 | why argument random_state in cross_validation.train_test_split is integer not boolean | 43,102,532 | 0 | python,machine-learning,cross-validation,sklearn-pandas | To expand a bit further on Kelvin's answer, if you want a random train-test split, then don't specify the random_state parameter. If you do not want a random train-test split (i.e. you want an identically-reproducible split each time), specify random_state with an integer of your choice. | i need to know why argument random_state in cross_validation.train_test_split is integer not Boolean, since it's role is to flag random allocation or not? | 0 | 1 | 280 |
0 | 43,103,553 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-29T19:28:00.000 | 2 | 2 | 0 | why argument random_state in cross_validation.train_test_split is integer not boolean | 43,102,532 | 0.197375 | python,machine-learning,cross-validation,sklearn-pandas | random_state is not only a flag of randomness or not, but which random seed to use. If you choose random_state = 3 you will "randomly" split the dataset, but you are able to reproduce the same split each time. I.e. each call with the same dataset will yield the same split, which is not the case if you don't specify the random_state parameter.
The reason why I use the quotation marks, is that it is actually pseudo random.
Wikipedia explains pseudorandomness like this:
A pseudorandom process is a process that appears to be random but is
not. Pseudorandom sequences typically exhibit statistical randomness
while being generated by an entirely deterministic causal process.
Such a process is easier to produce than a genuinely random one, and
has the benefit that it can be used again and again to produce exactly
the same numbers - useful for testing and fixing software. | i need to know why argument random_state in cross_validation.train_test_split is integer not Boolean, since it's role is to flag random allocation or not? | 0 | 1 | 280 |
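A tiny demonstration of that reproducibility, using the newer sklearn.model_selection path (the deprecated cross_validation module behaves the same way):
from sklearn.model_selection import train_test_split

X = list(range(10))

a = train_test_split(X, test_size=0.3, random_state=3)
b = train_test_split(X, test_size=0.3, random_state=3)
c = train_test_split(X, test_size=0.3)      # no seed: the split may differ between runs

print(a == b)   # True: the same integer seed reproduces exactly the same split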
0 | 43,107,623 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2017-03-29T22:15:00.000 | 3 | 1 | 0 | How are variables shared between concurrent `session.run(...)` calls in tensorflow? | 43,105,148 | 1.2 | python,multithreading,tensorflow | After doing some experimentation it appears that each call to sess.run(...) does indeed see a consistent point-in-time snapshot of the variables.
To test this I performed 2 big matrix multiply operations (taking about 10 sec each to complete), and updated a single, dependent, variable before, between, and after. In another thread I grabbed and printed that variable every 1/10th second to see if it picked up the change that occurred between operations while the first thread was still running. It did not, I only saw it's initial and final values. Therefore I conclude that variable changes are only visible outside of a specific call to sess.run(...) at the end of that run. | If you make two concurrent calls to the same session, sess.run(...), how are variables concurrently accessed in tensorflow?
Will each call see a snapshot of the variables as of the moment run was called, consistent throughout the call? Or will they see dynamic updates to the variables and only guarantee atomic updates to each variable?
I'm considering running test set evaluation on a separate CPU thread and want to verify that it's as trivial as running the inference op on a CPU device in parallel.
I'm having troubles figuring out exactly what guarantees are provided that make sessions "thread safe". | 0 | 1 | 646 |
0 | 43,189,429 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-30T16:24:00.000 | 1 | 1 | 0 | Difference of two dataframes in python | 43,123,378 | 0.197375 | python | df=pd.concat([a,b])
df = df.reset_index(drop=True)  # fresh 0..n-1 index so each row has a unique label
df_gpby = df.groupby(list(df.columns))  # group identical rows together
idx = [x[0] for x in df_gpby.groups.values() if len(x) == 1]  # keep rows that occur exactly once
df1 = df.reindex(idx)  # those single-occurrence rows are the difference of the two frames | I have two dataframes
ex:
test_1
name1 name2
a1 b1
a1 b2
a2 b1
a2 b2
a2 b3
test_2
name1 name2
a1 b1
a1 b2
a2 b1
I need the difference of two dataframes like
name1 name2
a2 b2
a2 b3 | 0 | 1 | 63 |
0 | 43,141,030 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-31T11:36:00.000 | 0 | 2 | 0 | How can I create a deep neural network which has a capability to take a decision for hypothesis? | 43,139,718 | 1.2 | python,machine-learning,statistics,deep-learning | If you are sure enough that the alternative hypothesis data come from a different distribution than the null hypothesis data, you can try an unsupervised learning algorithm; e.g. k-means or a GMM with the right number of clusters could yield a great separation of the data. You can then assign labels to the second-class data and train a classifier using them.
This is a general approach of semi-supervised learning.
Another idea would be to consider the alternative hypothesis data as outliers and use an anomaly detection algorithm to find your second-class data points. This is much more difficult to achieve and relies heavily on the supposition that the data come from really different distributions. | Basically, I am interested in solving a hypothesis problem, where I am only aware of the data distribution of a null hypothesis and don't know anything about the alternative case.
My concern is how should I train my deep neural network so that it can classify or recognise whether a particular sample data has a similar distribution as in null hypothesis case or it's from another class(An alternative Hypothesis case).
According to my understanding, It's different from a binary classification (one vs all case), because in that case, we know what data we are going to tackle, but here in my case alternative hypothesis case can follow any data distribution.
Here I am giving you an example situation, what I want exactly
Suppose I want to predict that a person is likely to have cancer or not
e.g
I have a data set of the factors that cause cancer like,
Parameter A=1,Parameter B=3.87,Parameter C=5.6,Has cancer = yes
But I don't have a data set where
Parameter A=2,Parameter B=1.87,Parameter C=2.6,Has cancer = No
Can be anything like this
Means I don't know about anything which leads to a conclusion of not having cancer, can I still train my model to recognise whether a person has cancer? | 0 | 1 | 103 |
0 | 43,141,287 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-03-31T12:51:00.000 | 0 | 2 | 0 | How to convert a column of a dataframe from char to ascii integers? [Pandas] | 43,141,160 | 0 | python-3.x,pandas | Use ord() to get the ASCII code of a character, e.g. "a" = 97 in ASCII.
write print(ord("a"))
and the answer would be 97 | I have a dataframe in which one column called 'label' holds values like 'b', 'm', 'n' etc.
I want 'label' to instead hold the ascii equivalent of the letter.
How do I do it? | 0 | 1 | 3,205 |
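The answer above reduces to mapping ord over the column; a short sketch:
import pandas as pd

df = pd.DataFrame({"label": ["b", "m", "n"]})
df["label"] = df["label"].map(ord)     # or df["label"].apply(ord)
print(df["label"].tolist())            # [98, 109, 110]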
0 | 43,158,973 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-31T16:17:00.000 | 1 | 3 | 0 | numpy array of zeros or empty | 43,145,332 | 0.066568 | python,arrays,numpy | It is better to create array of zeros and fill it using if-else. Even conditions makes slow your code, reshaping empty array or concatenating it with new vectors each iteration of loop is more slower operation, because each time new array of new size is created and old array is copied there together with new vector value by value. | I am writing code and efficiency is very important.
Actually I need 2d array, that I am filling with 0 and 1 in for loop. What is better and why?
Make empty array and fill it with "0" and "1". It's pseudocode, my array will be much bigger.
Make array filled by zeros and make if() and if not zero - put one.
So I need information what is more efficiency:
1. Put every element "0" and "1" to empty array
or
2. Make if() (efficiency of 'if') and then put only "1" element. | 0 | 1 | 5,613 |
0 | 43,160,280 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-04-01T17:56:00.000 | 0 | 2 | 0 | in Python 3.6 and numpy, what does the comma mean or do in "predictors[training_indices,:]" | 43,160,202 | 0 | python,python-3.x,numpy,comma | In your code predictors is a two dimensional array. You're taking a slice of the array. Your output will be all the values with training_indices as their index in the first axis. The : is slice notation, meaning to take all values along the second axis.
This kind of indexing is not common in Python outside of numpy, but it's not completely unique. You can write your own class that has a __getitem__ method, and interpret it however you want. The slice you're asking about will pass a 2-tuple to __getitem__. The first value in the tuple will be training_indices, and the second value will be a slice object. | I am in an online course, and I find I do not understand this expression:
predictors[training_indices,:]
predictors is an np.array of floats.
training_indices is a list of integers known to be indices of predictors, so 0=< i < len(training_indices)).
Is this a special numpy expression?
Thanks! | 0 | 1 | 802 |
0 | 59,593,711 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-04-02T08:48:00.000 | 1 | 2 | 0 | How to read a csv file as series instead of dataframe in pandas? | 43,166,420 | 0.099668 | python,pandas | There are 2 options to read a series from a csv file:
pd.Series.from_csv('File_name.csv')
pd.read_csv('File_name.csv', squeeze=True)
My preference is using squeeze=True with read_csv | When I try to use x = pandas.Series.from_csv('File_name.csv', header = None)
It throws an error saying IndexError: single positional indexer is out-of-bounds.
However, If I read it as dataframe and then extract series, it works fine.
x = pandas.read_csv('File_name.csv', header = None)[0]
What could be wrong with first method? | 0 | 1 | 2,872 |
0 | 43,170,260 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-04-02T14:54:00.000 | 2 | 1 | 0 | Using your own Data in Tensorflow | 43,169,766 | 0.379949 | python,tensorflow,neural-network,dataset | I suggest you use OpenCV library. Whatever you uses your MNIST data or PIL, when it's loaded, they're all just NumPy arrays. If you want to make MNIST datasets fit with your trained model, here's how I did it:
1.Use cv2.imread to load all the images you want them to act as training datasets.
2.Use cv2.cvtColor to convert all the images into grayscale images and resize them into 28x28.
3.Divide each pixel in all the datasets by 255.
4.Do the training as usual!
I haven't tried it with your own data format, but theoretically it's the same. | I already know how to make a neural network using the mnist dataset. I have been searching for tutorials on how to train a neural network on your own dataset for 3 months now but I'm just not getting it. If someone can suggest any good tutorials or explain how all of this works, please help.
PS. I won't install NLTK. It seems like a lot of people are training their neural network on text but I won't do that. If I would install NLTK, I would only use it once. | 0 | 1 | 1,168 |
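A compact sketch of steps 1-4 from the answer; the paths and the final classifier are placeholders, and it assumes the trained model expects MNIST-style 28x28 grayscale inputs scaled to [0, 1]:
import cv2
import numpy as np

def load_as_mnist(path):
    img = cv2.imread(path)                          # 1. load the image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # 2. grayscale ...
    gray = cv2.resize(gray, (28, 28))               #    ... and resize to 28x28
    return gray.astype(np.float32) / 255.0          # 3. scale pixels to [0, 1]

# X = np.stack([load_as_mnist(p) for p in image_paths]).reshape(-1, 784)
# 4. train / predict with X as usual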
0 | 43,214,452 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-04-03T01:21:00.000 | 3 | 1 | 0 | check if tensorflow placeholder is filled | 43,175,272 | 1.2 | python,tensorflow,deep-learning | You can create a third placeholder variable of type boolean to select which branch to use and feed that in at run time.
The logic behind it is that since you are feeding in the placholders at runtime anyways you can determine outside of tensorflow which placeholders will be fed. | Suppose I have two placeholder quantities in tensorflow: placeholder_1 and placeholder_2. Essentially I would like the following computational functionality: "if placeholder_1 is defined (ie is given a value in the feed_dict of sess.run()), compute X as f(placeholder_1), otherwise, compute X as g(placeholder_2)." Think of X as being a hidden layer in a neural network that can optionally be computed in these two different ways. Eventually I would use X to produce an output, and I'd like to backpropagate error to the parameters of f or g depending on which placeholder I used.
One could accomplish this using the tf.where(condition, x, y) function if there was a way to make the condition "placeholder_1 has a value", but after looking through the tensorflow documentation on booleans and asserts I couldn't find anything that looked applicable.
Any ideas? I have a vague idea of how I could accomplish this basically by copying part of the network, sharing parameters and syncing the networks after updates, but I'm hoping for a cleaner way to do it. | 0 | 1 | 1,190 |
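A rough TF 1.x sketch of the boolean-selector idea from the accepted answer; f and g below are trivial stand-ins for the two sub-networks, and placeholder_with_default is used so the unused input does not have to be fed at all:
import tensorflow as tf

def f(t): return t * 2.0     # stand-in for the branch built on placeholder_1
def g(t): return t + 1.0     # stand-in for the branch built on placeholder_2

use_first = tf.placeholder(tf.bool, shape=[])                   # the extra boolean input
p1 = tf.placeholder_with_default(tf.zeros([1, 3]), [None, 3])
p2 = tf.placeholder_with_default(tf.zeros([1, 3]), [None, 3])

X = tf.cond(use_first, lambda: f(p1), lambda: g(p2))            # only the chosen branch runs

with tf.Session() as sess:
    print(sess.run(X, feed_dict={use_first: True, p1: [[1., 2., 3.]]}))   # [[2. 4. 6.]]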
0 | 43,182,382 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-04-03T04:42:00.000 | 2 | 1 | 1 | spark consume from stream -- considering data for longer period | 43,176,607 | 1.2 | python-3.x,apache-spark,pyspark | Your streaming job is not supposed to calculate the Daily count/Avg.
Approach 1 :
You can store the data consumer from Kafka into a persistent storage like DB/HBase/HDFS , and then you can run Daily batch which will calculate all the statistics for you like Daily count or avg.
Approach 2 :
In order to get that information form streaming itself you need to use Accumulators which will hold the record count,sum. and calculate avg according.
Approach 3 :
Use streaming window, but holding data for a day doesn't make any sense. If you need 5/10 min avg, you can use this.
I think the first method is preferable as it will give you more flexibility to calculate all the analytics you want. | We have a spark job running which consumes data from kafka stream , do some analytics and store the result.
Since data is consumed as they are produced to kafka, if we want to get
count for the whole day, count for an hour, average for the whole
day
that is not possible with this approach. Is there any way which we should follow to accomplish such requirement
Appreciate any help
Thanks and Regards
Raaghu.K | 0 | 1 | 35 |
0 | 43,355,585 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-04-03T07:32:00.000 | 0 | 1 | 0 | How to use hmmlearn to classify English text? | 43,178,966 | 0 | python-3.x,text-classification,markov-models,hmmlearn | hmmlearn is designed for unsupervised learning of HMMs, while your problem is clearly supervised: given examples of English and random strings, learn to distinguish between the two. Also, as you've correctly pointed it out, the notion of hidden states is tricky to define for text data, therefore for your problem plain MMs would be more appropriate. I think you should be able to implement them in <100 lines of code in Python. | I want to implement a classic Markov model problem: Train MM to learn English text patterns, and use that to detect English text vs. random strings.
I decided to use hmmlearn so I don't have to write my own. However I am confused about how to train it. It seems to require the number of components in the HMM, but what is a reasonable number for English? Also, can I not do a simple higher order Markov model instead of hidden? Presumably the interesting property is is patterns of ngrams, not hidden states. | 0 | 1 | 764 |
0 | 43,199,249 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-04-04T05:42:00.000 | 4 | 2 | 0 | Regression vs Classifier predict_proba | 43,199,108 | 1.2 | python,machine-learning,scikit-learn,classification,regression | Generally, for a qualitative problem that is to classify between categories or class, we prefer classification.
for example: to identify if it is night or day.
For Quantitative problems, we prefer regression to solve the problems.
for example: to identify if its 0th class or 1st class.
But in a special case, when we have only two classes. Then, we can use both classification and regression to solve two-class problems as in your case.
Please note that, this explanation is on the behalf of two-class point of view or multi-class problems. Though regression is to deal with real quantitative problems rather than classes.
Probability has nothing to deal specifically with methods. Each method deduce a probability and on the basis of that, they predict the outcome.
It is better if you explain the reference to predict_proba from your
question.
Hope it helps! | Just a quick question, if I want to classify objects into either 0 or 1 but I would like the model to return me a 'likeliness' probability for example if an object is 0.7, it means it has 0.7 chance of being in class 1, do I do a regression or stick to classifiers and use the predict_proba function?
How is regression and predict_proba function different?
Any help is greatly appreciated!
Thank you! | 0 | 1 | 2,562 |
0 | 46,034,678 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-04-04T08:56:00.000 | 8 | 1 | 0 | gensim KeydVectors dimensions | 43,202,548 | 1 | python-3.x,gensim | kv.vector_size still works; I'm using gensim 2.3.0, which is the latest as I write. (I am assuming kv is your KeyedVectors object.) It appears object properties are not documented on the API page, but auto-complete suggests it, and there is no deprecated warning or anything.
Your question helped me answer my own, which was how to get the number of words: len(kv.index2word) | Im gensims latest version, loading trained vectors from a file is done using KeyedVectors, and dosent requires instantiating a new Word2Vec object. But now my code is broken because I can't use the model.vector_size property. What is the alternative to that? I mean something better than just kv[kv.index2word[0]].size. | 0 | 1 | 4,245 |
0 | 43,210,008 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-04-04T10:22:00.000 | 8 | 2 | 1 | Broken Pipe Error Redis | 43,204,496 | 1.2 | python,sockets,redis,redis-py | Redis' String data type can be at most 512MB. | We are trying to SET pickled object of size 2.3GB into redis through redis-py package. Encountered the following error.
BrokenPipeError: [Errno 32] Broken pipe
redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.
I would like to understand the root cause. Is it due to input/output buffer limitation at server side or client side ? Is it due to any limitations on RESP protocol? Is single value (bytes) of 2.3 Gb allowed to store into Redis ?
import redis
r = redis.StrictRedis(host='10.X.X.X', port=7000, db=0)
pickled_object = pickle.dumps(obj_to_be_pickled)
r.set('some_key', pickled_object)
Client Side Error
BrokenPipeError: [Errno 32] Broken pipe
/usr/local/lib/python3.4/site-packages/redis/connection.py(544)send_packed_command()
self._sock.sendall(item)
Server Side Error
31164:M 04 Apr 06:02:42.334 - Protocol error from client: id=95 addr=10.2.130.144:36120 fd=11 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=16384 qbuf-free=16384 obl=42 oll=0 omem=0 events=r cmd=NULL
31164:M 04 Apr 06:07:09.591 - Protocol error from client: id=96 addr=10.2.130.144:36139 fd=11 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=40 qbuf-free=32728 obl=42 oll=0 omem=0 events=r cmd=NULL
Redis Version : 3.2.8 / 64 bit | 0 | 1 | 9,490 |
0 | 43,208,268 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-04-04T12:26:00.000 | -2 | 3 | 0 | Find inverse polynomial in python in GF2 | 43,207,222 | -0.132549 | python,numpy,scipy,polynomials,inverse | Try to use mathematical package sage | I'm fairly new to Python and I have a question related to the polynomials.
Lets have a high-degree polynomial in GF(2), for example :
x^n + x^m + ... + 1, where n, m could be up to 10000.
I need to find inverse polynomial to this one. What will be the fastest way to do that in Python (probably using numpy) ?
Thanks | 0 | 1 | 3,247 |
0 | 43,324,614 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-04-04T12:35:00.000 | 0 | 2 | 0 | Why OpenCV face detection recognition the faces for untrained face? | 43,207,422 | 0 | python-2.7,opencv3.0,face-detection,face-recognition,opencv3.1 | i guess here in your problem you are not actually referring to detection ,but recognition ,you must know the difference between these two things:
1-detection does not distinguish between persons, it just detects the facial shape of a person based on the haarcascade previously trained
2 - recognition is the case where you first detect a person, then try to identify that person against your cropped and aligned database of pics. I suggest you follow the Philipp Wagner tutorial for that matter. | I trained 472 unique images for a person A for Face Recognition using "haarcascade_frontalface_default.xml".
While I am trying to detect face for the same person A for the same images which I have trained getting 20% to 80% confidence, that's fine for me.
But I am also getting 20% to 80% confidence for person B, whom I have not included in the training images. Why is this happening for person B while I am doing face detection?
I am using python 2.7 and OpenCV 3.2.0-dev version. | 0 | 1 | 402 |
0 | 43,216,616 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-04-04T13:48:00.000 | 0 | 1 | 0 | Given a sparse matrix with shape (num_samples, num_features), how do I estimate the co-occurrence matrix? | 43,209,135 | 0 | python,machine-learning,data-mining | This can be solved reasonably easily if you go to a transposed matrix.
Of any two features (now rows, originally columns) you compute the intersection. If it's larger than 50, you have a frequent cooccurrence.
If you use an appropriate sparse encoding (now of rows, but originally of columns - so you probably need not only to transpose the matrix, but also to reencode it) this operation takes O(n+m), where n and m are the numbers of nonzero values.
If you have an extremely high number of features this may take a while. But 100000 should be feasible. | 0 | 1 | 101
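For binary data the pairwise intersections are exactly the entries of X.T * X, so the whole computation can be done with one sparse product; a sketch with scipy (the sizes are placeholders, and for a huge number of features you may want to process the columns in blocks):
import scipy.sparse as sp

X = sp.random(100000, 5000, density=0.001, format="csr")   # stand-in for the 0/1 sample-feature matrix
X.data[:] = 1.0

C = X.T.dot(X).tocoo()        # C[i, j] = number of samples where features i and j co-occur
frequent_pairs = [(i, j) for i, j, v in zip(C.row, C.col, C.data) if v >= 50 and i < j]
print(len(frequent_pairs))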
0 | 52,570,244 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2017-04-04T18:55:00.000 | 1 | 2 | 0 | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 43,215,443 | 0.099668 | python,influxdb,grafana | I believe this is currently available via kapacitor, but assume a more elegant solution will be readily accomplished using FluxQL.
Consuming the influxdb measurements into kapacitor will allow you to force equivalent time buckets and present the data once normalized. | So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels in Grafana and adding a delay to one, but this doesn't give a good representation as the graphs are not on the same panel so it is more difficult to see the differences. I am currently working on a script to copy the databases in question and alter the timestamps so that the two newly created databases look like the data was taken at the same time. I am wondering if anyone has any idea how to change the timestamp, and if so, what would be the best way to to do so with a large amount of data points? Thanks. | 0 | 1 | 569 |
0 | 43,306,424 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2017-04-04T18:55:00.000 | 0 | 2 | 0 | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 43,215,443 | 0 | python,influxdb,grafana | I can confirm from my grafana instance that it's not possible to add a shift to one timeseries and not the other in one panel.
To change the timestamp, I'd just simply do it the obvious way. Load a few thousands of entries at a time to python, change the the timestamps and write it to a new measure (and indicate the shift in the measurement name). | So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels in Grafana and adding a delay to one, but this doesn't give a good representation as the graphs are not on the same panel so it is more difficult to see the differences. I am currently working on a script to copy the databases in question and alter the timestamps so that the two newly created databases look like the data was taken at the same time. I am wondering if anyone has any idea how to change the timestamp, and if so, what would be the best way to to do so with a large amount of data points? Thanks. | 0 | 1 | 569 |