GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 48,129,094 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2018-01-06T11:25:00.000 | -2 | 3 | 0 | Plane-plane intersection in python | 48,126,838 | -0.132549 | python,computational-geometry,intersection,plane | This is solved by elementary vector computation and is not substantial/useful enough to deserve a library implementation. Work out the math with Numpy.
The line direction is given by the cross product of the two normal vectors (A, B, C), and it suffices to find a single point, say the intersection of the two given planes and the plane orthogonal to the line direction and through the origin (by solving a 3x3 system).
Computation will of course fail for parallel planes, and be numerically unstable for nearly parallel ones, but I don't think there's anything you can do. | I need to calculate the intersection of two planes in the form AX+BY+CZ+D=0 and get a line in the form of two (x,y,z) points. I know how to do the math, but I want to avoid reinventing the wheel and use something effective and tested. Is there any library which already implements this? I tried to search OpenCV and Google, but with no success. | 0 | 1 | 8,472 |
0 | 48,127,014 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-01-06T11:34:00.000 | 1 | 1 | 0 | Trying to understand neural networks | 48,126,924 | 1.2 | python,neural-network,deep-learning | Learning rate: that code does not use a learning rate, or rather it uses a learning rate of 1. Lines 48,49 just add the adjustment (gradient) value without any rate. That is an uncommon choice and can sometimes work in practice, though in general it is not advised. Technically this is the simplest version of gradient descent, and there are many elaborations to simple gradient descent, most of which involve a learning rate. I won't elaborate further as this comment answers the question that you are asking, but you should know that building an optimizer is a big area of research, and there are lots of ways to do this more elaborately or with more sophistication (in the hopes of better performance).
Error threshold: instead of stopping optimization when an error threshold is reached, this algorithm stops after a fixed number of iterations (60,000). That is a common choice, particularly when using something like stochastic gradient descent (again, another big topic). The basic choice here is valid though: instead of optimizing until a performance threshold is reached (error threshold), optimize until a computational budget is reached (60,000 iterations). | I'm new to coding and I've been guided to start with Python because it is good for a beginner and very versatile. I've been watching some tutorials online on how to create a neural network with Python, however I've just got stuck in this example.
I've seen and worked out tutorials where you have the learning rate and the error threshold which are constant variables. For example learning rate = 0.1 and error threshold = 0.1, however in this particular example there are no constant learning rate and error threshold variables that I can see.
Can someone explain why the learning rate and error threshold aren't being used? | 0 | 1 | 90 |
0 | 48,141,696 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-01-07T18:12:00.000 | 3 | 1 | 0 | use negative loss in tensorflow | 48,140,133 | 0.53705 | python,tensorflow,machine-learning | A gradient descent minimizer will typically try to find the minimum loss irrespective of the sign of the loss surface. It sounds like you either want to a) assign a large loss to encourage your model to pick something else or b) assign a fifth no-action category. | I am implementing a reinforcement agent that takes actions based on classes.
so it can take action 1 or 2 or 3 or 4.
So my question is: can I use a negative loss in TensorFlow to stop it from outputting an action?
Example:
Let's say the agent outputs action 1 I want to very strongly dissuade it from taking action 1 in that situation again. but there is not a known action that it should have taken instead. So I can't just choose a different action to make it learn that.
So my question is:
Does TensorFlow's gradient computation handle negative values for the loss?
And if it does, will it work the way I describe? | 0 | 1 | 1,647 |
0 | 48,142,657 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-01-07T23:11:00.000 | 1 | 2 | 0 | What Neural Network to use for AI Mouse Movement | 48,142,421 | 0.099668 | python,tensorflow,machine-learning,neural-network,mouse | Using a neural network for this task seems like total overkill to me. It seems like you have 2 inputs, each of which has an X and a Y coordinate, with one representing the initial position and one representing the final position of the mouse.
There are a ton of ways you can introduce randomness into this path in hard-to-detect ways that are much simpler than a neural network. Use some weird random number generator with bizarre personal logic in if statements to determine the amounts to add within some range to the current value on each iteration. You could use a neural net, but again I think it's overkill.
As far as what type of neural net you need to use, I would just start with an out-of-the-box one from a tutorial online (TensorFlow and sklearn are what I've used) and tweak the hyperparameters to see what makes the model better. | I am trying to make a function that takes in 2 (x,y) coordinates and returns an array of coordinates for where the mouse should be every 0.05 second (or about that)...
(The whole aim of the program is to have random/not smooth mouse movement to mimic a human's mouse movement)
E.G. INPUTS : [(800,600), (300,400)]
OUTPUTS : [(800,600),(780,580),...,(300,400)]
I would like to know which type of neural network I should use to get the correct outputs.
I'm kind of new to the subject of neural nets but I have a decent understanding and will research the suggestions given.
A pointer in the right direction and some links would be greatly appreciated! | 0 | 1 | 2,533 |
0 | 48,143,029 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-01-07T23:11:00.000 | 0 | 2 | 0 | What Neural Network to use for AI Mouse Movement | 48,142,421 | 0 | python,tensorflow,machine-learning,neural-network,mouse | If you're trying to predict where the mouse should be based on the position of something else, a simple ANN will do the job.
Are you trying to automate tasks, like have a script that controls a game? A Recurrent Neural Network like an LSTM or GRU will take history into account.
I know you're doing this as a learning exercise, but if you're just trying to smooth mouse movement, a simple interpolation algorithm might work. | I am trying to make a function that takes in 2 (x,y) coordinates and returns an array of coordinates for where the mouse should be every 0.05 second (or about that)...
(The whole aim of the program is to have random/not smooth mouse movement to mimic a human's mouse movement)
E.G. INPUTS : [(800,600), (300,400)]
OUTPUTS : [(800,600),(780,580),...,(300,400)]
I would like to know which type of neural network I should use to get the correct outputs.
I'm kind of new to the subject of neural nets but I have a decent understanding and will research the suggestions given.
A pointer in the right direction and some links would be greatly appreciated! | 0 | 1 | 2,533 |
0 | 48,146,211 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2018-01-08T07:15:00.000 | 2 | 2 | 0 | Change sentiment of a single word | 48,145,777 | 0.197375 | python,nlp,nltk,sentiment-analysis | The models used for sentiment analysis are generally the result of a machine-learning process. You can produce your own model by running the model creation on a training set where the sentiments are tagged the way you like, but this is a significant undertaking, especially if you are unfamiliar with the underpinnings.
For a quick and dirty fix, maybe just make your code override the sentiment for an individual word, or (somewhat more challenging) figure out how to change its value in the existing model. Though if you can get a hold of the corpus the NLTK maintainers trained their sentiment analysis on and can modify it, that's probably much simpler than figuring out how to change an existing model. If you have a corpus of your own with sentiments for all the words you care about, even better.
In general usage, "quick" is not superficially a polarized word -- indeed, "quick and dirty" is often vaguely bad, and a "quick assessment" is worse than a thorough one; while of course in your specific context, a service which delivers quickly will dominantly be a positive thing. There will probably be other words which have a specific polarity in your domain, even though they cannot be assigned a generalized polarity, and vice versa -- some words with a polarity in general usage will be neutral in your domain. Thus, training your own model may well be worth the effort, especially if you are exploring utterances in a very specific register. | I've been working with NLTK in Python for a few days for sentiment analysis and it's a wonderful tool. My only concern is the sentiment it has for the word 'Quick'. Most of the data that I am dealing with has comments about a certain service and MOST refer to the service as being 'Quick' which clearly has Positive sentiments to it. However, NLTK refers to it as being Neutral. I want to know if it's even possible to retrain NLTK to now refer to the Quick adjective as having positive annotations? | 0 | 1 | 915 |
0 | 48,154,073 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2018-01-08T07:15:00.000 | 3 | 2 | 0 | Change sentiment of a single word | 48,145,777 | 1.2 | python,nlp,nltk,sentiment-analysis | I have fixed the problem. Found the vader Lexicon file in AppData\Roaming\nltk_data\sentiment. Going through the file I found that the word Quick wasn't even in it. The format of the file is as following:
Token Mean-sentiment StandardDeviation [list of sentiment score collected from 10 people ranging from -4 to 4]
I edited the file. Zipped it. Now NLTK refers to Quick as having positive sentiments. | I've been working with NLTK in Python for a few days for sentiment analysis and it's a wonderful tool. My only concern is the sentiment it has for the word 'Quick'. Most of the data that I am dealing with has comments about a certain service and MOST refer to the service as being 'Quick' which clearly has Positive sentiments to it. However, NLTK refers to it as being Neutral. I want to know if it's even possible to retrain NLTK to now refer to the Quick adjective as having positive annotations? | 0 | 1 | 915 |
0 | 53,168,687 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2018-01-08T11:21:00.000 | 0 | 1 | 0 | Ambiguous Entity in Stanford NER | 48,149,281 | 1.2 | python-3.x,stanford-nlp,named-entity-recognition | Scrape data from sites like Wikipedia, etc., create a scoring model, and then use it for context prediction. | I am working on Stanford NER. My question is regarding ambiguous entities.
For example, I have 2 sentences:
I love oranges.
Orange is my dress code for tomorrow.
How can I train on these 2 sentences so that the output gives
the first orange as Fruit and the second orange as Color.
Thanks | 0 | 1 | 154 |
0 | 66,533,975 | 0 | 0 | 0 | 0 | 1 | false | 375 | 2018-01-08T14:50:00.000 | 18 | 14 | 0 | How to check if pytorch is using the GPU? | 48,152,674 | 1 | python,memory-management,gpu,nvidia,pytorch | Query → Command
Does PyTorch see any GPUs? → torch.cuda.is_available()
Are tensors stored on GPU by default? → torch.rand(10).device
Set default tensor type to CUDA: → torch.set_default_tensor_type(torch.cuda.FloatTensor)
Is this tensor a GPU tensor? → my_tensor.is_cuda
Is this model stored on the GPU? → all(p.is_cuda for p in my_model.parameters()) | How do I check if pytorch is using the GPU? It's possible to detect with nvidia-smi if there is any activity from the GPU during the process, but I want something written in a python script. | 0 | 1 | 569,617 |
0 | 48,159,738 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-01-08T23:34:00.000 | 2 | 3 | 0 | Calculating Mean & STD for Batch [Python/Numpy] | 48,159,562 | 0.132549 | python,numpy,tensorflow,batch-processing,channel | Assume you want to get the mean over multiple axes (if I didn't get you wrong). numpy.mean(a, axis=None) already supports a multiple-axis mean if axis is a tuple.
I'm not so sure what you mean by naive method. | Looking to calculate Mean and STD per channel over a batch efficiently.
Details:
batch size: 128
images: 32x32
3 channels (RGB)
So each batch is of size [128, 32, 32, 3].
There are lots of batches (naive method takes ~4min over all batches).
And I would like to output 2 arrays: (meanR, meanG, meanB) and (stdR, stdG, stdB)
(Also if there is an efficient way to perform arithmetic operations on the batches after calculating this, then that would be helpful. For example, subtracting the mean of the whole dataset from each image) | 0 | 1 | 3,580 |
0 | 48,221,294 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2018-01-09T10:18:00.000 | 1 | 2 | 1 | Access datalake from Azure datafactory V2 using on demand HD Insight cluster | 48,165,947 | 1.2 | python,pyspark,azure-hdinsight,azure-data-factory,azure-data-lake | Currently, we don't have support for ADLS data store with HDI Spark cluster in ADF v2. We plan to add that in the coming months. Till then, you will have to continue using the workaround as you mentioned in your post above. Sorry for the inconvenience. | I am trying to execute a Spark job from an on-demand HDInsight cluster using Azure Data Factory.
The documentation clearly indicates that ADF (v2) does not support a Data Lake linked service for an on-demand HDInsight cluster, and one has to copy data onto blob storage with a copy activity and then execute the job. BUT this workaround seems hugely resource-expensive in the case of a billion files on a data lake. Is there any efficient way to access Data Lake files, either from the Python script that executes the Spark jobs or any other way to directly access the files?
P.S. Is there a possibility of doing a similar thing from v1, and if yes, then how? "Create on-demand Hadoop clusters in HDInsight using Azure Data Factory" describes an on-demand Hadoop cluster that accesses blob storage, but I want an on-demand Spark cluster that accesses the data lake.
P.P.S. Thanks in advance | 0 | 1 | 343 |
0 | 49,116,105 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-01-09T10:18:00.000 | 0 | 2 | 1 | Access datalake from Azure datafactory V2 using on demand HD Insight cluster | 48,165,947 | 0 | python,pyspark,azure-hdinsight,azure-data-factory,azure-data-lake | The Blob storage is used for the scripts and config files that the on-demand cluster will use. The scripts you write and store in the attached Blob storage can, for example, write from ADLS to SQL DB. | I am trying to execute a Spark job from an on-demand HDInsight cluster using Azure Data Factory.
The documentation clearly indicates that ADF (v2) does not support a Data Lake linked service for an on-demand HDInsight cluster, and one has to copy data onto blob storage with a copy activity and then execute the job. BUT this workaround seems hugely resource-expensive in the case of a billion files on a data lake. Is there any efficient way to access Data Lake files, either from the Python script that executes the Spark jobs or any other way to directly access the files?
P.S. Is there a possibility of doing a similar thing from v1, and if yes, then how? "Create on-demand Hadoop clusters in HDInsight using Azure Data Factory" describes an on-demand Hadoop cluster that accesses blob storage, but I want an on-demand Spark cluster that accesses the data lake.
P.P.S. Thanks in advance | 0 | 1 | 343 |
0 | 48,172,836 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-09T15:16:00.000 | 1 | 1 | 0 | Derivative of log plot in python | 48,171,283 | 1.2 | python,numpy,derivative | Thanks to the discussions in the comments the problem with np.gradient has been solved by updating the numpy package from version 1.12.1 to 1.13.3. This update is specially relevant if you are also getting the ValueError "distances must be scalars" when using gradient. Thus, in order to extract the order of the power-law, computing np.gradient(logy,logx) remains a valid option of going about it. | We have the x and y values, and I am taking their log, by logx = np.log10(x) and logy = np.log10(y). I am trying to compute the derivative of logy w.r.t logx, so dlogy/dlogx. I used to do this successfully using numpy gradient, more precisely
derivy = np.gradient(logy,np.gradient(logx))
but for some strange reason it doesn't seem to work anymore yielding the error: "Traceback (most recent call last):
File "derivlog.py", line 79, in <module>
grady = np.gradient(logy,np.gradient(logx))
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1598, in gradient
raise ValueError("distances must be scalars")
ValueError: distances must be scalars"
Context: When trying to detect power-laws, of the kind y ~ x^t, given the values of y as a function of x, one wants to exctract essentially the power t, so we take logs which gives log y ~ t*log x and then take the derivative in order to extract t.
Here's a minimal example for recreating the problem: x=[ 3. 4. 5. 6. 7. 8. 9. 10. 11.]
y = [ 1.05654 1.44989 1.7939 2.19024 2.62387 3.01583 3.32106 3.51618 3.68153]
Are there other (more suited) methods in python for taking such numerical derivatives? | 0 | 1 | 1,101 |
0 | 48,258,953 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-01-09T16:52:00.000 | 1 | 1 | 0 | is it possible to use cnn models trained in cntk python in c#? | 48,173,038 | 0.197375 | c#,python,c++,cntk | Yes, this is completely possible. The CNTK framework itself is written in C++ and the C# as well as the Python interface are only wrappers onto the C++ code. So, as long you use versions which are compatible to each other, you can do that.
For instance, if you use CNTK 2.3.1 with python and also CNTK 2.3.1 with C# there is of course nothing which should get in your way. If the versions are different, it depends if there have been breaking changes.
Just for your information: There will be two formats in the near future: the CNTK V2 model format and the new ONNX format. | I want to use CNTK Python to train a CNN model and then use the trained model when I am programming in C# or C++. Is it possible to use a CNTK Python
trained model in C# or C++? | 0 | 1 | 149 |
0 | 48,188,480 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-01-09T22:47:00.000 | 0 | 1 | 0 | Do I need to train my neural network every time I run it? | 48,177,771 | 0 | python,neural-network,training-data | If you're not using a ready-made library, it 'saves' the training only if you write a piece of code to save it. The simplest way is to generate a TXT file with a list of all the weights after the training and load it with a specific function. | I am trying machine learning for the first time, and am playing around with a handwriting recognition NN (in Python). I just wanted to know whether or not I need to train the model every time I run it, or if it 'saves' the training. Thanks in advance. | 0 | 1 | 966 |
0 | 48,187,560 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-01-10T04:10:00.000 | 0 | 2 | 0 | Prevent underflow in floating point division in Python | 48,180,177 | 0 | python,floating-point,precision,underflow | Division, like other IEEE-754-specified operations, is computed as if at infinite precision and then (with ordinary rounding rules) rounded to the closest representable float. The result of calculating x/y will almost certainly be a lot more accurate than the result of calculating np.exp(np.log(x) - np.log(y)) (and is guaranteed not to be less accurate). | Suppose both x and y are very small numbers, but I know that the true value of x / y is reasonable.
What is the best way to compute x/y?
In particular, I have been doing np.exp(np.log(x) - np.log(y)) instead, but I'm not sure if that would make a difference at all? | 0 | 1 | 6,827 |
0 | 48,302,888 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-01-10T13:51:00.000 | 1 | 3 | 0 | Migrate Halcon code to OpenCV | 48,188,940 | 0.066568 | python,opencv,computer-vision,halcon | It depends on which Halcon functionalities you are using and why you want to do it. The question appears to be very general. I would recommend you convert your Halcon program to C++ and write a wrapper function to pass arguments to/from your OpenCV program. This would be the simplest option to provide interaction between your OpenCV and Halcon programs. Hope it helps. | I am developing a solution using commercial computer vision software called Halcon. I am thinking of migrating or converting my solution to OpenCV in Python. I would like to start developing my other computer vision solutions in Halcon because the IDE is incredible, and then generate a script to migrate them to OpenCV.
Does anyone know any library for this task?
I would like to start developing an open-source SDK to convert Halcon to OpenCV. I am thinking of starting by developing all the internal functions from Halcon in Python. Any advice? | 0 | 1 | 4,998 |
0 | 48,305,091 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-01-10T13:51:00.000 | 1 | 3 | 0 | Migrate Halcon code to OpenCV | 48,188,940 | 0.066568 | python,opencv,computer-vision,halcon | This is unfortunately not possible because Halcon itself is not an open source library and every single function is locked.
The reason behind this is runtime licensing. | I am developing a solution using commercial computer vision software called Halcon. I am thinking of migrating or converting my solution to OpenCV in Python. I would like to start developing my other computer vision solutions in Halcon because the IDE is incredible, and then generate a script to migrate them to OpenCV.
Does anyone know any library for this task?
I would like to start developing an open-source SDK to convert Halcon to OpenCV. I am thinking of starting by developing all the internal functions from Halcon in Python. Any advice? | 0 | 1 | 4,998 |
0 | 48,197,823 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2018-01-10T22:36:00.000 | 0 | 3 | 0 | Data Science Model and Training - Understanding | 48,197,227 | 0 | python,machine-learning,artificial-intelligence,jupyter-notebook,data-science | It depends on the model. For example linear regression, training will give you the coefficients of the slope and the intercept (generally). These are the "model parameters". When deployed, traditionally, these coefficients get fed into a different algorithm (literally y=mx+b), and then when queried "what should y be, when I have x", it responds with the appropriate value.
In k-means clustering, on the other hand, the "parameters" are vectors, and the predict algorithm calculates the distance from a vector given to the algorithm and then returns the closest cluster - note that these clusters are often post-processed, so the predict algorithm will say "shoes", not "[1,2,3,5]", which is again an example of how these things change in the wild.
Deep learning returns a list of edge weights for a graph, various parametric systems (as in maximum likelihood estimation), return the coefficients to describe a particular distribution, for example uniform distribution is number of buckets, Gaussian/Normal distribution is mean and variance, other more complicated ones have even more, for example skew and conditional probabilities. | Coming from a programming background where you write code, test, deploy, run.. I'm trying to wrap my head around the concept of "training a model" or a "trained model" in data science, and deploying that trained model.
I'm not really concerned about the deployment environment, automation, etc.. I'm trying to understand the deployment unit.. a trained model. What does a trained model look like on a file system, what does it contain?
I understand the concept of training a model, and splitting a set of data into a training set and testing set, but lets say I have a notebook (python / jupyter) and I load in some data, split between training/testing data, and run an algorithm to "train" my model. What is my deliverable under the hood? While I'm training a model I'd think there'd be a certain amount of data being stored in memory.. so how does that become part of the trained model? It obviously can't contain all the data used for training; so for instance if I'm training a chatbot agent (retrieval-based), what is actually happening as part of that training after I'd add/input examples of user questions or "intents" and what is my deployable as far as a trained model? Does this trained model contain some sort of summation of data from training or array of terms, how large (deployable size) can it get?
While the question may seem relatively simple "what is a trained model", how would I explain it to a devops tech in simple terms? This is an "IT guy interested in data science trying to understand the tangible unit of a trained model in a discussion with a data science guy".
Thanks | 0 | 1 | 225 |
0 | 50,946,421 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2018-01-10T22:36:00.000 | 0 | 3 | 0 | Data Science Model and Training - Understanding | 48,197,227 | 0 | python,machine-learning,artificial-intelligence,jupyter-notebook,data-science | A trained model (pickled, or whatever format you want to use) contains at least the features on which it has been trained. Take for example a simple distance-based model: you design the model based on the fact that the features (x1,x2,x3,x4) are important, and if any point comes into contact with the model, it should give back the calculated distance, based on which you draw insights or conclusions.
Similarly for chatbots, you train based on ner-crf or whatever features you want. As soon as a text comes into contact with the model, the features are extracted based on the model and insights/conclusions are drawn. Hope it was helpful! I tried explaining it the Feynman way. | Coming from a programming background where you write code, test, deploy, run.. I'm trying to wrap my head around the concept of "training a model" or a "trained model" in data science, and deploying that trained model.
I'm not really concerned about the deployment environment, automation, etc.. I'm trying to understand the deployment unit.. a trained model. What does a trained model look like on a file system, what does it contain?
I understand the concept of training a model, and splitting a set of data into a training set and testing set, but lets say I have a notebook (python / jupyter) and I load in some data, split between training/testing data, and run an algorithm to "train" my model. What is my deliverable under the hood? While I'm training a model I'd think there'd be a certain amount of data being stored in memory.. so how does that become part of the trained model? It obviously can't contain all the data used for training; so for instance if I'm training a chatbot agent (retrieval-based), what is actually happening as part of that training after I'd add/input examples of user questions or "intents" and what is my deployable as far as a trained model? Does this trained model contain some sort of summation of data from training or array of terms, how large (deployable size) can it get?
While the question may seem relatively simple "what is a trained model", how would I explain it to a devops tech in simple terms? This is an "IT guy interested in data science trying to understand the tangible unit of a trained model in a discussion with a data science guy".
Thanks | 0 | 1 | 225 |
0 | 54,176,655 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2018-01-10T22:36:00.000 | 1 | 3 | 0 | Data Science Model and Training - Understanding | 48,197,227 | 0.066568 | python,machine-learning,artificial-intelligence,jupyter-notebook,data-science | A trained model will contain the value of its parameters. If you tuned only a few parameters, then only they will contain the new adjusted value. Unchanged parameters will store the default value. | Coming from a programming background where you write code, test, deploy, run.. I'm trying to wrap my head around the concept of "training a model" or a "trained model" in data science, and deploying that trained model.
I'm not really concerned about the deployment environment, automation, etc.. I'm trying to understand the deployment unit.. a trained model. What does a trained model look like on a file system, what does it contain?
I understand the concept of training a model, and splitting a set of data into a training set and testing set, but lets say I have a notebook (python / jupyter) and I load in some data, split between training/testing data, and run an algorithm to "train" my model. What is my deliverable under the hood? While I'm training a model I'd think there'd be a certain amount of data being stored in memory.. so how does that become part of the trained model? It obviously can't contain all the data used for training; so for instance if I'm training a chatbot agent (retrieval-based), what is actually happening as part of that training after I'd add/input examples of user questions or "intents" and what is my deployable as far as a trained model? Does this trained model contain some sort of summation of data from training or array of terms, how large (deployable size) can it get?
While the question may seem relatively simple "what is a trained model", how would I explain it to a devops tech in simple terms? This is an "IT guy interested in data science trying to understand the tangible unit of a trained model in a discussion with a data science guy".
Thanks | 0 | 1 | 225 |
0 | 48,223,162 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-01-12T03:24:00.000 | 8 | 1 | 0 | Difference between tf.layers.conv1d vs tf.layers.conv2d | 48,219,121 | 1.2 | python,tensorflow,neural-network,conv-neural-network | tf.layers.conv1d is used when you slide your convolution kernels along 1 dimension (i.e. you reuse the same weights, sliding them along 1 dimension), whereas tf.layers.conv2d is used when you slide your convolution kernels along 2 dimensions (i.e. you reuse the same weights, sliding them along 2 dimensions).
So the typical use case for tf.layers.conv2d is if you have a 2D image. And possible use-cases for tf.layers.conv1d are, for example:
Convolutions in Time
Convolutions on Piano notes | What is the difference in the functionalities of tf.layers.conv1d and tf.layers.conv2d in tensorflow and how to decide which one to choose? | 0 | 1 | 4,923 |
0 | 50,122,396 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2018-01-12T03:54:00.000 | 4 | 1 | 0 | Why doesn't Keras need the gradient of a custom loss function? | 48,219,296 | 0.664037 | python,python-3.x,tensorflow,deep-learning,keras | To my understanding, as long as each operator that you use in your error function already has a predefined gradient, the underlying framework will manage to calculate the gradient of your loss function. | To my understanding, in order to update model parameters through gradient descent, the algorithm needs to calculate at some point the derivative of the error function E with respect to the output y: dE/dy. Nevertheless, I've seen that if you want to use a custom loss function in Keras, you simply need to define E and you don't need to define its derivative. What am I missing?
Each lost function will have a different derivative, for example:
If loss function is the mean square error: dE/dy = 2(y_true - y)
If loss function is cross entropy: dE/dy = y_true/y
Again, how is it possible that the model does not ask me what the derivative is? How does the model calculate the gradient of the loss function with respect of parameters from just the value of E?
Thanks | 0 | 1 | 1,747 |
0 | 48,220,586 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-01-12T06:01:00.000 | 0 | 2 | 0 | Word2vec Re-training with new corpus, how the weights will be updated for the existing vocabulary? | 48,220,414 | 0 | python,word2vec | If I understand this option correctly, you are resetting all the weights of the shared words and then training them on the C2 data... This would mean that all the information on the shared words from C1 is lost, which would seem like a big loss to me (I don't know the corpus sizes). Also, how different are the two corpora? How big is this intersection? Do the corpora cover similar topics/areas or not? This could also influence your decision on whether losing all the info from the C1 corpus is okay or not.
This seems like a more logical flow to me... but again the difference in corpora/vocabulary is important here. If a lot of words from C2 are left out because of the intersection, you can think of ways to add unknown words one way or another.
But in order to assess which option is truly 'best' in your case, create a case where you can measure how 'good' one approach is compared to the other. In most cases this involves some similarity measure... but maybe your case is different. | Scenario: A word2vec model is trained on corpus C1 with vocabulary V1. If we want to re-train the same model with another corpus C2 having vocabulary V2 using the train() API, what will happen out of these two:
For the model, weights for V1 intersection V2 will be reset, and re-training with corpus C2 will come up with altogether new weights
For the model, re-training with corpus C2 will continue from the existing weights for vocabulary V1 intersection V2.
Which one is the correct hypothesis out of the above two? | 0 | 1 | 224 |
0 | 51,941,579 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-01-12T06:01:00.000 | 0 | 2 | 0 | Word2vec Re-training with new corpus, how the weights will be updated for the existing vocabulary? | 48,220,414 | 0 | python,word2vec | Why not initialize each of the word2vec parameters with randomly generated numbers for each run? I could do this, and with careful selection of the random numbers for each parameter (numFeatures, contextWindow, seed) I was able to get the random similarity tuples which I wanted for my use case, simulating an ensemble architecture.
What do others think of it? Please do reply. | Scenario: A word2vec model is trained on corpus C1 with vocabulary V1. If we want to re-train the same model with another corpus C2 having vocabulary V2 using the train() API, what will happen out of these two:
For the model, weights for V1 intersection V2 will be reset, and re-training with corpus C2 will come up with altogether new weights
For the model, re-training with corpus C2 will continue from the existing weights for vocabulary V1 intersection V2.
Which one is the correct hypothesis out of the above two? | 0 | 1 | 224 |
0 | 48,231,802 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-12T17:26:00.000 | 1 | 1 | 0 | What's the input_size for the RNN Model in Keras | 48,231,233 | 1.2 | python,machine-learning,deep-learning,keras,rnn | The input_dim is just the shape of the input you pass to this layer. So:
input_dim = 7
There are other options, such as:
input_shape=(7,) -- This argument uses tuples instead of integers, good when your input has more than one dimension
batch_input_shape=(batch_size,7) -- This is not usually necessary, but you use it in cases you need a fixed batch size (there are a few layer configurations that demand that)
Now, the size of the output in a Dense layer is the units argument. Which is 128 in your case and should be equal to num_neurons. | I'm just starting with deep learning, and I've been told that Keras would be the best library for beginners.
Before that, for the sake of learning, I built a simple feed forward network using only numpy so I could get the feel of it.
In this case, the shape of the weight matrix was (len(X[0]), num_neurons). The number of features and the number of neurons. And it worked.
Now, I'm trying to build a simple RNN using Keras. My data has 7 features and the size of the layer would be 128.
But if I do something like model.add(Dense(128, input_dim=(7, 128))) it says it's wrong.
So I have no idea what this input_dim should be.
My data has 5330 data points and 7 features (shape is (5330, 7)).
Can someone tell me what the input_dim should be and why?
Thank you. | 0 | 1 | 328 |
0 | 50,582,786 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-12T19:48:00.000 | 0 | 1 | 0 | Python - Gaussian Kernel for colour images | 48,233,118 | 1.2 | python,kernel,convolution,gaussian,gaussianblur | You have to create a gaussian filter of the dimension you want e.g. 3x3 or 11x11.
Then do the convolution on each channel of colour.
If you want to do it using Fourier, you have to apply psf2otf (a MATLAB function that has also been ported to Python by users) and multiply both matrices pointwise (on each channel). | Hello, I'm working with images in Python. I want to convolve an image with a Gaussian filter.
The image is an array that has the shape (64,64,3): 64x64 pixels and 3 channels of colour. What should the Gaussian filter look like, and of which dimensions? Do you know a function to define it and perform the convolution with the image? | 0 | 1 | 1,583 |
0 | 62,427,224 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-01-13T02:22:00.000 | 0 | 3 | 0 | How to perform .describe() method on variables that have boolean data type in pandas | 48,236,383 | 0 | python,pandas | Try using this:
df.describe(include=['object','bool']).T
The .T here is used to transpose the output. | I am trying to get the summary statistics of the columns of a data frame with data type: Boolean.
When I run:df.describe() it only gives me the summary statistics for numerical (in this case float) data types. When I change it to df.describe(include=['O']), it gives me only the object data type.
In either case, the summary statistics for Boolean data types are not provided.
Any suggestion is highly appreciated.
Thanks | 0 | 1 | 5,123 |
0 | 48,242,730 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-01-13T16:49:00.000 | 0 | 1 | 0 | How to write arrays as column elements in a Data file in python and read it later in C? | 48,242,064 | 0 | python,c,arrays,data-files | I'd say write it to a text file. Per line put one first column number, followed by the second column's list of floats. Put a space between each element.
Assuming you know the maximum number of floats in the second column's array, and the maximum character length of each float, you can use fgets() to read the file line by line. | I have two column dataset where each element of first column corresponds to an array. So basically my second column elements are arrays and first column elements are just numbers. I need to write it in a file using Python and later read it in C. I know HDF5 is the best way to store arrays but I was wondering if there is any other effective way of writing it in .csv/.dat/.txt file. Since I have to read it in C I can't use things like numpy.savez. | 0 | 1 | 135 |
0 | 48,259,989 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-14T16:16:00.000 | 0 | 1 | 0 | how could I copy tensorflow module from one computer to another? | 48,251,568 | 0 | python,ubuntu,tensorflow | You could copy the contents of your python site-packages across. But if you are generally in the situation where internet access is expensive, you might find it more practical to use a caching proxy for all of your internet traffic. | So I have computers all running on ubuntu and only one of them has python tensorflow module. I want to install tensorflow to all of them but it would be inefficient to connect every computer to the internet and install them all over again.so is there a possible efficient way to just copy paste some files from the computer to another to use this python module? thanks in advance. | 0 | 1 | 160 |
0 | 48,278,872 | 0 | 0 | 0 | 1 | 1 | false | 53 | 2018-01-15T14:07:00.000 | 5 | 8 | 0 | Load S3 Data into AWS SageMaker Notebook | 48,264,656 | 0.124353 | python,amazon-web-services,amazon-s3,machine-learning,amazon-sagemaker | Do make sure the Amazon SageMaker role has policy attached to it to have access to S3. It can be done in IAM. | I've just started to experiment with AWS SageMaker and would like to load data from an S3 bucket into a pandas dataframe in my SageMaker python jupyter notebook for analysis.
I could use boto to grab the data from S3, but I'm wondering whether there is a more elegant method as part of the SageMaker framework to do this in my python code?
Thanks in advance for any advice. | 1 | 1 | 73,507 |
0 | 48,267,194 | 0 | 0 | 0 | 0 | 1 | true | 24 | 2018-01-15T15:25:00.000 | 44 | 2 | 0 | Keras: find out the number of layers | 48,265,926 | 1.2 | python,machine-learning,keras,deep-learning,keras-layer | model.layers will give you the list of all layers. The number is consequently len(model.layers) | Is there a way to get the number of layers (not parameters) in a Keras model?
model.summary() is very informative, but it is not straightforward to get the number of layers from it. | 0 | 1 | 16,915 |
0 | 48,280,113 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-16T11:12:00.000 | 1 | 1 | 0 | Sequential Rule Mining using Apriori Algorithm and Pandas | 48,279,902 | 0.197375 | python,pandas,apriori | message is string type, and elif "what is" in message: seems to be correct in syntax.
Have you checked whether the indentation is correct? Sometimes the bug can be a very simple thing. | I am performing Sequential Rule Mining using the Apriori Algorithm and FPA. I have the dataset in Excel as shown below, and I want to know how I should load my data into a pandas DataFrame. What I am using is the following read_excel command, but the data contains ---> between items and lies in a single column as shown below.
How should I load the data and perform pattern mining? | 0 | 1 | 683 |
0 | 48,287,827 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-01-16T18:36:00.000 | 3 | 1 | 0 | sort very large data with dask? | 48,287,766 | 1.2 | python,pandas,dataframe,sorting,dask | Yes, by calling set_index on the column that you wish to sort. On a single machine it uses your hard drive intelligently for excess space. | I need to sort a data table that is well over the size of the physical memory of the machine I am using. Pandas cannot handle it because it needs to read the entire data into memory. Can dask handle that?
Thanks! | 0 | 1 | 1,393 |
0 | 48,289,327 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-01-16T19:47:00.000 | 2 | 1 | 0 | Python regex match human names with abbr (dots) in text | 48,288,763 | 1.2 | python,regex | If you are ok with the following assertions:
Names and surnames always begin with a capital letter
For names reduced to one capital letter, this letter is always immediately followed by a dot
Names can be separated with either a comma or the "and" word
These names end with a final dot
Then you can use this regex: ^©[0-9]{4} +(([A-Z][a-z]+|[A-Z]\.|and|,) *)*\. * | I want to use regex to match patterns in paragraphs like the following:
©2016 Rina Foygel Barber and Emil Y. Sidky. Many optimization problems arising in high-dimensional statistics decompose naturally into a sum of several terms, where the individual terms are relatively simple but the composite objective function can only be optimized with iterative algorithms. In this paper, we are interested in optimization problems of the form F(Kx) + G(x), where K is a fixed linear transformation, while F and G are functions that may be nonconvex and/or nondifferentiable. In particular, if either of the terms are nonconvex, existing alternating minimization techniques may fail to converge; other types of existing approaches may instead be unable to handle nondifferentiability. We propose the mocca (mirrored convex/concave) algorithm, a primal/dual optimization approach that takes a local convex approximation to each term at every iteration. Inspired by optimization problems arising in computed tomography (CT) imaging, this algorithm can handle a range of nonconvex composite optimization problems, and offers theoretical guarantees for convergence when the overall problem is approximately convex (that is, any concavity in one term is balanced out by convexity in the other term). Empirical results show fast convergence for several structured signal recovery problems.
So that the first line with human names, year, and copyright (©2016 Rina Foygel Barber and Emil Y. Sidky.) can be removed.
The only I can come up now was to use ^© ?[0-9][0-9][0-9][0-9].+\.. However, this can hardly match things like the above paragraph due to the . in human names. Any suggestions? Thanks! | 0 | 1 | 179 |
0 | 48,303,455 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-17T13:57:00.000 | 1 | 1 | 0 | dtype changes after set_value/at | 48,302,876 | 0.197375 | python,pandas | I think the problem is related to the fact that I was trying to assign a None to a bool Series, then it just tries to convert to a different type (why not object?)
Fixed changing the dtype to object first: dataframe.foo = dataframe.foo.astype(object).
Works like a charm now. | I'm facing a weird issue on Pandas now, not sure if a pandas pitfall or just something I'm missing...
My pd.Series is just
foo
False
False
False
> a.foo.dtype
dtype('bool')
When I use a dataframe.set_value(index, col, None), my whole Series is converted to dtype('float64') (same thing applies to a.at[index, col] = None).
Now my Series is
foo
NaN
NaN
NaN
Do you have any idea on how this happens and how to fix it?
Thanks in advance. :)
Edit:
Using 0.20.1. | 0 | 1 | 37 |
0 | 48,479,898 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-17T14:01:00.000 | 0 | 1 | 0 | Python Gurobi, report of statistics presolve | 48,302,947 | 0 | python,attributes,gurobi | If m is the model, I create a new model for the presolve (mpre = m.presolve()), and then I use mpre.getAttr(GRB.Attr.NumVars), mpre.getAttr(GRB.Attr.NumConstrs) and mpre.getAttr(GRB.Attr.NumNZs). | I am trying to get the number of rows, columns and nonzero values after presolve. I know about getAttr(GRB.Attr.NumVars), getAttr(GRB.Attr.NumConstrs)... but they give the statistics before presolve. Can anyone help me?
Thanks | 0 | 1 | 71 |
0 | 48,314,822 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-01-18T05:43:00.000 | 1 | 1 | 0 | I have trouble with Interpreter in Pycharm | 48,314,549 | 1.2 | python,opencv,pycharm,interpreter | Go to project interpreter settings in preferences in Pycharm, and use the + sign there to add the module to your interpreter.
This can happen when pip is installing to a directory that is not part of your project's python interpreter's PATH.
Installing via the PyCharm preferences menu always solves it for me, although there is a deeper issue of pip not installing to the correct directory... | I am a Python newbie. I am doing a project testing Keras. I already installed OpenCV with pip install opencv-python, but I can't find opencv in PyCharm's interpreter and get an error when I import cv2. My interpreter is usr/bin/python2.7
0 | 48,341,100 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-18T13:13:00.000 | 0 | 1 | 0 | Use of one-hot encoder to build decision trees | 48,322,232 | 0 | python-3.x,decision-tree,one-hot-encoding | Based on my tests, One-hot encoding results in continuous variables (amounts, in my case) having been assigned higher feature importance.
Also, a single level of a categorical variable must meet a very high bar in order to be selected for splitting early in the tree building. This apparently degrades predictive performance (I didn't see this consequence mentioned in any post, unfortunately).
I will investigate other approaches. | I need to build decision trees on categorical data.
I understood that scikit-learn was only able to deal with numerical values, and the recommended approach is then to use one-hot encoding, preferably using pandas dummies.
So, I built a sample dataset where all attributes and labels are categorical. At this stage, I am trying to understand how to 'one-hot' encode it to be able to use sklearn, but the documentation does not address this case.
Could you eventually give me a quick example or a link to some material for beginners? | 0 | 1 | 792 |
0 | 48,517,556 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-01-18T15:04:00.000 | 0 | 1 | 0 | Use opencv in python idle | 48,324,322 | 0 | python-3.x,opencv3.0 | It could be an issue of having multiple python versions on your machine, you should select the python interpreter that is global to your system "that which utilises" pip in terminal. | I installed opencv in python3 using pip. It runs well in terminal, but when I try to run it in idle, it cannot import cv2. What could be the solution?
I am using vim as my python idle. | 0 | 1 | 787 |
0 | 48,326,851 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-01-18T17:08:00.000 | 0 | 1 | 0 | While trying to import pandas library to pycharm there is an error message | 48,326,786 | 0 | python-3.x,pycharm | It's telling you that you need Microsoft Visual C++ 10.0 as pandas library has dependencies that refer to that version of Microsoft Visual C++. It provides you the url to download and install it. Use it, then import pandas library. | the error message
error: Microsoft Visual C++ 10.0 is required. Get it with "Microsoft Windows SDK 7.1": www.microsoft.com/download/details.aspx?id=8279 | 0 | 1 | 244 |
0 | 48,331,105 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-01-18T21:58:00.000 | 1 | 2 | 0 | Errors processing large images with SIFT OpenCV | 48,331,004 | 0.099668 | python,opencv,image-processing,feature-detection,sift | I would split the image into smaller windows. So long as you know the windows overlap (I assume you have an idea of the lateral shift) the match in any window will be valid.
You can even use this as a check: the translation between feature points in any part of the image must be the same for the transform to be valid. | I want to use OpenCV Python to do SIFT feature detection on remote sensing images. These images are high resolution and can be thousands of pixels wide (7000 x 6000 or bigger). I am having trouble with insufficient memory, however. As a reference point, I ran the same 7000 x 6000 image in Matlab (using VLFEAT) without memory error, although larger images could be problematic. Does anyone have suggestions for processing this kind of data set using OpenCV SIFT?
OpenCV Error: Insufficient memory (Failed to allocate 672000000 bytes) in cv::OutOfMemoryError, file C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp, line 55
OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file
(I'm using Python 2.7 and OpenCV 3.4 in the Spyder IDE on a Windows 64-bit with 32 GB of RAM.) | 0 | 1 | 649 |
0 | 48,355,204 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-01-18T21:58:00.000 | 0 | 2 | 0 | Errors processing large images with SIFT OpenCV | 48,331,004 | 0 | python,opencv,image-processing,feature-detection,sift | There are a few flavors how to process SIFT corner detection in this case:
process a single image per unit time on one core;
multiprocess 2 or more images per unit time on a single core;
multiprocess 2 or more images per unit time on multiple cores.
Read cores as either CPU or GPU. Threading results in serial processing instead of parallel.
As stated, Rebecca has at least 32 GB of internal memory on her PC at her disposal, which is more than sufficient for option 1 to process at once.
So in that light.. splitting a single image as suggested by Martin... should be a last resort in my opinion.
Why should you avoid splitting a single image in multiple windows during feature detection (w/o running out of memory)?
Answer:
If a corner is located at the split side of the window and is thus unintentionally turned into two more or less polygonal, straight-line-like shapes, you won't find the corner you're looking for, unless you have a specialized algorithm to search for those anomalies.
In casu:
In Rebecca's case it's crucial to know which approach she took to processing the image(s)... Was it one, two, or many more images loaded simultaneously into memory?
If hundreds or thousands of images are simultaneously loaded into memory... you're basically choking the system by taking away its breathing space (in the form of free memory). In addition, we're not talking about other programs that are loaded into memory and claim (reserve) or consume memory space for various background programs. That comes on top of the issue at hand.
Overthinking:
If, as suggested by Martin, there is an issue with the OpenCV lib in handling such an amount of images as described by Rebecca, do some debugging and then report your findings to OpenCV, or post a question here at SO as she did... but also post the code that shows how you deal with the image processing at the start; as explained above, that is important. And yes, as Martin stated... don't post wrappers... totally pointless to do so. A referral link to it (with a possible version number) is more than enough... or a tag ;-) | I want to use OpenCV Python to do SIFT feature detection on remote sensing images. These images are high resolution and can be thousands of pixels wide (7000 x 6000 or bigger). I am having trouble with insufficient memory, however. As a reference point, I ran the same 7000 x 6000 image in Matlab (using VLFEAT) without memory error, although larger images could be problematic. Does anyone have suggestions for processing this kind of data set using OpenCV SIFT?
OpenCV Error: Insufficient memory (Failed to allocate 672000000 bytes) in cv::OutOfMemoryError, file C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp, line 55
OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file
(I'm using Python 2.7 and OpenCV 3.4 in the Spyder IDE on a Windows 64-bit with 32 GB of RAM.) | 0 | 1 | 649 |
0 | 48,336,589 | 0 | 0 | 0 | 1 | 1 | true | 2 | 2018-01-19T03:11:00.000 | 3 | 1 | 0 | What mean "Inf" in csv? | 48,333,572 | 1.2 | python,postgresql,csv | inf (meaning infinity) is a correct value for floating point values (real and double precision), but not for numeric.
So you will either have to use one of the former data types or fix the input data. | I am working on copying csv file content into postgresql database.
While copying into the database, I get this error:
invalid input syntax for type numeric: "inf"
My question is:
I think "inf" means "infinitive" value, is it right? what does "inf" correctly mean? If it is kinda error, is it possible to recover original value?
And, Should I manually correct these values to copy it into the database?
Is there any good solution to fix this problem without manually correcting or setting exceptions in copying codebase? | 0 | 1 | 1,132 |
0 | 48,701,185 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-19T07:23:00.000 | 0 | 1 | 0 | Do I have to do feature selection prior to applying my machine learning algorithm? | 48,335,994 | 1.2 | python,algorithm,classification,knn,supervised-learning | There is no definite answer to this. The current trend is not do feature selection and let the classifier decide which features to use. Take current image datasets for example which also have 1000+ features (depending on the image resolution). They are fed to a CNN usually without any preprocessing. However, this is not generally true. If for example you assume to have a lot of correlations in the data, feature selection might help. | My question is,
Does the machine learning algorithm take care of selecting the best features in my data, or shall I do feature selection and scaling prior to applying my machine learning algorithm?
I am aware of a few supervised classification machine learning algorithms such as kNN, Neural Networks, AdaBoost, etc.
But is there some you recommend me looking at ? | 0 | 1 | 124 |
0 | 48,346,637 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-19T15:09:00.000 | 0 | 1 | 0 | PAGMO/PYGMO: Anyone understand the options for Corana’s Simulated Annealing? | 48,344,081 | 0 | python,minimization,simulated-annealing | Gonna answer my own question here. I climbed into the actual .cpp code and found the answers.
In Corana's method, you select how many total iterations N of annealing you want. Then the minimization is a nested series of loops where you vary the step sizes, number of step-size adjustments, and temperature values at user-defined intervals. In PAGMO, they changed this so you explicitly specify how many times you will do these. Those are the n_* parameters and bin_size. I don't think bin_size is a good name here, because it isn't actually a size. It is the number of steps taken through a bin range, such that N=n_T_adj * n_range_adj * bin_size. I think just calling it n_bins or n_bins_adj makes more sense. Every bin_size function evaluations, the stepsize is modified (see below for limits).
In Corana's method you specify the multiplicative factor to decrease the temperature each time it is needed; it could be that you reach the minimum temp before running out of iterations, or vice versa. In PAGMO, the algorithm automatically computes the temperature-change factor so that you reach Tf at the end of the iteration sequence: r_t=(Tf/Ts)**(1/n_T_adj).
The start_range is, I think, a bad name for this variable. The stepsize in the algorithm is a fraction between 0 and start_range, which defines the width of the search bins between the upper and lower bounds for each variable. So if stepsize=0.5, width=0.5*(upper_bound-lower_bound). At each iteration, the step size is adjusted based on how many function calls were accepted. If the step size grows larger than start_range, it is reset to that value. I think I would call it step_limit instead. But there you go. | I'm using the PYGMO package to solve some nasty non-linear minimization problems, and am very interested in using their simulated_annealing algorithm; however, it has a lot of hyper-parameters for which I don't really have any good intuition. These include:
Ts (float) – starting temperature
Tf (float) – final temperature
n_T_adj (int) – number of temperature adjustments in the annealing schedule
n_range_adj (int) – number of adjustments of the search range performed at a constant temperature
bin_size (int) – number of mutations that are used to compute the acceptance rate
start_range (float) – starting range for mutating the decision vector
Let's say I have a 4 dimensional geometric registration (homography) problem with variables and search ranges:
x1: [-10,10] (a shift in x)
x2: [10,30] (a shift in y)
x3: [-45,0] (rotation angle)
x4: [0.5,2] (scaling/magnification factor)
And the cost function for a random (bad) choice of values is 50. A good value is around zero.
I understand that Ts and Tf are for the Metropolis acceptance criterion of new solutions. That means Ts should be about the expected size of the initial changes in the cost function, and Tf small enough that no more changes are expected.
In Corana's paper, there are many hyperparameters listed that make sense: N_s is the number of evaluation cycles before changing step sizes, N_T is the number of step-size changes before changing the temperature, and r_T is the factor by which the temperature is reduced each time. However, I can't figure out how these map onto pygmo's parameters n_T_adj, n_range_adj, bin_size, and start_range.
I'm really curious if anyone can explain how pygmo's hyperparameters are used, and how they relate to the original paper by Corana et al? | 0 | 1 | 175 |
0 | 48,351,965 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-01-20T01:58:00.000 | 0 | 3 | 0 | Python - Sort a list by a certain element in a tuple in a dictionary | 48,351,902 | 0 | python | For each dict item in the list you want to sort, take the item's value keyed by 'info', which is a tuple, and sort on its second element (addressed as [1], counting from zero).
So: data.sort(key=lambda item: item['info'][1]) | data = [{'info': ('orange', 400000, 'apple'), 'photo': None}, {'info': ('grape', 485000, 'watermelon'), 'photo': None}]
I want to sort data by the 2nd element (400000, 485000) in the tuple in the dictionary. How do I do this?
I followed another answer and my closest attempt was data.sort(key=lambda tup: tup[1]), but that produces this error:
KeyError: 1 | 0 | 1 | 55 |
0 | 54,622,059 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-01-20T07:00:00.000 | 0 | 3 | 0 | Aws Glue - S3 - Native Python | 48,353,544 | 0 | python,python-3.x,amazon-redshift,aws-glue | AWS Glue should be able to process all the files in a folder irrespective of the name in a single job. If you don’t want the old file to be processed again move it using boto3 api for s3 to another location after each run. | Within AWS Glue how do I deal with files from S3 that will change every week.
Example:
Week 1: “filename01072018.csv”
Week 2: “filename01142018.csv”
These files are setup in the same format but I need Glue to be able to change per week to load this data into Redshift from S3. The code for Glue uses native Python as the backend. | 0 | 1 | 292 |
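A hedged sketch of the "move the processed file afterwards" idea with boto3, as suggested in the answer above; the bucket and key names are assumptions, not part of the original question:
import boto3

s3 = boto3.client("s3")
bucket = "my-glue-bucket"                   # hypothetical bucket
src_key = "incoming/filename01142018.csv"   # hypothetical key
dst_key = "processed/" + src_key.split("/")[-1]

# S3 has no real "move", so copy the object and then delete the original.
s3.copy_object(Bucket=bucket, Key=dst_key, CopySource={"Bucket": bucket, "Key": src_key})
s3.delete_object(Bucket=bucket, Key=src_key)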
0 | 48,372,244 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-21T09:14:00.000 | -1 | 1 | 0 | Clustering a correlation matrix into multiple matrices of maximum correlations in Python | 48,365,313 | -0.197375 | python,numpy,scipy,cluster-analysis,correlation | The obvious choice here is hierarchical agglomerative clustering.
Beware that most tools (e.g., sklearn and scipy) expect a distance matrix rather than a similarity matrix, but you can trivially convert one into the other (for example, distance = 1 - correlation) and then cluster on that. This is textbook stuff. | Say I calculated the correlations of prices of 500 stocks, and stored them in a 500x500 correlation matrix, with 1s on the diagonal.
How can I cluster the correlations into smaller correlation matrices (in Python), such that the correlations of stocks in each matrix is maximized? Meaning to say, I would like to cluster the stocks such that in each cluster, the stock prices are all highly correlated with one another.
There is no upper bound to how many smaller matrices I can cluster into, although preferably, their sizes are similar i.e it is better to have 3 100x100 matrices and 1 200x200 matrix than say a 10x10 matrix, 90x90 matrix and 400x400 matrix. (i.e minimize standard deviation of matrix sizes).
Preferably to be done in Python. I've tried to look up SciPy's clustering libraries but have not yet found a solution (I'm new to SciPy and such statistical programming problems).
Any help that points me in the right direction is much appreciated! | 0 | 1 | 1,454 |
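A minimal sketch of that hierarchical route with scipy, assuming corr is the 500x500 correlation matrix; the cut threshold is an assumption you would tune:
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

dist = 1.0 - np.abs(corr)                      # turn similarity into a distance
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist, checks=False)     # condensed form expected by linkage
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=0.5, criterion="distance")  # one cluster id per stock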
0 | 48,369,514 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-21T16:56:00.000 | 0 | 1 | 0 | Allow soft placement in tensorflow | 48,369,319 | 0 | python-2.7,tensorflow | This error probably occurs because you are using python 2.7. Where as tensorflow is for use with python 3.5 and python 3.6 | I just upgraded from tf 1.1 to tf 1.4 for python 2.7 and I got the following problem:
I have a graph whose ops I am placing on a specific device using the tf.device('device') command. However, one of the ops is only allowed to run on the CPU, so I am using allow_soft_placement=True, and it was working correctly in tf 1.1 (it placed only the ops without a GPU implementation on the CPU and the other ops on the GPU). But now (in tf 1.4), when I run my network, it places all the ops on the CPU (not just the one that has no GPU implementation).
Any help is appreciated. | 0 | 1 | 518 |
0 | 48,370,179 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-21T18:14:00.000 | 2 | 1 | 0 | Reinforcement Learning - How to we decide the reward to the agent when the input to the game is only pixels? | 48,370,121 | 0.379949 | python,machine-learning,artificial-intelligence,reinforcement-learning,openai-gym | You need to make up a reward that proxies the behavior you want - and that is actually no trivial business.
If there is some numbers on a fixed part of the screen representing score, then you can use old fashioned image processing techniques to read the numbers and let those be your reward function.
If there is a minimap in a fixed part of the screen with fixed scale and orientation, then you could use minus the distance of your character to a target as reward.
If there are no fixed elements in the UI you can use to proxy the reward, then you are going to have a bad time, unless you can somehow access the internal variables of the console to proxy the reward (using the position coordinates of your PC, for example). | I am new to RL and the best I've done is CartPole in openAI gym. In cartPole, the API automatically provides the reward given the action taken. How am I supposed to decide the reward when all I have is pixel data and no "magic function" that could tell the reward for a certain action.
Say, I want to make a self driving bot in GTA San Andreas. The input I have access to are raw pixels. How am I supposed to figure out the reward for a certain action it takes? | 0 | 1 | 322 |
0 | 48,451,376 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-01-22T13:36:00.000 | 0 | 1 | 0 | Adaptive Filter input vectors and iteration | 48,382,873 | 1.2 | python,algorithm,filtering,signal-processing,perceptron | Your explanation is correct. The X input vector is multiplied recursively by the filter coefficients. It's been some time since I wrote an adaptive filter, but if I remember correctly you're multiplying M filter coefficients by the latest M input values to get an update.
So M is the order of your filter, or the number of filter coefficients, and n is the length of the signal you are filtering. And, as you note, your recursive filter will look at a 'window' of those input values for each filtered output calculation. | I am trying to implement an adaptive filter for noise cancellation, in particular an RLS filter to remove motion artifacts from a signal. To do this I am reading some literature; there is one thing I don't understand, and every book or article I found just assumes I already know this.
I have a reference signal represented as a list in Python of about 8000 elements, or samples. I need to input this to the RLS filter, but every algorithm I find always talks about the input vector as
X[n] = [x1[n], x2[n], x3[n], ..., xM[n]]^T
Where X is the input vector, and n is a time instant. And here is where I get lost. If n is a time instant, it would mean x[n] is an element in the list, a sample. But if that is the case, what are x1, x2, ..., xM?
I realise this is not strictly a coding problem, but I hope someone can help!
Thanks... | 0 | 1 | 345 |
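A small sketch of how the M-tap input vector is usually built from a 1-D reference signal; the filter order M and the variable names are assumptions for illustration only:
import numpy as np

x = np.asarray(reference_signal)   # your ~8000-sample list
M = 8                              # filter order (number of coefficients)

def tap_vector(x, n, M):
    # X[n] = [x[n], x[n-1], ..., x[n-M+1]], zero-padded before the start of the signal
    padded = np.concatenate([np.zeros(M - 1), x])
    return padded[n:n + M][::-1]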
0 | 48,389,275 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-01-22T16:37:00.000 | 3 | 1 | 0 | How to find the median between two sorted arrays? | 48,386,293 | 0.53705 | python,arrays,algorithm | Here is my approach that I managed to come up with.
First of all we know that the resulting array will contain N+M elements, meaning that the left part will contain (N+M)/2 elements, and the right part will contain (N+M)/2 elements as well. Let's denote the resulting array as Ans, and denote the size of one of its parts as PartSize.
Perform a binary search operation on array A. The range of such binary search will be [0, N]. This binary search operation will help you determine the number of elements from array A that will form the left part of the resulting array.
Now, suppose we are testing the value i. If i elements from array A are supposed to be included in the left part of the resulting array, this means that j = PartSize - i elements must be included from array B in the first part as well. We have the following possibilities:
j > M this is an invalid state. In this case it means we still need to choose more elements from array A, so our new binary search range becomes [i + 1, N].
j <= M & A[i+1] < B[j] This is a tricky case. Think about it. If the next element in array A is smaller than the element j in array B, this means that element A[i+1] is supposed to be in the left part rather than element B[j]. In this case our new binary search range becomes [i+1, N].
j <= M & A[i] > B[j+1] This is close to the previous case. If the next element in array B is smaller than the element i in array A, this means that element B[j+1] is supposed to be in the left part rather than element A[i]. In this case our new binary search range becomes [0, i-1].
j <= M & A[i+1] >= B[j] & A[i] <= B[j+1] this is the optimal case, and you have finally found your answer.
After the binary search operation is finished, and you managed to calculate both i and j, you can now easily find the value of the median. You need to handle a few cases here depending on whether N+M is odd or even.
Hope it helps! | I'm working on a competitive programming problem where we're trying to find the median of two sorted arrays. The optimal algorithm is to perform a binary search and identify splitting points, i and j, between the two arrays.
I'm having trouble deriving the solution myself. I don't understand the initial logic. I will follow how I think of the problem so far.
The concept of the median is to partition the given array into two sets. Consider a hypothetical left array and a hypothetical right array after merging the two given arrays. Both these arrays are of the same length.
We know that the median given both those hypothetical arrays works out to be [max(left) + min(right)]/2. This makes sense so far. But the issue now is knowing how to construct the left and right arrays.
We can choose a splitting point on ArrayA as i and a splitting point on ArrayB as j. Note that len(ArrayA[:i] + ArrayB[:j]) == len(ArrayA[i:] + ArrayB[j:]).
Now we just need to find the cutting points. We could try all splitting points i, j such that they satisfy the median condition. However, this works out to be O(M*N), where M is the size of ArrayB and N is the size of ArrayA.
I'm not sure how to get from where I am to the binary search solution using my train of thought. If someone could give me pointers, that would be awesome. | 0 | 1 | 711
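For reference, a compact Python version of the binary search the answer describes; this is a standard O(log(min(N, M))) implementation written for illustration, not code from the original post:
def median_two_sorted(a, b):
    if len(a) > len(b):          # binary-search over the shorter array
        a, b = b, a
    m, n = len(a), len(b)
    half = (m + n + 1) // 2
    lo, hi = 0, m
    while lo <= hi:
        i = (lo + hi) // 2       # elements taken from a for the left part
        j = half - i             # elements taken from b for the left part
        a_left = a[i - 1] if i > 0 else float('-inf')
        a_right = a[i] if i < m else float('inf')
        b_left = b[j - 1] if j > 0 else float('-inf')
        b_right = b[j] if j < n else float('inf')
        if a_left <= b_right and b_left <= a_right:
            if (m + n) % 2:
                return max(a_left, b_left)
            return (max(a_left, b_left) + min(a_right, b_right)) / 2.0
        elif a_left > b_right:
            hi = i - 1
        else:
            lo = i + 1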
0 | 48,448,962 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-22T17:59:00.000 | 4 | 1 | 0 | Multi-Task Learning: Train a neural network to have different loss functions for the two classes? | 48,387,602 | 1.2 | python,tensorflow,keras | This is a question that's important in multi-task learning where you have multiple loss functions, a shared neural network structure in the middle, and inputs that may not all be valid for all loss functions.
You can pass in a binary mask, which is 1 or 0 for each of your loss functions, in the same way that you pass in the labels. Then multiply each loss by its corresponding mask. The derivative of 1*x is just dx, and the derivative of 0*x is 0, so you end up zeroing out the gradient in the appropriate loss functions. Virtually all optimizers are additive, meaning you're summing the gradients, and adding a zero is a null operation. Your final loss function should be the sum of all your other losses.
I don't know much about Keras. Another solution is to change your loss function to use the labels only: L = cross_entropy * (label / (label + 1e-6)). That term will be either almost 0 or almost 1; close enough for government work, and for neural networks at least. This is what I actually used the first time before I realized it was as simple as multiplying by an array of mask values.
Another solution to this problem is to use tf.where and tf.gather_nd to select only the subset of labels and outputs that you want to compare and then pass that subset to the appropriate loss function. I've actually switched to using this method rather than multiplying by a mask. But both work. | I have a neural net with two loss functions: one is binary cross-entropy for the 2 classes, and the other is a regression. Now I want the regression loss to be evaluated only for class_2, and to return 0 for class_1, because the regressed feature is meaningless for class_1.
How can I implement such an algorithm in Keras?
Training it separately on only class_1 data doesn't work because I get a NaN loss. Is there a more elegant way to define the loss to be 0 for one half of the dataset and mean squared error for the other half? | 0 | 1 | 2,069
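A minimal Keras-style sketch of the masking idea, assuming the regression target and a 0/1 mask are packed together into y_true; the packing and names are illustrative, not from the original post:
from keras import backend as K

def masked_mse(y_true, y_pred):
    target = y_true[:, 0]
    mask = y_true[:, 1]                     # 1.0 for class_2 rows, 0.0 for class_1 rows
    sq_err = K.square(y_pred[:, 0] - target) * mask
    return K.sum(sq_err) / (K.sum(mask) + K.epsilon())

# e.g. for a two-output model: model.compile(optimizer='adam', loss=['binary_crossentropy', masked_mse])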
0 | 48,401,126 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-01-23T11:07:00.000 | 2 | 1 | 0 | Is there any tensorflow version of numpy.view_as_windows? | 48,400,191 | 1.2 | python-3.x,numpy,tensorflow | Note: This answer does not answer the OP's exact question, but addresses the actual need of the OP as clarified in the comments (i.e., generate image patches, quickly). I just thought this would fit better here than in a badly-formatted comment.
If all you need to do is generating image patches, Tensorflow (and generally GPU acceleration) is not the right tool for this, because the actual computation is trivial (extract a sub-area of an image) and the bottleneck would be memory transfer between GPU and CPU.
My suggestion is, then, to write CPU-only code that uses view_as_windows and parallelize it via multiprocessing to split the workload on all your CPU cores.
Should you need to feed those patches to a Tensorflow graph afterwards, the way to go would be to first generate the patches on the CPU (with whatever input pipeline you like), batch them and then feed them to the GPU for the graph computation. | I am working with Python 3 and TensorFlow to generate image patches using numpy's view_as_windows, but because numpy can't run on the GPU, is there any way to do it with TensorFlow?
ex: view_as_windows(array2d, window_shape, stride)
Thanks | 0 | 1 | 133 |
0 | 48,505,541 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-01-24T05:25:00.000 | 0 | 1 | 0 | How to use mpi for python for parallel cluster computing/ hpc? | 48,415,348 | 0 | python,parallel-processing,cluster-computing,hpc,mpi4py | Actually, you may submit one script to some nodes, specified in a given file. But the results are given based on each script. You cannot combine the results of more than one script on run-time, since each result is saved in some particular file (if any job scheduler used). | I've read some tutorials and documentation on MPI for python. However, I'm still not clear on how it is supposed to be used for sending jobs to separate nodes in a cluster, then combining/processing the results. It seems that you only specify the number of different processes.
Is it possible to use MPI to send versions of the same script to separate nodes, which run separately with multiprocessing, and then combine the results later? If this is an inappropriate use of MPI, what could do something like this? | 0 | 1 | 207
0 | 48,419,553 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-01-24T08:21:00.000 | 0 | 2 | 0 | How to modify the initial value in BasicLSTMcell in tensorflow | 48,417,764 | 0 | python,tensorflow,lstm,recurrent-neural-network | In the first iteration add some tf.assign operations to assign the values you want to the internal variables.
Make sure this only happens in the first iteration otherwise you'll overwrite any training you do.
The cell has a method called get_trainable_variables to help you if you want. | I want to initialize the weight and bias values in a BasicLSTMCell in TensorFlow with my pre-trained values (loaded from .npy files). But when I use get_tensor_by_name to get the tensor, it seems that it just returns a copy, and the raw value is never changed. I need your help! | 0 | 1 | 488
0 | 48,605,766 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-01-24T10:20:00.000 | 2 | 2 | 0 | Predict a variable that is not in the input sequences with LSTM-Keras | 48,420,005 | 1.2 | python,keras,lstm | I'd say that one way for it to do this is for your network to simply predict C or have C as the label
I have been seeing this again and again. Don't confuse a NN with something more than it actually is. You simply approximate the output Y given an input X by learning a function F. That is your NN.
In your case the output could very easily be C + Other_Output
Depending on what that other output is, your network could converge and have good results. It could very well not, so at this point your question is simply incomplete. You have to ask yourself some questions like:
Does C + Other_Output make sense for the given input?
Is there a good way for me to serialize C + Other_Output? For example, having the first K out of N output array elements describe C and the remaining N-K describe Other_Output?
Is C a multiclass problem, and if so, is Other_Output a different kind of problem, or could it be turned into a multiclass problem of the same kind that converges along with C, or could you make them both one multilabel problem?
These are at least some of the questions you need to ask yourself before even choosing the architecture.
That being said, no, unless you train your network to learn about patterns between A B D and C it will not be able to predict a missing input.
Good luck,
Gabriel | Assume that I have five columns in my dataset (A,B,C,D,E) and I want to build an LSTM model by training just on A,B,D,E (i.e. I want to exclude C)
My problem is that I still want to use this model to predict C. Is it possible if I didn't train my model with this variable? How can I do that?
EDIT 1
I'm working with categorical and numerical data modeled as time series. In this specific case, C is a categorical time series (given in a one-hot representation). | 0 | 1 | 686 |
0 | 48,425,028 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-01-24T14:26:00.000 | 0 | 2 | 0 | dask job killed because memory usage? | 48,424,813 | 0 | python,performance,dask | Try going through the data in chunks with:
import pandas as pd  # the chunked read shown here uses pandas
chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk) | Hi, I have a Python script that uses the dask library to handle a very large data frame, larger than physical memory. I notice that the job gets killed in the middle of a run if the machine's memory usage stays at 100% for some time.
Is this expected? I would have thought the data would be spilled to disk, and there is plenty of disk space left.
Is there a way to limit its total memory usage? Thanks
EDIT:
I also tried:
dask.set_options(available_memory=12e9)
It did not work; it did not seem to limit the memory usage. Again, when memory usage reaches 100%, the job gets killed. | 0 | 1 | 1,009
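A sketch of one way to cap memory with dask.distributed so workers spill to disk instead of being killed; the worker counts and the limit are assumed values, not from the original question:
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=4, threads_per_worker=1,
                       memory_limit='4GB')   # per-worker cap (assumed value)
client = Client(cluster)
# Build and compute the dask dataframe as usual; workers now spill to local
# disk and pause work when they approach the limit instead of exhausting RAM.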
0 | 48,432,486 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-01-24T21:39:00.000 | 0 | 1 | 0 | Why would importing pandas use up almost all my ram? | 48,432,038 | 0 | python,pandas,ram,conda | Ok, so starting from a brand new environment with latest python 3.5.4 and latest panda seems to cut it....so I think I'll close this wonderful thread for now and re-open if after reinstalling all other needed libs I end up with the same problem. | I'm fairly new to python but I'm pretty sure I didn't get this behaviour before.
A couple of days back I've noticed that if I open a new python console and simply do:
import pandas as pd
Then python.exe ram usage grows steadily in about 5 seconds to reach about 96% utilisation (ie about 15.5G of my 16G total ram).
That's not normal, right?
I'm using anaconda3 python 3.5 on windows 10....I've updated my conda and pandas but to no avail...
Cheers | 0 | 1 | 61 |
0 | 48,448,182 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-25T16:35:00.000 | 1 | 1 | 0 | Can I use ffmpeg to output jpgs to a numpy array in python without writing the files to disk etc? | 48,447,814 | 0.197375 | python,numpy,ffmpeg | You are exactly right, JPEG images are compressed (this is even a lossy compression, PNG would be a format with lossless compression), and JPEG files are much smaller than the data in uncompressed form.
When you load the images to memory, they are in uncompressed form, and having several GB of data with 14400 images is not surprising.
Basically, my advice is don't do that. Load them one at a time (or in batches), process them, then load the next images. If you load everything to memory beforehand, there will be a point when you run out of memory.
I'm doing a lot of image processing, and I have trouble imagining a case where it is necessary to have that many images loaded at once. | I have to read thousands of images into memory; this has to be done. When I extract frames from a video using ffmpeg, the 14400 JPG files take 92 MB of disk space. When I read those images in Python and append them to a Python list using libraries like OpenCV, SciPy, etc., the same 14400 files take 2.5 to 3 GB. I guess the decoding is the reason? Any thoughts on this would be helpful. | 0 | 1 | 420
0 | 48,455,544 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-01-26T02:48:00.000 | 0 | 1 | 0 | Pip is correctly installing libraries to the proper directory, but I cannot import those packages properly in program | 48,455,129 | 0 | python,python-3.x,pandas,python-import | SOLVED. So when I did:
import sys
sys.path.append(path to pandas library)
it worked! so now I can fully use pandas. I guess I will just have to do this anytime I download a new library and it doesn't work. Thank you for all the help | I installed (with pip) MatPlotLib and Pandas, and they are both not working properly in programs. Here is the strange thing...
When I type the following into the interactive environment of IDLE
import pandas as pd
pd.Series([1, 2, 3, 4, 5])
I get this as output: (indicating that it works properly)
0 1
1 2
2 3
3 4
4 5
dtype: int64
But when I use that very same code in a python program, it crashes and says "AttributeError: module 'pandas' has no attribute 'Series'"
Can anyone tell me what's going on?
I can also successfully import matplotlib in the interactive environment but get errors when I do it in a program and run it.
EDIT: There is no shebang because I am running the program through IDLE. I am using python 3.6, the only python I have on my computer. I am executing this file by clicking run in IDLE.
Currently In my command prompt paths I have
C:\Users\Karl\AppData\Local\Programs\Python\Python36
C:\Users\Karl\AppData\Local\Programs\Python\Python36\Scripts\
EDIT 2: I think we are closer to finding out the problem. If I run a python program with the code above in the command line (this time with the proper shebang(I forgot one before)), it works! So this must be an idle issue.
Currently, one of IDLE's paths is to
C:\Users\Karl\AppData\Local\Programs\Python\Python36\lib\site-packages
which contains all of the libraries for python.
EDIT FINAL: SOLVED. So when I did:
import sys
sys.path.append(path to pandas library) it worked! so now I can fully use pandas. I guess I will just have to do this anytime I download a new library and it doesn't work. Thank you for all the help | 0 | 1 | 53 |
0 | 48,471,254 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-01-26T23:07:00.000 | 0 | 1 | 0 | How to parse EXTREMELY fuzzy dates? | 48,470,828 | 0 | python-3.x,date,fuzzy-comparison | The problem you're describing is categorically known as "Natural Language Parsing", or NLP.
Googling for Python NLP Date Parsing libraries yields several results. You should do that, and evaluate them for your needs. | I have web scraped data in a column of a pandas dataframe that represents when different pieces of art were created. This data was entered in as strings by various people in many many different formats. Some examples:
1998
circa 1995
c. 2003-5
March 2, 1904
1st quarter of 19th century
19th to 20th century
ca. late 19th and early 20th Century
BCE 500
206 BCE-240 CE
1995-99
designed 1950, produced 1969
designed 1935, produced circa 1946-1968
1990; and 1989
1975/97
618-907 CE
2001; 2006 and 2008
1937-42/48
no date
n.d.
mid 1900s
late 1940's
I've spent a couple days writing a long transformer class that attempts to handle every combination in my current dataset, which is semi-successful, but I figured this must be something people have done in the past.
So does there exist any way in Python to handle date information that is extremely fuzzy in this way? | 0 | 1 | 290 |
0 | 48,475,567 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-01-27T06:55:00.000 | 0 | 1 | 0 | Transposed convolution TensorFlow padding for FCN style networks | 48,473,417 | 0 | python,tensorflow,deep-learning,convolution,image-segmentation | No, it's your decision how you calculate the kernel-map in every single convolutional layer. It's a matter of designing your model. | I am implementing some variants of FCN for Segmentation. In particular, I have implemented a U-net architecture. Within the architecture, I am applying valid convolution with a 3x3 kernel and then I apply transposed convolution for upsampling with a 2x2 kernel and stride of 2.
My question is, if using valid or same padding for the convolution, does this determine whether we use valid or same padding for the transposed convolution?
Currently I use valid padding for convolution and same padding for transposed convolution. | 0 | 1 | 330 |
0 | 58,831,150 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-01-27T22:44:00.000 | 0 | 2 | 0 | Error while trying to run NiftyNet quick start command | 48,481,327 | 0 | python,niftynet | Please Install tensorflow using this command
pip install tensorflow
After that, install NiftyNet using the command below:
pip install niftynet
Or reinstall Python:
pip install python
If the problem persists, please describe your problem in more detail.
Also make sure your environment variables are set before executing the commands from the NiftyNet page. | I'm trying out NiftyNet and got stuck at the first step.
Trying to run the quickstart command
python net_download.py dense_vnet_abdominal_ct_model_zoo
python net_segment.py inference -c ~/niftynet/extensions/dense_vnet_abdominal_ct/config.ini
gives me
KeyError: "Registering two gradient with name 'FloorMod' !(Previous registration was in _find_and_load_unlocked :955)"
Could anyone help? I'm using Ubuntu 16.04 with an Nvidia GPU. I tried the tensorflow:1.4.1-py3 docker image, Anaconda with the CPU version of TensorFlow,
and native Python with the CPU version of TensorFlow, and I get the same error.
I'm pretty sure it's something I did wrong, because I get the same error from those different environments, but I'm not sure what...
Thanks! | 0 | 1 | 599 |
0 | 48,482,421 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-28T00:08:00.000 | 2 | 1 | 0 | how to set stride to zero when using tf.layers.conv2d | 48,481,873 | 0.379949 | python,tensorflow,neural-network,conv-neural-network | A unity stride is the same as not having a stride (turning it off), as its the normal way a convolution works.
As the stride is the number of pixels the sliding window moves, one is the minimum value, and zero would not be valid, as then the sliding window wouldn't move at all. | Is there a way to turn off the stride in TensorFlow when using tf.layers.conv2d()? According to the docs, the default is (1,1), but when I try to change this to (0,0) I get an error telling me that it has to be a positive number.
Thanks. | 0 | 1 | 336 |
0 | 53,142,990 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-01-29T20:52:00.000 | 6 | 1 | 0 | ImportError: No module named mpl_toolkits | 48,509,766 | 1.2 | python,matplotlib | I solved this by running the following imports in the following order:
import matplotlib.pyplot as plt
import mpl_toolkits
from mpl_toolkits.mplot3d import Axes3D
Note: This only worked for me on python3. So first, I had to install python3 and pip3. Then I did "pip3 install matplotlib" in Terminal. If you already have matplotlib then try "pip3 install --upgrade matplotlib" | I have a python program and am trying to plot something using the mplot3d from mpl toolkits, but whenever I try to import the Axes3D from mpl_toolkits from mpl_toolkits.mplot3d import Axes3D
I get the following error: ImportError: No module named mpl_toolkits | 0 | 1 | 11,673 |
0 | 49,185,722 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-01-30T09:18:00.000 | 0 | 2 | 0 | Fine control over h5py buffering | 48,517,784 | 0 | python,hdf5,h5py | If you need consistency and avoid corrupted hdf5 files, you may like to:
1) use a write-ahead log: append a log entry describing what is being added/updated each time; there is no need to write to the HDF5 file at that moment.
2) periodically, or when you need to shut down, replay the logs, applying them one by one and writing to the HDF5 file.
3) if your process goes down during 1), you won't lose data: on the next startup, just replay the logs and write them to the HDF5 file.
4) if your process goes down during 2), you still won't lose data: just remove the corrupted HDF5 file, replay the logs, and write it again. | I have some data in memory that I want to store in a HDF file.
My data are not huge (<100 MB, so they fit in memory very comfortably), so for performance it seems to make sense to keep them there. At the same time, I also want to store it on disk. It is not critical that the two are always exactly in sync, as long as they are both valid (i.e. not corrupted), and that I can trigger a synchronization manually.
I could just keep my data in a separate container in memory, and shovel it into an HDF object on demand. If possible I would like to avoid writing this layer. It would require me to keep track of what parts have been changed, and selectively update those. I was hoping HDF would take care of that for me.
I know about the driver='core' with backing-store functionality, but AFAICT it only syncs the backing store when closing the file. I can flush the file, but does that guarantee that the object is written to storage?
From looking at the HDF5 source code, it seems that the answer is yes. But I'd like to hear a confirmation.
Bonus question: Is driver='core' actually faster than normal filesystem back-ends? What do I need to look out for? | 0 | 1 | 870 |
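A small sketch of the core-driver pattern being asked about; the file and dataset names are made up for illustration:
import h5py
import numpy as np

f = h5py.File("data.h5", "w", driver="core", backing_store=True)
dset = f.create_dataset("values", shape=(1000,), dtype="f8")
dset[:100] = np.random.rand(100)   # edits happen in the in-memory image
f.flush()                          # push the current image out to the backing file
f.close()                          # final sync happens here as well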
0 | 48,531,968 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-30T16:03:00.000 | 0 | 1 | 0 | TensorFlow Checkpoints to S3 | 48,525,733 | 0 | python,tensorflow,amazon-s3,amazon-sagemaker | Create an object in S3 and enable versioning to the bucket. Everytime you change the model and save it to S3, it will be automatically versioned and stored in the bucket.
Hope it helps. | I am executing a Python-TensorFlow script on Amazon SageMaker. I need to checkpoint my model to the S3 bucket I am using, but I can't find out how to do this without using the SageMaker TensorFlow version.
How does one checkpoint to an S3 bucket without using the SageMaker TF version? | 1 | 1 | 895
0 | 48,527,617 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2018-01-30T17:42:00.000 | 1 | 2 | 0 | Jupyter Notebook and previous output | 48,527,451 | 0.099668 | python,jupyter-notebook | usually, yes, so long as the kernel is still up. the return values of all expressions evaluated are stored in the Out global list. If you are now executing statement number n, then Out[n-1] will have the last thing you successfully finished.
if your output was not returned, but rather printed. You're out of luck... | Is there any way to see the previous output without rerunning the program? For example, I left my ML algorithm to work overnight and in the morning I got the results. But, for some reason, when I pressed Enter on the original code, it started to run again and the original output disappeared. | 0 | 1 | 3,493 |
0 | 63,316,299 | 0 | 1 | 0 | 0 | 2 | false | 8 | 2018-01-30T18:02:00.000 | 0 | 2 | 0 | No module named 'bokeh.plotting'; bokeh is not a package | 48,527,785 | 0 | python,bokeh | Renaming the bokeh.py file and deleting bokeh.pyc file solved it for me. | I am trying to import curdoc. I have tried from bokeh.io import curdoc and from bokeh.plotting import curdocbut neither works.
I've tried pip install -U bokeh and pip install bokeh but it still returns no module named 'bokeh.plotting; 'bokeh' is not a package'. What is happening?
I have reverted back to 0.12.1 currently. | 0 | 1 | 6,323 |
0 | 52,453,673 | 0 | 1 | 0 | 0 | 2 | true | 8 | 2018-01-30T18:02:00.000 | 18 | 2 | 0 | No module named 'bokeh.plotting'; bokeh is not a package | 48,527,785 | 1.2 | python,bokeh | Check your folder if any of the program named bokeh.py please rename it because it's picking bokeh.plotting from your program bokeh.py not from the library. | I am trying to import curdoc. I have tried from bokeh.io import curdoc and from bokeh.plotting import curdocbut neither works.
I've tried pip install -U bokeh and pip install bokeh but it still returns no module named 'bokeh.plotting; 'bokeh' is not a package'. What is happening?
I have reverted back to 0.12.1 currently. | 0 | 1 | 6,323 |
0 | 48,529,697 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-30T18:04:00.000 | 1 | 1 | 0 | Loss is increasing from first epoch itself | 48,527,808 | 1.2 | python,optimization,neural-network,deep-learning,pytorch | Probably your learning rate is too high. Try decreasing your learning rate. A too large learning rate is the most common reason for loss increasing from the first epoch.
Also, your loss is very high; it is unusual to have such a high loss. You probably have a sum in your loss function; it might be wiser to replace that sum with a mean. While this makes no difference if you use the Adam optimizer, with plain SGD (with or without momentum) using a sum instead of a mean means you will need to tune your learning rate differently if the dimensions of your system (or the length of the sequence processed by your LSTM) change. | I am training my siamese network for NLP; I have used an LSTM in it, with BCELoss. My loss is increasing from the first epoch. The loss over the first 36 epochs is:
error after 0 is
272.4357
[torch.FloatTensor of size 1]
error after 1 is
271.8972
[torch.FloatTensor of size 1]
error after 2 is
271.5598
[torch.FloatTensor of size 1]
error after 3 is
271.6979
[torch.FloatTensor of size 1]
error after 4 is
271.7315
[torch.FloatTensor of size 1]
error after 5 is
272.3965
[torch.FloatTensor of size 1]
error after 6 is
273.3982
[torch.FloatTensor of size 1]
error after 7 is
275.1197
[torch.FloatTensor of size 1]
error after 8 is
275.8228
[torch.FloatTensor of size 1]
error after 9 is
278.3311
[torch.FloatTensor of size 1]
error after 10 is
277.1054
[torch.FloatTensor of size 1]
error after 11 is
277.8418
[torch.FloatTensor of size 1]
error after 12 is
279.0189
[torch.FloatTensor of size 1]
error after 13 is
278.4090
[torch.FloatTensor of size 1]
error after 14 is
281.8813
[torch.FloatTensor of size 1]
error after 15 is
283.4077
[torch.FloatTensor of size 1]
error after 16 is
286.3093
[torch.FloatTensor of size 1]
error after 17 is
287.6292
[torch.FloatTensor of size 1]
error after 18 is
297.2318
[torch.FloatTensor of size 1]
error after 19 is
307.4176
[torch.FloatTensor of size 1]
error after 20 is
304.6649
[torch.FloatTensor of size 1]
error after 21 is
328.9772
[torch.FloatTensor of size 1]
error after 22 is
300.0669
[torch.FloatTensor of size 1]
error after 23 is
292.3902
[torch.FloatTensor of size 1]
error after 24 is
300.8633
[torch.FloatTensor of size 1]
error after 25 is
305.1822
[torch.FloatTensor of size 1]
error after 26 is
333.9984
[torch.FloatTensor of size 1]
error after 27 is
346.2062
[torch.FloatTensor of size 1]
error after 28 is
354.6148
[torch.FloatTensor of size 1]
error after 29 is
341.3568
[torch.FloatTensor of size 1]
error after 30 is
369.7580
[torch.FloatTensor of size 1]
error after 31 is
366.1615
[torch.FloatTensor of size 1]
error after 32 is
368.2455
[torch.FloatTensor of size 1]
error after 33 is
391.4102
[torch.FloatTensor of size 1]
error after 34 is
394.3190
[torch.FloatTensor of size 1]
error after 35 is
401.0990
[torch.FloatTensor of size 1]
error after 36 is
422.3723
[torch.FloatTensor of size 1] | 0 | 1 | 1,124 |
0 | 48,550,016 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-01-30T23:20:00.000 | 1 | 2 | 0 | How to oversample image dataset using Python? | 48,532,069 | 0.099668 | python-3.x,machine-learning,deep-learning,computer-vision,imblearn | Thanks for the clarification. In general, you don't oversample with Python. Rather, you pre-process your data base, duplicating the short-handed classes. In the case you cite, you might duplicate everything in class B, and make 5 copies of everything in class C. This gives you a new balance of 1000:600:500, likely more palatable to your training routines. Instead of the original 1400 images, you now shuffle 2100.
Does that solve your problem? | I am working on a multiclass classification problem with an unbalanced dataset of images (different classes). I tried the imblearn library, but it does not work on the image dataset.
I have a dataset of images belonging to 3 classes, namely A, B, and C. A has 1000 images, B has 300, and C has 100. I want to oversample classes B and C so that I can avoid data imbalance. Please let me know how to oversample the image dataset using Python. | 0 | 1 | 2,828
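A minimal sketch of the duplication approach the answer describes, applied to lists of image file paths; the function and variable names are made up:
import random

def oversample_paths(paths_by_class, target=None):
    # paths_by_class: dict like {"A": [...1000 paths], "B": [...300], "C": [...100]}
    target = target or max(len(p) for p in paths_by_class.values())
    balanced = []
    for label, paths in paths_by_class.items():
        reps = paths * (target // len(paths)) + random.sample(paths, target % len(paths))
        balanced.extend((p, label) for p in reps)
    random.shuffle(balanced)
    return balanced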
0 | 48,560,107 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-01-31T09:56:00.000 | 1 | 1 | 0 | Bigger neural network converges to bigger error than smaller | 48,539,256 | 0.197375 | python,machine-learning,neural-network,keras | What I noticed in my tests is that increasing the number of parameters require sometime to review how you prepare your input data or how you initialize your weights. I found that often increasing the number of parameteres requires to initialize the weights differently (meaning initializing with smaller values) or you need to normalize the input data (I guess you have done that), or even dividing them by a constant factor to make them smaller.
Sometime reducing the learning rate helps, since your cost function will become more complex with more parameters and it may happen that the learning rate that before was working fine is too big for your new case. But is very difficult to give a precise answer.
Something else: what do you mean with bigger error? Are you doing classification or regression? In addition are you talking about error on the train set or the dev/test sets? That is a big difference. It may well be that (if you are talking about the dev/test sets) that you are overfitting your data and therefore gets a bigger error on the dev/tests sets (bias-variance tradeoff)... Can you give us more details? | I am training neural networks using the great Keras library for Python. I got curious about one behaviour I don't understand.
Often even slightly bigger models converge to a bigger error than smaller ones.
Why does this happen? I would expect bigger model just to train longer, but converge to smaller or same error.
I hyperoptimized the model, tried different amounts of dropout regularization and let it train for sufficient time. I experimented with models about 10-20k parameters, 5 layers, 10M data samples and 20-100 epochs with decreasing LR. Models contained Dense and sometimes LSTM layers. | 0 | 1 | 407 |
0 | 49,105,522 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-31T12:37:00.000 | 0 | 1 | 0 | Unsuccessfully Running TensorFlow Package on Windows and Python | 48,542,418 | 1.2 | python,import,packages | Just change this instruction: conda create -n tensorflow pip python=3.5
to: conda create -n tensorflow pip python=3.5 -spyder. | I have been working with Anaconda, Python 3.5 and Win 7. Having been obliged to work with TensorFlow, I installed and run it. For the first time it worked but for the second time, I got the error : "tensor flow module not found". I uninstalled and installed it again and got the same error. Switching to windows 10, python 2 or using another system have not been useful.
Is there any other way to try (except using Linux)?
Thanks a lot in advance. | 0 | 1 | 39 |
0 | 48,544,648 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-01-31T13:23:00.000 | 0 | 1 | 0 | What is the difference between oversampling minority data and simulating minority data? | 48,543,257 | 1.2 | python,statistics | So after some research, here is my answer.
Bootstrapping is for either 1D array or 2D (also referred to as pairs bootstrap) of any columns of data (only columns) while oversampling is for the minority class in the dataset (rows + columns).
Bootstrapping is less popular for binary classification with 100+ features, as it does not maintain the underlying relationship between the features, while oversampling (depending on the technique) does maintain the underlying relationship between the features in the data. Moreover, bootstrapping works best for continuous data in a single column (in the 1D or 2D case), as then you can produce a range of data for the whole column, regardless of minority or majority class, as if the measurements were taken n times (n is any integer 1+).
Thus, in this question scenario, oversampling is preferred as it maintains the underlying structure of the data of the minority class. | Oversampled data as in those created by the SMOTHE and ADASYN method.
Simulated data as in those created by statistics, using extreme mean values, bootstrapping and measuring performance with the p-value.
The application is to increase the minority class of underrepresented data using python. | 0 | 1 | 39 |
0 | 48,550,825 | 0 | 0 | 0 | 0 | 2 | true | 12 | 2018-01-31T19:41:00.000 | 15 | 3 | 0 | What does train_on_batch() do in keras model? | 48,550,201 | 1.2 | python,tensorflow,machine-learning,keras,artificial-intelligence | Yes, train_on_batch trains using a single batch only and once.
While fit trains many batches for many epochs. (Each batch causes an update in weights).
The idea of using train_on_batch is probably to do more things yourself between each batch. | I saw a sample of code (too big to paste here) where the author used model.train_on_batch(in, out) instead of model.fit(in, out). The official documentation of Keras says:
Single gradient update over one batch of samples.
But I don't get it. Is it the same as fit(), but instead of doing many feed-forward and backprop steps, it does it once? Or am I wrong? | 0 | 1 | 13,182 |
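A short sketch of the kind of manual loop where train_on_batch is typically used; model, batch_generator, and the hyper-parameters are assumed placeholders:
for epoch in range(num_epochs):
    for X_batch, y_batch in batch_generator(X_train, y_train, batch_size=32):
        loss = model.train_on_batch(X_batch, y_batch)   # one gradient update per call
    print("epoch", epoch, "last batch loss:", loss)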
0 | 62,452,563 | 0 | 0 | 0 | 0 | 2 | false | 12 | 2018-01-31T19:41:00.000 | 0 | 3 | 0 | What does train_on_batch() do in keras model? | 48,550,201 | 0 | python,tensorflow,machine-learning,keras,artificial-intelligence | The method fit of the model train the model for one pass through the data you gave it, however because of the limitations in memory (especially GPU memory), we can't train on a big number of samples at once, so we need to divide this data into small piece called mini-batches (or just batchs). The methode fit of keras models will do this data dividing for you and pass through all the data you gave it.
However, sometimes we need a more complicated training procedure: we may want, for example, to randomly select new samples to put in the batch buffer each epoch (e.g. GAN training and Siamese CNN training). In these cases we don't use the fancy and simple fit method; instead we use the train_on_batch method. To use this method, we generate a batch of inputs and a batch of outputs (labels) in each iteration and pass them to this method; it will train the model on all the samples in the batch at once and give us the loss and other metrics calculated with respect to the batch samples. | I saw a sample of code (too big to paste here) where the author used model.train_on_batch(in, out) instead of model.fit(in, out). The official documentation of Keras says:
Single gradient update over one batch of samples.
But I don't get it. Is it the same as fit(), but instead of doing many feed-forward and backprop steps, it does it once? Or am I wrong? | 0 | 1 | 13,182 |
0 | 48,553,586 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-01-31T23:47:00.000 | 2 | 3 | 0 | Writing dataframe to csv WITHOUT removing commas | 48,553,303 | 0.132549 | python,python-3.x,pandas,csv | df.to_csv('output.tsv', sep='\t')
Will separate the values with tabs instead of commas.
.tsv is tab separated value file | I want to write a pandas dataframe to csv. One of the columns of the df has entries which are lists, e.g. [1, 2], [3, 4], ...
When I use df.to_csv('output.csv') and I open the output csv file, the commas are gone. That is, the corresponding column of the csv has entries [1 2], [3 4], ....
Is there a way to write a dataframe to csv without removing the commas in these lists? I've also tried using csv.writer. | 0 | 1 | 7,416 |
0 | 50,587,505 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2018-02-01T05:22:00.000 | 0 | 2 | 0 | PySpark casting IntegerTypes to ByteType for optimization | 48,555,860 | 0 | python,apache-spark,pyspark,spark-dataframe | In order to see whether there is any impact, you can try two things:
Write the data back to the file system. Once with the original type and anther time with your optimisation. Compare size on disk.
Try calling collect on the dataframe and look at the driver memory in your OS's system monitor, make sure to induce a garbage collection to get a cleaner indication. Again- do this once w/o the optimisation and another time with the optimisation.
user8371915 is right in the general case but take into account that the optimisations may or may not kick in based on various parameters like row group size and dictionary encoding threshold.
This means that even if you do see impact, there is a good chance you could get the same compression by tuning spark. | I'm reading in a large amount of data via parquet files into dataframes. I noticed a vast amount of the columns either have 1,0,-1 as values and thus could be converted from Ints to Byte types to save memory.
I wrote a function to do just that and return a new dataframe with the values casted as bytes, however when looking at the memory of the dataframe in the UI, I see it saved as just a transformation from the original dataframe and not as a new dataframe itself, thus taking the same amount of memory.
I'm rather new to Spark and may not fully understand the internals, so how would I go about initially setting those columns to be of ByteType? | 0 | 1 | 339 |
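A hedged PySpark sketch of the cast plus the "write it out and compare" check suggested above; the column names are hypothetical:
from pyspark.sql.functions import col
from pyspark.sql.types import ByteType

flag_cols = ["flag_a", "flag_b"]   # hypothetical -1/0/1 columns
df_small = df.select(*[col(c).cast(ByteType()).alias(c) if c in flag_cols else col(c)
                       for c in df.columns])
df_small.write.mode("overwrite").parquet("/tmp/df_bytes.parquet")
# Compare the on-disk size of this output with the original parquet files.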
0 | 48,645,854 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2018-02-01T05:22:00.000 | 0 | 2 | 0 | PySpark casting IntegerTypes to ByteType for optimization | 48,555,860 | 0 | python,apache-spark,pyspark,spark-dataframe | TL;DR It might be useful, but in practice impact might be much smaller than you think.
As you noticed:
the memory of the dataframe in the UI, I see it saved as just a transformation from the original dataframe and not as a new dataframe itself, thus taking the same amount of memory.
For storage, Spark uses in-memory columnar storage, which applies a number of optimizations, including compression. If data has low cardinality, then column can be easily compressed using run length encoding or dictionary encoding, and casting won't make any difference. | I'm reading in a large amount of data via parquet files into dataframes. I noticed a vast amount of the columns either have 1,0,-1 as values and thus could be converted from Ints to Byte types to save memory.
I wrote a function to do just that and return a new dataframe with the values casted as bytes, however when looking at the memory of the dataframe in the UI, I see it saved as just a transformation from the original dataframe and not as a new dataframe itself, thus taking the same amount of memory.
I'm rather new to Spark and may not fully understand the internals, so how would I go about initially setting those columns to be of ByteType? | 0 | 1 | 339 |
0 | 48,558,428 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-01T08:13:00.000 | 4 | 1 | 0 | Real-time anomaly detection from time series data | 48,558,085 | 1.2 | python,machine-learning,keras,time-series | I think you are using batch processing model(you didn't using any real-time processing frameworks and tools) so there shouldn't be any problem when making your model or classification. The problem may occur a while after you make model, so after that time your predicted model is not valid.
I suggest some ways that may solve this problem:
Use real-time or near-real-time processing (like Apache Spark, Flink, Storm, etc.).
Use some conditions to check your data periodically for changes; if any change happens, retrain your model.
Delete instances that you think may cause problems (that changed data may be an anomaly itself), but first make sure that data is not important.
Change your algorithm and use algorithms that are not very sensitive to changes. | I have a problem when detecting anomalies in time series data.
I use an LSTM model to predict the value at the next time step as y_pred; the true value at the next time step is y_real, so I have er = |y_pred - y_real|. I compare er with threshold = alpha * std to flag anomalous data points. But sometimes our data is affected by admins or users; for example, the number of players of a game on Sunday will be higher than on Monday.
So, should I use another model to classify anomalous data points, or use "if/else" rules to classify them? | 0 | 1 | 1,811
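A small sketch of the simple thresholding route with a day-of-week adjustment, so weekday/weekend seasonality does not trigger false alarms; the array names and alpha value are assumptions:
import numpy as np
import pandas as pd

errors = np.abs(y_pred - y_real)                 # per-timestep residuals
s = pd.Series(errors, index=pd.DatetimeIndex(timestamps))
by_day = s.groupby(s.index.dayofweek)
thresh = by_day.transform(lambda e: e.mean() + 3.0 * e.std())   # alpha = 3
anomalies = s[s > thresh]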
0 | 48,800,308 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-01T11:45:00.000 | 2 | 1 | 0 | How to generate sparse orthogonal matrix in python? | 48,561,974 | 1.2 | python,sparse-matrix,orthogonal | As a preliminary thought you could partition the matrix into diagonal blocks, fill those blocks with QR and then permute rows/columns. The resulting matrices will remain orthagonal. Alternatively, you could define some sparsity pattern for Q and try to minimize f(Q, xi) subject to QQ^T=I where f is some (preferably) convex function that adds entropy through the random variable xi. Can't say anything about the efficacy of either method since I haven't actually tried them.
EDIT: A bit more about the second method. f can really be any function. One choice might be similarity of the non-zero elements to a random gaussian vector (or any other random variate): f = ||vec(Q) - x||_2^2, x ~ N(0, sigma * I). You could handle this using any general constrained optimizer. The problem of course, is that not every pattern S is guaranteed to have a (full rank) orthogonal filling. If you have the memory, L1 regularization (or a smooth approximation) could encourage sparsity in a dense matrix variable: g(Q) = f(Q) + P(Q) where P is any sparsity-inducing penalty function. Check out Wen & Yen (2010) "A feasible Method for Optimization with Orthogonality Constraints" for an algorithm specifically designed for optimization of general (differentiable) functions over (dense) orthogonal matrices and Liu, Wu, So (2015) "Quadratic Optimization with Orthogonality Constraints" for more theorical evaluation of several line/arc search algorithms for quadratic functions. If memory is a problem, you could generate each row/column separately using sparse basis pursuit, for which there are many algorithms depending on the nature of your problem. See Qu, Sun and Wright (2015) "Finding a sparse vector in a subspace: linear sparsity using alternate directions" and Bian et al (2015) "Sparse null space basis pursuit and analysis dictionary learning for high-dimensional data analysis" for algorithm details, though in both cases you will have to incorporate/replace constraints to promote orthogonality to all previous vectors.
It's also worth noting there are sparse QR algorithms that return Q as the product of sparse/structured matrices. If you are concerned about storage space alone, this might be the simplest method to create large, efficient orthogonal operators. | How can one generate random sparse orthogonal matrix?
I know there is a sparse matrices in scipy library but they are generally non-orthogonal. One can exploit QR-factorization, but it is not necessarily preserves sparsity. | 0 | 1 | 989 |
0 | 48,573,176 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-01T16:05:00.000 | 5 | 2 | 0 | How to implement Sklearn Metric in Keras as Metric? | 48,567,012 | 1.2 | python,machine-learning,scikit-learn,keras | Metrics in Keras and in Sklearn mean different things.
In Keras metrics are almost same as loss. They get called during training at the end of each batch and each epoch for reporting and logging purposes. Example use is having the loss 'mse' but you still would like to see 'mae'. In this case you can add 'mae' as a metrics to the model.
In Sklearn metric functions are applied on predictions as per the definition "The metrics module implements functions assessing prediction error for specific purposes". While there's an overlap, the statistical functions of Sklearn doesn't fit to the definition of metrics in Keras. Sklearn metrics can return float, array, 2D array with both dimensions greater than 1. There is no such object in Keras by the predict method.
Answer to your question:
It depends where you want to trigger:
End of each batch or each epoch
You can write a custom callback that is fired at the end of batch.
After prediction
This seems to be easier. Let Keras predict on the entire dataset, capture the result and then feed the y_true and y_pred arrays to the respective Sklearn metric. | Tried googling up, but could not find how to implement Sklearn metrics like cohen kappa, roc, f1score in keras as a metric for imbalanced data.
How to implement Sklearn Metric in Keras as Metric? | 0 | 1 | 2,808 |
0 | 48,574,103 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-01T22:47:00.000 | 1 | 1 | 0 | Python np.array example | 48,572,999 | 1.2 | python,neural-network | The shape of A tells me that A is most likely an array of 60 grayscale images (batch size 60), with each image having a size of 128x128 pixels.
We have: B = np.array([A[..., n:n+5] for n in (5*4, 5*5)]). To better understand what's happening here, let's unpack this line in reverse:
for n in (5*4, 5*5): This is the same as for n in (20, 25). The author probably chose to write it in this way for some intuitive reason related to the data or the rest of the code. This gives us n=20 and n=25.
A[..., n:n+5]: This is the same as A[:, :, n:n+5]. This gives us all the rows from all the images of in A, but only the 5 columns at n:n+5. The shape of the resulting array is then (60, 128, 5).
n=20 gives us A[:, :, 20:25] and n=25 gives us A[:, :, 25:30]. Each of these arrays is therefore of size (60, 128, 5).
Together, [A[..., n:n+5] for n in (5*4, 5*5)] gives us a list (thanks list comprehension!) with two elements, each a numpy array of size (60, 128, 5). np.array() converts this list into a numpy array of shape (2, 60, 128, 5).
The result is that B contains 2 patches of each image, each a 5 pixel column wide subset of the original image- one starting at column 20 and the second one starting at column 25.
I can't speculate to the reason for this crop without further information about the network and its purpose.
Hope this helps! | I am new to Python.
I am confused as to what is happening with the following:
B = np.array([A[..., n:n+5] for n in (5*4, 5*5)])
Where A.shape = (60L, 128L, 128L)
and B.shape = (2L, 60L, 128L, 5L)
I believe it is supposed to make some sort of image patch. Can someone explain to me what this does? This example is in the context of applying neural networks to images. | 0 | 1 | 80 |
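A small runnable check of the shapes described in the answer above, using random data in place of the original images:

import numpy as np

A = np.random.rand(60, 128, 128)                   # 60 "images" of 128x128
B = np.array([A[..., n:n+5] for n in (5*4, 5*5)])  # columns 20:25 and 25:30

print(A.shape)  # (60, 128, 128)
print(B.shape)  # (2, 60, 128, 5)
assert np.array_equal(B[0], A[:, :, 20:25])
assert np.array_equal(B[1], A[:, :, 25:30])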
0 | 68,209,345 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-02T10:32:00.000 | 0 | 2 | 0 | Difference of three Naive Bayes classifiers | 48,580,762 | 0 | python,machine-learning,naivebayes | We choose the algorithm based on the kind of dataset we have. Bernoulli Naive Bayes is good at handling boolean/binary attributes, while Multinomial Naive Bayes is good at handling discrete values and Gaussian Naive Bayes is good at handling continuous values.
Consider three scenarios:
1) Consider a dataset which has columns like has_diabetes, has_bp, has_thyroid, and then you classify the person as healthy or not. In such a scenario, Bernoulli NB will work well.
2) Consider a dataset that has marks of various students in various subjects and you want to predict whether the student is clever or not. In this case, Multinomial NB will work fine.
3) Consider a dataset that has weights of students and you are predicting their heights; Gaussian NB will work well in this case. | Sorry for some grammatical mistakes and misuse of words.
I am currently working with text classification, trying to classify emails.
After my research, I found out that Multinomial Naive Bayes and Bernoulli Naive Bayes are more often used for text classification.
Bernoulli just cares about whether the word occurs or not.
Multinomial cares about the number of occurrences of the word.
Gaussian Naive Bayes is usually used for continuous data and data with a normal distribution, e.g. height, weight.
But what is the reason that we don't use Gaussian Naive Bayes for text classification?
Will anything bad happen if we apply it to text classification? | 0 | 1 | 1,887
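A minimal sketch of the usual bag-of-words setup with Multinomial Naive Bayes for email classification; the four example emails and labels below are made up purely for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "meeting rescheduled to friday",
          "claim your free reward today", "project report attached"]
labels = ["spam", "ham", "spam", "ham"]

# Multinomial NB works on word counts; for Bernoulli NB you would use
# CountVectorizer(binary=True) together with BernoulliNB() instead.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)
print(clf.predict(["free prize for the meeting"]))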
0 | 48,620,666 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-02-02T19:11:00.000 | 1 | 1 | 0 | Installing ggplot in python 3.6.3 | 48,589,399 | 0.197375 | python,ggplot2,pip,conda | Try making a fresh conda environment and only request the installation of ggplot:
conda create --name my_new_env -c conda-forge ggplot
This installed ggplot v0.11.5 and python v3.6.3 for me just now (Ubuntu 17.10) | I am transitioning from R and would like to use ggplot in the yhat Rodeo IDE.
I have tried installing ggplot for python 3.6.3 (Anaconda 5.0.1 64-bit) in a number of ways.
When trying to install any kind of ggplot via conda I get this:
"UnsatisfiableError: The following specifications were found to be in conflict:
-ggplot
-zict"
I am aware that ggplot does not work with the most recent version of Python, so I tried installing an older version and received the same error.
I then thought to try pip, and ensured that pip was installed and up to date via conda.
In Rodeo, when I try to use "! pip install ggplot" I get the "pip.exe has stopped working" error. Same for easy_install, etc.
Any ideas? It seems that I can't install it in any way. | 0 | 1 | 2,135
0 | 48,633,697 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-03T01:18:00.000 | 2 | 1 | 0 | Launching tensorboard error - ImportError: cannot import name weakref | 48,593,014 | 0.379949 | python,tensorflow,tensorboard | For those with the same problem I was able to fix it through the following:
1) Find where your tensorflow lives: run pip show tensorflow and look at the Location line, then copy it.
2) For me it was cd /usr/local/lib/python2.7/site-packages/
3) cd tensorflow/python/lib
4) open tf_should_use.py
5) In your python editor replace line 28 from backports import weakref with import weakref and save the file. | With python 2.7 on a mac with tensorflow and running
tensorboard --logdir=directory/where/my/log/file/is produces the following error: ImportError: cannot import name weakref
I've seen several folks remedy the issue with pip install backports.weakref
but that requirement is already satisfied for me. Requirement already satisfied: backports.weakref in /usr/local/lib/python2.7/site-packages
I am out of ideas and am really keen to get tensorboard working.
Thanks | 0 | 1 | 917 |
0 | 48,595,998 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-03T09:18:00.000 | 0 | 2 | 0 | Tensorflow: Finding index of first occurrence of elements in a tensor | 48,595,802 | 0 | python,tensorflow | One possibility would be to use x.eval(), which gives you a numpy array back,
then use numpy.unique(...) | Suppose I have a tensor, x = [1, 2, 6, 6, 4, 2, 3, 2]
I want to find the index of the first occurrence of every unique element in x.
The output should be [0, 1, 6, 4, 2].
I basically want the second output of numpy.unique(x,return_index=True). This functionality doesn't seem to be supported in tf.unique.
Is there a workaround to this in tensorflow, without using any loops? | 0 | 1 | 778 |
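A sketch of the numpy route from the answer above, using the values from the question; inside a TensorFlow session, sess.run(x) or x.eval() produces the numpy array first.

import numpy as np

x = np.array([1, 2, 6, 6, 4, 2, 3, 2])
values, first_index = np.unique(x, return_index=True)
print(values)       # [1 2 3 4 6]
print(first_index)  # [0 1 6 4 2]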
0 | 48,607,299 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-03T17:54:00.000 | 0 | 1 | 0 | Difference between k-means clustering and creating groups by brute force approach | 48,600,277 | 1.2 | python,r,statistics,cluster-analysis | Define "better".
What you do seems to be related to "leader" clustering. But that is a very primitive form of clustering that will usually not yield competitive results. But with 1 million points, your choices are limited, and kmeans does not handle categorical data well.
But until you decide what is 'better', there probably is nothing 'wrong' with your greedy approach.
An obvious optimization would be to first split all the data based on the categorical attributes (as you expect them to match exactly). That requires just one pass over the data set and a hash table. If your remaining parts are small enough, you could try kmeans (but how would you choose k), or DBSCAN (probably using the same threshold you already have) on each part. | I have a task to find similar parts based on numeric dimensions--diameters, thickness--and categorical dimensions--material, heat treatment, etc. I have a list of 1 million parts. My approach as a programmer is to put all parts on a list, pop off the first part and use it as a new "cluster" to compare the rest of the parts on the list based on the dimensions. As a part on the list matches the categorical dimensions and numerical dimensions--within 5 percent--I will add that part to the cluster and remove from the initial list. Once all parts in the list are compared with the initial cluster part's dimensions, I will pop the next part off the list and start again, populating clusters until no parts remain on the original list. This is a programmatic approach. I am not sure if this is most efficient way of categorizing parts into "clusters" or if k-means clustering would be a better approach. | 0 | 1 | 161 |
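A sketch of the "split on the categorical attributes first, then cluster the numeric ones" idea from the answer above; the three example parts, the attribute names, and the eps value standing in for the 5 percent tolerance are all placeholders.

from collections import defaultdict
import numpy as np
from sklearn.cluster import DBSCAN

parts = [
    {"material": "steel", "heat": "annealed", "diameter": 10.0, "thickness": 2.00},
    {"material": "steel", "heat": "annealed", "diameter": 10.2, "thickness": 2.03},
    {"material": "brass", "heat": "none",     "diameter": 10.0, "thickness": 2.00},
]

# One pass: hash on the categorical attributes (exact match required).
groups = defaultdict(list)
for p in parts:
    groups[(p["material"], p["heat"])].append(p)

# Cluster the numeric dimensions within each categorical group.
for key, members in groups.items():
    X = np.array([[p["diameter"], p["thickness"]] for p in members])
    labels = DBSCAN(eps=0.05, min_samples=1).fit_predict(X / X.mean(axis=0))
    print(key, labels)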
0 | 48,799,838 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-02-04T14:26:00.000 | 4 | 1 | 0 | Get specific node id from the figure plot by osmnx | 48,609,095 | 0.664037 | python,plot,openstreetmap,networkx | Just set the annotate parameter to True in the osmnx.plot.plot_graph() function. | Now I can plot the map using osmnx, but I don't know the node ids of the nodes in the figure. That is, how can I associate the node ids with the plotted nodes in the figure? Or, can I plot the map with node ids? Thanks! | 0 | 1 | 469
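A minimal sketch of the answer above, assuming a 2018-era osmnx version where plot_graph still accepts an annotate argument (it writes each node's OSM id next to the node); the place name is just an example.

import osmnx as ox

G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
ox.plot_graph(G, annotate=True, node_size=10)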
0 | 48,779,917 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-04T19:22:00.000 | 0 | 1 | 0 | How to render a plot.ly Jupyter Notebook file into Dash python Dashboard | 48,612,105 | 1.2 | python,plotly-dash | I have solved it.
It was due to miscoding. The code was not in the right place.
Thanks. | I would like you to share a template in Dash (Python) that generates a scatter chart with the help of plot.ly's stunning graphics.
I am unable to get my Dash app running, but I can do the same flawlessly in a Jupyter Notebook. I was just wondering if I could run it in Dash.
Your help is much appreciated.
Regards. | 1 | 1 | 458 |
0 | 48,644,352 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-06T13:31:00.000 | -2 | 2 | 0 | Encode array to uint8 and then back | 48,644,272 | -0.197375 | python | Try array_name.encode("uint8").decode("uint8"); if this works, then you can use the decode() method | I have an array which has intensity values from -3000 to 1000. I have thresholded all values which are less than -100 to -100 and all values greater than 400 to 400, after which I convert it to datatype np.uint8, which makes all the values in the array have intensity values from 0 to 255.
What I am confused about is: is there an operation I can run which will map the array from the 0-255 intensity range back to the earlier -100 to 400 range?
Any suggestions will be useful. Thanks in advance | 0 | 1 | 1,021
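A sketch of the inverse mapping the question asks about. It assumes the forward conversion was a linear rescale of the clipped [-100, 400] range onto [0, 255]; if the array was only cast with astype(np.uint8), the original intensities are not recoverable, so keep the (lo, hi) bounds (or a float copy) around.

import numpy as np

lo, hi = -100, 400

def to_uint8(a):
    a = np.clip(a, lo, hi)
    return np.round((a - lo) / (hi - lo) * 255).astype(np.uint8)

def from_uint8(a8):
    return a8.astype(np.float32) / 255 * (hi - lo) + lo

orig = np.array([-3000.0, -50.0, 250.0, 1000.0])
print(from_uint8(to_uint8(orig)))  # clipped values, recovered up to ~1 unit of quantization error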
0 | 48,674,801 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-02-07T13:07:00.000 | 2 | 1 | 0 | Keras Training a CNN - Should I Convert Heatmap Data As Image or 2D Matrix | 48,664,649 | 1.2 | python,tensorflow,keras,conv-neural-network,data-processing | In fact, you need to reshape a single data point to have a 3D shape, as Keras expects your dataset to have shape (number of examples, width, height, channels). If you don't want to make your image RGB, you can simply leave it with only one channel (and interpret it as a greyscale channel). | I am interested in training a Keras CNN and I have some data in the form of 2D matrices (e.g. width x height). I normally represent, or visualize, the data as a heatmap with a colorbar.
However, in training the CNN and formatting the data input, I'm wondering if I should keep this matrix as a 2D matrix, or convert it into an RGB image that is essentially a 3D matrix?
What is the best practice, and what considerations should people take into account? | 0 | 1 | 691
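A minimal sketch of the answer above: keep the heatmaps as single-channel "images" by adding a trailing channel axis, which is the layout Keras' Conv2D layers expect; the array sizes are placeholders.

import numpy as np

heatmaps = np.random.rand(1000, 64, 48)        # (num_examples, height, width)
x = heatmaps[..., np.newaxis]                  # -> (1000, 64, 48, 1), one greyscale channel
x = (x - x.min()) / (x.max() - x.min())        # scale to [0, 1] before training
print(x.shape)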
0 | 48,668,122 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-07T15:54:00.000 | 1 | 2 | 0 | How to import data into Tensorflow? | 48,668,031 | 1.2 | python,tensorflow,neural-network,deep-learning,classification | There are many ways to import images for training. You can use Tensorflow, but these will be imported as Tensorflow objects, which you won't be able to visualize until you run the session.
My favorite tool to import images is skimage.io.imread. The imported images will have the dimensions (height, width, channels).
Or you can use the image-reading tool from scipy.misc.
To resize images, you can use skimage.transform.resize.
Before training, you will need to normalize all the images to have the values between 0 and 1. To do that, you simply divide the images by 255.
The next step is to one hot encode your labels to be an array of 0s and 1s.
Then you can build and train your CNN. | I am new to Tensorflow and to implementing deep learning. I have a dataset of images (images of the same object).
I want to train a Neural Network model using python and Tensorflow for object detection.
I am trying to import the data to Tensorflow but I am not sure what is the right way to do it.
Most of the tutorials available online use public datasets (i.e. MNIST), whose import is straightforward but not helpful in the case where I need to use my own data.
Is there a procedure or tutorial that I can follow? | 0 | 1 | 864
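A sketch of the loading and preprocessing steps described in the answer above; the folder layout (one sub-directory per class under data/) and the 128x128 target size are assumptions.

import glob, os
import numpy as np
from skimage.io import imread
from skimage.transform import resize

images, labels = [], []
class_names = sorted(os.listdir("data"))
for ci, cname in enumerate(class_names):
    for path in glob.glob(os.path.join("data", cname, "*.jpg")):
        img = imread(path)                # (height, width, channels)
        img = resize(img, (128, 128))     # float output already scaled to [0, 1]
        images.append(img)
        labels.append(ci)

x = np.asarray(images, dtype=np.float32)  # (n_images, 128, 128, 3)
y = np.eye(len(class_names))[labels]      # one-hot encoded labels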
0 | 48,674,833 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-07T16:27:00.000 | 4 | 1 | 0 | How to retrieve the content of a PCollection and assign it to a normal variable? | 48,668,686 | 1.2 | python,apache-beam | PCollection is simply a logical node in the execution graph and its contents are not necessarily actually stored anywhere, so this is not possible directly.
However, you can ask your pipeline to write the PCollection to a file (e.g. convert elements to strings and use WriteToText with num_shards=1), run the pipeline and wait for it to finish, and then read that file from your main program. | I am using Apache-Beam with the Python SDK.
Currently, my pipeline reads multiple files, parses them, and generates pandas dataframes from their data.
Then, it groups them into a single dataframe.
What I want now is to retrieve this single fat dataframe, assigning it to a normal Python variable.
Is it possible to do this? | 0 | 1 | 1,669
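A sketch of the workaround described in the answer above: write the PCollection to a single shard, let the pipeline finish, then read the file back into a normal variable. The paths are placeholders, and the with-block waits for the pipeline to complete before the read happens.

import apache_beam as beam
import pandas as pd

with beam.Pipeline() as p:
    (p
     | "Read"  >> beam.io.ReadFromText("input.csv")
     | "Write" >> beam.io.WriteToText("combined", num_shards=1,
                                      shard_name_template=""))

df = pd.read_csv("combined")   # now an ordinary pandas DataFrame in the driver program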
0 | 48,681,568 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-07T18:20:00.000 | 0 | 1 | 0 | Pandas read csv adds zeros | 48,670,780 | 0 | python,pandas,csv,encoding,iso-8859-1 | Basically the column looks like this
Column_ID
10
HGF6558
059
KP257
0001 | I have a problem with reading in a csv with an id field with mixed dtypes from the original source data, i.e. the id field can be 11, 2R399004, BL327838, 7, etc., with the vast majority of them being 8 characters long.
When I read it with multiple versions of pd.read_csv and encoding='iso-8859-1' it always converts the 7 and 11 to 00000007 or the like. I've tried using utf-8 but I get the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc9 in position 40: unexpected end of data
I have tried setting the dtype to {'field': object} and to string, and various iterations of latin-1 and the like, but it continually does this.
Is there any way to get around this error, without going through every individual file and fixing the dtypes? | 0 | 1 | 85 |
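A sketch of the usual fix for the leading-zero problem: force the id column to be read as strings so pandas never interprets it numerically. The file name is a placeholder and the column name is taken from the answer's example.

import pandas as pd

df = pd.read_csv("parts.csv", dtype={"Column_ID": str}, encoding="iso-8859-1")
print(df["Column_ID"].head())   # '7' and '0001' stay exactly as written in the file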
0 | 48,678,366 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2018-02-07T20:57:00.000 | 2 | 3 | 0 | How to detect rotated object from image using OpenCV? | 48,673,104 | 0.132549 | c#,python,c++,opencv,object-detection | What features are you using to detect your books? Are you training a CNN and deploying it with OpenCV? In that case adding rotation image augmentation to your training would make it easy to detect rotated books.
If you are using traditional computer vision techniques instead, you can try rotation-invariant feature extractors like SURF; however, the results will not be as good as using CNNs, which are now the state of the art for this kind of problem. | I have been training an OpenCV classifier for recognition of books. The requirement is to recognize a book from an image. I have used 1000+ images and OpenCV is able to detect books with no rotation. However, when I try to detect books with rotations it does not work properly. So I am wondering if there is any way to detect objects with rotations in images using OpenCV? | 0 | 1 | 5,762
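A sketch of the rotation-augmentation idea from the answer above, using the 2018-era Keras API (ImageDataGenerator and fit_generator); model, x_train and y_train are assumed to already exist.

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=180,      # random rotations up to +/-180 degrees
                             horizontal_flip=True,
                             fill_mode="nearest")
model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32, epochs=20)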
0 | 49,749,946 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-07T22:08:00.000 | 0 | 1 | 0 | Words appearing across all topics in lda | 48,674,084 | 0 | python,gensim,lda,topic-modeling | In LDA all words are part of all topics, but with a different probability. You could define a minimum probability for your words to print, but I would be very surprised if mallet didn't come up with at least a couple of "duplicate" words across topics as well. Make sure to use the same parameters for both gensim and mallet. | I am using gensim lda for topic modeling and getting the results like so:
Topic 1: word1 word2 word3 word4
Topic 2: word4 word1 word2 word5
Topic 3: word1 word4 word5 word6
However, using mallet on the same LDA does not produce duplicate words across topics. I have ~20 documents with >1000 words each that I train the LDA on. How do I get rid of words appearing across multiple topics? | 0 | 1 | 675
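A sketch of the "minimum probability" idea from the answer above, keeping only words whose per-topic probability exceeds a cutoff; lda (a trained gensim LdaModel) and dictionary are assumed to already exist, and the cutoff is arbitrary.

cutoff = 0.01
for topic_id in range(lda.num_topics):
    terms = lda.get_topic_terms(topic_id, topn=20)            # [(word_id, probability), ...]
    words = [dictionary[word_id] for word_id, prob in terms if prob >= cutoff]
    print("Topic %d:" % topic_id, " ".join(words))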