GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, 23 chars) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, 15 to 149 chars) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, 6 to 90 chars) | Answer (string, 18 to 5.54k chars) | Question (string, 49 to 9.42k chars) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 68,612,285 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-05-09T12:21:00.000 | 1 | 3 | 0 | dynamically growing array in numba jitted functions | 43,869,734 | 0.066568 | python,numpy,dynamic-arrays,numba | To dynamically increase the size of an existing array (and therefore do it in-place), numpy.ndarray.resize must be used instead of numpy.resize. This method is NOT implemented in Python, and is not available in Numba, so it just cannot be done. | It seems that numpy.resize is not supported in numba.
What is the best way to use dynamically growing arrays with numba.jit in nopython mode?
So far the best I could do is define and resize the arrays outside the jitted function, is there a better (and neater) option? | 0 | 1 | 2,018 |
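A minimal sketch of the "preallocate outside, fill inside the jitted function" pattern described in this record; the buffer size and the filtering condition are illustrative assumptions, not taken from the original post.

```python
import numpy as np
from numba import njit

@njit
def collect_positive(values, out):
    # Fill the preallocated buffer `out` and report how many slots were used.
    count = 0
    for v in values:
        if v > 0.0:
            out[count] = v
            count += 1
    return count

values = np.random.randn(1000)
buffer = np.empty(values.size, dtype=np.float64)  # allocated outside the jitted code
used = collect_positive(values, buffer)
result = buffer[:used]                            # trim to the part that was actually filled
```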
0 | 43,872,949 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-05-09T13:39:00.000 | 0 | 1 | 0 | Merging two DataFrames (CSV files) with different dates using Python | 43,871,444 | 0 | python,csv,dataframe,merge | The first file is something like:
Timestamp ; Flow1 ; Flow 2
2017/02/17 00:05 ; 540 ; 0
2017/02/17 00:10 ; 535 ; 0
2017/02/17 00:15 ; 543 ; 0
2017/02/17 00:20 ; 539 ; 0
CSV file #2:
Timestamp ; DOC ; Temperature ; UV254;
2017/02/17 00:14 ; 668.9 ; 15,13 ; 239,23
2017/02/17 00:15 ; 669,46 ; 15,14 ; 239,31
2017/02/17 00:19 ; 668 ; 15,13 ; 239,43
2017/02/17 00:20 ; 669,9 ; 15,14 ; 239,01
The output file is supposed to be like:
Timestamp ; DOC ; Temperature ; UV254 ; Flow1 ; Flow2
2017/02/17 00:15 ; 669,46 ; 15,14 ; 239,31 ; 543 ; 0
2017/02/17 00:20 ; 669,9 ; 15,14 ; 239,01 ; 539 ; 0 | I would like to know how I can proceed in order to concatenate two CSV files; here is the composition of these two files:
The first one contains some data related to water chemical parameters; these measurements are taken on different dates.
The second one shows the different flow values of waste water, during a certain period of time.
The problem is that I am looking to assign each value of the second file (Flow values) to the right row in the first file (water chemical parameters) in such a way that the flow and the other chemical parameters are measured in the same moments.
Any suggestions ? | 0 | 1 | 37 |
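One hedged way of lining the two files up on their nearest timestamps with pandas; the file names, the ";" separator and the 5-minute tolerance are assumptions for illustration, and it assumes the headers parse cleanly.

```python
import pandas as pd

flows = pd.read_csv("flows.csv", sep=";", parse_dates=["Timestamp"])
chemistry = pd.read_csv("chemistry.csv", sep=";", parse_dates=["Timestamp"], decimal=",")

# merge_asof requires both frames to be sorted on the key.
flows = flows.sort_values("Timestamp")
chemistry = chemistry.sort_values("Timestamp")

# For each chemistry measurement, take the flow reading closest in time.
merged = pd.merge_asof(chemistry, flows, on="Timestamp",
                       direction="nearest", tolerance=pd.Timedelta("5min"))
print(merged.head())
```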
0 | 43,871,677 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-05-09T13:46:00.000 | 0 | 2 | 0 | How do I pass my input to keras? | 43,871,607 | 0 | python,numpy,keras | If you want to create a 'list of numpys' you can do np.array(yourlist).
If you print result.shape you will see what the resulting shape is. Hope this helps! | I am currently aware that keras doesn't support a list of lists of numpy arrays, but I can't see any other way to pass my input.
My input to my neural network is each column (45 columns in total) of 33 different images.
The way I've currently stored it is as a
list of lists in which the outer list has length 45 and the inner has length 33, and within this inner list I store a numpy.ndarray of shape (1,8,3).
I feed it this as I need to do 45 convolutions, one for each column in the image. The same convolution has to be applied on all images for their respective column number.
So convolution_1 has to be applied on every first column on all the 33 images. | 0 | 1 | 218 |
0 | 43,872,953 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-05-09T13:46:00.000 | 0 | 2 | 0 | How do I pass my input to keras? | 43,871,607 | 0 | python,numpy,keras | You can use Input(batch_shape = (batch_size, height, width, channels)), where batch_size = 45, channels = 33 and use np.ndarray of shape (45, height, width, 33) if your backend is tensorflow | I am currently aware that keras doesn't support list of list of numpys.. but I can't see other way to pass my input.
My input to my neural network is each columns (total 45 columns) of 33 different images.
The way I've currently stored it is as an
list of list in which the outer list has length 45, and the inner has length 33, and within this inner list I store a numpy.ndarray of shape (1,8,3)..
I feed it this as I need to do 45 convolutions, one for each column in the image. The same convolution has to be applied on all images for their respective column number.
So convolution_1 has to be applied on every first column on all the 33 images. | 0 | 1 | 218 |
0 | 43,878,521 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-05-09T19:23:00.000 | 0 | 1 | 0 | extended and upright flags in SURF opencv c++ function | 43,878,271 | 1.2 | python,c++,opencv,image-processing,surf | got it !
C++: SURF::SURF(double hessianThreshold, int nOctaves=4, int nOctaveLayers=2, bool extended=true, bool upright=false ) | what are the equivalent flags of SURF in opencv C++ to python SURF flags extended and upright ?
In the Python version, the upright flag decides whether or not to calculate orientation,
and the extended flag gives the option of using the 64-dim or 128-dim descriptor.
Is there a way to do this same operation in the OpenCV C++ version of the SURF function?
FYI I am using opencv version 2.4.13 | 0 | 1 | 175 |
0 | 44,149,840 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-05-10T00:51:00.000 | 1 | 1 | 0 | Error using Torch RNN | 43,881,941 | 1.2 | python-2.7,lua,torch,luarocks | Check that the header file exists and that you have the correct path.
If the header file is missing you skipped the preprocess step. If the header file exists it's likely in your data directory and not in the same directory as the sample.lua code:
th train.lua -input_h5 data/my_data.h5 -input_json data/my_data.json | I'm following the instructions on github.com/jcjohnson/torch-rnn and have it working until the training section. When I use th train.lua -input_h5 my_data.h5 -input_json my_data.jsonI get the error Error: unable to locate HDF5 header file at /usr/local/Cellar/hdf5/1.10.0-patch1/include;/usr/include;/usr/local/opt/szip/include/hdf5.h
I'm new to luarocks and torch, so I'm not sure what's wrong. I installed torch-hdf5. Any advice would be very much appreciated. | 0 | 1 | 84 |
0 | 43,917,161 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-05-10T15:07:00.000 | 1 | 1 | 0 | Embeddings vs text cleaning (NLP) | 43,896,369 | 1.2 | python-3.x,text,nlp,embedding,data-cleaning | I post this here just to summarise the comments in a longer form and give you a bit more commentary. Not sure it will answer your question. If anything, it should show you why you should reconsider it.
Points about your question
Before I talk about your question, let me point a few things about your approaches. Word embeddings are essentially mathematical representations of meaning based on word distribution. They are the epitome of the phrase "You shall know a word by the company it keeps". In this sense, you will need very regular misspellings in order to get something useful out of a vector space approach. Something that could work out, for example, is US vs. UK spelling or shorthands like w8 vs. full forms like wait.
Another point I want to make clear (or perhaps you should do that) is that you are not looking to build a machine learning model here. You could consider the word embeddings that you could generate, a sort of a machine learning model but it's not. It's just a way of representing words with numbers.
You already have the answer to your question
You yourself have pointed out that using hunspell introduces new mistakes. It will be no doubt also the case with your other approach. If this is just a preprocessing step, I suggest you leave it at that. It is not something you need to prove. If for some reason you do want to dig into the problem, you could evaluate the effects of your methods through an external task as @lenz suggested.
How does external evaluation work?
When a task is too difficult to evaluate directly we use another task which is dependent on its output to draw conclusions about its success. In your case, it seems that you should pick a task that depends on individual words like document classification. Let's say that you have some sort of labels associated with your documents, say topics or types of news. Predicting these labels could be a legitimate way of evaluating the efficiency of your approaches. It is also a chance for you to see if they do more harm than good by comparing to the baseline of "dirty" data. Remember that it's about relative differences and the actual performance of the task is of no importance. | I am a graduate student focusing on ML and NLP. I have a lot of data (8 million lines) and the text is usually badly written and contains so many spelling mistakes.
So I must go through some text cleaning and vectorizing. To do so, I considered two approaches:
First one:
cleaning text by replacing bad words using hunspell package which is a spell checker and morphological analyzer
+
tokenization
+
convert sentences to vectors using tf-idf
The problem here is that sometimes Hunspell fails to provide the correct word and replaces the misspelled word with another word that doesn't have the same meaning. Furthermore, hunspell does not recognize acronyms or abbreviations (which are very important in my case) and tends to replace them.
Second approach:
tokenization
+
using some embedding method (like word2vec) to convert words into vectors without cleaning the text
I need to know if there is some (theoretical or empirical) way to compare these two approaches :)
Please do not hesitate to respond If you have any ideas to share, I'd love to discuss them with you.
Thank you in advance | 0 | 1 | 842 |
0 | 43,976,879 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-10T23:09:00.000 | 2 | 1 | 0 | How extract vocabulary vectors from gensim's word2vec? | 43,904,029 | 0.379949 | python,machine-learning,gensim,word2vec,text-classification | If you have trained word2vec model, you can get word-vector by __getitem__ method
model = gensim.models.Word2Vec(sentences)
print(model["some_word_from_dictionary"])
Unfortunately, embeddings from word2vec/doc2vec are not interpretable by a person (in contrast to topic vectors from LdaModel)
P.S. If you have texts as the objects in your task, then you should use the Doc2Vec model | I want to analyze the vectors looking for patterns and stuff, and use SVM on them to complete a classification task between class A and B; the task should be supervised. (I know it may sound odd but it's our homework.) So as a result I really need to know:
1- how to extract the coded vectors of a document using a trained model?
2- how to interpret them and how does word2vec code them?
I'm using gensim's word2vec. | 0 | 1 | 1,519 |
0 | 43,920,803 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-05-11T16:17:00.000 | 0 | 1 | 0 | Jupyter Notebook doesn't show in Dashboard (Windows 10) | 43,920,802 | 0 | python,jupyter-notebook | The files path names are too long. Reducing the path length by reducing the number of folders and/or folder name lengths will solve the problem. | My Jupyter Notebook doesn't show in the Jupyter Dashboard in Windows 10.
Additionally, I get the following error in my Jupyter cmd line console:
[W 00:19:39.638 NotebookApp] C:\Users\danie\Documents\Courses\Python-Data-Science-and-Machine-Learning-Bootcamp Jupyter Notebooks\Python-Data-Science-and-Machine-Learning-Bootcamp\Machine Learning Sections\Decision-Trees-and-Random-Forests\Decision Trees and Random Forest Project - Solutions.ipynb doesn't exist | 0 | 1 | 368 |
0 | 43,935,121 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2017-05-11T16:23:00.000 | 2 | 2 | 0 | IBM Watson nl-c training time | 43,920,923 | 0.197375 | python,ibm-cloud,ibm-watson,training-data,nl-classifier | For NLC it depends on the type of data, and quantity. There is no fixed time to when it completes, but I have seen a classifier run a training session for nearly a day.
That said, normally anywhere from 30 minutes to a couple of hours.
Watson Conversation intents are considerably faster (minutes). But both use different models, so I would recommend testing both and seeing the results. Also check how each is scoring when comparing (absolute/relative). | I have a data-set that contains about 14,700 records. I wish to train it on IBM Watson and currently I'm on the trial version. What is a rough estimate of the time that the classifier will take to train? Each record of the dataset contains a sentence and the second column contains the class name. | 0 | 1 | 222 |
0 | 43,921,011 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2017-05-11T16:23:00.000 | 0 | 2 | 0 | IBM Watson nl-c training time | 43,920,923 | 0 | python,ibm-cloud,ibm-watson,training-data,nl-classifier | If your operating system is UNIX, you can determine how long a query takes to complete and display results when executed using dbaccess. You can use the time command to report how much time is spent, from the beginning to the end of a query execution. Including the time to connect to the database, execute the query and write the results to an output device.
The time command uses another command or utility as an argument, and writes a message to standard error that lists timing statistics for that command. It reports the elapsed time between invocation of the command and its termination. The message includes the following information:
The elapsed (real) time between invocation and termination of the utility. The real time is divided in two components, based on the kind of processing:
The User CPU time, equivalent to the sum of the tms_utime and tms_cutime fields returned by the times function for the process in which utility is executed.
or,
The System CPU time, equivalent to the sum of the tms_stime and tms_cstime fields returned by the times() function for the process in which utility is executed. | I have a data-set that contains about 14,700 records. I wish to train it on ibm watson and currently i'm on trial version. What is the rough estimate about the time that the classifier will take to train? Each record of dataset contains a sentence and the second column contains the class-name. | 0 | 1 | 222 |
0 | 43,939,525 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-05-12T13:44:00.000 | 0 | 2 | 0 | File write collisions on parallelized python | 43,939,316 | 0 | python,file | Python processes, threads and coroutines offer synchronization primitives such as locks, rlocks, conditions and semaphores. If your threads randomly access one or more shared variables, then every thread should acquire a lock on that variable so that other threads cannot access it at the same time.
One of the issues I realized might come up is when I try to train multiple models on a cluster. What could happen is that two threads might try to write to the tinydb json file at the same time.
Can someone please let me know if this will be an issue? | 0 | 1 | 678 |
0 | 43,954,917 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-13T14:16:00.000 | -2 | 1 | 0 | How to finding distance between camera and detected object using openCV in python? | 43,954,187 | -0.379949 | python,opencv,image-processing,distance | I am sorry but finding a distance is a metrology problem, so you need to calibrate your camera. Calibrating is a relatively easy process which is necessary for any measurements.
Let's assume you only have one calibrated camera; if the orientation/position of this camera is fixed relative to the ground plane, it is possible to calculate the distance between the camera and the feet of somebody (assuming the feet are visible). | I want to find out the distance between the camera and the people (detected using the HOG descriptor) in front of the camera. I'm looking for a more subtle approach than calibrating the camera, without knowing any distances beforehand.
This can fall under the scenario of an autonomous car finding the distance to the car in front.
Can someone help me out with sample code or an explanation of how to do so? | 0 | 1 | 2,354 |
0 | 61,691,668 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2017-05-13T14:56:00.000 | 3 | 2 | 0 | Google foo.bar Challenge Issue: Can't import libraries | 43,954,548 | 0.291313 | python,numpy | Math is a standard library but still it is not working with the Foobar. | I am working on a problem (doomsday_fuel) using python and I need to use matrices, so I would like to import numpy. I have solved the problem and it runs perfectly on my own computer, but Google returns the error: ImportError: No module named numpy [line 3].
The beginning of my code looks like:
import fractions
from fractions import Fraction
import numpy as np
I have checked constraints.txt and they do not seem to restrict numpy
"Your code will run inside a Python 2.7.6 sandbox. Standard libraries are supported except for bz2, crypt, fcntl, mmap, pwd, pyexpat, select, signal, termios, thread, time, unicodedata, zipimport, zlib."
Does anyone have any ideas how or why this would happen? Or do people have ideas as to what steps I could take to ask Google about this? | 0 | 1 | 4,751 |
0 | 43,968,199 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2017-05-14T19:07:00.000 | 3 | 1 | 0 | An algorithm for grouping by trying without feedback | 43,967,808 | 1.2 | python,algorithm,sorting,theory | Let's try the following:
Suppose we have an m times n grid with p different colors. First we work row by row with the following algorithm:
Column reduction
Drag the piece at (1,1) to (1,2), then (1,2) to (1,3) and so on until you reach (1, n)
Drag the piece at (1,1) the same way to (1,n-1).
Continue till you reach (1, n-p) with the piece moved.
The first step is guaranteed to move the color that was originally at (1,1) to (1,n) and collect all pieces of the same color on its way.
The succeeding steps collect the remaining colors. After this part of the algorithm we are guaranteed to have only the columns p to n filled, each with a different color.
This we repeat for the remaining m-1 rows. After that the columns 1 to n-p-1 are guaranteed to be empty.
Row reduction
Now we repeat the same process with the columns, i.e. drag (1, j) to (m, j) for all j >= n-p and then drag (1,j) to (m-1, j).
After this part we are guaranteed to have only filled a p times p subgrid.
Full grid search
Now we collect each different color by brute force:
Move (p,p) to (p,p+1), (p, p+2), ... (p, n) and then to (p + 1, n), (p+1, n-1), ..., (p+1, p) and then to (p+2, p), ..., (p+2, n) and so on until we reach either (m, p) or (m,n), depending wether p is even or odd.
This step we repeat p times, only that we stop each time on field short of the last one.
As a result only the remaining p fields are filled and each contains a stack of the same color. Problem solved.
To estimate the complexity:
The row moving part requires n + n-1 + n-2 + ... + n-p= n*(n+1)/2 - (n-p)*(n-p+1)/2=np+(p^2+p)/2=O(n^2) moves per row, hence O(mn^2).
The column moving part similarly requires O(nm^2) moves.
The final moving requires p^2 moves for each color, i.e. O(p^3).
If q = max(n,m,p) the complexity is O(q^3).
Note: If we do not know p we could immediately start with the full grid search. We still remain in complexity O(q^3). If, however, p << n or p << m the column and row reduction will reduce the practical complexity greatly. | Tagged this as Python because is the most pseudo-y-code language in my opinion
I'll explain graphically and the answer can be graphical/theorical too (maybe its the wrong site to post this?)
Let's say I want to make an algorithm that solves a simple digital game for infants (this is not the actual context, its much more complex)
These are the rules :
There is a square grid seen from above, with a colored lego piece in
each spot
You can drag pieces to try and stack on top of each other.
If their color match, they will stack, leaving the spot of the first
piece you dragged empty.
If you move a piece to an empty spot, it will move to that spot
If their color don't match and you drag one of top of the other, they
will switch spots.
The amount of pieces of a same color is randomly generated when a new grid is started.
The goal of the game is to obviously drag pieces of the same color until you only have one stack of each color.
Now here comes question, I want to make a script that solves the game, but it will be "blind", meaning it won't be able to see colors, or track when a match occurs. It will have to traverse in a way that it will ensure it tried all possible "drags"
The main problem for me to even start thinking about this comes from the fact that they swap positions if the script fails to guess the color, and there's no feedback to know that you failed.
Also is the complexity of this calculable? Is it too insane? | 0 | 1 | 58 |
0 | 43,971,827 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-15T04:39:00.000 | 1 | 1 | 0 | How to plot the histogram of image width in python? | 43,971,678 | 0.197375 | python,image-processing,computer-vision,ipython,opencv-python | OK, I will give you the steps, but the coding has to be done by you.
Assuming you have Python and pip installed on your machine:
Install pillow using pip
get the images in the script and calculate the width and store them in a list, you will get to know how to calculate width from the Pillow documentation
Install matplotlib using pip
Pass that list you created from the images to the plotting function of matplotlib.
The histogram representation can be found in the Matplotlib documentation.
Hope it helps. | I have some training images (more than 20 images in .tif format) and I want to plot the histogram of their widths in Python. I will be more than happy if anyone can help. | 0 | 1 | 194 |
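A minimal sketch of the steps listed in the answer above, assuming the .tif images sit in a hypothetical local "images" folder.

```python
import glob
from PIL import Image
import matplotlib.pyplot as plt

widths = []
for path in glob.glob("images/*.tif"):
    with Image.open(path) as img:
        width, height = img.size   # Pillow returns (width, height)
        widths.append(width)

plt.hist(widths, bins=10)
plt.xlabel("Image width (pixels)")
plt.ylabel("Count")
plt.show()
```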
0 | 43,972,717 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-15T05:25:00.000 | 0 | 1 | 0 | Music genre classification with sklearn: how to accurately evaluate different models | 43,972,059 | 0 | python,machine-learning,scikit-learn,statistical-sampling | To evaluate a classifier's accuracy against another classifier, you need to randomly sample from the dataset for training and test. Use the test dataset to evaluate each classifier and compare the accuracy in one go.
Given a dataset stored in a dataframe, split it into training and test sets (random sampling is better for getting an in-depth understanding of how good your classifier is in all cases; stratified sampling can sometimes mask your true accuracy). Why? Let's take an example:
If you are doing stratified sampling on some particular category (and let's assume this category has an exceptionally large amount of data, i.e. it is skewed) and the classifier predicts that one category well, you might be led to believe that the classifier works well, even if it doesn't perform better on categories with less information. Where does stratified sampling work better? When you know that the real-world data will also be skewed and you will be satisfied if the most important categories are predicted correctly. (This definitely does not mean that your classifier will work badly on categories with less info; it can work well. It's just that stratified sampling sometimes does not present a full picture.)
Use the same training dataset to train all classifers and the same testing dataset to evaluate them. Also , random sampling would be better. | I'm working on a project to classify 30 second samples of audio from 5 different genres (rock, electronic, rap, country, jazz). My dataset consists of 600 songs, exactly 120 for each genre. The features are a 1D array of 13 mfccs for each song and the labels are the genres.
Essentially I take the mean of each set of 13 mfccs for each frame of the 30 second sample. This leads to 13 mfccs for each song. I then get the entire dataset, and use sklearn's scaling function.
My goal is to compare svm, knearest, and naive bayes classifiers (using the sklearn toolset). I have done some testing already but I've noticed that results vary depending on whether I do random sampling/do stratified sampling.
I do the following function in sklearn to get training and testing sets:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0, stratify=y)
It has the parameters "random state" and "stratify". When "random state" is ommitted, it randomly samples from the entire dataset; when it is set to 0, the training and test sets are guaranteed to be the same each time.
My question is, how do I appropriately compare different classifiers. I assume I should make the same identical call to this function before training and testing each classifer. My suspicion is that I should be handing the exact same split to each classifier, so it should not be random sampling, and stratifying as well.
Or should I be stratifying (and random sampling)? | 0 | 1 | 894 |
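A sketch of the idea in the answer above: build one split and reuse it for every classifier so the comparison is like-for-like. X and y are assumed to be the scaled MFCC features and genre labels described in the question; drop stratify=y if purely random sampling is preferred.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# One shared split for all models being compared.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y)

for name, clf in [("SVM", SVC()),
                  ("kNN", KNeighborsClassifier()),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```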
0 | 43,977,884 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-15T10:58:00.000 | 0 | 2 | 0 | How to maximize the area under -log(x) curve? | 43,977,734 | 0 | python,r,python-2.7,equation,logarithm | That is not a programming question but a mathematics question, and if I understand the function in your question correctly, the answer is "wherever the graph hits the x-axis".
But I think that was not what you wanted. Maybe you want the rectangle between O(0,0) and P(x, y)?
Then you should simply use a CAS and A-level mathematics:
A = x*(-15.7log(x)+154.94) | I'm trying to get the x and y coordinates for which the area under the curve: y=-15.7log(x)+154.94 is maximum. I would like to compute this in R or Python. Can someone please help me to find it?
Background: I have data points of sales (y) vs prices (x). I tried fitting a log curve in R: lm(formula = y ~ log(x)) which gave me the above equation. I'm trying to increase the revenue which is the product of sales and prices. Hence the rectangular area under the curve should be maximized. | 0 | 1 | 161 |
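Setting the derivative of A(x) = x*(-15.7*log(x) + 154.94) to zero gives x = exp((154.94 - 15.7)/15.7), roughly 7.1e3, assuming the natural logarithm (which is what R's lm(y ~ log(x)) uses). A numerical sketch of the same maximization with scipy, with illustrative bounds:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_area(x):
    # Negate the rectangle area so a minimizer can be used to maximize it.
    return -(x * (-15.7 * np.log(x) + 154.94))

res = minimize_scalar(neg_area, bounds=(1e-6, 1e5), method="bounded")
best_x = res.x
best_area = -res.fun
print(best_x, best_area)
```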
0 | 46,858,137 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-05-15T11:50:00.000 | 1 | 1 | 0 | Edit a row value in datatable in spotfire | 43,978,775 | 0.197375 | ironpython,spotfire,rscript | This can be done.
1-> Create a function or packaged function which returns a ref-cursor.
1.1-> In it, update your value in the table based on a where clause.
2-> Once you have the function ready, create an information link on that object using a parameter of type single.
3-> Once you do that, import the information link into Spotfire using on-demand values.
4-> Create a document property and use that document property as the parameter for on demand.
5-> Keep the data table refresh manual (uncheck Auto Refresh).
6-> Provide the user a text box to enter new values.
7-> Provide a button and use dataTable.Refresh().
8-> It will pass your document property value to the database and your function will return the ref-cursor; in it you can return the sql%rowcount or a success or failure msg.
Can we do it using ironpython or R script?
I have a requirement where I want to edit the values in spotfire datatable to see the effect in the respective visuals. The data table is populated using an information link (from a SQL database). | 0 | 1 | 1,165 |
0 | 43,987,786 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-15T17:59:00.000 | 0 | 1 | 0 | Mattes Mutual Info basic doubts on 3D image registration | 43,985,976 | 0 | python,image-processing,optimization,itk,image-registration | Similarity metrics in ITK usually give the cost, so the optimizers try to minimize them. Mutual information is an exception to this rule (higher MI is better), so in order to fit into the existing framework it has negative values - bigger negative number is better than small negative number, while still following the logic that it should be minimized.
Modified time is used to check whether a certain filter should be updated or not.
Generally lower metric means better registration. But it is not comparable between different metrics, or even between different types of images using the same metric.
Random sampling will take 10-20% of samples in your RoI. I am not sure whether it picks randomly within RoI, or picks randomly within image and then checks whether it is in RoI. | 1. Mattes Mutual Info Doubts
In SimpleITK Mattes Mutual information is a similarity metric measure, is this a maximizing function or minimizing function?
I have tried a 3D registration(image size : 480*480*60) with
Metric Mattes Mutual Info metric and Gradient Descent Optimizer
Output
numofbins = 30
Optimizer stop condition: RegularStepGradientDescentOptimizerv4: Step too small after 24 iterations. Current step (7.62939e-06) is less than minimum step (1e-05).
Iteration: 25
Metric value: -0.871268982129
numofbins = 4096
Optimizer stop condition: RegularStepGradientDescentOptimizerv4: Step too small after 34 iterations. Current step (7.62939e-06) is less than minimum step (1e-05).
Iteration: 23
Metric value: -1.7890
If it is a minimization function then the lower one is better, which I suspect.
2. Transformation matrix final Output
TranslationTransform (0x44fbd20)
RTTI typeinfo: itk::TranslationTransform
Reference Count: 2
Modified Time: 5528423
What is Modified Time?
3. Final Metric is a registration accuracy measurement?
Is the metric a sign of registration accuracy? Does a higher metric value mean better registration? Or is it just the value at the optimum point after the optimization?
4. Random sampling for registration
10-20% of random sample points suffice for a registration. But the doubt arises whether the samples are taken from the main ROI or outside the ROI? Masking is an option, is there any other option in SimpleITK?
Thanks | 0 | 1 | 770 |
0 | 44,009,737 | 0 | 0 | 0 | 0 | 2 | true | 4 | 2017-05-16T18:43:00.000 | 8 | 2 | 0 | Python: How can I reshape 3D Images (np array) to 1D and then reshape them back correctly to 3D? | 44,009,244 | 1.2 | python,arrays,numpy,image-processing,tensorflow | If you are looking to create a 1D array, use .reshape(-1), which will create a linear version of your array. If you then use .reshape(32,32,3), this will create an array of 32, 32-by-3, arrays, which is the original format described. Using '-1' creates a linear array of the same size as the number of elements in the combined, nested array.
Is this the correct way to do it? How I can be sure that each datum will be back to the correct place? | 0 | 1 | 11,600 |
0 | 44,009,566 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2017-05-16T18:43:00.000 | 2 | 2 | 0 | Python: How can I reshape 3D Images (np array) to 1D and then reshape them back correctly to 3D? | 44,009,244 | 0.197375 | python,arrays,numpy,image-processing,tensorflow | If M is (32 x 32 x 3), then .reshape(1,-1) will produce a 2d array (not 1d), of shape (1, 32*32*3). That can be reshaped back to (32,32,3) with the same sort of reshape statement.
But that's just reshaping the input back and forth. You haven't told us what the output of your Net is like. What shape does it have? How are you trying to reshape the output, and what is wrong with it?
Is this the correct way to do it? How I can be sure that each datum will be back to the correct place? | 0 | 1 | 11,600 |
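A quick round-trip check of the flatten/unflatten step discussed in this record, using random data purely for illustration.

```python
import numpy as np

img = np.random.rand(32, 32, 3)          # original RGB image
flat = img.reshape(1, -1)                # shape (1, 3072), fed to the network
restored = flat.reshape(32, 32, 3)       # back to the original layout

print(np.array_equal(img, restored))     # True: every value is back in place
```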
0 | 44,054,062 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-17T02:58:00.000 | -1 | 2 | 0 | Using Docker for Image training in Python (New to this) | 44,014,764 | -0.099668 | windows,python-3.x,docker,tensorflow | If you're planning to use Python 3, I'd recommend docker run -it gcr.io/tensorflow/tensorflow:latest-devel-py3 (Numpy is installed for python3 in that container). Not sure why Python 3 is partially installed in the latest-devel package. | All of my steps have worked very well up to this point. I am on a windows machine currently. I am in the root directory after using the command:
docker run -it gcr.io/tensorflow/tensorflow:latest-devel
then followed by a cd /tensorflow, I am now in the directory and it is time to train the images so i jused:
/tensorflow# python tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=/tf_files/bottlenets \
--how_many_training_steps 500 \
--model_dir=/tf_files/retrained_graph.pb \
--output_labels=/tf_files/retrained_labels.txt \
--image_dir /tf_files/
And i get this error:
File "tensorflow/examples/image_retraining/retrain.py", line 77, in
import numpy as np
ImportError: No module named 'numpy'
I DO already have numpy installed in my python35 folder and it is up to date. Thanks a lot for any help, I am really stuck on this! | 0 | 1 | 237 |
0 | 44,020,731 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2017-05-17T06:38:00.000 | 0 | 1 | 0 | How could I use TensorFlow in jupyter notebook? I install TensorFlow via python 3.5 pip already | 44,017,326 | 1.2 | python-3.x,tensorflow,pip,installation,jupyter-notebook | There is a package called nb_conda that helps manage your anaconda kernels. However, when you launch Jupyter make sure that you have jupyter installed inside your conda environment and that you are launching Jupyter from that activated environment.
So:
Activate your conda environment that has Tensorflow installed. You can check by doing conda list. If Tensorflow is not installed within your environment then do so.
Install jupyter and nb_conda if you haven't already.
From your activated environment, run jupyter notebook.
You should now be running in the correct kernel. You should see a kernel named Python [conda env:namehere] in the top right. You may also have a choice of kernels thanks to nb_conda if installed.
See if that works for you. | I installed tensorflow via python3.5 pip, it is in the python3.5 lib folder and I can use it perfectly on shell IDLE.
I have anaconda (jupyter notebook) on my computer at the same time; however, I couldn't import tensorflow in the notebook.
I guess the notebook was using the anaconda lib folder, not the python3.5 libs. Is there any way to fix that instead of installing again in the anaconda folder?
thanks | 0 | 1 | 4,674 |
0 | 44,022,536 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-05-17T08:54:00.000 | 3 | 1 | 1 | Tensorflow and Pycharm | 44,020,050 | 1.2 | python,tensorflow,pycharm,cudnn | The solution is:
Run PyCharm from the console.
OR
add the environment variable to the IDE settings: LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH | I have an issues with tensorflow on pycharm.
Whenever I import tensorflow in the linux terminal, it works correctly. However, in PyCharm community 2017.1, it shows:
ImportError: libcudnn.so.5: cannot open shared object file: No such file or directory
Any hint on how to tackle the issue.
Please note that I am using python 3.5.2, tensorflow 1.1.0, Cuda 8 and CuDnn 5.1
EDIT: when printing sys.path, I get this in PyCharm:
['/home/xxx/pycharm-community-2017.1.2/helpers/pydev', '/home/xxx/pycharm-community-2017.1.2/helpers/pydev', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/usr/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/dist-packages', '/usr/lib/python3/dist-packages', '/usr/local/lib/python3.5/dist-packages/IPython/extensions', '/home/xxx/xxx/xxx']
and this in the terminal:
['', '/usr/local/bin', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/usr/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/dist-packages', '/usr/lib/python3/dist-packages', '/usr/local/lib/python3.5/dist-packages/IPython/extensions', '/home/xxx/.ipython'] | 0 | 1 | 921 |
0 | 44,079,737 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-17T10:06:00.000 | 0 | 2 | 0 | get pixel of image in tensorflow | 44,021,777 | 0 | python,tensorflow,neural-network,pixel,convolution | Actually, I'm trying to train a NN that get corrupted images and based on them the grand truth, remove noise from that images.It must be Network in Network, an another word pixels independent. | I am new by tensorflow. I want to write a Neural network, that gets noisy images from a file and uncorrupted images from another file.
then I want to correct noisy images based on the other images. | 0 | 1 | 594 |
0 | 44,037,339 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-05-17T10:23:00.000 | 4 | 1 | 0 | Unpickling Error while using Word2Vec.load() | 44,022,180 | 1.2 | python,gensim,word2vec | This would normally work, if the file was created by gensim's native .save().
Are you sure the file 'ammendment_vectors.model.bin' is complete and uncorrupted?
Was it created using the same Python/gensim versions as in use where you're trying to load() it?
Can you try re-creating the file? | I am trying to load a binary file using gensim.Word2Vec.load(fname) but I get the error:
File "file.py", line 24, in
model = gensim.models.Word2Vec.load('ammendment_vectors.model.bin')
File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/models/word2vec.py", line 1396, in load
model = super(Word2Vec, cls).load(*args, **kwargs)
File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/utils.py", line 271, in load
obj = unpickle(fname)
File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/utils.py", line 933, in unpickle
return _pickle.load(f, encoding='latin1')
_pickle.UnpicklingError: could not find MARK
I googled but I am unable to figure out why this error is coming up. Please let me know if any other information is required. | 0 | 1 | 5,562 |
0 | 44,035,007 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-05-17T19:43:00.000 | 4 | 3 | 0 | How to deal with exponent overflow of 64float precision in python? | 44,033,533 | 1.2 | python,numpy | You can use the function np.logaddexp() to do such operations. It computes logaddexp(x1, x2) == log(exp(x1) + exp(x2)) without explicitly computing the intermediate exp() values. This avoids the overflow. Since exp(0.0) == 1, you would compute np.logaddexp(0.0, 1000.0) and get the result of 1000.0, as expected. | I am a newbie in python sorry for the simple question.
In the following code, I want to calculate the exponent and then take the log.
Y=numpy.log(1+numpy.exp(1000))
The problem is that when I take the exponent of 710 or larger numbers, the numpy.exp() function returns 'inf'; even if I print it as float64 it still prints 'inf'.
any help regarding the problem will be appreciated. | 0 | 1 | 2,564 |
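A tiny illustration of the overflow-free computation the answer above refers to.

```python
import numpy as np

x = 1000.0
naive = np.log(1.0 + np.exp(x))          # overflows: exp(1000) -> inf
stable = np.logaddexp(0.0, x)            # log(exp(0) + exp(x)) computed safely

print(naive, stable)                     # inf  1000.0
```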
0 | 44,055,484 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-18T07:31:00.000 | 1 | 2 | 0 | sklearn: Get Distance from Point to Nearest Cluster | 44,041,347 | 0.099668 | python,machine-learning,scikit-learn,cluster-analysis,data-mining | To be closer to the intuition of DBSCAN you probably should only consider core points.
Put the core points into a nearest neighbor searcher. Then search for all noise points, use the cluster label of the nearest point. | I'm using clustering algorithms like DBSCAN.
It returns a 'cluster' labelled -1, which contains the points that are not part of any cluster. For these points I want to determine the distance from each of them to the nearest cluster, to get something like a metric for how abnormal the point is. Is this possible? Or are there any alternatives for this kind of metric? | 0 | 1 | 2,660 |
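A hedged sketch of the approach in the answer above: index the core points of a fitted scikit-learn DBSCAN model and score each noise point by the distance to its nearest core point. X (the data) and db (a fitted DBSCAN instance) are assumed to already exist.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

core_points = X[db.core_sample_indices_]          # core samples found by DBSCAN
noise_points = X[db.labels_ == -1]                # points assigned to "cluster" -1

nn = NearestNeighbors(n_neighbors=1).fit(core_points)
distances, indices = nn.kneighbors(noise_points)

# Distance to the nearest core point, and the cluster that core point belongs to.
abnormality = distances.ravel()
nearest_cluster = db.labels_[db.core_sample_indices_][indices.ravel()]
```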
0 | 44,423,577 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-18T10:06:00.000 | 0 | 1 | 0 | Filtering in tweepy - exact phrase | 44,044,773 | 0 | python,tweepy | the twitter api doesn't allow that. you'll have to check for each returned tweet whether or not it actually contains one of your exact phrases. | I can't get tweepy filtering to quite work how I want to.
stream.filter(track=['one two' , 'three four'])
I want to retweet based on a specific two word set
i.e. "one two"
but I'm getting retweets where the tweet has those two words, but not in order and separated
i.e. "three two one" or "one three two" etc.
I want tweets which contain my phrase but in order
i.e. "one two three" or "three one two" or "one two" etc. | 0 | 1 | 990 |
0 | 44,049,390 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-18T10:59:00.000 | 0 | 2 | 0 | Answering business questions with machine learning models (scikit or statsmodels) | 44,045,913 | 0 | python,machine-learning,statistics,regression,data-science | Why did customer service calls drop last month?
It depends on what type of data and which features you have available to analyze and explore. One of the basic things is to look at the correlation between features and the target variable, to check whether you can identify any feature that correlates with the drop in calls. So exploring different statistics might answer this question better than prediction models.
Also it is always a good practice to analyze and explore the data before you even start working on prediction models as its often necessary to improve the data (scaling, removing outliers, missing data etc) depending on the prediction model you chose.
Should we go with this promotion model or another one?
This question can be answered based on the regression or any other prediction models you designed for this data. These models would help you to predict the sales/outcome for the feature if you can provide the input features of the promotion models. | Thanks for your help on this.
This feels like a silly question, and I may be overcomplicating things. Some background information - I just recently learned some machine learning methodologies in Python (scikit and some statsmodels), such as linear regression, logistic regression, KNN, etc. I can work the steps of prepping the data in pandas data frames and transforming categorical data to 0's and 1's. I can also load those into a model (like, logistic regression in scikit learn). I know how to train and test it (using CV, etc.), and some fine tuning methods (gridscore, etc.). But this is all in the scope of predicting outcomes on new data. I mainly focused on learning on building a model to predict on new X values, and testing that model to confirm accuracy/precision.
However, now I'm having trouble identifying and executing the steps to the OTHER kinds of questions that say, a regression model, can answer, like:
Why did customer service calls drop last month?
Should we go with this promotion model or another one?
Assuming we have all our variables/predictor sets, how would we determine those two questions using any supervised machine learning model, or just a stat model in the statsmodels package.
Hope this makes sense. I can certainly go into more detail. | 0 | 1 | 200 |
0 | 44,048,354 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-18T12:17:00.000 | 0 | 3 | 0 | Trying to import keras but got an error | 44,047,544 | 0 | python | I had a similar problem, solved it by installing an older pandas version
pip install pandas==0.19.2 | Trying to import Keras 2.0.4 with Tensorflow 1.0.1 on Windows10 as backend, but I got the following message:
AttributeError: module 'pandas' has no attribute 'computation'
I've recently upgraded my pandas to version 0.20.1; is that the reason why I failed to import keras?
There is a lot more information available in the error message. If you want to know about it, just let me know | 0 | 1 | 1,406 |
0 | 44,051,350 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-18T14:48:00.000 | 0 | 1 | 0 | Gensim save_word2vec_format() vs. model.save() | 44,051,051 | 0 | python,nlp,gensim,word2vec | EDIT: this was intended as a comment. Don't know how to change it now, sorry
correlation between the word occurrence-frequency and vector-length I don't quite follow - aren't all your vectors the same length? Or are you not referring to the embedding vectors? | I am using gensim version 0.12.4 and have trained two separate word embeddings using the same text and same parameters. After training I am calculating the Pearsons correlation between the word occurrence-frequency and vector-length. One model I trained using save_word2vec_format(fname, binary=True) and then loaded using load_word2vec_format the other I trained using model.save(fname) and then loaded using Word2Vec.load(). I understand that the word2vec algorithm is non deterministic so the results will vary however the difference in the correlation between the two models is quite drastic. Which method should I be using in this instance? | 0 | 1 | 3,611 |
0 | 44,053,256 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-05-18T16:11:00.000 | 1 | 2 | 0 | Python: Converting string to floats, reading floats into 2D array, if/then, reordering of rows? | 44,052,893 | 0.099668 | python,arrays,string | First Part:
@njzk2 is exactly right. Simply removing the literal spaces to change from l.strip().split(' ') to l.strip().split() will correct the error, and you will see the following output for f_values:
[['-91.', '0.444253325'], ['-90.', '0.883581936'], ['-89.', '-0.0912338793']]
And the output for newarray shows float values rather than strings:
[[-91.0, 0.444253325], [-90.0, 0.883581936], [-89.0, -0.0912338793]]
Second Part:
For the second part of the question "if negative, add 512", a simple loop would be clear and simple, and I'm a big believer in clear, readable code.
For example the following is simple and straightforward:
for items in newarray:
    if items[0] < 0:
        items[0] += 512.00
When we print newarray after the loop, we see the following:
[[421.0, 0.444253325], [422.0, 0.883581936], [423.0, -0.0912338793]] | Let me start by saying that I know nothing about Python, but I am trying to learn(mostly through struggling it seems). I've looked around this site and tried to cobble together code to do what I need it to, but I keep running into problems. Firstly, I need to convert a file of 2 columns and 512 rows of strings to floats then put them in a 512x2 array. I check the first column (all rows) for negative values. If negative, add 512. Then I need to reorder the rows in numerical order and write/save the new array.
On to my first problem, converting to floats and putting the floats into an array. I have this code, which I made from others' questions:
with open("binfixtest.composite") as f:
f_values = map(lambda l: l.strip().split(' '), f)
print f_values
newarray = [map(float, v) for v in f_values]
Original format of file:
-91. 0.444253325
-90. 0.883581936
-89. -0.0912338793
New format of f_values:
['-91. 0.444253325'], ['-90. 0.883581936'], ['-89. -0.0912338793']
I'm getting the error:
Traceback (most recent call last):
File "./binfix.py", line 10, in <module>
newarray = [map(float, v) for v in f_values]
ValueError: invalid literal for float(): -91. 0.444253325
which I can't seem to fix. If I don't convert to float, when I try to add 512.0 to negative rows it gives me the error TypeError: cannot concatenate 'str' and 'float' objects
Any help is most definitely appreciated as I am completely clueless here. | 0 | 1 | 352 |
0 | 44,240,013 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-05-19T06:15:00.000 | 1 | 1 | 0 | ChatterBot Ubuntu Corpus Trainer | 44,062,679 | 1.2 | python,chatbot | Yes, we can. The data folder is ".\data", which is the path from where you are invoking ubuntu_corpus_training_example.py. Create a folder ubuntu_dialogs and unzip all the folders there; the trainer.py looks at the .\data\ubuntu_dialogs***.tsv files
I manually copy the ubuntu_dialogs.tgz to the 'data' folder.
Trainer fails with error file could not be opened successfully
https://github.com/gunthercox/ChatterBot/blob/master/examples/ubuntu_corpus_training_example.py
can i unzip all the data to ubuntu_dialogs and provide to trainer?
edit: Yes, we can. The data folder is ".\data", which is the path from where you are invoking ubuntu_corpus_training_example.py.
Create a folder ubuntu_dialogs and unzip all the folders there; the trainer.py looks at the .\data\ubuntu_dialogs***.tsv files | 0 | 1 | 941 |
0 | 44,166,939 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2017-05-19T16:39:00.000 | 0 | 1 | 0 | Finding closest value in a binary file | 44,075,041 | 0 | python | First attempt. It works, seemingly every time, but I don't know if it's the most efficient way:
Take first and last time stamps and number of frames to calculate an average time step.
Use average time step and difference between target and beginning timestamps to find approximate index.
Check for approximate and 2 surrounding timestamps against target.
If target falls between, then take index with minimum difference.
If not, set approximate index as new beginning or end, accordingly, and repeat. | I have a large binary file (~4 GB) containing a series of image and time stamp data. I want to find the image that most closely corresponds to a user-given time stamp. There are millions of time stamps in the file, though. In Python 2.7, using seek, read, struct.unpack, it took over 900 seconds just to read all the time stamps into an array. Is there an efficient algorithm for finding the closest value that doesn't require reading all of the values? They monotonically increase, though at very irregular intervals. | 0 | 1 | 40 |
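A hedged sketch of a binary search over the file, so only O(log n) timestamps are read instead of all of them. The fixed-size record layout (an 8-byte little-endian double timestamp followed by the image bytes) is a hypothetical assumption; RECORD_SIZE and the struct format would need to match the real file.

```python
import struct

RECORD_SIZE = 8 + 640 * 480          # assumed: timestamp + raw image bytes

def timestamp_at(f, index):
    f.seek(index * RECORD_SIZE)
    return struct.unpack("<d", f.read(8))[0]

def closest_record(path, target):
    with open(path, "rb") as f:
        f.seek(0, 2)                             # jump to end of file to get its size
        lo, hi = 0, f.tell() // RECORD_SIZE - 1
        while lo < hi:                           # classic binary search on sorted timestamps
            mid = (lo + hi) // 2
            if timestamp_at(f, mid) < target:
                lo = mid + 1
            else:
                hi = mid
        # lo is the first record >= target; compare it with its predecessor.
        candidates = [i for i in (lo - 1, lo) if i >= 0]
        return min(candidates, key=lambda i: abs(timestamp_at(f, i) - target))
```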
0 | 44,091,300 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-19T18:22:00.000 | 0 | 1 | 0 | How-To Generate a 3D Numpy Array On-Demand for an LSTM | 44,076,649 | 0 | python,numpy,keras,lstm,training-data | I found the answer to this on the Keras slack from user rocketknight. Use the model.fit_generator function. Define a generator function somewhere within your main python script that "yields" a batch of data. Then call this function in the arguments of the model.fit_generator function. | I am currently trying to use a "simple" LSTM network implemented through Keras for a summer project. Looking at the example code given, it appears the LSTM code wants a pre-generated 3D numpy array. As the dataset and the associated time interval I want to use are both rather large, it would be very prohibitive for me to load a "complete array" all at once. Is it possible to load the raw dataset and apply the sequencing transform to it as needed by the network (in this case construct the 3D array from x time-interval windows that then increment by 1 each time)? If so, how would you go about doing this?
Thanks for any help you can provide! | 0 | 1 | 160 |
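A minimal sketch of the generator approach the answer above describes; the window length, batch size and the data/targets arrays are illustrative assumptions, and the windows are sliced on the fly instead of materialising the full 3D array.

```python
import numpy as np

def window_generator(data, targets, window=50, batch_size=32):
    # data: 2D array (time, features); targets: 1D array aligned with data.
    while True:                                   # Keras expects an endless generator
        starts = np.random.randint(0, len(data) - window, size=batch_size)
        X = np.stack([data[s:s + window] for s in starts])   # (batch, window, features)
        y = np.stack([targets[s + window] for s in starts])
        yield X, y

# model.fit_generator(window_generator(data, targets), steps_per_epoch=200, epochs=10)
```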
0 | 44,106,903 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-21T16:17:00.000 | 0 | 1 | 0 | How to group sentences by edit distance? | 44,099,095 | 0 | python,machine-learning,nlp,cluster-analysis,edit-distance | There is only a limited set of POS tags.
Rather than using edit distance, compute a POS-POS similarity matrix just once. You may even want to edit this matrix as desired, e.g. to make two POS tags effectively the same, or to increase the difference between two tags.
Store that in a numpy array, convert all your vectors to indexes, and then compute similarities using that lookup table. For performance reasons, use numpy where possible, and write the performance critical code in cython because the Python interpreter is very slow. | I have a large set (36k sentence) of sentences (text list) and their POS tags (POS list), and I'd like to group/cluster the elements in the POS list using edit distance/Levenshtein:
(e.g Sentx POS tags= [CC DT VBZ RB JJ], Senty POS tags= [CC DT VBZ RB JJ] ) are in cluster edit distance =0,
while ([CC DT VBZ RB JJ], [CC DT VB RB JJ]) are in cluster edit distance =1.
I understand how the clustering algorithms work but I am confused how to approach such a problem in python and how to store the clusters in data structures so that I can retrieve them easily.
I tried to create a matrix (measuring the distance of each sentence with all the sentences in the corpus) but it takes very long to be processed. | 0 | 1 | 520 |
0 | 45,963,156 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-05-22T12:40:00.000 | 2 | 1 | 0 | Does it make sense to talk about skip-gram and cbow when using The Glove method? | 44,113,128 | 1.2 | python-3.x,word2vec,word-embedding | Not really, skip-gram and CBOW are simply the names of the two Word2vec models. They are shallow neural networks that generate word embeddings by predicting a context from a word and vice versa, and then treating the output of the hidden layer as the vector/representation. GloVe uses a different approach, making use of the global statistics of a corpus by training on a co-occurrence matrix, rather than local context windows. | I'm trying different word embedding methods in order to pick the approach that works best for me. I tried word2vec and FastText. Now, I would like to try GloVe. In both word2vec and FastText, there are two versions: Skip-gram (predict context from word) and CBOW (predict word from context). But in the Glove python package, there is no parameter that enables you to choose whether you want to use skip-gram or CBOW.
Given that Glove does not work the same way as w2v, I'm wondering: Does it make sense to talk about skip-gram and cbow when using The Glove method ?
Thanks in Advance | 0 | 1 | 341 |
0 | 54,380,982 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-23T01:40:00.000 | 0 | 2 | 0 | How to encode categorical with many levels on scikit-learn? | 44,124,471 | 0 | python,machine-learning,scikit-learn | Another solution is that you can do a bivariate analysis of the categorical variable against the target variable. What you will get is a picture of how each level affects the target. Once you have this, you can combine those levels that have a similar effect on the target. This will help you reduce the number of levels, and each remaining level will have a significant impact. | guys.
I have a large data set (60k samples with 50 features). One of these features (which is really relevant for me) is job names. There are many job names that I'd like to encode to fit in some models, like linear regression or SVCs. However, I don't know how to handle them.
I tried to use pandas dummy variables and scikit-learn one-hot encoding, but it generates many features that I may not encounter in the test set. I tried to use the scikit-learn LabelEncoder(), but I also got some errors when I was encoding the variables (a float() > str() error, for example).
What would you guys recommend for handling these categorical features with many levels? Thank you all. | 0 | 1 | 1,235 |
0 | 44,134,880 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-05-23T11:17:00.000 | 0 | 1 | 0 | Strip certain content of columns in multiple columns | 44,133,280 | 1.2 | python,string,pandas,split | Ok just solved the question:
with df.shape I found out what the dimensions are and then started a for loop:
for i in range(1, x):
    df[df.columns[i]] = df[df.columns[i]].str.split('/').str[-1]
If you have any more efficient ways let me know :) | I am currently in the phase of data preparation and have a certain issue I would like to make easy.
The content of my columns: 10 MW / color. All the columns which have this content are named with line nr. [int] or a [str]
What I want to display and which is the data of interest is the color. What I did was following:
df['columnname'] = df['columnname'].str.split('/').str[-1]
The problem which occurs is that this operation should be done on all columns which names start with the string "line". How could I do this?
I thought about doing it with a for-loop or a while-loop but I am quite unsure how to do the referencing in the data frame then since I am a nooby in python for now.
Thanks for your help! | 0 | 1 | 50 |
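A variant of the loop above that touches only the columns whose names start with "line", as the question asks; the prefix check is an assumption based on the description ("named with line nr."), and df is the dataframe from the question.

```python
# Select the columns by name prefix, then apply the same split to each of them.
line_cols = [c for c in df.columns if str(c).startswith("line")]
for c in line_cols:
    df[c] = df[c].str.split('/').str[-1]
```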
0 | 44,140,770 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-23T16:46:00.000 | 1 | 1 | 0 | Python pandas Several DataFrames Best Practice | 44,140,675 | 0.197375 | python,pandas | I think the simplest and most efficient path would be to have two tables. The reason is that with the 1 big table your algorithm can take O(n^2), since you have to iterate n times for each element in your markers and then match each element n times against each performance record.
If you did the 2 table approach your complexity goes to O(n * m), where n is the number of technical markers and m is the number of records in performance. In this use case I'd imagine your n to be based on whichever set you want to look at and not the whole set, so that means n < m and therefore you can simply apply a short circuit to make the algorithm much more efficient.
Alternatively if you were able to build a master look up table to capture all the relationships between a performance and a technical marker then your complexity is essentially a hash look up or O(1). | I have a DataFrame with about 6 million rows of daily data that I will use to find how certain technical markers affected their respective stocks’ long term performance. I have 2 approaches, which one is recommended?
Make 2 different tables, one of raw data and one (a filtered copy) containing the technical markers, then do “lookups” on the master table to get the subsequent performance.
Use 1 big table, containing both the markers and the performance data.
I’m not sure what is more computationally expensive – calculating the technical markers for all the rows, even the unneeded ones, or doing the lookups against the master table. Thanks. | 0 | 1 | 320 |
0 | 44,144,746 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-23T17:08:00.000 | 0 | 1 | 0 | How to extract cluster id from Dirichlet process in PyMC3 for grouped data? | 44,141,059 | 0 | python,process,cluster-computing,pymc3,dirichlet | If I understand you correctly, you're trying to extract which category (1 through k) a data point belongs to. However, a Dirichlet random variable only produces a probability vector. This should be used as a prior for a Categorical RV, and when that is sampled from, it will result in a numbered category. | I am using PyMC3 to cluster my grouped data. Basically, I have g vectors and would like to cluster the g vectors into m clusters. However, I have two problems.
The first one is that it seems PyMC3 can only deal with one-dimensional data, not vectors. The second problem is that I do not know how to extract the cluster id for the raw data. I do extract the number of components (k) and the corresponding weights, but I cannot extract the id indicating which cluster each point belongs to.
Any ideas or comments are welcome! | 0 | 1 | 95
0 | 44,207,434 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-05-23T20:28:00.000 | 1 | 1 | 0 | PySpark dataframe pipeline throws No plan for MetastoreRelation Error | 44,144,421 | 1.2 | python,apache-spark,machine-learning,pyspark,spark-dataframe | This error was due to the order of joining the 2 pyspark dataframes.
I tried changing the order of the join from, say, a.join(b) to b.join(a), and it's working now.
java.lang.AssertionError: assertion failed: No plan for
MetastoreRelation.
What is the meaning of this and how to solve this.
My code has become quite large, so I will explain the steps. 1. I have 8000 columns and 68k rows in my Spark dataframe. Out of the 8k columns, 500 are categorical, to which I applied pyspark.ml one-hot encoding as a stage in the ml.Pipeline: encoders2 = [OneHotEncoder(inputCol=c, outputCol="{0}_enc".format(c)) for c in cat_numeric[i:i+2]]
but this is very slow, and even after 3 hours it was not complete. I am using 40 GB of memory on each of 12 nodes!
2. So I am reading 100 columns from the pyspark dataframe, creating a pandas dataframe from that and doing one-hot encoding. Then I transform the pandas dataframe back into a pyspark dataframe and merge it with the original dataframe.
3. Then I try to apply a pipeline with StringIndexer and OHE stages for the categorical string features (there are just 5 of them), and then an assembler to create 'features' and 'labels'. But at this stage I get the above error.
4. Please let me know if my approach is wrong or if I am missing anything. Also let me know if you want more information. Thanks | 0 | 1 | 1,261 |
0 | 44,147,287 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2017-05-23T22:37:00.000 | 2 | 1 | 0 | '_remove_dead_weakref' error when updating scikit-learn on Win10 machine | 44,146,146 | 0.379949 | python,scikit-learn | After spending a couple of hours to no avail, I deleted the Anaconda Python folder and reinstalled. I have the latest bits now and the problem is solved :) | I'm new to open source so appreciate any/all help.
I've got notebook server 4.2.3 running on: Python 3.5.2 |Anaconda 4.2.0 (64-bit) on my Windows10 machine.
When trying to update scikit-learn from 0.17 to 0.18, I get below error which I believe indicates one of the dependency files is outdated. I can't understand how or why since I just (<1 month ago) installed Python through anaconda. Note I get the same error when I try
conda update scikit-learn
conda install scikit-learn=0.18
pip install -U scikit-learn
ImportError: cannot import name '_remove_dead_weakref'
How do I fix it? Should I try to uninstall and re-install? If so what's the safest (meaning will cleanly remove all bits) way to do this?
Thanks in advance.
I'm trying to update to 0.18. I'm running | 0 | 1 | 3,110 |
0 | 44,366,690 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2017-05-24T13:47:00.000 | 0 | 2 | 0 | Using dummy variables for Machine Learning with more than one categorical variable | 44,160,324 | 1.2 | python,machine-learning,dummy-variable | In a case where there is more than one categorical variable that needs to be replaced with dummies, the approach should be to encode each of the variables into dummies (as in the single-variable case) and then remove one dummy column per variable in order to avoid collinearity.
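With pandas, for example, this can be done in one call (a sketch with made-up columns; drop_first removes one dummy level per variable):
import pandas as pd
df = pd.DataFrame({'color': ['red', 'blue', 'red'], 'size': ['S', 'M', 'L']})
X = pd.get_dummies(df, columns=['color', 'size'], drop_first=True)  # one level dropped per variable to avoid collinearity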
Basically, each categorical variable should be treated the same way a single one would be. | I am looking to do either a multivariate Linear Regression or a Logistic Regression using Python on some data that has a large number of categorical variables. I understand that with one categorical variable I would need to translate this into a dummy and then remove one type of dummy so as to avoid collinearity; however, is anyone familiar with what the approach should be when dealing with more than one type of categorical variable?
Do I do the same thing for each? E.g. translate each categorical variable into dummy variables and then, for each, remove one dummy variable so as to avoid collinearity? | 0 | 1 | 1,175
0 | 44,163,726 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-05-24T13:47:00.000 | 1 | 2 | 0 | Using dummy variables for Machine Learning with more than one categorical variable | 44,160,324 | 0.099668 | python,machine-learning,dummy-variable | If there are many categorical variables, and those variables also have many levels, using dummy variables might not be a good option.
If the categorical variable has data in the form of bins, e.g. an age variable with values like 10-18, 18-30, 31-50, ..., you can either use label encoding, create a new numerical feature using the mean/median of each bin, or create two features for the lower and upper age.
If you have timestamps from the start of a task to its end, e.g. from the time a machine was started to the time it was stopped, you can create a new feature by calculating the duration in hours or minutes.
Given many categorical variables but with a small number of levels, the obvious and only way out in such cases is to apply one-hot encoding to the categorical variables.
But when a categorical variable has many levels, there may be certain levels which are too rare or too frequent. Applying one-hot encoding to such data would hurt the model's performance badly. In such cases, it is recommended to apply some business logic/feature engineering to reduce the number of levels first. Afterwards you can use one-hot encoding on the new feature if it is still categorical. | I am looking to do either a multivariate Linear Regression or a Logistic Regression using Python on some data that has a large number of categorical variables. I understand that with one categorical variable I would need to translate this into a dummy and then remove one type of dummy so as to avoid collinearity; however, is anyone familiar with what the approach should be when dealing with more than one type of categorical variable?
Do I do the same thing for each? E.g. translate each categorical variable into dummy variables and then, for each, remove one dummy variable so as to avoid collinearity? | 0 | 1 | 1,175
0 | 44,213,304 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-05-25T00:43:00.000 | 5 | 4 | 0 | Warning from keras: "Update your Conv2D call to the Keras 2 API" | 44,170,581 | 0.244919 | python,keras | As it says, it's not an issue. It still works fine, although they might change it any day and then the code will stop working.
In Keras 2, Convolution2D has been replaced by Conv2D, along with some changes in the parameters.
Convolution* layers are renamed Conv*.
Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3)) | I am trying to use keras to create a CNN, but I keep getting this warning which I do not understand how to fix.
Update your Conv2D call to the Keras 2 API: Conv2D(64, (3, 3), activation="relu") after removing the cwd from sys.path.
Can anyone give any ideas about fixing this? | 0 | 1 | 6,588 |
0 | 44,424,599 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-05-25T08:20:00.000 | 0 | 2 | 0 | attributeError:'module' object has no attribute 'MXIndexedRecordIO' | 44,175,700 | 0 | python-2.7,cpu,mxnet | @user3824903 I think to create a bin directory, you have to compile MXNet from source with option USE_OPENCV=1 | I have used im2rec.py to convert "caltech101 images" into record io format:
I have created "caltech.lst" succesfully using
os.system('python %s/tools/im2rec.py --list=1 --recursive=1 --shuffle=1 data/caltech data/101_ObjectCategories'%MXNET_HOME)
Then, when I run this :
os.system("python %s/tools/im2rec.py --train-ratio=0.8 --test-ratio=0.2 --num-thread=4 --pass-through=1 data/caltech data/101_ObjectCategories"%MXNET_HOME)
I get this error: AttributeError: 'module' object has no attribute 'MXIndexedRecordIO'
Does someone have an idea how to fix this error?
Thanks in advance.
Environment info
Operating System:Windows 8.1
MXNet version:0.9.5 | 0 | 1 | 375 |
0 | 44,183,963 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-25T15:11:00.000 | 0 | 2 | 0 | Normalize IDs column | 44,183,927 | 0 | python,pandas,numpy,ipython,jupyter-notebook | I would go through and find the item with the smallest id in the list, set it to 1, then find the next smallest, set it to 2, and so on.
edit: you are right. That would take way too long. I would just go through and set one of them to 1, the next one to 2, and so on. It doesn't matter what order the ids are in (I am guessing). When a new item is added just set it to 9067, and so on. | I'm making a recommender system, and I'd like to have a matrix of ratings (User/Item). My problem is there are only 9066 unique items in the dataset, but their IDs range from 1 to 165201. So I need a way to map the IDs to be in the range of 1 to 9066, instead of 1 to 165201. How do I do that? | 0 | 1 | 210 |
0 | 44,204,330 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-26T14:27:00.000 | 0 | 2 | 0 | SSAS connection from Python | 44,204,086 | 0 | python,ssas,olap | It seems Python does not support including .NET DLLs, but IronPython does. We had an MS BI automation project before that used IronPython to connect to SSAS; it was a nice experience.
www.mdx-helper.com | Does anyone know of a Python package to connect to SSAS multidimensional and/or SSAS tabular that supports MDX and/or DAX queries. I know of olap.xmla but that requires an HTTP connection. I am looking for a Python equivalent of olapR in R. Thanks | 0 | 1 | 6,435 |
0 | 44,212,992 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-27T01:30:00.000 | 1 | 2 | 0 | How to save large Python numpy datasets? | 44,212,063 | 0.099668 | python,opencv,numpy,keras | As with anything regarding performance or efficiency, test it yourself. The problem with recommendations for the "best" of anything is that they might change from year to year.
First, you should determine if this is even an issue you should be tackling. If you're not experiencing performance issues or storage issues, then don't bother optimizing until it becomes a problem. Whatever you do, don't waste your time on premature optimizations.
Next, assuming it actually is an issue, try out every method for saving to see which one yields the smallest results in the shortest amount of time. Maybe compression is the answer, but that might slow things down? Maybe pickling objects would be faster? Who knows until you've tried.
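For instance, one candidate worth timing is bundling each sample (frame, speed, angle) into one compressed archive per sample; a sketch with made-up shapes and file names:
import numpy as np
frame = np.zeros((120, 160, 3), dtype=np.uint8)  # stand-in for an OpenCV frame
np.savez_compressed('sample_000.npz', image=frame, speed=np.float32(0.4), angle=np.float32(-0.1))
sample = np.load('sample_000.npz')               # access via sample['image'], sample['speed'], sample['angle']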
Finally, weigh the trade-offs and decide which method you can compromise on; you'll almost never have one silver-bullet solution. While you're at it, determine if just throwing more CPU, RAM or disk space at the problem would solve it. Cloud computing affords you a lot of headroom in those areas. | I'm attempting to create an autonomous RC car and my Python program is supposed to query the live stream on a given interval and add it to a training dataset.
I found out that numpy.save() just saves one array to a file. What is the best/most efficient way of saving data for my needs? | 0 | 1 | 744 |
0 | 44,216,018 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2017-05-27T10:04:00.000 | 1 | 7 | 0 | Random number generator that returns only one number each time | 44,215,505 | 0.028564 | python,python-3.x,random,generator | For a large number of non-repeating random numbers, use encryption. With a given key, encrypt the numbers 0, 1, 2, 3, ... Since encryption is uniquely reversible, each encrypted number is guaranteed to be unique, provided you use the same key. For 64-bit numbers use DES. For 128-bit numbers use AES. For other sizes use some format-preserving encryption. For plain number ranges you might find the Hasty Pudding cipher useful, as it allows a large range of different bit sizes and non-bit sizes as well, like [0..5999999].
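A minimal sketch of the encrypt-a-counter idea in Python (this uses a small hash-based Feistel permutation as an illustrative stand-in for the ciphers named above, not one of them):
import hashlib

def permute(counter, key, rounds=4, half_bits=10):
    # Bijectively maps [0, 2**(2*half_bits)) onto itself for a fixed key.
    mask = (1 << half_bits) - 1
    left, right = counter >> half_bits, counter & mask
    for r in range(rounds):
        digest = hashlib.sha256('{}:{}:{}'.format(key, r, right).encode()).digest()
        left, right = right, left ^ (int.from_bytes(digest[:2], 'big') & mask)
    return (left << half_bits) | right

# For the range [1, 1000000]: iterate counters 0..2**20-1, discard outputs >= 1000000, add 1 to the rest.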
Keep track of the key and the last number you encrypted. When you need a new unique random number just encrypt the next number you haven't used so far. | Does Python have a random number generator that returns only one random integer number each time when next() function is called? Numbers should not repeat and the generator should return random integers in the interval [1, 1 000 000] that are unique.
I need to generate more than million different numbers and that sounds as if it is very memory consuming in case all the number are generated at same time and stored in a list. | 0 | 1 | 5,066 |
0 | 45,108,482 | 0 | 0 | 0 | 0 | 1 | true | 10 | 2017-05-28T11:43:00.000 | 22 | 3 | 0 | Difference between tf.nn_conv2d and tf.nn.depthwise_conv2d | 44,226,932 | 1.2 | python,tensorflow,deep-learning,conv-neural-network | I am no expert on this, but as far as I understand the difference is this:
Let's say you have an input colour image with length 100, width 100. So the dimensions are 100x100x3. For both examples we use the same filter of width and height 5. Let's say we want the next layer to have a depth of 8.
In tf.nn.conv2d you define the kernel shape as [width, height, in_channels, out_channels]. In our case this means the kernel has shape [5,5,3,out_channels].
The weight-kernel that is strided over the image has a shape of 5x5x3, and it is strided over the whole image 8 times to produce 8 different feature maps.
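As a quick shape check for this example (a TF1-style sketch; it also previews the depthwise case discussed next):
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 100, 100, 3])
w = tf.Variable(tf.truncated_normal([5, 5, 3, 8], stddev=0.1))
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')                   # shape (?, 100, 100, 8)
w_dw = tf.Variable(tf.truncated_normal([5, 5, 3, 2], stddev=0.1))              # channel_multiplier = 2
y_dw = tf.nn.depthwise_conv2d(x, w_dw, strides=[1, 1, 1, 1], padding='SAME')   # shape (?, 100, 100, 6)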
In tf.nn.depthwise_conv2d you define the kernel shape as [width, height, in_channels, channel_multiplier]. Now the output is produced differently. Separate 5x5x1 filters are strided over each input channel, one filter per channel, each producing one feature map per channel. So here, a kernel size [5,5,3,1] would produce an output with depth 3. The channel_multiplier tells you how many different filters you want to apply per input channel. So the originally desired output depth of 8 is not possible with 3 input channels. Only multiples of 3 are possible. | What is the difference between tf.nn_conv2d and tf.nn.depthwise_conv2d in Tensorflow? | 0 | 1 | 8,789
0 | 47,000,213 | 0 | 1 | 0 | 0 | 1 | false | 34 | 2017-05-28T13:18:00.000 | 8 | 5 | 0 | removing newlines from messy strings in pandas dataframe cells? | 44,227,748 | 1 | python,string,pandas,split | In messy data it might be a good idea to remove all whitespace: df.replace(r'\s', '', regex=True, inplace=True). | I've used multiple ways of splitting and stripping the strings in my pandas dataframe to remove all the '\n' characters, but for some reason it simply doesn't want to delete the characters that are attached to other words, even though I split them. I have a pandas dataframe with a column that captures text from web pages using Beautifulsoup. The text has been cleaned a bit already by beautifulsoup, but it failed in removing the newlines attached to other characters. My strings look a bit like this:
"hands-on\ndevelopment of games. We will study a variety of software technologies\nrelevant to games including programming languages, scripting\nlanguages, operating systems, file systems, networks, simulation\nengines, and multi-media design systems. We will also study some of\nthe underlying scientific concepts from computer science and related\nfields including"
Is there an easy python way to remove these "\n" characters?
Thanks in advance! | 0 | 1 | 88,228 |
0 | 44,228,127 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-05-28T13:37:00.000 | 4 | 1 | 0 | Is it correct to compare score of different estimators? | 44,227,908 | 1.2 | python,scikit-learn,regression | If you have a similar pipeline to feed the same data into the models, then the metrics are comparable. You can choose the SVR Model without any doubt.
By the way, it could be really interesting for you to "redevelop" this R-squared metric yourself; it could be a nice way to learn the underlying mechanics. | I am getting different score values from different estimators in scikit-learn.
SVR(kernel='rbf', C=1e5, gamma=0.1) 0.97368549023058548
Linear regression 0.80539997869990632
DecisionTreeRegressor(max_depth = 5) 0.83165426563946387
Since all regression estimators should use R-square score, I think they are comparable, i.e. the closer the score is to 1, the better the model is trained. However, each model implements score function separately so I am not sure. Please clarify. | 0 | 1 | 94 |
0 | 46,943,022 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-28T14:58:00.000 | 0 | 1 | 0 | Impute Missing Values Using K-Nearest Neighbors | 44,228,698 | 0 | python-3.x | I had seen this exact same error message, and it was because Python was confused by some other file names in the same folder that it was loading instead of the library files. Try cleaning the folder, renaming your files, etc. | I'm trying to impute missing values with KNN in Python, so I downloaded a package named fancyimpute that contains the method KNN. Then, when I want to import it, I get this error: ImportError: cannot import name 'KNN'
please help me | 0 | 1 | 427 |
0 | 51,102,921 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-28T23:46:00.000 | 2 | 1 | 0 | Can I train a model in steps in Keras? | 44,233,042 | 0.379949 | python,memory-management,tensorflow,keras,theano | You can do this thing, but it will cause your training time to approach sizes that will only make the results useful for future generations.
Let's consider what all we have in our memory when we train with a batch size of 1 (assuming you've only read in that one sample into memory):
1) that sample
2) the weights of your model
3) the activations of each layer  # your model stores these for backpropagation
All of this is necessary for training. However, you could, theoretically, do a forward pass on the first half of the model, dump the weights and activations to disk, load the second half of the model, do a forward pass on that, then the backward pass on that, dump those weights and activations to disk, load back the weights and activations of the first half, then complete the backward pass on that. This process could be split up even more, to the point of doing one layer at a time.
OTOH, this is akin to what swap space does, without you having to think about it. If you want a slightly less optimized version of this (which, optimization is clearly moot at this point), you can just increase your swap space to 500GB and call it a day. | I've got a model in Keras that I need to train, but this model invariably blows up my little 8GB memory and freezes my computer.
I've come to the limit of training just one single sample (batch size = 1) and still it blows up.
Please assume my model has no mistakes or bugs and this question is not about "what is wrong with my model". (Yes, smaller models work ok with the same data, but aren't good enough for the task).
How can I split my model in two and train each part separately, but propagating the gradients between them?
Is there a possibility? (There is no limitation about using theano or tensorflow)
Using CPU only, no GPU. | 0 | 1 | 957 |
0 | 44,246,824 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-29T12:12:00.000 | 0 | 4 | 0 | detect card symbol using opencv python | 44,242,207 | 0 | python,opencv | As the card symbol is at a fixed position, you may try the steps below (e.g. in OpenCV 3.2 with Python):
Crop the symbol at top left corner, image = image[h1:h2,w1:w2]
Threshold so that the dark symbol pixels become white in the mask and the rest black: mask = cv2.inRange(image, (0, 0, 0), (100, 100, 100))
Perform contour detection: _, contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Get the area size of the contour. area = cv2.contourArea(contour)
Compare the area to determine which one of the 4 symbols it belongs to.
You have to build the area thresholds of each symbol beforehand, so the comparison step has something to compare against. All the cv2 functions above are just for reference.
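Pieced together, a rough runnable sketch of those steps (the corner coordinates and file name are made up; adjust the findContours unpacking if you are not on OpenCV 3.x):
import cv2
img = cv2.imread('card.png')                                   # hypothetical input frame
corner = img[5:60, 5:40]                                       # made-up crop of the top-left symbol
mask = cv2.inRange(corner, (0, 0, 0), (100, 100, 100))
_, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
area = max((cv2.contourArea(c) for c in contours), default=0)  # compare this against per-symbol thresholds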
Hope this helps. | I'm trying to detect the difference between a spade, club, diamond and heart. The number on the card is irrelevant, just the suit matters.
I've tried color detection by looking at just the red or black colors, but that still leaves me with two results per color. How could I make sure I can detect each symbol individually?
For instance: I have a picture of a red heart, a red diamond, a black spade and a black club. I want to draw the contours of each symbol in a different color.
I'm using my webcam as a camera. | 0 | 1 | 4,838 |
0 | 44,807,718 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-29T12:40:00.000 | -1 | 1 | 1 | cv2.VideoCapture doesn't work within docker container | 44,242,760 | -0.197375 | python,opencv,docker,video-capture | There might be 2 problems:
1) OpenCV is not installed properly in your container. To check this, do print(ret, frame). If you get (False, None), then OpenCV is not installed properly.
2) The file you are using is corrupted. To check this, copy any image file (jpg) into the container, use cv2.imread to read it, and print the result. If a numpy array comes back, then your file is not being corrupted while copying.
A good option is to pull any opencv + python image and create a container from it. An even better option is to use the Dockerfiles of those images to build the container yourself.
import cv2
vid = cv2.VideoCapture('path\to\video')
ret, frame = vid.read()
In terms of the video file,
I have tried
either mount the file with docker -v
or docker cp to copy the video file into container,
but both with no luck (ret returns False).
Should I add any command when launching the container?
Thanks in advance. | 0 | 1 | 1,491 |
0 | 44,309,273 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2017-05-29T12:42:00.000 | 0 | 1 | 0 | Python: Finding period of a two column data | 44,242,810 | 0 | python,fft,dft | For calculating periods I would just find the peak of the Fourier-transformed data; to do that in Python, look into numpy.fft or scipy.fftpack. It could be computationally intensive though. | This question seems so trivial but I didn't find any suitable answer so I am asking!
Let's say I have two-column data (say, {x, sin(x)})
X Y(X)
0.0 0.0
0.1 0.099
0.2 0.1986
How do I find the period of the function Y(X);
I have some experience in Mathematica where(roughly)
I just interpolate the data as a function say y(x), then
Calculate y'(x) and set y'(x_p)=0;
collect all (x_p+1 - x_p)'s and take average to get the period.
In python, however I am stuck after step 1 as I can find out x_p for a particular guess value but not all the x_p's. Also this procedure doesn't seem very elegant to me. Is there a better way to do things in python? | 0 | 1 | 1,459 |
0 | 44,250,976 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2017-05-29T14:17:00.000 | 1 | 3 | 0 | Imbalanced data: undersampling or oversampling? | 44,244,711 | 1.2 | python,machine-learning,classification,random-forest,supervised-learning | oversampling or
undersampling, or
oversampling the minority and undersampling the majority,
is a hyperparameter. Do cross-validation to see which one works best.
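For instance, random over-/under-sampling of the training fold can be done with sklearn's resample utility (a sketch; df_major and df_minor are hypothetical subsets of the training data split by class):
from sklearn.utils import resample
df_minor_up = resample(df_minor, replace=True, n_samples=len(df_major), random_state=0)     # oversample minority
df_major_down = resample(df_major, replace=False, n_samples=len(df_minor), random_state=0)  # undersample majority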
But use a Training/Test/Validation set. | I have binary classification problem where one class represented 99.1% of all observations (210 000). As a strategy to deal with the imbalanced data, I choose sampling techniques. But I don't know what to do: undersampling my majority class or oversampling the less represented class.
Does anybody have any advice?
Thank you.
P.s. I use random forest algorithm from sklearn. | 0 | 1 | 4,671 |
0 | 51,599,898 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-05-29T14:17:00.000 | 0 | 3 | 0 | Imbalanced data: undersampling or oversampling? | 44,244,711 | 0 | python,machine-learning,classification,random-forest,supervised-learning | Undersampling:
Undersampling is typically performed when we have billions (lots) of data points and we don’t have sufficient compute or memory(RAM) resources to process the data. Undersampling may lead to worse performance as compared to training the data on full data or on oversampled data in some cases. In other cases, we may not have a significant loss in performance due to undersampling.
Undersampling is mainly performed to make the training of models more manageable and feasible when working within a limited compute, memory and/or storage constraints.
Oversampling:
oversampling tends to work well as there is no loss of information in oversampling unlike undersampling. | I have binary classification problem where one class represented 99.1% of all observations (210 000). As a strategy to deal with the imbalanced data, I choose sampling techniques. But I don't know what to do: undersampling my majority class or oversampling the less represented class.
Does anybody have any advice?
Thank you.
P.s. I use random forest algorithm from sklearn. | 0 | 1 | 4,671 |
0 | 44,291,881 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-30T05:06:00.000 | 1 | 1 | 0 | How to train doc2vec on AWS cluster using spark | 44,253,840 | 0.197375 | python-2.7,amazon-s3,aws-lambda,doc2vec | Gensim's Doc2Vec is not designed to distribute training over multiple-machines. It'd be a significant and complex project to adapt its initial bulk training to do that.
Are you sure your dataset and goals require such distribution? You can get a lot done on a single machine with many cores & 128GB+ RAM.
Note that you can also train a Doc2Vec model on a smaller representative dataset, then use its .infer_vector() method on the frozen model to calculate doc-vectors for any number of additional texts. Those frozen models can be spun up on multiple machines – allowing arbitrarily-distributed calculation of doc-vectors. (That would be far easier than distributing initial training.) | I'm using python Gensim to train doc2vec. Is there any possibility to allow this code to be distributed on AWS (s3).
Thank you in advance | 0 | 1 | 632 |
0 | 44,584,830 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-05-30T11:07:00.000 | 0 | 1 | 0 | CNN in keras using theano as backend | 44,260,553 | 0 | python-2.7 | Create a folder dataset and then create two sub-folders, train and test. Then, inside train, if you wish, create sub-folders named after the image labels (e.g. fish holds all fish images, lion holds all lion images, etc.), and populate test with some images. Finally, train the model pointing to dataset/train. | I am very new to Keras, currently working with a CNN in Keras using Theano as the backend. I would like to train my network with my own images (around 25000 images), which are all in the same folder, and then test it. How could I do that? (Please help me, I am not familiar with deep learning.) | 0 | 1 | 70
0 | 44,267,916 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-05-30T15:47:00.000 | 0 | 2 | 0 | Machine Learning - test set with fewer features than the train set | 44,266,677 | 0 | python,machine-learning | The train set determines what features you can use for recognition. If you're lucky, your recognizer will just ignore unknown features (I believe NaiveBayes does), otherwise you'll get an error. So save the set of feature names you created during training, and use them during testing/recognition.
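With pandas one-hot features, aligning the test frame to the training columns does exactly that; a sketch, where train_df and test_df are hypothetical frames with the columns from the example below:
import pandas as pd
train_X = pd.get_dummies(train_df[['Animal', 'Habitat']])
test_X = pd.get_dummies(test_df[['Animal', 'Habitat']])
test_X = test_X.reindex(columns=train_X.columns, fill_value=0)  # unseen categories become all-zero columns, extra ones are dropped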
Some recognizers will treat a missing binary feature the same as a zero value. I believe this is what the NLTK's NaiveBayesClassifier does, but other engines might have different semantics. So for binary present/absent features, I would write my feature extraction function to always put the same keys in the feature dictionary. | guys.
I was developing an ML model and I got a doubt. Let's assume that my train data has the following data:
ID | Animal | Age | Habitat
0 | Fish | 2 | Sea
1 | Hawk | 1 | Mountain
2 | Fish | 3 | Sea
3 | Snake | 4 | Forest
If I apply One-hot Encoding, it will generate the following matrix:
ID | Animal_Fish | Animal_Hawk | Animal_Snake | Age | ...
0 | 1 | 0 | 0 | 2 | ...
1 | 0 | 1 | 0 | 1 | ...
2 | 1 | 0 | 0 | 3 | ...
3 | 0 | 0 | 1 | 4 | ...
That's beautiful and works in most cases. But what if my test set contains fewer (or more) features than the train set? What if my test set doesn't contain "Fish"? It will generate one less category.
Can you guys help me how can I manage this kind of problem?
Thank you | 0 | 1 | 5,636 |
0 | 44,274,041 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-30T23:44:00.000 | -1 | 2 | 0 | Unable to handle NaN in pandas dataframe | 44,273,555 | -0.099668 | python,pandas | Building on piRSquared's comment, a possible method of treating NaN values (if applicable to your problem) is to convert the NaN inputs to the median of the column.
df = df.fillna(df.median())
Thank you. | 0 | 1 | 1,285 |
1 | 44,306,111 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-05-31T06:10:00.000 | 0 | 2 | 0 | What are the ximgproc_DisparityWLSFilter.filter() Arguments? | 44,276,962 | 0 | python,opencv,disparity-mapping | Unlike c++, Python doesn't work well with pointers. So the arguments are
Filtered_disp = ximgproc_DisparityWLSFilter.filter(left_disp,left, None,right_disp)
Note that it's no longer a void function in Python!
I figured this out through trial and error though. | I get a ximgproc_DisparityWLSFilter from cv2.ximgproc.createDisparityWLSFilter(left_matcher),
but I cannot get ximgproc_DisparityWLSFilter.filter() to work.
The error I get is
OpenCV Error: Assertion failed (!disparity_map_right.empty() && (disparity_map_right.depth() == CV_16S) && (disparity_map_right.channels() == 1)) in cv::ximgproc::DisparityWLSFilterImpl::filter, file ......\opencv_contrib\modules\ximgproc\src\disparity_filters.cpp, line 262
In general, how do I figure out how to use this, when there isn't a single google result for "ximgproc_DisparityWLSFilter"? | 0 | 1 | 2,604 |
0 | 44,968,506 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2017-05-31T17:01:00.000 | 5 | 8 | 0 | Python Machine Learning Functions | 44,290,736 | 0.124353 | python,machine-learning | The question is really a vague one.
Still, since you mention the machine-learning tag, I take it as a machine learning problem. In that case there is no specific model or algorithm that can decide up front which algorithm/function best suits your data.
It's a trial-and-error process: you decide which model is best for your data by writing a wrapper program that tests your data against all candidate models and, based on their accuracy scores, picks the one that fits best. | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | 0 | 1 | 1,870 |
0 | 49,795,930 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2017-05-31T17:01:00.000 | 1 | 8 | 0 | Python Machine Learning Functions | 44,290,736 | 0.024995 | python,machine-learning | I don't think this is a perfect place to ask this kind of questions. There are some other websites where you can ask this kind of questions.
For learning Machine Learning (ML), do a basic ML course and follow blogs. | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | 0 | 1 | 1,870 |
0 | 52,089,251 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2017-05-31T17:01:00.000 | 1 | 8 | 0 | Python Machine Learning Functions | 44,290,736 | 0.024995 | python,machine-learning | If you have just started learning ML, then you should first get an idea of the different scientific libraries that Python provides. Most importantly, you have to start with a basic understanding of machine learning modelling, either from the various online materials available or by doing an ML course.
FYI, there is no functionality in Python that tells you which model is perfect for a particular set of data. It solely depends on how analytical you are in choosing a good model for your dataset by checking different statistical/model parameters. | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | 0 | 1 | 1,870 |
0 | 56,123,050 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2017-05-31T17:01:00.000 | 1 | 8 | 0 | Python Machine Learning Functions | 44,290,736 | 0.024995 | python,machine-learning | From your question, I gather that you have a desired result and are trying to find the optimal algorithm to reach it. Unfortunately, as far as I'm aware, you have to compare the different algorithms themselves to understand which one performs better.
However, if you only wish to find a suitable algorithm for your use case but are unsure which broad class of algorithms to start looking in, then I'd suggest you read up on the different types of machine learning (classification/regression) and then establish relations between your use case and how each algorithm performs its task. With that as a foundation, you can fine-tune your search. | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | 0 | 1 | 1,870 |
0 | 44,316,105 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2017-06-01T11:51:00.000 | 1 | 1 | 0 | What does a tensorflow session exactly do? | 44,306,765 | 1.2 | python,machine-learning,tensorflow,gpu | TensorFlow sessions allocate ~all GPU memory on startup, so they can bypass the cuda allocator.
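If the up-front allocation is unwanted, the session can be told to allocate on demand instead (a TF1-style sketch):
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True                        # grow the allocation as needed instead of grabbing everything
# config.gpu_options.per_process_gpu_memory_fraction = 0.4    # or cap the fraction explicitly
sess = tf.Session(config=config)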
Do not run more than one cuda-using library in the same process or weird things (like this stream executor error) will happen. | I have tensorflow's gpu version installed, as soon as I create a session, it shows me this log:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0
with properties: name: GeForce GTX TITAN Black major: 3 minor: 5
memoryClockRate (GHz) 0.98 pciBusID 0000:01:00.0 Total memory: 5.94GiB
Free memory: 5.31GiB I
tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 I
tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y I
tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN
Black, pci bus id: 0000:01:00.0)
And when I check my GPU memory usage, around 90% of it gets consumed.
Tensorflow documentation does not say anything about this. Does it take control of the gpu ? Why does it consume most of the memory ? | 0 | 1 | 526 |
0 | 44,308,255 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2017-06-01T12:58:00.000 | 7 | 1 | 0 | matplotlib.figure.suptitle(), what does 'sup' stand for? | 44,308,195 | 1.2 | python,matplotlib | It is an abbreviation indicating a "super" title. It is a title which appears at the top of the figure, whereas a normal title only appears above a particular axes. If you only have one axes object, then there's unlikely an appreciable difference, but the difference happens when you have multiple subplots on the same figure and you would like a title at the top of the figure not on each of the axes objects. | I understand that matplotlib.figure.suptitle() adds a title to a figure.
But what does the "sup" stand for? | 0 | 1 | 370 |
0 | 44,324,291 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-06-02T08:05:00.000 | 0 | 1 | 0 | word2vec - reduce RAM consumption when loading model | 44,323,816 | 0 | python,gensim,word2vec | I am not intimately familiar with the word2vec implementation in gensim but the model, once trained, should basically boil down to a dictionary of (word -> vector) pairs. This functionality is provided by the gensim.models.KeyedVectors class and is independent of the training algorithm used to derive the vectors.
You could extend that class such that it loads the vectors from a database (SQLite for instance) on demand instead of into memory upon creation.
Probably best if you open an issue on github and start a discussion with the core developers on the matter. | I have about 30 word2vec models. When loading them in a python script each consumes a few GB of RAM so it is impossible to use all of them at once. Is there any way to use the models without loading the complete model into RAM? | 0 | 1 | 467 |
0 | 57,430,035 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-06-02T13:06:00.000 | -1 | 4 | 0 | Filtering dataframe based on column value_counts (pandas) | 44,329,734 | -0.049958 | python,pandas | l2 = ((df.val1.loc[df.val== 'Best'].value_counts().sort_index()/df.val1.loc[df.val.isin(l11)].value_counts().sort_index())).loc[lambda x : x>0.5].index.tolist() | I'm trying out pandas for the first time. I have a dataframe with two columns: user_id and string. Each user_id may have several strings, thus showing up in the dataframe multiple times. I want to derive another dataframe from this; one where only those user_ids are listed that have at least 2 or more strings associated to them.
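For the concrete case in the question below (keep only rows whose user_id occurs at least twice), a common pattern is a groupby/transform filter; a sketch:
df_filtered = df[df.groupby('user_id')['user_id'].transform('count') > 1]
# equivalently: df[df['user_id'].map(df['user_id'].value_counts()) > 1]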
I tried df[df['user_id'].value_counts()> 1], which I thought was the standard way to do this, but it yields IndexingError: Unalignable boolean Series key provided. Can someone clear out my concept and provide the correct alternative? | 0 | 1 | 9,412 |
0 | 44,433,959 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-04T06:08:00.000 | 0 | 2 | 0 | Specifying Multiple targets for regression in TFLearn | 44,351,248 | 0 | python,regression,tflearn | That's not how regression works. You must have only one column as a target. That's why the tensorflow API only allows one column to be the target of regression, specified with an integer. | How to specify multiple target_column in tflearn.data_utils.load_csv method.
According to Tflearn docs load_csv takes target_column as integer.
Tried passing my target_columns as a list in the load_csv method and as expected got a TypeError: 'list' object cannot be interpreted as an integer traceback.
Any solutions for this.
Thanks | 0 | 1 | 443 |
0 | 45,017,647 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-04T14:27:00.000 | 1 | 1 | 0 | Numpy array to string gives an output that if saved to a file results in a bigger file than the original image, why? | 44,355,163 | 0.197375 | python-3.x,numpy | This is likely due to the fact that typical image formats are compressed. If you open an image using e.g. scipy.ndimage.imread, the file will be decompressed and the result will be a numpy array of size (NxMx3), where N and M are the dimensions of the image and 3 represents the [R, G, B] channels. Transforming this to a string does not perform any compression, so the result will be larger than the original file. | The Operation : transforming a rgb image numpy array to string gives an output that if saved to a file also results in a bigger file than the original image, why? | 0 | 1 | 15 |
0 | 44,362,389 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-06-05T00:43:00.000 | 2 | 1 | 0 | Tensorflow Slower on Python 3 vs. Python 2 | 44,360,273 | 0.379949 | python,python-2.7,python-3.x,tensorflow | When operating Tensorflow from python most code to feed the computational engine with data resides in python domain. There are known differences between python 2/3 when it comes to performance on various tasks. Therefore, I'd guess that the python code you use to feed the net (or TF python layer, which is quite thick) makes heavy use of python features that are (by design) a bit slower in python 3. | My tests show that Tensorflow GPU operations are ~6% slower on Python 3 compared to Python 2. Does anyone have any insight on this?
Platform:
Ubuntu 16.04.2 LTS
Virtualenv 15.0.1
Python 2.7.12
Python 3.6.1
TensorFlow 1.1
CUDA Toolkit 8.0.44
CUDNN 5.1
GPU: GTX 980Ti
CPU: i7 4 GHz
RAM: 32 GB | 0 | 1 | 1,834 |
0 | 44,379,161 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2017-06-05T13:35:00.000 | 0 | 1 | 0 | Does Scipy have techniques to import&export optimisation model files such as LP? | 44,370,237 | 1.2 | python,import,scipy,export,linear-programming | No as Sascha mentioned in the comment. Use other alternatives such as cvxpy/cvxopt. | I am trying to manage problems from Scipy. So does Scipy provide techniques to import and export model files? | 0 | 1 | 95 |
0 | 44,388,121 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-06T10:43:00.000 | 0 | 1 | 0 | Create Matrix with gaussian-distributed ellipsis in python | 44,387,854 | 0 | python,matrix,ellipse,gaussianblur | You need to draw samples from a multi-variate gaussian distribution. The function you can use is numpy.random.multivariate_normal
Your mean vector should be [40, 60]. The covariance matrix C should be 2x2. Regarding its values:
C[1, 1] and C[2, 2] decide the width of the ellipse along each axis. Choose them so that roughly 3*sqrt(C[i, i]) (three standard deviations) matches the extent of the ellipse along that axis.
The off-diagonal values are zero if you want the ellipse to be aligned with the axes; otherwise put non-zero values there (keep in mind that C[2, 1] == C[1, 2]).
However, keep in mind that, since it is a Gaussian distribution, the output values will be close to 0 at a distance of 3*sqrt(C[i, i]) from the center, but they will never be truly zero. | I have a 100x100 matrix of zeros. I want to add a 10x20 ellipse around a specific point in the matrix - let's say at position 40,60. The ellipse should be filled with values from 0 to 1 (1 in the center - 0 at the edge). The numbers should be gaussian-distributed.
Maybe someone can give me a clue, how to start with this problem.. | 0 | 1 | 221 |
0 | 44,519,380 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2017-06-06T14:24:00.000 | 1 | 1 | 0 | pandas read_sql_query returns negative and incorrect values for Oracle Database number field containing positive values | 44,392,676 | 1.2 | python,sql,oracle,pandas,dataframe | Removing pandas and just using cx_Oracle still resulted in an integer overflow so in the SQL query I'm using:
CAST(field AS NUMBER(19))
At this moment I can only guess that any field between NUMBER(11) and NUMBER(18) will require an explicit CAST to NUMBER(19) to avoid the overflow. | I'm running pandas read_sql_query and cx_Oracle 6.0b2 to retrieve data from an Oracle database I've inherited to a DataFrame.
A field in many Oracle tables has data type NUMBER(15, 0) with unsigned values. When I retrieve data from this field the DataFrame reports the data as int64 but the DataFrame values have 9 or fewer digits and are all signed negative. All the values have changed - I assume an integer overflow is happening somewhere.
If I convert the database values using to_char in the SQL query and then use pandas to_numeric on the DataFrame the values are type int64 and correct.
I'm using Python 3.6.1 x64 and pandas 0.20.1. _USE_BOTTLENECK is False.
How can I retrieve the correct values from the tables without using to_char? | 0 | 1 | 577 |
0 | 44,397,071 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-06-06T18:13:00.000 | 2 | 2 | 0 | Get a subset of data from one row of Dataframe | 44,397,034 | 0.197375 | python,pandas,dataframe,indexing | row_2 = df[['B', 'C']].iloc[1]
OR
# Convert each row of the two columns to a tuple, then take the tuple at position 1 (the second row)
row_2 = list(df[['B', 'C']].apply(tuple, axis=1))[1] | Let's say I have a dataframe df with columns 'A', 'B', 'C'
Now I just want to extract row 2 of df and only columns 'B' and 'C'. What is the most efficient way to do that?
Can you please tell me why df.ix[2, ['B', 'C']] didn't work?
Thank you! | 0 | 1 | 52 |
0 | 44,403,857 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-06-07T04:50:00.000 | 5 | 1 | 0 | Deep learning using Caffe - Python | 44,403,745 | 0.761594 | python,machine-learning,neural-network,deep-learning,caffe | There is a fundamental difference between weights and input data: the training data is used to learn the weights (aka "trainable parameters") during training. Once the net is trained, the training data is no longer needed while the weights are kept as part of the model to be used for testing/deployment.
Make sure this difference is clear to you before you proceed.
Layers with trainable parameters have a filler to set the weights initially.
On the other hand, an input data layer does not have trainable parameters, but it should supply the net with input data. Thus, input layers have no filler.
Based on the type of input layer you use, you will need to prepare your training data. | I am studying deep learning and trying to implement it using Caffe with Python. Can anybody tell me how we can assign the weights to each node in the input layer instead of using a weight filler in Caffe? | 0 | 1 | 166
0 | 44,412,918 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2017-06-07T10:24:00.000 | 4 | 2 | 0 | How to convert between different color maps on OpenCV? | 44,409,981 | 1.2 | python,opencv | I think it is a little more complicated than what is suggested in the comments, since you are dealing with temperatures. You need to revert the color mapping to a temperature value image, then apply one colormap with OpenCV that you like.
Going back to greyscale is not as straightforward as converting the image from BGR to greyscale, because colors like dark red may be mapped to dark grey just like dark blue colors, even though they are at totally opposite ends of the scale.
Both of your images are in different scale (temperature wise) as well, so, if you pass them back to grey scale, black is not the same temperature as the other one, so it is not possible to compare them directly.
To get a proper scale value you can take the upper rectangle (the one that shows the scale), split it into equal pieces, and divide the temperature range into the same number of divisions. This will give you a color for each temperature. Then transform both images to cv::Mat of doubles so each pixel holds a temperature value.
Finally you must decide what your temperature range will be for all the images you have. For example you can choose 25-45. Then normalize the temperature images (the ones with doubles) with the range you selected, scale them to greyscale images (0 will be 25 and 255 will be 45) and apply the color map to these images.
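The last step might look roughly like this (a sketch; temp stands for the hypothetical per-pixel temperature array recovered from the scale bar):
import cv2
import numpy as np
t_min, t_max = 25.0, 45.0                             # the fixed range chosen for every image
norm = np.clip((temp - t_min) / (t_max - t_min), 0.0, 1.0)
grey = (norm * 255).astype(np.uint8)
colored = cv2.applyColorMap(grey, cv2.COLORMAP_JET)   # one consistent colormap across the whole set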
I hope this helps. | I have a set of thermal images which are encoded with different types of color maps.
I want to use a constant color map to make fault intelligence easier.
Please guide me on how to go about this. | 0 | 1 | 942 |
0 | 44,412,645 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2017-05-31T10:24:00.000 | 0 | 2 | 0 | How to convert between different color maps on OpenCV? | 44,409,981 | 0 | python,opencv | You can use cv2.cvtColor to convert to HSV and then manually change the hue. After you change the hue, you can convert back to RGB/BGR. | I have a set of thermal images which are encoded with different types of color maps.
I want to use a constant color map to make fault intelligence easier.
Please guide me on how to go about this. | 0 | 1 | 942 |
0 | 44,433,064 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-06-07T17:17:00.000 | 0 | 3 | 0 | Switching from tensorflow on python 3.6 to python 3.5 | 44,419,017 | 0 | python,tensorflow,keras | I had some issues with my tensorflow's installation too.
I personally used anaconda to solve the problem.
After installing anaconda (maybe uninstall the old one if you already have one), launch an anaconda prompt and input conda create -n tensorflow python=3.5; after that, you must activate it with activate tensorflow.
Once it's done, you have to install tensorflow on your python 3.5.
For that, use:
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.0rc1-cp35-cp35m-win_amd64.whl
for cpu version
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.2.0rc1-cp35-cp35m-win_amd64.whl for gpu version
You now have the r1.2 version of tensorflow.
Then, just use pip install keras and keras will be installed.
Now, all you have to do is launch anaconda navigator, select tensorflow on the scrolling menu and launch spyder/jupyter.
You can now use Keras with a tensorflow backend in Python 3.5
Hope it helps someone! (It took me so much time to figure this out by myself.) | This is my first question on stackoverflow, please bear with me as I will do my best to provide as much info as possible.
I have a windows 10, 6-bit processor. My end goal is to use keras within spyder. The first thing I did was update python to 3.6 and install tensorflow, which seemed to work. When I attempted to get keras, however, it wasn't working, and I read that keras worked on python 3.5. I successfully installed keras on python 3.5, which automatically installed theano as the backend.
But now I have two spyder environments, one running off of python 3.5, one off of 3.6. The 3.5 reads keras but doesn't go through with any modules because it cannot find tensorflow. The 3.6 can read tensorflow, but cannot find keras.
Please let me know what you would recommend. Thank you! | 0 | 1 | 2,643 |
0 | 44,421,256 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-06-07T18:40:00.000 | 0 | 2 | 0 | the difference between .bin file and .mat files | 44,420,434 | 0 | python,image-processing,tensorflow | A file-name suffix is just a suffix (which sometimes help to get info about that file; e.g. Windows decides which tool is called when double-clicked). A suffix does not need to be correct. And of course, changing the suffix will not change the content.
Every format will need their own decoder. JPG, PNG, MAT and co.
To some extent, these are automatically used by reading out metadata (giving some assumptions!). Many image-tools have some imread-function which works for jpg and png, even if there is no suffix (because there is checking for common and supported image-formats).
I'm not sure what tensorflow does automatically, but:
jpg, png, bmp should be no problem
worst-case: use scipy to read and convert
mat is usually a matrix (with infinite different encodings) and often matlab-based
scipy can read many matlab-based formats
bin can be anything (usually stands for binary; no clear mapping like the above)
Don't get me wrong, but I expect someone trying to use tensorflow (not a small, not a simple tool) to know that changing a suffix should never magically transform the content to the new format (especially in the lossless/lossy case like png, jpg). I hope you evaluated this decision and you are not running blindly into using a popular tool. | Can tensorflow read a file containing normal images, for example in JPG, ..., or does tensorflow only read .bin files containing images?
What is the difference between a .mat file and a .bin file?
Also, when I rename the .bin file to .mat, does the data in the file change?
Sorry, maybe my language is not clear because I cannot speak English very well | 0 | 1 | 574
0 | 50,867,766 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-08T13:19:00.000 | 0 | 1 | 0 | Is it normal to obtain different test results with different batch sizes with tensorflow | 44,436,899 | 0 | python,machine-learning,tensorflow,neural-network | This normally means that you did not set the phase_train parameter back to false when testing. | I am using tensorflow for a classification problem.
I have some utility for saving and loading my network. When I restore the network, I can specify a different batch size than for training.
My problem is that I am getting different test results when I restore with a different batch size. However, there is no difference when using the same batch size.
EDIT: Please note that I am not using dropout.
The difference is between 0% and 1% (0.5% on average).
My network is a fully connected layer that predicts two different outputs. I did not have the issue when I only had one task to predict.
My loss op is a sum of both losses.
What could be the issue? Does it have to do with Tensorflow's parallelization strategy? | 0 | 1 | 361 |
0 | 44,444,164 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-06-08T13:38:00.000 | 1 | 1 | 0 | What is label_keys parameter good for in a Classifier - Tensorflow? | 44,437,307 | 1.2 | python,tensorflow,tensorboard | Not in tensorboard, but the predict method can return the class names instead of numbers if you provide label_keys. | What is label_keys parameter good for in a Classifier.
Can you visualize the labeled data on Tensorboard at the Embeddings section? | 0 | 1 | 65 |
0 | 44,439,439 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2017-06-08T15:05:00.000 | 3 | 2 | 0 | Conditional summation in python | 44,439,375 | 1.2 | python,numpy | Your best bet is probably something like np.count_nonzero(x > threshold), where x is your 2-d array.
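A complete toy run with the shapes from the question (the 0.75 threshold is arbitrary):
import numpy as np
x = np.random.rand(8000, 7200)
count = np.count_nonzero(x > 0.75)   # a single vectorized pass, no Python loops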
As the name implies, count_nonzero counts the number of elements that aren't zero. By making use of the fact that True is 1-ish, you can use it to count the number of elements that are True. | I have a numpy 2d array (8000x7200). I want to count the number of cells having a value greater than a specified threshold. I tried to do this using a double loop, but it takes a lot of time.
Is there a way to perform this calculation quickly? | 0 | 1 | 1,618 |
0 | 44,441,810 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-06-08T16:24:00.000 | 0 | 1 | 0 | How to choose parameters for svm in sklearn | 44,441,002 | 0 | python,machine-learning,scikit-learn,svm | Yes, this is mostly a matter of experimentation -- especially as you've told us very little about your data set: separability, linearity, density, connectivity, ... all the characteristics that affect classification algorithms.
Try the linear and Gaussian kernels for starters. If linear doesn't work well and Gaussian does, then try the other kernels.
Once you've found the best 1 or 2 kernels, then play with the cost and gamma parameters. The cost C is the "slack" parameter: it gives the model permission to make a certain proportion of raw classification errors as a trade-off for other benefits: width of the gap, simplicity of the partition function, etc. Gamma, in contrast, controls how far the influence of a single training example reaches in the RBF kernel.
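That experimentation is usually automated with a grid search; a sketch, where X_train and y_train stand for your labelled data:
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
param_grid = {'kernel': ['linear', 'rbf'], 'C': [0.1, 1, 10, 100], 'gamma': ['auto', 0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)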
I haven't yet had an application that got more than trivial benefit from altering the cost. | I'm trying to use SVM from sklearn for a classification problem. I got a highly sparse dataset with more than 50K rows and binary outputs.
The problem is I don't know quite well how to efficiently choose the parameters, mainly the kernel, gamma anc C.
For the kernels for example, am I supposed to try all kernels and just keep the one that gives me the most satisfying results or is there something related to our data that we can see in the first place before choosing the kernel ?
Same goes for C and gamma.
Thanks ! | 0 | 1 | 726 |
0 | 44,444,173 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-08T19:23:00.000 | 0 | 2 | 0 | Faster calculation histogram of a set of images | 44,443,999 | 0 | python-3.x,numpy,histogram | Python is among the slowest production-ready languages you can use.
As you haven't posted any code, I can only provide general suggestions. They are listed in order of practicality below:
Use a faster Python implementation such as PyPy, or compile the hot parts with Cython
Use existing software with your desired functionality. There's nothing wrong with finding free software online.
Use a more efficient (or perhaps even lossy) algorithm to skip computation
Use a faster language such as Rust, C++, C#, or Java | I have about 3 million images and need to calculate a histogram for each one. Right now I am using python but it is taking a lot of time. Is there any way to process the images in batches? I have NVIDIA 1080 Ti GPU cards, so maybe there is a way to process them on the GPU?
I can't find any code or library to process the images in parallel. Any kind of help to speed this up is appreciated. | 0 | 1 | 528
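The answer above only lists general options; as one concrete sketch of CPU-side parallelism (not the GPU route the asker mentions), assuming 8-bit grayscale images and a made-up file pattern:

import glob
from multiprocessing import Pool

import cv2
import numpy as np

def hist_for(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return np.bincount(img.ravel(), minlength=256)   # 256-bin grayscale histogram

if __name__ == "__main__":
    paths = glob.glob("images/*.jpg")                 # placeholder pattern, not from the question
    with Pool() as pool:                              # one worker per CPU core
        hists = pool.map(hist_for, paths, chunksize=64)
    print(len(hists))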
0 | 45,195,580 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-06-08T20:04:00.000 | 0 | 2 | 0 | Cannot run keras | 44,444,634 | 0 | python,machine-learning,virtualenv,keras,mnist | Do you get an error message if you just import keras? I was getting a similar error in the command line and then implemented it in Spyder (using Anaconda) and it worked fine. | I want to run Keras on Anaconda for a convolutional neural network using MNIST handwriting recognition. A day earlier everything worked fine, but when I try to run the same program now, I get the following error in the first line:
from keras.datasets import mnist (first line of code)
ModuleNotFoundError: No module named 'keras.datasets'; 'keras' is not
a package
I also created a virtual environment to use Python 3.5, as my Python version is 3.6. I have installed both Keras and TensorFlow. How do I fix the above error? Perhaps it is related to the path rather than an error with Keras. My Anaconda is installed in E: whereas the working environment is C:\Users\Prashant Mahato. | 0 | 1 | 1,591
0 | 44,454,289 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-06-09T06:55:00.000 | 1 | 3 | 0 | How to reshape a 3D numpy array? | 44,451,227 | 0.066568 | python,numpy,deep-learning,conv-neural-network | The standard way is to resize the image such that the smaller side is equal to 224 and then crop the image to 224x224. Resizing the image to 224x224 may distort the image and can lead to erroneous training. For example, a circle might become an ellipse if the image is not a square. It is important to maintain the original aspect ratio. | I have a list of numpy arrays which are actually input images to my CNN. However size of each of my image is not cosistent, and my CNN takes only images which are of dimension 224X224. How do I reshape each of my image into the given dimension?
print(train_images[key].reshape(224, 224,3))
gives me an output
ValueError: total size of new array must be unchanged
I would be very grateful if anybody could help me with this. | 0 | 1 | 2,121 |
0 | 44,451,381 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-06-09T06:55:00.000 | 1 | 3 | 0 | How to reshape a 3D numpy array? | 44,451,227 | 0.066568 | python,numpy,deep-learning,conv-neural-network | Here are a few ways I know to achieve this:
Since you're using Python, you can use cv2.resize() to resize the image to 224x224. The problem here is going to be distortion.
Scale the image to adjust to one of the required sizes (W=224 or H=224) and trim off whatever is extra. There is a loss of information here.
If you have the larger image and a bounding box, pad the bounding box by some delta to maintain the aspect ratio, and then resize down to the required size.
When you reshape a numpy array, the product of the dimensions must match. If not, it'll throw a ValueError like the one you got. There's no way to solve your problem using reshape, AFAIK.
print(train_images[key].reshape(224, 224,3))
gives me an output
ValueError: total size of new array must be unchanged
I would be very grateful if anybody could help me with this. | 0 | 1 | 2,121 |
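A short sketch of the cv2.resize option from the answer above; the file name and interpolation choice are placeholders, not taken from the question:

import cv2

img = cv2.imread("example.jpg")                                        # any image, original size unknown
resized = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)    # may distort if the source isn't square
print(resized.shape)                                                   # (224, 224, 3), ready for the CNN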
0 | 49,260,218 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-09T11:44:00.000 | 1 | 2 | 0 | Finding contours using opencv in python | 44,456,932 | 0.099668 | python-3.x,opencv,contour | The mode and method parameters of findContours() are enums with integer values. One can use either the keyword constants or the integer values assigned to them. This detail can be viewed via IntelliSense in Visual Studio when OpenCV is included in a project.
Below are the values associated with each enum.
MODES
CV_RETR_EXTERNAL : 0
CV_RETR_LIST : 1
CV_RETR_CCOMP : 2
CV_RETR_TREE : 3
METHODS
CV_CHAIN_APPROX_NONE : 1
CV_CHAIN_APPROX_SIMPLE : 2
CV_CHAIN_APPROX_TC89_L1 : 3
CV_CHAIN_APPROX_TC89_KCOS : 4 | I think I understand the function cv2.findContours(image, mode, method) well.
But I came across contours,hierarchy = cv2.findContours(thresh,2,1) in one of the OpenCV documents. I don't understand what 2 and 1 mean here or why they are used. Could someone please explain? | 0 | 1 | 777
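A short sketch of what those integers map to, per the table in the answer above. Whether findContours returns two or three values depends on the OpenCV version; the two-value form from the question is assumed here, and thresh is the asker's binary (thresholded) image:

import cv2

contours, hierarchy = cv2.findContours(thresh, 2, 1)
# ...is equivalent to spelling out the enum constants:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)  # mode=2, method=1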
0 | 44,464,380 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-06-09T18:12:00.000 | 1 | 1 | 0 | Determining the orientation of a file in memory | 44,464,315 | 0.197375 | python,file,io,bit-manipulation | Endianness is a problem of binary files. A CSV file is a text file: the numbers are not binary numbers but ASCII characters, so there is no endianness in it. | Say I want to process a CSV file. I know in Python I can call the read() function to open the file and read it in a byte at a time, from the first field in the file (i.e. the field in the top left of the file) to the last field (the field in the bottom right).
My question is how I can determine the orientation of a file in memory. That is, if I view the contents of the file as a single binary number and process it as bit stream, how can I know if the first field (the field the read() first returns to us) is stored in the least significant positions of the binary number or the most significant positions? Would that be determined by the endianness of the machine my program is running on?
Here's one (contrived) instance where this distinction would matter. Say I first scanned the binary representation of the file from least significant position to most significant position to determine the widths of each of the CSV values. If I were to then call read(), the first field width I calculated would correspond to the first field read() returns if and only if the first field of the CSV file is stored at the least significant bit positions when we view the file as a single binary number. If the first field was instead stored at the most significant positions, I'd first have to reverse my list of calculated field widths before I could use it.
Here's a more concrete example:
CSV file: abc,12345
Scanned field widths: either [3, 5] or [5, 3] depending on how the CSV file is laid out in memory.
Now, if I call read(), the first field I'll process is abc. If abc happened to be the first field I scanned through when calculating the field widths, I'm good. I'll know that I've scanned the entire first field after reading 3 characters. However, if I first scanned 12345 when calculating the field widths, I've got a problem.
How can I determine how a file is laid out in memory? Is the first field of a file stored in the least significant bit positions, or the most significant bit positions? | 0 | 1 | 29 |
0 | 59,876,018 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2017-06-10T03:50:00.000 | 0 | 15 | 0 | Price column object to int in pandas | 44,469,313 | 0 | python,pandas,ipython-notebook | This will also work: dframe.amount.str.replace("$","").astype(int) | I have a column called amount which holds values that look like this: $3,092.44. When I do dataframe.dtypes it returns this column as an object. How can I convert this column to type int?
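For a value like $3,092.44, the thousands separator and the cents also need handling before the cast, which the one-liner above does not cover. A minimal sketch, assuming the column is named amount as in the question (the sample values are made up):

import pandas as pd

df = pd.DataFrame({"amount": ["$3,092.44", "$150.00"]})          # made-up sample data
cleaned = (df["amount"]
           .str.replace("$", "", regex=False)                    # drop the currency sign
           .str.replace(",", "", regex=False))                   # drop the thousands separator
df["amount"] = cleaned.astype(float)                             # keeps the cents; use .round().astype(int) only if whole dollars are acceptable
print(df.dtypes)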
0 | 44,471,880 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-06-10T09:42:00.000 | 3 | 2 | 0 | Creating 1D zero array in Octave | 44,471,853 | 1.2 | python,numpy,matrix,octave | zeros(n,1) works well for me in Octave. | How can we create an array with n elements? The zeros function seems to create only arrays of dimension 2 or higher: zeros(4), zeros([4]) and zeros([4 4]) all create a 2D zero matrix of dimensions 4x4.
I have code in Python where I have used numpy.zeros(n). I wish to do something similar in Octave. | 0 | 1 | 7,005