| GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 53,186,754 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-11-07T02:17:00.000 | 0 | 2 | 0 | Pit in LSTM programming by python | 53,182,773 | 0 | python-3.x,tensorflow,keras,lstm,rnn | No. Samples are not equal to batch size. Samples means the number of rows in your data-set. Your training data-set is divided into a number of batches, which are passed to the network one at a time to train it.
In simple words,
Imagine your data-set has 30 samples, and you define your batch_size as 3.
That means the 30 samples are divided into 10 batches (30 divided by the batch_size you defined, 3, gives 10). When you train your model, only 3 rows of data are pushed to the neural network at a time, then the next 3 rows, and so on until the whole data-set has been pushed through the network.
Samples/Batch_size = Number of batches
Remember that batch_size and number of batches are two different things. | As we all know, if we want to train an LSTM network, we must reshape the training dataset with numpy.reshape(), and the reshaped result looks like [samples, time_steps, features]. However, the new shape is influenced by the original one. I have seen some blogs teaching LSTM programming that take 1 as time_steps, and if time_steps is another number, samples changes accordingly. My question is: does samples equal batch_size?
X = X.reshape(X.shape[0], 1, X.shape[1]) | 0 | 1 | 74 |
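A minimal sketch (an editorial illustration of the answer above, with made-up sizes) of how samples, time_steps and batch_size relate:

```python
import numpy as np

# Made-up sizes: 30 samples with 5 features each, reshaped to [samples, time_steps, features].
X = np.random.rand(30, 5)
X = X.reshape(X.shape[0], 1, X.shape[1])   # (30, 1, 5) -- samples is still 30

batch_size = 3
num_batches = X.shape[0] // batch_size     # 30 / 3 = 10 batches per epoch
print(X.shape, num_batches)                # (30, 1, 5) 10
```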
0 | 53,204,495 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-11-07T18:20:00.000 | 1 | 2 | 0 | Tensorflow training, how to prevent training node deletion | 53,195,482 | 1.2 | python,tensorflow,machine-learning | You can use the keep_checkpoint_max flag to tf.estimator.RunConfig in model_main.py.
You can set it to a very large number to practically save all checkpoints.
You should be warned though that depending on the model size and saving frequency, it might fill up your disk (and therefore crash during training).
You can change saving frequency by the flags save_checkpoints_steps or save_checkpoints_secs of RunConfig. The default is to use save_checkpoints_secs, with a default value of 600 (10 minutes). | I am using Tensorflow with python for object detection.
I want to start training and leave it for a while and keep all training nodes (model-cpk). Standard Tensorflow training seems to delete nodes and only keep the last few nodes. How do I prevent that?
Please excuse me if this is the wrong place to ask such questions. I would be obliged if told of a proper place. Thank you. | 0 | 1 | 69 |
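A hedged sketch of the TF 1.x Estimator flags named in the answer above (the model function is hypothetical):

```python
import tensorflow as tf

# Keep (practically) every checkpoint instead of only the most recent few;
# save_checkpoints_secs keeps the default 10-minute saving cadence.
run_config = tf.estimator.RunConfig(
    keep_checkpoint_max=1000,
    save_checkpoints_secs=600,
)
# estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=run_config)  # my_model_fn is a placeholder
```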
0 | 53,205,258 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-11-07T18:20:00.000 | 0 | 2 | 0 | Tensorflow training, how to prevent training node deletion | 53,195,482 | 0 | python,tensorflow,machine-learning | You can save model checkpoints as .hdf5 files and load them again when you want to predict on test data.
Hope that helps. | I am using Tensorflow with python for object detection.
I want to start training and leave it for a while and keep all training nodes (model-cpk). Standard Tensorflow training seems to delete nodes and only keep the last few nodes. How do I prevent that?
Please excuse me if this is the wrong place to ask such questions. I would be obliged if told of a proper place. Thank you. | 0 | 1 | 69 |
0 | 53,202,686 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-07T19:29:00.000 | 0 | 2 | 0 | Can you post process results from Cloud ML's prediction output? | 53,196,467 | 0 | python,tensorflow,google-cloud-ml | I'll answer (1): we have an Alpha API that will permit this. Please contact [email protected] for more information. | I have a model for object detection (Faster RCNN from Tensorflow's Object Detection API) running on Google Cloud ML. I also have some code to filter the resulting bounding boxes based on size, aspect ratio etc.
Is it possible to run this code as part of the prediction process so I don't need to run a separate process to do it afterwards?
Is it possible to limit the number of bounding boxes predicted by the model based on some confidence threshold, as it currently outputs a lot of extraneous data? | 0 | 1 | 81 |
0 | 53,199,041 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-07T22:35:00.000 | 1 | 1 | 0 | scipy stats distributions documentation | 53,198,927 | 0.197375 | python,scipy | If you use ipython then I believe scipy.stats.binom? achieves this. | I'm trying to track down the docs for the various distributions in scipy.stats. It is easy enough to google around for them, but I like to use the built-in help function for kicks sometimes. Through a series of help calls, can find that scipy has a stats module and that scipy.stats has a binom distribution. However, at that point, using help becomes tricky. help(scipy.stats.binom) actually returns a help document for a class named binom_gen which inherits methods from some parent abstract class whose __init__ method is utterly uninformative. However, it does provide the following hint: "See help(type(self)) for accurate signature." Okay. Since I do not have access to self from outside of the class code itself, I assume that this means "go ahead and instantiate an object, and then call help." After some trial and error on getting literally any old parameters to not raise an Exception (specifically, scipy.stats.binom(0.5,0.5) returns successfully), we can call help on that thing.
Both help(scipy.stats.binom(0.5,0.5)) and help(type(scipy.stats.binom(0.5,0.5))) give the docs for the class rv_frozen, which is equally uninformative, and actually gives the same suggestion: "See help(type(self)) for accurate signature."
How do I access the help for distributions in scipy.stats? More generally, is there a meaningful way to navigate an abstract class morass via successive calls to the help function, or must I simply know a priori the class ultimately returned by these factories? | 0 | 1 | 70 |
0 | 53,286,280 | 0 | 1 | 0 | 0 | 1 | true | 12 | 2018-11-07T23:57:00.000 | 9 | 2 | 0 | Importing tensorflow makes python 3.6.5 error | 53,199,675 | 1.2 | python,python-3.x,tensorflow | I have solved the issue. The following procedure was used to find and fix the problem:
I used the faulthandler module to force Python to print out a stack trace and received a Windows fatal exception: access violation error, which seems to suggest the problem was indeed a segfault caused by some module used by tensorflow.
I tried to fix dependencies by doing a conda update --all and then a conda clean --all which didn't fix the problem.
I noticed, though, that the problem seemed to arise from the h5py and keras modules, so I did pip install --upgrade h5py, pip install --upgrade keras, and pip install --upgrade tensorflow, and the problem was fixed. I am now using tensorflow version 1.12.0, keras version 2.2.4, and h5py version 2.8.0.
The key to solving this problem seems to be the faulthander module which showed me which modules (h5py and keras) were leading to the segfault. | Tensorflow used to work on my computer. But now when I try to import tensorflow python itself errors out. I am not given a traceback call to tell me what the error is. I get a window's prompt that says "Python has stopped working". When I click "debug" all I get is "An unhandled win32 exception occurred in python.exe". I've never had a python package actually error out python itself for me, I've always just had a traceback error thrown by python if I didn't install something right.
I've tried uninstalling and reinstalling tensorflow (effectively updating from 1.7.0 to 1.12.0) but that has not helped. I'm not sure how to search for a solution to this problem either since I'm not given a traceback or an error code or an error message aside from the very generic one above.
I'm currently using python 3.6.5 with tensorflow 1.12.0 (CPU only) installed. My OS is Windows 7 Enterprise 64 bit.
Any ideas?
EDIT: The python distro I am using is through Anaconda and I'm trying to run python directly through the anaconda prompt (command line interface).
EDIT2: I used the faulthandler module to see if I can get a stack trace out of it, and I got a Windows fatal exception: code 0xc0000139 and a Windows fatal exception: access violation, along with a bunch of lines linking to various frozen importlib._bootstrap lines of code in various __init__.py modules.
EDIT3: For a bit more context, this is on a workplace machine with a lot of security software installed on it. | 0 | 1 | 2,939 |
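For reference, a minimal, hedged sketch of the faulthandler usage mentioned in step 1 of the answer above:

```python
# Enable faulthandler before the failing import so a hard crash still dumps
# a Python-level stack trace instead of only "Python has stopped working".
import faulthandler
faulthandler.enable()

import tensorflow as tf  # if this segfaults, the enabled handler prints the trace
```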
0 | 53,211,031 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-08T02:59:00.000 | 0 | 1 | 0 | nltk bags of words showing emotions | 53,200,934 | 0 | python,nlp,nltk | I'm not aware of any dataset that associates sentiments with keywords, but you can easily build one starting from a generic sentiment analysis dataset.
1) Clean the datasets of stopwords and all the terms that you don't want to associate with a sentiment.
2) Compute the count of each word in the two sentiment classes and normalize it. In this way you associate with each word a probability of belonging to a class. Let's suppose the word "love" appears 300 times in the positive sentences and 150 times in the negative sentences. Normalizing, the word "love" belongs to the positive class with a probability of 66% (300/(150+300)) and to the negative one with 33%.
3) In order to make the dictionary more robust to the borderline terms you can set a threshold to consider neutral all the words with the max probability lower than the threshold.
This is an easy approach to building the dictionary that you are looking for. You could also use a more sophisticated approach such as Term Frequency-Inverse Document Frequency. | I am working on NLP using Python and nltk.
I was wondering whether there is any dataset with bags of words showing keywords related to emotions such as happiness, joy, anger, sadness, etc.
From what I dug up in the nltk corpus, I see there are some sentiment analysis corpora which contain positive and negative reviews, which isn't exactly the same as keywords showing emotions.
Is there any way I could build my own dictionary containing words which show emotion for this purpose? If so, how do I do it, and is there any collection of such words?
Any help would be greatly appreciated | 0 | 1 | 822 |
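A small, hedged sketch of step 2 of the answer above (the word counts are invented for illustration):

```python
from collections import Counter

pos_counts = Counter({"love": 300, "great": 120})   # word counts from positive sentences
neg_counts = Counter({"love": 150, "awful": 90})    # word counts from negative sentences

def positive_probability(word):
    pos, neg = pos_counts[word], neg_counts[word]
    total = pos + neg
    return None if total == 0 else pos / total      # P(positive | word)

print(positive_probability("love"))                 # 0.666..., as in the example above
```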
0 | 53,203,802 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-08T08:00:00.000 | -3 | 3 | 0 | How to save a text file to a .mat file? | 53,203,507 | -0.197375 | python-2.7,matlab,text-files,mat-file | if what you need is to change file format:
mv example.mat example.txt | How do I save a '.txt' file as a '.mat' file, using either MATLAB or Python?
I tried using textscan() (in MATLAB), and scipy.io.savemat() (in Python). Both didn't help.
My text file is of the format: value1,value2,value3,value4 (each row) and has over 1000 rows.
Any help is appreciated. Thanks in advance. | 0 | 1 | 880 |
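For reference, a hedged sketch of one way to actually convert such a file in Python (not the approach in the answer above; file names are placeholders):

```python
import numpy as np
from scipy import io

data = np.loadtxt("example.txt", delimiter=",")   # each row: value1,value2,value3,value4
io.savemat("example.mat", {"data": data})         # stored under the variable name "data"
```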
0 | 53,205,313 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-08T09:18:00.000 | 0 | 1 | 0 | relation in between a categorical dependent variable and combination of independent variables | 53,204,674 | 0 | python,machine-learning,statistics,analysis | I am not sure if I correctly understand your question, but from what I understand:
You can try to convert your continuous columns to buckets, which effectively makes them categorical as well, and then find the correlation between them.
I am working on a classification problem and I want to check what combination of independent columns are highly related to dependent columns. | 0 | 1 | 97 |
0 | 53,235,950 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-11-09T06:50:00.000 | 0 | 2 | 0 | pip install hypothesis[pandas] says hypothesis3.82.1 does not provide the extra 'pandas' | 53,221,061 | 0 | python-hypothesis | fixed my problem with
pip install hypothesis[all]
and also realizing that hypothesis.extra tab completion only showed django, and that pandas and numpy extras seem to need to be imported explicitly. | When I ran pip install hypothesis[pandas] I got the following:
Collecting hypothesis[pandas]
Using cached https://files.pythonhosted.org/packages/36/58/222aafec5064d12c2b6123c69e512933b1e82a55ce49015371089d216f89/hypothesis-3.82.1-py3-none-any.whl
hypothesis 3.82.1 does not provide the extra 'pandas'
pip install hypothesis[django] seemed to work and hypothesis.extra has django but not pandas. Any idea what is going on with the pip install for pandas and numpy extras? | 0 | 1 | 432 |
0 | 53,377,498 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-09T14:45:00.000 | 0 | 1 | 0 | Unable to import tensorflow, error for importing pywrap_tensorflow | 53,227,954 | 1.2 | python,tensorflow | Well, I am answering my own question since the error seems to have multiple causes.
I am not sure what the cause was; however, downgrading Python to 3.5 and installing tensorflow with pip (pip install tensorflow) resolved the issue.
Note: I uninstalled everything before installing Anaconda again. | I am trying to use Keras Sequential, however, my jupyter notebook is flooded with error as it's not able to import tensorflow in the backend (i think). Later I found that, its not with Keras, but I am not able to do 'import tensorflow as tf' as well.
Any suggestions, please?
I am using python 3.5.6
tensorflow 1.12
I did, pip install tensorflow for installation.
ImportError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in swig_import_helper()
17 try:
---> 18 fp, pathname, description = imp.find_module('_pywrap_tensorflow', [dirname(__file__)])
19 except ImportError:
~\AppData\Local\Continuum\anaconda3\lib\imp.py in find_module(name, path)
295 else:
--> 296 raise ImportError(_ERR_MSG.format(name), name=name)
297
ImportError: No module named '_pywrap_tensorflow'
During handling of the above exception, another exception occurred:
ModuleNotFoundError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\__init__.py in
53 # use dlopen() for dynamic loading.
---> 54 from tensorflow.python import pywrap_tensorflow
55 except ImportError:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in
27 return _mod
---> 28 _pywrap_tensorflow = swig_import_helper()
29 del swig_import_helper
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in swig_import_helper()
19 except ImportError:
---> 20 import _pywrap_tensorflow
21 return _pywrap_tensorflow
ModuleNotFoundError: No module named '_pywrap_tensorflow'
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
in
----> 1 import tensorflow as tf
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\__init__.py in
22
23 # pylint: disable=wildcard-import
---> 24 from tensorflow.python import *
25 # pylint: enable=wildcard-import
26
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\__init__.py in
58 please exit the tensorflow source tree, and relaunch your python interpreter
59 from there.""" % traceback.format_exc()
---> 60 raise ImportError(msg)
61
62 # Protocol buffers
ImportError: Traceback (most recent call last):
File "C:\Users\ritesh.kankonkar\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow', [dirname(__file__)])
File "C:\Users\ritesh.kankonkar\AppData\Local\Continuum\anaconda3\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\ritesh.kankonkar\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 54, in
from tensorflow.python import pywrap_tensorflow
File "C:\Users\ritesh.kankonkar\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 28, in
_pywrap_tensorflow = swig_import_helper()
File "C:\Users\ritesh.kankonkar\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 20, in swig_import_helper
import _pywrap_tensorflow
ModuleNotFoundError: No module named '_pywrap_tensorflow'
Error importing tensorflow. Unless you are using bazel,
you should not try to import tensorflow from its source directory;
please exit the tensorflow source tree, and relaunch your python interpreter
from there. | 0 | 1 | 1,593 |
0 | 54,779,304 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-11T15:43:00.000 | 1 | 1 | 0 | TypeError: non_max_suppression() got an unexpected keyword argument 'score_threshold' | 53,250,360 | 0.197375 | python,python-3.x,tensorflow | I encountered same issue using tf 1.8. Tensorflow versions < 1.9 did not support the score_threshold param.
Need to be sure you're using version 1.9 or newer. | Hi I am using win 7 64 bit and tensorflow version 1.5 I've tried 1.9 and higher but isnt work and I've tried tensorflow-gpu version but again isnt work all the error this | 0 | 1 | 764 |
0 | 53,253,685 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-11T21:51:00.000 | 1 | 1 | 0 | Parsing a CSV into a database for an API using Python? | 53,253,610 | 0.197375 | python,sql,database,pandas,sqlite | If you used pd.read_csv() i can assure you all of the info is there, it's just not displaying it.
You can check by doing something like print(df['Column_name_you_are_interested_in'].tolist()) just to make sure though. You can also use the various count type methods in pandas to make sure all of your lines are there.
Pandas is pretty versatile, so it shouldn't have trouble with 6000 lines. | I'm going to use data from a .csv to train a model to predict user activity on Google ads (impressions, clicks) in relation to the weather for a given day. I have a .csv that contains 6000+ recordings of this info and want to parse it into a database using Python.
I tried making a df in pandas but for some reason the whole table isn't shown. The middle columns (there's about 7 columns I think) and rows (numbered over 6000 as I mentioned) are replaced with '...' when I print the table so I'm not sure if the entirety of the information is being stored and if this will be usable.
My next attempt will possibly be SQLite, but since it's local memory, will this interfere with someone else making requests to my API endpoint if I don't have the db actively open at all times?
Thanks in advance. | 0 | 1 | 270 |
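A hedged sketch of the check suggested in the answer above (the file and column names are placeholders); the "..." in the printout is only display truncation:

```python
import pandas as pd

df = pd.read_csv("ads_weather.csv")          # placeholder file name
print(df.shape)                              # confirms how many rows/columns were loaded
print(df["clicks"].tolist()[:10])            # "clicks" is a hypothetical column name
pd.set_option("display.max_columns", None)   # show every column when printing df
```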
0 | 53,294,036 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-11-12T16:39:00.000 | 1 | 1 | 0 | Input numerical arrays instead of images into Keras/TF CNN | 53,266,491 | 1.2 | python,tensorflow,keras,conv-neural-network,mnist | Yes, you can use a CNN for data other than images, such as sequential/time-series data (1D convolution, but you can use 2D convolution as well).
A CNN does its job pretty well for these types of data.
You should provide your input as an image-like matrix, i.e. a window on which the CNN can perform convolution.
You can store those input matrices/windows in a numpy array, then load that file and train your CNN on it. | I have been building some variations of CNNs off of Keras/Tensorflow examples that use the MNIST data images (ubyte files) for feature extraction. My eventual goal is to do a similar thing but with a collection (~10000) of 2D FFT arrays of signal data that I have made (n x m ~ 1000 x 50, 32-bit float data).
I have been looking for an example that uses something other than image files but can not seem to find any.
My questions are: Is this possible to do without converting them to images? Can the dataset be exported to a pickle or some other file that I could input? What's the best way to achieve this?
Thanks! | 0 | 1 | 440 |
0 | 53,452,436 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-13T01:35:00.000 | 1 | 1 | 0 | inception v3 using tf.data? | 53,272,508 | 0.197375 | python,tensorflow | Well, I eventually got this working. The various documents referenced in the comment on my question had what I needed, and I gradually figured out which parameters passed to queuerunners corresponded to which parameters in the tf.data stuff.
There was one gotcha that took a while for me to sort out. In the inception implementation, the number of examples used for validation is rounded up to be a multiple of the batch size; presumably the validation set is reshuffled and some examples are used more than once. (This does not strike me as great practice, but generally the number of validation instances is way larger than the batch size, so only a relative few are double counted.)
In the tf.data stuff, enabling shuffling and reuse is a separate thing and I didn't do it on the validation data. Then things broke because there weren't enough unique validation instances, and I had to track that down.
I hope this helps the next person with this issue. Unfortunately, my code has drifted quite far from Inception v3 and I doubt that it would be helpful for me to post my modification. Thanks! | I'm using a bit of code that is derived from inception v3 as distributed by the Google folks, but it's now complaining that the queue runners used to read the data are deprecated (tf.train.string_input_producer in image_processing.py, and similar). Apparently I'm supposed to switch to tf.data for this kind of stuff.
Unfortunately, the documentation on tf.data isn't doing much to relieve my concern that I've got too much data to fit in memory, especially given that I want to batch it in a reusable way, etc. I'm confident that the tf.data stuff can do this; I just don't know how to do it. Can anyone point me to a full example of code that uses tf.data to deal with batches of data that won't all fit in memory? Ideally, it would simply be an updated version of the inception-v3 code, but I'd be happy to try and work with anything. Thanks! | 0 | 1 | 58 |
0 | 53,301,201 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-14T02:26:00.000 | 0 | 1 | 0 | An algorithm that efficiently computes the distance of one labeled pixel to its nearest differently labeled pixel | 53,292,326 | 1.2 | python,algorithm,distance,distance-matrix | I think that if you have a matrix, you can run a BFS version where the matrix A will be your graph G and the vertex v will be the arbitrary pixel you chose.
There is an edge between any two adjacent cells in the matrix. | I apologize for my lengthy title name. I have two questions, where the second question is based on the first one.
(1). Suppose I have a matrix, whose entries are either 0 or 1. Now, I pick an arbitrary 0 entry. Is there an efficient algorithm that searches the nearest entry with label 1 or calculates the distance between the chosen 0 entry and its nearest entry with label 1?
(2). Suppose now the distribution of entries 1 has a geometric property. To make this statement clearer, imagine this matrix as an image. In this image, there are multiple continuous lines (not necessarily straight). These lines form several boundaries that partition the image into smaller pieces. Assume the boundaries are labeled 1, whereas all the pixels in the partitioned area are labeled 0. Now, similar to (1), I pick a random pixel labeled as 0, and I hope to find out the coordinate of the nearest pixel labeled as 1 or the distance between them.
A hint/direction for part (1) is enough for me. If typing up an answer takes too much time, it is okay just to tell me the name of the algorithm, and I will look it up.
p.s.: If I post this question in an incorrect section, please let me know. I will re-post it to an appropriate section. Thank you! | 0 | 1 | 80 |
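A hedged sketch of the BFS hint in the answer above for part (1), treating the matrix as a grid graph (the example grid is made up):

```python
from collections import deque

def nearest_one(grid, start):
    """Breadth-first search from a 0-cell to the closest 1-cell (4-connectivity)."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == 1:
            return (r, c), dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None, -1  # no 1-cell reachable

print(nearest_one([[0, 0, 1], [0, 0, 0]], (1, 0)))  # ((0, 2), 3)
```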
0 | 53,308,093 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-14T06:38:00.000 | 1 | 1 | 1 | What is the difference between tensorflow serving Dockerfile and Dockerfile.devel? | 53,294,415 | 0.197375 | python,docker,tensorflow,dockerfile,tensorflow-serving | a Dockerfile is a file where your write the configurations to create a docker image.
The tensorflow/serving cpu and gpu are docker images which means they are already configured to work with tensorflow, tensorflow_model_server and, in the case of gpu, with CUDA.
If you have a GPU, then you can use a tensorflow/serving gpu version which would reduce the latency of your predictions. If you don't have a GPU, then you can use a tensorflow/serving cpu version which would do exactly the same but will be slower. | Why are there two different docker files for tensorflow serving - Dockerfile & Dockerfile.devel - for both CPU and GPUs?
Which one is necessary for deploying and testing? | 0 | 1 | 125 |
0 | 53,333,072 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2018-11-14T13:56:00.000 | 7 | 1 | 0 | Python/Gensim - What is the meaning of syn0 and syn0norm? | 53,301,916 | 1.2 | python,deep-learning,nlp,gensim,word-embedding | These names were inherited from the original Google word2vec.c implementation, upon which the gensim Word2Vec class was based. (I believe syn0 only exists in recent versions for backward-compatbility.)
The syn0 array essentially holds raw word-vectors. From the perspective of the neural-network used to train word-vectors, these vectors are a 'projection layer' that can convert a one-hot encoding of a word into a dense embedding-vector of the right dimensionality.
Similarity operations tend to be done on the unit-normalized versions of the word-vectors. That is, vectors that have all been scaled to have a magnitude of 1.0. (This makes the cosine-similarity calculation easier.) The syn0norm array is filled with these unit-normalized vectors, the first time they're needed.
This syn0norm will be empty until either you do an operation (like most_similar()) that requires it, or you explicitly do an init_sims() call. If you explicitly do an init_sims(replace=True) call, you'll actually clobber the raw vectors, in-place, with the unit-normed vectors. This saves the memory that storing both vectors for every word would otherwise require. (However, some word-vector uses may still be interested in the original raw vectors of varying magnitudes, so only do this when you're sure most_similar() cosine-similarity operations are all you'll need.)
The syn1 (or syn1neg in the more common case of negative-sampling training) properties, when they exist on a full model (and not for a plain KeyedVectors object of only word-vectors), are the model neural network's internal 'hidden' weights leading to the output nodes. They're needed during model training, but not a part of the typical word-vectors collected after training.
I believe the syn prefix is just a convention from neural-network variable-naming, likely derived from 'synapse'. | I know that in gensims KeyedVectors-model, one can access the embedding matrix by the attribute model.syn0. There is also a syn0norm, which doesn't seem to work for the glove model I recently loaded. I think I also have seen syn1 somewhere previously.
I haven't found a doc-string for this and I'm just wondering what's the logic behind this?
So if syn0 is the embedding matrix, what is syn0norm? What would then syn1 be and generally, what does syn stand for? | 0 | 1 | 6,897 |
0 | 53,306,686 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-14T16:25:00.000 | -1 | 1 | 0 | Reverse engineer scikit-learn serialized model | 53,304,675 | 1.2 | python,machine-learning,scikit-learn,pickle,joblib | No, you cant (in principle, anyway) reverse engineer the data based on a model. You can obviously derive the trained model weights/etc and start to get a good understanding of what it might have been trained over, but directly deriving the data, I'm not aware of any possible way of doing that, providing you're pickling the trained model. | I am trying to understand the security implications of serializing a scikit-learn/keras fitted model (using pickle/joblib etc).
Specifically, if I work on data that I don't want to be revealed, would there be anyway for someone to reverse engineer what data a model was fitted on? Or is the data, just a way for the algorithm to update the relevant coefficients/weights for the algorithm? (If I train the model against "This movie is great" and store it as a foo.pkl file, would I also be able to load the foo.pkl and say it was trained on "This movie is great" if all I have access to is the pkl file and not the data) | 0 | 1 | 450 |
0 | 62,402,653 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2018-11-14T23:57:00.000 | 3 | 1 | 0 | Why is pd.unique() faster than np.unique()? | 53,310,547 | 0.53705 | python,pandas,numpy,data-science,data-analysis | np.unique() is treating the data as an array, so it goes through every value individually then identifies the unique fields.
whereas, pandas has pre-built metadata which contains this information and pd.unique() is simply calling on the metadata which contains 'unique' info, so it doesn't have to calculate it again. | I tried to compare the two, one is pandas.unique() and another one is numpy.unique(), and I found out that the latter actually surpass the first one.
I am not sure whether the speed advantage is linear or not.
Can anyone please tell me why such a difference exists, with regards to the code implementation? In what case should I use which? | 0 | 1 | 1,666 |
0 | 53,312,488 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-15T03:51:00.000 | 0 | 2 | 0 | Compare stock indices of different sizes Python | 53,312,182 | 1.2 | python,plot,statistics,correlation | I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis.
These websites rescale them by fixing the initial starting points for both indices at, say, 100. I.e. if Dow is 25000 points and S&P is 2500, then Dow is divided by 250 to get to 100 initially and S&P by 25. Then you have two indices that start at 100 and you then can compare them side by side.
The other method (which works well only if you have two series) is to put the y-axis on the right-hand side for one series and on the left-hand side for the other. | I am using Python to try and do some macroeconomic analysis of different stock markets. I was wondering about how to properly compare indices of varying sizes. For instance, the Dow Jones is around 25,000 on the y-axis, while the Russel 2000 is only around 1,500. I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. Is there some statistical method where I can do this same thing in Python? | 0 | 1 | 318 |
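A hedged sketch of the rebasing idea described in the answer above (the prices are invented):

```python
import pandas as pd

prices = pd.DataFrame({"dow": [25000.0, 25500.0, 24800.0],
                       "russell": [1500.0, 1520.0, 1490.0]})
rebased = prices / prices.iloc[0] * 100   # every index now starts at 100
print(rebased)                            # directly comparable series
```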
0 | 53,314,237 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-15T06:53:00.000 | 0 | 2 | 0 | How to convert 2D matrix to 3D tensor without blending corresponding entries? | 53,313,913 | 0 | python,tensorflow | You can first go through each of your first three columns and count the number of different products, stores and weeks that you have. This will give you the shape of your new array, which you can create using numpy. Importantly now, you need to create a conversion matrix for each category. For example, if product is 'XXX', then you want to know to which row of the first dimension (as product is the first dimension of your array) 'XXX' corresponds; same idea for store and week. Once you have all of this, you can simply iterate through all lines of your existing array and assign the value of quantity to the correct location inside your new array based on the indices stored in your conversion matrices for each value of product, store and week. As you said, it makes sense because there is a one-to-one correspondence. | I have data with the shape of (3000, 4), the features are (product, store, week, quantity). Quantity is the target.
So I want to reconstruct this matrix into a tensor, without blending the corresponding quantities.
For example, if there are 30 products, 20 stores and 5 weeks, the shape of the tensor should be (5, 20, 30), with the corresponding quantity. An entry like (store A, product X, week 3) won't appear twice in the entire data, so every store x product x week combination should have one corresponding quantity.
Any suggestions about how to achieve this, or is there any logical error? Thanks. | 0 | 1 | 626 |
0 | 53,320,092 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-11-15T12:49:00.000 | 0 | 2 | 0 | Python - No module found | 53,319,860 | 0 | python,macos | In Anaconda Navigator, I have already installed those libraries. What I have done was to delete them and install once again. Now it works for me from both: console and Jupyter Notebook. | I'm new to Python and because I couldn't find a solution for my problem after some researches in google, I'm creating a new question, where I'm sure someone for 100% asked for it already. I have installed miniconda with numpy and pandas, which I want to use. It's located at ~/miniconda. I've created new python file in ~/Desktop, where I have imported those two libraries:
import numpy as np
import pandas as pd
When I run my code, I got an error message:
ModuleNotFoundError: No module named 'numpy'
How can I make miniconda libraries visible in console for the python command? | 0 | 1 | 53 |
0 | 53,320,029 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-11-15T12:49:00.000 | 1 | 2 | 0 | Python - No module found | 53,319,860 | 0.099668 | python,macos | conda has its own version of the Python interpreter. It is located in the Miniconda directory (It's called "Python.exe"). If you are using an IDE you need to switch the interpreter to use this version of Python rather than the default one you may have installed on the internet from the Python website itself. | I'm new to Python and because I couldn't find a solution for my problem after some researches in google, I'm creating a new question, where I'm sure someone for 100% asked for it already. I have installed miniconda with numpy and pandas, which I want to use. It's located at ~/miniconda. I've created new python file in ~/Desktop, where I have imported those two libraries:
import numpy as np
import pandas as pd
When I run my code, I got an error message:
ModuleNotFoundError: No module named 'numpy'
How can I make miniconda libraries visible in console for the python command? | 0 | 1 | 53 |
0 | 53,327,378 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-15T20:21:00.000 | 0 | 1 | 0 | xgboost feature importance of categorical variable | 53,327,334 | 0 | python,xgboost,categorical-data | You could probably get by with summing the individual categories' importances into their original, parent category. But, unless these features are high-cardinality, my two cents would be to report them individually. I tend to err on the side of being more explicit with reporting model performance/importance measures. | I am using XGBClassifier to train in python and there are a handful of categorical variables in my training dataset. Originally, I planed to convert each of them into a few dummies before I throw in my data, but then the feature importance will be calculated for each dummy, not the original categorical ones. Since I also need to order all of my original variables (including numerical + categorical) by importance, I am wondering how to get importance of my original variables? Is it simply adding up? | 0 | 1 | 1,226 |
0 | 53,332,661 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-16T06:20:00.000 | 1 | 1 | 0 | Running python script on external hard disk (training neural network) | 53,332,468 | 0.197375 | python,neural-network | It depends on the read speed of your Hard drive and External hard drive.
Is your hard drive a SSD? If it is, then It sure gonna be way faster than your external hard drive.
If the read speed of your hard disk drive and external is same or similar, then its doesn't matter where you store your dataset.
1) Your python file will be "loaded" into RAM, and executed. So your internal hard disk plays no major role.
2) Again, if your external HDD and internal HDD have similar read speeds, then it doesn't matter. | I have a dataset that is too large to store locally and I want to train a neural network.
Which would be faster? or are they the same?
1) All files are stored on the external hard drive. The python file is run in the directory of the hard drive that loads the data and trains the network
2) Python files are saved locally and the loading of the dataset during training is done by pointing it to the dataset on the external hard drive
I'd assume that the execution speed and loading of the dataset will be equal in both these cases, but I'm not sure | 0 | 1 | 1,158 |
0 | 53,363,572 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-16T07:58:00.000 | 0 | 1 | 0 | How to use dask to populate DataFrame in parallelized task? | 53,333,644 | 1.2 | python,pandas,python-multiprocessing,python-multithreading,dask | The right way to do something like this, in rough outline:
make a function that, for a given argument, returns a data-frame of some part of the total data
wrap this function in dask.delayed, make a list of calls for each input argument, and make a dask-dataframe with dd.from_delayed
if you really need the index to be sorted and the index to partition along different lines than the chunking you applied in the previous step, you may want to do set_index
Please read the docstrings and examples for each of these steps! | I would like to use dask to parallelize a numbercrunching task.
This task utilizes only one of the cores in my computer.
As a result of that task I would like to add an entry to a DataFrame via shared_df.loc[len(shared_df)] = [x, 'y']. This DataFrame should be populated by all the (four) parallel workers / threads in my computer.
How do I have to setup dask to perform this? | 0 | 1 | 48 |
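A hedged sketch of the outline in the answer above (the partial-frame function is invented for illustration):

```python
import pandas as pd
import dask
import dask.dataframe as dd

@dask.delayed
def make_part(x):
    # stand-in for the number-crunching task; returns one partial frame
    return pd.DataFrame({"x": [x], "label": ["y"]})

parts = [make_part(i) for i in range(4)]   # one delayed call per worker/chunk
ddf = dd.from_delayed(parts)               # a single dask DataFrame over all parts
print(ddf.compute())
```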
0 | 53,416,420 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-11-16T08:10:00.000 | 0 | 2 | 0 | Multidimensional gradient descent in Tensorflow | 53,333,794 | 0 | python,tensorflow,gradient-descent | Tensorflow first reduces your loss to a scalar and then optimizes that. | What does Tensorflow really do when the Gradient descent optimizer is applied to a "loss" placeholder that is not a number (a tensor of size 1) but rather a vector (a 1-dimensional tensor of size 2, 3, 4, or more)?
Is it like doing the descent on the sum of the components? | 0 | 1 | 358 |
0 | 61,325,048 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-11-17T18:20:00.000 | 0 | 3 | 0 | How to use F-score as error function to train neural networks? | 53,354,176 | 0 | python,tensorflow,loss-function,precision-recall | the loss value and accuracy is a different concept. The loss value is used for training the NN. However, accuracy or other metrics is to value the training result. | I am pretty new to neural networks. I am training a network in tensorflow, but the number of positive examples is much much less than negative examples in my dataset (it is a medical dataset).
So, I know that F-score calculated from precision and recall is a good measure of how well the model is trained.
I have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculation (if I am not wrong). But how do I use this F-score as an error function? Is there a tensorflow function for that? Or I have to create a new one?
Thanks in advance. | 0 | 1 | 8,429 |
0 | 53,365,423 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-11-18T18:45:00.000 | 1 | 1 | 0 | Is there an algorithm to calculate a numerical rating of the degree of abstractness of a word in NLP? | 53,364,314 | 1.2 | python,nlp,wordnet | There's no definition of abstractness that I know of, neither any algorithm to calculate it.
However, there are several directions I would use as proxies
Frequency - Abstract concepts are likely to be pretty rare in a common speech, so a simple idf should help identify rare words.
Etymology - Common words in English are usually of Germanic origin, while more technical words are usually borrowed from French / Latin.
Supervised learning - If you have Wikipedia articles you find abstract, then their common phrases or words would probably also describe similar abstract concepts. Training a classifier can be a way to score.
There's no ground truth as to what is abstract, and what is concrete, especially if you try to quantify it.
I suggest aggregating these proxies into a metric you find useful for your needs. | Is there an algorithm that can automatically calculate a numerical rating of the degree of abstractness of a word? For example, the algorithm rates purvey as 1, donut as 0, and immodestly as 0.5 (these are example values).
Abstract words in the sense of words that refer to ideas and concepts that are distant from immediate perception, such as economics, calculating, and disputable. On the other side, concrete words refer to things, events, and properties that we can perceive directly with our senses, such as trees, walking, and red.
0 | 53,378,514 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-19T15:52:00.000 | 0 | 1 | 0 | Return distribution over set of action space from Neural Network | 53,378,284 | 1.2 | python,tensorflow,neural-network,probability,reinforcement-learning | Is there any reason you want it to return a matrix of these actions? Why not just map each of the 27 combinations to integers 0-26? So your architecture could look like [Linear(5, n), ReLU, Linear(n, .) ... Softmax(Linear(., 27))]. Then when you need to evaluate, you can just map it back to the action sequence. This is similar to how in NLP tasks you map multidimensional word vectors to integers via stoi for training and bring them back via itos.
I should point out that if your training paradigm involves some further use of these discrete choices (say you use the argmax of this upstream of another net), then the nondifferentiability of argmax means that this architecture will not learn anything. I only mention this because you use the phrase "action space", which is typical in DRL. If this is the case, you may want to consider algorithms like REINFORCE where action sequences can be learned discretely and used upstream via policy gradient.
My action space is a vector of 3 individual actions : [a,b,c]
a can have 3 possible actions within itself a1,a2,a3 and similarly b has b1,b2,b3, c has c1,c2,c3. So In total i can have 27 different combinations of these actions 3^3 = 27. Ultimately the neural network should output 27 combinations of these actions (which is a matrix of 27 x 3) : [[a1,b1,c1],[a2,b2,c2],[a3,b3,c3],[a1,b1,c2],[a1,b1,c3],.....] and so on for all 27 combinations. Just to mention the input to my network is a state which is a vector of 5 elements.
I want a probability associated to each of these 27 combinations.
I know I can associate probability by using a softmax with 27 outputs but I don't understand how the network can output a matrix in this case where every row has a probability associated to it. | 0 | 1 | 57 |
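A hedged sketch of the integer mapping suggested in the answer above (action names are taken from the question):

```python
from itertools import product

actions_a = ["a1", "a2", "a3"]
actions_b = ["b1", "b2", "b3"]
actions_c = ["c1", "c2", "c3"]

combos = list(product(actions_a, actions_b, actions_c))   # 27 joint actions
stoi = {combo: i for i, combo in enumerate(combos)}       # joint action -> class index
itos = {i: combo for combo, i in stoi.items()}            # class index -> joint action

print(len(combos), stoi[("a2", "b1", "c3")], itos[0])
```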
0 | 53,711,345 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-20T11:30:00.000 | 0 | 1 | 0 | Dask how to avoid recomputing things | 53,392,067 | 0 | python,dask,dask-distributed | Yes, this is the usecase that persist is for. The trick is figuring out where to apply it - this decision is usually influenced by:
The size of your intermediate results. These will be kept in memory until all references to them are deleted (e.g. foo in foo = intermediate.persist()).
The shape of your graph. It's better to persist only components that would need to be recomputed, to minimize the memory impact of the persisted values. You can use .visualize() to look at the graph.
The time it takes to compute the tasks. If the tasks are quick to compute, then it may be more beneficial just to recompute them rather than keep them around in memory. | Using dask I have defined a long pipeline of computations; at some point given constraints in apis and version I need to compute some small result (not lazy) and feed it in the lazy operations. My problem is that at this point the whole computation graph will be executed so that I can produce an intermediate results. Is there a way to not loose the work done at this point and have to recompute everything from scratch when in a following step I am storing the final results to disk?
Is using persist supposed to help with that?
Any help will be very appreciated. | 0 | 1 | 210 |
0 | 53,396,837 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-20T15:53:00.000 | 1 | 1 | 0 | Feature selection in K means clustering | 53,396,792 | 0.197375 | python-3.x,cluster-analysis,k-means | The brute force approach is to try all different 380 possibilities.
The non-brute-force approach could be to try your clustering with 19 features (all 20 possible ways of dropping one), keep the best one, then drop one more and select the best of the 19... down to two features.
0 | 53,404,984 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-11-21T03:00:00.000 | 0 | 3 | 0 | How do I train the Convolutional Neural Network with negative and positive elements as the input of the first layer? | 53,404,679 | 0 | python,tensorflow,conv-neural-network | We usually have 3 types of datasets for getting a model trained,
Training Dataset
Validation Dataset
Test Dataset
Training Dataset
This should be an evenly distributed data set which covers all varieties of data. If you train for too many epochs, the model will get used to the training dataset and will only give proper predictions on the training dataset; this is called overfitting. The only way to keep a check on overfitting is by having other datasets which the model has never been trained on.
Validation Dataset
This can be used to fine-tune model hyperparameters.
Test Dataset
This is the dataset which the model has not been trained on, which has never been part of deciding the hyperparameters, and which will show how the model really performs. | I am just curious why I have to scale the testing set on the testing set, and not on the training set, when I'm training a model on, for example, a CNN?!
Or am I wrong? And I still have to scale it on the training set.
Also, can I train a dataset in the CNN that contains positive and negative elements as the first input of the network?
Any answers with reference will be really appreciated. | 0 | 1 | 245 |
0 | 53,404,823 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-11-21T03:00:00.000 | 0 | 3 | 0 | How do I train the Convolutional Neural Network with negative and positive elements as the input of the first layer? | 53,404,679 | 0 | python,tensorflow,conv-neural-network | Scaling data depends upon the requirement as well the feed/data you got. Test data gets scaled with Test data only, because Test data don't have the Target variable (one less feature in Test data). If we scale our Training data with new Test data, our model will not be able to correlate with any target variable and thus fail to learn. So the key difference is the existence of Target variable. | Just I am curious why I have to scale the testing set on the testing set, and not on the training set when I’m training a model on, for example, CNN?!
Or am I wrong? And I still have to scale it on the training set.
Also, can I train a dataset in the CNN that contains positive and negative elements as the first input of the network?
Any answers with reference will be really appreciated. | 0 | 1 | 245 |
0 | 53,413,823 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-21T03:13:00.000 | 0 | 1 | 0 | Mixing questions between Qualtrics Blocks | 53,404,770 | 0 | python,machine-learning,deep-learning,qualtrics | There isn't any easy way to do this.
You could put all 300 questions in the same block. Then in a block before the 300 question block have 3 multiple choice questions (QA, QB, QC) where you have placeholders for the 100 questions as choices (QA: A1, A2, ..., A100; QB: B1, B2, ... , B100; QC: C1, C2, ..., C100). For each of those questions used Advanced Randomization to randomly display 15 choices. Hide the questions with JavaScript. Then in your 300 question block add display logic to each question based on whether its corresponding choice was displayed in the applicable QA, QB, QC question. | I am creating a questionnaire on Qualtrics and I have 3 different blocks of questions. Let's call them A, B, and C. Each of the blocks has 100 questions each. I want to randomly pick 15 questions from each of the blocks. That part is easy. I have used Randomization options available for each block.
However, I want to mix questions across these blocks. Currently, what I can get is
A1 A2 ... A15 B1, B2, ... B15 C1, C2, ... C15
All the questions from one block appear together.
I want to randomize this specific ordering as well. My requirements are:
15 questions from each block selected randomly from a pool of 100 in each block.
Randomly displaying this pool of 45 questions to the user.
How can I do this? I've been stuck at this for hours now. Thanks for your help in advance. | 0 | 1 | 30 |
0 | 53,417,517 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-11-21T17:07:00.000 | 3 | 3 | 0 | what is workers parameter in word2vec in NLP | 53,417,258 | 1.2 | python,machine-learning,nlp,word2vec | workers = use this many worker threads to train the model (=faster training with multicore machines).
If your system has 2 cores and you specify workers=2, then the data will be trained in two parallel threads.
By default, workers=1, i.e. no parallelization. | In the code below,
I didn't understand the meaning of the workers parameter.
model = Word2Vec(sentences, size=300000, window=2, min_count=5, workers=4) | 0 | 1 | 5,032 |
0 | 53,430,651 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-11-22T11:40:00.000 | 0 | 2 | 0 | How to process a voice input in Keras that is not one of the two speaker outputs? | 53,430,205 | 1.2 | python,machine-learning,keras | A neural-net is - in essence - nothing more than a fancy feature-extractor and interpolator.
There is no reason to expect anything specific for data that it's never seen, and this doesn't have much to do with working with the DTFT, MFCC, or I-Vectors, it's a basic principle of data-driven algorithms.
Just as a methodological explanation, not to be taken literally, finding a differentiation between two speakers going "eeeeeeeee" can be done by finding the mean pitch and just deciding upon that one not-so-informative feature.
Then what do you think would happen when introducing a new utterance?
One last point - there are so many ways to solve such a degenerate case, that you will most probably get some overfit. Which can also lead to unexpected results for out-of-sample data.
Regarding the general problem, there are a few different approaches, but I'd recommend having two stages:
1-Class SVM or something similar, to identify that the utterance is in the class you or your friend
The NN you trained.
Another option would be to get enough "general speaker" samples and add them as a third class. This is known in some contexts as the OOS (out of sample) category for classification. That can get you some good googling material :-) | I'm trying to create a speaker recognition with a neural network using Keras and also the Fourier transformation to process the voice samples. The voice samples are me and my friend saying 'eeeee' for 3 seconds. Now the problem is if we give the neural network an input of someone else doing that ('ee' for 3 seconds), it still gives an output that indicates that it's 100% one of us. I output data using softmax so it gives about [1, 0] for my friend and [0, 1] for me. Sometimes it's like [0.95, 0.05].
It's working well except for if we input the data from another person like I said, it still gives like [1,0] although I would expect it to give something like [0.6, 0.4] because it's another voice. Now I have also tried using 2 features of MFCC but it doesn't seem to work either. Would making a third output and train it with random samples work (I myself don't really think so because it can't train for all different inputs)? Or how can I try to face this issue otherwise? I've been struggling with this issue for quite a while now so any help would be much appreciated! | 0 | 1 | 74 |
0 | 53,430,299 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-11-22T11:40:00.000 | 0 | 2 | 0 | How to process a voice input in Keras that is not one of the two speaker outputs? | 53,430,205 | 0 | python,machine-learning,keras | It is not a simple matter. You need a lot more training examples and to do some tests.
You COULD try to train something to be you-and-your-friend vs all, but it won't be that easy and (again) you will need lots of examples.
It's very broad as a question; there are a few different approaches, and I'm not sure Keras and neural networks are the correct answer for this specific task. | I'm trying to create a speaker recognition with a neural network using Keras and also the Fourier transformation to process the voice samples. The voice samples are me and my friend saying 'eeeee' for 3 seconds. Now the problem is if we give the neural network an input of someone else doing that ('ee' for 3 seconds), it still gives an output that indicates that it's 100% one of us. I output data using softmax so it gives about [1, 0] for my friend and [0, 1] for me. Sometimes it's like [0.95, 0.05].
It's working well except for if we input the data from another person like I said, it still gives like [1,0] although I would expect it to give something like [0.6, 0.4] because it's another voice. Now I have also tried using 2 features of MFCC but it doesn't seem to work either. Would making a third output and train it with random samples work (I myself don't really think so because it can't train for all different inputs)? Or how can I try to face this issue otherwise? I've been struggling with this issue for quite a while now so any help would be much appreciated! | 0 | 1 | 74 |
0 | 54,192,123 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-24T09:33:00.000 | 1 | 1 | 0 | ImportError: cannot import name 'convert_kernel' | 53,456,874 | 1.2 | python,tensorflow | I got the same issue. The filename of my python code was "tensorflow.py". After I changed the name to "test.py", the issue was resolved.
I guess there is already a "tensorflow.py" in the tensorflow package. If anyone uses the same name, it may lead to a conflict.
If your python code is also called "tensorflow.py", you can try another name and see if that helps. | When I try to use tensorflow to train a model, I get this error message.
File "/Users/ABC/anaconda3/lib/python3.6/site-packages/keras/utils/layer_utils.py", line 7, in
from .conv_utils import convert_kernel
ImportError: cannot import name 'convert_kernel'
I have already installed Keras. | 0 | 1 | 1,678 |
0 | 53,636,929 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-25T03:39:00.000 | 0 | 1 | 0 | propagation model using neural network (I am beginner) | 53,464,452 | 0 | python,networking,perceptron,propagation | I believe you need a data for the response (PL) and data for the independent variables in order to find n.
you can find n using that data in SPSS, excel, Matlab etc.
Good luck. | Propagation model:
P = 10 * n * log10 (d/do)
P = path loss (dB)
n = the path loss distance exponent
d = distance (m)
do = reference distance (m)
The initial idea is to make the loss measurements 'P' with respect to a distance 'd', and to determine the value of 'n'
My question: is this implementation possible using a multi-layer perceptron?
But what could be my inputs and outputs? I thought of something like:
input: distance 'd'
output: Loss "P"
But I could not think of a solution to determine 'n' from these parameters
The idea is that it is something simple, only for study, to be improved later. | 0 | 1 | 33 |
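For reference, a hedged sketch (not from the answer above; the measurements are invented) of estimating n directly from measured (d, P) pairs, since P = 10 * n * log10(d/do) is linear in n:

```python
import numpy as np

do = 1.0                                   # reference distance (m)
d  = np.array([2.0, 5.0, 10.0, 20.0])      # invented measurement distances (m)
P  = np.array([9.0, 21.0, 30.0, 39.0])     # invented path losses (dB)

x = 10 * np.log10(d / do)
n = np.sum(x * P) / np.sum(x * x)          # least-squares slope through the origin
print(n)                                   # roughly 3 for these invented values
```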
0 | 53,465,899 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-25T05:33:00.000 | 0 | 1 | 0 | Is it Normal for a Neural Network Loss to Increase after being trained on an example? | 53,464,933 | 0 | python,machine-learning,neural-network,lstm,recurrent-neural-network | Is your dataset shuffled?
Otherwise it could be the case that it was predicting one class for the first 99 examples.
If not then LSTM can be tricky to train. Try changing hyper parameters and also I would recommend starting with SimpleRNN, GRU and then LSTM as sometimes a simple network might just do the trick. | I am currently testing an LSTM network. I print the loss of its prediction on a training example before back-propagation and after back-propagation. It would make sense that the after loss should always be less than the before loss because the network was just trained on that example.
However, I am noticing that around the 100th training example, the network begins to give a more inaccurate prediction after back-propagation than before back-propagating on a training example.
Is a network expected to always have the before loss be higher than the after loss? If so, are there any reasons this happens?
To be clear, for the first hundred examples, the network seems to be training correctly and doing just fine. | 0 | 1 | 28 |
0 | 54,864,247 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-11-26T08:15:00.000 | 2 | 3 | 0 | Tensorflow r1.12: TypeError: Type already registered for SparseTensorValue when running a 2nd script | 53,477,005 | 1.2 | python-3.x,tensorflow,spyder | (Spyder maintainer here) This error was fixed in Spyder 3.3.3, released on February/2019. | I have just built Tensorflow r1.12 from source in Ubuntu 16.04. The installation is successful.
When I run a certain script in Spyder for the first time, everything flows smoothly.
However, when I continue to run another script, following errors occur (which didn't happen previously):
File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/init.py", line 24, in
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/python/init.py", line 70, in
from tensorflow.python.framework.framework_lib import * # pylint: disable=redefined-builtin
File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/python/framework/framework_lib.py", line 30, in
from tensorflow.python.framework.sparse_tensor import SparseTensor
File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/python/framework/sparse_tensor.py", line 248, in
pywrap_tensorflow.RegisterType("SparseTensorValue", SparseTensorValue)
TypeError: Type already registered for SparseTensorValue
The temporary solution to avoid such TypeError is to restart the kernel.
But I don't want to restart kernel at every single step of running a script.
Thus, I would like to ask for a critical solution for such kind of issue. Thank you in advance. | 0 | 1 | 2,669 |
0 | 53,508,471 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2018-11-26T20:13:00.000 | 5 | 1 | 0 | Is it possible to append to an existing Feathers format file? | 53,488,351 | 0.761594 | python,pandas,feather | Feather files are intended to be written at once. Thus appending to them is not a supported use case.
Instead, for such a large dataset, I would recommend writing the data into individual Apache Parquet files using pyarrow.parquet.write_table or pandas.DataFrame.to_parquet, and reading the data back into Pandas using pyarrow.parquet.ParquetDataset or pandas.read_parquet. These functions can treat a collection of Parquet files as a single dataset that is read at once into a single DataFrame. | I am working on a very huge dataset with 20 million+ records. I am trying to save all that data in the Feather format for faster access, and also to append to it as I proceed with my analysis.
Is there a way to append a pandas DataFrame to an existing Feather file? | 0 | 1 | 1,519
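A short sketch of the Parquet workflow suggested in the answer above, assuming pyarrow is installed; the file and directory names are made up:
import pandas as pd
import pyarrow.parquet as pq
# write the data in pieces, one Parquet file per chunk, instead of appending to one Feather file
for i, chunk in enumerate(pd.read_csv("big_input.csv", chunksize=1_000_000)):
    chunk.to_parquet("parts/part-{:04d}.parquet".format(i))
# later, read the whole directory back as a single DataFrame
df = pq.ParquetDataset("parts/").read().to_pandas()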
0 | 53,493,633 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-11-27T05:53:00.000 | 1 | 2 | 0 | frozen frames detection openCV python | 53,493,527 | 1.2 | python,opencv,ubuntu-16.04 | This was my approach to solve this issue.
Frozen frames: calculate the absolute difference over the HSV/RGB values of every pixel in two consecutive frames (as NumPy arrays) and determine a maximum allowed difference below which a frame counts as frozen.
Black frames naturally have a very low (or zero) sum of V values over the frame. Determine a maximum whole-frame V sum below which the frame is considered "black". | I'm trying to detect whether a camera is capturing frozen frames or black frames. Suppose a camera is capturing video frames and suddenly the same frame is captured again and again. I have spent a long time trying to come up with an approach to this problem but failed. So how do we detect it? Any ideas/steps/procedure for this problem would help. | 0 | 1 | 805
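A rough sketch of the frame-difference idea from the answer above, using OpenCV; the two thresholds are placeholders that must be tuned for your camera:
import cv2
cap = cv2.VideoCapture(0)            # or a video file path
FROZEN_THRESH = 0.5                  # assumed max mean abs-difference per pixel for a "frozen" frame
BLACK_THRESH = 10.0                  # assumed max mean V value for a "black" frame
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(frame, prev).mean()                            # mean per-pixel change vs previous frame
    v_mean = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 2].mean()   # brightness channel
    if diff < FROZEN_THRESH:
        print("frozen frame suspected")
    if v_mean < BLACK_THRESH:
        print("black frame suspected")
    prev = frame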
0 | 53,504,769 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-27T17:01:00.000 | 1 | 1 | 0 | Ignore edges when calculating betweenness or closeness of the graph | 53,504,630 | 0.197375 | python,networkx,igraph | You could simply remove the edges you want to ignore before running the computations, and keep a record of what edges you have to put back when you're done. | I want to do calculation on my graph neglecting some edges (as if they don't exist). Like calculation of degree, closeness, or betweenness.
any ideas !
Python | 0 | 1 | 32 |
0 | 53,509,981 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-27T23:39:00.000 | -1 | 2 | 0 | How to create a column of (x,y) pairs in a data frame | 53,509,923 | -0.099668 | python,pandas | Implementing (x,y) coordinates in 1 column would be unnecessarily complex and hacky. I strongly recommend you make two columns, for example pair1_x and pair1_y. Is there a particular reason you need one column? | I am trying to get a column of (x,y) coordinate pairs in my pandas data frame. I want to be able to access each part of the coordinate. For example, if the title of the column is 'pair1' I want to be able to call pair1[0] and pair1[1] to access the x and y integers respectively. Ultimately, I'd be passing these into a distance function to get the distance between 'pair1' and 'pair2'. Thanks! | 0 | 1 | 854 |
0 | 54,064,392 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-11-28T14:13:00.000 | 3 | 1 | 0 | XGBoost (Python) Prediction for Survival Model | 53,521,427 | 0.53705 | python,xgboost | No, I think not. A workaround would be to fit the baseline hazard in another package e.g. from sksurv.linear_model import CoxPHSurvivalAnalysis or in R by require(survival). Then you can use the predicted output from XGBoost as multiplyers to the fitted baseline. Just remember that if the baseline is on the log scale then use output_margin=True and add the predictions.
I hope the authors of XGBoost soon will provide some examples of how to use this function. | The docs for Xgboost imply that the output of a model trained using the Cox PH loss will be exponentiation of the individual persons predicted multiplier (against the baseline hazard). Is there no way to extract from this model the baseline hazard in order to predict the entire survival curve per person?
survival:cox: Cox regression for right censored survival time data
(negative values are considered right censored). Note that predictions
are returned on the hazard ratio scale (i.e., as HR =
exp(marginal_prediction) in the proportional hazard function h(t) =
h0(t) * HR) | 0 | 1 | 2,074 |
0 | 53,529,130 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-11-28T20:06:00.000 | 0 | 2 | 0 | NetworkX plotting: different units/scale between node positions and sizes? | 53,527,307 | 0 | python,matplotlib,networkx | Networkx uses matplotlib to plot things. It does not use pixels for its coordinates, and for good reason.
If you have coordinates whose values range from -0.01 to 0.01, it will produce a plot that scales the upper and lower bounds on the coordinates to be large enough to hold this, but not so large that everything is in a tiny little bit of the plot. If you now add points with coordinate values around 100, it will rescale the plot to show these as well.
Think about how it would look to plot the line y = x+1 for x in (-0.5, 0.5) if matplotlib insisted that 1 had to correspond to a pixel. | I'm working on a graph with (x,y) node coordinates randomly picked from 0-100. If I simply plot the graph using nx.draw() and passing the original coordinates it looks ok, but if I try to plot some node sizes in a way it relates to coordinates it looks clearly inconsistent.
It looks like the node position parameter in draw() is not in the same units as the node sizes, which are in pixels. Sadly there's nothing about position units in the NetworkX documentation... | 0 | 1 | 307
0 | 53,529,069 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-11-28T20:06:00.000 | 1 | 2 | 0 | NetworkX plotting: different units/scale between node positions and sizes? | 53,527,307 | 1.2 | python,matplotlib,networkx | Ok, I figured it out...
The position parameter for nodes is relative (from 0.0 to 1.0 times whatever your plot size is), while the size parameter is absolute, in pixels | I'm working on a graph with (x,y) node coordinates randomly picked from 0-100. If I simply plot the graph using nx.draw() and pass the original coordinates it looks ok, but if I try to make the node sizes relate to the coordinates it looks clearly inconsistent.
It looks like the node position parameter in draw() is not in the same units as the node sizes, which are in pixels. Sadly there's nothing about position units in the NetworkX documentation... | 0 | 1 | 307
0 | 53,528,742 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-11-28T21:44:00.000 | 1 | 3 | 0 | Pandas read and parse Excel data that shows as a datetime, but shouldn't be a datetime | 53,528,583 | 0.066568 | python,pandas | Pandas is not at fault; it is Excel that is interpreting the data wrongly.
Set that column to text and it won't be interpreted as a date.
Then save the file, open it through pandas, and it should work fine.
Otherwise, export it as CSV and try to open that in pandas. | I have a system I am reading from that implemented a time tracking function in a pretty poor way - it shows the tracked working time as [hh]:mm in the cell. Now this is problematic when attempting to read this data, because when you click that cell the data bar shows 11:00:00 PM, but what that 23:00 actually represents is 23 hours of time spent, not 11 PM. So whenever the time is 24:00 or more you end up with 1/1/1900 12:00:00 AM and on up (25:00 = 1/1/1900 01:00:00 AM).
So pandas picks up the 11:00:00 PM or 1/1/1900 01:00:00 AM when it comes into the dataframe. I am at a loss as to how to put this back into int form and get the number of hours in whole-number format: 24, 25, 32, etc.
Can anyone help me figure out how to turn this horribly formatted data into the number of hours in int format? | 0 | 1 | 470 |
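One possible way to undo the Excel parsing described above, once the column has already been read by pandas. This is only a sketch: the column names are made up, and the 1899-12-31 anchor date is an assumption that should be checked against a few known cells in your file:
import datetime as dt
import pandas as pd
# df = pd.read_excel("timesheet.xlsx")        # however you currently load it
EXCEL_DAY_ZERO = dt.datetime(1899, 12, 31)    # assumption: 25:00 arrives as 1900-01-01 01:00
def to_hours(value):
    # values under 24h often arrive as plain time objects
    if isinstance(value, dt.time):
        return value.hour
    # values of 24h or more arrive as 1900-era datetimes
    value = pd.Timestamp(value).to_pydatetime()
    return int((value - EXCEL_DAY_ZERO).total_seconds() // 3600)
df["hours_worked"] = df["tracked_time"].apply(to_hours)   # column names are hypothetical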
0 | 53,535,742 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2018-11-29T00:38:00.000 | 11 | 2 | 0 | The loss function and evaluation metric of XGBoost | 53,530,189 | 1 | python,machine-learning,xgboost,xgbclassifier | 'binary:logistic' uses -(y*log(y_pred) + (1-y)*(log(1-y_pred)))
'reg:logistic' uses (y - y_pred)^2
To get a total estimation of error we sum all errors and divide by number of samples.
You can find this in the basics. When looking on Linear regression VS Logistic regression.
Linear regression uses (y - y_pred)^2 as the Cost Function
Logistic regression uses -(y*log(y_pred) + (1-y)*log(1-y_pred)) as the Cost function
Evaluation metrics are a completely different thing. They are designed to evaluate your model. You can be confused by them because it is logical to use an evaluation metric that is the same as the loss function, like MSE in regression problems. However, in binary problems it is not always wise to look at the logloss. My experience has taught me (in classification problems) to generally look at AUC ROC.
EDIT
according to xgboost documentation:
reg:linear: linear regression
reg:logistic: logistic regression
binary:logistic: logistic regression for binary classification, output
probability
So I'm guessing:
reg:linear: is as we said, (y - y_pred)^2
reg:logistic is -(y*log(y_pred) + (1-y)*log(1-y_pred)) with predictions rounded at a 0.5 threshold
binary:logistic is plain -(y*log(y_pred) + (1-y)*(log(1-y_pred))) (returns the probability)
You can test it out and see if it behaves as I've described. If so, I will update the answer; otherwise, I'll just delete it :< | I am confused about the loss functions used in XGBoost. Here is what confuses me:
We have objective, which is the loss function that needs to be minimized, and eval_metric, the metric used to report the learning result. These two are totally unrelated (apart from restrictions such as only logloss and mlogloss being usable as eval_metric for classification). Is this correct? If so, then for a classification problem, how can you use rmse as a performance metric?
take two options for objective as an example, reg:logistic and binary:logistic. For 0/1 classifications, usually binary logistic loss, or cross entropy should be considered as the loss function, right? So which of the two options is for this loss function, and what's the value of the other one? Say, if binary:logistic represents the cross entropy loss function, then what does reg:logistic do?
what's the difference between multi:softmax and multi:softprob? Do they use the same loss function and just differ in the output format? If so, that should be the same for reg:logistic and binary:logistic as well, right?
supplement for the 2nd problem
Say the loss function for a 0/1 classification problem should be
L = sum(y_i*log(P_i) + (1-y_i)*log(1-P_i)). Do I need to choose binary:logistic here, or reg:logistic, to make the xgboost classifier use loss function L? If it is binary:logistic, then what loss function does reg:logistic use? | 0 | 1 | 19,483
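A small numeric illustration of the loss-vs-metric distinction discussed in the record above: the training objective and the eval_metric are computed independently from the same predicted probabilities, which is why something like rmse can be reported even for a classifier. Toy numbers only:
import numpy as np
y = np.array([1, 0, 1, 1, 0])                 # true 0/1 labels
p = np.array([0.9, 0.2, 0.6, 0.8, 0.4])       # predicted probabilities from the model
logloss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))   # what binary:logistic minimises
rmse = np.sqrt(np.mean((y - p) ** 2))                          # what eval_metric='rmse' would report
print(logloss, rmse)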
0 | 53,540,809 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-11-29T13:43:00.000 | 0 | 1 | 0 | How to use Tensorflow tf.nn.Conv2d simultaneously for training and prediction? | 53,540,424 | 1.2 | python,tensorflow,neural-network,deep-learning,reinforcement-learning | You need to set your Placeholder like follows
tf.placeholder(shape=(None, 160, 128, 3), ...). With None in the first dimension, your placeholder will be flexible to any batch size you feed, whether 1 or 100. | I am currently diving deeper into TensorFlow and I am a bit puzzled by the proper use of tf.nn.conv2d(input, filter, strides, padding). Although it looks simple at first glance, I cannot get my head around the following issue:
The use of filter, strides, padding is clear to me. However what is not clear is the correct application of the input.
I am coming from a reinforcement learning Atari (Pong) problem in which I want to use the network for batch training AND (with a certain probability) also for predictions in each step. That means, for training I am feeding the network a full batch of let's say 100 , each single unit consisting of 3 frames with size 160, 128. Using the NHWC format of tensorflow my input to input would be a tf.placeholder of shape (100,160,128,3). So for training I am feeding 100 160x128x3 packages.
However, when predicting outputs from my network (going up or down with the pong paddle) in a certain situation I am only feeding the network one package of 160x128x3 (i.e. one package of three frames). Now this is where tensorflow crashes. It expects (100,160,128,3) but receives (1,160,128,3).
Now I am puzzled. I obviously do not want to set the batch size to 1 and always feed only one package for training. But how can I proceed here? How shall this be implemented with tf.nn.conv2d?
It would be very much appreciated if someone can steer me in the right direction here.
Thank you in advance for your help!
Kevin | 0 | 1 | 221 |
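A minimal TF1-style sketch of the answer above: one placeholder with a None batch dimension serves both the batch-of-100 training step and the single-frame prediction step. The variable names are made up:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 160, 128, 3])        # None = any batch size
filters = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1))
conv = tf.nn.conv2d(x, filters, strides=[1, 1, 1, 1], padding="SAME")
# training:   sess.run(train_op, feed_dict={x: batch_of_100})     # shape (100, 160, 128, 3)
# prediction: sess.run(action, feed_dict={x: state[None, ...]})   # shape (1, 160, 128, 3)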
0 | 53,543,239 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-29T15:58:00.000 | 2 | 1 | 0 | Using regular python code on a Spark cluster | 53,542,955 | 1.2 | python,apache-spark,distributed-computing | Spark use RDD(Resilient distributed dataset) to distribute work among workers or slaves , I dont think you can use your existing code in python without dramatically adapting the code to spark specification , for tensorflow there are many options to distribute computing over multiple gpus. | Can I run a normal python code using regular ML libraries (e.g., Tensorflow or sci-kit learn) in a Spark cluster? If yes, can spark distribute my data and computation across the cluster? if no, why? | 0 | 1 | 238 |
0 | 53,543,608 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-29T15:59:00.000 | 1 | 2 | 0 | Pytorch: Normalize Image data set | 53,542,974 | 0.099668 | python,deep-learning,computer-vision,conv-neural-network,pytorch | What normalization tries to do is maintain the overall information in your dataset even when there are differences in the values. In the case of images it tries to factor out issues like brightness and contrast that in certain cases do not contribute to the general information the image carries. There are several ways to do this, each with pros and cons, depending on the image set you have and the processing effort you want to spend on it. Just to name a few:
Linear histogram stretching: you do a linear map on the current range of values in your image and stretch it to match the 0 and 255 values in RGB.
Nonlinear histogram stretching: you use a nonlinear function to map the input pixels to a new image. Commonly used functions are logarithms and exponentials. My favorite function is the cumulative probability function of the original histogram; it works pretty well.
Adaptive histogram equalization: you do a linear histogram stretching in certain places of your image to avoid doing an identity mapping where you already have the maximum range of values in your original image. | I want to normalize a custom dataset of images. For that I need to compute the mean and standard deviation by iterating over the dataset. How can I normalize my entire dataset before creating the dataset object? | 0 | 1 | 3,508
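Regarding the question above (computing the mean and standard deviation over a custom image dataset), a common sketch is a single pass over a DataLoader; it assumes the dataset already yields C x H x W float tensors:
import torch
from torch.utils.data import DataLoader
loader = DataLoader(dataset, batch_size=64, shuffle=False)   # `dataset` is your (image, label) dataset
n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:
    b, c, h, w = images.shape
    images = images.view(b, c, -1)
    n_pixels += b * h * w
    channel_sum += images.sum(dim=(0, 2))
    channel_sq_sum += (images ** 2).sum(dim=(0, 2))
mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
# feed these into torchvision.transforms.Normalize(mean, std)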
0 | 53,544,202 | 0 | 1 | 1 | 0 | 1 | false | 3 | 2018-11-29T16:03:00.000 | 1 | 1 | 0 | Algorithm to find minimum number of resistors from a set of resistor values. (C++ or Python) | 53,543,066 | 0.197375 | python,c++,algorithm | This is actually quite hard; the best I can do is propose an idea for an algorithm that solves the first part. Including parallels looks harder as well, but maybe the algorithm can be extended.
Define a function "best" which takes a target resistance as input and outputs the minimal set of resistors that generates that resistance. E.g., if you only had 10K resistors, best(50K) = 5 * 10K.
This function "best" has the following properties for a set of available resistors [A, B, C,...]:
best(A) = A for any A in the set.
best(Target) = min(best(Target-A) + A, best(Target-B) + B, ...)
best(0)=0
best(x)=nonsense, if x<0 (remove these cases)
This can be used to reductively solve the problem. (I'd probably recommend storing the variables down the tree as you go along.)
Here's an example to illustrate a bit:
AvailableSet = [10K, 100K]
Target = 120K
First iteration:
best(120K) = min[ best(110K) + 10K, best(20K) + 100K]
calculate each subtree:
best(110K) = min[best(100K) + 10K, best(10K) + 100K]
This is now finalised as we can calculate everything in the min(_) by using the properties, so work back up the tree:
best(110K) = 100K + 10K ( I suppose if there is a tie like in this case pick a permuation randomly)
best(120K) = min[best(110K) + 10K , best(20K) + 100K] = ... = 100K + 10K + 10K
That should work as a solution to the first half of the problem, you may be able to extend this by adding extra properties, but it will make it harder to solve the problem reductively in this way.
Probably the best way is to solve this first half of the problem and use a different algorithm to find the best solution using parallels and decide which is minimal in each case. | I'm trying to design an algorithm that takes in a resistance value and outputs the minimum number of resistors and the values associated with those resistors. I would like the algorithm to iterate through a set of resistor values and the values in the set can be used no more than n times. I would like some direction on where to start.
Series: Req = R1 + R2 +...
Parallel: (1/Req) = (1/R1) + (1/R2) +...
Input:
100000 (100k)
Set: {30k, 50k, 80k, 200k}
Output:
2 resistors in series: 50k + 50k
2 resistors in parallel: 200k || 200k | 0 | 1 | 241 |
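A small Python sketch of the "best" recursion described in the answer above, memoized with lru_cache. It only covers the series case and, as an assumption, lets every value be reused without limit (the question's "at most n times" constraint would need extra bookkeeping):
from functools import lru_cache
VALUES = (30, 50, 80, 200)     # resistor values in kOhm; reuse is unlimited in this sketch
@lru_cache(maxsize=None)
def best(target):
    """Shortest tuple of series resistors from VALUES summing to target, or None if impossible."""
    if target == 0:
        return ()
    options = []
    for r in VALUES:
        rest = best(target - r) if target - r >= 0 else None
        if rest is not None:
            options.append((r,) + rest)
    return min(options, key=len) if options else None
print(best(100))    # -> (50, 50), matching the expected "2 resistors in series" output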
0 | 53,709,777 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-29T23:14:00.000 | 1 | 2 | 0 | is there a way to download the frozen graph ( .pb file ) of PoseNet? | 53,548,976 | 0.099668 | python,tensorflow,neural-network,deep-learning,tensorflow.js | We currently do not have the frozen graph for inference publicly, however you could download the assets and run them in a Node.js environment. | I intend to use posenet in python and not in browser, for that I need the model as a frozen graph to do inference on. Is there a way to do that? | 0 | 1 | 2,638 |
0 | 53,557,733 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-30T12:32:00.000 | 0 | 2 | 0 | not having to load a dataset over and over | 53,557,674 | 0 | python,global-variables,spyder | It depends how large your data set is.
For relatively smaller datasets you could look at installing Anaconda Python Jupyter notebooks. Really great for working with data and visualisation once the dataset is loaded. For larger datasets you can write some functions / generators to iterate efficiently through the dataset. | Currently in R, once you load a dataset (for example with read.csv), Rstudio saves it as a variable in the global environment. This ensures you don't have to load the dataset every single time you do a particular test or change.
With Python, I do not know which text editor/IDE will allow me to do this. E.G - I want to load a dataset once, and then subsequently do all sorts of things with it, instead of having to load it every time I run the script.
Any points as to how to do this would be very useful | 0 | 1 | 119 |
0 | 53,558,970 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-30T13:52:00.000 | 0 | 1 | 0 | IndexError: index 43462904 is out of bounds for size 43462904 | 53,558,877 | 0 | python-3.x,group-by,compiler-errors,index-error | An array of length N can be indexed with 0 ... N-1:
arr = [0,1,2]
arr[0]: 0
arr[1]: 1
arr[2]: 2
len(arr): 3
In this example you try to access arr[3], which is invalid as it would be the (N+1)-th entry of the array. | I have a data set with 43,462,904 records (over 43 million). I am trying to do a group by with two variables and take the average of a third one.
The function is: df1 = df.groupby(["var1", pd.Grouper(key="var2", freq="MS")]).mean()
The error that comes out is the following: IndexError: index 43462904 is out of bounds for size 43462904
Is the error because I have a long dataset? The same function works on a small subset of the data. | 0 | 1 | 162
0 | 53,559,275 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-30T14:10:00.000 | 0 | 2 | 0 | Does the test set is used to update weight in a deep learning model with keras? | 53,559,147 | 0 | python,keras,deep-learning | No, you shouldn't use your test set for training, to prevent overfitting. If you follow cross-validation principles you need to split your data into three datasets: a train set which you'll use to train your model, a validation set to test different values of your hyperparameters, and a test set to finally test your model. If you use all your data for training, your model will obviously overfit.
One thing to remember: deep learning works well if you have a large and very rich dataset. | I'm wondering if the result on the test set is used to optimize the model's weights. I'm trying to build a model, but the issue I have is that I don't have much data because the subjects are medical research patients. The number of patients is limited in my case (61) and I have 5 feature vectors per patient. What I tried is to create a deep learning model excluding one subject, and I used the excluded subject as the test set. My problem is that there is large variability in subject features, and my model fits the training set (60 subjects) well but not the 1 excluded subject.
So I'm wondering if the test set (in my case the excluded subject) could be used in a certain way to make the model converge so it classifies the excluded subject better? | 0 | 1 | 204
0 | 53,560,972 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-30T15:43:00.000 | 2 | 1 | 0 | Unusual order of dimensions of an image matrix in python | 53,560,638 | 1.2 | python-3.x,matlab | You misunderstood. You do not want to reshape, you want to transpose it. In MATLAB, arrays are A(x,y,z) while in python they are P[z,y,x]. Make sure that once you load the entire matrix, you change the first and last dimensions.
You can do this with the swapaxes function, but beware! It does not make a copy nor change the data; it just changes how the higher-level indices of the ndarray access the internal memory. Your best bet, if you have enough RAM, is to make a copy and dump the original. | I downloaded a dataset which contains a MATLAB file called 'depths.mat' which contains a 3-dimensional matrix with the dimensions 480 x 640 x 1449. These are actually 1449 images, each with the dimension 640 x 480. I successfully loaded it into python using the scipy library but the problem is the unusual order of the dimensions. This makes Python think that there are 480 images with the dimensions 640 x 1449. I tried to reshape the matrix in python, but a simple reshape operation did not solve my problem.
Any suggestions are welcome. Thank you. | 0 | 1 | 42 |
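A short sketch of the transpose approach from the answer above; the variable key inside the .mat file is an assumption (check loadmat's keys), and the axis permutation may need adjusting depending on whether you want 480x640 or 640x480 slices:
import numpy as np
from scipy.io import loadmat
mat = loadmat("depths.mat")
depths = mat["depths"]                            # assumed key; shape (480, 640, 1449)
depths = np.transpose(depths, (2, 0, 1)).copy()   # -> (1449, 480, 640); .copy() makes it contiguous
first_image = depths[0]                           # one depth map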
0 | 53,564,337 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-30T20:12:00.000 | 0 | 2 | 0 | Pandas memory error when saving DataFrame to file | 53,564,278 | 0 | python,pandas | There are several options. You can pickle the dataframe or you can use hdf5 format.
These will occupy less memory. Also, when you load it next time, it will be quicker than other formats. | I finally managed to join two big DataFrames on a big machine at my school (512G memory). At the moment we are two people using the same machine; the other one is using about 120G of the memory, and after I called the garbage collector we get to 420G.
I want to save the DataFrame to disk so that I can reuse it easily and move it to another machine. I have tried to export it to a Parquet file, but I get a memory error...
So how can I dump that DataFrame to the hard drive for later reuse without running into a memory error when memory is already nearly full?
Thank you | 0 | 1 | 1,372 |
0 | 53,566,219 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-11-30T21:40:00.000 | 0 | 2 | 0 | word2vec: user-level, document-level embeddings with pre-trained model | 53,565,271 | 0 | python,twitter,nlp,word2vec,word-embedding | You are on the right track with averaging the word vectors in a tweet to get a "tweet vector" and then averaging the tweet vectors for each user to get a "user vector". Whether these average vectors will be useful or not depends on your learning task. Hard to say if this average method will work or not without trying since it depends on how diverse is your data set in terms of variation between the words used in tweets by each user. | I am currently developing a Twitter content-based recommender system and have a word2vec model pre-trained on 400 million tweets.
How would I go about using those word embeddings to create a document/tweet-level embedding and then get the user embedding based on the tweets they had posted?
I was initially intending on averaging those words in a tweet that had a word vector representation and then averaging the document/tweet vectors to get a user vector but I wasn't sure if this was optimal or even correct. Any help is much appreciated. | 0 | 1 | 312 |
0 | 53,612,424 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2018-11-30T21:40:00.000 | 2 | 2 | 0 | word2vec: user-level, document-level embeddings with pre-trained model | 53,565,271 | 1.2 | python,twitter,nlp,word2vec,word-embedding | Averaging the vectors of all the words in a short text is one way to get a summary vector for the text. It often works OK as a quick baseline. (And, if all you have, is word-vectors, may be your main option.)
Such a representation might sometimes improve if you did a weighted average based on some other measure of relative term importance (such as TF-IDF), or used raw word-vectors (before normalization to unit-length, as pre-normalization raw magnitudes can sometimes hints at strength-of-meaning).
You could create user-level vectors by averaging all their texts, or by (roughly equivalently) placing all their authored words into a pseudo-document and averaging all those words together.
You might retain more of the variety of a user's posts, especially if their interests span many areas, by first clustering their tweets into N clusters, then modeling the user as the N centroid vectors of the clusters. Maybe even the N varies per user, based on how much they tweet or how far-ranging in topics their tweets seem to be.
With the original tweets, you could also train up per-tweet vectors using an algorithm like 'Paragraph Vector' (aka 'Doc2Vec' in a library like Python gensim.) But, that can have challenging RAM requirements with 400 million distinct documents. (If you have a smaller number of users, perhaps they can be the 'documents', or they could be the predicted classes of a FastText-in-classification-mode training session.) | I am currently developing a Twitter content-based recommender system and have a word2vec model pre-trained on 400 million tweets.
How would I go about using those word embeddings to create a document/tweet-level embedding and then get the user embedding based on the tweets they had posted?
I was initially intending on averaging those words in a tweet that had a word vector representation and then averaging the document/tweet vectors to get a user vector but I wasn't sure if this was optimal or even correct. Any help is much appreciated. | 0 | 1 | 312 |
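A sketch of the plain-averaging baseline described in the accepted answer above, assuming the pre-trained vectors can be loaded as gensim KeyedVectors (the file name is made up):
import numpy as np
from gensim.models import KeyedVectors
wv = KeyedVectors.load("tweets_w2v.kv")          # adapt to however your model is stored
def tweet_vector(tokens):
    vecs = [wv[w] for w in tokens if w in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)
def user_vector(tweets):                         # tweets: list of token lists for one user
    return np.mean([tweet_vector(t) for t in tweets], axis=0)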
0 | 53,572,155 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2018-12-01T14:48:00.000 | 4 | 1 | 0 | How to see updated Dataframe after I run the code again in Spyder (without doubleclicking from Variable explorer after EVERY run)? | 53,571,920 | 1.2 | python,pandas,dataframe,spyder | (Spyder maintainer here) Unfortunately this is not possible as of September 2020. However, we'll try to implement this functionality in Spyder 5, to be released in 2021. | Is there a way to see an updated version of my Dataframe every time I run the code in Spyder? I can see the name and size of the Dataframe in "Variable explorer" but I don't like that I have to double click it to open it.
Or is there a way to have the Dataframe (that I have already earlier opened by double clicking it) update after I run the code again? | 0 | 1 | 219 |
0 | 53,577,656 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-02T04:56:00.000 | 0 | 3 | 0 | How do I alternate the color of a scatter plot in matplotlib? | 53,577,532 | 0 | python,matplotlib | The simplest way to do this is probably to implement the logic outside the plot, by assigning a different group to each point defined by your circle-crossing concept.
Once you have these group-indexes it's a simple plt.scatter, using the c (stands for "color") input.
Good luck! | I have an array of data points of dimension n by 2, where the second dimension (of size 2) corresponds to the real part and imaginary part of a complex number.
Now I know the data points will intersect the unit circle on the plane a couple of times. What I want to implement is: suppose the path starts with some color; it changes to another color when it touches the unit circle on the plane, and changes color again if it intersects the unit circle again. I am not sure whether there is an easy way to implement this. | 0 | 1 | 1,989
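A sketch of the grouping idea from the answer above: give every point a segment index that increments each time the path crosses the unit circle, then pass those indices to the c argument of plt.scatter. The input array name is made up:
import numpy as np
import matplotlib.pyplot as plt
pts = np.load("path.npy")                    # shape (n, 2): real part, imaginary part
r = np.hypot(pts[:, 0], pts[:, 1])
inside = r < 1.0
crossings = np.concatenate([[0], np.cumsum(inside[1:] != inside[:-1])])   # new index at each crossing
plt.scatter(pts[:, 0], pts[:, 1], c=crossings, cmap="tab10", s=10)
plt.gca().add_artist(plt.Circle((0, 0), 1.0, fill=False, linestyle="--"))
plt.gca().set_aspect("equal")
plt.show()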
0 | 53,586,551 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-03T01:15:00.000 | 1 | 1 | 0 | Different sized vectors in word2vec | 53,586,254 | 1.2 | python,vector,word2vec | You run the code for one model three times, each time supplying a different vector_size parameter to the model initialization. | I am trying to generate three different sized output vectors namely 25d, 50d and 75d. I am trying to do so by training the same dataset using the word2vec model. I am not sure how I can get three vectors of different sizes using the same training dataset. Can someone please help me get started on this? I am very new to machine learning and word2vec. Thanks | 0 | 1 | 25 |
0 | 53,609,235 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-12-03T03:16:00.000 | 1 | 2 | 0 | Missing required dependencies ['numpy'] for anaconda ver5.3.1 on pycharm | 53,587,001 | 0.099668 | python,python-3.x,pycharm,anaconda,conda | Your pycharm created a new environment for your project I suspect. Maybe it copied across the anaconda python.exe but not all the global packages.
In pycharm you can go to the project properties where you can see a list of all the packages available, and add additional packages. Here you can install Numpy.
File --> Settings --> Project: --> Project Interpreter | I just installed anaconda ver5.3.1 which uses python 3.7.
I encountered the following error;
"Missing required dependencies {0}".format(missing_dependencies))
ImportError: Missing required dependencies ['numpy']
I have upgraded numpy, pandas to the latest version using conda but the same error appears. To fix this problem, I have to downgrade to an older anaconda version which uses python 3.6
I am using Windows 10.
EDIT: I just discovered this problem is more related to pycharm than anaconda. I got this error while running pycharm in debug mode. However, when I run the same python script in Anaconda prompt console, there is no error.
What are some possible pycharm settings I should check to fix this problem? Are there ways to configure pycharm to output more verbose error messages? | 0 | 1 | 5,309 |
0 | 53,601,635 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-12-03T05:29:00.000 | 4 | 1 | 0 | What is the meaning of "size" of word2vec vectors [gensim library]? | 53,587,960 | 1.2 | python,gensim,word2vec,word-embedding | It is not the case that "[word2vec] aims to represent each word in the dictionary by a vector where each element represents the similarity of that word with the remaining words in the dictionary".
Rather, given a certain target dimensionality, like say 100, the Word2Vec algorithm gradually trains word-vectors of 100-dimensions to be better and better at its training task, which is predicting nearby words.
This iterative process tends to force words that are related to be "near" each other, in rough proportion to their similarity - and even further the various "directions" in this 100-dimensional space often tend to match with human-perceivable semantic categories. So, the famous "wv(king) - wv(man) + wv(woman) ~= wv(queen)" example often works because "maleness/femaleness" and "royalty" are vaguely consistent regions/directions in the space.
The individual dimensions, alone, don't mean anything. The training process includes randomness, and over time just does "whatever works". The meaningful directions are not perfectly aligned with dimension axes, but angled through all the dimensions. (That is, you're not going to find that a v[77] is a gender-like dimension. Rather, if you took dozens of alternate male-like and female-like word pairs, and averaged all their differences, you might find some 100-dimensional vector-dimension that is suggestive of the gender direction.)
You can pick any 'size' you want, but 100-400 are common values when you have enough training data. | Assume that we have 1000 words (A1, A2,..., A1000) in a dictionary. As fa as I understand, in words embedding or word2vec method, it aims to represent each word in the dictionary by a vector where each element represents the similarity of that word with the remaining words in the dictionary. Is it correct to say there should be 999 dimensions in each vector, or the size of each word2vec vector should be 999?
But with Gensim Python, we can modify the value of "size" parameter for Word2vec, let's say size = 100 in this case. So what does "size=100" mean? If we extract the output vector of A1, denoted (x1,x2,...,x100), what do x1,x2,...,x100 represent in this case? | 0 | 1 | 2,261 |
0 | 53,592,746 | 0 | 0 | 1 | 0 | 2 | true | 0 | 2018-12-03T11:15:00.000 | 0 | 2 | 0 | How to embed machine learning python codes in hardware platform like raspberry pi? | 53,592,665 | 1.2 | python,machine-learning,raspberry-pi,artificial-intelligence,robotics | Python supports wide range of platforms, including arm-based.
Your Raspberry Pi supports Linux distros; just install Python and go on. | I want to implement machine learning on a hardware platform that can learn by itself. Is there any way by which machine learning on hardware works seamlessly? | 0 | 1 | 68
0 | 67,797,110 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2018-12-03T11:15:00.000 | 0 | 2 | 0 | How to embed machine learning python codes in hardware platform like raspberry pi? | 53,592,665 | 0 | python,machine-learning,raspberry-pi,artificial-intelligence,robotics | First, you may want to be clear on hardware - there is wide range of hardware with various capabilities. For example raspberry by is considered a powerful hardware. EspEye and Arduio Nano 33 BLE considered low end platforms.
It also depends on which ML solution you are deploying. I think the most widely deployed method is the neural network. Generally, the workflow is to train the model on a PC or in the cloud using lots of data. This is done on a PC due to the large amount of resources needed to run backprop. Inference is much lighter and can be done on the edge devices. | I want to implement machine learning on a hardware platform that can learn by itself. Is there any way by which machine learning on hardware works seamlessly? | 0 | 1 | 68
0 | 53,610,813 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-12-03T11:56:00.000 | 0 | 2 | 0 | Train SqueezeNet model using MNIST dataset Pytorch | 53,593,363 | 0 | python,neural-network,pytorch,mnist,torchvision | The initialization of the pretrained weights is possible but you'll get trouble with the strides and kernel sizes since MNIST images are 28X28 pixels. Most probably the reduction will lead to (batch_sizex1x1xchannel) feature maps before the net is at its infernece layer which will then cause an error. | I want to train SqueezeNet 1.1 model using MNIST dataset instead of ImageNet dataset.
Can i have the same model as torchvision.models.squeezenet?
Thanks! | 0 | 1 | 1,212 |
0 | 53,806,861 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-03T21:09:00.000 | 0 | 3 | 0 | Using MFCC's for voice recognition | 53,601,892 | 0 | python,keras,neural-network,voice-recognition,mfcc | You can use MFCCs with dense layers / multilayer perceptron, but probably a Convolutional Neural Network on the mel-spectrogram will perform better, assuming that you have enough training data. | I'm currently using the Fourier transformation in conjunction with Keras for voice recogition (speaker identification). I have heard MFCC is a better option for voice recognition, but I am not sure how to use it.
I am using librosa in python (3) to extract 20 MFCC features. My question is: which MFCC features should I use for speaker identification?
In addition to this I am unsure on how to implement these features. What I would do is to get the necessary features and make one long vector input for a neural network. However, it is also possible to display colors, so could image recognition also be possible, or is this more aimed at speech, and not speaker recognition?
In short, I am unsure where I should start, as I am not very experienced with image recognition and have no idea where to start.
Thanks in advance!! | 0 | 1 | 1,526 |
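A minimal librosa sketch for the MFCC question above; the file name is a placeholder, and summarising by mean/std over time is just one simple way to get a fixed-length input vector for a dense network:
import librosa
import numpy as np
y, sr = librosa.load("speaker_sample.wav", sr=None)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)                  # shape (20, n_frames)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])    # fixed-length 40-d descriptor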
0 | 53,613,988 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-12-04T13:06:00.000 | 2 | 1 | 0 | Increasing accuracy by changing batch-size and input image size | 53,613,681 | 1.2 | python,keras,image-segmentation | If your model is a regular convolutional network (without any weird hacks), the samples in a batch will not be connected to each other.
Depending on which loss function you use, the batch size might be important too. For regular functions (available 'mse', 'binary_crossentropy', 'categorical_crossentropy', etc.), they all keep the samples independent from each other. But some losses might consider the entire batch. (F1 metrics, for instance). If you're using a loss function that doesn't treat samples independently, then the batch size is very important.
That said, having a bigger batch size may help the net to find its way more easily, since one image might push weights towards one direction, while another may want a different direction. The mean results of all images in the batch should then be more representative of a general weight update.
Now, entering an experimenting field (we never know everything about neural networks until we test them), consider this comparison:
a batch with 1 huge image versus
a batch of patches of the same image
Both will have the same amount of data, and for a convolutional network, it wouldn't make a drastic difference. But for the first case, the net will probably be better at finding connections between roads, maybe find more segments where the road might be covered by something, while the small patches, being full of borders might be looking more into textures and be not good at identifying these gaps.
All of this is, of course, a guess. Testing is the best.
My net in a GPU cannot really use big patches, which is bad for me... | I am extracting a road network from satellite imagery. Herein the pixel classification is binary ( 0 = non-road, 1 = road). Hence, the mask of the complete satellite image which is 6400 x 6400 pixels shows one large road network where each road is connected to another road. For the implementation of the U-net I divided that large image in 625 images of 256 x 256 pixels.
My question is: Can a neural network easier find structure with an increase in batch size (thus can it find structure between different batches), or can it only find structure if the input image size is enlarged? | 0 | 1 | 955 |
0 | 53,616,756 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-12-04T14:50:00.000 | 0 | 2 | 0 | Pandas read_excel removes columns under empty header | 53,615,545 | 0 | python-3.x,pandas | A quick fix would be to pass header=None to pandas' read_excel() function, manually insert the missing values into the first row (it now will contain the column names), then assign that row to df.columns and drop it after. Not the most elegant way, but I don't know of a builtin solution to your problem
EDIT: by "manually insert" I mean some messing with fillna(), since this appears to be an automated process of some sort | I have an Excel file where A1,A2,A3 are empty but A4:A53 contains column names.
In "R" when you were to read that data, the columns names for A1,A2,A3 would be "X_1,X_2,X_3" but when using pandas.read_excel it simply skips the first three columns, thus ignoring them. The problem is that the number of columns in each file is dynamic thus I cannot parse the column range, and I cannot edit the files and adding "dummy names" for A1,A2,A3 | 0 | 1 | 6,260 |
0 | 53,633,844 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2018-12-05T08:13:00.000 | 5 | 1 | 0 | DataFrame view in PyCharm when using pyspark | 53,627,818 | 1.2 | python,pyspark,pycharm | Pycharm does not support spark dataframes, you should call the toPandas() method on the dataframe. As @abhiieor mentioned in a comment, be aware that you can potentially collect a lot of data, you should first limit() the number of rows returned. | I create a pyspark dataframe and i want to see it in the SciView tab in PyCharm when i debug my code (like I used to do when i have worked with pandas).
It says "Nothing to show" (the dataframe exists, I can see it when I use the show() command).
Does someone know how to do it, or is there simply no integration between PyCharm and PySpark DataFrames in this case? | 0 | 1 | 1,310
0 | 53,630,217 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-05T10:22:00.000 | 2 | 1 | 0 | batch size for LSTM | 53,630,041 | 1.2 | python,tensorflow,keras,lstm | If you provide your data as numpy arrays to model.fit() then yes, Keras will take care of feeding the model with the batch size you specified. If your dataset size is not divisible by the batch size, Keras will have the final batch be smaller and equal to dataset_size mod batch_size. | I've been trying to set up an LSTM model but I'm a bit confused about batch_size. I'm using the Keras module in Tensorflow.
I have 50,000 samples, each has 200 time steps and each time step has three features. So I've shaped my training data as (50000, 200, 3).
I set up my model with four LSTM layers, each having 100 units. For the first layer I specified the input shape as (200, 3). The first three layers have return_sequences=True, the last one doesn't. Then I do some softmax classification.
When I call model.fit with batch_size='some_number' do Tensorflow/Keras take care of feeding the model with batches of the specified size? Do I have to reshape my data somehow in advance? What happens if the number of samples is not evenly divisible by 'some_number'?
Thanks for your help! | 0 | 1 | 1,170 |
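A minimal sketch of the setup described in the question above; num_classes, x_train and y_train are assumed to exist, and Keras handles batching (including the smaller final batch) inside fit():
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
model = Sequential([
    LSTM(100, return_sequences=True, input_shape=(200, 3)),
    LSTM(100, return_sequences=True),
    LSTM(100, return_sequences=True),
    LSTM(100),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# x_train: (50000, 200, 3), y_train: (50000, num_classes) one-hot labels
model.fit(x_train, y_train, batch_size=128, epochs=10)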
0 | 53,749,834 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2018-12-05T14:43:00.000 | 1 | 3 | 0 | How to improve accuracy of random forest multiclass classification model? | 53,634,808 | 0.066568 | python,machine-learning,random-forest | Try doing feature selection first, using PCA or a random forest, and then fit a chained classifier that first does a one-vs-all step and then a random forest or a decision tree. You should get slightly better accuracy. | I am working on a multi class classification for segmenting customers into 3 different classes based on their purchasing behavior and demographics. I cannot disclose the data set completely but in general it contains around 300 features and 50000 rows. I have tried the following methods but I am unable to achieve accuracy above 50% :
Tuning the hyperparameters ( I am using tuned hyperparameters after doing GridSearchCV)
Normalizing the dataset and then running my models
Tried different classification methods : OneVsRestClassifier, RandomForestClassification, SVM, KNN and LDA
I have also removed irrelevant features and tried running my models
My classes were imbalanced, so I have also tried using class_weight = balanced, oversampling using SMOTE, downsampling and resampling.
Is there something else I can try to improve my accuracy ( and by accuracy I mean f-score, precision and recall ).
Any help will be appreciated. | 0 | 1 | 4,038 |
0 | 53,755,232 | 0 | 0 | 0 | 0 | 3 | true | 2 | 2018-12-05T14:43:00.000 | 2 | 3 | 0 | How to improve accuracy of random forest multiclass classification model? | 53,634,808 | 1.2 | python,machine-learning,random-forest | Try to tune below parameters
n_estimators
This is the number of trees you want to build before taking the maximum vote or the average of predictions. A higher number of trees gives you better performance but makes your code slower. You should choose as high a value as your processor can handle, because this makes your predictions stronger and more stable. Since your data size is large, each iteration will take more time, but try this.
max_features
This is the maximum number of features the random forest is allowed to try in an individual tree. There are multiple options available in Python to assign the maximum features. A few of them are:
Auto/None: This will simply take all the features which make sense in every tree. Here we simply do not put any restrictions on the individual tree.
sqrt: This option will take the square root of the total number of features for each individual tree. For instance, if the total number of variables is 100, we can only take 10 of them in an individual tree. "log2" is another similar option for max_features.
0.2: This option allows the random forest to take 20% of the variables in each individual tree. We can assign any value in the format "0.x" where we want x% of the features to be considered.
min_samples_leaf
A leaf is the end node of a decision tree. A smaller leaf makes the model more prone to capturing noise in the training data. You can start with some minimum value like 75 and gradually increase it; see at which value your accuracy comes out highest. | I am working on a multi class classification for segmenting customers into 3 different classes based on their purchasing behavior and demographics. I cannot disclose the data set completely but in general it contains around 300 features and 50000 rows. I have tried the following methods but I am unable to achieve accuracy above 50% :
Tuning the hyperparameters ( I am using tuned hyperparameters after doing GridSearchCV)
Normalizing the dataset and then running my models
Tried different classification methods : OneVsRestClassifier, RandomForestClassification, SVM, KNN and LDA
I have also removed irrelevant features and tried running my models
My classes were imbalanced, so I have also tried using class_weight = balanced, oversampling using SMOTE, downsampling and resampling.
Is there something else I can try to improve my accuracy ( and by accuracy I mean f-score, precision and recall ).
Any help will be appreciated. | 0 | 1 | 4,038 |
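A sketch of tuning the three parameters named in the accepted answer above with scikit-learn's GridSearchCV; X_train and y_train are placeholders, and the grids are just starting points:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
param_grid = {
    "n_estimators": [200, 500, 1000],
    "max_features": ["sqrt", "log2", 0.2],
    "min_samples_leaf": [1, 25, 75],
}
search = GridSearchCV(
    RandomForestClassifier(class_weight="balanced", n_jobs=-1),
    param_grid, scoring="f1_macro", cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)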
0 | 53,635,051 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2018-12-05T14:43:00.000 | 1 | 3 | 0 | How to improve accuracy of random forest multiclass classification model? | 53,634,808 | 0.066568 | python,machine-learning,random-forest | How is your training acc? I assume that your acc is your validation. If your training acc is way to high, som normal overfitting might be the case. Random forest normally handles overfitting very well.
What you could try is PCA on your data, and then try to classify on that. This gives you the features which account for most of the variation in the data, so it can be a good idea to try if you cannot classify on the original data (and it also reduces your number of features).
A side note: remember that fitting an SVM is quadratic in the number of points, so reducing your data to around 10-20,000 points for tuning the parameters, and then fitting an SVM on the full dataset with the optimal parameters from the subset, might also speed up the process.
Also remember to consider trying different kernels for the SVM. | I am working on a multi class classification for segmenting customers into 3 different classes based on their purchasing behavior and demographics. I cannot disclose the data set completely but in general it contains around 300 features and 50000 rows. I have tried the following methods but I am unable to achieve accuracy above 50% :
Tuning the hyperparameters ( I am using tuned hyperparameters after doing GridSearchCV)
Normalizing the dataset and then running my models
Tried different classification methods : OneVsRestClassifier, RandomForestClassification, SVM, KNN and LDA
I have also removed irrelevant features and tried running my models
My classes were imbalanced, so I have also tried using class_weight = balanced, oversampling using SMOTE, downsampling and resampling.
Is there something else I can try to improve my accuracy ( and by accuracy I mean f-score, precision and recall ).
Any help will be appreciated. | 0 | 1 | 4,038 |
0 | 53,637,348 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-05T17:01:00.000 | 1 | 1 | 0 | How to use a good use of nb_train_samples in keras? | 53,637,278 | 1.2 | python,tensorflow,keras,deep-learning | Yes, they have to be the same. These are the parameters you use to tell the process how many you have of each type of image. For instance, if you tell it that you have 5_000 validation samples, but there are only 3_000 in the data set, you will crash the run. | I'm using keras with Tensorflow-gpu backend in Python. I'm trying to put the correct number of nb_train_samplesn nb_validation_ samples and epochs.
I am using the fit-generator-method.
Does nb_train_samples have to be the same as the number of images I have for training? Can it be higher?
Does nb_validation_samples have to be the same as the number of images I have for validation? Can it be higher? | 0 | 1 | 454
0 | 53,842,312 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-12-05T18:46:00.000 | 0 | 2 | 0 | Datetime in the format "M1,D1,H1" (January 1st, 1.00 am) | 53,638,858 | 1.2 | python-3.x,pandas | import datetime as dt   # needed for dt.datetime / dt.timedelta below
# MODEL_YEAR is the model year as an int; df is assumed to have an 'hour' column with values 0..8759
df.insert(0, 'time', [dt.datetime(MODEL_YEAR, 1, 1, 0) + dt.timedelta(hours=int(x))
for x in df['hour'].values]) | I want a column in a dataframe including a datetime in the format "M1,D1,H1" (January 1st, 1.00 am). I have a dataframe with 8760 rows. How do I populate it? | 0 | 1 | 73
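An alternative sketch for the same record using pd.date_range, which builds all 8760 hourly timestamps in one call; the chosen year and the hour-numbering convention (0-23 vs 1-24) are assumptions to adjust:
import pandas as pd
# assumes df already has 8760 rows, one per hour of a non-leap model year
times = pd.date_range("2019-01-01", periods=len(df), freq="H")
df.insert(0, "time", times)
df.insert(1, "label", ["M{},D{},H{}".format(t.month, t.day, t.hour + 1) for t in times])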
0 | 53,644,466 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-06T03:24:00.000 | 1 | 2 | 0 | Csv Python reader | 53,644,166 | 0.099668 | python,csv | What I suggest is:
Read the CSV file and load it into a two-dimensional array, where the columns are touchdowns, sacks, passing yards, etc., and the rows are the specific values for each player.
To determine, for example, which player got the most touchdowns, go through the touchdowns column and find its maximum value.
To continue with the other stats you have to repeat the previous process with another column.
I hope this helps you. | Hi I am trying to write a python code for reading NFL Stats
I converted stats into a excel Csv file,
I am wondering if anyone could help me plan out my code
Like, how would I go about getting who's got the most touchdowns, sacks, passing yards, etc.?
I know this is kinda of beginner stuff but much help would be appreciated! | 0 | 1 | 92 |
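A pandas sketch of the approach in the answer above; every column name here is hypothetical and must be replaced with the headers actually present in your CSV:
import pandas as pd
df = pd.read_csv("nfl_stats.csv")                         # hypothetical file name
top_td = df.loc[df["touchdowns"].idxmax(), "player"]      # player with the most touchdowns
top_sacks = df.loc[df["sacks"].idxmax(), "player"]
top_yards = df.loc[df["passing_yards"].idxmax(), "player"]
print(top_td, top_sacks, top_yards)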
0 | 58,120,534 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2018-12-06T16:55:00.000 | 3 | 2 | 0 | How to add recurrent dropout to CuDNNGRU or CuDNNLSTM in Keras | 53,656,220 | 0.291313 | python,tensorflow,keras,lstm | You can use kernel_regularizer and recurrent_regularizer for prevent overfitting, i am using L2 regularizers and i am having good results. | One can apply recurrent dropout onto basic LSTM or GRU layers in Keras by passing its value as a parameter of the layer.
CuDNNLSTM and CuDNNGRU are LSTM and GRU layers that are compatible with CUDA. The main advantage is that they are 10 times faster during training. However they lack some of the beauty of the LSTM or GRU layers in Keras, namely the possibility to pass dropout or recurrent dropout values.
While we can add Dropout layers directly in the model, it seems we cannot do that with Recurrent Dropout.
My question is then the following : How to add recurrent dropout to CuDNNGRU or CuDNNLSTM in Keras ? | 0 | 1 | 4,528 |
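A sketch of the regularizer workaround from the answer above; whether L2 on the kernel and recurrent weights is an adequate substitute for recurrent dropout is an open question, so treat this as one option to try:
from keras.layers import CuDNNLSTM
from keras.regularizers import l2
layer = CuDNNLSTM(128,
                  kernel_regularizer=l2(1e-4),
                  recurrent_regularizer=l2(1e-4),
                  return_sequences=True)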
0 | 54,430,444 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-12-07T03:51:00.000 | 1 | 1 | 0 | TensorFlow tf.data.Dataset API for medical imaging | 53,662,978 | 1.2 | python,tensorflow | Finally, I found a method to solve my problem.
I first crop a subject's image without applying the actual crop. I only measure the slices I need to crop the volume to only the brain. I then serialize all the data set images into one TFRecord file, each training example being an image modality, original image's shape and the slices (saved as Int64 feature).
I decode the TFRecords afterward. Each training sample is reshaped to the shape stored in its feature. I stack all the image modalities into one stack using the tf.stack() method. I crop the stack using the previously extracted slices (the crop then applies to all images in the stack). I finally get some random patches using the tf.random_crop() method, which allows me to randomly crop a 4-D array (height, width, depth, channel).
The only thing I still haven't figured out is data augmentation. Since all this is occurring in Tensors format, I cannot use plain Python and NumPy to rotate, shear, flip a 4-D array. I would need to do it in the tf.Session(), but I would rather like to avoid this and directly input the training handle.
For the evaluation, I serialize in a TFRecords file only one test subject per file. The test subject contains all modalities too, but since there is no TensorFLow methods to extract patches in 4-D, the image is preprocessed in small patches using Scikit-Learn extract_patches() method. I serialize these patches to the TFRecords.
This way, training TFRecords is a lot smaller. I can evaluate the test data using batch prediction.
Thanks for reading and feel free to comment ! | I'm a student in medical imaging. I have to construct a neural network for image segmentation. I have a data set of 285 subjects, each with 4 modalities (T1, T2, T1ce, FLAIR) + their respective segmentation ground truth. Everything is in 3D with resolution of 240x240x155 voxels (this is BraTS data set).
As we know, I cannot input the whole image on a GPU for memory reasons. I have to preprocess the images and decompose them into 3D overlapping patches (sub-volumes of 40x40x40), which I do with scikit-image's view_as_windows, and then serialize the windows to a TFRecords file. Since each patch overlaps by 10 voxels in each direction, this sums to 5,292 patches per volume. The problem is, with only 1 modality, I get sizes of 800 GB per TFRecords file. Plus, I have to compute the respective segmentation weight maps and store them as patches too. Segmentation is also stored as patches in the same file.
And I eventually have to include all the other modalities, which would take nothing less than terabytes of storage. I also have to remember to sample an equivalent number of patches between background and foreground (class balancing).
So, I guess I have to do all preprocessing steps on the fly, just before every training step (while hoping not to slow down training too much). I cannot use tf.data.Dataset.from_tensors() since I cannot load everything in RAM. I cannot use tf.data.Dataset.from_tfrecords() since preprocessing the whole thing beforehand takes a lot of storage and I will eventually run out.
The question is : what's left for me for doing this cleanly with the possibility to reload the model after training for image inference ?
Thank you very much and feel free to ask for any other details.
Pierre-Luc | 0 | 1 | 680 |
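A rough sketch of the decode-stack-crop pipeline described in the answer above, written against the TensorFlow 1.x API that was current at the time; the feature names, dtypes and the 40×40×40 patch size are assumptions for illustration, not the author's actual code.

```python
import tensorflow as tf

def parse_example(serialized):
    # Assumed feature layout: raw image bytes, raw label bytes, volume shape.
    features = tf.parse_single_example(serialized, {
        'image': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.string),
        'shape': tf.FixedLenFeature([3], tf.int64),
    })
    shape = tf.cast(features['shape'], tf.int32)
    image = tf.reshape(tf.decode_raw(features['image'], tf.float32), shape)
    label = tf.reshape(tf.decode_raw(features['label'], tf.float32), shape)
    stack = tf.stack([image, label], axis=-1)            # (H, W, D, 2)
    patch = tf.random_crop(stack, size=[40, 40, 40, 2])  # same random 3-D crop for both
    return patch[..., 0], patch[..., 1]

dataset = (tf.data.TFRecordDataset('train.tfrecords')
           .map(parse_example)
           .batch(8))
```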
0 | 53,672,476 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-12-07T10:55:00.000 | 1 | 2 | 0 | MultiLabel classification | 53,668,101 | 0.099668 | python,machine-learning,deep-learning,text-classification | It is not super clear what your main idea is, but articles typically do have tags or categories and you may use those for the classification labels.
Humans are pretty good at tagging articles. | I have some 1000 news articles related to science and technology. I need to train a classifier which will predict, say, 3 (computer science, electronics, electrical) confidence scores for each article.
Each score represents how much the article belongs to each field.
The confidence score will be a value between zero and one.
But the data set doesn't have a training label.
How do I proceed from here? What kind of data do I need?
How do I train such a model? | 0 | 1 | 68 |
0 | 53,673,661 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-07T12:12:00.000 | 0 | 1 | 0 | How to convert a dbf file to a dask dataframe? | 53,669,318 | 0 | python,dataframe,dask,dbf | Dask does not have a dbf loading method.
As far as I can tell, dbf files do not support random access to the data, so it is not possible to read sections of the file in separate workers, in parallel. I may be wrong about this, but certainly dbfreader makes no mention of jumping to an arbitrary record.
Therefore, the only way you could read from dbf in parallel, and hope to see a speed increase, would be to split your original data into multiple dbf files, and use dask.delayed to read each of them.
It is worth mentioning that probably the reason dbfreader is slow (but please, do your own profiling!) is that it's doing byte-by-byte manipulations and making Python objects for every record before passing the records to pandas. If you really wanted to speed things up, this code should be converted to Cython or maybe Numba, with the records assigned into a pre-allocated dataframe. | I have a big dbf file; converting it to a pandas dataframe is taking a lot of time.
Is there a way to convert the file into a dask dataframe? | 0 | 1 | 235 |
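A hedged sketch of the split-and-dask.delayed approach described in the answer above; the dbfread package, the helper function and the file pattern are assumptions chosen for illustration.

```python
import glob
import pandas as pd
import dask
import dask.dataframe as dd
from dbfread import DBF  # assumed third-party reader for .dbf files

@dask.delayed
def read_dbf(path):
    # Each delayed task reads one dbf file into a pandas DataFrame.
    return pd.DataFrame(list(DBF(path)))

parts = [read_dbf(p) for p in glob.glob('data/part_*.dbf')]
ddf = dd.from_delayed(parts)  # one dask partition per dbf file
```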
0 | 53,670,039 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-12-07T12:49:00.000 | 0 | 1 | 1 | Jupyter notebook : kernel died msg when loading big CSV file | 53,669,913 | 0 | python,python-3.x,machine-learning,jupyter-notebook,turi-create | 53 MB is not a big file! You should try to load it in an IPython terminal to test.
Load it as you would in Jupyter to see if you have any issue. If there is no issue, this could be a bad installation of Jupyter.
Note: the kernel mostly dies when you're out of RAM. But 53 MB is not that big; assuming you have at least 2 or 4 GB of RAM on your laptop, an error like this shouldn't happen. | I am using a Jupyter Notebook for making a machine learning model using turicreate. Whenever I upload a big .csv file, I get a message: kernel died. Since I am new to Python, is there another way to batch load the file, or does anyone know how to fix this issue?
The csv file is 52.9 MB
Thanks | 0 | 1 | 3,050 |
0 | 53,677,838 | 1 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-07T17:35:00.000 | 1 | 1 | 0 | Descending order of shortest paths in networkx | 53,674,393 | 0.197375 | python-3.x,networkx,dijkstra | Try networkx's shortest_simple_paths. | I have a weighted Graph using networkx and the topology is highly meshed. I would like to extract a number of paths between two nodes with distance minimization.
To clarify, the dijkstra_path function finds the weighted shortest path between two nodes; I would like to get that as well as the second- and third-best options for shortest weighted paths between the two nodes.
I tried using all_simple_paths and then ordering the paths in distance minimization order but it is extremely time consuming when the network is meshed with 500 nodes or so.
Any thoughts on the matter? Thank you for your help! | 0 | 1 | 270 |
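A small example of the suggested shortest_simple_paths call; the toy graph and the choice of three paths are assumptions for illustration. The generator yields simple paths in order of increasing total weight, so slicing off the first k gives the k best paths without enumerating all of them.

```python
from itertools import islice
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([('a', 'b', 1.0), ('b', 'c', 2.0),
                           ('a', 'c', 4.0), ('a', 'd', 1.5), ('d', 'c', 1.5)])

# Take only the three best weighted paths from 'a' to 'c'.
best_three = list(islice(nx.shortest_simple_paths(G, 'a', 'c', weight='weight'), 3))
print(best_three)  # e.g. [['a', 'b', 'c'], ['a', 'd', 'c'], ['a', 'c']]
```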
0 | 53,688,653 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-12-09T01:25:00.000 | 0 | 1 | 0 | processing data in parallel python | 53,688,617 | 0 | python-3.x,python-multiprocessing,python-multithreading | cPickle.load() will release the GIL so you can use it in multiple threads easily. But cPickle.loads() will not, so don't use that.
Basically, put your data from Redis into a StringIO, then cPickle.load() from there. Do this in multiple threads using concurrent.futures.ThreadPoolExecutor. | I have a script, parts of which are able to run in parallel at certain points. Python 3.6.6.
The goal is to decrease execution time as much as possible.
One of the parts is connecting to Redis, getting the data for two keys, calling pickle.loads for each and returning the processed objects.
What's the best solution for such tasks?
I’ve tried Queue() already, but Queue.get_nowait() locks the script, and after {process}.join() it also stops execution even though the task is done. Using pool.map raises TypeError: can't pickle _thread.lock objects.
All I could achieve is running all the parts in parallel, but I still cannot combine the results. | 0 | 1 | 95
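A sketch of the threaded loading described in the answer above, adapted to Python 3 (where the pickle module already uses the C implementation); the Redis key names are assumptions, and a local Redis server holding pickled bytes under those keys is assumed to be running.

```python
import io
import pickle
import redis
from concurrent.futures import ThreadPoolExecutor

r = redis.Redis()

def load_key(key):
    raw = r.get(key)                   # pickled bytes stored in Redis (assumed)
    return pickle.load(io.BytesIO(raw))

with ThreadPoolExecutor(max_workers=2) as pool:
    obj_a, obj_b = pool.map(load_key, ['key_a', 'key_b'])
```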
0 | 54,559,052 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-12-09T05:34:00.000 | 0 | 2 | 0 | Issue with installing Keras library from Anaconda | 53,689,684 | 0 | python,tensorflow,keras,anaconda,theano | I guess the issue is that TensorFlow is not yet released for Python 3.7 (you have mentioned the latest version of Anaconda). To overcome this, you may create a new environment with Python 3.6 and install Keras at the same time.
conda create -n p360 python=3.6 anaconda tensorflow keras
Here p360 is the name of the environment I chose. | I have been trying to install the 'Keras' library from Anaconda on my laptop. I have the latest version of Anaconda. After that, I tried
conda update conda
conda update --all
The above two succeed. After that I tried
conda install -c conda-forge keras
conda install keras
Both of the above fail with the error below.
ERROR conda.core.link:_execute(502): An error occurred while installing package '::automat-0.7.0-py_1'.
CondaError: Cannot link a source that does not exist. C:\Users\Anaconda3\Scripts\conda.exe
I downloaded "automat-0.7.0-py_1" from the Anaconda site into a local folder and tried conda install from there. It works. However, when I try to install Keras again, it fails again. I am clueless now about what to do. | 0 | 1 | 1,214
0 | 61,193,906 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-12-09T14:40:00.000 | 0 | 3 | 0 | Pandas sort_index numerically, not lexicographically | 53,693,429 | 0 | python,pandas,dataframe | You might have the index as a string type.
I was having this issue after using the groupby() function. I fixed the problem by changing the column that later became my index to an int() with:
df['col_name'] = df['col_name'].astype(int) | I'm having some issues with sorting a pandas dataframe.
sort_index(axis=0) results in the dataframe sorting the index as 1 10 11 12 13... etc.
While sort_index(axis=1) seems to work for the first couple of rows and then it gets completely disordered.
I simply cannot wrap my head around what is going on. I want a simple numerical sorting of my indices; it seems that it ought to be the default behaviour of sort_index. | 0 | 1 | 719
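A tiny illustration of the string-index behaviour described in the answer above; the data is made up.

```python
import pandas as pd

df = pd.DataFrame({'value': range(12)}, index=[str(i) for i in range(1, 13)])
print(df.sort_index().index.tolist())   # ['1', '10', '11', '12', '2', ...] - lexicographic

df.index = df.index.astype(int)
print(df.sort_index().index.tolist())   # [1, 2, 3, ..., 12] - numeric
```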
0 | 53,693,470 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-12-09T14:40:00.000 | 2 | 3 | 0 | Pandas sort_index numerically, not lexicographically | 53,693,429 | 0.132549 | python,pandas,dataframe | You have two types of index: the row index (axis=0) and the columns index (axis=1).
You are just arranging columns by name when you use axis=1; it does not reorder each row by values. Check your column names after sort_index(axis=1) and you will understand. | I'm having some issues with sorting a pandas dataframe.
sort_index(axis=0) results in the dataframe sorting the index as 1 10 11 12 13... etc.
While sort_index(axis=1) seems to work for the first couple of rows and then it gets completely disordered.
I simply cannot wrap my head around what is going on. I want a simple numerical sorting of my indices; it seems that it ought to be the default behaviour of sort_index. | 0 | 1 | 719
0 | 56,693,911 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-12-09T16:39:00.000 | 0 | 3 | 0 | Convert astropy table to list of dictionaries | 53,694,408 | 0 | python,list,dictionary,astropy | I ended up iterating, slicing and copying as list which worked fine on the relatively small dataset. | I have an astropy.table.table object holding stars data. One row per star with columns holding data such as Star Name, Max Magnitude, etc.
I understand an astropy table's internal representation is a dict for each column, with the rows being returned on the fly as slices across the dict objects.
I need to convert the astropy table to a Python list of dict objects, with one dict per star. Essentially this is both a transposition of the table and a conversion.
I can obviously iterate through the table, by column within row, to build the dicts and add them to the list, but I was hoping there was a more efficient way? | 0 | 1 | 885
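A short sketch of the row-wise conversion the answer describes; the example table is an assumption.

```python
from astropy.table import Table

t = Table(rows=[('Vega', 0.03), ('Sirius', -1.46)], names=('Name', 'MaxMag'))
star_dicts = [dict(zip(t.colnames, row)) for row in t]
print(star_dicts)  # [{'Name': 'Vega', 'MaxMag': 0.03}, {'Name': 'Sirius', 'MaxMag': -1.46}]
```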
0 | 53,696,362 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-09T19:51:00.000 | 0 | 1 | 0 | Solving TSP with GA: Should a distance matrix speed up run-time? | 53,696,084 | 0 | python,list,dictionary,time-complexity,sqrt | The short answer:
Yes. Dictionary would make it faster.
The long answer:
Let's say you pre-process and calculate all distances once - great! Now, let's say I want to find the distance between A and B. So, all I have to do now is find that distance where I put it - it is in the list!
What is the time complexity of finding something in the list? That's right - O(n).
And how many times am I going to use it? My guess, according to your question: 1M+ times.
Now, that is a huge problem. I suggest you use a dictionary so you can look up the pre-calculated distance between any two cities in O(1).
I am using a map with 29 cities. Each city has an id and (x,y) coordinates.
I tried implementing a distance matrix, which calculates all the distances once and stores them in a list. So instead of calculating the distance using the sqrt() function 1M+ times, it only uses the function 406 times. Every time a distance between two cities is required, it is just retrieved from the matrix using the ids of the two cities as the index.
But even with this, it takes just as much time. I thought sqrt() would be more expensive than just indexing a list. Is it not? Would a dictionary make it faster? | 0 | 1 | 177 |
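An illustrative sketch of the dictionary lookup suggested in the answer above; the three example cities are made up, and storing both key orders keeps every lookup a direct O(1) access.

```python
from math import sqrt
from itertools import combinations

cities = {0: (0.0, 0.0), 1: (3.0, 4.0), 2: (6.0, 8.0)}  # id -> (x, y), assumed data

dist = {}
for a, b in combinations(cities, 2):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    d = sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    dist[(a, b)] = dist[(b, a)] = d  # store both orders for direct lookup

print(dist[(2, 0)])  # 10.0
```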
0 | 53,709,894 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-12-09T22:47:00.000 | 1 | 1 | 0 | How to display same detailed help in Pycharm as in Jupyter? | 53,696,405 | 1.2 | python,pycharm,jupyter-notebook | What you want here is help(pandas.DataFrame). It prints the same information as Shift+Tab does in Jupyter. | When in Jupyter I Shift+TAB on pandas.DataFrame, it displays e.g.
Two-dimensional size-mutable, potentially heterogeneous tabular data
structure with labeled axes (rows and columns). Arithmetic operations
align on both row and column labels. Can be thought of as a dict-like
container for Series objects. The primary pandas data structure.
Is there any way to display this in PyCharm as well? Quick documentation (Ctrl+Q) doesn't display this. | 0 | 1 | 62
0 | 53,714,251 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-09T22:47:00.000 | 2 | 1 | 0 | Error for word2vec with GoogleNews-vectors-negative300.bin | 53,697,450 | 0.379949 | python,gensim,word2vec | This is just a warning, not a fatal error. Your code likely still works.
"Deprecation" means a function's use has been marked by the authors as no longer encouraged.
The function typically still works, but may not for much longer – becoming unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message.
Your warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead.
Did you try using that, instead of whatever line of code (not shown in your question) you were trying before seeing the warning? | The version of Python is 3.6.
I tried to execute my code, but there are still some errors, as below:
Traceback (most recent call last):
File
"C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py",
line 55, in
, binary=True)
File "E:\Program
Files\Python\Python35-32\lib\site-packages\gensim\models\word2vec.py",
line 1282, in load_word2vec_format
raise DeprecationWarning("Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.")
DeprecationWarning: Deprecated. Use
gensim.models.KeyedVectors.load_word2vec_format instead.
How do I fix the code? Or is the path to the data wrong? | 0 | 1 | 1,129
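A minimal sketch of the non-deprecated loading call the answer points to; the file path assumes the GoogleNews vectors file sits in the working directory.

```python
from gensim.models import KeyedVectors

word_vectors = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)
print(word_vectors.most_similar('king', topn=3))
```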
0 | 53,714,817 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2018-12-10T22:20:00.000 | 0 | 3 | 0 | physical dimensions and array dimensions | 53,714,557 | 1.2 | python,arrays | I believe that the rainfall value shouldn't be a dimension. Therefore, you could use 2D array[lat][lon] = rainfall_value or 3D array[time][lat][lon] = rainfall_value respectively.
If you want to reduce the number of dimensions further, you can combine latitude and longitude into one dimension as you suggested, which would make the arrays 1D/2D. | If I have a rainfall map which has three dimensions (latitude, longitude and rainfall value) and I put it in an array, do I need a 2D or 3D array? What would the array look like?
If I have a series of daily rainfall maps which has four dimensions (lat, long, rainfall value and time) and I put it in an array, do I need a 3D or 4D array?
I am thinking that I would need 2D and 3D arrays, respectively, because the latitude and longitude can be represented by a 1D array only (but reshaped such that it has more than 1 row and column). Enlighten me please. | 0 | 1 | 77
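A tiny NumPy illustration of the layouts suggested in the answer above; the grid and time sizes are made-up assumptions.

```python
import numpy as np

n_lat, n_lon, n_time = 180, 360, 365
rain_map = np.zeros((n_lat, n_lon))             # one map: rain_map[lat_idx, lon_idx]
rain_series = np.zeros((n_time, n_lat, n_lon))  # daily maps: rain_series[day, lat_idx, lon_idx]
```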
0 | 53,730,003 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-11T18:03:00.000 | 0 | 2 | 0 | How to save image as binary compressed .tiff python? | 53,729,889 | 0 | python,tiff | You can try using libtiff.
Install it using pip install libtiff. | Is there any library to save images as a binary (1 bit per pixel) compressed .tiff file?
OpenCV and Pillow cannot do that. | 0 | 1 | 1,104
0 | 55,491,090 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-11T21:55:00.000 | 1 | 1 | 0 | How to perform cross validation on NMF Python | 53,732,904 | 1.2 | python,scikit-learn,nmf | A property of NMF is that it is an unsupervised (machine learning) method. This generally means that there is no labeled data that can serve as a 'gold standard'.
In the case of NMF you cannot define the 'desired' outcome beforehand.
The cross-validation in sklearn is designed for supervised machine learning, in which you have labeled data by definition.
What cross-validation does is hold out sets of labeled data, then train a model on the data that is left over and evaluate this model on the held-out set. For this evaluation any metric can be used, for example accuracy, precision, recall and F-measure, and computing these measures requires labeled data. | I am trying to perform cross-validation on NMF to find the best parameters to use. I tried using the sklearn cross-validation but get an error stating that NMF does not have a scoring method. Could anyone here help me with that? Thank you all