Dataset columns (types as reported by the dataset viewer): the eight topic flags (GUI and Desktop Applications, Networking and APIs, Python Basics and Environment, Other, Database and SQL, System Administration and DevOps, Web Development, Data Science and Machine Learning) are 0/1 int64 indicators; A_Id and Q_Id are int64 ids; Available Count, Q_Score, Users Score, AnswerCount and ViewCount are int64 counts; is_accepted is bool; Score is float64; CreationDate, Title, Tags, Answer and Question are strings.

GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 59,453,381 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2018-05-03T05:14:00.000 | -1 | 3 | 0 | Install opencv python package in Anaconda | 50,147,385 | -0.066568 | python-2.7,opencv,image-processing,ide,anaconda | Remove all previous/current (if any) python installation
Install Anaconda and add it to PATH (Environment variables: Advanced system settings -> Environment Variables -> under System variables select the PATH variable and click Edit to add the new entries).
(Alternatively, during installation check the box that adds Anaconda to PATH.)
Open the Anaconda prompt with admin access. Type and enter:
conda update --all
conda install -c conda-forge opencv
conda install spyder=4.0.0 (to update Spyder)
conda update python (to update Python)
To install this package with conda run one of the following:
conda install -c conda-forge opencv
conda install -c conda-forge/label/gcc7 opencv
conda install -c conda-forge/label/broken opencv
conda install -c conda-forge/label/cf201901 opencv | Can someone provide the steps and the necessary links of the dependencies to install external python package "opencv-python" for image processing?
I tried installing it in pycharm, but it was not able to import cv2(opencv) and was throwing version mismatch with numpy!
Please help! | 0 | 1 | 9,045 |
0 | 50,148,675 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-05-03T06:43:00.000 | 0 | 1 | 0 | Jupyter Notebook; Cannot use columns which are not shown on Jupyter notebook | 50,148,502 | 1.2 | python,pandas,jupyter-notebook | You can try: pandas.set_option('display.max_columns', None) in order to display all columns.
To reset it, type: pandas.reset_option('display.max_columns').
You may change None to whatever number of columns you wish to have. | I'm trying to create a plot using columns in pandas. But there are many columns and some of them are not shown on Jupyter Notebook. And when I use the columns which are not shown on Jupyter Notebook, the plot cannot be created correctly. Like when I do this, sns.pairplot(data[['col1', 'col13(cannot see on Jupyter Notebook)']]) col1 is used as the y-label instead of col13(cannot see on Jupyter Notebook).
How can I fix this? | 0 | 1 | 64 |
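As a minimal sketch of the option mentioned in the answer above (the DataFrame itself is a made-up example):

```python
import pandas as pd

pd.set_option('display.max_columns', None)   # show every column when printing wide frames

df = pd.DataFrame({'col%d' % i: range(3) for i in range(40)})
print(df.head())                             # all 40 columns are now displayed

pd.reset_option('display.max_columns')       # restore the default afterwards
```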
0 | 50,159,650 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-03T11:52:00.000 | 0 | 1 | 0 | embedding word positions in keras | 50,154,465 | 0 | python,nlp,keras,word2vec,word-embedding | You can compute the maximal separation between
entity mentions linked by a relation and choose an
input width greater than this distance. This will ensure
that every input (relation mention) has same length
by trimming longer sentences and padding shorter
sentences with a special token. | I am trying to build a relation extraction system for drug-drug interactions using a CNN and need to make embeddings for the words in my sentences. The plan is to represent each word in the sentences as a combination of 3 embeddings: (w2v,dist1,dist2) where w2v is a pretrained word2vec embedding and dist1 and dist2 are the relative distances between each word in the sentence and the two drugs that are possibly related.
I am confused about how I should approach the issue of padding so that every sentence is of equal length. Should I pad the tokenised sentences with some series of strings(what string?) to equalise their lengths before any embedding? | 0 | 1 | 354 |
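A rough sketch of the padding step suggested above, assuming the sentences have already been converted to integer token ids; the maxlen value and the special padding id 0 are assumptions:

```python
from keras.preprocessing.sequence import pad_sequences

token_ids = [[4, 12, 7], [9, 3, 15, 2, 8, 11]]    # two sentences of different length
dist1 = [[-1, 0, 1], [-2, -1, 0, 1, 2, 3]]        # relative distance to the first drug

# pad (or trim) everything to a common width; id 0 acts as the special padding token
padded_tokens = pad_sequences(token_ids, maxlen=8, padding='post', truncating='post', value=0)
padded_dist1 = pad_sequences(dist1, maxlen=8, padding='post', truncating='post', value=0)
print(padded_tokens.shape, padded_dist1.shape)    # (2, 8) (2, 8)
```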
0 | 50,185,377 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-03T23:53:00.000 | 1 | 1 | 0 | How to identify when a certain image disappears | 50,165,274 | 0.197375 | python,python-3.x,opencv | I ended up using numpy to save the captured frames and reached 99% efficiency with the reduced area, with no resizing of the images and no multiprocessing. | I'm doing an OpenCV project which needs to detect an image on the screen, which disappears some time after it shows up. It needs to save as many frames as possible while the image is showing, and stop when it disappears. I plan to use the collected data to train a ConvNet, so the more frames I can capture, the better.
I was using template matching full-screen to search for the image and to identify when it disappears, but I was only capturing about 30% of the total frames, with the screen at 30FPS.
Wanting to increase the frame capture rate, I changed to searching full-screen with template matching until the image was found, and then the search area was reduced to the found coordinates of the image plus a small margin, so the program could identify when the image disappeared using far fewer resources (because of a much smaller area to check whether the image was still there). This allowed me to capture 60% of the frames.
However I want to know, can I do something else to optimize my program? I feel like doing template matching for every frame is overkill. Is object tracking better in this case, or would it not even work because the image disappears?
PS: the image stays for about 7~10 seconds on the screen and takes about the same time to pop up again. | 0 | 1 | 118 |
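A minimal sketch of the reduced-area check described above; the file names, coordinates, and the 0.8 threshold are assumptions:

```python
import cv2

frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)       # stand-in for a screen capture
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)

x, y = 100, 200                                              # where the image was last found
h, w = template.shape
margin = 10
roi = frame[max(0, y - margin):y + h + margin, max(0, x - margin):x + w + margin]

result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, _ = cv2.minMaxLoc(result)
still_visible = max_val > 0.8                                # tune this threshold for your data
```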
0 | 50,363,845 | 0 | 0 | 0 | 1 | 2 | true | 2 | 2018-05-04T11:46:00.000 | 3 | 3 | 0 | Save keras model to database | 50,174,189 | 1.2 | sql-server,database,python-3.x,tensorflow,keras | It seems that there is no clean solution to directly store a model incl. weights into the database. I decided to store the model as h5 file in the filesystem and upload it from there into the database as a backup. For predictions I load anyway the model from the filesystem as it is much faster than getting it from the database for each prediction. | I created a keras model (tensorflow) and want to store it in my MS SQL Server database. What is the best way to do that? pyodbc.Binary(model) throws an error. I would prefer a way without storing the model in the file system first.
Thanks for any help | 0 | 1 | 3,587 |
0 | 57,933,840 | 0 | 0 | 0 | 1 | 2 | false | 2 | 2018-05-04T11:46:00.000 | 1 | 3 | 0 | Save keras model to database | 50,174,189 | 0.066568 | sql-server,database,python-3.x,tensorflow,keras | The best approach would be to save it as a file in the system and just save the path in the database. This technique is usually used to store large files like images since databases usually struggle with them. | I created a keras model (tensorflow) and want to store it in my MS SQL Server database. What is the best way to do that? pyodbc.Binary(model) throws an error. I would prefer a way without storing the model in the file system first.
Thanks for any help | 0 | 1 | 3,587 |
0 | 50,494,869 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-05-04T21:09:00.000 | 4 | 2 | 0 | How to find an optimum number of processes in GridSearchCV( ..., n_jobs = ... )? | 50,183,080 | 0.379949 | python,machine-learning,parallel-processing,scikit-learn,parallelism-amdahl | An additional, simpler answer by Prof. Kevyn Collins-Thompson, from the course Applied Machine Learning in Python:
If I have 4 cores in my system, n_jobs = 30 (30 as an example) will be the same as n_jobs = 4, so there is no additional effect.
So the maximum performance that can be obtained is always with n_jobs = -1. | I'm wondering, which is better to use with GridSearchCV( ..., n_jobs = ... ) to pick the best parameter set for a model: n_jobs = -1 or n_jobs with a big number, like n_jobs = 30?
Based on Sklearn documentation:
n_jobs = -1 means that the computation will be dispatched on all the
CPUs of the computer.
On my PC I have an Intel i3 CPU, which has 2 cores and 4 threads, so does that mean if I set n_jobs = -1, implicitly it will be equal to n_jobs = 2 ? | 0 | 1 | 4,600 |
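A small sketch of the setting being discussed; the estimator and parameter grid are arbitrary examples:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]}

# n_jobs=-1 uses every available core; any value above the core/thread count
# (e.g. n_jobs=30 on a 4-thread CPU) brings no further speed-up
search = GridSearchCV(SVC(), param_grid, n_jobs=-1)
# search.fit(X_train, y_train)   # X_train, y_train are your own data
```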
0 | 50,198,173 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-05-06T09:13:00.000 | 0 | 1 | 0 | Predicting binary classification | 50,198,094 | 1.2 | python,machine-learning | It's basically the same. When talking about binary classification, you can think of a final layer for each model that adapts the output to the other model's format,
e.g. if the model outputs 0 or 1, the final layer will translate it to a vector like [1,0] or [0,1] and vice versa, using a threshold criterion, usually >= 0.5.
A nice byproduct of 2 nodes in the final layer is the confidence level of the model in its predictions: [0.80, 0.20] and [0.55, 0.45] will both yield a [1,0] classification, but the first prediction has more confidence.
This can also be extrapolated from a 1-node output by the distance of the output from the extremes 1 and 0, so 0.1 will be considered a 0 prediction with more confidence than 0.3. | I have been self-learning machine learning lately, and I am now trying to solve a binary classification problem (i.e.: one label which can either be true or false). I was representing this as a single column which can be 1 or 0 (true or false).
Nonetheless, I was researching and read about how categorical variables can reduce the effectiveness of an algorithm, and how one should one-hot encode them or translate into a dummy variable thus ending with 2 labels (variable_true, variable_false).
Which is the correct way to go about this? Should one predict a single variable with two possible values or 2 simultaneous variables with a fixed unique value?
As an example, let's say we want to predict whether a person is a male or female:
Should we have a single label Gender and predict 1 or 0 for that variable, or Gender_Male and Gender_Female? | 0 | 1 | 381 |
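The two encodings discussed above, sketched as Keras output layers (the hidden layer size and the 10 input features are assumptions):

```python
from keras.models import Sequential
from keras.layers import Dense

# Option 1: a single output node, target is 0 or 1
single = Sequential([Dense(16, activation='relu', input_shape=(10,)),
                     Dense(1, activation='sigmoid')])
single.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Option 2: two output nodes, target is one-hot ([1, 0] = male, [0, 1] = female)
onehot = Sequential([Dense(16, activation='relu', input_shape=(10,)),
                     Dense(2, activation='softmax')])
onehot.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```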
0 | 50,206,803 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-07T02:06:00.000 | 0 | 1 | 0 | Tensorflow How can I make a classifier from a CSV file using TensorFlow? | 50,206,222 | 0 | python,csv,tensorflow | You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow so that you gain some familiarity with it.
Now you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label to each type of aphid that you want to recognize, and adjust the size of the output layer to match them.
You can now read the CSV file using python, and remove any text like "HuMoments". If your file has names of aphids, remove them and replace them with numerical class labels. Replace the training data of the code in the above link, with these data.
Now you can train the network according to the description under the title "Train the Model".
One more note. Unless it is essential to use Tensorflow to match your project requirements, I suggest using Keras. Keras is a higher level library that is much easier to learn than Tensorflow, and you have more sample code online. | I need to create a classifier to identify some aphids.
My project has two parts, one with computer vision (OpenCV), which I have already concluded. The second part is Machine Learning using TensorFlow, but I have no idea how to do it.
I have these data below that have been removed starting from the use of OpenCV, are HuMoments (I believe that is the path I must follow), each line is the HuMoments of an aphid (insect), I have 500 more data lines that I passed to one CSV file.
How can I make a classifier from a CSV file using TensorFlow?
HuMoments (in CSV file):
0.27356047,0.04652453,0.00084231,7.79486673,-1.4484489,-1.4727380,-1.3752532
0.27455502,0.04913969,3.91102408,1.35705980,3.08570234,2.71530819,-5.0277362
0.20708829,0.01563241,3.20141907,9.45211423,1.53559373,1.08038279,-5.8776765
0.23454372,0.02820523,5.91665789,6.96682467,1.02919203,7.58756583,-9.7028848 | 0 | 1 | 102 |
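A rough Keras sketch of a classifier over the HuMoments CSV, following the suggestion above; the file name, the assumption that the 8th column holds a numeric class label, and the layer sizes are all illustrative:

```python
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

data = pd.read_csv('humoments.csv', header=None).values   # 7 HuMoments + 1 label per row (assumed)
X, y = data[:, :7], data[:, 7].astype(int)
num_classes = len(np.unique(y))

model = Sequential([Dense(32, activation='relu', input_shape=(7,)),
                    Dense(num_classes, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, to_categorical(y, num_classes), epochs=50, batch_size=16)
```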
0 | 50,210,208 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-07T08:23:00.000 | 0 | 1 | 0 | PySCIPOpt/SCIP - isLPSolBasic() not in pyscipopt.scip.Model | 50,209,928 | 0 | python,discrete-mathematics,scip | Your assumption is correct: The pip version of PySCIPOpt was outdated and did not yet include the latest updates with respect to cutting plane separators. I just triggered a new release build (v.1.4.6) that should be available soon.
When in doubt, you can always build PySCIPOpt from source by running python setup.py install from within the project directory. | I developed a gomory cut for a LP problem (based on 'test-gomory.py' test file) which I could not manage to run. Finally, I copied the test file to check whether I'd the same trouble. Indeed I got the same message:
if not scip.isLPSolBasic():
AttributeError: 'pyscipopt.scip.Model' object has no attribute 'isLPSolBasic'
I have downloaded SCIPOptSuite 5.0.1 win64, set up the path and installed pyscipopt using pip on conda.
I cannot figure what is wrong, except that I may have failed to install pyscipopt properly? Thank you for pointing me in the right direction. | 0 | 1 | 215 |
0 | 58,696,815 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-07T12:01:00.000 | 0 | 2 | 0 | How to load index shards by gensim.similarities.Similarity? | 50,213,754 | 0 | python,gensim | shoresh's answer is correct. The key part that OP was missing was
index.save(output_fname)
While just creating the object appears to save it, it's really only saving the shards, which require saving a sort of directory file (via index.save(output_fname)) to be made accessible as a whole object.
In gensim, var index usually means an object of gensim.similarities.<cls>.
At first, I use gensim.similarities.Similarity(filepath, ...) to save the index as a file, and then load it with gensim.similarities.Similarity.load(filepath + '.0'), because gensim.similarities.Similarity by default saves the index to shard files like index.0.
When the index file becomes larger, it is automatically separated into more shards, like index.0, index.1, index.2, ...
How can I load these shards file? gensim.similarities.Similarity.load() can only load one file.
BTW: I have try to find the answer in gensim's doc, but failed. | 0 | 1 | 473 |
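A small sketch of the save/load sequence the answer describes, on a toy corpus (the texts, prefix, and file names are made up):

```python
from gensim.corpora import Dictionary
from gensim.similarities import Similarity

texts = [['human', 'computer', 'interface'], ['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# 'shard_prefix' is where gensim writes shard_prefix.0, shard_prefix.1, ...
index = Similarity('shard_prefix', corpus, num_features=len(dictionary))
index.save('my_index')                         # saves the small "directory" object too

# later, possibly in another process: load the directory object,
# which knows how to find its shards
index = Similarity.load('my_index')
sims = index[dictionary.doc2bow(['graph', 'survey'])]
```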
0 | 50,225,476 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-05-07T17:42:00.000 | 0 | 2 | 0 | Challenges with high cardinality data | 50,219,738 | 0 | python,machine-learning,data-science,dimensionality-reduction,cardinality | You can replace all id numbers and names in the data with a standard token like <ID> or <NAME>. This should be done during preprocessing. Next you should pick a fixed vocabulary, like all words that occur at least 5 times in the training data. | Background: I am working on classifying data from a ticketing system into failed or successful requests. A request goes through various stages before getting completed. Each request is assigned to different teams and individuals before being marked as complete.
Making use of historical data I want to create predictions for these tickets at a final state x before they are marked as complete(success or fail).
Amongst the various features, individual's name who work on the records & team names are very important factors in analysing this data. Being a huge organization I expect 5-10 new names being added every day.
Historical data
60k records (used for training, validation and testing)
Has 10k unique individual names
Current data
Overal 1k records
- Has 200 individual names
I'm facing a challenge due to high cardinality data like individual names whose number is not fixed and keeps on growing.
1. Challenge while making actual predictions - The no. of columns for the current data will be different every time and would never match the feature length of training data.
- So I have to train my model every single time, I want to make predictions.
2. Challenge while data prep - The above also presents a challenge for data prep as now I always have to encode the complete data and the query encoded data to split into current and future data.
Sorry for the long story.
What am I looking for?
Is there a better way to approach?
These high & constantly changing dimensions is a pain. Any suggestions on how can I handle them, to avoid training every time?
Note: I tried using PCA and Autoencoders for dim red. (The results were not great for my highly unbalanced dataset so I'm working on the data with high dimensions only) | 0 | 1 | 254 |
0 | 50,226,408 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-05-07T17:42:00.000 | 0 | 2 | 0 | Challenges with high cardinality data | 50,219,738 | 0 | python,machine-learning,data-science,dimensionality-reduction,cardinality | Since you have dynamic data, as you said, you can use a neural net to identify and merge updating variables and data.
Also you should use classifiers like
CVParameterSelection : For cross validation parameter selection.
PART : For making a decision tree, great utility as it works on divide and conquer rule.
REP Tree (Pruned) : For reduced error in output by splitting error values
And finally when you have the systems in place, you can run the prediction model! | Background: I am working on classifying data from a ticketing system data into a failed or successful requests. A request goes into various stages before getting completed. Each request is assigned to different teams and individuals before being marked as complete.
Making use of historical data I want to create predictions for these tickets at a final state x before they are marked as complete(success or fail).
Amongst the various features, individual's name who work on the records & team names are very important factors in analysing this data. Being a huge organization I expect 5-10 new names being added every day.
Historical data
60k records (used for training, validation and testing)
Has 10k unique individual names
Current data
Overal 1k records
- Has 200 individual names
I'm facing a challenge due to high cardinality data like individual names whose number is not fixed and keeps on growing.
1. Challenge while making actual predictions - The no. of columns for the current data will be different every time and would never match the feature length of training data.
- So I have to train my model every single time, I want to make predictions.
2. Challenge while data prep - The above also presents a challenge for data prep as now I always have to encode the complete data and the query encoded data to split into current and future data.
Sorry for the long story.
What am I looking for?
Is there a better way to approach?
These high & constantly changing dimensions is a pain. Any suggestions on how can I handle them, to avoid training every time?
Note: I tried using PCA and Autoencoders for dim red. (The results were not great for my highly unbalanced dataset so I'm working on the data with high dimensions only) | 0 | 1 | 254 |
0 | 50,233,334 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-08T11:51:00.000 | 0 | 2 | 0 | action when target variable is character/string | 50,232,939 | 0 | python-3.x | You can use the pandas factorize method for converting strings into numbers. numpy.unique can also be used but will be comparatively slower. | I want to train a classifier, and my target variable has 300 unique values, and its type is character/string
Is there an automated process with pandas that can automatically transform each string into a number?
Thanks a lot | 0 | 1 | 486 |
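A minimal sketch of the factorize approach mentioned above, on a made-up target column:

```python
import pandas as pd

df = pd.DataFrame({'target': ['cat_a', 'cat_b', 'cat_a', 'cat_c']})

# factorize returns integer codes plus the unique labels, so the mapping can be reversed
codes, uniques = pd.factorize(df['target'])
df['target_code'] = codes
print(dict(enumerate(uniques)))   # {0: 'cat_a', 1: 'cat_b', 2: 'cat_c'}
```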
0 | 50,252,474 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-09T08:13:00.000 | 0 | 1 | 0 | Data with too many categories | 50,248,547 | 0 | python,r | I am not sure if I understand by "automatically". However, instead of plotting (which can be a hard task if you have many attributes for each sample), you can try to automatically group your samples using clustering techniques such as K-Means, Hierarchical clustering, SOM (or any clustering technique that fits to your problem). Then, for each group, you may extract any statistical information of interest. | I hope to know a general approach when do data engineering.
I have a data set with some variables that have too many categories, and including these variables in a predictive model would definitely increase the complexity of the model, thus leading to overfitting.
Normally I would group those categories into fewer groups by drawing plots to see if the response variable is significantly different among these groups. Is there a more efficient way of dealing with this issue, like automatically carrying out some statistical test?
ADDED: In a nutshell, I hope to group or bin values in a variable so that the response variable in each group has very different distribution. | 0 | 1 | 339 |
0 | 50,260,668 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-05-09T16:01:00.000 | 2 | 1 | 0 | Tensorflow Eager and Tensorboard Graphs? | 50,257,614 | 1.2 | python,tensorflow,machine-learning | No, by default there is no graph nor session in eager execution, which is one of the reasons why it is so appealing. You will need to write code that is compatible with both graph and eager execution to write your net's graph in graph mode if you need to.
Note that even though you can use Tensorboard in eager mode to visualize summaries, good ol' tf.summary.FileWriter is incompatible with eager execution: you need to use tf.contrib.summary.create_file_writer instead (works in graph mode too, so you won't have to change your code). | I'm currently looking over the Eager mode in Tensorflow and wanted to know if I can extract the graph to use in Tensorboard. I understand that Tensorflow Eager mode does not really have a graph or session system that the user has to create. However, from my understanding there is one under the hood. Could this hidden Graph and Session be exported to support a visual graph view in Tensorboard? Or do I need to redo my model into a Graph/Session form of execution? | 0 | 1 | 4,878 |
0 | 50,265,639 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-05-10T01:53:00.000 | 2 | 1 | 0 | Document similarity in production environment | 50,264,369 | 1.2 | python,machine-learning,nlp,gensim,doc2vec | You don't have to take the old model down to start training a new model, so despite any training lags, or new-document bursts, you'll always have a live model doing the best it can.
Depending on how much the document space changes over time, you might find retraining to have a negligible benefit. (One good model, built on a large historical record, might remain fine for inferring new vectors indefinitely.)
Note that tuning inference to use more steps (especially for short documents), or a lower starting alpha (more like the training default of 0.025) may give better results.
If word-vectors are available, there is also the "Word Mover's Distance" (WMD) calculation of document similarity, which might be even better at identifying close duplicates. Note, though, it can be quite expensive to calculate – you might want to do it only against a subset of likely candidates, or have to add many parallel processors, to do it in bulk. There's another newer distance metric called 'soft cosine similarity' (available in recent gensim) that's somewhere between simple vector-to-vector cosine-similarity and full WMD in its complexity, that may be worth trying.
To the extent the vocabulary hasn't expanded, you can load an old Doc2Vec model, and continue to train() it – and starting from an already working model may help you achieve similar results with fewer passes. But note: it currently doesn't support learning any new words, and the safest practice is to re-train with a mix of all known examples interleaved. (If you only train on incremental new examples, the model may lose a balanced understanding of the older documents that aren't re-presented.)
(If you chief concern is documents that duplicate exact runs-of-words, rather than just similar fuzzy topics, you might look at mixing-in other techniques, such as breaking a document into a bag-of-character-ngrams, or 'shingleprinting' as in common in plagiarism-detection applications.) | We are having n number of documents. Upon submission of new document by user, our goal is to inform him about possible duplication of existing document (just like stackoverflow suggests questions may already have answer).
In our system, new document is uploaded every minute and mostly about the same topic (where there are more chance of duplication).
Our current implementation includes gensim doc2vec model trained on documents (tagged with unique document ids). We infer vector for new document and find most_similar docs (ids) with it. Reason behind choosing doc2vec model is that we wanted to take advantage of semantics to improve results. As far as we know, it does not support online training, so we might have to schedule a cron or something that periodically updates the model. But scheduling cron will be disadvantageous as documents come in a burst. User may upload duplicates while model is not yet trained for new data. Also given huge amount of data, training time will be higher.
So i would like to know how such cases are handled in big companies. Are there any better alternative? or better algorithm for such problem? | 0 | 1 | 418 |
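A rough sketch of the inference tuning mentioned above (more steps, lower starting alpha), assuming an already trained and saved Doc2Vec model; the file path is hypothetical and exact parameter names (steps vs. epochs) depend on the gensim version:

```python
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load('doc2vec.model')                     # hypothetical path

tokens = "new document text about the same topic".split()
vector = model.infer_vector(tokens, steps=50, alpha=0.025)

# most similar previously tagged documents = candidate duplicates
candidates = model.docvecs.most_similar([vector], topn=10)
```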
0 | 50,737,007 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-05-10T13:06:00.000 | 1 | 1 | 0 | python: Find camera shift angle using a reference image and current image from the camera | 50,273,650 | 1.2 | python,opencv,image-processing,scipy,computer-vision | Using a Hanning window will be more accurate for getting the phase values. Using the phase correlate function with the window will give you the right phase shift. | I have a reference image captured from a CC camera and need to capture the current image from the camera at regular intervals to check whether the camera has shifted or not. If shifted, the shift angle is also needed. How can I achieve this using Python? Is the cv2.phaseCorrelate() function helpful for this? Please give me some suggestions.
Thank you. | 0 | 1 | 352 |
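A minimal sketch of the Hanning-window idea above, assuming the reference and current frames are grayscale files and the camera's horizontal field of view is known (the 60-degree value is an assumption):

```python
import cv2
import numpy as np

ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
cur = cv2.imread('current.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

window = cv2.createHanningWindow(ref.shape[::-1], cv2.CV_64F)   # reduces edge effects
(shift_x, shift_y), response = cv2.phaseCorrelate(ref, cur, window)

fov_deg = 60.0                                  # camera horizontal field of view (assumed)
angle_x = shift_x * fov_deg / ref.shape[1]      # rough pixel-shift -> angle conversion
print(angle_x, response)
```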
0 | 50,282,794 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-05-10T22:49:00.000 | 4 | 1 | 0 | Tensorflow: switch between CPU and GPU [Windows 10] | 50,282,567 | 1.2 | python-3.x,tensorflow | If you set the environment variable CUDA_VISIBLE_DEVICES=-1 you will use the CPU only. If you don't set that environment variable you will allocate memory to all GPUs but by default only use GPU 0. You can also set it to the specific GPU you want to use. CUDA_VISIBLE_DEVICES=0 will only use GPU 0.
This environment variable is created by the user, it won't exist until you create it. You need to set the variable before tensorflow is imported (usually that is before you start your script). | How can I quickly switch between running tensorflow code with my CPU and my GPU?
My setup:
OS = Windows 10
Python = 3.5
Tensorflow-gpu = 1.0.0
CUDA = 8.0.61
cuDNN = 5.1
I saw a post suggesting something about setting CUDA_VISIBLE_DEVICES=0 but I don't have this variable in my environment (not sure if it's because I'm running windows or what) but if I do set it using something like os.environ it doesn't effect how tensorflow runs code. | 0 | 1 | 2,716 |
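A short sketch of the environment-variable switch described above; it has to run before tensorflow is imported:

```python
import os

# '-1' hides all GPUs (CPU only); '0' exposes only the first GPU
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf   # imported after the variable is set
print(tf.__version__)
```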
0 | 50,290,731 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-05-11T10:08:00.000 | 1 | 1 | 0 | Visualize a SVM model having100 attributes in 2D plot python | 50,289,947 | 1.2 | python,machine-learning,plot,svm,text-classification | For input data of higher dimensionality, I think that there is not a direct way to render a SVM. You should apply a dimensionality reduction, in order to have something to plot in 2-d or 3-d. | I am using SVM for text classification (tf-idf score based classification).
Is it possible to plot SVM having more than 100 attributes and 10 labels. Is there any way to reduce the features and then plot the same multiclass SVM. | 0 | 1 | 150 |
0 | 50,296,509 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2018-05-11T16:20:00.000 | 2 | 2 | 0 | Generate non-uniform random numbers | 50,296,427 | 1.2 | python,arrays,algorithm | Your number 100 is not independent of the input; it depends on the given p values. Any parameter that depends on the magnitude of the input values is really exponential in the input size, meaning you are actually using exponential space. Just constructing that array would thus take exponential time, even if it was structured to allow constant lookup time after generating the random number.
Consider two p values, 0.01 and 0.99. 100 values is sufficient to implement your scheme. Now consider 0.001 and 0.999. Now you need an array of 1,000 values to model the probability distribution. The amount of space grows with (I believe) the ratio of the largest p value and the smallest, not in the number of p values given. | Algo (Source: Elements of Programming Interviews, 5.16)
You are given n numbers as well as probabilities p0, p1,.., pn-1
which sum up to 1. Given a rand num generator that produces values in
[0,1] uniformly, how would you generate one of the n numbers according
to their specific probabilities.
Example
If numbers are 3, 5, 7, 11, and the probabilities are 9/18, 6/18,
2/18, 1/18, then in 1000000 calls to the program, 3 should appear
500000 times, 7 should appear 111111 times, etc.
The book says to create intervals p0, p0 + p1, p0 + p1 + p2, etc. so in the example above the intervals are [0.0, 0.5), [0.5, 0.8333), etc. and combining these intervals into a sorted array of endpoints could look something like [1/18, 3/18, 9/18, 18/18]. Then run the random number generator, and find the smallest element that is larger than the generated element - the array index that it corresponds to maps to an index in the given n numbers.
This would require O(N) pre-processing time and then O(log N) to binary search for the value.
I have an alternate solution that requires O(N) pre-processing time and O(1) execution time, and am wondering what may be wrong with it.
Why can't we iterate through each number in n, multiplying [n] * 100 * probability that matches with n. E.g [3] * (9/18) * 100. Concatenate all these arrays to get, at the end, a list of 100 elements, with the number of elements for each mapping to how likely it is to occur. Then, run the random num function and index into the array, and return the value.
Wouldn't this be more efficient than the provided solution? | 0 | 1 | 556 |
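To make the interval-plus-binary-search scheme from the question concrete, here is a small sketch using the standard library's bisect; the numbers and probabilities are the ones from the example:

```python
import bisect
import itertools
import random

values = [3, 5, 7, 11]
probs = [9/18, 6/18, 2/18, 1/18]

endpoints = list(itertools.accumulate(probs))   # [0.5, 0.8333..., 0.9444..., 1.0]

def pick():
    r = random.random()                          # uniform in [0, 1)
    return values[bisect.bisect(endpoints, r)]   # index of the first endpoint > r

counts = {v: 0 for v in values}
for _ in range(100000):
    counts[pick()] += 1
print(counts)   # roughly proportional to the given probabilities
```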
0 | 50,296,535 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-05-11T16:20:00.000 | 2 | 2 | 0 | Generate non-uniform random numbers | 50,296,427 | 0.197375 | python,arrays,algorithm | If you have rational probabilities, you can make that work. Rather than 100, you must use a common denominator of the rational proportions. Insisting on 100 items will not fulfill the specs of your assigned example, let alone more diabolical ones. | Algo (Source: Elements of Programming Interviews, 5.16)
You are given n numbers as well as probabilities p0, p1,.., pn-1
which sum up to 1. Given a rand num generator that produces values in
[0,1] uniformly, how would you generate one of the n numbers according
to their specific probabilities.
Example
If numbers are 3, 5, 7, 11, and the probabilities are 9/18, 6/18,
2/18, 1/18, then in 1000000 calls to the program, 3 should appear
500000 times, 7 should appear 111111 times, etc.
The book says to create intervals p0, p0 + p1, p0 + p1 + p2, etc. so in the example above the intervals are [0.0, 0.5), [0.5, 0.8333), etc. and combining these intervals into a sorted array of endpoints could look something like [1/18, 3/18, 9/18, 18/18]. Then run the random number generator, and find the smallest element that is larger than the generated element - the array index that it corresponds to maps to an index in the given n numbers.
This would require O(N) pre-processing time and then O(log N) to binary search for the value.
I have an alternate solution that requires O(N) pre-processing time and O(1) execution time, and am wondering what may be wrong with it.
Why can't we iterate through each number in n, multiplying [n] * 100 * probability that matches with n. E.g [3] * (9/18) * 100. Concatenate all these arrays to get, at the end, a list of 100 elements, with the number of elements for each mapping to how likely it is to occur. Then, run the random num function and index into the array, and return the value.
Wouldn't this be more efficient than the provided solution? | 0 | 1 | 556 |
0 | 50,298,062 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-05-11T17:11:00.000 | 2 | 3 | 0 | Get cluster points after KMeans in a list format | 50,297,142 | 0.132549 | python-3.x,scikit-learn,k-means,data-science | You are probably looking for the attribute labels_. | Suppose I clustered a data set using sklearn's K-means.
I can see the centroids easily using KMeans.cluster_centers_ but I need to get the clusters as I get centroids.
How can I do that? | 0 | 1 | 5,157 |
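A minimal sketch of grouping the samples by labels_, on made-up data:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]])
km = KMeans(n_clusters=2, random_state=0).fit(X)

# labels_ holds the cluster index of every sample, in the same order as X
clusters = [X[km.labels_ == k] for k in range(km.n_clusters)]
for k, points in enumerate(clusters):
    print(k, points.tolist())
```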
0 | 50,305,713 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-12T10:36:00.000 | 0 | 1 | 0 | How to continue to train a model with new classes and data? | 50,305,294 | 0 | python-3.x,tensorflow,machine-learning,rnn | Best thing to do is to train your network from scratch with the output layers adjusted to the new output class size.
If retraining is an issue, then keep the trained network as it is and only drop the last layer. Add a new layer with the proper output size, initialized to random weights and then fine-tune (train) the entire network. | I have trained a model successfully and now I want to continue training it with new data. If a given data with the same amount of classes it works fine. But having more data then initially it will give me the error:
ValueError: Shapes (?, 14) and (?, 21) are not compatible
How can I dynamically increase the number of classes in my trained model or how to make the model accept a lesser number of classes? Do I need to save the classes in a pickle file? | 0 | 1 | 168 |
0 | 50,632,788 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-12T14:33:00.000 | 0 | 1 | 0 | protoc executable unable to find object detection .protos file in tensorflow models | 50,307,347 | 0 | python,tensorflow,object-detection,protoc | Which directory are you currently in?
You can set the Path variable to point to protoc.exe and run the command from the object_detection directory. | I am trying to work with the object detection API by TensorFlow but was unable to install it properly. I looked up every solution on the internet and everything was in vain. Below is the error message I am getting:
“C:\Program Files\protoc-3.5.0-win32\bin\protoc.exe” object_detection/protos/*.proto --python_out=.
object_detection/protos/*.proto: No such file or directory | 0 | 1 | 353 |
0 | 50,309,756 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-05-12T16:54:00.000 | 2 | 1 | 0 | How to revert shuffled array in python with default PRNG and indexes? | 50,308,623 | 1.2 | python,numpy,random,pillow | This has nothing to do with your random numbers.
Notice that you use the random number generator only once when you create the shuffled indices. When you load the indices from the file, the random number generator is not used, since only a file is read.
Your issue occurs at a different place: You save the scrambled Lena as a .jpg. Thereby, poor Lena's scrambled image gets compressed and the colour values change a little bit. When you load the image again and reorder the indices, you do not get the original colours back but only the values after the compression.
Solution: Save your images as a *.png and everything works out.
If you run into problems with an alpha channel, just convert the image back to RGB: scrambled_img = Image.open(img_path).convert("RGB") | Moving an image to an array, then flattening it and shuffling it with a given seed x, it should be easy to unshuffle it with the given seed and the indexes from the shuffling process.
read image IMG.jpg
random.seed(x) and shuffle -> indexes, shuffle_img.jpg
unshuffle
However, this RESULT shows that the resulting IMG is similar but not 1:1 to the input image, with this grain noise.
Why does the unshuffling give so much noise if it is not the RNG, only a PRNG? | 0 | 1 | 287 |
0 | 50,333,204 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-12T20:43:00.000 | 1 | 1 | 0 | How to efficiently use a Session from within a python object to keep tensorflow as an implementation detail? | 50,310,515 | 0.197375 | python,tensorflow,scikit-learn | Don't open a session at each function call, that could be very inefficient if the function is called many times.
If for some reason, you don't want to expose a context manager, then you need to open the session yourself, and leave it open. It is perhaps a bit simpler for the user, but sharing the tf.Session with other objects or libraries might be more difficult. Also trying to hide the fact that you are using tensorflow may be a bit vain, as it is potentially incompatible with other libraries also relying on the GPU. (Also the user will need to install tensorflow to use the library, s/he will definitely know that you are using it).
So I would not try to encapsulate things that can't or shouldn't (in my opinion) and use a context manager for the tf.Session (maybe even using directly a tf.Session itself if I don't mind exposing tensorflow, otherwise wrapping it in my own context manager). | I'm implementing a custom sklearn Transformer, which requires an optimization step which has been coded in Tensorflow. TF requires a Session, which should be used as a context manager or explicitly closed. The question is: adding a close() method to the Transformer would be odd (and unexpected for a user), what is the best place to close the session? Should I open and close a new session for a every call to fit()? Or should I keep it open and leave the session.close() to the __del__ method of the transformer? Any other options? | 0 | 1 | 143 |
0 | 50,326,237 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-05-13T19:28:00.000 | 0 | 1 | 0 | Need help using Keras' model.predict | 50,319,873 | 0 | python,tensorflow,machine-learning,keras,predict | The model that you trained is not directly optimized w.r.t. the graph reconstruction. Without loss of generality, for a N-node graph, you need to predict N choose 2 links. And it may be reasonable to assume that the true values of the most of these links are 0.
When looking into your model accuracy on the 0-class and 1-class, it is clear that your model is prone to predict 1-class, assuming your training data is balanced. Therefore, your reconstructed graph contains many false alarm links. This is the exact reason why the performance of your reconstruction graph is poor.
If it is possible to retrain the model, I suggest you do it and use more negative samples.
If not, you need to consider applying some post-processing. For example, instead of finding a threshold to decide which two nodes have a link, use the raw predicted link probabilities to form a node-to-node linkage matrix, and apply something like the minimum spanning tree to further decide what are appropriate links. | My goal is to make an easy neural network fit by providing 2 verticies of a certain Graph and 1 if there's a link or 0 if there's none.
I fit my model, it gets loss of about 0.40, accuracy of about 83% during fitting. I then evaluate the model by providing a batch of all positive samples and several batches of negative ones (utilising random.sample). My model gets loss of ~0.35 and 1.0 accuracy for positive samples and ~0.46 loss 0.68 accuracy for negative ones.
My understanding of neural networks if extremely limited, but to my understanding the above means it theoretically always is right when it outputs 0 when there's no link, but can sometimes output 1 even if there is none.
Now for my actual problem: I try to "reconstruct" the original graph with my neural network via model.predict. The problem is I don't understand what the predict output means. At first I assumed values above 0.5 mean 1, else 0. But if that's the case the model doesn't even come close to rebuilding the original.
I get that it won't be perfect, but it simply returns value above 0.5 for random link candidates.
Can someone explain to me how exactly model.predict works and how to properly use it to rebuild my graph? | 0 | 1 | 343 |
0 | 50,329,741 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-14T05:48:00.000 | 0 | 1 | 0 | How to used a tensor in different graphs? | 50,323,744 | 0 | python,tensorflow,graph | expanding on @jdehesa's comment,
embedding could be trained initially, saved from graph1 and restored to graph2 using tensorflow's saver/restore tools. For this to work you should assign embedding to a name/variable scope in graph1 and reuse the scope in graph2. | I build two graphs in my code, graph1 and graph2.
There is a tensor, named embedding, in graph1. I tried to use it in graph2 by using get_variable, but the error is tensor must be from the same graph as Tensor. I found that this error occurs because they are in different graphs.
So how can I use a tensor in graph1 to graph2? | 0 | 1 | 39 |
0 | 50,327,148 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-14T09:11:00.000 | 1 | 1 | 0 | identifying feature type in a dataset : categorical or bag of words | 50,326,774 | 0.197375 | python,pandas,machine-learning | Well you're confused between those two terms:
Categorical Data is the kind of data which can be categorized between different categories especially more than two classes or multi-class. Search for 20 Newsgroup Dataset.
Whereas,
Bag of Words is a technique of storing features. Identification of features is done on the basis of what outcome is required. There are techniques to extract features like TF-IDF Vectorizer from sklearn, Word2Vec, Doc2Vec, etc. But identification of features is solely based on the dataset you use and the application it is used for. Always remember, if you convert textual data to numerical form or whatsoever, the column names are your Features or Dimensions whereas the rows are your samples or instances or records. | I am trying to identify the type of feature in a dataset which can be either categorical/bag of words/ floats.
However I am unable to reach to a accurate solution to distinguish between categorical and bag of words due to following reasons.
Categorical data can either be object or float. Counting the unique values in a feature does not ensure the accurate solution as different samples can have the same feature value which necessarily may not be categorical.
For bag or words, I thought of counting the number of words but again this is not the correct way as text can be written in a single word or may be missing.
what can be the best way to identify the type of features? | 0 | 1 | 183 |
0 | 53,698,659 | 0 | 1 | 0 | 0 | 2 | false | 15 | 2018-05-14T17:00:00.000 | 16 | 5 | 0 | No module named '_bz2' in python3 | 50,335,503 | 1 | python,python-3.x,matplotlib,importerror,bzip2 | If you are compiling python yourself, you need to install the libbz2 headers and .so files first, so that python will be compiled with bz2 support.
On ubuntu, apt-get install libbz2-dev then compile python. | When trying to execute the following command:
import matplotlib.pyplot as plt
The following error occurs:
from _bz2 import BZ2Compressor, BZ2Decompressor ImportError: No module
named '_bz2'
So, I was trying to install bzip2 module in Ubuntu using :
sudo pip3 install bzip2
But, the following statement pops up in the terminal:
Could not find a version that satisfies the requirement bzip2 (from
versions: ) No matching distribution found for bzip2
What can I do to solve the problem? | 0 | 1 | 17,593 |
0 | 71,457,141 | 0 | 1 | 0 | 0 | 2 | false | 15 | 2018-05-14T17:00:00.000 | 1 | 5 | 0 | No module named '_bz2' in python3 | 50,335,503 | 0.039979 | python,python-3.x,matplotlib,importerror,bzip2 | I found a pattern in those issues.
It happens mostly if you are missing dev tools and other libraries important for compiling code and installing python.
For me most of those steps did not work, so I had to do the following:
Remove my python installation
pyenv uninstall python_version
Then install all the build tools to make sure I am not missing any
sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \ libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \ xz-utils tk-dev libffi-dev liblzma-dev python-openssl git
Reinstall the new python version
pyenv install python_version
I hope that solves your issues. | When trying to execute the following command:
import matplotlib.pyplot as plt
The following error occurs:
from _bz2 import BZ2Compressor, BZ2Decompressor ImportError: No module
named '_bz2'
So, I was trying to install bzip2 module in Ubuntu using :
sudo pip3 install bzip2
But, the following statement pops up in the terminal:
Could not find a version that satisfies the requirement bzip2 (from
versions: ) No matching distribution found for bzip2
What can I do to solve the problem? | 0 | 1 | 17,593 |
0 | 54,786,075 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-15T16:29:00.000 | 0 | 1 | 0 | Tensorflow android feed with specific array | 50,355,139 | 0 | android,python,tensorflow | tensorInferenceInterface.feed("the_input", pixels, batch_size, width, height, dims);
hope this will work for you | I created a CNN model on tensorflow with input placeholder - [None, 32, 32, 3]tf.placeholder(tf.float32, [None, 24, 24, 3]). Then I want to use that model in Android application and for that reason I froze the model. When I include the library I noticed that I can feed only with two dimensional array or ByteBuffer. How can I feed the bitmap from the camera? Do I have to change the input size of the placeholder and what should the size be?
Thanks! | 0 | 1 | 86 |
0 | 53,350,392 | 0 | 0 | 0 | 0 | 1 | false | 25 | 2018-05-15T16:57:00.000 | 1 | 6 | 0 | How should I get the shape of a dask dataframe? | 50,355,598 | 0.033321 | python,dask | To get the shape we can try this way:
dask_dataframe.describe().compute()
"count" column of the index will give the number of rows
len(dask_dataframe.columns)
this will give the number of columns in the dataframe | Performing .shape is giving me the following error.
AttributeError: 'DataFrame' object has no attribute 'shape'
How should I get the shape instead? | 0 | 1 | 16,817 |
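A short sketch of the row/column counts on a toy dask dataframe; note that the row count is lazy and has to be computed:

```python
import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({'a': range(10), 'b': range(10)}), npartitions=2)

n_cols = len(ddf.columns)        # known without any computation
n_rows = len(ddf)                # triggers a computation over the partitions
# alternatively: n_rows = ddf.shape[0].compute()
print(n_rows, n_cols)
```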
0 | 50,400,053 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-15T20:09:00.000 | 0 | 1 | 0 | Dynamic range of unknown parameters and form of cost function in scipy.optimize.least_squares | 50,358,434 | 0 | python,scipy,mathematical-optimization,least-squares,nonlinear-optimization | Most solvers are designed for variables in the 1-10 range. A large range can cause numerical problems, but it is not guaranteed to be problematic. Numerical problems sometimes stem from the matrix factorization step of the linear algebra for solving the Newton step, which is more dependent of the magnitude of the derivatives. You may also encounter challenges with termination tolerances for values outside the 1-10 range. Overall, if it looks like it's working, it's probably fine. You could get a slightly better answer by normalizing values.
Division by a degree of freedom can cause difficulties in three ways:
division by zero
discontinuous derivatives around 0
very steep derivatives near 0, or very flat derivatives far from 0
For these reasons, I would recommend \sum^N_{n=1} ( g_n - ( a0' e^(tn/a1) y_n - b0' e^-(tn*b1')) )^2. However, as previously stated, if it's already working it may not be worth the effort to reformulate your problem. | I am using scipy.optimize.least_squares to solve an interval constrained nonlinear least squares optimization problem. The form of my particular problem is that of finding a0, a1, b0, and b1 such that the cost function:
\sum^N_{n=1} ( g_n - (y_n - b0 e^-(tn/b1)) / a0 e^-(tn/a1) )^2
is minimized where g_n, y_n and t_n are known and there are interval constraints on a0, a1, b0, and b1.
The four unknown parameters span approximately four orders of magnitude (e.g, a0 = 2e-3, a1 = 30, similar for b0 and b1). I have heard that a high dynamic range of unknown parameters can be numerically problematic for optimization routines.
My first question is whether four or so orders of magnitude range would be problematic for scipy.optimize.minimize. The routine appears to converge on the data I've applied so far.
My second question relates to the form of the cost function. I can equivalently write it as:
\sum^N_{n=1} ( g_n - ( 1/a0 e^(tn/a1) y_n - b0/a0 e^(tn/a1 - tn/b1) ) )^2
=
\sum^N_{n=1} ( g_n - ( a0' e^(tn/a1) y_n - b0' e^-(tn*b1')) )^2
where the new parameters are simple transformations of the original parameters. Is there any advantage to doing this in terms of numerical stability or the avoidance of local minima? I haven't proven it, but I wonder whether this new cost function would be convex as opposed to the original cost function. | 0 | 1 | 176 |
0 | 50,632,652 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2018-05-16T06:58:00.000 | 2 | 2 | 0 | How to use two models in Tensorflow object Detection API | 50,364,281 | 0.197375 | python,tensorflow,object-detection | You can't combine both models. Have two sections of code which will load one model at a time and identify whatever it can see in the image.
The other option is to re-train a single model that can identify all the objects you are interested in. | In the tensorflow Object Detection API we are using the ssd_mobilenet_v1_coco_2017_11_17 model to detect 90 general objects. I want to use this model for detection.
Next, I have trained faster_rcnn_inception_v2_coco_2018_01_28 model to detect a custom object. I wish to use this in the same code where I will be able to detect those 90 objects as well as my new trained custom object. How to achieve this with single code? | 0 | 1 | 2,781 |
0 | 50,370,068 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-16T09:47:00.000 | 2 | 1 | 0 | should model.compile() be run prior to using model.load_weights(), if model has been only slightly changed say dropout? | 50,367,540 | 0.379949 | python,keras,deep-learning | The model.compile() method does not touch the weights in any way.
Its purpose is to create a symbolic function adding the loss and the optimizer to the model's existing function.
You can compile the model as many times as you want, whenever you want, and your weights will be kept intact.
Possible consequences of compile
If you got a model, well trained for some epochs, it's optimizer (depending on what type and parameters you chose for it) will also be trained for that specific epochs.
Compiling will make you lose the trained optimizer, and your first training batches might experience some bad results due to learning rates not suited to the current state of the model.
Other than that, compiling doesn't cause any harm. | I trained & validated on a dataset for nearly 24 epochs, intermittently 8 epochs at a time, saving weights cumulatively after each interval.
I observed a constantly declining train & test loss for the first 16 epochs, after which the training loss continues to fall whereas the test loss rises, so I think it's a case of overfitting.
Because of this, I tried to resume training with the weights saved after 16 epochs, with a change in hyperparameters, say increasing dropout_rate a little.
Therefore I reran the dense & transition blocks with the new dropout to get an identical architecture with the same sequence & learnable parameter count.
Now, when I assign the previous weights to my new model (with new dropout) with model.load_weights() and compile thereafter,
I see that the training loss is even higher, which it should be initially (obviously, with the increased inactivity of random nodes during training), but later it also performs quite unsatisfactorily,
so I'm suspecting that maybe compiling after loading the pretrained weights might have ruined the performance?
What is the reasoning & recommended sequence of model.load_weights() & model.compile()? I'd really appreciate any insights on the above case. | 0 | 1 | 1,304 |
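A minimal sketch showing that compile() after load_weights() leaves the loaded weights untouched; the architecture, file name, and dropout values are placeholders:

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

def build_model(dropout_rate):
    return Sequential([Dense(64, activation='relu', input_shape=(100,)),
                       Dropout(dropout_rate),
                       Dense(10, activation='softmax')])

model = build_model(dropout_rate=0.3)
model.save_weights('weights_epoch16.h5')   # stand-in for the real checkpoint

# same layer sequence, higher dropout: the weight shapes still match layer for layer
model2 = build_model(dropout_rate=0.5)
model2.load_weights('weights_epoch16.h5')
model2.compile(optimizer='adam', loss='categorical_crossentropy')
# compile() only attaches the loss and a fresh optimizer; the weights are not reset
```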
0 | 50,518,560 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-16T10:05:00.000 | 0 | 1 | 0 | get vscode python error when run pyspark program using vs code | 50,367,947 | 0 | python,pyspark,visual-studio-code | Somehow your code is trying to pickle the debugger itself. Do make sure you are running the debugger using the PySpark configuration. | I run my pyspark program in vscode, and get error:
PicklingError: Could not serialize object: ImportError: No module named visualstudio_py_debugger .
I suppose it has something to do with my vscode setting? | 1 | 1 | 185 |
0 | 50,375,996 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-16T16:07:00.000 | 2 | 1 | 0 | Real width of detected face | 50,375,489 | 0.379949 | python,opencv,distance,face-detection,measure | The rectangle from OpenCV's face detection is slightly larger than the face itself, therefore an average face width may not be helpful. Instead, just take a picture of a face at different distances from the camera and record the distance from the camera along with the pixel width of the face for several distances. After plotting the two variables on a graph, use a trendline to come up with a predictive model.
Note: I'm comparing the "simple" rectangle solution against a Facemark based distance measuring solution, so no landmark based answers. I just need the damn average face / matofrectwidth :D
Thank you so much! | 0 | 1 | 138 |
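A tiny sketch of the calibrate-then-predict idea above, using the simple pinhole-camera relation; all numeric values are assumptions from a hypothetical calibration photo:

```python
KNOWN_DISTANCE_CM = 60.0     # measured camera-to-face distance in the calibration photo
KNOWN_WIDTH_CM = 14.0        # real width assumed for the detected rectangle
CALIB_PIXEL_WIDTH = 180.0    # rectangle width in pixels in that photo

focal_length_px = CALIB_PIXEL_WIDTH * KNOWN_DISTANCE_CM / KNOWN_WIDTH_CM

def distance_cm(pixel_width):
    return KNOWN_WIDTH_CM * focal_length_px / pixel_width

print(distance_cm(90.0))     # half the pixel width -> roughly twice the distance
```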
0 | 50,376,159 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-05-16T16:42:00.000 | -1 | 2 | 0 | How can I print pandas pivot table to word document | 50,376,081 | -0.099668 | python,python-2.7,pandas,ms-word,pivot-table | I suggest exporting it to csv using pandas.to_csv(), and then reading the csv to create a table in a Word document. | Could someone please explain, with an example, how we can send the pivot table output from pandas to a Word document without losing formatting? | 0 | 1 | 2,260 |
0 | 50,382,763 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-17T02:58:00.000 | 1 | 1 | 0 | Neural Network Predict Soccer Results | 50,382,706 | 0.197375 | python,c,tensorflow,neural-network | I did this for the 2017 UK Premier league season. However I accumulated the first 19 (out of 38 Games), in order to try and help with my predictions.
I would attack this in the following manner
Get Data on the Teams (What you consider data I will leave up to you)
Get the History of previous matches (Personally I think this is not going to help, as teams change so much)
Python
Pandas
Create New Features
Keras
Model away
When I was playing with the UK Premier League, I got a prediction accuracy of approx. 62%. So how to make it better?
Distance travelled
Improved it by 1.2%. Some teams do not like traveling it seems.
Weather
I got the weather forecast per ground @ Kick-Off-Time (What a pain)
Accuracy improved by 0.5%
What I did not do - was get a list of Players who were playing per game, per day. And attribute things like distance run, passing percentage, goals, fouls, yellow cards etc | I've going on a little competition with friends: We're writing models for the upcoming worldcup to see whoms model gets the most points out of a tip game.
So my approach would be to write a neural network and train it with previous worldcup results regarding to the anticipated wining rates (back then), to maximize the score of the tip game (e.g. 6 points for the exact score, 4 correct goal difference, 3 correct winner).
Scrap rates from various sites (bwin etc.) and let the network tip for me.
I'm familiar with linear algebra, probability calculus etc., but have never programmed a neural network yet.
Since I have not much time left, could someone help me by picking the best approach (like which concept/algorithm I should use) or link me a tutorial to a similar problem or the approach that would fit?
Best,
Hannes | 0 | 1 | 968 |
0 | 50,385,903 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-17T07:15:00.000 | 0 | 1 | 0 | What if a categorical column has multiple values in the train set but only one in test data? Would such a feature be useful in model training at all? | 50,385,511 | 0 | python,machine-learning,regression,data-science,feature-selection | Well, it depends on how many features you have in total. If very few (say fewer than five), that single feature will most likely play an important role in your classification. In this case, I would say you have a "data mismatch" problem, meaning that your training and test data are coming from different distributions. One simple way to solve it is to put the two sets together, shuffle the whole set, and split your data again. | I am trying to solve a regression problem, where one of my features can take two values ('1','0') in the train set but only the value '1' in the test data. Intuitively, including this feature seems wrong to me but I am unable to find a concrete logic to support my assumption. | 0 | 1 | 52 |
0 | 50,392,683 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-05-17T13:04:00.000 | 1 | 1 | 0 | Weight file use for different size image | 50,392,211 | 0.197375 | python,keras,deep-learning,convolutional-neural-network | If your model is fully CNN, there is absolutely no need to have different models.
A CNN model can take images of any size. Just make sure the input_shape=(None,None,channels)
You will need separate numpy arrays though, one for the big images, another for the small images, and you will have to call a different fit method for each.
(But probably you will be working with a generator anyway) | I want to train CNN where image-dimension is 128*512, then I want to use this weight file to train other data which has 128*1024 dimension. That means I want to use pre-trained weight file during the training time of different data(128*1024).
Is it possible or How can I do it?
I want to do this because I have only 300 images which have 128*1024 dimension, while I have 5000 images which have 128*512 dimension and both datasets are different.
Thank you | 0 | 1 | 97 |
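A minimal Keras sketch of the fully-convolutional idea from the answer above: with input_shape=(None, None, channels) and global pooling, the same weights can be trained on the 128x512 images and then reused on the 128x1024 ones. The layer sizes and the 2-class output are assumptions for illustration.
from keras.models import Sequential
from keras.layers import Conv2D, GlobalAveragePooling2D, Dense

model = Sequential()
# None, None lets the model accept any spatial size (128x512 as well as 128x1024)
model.add(Conv2D(32, (3, 3), activation="relu", padding="same", input_shape=(None, None, 1)))
model.add(Conv2D(64, (3, 3), activation="relu", padding="same"))
model.add(GlobalAveragePooling2D())        # removes the dependence on the spatial size
model.add(Dense(2, activation="softmax"))  # assumed 2-class output
model.compile(optimizer="adam", loss="categorical_crossentropy")

# model.fit(x_small, y_small, ...)   # train on the 5000 images of 128x512
# model.fit(x_large, y_large, ...)   # continue with the 300 images of 128x1024 (separate arrays and fit calls)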
0 | 50,410,930 | 1 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-18T07:49:00.000 | 0 | 1 | 0 | How do I capture a live IP Camera feed (eg: http://61.60.112.230/view/viewer_index.shtml?id=938427) to python application? | 50,406,354 | 0 | python,opencv,camera,video-streaming | First, you need to find the actual path to the camera stream; it is an MJPEG or RTSP stream. Use the developer tools on that page to find the stream URL, something like http://ip/video.mjpg or rtsp://ip/live.sdp. When you find the stream URL, create a Python script that opens it with capture = cv2.VideoCapture(stream_url). But, as noted, this is too broad, which means you are asking for a full tutorial. These are some directions; when you have some code, ask a question about issues with that code and you'll get an answer. | I need to perform image recognition on a real-time camera feed. The video is embedded in an shtml page hosted on an IP. How do I access the video and process it using OpenCV. | 0 | 1 | 378
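A minimal sketch of the capture loop described in the answer above; the stream URL is a placeholder — the real MJPEG/RTSP path has to be discovered from the camera page first.
import cv2

stream_url = "http://61.60.112.230/video.mjpg"   # placeholder, not the real stream path
cap = cv2.VideoCapture(stream_url)

while True:
    ok, frame = cap.read()
    if not ok:
        break                    # stream dropped or the URL is wrong
    # run the image-recognition code on `frame` here
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()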
0 | 50,412,247 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-18T13:15:00.000 | 0 | 1 | 0 | Reconstructing a classified image from a simple Convolution Neural Network | 50,412,204 | 0 | python,tensorflow,convolutional-neural-network,deconvolution | You should have a look to cGAN implementation and why not DeepDream from Google ;)
The answer is yes, it is possible; however, it's not straightforward.
Is this possible to achieve?
Is it possible to share weights between corresponding layers in the 2 networks? | 0 | 1 | 25 |
0 | 50,414,131 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-05-18T14:53:00.000 | 4 | 1 | 0 | Effective passing of large data to python 3 functions | 50,414,041 | 0.664037 | python,python-3.x | Python handles function arguments in the same manner as most common languages: Java, JavaScript, C (pointers), C++ (pointers, references).
All objects are allocated on the heap. Variables are always a reference/pointer to the object. The value, which is the pointer, is copied. The object remains on the heap and is not copied. | I am coming from a C++ programming background and am wondering if there is a pass by reference equivalent in python. The reason I am asking is that I am passing very large arrays into different functions and want to know how to do it in a way that does not waste time or memory by having copy the array to a new temporary variable each time I pass it. It would also be nice if, like in C++, changes I make to the array would persist outside of the function.
Thanks in advance,
Jared | 0 | 1 | 507 |
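A small example of the behaviour described in the answer above, assuming NumPy arrays: the function receives a reference to the same object, so no copy is made and in-place changes persist in the caller.
import numpy as np

def scale_in_place(arr, factor):
    # `arr` is the very object the caller holds - the data is not copied
    arr *= factor          # in-place update, visible to the caller afterwards
    # arr = arr * factor   # this would only rebind the local name; the caller's array stays unchanged

big = np.ones(10_000_000)
scale_in_place(big, 2.0)
print(big[0])   # 2.0 - the caller's array was modified without any copying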
0 | 52,691,471 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-18T18:57:00.000 | 0 | 1 | 0 | tensorflow.python.framework.errors_impl.NotFoundError: data/kitti_label_map.pbtxt; No such file or directory | 50,417,709 | 0 | python,python-3.x,tensorflow | I don't have a definitive solution to this but here is what resolved it.
First, I copied the kitti_label_map.pbtxt into the data_dir. Then I also copied create_kitti_tf_record.py into the data_dir. Finally I copied (this is what made it run in the end) the name and absolute path of the kitti_label_map.pbtxt and pasted it as label_map_path.
I have no idea why but it worked. | I'm trying to convert the kitti dataset into the tensorflow .record. After I typed the command:
python object_detection/dataset_tools/create_kitti_tf_record.py
--lable_map_path=object_detection/data/kitti_label_map.pbtxt --data_dir=/Users/zhenglyu/Graduate/research/DataSet/kitti/data_object_image_2/testing/image_2
--output_path=/Users/zhenglyu/Graduate/research/DataSet/kitti2tf/train.record
validation_set_size=1000
I got this error:
Traceback (most recent call last): File
"object_detection/dataset_tools/create_kitti_tf_record.py", line 310,
in
tf.app.run() File "/Users/zhenglyu/tensorflow/lib/python3.6/site-packages/tensorflow/python/platform/app.py",
line 126, in run
_sys.exit(main(argv)) File "object_detection/dataset_tools/create_kitti_tf_record.py", line 307,
in main
validation_set_size=FLAGS.validation_set_size) File "object_detection/dataset_tools/create_kitti_tf_record.py", line 94,
in convert_kitti_to_tfrecords
label_map_dict = label_map_util.get_label_map_dict(label_map_path) File
"/Users/zhenglyu/Graduate/research/TensorFlow/model/research/object_detection/utils/label_map_util.py",
line 152, in get_label_map_dict
label_map = load_labelmap(label_map_path) File "/Users/zhenglyu/Graduate/research/TensorFlow/model/research/object_detection/utils/label_map_util.py",
line 132, in load_labelmap
label_map_string = fid.read() File "/Users/zhenglyu/tensorflow/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py",
line 120, in read
self._preread_check() File "/Users/zhenglyu/tensorflow/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py",
line 80, in _preread_check
compat.as_bytes(self.name), 1024 * 512, status) File "/Users/zhenglyu/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py",
line 519, in __exit__
c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.NotFoundError:
data/kitti_label_map.pbtxt; No such file or directory
The file exists for sure. And I don't know why as I set the label_map_path to another one (object_detection/data/kitti_label_map.pbtxt), the path still remains the default setting (data/kitti_label_map.pbtxt).
I know there's a lot of related problem but none of the solutions that I found works for me. I used Virtualenv to install the tensorflow and using python 3.6. Could these be the problem? Thanks! | 0 | 1 | 4,898 |
0 | 51,161,622 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2018-05-19T21:46:00.000 | 0 | 2 | 0 | how to get the distance of sequence of nodes in pgr_dijkstra pgrouting? | 50,429,760 | 0 | mysql,sql,postgresql,mysql-python,pgrouting | If you want all pairs distance then use
select * from pgr_apspJohnson('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads')
What I want is, to get the shortest distance using pgr_dijkstra. But the pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and adding all distances to get the total distance.
The pairs will be like
2,3
3,4
4,5
5,6
6,8.
Is there any way to define a function that takes this array and finds the shortest path using pgr_dijkstra.
Query is:
for 1st pair(2,3)
SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',2,3, false);
for 2nd pair(3,4)
SELECT * FROM pgr_dijkstra('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads', 3, 4, false)
for 3rd pair(4,5)
SELECT * FROM pgr_dijkstra('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads', 4, 5, false);
NOTE: The array size is not fixed, it can be different.
Is there any way to automate this in postgres sql may be using a loop etc?
Please let me know how to do it.
Thank you. | 0 | 1 | 708 |
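If driving the loop from Python (rather than a pure-SQL function) is acceptable, a sketch along these lines would sum the pair-wise pgr_dijkstra costs for an arbitrary node sequence; the psycopg2 connection string is an assumption, and the inner edge query is taken from the question.
import psycopg2

nodes = [2, 3, 4, 5, 6, 8]                                # sequence of nodes to visit
conn = psycopg2.connect("dbname=routing user=postgres")   # assumed connection details
cur = conn.cursor()

total_cost = 0.0
for start, end in zip(nodes, nodes[1:]):                  # pairs (2,3), (3,4), (4,5), ...
    cur.execute(
        """SELECT sum(cost) FROM pgr_dijkstra(
               'SELECT gid AS id, source, target, rcost_len AS cost FROM finalroads',
               %s, %s, false)""",
        (start, end),
    )
    total_cost += cur.fetchone()[0] or 0.0

print(total_cost)
cur.close()
conn.close()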
0 | 50,448,522 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-21T11:22:00.000 | 0 | 1 | 0 | TensorFlow: Print Internal State of RNN at at Every Time Step | 50,447,786 | 0 | python,debugging,tensorflow,lstm | Okay, so the issue was that I was modifying the output but wasn't updating the output_size of the LSTM itself. Hence the error. It works perfectly fine now. However, I still find this method to be extremely annoying. Not accepting my own answer with the hope that somebody will have a cleaner solution. | I am using the tf.nn.dynamic_rnn class to create an LSTM. I have trained this model on some data, and now I want to inspect what are the values of the hidden states of this trained LSTM at each time step when I provide it some input.
After some digging around on SO and on TensorFlow's GitHub page, I saw that some people mentioned that I should write my own LSTM cell that returns whatever I want printed as part of the output of the LSTM. However, this does not seem straight forward to me since the hidden states and the output of the LSTM do not have the same shapes.
My output tensor from the LSTM has shape [16, 1] and the hidden state is a tensor of shape [16, 16]. Concatenating them results in a tensor of shape [16, 17]. When I tried to return it, I get an error saying that some TensorFlow op required a tensor of shape [16,1].
Does anyone know an easier work around to this situation? I was wondering if it is possible to use tf.Print to just print the required tensors. | 0 | 1 | 217 |
0 | 53,102,357 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-05-22T09:01:00.000 | 0 | 1 | 0 | Python has stopped working(APPCRASH) Anaconda | 50,463,731 | 0 | python,anaconda | Update all the libraries to the latest version in Anaconda and try again.
I was facing a similar situation when I was running the code for Convolutional Neural Network in Spyder under Anaconda environment (Windows 7) I was getting following error
Problem Event Name: APPCRASH
Application Name: pythonw.exe
Fault Module Name: StackHash
Exception Code: c0000005
OS Version: 6.1.7601.2.1.0.256.1
I updated all the libraries to latest version in Anaconda and the problem is resolved. | When I try to build Linear Regression model with training data in Jupyter notebook, python has stopped working, with error as shown below. I am using Anaconda 3.5 on windows7, Python 3.6 version.
Problem signature:
Problem Event Name: APPCRASH
Application Name: python.exe
Application Version: 3.6.4150.1013
Application Timestamp: 5a5e439a
Fault Module Name: mkl_core.dll
Fault Module Version: 2018.0.1.1
Fault Module Timestamp: 59d8a332
Exception Code: c000001d
Exception Offset: 0000000001a009a3
OS Version: 6.1.7600.2.0.0.256.48
Locale ID: 1033
Additional Information 1: 7071
Additional Information 2: 70718f336ba4ddabacde4b6b7fbe73e3
Additional Information 3: de32
Additional Information 4: de328b4df988a86fd2d750fb0942dbd1
I am not able to get any help from google when I search with this error, even I tried
1. Uninstalled and installed again
2. Ran the below commands, but no use
conda update conda
conda update ipython ipython-notebook ipython-qtconsole
I suspect this error related to Windows, but not sure how to fix it.
Thanks,
Sagar. | 0 | 1 | 2,067 |
0 | 50,487,464 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-22T13:41:00.000 | 0 | 2 | 0 | How to use classifier random forest in Python for 2 different data sets? | 50,469,243 | 0 | python,random-forest | You could try to put NUM as a single column, and the first and second datasets would use completely independent columns, with the non-matching cells containing empty data. Whether the results will be any good, will depend much on your data. | I have 2 data sets with different variables. But both includes a variable, say NUM, that helps to identify the occurrence of an event. With the NUM, I was able to identify the event, by labelling it. How can one run RF to effectively include considerations of the 2 datasets? I am not able to append them (column wise) as the number of records for each NUM differs. | 0 | 1 | 120 |
0 | 56,719,666 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-22T16:06:00.000 | 0 | 2 | 0 | How can I solve a system of linear equations in python faster than with numpy.linalg.lstsq? | 50,472,095 | 0 | python,performance,numpy,math,computer-science | If your coefficient matrix is sparse, use "spsolve" from "scipy.sparse.linalg". | I am trying to solve a linear system spanning somewhat between hundred thousand and two hundred thousand equations with numpy.linalg.lstsq but is taking waaaaay too long. What can I do to speed this up?
The matrix is sparse with hundreds of columns (the dimensions are approximately 150 000 x 140) and the system is overdetermined. | 0 | 1 | 2,001 |
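A sketch of the sparse route, with one caveat: spsolve expects a square system, so for the overdetermined 150000 x 140 case described in the question the sparse least-squares solver scipy.sparse.linalg.lsqr (or lsmr) is the drop-in counterpart of lstsq. The random data below is only a stand-in.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# stand-in for the real system: ~150000 equations, 140 unknowns, mostly zero coefficients
A = sparse.random(150_000, 140, density=0.01, format="csr")
b = np.random.rand(150_000)

x = lsqr(A, b)[0]   # least-squares solution that exploits the sparsity instead of densifying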
0 | 59,359,358 | 0 | 0 | 0 | 1 | 2 | false | 1 | 2018-05-22T16:16:00.000 | 0 | 2 | 1 | superset dashboards - dynamic updates | 50,472,282 | 0 | python,oracle,superset | You could set the auto-refresh interval for a dashboard if you click on the arrow next to the Edit dashboard-button. | I'm testing Apache superset Dashboards, It s a great tool.
I added an external Database source (Oracle), and I created nice Dashboards very easily.
I would like to see my Dashboards updated regularly and automatically (3 times a day) in superset.
But my Dashboards are not updated.
I mean when a row is inserted into the Oracle Tables, if I refresh the Dashboard, I cannot view the new data in the Dashboard.
What is the best way to do it ?
=> Is there a solution / an option to force the Datasource to be automatically updated regularly ? in a frequency ? What is the parameter / option ?
=> is there a solution to import in batch csv files (for instance in python), then this operation will update the Dashboard ?
=> other way ?
If you have examples to share... :-)
My environment:
Superset is Installed on ubuntu 16.04 and Python 2.7.12.
Oracle is installed on another Linux server.
I connect from google chrome to Superset.
Many thanks for your help | 1 | 1 | 1,509 |
0 | 50,473,011 | 0 | 0 | 0 | 1 | 2 | false | 1 | 2018-05-22T16:16:00.000 | 1 | 2 | 1 | superset dashboards - dynamic updates | 50,472,282 | 0.099668 | python,oracle,superset | I just found the origin of my error... :-)
In fact, I had added records dated in the future (tomorrow, the day after, ...),
and my dashboard was only showing records up to today's date.
I inserted a record with an earlier date, refreshed, and it appeared.
Thanks to having read me... | I'm testing Apache superset Dashboards, It s a great tool.
I added an external Database source (Oracle), and I created nice Dashboards very easily.
I would like to see my Dashboards updated regularly and automatically (3 times a day) in superset.
But my Dashboards are not updated.
I mean when a row is inserted into the Oracle Tables, if I refresh the Dashboard, I cannot view the new data in the Dashboard.
What is the best way to do it ?
=> Is there a solution / an option to force the Datasource to be automatically updated regularly ? in a frequency ? What is the parameter / option ?
=> is there a solution to import in batch csv files (for instance in python), then this operation will update the Dashboard ?
=> other way ?
If you have examples to share... :-)
My environment:
Superset is Installed on ubuntu 16.04 and Python 2.7.12.
Oracle is installed on another Linux server.
I connect from google chrome to Superset.
Many thanks for your help | 1 | 1 | 1,509 |
0 | 50,506,830 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-24T09:42:00.000 | 0 | 1 | 0 | Which is more efficient ? Fetching data directly from databse or from an HTML page in pandas dataframe? | 50,506,046 | 0 | python,database,pandas,api,dataframe | This depends on a lot of things, namely:
is your database in the same local network as your application server?
is the website used in the same local network as your application server?
how large is the table you have in the website?
how large is your database?
can a user do some trusted changes on the webpage that is not inside the database yet?
All in all, the most common case is that your database is in the same local network as your application server, but there is a lot of data stored in there. So, it is difficult to answer in terms of efficiency, it might be quicker to load data from your local network, but selecting items from the database can be very time-consuming, to such an extent that it will be less efficient than loading data from the webpage. So, to compare efficiency, you will have to do a lot of tests and estimations for the future.
However, there are some other aspects
Server load
If you always load from the database, then you have higher server load, which will affect scalability and even performance when longer database tasks are running, for instance some maintenance cron jobs.
Security
If a hacker sends some valid requests but with wrong data, you might be misled, so you will need to validate any input from users you do not trust. And you should not trust the users generally, trusted users should be the exception.
Data structure
You might cache data for this purpose if you do this with the database and you may use JSON when you send the content of the table to the server, so network communication will not increase.
Network burden
If data is sent from the webpage back, then in the case of usage spikes a lot of data might be sent, which might affect the responsiveness of your application. | I am making graphs using bokeh in python and for that I was calling an HTML page to fetch data inside my dataframe but now I want to fetch data directly from database inside my dataframe. So which method is more efficient? | 1 | 1 | 44 |
0 | 50,517,141 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-05-24T19:47:00.000 | 1 | 4 | 0 | Generate an array of N random integers, between 1 and K, but containing at least one of each number | 50,517,089 | 0.049958 | python,numpy,random | Fill the matrix iteratively with numbers 1-K so if K was 2, and N was 4, [1,2,1,2]. Then randomly generate 2 random numbers between 1-N where the numbers don't equal, and swap the numbers at those positions. | I need to generate a matrix of N random integers between 1 and K, where each number appears at least once, having K ≤ N.
I have no problem using a call to numpy.random.random_integers() and checking the number of distinct elements, when K is much less than N, but it's harder to get a valid array when K approximates to N.
Is there any nice way to get this done? Any hint would be appreciated. | 0 | 1 | 152 |
0 | 50,517,145 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-05-24T19:47:00.000 | 1 | 4 | 0 | Generate an array of N random integers, between 1 and K, but containing at least one of each number | 50,517,089 | 0.049958 | python,numpy,random | Fill K numbers using xrange(k) and then fill (n-k) number using random number generator | I need to generate a matrix of N random integers between 1 and K, where each number appears at least once, having K ≤ N.
I have no problem using a call to numpy.random.random_integers() and checking the number of distinct elements, when K is much less than N, but it's harder to get a valid array when K approximates to N.
Is there any nice way to get this done? Any hint would be appreciated. | 0 | 1 | 152 |
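A vectorised sketch of the idea both answers above describe — guarantee one of each value, fill the remaining slots randomly, then shuffle so the guaranteed values are not clustered at the front.
import numpy as np

def random_with_all_values(n, k):
    base = np.arange(1, k + 1)                        # one of each value 1..K
    rest = np.random.randint(1, k + 1, size=n - k)    # the remaining N-K free draws
    out = np.concatenate([base, rest])
    np.random.shuffle(out)
    return out

print(random_with_all_values(10, 4))   # e.g. [2 4 1 3 2 1 4 4 3 1]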
0 | 50,520,768 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-24T21:53:00.000 | 0 | 1 | 0 | LSTM using generator function | 50,518,710 | 0 | python,neural-network,keras,lstm,recurrent-neural-network | Personally, it is recommended to use the PReLU activation function before the fully connected dense layer.
For example:
model.add(LSTM(128,input_shape=(train_X.shape[1],train_X.shape[2])))
model.add(BatchNormalization())
model.add(Dropout(.2))
model.add(Dense(64))
model.add(PReLU()) | I am trying to create a model that has an LSTM layer of 100 units with input dimensions (16,48,12) (16 is the batch size as it takes input through a generator function). The generator function produces an expected output of (16, 1, 2) (16 is the batch size) and I want to use as output a dense layer with a softmax activation fucntion. What would be the best way to do that? I am fairly new to keras and I can't quiet get the grasp of using generator functions... | 0 | 1 | 240 |
0 | 50,525,619 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-25T08:36:00.000 | 0 | 1 | 0 | How do Encoder/Decoder models learn in Deep Learning? | 50,524,927 | 0 | python,tensorflow,machine-learning,keras | The encoder learns a compressed representation of the input data and the decoder tries to learn how to use just this compressed representation to reconstruct the original input data as best as possible. Let's say that the initial weights (usually randomly set) produce a reconstruction error of e. During training, both the encoder and decoder layer weights are adjusted so that e is reduced.
Later on, usually, the decoder layer is removed and the output of the encoder layer (the compressed representation) is used as a feature map of the input.
What does compressed representation mean? If your input is an image of size 20 * 20 = 400 elements, your encoder layer might be of size 100 with a compression factor of 4. In other words, you are learning how to capture the essence of the data with 400 elements in only 100 while still being able to reconstruct the 400 element data with minimum error.
You are correct about filters being equivalent of nodes and changing the weights to learn the best representation for the input during training. | After learning a bit about encoder/decoder models in deep learning (mostly in Keras), i still cannot understand where the learning takes place.
Does the encoder just create the feature map and then the decoder tries to get as close as possible as the result with BackProp, or does the encoder learn as well when the model is trained?
One last question: if i understood correctly, the filters are the equivalent of the nodes in a classic machine learning model, changing its weights to learn, am i right? | 0 | 1 | 135 |
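A minimal Keras sketch of the 400 -> 100 -> 400 example from the answer above; backprop updates the encoder and decoder weights together, and afterwards the encoder alone yields the compressed representation. The layer sizes and random data are illustrative assumptions.
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

inp = Input(shape=(400,))
code = Dense(100, activation="relu")(inp)       # encoder: 400 -> 100
out = Dense(400, activation="sigmoid")(code)    # decoder: 100 -> 400

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(1000, 400)                   # stand-in data
autoencoder.fit(x, x, epochs=5, batch_size=32)  # input and target are the same data

encoder = Model(inp, code)                      # keep only the learned compression
features = encoder.predict(x)                   # shape (1000, 100)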
0 | 52,078,671 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-25T14:13:00.000 | 0 | 1 | 0 | Fit method of gensim.sklearn_api.w2vmodel.W2VTransformer throws error when inputed 2-dimensional array of strings | 50,531,181 | 0 | python,arrays,python-3.6,word2vec,gensim | It seems that gensim's word2vec has some problems when working with numpy arrays. Converting data to python lists helped me. | i'm trying to cluster some documents with word2vec and numpy.
w2v = W2VTransformer()
X_train = w2v.fit_transform(X_train)
When I run the fit or fit_transform I get this error:
Exception in thread Thread-8:
Traceback (most recent call last):
File "C:\Users\lperona\AppData\Local\Continuum\anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\lperona\AppData\Local\Continuum\anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\lperona\AppData\Local\Continuum\anaconda3\lib\site-packages\gensim\models\base_any2vec.py", line 99, in _worker_loop
tally, raw_tally = self._do_train_job(data_iterable, job_parameters, thread_private_mem)
File "C:\Users\lperona\AppData\Local\Continuum\anaconda3\lib\site-packages\gensim\models\word2vec.py", line 539, in _do_train_job
tally += train_batch_cbow(self, sentences, alpha, work, neu1, self.compute_loss)
File "gensim/models/word2vec_inner.pyx", line 458, in gensim.models.word2vec_inner.train_batch_cbow
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
(X_train is a 2D numpy array of strings)
Does anyone know a solution?
Thank you | 0 | 1 | 380 |
0 | 61,811,370 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2018-05-25T19:09:00.000 | 0 | 3 | 0 | Cython + OpenCV and NumPy | 50,535,498 | 0 | python,numpy,opencv,cython | It all depends on what your program is doing.
If your program is just stringing together large operations that are implemented in C++, then Cython isn't going to help you much, if at all.
If you are writing code that works directly on individual pixels then cython could be a big help. | I have a program with mainly OpenCV and NumPy, with some SciPy as well. The system needs to be a real-time system with a frame rate close to 30 fps but right now only about 10 fps. Will using Cython help speed this up? I ask because OpenCV is already written in C++ and should already be quite optimized, and NumPy, as far as I understand, is also quite optimized. So will the use of Cython help improve the processing time of my program? | 0 | 1 | 7,781 |
0 | 50,557,193 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-05-26T17:23:00.000 | 0 | 1 | 0 | What affects tensorlayer.prepro.threading_data's return type? | 50,545,324 | 1.2 | python,list,types,return,numpy-ndarray | It seems that what was causing this problem was having items with different shapes in the list. In this instance, PNG images with 3 and 4 channel. Removing the alpha channel (the fourth channel) from all PNG images solved this for me. | I've been trying to use tensorlayer.prepro.threading_data, but I'm getting a different return type for different inputs. Sometimes it returns an ndarray and sometimes it returns a list. The documentation doesn't specify what's the reason for the different return types.
Can anyone shed some light on this?
Answer:
It seems that what was causing this problem was having items with different shapes in the list. In this instance, PNG images with 3 and 4 channel.
Removing the alpha channel (the fourth channel) from all PNG images solved this for me. | 0 | 1 | 51 |
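A small sketch of the fix described above — dropping the alpha channel so every image in the list has the same shape before it is handed to threading_data or stacked into an ndarray.
import numpy as np

def drop_alpha(img):
    # keep only the first three channels if a 4-channel (RGBA/BGRA) PNG sneaks in
    if img.ndim == 3 and img.shape[2] == 4:
        return img[:, :, :3]
    return img

images = [np.zeros((64, 64, 3)), np.zeros((64, 64, 4))]   # mixed 3- and 4-channel inputs
images = [drop_alpha(im) for im in images]
print(np.array(images).shape)                             # (2, 64, 64, 3) - stacks cleanly now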
0 | 68,709,535 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-27T02:35:00.000 | -1 | 1 | 0 | Extract only points inside polygon | 50,548,575 | -0.197375 | python,pyspark,arcgis,pyspark-sql,point-in-polygon | Add the CSV points and the polygon layer to the map for the selection.
Open the Select by Location tool for processing.
The target layer is the point (CSV) layer; the source layer is the polygon layer.
Set the spatial selection so that the target layer feature(s) are completely contained by the selected source layer feature.
Then click Apply.
Note: both layers must be configured with the same projection.
0 | 50,564,217 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2018-05-28T04:25:00.000 | 1 | 1 | 0 | How to write video to memory in OpenCV | 50,559,105 | 0.197375 | python,python-3.x,opencv,video,video-capture | Video files will often, or even generally, be too big to fit in your main memory so you will be unlikely to be able to just keep the entire video there.
It also worth noting that your OS itself may decide to move data between fast memory, slower memory, disk etc as it manages multiple processes but that is likely not important in this discussion.
It will depend on your use case but a typical video scenario might be:
receive video frame from camera or source stream
do some processing on the video frame - e.g. transcode it, detect an object, add text or images etc
store or send the updated frame someplace
go back to first step
In a flow like this, there is obviously advantage in keeping the frame in memory while working on it, but once done then it is generally not an advantage any more so its fine to move it to disk, or send it to its destination.
If you use cases requires you to work on groups of frames, as some encoding algorithms do, then it may make sense to keep the group of frames in memory until you are finished with them and then write them to disk.
I think the best answer for you will depend on your exact use case, but whatever that it is it is unlikely that keeping the entire video in memory would be necessary or even possible. | Can you write video to memory? I use Raspberry Pi and I don't want to keep writing and deleting videowriter objects created on sd card (or is it ok to do so?).
If conditions are not met I would like to discard written video every second. I use type of motion detector recording and I would like to capture moment (one second in this case) before movement has been detected, because otherwise written video loses part of what happened. I use opencv in python environment. | 0 | 1 | 1,432 |
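One way to keep the second of video preceding the motion without touching the SD card, sketched with a fixed-length deque as an in-memory ring buffer; the frame rate, camera index and motion test are placeholders.
import collections
import cv2

fps = 30
pre_buffer = collections.deque(maxlen=fps)   # holds roughly the last second of frames
cap = cv2.VideoCapture(0)                    # assumed camera index

def motion_detected(frame):
    return False                             # placeholder for the real motion test

while True:
    ok, frame = cap.read()
    if not ok:
        break
    pre_buffer.append(frame)                 # oldest frames fall out automatically
    if motion_detected(frame):
        for old_frame in pre_buffer:         # flush the buffered second first
            pass                             # e.g. writer.write(old_frame)
        pre_buffer.clear()
        # ...then keep writing new frames for as long as the event lasts

cap.release()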
0 | 50,597,931 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-28T07:27:00.000 | 0 | 1 | 0 | Unsupervised Outlier detection | 50,561,180 | 0 | python-3.x,cluster-analysis,curve-fitting,outliers,lmfit | What happens if you just treat the 6 points as a 12 dimensional vector and run any of the usual outlier detection methods such as LOF and LoOP?
It's trivial to see the relationship between Euclidean distance on the 12 dimensional vector, and the 6 Euclidean distances of the 6 points each. So this will compare the similarities of these curves.
You can of course also define a complex distance function for LOF. | I have 6 points in each row and have around 20k such rows. Each of these row points are actually points on a curve, the nature of curve of each of the rows is same (say a sigmoidal curve or straight line, etc). These 6 points may have different x-values in each row.I also know a point (a,b) for each row which that curve should pass through. How should I go about in finding the rows which may be anomalous or show an unexpected behaviour than other rows? I was thinking of curve fitting but then I only have 6 points for each curve, all I know is that majority of the rows have same nature of curve, so I can perhaps make a general curve for all the rows and have a distance threshold for outlier detection. | 0 | 1 | 185 |
0 | 50,574,746 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-05-28T11:26:00.000 | 0 | 1 | 0 | Why did the numpy core size shrink from 1.14.0 to 1.14.1? | 50,565,322 | 1.2 | python,numpy,shared-libraries | It appears that the difference is in the debug symbols. Perhaps one was built with a higher level of debug symbols than the other, or perhaps the smaller one was built with compressed debug info (a relatively new feature). One way to find out more would be to inspect the compiler and linker flags used during each build, or try to build it yourself. | When creating an AWS lambda package I noticed that the ZIP became a lot smaller when I updated from numpy 1.14.0 to 1.14.3. From 24.6MB to 8.4MB.
The directory numpy/random went from 4.3MB to 1.2MB, according to Ubuntus Disc Usage analyzer. When I, however, compare the directories with meld they seem to be identical. So I had a closer look at this and found that only one file (mtrand.cpython-36m-x86_64-linux-gnu.so) differs that much. I guess it is a similar reason why the core became smaller.
Could somebody explain why this became so much smaller? | 0 | 1 | 60 |
0 | 50,583,288 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-28T15:43:00.000 | 0 | 2 | 0 | splitting data in to test train data when there is unbalance of data | 50,569,782 | 0 | python,machine-learning | Try to do oversampling as you have less amount of data points. Or else you can use neural network preferably MLP, That works fine with unbalanced data. | i have an unbalanced data set which has two categorical values. one has around 500 values of a particular class and other is only one single datapoint with another class.Now i would like to split this data into test train with 80-20 ratio. but since this is unbalanced , i would like to have the second class to be present in both the test and train data.
I tried using train_test_split from sklearn, but it does not make the second class present in both of them. I even tried the stratified shuffle split, but that also did not split the data as I expected.
Is there any way we can split the data from a data frame forcing both the test and train datasets to have the single datapoint?. I am new to python so having difficulty figuring it out.
the data looks like:
a b c d label
1 0 0 1 1
1 1 1 0 1
..........
........
1 0 0 1 0.
the label has only 1 and 0 but the 0 is only one single observation but the rest of the 500 data points are having label as 1 | 0 | 1 | 61 |
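A crude sketch of the oversampling suggestion above: duplicate the single minority row a few times so that a stratified split can place it in both the train and test sets. The toy DataFrame stands in for the real data.
import pandas as pd
from sklearn.model_selection import train_test_split

# stand-in: 500 rows with label 1 and a single row with label 0
df = pd.DataFrame({"a": 1, "b": 0, "c": 0, "d": 1, "label": [1] * 500 + [0]})

minority = df[df["label"] == 0]
df_balanced = pd.concat([df] + [minority] * 9, ignore_index=True)   # crude oversampling

train, test = train_test_split(
    df_balanced, test_size=0.2, stratify=df_balanced["label"], random_state=0)
print(train["label"].value_counts())
print(test["label"].value_counts())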
0 | 50,573,045 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-28T20:07:00.000 | 2 | 2 | 0 | How to increasing Number of epoch in keras Conv Net | 50,572,884 | 0.197375 | python-3.x,tensorflow,keras,deep-learning | Simply call model.fit(data, target, epochs=100, batch_size=batch_size) again to carry on training the same model. model needs to be the same model object as in the initial training, not re-compiled. | Suppose, I have a model that is already trained with 100 epoch. I tested the model with test data and the performance is not satisfactory. So I decide to train for another 100 epoch. How can I do that?
I trained with
model.fit(data, target, epochs=100, batch_size=batch_size)
Now I want to train the same model without adding new data for another 100 epoch. How can I do that? | 0 | 1 | 1,201 |
0 | 50,594,107 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-28T20:14:00.000 | 0 | 2 | 0 | Managing classes in tensorflow object detection API | 50,572,962 | 0 | python,python-3.x,tensorflow,object-detection,object-detection-api | Shrinking the last layer to output 1 or two classes is not likely to yield large speed ups. This is because most of the computation is in the intermediate layers. You could shrink the intermediate layers, but this would result in poorer accuracy. | I'm working on a project that requires the recognition of just people in a video or a live stream from a camera. I'm currently using the tensorflow object recognition API with python, and i've tried different pre-trained models and frozen inference graphs. I want to recognize only people and maybe cars so i don't need my neural network to recognize all 90 classes that come with the frozen inference graphs, based on mobilenet or rcnn, as it seems this slows the process, and 89 of this 90 classes are not needed in my project. Do i have to train my own model or is there a way to modify the inference graphs and the existing models? This is probably a noob question for some of you, but mind that i've worked with tensorflow and machine learning for just one month.
Thanks in advance | 0 | 1 | 980 |
0 | 50,579,077 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-05-29T07:48:00.000 | 1 | 3 | 0 | Creating an empty multidimensional array | 50,579,027 | 0.066568 | python,numpy | I am guessing that by empty, you mean an array filled with zeros.
Use np.zeros() to create an array filled with zeros. np.empty() just allocates the array, so the numbers in there are garbage; it is provided as a way to avoid even the cost of setting the values to zero. But it is generally safer to use np.zeros(). | In Python when using np.empty(), for example np.empty((3,1)), we get an array of size (3,1), but in reality it is not empty and it contains very small values (e.g., 1.7*(10^315)). Is it possible to create an array that is really empty (has no values) but has given dimensions/shape? | 0 | 1 | 4,649
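A quick illustration of the difference described above.
import numpy as np

a = np.empty((3, 1))   # allocated but uninitialised - contents are whatever was already in memory
b = np.zeros((3, 1))   # allocated and explicitly filled with 0.0

print(a)               # arbitrary "garbage" values such as 1.7e+315 may show up here
print(b)               # [[0.] [0.] [0.]]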
0 | 50,711,224 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-05-29T09:08:00.000 | 0 | 3 | 0 | For a given sparse matrix, how can I multiply it with a given vector of binary values | 50,580,459 | 0 | python,numpy,scipy,sparse-matrix,linear-algebra | If you don't like the speed of matrix multiplication, then you have to consider modification of the matrix attributes directly. But depending on the format that may be slower.
To zero-out columns of a csr, you can find the relevant nonzero elements, and set the data values to zero. Then run the eliminate_zeros method to remove those elements from the sparsity structure.
Setting columns of a csc format may be simpler - find the relevant value in the indptr. At least the elements that you want to remove will be clustered together. I won't go into the details.
Zeroing rows of a lil format should be fairly easy - replace the relevant lists with [].
Anyway, with familiarity with the formats it should be possible to work out alternatives to matrix multiplication. But without doing so, and running some timings, I couldn't say which ones are faster.
How can I achieve that? | 0 | 1 | 357 |
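For the column-zeroing itself, a sketch of the plain multiplication form: right-multiplying by a sparse diagonal matrix built from the 0/1 vector zeroes exactly the columns where the vector is 0 and keeps the result sparse. Whether this beats direct attribute surgery (as discussed in the answer above) has to be timed on the real data.
import numpy as np
from scipy import sparse

A = sparse.random(5, 4, density=0.5, format="csr")
v = np.array([1, 0, 1, 0])                     # 0/1 mask over the columns

masked = (A @ sparse.diags(v)).tocsr()         # columns 1 and 3 become all zeros
masked.eliminate_zeros()                       # drop any explicit zeros from the sparsity structure
print(masked.toarray())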
0 | 50,589,936 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-05-29T09:08:00.000 | 1 | 3 | 0 | For a given sparse matrix, how can I multiply it with a given vector of binary values | 50,580,459 | 0.066568 | python,numpy,scipy,sparse-matrix,linear-algebra | The main problem is the size of your problem and the fact you're using Python which is on the order of 10-100x slower for matrix multiplication than some other languages. Unless you use something like Cython I don't see you getting an improvement. | I have a sparse matrix and another vector and I want to multiply the matrix and vector so that each column of the vector where it's equal to zero it'll zero the entire column of the sparse matrix.
How can I achieve that? | 0 | 1 | 357 |
0 | 50,580,610 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-05-29T09:11:00.000 | 0 | 1 | 0 | Is it possible to execute tensorflow-on-spark program without gpu suppport? | 50,580,534 | 1.2 | python,apache-spark,tensorflow,pyspark | Yes, it is possible to use CPU. Tensorflow will automatically use CPU if it doesn't find any GPU on your system. | I want tensor-flow-on-spark programs(for learning purpose),& I don't have a gpu support . Is it possible to execute tensor-flow on spark program without GPU support?
Thank you | 0 | 1 | 149 |
0 | 52,261,793 | 0 | 0 | 1 | 0 | 2 | false | 1 | 2018-05-29T15:14:00.000 | 0 | 2 | 0 | How to get phase and frequency of complex CSI for channel impulse responses? | 50,587,784 | 0 | python,frequency,phase,amplitude,csi | In Matlab, you do abs(csi) to get the amplitude. To get the phase, angle(csi). Search for similar functions in python | I have measurements of channel impulse responses as complex CSI's. There are two transmitters Alice and Bob and the measurements look like
[real0], [img0], [real1], [img1], ..., [real99], [img99] (100 complex values).
Amplitude for the Nth value is ampN = sqrt(realN^2 + imgN^2)
How do I get the frequency and phase values out of the complex CSI's?
Any help would be appreciated. | 0 | 1 | 364 |
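The NumPy equivalents of the Matlab calls mentioned in the answer above, assuming the interleaved real/imag values have already been read into two arrays; the random data is a stand-in for the 100 complex CSI values.
import numpy as np

real = np.random.randn(100)      # stand-ins for real0..real99
imag = np.random.randn(100)      # stand-ins for img0..img99

csi = real + 1j * imag           # complex CSI vector
amplitude = np.abs(csi)          # same as sqrt(realN**2 + imgN**2)
phase = np.angle(csi)            # phase in radians for each complex value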
0 | 50,593,481 | 0 | 0 | 1 | 0 | 2 | false | 1 | 2018-05-29T15:14:00.000 | 0 | 2 | 0 | How to get phase and frequency of complex CSI for channel impulse responses? | 50,587,784 | 0 | python,frequency,phase,amplitude,csi | complex-valued Channel State Information ?
Python has cmath, a standard-library module for complex-number math, but numpy and scipy.signal will probably ultimately be more useful to you.
[real0], [img0], [real1], [img1], ..., [real99], [img99] (100 complex values).
Amplitude for the Nth value is ampN = sqrt(realN^2 + imgN^2)
How do I get the frequency and phase values out of the complex CSI's?
Any help would be appreciated. | 0 | 1 | 364 |
0 | 50,591,996 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-05-29T19:29:00.000 | 0 | 1 | 0 | Runtime error when importing basemap into spyder | 50,591,637 | 1.2 | python,runtime-error,spyder,matplotlib-basemap | (Spyder maintainer here) This error was fixed in our 3.2.8 version, released in March/2018.
Since you're using Anaconda, please open the Anaconda prompt and run there
conda update spyder
to get the fix. | I installed spyder onto my computer a few months ago and it has worked fine until I needed to produce a map with station plots and topography. I simply tried to import matplotlib-basemap and get the following error:
File "<ipython-input-12-6634632f8d36>", line 1, in
runfile('C:/Users/Isa/Documents/Freedman/2018/ENVIROCOMP/Stationplots.py', wdir='C:/Users/Isa/Documents/Freedman/2018/ENVIROCOMP')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Isa/Documents/Freedman/2018/ENVIROCOMP/Stationplots.py", line 15, in
from mpl_toolkits.basemap import Basemap, shiftgrid, cm
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 951, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 890, in _find_spec
File "<frozen importlib._bootstrap>", line 864, in _find_spec_legacy
File "C:\ProgramData\Anaconda3\lib\site-packages\pyximport\pyximport.py", line 253, in find_module
fp, pathname, (ext,mode,ty) = imp.find_module(fullname,package_path)
File "C:\ProgramData\Anaconda3\lib\imp.py", line 271, in find_module
"not {}".format(type(path)))
RuntimeError: 'path' must be None or a list, not <class '_frozen_importlib_external._NamespacePath'>
If anyone has gone through this or understands this type of error suggest a way to make basemap work on spyder? | 0 | 1 | 453 |
0 | 50,611,284 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-30T10:14:00.000 | -2 | 2 | 0 | Tensorflow contrib learn deprecation warnings | 50,602,085 | -0.197375 | python,tensorflow,machine-learning,deprecation-warning | All of these warning have instructions for updating. Follow the instructions: switch to tf.data for preprocessing. | When I am using the below line in my code
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length, vocabulary=bow)
I get these warnings. How do I eliminate them?
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
WARNING:tensorflow:From /tmp/anyReader-376H566fJpAUSEt/anyReader-376qtSRQxT2gOiq.tmp:67: VocabularyProcessor.__init__ (from tensorflow.contrib.learn.python.learn.preprocessing.text) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorflow/transform or tf.data.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/text.py:154: CategoricalVocabulary.__init__ (from tensorflow.contrib.learn.python.learn.preprocessing.categorical_vocabulary) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorflow/transform or tf.data.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/text.py:170: tokenizer (from tensorflow.contrib.learn.python.learn.preprocessing.text) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorflow/transform or tf.data. | 0 | 1 | 1,335 |
0 | 51,165,112 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-05-30T15:56:00.000 | 0 | 2 | 0 | TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed | 50,609,002 | 0 | python,debugging,tensorflow,visual-studio-code | You can simply stop at the break point, and switch to DEBUG CONSOLE panel, and type var.shape. It's not that convenient, but at least you don't need to write any extra debug code in your code. | I'm new (obviously) to python, but not so new to TensorFlow
I've been trying to debug my program using breakpoint, but everytime I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show I get this warning in the console:
WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed.
I'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works? | 0 | 1 | 1,348 |
0 | 50,609,100 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-05-30T15:56:00.000 | 0 | 2 | 0 | TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed | 50,609,002 | 0 | python,debugging,tensorflow,visual-studio-code | Probably yes you may have to wait. In the debug mode a deprecated function is being called.
You can print out the shape explicitly by calling var.shape() in the code as a workaround. I know not very convenient. | I'm new (obviously) to python, but not so new to TensorFlow
I've been trying to debug my program using breakpoint, but everytime I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show I get this warning in the console:
WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed.
I'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works? | 0 | 1 | 1,348 |
0 | 50,734,768 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2018-05-30T17:55:00.000 | 0 | 1 | 0 | Mutation algorithm efficiency | 50,610,831 | 1.2 | python,numpy,statistics,genetic-algorithm | Yes.
Suppose your gene length is 100 and your mutation rate is 0.1, then picking 100*0.1=10 random indices and mutating them is faster than generating & checking 100 numbers. | Instead of iterating through each element in a matrix and checking if random() returns lower than the rate of mutation, does it work if you generate a certain amount of random indices that match the rate of mutation or is there some other method? | 0 | 1 | 54 |
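A sketch of the index-sampling idea from the answer above, assuming a NumPy genome and a simple bit-flip mutation; the genome length and rate are illustrative.
import numpy as np

def mutate(genome, rate):
    n_mutations = int(round(genome.size * rate))
    idx = np.random.choice(genome.size, size=n_mutations, replace=False)
    genome[idx] = 1 - genome[idx]            # bit-flip the chosen genes in place
    return genome

genome = np.random.randint(0, 2, size=100)
mutate(genome, 0.1)                          # flips ~10 randomly chosen positions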
0 | 50,645,446 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-05-30T21:00:00.000 | 0 | 1 | 0 | Finding original features' effect to the principal components used as inputs in Kernel PCA | 50,613,294 | 0 | python,machine-learning,cluster-analysis,pca,kernel-density | Since you are applying PCA in the kernel space, there is a strictly nonlinear relation with your original features and the features of the reduced data; the eigenvectors you calculate are in the kernel space to begin with. This obstructs a straightforward approach, but maybe you can do some kind of sensitivity analysis. Apply small perturbations to the original features and measure how the final, reduced features react to them. The Jacobian of the final features with respect to the original features can be also a good place to start. | I am trying to implement Kernel PCA to my dataset which has both categorical (encoded with one hot encoder) and numeric features and decreases the number of dimensions from 22 to 3 dimensions in total. After that, I will continue with clustering implementation. I use Spyder as IDE.
In order to understand the structure of my yielded clusters from the algorithm, I want to interpret which features affect the derived principal components and how they affect them.
Is it possible? If so, how can I interpret this, is there any method? | 0 | 1 | 91 |
0 | 50,633,400 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-05-31T17:56:00.000 | 0 | 2 | 0 | Will OpenCV 3.2+ FileStorage save the SimpleBlobDetector_create object in XML or YML? | 50,630,168 | 0 | python-3.x,opencv,file-storage | I have lately had some troubles when using FileStorage with XML or YAML (It appears to be some kind of bug in the OpenCV sourcecode). I would recommend you to try it with JSON. In oder to do so, just change the name of the file to XXXX.json. If you are saving self-constructed structures as well, just construct the structure as if it was a YAML and change the filename to .json.
I hope this helps you further.
Regards, David | I am fairly new to OpenCV 3+ in Python. It looks to me that FileStorage under Python does not support, for example, a writeObj() method. Is it possible to save the SimpleBlobDetector_create to an XML or YAML file using OpenCV 3+ in Python? Another way to put it is this: using Python OpenCV, can I save XML/YAML data that is not a numpy array or a scalar (e.g. an object)? | 0 | 1 | 362 |
0 | 50,634,001 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2018-05-31T22:09:00.000 | 0 | 1 | 0 | import sklearn in python | 50,633,488 | 1.2 | python,import,scikit-learn | Why bot Download the full anaconda and this will install everything you need to start which includes Spider IDE, Rstudio, Jupyter and all the needed modules..
I have been using anaconda without any error and i will recommend you try it out. | I installed miniconda for Windows10 successfully and then I could install numpy, scipy, sklearn successfully, but when I run import sklearn in python IDLE I receive No module named 'sklearn' in anaconda prompt. It recognized my python version, which was 3.6.5, correctly. I don't know what's wrong, can anyone tell me how do I import modules in IDLE ? | 0 | 1 | 1,095 |
0 | 69,603,564 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2018-06-01T01:04:00.000 | 0 | 2 | 0 | Pycharm Can't install TensorFlow | 50,634,751 | 0 | python,tensorflow,pip,pycharm,conda | what worked for is this;
I installed TensorFlow on the command prompt as an administrator using this command pip install tensorflow
then I jumped back to my pycharm and clicked the red light bulb pop-up icon, it will have a few options when you click it, just select the one that says install tensor flow. This would not install in from scratch but basically, rebuild and update your pycharm workspace to note the newly installed tensorflow | I cannot install tensorflow in pycharm on windows 10, though I have tried many different things:
went to settings > project interpreter and tried clicking the green plus button to install it, gave me the error: non-zero exit code (1) and told me to try installing via pip in the command line, which was successful, but I can't figure out how to make Pycharm use it when it's installed there
tried changing to a Conda environment, which still would not allow me to run tensorflow since when I input into the python command line: pip.main(['install', 'tensorflow']) it gave me another error and told me to update pip
updated pip then tried step 2 again, but now that I have pip 10.0.1, I get the error 'pip has no attribute main'. I tried reverted pip to 9.0.3 in the command line, but this won't change the version used in pycharm, which makes no sense to me. I reinstalled anaconda, as well as pip, and deleted and made a new project and yet it still says that it is using pip 10.0.1 which makes no sense to me
So in summary, I still can't install tensorflow, and I now have the wrong version of pip being used in Pycharm. I realize that there are many other posts about this issue but I'm pretty sure I've been to all of them and either didn't get an applicable answer or an answer that I understand. | 0 | 1 | 16,417 |
0 | 50,648,721 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2018-06-01T16:44:00.000 | 0 | 4 | 0 | Can you use loc to select a range of columns plus a column outside of the range? | 50,647,832 | 0 | python,python-3.x,pandas,dataframe | You can use pandas.concat():
pd.concat([df.loc[:,'column_1':'columns_60'],df.loc[:,'column_81']],axis=1) | Suppose I want to select a range of columns from a dataframe: Call them 'column_1' through 'column_60'. I know I could use loc like this:
df.loc[:, 'column_1':'column_60']
That will give me all rows in columns 1-60.
But what if I wanted that range of columns plus 'column_81'. This doesn't work:
df.loc[:, 'column_1':'column_60', 'column_81']
It throws a "Too many indexers" error.
Is there another way to state this using loc? Or is loc even the best function to use in this case?
Many thanks. | 0 | 1 | 4,458 |
0 | 51,454,563 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-06-02T10:28:00.000 | 0 | 1 | 0 | Wordcloud-Pillow issue | 50,655,939 | 0 | python,python-imaging-library,python-import,word-cloud | This is a Pillow problem, rather than a WordCloud problem. As it says, your Pillow installation has somehow become part 4.2.1, part 5.1.0. The simplest solution would be to reinstall Pillow. | from wordcloud import WordCloud
While importing WordCloud I get the issue below. Can you help with this?
RuntimeWarning: The _imaging extension was built for another version of Pillow or PIL:
Core version: 5.1.0
Pillow version: 4.2.1
"The _imaging extension was built for Python with UCS2 support; "
ImportError Traceback (most recent call last)
in ()
----> 1 from wordcloud import WordCloud
C:\Users\jhaas\Anaconda2\lib\site-packages\wordcloud\__init__.py in ()
----> 1 from .wordcloud import (WordCloud, STOPWORDS, random_color_func,
2 get_single_color_func)
3 from .color_from_image import ImageColorGenerator
4
5 __all__ = ['WordCloud', 'STOPWORDS', 'random_color_func',
C:\Users\jhaas\Anaconda2\lib\site-packages\wordcloud\wordcloud.py in ()
17 from operator import itemgetter
18
---> 19 from PIL import Image
20 from PIL import ImageColor
21 from PIL import ImageDraw
C:\Users\jhaas\Anaconda2\lib\site-packages\PIL\Image.py in ()
65 "Pillow version: %s" %
66 (getattr(core, 'PILLOW_VERSION', None),
---> 67 PILLOW_VERSION))
68
69 except ImportError as v:
ImportError: The _imaging extension was built for another version of Pillow or PIL:
Core version: 5.1.0
Pillow version: 4.2.1 | 0 | 1 | 270 |
0 | 55,304,529 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-06-03T08:14:00.000 | -1 | 3 | 0 | Train CNN model with multiple folders and sub-folders | 50,664,485 | -0.066568 | python,tensorflow,keras | Use os.walk to access all the files in sub-directories recursively and append to the dataset. | I am developing a convolution neural network (CNN) model to predict whether a patient in category 1,2,3 or 4. I use Keras on top of TensorFlow.
I have 64 breast cancer patients' data, classified into four categories (1=no disease, 2= …., 3=….., 4=progressive disease). In each patient's data, I have 3 sets of MRI scan images taken on different dates, and inside each MRI folder, I have 7 to 8 sub-folders containing MRI images in different planes (such as the coronal plane/sagittal plane etc).
I learned how to deal with a basic “Cat-Dog-CNN-Classifier”; it was easy as I put all the cat & dog images into a single folder to train the network. But how do I tackle the problem in my breast cancer patient data? It has multiple folders and sub-folders.
Please suggest. | 0 | 1 | 1,045 |
0 | 50,683,342 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-06-04T13:46:00.000 | 3 | 2 | 0 | How to normalize data when using Keras fit_generator | 50,682,119 | 1.2 | python,tensorflow,machine-learning,keras,keras-2 | The generator does allow you to do on-the-fly processing of data but pre-processing the data prior to training is the preferred approach:
Pre-process and saving avoids processing the data for every epoch, you should really just do small operations that can be applied to batches. One-hot encoding for example is a common one while tokenising sentences etc can be done offline.
You probably will tweak, fine-tune your model. You don't want to have the overhead of normalising the data and ensure every model trains on the same normalised data.
So, pre-process once offline prior to training and save it as your training data. When predicting you can process on-the-fly. | I have a very large data set and am using Keras' fit_generator to train a Keras model (tensorflow backend). My data needs to be normalized across the entire data set however when using fit_generator, I have access to relatively small batches of data and normalization of the data in this small batch is not representative of normalizing the data across the entire data set. The impact is quite large (I tested it and the model accuracy is significantly degraded).
My question is this: What is the correct practice of normalizing data across entire data set when using Keras' fit_generator? One last point: my data is a mix of text and numeric data and not images, and hence I am not able to use some of the capabilities in Keras' provided image generator which may address some of the issues for image data.
I have looked at normalizing the full data set prior to training ("brute-force" approach, I suppose) but I am wondering if there is a more elegant way of doing this. | 0 | 1 | 2,686 |
0 | 50,692,350 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-06-05T04:53:00.000 | -1 | 2 | 0 | Pandas - Read/Write to the same csv quickly.. getting permissions error | 50,692,295 | -0.099668 | python,pandas,csv,io | Close the file that you are trying to read and write and then try running your script.
Hope it helps | I have a script that I am trying to execute every 2 seconds.. to begin it reads a .csv with pd.read_csv. Then executes modifications on the df and finally overwrites the original .csv with to_csv.
I'm running into a PermissionError: [Errno 13] Permission denied: and from my searches I believe it's due to trying to open/write too often to the same file though I could be wrong.
Any suggestions on how to avoid this?
Not sure if relevant, but the file is stored in a OneDrive folder.
It does save on occasion, seemingly randomly.
Increasing the timeout so the script executes slower helps but I want it running fast!
Thanks | 0 | 1 | 1,596 |
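The answer boils down to making sure nothing still holds the file open. As a hedged workaround sketch (not part of the original answer), you can also retry on PermissionError and write via a temporary file so the target is only replaced atomically:

```python
import os
import tempfile
import time
import pandas as pd

def update_csv(path, retries=5, delay=0.5):
    for attempt in range(retries):
        try:
            df = pd.read_csv(path)
            df["updated"] = True            # placeholder for the real modifications
            # Write to a temp file first, then atomically replace the original,
            # so the target is never left half-written while another process syncs it.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
            os.close(fd)
            df.to_csv(tmp, index=False)
            os.replace(tmp, path)
            return
        except PermissionError:
            time.sleep(delay)               # file busy (e.g. OneDrive sync) - retry
    raise PermissionError(f"Could not update {path} after {retries} attempts")
```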
0 | 50,709,075 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-06-05T05:38:00.000 | 1 | 1 | 0 | Can doc2vec be useful if training on Documents and inferring on sentences only | 50,692,739 | 1.2 | python,gensim,training-data,doc2vec | Every corpus and every project's goals are different. Your approach of training on larger docs but then inferring on shorter sentences could plausibly work, but you have to try it to see how well, and then iteratively test whether perhaps shorter training docs (as single sentences or groups-of-sentences) work better, for your specific goal.
Note that gensim Doc2Vec inference often gains from non-default parameters – especially more steps (than the tiny default 5) or a smaller starting alpha (more like the training default of 0.025), especially on shorter documents. And, that inference also may work better or worse depending on original model metaparameters.
Note also that an implementation limit means that texts longer than 10,000 tokens are silently truncated in gensim Word2Vec/Doc2Vec training. (If you have longer docs, you can split them into less-than-10K-token subdocuments, but then repeat the tags for each subdocument, to closely simulate what effect training with a longer document would have had.) | I am training with some documents with gensim's Doc2vec.
I have two types of inputs:
Whole English Wikipedia: Each article of Wikipedia text is considered as one document for doc2vec training. (Total around 5.5 million articles or documents)
Some documents related to my project that are manually prepared and collected from some websites. (around 15000 documents).
Each document is around 100 sentences in size.
Further, I want to use this model to infer sentences of size (10~20 words).
I request some clarification on my approach.
Is the method of training over documents (each document approx. 100 sentences) and then inferring over new sentences correct?
Or, should I train over only sentences and not documents, and then infer over the new sentence? | 0 | 1 | 332
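A small sketch of the inference tweak mentioned in the answer (more passes, training-like starting alpha). The parameter names follow the gensim 3.x API from around that time (steps, docvecs); newer gensim uses epochs and model.dv instead. Model file and query text are placeholders:

```python
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("wiki_plus_project.d2v")      # hypothetical saved model

sentence = "short query sentence about the project domain"
tokens = sentence.lower().split()                  # reuse the same preprocessing as training

# The default (steps=5) is often too few for short texts; more passes and a
# larger starting alpha tend to give more stable inferred vectors.
vector = model.infer_vector(tokens, steps=50, alpha=0.025)
similar = model.docvecs.most_similar([vector], topn=5)
print(similar)
```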
0 | 50,713,079 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-06-06T04:26:00.000 | 1 | 1 | 0 | Pytrends anaconda install conflict with TensorFlow | 50,712,246 | 0.197375 | python,tensorflow,anaconda | Try upgrading your version of tensorflow. I tried it with Tensorflow 1.6.0, tensorboard 1.5.1 and it worked fine. I was able to import pytrends. | It seems I have a conflict when trying to install pytrends via anaconda. After submitting "pip install pytrends" the following error arises:
tensorflow-tensorboard 1.5.1 has requirement bleach==1.5.0, but you'll have bleach 2.0.0 which is incompatible.
tensorflow-tensorboard 1.5.1 has requirement html5lib==0.9999999, but you'll have html5lib 0.999999999 which is incompatible.
I also have tensorflow but don't necessarily need it. But I'd prefer a means to operate with both. | 0 | 1 | 333 |
0 | 50,723,414 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-06-06T11:33:00.000 | 0 | 2 | 0 | Optimizing RAM usage when training a learning model | 50,719,405 | 0 | python,deep-learning,ram,rdp | Slightly orthogonal to your actual question: if your high RAM usage is caused by having the entire dataset in memory for training, you could eliminate that memory footprint by reading and storing only one batch at a time: read a batch, train on this batch, read the next batch and so on. | I have been working on creating and training a Deep Learning model for the first time. I did not have any knowledge about the subject prior to the project and therefore my knowledge is limited even now.
I used to run the model on my own laptop, but after implementing a working OHE and SMOTE I simply couldn't run it on my own device anymore due to a MemoryError (8GB of RAM). Therefore I am currently running the model on a 30GB RAM RDP which, I thought, would allow me to do so much more.
My code seems to have some horrible inefficiencies which I wonder whether they can be solved. One example is that using pandas.concat makes my model's RAM usage skyrocket from 3GB to 11GB, which seems very extreme; afterwards I drop a few columns, making the RAM spike to 19GB but actually returning back to 11GB after the computation is completed (unlike the concat). I also forced myself to stop using SMOTE for now just because the RAM usage would go up way too much.
At the end of the code, where the training happens, the script breathes its final breath while trying to fit the model. What can I do to optimize this?
I have thought about splitting the code into multiple parts (for example preprocessing and training), but to do so I would need to store massive datasets in a pickle, which can only reach 4GB (correct me if I'm wrong). I have also thought about using pre-trained models, but I truly did not understand how this process works and how to use one in Python.
P.S.: I would also like my SMOTE back if possible
Thank you all in advance! | 0 | 1 | 1,499 |
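A sketch of the batch-at-a-time suggestion using pandas chunked reading, so the full data set never has to sit in RAM; the file name, label column and chunk size are placeholders:

```python
import pandas as pd

def chunked_batches(csv_path, chunksize=10000):
    """Stream the csv in chunks and yield (X, y) batches for training."""
    while True:                                    # Keras-style generators must loop forever
        for chunk in pd.read_csv(csv_path, chunksize=chunksize):
            X = chunk.drop(columns=["target"]).values   # hypothetical label column
            y = chunk["target"].values
            yield X, y

# Hypothetical usage with a Keras model:
# model.fit_generator(chunked_batches("train.csv"),
#                     steps_per_epoch=total_rows // 10000, epochs=10)
```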
0 | 50,730,108 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-06-06T14:46:00.000 | 8 | 1 | 0 | How is Word2Vec min_count applied | 50,723,303 | 1.2 | python,word2vec,gensim | Words below the min_count frequency are dropped before training occurs. So, the relevant context window is the word-distance among surviving words.
This de facto shrinking of contexts is usually a good thing: the infrequent words don't have enough varied examples to obtain good vectors for themselves. Further, while individually each infrequent word is rare, in total there are lots of them, so these doomed-to-poor-vector rare-words intrude on most other words' training, serving as a sort of noise that makes those word-vectors worse too.
(Similarly, when using the sample parameter to down-sample frequent words, the frequent words are randomly dropped – which also serves to essentially "shrink" the distances between surviving words, and often improves overall vector quality.) | Say that I'm training a (Gensim) Word2Vec model with min_count=5. The documentation tells us what min_count does:
Ignores all words with total frequency lower than this.
What is the effect of min_count on the context? Let's say that I have a sentence of frequent words (frequency > 5) and infrequent words (frequency < 5), annotated with f and i:
This (f) is (f) a (f) test (i) sentence (i) which (f) is (f) shown (i) here (i)
I just made up which word is frequently used and which word is not for demonstration purposes.
If I remove all infrequent words, we get a completely different context from which word2vec is trained. In this example, your sentence would be "This is a which is", which would then be a training sentence for Word2Vec. Moreover, if you have a lot of infrequent words, words that were originally very far away from each other are now placed within the same context.
Is this the correct interpretation of Word2Vec? Are we just assuming that you shouldn't have too many infrequent words in your dataset (or set a lower min_count threshold)? | 0 | 1 | 6,773 |
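For reference, a minimal gensim call showing where min_count and the related sample down-sampling are set; the toy corpus is repeated only so the example still has a vocabulary left after pruning:

```python
from gensim.models import Word2Vec

# Toy corpus repeated so that every token still clears the min_count threshold.
sentence = "this is a test sentence which is shown here".split()
sentences = [sentence] * 10

# Words occurring fewer than min_count times are dropped before training,
# so surviving words effectively move closer together inside each window.
model = Word2Vec(sentences, window=5, min_count=5, sample=1e-3)
print(model.wv.most_similar("test"))
```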
0 | 50,787,466 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2018-06-06T19:21:00.000 | 1 | 1 | 0 | Not able to install Spyder in the virtual environment on Anaconda | 50,728,057 | 1.2 | python,python-3.x,anaconda,spyder | You should activate your virtual environment and then type: conda install spyder. That should install spyder for that particular virtual environment. If you used pip or pip3 you may have problems. | I was trying to install spyder in the virtual environment on anaconda, but ended up with this debugging error.
Executing transaction:
\ DEBUG menuinst_win32:init(199): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\Users\Public\Anaconda\envs\tensorflow', env_name: 'tensorflow', mode: 'None', used_mode: 'user'
DEBUG menuinst_win32:create(323): Shortcut cmd is C:\Users\Public\Anaconda\pythonw.exe, args are ['C:\Users\Public\Anaconda\cwp.py', 'C:\Users\Public\Anaconda\envs\tensorflow', 'C:\Users\Public\Anaconda\envs\tensorflow\pythonw.exe', 'C:\Users\Public\Anaconda\envs\tensorflow\Scripts\spyder-script.py']
| DEBUG menuinst_win32:create(323): Shortcut cmd is C:\Users\Public\Anaconda\python.exe, args are ['C:\Users\Public\Anaconda\cwp.py', 'C:\Users\Public\Anaconda\envs\tensorflow', 'C:\Users\Public\Anaconda\envs\tensorflow\python.exe', 'C:\Users\Public\Anaconda\envs\tensorflow\Scripts\spyder-script.py', '--reset']
I have also tried to clear the debugging errors using
conda config --set quiet True, but it was no use.
Can anyone help me with this? | 0 | 1 | 777 |
0 | 50,748,029 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-06-07T09:27:00.000 | 0 | 1 | 0 | Reinforcement Learning with MDP for revenues optimization | 50,737,705 | 1.2 | python,optimization,reinforcement-learning,markov-decision-process | I think the biggest thing missing in your formulation is the sequential part. Reinforcement learning is useful when used sequentially, where the next state has to be dependent on the current state (thus the "Markovian"). In this formulation, you have not specified any Markovian behavior at all. Also, the reward is a scalar which is dependent on either the current state or the combination of current state and action. In your case, the revenue is dependent on the price (the action), but it has no correlation to the state (the seat). These are the two big problems that I see with your formulation; there are others as well. I suggest you go through RL theory (online courses and such) and write a few sample problems before trying to formulate your own. | I want to model the service of selling seats on an airplane as an MDP (Markov decision process) in order to use reinforcement learning for airline revenue optimization. For that I needed to define what the states, actions, policy, value and reward would be. I thought a little bit about it, but I think there is still something missing.
I model my system this way:
States = (r,c) where r is the number of passengers and c the number of seats bought so r>=c.
Actions = (p1,p2,p3), which are the 3 prices. The objective is to decide which one of them gives more revenue.
Reward: revenues.
Could you please tell me what you think and help me?
After the modelling, I have to implement all of that with Reinforcement Learning. Is there a package that does the work? | 0 | 1 | 232
0 | 50,739,411 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-06-07T09:44:00.000 | 3 | 1 | 0 | How can I read a csv file using panda dataframe from GPU? | 50,738,058 | 1.2 | python-3.x,gpu,h2o,h2o4gpu | No. The biggest bottleneck is IO and that’s handled by the CPU. | I am reading a file using
file=pd.read_csv('file_1.csv')
which is taking a long time on the CPU.
Is there any method to read this using the GPU? | 0 | 1 | 810
0 | 50,776,543 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-06-07T14:13:00.000 | 0 | 1 | 0 | Processing Images with Depth Information | 50,743,476 | 0 | python,opencv | If you want to work with depth-based cameras you can go for Time of Flight (ToF) cameras like the picoflexx and picomonstar cameras. They will give you X, Y and Z values, where the x and y values are the distances of that point from the camera centre (like in 2D space) and Z will give you the direct distance of that point (not the perpendicular one) from the camera centre.
For this camera and this 3D data processing you can use the Point Cloud Library (PCL).
Now, I'm trying to figure out how to incorporate depth and work with depth information as well.
I've seen the documentation on creating simple point clouds using the Left and Right images of a Stereocamera, but I was hoping to gain some intuition on Depth-based cameras themselves like Kinect.
What kind of camera should I use for this purpose, and more importantly: how do I process these images in Python - as I can't find a lot of documentation on handling RGBD images in OpenCV. | 0 | 1 | 280 |
0 | 50,774,913 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-06-07T17:31:00.000 | 0 | 1 | 0 | ARIMA Forecasting | 50,747,097 | 1.2 | python,time-series,missing-data,arima | I don't know exactly about your specific domain problem, but these things apply usually in general:
If the NA values represent 0 values for your domain specific problem, then replace them with 0 and then fit the ARIMA model (this would for example be the case if you are looking at daily sales and on some days you have 0 sales)
If the NA values represent unknown values for your domain specific problem then do not replace them and fit your ARIMA model. (this would be the case, if on a specific day the employee forgot to write down the amount of sales and it could be any number).
I probably would not use imputation at all. There are methods to fit an ARIMA model on time series that have missing values. These algorithms are probably also implemented somewhere in Python (but I don't know for sure, since I mostly use R). | I have time series data which looks something like this
Loan_id Loan_amount Loan_drawn_date
id_001 2000000 2015-7-15
id_003 100 2014-7-8
id_009 78650 2012-12-23
id_990 100 2018-11-12
I am trying to build an ARIMA forecasting model on this data, which has roughly 550 observations. These are the steps I have followed:
Converted the time series data into daily data and replaced NA values with 0. The data looks something like this:
Loan_id Loan_amount Loan_drawn_date
id_001 2000000 2015-7-15
id_001 0 2015-7-16
id_001 0 2015-7-17
id_001 0 2015-7-18
id_001 0 2015-7-19
id_001 0 2015-7-20
....
id_003 100 2014-7-8
id_003 0 2014-7-9
id_003 0 2014-7-10
id_003 0 2014-7-11
id_003 0 2014-7-12
id_003 0 2014-7-13
....
id_009 78650 2012-12-23
id_009 0 2012-12-24
id_009 0 2012-12-25
id_009 0 2012-12-26
id_009 0 2012-12-27
id_009 0 2012-12-28
...
id_990 100 2018-11-12
id_990 0 2018-11-13
id_990 0 2018-11-14
id_990 0 2018-11-15
id_990 0 2018-11-16
id_990 0 2018-11-17
id_990 0 2018-11-18
id_990 0 2018-11-19
Can anyone please suggest how I should proceed with these 0 values now?
Seeing the variance in the loan amount numbers, I would take the log of the loan amount. I am trying to build an ARIMA model for the first time and I have read about all the methods of imputation, but there is nothing I can find. Can anyone please tell me how I should proceed with this data? | 0 | 1 | 148
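If the zeros are treated as genuine observations (the first case in the answer), a rough Python sketch of fitting and forecasting could look like this. The file name and the (p, d, q) order are placeholders, not recommendations, and the import path assumes a recent statsmodels:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

df = pd.read_csv("loans_daily.csv", parse_dates=["Loan_drawn_date"])  # hypothetical file
series = (df.set_index("Loan_drawn_date")["Loan_amount"]
            .resample("D").sum()          # daily totals; days without loans become 0
            .fillna(0))

model = ARIMA(series, order=(1, 0, 1))    # placeholder (p, d, q) order
result = model.fit()
print(result.summary())
forecast = result.forecast(steps=30)       # 30-day-ahead forecast
```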
0 | 50,774,153 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-06-07T17:57:00.000 | 0 | 2 | 0 | what does the error "Length of label is not same with #data" when I call lightgbm.train | 50,747,460 | 0 | python,lightgbm | It simply means that the dimensions of your training examples and the respective list of labels do not match. In other words, if you have 10 training instances you need exactly 10 labels. (For multi-label scenarios a better formulation would be to replace label by labelling, or refer to the size of the array.) | I'm pretty new to LightGBM, and when I try to apply lightgbm.train on my dataset, I get this error:
LightGBMError: Length of label is not same with #data
I'm not sure where I made a mistake. I tried
model = lightgbm.train(params, train_data, valid_sets=test_data, early_stopping_rounds=150, verbose_eval=200)
Thanks in advance. | 0 | 1 | 4,332 |
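A quick way to confirm the mismatch described in the answer is to check the lengths before building the Dataset; the data below is random purely so the sketch is self-contained:

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 rows of features
y = rng.integers(0, 2, size=100)         # must also be exactly 100 labels

# The error in the question appears when these two lengths differ,
# e.g. after filtering rows in X but not in y.
assert len(X) == len(y), f"{len(X)} rows of data but {len(y)} labels"

train_data = lgb.Dataset(X, label=y)
params = {"objective": "binary", "verbose": -1}
model = lgb.train(params, train_data, num_boost_round=10)
```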
0 | 50,753,840 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-06-07T19:07:00.000 | 1 | 1 | 0 | readlines and numpy loadtxt gives UnicodeDecodeError after upgrade 18.04 | 50,748,564 | 0.197375 | python,numpy,ubuntu | Adding encoding='ISO-8859-1' to readlines and loadtxt did the trick. | I have a python script which uses readlines and numpy.loadtxt to load a csv file. It works perfectly fine on my desktop running ubuntu 16.04. On my laptop running 18.04 I get (loading the same file) the following error: UnicodeDecodeError: 'utf8' codec can't decode byte 0xb5 in position 446: invalid start byte
What can I do to make the script work? | 0 | 1 | 322
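A small sketch of the fix described in the answer: pass an explicit encoding both to open() for readlines and to numpy.loadtxt (the encoding parameter exists since NumPy 1.14); the file name is a placeholder:

```python
import numpy as np

path = "data.csv"   # placeholder for the actual csv file

# Explicit encoding for plain readlines:
with open(path, encoding="ISO-8859-1") as fh:
    lines = fh.readlines()

# And the same encoding for numpy.loadtxt:
data = np.loadtxt(path, delimiter=",", encoding="ISO-8859-1")
```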