Dataset columns (name: dtype, observed range or string length):

GUI and Desktop Applications: int64, values 0 to 1
A_Id: int64, values 5.3k to 72.5M
Networking and APIs: int64, values 0 to 1
Python Basics and Environment: int64, values 0 to 1
Other: int64, values 0 to 1
Database and SQL: int64, values 0 to 1
Available Count: int64, values 1 to 13
is_accepted: bool, 2 classes
Q_Score: int64, values 0 to 1.72k
CreationDate: string, length 23
Users Score: int64, values -11 to 327
AnswerCount: int64, values 1 to 31
System Administration and DevOps: int64, values 0 to 1
Title: string, length 15 to 149
Q_Id: int64, values 5.14k to 60M
Score: float64, values -1 to 1.2
Tags: string, length 6 to 90
Answer: string, length 18 to 5.54k
Question: string, length 49 to 9.42k
Web Development: int64, values 0 to 1
Data Science and Machine Learning: int64, values 1 to 1
ViewCount: int64, values 7 to 3.27M
0
49,523,832
0
0
0
0
2
false
3
2018-03-08T06:22:00.000
1
3
0
How to find the optimal number of clusters using k-prototype in python
49,166,657
0.066568
python,cluster-analysis
Yeah, the elbow method is good enough to get the number of clusters, because it is based on the total sum of squared distances.
I am trying to cluster some big data using the k-prototypes algorithm. I am unable to use the K-Means algorithm as I have both categorical and numeric data. Via the k-prototypes clustering method I have been able to create clusters if I define what k value I want. How do I find the appropriate number of clusters for this? Will the popular methods (like the elbow method and the silhouette score method), which work with only numerical data, also work for mixed data?
0
1
8,033
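A minimal sketch of the elbow approach described in the answer above, assuming the third-party kmodes package; the synthetic data and the choice of categorical column indices are illustrative.

```python
import numpy as np
from kmodes.kprototypes import KPrototypes  # third-party "kmodes" package (assumption)

# Tiny illustrative mixed dataset: two numeric columns, two categorical columns.
rng = np.random.default_rng(0)
X = np.empty((100, 4), dtype=object)
X[:, 0] = rng.normal(size=100)
X[:, 1] = rng.normal(size=100)
X[:, 2] = rng.choice(['a', 'b', 'c'], size=100)
X[:, 3] = rng.choice(['x', 'y'], size=100)
categorical_cols = [2, 3]

costs = []
for k in range(2, 8):
    model = KPrototypes(n_clusters=k, init='Cao', random_state=0)
    model.fit_predict(X, categorical=categorical_cols)
    costs.append(model.cost_)   # total within-cluster dissimilarity for this k

# The "elbow" is the k after which adding clusters stops reducing the cost much.
print(list(zip(range(2, 8), costs)))
```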
0
49,182,785
0
1
0
0
1
true
0
2018-03-08T14:36:00.000
1
1
0
Parallelize Pandas CSV Writing
49,175,681
1.2
python,pandas
If you have only one HDD (not even an SSD drive), then the disk IO is your bottleneck and you'd better write to it sequentially instead of writing in parallel. The disk head needs to be positioned before writing, so trying to write in parallel will most probably be slower compared to one writer process. It would make sense if you would have multiple disks...
Is it possible to write multiple CSVs out simultaneously? At the moment, I do a listdir() on an outputs directory, and iterate one-by-one through a list of files. I would ideally like to write them all at the same time. Has anyone had any experience in this before?
0
1
61
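If the files really do live on independent fast storage (SSD or separate disks), a multiprocessing sketch like the one below could parallelize the per-file work; the directory names and the per-file transformation are illustrative, and on a single HDD the sequential loop in the answer above is likely faster.

```python
import os
from multiprocessing import Pool

import pandas as pd

IN_DIR, OUT_DIR = 'inputs', 'outputs'   # illustrative paths

def process_one(filename):
    # Independent per-file read, transform, and write.
    df = pd.read_csv(os.path.join(IN_DIR, filename))
    df.to_csv(os.path.join(OUT_DIR, filename), index=False)
    return filename

if __name__ == '__main__':
    files = [f for f in os.listdir(IN_DIR) if f.endswith('.csv')]
    with Pool(processes=4) as pool:      # four worker processes writing in parallel
        pool.map(process_one, files)
```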
0
49,190,269
0
0
0
0
1
false
0
2018-03-09T07:46:00.000
0
1
0
Keras "Tanh Activation" function -- edit: hidden layers
49,188,928
0
python-3.x,neural-network,keras,multiclass-classification,activation-function
First of all, you simply shouldn't use them in your output layer. Depending on your loss function you may even get an error. A loss function like MSE should be able to take the output of tanh, but it won't make much sense. If we're talking about hidden layers, though, you're perfectly fine. Also keep in mind that there are biases, which can learn an offset in the layer before the layer's output is passed to the activation function.
Tanh activation functions bound the output to [-1,1]. I wonder how this works if the input (features and target class) is given in one-hot-encoded form. How does Keras internally manage the negative output of the activation function when comparing it with the class labels (which are in one-hot-encoded form), i.e. only 0's and 1's (no negative values)? Thanks!
0
1
641
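A minimal sketch of the setup recommended in the answer above: tanh confined to hidden layers, softmax on the output so predictions stay in [0, 1] for one-hot labels. The layer sizes, 4-class output and random data are illustrative, using tf.keras.

```python
import numpy as np
import tensorflow as tf

# Illustrative data: 20 input features, 4 one-hot classes.
x = np.random.rand(256, 20).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 4, size=256), num_classes=4)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='tanh', input_shape=(20,)),  # tanh is fine here
    tf.keras.layers.Dense(64, activation='tanh'),                     # ...and here
    tf.keras.layers.Dense(4, activation='softmax'),                   # output stays in [0, 1]
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```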
0
49,312,396
0
0
0
0
1
true
0
2018-03-09T12:25:00.000
0
1
0
Install tensorflow1.2 with CUDA8.0 and cuDNN5.1 shows 'ImportError: libcublas.so.9.0'
49,193,808
1.2
python,tensorflow,cuda,ubuntu-16.04,cudnn
Thanks to @Robert Crovella, you gave me the helpful solution to my question! When I tried a different way, pip install tensorflow-gpu==1.4, to install again, it found my older tensorflow 1.5, uninstalled tf 1.5 and installed the new tensorflow, but pip install --ignore-installed --upgrade https://URL... couldn't find it. So I guess different commands in the terminal bring different tensorflow builds to my system. Thank you again.
I want to install tensorflow 1.2 on Ubuntu 16.04 LTS. After installing with pip, I tested it with import tensorflow as tf in the terminal, and the error shows ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory. It seems that tensorflow needs a higher CUDA version, but my tensorflow version is only 1.2, so I think my CUDA version should be high enough. Is CUDA 9.0 too high for tensorflow 1.2? By the way, I found other people can run tensorflow 1.2 using CUDA 8.0 and cuDNN 5.1, so can you help me solve this problem? Thank you very much!
0
1
341
0
49,195,249
0
0
0
0
1
false
3
2018-03-09T13:34:00.000
1
1
0
Preprocessing machine learning data
49,195,008
0.197375
python,python-3.x,algorithm,machine-learning
The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) You can store in whatever format you like. Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to given that one new piece of data is added every time the algorithm is used?). This depends very much on what algorithm you use. Some algorithms can easily be implemented to learn in an incremental manner. For example, Linear/Logistic Regression implemented with Stochastic Gradient Descent could easily just run a quick update on every new instance as it gets added. For other algorithms, full re-trains are the only option (though you could of course elect not to always do them over and over again for every new instance; you could, for example, simply re-train once per day at a set point in time).
This may be a stupid question, but I am new to ML and can't seem to find a clear answer. I have implemented a ML algorithm on a Python web app. Right now I am storing the data that the algorithm uses in an offline CSV file, and every time the algorithm is run, it analyzes all of the data (one new piece of data gets added each time the algorithm is used). Apologies if I am being too vague, but I am wondering how one should generally go about implementing the data and algorithm properly so that: The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to given that one new piece of data is added every time the algorithm is used?).
0
1
92
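A minimal sketch of the incremental option mentioned in the answer above, using scikit-learn's SGDClassifier with partial_fit; the feature size, class list and random data are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss='log_loss')   # logistic regression via SGD ('log' in older sklearn)
classes = np.array([0, 1])             # all labels must be declared up front for partial_fit

# Initial fit on whatever history already exists (illustrative random data).
X_hist, y_hist = np.random.rand(500, 10), np.random.randint(0, 2, 500)
clf.partial_fit(X_hist, y_hist, classes=classes)

# Later, as each new labelled example arrives, update without a full re-train.
x_new, y_new = np.random.rand(1, 10), np.array([1])
clf.partial_fit(x_new, y_new)
print(clf.predict(x_new))
```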
0
57,648,777
0
1
0
0
2
false
9
2018-03-09T18:16:00.000
0
3
0
Importing the multiarray numpy extension module failed (Just with Anaconda)
49,199,818
0
python,numpy,anaconda
If you are using PyCharm, kindly invalidate caches and restart. No need to uninstall numpy or run any command.
I'm quite new to Python/Anaconda, and I'm facing an issue that I couldn't solve on my own or googling. When I'm running Python on cmd I can import and use numpy. Working fine. When I'm running scripts on Spyder, or just trying to import numpy on Anaconda Prompt this error message appears: ImportError: Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try git clean -xdf (removes all files not under version control). Otherwise reinstall numpy. Original error was: cannot import name 'multiarray' I don't know if there are relations to it, but I cannot update conda, as well. When I try to update I receive Permission Errors. Any ideas?
0
1
9,555
0
49,199,982
0
1
0
0
2
false
9
2018-03-09T18:16:00.000
0
3
0
Importing the multiarray numpy extension module failed (Just with Anaconda)
49,199,818
0
python,numpy,anaconda
I feel like I would have to know a little more, but it seems that you need to reinstall numpy and check whether the complete install was successful. Keep in mind that Anaconda is a closed environment, so you don't have as much control. With regards to the permissions issue, you may have installed it as a superuser/admin. That would mean that in order to update you would have to update as that superuser/admin.
I'm quite new to Python/Anaconda, and I'm facing an issue that I couldn't solve on my own or googling. When I'm running Python on cmd I can import and use numpy. Working fine. When I'm running scripts on Spyder, or just trying to import numpy on Anaconda Prompt this error message appears: ImportError: Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try git clean -xdf (removes all files not under version control). Otherwise reinstall numpy. Original error was: cannot import name 'multiarray' I don't know if there are relations to it, but I cannot update conda, as well. When I try to update I receive Permission Errors. Any ideas?
0
1
9,555
0
49,200,765
0
0
0
0
1
true
0
2018-03-09T19:05:00.000
0
1
0
Should I drop a variable that has the same value in the whole column for building machine learning models?
49,200,518
1.2
python,r,pandas,machine-learning,data-science
You should be deleting such columns because they provide no extra information about how each data point differs from another. It's fine to leave the column in for some machine learning models (due to the nature of how the algorithms work), like random forest, because such a column will simply never be selected to split the data. To spot such columns, especially for categorical or nominal variables (with a fixed number of possible values), you can count the occurrence of each unique value, and if the mode accounts for more than a certain threshold (say 95%), then you delete that column from your model. I personally will go through variables one by one if there aren't many, so that I can fully understand each variable in the model, but the systematic way above is preferable if the feature set is too large.
For instance, column x has 50 values and all of these values are the same. Is it a good idea to delete variables like these for building machine learning models? If so, how can I spot these variables in a large data set? I guess a formula/function might be required to do so. I am thinking of using nunique that can take account of the whole dataset.
0
1
474
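A minimal pandas sketch of both checks from the answer above: exactly-constant columns via nunique, and near-constant columns via the fraction taken by the most frequent value. The 95% threshold and column names are illustrative.

```python
import pandas as pd

df = pd.DataFrame({
    'x': [5] * 50,                     # constant column
    'y': ['a'] * 48 + ['b', 'c'],      # near-constant column (96% one value)
    'z': range(50),                    # informative column
})

# 1) Columns whose value never changes.
constant_cols = [c for c in df.columns if df[c].nunique(dropna=False) <= 1]

# 2) Columns whose most frequent value exceeds a threshold (here 95%).
threshold = 0.95
near_constant_cols = [
    c for c in df.columns
    if df[c].value_counts(normalize=True, dropna=False).iloc[0] >= threshold
]

df_reduced = df.drop(columns=set(constant_cols) | set(near_constant_cols))
print(constant_cols, near_constant_cols, df_reduced.columns.tolist())
```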
0
55,458,337
0
0
0
0
1
true
10
2018-03-10T07:25:00.000
2
4
0
Accessing '.pickle' file in Google Colab
49,206,488
1.2
python,tensorflow,google-data-api,google-colaboratory
Thanks, guys, for your answers. Google Colab has quickly grown into a more mature development environment, and my most favorite feature is the 'Files' tab. We can easily upload the model to the folder we want and access it as if it were on a local machine. This solves the issue. Thanks.
I am fairly new to using Google's Colab as my go-to tool for ML. In my experiments, I have to use the 'notMNIST' dataset, and I have set the 'notMNIST' data as notMNIST.pickle in my Google Drive under a folder called as Data. Having said this, I want to access this '.pickle' file in my Google Colab so that I can use this data. Is there a way I can access it? I have read the documentation and some questions on StackOverflow, but they speak about Uploading, Downloading files and/or dealing with 'Sheets'. However, what I want is to load the notMNIST.pickle file in the environment and use it for further processing. Any help will be appreciated. Thanks !
0
1
27,735
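A minimal sketch of loading the pickle from Drive inside Colab, using the drive.mount helper that Colab provides; the folder name Data and the file name notMNIST.pickle are taken from the question, the rest of the path is the default mount location.

```python
import pickle

from google.colab import drive   # available inside a Colab runtime

drive.mount('/content/drive')    # authorize once; Drive then appears under this path

# Path assumes notMNIST.pickle sits in a Drive folder called "Data".
path = '/content/drive/My Drive/Data/notMNIST.pickle'
with open(path, 'rb') as f:
    data = pickle.load(f)

print(type(data))
```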
0
49,372,377
0
1
0
0
1
true
4
2018-03-10T08:41:00.000
3
1
0
Does Intel vs. AMD matter for running python?
49,207,112
1.2
python,intel,amd
Are you asking about compatibility or performance? Both AMD and Intel market CPU products compatible with x86(_64) architecture and are functionally compatible with all software written for it. That is, they will run it with high probability (there always may be issues when changing hardware, even while staying with the same vendor, as there are too many variables to account). Both Intel and AMD offer a huge number of products with widely varying level of marketed performance. Performance of any application is determined not only by a chosen vendor of a central processor, but by a huge number of other factors, such as amount and speed of memory, disk, and not the least the architecture of the application itself. In the end, it is only real-world measurements that decide, but some estimations can be made by looking at relevant benchmarks and understanding underlying principles of computer performance.
I do a lot of coding in Python (Anaconda install v. 3.6). I don't compile anything, I just run machine learning models (mainly sci-kit and tensor flow) Are there any issues with running these on an workstation with AMD chipset? I've only used Intel before and want to make sure I don't buy wrong. If it matters it is the AMD Ryzen 7-1700 processor.
0
1
11,208
0
50,358,133
0
1
0
0
1
false
1
2018-03-11T16:30:00.000
0
1
0
How can I copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting
49,222,299
0
python,pandas,dataframe,jupyter-notebook,powerpoint
One way seems to be to copy the styled pandas table from jupyter notebook to excel. It will keep a lot of the formatting. Then you can copy it to powerpoint and it will maintain its style.
I am trying to copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting. I currently just take a screenshot to preserve formatting, but this is not ideal. Does anyone know of a better way? I search for an extension that maybe has a screenshot button, but no luck.
0
1
1,613
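A minimal sketch of the Excel route described in the answer above: pandas can write a Styler object to .xlsx (openpyxl needs to be installed), and the resulting formatted sheet can then be copied into PowerPoint. The gradient styling and file name are illustrative.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(8, 4), columns=list('ABCD'))

# Styler objects keep much of their formatting when exported to Excel.
styled = df.style.background_gradient(cmap='viridis').format('{:.2f}')
styled.to_excel('styled_table.xlsx', engine='openpyxl')
# Open the .xlsx, copy the range, and paste into PowerPoint to keep the colors.
```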
0
49,227,672
0
0
0
0
1
false
1
2018-03-12T02:48:00.000
2
1
0
Categorical Data yes/no to 0/1 python - is it a right approach?
49,227,490
0.379949
python,python-3.x,pandas,neural-network,decision-tree
Yes, in my opinion, encoding yes/no as 1/0 would be the right approach for you. Python's sklearn requires features in numerical arrays. There are various ways of encoding: LabelEncoder, OneHotEncoder, etc. However, since your variable only has 2 levels of categories, it wouldn't make much difference whether you go for LabelEncoder or OneHotEncoder.
My dataset has few features with yes/no (categorical data). Few of the machine learning algorithms that I am using, in python, do not handle categorical data directly. I know how to convert yes/no, to 0/1, but my question is - Is this a right approach to go about it? Can these values of no/yes to 0/1, be misinterpreted by algorithms ? The algorithms I am planning to use for my dataset are - Decision Trees (DT), Random Forests (RF) and Neural Networks (NN).
0
1
1,491
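A minimal sketch of the binary encoding discussed above, showing an explicit pandas mapping and sklearn's LabelEncoder side by side; the column names and values are illustrative.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({'smoker': ['yes', 'no', 'no', 'yes'],
                   'owns_car': ['no', 'yes', 'no', 'no']})

# Option 1: explicit mapping keeps control over which level becomes 1.
df['smoker_bin'] = df['smoker'].map({'no': 0, 'yes': 1})

# Option 2: LabelEncoder assigns integer codes automatically.
le = LabelEncoder()
df['owns_car_bin'] = le.fit_transform(df['owns_car'])

print(df)
```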
0
49,229,831
0
0
0
0
1
false
16
2018-03-12T06:55:00.000
9
2
0
Difference between numpy.round and numpy.around
49,229,610
1
python,arrays,numpy,rounding
The main difference is that round is a ufunc of the ndarray class, while np.around is a module-level function. Functionally, both of them are equivalent as they do the same thing - evenly round floats to the nearest integer. ndarray.round calls around from within its source code.
So, I was searching for ways to round off all the numbers in a numpy array. I found 2 similar functions, numpy.round and numpy.around. Both take seemingly same arguments for a beginner like me. So what is the difference between these two in terms of: General difference Speed Accuracy Being used in practice
0
1
11,776
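A quick check of the equivalence described in the answer above: the module-level function, its alias, and the ndarray method all produce identical results.

```python
import numpy as np

a = np.array([0.5, 1.5, 2.675, -3.45])

print(np.around(a, 2))                                   # module-level function
print(np.round(a, 2))                                    # alias of the same function
print(a.round(2))                                        # ndarray method
print(np.array_equal(np.round(a, 2), np.around(a, 2)))   # True: identical results
```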
0
49,231,482
0
1
0
0
1
false
1
2018-03-12T08:57:00.000
0
1
0
Installing python packages in a different location than default by pip or conda
49,231,322
0
python,pip,packages,conda
Install conda and create a new environment (conda create --name foobar python=3.x plus the list of packages). Use Anaconda to activate foobar (activate foobar). Check the pip location by typing 'where pip' in cmd, to be sure you are using the pip from the Python inside the foobar environment and not the default Python installed on your system outside of your conda environment, and then use the pip at that location to install the requested library into your environment. PS: you may want to consider installing Cygwin on your Windows machine to get used to working in a Linux-like environment.
How do you use a Python package such as Tensorflow or Keras if you cannot install the package on the drive on which pip always saves the packages? I'm a student at a university and we don't have permission to write to the C drive, which is where pip works out of (I get a you don't have write permission error when installing packages through pip or conda`). I do have memory space available on my user drive, which is separate from the C drive (where the OS is installed). So, is there any way I can use these Python libraries without it being installed? Maybe I can install the package on my user drive and ask the compiler to access it from there? I'm just guessing here, I have no knowledge of how this works.
0
1
1,385
0
49,244,770
0
0
0
0
1
true
0
2018-03-12T12:01:00.000
0
1
0
Normalization of input data to Qnetwork
49,234,736
1.2
python,scikit-learn,reinforcement-learning,q-learning
Normalizing the input can lead to faster convergence. It is highly recommended to normalize the inputs. And as the network will progress through different layers due to use of non-linearities the data flowing between the different layers will not be normalized anymore and therefore, for faster convergence we often use batch normalization layers. Unit Gaussian data always helps in faster convergence and therefore make sure to keep it in unit Gaussian form as much as possible.
I am well known with that a “normal” neural network should use normalized input data so one variable does not have a bigger influence on the weights in the NN than others. But what if you have a Qnetwork where your training data and test data can differ a lot and can change over time in a continous problem? My idea was to just run a normal run without normalization of input data and then see the variance and mean from the input datas of the run and then use the variance and mean to normalize my input data of my next run. But what is the standard to do in this case? Best regards Søren Koch
0
1
333
0
49,294,793
0
0
0
0
1
false
0
2018-03-12T18:03:00.000
0
1
0
Can the parent nodes of clusters formed using disjoint set forest be used as cluster representative?
49,241,733
0
python,algorithm,machine-learning,cluster-analysis,data-mining
The parent node is the aggregated cluster. It's not a single point, so you can't just use it as representative. But you can use the medoids, for example.
The intention is to merge clusters which have similarity higher than the Jaccard similarity based on pairwise comparison of cluster representative. My logic here is that because the child nodes are all under the parent node for a cluster, it means that the parent node is somewhat like a representative of the cluster.
0
1
27
0
49,248,423
0
1
0
0
1
false
0
2018-03-12T23:06:00.000
0
1
0
ModuleNotFoundError in Spyder with Python
49,245,779
0
python,ubuntu,spyder
Your PATH may be pointing to the wrong python environment. Depending on which one is conflicting, you may have to do some exploring to find the culprit. My guess is that Spyder is not using your created conda environment where Pytorch is installed. To change the path in Spyder, open the Preferences window. Within this window, select the Python interpreter item on the left. The path to the Python executable will be right there. I'm using a Mac, so the settings navigation may be different for you, but it's around there somewhere.
Hi I'm using Ubuntu and have created a conda environment to build a project. I'm using Python 2.7 and Pytorch plus some other libraries. When I try to run my code in Spyder I receive a ModuleNotFoundError telling me that torch module hasn't been installed. However, when I type conda list into a terminal I can clearly see torch is there. How can I configure this to work with Spyder? Thanks.
0
1
1,203
0
49,266,730
0
0
0
0
1
false
1
2018-03-13T01:52:00.000
0
2
0
Process Large (10gb) Time Series CSV file into daily files
49,247,108
0
python,python-3.x,pandas
I was getting thrown off by the fact that iterating over open(...) already yields lines. I was doing a separate readline(...) after the open(...) and so unwittingly advancing the iterator and getting bad results. There is a small problem with the csv write which I'll post as a new question.
I am new to Python 3, coming over from R. I have a very large time series file (10gb) which spans 6 months. It is a csv file where each row contains 6 fields: Date, Time, Data1, Data2, Data3, Data4. "Data" fields are numeric. I would like to iterate through the file and create & write individual files which contain only one day of data. The individual dates are known only by the fact that the date field suddenly changes. Ie, they don't include weekends, certain holidays, as well as random closures due to unforseen events so the vector of unique dates is not deterministic. Also, the number of lines per day is also variable and unknown. I envision reading each line into a buffer and comparing the date to the previous date. If the next date = previous date, I append that line to the buffer. I repeat this until next date != previous date, at which point I write the buffer to a new csv file which contains only that day's data (00:00:00 to 23:59:59). I had trouble appending the new lines with pandas dataframes, and using readline into a list just got too mangled for me. Looking for Pythonic advice.
0
1
626
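A minimal line-by-line sketch for the splitting task in the question above: it never holds more than one row in memory, and assumes the date is the first comma-separated field and that all rows for a day are contiguous (both stated in the question). File names are illustrative.

```python
import csv

def split_by_day(path):
    """Stream a big Date,Time,Data1..Data4 CSV and write one file per day."""
    current_date, out_file, writer = None, None, None
    with open(path, newline='') as f:
        reader = csv.reader(f)
        header = next(reader)                     # keep the header for every output file
        for row in reader:
            date = row[0]
            if date != current_date:              # date changed: start a new day file
                if out_file:
                    out_file.close()
                safe = date.replace('/', '-')     # keep the date usable as a file name
                out_file = open(f'day_{safe}.csv', 'w', newline='')
                writer = csv.writer(out_file)
                writer.writerow(header)
                current_date = date
            writer.writerow(row)
    if out_file:
        out_file.close()

split_by_day('timeseries_6_months.csv')           # illustrative file name
```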
0
49,248,106
0
0
0
0
1
false
0
2018-03-13T03:00:00.000
1
1
0
K Means Cluster with Specified Intra Cluster Distance
49,247,626
0.197375
python,machine-learning,k-means
I think this method would work: Run KMeans. Mark all clusters exceeding intracluster distance threshold. For each marked cluster, run KMeans for K=2 on the cluster's data. Repeat 2, until no clusters are marked. Each cluster is split in two, until the intra cluster distance is not violated. Another option: Run KMeans. If any clusters exceed intracluster distance threshold, increase K and repeat 1.
I often come across a situation where I have bunch of different addresses (input data in Lat Long) mapped all over the city. What i need to do is use cluster these locations in a way that allows me to specify "maximum distance netween any two points within a cluster". In other words, specify maximum intra-cluster distance. For example, to cluster all my individual points in a way that -- maximum distance between any two points within a cluster is 1.5KM.
0
1
233
0
62,542,674
0
0
0
0
1
false
2
2018-03-13T13:47:00.000
-1
1
0
What is a simple way to extract NDVI average from polygon [Sentinel 2 L2A]
49,257,867
-0.197375
python,r,gis,satellite
You can give Google Earth Engine a try. That would be the easiest way to obtain access to image series. If your research applies only to that period, you may do less work downloading by hand and processing in QGIS. If programming is a must, use Google Earth Engine; they have much of the problem solved. Otherwise you will have to develop routines for handling the communication with the Sentinel Open Hub, downloading L1C (if L2A is not present) and converting to L2A using Sen2Cor, then computing NDVI, cropping, etc.
Currently, I am working on a project for a non-profit organization. Therefore, I need the average NDVI values for certain polygons. Input for my search: Group of coördinates (polygon) a range of dates (e.g. 01-31-2017 and 02-31-2017) What I now want is: the average NDVI value of the most recent picture in that given date range with 0% cloud coverage of the given polygon Is there a simple way to extract these values via an API (in R or Python)? I prefer working with the sentinel-hub, but I am not sure if it's the best platform to extract the data I need. Because I am working time series I should use the L2A version (there is an NDVI layer).
0
1
355
0
49,274,707
0
0
1
0
1
false
1
2018-03-14T05:14:00.000
1
1
1
In Matlab Runtime Python3.6 installer not found in order to install matlab python suport for ubunut 16.4
49,270,176
0.197375
python,matlab,computer-vision,ubuntu-16.04
The Python installer should be in /{matlab_root}/extern/engines/python. Then run python setup.py install. Hope it helps.
I tried to install the MATLAB R2017b Runtime Python 3.6 engine on my Ubuntu 16.04. As per the instructions given in the MATLAB community, the Python installer (setup.py) should be in the ../../v93/extern/engines/python location. When I go there I couldn't see that setup.py file. I have tried re-installing the MATLAB R2017b Runtime many times, but I still couldn't find that python setup.py in the location. Could you please send me instructions on how to install this MATLAB R2017b Runtime on Ubuntu 16.04 so that I can access my MATLAB libraries from Python 3.6?
0
1
88
0
49,294,559
0
0
0
0
1
false
2
2018-03-14T09:08:00.000
0
1
0
Python KMeans Clustering - Handling nan Values
49,273,536
0
python,cluster-analysis,k-means
If you don't have data on a word, then skip it. You could try to compute a word vector on the fly based on the context, but that essentially is the same as just skipping it.
I am trying to cluster a number of words using the KMeans algorithm from scikit learn. In particular, I use pre-trained word embeddings (300 dimensional vectors) to map each word with a number vector and then I feed these vectors to KMeans and provide the number of clusters. My issue is that there are certain words in my input corpus which I can not find in the pretrained word embeddings dictionary. This means that in these cases, instead of a vector, I get a numpy array full of nan values. This does not work with the kmeans algorithm and therefore I have to exclude these arrays. However, I am interested in seeing all these cases that were not found in the word embeddings and what is more, if possible throw them inside a separate cluster that will contain only them. My idea at this point is to set a condition that if the word is returned with a nan-values array from the embeddings index, then assign an arbitrary vector to it. Each dimension of the embeddings vector lie within [-1,1]. Therefore, if I assign the following vector [100000]*300 to all nan words, I have created a set of outliers. In practice, this works as expected, since this particular set of vectors are forced in a separate cluster. However, the initialization of the kmeans centroids is affected by these outlier values and therefore all the rest of my clusters get messed up as well. As a remedey, I tried to initiate the kmeans using init = k-means++ but first, it takes significantly longer to execute and second the improvement is not much better. Any suggestions as to how to approach this issue? Thank you.
0
1
2,184
0
49,289,462
0
0
0
0
1
true
2
2018-03-14T23:27:00.000
11
1
0
Decision Tree Sklearn -Depth Of tree and accuracy
49,289,187
1.2
python,scikit-learn,decision-tree
max_depth is what the name suggests: The maximum depth that you allow the tree to grow to. The deeper you allow, the more complex your model will become. For training error, it is easy to see what will happen. If you increase max_depth, training error will always go down (or at least not go up). For testing error, it gets less obvious. If you set max_depth too high, then the decision tree might simply overfit the training data without capturing useful patterns as we would like; this will cause testing error to increase. But if you set it too low, that is not good as well; then you might be giving the decision tree too little flexibility to capture the patterns and interactions in the training data. This will also cause the testing error to increase. There is a nice golden spot in between the extremes of too-high and too-low. Usually, the modeller would consider the max_depth as a hyper-parameter, and use some sort of grid/random search with cross-validation to find a good number for max_depth.
I am applying Decision Tree to a data set, using sklearn In Sklearn there is a parameter to select the depth of the tree - dtree = DecisionTreeClassifier(max_depth=10). My question is how the max_depth parameter helps on the model. how does high/low max_depth help in predicting the test data more accurately?
0
1
22,441
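A minimal sketch of the grid search with cross-validation mentioned at the end of the answer above; the depth grid and the iris dataset are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

param_grid = {'max_depth': [2, 3, 4, 5, 7, 10, None]}   # None lets the tree grow fully
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5, scoring='accuracy')
search.fit(X, y)

print(search.best_params_)   # depth with the best cross-validated accuracy
print(search.best_score_)
```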
0
49,290,127
0
1
0
0
1
false
1
2018-03-15T01:00:00.000
0
2
0
replace numbers with token if numbers have whitespace on both side
49,289,969
0
python,string,replace,whitespace
I also tried ' \d+ ' and that works! probably not "pythonic" though...
the code below replaces numbers with the token NUMB: raw_corpus.loc[:,'constructed_recipe']=raw_corpus['constructed_recipe'].str.replace('\d+','NUMB') It works fine if the numbers have a space before and a space after, but creates a problem if the numbers are included in another string. How do I modify the code so that it only replaces numbers with NUMB if the numbers are surrounded by a space on both sides? e.g. do not modify this string: "from url 500px", but do modify this string: "dishwasher 10 pods" to "dishwasher NUMB pods". I'm not sure how to modify '\d+' to make this happen. Any ideas?
0
1
415
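A minimal sketch for the question above, showing two patterns: word boundaries, which already leave "500px" alone, and whitespace lookarounds, which enforce the space-on-both-sides requirement literally. The example strings come from the question.

```python
import pandas as pd

s = pd.Series(['from url 500px', 'dishwasher 10 pods', '10 pods left'])

# \b\d+\b only matches numbers that stand alone as a "word",
# so the 500 inside "500px" is left untouched.
print(s.str.replace(r'\b\d+\b', 'NUMB', regex=True))

# Stricter: require whitespace (or the start/end of the string) on both sides.
print(s.str.replace(r'(?<!\S)\d+(?!\S)', 'NUMB', regex=True))
```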
0
49,598,792
0
0
0
0
2
false
0
2018-03-15T12:33:00.000
0
3
0
Tying weights in neural machine translation
49,299,609
0
python,deep-learning,recurrent-neural-network,pytorch,seq2seq
Did you check the code that kmario23 shared? It is written so that if the hidden size and the embedding size are not equal, an exception is raised. So this means that if you really want to tie the weights, you should decrease the hidden size of your decoder to 300. On the other hand, if you rethink your idea, what you really want to do is to eliminate the weight tying. Why? Because basically you want to use a transformation, which needs another matrix.
I want to tie weights of the embedding layer and the next_word prediction layer of the decoder. The embedding dimension is set to 300 and the hidden size of the decoder is set to 600. Vocabulary size of the target language in NMT is 50000, so embedding weight dimension is 50000 x 300 and weight of the linear layer which predicts the next word is 50000 x 600. So, how can I tie them? What will be the best approach to achieve weight tying in this scenario?
0
1
5,192
0
54,236,136
0
0
0
0
2
true
0
2018-03-15T12:33:00.000
3
3
0
Tying weights in neural machine translation
49,299,609
1.2
python,deep-learning,recurrent-neural-network,pytorch,seq2seq
You could use linear layer to project the 600 dimensional space down to 300 before you apply the shared projection. This way you still get the advantage that the entire embedding (possibly) has a non-zero gradient for each mini-batch but at the risk of increasing the capacity of the network slightly.
I want to tie weights of the embedding layer and the next_word prediction layer of the decoder. The embedding dimension is set to 300 and the hidden size of the decoder is set to 600. Vocabulary size of the target language in NMT is 50000, so embedding weight dimension is 50000 x 300 and weight of the linear layer which predicts the next word is 50000 x 600. So, how can I tie them? What will be the best approach to achieve weight tying in this scenario?
0
1
5,192
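A minimal PyTorch sketch of the projection idea in the answer above: a 600 to 300 linear layer sits between the decoder state and an output layer whose weight matrix is tied to the 300-dimensional embedding. The sizes follow the question; the tanh on the projection is an illustrative choice.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim = 50000, 300, 600

embedding = nn.Embedding(vocab_size, emb_dim)
down_project = nn.Linear(hidden_dim, emb_dim)               # 600 -> 300
output_layer = nn.Linear(emb_dim, vocab_size, bias=False)   # 300 -> vocab logits
output_layer.weight = embedding.weight                      # tie: both share the 50000 x 300 matrix

def decode_step(decoder_hidden):                 # decoder_hidden: (batch, 600)
    projected = torch.tanh(down_project(decoder_hidden))    # (batch, 300)
    return output_layer(projected)                          # (batch, vocab) logits

logits = decode_step(torch.randn(4, hidden_dim))
print(logits.shape)   # torch.Size([4, 50000])
```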
0
49,303,974
0
0
0
0
1
false
0
2018-03-15T12:42:00.000
0
1
0
Binary mask for output vector in Tensorflow
49,299,761
0
python,tensorflow,machine-learning,lstm,bitmask
You could use tf.boolean_mask on the softmax prediction output to remove the probabilities for inactive deals then get the maximum probabilities without them.
I want to recommend products by clickstream with LSTM in TensorFlow. I have historical user behaviour data using which I want to use to train model to recommend products (represented as classes on output) but I need to consider whether product was active in that moment on webpage(not to recommend inactive deals). Since I consider this very difficult using ground truth, I would like to use binary mask on output before it is compared to the target vector. Is there any native way to do this in TensorFlow?
0
1
376
0
49,313,978
0
1
0
0
1
true
1
2018-03-15T20:40:00.000
3
1
0
is there any way to not installing packages on Google Colab every time?
49,308,803
1.2
python,pip,google-colaboratory
No, there's currently no way for users to choose additional packages to install by default.
some packages like numpy are installed as default in Google Colab. is there any way to not installing new packages and make it default just like numpy?
0
1
936
0
58,624,383
0
0
0
0
1
false
12
2018-03-15T21:27:00.000
1
2
0
Dask Dataframe: Get row count?
49,309,523
0.099668
python,dataframe,dask
If you only need the number of rows, you can load a subset of the columns while selecting the columns with lower memory usage (such as category/integer and not string/object); thereafter you can run len(df.index).
Simple question: I have a dataframe in dask containing about 300 mln records. I need to know the exact number of rows that the dataframe contains. Is there an easy way to do this? When I try to run dataframe.x.count().compute() it looks like it tries to load the entire data into RAM, for which there is no space and it crashes.
0
1
12,792
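A minimal sketch of counting rows partition by partition and summing, which avoids materializing the whole frame in RAM; the CSV path is illustrative.

```python
import dask.dataframe as dd

df = dd.read_csv('big_file_*.csv')   # illustrative path; lazy, nothing is loaded yet

# Count rows per partition, then sum the per-partition counts.
n_rows = df.map_partitions(len).compute().sum()
print(n_rows)

# As the answer notes, len(df.index) is another option that only touches the index.
```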
0
49,311,725
0
0
0
0
1
false
0
2018-03-16T00:51:00.000
0
1
1
How to add your files across cluster on pyspark AWS
49,311,592
0
python,apache-spark,amazon-ec2,pyspark
Since you are in AWS already, it may be easier to just store your data files in s3, and open them directly from there.
I am new to spark. I am trying to read a file from my master instance but I am getting this error. After research I found out either you need to load data to hdfs or copy across clusters. I am unable to find the commands for doing either of these. --------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) in () ----> 1 ncols = rdd.first().features.size # number of columns (no class) of the dataset /home/ec2-user/spark/python/pyspark/rdd.pyc in first(self) 1359 ValueError: RDD is empty 1360 """ -> 1361 rs = self.take(1) 1362 if rs: 1363 return rs[0] /home/ec2-user/spark/python/pyspark/rdd.pyc in take(self, num) 1311 """ 1312 items = [] -> 1313 totalParts = self.getNumPartitions() 1314 partsScanned = 0 1315 /home/ec2-user/spark/python/pyspark/rdd.pyc in getNumPartitions(self) 2438 2439 def getNumPartitions(self): -> 2440 return self._prev_jrdd.partitions().size() 2441 2442 @property /home/ec2-user/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in call(self, *args) 1131 answer = self.gateway_client.send_command(command) 1132 return_value = get_return_value( -> 1133 answer, self.gateway_client, self.target_id, self.name) 1134 1135 for temp_arg in temp_args: /home/ec2-user/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw) 61 def deco(*a, **kw): 62 try: ---> 63 return f(*a, **kw) 64 except py4j.protocol.Py4JJavaError as e: 65 s = e.java_exception.toString() /home/ec2-user/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 317 raise Py4JJavaError( 318 "An error occurred while calling {0}{1}{2}.\n". --> 319 format(target_id, ".", name), value) 320 else: 321 raise Py4JError( Py4JJavaError: An error occurred while calling o122.partitions. 
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/ec2-user/PR_DATA_35.csv at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285) at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61) at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:280) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:214) at java.lang.Thread.run(Thread.java:748)
0
1
96
0
49,339,891
0
0
0
0
1
false
1
2018-03-17T17:05:00.000
1
2
0
Tracking cycles while adding random edges to a sparse graph
49,339,575
0.099668
python,graph,graph-algorithm,traversal,graph-traversal
Possible solution I came up with while in the shower. What I will do is maintain a list of size n, representing how many times each node has been on an edge. When I add an edge (i,j), I will increment list[i] and list[j]. If after an edge addition list[i] > 1 and list[j] > 1, I will do a DFS starting from that edge. I realized I don't need to BFS; I only need to DFS from the last added edge, and I only need to do it if it at least has the potential to be in a cycle (its nodes show up twice). I doubt it is optimal... maybe some kind of list of disjoint sets would be better. But this is way better than anything I was thinking of before.
Scenario: I have a graph, represented as a collection of nodes (0...n). There are no edges in this graph. To this graph, I connect nodes at random, one at a time. An alternative way of saying this would be that I add random edges to the graph, one at a time. I do not want to create simple cycles in this graph. Is there a simple and/or very efficient way to track the creation of cycles as I add random edges? With a graph traversal, it is easy, since we only need to track the two end nodes of a single path. But, with this situation, we have any number of paths that we need to track - and sometimes these paths combine into a larger path, and we need to track that too. I have tried several approaches, which mostly come down to maintaining a list of "outer nodes" and a set of nodes internal to them, and then when I add an edge going through it and updating it. But, it becomes extremely convoluted, especially if I remove an edge in the graph. I have attempted to search out algorithms or discussions on this, and I can't really find anything. I know I can do a BFS to check for cycles, but it's so so so horribly inefficient to BFS after every single edge addition.
0
1
59
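A minimal union-find (disjoint set) sketch of the idea raised at the end of the answer above: when a new random edge would join two nodes already in the same set, that edge would close a cycle, so it can be detected (or skipped) in near-constant time per edge. Note this handles additions only; edge deletions, which the question also mentions, need a different structure. The graph size and edge count are illustrative.

```python
import random

def make_dsu(n):
    parent = list(range(n))
    def find(x):                      # root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False              # already connected: this edge would close a cycle
        parent[rb] = ra
        return True
    return union

n = 10
union = make_dsu(n)
random.seed(0)
for _ in range(15):
    i, j = random.sample(range(n), 2)
    if union(i, j):
        print(f'added edge ({i}, {j})')
    else:
        print(f'skipped ({i}, {j}): it would create a cycle')
```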
0
49,357,921
0
0
0
0
1
false
0
2018-03-18T15:41:00.000
0
2
0
traffic density visualization in Python
49,349,788
0
python-3.x,data-visualization
It is difficult to say without any information about the structure of the data. Is it just points? Is it a shapefile? Probably you should start with geopandas....
I have a csv file with traffic density data per road segment of a certain high way, measured in Annual average daily traffic (AADT). Now I want to visualize this data. Since I have the locations (lat and lon) of the road segments, my idea is to create lines between these points and give it a color which relates to the AADT value. So suppose, road segments / lines with high AADT are marked red and low AADT are marked green. Which package should I use for this visualization?
0
1
253
0
49,357,236
0
0
0
0
1
true
0
2018-03-19T00:19:00.000
1
1
0
Can we train a Keras Model in Stages?
49,354,178
1.2
python,numpy,tensorflow,deep-learning,keras
Yes, you can, but the concept is not called "stages" but batches and it is the most common method to train neural networks. You just need to make a generator function that loads batches of your data one at a time and use model.fit_generator to start training it.
I have a huge NumPy matrix of dimension (1919090, 140, 37). Now it is not easy to fit something that large anywhere in the memory local or on a server. So I was thinking of splitting the NumPy matrix into smaller parts say of (19,000, 140, 37) and then training a Keras model on it. I store the model, then loading it again and continue training on the next matrix portion. I repeat this until the model is trained on all the 100 or so matrix bits. Is there a way of doing it?
0
1
61
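A minimal sketch of the generator approach from the answer above, assuming the big array has been saved to disk as chunked .npy files; the chunk naming scheme, batch size, model and steps_per_epoch are all illustrative (recent Keras versions accept a generator directly in model.fit, which replaces the older fit_generator).

```python
import glob

import numpy as np
import tensorflow as tf

def chunk_generator(pattern, batch_size=64):
    """Yield (x, y) batches from .npy chunks, loading one chunk at a time."""
    while True:                                   # Keras expects an endless generator
        for path in sorted(glob.glob(pattern)):
            chunk = np.load(path)                 # e.g. shape (19000, 140, 37)
            labels = np.load(path.replace('x_', 'y_'))   # hypothetical naming scheme
            for i in range(0, len(chunk), batch_size):
                yield chunk[i:i + batch_size], labels[i:i + batch_size]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(140, 37)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# steps_per_epoch should be total_rows // batch_size; illustrative number here.
model.fit(chunk_generator('x_chunk_*.npy'), steps_per_epoch=1000, epochs=3)
```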
0
49,361,454
0
0
0
0
1
false
2
2018-03-19T10:35:00.000
2
3
0
Sentiment Lexicon for stock market prediction
49,360,828
0.132549
python,machine-learning,nlp,nltk,sentiment-analysis
Not readily available, but trivial to build on your own. Simply download a sentiment annotated twitter dataset, construct a dictionary of words for it, iterate over the entries and add +1/(-1) to positive(/negative) words. Finally, divide each word's values by its respective occurrence count and you'll have a naive sentiment score for each word, with values close to 1(/-1) indicating strong sentiment charge, which you can use for your BoW task.
I am making a Stock Market Predictor machine learning application that will try to predict the price for a certain stock. It will take news articles/tweets regarding that particular company and the company's historical data for this reason. My issue is that I need to first construct a sentiment analyser for the headlines/tweets for that company. I dont want to train a model to give me the sentiment scores rather, I want a sentiment lexicon that contains a bag of words related to stock market and finance. Is there any such lexicons/dictionaries available that I can use in my project? Thanks
0
1
2,572
0
49,890,401
0
0
0
0
2
false
1
2018-03-19T12:37:00.000
4
2
0
I got a message when importing tensorflow in python
49,363,172
0.379949
python,tensorflow,anaconda
You could upgrade h5py to a more recent version. It worked for me. sudo pip3 install h5py==2.8.0rc1
When I import tensorflow in Python I get this error: C:\Users\Sathsara\Anaconda3\envs\tensorflow\Lib\site-packages\h5py__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters
0
1
1,143
0
49,363,229
0
0
0
0
2
false
1
2018-03-19T12:37:00.000
4
2
0
I got a message when importing tensorflow in python
49,363,172
0.379949
python,tensorflow,anaconda
It's not an error, it's just informing you that in future releases this feature or behaviour is going to change or no longer be available. This is important if you plan to reuse this code with different versions of Python and tensorflow.
When I import tensorflow in Python I get this error: C:\Users\Sathsara\Anaconda3\envs\tensorflow\Lib\site-packages\h5py__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters
0
1
1,143
0
70,389,655
0
1
0
0
2
false
25
2018-03-19T23:47:00.000
-4
4
0
Dependencies and packages conflicts in Anaconda?
49,374,217
-1
python-3.x,anaconda,packages
You can try using different conda environments. For example: conda create -n myenv Then you can activate your environment with: conda activate myenv and deactivate with: conda deactivate
I'm using Anaconda 5.1 and Python 3.6 on a Windows 10 machine. I'm having quite a few problems ; I tried to add some useful tools such as lightGBM, tensorflow, keras, bokeh,... to my conda environment but once I've used conda install -c conda-forge packagename on all of these, I end up having downgrading and upgrading of different packages that just mess with my installation and I can't use anything anymore after those installations. I wonder if it were possible to have multiples versions of packages & dependencies living alongside each other which won't kill my install? Sorry if my question seems noobish and thanks for your help, Nate
0
1
49,725
0
49,374,371
0
1
0
0
2
false
25
2018-03-19T23:47:00.000
6
4
0
Dependencies and packages conflicts in Anaconda?
49,374,217
1
python-3.x,anaconda,packages
You could try disabling transitive deps updates by passing --no-update-dependencies or --no-update-deps to conda install command. Ex: conda install --no-update-deps pandas.
I'm using Anaconda 5.1 and Python 3.6 on a Windows 10 machine. I'm having quite a few problems ; I tried to add some useful tools such as lightGBM, tensorflow, keras, bokeh,... to my conda environment but once I've used conda install -c conda-forge packagename on all of these, I end up having downgrading and upgrading of different packages that just mess with my installation and I can't use anything anymore after those installations. I wonder if it were possible to have multiples versions of packages & dependencies living alongside each other which won't kill my install? Sorry if my question seems noobish and thanks for your help, Nate
0
1
49,725
0
62,830,161
0
0
0
0
1
false
2
2018-03-20T18:21:00.000
0
3
0
Activation Function in Machine learning
49,391,576
0
python,math,machine-learning,calculus,sigmoid
Simply put, an activation function is a function that is added into an artificial neural network in order to help the network learn complex patterns in the data. When comparing with a neuron-based model that is in our brains, the activation function is at the end deciding what is to be fired to the next neuron. That is exactly what an activation function does in an ANN as well. It takes in the output signal from the previous cell and converts it into some form that can be taken as input to the next cell.
What is meant by an activation function in machine learning? I went through most of the articles and videos, and everyone states it or compares it with neural networks. I'm a newbie to machine learning and not that familiar with deep learning and neural networks. So, can anyone explain to me what exactly an activation function is, instead of explaining it with neural networks? I got stuck with this ambiguity while learning the sigmoid function for logistic regression.
0
1
317
0
49,401,566
0
1
0
0
1
false
0
2018-03-20T20:11:00.000
0
2
1
I installed tensorflow on mac and now I can't open Anaconda
49,393,300
0
python,macos,tensorflow,terminal,anaconda
I fixed the issue by downgrading to pip version 9.0.1. It appears Anaconda doesn't like pip version 9.0.2. I ran: pip install pip==9.0.1
I installed tensorflow on my mac and now I can't seem to open anaconda-navigator. When I launch the app, it appears in the dock but disappears quickly. When I launch anaconda-navigator the terminal I get the following error(s). KeyError: 'pip._vendor.urllib3.contrib'
0
1
155
0
49,500,435
0
0
0
0
1
true
0
2018-03-20T21:56:00.000
0
1
0
Portfolio Performance Attribution Metrics
49,394,773
1.2
python,performance,portfolio,metric,attribution
A reasonable intensive measure of how much a market instrument within a portfolio captured its potential would be the geometric difference between its IRR for the period and its annualized market return for the same period. For this you would need the cash flow amounts and dates into and out of the instrument, its actual opening and closing market value and the total market return of the component. If the instrument or its IRR-implied value is ever short at any time during the period, then you would need to employ an advanced version of IRR instead of the standard calculations provided in Excel. What is usually the more relevant is the extensive measure of the achieved controllable relative success of the management of a portfolio. It is best captured by a well-formulated value-based decision attribution analysis using daily data. Andre Mirabelli
I would like to incorporate a particular performance metric into my portfolio managing software. This metric should be one where I can measure "how much of the potential gains from the selected assets have been captured by the selected portfolio composition". Consider the following table reporting a portfolio's performance with key metrics between dates 2017/10 and 2018/03 netpeq: net $ profit gained over the period aroc: annualized rate of change in asset's price over the period cagr: compounded annualized growth of portfolio over the period I need a metric which penalizes divergence between cagr (or netpeq) and aroc. Namely, positive aroc says these asset could have produced growth (as in BA, MSFT, CSCO) but the portfolio manager failed to make money out of these or even lost money. I would like to measure the extent the portfolio manager missed to capture a. the growth potential due to each asset in the portfolio b. the overall growth potential w.r.t portfolio as a whole. +---------+----------+----------+---------+-------+--------+--------+--------+ | name | netpeq | draw | aroa | cagr | sharpe | rvalue | aroc | +---------+----------+----------+---------+-------+--------+--------+--------+ | BA | -555.71 | 3439.15 | -36.54 | -1.25 | -0.17 | 0.42 | 64.58 | +---------+----------+----------+---------+-------+--------+--------+--------+ | DWDP | 0 | 0 | 0 | 0 | 0 | 0 | -13.18 | +---------+----------+----------+---------+-------+--------+--------+--------+ | CAT | -447.66 | 1361.54 | -74.36 | -1.01 | -0.66 | -0.17 | 39.91 | +---------+----------+----------+---------+-------+--------+--------+--------+ | WMT | 363.25 | 448.09 | 183.34 | 0.82 | 1.1 | 0.66 | 4.73 | +---------+----------+----------+---------+-------+--------+--------+--------+ | UTX | 0 | 0 | 0 | 0 | 0 | 0 | 18.96 | +---------+----------+----------+---------+-------+--------+--------+--------+ | NKE | 690.34 | 498.24 | 313.36 | 1.57 | 1.21 | 0.84 | 67.19 | +---------+----------+----------+---------+-------+--------+--------+--------+ | VZ | -76 | 76 | -226.16 | -0.17 | -2.18 | -0.63 | 4.73 | +---------+----------+----------+---------+-------+--------+--------+--------+ | XOM | -272.87 | 555.36 | -111.12 | -0.62 | -0.65 | -0.46 | -18.69 | +---------+----------+----------+---------+-------+--------+--------+--------+ | GE | 0 | 0 | 0 | 0 | 0 | 0 | -85.61 | +---------+----------+----------+---------+-------+--------+--------+--------+ | MCD | 1025.63 | 731.44 | 317.12 | 2.33 | 1.09 | 0.64 | -6.02 | +---------+----------+----------+---------+-------+--------+--------+--------+ | CSCO | -313.88 | 313.88 | -226.16 | -0.71 | -1.81 | -0.39 | 75.23 | +---------+----------+----------+---------+-------+--------+--------+--------+ | JPM | 961.69 | 267.33 | 813.59 | 2.19 | 1.72 | 0.86 | 45.46 | +---------+----------+----------+---------+-------+--------+--------+--------+ | V | 3261.55 | 1969.88 | 374.46 | 7.53 | 1.76 | 0.9 | 31.18 | +---------+----------+----------+---------+-------+--------+--------+--------+ | GS | 0 | 0 | 0 | 0 | 0 | 0 | 24.24 | +---------+----------+----------+---------+-------+--------+--------+--------+ | HD | -32.32 | 960.59 | -7.61 | -0.07 | -0.06 | 0.09 | 20 | +---------+----------+----------+---------+-------+--------+--------+--------+ | PFE | 0 | 0 | 0 | 0 | 0 | 0 | 4.12 | +---------+----------+----------+---------+-------+--------+--------+--------+ | KO | 0 | 0 | 0 | 0 | 0 | 0 | -10.66 | +---------+----------+----------+---------+-------+--------+--------+--------+ | MMM | 0 | 0 | 0 | 0 | 0 | 0 | 17.01 | 
+---------+----------+----------+---------+-------+--------+--------+--------+ | DIS | 0 | 0 | 0 | 0 | 0 | 0 | 11.43 | +---------+----------+----------+---------+-------+--------+--------+--------+ | CVX | 357.2 | 1415.09 | 57.09 | 0.81 | 0.37 | 0.33 | -5.8 | +---------+----------+----------+---------+-------+--------+--------+--------+ | INTC | 1632.52 | 599.42 | 615.95 | 3.73 | 1.4 | 0.63 | 67.32 | +---------+----------+----------+---------+-------+--------+--------+--------+ | PG | -197.12 | 314.7 | -141.66 | -0.45 | -1.25 | -0.72 | -32.05 | +---------+----------+----------+---------+-------+--------+--------+--------+ | TRV | -348.86 | 348.86 | -226.16 | -0.79 | -1.55 | -0.79 | 26.49 | +---------+----------+----------+---------+-------+--------+--------+--------+ | MSFT | -205.86 | 680.29 | -68.44 | -0.46 | -0.42 | 0.25 | 47.6 | +---------+----------+----------+---------+-------+--------+--------+--------+ | AAPL | 0 | 0 | 0 | 0 | 0 | 0 | 28.32 | +---------+----------+----------+---------+-------+--------+--------+--------+ | JNJ | 17.55 | 64.8 | 61.25 | 0.04 | 0.33 | 0.43 | -7.55 | +---------+----------+----------+---------+-------+--------+--------+--------+ | AXP | -1366.89 | 1492.43 | -207.14 | -3.06 | -1.69 | -0.77 | 5.65 | +---------+----------+----------+---------+-------+--------+--------+--------+ | IBM | 0 | 0 | 0 | 0 | 0 | 0 | 20.59 | +---------+----------+----------+---------+-------+--------+--------+--------+ | UNH | 877.04 | 676.82 | 293.06 | 1.99 | 1.13 | 0.79 | 39.98 | +---------+----------+----------+---------+-------+--------+--------+--------+ | MRK | 0 | 0 | 0 | 0 | 0 | 0 | -27.88 | +---------+----------+----------+---------+-------+--------+--------+--------+ | RunPort | 5369.6 | 10091.44 | 120.34 | 12.56 | 0.65 | 0.73 | -1 | +---------+----------+----------+---------+-------+--------+--------+--------+
0
1
595
0
53,602,315
0
0
0
0
2
false
4
2018-03-22T03:47:00.000
2
2
0
Providing user defined sample weights for knn classifier in scikit-learn
49,420,191
0.197375
python,scikit-learn,knn,nearest-neighbor
KNN in sklearn doesn't have sample weights, unlike other estimators, e.g. DecisionTree. Personally speaking, I think it is a disappointment. It is not hard to make KNN support sample weights, since the predicted label is the majority vote of its neighbours. A crude workaround is to generate samples yourself based on the sample weight. E.g., if a sample has weight 2, then make it appear twice.
I am using the scikit-learn KNeighborsClassifier for classification on a dataset with 4 output classes. The following is the code that I am using: knn = neighbors.KNeighborsClassifier(n_neighbors=7, weights='distance', algorithm='auto', leaf_size=30, p=1, metric='minkowski') The model works correctly. However, I would like to provide user-defined weights for each sample point. The code currently uses the inverse of the distance for scaling using the metric='distance' parameter. I would like to continue to keep the inverse distance scaling but for each sample point, I have a probability weight as well. I would like to apply this as a weight in the distance calculation. For example, if x is the test point and y,z are the two nearest neighbors for which distance is being calculated, then I would like the distance to be calculated as (sum|x-y|)*wy and (sum|x-z|)*wz respectively. I tried to define a function that was passed into the weights argument but then I also would like to keep the inverse distance scaling in addition to the user defined weight and I do not know the inverse distance scaling function. I could not find an answer from the documentation. Any suggestions?
0
1
1,542
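A minimal sketch of the repetition workaround from the answer above: each training point is duplicated in proportion to its (integer-rounded) weight before fitting the standard KNeighborsClassifier. The data, weights and the scaling factor of 5 are illustrative; this only approximates continuous weights.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(100, 5)
y = np.random.randint(0, 4, size=100)
sample_weight = np.random.rand(100)               # e.g. per-sample probabilities

# Convert weights to small integer repeat counts (coarse approximation).
repeats = np.maximum(1, np.round(sample_weight * 5).astype(int))
X_rep = np.repeat(X, repeats, axis=0)
y_rep = np.repeat(y, repeats)

knn = KNeighborsClassifier(n_neighbors=7, weights='distance', p=1)
knn.fit(X_rep, y_rep)
print(knn.predict(X[:3]))
```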
0
63,655,345
0
0
0
0
2
false
4
2018-03-22T03:47:00.000
-1
2
0
Providing user defined sample weights for knn classifier in scikit-learn
49,420,191
-0.099668
python,scikit-learn,knn,nearest-neighbor
sklearn.neighbors.KNeighborsClassifier.score() has a sample_weight parameter. Is that what you're looking for?
I am using the scikit-learn KNeighborsClassifier for classification on a dataset with 4 output classes. The following is the code that I am using: knn = neighbors.KNeighborsClassifier(n_neighbors=7, weights='distance', algorithm='auto', leaf_size=30, p=1, metric='minkowski') The model works correctly. However, I would like to provide user-defined weights for each sample point. The code currently uses the inverse of the distance for scaling using the metric='distance' parameter. I would like to continue to keep the inverse distance scaling but for each sample point, I have a probability weight as well. I would like to apply this as a weight in the distance calculation. For example, if x is the test point and y,z are the two nearest neighbors for which distance is being calculated, then I would like the distance to be calculated as (sum|x-y|)*wy and (sum|x-z|)*wz respectively. I tried to define a function that was passed into the weights argument but then I also would like to keep the inverse distance scaling in addition to the user defined weight and I do not know the inverse distance scaling function. I could not find an answer from the documentation. Any suggestions?
0
1
1,542
0
49,432,623
0
1
0
0
1
true
0
2018-03-22T10:08:00.000
1
1
0
Python 3.6 matplotlib.pyplot shows graph immediately without letting me apply other functions?
49,425,805
1.2
python,matplotlib,spyder
try typing %matplotlib or %matplotlib qt before doing plt.hist(df.amount, bins=30). This will switch the console out of "inline" mode.
I am trying to build a plot using matplotlib.pyplot as plt. For example plt.hist(df.amount, bins = 30) But when I hit enter in the console it generates the graph. I want to apply xlim, ylim and title functions of plt but can't do this. Anyone familiar with this behavior? Should I change Spyder settings? Same behavior with Seaborn package.
0
1
44
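As an alternative to switching the console out of inline mode, the limits and title can be set on an explicit Axes object before the figure is shown. This is only an illustrative sketch; the stand-in DataFrame replaces the asker's df.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"amount": np.random.exponential(100, size=1000)})  # stand-in data

fig, ax = plt.subplots()
ax.hist(df.amount, bins=30)
ax.set_xlim(0, 500)                 # example limits, adjust to the data
ax.set_title("Amount distribution")
plt.show()
```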
0
49,430,423
0
0
0
0
1
false
1
2018-03-22T13:38:00.000
0
2
0
Is it possible to force Tensorflow to generate orthogonal matrix?
49,430,178
0
python,tensorflow
It should be possible. I see two solutions. If you don't care that the transformation is a perfect rotation, you can take the matrix, adjust it to what you think is a good matrix (make it a perfect rotation), then compute the difference between the one you like and the original and add it as a loss. With this approach you can push the model to do what you want, but the model might not converge on a perfect rotation, especially if a perfect rotation is not a very good solution. Another approach is to start with a proper rotation matrix and train the parameters of the rotation. In this case you would have the x,y,z of the rotation axis and the amount of rotation. So declare these as variables, compute the rotation matrix and use that in your model. The operations are differentiable, so gradient descent should work, but depending on the data itself you might depend on the starting position (so try a few times). Hard to say which one will work best in your case, so it might be worth trying both.
I'm using Tensorflow to generate a transformation matrix for a set of input vectors (X) to target vectors (Y). To minimize the error between the transformed input and the target vector samples I'm using a gradient descent algorithm. Later on I want to use the generated matrix to transform vectors coming from the same source as the training input vectors so that they look like the corresponding target vectors. Linear regression, pretty much, but with 3-dimensional targets. I can assume that the input and target vectors are in cartesian space. Thus, the transformation matrix should consist of a rotation and a translation. I'm working solely with unit vectors, so I can also safely assume that there's no translation, only rotation. So, in order to get a valid rotation matrix that I can turn into a rotation quaternion I understand that I have to make sure the matrix is orthogonal. Thus, the question is, is it possible to give Tensorflow some kind of constraint so that the matrix it tries to converge to is guaranteed to be orthogonal? Can be a parameter, a mathematical constraint, a specific optimizer, whatever. I just need to make sure the algorithm converges to a valid rotation matrix.
0
1
729
0
50,856,659
0
0
0
0
1
false
0
2018-03-22T15:10:00.000
0
1
0
Sharing class objects in Tensorlfow
49,432,175
0
python,tensorflow,lstm,rnn
You can share the final state, which is one of the outputs of, for example, dynamic_rnn: you get (outputs, cell_final_state), and that final state is what you can share. You probably know about cell_final_state.c and cell_final_state.h; you can set that final state as the initial state of another cell. Let me know if that answers your question. I just noticed it's quite an old question; hopefully this helps future readers.
TF has tf.variable_scope() that allows users to access tf.Variable() anywhere in the code. Basically every variable in TF is a global variable. Is there a similar way to access class objects like tf.nn.rnn_cell.LSTMCell() or tf.layers.Dense()? To be more specific, can i create a new class object, let's say lstm_cell_2 (used for prediction) that uses the same weights and biases in lstm_cell_1 (used during training). I am building an RNN to do language modeling. What i am doing right now is to return the lstm_cell_1 then pass it onto the prediction function. This works, but i want to in the end use separate tf.Graph() and tf.Session() for training, inference and prediction. Hence comes the problem of sharing Tensorflow objects. Also, my lstm_cell is an instance of tf.nn.rnn_cell.MultiRNNCell() which doesn't take a name argument. Thanks
0
1
27
0
49,451,732
0
0
0
0
2
false
7
2018-03-23T13:59:00.000
1
2
0
Q Learning Applied To a Two Player Game
49,451,366
0.099668
python,tic-tac-toe,reinforcement-learning,q-learning
Q-Learning is an algorithm from the MDP (Markov Decision Process) field, i.e. it learns by practically acting in a world, where each action changes the agent's state (with some probability). The algorithm is built on the premise that for any action the world gives feedback (a reaction), so Q-Learning works best when for any action there is a somewhat immediate and measurable reaction. In addition, this method looks at the world from one agent's perspective. My suggestion is to implement the other agent as part of the world, like a bot which plays with various strategies, e.g. random, best action, fixed layout, or even one whose logic is implemented as Q-learning itself. For looking n steps ahead and expanding all the states (so you can later pick the best one), you can use Monte-Carlo tree search if the state space is too large (as was done with Go). The Tic-Tac-Toe game is already solved: the player can achieve a win or a draw by following the optimal strategy, two optimal players will achieve a draw, and the full game tree is fairly easy to build.
I am trying to implement a Q Learning agent to learn an optimal policy for playing against a random agent in a game of Tic Tac Toe. I have created a plan that I believe will work. There is just one part that I cannot get my head around. And this comes from the fact that there are two players within the environment. Now, a Q Learning agent should act upon the current state, s, the action taken given some policy, a, the successive state given the action, s', and any reward received from that successive state, r. Lets put this into a tuple (s, a, r, s') Now usually an agent will act upon every state it finds itself encountered in given an action, and use the Q Learning equation to update the value of the previous state. However, as Tic Tac Toe has two players, we can partition the set of states into two. One set of states can be those where it is the learning agents turn to act. The other set of states can be where it is the opponents turn to act. So, do we need to partition the states into two? Or does the learning agent need to update every single state that is accessed within the game? I feel as though it should probably be the latter, as this might affect updating Q Values for when the opponent wins the game. Any help with this would be great, as there does not seem to be anything online that helps with my predicament.
0
1
2,280
0
49,451,735
0
0
0
0
2
true
7
2018-03-23T13:59:00.000
7
2
0
Q Learning Applied To a Two Player Game
49,451,366
1.2
python,tic-tac-toe,reinforcement-learning,q-learning
In general, directly applying Q-learning to a two-player game (or other kind of multi-agent environment) isn't likely to lead to very good results if you assume that the opponent can also learn. However, you specifically mentioned for playing against a random agent and that means it actually can work, because this means the opponent isn't learning / changing its behaviour, so you can reliably treat the opponent as ''a part of the environment''. Doing exactly that will also likely be the best approach you can take. Treating the opponent (and his actions) as a part of the environment means that you should basically just completely ignore all of the states in which the opponent is to move. Whenever your agent takes an action, you should also immediately generate an action for the opponent, and only then take the resulting state as the next state. So, in the tuple (s, a, r, s'), we have: s = state in which your agent is to move a = action executed by your agent r = one-step reward s' = next state in which your agent is to move again The state in which the opponent is to move, and the action they took, do not appear at all. They should simply be treated as unobservable, nondeterministic parts of the environment. From the point of view of your algorithm, there are no other states in between s and s', in which there is an opponent that can take actions. From the point of view of your algorithm, the environment is simply nondeterministic, which means that taking action a in state s will sometimes randomly lead to s', but maybe also sometimes randomly to a different state s''. Note that this will only work precisely because you wrote that the opponent is a random agent (or, more importantly, a non-learning agent with a fixed policy). As soon as the opponent also gains the ability to learn, this will break down completely, and you'd have to move on to proper multi-agent versions of Reinforcement Learning algorithms.
I am trying to implement a Q Learning agent to learn an optimal policy for playing against a random agent in a game of Tic Tac Toe. I have created a plan that I believe will work. There is just one part that I cannot get my head around. And this comes from the fact that there are two players within the environment. Now, a Q Learning agent should act upon the current state, s, the action taken given some policy, a, the successive state given the action, s', and any reward received from that successive state, r. Lets put this into a tuple (s, a, r, s') Now usually an agent will act upon every state it finds itself encountered in given an action, and use the Q Learning equation to update the value of the previous state. However, as Tic Tac Toe has two players, we can partition the set of states into two. One set of states can be those where it is the learning agents turn to act. The other set of states can be where it is the opponents turn to act. So, do we need to partition the states into two? Or does the learning agent need to update every single state that is accessed within the game? I feel as though it should probably be the latter, as this might affect updating Q Values for when the opponent wins the game. Any help with this would be great, as there does not seem to be anything online that helps with my predicament.
0
1
2,280
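A rough sketch of the accepted answer's idea: the opponent's (random) move is executed inside the environment step, so the learner only ever sees states in which it is to move. The helpers env_step and legal_actions are hypothetical placeholders, not part of the original post.

```python
import random
from collections import defaultdict

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def q_learning_episode(initial_state, env_step, legal_actions):
    s = initial_state
    done = False
    while not done:
        actions = legal_actions(s)
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        # env_step applies our move AND the opponent's random reply, returning
        # the next state in which it is our turn again (the opponent is part
        # of the environment, so its states never appear here).
        s_next, r, done = env_step(s, a)
        if done:
            target = r
        else:
            target = r + gamma * max(Q[(s_next, act)] for act in legal_actions(s_next))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next
```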
0
68,944,912
0
0
0
0
1
false
5
2018-03-23T17:59:00.000
2
2
0
GridSearchCV final model
49,455,806
0.197375
python,machine-learning,scikit-learn
This is given in sklearn: “The refitted estimator is made available at the best_estimator_ attribute and permits using predict directly on this GridSearchCV instance.” So, you don’t need to fit the model again. You can directly get the best model from best_estimator_ attribute
If I use GridSearchCV in scikit-learn library to find the best model, what will be the final model it returns? That said, for each set of hyper-parameters, we train the number of CV (say 3) models. In this way, will the function return the best model in those 3 models for the best setting of parameters?
0
1
5,388
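A small, self-contained illustration of the point above: after fit, best_estimator_ is already refit on the whole training set, so no extra fitting is needed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3, refit=True)
grid.fit(X, y)

best_model = grid.best_estimator_      # refit on the full training data
print(grid.best_params_)
print(best_model.predict(X[:5]))
print(grid.predict(X[:5]))             # predict also works on the search object itself
```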
0
49,622,289
0
0
0
0
1
false
1
2018-03-23T23:29:00.000
1
3
0
Tm1 to python to R
49,459,591
0.066568
python,r
It seems as if you only want to read data from TM1, so a "simple" MDX query should be fine. Have a look at the package "httr" for how to send POST requests. Then it's pretty straightforward to port the relevant parts from tm1py to R.
I would like to create a dashboard using R. However, all the data that I need to connect to is from TM1. The easiest way that I found is using a Python library called TM1py to connect to TM1 data. I would like to know what is the easiest way to access the TM1py library from R? Thanks
0
1
412
0
49,461,632
0
0
0
0
1
false
0
2018-03-24T03:43:00.000
0
3
0
How to change values in certain columns according to certain rule in pandas dataframe
49,460,990
0
python,pandas
You may need to use isnull(): df['col2'] = df['col2'].apply(lambda x: str(x)[0] if not pd.isnull(x) else x)
Suppose I have a pandas dataframe looks like this: col1 col2 0 A A60 1 B B23 2 C NaN The data from is read from a csv file. Suppose I want to change each non-missing value of 'col2' to its prefix (i.e. 'A' or 'B'). How could I do this without writing a for loop? The expected output is col1 col2 0 A A 1 B B 2 C NaN
0
1
489
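A runnable version of the answer above on the question's sample data, plus a vectorised alternative (df['col2'].str[0]) that leaves NaN untouched. The alternative is a suggestion, not from the original answer.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": ["A", "B", "C"], "col2": ["A60", "B23", np.nan]})

# apply/isnull approach from the answer
df["col2_apply"] = df["col2"].apply(lambda x: str(x)[0] if not pd.isnull(x) else x)
# vectorised alternative: .str indexing skips NaN automatically
df["col2_str"] = df["col2"].str[0]
print(df)
```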
0
49,472,921
0
0
0
0
1
true
1
2018-03-24T19:29:00.000
0
1
0
Tensoflow: Providing jacobians and hessians to ScipyOptimizerInterface
49,468,976
1.2
python,tensorflow
This is not directly supported by TensorFlow's ScipyOptimizerInterface, but you should be able to build a Hessian function that is passed through the interface and works around it. scipy.optimize.minimize expects a function that receives a candidate solution p (in the form of a 1D numpy vector) and returns the Hessian. Therefore, define a Python function that receives a numpy vector p and evaluates the hessian for the solution defined by p using sess.run(hess, feed_dict=feed_dict). hess is a tensor defined once by hess=tf.hessians(loss, variable) outside this function (so you won't define another tf.hessians operation with each function call). feed_dict is a dictionary with your tensorflow variable as a key and a numpy array p reshaped to fit your variable's shape as a value. In case you optimize more than one variable in parallel, dig into external_optimizer.py to see how it packs and unpacks multiple tensors into a single vector (look for _pack and _make_eval_func). Note that for most deep learning problems, calculating the Hessian is not practical because it's a matrix of dimensions size(variable) x size(variable), which is typically too large. Hence the LBFGS optimizer is a more feasible second-order optimizer.
I am trying out the different optimization methods of tf.contrib.opt.ScipyOptimizerInterface and some of them (e.g. trust-exact) require the hessian of the objective function. How can I use tf.hessians as hessian for tf.contrib.opt.ScipyOptimizerInterface? I tried to provide it with hess=tf.hessians(loss,variable) (which returns a list of tensors) but it needs a callable object. And if I input directly hess=tf.hessians I get TypeError: hessians() takes at least 2 arguments (1 given). If you have any examples of tf.gradients or tf.hessians used with ScipyOptimizerInterface it would already be super useful!
0
1
182
0
49,477,081
0
0
0
0
1
true
0
2018-03-25T07:54:00.000
0
1
0
How to choose a group of columns in a Dask Dataframe?
49,473,649
1.2
python,dataframe,dask
It was my mistake. I passed to the slicing operator a list of strings in a numpy array receiving a "not implemented error", passing a python list instead works correctly.
Is there a way to choose a group of columns in a dask dataframe? The slice df [['col_1', 'col_2']] does not seem to work.
0
1
1,478
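A tiny sketch of the fix described above: passing a plain Python list (not a numpy array of strings) to the slicing operator selects multiple columns in Dask.

```python
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "b", "c"], "col_3": [1.0, 2.0, 3.0]})
ddf = dd.from_pandas(pdf, npartitions=2)

subset = ddf[["col_1", "col_2"]]   # plain Python list of column names
print(subset.compute())
```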
0
52,206,018
0
0
0
0
1
false
0
2018-03-25T15:41:00.000
0
2
0
TensorFlow: ImportError: cannot import name 'dragon4_positional'
49,477,640
0
python,python-3.x,tensorflow
This looks like it is an issue with Numpy, which is a dependency of Tensorflow. Did you try upgrading your version of numpy using pip or conda? Like such: pip install --ignore-installed --upgrade numpy
I get the following error when trying to use tensorflow importError Traceback (most recent call last) in () ----> 1 import tensorflow as tf ~\Anaconda3\lib\site-packages\tensorflow__init__.py in () 22 23 # pylint: disable=wildcard-import ---> 24 from tensorflow.python import * 25 # pylint: enable=wildcard-import 26 ~\Anaconda3\lib\site-packages\tensorflow\python__init__.py in () 54 # imported using tf.load_op_library() can access symbols defined in 55 # _pywrap_tensorflow.so. ---> 56 import numpy as np 57 try: 58 if hasattr(sys, 'getdlopenflags') and hasattr(sys, 'setdlopenflags'): ~\Anaconda3\lib\site-packages\numpy__init__.py in () 140 return loader(*packages, **options) 141 --> 142 from . import add_newdocs 143 all = ['add_newdocs', 144 'ModuleDeprecationWarning', ~\Anaconda3\lib\site-packages\numpy\add_newdocs.py in () 11 from future import division, absolute_import, print_function 12 ---> 13 from numpy.lib import add_newdoc 14 15 ############################################################################### ~\Anaconda3\lib\site-packages\numpy\lib__init__.py in () 6 from numpy.version import version as version 7 ----> 8 from .type_check import * 9 from .index_tricks import * 10 from .function_base import * ~\Anaconda3\lib\site-packages\numpy\lib\type_check.py in () 9 'common_type'] 10 ---> 11 import numpy.core.numeric as _nx 12 from numpy.core.numeric import asarray, asanyarray, array, isnan, zeros 13 from .ufunclike import isneginf, isposinf ~\Anaconda3\lib\site-packages\numpy\core__init__.py in () 36 from . import numerictypes as nt 37 multiarray.set_typeDict(nt.sctypeDict) ---> 38 from . import numeric 39 from .numeric import * 40 from . import fromnumeric ~\Anaconda3\lib\site-packages\numpy\core\numeric.py in () 1818 1819 # Use numarray's printing function -> 1820 from .arrayprint import array2string, get_printoptions, set_printoptions 1821 1822 ~\Anaconda3\lib\site-packages\numpy\core\arrayprint.py in () 42 from .umath import absolute, not_equal, isnan, isinf, isfinite, isnat 43 from . import multiarray ---> 44 from .multiarray import (array, dragon4_positional, dragon4_scientific, 45 datetime_as_string, datetime_data, dtype, ndarray, 46 set_legacy_print_mode) This error occured after I tried to upgrade TF from version 1.1 to the latest version. So I dont know what current TF version I am using. I am using Windows 10 without a GPU. Do you know how to fix it?
0
1
1,449
0
51,741,148
0
1
0
0
1
false
0
2018-03-25T18:07:00.000
1
3
0
Unable to import cv2 OpenCV 2.4.13 in python 3.6
49,479,145
0.066568
python,opencv,computer-vision,anaconda
Try pip install opencv-python instead of pip install cv2. Although the name of the package changes, you can still import it as import cv2, It will work.
import cv2 On executing the above code, it shows the following error. Error: Traceback (most recent call last) in () ----> 1 import cv2 ImportError: DLL load failed: The specified module could not be found. Unable to import cv2 in python I have installed OpenCV 2.4.13 and Anaconda3 with python 3.6.4. OpenCV location:C:\Users\harsh\Anaconda3 Anaconda location:C:\Users\harsh\opencv. I have also added cv2.pyd in C:\Users\harsh\Anaconda3\Lib\site-packages.
0
1
2,644
0
49,957,782
0
0
0
0
1
true
0
2018-03-25T21:26:00.000
1
1
0
What is returned by scipy.io.wavefile.read()?
49,481,114
1.2
python-3.x,audio,scipy
wavfile.read() returns two things: data: This is the data from your wav file which is the amplitude of the audio taken at even intervals of time. sample rate: How many of those intervals make up one second of audio.
I have never worked with audio before. For a monophonic wav file read() returns an 1-D array of integers. What do these integers represent? Are they the frequencies? If not how do I use them to get the frequencies?
0
1
152
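An illustrative sketch of turning the amplitude samples returned by wavfile.read into frequency information via a Fourier transform. The file name is a placeholder and a monophonic file is assumed, as in the question.

```python
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("example.wav")   # rate: samples/second, data: amplitudes
spectrum = np.abs(np.fft.rfft(data))
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
print("dominant frequency (Hz):", freqs[np.argmax(spectrum)])
```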
0
49,485,045
0
0
0
0
1
true
0
2018-03-26T05:57:00.000
0
1
0
Text Categorization Test NLTK python
49,484,820
1.2
python,nltk,text-mining,naivebayes
Just saving the model will not help. You should also save your vectorizer (e.g. the TfidfVectorizer or CountVectorizer, whatever you used for fitting the train data). You can save it the same way using pickle. Also save any models you used for pre-processing the train data, like normalization/scaling models, etc. For the test data, repeat the same steps: load the pickled models that you saved, transform the test data into the same format you used for model building, and then you will be able to classify.
I have been using NLTK packages and trained a model using Naive Bayes. I have saved the model to a file using the pickle package. Now I wonder how I can use this model to test some random text that is not in the dataset, so that the model tells me which category the sentence belongs to. For example, my idea is that I have a sentence: "Ronaldo has scored 2 goals against Egypt", I pass it to the saved model, and it returns the category "sport".
0
1
147
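A hypothetical end-to-end sketch of the advice above: persist both the vectorizer and the classifier with pickle, then reuse them on unseen text. The training texts, labels and file name are invented for illustration.

```python
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["Ronaldo scored twice last night", "The election results were announced"]
train_labels = ["sport", "politics"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
clf = MultinomialNB().fit(X_train, train_labels)

with open("model.pkl", "wb") as f:
    pickle.dump({"vectorizer": vectorizer, "classifier": clf}, f)

with open("model.pkl", "rb") as f:
    bundle = pickle.load(f)
X_new = bundle["vectorizer"].transform(["Ronaldo has scored 2 goals against Egypt"])
print(bundle["classifier"].predict(X_new))
```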
0
49,493,806
0
0
0
0
1
true
2
2018-03-26T12:57:00.000
4
1
0
Spacy training multithread CPU usage
49,492,038
1.2
python,multithreading,nlp,spacy
The only things that are multi-threaded are the matrix multiplications, which in v2.0.8 are done via numpy, which delegates them to a BLAS library. Everything else is single-threaded. You should check what BLAS library your numpy is linked to, and also make sure that the library has been compiled appropriately for your machine. On my machine the numpy I install via pip comes with a copy of OpenBLAS that thinks my machine has a Prescott CPU. This prevents it from using AVX instructions. So if I install default numpy from pip on my machine, it runs 2-3x slower than it should. Another problem is that OpenBLAS might be launching more threads than it should. This seems especially common in containers. Finally, the efficiency of parallelism very much depends on batch-size. On small batches, the matrices are small and the per-update routines such as the Adam optimiser take more of the time. I usually disable multi-threading and train on a single core, because this is the most efficient (in the sense of dollars-for-work) --- I then have more models training as separate processes (usually on separate GCE VMs). When writing spaCy I haven't assumed that the goal is to use lots of cores. The goal is efficiency. It's not a virtue to use your whole machine to perform the same work that could be done on a single core. A lot of papers are very misleading in this respect. For instance, it might feel satisfying to launch 12 training processes across a cloud and optimize using an asynchronous SGD strategy such as Hogwild!. This is an efficient way to burn up a bunch of energy, but doesn't necessarily train your models any faster: using Adam and smaller batch sizes, training is more stable and often reaches the same accuracy in many fewer iterations. Similarly, we can make the network larger so the machines get their workout...But why? The goal is to train the model. Multiplying a bunch of matrices is a means, not an end. The problem I've been most concerned with is the terrible BLAS linkage situation. This will be much improved in v2.1, as we'll be bringing our own OpenBLAS kernel. The kernel will be single-threaded by default. A simple thing to try if you suspect your BLAS is bad is to try installing numpy using conda. That will give you a copy linked against intel's MKL library.
I'm training some models with my own NER pipe. I need to run spaCy in an LXC container so I can run it with Python 3.6 (which allows multi-threading during training). But of the 7 cores allocated to my container, only 1 runs at 100%; the others run at 40-60% (actually they start at 100% but decrease after a few minutes). I would really like to improve this core usage. Any idea where to look? Could it be a producer/consumer problem? Env: - spaCy version 2.0.8 - Location /root/.env/lib/python3.6/site-packages/spacy - Platform Linux-3.14.32-xxxx-grs-ipv6-64-x86_64-with-debian-buster-sid - Python version 3.6.4
0
1
2,172
0
51,396,901
0
0
0
0
1
false
10
2018-03-26T14:00:00.000
0
3
0
Random Forest Regressor using a custom objective/ loss function (Python/ Sklearn)
49,493,331
0
python-3.x,scikit-learn,random-forest,statsmodels,poisson
If the problem is that the counts c_i arise from different exposure times t_i, then indeed one cannot fit the counts, but one can still fit the rates r_i = c_i/t_i using MSE loss function, where one should, however, use weights proportional to the exposures, w_i = t_i. For a true Random Forest Poisson regression, I've seen that in R there is the rpart library for building a single CART tree, which has a Poisson regression option. I wish this kind of algorithm would have been imported to scikit-learn.
I want to build a Random Forest Regressor to model count data (Poisson distribution). The default 'mse' loss function is not suited to this problem. Is there a way to define a custom loss function and pass it to the random forest regressor in Python (Sklearn, etc..)? Is there any implementation to fit count data in Python in any packages?
0
1
10,061
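A sketch of the workaround described above: fit rates r_i = c_i / t_i with the default MSE loss, weighting each sample by its exposure t_i. The data here is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
exposure = rng.uniform(0.5, 2.0, size=200)               # t_i
counts = rng.poisson(lam=exposure * (1 + 5 * X[:, 0]))   # c_i
rates = counts / exposure                                 # r_i

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, rates, sample_weight=exposure)
print(model.predict(X[:5]) * exposure[:5])   # back to expected counts
```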
0
49,507,909
0
0
0
1
1
false
3
2018-03-26T22:15:00.000
2
1
0
Is there a function in xlsxwriter that lets you sort a column?
49,501,501
0.379949
python,xlsxwriter
Sorting isn't a feature of the xlsx file format. It is something Excel does at runtime, so it isn't something XlsxWriter can replicate. A workaround would be to sort your data using Python before you write it.
I was wondering if there was a function in xlsxwriter that lets you sort the contents in the column from greatest to least or least to greatest? thanks!
0
1
847
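A minimal sketch of the suggested workaround: sort the rows in Python first, then write the already-sorted data with XlsxWriter. The file name, columns and sort key are illustrative.

```python
import xlsxwriter

rows = [("banana", 7), ("apple", 12), ("cherry", 3)]
rows.sort(key=lambda r: r[1], reverse=True)   # greatest to least

workbook = xlsxwriter.Workbook("sorted.xlsx")
worksheet = workbook.add_worksheet()
for i, (name, value) in enumerate(rows):
    worksheet.write(i, 0, name)
    worksheet.write(i, 1, value)
workbook.close()
```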
0
52,691,745
0
0
0
0
1
false
22
2018-03-27T03:06:00.000
2
6
0
Save and load model optimizer state
49,503,748
0.066568
python,tensorflow,machine-learning,keras
upgrading Keras to 2.2.4 and using pickle solved this issue for me. with keras release 2.2.3 Keras models can now be safely pickled.
I have a set of fairly complicated models that I am training and I am looking for a way to save and load the model optimizer states. The "trainer models" consist of different combinations of several other "weight models", of which some have shared weights, some have frozen weights depending on the trainer, etc. It is a bit too complicated of an example to share, but in short, I am not able to use model.save('model_file.h5') and keras.models.load_model('model_file.h5') when stopping and starting my training. Using model.load_weights('weight_file.h5') works fine for testing my model if the training has finished, but if I attempt to continue training the model using this method, the loss does not come even close to returning to its last location. I have read that this is because the optimizer state is not saved using this method which makes sense. However, I need a method for saving and loading the states of the optimizers of my trainer models. It seems as though keras once had a model.optimizer.get_sate() and model.optimizer.set_sate() that would accomplish what I am after, but that does not seem to be the case anymore (at least for the Adam optimizer). Are there any other solutions with the current Keras?
0
1
21,170
0
49,528,355
0
0
0
0
2
false
0
2018-03-27T04:15:00.000
1
2
0
clustering in python without number of clusters or threshold
49,504,271
0.099668
python,cluster-analysis
Clustering is an explorative technique. This means it must always be able to produce different results, as desired by the user. Having many parameters is a feature: it means the method can be adapted easily to very different data, and to user preferences. There will never be a generally useful parameter-free technique. At best, some parameters will have default values or heuristics (such as Euclidean distance, standardizing the input prior to clustering, or the gap statistic for choosing k) that may give a reasonable first try in 80% of cases. But after that first try, you'll need to understand the data and try other parameters to learn more about your data. Methods that claim to be "parameter free" usually just have some hidden parameters set so that they work on the few toy examples they were demonstrated on.
Is it possible to do clustering without providing any input apart from the data? The clustering method/algorithm should decide from the data how many logical groups the data can be divided into; it shouldn't even require me to input the threshold Euclidean distance on which the clusters are built, as this also needs to be learned from the data. Could you please suggest the closest solution to my problem?
0
1
231
0
49,504,632
0
0
0
0
2
true
0
2018-03-27T04:15:00.000
1
2
0
clustering in python without number of clusters or threshold
49,504,271
1.2
python,cluster-analysis
Why not code your algorithm to create a list of clusters ranging from size 1 to n (which could be defined in a config file so that you can avoid hard coding and just fix it once). Once that is done, compute the clusters of size 1 to n. Choose the value which gives you the smallest Mean Square Error. This would require some additional work by your machine to determine the optimal number of logical groups the data can be divided (bounded between 1 and n).
Is it possible to do clustering without providing any input apart from the data? The clustering method/algorithm should decide from the data how many logical groups the data can be divided into; it shouldn't even require me to input the threshold Euclidean distance on which the clusters are built, as this also needs to be learned from the data. Could you please suggest the closest solution to my problem?
0
1
231
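An illustrative sketch of the idea in the accepted answer: fit k-means for a range of cluster counts and inspect the error curve to choose k. The synthetic data and the range of k are assumptions for the example.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
for k in range(1, 8):
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    print(k, km.inertia_)   # look for the point where the decrease flattens out
```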
0
49,514,082
0
0
0
0
1
true
1
2018-03-27T12:31:00.000
6
1
0
Is Tensorflow worth using for simple optimization problems?
49,512,935
1.2
python,tensorflow
This is somewhat opinion based, but Tensorflow and similar frameworks such as PyTorch are useful when you want to optimize an arbitrary, parameter-rich non-linear function (e.g., a deep neural network). For a 'standard' statistical model, I would use code that was already tailored to it instead of reinventing the wheel. This is true especially when there are closed-form solutions (as in linear least squares) - why go into to the murky water of local optimization when you don't have to? Another advantage of using existing statistical libraries is that they usually provide you with measures of uncertainty about your point estimates. I see one potential case in which you might want to use Tensorflow for a simple linear model: when the number of variables is so big the model can't be estimated using closed-form approaches. Then gradient descent based optimization makes sense, and tensorflow is a viable tool for that.
I have started learning Tensorflow recently and I am wondering if it is worth using in simple optimization problems (least squares, maximum likelihood estimation, ...) instead of more traditional libraries (scikit-learn, statsmodel)? I have implemented a basic AR model estimator using Tensorflow with MLE and the AdamOptimizer and the results are not convincing either performance or computation speed wise. What do you think?
0
1
343
0
49,514,624
0
1
0
0
1
false
1
2018-03-27T13:39:00.000
1
2
0
Excel function IFERROR(value, value_if_error). Does it have a python equivalent?
49,514,486
0.099668
python,pandas
Not sure what you mean by an error in data in Python. Do you mean NA? Then try the fillna function in pandas.
Does python have a function similar to the excel function IFERROR(value, value_if_error) Can I use np.where? Many Thanks
0
1
4,595
0
67,007,173
0
0
0
0
1
false
1
2018-03-27T13:42:00.000
0
1
0
Calling R function with se.fit parameter with rpy2 from Python
49,514,535
0
python,r,prediction,rpy2
In Python you cannot use "." as part of a keyword argument name; replace "." with "_", so se_fit should work.
I need to call the R function predict(fit_hs, type="quantile", se.fit=True, p=0.5) where predict refers to survreg in library survival. It gives an error about the se.fit parameter saying it's a keyword that can't be used. Could you please help finding a way to call this R function from Python?
0
1
54
0
55,154,657
0
0
0
0
1
false
14
2018-03-28T11:02:00.000
-1
2
0
Dask dataframe split partitions based on a column or function
49,532,824
-0.099668
python,pandas,dataframe,dask,dask-distributed
Setting the index to the required column and using map_partitions works much more efficiently than groupby.
I have recently begun looking at Dask for big data. I have a question on efficiently applying operations in parallel. Say I have some sales data like this: customerKey productKey transactionKey grossSales netSales unitVolume volume transactionDate ----------- -------------- ---------------- ---------- -------- ---------- ------ -------------------- 20353 189 219548 0.921058 0.921058 1 1 2017-02-01 00:00:00 2596618 189 215015 0.709997 0.709997 1 1 2017-02-01 00:00:00 30339435 189 215184 0.918068 0.918068 1 1 2017-02-01 00:00:00 32714675 189 216656 0.751007 0.751007 1 1 2017-02-01 00:00:00 39232537 189 218180 0.752392 0.752392 1 1 2017-02-01 00:00:00 41722826 189 216806 0.0160143 0.0160143 1 1 2017-02-01 00:00:00 46525123 189 219875 0.469437 0.469437 1 1 2017-02-01 00:00:00 51024667 189 215457 0.244886 0.244886 1 1 2017-02-01 00:00:00 52949803 189 215413 0.837739 0.837739 1 1 2017-02-01 00:00:00 56526281 189 220261 0.464716 0.464716 1 1 2017-02-01 00:00:00 56776211 189 220017 0.272027 0.272027 1 1 2017-02-01 00:00:00 58198475 189 215058 0.805758 0.805758 1 1 2017-02-01 00:00:00 63523098 189 214821 0.479798 0.479798 1 1 2017-02-01 00:00:00 65987889 189 217484 0.122769 0.122769 1 1 2017-02-01 00:00:00 74607556 189 220286 0.564133 0.564133 1 1 2017-02-01 00:00:00 75533379 189 217880 0.164387 0.164387 1 1 2017-02-01 00:00:00 85676779 189 215150 0.0180961 0.0180961 1 1 2017-02-01 00:00:00 88072944 189 219071 0.492753 0.492753 1 1 2017-02-01 00:00:00 90233554 189 216118 0.439582 0.439582 1 1 2017-02-01 00:00:00 91949008 189 220178 0.1893 0.1893 1 1 2017-02-01 00:00:00 91995925 189 215159 0.566552 0.566552 1 1 2017-02-01 00:00:00 I want to do a few different groupbys, first a groupby-apply on customerKey. Then another groupby-sum on customerKey, and a column which will be the result of the previos groupby apply. The most efficient way I can think of doing this would be do split this dataframe into partitions of chunks of customer keys. So, for example I could split the dataframe into 4 chunks with a partition scheme for example like (pseudocode) partition by customerKey % 4 Then i could use map_partitions to do these group by applies for each partition, then finally returning the result. However it seems dask forces me to do a shuffle for each groupby I want to do. Is there no way to repartition based on the value of a column? At the moment this takes ~45s with 4 workers on a dataframe of only ~80,000 rows. I am planning to scale this up to a dataframe of trillions of rows, and already this seems like it is going to scale horribly. Am I missing something fundamental to Dask?
0
1
9,214
0
49,542,244
0
0
0
1
1
false
2
2018-03-28T17:49:00.000
0
3
0
(Sql + Python) df array to string with single quotes?
49,541,070
0
python,sql,arrays,string,quote
Please try this and verify if it helps- sql = "SELECT * FROM database WHERE list IN (%s)" % ",".join(map(myString,List1))
Goal: how to convert (111, 222, 333) to ('111', '222', '333') for an sql query in Python? What I have done so far: I am calling a csv file to a df: dataset = pd.read_csv('simple.csv') print(dataset) LIST 0 111 1 222 2 333 List11 = dataset.LIST.apply(str) print(List1) 0 111 1 222 2 333 Name: OPERATION, dtype: object myString = ",".join(List1) print(myString) 111,222,333 sql = "SELECT * FROM database WHERE list IN (%s)" %myString This does not work. Could you please help?
0
1
1,305
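A hedged sketch of the quoting the question asks for. Note that for real queries, parameterised placeholders from the database driver are safer than building SQL strings by hand; the table and column names here are made up.

```python
values = [111, 222, 333]
quoted = ", ".join("'{}'".format(v) for v in values)
sql = "SELECT * FROM my_table WHERE my_col IN ({})".format(quoted)
print(sql)   # SELECT * FROM my_table WHERE my_col IN ('111', '222', '333')
```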
0
51,157,225
0
0
0
0
2
false
27
2018-03-29T07:20:00.000
-4
9
0
Keras rename model and layers
49,550,182
-1
python,keras
For 1), I think you can build another model with the right name and the same structure as the existing one, then set the weights of the new model's layers from the corresponding layers of the existing model.
1) I try to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script. Class Model seems to have the property model.name, but when changing it I get "AttributeError: can't set attribute". What is the problem here? 2) Additionally, I am using the Sequential API and I want to give a name to layers, which seems to be possible with the Functional API, but I found no solution for the Sequential API. Does anyone know how to do it for the Sequential API? UPDATE TO 2): Naming the layers works, although it seems to be undocumented. Just add the argument name, e.g. model.add(Dense(..., ..., name="hiddenLayer1")). Watch out, layers with the same name share weights!
0
1
40,090
0
63,853,924
0
0
0
0
2
false
27
2018-03-29T07:20:00.000
10
9
0
Keras rename model and layers
49,550,182
1
python,keras
To rename a keras model in TF2.2.0: model._name = "newname" I have no idea if this is a bad idea - they don't seem to want you to do it, but it does work. To confirm, call model.summary() and you should see the new name.
1) I try to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script. Class Model seems to have the property model.name, but when changing it I get "AttributeError: can't set attribute". What is the problem here? 2) Additionally, I am using the Sequential API and I want to give a name to layers, which seems to be possible with the Functional API, but I found no solution for the Sequential API. Does anyone know how to do it for the Sequential API? UPDATE TO 2): Naming the layers works, although it seems to be undocumented. Just add the argument name, e.g. model.add(Dense(..., ..., name="hiddenLayer1")). Watch out, layers with the same name share weights!
0
1
40,090
0
49,551,180
0
0
0
0
1
false
1
2018-03-29T07:51:00.000
0
1
0
When is it safe to cache tf.Tensors?
49,550,723
0
python,tensorflow
TL;DR: TF already caches what it needs to, don't bother with it yourself. Every time you call sess.run([some_tensors]) TF's engine finds the minimum subgraph needed to compute all tensors in [some_tensors] and runs it from top to bottom (possibly on new data, if you're not feeding it the same data). That means caching of results in-between sess.run calls is useless towards saving computation, because they will be recomputed anyway. If, instead, you're concerned with having multiple tensors using the same data as input in one call of sess.run, don't worry, TF is smart enough: if you have input A and B = 2*A, C = A + 1, as long as you do one sess.run call as sess.run([B,C]), A will be evaluated only once (and then implicitly cached by the TF engine).
Let's say we have some method foo we call during graph construction time that returns some tf.Tensors or a nested structure of them every time is called, and multiple other methods that make use of foo's result. For efficiency and to avoid spamming the TF graph with unnecessary repeated operations, it might be tempting to make foo cache its result (to reuse the subgraph it produces) the first time is called. However, that will fail if foo is ever used in the context of a control flow, like tf.cond, tf.map_fn or tf.while_loop. My questions are: When is it safe to cache tf.Tensor objects in such a way that does not cause problems with control flows? Perhaps is there some way to retrieve the control flow under which a tf.Tensor was created (if any), store it and compare it later to see if a cached result can be reused? How would the answer to the question above apply to tf.Operations? (Question text updated to make clearer that foo creates a new set of tensors every time is called)
0
1
321
0
49,553,542
0
0
0
0
1
false
1
2018-03-29T10:09:00.000
0
3
0
Python Pandas DataFrames Sorting, Summing and Fetching Max Data
49,553,357
0
python,python-3.x,pandas
The best thing to do when you are learning is to try it. It's very unlikely your data will be too large (there aren't millions of car models), but in any case, you can use df.head(N) to take the top N rows, try your method on them and see if it's slow. Other useful functions include df.groupby, df.nlargest and df.sort_values.
I have just started learning Python, Pandas and NumPy and I want to find out what is the cleanest and most efficient way to solve the following problem. I have data which holds CarManufacturer, Car, TotalCarSales, bearing in mind that the data is not small: CarManufacturer Car TotalCarSales Volkswagen Polo 100 Volkswagen Golf 50 Honda Jazz 40 Honda Civic 100 Question: Which manufacturer sold the most cars according to it's top 3 best sellers? I'm struggling to solve this efficiently. I want to avoid iterating over the data. My thoughts: - Load Data into DataFrame - Index data according to CarManufacturer, Car, TotalCarSales - Do I want to do a sort here? That would be slow? - Create a new DataFrame which has CarManufacturer, TotalSales. For each CarManufacturer I would need to get the top 3 TotalCarSales and take their sum - Is there a way of doing this without iterating over all records in DataFrame? What is best way to fetch the top 3? - Then if I sort the TotalSales and take the top 3, wouldn't the sort be slow? Is there a more efficient way?
0
1
91
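A small sketch of the groupby/nlargest approach mentioned above, using the sample data from the question: sum each manufacturer's top 3 sellers and take the maximum.

```python
import pandas as pd

df = pd.DataFrame({
    "CarManufacturer": ["Volkswagen", "Volkswagen", "Honda", "Honda"],
    "Car": ["Polo", "Golf", "Jazz", "Civic"],
    "TotalCarSales": [100, 50, 40, 100],
})

top3_sums = (df.groupby("CarManufacturer")["TotalCarSales"]
               .apply(lambda s: s.nlargest(3).sum()))
print(top3_sums)
print("best manufacturer:", top3_sums.idxmax())
```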
0
49,565,369
0
0
0
0
1
true
1
2018-03-29T15:54:00.000
0
1
0
Random crop and bounding boxes in tensorflow
49,560,347
1.2
python,tensorflow
You can get the shape, but only at runtime - when you call sess.run and actually pass in the data - that's when the shape is actually defined. So do the random crop manually in tensorflow; basically, you want to reimplement tf.random_crop so you can handle the manipulations to the bounding boxes. First, to get the shape, x = your_tensor.shape[0] will give you the first dimension. It will appear as None until you actually call sess.run, then it will resolve to the appropriate value. Now you can compute some random crop parameters using tf.random_uniform or whatever method you like. Lastly you perform the crop with tf.slice. If you want to choose whether to perform the crop or not you can use tf.cond. Between those components, you should be able to implement what you want using only tensorflow constructs. Try it out and if you get stuck along the way post the code and error you run into.
I want to add a data augmentation on the WiderFace dataset and I would like to know, how is it possible to random crop an image and only keep the bouding box of faces with the center inside the crop using tensorflow ? I have already try to implement a solution but I use TFRecords and the TfExampleDecoder and the shape of the input image is set to [None, None, 3] during the process, so no way to get the shape of the image and do it by myself.
0
1
972
0
49,562,017
0
0
0
0
1
false
5
2018-03-29T17:21:00.000
1
2
0
How to choose RandomState in train_test_split?
49,561,882
0.099668
python,pandas,machine-learning,scikit-learn,svm
For me personally, I set random_state to a specific number (usually 42) so if I see variation in my programs accuracy I know it was not caused by how the data was split. However, this can lead to my network over fitting on that specific split. I.E. I tune my network so it works well with that split, but not necessarily on a different split. Because of this, I think it's best to use a random seed when you submit your code so the reviewer knows you haven't over fit to that particular state. To do this with sklearn.train_test_split you can simply not provide a random_state and it will pick one randomly using np.random.
I understand how random state is used to randomly split data into training and test set. As Expected, my algorithm gives different accuracy each time I change it. Now I have to submit a report in my university and I am unable to understand the final accuracy to mention there. Should I choose the maximum accuracy I get? Or should I run it with different RandomStates and then take its average? Or something else?
0
1
5,696
0
49,571,075
0
1
0
0
1
false
1
2018-03-29T20:31:00.000
0
1
0
ImportError with dask.distributed
49,564,542
0
python-3.x,importerror,dask-distributed
I resolved the error, there were some outdated packages including imageio which needed upgrade to work with dask.distributed and dask.dataframe.
I am trying to import dask.distributed package, but I keep getting this error: ImportError: cannot import name 'collections_to_dsk'. Help is appreciated.
0
1
311
0
49,902,116
0
0
0
0
2
false
0
2018-03-30T07:13:00.000
0
2
0
CountVectorizer in Python
49,570,046
0
python,tf-idf,text-classification,countvectorizer,tfidfvectorizer
You could easily just concatenate these matrices and other feature columns to build one very large matrix. However, be aware that concatenating the matrix from email body and email subject will probably create an incredibly sparse matrix. When you then add other features you might risk to "water down" your other features. This depends mostly on the algorithm you choose to use for your prediction. In all cases you would benefit from reducing the dimensionality of the two matrices for email subject and body or directly use a different approach than CountVectorizer - for example TFIDF.
I am working on a problem in which I have to predict whether a sent email from a company is opened or not and, if it is opened, whether the recipient clicked on the given link or not. I have a data set with the following features: Total links inside the email, Total internal links inside the email, Number of images inside the email, Number of sections inside the email, Email_body, Email Subject. For the email body and subject I can use a CountVectorizer, but how can I include my other features in the sparse matrix created by said CountVectorizer?
0
1
82
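A sketch of the concatenation idea above: stack the sparse text matrix and the dense numeric columns with scipy.sparse.hstack. The example texts and numeric features are invented.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer

bodies = ["click the link below", "monthly report attached"]
numeric_features = np.array([[5, 2], [1, 0]])   # e.g. total links, number of images

vec = CountVectorizer()
X_text = vec.fit_transform(bodies)
X_all = hstack([X_text, csr_matrix(numeric_features)]).tocsr()
print(X_all.shape)
```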
0
49,936,932
0
0
0
0
2
false
0
2018-03-30T07:13:00.000
0
2
0
CountVectorizer in Python
49,570,046
0
python,tf-idf,text-classification,countvectorizer,tfidfvectorizer
Your problem is you have two large sparse feature vectors (email body and subject) and also small dense feature vectors. Here is my simple suggestion: (Jerome's idea) Reduce the dimension of email body and subject (via PCA, AutoEncoder, CBOW, Doc2Vec, PLSA, or LDA) so that you will end up with a dense feature vector. Then, concatenate it with other meta information. I think concatenating the matrix with other features are okay. If you use a simple linear model, you can put more weights on meta information and scale down all weights learned from subject and body of the emails. The real problem is when you use a bag-of-word representation (either term-frequency or TFIDF), your feature vector will be extremely sparse for a very short email. The model might not perform well. By the way, I think the author information could be a good indicator whether the email will be opened or not.
I am working on a problem in which I have to predict whether a sent email from a company is opened or not and, if it is opened, whether the recipient clicked on the given link or not. I have a data set with the following features: Total links inside the email, Total internal links inside the email, Number of images inside the email, Number of sections inside the email, Email_body, Email Subject. For the email body and subject I can use a CountVectorizer, but how can I include my other features in the sparse matrix created by said CountVectorizer?
0
1
82
0
49,577,047
0
1
0
0
1
true
1
2018-03-30T14:45:00.000
1
2
0
How to compare date (yyyy-mm-dd) with year-Quarter (yyyyQQ) in python
49,576,487
1.2
sql,python-3.x,pandas
Once you have the month in a variable mon, you can get the quarter with (mon - 1)//3 + 1, which returns: 1 for months 1-3, 2 for months 4-6, 3 for months 7-9, and 4 for months 10-12.
I am writing a sql query using pandas within python. In the where clause I need to compare a date column (say review date 2016-10-21) with this value '2016Q4'. In other words if the review dates fall in or after Q4 in 2016 then they will be selected. Now how do I convert the review date to something comparable to 'yyyyQ4' format. Is there any python function for that ? If not, how so I go about writing one for this purpose ?
0
1
798
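A pandas-based alternative sketch (not from the original answer): convert the date column to quarterly periods and compare against the target quarter directly.

```python
import pandas as pd

df = pd.DataFrame({"review_date": pd.to_datetime(["2016-10-21", "2016-06-01", "2017-01-15"])})
target = pd.Period("2016Q4")

df["review_quarter"] = df["review_date"].dt.to_period("Q")
print(df[df["review_quarter"] >= target])
```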
0
49,577,580
0
0
1
0
1
false
0
2018-03-30T14:47:00.000
2
1
0
Quick way to classify if an image contains text or not
49,576,528
0.379949
python,classification,ocr,tesseract,text-extraction
Unfortunately there is no way to tell if an image has text in it, without performing OCR of some kind on it. You could build a machine learning model that handles this, however keep in mind it would still need to process the image as well.
I have millions of images, and I am able to use OCR with pytesseract to perform descent text extraction, but it takes too long to process all of the images. Thus I would like to determine if an image simply contains text or not, and if it doesn't, i wouldn't have to perform OCR on it. Ideally this method would have a high recall. I was thinking about building a SVM or some machine learning model to help detect, but I was hoping if anyone new of a method to quickly determine if an object contains text or not.
0
1
285
0
49,597,211
0
0
1
0
2
true
3
2018-03-31T04:10:00.000
0
2
0
Measurement for intersection of 2 irregular shaped 3d object
49,584,153
1.2
python,3d,computational-geometry,bin-packing
A sample-based approach is what I'd try first. Generate a bunch of points in the unioned bounding AABB, and divide the number of points in A and B by the number of points in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a given point is in a given volume, use a crossing number test, which Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples necessary to benefit overall from building the acceleration structure. As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.
I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar(not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate. Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively I can estimate volume with a sample based method that sample points inside one object and test the percentage of points that exist in another object. But I don't know how computational expensive it is to sample points inside a complex 3d shape as well as to test if a point is enclosed by such a shape. I will really appreciate any advices, codes, or equations on this matter. Also if you can suggest any libraries (preferably python library) that accept .obj, .ply...etc files and perform 3D geometry computation that will be great! I will also post here if I find out a good method. Update: I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric points sampling within one mesh and test points containment within another mesh. I found surface point sampling and containment testing(sort of surface intersection) and the grid approach to be the fastest.
0
1
1,361
0
49,688,037
0
0
1
0
2
false
3
2018-03-31T04:10:00.000
0
2
0
Measurement for intersection of 2 irregular shaped 3d object
49,584,153
0
python,3d,computational-geometry,bin-packing
By straight voxelization: If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel. Then for every vertex of the mesh, set the bit of the cell it is included in (this just takes a truncation of the coordinates). By doing this, you will obtain the boundary of the object as a connected surface. You will obtain an estimate of the volume by means of a 3D flood filling algorithm, either from an inside or an outside pixel. (Outside will be easier but be sure to leave a one voxel margin around the object.) Estimating the volumes of both objects as well as intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.
I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar(not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate. Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively I can estimate volume with a sample based method that sample points inside one object and test the percentage of points that exist in another object. But I don't know how computational expensive it is to sample points inside a complex 3d shape as well as to test if a point is enclosed by such a shape. I will really appreciate any advices, codes, or equations on this matter. Also if you can suggest any libraries (preferably python library) that accept .obj, .ply...etc files and perform 3D geometry computation that will be great! I will also post here if I find out a good method. Update: I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric points sampling within one mesh and test points containment within another mesh. I found surface point sampling and containment testing(sort of surface intersection) and the grid approach to be the fastest.
0
1
1,361
0
49,594,057
0
0
0
0
1
true
0
2018-04-01T01:28:00.000
1
1
0
Neural Network - Input Normalization
49,593,985
1.2
python,tensorflow,machine-learning,neural-network,deep-learning
A large number of features makes it easier to parallelize the normalization of the dataset. This is not really an issue. Normalization on large datasets would be easily GPU accelerated, and it would be quite fast. Even for large datasets like you are describing. One of my frameworks that I have written can normalize the entire MNIST dataset in under 10 seconds on a 4-core 4-thread CPU. A GPU could easily do it in under 2 seconds. Computation is not the problem. While for smaller datasets, you can hold the entire normalized dataset in memory, for larger datasets, like you mentioned, you will need to swap out to disk if you normalize the entire dataset. However, if you are doing reasonably large batch sizes, about 128 or higher, your minimums and maximums will not fluctuate that much, depending upon the dataset. This allows you to normalize the mini-batch right before you train the network on it, but again this depends upon the network. I would recommend experimenting based on your datasets, and choosing the best method.
It is a common practice to normalize input values (to a neural network) to speed up the learning process, especially if features have very large scales. In its theory, normalization is easy to understand. But I wonder how this is done if the training data set is very large, say for 1 million training examples..? If # features per training example is large as well (say, 100 features per training example), 2 problems pop up all of a sudden: - It will take some time to normalize all training samples - Normalized training examples need to be saved somewhere, so that we need to double the necessary disk space (especially if we do not want to overwrite the original data). How is input normalization solved in practice, especially if the data set is very large? One option maybe is to normalize inputs dynamically in the memory per mini batch while training.. But normalization results will then be changing from one mini batch to another. Would it be tolerable then? There is maybe someone in this platform having hands on experience on this question. I would really appreciate if you could share your experiences. Thank you in advance.
0
1
1,045
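A sketch of one practical approach hinted at in the exchange above: stream over the data once to accumulate global mean/std, then normalise each mini-batch on the fly with those fixed statistics, so the result does not change from batch to batch and no normalised copy is stored. The data and chunk size are stand-ins.

```python
import numpy as np

def chunks(X, size):
    for i in range(0, len(X), size):
        yield X[i:i + size]

X = np.random.rand(10000, 100)   # stand-in for a dataset read chunk by chunk

# pass 1: accumulate sums without keeping a normalised copy in memory
n, s, sq = 0, 0.0, 0.0
for batch in chunks(X, 512):
    n += batch.shape[0]
    s += batch.sum(axis=0)
    sq += (batch ** 2).sum(axis=0)
mean = s / n
std = np.sqrt(sq / n - mean ** 2) + 1e-8

# pass 2 (during training): normalise each mini-batch just before it is used
for batch in chunks(X, 512):
    batch_norm = (batch - mean) / std
    # ... feed batch_norm to the network ...
```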
0
50,748,457
0
0
0
0
1
true
1
2018-04-02T00:11:00.000
0
1
0
pyspark read csv file multiLine option not working for records which has newline spark2.3 and spark2.2
49,603,834
1.2
python-3.x,apache-spark,pyspark,spark-dataframe
I created my own hadoop Custom Record Reader and was able to read it by invoking the api . spark.sparkContext.newAPIHadoopFile(file_path,'com.test.multi.reader.CustomFileFormat','org.apache.hadoop.io.LongWritable','org.apache.hadoop.io.Text',conf=conf) And in the Custom Record Reader implemented the logic to handle the newline characters encountered .
I am trying to read the dat file using pyspark csv reader and it contains newline character ("\n") as part of the data. Spark is unable to read this file as single column, rather treating it as new row. I tried using the "multiLine" option while reading , but still its not working. spark.read.csv(file_path, schema=schema, sep=delimiter,multiLine=True) Data is something like this. Here $ is CRLF for newline shown in vim. name,test,12345,$ $ ,desc$ name2,test2,12345,$ $ ,desc2$ So pyspark is treating desc as next record. How to read such data in pyspark . Tried this in both spark2.2 and spark2.3 versions.
0
1
648
0
49,605,405
0
0
0
0
1
true
52
2018-04-02T04:14:00.000
50
4
0
Does Numpy automatically detect and use GPU?
49,605,231
1.2
python,numpy,gpu
Does Numpy/Python automatically detect the presence of a GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? No. Or do I have to code in a specific way to exploit the GPU for fast computation? Yes. Search for Numba, CuPy, Theano, PyTorch or PyCUDA for different paradigms for accelerating Python with GPUs.
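For illustration, a small CuPy sketch (one of the libraries named above); it assumes a CUDA-capable GPU and the cupy package are installed, and it is something you write explicitly rather than something NumPy does for you.

```python
import numpy as np
import cupy as cp  # NumPy-like API that runs on the GPU

a = np.random.rand(1000, 1000)

a_gpu = cp.asarray(a)               # copy the array to GPU memory
prod_gpu = cp.matmul(a_gpu, a_gpu)  # matrix multiply on the GPU
inv_gpu = cp.linalg.inv(a_gpu)      # GPU-accelerated inverse

prod = cp.asnumpy(prod_gpu)         # copy results back to the host when needed
```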
I have a few basic questions about using Numpy with a GPU (nvidia GTX 1080 Ti). I'm new to GPUs, and would like to make sure I'm properly using the GPU to accelerate Numpy/Python. I searched the internet for a while, but didn't find a simple tutorial that addressed my questions. I'd appreciate it if someone could give me some pointers: 1) Does Numpy/Python automatically detect the presence of a GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? Or do I have to code in a specific way to exploit the GPU for fast computation? 2) Can someone recommend a good tutorial/introductory material on using Numpy/Python with a GPU (nvidia's)? Thanks a lot!
0
1
49,124
0
49,608,006
0
0
0
0
1
false
0
2018-04-02T08:27:00.000
0
1
0
Casting a numpy array into a (different) pre-allocated array
49,607,824
0
python,numpy
As Paul Panzer commented, this can be done simply by B[...] = A.
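A small sketch of the pattern, reusing the names A and B from the question; np.copyto is an equivalent alternative to the slice assignment.

```python
import numpy as np

A = np.arange(12, dtype=np.int32).reshape(3, 4)
B = np.empty(A.shape, dtype=np.float64)  # pre-allocated once, reused across iterations

B[...] = A        # casts int32 -> float64 into B's existing memory, no new allocation
np.copyto(B, A)   # equivalent: copies (and casts) A into the pre-allocated B
```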
Suppose I have an array A of dtype int32, and I want to cast it into float64. The standard way to do this (that I know of) is A.astype('float64'). But this allocates a new array for the result. If I run this command repeatedly (with different arrays of the same shape), each time using the result and discarding it shortly after, then the overhead from these allocations can be non-negligible. Suppose I pre-allocated an array B, with the same shape as A and of type float64. Is there a way to use the memory of B for the result of the casting, instead of allocating new memory each time? ufuncs and numpy.dot have an 'out' argument for this, but astype does not.
0
1
76
0
49,613,345
0
0
0
0
1
false
1
2018-04-02T14:12:00.000
0
1
0
How to use scikit-learn metrics in CNTK?
49,612,908
0
python,neural-network,cntk
Unless this metric is already implemented in CNTK, implement your own custom "metric" function in whatever format CNTK requires, and have it pass the inputs on to scikit-learn's metric function.
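A hedged sketch of such a pass-through wrapper, reusing the argmax/eval pattern from the question; the function name is hypothetical, and the exact signature CNTK expects for a trainer metric still has to be matched on your side.

```python
import numpy as np
import cntk
from sklearn.metrics import matthews_corrcoef

def sklearn_metric(y_true, y_pred, metric=matthews_corrcoef):
    """Convert one-hot CNTK outputs to label arrays and delegate to scikit-learn."""
    true_labels = np.asarray(cntk.argmax(y_true, axis=-1).eval()).ravel()
    pred_labels = np.asarray(cntk.argmax(y_pred, axis=-1).eval()).ravel()
    return metric(true_labels, pred_labels)
```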
I wish to use classification metrics like matthews_corrcoef as a metric for a neural network built with CNTK. The only way I have found so far is to evaluate the value by passing the predictions and labels, as shown: matthews_corrcoef(cntk.argmax(y_true, axis=-1).eval(), cntk.argmax(y_pred, axis=-1).eval()) Ideally I'd like to pass the metric to the trainer object while building my network. One way would be to create my own custom metric and pass that to the trainer object. Although possible, it would be better to be able to reuse the metrics that already exist in other libraries.
0
1
68
0
49,623,125
0
1
0
0
1
false
0
2018-04-02T22:28:00.000
0
3
0
python pair multiple field entries from csv
49,619,655
0
python,csv,text
First, get the distinct breakfast items and the distinct people. In pseudo code: iterate through each line, collecting the items and the persons into two separate lists; turn those two lists into sets, say items and persons; then, starting a counter at 1, loop for person in persons and, inside that, for item in items, printing "breakfast_" plus the counter, then the breakfast_item line, then the person line, and incrementing the counter after each pair. A runnable sketch follows below.
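A runnable version of that outline; the file names are assumptions, and the column names col1/col2 come from the question.

```python
import csv
from itertools import product

with open("breakfast.csv", newline="") as f:            # assumed input file name
    rows = list(csv.DictReader(f))

items = [r["col1"] for r in rows if r["col1"].strip()]   # breakfast items
people = [r["col2"] for r in rows if r["col2"].strip()]  # persons

with open("breakfasts.txt", "w") as out:                  # assumed output file name
    for counter, (person, item) in enumerate(product(people, items), start=1):
        out.write("breakfast_%d\n" % counter)
        out.write("breakfast_item %s\n" % item)
        out.write("person %s\n" % person)
```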
Trying to take data from a csv like this: col1 col2 eggs sara bacon john ham betty The number of items in each column can vary and may not be the same; col1 may have 25 entries and col2 may have 3, or the reverse, more or less. I want to loop through each entry so it is output into a text file like this: breakfast_1 breakfast_item eggs person sara breakfast_2 breakfast_item bacon person sara breakfast_3 breakfast_item ham person sara breakfast_4 breakfast_item eggs person john breakfast_5 breakfast_item bacon person john breakfast_6 breakfast_item ham person john breakfast_7 breakfast_item eggs person betty breakfast_8 breakfast_item bacon person betty breakfast_9 breakfast_item ham person betty So the script would need to add the "breakfast" number and loop through each breakfast_item and person. I know how to create one combination, but not how to pair up every item in a loop. Any tips on how to do this would be very helpful.
0
1
50
0
49,630,619
0
0
0
0
1
false
1
2018-04-03T10:30:00.000
0
1
0
Distance metric for n binary vectors
49,627,823
0
python,machine-learning,similarity,cosine-similarity
There are two concepts relevant to your question, which you should consider separately. Similarity measure: independent of your scoring mechanism, you should find a similarity measure which suits your data best. It can be a Euclidean distance (not suitable for a 1500-dimensional space), a cosine (dot-product based) distance, or a Hamming distance (assuming your input features are completely independent, which is rarely the case). A lot can go on in your distance function, and you should find one which makes sense for your data. Scoring mechanism: you mention total_distance_of_vectors in your question, which probably is not what you want. If n >> m, the total sum of distances for n vectors is almost certainly more than the total distance for m vectors. What you're looking for is most probably an average of the distances between the members of your sets. Then, depending on whether you want your average to be sensitive to outliers or not, you can go for the average of the distances or the average of squared distances. If you want to dig deeper, you can also get the mean and variance of the distances within the two sets and compare the distributions.
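A possible scoring sketch along these lines, using SciPy's pairwise distances; mean_pairwise_distance is a hypothetical helper name, and Jaccard is only one reasonable choice of metric for binary vectors.

```python
import numpy as np
from scipy.spatial.distance import pdist

def mean_pairwise_distance(X, metric="jaccard"):
    """Average distance over all pairs within one set of binary vectors (n_vectors x 1500)."""
    return pdist(np.asarray(X, dtype=bool), metric=metric).mean()

# lower mean distance => the vectors in that set are more similar to each other
# score_A = mean_pairwise_distance(A)   # A has shape (n, 1500)
# score_B = mean_pairwise_distance(B)   # B has shape (m, 1500)
```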
I have n and m binary vectors (of length 1500) from sets A and B respectively. I need a metric that can say how similar (a kind of distance metric) the n vectors are and how similar the m vectors are. The output should be total_distance_of_n_vectors and total_distance_of_m_vectors, and if total_distance_of_n_vectors > total_distance_of_m_vectors, it means set B has more similar vectors than set A. Which metric should I use? I thought of Jaccard similarity, but I am not able to put it in this context. Should I find the distance of each vector from every other vector to get the total distance, or something else?
0
1
1,262
0
49,713,880
0
1
0
0
1
false
0
2018-04-03T19:58:00.000
3
2
0
numpy version creating issue. python 2.7 already installed
49,638,201
0.291313
macos,numpy,matplotlib,ipython,homebrew
I just ran into the same problem. It's an issue where the numpy preinstalled with the system Python trips the version check (required >=1.5, but found 1.8.0rc1). Try running brew install python2 to upgrade your Python, which may solve this issue.
Getting a few "package missing" errors while installing ipython on High Sierra: matplotlib 1.3.1 has requirement numpy>=1.5, but you'll have numpy 1.8.0rc1 which is incompatible.
0
1
1,097
0
55,976,519
0
0
0
0
2
false
173
2018-04-04T05:09:00.000
16
5
0
What's the difference between reshape and view in pytorch?
49,643,225
1
python,pytorch
Tensor.reshape() is more robust. It will work on any tensor, while Tensor.view() works only on a tensor t where t.is_contiguous()==True. Explaining non-contiguous versus contiguous memory is another story, but you can always make the tensor t contiguous by calling t.contiguous(), and then you can call view() without the error.
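A quick demonstration of the difference (the tensor names here are just illustrative):

```python
import torch

x = torch.arange(6).reshape(2, 3)   # contiguous
t = x.t()                           # transpose: a non-contiguous view of the same data

print(t.is_contiguous())            # False
# t.view(6)                         # would raise a RuntimeError: view() needs contiguous memory
y = t.reshape(6)                    # works: reshape copies if it has to
z = t.contiguous().view(6)          # also works once the data is made contiguous
```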
In numpy, we use ndarray.reshape() for reshaping an array. I noticed that in pytorch, people use torch.view(...) for the same purpose, but at the same time, there is also a torch.reshape(...) existing. So I am wondering what the differences are between them and when I should use either of them?
0
1
88,597
0
69,676,338
0
0
0
0
2
false
173
2018-04-04T05:09:00.000
0
5
0
What's the difference between reshape and view in pytorch?
49,643,225
0
python,pytorch
I would say the answers here are technically correct, but there's another reason for reshape to exist. pytorch is usually considered more convenient than other frameworks because it is closer to python and numpy, and it's interesting that the question involves numpy. Let's look at size and shape in pytorch: size is a function, so you call it like x.size(); shape in pytorch is not a function. In numpy you have shape, and it's not a function - you use it as x.shape. So it's handy to have both of them in pytorch: if you came from numpy, it's nice to be able to use the same names.
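A tiny illustration of that size/shape symmetry:

```python
import numpy as np
import torch

a = np.zeros((2, 3))
t = torch.zeros(2, 3)

print(a.shape)    # (2, 3)               -- numpy: attribute only
print(t.shape)    # torch.Size([2, 3])   -- pytorch: numpy-style attribute
print(t.size())   # torch.Size([2, 3])   -- pytorch: method with the same information
```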
In numpy, we use ndarray.reshape() for reshaping an array. I noticed that in pytorch, people use torch.view(...) for the same purpose, but at the same time, there is also a torch.reshape(...) existing. So I am wondering what the differences are between them and when I should use either of them?
0
1
88,597
0
49,651,837
0
0
0
1
1
false
1
2018-04-04T12:47:00.000
0
2
0
Pandas Dataframe.to_sql wrongly inserting into more than one table (postgresql)
49,651,442
0
python-3.x,postgresql,pandas
Removing INHERITS (tablename); on the child table (creating it again without INHERITS) seems to have done the trick. Just out of curiosity: why did it matter? I thought inheritance only copies columns and dtypes, not the actual data.
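If you want the same column definitions without inheritance, one hedged option (the connection string is a placeholder) is to recreate the child table with LIKE instead of INHERITS:

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@localhost:5432/mydb")  # placeholder

# copy structure (columns, types, defaults, indexes) from 'margin' without the inheritance link,
# so rows written by df.to_sql(name='hourly', ...) no longer show up in queries on 'margin'
with engine.begin() as conn:
    conn.execute(text("DROP TABLE IF EXISTS hourly;"))
    conn.execute(text("CREATE TABLE hourly (LIKE margin INCLUDING ALL);"))
```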
df.to_sql(name='hourly', con=engine, if_exists='append', index=False) It inserts data not only into table 'hourly', but also into table 'margin' - and I execute this particular line only. It's PostgreSQL 10. While creating table 'hourly', I inherited column names and dtypes from table 'margin'. Is something wrong with the db itself, or is it the Python code?
0
1
588
0
49,656,081
0
0
0
1
1
false
0
2018-04-04T13:46:00.000
1
2
0
how to read text from excel file in python pandas?
49,652,693
0.099668
excel,python-3.x,pandas,import
Try converting the file from .xlsx to .csv. I had the same problem with text columns, so I tried converting to CSV (comma delimited) and it worked. Not very helpful, but worth a try.
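If converting is not an option, a hedged alternative worth trying is to force pandas to read every cell as text (the file name comes from the question):

```python
import pandas as pd

# dtype=str asks pandas not to coerce the long text columns while parsing
df = pd.read_excel("form1.xlsx", dtype=str)
print(df.head())
```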
I am working on an excel file with large text data. 2 columns have a lot of text data, like descriptions and job duties. When I import my file in python with df=pd.read_excel("form1.xlsx"), it shows the columns with text data as NaN. How do I import all the text in those columns? I want to do analysis on job title, description and job duties. Descriptions and job titles are long text, and I have over 150 rows.
0
1
2,378
0
49,678,287
0
0
0
0
1
false
0
2018-04-04T20:30:00.000
0
1
0
Ensemble (Combine) multiple deep learning regression models which already have dropout layers
49,659,892
0
python,tensorflow,regression,prediction,robust
I think the presence of dropout is irrelevant to what you want to do. Ensembling should work just fine with dropout.
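A minimal sketch of prediction-averaging (rather than weight-averaging); it assumes models is a list of already-trained Keras models that share the same input shape.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the regression outputs of several independently trained models."""
    preds = np.stack([m.predict(x) for m in models], axis=0)  # (n_models, n_samples, 1)
    return preds.mean(axis=0)                                 # (n_samples, 1)
```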
Currently I have multiple trained models for a regression task. Each model has the same architecture, but while training I used a dropout layer. To improve performance, is it still possible for me to combine those trained models and calculate the mean of the weights as the combined, new model? I have heard that there is an ensemble prediction method which allows us to do this, but I am not sure whether I can still do it because I already have a random dropout layer. Any hint is much appreciated!
0
1
137
0
49,662,938
0
0
0
0
1
false
1
2018-04-05T01:37:00.000
1
3
0
In Keras, how to send each item in a batch through a model?
49,662,869
0.066568
python,tensorflow,keras
At the moment you are returning a 3D array. Add a Flatten() layer to convert the array to 2D, and then add a Dense(1). This should output (batch_size, 1).
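A sketch of that arrangement - TimeDistributed inner CNN, then Flatten, then Dense(1) - assuming a toy inner CNN (layer sizes are placeholders) and the tf.keras API:

```python
from tensorflow.keras import layers, models

samples, rows, cols, channels = 8, 32, 32, 3

# inner model: per-sample CNN producing a feature vector
inner = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(rows, cols, channels)),
    layers.GlobalAveragePooling2D(),
])

# outer model: run the CNN on every sample in the unit, then collapse to a single value
outer = models.Sequential([
    layers.TimeDistributed(inner, input_shape=(samples, rows, cols, channels)),
    layers.Flatten(),
    layers.Dense(1),
])

outer.summary()   # final output shape: (batch_size, 1)
```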
I have a model that starts with a Conv2D layer and so it must take input of shape (samples, rows, cols, channels) (and the model must ultimately output a shape of (1)). However, for my purposes one full unit of input needs to be some (fixed) number of samples, so the overall input shape sent into this model when given a batch of input ends up being (batch_size, samples, rows, cols, channels) (which is expected and correct, but...). How do I send each item in the batch through this model so that I end up with an output of shape (batch_size, 1)? What I have tried so far: I tried creating an inner model containing the Conv2D layer et al then wrapping the entire thing in a TimeDistributed wrapper, followed by a Dense(units=1) layer. This compiled, but resulted in an output shape of (batch_size, samples, 1). I feel like I am missing something simple...
0
1
587
0
64,231,036
0
0
0
0
1
false
27
2018-04-05T06:45:00.000
2
4
0
How to add report_tensor_allocations_upon_oom to RunOptions in Keras
49,665,757
0.099668
python,tensorflow,keras,gpu
OOM means out of memory. Maybe it is using more memory at that point. Decrease batch_size significantly; I set it to 16 and then it worked fine.
I'm trying to train a neural net on a GPU using Keras and am getting a "Resource exhausted: OOM when allocating tensor" error. The specific tensor it's trying to allocate isn't very big, so I assume some previous tensor consumed almost all the VRAM. The error message comes with a hint that suggests this: Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. That sounds good, but how do I do it? RunOptions appears to be a Tensorflow thing, and what little documentation I can find for it associates it with a "session". I'm using Keras, so Tensorflow is hidden under a layer of abstraction and its sessions under another layer below that. How do I dig underneath everything to set this option in such a way that it will take effect?
0
1
26,528
0
49,669,130
0
0
0
0
1
false
0
2018-04-05T08:57:00.000
0
2
0
Feed the output of a CNN in a LSTM
49,668,169
0
python,tensorflow,deep-learning,lstm
If you merge several small sequences from different videos to form a batch, the output of the last layer of your model (the RNN) should already be [batch_size, window_size, num_classes]. Basically, you want to wrap your CNN with reshape layers which will concatenate the frames from each batch: input -> [batch_size, window_size, nchannels, height, width], reshape -> [batch_size * window_size, nchannels, height, width], CNN -> [batch_size * window_size, feat_size], reshape -> [batch_size, window_size, feat_size], RNN -> [batch_size, window_size, num_outputs] (assuming frame-wise predictions). But this will take a lot of memory, so you can set the batch size to 1, which is what you seem to be doing if I understood correctly. In that case you can skip the first reshape. I'm not sure about the order of the axes above, but the general logic remains the same. As a side note: if you plan on using batch normalization at some point, you may want to raise the batch size, because consecutive frames from a single segment might not contain much variety by themselves. Also double-check the batch normalization axes, which should cover both the time and batch axes.
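The shape bookkeeping above, shown with plain NumPy arrays and a stand-in for the CNN (the feature size comes from the question; image dimensions are made up for illustration):

```python
import numpy as np

batch_size, window_size, feat_size = 1, 30, 1000
nchannels, height, width = 3, 224, 224          # illustrative image size

frames = np.random.rand(batch_size, window_size, nchannels, height, width)

# merge batch and time so the CNN sees ordinary images
cnn_in = frames.reshape(batch_size * window_size, nchannels, height, width)

cnn_out = np.random.rand(cnn_in.shape[0], feat_size)   # stand-in for the CNN's last layer

# split batch and time apart again before the recurrent layer
rnn_in = cnn_out.reshape(batch_size, window_size, feat_size)
print(rnn_in.shape)   # (1, 30, 1000)
```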
It is the first time that I am working with LSTM networks. I have a video with a frame rate of 30 fps. I have a CNN network (AlexNet based) and I want to feed the last layer of my CNN into the recurrent network (I am using tensorflow). Suppose my batch_size=30, i.e. equal to the fps, and I want a timestep of 1 second (so, every 30 frames). The output of the last layer of my network will be [batch_size, 1000], so in my case [30, 1000]. Do I now have to reshape my output to [batch_size, time_steps, features] (in my case: [30, 30, 1000])? Is that correct, or am I wrong?
0
1
2,027
0
49,686,565
0
0
0
0
1
false
5
2018-04-05T13:50:00.000
8
1
0
How is the output h_n of an RNN (nn.LSTM, nn.GRU, etc.) in PyTorch structured?
49,674,079
1
python,neural-network,deep-learning,lstm,pytorch
The implementation of LSTM and GRU in pytorch automatically includes the possibility of stacked layers of LSTMs and GRUs. You set this with the keyword argument nn.LSTM(num_layers=num_layers). num_layers is the number of stacked LSTMs (or GRUs) that you have. The default value is 1, which gives you the basic LSTM. num_directions is either 1 or 2. It is 1 for normal LSTMs and GRUs, and it is 2 for bidirectional RNNs. So in your case, you probably have a simple LSTM or GRU, and the value of num_layers * num_directions would then be one. h_n[0] is the hidden state of the bottom-most layer (the one which takes in the input), and h_n[-1] is that of the top-most layer (the one which outputs the output of the network). batch_first puts the batch dimension before the time dimension (the default being the time dimension before the batch dimension); because the hidden state doesn't have a time dimension, batch_first has no effect on the hidden state shape.
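A quick check of those shapes (the sizes are arbitrary):

```python
import torch
import torch.nn as nn

num_layers, hidden_size, input_size = 3, 5, 10
batch, seq_len = 4, 7

rnn = nn.LSTM(input_size, hidden_size, num_layers=num_layers, batch_first=True)
x = torch.randn(batch, seq_len, input_size)

output, (h_n, c_n) = rnn(x)
print(h_n.shape)   # torch.Size([3, 4, 5]) == (num_layers * num_directions, batch, hidden_size)

bottom_layer_state = h_n[0]    # hidden state of the layer that reads the input
top_layer_state = h_n[-1]      # hidden state of the uppermost layer at the last time step
```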
The docs say h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len Now, the batch and hidden_size dimensions are pretty much self-explanatory. The first dimension remains a mystery, though. I assume, that the hidden states of all "last cells" of all layers are included in this output. But then what is the index of, for example, the hidden state of the "last cell" in the "uppermost layer"? h_n[-1]? h_n[0]? Is the output affected by the batch_first option?
0
1
1,829
0
49,676,768
0
0
0
0
1
false
0
2018-04-05T15:58:00.000
0
1
0
Exception: Python in worker has different version 2.7 than that in driver 2.6, PySpark cannot run with different minor versions
49,676,701
0
python,apache-spark,pyspark
Install at least Python 2.7 on each node and configure the PYSPARK_PYTHON environment variable to point to the required installation. Spark doesn't support mixed environments and no longer supports Python 2.6.
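One hedged way to pin the interpreter from the driver side (the path below is an assumption - use wherever Python 2.7 actually lives on your nodes, and make sure it exists on all of them):

```python
import os

# must be set before the SparkContext is created
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python2.7"         # executors
os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python2.7"  # driver

from pyspark import SparkContext
sc = SparkContext(appName="version-check")
```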
We have a Hadoop cluster of 625 nodes, but some of them run CentOS 6 (Python 2.6) and some run CentOS 7 (Python 2.7). How can I resolve this, as I am getting this error constantly?
0
1
562
0
49,817,588
0
1
0
0
1
false
0
2018-04-06T05:00:00.000
0
1
0
Using Jupyter Notebook to plot data from rosbag files
49,685,635
0
python,jupyter-notebook,ros
I found this answer in one of my older .ipynb files. Plots can be obtained with the Plotly library. It really doesn't matter how many bag files are being scanned for topics and plotted. Each bag file can be separately converted using the 'bag_to_dataframe' function from the "rosbag_pandas" package. Even in the case of similar topics between bag files, it is possible to differentiate their values in the plots. Hope this helps; I'll share a sample bag file soon :)
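A rough sketch of that per-bag conversion, assuming the rosbag_pandas package mentioned above is installed; the bag file name and the plotted column filter are placeholders.

```python
import rosbag_pandas
import matplotlib.pyplot as plt

df = rosbag_pandas.bag_to_dataframe("run1.bag")   # one dataframe per bag file

# column names come from the recorded topics; plot whichever subset you need, one at a time
df.filter(like="pose").plot()
plt.show()
```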
I have multiple rosbag files with a ton of data, and what I would like to do is analyze these bag files using Jupyter Notebook. The problem is that each bag has a different set of data parameters, so I have created msg files to subscribe to data from each bag file. Some msg files have the same variables, since those variables are used in multiple plots. Can someone walk me through the process of getting the plots into the notebook one at a time (from the data obtained from the bag files), for every bag file I have, even if multiple files use the same topics?
0
1
690