GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 71,438,963 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-09-05T01:01:00.000 | 1 | 1 | 0 | Spyder cannot find module named 'pandas_datareader' | 52,175,718 | 1.2 | python,pandas,module,spyder,datareader | I tried conda install pandas-datareader in Anaconda Prompt. It was installed and after my computer restarted, pandas-datareader worked in spyder 3.6. | First off I would like to say that I am aware that this question has been asked before, however, none of the other posts have offered a solution that resolves the problem.
I am trying to use pandas-datareader to grab stock prices from the internet. I am using windows with python version 3.6. I first installed pandas-datareader from the console using
pip install pandas-datareader.
I then wrote a code which used the line
import pandas_datareader.data as web
It came back with the error
`ModuleNotFoundError: No module named 'pandas_datareader'`
I tried to redownload pandas-datareader, just in case it didn't work the first time, but the console came back with "Requirement already satisfied" so that wasn't the problem.
From there I uninstalled pandas-datareader, and reinstalled it with
pip3 install pandas-datareader
I still got the same error message.
I was worried that it might have something to do with old versions of Python installed on my computer, so I deleted all files for Python 2.7, but this did not help the issue. I restarted Spyder and my computer and this did not help. I tried Jupyter Notebook and this did not help either.
Now to take my investigation one step further, I looked at the hidden files in my file folders to see where pandas-datareader could be hiding. When I go to C:\Users\J.Shepard\Anaconda3\pkgs I see that pandas-0.23.0-py36h830ac7b_0 is installed, but I cannot find anything that looks like pandas-datareader. In fact, when I search for "pandas-datareader" in my file search, I only see 2 text files with one line of code each. I do not know what to make of this discovery, but I thought it might be helpful to someone else.
I hope that I have made a good case to prove that I have genuinely tried and failed to solve this problem on my own. Based on the number of other unresolved posts related to this same question, I believe that this is a question that deserves to be asked again. | 0 | 1 | 2,865 |
0 | 56,251,858 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2018-09-05T03:51:00.000 | 6 | 1 | 0 | Could Keras prefetch data like tensorflow Dataset? | 52,176,792 | 1 | python,tensorflow,keras,dataset | If you call fit_generator with workers > 1 and use_multiprocessing=True, it will prefetch up to max_queue_size batches (a hedged sketch follows this record).
From docs: max_queue_size: Integer. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. | In TensorFlow's Dataset API, we can use dataset.prefetch(buffer_size=xxx) to preload other batches' data while GPU is processing the current batch's data, therefore, I can make full use of GPU.
I'm going to use Keras and wonder if Keras has a similar API for me to make full use of the GPU, instead of the serial execution: read batch 0 -> process batch 0 -> read batch 1 -> process batch 1 -> ...
I briefly looked through the Keras API and did not see a description of prefetching. | 0 | 1 | 2,301 |
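A minimal self-contained sketch of the prefetching call described in the answer above, assuming Keras 2.x; the toy model and generator are placeholders (note that plain generators with multiprocessing can warn about picklability; a keras.utils.Sequence avoids that):

```python
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

def train_gen():
    while True:                         # endless generator of (x, y) batches
        yield np.random.rand(32, 10), np.random.rand(32, 1)

model = Sequential([Dense(1, input_dim=10)])
model.compile(optimizer='sgd', loss='mse')
# CPU workers prepare batches while the GPU trains on the current one;
# up to max_queue_size batches wait ready in the queue.
model.fit_generator(train_gen(), steps_per_epoch=100, epochs=2,
                    workers=4, use_multiprocessing=True, max_queue_size=10)
```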
0 | 52,183,726 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-09-05T08:25:00.000 | 0 | 1 | 0 | Multi-dimensional tensors as input to rnn in tensorflow (tf.contrib.rnn.RNNCell) | 52,180,502 | 1.2 | python,tensorflow,deep-learning,computer-vision,rnn | As you said, an RNN only accepts as input a tensor shaped [batch_size, sequence_length, features].
In order to use the RNNs from tensorflow you will have to extract the features with a CNN for each frame and reshape the CNN output into a [batch_size, sequence_length, features] tensor before feeding it to the RNN (a hedged sketch follows this record).
It seems, that rnn cell only accepts vectors as inputs. However I would like to feed images/videos to an rnn (e.g. [batch size, steps, height, width, channels]). Is there a way to do this using rnn cell and dynamic rnn, or do I have to manually construct an rnn? | 0 | 1 | 155 |
0 | 52,200,967 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-09-05T18:03:00.000 | 0 | 2 | 0 | Is it good idea to repartition 50 million records data in dataframe? If yes then someone please tell me the appropriate way of doing this | 52,191,056 | 0 | python,database,dataframe,pyspark,hadoop2 | Usually, partitioning is a good idea and, as @Karthik already said, the date is often not the best key. In my experience it always made sense to partition the data based on the number of workers you have; ideally the number of partitions is a multiple of the worker count. We normally use 120 partitions, as we have 24 workers in our Spark environment, and end up with code like:
new_df = spark.read.csv("some_csv.csv", header="true", escape="\"", quote="\"").repartition(120)
We also experienced way better performance in working with parquet instead of csv, which is a tradeoff, as the data has to be read, repartitioned and stored again, but it paid off in the analysis steps. So maybe you should also consider this conversion. | We are going to handle Big Data (~50 million records) in our organization. We are partitioning data on the basis of date and other some parameters, but data is not equally partitioned. Can we do repartition on it for good performance? | 0 | 1 | 64 |
0 | 52,194,219 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-09-05T18:03:00.000 | 0 | 2 | 0 | Is it good idea to repartition 50 million records data in dataframe? If yes then someone please tell me the appropriate way of doing this | 52,191,056 | 0 | python,database,dataframe,pyspark,hadoop2 | Depending on your machine, try maintaining a fixed number of partitions. It is always a good idea to partition, but in most cases it's not a good idea to partition based on date (though I can't be sure without knowing the nature of your data). | We are going to handle Big Data (~50 million records) in our organization. We are partitioning data on the basis of date and some other parameters, but the data is not equally partitioned. Can we repartition it for good performance? | 0 | 1 | 64 |
0 | 52,212,217 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-06T18:54:00.000 | 1 | 1 | 0 | H2O Word2Vec inconsistent vectors | 52,210,521 | 1.2 | python,word2vec,h2o | word2vec in h2o-3 uses hogwild implementation - the model parameters are updated concurrently from multiple threads and it is not possible to guarantee the reproducibility in this implementation.
How big is your text corpus? At the cost of a slowdown in the model training, you could get reproducible results by limiting the algorithm to a single thread (h2o start-up parameter -nthreads 1). | I have a general question on a specific topic.
I am using the vectors generated by Word2Vec to feed as features into my Distributed Random Forest model for classifying some records. I have millions of records and am receiving new records on a daily basis. Because of the new records coming in I want the new records to be encoded with the same vector model as the previous records. Meaning that the word "AT" will be the same vector now and in the future.
I know that Word2Vec uses a random seed to generate the vectors for the words in the corpus but I want to turn this off. I need to set the seed such that if I train a model on a section of the data today and then again on the same data in the future, I want it to generate the same model with the exact same vectors for each word.
The problem with generating new models and then encoding is that it takes a great deal of time to encode these records, and on top of that my DRF model for classification isn't any good anymore because the vectors for the words have changed. So I have to retrain a new DRF.
Normally this would not be an issue since I could just train one model and then use it forever; however, I know that good practice is to update your packages regularly. This is a problem for h2o, since once you update there is no backward compatibility with models generated on a previous version.
Are there any sources that I could read on how to set the seed on the Word2Vec model for h2o in python? I am using Python version 3 and h2o version 3.18 | 0 | 1 | 140 |
0 | 52,211,776 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-06T20:24:00.000 | 4 | 1 | 0 | why do i get nan loss value in training discriminator and generator of GAN? | 52,211,665 | 1.2 | python,tensorflow,generative-adversarial-network | There are several reasons for a NaN loss and why models diverge. Most common ones I've seen are:
Your learning rate is too high. If this is the case, the loss increases and then diverges to infinity.
You are getting a division by zero error. If this is the case, you can add a small number like 1e-8 to your output probability.
You have bad inputs. If this is the case, make sure that you do not feed your model with NaNs. i.e. use assert not np.any(np.isnan(x)) on the input data.
Your labels are not in the same domain of your objective function. If this is the case, check the range of your labels and make sure they match.
If none of the above helps, try to check the activation function, the optimizer, the loss function, the size and the shape of the network.
Finally, though less likely, there might be a bug in the framework you are using. Check the framework's repo to see whether others are having the same issue. | I have saved my text vectors using the gensim library, and they contain some negative numbers. Will that affect the training?
If not then why am i getting nan loss value first for discriminator and then for both discriminator and generator after certain steps of training? | 0 | 1 | 5,113 |
0 | 52,221,695 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-07T10:33:00.000 | 0 | 2 | 0 | Python Pandas reading UTF-8 characters | 52,220,676 | 0 | python-2.7,pandas | Double Check if Excel is saved as UTF-8
In Excel 2016 When saving as: click More Options > Tools > Web Options > Encoding > Save this document as ... (pick UTF-8 from the list)
Saving Excel as csv or even txt helps in many cases too.
If a csv or txt exported from Excel still doesn't open/work properly,
open it in Notepad and save it again, selecting the proper UTF-8 encoding (a hedged Python 2 sketch follows this record).
I am importing the Excel file with pd.read_excel(path, sheetname, encoding='utf8')
The import works fine and I can see the åäö characters, but when I work with the data, for example creating a new variable df['East'] = df['Öst'] + 50, I receive the error message
ascii codec can't decode byte 0xc3 in position 33: ordinal not in range(128)
Anyone that can help me solve this issue? | 0 | 1 | 865 |
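A hedged Python 2 sketch of one common fix for this error: declare the source encoding and use unicode literals so pandas never falls back to the ascii codec. The file name and sheet name are placeholders mirroring the question's call:

```python
# -*- coding: utf-8 -*-
import pandas as pd

df = pd.read_excel('data.xlsx', 'Sheet1', encoding='utf8')  # hypothetical file
# u'...' literals keep the column lookup in unicode instead of byte strings
df[u'East'] = df[u'Öst'] + 50
```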
0 | 52,454,922 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-07T12:44:00.000 | 0 | 1 | 0 | How to use C++ to implement SimilarityTransform in scikit-image without using estimateRigidTransform in OpenCV? | 52,222,864 | 0 | python,c++,image,opencv | I found a function in Eigen3 that can do the same thing as the Python code does. | I am trying to translate a project written in Python to C++ and have to implement the SimilarityTransform function from the scikit-image package. I found that estimateRigidTransform in OpenCV does the same thing, but estimateRigidTransform sometimes returns an empty matrix. Is there some method that works better than that and always returns a matrix?
Thanks in advance. | 0 | 1 | 748 |
0 | 52,223,093 | 0 | 0 | 0 | 0 | 1 | false | 21 | 2018-09-07T12:55:00.000 | 4 | 4 | 0 | Merge multiple dataframes based on a common column | 52,223,045 | 0.197375 | python,pandas,dataframe,merge,concat | You can chain merges; a left join keeps every row of df1, which covers all the keys in this example (an outer-join variant that never drops keys is sketched after this record):
df1.merge(df2, how='left', left_on='Col1', right_on='Col1').merge(df3, how='left', left_on='Col1', right_on='Col1') | I have Three dataframes. All of them have a common column and I need to merge them based on the common column without missing any data
Input
>>>df1
0 Col1 Col2 Col3
1 data1 3 4
2 data2 4 3
3 data3 2 3
4 data4 2 4
5 data5 1 4
>>>df2
0 Col1 Col4 Col5
1 data1 7 4
2 data2 6 9
3 data3 1 4
>>>df3
0 Col1 Col6 Col7
1 data2 5 8
2 data3 2 7
3 data5 5 3
Expected Output
>>>df
0 Col1 Col2 Col3 Col4 Col5 Col6 Col7
1 data1 3 4 7 4
2 data2 4 3 6 9 5 8
3 data3 2 3 1 4 2 7
4 data4 2 4
5 data5 1 4 5 3 | 0 | 1 | 29,412 |
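A hedged variant of the chained merge above: functools.reduce with an outer join, which also keeps any key that happens to be missing from df1 (column name taken from the question):

```python
from functools import reduce

dfs = [df1, df2, df3]
# an outer join keeps every Col1 key that appears in any frame
merged = reduce(lambda l, r: l.merge(r, on='Col1', how='outer'), dfs)
```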
0 | 52,245,736 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-08T00:53:00.000 | 0 | 1 | 0 | Get classification score of hypothetical detection box | 52,231,108 | 0 | python,tensorflow,deep-learning | With tensorflow you cannot do that out of the box. What you are describing is essentially a region proposal plus the rest of the pipeline that platforms like tensorflow and yolo are built around to arrive at object detection. In effect, you are proposing to build a different platform.
I am working with a tensorflow object detection graph and want to refine its accuracy with a little trickery: by claiming that there are more (N) objects in a given image than it is detecting, asserting that there are image objects in multiple areas of the image, and evaluating each hypothetical image object based on its classification score between 0 and 1.
In other words:
I want to say "Hey, TensorFlow, I think there is an image object with rectangular coordinates (x1, y1), (x2, y2) in this image. What would the classification score of a hypothetical detection box defined by that rectangle be?" Is this possible? | 0 | 1 | 32 |
0 | 52,236,009 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-09-08T02:24:00.000 | 0 | 2 | 0 | Graph traversal, maybe another type of mathematics? | 52,231,442 | 0 | python,algorithm,set,graph-algorithm,graph-traversal | If you really intended to find the minimum amount, the answer is 0, because you don't have to use any number at all.
I guess you meant to write "maximal amount of numbers".
If I understand your problem correctly, it sounds like we can translated it to the following problem:
Given a set of n numbers (1,..,n), what is the maximal amount of numbers I can use to divide the set into pairs, where each number can appear only once.
The answer to this question is:
f(n) = 2k when n = 2k, for k >= 0
f(n) = 2k when n = 2k+1, for k >= 0
I'll explain, using induction.
if n = 0 then we can use at most 0 numbers to create pairs.
if n = 2 (the set can be [1,2]) then we can use both numbers to create one pair (1,2)
Assumption: if n = 2k, assume we can use all 2k numbers to create k pairs, and prove by induction that we can use 2k+2 numbers for n = 2k+2.
Proof: if n = 2k+2, [1,2,..,k,..,2k,2k+1,2k+2], we can create k pairs using 2k numbers (from our assumption). Without loss of generality, assume our pairs are (1,2),(3,4),..,(2k-1,2k). We can see that we still have two numbers [2k+1, 2k+2] that we didn't use, and therefore we can create one more pair out of them, which means that we used 2k+2 numbers.
You can prove on your own the case when n is odd. | Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say for reasons that are not important, you run them through a function and receive the following pairs:
(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter).
At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks! | 0 | 1 | 81 |
0 | 52,249,740 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-09-08T02:24:00.000 | 0 | 2 | 0 | Graph traversal, maybe another type of mathematics? | 52,231,442 | 0 | python,algorithm,set,graph-algorithm,graph-traversal | In case anyone cares in the future, the solution is the blossom algorithm, which computes a maximum matching in a general graph (a library-based sketch follows this record).
(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter).
At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks! | 0 | 1 | 81 |
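A minimal sketch of maximum matching with networkx, whose max_weight_matching implements a blossom-style algorithm; the pairs come directly from the question:

```python
import networkx as nx

pairs = [(1, 13), (1, 19), (1, 21), (3, 19), (7, 3), (7, 13),
         (7, 19), (21, 13), (21, 19)]
G = nx.Graph(pairs)
matching = nx.max_weight_matching(G, maxcardinality=True)
used = {v for edge in matching for v in edge}  # numbers pairable without repeats
```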
0 | 52,291,702 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-08T20:30:00.000 | 0 | 1 | 0 | How to change channel dimension of an image? | 52,239,164 | 0 | python-3.x,numpy | You should use a score threshold to map every pixel in the image to one class, and give every class a color (an RGB triple), so that every pixel becomes the RGB value of its class. | Suppose I have a numpy array with shape [y, z, 21] where y = image_width and z = image_height. The above array represents an image with 21 channels. How should I convert it to shape [y, z, 3]? | 0 | 1 | 355 |
0 | 52,258,663 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-09-10T08:26:00.000 | 5 | 1 | 0 | KMeans clustering unbalanced data | 52,253,787 | 1.2 | python,cluster-analysis,k-means,data-science,feature-engineering | It is not part of the k-means objective to produce balanced clusters. In fact, solutions with balanced clusters can be arbitrarily bad (just consider a dataset with duplicates). K-means minimizes the sum-of-squares, and putting these objects into one cluster seems to be beneficial.
What you see is the typical effect of using k-means on sparse, non-continuous data. Encoded categorical variables, binary variables, and sparse data are just not well suited for k-means' use of means. Furthermore, you'd probably need to weight the variables carefully, too.
Now a hotfix that will likely improve your results (at least the perceived quality, because I do not think it makes them statistically any better) is to normalize each vector to unit length (Euclidean norm 1); a sketch follows this record. This will emphasize the ones in rows with few nonzero entries. You'll probably like the results more, but they are much harder to interpret.
Each row contains normalised numerical values (ranging 0-1). It is actually a normalised dummy variable, whereby some rows have only a few features, 3-4 (i.e. 0 is assigned if there is no value). Most rows have about 10-20 features.
I used KMeans to cluster the data, always resulting in a cluster with a large number of members. Upon analysis, I noticed that rows with fewer than 4 features tend to get clustered together, which is not what I want.
Is there any way to balance out the clusters? | 0 | 1 | 3,624 |
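A minimal sketch of the unit-length hotfix from the answer above; X is the question's feature matrix and k=8 is an arbitrary placeholder:

```python
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

X_unit = normalize(X, norm='l2')     # rescale each row to Euclidean length 1
labels = KMeans(n_clusters=8).fit_predict(X_unit)
```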
0 | 52,255,420 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-10T09:41:00.000 | 0 | 1 | 0 | How to apply avg function to DataFrame series monthly? | 52,254,994 | 0 | python,pandas,dataframe | data.resample('M').mean() | I have a DataFrame series with daily resolution. I want to transform the series into a series of monthly averages. Of course I can apply a rolling mean and select only every 30th value, but that would not be precise. I want a series which contains, on every first day of a month, the mean value of the previous month. For example, on February 1 I want the daily average for January. How can I do this in a pythonic way? | 0 | 1 | 63 |
0 | 55,653,445 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-10T11:25:00.000 | 0 | 1 | 0 | how to use Tensorflow seq2seq.GreedyEmbeddingHelper first parameter Embedding in case of using normal one hot vector instead of embedding? | 52,256,809 | 0 | python,tensorflow | embedding = tf.Variable(tf.random_uniform([c-dimensional ,
EMBEDDING_DIM]))
here you can create the embedding for you own model.
and this will be trained during your training process to give a vector for your own input.
If you don't want a trained embedding, you can instead create a matrix in which every row is a one-hot vector representing a character, and pass that matrix as the embedding (a callable-based sketch follows this record).
It will be something like this, if you have a vocab size of 3:
[[1,0,0],[0,1,0],[0,0,1]]
Now I am stuck with tf.contrib.seq2seq.GreedyEmbeddingHelper. It requires "embedding: A callable that takes a vector tensor of ids (argmax ids), or the params argument for embedding_lookup. The returned tensor will be passed to the decoder input."
How will I define the callable? What are the inputs (the vector tensor of ids (argmax ids)) and outputs of this callable function? Please explain using examples. | 0 | 1 | 267 |
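A hedged TF 1.x sketch of passing a one-hot callable as the embedding argument, per the helper's documented contract quoted in the question; vocab size, batch size, and token ids are hypothetical placeholders:

```python
import tensorflow as tf

vocab_size, batch_size = 30, 16        # hypothetical character vocab and batch
start_id, end_id = 1, 2                # hypothetical <GO> and <EOS> ids

# GreedyEmbeddingHelper accepts a callable mapping argmax ids to inputs,
# so a one-hot lookup can stand in for a trained embedding matrix.
embedding_fn = lambda ids: tf.one_hot(ids, depth=vocab_size, dtype=tf.float32)

helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=embedding_fn,
    start_tokens=tf.fill([batch_size], start_id),
    end_token=end_id)
```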
0 | 52,279,819 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-11T08:36:00.000 | 0 | 1 | 0 | How to extract only ID photo from CV with pdfimages | 52,271,908 | 0 | python,image,pdf,extract,pypdf | You need a way of differentiating images found in the PDF in order to extract the ones of interest.
I believe you have the options of considering:
Image characteristics such as Width, Height, Bits Per Component, ColorSpace
Metadata information about the image (e.g. a XMP tag of interest)
Facial recognition of the person in the photo or Form recognition of the structure of the ID itself.
Extracting all of the images and then using some image-processing code to analyze them and identify the ones of interest.
I think 2) may be the most reliable method if the author of the PDF included such information with the photo IDs. 3) may be difficult to implement and get a reliable result from consistently. 1) will only work if that is a reliable means of identifying such photo IDs for your PDF documents.
Then you could key off of that information using your extraction tool (if it lets you do that). Otherwise you would need to write your own extraction tool using a PDF library. | Hi I tried to use pdfimages to extract ID images from my pdf resume files. However for some files they return also the icon, table lines, border images which are totally irrelevant.
Is there any way I can limit it to extracting only the person's photo? I am thinking we could define certain size constraints on the output? | 0 | 1 | 194 |
0 | 52,287,805 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-11T09:54:00.000 | 0 | 1 | 0 | Write binary numpy array of zeros and ones to file using cv2 or Pillow | 52,273,313 | 1.2 | python-3.x,python-imaging-library,cv2 | Since you are writing pixels with values (0, 0, 0) or (1, 1, 1) to the image you are seeing an image that is entirely black and almost-black, so it looks black.
You can multiply your array by 255 to get an array of { (0, 0, 0), (255, 255, 255) } which would be black and white. When you read the image you can convert back to 0s and 1s. | Is it possible to write binary numpy array containing 0 and 1 to file using opencv (cv2) or Pillow? I was using scipy.misc.imsave and it worked well, but i read it's depreciated so i wanted to switch to other modules, but when trying to write such an array i see only black image. I need to have 0/1 values, and not 0/255 for further processing. | 0 | 1 | 486 |
0 | 53,675,625 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-09-12T03:53:00.000 | 1 | 3 | 0 | Can neuroevolution of augmenting topologies (NEAT) neural networks be built in TensorFlow? | 52,287,254 | 0.066568 | python,tensorflow,pytorch,neat | One way to make an evolving tensorflow network would be to use the HyperNEAT or ES-HyperNEAT algorithms instead of running the evolution on the individual networks in the species. These instead evolve a "genome" that is actually a CPPN encoding the phenotype neural nets. For the CPPN you can use a feed-forward tensorflow network, with the caveat of having different activation functions available at each node; this lets the CPPN evolve so it can be queried for the structure and weights of the "phenotype" neural network, for which you can use a generic tensorflow net (or whatever net you choose).
I would look into the neat-python and peas libraries, look at the networks they use, and replicate those classes with tensorflow nets. | I am making a machine learning program for time series data analysis and using NEAT could help the work. I started to learn TensorFlow not long ago but it seems that the computational graphs in TensorFlow are usually fixed. Are there tools in TensorFlow to help build a dynamically evolving neural network? Or would something like Pytorch be a better alternative? Thanks. | 0 | 1 | 2,641 |
0 | 52,302,920 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-09-12T19:22:00.000 | 0 | 1 | 0 | sklearn learning_curve and StandardScaler | 52,302,047 | 0 | python,scikit-learn | learning_curve does not implement StandardScaler on its own. You could pass a Pipeline as your estimator, with StandardScaler as the first step and whatever estimator you're using as the next step (a sketch follows this record). This way, on each cv iteration of learning_curve, both the scaler and the estimator are trained on the training folds, and performance is validated against the testing fold.
You would not want to scale the entire dataset before calling learning_curve. The reason is that when you scale the entire set before training, you introduce bias: data that will later be used for validation has already influenced the model via the scaler fit, which can lead to over-optimistic, over-fitted results.
All tutorials for any estimator have you split the data into training and test sets, then scale only the training data and transform the test data using the training-data scale, which I completely understand.
Should I scale the entire data set before passing it to learning_curve? I do know learning_curve will use k-fold or some other cross-validation method, so does it even matter, given that it will all get averaged out by the cross-validation?
Thanks, | 0 | 1 | 374 |
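A minimal self-contained sketch of the Pipeline approach from the answer; the stand-in dataset and the SVC estimator are arbitrary placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=300, random_state=0)  # stand-in data
pipe = make_pipeline(StandardScaler(), SVC())
# the scaler is re-fit on the training folds of every CV split
sizes, train_scores, test_scores = learning_curve(pipe, X, y, cv=5)
```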
0 | 52,302,923 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-09-12T19:44:00.000 | 1 | 1 | 0 | TensorFlow estimators vs manual/session approach | 52,302,352 | 1.2 | python,tensorflow,deep-learning,tensorflow-estimator | A simple answer would be:
Estimator hides some TensorFlow concepts, such as Graph and Session, from the user. This is best for newbies since it makes it much easier for new learners to get started (this has nothing to do with the type of dataset; using the tf.data API to write an input_fn is sufficient to provide input data for an estimator).
Once you have played with tensorflow for a while, understanding how Estimator works, and perhaps starting to use the low-level APIs, is definitely needed to make you an expert.
0 | 52,318,766 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-09-13T14:52:00.000 | 1 | 4 | 0 | problem installing and importing modules in python | 52,316,354 | 0.049958 | python,numpy,opencv | I removed the Anaconda version on my machine, so I just have Python 3.7 installed. I removed the Python interpreter (PyCharm) and installed it again, and the problem got fixed somehow! | I am installing Python on Windows 10 and trying to install the opencv and numpy extensions in the command window. I get no error installing them and it says they are successfully installed. But when I try to check the installation and import cv2, it does not recognize it and gives me the error: no module named cv2.
Can anybody help me with this problem? Is there something wrong in the installation process or do I need to install something else?
I checked the newest version of each and used the one compatible with my system.
Thanks. | 0 | 1 | 3,551 |
0 | 52,316,565 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-09-13T14:52:00.000 | 1 | 4 | 0 | problem installing and importing modules in python | 52,316,354 | 0.049958 | python,numpy,opencv | Is it possible that you have 2 versions of python on your machine and your native pip is pointing to the other one? (e.g. you run pip install opencv-python, which installs it for python 2, but you are using python 3). If this is so, then use pip3 install opencv-python | I am installing Python on Windows 10 and trying to install the opencv and numpy extensions in the command window. I get no error installing them and it says they are successfully installed. But when I try to check the installation and import cv2, it does not recognize it and gives me the error: no module named cv2.
Can anybody help me with this problem? Is there something wrong in the installation process or do I need to install something else?
I checked the newest version of each and used the one compatible with my system.
Thanks. | 0 | 1 | 3,551 |
0 | 52,332,078 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-14T01:42:00.000 | 0 | 1 | 0 | Some modules can be imported in python previously but now can only be imported in ipython2 | 52,323,907 | 0 | python,linux,python-2.7,numpy | Make sure the python path that you given in bashrc is correct.
Also it will be good to use conda environment to try out the same since there is confusion in python environments. For that you can follow the below steps:
Create the environment and activate it using following commands:
conda create -n test_env python=2.7
conda activate test_env
conda install ipykernel
ipython kernel install --name test_env --user
Then install the required packages in the environment that you created and try to import it within the created environment. | Previously I installed pytorch,PIL,numpy... using pip. After that I installed python3. Thus ipython switched from python2 to python3. I have to use ipython2 to start python2 kernel. These modules still works well in ipython2, but when I run a python script using python, python2, python2.7, they all raise ImportError:
ImportError: No module named PIL (or numpy, torch, ...)
When I run this command: sudo pip install numpy
it returns:
Requirement already satisfied: numpy in
/usr/local/lib/python3.5/dist-packages (1.15.1)
when running this command: sudo pip2 install numpy
return: Requirement already satisfied (use --upgrade to upgrade): numpy in /usr/lib/python2.7/dist-packages
When I run python, then import sys; sys.path, it shows:
['', '/home/szy/miniconda2/lib/python27.zip', '/home/szy/miniconda2/lib/python2.7', '/home/szy/miniconda2/lib/python2.7/plat-linux2', '/home/szy/miniconda2/lib/python2.7/lib-tk', '/home/szy/miniconda2/lib/python2.7/lib-old', '/home/szy/miniconda2/lib/python2.7/lib-dynload', '/home/szy/.local/lib/python2.7/site-packages', '/home/szy/miniconda2/lib/python2.7/site-packages']
The location of numpy is not among them.
and the sys.path in ipython2:
['', '/usr/local/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/home/szy/.local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/local/lib/python2.7/dist-packages/IPython/extensions', '/home/szy/.ipython']
What's wrong?
Previously I could run scripts with python and import these modules. | 0 | 1 | 62 |
0 | 52,564,257 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-14T12:41:00.000 | 10 | 4 | 0 | pip install face_recognition giving error | 52,332,268 | 1 | python,face-recognition | I ran into this issue as well. I am using windows and have a python environment that I am installing the requirements to.
I ran pip install cmake and then pip install dlib. I no longer received the error and successfully installed dlib.
CMake must be installed to build the following extensions: dlib
Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib | 0 | 1 | 35,770 |
0 | 52,334,301 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-09-14T14:37:00.000 | 1 | 3 | 0 | Matplotlib - How to strip extra whitespaces from a plot without needing to save it? | 52,334,185 | 0.066568 | python,matplotlib,whitespace | The easiest way imho is to click on the "configure subplots" button and adjust the sliders, because you see the result immediately. You could also call the tight_layout() function directly on plt before show() (a sketch follows this record).
How can I strip extra whitespaces from a plot?
I know you can strip the extra whitespace from a plot when you save it; then you just do this: plt.savefig('file_name.png', bbox_inches='tight')
But I can't find any similar arguments you can pass to plt.plot() to have no extra whitespaces. Is it possible to pass an argument to plt.plot()? | 0 | 1 | 76 |
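A minimal sketch of trimming padding at plot time rather than at save time; the data is arbitrary:

```python
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [3, 1, 2])
plt.margins(0)        # drop the default padding around the data
plt.tight_layout()    # shrink the figure's padding around the axes
plt.show()
```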
0 | 52,335,074 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2018-09-14T14:55:00.000 | 0 | 1 | 0 | Python/SQL/Excel I have 12 datasets and I want to combine them to one representative set | 52,334,490 | 0 | python,sql,excel,statistics | It sounds like you want to break it all out into (60*12) rows with 3 columns: one recording the application number, another recording the time, and another recording the location. Then a model could dummy out each location as a predictor, and you could generate 12 simulated predictions, with uncertainty. Then, to get your one overall prediction, average those predictions instead - bootstrap and then pool the predictions if you're fancy. Model time however you want - autoregression, Kalman filter, nearest-neighbor (probably not enough data for that one though). Just don't dummy out each time point individually or you'll have a perfect-fitting model.
But be aware of the possible universe of interactions between the locations that you could model here. Dummying them all out assumes no interactions between them, or at least one you care about, or that relate to anything you care about. It just accounts for fixed effects, i.e. you're assuming that the time dynamic within each location is the same, it's just that some locations tend overall and on average to have higher application numbers than others. You could derive tons of predictors pertaining to any given location based on the application number(s) in other location(s) - current number, past number, etc. All depends on what you consider to be possible and informative to account for. | I'm trying to create a predictive curve using 12 different datasets of empirical data. Essentially I want to write a function that passes 2 variables (Number of Applications, Days) and generates a predictive curve based on the 12 datasets that i have. The datasets all have 60 days and have Number of Applications from 500 to 100,000.
I'm not really sure what the best approach would be. I was thinking maybe taking the average percentage of total applications at each day (e.g., at day 1 on average 3% of total applications are issued, at day 10 on average 10%, etc.) would be a good place to start, but I'm not sure if that's the best approach.
I have python, SQL, and excel at my disposal but I'm not necessarily looking for a specific solution as much as just a general suggestion on approach. Any help would be much appreciated! | 0 | 1 | 23 |
0 | 52,596,253 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2018-09-14T15:29:00.000 | 0 | 2 | 0 | Is tensorflow session running in parallel to the rest of my code? | 52,335,065 | 1.2 | python,multithreading,tensorflow,parallel-processing,batch-processing | After some research I found out that 'session.run' is not running concurrently to your other code. Indeed, as Ujjwal suggested, the 'tf.data.Dataset' API is the best choice for pipelining batch preprocessing and GPU execution. | I'm running my session on a GPU and I'm wondering if the 'session.run()' piece of code is running in parallel to my other code in my script.
I use batch processing on the CPU prior to running 'session.run()' in a loop and would like to pipeline this processing with the execution on the GPU. Is this already satisfied in this setting or do I need to manually start threads? | 0 | 1 | 354 |
0 | 52,335,207 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-09-14T15:29:00.000 | 0 | 2 | 0 | Is tensorflow session running in parallel to the rest of my code? | 52,335,065 | 0 | python,multithreading,tensorflow,parallel-processing,batch-processing | It entirely depends upon how you have written your code. This should be trivial to check, by checking out your CPU and GPU utilization simultanously
I normally make use of tf.data.Dataset API. I use the get_next() method of an iterator to feed data to a network. CPU and GPU work in parallel in this case. | I'm running my session on a GPU and I'm wondering if the 'session.run()' piece of code is running in parallel to my other code in my script.
I use batch processing on the CPU prior to running 'session.run()' in a loop and would like to pipeline this processing with the execution on the GPU. Is this already satisfied in this setting or do I need to manually start threads? | 0 | 1 | 354 |
0 | 66,705,862 | 0 | 0 | 0 | 1 | 1 | false | 3 | 2018-09-14T17:50:00.000 | 0 | 2 | 0 | AWS Glue - read from a sql server table and write to S3 as a custom CSV file | 52,336,996 | 0 | python,python-2.7,amazon-web-services,amazon-s3,aws-glue | This task fits AWS DMS (Data Migration Service) use case. DMS is designed to either migrate data from one data storage to another or keep them in sync. It can certainly keep in sync as well as transform your source (i.e., MSSQL) to your target (i.e., S3).
There is one non-negligible constraint in your case, though. Ongoing sync with an MSSQL source only works if your license is the Enterprise or Developer Edition, and for versions 2016-2019.
I now need to read data from a source table which is on SQL Server, fetch the data, and write it to an S3 bucket as a custom (user-defined) CSV file, say employee.csv.
I am looking for some pointers on how to do this, please.
Thanks | 0 | 1 | 1,571 |
0 | 52,344,172 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-09-15T06:18:00.000 | 1 | 3 | 0 | Pythonic way to cut specific elements from numpy array | 52,342,187 | 0.066568 | python,python-2.7,list,numpy | Why not just use c = a[b]? Fancy indexing is the NumPy way to take the values from array a at the indices in b (a tiny sketch follows this record).
Is there a Pythonic way to do this?
I know numpy.delete, but I want to keep the elements and not delete them. | 0 | 1 | 178 |
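A tiny runnable sketch of that fancy-indexing idiom:

```python
import numpy as np

a = np.array([10, 20, 30, 40])
b = [0, 2]          # indices of the elements to keep
c = a[b]            # array([10, 30])
```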
0 | 52,360,778 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-17T04:15:00.000 | 3 | 1 | 0 | Finding local min/max of a cubic function | 52,360,672 | 1.2 | python,math,scientific-computing | For a cubic function you can find the positions of the potential minima/maxima without optimization, using differentiation:
get the first and the second derivatives
find zeros of the first derivative (solve quadratic equation)
check the sign of the second derivative at the found points - it tells whether each point is a min, a max or a saddle point
Differentiation is available in the sympy package; a numpy-based sketch also follows this record.
Also check whether the problem statement requires accounting for the boundary values (as @Lakshay Garg notes in the comments).
I have a rough idea (although the computing time would be bad) of how to program this, where I create a new list of steps 0.01 or something similarly small from a to b, evaluate f at each value, then simply return the min/max of the list. This would take very long for a, b values that are very far apart.
What is the best way to go about making this? Are there any outside libraries for scientific/mathematical computing? Thank you. | 0 | 1 | 1,730 |
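A hedged numpy sketch of the derivative recipe above: root the quadratic f'(x) = 3cx^2 + 2dx + e, keep the real roots inside [a, b], and compare against the interval endpoints (the function name is hypothetical):

```python
import numpy as np

def cubic_extrema(a, b, c, d, e, f):
    """Extreme values of f(x) = c*x^3 + d*x^2 + e*x + f on [a, b]."""
    crit = np.roots([3 * c, 2 * d, e])                     # zeros of f'(x)
    crit = [r.real for r in crit
            if abs(r.imag) < 1e-12 and a <= r.real <= b]   # real roots in [a, b]
    xs = [a, b] + crit                                     # include endpoints
    ys = [c * x**3 + d * x**2 + e * x + f for x in xs]
    return min(ys), max(ys)
```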
0 | 52,438,930 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-09-17T09:49:00.000 | 10 | 1 | 0 | How to find center points of DBSCAN clusrering in sklearn | 52,364,959 | 1.2 | python-3.x,scikit-learn,dbscan | DBSCAN doesn't have centers.
You can compute them yourself, but they may lie outside of a cluster if it is not convex (a sketch follows this record).
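A minimal sketch of computing per-cluster means from DBSCAN labels, with the caveat above that such a "center" can fall outside a non-convex cluster; the data and parameters are stand-ins:

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(200, 2)                        # stand-in data
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)
centers = {k: X[labels == k].mean(axis=0)         # per-cluster mean point
           for k in set(labels) - {-1}}           # -1 marks noise points
```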
0 | 52,387,459 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-17T19:08:00.000 | 1 | 1 | 0 | Why will GPU usage run low in NN training? | 52,374,287 | 0.197375 | python,machine-learning,pytorch | I can only guess without further research but it could be that your network is small in terms of layer-size (not number of layers) so each step of the training is not enough to occupy all the GPU resources. Or at least the ratio between the data size and the transfer speed (to the gpu memory) is bad and the GPU stays idle most of the time.
tl;dr: the gpu jobs are not long enough to justify the memory transfers | I'm running a NN training on my GPU with pytorch.
But the GPU usage is strangely "limited" at about 50-60%.
That's a waste of computing resources, but I can't get it any higher.
I'm sure that the hardware is fine, because running 2 of my processes at the same time, or training a simple NN (DCGAN, for instance), can occupy 95% or more of the GPU (which is how it is supposed to be).
My NN contains several convolution layers and it should use more GPU resources.
Besides, I guess the data from the dataset is being fed fast enough, because I used num_workers=64 in my DataLoader instance and my disk works just fine.
I just confused about what is happening.
Dev details:
GPU : Nvidia GTX 1080 Ti
os:Ubuntu 64-bit | 0 | 1 | 795 |
0 | 55,842,872 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-09-18T08:14:00.000 | 0 | 2 | 0 | Y_train values for symbolicRegressor | 52,381,949 | 0 | python-3.x,genetic,gplearn | Sorry for the late reply. gplearn supports regression (numeric y) with the SymbolicRegressor estimator, and with the newly released gplearn 0.4.0 we also support binary classification (two labels in y) using the SymbolicClassifier. From the sounds of things, though, you have a multi-label problem, which gplearn does not currently support. It may be something we look to support in the future. | I split my dataset into X_train, Y_train, X_test and Y_test, and then I used the SymbolicRegressor...
I've already converted the string values from the DataFrame into float values.
But by applying the symbolicRegressor I get this error:
ValueError: could not convert string to float: 'd'
Where 'd' is a value from Y.
Since all my values in Y_train and Y_test are alphabetic characters, because they are the "labels", I cannot understand why the SymbolicRegressor tries to get a float number.
Any idea? | 0 | 1 | 302 |
0 | 52,389,432 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-09-18T08:14:00.000 | 0 | 2 | 0 | Y_train values for symbolicRegressor | 52,381,949 | 1.2 | python-3.x,genetic,gplearn | According to the https://gplearn.readthedocs.io/en/stable/index.html - "Symbolic regression is a machine learning technique that aims to identify an underlying mathematical expression that best describes a relationship". Pay attention to mathematical. I am not good at the topic of the question and gplearn's description does not clearly define area of applicability / restrictions.
However, according to the source code https://gplearn.readthedocs.io/en/stable/_modules/gplearn/genetic.html, the fit() method of the BaseSymbolic class contains the line X, y = check_X_y(X, y, y_numeric=True), where check_X_y() is sklearn.utils.validation.check_X_y(). The argument y_numeric means: "Whether to ensure that y has a numeric type. If dtype of y is object, it is converted to float64. Should only be used for regression algorithms".
So y values must be numeric. | I split my dataset into X_train, Y_train, X_test and Y_test, and then I used the SymbolicRegressor...
I've already converted the string values from the DataFrame into float values.
But by applying the SymbolicRegressor I get this error:
ValueError: could not convert string to float: 'd'
Where 'd' is a value from Y.
Since all my values in Y_train and Y_test are alphabetic characters, because they are the "labels", I cannot understand why the SymbolicRegressor tries to get a float number.
Any idea? | 0 | 1 | 302 |
0 | 52,383,425 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2018-09-18T09:18:00.000 | 0 | 4 | 0 | MemoryError with numpy arange | 52,383,129 | 0 | python,numpy,matplotlib,out-of-memory | In this case the numpy function logspace is more suitable: np.arange(1e3, 1e15, 10) steps linearly by 10, so it would have to allocate about 1e14 elements - hence the MemoryError - whereas logspace steps in the exponent.
The answer to the example is
np.logspace(3, 15, num=13, endpoint=True)
I am using the plt.yticks() with matplotlib imported as plt but this does not matter here anyway.
I have plots where as the y axis is varying from 1e3 to 1e15. Those are log plots.
Matplotlib is automatically displaying those with ticks with 1e2 steps and I want to have a step of 10 instead (in order to be able to use the minorticks properly).
I want to use the plt.yticks(numpy.arange(1e3, 1e15, 10)) command as said, but numpy.arange(1e3, 1e15, 10) results in a MemoryError. Isn't it supposed to output an array of length 13? Why does the memory get full?
How can I get around this issue without building the array manually?
I also tried using built-in range but it won't work with floats.
Thank you. | 0 | 1 | 911 |
0 | 52,387,276 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-18T13:00:00.000 | 1 | 3 | 0 | Reading large CSV file with Pandas freezes computer | 52,387,191 | 0.066568 | python,pandas,csv | You're probably loading all of the data in your RAM, thus allocating all memory available, forcing your system to rely on swap memory (writing temporary data to the disk, which is MUCH slower).
It should solve the issue if you split the data into chunks that fit in your memory - maybe 1 GB each (a chunked-reading sketch follows this record).
I have 2x8 GB RAM and an Intel Core i5 processor and using the juypter notebook. While loading the file the RAM Monitoring goes up to 100%. It stays at 100% or 96% for some minutes and then my computer clock stopped and my screen is frozen. Even if I wait 2 hours my computer is not able to use any more, so I have to restart.
My question is:
Do I need to split the data? Would it help? Or is it a general performance problem with my laptop?
It is the first time that I am working with such a 'large' dataset (I still think 25 GB is not too much.) | 0 | 1 | 593 |
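A hedged sketch of the chunked approach, which keeps only a reduced piece of each chunk in memory; the path, chunk size, and filter column are placeholders:

```python
import pandas as pd

parts = []
for chunk in pd.read_csv('shared/data.csv', chunksize=1000000):  # hypothetical path
    parts.append(chunk[chunk['value'] > 0])   # shrink each chunk before keeping it
df = pd.concat(parts, ignore_index=True)
```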
0 | 65,549,840 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2018-09-19T11:29:00.000 | 2 | 3 | 0 | How do I plot for Multiple Linear Regression Model using matplotlib | 52,404,857 | 0.132549 | python,matplotlib,machine-learning,regression,linear-regression | You can use Seaborn's regplot function, and use the predicted and actual data for comparison. It is not the same as plotting a best fit line, but it shows you how well the model works.
sns.regplot(x=y_test, y=y_predict, ci=None, color="b") | I try to Fit Multiple Linear Regression Model
Y = c + a1*X1 + a2*X2 + a3*X3 + a4*X4 + a5*X5 + a6*X6
Had my model had only 3 variables, I would have used a 3D plot.
How can I plot this? I basically want to see what the best fit line looks like, or should I plot multiple scatter plots and see the effect of each individual variable
Y = a1X1 when all others are zero and see the best fit line.
What is the best approach for these models? I know it is not possible to visualize higher dimensions; I just want to know what the best approach should be. I am desperate to see the best fit line | 0 | 1 | 24,001 |
0 | 59,574,232 | 0 | 1 | 0 | 0 | 2 | false | 28 | 2018-09-19T11:36:00.000 | 3 | 5 | 0 | Get a list of categories of categorical variable (Python Pandas) | 52,404,971 | 0.119427 | python,pandas,categorical-data | Try executing the below code.
List_Of_Categories_In_Column = list(df['Categorical Column Name'].value_counts().index)
Thanks! | 0 | 1 | 67,005 |
0 | 67,443,900 | 0 | 1 | 0 | 0 | 2 | false | 28 | 2018-09-19T11:36:00.000 | 0 | 5 | 0 | Get a list of categories of categorical variable (Python Pandas) | 52,404,971 | 0 | python,pandas,categorical-data | df.column name.value_counts() # to see total number of values for each categories in a column
df.column name.value_counts().index # to see only the categories name
df.column name .value_counts().count() # to see how many categories in a column (only number) | I have a pandas DataFrame with a column representing a categorical variable. How can I get a list of the categories? I tried .values on the column but that does not return the unique levels.
Thanks! | 0 | 1 | 67,005 |
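For a column that is actually of categorical dtype, pandas also exposes the levels directly; a tiny sketch:

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a'], dtype='category')
s.cat.categories   # all defined categories: Index(['a', 'b'], dtype='object')
s.unique()         # categories that actually occur in the data
```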
0 | 52,409,256 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-19T15:06:00.000 | 2 | 2 | 0 | RandomForestClassifiers sklearn apply(X) | 52,408,980 | 0.197375 | python-3.x,scikit-learn,random-forest,sklearn-pandas | It gives you, for every tree of your forest, the index of the leaf that each data point ends up in.
This is what is then used to predict the class of your point.
Could anyone explain which indices does it return? Related fucntion in Matlab?
Thanks | 0 | 1 | 33 |
0 | 52,435,565 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-19T15:42:00.000 | 0 | 1 | 0 | Is it possible to customize Plotly x-axis hoverinfo in Python? | 52,409,641 | 0 | python,plotly | I think maybe you want to change the tick labels of the x-axis instead of the hoverinfo.
The x in hoverinfo means the x-coordinates of the points. So if you truly want to revise the x values, you would have to change the x-coordinates of the points themselves - perhaps converting them to strings or some other special form.
Of course, you could also use hovertext to fully present all the information you want. If you are worried that the \n symbol does not work there, you can use <br> directly.
I haven't found a way to modify the hover text along the x-axis though. I would like to modify it to be more than the default x-axis value at that location. | 0 | 1 | 130 |
0 | 52,422,031 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-20T09:29:00.000 | 1 | 2 | 0 | Move array of doubles from Python to Java | 52,421,822 | 0.099668 | java,python,arrays | Saving a file to disk to exchange the data between different applications sounds like a hacky approach to me.
Depending on your structure and complexity, I would consider implementing a message queue (e.g., Redis) or a document database (e.g., MongoDB or a preferred alternative), with respective clients, to do the data exchange between the apps.
On the data format itself, I would choose JSON or CSV for the task. If you need human readability or strict structure, JSON is your tool; if the data is meant only for machines to read, CSV takes less space to store the same amount of data (a Python-side JSON sketch follows this record).
These arrays sometimes need to be recalculated and then used afterwards by the Java service. What is the best way to dump a NumPy array to the disk and load it in Java as float[][]?
I know that I could use JSON to do this (Python script dumps the NumPy array to a JSON file and Java service recovers float[][] from this), but is there any other, "preferred" way? | 1 | 1 | 57 |
0 | 52,422,304 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-20T09:40:00.000 | 0 | 4 | 0 | Open CV to Capture Unique Objects from a Video | 52,422,060 | 0 | python,opencv | Do you need to detect license plates, etc.? Or just notice if something happens? For the latter, you could use a very simple approach: take an average of, say, the frames of the last 30 seconds and subtract it from the current frame. If the mean absolute value of the delta image is above a threshold, that could be the change you are looking for (a hedged sketch follows this record). | I was doing a frame slicing from the OpenCV Library in Python, and I am successfully able to create frames from the video being tested on.
I am doing it on a CCTV camera installed at a parking entry gateway where the video plays 24x7, and at times a car stands still for a good number of minutes, leading to many consecutive frames of the same vehicle.
My question is how can I create a frame only when a new vehicle enters the parking lot? | 0 | 1 | 810 |
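A hedged OpenCV sketch of the running-average idea above; the video source, threshold, and update rate are placeholders to tune for the scene:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture('gate.mp4')              # hypothetical CCTV source
bg = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if bg is None:
        bg = gray.copy()
    if np.abs(gray - bg).mean() > 15:           # scene changed noticeably
        cv2.imwrite('event.png', frame)         # keep a frame only on change
    cv2.accumulateWeighted(gray, bg, 0.01)      # slowly-updating running average
```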
0 | 52,436,519 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-09-21T04:11:00.000 | 2 | 2 | 0 | numpy Broadcasting for user functions | 52,436,499 | 1.2 | python,python-3.x,numpy | You could define your own function f = lambda x: np.sin(x) if x < 1 else np.cos(x) and then use numpy's builtin vectorizer f_broadcasting = np.vectorize(f).
This doesn't offer any speed improvement (and the additional overhead can slow down small problems), but it gives you the desired broadcasting behavior (a runnable sketch follows this record).
np.sin(a) takes sin of all the entries of ndarray. What if I need to define my own function (for a stupid example, f(x) = sin(x) if x<1 else cos(x)) with broadcasting behavior? | 0 | 1 | 134 |
0 | 52,504,486 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-21T08:22:00.000 | 0 | 3 | 0 | Solve 3D least squares in numpy/scipy | 52,439,564 | 1.2 | python,numpy,scipy,linear-algebra,least-squares | In fact the answer was simple: I just needed to create bigger matrices Y and X by horizontally stacking the Y_k (to create Y) and the X_k (to create X). Then I can just solve a regular 2d least squares problem: minimize norm(Y - A.dot(X)) (a sketch follows this record). | For some integer K around 100, I have 2 * K (n, n) arrays: X_1, ..., X_K and Y_1, ..., Y_K.
I would like to perform K least squares simultaneously, i.e. find the n by n matrix A minimizing the sum of squares over k: \sum_k norm(Y_k - A.dot(X_k), ord='fro') ** 2 (A must not depend on k).
I am looking for an easy way to do this with numpy or scipy.
I know the function I want to minimize is a quadratic form in A so I could do it by hand, but I'm looking for an off-the-shelf way of doing it. Is there one? | 0 | 1 | 834 |
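A minimal numpy sketch of the stacked formulation described in the answer above; the problem sizes and random data are stand-ins:

```python
import numpy as np

n, K = 4, 100                                   # stand-in sizes
X_list = [np.random.rand(n, n) for _ in range(K)]
Y_list = [np.random.rand(n, n) for _ in range(K)]

Y = np.hstack(Y_list)                           # (n, n*K)
X = np.hstack(X_list)                           # (n, n*K)
# min_A ||Y - A X||_F  <=>  min ||X.T A.T - Y.T||, the standard lstsq form
A = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
```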
0 | 52,460,262 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-21T09:24:00.000 | 0 | 2 | 0 | NVIDIA K-80 GPU does not run with Deep Learning Image Tensorflow | 52,440,606 | 1.2 | python,tensorflow,keras,google-compute-engine | The problem was with my use of requirements.txt.
I created it on my laptop with pip freeze, uploaded it to the VM and used pip to install all the requirements.
In this way my requirements.txt included tensorflow. As a result, pip installed the repository version, which does not include GPU support, replacing the pre-installed tensorflow build that did.
I was able to figure this out by repeating my steps and checking GPU utilization along the way, per the suggestion of @Shintlor (thanks!).
I have created another VM and did not use requirements.txt; rather, I installed all the missing packages one by one. After that, the model trains about 20 times faster than on my laptop. | I have created a virtual machine in Google Compute us-east-1c region with the following specifications: n1-standard-2 (2 vCPU, 7.5 GB memory), 1 NVIDIA Tesla K80 GPU, boot disk: Deep Learning Image Tensorflow 1.10.1 m7 CUDA 9.2.
When I first logged in to the machine, it asked me to install the drivers and I agreed. It gave me some warning messages which I did not save.
I tried to train a model written entirely in Keras with TF backend.
However, judging by the speed and CPU utilization (both similar to what it does on my laptop, slow and using almost all CPU available), GPU is not used.
This is also confirmed by the TF output:
2018-09-21 08:39:48.602158: I
tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports
instructions that this TensorFlow binary was not compiled to use: AVX2
FMA
It did not mention any GPU. (Thanks @Matias Valdenegro !)
In my code I did not reference the GPU explicitly, on the understanding that TF takes care of it automatically.
Any ideas?
Many thanks. | 0 | 1 | 985 |
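One quick way to verify which devices the installed build can see, matching the diagnosis in the answer above (this API exists in the TF 1.x line used here):
from tensorflow.python.client import device_lib

# an empty GPU entry means the CPU-only wheel is installed,
# or the driver is not loaded
print(device_lib.list_local_devices())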
0 | 52,443,366 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2018-09-21T11:53:00.000 | 1 | 3 | 0 | Keras floods Jupyter cell output during fit (verbose=1) | 52,443,200 | 0.066568 | python,keras,jupyter-notebook,jupyter,tqdm | verbose=2 should be used for interactive outputs. | When running a keras model inside a Jupyter notebook with the "verbose=1" option, I started getting not the single-line progress status updates I used to, but a flood of status lines updated at each batch. See attached picture. Restarting Jupyter or the browser is not helping.
Jupyter notebook server is: 5.6.0, keras is 2.2.2, Python is Python 3.6.5
Please help.
cell content:
history = model.fit(x=train_df_scaled, y=train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=1, validation_data=(validation_df_scaled, validation_labels), shuffle=True)
output flood example: (it is thousands of lines like this)
loss: 217.5794 - mean_absolute_error: 11.166 - ETA: 32:42 - loss: 216.9500 - mean_absolute_error: 11.165 - ETA: 32:21 - loss: 216.6378 - mean_absolute_error: 11.164 - ETA: 32:00 - loss: 216.0345 - mean_absolute_error: 11.164 - ETA: 31:41 - loss: 215.6621 - mean_absolute_error: 11.166 - ETA: 31:21 - loss: 215.4639 - mean_absolute_error: 11.171 - ETA: 31:02 - loss: 215.1654 - mean_absolute_error: 11.173 - ETA: 30:44 - loss: 214.6583 - mean_absolute_error: 11.169 - ETA: 30:27 - loss: 213.8844 - mean_absolute_error: 11.164 - ETA: 30:10 - loss: 213.3308 - mean_absolute_error: 11.163 - ETA: 29:54 - loss: 213.1179 - mean_absolute_error: 11.167 - ETA: 29:37 - loss: 212.8138 - mean_absolute_error: 11.169 - ETA: 29:25 - loss: 212.7157 - mean_absolute_error: 11.174 - ETA: 29:11 - loss: 212.5421 - mean_absolute_error: 11.177 - ETA: 28:56 - loss: 212.1867 - mean_absolute_error: 11.178 - ETA: 28:42 - loss: 211.8032 - mean_absolute_error: 11.180 - ETA: 28:28 - loss: 211.4079 - mean_absolute_error: 11.179 - ETA: 28:15 - loss: 211.2733 - mean_absolute_error: 11.182 - ETA: 28:02 - loss: 210.8588 - mean_absolute_error: 11.179 - ETA: 27:50 - loss: 210.4498 - mean_absolute_error: 11.178 - ETA: 27:37 - loss: 209.9327 - mean_absolute_error: 11.176 - ETA: 27: | 0 | 1 | 2,733 |
0 | 52,445,923 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2018-09-21T11:53:00.000 | 1 | 3 | 0 | Keras floods Jupyter cell output during fit (verbose=1) | 52,443,200 | 0.066568 | python,keras,jupyter-notebook,jupyter,tqdm | Two things I would recommend:
Try restarting the Jupyter Notebook server.
Try a different browser than the one you're using; perhaps your browser got an update and it's breaking things! (Usually, Chrome is bad with notebooks.) | When running a keras model inside a Jupyter notebook with the "verbose=1" option, I started getting not the single-line progress status updates I used to, but a flood of status lines updated at each batch. See attached picture. Restarting Jupyter or the browser is not helping.
Jupyter notebook server is: 5.6.0, keras is 2.2.2, Python is Python 3.6.5
Please help.
cell content:
history = model.fit(x=train_df_scaled, y=train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=1, validation_data=(validation_df_scaled, validation_labels), shuffle=True)
output flood example: (it is thousands of lines like this)
loss: 217.5794 - mean_absolute_error: 11.166 - ETA: 32:42 - loss: 216.9500 - mean_absolute_error: 11.165 - ETA: 32:21 - loss: 216.6378 - mean_absolute_error: 11.164 - ETA: 32:00 - loss: 216.0345 - mean_absolute_error: 11.164 - ETA: 31:41 - loss: 215.6621 - mean_absolute_error: 11.166 - ETA: 31:21 - loss: 215.4639 - mean_absolute_error: 11.171 - ETA: 31:02 - loss: 215.1654 - mean_absolute_error: 11.173 - ETA: 30:44 - loss: 214.6583 - mean_absolute_error: 11.169 - ETA: 30:27 - loss: 213.8844 - mean_absolute_error: 11.164 - ETA: 30:10 - loss: 213.3308 - mean_absolute_error: 11.163 - ETA: 29:54 - loss: 213.1179 - mean_absolute_error: 11.167 - ETA: 29:37 - loss: 212.8138 - mean_absolute_error: 11.169 - ETA: 29:25 - loss: 212.7157 - mean_absolute_error: 11.174 - ETA: 29:11 - loss: 212.5421 - mean_absolute_error: 11.177 - ETA: 28:56 - loss: 212.1867 - mean_absolute_error: 11.178 - ETA: 28:42 - loss: 211.8032 - mean_absolute_error: 11.180 - ETA: 28:28 - loss: 211.4079 - mean_absolute_error: 11.179 - ETA: 28:15 - loss: 211.2733 - mean_absolute_error: 11.182 - ETA: 28:02 - loss: 210.8588 - mean_absolute_error: 11.179 - ETA: 27:50 - loss: 210.4498 - mean_absolute_error: 11.178 - ETA: 27:37 - loss: 209.9327 - mean_absolute_error: 11.176 - ETA: 27: | 0 | 1 | 2,733 |
0 | 52,505,253 | 0 | 0 | 0 | 0 | 3 | true | 7 | 2018-09-21T11:53:00.000 | 4 | 3 | 0 | Keras floods Jupyter cell output during fit (verbose=1) | 52,443,200 | 1.2 | python,keras,jupyter-notebook,jupyter,tqdm | After a few tests I found that the error is related to the tqdm import. Tqdm was used in a piece of code which was later rewritten without it. Even though I was not using tqdm in this notebook, just having it imported affected the keras output.
To fix it I just commented out this line:
from tqdm import tqdm
and everything went fine, with nice keras progress bars. Not sure how exactly it conflicted with keras though... | When running keras model inside Jupyter notebook with "verbose=1" option, I started getting not single line progress status updates as before, but a flood of status lines updated at batch. See attached picture. Restarting jupyter or the browser is not helping.
Jupyter notebook server is: 5.6.0, keras is 2.2.2, Python is Python 3.6.5
Please help.
cell content:
history = model.fit(x=train_df_scaled, y=train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=1, validation_data=(validation_df_scaled, validation_labels), shuffle=True)
output flood example: (it is thousands of lines like this)
loss: 217.5794 - mean_absolute_error: 11.166 - ETA: 32:42 - loss: 216.9500 - mean_absolute_error: 11.165 - ETA: 32:21 - loss: 216.6378 - mean_absolute_error: 11.164 - ETA: 32:00 - loss: 216.0345 - mean_absolute_error: 11.164 - ETA: 31:41 - loss: 215.6621 - mean_absolute_error: 11.166 - ETA: 31:21 - loss: 215.4639 - mean_absolute_error: 11.171 - ETA: 31:02 - loss: 215.1654 - mean_absolute_error: 11.173 - ETA: 30:44 - loss: 214.6583 - mean_absolute_error: 11.169 - ETA: 30:27 - loss: 213.8844 - mean_absolute_error: 11.164 - ETA: 30:10 - loss: 213.3308 - mean_absolute_error: 11.163 - ETA: 29:54 - loss: 213.1179 - mean_absolute_error: 11.167 - ETA: 29:37 - loss: 212.8138 - mean_absolute_error: 11.169 - ETA: 29:25 - loss: 212.7157 - mean_absolute_error: 11.174 - ETA: 29:11 - loss: 212.5421 - mean_absolute_error: 11.177 - ETA: 28:56 - loss: 212.1867 - mean_absolute_error: 11.178 - ETA: 28:42 - loss: 211.8032 - mean_absolute_error: 11.180 - ETA: 28:28 - loss: 211.4079 - mean_absolute_error: 11.179 - ETA: 28:15 - loss: 211.2733 - mean_absolute_error: 11.182 - ETA: 28:02 - loss: 210.8588 - mean_absolute_error: 11.179 - ETA: 27:50 - loss: 210.4498 - mean_absolute_error: 11.178 - ETA: 27:37 - loss: 209.9327 - mean_absolute_error: 11.176 - ETA: 27: | 0 | 1 | 2,733 |
0 | 52,463,724 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-09-22T06:21:00.000 | 0 | 2 | 0 | Autoencoder with Transfer Learning? | 52,454,090 | 0 | python-3.x,keras,computer-vision,autoencoder,resnet | From what I know, there is no proven method to do this. I'd train the autoencoder from scratch.
In theory, if you find a pre-trained CNN which does not use max pooling, you can use those weights and architecture for the encoder stage of your autoencoder. You can also extract features from a pre-trained model and concatenate/merge them into your autoencoder. But the value added is not clear, and the architecture might become overly complex.
I'm trying to train an autoencoder model with input as an image and output as a masked version of that image.
Is it possible to use weights from a pretrained model here? | 0 | 1 | 1,464 |
0 | 52,461,491 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-22T18:39:00.000 | 1 | 1 | 0 | image multi classification with keras | 52,459,748 | 1.2 | python,neural-network,keras,multilabel-classification | The best way to accomplish this is to create a new class in addition to dog and cat to handle images you have no interest in. So now, your labels would be ["dogs", "cats", "other"].
In your current architecture, your model is forced to predict a random image as either a dog or a cat, as those are the only two options it has. Adding a new class to deal with other images is generally the easiest way to make your classifier more robust to incorrect predictions. | Suppose I have two labels, "dogs" and "cats", and I want to create a multi-class classification neural network.
Now, if I provide a new random image which is not a dog or a cat, is there a way I can teach the classifier to tell me that this image is neither a dog nor a cat, instead of giving a percentage for how likely it is to be a cat or a dog? | 0 | 1 | 67
0 | 52,462,006 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2018-09-22T19:51:00.000 | 1 | 1 | 0 | Efficient Intersection of pandas dataframe with remote mongodb? | 52,460,327 | 1.2 | python,mongodb,pandas,pymongo | As you already explained, you won't be able to insert data, so the only option is to first take the unique values into a list with df['column_name'].unique(). Then you can use the $in operator in the .find() method and pass your list as a parameter. If that takes too long or the list is too large, break your list into equal chunks (a list of lists like [[id1, id2, id3], [id4, id5, id6], ...]) and loop over them: for sublist in chunks: db.xyz.find({'key': {'$in': sublist}}, {'_id': 1}). On each iteration, every value that exists in the db returns its _id; append these to a list, and you will end up with the ids of all values present in the collection.
That is just how I would do it, not necessarily the best possible way. | I have a Python pandas dataframe on my local machine, and have access to a remote mongodb server that has additional data that I can query via pymongo.
If my local dataframe is large, say 40k rows with 3 columns in each row, what's the most efficient way to check for the intersection of my local dataframe's features and a remote collection containing millions of documents?
I'm looking for general advice here. I thought I could just take a distinct list of values from each of the 3 features, and use each of these in an $or find statement, but if I have 90k distinct values for one of the 3 features it seems like a bad idea.
So any opinion would be very welcome. I don't have access to insert my local dataframe into the remote server, I only have select/find access.
thanks very much! | 0 | 1 | 112 |
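A minimal sketch of the chunked $in query described in the answer above; the connection string, collection and key names, and chunk size are illustrative assumptions:
from pymongo import MongoClient

db = MongoClient("mongodb://remote-host")["mydb"]
values = df["key_column"].unique().tolist()      # df is the local dataframe

found = []
chunk_size = 1000                                # assumed batch size
for i in range(0, len(values), chunk_size):
    chunk = values[i:i + chunk_size]
    cursor = db.my_collection.find({"key": {"$in": chunk}}, {"key": 1, "_id": 0})
    found.extend(doc["key"] for doc in cursor)

intersection = set(values) & set(found)          # values present on both sides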
0 | 64,562,864 | 0 | 0 | 0 | 0 | 1 | false | 26 | 2018-09-24T05:11:00.000 | 8 | 4 | 0 | Available options in the spark.read.option() | 52,472,993 | 1 | python,python-3.x,apache-spark | Annoyingly, the documentation for the option method is in the docs for the json method. The docs on that method say the options are as follows (key -- value -- description):
primitivesAsString -- true/false (default false) -- infers all primitive values as a string type
prefersDecimal -- true/false (default false) -- infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles.
allowComments -- true/false (default false) -- ignores Java/C++ style comment in JSON records
allowUnquotedFieldNames -- true/false (default false) -- allows unquoted JSON field names
allowSingleQuotes -- true/false (default true) -- allows single quotes in addition to double quotes
allowNumericLeadingZeros -- true/false (default false) -- allows leading zeros in numbers (e.g. 00012)
allowBackslashEscapingAnyCharacter -- true/false (default false) -- allows accepting quoting of all character using backslash quoting mechanism
allowUnquotedControlChars -- true/false (default false) -- allows JSON Strings to contain unquoted control characters (ASCII characters with value less than 32, including tab and line feed characters) or not.
mode -- PERMISSIVE/DROPMALFORMED/FAILFAST (default PERMISSIVE) -- allows a mode for dealing with corrupt records during parsing.
PERMISSIVE : when it meets a corrupted record, puts the malformed
string into a field configured by columnNameOfCorruptRecord, and sets
other fields to null. To keep corrupt records, an user can set a
string type field named columnNameOfCorruptRecord in an user-defined
schema. If a schema does not have the field, it drops corrupt records
during parsing. When inferring a schema, it implicitly adds a
columnNameOfCorruptRecord field in an output schema.
DROPMALFORMED : ignores the whole corrupted records.
FAILFAST : throws an exception when it meets corrupted records. | When I read other people's Python code, like spark.read.option("mergeSchema", "true"), it seems that the coder already knows which parameters to use. But for a beginner, is there a place to look up the available parameters? I looked in the Apache documentation, and it shows the parameter as undocumented.
Thanks. | 0 | 1 | 38,801 |
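A minimal sketch showing how the documented key/value pairs listed in the answer above are chained in practice (the path is illustrative):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (spark.read
      .option("mode", "DROPMALFORMED")      # skip corrupt records
      .option("allowComments", "true")      # tolerate Java/C++ style comments
      .json("/path/to/data.json"))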
0 | 52,604,844 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-24T21:49:00.000 | 0 | 1 | 0 | Prediction problem- Build model using 6 months data and predict on one month data? | 52,487,832 | 0 | python,machine-learning,logic,data-science,prediction | Is your minimum unit of measurement 6 months? I hope not, but if so, I would suggest that you don't try to predict the next month.
Seasonality within a year aside, you would need daily volume measurements. I would be very worried about building anything on monthly or even weekly numbers.
In terms of modelling techniques, please stick to simple regression methods, as kungphu suggests. | I have a data set which contains site usage behavior of users over a period of six months. It contains data about:
Number of pages viewed
Number of unique cookies associated with each user
Different number of OS, Browsers used
Different number of cities visited
Everything over here is collected on a six month time frame. I have used this data to train a model to predict a target variable 'y'. Everything is numeric in format.
Now, since it is six months of data and the model is built upon it, I know I can use it to predict on the next six months of data to get the target variable y.
My question is that if instead of using it to predict on six month time frame, I use the model to predict on monthly time frame, will it give me incorrect results?
My logic tells me yes. For example, I used tree methods such as decision trees and random forests; these algorithms essentially learn thresholds to give a "0/1" output. Now, the variables I mentioned above, such as the number of associated cookies, OS, browser, etc., would have different values from a one-month standpoint versus a six-month standpoint. For example, the number of unique cookies associated with a user would be lower when seen over one month, whereas it would be higher from a six-month standpoint.
But I am confused about whether the model will automatically adjust these values when running on monthly data. Please help me understand whether my thinking is right or wrong, and provide a logical explanation if possible.
Thanks. | 1 | 1 | 421 |
0 | 52,493,234 | 0 | 1 | 0 | 0 | 1 | true | 7 | 2018-09-25T07:39:00.000 | 3 | 1 | 0 | Why pandas has its own datetime object Timestamp? | 52,492,996 | 1.2 | python,pandas,timestamp | You can go through Pandas documentation for the details:
"pandas.Timestamp" is a replacement for python datetime.datetime for
Padas usage.
Timestamp is the pandas equivalent of python’s Datetime and is
interchangeable with it in most cases. It’s the type used for the
entries that make up a DatetimeIndex, and other timeseries oriented
data structures in pandas.
Notes
There are essentially three calling conventions for the constructor.
The primary form accepts four parameters. They can be passed by
position or keyword.
The other two forms mimic the parameters from datetime.datetime. They
can be passed by either position or keyword, but not both mixed
together.
Timedeltas are differences in times, expressed in difference units,
e.g. days, hours, minutes, seconds. They can be both positive and
negative.
Timedelta is a subclass of datetime.timedelta, and behaves in a
similar manner, but allows compatibility with np.timedelta64 types
as well as a host of custom representation, parsing, and attributes.
I would say that since pandas works better with time-series data, Timestamp has become a kind of wrapper around the original built-in datetime module.
The weaknesses of Python's datetime format inspired the NumPy team to
add a set of native time series data type to NumPy. The datetime64
dtype encodes dates as 64-bit integers, and thus allows arrays of
dates to be represented very compactly. | The documentation of pandas.Timestamp states a concept well-known to every pandas user:
Timestamp is the pandas equivalent of python’s Datetime and is interchangeable with it in most cases.
But I don't understand why pandas.Timestamps are needed at all.
Why is, or was, it useful to have a different object than python's Datetime? Wouldn't it be cleaner to simply build pandas.DatetimeIndex out of Datetimes? | 0 | 1 | 416 |
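A small demonstration of the interchangeability described in the answer above:
import datetime
import pandas as pd

ts = pd.Timestamp("2018-09-25 07:39")        # pandas object
dt = datetime.datetime(2018, 9, 25, 7, 39)   # stdlib object

print(ts == dt)                    # True: they compare as equal
print(ts.asm8)                     # the underlying numpy.datetime64 value
print(pd.DatetimeIndex([ts, dt]))  # both are accepted when building an index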
0 | 52,499,867 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-25T12:13:00.000 | 1 | 2 | 0 | Filled 3D numpy mask | 52,497,995 | 0.099668 | python-2.7,numpy,geometry | If you have surface cells marked and there is no additional information, then scan the array layer by layer to get the first marked cell (or start from a known surface cell).
When you have a marked surface cell A[z,y,x], fill the line along the last dimension (the 1-D x array) until the next marked cell is met.
Then find a neighboring marked cell in the same top-level layer (same z, close y and x) and repeat filling lines until the whole section (an ellipse, possibly truncated) is filled; then continue with the next z layer.
Edit
Perhaps I am overcomplicating the problem, and a flood-fill algorithm is the simple solution. | I have a binary (0-1) 3D numpy array, which I plan to use for masking a 3D image. The mask at the moment consists of the area of a cylinder. The two centres of the faces are two arbitrary points, and the axis is not parallel to x, y or z.
How can I fill the cylinder with a pure numpy solution? | 0 | 1 | 792 |
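If the marked surface forms a closed shell, scipy.ndimage (not strictly numpy-only, but it operates directly on the array) makes the flood-fill idea from the answer's edit a one-liner; shown here on a toy hollow ball, but a closed cylinder shell behaves the same:
import numpy as np
from scipy import ndimage

z, y, x = np.mgrid[-10:11, -10:11, -10:11]
r = np.sqrt(x**2 + y**2 + z**2)
shell = ((r > 8) & (r < 9.5)).astype(np.uint8)   # marked surface voxels only

filled = ndimage.binary_fill_holes(shell)        # adds the enclosed interior
print(shell.sum(), filled.sum())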
0 | 52,499,750 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-25T13:36:00.000 | 0 | 2 | 0 | ImportError: No module named detector_classifier | 52,499,573 | 0 | python,ubuntu | I think a little more information might help. Which python version and which pip version are you using? I just googled "detector_classifier" and couldn't find anything. What library does "detector_classifier" belong to?
Without much background to go on, I would recommend making sure you have an updated pip. Depending on what operating system you're using, your configuration might need some tinkering so your system knows where to look. | I'm working with Concept Drift, but when trying to run my code I get this error:
"ImportError: No module named detector_classifier". I have been trying to install the module with pip install, but all I get is "no match found". Has anyone had this problem before? | 0 | 1 | 908
0 | 52,628,707 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-25T14:07:00.000 | 0 | 1 | 0 | Is it possible to keep all the images in one folder for tensorflow object detection API | 52,500,185 | 0 | python,tensorflow | In case you already have separate record files for train and eval (validation/test), then it's okay.
You simply put the paths of the corresponding records in
tf_record_input_reader {
input_path: "/path/to/record/record_name.record"
}
once for train_input_reader and once for eval_input_reader. In case the records are split into shards, then use the format
input_path: "/path/to/record/record_name.record-?????-of-00010" (where 00010 is the number of shards). | I am new to tensorflow and its object detection API. In its tutorial, it's said that the images must be separated into train/ and test/ folders. Actually I am working on a server where my entire data is kept in a folder called 'images' and I don't want to either change its structure or create another copy of it.
However, I have created separate train.record and test.record files as well, and it's just that I want all my images to stay together in one folder. Is that possible? If yes, then which files need to be modified? Thanks | 0 | 1 | 192
0 | 52,529,791 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-09-27T04:54:00.000 | 0 | 3 | 0 | how can i check all the values of dataframe whether have null values in them without a loop | 52,529,669 | 0 | python,pandas | This gives you all the columns and how many null values each of them has.
import pandas as pd
df = pd.DataFrame({0: [1, 2, None], 1: [2, 3, None]})
df.isnull().sum() | if all(data_Window['CI']!=np.nan):
I have used the all() function with if so that if column CI has no NA values, then it will do some operation. But I got a syntax error. | 0 | 1 | 77
0 | 52,536,297 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-27T11:42:00.000 | 0 | 1 | 0 | Understanding the output of scipy.stats.multivariate_normal | 52,536,206 | 1.2 | python,scipy | This is fine. The probability density function can be larger than 1 at a specific point. It's the integral that must equal 1.
The idea that pdf < 1 is correct for discrete variables. However, for continuous ones, the pdf is not a probability; it's a density that is integrated to obtain a probability. That is, the integral from minus infinity to infinity, over all dimensions, is equal to 1.
From what I understand, high values indicate a better fit to the given model and low values otherwise.
However, in my dataset, I see extremely large PDF(x) results, which lead me to question if I understand things correctly. The area under the PDF curve must be 1, so very large values are hard to comprehend.
For e.g., consider:
x = [-0.0007569417915494715, -0.01394295997613827, 0.000982078369890444, -0.03633664354397629, -0.03730583036106844, 0.013920453054506978, -0.08115836865224338, -0.07208494497398354, -0.06255237023298793, -0.0531888840386906, -0.006823760545565131]
mean = [0.01663645201261102, 0.07800335614699873, 0.016291452384234965, 0.012042931155488702, 0.0042637244100103885, 0.016531331606477996, -0.021702714746699842, -0.05738646649459681, 0.00921296058625439, 0.027940994009345254, 0.07548111758006244]
covariance = [[0.07921927017771506, 0.04780185747873293, 0.0788086850274493, 0.054129466248481264, 0.018799028456661045, 0.07523731808137141, 0.027682748950487425, -0.007296954729572955, 0.07935165417756569, 0.0569381100965656, 0.04185848489472492], [0.04780185747873293, 0.052300105044833595, 0.047749467098423544, 0.03254872837949123, 0.010582358713999951, 0.045792252383799206, 0.01969282984717051, -0.006089301208961258, 0.05067712814145293, 0.03146214776997301, 0.04452949330387575], [0.0788086850274493, 0.047749467098423544, 0.07841809405745602, 0.05374461924031552, 0.01871005609017673, 0.07487015790787396, 0.02756781074862818, -0.007327131572569985, 0.07895548129950304, 0.056417456686115544, 0.04181063355048408], [0.054129466248481264, 0.03254872837949123, 0.05374461924031552, 0.04538801863296238, 0.015795381235224913, 0.05055944754764062, 0.02017033995851422, -0.006505939129684573, 0.05497361331950649, 0.043858860182247515, 0.029356699144606032], [0.018799028456661045, 0.010582358713999951, 0.01871005609017673, 0.015795381235224913, 0.016260640022897347, 0.015459548918222347, 0.0064542528152879705, -0.0016656858963383602, 0.018761682220822192, 0.015361512546799405, 0.009832025009280924], [0.07523731808137141, 0.045792252383799206, 0.07487015790787396, 0.05055944754764062, 0.015459548918222347, 0.07207012779105286, 0.026330967917717253, -0.006907504360835279, 0.0753380831201204, 0.05335128471397023, 0.03998397595850863], [0.027682748950487425, 0.01969282984717051, 0.02756781074862818, 0.02017033995851422, 0.0064542528152879705, 0.026330967917717253, 0.020837940236441078, -0.003320408544812026, 0.027859582829638897, 0.01967636950969646, 0.017105000942890598], [-0.007296954729572955, -0.006089301208961258, -0.007327131572569985, -0.006505939129684573, -0.0016656858963383602, -0.006907504360835279, -0.003320408544812026, 0.024529061074105817, -0.007869287828047853, -0.006228903058681195, -0.0058974553248417995], [0.07935165417756569, 0.05067712814145293, 0.07895548129950304, 0.05497361331950649, 0.018761682220822192, 0.0753380831201204, 0.027859582829638897, -0.007869287828047853, 0.08169291677188911, 0.05731196406065222, 0.04450058445993234], [0.0569381100965656, 0.03146214776997301, 0.056417456686115544, 0.043858860182247515, 0.015361512546799405, 0.05335128471397023, 0.01967636950969646, -0.006228903058681195, 0.05731196406065222, 0.05064023101024737, 0.02830810316675855], [0.04185848489472492, 0.04452949330387575, 0.04181063355048408, 0.029356699144606032, 0.009832025009280924, 0.03998397595850863, 0.017105000942890598, -0.0058974553248417995, 0.04450058445993234, 0.02830810316675855, 0.040658283674780395]]
For this, if I compute y = multivariate_normal.pdf(x, mean, cov);
the result is 342562705.3859754.
How could this be the case? Am I missing something?
Thanks. | 0 | 1 | 1,012 |
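A quick 1-D illustration of why the density can be huge while still integrating to 1; with 11 dimensions and small variances, as in the question's data, the effect multiplies across dimensions:
from scipy.stats import norm

# a very narrow normal: density at the mean is ~398.9, far above 1,
# yet the total area under the curve is still exactly 1
print(norm(loc=0, scale=0.001).pdf(0))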
0 | 52,539,914 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-09-27T14:44:00.000 | 1 | 1 | 0 | Holoviews - network graph - change edge color | 52,539,639 | 0.197375 | python,networkx,bokeh,holoviews | Problem solved: the option to change the edge color is edge_line_color, not edge_color. | I am using holoviews and bokeh with Python 3 to create an interactive network graph from NetworkX. I can't manage to set the edge color to blank. It seems that the edge_color option does not exist. Do you have any idea how I could do that? | 0 | 1 | 284
0 | 52,544,455 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-27T19:59:00.000 | 1 | 2 | 0 | How can we parse DataFrame.describe()? | 52,544,301 | 0.099668 | python,pandas,dataframe,sklearn-pandas | print always prints in string format.
But if you check type(df.describe()) then you'll see that it is a dataframe.
So you can treat it like one. :) | How can we parse the output from DataFrame.describe()? When we print the result of DataFrame.describe() as shown in examples, it is in string format, which is why it is difficult to parse it.
I understand that the print function might be converting the output into a displayable and readable form. However, it is not easily parseable. How can we achieve this? | 0 | 1 | 110 |
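A small sketch of treating describe() as a regular DataFrame, as the answer suggests:
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.5, 6.0]})
stats = df.describe()              # a DataFrame, not a string

print(stats.loc["mean", "a"])      # index by statistic name and column
print(stats.to_dict())             # or convert it for downstream parsing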
0 | 52,544,615 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-09-27T20:02:00.000 | 2 | 3 | 0 | In Python DataFrame how to find out number of rows that have valid values of columns | 52,544,340 | 0.132549 | python,pandas,dataframe,sklearn-pandas | Use df.isnull().sum() to get the number of rows with None and NaN values.
Use df.eq(value).sum() for any other kind of value, including the empty string "". | I want to find the number of rows that have certain values such as None or "" or NaN (basically empty values) in all columns of a DataFrame object. How can I do this?
0 | 52,555,122 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-09-27T20:52:00.000 | 0 | 1 | 1 | In which deployment mode can we "Not" add nodes to a cluster in Apache Spark 2.3.1 | 52,544,955 | 1.2 | python-2.7,apache-spark,cluster-computing,worker | When the master is local, your program will run on a single machine, i.e. your edge node.
To run it in a distributed environment, i.e. on a cluster, you need to set the master to "yarn".
When the deploy mode is "client" (the default), your edge node hosts the driver program.
When the deploy mode is "cluster", any of the healthy nodes in the cluster takes that role instead.
1.Spark Standalone
2.Mesos
3.Kubernetes
4.Yarn
5.Local Mode
I have installed Apache Spark 2.3.1 on my machine and have run it in Local Mode.
In Local Mode, can we add nodes/workers to Apache Spark? | 0 | 1 | 45
0 | 57,938,416 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-09-28T10:49:00.000 | 2 | 2 | 0 | Algorithm used in Excel Fuzzy Lookup | 52,553,735 | 0.197375 | python,excel,levenshtein-distance,fuzzy-logic | The following is an excerpt from Microsoft Fuzzy Lookup Add-In for Excel, Readme.docx. I hope that helps.
Advanced Concepts
Fuzzy Lookup technology is based upon a very simple, yet flexible measure of similarity between two records.

Jaccard similarity
Fuzzy Lookup uses Jaccard similarity, which is defined as the size of the set intersection divided by the size of the set union for two sets of objects. For example, the sets {a, b, c} and {a, c, d} have a Jaccard similarity of 2/4 = 0.5 because the intersection is {a, c} and the union is {a, b, c, d}. The more that the two sets have in common, the closer the Jaccard similarity will be to 1.0.

Weighted Jaccard similarity and tokenization of records
With Fuzzy Lookup, you can assign weights to each item in a set and define the weighted Jaccard similarity as the total weight of the intersection divided by the total weight of the union. For the weighted sets {(a, 2), (b, 5), (c, 3)}, {(a, 2), (c, 3), (d, 7)}, the weighted Jaccard similarity is (2 + 3)/(2 + 3 + 5 + 7) = 5/17 = .294.
Because Jaccard similarity is defined over sets, Fuzzy Lookup must first convert data records to sets before it calculates the Jaccard similarity. Fuzzy Lookup converts the data to sets using a Tokenizer. For example, the record {"Jesper Aaberg", "4567 Main Street"} might be tokenized into the set {"Jesper", "Aaberg", "4567", "Main", "Street"}. The default tokenizer is for English text, but one may change the LocaleId property in Configure=>Global Settings to specify tokenizers for other languages.

Token weighting
Because not all tokens are of equal importance, Fuzzy Lookup assigns weights to tokens. Tokens are assigned high weights if they occur infrequently in a sample of records and low weights if they occur frequently. For example, frequent words such as "Corporation" might be given lower weight, while less frequent words such as "Abracadabra" might be given a higher weight. One may override the default token weights by supplying their own table of token weights.

Transformations
Transformations greatly increase the power of Jaccard similarity by allowing tokens to be converted from one string to another. For instance, one might know that the name "Bob" can be converted to "Robert"; that "USA" is the same as "United States"; or that "Missispi" is a misspelling of "Mississippi". There are many classes of such transformations that Fuzzy Lookup handles automatically such as spelling mistakes (using Edit Transformations described below), string prefixes, and string merge/split operations. You can also specify a table containing your own custom transformations.

Jaccard similarity under transformations
The Jaccard similarity under transformations is the maximum Jaccard similarity between any two transformations of each set. Given a set of transformation rules, all possible transformations of the set are considered. For example, for the sets {a, b, c} and {a, c, d} and the transformation rules {b=>d, d=>e}, the Jaccard similarity is computed as follows:
Variations of {a, b, c}: {a, b, c}, {a, d, c}
Variations of {a, c, d}: {a, c, d}, {a, c, e}
Maximum Jaccard similarity between all pairs:
J({a, b, c}, {a, c, d}) = 2/4 = 0.5
J({a, b, c}, {a, c, e}) = 2/4 = 0.5
J({a, d, c}, {a, c, d}) = 3/3 = 1.0
J({a, d, c}, {a, c, e}) = 2/4 = 0.5
The maximum is 1.0. Note: weighted Jaccard similarity under transformations is simply the maximum weighted Jaccard similarity across all pairs of transformed sets.

Edit distance
Edit distance is the total number of character insertions, deletions, or substitutions that it takes to convert one string to another. For example, the edit distance between "misissipi" and "mississippi" is 2 because two character insertions are required. One of the transformation providers that's included with Fuzzy Lookup is the EditTransformationProvider, which generates specific transformations for each input record and creates a transformation from the token to all words in its dictionary that are within a given edit distance. The normalized edit distance is the edit distance divided by the length of the input string. In the previous example, the normalized edit distance is 2/9 = .222. | I was working on matching company names from two sets. I was trying to code it in Python with Levenshtein distance. I was having issues with short company names and their trailing parts like "Pvt" and "Ltd". I ran the same sets through Excel Fuzzy Lookup and got good results. Is there a way I can see how Excel Fuzzy Lookup is implemented and use the same approach in Python? | 0 | 1 | 2,872
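A minimal Python sketch of the weighted Jaccard idea from the excerpt above; the token weights are assumptions that downweight boilerplate suffixes like "Pvt"/"Ltd":
def weighted_jaccard(a, b, weights):
    # unweighted tokens default to 1.0; low-weight tokens count for little
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    inter = sum(weights.get(t, 1.0) for t in tokens_a & tokens_b)
    union = sum(weights.get(t, 1.0) for t in tokens_a | tokens_b)
    return inter / union if union else 0.0

weights = {"pvt": 0.1, "ltd": 0.1, "limited": 0.1}
print(weighted_jaccard("Acme Pvt Ltd", "Acme Limited", weights))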
0 | 63,618,206 | 0 | 0 | 0 | 1 | 1 | true | 1 | 2018-09-28T16:10:00.000 | 0 | 1 | 0 | unable to read the mongodb data (json) in pyspark | 52,559,131 | 1.2 | python,mongodb,hive,pymongo,pyspark-sql | import json
with open('D:/json/aaa.json') as f:
    d = f.read()
da = ''.join(d.split())        # strip all whitespace
print(type(da))
print(da)
daa = da.replace("u'", "'")    # drop the u string prefixes
daaa = json.loads(daa)
print(daaa)
I am satisfied with the answer, hence closing this question. | I am connecting to the MongoDB database via pymongo and achieved the expected result of fetching the data outside the db in JSON format. But my task is to create a Hive table via pyspark. I found that the MongoDB-provided JSON (RF719) is not supported by Spark; when I tried to load the data into a pyspark dataframe, it showed up as a corrupted record. Any possible way of converting the JSON format in Python would also be fine. Please suggest a response.
0 | 53,070,860 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-09-29T11:57:00.000 | 0 | 1 | 0 | Predicting python script in Jupyter Lab | 52,568,135 | 0 | python,rstudio,jupyter,jupyter-lab | Auto-completion is supported in Jupyter already. You could try typing enu and then hitting Tab; enumerate will then be suggested automatically. | I am an R user currently learning Python.
In RStudio, when I type a piece of code, it automatically gives me predictions of the functions I am looking for - like an autocomplete.
I would like to have something similar in Jupyter Lab. Is it possible? | 0 | 1 | 200 |
0 | 52,700,217 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-09-29T12:08:00.000 | -1 | 1 | 0 | how can I use Transfer Learning for LSTM? | 52,568,209 | 1.2 | python-3.x,conv-neural-network,lstm | As I have discovered, we can't use transfer learning on the LSTM weights. I think the cause is the internal structure of LSTM networks. | I intend to implement image captioning. Would it be possible to apply transfer learning to the LSTM? I have used a pretrained VGG16 (transfer learning) to extract features as input to the LSTM. | 0 | 1 | 289
0 | 52,579,026 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-09-30T09:32:00.000 | 1 | 2 | 0 | How to save Numpy 4D array to CSV? | 52,576,617 | 0.099668 | python,csv,numpy,keras | You can try using pickle to save the data. It is much more flexible and easier to handle compared to np.save. | I am trying to save a series of images to a CSV file so they can be used as a training dataset for an AlexNet Keras model. The shape is (15, 224, 224, 3).
So far I am having issues doing this. I have managed to put all the data into a numpy array, but now I cannot save it to a file.
Please help. | 0 | 1 | 8,759 |
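A minimal sketch of both options, pickle (as suggested above) and np.save; the file names are illustrative:
import pickle
import numpy as np

data = np.random.rand(15, 224, 224, 3).astype(np.float32)

with open("images.pkl", "wb") as f:      # pickle route
    pickle.dump(data, f)
with open("images.pkl", "rb") as f:
    restored = pickle.load(f)

np.save("images.npy", data)              # np.save keeps dtype and shape too
restored2 = np.load("images.npy")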
0 | 52,630,545 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-10-01T03:49:00.000 | 3 | 2 | 0 | difference between datashader and other plotting libraries | 52,584,339 | 1.2 | python,matplotlib,plotly,datashader | It may be helpful to first think of Datashader not in comparison to Matplotlib or Plotly, but in comparison to numpy.histogram2d. By default, Datashader will turn a long list of (x,y) points into a 2D histogram, just like histogram2d. Doing so only requires a simple increment of a grid cell for each new point, which is easily accelerated to machine-code speeds with Numba and is trivial to parallelize with Dask. The resulting array is then at most the size of your display screen, no matter how big your dataset is. So it's cheap to process in a separate program that adds axes, labels, etc., and it will never crash your browser.
By contrast, a plotting program like Plotly will need to convert each data point into a JSON or other serialized representation, pass that to JavaScript in the browser, have JavaScript draw a shape into a graphics buffer, and make each such shape support hover and other interactive features. Those interactive features are great, but it means Plotly is doing vastly more work per data point than Datashader is, and requires that the browser can hold all those data points. The only computation Datashader needs to do with your full data is to linearly scale the x and y locations of each point to fit the grid, then increment the grid value, which is much easier than what Plotly does.
The comparison to Matplotlib is slightly more complicated, because with an Agg backend, Matplotlib is also pre-rendering to a fixed-size graphics buffer before display (somewhat like Datashader). But Matplotlib was written before Numba and Dask (making it more difficult to speed up), it still has to draw shapes for each point (not just a simple increment), it can't fully parallelize the operations (because later points overwrite earlier ones in Matplotlib), and it provides anti-aliasing and other nice features not available in Datashader. So again Matplotlib is doing a lot more work than Datashader.
But if what you really want to do is see the faithful 2D distribution of billions of data points, Datashader is the way to go, because that's really all it is doing. :-) | I want to understand the clear difference between Datashader and other graphing libraries, e.g. Plotly/Matplotlib etc.
I understand that in order to plot millions/billions of data points we need Datashader, as other plotting libraries will hang the browser.
But what exactly is it that makes Datashader fast without hanging the browser, and how exactly is the plotting done so that it doesn't put any load on the browser?
Also, is it that Datashader puts no load on the browser because, in the backend, it creates the graph from my dataframe and sends only the image to the browser, which is why it's fast?
Please explain; I am unable to understand the ins and outs clearly. | 0 | 1 | 901
0 | 55,257,524 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-01T06:56:00.000 | 0 | 2 | 0 | Can we ensemble fastText along with SVM? | 52,585,975 | 0 | python,machine-learning,scikit-learn,ensemble-learning | In your use case you can, but as you're dealing with 3 models you should keep in mind that:
The models have different mechanics to use the predict() method:
FastText uses an internal file (serialized model with .bin extension, for example) with all embeddings and wordNGrams and you can pass raw text directly;
For SVM and NaiveBayes you're obliged to pre-process the data using CountVectorizer/TfidfVectorizer and a LabelEncoder, get the result, pass it back through the LabelEncoder, and deliver the result.
At the end you will need to deal with different probabilities (if you're predicting with k > 1), and you will probably need to take care of this explicitly.
If you're going to serialize it for production you'll need to pickle the SVM and NB models and use the .bin file for the FastText model, and of course the embeddings for the former need to be instantiated too. This can hurt your response time a little if you need to predict in near real time. | I'm trying to ensemble three different models (FastText, SVM, NaiveBayes).
I thought of using Python to do this. I'm sure that we can ensemble NaiveBayes and SVM models, but can we ensemble fastText using Python?
Can anyone please advise regarding the same? | 0 | 1 | 780
0 | 52,589,887 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-10-01T07:38:00.000 | 3 | 3 | 0 | Possible ways to embed python matplotlib into my presentation interactively | 52,586,506 | 1.2 | python,matplotlib,powerpoint,jupyter,rise | When putting a picture in PowerPoint you can decide whether you want to embed it or link to it. If you decide to link to the picture, you would be free to change it outside of powerpoint. This opens up the possibility for the following workflow:
Next to your presentation you have a Python IDE or Juypter notebook open with the scripts that generate the figures. They all have a savefig command in them to save to exactly the location on disc from where you link the images in PowerPoint. If you need to change the figure, you make the changes in the python code, run the script (or cell) and switch back to PowerPoint where the newly created image is updated.
Note that I would not recommend putting too much effort into finding a better solution to this, but rather spend the time thinking about good visual representations of the data, for the following reasons: 1. If your instructor's demands are completely unreasonable ("I like blue better than green, so you need to use blue.") then it's not worth spending effort on satisfying them at all. 2. If your instructor's demands are based on the fact that the current representation does not allow the data to be interpreted correctly, this can be prevented by putting more thought into good plots prior to the presentation. This is a learning process which, I guess, your instructor wants you to internalize. After all, you won't get a degree in computer science for writing a PowerPoint backend to matplotlib, but rather for being able to present your research in a way suited to your subject. | I need to present my data in various graphs. Usually what I do is take a screenshot of my graph (I almost exclusively make them with matplotlib) and paste it into my PowerPoint.
Unfortunately my direct superior seems not to be happy with the way I present them. Sometimes he wants certain things in log scale and sometimes he dislikes my color palette. The data is all there, but because it's an image there's no way I can change it in the meeting.
My superior seems to really care about those things and spends quite a lot of time telling me how to make plots in every single meeting. He (usually) will not comment on my data before I make a plot the way he wants.
That's where my question becomes relevant. Right now what I have in mind is to have an interactive canvas embedded in my PowerPoint such that I can change the range of the axes, the color of my data points, etc. in real time. I have been searching online for such a thing but have come up empty. I wonder if it can be done, and how?
For some simple graphs Excel plot may work, but usually I have to present things in 1D or 2D histograms/density plots with millions of entries. Sometimes I have to fit points with complicated mathematical formulas and that's something Excel is incapable of doing and I must use scipy and pandas.
The closest thing to this I found online is RISE with Jupyter, which converts a Jupyter notebook into a slide show. I think that is a good start which allows me to run Python code in real time inside the presentation, but I would like to use PowerPoint-related solutions if possible, mostly because I am familiar with how PowerPoint works and I still find certain PowerPoint features useful.
Thank you for all your help. While I do prefer PowerPoint, any other products that allows me to modify plots in my presentation in real time or alternatives of rise are welcomed. | 0 | 1 | 8,150 |
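A minimal sketch of the linked-figure workflow from the accepted answer; the output path is an assumption standing in for whatever path the slide links to:
import matplotlib.pyplot as plt

def export_figure(x, y, log_scale=False, color="tab:blue"):
    # re-run with new options during the meeting; the linked picture in
    # PowerPoint picks up the overwritten file on refresh
    fig, ax = plt.subplots()
    ax.plot(x, y, color=color)
    if log_scale:
        ax.set_yscale("log")
    fig.savefig("C:/talk/figures/fig1.png", dpi=150)
    plt.close(fig)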
0 | 52,592,108 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2018-10-01T12:57:00.000 | 2 | 2 | 0 | Should I use the dictionary or the series to hold a bunch of dataframe? | 52,591,696 | 0.197375 | python,pandas,dataframe,panel | Method 2 also works. Since Python 3.6 it remembers the order it is created too. | Suppose I have several dataframes: df1, df2, df3, etc. The label with each dataframes is A1, A2, A3 etc. I want to use this information as a whole, so that I can pass them. Three methods came into my mind:
method 1
use a label list: labels=["A1", "A2", "A3"...] and a list of dataframes dfs=[df1, df2, df3...].
method 2
use a dictionary: d={"A1": df1, "A2": df2, "A3": df3}.
method 3
use a pandas series: s=pd.Series([df1, df2, df3], index=["A1", "A2", "A3"]).
I will use the labels and dataframes sequentially, therefore I think method 1 and method 3 should be my choices. However, using method 1 requires me to pass two items, while using method 3 I only need to keep one object. Is it common practice to put dataframes in a series? I seldom see people do this; is it against best practice? Are there any better suggestions? | 0 | 1 | 117
0 | 52,592,997 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-01T13:56:00.000 | 0 | 1 | 0 | Most efficient datatype for iteratively adding to? | 52,592,761 | 0 | python-3.x,data-science | In your question you say the data collection will be finished "possibly months from now". That is an enormous amount of time in comparison with the efficiency of Python, pandas, or any other programming tool I can imagine. I just created 100k random dictionaries of length 18 containing floats, saved them into a text file (CSV format) and loaded them with pandas into a dataframe. It took 2 seconds to save and 0.5 seconds to load. So just append every new record to the file and periodically create backups of your choice. | I have a web scraper which iteratively retrieves data from web pages, and I would like to add the attributes pulled to a pandas dataframe (eventually) for running simple statistics and analysis. The current script returns a dictionary every time a new page is scraped.
I understand adding a new row or column to an existing pandas dataframe is slow, so my thought was to add the dictionaries to a CSV as they are retrieved, and then convert this CSV all at once to a dataframe when the data collection is finished (possibly months from now). I will be dealing with up to 100,000 dicts, each with 18 key-value pairs.
Is there a more efficient method or datatype to use in this scenario? | 0 | 1 | 16 |
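A minimal sketch of appending each scraped dict to a CSV as it arrives; the field names are illustrative stand-ins for the 18 keys:
import csv
import os

FIELDS = ["url", "title", "price"]

def append_record(record, path="scraped.csv"):
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()   # header only once, on first write
        writer.writerow(record)
Months later, pandas.read_csv("scraped.csv") turns the whole file into a dataframe in one go.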
0 | 67,233,384 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-10-02T16:45:00.000 | 1 | 2 | 0 | How to cluster *features* based on their correlations to each other with sklearn k-means clustering | 52,612,841 | 0.099668 | python,machine-learning,scikit-learn,k-means,sklearn-pandas | Create a new matrix by taking the correlations of all the features with df.corr(); now use this new matrix as your dataset for the k-means algorithm.
This will give you clusters of features which have similar correlations. | I have a pandas dataframe with rows as records (patients) and 105 columns as features.(properties of each patient)
I would like to cluster, not the patients, not the rows as is customary, but the columns so I can see which features are similar or correlated to which other features. I can already calculate the correlation each feature with every other feature using df.corr(). But how can I cluster these into k=2,3,4... groups using sklearn.cluster.KMeans?
I tried KMeans(n_clusters=2).fit(df.T) which does cluster the features (because I took the transpose of the matrix) but only with a Euclidian distance function, not according to their correlations. I prefer to cluster the features according to correlations.
This should be very easy but I would appreciate your help. | 0 | 1 | 1,423 |
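A minimal sketch of the correlation-matrix clustering suggested above, assuming df is the 105-column patient dataframe:
import pandas as pd
from sklearn.cluster import KMeans

corr = df.corr()                                  # 105 x 105 matrix
km = KMeans(n_clusters=3, random_state=0).fit(corr)

# features with similar correlation profiles share a label
clusters = pd.Series(km.labels_, index=corr.columns)
print(clusters.sort_values())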
0 | 52,775,839 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-10-03T03:22:00.000 | 0 | 1 | 0 | Have a keyword parser function return variables into local namespace | 52,619,262 | 1.2 | python | A dict is a good way to package a variable number of named values. If the parser returns a dict, then there is a single object that can be queried to get those names and values, avoiding the problem of needing to know the number and names ahead of time.
Another possibility would be to put the parser into a class, either as a factory method (classmethod or staticmethod returning an instance) or as a regular method (invoked during or after __init__), where the class instance holds the parsed values. | This may be a straight-up unwise idea so I'd best explain the context. I am finding that some of my functions have multiple and sometimes mutually exclusive or interdependent keyword arguments - ie, they offer the user the ability to input a certain piece of data as (say) a numpy array or a dataframe. And then if a numpy array, an index can be separately passed, but not if it it's a dataframe.
Which has led me to wonder if it's worth creating some kind of keyword parser function to handle these exclusivities/dependencies. One issue with this is that the keyword parser function would then need to return any variables created (and ex-ante, we would not know their number or their names) into the namespace of the function that called it. I'm not sure if that's possible, at least in a reasonable way (I imagine it could be achieved by directly changing the local dict but that's sometimes said to be a bad idea).
So my question is:
1. Is this a bad idea in the first place? Would creating separate functions depending on whether the input was a dataframe or ndarray be more sensible and simpler?
2. Is it possible without too much hacking to have a function return an unspecified number of variables into the local namespace?
Apologies for the slightly vague nature of this question but any thoughts gratefully received. | 0 | 1 | 30 |
0 | 52,642,522 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2018-10-04T08:18:00.000 | 1 | 1 | 0 | pip install not working for pandas + numpy | 52,642,130 | 1.2 | python,pandas,numpy | You should not run pip inside the Python CLI (the interactive interpreter). You must use pip in your system CLI, like Windows PowerShell.
Use the command below to install packages:
pip install package-name
for example:
pip install numpy scipy matplotlib pandas
Or you can do this one by one, each package in a single pip install line. | I am new to Python and am trying to pip install pandas, numpy and a few other libraries, but it won't work.
My method is:
go to the command prompt and type python -m pip install pandas --user. I have also tried every other way, like pip install etc. Each time I do it, it just says "syntax error". Solutions?
Thank you. | 0 | 1 | 721 |
0 | 52,675,950 | 0 | 0 | 0 | 0 | 3 | true | 7 | 2018-10-04T11:22:00.000 | 7 | 3 | 0 | How to evaluate Word2Vec model | 52,645,459 | 1.2 | python,nlp,word2vec,embedding,word-embedding | There's no generic way to assess token-vector quality, if you're not even using real words against which other tasks (like the popular analogy-solving) can be tried.
If you have a custom ultimate task, you have to devise your own repeatable scoring method. That will likely either be some subset of your actual final task, or well-correlated with that ultimate task. Essentially, whatever ad-hoc method you may be using the 'eyeball' the results for sanity should be systematized, saving your judgements from each evaluation, so that they can be run repeatedly against iterative model improvements.
(I'd need more info about your data/items and ultimate goals to make further suggestions.) | I have my own corpus and I train several Word2Vec models on it.
What is the best way to evaluate them one against each-other and choose the best one? (Not manually obviously - I am looking for various measures).
It is worth noting that the embedding is for items and not words, so I can't use any existing benchmarks.
Thanks! | 0 | 1 | 6,758 |
0 | 55,913,014 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2018-10-04T11:22:00.000 | 3 | 3 | 0 | How to evaluate Word2Vec model | 52,645,459 | 0.197375 | python,nlp,word2vec,embedding,word-embedding | One way to evaluate the word2vec model is to develop a "ground truth" set of words. Ground truth will represent words that should ideally be closest together in vector space. For example if your corpus is related to customer service, perhaps the vectors for "dissatisfied" and "disappointed" will ideally have the smallest euclidean distance or largest cosine similarity.
You create this table for ground truth, maybe it has 200 paired words. These 200 words are the most important paired words for your industry / topic. To assess which word2vec model is best, simply calculate the distance for each pair, do it 200 times, sum up the total distance, and the smallest total distance will be your best model.
I like this way better than the "eye-ball" method, whatever that means. | I have my own corpus and I train several Word2Vec models on it.
What is the best way to evaluate them one against each-other and choose the best one? (Not manually obviously - I am looking for various measures).
It is worth noting that the embedding is for items and not words, so I can't use any existing benchmarks.
Thanks! | 0 | 1 | 6,758 |
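A minimal sketch of the pair-scoring idea from the answer above, using cosine similarity; vectors is assumed to map each item to its embedding, and pairs holds the hand-picked ground-truth pairs:
import numpy as np

def pair_score(vectors, pairs):
    # mean cosine similarity over the ground-truth pairs; higher is better
    total = 0.0
    for w1, w2 in pairs:
        v1, v2 = vectors[w1], vectors[w2]
        total += np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return total / len(pairs)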
0 | 58,868,796 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2018-10-04T11:22:00.000 | 1 | 3 | 0 | How to evaluate Word2Vec model | 52,645,459 | 0.066568 | python,nlp,word2vec,embedding,word-embedding | One of the ways of evaluating the Word2Vec model would be to apply the K-Means algorithm on the features generated by the Word2Vec. Along with that create your own manual labels/ground truth representing the instances/records. You can calculate the accuracy of the model by comparing the clustered result tags with the ground truth label.
E.g.: Cluster 0 - Positive - {"This is a good restaurant", "Good food here", "Not so good dinner"}
Cluster 1 - Negative - {"This is a fantastic hotel", "food was stale"}
Now, compare the tags/labels generated by the clusters with the ground truth values of the instances/sentences in the clusters and calculate the accuracy. | I have my own corpus and I train several Word2Vec models on it.
What is the best way to evaluate them one against each-other and choose the best one? (Not manually obviously - I am looking for various measures).
It is worth noting that the embedding is for items and not words, so I can't use any existing benchmarks.
Thanks! | 0 | 1 | 6,758 |
0 | 52,648,696 | 0 | 1 | 0 | 0 | 2 | false | 13 | 2018-10-04T13:53:00.000 | -1 | 4 | 0 | Why doesn't a new Conda environment come with packages like numpy? | 52,648,520 | -0.049958 | python,package,anaconda,conda | You can check the packages you have in your environment with the command:
conda list
If a package is not listed, you just have to add it with the command:
conda install numpy | I am going through the painful process of learning how to manage packages/ different (virtual) environments in Python/Anaconda. I was told that Anaconda is basically a python installation with all the packages I need (e.g. numpy, scipy, sci-kit learn etc).
However, when I create a new environment, none of these packages is readily available. I cannot import them when using PyCharm with the newly created environment. When I check the Pycharm project interpreter, or the anaconda navigator environments tab, It seems that indeed none of these packages are installed in my new environments. Why is this? It doesn't make sense to me to provide all these packages, but then not make them ready for use when creating new environments. Do I have to install all these packages manually in new env's or am I missing something?
Kindest regards, and thanks in advance. | 0 | 1 | 12,607 |
0 | 52,648,738 | 0 | 1 | 0 | 0 | 2 | false | 13 | 2018-10-04T13:53:00.000 | 3 | 4 | 0 | Why doesn't a new Conda environment come with packages like numpy? | 52,648,520 | 0.148885 | python,package,anaconda,conda | I don't know about "conda" environments but in general virtual environments are used to provide you a "unique" environment. This might include different packages, different environment variables etc.
The whole point of making a new virtual environment is to have a separate place where you can install all the binaries (and other resources) required for your project. If you have some pre-installed binaries in the environment, doesn't it defeat the purpose of creating one in the first place?
The fact that you can create multiple environments helps you to separate binaries that might be needed by one and not by the other.
For instance, if you are creating a project which requires numpy:1.1 but you have numpy:2.1 installed, then you have to change it. So basically, by not installing any other packages, they avoid making assumptions about your project's requirements. | I am going through the painful process of learning how to manage packages/ different (virtual) environments in Python/Anaconda. I was told that Anaconda is basically a python installation with all the packages I need (e.g. numpy, scipy, sci-kit learn etc).
However, when I create a new environment, none of these packages is readily available. I cannot import them when using PyCharm with the newly created environment. When I check the Pycharm project interpreter, or the anaconda navigator environments tab, It seems that indeed none of these packages are installed in my new environments. Why is this? It doesn't make sense to me to provide all these packages, but then not make them ready for use when creating new environments. Do I have to install all these packages manually in new env's or am I missing something?
Kindest regards, and thanks in advance. | 0 | 1 | 12,607 |
0 | 53,033,952 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-04T16:23:00.000 | 0 | 2 | 0 | Facing issues while installing rasa_core | 52,651,437 | 0 | python,installation,chatbot,rasa-nlu,rasa-core | I faced the same issue and was able to install rasa_core after resolving the dependencies.
Please try below:
First install twisted
pip install Twisted
Then, install rasa_core
pip install rasa_core | I am trying to install rasa_core in Python by using the !pip install rasa_core command.
But I am getting an error.
Below is the error:
Failed building wheel for Twisted
The scripts freeze_graph.exe, saved_model_cli.exe, tensorboard.exe, tflite_convert.exe, toco.exe and toco_from_protos.exe are installed in 'C:\Users\user\AppData\Roaming\Python\Python36\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Command "c:\programdata\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\user\\AppData\\Local\\Temp\\pip-install-fot9mu3e\\Twisted\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\user\AppData\Local\Temp\pip-record-vp3wq_7u\install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in C:\Users\user\AppData\Local\Temp\pip-install-fot9mu3e\Twisted\
Could anyone please help me. | 0 | 1 | 868 |
0 | 52,665,360 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-05T08:09:00.000 | 1 | 1 | 0 | How to choose coefficients with scikit LinearRegression | 52,661,107 | 1.2 | python,scikit-learn,autoregressive-models | Just make a dataset X with 11 columns [x0-97, x0-10, x0-9,...,x0-1]. Then series of x0 will be your target Y. | I want to find an autoregressive model on some data stored in a dataframe and I have 96 data points per day. The data is the value of solar irradiance in some region and I know it has a 1-day seasonality. I want to obtain a simple linear model using scikit LinearRegression and I want to specify which lagged data points to use. I would like to use the last 10 data points, plus the data point that has a lag of 97, which corresponds to the data point of 24 hour earlier. How can I specify the lagged coefficients that I want to use? I don't want to have 97 coefficients, I just want to use 11 of them: the previous 10 data points and the data point 97 positions back. | 0 | 1 | 38 |
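A hedged sketch of building those 11 lag columns with pandas before fitting, as the answer above suggests; the column name "x0" and the placeholder series are assumptions about the data layout:
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({"x0": range(300)})        # placeholder irradiance series
lags = list(range(1, 11)) + [97]             # last 10 points plus the ~1-day lag
X = pd.concat({f"x0-{k}": df["x0"].shift(k) for k in lags}, axis=1)
y = df["x0"]
mask = X.notna().all(axis=1)                 # drop rows that lack a full lag history
model = LinearRegression().fit(X[mask], y[mask])
print(dict(zip(X.columns, model.coef_)))     # one coefficient per chosen lag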
0 | 52,665,472 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2018-10-05T09:42:00.000 | 2 | 1 | 0 | Tensor shape modification using slicing and None | 52,662,727 | 1.2 | python,tensorflow | Indeed, None adds a new dimension. You can also use tf.newaxis for this which is a bit more explicit IMHO.
The new dimension is added in axis 1 because that's where it appears in the index. E.g. input[:, :, None] should result in shape (19, 4, 1, 64, 64, 3) and so on.
It might get clearer if we write all the dimensions in the slicing: input[:, None, :, :, :, :]. In slicing, : simply means taking all elements of the dimension. So by using one :, we take all elements of dimension 0 and then "move on" to dimension 1. Since None appears here, we know that the new size-1 axis should be in dimension 1. Accordingly, the remaining dimensions get "pushed back". | I am a bit puzzled by how to read and understand a simple line of code:
I have a tensor input of shape (19,4,64,64,3).
The line of code input[:, None] returns a tensor of shape (19, 1, 4, 64, 64, 3).
How should I understand the behavior of that line? It seems that None is adding a dimension with a size of 1. But why is it added at that specific position (between 19 and 4)? | 0 | 1 | 59
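A quick demonstration of the behavior described in the answer, assuming TensorFlow 2 eager mode:
import tensorflow as tf

x = tf.zeros((19, 4, 64, 64, 3))
print(x[:, None].shape)        # (19, 1, 4, 64, 64, 3) - new axis at position 1
print(x[:, tf.newaxis].shape)  # same result, slightly more explicit
print(x[:, :, None].shape)     # (19, 4, 1, 64, 64, 3) - new axis at position 2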
0 | 52,670,172 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-10-05T15:23:00.000 | 1 | 1 | 0 | Tensorflow Object Detection API: How to ignore regions during training? | 52,668,857 | 1.2 | python,tensorflow,object-detection,tensorflow-serving,object-detection-api | If those regions to ignore remain static, as in, the contents of the regions don't change throughout the dataset, then the model can learn to ignore those regions.
If you really want the model to ignore them during training, then mask them with a constant value. | I'm using the object detection API from the models/research python repo on Ubuntu 16.04, and I wanted to fine-tune a pre-trained model (at the moment I'm interested in SSD with MobileNet or Inception backbones) on the UA-DETRAC dataset.
The problem is that there are specific regions, with their bounding boxes, which are marked as "ignored regions", and I wouldn't want the model to train on what it thinks are false positives but which are actually true objects, just not annotated (they are included in those regions).
I thought of cropping the images to exclude those regions, but I would lose some information.
Is there a built-in possibility to mark them as "don't care" boxes, or should I modify the code?
Thanks | 0 | 1 | 811 |
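A minimal sketch of the constant-value masking suggested in the answer, done in NumPy before the images are written out; the (ymin, xmin, ymax, xmax) pixel-box format and frame size are assumptions:
import numpy as np

def mask_ignored_regions(image, boxes, value=128):
    # Overwrite each ignored-region box with a constant grey value
    out = image.copy()
    for ymin, xmin, ymax, xmax in boxes:
        out[ymin:ymax, xmin:xmax, :] = value
    return out

frame = np.zeros((540, 960, 3), dtype=np.uint8)       # placeholder video frame
masked = mask_ignored_regions(frame, [(0, 0, 120, 300)])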
0 | 59,511,989 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2018-10-06T07:18:00.000 | -1 | 5 | 0 | No module named 'prompt_toolkit.formatted_text' | 52,676,660 | -0.039979 | python,jupyter-notebook,jupyter | Check your environment variable Path!
In the system variable Path, add the following line:
C:\Users\<your-username>\AppData\Roaming\Python\Python37\Scripts | I am totally new to Jupyter Notebook.
Currently, I am using the notebook with R and it is working well.
Now, I tried to use it with Python and I receive the following error.
[I 09:00:52.947 NotebookApp] KernelRestarter: restarting kernel (4/5),
new random ports
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code,
run_globals)
File
"/home/frey/.local/lib/python3.6/site-packages/ipykernel_launcher.py",
line 15, in from ipykernel import kernelapp as app
File
"/home/frey/.local/lib/python3.6/site-packages/ipykernel/init.py",
line 2, in from .connect import *
File
"/home/frey/.local/lib/python3.6/site-packages/ipykernel/connect.py",
line 13, in from IPython.core.profiledir import ProfileDir
File "/home/frey/.local/lib/python3.6/site-packages/IPython/init.py",
line 55, in from .terminal.embed import embed
File
"/home/frey/.local/lib/python3.6/site-packages/IPython/terminal/embed.py",
line 16, in from IPython.terminal.interactiveshell import
TerminalInteractiveShell
File
"/home/frey/.local/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py",
line 20, in from prompt_toolkit.formatted_text import PygmentsTokens
ModuleNotFoundError: No module named 'prompt_toolkit.formatted_text'
[W 09:00:55.956 NotebookApp] KernelRestarter: restart failed [W
09:00:55.956 NotebookApp] Kernel 24117cd7-38e5-4978-8bda-d1b84f498051
died, removing from map.
Hopefully, someone can help me. | 0 | 1 | 13,255 |
0 | 52,676,845 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2018-10-06T07:18:00.000 | 1 | 5 | 0 | No module named 'prompt_toolkit.formatted_text' | 52,676,660 | 0.039979 | python,jupyter-notebook,jupyter | It's more stable to create a kernel with an Anaconda virtualenv.
Follow these steps.
Execute Anaconda prompt.
Type conda create --name $ENVIRONMENT_NAME R -y
Type conda activate $ENVIRONMENT_NAME
Type python -m ipykernel install
Type ipython kernel install --user --name $ENVIRONMENT_NAME
Then, you'll have a new jupyter kernel named 'R' with R installed. | I am totally new to Jupyter Notebook.
Currently, I am using the notebook with R and it is working well.
Now, I tried to use it with Python and I receive the following error.
[I 09:00:52.947 NotebookApp] KernelRestarter: restarting kernel (4/5),
new random ports
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code,
run_globals)
File
"/home/frey/.local/lib/python3.6/site-packages/ipykernel_launcher.py",
line 15, in from ipykernel import kernelapp as app
File
"/home/frey/.local/lib/python3.6/site-packages/ipykernel/init.py",
line 2, in from .connect import *
File
"/home/frey/.local/lib/python3.6/site-packages/ipykernel/connect.py",
line 13, in from IPython.core.profiledir import ProfileDir
File "/home/frey/.local/lib/python3.6/site-packages/IPython/init.py",
line 55, in from .terminal.embed import embed
File
"/home/frey/.local/lib/python3.6/site-packages/IPython/terminal/embed.py",
line 16, in from IPython.terminal.interactiveshell import
TerminalInteractiveShell
File
"/home/frey/.local/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py",
line 20, in from prompt_toolkit.formatted_text import PygmentsTokens
ModuleNotFoundError: No module named 'prompt_toolkit.formatted_text'
[W 09:00:55.956 NotebookApp] KernelRestarter: restart failed [W
09:00:55.956 NotebookApp] Kernel 24117cd7-38e5-4978-8bda-d1b84f498051
died, removing from map.
Hopefully, someone can help me. | 0 | 1 | 13,255 |
0 | 52,677,839 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2018-10-06T09:34:00.000 | 0 | 5 | 0 | AttributeError("module 'pandas' has no attribute 'read_csv'") | 52,677,658 | 0 | python,pandas,attributeerror | There is a possibility that you named your own script read_csv.py or csv.py, so Python itself is confused about what to import; if so, you can rename it to something else like test_csv_read.py.
Also remove any files in the path named read_csv.pyc or csv.pyc. | I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying
AttributeError("module 'pandas' has no attribute 'read_csv'").
I have looked all over StackOverflow and the consensus is that there is likely another file in my CWD with the same name, but I believe there isn't.
Even if I create a new project and call it, for example, Firstproject.py, and immediately import pandas as pd, I get the error.
I would appreciate the help. I can provide more info if required. | 0 | 1 | 9,258 |
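A quick check, assuming nothing beyond a working pandas install, to see whether a local file is shadowing the real package:
import pandas
# Should print a path inside site-packages; a path inside your project
# directory means a local read_csv.py/csv.py/pandas.py is shadowing the package.
print(pandas.__file__)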
0 | 55,653,559 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2018-10-06T09:34:00.000 | 0 | 5 | 0 | AttributeError("module 'pandas' has no attribute 'read_csv'") | 52,677,658 | 0 | python,pandas,attributeerror | Here is the solution
When you downloaded Python, the 32-bit version may have been installed automatically. If so, delete it, go download the 64-bit version instead, and the problem should be solved :) | I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying
AttributeError("module 'pandas' has no attribute 'read_csv'").
I have looked all over StackOverflow and the consensus is that there is likely another file in my CWD with the same name, but I believe there isn't.
Even if I create a new project and call it, for example, Firstproject.py, and immediately import pandas as pd, I get the error.
I would appreciate the help. I can provide more info if required. | 0 | 1 | 9,258 |
0 | 60,574,804 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2018-10-06T09:34:00.000 | 0 | 5 | 0 | AttributeError("module 'pandas' has no attribute 'read_csv'") | 52,677,658 | 0 | python,pandas,attributeerror | In my case, I had installed the module "panda" instead of "pandas". I was getting this error even though no conflicting .py files were present in the working folder.
Then I recognized my mistake and installed the "pandas" package, and the problem was resolved. | I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying
AttributeError("module 'pandas' has no attribute 'read_csv'").
I have looked all over StackOverflow and the consensus is that there is likely another file in my CWD with the same name, but I believe there isn't.
Even if I create a new project and call it, for example, Firstproject.py, and immediately import pandas as pd, I get the error.
I would appreciate the help. I can provide more info if required. | 0 | 1 | 9,258 |
0 | 52,695,712 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-06T20:14:00.000 | 0 | 2 | 0 | Is it possible to change the loss function dynamically during training? | 52,682,979 | 0 | python,tensorflow,machine-learning | You have to implement your own algorithm. This is mostly possible with Tensorflow. | I am working on a machine learning project and I am wondering whether it is possible to change the loss function while the network is training. I'm not sure how to do it exactly in code.
For example, start training with cross entropy loss and then halfway through training, switch to 0-1 loss. | 0 | 1 | 783 |
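One way to realize this with a custom TensorFlow training loop is sketched below; the model and data are placeholders, and because the true 0-1 loss has no useful gradient, hinge loss stands in for it here:
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])   # placeholder model
opt = tf.keras.optimizers.Adam()
loss_a = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss_b = tf.keras.losses.Hinge()                           # differentiable stand-in for 0-1 loss
x = tf.random.normal((32, 4))                              # placeholder batch
y = tf.cast(tf.random.uniform((32, 1)) > 0.5, tf.float32)

epochs = 10
for epoch in range(epochs):
    loss_fn = loss_a if epoch < epochs // 2 else loss_b    # switch halfway through training
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))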
0 | 52,807,275 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-07T05:39:00.000 | 0 | 2 | 0 | Tensorflow: Each row in the training dataset contains 99% of the previous rows data - can I optimize it before running the training? | 52,685,768 | 0 | python,tensorflow,dataset,tensorflow-datasets | If you use Data API then you can cache the input. Also maybe TF's support for Kafka might be a help here as you could model it as a stream of data.
Another approach would be to reuse some data between session calls. Then you would have to use a resource variable (in the current Variable() spec this means using the use_resource flag in the constructor). This way your CSV could contain only minute-level data and you would just add it to the variable, creating a kind of circular buffer from it. | I am searching for a way to make my training and testing data smaller in file size.
The model I want to end up with
I want to train a model that predicts whether or not a crypto coin price is making an x% (0.4 or so) jump within the next 10 minutes (i.e. I want the model to answer with a Yes or No).
Every minute I will feed the model the last 3 hours of price and volume data (that means 180 data points, each containing 5 values: open, close, high, low prices and volume).
My current training and testing sets are BIG
My training and testing sets are therefore rows in a CSV file where each row contains 5 x 180 = 900 numbers plus one label (Yes or No), and with about 100k rows I guess this is a very large dataset.
But each row in the CSV contains mostly redundant data
But each "neighbor" row in the CSV file only contains 1 new data point, since every next row is only 1 minute "older" and therefore has only dropped the data point of the oldest minute and introduced a new point for the next minute.
Is it possible to set up the training code so the CSV file only needs to have the latest minute's data point in each row? | 0 | 1 | 133
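A sketch of the Data API idea from the answer above: store one minute per row and build the 180-step windows on the fly; the OHLCV layout and the label alignment are assumptions:
import tensorflow as tf

series = tf.random.normal((1000, 5))                       # placeholder: OHLCV per minute
labels = tf.cast(tf.random.uniform((1000,)) > 0.5, tf.int32)

window = 180
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window))                # each element: (180, 5)
label_ds = tf.data.Dataset.from_tensor_slices(labels[window - 1:])
train_ds = tf.data.Dataset.zip((ds, label_ds)).cache().batch(32)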
0 | 52,692,667 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2018-10-07T20:28:00.000 | 1 | 1 | 0 | Error - No module named '_pywrap_tensorflow' | 52,692,622 | 1.2 | python,tensorflow | Please downgrade Python to 3.6.x and try again. I had faced similar issue while using Python 3.7.x. Once I downgraded it, it worked. Make sure you adjust your path variable accordingly. "Pip" also may have to be modified and the corresponding path variable. | I have seen multiple questions for the same issue. I went through all the answers and tried all of them. I updated pip, tensorflow, python etc to the latest versions or as suggested in the answers and still I am facing this issue.
Pip version 18.0,
Python 3.7 | 0 | 1 | 45 |
0 | 52,694,329 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-10-08T01:18:00.000 | 1 | 1 | 0 | Difference between dask pivot_table and pandas pivot_table python | 52,694,289 | 1.2 | python,python-3.x,pandas,pivot-table,dask | Definitely Dask. The way pandas works is that it processes everything as a monolithic block in memory and is not parallelizable, while Dask is made to break the data frame into chunks that can be processed in parallel. | It seems we can achieve the same goal using pivot_table from both libraries, but which one is more efficient in performance for a large dataset? | 0 | 1 | 515
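A side-by-side sketch, with made-up column names; note that dask's pivot_table additionally requires the columns field to be a categorical with known categories:
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({"shop": ["a", "a", "b"],
                    "month": ["jan", "feb", "jan"],
                    "sales": [1.0, 2.0, 3.0]})
pt_pandas = pdf.pivot_table(index="shop", columns="month", values="sales")

ddf = dd.from_pandas(pdf, npartitions=2)
ddf["month"] = ddf["month"].astype("category").cat.as_known()
pt_dask = ddf.pivot_table(index="shop", columns="month", values="sales").compute()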
0 | 70,800,730 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-10-08T13:37:00.000 | 0 | 2 | 0 | Random Forest Multi Class Python does not improve accuracy | 52,703,577 | 0 | python,random-forest,multiclass-classification | Try to tune below parameters
n_estimators
This is the number of trees you want to build before taking the maximum vote or average of the predictions. A higher number of trees gives you better performance but makes your code slower.
max_features
This is the maximum number of features Random Forest is allowed to try in an individual tree. There are multiple options available in Python for assigning the maximum features.
min_samples_leaf
A leaf is the end node of a decision tree. A smaller leaf makes the model more prone to capturing noise in the training data. You can start with some minimum value like 75 and gradually adjust it, checking which value gives the highest accuracy.
Otherwise:
You can try XGBoost, LightGBM or AdaBoost; they often perform better than Random Forest
Try not to remove missing values; complex ensemble models such as RF and GBM handle them well. You may have lost some useful information by doing so, especially if a large percentage of your data is missing in some features
Try to increase n_estimators and max_depth; maybe your trees are not deep enough to catch all the data properties | I am making a random forest multi-classifier model. Basically there are hundreds of households which have 200+ features, and based on these features I have to classify them into one of the classes {1,2,3,4,5,6}.
The problem I am facing is that I cannot improve the accuracy of the model, however much I try. I have used RandomizedSearchCV and also GridSearchCV but I can only achieve an accuracy of around 68%.
Some points to note
The sample points are unbalanced. This is the order of the classes in decreasing frequency: {1,4,2,7,6,3}. I have used class_weight = "balanced" but it does not improve the accuracy.
I have tried numbers of estimators ranging from 50 to 450
I have also calculated the f1 score, and am not only going by accuracy to compare the models
What else do you guys suggest to improve the accuracy/f1-score? I have been stuck with this problem for a long time. Any help will be highly appreciated. | 0 | 1 | 118
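A sketch of searching over the three parameters named in the answer above, with class_weight and an f1-based score; the dataset here is synthetic:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=6, random_state=0)      # placeholder data
grid = {"n_estimators": [100, 300, 450],
        "max_features": ["sqrt", "log2", 0.5],
        "min_samples_leaf": [1, 25, 75]}
search = GridSearchCV(RandomForestClassifier(class_weight="balanced", random_state=0),
                      grid, scoring="f1_weighted", cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)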
0 | 70,800,413 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-10-08T13:37:00.000 | 0 | 2 | 0 | Random Forest Multi Class Python does not improve accuracy | 52,703,577 | 0 | python,random-forest,multiclass-classification | You can check if the features are on different scales. If they are, it is suggested to use some type of normalization. This step is essential for many linear-based models to perform well. You can take a quick look at the distributions of each numeric feature to decide what type of normalization to use. | I am making a random forest multi-classifier model. Basically there are hundreds of households which have 200+ features, and based on these features I have to classify them into one of the classes {1,2,3,4,5,6}.
The problem I am facing is that I cannot improve the accuracy of the model, however much I try. I have used RandomizedSearchCV and also GridSearchCV but I can only achieve an accuracy of around 68%.
Some points to note
The sample points are unbalanced. This is the order of the classes in decreasing frequency: {1,4,2,7,6,3}. I have used class_weight = "balanced" but it does not improve the accuracy.
I have tried numbers of estimators ranging from 50 to 450
I have also calculated the f1 score, and am not only going by accuracy to compare the models
What else do you guys suggest to improve the accuracy/f1-score? I have been stuck with this problem for a long time. Any help will be highly appreciated. | 0 | 1 | 118
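A sketch of the scale check and normalization step from the answer above, using a Pipeline so the scaler is fit only on training data; the data is a placeholder with deliberately mixed scales:
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 4) * np.array([1, 10, 1000, 100000])  # mixed feature scales
y = np.random.randint(0, 2, 200)
print(X.std(axis=0))                        # quick look at the per-feature spread
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)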