GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 46,582,120 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-05T08:24:00.000 | 0 | 1 | 0 | Converting many string values to categories | 46,581,018 | 0 | python,pandas | I applied the command below and it works:
df['kategorie'] = df['kategorie'].astype('category') | I have a data frame with one column full of string values. They need to be converted into categories. Due to the huge amount of data, it would be inconvenient to define the categories in a dictionary. Is there any other way in pandas to do that? | 0 | 1 | 34 |
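A minimal, self-contained sketch of the approach from the answer above (the frame and column contents are hypothetical):

```python
import pandas as pd

# Hypothetical frame with a string column holding repeated values
df = pd.DataFrame({"kategorie": ["red", "blue", "red", "green", "blue"]})

# Convert in place; pandas infers the categories from the data,
# so no dictionary of categories has to be defined by hand
df["kategorie"] = df["kategorie"].astype("category")

print(df["kategorie"].cat.categories)  # inferred: ['blue', 'green', 'red']
print(df["kategorie"].dtype)           # category
```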
0 | 47,052,669 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-05T10:31:00.000 | 0 | 1 | 0 | Load portions of matrix into RAM | 46,583,487 | 0 | python,data-structures,microcontroller,micropython | Sorry, but your question contains the answer - if you need to work with 32x32 tiles, the best format is one that represents your big image as a sequence of tiles (and e.g. not as one big 256x256 image, though reading tiles out of that is not rocket science either and should be fairly trivial to code in MicroPython; 32x32 tiles would of course be more efficient).
You don't describe the exact format of your images, but I wouldn't use the pickle module for this; instead, store the images as raw bytes and load them into array.array() objects (using the in-place .readinto() operation). | I'm writing some image processing routines for a micro-controller that supports MicroPython. The bad news is that it only has 0.5 MB of RAM. This means that if I want to work with relatively big images/matrices like 256x256, I need to treat them as a collection of smaller matrices (e.g. 32x32) and perform the operation on those. Leaving aside the question of reconstructing the final output of the original (256x256) matrix from its (32x32) submatrices, I'd like to focus on how to do the loading/saving from/to disk (an SD card in this case) of these smaller matrices from a big image.
Given that intro, here is my question: assuming I have a 256x256 image on disk that I'd like to apply some operation to (e.g. convolution), what's the most convenient way of storing that image so it's easy to load it into 32x32 image patches? I've seen there is a MicroPython implementation of the pickle module; is this a good idea for my problem? | 0 | 1 | 52 |
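A rough sketch of the tile-loading idea, assuming 8-bit grayscale tiles stored as raw bytes in one file, one 32x32 tile after another (the file name and layout are assumptions):

```python
import array

TILE_W, TILE_H = 32, 32
TILE_BYTES = TILE_W * TILE_H  # 1024 bytes per 8-bit grayscale tile

# Preallocate one reusable buffer so no new memory is claimed per tile
tile = array.array("B", bytes(TILE_BYTES))

def load_tile(f, index):
    """Read tile `index` from an already-open binary file into `tile`."""
    f.seek(index * TILE_BYTES)
    f.readinto(tile)  # in-place read, no temporary objects
    return tile

with open("image_256x256.tiles", "rb") as f:
    first = load_tile(f, 0)
```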
0 | 46,584,984 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2017-10-05T11:31:00.000 | 1 | 1 | 0 | Can I install Tensorflow on both Python 2 and 3? | 46,584,556 | 1.2 | python,tensorflow | Yes, you can. The easiest way is to install Anaconda, then create one environment with Python 2.7 and one with Python 3, and install TensorFlow in both environments. | I've installed Tensorflow using Python 3 (pip3 install). Now, since Jupyter Notebook is using Python 2, and thus the python command is linked to python2.7, all the code in Jupyter Notebook gets an error (ImportError: No module named tensorflow).
Question: Can I install Tensorflow running side by side for both Python 2 and 3? | 0 | 1 | 948 |
0 | 46,597,662 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-10-05T18:46:00.000 | 0 | 2 | 0 | Programming an interactive slackbot - python | 46,592,760 | 0 | python,csv,slack,slack-api | You can solve this using pandas in Python.
pandas is a data-processing framework.
The pandas framework can process Excel and TXT files as well as CSV files.
See the pandas documentation for details (a minimal sketch follows after this row). | I have recently been working on a slackbot and I have the basic functionality down: I am able to take simple commands and have the bot answer. But I want to know if there is any way to have the bot store some data given by a user, such as "@slackbot 5,4,3,2,1", and then have the bot sort it and return it like "1,2,3,4,5". Also, is there any way to have the bot read an external .csv file and return some type of information? For example, I want the bot to tell me what the first row of a .csv file says.
Thank you! any help would be appreciated | 0 | 1 | 158 |
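A sketch of the two pieces asked about, independent of any Slack library (the file name and the exact message format are assumptions):

```python
import csv

def sort_numbers(message):
    """Turn '5,4,3,2,1' into '1,2,3,4,5'."""
    numbers = sorted(int(n) for n in message.split(","))
    return ",".join(str(n) for n in numbers)

def first_csv_row(path):
    """Return the first row of a CSV file as a list of strings."""
    with open(path, newline="") as f:
        return next(csv.reader(f))

print(sort_numbers("5,4,3,2,1"))      # 1,2,3,4,5
# print(first_csv_row("data.csv"))    # e.g. ['col1', 'col2', ...]
```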
0 | 56,807,453 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-10-05T18:46:00.000 | 0 | 2 | 0 | Programming an interactive slackbot - python | 46,592,760 | 0 | python,csv,slack,slack-api | Everything you have mentioned in your question can easily be done with a slackbot. You can develop the slackbot as a Django server.
If you want the bot to store data, you can connect your Django server to any database or cache (e.g. Redis, Memcached).
You can write the sorting logic in Python and send the sorted list back to Slack using the slackclient library.
Based on your input to the slackbot, you can perform the action in Python and send the response back to Slack.
Hope this answers your question! | I have recently been working on a slackbot and I have the basic functionality down: I am able to take simple commands and have the bot answer. But I want to know if there is any way to have the bot store some data given by a user, such as "@slackbot 5,4,3,2,1", and then have the bot sort it and return it like "1,2,3,4,5". Also, is there any way to have the bot read an external .csv file and return some type of information? For example, I want the bot to tell me what the first row of a .csv file says.
Thank you! any help would be appreciated | 0 | 1 | 158 |
0 | 46,597,146 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2017-10-06T00:31:00.000 | 4 | 7 | 0 | Differentiable round function in Tensorflow? | 46,596,636 | 1.2 | python,tensorflow | Rounding is a fundamentally nondifferentiable function, so you're out of luck there. The normal procedure for this kind of situation is to find a way to either use the probabilities, say by using them to calculate an expected value, or by taking the maximum probability that is output and choosing that one as the network's prediction. If you aren't using the output for calculating your loss function, though, you can go ahead and just apply it to the result and it doesn't matter if it's differentiable. Now, if you want an informative loss function for the purpose of training the network, maybe you should consider whether keeping the output in the format of probabilities might actually be to your advantage (it will likely make your training process smoother) - that way you can just convert the probabilities to actual estimates outside of the network, after training. | So the output of my network is a list of probabilities, which I then round using tf.round() to be either 0 or 1; this is crucial for this project.
I then found out that tf.round isn't differentiable so I'm kinda lost there.. :/ | 0 | 1 | 6,120 |
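One common workaround not mentioned in the answer is the straight-through estimator: use the rounded value in the forward pass, but let gradients flow through as if rounding were the identity. A minimal sketch (TensorFlow 2 style; variable values are made up):

```python
import tensorflow as tf

def round_straight_through(x):
    # Forward pass evaluates to tf.round(x); in the backward pass
    # tf.stop_gradient contributes nothing, so the gradient is that of x.
    return x + tf.stop_gradient(tf.round(x) - x)

x = tf.Variable([0.2, 0.8, 0.6])
with tf.GradientTape() as tape:
    y = round_straight_through(x)
    loss = tf.reduce_sum(y)

print(y.numpy())               # [0. 1. 1.] - hard 0/1 outputs
print(tape.gradient(loss, x))  # all ones: gradients pass straight through
```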
0 | 46,600,856 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-06T07:37:00.000 | 0 | 2 | 0 | How can I create sheet 2 in a CSV file by using Python code? | 46,600,652 | 0 | python,csv | You can do this by using multiple CSV files - one CSV file per sheet.
A comma-separated value file is a plain text format. It is only going to be able to represent flat data, such as a table (or a "sheet")
When storing multiple sheets, you should use separate CSV files. You can write each one separately and import/parse them individually into their destination. | Is there is way to create sheet 2 in same csv file by using python code | 0 | 1 | 4,260 |
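A sketch of both routes: separate CSV files, or a real multi-sheet workbook via pandas (true sheets require a workbook format such as .xlsx, not CSV; file names and data are assumptions):

```python
import pandas as pd

sheet1 = pd.DataFrame({"a": [1, 2]})
sheet2 = pd.DataFrame({"b": [3, 4]})

# Option 1: one CSV per "sheet"
sheet1.to_csv("data_sheet1.csv", index=False)
sheet2.to_csv("data_sheet2.csv", index=False)

# Option 2: real sheets need an Excel workbook (requires an engine
# such as openpyxl or xlsxwriter to be installed)
with pd.ExcelWriter("data.xlsx") as writer:
    sheet1.to_excel(writer, sheet_name="Sheet1", index=False)
    sheet2.to_excel(writer, sheet_name="Sheet2", index=False)
```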
0 | 46,610,485 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2017-10-06T14:36:00.000 | 0 | 1 | 0 | Sorting and loading data from Pandas to Redshift using to_sql | 46,608,223 | 1.2 | python,sorting,amazon-redshift,pandas-to-sql | While ingesting data into Redshift, the data gets distributed between slices on each node of your Redshift cluster.
My suggestion would be to create a sort key on the column that you need sorted. Once you have a sort key on that column, you can run the VACUUM command to get your data sorted.
Sorry! I cannot be of much help on the Python/pandas side.
If I've made a bad assumption, please comment and I'll refocus my answer. | I've built some tools that create front-end list boxes for users that reference dynamic Redshift tables. When new items appear in the table, they show up automatically in the list.
I want to put the list in alphabetical order in the database so the dynamic list boxes will show the data in that order.
After downloading the list from an API, I attempt to sort the list alphabetically in a Pandas dataframe before uploading. This works perfectly:
df.sort_values(['name'], inplace=True, ascending=True, kind='heapsort')
But then when I try to upload to Redshift in that order, it loses the order while it uploads. The data appears in chunks of alphabetically ordered segments.
db_conn = create_engine('<redshift connection>')
obj.to_sql('table_name', db_conn, index = False, if_exists = 'replace')
Because of the way the third party tool (Alteryx) works, I need to have this data in alphabetical order in the database.
How can I modify to_sql to properly upload the data in order? | 0 | 1 | 725 |
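A hedged sketch of the suggested approach: create the table yourself with a SORTKEY instead of letting to_sql create it, then append the rows (the table name, column name and connection string are hypothetical; the raw-string execute calls assume SQLAlchemy 1.x, and Redshift's VACUUM may require an autocommit connection):

```python
from sqlalchemy import create_engine

engine = create_engine("<redshift connection>")  # placeholder, as in the question

# Create the table with a sort key so Redshift keeps rows ordered by name
with engine.connect() as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS table_name (name VARCHAR(256)) SORTKEY (name);"
    )

# Append instead of replace, so the DDL above is preserved
df.sort_values(["name"], inplace=True)               # df from the question
df.to_sql("table_name", engine, index=False, if_exists="append")

# After large loads, re-sort the physical blocks
with engine.connect() as conn:
    conn.execute("VACUUM table_name;")
```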
0 | 46,619,774 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2017-10-07T11:16:00.000 | 4 | 2 | 0 | Share memory between C/C++ and Python | 46,619,531 | 1.2 | python,c++,linux,opencv | OK, this is not exactly memory sharing in the real sense. What you want is IPC, to send image data from one process to another.
I suggest that you use Unix named pipes. You will have to get the raw data into a bytes/string format in C/C++, send it through the pipe or a Unix socket to Python, and there build a numpy array from the received data, perhaps using the np.frombuffer() function (the modern replacement for np.fromstring()).
Do not worry about the speed; pipes are pretty fast, and local and Unix sockets as well. Most time will be lost on getting the string representation and turning it back into a matrix.
There is a possibility that you could create a real shared memory space, get the data from OpenCV in C/C++ directly into Python, and then use OpenCV in Python to get a numpy array out, but that would be complicated. If you don't need the speed of light, your best bet is named pipes. | Is there a way to use shared memory to share an OpenCV image (a Mat in C++ and a numpy array in Python) between a C/C++ program and Python? Being multiplatform is not needed; I'm doing it on Linux. I've thought of sharing it via mmap or something similar.
I have two running processes, one written in C and the other in Python, and I need to share an image between them.
I will call from the C process to Python via a socket, but I need to send an image via memory.
Another alternative could be writing to an in-memory file; I'm not sure if that would be more time-consuming. | 0 | 1 | 5,634 |
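A sketch of the Python (receiving) side of the named-pipe idea, assuming the C/C++ process writes raw 8-bit grayscale pixels of a known, agreed size to the FIFO (the path and frame size are assumptions):

```python
import os
import numpy as np

FIFO = "/tmp/frame_fifo"   # path agreed with the C/C++ writer
H, W = 480, 640            # frame size agreed with the writer

if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

with open(FIFO, "rb") as f:
    raw = b""
    while len(raw) < H * W:            # a pipe read may return fewer bytes
        raw += f.read(H * W - len(raw))
    img = np.frombuffer(raw, dtype=np.uint8).reshape(H, W)
    print(img.shape, img.dtype)        # (480, 640) uint8, usable with cv2
```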
0 | 46,620,696 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-07T13:23:00.000 | 1 | 1 | 0 | How to use random numbers that executes a one dimensional random walk in python? | 46,620,657 | 0.197375 | python,random | 1 - Start with a list initialized with 5 items (maybe None?)
2 - place the walker at index 2
3 - randomly chose a direction (-1 or + 1)
4 - move the walker in the chosen direction
5 - maybe print the space and mark the location of the walker
6 - repeat from step 3 as many times as needed (a minimal sketch appears after this row) | Start with a one dimensional space of length m, where m = 2 * n + 1. Take a step either to the left or to the right at random, with equal probability. Continue taking random steps until you go off one edge of the space, for which I'm using while 0 <= position < m.
We have to write a program that executes the random walk. We have to create a 1D space using size n = 5 and place the marker in the middle. Every step, move it either to the left or to the right using the random number generator. There should be an equal probability that it moves in either direction.
I have an idea for the plan but do not know how to write it in python:
Initialize n = 1, m = 2n + 1, and j = n + 1.
Loop until j = 0 or j = m + 1 as shown. At each step:
Move j left or right at random.
Display the current state of the walk, as shown.
Make another variable to count the total number of steps.
Initialize this variable to zero before the loop.
However j moves, always increase the step count.
After the loop ends, report the total steps. | 0 | 1 | 327 |
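A minimal sketch of the walk described in the steps above, under one consistent reading of the question (a space of 5 cells, marker starting in the middle, equal probability each way):

```python
import random

n = 5
position = n // 2  # start in the middle: index 2
steps = 0

while 0 <= position < n:
    position += random.choice((-1, 1))  # left or right, equal probability
    steps += 1
    space = ["." for _ in range(n)]
    if 0 <= position < n:
        space[position] = "*"           # mark the walker's location
    print("".join(space))

print("Walker left the space after", steps, "steps")
```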
0 | 58,870,147 | 0 | 1 | 0 | 0 | 2 | true | 3 | 2017-10-07T15:14:00.000 | 1 | 2 | 0 | Epochs Vs Pass Vs Iteration | 46,621,774 | 1.2 | python,machine-learning,deep-learning,neural-network,epoch | Epoch:
One full round of forward propagation and backward propagation through the neural network over the entire dataset.
Example:
One round of throwing the ball into the basket, finding out the error, coming back and changing the weights (f = ma).
Forward propagation:
The process of initializing the mass and acceleration with random values and predicting the output is called forward propagation.
Backward propagation:
Changing the values and predicting the output again (by finding the gradient).
Gradient:
How much the value of y (the dependent variable) changes when I change the input X (the independent variable) is called the gradient.
There is actually no fixed answer for how many epochs to use; it depends on the dataset. You can say that the number of epochs is related to how varied your data is. As an example: do you have only white tigers in your dataset, or is it a much more diverse dataset?
Iteration:
An iteration is one batch; the number of iterations is the number of batches needed to complete one epoch.
Example:
We can divide a dataset of 1000 examples into batches of 250; then it will take 4 iterations to complete 1 epoch. (Here batch size = 250, iterations = 4.) | What does the term epoch mean in a neural network?
How does it differ from a pass and an iteration? | 0 | 1 | 971 |
0 | 53,937,484 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2017-10-07T15:14:00.000 | 2 | 2 | 0 | Epochs Vs Pass Vs Iteration | 46,621,774 | 0.197375 | python,machine-learning,deep-learning,neural-network,epoch | There are many neural network algorithms in unsupervised learning. As long as a cost function can be defined, neural networks can be used.
For instance, there are autoencoders for dimensionality reduction, or Generative Adversarial Networks (two networks, one generating new samples). All of these are unsupervised learning and still use neural networks. | What does the term epoch mean in a neural network?
How does it differ from a pass and an iteration? | 0 | 1 | 971 |
0 | 46,929,384 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-10-07T20:31:00.000 | 0 | 1 | 0 | IBM Watson Natural Language Understanding uploading multiple documents for analysis | 46,624,822 | 1.2 | python,ibm-cloud,ibm-watson,watson-nlu | NLU can be "manually" adapted to do batch analysis. But the Watson service that provides what you are asking for is Watson Discovery. It allows you to create Collections (sets of documents) that will be enriched through an internal NLU function and then queried. | I have roughly 200 documents that need to have IBM Watson NLU analysis done. Currently, processing is performed one at a time. Will NLU be able to perform a batch analysis? What is the correct Python code or process to batch-load the files and then retrieve the results? The end goal is to grab the results to analyze which documents are similar in nature. Any direction is greatly appreciated, as the IBM Support Documentation does not cover batch processing. | 1 | 1 | 445 |
0 | 46,646,311 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-10-09T12:10:00.000 | 2 | 1 | 0 | is multi-label clasification for text only | 46,646,141 | 1.2 | python,machine-learning,multilabel-classification | Of course it can be done with numbers. After all, the text itself is converted to numbers to be classified. But you should not use regression for that. It is clearly a case for classification.
A regular classifier (for example, a neural network) usually has multiple outputs, one for each class. Each output returns the probability that the input vector belongs to that particular class.
In standard classification, you assign the input to the class with the maximum probability. In your case, just assign it to all the classes for which p > 0.5 (assuming that the output is in [0, 1]).
Regarding the question of whether your problem is a multi-regression or multi-classification problem, you can't know that just by looking at the inputs. You decide it based on what you are trying to find. Choose regression if you are trying to find numeric values in a continuous range (for example, predict the price and number of sales for a given product). Choose classification if you have a number of attributes that the input has or doesn't have. | I was working on a numeric dataset and apparently it is a multi-variable output regression. I wanted to know if you can have multi-label classification on a numeric dataset, or whether it is strictly for text-based data.
For example: Stack Overflow can categorize every text/code post into multiple tags like python, flask, python2.7... But can something like that be done with numbers? Sorry, I know that this is a noob question, but I wanted to know the answer. Thanks in advance. | 0 | 1 | 36 |
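A small sketch of the thresholding idea from the answer, using plain numpy (the probabilities are made up):

```python
import numpy as np

# Per-class probabilities for 3 samples and 4 labels (hypothetical model output)
probs = np.array([
    [0.9, 0.2, 0.7, 0.1],
    [0.4, 0.6, 0.3, 0.8],
    [0.1, 0.1, 0.2, 0.3],
])

# Multi-label: keep every class whose probability exceeds 0.5
labels = (probs > 0.5).astype(int)
print(labels)
# [[1 0 1 0]
#  [0 1 0 1]
#  [0 0 0 0]]
```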
1 | 46,680,237 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-10-10T22:02:00.000 | 0 | 2 | 0 | Using Tensorflow on smartphones | 46,676,738 | 0 | java,android,python,mobile,tensorflow | The short answer is: yes. You will be safe with Python, since it's the main front-end language for TensorFlow. Also, I agree with BHawk's answer above. | I've been learning a lot about the uses of Machine Learning and Google's Tensorflow. Mostly, developers use Python when developing with Tensorflow. I do realize that other languages can be used with Tensorflow as well, i.e. Java and C++. I see that Google is about to launch Tensorflow Lite, which is supposed to be a game changer for mobile devices. My question: can I be safe by learning Tensorflow using Python and still be able to develop mobile apps using this service?
1 | 46,817,475 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2017-10-10T22:02:00.000 | 0 | 2 | 0 | Using Tensorflow on smartphones | 46,676,738 | 1.2 | java,android,python,mobile,tensorflow | In short, yes. It would be safe to learn TensorFlow using Python and still comfortably develop machine-learning-enabled mobile apps.
Let me elaborate. Even with TensorFlow Lite, training can only happen on the server side; only the prediction, or inference, happens on the mobile device. So typically you would create your models with TensorFlow, often using Python, and then leverage TensorFlow Lite to package that model into your app. | I've been learning a lot about the uses of Machine Learning and Google's Tensorflow. Mostly, developers use Python when developing with Tensorflow. I do realize that other languages can be used with Tensorflow as well, i.e. Java and C++. I see that Google is about to launch Tensorflow Lite, which is supposed to be a game changer for mobile devices. My question: can I be safe by learning Tensorflow using Python and still be able to develop mobile apps using this service?
0 | 46,681,839 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-11T05:56:00.000 | 1 | 1 | 0 | How to generate a size 1000 random number list according to Poisson distribution and with a fixed mean(size)? | 46,680,795 | 0.197375 | python,python-2.7 | In a Poisson distribution, lambda is the mean and the variance at the same time. If you draw infinitely often, you will see that this is true.
What you are asking for is like expecting to roll a die 10 times and get an average of exactly 3.5, since that's the expected mean.
Nevertheless, you could generate a list with numpy.random.poisson, check whether the mean is what you want, and if not, draw another 1000 samples and check again. | I want to generate a list of 1000 random numbers according to a Poisson distribution with a fixed mean. Since the size is fixed at 1000, the sum is also fixed.
The first idea I had was to use numpy.random.poisson(lam, size), but it cannot force a fixed mean for the list. So I am really confused. | 0 | 1 | 561 |
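A sketch of the suggested generate-and-check approach (the target mean of 5 and the tolerance are assumptions):

```python
import numpy as np

target_mean, size, tolerance = 5.0, 1000, 0.05

while True:
    sample = np.random.poisson(lam=target_mean, size=size)
    if abs(sample.mean() - target_mean) < tolerance:
        break  # accept this draw

print(sample.mean())  # close to 5.0, but the sum is not forced to be exact
```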
0 | 46,719,538 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-11T06:28:00.000 | 0 | 2 | 0 | Natural Language Processing (syntactic, semantic, pragmatic) Analysis | 46,681,209 | 0 | python-3.x,nlp,stanford-nlp | I would suggest that you read an introductory book on NLP to become familiar with the chain of processes you are trying to achieve. You are trying to do question-answering, aren't you? If that is the case, you should read about question-answering systems. The above sentence has to be morphologically analyzed (so read about morphological analyzers), syntactically parsed (so read about syntactic parsing) and semantically understood (so read about anaphora resolution and, in linguistics, theta theory). Ravi is called the agent and Ragu is called the patient or experiencer. Only then can you proceed to pursue your objectives.
I hope this helps you! | My text contains text="Ravi beated Ragu"
My Question will be "Who beated Ragu?"
The Answer Should come "Ravi" Using NLP
How to do this by natural language processing.
Kindly guide me on how to proceed with this via syntactic, semantic and pragmatic analysis using Python. | 0 | 1 | 78 |
0 | 46,697,151 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-11T19:55:00.000 | 0 | 1 | 0 | How to use multinomial naive bayes for both text and non text data using python? | 46,696,478 | 0 | python,machine-learning,naivebayes | There are several ways to do that:
simply concatenate the hashing vectors with the integers and train on that bigger feature vector (see the sketch after this row). It will work.
It would be more reasonable to do so using a different classifier, because MultinomialNB can't model the interactions between the features. But if you want nothing else but MultinomialNB, you can do it. You can also:
train two of them - one on the hashing vectors, one on the integers - and weight the outputs, or
use MultinomialNB on the text and a different classifier on the integers, or
use MultinomialNB on the text, and use its output as a feature together with the integers. | The data consists of text parameters as well as integer parameters. The problem is to train the machine on both kinds of data. A HashingVectorizer is used for training on the text parameters.
Thanks in advance.... | 0 | 1 | 207 |
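A sketch of the first option, concatenating hashed text features with integer features (the data is made up; note that MultinomialNB requires non-negative inputs, hence alternate_sign=False on the vectorizer):

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["cheap meds now", "meeting at noon", "win a prize now"]
ints = np.array([[3], [0], [5]])     # e.g. number of links per message
y = [1, 0, 1]

# alternate_sign=False keeps the hashed counts non-negative
vec = HashingVectorizer(n_features=2**10, alternate_sign=False)
X = hstack([vec.transform(texts), csr_matrix(ints)])  # one big feature matrix

clf = MultinomialNB().fit(X, y)
print(clf.predict(X))
```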
0 | 46,698,554 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-11T22:27:00.000 | 0 | 1 | 0 | scipy optimize - View steps during procedure | 46,698,519 | 0 | python,optimization,scipy | The minimize function takes an options dict as a keyword argument. Accepted keys for this dict include disp, which should be set to True to print the progress of the minimization. | I am using the minimize function from the scipy.optimize library.
Is there a way to print some values during the optimization procedure? Values like the current x, objective function value, number of iterations and number of gradient evaluations.
I know there are options to save these values and return them after the optimization is over. But can I see them at each step? | 0 | 1 | 311 |
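Besides options={'disp': True}, minimize also accepts a callback that is invoked once per iteration with the current x, which covers the per-step printing asked about. A sketch (the objective function is a stand-in):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

iteration = [0]

def report(xk):
    iteration[0] += 1
    print(f"iter {iteration[0]:3d}  x = {xk}  f(x) = {rosen(xk):.6f}")

res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="BFGS",
               callback=report, options={"disp": True})
```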
0 | 46,699,385 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-11T22:33:00.000 | 0 | 1 | 0 | Normalise face landmark data using python | 46,698,570 | 0 | python,image-processing | Actually, I think I have figured it out; it's pretty simple maths. Here is what I am going to do:
Take every point and subtract the first box point's values - this gives me the points as if the box started at [0, 0].
Apply the box-size/normalised-size ratio to every point. | I am currently learning python and playing around with tensorflow.
I have a bunch of images where I have obtained the landmarks (pixel points) of a person's facial features such as ears and eyes. In addition, it also provides me with a box (4 coordinates) where the face exists.
My goal is to normalise all the data from different images into a standard sized rectangle / square and calculate the position of the landmarks relative to the normalised size.
Is there an API that allows me to do this already or should I get cracking and calculate the points myself?
Thanks in advance. | 0 | 1 | 339 |
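A numpy sketch of the two steps from the answer (the box format, the example points and the target size are assumptions):

```python
import numpy as np

landmarks = np.array([[120.0, 80.0], [150.0, 82.0]])  # (x, y) points
box = (100.0, 60.0, 200.0, 160.0)                     # x_min, y_min, x_max, y_max

x_min, y_min, x_max, y_max = box
target = 128.0  # side length of the normalised square

# 1) shift so the box starts at (0, 0); 2) scale the box size to the target
shifted = landmarks - np.array([x_min, y_min])
normalised = shifted * (target / np.array([x_max - x_min, y_max - y_min]))
print(normalised)
```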
0 | 46,704,606 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-10-12T04:03:00.000 | 2 | 1 | 0 | Are all train samples used in fit_generator in Keras? | 46,701,216 | 1.2 | python,tensorflow,machine-learning,keras,neural-network | No, because it is a generator the model does not know the total number of training samples. Therefore, it finishes an epoch when it reaches the final step defined with the steps_per_epoch argument. In your case it will indeed train 192 samples per epoch.
If you want to use all samples in your model you can shuffle the data at the start of every epoch with the argument shuffle. | I am using model.fit_generator() to train a neural network with Keras. During the fitting process I've set the steps_per_epoch to 16 (len(training samples)/batch_size).
If the mini batch size is set to 12, and the total number of training samples is 195, does it mean that 3 samples won't be used in the training phase? | 0 | 1 | 198 |
0 | 46,701,588 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-12T04:29:00.000 | 0 | 1 | 0 | How to find the number of elements of a float array in python and how to convert it to a 2-dimensional float array? | 46,701,431 | 0 | python,arrays | Length of an array: len(array). To build the 2-D array, use two nested loops (or numpy's reshape) to spread all the values into rows and columns (see the sketch after this row). | I am trying to find the length of a 1-D float array and convert it into a 2-D array in Python. Also, when I try to print the elements of the float array, the following error comes up:
'float' object is not iterable | 0 | 1 | 608 |
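A short sketch of both steps; with numpy, the loop-free version is one call (the list values and target shape are assumptions):

```python
import numpy as np

flat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(len(flat))  # 6 elements

# Pure-Python: two nested loops building 2 rows of 3
rows, cols = 2, 3
grid = [[flat[r * cols + c] for c in range(cols)] for r in range(rows)]

# numpy equivalent
grid_np = np.array(flat).reshape(rows, cols)
print(grid, grid_np.shape)
```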
0 | 46,737,626 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-13T19:04:00.000 | 0 | 1 | 0 | Resizing 2D arrays to a different size (e.g. reduction or compression) | 46,736,521 | 0 | arrays,python-3.x,compression | n-dimensional arrays can be many things aside from images.
One example would be a geo-spatial representation that consolidates (rolls up) whenever you zoom out and drills down whenever you zoom in.
The right array-resizing technique depends on the context in which the resize takes place, so there is no single best answer.
But typically, resizing arrays or tensors is done by consolidation whenever you reduce the number of entries, and by interpolation whenever you increase them. | What is the best way to resize a 2D array (the array holds thermal data with values between 20 and 30) from size 173x151 to size 146x121 without losing too much information?
I understand it is possible to reduce the size of images with some functions (images with intensity values 0 to 255), but my understanding is that these functions are for images and not for other types of arrays.
Is there a function for reducing the size of any type of array? Something like compressing the 2D array to different sizes?
Thanks | 0 | 1 | 40 |
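One concrete way to do this for numeric (non-image) arrays is bilinear interpolation via scipy, which works on any 2-D float array, not just 0-255 images. A sketch with made-up thermal-like data:

```python
import numpy as np
from scipy.ndimage import zoom

data = 20 + 10 * np.random.rand(173, 151)   # values in [20, 30]

target = (146, 121)
factors = (target[0] / data.shape[0], target[1] / data.shape[1])

# order=1 -> bilinear interpolation; values stay within the original range
small = zoom(data, factors, order=1)
print(small.shape)  # (146, 121)
```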
0 | 46,747,332 | 0 | 0 | 0 | 1 | 1 | false | 3 | 2017-10-14T13:29:00.000 | 1 | 2 | 0 | Set worksheet.hide_gridlines(2) to certain range of cells | 46,745,120 | 0.099668 | excel,python-2.7,xlsxwriter | As far as I know, it isn't possible in Excel to hide gridlines for just a range; gridlines are either on or off for the entire worksheet.
As a workaround you could turn the gridlines off and then add a border to each cell where you want them displayed.
As a first step, you should figure out how you would do what you want in Excel itself, and then apply that to an XlsxWriter program. | I'm creating an Excel file from pandas and I'm using worksheet.hide_gridlines(2).
The problem is that all gridlines are hidden in my current worksheet. I need to hide them only for a range of cells, for example A1:I80. How can I do that? | 0 | 1 | 2,139 |
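A sketch of the suggested workaround with XlsxWriter: hide all gridlines, then write the cells where you still want them with a thin-border format (the file name, range and border color are assumptions):

```python
import xlsxwriter

workbook = xlsxwriter.Workbook("demo.xlsx")
worksheet = workbook.add_worksheet()
worksheet.hide_gridlines(2)  # hide screen and printed gridlines everywhere

# Re-create the gridline look only inside A1:I80
grid = workbook.add_format({"border": 1, "border_color": "#D4D4D4"})
for row in range(80):          # rows 1..80 (0-indexed)
    for col in range(9):       # columns A..I
        worksheet.write_blank(row, col, None, grid)

workbook.close()
```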
0 | 57,548,526 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-10-14T20:28:00.000 | 1 | 3 | 0 | Can I use Train AND Test data for Imputation? | 46,749,037 | 0.066568 | python-2.7,data-science,imputation | The philosophy behind splitting data into training and test sets is to have the opportunity of validating the model through fresh(ish) data, right?
So, by using the same imputer on both train and test sets, you are somehow spoiling the test data, and this may cause overfitting.
You CAN use the same approach to impute the missing data on both sets (in your case, the decision tree), however, you should instantiate two different models, and fit each one with its own related data. | Interestingly, I see a lot of different answers about this both on stackoverflow and other sites:
While working on my training data set, I imputed missing values of a certain column using a decision tree model. So here's my question. Is it fair to use ALL available data (Training & Test) to make a model for imputation (not prediction) or may I only touch the training set when doing this? Also, once I begin work on my Test set, must I use only my test set data, impute using the same imputation model made in my training set, or can I use all the data available to me to retrain my imputation model?
I would think so long as I didn't touch my test set for prediction model training, using the rest of the data for things like imputations would be fine. But maybe that would be breaking a fundamental rule. Thoughts? | 0 | 1 | 2,913 |
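The answer proposes one imputer per set; the other common discipline is to fit the imputer on the training data only and reuse its statistics on the test set. A sketch with scikit-learn's SimpleImputer, used here as a stand-in for the question's decision-tree imputer (the arrays are made up):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X_train = np.array([[1.0, np.nan], [3.0, 4.0], [5.0, 6.0]])
X_test = np.array([[np.nan, 2.0], [7.0, np.nan]])

imputer = SimpleImputer(strategy="mean")
imputer.fit(X_train)                        # learn fill values from train only

X_train_filled = imputer.transform(X_train)
X_test_filled = imputer.transform(X_test)   # reuse the training statistics
```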
0 | 56,999,837 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2017-10-15T15:18:00.000 | 2 | 4 | 0 | What does "splitter" attribute in sklearn's DecisionTreeClassifier do? | 46,756,606 | 0.099668 | python,python-3.x,machine-learning,scikit-learn | Short answer:
RandomSplitter initiates a **random split on each chosen feature**, whereas BestSplitter goes through **all possible splits on each chosen feature**.
Longer explanation:
This is clear when you go thru _splitter.pyx.
RandomSplitter calculates improvement only on threshold that is randomly initiated (ref. lines 761 and 801). BestSplitter goes through all possible splits in a while loop (ref. lines 436 (which is where loop starts) and 462). [Note: Lines are in relation to version 0.21.2.]
As opposed to earlier responses from 15 Oct 2017 and 1 Feb 2018, RandomSplitter and BestSplitter both loop through all relevant features. This is also evident in _splitter.pyx. | The sklearn DecisionTreeClassifier has a attribute called "splitter" , it is set to "best" by default, what does setting it to "best" or "random" do? I couldn't find enough information from the official documentation. | 0 | 1 | 8,211 |
0 | 48,555,365 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2017-10-15T15:18:00.000 | 4 | 4 | 0 | What does "splitter" attribute in sklearn's DecisionTreeClassifier do? | 46,756,606 | 0.197375 | python,python-3.x,machine-learning,scikit-learn | The "Random" setting selects a feature at random, then splits it at random and calculates the gini. It repeats this a number of times, comparing all the splits and then takes the best one.
This has a few advantages:
It's less computation intensive than calculating the optimal split of every feature at every leaf.
It should be less prone to overfitting.
The additional randomness is useful if your decision tree is a component of an ensemble method. | The sklearn DecisionTreeClassifier has an attribute called "splitter"; it is set to "best" by default. What does setting it to "best" or "random" do? I couldn't find enough information in the official documentation. | 0 | 1 | 8,211 |
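For reference, the attribute under discussion is set at construction time; a minimal sketch:

```python
from sklearn.tree import DecisionTreeClassifier

best_tree = DecisionTreeClassifier(splitter="best")      # the default
random_tree = DecisionTreeClassifier(splitter="random")  # random thresholds
```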
0 | 48,411,184 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-10-16T09:28:00.000 | 0 | 2 | 0 | What is cuDNN implementation of rnn cells in Tensorflow | 46,767,001 | 1.2 | python-3.x,tensorflow,cudnn | In short: CudnnGRU and CudnnLSTM can (and must) be used on a GPU, while the normal RNN implementations need not be. So if you have tensorflow-gpu, the cuDNN implementations of the RNN cells will run faster. | To create RNN cells, there are classes like GRUCell and LSTMCell which can be used later to create RNN layers.
There are also 2 other classes, CudnnGRU and CudnnLSTM, which can be used directly to create RNN layers.
In the documentation they say that the latter classes have a cuDNN implementation. Why should I use (or not use) these cuDNN-implemented classes over the classical RNN implementations when I'm creating an RNN model? | 0 | 1 | 1,107 |
0 | 58,364,407 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-10-16T16:41:00.000 | 0 | 3 | 0 | ImportError: No module named 'sklearn.lda' | 46,775,155 | 0 | python,machine-learning,scikit-learn,lda | In case you are using a new version and use
from sklearn.qda import QDA
it will give an error; try
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis | When I run classifier.py in the openface demos directory using:
classifier.py train ./generated-embeddings/
I get the following error message:
--> from sklearn.lda import LDA
ModuleNotFoundError: No module named 'sklearn.lda'.
I think I have installed sklearn correctly.
What could be the reason for this message? | 0 | 1 | 15,902 |
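For completeness, the LDA import the error message is actually about moved the same way; a sketch of both modern imports (scikit-learn >= 0.17):

```python
# Old, removed locations:
#   from sklearn.lda import LDA
#   from sklearn.qda import QDA
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,      # replaces sklearn.lda.LDA
    QuadraticDiscriminantAnalysis,   # replaces sklearn.qda.QDA
)

clf = LinearDiscriminantAnalysis()
```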
0 | 46,807,104 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-10-17T04:38:00.000 | 0 | 1 | 0 | Is there non-GPU equivalent of tf_utils package? | 46,782,575 | 1.2 | python,tensorflow,deep-learning | pandas already has quite a bit of functionality that tf_utils provides. E.g.
read_csv could capture what I was looking for from load_dataset.
get_dummies does what convert_to_one_hot does.
I could not find any direct functionality in pandas that came close to random_mini_batches but it can be achieved with sampling and random numbers. | I am using CPU based tensorflow (on non GPU platform) from python. I want to use functionality like load_dataset, random_mini_batches, convert_to_one_hot etc. from tf_utils package. However the one from neuroailab github has dependency on tensorflow-gpu. Is there any other CPU based (for non GPU platform) equivalent package for this? | 0 | 1 | 462 |
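A sketch of a random_mini_batches replacement built from numpy permutations, as the answer suggests (the shapes follow the samples-in-columns convention that tf_utils uses; adjust if yours differ):

```python
import numpy as np

def random_mini_batches(X, Y, batch_size=64, seed=0):
    """X: (features, m), Y: (classes, m). Yields shuffled mini-batches."""
    rng = np.random.RandomState(seed)
    m = X.shape[1]
    perm = rng.permutation(m)            # random column order
    X, Y = X[:, perm], Y[:, perm]
    for start in range(0, m, batch_size):
        yield X[:, start:start + batch_size], Y[:, start:start + batch_size]
```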
0 | 46,813,155 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2017-10-17T12:59:00.000 | 4 | 1 | 0 | Spyder IDE - Cancel opening large variables without restarting the entire program | 46,790,782 | 0.664037 | python-3.x,spyder | (Spyder developer here) My answers:
Cancel the request to view the variable without closing the program?
No, that's not possible, sorry.
Set a default so Spyder will only display the first 1000 rows of very large objects such as dataframes?
That's already in place. The problem is the size in memory of your dataframes because Spyder needs to make a copy of them to graphically display them.
To fix this problem, we are planning to use more efficient serialization libraries (e.g. pyarrow) in Spyder 5, to be released in 2021. | I am working with large Pandas dataframes in Spyder. Occasionally I accidentally click the large dataframes in the Variable Explorer window and Spyder will hang for very long periods while it tries to open them.
The only way I have found to stop this process is to close Spyder completely and then reopen.
Is it possible to:
Cancel the request to view the variable without closing the program?
Set a default so Spyder will only display the first 1000 rows of very large objects such as dataframes? | 0 | 1 | 890 |
0 | 46,792,992 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-10-17T14:45:00.000 | 2 | 2 | 0 | polynomial transformation in python | 46,792,940 | 0.197375 | python,polynomial-math,coordinate-transformation | Create a burner variable, store x-tau into it, and feed that into your function | I'm trying to shift a polynomial. I'm currently using numpy.poly1d() to make a quadratic equation.
example: 2x^2 + 3x +4
but I need to shift the function by tau, such that
2(x-tau)^2 + 3(x-tau) + 4
Tau is a value that will change based on some other variables in my code. | 0 | 1 | 487 |
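With numpy.poly1d, the shift can also be done by composition, since evaluating a poly1d at another poly1d returns the composed polynomial. A sketch (the tau value is made up):

```python
import numpy as np

p = np.poly1d([2, 3, 4])      # 2x^2 + 3x + 4
tau = 1.5

shifted = p(np.poly1d([1, -tau]))   # p(x - tau), still a poly1d
print(shifted)
print(p(2.0 - tau), shifted(2.0))   # same value at any x
```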
0 | 46,832,946 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-10-18T18:37:00.000 | 1 | 1 | 0 | Bokeh Plots Axis Value don't show completely | 46,817,031 | 1.2 | python-3.x,bokeh | This is a known bug with current versions (around 0.12.10). For now, the best workaround is to increase plot.min_border (or p.min_border_left, etc.) so it can accommodate whatever the longest label you expect is, or to rotate the labels to be parallel to the axis so that they always take up the same space, e.g. p.yaxis.major_label_orientation = "vertical" | I have just started exploring bokeh and here is a small issue I am stuck with. It is in regard to live graphs.
The problem is with the axis values. If I start with, say, 10, the values show correctly up to 90, but when printing 100, it only shows 10 and the last zero (0) is hidden; it is not visible.
That is, when it switches from a 2-digit number to a number with 3 or more digits, only the first two digits are visible.
Is there a figure property I am missing? I am not sure. | 0 | 1 | 601 |
0 | 46,823,775 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-19T05:44:00.000 | 0 | 1 | 0 | Error when trying to pull row based on index value in the dataframe | 46,823,445 | 0 | python,dataframe | I think the way you are passing your argument is wrong; try passing it in the same pattern as it appears in the CSV, like ["1/10/2011"] - it should work. Good luck :) | I'm reading data off a CSV file. The columns are as below:
Date , Buy, Sell, Price
1/10/2011, 1 , 5, 500
1/15/2011, 4, 2, 500
When I tried to pull data based on index like df["2011-01-10"], I got an error KeyError: '2011-01-10'
Does anyone know why this might be the case?
Thanks, | 0 | 1 | 15 |
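The usual fix is to parse the date column when reading, so the index becomes a real DatetimeIndex and date-string lookups work. A sketch (the file name is assumed):

```python
import pandas as pd

df = pd.read_csv("data.csv", parse_dates=["Date"], index_col="Date")

print(df.loc["2011-01-10"])   # works once the index is datetime
print(df.loc["1/10/2011"])    # this spelling works too
```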
0 | 46,828,277 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-19T10:44:00.000 | 0 | 2 | 0 | What classification algorithm should I use for document classification with these variables? | 46,828,118 | 0 | python,machine-learning,svm,naivebayes,document-classification | Because of the continuous score, which I assume is your label, it's a regression problem. SVMs are more common for classification problems. There are lots of possible algorithms out there; Logistic Regression would be pretty common to solve something like this.
Edit
Now that you have edited your post, your problem has become a classification problem :-)
Classification = some classes you want your data to be classified as, like boolean (True, False) or multinomial (Big, Middle, Small, Very Small)
Regression = continuous values (all real numbers between 0 and 1)
Now you can try your SVM and see if it works well enough for your data.
See @Maxim's answer; he has some good points (balancing, scaling) | I'm trying to classify pages in documents - in particular, to search for a page - based on bag of words, page layout, whether they contain tables, whether they have bold titles, etc. With this premise I have created a pandas.DataFrame like this, for each document:
page totalCharCount matchesOfWordX matchesOfWordY hasFeaturesX hasFeaturesY hasTable score
0 0.0 608.0 0.0 2.0 0.0 0.0 0.0 0.0
1 1.0 3292.0 1.0 24.0 7.0 0.0 0.0 0.0
2 2.0 3302.0 0.0 15.0 1.0 0.0 1.0 0.0
3 3.0 26.0 0.0 0.0 0.0 1.0 1.0 1.0
4 4.0 1851.0 3.0 25.0 20.0 7.0 0.0 0.0
5 5.0 2159.0 0.0 27.0 6.0 0.0 0.0 0.0
6 6.0 1906.0 0.0 9.0 15.0 3.0 0.0 0.0
7 7.0 1825.0 0.0 24.0 9.0 0.0 0.0 0.0
8 8.0 2053.0 0.0 20.0 10.0 2.0 0.0 0.0
9 9.0 2082.0 2.0 16.0 3.0 2.0 0.0 0.0
10 10.0 2206.0 0.0 30.0 1.0 0.0 0.0 0.0
11 11.0 1746.0 3.0 31.0 3.0 0.0 0.0 0.0
12 12.0 1759.0 0.0 38.0 3.0 1.0 0.0 0.0
13 13.0 1790.0 0.0 21.0 0.0 0.0 0.0 0.0
14 14.0 1759.0 0.0 11.0 6.0 0.0 0.0 0.0
15 15.0 1539.0 0.0 20.0 3.0 0.0 0.0 0.0
16 16.0 1891.0 0.0 13.0 6.0 1.0 0.0 0.0
17 17.0 1101.0 0.0 4.0 0.0 1.0 0.0 0.0
18 18.0 2247.0 0.0 16.0 5.0 5.0 0.0 0.0
19 19.0 598.0 2.0 3.0 1.0 1.0 0.0 0.0
20 20.0 1014.0 2.0 1.0 16.0 3.0 0.0 0.0
21 21.0 337.0 1.0 2.0 1.0 1.0 0.0 0.0
22 22.0 258.0 0.0 0.0 0.0 0.0 0.0 0.0
I'm taking a look to Naive Bayes and SVM algorithms but I'm not sure which one fits better with the problem. The variables are independent. Some of them must be present to increase the score, and some of them matches the inverse document frequency, like totalCharCount.
Any help?
Thanks a lot! | 0 | 1 | 258 |
0 | 61,232,784 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-10-19T19:27:00.000 | 1 | 1 | 0 | pandas: create a caption with to_latex? | 46,837,459 | 1.2 | python,pandas,latex | As of version 1.0.0, released on 29 January 2020, to_latex accepts caption and label arguments. | I am exporting a table from a pandas script into a .tex and would like to add a caption.
with open('Table1.tex','w') as tf:
tf.write(df.to_latex(longtable=True))
(I invoke the longtable argument to span the long table over multiple pages.)
The Table1.tex file gets imported into a bigger LaTeX document via the \import command. I would like to add a caption to the table, ideally at its top. Is there any way to do that?
I cannot manually edit the Table1.tex, it gets updated regularly using a Python script. | 0 | 1 | 1,703 |
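With pandas >= 1.0, the script can therefore become (the caption text and label are placeholders):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})  # stand-in for the real frame

with open("Table1.tex", "w") as tf:
    tf.write(df.to_latex(longtable=True,
                         caption="My regularly updated table.",
                         label="tab:table1"))
```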
0 | 46,842,124 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-20T01:26:00.000 | 0 | 1 | 0 | Data Periodicity - How to normalize? | 46,841,117 | 0 | python,pandas,periodicity | First, you need to define what output you need; then deduce how to treat the input to get the desired output.
Regarding the daily data for the first 10 years, one possible option is to keep only one day per week. Sub-sampling does not always mean losing information, and does not always change the final result. It depends on the nature of the collected data: the speed of variation of the data, measurement error, noise.
Speed of variations: refer to the Shannon sampling theorem to decide whether information is lost by sampling once every week instead of every day. Given that for the last 2 years it was decided to sample only once per week, this suggests the data does not vary much from day to day and that one sample per week carries enough information. That is a hint to vote for a final dataset that includes one sample every week over the full 12 years - unless the sampling was reduced for cost reasons, as a compromise between accuracy and the cost of sampling. Try to find in the literature at what speed your data is expected to vary.
Measurement error: if the measurement error contains a small epsilon that is randomly positive or negative, then averaging the 7 days into one "weekly" value is better, because it increases the chance that these variations cancel out. Otherwise, it is enough to sub-sample by keeping only 1 day per week and discarding the other days. I would try both methods, averaging and sub-sampling, and see whether the outputs differ significantly (see the sketch after this row). | I have a data set which contains 12 years of weather data. For the first 10 years, the data was recorded per day. For the last two years, it is now being recorded per week. I want to use this data in Python pandas for analysis, but I am a little lost on how to normalize it for use.
My thoughts
Convert first 10 years data also into weekly data using averages. Might work but so much data is lost in translation.
Weekly data cannot be converted to per day data.
Ignore daily data - that is a huge loss
Ignore weekly data - I lose more recent data.
Any ideas on this? | 0 | 1 | 205 |
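If you take the weekly route, pandas can do the consolidation directly; a sketch assuming a datetime index (the data is random stand-in values):

```python
import pandas as pd
import numpy as np

idx = pd.date_range("2005-01-01", periods=3650, freq="D")
daily = pd.DataFrame({"temp": np.random.randn(3650)}, index=idx)

weekly_mean = daily.resample("W").mean()   # average the 7 days (cancels noise)
weekly_pick = daily.resample("W").first()  # or keep one sample per week
print(weekly_mean.head())
```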
0 | 46,845,150 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-10-20T03:02:00.000 | 1 | 1 | 0 | Machine Learning, What are the common techniques for feature engineering and presenting the model? | 46,841,795 | 1.2 | python,machine-learning,visualization | Feature engineering is more of an art than a technique. It may require domain knowledge, or you could try adding, subtracting, dividing and multiplying different columns to build features out of them and check whether they add value to the model. If you are using Linear Regression, then the adjusted R-squared value should increase; in tree models, you can inspect the feature importance, etc. | I am working on an ML language-identification project (Python) that requires a multi-class classification model with high-dimensional feature input.
Currently, all I can do to improve accuracy is trial and error: mindlessly combining available feature extraction algorithms and available ML models and seeing if I get lucky.
I am asking if there is a commonly accepted workflow that finds an ML solution systematically.
This thought might be naive, but I am wondering if I can somehow visualize this high-dimensional data and the decision boundaries of my model. Hopefully this visualization can help me do some tuning. In MATLAB, after training, I can choose any two features among all features and MATLAB will give a decision boundary accordingly. Can I do this in Python?
Also, I am looking for some types of graphs that I can use in the presentation to introduce my model and features. What are the most common graphs used in the field?
Thank you | 0 | 1 | 211 |
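For the two-feature decision-boundary plot mentioned in the question, the standard matplotlib recipe is a mesh grid evaluated by the trained model; a compact sketch with made-up data and a stand-in classifier:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

X = np.random.randn(200, 2)                 # two chosen features
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

# Evaluate the model on a grid spanning the feature plane
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)          # decision regions
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
plt.show()
```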
0 | 61,983,790 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-10-21T15:35:00.000 | 1 | 3 | 0 | Python add audio to video opencv | 46,864,915 | 0.049958 | python-3.x,opencv,ffmpeg,opencv-python | You can use pygame for audio.
You need to initialize the pygame.mixer module,
and in the loop, add pygame.mixer.music.play().
But for that, you will need to load an audio file first (pygame.mixer.music.load()).
However, I have found a better idea! You can use the webbrowser module for playing videos (and because it plays in the browser, you can hear the sound!)
import webbrowser
webbrowser.open("video.mp4") | I use python cv2 module to join jpg frames into video, but I can't add audio to it. Is it possible to add audio to video in python without ffmpeg?
P.S. Sorry for my poor English | 0 | 1 | 19,102 |
0 | 46,887,660 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-23T10:08:00.000 | 1 | 2 | 0 | Tensorflow share weights across input placeholder | 46,886,770 | 0.099668 | python,tensorflow,neural-network | A simple option would be to replicate W. If the original W is, let's say, p x q, you can just do l = tf.matmul(X, tf.tile(W, (k, 1))). | I am trying to build a neural network with shared input weights.
Given pk inputs in the form X=[x_1, ..., x_p, v_1,...,v_p,z1,...,z_p,...] and a weight matrix w of shape (p, layer1_size), I want the first layer to be defined as sum(w, x_.) + sum(w, v_.) + ....
In other words the input and the first layer should be fully connected where weights are shared across the different groups of inputs. l = tf.matmul(X, W) where each row of W must have a structure like: (w1, ... ,wp, w1, ..., wp, ...)
How can I do that in tensorflow? | 0 | 1 | 547 |
0 | 58,123,095 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-10-23T18:05:00.000 | 1 | 3 | 0 | Thicken a one pixel line | 46,895,772 | 0.066568 | python,image,numpy,opencv,image-processing | I'd take a look at morphological operations. Dilation sounds closest to what you want. You might need to work on a subregion with your line if you don't want to dilate the rest of the image. | I'm using OpenCV to do some image processing in Python. I'm trying to overlay an outline on an image where the outline was made from a mask. I'm using cv2.Canny() to get the outline of the mask, then changing that to a color using cv2.cvtColor(), and finally converting that edge to cyan using outline[np.where((outline == [255,255,255]).all(axis=2))] = [180,105,255]. My issue now is that this is a one-pixel-thick line that can barely be seen on large images. This outline is all [0,0,0] except on the points I apply as a mask onto my color image using cv2.bitwise_or(img, outline).
I'm currently thickening this outline by brute force, checking every single pixel in the bitmap to see if any of its neighbors are [180,105,255] and, if so, changing that pixel as well. This is very slow. Is there any way using numpy or OpenCV to do this automatically? I was hoping for some conditional indexing with numpy, but can't find anything. | 0 | 1 | 7,385 |
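A sketch of the dilation suggestion, applied to the single-channel edge mask before coloring, so the rest of the image is untouched (`edges` and `img` refer to the Canny output and the color image from the question; the kernel size controls the thickness):

```python
import cv2
import numpy as np

# `edges` is the single-channel cv2.Canny() output from the question
kernel = np.ones((3, 3), np.uint8)
thick_edges = cv2.dilate(edges, kernel, iterations=1)  # ~3 px wide line

outline = cv2.cvtColor(thick_edges, cv2.COLOR_GRAY2BGR)
outline[np.where((outline == [255, 255, 255]).all(axis=2))] = [180, 105, 255]
result = cv2.bitwise_or(img, outline)
```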
0 | 46,911,651 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-24T12:53:00.000 | 0 | 2 | 0 | orb opencv variable inputs | 46,911,160 | 0 | python,opencv,threshold,orb | The Python docstring of ORB_create actually contains information about the parameter nfeatures, which is the maximum number of features to return. Could that solve your problem? | I am new to OpenCV and am trying to extract keypoints of a gesture image with the ORB algorithm in the Python interface. The input image is binary and has many curvatures, so ORB gives too many points as keypoints (which actually are not). I am trying to increase the threshold of the ORB algorithm so that the unnecessary points don't get detected. I have searched the ORB documentation and haven't found any use of a threshold except in the C++ function description.
So my question is: what are the input parameters for the ORB detection algorithm, and what is the actual syntax in Python?
Thanks in advance. | 0 | 1 | 258 |
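A sketch of the Python syntax asked about (the parameter values and file name are illustrative; nfeatures caps the number of keypoints and fastThreshold rejects weak corners):

```python
import cv2

img = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=200,     # cap on returned keypoints
                     fastThreshold=40)  # raise to detect fewer, stronger corners
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints))
```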
0 | 47,732,243 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-24T12:53:00.000 | 0 | 2 | 0 | orb opencv variable inputs | 46,911,160 | 0 | python,opencv,threshold,orb | After looking at the ORB() function in the OpenCV C++ description, I realized that the input parameters can be passed into the function in Python as nfeatures=200, mask=img, etc. (not sure about C++ though). | I am new to OpenCV and am trying to extract keypoints of a gesture image with the ORB algorithm in the Python interface. The input image is binary and has many curvatures, so ORB gives too many points as keypoints (which actually are not). I am trying to increase the threshold of the ORB algorithm so that the unnecessary points don't get detected. I have searched the ORB documentation and haven't found any use of a threshold except in the C++ function description.
So my question is: what are the input parameters for the ORB detection algorithm, and what is the actual syntax in Python?
Thanks in advance. | 0 | 1 | 258 |
0 | 46,918,492 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-10-24T17:26:00.000 | 1 | 1 | 0 | Keras. Correct way to give a (x,y) plot + features as input to a NN | 46,916,554 | 1.2 | python-3.x,tensorflow,keras | If the x coordinates are the same for all plots, you could (and in fact should) ignore them, because in that case they do not introduce any additional information. Using them will only lead to a more complex neural network, worse convergence and, as a result, increased training time and degraded performance.
About the second question - it is not necessary to do that. During training, the neural network will automatically identify which features are the most important. | Currently I'm trying to make Keras binary-classify a set of (x, y) plots.
As a newbie, I can't figure out the proper way to give a correct input, since I've got these plots with approximately 3400 pairs each, plus a set of 8 additional features (local minima locations) for every plot. What I tried is to give Keras a 3400 + 3400 + 8 input layer, but it just feels wrong to do, and so far it isn't making any progress.
As the x variable is almost a correlative order, should I ignore it?
Is it possible to ask Keras to distinguish: "Hey, these 3400 numbers are a plot, and these other 8 are some features about it"? | 0 | 1 | 128 |
0 | 46,919,591 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-10-24T20:20:00.000 | 0 | 2 | 0 | How to use Pandas to find the strongest month of sale for a product | 46,919,360 | 0 | python,pandas | Well, I am not sure how your data looks, so I am not sure the answer will help, but from what you said, you are trying to find the month with the highest sales. Given the product, you will probably want to use pandas groupby on the month; you will then have a DataFrame grouped by month.
Imagine a DataFrame named Data:
mean_buy = Data.groupby(months).mean()
with months = np.array([1,2,3,4,5,6,7,8,9,10,11,12]*number_of_years) | I am new to python pandas, and I am trying to find the strongest month within a given series of timestamped sales data. The question to answer for n products is: when is the demand for the given product the highest?
I am not looking for a complete solution but rather some ideas, how to approach this problem.
I already looked into seasonal_decomposition to get some sort of seasonality indication but I feel that this might be a bit too complicated of an approach. | 0 | 1 | 324 |
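A fuller sketch of the group-by-month idea with a timestamped frame (the column names and values are assumptions):

```python
import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(["2016-01-05", "2016-06-10",
                            "2017-01-20", "2017-06-15"]),
    "demand": [100, 250, 120, 260],
})

monthly = sales.groupby(sales["date"].dt.month)["demand"].mean()
print(monthly.idxmax())   # month number with the highest average demand (6)
```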
0 | 46,919,474 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-10-24T20:20:00.000 | 0 | 2 | 0 | How to use Pandas to find the strongest month of sale for a product | 46,919,360 | 0 | python,pandas | I don't have 50 reputation to add a comment, hence this answer. Some insight into your required solution would be great, because your requirement is not clear to me. That said, coming to the idea: if you can split and load the time series data as timestamp and demand, then you can easily do it using regular Python methods like max, and then get the timestamp value where the max demand occurred. | I am new to python pandas, and I am trying to find the strongest month within a given series of timestamped sales data. The question to answer for n products is: when is the demand for the given product the highest?
I am not looking for a complete solution but rather some ideas, how to approach this problem.
I already looked into seasonal_decomposition to get some sort of seasonality indication but I feel that this might be a bit too complicated of an approach. | 0 | 1 | 324 |
0 | 47,045,367 | 0 | 0 | 0 | 0 | 1 | false | 18 | 2017-10-25T07:49:00.000 | 0 | 4 | 0 | Getting around tf.argmax which is not differentiable | 46,926,809 | 0 | python,tensorflow | tf.argmax is not differentiable because it returns an integer index. tf.reduce_max and tf.maximum are differentiable. | I've written a custom loss function for my neural network but it can't compute any gradients. I think it is because I need the index of the highest value and am therefore using argmax to get this index.
As argmax is not differentiable, I need to get around this, but I don't know how it is possible.
Can anyone help? | 0 | 1 | 10,784 |
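One standard differentiable stand-in is the soft-argmax: a softmax-weighted average of the indices, which approaches the true argmax as the temperature grows. A sketch (the temperature value is a tunable assumption):

```python
import tensorflow as tf

def soft_argmax(logits, temperature=10.0):
    weights = tf.nn.softmax(logits * temperature)        # peaked distribution
    indices = tf.range(tf.shape(logits)[-1], dtype=tf.float32)
    return tf.reduce_sum(weights * indices, axis=-1)     # differentiable

x = tf.constant([[0.1, 2.0, 0.3]])
print(soft_argmax(x).numpy())   # close to 1.0, the index of the maximum
```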
0 | 48,502,090 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-25T09:46:00.000 | 1 | 1 | 0 | Running Python Tensorflow on CPU and GPU in parallel | 46,929,145 | 0.197375 | python,tensorflow,gpu,cpu | Do any of your networks share operators?
E.g. they use variables with the same name in the same variable_scope which is set to variable_scope(reuse=True)
Then multiple nets will try to reuse the same underlying Tensor structures.
Also check whether tf.ConfigProto.allow_soft_placement is set to True or False in your tf.Session. If True, you can't be guaranteed that the device placement will actually be executed in the way you intended in your code. | I need to train a very large number of Neural Nets using Tensorflow with Python. My neural nets (MLPs) range from very small ones (~2 hidden layers with ~30 neurons each) to large ones (3-4 layers with >500 neurons each).
I am able to run all of them sequentially on my GPU, which is fine. But my CPU is almost idling. Additionally, I found out that my CPU is quicker than the GPU for my very small nets (I assume because of the GPU overhead etc.). That's why I want to use both my CPU and my GPU in parallel to train my nets. The CPU should process from the smaller networks to the larger ones, and my GPU should process from the larger to the smaller ones, until they meet somewhere in the middle... I thought this was a good idea :-)
So I simply start my consumers twice in different processes, one with device = CPU, the other with device = GPU. Both start and consume the first 2 nets as expected. But then, the GPU consumer throws an exception that its tensor is accessed/violated by another process on the CPU(!), which I find weird, because it is supposed to run on the GPU...
Can anybody help me to fully segregate my two processes? | 0 | 1 | 1,450 |
0 | 46,945,235 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-10-26T02:53:00.000 | 0 | 3 | 0 | If the value of list 1 is the index of list 2, how can I sort list 1 according to the list 2 value | 46,945,113 | 0 | python,sorting | You can transform the items in a using a key. That key is a function of each element of a.
Try this:
a = sorted(a, key=lambda i: b[i])
Note that if any value in a is outside the range of b, this would fail and raise an IndexError: list index out of range.
Based on your description, however, you want the list to be sorted in reverse order, so:
a = sorted(a, key=lambda i: b[i], reverse=True) | Suppose there is a list a whose elements are index values v into another list b (i.e. a[i] = v and b[v] exists). I want to sort the list a according to the values of list b.
For example
a=[0,2,3,1] b=[7,10,8,6]
I want the list a to become a=[1,2,0,3]; is there some concise way to sort list a? | 0 | 1 | 48 |
0 | 46,945,183 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-10-26T02:53:00.000 | -1 | 3 | 0 | If the value of list 1 is the index of list 2, how can I sort list 1 according to the list 2 value | 46,945,113 | -0.066568 | python,sorting | A simple solution would be to sort the list b first, then get the indexes of list b after sorting, and finally take the values of list a in the order of the b indexes obtained after sorting list b. | Suppose there is a list a whose elements are index values v into another list b (i.e. a[i] = v and b[v] exists). I want to sort the list a according to the values of list b.
For example
a=[0,2,3,1] b=[7,10,8,6]
I want the list a to become a=[1,2,0,3]; is there some concise way to sort list a? | 0 | 1 | 48 |
0 | 46,961,237 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-26T18:10:00.000 | -1 | 1 | 0 | One hot encoding and its combination with DecisionTreeClassifier | 46,961,091 | -0.197375 | python,pandas,scikit-learn,decision-tree,one-hot-encoding | The two options you are describing do two very different things.
If you choose to binarize (one-hot encode) the values of the variable, there is no order to them. The decision tree, at each split, considers a binary split on each of the new binary variables and chooses the most informative one. So yes, each binary feature is now treated as an independent feature.
The second choice puts the values in an order. Implicitly you are saying that a < b < c if transforming a=1, b=2, c=3. If you use this variable in a decision tree, the algorithm considers the threshold splits 1 vs 2,3 and 1,2 vs 3 (but not 1,3 vs 2).
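As a small illustration of the two encodings (the toy data and labels here are made up):
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({'A': ['a', 'b', 'c', 'a']})
y = [0, 1, 1, 0]

X_onehot = pd.get_dummies(df['A'], prefix='A')         # three columns: A_a, A_b, A_c
X_ordinal = pd.factorize(df['A'])[0].reshape(-1, 1)    # one column: a=0, b=1, c=2

tree_onehot = DecisionTreeClassifier().fit(X_onehot, y)    # splits on A_a, A_b, A_c separately
tree_ordinal = DecisionTreeClassifier().fit(X_ordinal, y)  # splits on thresholds over 0, 1, 2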
So the meaning of the variables is very different and I don't think you can expect equivalent results. | So my understanding is that you perform one-hot encoding to convert categorical features to integers in order to fit them to a scikit-learn machine learning classifier.
So let's say we have two choices
a. Splitting all the features into one-hot encoded features (if A is, say, a categorical feature that takes values 'a', 'b' and 'c', then it becomes A_a, A_b and A_c with binary values in each of its rows, with binary value '1' meaning that the observation has the feature and binary value '0' meaning it does not possess the feature!). I would then fit a DecisionTreeClassifier on this.
b. Not splitting all the features, but converting each category into an integer value WITHOUT performing one-hot encoding (if A is, say, a categorical feature that takes values 'a', 'b' and 'c', then 'a', 'b' and 'c' are renamed as 1, 2, 3 and no new columns are created; 'A' remains a single column with integer values 1, 2, 3, obtained by using pandas.factorize or something), which you can then fit a DecisionTreeClassifier on.
My question is, when you fit DecisionTreeClassifier on the one hot encoded dataset, with multiple columns, will each of the new columns be treated as a separate feature?
Also, if you fit the DecisionTreeClassifier on the dataset where the categorical features are simply converted to an integer and kept in a single column, will it produce the same node splits as the one where the DecisionTreeClassifier was fit on the dataset with the one-hot encoded features?
Like, when you visualize the tree in both cases,
is the interpretation given below the right way to look at it?
for DecisionTreeClassifier with one-hot-encoding
if attribute == A_a, then yes
if attribute == A_b, then no
for DecisionTreeClassifier without one-hot-encoding ('a' represented by integer value 1 and 'b' by value 2)
if attribute == 1 then yes
if attribute == 2, then no | 0 | 1 | 325 |
0 | 46,979,942 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-26T23:13:00.000 | 0 | 2 | 0 | Python - How can I find difference between two rows of same column using loop in CSV file? | 46,965,192 | 0 | python,python-3.x,csv | import csv
# Files to load (Remember to change these)
file_to_load = "raw_data/budget_data_2.csv"
# Read the csv and convert it into a list of dictionaries
with open(file_to_load) as revenue_data:
    reader = csv.reader(revenue_data)
    # use next() to skip the first title row in the csv file
    next(reader)
    revenue = []
    date = []
    rev_change = []
    # in this loop I sum column 1, which is revenue in the csv file, and collect the dates (column 0) to count the total months
    for row in reader:
        revenue.append(float(row[1]))
        date.append(row[0])
    print("Financial Analysis")
    print("-----------------------------------")
    print("Total Months:", len(date))
    print("Total Revenue: $", sum(revenue))
    # in this loop I take the differences between consecutive rows of the "Revenue" column to find the total revenue change, plus the max and min revenue change
    for i in range(1, len(revenue)):
        rev_change.append(revenue[i] - revenue[i - 1])
    avg_rev_change = sum(rev_change) / len(rev_change)
    max_rev_change = max(rev_change)
    min_rev_change = min(rev_change)
    # note: date[j] here is the month the change is measured from, since rev_change[j] = revenue[j+1] - revenue[j]
    max_rev_change_date = str(date[rev_change.index(max(rev_change))])
    min_rev_change_date = str(date[rev_change.index(min(rev_change))])
    print("Average Revenue Change: $", round(avg_rev_change))
    print("Greatest Increase in Revenue:", max_rev_change_date, "($", max_rev_change, ")")
    print("Greatest Decrease in Revenue:", min_rev_change_date, "($", min_rev_change, ")")
Output I got
Financial Analysis
-----------------------------------
Total Months: 86
Total Revenue: $ 36973911.0
Average Revenue Change: $ -5955
Greatest Increase in Revenue: Jun-2014 ($ 1645140.0 )
Greatest Decrease in Revenue: May-2014 ($ -1947745.0 ) | Date Revenue
9-Jan $943,690.00
9-Feb $1,062,565.00
9-Mar $210,079.00
9-Apr -$735,286.00
9-May $842,933.00
9-Jun $358,691.00
9-Jul $914,953.00
9-Aug $723,427.00
9-Sep -$837,468.00
9-Oct -$146,929.00
9-Nov $831,730.00
9-Dec $917,752.00
10-Jan $800,038.00
10-Feb $1,117,103.00
10-Mar $181,220.00
10-Apr $120,968.00
10-May $844,012.00
10-Jun $307,468.00
10-Jul $502,341.00
# This is what I did so far...
# Dependencies
import csv
# Files to load (Remember to change these)
file_to_load = "raw_data/budget_data_2.csv"
totalrev = 0
count = 0
# Read the csv and convert it into a list of dictionaries
with open(file_to_load) as revenue_data:
    reader = csv.reader(revenue_data)
    next(reader)
    for row in reader:
        count += 1
        revenue = float(row[1])
        totalrev += revenue
    for i in range(1, revenue):
        revenue_change = (revenue[i + 1] - revenue[i])
    avg_rev_change = sum(revenue_change) / count
    print("avg rev change: ", avg_rev_change)
    print("budget_data_1.csv")
    print("---------------------------------")
    print("Total Months: ", count)
    print("Total Revenue:", totalrev)
I have the above data in a CSV file. I am having a problem finding the revenue change, which is the revenue of row 1 - row 0, row 2 - row 1 and so on... Finally, I want the sum of the total revenue change. I tried with a loop but I guess there is some silly mistake. Please suggest code so I can compare it against my mistake. I am new to Python and coding. | 0 | 1 | 5,951
0 | 46,967,516 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-27T04:01:00.000 | 0 | 2 | 0 | python sklearn accuracy score for two different list | 46,967,312 | 0 | python,numpy,scikit-learn | You have to convert the array to a list to make it work.
This should do it for you:
accuracy_score(y_test.tolist(), labs) | I have two lists
y_test = array('B', [1, 2, 3, 4, 5])
and
labs = [1, 2, 3, 4, 5]
In sklearn, when I do print accuracy_score(y_test, labs), I get the error
ValueError: Expected array-like (array or non-string sequence), got array('B', [1, 2, 3, 4, 5]).
I tried to compare it using print accuracy_score(y_test['B'],labs) but it is showing
TypeError: array indices must be integers | 0 | 1 | 2,073 |
0 | 47,003,117 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-28T12:59:00.000 | 0 | 1 | 0 | Is the loss value computed by Keras for 2D CNN regression point-wise? | 46,989,998 | 0 | python,keras | The way MSE is defined in Keras makes it compute an average per-pixel error, so you can simply interpret the loss value as the average squared pixel error. | I'm using Keras for a CNN on 2D images for regression with mean squared error as the loss function. The loss values are on the order of 100. To know the average error at each pixel, should I divide the loss by the total number of pixels? Or are the displayed loss values already per pixel? | 0 | 1 | 246
0 | 46,998,392 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-29T08:29:00.000 | 1 | 1 | 0 | Logistic Regression- Working with categorical variable in Python? | 46,998,234 | 0.197375 | python,pandas,regression,statsmodels | You can apply grouping and then do logistic regression on each group. Or you can treat it as a multiclass classifier and do "softmax regression". | I have a dataset that includes 7 different covariates and an output variable, the 'success rate'.
I'm trying to find the important factors that predict the success rate. One of the covariates in my dataset is a categorical variable that takes on 700 values (0- 700), each representing the ID of the district they're from.
How should I deal with this variable while performing logistic regression?
If I make 700 dummy columns, how can I make it easier to interpret the results?
I'm using Python and statsmodels. | 0 | 1 | 755 |
0 | 46,999,668 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-29T11:24:00.000 | -1 | 2 | 0 | Random element in fitness function genetic algorithm | 46,999,584 | -0.099668 | python,neural-network,artificial-intelligence,genetic-algorithm | Normally you use a seed for genetic algorithms, which should be fixed. It will always generate the same "random" children sequentially, which makes your approach reproducible. So the genetic algorithm is kind of pseudo-random. That is the state of the art for how to perform genetic algorithms. | So I am using a genetic algorithm to train a feedforward neural network, tasked with recognizing a function given to the genetic algorithm, i.e. x = x**2 or something more complicated, obviously.
I realized I am using random inputs in my fitness function, which causes the fitness to be somewhat random for a member of the population; however, it is still in line with how close the member is to the given function, obviously. A colleague remarked that it is strange that the same member of the population doesn't always get the same fitness, which I agree is a little unconventional. However, it got me thinking: is there any reason why this would be bad for the genetic algorithm? I actually think it might be quite good because it enables me to have a rather small testset, speeding up the generations while still avoiding overfitting to any given testset.
Does anyone have experience with this?
(fitness function is MSE compared to given function, for a randomly generated testset of 10 iterations) | 0 | 1 | 495 |
0 | 47,006,813 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-30T00:25:00.000 | 0 | 1 | 0 | Catboost tuning order? | 47,006,642 | 0 | python,machine-learning,cross-validation,hyperparameters,catboost | You have essentially answered your own question already. For any variable that depends on something else x, you must first define x.
One thing to keep in mind is that you can define a function before the variables you need to pass into it, since it's only when you call the function that you need the input variables; defining the function is just setting out the process you will use. Calling a function and defining the variable it returns is what you have to do in order.
The order you would use is:
Include any remote libraries or functions, define any initial variables that don't depend on anything, define your local functions.
Next, in your main, you first need to generate the variables that your iteration function requires, then iterate with these variables, then generate the ones that depend on the iteration. | So with Catboost you have parameters to tune and also iterations to tune. For iterations you can tune using cross validation with the overfit detector turned on, and for the rest of the parameters you can use Bayesian/Hyperopt/RandomSearch/GridSearch. My question is which order to tune Catboost in. Should I tune the number of iterations first or the other parameters first? A lot of the parameters are kind of dependent on the number of iterations, but the number of iterations could also be dependent on the parameters set. So any idea which order is the proper way?
0 | 47,014,211 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-10-30T11:17:00.000 | 0 | 1 | 0 | One-hot vector to softmax-like distribution in tensorflow | 47,013,937 | 0 | python,tensorflow | A solution could be to keep your one-hot vector ;).
Another one, more general, is to make a random positive vector, then compute the difference between the highest score d and the score of your true class, then add a random number between d and +infinity to the true class's score, then normalize to get a valid distribution. (note that you can force the true class's initial score to be 0, but that will probably be a tiny bit longer to code).
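A minimal NumPy sketch of that second idea (the uniform distributions used here are an arbitrary choice):
import numpy as np

def soften_one_hot(one_hot, max_margin=0.5):
    scores = np.random.rand(len(one_hot))       # a random positive score per class
    true = int(np.argmax(one_hot))
    # push the true class above the current maximum by a random margin
    scores[true] = scores.max() + np.random.uniform(0, max_margin)
    return scores / scores.sum()                # normalize to a valid distribution

print(soften_one_hot([0, 0, 0, 0, 1, 0]))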
The choice of the distributions for the initial random vector and for the quantity to add to the true class's score will change the output distribution, but I don't know which one you want or why you want to do that... | Is there a way in tensorflow to transform a one-hot vector into a softmax-like distribution?
For example, I have the following one-hot vector:
[0 0 0 0 1 0]
I want to have a vector with probabilities where the one value is the most likely number, like:
[0.1 0.1 0.1 0.1 0.5 0.1]
This vector should always be random, but with the true class having the highest probability.
How can I reach this? | 0 | 1 | 359 |
0 | 47,031,545 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-10-30T23:49:00.000 | 2 | 1 | 0 | Is there a heuristic for homogenizing image dimensions before using them to train neural net? | 47,025,896 | 0.379949 | python-2.7,image-processing,machine-learning,keras,conv-neural-network | I don't think there is a standard approach on this. In machine learning, in many cases we have to try and see.
If I were you, if I had to build a custom neural network, I would start with the mean image size and then gradually increase the size until reaching the optimal score.
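For example, a rough sketch of resizing everything to the mean size (assuming images is a list of HxW(xC) arrays; scipy.misc.imresize matches the question's setup but is removed in recent SciPy, where PIL or skimage would be used instead):
import numpy as np
from scipy.misc import imresize

target = (int(np.mean([im.shape[0] for im in images])),   # mean height
          int(np.mean([im.shape[1] for im in images])))   # mean width
resized = [imresize(im, target) for im in images]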
If you are using a pretrained neural network, then just resize your images to the network's default. | I am training a neural net on a set of images with heterogeneous dimensions. Of course, they all have to have the same dimensions to be fed to the NN, and it is simple enough to use scipy.misc.imresize() for this. But how should I choose width and height? My first instinct was to plot histograms of both and eyeball values around the 75th percentile. I also thought maybe I should scale all images up to the max values for both height and width, so that no details are discarded from the higher-pixel images. Is there a best practice for addressing this problem? Thanks!
For reference, I am using python 2.7 and keras with theano backend and dimension ordering. | 0 | 1 | 44 |
0 | 47,037,520 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-10-31T13:53:00.000 | 1 | 6 | 0 | Using large index in python (numpy or lists) | 47,037,150 | 0.033321 | python,numpy | I can propose the notation [5*10**5:1*10**6], but it's not as clear as 5e5 and 1e6, and it's even worse in a case like 3.5e6 = 35*10**5. | I frequently need to enter large integers for indexing and creating numpy arrays, such as 3500000 or 250000. Normally I'd enter these using scientific notation, 3.5e6 or .25e6 or such. This is quicker, and much less likely to have errors.
Unfortunately, python expects integer datatypes for indexing. The obvious solution is to convert datatypes. So [5e5:1e6] becomes [int(5e5):int(1e6)], but this decreases readability and is somewhat longer to type. Not to mention, it's easy to forget what datatype an index is until an indexing operation fails on a list or numpy.ndarray.
Is there a way to have numpy or python interpret large floats as integers, or is there an easy way to create large integers in python? | 0 | 1 | 330 |
0 | 47,038,338 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-10-31T14:42:00.000 | 0 | 5 | 0 | Save data as a *.dat file? | 47,038,101 | 0 | python,database,save | Correct me if I'm wrong, but opening, writing to, and subsequently closing a file should count as "saving" it. You can test this yourself by running your import script and comparing the last modified dates. | I am writing a program in Python which should import *.dat files, subtract a specific value from certain columns and subsequently save the file in *.dat format in a different directory.
My current tactic is to load the datafiles in a numpy array, perform the calculation and then save it. I am stuck with the saving part. I do not know how to save a file in python in the *.dat format. Can anyone help me? Or is there an alternative way without needing to import the *.dat file as a numpy array? Many thanks! | 0 | 1 | 41,290 |
0 | 47,042,891 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-10-31T18:53:00.000 | 0 | 1 | 0 | How to prevent charts or tables to disappear when I re-open Jupyter Notebook? | 47,042,689 | 0 | python,ipython,jupyter-notebook,ipython-notebook | Are you explicitly saving your notebook before you re-open it? A Jupyter notebook is really just a large json object, eventually rendered as a fancy html object. If you save the notebook, illustrations and diagrams should be saved as well. If that doesn't do the trick, try putting the one-liner "data" in a different cell than read_sql(). | I use Pandas with Jupyter notebook a lot. After I ingest a table in from using pandas.read_sql, I would preview it by doing the following:
data = pandas.read_sql("""blah""")
data
One problem that I have been running into is that all my preview tables will disappear if I reopen my .ipynb
Is there a way to prevent that from happening?
Thanks! | 0 | 1 | 71 |
0 | 55,322,568 | 0 | 1 | 0 | 0 | 4 | false | 5 | 2017-10-31T19:44:00.000 | 1 | 5 | 0 | Jupyter Notebook: no module named pandas | 47,043,407 | 0.039979 | python,python-3.x,pandas,jupyter-notebook | Try this for python3
sudo pip3 install pandas | I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd)
but I'm getting the following error:
ModuleNotFoundError: No module named 'pandas'
Some pertinent information:
I'm using python3
I've installed pandas using conda install pandas
My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook. | 0 | 1 | 7,502 |
0 | 47,049,051 | 0 | 1 | 0 | 0 | 4 | false | 5 | 2017-10-31T19:44:00.000 | 4 | 5 | 0 | Jupyter Notebook: no module named pandas | 47,043,407 | 0.158649 | python,python-3.x,pandas,jupyter-notebook | You can try: which conda and which python to see the exact location where conda and python was installed and which was launched.
And try using the absolute path of conda to launch jupyter.
For example, /opt/conda/bin/jupyter notebook | I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd)
but I'm getting the following error:
ModuleNotFoundError: No module named 'pandas'
Some pertinent information:
I'm using python3
I've installed pandas using conda install pandas
My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook. | 0 | 1 | 7,502 |
0 | 72,239,426 | 0 | 1 | 0 | 0 | 4 | false | 5 | 2017-10-31T19:44:00.000 | 0 | 5 | 0 | Jupyter Notebook: no module named pandas | 47,043,407 | 0 | python,python-3.x,pandas,jupyter-notebook | The default kernel in jupyter notebook points the python that is different to the python used inside the terminal.
You could check using which python
So the packages installed by conda lives in different place compared to the python that is used by the jupyter notebook at default.
To fix the issue, both needs to be same.
For that create a new kernel using ipykernel. syntax: python -m ipykernel install --user --name custom_name --display-name "Python (custom_name)"
After that, check the custom kernel and the path of the python used. jupyter kernel list --json
Finally, Restart the jupyter notebook. And change the kernel to the new custom_kernel. | I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd)
but I'm getting the following error:
ModuleNotFoundError: No module named 'pandas'
Some pertinent information:
I'm using python3
I've installed pandas using conda install pandas
My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook. | 0 | 1 | 7,502 |
0 | 64,191,121 | 0 | 1 | 0 | 0 | 4 | false | 5 | 2017-10-31T19:44:00.000 | 0 | 5 | 0 | Jupyter Notebook: no module named pandas | 47,043,407 | 0 | python,python-3.x,pandas,jupyter-notebook | It seems that with Homebrew installs, the package dependencies of Homebrew formulas are not handled well; mostly path issues, as installs end up in different locations vs pip3. I also tried installing pandas through the notebook with !pip3, but I got errors that it was already satisfied, meaning it was already installed, just not importing. I uninstalled the Homebrew JupyterLab and used pip3 instead, and everything worked properly as a workaround. | I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd)
but I'm getting the following error:
ModuleNotFoundError: No module named 'pandas'
Some pertinent information:
I'm using python3
I've installed pandas using conda install pandas
My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook. | 0 | 1 | 7,502 |
0 | 47,129,061 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2017-11-01T09:01:00.000 | 1 | 2 | 0 | unable to read stata .dta file in python | 47,051,326 | 0.099668 | python,pandas,stata | Just use pandas' read_table(), then make sure to include delim_whitespace=True and header=None. | I am trying to read a Stata (.dta) file in Python with pandas.read_stata, but I'm getting this error:
ValueError: Version of given Stata file is not 104, 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), or 118 (Stata 14)
Please advise. | 0 | 1 | 3,485 |
0 | 47,236,204 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-11-02T01:47:00.000 | 0 | 2 | 0 | why model is giving high accuracy of 84% but very low AUC 13%? | 47,066,314 | 0 | python,machine-learning,random-forest | No, your model is not fine.
In your dataset around 77% of the records belong to "Label 0" (23814 of 30744), which makes your model biased towards "Label 0". Thus, even though your AUC is low, it shows 84% accuracy, as most of the data belongs to "Label 0".
You can undersample records belonging to "Label 0" or oversample records belonging to "Label 1" to make your model more accurate.
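For instance, a quick undersampling sketch with scikit-learn (assuming a DataFrame df with a 'label' column; the column name is a placeholder):
import pandas as pd
from sklearn.utils import resample

df_major = df[df['label'] == 0]
df_minor = df[df['label'] == 1]
df_major_down = resample(df_major, replace=False,
                         n_samples=len(df_minor), random_state=42)
df_balanced = pd.concat([df_major_down, df_minor])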
Hope it helps. | I have built model which gives me 84% accuracy for random forest and support vector machine but giving very low auc of 13% only. I am building this in python and I am new to machine learning and data science.
I am predicting 0 and 1 labels on dataset. My overall dataset is having records of 30744.
Label 1 - 6930
Label 0 - 23814
Could you please advise if this is fine? Is the model getting overfitted?
I'd appreciate any suggestions on improving AUC. | 0 | 1 | 1,723
0 | 50,312,611 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-11-02T07:20:00.000 | 0 | 1 | 0 | Using bokeh plotting with kafka streaming | 47,069,678 | 0 | java,python,apache-kafka,bokeh,apache-kafka-streams | Try kafka-python. You can set up a simple consumer to read the data from your cluster. | Here is a problem I am stuck with presently.
Recently I have been exploring bokeh for plotting and kafka for streaming.
And I thought of making a sample live dashboard using both of them.
But the problem is that I use Bokeh with Python and the Kafka Streams APIs with Java. Is there a way to use them together, by any chance?
The only way I can see is that both of them can be used with Scala. But presently I don't want to get into Scala. | 1 | 1 | 824
0 | 47,099,951 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-11-02T17:20:00.000 | 0 | 1 | 0 | Looking to cluster short descriptions of reports. Should I use Word2Vec or Doc2Vec | 47,081,149 | 0 | python,machine-learning,nlp,word2vec,doc2vec | They're very similar, so just as you would tune parameters in some rigorous manner to improve results with a single approach, you should try them both and compare the results.
Your dataset sounds tiny compared to what either needs to induce good vectors – Word2Vec is best trained on corpora of many millions to billions of words, while Doc2Vec's published results rely on tens of thousands to millions of documents.
If composing some summary-vector-of-the-document from word-vectors, you could potentially leverage word-vectors that are reused from elsewhere, but that will work best if the vectors' original training corpus is similar in vocabulary/domain-language-usage to your corpus. For example, don't expect words trained on formal news writing to work well with, or even cover the same vocabulary as, informal tweets, or vice-versa.
If you had a larger similar-text corpus of documents to train a Doc2Vec model, you could potentially train a good model on the full set of documents, but then just use your small subset, or re-infer vectors for your small subset, and get better results than a model that was only trained on your subset.
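A gensim sketch of that idea (big_corpus and small_subset are placeholders for your own collections of raw texts):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=text.split(), tags=[i])
        for i, text in enumerate(big_corpus)]
model = Doc2Vec(docs, vector_size=100, epochs=20, min_count=2)

# vectors for your small subset, re-inferred from the larger model
vectors = [model.infer_vector(text.split()) for text in small_subset]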
Strictly for clustering, and with your current small corpus of short texts, if you have good word-vectors from elsewhere, it may be worth looking at the "Word Mover's Distance" method of calculating pairwise document-to-document similarity. It can be expensive to calculate on larger docs and large document-sets, but might support clustering well. | So, I have close to 2000 reports and each report has an associated short description of the problem. My goal is to cluster all of these so that we can find distinct trends within these reports.
One of the features I'd like to use is some sort of contextual text vector. Now, I've used Word2Vec and think it would be a good option, but I also saw Doc2Vec and I'm not quite sure which would be the better option for this use case.
Any feedback would be greatly appreciated. | 0 | 1 | 380 |
0 | 51,358,087 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-11-03T11:28:00.000 | 0 | 2 | 0 | Python : Halton and Hammersley quasi random sequences | 47,094,705 | 0 | python,numpy,scipy | Most library methods offering low discrepancy sequences for arbitrary dimensions won't include arguments that allow you to define arbitrary intervals for each of the separate dimensions/components. However, in virtually all of these cases, you can adapt the existing method to suit your requirements with the addition of a single line of code. Understanding this will dramatically increase the number of libraries you can choose to use!
For nearly all low discrepancy (quasirandom) sequences, each term is equidistributed in the half open range [0,1).
Similarly, for d-dimensional sequences, each component of each term falls in [0,1).
This includes the Halton sequence ( which is a generalization if the van der Corput), Hammersley, Weyl/Kronecker, Sobol, and Niederreiter sequences.
Converting a value from [0,1) to [a,b) can be achieved via the linear transformation x = a + (b-a) z.
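For illustration, a minimal pure-Python sketch of a Halton point rescaled with that transformation (no external package; the prime bases are the standard Halton construction):
def van_der_corput(n, base):
    # radical inverse of n in the given base, a value in [0, 1)
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, r = divmod(n, base)
        q += r / denom
    return q

def halton(n, bases=(2, 3, 5)):
    return [van_der_corput(n, b) for b in bases]

intervals = [(2, 4), (2, 4), (1, 7)]  # the ranges from the question
z = halton(7)
point = [a + (b - a) * zi for zi, (a, b) in zip(z, intervals)]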
Thus if the n-th term of the canonical low discrepancy sequence is (z_1, z_2, z_3), then your desired sequence is (2+2*z_1, 2+2*z_2, 1+6*z_3). | I am trying to construct Hammersley and Halton quasi random sequences. I have for example three variables x1, x2 and x3. They all have integer values. x1 has a range from 2-4, x2 from 2-4 and x3 from 1-7. Is there any python package which can create those sequences? I saw that there are some projects like sobol or SALib, but they do not implement Halton and Hammersley.
Best regards | 0 | 1 | 1,801 |
0 | 56,950,297 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-11-04T02:37:00.000 | 3 | 2 | 0 | Why does get_weights return an empty list? | 47,106,830 | 0.291313 | python,machine-learning,tensorflow,keras | Maybe you are asking for weights before they are created.
Weights are created when the Model is first called on inputs or build() is called with an input_shape.
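A minimal example of that behaviour at the layer level (standalone Keras; tf.keras's Dense layer behaves the same way):
from keras.layers import Dense

layer = Dense(4)
print(layer.get_weights())         # [] - no weights exist before the layer is built
layer.build(input_shape=(None, 8))
print(len(layer.get_weights()))    # 2 - kernel and bias are created by build()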
For example, if you load weights from checkpoint but you don't give an input_shape to the model, then get_weights() will return an empty list. | I am teaching myself data science and something peculiar has caught my eyes. In a sample DNN tutorial I was working on, I found that the Keras layer.get_weights() function returned empty list for my variables. I've successfully cross validated and used model.fit() function to compute the recall scores.
But as I'm trying to use the get_weights() function on my categorical variables, it returns empty weights for all.
I'm not looking for a solution to code, but I am just curious about what could possibly cause this. I've read through the Keras API but it did not provide me with the information I was hoping to see. What could cause the get_weights() function in Keras to return an empty list, except of course the weights not being set? | 0 | 1 | 2,799
0 | 47,526,695 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-11-04T09:36:00.000 | 0 | 2 | 0 | how can i make anaconda spyder code completion work again after installing tensorflow | 47,109,343 | 0 | python,tensorflow,anaconda,spyder,code-completion | For now, as a temporary alternative, I installed an Anaconda version without TensorFlow installed in Anaconda's envs, and I use it when I don't use TensorFlow. I hope this question can be answered completely; please pay attention to my answer. | I am a data scientist in Beijing working with Anaconda on Win7
but after I pip-installed TensorFlow v1.4, code completion in my IDE Spyder in Anaconda stopped working; before that, the code completion function worked perfectly.
Now even after I uninstall TensorFlow, Spyder's code completion function still does not work. Any help?
my environment:
win7
anaconda3 v5.0 for win64 (py3.6)
tensorflow v1.4 for win (tf_nightly-1.4.0.dev20171006-cp36-cp36m-win_amd64.whl)
So, two questions:
1. How can I fix it so as to make Anaconda3 Spyder code completion work again?
2. After uninstalling TensorFlow, Anaconda3 Spyder code completion still does not work; what can I do? | 0 | 1 | 487
0 | 47,525,945 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-11-04T09:36:00.000 | 0 | 2 | 0 | how can i make anaconda spyder code completion work again after installing tensorflow | 47,109,343 | 0 | python,tensorflow,anaconda,spyder,code-completion | I tried pip-installing rope_py3k, jedi and readline, and reset the tool settings, but none of them helped.
My Spyder code editing area also could not auto-complete after the installation of TensorFlow; I reinstalled it and found the same problem.
However, when I reinstalled all envs except TensorFlow, it worked!!
My environment is win10, anaconda3.5, python3.6.3, tensorflow1.4.
Did you resolve it? I hope you can teach me. | I am a data scientist in Beijing working with Anaconda on Win7
but after I pip-installed TensorFlow v1.4, code completion in my IDE Spyder in Anaconda stopped working; before that, the code completion function worked perfectly.
Now even after I uninstall TensorFlow, Spyder's code completion function still does not work. Any help?
my environment:
win7
anaconda3 v5.0 for win64 (py3.6)
tensorflow v1.4 for win (tf_nightly-1.4.0.dev20171006-cp36-cp36m-win_amd64.whl)
So, two questions:
1. How can I fix it so as to make Anaconda3 Spyder code completion work again?
2. After uninstalling TensorFlow, Anaconda3 Spyder code completion still does not work; what can I do? | 0 | 1 | 487
0 | 47,115,148 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-11-04T18:20:00.000 | 10 | 1 | 0 | Sklearn's Imputer v/s df.fillna to replace NaN values with the mean of the column | 47,114,021 | 1.2 | python,pandas,dataframe,scikit-learn | I feel the Imputer class has its own benefits, because you can simply specify mean or median to perform the action, unlike fillna where you need to supply the values yourself. But with the Imputer you need to fit and transform the dataset, which means more lines of code. It may give you better speed over fillna, but unless the dataset is really big it doesn't matter.
But fillna has something which is really cool: you can fill the NaNs even with a custom value, which you may sometimes need. This makes fillna better IMHO, even if it may perform slower. | I found 2 ways to replace NaN values in Python:
One using sklearn's Imputer class and the other using df.fillna().
The latter seems easy, with less code.
But efficiency-wise, which is better?
Can anyone explain the use cases of each? | 0 | 1 | 2,979
0 | 47,131,429 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-11-06T06:55:00.000 | 1 | 5 | 0 | Diff between two dataframes in pandas | 47,131,361 | 0.039979 | python,pandas,merge,compare,diff | Set df2.columns = df1.columns
Now, set every column as the index: df1 = df1.set_index(df1.columns.tolist()), and similarly for df2.
You can now do df1.index.difference(df2.index) and df2.index.difference(df1.index), and the two results are your distinct rows.
What I want to do is basically get a "diff" of the two - where I get back all rows that are not shared between the two dataframes (not in the set intersection). Note, the two dataframes need not be the same length.
I tried using pandas.merge(how='outer') but I was not sure what column to pass in as the 'key' as there really isn't one and the various combinations I tried were not working. It is possible that df1 or df2 has two (or more) rows that are identical.
What is a good way to do this in pandas/Python? | 0 | 1 | 32,562 |
0 | 47,259,321 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-11-06T17:14:00.000 | 0 | 1 | 0 | ModuleNotFound Error in python script but only when imported into a parent script | 47,142,323 | 0 | python,windows,numpy,anaconda | basteflp, thanks for your response. I managed to solve it. The module not found error was due to running the script outside of the a specific Anaconda environment. Running the script after loading the Anaconda environment resolved the error. | On a windows machine with Anaconda installed. Script B runs correctly and produces the correct result. Script B is called from a Windows console app.
When script A imports script B, script B fails with the error "ModuleNotFoundError: No module named 'numpy'".
When script B is passed directly to the Python executable, script B works and executes without error. (I'm new to Python.) Any help pointing me in the right direction would be appreciated.
Thanks | 0 | 1 | 52 |
0 | 47,147,531 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-11-06T23:13:00.000 | 0 | 6 | 0 | Filter Pandas Dataframe using an arbitrary number of conditions | 47,147,414 | 0 | python,pandas | I believe that reduce(lambda mask, f: mask & (df[f[0]] < f[1]), list_of_filters, pd.Series(True, index=df.index)) will do it. (Note the all-True Series as reduce's initial value; without it, reduce would try to combine the first filter tuple itself with a boolean Series. On Python 3, from functools import reduce.) | I am comfortable with basic filtering and querying using Pandas. For example, if I have a dataframe called df I can do df[df['field1'] < 2] or df[df['field2'] < 3]. I can also chain multiple criteria together, for example:
df[(df['field1'] < 3) & (df['field2'] < 2)].
What if I don't know in advance how many criteria I will need to use? Is there a way to "chain" an arbitrary number of these operations together? I would like to pass a list of filters such as [('field1', 3), ('field2', 2), ('field3', 4)] which would result in the chaining of these 3 conditions together.
Thanks! | 0 | 1 | 813 |
0 | 47,172,267 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-11-07T01:36:00.000 | 0 | 1 | 0 | Getting numpy vector from a trained Doc2Vec model for each document | 47,148,615 | 0 | python-3.x,nlp,gensim,doc2vec | Bulk training only creates vectors for tags you supplied. If you want to read out a bulk-trained vector per paragraph (as if by model.docvecs['paragraph000']), you have to give each paragraph a unique tag during training (like 'paragraph000'). You can give docs other tags as well - but bulk training only creates remembers doc-vectors for supplied tags.
After training, you can infer vectors for any other texts you supply to infer_vector() - and of course you could supply the same paragraphs that were used during training. | This is my first time using Doc2Vec
I'm trying to classify works of an author. I have trained a model with Labeled Sentences (paragraphs, or strings of specified length), with words = the list of words in the paragraph, and tags = author's name. In my case I only have two authors.
I tried accessing the docvecs attribute from the trained model but it only contains two elements, corresponding to the two tags I have when I trained the model. I'm trying to get the doc2vec numpy representations of each paragraph I fed in to the training so I can use that as training data later on. How can I do this?
Thanks. | 0 | 1 | 773 |
0 | 64,742,353 | 0 | 0 | 0 | 0 | 1 | false | 30 | 2017-11-07T07:54:00.000 | 5 | 3 | 0 | What is the difference between xgb.train and xgb.XGBRegressor (or xgb.XGBClassifier)? | 47,152,610 | 0.321513 | python,machine-learning,scikit-learn,regression,xgboost | In my opinion the main difference is the training/prediction speed.
For further reference I will call xgboost.train the 'native_implementation' and XGBClassifier.fit the 'sklearn_wrapper'.
I have made some benchmarks on a dataset of shape (240000, 348).
Fit/train time:
sklearn_wrapper time = 89 seconds
native_implementation time = 7 seconds
Prediction time:
sklearn_wrapper = 6 seconds
native_implementation = 3.5 milliseconds
I believe this is explained by the fact that the sklearn_wrapper is designed to use pandas/numpy objects as input, whereas the native_implementation needs the input data to be converted into an xgboost.DMatrix object.
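For reference, the two interfaces side by side (X and y are assumed NumPy/pandas data; in the native API, num_boost_round plays the role of n_estimators):
import xgboost as xgb

# sklearn_wrapper: takes numpy/pandas objects directly
clf = xgb.XGBClassifier(n_estimators=100)
clf.fit(X, y)

# native_implementation: requires an explicit DMatrix
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({'objective': 'binary:logistic'}, dtrain, num_boost_round=100)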
In addition, one can optimise n_estimators (num_boost_round in the native API) when using the native_implementation. | I already know "xgboost.XGBRegressor is a Scikit-Learn Wrapper interface for XGBoost."
But do they have any other differences? | 0 | 1 | 17,925
0 | 47,173,103 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-11-08T05:56:00.000 | 0 | 2 | 0 | No module named tensorflow even after it is present in the local | 47,172,525 | 0 | python,tensorflow,jupyter | It's done... I tried installing within the TensorFlow environment. | I already have TensorFlow in my Anaconda. Still, when I run the IPython notebook, it shows "No module named tensorflow". | 0 | 1 | 263
0 | 47,172,877 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-11-08T05:56:00.000 | 0 | 2 | 0 | No module named tensorflow even after it is present in the local | 47,172,525 | 0 | python,tensorflow,jupyter | Are you using Virtual Environment?
If yes, there might be a difference in versions.
Try pip install ipython, and then import tensorflow.
It may work. | I already have TensorFlow in my Anaconda. Still, when I run the IPython notebook, it shows "No module named tensorflow". | 0 | 1 | 263
0 | 47,173,911 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-11-08T06:50:00.000 | 0 | 1 | 0 | Where is RDD or Spark SQL dataframe stored or persisted in client deploy mode on a Spark 2.1 Standalone cluster? | 47,173,286 | 0 | python,pyspark,apache-spark-sql,spark-dataframe | If I understand correctly, then what you will get on the client side is an int. At least it should be, if set up correctly. So the answer is no, the DF is not going to hit your local RAM.
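A quick sketch of which operations stay on the cluster versus pull data to the client (standard PySpark API):
n = df.count()        # computed on the executors; only the integer travels back
df.cache()            # persists partitions in the executors' memory, not the driver's
rows = df.collect()   # this WOULD pull every row into the client's RAM - avoid it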
You are interacting with the cluster via SparkSession (SparkContext for earlier versions). Even though you are developing (i.e., writing code) on the client machine, the actual computation of Spark operations (i.e., running the pyspark code) will not be performed on your local machine. | I am deploying a Jupyter notebook (using a Python 2.7 kernel) on the client side which accesses data on a remote host and does processing on a remote Spark standalone cluster (using the pyspark library). I am deploying the Spark cluster in client mode. The client machine does not have any Spark worker nodes.
The client does not have enough memory (RAM). I wanted to know: if I perform a Spark action operation on a dataframe like df.count() on the client machine, will the dataframe be stored in the client's RAM or will it be stored in the Spark workers' memory? | 0 | 1 | 187
0 | 47,379,830 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-11-08T19:09:00.000 | 6 | 2 | 0 | Sentiment Analysis with Imbalanced Dataset in LightGBM | 47,187,750 | 1.2 | python-3.x,machine-learning,nlp,sentiment-analysis,lightgbm | Are there any approaches to follow to handle this type of dataset
that is so imbalanced?
Your dataset is almost balanced. 70/30 is close to equal. With gradient boosted trees it is possible to train on much more unbalanced data, like credit scoring, fraud detection, and medical diagnostics, where the percentage of positives may be less than 1%.
Your problem might not be class imbalance, but the metric you use. When you calculate accuracy, you implicitly penalize your model equally for false negatives and false positives. But is that really the case? When classes are imbalanced, or just incomparable from the business or physical point of view, other metrics like precision, recall, or ROC AUC might be of more use than accuracy. For your problem I would recommend ROC AUC.
How can I further improve my model.?
Because it is analysis of text, I would suggest more accurate data cleaning. Some directions to start with:
Did you try different regimes of lemmatization/stemming?
How did you preprocess special entities, like numbers, smileys, abbreviations, company names, etc.?
Did you exploit collocations, by including bigrams or even trigrams into your model along with words?
How did you handle negation? One single "no" could change the meaning dramatically, and CountVectorizer catches that poorly.
Did you try to extract semantics from the words, e.g. match the synonyms or use word embeddings from a pretrained model like word2vec or fastText?
Maybe tree-based models are not the best choice. In my own experience, the best sentiment analysis was performed by linear models like logistic regression or a shallow neural network. But you should heavily regularize them, and you should scale your features wisely, e.g. with TF-IDF.
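A compact sketch of such a baseline (train_texts and train_labels are assumed placeholders for your data):
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=3),
                    LogisticRegression(C=1.0))
clf.fit(train_texts, train_labels)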
And if your dataset is large, you can try deep learning and train an RNN on your data. LSTM is often the best model for many text-related problems.
Should I try down-sampling?
No, you should never down-sample, unless you have too much data to process on your machine. Down-sampling creates biases in your data.
If you really really want to increase the relative importance of the minority class for your classifier, you can just reweight the observations. As far as I know, in LightGBM you can change class weights with the scale_pos_weight parameter.
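With the class counts from the question, that would look like the following one-liner sketch (assuming the scikit-learn wrapper):
import lightgbm as lgb
clf = lgb.LGBMClassifier(scale_pos_weight=23814 / 6930)  # ~3.4, up-weights "Label 1"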
Or is it the maximum possible accuracy? How can I be sure of it?
You can never know. But you can do an experiment: ask several humans to label your test samples, and compare them with each other. If only 90% of the labels coincide, then even humans cannot reliably classify the remaining 10% of samples, so you have reached the maximum.
And again, don't focus on accuracy too much. Maybe, for your business application it is okay if you incorrectly label some positive reviews as negative, as long as all the negative reviews are successfully identified. | I am trying to perform sentiment analysis on a dataset of 2 classes (Binary Classification). Dataset is heavily imbalanced about 70% - 30%. I am using LightGBM and Python 3.6 for making the model and predicting the output.
I think imbalance in dataset effect performance of my model. I get about 90% accuracy but it doesn't increase further even though I have performed fine-tuning of the parameters. I don't think this the maximum possible accuracy as there are others who scored better than this.
I have cleaned the dataset with Textacy and nltk. I am using CountVectorizer for encoding the text.
I have tried up-sampling the dataset but it resulted in a poor model (I haven't tuned that model).
I have tried using the is_unbalance parameter of LightGBM, but it doesn't give me a better model.
Are there any approaches to follow to handle this type of dataset that is so imbalanced? How can I further improve my model? Should I try down-sampling? Or is it the maximum possible accuracy? How can I be sure of it? | 0 | 1 | 3,502
0 | 48,100,987 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-11-09T18:55:00.000 | 0 | 2 | 0 | NumPy + BLAS + LAPACK on GPU (AMD and Nvidia) | 47,209,532 | 0 | python,numpy,lapack,blas | Another option is ArrayFire. While this package does not contain a complete BLAS and LAPACK implementation, it does offer much of the same functionality. It is compatible with OpenCL and CUDA, and hence, is compatible with AMD and Nvidia architectures. It has wrappers for Python, making it easy to use. | We have a Python code which involves expensive linear algebra computations. The data is stored in NumPy arrays. The code uses numpy.dot, and a few BLAS and LAPACK functions which are currently accessed through scipy.linalg.blas and scipy.linalg.lapack. The current code is written for CPU. We want to convert the code so that some of the NumPy, BLAS, and LAPACK operations are performed on a GPU.
I am trying to determine the best way to do this. As far as I can tell, Numba does not support BLAS and LAPACK functions on the GPU. It appears that PyCUDA may be the best route, but I am having trouble determining whether PyCUDA allows one to use both BLAS and LAPACK functions.
EDIT: We need the code to be portable to different GPU architectures, including AMD and Nvidia. While PyCUDA appears to offer the desired functionality, CUDA (and hence, PyCUDA) cannot run on AMD GPUs. | 0 | 1 | 2,168 |
0 | 50,078,794 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-11-10T20:32:00.000 | 4 | 1 | 0 | OpenCV - undefined symbol: hb_font_funcs_set_variation_glyph_func | 47,230,690 | 1.2 | python,opencv,anaconda | You can try conda install -c conda-forge opencv; this one will also install OpenCV version 3.
For your error, you can fix it by installing pango using conda install -c conda-forge pango | I have a working anaconda environment on my ubuntu 17.10 machine and installed opencv3 using conda install -c menpo opencv3
When I try to import cv2 the following error shows up
import cv2
ImportError: /usr/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0: undefined symbol: hb_font_funcs_set_variation_glyph_func | 0 | 1 | 2,740 |
0 | 50,935,626 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-11-12T17:45:00.000 | 1 | 1 | 0 | Convert Contour into BLOB OpenCV | 47,251,937 | 0.197375 | python,opencv,blob,contour | After a lot of programming, I realized the procedure is:
After a contour is extracted, create a black image with the same size as the original image.
Draw the contour on the black image.
Find the coordinates of the contour center using the contour moments: x0 and y0.
Use floodFill() to white-fill the interior of the contour, using (x0, y0) as the seed point.
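A sketch of those steps in OpenCV (image and contour are assumed inputs; the contour must be non-degenerate so m00 is non-zero):
import cv2
import numpy as np

h, w = image.shape[:2]
blob = np.zeros((h, w), np.uint8)                 # black image, same size
cv2.drawContours(blob, [contour], -1, 255, 1)     # draw the contour outline
M = cv2.moments(contour)                          # contour center as the seed point
cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
ff_mask = np.zeros((h + 2, w + 2), np.uint8)      # floodFill needs a mask with a 1px border
cv2.floodFill(blob, ff_mask, (cx, cy), 255)       # white-fill the interior
(Note that cv2.drawContours with thickness=-1 fills the contour in a single call and may be faster still.)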
The resulting image contains the white BLOB corresponding to the contour. | Hi everyone,
I am trying to convert a contour to a blob in an image .There are several blobs in image ; the proper one is extracted by applying contour feature. The blob is required to mask a grayscale image.
I have tried extracting each non-zero pixels and pointPolygontest() in order to find the BLOB points, but it requires >70ms to complete the proccess. The application is in 30 fps videos, so I need to convert them within 30ms. I am using OpenCV in python. Is there a way to convert a contour into a Blob within 30ms in opencv?
Thanks in advance. | 0 | 1 | 532 |
0 | 47,270,384 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-11-12T19:45:00.000 | 0 | 4 | 0 | Are all objects returned by value rather than by reference? | 47,253,169 | 1.2 | python,function,numpy,heap-memory | I see what I missed now: Objects are created on the heap, but function frames are on the stack. So when methodB finishes, its frame will be reclaimed, but that object will still exist on the heap, and methodA can access it with a simple reference. | I am coding in Python trying to decide whether I should return a numpy array (the result of a diff on some other array) or return numpy.where(diff)[0], which is a smaller array but requires that little extra work to create. Let's call the method where this happens methodB.
I call methodB from methodA. The rub is that I won't necessarily always need the where() result in methodA, but I might. So is it worth doing this work inside methodB, or should I pass back the (much larger memory-wise) diff itself and then only process it further in methodA if needed? That would be the more efficient choice assuming methodA just gets a reference to the result.
So, are function results ever not copied when they are passed back to the code that called that function?
I believe that when methodB finishes, all the memory in its frame will be reclaimed by the system, so methodA has to actually copy anything returned by methodB into its own frame in order to be able to use it. I would call this "return by value". Is this correct? | 0 | 1 | 2,293
0 | 47,269,281 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2017-11-13T15:21:00.000 | 6 | 1 | 0 | Spyder: An error occurred while starting the kernel | 47,267,716 | 1.2 | python,installation,anaconda,spyder | The problem is that you have two Python versions installed:
C:\Users\afsan\Anaconda3\
C:\Users\afsan\AppData\Local\Programs\Python\Python36
Given that it seems you want to use Spyder with Anaconda, please remove your second Python version (manually, if necessary). That should fix your problem. | I am still getting this error: An error occurred while starting the kernel
Things I tried:
setuptools command
updating spyder
Uninstalled everything that had the word python in it from the Uninstall or change a program panel
Uninstalling and reinstalling anaconda
Reading people's responses on how they tried to fix it
Tried not to get frustrated.
This started occurring after I updated Spyder, which I shouldn't have, but now I am stuck with the issue. I will share the complete message that's coming up on my IPython console screen.
Traceback (most recent call last):
  File "C:\Users\afsan\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 245, in <module>
    main()
  File "C:\Users\afsan\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 213, in main
    from ipykernel.kernelapp import IPKernelApp
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\ipykernel\connect.py", line 18, in <module>
    import jupyter_client
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\jupyter_client\__init__.py", line 4, in <module>
    from .connect import *
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\jupyter_client\connect.py", line 22, in <module>
    import zmq
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\__init__.py", line 34, in <module>
    from zmq import backend
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\__init__.py", line 40, in <module>
    reraise(*exc_info)
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise
    raise value
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\__init__.py", line 27, in <module>
    _ns = select_backend(first)
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\select.py", line 26, in select_backend
    mod = __import__(name, fromlist=public_api)
  File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\cython\__init__.py", line 6, in <module>
    from . import (constants, error, message, context,
ImportError: cannot import name 'constants'
(The same traceback is repeated four more times in the console output.)
0 | 47,274,239 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-11-13T21:46:00.000 | 0 | 3 | 0 | python distinguish between '300' and '300.0' for a dataframe column | 47,274,094 | 0 | python,pandas,csv,dataframe | Some solutions:
1. Go through all the files, change the column names, then save the results in a new folder. Now when you read a file, you can go to the new folder and read it from there.
2. Wrap the normal file-reading function in another function that automatically changes the column names, and call that new function when you read a file.
3. Wrap column selection in a function. Use a try/except block to have the function try to access the given column and, if that fails, fall back to the other form (see the sketch below).
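A minimal sketch of option 3; the helper name is illustrative, and the '.0' fallback follows the question's '300' vs '300.0' example:
def get_col(df, name):
    """Return df[name], falling back to the float-formatted label
    (e.g. '300' -> '300.0') if the plain spelling is missing."""
    try:
        return df[name]
    except KeyError:
        return df[name + '.0']
col = get_col(df, '300')  # works whichever spelling the file produced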
| Recently I have been developing some code to read a csv file and store key data columns in a dataframe. Afterwards I plan to perform some mathematical functions on certain columns in the dataframe.
I've been fairly successful in storing the correct columns in the dataframe, and I have been able to have it do whatever maths is necessary, such as summations, additions of dataframe columns, averaging, etc.
My problem lies in accessing specific columns once they are stored in the dataframe. I was working with a test file to get everything working and managed this no problem. The problems arise when I open a different csv file: it will store the data in the dataframe, but accessing the column I want no longer works and it stops at the calculation part.
From what I can tell, the problem lies with how it reads the column name. The column names are all numbers, e.g. df['300'], df['301'], etc. Accessing the column df['300'] works fine with the test file, while the next file requires df['300.0']. If I switch to a different file it may require df['300'] again. All the data was obtained in the same way, so I am not certain why some columns are read as 300 and others as 300.0.
Short of constantly changing the column labels each time I open a different file, is there any way to have it automatically distinguish between '300' and '300.0' when opening the file, or force '300.0' = '300'?
Thanks | 0 | 1 | 213 |
0 | 47,689,288 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-11-14T03:49:00.000 | 0 | 1 | 0 | cv2 running optical flow on particular rectangles | 47,277,332 | 0 | python,opencv,opticalflow,cv2 | Yes, it's possible. cv2.calcOpticalFlowPyrLK() will be the optical flow function you need. Before you make that function call, you will have to create an image mask. I did a similar project, but in C++, though I can outline the steps for you:
1. Create an empty matrix with the same width and height as your images.
2. Using the points from your ROI, create a shape from them (I did mine using cv2.fillPoly()) and fill the inside of the shape with white (your image mask should contain only black and white).
3. If you are planning on using corners as features, call cv2.goodFeaturesToTrack() and pass in the mask you've made as one of its arguments.
4. If you're using the Feature2D module to detect features, you can use the same mask to extract features only in the masked area.
5. By this point, you should have a collection of features/points that are only within the bounds of the shape! Call the optical flow function and then process the results, as sketched below.
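A minimal sketch of those steps, assuming prev_gray and next_gray (consecutive grayscale frames) and roi_pts (the ROI corner points) already exist:
import cv2
import numpy as np
# Steps 1-2: black mask, white ROI.
mask = np.zeros(prev_gray.shape, dtype=np.uint8)
cv2.fillPoly(mask, [np.array(roi_pts, dtype=np.int32)], 255)
# Step 3: detect corners only inside the masked area.
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7, mask=mask)
# Step 5: track those points into the next frame.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
good_new = p1[status == 1]  # successfully tracked points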
I hope that helps. | I am using OpenCV's Optical Flow module. I understand the examples in the documentation, but those take the entire image and then compute the optical flow over the whole image.
I only want to pass it over some parts of an image. Is it possible to do that? If yes, how do I go about it?
Thanks! | 0 | 1 | 279 |
0 | 51,707,973 | 0 | 0 | 0 | 0 | 2 | true | 5 | 2017-11-14T09:46:00.000 | 0 | 2 | 0 | Early stopping using tensorflow tf.estimator ? | 47,282,399 | 1.2 | python,tensorflow | Recently I have come across this function in tensorflow API.
tf.keras.callbacks.EarlyStopping (the tf version here is r1.9).
Arguments:
monitor: quantity to be monitored.
min_delta: minimum change in the monitored quantity to qualify as an improvement; an absolute change of less than min_delta counts as no improvement.
patience: number of epochs with no improvement after which training will be stopped.
verbose: verbosity mode.
mode: one of {auto, min, max}. In min mode, training stops when the monitored quantity has stopped decreasing; in max mode it stops when the monitored quantity has stopped increasing; in auto mode, the direction is automatically inferred from the name of the monitored quantity.
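A minimal sketch of wiring this callback into training, assuming a compiled Keras model named model plus training and validation arrays (all assumptions; note that the question itself uses tf.estimator, where this callback does not plug in directly):
import tensorflow as tf
# Stop once val_loss has not improved for 5 consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100, callbacks=[early_stop])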
| I am using tensorflow v1.4. I want to use early stopping on the validation set with a patience of 5 epochs.
I have searched the web and found that there used to be a function called ValidationMonitor, but it is deprecated now. So is there a way to achieve this? | 0 | 1 | 1,059
0 | 49,161,168 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2017-11-14T09:46:00.000 | 0 | 2 | 0 | Early stopping using tensorflow tf.estimator ? | 47,282,399 | 0 | python,tensorflow | There doesn't seem to be a good way of doing this, unfortunately. One method to consider is to save checkpoints quite often during training, and then later iterate over them and evaluate them. You can then discard the checkpoints that do not have the best eval performance. This doesn't save you time during training, but at least the resulting model you are left with is an early-stopped one.
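A minimal sketch of that checkpoint sweep for a tf.estimator model; estimator and eval_input_fn are assumed to exist, and keep_checkpoint_max in the RunConfig must be large enough that the checkpoints are still on disk:
import tensorflow as tf
# Evaluate every retained checkpoint and keep the best one.
ckpt_state = tf.train.get_checkpoint_state(estimator.model_dir)
best_path, best_loss = None, float('inf')
for path in ckpt_state.all_model_checkpoint_paths:
    metrics = estimator.evaluate(eval_input_fn, checkpoint_path=path)
    if metrics['loss'] < best_loss:  # the metric key depends on your model_fn
        best_path, best_loss = path, metrics['loss']
print('Best checkpoint:', best_path)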
| I am using tensorflow v1.4. I want to use early stopping on the validation set with a patience of 5 epochs.
I have searched the web and found that there used to be a function called ValidationMonitor, but it is deprecated now. So is there a way to achieve this? | 0 | 1 | 1,059
0 | 64,614,856 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2017-11-14T12:58:00.000 | 1 | 5 | 0 | Is there a way in pd.read_csv to replace NaN value with other character? | 47,286,547 | 0.039979 | python,pandas,csv | Putting this into the read_csv function does work:
dtype={"count": pandas.Int64Dtype()}
i.e.
df = pd.read_csv('file.csv', dtype={"count": pandas.Int64Dtype()})
This type supports both integers and pandas.NA values, so you can import the column without the integers being turned into floats.
If necessary, you can then use regular DataFrame commands to clean up the missing values, as described in other answers here.
BTW, my first attempt to solve this changes the integers into strings. If that's of interest:
df = pd.read_csv('file.csv', na_filter=False)
(It reads the file without replacing any missing values with NaN.)
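For the question's specific case, a '-' placeholder that should end up as zero, a minimal sketch (the file name is a placeholder):
import pandas as pd
# Treat '-' as missing while reading, then turn the resulting NaNs into 0.
df = pd.read_csv('file.csv', na_values=['-'])
df = df.fillna(0)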
| I have some data in a csv file.
Because it is collected from a machine, all values should be numbers, but NaN values exist in some lines. The machine automatically replaces these NaN values with the string '-'.
My question is: how do I set the parameters of pd.read_csv() so that '-' values are automatically replaced with zero when reading the csv file? | 0 | 1 | 31,191
0 | 51,295,721 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2017-11-15T16:02:00.000 | 0 | 1 | 0 | python3 Illegal instruction (core dump) | 47,312,023 | 1.2 | python,linux,virtual-machine,python-3.5 | Although I'm not sure of the source of the issue, I did a full purge of python3, reinstalled it and all packages with it, and that fixed the issue! | I am on a virtual machine running on Ubuntu 16.04. I have installed pandas, sklearn, and conda using pip3. When I try to run a python3 program using these packages, I get the error "Illegal instruction (core dump)."
Not sure how to fix this. Simple python3 programs (i.e., ones with no imports) run fine. I also tried importing but not using these packages, and that works.
The only other thing I have done on this VM is OpenCV development with C++. | 0 | 1 | 2,258
0 | 47,341,200 | 0 | 1 | 0 | 0 | 1 | false | 16 | 2017-11-16T07:39:00.000 | 26 | 1 | 0 | Viewing dataframes in Spyder using a command in its console | 47,324,077 | 1 | python,r,rstudio,spyder | (Spyder maintainer here) There's no function similar to view() in Spyder. To view the contents of a Dataframe, you need to double-click on it in the Variable Explorer. | I have been using R Studio for quite some time, and I find View() function very helpful for viewing my datasets.
Is there a similar View() counterpart in Spyder? | 0 | 1 | 18,421 |
0 | 47,371,621 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-11-16T08:07:00.000 | 0 | 3 | 0 | Learners in CNTK C# API | 47,324,511 | 0 | c#,python,cntk | Checked that CNTKLib is providing those learners in CPUOnly package.
Nestrov is missing in there but present in python.
There is a difference while creating the trainer object
with CNTKLib learner function vs Learner class.
If a learner class is used,
net parameters are provided as a IList.
This can be obtained using netout.parameter() ;
If CNTKLib is used,
parameters are provided as ParameterVector.
Build ParameterVector while building the network.
and provide it while creating Trainer object.
ParameterVector pv = new ParameterVector ()
pv.Add(weightParameter)
pv.Add(biasParameter)
Thanks everyone for your answers. | I am using C# CNTK 2.2.0 API for training.
I have installed Nuget package CNTK.CPUOnly and CNTK.GPU.
I am looking for the following learners in C#:
1. AdaDelta
2. Adam
3. AdaGrad
4. Nesterov
Looks like Python supports these learners, but the C# package is not exposing them.
I can see only the SGD and SGDMomentum learners in C#.
Any thoughts on how to get and set the other learners in C#?
Do I need to install any additional package to get these learners?
Appreciate your help. | 0 | 1 | 447 |
0 | 47,324,718 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-11-16T08:07:00.000 | 0 | 3 | 0 | Learners in CNTK C# API | 47,324,511 | 0 | c#,python,cntk | Download the NCCL 2 package to configure it in C#; see www.nvidia.com or search for "NCCL download". | I am using C# CNTK 2.2.0 API for training.
I have installed Nuget package CNTK.CPUOnly and CNTK.GPU.
I am looking for the following learners in C#:
1. AdaDelta
2. Adam
3. AdaGrad
4. Nesterov
Looks like Python supports these learners, but the C# package is not exposing them.
I can see only the SGD and SGDMomentum learners in C#.
Any thoughts on how to get and set the other learners in C#?
Do I need to install any additional package to get these learners?
Appreciate your help. | 0 | 1 | 447 |
0 | 47,336,205 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-11-16T16:08:00.000 | 0 | 2 | 0 | Error reported while running pyomo optimization with cbc solver and using timelimit | 47,334,254 | 0 | python-2.7,pyomo,coin-or-cbc | You could try to set the bound-gap tolerance so that the solver accepts the current feasible solution. I'm surprised that the solver status comes back as an error when a feasible solution has been found. Could you print out the whole results object?
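A minimal sketch of both suggestions, loosening the gap and inspecting the results object; 'ratioGap' and 'sec' are CBC option names and model is the question's model object, so verify the option spellings against your CBC build:
from pyomo.environ import SolverFactory
# Accept any incumbent within 1% of the best bound, keep the 60 s limit.
opt = SolverFactory('cbc')
results = opt.solve(model, options={'ratioGap': 0.01, 'sec': 60}, tee=True, load_solutions=False)
print(results)  # inspect the full results object
# Load whatever solution came back, even if the status is not 'ok'.
if len(results.solution) > 0:
    model.solutions.load_from(results)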
| I am trying to solve an optimisation problem with Pyomo (Pyomo 5.3, CPython 2.7.13 on Linux 3.10.0-514.26.2.el7.x86_64) using the CBC solver (version 2.9.8) and specifying a 60-second time limit in the solver. The solver finds a feasible solution (-1415.8392) that is apparently not yet optimal (best possible -1415.84), as you can see below.
After the time limit ends, the model seemingly exits with an error code. I want to print or get the values of all variables of the feasible solution that CBC found within the specified time limit. Alternatively, is there any way to make the solver exit and print the feasible solution once it reaches 99% of the optimal value?
The error code is posted below.
Cbc0004I Integer solution of -1415.8392 found after 357760 iterations and 29278 nodes (47.87 seconds)
Cbc0010I After 30000 nodes, 6350 on tree, -1415.8392 best solution, best possible -1415.84 (48.87 seconds)
Cbc0010I After 31000 nodes, 6619 on tree, -1415.8392 best solution, best possible -1415.84 (50.73 seconds)
Cbc0010I After 32000 nodes, 6984 on tree, -1415.8392 best solution, best possible -1415.84 (52.49 seconds)
Cbc0010I After 33000 nodes, 7384 on tree, -1415.8392 best solution, best possible -1415.84 (54.31 seconds)
Cbc0010I After 34000 nodes, 7419 on tree, -1415.8392 best solution, best possible -1415.84 (55.73 seconds)
Cbc0010I After 35000 nodes, 7824 on tree, -1415.8392 best solution, best possible -1415.84 (57.37 seconds)
Traceback (most recent call last):
File "model_final.py", line 392, in
solver.solve(model, timelimit = 60*1, tee=True)
File "/home/aditya/0r/lib/python2.7/site-packages/pyomo/opt/base/solvers.py", line 655, in solve
default_variable_value=self._default_variable_value)
File "/home/aditya/0r/lib/python2.7/site-packages/pyomo/core/base/PyomoModel.py", line 242, in load_from
% str(results.solver.status))
ValueError: Cannot load a SolverResults object with bad status: error
When I run the model generated by Pyomo manually, using the same command-line parameters as Pyomo (/usr/bin/cbc -sec 60 -printingOptions all -import /tmp/tmpJK1ieR.pyomo.lp -import -stat=1 -solve -solu /tmp/tmpJK1ieR.pyomo.soln), it seems to exit normally and also writes the solution, as shown below.
Cbc0010I After 35000 nodes, 7824 on tree, -1415.8392 best solution, best possible -1415.84 (57.06 seconds)
Cbc0038I Full problem 205 rows 289 columns, reduced to 30 rows 52 columns
Cbc0010I After 36000 nodes, 8250 on tree, -1415.8392 best solution, best possible -1415.84 (58.73 seconds)
Cbc0020I Exiting on maximum time
Cbc0005I Partial search - best objective -1415.8392 (best possible -1415.84), took 464553 iterations and 36788 nodes (60.11 seconds)
Cbc0032I Strong branching done 15558 times (38451 iterations), fathomed 350 nodes and fixed 2076 variables
Cbc0035I Maximum depth 203, 5019 variables fixed on reduced cost
Cbc0038I Probing was tried 31933 times and created 138506 cuts of which 0 were active after adding rounds of cuts (4.431 seconds)
Cbc0038I Gomory was tried 30898 times and created 99534 cuts of which 0 were active after adding rounds of cuts (4.855 seconds)
Cbc0038I Knapsack was tried 30898 times and created 12926 cuts of which 0 were active after adding rounds of cuts (8.271 seconds)
Cbc0038I Clique was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Cbc0038I MixedIntegerRounding2 was tried 30898 times and created 13413 cuts of which 0 were active after adding rounds of cuts (3.652 seconds)
Cbc0038I FlowCover was tried 100 times and created 4 cuts of which 0 were active after adding rounds of cuts (0.019 seconds)
Cbc0038I TwoMirCuts was tried 30898 times and created 15292 cuts of which 0 were active after adding rounds of cuts (2.415 seconds)
Cbc0038I Stored from first was tried 30898 times and created 15734 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Cbc0012I Integer solution of -1411.9992 found by Reduced search after 467825 iterations and 36838 nodes (60.12 seconds)
Cbc0020I Exiting on maximum time
Cbc0005I Partial search - best objective -1411.9992 (best possible -1415.4522), took 467825 iterations and 36838 nodes (60.12 seconds)
Cbc0032I Strong branching done 476 times (1776 iterations), fathomed 1 nodes and fixed 18 variables
Cbc0035I Maximum depth 21, 39 variables fixed on reduced cost
Cuts at root node changed objective from -1484.12 to -1415.45
Probing was tried 133 times and created 894 cuts of which 32 were active after adding rounds of cuts (0.060 seconds)
Gomory was tried 133 times and created 1642 cuts of which 0 were active after adding rounds of cuts (0.047 seconds)
Knapsack was tried 133 times and created 224 cuts of which 0 were active after adding rounds of cuts (0.083 seconds)
Clique was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.001 seconds)
MixedIntegerRounding2 was tried 133 times and created 163 cuts of which 0 were active after adding rounds of cuts (0.034 seconds)
FlowCover was tried 100 times and created 5 cuts of which 0 were active after adding rounds of cuts (0.026 seconds)
TwoMirCuts was tried 133 times and created 472 cuts of which 0 were active after adding rounds of cuts (0.021 seconds)
ImplicationCuts was tried 25 times and created 41 cuts of which 0 were active after adding rounds of cuts (0.003 seconds)
Result - Stopped on time limit
Objective value: -1411.99922848
Lower bound: -1415.452
Gap: 0.00
Enumerated nodes: 36838
Total iterations: 467825
Time (CPU seconds): 60.13
Time (Wallclock seconds): 60.98
Total time (CPU seconds): 60.13 (Wallclock seconds): 61.01
The top few lines of the CBC solution file are:
Stopped on time - objective value -1411.99922848
0 c_e_x1454_ 0 0
1 c_e_x1455_ 0 0
2 c_e_x1456_ 0 0
3 c_e_x1457_ 0 0
4 c_e_x1458_ 0 0
5 c_e_x1459_ 0 0
6 c_e_x1460_ 0 0
7 c_e_x1461_ 0 0
8 c_e_x1462_ 0 0
Can anyone tell me how I can get these values without triggering the error?
Thanks in advance. | 0 | 1 | 1,241 |