GUI and Desktop Applications (int64, 0..1) | A_Id (int64, 5.3k..72.5M) | Networking and APIs (int64, 0..1) | Python Basics and Environment (int64, 0..1) | Other (int64, 0..1) | Database and SQL (int64, 0..1) | Available Count (int64, 1..13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0..1.72k) | CreationDate (string, length 23) | Users Score (int64, -11..327) | AnswerCount (int64, 1..31) | System Administration and DevOps (int64, 0..1) | Title (string, length 15..149) | Q_Id (int64, 5.14k..60M) | Score (float64, -1..1.2) | Tags (string, length 6..90) | Answer (string, length 18..5.54k) | Question (string, length 49..9.42k) | Web Development (int64, 0..1) | Data Science and Machine Learning (int64, 1..1) | ViewCount (int64, 7..3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 45,159,452 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-07-18T02:37:00.000 | 0 | 1 | 1 | Install OpenCV 3 Without Upgrading Python | 45,156,570 | 0 | python,opencv3.0 | Try using pip,
You can create a pip-reqs.txt file and pin things to a specific version
Then just run
pip install -r pip-reqs.txt
pip will then take care of installing opencv for you for the python version that is currently configured | Python 3.5.2 is installed, and I need to ensure it doesn't upgrade to 3.6 due to some other dependencies.
When I install OpenCV 3 via brew (see below), brew invokes python3 and upgrades to Python 3.6, the latest build:
brew install opencv3 --with-python3
How can I install OpenCV 3 without changing my Python build? | 0 | 1 | 150 |
0 | 49,570,892 | 0 | 0 | 0 | 0 | 1 | true | 12 | 2017-07-18T12:46:00.000 | 5 | 1 | 0 | Matrix exponentiation with scipy: expm, expm2 and expm3 | 45,167,237 | 1.2 | python,scipy,sparse-matrix | This will depend a lot on the detail of the implementation of these different ways of exponentiating the matrix.
In general terms, I would expect the eigen-decomposition (expm2) to be poorly suited to sparse matrices, because it is likely to remove the sparseness. It will also be more difficult to apply to non-symmetric matrices, because this will require the use of complex arithmetic and more expensive algorithms to compute the eigen-decomposition.
For the Taylor-series approach (expm3), this sounds risky if there is a fixed number of terms independent of the norm of the matrix. When computing e^x for a scalar x, the largest terms x^n/n! in the Taylor series are those for which n is close to x, so a truncation after a fixed 20 terms can miss the dominant terms entirely when the norm of the matrix is much larger than 20.
However, the implementation details of these (deprecated) functions may use tricks like diagonally loading the matrix so as to improve the stability of these series expansion. | Matrix exponentiation can be performed in python using functions within the scipy.linalg library, namely expm, expm2, expm3. expm makes use of a Pade approximation; expm2 uses the eigenvalue decomposition approach and expm3 makes use of a Taylor series with a default number of terms of 20.
In SciPy 0.13.0 release notes it is stated that:
The matrix exponential functions scipy.linalg.expm2 and scipy.linalg.expm3 are deprecated. All users should use the numerically more robust scipy.linalg.expm function instead.
Although expm2 and expm3 are deprecated since release version SciPy 0.13.0, I have found that in many situations these implementations are faster than expm.
From this, some questions arise:
In what situations could expm2 and expm3 result in numerical instabilities?
In what situations (e.g. sparse matrices, symmetric, ...) is each of the algorithms faster/more precise? | 0 | 1 | 1,827 |
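For reference alongside the discussion above, a minimal example of the non-deprecated Padé-based routine; the small test matrix is purely illustrative:

```python
import numpy as np
from scipy.linalg import expm

# expm uses a Pade approximation with scaling-and-squaring,
# the numerically robust choice that SciPy recommends.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# e^A for this skew-symmetric matrix is a rotation matrix.
print(expm(A))
```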
0 | 45,189,807 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-07-18T13:49:00.000 | 1 | 1 | 0 | Calculate mean value of an image dataset | 45,168,747 | 1.2 | python,statistics,deep-learning | The mean value of the dataset is the mean value of the pixels of all the images across all the colour channels (e.g. RGB). Greyscale images will have just one mean value, and colour images like ImageNet will have 3 mean values.
Usually the mean is calculated on the training set, and the same mean is used to normalize both training and test images. | In deep learning experiments, there is a consensus that mean subtraction from the data set could improve the accuracy. For example, the mean value of ImageNet is [104.0 117.0 124.0], so before feeding the network, the mean value will be subtracted from the image. My question is
How the mean value is calculated?
Should I calculate the mean value on training and testing data set separately? | 0 | 1 | 2,860 |
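A small sketch of the computation described in the answer above, assuming the training images are already loaded into one NumPy array (the array shape and values are illustrative):

```python
import numpy as np

# assume train_images has shape (num_images, height, width, 3), values 0-255
train_images = np.random.randint(0, 256, size=(100, 32, 32, 3)).astype(np.float64)

# one mean per colour channel, computed over the training set only
channel_mean = train_images.mean(axis=(0, 1, 2))   # shape (3,)

# the same training-set mean is then subtracted from training and test images
train_centered = train_images - channel_mean
```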
0 | 45,170,697 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-18T15:05:00.000 | 1 | 1 | 0 | Way to predict with regression model | 45,170,586 | 0.197375 | python,machine-learning,regression | No, you shouldn't do it. If your model always predicts 1.5 times the actual values, that means it is simply not performing well and the data cannot be fitted linearly. To prevent this, you should look at other models that are able to capture the structure of your data, or you might have outliers whose removal would help the linear regression model. | I have a question about a regression model in machine learning and I am wondering if my approach is correct or not.
I have built my regression model and already trained it with my data, but my model always predicts 1.5 times the actual values.
I understand that this is my model's habit and treat it as if it always predicts 1.5 times too much.
Taking that as given, I divide the predicted value by 1.5.
Let's say my model predicts 100 in some case; I calculate 100/1.5 and get approximately 66.6 as a result.
Actually, 66.6 is not the predicted value; I manipulated it.
Is this manipulation acceptable for regression?
Can I supply this 66.6 to my customer? | 0 | 1 | 64 |
0 | 45,172,785 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-07-18T15:05:00.000 | 1 | 1 | 0 | How word2vec deal with the end of a sentence | 45,170,589 | 0.197375 | python,gensim,word2vec | The window is trimmed to the edges of the current text example. So, the first word of a text only gets its context words from subsequent words in the same text. (No words are retained from previous examples.) Similarly, the last word in a text only gets its context words from previous words in the same text. (No words are pulled in from the next text example.) Each text example (aka sentence) stands alone. | When training, what will word2vec do to cope with the words at the end of a sentence . Will it use the exact words at the beginning of another sentence as the context words of the center words which is
at the end of last sentence. | 0 | 1 | 438 |
0 | 45,178,600 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-07-18T17:29:00.000 | 1 | 1 | 0 | Metric for appropriate number of DTW clusters | 45,173,476 | 1.2 | python,cluster-analysis | You can use Silhouette with DTW as distance function.
But don't forget that this is just a heuristic. A different k can still be better for your particular use. | I am using dynamic time warping (DTW) as a similarity metric to cluster ~3500 time series using the k-means algorithm in Python.
I am looking for a similar metric to the popular silhouette score used in sklearn.metrics.silhouette_score but relevant to DTW.
Wondering if anyone can provide any help? | 0 | 1 | 240 |
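A rough sketch of the suggestion above, pairing a naive dynamic-programming DTW with scikit-learn's silhouette score on a precomputed distance matrix; the toy series, labels, and the simple DTW implementation are assumptions for illustration:

```python
import numpy as np
from sklearn.metrics import silhouette_score

def dtw(a, b):
    """Naive dynamic-programming DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# toy data: 10 short series plus cluster labels (e.g. produced by k-means)
series = [np.sin(np.linspace(0, 6, 50) + np.random.rand()) for _ in range(10)]
labels = np.array([0] * 5 + [1] * 5)

# pairwise DTW distance matrix, then silhouette with metric='precomputed'
D = np.array([[dtw(s, t) for t in series] for s in series])
print(silhouette_score(D, labels, metric='precomputed'))
```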
0 | 45,392,458 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2017-07-18T22:33:00.000 | 0 | 1 | 0 | Anaconda and OpenCV using old version | 45,178,197 | 1.2 | python-2.7,opencv,anaconda | Anaconda is maintained by Continuum so it seems like they have not had a chance to update to the newer version of OpenCV. I will try to see if I can bring it to their attention. | I am trying to download the latest version of OpenCV using anaconda, but Anaconda only has version 3.1.0. I ended up installing it with pip, but can someone explain why anaconda does not have 3.2.0 version of OpenCV. Also, I am using Python 2.7.
Thanks | 0 | 1 | 355 |
0 | 45,181,917 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-07-19T05:38:00.000 | 0 | 2 | 0 | How to analyse multiple csv files very efficiently? | 45,181,769 | 0 | python,database,pandas,csv | For me, I usually merge the files into a DataFrame and save it as a pickle. If you merge them, the file will be pretty big and use up a lot of RAM when loaded, but it is the fastest way if your machine has a lot of RAM.
Storing the data in a database is better in the long term, but you will spend time uploading the CSVs to the database and then even more time retrieving them. From my experience, you use the database when you want to query specific things from the table, such as a log from date A to date B; if you are going to pull all of it back into pandas anyway, this method is not very good.
Sometimes, depending on your use case, you might not even need to merge them: use the filename as a way to query and get the right log to process (using the filesystem), then merge only the log files your analysis is concerned with and don't save the result, or save that as a pickle for further processing in the future. | I have nearly 60-70 timing log files (all are .csv files, with a total size of nearly 100MB). I need to analyse these files at a single go. Till now, I've tried the following methods:
Merged all these files into a single file and stored it in a DataFrame (Pandas Python) and analysed them.
Stored all the csv files in a database table and analysed them.
My doubt is, which of these two methods is better? Or is there any other way to process and analyse these files?
Thanks. | 0 | 1 | 160 |
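A minimal sketch of the first approach (merge everything into one DataFrame and pickle it); the folder path and pickle filename are placeholders:

```python
import glob
import pandas as pd

# read every timing log and stack them into one DataFrame;
# 60-70 files totalling ~100MB fits comfortably in memory
files = glob.glob("logs/*.csv")            # placeholder path
df = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

# persist the merged frame so the CSV parsing cost is only paid once
df.to_pickle("merged_logs.pkl")            # placeholder filename
```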
0 | 45,187,882 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-07-19T08:17:00.000 | 2 | 2 | 0 | BCELoss for binary pixel-wise segmentation pytorch | 45,184,741 | 0.197375 | python,deep-learning,pytorch | It seems to me that your sigmoids are saturating the activation maps. The images are not properly normalised, or some batch normalisation layers are missing. If you have an implementation that is working with other images, check the image loader and make sure it does not saturate the pixel values. This usually happens with 16-bit channels. Can you share some of the input images?
PS Sorry for commenting in the answer. This is a new account and I am not allowed to comment yet. | I'm implementing a UNet for binary segmentation while using Sigmoid and BCELoss. The problem is that after several iterations the network tries to predict very small values per pixel while for some regions it should predict values close to one (for ground truth mask region). Does it give any intuition about the wrong behavior?
Besides, there exist NLLLoss2d which is used for pixel-wise loss. Currently, I'm simply ignoring this and I'm using MSELoss() directly. Should I use NLLLoss2d with Sigmoid activation layer?
Thanks | 0 | 1 | 4,348 |
0 | 45,188,959 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-07-19T11:13:00.000 | 3 | 1 | 0 | How do I generate a matrix with x dimension and a vector and without using loops? Python | 45,188,890 | 1.2 | python,loops,numpy,matrix | a = np.arange(1, 1001).reshape(100, 10) should do it. | Good morning to all.
I want to generate an array, for example with 10 columns and 100 rows, from the vector a = np.arange(1, 1001), but I do not want to use a loop since my web page gets saturated if I put one in.
Does anyone know of a numpy or math command (or another way) to do this?
Thank you very much for your attention. | 0 | 1 | 86 |
0 | 45,191,351 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2017-07-19T12:25:00.000 | 0 | 2 | 0 | Direct way to access Numpy RandomState object | 45,190,558 | 0 | python,numpy,cython,numpy-random | Found an answer thanks to kazemakase: _rand is accessible directly, I'd just need to import mtrand. But __self__ may be more future proof, if syntax doesn't change. | Is there are more direct way to access the RandomState object created on import other than np.random.<some function>.__self__? Both np.random._rand and getattr(np.random, "_rand") raise AttributeError. The former works fine but doesn't seem very transparent/Pythonic, though the most transparent might just be creating a separate RandomState object. The purpose is passing the interal_state variable to a cython function that calls randomkit functions directly. | 0 | 1 | 238 |
0 | 45,191,429 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2017-07-19T12:31:00.000 | 2 | 3 | 0 | Keras deep learning why validation accuracy stuck in a value every time? | 45,190,707 | 0.132549 | python,deep-learning,keras | Given just this information it is hard to tell what might be the underlying problem. In general, the machine learning engineer is always working with a direct trade-off between overfitting and model complexity. If the model isn't complex enough, it may not be powerful enough to capture all of the useful information necessary to solve a problem. However, if our model is very complex (especially if we have a limited amount of data at our disposal), we run the risk of overfitting. Deep learning takes the approach of solving very complex problems with complex models and taking additional countermeasures to prevent overfitting.
Three of the most common ways to do that are
Regularization
Dropout
Data augmentation
If your model is not complex enough:
Make it bigger (easy)
Make it smarter (hard) | I am trying to train InceptionV3 network with my custom dataset (36 classes 130 samples per each). And some parameters fro my network: | 0 | 1 | 3,006 |
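A hedged sketch of the data-augmentation suggestion from the answers, using Keras' ImageDataGenerator; the augmentation parameters, array shapes, and the commented-out training call are assumptions, not taken from the question:

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# x_train: (num_samples, height, width, channels), y_train: one-hot labels (assumed shapes)
x_train = np.random.rand(130, 299, 299, 3)
y_train = np.eye(36)[np.random.randint(0, 36, 130)]

# each epoch sees randomly rotated/shifted/flipped variants of the few real samples
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# with a compiled model (e.g. your InceptionV3), training would look roughly like:
# model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
#                     steps_per_epoch=len(x_train) // 32, epochs=50)
```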
0 | 45,191,017 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2017-07-19T12:31:00.000 | 0 | 3 | 0 | Keras deep learning why validation accuracy stuck in a value every time? | 45,190,707 | 0 | python,deep-learning,keras | Be more specific on the example, post the code you used to build the Sequential Model.
At the moment I can say that your problem could be in the initial dataset.
You have 130 sample for 36 classes that means 3.6 example for each class? | I am trying to train InceptionV3 network with my custom dataset (36 classes 130 samples per each). And some parameters fro my network: | 0 | 1 | 3,006 |
0 | 45,190,981 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2017-07-19T12:31:00.000 | 1 | 3 | 0 | Keras deep learning why validation accuracy stuck in a value every time? | 45,190,707 | 0.066568 | python,deep-learning,keras | It could mean that the model has learned everything possible and can't improve further.
One of the possible ways to improve accuracy is to get new data. You have ~4 samples per class, which is rather low. Try to get more samples or use data augmentation technics. | I am trying to train InceptionV3 network with my custom dataset (36 classes 130 samples per each). And some parameters fro my network: | 0 | 1 | 3,006 |
0 | 45,191,580 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-07-19T13:00:00.000 | 3 | 1 | 0 | Do I have to install numpy, scipy again for python3? | 45,191,428 | 1.2 | python,python-2.7,python-3.x,numpy | Yes, you need to install them using pip3 as well, since python3.4 bundles pip in alongside python | I have brew installed python2 on OS X, then installed numpy and scipy using pip install.
I also need python3, so I brew installed python3, but when I import numpy under python3, an import error occurs.
I know I can fix this by installing numpy using pip3 install numpy, but do I have to do this? Since I have the package already installed for python2, can I just tell python3 where it is and then use it?
0 | 45,193,581 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-07-19T14:24:00.000 | 1 | 1 | 0 | Unique Key - CustomerID, A Categorical or a Numerical Variable? | 45,193,502 | 1.2 | python,pandas,data-science | I don't think you should use customerID as a variable. This is a unique value for each customer. It can be used as an index - to know which customer a prediction belongs to.
So you'd better drop this column from the training/test data. | I am trying to do Segmentation in Customer Data in Python using Pandas. I have a customer ID variable in my dataset. I am confused over here, even though it won't be considered as a variable that affects the Output variable. How do we actually treat this variable if needed, a Categorical or a numerical ?
Also, Is there a business case that you could think of where the customerID will be considered? | 0 | 1 | 655 |
0 | 45,197,852 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-07-19T17:59:00.000 | 0 | 1 | 0 | Pandas read_sql | 45,197,851 | 0 | python,sql-server,pandas,sqlalchemy | I had to work around the datetime column from my SQL query itself just so SQLAlchemy/Pandas can stop reading it as a NaN value.
In my SQL query, I used CONVERT() to convert the datetime column to a string. This was read with no issue, and then I used pandas.to_datetime() to convert it back into datetime.
Anyone else with a better solution or know what's really going on, please share your answer, I'd really appreciate it!!! | I encountered the following irregularities and wanted to share my solution.
I'm reading a SQL table from Microsoft SQL Server in Python using Pandas and SQLAlchemy. There is a column called "occurtime" with the following format: "2017-01-01 01:01:11.000". Using SQLAlchemy to read the "occurtime" column, everything was returned as NaN. I tried to set the parse_dates parameter in the pandas.read_sql() method but with no success.
Is anyone else encountering issue reading a datetime column from a SQL table using SQLAlchemy/Pandas? | 0 | 1 | 752 |
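A hedged sketch of the workaround described in the answer; the table and column names follow the question, while the connection string and the CONVERT style code are assumptions:

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:pass@my_dsn")   # placeholder connection

# CONVERT(..., 121) returns the datetime as an ODBC-canonical string,
# which pandas reads without turning the column into NaN
query = "SELECT CONVERT(VARCHAR(23), occurtime, 121) AS occurtime_str FROM my_table"
df = pd.read_sql(query, engine)

# convert the string column back into real datetimes on the pandas side
df["occurtime"] = pd.to_datetime(df["occurtime_str"])
```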
0 | 45,219,444 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-19T18:07:00.000 | 0 | 1 | 0 | Optimizing the frozen graph | 45,197,995 | 0 | python,tensorflow | You need to build the tool inside the tensorflow directory, not the model directory, and then run the tool on the model. | I have frozen the tensorflow graph and I have noticed that the using the graph in realtime for prediction is too slow. so I would like to do graph optimization.
Within my project file, I have got a folder called model which contains the model file (pb file).
Now I tried running the following command inside the model dir
bazel build tensorflow/python/tools:strip_unused
But it is firing the following error,
bazel build tensorflow/python/tools:strip_unused
ERROR: no such package 'tensorflow/python/tools': BUILD file not found on package path. | 0 | 1 | 170 |
0 | 45,198,690 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-07-19T18:37:00.000 | 1 | 5 | 0 | Python Pandas - Dataframe column - Convert FY in format '2015/2016' to '15/16' | 45,198,564 | 0.039979 | python,pandas,dataframe | If you have a string you can always just choose parts of it by writing:
foo = 'abcdefg'
foo2 = foo[2:4]
print foo2
then the output would be:
cd | I have a column in my dataframe (call it 'FY') which has financial year values in the format: 2015/2016 or 2016/2017.
I want to convert the whole column so it says 15/16 or 16/17 etc instead.
I presume you somehow only take the 3rd, 4th and 5th character from the string, as well as the 8th and 9th, but haven't got a clue how to do it.
Could anyone help me? Thank you. | 0 | 1 | 138 |
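Building on the slicing idea in the answer, a vectorised pandas version; the column name comes from the question and the sample data is illustrative:

```python
import pandas as pd

df = pd.DataFrame({"FY": ["2015/2016", "2016/2017"]})

# take characters 2-3 of each half: "2015/2016" -> "15/16"
df["FY"] = df["FY"].str[2:4] + "/" + df["FY"].str[7:9]
print(df)
```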
0 | 45,640,420 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-07-20T07:03:00.000 | 15 | 4 | 0 | How to interpret the observations of RAM environments in OpenAI gym? | 45,207,569 | 1 | python,openai-gym | There's a couple ways of understanding the ram option.
Let's say you wanted to learn pong. If you train from the pixels, you'll likely use a convolutional net of several layers. Interestingly, the final output of the convnet is a 1D array of features. These you pass to a fully connected layer and maybe output the correct 'action' based on the features the convnet recognized in the image(s). Or you might use a reinforcement layer working on the 1D array of features.
Now let's say it occurs to you that pong is very simple, and could probably be represented in a 16x16 image instead of 160x160. Straight downsampling doesn't give you enough detail, so you use OpenCV to extract the position of the ball and paddles, and create your mini version of 16x16 pong, with nice, crisp pixels. The computation needed is way less than your deep net to represent the essence of the game, and your new convnet is nice and small. Then you realize you don't even need your convnet any more; you can just feed a fully connected layer with each of your 16x16 pixels.
So, think of what you have. Now you have 2 different ways of getting a simple representation of the game, to train your fully-connected layer on. (or RL algo)
Your deep convnet goes through several layers and outputs a 1D array, say of 256 features in the final layer. You pass that to the fully connected layer.
Your manual feature extraction extracts the blobs (paddles/ball) with OpenCV, to make a 16x16 pong. By passing that to your fully connected layer, it's really just a set of 16x16 = 256 'extracted features'.
So the pattern is that you find a simple way to 'represent' the state of the game, then pass that to your fully connected layers.
Enter option 3. The RAM of the game may just be a 256 byte array. But you know this contains the 'state' of the game, so it's like your 16x16 version of pong. it's most likely a 'better' representation than your 16x16 because it probably has info about the direction of the ball etc.
So now you have 3 different ways to simplify the state of the game, in order to train your fully connected layer, or your reinforcement algorithm.
So, what OpenAI has done by giving you the RAM is helping you avoid the task of learning a 'representation' of the game, and that lets you move directly to learning a 'policy' or what to do based on the state of the game.
OpenAI may provide a way to 'see' the visual output on the ram version. If they don't, you could ask them to make that available. But that's the best you will get. They are not going to reverse engineer the code to 'render' the RAM, nor are they going to reverse engineer the code to 'generate' 'RAM' based on pixels, which is not actually possible, since pixels are only part of the state of the game.
They simply provide the ram if it's easily available to them, so that you can try algorithms that learn what to do assuming there is something giving them a good state representation.
There is no (easy) way to do what you asked, as in translate pixels to RAM, but most likely there is a way to ask the Atari system to give you both the ram, and the pixels, so you can work on ram but show pixels. | In some OpenAI gym environments, there is a "ram" version. For example: Breakout-v0 and Breakout-ram-v0.
Using Breakout-ram-v0, each observation is an array of length 128.
Question: How can I transform an observation of Breakout-v0 (which is a 160 x 210 image) into the form of an observation of Breakout-ram-v0 (which is an array of length 128)?
My idea is to train a model on the Breakout-ram-v0 and display the trained model playing using the Breakout-v0 environment. | 0 | 1 | 5,417 |
0 | 45,222,284 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-07-20T14:57:00.000 | 0 | 2 | 0 | freezing the "tensorflow object detection api pet detector" graph | 45,218,374 | 0 | python,tensorflow,object-detection | To find what to use for output_node_names, just checkout the graph.pbtxt file. In this case it was Softmax | I've followed the pet detector tutorial, i have exported the model using "export_inference_graph.py".
However, when I try to freeze the graph using the provided "freeze_graph.py", I am not sure what --output_node_names to use.
Does anyone know which I should use, or more importantly, how to find out what to use when I train my own model?
0 | 45,219,608 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-07-20T15:31:00.000 | 0 | 1 | 0 | sympy set expectation and variance of random variable | 45,219,177 | 0 | python,probability,sympy | Let x=stats.Bernoulli('x', 1/2, succ=1, fail=-1). Then the E and variance are what I want. So if I compute any expression with only the first two moments and take E and variance, I get the answer I want. | I have a symbol, x, in my sympy code and have been computing a number of expressions with it. Ultimately the expression I have is very long and I am only interested in its expectation under the assumption E(x) = 0, E(x^2) = 1. Is there some way to set the expectation and variance of x in advance and then ask sympy to compute expectation of my entire expression? | 0 | 1 | 93 |
0 | 45,231,429 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-20T21:27:00.000 | 1 | 1 | 0 | Py-tables vs Blaze vs S-Frames | 45,225,500 | 0.197375 | python-3.x,pandas,hdf5,pytables,blaze | Good question! One option you may consider is to not use any of the aforementioned libraries, but instead read and process your file chunk by chunk, something like this:
import pandas as pd
csv = r"\path\to\file.csv"
Pandas allows reading data from (large) files chunk-wise via a file iterator:
it = pd.read_csv(csv, iterator=True, chunksize=2000000)
for i, chunk in enumerate(it):
    ... | I am working on an exploratory data analysis using Python on a huge dataset (~20 million records and 10 columns). I will be segmenting and aggregating data and creating some visualizations, and I might also build some decision tree and linear regression models using that dataset.
Because of the large data set I need to use a data-frame that allows out of core data storage. Since I am relatively new to Python and working with large data-sets, i want to use a method which would allow me to easily use sklearn on my data-sets. I'm confused weather to use Py-tables, Blaze or s-Frame for this exercise. If someone could help me understand what are their pros and cons. What are the factors that are important in this kind of decision making that would be much appreciated. | 0 | 1 | 170 |
0 | 45,234,643 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-07-21T09:42:00.000 | 3 | 2 | 0 | Value at KMeans.cluster_centers_ in sklearn KMeans | 45,234,336 | 0.291313 | python,scikit-learn,k-means | The cluster centre value is the value of the centroid. At the end of k-means clustering, you'll have three individual clusters and three centroids, with each centroid being located at the centre of each cluster. The centroid doesn't necessarily have to coincide with an existing data point. | On doing K means fit on some vectors with 3 clusters, I was able to get the labels for the input data.
KMeans.cluster_centers_ returns the coordinates of the centers and so shouldn't there be some vector corresponding to that? How can I find the value at the centroid of these clusters? | 0 | 1 | 17,090 |
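A short illustration of the point made in the answer, using scikit-learn's KMeans on toy data:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100, 5)                 # toy vectors
km = KMeans(n_clusters=3, random_state=0).fit(X)

# one centroid per cluster; a centroid is the mean of its members,
# so it does not have to coincide with any actual data point
print(km.cluster_centers_.shape)           # (3, 5)
print(km.cluster_centers_[km.labels_[0]])  # centroid of the cluster sample 0 belongs to
```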
0 | 45,238,795 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-21T13:15:00.000 | 0 | 2 | 0 | Using Scikit-learn on a csv dataset | 45,238,671 | 0 | python,arrays,scikit-learn | Look into the pandas package which allows you to import CSV files into a dataframe. pandas is supported by scikit-learn. | How do I apply scikit-learn to a numpy array with 4 columns each representing a different attribute?
Basically, I'm wanting to teach it how to recognize a healthy patient from these 4 characteristics and then see if it can identify an abnormal one.
Thanks in advance! | 0 | 1 | 773 |
0 | 45,256,348 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-07-22T15:20:00.000 | 0 | 2 | 0 | What is use of SFrame.materialize() in Graphlab? | 45,256,140 | 0 | python,graphlab,sframe | The note tells you that the operation (filtering in your case) isn't applied to the whole data set right away, but only to some portion of it. This is to save resources -- in case the operation doesn't do what you intended, resources won't be wasted by applying the operation to the whole, possibly large, data set, but only to the needed portion (the head in your case, which is output by default). Materialization forces propagation of the operation to the whole data set. | When I was trying to get the rows of my dataset belonging to column of userid =1 through graphlab's sframe datastructure, sf[sf['userid'] == 1],
I got the rows,however I also got this message, [? rows x 6 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
I have gone through the documentation, yet I can't understand what sf.materialize() does! Could someone help me out here. | 0 | 1 | 661 |
0 | 45,288,121 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2017-07-22T22:54:00.000 | 0 | 3 | 0 | Is there a good way to keep track of large numbers of symbols in scipy? | 45,259,985 | 0 | python,sympy | I think the most straightforward way to do this is to use sympy.symarray, like so:
x = sympy.symarray("x",(5,5,5))
This creates an accordingly sized (numpy) array - here the size is 5x5x5 - that contains sympy variables, more specifically these variables are prefixed with whatever you chose - here "x"- and have as many indices as you provided dimensions, here 3. Of course you can make as many of these arrays as you need - perhaps it makes sense to use different prefixes for different groups of variables for readability etc.
You can then use these in your code by using e.g. x[i,j,k]:
In [6]: x[0,1,4]
Out[6]: x_0_1_4
(note that you can not access the elements via x_i_j_k - I found this a bit counterintuitive when I started using sympy, but once you get the hang on python vs. sympy variables, it makes perfect sense.)
You can of course also use slicing on the array, e.g. x[:,0,0].
If you need a python list of your variables, you can use e.g. x.flatten().tolist().
This is in my opinion preferable to using sympy.MatrixSymbol because (a) you get to decide the number of indices you want, and (b) the elements are "normal" sympy.Symbols, meaning you can be sure you can do anything with them you could also do with them if you declared them as regular symbols.
(I'm not sure this is still the case in sympy 1.1, but in sympy 1.0 it used to be that not all functionality was implemented for MatrixElement.) | I need to do symbolic manipulations to very large systems of equations, and end up with well over 200 variables that I need to do computations with. The problem is, one would usually name their variables x, y, possibly z when solving small system of equations. Even starting at a, b, ... you only get 26 unique variables this way.
Is there a nice way of fixing this problem? Say for instance I wanted to fill up a 14x14 matrix with a different variable in each spot. How would I go about doing this? | 0 | 1 | 109 |
0 | 45,265,597 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-23T04:07:00.000 | 0 | 1 | 0 | Optimisations on spark jobs | 45,261,395 | 0 | python,apache-spark,pyspark | More information about the job would help:
Some of the generic suggestions:
The arrangement of operators is very important. Not all arrangements will result in the same performance. The arrangement of operators should be chosen to reduce the number of shuffles and the amount of data shuffled. Shuffles are fairly expensive operations; all shuffle data must be written to disk and then transferred over the network.
repartition , join, cogroup, and any of the *By or *ByKey transformations can result in shuffles.
rdd.groupByKey().mapValues(_.sum) will produce the same results as rdd.reduceByKey(_ + _). However, the former will transfer the entire dataset across the network, while the latter will compute local sums for each key in each partition and combine those local sums into larger sums after shuffling.
You can avoid shuffles when joining two datasets by taking advantage
of broadcast variables.
Avoid the flatMap-join-groupBy pattern.
Avoid reduceByKey When the input and output value types are different.
This is not exhaustive. And you should also consider tuning your configuration of Spark.
I hope it helps. | I am new to spark and want to know about optimisations on spark jobs.
My job is a simple transformation type job merging 2 rows based on a condition. What are the various types of optimisations one can perform over such jobs? | 0 | 1 | 49 |
0 | 45,263,297 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-07-23T08:40:00.000 | 2 | 1 | 0 | Building Convolutional Neural Network using large images? | 45,263,156 | 1.2 | python,machine-learning,tensorflow,neural-network,conv-neural-network | Theoretically there is no limit on the size of images being fed into a CNN. The most significant problem with larger image sizes is the increased memory footprint, especially with large batches. Moreover, you would need to use more convolutional layers to downsample the input image. Downsizing an image is a possibility of course, but you will lose discriminative information, naturally. For downsampling you can use scipy's scipy.misc.imresize. | I understand creating a convolutional neural network for a 32 x 32 x 3 image, but I am planning to use larger images with different pixel counts. How can I reduce the image size to the required size? Does the pixel reduction impact accuracy in TensorFlow? | 0 | 1 | 971 |
0 | 45,868,691 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-07-23T15:46:00.000 | 0 | 1 | 0 | Tensorflow: how to create batch with different type of data from different source (folder)? | 45,266,976 | 1.2 | python,tensorflow,batch-processing | I solved the ratio problem! I create two batches, one for validation and one for training, then I concatenate them with image_batch = tf.concat([image_validation_batch, image_train_batch], 0). This is only for the image batch; I will investigate the labels next.
I have a graph composed by two main data flow: image classification and label cleaning. I have two type of data:
(image_data, noisy_label, verified_label) from validation set
(image_data, noisy_label) from train set
The first is used to train the label cleaning part of the graph.
The second is used to train the image classification after its noisy label is cleaned.
Every batch needs to have a ratio of 1:9.
How can I create this type of batch? Is it possible in TensorFlow? | 0 | 1 | 258 |
0 | 45,276,971 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-24T09:17:00.000 | 1 | 1 | 0 | Neural Network on an external harddrive | 45,276,669 | 0.197375 | python,neural-network | A hard drive could simply store a file that represents your network. Perhaps a JSON file with the thousands of optimized weight values, etc. Or if it is optimized, simply a template on layers and neurons, depending on what you hope to do (test or train?). Then the program you have on your computer can load this file, and test/train it. The fact it is on a hard drive makes no difference. | I was wondering if anybody either had experience or suggestions regarding the
possibility of storing a neural network (nodes, synapses) on an external hard-drive, and using a computer to run it. I do not know whether this is possible.
I would like to be able to run a convolutional Neural Network while not loading my
computer up. Thanks. | 0 | 1 | 180 |
0 | 45,289,131 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-24T11:56:00.000 | 2 | 1 | 0 | getting distance matrix and features matrix from word2vec model | 45,280,020 | 0.379949 | python,k-means,gensim,word2vec | In gensim's Word2Vec model, the raw number_of-words x number_of_features numpy array of word vectors is in model.wv.vectors. (In older Gensim versions, the .vectors property was named .syn0 matching the original Google word2vec.c naming).
You can use the model.wv.key_to_index dict (previously .vocab) to learn the string-token-to-array-slot assignment, or the model.wv.index_to_key list (previously .index2word) to learn the array-slot-to-word assignment.
The pairwise distances aren't pre-calculated, so you'd have to create that yourself. And with typical vocabulary sizes, it may be impractically large. (For example, with a 100,000 word vocabulary, storing all pairwise distances in the most efficient way possible would require roughly 100,000^2 * 4 bytes/float / 2 = 20GB of addressable space.) | I have generated a word2vec model using gensim for a huge corpus, and I need to cluster the vocabulary using k-means clustering; for that I need:
cosine distance matrix (word to word, so the size of the matrix the number_of_words x number_of_words )
features matrix (word to features, so the size of the matrix is the number_of_words x number_of_features(200) )
For the feature matrix I tried to use x = model.wv and got an object of type gensim.models.keyedvectors.KeyedVectors, and it's much smaller than what I expected a feature matrix to be.
is there a way to use this object directly to generate the k-means clustering ? | 0 | 1 | 1,198 |
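A hedged sketch tying the answer to the clustering goal; the attribute names follow the answer (recent gensim releases, older versions used .syn0/.index2word), and the toy corpus and cluster count are assumptions:

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# tiny toy corpus just so the example runs end to end
sentences = [["machine", "learning"], ["deep", "learning"], ["machine", "translation"]]
model = Word2Vec(sentences, vector_size=50, min_count=1)   # use size=50 on gensim < 4.0

vectors = model.wv.vectors        # (vocab_size, n_features) feature matrix
words = model.wv.index_to_key     # row i of vectors belongs to words[i]

# cluster the word vectors directly, without building the full
# word-to-word distance matrix
km = KMeans(n_clusters=2, random_state=0).fit(vectors)
word_to_cluster = dict(zip(words, km.labels_))
print(word_to_cluster)
```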
0 | 45,282,483 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-07-24T13:39:00.000 | 0 | 1 | 0 | Tensorflow: combining two tensors with dimension X into one tensor with dimension X+1 | 45,282,194 | 1.2 | python,tensorflow | This is impossible. You have tensor, which contains batch_size * max_word_length
elements and a tensor which contains batch_size * predicted_label elements. Hence there are
batch_size * (max_word_length + predicted_label)
elements in total. But now you want to create a new tensor [batch_size, max_word_length, predicted_label] with
batch_size * max_word_length * predicted_label
elements. You don't have enough elements for this. | I am doing some sentiment analysis with Tensorflow, but there is a problem I can't solve:
I have one tensor (input) shaped as [?, 38] [batch_size, max_word_length] and one (prediction) shaped as [?, 3] [batch_size, predicted_label].
My goal is to combine both tensors into a single tensor with the shape of [?, 38, 3].
This tensor is used as the input of my second stage.
Seems easy, but i can't find a way of doing it.
Can (and will) you tell me how to do this? | 0 | 1 | 834 |
0 | 45,285,878 | 0 | 0 | 0 | 0 | 1 | false | 48 | 2017-07-24T16:32:00.000 | 1 | 5 | 0 | When to use pandas series, numpy ndarrays or simply python dictionaries? | 45,285,743 | 0.039979 | python,pandas,numpy | Numpy is very fast with arrays, matrix, math.
Pandas series have indexes, sometimes it's very useful to sort or join data.
Dictionaries are a slower beast, but sometimes they're very handy too.
So, as it was already was mentioned, it depends on use case which data types and tools to use. | I am new to learning Python, and some of its libraries (numpy, pandas).
I have found a lot of documentation on how numpy ndarrays, pandas series and python dictionaries work.
But owing to my inexperience with Python, I have had a really hard time determining when to use each one of them. And I haven't found any best-practices that will help me understand and decide when it is better to use each type of data structure.
As a general matter, are there any best practices for deciding which, if any, of these three data structures a specific data set should be loaded into?
Thanks! | 0 | 1 | 27,323 |
0 | 45,297,247 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-07-25T05:44:00.000 | 0 | 2 | 0 | How can I use 2 gpu to calculate in tensorflow? | 45,294,801 | 0 | python,tensorflow | By default, if the GPU version of TensorFlow is installed, TensorFlow will use all the GPU available.
To control the GPU memory allocation, you can use the tf.ConfigProto().gpu_options. | I use two GTX 980 gpu.
When I am dealing with slim in TensorFlow,
I usually run into a so-called 'Out of Memory' problem.
So, I want to use two GPUs at the same time.
How can I use 2 GPUs?
Oh, sorry for my poor english skill. :( | 0 | 1 | 599 |
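A minimal sketch of manual device placement under the TensorFlow 1.x graph API; the ops below are placeholders for whichever parts of the model you want on each card:

```python
import tensorflow as tf

# put part of the work on each card explicitly
with tf.device("/gpu:0"):
    a = tf.random_normal([1000, 1000])
    part0 = tf.matmul(a, a)

with tf.device("/gpu:1"):
    b = tf.random_normal([1000, 1000])
    part1 = tf.matmul(b, b)

total = part0 + part1

# allow_growth avoids grabbing all memory on both cards up front
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
    sess.run(total)
```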
0 | 45,456,995 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-07-25T09:43:00.000 | 0 | 1 | 0 | Variable Resolution with Tensorflow for Superresolution | 45,299,561 | 1.2 | python,tensorflow,resolution,inference,tensor | Okay so here is what I did:
input and output tensors now have the shape (batchsize, None, None, channels)
The training images now have to be resized outside of the network.
Important reminder: training images have to be the same size since they are in batches! Images in one batch have to have the same size. At inference time the batch size is 1, so the size does not matter. | I am using tensorflow to scale images by a factor of 2. But since the tensor (batchsize, height, width, channels) determines the resolution, it accepts images of only one resolution for inference and training.
For other resolutions I have to modify the code and retrain the model. Is it possible to make my code resolution independent? In theory convolutions of images are resolution independent, I don't see a reason why this wouldn't be possible.
I have no idea how to do this in tensorflow though. Is there anything out there to help me with this?
Thanks | 0 | 1 | 208 |
0 | 45,308,393 | 0 | 0 | 0 | 0 | 1 | false | 16 | 2017-07-25T15:02:00.000 | 24 | 4 | 0 | using leaky relu in Tensorflow | 45,307,072 | 1 | python,tensorflow,neural-network | If alpha < 1 (it should be), you can use tf.maximum(x, alpha * x) | How can I change G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1) to leaky relu? I have tried looping over the tensor using max(value, 0,01*value) but I get TypeError: Using a tf.Tensor as a Python bool is not allowed.
I also tried to find the source code on relu on Tensorflow github so that I can modify it to leaky relu but I couldn't find it.. | 0 | 1 | 19,787 |
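Following the accepted suggestion, a small sketch wrapping it as a reusable function; the commented line reuses the variable names from the question:

```python
import tensorflow as tf

def leaky_relu(x, alpha=0.01):
    # for alpha < 1, max(x, alpha*x) equals x for x >= 0 and alpha*x otherwise
    return tf.maximum(x, alpha * x)

# G_h1 = leaky_relu(tf.matmul(z, G_W1) + G_b1)   # drop-in replacement for tf.nn.relu
```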
0 | 45,312,185 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-07-25T17:55:00.000 | 3 | 2 | 0 | python matplotlib save graph as data file | 45,310,481 | 0.291313 | python,matplotlib | @Cedric's Answer.
Additionally, if you get the pickle error for pickling functions, add the 'dill' library to your pickling script. You just need to import it at the start, it will do the rest. | I want to create a python script that zooms in and out of matplotlib graphs along the horizontal axis. My plot is a set of horizontal bar graphs.
I also want to make that able to take any generic matplotlib graph.
I do not want to just load an image and zoom into that, I want to zoom into the graph along the horizontal axis. (I know how to do this)
Is there some way I can save and load a created graph as a data file or is there an object I can save and load later?
(typically, I would be creating my graph and then displaying it with the matplotlib plt.show, but the graph creation takes time and I do not want to recreate the graph every time I want to display it) | 0 | 1 | 6,630 |
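A minimal sketch of the pickle round trip the answers refer to; the file name and toy bar chart are placeholders, and dill is only needed if plain pickle refuses to serialise function references held by the figure:

```python
import pickle
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.barh(range(3), [3, 7, 5])               # stand-in for the real horizontal bar graph

# save the live figure object, not an image
with open("graph.pkl", "wb") as f:
    pickle.dump(fig, f)

# later: reload the figure and keep working with it (zoom, change xlim, etc.)
with open("graph.pkl", "rb") as f:
    fig2 = pickle.load(f)
fig2.axes[0].set_xlim(0, 10)
plt.show()
```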
0 | 45,316,714 | 0 | 1 | 0 | 0 | 1 | false | 31 | 2017-07-26T02:46:00.000 | 1 | 5 | 0 | How to install Tensorflow on Python 2.7 on Windows? | 45,316,569 | 0.039979 | python,tensorflow,module,pip,installation | If you are using windows:
If you take a gander at the TensorFlow website, under Windows pip installation the first line says:
"Pip installation on Windows
TensorFlow supports only 64-bit Python 3.5 on Windows. We have tested the pip packages with the following distributions of Python:"
Now either install Python 3.5, or use the unofficial version of TensorFlow from Anaconda.
Another way is to download and install Docker Toolbox for Windows (https://www.docker.com/docker-toolbox). Open a cmd window and type: docker run -it b.gcr.io/tensorflow/tensorflow. This should bring up a Linux shell. Type python and I think all will be well! | I try to install TensorFlow via pip (pip install tensorflow) but get this error
could not find a version that satisfies the requirement tensorflow (from versions: )
Is there a solution to this problem? I still wish to install it via pip | 0 | 1 | 88,521 |
0 | 45,370,757 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-07-28T08:57:00.000 | 0 | 1 | 0 | Performing Logistic Regression with a large number of features? | 45,369,106 | 0 | python,machine-learning,statistics,logistic-regression,prediction | Your best choice would be to use L1 regularized logistic regression (aka Lasso regression). In case you're not familiar with it, the algorithm automatically selects some of the features by penalizing those that do not lead to increased accuracy (in layman terms).
You can increase/decrease this regularization strength (it's just a parameter) till your model achieves the highest accuracy (or some other metric) on a test set or in a cross-validation procedure. | I have a dataset with 330 samples and 27 features for each sample, with a binary class problem for Logistic Regression.
According to the "rule of ten" I need at least 10 events for each feature to be included. However, I have an imbalanced dataset, with 20% of positive class and 80% of negative class.
That gives me only 70 events, allowing approximately only 7/8 features to be included in the Logistic model.
I'd like to evaluate all the features as predictors, I don't want to hand pick any features.
So what would you suggest? Should I try all possible 7-feature combinations? Should I evaluate each feature alone with an association model and then pick only the best ones for a final model?
I'm also curious about the handling of categorical and continuous features, can I mix them? If I have a categorical [0-1] and a continuous [0-100], should I normalize? | 0 | 1 | 772 |
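A hedged sketch of the suggested approach with scikit-learn; the toy arrays and the grid of C values are assumptions, and class_weight='balanced' is added only because the question mentions the 20/80 imbalance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# X: (330, 27) feature matrix, y: binary labels (toy stand-ins here)
X = np.random.rand(330, 27)
y = (np.random.rand(330) < 0.2).astype(int)

# liblinear supports the L1 penalty; smaller C = stronger regularization,
# i.e. more coefficients pushed exactly to zero
grid = GridSearchCV(LogisticRegression(penalty="l1", solver="liblinear",
                                       class_weight="balanced"),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    scoring="roc_auc", cv=5)
grid.fit(X, y)
print(grid.best_params_)
print(grid.best_estimator_.coef_)   # zeroed coefficients = dropped features
```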
0 | 45,370,664 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-07-28T09:58:00.000 | 1 | 2 | 0 | Is it good to normalization/standardization data having large number of features with zeros | 45,370,442 | 0.099668 | python,machine-learning,pca,svd,normalize | Any features that only have zeros (or any other constant value) in the training set, are not and cannot be useful for any ML model. You should discard them. The model cannot learn any information from them so it won't matter that the test data do have some non-zero values.
Generally, you should do normalization or standardization before feeding data for PCA/SVD, otherwise these methods will catch wrong patterns in the data (e.g. if features are on a different scale between each other).
Regarding the reason behind such a difference in the accuracy, I'm not sure. I guess it has to do with some peculiarities of the dataset. | I have data with around 60 features, and most will be zeros most of the time; in my training data only 2-3 columns may have values (to be precise, it's perf log data).
I've done normalization/standardization (tried both separately) and fed it to PCA/SVD (tried both separately). I used these features to fit my model, but it is giving very inaccurate results.
Whereas, if I skip the normalization/standardization step and directly feed my data to PCA/SVD and then to the model, it gives accurate results (almost above 90% accuracy).
P.S.: I have to do anomaly detection, so I am using the Isolation Forest algorithm.
why these results are varying? | 0 | 1 | 2,142 |
0 | 45,371,303 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2017-07-28T09:58:00.000 | 3 | 2 | 0 | Is it good to normalization/standardization data having large number of features with zeros | 45,370,442 | 1.2 | python,machine-learning,pca,svd,normalize | Normalization and standarization (depending on the source they sometimes are used equivalently, so I'm not sure what you mean exactly by each one in this case, but it's not important) are a general recommendation that usually works well in problems where the data is more or less homogeneously distributed. Anomaly detection however is, by definition, not that kind of problem. If you have a data set where most of the examples belong to class A and only a few belong to class B, it is possible (if not necessary) that sparse features (features that are almost always zero) are actually very discriminative for your problem. Normalizing them will basically turn them to zero or almost zero, making it hard for a classifier (or PCA/SVD) to actually grasp their importance. So it is not unreasonable that you get better accuracy if you skip the normalization, and you shouldn't feel you are doing it "wrong" just because you are "supposed to do it"
I don't have experience with anomaly detection, but I have some with unbalanced data sets. You could consider some form of "weighted normalization", where the computation of the mean and variance of each feature is weighted with a value inversely proportional to the number of examples in the class (e.g. examples_A ^ alpha / (examples_A ^ alpha + examples_B ^ alpha), with alpha some small negative number). If your sparse features have very different scales (e.g. one is 0 in 90% of cases and 3 in 10% of cases and another is 0 in 90% of cases and 80 in 10% of cases), you could just scale them to a common range (e.g. [0, 1]).
In any case, as I said, do not apply techniques just because they are supposed to work. If something doesn't work for your problem or particular dataset, you are rightful not to use it (and trying to understand why it doesn't work may yield some useful insights). | I'm having data with around 60 features and most will be zeros most of the time in my training data only 2-3 cols may have values( to be precise its perf log data). however, my test data will have some values in some other columns.
I've done normalization/standardization (tried both separately) and fed it to PCA/SVD (tried both separately). I used these features to fit my model, but it is giving very inaccurate results.
Whereas, if I skip the normalization/standardization step and directly feed my data to PCA/SVD and then to the model, it gives accurate results (almost above 90% accuracy).
P.S.: I have to do anomaly detection, so I am using the Isolation Forest algorithm.
why these results are varying? | 0 | 1 | 2,142 |
0 | 45,394,598 | 0 | 0 | 0 | 0 | 1 | true | 12 | 2017-07-29T22:08:00.000 | 23 | 1 | 0 | Do I need to split data when using GridSearchCV? | 45,394,527 | 1.2 | python,machine-learning,scikit-learn,grid-search | GridSearchCV will take the data you give it, split it into Train and CV set and train algorithm searching for the best hyperparameters using the CV set. You can specify different split strategies if you want (for example proportion of split).
But when you perform hyperparameter tuning, information about the dataset still 'leaks' into the algorithm.
Hence I would advise taking the following approach:
1) Take your original dataset and hold out some data as a test set (say, 10%)
2) Use grid search on the remaining 90%. The split will be done for you by the algorithm here.
3) After you have the optimal hyperparameters, test on the test set from #1 to get a final estimate of the performance you can expect on new data. | GridSearchCV uses StratifiedKFold or KFold. So my question is: should I split my data into train and test before using grid search, and then do fitting only for the test data? I am not sure whether it is necessary, because the CV method already splits the data, but I have seen some examples which split the data beforehand.
Thank you. | 0 | 1 | 8,338 |
0 | 45,410,830 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-07-31T08:18:00.000 | 1 | 2 | 0 | Tensorflow vs Numpy math functions | 45,410,644 | 0.099668 | python,numpy,tensorflow | Of course there is a real difference. Numpy works on arrays which can use highly optimized vectorized computations and it's doing pretty well on CPU whereas tensorflow's math functions are optimized for GPU where many matrix multiplications are much more important. So the question is where you want to use what. For CPU, I would just go with numpy whereas for GPU, it makes sense to use TF operations. | Is there any real difference between the math functions performed by numpy and tensorflow. For example, exponential function, or the max function?
The only difference I noticed is that tensorflow takes input of tensors, and not numpy arrays.
Is this the only difference, and no difference in the results of the function, by value? | 0 | 1 | 2,450 |
0 | 45,414,023 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-07-31T10:40:00.000 | 4 | 1 | 0 | Keras: How to make fast predictions with trained network? | 45,413,651 | 1.2 | python-3.x,deep-learning,keras | One way would be to use a Jupyter notebook, load your model in one cell and do continuous predictions in subsequent cells.
Another way is to set up a server with Flask and run predictions against a simple API. | I built and trained my NN and now it is time to make predictions for the given input data. But I don't know the proper way to make fast predictions with the trained NN. What I am currently doing is loading the model every time and making predictions on it. I wonder if there is a way to load the model in memory permanently (for a session) and then make predictions. | 0 | 1 | 998 |
0 | 45,428,708 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-08-01T03:47:00.000 | 0 | 2 | 0 | Getting all but the first k rows from a group in a GroupBy object | 45,428,631 | 0 | python,pandas,pandas-groupby | Yes, you can reindex the new dataframe using the reset_index() method. | I have a pandas GroupBy object. I am using head(k) to extract the first k elements of each group into a dataframe and I want to also extract the complement. Each group has a nonconstant size.
Is there any straightforwards way of doing this? | 0 | 1 | 433 |
0 | 45,432,298 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-08-01T07:49:00.000 | 1 | 1 | 0 | Linux[Ubuntu 16.04]-Installing MATLAB engine for Anaconda Python3 | 45,431,931 | 1.2 | linux,matlab,python-3.x,anaconda,ubuntu-16.04 | I did it myself. I just copied the matlab folder that was created in the MATLAB directory for py2.7 into my Anaconda virtual env's site-packages.
According to the paths mentioned above in question, you need to do this on linux terminal.
cp /usr/local/MATLAB/R2016a/extern/engines/python/build/lib.linux-x86_64-2.7/matlab /home/fire-trail/anaconda3/envs/py34/lib/python3.4
and it will work with py34 in anaconda.
Remember that the minimum requirement for the MATLAB engine on Linux is MATLAB 2014b and Python 2.7.
Hope this helps others. | I'm trying to get Matlab's python engine to work with my Anaconda installation on Linux. But I'm not quite getting it right.
Anaconda's Python version: 3.6 (created a virtual-env for python 3.4)
Matlab Version: 2016b
Path to matlab root: /usr/local/MATLAB
Path to Anaconda: /home/fire-trail/anaconda3
Virtual env: py34
I installed matlab engine via official documentation from mathworks but it installs it in the default Linux Python installation and that too in Python 2.7
I want Anaconda 3.4 virtual env (py34) to find matlab engine. | 0 | 1 | 594 |
0 | 45,468,144 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-08-01T12:05:00.000 | 2 | 2 | 0 | Implementing Policy iteration methods in Open AI Gym | 45,437,357 | 0.197375 | python,machine-learning,reinforcement-learning,openai-gym | No, OpenAI Gym environments will not provide you with the information in that form. In order to collect that information you will need to explore the environment via sampling: i.e. selecting actions and receiving observations and rewards. With these samples you can estimate them.
One basic way to approximate these values is to use LSPI (least-squares policy iteration); as far as I remember, you will find more about this in Sutton too. | I am currently reading "Reinforcement Learning" from Sutton & Barto and I am attempting to write some of the methods myself.
Policy iteration is the one I am currently working on. I am trying to use OpenAI Gym for a simple problem, such as CartPole or continuous mountain car.
However, for policy iteration, I need both the transition matrix between states and the Reward matrix.
Are these available from the 'environment' that you build in OpenAI Gym.
I am using python.
If not, how do I calculate these values, and use the environment? | 0 | 1 | 1,540 |
0 | 45,453,040 | 0 | 0 | 0 | 0 | 2 | true | 9 | 2017-08-01T18:12:00.000 | 21 | 2 | 0 | Python: What is the "size" parameter in Gensim Word2vec model class | 45,444,964 | 1.2 | python,gensim,word2vec | size is, as you note, the dimensionality of the vector.
Word2Vec needs large, varied text examples to create its 'dense' embedding vectors per word. (It's the competition between many contrasting examples during training which allows the word-vectors to move to positions that have interesting distances and spatial-relationships with each other.)
If you only have a vocabulary of 30 words, word2vec is unlikely to be an appropriate technology. And if trying to apply it, you'd want to use a vector size much lower than your vocabulary size – ideally much lower. For example, texts containing many examples of each of tens-of-thousands of words might justify 100-dimensional word-vectors.
Using a higher dimensionality than vocabulary size would more-or-less guarantee 'overfitting'. The training could tend toward an idiosyncratic vector for each word – essentially like a 'one-hot' encoding – that would perform better than any other encoding, because there's no cross-word interference forced by representing a larger number of words in a smaller number of dimensions.
That'd mean a model that does about as well as possible on the Word2Vec internal nearby-word prediction task – but then awful on other downstream tasks, because there's been no generalizable relative-relations knowledge captured. (The cross-word interference is what the algorithm needs, over many training cycles, to incrementally settle into an arrangement where similar words must be similar in learned weights, and contrasting words different.) | I have been struggling to understand the use of size parameter in the gensim.models.Word2Vec
From the Gensim documentation, size is the dimensionality of the vector. Now, as far as my knowledge goes, word2vec creates a vector of the probability of closeness with the other words in the sentence for each word. So, suppose if my vocab size is 30 then how does it create a vector with the dimension greater than 30? Can anyone please brief me on the optimal value of Word2Vec size?
Thank you. | 0 | 1 | 14,983 |
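To make the accepted answer above concrete, here is a small, hypothetical gensim example. Note the parameter is called size in gensim 3.x and earlier and vector_size in gensim 4.x, so adjust to your installed version.

```python
from gensim.models import Word2Vec

# Tiny toy corpus; with such a small vocabulary, a very low dimensionality is appropriate.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["a", "cat", "chased", "a", "dog"],
]

# size=10 keeps the embedding dimensionality well below the vocabulary size.
model = Word2Vec(sentences, size=10, window=2, min_count=1, workers=1)

print(model.wv["cat"].shape)         # -> (10,)
print(model.wv.most_similar("cat"))  # nearest neighbours in the 10-dimensional space
```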
0 | 65,432,085 | 0 | 0 | 0 | 0 | 2 | false | 9 | 2017-08-01T18:12:00.000 | 0 | 2 | 0 | Python: What is the "size" parameter in Gensim Word2vec model class | 45,444,964 | 0 | python,gensim,word2vec | It's equal to vector_size.
To make it easy, it's a uniform size of dimension of the output vectors for each word that you trained with word2vec. | I have been struggling to understand the use of size parameter in the gensim.models.Word2Vec
From the Gensim documentation, size is the dimensionality of the vector. Now, as far as my knowledge goes, word2vec creates a vector of the probability of closeness with the other words in the sentence for each word. So, suppose if my vocab size is 30 then how does it create a vector with the dimension greater than 30? Can anyone please brief me on the optimal value of Word2Vec size?
Thank you. | 0 | 1 | 14,983 |
0 | 45,447,380 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-08-01T20:08:00.000 | 0 | 1 | 0 | Caffe LMDB train and val.txt | 45,446,829 | 1.2 | python,caffe | You are confusing test and validation sets. A validation set is a set where you know the labels (like in training) but you do not train on it. The validation set is used to make sure you are not overfitting the training data.
At test time you may present your model with unlabeled data and make predictions for these samples. | During the process of making an lmdb file, we are supposed to make a train.txt and a val.txt file. I have already made a train.txt file which consists of the image name, a space, and its corresponding label, e.g. image1.JPG 0.
Now that I have to make the val.txt file, I am confused as to how I give it its corresponding values, since it is my test data and I am hoping to predict those. Can anyone tell me what this val.txt file is and what it is supposed to be doing? | 0 | 1 | 427
0 | 45,450,859 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2017-08-02T02:57:00.000 | 4 | 2 | 0 | What's the difference with opencv, python-opencv, and libopencv? | 45,450,706 | 0.379949 | python,opencv,ubuntu-14.04 | libopencv is the Debian/Ubuntu package, while python-opencv is the Python wrapper and can be accessed using the cv2 interface, as COLDSPEED mentioned | I'm new to opencv and using Ubuntu 14.04. I'm confused about the difference between opencv, python-opencv, and libopencv: I have libopencv and python-opencv installed in my system, but there is no cv interface accessible, so I have to install opencv, which is much harder than python-opencv and libopencv. | 0 | 1 | 16,239
0 | 45,462,707 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-08-02T13:26:00.000 | 0 | 1 | 0 | How to deploy a desktop application that includes a neural network? | 45,462,207 | 1.2 | python,numpy,neural-network | Neural network weights are just data. You can store this any way you like along with the distributed application. As you have used Numpy to create the weights and biases array, you can probably just use pickle - add save_network function or similar name (used in training program only) and load_network function. If your weights and biases are just a bunch of local variables, you will want put them into a structure like a dict first. | I have developed a desktop application. It includes a neural network as a part of the application. Now I'm confused what to do after training. Can I make an executable out of it as usual method?
Could someone please explain what I should do, because I have no idea how to pass this milestone. I've tried searching neural network tutorials, but none of them helped me with this problem.
If someone wants to know, I have used numpy and openCV only. | 0 | 1 | 248 |
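A minimal sketch of the pickle idea from the answer above; the function names, file name, and layer shapes are placeholders, not an established API.

```python
import pickle
import numpy as np

def save_network(weights, biases, path="network.pkl"):
    # Pack the trained parameters into a plain dict and pickle it (training side only).
    with open(path, "wb") as f:
        pickle.dump({"weights": weights, "biases": biases}, f)

def load_network(path="network.pkl"):
    # The deployed application only needs this plus the forward-pass code.
    with open(path, "rb") as f:
        params = pickle.load(f)
    return params["weights"], params["biases"]

if __name__ == "__main__":
    w = [np.random.randn(3, 4), np.random.randn(4, 2)]   # dummy trained weights
    b = [np.zeros(4), np.zeros(2)]                        # dummy trained biases
    save_network(w, b)
    w2, b2 = load_network()
    print([a.shape for a in w2], [a.shape for a in b2])
```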
0 | 48,011,076 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2017-08-03T15:26:00.000 | 4 | 2 | 0 | Loss history for MLPRegressor | 45,488,558 | 0.379949 | python,scikit-learn | Very simple actually: model.loss_curve_ gives you the values of the loss for each epoch. You can then easily plot the learning curve by putting the epochs on the x axis and the values mentioned above on the y axis | I am using an MLPRegressor to solve a problem and would like to plot the loss function, i.e., by how much the loss decreases in each training epoch. However, the attribute model.loss_ available for the MLPRegressor only allows access to the last loss value. Is there any possibility to access the whole loss history? | 0 | 1 | 7,290 |
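A small illustration of the loss_curve_ attribute mentioned in the answer above, on synthetic data (the network size and data are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(200)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
model.fit(X, y)

# loss_curve_ holds one training-loss value per iteration/epoch
plt.plot(model.loss_curve_)
plt.xlabel("epoch")
plt.ylabel("training loss")
plt.show()
```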
0 | 57,315,631 | 0 | 0 | 0 | 0 | 2 | false | 9 | 2017-08-04T20:30:00.000 | 2 | 9 | 0 | How to remove columns with too many missing values in Python | 45,515,031 | 0.044415 | python,pandas,dataframe,scikit-learn,missing-data | The fastest way to find the sum of NaN or the percentage by columns is:
for the sum: df.isna().sum()
for the percentage: df.isna().mean() | I'm working on a machine learning problem in which there are many missing values in the features. There are 100's of features and I would like to remove those features that have too many missing values (it can be features with more than 80% missing values). How can I do that in Python?
My data is a Pandas dataframe. | 0 | 1 | 35,470 |
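Building on the answer above, a sketch of dropping every column whose fraction of missing values exceeds the asker's 80% threshold:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [np.nan] * 5,                 # 100% missing -> should be dropped
    "b": [1, 2, 3, np.nan, 5],         # 20% missing  -> kept
    "c": [1, 2, 3, 4, 5],              # complete     -> kept
})

threshold = 0.8
keep = df.isna().mean() <= threshold   # fraction of NaN per column
df_reduced = df.loc[:, keep]
print(df_reduced.columns.tolist())     # -> ['b', 'c']
```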
0 | 66,210,039 | 0 | 0 | 0 | 0 | 2 | false | 9 | 2017-08-04T20:30:00.000 | 0 | 9 | 0 | How to remove columns with too many missing values in Python | 45,515,031 | 0 | python,pandas,dataframe,scikit-learn,missing-data | One thing about dropna() according to the documentation, thresh argument specifies the number of non-NaN to keep. | I'm working on a machine learning problem in which there are many missing values in the features. There are 100's of features and I would like to remove those features that have too many missing values (it can be features with more than 80% missing values). How can I do that in Python?
My data is a Pandas dataframe. | 0 | 1 | 35,470 |
0 | 45,788,917 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-08-05T03:37:00.000 | 0 | 2 | 0 | How to run openCV algorithms in real-time on Raspberry PI3 | 45,517,925 | 1.2 | python,opencv,raspberry-pi,raspbian,opencv3.0 | If you haven't already done so, you should consider the following:
Reduce image size to the minimum required size for recognizing the target object for each classifier. If different objects require different resolutions, you can even use a set of copies of the original image, with different sizes.
Identify search regions for each classifier and thereby reduce the search area. For example, if you are searching for face landmarks, you can define search regions for the left eye, right eye, nose, and mouth after running the face detector and finding the rectangle that contains the face.
I am not very sure if optimization is going to be very helpful, because OpenCv already does some hardware optimization. | I'm working on a Raspberry PI, an embedded linux platform with Raspbian Jessie where Python 2.7 is already installed, and I have OpenCV algorithms that must run in real-time and must apply several HAAR Cascade classifiers on the same image. Is there any method to reduce the time of these operations? such as multithreading for example?
I have also heard about GPU computation, but I don't know where to start.
Thank you for the help. | 0 | 1 | 669 |
0 | 66,559,845 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-05T07:16:00.000 | 1 | 2 | 0 | Pyspark: How to use --files tag for multiple files while running job on Yarn cluster | 45,519,339 | 0.099668 | python,pyspark | you can send comma separated files in a string like this through file paths :
--files "filepath1,filepath2,filepath3" \
worked for me!! | I am new to Spark and using python to write jobs using pyspark. I wanted to run my script on a yarn cluster and remove the verbose logging by sending a log4j.properties for setting logging level to WARN using --files tag. I have a local csv file that the script uses and i need to include this as well. How do I use --files tag to include both the files?
I am using the following command:
/opt/spark/bin/spark-submit --master yarn --deploy-mode cluster --num-executors 50 --executor-cores 2 --executor-memory 2G --files /opt/spark/conf/log4j.properties ./list.csv ./read_parquet.py
But I get the following error:
Error: Cannot load main class from JAR file:/opt/spark/conf/./list.csv
` | 0 | 1 | 4,088 |
0 | 49,416,048 | 0 | 1 | 0 | 0 | 1 | false | 20 | 2017-08-05T15:49:00.000 | 17 | 6 | 0 | Pandas slicing excluding the end | 45,523,749 | 1 | python,pandas,indexing | Easiest I can think of is df.loc[start:end].iloc[:-1].
Chops off the last one. | When slicing a dataframe using loc,
df.loc[start:end]
both start and end are included. Is there an easy way to exclude the end when using loc? | 0 | 1 | 11,186 |
0 | 45,533,385 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-06T11:23:00.000 | 0 | 2 | 0 | NLTK FreqDist counting two words as one | 45,531,514 | 0 | python,nltk | This is a well-known problem in NLP and it is often referred to as tokenization. I can think of two possible solutions:
try different NLTK tokenizers (e.g. twitter tokenizer), which maybe will be able to cover all of your cases
run Named Entity Recognition (NER) on your sentences. This allows you to recognise entities present in the text. This could work because it can recognise "heart rate" as a single entity, and thus as a single token. | I am having some trouble with NLTK's FreqDist. Let me give you some context first:
I have built a web crawler that crawls webpages of companies selling wearable products (smartwatches etc.).
I am then doing some linguistic analysis and for that analysis I am also using some NLTK functions - in this case FreqDist.
nltk.FreqDist works fine in general - it does the job and does it well; I don't get any errors etc.
My only problem is that the word "heart rate" comes up often and because I am generating a list of the most frequently used words, I get heart and rate separately to the tune of a few hundred occurrences each.
Now of course rate and heart can both occur without being used as "heart rate" but how do I count the occurrences of "heart rate" instead of just the 2 words separately and I do mean in an accurate way. I don't want to subtract one from the other in my current Counters or anything like that.
Thank you in advance! | 0 | 1 | 1,205 |
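As a complement to the answer above, NLTK's MWETokenizer (a re-tokenization step rather than one of the two options listed) can merge a known multi-word expression such as "heart rate" into a single token before the FreqDist is built; the sample text is made up and word_tokenize assumes the punkt data has been downloaded.

```python
from nltk import FreqDist
from nltk.tokenize import MWETokenizer, word_tokenize

text = "The watch tracks heart rate all day. Resting heart rate and step rate are shown."

# Merge the known multi-word expression into one token, joined with a space.
mwe = MWETokenizer([("heart", "rate")], separator=" ")
tokens = mwe.tokenize(word_tokenize(text.lower()))

fdist = FreqDist(tokens)
print(fdist["heart rate"])   # the phrase is counted as a single item
print(fdist["rate"])         # standalone uses of 'rate' remain separate
```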
0 | 45,537,986 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-07T00:35:00.000 | 0 | 1 | 0 | Python have program running and ready for when called | 45,537,958 | 0 | python,scripting,neural-network | Wrap it in a Python based web server listening on some agreed-on port. Hit it with HTTP requests when you want to supply a new file or retrieve results. | To be clear I have no idea what I'm doing here and any help would be useful.
I have a number of saved files, Keras neural network models and dataframes. I want to create a program that loads all of these files so that the data is there and waiting for when needed.
Any data sent to the algorithm will be standardised and fed into the neural networks.
The algorithm may be called hundreds of times in quick succession and so I don't want to have to import the model and standardisation parameters every time as it will slow everything down.
As I understand it the plan is to have this program running in the background on a server and then somehow call it when required.
How would I go about setting up something like this? I'm asking here first because I've never attempted anything like this before and I don't even know where to start. I'm really hoping you can help me find some direction or maybe provide an example of something similar. Even a search term that would help me research would be useful.
Many thanks | 0 | 1 | 28 |
0 | 49,675,393 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-07T09:39:00.000 | 0 | 1 | 0 | Pandas: calculating how much RAM is needed to generate a pivot table? | 45,543,799 | 0 | python,pandas,pivot-table | If you're using DataFrame.pivot_table(), try using DataFrame.pivot() instead, it has much smaller memory consumption and is also faster.
This solution is only possible if you're not using a custom aggregation function to construct you pivot table and if the tuple of columns you're pivoting on don't have redundant combinations. | I have 300 million rows and 3 columns in pandas.
I want to pivot this to a wide format.
I estimate that the total memory in the current long format is 9.6 GB.
I arrived at this by doing
300,000,000 * 3 * 8 bytes per "cell".
I want to convert to a wide format with
1.9 million rows * 1000 columns.
I estimate that it should take 15.2 GB.
When I pivot, the memory usage goes to 64 GB (Linux resource monitor), the swap usage goes to 30 GB, and then the IPython kernel dies, which I am assuming is an out-of-memory related death.
Am I correct that during the generation of a pivot table the RAM usage will spike to more than the 64 GB of RAM that my desktop has? Why does generating a pivot table exceed system RAM? | 0 | 1 | 528 |
0 | 45,798,320 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-07T16:03:00.000 | 0 | 1 | 0 | NN: outputting a probability density function instead of a single value | 45,551,212 | 0 | python,tensorflow,neural-network,keras | Ok So I found a solution to this issue, though it adds a lot of overhead.
Initially I thought the keras callback could be of use but despite the fact that it provided the flexibility that I wanted i.e.: train only on test data or only a subset and not for every test. It seems that callbacks are only given summary data from the logs.
So the first step was to create a custom metric that would do the same calculation as any metric with the 2 arrays (the true values and the predicted values) and, once those calculations are done, output them to a file for later use.
Then, once we found a way to gather all the data for every sample, the next step was to implement a method that could give a good measure of error. I'm currently implementing a handful of methods, but the most fitting one seems to be Bayesian bootstrapping (user lmc2179 has a great Python implementation). I also implemented ensemble methods and Gaussian processes as alternatives or to use as other metrics, as well as some other Bayesian methods.
I'll try to find if there are internals in keras that are set during the training and testing phases to see if I can set a trigger for my metric. The main issue with using all the data is that you obtain a lot of unreliable data points at the start since the network is not optimized. Some data filtering could be useful to remove a good amount of those points to improve the results of the error predictors.
I'll update if I find anything interesting. | This might sound silly but I'm just wondering about the possibility of modifying a neural network to obtain a probability density function rather than a single value when you are trying to predict a scalar. I know that when you are trying to classify images or words you can get a probability for each class, so I'm thinking there might be a way to do something similar with a continuous value and plot it. (Similar to the posterior plot with bayesian optimisation)
Such details could be interesting when deploying a model for prediction and could provide more flexibility than a single value.
Does anyone knows a way to obtain such an output?
Thanks! | 0 | 1 | 117 |
0 | 45,551,702 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-07T16:24:00.000 | 1 | 1 | 0 | Can I rescale an image without using an imaging library? | 45,551,570 | 0.197375 | python,image,python-3.x | Images have so many formats: compressed, uncompressed, black & white or colored; they may be flat or layered, and they may be constructed as raster or vector images, so the answer is generally NO. | I have been looking for a few hours now and I can't find an answer to whether I can rescale an image without using an imaging library. I am using Python 3; I don't really know if that matters. Thank you | 0 | 1 | 39
0 | 45,555,309 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-07T20:22:00.000 | 0 | 1 | 0 | What is the point of stateless LSTM to exists? | 45,555,122 | 0 | python,keras,lstm,rnn | Data for keras LSTM models is always in the form of (batch_size, n_steps, n_features). When you use shuffle=True, you are going to shuffle on the batch_size argument, thereby retaining the natural order of the sequence that is n_steps long.
In cases that each of your batch_size number of sequences are unrelated to each other, it would be natural to use a stateless model. Each array still contains order (a time-series), but does not depend on the others in your batch. | The main purpose of the LSTM is to utilize its memory property. Based on that what is the point of a stateless LSTM to exist? Don’t we “convert” it into a simple NN by doing that?
In other words.. Does the stateless use of LSTM aim to model the sequences (window) in the input data - if we apply shuffle = False in the fit layer in keras - (eg. for a window of 10 time steps capture any pattern between 10-character words)? If yes why don’t we convert the initial input data to match the form of the sequencers under inspection and then use a plain NN?
If we choose to have shuffle = True then we are losing any information that could be found in our data (e.g. time series data - sequences), don't we? In that case I would expect in to behave similarly to a plain NN and get the same results between the two by setting the same random seed.
Am I missing something in my thinking?
Thanks! | 0 | 1 | 408 |
0 | 45,577,693 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2017-08-08T20:38:00.000 | 25 | 1 | 0 | Print sample set of columns from dataframe in Pandas? | 45,577,630 | 1 | python,python-3.x,pandas | print(df2[['col1', 'col2', 'col3']].head(10)) will select the top 10 rows from columns 'col1', 'col2', and 'col3' from the dataframe without modifying the dataframe. | How do you print (in the terminal) a subset of columns from a pandas dataframe?
I don't want to remove any columns from the dataframe; I just want to see a few columns in the terminal to get an idea of how the data is pulling through.
Right now, I have print(df2.head(10)) which prints the first 10 rows of the dataframe, but how to I choose a few columns to print? Can you choose columns by their indexed number and/or name? | 0 | 1 | 31,744 |
0 | 45,587,277 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-08-09T05:18:00.000 | 1 | 1 | 0 | Possibility of identification of non trained item in Machine Learning | 45,582,182 | 1.2 | python,machine-learning,neural-network,deep-learning,artificial-intelligence | A classifier actually gives you a probability of item belonging to a category, unless you add a final layer or post-processing that translates those probabilities to one and zeros. So, you can define a certain confidence threshold for probabilities and if classifier does not output probabilities above the threshold then call the output undecided.
An "audi" can still have features that make network believe it is tree for example. | A machine learning model has been trained to recognize the name of Animals and Plants. If suppose an automobile name is given, is it possible to say that the given name doesn't belong to the category animals or plants. If possible, kindly mention the methodology or algorithm which achieves this scenario.
E.g. If 'Lion' or 'Coconut Tree' is given the model will be predicting either 'Animals' or 'Trees' category. If suppose, 'Audi' is given, is it possible to say that the given item belongs neither to 'Animals' or 'Plants'. (Note : I have heard that the machine learning model will try to fit into either one the category). | 0 | 1 | 48 |
0 | 45,646,885 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-10T09:54:00.000 | 0 | 2 | 0 | An algorithm to split concatenated names | 45,610,370 | 0 | python,algorithm,machine-learning,nlp | One possible algorithmic solution is to create a longer compositional dictionary representing all possible first_name last_name. Then for any given list of tokens as a name (words separated with space), for each token, find all dictionary entries which have the shortest edit distance to that token | My problem is
I have full names with concatenated names, like "davidrobert jones". I want to split it to be "david robert jones".
I tested the solutions using longest prefix matching algorithm with a names dictionary, but it's not that simple because a name could be written in many ways.
I added phonetic matching algorithm too, but also there are many names that could have same pronunciation and so they're very ambiguous.
What is the best solution to do so? I believe machine learning could have an answer, but I don't know much about machine learning. | 0 | 1 | 180
0 | 72,499,535 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-10T16:02:00.000 | 0 | 2 | 0 | import matplotlib fails with "'module' object not callable" error | 45,618,524 | 0 | python,matplotlib,import,module | Matplotlib is an entire library, so if you are using import matplotlib as plt in your code, it might not work. Use 'import matplotlib.pyplot as plt' instead. | This question may appear similar to previously asked questions, but it is not.
I have a Python script with a single line:
import matplotlib
This fails with the error:
'module' object is not callable
random.py - print a random integer between 1 and 100
(followed by 3 more lines of usage of random.py)
If I start python from the command line, then type
import matplotlib
That works. I can instantiate classes from the module, plot figures and so on.
I am completely lost as to what is going on. Any clue appreciated.
Python version 2.6.6 on 64 bit x86 Linux machine. | 0 | 1 | 1,820 |
0 | 45,633,068 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-08-11T06:46:00.000 | 1 | 2 | 0 | How to select a particular dataframe from a list of dataframes in Python equivalent to R? | 45,628,665 | 0.099668 | r,python-3.x,list,pandas,dataframe | Found a solution to select a particular dataframe/dataframe_column from a list of dataframes.
In R : x = listOfdf$df1$df2$df3
In Python : x = listOfdf['df1']['df2']['df3']
Thank you :) | I have a list of dataframes in R, with which I'm trying to select a particular dataframe as follows:
x = listOfdf$df1$df2$df3
Now, trying hard to find an equivalent way to do so in Python. Like, the syntax on how a particular DataFrame be selected from a list of DataFrames in Pandas Python. | 0 | 1 | 4,311 |
0 | 45,641,584 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-11T17:47:00.000 | 0 | 1 | 0 | workfront api output to csv | 45,641,038 | 0 | python,workfront-api | Parsing nested objects is a challenge with Python. I would suggest opting for multiple CSVs for each set of objects referenced in the collection of another object (tasks contained in a project, for example). Ensure that you have a key that can be used to link entries between the files, such as the tasks' project ID. | I would like to convert each workfront api object (Projects, Isues, Tasks, etc) output to a csv using Python. Is there a recommended way to do this with the nested objects? For example: json to csv, list to csv, etc?
Thanks,
Victor | 0 | 1 | 88 |
0 | 51,193,723 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-08-11T21:44:00.000 | 7 | 3 | 0 | for Imbalanced data dealing with cat boost | 45,644,200 | 1 | python-3.x,catboost | CatBoost also has scale_pos_weight parameter starting from version 0.6.1 | Is there a parameter like "scale_pos_weight" in catboost package as we used to have in the xgboost package in python ? | 0 | 1 | 14,128 |
0 | 45,657,010 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-11T22:02:00.000 | 0 | 2 | 0 | How to implement a neural network model, with fixed correspondence between the input layer and the first hidden layer specified? | 45,644,367 | 0 | python,tensorflow,neural-network,deep-learning | I finally implemented the requirement by forcing certain blocks of the weight matrix corresponding to the first layer to be constant zero. That is, rather than just define w1 = tf.Variables(tf.random_normal([100,10])), I define ten 10 by 1 weight vectors and concatenate them with zeros to form a block diagonal matrix as final w1. | I would like to implement a feed-forward neural network, with the only difference from a usual one that I'd manually control the correspondence between input features and the first hidden layer neurons. For example, in the input layer I have features f1, f2, ..., f100, and in the first hidden layer I have h1, h2, ..., h10. I want the first 10 features f1-f10 fed into h1, and f11-f20 fed into h2, etc.
Graphically, unlike the common deep learning technique dropout which is to prevent over-fitting by randomly omit hidden nodes for a certain layer, here what I want is to statically (fixed) omit certain hidden edges between input and hidden.
I am implementing it using Tensorflow and didn't find a way of specifying this requirement. I also looked into other platforms such as pytourch and theano, but still haven't got an answer. Any idea of implementation using Python would be appreciated! | 0 | 1 | 75 |
0 | 45,644,861 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-08-11T22:58:00.000 | 1 | 1 | 0 | NUMA-friendly initialization of numpy.ndarray | 45,644,840 | 1.2 | python,multithreading,numpy,ctypes | I'd initialize the array with np.empty and then pass the buffer to the C function. That should allow each core to grab whatever pages from the array it needs during the initialization. | Is it possible to initialize a numpy.ndarray in a parallel fashion such that the corresponding pages will be distributed among the NUMA-nodes on the system?
The ndarray will later be passed to a multi-threaded C function which yields much better performance if the passed data is allocated in parallel (adhering to the first-touch policy) | 0 | 1 | 167 |
0 | 45,714,701 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-12T00:53:00.000 | 0 | 1 | 0 | increase maximum recursion depth tensorflow | 45,645,507 | 0 | python-2.7,recursion,tensorflow | I had made a coding error that referenced a tensor in the construction of the same tensor. I don't know if changing recursion depth would solve similar, but unbugged, situations. | Is there a hack so that I can increase the maximum recursion depth allowed? I only need it to be 2-3 times as big.
I have a tensorflow graph with many tensors that are lazily constructed because they depend on other tensors (which may or may not be constructed yet). I can guarantee that this process terminates, and that I will not run out of memory. However, I run into this recursion depth error. | 0 | 1 | 252 |
0 | 45,661,326 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-08-13T13:57:00.000 | 0 | 3 | 0 | How to convert a list of ndarray arrays into list in python | 45,661,124 | 0 | python,list,numpy | I found a code that accomplishes my request:
x = [str(i[0]) for i in the_list] | I have a list of ten 1-dimension-ndarrays, where each on hold a string, and I would like to one long list where every item will be a string (without using ndarrays anymore). How should I implement it? | 0 | 1 | 136 |
0 | 57,688,939 | 0 | 0 | 0 | 0 | 2 | false | 154 | 2017-08-13T15:58:00.000 | 2 | 7 | 0 | Can I run Keras model on gpu? | 45,662,253 | 0.057081 | python,tensorflow,keras,jupyter | Of course. If you are running on the TensorFlow or CNTK backends, your code will run on your GPU devices by default. But with the Theano backend, you can use the following
Theano flags:
"THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py" | I'm running a Keras model, with a submission deadline of 36 hours, if I train my model on the cpu it will take approx 50 hours, is there a way to run Keras on gpu?
I'm using Tensorflow backend and running it on my Jupyter notebook, without anaconda installed. | 0 | 1 | 296,920 |
0 | 45,666,230 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-08-14T00:58:00.000 | 0 | 1 | 0 | How to load large dataset to python and perform matrix operations | 45,666,183 | 1.2 | python,google-cloud-platform,sparse-matrix,bigdata | From python perspective, I'm currently using h5py to handle this big data. And it is fast too. You should check it out. However, I believe that Google might have provide something to handle this type of data. | I have a large dataset with 100 million rows of user online activities. Each row includes a timestamp, user id, and site domain name. I would like to transform the dataset into a matrix of unique domain and user id, in order to perform some matrix operations. The number of unique domains is about 100K and the number of unique user is about 10 million. The matrix is very sparse.
What's the best packages or technologies to use? I realize my question is very broad. I am using python and Google Cloud Platform, so I am hoping the solutions would be on those lines. | 0 | 1 | 48 |
0 | 45,681,068 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-08-14T18:34:00.000 | 2 | 2 | 0 | How does [...,::-1] work in slice notation? | 45,680,851 | 1.2 | python,opencv,slice | The slice operator works with 3 params: start (inclusive), end (exclusive) and step.
If start is not specified it defaults to the beginning of the array; likewise, an unspecified end defaults to just past the last element. If step is not specified the default is 1.
This way, if you do [1, 2, 3, 4][0:2] it will return [1, 2]
If you do [1, 2, 3, 4][1:] it will return [2, 3, 4]
If you do [1, 2, 3, 4][1::2] it will return [2, 4]
For negative indexes, that means iterate backwards so [1, 2, 3, 4][::-1] says, from the starting element until the last element iterate the list backwards 1 element at a time, returning [4, 3, 2, 1].
As the question is not entirely clear I hope this clears up the functioning and make you get to an answer. | OpenCV uses BGR encoding, and img[...,::-1] swaps the red and blue axes of img for when an image needs to be in the more common RGB. I've been using it for several months now but still don't understand how it works. | 0 | 1 | 207 |
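To tie the slice explanation above back to the asker's OpenCV use case, a tiny NumPy-only demonstration of what img[..., ::-1] does:

```python
import numpy as np

# A 1x2 "image" with 3 channels stored in BGR order.
img_bgr = np.array([[[255, 0, 0],      # pure blue pixel  (B, G, R)
                     [0, 0, 255]]])    # pure red pixel   (B, G, R)

# ... means "all leading axes"; ::-1 reverses the last (channel) axis.
img_rgb = img_bgr[..., ::-1]

print(img_rgb[0, 0])   # [  0   0 255]  -> the blue pixel expressed as (R, G, B)
print(img_rgb[0, 1])   # [255   0   0]  -> the red pixel expressed as (R, G, B)
```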
0 | 45,896,146 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-14T22:23:00.000 | 1 | 1 | 0 | DiscountCurve is not aware of evaluation date in QuantLib python | 45,683,889 | 0.197375 | python,quantitative-finance,quantlib | No, unfortunately there's no way around this. For this particular class, you'll have to recreate an instance when your settlement date changes.
Writing a version of the class that takes distances between dates can be done, but it's not currently available. If you write it, please consider creating a pull request for inclusion in the library. | I am using quantlib in python. In order to construct a DiscountCurve object, I need to pass a vector of Dates and corresponding discount factors. The problem is that, when I change the evaluation date to account for settlement days, the curve object is not shifted/adjusted properly and the NPV of the bond does not change as a function of evaluation date.
Is there any way around this? Do I have to construct a different DiscountCurve by shifting the dates whenever I change the number of settlement days?
Ideally, instead of passing a vector of dates, I should be able to pass a vector of distances between consecutive dates but the very first date should be allowed to be the evaluation date. | 0 | 1 | 667 |
0 | 45,702,226 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-08-15T06:13:00.000 | 1 | 1 | 0 | how loss and metric are calculated in cntk | 45,687,374 | 1.2 | python,cntk,resnet | (1) Yes.
(2) 29.37% mean that 29.37% of the classification are correct. Evaluation is on the test data, assuming you are reading both training and test data.
(3) Make sure that the input is the same format, by that I mean do you normalize or subtract the mean in your python, if so then you need to do the same in C#. Can you run the eval first using Python and see what result do you get? | I am new to cntk and python. I have created a python program based on TrainResNet_CIFAR10.py to train 4736 of (64x64x3) images and test 2180 images with 4 classes. After train 160 epochs, I got loss = 0.663 and metric = 29.37%. Finished evaluation metric = 18.94%. When I evaluate the train model based on CNTKLibraryCSEvalExamples.cs to test 2180 images, almost all 2180 are classified as one class (second class). My questions are:
I assume loss is calculated from cross_entropy_with_softmax(z, label_var) and metric is using classification_error(z, label_var). Am I correct and how are they actually determined?
What does mean of metric = 29.37% and evaluation metric = 18.94%? Are they from train and test images, respectively?
what could cause totally wrong evaluate results?
Any help will be greatly appreciated. | 0 | 1 | 401 |
0 | 45,689,273 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-08-22T19:20:00.000 | 0 | 1 | 0 | Modifying and creating xlsx files with Python, specifically formatting single words of a e.g. sentence in a cell | 45,825,401 | 0 | python,excel,pandas,openpyxl,xlsxwriter | I have been working with openpyxl recently. Generally, if one cell has the same style (font/color), you can get the style from cell.font: cell.font.b means bold and cell.font.i means italic; cell.font.color contains the color object.
but if the style is different within one cell, this cannot help. only some minor indication on cell.value | I'm working a lot with Excel xlsx files which I convert using Python 3 into Pandas dataframes, wrangle the data using Pandas and finally write the modified data into xlsx files again.
The files contain also text data which may be formatted. While most modifications (which I have done) have been pretty straight forward, I experience problems when it comes to partly formatted text within a single cell:
Example of cell content: "Medical device whith remote control and a Bluetooth module for communication"
The formatting in the example is bold and italic but may also be a color.
So, I have two questions:
Is there a way of preserving such formatting in xlsx files when importing the file into a Python environment?
Is there a way of creating/modifying such formatting using a specific python library?
So far I have been using Pandas, OpenPyxl, and XlsxWriter but have not succeeded yet. So I shall appreciate your help!
As pointed out below in a comment and the linked question OpenPyxl does not allow for this kind of formatting:
Any other ideas on how to tackle my task? | 0 | 1 | 190 |
0 | 45,702,871 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-15T23:14:00.000 | 0 | 1 | 0 | Bundled executable crashes without warning when rendering plots | 45,702,870 | 0 | python-2.7,numpy,matplotlib,64-bit,py2exe | I tracked the error to the numpy library. Numpy calls numpy.linalg._umath_linalg.inv() and the program abruptly exits with no error message, warning, or traceback.
_umath_linalg is a .pyd file and I discovered that this particular .pyd file doesn't like being called from library.zip, which is where py2exe puts it when using bundle option 2 or 1.
The solution is to exclude numpy in the py2exe setup script and copy the entire package folder into the distribution directory and add that directory to the system path at the top of the main python script. | (I have already resolved this issue but it cost me two weeks of my time and my employer a couple of grand, so I'm sharing it here to save some poor soul.)
My company is converting our application from 32-bit to 64-bit. We create an executable using py2exe, using the bundle=2 option. The executable started crashing as soon as it tried to render a matplotlib plot.
Versions:
python==2.7.13,
matplotlib==2.0.0,
numpy==1.13.1,
py2exe==0.6.10a1 | 0 | 1 | 69 |
0 | 45,743,561 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-16T15:41:00.000 | 0 | 1 | 0 | interpreting a string to do and/or combinations of dataframe masks | 45,717,860 | 0 | python,dataframe,logical-operators,interpretation | After some work I was able to do this with regular expressions and eval().
Using regex, I extracted both the 'template' and the 'criteria'. The 'template' would look something like 1 & 2 | (3 & 4 & (5 or 6)), and the associated 'criteria' would be something like ['criteria1', 'criteria2', ..., 'criteria6']. Then I could manipulate the criteria however I wanted, then substitute the manipulated values back into the template string. Finally, I could just run eval(template) or whatever name of the final string to be executed. | I'm not sure where exactly to start with this one. I know I can do bit-wise logical combinations of masks like so: (mask1 & mask2) | mask3 | (mask4 & (mask5 | mask6))
Now, if I had a user input a string like: '(criteria1 & criteria2) | criteria3 | (criteria4 & (criteria5 | criteria6))', but needed to interpret each criteria through a function to determine and return a mask, how can I retain the parentheses and logic and then combine the masks? | 0 | 1 | 25 |
0 | 45,727,419 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-08-17T01:51:00.000 | 1 | 1 | 0 | How can i obtain Camera Matrix in 3Dreconstruction? | 45,725,440 | 1.2 | python,opencv,matrix,camera,3d-reconstruction | You already have a code for camera calibration and printing a camera matrix in your OpenCV installation. Go to this path if you are on windows -
C:\opencv\sources\samples\python
There you have a file called calibrate | I want to achieve a 3D-reconstruction algorithm with sfm,
But how should i set the parameters of the Camera Matrix?
I have two cameras, and both focal lengths are known.
And how about the Rotation Matrix and Translation Matrix from the world view?
I use Python. | 0 | 1 | 574
0 | 45,788,066 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-08-17T07:54:00.000 | 1 | 2 | 0 | Debugging Tensorflow hang on global variables initialisation | 45,729,494 | 1.2 | python,tensorflow | I managed to solve the problem. The tip from @amo-ej1 to run in a regular file was a step in the correct direction. This uncovered that the tensor flow process was killing itself off with a SIGKILL and returning an error code of 137.
I tried Tensorflow Debugger tfdbg though this did not provide any further details as the problem was the graph did not initialize. I started to think the graph structure was incorrect, so I dumped out the graph structure using:
tf.summary.FileWriter('./logs/traing_graph', graph)
I then used TensorBoard to inspect the resultant summary graph structure data dumped out to that directory, and found that the tensor dimensions of the fully connected layer were wrong, having a width of 15 million!
It turned out that one of the configurable parameters of the graph was incorrect. It was picking up the dimension of the layer-2 tensor shape incorrectly, from incorrectly addressing the previous tf.shape-type property, and that exploded the dimensions of the graph.
There were no OOM error messages in /var/log/system.log so I am unsure why the graph initialisation caused the python tensorflow script process to die.
I fixed the dimensions of the graph and graph initialization worked just fine!
My top tip is visualise your graph with Tensorboard before initialisation and training to do a quick check the resultant graph structure you coded it what you expected it to be. You probably will save yourself a lot of time! :-) | I'm after advice on how to debug what on Tensorflow is struggling with when it hangs.
I have a multi layer CNN which hangs upon global_variables_initializer() is run in the session. I am getting no errors or messages on the console output.
Is there an intelligent way of debugging what Tensorflow is struggling with when it hangs instead of repeatedly commenting out lines of code that makes the graph, and re-running to see where it hangs. Would TensorFlow debugger (tfdbg) help? What options do I have?
Ideally it would be great to just to break current execution and look at some stack or similar to see where the execution is hanging during the init.
I'm currently running Tensorflow 0.12.1 with Python 3 inside a Jupiter notebook. | 0 | 1 | 944 |
0 | 45,747,350 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-17T09:47:00.000 | 1 | 1 | 0 | When running " python mnist_with_summaries.py ", it has occurred the error | 45,731,787 | 0.197375 | python-3.x,tensorflow,mnist | I have solved this problem.
I changed line 204 and line 210 of mnist_with_summaries.py to the local directories, and I created some folders.
Or, without changing the code, create the folders on the local disk of the running environment, matching the paths used in the code.
line 204: create /tmp/tensorflow/mnist/input_data
line 210: create /tmp/tensorflow/mnist/logs/mnist_with_summaries | When running this example:" python mnist_with_summaries.py ", it has
occurred the following error:
detailed errors:
Traceback (most recent call last):
File "mnist_with_summaries.py", line 214, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "mnist_with_summaries.py", line 186, in main
tf.gfile.MakeDirs(FLAGS.log_dir)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 367, in recursive_create_dir
pywrap_tensorflow.RecursivelyCreateDir(compat.as_bytes(dirname), status)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\contextlib.py", line 89, in exit
next(self.gen)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a directory: /tmp\tensorflow
Running environment: Windows 7 + Anaconda3 + Python 3.6 + TensorFlow 1.3.0
Why? Any idea on how to resolve this problem? Thank you! | 0 | 1 | 249
1 | 45,740,933 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-17T12:53:00.000 | 0 | 1 | 0 | Real-time animation in Python | 45,735,688 | 0 | python,matplotlib,graphics,real-time | Write a multithreading script that runs both, your computation script and a script for the images(where each of them can act as one frame for the video). Keep closing the image window each time the next image is computed.
This solution is makeshift but will work | I am trying to visualize data computed by my program (neural network), by showing images while the program is working, creating a video that shows the progress in real time.
It should be pretty basic, but I'm new to Python, and I'm struggling to find the good framework to do this. It seems that with most libraries (Tkinter, graphics, matplotlib, etc), displaying a video stops the computation, and the user has to interact with the GUI (like close the window) to go back to the program. For now I use PIL.show() to display a single image without stopping the program, but it does not seem suited to video, because I cannot replace the displayed image by another, as the window is not handled by the program anymore.
I'm using Linux Mint and Python 2.7.6
So what is the simplest way to do that ? Is there a library that is well-suited ? Or where can I find an example code doing that ? | 0 | 1 | 436 |
0 | 45,737,266 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2017-08-17T13:45:00.000 | 2 | 3 | 0 | Why can you do df.loc(False)['value'] in pandas? | 45,736,874 | 0.132549 | python,pandas,indexing | For any python object, () invokes the __call__ method, whereas [] invokes the __getitem__ method (unless you are setting a value, in which case it invokes __setitem__). In other words () and [] invoke different methods, so why would you expect them to act the same? | I do not see any documentation on pandas explaining the parameter False passed into loc. Can anyone explain how () and [] differ in this case? | 0 | 1 | 971 |
0 | 45,804,171 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-08-17T19:51:00.000 | 2 | 1 | 0 | What is eval_step in Experiment Tensorflow | 45,743,894 | 1.2 | python,tensorflow | If your evaluation is a whole epoch, you're right that it doesn't make much sense. eval_steps is more for the case when you're doing mini-batch evaluation and want to evaluate on multiple mini-batches. | I do wonder what is the parameter 'eval_steps' in learn.experiment in tensorflow ? Why would you run over the evaluation set several times every time you want to evaluate your model ?
Thanks! | 0 | 1 | 629 |
0 | 45,766,815 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-08-18T16:21:00.000 | 6 | 1 | 0 | python 3.3 : scipy.optimize.curve_fit doesn't update the value of point | 45,761,067 | 1.2 | python-3.x,scipy,curve-fitting | It is normal for the objective function to be called initially with very small (roughly 1e-8) changes in parameter values in order to calculate the partial derivatives to decide which way to go in parameter space. If the result of the objective function does not change at all (not even at 1e-8 level) the fit will give up: changing the parameter values did not change the result.
I would first look into whether the result of your objective function is really sensitive to the parameters. If the changes to your result really are not sensitive to a 1e-8 change, but would be sensitive to a larger change, you may want to increase the value of epsfcn passed to scipy.optimize.leastsq. | I am trying to fit a custom function to some data points using curve_fit. I have tried 1 or two free parameters. I have used it other times. Now I am struggling to make a fit, because the algorithm returns always the initial input values, with infinite sigma, no matter what the initial values are. I have also tried to print the internal parameters with which my custom function is called, and I don't understand, my custom function is called just 4 times, the first three with always the same parameters and the last with a relative change of the parameter of 10^-8. this doesn't look right | 0 | 1 | 985 |
0 | 45,789,539 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-08-21T02:39:00.000 | 1 | 1 | 0 | sending data from python to c# | 45,788,655 | 1.2 | c#,python,dll | Your options are as follows:
Use a TCP socket, bind it to a port and listen for data in C# while the Python application sends all data to it. C# has some great classes for sockets, such as System.Net.Sockets.TcpClient and System.Net.Sockets.TcpListener.
Your other option is that if the C# application only needs to be run once it receives information from the python program and then it can die, you could have the python program start the C# one and pass it parameters containing the information that you need transmitted.
By the looks of it your question only asked if there was a way to communicate, these are probably the best two. Please let me know if I can help anymore. | i have a chart pattern recognition program using neural networks written in python .instead of porting the whole code to C# ,i decided it was better to only send certain bits indicating the following :
Buy=1,Sell=-1,Do nothing=0
once they are in C-sharp ,i could relay them to a third party program (multicharts) which would continuously call the C# dll function and receive these values after a certain time interval .
my question is ,is there a way to relay these bit's to C# and pack all of this in a dll ,which gets read by the 3rd party program ?
the whole reason i want to port to C# is because multicharts only reads in dll's and i dont think python has them.
sorry for being naive ,i don't have very good grip on C# . | 0 | 1 | 612 |
0 | 45,809,411 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-21T18:47:00.000 | 0 | 1 | 0 | DataStax Cassandra cassandra.cluster.NoHostAvailable | 45,803,795 | 0 | python,python-3.x,cassandra,cassandra-3.0,datastax-python-driver | Please check if your nodes are really listening by opening up a separate connection from say cqlsh terminal, as you say it is running locally so probably a single node. If that connects, you might want to see how many file handles are open, maybe it is running out of those. We had a similar problem a couple of years back, that was attributed to available file handles. | I am consistently getting this error under normal conditions. I am using the Python Cassandra driver (v3.11) to connect locally with RPC enabled. The issue presents itself after a period of time. My assumption was that it was related to the max number of connections or queries. Any pointers on where to begin troubleshooting would be greatly appreciated. | 0 | 1 | 203
0 | 45,832,556 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2017-08-22T09:16:00.000 | 0 | 1 | 0 | how to change matplotlibrc default directory | 45,813,527 | 1.2 | python,matplotlib,anaconda,windows-subsystem-for-linux | It looks like when anaconda or matplotlib was installed it's created the matplotlibrc file in C:\Users\user\AppData\Local\lxss\home\puter\anaconda3\lib\python3.6\site-packages\matplotlib\mpl-data using the windows environment. This has caused the file not to be recognised in WSL.
To fix this create another matplotlibrc file in bash or whatever shell you're using. In the directory listed above copy the contents of the previously created matplotlibrc file into your new matplotlibrc file. Make sure you don't create this file in the windows environment otherwise it won't be recognised. | I'm currently using anaconda and python 3.6 on windows bash. Every time i want to use matplotlib I have to paste a copy of the matplotlibrc file into my working directory otherwise my code won't run or plot and I get the warning - /home/computer/anaconda3/lib/python3.6/site-packages/matplotlib/init.py:1022: UserWarning: could not find rc file;returning defaults
my matplotlibrc file is located at C:\Users\user\AppData\Local\lxss\home\puter\anaconda3\lib\python3.6\site-packages\matplotlib\mpl-data
I thought to fix this I could edit my .condarc file and set it to look for matplotlibrc in the correct directory. Could anyone tell me how to do this? | 0 | 1 | 950 |
0 | 45,825,713 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-22T18:39:00.000 | 0 | 2 | 1 | Error in installing Tensorflow on Mac | 45,824,737 | 0 | python-2.7,tensorflow | Use Python EasyInstall, is super easy:
sudo easy_install pip | I installed Tensorflow on my macOS Sierra using pip install tensorflow.
Im getting the following error:
OSError: [Errno 1] Operation not
permitted:'/var/folders/zn/l9gmn4613677f6mlrh6prtb00000gn/T/pip-xv3AU6-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'
Is there anyway to resolve this? | 0 | 1 | 109 |
0 | 45,868,428 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2017-08-22T19:20:00.000 | 0 | 1 | 0 | How to replicate the "Show empty cells as" functionality of Excel graphs | 45,825,401 | 0 | python,excel,openpyxl | Asked the dev about it -
There is a dispBlanksAs property of the ChartContainer but this currently isn't accessible to client code.
I looked through the source some more using that answer to guide me. The option is definitely in there, but you'd have to modify source and build locally to get at it.
So no, it's not accessible at this time. | When selecting a data source for a graph in Excel, you can specify how the graph should treat empty cells in your data set (treat as zero, connect with next data point, leave gap).
The option to set this behavior is available in xlsxwriter with chart.show_blanks_as(), but I can't find it in openpyxl. If anyone knows where to find it or can confirm that it's not present, I'd appreciate it. | 0 | 1 | 101 |
0 | 45,834,527 | 0 | 0 | 0 | 0 | 1 | false | 16 | 2017-08-23T08:16:00.000 | 2 | 4 | 0 | Numpy:zero mean data and standardization | 45,834,276 | 0.099668 | python,numpy,image-preprocessing | Key here are the assignment operators. They actually perform some operations on the original variable.
a += c is actually equal to a=a+c.
So indeed a (in your case x) has to be defined beforehand.
Each method takes an array/iterable (x) as input and outputs a value (or array if a multidimensional array was input), which is thus applied in your assignment operations.
The axis parameter means that you apply the mean or std operation over the rows. Hence, you take values for each row in a given column and perform the mean or std.
Axis=1 would take values of each column for a given row.
What you do with both operations is that first you remove the mean so that your column mean is now centered around 0. Then, when you divide by std, you happen to reduce the spread of the data around this zero, and now it should roughly be in a [-1, +1] interval around 0.
So now, each of your column values is centered around zero and standardized.
There are other scaling techniques, such as removing the minimal or maximal value and dividing by the range of values. | I saw in a tutorial (there was no further explanation) that we can process data to zero mean with x -= np.mean(x, axis=0) and normalize data with x /= np.std(x, axis=0). Can anyone elaborate on these two pieces of code? The only thing I got from the documentation is that np.mean calculates the arithmetic mean along a specific axis and np.std does so for the standard deviation. | 0 | 1 | 42,882
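A small numeric check of the two lines discussed above (toy data, two columns):

```python
import numpy as np

x = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

x -= np.mean(x, axis=0)   # subtract each column's mean -> columns centred on 0
x /= np.std(x, axis=0)    # divide by each column's std -> columns have unit spread

print(x.mean(axis=0))     # ~[0. 0.]
print(x.std(axis=0))      # [1. 1.]
```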
0 | 45,845,595 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-08-23T16:55:00.000 | 1 | 2 | 0 | Python Pandas Dataframe sampling | 45,845,425 | 0.099668 | python,pandas,dataframe,sampling,balance | My best guess: 'protect' one random row from each id (create separate dataframe with those rows), then delete from original dataframe until satisfied (including the fact that the classes in the 'protected' dataframe will line up flush with what remains) and concatenate the two dataframes? | I am looking for an elegant way to sample a dataset in a specific way. I found a few solutions, but I was wondering if any of you know a better way.
Here is the task I am looking at:
I want to balance my dataset, so that I have the same amount of instances for class 0 as for class 1, so in the example below we have 5 instances of class 1 and 11 instances of class 0:
id | class
------ | ------
1 | 1
1 | 0
1 | 0
1 | 0
1 | 0
2 | 1
2 | 1
2 | 0
2 | 0
2 | 0
3 | 1
3 | 1
3 | 0
3 | 0
3 | 0
3 | 0
Sofar I have just deleted randomly 6 instances of class 0, but I would like to prevent that all instances of one id could get deleted. I tried doing a stratified "split", with sklearn, but it does not work, because not every id contains more than 1 item. The desired output should look similar to this:
id | class
------ | ------
1 | 1
1 | 0
2 | 1
2 | 1
2 | 0
2 | 0
3 | 1
3 | 1
3 | 0
3 | 0
Any good ideas? | 0 | 1 | 749 |
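A sketch of the "protect one row per id, then downsample the rest" idea from the answer above, using the question's column names; GroupBy.sample needs pandas 1.1 or newer and the random_state values are arbitrary.

```python
import pandas as pd

df = pd.DataFrame({
    "id":    [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3],
    "class": [1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0],
})

# Protect one random class-0 row per id so no id can lose all of its rows.
protected = df[df["class"] == 0].groupby("id").sample(n=1, random_state=0)

n_pos = (df["class"] == 1).sum()                    # number of class-1 rows to match
rest = df[df["class"] == 0].drop(protected.index)   # the remaining class-0 rows

# Top up the protected rows with a random sample of the rest until classes balance.
extra = rest.sample(n=n_pos - len(protected), random_state=0)
balanced = pd.concat([df[df["class"] == 1], protected, extra]).sort_index()
print(balanced["class"].value_counts())             # 1 and 0 now have equal counts
```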
0 | 45,855,698 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-08-24T07:13:00.000 | 1 | 2 | 0 | Reduce dataset to smaller size, keep the gist of information in the dataset | 45,855,209 | 1.2 | python,math,statistics | I believe you are trying to resample your data. Your current sample rate is 1/60 samples per second and you are trying to get to 1/96 samples per second (900 / (24*60*60)). The ratio between the two rates is 5/8.
If you search for "python resample" you will find other similar questions and articles involving numpy and pandas which have built in routines for it.
To do it manually you can first upsample by 5 to get to 7200 samples per day and then downsample by 8 to get down to 900 samples per day.
To upsample you can make a new list five times as long and fill in every fifth element with your existing data. Then you can do, say, linear interpolation to fill in the gaps.
Once you do that you can downsample by simply taking every eighth element. | I'm developing a line chart. The data is being generated by a sensor and is a tuple (timestamp, value). The sensor creates a new datapoint every 60 seconds or so.
Now I want to display it in a graph and my limitation is about 900 points on the graph. In a daily view of that graph, I'd get about 1440 points and that's too much.
I'm looking for a general way how to shrink my dataset of any size to fixed size (in my case 900) while it keeps the timestamp distribution linear.
Thanks | 0 | 1 | 1,337 |
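A sketch of the resampling described in the answer above, done in one step with np.interp (linear interpolation onto 900 evenly spaced timestamps) instead of the explicit upsample-by-5 / downsample-by-8 route; the sensor signal here is synthetic.

```python
import numpy as np

# Fake day of sensor data: one (timestamp, value) pair per minute -> 1440 points.
t = np.arange(0, 24 * 60 * 60, 60, dtype=float)     # seconds since midnight
v = np.sin(t / 3600.0) + 0.05 * np.random.randn(t.size)

target = 900                                         # points the chart can show
t_new = np.linspace(t[0], t[-1], target)             # keeps timestamps linear
v_new = np.interp(t_new, t, v)                       # linear interpolation

print(t.size, "->", t_new.size)                      # 1440 -> 900
```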
0 | 45,891,842 | 1 | 0 | 0 | 0 | 1 | false | 0 | 2017-08-24T09:15:00.000 | 0 | 1 | 0 | Synthetic network graph | 45,857,602 | 0 | python,graph,networkx | If you stick to networkx, you can generate two large complete graphs with nx.complete_graph(), merge them, and then add some edges connecting randomly chosen nodes in each graph. If you want a more realistic example, build dense nx.erdos_renyi_graph()s instead of the complete graphs. | I am trying to validate a system to detect more than 2 clusters in a network graph. For this I need to create a synthetic graph with some clusters. The graph should be very large, more than 100k nodes at least. Is there any system to do this? Any known dataset with more than 2 clusters would also suffice. | 0 | 1 | 182
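A small sketch of the construction suggested in the answer above: two dense Erdős-Rényi blocks joined by a handful of random inter-cluster edges. The sizes and probabilities are toy values; scale them up (and lower the edge probability) for a 100k-node benchmark.

```python
import random
import networkx as nx

n1, n2 = 500, 500                        # toy cluster sizes
g1 = nx.erdos_renyi_graph(n1, 0.1)       # dense block = first cluster
g2 = nx.erdos_renyi_graph(n2, 0.1)       # dense block = second cluster

# Shift the second block's node ids so they do not collide, then merge.
g2 = nx.relabel_nodes(g2, {i: i + n1 for i in g2.nodes()})
g = nx.compose(g1, g2)

# A few random edges between the clusters keep the graph connected but clearly clustered.
for _ in range(20):
    g.add_edge(random.randrange(n1), n1 + random.randrange(n2))

print(g.number_of_nodes(), g.number_of_edges())
```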