GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 52,786,576 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-08T17:02:00.000 | 0 | 1 | 0 | Keras LSTM Input Dimension understanding each other | 52,706,996 | 0 | python,machine-learning,keras,lstm,rnn | First: Regressors will simply replicate an input feature if that feature gives direct intuition about the value being predicted, because that keeps the error minimized without actually learning to predict anything. Try to focus on binary or multiclass classification instead: whether the closing price goes up/down, or by how much.
Second: Always engineer the raw features to give more explicit patterns to the ML algorithm. Think of inputs such as Volume(t) - Volume(t-1), close(t)^2 - close(t-1)^2, or technical indicators (RSI, CCI, OBV etc.). Create your own features. You can use the pyti library for technical indicators. | but I have been trying to play around with it for a while. I've seen a lot of guides on how Keras is used to build LSTM models and how people feed in the inputs and get expected outputs. But what I have never seen yet is, for example with stock data, how we can make the LSTM model understand patterns between different dimensions, say the close price is much higher than normal because volume is low.
The point of this is that I want to do a test with stock prediction, but make it so that each dimension is not only reliant on previous time steps, but also on the other dimensions it has.
Sorry if I am not asking the question correctly; please ask more questions if I am not explaining it clearly. | 0 | 1 | 51 |
0 | 52,719,901 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2018-10-09T11:15:00.000 | 3 | 3 | 1 | Deploying python with docker, images too big | 52,719,729 | 1.2 | python,amazon-web-services,docker | First see if there are easy wins to shrink the image, like using Alpine Linux and being very careful about what gets installed with the OS package manager, and ensuring you only allow installing dependencies or recommended items when truly required, and that you clean up and delete artifacts like package lists, big things you may not need like Java, etc.
The base Anaconda/Ubuntu image is ~ 3.5GB in size, so it's not crazy that with a lot of extra installations of heavy third-party packages, you could get up to 10GB. In production image processing applications, I routinely worked with Docker images in the range of 3GB to 6GB, and those sizes were after we had heavily optimized the container.
To your question about splitting dependencies, you should provide each different application with its own package definition, basically a setup.py script and some other details, including dependencies listed in some mix of requirements.txt for pip and/or environment.yaml for conda.
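As a rough illustration of such a per-project package definition (the project name and version pins below are placeholders, not values from your repo), a minimal setup.py might look like:

# setup.py for one application -- a sketch; the name and the pins are hypothetical
from setuptools import setup, find_packages

setup(
    name="project_a",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # list only what THIS application needs, not the whole shared environment
        "numpy>=1.15",
        "pandas>=0.23",
    ],
)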
If you have Project A in some folder / repo and Project B in another, you want people to easily be able to do something like pip install <GitHub URL to a version tag of Project A> or conda env create -f ProjectB_environment.yml or something, and voila, that application is installed.
Then when you deploy a specific application, have some CI tool like Jenkins build the container for that application using a FROM line to start from your thin Alpine / whatever container, and only perform conda install or pip install for the dependency file for that project, and not all the others.
This also has the benefit that multiple different projects can declare different version dependencies even among the same set of libraries. Maybe Project A is ready to upgrade to the latest and greatest pandas version, but Project B needs some refactoring before the team wants to test that upgrade. This way, when CI builds the container for Project B, it will have a Python dependency file with one set of versions, while in Project A's folder or repo of source code, it might have something different. | We've built a large python repo that uses lots of libraries (numpy, scipy, tensor flow, ...) And have managed these dependencies through a conda environment. Basically we have lots of developers contributing and anytime someone needs a new library for something they are working on they 'conda install' it.
Fast forward to today and now we need to deploy some applications that use our repo. We are deploying using docker, but are finding that these images are really large and causing some issues, e.g. 10+ GB. However each individual application only uses a subset of all the dependencies in the environment.yml.
Is there some easy strategy for dealing with this problem? In a sense I need to know the dependencies for each application, but I'm not sure how to do this in an automated way.
Any help here would be great. I'm new to this whole AWS, Docker, and python deployment thing... We're really a bunch of engineers and scientists who need to scale up our software. We have something that works, it just seems like there has to be a better way . | 0 | 1 | 2,376 |
0 | 62,167,763 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2018-10-09T12:58:00.000 | 1 | 3 | 0 | Clustering images using unsupervised Machine Learning | 52,721,662 | 0.066568 | python,computer-vision,cluster-analysis,k-means,unsupervised-learning | I have implemented Unsupervised Clustering based on Image Similarity using Agglomerative Hierarchical Clustering.
My use case had images of People, so I had extracted the Face Embedding (aka Feature) Vector from each image. I have used dlib for face embedding and so each feature vector was 128d.
In general, the feature vector of each image can be extracted. A pre-trained VGG or CNN network, with its final classification layer removed; can be used for feature extraction.
A dictionary with KEY as the IMAGE_FILENAME and VALUE as the FEATURE_VECTOR can be created for all the images in the folder. This makes the correspondence between each filename and its feature vector easier to track.
Then create a single feature matrix, say X, which comprises the individual feature vectors of each image in the folder/group that needs to be clustered.
In my use case, X had the dimensions (NUMBER OF IMAGES IN THE FOLDER, 128), i.e. (number of images, size of each feature vector). For instance, shape of X: (50, 128)
This feature vector can then be used to fit an Agglomerative Hierarchical Cluster. One needs to fine tune the distance threshold parameter empirically.
Finally, we can write a code to identify which IMAGE_FILENAME belongs to which cluster.
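A minimal sketch of those last steps, assuming scikit-learn is available and that the filename-to-vector dictionary described above (here called features) has already been built; the distance_threshold value is a placeholder to be tuned:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

filenames = list(features.keys())
X = np.array([features[name] for name in filenames])      # shape: (n_images, 128)

# n_clusters=None lets the distance threshold decide how many clusters emerge
clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5)
labels = clusterer.fit_predict(X)

for name, label in zip(filenames, labels):
    print(name, "-> cluster", label)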
In my case, there were about 50 images per folder so this was a manageable solution. This approach was able to group the images of a single person into a single cluster. For example, 15 images of PERSON1 belong to CLUSTER 0, 10 images of PERSON2 belong to CLUSTER 2 and so on… | I have a database of images that contains identity cards, bills and passports.
I want to classify these images into different groups (i.e identity cards, bills and passports).
As I read about that, one of the ways to do this task is clustering (since it is going to be unsupervised).
The idea for me is like this: the clustering will be based on the similarity between images (i.e images that have similar features will be grouped together).
I know also that this process can be done by using k-means.
So the problem for me is about features and using images with K-means.
If anyone has done this before, or has a clue about it, please would you recommend some links to start with or suggest any features that can be helpful. | 0 | 1 | 4,082 |
0 | 52,735,568 | 0 | 0 | 0 | 0 | 3 | true | 7 | 2018-10-09T12:58:00.000 | 3 | 3 | 0 | Clustering images using unsupervised Machine Learning | 52,721,662 | 1.2 | python,computer-vision,cluster-analysis,k-means,unsupervised-learning | Label a few examples, and use classification.
Clustering is as likely to give you the clusters "images with a blueish tint", "grayscale scans" and "warm color temperature". That is a quite reasonable way to cluster such images.
Furthermore, k-means is very sensitive to outliers. And you probably have some in there.
Since you want your clusters to correspond to certain human concepts, classification is what you need to use. | I have a database of images that contains identity cards, bills and passports.
I want to classify these images into different groups (i.e identity cards, bills and passports).
As I read about that, one of the ways to do this task is clustering (since it is going to be unsupervised).
The idea for me is like this: the clustering will be based on the similarity between images (i.e images that have similar features will be grouped together).
I know also that this process can be done by using k-means.
So the problem for me is about features and using images with K-means.
If anyone has done this before, or has a clue about it, please would you recommend some links to start with or suggest any features that can be helpful. | 0 | 1 | 4,082 |
0 | 52,721,794 | 0 | 0 | 0 | 0 | 3 | false | 7 | 2018-10-09T12:58:00.000 | 5 | 3 | 0 | Clustering images using unsupervised Machine Learning | 52,721,662 | 0.321513 | python,computer-vision,cluster-analysis,k-means,unsupervised-learning | The simplest way to get good results will be to break the problem down into two parts:
Getting the features from the images: Using the raw pixels as features will give you poor results. Pass the images through a pre-trained CNN (you can get several of those online). Then use the last CNN layer (just before the fully connected layer) as the image features.
Clustering of features: Having got the rich features for each image, you can do clustering on these (like K-means).
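A rough sketch of both steps, assuming Keras (TensorFlow backend) and scikit-learn are installed; the image folder path and the cluster count of 3 (id cards, bills, passports) are assumptions for illustration:

import numpy as np
from glob import glob
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.cluster import KMeans

# 1. feature extraction: VGG16 with the classification head removed
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x).flatten()

paths = glob("images/*.jpg")                      # placeholder folder
X = np.array([extract(p) for p in paths])

# 2. clustering: one cluster per expected document type
labels = KMeans(n_clusters=3).fit_predict(X)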
I would recommend implementing(using already implemented) 1, 2 in Keras and Sklearn respectively. | I have a database of images that contains identity cards, bills and passports.
I want to classify these images into different groups (i.e identity cards, bills and passports).
As I read about that, one of the ways to do this task is clustering (since it is going to be unsupervised).
The idea for me is like this: the clustering will be based on the similarity between images (i.e images that have similar features will be grouped together).
I know also that this process can be done by using k-means.
So the problem for me is about features and using images with K-means.
If anyone has done this before, or has a clue about it, please would you recommend some links to start with or suggest any features that can be helpful. | 0 | 1 | 4,082 |
0 | 52,728,282 | 0 | 0 | 0 | 0 | 2 | true | 4 | 2018-10-09T13:35:00.000 | 1 | 2 | 0 | What is the appropriate distance metric when clustering paragraph/doc2vec vectors? | 52,722,423 | 1.2 | python,cluster-analysis,distance,doc2vec,hdbscan | I believe in practice cosine-distance is used, despite the fact that there are corner-cases where it's not a proper metric.
You mention that "elements of the resulting docvecs are all in the range [-1,1]". That isn't usually guaranteed to be the case – though it would be if you've already unit-normalized all the raw doc-vectors.
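If you do go the unit-normalization route, a small sketch of it (assuming the hdbscan package is installed; the random array and min_cluster_size are placeholders standing in for your gensim doc-vectors):

import numpy as np
import hdbscan
from sklearn.preprocessing import normalize

doc_vecs = np.random.rand(100, 300)        # stand-in for your doc2vec vectors
unit_vecs = normalize(doc_vecs)            # L2-normalize each vector to length 1

clusterer = hdbscan.HDBSCAN(metric="euclidean", min_cluster_size=2)
labels = clusterer.fit_predict(unit_vecs)  # label -1 marks noise points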
If you have done that unit-normalization, or want to, then after such normalization euclidean-distance will always give the same ranked-order of nearest-neighbors as cosine-distance. The absolute values, and relative proportions between them, will vary a little – but all "X is closer to Y than Z" tests will be identical to those based on cosine-distance. So clustering quality should be nearly identical to using cosine-distance directly. | My intent is to cluster document vectors from doc2vec using HDBSCAN. I want to find tiny clusters where there are semantical and textual duplicates.
To do this I am using gensim to generate document vectors. The elements of the resulting docvecs are all in the range [-1,1].
To compare two documents I want to compare the angular similarity. I do this by calculating the cosine similarity of the vectors, which works fine.
But, to cluster the documents HDBSCAN requires a distance matrix, and not a similarity matrix. The native conversion from cosine similarity to cosine distance in sklearn is 1-similarity. However, it is my understanding that using this formula can break the triangle inequality preventing it from being a true distance metric. When searching and looking at other people's code for similar tasks, it seems that most people seem to be using sklearn.metrics.pairwise.pairwise_distances(data, metric='cosine') which is defines cosine distance as 1-similarity anyway. It looks like it provides appropriate results.
I am wondering if this is correct, or if I should use angular distance instead, calculated as np.arccos(cosine similarity)/pi. I have also seen people use Euclidean distance on l2-normalized document vectors; this seems to be equivalent to cosine similarity.
Please let me know what is the most appropriate method for calculating distance between document vectors for clustering :) | 0 | 1 | 1,238 |
0 | 52,735,502 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2018-10-09T13:35:00.000 | 1 | 2 | 0 | What is the appropriate distance metric when clustering paragraph/doc2vec vectors? | 52,722,423 | 0.099668 | python,cluster-analysis,distance,doc2vec,hdbscan | The proper similarity metric is the dot product, not cosine.
Word2vec etc. are trained using the dot product, not normalized by the vector length. And you should use exactly what was trained.
People use the cosine all the time because it worked well for bag of words. The choice is not based on a proper theoretical analysis for all I know.
HDBSCAN does not require a metric. The 1-sim transformation assumes that x is bounded by 1, so that won't reliably work.
I suggest to try the following approaches:
use negative distances. That may simply work. I.e., d(x,y)=-(x dot y)
use max-sim transformation. Once you have the dot product matrix it is easy to get the maximum value.
implement HDBSCAN* with a similarity rather than a metric | My intent is to cluster document vectors from doc2vec using HDBSCAN. I want to find tiny clusters where there are semantical and textual duplicates.
To do this I am using gensim to generate document vectors. The elements of the resulting docvecs are all in the range [-1,1].
To compare two documents I want to compare the angular similarity. I do this by calculating the cosine similarity of the vectors, which works fine.
But, to cluster the documents HDBSCAN requires a distance matrix, and not a similarity matrix. The native conversion from cosine similarity to cosine distance in sklearn is 1-similarity. However, it is my understanding that using this formula can break the triangle inequality preventing it from being a true distance metric. When searching and looking at other people's code for similar tasks, it seems that most people seem to be using sklearn.metrics.pairwise.pairwise_distances(data, metric='cosine') which is defines cosine distance as 1-similarity anyway. It looks like it provides appropriate results.
I am wondering if this is correct, or if I should use angular distance instead, calculated as np.arccos(cosine similarity)/pi. I have also seen people use Euclidean distance on l2-normalized document vectors; this seems to be equivalent to cosine similarity.
Please let me know what is the most appropriate method for calculating distance between document vectors for clustering :) | 0 | 1 | 1,238 |
0 | 53,180,151 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-09T14:30:00.000 | 0 | 1 | 0 | rasterio - load multi-dimensional data | 52,723,483 | 0 | python-3.x,rasterio | rasterio is really not the tool of choice for multi-dimensional netCDF data. It excels at handling 3D (band, y, x) data where band is some relatively short, unlabeled axis.
Look into xarray instead, which is built around the netCDF model and supports labeled axes and many dimensions, plus lazy loading, out-of-memory computation, plotting, indexing, ... | I just discovered rasterio for easy raster handling in Python. I am working with multi-dimensional climate data (4D and 5D). I was successful to open and read my 4D-NetCDF file with rasterio (lat: 180, lon: 361, time: 6, number: 51). However, the rasterio dataset object shows me three dimensions (180, 361, 306), whereby dimension 3 and 4 were combined. Can rasterio dataset objects only store 3 dimensions?
If yes, how does rasterio combine dimensions 3 and 4, to know what layer of the 306 resembles the original?
Thanks. | 0 | 1 | 162 |
0 | 52,730,991 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-09T20:44:00.000 | 0 | 1 | 0 | Right way to serialize a Random Forest Regression File | 52,729,048 | 0 | python,machine-learning,pickle,random-forest,data-science-experience | The aim of saving the complete model is to be able to modify the model in the future. If you are not planning to modify your model, you can just save the weights and use them for prediction. This will save a huge amount of space. | I am working on building a Random Forest Regression model for predicting ETA. I am saving the model in pickle format using the pickle package. I have also used joblib to save the model. But the size of the file is really large (more than 100 GB). I would like to ask the data science experts whether this is the correct format to save the model or is there any other, more efficient method to do so? Any insights on this will be appreciated. | 0 | 1 | 234 |
0 | 52,729,697 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-09T21:27:00.000 | 0 | 3 | 0 | Python tool to find meaningful pairs of words in a document | 52,729,565 | 0 | python,nltk,natural-language-processing | Interesting problem to play with, assuming there's not already a lexicon of meaningful compound words that you could leverage. And I'd love to see "computer science" as a trending topic.
Let's take the approach that we know nothing about compound words in English, whether "stop sign" is as meaningfully distinct from "stop" and "sign" as
"does better" is from "does" and "better"
Breaking it down, you want to build a process that:
Identifies co-located pairs
Drops any that are clearly not related as compound words (ie parts of speech, proper names or punctuation)
Saves candidate pairs
Analyzes candidate pairs for frequency
Teaches your system to look for the most valuable candidate pairs
Is that an accurate description?
If so, I think the tool you ask for would be in (4) or (5). For 4), consider the Associative Rule in Python's Orange library as a start. You could also use TF-IDF from scikit-learn. For 5) you could expose the output from 4) as a list, set or dictionary of strings with counts. | I'm writing a program that gathers tweets from Twitter, and evaluates the text to find trending topics. I'm planning on using NLTK to stem the terms and do some other operations on the data.
What I need is a tool that can determine if two adjacent words in a tweet should be treated as a single term. For example, if "fake news" is trending on Twitter, I don't want to treat those two words as different. Another example, if everyone is tweeting about 'Computer Science', it wouldn't make sense to treat computer and science as two different terms since they refer to the same topic. Does a tool exist that can find such terms? | 0 | 1 | 681 |
0 | 52,904,530 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-10T10:39:00.000 | -1 | 1 | 0 | TensorFlow: Correct way of using steps in Stochastic Gradient Descent | 52,738,335 | -0.197375 | python,tensorflow,machine-learning | step is the literal meaning: means you refresh the parameters in your batch size; so for linear_regessor.train, it will train 100 times for this batch_size 1.
epoch means to refresh the whole data, which is 17,000 in your set. | I am currently using TensorFlow tutorial's first_steps_with_tensor_flow.ipynb notebook to learn TF for implementing ML models. In the notebook, they have used Stochastic Gradient Descent (SGD) to optimize the loss function. Below is the snippet of the my_input_function:
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
Here, it can be seen that the batch_size is 1. The notebook uses a housing data set containing 17000 labeled examples for training. This means for SGD, I will be having 17000 batches.
LRmodel = linear_regressor.train(input_fn = lambda:my_input_fn(my_feature,
targets), steps=100)
I have three questions -
Why is steps=100 in linear_regressor.train method above? Since we have 17000 batches and steps in ML means the count for evaluating one batch, in linear_regressor.train method steps = 17000 should be initialized, right?
Is number of batches equal to the number of steps/iterations in ML?
With my 17000 examples, if I keep my batch_size=100, steps=500, and num_epochs=5, what does this initialization mean and how does it correlate to 170 batches? | 0 | 1 | 185 |
0 | 52,746,594 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-10-10T18:20:00.000 | 0 | 2 | 0 | Correct usage of itertools in python 3 | 52,746,519 | 0 | python,python-3.x | Provided that the zip object is created correctly, you can either do list(zip_object) or [*zip_object] to get the list. | I am trying to expand the following list
[(1, [('a', '12'), ('b', '64'), ('c', '36'), ('d', '48')]), (2, [('a', '13'), ('b', '26'), ('c', '39'), ('d', '52')])]
to
[(1,a,12),(1,b,24),(1,c,36),(1,d,48),(2,a,13),(2,b,26),(2,c,39),(2,d,52)]
I used zip(itertools.cycle()) in python 3, but instead get a zip object reference. Is there any other way I can do it? This worked for python 2 | 0 | 1 | 66 |
0 | 52,752,228 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-11T01:55:00.000 | 0 | 1 | 0 | Different exceptions happened when running Keras and scikit-learn | 52,751,040 | 1.2 | python,tensorflow,machine-learning,scikit-learn,keras | Found it.
It should be:
clf = KerasClassifier(build_fn=get_model)
Instead of:
clf = KerasClassifier(build_fn=get_model()) | I try to pass a Keras model (as a function) to KerasClassifier wrapper from scikit_learn, and then use GridSearchCV to create some settings, and finally fit the train and test datasets (both are numpy array)
I then, with the same python script, got different exceptions, some of them are:
_1.
Traceback (most recent call last): File "mnist_flat_imac.py", line
63, in
grid_result = validator.fit(train_images, train_labels) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/model_selection/_search.py",
line 626, in fit
base_estimator = clone(self.estimator) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/base.py",
line 62, in clone
new_object_params[name] = clone(param, safe=False) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/base.py",
line 53, in clone
snipped here
in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 174,
in deepcopy
rv = reductor(4) TypeError: can't pickle SwigPyObject objects Exception ignored in: > Traceback (most recent call last): File
"/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/tensorflow/python/framework/c_api_util.py",
line 52, in __del__
c_api.TF_DeleteGraph(self.graph) AttributeError: 'ScopedTFGraph' object has no attribute 'graph'
_2.
Traceback (most recent call last): File "mnist_flat_imac.py", line
63, in
grid_result = validator.fit(train_images, train_labels) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/model_selection/_search.py",
line 626, in fit
base_estimator = clone(self.estimator) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/base.py",
line 62, in clone
new_object_params[name] = clone(param, safe=False) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/base.py",
line 53, in clone
return copy.deepcopy(estimator) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 182,
in deepcopy
y = _reconstruct(x, rv, 1, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 297,
in _reconstruct
snipped here
in deepcopy
y = _reconstruct(x, rv, 1, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 297,
in _reconstruct
state = deepcopy(state, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 155,
in deepcopy
y = copier(x, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 243,
in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 174,
in deepcopy
rv = reductor(4) TypeError: can't pickle SwigPyObject objects
_3.
Traceback (most recent call last): File "mnist_flat_imac.py", line
63, in
grid_result = validator.fit(train_images, train_labels) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/model_selection/_search.py",
line 626, in fit
base_estimator = clone(self.estimator) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/base.py",
line 62, in clone
new_object_params[name] = clone(param, safe=False) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/site-packages/sklearn/base.py",
line 53, in clone
snipped here
in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 182,
in deepcopy
y = _reconstruct(x, rv, 1, memo) File "/home/longnv/PYTHON_ENV/DataScience/lib/python3.5/copy.py", line 306,
in _reconstruct
y.dict.update(state) AttributeError: 'NoneType' object has no attribute 'update'
Why did it output different errors with the same python script?
And how can I fix this?
Thank you so much!
P.S.
python: 3.5
tensorflow: 1.10.1
pandas: 0.23.4
Ubuntu: 4.4.0-124-generic | 0 | 1 | 196 |
0 | 52,772,728 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-11T13:04:00.000 | 0 | 1 | 0 | Finding correlation of two data frames using python | 52,760,769 | 0 | python,correlation | Bin them both to vectors of equal length, with bin or window sizes dependent on the shapes of the input frames, then calculate correlation on the vectors. | I am working on a data set and after performing the bucketing operation over two columns, I ended up with two buckets that have maximum number of data points.
For those two buckets, I have created two separate data frames, which are of different shapes (the number of columns is the same, the number of rows is different), so as to compare them.
I need to know which transformation I can use to make a correlation of the two data frames possible. How can I do that?
Any other suggestions for comparing data frames are appreciated. | 0 | 1 | 48 |
0 | 52,767,061 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-11T18:40:00.000 | 2 | 1 | 0 | How to get a file from the files on my computer to JupyterLab? | 52,766,928 | 0.379949 | python,jupyter-notebook | Every running program, including JupyterLab, has a "working directory" which is where it thinks it is on your computer's file system. What exactly this directory is usually depends on how you launched it (e.g., when you run a program from terminal, its working directory is initially the folder your terminal was in when you ran the command, but it's possible for a program to change its own working directory later).
Your file path indicates you're on Linux, so I'd suggest opening a terminal in your JupyterLab and running pwd to have it print out its current directory. (You can also run !pwd in any open notebook if that's easier.) You should then copy your CSV file to that directory.
If you do that, then from your Python code, you can just open the file locally, like open('Roger-Federer.csv') or pandas.read_csv('Roger-Federer.csv'). You don't have to move the file to open it from Python, though, you can just give it the entire file path, like open('/home/emily/Downloads/Roger-Federer.csv'), and that'll work just fine too. | I am very new to this and struggling with the basics. I have a csv file /home/emily/Downloads/Roger-Federer.csv The textbook says that I need to "extract the file and download it to my current directory" on JupyterLab (I am using Python). What does this mean? How do I do this? Thank you | 0 | 1 | 628 |
0 | 58,498,554 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-10-11T18:47:00.000 | 0 | 1 | 0 | Importing tensorflow not working when upgraded | 52,767,007 | 0 | python,python-3.x,tensorflow | you can uninstall the tensorflow and re-install it. | Tensorflow was working fine when I had 1.4 but when I upgraded it, it stopped working.
The version that I installed is 1.11 with CUDA 9 and cuDNN 7.
Traceback (most recent call last): File
"C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 58, in
from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py",
line 28, in
_pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py",
line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File
"C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\imp.py",
line 243, in load_module
return load_dynamic(name, filename, file) File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\imp.py",
line 343, in load_dynamic
return _load(spec) ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File "", line 1, in
File
"C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow__init__.py",
line 22, in
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File
"C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python__init__.py",
line 49, in
from tensorflow.python import pywrap_tensorflow File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 74, in
raise ImportError(msg) ImportError: Traceback (most recent call last): File
"C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 58, in
from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py",
line 28, in
_pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py",
line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File
"C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\imp.py",
line 243, in load_module
return load_dynamic(name, filename, file) File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\imp.py",
line 343, in load_dynamic
return _load(spec) ImportError: DLL load failed: The specified module could not be found. | 0 | 1 | 358 |
0 | 52,773,168 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-12T05:36:00.000 | 0 | 1 | 0 | How to add a none option in scikit learn predict | 52,772,906 | 0 | python,python-3.x,machine-learning,scikit-learn,leap-motion | Instead of treating this as a classification problem with 4 levels, treat it as a classification problem with 5 levels. 4 levels would correspond to one of the original four and the 5th level can be used for all others. | I have the next problem, I'm trying to classify one of four hand position, I'm using SVM and this positions will be used to make commands in my program, the predict function works fine, but for example if I made some other gestures (none of the 4 that I use for commands) the predict function try to classify this gesture in one of the original four, I want to know if is possible to say "this gesture is none of the four that I know".
The final function of this is similar to the behaivor of some commands of the kinect in Xbox for example, you can move your hands but the Xbox will only react to a especifict gesture.
I'm using Python, pandas, scikit-learn and leap motion, I training my network with data collected from four gestures.
Thanks in advance. | 0 | 1 | 87 |
0 | 52,773,601 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-10-12T06:25:00.000 | 1 | 3 | 0 | Reading and Writing into CSV file at the same time | 52,773,491 | 0.066568 | python,python-3.x | You can do open("data.csv", "rw"), this allows you to read and write at the same time. | I wanted to read some input from the csv file and then modify the input and replace it with the new value. For this purpose, I first read the value but then I'm stuck at this point as I want to modify all the values present in the file.
So is it possible to open the file in r mode in one for loop and then immediately in w mode in another loop to enter the modified data?
If there is a simpler way to do this please help me out
Thank you. | 0 | 1 | 9,000 |
0 | 52,843,229 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-10-13T01:01:00.000 | 0 | 2 | 0 | Early Stopping with a Cross-Validated Metric in Keras | 52,788,635 | 0 | python,keras,prediction,cross-validation | I imagine that using a callback as suggested by @VincentPakson would be cleaner and more efficient, but the level of programming required is beyond me. I was able to create a for loop to do what I wanted by:
Training a model for a single epoch and saving it using model.save().
Loading the saved model and training the model for a single epoch for each of the 10 folds (i.e. 10 models), then averaging the 10 validation set errors.
Loading the saved model, training it for a single epoch using all of the training data, and then overwriting the saved model with this model.
Repeating steps 1-3 until the estimate from step 2 stops improving for a given patience (a rough sketch of this loop is below).
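A rough sketch of that loop, assuming build_model() is a helper that returns a freshly compiled Keras model (compiled without extra metrics, so evaluate() returns the loss) and that X, y are NumPy arrays:

import numpy as np
from sklearn.model_selection import KFold
from keras.models import load_model

model = build_model()
model.fit(X, y, epochs=1, verbose=0)           # step 1: train one epoch and snapshot it
model.save("snapshot.h5")

best_cv, wait, patience = np.inf, 0, 5
while wait < patience:
    # step 2: 10-fold CV estimate of the metric, one extra epoch per fold
    scores = []
    for train_idx, val_idx in KFold(n_splits=10, shuffle=True).split(X):
        m = load_model("snapshot.h5")
        m.fit(X[train_idx], y[train_idx], epochs=1, verbose=0)
        scores.append(m.evaluate(X[val_idx], y[val_idx], verbose=0))
    cv_loss = np.mean(scores)

    # step 3: advance the real model by one epoch on all the data, overwrite the snapshot
    m = load_model("snapshot.h5")
    m.fit(X, y, epochs=1, verbose=0)
    m.save("snapshot.h5")

    # step 4: early stopping on the cross-validated estimate
    if cv_loss < best_cv:
        best_cv, wait = cv_loss, 0
    else:
        wait += 1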
I'd love a better answer but this seems to work. Slowly. | Is there a way in Keras to cross-validate the early stopping metric being monitored EarlyStopping(monitor = 'val_acc', patience = 5)? Before allowing training to proceed to the next epoch, could the model be cross-validated to get a more robust estimate of the test error? What I have found is that the early stopping metric, say the accuracy on a validation set, can suffer from high variance. Early-stopped models often do not perform nearly as well on unseen data, and I suspect this is because of the high variance associated with the validation set approach.
To minimize the variance in the early stopping metric, I would like to k-fold cross-validate the early stopping metric as the model trains from epoch i to epoch i + 1. I would like to take the model at epoch i, divide the training data into 10 parts, learn on 9 parts, estimate the error on the remaining part, repeat so that all 10 parts have had a chance to be the validation set, and then proceed with training to epoch i + 1 with the full training data as usual. The average of the 10 error estimates will hopefully be a more robust metric that can be used for early stopping.
I have tried to write a custom metric function that includes k-fold cross-validation but I can't get it to work. Is there a way to cross-validate the early stopping metric being monitored, perhaps through a custom function inside the Keras model or a loop outside the Keras model?
Thanks!! | 0 | 1 | 960 |
0 | 52,846,778 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-13T03:42:00.000 | 0 | 1 | 0 | How to set size of hidden state vector in LSTM, keras? | 52,789,325 | 0 | python,keras,lstm | If by vector size you mean the number of nodes in a layer, then yes you are doing it right. The output dimensionality of your layer
is the same as the number of nodes. The same thing applies to convolutional layers: the number of filters and the output dimensionality along the last axis (the number of channels) are the same. | I am currently setting the vector size by using model.add(LSTM(50)), i.e. setting the value of the units attribute, but I highly doubt its correctness (in the Keras documentation, units is explained as the dimensionality of the output space). Can anyone help me here?
0 | 64,389,564 | 0 | 0 | 0 | 0 | 5 | false | 4 | 2018-10-13T09:27:00.000 | 0 | 5 | 0 | 'pandas' has no attribute 'read_csv'" | 52,791,477 | 0 | python,pandas,csv | Since I had this problem just now, and on the whole internet no answer covered my issue:
You may have pandas installed (Like I did), but in the wrong environment. Especially when you just start out in Python and use an IDE like PyCharm, you don't realise that you may create a new Environment (Called "pythonProject", "pythonProject1", pythonProject2",... by default), and installing a package does not mean it is installed in all Environments.
If you have AnacondaNavigator installed, you can easily look up which Environment has which packages.
This is a very cruel oversight, as PyCharm doesn't warn you about the misplaced package; it just checks whether it exists somewhere, so you don't get the error at import time. | I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error
AttributeError("module 'pandas' has no attribute 'read_csv'")
Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help.
I know where both my cvs + python files are located.
My script is not called cvs.py or anything such.
My code can literally just be `'import pandas as pd' and I get the
no attribute error.
I would appreciate if someone could spare the time + assist me in solving this problem. | 0 | 1 | 5,427 |
0 | 53,273,547 | 0 | 0 | 0 | 0 | 5 | false | 4 | 2018-10-13T09:27:00.000 | 2 | 5 | 0 | 'pandas' has no attribute 'read_csv'" | 52,791,477 | 0.07983 | python,pandas,csv | I've just been spinning my wheels on the same problem.
TL/DR: try renaming your python files
I think there must be a number of other naming conflicts besides some of the conceivably obvious ones like csv.py and pandas.py mentioned in other posts on the topic.
In my case, I had a single file called inspect.py. Running on the command line gave me the error, as did running import pandas from within a python3 shell, but only when launching the shell from the same directory as inspect.py. I renamed inspect.py, and now it works just fine!! | I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error
AttributeError("module 'pandas' has no attribute 'read_csv'")
Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help.
I know where both my cvs + python files are located.
My script is not called cvs.py or anything such.
My code can literally just be `'import pandas as pd' and I get the
no attribute error.
I would appreciate if someone could spare the time + assist me in solving this problem. | 0 | 1 | 5,427 |
0 | 68,730,346 | 0 | 0 | 0 | 0 | 5 | false | 4 | 2018-10-13T09:27:00.000 | 0 | 5 | 0 | 'pandas' has no attribute 'read_csv'" | 52,791,477 | 0 | python,pandas,csv | I have faced the same problem when I update my python packages using conda update --all.
The error:
AttributeError: module 'pandas' has no attribute 'read_csv'
I believe it is a pandas path problem.
The solution:
print(pd)
Run it to see where your pandas comes from. I was getting
<module 'pandas' (namespace)>
Then I used print(np), for example, to see where my numpy is, and I got
<module 'numpy' from 'C:\\Users\\name\\Anaconda3\\envs\\eda_env\\lib\\site-packages\\numpy\\__init__.py'>
I used the same path to find my pandas path. I found out that the lib folder was named Lib with an uppercase letter. I changed it to lowercase lib, and it solved my problem.
Change Lib to lib, or check a working module and make sure pandas has the same. | I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error
AttributeError("module 'pandas' has no attribute 'read_csv'")
Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help.
I know where both my cvs + python files are located.
My script is not called cvs.py or anything such.
My code can literally just be `'import pandas as pd' and I get the
no attribute error.
I would appreciate if someone could spare the time + assist me in solving this problem. | 0 | 1 | 5,427 |
0 | 69,326,903 | 0 | 0 | 0 | 0 | 5 | false | 4 | 2018-10-13T09:27:00.000 | 1 | 5 | 0 | 'pandas' has no attribute 'read_csv'" | 52,791,477 | 0.039979 | python,pandas,csv | After spending 2 hours researching a solution to this question, running pip uninstall pandas and then pip install pandas in your terminal will work. | I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error
AttributeError("module 'pandas' has no attribute 'read_csv'")
Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help.
I know where both my cvs + python files are located.
My script is not called cvs.py or anything such.
My code can literally just be `'import pandas as pd' and I get the
no attribute error.
I would appreciate if someone could spare the time + assist me in solving this problem. | 0 | 1 | 5,427 |
0 | 64,144,515 | 0 | 0 | 0 | 0 | 5 | false | 4 | 2018-10-13T09:27:00.000 | 1 | 5 | 0 | 'pandas' has no attribute 'read_csv'" | 52,791,477 | 0.039979 | python,pandas,csv | I had the same issue and it is probably because of writing
dataframe = pd.read.csv("dataframe.csv")
instead of
dataframe = pd.read_csv("dataframe.csv")
that little "_" is the problem.
Hope this helps somebody else too. | I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error
AttributeError("module 'pandas' has no attribute 'read_csv'")
Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help.
I know where both my cvs + python files are located.
My script is not called cvs.py or anything such.
My code can literally just be `'import pandas as pd' and I get the
no attribute error.
I would appreciate if someone could spare the time + assist me in solving this problem. | 0 | 1 | 5,427 |
0 | 52,806,830 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-14T19:10:00.000 | 1 | 2 | 0 | imshow() with desired framerate with opencv | 52,806,175 | 1.2 | python,opencv | I don't believe there is such a function in opencv but maybe you could improve your method by adding a dynamic wait time using timers? timeit.default_timer()
calculate the time taken to process and subtract that from the expected framerate and maybe add a few ms buffer.
e.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10)
or you could use more rigid timing, e.g. script start time + frame# * 20ms - time processing finished
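A small sketch of that rigid-deadline idea with a real timer (the 50 FPS value, the window name and frame_queue are assumptions standing in for your own setup):

import time
import cv2

frame_period = 1.0 / 50                    # 50 FPS source -> 20 ms per frame
next_deadline = time.perf_counter()

while True:
    frame = frame_queue.get()              # post-processed frame from your worker thread
    cv2.imshow("playback", frame)

    next_deadline += frame_period
    wait_ms = int((next_deadline - time.perf_counter()) * 1000)
    key = cv2.waitKey(max(wait_ms, 1))     # never wait less than 1 ms
    if key == 27:                          # Esc to quit
        break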
I haven't tried this personally so I'm not sure if it will actually work; it also might be worth adding a check so the number isn't below 1 | Is there any workaround how to use cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for "reading" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:
if I have an input video with 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame.
And then wait that time using cv2.WaitKey()
But I still get some laggy output, which is slower than the source video. | 0 | 1 | 2,282 |
0 | 52,816,856 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-15T10:41:00.000 | 0 | 2 | 0 | Neuron freezing in Tensorflow | 52,814,880 | 0 | python,tensorflow,deep-learning | A neuron in a dense neural network layer simply corresponds to a column in a weight matrix. You could therefore redefine your weight matrix as a concatenation of 2 parts/variables, one trainable and one not. Then you could either:
selectively pass only the trainable part in the var_list argument of the minimize function of your optimizer, or
Use tf.stop_gradient on the vector/column corresponding to the neuron you want to freeze.
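A toy sketch of the first option for a dense layer, in TF1-style code (the layer sizes and the choice of which column to freeze are arbitrary placeholders):

import tensorflow as tf

# a 4-unit dense layer where unit 0 is frozen and units 1-3 stay trainable
w_frozen = tf.Variable(tf.random_normal([8, 1]), trainable=False)  # column of the frozen neuron
w_train = tf.Variable(tf.random_normal([8, 3]), trainable=True)    # columns of the other neurons
weights = tf.concat([w_frozen, w_train], axis=1)                   # full 8x4 weight matrix

x = tf.placeholder(tf.float32, [None, 8])
out = tf.matmul(x, weights)

loss = tf.reduce_mean(tf.square(out))                               # dummy loss for illustration
# only the trainable part is handed to the optimizer
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss, var_list=[w_train])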
The same concept could be used for convolutional layers, although in this case the definition of a "neuron" becomes unclear; still, you could freeze any column(s) of a convolutional kernel. | I need to implement neurons freezing in CNN for a deep learning research,
I tried to find any function in the Tensorflow docs, but I didn't find anything.
How can I freeze specific neuron when I implemented the layers with tf.nn.conv2d? | 0 | 1 | 473 |
0 | 52,839,277 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-16T15:37:00.000 | 1 | 3 | 0 | Cannot import python pack | 52,839,205 | 0.066568 | python,package | You have to install it first. Search “Python pip” on Google and install pip. Then open CMD and type “pip install (module name)”. Then it should import with no errors. | I cannot install a package in Python, for example numpy or pandas. I downloaded Python today.
I type import numpy as np and nothing happens. | 0 | 1 | 40 |
0 | 52,859,940 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-16T21:04:00.000 | 0 | 2 | 0 | uWSGI process 1 got Segmentation Fault _ Fail to deploy Flask App on Pythonanywhere | 52,844,037 | 0 | python,flask,deployment,wsgi,pythonanywhere | uWSGI is a C/C++ compiled app and segmentation fault is its internal error that means that there is some incorrect behavior in uWSGI logic: somewhere in its code it's trying to get access to area of memory it's not allowed to access to, so OS kills this process and returns "segfault" error. So make sure you have the latest stable version of uwsgi installed. Also make sure you installed it properly either using a package manager or via manual compiling. It's recommended to install it using a package manager since it's much more easy than via manual compiling. Also, make sure you use it properly. | I'm trying to deploy my flask app on Pythonanywhere but am getting an error i have no idea what to do about. I've looked online and people haven't been getting similar errors like mine.
My app loads a bunch of pretrained ML models.
Would love some help!
2018-10-16 20:52:38 /home/drdesai/.virtualenvs/flask-app-env/lib/python3.6/site-packages/sklearn/base.py:251: UserWarning: Trying to unpickle estimator LinearRegression from version 0.19.1 when using version 0.20.0. This might lead to breaking code or invalid results. Use at your own risk.#012 UserWarning)
2018-10-16 20:52:38 !!! uWSGI process 1 got Segmentation Fault !!!
2018-10-16 20:52:38 * backtrace of 1 *#012/usr/local/bin/uwsgi(uwsgi_backtrace+0x2c) [0x46529c]#012/usr/local/bin/uwsgi(uwsgi_segfault+0x21) [0x465661]#012/lib/x86_64-linux-gnu/libc.so.6(+0x36cb0) [0x7f6ed211ccb0]#012/home/drdesai/.virtualenvs/flask-app-env/lib/python3.6/site-packages/sklearn/neighbors/kd_tree.cpython-36m-x86_64-linux-gnu.so(+0x404b6) [0x7f6ead1d54b6]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(_PyCFunction_FastCallDict+0x105) [0x7f6ed0e80005]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(+0x16b5fa) [0x7f6ed0f195fa]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x2f3c) [0x7f6ed0f1ccfc]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(+0x16a890) [0x7f6ed0f18890]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(+0x16b7b4) [0x7f6ed0f197b4]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x2f3c) [0x7f6ed0f1ccfc]#012/usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0(+0x16a890) [0x7f6ed0f18890]#012/usr/lib/x86_
2018-10-16 20:52:38 chdir(): No such file or directory [core/uwsgi.c line 1610]
2018-10-16 20:52:38 VACUUM: unix socket /var/sockets/drdesai.pythonanywhere.com/socket removed. | 1 | 1 | 1,381 |
0 | 52,850,290 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-10-16T21:43:00.000 | 1 | 1 | 0 | Azure Machine Learning Studio execute python script, Theano unable to execute optimized C-implementations (for both CPU and GPU) | 52,844,431 | 1.2 | python,theano,azure-machine-learning-studio | I don't think you can fix that - the Python script environment in Azure ML Studio is rather locked down, you can't really configure it (except for choosing from a small selection of Anaconda/Python versions).
You might be better off using the new Azure ML service, which allows you considerably more configuration options (including using GPUs and the like). | I am execute a python script in Azure machine learning studio. I am including other python scripts and python library, Theano. I can see the Theano get loaded and I got the proper result after script executed. But I saw the error message:
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
Did anyone know how to solve this problem? Thanks! | 0 | 1 | 96 |
0 | 52,848,240 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2018-10-17T05:46:00.000 | 1 | 1 | 0 | Importing a pandas dataframe into Teradata database | 52,847,985 | 0.197375 | python,pandas,sqlalchemy,teradata | I solved it! Although I do not know why, I'm hoping someone can explain:
tf.to_sql('rt_test4', con=td_engine, schema='db_sandbox', index = False, dtype= {'A': CHAR, 'B':Integer}) | I am attempting to import an Excel file into a new table in a Teradata database, using SQLAlchemy and pandas.
I am using the pandas to_sql function. I load the Excel file with pandas and save it as a dataframe named df. I then use df.to_sql and load it into the Teradata database.
When using the code:
df.to_sql('rt_test4', con=td_engine, schema='db_sandbox')
I am prompted with the error:
DatabaseError: (teradata.api.DatabaseError) (3534, '[42S11] [Teradata][ODBC Teradata Driver][Teradata Database] Another index already exists, using the same columns and the same ordering. ') [SQL: 'CREATE INDEX ix_db_sandbox_rt_test4_index ("index") ON db_sandbox.rt_test4']
When I try this and use Teradata SQL Assistant to see if the table exists, I am prompted with selecting txt or unicode for each column name, and to pick a folder directory. A prompt titled LOB information pops open and I have to select if it's UTF or unicode, and a file directory. Then it loads and all the column titles populate, but they are left as empty fields. Looking for some direction here, I feel I've been spinning my wheels on this. | 0 | 1 | 780 |
0 | 52,855,692 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-10-17T12:33:00.000 | 1 | 2 | 0 | Why are different libraries searched for in tensorflow, even though both were installed the same way? | 52,854,983 | 1.2 | python,tensorflow | I would say you have a broken CUDA installation somewhere in the library path. It is libcuda.so that has a dependency on libnvidia-fatbinaryloader.so, so maybe the symbolic links point to a library that no longer exists but was installed before.
You can find this information by running the ldd command on the libcuda.so file. | I built tensorflow from source and got a *.whl file that I could install on my pc with pip install *.whl. Now in the virtualenv where I installed it I can open python and do import tensorflow without a problem and also use tf. Now I tried to install this same wheel on an other pc in a virtualenv and it worked successfully, but when I try to use import tensorflow in python I get:
ImportError: libnvidia-fatbinaryloader.so.390.48: cannot open shared object file: No such file or directory
Now I actually do not have that file on the other pc, but after checking my own pc I also don't have it here. I have on both pcs libnvidia-fatbinaryloader.so.390.87. On both pcs the LD_LIBRARY_PATH points to the directory with that version.
How can it be that tensorflow searches for version 48 on the remote pc while searching for 87 and finding it on my pc, even though they are both installed with the same whl file? Is there a config that I need to adjust what version it should search for? | 0 | 1 | 48 |
0 | 52,855,249 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-10-17T12:33:00.000 | 0 | 2 | 0 | Why are different libraries searched for in tensorflow, even though both were installed the same way? | 52,854,983 | 0 | python,tensorflow | The building process is related to the computer environment.Could building tensorflow in the same machine and installing it on the same machine help?Building on one machine and generating the *.whl,but installing on other machines may cause problem. | I built tensorflow from source and got a *.whl file that I could install on my pc with pip install *.whl. Now in the virtualenv where I installed it I can open python and do import tensorflow without a problem and also use tf. Now I tried to install this same wheel on an other pc in a virtualenv and it worked successfully, but when I try to use import tensorflow in python I get:
ImportError: libnvidia-fatbinaryloader.so.390.48: cannot open shared object file: No such file or directory
Now I actually do not have that file on the other pc, but after checking my own pc I also don't have it here. I have on both pcs libnvidia-fatbinaryloader.so.390.87. On both pcs the LD_LIBRARY_PATH points to the directory with that version.
How can it be that tensorflow searches for version 48 on the remote pc while searching for 87 and finding it on my pc, even though they are both installed with the same whl file? Is there a config that I need to adjust what version it should search for? | 0 | 1 | 48 |
0 | 52,859,799 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-17T14:35:00.000 | 2 | 1 | 0 | Installing R kernel with conda creates an unwanted addtional python kernel in Jupyter | 52,857,461 | 0.379949 | python,r,jupyter-notebook,conda | r-essentials comes with python as well as the jupyter_client and ipykernel packages, which enable your Jupyter to offer this R, and thus the Python installed with it, as kernels in a notebook. ipykernel is mandatory for Jupyter to offer R as a kernel, and python is a dependency of ipykernel, so...
I don't think you can remove python from the list of the kernels proposed. If you remove python from the conda environment, it also removes the ipykernel and the jupyter_client packages. All you can do is ignore it.
EDIT: found more info
After looking into this since I wanted to do the same thing, it seems jupyter has a nice built-in program to do this:
Run
jupyter-kernelspec list
to list all available kernels. Then you can remove one with
jupyter-kernelspec remove <kernel_to_remove>
if you want to remove the kernel.
HOWEVER, it seems that you CANNOT remove the python3 kernel. Even though I ran:
jupyter-kernelspec remove python3
python3 still appears in the list and is still an available kernel in the notebook... | I created an R kernel to use in a Jupyter notebook with:
conda create -n myrenv r-essentials -c r
And when running Jupyter, in the menu to create a new notebook, i can see the choice of my new kernel new --> R [conda env:myrenv] but I also have the choice (among others) of new --> Python [conda env:myrenv].
How can I remove the latter environment from the list? I do not even know why python would be in my R environment.
Additional info:
conda 4.5.11 | 0 | 1 | 86 |
0 | 61,304,539 | 0 | 1 | 0 | 0 | 1 | false | 50 | 2018-10-17T16:54:00.000 | 1 | 5 | 0 | Interactive matplotlib figures in Google Colab | 52,859,983 | 0.039979 | python,matplotlib,google-colaboratory | In addition to @Nilesh Ingle excellent answer, in order to solve the problem of axes and title not displaying :
you should replace the link https://cdn.plot.ly/plotly-1.5.1.min.js?noext (which refers to an older version of plotly and thus does not display axes labels) with a more recent plotly script URL when calling the script in the function configure_plotly_browser_state().
Hope this helps! | Normally in a jupyter notebook I would use %matplotlib notebook magic to display an interactive window, however this doesn't seem to work with google colab. Is there a solution, or is it not possible to display interactive windows in google colab?
0 | 52,873,525 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-17T18:52:00.000 | 0 | 1 | 0 | Getting the list of features used during training of Random Forest Regressor | 52,861,742 | 0 | python,pandas,scikit-learn,random-forest | Is there a function which allows to get the list of names of columns
used during the training of the Random Forest Regressor model?
RF uses all features from your dataset. Each tree may contain sqrt(num_of_features) or log2(num_of_features) or whatever but these columns are selected at random. So usually RF covers all columns from your dataset.
There may be an edge case when you use a small number of estimators in RF and some features are never considered. I suppose RandomForestRegressor.feature_importances_ (a zero or NaN value may be an indicator here) or diving into each tree in RandomForestRegressor.estimators_ may help.
If not, then is there a function which for the missing columns would
assign Nulls?
RF does not accept missing values. Either you need to code missing value as the separate class (and use it for learning too) or XGBoost (for example) is your choice. | I used one set of data to learn a Random Forest Regressor and right now I have another dataset with smaller number of features (the subset of the previous set).
Is there a function which allows to get the list of names of columns used during the training of the Random Forest Regressor model?
If not, then is there a function which for the missing columns would assign Nulls? | 0 | 1 | 216 |
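A minimal sketch of the ideas in the answer above, assuming scikit-learn's RandomForestRegressor is fit on a pandas DataFrame; the column names and fill value are made up for illustration:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Toy training data with three features
X_train = pd.DataFrame({"a": np.random.rand(50), "b": np.random.rand(50), "c": np.random.rand(50)})
y_train = X_train["a"] * 2 + X_train["b"]

model = RandomForestRegressor(n_estimators=20)
model.fit(X_train, y_train)

trained_columns = list(X_train.columns)                      # columns seen during training
importances = dict(zip(trained_columns, model.feature_importances_))
print(importances)

# A smaller dataset that is missing column "c"
X_small = pd.DataFrame({"a": np.random.rand(5), "b": np.random.rand(5)})
X_small = X_small.reindex(columns=trained_columns)           # adds "c" filled with NaN
# sklearn's RF will not accept NaN, so impute (here with 0.0 as a placeholder) before predicting
preds = model.predict(X_small.fillna(0.0))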
0 | 72,310,873 | 0 | 0 | 0 | 1 | 1 | false | 10 | 2018-10-17T20:03:00.000 | 1 | 1 | 0 | How do I use python pandas to read an already opened excel sheet | 52,862,768 | 0.197375 | python,excel,pandas | There is no way to do this. The table is not saved to disk, so pandas can not read it from disk. | Assuming I have an excel sheet already open, make some changes in the file and use pd.read_excel to create a dataframe based on that sheet, I understand that the dataframe will only reflect the data in the last saved version of the excel file. I would have to save the sheet first in order for pandas dataframe to take into account the change.
Is there anyway for pandas or other python packages to read an opened excel file and be able to refresh its data real time (without saving or closing the file)? | 0 | 1 | 1,261 |
0 | 52,864,018 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2018-10-17T21:28:00.000 | 1 | 3 | 0 | Load thousands of CSV files into tableau | 52,863,882 | 0.066568 | python,csv,tableau-api | I would suggest doing any data prep outside of Tableau. Since you seem to be familiar with Python, try Pandas to combine all the csv files into one dataframe then output to a database or a single csv. Then connect to that single source. | I have a gazillion CSV files and in column 2 is the x-data and column 3 is the y-data. Each CSV file is a different time stamp. The x-data is slightly different in each file, but the number of rows is constant. I'm happy to assume the x-data is in fact identical.
I am persuaded that Tableau is a good interface for me to do some visualization and happily installed tabpy and "voila", I can call python from Tableau... except... to return an array I will need to return a string with comma separated values for each time stamp, and then one of those strings per x-axis and then.... Hmm, that doesnt sound right.
I tried telling Tableau just open them all and I'd join them later, but gave up after 30 mins of it crunching.
So what do you reckon? I am completely agnostic. Install an SQL server and create a database? Create a big CSV file that has a time-stamp for each column? Google? JSON?
Or maybe there is some clever way in Tableau to loop through the CSV files? | 0 | 1 | 1,928 |
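A short sketch of the pandas approach suggested in the first answer, assuming every CSV has the x-data in column 2 and the y-data in column 3; the file pattern, column names and timestamp handling are illustrative only:
import glob
import pandas as pd

frames = []
for i, path in enumerate(sorted(glob.glob("data/*.csv"))):
    df = pd.read_csv(path, header=None)
    df = df.iloc[:, [1, 2]]          # column 2 = x, column 3 = y (0-based positions 1 and 2)
    df.columns = ["x", "y"]
    df["timestamp"] = i              # or parse the timestamp out of the file name
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined.to_csv("combined.csv", index=False)   # one file (or a database table) to connect Tableau to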
0 | 52,864,473 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2018-10-17T21:28:00.000 | 0 | 3 | 0 | Load thousands of CSV files into tableau | 52,863,882 | 0 | python,csv,tableau-api | If you are using Windows, you can combine all the csv files into a single csv, then import that into Tableau. This of course assumes that all of your csv files have the same data structure.
Open the command prompt
Navigate to the directory where the csv files are (using the cd command)
Use the command copy *.csv combined-file.csv. The combined-file.csv can be whatever name you want. | I have a gazillion CSV files and in column 2 is the x-data and column 3 is the y-data. Each CSV file is a different time stamp. The x-data is slightly different in each file, but the number of rows is constant. I'm happy to assume the x-data is in fact identical.
I am persuaded that Tableau is a good interface for me to do some visualization and happily installed tabpy and "voila", I can call python from Tableau... except... to return an array I will need to return a string with comma separated values for each time stamp, and then one of those strings per x-axis and then.... Hmm, that doesnt sound right.
I tried telling Tableau just open them all and I'd join them later, but gave up after 30 mins of it crunching.
So what do you reckon? I am completely agnostic. Install an SQL server and create a database? Create a big CSV file that has a time-stamp for each column? Google? JSON?
Or maybe there is some clever way in Tableau to loop through the CSV files? | 0 | 1 | 1,928 |
0 | 52,871,555 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-18T09:53:00.000 | 0 | 1 | 0 | Is it possible to manipulate data from csv without the need for producing a new csv file? | 52,871,447 | 1.2 | python,pandas | This is not possible using pandas. The library creates a copy of your .csv/.xls file and stores it in RAM, so all changes are applied to the copy in memory, not to the file on disk. | I know how to import and manipulate data from csv, but I always need to save to xlsx or so to see the changes. Is there a way to see 'live changes' as if I am already using Excel?
PS using pandas
Thanks! | 0 | 1 | 47 |
0 | 52,928,244 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-10-18T10:03:00.000 | 1 | 2 | 0 | Ordinal 241 could not be located | 52,871,597 | 0.099668 | python,anaconda,jupyter | Eventually, I came to the conclusion that this Anaconda version does not work well with my Win 8.1.
So, I downgraded Anaconda version to Anaconda3-5.2.0-Windows-x86_64 and that solved the issue. | I am using Anaconda3-5.3.0-Windows-x86_64 release and experienced the following problem:
While running import command (e.g. - import numpy as np), from Jupyter notebook, I receive the following error -
The ordinal 241 could not be located in the dynamic link library path:\mkl_intel_thread.dll>
Where 'path' is the path to anaconda directory in my Win10 PC.
I tried the following in order to overcome this issue -
Reinstall anaconda from scratch
Update the .dll to latest one
Update Win10 path to search the right folders
Unfortunately - non of the above methods worked for me.
Can someone offer some solution / new ideas to check?
Thank you all in advance! | 0 | 1 | 930 |
0 | 52,880,527 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-10-18T14:31:00.000 | 1 | 1 | 0 | Elements in array change data types from float to string | 52,876,380 | 0.197375 | python,arrays,append,reshape | Because of the mix of numbers and strings, np.array will use the common format: string. The solution here is to convert data to type object which supports mixed element types. This is performed by using:
data = np.array(data, dtype=object)
prior to hstack. | When I append elements to a list that have the following format and type:
data.append([float, float, string])
Then stack the list using:
data = np.hstack(data)
And then finally reshape the array and transpose using:
data = np.reshape(data,(-1,3)).T
All the array elements in data are changed to strings. I want (and expected) the first and second columns in data to be of type float and the third of type string, but instead they are all of type string. [Interestingly, if I do not append the string elements to data and adjust the newshape to (-1,2), both columns are floats.] I cannot figure this one out. Any help would be appreciated. | 0 | 1 | 397 |
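A small illustration of the fix described in the answer, using made-up values, showing that dtype=object preserves the per-element types through hstack and reshape:
import numpy as np

data = [[1.0, 2.5, "label_a"], [3.0, 4.5, "label_b"]]
data = np.array(data, dtype=object)      # element types preserved: float, float, str
data = np.hstack(data)                    # 1-D object array, mixed types kept
data = np.reshape(data, (-1, 3)).T        # rows: x values, y values, labels
print(data.dtype, type(data[0, 0]), type(data[2, 0]))   # object, <class 'float'>, <class 'str'>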
0 | 52,879,296 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-10-18T17:03:00.000 | 6 | 1 | 0 | How are tensors immutable in TensorFlow? | 52,879,126 | 1.2 | python,tensorflow | Tensors, unlike variables, can be compared to a math equation.
When you say a tensor equals 2+2, its value is not actually 4; it is the computing instructions that lead to the value of 2+2. When you start a session and execute it, TensorFlow runs the computations needed to return the value of 2+2 and gives you the output. Because the tensor is the computation rather than the result, a tensor is immutable.
Now for your questions:
Saying the tensor can be evaluated with different values means that if, for example, you say a tensor equals a random number, running it at different times will give different values (as the equation itself is a random one). But the value of the tensor itself, as mentioned before, is not the result; it is the steps that lead to it (in this case a random formula).
The context of a single execution means that when you run a tensor, it will only output one value. Think of executing a tensor like applying the equation I mentioned. If I say a tensor equals random + 1, executing the tensor a single time will return a random value + 1, nothing else. But since the tensor contains a random output, running it multiple times will most likely give different values.
With the exception of tf.Variable, the value of a tensor is immutable,
which means that in the context of a single execution tensors only
have a single value. However, evaluating the same tensor twice can
return different values; for example that tensor can be the result of
reading data from disk, or generating a random number.
Can someone elaborate a little bit on the "immutable" aspect of a Tensor?
What is the "scope of the immutability" since evaluating a tensor twice could return different results?
What does it mean "the context of a single execution"? | 0 | 1 | 2,114 |
0 | 52,889,192 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-10-19T09:04:00.000 | 0 | 5 | 0 | how to remove zeros after decimal from string remove all zero after dot | 52,889,130 | 0 | python,pandas | A quick-and-dirty solution is to use "%g" % value, which will convert floats 1.5 to 1.5 but 1.0 to 1 and so on. The negative side-effect is that large numbers will be represented in scientific notation like 4.44e+07. | I have a data frame with an object column, let's say col1, which has values like:
1.00,
1,
0.50,
1.54
I want to have the output like the below:
1,
1,
0.5,
1.54
Basically, remove the zeros after the decimal point when there are no non-zero digits after it. Please note that I need an answer for a dataframe; pd.set_option and round don't work for me. | 0 | 1 | 5,418
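A quick sketch applying the "%g" idea from the answer above to a whole DataFrame column (the column name col1 is taken from the question):
import pandas as pd

df = pd.DataFrame({"col1": [1.00, 1.0, 0.50, 1.54]})
df["col1"] = df["col1"].map(lambda v: "%g" % float(v))
print(df["col1"].tolist())   # ['1', '1', '0.5', '1.54']
Note that this turns the column into strings, which matches the question's object column but means the values are no longer numeric.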
0 | 52,943,459 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2018-10-20T01:58:00.000 | 0 | 1 | 0 | How to find redundant paths (subpaths) in the trajectory of a moving object? | 52,901,800 | 0 | python,python-2.7,video-tracking | I would first create a detection procedure that outputs a list of points visited along with their video frame number. Then use list exploration functions to know how many redundant suites are found and where.
As you see I don't write your code. If you need anymore advise please ask! | I need to track a moving deformable object in a video (but only 2D space). How do I find the paths (subpaths) revisited by the object in the span of its whole trajectory? For instance, if the object traced a path, p0-p1-p2-...-p10, I want to find the number of cases the object traced either p0-...-p10 or a sub-path like p3-p4-p5. Here, p0,p1,...,p10 represent object positions (in (x,y) pixel coordinates at the respective instants). Also, how do I know at which frame(s) these paths (subpaths) are being revisited? | 0 | 1 | 77 |
0 | 52,912,811 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-10-21T06:05:00.000 | 1 | 1 | 0 | How To Detect English Language Words Using Machine Learning From Data | 52,912,553 | 0.197375 | python,tensorflow,machine-learning | Character frequency scanning is one way to do this.
For example for each language obtain a list of character frequencies,
A: 3%
B: 1%
C: 0.5%
D: 0.7%
E: 4%
etc..
Then evaluate your string's character frequency against your static map. You can obtain a probabilistic model of the likelihood of the string being one of your languages.
Of course this works best for longer strings where there is enough statistical data to capture the true frequency. You would also need to train your frequencies on samples from your target source, e.g. English tweets likely have a different letter frequency to works of Shakespeare.
Another option is to find the most likely n-grams in a language, e.g, 'we' is a common 2-gram in english. If you scan your code for how often these most likely n-grams occur you can generally detect if something is in a specific language or not.
I'm sure there are also other ideas or combinations of classifiers, but this gives you a start. Don't underestimate the power of an ensemble of classifiers either. For example, suppose you came up with 3 different models that were all uncorrelated, and say each model could detect English correctly 3 times out of 4 (75%). If you then used all 3 models with an equally weighted vote, so that a message is classed as English when 3 of 3 or 2 of 3 models vote English, your accuracy improves to roughly 3.4 correct out of 4, about 84% (= 0.75^3 + 3*0.75^2*0.25)
I want to detect messages that are "written in English letters", but aren't English words. (For example with codes based rules, but I don't want to hard coded the rules).
Please note that the computer being used does not have an active internet connection (so I cannot check against online dictionary).
Example Data
"hello how are you"
"fjrio kjfdelf ejfe" <-- code (let's say is means "how are you" in spanish)
"i am fine thanks"
"10x man"
"jfrojf feoif" <-- code (let's say it means "hello world" in japanish)
I'm new to machine learning, so for my understanding, maybe one approach could
be using nlp? | 0 | 1 | 440 |
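A rough sketch of the character-frequency idea from the answer above; the reference frequencies below are illustrative numbers only, and in practice you would estimate them from a sample of your own English messages:
english_freq = {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "i": 0.070,
                "n": 0.067, "s": 0.063, "h": 0.061, "r": 0.060}

def english_score(message):
    letters = [c for c in message.lower() if c.isalpha()]
    if not letters:
        return 0.0
    # average reference frequency of the letters used; higher looks more like English
    return sum(english_freq.get(c, 0.01) for c in letters) / len(letters)

for msg in ["hello how are you", "fjrio kjfdelf ejfe"]:
    print(msg, english_score(msg))
A threshold on this score, or a small classifier combining several such scores (n-gram counts, dictionary hits, etc.), then decides which messages are flagged as non-English; no internet connection is needed.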
0 | 52,934,432 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-21T23:20:00.000 | 0 | 2 | 0 | How can I sort 128 bit unsigned integers in Python? | 52,920,727 | 0 | python,sorting,numpy,int128 | I was probably expecting too much from Python, but I'm not disappointed. A few minutes of coding allowed me to create something (using built-in lists) that can process the sorting a hundred million uint128 items on an 8GB laptop in a couple of minutes.
Given a large number of items to be sorted (1 trillion), it's clear that putting them into smaller bins/files upon creation makes more sense than looking to sort huge numbers in memory. The potential issues created by appending data to thousands of files in 1MB chunks (fragmentation on spinning disks) are less of a worry due to the sorting of each of these fragmented files creating a sequential file that will be read many times (the fragmented file is written once and read once).
The benefits of development speed of Python seem to outweigh the performance hit versus C/C++, especially since the sorting happens only once. | I have a huge number of 128-bit unsigned integers that need to be sorted for analysis (around a trillion of them!).
The research I have done on 128-bit integers has led me down a bit of a blind alley, numpy doesn't seem to fully support them and the internal sorting functions are memory intensive (using lists).
What I'd like to do is load, for example, a billion 128-bit unsigned integers into memory (16GB if just binary data) and sort them. The machine in question has 48GB of RAM so should be OK to use 32GB for the operation. If it has to be done in smaller chunks that's OK, but doing as large a chunk as possible would be better. Is there a sorting algorithm that Python has which can take such data without requiring a huge overhead?
I can sort 128-bit integers using the .sort method for lists, and it works, but it can't scale to the level that I need. I do have a C++ version that was custom written to do this and works incredibly quickly, but I would like to replicate it in Python to accelerate development time (and I didn't write the C++ and I'm not used to that language).
Apologies if there's more information required to describe the problem, please ask anything. | 0 | 1 | 1,852 |
0 | 52,929,724 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-10-22T12:42:00.000 | 0 | 2 | 0 | Why do I need sklearn in docker container if I already have the model as a pickle? | 52,929,649 | 0 | python,python-3.x,docker,scikit-learn,pickle | The pickle is just the representation of the data inside the model. You still need the code to use it, that's why you need to have sklearn inside the container. | I pickled a model and want to expose only the prediction api written in Flask. However when I write a dockerfile to make a image without sklearn in it, I get an error ModuleNotFoundError: No module named 'sklearn.xxxx' where xxx refers to sklearn's ML algorithm classes, at the point where I am loading the model using pickle like classifier = pickle.load(f).
When I rewrite the dockerfile to make an image that has sklearn too, then I don't get the error even though in the API I never import sklearn.
My concept of pickling is very simple, that it will serialize the classifier class with all of its data. So when we unpickle it, since the classifier class already has a predict attribute, we can just call it. Why do I need to have sklearn in the environment? | 0 | 1 | 1,171 |
0 | 52,929,910 | 0 | 0 | 0 | 0 | 2 | true | 3 | 2018-10-22T12:42:00.000 | 1 | 2 | 0 | Why do I need sklearn in docker container if I already have the model as a pickle? | 52,929,649 | 1.2 | python,python-3.x,docker,scikit-learn,pickle | You have a misconception of how pickle works.
It does not serialize anything except the instance state (__dict__ by default, or a custom implementation). When unpickling, it just tries to create an instance of the corresponding class (this is where your import error comes from) and set the pickled state.
There's a reason for this: you don't know beforehand which methods will be used after loading, so you cannot pickle the implementation. In addition, at pickle time you cannot build some AST to see which methods/modules will be needed after deserializing; the main reason for this is the dynamic nature of Python, since your implementation can actually vary depending on the input.
After all, even assuming we theoretically had smart self-contained pickle serialization, it would be the actual model plus sklearn in a single file, with no proper way to manage it.
When I rewrite the dockerfile to make an image that has sklearn too, then I don't get the error even though in the API I never import sklearn.
My concept of pickling is very simple, that it will serialize the classifier class with all of its data. So when we unpickle it, since the classifier class already has a predict attribute, we can just call it. Why do I need to have sklearn in the environment? | 0 | 1 | 1,171 |
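A small illustration of the point in the second answer: pickle stores a reference to the class (module and name) plus the instance state, not the code, so the class's module must be importable when you load. The class name here is made up:
import pickle
import pickletools

class TinyModel:
    def __init__(self):
        self.coef_ = [0.1, 0.2]
    def predict(self, x):
        return sum(c * v for c, v in zip(self.coef_, x))

payload = pickle.dumps(TinyModel())
pickletools.dis(payload)   # shows a GLOBAL/STACK_GLOBAL opcode referencing __main__.TinyModel
                           # plus the __dict__ state; no bytecode for predict() is stored
This is exactly why unpickling a scikit-learn estimator needs sklearn installed in the container.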
0 | 52,942,231 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-22T15:08:00.000 | 0 | 1 | 0 | python - multivariate regression with discrete and continuous | 52,932,510 | 0 | python,regression | Correlation is used only for numeric data, discrete / binary data need to be treated differently. Have a look at Phi coefficient for binary.
As for correlation coefficient (for numeric data), it depends on the relationship between the variables. If these are linear then Pearson is preferred, otherwise Spearman (or something else). | I have a dataset with 53 independent variables (X) and 1 dependent (Y).
The dependent variable is a boolean (either 1 or 0), while the independent set is made of both continuous and discrete variables.
I was planning to use pandas.DataFrame.corr() to list the most influencing variables for the output Y.
corr can be:
pearson regression
kendall regression
spearman regression
I get different results for 3 approaches.
Do you have suggestions on which one would be the most suitable given the shape (discrete+continuos) of the dataset? | 0 | 1 | 339 |
0 | 56,162,459 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-10-22T17:39:00.000 | 1 | 1 | 0 | How to choose beta in F-beta score | 52,934,864 | 0.197375 | python-3.x,machine-learning,scikit-learn,random-forest,grid-search | To give more weight to the Precision, we pick a Beta value in the
interval 0 < Beta < 1
To give more weight to the Recall, we pick a Beta Value in the interval 1 < Beta
When you set beta = cost of a false negative / cost of a false positive, you give more weight to recall whenever the cost of a false negative is higher than that of a false positive, so it will work; but this doesn't mean it is the optimal choice for your problem.
The best beta also depends on the shape of your data, so it is better to try different values of beta on your data until you find the one that works best. | I am using grid search to optimize the hyper-parameters of a Random Forest fit on a balanced data set, and I am struggling with which model evaluation metric to choose. Given the real-world context of this problem, false negatives are more costly than false positives. I initially tried optimizing recall but I was ending up with extremely high numbers of false positives. My solution is to instead optimize an f-beta score with beta > 1. My question is, how best to choose beta? If I can calculate the cost of a false negative and false positive, can I set beta = Cost of False Negative/Cost of False Positive? Does this approach make sense? | 0 | 1 | 1,826
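A minimal sketch of wiring an F-beta objective into grid search with scikit-learn; the beta value, dataset and parameter grid are placeholders for illustration:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

beta = 2.0   # beta > 1 weights recall more heavily than precision
scorer = make_scorer(fbeta_score, beta=beta)

grid = GridSearchCV(RandomForestClassifier(n_estimators=50),
                    param_grid={"max_depth": [3, 5, None]},
                    scoring=scorer, cv=3)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)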
0 | 52,957,261 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-10-23T13:34:00.000 | 1 | 6 | 0 | valueError when using multi_gpu_model in keras | 52,950,449 | 0.033321 | python,tensorflow,keras,google-cloud-platform,gpu | TensorFlow is only seeing one GPU (the gpu and xla_gpu devices are two backends over the same physical device). Are you setting CUDA_VISIBLE_DEVICES? Does nvidia-smi show all GPUs? | I am using google cloud VM with 4 Tesla K80 GPU's.
I am running a keras model using multi_gpu_model with gpus=4(since i have 4 gpu's). But, i am getting the following error
ValueError: To call multi_gpu_model with gpus=4, we expect the
following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1',
'/gpu:2', '/gpu:3']. However this machine only has: ['/cpu:0',
'/xla_cpu:0', '/xla_gpu:0', '/gpu:0']. Try reducing gpus.
I can see that there are only two gpu's here namely '/xla_gpu:0', '/gpu:0'. so, i tried with gpus = 2 and again got the following error
ValueError: To call multi_gpu_model with gpus=2, we expect the
following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1'].
However this machine only has: ['/cpu:0', '/xla_cpu:0', '/xla_gpu:0',
'/gpu:0']. Try reducing gpus.
can anyone help me out with the error. Thanks! | 0 | 1 | 7,993 |
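A couple of quick checks related to the first answer above; these assume a TensorFlow 1.x installation on the VM:
import os
# Make all four physical GPUs visible to the process (set before TensorFlow initializes)
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])
# You should see /device:GPU:0 ... /device:GPU:3 listed before multi_gpu_model(model, gpus=4) can work;
# nvidia-smi on the shell should likewise show all four K80s.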
0 | 58,273,653 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-10-23T13:34:00.000 | 0 | 6 | 0 | valueError when using multi_gpu_model in keras | 52,950,449 | 0 | python,tensorflow,keras,google-cloud-platform,gpu | I had the same issue: tensorflow-gpu 1.14 installed, CUDA 10.0, and 4 XLA_GPUs displayed by device_lib.list_local_devices().
I have another conda environment with just TensorFlow 1.14 installed and no tensorflow-gpu, and I don't know why, but I can run my multi_gpu model on all GPUs with that environment. | I am using google cloud VM with 4 Tesla K80 GPU's.
I am running a keras model using multi_gpu_model with gpus=4(since i have 4 gpu's). But, i am getting the following error
ValueError: To call multi_gpu_model with gpus=4, we expect the
following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1',
'/gpu:2', '/gpu:3']. However this machine only has: ['/cpu:0',
'/xla_cpu:0', '/xla_gpu:0', '/gpu:0']. Try reducing gpus.
I can see that there are only two gpu's here namely '/xla_gpu:0', '/gpu:0'. so, i tried with gpus = 2 and again got the following error
ValueError: To call multi_gpu_model with gpus=2, we expect the
following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1'].
However this machine only has: ['/cpu:0', '/xla_cpu:0', '/xla_gpu:0',
'/gpu:0']. Try reducing gpus.
can anyone help me out with the error. Thanks! | 0 | 1 | 7,993 |
0 | 52,973,320 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2018-10-23T20:32:00.000 | 1 | 1 | 0 | running PVWatts for module system not in Sandia DB (python library) | 52,957,433 | 1.2 | python,pvlib | Yes, use an incident angle modifier function such as physicaliam to calculate the AOI loss, apply the AOI loss to the in-plane direct component, then add the in-plane diffuse component. | I want to run the PVWatts model (concretely to get pvwatts_dc) on an Amerisolar 315 module which doesn't seem to appear. What I am trying to do is to replicate the steps in the manual, which only requires system DC size.
When I go into the power model, the formula says g_poa_effective must be already angle-of-incidence-loss corrected. How do I do this correction? I've thought about using the physical correction formula pvlib.pvsystem.physicaliam(aoi), but is this the right track? | 0 | 1 | 77 |
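A rough sketch of the combination described in the answer, assuming an older pvlib release where pvlib.pvsystem.physicaliam and pvlib.pvsystem.pvwatts_dc are available (both are named in the question); the irradiance, angle and temperature values and the temperature coefficient are illustrative placeholders:
import pvlib

# Example in-plane irradiance components and cell temperature (made-up numbers)
aoi, poa_direct, poa_diffuse, temp_cell = 30.0, 700.0, 100.0, 40.0

iam = pvlib.pvsystem.physicaliam(aoi)               # incident angle modifier for the direct beam
g_poa_effective = poa_direct * iam + poa_diffuse     # AOI-corrected direct plus in-plane diffuse
pdc = pvlib.pvsystem.pvwatts_dc(g_poa_effective, temp_cell,
                                pdc0=315, gamma_pdc=-0.004)   # 315 W module, placeholder temp. coefficient
print(pdc)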
0 | 52,965,846 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-24T09:41:00.000 | 0 | 2 | 0 | Using convolution layer trained weights for different image size | 52,965,773 | 1.2 | python,tensorflow,deep-learning | As convolution layer are independent of image size
Actually it's more complicated than that. The kernel itself is independent of the image size because we apply it on each pixel. And indeed, the training of these kernels can be reused.
But this means that the output size is dependent on the image size, because this is the number of nodes that are fed out of the layer for each input pixel. So the dense layer is not adapted to your image, even if the feature extractors are independent.
So you need to preprocess your image to fit into the size of the first layer or you retrain your dense layers from scratch.
What people call "transfer learning" is what has been done in segmentation for decades: you reuse the best feature extractors and then train a dedicated model on top of these features. | I want to use the first three convolution layers of vgg-16 to generate feature maps.
But I want to use it with variable image sizes, i.e. not the ImageNet size of 224x224 or 256x256, such as 480x640 or any other random image dimension.
As convolution layer are independent of image spatial size, how can I use the weights for varying image sizes?
So how do we use the pre-trained weights of vgg-16 upto the first three convolution layers.
Kindly let me know if that is possible. | 0 | 1 | 129 |
0 | 53,006,871 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-24T20:27:00.000 | 0 | 1 | 0 | Why is pip not updating tensorflow correctly, and, if it is, why is the 'attrib error' still thrown? | 52,977,453 | 1.2 | python,tensorflow,pip,artificial-intelligence | As mentioned in the comments, the most probable solution to the attribute error is the update problem. However, if you're encountering the Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow, the easiest solution is to use following code.
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.1.0-py2-none-any.whl (for python 2)
or
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.1.0-py3-none-any.whl (for python 3) | I've installed tensorflow over pip3 and python3, and am working on it. While using the colum function, the commonly experienced error AttributeError: module 'tensorflow' has no attribute 'feature_column'.
It might look like a duplicate question, but I've looked at the other occurrences of the same question, but, after updating the file (pip3 install --upgrade tensorflow), I checked the version. The version 0.12.0 is shown. So why does pip still show its completely new. Is 0.12.0 the newest version?
When I attempted to uninstall tensorflow and re-install it, it refuses to re-install. I'm using python3 -m pip install tensorflow. The error thrown here is Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
Thanks in advance for your help | 0 | 1 | 194 |
0 | 52,981,417 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-25T02:44:00.000 | 0 | 2 | 0 | How to use Tensorflow Keras API | 52,980,583 | 0 | python,python-3.x,tensorflow,tensorboard | You can use these APIs all together. E.g. if you have a regular dense network, but with an special layer you can use higher level API for dense layers (tf.layers and tf.keras) and low level API for your special layer. Furthermore, it is complex graphs are easier to define in low level APIs, e.g. if you want to share variables, etc.
Eager execution helps you for fast debugging, it evaluates tensors directly without a need of invoking a session. | Well I start learning Tensorflow but I notice there's so much confusion about how to use this thing..
First, some tutorials present models using low level API tf.varibles, scopes...etc, but other tutorials use Keras instead and for example to use tensor board to invoke callbacks.
Second, what's the purpose of having ton of duplicate API, really what's the purpose behind using high level API like Keras when you have low level to build model like Lego blocks?
Finally, what's the true purpose of using eager execution? | 0 | 1 | 241 |
0 | 53,030,878 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-25T11:08:00.000 | 0 | 1 | 0 | Keras flow_from_dataframe wrong data ordering | 52,987,835 | 1.2 | python,keras | While I haven't found a way to decide the order in which the generator produces data, the order can be obtained with the generator.filenames property. | I am using keras's data generator with flow_from_dataframe. for training it works just fine, but when using model.predict_generator on the test set, I discovered that the ordering of the generated results is different than the ordering of the "id" column in my dataframe.
shuffle=False does make the ordering of the generator consistent, but it is a different ordering than the dataframe. I also tried different batch sizes and the corresponding correct steps for the predict_generator function. (for example: batch_Size=1, steps=len(data))
how can I make sure the labels predicted for my test set are ordered in the same way of my dataframe "id" column? | 0 | 1 | 403 |
0 | 53,005,078 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-25T12:18:00.000 | 0 | 1 | 0 | Python DLL load fail after updating all packages | 52,989,115 | 0 | python,scikit-learn | Ended up uninstalling Anaconda and re-installing. Seems to work again now | I just updated all conda packages, as Jupyter had a kernel error. Had been working in Pycharm for a while, but wanted to continue in Jupyter now that the code was working. Updating fixed my jupyter kernel error, but now the script won't work, in jupyter, pycharm, or from console. I get the same error in each case:
File "myscript.py", line 58, in
myFunction(path, out) File "myscript.py", line 7, in myFunction
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer File
"C:\Anaconda\lib\site-packages\sklearn__init__.py", line 134, in
from .base import clone File "C:\Anaconda\lib\site-packages\sklearn\base.py", line 12, in
from .utils.fixes import signature File "C:\Anaconda\lib\site-packages\sklearn\utils__init__.py", line 11, in
from .validation import (as_float_array, File "C:\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 18,
in
from ..utils.fixes import signature File "C:\Anaconda\lib\site-packages\sklearn\utils\fixes.py", line 144, in
from scipy.sparse.linalg import lsqr as sparse_lsqr # noqa File "C:\Anaconda\lib\site-packages\scipy\sparse\linalg__init__.py", line
113, in
from .isolve import * File "C:\Anaconda\lib\site-packages\scipy\sparse\linalg\isolve__init__.py",
line 6, in
from .iterative import * File "C:\Anaconda\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py",
line 7, in
from . import _iterative ImportError: DLL load failed
Any ideas? | 0 | 1 | 392 |
0 | 52,990,768 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-10-25T13:05:00.000 | 0 | 1 | 0 | Generate Permutations of a large number( probably 30) with constraints | 52,990,024 | 0 | python,constraints,permutation,itertools | Calculating all permutations for a list of size 30 is impossible, irrespective of the implementation approach as there will be a total of 30! permutations.
It seems to me that the permutation you require can simply be achieved by sorting the the given list though by using arr.sort() and then calculating the difference between consecutive elements. Am I missing something? | I have list of numbers ( 1 to 30 ) most probably. I need to arrange the list in such a way that the absolute difference between two successive elements is not more than 2 or 3 or 4, and the sum of absolute differences of all the successive elements is minimum.
I tried generating all possible permutations of list ranging upto 10 and 11 and then sorting them according to the cost value, but for large nmbers it takes too long.
It would take ages to get list for 30 numbers.
Is there any way I could perform the constraints while generating the permutations itself ?
Currently I'm using itertools library for python to generate permutations.
Any help is greatly apprecited!
Thank you
EDIT 1: Here are the results I got on small numbers like 10 and 12.
Arranged Array -> Cost (Cost is the sum of absolute difference between two successive elements)
For 10 numbers
[1, 3, 5, 2, 4, 6, 8, 10, 7, 9] 20
[2, 4, 1, 3, 5, 7, 9, 6, 8, 10] 20
For 12.
[1, 3, 5, 2, 4, 6, 8, 11, 9, 7, 10, 12] 25
[1, 3, 5, 7, 10, 12, 9, 11, 8, 6, 4, 2] 25
I need to arrange 30 such numbers where 2 <= difference <= 4 and overall cost is minimum. | 0 | 1 | 343 |
0 | 52,993,813 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-25T15:16:00.000 | 1 | 2 | 0 | Write python functions to operate over arbitrary axes | 52,992,767 | 1.2 | python,numpy,multidimensional-array,indexing | numpy functions use several approaches to do this:
transpose axes to move the target axis to a known position, usually first or last; and if needed transpose the result
reshape (along with transpose) to reduce the problem to simpler dimensions. If your focus is on the n'th dimension, it might not matter whether the other (:n) dimensions are flattened or not; they are just 'going along for the ride'.
construct an indexing tuple. idx = (slice(None), slice(None), j); A[idx] is the equivalent of A[:,:,j]. Start with a list or array of the right size, fill with slices, fiddle with it, and then convert to a tuple (tuples are immutable).
Construct indices with indexing_tricks tools like np.r_, np.s_ etc.
Study code that provides for axes. Compiled ufuncs won't help, but functions like tensordot, take_along_axis, apply_along_axis, np.cross are written in Python, and use one or more of these tricks. | I've been struggling with this problem in various guises for a long time, and never managed to find a good solution.
Basically if I want to write a function that performs an operation over a given, but arbitrary axis of an arbitrary rank array, in the style of (for example) np.mean(A,axis=some_axis), I have no idea in general how to do this.
The issue always seems to come down to the inflexibility of the slicing syntax; if I want to access the ith slice on the 3rd index, I can use A[:,:,i], but I can't generalise this to the nth index. | 0 | 1 | 68 |
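A small sketch of the "indexing tuple" approach from the answer above, generalising A[:, :, i] to an arbitrary axis; the function and variable names are mine:
import numpy as np

def take_slice(a, index, axis):
    idx = [slice(None)] * a.ndim    # one ':' per dimension
    idx[axis] = index               # put the concrete index on the requested axis
    return a[tuple(idx)]

A = np.arange(24).reshape(2, 3, 4)
print(np.array_equal(take_slice(A, 1, 2), A[:, :, 1]))   # True
print(take_slice(A, 0, 1).shape)                          # (2, 4), i.e. A[:, 0, :]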
0 | 52,994,178 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2018-10-25T16:21:00.000 | 2 | 1 | 0 | How to store np arrays into psql database and django | 52,993,954 | 0.379949 | python,numpy,psql | json.dumps(np_array.tolist()) is the way to convert a numpy array to JSON. np.array(json.loads(json.dumps(np_array.tolist()))) is how you get it back. | I develop an application that will be used for running simulation and optimization over graphs (for instance Travelling salesman problem or various other problems).
Currently I use 2d numpy array as graph representation and always store list of lists and after every load/dump from/into DB I use function np.fromlist, np.tolist() functions respectively.
Is there supported way how could I store numpy ndarray into psql? Unfortunately, np arrays are not JSON-serializable by default.
I also thought to convert numpy array into scipy.sparse matrix, but they are not json serializable either | 0 | 1 | 975 |
0 | 53,008,270 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-25T17:48:00.000 | 0 | 1 | 0 | Training in Keras with external evaluation function | 52,995,288 | 0 | python,unity3d,keras,deep-learning,ml-agent | After some talks at my university:
the setup won't work this way since I need to split the process.
I need the parameters of working agents to train the network based only on the level description(e.g. matrix like video game description language). To obtain the parametrized agents based on the actual level and the ground truth data(e.g. deviation from trajectory), one need to use reinforcement deep learning with a score function to obtain these parameters. Therefore Unity ML Agents might be useful. Afterwards, I can use the parameters settings and the correlating level data to train a network to yield the desired parameters based only on the level description. | Let me first describe the setup:
We have an autonomous agent in Unity, whose decisions are based on the perceived environment(level) and some pre-defined parameters for value mapping. Our aim is to pre-train the agents' parameters in a DNN. So the idea is basically to define an error metric to evaluate the performance of the agent in a Unity simulation (run the level, e.g. measure the deviation from the optimal trajectory = ground truth in unity). So based on the input level into the DNN, the network should train to output the params, the simulation is performed and the error is passed back to the network as the error value, like accuracy, so the network could train based on that error/performance.
Is there any way to perform the evaluation(comparison to the ground truth) during the training outside of Keras? Usually, one passes X data to the network, train stuff and compare it to the ground truth Y. This works fine for predictions, but I don't want to predict something. What I do want is to measure the deviation from the ground truth inside the simulation.
I know there is Unity ML Agents, but as far as I could read, the 'brain' controls the agent on runtime, i.e. update it on every frame and control the movement. What I want is to perform the whole simulation to update the params/weights of the network.
Best wishes. | 0 | 1 | 78 |
0 | 52,998,511 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-25T20:43:00.000 | 0 | 1 | 0 | Kernel size change in convolutional neural networks | 52,997,810 | 0 | python,tensorflow,neural-network,conv-neural-network,convolution | you need 64 kernel, each with the size of (32,5,5) .
The depth (number of channels) of the kernels, 32 in this case, or 3 for an RGB image, 1 for grayscale, etc., should always match the input depth, but the values are all the same.
E.g. if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1] and you want to convolve it with an input of depth N (N channels), you just copy this 3x3 kernel N times along the 3rd dimension. The following math is just like the 1-channel case: you multiply the kernel values with the values in all N channels that your kernel window is currently on, sum them, and get the value of just one entry or pixel. So what you get as output in the end is a matrix with 1 channel. How much depth do you want your matrix for the next layer to have? That is the number of kernels you should apply. Hence in your case it would be a kernel of size (64 x 32 x 5 x 5), which is actually 64 kernels with 32 channels each and the same 5x5 values in all channels.
("I am not a very confident english speaker hope you get what I said, it would be nice if someone edit this :)") | I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.
Convolutional layer with kernel_size = (5,5) with 32 output channels
new dimension of throughput = (32, 28, 28)
Max Pooling layer with pool_size (2,2) and step (2,2)
new dimension of throughput = (32, 14, 14)
If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels?
Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14). | 0 | 1 | 1,046 |
0 | 53,003,804 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-26T07:04:00.000 | 0 | 2 | 0 | How to get the dimension of tensors at runtime? | 53,003,231 | 0 | python,tensorflow | You have to make the tensors an output of the graph. For example, if showme_tensor is the tensor you want to print, just run the graph like that :
_showme_tensor = sess.run(showme_tensor)
and then you can just print the output as you print a list. If you have different tensors to print, you can just add them like that :
_showme_tensor_1, _showme_tensor_2 = sess.run([showme_tensor_1, showme_tensor_2]) | I can get the dimensions of tensors at graph construction time via manually printing shapes of tensors(tf.shape()) but how to get the shape of these tensors at session runtime?
The reason that I want shape of tensors at runtime is because at graph construction time shape of some tensors is coming as (?,8) and I cannot deduce the first dimension then. | 0 | 1 | 885 |
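To get the dimensions themselves at runtime (rather than printing whole tensors), the same session-run idea can be combined with tf.shape; this sketch assumes the TensorFlow 1.x API and a placeholder with an unknown first dimension, as in the question:
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 8])   # static shape shows (?, 8)
shape_op = tf.shape(x)                            # dynamic shape, evaluated at run time

with tf.Session() as sess:
    print(sess.run(shape_op, feed_dict={x: np.zeros((5, 8))}))   # [5 8]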
0 | 53,010,613 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-10-26T09:32:00.000 | 0 | 1 | 0 | What does "a","c","f" mean in Spyder-methods (Python) | 53,005,691 | 1.2 | python,function,class,methods,spyder | So what is the difference between (f), (a), (c) ? My first guess would be "function", "attributes", "class" but I'm not entirely sure
(Spyder maintainer here) This is the right interpretation. | So, I know this might not be the place where to ask, but I can simply not figure it out! When im using Spyder and say numpy (np) when I type np. a lot of options pop up - I know most of them are functions related to np, but I kinda struggling to figure out exactly what the different calls are; they all have one letter to the very left of the name, e.g "(f) all", "(a) base".
So what is the difference between (f), (a), (c) ? My first guess would be "function", "attributes", "class" but I'm not entirely sure | 0 | 1 | 419 |
0 | 53,015,631 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-26T19:58:00.000 | 1 | 1 | 0 | Why is a learning curve necessary to determine if a neural network has high bias or variance? | 53,015,550 | 1.2 | python,tensorflow,machine-learning,neural-network | Yes, there is, but it's not for spotting overfitting only. But anyway, plotting is just fancy way to see numbers, and sometimes it gives you insights. If you are monitoring loss on train/validation simultaneously – you're looking at same data, obviously.
Regarding Andrew's ideas – I suggest looking into Deep Learning course by him, he clarifies that in modern applications (DL + a lot of data, and I believe, this is your case) bias is not an opposite of variance. | In Andrew Ng's machine learning course it is recommended that you plot the learning curve (training set size vs cost) to determine if your model has a high bias or variance.
However, I am training my model using Tensorflow and see that my validation loss is increasing while my training loss is decreasing. It's my understanding that this means my model is overfitting and so I have high variance. Is there still a reason to plot the learning curve? | 0 | 1 | 141 |
0 | 53,015,879 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-10-26T20:05:00.000 | 1 | 1 | 0 | What is the difference between max(my_array) and my_array.max() | 53,015,623 | 0.197375 | python,numpy,methods,syntax | As people have stated in the comments of your question, they are referencing two different functions.
max(my_array) is a built-in Python function available for any sequence type in Python.
my_array.max(), on the other hand, is a method of the object. In this case my_array is a NumPy array; a plain Python list does not have this method. The method will give a speed improvement over max(my_array) whenever you are working with NumPy arrays.
As a rule of thumb, if you call variable.someMethod(), the function is a method specific to the object's class. If it is called as function(variable), the function is either part of the Python distribution or of the file/class you are working with.
What is the difference between
max(my_array)and
my_array.max() ?
Is this one of those cases where one is 'syntactic sugar' for the other? Also, why does the first one work with a Python list, but the second doesn't? | 0 | 1 | 79 |
0 | 53,020,115 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-10-26T21:45:00.000 | 2 | 1 | 0 | DenseNet in Tensorflow | 53,016,653 | 1.2 | python,tensorflow | No, tf.layers.dense implements what is more commonly known as a fully-connected layer, i.e. the basic building block of multilayer perceptrons. If you want dense blocks, you will need to to write your own implementation or use one of those you found on Github. | I am fairly new to tensorflow and I am interested in developing a DeseNet Architecture. I have found implementations from scratch on Github. I was wondering if the tensorflow API happen to implement the dense blocks. Is tensorflow's tf.layers.dense the same as the dense blocks in DenseNet?
Thanks! | 0 | 1 | 354 |
0 | 69,335,685 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-10-27T10:53:00.000 | 0 | 1 | 0 | python - pandas dataframe to powerpoint chart backend | 53,021,158 | 0 | python,python-3.x,pandas,powerpoint | you will need to read a bit about python-pptx.
You need the chart's index and the slide's index. Once you know them,
get your chart object like this:
chart = presentation.slides[slide_index].shapes[shape_index].chart
Then replace the data:
chart.replace_data(new_chart_data)
reset_chart_data_labels(chart)
then when you save your presentation it will have updated the data.
usually, I uniquely name all my slides and charts in a template and then I have a function that will get me the chart's index and slide's index. (basically, I iterate through all slides, all shapes, and find a match for my named chart).
Here is a screenshot where I name a chart: https://i.stack.imgur.com/aFQwb.png. Naming slides is a bit more tricky and I will not delve into that, but all you need is the slide_index: just count the slides 0-based and you have the slide's index. | I have a pandas dataframe result which stores a result obtained from a sql query. I want to paste this result onto the chart backend of a specified chart in the selected presentation. Any idea how to do this?
P.S. The presentation is loaded using the module python-pptx | 0 | 1 | 1,226 |
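A possible sketch of feeding a pandas DataFrame into chart.replace_data using python-pptx's CategoryChartData; the file name, slide/shape indices and column names are made up, and reset_chart_data_labels in the answer above is the answerer's own helper, not part of python-pptx:
import pandas as pd
from pptx import Presentation
from pptx.chart.data import CategoryChartData

result = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [10, 12, 9]})

prs = Presentation("report.pptx")
chart = prs.slides[0].shapes[0].chart            # locate the chart as described in the answer

chart_data = CategoryChartData()
chart_data.categories = result["month"].tolist()
chart_data.add_series("sales", result["sales"].tolist())

chart.replace_data(chart_data)
prs.save("report_updated.pptx")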
0 | 53,070,814 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-29T12:47:00.000 | 1 | 2 | 0 | partially define initial centroid for scikit-learn K-Means clustering | 53,045,859 | 0.099668 | python,machine-learning,scikit-learn,cluster-analysis,k-means | That is a very nonstandard variation of k-means. So you cannot expect sklearn to be prepared for every exotic variation. That would make sklearn slower for everybody else.
In fact, your approach is more like certain regression approaches (predicting the last value of the cluster centers) rather than clustering. I also doubt the results will be much better than simply setting the last value to the average of all points assigned to the cluster center using the other 6 dimensions only. Try partitioning your data based on the nearest center (ignoring the last column) and then setting the last column to be the arithmetic mean of the assigned data.
However, sklearn is open source.
So get the source code, and modify k-means: initialize the last component randomly, and while running k-means only update the last column. It's easy to modify it this way, but it's very hard to design an efficient API that allows such customizations through trivial parameters, so use the source code to customize at this level.
Method for initialization:
‘k-means++’ : selects initial cluster centers for k-mean clustering in a smart way to speed up convergence. See section Notes in k_init for more details.
If an ndarray is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.
My data has 10 (predicted) clusters and 7 features. However, I would like to pass array of 10 by 6 shape, i.e. I want 6 dimensions of centroid of be predefined by me, but 7th dimension to be iterated freely using k-mean++.(In another word, I do not want to specify initial centroid, but rather control 6 dimension and only leave one dimension to vary for initial cluster)
I tried to pass 10x6 dimension, in hope it would work, but it just throw up the error. | 0 | 1 | 1,975 |
0 | 53,062,724 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-30T10:45:00.000 | 0 | 2 | 0 | Calculate mean of column for each row excluding the row for which mean is calculated | 53,062,585 | 0 | python,pandas,dataframe,machine-learning,data-science | You can use dataframe["ColumnName"].mean() for a single column, or dataframe.describe() for all columns | I need to calculate the mean of a certain column in DataFrame, so that the mean for each row is calculated excluding the value of the row for which it's calculated.
I know I can iterate each row by index, dropping each row by index in every iteration, and then calculating mean. I wonder if there's a more efficient way of doing it. | 0 | 1 | 336 |
0 | 53,064,651 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-10-30T12:06:00.000 | 1 | 1 | 0 | Racecar image tagging | 53,063,944 | 1.2 | python,tensorflow,computer-vision,object-detection,image-recognition | The best approach would be to use all 3 methods as an ensamble. You train all 3 of those models, and pass the input image to all 3 of them. Then, there are several ways how you can evaluate output.
You can sum up the probabilities for all of the classes for all 3 models and then draw a conclusion based on the highest probability.
You can get prediction from every model and decide based on number of votes: 1. model - class1, 2. model - class2, 3. model - class2 ==> class2
You can do something like weighted decision making. So, let's say the first model is the best and most robust one, but you don't trust it 100% and want to see what the other models say. Then you can weight the output of the first model with 0.6, and the outputs of the other two models with 0.2 each.
I hope this helps :) | I am working on a system to simplify our image library which grows anywhere from 7k to 20k new pictures per week. The specific application is identifying which race cars are in pictures (all cars are similar shapes with different paint schemes). I plan to use python and tensorflow for this portion of the project.
My initial thought was to use image classification to classify the image by car; however, there is a very high probability of the picture containing multiple cars. My next thought is to use object detection to detect the car numbers (present in fixed location on all cars [nose, tail, both doors, and roof] and consistent font week to week). Lastly there is the approach of object recognition of the whole car. This, on the surface, seems to be the most practical; however, the paint schemes change enough that it may not be.
Which approach will give me the best results? I have pulled a large number of images out for training, and obviously the different methods require very different training datasets. | 0 | 1 | 43 |
0 | 53,066,538 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-10-30T14:18:00.000 | 1 | 1 | 0 | When to use tensorflow estimators? | 53,066,376 | 0.197375 | python,tensorflow | This is a very opinionated answer but I will still write it:
The Estimator-API was developed to simplify building and sharing models. You could compare it with Keras and in fact Estimators is built with tf.keras.layers so one could say it is a simplification of a simplification.
This is obviously good for beginners or people who come from other fields (as people working with ML often do), but it can also limit the things you can do.
As a general rule of thumb I would use Estimators if you want to work on or share a model with people that do not have a good CS background but want to get going either way. | I have a general tensorflow question about when to use estimators. I feel sometimes estimators are not convenient to build something, since we need to meet some fixed requirements when building the graph. On the other hand, using lower level api can be tedious sometimes. Therefore, I want to ask when it is proper to use estimators and when it is not. Thanks! | 0 | 1 | 52 |
0 | 66,114,558 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2018-10-30T17:56:00.000 | 0 | 2 | 0 | Get list of Keras variables | 53,070,199 | 0 | python-3.x,tensorflow,keras | To get the variable's name you need to access it from the weight attribute of the model's layer. Something like this:
names = [weight.name for layer in model.layers for weight in layer.weights]
And to get the shape of the weight:
weights = [weight.shape for weight in model.get_weights()] | I'd like to compare variables in a Keras model with those from a TensorFlow checkpoint. I can get the TF variables like this:
vars_in_checkpoint = tf.train.list_variables(os.path.join("./model.ckpt"))
How can I get the Keras variables to compare from my model? | 0 | 1 | 2,810 |
0 | 53,163,169 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2018-10-30T17:56:00.000 | 2 | 2 | 0 | Get list of Keras variables | 53,070,199 | 0.197375 | python-3.x,tensorflow,keras | You can get the variables of a Keras model via model.weights (list of tf.Variable instances). | I'd like to compare variables in a Keras model with those from a TensorFlow checkpoint. I can get the TF variables like this:
vars_in_checkpoint = tf.train.list_variables(os.path.join("./model.ckpt"))
How can I get the Keras variables to compare from my model? | 0 | 1 | 2,810 |
0 | 55,940,953 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2018-10-31T05:57:00.000 | 0 | 1 | 0 | Google Cloud Platform int64_field_0 | 53,077,155 | 0 | python,csv | Per the comments, using pandas DataFrame's df.to_csv(filename, index=False) resolved the issue. | We are getting an extra column 'int64_field_0' while loading data from CSV to BigTable in GCP. Is there any way to avoid this first column. We are using the method load_table_from_file and setting option AutoDetect Schema as True. Any suggestions please. Thanks. | 0 | 1 | 182
0 | 53,079,992 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-10-31T09:02:00.000 | 1 | 2 | 0 | adding noise to an array. Is it addition or multiplication? | 53,079,698 | 1.2 | python,numpy,noise,conceptual | Well as you have said it yourself, the problem is that you don't know what you want.
Both methods will increase the entropy of the original data.
What is the purpose of your task?
If you want to simulate something like sensor noise, the addition will do just fine.
You can try both and observe what happens to the distribution of your original data set after the application. | I have some code that just makes some random noise using the numpy random normal distribution function and then I add this to a numpy array that contains an image of my chosen object. I then have to clip the array to between values of -1 and 1.
I am just trying to get my head round whether I should be adding this to the array and clipping or multiplying the array by the noise and clipping?
I can't conceptually understand which I should be doing. Could someone please help?
Thanks | 0 | 1 | 424 |
0 | 53,080,222 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-10-31T09:02:00.000 | 2 | 2 | 0 | adding noise to an array. Is it addition or multiplication? | 53,079,698 | 0.197375 | python,numpy,noise,conceptual | It depends what sort of physical model you are trying to represent; additive and multiplicative noise do not correspond to the same phenomenon. Your image can be considered a variable that changes through time. Noise is an extra term that varies randomly as time passes. If this noise term depends on the state of the image in time, then the image and the noise are correlated and noise is multiplicative. If the two terms are uncorrelated, noise is additive. | I have some code that just makes some random noise using the numpy random normal distribution function and then I add this to a numpy array that contains an image of my chosen object. I then have to clip the array to between values of -1 and 1.
I am just trying to get my head round whether I should be adding this to the array and clipping or multiplying the array by the noise and clipping?
I can't conceptually understand which I should be doing. Could someone please help?
Thanks | 0 | 1 | 424 |
0 | 53,091,954 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-10-31T14:50:00.000 | 5 | 1 | 0 | What's the reason for the weights of my NN model don't change a lot? | 53,086,166 | 1.2 | python,machine-learning,neural-network,torch | There are almost always many locally optimal points in a problem, so one thing you can't say, especially in high-dimensional feature spaces, is which optimal point your model parameters will fit into. One important point here is that for every set of weights your model computes to reach an optimal point, because the weights are real-valued, there is an infinite set of weights for that optimal point; the proportion of the weights to each other is the only thing that matters, because you are trying to minimize the cost, not to find a unique set of weights with a loss of 0 for every sample. Every time you train, you may get a different result depending on the initial weights. When the weights change very little and keep almost the same ratio to each other, it means your features are highly correlated (i.e. redundant), and since you are getting very high accuracy with only a small change in the weights, the only thing I can think of is that your dataset's classes are far away from each other. Try removing features one at a time, train, and check the results; if accuracy stays good, continue removing another one until you hopefully reach a 3- or 2-dimensional space in which you can plot your data, visualize how the data points are distributed, and make some sense out of this.
EDIT: A better approach is to use PCA for dimensionality reduction instead of removing features one by one. | I am training a neural network model, and my model fits the training data well. The training loss decreases stably. Everything works fine. However, when I output the weights of my model, I found that they didn't change much since random initialization (I didn't use any pretrained weights; all weights are initialized by default in PyTorch). All dimensions of the weights only changed about 1%, while the accuracy on the training data climbed from 50% to 90%.
What could account for this phenomenon? Is the dimensionality of the weights too high, so that I need to reduce the size of my model? Or are there any other possible explanations?
I understand this is quite a broad question, but I think it's impractical for me to show my model and analyze it mathematically here. So I just want to know what could be the general / common causes of this problem. | 0 | 1 | 157
0 | 53,088,607 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-10-31T16:44:00.000 | 1 | 1 | 0 | Why does opencv on Canopy downgrade numpy, scipy, and other packages when I try to install it? | 53,088,354 | 0.197375 | python-2.7,opencv,package,enthought,canopy | You haven't provided any version or platform information. But perhaps you are using an old Canopy version (current is 2.1.9), or perhaps you are using the subscriber-only "full" installer, which is only intended for airgapped or other non-updateable systems. Otherwise, the currently supported version of opencv is 3.2.0 (build 3.2.0-4) which depends on numpy 1.13.3, which is the currently supported version of numpy. | On my package manager for canopy, every time I try to download opencv it downgrades several other important packages. I am then not able to upgrade those same packages or run my code. How can I download opencv without downgrading my other packages? | 0 | 1 | 113 |
0 | 53,114,849 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-01T00:13:00.000 | 2 | 1 | 0 | Deeplearning with electroencephalography (EEG) data | 53,093,576 | 1.2 | python,deep-learning,neuroscience | It depends on what you want to test. A test set is used to estimate the generalization (i.e. performance on unseen data). So the question is:
Do you want to estimate the generalization to unseen data from the same participants (whose data was used to train the classifier)?
Or do you want to estimate the generalization to unseen participants (the general population)?
This really depends on your goal or the claim you are trying to make. I can think of situations for both approaches:
Think of BCIs which need to be retrained for every user. Here, you would test on data from the same individual.
On the other hand, if you make a very general claim (e.g. I can decode some relevant signal from a certain brain region across the population) then having a test set consisting of participants which were not included in the training set would lend much stronger support to your claim. (The question is whether this works, though.) | I am making a convolutional network model with which I want to classify EEG data. The data is an experiment where participants are evoked with images of 3 different classes with 2 subclasses each. To give a brief explanation about the dataset size, a subclass has ±300 epochs of a given participant (this applies for all the subclasses).
Object
Color
Number
Now my question is:
I have 5 participants in my training dataset, and I took 15% of each participant's data and put it in the testing dataset. Can I consider that 15% as unseen data even though the same participants were used to train the model?
Any input is welcome! | 0 | 1 | 174 |
0 | 53,094,747 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-01T03:08:00.000 | 0 | 1 | 0 | cv2 show video stream & add overlay after another function finishes | 53,094,695 | 1.2 | python,cv2 | A common approach would be to create a flag that allows the detection algorithm to run only once every couple of frames and to save the predicted regions of interest to a list, whilst creating bounding boxes for every frame.
So, for example, with a face detection algorithm: process every 15th frame to detect faces, but in every frame create a bounding box from the predictions, even though the predictions only get updated every 15 frames.
Another approach could be to add an object tracking layer. Run your heavy algorithm to find the ROIs and then use the object tracking library to hold on to them until the next time the detection algorithm runs.
Hope this made sense. | I am currently working on a real time face detection project.
What I have done is that I capture the frame using cv2, do the detection, and then show the result using cv2.imshow(), which results in a low fps.
I want a high fps video showing on the screen without lag and a low fps detection bounding box overlay.
Is there a solution to show the real time video stream (with the last detection result's bounding box) and, once a new detection is finished, show the new bounding box, without the background video being delayed by the detection function?
Any help is appreciated!
Thanks! | 0 | 1 | 413 |
0 | 53,096,595 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-01T04:04:00.000 | 0 | 1 | 0 | How to choose the right neural network in a binary classification problem for a unbalanced data? | 53,095,061 | 0 | python-3.x,keras | First of all, two features is a really small amount. Neural Networks are highly non-linear models with a really really high amount of freedom degrees, thus if you try to train a network with more than just a couple of networks it will overfit even with balanced classes. You can find more suitable models for a small dimensionality like Support Vector Machines in scikit-learn library.
Now, about unbalanced data: the most common techniques are Undersampling and Oversampling. Undersampling is basically training your model several times with a fraction of the dataset that contains the non-dominant class and a random sample of the dominant one, so that the ratio is acceptable, whereas oversampling consists of generating artificial data to balance the classes. In most cases undersampling works better.
Also, when working with unbalanced data it's quite important to choose the right metric based on what is more important for the problem (is minimizing false positives more important than false negatives, etc.). | I am using a Keras sequential model for binary classification. But my data is unbalanced. I have 2 feature columns and 1 output column (1/0). I have 10000 rows of data. Among those, only 20 result in output 1; all others are 0. Then I extended the data size to 40000. Still only 20 result in output 1; all others are 0. Since the data is unbalanced (0 dominates 1), which neural network will be better for correct prediction? | 0 | 1 | 63
0 | 53,112,148 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-01T22:25:00.000 | 0 | 2 | 0 | GROUPBY with showing all the columns | 53,110,240 | 0 | python,pandas,dataframe,group-by | Did you try this : d_copy.groupby(['CITYS','MODELS']).mean() to have the average percentage of a model by city.
Then if you want to catch the percentages you have to convert it in DF and select the column : pd.DataFrame(d_copy.groupby(['CITYS','MODELS']).mean())['PERCENTAGE'] | I want to do a groupby of my MODELS by CITYS with keeping all the columns where i can print the percentage of each MODELS IN THIS CITY.
I put my dataframe in PHOTO below.
And I have written this code, but I don't know how to proceed:
for name,group in d_copy.groupby(['CITYS'])['MODELS']: | 0 | 1 | 60 |
0 | 53,119,543 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-02T13:24:00.000 | 2 | 1 | 0 | Autoencoder Decoded Output | 53,119,402 | 1.2 | python,tensorflow,autoencoder | I guess the question is whether your returned signal is a faithful representation of the input signal (but it's just constrained to the range 0 to 1)?
If so, you could simply multiply it by 79, and then subtract 47.
We'd need to see code if it's more than just a scaling issue. | I am trying to build an AutoEncoder, where I am trying to de-noise the signal.
Now, for example, the amplitude range of my input signal varies between -47 and +32. But when I get the decoded (reconstructed) signal, it only ranges between 0 and +1 in amplitude.
How can I get my reconstructed signal with the same amplitude range of -47 to +32? | 0 | 1 | 38 |
0 | 53,125,246 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-11-02T19:35:00.000 | 1 | 1 | 0 | K-Nearest Neighbors find all ties | 53,124,843 | 0.197375 | python,algorithm,pandas,numpy,scikit-learn | In theory, all points in the set may tie, making the problem a different one. Indeed, the K nearest neighbors can be reported in time O(Log N + K) in the absence of ties, whereas ties can imply K = O(N) making any solution O(N).
In practice, if the coordinates are integer, the ties will be a rare event, unless the problem has a special structure. And in floating-point, ties are virtually impossible.
IMO, handling ties will kill efficiency, for no benefit. | I'm currently using sklearn to compute all the k-nearest neighbors from a dataset. Say k = 10. The problem I'm having is sklearn will only return the 10 nearest neighbors and none of the other data points that may tie for the 10th nearest neighbor in terms of distance. I was wondering is there any efficient way to find any other points that may tie the kth nearest neighbor in terms of distance? | 0 | 1 | 203 |
0 | 53,135,661 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-03T19:48:00.000 | 1 | 1 | 0 | Which classification model do you suggest for predicting a credit score? | 53,134,930 | 1.2 | python,machine-learning,classification | Do you have credit scores? Without labeled data I think you might consider reformulating the problem.
If you do, then you can implement any number of regression algorithms from OLS all the way up to an ANN. Rather than look for the "one true" algorithm, many projects implement TPOT or grid search as part of model selection. | I have a data set that contains information about whether medium-budget companies can get loans. There are data on the data set that approximately 38,000 different companies will receive loans. And based on this data, I'm trying to estimate each company's credit score. What would be your suggestion? | 0 | 1 | 155 |
0 | 53,145,276 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-04T20:33:00.000 | 0 | 1 | 0 | f1 or accuracy scoring after downsampling - classification, svm - Python | 53,145,190 | 1.2 | python,classification | If you've rebalanced your data, then it's not unbalanced anymore and I see no problem with using accuracy as the success metric.
Accuracy can mislead you in very skewed datasets, but since it isn't skewed anymore, it should work. | I have a dataset consisting of 15 columns and 3000 rows to train a model for a binary classification.
There is an imbalance for y (1:2). Both outcomes (0,1) are equally important.
After downsampling (because the parameter class_weight = balanced didn't work well), I used the parameter scoring = "f1", because I read that, next to the ROC curve, this was the best measurement of performance.
The question is:
Do I treat my data after downsampling still as unbalanced and therefore apply f1 or can I go back to normal accuracy?
f1 = 2 * (precision * recall) / (precision + recall)
Cheers in advance! :) | 0 | 1 | 122 |
0 | 53,565,096 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-05T05:57:00.000 | 1 | 1 | 0 | Does catboost implements xgboost (extreme gradient boosting) or a simple gradient boosting? | 53,149,072 | 0.197375 | python,xgboost,catboost | Gradient boosting is a meta-algorithm. There is no simple gradient boosting. Each boosting library uses their own unique algorithm to search for regression trees, as a result, we obtain different results.
Extreme gradient boosting is just an implementation of standard gradient boosting on decision trees from xgboost, with some heuristics/regularizations to improve model quality and a special scheme to learn regression trees.
CatBoost is another implementation of standard gradient boosting with another set of regularizations/heuristics.
So these are different algorithms. | On their website they say 'gradient boosting', but it seems people here compare it to the 'xgboost' algorithm. I would like to know whether it is a real extreme gradient boosting algorithm.
thanks | 0 | 1 | 102 |
0 | 53,159,976 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2018-11-05T18:14:00.000 | 1 | 2 | 0 | What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images? | 53,159,930 | 1.2 | python,keras,conv-neural-network | If you want to convolve along the dimension of your channels, you should add a singleton dimension in the position of channel. If you don't want to convolve along the dimension of your channels, you should use a 2D CNN. | I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands.
Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image.
I want to input small windows of this image (with dimensions W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels).
It seems to me I am missing one of the spatial dimensions, how do I convert my image array into a 5-dimensional array without losing any information?
I am using python and Keras for the above. | 0 | 1 | 138 |
0 | 53,163,769 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-11-05T18:14:00.000 | 1 | 2 | 0 | What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images? | 53,159,930 | 0.099668 | python,keras,conv-neural-network | What you want is a 2D CNN, not a 3D one. A 2D CNN already supports multiple channels, so you should have no problem using it with a hyperspectral image. | I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands.
Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image.
I want to input small windows of this image (with dimensions W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels).
It seems to me I am missing one of the spatial dimensions, how do I convert my image array into a 5-dimensional array without losing any information?
I am using python and Keras for the above. | 0 | 1 | 138 |
0 | 53,161,713 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-11-05T18:52:00.000 | 0 | 1 | 0 | Applying Network flows | 53,160,428 | 1.2 | java,python,algorithm,computer-science,network-flow | Create vertices for each student and each school. Draw an edge with capacity 1 from each student to each school that they can attend according to your distance constraint. Create a source vertex with edges to each student with a capacity of 1. Create a sink vertex with edges coming in from each school with capacities equal to each school's maximum capacity.
Running a standard max-flow algorithm will match as many students as possible to schools. Not every student is guaranteed to get to go to school, of course, given the constraints.
This is basically a modification of the standard maximum bipartite matching algorithm. The main difference is that the sinks have capacities greater than 1, which allows multiple students to be matched to a school. | So I've recently started looking into network flows (Max flow, min cuts, etc) and the general problems for network flow always involve assigning "n" of something to "k" of another thing. For example, how would I set up a network flow for "n" children in a city that has "k" schools such that the children's homes are within x kilometres of the school (for simplicity, let's just say 1km)?
What if I were to further add limitations, for example, say each school cannot have more than 100 students? Or 300 students? Could someone help me with how I would initially set up my algorithm to approach problems like these (would appreciate any references too)? They tend to show up on past midterms/exams, so I just wanted to be prepared | 0 | 1 | 146 |
0 | 53,161,823 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-11-05T19:38:00.000 | 1 | 3 | 0 | How to return a new dataframe excluding certain columns? | 53,161,078 | 0.066568 | python,pandas,numpy,dataframe,indexing | To build on @sven-harris answer.
List the columns:
remove = [x for x in df.columns if 'job' in x or 'birth' in x]
remove += ['name', 'userID', 'IgID']
df = df.drop(remove, axis=1) # axis=1 to drop columns, 0 for rows. | I am trying to take a dataframe df and return a new dataframe excluding any columns with the word 'job' in its name, excluding any columns with the string 'birth' in its name, and excluding these columns: name, userID, lgID.
How can I do that? | 0 | 1 | 149 |
0 | 55,496,917 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-11-05T21:51:00.000 | 1 | 1 | 0 | Joblib persistence and Pandas | 53,162,741 | 1.2 | python,python-3.x,pandas,parallel-processing,joblib | Since Pandas data frames are built on Numpy arrays, yes, they will be persisted.
Joblib implements its optimized persistence by hooking in to the pickle protocol. Anything that includes numpy arrays in its pickled representation will benefit from Joblib's optimizations. | There is good documentation on persisting Numpy arrays in Joblib using a memory-mapped file.
In recent versions, Joblib will (apparently) automatically persist and share Numpy arrays in this fashion.
Will Pandas data frames also be persisted, or would the user need to implement persistence manually? | 0 | 1 | 488 |
0 | 53,169,648 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2018-11-06T01:51:00.000 | 3 | 1 | 0 | Difference between Conv3d vs Conv2d | 53,164,733 | 0.53705 | python,tensorflow,neural-network,deep-learning,convolution | If you have a stack of images, you have a video. You can not have two input forms. You have either images or videos. For the video case you can use 3D convolution and 2D convolution is not defined for it. If you stack the channels as you mentioned it (3N) the 2D convolution will interpret the stack as one image with a lot of channels, but not as stack.
Note here that a 2D convolution with (batch, H, W, Channels) is the same as a 3D convolution with (batch, H, W, Channels, 1). | I am a little confused about the difference between the conv2d and conv3d functions.
For example, suppose I have a stack of N images with height H, width W, and 3 RGB channels. The input to the network can take two forms:
form1: (batch_size, N, H, W, 3) this is a rank 5 tensor
form2: (batch_size, H, W, 3N ) this is a rank 4 tensor
The question is: if I apply conv3d with M filters of size (N,3,3) to form1 and apply conv2d with M filters of size (3,3) to form2,
do they perform basically the same feature operations? I think both of these forms convolve in the temporal and spatial dimensions.
I really appreciate if anyone can help me figure this out. | 0 | 1 | 10,046 |
0 | 53,166,406 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-11-06T05:41:00.000 | 5 | 1 | 0 | Family tree in Python | 53,166,322 | 0.761594 | python,algorithm,family-tree | There's plenty of ways to skin a cat, but I'd suggest to create:
A Person class which holds relevant data about the individual (gender) and direct relationship data (parents, spouse, children).
A dictionary mapping names to Person elements.
That should allow you to answer all of the necessary questions, and it's flexible enough to handle all kinds of family trees (including non-tree-shaped ones). | I need to model a four generational family tree starting with a couple. After that if I input a name of a person and a relation like 'brother' or 'sister' or 'parent' my code should output the person's brothers or sisters or parents. I have a fair bit of knowledge of python and self taught in DSA. I think I should model the data as a dictionary and code for a tree DS with two root nodes(i.e, the first couple). But I am not sure how to start. I just need to know how to start modelling the family tree and the direction of how to proceed to code. Thank you in advance! | 0 | 1 | 6,151 |
0 | 53,184,428 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-11-06T07:03:00.000 | 0 | 1 | 0 | Tensorflow MixtureSameFamily and gaussian mixture model | 53,167,161 | 0 | python,tensorflow,gmm | I found an answer to the above question thanks to my colleague.
The 4 components of the Gaussian mixture had such similar means that the mixture appears to have only one mode.
If I put four explicitly different values as the means of the MixtureSameFamily class, I could get a plot of a Gaussian mixture with 4 different modes.
Thank you very much for reading this. | I am really new to Tensorflow as well as gaussian mixture model.
I have recently used the tensorflow.contrib.distribution.MixtureSameFamily class for predicting a probability density function derived from a Gaussian mixture of 4 components.
When I plotted the predicted density function using "prob()" function as Tensorflow tutorial explains, I found the plotted pdf with only one mode. I expected to see 4 modes as the mixture components are 4.
I would like to ask whether Tensorflow uses any global mode predicting algorithm in their MixtureSameFamily class. If not, I would also like to know how MixtureSameFamily class forms the pdf with statistical values.
Thank you very much. | 0 | 1 | 339 |
0 | 53,173,320 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-11-06T10:07:00.000 | 0 | 1 | 0 | Is there any good way to read the content of a Spark RDD into a Dask structure | 53,169,690 | 1.2 | python,pyspark,dask,dask-distributed,fastparquet | I solved this by doing the following
Having a Spark RDD with a list of custom objects as Row values, I created a version of the RDD where I serialised the objects to strings using cPickle.dumps. I then converted this RDD to a simple DF with string columns and wrote it to parquet. Dask is able to read parquet files with a simple structure. I then deserialised with cPickle.loads to get the original objects. | Currently the integration between Spark structures and Dask seems cumbersome when dealing with complicated nested structures. Specifically, dumping a Spark DataFrame with a nested structure to be read by Dask does not seem very reliable yet, although the parquet loading is part of a large ongoing effort (fastparquet, pyarrow);
so my follow-up question - let's assume that I can live with doing a few transformations in Spark and transform the DataFrame into an RDD that contains custom class objects; is there a way to reliably dump the data of a Spark RDD with custom class objects and read it in a Dask collection? Obviously you can collect the RDD into a Python list, pickle it, and then read it as a normal data structure, but that removes the opportunity to load larger-than-memory datasets. Could something like the Spark pickling be used by Dask to load a distributed pickle? | 0 | 1 | 460
0 | 53,183,497 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-11-07T02:17:00.000 | 0 | 2 | 0 | Pit in LSTM programming by python | 53,182,773 | 1.2 | python-3.x,tensorflow,keras,lstm,rnn | No, samples is different from batch_size. samples is the total number of samples you would have. batch_size would be the size of each batch or the number of samples per each batch used in training, like by .fit.
For example, if samples=128 and batch_size=16, then your data would be divided into 8 batches with each having 16 samples inside during .fit call.
As another note, time_steps is the total number of time steps or observations within each sample. It does not make much sense to have it as 1 with an LSTM, as the main advantage of RNNs in general is to learn temporal patterns. With time_steps=1, there won't be any history to leverage. Here is an example that might help:
Assume that your job is to determine whether someone is active or not every hour by looking at their breathing rate and heart rate provided every minute, i.e. 2 features measured at 60 samples per hour. (This is just an example; use accelerometers if you really wanted to do this :)) Let's say you have 128 hours of labeled data. Then your input data would be of shape (128, 60, 2) and your output would be of shape (128, 1).
Here, you have 128 samples, 60 time steps or observations per sample, and two features.
Next you split the data into train, validation, and testing according to the samples. For example, your train, validation, and test data would be of shapes (96, 60, 2), (16, 60, 2), and (16, 60, 2), respectively.
If you use batch_size=16, your training, validation, and test data would have 6, 1, and 1 batches, respectively. | As we all know, if we want to train an LSTM network, we must reshape the train dataset with the function numpy.reshape(), and the reshaped result looks like [samples, time_steps, features]. However, the new shape is influenced by the original one. I have seen some blogs teaching LSTM programming that take 1 as time_steps, and if time_steps is another number, samples changes accordingly. My question is: is samples equal to batch_size?
X = X.reshape(X.shape[0], 1, X.shape[1]) | 0 | 1 | 74 |