| Column | Type | Min / min length | Max / max length |
|---|---|---|---|
| GUI and Desktop Applications | int64 | 0 | 1 |
| A_Id | int64 | 5.3k | 72.5M |
| Networking and APIs | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| Database and SQL | int64 | 0 | 1 |
| Available Count | int64 | 1 | 13 |
| is_accepted | bool (2 classes) | | |
| Q_Score | int64 | 0 | 1.72k |
| CreationDate | string | 23 | 23 |
| Users Score | int64 | -11 | 327 |
| AnswerCount | int64 | 1 | 31 |
| System Administration and DevOps | int64 | 0 | 1 |
| Title | string | 15 | 149 |
| Q_Id | int64 | 5.14k | 60M |
| Score | float64 | -1 | 1.2 |
| Tags | string | 6 | 90 |
| Answer | string | 18 | 5.54k |
| Question | string | 49 | 9.42k |
| Web Development | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 1 | 1 |
| ViewCount | int64 | 7 | 3.27M |
0 | 51,456,740 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-07-06T07:29:00.000 | 0 | 1 | 0 | How to find dot product of two very large matrices to avoid memory error? | 51,205,149 | 1.2 | python,numpy,machine-learning,scipy,logistic-regression | I have tried many things. I will be mentioning these here, if anyone needs them in future:
I had already cleaned up the data, e.g. removing duplicates and records irrelevant to the given problem.
I stored the large matrices, which hold mostly 0s, as sparse matrices.
I implemented gradient descent using the mini-batch method instead of the plain old batch method (theta.T dot X); a sketch of both ideas follows this record.
Now everything is working fine. | I am trying to learn ML using Kaggle datasets. In one of the problems (using Logistic regression) inputs and parameters matrices are of size (1110001, 8) & (2122640, 8) respectively.
I am getting a memory error while doing it in Python. This would be the same in any language, I guess, since the matrices are just too big. My question is: how do people multiply matrices in real-life ML implementations (since they would usually be this big)?
Things bugging me:
Some people on SO have suggested calculating the dot product in parts and then combining them. But even then the matrix would still be too big for RAM (9.42 TB? in this case).
And if I write it to a file, wouldn't it be too slow for optimization algorithms to read from the file and minimize the function?
Even if I do write it to a file, how would fmin_bfgs (or any optimization function) read from the file?
Also, the Kaggle notebook shows only 1 GB of storage available. I don't think anyone would allow TBs of storage space.
In my input matrix many rows have similar values for some columns. Can I use this to my advantage to save space? (like a sparse matrix for zeros in a matrix)
Can anyone point me to any real life sample implementation of such cases. Thanks! | 0 | 1 | 102 |
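A minimal sketch of the two ideas from the answer above (sparse storage plus mini-batch updates), with made-up data and hypothetical hyper-parameters:

```python
import numpy as np
from scipy import sparse

# Stand-in data: a mostly-zero design matrix stored in CSR form instead of dense.
X = sparse.csr_matrix(np.random.binomial(1, 0.01, size=(10000, 8)).astype(float))
y = np.random.binomial(1, 0.5, size=10000).astype(float)

theta = np.zeros(X.shape[1])
lr, batch_size = 0.1, 512          # assumed hyper-parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Mini-batch gradient descent: the full X.dot(theta) is never materialised at once.
for start in range(0, X.shape[0], batch_size):
    Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
    grad = Xb.T.dot(sigmoid(Xb.dot(theta)) - yb) / Xb.shape[0]
    theta -= lr * grad
```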
0 | 51,208,845 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-07-06T10:48:00.000 | 2 | 1 | 0 | Python, Numpy - Normalize a matrix/array | 51,208,733 | 1.2 | python,arrays,numpy,normalization | Consider trying to cluster objects with two numerical attributes A and B. Both are equally important. Attribute A can range from 0 to 1000 and attribute B can range from 0 to 5.
If you did not normalize A and B you would end up with attribute A completely overpowering attribute B when applying any standard distance metric. | This is most likely a dumb question but being a beginner in Python/Numpy I will ask it anyways. I have come across a lot of posts on how to Normalize an array/matrix in numpy. But I am not sure about the WHY. Why/When does an array/matrix need to be normalized in numpy? When is it used?
Normalize can have multiple meanings in different contexts. My question belongs to the field of Data Analytics/Data Science. What does normalization mean in this context? Or, more specifically, in what situations should I normalize an array?
The second part to this question is - What are the different methods of Normalization and can they be used interchangeably in all situations?
The third and final part - can Normalization be used for Arrays of any dimensions?
Links to any reference material (for beginners) will be appreciated. | 0 | 1 | 229 |
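A small illustration of the answer's point, with hypothetical values: the wide-range attribute dominates a Euclidean distance until both columns are min-max scaled.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two objects described by attribute A (range 0-1000) and attribute B (range 0-5).
X = np.array([[900.0, 1.0],
              [100.0, 5.0]])

print(np.linalg.norm(X[0] - X[1]))                  # ~800.01, driven almost entirely by A

X_scaled = MinMaxScaler().fit_transform(X)
print(np.linalg.norm(X_scaled[0] - X_scaled[1]))    # both attributes now contribute comparably
```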
0 | 54,048,062 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-07-06T16:32:00.000 | 0 | 3 | 0 | Python: Ensure each pairwise distance is >= some minimum distance | 51,214,496 | 0 | python,algorithm,numpy | 1. Compute pairwise distances between all points (sklearn.metrics.pairwise.euclidean_distances).
2. Compute the minimum of all pairwise distances.
3. If the minimum is already at least your required distance, exit.
4. Compute pairwise vectors between each two points so they point away from each other.
5. Divide by the pairwise distances so they are unit vectors.
6. For each point, compute how far to nudge it as a sum over the other points of: pairwise vector / pairwise distance squared * some constant (the force; the constant should be set so that when points are at the minimum distance the force is small but not infinitesimal).
7. Nudge all the points by the amount computed in 6. You should also cap the nudge at maybe minimum/10, so if 2 points end up in the same spot they don't get nudged an infinite distance apart.
8. Repeat until the minimum pairwise distance is large enough (a sketch of this loop follows this record).
Should converge in all cases but will expand canvas as necessary. Also it is memory and compute intensive for e.g. 200,000 points, but a sparse matrix that ignores large vectors/small forces makes it more tractable. | I have a 2D array of ~200,000 points, and wish to "jitter" these points such that the distance between any point and its nearest neighbor is >= some minimum value.
Before writing this algorithm from scratch, I wanted to ask: Are there any canonical approaches or often-used algorithms to implement this behavior? I thought it would make sense to start by reviewing those algorithms before setting out on this.
Any suggestions others can offer on this question would be greatly appreciated. | 0 | 1 | 625 |
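A rough sketch of the nudging loop described in the answer above, for a small point set; the force constant, iteration cap and nudge cap are assumptions to tune.

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

def jitter_apart(points, min_dist, force=0.01, max_iter=200):
    pts = points.astype(float).copy()
    for _ in range(max_iter):
        d = euclidean_distances(pts)
        np.fill_diagonal(d, np.inf)
        if d.min() >= min_dist:                       # steps 2-3: stop once spacing is reached
            break
        diff = pts[:, None, :] - pts[None, :, :]      # step 4: vectors pointing away from each other
        unit = diff / d[:, :, None]                   # step 5: unit vectors
        push = (unit / d[:, :, None] ** 2).sum(axis=1) * force   # step 6: summed repulsion
        np.clip(push, -min_dist / 10, min_dist / 10, out=push)   # step 7: cap the nudge
        pts += push
    return pts

spread = jitter_apart(np.random.rand(50, 2), min_dist=0.05)
```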
0 | 51,229,167 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-07-08T05:32:00.000 | 1 | 4 | 0 | How to find the RED color regions using OpenCV? | 51,229,126 | 0.049958 | python,opencv,colors | Please use HSV or HSL (hue, saturation, value/lightness) instead of RGB; in HSV, red can easily be detected by thresholding the hue channel. | I am trying to make a program where I detect red. However, sometimes it is darker than usual, so I can't just use one value.
What is a good range for detecting different shades of red?
I am currently using the range 128, 0, 0 - 255, 60, 60 but sometimes it doesn't even detect a red object I put in front of it. | 0 | 1 | 18,158 |
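A minimal sketch of the hue-thresholding idea from the answer above. In OpenCV's 8-bit HSV, hue runs 0-179 and red wraps around 0, so two ranges are combined; the exact bounds are assumptions to tune.

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                         # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower = cv2.inRange(hsv, np.array([0, 70, 50]), np.array([10, 255, 255]))     # reds near hue 0
upper = cv2.inRange(hsv, np.array([170, 70, 50]), np.array([179, 255, 255]))  # reds near hue 179
mask = cv2.bitwise_or(lower, upper)                   # non-zero where the pixel is "red"
```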
0 | 51,231,004 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-07-08T10:09:00.000 | 0 | 2 | 0 | escaped character in data for pandas | 51,230,915 | 0 | python,pandas | To process accented character please try encoding='iso-8859-1'. | I have a txt file to import in pandas but the data contains characters like L\E9on, which translates to Léon. How can I import this kind of data in pandas? I have tried using encoding as utf-8 and raw_unicode_escape. It still gives out an error multiple repeat at position 2. | 0 | 1 | 114 |
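If the bytes are Latin-1 encoded, as the answer suggests, passing that encoding to pandas is a one-liner; the file name here is hypothetical.

```python
import pandas as pd

df = pd.read_csv("data.txt", encoding="iso-8859-1")   # try latin-1 when utf-8 fails on \xE9-style bytes
```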
0 | 51,232,652 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2018-07-08T10:09:00.000 | 0 | 2 | 0 | escaped character in data for pandas | 51,230,915 | 0 | python,pandas | Interesting!!!
To reproduce this issue at my end, I created dummy data which consists of the text you specified and saved it as a .txt file.
I am able to import this txt file's content into a pandas data frame without any issues using the read_csv method:
df=pd.read_csv('spcl.txt') | I have a txt file to import in pandas but the data contains characters like L\E9on, which translates to Léon. How can I import this kind of data in pandas? I have tried using encoding as utf-8 and raw_unicode_escape. It still gives out an error multiple repeat at position 2. | 0 | 1 | 114 |
0 | 51,233,015 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-08T12:17:00.000 | 0 | 1 | 0 | Numpy - How to reshape odd array sizes? | 51,231,925 | 0 | python,arrays,numpy | You cannot reshape an array of n elements into an array of m elements if n and m are not equal. The operation of reshaping an array is really just the operation of obtaining a new view of the same array.
Your input array has n=1*7*7*2048=100352 elements while you're trying to reshape to an array with m=1*1*1*2048=2048 elements.
As pointed out by hpaulj, your problem lies in your input shape. From the shapes you mention, (1, 7, 7, 2048) and (1, 1, 1, 2048), it looks like your expected input should probably be extracted from a later layer (maybe after a global pooling stage) but we cannot say much more without more details on your models. | I have a TensorFlow model in which I want to pass an image to it in order for it to determine the object within the image.
However, the model is complaining of the shape of the image saying it wants it in the form (1, 1, 1, 2048) however it's receiving (1, 7, 7, 2048).
I have tried doing a numpy.reshape() on the image by doing either numpy.reshape(myObj, (1, 1, 1, 2048)) or numpy.reshape(myObj, (1, 1, 1, -1)). However, the former just complains that it can't reshape array of size 100352 to (1, 1, 1, 2048) and the latter resizes the last element of the array to the multiple of (7, 7, 2048), i.e. 100352.
How would one go about reshaping an odd array size, or is this not how Numpy shapes/reshapes work? Is there an alternative way to do what I'm asking for if not possible with Numpy? | 0 | 1 | 1,912 |
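As the answer notes, (1, 7, 7, 2048) cannot be reshaped to (1, 1, 1, 2048). One hedged way to obtain that shape is global average pooling over the two spatial axes, sketched here on a random stand-in array:

```python
import numpy as np

features = np.random.rand(1, 7, 7, 2048)              # stand-in for the model output
pooled = features.mean(axis=(1, 2), keepdims=True)    # average over the 7x7 spatial grid
print(pooled.shape)                                   # (1, 1, 1, 2048)
```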
0 | 64,356,403 | 0 | 0 | 0 | 0 | 2 | false | 31 | 2018-07-09T02:42:00.000 | 6 | 3 | 0 | Difference between Standard scaler and MinMaxScaler | 51,237,635 | 1 | python,data-science | Many machine learning algorithms perform better when numerical input variables are scaled to a standard range.
Scaling the data helps to normalize it within a particular range.
When MinMaxScaler is used, this is also known as normalization; it transforms all the values into the range (0 to 1).
The formula is x = (value - min) / (max - min).
StandardScaler performs standardization; for roughly normal data the resulting values mostly fall in the range (-3 to +3).
The formula is z = (x - mean) / std_deviation.
MMS= MinMaxScaler(feature_range = (0, 1)) ( Used in Program1)
sc = StandardScaler() ( In another program they used Standard scaler and not minMaxScaler) | 0 | 1 | 48,259 |
0 | 58,850,139 | 0 | 0 | 0 | 0 | 2 | false | 31 | 2018-07-09T02:42:00.000 | 71 | 3 | 0 | Difference between Standard scaler and MinMaxScaler | 51,237,635 | 1 | python,data-science | MinMaxScaler(feature_range = (0, 1)) will transform each value in the column proportionally within the range [0,1]. Use this as the first scaler choice to transform a feature, as it will preserve the shape of the dataset (no distortion).
StandardScaler() will transform each value in the column to range about the mean 0 and standard deviation 1, ie, each value will be normalised by subtracting the mean and dividing by standard deviation. Use StandardScaler if you know the data distribution is normal.
If there are outliers, use RobustScaler(). Alternatively you could remove the outliers and use either of the above 2 scalers (choice depends on whether data is normally distributed)
Additional note: if the scaler is fitted before train_test_split, data leakage will happen. Do fit the scaler after train_test_split (on the training data only).
MMS= MinMaxScaler(feature_range = (0, 1)) ( Used in Program1)
sc = StandardScaler() ( In another program they used Standard scaler and not minMaxScaler) | 0 | 1 | 48,259 |
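A small side-by-side of the two scalers discussed above, on made-up values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [2.0], [3.0], [10.0]])

print(MinMaxScaler(feature_range=(0, 1)).fit_transform(X).ravel())   # squeezed into [0, 1]
print(StandardScaler().fit_transform(X).ravel())                     # zero mean, unit variance
```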
0 | 51,252,563 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-09T15:39:00.000 | 1 | 1 | 0 | set size of Parquet output files in dask | 51,249,295 | 0.197375 | python,parquet,filesize,dask | The current behaviour is by design, allowing each worker to process a partition independently, and write to files which no other process is writing to. Otherwise, there would need to be some kind of lock, or some consolidation step after writing for each directory.
What you could do, is to use set_index to shuffle the data into one partition for each value of the column you want to partition by (perhaps using the divisions= keyword); now a to_parquet would result in a file for each of these values. If you wanted the files to automatically end up in the correct directories, and have the now-redundant index trimmed, you would want to use to_delayed() and create a delayed function, which takes one partition (a pandas dataframe) and writes it to the correct location. | when using the dask dataframe to_parquet method is there any way to set the default parquet file size like in spark ?
My problem is that when I save it with the partition_on kwarg I get several small files per partition dir, resulting in very slow queries when using "Amazon Athena".
The intermediate desired result (if file size control is not available) is n files (right now 1 will suffice) per partition dir.
The only way i thought of guaranteeing 1 file per partition dir is repartitioning to one partition and then using the to_parquet method (however this is highly inefficient).
is there a better way ? | 0 | 1 | 940 |
0 | 52,307,049 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-10T03:41:00.000 | 0 | 2 | 0 | One hot encoding in Python | 51,256,927 | 0 | python,pandas,machine-learning | You have 2 separate sheets (for the test and train data sets). You have to one-hot encode both sheets separately after importing them into pandas data frames.
And yes, one-hot encoding will be the same for the same data set no matter which data sheet you apply it to; just make sure you have the same categorical values in that column in each of your data sheets.
I had a doubt about one hot encoding:
I have a data set split into 2 excel sheets of data. One sheet has train and other has test data. I first trained my model by importing the train data sheet with pandas. There are categorical features in the data set that have to be encoded. I one hot encoded them.
After importing the test dataset , if I one hot encode it, will the encoding be the same as of the train data set or will it be different. If so, how can I solve this issue? | 0 | 1 | 1,910 |
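One hedged way to keep the train and test encodings aligned with pandas (column and category names are made up): encode each set, then force the test frame onto the training columns.

```python
import pandas as pd

train = pd.DataFrame({"color": ["red", "blue", "green"]})
test = pd.DataFrame({"color": ["blue", "blue", "yellow"]})   # contains an unseen category

train_enc = pd.get_dummies(train, columns=["color"])
test_enc = pd.get_dummies(test, columns=["color"])

# Missing training categories become all-zero columns; categories never seen in training are dropped.
test_enc = test_enc.reindex(columns=train_enc.columns, fill_value=0)
```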
0 | 51,270,563 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-07-10T15:26:00.000 | 1 | 2 | 0 | Use proxy sentences from cleaned data | 51,269,058 | 0.099668 | python,nlp,gensim,word2vec,word-embedding | That sounds like a reasonable solution. If you have access to data that is similar to your cleaned data you could get average sentence length from that data set. Otherwise, you could find other data in the language you are working with (from wikipedia or another source) and get average sentence length from there.
Of course your output vectors will not be as reliable as if you had the correct sentence boundaries, but it sounds like word order was preserved so there shouldn't be too much noise from incorrect sentence boundaries. | Gensim's Word2Vec model takes as an input a list of lists with the inner list containing individual tokens/words of a sentence. As I understand Word2Vec is used to "quantify" the context of words within a text using vectors.
I am currently dealing with a corpus of text that has already been split into individual tokens and no longer contains an obvious sentence format (punctuation has been removed). I was wondering how should I input this into the Word2Vec model?
Say if I simply split the corpus into "sentences" of uniform length (10 tokens per sentence for example), would this be a good way of inputting the data into the model?
Essentially, I am wondering how the format of the input sentences (list of lists) affects the output of Word2Vec? | 0 | 1 | 98 |
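A minimal sketch of the pseudo-sentence idea, assuming tokens is the flat token list and 10 is the assumed chunk length:

```python
from gensim.models import Word2Vec

tokens = ["the", "cat", "sat", "on", "the", "mat"] * 200          # stand-in token stream
chunk = 10                                                        # assumed pseudo-sentence length
sentences = [tokens[i:i + chunk] for i in range(0, len(tokens), chunk)]

model = Word2Vec(sentences, size=100, window=5, min_count=1)      # 'size' is 'vector_size' in gensim 4.x
vec = model.wv["cat"]
```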
0 | 51,269,893 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-10T15:57:00.000 | 0 | 1 | 0 | tensorflow error KeyError: "The name 'import/Mul' refers to an Operation not in the graph." | 51,269,683 | 0 | python,python-3.x,tensorflow,machine-learning | python label_image.py --image cat3.jpg --graph retrained_graph.pb --labels retrained_labels.txt --input_layer=Placeholder
With this command, the problem was solved. | I ran this command to check the results of TensorFlow training.
python label_image.py --image xxx.jpg --graph retrained_graph.pb --labels retrained_labels.txt
And found following errors.
/Users/xxx/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
2018-07-11 00:39:22.028051: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "label_image.py", line 131, in
input_operation = graph.get_operation_by_name(input_name)
File "/Users/xxx/tensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3718, in get_operation_by_name
return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
File "/Users/xxx/tensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3590, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/Users/xxx/tensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3650, in _as_graph_element_locked
"graph." % repr(name))
KeyError: "The name 'import/Mul' refers to an Operation not in the graph."
label_image.py contains the following lines:
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
input_layer = "Mul"
output_layer = "final_result" | 0 | 1 | 1,478 |
0 | 51,272,378 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-10T18:17:00.000 | 0 | 1 | 0 | Data Privacy with Tensorboard | 51,271,790 | 0 | python,tensorflow,keras,tensorboard,privacy | No, Tensorboard does not upload the data to "the cloud" or anywhere outside the computer where it is running, it just interprets data produced by the model. | I've recently begun using Tensorflow via Keras and Python 3.5 to analyze company data, and I am by no means an expert and only recently built my first "real-world" model.
With my experimental data I used Tensorboard to visualize how my neural network was working, and I would like to do the same with my real data. However, my company is extremely strict about company data leaving our servers - so my question is this:
Does tensorboard take the raw data used in the model and upload it off-site to generate its reports/visuals or does it only use processed data/results from my model?
I've done several google searches already, and I haven't found anything conclusive one way or the other.
If I'm not asking this question correctly, please let me know - I'm new to all of this.
Thank you. | 0 | 1 | 81 |
0 | 51,272,663 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-07-10T19:19:00.000 | 5 | 1 | 0 | Python: ContextualVersionConflict: pandas 0.22.0; Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'}) | 51,272,642 | 0.761594 | python,pandas,scikit-learn | Restarting jupyter notebook fixed it. But I am unsure why this would fix it? | I have this issue:
ContextualVersionConflict: (pandas 0.22.0 (...),
Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})
I have even tried to uninstall pandas and install scikit-survival + dependencies via anaconda. But it still does not work....
Anyone with a suggestion on how to fix?
Thanks! | 0 | 1 | 2,454 |
0 | 51,949,438 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-07-11T14:02:00.000 | 0 | 1 | 0 | sampling from sklearn Kernel Density estimation | 51,287,512 | 1.2 | python-3.x,scikit-learn,kernel-density | Answer: yes, the sample is in the same unit as my input data. I checked carefully and with a cold head :) | I'm using the sample() method for a KernelDensity that is fitted to my data's percentage changes. Are the samples from the sample method in the same units as my input data? I ask because documentation says that for the score_sample() method it actually returns the log density and I just want to make sure the same doesn't happen with the sample method or if there's a need to adjust the output. | 0 | 1 | 118 |
0 | 51,637,340 | 1 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-11T14:15:00.000 | 1 | 1 | 0 | OSMnx: Divide a cache ox.graph into equal squares without redownloading each | 51,287,813 | 0.197375 | python-3.x,osmnx | Using OSMnx directly - no, there isn't. You would have to script your own solution using the existing tools OSMnx provides. | I am trying to divide a city into n squares.
Right now, I'm calculating the coordinates for all square centres and using the ox.graph_from_point function to extract the OSM data for each of them.
However, this is getting quite long at high n due to the API pausing times.
My question:
Is there a way to download all city data from OSM, and then divide the cache file into squares (using ox.graph_from_point or other) without making a request for each?
Thanks | 0 | 1 | 161 |
0 | 51,289,142 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-07-11T15:01:00.000 | 0 | 2 | 0 | How do I calculate the percentage of difference between two images using Python and OpenCV? | 51,288,756 | 0 | python,opencv | You will need to calculate this on your own. You need the count of different pixels and the size of your original image, then it's simple math: (differentPixelsCount / (mainImage.width * mainImage.height)) * 100 | I am trying to write a program in Python (with OpenCV) that compares 2 images, shows the difference between them, and then informs the user of the percentage of difference between the images. I have already made it so it generates a .jpg showing the difference, but I can't figure out how to make it calculate a percentage. Does anyone know how to do this?
Thanks in advance. | 0 | 1 | 12,248 |
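A hedged sketch of the calculation from the answer above, assuming the two images have identical dimensions (file names are hypothetical):

```python
import cv2
import numpy as np

img1 = cv2.imread("a.jpg")
img2 = cv2.imread("b.jpg")

diff = cv2.absdiff(img1, img2)
changed = np.count_nonzero(diff.sum(axis=2))                  # pixels differing in any channel
percent = changed / (img1.shape[0] * img1.shape[1]) * 100
print("%.2f%% of pixels differ" % percent)
```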
0 | 51,290,834 | 0 | 1 | 0 | 0 | 1 | false | 7 | 2018-07-11T16:59:00.000 | 0 | 3 | 0 | Numpy arrays vs Python arrays | 51,290,791 | 0 | python,arrays,numpy | Yes, if you don't want another dependency in your code. | I noticed that the de facto standard for array manipulation in Python is through the excellent numpy library. However, I know that the Python Standard Library has an array module, which seems to me to have a similar use-case as Numpy.
Is there any actual real-world example where array is desirable over numpy or just plain list?
From my naive interpretation, array is just memory-efficient container for homogeneous data, but offers no means of improving computational efficiency.
EDIT
Just out of curiosity, I searched through Github and import array for Python hits 186'721 counts, while import numpy hits 8'062'678 counts.
However, I could not find a popular repository using array. | 0 | 1 | 2,898 |
0 | 51,536,067 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-07-12T05:34:00.000 | 0 | 1 | 0 | "Mini Keras" Is there a way get a prediction form a trained keras model without the entire keras package? | 51,297,972 | 0 | python,tensorflow,keras | I was able to get my docker container small enough by using a different backend when predicting ( theano ) as opposed to tensorflow.
The other solutions I investigated are some C++ libraries, e.g. frugally-deep, kerasify and keras2cpp, but the preprocessing of my data became much harder (for me anyway). So I went with the theano backend as above!
Of course you could manually do the convolutions for each filter etc. in Python... but that would be even harder.
Is there a way to get a prediction from a trained keras model without the entire keras package?
I have noted that the Keras docker is ~1GB. I would need it to be < 650MB | 0 | 1 | 137 |
0 | 51,310,630 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-07-12T15:34:00.000 | 1 | 1 | 0 | Can I let people use a different Tensorflow-gpu version above what they had installed with different CUDA dependencies? | 51,309,596 | 1.2 | python,tensorflow | Having a working tensorflow-gpu on a machine does involve a series of steps including installation of cuda and cudnn, the latter requiring an NVidia approval. There are a lot of machines that would not even meet the required config for tensorflow-gpu, e.g. any machine that doesn't have a modern nvidia gpu. You may want to define the tensorflow-gpu requirement and leave it to the user to meet it, with appropriate pointers for guidance. If the project can work acceptably on tensorflow-cpu, that would be a much easier fallback option. | I was trying to pack and release a project which uses tensorflow-gpu. Since my intention is to make the installation as easy as possible, I do not want to let the user compile tensorflow-gpu from scratch so I decided to use pipenv to install whatsoever version pip provides.
I realized that although everything works in my original local version, I can not import tensorflow in the virtualenv version.
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Although this seems to be easily fixable by changing local symlinks, that may break my local tensorflow and is against the concept of virtualenv and I will not have any idea on how people installed CUDA on their instances, so it doesn't seems to be promising for portability.
What can I do to ensure that tensorflow-gpu works when someone from internet get my project only with the guide of "install CUDA X.X"? Should I fall back to tensorflow to ensure compatibility, and let my user install tensorflow-gpu manually? | 0 | 1 | 124 |
0 | 51,313,130 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-12T18:47:00.000 | 0 | 2 | 0 | Plot Probability Curve with Summation | 51,312,546 | 0 | python,numpy,matplotlib,probability-theory | In your formula, wouldn't it be the product instead of the sum? Anyways, my original thought was to use the Poisson distribution, but that wouldn't work since it's without replacement. The problem is that the factorial function is only defined for whole numbers, so you'd need to use the gamma function. | I have the following problem:
I'm working on a formula to calculate some network effects. The idea is that I have 450 "red users" and 6550 "blue users" which sums up to 7000 users in total. Now I would like to plot "picking x users (the same user cannot be picked twice, so this is sampling without replacement) and calculate the probability that at least 1 user is red".
E.g for x = 3, that means I'm picking 3 random users out of 7000 and check if any of these are "red users"
The probability for having at least 1 red user is p = 1 - the probability all 3 picks are blue users and the probability for a blue user is equal to p = 6550/7000, right?
Resulting in a probability for at least 1 red user:
p = 1 - (6550/7000 * 6549/6999 * 6548/6998)
Therefore i came up with the formula:
f(x) = e^-(1- sum of (6500-i)/(7000-i)); for i = 0, till x)
What I've realized is that the curve is pretty edgy since it's just going from a value in ℕ to the next value in ℕ.
Although adding decimal numbers wouldn't make that much sense since "picking 0,5 users or even 0,01 users" is just stupid, I would like to see the full graph in order to be able to compare the formula to some others.
Is there any way I can implement this in python?
Best regards,
Korbi | 0 | 1 | 206 |
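For whole-number x, the quantity the question describes (at least one red user among x draws from 7000 without replacement) is hypergeometric; a hedged sketch, with a log-gamma version that accepts fractional x purely for plotting a smooth curve, as the answer hints:

```python
import numpy as np
from scipy.stats import hypergeom
from scipy.special import gammaln

N_total, n_red = 7000, 450

def p_at_least_one_red(x):
    # hypergeom(M, n, N): population size, number of red users, sample size
    return 1 - hypergeom.pmf(0, N_total, n_red, x)

def p_smooth(x):
    # P(all blue) = C(6550, x) / C(7000, x), written with gammaln so x may be fractional
    n_blue = N_total - n_red
    log_all_blue = (gammaln(n_blue + 1) - gammaln(n_blue - x + 1)
                    - gammaln(N_total + 1) + gammaln(N_total - x + 1))
    return 1 - np.exp(log_all_blue)

print(p_at_least_one_red(3), p_smooth(3.0))   # the two agree at integer x
```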
0 | 51,314,759 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-07-12T21:27:00.000 | 0 | 1 | 0 | Groupby to find min date with conditions in Python | 51,314,716 | 1.2 | python-3.x,pandas,numpy,jupyter-notebook | Use boolean index with loc and min:
df.loc[(df['Column_A'] == 'A') & (df['Column_B'] == 'type 1'), 'Date'].min() | I have like 3 columns in a data frame for example
Column_A has 2 categorical values like A,B
Column_B also has 3 categorical values like Type1, Type2, Type3
Date column has values like 2010-06-13,2010-06-10
There are about 20,000 rows so the Categorical Column A,B's values keep repeating.
So I need to find the min date where Column_A='A' and Column_B='type 1' using Python (Pandas, Numpy). | 0 | 1 | 201 |
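A tiny runnable version of the accepted answer, with made-up rows:

```python
import pandas as pd

df = pd.DataFrame({
    "Column_A": ["A", "A", "B"],
    "Column_B": ["type 1", "type 1", "type 2"],
    "Date": pd.to_datetime(["2010-06-13", "2010-06-10", "2010-06-01"]),
})

earliest = df.loc[(df["Column_A"] == "A") & (df["Column_B"] == "type 1"), "Date"].min()
print(earliest)   # 2010-06-10
```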
0 | 51,385,960 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-13T16:40:00.000 | 0 | 1 | 0 | Inconsistent number of columns | 51,329,549 | 0 | python,csv,encoding,emeditor | When a CSV file with different number of columns are imported to Excel, the differences in row lengths are not evident. Excel has an infinitely wide table by default, so the real size of the CSV file is not shown. I suspect there are empty fields in the last column somewhere, because these would not be evident in Excel. | I have Parcels_All.csv which I compiled by merging many smaller
parcel files, using the command line copy *.csv.
I have WKT_Revisions_Combined.csv that needs to go into Parcels_All.csv to update certain sections of data.
I have a vlookup.py script that can print the data from WKT_Revisions_Combined.csv into Parcels_All.csv by referencing row IDs and column headers.
Vlookup.py has not been able to successfully run because it seems there is an error within Parcels_All.csv. When I open the file in EmEditor (a big data text editor) it gives the error message "Inconsistent number of columns detected" for a few rows
The interesting thing is when I open Parcels_All.csv in Excel the data is organized and in place; no inconsistent columns.
Any thoughts? Could it be an encoding issue? | 0 | 1 | 1,552 |
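A hedged way to locate the rows EmEditor flags, assuming a comma-delimited file and a hypothetical path: count the fields per row and report the rows that deviate from the most common count.

```python
import csv
from collections import Counter

with open("Parcels_All.csv", newline="", encoding="utf-8") as fh:
    lengths = [len(row) for row in csv.reader(fh)]

counts = Counter(lengths)
print(counts)                                             # how many rows have each column count
expected = counts.most_common(1)[0][0]
bad_rows = [i for i, n in enumerate(lengths, start=1) if n != expected]
print(bad_rows[:20])                                      # first few offending row numbers
```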
0 | 51,348,285 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-14T10:36:00.000 | 0 | 1 | 0 | Reward Logic out of Unity3D in ml-agents package | 51,337,634 | 0 | c#,python,unity3d,reinforcement-learning,ml-agent | According to my Reinforcement Learning understanding, the reward is handled by the environment and the agent just get it together with the next observation. You could say it's part of the observation.
Therefore the logic of which reward to give, and when, is part of the environment logic, i.e. in the case of Unity-ML the environment lives in Unity, so you have to implement the reward function in Unity (C#).
So, in order to keep the clear separation between environment (Unity) and agent (Python), I think it's best to keep the reward logic in Unity/C# and not tinker with it in Python.
tl;dr: I think it's intended that you cannot set the reward via the Python API to keep a clear environment-agent separation. | Unity3D has a package for Reinforcement Learning called ML-agents that I am playing with to understand its components. For my project, I am in the situation that I need to write my own logic to set the reward out of Unity3D (not 'addReward' using C# logic, but write a Python code to set the reward out of Unity).
I wonder if I can use the Python API given by the ML-agents package for using the env observations and update the reward with a custom logic set out of Unity (and send back to Unity)? And where to look for doing so?
In other words (example). In the 3DBall example, a reward logic is set in Unity3D as such if the ball stays on the platform gets a positive reward and if it falls from the platform it receives a negative reward. This logic is implemented in Unity3D by using C# and determine the position of the Ball (vector position) compare to the platform. For every action, the agent calls the env.step(action) and get the tuple of (reward, state...). What if I want to write the logic outside Unity? For example, if I want to write a python program that reads the observation (from Unity3D) and update the reward without using the Unity reward logic? Is this possible? I cannot understand where this option is in the Python API of ML-agents.
At the moment I am thinking to run an external python program in-between the line where I set the reward in C# in Unity3D, but I wonder if this is overcomplicated and that there is an easier solution.
Any help would be really appreciated.
Regards
Guido | 0 | 1 | 464 |
0 | 51,341,789 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-07-14T17:06:00.000 | 1 | 1 | 0 | Multiple Inputs for CNN: images and parameters, how to merge | 51,341,613 | 1.2 | python,tensorflow,keras,concatenation,conv-neural-network | You can use Concatenation layer to merge two inputs. Make sure you're converting multiple inputs into same shape; you can do this by adding additional Dense layer to either of your inputs, so that you can get equal length end layers. Use those same shape outputs in Concatenation layer. | I use Keras for a CNN and have two types of Inputs: Images of objects, and one or two more parameters describing the object (e.g. weight). How can I train my network with both data sources? Concatenation doesn't seem to work because the inputs have different dimensions. My idea was to concatenate the output of the image analysis and the parameters somehow, before sending it into the dense layers, but I'm not sure how. Or is it possible to merge two classifications in Keras, i.e. classifying the image and the parameter and then merging the classification somehow? | 0 | 1 | 1,717 |
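A minimal sketch of the two-branch layout the answer describes: a small convolutional branch for the image, a Dense projection for the numeric parameters, merged with Concatenate. All shapes and sizes here are assumptions.

```python
from keras.layers import Input, Conv2D, Flatten, Dense, Concatenate
from keras.models import Model

img_in = Input(shape=(64, 64, 3))                  # assumed image size
x = Conv2D(16, 3, activation="relu")(img_in)
x = Flatten()(x)
x = Dense(32, activation="relu")(x)

num_in = Input(shape=(2,))                         # e.g. weight plus one more parameter
y = Dense(8, activation="relu")(num_in)            # bring the scalars to a comparable width

merged = Concatenate()([x, y])
out = Dense(5, activation="softmax")(merged)       # assumed number of classes

model = Model(inputs=[img_in, num_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```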
0 | 51,346,452 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-14T20:27:00.000 | 0 | 2 | 0 | How to analyse the integrity of clustering with no ground truth labels? | 51,343,116 | 0 | python-3.x,machine-learning,scikit-learn,cluster-analysis,silhouette | Don't just rely on some heuristic, that someone proposed for a very different problem.
Key to clustering is to carefully consider the problem that you are working on. What is the proper way of preparing the data? How to scale (or not scale)? How to measure the similarity of two records in a way that quantifies something meaningful for your domain?
It is not about choosing the right algorithm; your task is to do the math that relates your domain problem to what the algorithm does. Don't treat it as a black box. Choosing the approach based on the evaluation step does not work: it is already too late; you probably did some bad decisions already in the preprocessing, used the wrong distance, scaling, and other parameters. | I'm clustering data (trying out multiple algorithms) and trying to evaluate the coherence/integrity of the resulting clusters from each algorithm. I do not have any ground truth labels, which rules out quite a few metrics for analysing the performance.
So far, I've been using Silhouette score as well as calinski harabaz score (from sklearn). With these scores, however, I can only compare the integrity of the clustering if my labels produced from an algorithm propose there to be at minimum, 2 clusters - but some of my algorithms propose that one cluster is the most reliable.
Thus, if you don't have any ground truth labels, how do you assess whether the proposed clustering by an algorithm is better than if all of the data was assigned in just one cluster? | 0 | 1 | 1,769 |
0 | 51,351,211 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-15T09:21:00.000 | 0 | 1 | 0 | DBSCAN clustering python - parallel run on multiple clustering tasks | 51,347,015 | 0 | python,apache-spark,cluster-analysis,dbscan | Since your users are all independent, this clearly is an embarrassingly parallel problem. You want to run the same task (DBSCAN) millions of times. There are many ways to achieve this. You can probably use Spark (although I would consider using a Java based tool with it, such as ELKI - and you probably need to make sure you parallelize on the users, not within each user), MapReduce, or even Makefiles with locking, if you have a network file system with locking.
The key factor is how your data is organized. It makes a huge difference whether you can read in parallel for all workers, or route all your data through a master node (bad). You need to get the data efficiently to the workers, and need to store the clustering results. | I need to run DBSCAN clustering on about 14M users, each one has 1k data points. Each user is a different clustering case which is completely separate from other users. basically I have many small clustering tasks.
Running it on a single machine doesn't work for me, even when parallelizing the tasks using the Python multiprocessing module, as IO and clustering take ages.
I thought about using Spark to manage a parallel run on a cluster, but decided it might not fit my case, since DBSCAN is not implemented in MLlib and I don't need to run each clustering task in parallel, just run each one separately. Whenever I try to use anything outside of Spark's native RDDs or DataFrames it obviously has to collect all the data to the driver node.
My question is whether there is a smarter solution to my problem than simply running many isolated processes on different nodes, where each one selects a subset of the users?
thanks | 0 | 1 | 893 |
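Because each user is an independent task, the per-user clustering can be written as one small function and handed to whatever scheduler spans the machines; a hedged single-machine sketch of that function with a process pool and stand-in data:

```python
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import DBSCAN

def cluster_user(args):
    user_id, points = args                          # points: (n_points, n_features) for one user
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
    return user_id, labels

if __name__ == "__main__":
    users = {u: np.random.rand(1000, 2) for u in range(100)}   # stand-in per-user data
    with Pool(processes=8) as pool:
        for user_id, labels in pool.imap_unordered(cluster_user, users.items(), chunksize=10):
            pass                                    # persist labels for this user here
```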
0 | 51,351,532 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-07-15T19:19:00.000 | 0 | 1 | 0 | Storing already generated data in spyder | 51,351,472 | 0 | python,python-3.x,python-2.7,python-requests,spyder | What form does the data take? If it's a database in pandas, you can export it to a csv or xlsx | I am new to python programming. I am using spyder to run my code. I have generated a very big data in python after 3 hours of computation. Now I want to save my data. One way of doing this is calling the pickle.dump() function, but in order to do this I have to run my program again and this will take another 3 hours. In what way I can store my data when data has already been generated in Spyder? | 0 | 1 | 135 |
0 | 51,353,269 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2018-07-16T00:19:00.000 | 8 | 1 | 0 | Can I (/does it make sense to) create a pandas dataframe to hold custom class instances? | 51,353,218 | 1.2 | python,pandas,class,oop,dataframe | A DataFrame's columns (Series) can have any of the NumPy dtypes, including object, which can hold arbitrary Python objects.
Doing so gives up most of the speed and space benefits of using NumPy/Pandas in the first place. And also the type checking—if you accidentally insert an object that isn't an instance of a match subclass, it will just work. And many convenience features.
But you do still get some convenience features, and sometimes that's a more than good enough reason to use Pandas.
If the performance isn't acceptable, though, you will have to rethink things. For example, if you flatten the object out into a set of attributes (maybe some of them NaN or N/A for some of the subclasses) that you can store as a row, especially if some of those attributes are things like floats or ints that you want to do a lot of computation on, you'll get a lot more out of Pandas—but at the cost of losing the OO benefits of your classes, of course.
Occasionally, it's worth building a hybrid to get the best of both worlds: a DataFrame whose rows hold the storage for your match objects, but then also a match class hierarchy that holds an index or even a single-row DataFrame (a slice from the main one) and provides an OO view onto the same information. But more often, it isn't worth the work to do this, as either almost all of your code ends up being Pandas or almost all of it ends up being OO.
One last possibility, if there are huge numbers of these things, would be an ORM that uses a relational database, or an old-school hierarchical database,[1] for storage and indexing.
[1] In fact, building a hierarchical database out of a bunch of DataFrames and then wrapping it in an object model is basically the same thing as the previous paragraph, but the idea here is that you use stuff that's already built to do all the hard and/or tedious stuff, not that you build a database just to hide it from yourself.
Backstory/thought process (in case my question doesn't make sense which is highly possible...)
The first time I implemented this I basically created a "match" object where I used python OOP and had the match broken into games, sets, points, players, etc. The players part is a little confusing because due to calculations on my end we created a player object and a new instance of a player every 100 milliseconds (it's hard to wrap your head around because one player is the same throughout the game, but think of it as that player at that exact moment in time). I'm not sure if it makes more sense to change these "player" objects instead into rows in a pandas dataframe (they're are a ton of them, think about a 3 hour match) or instead if I can just create a pandas df and have a player be a column. Players make up points and then points make up frames, so if I did change the player objects to a pandas df it would be hard because than I would have a bunch of rows in the dataframe making up a point and then a bunch of points making up a game.. and whatnot
Because there is so much tracking data, efficiency considerations are important to me (although I would accept something somewhat slower, but not drastically so, if it helps me ensure/check all the data) | 0 | 1 | 2,039 |
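A tiny illustration of the object-dtype column discussed above, with a made-up Player class:

```python
import pandas as pd

class Player:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

frames = pd.DataFrame({
    "t_ms": [0, 100, 200],
    "player": [Player("A", 1.0, 2.0), Player("A", 1.1, 2.0), Player("A", 1.2, 2.1)],
})

print(frames.dtypes)                  # the "player" column is stored with dtype object
print(frames["player"].iloc[0].x)     # attribute access still works, but without vectorised speed
```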
0 | 51,353,814 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-07-16T02:12:00.000 | 1 | 1 | 0 | How can save rgba image on pyplot? | 51,353,755 | 1.2 | python,image,matplotlib,rgba | can you use: pyplot.savefig() instead of pyplot.saveconfig()?
if yes, then you can use bbox_inches='tight' to remove or reduce margins and padding around the image:
solution one:
pyplot.savefig('test.png', bbox_inches='tight') | I can show rgba image using pyplot.imshow(image,alpha=0.8).
I tried to save the image using pyplot.savefig(), but the saved image includes padding.
I want to save only the image, without any padding. | 0 | 1 | 505 |
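Two hedged variants of the suggestion above, assuming the array passed to imshow is available as image:

```python
import numpy as np
import matplotlib.pyplot as plt

image = np.random.rand(64, 64, 4)                               # stand-in RGBA array

plt.imshow(image, alpha=0.8)
plt.axis("off")
plt.savefig("tight.png", bbox_inches="tight", pad_inches=0)     # trims the surrounding margins

plt.imsave("raw.png", image)                                    # writes only the pixel data, no axes at all
```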
0 | 51,361,104 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-16T11:39:00.000 | 1 | 1 | 0 | In tensorflow training and test shows different results | 51,360,966 | 0.197375 | python,tensorflow,machine-learning | I have two suggestions:
If you aren't using a bool isTraining and passing it to the Batch Normalization layers, do so! This should be a placeholder, set before each session (in training it will be set to true, in test/validation to false).
Check that during test/validation you don't shuffle your test/validation dataset (there might be some kind of shuffle=True in some import/management of the batches of your data).
The first one is key; the second shouldn't make that much of a difference, but it ensures exact numerical values each time.
I have a network consisting of 6 convolutional layers (each with batch normalization, and the last convolution is followed by an average pooling to make the output shape Nx1x1xC), aiming to classify one image into a category. Everything is fine during training:
- training samples are about 150000
- validation samples during training are about 12000
I have trained totally 50000 iterations with mini-batch size of 6.
- During training, the training loss is getting lower always (from about 2.6 at beginning to about 0.3 at iteration 50000),
- and the validation accuracy is getting higher and saturated after about 40000 iterations (from 60% at beginning to 72% at iteration 50000)
BUT when I use the learned weights of iteration 50000 on the same validation samples to test, the overall accuracy comes at only about 40%. I have googled if there someone who have faced similar problems. Some said the decay of moving average in batch normalization may be the cause.
The default decay in tf.contrib.layers.batch_norm is 0.999. Then I have trained with decay of 0.9, 0.99, 0.999. the result of OA on validation samples during test are 70%, 30%, 39%. Although decay of 0.9 have the best result, it is still lower than the OA on validation during training.
I am writing to ask if anyone has faced similar problems, and whether you have any idea what the cause could be?
best wishes, | 0 | 1 | 517 |
0 | 51,362,949 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-16T13:07:00.000 | 0 | 1 | 0 | Brief explanation on tensorflow object detection working mechanism | 51,362,567 | 0 | python,tensorflow,object-detection,tensorflow-datasets | You can't "simply" understand how Tensorflow works without a good background on Artificial Intelligence and Machine Learning.
I suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that. | I've searched Google for the working mechanism of TensorFlow object detection. I've searched for how TensorFlow trains models with a dataset. It gives me suggestions about how to implement it rather than how it works.
Can anyone explain how datasets are fit into models during training? | 0 | 1 | 27 |
0 | 51,369,003 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-07-16T14:30:00.000 | -1 | 2 | 0 | How to explain clustering results? | 51,364,165 | 1.2 | python,scikit-learn,cluster-analysis,k-means | Do not treat the clustering algorithm as a black box.
Yes, k-means uses centroids. But most algorithms for high-dimensional data don't (and don't use k-means!). Instead, they will often select some features, projections, subspaces, manifolds, etc. So look at what information the actual clustering algorithm provides! | Say I have a high dimensional dataset which I assume to be well separable by some kind of clustering algorithm. And I run the algorithm and end up with my clusters.
Is there any sort of way (preferably not "hacky" or some kind of heuristic) to explain "what features and thresholds were important in making members of cluster A (for example) part of cluster A?"
I have tried looking at cluster centroids but this gets tedious with a high dimensional dataset.
I have also tried fitting a decision tree to my clusters and then looking at the tree to determine which decision path most of the members of a given cluster follow. I have also tried fitting an SVM to my clusters and then using LIME on the closest samples to the centroids in order to get an idea of what features were important in classifying near the centroids.
However, both of these latter 2 ways require the use of supervised learning in an unsupervised setting and feel "hacky" to me, whereas I'd like something more grounded. | 0 | 1 | 1,372 |
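For the k-means case the answer mentions, the fitted model itself exposes the centroids, so per-feature inspection needs no extra supervised step; a hedged sketch with made-up feature names:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, n_features=4, centers=3, random_state=0)
feature_names = ["f0", "f1", "f2", "f3"]

km = KMeans(n_clusters=3, random_state=0).fit(X)
centroids = pd.DataFrame(km.cluster_centers_, columns=feature_names)
print(centroids.round(2))     # one row per cluster: which features drive each one
```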
0 | 51,368,477 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-16T18:53:00.000 | 0 | 1 | 0 | Do you need to store your old data to refit a model in sklearn? | 51,368,416 | 0 | python,python-3.x,machine-learning,scikit-learn,data-fitting | In general, only the estimators implementing the partial_fit method are able to do this. Unfortunately, IsolationForest is not one of them. | I am trying to use sklearn to build an Isolation Forest machine learning program to go through a ton of data. I can only store the past 10 days of data, so I was wondering:
When I use the "fit" function on new data that comes in, does it refit the model considering the hyper-parameters from the old data without having had access to that old data anymore? Or is it completely recreating the model? | 0 | 1 | 295 |
0 | 51,387,763 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-16T20:10:00.000 | 0 | 1 | 0 | Extracting the first 13,000 results of a search query with Custom Search Engine JSON API | 51,369,427 | 0 | python-3.x,google-search,google-custom-search,google-search-api | Custom Search JSON API is limited to a max depth of 100 results per query, so you'll need to find a different API or devise some solution to modify the query to divide up the result set into smaller parts | I am developing an application (Python 3.x) in which I need to collect the first 13,000 results of a CSE query using one search keyword (from result index 1 to 13,000). For a free version of CSE JSON API (I have tried it), I can only get the first 10 results per query or 100 results per day (by repeating the same query while incrementing the index) otherwise it gives an error (HttpError 400.....returned Invalid Value) when the result index exceeds 100. Is there any option (paid/free) that I can deploy to achieve the objective? | 1 | 1 | 116 |
0 | 51,370,046 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-07-16T20:14:00.000 | 0 | 2 | 0 | What Is Matlab's 'box' Interpolation Kernel | 51,369,489 | 0 | python,matlab,numpy,interpolation,image-resizing | A "box" kernel is an averaging kernel with uniform weights. If it is an interpolation kernel, then it corresponds to nearest neighbor interpolation (it always takes the average of one input sample).
A bit of theory: an interpolating kernel is one that has a value of 1 at the origin, and a value of 0 at integer distances from the origin. In between it can do different things. Thus, to make "box" an interpolating kernel, we'd make its width somewhere in between infinitesimally thin and just under 2 sample spacings. That makes it fit the definition of an interpolating kernel. However, if it is thinner than 1 sample spacing, it will generate an output of 0 for some displacements -- not desirable. And if it is wider than 1 sample spacing, there will be displacements where the output is the addition of two input samples, twice as large as it should be -- not desirable either. Thus, making it exactly 1 sample spacing wide is the only useful width here. With this width, at any displacement it always covers exactly one input sample -- hence it does nearest-neighbor interpolation. | Does anyone know the equation/algorithm/theorem used by MATLAB's 'box' interpolation kernel in the imresize function? Or (even better) know a numpy equivalent? | 0 | 1 | 806 |
0 | 51,369,992 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-07-16T20:14:00.000 | 2 | 2 | 0 | What Is Matlab's 'box' Interpolation Kernel | 51,369,489 | 0.197375 | python,matlab,numpy,interpolation,image-resizing | box interpolation is simply averaging pixels within the specified window size.
You may check the MATLAB function smooth3 etc. for details. | Does anyone know the equation/algorithm/theorem used by MATLAB's 'box' interpolation kernel in the imresize function? Or (even better) know a numpy equivalent? | 0 | 1 | 806 |
0 | 51,383,465 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2018-07-17T05:38:00.000 | 5 | 1 | 0 | The purpose of introducing nn.Parameter in pytorch | 51,373,919 | 1.2 | python,neural-network,deep-learning,pytorch | From the documentation:
Parameters are Tensor subclasses, that have a very special property when used with Modules - when they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator. Assigning a Tensor doesn’t have such effect. This is because one might want to cache some temporary state, like last hidden state of the RNN, in the model. If there was no such class as Parameter, these temporaries would get registered too.
Think for example when you initialize an optimizer:
optim.SGD(model.parameters(), lr=1e-3)
The optimizer will update only registered Parameters of the model.
Variables are still present in Pytorch 0.4 but they are deprecated. From the docs:
The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True.
Pytorch pre-0.4
In Pytorch before version 0.4 one needed to wrap a Tensor in a torch.autograd.Variable in order to keep track of the operations applied to it and perform differentiation. From the docs of Variable in 0.3:
Wraps a tensor and records the operations applied to it.
Variable is a thin wrapper around a Tensor object, that also holds the gradient w.r.t. to it, and a reference to a function that created it. This reference allows retracing the whole chain of operations that created the data. If the Variable has been created by the user, its grad_fn will be None and we call such objects leaf Variables.
Since autograd only supports scalar valued function differentiation, grad size always matches the data size. Also, grad is normally only allocated for leaf variables, and will be always zero otherwise.
The difference wrt Parameter was more or less the same. From the docs of Parameters in 0.3:
A kind of Variable that is to be considered a module parameter.
Parameters are Variable subclasses, that have a very special property when used with Modules - when they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator. Assigning a Variable doesn’t have such effect. This is because one might want to cache some temporary state, like last hidden state of the RNN, in the model. If there was no such class as Parameter, these temporaries would get registered too.
Another difference is that parameters can’t be volatile and that they require gradient by default. | I am new to Pytorch and I am confused about the difference between nn.Parameter and autograd.Variable. I know that the former one is the subclass of Variable and has the gradient. But I really don't understand why we introduce Parameter and when we should use it?
SUMMARY:
Thanks to iacolippo for the explanation; I finally understand the difference between Parameter and Variable. In summary, a Variable in PyTorch is NOT the same as a variable in TensorFlow: the former is not attached to the model's trainable parameters, while the latter is. Being attached to the model means that model.parameters() will return that parameter, which is useful in the training phase to specify the variables that need to be trained. The 'variable' is more helpful as a cache in some networks. | 0 | 1 | 5,351 |
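A tiny demonstration of the registration behaviour quoted above: the Parameter shows up in parameters(), the plain tensor does not (names are made up).

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(3))      # registered: optimisers will see it
        self.cache = torch.randn(3)                # plain tensor: just an attribute

model = Toy()
print([name for name, _ in model.named_parameters()])   # ['w']
```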
0 | 51,383,561 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-17T10:48:00.000 | 3 | 2 | 0 | How to retrain model in graph (.pb)? | 51,379,506 | 0.291313 | python,tensorflow | It's a good question. Actually, it would be nice if someone could explain how to do this. But in addition I can tell you that it would lead to "catastrophic forgetting", so it wouldn't work out. You would have to train on all your data again.
But anyway, I would also like to know how, especially for SSD, just for testing purposes. | I have a model saved as a graph (.pb file). But now the model is inaccurate and I would like to improve it. I have pictures of additional data to learn from, but I don't know if it's possible or how to do it? The result must be a new .pb graph modified with the new data. | 0 | 1 | 1,718 |
0 | 51,386,373 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-07-17T15:52:00.000 | 2 | 1 | 0 | Sparse vs. Dense Vectors PySpark | 51,385,657 | 1.2 | python,apache-spark,machine-learning,pyspark,sparse-matrix | The thing to remember is that pyspark.ml.linalg.Vector and pyspark.mllib.linalg.Vector are just compatibility layer between Python and Java API. There are not full featured or optimized linear algebra utilities and you shouldn't use them as such. The available operations are either not designed for performance or just convert to standard NumPy array under the covers.
When used with other ml / mllib tools there will be serialized and converted to Java equivalents so Python representation performance is mostly inconsequential.
This means that the biggest real concern is storage and a simple rule of thumb is:
If on average half of the entries is zero it is better to use SparseVector.
Otherwise it is better to use DenseVector. | How can I know whether or not I should use a sparse or dense representation in PySpark? I understand the differences between them (sparse saves memory by only storing the non-zero indices and values), but performance-wise, are there any general heuristics that describe when to use sparse vectors over dense ones?
Is there a general "cutoff" dimension and percent of 0 values beyond which it is generally better to use sparse vectors? If not, how should I go about making the decision? Thanks. | 0 | 1 | 1,280 |
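A small illustration of the storage point above; the constructors alone need no Spark session.

```python
from pyspark.ml.linalg import Vectors

dense = Vectors.dense([0.0, 1.0, 0.0, 7.0])
sparse = Vectors.sparse(4, [1, 3], [1.0, 7.0])   # size, non-zero indices, non-zero values

print(sparse.toArray())                          # [0. 1. 0. 7.]: same contents, different storage
```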
0 | 52,216,299 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-07-17T19:14:00.000 | 2 | 1 | 0 | Intel MKL FATAL ERROR: while trying to import gensim package | 51,388,707 | 1.2 | python,tensorflow,anaconda,seaborn,gensim | Here is my theory on your question:
Is there any dependency between gensim, tensorflow, seaborn and such packages?
When you try to install these packages one by one using conda, you might have already seen conda prompting that some of the dependencies will be DOWNGRADED/UPDATED/INSTALLED. Hence there are dependencies between the dependencies of these packages.
Why is the import error thrown only in certain cases?
It looks like a dependency issue. When you try to import gensim, it tries to load certain lib files which it is not able to find. However, when tensorflow or seaborn is imported first, the mentioned lib files might already have been loaded, hence importing gensim does not show an error.
Why does installing a few packages and uninstalling a few help to solve the problem?
This might help to get the correct dependencies in place for the packages to work properly.
Having said that, I tried to recreate the error that you got; however, gensim imports fine for me. If you could give the result of "conda list", I will try to recreate the problem and give a better insight.
I am getting "Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so." and getting out of python shell.
It sounds like a duplicate but the weird part is, when I import tensorflow or seaborn before importing gensim, I am not getting that error and gensim is being imported. I would also like to know if there is any dependency between these packages. And I do have the latest version of numpy which is 1.14.5. I have looked at various solutions proposed about installing few packages and uninstalling few. I would like to know the reason why we should be doing it before actually doing it. | 0 | 1 | 732 |
0 | 51,390,517 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2018-07-17T21:30:00.000 | 1 | 3 | 0 | What does it mean for DataFrame.ix to be deprecated? | 51,390,464 | 0.066568 | python,pandas,deprecated | Yes, deprecated here means that the attribute or method has been removed from the newer versions. Hence, it is advised to avoid them in your code to avoid future issues. | Suppose I use DataFrame.ix in some code.
Does the fact that it's deprecated mean that at some point in the future, I'm going to update pandas, and then a little bit later, the stuff using that code will mysteriously start to break because they decided that finally, they were going to actually remove ix? | 0 | 1 | 646 |
0 | 51,390,526 | 0 | 1 | 0 | 0 | 2 | true | 3 | 2018-07-17T21:30:00.000 | 6 | 3 | 0 | What does it mean for DataFrame.ix to be deprecated? | 51,390,464 | 1.2 | python,pandas,deprecated | That's the basic idea of deprecation. The library's maintainers are letting you know now that they plan to stop supporting ix (e.g., fixing bugs in it), and may very well remove it in the near future. As long as it's deprecated, you have a window of opportunity to change your code to use other alternatives (such as loc and iloc), on your own terms, before you're forced to do so when pandas breaks "under your feet". | Suppose I use DataFrame.ix in some code.
Does the fact that it's deprecated mean that at some point in the future, I'm going to update pandas, and then a little bit later, the stuff using that code will mysteriously start to break because they decided that finally, they were going to actually remove ix? | 0 | 1 | 646 |
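A minimal sketch of the usual migration away from .ix (the DataFrame and its labels are made up for the example):
import pandas as pd
df = pd.DataFrame({"a": [1, 2, 3]}, index=["x", "y", "z"])
# deprecated mixed indexer: df.ix["y", "a"]
print(df.loc["y", "a"])   # label-based replacement -> 2
print(df.iloc[1, 0])      # integer-position replacement -> 2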
0 | 51,396,146 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-18T06:39:00.000 | 2 | 2 | 0 | Object Identification Using OpenCV | 51,395,132 | 0.197375 | python,opencv,image-processing,computer-vision | Create a database, Store the credentials you needed for later use e.g object type and some usable specifications, by giving them some unique ID. CNN already recognized the object so just need to store it in database and later on you can perform more processing on the generated data. Simple solution is that to the problem you are explaining.
Okay, I understand your problem: you want to identify what kind of object is being tracked, because the CNN is only tracking, not identifying. For that purpose you have to train your CNN on some specific features and give each object an identity, e.g. objectA has features [x, y, z]. Then the CNN will help you find the identity of the object.
You can use OpenCV to do this as well: store some features of specific objects, then use a distance-matching technique to match the live features against the stored features.
Thanks. | I am currently researching viable approaches to identifying a certain object with image processing techniques, but I am struggling to find them. For example, I have a CNN capable of detecting certain objects, like a person, and then I can track that person as well. However, my issue is that I want to identify the detected and tracked person, e.g. saving its credentials and giving it an ID. I do not want something like who he/she is; just assigning an ID in that manner.
Any help/resource will be appreciated. | 0 | 1 | 331 |
0 | 51,565,917 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-07-18T08:03:00.000 | 3 | 1 | 0 | Convert TFLite to Lite | 51,396,671 | 0.53705 | python,tensorflow | There is no difference in ".lite" and ".tflite" format (as long as they can be correctly consumed by Tensonflow Lite). And there is no need to convert them. | I have a working app with TFlite using tensorflow for poets. It works with a labels.txt and graph.lite pair files. I have downloaded another model in .tflite file format and wanted to use in my application. I wanted to ask what are the differences between .lite and .tflite files and are there any ways to convert tflite format to lite?
Thanks | 0 | 1 | 477 |
0 | 51,439,004 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-07-19T19:12:00.000 | 1 | 1 | 0 | Python3 remove multiple hyphenations from a german string | 51,430,249 | 0.197375 | python,nlp,word2vec,preprocessor,hyphenation | It's surely possible, as the pattern seems fairly regular. (Something vaguely analogous is sometimes seen in English. For example: The new requirements applied to under-, over-, and average-performing employees.)
The rule seems to be roughly, "when you see word-fragments with a trailing hyphen, and then an und, look for known words that begin with the word-fragments, and end the same as the terminal-word-after-und – and replace the word-fragments with the longer words".
Not being a German speaker and without language-specific knowledge, it wouldn't be possible to know exactly where breaks are appropriate. That is, in your Geistes- und Sozialwissenschaften example, without language-specific knowledge, it's unclear whether the first fragment should become Geisteszialwissenschaften or Geisteswissenschaften or Geistesenschaften or Geiestesaften or any other shared-suffix with Sozialwissenschaften. But if you've got a dictionary of word-fragments, or word-frequency info from other text that uses the same full-length word(s) without this particular enumeration-hyphenation, that could help choose.
(If there's more than one plausible suffix based on known words, this might even be a possible application of word2vec: the best suffix to choose might well be the one that creates a known-word that is closest to the terminal-word in word-vector-space.)
Since this seems a very German-specific issue, I'd try asking in forums specific to German natural-language-processing, or to libraries with specific German support. (Maybe, NLTK or Spacy?)
But also, knowing word2vec, this sort of patch-up may not actually be that important to your end-goals. Training without this logical-reassembly of the intended full words may still let the fragments achieve useful vectors, and the corresponding full words may achieve useful vectors from other usages. The fragments may wind up close enough to the full compound words that they're "good enough" for whatever your next regression/classifier step does. So if this seems a blocker, don't be afraid to just try ignoring it as a non-problem. (Then if you later find an adequate de-hyphenation approach, you can test whether it really helped or not.) | I'm currently working on a neural network that evaluates students' answers to exam questions. Therefore, preprocessing the corpora for a Word2Vec network is needed. Hyphenation in german texts is quite common. There are mainly two different types of hyphenation:
1) End of line:
The text reaches the end of the line so the last word is sepa-
rated.
2) Short form of enumeration:
in case of two "elements":
Geistes- und Sozialwissenschaften
more "elements":
Wirtschafts-, Geistes- und Sozialwissenschaften
The de-hyphenated form of these enumerations should be:
Geisteswissenschaften und Sozialwissenschaften
Wirtschaftswissenschaften, Geisteswissenschaften und Sozialwissenschaften
I need to remove all hyphenations and put the words back together. I already found several solutions for the first problem.
But I have absolutely no clue how to get the second part (in the example above, "wissenschaften") of the words in the enumeration problem. I don't even know if it is possible at all.
I hope that I have pointed out my problem properly.
So has anyone an idea how to solve this problem?
Thank you very much in advance! | 0 | 1 | 50 |
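A minimal sketch of the fragment-expansion rule described in the answer, assuming you already have a vocabulary (a set of known full words) built from your corpus; the regex and the longest-known-compound heuristic are my own simplifications, not part of the question:
import re

def expand_hyphen_enumeration(text, vocabulary):
    # matches e.g. "Wirtschafts-, Geistes- und Sozialwissenschaften"
    pattern = re.compile(r"((?:\w+-\s*,?\s*)+)und\s+(\w+)")
    def repl(match):
        fragments = re.findall(r"(\w+)-", match.group(1))
        full_word = match.group(2)
        expanded = []
        for frag in fragments:
            # try every suffix of the full word; keep the longest known compound
            candidates = [frag + full_word[i:] for i in range(1, len(full_word))]
            known = [c for c in candidates if c in vocabulary]
            expanded.append(max(known, key=len) if known else frag)
        return ", ".join(expanded) + " und " + full_word
    return pattern.sub(repl, text)

vocab = {"Wirtschaftswissenschaften", "Geisteswissenschaften", "Sozialwissenschaften"}
print(expand_hyphen_enumeration("Wirtschafts-, Geistes- und Sozialwissenschaften", vocab))
# Wirtschaftswissenschaften, Geisteswissenschaften und Sozialwissenschaften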
0 | 51,596,850 | 0 | 0 | 0 | 0 | 1 | false | 161 | 2018-07-20T00:10:00.000 | 15 | 6 | 0 | What does model.train() do in PyTorch? | 51,433,378 | 1 | python,pytorch | There are two ways of letting the model know your intention i.e do you want to train the model or do you want to use the model to evaluate.
With model.train() the model knows it has to learn its layers, and when we use model.eval() it indicates to the model that nothing new is to be learnt and the model is being used for testing.
model.eval() is also necessary because, in PyTorch, if we are using batchnorm and at test time want to pass just a single image, PyTorch throws an error if model.eval() is not specified. | Does it call forward() in nn.Module? I thought when we call the model, the forward method is used.
Why do we need to specify train()? | 0 | 1 | 130,339 |
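A minimal sketch of the two modes in practice (model, loader and image are hypothetical placeholders):
import torch

model.train()                    # dropout active, batchnorm uses batch statistics
for inputs, targets in loader:
    ...                          # forward pass, loss, backward, optimizer step

model.eval()                     # dropout off, batchnorm uses running statistics
with torch.no_grad():            # also skip gradient tracking while evaluating
    prediction = model(image.unsqueeze(0))   # a single image needs a batch dimension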
0 | 51,436,328 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-20T03:18:00.000 | 0 | 1 | 0 | How to plot the average of ROC curves? | 51,434,567 | 0 | python,matplotlib,plot,scikit-learn,roc | Solved by choosing a fixed set of FPRs, then using np.interp on each (fpr, tpr) pair returned by sklearn.metrics.roc_curve to get the corresponding TPRs, and finally averaging all ROCs with np.mean. | I am attempting to perform outlier detection and I have 15 different test sets and 3 different models (a PCA-based classifier, One Class SVM and Isolation Forest).
For PCA-based classification, I have written my own code for generating ROC curves. I have 2 lists pcafprs and pcatprs, each of which has 15 sublists, each sublist representing the False Positive Ratios and True Positive Ratios, required to plot the ROC curve.
For One-Class SVM and Isolation Forest, I can get the (fpr, tpr) from sklearn.metrics.roc_curve. Similar to PCA, I have ocsvmfprs and ocsvmtprs for One-Class SVM, and isoforestfprs and isoforesttprs for Isolation Forest.
For each test set, I can iterate over the FPR and TPR lists and plot the ROC curve. The code might look like:
for i in range(len(pcafprs)):
plt.plot(pcafprs[i], pcatprs[i]) #Plot the ROC curve
plt.show()
For each of the 3 models, I want to be able to plot the average of all 15 ROC curves for the 15 test sets in one graph. I cannot simply do np.mean over the arrays containing the TPRs and FPRs because the FPRs returned by sklearn.metrics.roc_curve are all different points for each test set.
For PCA, I have tried using np.mean(pcatprs, axis=0) and np.mean(pcafprs, axis=0) to average out all the TPRs and FPRs so that I can plot a single graph which represents the mean of all the test sets. This works because for PCA I have generated the same number of FPRs and TPRs for each test set.
However, I am unable to control the number of FPRs and TPRs returned for each test set by sklearn.metrics.roc_curve, and it turns out that it returns a different number of values for each test set. Due to this, I cannot use np.mean to find the average ROC curve.
tl;dr: Is there a way to plot the average of multiple lines on a graph without having the equations and only having some points that lie on the lines, where we have a different number of points available for each line? | 0 | 1 | 1,603
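A minimal sketch of that interpolation-based averaging (fpr_lists and tpr_lists stand in for the per-test-set outputs of roc_curve and are placeholders):
import numpy as np
import matplotlib.pyplot as plt

mean_fpr = np.linspace(0.0, 1.0, 100)             # common FPR grid for every curve
interp_tprs = []
for fpr, tpr in zip(fpr_lists, tpr_lists):         # one (fpr, tpr) pair per test set
    interp_tpr = np.interp(mean_fpr, fpr, tpr)     # resample the TPRs on the common grid
    interp_tpr[0] = 0.0                            # force the curve through (0, 0)
    interp_tprs.append(interp_tpr)
plt.plot(mean_fpr, np.mean(interp_tprs, axis=0))   # the averaged ROC curve
plt.show()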
0 | 51,438,139 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-20T07:59:00.000 | 0 | 2 | 0 | CNN: Is it better to train 300.000 images during 1 epoch or 300 images during 1000 epoch? | 51,437,817 | 0 | python,deep-learning,epoch,yolo | No, they are not the same.
- The number of examples you show the network defines what it will be looking for: a network trained on more examples will tend to be more general. If there are, for example, 1000 pictures with different dogs in them and you only show 300 of the 300,000 pictures, the network (on average) will only recognize one specific kind of dog and be unable to pick out the common traits of all dogs.
- An epoch basically modifies the network in a small step, and the key word here is small: taking too big a step risks overshooting our target values for the network parameters. Since we're taking small steps, we have to take several of them to get where we want. | This question is related to convolutional neural networks (especially YOLOv3).
Since one epoch is one forward pass and one backward pass over all the training examples, for the model to converge properly, is it the same (in terms of precision and time to converge) to:
train with n*k images for m epochs?
train with n images for m*k epochs? | 0 | 1 | 794
0 | 68,421,682 | 0 | 1 | 0 | 0 | 5 | false | 5 | 2018-07-20T10:26:00.000 | 0 | 16 | 0 | Can't install tensorflow with pip or anaconda | 51,440,475 | 0 | python,tensorflow | Not Enabling the Long Paths can be the potential problem.To solve that,
Steps:
1. Open the Registry Editor on the Windows machine.
2. Find the key HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Control -> FileSystem, then double-click the LongPathsEnabled option and change its value from 0 to 1.
3. Now try to install tensorflow; it will work. | Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works. | 0 | 1 | 44,333 |
0 | 56,944,215 | 0 | 1 | 0 | 0 | 5 | false | 5 | 2018-07-20T10:26:00.000 | 0 | 16 | 0 | Can't install tensorflow with pip or anaconda | 51,440,475 | 0 | python,tensorflow | As of July 2019, I have installed it on python 3.7.3 using py -3 -m pip install tensorflow-gpu
py -3 in my installation selects the version 3.7.3.
The installation can also fail if the python installation is not 64 bit. Install a 64 bit version first. | Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works. | 0 | 1 | 44,333 |
0 | 53,177,064 | 0 | 1 | 0 | 0 | 5 | false | 5 | 2018-07-20T10:26:00.000 | 0 | 16 | 0 | Can't install tensorflow with pip or anaconda | 51,440,475 | 0 | python,tensorflow | Actually the easiest way to install tensorflow is:
Install Python 3.5 (not 3.6 or 3.7). You can check which version you have by typing "python" in the cmd.
When you install it, check the options so that pip is installed with it and Python is added to the environment variables.
When it's done, just go into the cmd and type "pip install tensorflow".
It will download tensorflow automatically.
If you want to check that it's been installed, type "python" in the cmd; a ">>>" prompt will appear. Then write "import tensorflow" and if there's no error, you've done it! | Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works. | 0 | 1 | 44,333 |
0 | 51,706,227 | 0 | 1 | 0 | 0 | 5 | true | 5 | 2018-07-20T10:26:00.000 | 5 | 16 | 0 | Can't install tensorflow with pip or anaconda | 51,440,475 | 1.2 | python,tensorflow | Tensorflow or Tensorflow-gpu is supported only for 3.5.X versions of Python. Try installing with any Python 3.5.X version. This should fix your problem. | Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works. | 0 | 1 | 44,333 |
0 | 51,440,570 | 0 | 1 | 0 | 0 | 5 | false | 5 | 2018-07-20T10:26:00.000 | 0 | 16 | 0 | Can't install tensorflow with pip or anaconda | 51,440,475 | 0 | python,tensorflow | You mentioned Anaconda. Do you run your python through there?
If so, check in Anaconda Navigator --> Environments whether your current environment has tensorflow installed.
If not, install tensorflow and run from that environment.
Should work. | Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works. | 0 | 1 | 44,333 |
0 | 51,450,250 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2018-07-20T19:30:00.000 | 1 | 4 | 0 | pandas iteratively update column values | 51,449,260 | 0.049958 | python,performance,pandas,numpy,iteration | The important point to understand about these kinds of problems is that you're in a paradoxical spot right now: you want to take advantage of both vectorization and non-vectorized approaches such as threading or parallelization.
In such a situation you can try one or more of the following options:
Change the type of your data structure.
Rethink your problem and see if it's possible to solve this entirely in a Vectorized way (preferably)
Simply use a non-vectorized-based approach but sacrifice something else like memory. | I have a pandas Series like the following:
a = pd.Series([a1, a2, a3, a4, ...])
and I want to create another pandas Series based on the following rule:
b = pd.Series([a1, a2 + a1**0.8, a3 + (a2 + a1**0.8)**0.8, a4 + (a3 + (a2 + a1**0.8)**0.8)**0.8, ...]).
This is doable using iteration, but I have a large dataset (millions of records) and I must perform the operation thousands of times (for optimization purposes). I need to do this operation very fast. Is there any possible way for me to realize this using pandas or numpy built-in functions? | 0 | 1 | 210
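For reference, a minimal non-vectorized sketch of the recurrence itself; the **0.8 between steps prevents a plain cumulative NumPy call, so speeding this loop up (e.g. by JIT-compiling it) is where the options above come in:
import numpy as np
import pandas as pd

def recurrence(a):
    a = np.asarray(a, dtype=float)
    b = np.empty_like(a)
    acc = 0.0
    for i, value in enumerate(a):
        acc = value + acc ** 0.8   # b[i] = a[i] + b[i-1] ** 0.8, with b[-1] treated as 0
        b[i] = acc
    return pd.Series(b)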
0 | 51,458,710 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-21T16:20:00.000 | 1 | 4 | 0 | How to open/create images in Python without using external modules | 51,457,993 | 0.049958 | python,image-processing,pypy | Working with bare-bones .ppm files is trivial: you have three lines of text (P6, "width height", 255), and then you have the 3*width*height bytes of RGB. As long as you don't need more complicated variants of the .ppm format, you can write a loader and a saver in 5 lines of code each. | I have a python script which opens an image file (.png or .ppm) using OpenCV, then loads all the RGB values into a multidimensional Python array (or list), performs some pixel by pixel calculations solely on the Python array (OpenCV is not used at all for this stage), then uses the newly created array (containing new RGB values) to write a new image file (.png here) using OpenCV again. Numpy is not used at all in this script. The program works fine.
The question is how to do this without using any external libraries, regardless of whether they are for image processing or not (e.g. OpenCV, Numpy, Scipy, Pillow etc.). To summarize, I need to use only bare-bones Python internal modules to: 1. open an image and read the RGB values, and 2. write a new image from pre-calculated RGB values. I will use Pypy instead of CPython for this purpose, to speed things up.
Note: I use Windows 10, if that matters. | 0 | 1 | 6,169 |
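A minimal sketch of such a loader and saver for binary P6 .ppm files using only the standard library (it assumes a well-formed file with no comment lines and a maxval of 255):
def read_ppm(path):
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P6"
        width, height = map(int, f.readline().split())
        assert int(f.readline()) == 255
        data = f.read(3 * width * height)
    # nested [row][col] = (r, g, b) structure of plain Python ints
    return [[tuple(data[3 * (y * width + x) + c] for c in range(3))
             for x in range(width)] for y in range(height)]

def write_ppm(path, pixels):
    height, width = len(pixels), len(pixels[0])
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        f.write(bytes(channel for row in pixels for px in row for channel in px))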
0 | 51,479,161 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2018-07-22T12:51:00.000 | 1 | 1 | 0 | Consumer-producer pattern with pyarrow | 51,465,320 | 1.2 | python,pandas,redis,pyarrow | Solution with lists:
Producer puts data into a list with LPUSH
Consumer takes data from this list with RPOP or BRPOP (blocking).
Limitations: only one consumer reads the message. If you have 2, only one of them will see the message.
Speed: for one pair of consumer-producer it will have the same speed. The more consumers (for this or other lists), the faster it will be than pub/sub. | What is the best way to implement a multi process based consumer producer pattern with pyarrow as a fast memory store for pandas dataframes?
Currently I am using redis pub sub but I think there might be a more efficient (faster) solution? Could you provide an example? | 0 | 1 | 229 |
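A minimal sketch of the list-based pattern with pyarrow serialization of the DataFrame (the connection settings and queue name are made up; pa.serialize/deserialize is the older pyarrow API - newer versions would use the IPC stream API instead):
import pandas as pd
import pyarrow as pa
import redis

r = redis.Redis(host="localhost", port=6379)
QUEUE = "dataframes"   # hypothetical list name

# producer: serialize the DataFrame to bytes and push it onto the list
df = pd.DataFrame({"a": [1, 2, 3]})
r.lpush(QUEUE, pa.serialize(df).to_buffer().to_pybytes())

# consumer: block until a message arrives, then deserialize
_, raw = r.brpop(QUEUE)
df_back = pa.deserialize(raw)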
0 | 51,473,577 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-22T16:56:00.000 | 0 | 1 | 0 | Use Base Tree from XGBoost | 51,467,350 | 0 | python,machine-learning,scikit-learn,xgboost | There are 2 solutions for you:
Build only one tree (meaning use only one boosting iteration): when you train, use num_boost_round=1.
When you predict, use ntree_limit=1.
Either one will solve your problem. You either train only 1 tree or you predict using only 1 tree. | Is there a way to only use the base decision tree that is used in the XGBoost algorithm?
I know that Sklearn's GBT just uses a Sklearn Decision Tree as their base but XGBoost builds trees differently (e.g. regularization of the leaf weights).
I looked at the code for XGBoost but I wasn't able to figure out how they built their base tree.
Thanks! | 0 | 1 | 113 |
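A minimal sketch of the first option using xgboost's native API (the random data is only there to make the snippet runnable):
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.rand(100)
dtrain = xgb.DMatrix(X, label=y)
# one boosting round = a single regularized XGBoost tree
booster = xgb.train({"max_depth": 3, "eta": 1.0}, dtrain, num_boost_round=1)
preds = booster.predict(xgb.DMatrix(X))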
0 | 67,088,725 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-23T17:57:00.000 | 0 | 1 | 0 | CNN image extraction to predict a continuous value | 51,484,727 | 0 | python,tensorflow | I would use the CNN to predict the model of the car and then, using a list of all the car prices, it's easy enough to get the price. Or, if you don't care about the car model, just use the prices as labels. | I have images of vehicles. I need to predict the price of a vehicle based on features extracted from its image.
What I have learnt is that I can use a CNN to extract the image features, but what I am not able to figure out is how to predict the prices of the vehicles.
I know that I need to train my CNN model before it can predict the price.
I don't know how to train the model with images along with prices.
In the end, what I expect is: I will input a vehicle image and I need to get the price of the vehicle.
Can anyone suggest an approach for this? | 0 | 1 | 268
0 | 51,487,294 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-07-23T21:00:00.000 | 2 | 1 | 0 | Maintaining accuracy of ratios in numpy | 51,487,174 | 1.2 | python,numpy,math,types,numbers | All your ratios are rational numbers, so you could use the Fraction class in the fractions module. Each "fraction" is the ratio of two integers. And since Python's integers have no upper limit, neither do the fractions. You can treat them much like float values--add, subtract, multiply, divide, and print them.
I did something very much like your operations in a previous project of mine, to track ratios in the tree structure of Windows Registry. I did this project in Delphi but have started redoing it in Python. I have already decided to use fractions.
The problem is that the values will be kept exactly, as well as the operations on them. But if you plot them on a graph, the values may still be overwhelmed by other values. | I am using numpy to analyze graphs. One type of analysis I am doing is traversing, while enumerating: "how often is node K in a path with node J, up to this point". In this analysis, many of my values are ratios, or percentages, or however you want to think of it.
Since graphs often branch, they are often exponential when it comes to combinations or permutations. And so, some of the time, my ratios become very small. And, numpy loses accuracy. And, eventually, numpy says that the ratio is zero even though it should still be greater than zero.
To elaborate a bit more, I use rows of the matrix to represent depth of my search, and I use columns to represent nodes. The value of the [row,column] is the ratio of said node, at said depth, to whatever other node I am comparing it to. And so it's the case, that depending upon the graph, that ratio may be cut in half at every next level. From 1, to .5, to .25, to ..... 1.369^(-554) and suddenly it's zero next iteration. Not to mention, when it gets small enough I lose accuracy in all my other calculations as well.
If I want extreme accuracy even on large graphs, what options do I have? I suppose I could enumerate in the opposite direction, getting total counts and doing division to recalculate ratios when necessary (it is necessary at time in my program). But if I do this, I still expect I would lose a ton of accuracy when I divide one huge number by another huge number, yes? | 0 | 1 | 59 |
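A small illustration of keeping such ratios exact with the standard library (the repeated halving mirrors the branching described in the question):
from fractions import Fraction

ratio = Fraction(1)
for _ in range(1100):
    ratio /= 2            # a float would have underflowed to 0.0 before the loop ends
print(ratio == 0)         # False: still exactly 1 / 2**1100
print(float(ratio))       # 0.0 only when converted back to a float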
0 | 51,494,847 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-24T01:59:00.000 | 1 | 1 | 0 | Understanding parameters in ImageDataGenerator and flow in Keras | 51,489,390 | 0.197375 | python,keras,image-preprocessing | Question 1
The rotation range is [-rotation_range, rotation_range]. I suggest you check your augmented images by using the save_to_dir parameter of the flow method. This way you can be sure that the images are being augmented as you expect.
Question 2
When calling next, a random augmentation is applied to every image right after it is loaded, according to the parameters you gave to the constructor of ImageDataGenerator. I.e. an image can be rotated to the left in one epoch and the same image can be rotated to the right in the next epoch. That's what makes augmentation so efficient: you artificially increase the size of your data.
Question 3
The list of images is shuffled before each epoch. A batch of images will never repeat itself (well... you can calculate the odds) | Question 1
Does the rotation_range: Int. Degree range for random rotations refer to the range [0, rotation_range] or [-rotation_range, rotation_range]. If I set rotation_range=40, will my images be randomly rotated between [-40, 40] or [0, 40]?
Question 2
Does ImageDataGenerator.flow randomly generate different augmentations of an input image at every epoch, or is a single augmentation generated at the start and used for all epochs?
For example, let's say I have some image A that is part of my inputs into the flow method. Is image A augmented only once before training, and this augmented version used for all epochs? Or is image A randomly augmented every epoch?
Question 3
When the param shuffle is set to True in the flow method, does this mean the batches are shuffled every epoch, or the images within the batches are shuffled every epoch?
For example, let's say our training data consists of 15 images (labeled I1 to I15) divided into 3 batches/mini-batches before epoch 1 starts (labeled B1, B2, B3).
Let's say that before epoch 1, the images were assigned to the batches as follows:
B1 = {I1, I2, I3, I4, I5}
B2 = {I6, I7, I8, I9, I10}
B3 = {I11, I12, I13, I14, I15}
Now in epoch 1, the batches are trained in the order B1, B2, B3.
When epoch 2 starts, will the images in B1, B2, B3 be shuffled so that each batch will not contain the same set of 5 images? | 0 | 1 | 1,238 |
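A minimal sketch of checking the augmentation with save_to_dir (x_train, y_train and the output directory are placeholders):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=40)   # random rotations in [-40, 40] degrees
gen = datagen.flow(x_train, y_train,
                   batch_size=32,
                   shuffle=True,                  # images are reshuffled every epoch
                   save_to_dir="augmented",       # write augmented images out for inspection
                   save_prefix="aug")
x_batch, y_batch = next(gen)                      # a freshly augmented batch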
0 | 57,576,484 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-07-24T18:55:00.000 | 3 | 3 | 0 | Is image resizing needed to training a new Yolo model? | 51,505,729 | 0.197375 | python,tensorflow,yolo,darknet,darkflow | (1) It already resize it with random=1 in .cfg file.The answer is "yes".The input resolution of images are same.You can resize it by yourself or Yolo can do it.
(2) If your hardware is good enough, I suggest you use large images. Also, as a suggestion: if you will use a webcam, use images with the same resolution as your webcam.
(3) Yes, the same as in training. | I would like to train a new model using my own dataset. I will be
using Darkflow/Tensorflow for it.
Regarding my doubts:
(1) Should we resize our training images to a specific size?
(2) I think smaller images might save time, but can smaller images harm the accuracy?
(3) And what about the images to be predicted, should we resize them as well or is it not necessary? | 0 | 1 | 4,420 |
0 | 60,745,616 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2018-07-24T19:37:00.000 | 6 | 1 | 0 | Plot a function during debugging in Python | 51,506,354 | 1.2 | python,debugging,matplotlib | Ok, I found a way to show the plot without breaking the debugging process.
All you need to do is to issue plt.pause(1) command, which will display the plots, and then one can continue the debugging process. | I used to work in Matlab and it is really convenient (when working with big arrays/matrices and nested functions) to visualize intermediate results during debugging using plot function.
In Python I cannot plot anything in debug mode: a window with the figure is never loaded (I am using the Spyder IDE for coding and matplotlib.pyplot for plotting).
This is really annoying when debugging nested functions and classes.
Does anyone know a good solution? Of course, I can always output intermediate results, however it is not convenient.
Thanks,
Mikhail | 0 | 1 | 2,174 |
0 | 51,526,725 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-24T19:41:00.000 | 0 | 1 | 0 | How to make RNN time-forecast multiple days using Keras? | 51,506,404 | 0 | python,tensorflow,keras,lstm,rnn | I don't have the rep to comment, but I'll say here that I've toyed with a similar task. One could use a sliding window approach for 90 days (I used 30, since 90 is pushing LSTM limits), then predict the price appreciation for next month (so your prediction is for a single value). @Digital-Thinking is generally right though, you shouldn't expect great performance. | I am currently working on a program that would take the previous 4000 days of stock data about a particular stock and predict the next 90 days of performance.
The way I've elected to do this is with an RNN that makes use of LSTM layers to use the previous 90 days to predict the next day's performance (when training, the previous 90 days are the x-values and the next day is used as the y-value). What I would like to do however, is use the previous 90-180 days to predict all the values for the next 90 days. However, I am unsure of how to implement this in Keras as all the examples I have seen only predict the next day and then they may loop that prediction into the next day's 90 day x-values.
Is there any way to just use the previous 180 days to predict the next 90? Or is the LSTM restricted to only predicting the next day? | 0 | 1 | 299
0 | 51,570,129 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2018-07-24T20:47:00.000 | 7 | 2 | 0 | Keras with Tensorflow backend - Run predict on CPU but fit on GPU | 51,507,285 | 1 | python,tensorflow,keras,keras-rl | Maybe you can save the model at the end of the training. Then start another python file and write os.environ["CUDA_VISIBLE_DEVICES"]="-1"before you import any keras or tensorflow stuff. Now you should be able to load the model and make predictions with your CPU. | I am using keras-rl to train my network with the D-DQN algorithm. I am running my training on the GPU with the model.fit_generator() function to allow data to be sent to the GPU while it is doing backprops. I suspect the generation of data to be too slow compared to the speed of processing data by the GPU.
In the generation of data, as instructed in the D-DQN algorithm, I must first predict Q-values with my models and then use these values for the backpropagation. And if the GPU is used to run these predictions, it means that they are breaking the flow of my data (I want backprops to run as often as possible).
Is there a way I can specify on which device to run specific operations? In a way that I could run the predictions on the CPU and the backprops on the GPU. | 0 | 1 | 8,487 |
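A minimal sketch of the prediction-side script (the model path and states array are placeholders; the environment variable must be set before any TensorFlow import):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"    # hide all GPUs from TensorFlow

from keras.models import load_model           # imported only after hiding the GPU

model = load_model("ddqn_model.h5")            # hypothetical saved model
q_values = model.predict(states)               # runs on the CPU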
0 | 51,544,489 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-24T23:52:00.000 | 0 | 1 | 0 | Need to NormalizeImage if using Tensorflow's ObjectDetection API? | 51,509,083 | 0 | python,tensorflow,object-detection,object-detection-api | Depending on what configuration you specified it uses an image resizer to normalize your dataset.
image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
This will either downsample or upsample your images using bilinear interpolation. | I'm not sure if the Tensorflow ObjectDetection API automatically normalizes the input images (my own dataset). It seems to have an option called 'NormalizeImage' in the DataAugmentations. So far, I haven't specified it, and my models are doing reasonably well. Am I missing image normalization, or does Tensorflow do it automatically for me, or is it just not needed for this Object Detection API?
My models have used Faster RCNN and RetinaNet so far. | 0 | 1 | 372 |
0 | 51,527,863 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-25T21:15:00.000 | 0 | 2 | 0 | Python: How to plot an array of y values for one x value in python | 51,527,766 | 0 | python,matplotlib,graph | You can simply repeat the X values which are common for y values
Suppose
[x,x,x,x],[y1,y2,y3,y4] | I am trying to plot an array of temperatures for different location during one day in python and want it to be graphed in the format (time, temperature_array). I am using matplotlib and currently only know how to graph 1 y value for an x value.
The temperature code looks like this:
Temperatures = [[Temp_array0] [Temp_array1] [Temp_array2]...], where each numbered array corresponds to that time and the temperature values in the array are at different latitudes and longitudes. | 0 | 1 | 1,722 |
0 | 51,541,112 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-07-26T13:17:00.000 | 3 | 1 | 0 | Extract video into frames using pure python(without using openCV) | 51,539,813 | 0.53705 | python,python-3.x | First you should understand that there are many different video codecs and even different video containers in common use, currently. Any library that offers video decoding usually has a multitude of different sub-libraries, to be able to read all the codecs.
But even for single codec/container variants you will not find any Python implementations, beyond toy or research projects. Video decoders are written in C, C++ or similar languages, as the process is computationally very expensive.
The video decoding in OpenCV is a relatively thin wrapper of ffmpeg/libav functionality. All the heavy lifting is done by ffmpeg. So if you want to do without OpenCV, that's possible by finding another video decoding library wrapper in Python. But you will not find a pure-Python implementation of video decoding for common video files. | How can I extract video into frames using python only. I got plenty of solutions but they all are using OpenCV. But in my case I want to do it using python only.
Your help will be highly appreciated. | 0 | 1 | 520 |
0 | 51,547,413 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-07-26T16:02:00.000 | 1 | 1 | 0 | Does h2o4gpu handle categorical features like sklearn or like h2o? | 51,543,158 | 1.2 | python,scikit-learn,h2o,h2o4gpu | There is no native support for categorical columns in h2o4gpu (at least yet), so you will have to one-hot encode (or label encode) your categorical columns like you do in sklearn and xgboost. | I understand that sklearn requires categorical features to be encoded to dummy variables or one-hot encoded when running the sklearn.ensemble.RandomForestRegressor method, and that XGBoost requires the same, but h2o permitted raw categorical features to be used in its h2o.estimators.random_forest.H2ORandomForestEstimator method. Since h2o4gpu's implementation of random forest is built on top of XGBoost, does this mean support for raw categorical features is not included? | 0 | 1 | 190 |
0 | 54,131,481 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-07-26T20:29:00.000 | 6 | 3 | 0 | Convert 18-digit LDAP/FILETIME timestamps to human readable date | 51,547,064 | 1 | python,datetime | I know this answer is very late to the party, but for anyone else looking in the future.
The 18-digit Active Directory timestamps (LDAP) are also known as 'Windows NT time format', 'Win32 FILETIME or SYSTEMTIME' or NTFS file time. These are used in Microsoft Active Directory for pwdLastSet, accountExpires, LastLogon, LastLogonTimestamp and LastPwdSet. The timestamp is the number of 100-nanosecond intervals (1 nanosecond = one billionth of a second) since Jan 1, 1601 UTC.
Therefore, 130305048577611542 does indeed relate to December 3, 2013.
When this value is put through the datetime function in Python, it is effectively truncated to nine digits. The timestamp therefore becomes 130305048, counted from 1.1.1970, which does indeed result in a 1974 date!
In order to get the correct Unix timestamp you need to do:
(130305048577611542 / 10000000) - 11644473600 | I have exported a list of AD Users out of AD and need to validate their login times.
The output from the powershell script give lastlogin as LDAP/FILE time
EXAMPLE 130305048577611542
I am having trouble converting this to readable time in pandas
I'm using the following code:
df['date of login'] = pd.to_datetime(df['FileTime'], unit='ns')
The column FileTime contains time formatted like the EXAMPLE above.
I'm getting the following output in my new column date of login
EXAMPLE 1974-02-17 03:50:48.577611542
I know this is being parsed incorrectly, as when I input this datetime into an online converter I get this output
EXAMPLE:
Epoch/Unix time: 1386031258
GMT: Tuesday, December 3, 2013 12:40:58 AM
Your time zone: Monday, December 2, 2013 4:40:58 PM GMT-08:00
Does anyone have an idea of what's occurring here? Why are all my dates in the 1970s? | 0 | 1 | 8,553
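A minimal sketch of that conversion applied to the whole column (the column names follow the question):
import pandas as pd

# FILETIME is counted in 100-nanosecond ticks since 1601-01-01:
# divide by 10**7 for seconds, then subtract the 1601 -> 1970 offset
unix_seconds = df["FileTime"] / 10**7 - 11644473600
df["date of login"] = pd.to_datetime(unix_seconds, unit="s")
# 130305048577611542 -> December 3, 2013, matching the online converter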
0 | 54,362,653 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-27T00:46:00.000 | 0 | 1 | 0 | while running YOLO for test custom object cfg file path error the path is correct but even though its showing this error | 51,549,279 | 0 | python,yolo,darkflow | Oftenly I was also used get something similar to this error
My first mistake was not opening the Jupyter notebook in the directory where I kept these folders,
so try to open the notebook from that directory
(on Windows, simply type cmd in the location bar, or use Shift + right-click)
(on macOS there is a setting you need to enable first; then right-click on the folder).
Later I used these options:
options={
'model':'cfg/yolo.cfg',
'load':'bin/yolov2.weights',
'threshold':0.3,
'gpu':1.0
}
tfNet=TFNet(options)
and everything worked as expected.
Hope this will help you. | This code is for running my trained weights. The folder ckpt contains 1050 steps of training data, and this file is outside the cfg folder in the darkflow main folder.
import cv2
from darkflow.net.build import TFNet
import numpy as np
import time
options = {
'model': 'cfg/tiny-yolo-voc-1c.cfg',
'load': 1050,
'threshold': 0.2,
'gpu': 1.0
}
After running this code in the Atom editor, the error below is shown:
Parsing cfg//tiny-yolo-voc-1c.cfg
Traceback (most recent call last):
File "C:\Users\amard\Desktop\Hotel\darkflow\test.py", line 13, in <module>
tfnet = TFNet(options)
File "C:\Users\amard\Desktop\Hotel\darkflow\darkflow\net\build.py", line 58, in __init__
darknet = Darknet(FLAGS)
File "C:\Users\amard\Desktop\Hotel\darkflow\darkflow\dark\darknet.py", line 17, in __init__
src_parsed = self.parse_cfg(self.src_cfg, FLAGS)
File "C:\Users\amard\Desktop\Hotel\darkflow\darkflow\dark\darknet.py", line 68, in parse_cfg
for i, info in enumerate(cfg_layers):
File "C:\Users\amard\Desktop\Hotel\darkflow\darkflow\utils\process.py", line 66, in cfg_yielder
layers, meta = parser(model); yield meta;
File "C:\Users\amard\Desktop\Hotel\darkflow\darkflow\utils\process.py", line 17, in parser
with open(model, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'cfg//tiny-yolo-voc-1c.cfg'
[Finished in 4.298s] | 0 | 1 | 778 |
0 | 51,565,999 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-27T00:49:00.000 | 0 | 1 | 0 | Scipy interp2d function produces z = f(x,y), I would like to solve for x | 51,549,293 | 0 | python,numpy,scipy | I was able to figure this out. yvalue, zvalue, xmin, and xmax are known values. By creating a linspace out of the possible values x can take on, a list can be created with all of the corresponding function values. Then using argmin() we can find the closest value in the list to the known z value.
f = interp2d(x,y,z)
xnew = numpy.linspace(xmin, xmax)
fnew = f(xnew, yvalue)
xindex = (numpy.abs(fnew - zvalue)).argmin()
xvalue = xnew[xindex] | I am using the 2d interpolation function in scipy to smooth a 2d image. As I understand it, interpolate will return z = f(x,y). What I want to do is find x with known values of y and z. I tried something like this:
f = interp2d(x,y,z)
index = (np.abs(f(:,y) - z)).argmin()
However the interp2d object does not work that way. Any ideas on how to do this? | 0 | 1 | 102 |
0 | 51,712,765 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2018-07-27T04:42:00.000 | 0 | 1 | 0 | How to set an start solution in Gurobi, when only objective function is known? | 51,550,870 | 0 | python,initialization,gurobi,upperbound | I think that if you can calculate a good solution, you can also know some bound for your variable even you dont have the solution exactly ? | I have a minimization problem, that is modeled to be solved in Gurobi, via python.
Besides, I can calculate a "good" initial solution for the problem separately, that can be used as an upper bound for the problem.
What I want to do is to set Gurobi use this upper bound, to enhance its efficiency. I mean, if this upper bound can help Gurobi for its search. The point is that I just have the objective value, but not a complete solution.
Can anybody help me how to set this upper bound in the Gurobi?
Thanks. | 0 | 1 | 438 |
0 | 51,567,400 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2018-07-27T09:05:00.000 | 0 | 2 | 0 | How to create common alias for all cells in a column in Excel automatically through scripting | 51,554,503 | 0 | python,excel | In Excel, is your data in a table, or named range?
I'm kind of assuming a table (which could make this a snap) because, as a column heading in a (named) range, the 'header' (or alias, if I understand) isn't "connected" to the underlying data, as it would be in a table...
Can you provide an example of how you would (or expect to) use the 'column alias' in a formula? | I have an Excel workbook with close to 90 columns. I would like to create column alias for each column in the workbook, so that it will be easier for me to use the respective columns in formulas.
Normally, I would select each column in the workbook and type in my alias for the column into the Cell Reference Bar at the top
Is there a way to do this automatically, because I have a lot of columns? Especially in Python?
I tried the pandas.Series.to_excel function which has the header attribute. However all it does is change the column names to the string specified and does not modify the alias for all the cells in the column.
Thanks a lot for your help | 0 | 1 | 632 |
0 | 51,556,893 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2018-07-27T09:05:00.000 | 0 | 2 | 0 | How to create common alias for all cells in a column in Excel automatically through scripting | 51,554,503 | 0 | python,excel | If I understand you correctly...
To name each of the columns something slightly different you can use a for-loop which contains an incrementing number that's added to the column name.
There are loads of examples of this available online, here's a really rough illustrative example:
headers = []
num = 0
for header in columns:              # `columns` holds the base column names
    num += 1
    headers.append(header + str(num))
I don't think you need to program this for just 90 columns in one book though tbh.
You could name the first 3 columns, select the three named cells, and then drag right when you see the + symbol in the bottom-right corner of the right-most cell of the three selected.
Dragging across 90 cells should only take one second.
Once you've named the 90 columns, you can always select row#1 and do some ctrl+h on it to change the header names later. | I have an Excel workbook with close to 90 columns. I would like to create column alias for each column in the workbook, so that it will be easier for me to use the respective columns in formulas.
Normally, I would select each column in the workbook and type in my alias for the column into the Cell Reference Bar at the top
Is there a way to do this automatically, because I have a lot of columns? Especially in Python?
I tried the pandas.Series.to_excel function which has the header attribute. However all it does is change the column names to the string specified and does not modify the alias for all the cells in the column.
Thanks a lot for your help | 0 | 1 | 632 |
0 | 51,559,013 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-07-27T13:21:00.000 | 1 | 1 | 0 | GridSearchCV: based on mean_test_score results, predict should perform much worse, but it does not | 51,558,872 | 0.197375 | python-3.x,scikit-learn,grid-search | The refit=True parameter of GridSearchCV makes the estimator with the found best set of hyperparameters be refit on the full data. So if your training error is almost zero in the CV folds, you would expect it to be near zero in the best_estimator_ as well. | I am trying to evaluate the performance of a regressor by means of GridSearchCV. In my implementation cv is an int, so I'm applying the K-fold validation method. Looking at cv_results_['mean_test_score'],
the best mean score on the k-fold unseen data is around 0.7, while the train scores are much higher, like 0.999. This is very normal, and I'm ok with that.
Well, following the reasoning behind this concept, when I apply the best_estimator_ on the whole data set, I expect to see at least some part of the data predicted not perfectly, right? Instead, the numerical deviations between the predicted quantities and the real values are near zero for all datapoints. And this smells of overfitting.
I don't understand that, because if I remove a small part of the data and apply GridSearchCV to the remaining part, I find almost identical results as above, but the best regressor applied to the totally unseen data predicts with much higher errors, like 10%, 30% or 50%. Which is what I expected, at least for some points, fitting GridSearchCV on the whole set, based on the results of k-fold test sets.
Now, I understand that this forces the predictor to see all datapoints, but the best estimator is the result of k fits, each of which never saw a 1/k fraction of the data. Since mean_test_score is the average of these k scores, I expect to see a bunch of predictions (depending on the cv value) whose errors are distributed around a mean error that justifies a 0.7 score. | 0 | 1 | 234
0 | 51,963,250 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-28T07:25:00.000 | 0 | 1 | 0 | pbft implementation in python when f=1 | 51,569,086 | 0 | python,algorithm,blockchain,consensus | You need to use authenticated point-to-point communication channel to implement any sorts of BFT algorithm. Because PBFT assumes all participant's identity are established in prior, you don't need to assume multicast communication primitive. Even though broadcast is executed in PBFT protocol, each message is encrypted by its private key. So you don't need to use multicast or broadcast. | I want to implement pbft algorithm(3f+1 systems;f=1) in python. But what is the channel should use for sending and receiving from replicas. I have tried python multicast but it seems something going wrong while receiving. So please suggest any solution that can put me forward.
Thanks in Advance. | 0 | 1 | 305 |
0 | 51,569,613 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-28T08:31:00.000 | 2 | 1 | 0 | How to properly use numpy in keras | 51,569,542 | 0.379949 | python,numpy,tensorflow,keras | Here is an explanation on how you can access your images.
X is a four-dimensional tensor. In mathematics, tensors are a generalization of vectors and matrices to higher-dimensional arrays.
Assuming "channels last" data-format
1st Axis = Number of images
2nd Axis = Number of rows in single image
3rd Axis = Number of columns in single row
4th Axis = Number of channels of certain pixel
Now you can access the image, row, column, and channels using indexing as follows.
x[0] Represents first image
x[0][0] Represents First row of first image
x[0][0][0] Represents First column of first row of first image
x[0][0][0][0] Represents Red channel of First column of first row of first image | the question is:
In the Keras tutorial it uses an input x_train = np.random.random((100, 100, 100, 3)); that should mean that there are 100 images, each of size [100,100,3], right?
So I thought that x_train[0][0] should represent the first channel of the first image (which should be [100, 100]), but x_train[0][0] in fact has a size of [100,3]... so I'm confused: how can Keras take this [100,100,100,3] numpy array as a set of images? Please help me out, thanks in advance.
Another question is:
how can I construct an input like this? Because when I do np.array([[100,100],[100,100]]), it becomes an array of [2,100,100] | 0 | 1 | 1,287
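A small sketch that makes the indexing and the construction concrete (shapes shrunk from 100 to 4 so it runs instantly):
import numpy as np

x = np.random.random((4, 4, 4, 3))   # 4 images of 4x4 pixels with 3 channels
print(x[0].shape)        # (4, 4, 3): the first image
print(x[0][0].shape)     # (4, 3): first row of that image, 4 pixels x 3 channels
print(x[..., 0].shape)   # (4, 4, 4): the first channel of every image

# building such an input from individual images of shape (4, 4, 3):
images = [np.zeros((4, 4, 3)) for _ in range(4)]
batch = np.stack(images)             # shape (4, 4, 4, 3)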
0 | 51,572,366 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-07-28T14:36:00.000 | 0 | 1 | 0 | Getting:"ModuleNotFoundError: No module named 'tensorflow'", only when running from the command line | 51,572,338 | 1.2 | python,tensorflow | Most likely you are simply using a different interpreter from your command line than from your PyCharm project. This will happen, for example, if you have set up your PyCharm project using a fresh conda environment.
To see which one you are using on the command-line, simply run where python. Then compare this with what you found in PyCharm. | I'm trying to run the Transformer (speech2text) model on windows (till I get my linux machine).
when I'm running the entire command from the cmd:
"python transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR --params=$PARAMS"
I'm getting an error :"ModuleNotFoundError: No module named 'tensorflow'"
But I know that tf is installed, and also, when I'm using PyCharm, first, I can see the package installed (File -> Settings -> Project Interpreter).
Second, when I'm running the code it gets past that failing point...
I can run it through PyCharm, but I think it's important to understand what I'm missing. Is it something with the interpreter?
Thanks. | 0 | 1 | 95 |
0 | 51,574,299 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-07-28T18:44:00.000 | 0 | 2 | 0 | How to create a pandas dataframe where the datatype of a column will be a dictionary? | 51,574,236 | 0 | python,pandas | No, this isn't possible.
Pandas dtypes(*) are closely related to NumPy dtypes. There are some differences and additions, e.g. for datetime and category, but in general the rule holds. Often these additional dtypes are wrappers around NumPy dtypes. The key here is that series with these specifically defined dtypes are held in contiguous memory blocks. They can be manipulated with vectorised computations.
A series which cannot be held in the fashion described above gets labelled with dtype object. This is nothing more than a sequence of pointers to arbitrary Python types. You should not consider this as an "array of dictionaries" in any vectorised sense. You can compare such a series to a list. You would never say that a list has "dtype dict" just because it contains dictionaries. Similarly, the fact that an object series only contains dictionaries doesn't make it a series of dtype dict.
(*) Notice I use "dtype" instead of "type". This is intentional. "dtype" has a specific and important meaning in relation to Pandas / NumPy, as the rest of my answer should demonstrate. | Is there any way to create a pandas dataframe consisting of two columns? The first column will be of datatype int and the second one will be of dictionary type. I then want to insert data into the dataframe iteratively. | 0 | 1 | 175
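A small illustration of the point above (the column contents are made up):
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "payload": [{"a": 1}, {"b": 2}]})
print(df.dtypes)                      # id: int64, payload: object (not "dict")
print(type(df.loc[0, "payload"]))     # <class 'dict'>: each cell is just a Python object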
0 | 51,584,354 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-07-29T20:06:00.000 | 0 | 1 | 0 | python generate random connected graph with certain constraint on the vertex degree | 51,584,121 | 0 | python,graph | A simple algorithm I can think to is:
Start with one vertex
Repeat one of two random moves:
2A) Pick a random vertex with degree less than 4 and add a new vertex connected to it
2B) Pick two random vertices with degree less than 4 that are not connected and add an edge between them
until you have enough vertices/edges. | Is there any Python package that can randomly generate a connected graph (there is a path between every pair of vertices) in which each vertex has degree at most 4?
Thank you! | 0 | 1 | 275 |
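A minimal sketch of that algorithm with networkx (the stopping condition and the attempt cap are arbitrary choices of mine):
import random
import networkx as nx

def random_connected_max_deg4(n_vertices, n_extra_edges):
    g = nx.Graph()
    g.add_node(0)
    while g.number_of_nodes() < n_vertices:
        # move 2A: attach a brand-new vertex to a random vertex of degree < 4
        u = random.choice([v for v in g if g.degree(v) < 4])
        g.add_edge(u, g.number_of_nodes())
    added, attempts = 0, 0
    while added < n_extra_edges and attempts < 100 * n_extra_edges:
        # move 2B: connect two non-adjacent vertices that both have degree < 4
        attempts += 1
        candidates = [v for v in g if g.degree(v) < 4]
        if len(candidates) < 2:
            break
        u, v = random.sample(candidates, 2)
        if not g.has_edge(u, v):
            g.add_edge(u, v)
            added += 1
    return g

g = random_connected_max_deg4(20, 5)
print(nx.is_connected(g), max(dict(g.degree()).values()) <= 4)   # True True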
0 | 59,050,229 | 0 | 1 | 0 | 0 | 2 | true | 19 | 2018-07-30T11:38:00.000 | 53 | 7 | 0 | cv2 python has no imread member | 51,593,147 | 1.2 | python-3.x,opencv | Since you are trying to executing this with VS Code, try following steps
Open the command palette in VS Code with the keyboard shortcut CTRL + Shift + P
Then select "Preferences > Open Settings (JSON)" option in the palette dropdown
Then add the following line in the opened settings.json file
"python.linting.pylintArgs": ["--generated-members=cv2.*"]
This should work. | I pip installed OpenCV-python. The installation seems to be fine and I tested it out in the Python IDLE. It ran without any problems. I've been trying to run it in VS Code but it doesn't seem to work. The autocomplete recognizes the imread function, but when I type it in it throws up an error saying cv2 has no imread member. I am using the most updated version of Python.
I am calling it like this:img2 = cv2.imread("C:\Biometric\min.jpg", 0) | 0 | 1 | 33,464 |
0 | 56,180,952 | 0 | 1 | 0 | 0 | 2 | false | 19 | 2018-07-30T11:38:00.000 | 0 | 7 | 0 | cv2 python has no imread member | 51,593,147 | 0 | python-3.x,opencv | Go to the terminal and type pylint --extension-pkg-whitelist=cv2
it has worked for me. | I pip installed OpenCV-python. The installation seems to be fine and I tested it out on the python IDLE. It ran without any problems. I've been trying to run it on VS Code but it doesn't seem to work. The autocomplete recognizes the imread function but when I type it in it throws up an error saying cv2 has no imread member. I am using the most updated version of python
I am calling it like this:img2 = cv2.imread("C:\Biometric\min.jpg", 0) | 0 | 1 | 33,464 |
0 | 51,604,459 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-07-31T00:56:00.000 | 0 | 1 | 0 | Installing tensorflow, getting error 'pip3' is not recognized as an internal or external command, operable program or batch file" | 51,604,098 | 1.2 | python,windows,tensorflow,artificial-intelligence | Try ./pip3 install --upgrade tensorflow from that directory. I think on windows, you need to be explicit that the command is in your working directory. Alternatively you could try
C:\Users\Diederik\AppData\Local\Programs\Python\Python36\Scripts\pip3 install --upgrade tensorflow. That's how I do it on my windoze machine. | I am trying to install tensorflow for their image recognition program thing, but I get the error in the title, it does work when i go to C:\Users\Diederik\AppData\Local\Programs\Python\Python36\Scripts, and type pip3 install --upgrade tensorflow there, but I am not sure if that is how I am supposed to do it, since I tried that before and I got errors while running the classify_image.py, so I thought maybe I should try it the way tensorflow told me to, but that didn't work, Please help me, I am happy to provide any extra information you need. I am on windows 10 | 0 | 1 | 857 |
0 | 51,608,308 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-31T03:26:00.000 | 2 | 2 | 0 | Predicting the POS tag of the upcoming word | 51,605,036 | 0.197375 | python,nlp,nltk | You may train a simple language model on POS tag data using LSTMs. That is, say, using Spacy, convert your corpus to POS tag corpus. Train the model using the new corpus. Predict the POS on evaluation. Another way to do it is by building a language model on your data, generate the next word and find its POS. | Is there a way in python (by using NLTK or SpaCy or any other library) that I can predict the POS tag of the word that are likely to follow the words so far I have entered.
E.g. if I input
I am going to
it shows the POS tag of the most likely next word,
e.g. NN, because "college" can come after this | 0 | 1 | 345
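A minimal sketch of a bigram-level "POS language model" built from NLTK's tagged Treebank sample; it predicts the most likely tag to follow the tag of the last word entered (using the Treebank corpus here is my own choice, not something from the question):
import nltk

nltk.download("treebank")                        # small tagged corpus shipped with NLTK
tags = [tag for _, tag in nltk.corpus.treebank.tagged_words()]
cfd = nltk.ConditionalFreqDist(nltk.bigrams(tags))   # counts of (current_tag, next_tag)

last_tag = "TO"                                  # tag of "to" in "I am going to"
print(cfd[last_tag].most_common(3))              # most likely following tags, typically VB first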
0 | 51,611,176 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-07-31T08:32:00.000 | 4 | 1 | 0 | Weights Matrix Final Fully Connected Layer | 51,608,879 | 1.2 | python-3.x,tensorflow,conv-neural-network | Conceptually, a neural network layer is often written like y = W*x where * is matrix multiplication, x is an input vector and y an output vector. If x has 2000 units and y 4800, then indeed W should have size (4800, 2000), i.e. 4800 rows and 2000 columns.
However, in implementations we usually work on a batch of inputs X. Say X is (b, 2000) where b is your batch size. We don't want to transform each element of X individually by doing W*x as above since this would be inefficient.
Instead we would like to transform all inputs at the same time. This can be done via Y = X*W.T where W.T is the transpose of W. You can work out that this essentially applies W*x to each row of X (i.e. each input). Y is then a (b, 4800) matrix containing all transformed inputs.
In Tensorflow, the weight matrix is simply saved in this transposed state, since it is usually the form that is needed anyway. Thus, we have a matrix with shape (2000, 4800) (the shape of W.T). | My question is, I think, too simple, but it's giving me headaches. I think I'm missing either something conceptually in Neural Networks or Tensorflow is returning some wrong layer.
I have a network in which last layer outputs 4800 units. The penultimate layer has 2000 units. I expect my weight matrix for last layer to have the shape (4800, 2000) but when I print out the shape in Tensorflow I see (2000, 4800). Please can someone confirm which shape of weight matrix the last layer should have? Depending on the answer, I can further debug the issue. Thanks. | 0 | 1 | 1,570 |
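The shape bookkeeping in the answer above can be verified with a few lines of NumPy (the batch size of 32 is an arbitrary illustration):

import numpy as np

b, n_in, n_out = 32, 2000, 4800          # batch size, penultimate layer, last layer
X = np.random.randn(b, n_in)             # batch of activations entering the layer
W_stored = np.random.randn(n_in, n_out)  # kernel as the framework stores it: (2000, 4800)

Y = X @ W_stored                         # equivalent to applying W*x to every row of X
print(Y.shape)                           # (32, 4800)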
0 | 51,615,949 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-07-31T14:32:00.000 | 0 | 1 | 0 | python specify weights for linear regression | 51,615,868 | 0 | python,statistics,regression,linear-regression,summary | If you're doing a linear regression, and you know some of the weights, you can essentially mathematically reduce it to a simpler regression. For instance assume you have the following regression
z = ax + by
Given z, x, y it's relatively easy to set up a linear regression. If you want the user to specify (a) rather than having the system solve for it, then your expression becomes
z' = by where z' = z - ax, where a is a coefficient that is specified by your end user.
model = sm.OLS(Predicted_Z, Actual_Z)
results = model.fit()
print(results.summary()) | I have a few linear regression models and, due to underlying reasons, need the weights for a set regression to be user-defined. Is it possible to get an OLS summary based on user-defined weights rather than the weights the linear regression finds itself? | 0 | 1 | 476
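A self-contained sketch of the substitution described in the answer, using statsmodels on synthetic data (the variable names, the data, and the fixed coefficient value are illustrative, not from the question):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.normal(size=200)
z = 2.0 * x + 3.0 * y + rng.normal(scale=0.1, size=200)   # true weights 2 and 3

a_fixed = 2.0                 # weight the user wants to pin rather than estimate
z_prime = z - a_fixed * x     # z' = z - a*x, leaving only b to be fitted

results = sm.OLS(z_prime, sm.add_constant(y)).fit()
print(results.summary())      # the coefficient on y is the remaining free weight b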
0 | 51,618,254 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-07-31T16:30:00.000 | 1 | 1 | 0 | Handling Error for Continuous Features in a Content-Based Filtering Recommender System | 51,618,103 | 1.2 | python,machine-learning,cosine-similarity,recommender-systems | There are two basic approaches to solve this:
(1) Write your own distance function. The obvious approach is to remove the deliciousness element from each vector, evaluating that difference independently. Use cosine similarity on the rest of the vector. Combine that figure with the taste differential as desired.
(2) Transform your deliciousness data such that the resulting metric is linear. This will allow a "normal" distance metric to do its job as expected. | I've got a content-based recommender that works... fine. I was fairly certain it was the right approach to take for this problem (matching established "users" with "items" that are virtually always new, but contain known features similar to existing items).
As I was researching, I found that virtually all examples of content-based filtering use articles/movies as an example and look exclusively at using encoded tf-idf features from blocks of text. That wasn't exactly what I was dealing with, but most of my features were boolean features, so making a similar vector and looking at cosine distance was not particularly difficult. I also had one continuous feature, which I scaled and included in the vector. As I said, it seemed to work, but was pretty iffy, and I think I know part of the reason why...
The continuous feature that I'm using is a rating (let's call this "deliciousness"), where, in virtually all cases, a better score would indicate an item more favorable for the user. It's continuous, but it also has a clear "direction" (not sure if this is the correct terminology). Error in one direction is not the same as error in another.
I have cases where some users have given high ratings to items with mediocre "deliciousness" scores, but logically they would still prefer something that was more delicious. That user's vector might have an average deliciousness of 2.3. My understanding of cosine distance is that in my model, if that user encountered two new items that were exactly the same except that one had a deliciousness of 1.0 and the other had a deliciousness of 4.5, it would actually favor the former because it's a shorter distance between vectors.
How do I modify or incorporate some other kind of distance measure here that takes into account that deliciousness error/distance in one direction is not the same as error/distance in the other direction?
(As a secondary question, how do I decide how to best scale this continuous feature next to my boolean features?) | 0 | 1 | 98 |
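A minimal sketch of option (1) above, a hand-written score that keeps cosine similarity for the boolean features and adds a one-sided penalty for deliciousness; the 0-5 rating scale and the 50/50 weighting are assumptions to be tuned:

import numpy as np

def directional_score(user_bools, item_bools, user_delic, item_delic, alpha=0.5):
    # cosine similarity on the boolean part of the profiles
    cos = np.dot(user_bools, item_bools) / (
        np.linalg.norm(user_bools) * np.linalg.norm(item_bools) + 1e-12)
    # one-sided term: only being *less* delicious than the user's average hurts
    shortfall = max(0.0, user_delic - item_delic) / 5.0
    return alpha * cos - (1.0 - alpha) * shortfall

user = np.array([1, 0, 1, 1, 0], dtype=float)
item = np.array([1, 0, 1, 1, 0], dtype=float)
print(directional_score(user, item, user_delic=2.3, item_delic=4.5))  # no penalty
print(directional_score(user, item, user_delic=2.3, item_delic=1.0))  # penalised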
0 | 51,620,347 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-07-31T17:43:00.000 | 0 | 1 | 0 | Zero Importance Feature Removal Without Refit in SciKit-Learn GradientBoostingClassifier | 51,619,180 | 1.2 | python,machine-learning,scikit-learn | This is not a bug and is the expected behavior. Scikit will not make assumptions after the model has been trained about what features should have been included or not.
Instead, when you call fit for a model there is an implicit assumption being made that you have already performed feature selection to remove features that will not be important to the model. Once fit the expectation is that you will provide a dataset of the same size that was used to fit the model regardless of whether the features are important or not. | After fitting a GradientBoostingClassifier in SciKit-Learn, some of the features have zero importance.
My understanding is that zero importance would mean that no splits are made on this feature.
If I try to predict using a data set that does not include the feature then it throws an error for not having all the features.
Of course I realize I can remove the zero importance features, but I would rather not alter the already fit model. (If I remove the zero importance features and refit I get a slightly different model.)
Is this a bug that the model requires zero importance features to make predictions or is there something about the zero importance features I'm not thinking about? Is there a work around to get the exact same model?
(I'm foreseeing a question about why this matters -- it's because requiring zero-importance features means pulling more columns from a very, very large database, and it looks sloppy to include a feature in the model that does nothing.) | 0 | 1 | 414
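A small sketch on synthetic data illustrating both points in the answer: predict() still expects every column the model was fit on, but a zero-importance column is never used in a split, so cheap placeholder values do not change the predictions (the dataset and model settings are illustrative):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
clf = GradientBoostingClassifier(n_estimators=20, max_depth=2, random_state=0).fit(X, y)

unused = np.where(clf.feature_importances_ == 0)[0]
print("zero-importance columns:", unused)

X_placeholder = X.copy()
X_placeholder[:, unused] = 0.0     # dummy values instead of real database columns
assert np.array_equal(clf.predict(X), clf.predict(X_placeholder))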
0 | 51,628,570 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2018-08-01T06:16:00.000 | 0 | 1 | 0 | Any way to save format when importing an excel file in Python? | 51,626,502 | 0 | python,excel,pandas,xlsxwriter | Separate data from formatting. Have a sheet that contains only the data – that's the one you will be reading/writing to – and another that has formatting and reads the data from the first sheet. | I'm doing some work on the data in an excel sheet using python pandas. When I write and save the data it seems that pandas only saves and cares about the raw data on the import. Meaning a lot of stuff I really want to keep such as cell colouring, font size, borders, etc get lost. Does anyone know of a way to make pandas save such things?
From what I've read so far it doesn't appear to be possible. The best solution I've found so far is to use the xlsxwriter to format the file in my code before exporting. This seems like a very tedious task that will involve a lot of testing to figure out how to achieve the various formats and aesthetic changes I need. I haven't found anything but would said writer happen to in any way be able to save the sheet format upon import?
Alternatively, what would you suggest I do to solve the problem that I have described? | 0 | 1 | 468 |
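One hedged way to act on that advice is to let pandas handle only the values and let openpyxl re-open the original workbook, whose styling stays in place; the file name, sheet name, and column used here are placeholders:

import pandas as pd
from openpyxl import load_workbook

df = pd.read_excel("report.xlsx", sheet_name="data")   # values only, for the pandas work
df["total"] = df["total"] * 1.1                        # whatever processing is needed

wb = load_workbook("report.xlsx")                      # colours, fonts, borders live here
ws = wb["data"]
for r, row in enumerate(df.itertuples(index=False), start=2):   # row 1 is the header
    for c, value in enumerate(row, start=1):
        ws.cell(row=r, column=c, value=value)          # overwrite values, keep styling
wb.save("report_updated.xlsx")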
0 | 51,655,116 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-08-01T10:24:00.000 | 0 | 1 | 0 | openmdao: how is the 'rel' step size calculated for a vector input of design variables? | 51,630,972 | 1.2 | python,openmdao | Yes, that is how the step size is calculated for vectors -- for relative finite difference stepping, the stepsize is scaled by the norm of the vector. However, you bring up a good point that a vector may have wildly different magnitudes in its elements, so maybe we need to add support for specifying a vector of fd step sizes. | I am currently testing gradient-based optimisation in OpenMDAO with high-fidelity flow solvers (SU2) but the shape parameterisation method I am using appears to be highly sensitive to the step size of the finite difference approximation. This is probably due to the objective function being more sensitive to some design variables than others, and so I have been using relative step sizes instead of absolute. However, I expected the step size to be relative to each design variable in the vector, but this appears not to be the case with a constant step size applied to all the design variables.
For example, a relative step size of 1e-5 produced an actual step size of 4.2e-5 (constant) with a vector of 28 design variables that vary in magnitude.
i.e. design_variables = [0, 1e-2, 1e-1...]
Question: How is the relative step size calculated for a vector of design variables that vary in magnitude and include zero?
Notes: the design variables are scaled (equally) and share the same (%) upper and lower bounds. Also, this number does appear to vary with the lower and upper bounds?
UPDATE: Issue partially resolved after reviewing the finite_difference.py script. The norm of the input is taken and multiplied with the step size. However, the code suggests (step *= scale) that the scaled value is also a scalar and so constant across all design variables, is this correct? | 0 | 1 | 80
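The behaviour described in that update can be reproduced in isolation; a tiny sketch (the 28 design-variable values are made up) of why every element ends up with the same absolute perturbation:

import numpy as np

dvs = np.array([0.0, 1e-2, 1e-1] + [1.0] * 25)   # illustrative 28-element design vector
rel_step = 1e-5

# 'rel' stepping scales the single step size by the norm of the whole vector,
# so one scalar step is shared by all design variables regardless of magnitude
actual_step = rel_step * np.linalg.norm(dvs)
print(actual_step)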
0 | 51,635,610 | 0 | 0 | 0 | 1 | 1 | true | 1 | 2018-08-01T13:58:00.000 | 3 | 1 | 0 | Can I create Excel workbooks with only Pandas (Python)? | 51,635,233 | 1.2 | python,pandas,pandas.excelwriter | The pandas codebase does not duplicate Excel reading or writing functionality provided by the external libraries you listed.
Unlike the csv format, which Python itself provides native support for, if you don't have any of those libraries installed, you cannot read or write Excel spreadsheets. | In the pandas documentation, it says that the optional dependencies for Excel I/O are:
xlrd/xlwt: Excel reading (xlrd) and writing (xlwt)
openpyxl: openpyxl > version 2.4.0 for writing .xlsx files (xlrd >= 0.9.0)
XlsxWriter: Alternative Excel writer
I can't install any external modules. Is there any way to create an .xlsx file with just a pandas installation?
Edit: My question is - is there any built-in pandas functionality to create Excel workbooks, or is one of these optional dependencies required to create any Excel workbook at all?
I thought that openpyxl was part of a pandas install, but turns out I had XlsxWriter installed. | 0 | 1 | 831 |
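A quick way to see this on a bare pandas install (file names are placeholders): CSV writing works out of the box, while to_excel fails until one of the listed engines is present.

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.to_csv("out.csv", index=False)            # supported by pandas/Python natively

try:
    df.to_excel("out.xlsx", index=False)     # delegates to openpyxl/XlsxWriter/xlwt
except ImportError as exc:
    print("no Excel writer engine available:", exc)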
0 | 52,934,117 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-08-02T07:49:00.000 | 0 | 2 | 0 | No module named tensorflow.python on windows 10 when I run classify_image.py | 51,648,278 | 0 | python,tensorflow,artificial-intelligence,image-recognition | try pip install --upgrade --ignore-installed tensorflow again.
If it says some packages can't be installed/upgraded because some other packages are missing, pip install those packages too, and then try the first command again.
That's how I solved my problem. | I'm trying to use tensorflow to run classify_image.py, but I keep getting the same error:
Traceback (most recent call last):
File "classify_image.py", line 46, in <module>
import tensorflow as tf
File "C:\Users\Diederik\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
ModuleNotFoundError: No module named 'tensorflow.python'
Someone asked me to do a pip3 list, so I did:
C:\Users\Diederik\AppData\Local\Programs\Python\Python36\Scripts>pip3 list Package Version ----------- ------- absl-py 0.3.0 astor 0.7.1 gast 0.2.0 grpcio 1.13.0 Markdown 2.6.11 numpy 1.15.0 pip 10.0.1 protobuf 3.6.0 setuptools 39.0.1 six 1.11.0 tensorboard 1.9.0 tensorflow 1.9.0** termcolor 1.1.0 Werkzeug 0.14.1 wheel 0.31.1 You are using pip version 10.0.1, however version 18.0 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. | 0 | 1 | 437 |
0 | 51,657,493 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-08-02T07:49:00.000 | 0 | 2 | 0 | No module named tensorflow.python on windows 10 when I run classify_image.py | 51,648,278 | 0 | python,tensorflow,artificial-intelligence,image-recognition | I simply needed to add a python path, that was the only problem | I'm trying to use tensorflow to run classify_image.py, but I keep getting the same error:
Traceback (most recent call last):
File "classify_image.py", line 46, in <module>
import tensorflow as tf
File "C:\Users\Diederik\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
ModuleNotFoundError: No module named 'tensorflow.python'
Someone asked me to do a pip3 list, so I did:
C:\Users\Diederik\AppData\Local\Programs\Python\Python36\Scripts>pip3 list Package Version ----------- ------- absl-py 0.3.0 astor 0.7.1 gast 0.2.0 grpcio 1.13.0 Markdown 2.6.11 numpy 1.15.0 pip 10.0.1 protobuf 3.6.0 setuptools 39.0.1 six 1.11.0 tensorboard 1.9.0 tensorflow 1.9.0** termcolor 1.1.0 Werkzeug 0.14.1 wheel 0.31.1 You are using pip version 10.0.1, however version 18.0 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. | 0 | 1 | 437 |
0 | 51,654,678 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-08-02T12:56:00.000 | 0 | 1 | 0 | Make predictions with an old model without losing the current model | 51,654,259 | 0 | python,tensorflow | Would it be possible to use the weights to do it?
Set the weight of your targets to 1 and the weights of your non-targets to 0
At first, you'd have your weight tensor at [1]*n + [0]*m (+ as in concat).
Then you'd assign it to [1]*(n+m) when you want to add your m new targets
and so forth. | I am training a model that incrementally learns new classes, e.g. n target classes during the first 70 or so epochs, then the original n classes plus m new target classes, etc. When training the model on n+m target classes, the loss function requires predictions from the model trained on n target classes. How can I restore the old model efficiently?
It seems I can do this by creating two separate sessions for each batch and loading the old model in one before training the new model in the other, but this is terribly inefficient and makes training go from taking hours to days. | 0 | 1 | 35 |
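A framework-agnostic sketch of the per-class weighting idea from the answer; the class counts and dummy predictions are illustrative, and in practice the mask would be folded into whatever TensorFlow loss the model already uses:

import numpy as np

n_old, m_new = 4, 2
mask = np.array([1.0] * n_old + [0.0] * m_new)   # new classes silenced for now

def masked_cross_entropy(probs, one_hot, class_mask):
    # ordinary cross-entropy, except classes whose mask is 0 contribute nothing
    per_class = -one_hot * np.log(probs + 1e-12) * class_mask
    return per_class.sum(axis=1).mean()

probs = np.full((3, n_old + m_new), 1.0 / (n_old + m_new))   # dummy softmax outputs
one_hot = np.eye(n_old + m_new)[[0, 1, 4]]                   # third sample is a "new" class
print(masked_cross_entropy(probs, one_hot, mask))            # new-class sample adds 0 loss

# when the m new targets should start counting, flip their mask entries to 1
mask = np.ones(n_old + m_new)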
0 | 51,681,014 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-08-02T19:42:00.000 | 0 | 2 | 0 | filter working in pyspark shell not spark-submit | 51,661,079 | 0 | python-3.x,apache-spark,filter,pyspark,apache-spark-sql | The filtering is working now after the col('word') is trimmed.
df_filter = df.filter(~(trim(col("word")).isin(stop_words_list)))
I still don't know why it works in pyspark shell, but not spark-submit. The only difference they have is: in pyspark shell, I used spark.read.csv() to read in the file, while in spark-submit, I used the following method.
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
session = pyspark.sql.SparkSession.builder.appName('test').getOrCreate()
sqlContext = SQLContext(session)
df = sqlContext.read.format("com.databricks.spark.csv").option('header','true').load()
I'm not sure if two different read-in methods are causing the discrepancy. Someone who is familiar with this can clarify. | df_filter = df.filter(~(col('word').isin(stop_words_list)))
df_filter.count()
27781
df.count()
31240
While submitting the same code to a Spark cluster using spark-submit, the filter function is not working properly; the rows with col('word') in the stop_words_list are not filtered.
Why does this happen? | 0 | 1 | 312 |
0 | 51,686,316 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-08-03T06:28:00.000 | 0 | 1 | 0 | What is the input to the generator in Generative Adversarial Network | 51,666,513 | 0 | python-3.x,image-processing,generative-adversarial-network | The input to the generator is a z-dimensional vector of completely random values. In case of DCGAN, the input is from gaussian distribution source. GANs' theory is based on the fact that these random values are learnt to be distotred by the network in such a way that Discriminator/ Critic is fooled by the image that is produced by the generator. Both generator and discriminator are adversaries of each other making each other better epoch by epoch, and hence the name Adversarial networks.
For classifying an image into cars and not car, you can actually solve this problem fairly easily and accurately using a simple ConvNet by training it on multiple car images and "not car" images. There are already pretty complex object detection networks out there trained on Imagenet data, so you might as well start by referring the architecture of those networks.
Hope my answer was of some help to you. :) | I am recently studying about GAN model and thought that it would be useful for my system in which I am going to predict whether a given image is car or not. I understand the part that the "discriminator" gets input from the "generator". The generator generates image from a random vector which is then passed to the discriminator for authenticity check. But what exactly is that vector that is used by the generator ? Is it in an image converted to pixels and we pass it as an vector of pixels ?
Can anyone please explain this to me?
Or is there any other method that I should follow to build a system that can classify images to car or not
Thanks in advance | 0 | 1 | 177 |
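In concrete terms, the vector the answer describes is nothing more than sampled noise; a minimal sketch (the latent dimension of 100 is a common but arbitrary choice, and the generator itself is not defined here):

import numpy as np

batch_size, z_dim = 64, 100
z = np.random.normal(0.0, 1.0, size=(batch_size, z_dim))   # DCGAN-style Gaussian noise
print(z.shape)                 # (64, 100); the generator learns to map each row to an image
# fake_images = generator(z)   # hypothetical generator network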
0 | 53,847,802 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-08-03T07:33:00.000 | 1 | 1 | 0 | Janusgraph query returns dataframe to perform analysis using GraphFrame | 51,667,488 | 0.197375 | python-2.7,apache-spark,apache-spark-sql,gremlin,janusgraph | As far as I am concerned, there is not official connector available for transforming query results of JanusGraph into Spark DataFrame (and then Graph in GraphFrame).
Thus, you must manually write code to perform the transformation. You can use the gremlin-python package to query JanusGraph and then use Spark to feed the result set into a Spark DataFrame. | I use JanusGraph, HBase and Python (through gremlin_python) to create and store a sample graph. Now I'd like to do some graph analysis (e.g. page rank), and wish to stick to Python. I'm wondering if it's possible to query a graph from JanusGraph in DataFrame format, then ingest into GraphFrame to calculate pageRank?
The key questions are how can I bridge between JanusGraph + GraphFrame using Python, i.e.
1. query a graph which returns in one format
2. call GraphFrame API to calculate PageRank. | 1 | 1 | 234
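A hedged sketch of step 2 only, assuming the vertex and edge rows have already been pulled out of JanusGraph with gremlin_python (the two small lists stand in for those query results) and that the graphframes package is available to Spark:

from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.appName("janusgraph-pagerank").getOrCreate()

# placeholder results of gremlin_python traversals against JanusGraph
vertex_rows = [("v1", "alice"), ("v2", "bob")]
edge_rows = [("v1", "v2", "knows")]

vertices = spark.createDataFrame(vertex_rows, ["id", "name"])
edges = spark.createDataFrame(edge_rows, ["src", "dst", "relationship"])

g = GraphFrame(vertices, edges)
ranks = g.pageRank(resetProbability=0.15, maxIter=10)
ranks.vertices.select("id", "pagerank").show()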
0 | 51,670,317 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-08-03T10:00:00.000 | 0 | 1 | 0 | Sum of an array and it's transpose | 51,670,148 | 0 | python,arrays | Your result will need n rows to comply to the structure of one of the arrays and n columns to comply to the structure of the other array. | Consider a array A of shape (n x 1). The transpose of A will have the shape (1 x n). Why is that the sum of A and A.T (i.e A + A.T) give back an (n x n) array in Python? | 0 | 1 | 62 |
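A three-line check of the broadcasting behaviour that produces the (n x n) result (n = 3 chosen for illustration):

import numpy as np

A = np.arange(3).reshape(3, 1)   # shape (3, 1)
S = A + A.T                      # (3, 1) + (1, 3) broadcasts both operands to (3, 3)
print(S.shape)                   # (3, 3)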