question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
59,648,509 | 2020-1-8 | https://stackoverflow.com/questions/59648509/batch-normalization-when-batch-size-1 | What will happen when I use batch normalization but set batch_size = 1? Because I am using 3D medical images as training dataset, the batch size can only be set to 1 because of GPU limitation. Normally, I know, when batch_size = 1, variance will be 0. And (x-mean)/variance will lead to error because of division by 0. But why did errors not occur when I set batch_size = 1? Why my network was trained as good as I expected? Could anyone explain it? Some people argued that: The ZeroDivisionError may not be encountered because of two cases. First, the exception is caught in a try catch block. Second, a small rational number is added ( 1e-19 ) to the variance term so that it is never zero. But some people disagree. They said that: You should calculate mean and std across all pixels in the images of the batch. (So even batch_size = 1, there are still a lot of pixels in the batch. So the reason why batch_size=1 can still work is not because of 1e-19) I have checked the Pytorch source code, and from the code I think the latter one is right. Does anyone have different opinion??? | variance will be 0 No, it won't; BatchNormalization computes statistics only with respect to a single axis (usually the channels axis, =-1 (last) by default); every other axis is collapsed, i.e. summed over for averaging; details below. More importantly, however, unless you can explicitly justify it, I advise against using BatchNormalization with batch_size=1; there are strong theoretical reasons against it, and multiple publications have shown BN performance degrade for batch_size under 32, and severely for <=8. In a nutshell, batch statistics "averaged" over a single sample vary greatly sample-to-sample (high variance), and BN mechanisms don't work as intended. Small mini-batch alternatives: Batch Renormalization -- Layer Normalization -- Weight Normalization Implementation details: from source code: reduction_axes = list(range(len(input_shape))) del reduction_axes[self.axis] Eventually, tf.nn.monents is called with axes=reduction_axes, which performs a reduce_sum to compute variance. Then, in the TensorFlow backend, mean and variance are passed to tf.nn.batch_normalization to return train- or inference-normalized inputs. In other words, if your input is (batch_size, height, width, depth, channels), or (1, height, width, depth, channels), then BN will run calculations over the 1, height, width, and depth dimensions. Can variance ever be zero? - yes, if every single datapoint for any given channel slice (along every dimension) is the same. But this should be near-impossible for real data. Other answers: first one is misleading: a small rational number is added (1e-19) to the variance This doesn't happen in computing variance, but it is added to variance when normalizing; nonetheless, it is rarely necessary, as variance is far from zero. Also, the epsilon term is actually defaulted to 1e-3 by Keras; it serves roles in regularizing, beyond mere avoiding zero-division. Update: I failed to address an important piece of intuition with suspecting variance to be 0; indeed, the batch statistics variance is zero, since there is only one statistic - but the "statistic" itself concerns the mean & variance of the channel + spatial dimensions. In other words, the variance of the mean & variance (of the single train sample) is zero, but the mean & variance themselves aren't. | 11 | 18 |
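The point about the reduction axes is easy to verify with a minimal NumPy sketch (the sample shape and channel count below are made up for illustration): the per-channel statistics of a single 3D sample are computed over all spatial positions, so the variance stays far from zero even with batch_size=1.

```python
import numpy as np

# one sample shaped (batch=1, height, width, depth, channels)
x = np.random.randn(1, 8, 8, 8, 4)

# BatchNormalization reduces over every axis except the channels axis (-1)
mean = x.mean(axis=(0, 1, 2, 3))  # per-channel mean, shape (4,)
var = x.var(axis=(0, 1, 2, 3))    # per-channel variance, shape (4,)

print(var)  # roughly 1 for standard-normal data - nowhere near zero
```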
59,713,016 | 2020-1-13 | https://stackoverflow.com/questions/59713016/plotly-express-how-to-fix-the-color-mapping-when-setting-color-by-column-name | I am using plotly express for a scatter plot. The color of the markers is defined by a variable of my dataframe, as in the example below. import pandas as pd import numpy as np import plotly.express as px df = px.data.iris() fig = px.scatter(df[df.species.isin(['virginica', 'setosa'])], x="sepal_width", y="sepal_length", color="species") fig.show() When I add another instance of this variable, the color mapping changes (First, 'virginica', is red, then green). fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",size='petal_length', hover_data=['petal_width']) fig.show() How can I keep the mapping of the colors when adding variables? | Short answer: 1. Assign colors to variables with color_discrete_map : color_discrete_map = {'virginica': 'blue', 'setosa': 'red', 'versicolor': 'green'} or: 2. Manage the order of your data to enable the correct color cycle with: order_df(df_input = df, order_by='species', order=['virginica', 'setosa', 'versicolor']) ... where order_df is a function that handles the ordering of long dataframes for which you'll find the complete definition in the code snippets below. The details: 1. You can map colors to variables directly with: color_discrete_map = {'virginica': 'blue', 'setosa': 'red', 'versicolor': 'green'} The downside is that you'll have to specify variable names and colors. And that quickly becomes tedious if you're working with dataframes where the number of variables is not fixed. In which case it would be much more convenient to follow the default color sequence or specify one to your liking. So I would rather consider managing the order of your dataset so that you'll get the desired colormatching. 2. The source of the real challenge: px.Scatter() will assign color to variable in the order they appear in your dataframe. Here you're using two different sourcesdf and df[df.species.isin(['virginica', 'setosa', 'versicolor'])] (let's name the latter df2). Running df2['species'].unique() will give you: array(['setosa', 'virginica'], dtype=object) And running df['species'] will give you: array(['setosa', 'versicolor', 'virginica'], dtype=object) See that versicolor pops up in the middle? Thats's why red is no longer assigned to 'virginica', but 'versicolor' instead. Suggested solution: So in order to build a complete solution, you'd have to find a way to specify the order of the variables in the source dataframe. Thats very straight forward for a column with unique values. It's a bit more work for a dataframe of a long format such as this. You could do it as described in the post Changing row order in pandas dataframe without losing or messing up data. But below I've put together a very easy function that takes care of both the subset and the order of the dataframe you'd like to plot with plotly express. 
Using the complete code and switching between the lines under # data subsets will give you the following three plots: Plot 1: order=['virginica'] Plot 2: order=['virginica', 'setosa'] Plot 3: order=['virginica', 'setosa', 'versicolor'] Complete code: # imports import pandas as pd import plotly.express as px # data df = px.data.iris() # function to subset and order a pandas # dataframe of a long format def order_df(df_input, order_by, order): df_output=pd.DataFrame() for var in order: df_append=df_input[df_input[order_by]==var].copy() df_output = pd.concat([df_output, df_append]) return(df_output) # data subsets df_express = order_df(df_input = df, order_by='species', order=['virginica']) df_express = order_df(df_input = df, order_by='species', order=['virginica', 'setosa']) df_express = order_df(df_input = df, order_by='species', order=['virginica', 'setosa', 'versicolor']) # plotly fig = px.scatter(df_express, x="sepal_width", y="sepal_length", color="species") fig.show() | 12 | 10 |
59,707,973 | 2020-1-12 | https://stackoverflow.com/questions/59707973/how-to-disable-keyword-text-suggestion-in-spyder-4 | Specifically this popup box: It appears almost every time I type anything, and it's kind of getting in the way. I have disabled code completion in the settings, and uninstalled Kite, and disabled Jedi. Any ideas? | (Spyder maintainer here) To disable those completions (called fallback completions) in Spyder 5, you need to go to Tools > Preferences > Completion and Linting > General and deactivate the option called Enable Fallback provider. For Spyder 4, please go to Tools > Preferences > Completion and Linting > Advanced and deactivate the option called Enable fallback completions. | 28 | 36 |
59,642,902 | 2020-1-8 | https://stackoverflow.com/questions/59642902/how-to-handle-file-upload-validations-using-flask-marshmallow | I'm working with Flask-Marshmallow for validating request and response schemas in Flask app. I was able to do simple validations for request.form and request.args when there are simple fields like Int, Str, Float etc. I have a case where I need to upload a file using a form field - file_field. It should contain the file content. How can I validate if this field is present or not and what is the format of file etc. Is there any such field in Marshmallow that I can use like fields.Int() or fields.Str() I have gone through the documentation here but haven't found any such field. | You can use fields.Raw: import marshmallow class CustomSchema(marshmallow.Schema): file = marshmallow.fields.Raw(type='file') If you are using Swagger, you would then see something like this: Then in your view you can access the file content with flask.request.files. For a full example and more advanced topics, check out my project. | 12 | 9 |
59,687,514 | 2020-1-10 | https://stackoverflow.com/questions/59687514/how-to-write-a-case-when-like-statement-in-numpy-array | def custom_asymmetric_train(y_true, y_pred): residual = (y_true - y_pred).astype("float") grad = np.where(residual>0, -2*10.0*residual, -2*residual) hess = np.where(residual>0, 2*10.0, 2.0) return grad, hess I want to write this statement: case when residual>=0 and residual<=0.5 then -2*1.2*residual when residual>=0.5 and residual<=0.7 then -2*1.*residual when residual>0.7 then -2*2*residual end ) however np.where cannot write &(and) logic . How do I write this case when logic in the np.where in python. Thanks | This statement can be written using np.select as: import numpy as np residual = np.random.rand(10) -0.3 # -0.3 to get some negative values condlist = [(residual>=0.0)&(residual<=0.5), (residual>=0.5)&(residual<=0.7), residual>0.7] choicelist = [-2*1.2*residual, -2*1.0*residual,-2*2.0*residual] residual = np.select(condlist, choicelist, default=residual) Note that, when multiple conditions are satisfied in condlist, the first one encountered is used. When all conditions evaluate to False, it will use the default value. Moreover, for your information, you need to use bitwise operator & on boolean numpy arrays as and python keyword won't work on them. Let's benchmark these answers: residual = np.random.rand(10000) -0.3 def charl_3where(residual): residual = np.where((residual>=0.0)&(residual<=0.5), -2*1.2*residual, residual) residual = np.where((residual>=0.5)&(residual<=0.7), -2*1.0*residual, residual) residual = np.where(residual>0.7, -2*2.0*residual, residual) return residual def yaco_select(residual): condlist = [(residual>=0.0)&(residual<=0.5), (residual>=0.5)&(residual<=0.7), residual>0.7] choicelist = [-2*1.2*residual, -2*1.0*residual,-2*2.0*residual] residual = np.select(condlist, choicelist, default=residual) return residual %timeit charl_3where(residual) >>> 112 µs ± 1.7 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit yaco_select(residual) >>> 141 µs ± 2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) let's try to optimize these with numba from numba import jit @jit(nopython=True) def yaco_numba(residual): out = np.empty_like(residual) for i in range(residual.shape[0]): if residual[i]<0.0 : out[i] = residual[i] elif residual[i]<=0.5 : out[i] = -2*1.2*residual[i] elif residual[i]<=0.7: out[i] = -2*1.0*residual[i] else: # residual>0.7 out[i] = -2*2.0*residual[i] return out %timeit yaco_numba(residual) >>> 6.65 µs ± 123 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) Final check res1 = charl_3where(residual) res2 = yaco_select(residual) res3 = yaco_numba(residual) np.allclose(res1,res3) >>> True np.allclose(res2,res3) >>> True This one is about 15x faster than the previously best one. Hope this helps. | 8 | 9 |
59,684,765 | 2020-1-10 | https://stackoverflow.com/questions/59684765/is-it-possible-to-put-numbers-on-top-of-a-matplot-histogram | import matplotlib.pyplot as plt import numpy as np randomnums = np.random.normal(loc=9,scale=6, size=400).astype(int)+15 Output: array([25, 22, 19, 26, 24, 9, 19, 32, 30, 25, 29, 17, 21, 14, 17, 27, 27, 28, 17, 17, 20, 21, 16, 28, 20, 24, 15, 20, 20, 13, 33, 21, 30, 27, 8, 22, 24, 25, 23, 13, 24, 20, 16, 32, 15, 26, 34, 16, 21, 21, 28, 22, 23, 18, 20, 22, 23, 22, 23, 26, 22, 25, 19, 29, 14, 27, 21, 23, 24, 19, 25, 15, 22, 23, 19, 19, 23, 21, 22, 17, 25, 15, 24, 25, 23 ... h = sorted(randomnums) plt.hist(h,density=False) plt.show() Output: From my research I found only how to plot numbers on top of a bar chart, but what I want is to plot on top of a histogram chart. Is it possible? | An adapted version of the answer I linked in the comments of the question. Thanks a lot for the suggestions in the comments below this post! import matplotlib.pyplot as plt import numpy as np h = np.random.normal(loc=9,scale=6, size=400).astype(int)+15 fig, ax = plt.subplots(figsize=(16, 10)) ax.hist(h, density=False) for rect in ax.patches: height = rect.get_height() ax.annotate(f'{int(height)}', xy=(rect.get_x()+rect.get_width()/2, height), xytext=(0, 5), textcoords='offset points', ha='center', va='bottom') ...gives e.g. See also: matplotlib.axes.Axes.annotate. | 7 | 14 |
59,719,323 | 2020-1-13 | https://stackoverflow.com/questions/59719323/removing-sep-token-in-bert-for-text-classification | Given a sentiment classification dataset, I want to fine-tune Bert. As you know, BERT was created to predict the next sentence given the current sentence. Thus, to make the network aware of this, they inserted a [CLS] token at the beginning of the first sentence, then a [SEP] token to separate the first from the second sentence, and finally another [SEP] at the end of the second sentence (it's not clear to me why they append another token at the end). Anyway, for text classification, what I noticed in some of the examples online (see BERT in Keras with Tensorflow hub) is that they add a [CLS] token, then the sentence, and at the end another [SEP] token. Whereas in other research works (e.g. Enriching Pre-trained Language Model with Entity Information for Relation Classification) they remove the last [SEP] token. Why is it (or is it not) beneficial to add the [SEP] token at the end of the input text when my task uses only a single sentence? | I'm not quite sure why BERT needs the separation token [SEP] at the end for single-sentence tasks, but my guess is that BERT is an autoencoding model that, as mentioned, originally was designed for Language Modelling and Next Sentence Prediction. So BERT was trained that way to always expect the [SEP] token, which means that the token is involved in the underlying knowledge that BERT built up during training. Downstream tasks that followed later, such as single-sentence use-cases (e.g. text classification), turned out to work with BERT too; however, the [SEP] was left as a relic that BERT needs to work properly and thus is required even for these tasks. BERT might learn faster if [SEP] is appended at the end of a single sentence, because it encodes some knowledge in that token that this marks the end of the input. Without it, BERT would still know where the sentence ends (due to the padding tokens), which explains why the aforementioned research leaves out the token, but this might slow down training slightly, since BERT might be able to learn faster with an appended [SEP] token, especially if there are no padding tokens in a truncated input. | 11 | 6 |
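The convention is easy to see in practice with a short sketch (assuming the Hugging Face transformers package, which the question does not mention): encoding a single sentence with a BERT tokenizer appends [SEP] automatically even though there is no second segment.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# encode() inserts the special tokens for a single segment: [CLS] ... [SEP]
ids = tokenizer.encode("the movie was great")
print(tokenizer.convert_ids_to_tokens(ids))
# ['[CLS]', 'the', 'movie', 'was', 'great', '[SEP]']
```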
59,702,785 | 2020-1-12 | https://stackoverflow.com/questions/59702785/what-does-dim-1-or-2-mean-in-torch-sum | let me take a 2D matrix as example: mat = torch.arange(9).view(3, -1) tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) torch.sum(mat, dim=-2) tensor([ 9, 12, 15]) I find the result of torch.sum(mat, dim=-2) is equal to torch.sum(mat, dim=0) and dim=-1 equal to dim=1. My question is how to understand the negative dimension here. What if the input matrix has 3 or more dimensions? | The minus essentially means you go backwards through the dimensions. Let A be a n-dimensional matrix. Then dim=n-1=-1, dim=n-2=-2, ..., dim=1=-(n-1), dim=0=-n. See the numpy doc for more information, as pytorch is heavily based on numpy. | 21 | 13 |
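A small PyTorch sketch of the 3-D case (the tensor shape is arbitrary) makes the rule concrete: dim=-1 addresses the last axis and dim=-3 the first, exactly as dim=n-1 and dim=0 would.

```python
import torch

t = torch.arange(24).view(2, 3, 4)  # a 3-D tensor, n = 3 dimensions

print(torch.equal(t.sum(dim=-1), t.sum(dim=2)))  # True: last axis
print(torch.equal(t.sum(dim=-2), t.sum(dim=1)))  # True
print(torch.equal(t.sum(dim=-3), t.sum(dim=0)))  # True: first axis

print(t.sum(dim=-1).shape)  # torch.Size([2, 3]) - the summed axis is removed
```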
59,660,939 | 2020-1-9 | https://stackoverflow.com/questions/59660939/default-value-of-gamma-svc-sklearn | I'm using SVC from sklearn.svm for binary classification in Python. For the gamma parameter it says that its default value is 'scale', i.e. 1 / (n_features * X.var()). I'm having a hard time understanding this. Can you tell me what the default value of gamma is if, for example, the input is a vector of 3 dimensions (3,), e.g. [3,3,3], and the number of input vectors is 10,000? Also, is there a way I can print it out to see its value? | This is easy to see with an example. The array X below has two features (columns). The variance of the array is 1.75. The default gamma is therefore 1/(2*1.75) = 0.2857. You can verify this by checking the ._gamma attribute of the classifier. import numpy as np from sklearn.svm import SVC X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]]) y = np.array([1, 1, 2, 2]) clf = SVC(gamma='scale') clf.fit(X, y) n_features = X.shape[1] gamma = 1 / (n_features * X.var()) clf._gamma Output: X Out[24]: array([[-1, -1], [-2, -1], [ 1, 1], [ 2, 1]]) n_features Out[25]: 2 X.var() Out[26]: 1.75 gamma Out[27]: 0.2857142857142857 clf._gamma Out[28]: 0.2857142857142857 | 7 | 5 |
59,700,903 | 2020-1-12 | https://stackoverflow.com/questions/59700903/importerror-cannot-import-name-parser | I have a bunch of modules. The modules and their imports are listed below: ast.py: import enum from abc import ABC, abstractmethod err.py: none lexer.py: from token import TokenTag, Token parser.py: from ast import * from err import UndeclaredIdentError, SyntaxError from token import TokenTag as Tag from type import Type peep.py: from lexer import Lexer from parser import Parser token.py: import enum treewalker.py: from abc import ABC, abstractmethod type.py: import enum from treewalker import TreeWalker I tried to run peep.py but I get the following error: Traceback (most recent call last): File "peep.py", line 2, in <module> from parser import Parser ImportError: cannot import name 'Parser' I don't understand why I got ImportError, I can't find any obvious circular dependencies in the file hierachy above. I did some research, I figured that I should rename the module ast.py to syntaxtree.py because ast.py already exists in Python's standard library. After renaming, it produced the same result. Any form of help is appreciated, thanks! | Open parser.py file and change the code for from parser import Parser to from .parser import Parser | 7 | 1 |
59,697,971 | 2020-1-11 | https://stackoverflow.com/questions/59697971/is-there-a-way-i-can-run-a-python-script-when-a-button-programmed-in-flutter-is | Essentially, what I want to do is press a button that I program in Flutter, and when that button is pressed, a Python script should start running on my Android device. I want to use the youtube-dl library (used to download YouTube videos) in Python, but I want to know if there is a way to run the library in Flutter. Any help is appreciated. Thanks in advance. | I know it's quite old now but maybe someone else can get help out of it. There is a library called starflut and it can incorporate Python code inside your Flutter app. Take a look here https://pub.dev/packages/starflut | 22 | 13 |
59,721,120 | 2020-1-13 | https://stackoverflow.com/questions/59721120/find-equal-columns-between-two-dataframes | I have two pandas data frames, a and b: a1 a2 a3 a4 a5 a6 a7 1 3 4 5 3 4 5 0 2 0 3 0 2 1 2 5 6 5 2 1 2 and b1 b2 b3 b4 b5 b6 b7 3 5 4 5 1 4 3 0 1 2 3 0 0 2 2 2 1 5 2 6 5 The two data frames contain exactly the same data, but in a different order and with different column names. Based on the numbers in the two data frames, I would like to be able to match each column name in a to each column name in b. It is not as easy as simply comparing the first row of a with the first row of b as there are duplicated values, for example both a4 and a7 have the value 5 so it is not possible to immediately match them to either b2 or b4. What is the best way to do this? | Here's one way leveraging broadcasting to check for equality between both dataframes and taking all on the result to check where all rows match. Then we can obtain indexing arrays for both dataframe's column names from the result of np.where (with @piR's contribution): i, j = np.where((a.values[:,None] == b.values[:,:,None]).all(axis=0)) dict(zip(a.columns[j], b.columns[i])) # {'a7': 'b2', 'a6': 'b3', 'a4': 'b4', 'a2': 'b7'} | 19 | 18 |
59,737,875 | 2020-1-14 | https://stackoverflow.com/questions/59737875/keras-change-learning-rate | I'm trying to change the learning rate of my model after it has been trained with a different learning rate. I read here, here, here and some other places i can't even find anymore. I tried: model.optimizer.learning_rate.set_value(0.1) model.optimizer.lr = 0.1 model.optimizer.learning_rate = 0.1 K.set_value(model.optimizer.learning_rate, 0.1) K.set_value(model.optimizer.lr, 0.1) model.optimizer.lr.assign(0.1) ... but none of them worked! I don't understand how there could be such confusion around such a simple thing. Am I missing something? EDIT: Working example Here is a working example of what I'd like to do: from keras.models import Sequential from keras.layers import Dense import keras import numpy as np model = Sequential() model.add(Dense(1, input_shape=(10,))) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(loss='mse', optimizer=optimizer) model.fit(np.random.randn(50,10), np.random.randn(50), epochs=50) # Change learning rate to 0.001 and train for 50 more epochs model.fit(np.random.randn(50,10), np.random.randn(50), initial_epoch=50, epochs=50) | You can change the learning rate as follows: from keras import backend as K K.set_value(model.optimizer.learning_rate, 0.001) Included into your complete example it looks as follows: from keras.models import Sequential from keras.layers import Dense from keras import backend as K import keras import numpy as np model = Sequential() model.add(Dense(1, input_shape=(10,))) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(loss='mse', optimizer=optimizer) print("Learning rate before first fit:", model.optimizer.learning_rate.numpy()) model.fit(np.random.randn(50,10), np.random.randn(50), epochs=50, verbose=0) # Change learning rate to 0.001 and train for 50 more epochs K.set_value(model.optimizer.learning_rate, 0.001) print("Learning rate before second fit:", model.optimizer.learning_rate.numpy()) model.fit(np.random.randn(50,10), np.random.randn(50), initial_epoch=50, epochs=50, verbose=0) I've just tested this with keras 2.3.1. Not sure why the approach didn't seem to work for you. | 57 | 65 |
59,686,945 | 2020-1-10 | https://stackoverflow.com/questions/59686945/django-postgres-percentile-median-and-group-by | I need to calculate period medians per seller ID (see simplyfied model below). The problem is I am unable to construct the ORM query. Model class MyModel: period = models.IntegerField(null=True, default=None) seller_ids = ArrayField(models.IntegerField(), default=list) aux = JSONField(default=dict) Query queryset = ( MyModel.objects.filter(period=25) .annotate(seller_id=Func(F("seller_ids"), function="unnest")) .values("seller_id") .annotate( duration=Cast(KeyTextTransform("duration", "aux"), IntegerField()), median=Func( F("duration"), function="percentile_cont", template="%(function)s(0.5) WITHIN GROUP (ORDER BY %(expressions)s)", ), ) .values("median", "seller_id") ) ArrayField aggregation (seller_id) source I think what I need to do is something along the lines below select t.*, p_25, p_75 from t join (select district, percentile_cont(0.25) within group (order by sales) as p_25, percentile_cont(0.75) within group (order by sales) as p_75 from t group by district ) td on t.district = td.district above example source Python 3.7.5, Django 2.2.8, Postgres 11.1 | You can create a Median child class of the Aggregate class as was done by Ryan Murphy (https://gist.github.com/rdmurphy/3f73c7b1826cacee34f6c2a855b12e2e). Median then works just like Avg: from django.db.models import Aggregate, FloatField class Median(Aggregate): function = 'PERCENTILE_CONT' name = 'median' output_field = FloatField() template = '%(function)s(0.5) WITHIN GROUP (ORDER BY %(expressions)s)' Then to find the median of a field use my_model_aggregate = MyModel.objects.all().aggregate(Median('period')) which is then available as my_model_aggregate['period__median']. | 7 | 19 |
59,741,453 | 2020-1-14 | https://stackoverflow.com/questions/59741453/is-there-a-general-way-to-run-web-applications-on-google-colab | I would like to develop web apps in Google colab. The only issue is that you need a browser connected to local host to view the web app, but Google colab doesn't have a browser inside the notebook. But it seems that there are ways around this. For example run_with_ngrok is a library for running flaks apps in colab/jupyter notebooks https://github.com/gstaff/flask-ngrok#inside-jupyter--colab-notebooks When you use it, it gives a random address , "Running on http://.ngrok.io" And somehow the webapp that's running on Google colab is running on that address. This is a great solution for Flask apps, but I am looking to run webapps in general on Google Colab, not just Flask ones. Is there a general method for running webapps in colab/jupyter notebooks? | You can plan to start a server on a port, e.g. port=8000. Find the URL to use this way. from google.colab.output import eval_js print(eval_js("google.colab.kernel.proxyPort(8000)")) # https://z4spb7cvssd-496ff2e9c6d22116-8000-colab.googleusercontent.com/ Then, start the server, e.g. !python -m http.server 8000 And click the first link above (instead of localhost or 127.0.0.1), it will open in a new tab. Display in cell You can display the result in an iframe in the output part. I made it into an easy function to call. from IPython.display import Javascript def show_port(port, height=400): display(Javascript(""" (async ()=>{ fm = document.createElement('iframe') fm.src = await google.colab.kernel.proxyPort(%s) fm.width = '95%%' fm.height = '%d' fm.frameBorder = 0 document.body.append(fm) })(); """ % (port, height) )) Now you can start a webapp (here it is http.server) in a background. And display the result as an iframe below it. get_ipython().system_raw('python3 -m http.server 8888 &') show_port(8888) To stop the server, you can call ps and kill the process. | 30 | 44 |
59,718,130 | 2020-1-13 | https://stackoverflow.com/questions/59718130/what-are-c-classes-for-a-nllloss-loss-function-in-pytorch | I'm asking about C classes for a NLLLoss loss function. The documentation states: The negative log likelihood loss. It is useful to train a classification problem with C classes. Basically everything after that point depends upon you knowing what a C class is, and I thought I knew what a C class was but the documentation doesn't make much sense to me. Especially when it describes the expected inputs of (N, C) where C = number of classes. That's where I'm confused, because I thought a C class refers to the output only. My understanding was that the C class was a one hot vector of classifications. I've often found in tutorials that the NLLLoss was often paired with a LogSoftmax to solve classification problems. I was expecting to use NLLLoss in the following example: # Some random training data input = torch.randn(5, requires_grad=True) print(input) # tensor([-1.3533, -1.3074, -1.7906, 0.3113, 0.7982], requires_grad=True) # Build my NN (here it's just a LogSoftmax) m = nn.LogSoftmax(dim=0) # Train my NN with the data output = m(input) print(output) # tensor([-2.8079, -2.7619, -3.2451, -1.1432, -0.6564], grad_fn=<LogSoftmaxBackward>) loss = nn.NLLLoss() print(loss(output, torch.tensor([1, 0, 0]))) The above raises the following error on the last line: ValueError: Expected 2 or more dimensions (got 1) We can ignore the error, because clearly I don't understand what I'm doing. Here I'll explain my intentions of the above source code. input = torch.randn(5, requires_grad=True) Random 1D array to pair with one hot vector of [1, 0, 0] for training. I'm trying to do a binary bits to one hot vector of decimal numbers. m = nn.LogSoftmax(dim=0) The documentation for LogSoftmax says that the output will be the same shape as the input, but I've only seen examples of LogSoftmax(dim=1) and therefore I've been stuck trying to make this work because I can't find a relative example. print(loss(output, torch.tensor([1, 0, 0]))) So now I have the output of the NN, and I want to know the loss from my classification [1, 0, 0]. It doesn't really matter in this example what any of the data is. I just want a loss for a one hot vector that represents classification. At this point I get stuck trying to resolve errors from the loss function relating to expected output and input structures. I've tried using view(...) on the output and input to fix the shape, but that just gets me other errors. So this goes back to my original question and I'll show the example from the documentation to explain my confusion: m = nn.LogSoftmax(dim=1) loss = nn.NLLLoss() input = torch.randn(3, 5, requires_grad=True) train = torch.tensor([1, 0, 4]) print('input', input) # input tensor([[...],[...],[...]], requires_grad=True) output = m(input) print('train', output, train) # tensor([[...],[...],[...]],grad_fn=<LogSoftmaxBackward>) tensor([1, 0, 4]) x = loss(output, train) Again, we have dim=1 on LogSoftmax which confuses me now, because look at the input data. It's a 3x5 tensor and I'm lost. Here's the documentation on the first input for the NLLLoss function: Input: (N, C)(N,C) where C = number of classes The inputs are grouped by the number of classes? So each row of the tensor input is associated with each element of the training tensor? If I change the second dimension of the input tensor, then nothing breaks and I don't understand what is going on. 
input = torch.randn(3, 100, requires_grad=True) # 3 x 100 still works? So I don't understand what a C class is here, and I thought a C class was a classification (like a label) and meaningful only on the outputs of the NN. I hope you understand my confusion, because shouldn't the shape of the inputs for the NN be independent from the shape of the one hot vector used for classification? Both the code examples and documentations say that the shape of the inputs is defined by the number of classifications, and I don't really understand why. I have tried to study the documentations and tutorials to understand what I'm missing, but after several days of not being able to get past this point I've decided to ask this question. It's been humbling because I thought this was going to be one of the easier things to learn. | Basically you are missing a concept of batch. Long story short, every input to loss (and the one passed through the network) requires batch dimension (i.e. how many samples are used). Breaking it up, step by step: Your example vs documentation Each step will be each step compared to make it clearer (documentation on top, your example below) Inputs input = torch.randn(3, 5, requires_grad=True) input = torch.randn(5, requires_grad=True) In the first case (docs), input with 5 features is created and 3 samples are used. In your case there is only batch dimension (5 samples), you have no features which are required. If you meant to have one sample with 5 features you should do: input = torch.randn(5, requires_grad=True) LogSoftmax LogSoftmax is done across features dimension, you are doing it across batch. m = nn.LogSoftmax(dim=1) # apply over features m = nn.LogSoftmax(dim=0) # apply over batch It makes no sense usually for this operation as samples are independent of each other. Targets As this is multiclass classification and each element in vector represents a sample, one can pass as many numbers as one wants (as long as it's smaller than number of features, in case of documentation example it's 5, hence [0-4] is fine ). train = torch.tensor([1, 0, 4]) train = torch.tensor([1, 0, 0]) I assume, you wanted to pass one-hot vector as target as well. PyTorch doesn't work that way as it's memory inefficient (why store everything as one-hot encoded when you can just pinpoint exactly the class, in your case it would be 0). Only outputs of neural network are one hot encoded in order to backpropagate error through all output nodes, it's not needed for targets. Final You shouldn't use torch.nn.LogSoftmax at all for this task. Just use torch.nn.Linear as last layer and use torch.nn.CrossEntropyLoss with your targets. | 8 | 8 |
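To make the closing recommendation concrete, here is a minimal runnable sketch (layer sizes and batch size are arbitrary) of a linear classifier trained with torch.nn.CrossEntropyLoss, where the targets are plain class indices rather than one-hot vectors.

```python
import torch
import torch.nn as nn

model = nn.Linear(5, 3)               # 5 input features, C = 3 classes
criterion = nn.CrossEntropyLoss()     # LogSoftmax + NLLLoss in one step

x = torch.randn(4, 5)                 # batch of N = 4 samples, 5 features each
targets = torch.tensor([0, 2, 1, 0])  # one class index per sample, not one-hot

loss = criterion(model(x), targets)   # model(x) has shape (N, C) = (4, 3)
loss.backward()
print(loss.item())
```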
59,732,335 | 2020-1-14 | https://stackoverflow.com/questions/59732335/is-there-any-disadvantage-in-using-pythondontwritebytecode-in-docker | In many Docker tutorials based on Python (such as: this one) they use the option PYTHONDONTWRITEBYTECODE in order to make Python avoid to write .pyc files on the import of source modules (This is equivalent to specifying the -B option). What are the risks and advantages of setting this option up? | When you run a single python process in the container, which does not spawn other python processes itself during its lifetime, then there is no "risk" in doing that. Storing byte code on disk is used to compile python into byte code just upon the first invocation of a program and its dependent libraries to save that step upon the following invocations. In a container the process runs just once, therefore setting this option makes sense. | 83 | 84 |
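If you want to confirm inside the running container that the option actually took effect, Python exposes the flag at runtime; this tiny check is independent of any particular base image.

```python
import sys

# True when Python was started with -B or PYTHONDONTWRITEBYTECODE is set to a
# non-empty value, i.e. no .pyc files will be written on import
print(sys.dont_write_bytecode)
```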
59,734,277 | 2020-1-14 | https://stackoverflow.com/questions/59734277/filerequiredvalidator-doesnt-work-when-using-multiplefilefield-in-my-form | My UploadForm class: from app import app from flask_wtf.file import FileRequired, FileAllowed from wtforms.fields import MultipleFileField from wtforms.validators import InputRequired,DataRequired class UploadForm(FlaskForm): . . . roomImage = MultipleFileField('Room',validators=[FileAllowed(['jpg', 'png'], 'Image only!'), FileRequired('File was empty!')] ) . . .#there are other fields here which are not relevant to the problem at hand HTML Template {% extends "base.html" %} {% block content %} <h1>Upload Your Images</h1> <form action="/" enctype="multipart/form-data" method="post" > {{ form.csrf_token }} Room<br /> {{form.roomImage()}} . . . <MORE THINGS THAT I HAVE EDITED OUT> {{form.submit()}} <br/> {% if form.errors %} {{ form.errors }} {% endif %} </form> {% endblock %} hosts.py to run the check for validation def upload_image():#NEEDS HEAVY FIXING """home page to return the web page to upload documents""" form = UploadForm() if form.validate_on_submit(): Using VS's debugging tools, I find that form.validate_on_submit() doesn't work and always fails validation and I get this error on my html page. {'roomImage': ['File was empty!']} There is another MultipleFileField control with almost the exact same code. This issue does not happen when I use FileField to upload one file. The documentation on this is very limited and all I had to go on was this. I don't really know how to solve this issue. I have searched extensively for finding example involving MultipleFileField but they are not using any validation. A thread on Github that I can't find anymore suggested using OptionalValidator, but then that is not an option for me and even that didn't work. Can someone suggest me a solution? EDIT: Even the FileAllowed() validator does not seem to work. | This works for me (found on GitHub "between the lines"): multi_file = MultipleFileField("Upload File(s)", validators=[DataRequired()]) However FileAllowed(["xml", "jpg"]) is ignored and does not work for me. EDIT: No, sadly, it does not work... It returns True for form.validate() and for form.validate_on_submit() but when you pass no files, by deleting required="" from <input id="multi_file" multiple="" name="multi_file" required="" type="file"> and submit a form, it still evaluate that as True. So problem sill persist in full, as described... | 8 | 2 |
59,649,413 | 2020-1-8 | https://stackoverflow.com/questions/59649413/violin-plot-for-positive-values-with-python | I find violin plots very informative and useful, I use python library 'seaborn'. However, when applied to positive values, they nearly always show negative values at the lower end. I find this really misleading, especially when working with real-life datasets. In the official documentation of seaborn https://seaborn.pydata.org/generated/seaborn.violinplot.html one can see examples with "total_bill" and "tip" which can not be negative. The violin plots show negative values, however. For example, import seaborn as sns sns.set(style="whitegrid") tips = sns.load_dataset("tips") ax = sns.violinplot(x="day", y="total_bill", hue="smoker",data=tips, palette="muted", split=True) I do understand, that those negative values come from gaussian kernels. My question is, therefore: is there any way to solve this problem? Another library in python? Possibility to specify a different kernel? | You can use the keyword cut=0 to limit your plot to the data range. If the data doesn't have negative values, this will chop the end of the violin to zero. Using the same example as you, try: ax = sns.violinplot(x="day", y="total_bill", hue="smoker",data=tips, palette="muted", split=True,cut=0) | 10 | 13 |
59,725,560 | 2020-1-13 | https://stackoverflow.com/questions/59725560/finding-all-the-combinations-of-free-polyominoes-within-a-specific-area-with-a-s | I am new to the world of SAT solvers and would need some guidance regarding the following problem. Considering that: ❶ I have a selection of 14 adjacent cells in a 4*4 grid ❷ I have 5 polyominoes (A, B, C, D, E) of sizes 4, 2, 5, 2 and 1 ❸ these polyominoes are free, i.e. their shape is not fixed and can form different patterns How can I compute all the possible combinations of these 5 free polyominoes inside the selected area (cells in grey) with a SAT-solver ? Borrowing both from @spinkus's insightful answer and the OR-tools documentation I could make the following example code (runs in a Jupyter Notebook): from ortools.sat.python import cp_model import numpy as np import more_itertools as mit import matplotlib.pyplot as plt %matplotlib inline W, H = 4, 4 #Dimensions of grid sizes = (4, 2, 5, 2, 1) #Size of each polyomino labels = np.arange(len(sizes)) #Label of each polyomino colors = ('#FA5454', '#21D3B6', '#3384FA', '#FFD256', '#62ECFA') cdict = dict(zip(labels, colors)) #Color dictionary for plotting inactiveCells = (0, 1) #Indices of disabled cells (in 1D) activeCells = set(np.arange(W*H)).difference(inactiveCells) #Cells where polyominoes can be fitted ranges = [(next(g), list(g)[-1]) for g in mit.consecutive_groups(activeCells)] #All intervals in the stack of active cells def main(): model = cp_model.CpModel() #Create an Int var for each cell of each polyomino constrained to be within Width and Height of grid. pminos = [[] for s in sizes] for idx, s in enumerate(sizes): for i in range(s): pminos[idx].append([model.NewIntVar(0, W-1, 'p%i'%idx + 'c%i'%i + 'x'), model.NewIntVar(0, H-1, 'p%i'%idx + 'c%i'%i + 'y')]) #Define the shapes by constraining the cells relative to each other ## 1st polyomino -> tetromino ## # # # # # # # # ### # # # ################################ p0 = pminos[0] model.Add(p0[1][0] == p0[0][0] + 1) #'x' of 2nd cell == 'x' of 1st cell + 1 model.Add(p0[2][0] == p0[1][0] + 1) #'x' of 3rd cell == 'x' of 2nd cell + 1 model.Add(p0[3][0] == p0[0][0] + 1) #'x' of 4th cell == 'x' of 1st cell + 1 model.Add(p0[1][1] == p0[0][1]) #'y' of 2nd cell = 'y' of 1st cell model.Add(p0[2][1] == p0[1][1]) #'y' of 3rd cell = 'y' of 2nd cell model.Add(p0[3][1] == p0[1][1] - 1) #'y' of 3rd cell = 'y' of 2nd cell - 1 ## 2nd polyomino -> domino ## # # # # # # # # # # # # ############################# p1 = pminos[1] model.Add(p1[1][0] == p1[0][0]) model.Add(p1[1][1] == p1[0][1] + 1) ## 3rd polyomino -> pentomino ## # # # ## # # ## # # # # # # ################################ p2 = pminos[2] model.Add(p2[1][0] == p2[0][0] + 1) model.Add(p2[2][0] == p2[0][0]) model.Add(p2[3][0] == p2[0][0] + 1) model.Add(p2[4][0] == p2[0][0]) model.Add(p2[1][1] == p2[0][1]) model.Add(p2[2][1] == p2[0][1] + 1) model.Add(p2[3][1] == p2[0][1] + 1) model.Add(p2[4][1] == p2[0][1] + 2) ## 4th polyomino -> domino ## # # # # # # # # # # # # ############################# p3 = pminos[3] model.Add(p3[1][0] == p3[0][0]) model.Add(p3[1][1] == p3[0][1] + 1) ## 5th polyomino -> monomino ## # # # # # # # # # # # ############################### #No constraints because 1 cell only #No blocks can overlap: block_addresses = [] n = 0 for p in pminos: for c in p: n += 1 block_address = model.NewIntVarFromDomain(cp_model.Domain.FromIntervals(ranges),'%i' % n) model.Add(c[0] + c[1] * W == block_address) block_addresses.append(block_address) model.AddAllDifferent(block_addresses) 
#Solve and print solutions as we find them solver = cp_model.CpSolver() solution_printer = SolutionPrinter(pminos) status = solver.SearchForAllSolutions(model, solution_printer) print('Status = %s' % solver.StatusName(status)) print('Number of solutions found: %i' % solution_printer.count) class SolutionPrinter(cp_model.CpSolverSolutionCallback): ''' Print a solution. ''' def __init__(self, variables): cp_model.CpSolverSolutionCallback.__init__(self) self.variables = variables self.count = 0 def on_solution_callback(self): self.count += 1 plt.figure(figsize = (2, 2)) plt.grid(True) plt.axis([0,W,H,0]) plt.yticks(np.arange(0, H, 1.0)) plt.xticks(np.arange(0, W, 1.0)) for i, p in enumerate(self.variables): for c in p: x = self.Value(c[0]) y = self.Value(c[1]) rect = plt.Rectangle((x, y), 1, 1, fc = cdict[i]) plt.gca().add_patch(rect) for i in inactiveCells: x = i%W y = i//W rect = plt.Rectangle((x, y), 1, 1, fc = 'None', hatch = '///') plt.gca().add_patch(rect) The problem is that I have hard-coded 5 unique/fixed polyominoes and I don't know to how define the constraints so as each possible pattern for each polyomino is taken into account (provided it is possible). | One relatively straightforward way to constrain a simply connected region in OR-Tools is to constrain its border to be a circuit. If all your polyominos are to have size less than 8, we don’t need to worry about non-simply connected ones. This code finds all 3884 solutions: from ortools.sat.python import cp_model cells = {(x, y) for x in range(4) for y in range(4) if x > 1 or y > 0} sizes = [4, 2, 5, 2, 1] num_polyominos = len(sizes) model = cp_model.CpModel() # Each cell is a member of one polyomino member = { (cell, p): model.NewBoolVar(f"member{cell, p}") for cell in cells for p in range(num_polyominos) } for cell in cells: model.Add(sum(member[cell, p] for p in range(num_polyominos)) == 1) # Each polyomino contains the given number of cells for p, size in enumerate(sizes): model.Add(sum(member[cell, p] for cell in cells) == size) # Find the border of each polyomino vertices = { v: i for i, v in enumerate( {(x + i, y + j) for x, y in cells for i in [0, 1] for j in [0, 1]} ) } edges = [ edge for x, y in cells for edge in [ ((x, y), (x + 1, y)), ((x + 1, y), (x + 1, y + 1)), ((x + 1, y + 1), (x, y + 1)), ((x, y + 1), (x, y)), ] ] border = { (edge, p): model.NewBoolVar(f"border{edge, p}") for edge in edges for p in range(num_polyominos) } for (((x0, y0), (x1, y1)), p), border_var in border.items(): left_cell = ((x0 + x1 + y0 - y1) // 2, (y0 + y1 - x0 + x1) // 2) right_cell = ((x0 + x1 - y0 + y1) // 2, (y0 + y1 + x0 - x1) // 2) left_var = member[left_cell, p] model.AddBoolOr([border_var.Not(), left_var]) if (right_cell, p) in member: right_var = member[right_cell, p] model.AddBoolOr([border_var.Not(), right_var.Not()]) model.AddBoolOr([border_var, left_var.Not(), right_var]) else: model.AddBoolOr([border_var, left_var.Not()]) # Each border is a circuit for p in range(num_polyominos): model.AddCircuit( [(vertices[v0], vertices[v1], border[(v0, v1), p]) for v0, v1 in edges] + [(i, i, model.NewBoolVar(f"vertex_loop{v, p}")) for v, i in vertices.items()] ) # Print all solutions x_range = range(min(x for x, y in cells), max(x for x, y in cells) + 1) y_range = range(min(y for x, y in cells), max(y for x, y in cells) + 1) solutions = 0 class SolutionPrinter(cp_model.CpSolverSolutionCallback): def OnSolutionCallback(self): global solutions solutions += 1 for y in y_range: print( *( next( p for p in range(num_polyominos) if 
self.Value(member[(x, y), p]) ) if (x, y) in cells else "-" for x in x_range ) ) print() solver = cp_model.CpSolver() solver.SearchForAllSolutions(model, SolutionPrinter()) print("Number of solutions found:", solutions) | 19 | 5 |
59,721,109 | 2020-1-13 | https://stackoverflow.com/questions/59721109/why-are-deep-learning-libraries-so-huge | I've recently downloaded all packages from PyPI. One interesting observation was that of the top 15 biggest packages, all except one are deep learning packages: mxnet: mxnet-cu90 (600 MB), mxnet-cu92, mxnet-cu101mkl, mxnet-cu101 (and 6 more mxnet versions) cntk: cntk-gpu (493MB) H2O4GPU (366MB) tensorflow: tensorflow-gpu (357MB), tensorflow I looked at mxnet-cu90. It has exactly one huge file: libmxnet.so (936.7MB). What does this file contain? Is there any way to make it smaller? I'm especially astonished that those libraries are so huge considering that one usually uses them on top of CUDA + cuDNN, which I thought would do the heavy lifting. As a comparison, I looked at related libraries with which you can also build deep learning libraries: numpy: 6MB sympy: 6MB pycuda: 3.6MB tensorflow-cpu: 116MB (so the GPU version needs 241 MB more or around 3x the size!) | Deep learning frameworks are large because they package CuDNN from NVIDIA into their wheels. This is done for the convenience of downstream users. CuDNN provides the primitives that the frameworks call to execute highly optimised neural network ops (e.g. LSTM). The unzipped version of CuDNN for Windows 10 is 435MB. | 12 | 6 |
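To see for yourself where the megabytes go, a short walk over site-packages (a generic sketch using only the standard library) lists the largest installed files; bundled shared libraries such as libmxnet.so or the CUDA/cuDNN .so files typically dominate the list.

```python
import os
import site

largest = []
for root_dir in site.getsitepackages():
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                largest.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip broken symlinks and unreadable files

# print the ten largest files with their sizes in MB
for size, path in sorted(largest, reverse=True)[:10]:
    print(f"{size / 1e6:8.1f} MB  {path}")
```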
59,711,301 | 2020-1-13 | https://stackoverflow.com/questions/59711301/install-pyqt5-5-14-1-on-linux | pip3 install PyQt5 Collecting PyQt5 Using cached https://files.pythonhosted.org/packages/3a/fb/eb51731f2dc7c22d8e1a63ba88fb702727b324c6352183a32f27f73b8116/PyQt5-5.14.1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python3.6/tokenize.py", line 452, in open buffer = _builtin_open(filename, 'rb') FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-b2zw891b/PyQt5/setup.py' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-b2zw891b/PyQt5/ Then I downloaded zip folder from https://www.riverbankcomputing.com/software/pyqt/download5 and run: python3 configure.py --qmake /home/oo/Qt/5.14.0/gcc_64/bin/qmake make sudo make install Successful >>> import PyQt5 >>> import PyQt5.QtCore Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'PyQt5.sip' >>> So I installed pip3 install PyQt5.sip pip3 install sip Successful but still getting same error No module named 'PyQt5.sip' for import PyQt5.QtCore also tried PyQtChart but still error pip3 install PyQtChart Collecting PyQtChart Using cached https://files.pythonhosted.org/packages/83/35/4f6328db9a31e2776cdcd82ef7688994c11e265649f503858f1913444ba9/PyQtChart-5.14.0-5.14.0-cp35.cp36.cp37.cp38-abi3-manylinux1_x86_64.whl Collecting PyQt5>=5.14 (from PyQtChart) Using cached https://files.pythonhosted.org/packages/3a/fb/eb51731f2dc7c22d8e1a63ba88fb702727b324c6352183a32f27f73b8116/PyQt5-5.14.1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python3.6/tokenize.py", line 452, in open buffer = _builtin_open(filename, 'rb') FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-gzep4mr7/PyQt5/setup.py' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-gzep4mr7/PyQt5/ I also downloaded zip folder from https://www.riverbankcomputing.com/software/pyqtchart/download and run: python3 configure.py --qmake /home/oo/Qt/5.14.0/gcc_64/bin/qmake Error: Unable to import PyQt5.QtCore. Make sure PyQt5 is installed. QT screenshot:: My end goal is to run candlestick chart using pyqt5. sudo python3 -m pip install pyqt5 pyqtchart [sudo] password for oo: The directory '/home/oo/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/home/oo/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Requirement already satisfied: pyqt5 in /usr/lib/python3/dist-packages Requirement already satisfied: pyqtchart in /usr/local/lib/python3.6/dist-packages Requirement already satisfied: PyQt5-sip<13,>=12.7 in /home/oo/.local/lib/python3.6/site-packages (from pyqtchart) but still getting same error: Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import PyQt5 >>> import PyQt5.QtCore Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'PyQt5.sip' >>> | I think the initial pip install woes were due to PyQt5 switching to the manylinux2014 platform tag for the latest release (see the wheels on PyPI for 5.14.1 vs 5.14.0). Only pip versions >= 19.3 recognize this platform tag (ref), so if you happen to have an older version of pip, it would instead try to install from source. Two easy options (to avoid the source install): Update pip to the latest via pip3 install --upgrade pip Install the previous release, which used manylinux1 (pip3 install pyqt5==5.14.0) | 37 | 70 |
59,659,146 | 2020-1-9 | https://stackoverflow.com/questions/59659146/could-not-import-pillow-version-from-pil | While importing, Python (Anaconda) gives the following error: ImportError: cannot import name 'PILLOW_VERSION' from 'PIL' I tried removing pillow and then conda install but the error persists. | Pillow 7.0.0 removed PILLOW_VERSION, you should use __version__ in your own code instead. https://pillow.readthedocs.io/en/stable/deprecations.html#pillow-version-constant Edit (2020-01-16): If using torchvision, this has been fixed in v0.5.0. To fix: Require torchvision>=0.5.0 If Pillow was temporarily pinned, remove the pin Old info (2020-01-09): If using torchvision, there is a release planned this week (week 2, 2020) to fix it: https://github.com/pytorch/vision/issues/1712#issuecomment-570286349 The options are: wait for the new torchvision release use the master version of torchvision (eg. pip install -U git+https://github.com/pytorch/vision) install torchvision from a nightly, which also requires a pytorch from a nightly version or install Pillow<7 (eg. pip install "pillow<7") | 34 | 33 |
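If your own code (or a pinned dependency) still imports the removed constant, a common compatibility shim is a guarded import that falls back to the supported attribute; it uses only the names discussed above.

```python
try:
    # Pillow < 7.0.0 still exposes the deprecated constant
    from PIL import PILLOW_VERSION
except ImportError:
    # Pillow >= 7.0.0: fall back to the supported attribute
    from PIL import __version__ as PILLOW_VERSION

print(PILLOW_VERSION)
```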
59,682,542 | 2020-1-10 | https://stackoverflow.com/questions/59682542/typeerror-len-is-not-well-defined-for-symbolic-tensors-activation-3-identity | I am trying to implement a DQL model on one game of openAI gym. But it's giving me following error. TypeError: len is not well defined for symbolic Tensors. (activation_3/Identity:0) Please call x.shape rather than len(x) for shape information. Creating a gym environment: ENV_NAME = 'CartPole-v0' env = gym.make(ENV_NAME) np.random.seed(123) env.seed(123) nb_actions = env.action_space.n My model looks like this: model = Sequential() model.add(Flatten(input_shape=(1,) + env.observation_space.shape)) model.add(Dense(16)) model.add(Activation('relu')) model.add(Dense(nb_actions)) model.add(Activation('linear')) print(model.summary()) Fitting that model to DQN model from keral-rl as follows: policy = EpsGreedyQPolicy() memory = SequentialMemory(limit=50000, window_length=1) dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10, target_model_update=0.001, policy=policy) dqn.compile(Adam(lr=1e-3), metrics=['mse', 'mae']) dqn.fit(env, nb_steps=5000, visualize=False, verbose=3) The error is from this line: dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10, target_model_update=0.001, policy=policy) I am using keras-rl==0.4.2 and tensorflow==2.1.0. Based on other answers, I also tried tensorflow==2.0.0-beta0 but it doesn't solve the error. Can someone please explain to me why I am facing this error? and how to solve it? Thank you. | The reason this breaks is because, tf.Tensor TF 2.0.0 (and TF 1.15) has the __len__ overloaded and raises an exception. But TF 1.14 for example doesn't have the __len__ attribute. Therefore, anything TF 1.15+ (inclusive) breaks keras-rl (specifically here), which gives you the above error. So you got two options, Downgrade to TF 1.14 (recommended) Delete the __len__ overloading in TensorFlow source (not recommended as this can break other things) | 14 | 9 |
59,741,934 | 2020-1-14 | https://stackoverflow.com/questions/59741934/python-pandas-merge-multiple-columns-into-a-dictionary-column | I have a dataframe (df_full) like so: |cust_id|address |store_id|email |sales_channel|category| ------------------------------------------------------------------- |1234567|123 Main St|10SjtT |[email protected]|ecom |direct | |4567345|345 Main St|10SjtT |[email protected]|instore |direct | |1569457|876 Main St|51FstT |[email protected]|ecom |direct | and I would like to combine the last 4 fields into one metadata field that is a dictionary like so: |cust_id|address |metadata | ------------------------------------------------------------------------------------------------------------------- |1234567|123 Main St|{'store_id':'10SjtT', 'email':'[email protected]','sales_channel':'ecom', 'category':'direct'} | |4567345|345 Main St|{'store_id':'10SjtT', 'email':'[email protected]','sales_channel':'instore', 'category':'direct'}| |1569457|876 Main St|{'store_id':'51FstT', 'email':'[email protected]','sales_channel':'ecom', 'category':'direct'} | is that possible? I've seen a few solutions around on stack overflow but none of them address combining more than 2 fields into a dictionary field. | Use to_dict, columns = ['store_id', 'email', 'sales_channel', 'category'] df['metadata'] = df[columns].to_dict(orient='records') And if you want to drop original columns, df = df.drop(columns=columns) | 11 | 20 |
59,740,434 | 2020-1-14 | https://stackoverflow.com/questions/59740434/how-to-print-intercept-and-slope-of-a-simple-linear-regression-in-python-with-sc | I am trying to predict car prices (by machine learning) with a simple linear regression (only one independent variable). The variables are "highway miles per gallon" 0 27 1 27 2 26 3 30 4 22 .. 200 28 201 25 202 23 203 27 204 25 Name: highway-mpg, Length: 205, dtype: int64 and "price": 0 13495.0 1 16500.0 2 16500.0 3 13950.0 4 17450.0 ... 200 16845.0 201 19045.0 202 21485.0 203 22470.0 204 22625.0 Name: price, Length: 205, dtype: float64 With the following code: from sklearn.linear_model import LinearRegression x = df["highway-mpg"] y = df["price"] lm = LinearRegression() lm.fit([x],[y]) Yhat = lm.predict([x]) print(Yhat) print(lm.intercept_) print(lm.coef_) However, the intercept and slope coefficient print commands give me the following output: [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.]] Why doesn't it print the intercept and slope coefficient? The "Yhat" print command does print out the predicted values in an array properly, but somehow the other print commands do not print my desired output... | Essentially, what caused the strange looking coef_ and intercept_ was the fact that your data had 205 features and 205 targets with only 1 sample. Definitely not what you wanted! You probably want 1 feature, 205 samples, and 1 target. To do this, you need to reshape your data: from sklearn.linear_model import LinearRegression import numpy as np mpg = np.array([27, 27, 26, 30, 22, 28, 25, 23, 27, 25]).reshape(-1, 1) price = np.array([13495.0, 16500.0, 16500.0, 13950.0, 17450.0, 16845.0, 19045.0, 21485.0, 22470.0, 22625.0]) lm = LinearRegression() lm.fit(mpg, price) print(lm.intercept_) print(lm.coef_) I used the arrays there for testing, but obviously use the data from your dataframe. P.S. If you omit the resize, you get an error message like this: ValueError: Expected 2D array, got 1D array instead: array=[27 27 26 30 22 28 25 23 27 25]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. ^ It tells you what to do! | 10 | 24 |
59,739,434 | 2020-1-14 | https://stackoverflow.com/questions/59739434/how-to-force-all-strings-to-floats | I have a small dataframe, consisting of just two columns, which should have all floats in it. So, I have two fields name 'Price' and 'Score'. When I look at the data, it all looks like floats to me, but apparently something is a string. Is there some way to kick out these things that are strings, but look like floats? Or, is there a way to force everything to be float? The error occurs on the last line show here, and then nothing else works. df = pd.read_csv('C:\\my_path\\analytics.csv') print('done!') modDF = df[['Price', 'Score']].copy() modDF = modDF[:100] for i_dataset, dataset in enumerate(datasets): X, y = dataset # normalize dataset for easier parameter selection X = StandardScaler().fit_transform(X) Here is the Stack Trace: datasets = [modDF] for i_dataset, dataset in enumerate(datasets): X, y = dataset # normalize dataset for easier parameter selection X = StandardScaler().fit_transform(X) Traceback (most recent call last): File "<ipython-input-18-013c2a6bef49>", line 5, in <module> X = StandardScaler().fit_transform(X) File "C:\Users\rs\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\base.py", line 553, in fit_transform return self.fit(X, **fit_params).transform(X) File "C:\Users\rs\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 639, in fit return self.partial_fit(X, y) File "C:\Users\rs\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 663, in partial_fit force_all_finite='allow-nan') File "C:\Users\rs\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 496, in check_array array = np.asarray(array, dtype=dtype, order=order) File "C:\Users\rs\AppData\Local\Continuum\anaconda3\lib\site-packages\numpy\core\numeric.py", line 538, in asarray return array(a, dtype, copy=False, order=order) ValueError: could not convert string to float: 'Price' | You could try using pd.to_numeric like so: df = df.apply(pd.to_numeric, errors='coerce', downcast='float') Which would try to convert your data to float, and the data that is not float will be returned as Nan. Then df.dropna(how='any', axis=0, inplace=True) which just drops any rows with at least 1 Nan value in it. | 8 | 5 |
59,733,820 | 2020-1-14 | https://stackoverflow.com/questions/59733820/django-rest-framework-drf-typeerror-register-got-an-unexpected-keyword-arg | I have updated to djangorestframework==3.11.0 from older version. Now I've got this error, TypeError: register() got an unexpected keyword argument 'base_name' Traceback ... ... ... File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/abu/projects/django-example/django2x/urls.py", line 21, in <module> path('sample/', include('sample.urls')), File "/home/abu/.virtualenvs/django-example/lib/python3.6/site-packages/django/urls/conf.py", line 34, in include urlconf_module = import_module(urlconf_module) File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/abu/projects/django-example/sample/urls.py", line 6, in <module> router.register(r'musician', MusicianViewset, base_name='musician') TypeError: register() got an unexpected keyword argument 'base_name' | From the release notes of Django RestFramework and DRF 3.9 announcement they mentioned that Deprecate the Router.register base_name argument in favor of basename. #5990 Which means, the argument base_name is no longer available from DRF=3.11 onwards and use basename instead So, Change your router config as, router.register(r'musician', MusicianViewset, basename='musician') router.register(r'album', AlbumViewset, basename='album') | 34 | 76 |
59,712,892 | 2020-1-13 | https://stackoverflow.com/questions/59712892/dont-understand-keras-double-parentheses-syntax | Hi, I am new to Python and Keras and am looking at some Keras example code: model = VGG19(weights='imagenet',include_top=False) model.trainable=False x = Flatten(name='flat')(model) x = Dense(512, activation='relu', name='fc1')(x) x = Dense(512, activation='relu', name='fc2')(x) x = Dense(10, activation='softmax', name='predictions')(x) I just don't understand x=Dense(....)(x). What is the second set of parentheses for? And what is this syntax called, if I want to look it up in the Python docs? | This might be an easier-to-understand version of these chained calls: model = VGG19(weights='imagenet',include_top=False) model.trainable=False layer1 = Flatten(name='flat')(model) layer2 = Dense(512, activation='relu', name='fc1')(layer1) layer3 = Dense(512, activation='relu', name='fc2')(layer2) layer4 = Dense(10, activation='softmax', name='predictions')(layer3) which could be rewritten as: model = VGG19(weights='imagenet',include_top=False) model.trainable=False model.add( Flatten(name='flat')) model.add( Dense(512, activation='relu', name='fc1')) model.add( Dense(512, activation='relu', name='fc2')) model.add( Dense(10, activation='softmax', name='predictions')) | 7 | 5 |
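A minimal sketch of what the two sets of parentheses do, assuming tf.keras (the layer size and input shape here are made up for illustration): Dense(...) constructs a layer object, and the second set of parentheses calls that object on a tensor, which is Python's ordinary callable-object syntax (the layer class defines __call__).

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense

layer = Dense(4, activation='relu', name='fc1')  # first parentheses: build the layer object
x = tf.ones((1, 8))                              # a dummy input tensor
y = layer(x)                                     # second parentheses: apply the layer to the input
print(y.shape)                                   # (1, 4)
```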
59,655,537 | 2020-1-9 | https://stackoverflow.com/questions/59655537/property-setter-for-subclass-of-pandas-dataframe | I am trying to set up a subclass of pd.DataFrame that has two required arguments when initializing (group and timestamp_col). I want to run validation on those arguments group and timestamp_col, so I have a setter method for each of the properties. This all works until I try to set_index() and get TypeError: 'NoneType' object is not iterable. It appears no argument is being passed to my setter function in test_set_index and test_assignment_with_indexed_obj. If I add if g == None: return to my setter function, I can pass the test cases but don't think that is the proper solution. How should I implement property validation for these required arguments? Below is my class: import pandas as pd import numpy as np class HistDollarGains(pd.DataFrame): @property def _constructor(self): return HistDollarGains._internal_ctor _metadata = ["group", "timestamp_col", "_group", "_timestamp_col"] @classmethod def _internal_ctor(cls, *args, **kwargs): kwargs["group"] = None kwargs["timestamp_col"] = None return cls(*args, **kwargs) def __init__( self, data, group, timestamp_col, index=None, columns=None, dtype=None, copy=True, ): super(HistDollarGains, self).__init__( data=data, index=index, columns=columns, dtype=dtype, copy=copy ) self.group = group self.timestamp_col = timestamp_col @property def group(self): return self._group @group.setter def group(self, g): if g == None: return if isinstance(g, str): group_list = [g] else: group_list = g if not set(group_list).issubset(self.columns): raise ValueError("Data does not contain " + '[' + ', '.join(group_list) + ']') self._group = group_list @property def timestamp_col(self): return self._timestamp_col @timestamp_col.setter def timestamp_col(self, t): if t == None: return if not t in self.columns: raise ValueError("Data does not contain " + '[' + t + ']') self._timestamp_col = t Here are my test cases: import pytest import pandas as pd import numpy as np from myclass import * @pytest.fixture(scope="module") def sample(): samp = pd.DataFrame( [ {"timestamp": "2020-01-01", "group": "a", "dollar_gains": 100}, {"timestamp": "2020-01-01", "group": "b", "dollar_gains": 100}, {"timestamp": "2020-01-01", "group": "c", "dollar_gains": 110}, {"timestamp": "2020-01-01", "group": "a", "dollar_gains": 110}, {"timestamp": "2020-01-01", "group": "b", "dollar_gains": 90}, {"timestamp": "2020-01-01", "group": "d", "dollar_gains": 100}, ] ) return samp @pytest.fixture(scope="module") def sample_obj(sample): return HistDollarGains(sample, "group", "timestamp") def test_constructor_without_args(sample): with pytest.raises(TypeError): HistDollarGains(sample) def test_constructor_with_string_group(sample): hist_dg = HistDollarGains(sample, "group", "timestamp") assert hist_dg.group == ["group"] assert hist_dg.timestamp_col == "timestamp" def test_constructor_with_list_group(sample): hist_dg = HistDollarGains(sample, ["group", "timestamp"], "timestamp") def test_constructor_with_invalid_group(sample): with pytest.raises(ValueError): HistDollarGains(sample, "invalid_group", np.random.choice(sample.columns)) def test_constructor_with_invalid_timestamp(sample): with pytest.raises(ValueError): HistDollarGains(sample, np.random.choice(sample.columns), "invalid_timestamp") def test_assignment_with_indexed_obj(sample_obj): b = sample_obj.set_index(sample_obj.group + [sample_obj.timestamp_col]) def test_set_index(sample_obj): # print(isinstance(a, pd.DataFrame)) assert 
sample_obj.set_index(sample_obj.group + [sample_obj.timestamp_col]).index.names == ['group', 'timestamp'] | The set_index() method will call self.copy() internally to create a copy of your DataFrame object (see the source code here), inside which it uses your customized constructor method, _internal_ctor(), to create the new object (source). Note that self._constructor() is identical to self._internal_ctor(), which is a common internal method for nearly all pandas classes for creating new instances during operations like deep-copy or slicing. Your problem actually originates from this function: class HistDollarGains(pd.DataFrame): ... @classmethod def _internal_ctor(cls, *args, **kwargs): kwargs["group"] = None kwargs["timestamp_col"] = None return cls(*args, **kwargs) # this is equivalent to calling # HistDollarGains(data, group=None, timestamp_col=None) I guess you copied this code from the github issue. The lines kwargs["**"] = None explicitly tells the constructor to set None to both group and timestamp_col. Finally the setter/validator gets None as the new value and raise an error. Therefore, you should set an acceptable value to group and timestamp_col. @classmethod def _internal_ctor(cls, *args, **kwargs): kwargs["group"] = [] kwargs["timestamp_col"] = 'timestamp' # or whatever name that makes your validator happy return cls(*args, **kwargs) Then you can delete the if g == None: return lines in the validator. | 9 | 4 |
59,717,828 | 2020-1-13 | https://stackoverflow.com/questions/59717828/copy-type-signature-from-another-function | Imagine I have a set of functions like below. foo has a lot of arguments of various types, and bar passes all its arguments to that other function. Is there any way to make mypy understand that bar has the same type as foo without explicitly copying the whole argument list? def foo(a: int, b: float, c: str, d: bool, *e: str, f: str = "a", g: str = "b") -> str: ... def bar(*args, **kwargs): val = foo(*args, **kwargs) ... return val | There's been a lot of discussion about adding this feature here. For the straightforward case of passing all arguments you can use the recipe from this comment: F = TypeVar('F', bound=Callable[..., Any]) class copy_signature(Generic[F]): def __init__(self, target: F) -> None: ... def __call__(self, wrapped: Callable[..., Any]) -> F: ... def f(x: bool, *extra: int) -> str: ... @copy_signature(f) def test(*args, **kwargs): return f(*args, **kwargs) reveal_type(test) # Revealed type is 'def (x: bool, *extra: int) -> str' | 23 | 18 |
59,706,137 | 2020-1-12 | https://stackoverflow.com/questions/59706137/what-is-alpha-in-ridge-regression | What is the parameter alpha in ridge regression and how does it influence the trained regression? Some examples would be helpful for me :) | Ridge and Lasso regression are shrinkage (regularization) techniques, which use a tuning parameter to shrink or penalize the coefficients. When we fit a model, we ask it to learn a set of coefficients that fits the training distribution well while also generalizing to test data points. Learning those coefficients can be done in various ways, and several loss functions measure the error in the coefficients, such as LMS (least mean squares) and RSS (residual sum of squares). Now suppose we are training a model using LMS or RSS; ridge regression adds an extra term that penalizes the LMS or RSS solution towards zero. Simply written: final error to be minimized = RSS + ridge term, i.e. RSS + alpha * (Beta1^2 + Beta2^2 + ... + Betan^2), where Beta1, Beta2, ... are the coefficients for X1, X2, ... and so on. The ridge term includes alpha, which is nothing but the penalty or tuning parameter; the whole ridge term is sometimes called the shrinkage penalty. If we fit the data very well, the RSS value is very low, but the second term is close to zero only when the B1, B2, ..., Bn values are small. If these are small, then the contribution of the corresponding X1, X2, ..., Xn to the prediction is small, so the impact of a term Xi on Y (the output variable) is less significant than that of some Xj whose coefficient Bj is large. The alpha term acts as the control parameter that determines how strongly the coefficients are shrunk. If alpha is close to zero, the ridge term is very small and the final error is based on RSS alone. If alpha is too large, the impact of shrinkage grows and the coefficients B1, B2, ..., Bn tend towards zero. Choosing the right value helps the model learn the right features and generalize better. One method that helps in choosing the right value is cross-validation. | 7 | 11 |
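A minimal sketch of that effect, assuming scikit-learn and some made-up data (the true coefficients below are illustrative, not from the answer above): as alpha grows, the learned coefficients shrink towards zero.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)  # known true coefficients

for alpha in (0.01, 1.0, 100.0):
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, np.round(model.coef_, 3))  # coefficients move towards 0 as alpha increases
```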
59,704,538 | 2020-1-12 | https://stackoverflow.com/questions/59704538/what-is-a-dimensional-range-of-1-0-in-pytorch | So I'm struggling to understand some terminology about collections in Pytorch. I keep running into the same kinds of errors about the range of my tensors being incorrect, and when I try to Google for a solution often the explanations are further confusing. Here is an example: m = torch.nn.LogSoftmax(dim=1) input = torch.tensor([0.3300, 0.3937, -0.3113, -0.2880]) output = m(input) I don't see anything wrong with the above code, and I've defined my LogSoftmax to accept a 1 dimensional input. So according to my experience with other programming languages the collection [0.3300, 0.3937, -0.3113, -0.2880] is a single dimension. The above triggers the following error for m(input): IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) What does that mean? I passed in a one dimensional tensor, but then it tells me that it was expecting a range of [-1, 0], but got 1. A range of what? Why is the error comparing a dimension of 1 to [-1, 0]? What do the two numbers [-1, 0] mean? I searched for an explanation for this error, and I find things like this link which make no sense to me as a programmer: https://github.com/pytorch/pytorch/issues/5554#issuecomment-370456868 So I was able to fix the above code by adding another dimension to my tensor data. m = torch.nn.LogSoftmax(dim=1) input = torch.tensor([[-0.3300, 0.3937, -0.3113, -0.2880]]) output = m(input) So that works, but I don't understand how [-1,0] explains a nested collection. Further experiments showed that the following also works: m = torch.nn.LogSoftmax(dim=1) input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]]) output = m(input) So dim=1 means a collection of collections, but I don't understand how that means [-1, 0]. When I try using LogSoftmax(dim=2) m = torch.nn.LogSoftmax(dim=2) input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]]) output = m(input) The above gives me the following error: IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2) Confusion again that dim=2 equals [-2, 1], because where did the 1 value come from? I can fix the error above by nesting collections another level, but at this point I don't understand what values LogSoftmax is expecting. m = torch.nn.LogSoftmax(dim=2) input = torch.tensor([[[0.0, 0.1]], [[1.0, 0.1]], [[2.0, 0.1]]]) output = m(input) I am super confused by this terminology [-1, 0] and [-2, 1]? If the first value is the nested depth, then why is it negative and what could the second number mean? There is no error code associated with this error. So it's been difficult to find documentation on the subject. It appears to be an extremely common error people get confused by and nothing that I can find in the Pytorch documentation that talks specifically about it. | When specifying a tensor's dimension as an argument for a function (e.g. m = torch.nn.LogSoftmax(dim=1)) you can either use positive dimension indexing starting with 0 for the first dimension, 1 for the second etc. Alternatively, you can use negative dimension indexing to start from the last dimension to the first: -1 indicate the last dimension, -2 the second from last etc. Example: If you have a 4D tensor of dimensions b-by-c-by-h-by-w then The "batch" dimension (the first) can be accessed as either dim=0 or dim=-4. The "channel" dimension (the second) can be accessed as either dim=1 or dim=-3. 
The "height"/"vertical" dimension (the third) can be accessed as either dim=2 or dim=-2. The "width"/"horizontal" dimension (the fourth) can be accessed as either dim=3 or dim=-1. Therefore, if you have a 4D tensor, the dim argument can take values in the range [-4, 3]. In your case you have a 1D tensor, and therefore the dim argument can be either 0 or -1 (which in this degenerate case amounts to the same dimension). | 7 | 6 |
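A short sketch of positive vs. negative dim indexing, assuming PyTorch (the tensor shape is made up for illustration):

```python
import torch

t = torch.randn(2, 3, 4, 5)        # 4D tensor: (batch, channels, height, width)
print(t.sum(dim=1).shape)          # torch.Size([2, 4, 5]) - collapse the channel axis
print(t.sum(dim=-3).shape)         # torch.Size([2, 4, 5]) - same axis, counted from the end
print(t.sum(dim=-1).shape)         # torch.Size([2, 3, 4]) - the last (width) axis
```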
59,701,981 | 2020-1-12 | https://stackoverflow.com/questions/59701981/bert-tokenizer-model-download | I`m beginner.. I'm working with Bert. However, due to the security of the company network, the following code does not receive the bert model directly. tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False) model = BertForSequenceClassification.from_pretrained("bert-base-multilingual-cased", num_labels=2) So I think I have to download these files and enter the location manually. But I'm new to this, and I'm wondering if it's simple to download a format like .py from github and put it in a location. I'm currently using the bert model implemented by hugging face's pytorch, and the address of the source file I found is: https://github.com/huggingface/transformers Please let me know if the method I thought is correct, and if so, what file to get. Thanks in advance for the comment. | As described here, what you need to do are download pre_train and configs, then putting them in the same folder. Every model has a pair of links, you might want to take a look at lib code. For instance import torch from transformers import * model = BertModel.from_pretrained('/Users/yourname/workplace/berts/') with /Users/yourname/workplace/berts/ refer to your folder Below are what I found at src/transformers/configuration_bert.py there are a list of models' configs BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json", "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json", "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json", "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json", "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json", "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json", "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json", "bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json", "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json", "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json", "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json", "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json", "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json", "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json", "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json", "bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-config.json", "bert-base-japanese-whole-word-masking": 
"https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-config.json", "bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-config.json", "bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking-config.json", "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/config.json", "bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/config.json", } and at src/transformers/modeling_bert.py there are links to pre_trains BERT_PRETRAINED_MODEL_ARCHIVE_MAP = { "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin", "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-pytorch_model.bin", "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin", "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-pytorch_model.bin", "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-pytorch_model.bin", "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-pytorch_model.bin", "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin", "bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-pytorch_model.bin", "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin", "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-pytorch_model.bin", "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin", "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-pytorch_model.bin", "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-pytorch_model.bin", "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin", "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin", "bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-pytorch_model.bin", "bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-pytorch_model.bin", "bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-pytorch_model.bin", "bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking-pytorch_model.bin", "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/pytorch_model.bin", 
"bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/pytorch_model.bin", } | 9 | 14 |
59,702,410 | 2020-1-12 | https://stackoverflow.com/questions/59702410/cython-returns-0-for-expression-that-should-evaluate-to-0-5 | For some reason, Cython is returning 0 on a math expression that should evaluate to 0.5: print(2 ** (-1)) # prints 0 Oddly enough, mix variables in and it'll work as expected: i = 1 print(2 ** (-i)) # prints 0.5 Vanilla CPython returns 0.5 for both cases. I'm compiling for 37m-x86_64-linux-gnu, and language_level is set to 3. What is this witchcraft? | It's because it's using C ints rather than Python integers so it matches C behaviour rather than Python behaviour. I'm relatively sure this used to be documented as a limitation somewhere but I can't find it now. If you want to report it as a bug then go to https://github.com/cython/cython/issues, but I suspect this is a deliberate trade-off of speed for compatibility. The code gets translated to __Pyx_pow_long(2, -1L) where __Pyx_pow_long is a function of type static CYTHON_INLINE long __Pyx_pow_long(long b, long e). The easiest way to fix it is to change one/both of the numbers to be a floating point number print(2. ** (-1)) As a general comment on the design choice: people from the C world generally expect int operator int to return an int, and this option will be fastest. Python had tried to do this in the past with the Python 2 division behaviour (but inconsistently - power always returned a floating point number). Cython generally tries to follow Python behaviour. However, a lot of people are using it for speed so they also try to fall back to quick, C-like operations especially when people specify types (since those people want speed). I think what's happened here is that it's been able to infer the types automatically, and so defaulted to C behaviour. I suspect ideally it should distinguish between specified types and types that it's inferred. However, it's also probably too late to start changing that. | 8 | 4 |
59,696,421 | 2020-1-11 | https://stackoverflow.com/questions/59696421/why-we-need-asyncio-synchronization-primitives-when-to-use-these | According to asyncio synchronization primitives, there are synchronization methods. I am getting confused about why we need synchronization in asyncio? I mean, asyncio is asynchronous. Is it meaningful to add something synchronous in asynchronization? | Synchronization primitives don't make your code synchronous, they make coroutines in your code synchronized. Few examples: You may want to start/continue some coroutine only when another coroutine allows it (asyncio.Event) You may want some part of your code to be executed only by single coroutine at the same time and other to wait for their turn (asyncio.Lock) You may want some part of your code to be executed only by limited number on coroutines at the same time (asyncio.Semaphore) Take a look at a practical example of using asyncio.Semaphore. | 9 | 12 |
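As an illustration, here is a minimal sketch (names and numbers are made up) of the asyncio.Semaphore case, limiting how many coroutines may run a section at the same time:

```python
import asyncio

async def worker(n, sem):
    async with sem:                      # only a limited number of coroutines enter this block at once
        print(f"worker {n} running")
        await asyncio.sleep(1)

async def main():
    sem = asyncio.Semaphore(2)           # allow at most 2 concurrent workers
    await asyncio.gather(*(worker(i, sem) for i in range(5)))

asyncio.run(main())
```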
59,693,174 | 2020-1-11 | https://stackoverflow.com/questions/59693174/attributeerror-posixpath-object-has-no-attribute-path | I have a python script that I'm trying to execute but i'm not sure how it's meant to be executed. I don't program in Python so i'm unfamiliar with the language. Here is the link to the script i'm trying to use. Also, a link the configuration it's using if you wish to see it. All it seems to do for what's relevant here, however, is set my path which I know is correct since other scripts (not linked here) work as expected with the configuration in that file. Having a look at the script, I believe that the script is meant to be ran with the command line arguments: view, new, init. Thus, I ran the following in my terminal $ lectures.py new But I get the following traceback Traceback (most recent call last): File "/usr/bin/lectures.py", line 156, in <module> lectures = Lectures(Path.cwd()) File "/usr/bin/lectures.py", line 60, in __init__ self.root = course.path AttributeError: 'PosixPath' object has no attribute 'path' Furthermore, my python version $ python --version Python 3.8.1 EDIT: I wanted to add the reference as well for what I am trying to follow | Going through your code, I think you might mean: self.root = course at that line. Path.cwd() returns: ... the current working directory, that is, the directory from where you run the script. that is, either a WindowsPath() or a PosixPath object. I believe it is PosixPath for you, and you can verify with: import os print(os.name) # posix -> Linux # nt -> Windows This has no attribute path, and this is what your Interpreter tells you. | 13 | 4 |
59,690,457 | 2020-1-11 | https://stackoverflow.com/questions/59690457/whats-the-difference-between-auto-remove-and-remove-in-docker-sdk-for-python | I'm learning to use the docker SDK. I understand that containers need to be removed after they run, otherwise they require pruning later on. I see that there are two boolean flags in client.containers.run: auto_remove (bool) – enable auto-removal of the container on daemon side when the container's process exits. remove (bool) – Remove the container when it has finished running. Default: False What's the difference? If auto-remove is on the daemon side, which side is remove on? Angel? Which side should I join?? ref: https://docker-py.readthedocs.io/en/stable/containers.html | It's in fact exactly that: AutoRemove is one of the parameters to the "create a container" Docker API call, but the remove option signals the client library to remove the container after it exits. Setting auto_remove: True is probably more robust (the container will still clean itself up if the coordinator process crashes) but if the container fails with that option set then containers.run() won't return its stderr. If you set detach: True to get back a Container object, then you can't use remove: True (it gets converted to auto_remove: True) but your code can container.remove() it after it's exited. | 11 | 8 |
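A short sketch of the two options, assuming the Docker SDK for Python (docker-py), a local daemon, and the alpine image being available (the image and commands are just placeholders):

```python
import docker

client = docker.from_env()

# remove=True: the client waits for the container to exit, returns its output,
# and then asks the daemon to delete it.
output = client.containers.run("alpine", "echo hello", remove=True)
print(output)  # b'hello\n'

# auto_remove=True (with detach=True): the daemon deletes the container on its own
# as soon as its process exits; you get a Container object back immediately.
container = client.containers.run("alpine", "sleep 5", detach=True, auto_remove=True)
print(container.status)
```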
59,689,524 | 2020-1-10 | https://stackoverflow.com/questions/59689524/how-to-add-variable-type-annotation-for-what-goes-into-a-queue | e.g. from queue import Queue q: Queue = Queue() q.put("abc") This is okay. Now I want to specify the types that go into the queue. from queue import Queue q: Queue[str] = Queue() q.put("abc") This gets "TypeError: 'type' object is not subscriptable" | This isn't currently supported by the typing functionality. See this discussion of Python core developers: https://bugs.python.org/issue33315 It also suggests a current workaround, to put the annotation in quotes: q: "Queue[str]" = Queue() The workaround is that an annotation can be any type (no pun intended). A string is a perfectly acceptable type, but if you want to use non-string annotations, then they have to "behave" properly. You can annotate that something is a Queue because, again, an annotation can have any type. But the Queue class itself isn't subscriptable, so Queue[int] doesn't work. | 15 | 12 |
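Two other ways around it that I believe work, sketched below: postponed evaluation of annotations (PEP 563) keeps the annotation from being evaluated at runtime on Python 3.7+, and on Python 3.9+ queue.Queue is subscriptable directly thanks to PEP 585, so no workaround is needed at all.

```python
# Option 1 (Python 3.7+): postpone evaluation of annotations
from __future__ import annotations
from queue import Queue

q: Queue[str] = Queue()
q.put("abc")

# Option 2 (Python 3.9+): standard-library classes support subscripting at runtime,
# so Queue[str] works directly without the __future__ import.
```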
59,638,155 | 2020-1-8 | https://stackoverflow.com/questions/59638155/how-to-set-0-to-white-at-an-uneven-colormap | I have an uneven colormap and I want the 0 to be white. All negative colors have to be bluish and all positive colors have to be reddish. My current attempt displays the 0 bluish and the 0.7 white. Is there any way to set the 0 to white? import numpy as np import matplotlib.colors as colors from matplotlib import pyplot as m bounds_min = np.arange(-2, 0, 0.1) bounds_max = np.arange(0, 4.1, 0.1) bounds = np.concatenate((bounds_min, bounds_max), axis=None) norm = colors.BoundaryNorm(boundaries=bounds, ncolors=256) # I found this on the internet and thought this would solve my problem. But it doesn't... m.pcolormesh(xx, yy, interpolated_grid_values, norm=norm, cmap='RdBu_r') | The other answer makes it a little more complicated than it needs to be. In order to have the middle point of the colormap at 0, use a DivergingNorm with vcenter=0. import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import DivergingNorm x, y = np.meshgrid(np.linspace(0,50,51), np.linspace(0,50,51)) z = np.linspace(-2,4,50*50).reshape(50,50) norm = DivergingNorm(vmin=z.min(), vcenter=0, vmax=z.max()) pc = plt.pcolormesh(x,y,z, norm=norm, cmap="RdBu_r") plt.colorbar(pc) plt.show() Note: From matplotlib 3.2 onwards DivergingNorm will be renamed to TwoSlopeNorm | 8 | 12 |
59,678,780 | 2020-1-10 | https://stackoverflow.com/questions/59678780/show-extra-columns-when-hovering-in-a-scatter-plot-with-hvplot | I'm trying to add a label or hover column to the points in a scatter plot, but to no avail. For use as example data: import pandas as pd import holoviews as hv import hvplot.pandas df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t') df.query("year == 1966").hvplot.scatter(x="gdp", y="unem") results in the picture below. If I hover over this item I cannot see, which country is represented by the dot (which makes it rather useless). I could use the additional keyword color="country"in the scatterplot call. This would lead to an additional legend (which could be turned off) and the value country: countryname is added to the hover field. Is there an option that just adds the column to my hover without adding a legend and changing the color? | You can use keyword hover_cols to add additional columns to your hover. Documentation: https://hvplot.holoviz.org/user_guide/Customization.html hover_cols (default=[]): list or str Additional columns to add to the hover tool or 'all' which will includes all columns (including indexes if use_index is True). So in your example if you would like to add specific columns to your hover: # use keyword hover_cols to add additional columns when hovering df.hvplot.scatter(x="gdp", y="unem", hover_cols=['country', 'year']) Or if you want to include all additional columns to your hover: df.hvplot.scatter(x="gdp", y="unem", hover_cols='all') Or if you want to include all columns to hover, but not the index: df.hvplot.scatter(x="gdp", y="unem", hover_cols='all', use_index=False) | 11 | 16 |
59,669,715 | 2020-1-9 | https://stackoverflow.com/questions/59669715/fastest-way-to-find-the-rgb-pixel-color-count-of-image | I have a use case where i have to find the consecutive rgb pixel color count of each frame of live video after searching i found a piece of code which does the same thing but performance wise it take around ~ 3 sec to give me output but in my case i have to do this calculation as fast as possible may be 25 frames in 1 seconds. Can someone help me to figure out how to do this by refactoring the below code from PIL import Image import timeit starttime = timeit.default_timer() with Image.open("netflix.png") as image: color_count = {} width, height = image.size print(width,height) rgb_image = image.convert('RGB') for x in range(width): for y in range(height): rgb = rgb_image.getpixel((x, y)) if rgb in color_count: color_count[rgb] += 1 else: color_count[rgb] = 1 print('Pixel Count per Unique Color:') print('-' * 30) print(len(color_count.items())) print("The time difference is :", timeit.default_timer() - starttime) output: Pixel Count per Unique Color: 130869 The time difference is : 3.9660612 | You need to use Numpy, or OpenCV, for fast image processing in Python. I made a 9-colour version of Paddington: from PIL import Image import numpy as np # Open Paddington and make sure he is RGB - not palette im = Image.open('paddington.png').convert('RGB') # Make into Numpy array na = np.array(im) # Arrange all pixels into a tall column of 3 RGB values and find unique rows (colours) colours, counts = np.unique(na.reshape(-1,3), axis=0, return_counts=1) print(colours) print(counts) Results [[ 14 48 84] [ 19 21 30] [ 33 108 163] [ 33 152 190] [ 72 58 58] [ 96 154 210] [180 89 64] [205 210 200] [208 151 99]] [20389 40269 12820 1488 17185 25371 17050 16396 9032] That means there are 20,389 pixels of RGB(14,48,84), and so on. That takes 125ms on my Mac for a 400x400 image, which will give you 8 fps, so you better have at least 4 CPU cores and use all of them to get 25+ fps. Update I think you can actually go significantly faster than this. If you take the dot-product of each of the pixels with [1,256,65536], you will get a single 24-bit number for each pixel, rather than 3 8-bit numbers. It is then a lot faster to find the unique values. That looks like this: # Open Paddington and make sure he is RGB - not palette im = Image.open('paddington.png').convert('RGB') # Make into Numpy array na = np.array(im) # Make a single 24-bit number for each pixel f = np.dot(na.astype(np.uint32),[1,256,65536]) nColours = len(np.unique(f)) # prints 9 That takes 4ms rather than 125ms on my Mac :-) Keywords: Python, Numpy, PIL/Pillow, image processing, count unique colours, count colors. | 7 | 17 |
59,662,920 | 2020-1-9 | https://stackoverflow.com/questions/59662920/how-to-get-the-mode-of-distribution-in-scipy-stats | The scipy.stats library has functions to find the mean and median of a fitted distribution but not the mode. If I have the parameters of a distribution after fitting to data, how can I find the mode of the fitted distribution? | If I don't get you wrong, you want to find the mode of fitted distributions instead of the mode of given data. Basically, we can do it with the following 3 steps. Step 1: generate a dataset from a distribution from scipy import stats from scipy.optimize import minimize import numpy as np # generate normal data with 0 mean and 1 variance data = stats.norm.rvs(loc= 0,scale = 1,size = 100) data[0:5] Output: array([1.76405235, 0.40015721, 0.97873798, 2.2408932 , 1.86755799]) Step 2: fit the parameters # fit the parameters of the norm distribution params = stats.norm.fit(data) params Output: (0.059808015534485, 1.0078822447165796) Note that there are 2 parameters for stats.norm, i.e. loc and scale. For different dists in scipy.stats, the parameters are different. I think it's convenient to store the parameters in a tuple and then unpack them in the next step. Step 3: get the mode (maximum of your density function) of the fitted distribution # continuous case def your_density(x): return -stats.norm.pdf(x,*params) minimize(your_density,0).x Output: 0.05980794 Note that a normal distribution has its mode equal to its mean, which is why the result matches the fitted loc in this example. One more thing is that scipy treats continuous dists and discrete dists differently (they have different parent classes); you can do the same thing with the following code on discrete dists. ## discrete dist, example for poisson x = np.arange(0,100) # the range of x should be specified x[stats.poisson.pmf(x,mu = 2).argmax()] # find the x value to maximize pmf Out: 1 You can try it with your own data and distributions! | 9 | 7 |
59,666,138 | 2020-1-9 | https://stackoverflow.com/questions/59666138/sklearn-roc-auc-score-with-multi-class-ovr-should-have-none-average-available | I'm trying to compute the AUC score for a multiclass problem using sklearn's roc_auc_score() function. I have a prediction matrix of shape [n_samples, n_classes] and a ground truth vector of shape [n_samples], named np_pred and np_label respectively. What I'm trying to achieve is the set of AUC scores, one for each class that I have. To do so I would like to use the average parameter option None and the multi_class parameter set to "ovr", but if I run roc_auc_score(y_score=np_pred, y_true=np_label, multi_class="ovr",average=None) I get back ValueError: average must be one of ('macro', 'weighted') for multiclass problems This error is expected from the sklearn function in the multiclass case; but if you take a look at the roc_auc_score function source code, you can see that if the multi_class parameter is set to "ovr", and the average is one of the accepted ones, the multiclass case is treated as a multilabel one and the internal multilabel function accepts None as the average parameter. So, by looking at the code, it seems that I should be able to run a multiclass case with a None average in a One vs Rest setting, but the ifs in the source code do not allow such a combination. Am I wrong? In case I'm wrong, from a theoretical point of view should I fake a multilabel case just to have the different AUCs for the different classes, or should I write my own function that cycles over the different classes and outputs the AUCs? Thanks | As you already know, right now sklearn multiclass ROC AUC only handles the macro and weighted averages. But it can be implemented by hand so that it returns the score for each class individually. Theoretically speaking, you could implement OVR and calculate per-class roc_auc_score, as: roc = {label: [] for label in multi_class_series.unique()} for label in multi_class_series.unique(): selected_classifier.fit(train_set_dataframe, train_class == label) predictions_proba = selected_classifier.predict_proba(test_set_dataframe) roc[label].append(roc_auc_score(test_class == label, predictions_proba[:, 1])) | 15 | 7 |
59,672,062 | 2020-1-9 | https://stackoverflow.com/questions/59672062/elegant-way-in-python-to-make-sure-a-string-is-suitable-as-a-filename | I want to use a user-provided string as a filename for exporting, but have to make sure that the string is permissible on my system as a filename. From my side it would be OK to replace any forbidden character with e.g. '_'. Here I found a list of forbidden characters for filenames. It should be easy enough to use the str.replace() function, I was just wondering if there is already something out there that does that, potentially even taking into account what OS I am on. | pathvalidate is a Python library to sanitize/validate a string such as filenames/file-paths/etc. This library provides both utilities for validation of paths: import sys from pathvalidate import ValidationError, validate_filename try: validate_filename("fi:l*e/p\"a?t>h|.t<xt") except ValidationError as e: print("{}\n".format(e), file=sys.stderr) And utilities for sanitizing paths: from pathvalidate import sanitize_filename fname = "fi:l*e/p\"a?t>h|.t<xt" print("{} -> {}".format(fname, sanitize_filename(fname))) | 10 | 10 |
59,669,413 | 2020-1-9 | https://stackoverflow.com/questions/59669413/what-is-the-canonical-way-to-split-tf-dataset-into-test-and-validation-subsets | Problem I was following a Tensorflow 2 tutorial on how to load images with pure Tensorflow, because it is supposed to be faster than with Keras. The tutorial ends before showing how to split the resulting dataset (~tf.Dataset) into a train and validation dataset. I checked the reference for tf.Dataset and it does not contain a split() method. I tried slicing it manually but tf.Dataset neither contains a size() nor a length() method, so I don't see how I could slice it myself. I can't use the validation_split argument of Model.fit() because I need to augment the training dataset but not the validation dataset. Question What is the intended way to split a tf.Dataset or should I use a different workflow where I won't have to do this? Example Code (from the tutorial) BATCH_SIZE = 32 IMG_HEIGHT = 224 IMG_WIDTH = 224 list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*')) def get_label(file_path): # convert the path to a list of path components parts = tf.strings.split(file_path, os.path.sep) # The second to last is the class-directory return parts[-2] == CLASS_NAMES def decode_img(img): # convert the compressed string to a 3D uint8 tensor img = tf.image.decode_jpeg(img, channels=3) # Use `convert_image_dtype` to convert to floats in the [0,1] range. img = tf.image.convert_image_dtype(img, tf.float32) # resize the image to the desired size. return tf.image.resize(img, [IMG_WIDTH, IMG_HEIGHT]) def process_path(file_path): label = get_label(file_path) # load the raw data from the file as a string img = tf.io.read_file(file_path) img = decode_img(img) return img, label labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE) #... #... I can either split list_ds (list of files) or labeled_ds (list of images and labels), but how? | I don't think there's a canonical way (typically, data is being split e.g. in separate directories). But here's a recipe that will let you do it dynamically: # Caveat: cache list_ds, otherwise it will perform the directory listing twice. ds = list_ds.cache() # Add some indices. ds = ds.enumerate() # Do a rougly 70-30 split. train_list_ds = ds.filter(lambda i, data: i % 10 < 7) test_list_ds = ds.filter(lambda i, data: i % 10 >= 7) # Drop indices. train_list_ds = train_list_ds.map(lambda i, data: data) test_list_ds = test_list_ds.map(lambda i, data: data) | 8 | 14 |
59,662,533 | 2020-1-9 | https://stackoverflow.com/questions/59662533/argparse-append-action-with-default-value-only-if-argument-doesnt-appear | I'm parsing CLI arguments in my program with the argparse library. I would like to parse an argument that can repeat, with the following behaviour: if the argument appears at least once, its values are stored in a list, if the argument doesn't appear, the value is some default list. I have the following code so far: import argparse ap = argparse.ArgumentParser(description="Change channel colours.") ap.add_argument('-c', '--channel', action='append', default=['avx', 'fbx']) print(ap.parse_known_args(['-c', 'iasdf', '-c', 'fdas'])) print(ap.parse_known_args()) This appropriately sets a default list, however it doesn't start with an empty list when the argument appears. In other words, the second print statement prints the correct value (the default list), but the first one prints ['avx', 'fbx', 'iasdf', 'fdas'] instead of ['iasdf', 'fdas'] Is there a way in argparse to do what I want without doing something like if len(args.channel) > 2: args.channel = args.channel[2:] after the fact? | There's a bug/issue discussing this behavior. I wrote several posts to that. https://bugs.python.org/issue16399 argparse: append action with default list adds to list instead of overriding For now the only change is in documentation, not in behavior. All defaults are placed in the namespace at the start of parsing. For ordinary actions, user values overwrite the default. But in the append case, they are just added to what's there already. It doesn't try to distinguish between values placed by the default, and previous user values. I think the simplest solution is to leave the default as is, and check after parsing for None or empty list (I don't recall which), and insert your default. You don't get extra points for doing all the parsing in argparse. A bit of post parsing processing is quite ok. | 14 | 15 |
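A small sketch of that post-parsing fix, assuming the channel flag and default list from the question: with action='append' and no default, the attribute is None when the flag never appears, so the default can be filled in afterwards.

```python
import argparse

ap = argparse.ArgumentParser(description="Change channel colours.")
ap.add_argument('-c', '--channel', action='append')   # note: no default here

args = ap.parse_args(['-c', 'iasdf', '-c', 'fdas'])
if args.channel is None:           # the flag never appeared on the command line
    args.channel = ['avx', 'fbx']  # fall back to the default list
print(args.channel)                # ['iasdf', 'fdas']
```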
59,661,074 | 2020-1-9 | https://stackoverflow.com/questions/59661074/holoviews-charts-sharing-axis-when-combined-and-outputted | I'm using Holoviews to construct a dashboard of charts. Some of these charts have percentages in the y axis where as others have sums/counts etc. When I try to output all the charts I have created to a html file, all the charts change their y axis to match the axis of the first chart of my chart list. For example: Chart 1 is a sum, values go from 0 to 1000 Chart 2 is a % Chart 3 is a % when I combine these charts in holoviews using: Charts = Chart 1 + Chart 2 + Chart 3 The y axis of charts 2 and 3 become the same as chart 1. Does anyone know why this is happening and how I can fix it so all the charts keep their individual axis pertinent to what they are trying to represent. Thank you! | This happens when the y-axes have the same name. You need to use option axiswise=True if you want every plot to get its own independent x-axis and y-axis. There's a short reference to axiswise in the holoviews FAQ: https://www.holoviews.org/FAQ.html Here's a code example that I've checked and works: # import libraries etc. import numpy as np import pandas as pd import holoviews as hv from holoviews import opts hv.extension('bokeh') # create some sample data df1 = pd.DataFrame({ 'x': np.random.rand(10), 'y': np.random.rand(10), }) df2 = pd.DataFrame({ 'x': np.random.rand(10) * 10, 'y': np.random.rand(10) * 10, }) # set axiswise=True so that every plot gets its own independent x- and y-axis plot1 = hv.Scatter(df1).opts(axiswise=True) plot2 = hv.Scatter(df2).opts(axiswise=True) plot1 + plot2 Or alternatively you can do: plot1 = hv.Scatter(df1) plot2 = hv.Scatter(df2) (plot1 + plot2).opts(opts.Scatter(axiswise=True)) If this doesn't work when you try my code example, you may have to upgrade to the latest version of holoviews. This can be done as follows: Install the latest git versions of holoviews, hvplot, panel, datashader and param | 13 | 10 |
59,661,904 | 2020-1-9 | https://stackoverflow.com/questions/59661904/what-does-equal-do-in-f-strings-inside-the-expression-curly-brackets | The usage of {} in Python f-strings is well known to execute pieces of code and give the result in string format. However, what does the '=' at the end of the expression mean? log_file = open("log_aug_19.txt", "w") console_error = '...stuff...' # the real code generates it with regex log_file.write(f'{console_error=}') | This is actually a brand-new feature as of Python 3.8. Added an = specifier to f-strings. An f-string such as f'{expr=}' will expand to the text of the expression, an equal sign, then the representation of the evaluated expression. Essentially, it facilitates the frequent use-case of print-debugging, so, whereas we would normally have to write: f"some_var={some_var}" we can now write: f"{some_var=}" So, as a demonstration, using a shiny-new Python 3.8.0 REPL: >>> print(f"{foo=}") foo=42 >>> | 81 | 116 |
59,656,759 | 2020-1-9 | https://stackoverflow.com/questions/59656759/what-is-a-best-way-to-intersect-multiple-arrays-with-numpy-array | Suppose I have an example of numpy array: import numpy as np X = np.array([2,5,0,4,3,1]) And I also have a list of arrays, like: A = [np.array([-2,0,2]), np.array([0,1,2,3,4,5]), np.array([2,5,4,6])] I want to leave only these items of each list that are also in X. I expect also to do it in a most efficient/common way. Solution I have tried so far: Sort X using X.sort(). Find locations of items of each array in X using: locations = [np.searchsorted(X, n) for n in A] Leave only proper ones: masks = [X[locations[i]] == A[i] for i in range(len(A))] result = [A[i][masks[i]] for i in range(len(A))] But it doesn't work because locations of third array is out of bounds: locations = [array([0, 0, 2], dtype=int64), array([0, 1, 2, 3, 4, 5], dtype=int64), array([2, 5, 4, 6], dtype=int64)] How to solve this issue? Update I ended up with idx[idx==len(Xs)] = 0 solution. I've also noticed two different approaches posted between the answers: transforming X into set vs np.sort. Both of them has plusses and minuses: set operations uses iterations which is quite slow in compare with numpy methods; however np.searchsorted speed increases logarithmically unlike acceses of set items which is instant. That why I decided to compare performance using data with huge sizes, especially 1 million items for X, A[0], A[1], A[2]. | One idea would be less compute and minimal work when looping. So, here's one with those in mind - a = np.concatenate(A) m = np.isin(a,X) l = np.array(list(map(len,A))) a_m = a[m] cut_idx = np.r_[0,l.cumsum()] l_m = np.add.reduceat(m,cut_idx[:-1]) cl_m = np.r_[0,l_m.cumsum()] out = [a_m[i:j] for (i,j) in zip(cl_m[:-1],cl_m[1:])] Alternative #1 : We can also use np.searchsorted to get the isin mask, like so - Xs = np.sort(X) idx = np.searchsorted(Xs,a) idx[idx==len(Xs)] = 0 m = Xs[idx]==a Another way with np.intersect1d If you are looking for the most common/elegant one, think it would be with np.intersect1d - In [43]: [np.intersect1d(X,A_i) for A_i in A] Out[43]: [array([0, 2]), array([0, 1, 2, 3, 4, 5]), array([2, 4, 5])] Solving your issue You can also solve your out-of-bounds issue, with a simple fix - for l in locations: l[l==len(X)]=0 | 7 | 2 |
59,652,882 | 2020-1-8 | https://stackoverflow.com/questions/59652882/comparing-lists-in-two-columns-row-wise-efficiently | When having a Pandas DataFrame like this: import pandas as pd import numpy as np df = pd.DataFrame({'today': [['a', 'b', 'c'], ['a', 'b'], ['b']], 'yesterday': [['a', 'b'], ['a'], ['a']]}) today yesterday 0 ['a', 'b', 'c'] ['a', 'b'] 1 ['a', 'b'] ['a'] 2 ['b'] ['a'] ... etc But with about 100 000 entries, I am looking to find the additions and removals of those lists in the two columns on a row-wise basis. It is comparable to this question: Pandas: How to Compare Columns of Lists Row-wise in a DataFrame with Pandas (not for loop)? but I am looking at the differences, and Pandas.apply method seems not to be that fast for such many entries. This is the code that I am currently using. Pandas.apply with numpy's setdiff1d method: additions = df.apply(lambda row: np.setdiff1d(row.today, row.yesterday), axis=1) removals = df.apply(lambda row: np.setdiff1d(row.yesterday, row.today), axis=1) This works fine, however it takes about a minute for 120 000 entries. So is there a faster way to accomplish this? | Not sure about performance, but at the lack of a better solution this might apply: temp = df[['today', 'yesterday']].applymap(set) removals = temp.diff(periods=1, axis=1).dropna(axis=1) additions = temp.diff(periods=-1, axis=1).dropna(axis=1) Removals: yesterday 0 {} 1 {} 2 {a} Additions: today 0 {c} 1 {b} 2 {b} | 17 | 15 |
59,645,496 | 2020-1-8 | https://stackoverflow.com/questions/59645496/how-to-perform-a-join-in-salesforce-soql-simple-salesforce-python-library | I am using simpleSalesforce library for python to query SalesForce. I am looking at two different object in SalesForce: Account and Opportunity (parent-child). There is an accountId inside the opportunity object. I am trying to perform an inner join between the two and select the results (fields from both objects). a normal SQL statement would look like this: SELECT acc.Name, opp.StageName FROM Account AS acc JOIN Opportunity AS opp ON acc.Id = opp.AccountId I am not sure how to translate this kind of query into SOQL. | Salesforce doesn't allow arbitrary joins. You must write relationship queries to traverse predefined relationships in the Salesforce schema. Here, you'd do something like SELECT Name, (SELECT StageName FROM Opportunities) FROM Account No explicit join logic is required, or indeed permitted. Note too that your return values will be structured, nested JSON objects - Salesforce does not return flat rows like a SQL query would. | 12 | 7 |
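A sketch of how that relationship query might be run through simple_salesforce (the credentials are placeholders, and the nested child records come back as JSON rather than flat rows):

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password",
                security_token="token")  # placeholder credentials

result = sf.query("SELECT Name, (SELECT StageName FROM Opportunities) FROM Account")
for account in result["records"]:
    opportunities = account.get("Opportunities") or {"records": []}  # None if no children
    for opp in opportunities["records"]:
        print(account["Name"], opp["StageName"])
```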
59,647,765 | 2020-1-8 | https://stackoverflow.com/questions/59647765/how-to-obtain-a-list-of-all-markers-in-matplotlib | The diagrams I am generating have many lines, and I want to automatically use colors and markers to distinguish them. I tried this: for i,studyDframeTuple in enumerate(studyDframeTuples): time = studyDframeTuple[1]['time'] error = studyDframeTuple[1]['Linf velocity error'] caseName = studyDirs[studyDframeTuple[0]] ax.plot(time, error, marker = i % 12, label=caseName) Which circulates marker over (0,11). This kind of works, because for some reason marker < 12. When I use marker = i % 20, I get an error that makerstyle 12 is unknown. This is an example of the diagram I'm generating, it's not pretty, it's only used for checking test results: The diagrams are resulting from tests with varying parameters, hence the need to iterate over all available colors, line styles and markers, to make sure that when I have 100 lines on a diagram, I can distinguish the ones that belong to exploded solutions (values like 1e15 on this plot). How can I put all markers in matplotib in a list and iterate over them? Edit: I hacked a list of my own like this mStyles = [".",",","o","v","^","<",">","1","2","3","4","8","s","p","P","*","h","H","+","x","X","D","d","|","_",0,1,2,3,4,5,6,7,8,9,10,11 ] But what when this changes? Can I obtain this list programmatically from matplotlib? | 12 doesn't exist as marker value. You can have a dict of all existing markers using this : from matplotlib.lines import Line2D print(Line2D.markers) Output: {'.': 'point', ',': 'pixel', 'o': 'circle', 'v': 'triangle_down', '^': 'triangle_up', '<': 'triangle_left', '>': 'triangle_right', '1': 'tri_down', '2': 'tri_up', '3': 'tri_left', '4': 'tri_right', '8': 'octagon', 's': 'square', 'p': 'pentagon', '*': 'star', 'h': 'hexagon1', 'H': 'hexagon2', '+': 'plus', 'x': 'x', 'D': 'diamond', 'd': 'thin_diamond', '|': 'vline', '_':'hline', 'P': 'plus_filled', 'X': 'x_filled', 0: 'tickleft', 1: 'tickright', 2: 'tickup', 3:'tickdown', 4: 'caretleft', 5: 'caretright', 6: 'caretup', 7: 'caretdown', 8: 'caretleftbase', 9: 'caretrightbase', 10: 'caretupbase', 11: 'caretdownbase', 'None': 'nothing', None: 'nothing', ' ': 'nothing', '': 'nothing'} | 10 | 22 |
59,643,874 | 2020-1-8 | https://stackoverflow.com/questions/59643874/aws-cdk-error-when-deploying-redis-elasticache-subnet-group-belongs-to-a-diffe | Summary I am trying to deploy a Redis ElastiCache Cluster on AWS using CDK. I want the cluster to be within a VPC for security reasons. My code (see supra) defines a VPC, a security group, a cache subnet group (linked to vpc private subnets) and the cache cluster (linked to both cache subnet group and the security group). With cdk deploy, the deployment goes well until I receive this error: ACL_redis (ACLredis) Subnet group [default] belongs to a different VPC [vpc-326ce55b] than [vpc-0c45b593f3a5fdc4d] (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: 901398f4-c355-418d-921b-65e6c52dfe3a) What I tried While disabling the rollback, it appears that the cache cluster is created in the default VPC of the region rather than the VPC defined within my stack. I do not understand why Cloud Formation is doing that, as both the security group and the cache subnet group are linked to the stack's VPC. There is no reference to the region default VPC at all. Some code Here is the CDK code from aws_cdk import ( core, aws_stepfunctions, aws_lambda, aws_stepfunctions_tasks, aws_sqs, aws_elasticache, aws_ec2, ) PROJECT_CODE = 'ACL' class AclAwsCdkLearningStack(core.Stack): def __init__(self, scope: core.Construct, id: str, **kwargs) -> None: super().__init__(scope, id, **kwargs) vpc = aws_ec2.Vpc(self, f"{PROJECT_CODE}_vpc", cidr="10.0.0.0/16" ) security_group = aws_ec2.SecurityGroup( scope=self, id=f"{PROJECT_CODE}_security_group", vpc=vpc, ) private_subnets_ids = [ps.subnet_id for ps in vpc.private_subnets] cache_subnet_group = aws_elasticache.CfnSubnetGroup( scope=self, id=f"{PROJECT_CODE}_cache_subnet_group", subnet_ids=private_subnets_ids, # todo: add list of subnet ids here description="subnet group for redis", ) redis_cluster = aws_elasticache.CfnCacheCluster( scope=self, id=f"{PROJECT_CODE}_redis", engine="redis", cache_node_type="cache.t2.small", num_cache_nodes=1, cache_subnet_group_name=cache_subnet_group.cache_subnet_group_name, vpc_security_group_ids=[security_group.security_group_id], ) redis_cluster.add_depends_on(cache_subnet_group) Here is the resulting JSON CloudFormation code: { "Resources": { "ACLvpcAC1CD0C2": { "Type": "AWS::EC2::VPC", "Properties": { "CidrBlock": "10.0.0.0/16", "EnableDnsHostnames": true, "EnableDnsSupport": true, "InstanceTenancy": "default", "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/Resource" } }, "ACLvpcPublicSubnet1SubnetAB5536F8": { "Type": "AWS::EC2::Subnet", "Properties": { "CidrBlock": "10.0.0.0/19", "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "AvailabilityZone": "eu-west-3a", "MapPublicIpOnLaunch": true, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1" }, { "Key": "aws-cdk:subnet-name", "Value": "Public" }, { "Key": "aws-cdk:subnet-type", "Value": "Public" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1/Subnet" } }, "ACLvpcPublicSubnet1RouteTable973DCC99": { "Type": "AWS::EC2::RouteTable", "Properties": { "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1/RouteTable" } }, "ACLvpcPublicSubnet1RouteTableAssociation07D70069": { "Type": 
"AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "ACLvpcPublicSubnet1RouteTable973DCC99" }, "SubnetId": { "Ref": "ACLvpcPublicSubnet1SubnetAB5536F8" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1/RouteTableAssociation" } }, "ACLvpcPublicSubnet1DefaultRoute5F1B7BC7": { "Type": "AWS::EC2::Route", "Properties": { "RouteTableId": { "Ref": "ACLvpcPublicSubnet1RouteTable973DCC99" }, "DestinationCidrBlock": "0.0.0.0/0", "GatewayId": { "Ref": "ACLvpcIGWA284CC51" } }, "DependsOn": [ "ACLvpcVPCGWA01262F1" ], "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1/DefaultRoute" } }, "ACLvpcPublicSubnet1EIP0233C01E": { "Type": "AWS::EC2::EIP", "Properties": { "Domain": "vpc", "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1/EIP" } }, "ACLvpcPublicSubnet1NATGateway7D889FAC": { "Type": "AWS::EC2::NatGateway", "Properties": { "AllocationId": { "Fn::GetAtt": [ "ACLvpcPublicSubnet1EIP0233C01E", "AllocationId" ] }, "SubnetId": { "Ref": "ACLvpcPublicSubnet1SubnetAB5536F8" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet1/NATGateway" } }, "ACLvpcPublicSubnet2Subnet1243F1B8": { "Type": "AWS::EC2::Subnet", "Properties": { "CidrBlock": "10.0.32.0/19", "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "AvailabilityZone": "eu-west-3b", "MapPublicIpOnLaunch": true, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2" }, { "Key": "aws-cdk:subnet-name", "Value": "Public" }, { "Key": "aws-cdk:subnet-type", "Value": "Public" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2/Subnet" } }, "ACLvpcPublicSubnet2RouteTableBFA33E2A": { "Type": "AWS::EC2::RouteTable", "Properties": { "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2/RouteTable" } }, "ACLvpcPublicSubnet2RouteTableAssociation0E367E2F": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "ACLvpcPublicSubnet2RouteTableBFA33E2A" }, "SubnetId": { "Ref": "ACLvpcPublicSubnet2Subnet1243F1B8" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2/RouteTableAssociation" } }, "ACLvpcPublicSubnet2DefaultRoute6918C2C0": { "Type": "AWS::EC2::Route", "Properties": { "RouteTableId": { "Ref": "ACLvpcPublicSubnet2RouteTableBFA33E2A" }, "DestinationCidrBlock": "0.0.0.0/0", "GatewayId": { "Ref": "ACLvpcIGWA284CC51" } }, "DependsOn": [ "ACLvpcVPCGWA01262F1" ], "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2/DefaultRoute" } }, "ACLvpcPublicSubnet2EIPBB2E0F7F": { "Type": "AWS::EC2::EIP", "Properties": { "Domain": "vpc", "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2/EIP" } }, "ACLvpcPublicSubnet2NATGatewayA823B2BD": { "Type": "AWS::EC2::NatGateway", "Properties": { "AllocationId": { "Fn::GetAtt": [ "ACLvpcPublicSubnet2EIPBB2E0F7F", "AllocationId" ] }, "SubnetId": { "Ref": "ACLvpcPublicSubnet2Subnet1243F1B8" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet2/NATGateway" } 
}, "ACLvpcPublicSubnet3Subnet74DB8A91": { "Type": "AWS::EC2::Subnet", "Properties": { "CidrBlock": "10.0.64.0/19", "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "AvailabilityZone": "eu-west-3c", "MapPublicIpOnLaunch": true, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3" }, { "Key": "aws-cdk:subnet-name", "Value": "Public" }, { "Key": "aws-cdk:subnet-type", "Value": "Public" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3/Subnet" } }, "ACLvpcPublicSubnet3RouteTable48D5C590": { "Type": "AWS::EC2::RouteTable", "Properties": { "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3/RouteTable" } }, "ACLvpcPublicSubnet3RouteTableAssociation6304EEEC": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "ACLvpcPublicSubnet3RouteTable48D5C590" }, "SubnetId": { "Ref": "ACLvpcPublicSubnet3Subnet74DB8A91" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3/RouteTableAssociation" } }, "ACLvpcPublicSubnet3DefaultRoute5ED7E66D": { "Type": "AWS::EC2::Route", "Properties": { "RouteTableId": { "Ref": "ACLvpcPublicSubnet3RouteTable48D5C590" }, "DestinationCidrBlock": "0.0.0.0/0", "GatewayId": { "Ref": "ACLvpcIGWA284CC51" } }, "DependsOn": [ "ACLvpcVPCGWA01262F1" ], "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3/DefaultRoute" } }, "ACLvpcPublicSubnet3EIP2A75DA44": { "Type": "AWS::EC2::EIP", "Properties": { "Domain": "vpc", "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3/EIP" } }, "ACLvpcPublicSubnet3NATGateway88BC6345": { "Type": "AWS::EC2::NatGateway", "Properties": { "AllocationId": { "Fn::GetAtt": [ "ACLvpcPublicSubnet3EIP2A75DA44", "AllocationId" ] }, "SubnetId": { "Ref": "ACLvpcPublicSubnet3Subnet74DB8A91" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PublicSubnet3/NATGateway" } }, "ACLvpcPrivateSubnet1SubnetB88404CC": { "Type": "AWS::EC2::Subnet", "Properties": { "CidrBlock": "10.0.96.0/19", "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "AvailabilityZone": "eu-west-3a", "MapPublicIpOnLaunch": false, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet1" }, { "Key": "aws-cdk:subnet-name", "Value": "Private" }, { "Key": "aws-cdk:subnet-type", "Value": "Private" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet1/Subnet" } }, "ACLvpcPrivateSubnet1RouteTable52EFE8B4": { "Type": "AWS::EC2::RouteTable", "Properties": { "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet1" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet1/RouteTable" } }, "ACLvpcPrivateSubnet1RouteTableAssociation07BBA734": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "ACLvpcPrivateSubnet1RouteTable52EFE8B4" }, "SubnetId": { "Ref": "ACLvpcPrivateSubnet1SubnetB88404CC" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet1/RouteTableAssociation" } }, "ACLvpcPrivateSubnet1DefaultRoute1D5645F3": { "Type": "AWS::EC2::Route", "Properties": { "RouteTableId": { "Ref": "ACLvpcPrivateSubnet1RouteTable52EFE8B4" }, 
"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": { "Ref": "ACLvpcPublicSubnet1NATGateway7D889FAC" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet1/DefaultRoute" } }, "ACLvpcPrivateSubnet2Subnet63321773": { "Type": "AWS::EC2::Subnet", "Properties": { "CidrBlock": "10.0.128.0/19", "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "AvailabilityZone": "eu-west-3b", "MapPublicIpOnLaunch": false, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet2" }, { "Key": "aws-cdk:subnet-name", "Value": "Private" }, { "Key": "aws-cdk:subnet-type", "Value": "Private" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet2/Subnet" } }, "ACLvpcPrivateSubnet2RouteTable66EECACC": { "Type": "AWS::EC2::RouteTable", "Properties": { "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet2" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet2/RouteTable" } }, "ACLvpcPrivateSubnet2RouteTableAssociationB47D85D6": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "ACLvpcPrivateSubnet2RouteTable66EECACC" }, "SubnetId": { "Ref": "ACLvpcPrivateSubnet2Subnet63321773" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet2/RouteTableAssociation" } }, "ACLvpcPrivateSubnet2DefaultRoute692EE131": { "Type": "AWS::EC2::Route", "Properties": { "RouteTableId": { "Ref": "ACLvpcPrivateSubnet2RouteTable66EECACC" }, "DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": { "Ref": "ACLvpcPublicSubnet2NATGatewayA823B2BD" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet2/DefaultRoute" } }, "ACLvpcPrivateSubnet3SubnetC5349B6D": { "Type": "AWS::EC2::Subnet", "Properties": { "CidrBlock": "10.0.160.0/19", "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "AvailabilityZone": "eu-west-3c", "MapPublicIpOnLaunch": false, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet3" }, { "Key": "aws-cdk:subnet-name", "Value": "Private" }, { "Key": "aws-cdk:subnet-type", "Value": "Private" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet3/Subnet" } }, "ACLvpcPrivateSubnet3RouteTableFCCC4D72": { "Type": "AWS::EC2::RouteTable", "Properties": { "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet3" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet3/RouteTable" } }, "ACLvpcPrivateSubnet3RouteTableAssociationD5EEF6F8": { "Type": "AWS::EC2::SubnetRouteTableAssociation", "Properties": { "RouteTableId": { "Ref": "ACLvpcPrivateSubnet3RouteTableFCCC4D72" }, "SubnetId": { "Ref": "ACLvpcPrivateSubnet3SubnetC5349B6D" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet3/RouteTableAssociation" } }, "ACLvpcPrivateSubnet3DefaultRoute6D60CB6B": { "Type": "AWS::EC2::Route", "Properties": { "RouteTableId": { "Ref": "ACLvpcPrivateSubnet3RouteTableFCCC4D72" }, "DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": { "Ref": "ACLvpcPublicSubnet3NATGateway88BC6345" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/PrivateSubnet3/DefaultRoute" } }, "ACLvpcIGWA284CC51": { "Type": "AWS::EC2::InternetGateway", "Properties": { "Tags": [ { "Key": "Name", "Value": "acl-aws-cdk-learning/ACL_vpc" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/IGW" } }, "ACLvpcVPCGWA01262F1": { "Type": "AWS::EC2::VPCGatewayAttachment", "Properties": 
{ "VpcId": { "Ref": "ACLvpcAC1CD0C2" }, "InternetGatewayId": { "Ref": "ACLvpcIGWA284CC51" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_vpc/VPCGW" } }, "ACLsecuritygroupF744FA96": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupDescription": "acl-aws-cdk-learning/ACL_security_group", "SecurityGroupEgress": [ { "CidrIp": "0.0.0.0/0", "Description": "Allow all outbound traffic by default", "IpProtocol": "-1" } ], "VpcId": { "Ref": "ACLvpcAC1CD0C2" } }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_security_group/Resource" } }, "ACLcachesubnetgroup": { "Type": "AWS::ElastiCache::SubnetGroup", "Properties": { "Description": "subnet group for redis", "SubnetIds": [ { "Ref": "ACLvpcPrivateSubnet1SubnetB88404CC" }, { "Ref": "ACLvpcPrivateSubnet2Subnet63321773" }, { "Ref": "ACLvpcPrivateSubnet3SubnetC5349B6D" } ] }, "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_cache_subnet_group" } }, "ACLredis": { "Type": "AWS::ElastiCache::CacheCluster", "Properties": { "CacheNodeType": "cache.t2.small", "Engine": "redis", "NumCacheNodes": 1, "VpcSecurityGroupIds": [ { "Fn::GetAtt": [ "ACLsecuritygroupF744FA96", "GroupId" ] } ] }, "DependsOn": [ "ACLcachesubnetgroup" ], "Metadata": { "aws:cdk:path": "acl-aws-cdk-learning/ACL_redis" } } } } Bash stuff: (.env) acl-aws-cdk-learning % cdk deploy This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening). Please confirm you intend to make the following modifications: Security Group Changes ┌───┬───────────────────────────────┬─────┬────────────┬─────────────────┐ │ │ Group │ Dir │ Protocol │ Peer │ ├───┼───────────────────────────────┼─────┼────────────┼─────────────────┤ │ + │ ${ACL_security_group.GroupId} │ Out │ Everything │ Everyone (IPv4) │ └───┴───────────────────────────────┴─────┴────────────┴─────────────────┘ (NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299) Do you wish to deploy these changes (y/n)? y acl-aws-cdk-learning: deploying... acl-aws-cdk-learning: creating CloudFormation changeset... 0/38 | 11:00:17 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata 0/38 | 11:00:17 | CREATE_IN_PROGRESS | AWS::EC2::InternetGateway | ACL_vpc/IGW (ACLvpcIGWA284CC51) (...) 
20/38 | 11:00:53 | CREATE_IN_PROGRESS | AWS::ElastiCache::SubnetGroup | ACL_cache_subnet_group (ACLcachesubnetgroup) Resource creation Initiated 21/38 | 11:00:53 | CREATE_COMPLETE | AWS::ElastiCache::SubnetGroup | ACL_cache_subnet_group (ACLcachesubnetgroup) 21/38 | 11:00:55 | CREATE_IN_PROGRESS | AWS::ElastiCache::CacheCluster | ACL_redis (ACLredis) 22/38 | 11:00:56 | CREATE_FAILED | AWS::ElastiCache::CacheCluster | ACL_redis (ACLredis) Subnet group [default] belongs to a different VPC [vpc-326ce55b] than [vpc-0c45b593f3a5fdc4d] (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: 901398f4-c355-418d-921b-65e6c52dfe3a) obj._wrapSandboxCode (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7761:49) \_ Kernel._wrapSandboxCode (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:8221:20) \_ Kernel._create (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7761:26) \_ Kernel.create (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7508:21) \_ KernelHost.processRequest (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7296:28) \_ KernelHost.run (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7236:14) \_ Immediate.setImmediate [as _onImmediate] (/Users/private/Git/acl-aws-cdk-learning/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7239:37) \_ runCallback (timers.js:694:18) \_ tryOnImmediate (timers.js:665:5) \_ processImmediate (timers.js:647:5) | I can see that CacheSubnetGroupName is missing in the CacheCluster definition in the generated template. That is why the cache is using the default VPC. CDK omits your subnet group definition as you assign it incorrectly. When using a Cfn resource, you should refer to other resources in your code using ref instead of assigning the resource directly as you did. Your code should work just by updating the following line of your code. redis_cluster = aws_elasticache.CfnCacheCluster( ... cache_subnet_group_name=cache_subnet_group.ref ) | 10 | 23 |
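Applying that one-line change to the snippet from the question gives the following; everything else in the stack stays as it was.

redis_cluster = aws_elasticache.CfnCacheCluster(
    scope=self,
    id=f"{PROJECT_CODE}_redis",
    engine="redis",
    cache_node_type="cache.t2.small",
    num_cache_nodes=1,
    cache_subnet_group_name=cache_subnet_group.ref,  # was cache_subnet_group.cache_subnet_group_name
    vpc_security_group_ids=[security_group.security_group_id],
)
redis_cluster.add_depends_on(cache_subnet_group)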
59,645,179 | 2020-1-8 | https://stackoverflow.com/questions/59645179/update-anaconda-failed-entry-point-not-found | I have just tried to update my anaconda environment to the latest version and I am now receiving errors. I opened the conda environment as an admin, and the commands issued were: conda update conda conda update anaconda First command finished fine. Second command produced error: pythonw.exe - Entry Point Not Found The procedure entry point ?PyWinObject_FromULARGE_INTEGER@@YAPEAU_object@@AEAT_ULARGE_INTEGER@@@Z could not be located in the dynamic link library c:\ProgramData\Anaconda3\pythoncom37.dll I have found a reference to this sort of error that requires me to copy a file libssl-1-1-x64.dll from Anaconda3/Library/bin with the one from Anaconda3/DLLs. How to Fix Entry Point Not Found while installing libraries in conda environment However, I do not have that file, in the source location. Is there any commands I can issue to download this file again, or somewhere online I can safely download that one file from? | Sorry all - the clue was in the error message. The entry on how to fix entry point led me in the right direction. but it was the pythoncom37.dll file I needed to copy. That's what you get for blindly following instructions. Many thanks. | 21 | 1 |
59,644,751 | 2020-1-8 | https://stackoverflow.com/questions/59644751/show-both-value-and-percentage-on-a-pie-chart | Here's my current code values = pd.Series([False, False, True, True]) v_counts = values.value_counts() fig = plt.figure() plt.pie(v_counts, labels=v_counts.index, autopct='%.4f', shadow=True); Currently, it shows only the percentage (using autopct) I'd like to present both the percentage and the actual value (I don't mind about the position) | Create your own formatting function. Note that you have to recalculate the actual value from the percentage in that function somehow def my_fmt(x): print(x) return '{:.4f}%\n({:.0f})'.format(x, total*x/100) values = pd.Series([False, False, True, True, True, True]) v_counts = values.value_counts() total = len(values) fig = plt.figure() plt.pie(v_counts, labels=v_counts.index, autopct=my_fmt, shadow=True); | 11 | 17 |
59,642,338 | 2020-1-8 | https://stackoverflow.com/questions/59642338/creating-new-column-based-on-condition-on-other-column-in-pandas-dataframe | I have this dataframe: +------+--------------+------------+ | ID | Education | Score | +------+--------------+------------+ | 1 | High School | 7.884 | | 2 | Bachelors | 6.952 | | 3 | High School | 8.185 | | 4 | High School | 6.556 | | 5 | Bachelors | 6.347 | | 6 | Master | 6.794 | +------+--------------+------------+ I want to create a new column which is categorizing the score column. I want to label it as: 'bad', 'good', 'very good'. Which maybe would look like this: +------+--------------+------------+------------+ | ID | Education | Score | Labels | +------+--------------+------------+------------+ | 1 | High School | 7.884 | Good | | 2 | Bachelors | 6.952 | Bad | | 3 | High School | 8.185 | Very good | | 4 | High School | 6.556 | Bad | | 5 | Bachelors | 6.347 | Bad | | 6 | Master | 6.794 | Bad | +------+--------------+------------+------------+ How can I do that? Thanks in advance | import pandas as pd # initialize list of lists data = [[1,'High School',7.884], [2,'Bachelors',6.952], [3,'High School',8.185], [4,'High School',6.556],[5,'Bachelors',6.347],[6,'Master',6.794]] # Create the pandas DataFrame df = pd.DataFrame(data, columns = ['ID', 'Education', 'Score']) df['Labels'] = ['Bad' if x<7.000 else 'Good' if 7.000<=x<8.000 else 'Very Good' for x in df['Score']] df ID Education Score Labels 0 1 High School 7.884 Good 1 2 Bachelors 6.952 Bad 2 3 High School 8.185 Very Good 3 4 High School 6.556 Bad 4 5 Bachelors 6.347 Bad 5 6 Master 6.794 Bad | 7 | 12 |
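An alternative sketch using pd.cut, which expresses the same thresholds declaratively (scores below 7 are Bad, 7 up to 8 are Good, 8 and above are Very Good); the bin edges simply mirror the comprehension above.

import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4, 5, 6],
                   'Education': ['High School', 'Bachelors', 'High School',
                                 'High School', 'Bachelors', 'Master'],
                   'Score': [7.884, 6.952, 8.185, 6.556, 6.347, 6.794]})

# right=False gives intervals [-inf, 7), [7, 8), [8, inf).
df['Labels'] = pd.cut(df['Score'],
                      bins=[-np.inf, 7.0, 8.0, np.inf],
                      labels=['Bad', 'Good', 'Very Good'],
                      right=False)
print(df)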
59,638,571 | 2020-1-8 | https://stackoverflow.com/questions/59638571/where-to-place-all-in-a-python-file | I am wondering, what the standard placement in a Python file for __all__? My assumption is directly below the import statements. However, I could not find this explicitly stated/asked anywhere. So, in general, where should one place __all__? Where would it be put in the below example file? #!/usr/bin/env python3 """Where to put __all__.""" from time import time # I think it should go here: __all__ = ["Hello"] SOME_GLOBAL = 0.0 class Hello: def __init__(self): pass if __name__ == "__main__": pass Thank you in advance! | Per PEP 8: Module level "dunders" (i.e. names with two leading and two trailing underscores) such as __all__, __author__, __version__, etc. should be placed after the module docstring but before any import statements except from __future__ imports. So if you're following PEP 8 strictly, you were close. In practice, it's not obeyed strictly. A ton of the Python "included batteries" modules define __all__ after the imports, so your approach is perfectly fine. | 10 | 21 |
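A minimal layout following the strict PEP 8 ordering quoted above; the __future__ import and __version__ line are only there to show where such statements would sit and are not part of the original example.

#!/usr/bin/env python3
"""Module docstring comes first."""

from __future__ import annotations   # __future__ imports may precede the dunders

__all__ = ["Hello"]                  # module dunders: after the docstring, before other imports
__version__ = "0.1.0"

from time import time                # regular imports follow


SOME_GLOBAL = 0.0


class Hello:
    def __init__(self):
        pass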
59,638,035 | 2020-1-8 | https://stackoverflow.com/questions/59638035/using-python-multiprocessing-queue-inside-aws-lambda-function | I have some python that creates multiple processes to complete a task much quicker. When I create these processes I pass in a queue. Inside the processes I use queue.put(data) so I am able to retrieve the data outside of the processes. It works fantastic on my local machine, but when I upload the zip to an AWS Lambda function (Python 3.8) it says the Queue() function has not been implemented.The project runs great in the AWS Lambda when I simply take out the queue functionality so I know this is the only hang up I currently have. I ensured to install the multiprocessing package directly to my python project by using "pip install multiprocess -t ./" as well as "pip install boto3 -t ./". I am new to python specifically as well as AWS but the research I have come across recently potentially points we to SQS. Reading over these SQS docs I am not sure if this is exactly what I am looking for. Here is the code I am running in the Lambda that works locally but not on AWS. See the *'s for important pieces: from multiprocessing import Process, Queue from craigslist import CraigslistForSale import time import math sitesHold = ["sfbay", "seattle", "newyork", "(many more)..." ] results = [] def f(sites, category, search_keys, queue): local_results = [] for site in sites: cl_fs = CraigslistForSale(site=site, category=category, filters={'query': search_keys}) for result in cl_fs.get_results(sort_by='newest'): local_results.append(result) if len(local_results) > 0: print(local_results) queue.put(local_results) # Putting data ********************************* def scan_handler(event, context): started_at = time.monotonic() queue = Queue() print("Running...") amount_of_lists = int(event['amountOfLists']) list_length = int(len(sitesHold) / amount_of_lists) extra_lists = math.ceil((len(sitesHold) - (amount_of_lists * list_length)) / list_length) site_list = [] list_creator_counter = 0 site_counter = 0 for i in range(amount_of_lists + extra_lists): site_list.append(sitesHold[list_creator_counter:list_creator_counter + list_length]) list_creator_counter += list_length processes = [] for i in range(len(site_list)): site_counter = site_counter + len(site_list[i]) processes.append(Process(target=f, args=(site_list[i], event['category'], event['searchQuery'], queue,))) # Creating processes and creating queues *************************** for process in processes: process.start() # Starting processes *********************** for process in processes: listings = queue.get() # Getting from queue **************************** if len(listings) > 0: for listing in listings: results.append(listing) print(f"Results: {results}") for process in processes: process.join() total_time_took = time.monotonic() - started_at print(f"Sites processed: {site_counter}") print(f'Took {total_time_took} seconds long') This is the error the Lambda function is giving me: { "errorMessage": "[Errno 38] Function not implemented", "errorType": "OSError", "stackTrace": [ " File \"/var/task/main.py\", line 90, in scan_handler\n queue = Queue()\n", " File \"/var/lang/lib/python3.8/multiprocessing/context.py\", line 103, in Queue\n return Queue(maxsize, ctx=self.get_context())\n", " File \"/var/lang/lib/python3.8/multiprocessing/queues.py\", line 42, in __init__\n self._rlock = ctx.Lock()\n", " File \"/var/lang/lib/python3.8/multiprocessing/context.py\", line 68, in Lock\n return Lock(ctx=self.get_context())\n", " File 
\"/var/lang/lib/python3.8/multiprocessing/synchronize.py\", line 162, in __init__\n SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)\n", " File \"/var/lang/lib/python3.8/multiprocessing/synchronize.py\", line 57, in __init__\n sl = self._semlock = _multiprocessing.SemLock(\n" ] } Does Queue() work in an AWS Lambda? What is the best way to accomplish my goal? | doesn't look like it's supported - https://blog.ruanbekker.com/blog/2019/02/19/parallel-processing-on-aws-lambda-with-python-using-multiprocessing/ | 8 | 7 |
59,637,973 | 2020-1-8 | https://stackoverflow.com/questions/59637973/how-to-run-virtualenv-python-on-mac | I am trying to use virtualenv to create a virtual python environment on my mac. I have downloaded virtualenv however I can't run it because it can't find the path to my installation of python3 even though I am supplying the correct path. Here is the command I have run and the response: virtualenv --python=/usr/local/bin/python3 newfolder zsh: /usr/local/bin/virtualenv: bad interpreter: /usr/local/opt/python3/bin/python3.6: no such file or directory Also I have tried running the command with quotes like so: virtualenv --python='/usr/local/bin/python3' newfolder zsh: /usr/local/bin/virtualenv: bad interpreter: /usr/local/opt/python3/bin/python3.6: no such file or directory Please note I am supplying the correct path to python3 as far as I can tell. Here is what I get when I run which python3 which python3 /usr/local/bin/python3 Also virtualenv appears to be correctly installed. Here is evidence for this: pip3 install virtualenv Requirement already satisfied: virtualenv in /Users/mathewlewis/Library/Python/3.7/lib/python/site-packages (16.7.9) Also, in case this is relevant, the software I have is currently mac os catalina 10.15.2 Not only would I like a solution (as has been given at this point) I would also like a reason why this didn't work. | Try: python3 -m venv venv source ./venv/bin/activate | 20 | 63 |
59,586,879 | 2020-1-4 | https://stackoverflow.com/questions/59586879/does-await-in-python-yield-to-the-event-loop | I was wondering what exactly happens when we await a coroutine in async Python code, for example: await send_message(string) (1) send_message is added to the event loop, and the calling coroutine gives up control to the event loop, or (2) We jump directly into send_message Most explanations I read point to (1), as they describe the calling coroutine as exiting. But my own experiments suggest (2) is the case: I tried to have a coroutine run after the caller but before the callee and could not achieve this. | No, await (per se) does not yield to the event loop, yield yields to the event loop, hence for the case given: "(2) We jump directly into send_message". In particular, certain yield expressions are the only points, at bottom, where async tasks can actually be switched out (in terms of nailing down the precise spot where Python code execution can be suspended). To be proven and demonstrated: 1) by theory/documentation, 2) by implementation code, 3) by example. By theory/documentation PEP 492: Coroutines with async and await syntax While the PEP is not tied to any specific Event Loop implementation, it is relevant only to the kind of coroutine that uses yield as a signal to the scheduler, indicating that the coroutine will be waiting until an event (such as IO) is completed. ... [await] uses the yield from implementation [with an extra step of validating its argument.] ... Any yield from chain of calls ends with a yield. This is a fundamental mechanism of how Futures are implemented. Since, internally, coroutines are a special kind of generators, every await is suspended by a yield somewhere down the chain of await calls (please refer to PEP 3156 for a detailed explanation). ... Coroutines are based on generators internally, thus they share the implementation. Similarly to generator objects, coroutines have throw(), send() and close() methods. ... The vision behind existing generator-based coroutines and this proposal is to make it easy for users to see where the code might be suspended. In context, "easy for users to see where the code might be suspended" seems to refer to the fact that in synchronous code yield is the place where execution can be "suspended" within a routine allowing other code to run, and that principle now extends perfectly to the async context wherein a yield (if its value is not consumed within the running task but is propagated up to the scheduler) is the "signal to the scheduler" to switch out tasks. More succinctly: where does a generator yield control? At a yield. Coroutines (including those using async and await syntax) are generators, hence likewise. And it is not merely an analogy, in implementation (see below) the actual mechanism by which a task gets "into" and "out of" coroutines is not anything new, magical, or unique to the async world, but simply by calling the coro's <generator>.send() method. That was (as I understand the text) part of the "vision" behind PEP 492: async and await would provide no novel mechanism for code suspension but just pour async-sugar on Python's already well-beloved and powerful generators. And PEP 3156: The "asyncio" module The loop.slow_callback_duration attribute controls the maximum execution time allowed between two yield points before a slow callback is reported [emphasis in original]. 
That is, an uninterrupted segment of code (from the async perspective) is demarcated as that between two successive yield points (whose values reached up to the running Task level (via an await/yield from tunnel) without being consumed within it). And this: The scheduler has no public interface. You interact with it by using yield from future and yield from task. Objection: "That says 'yield from', but you're trying to argue that the task can only switch out at a yield itself! yield from and yield are different things, my friend, and yield from itself doesn't suspend code!" Ans: Not a contradiction. The PEP is saying you interact with the scheduler by using yield from future/task. But as noted above in PEP 492, any chain of yield from (~aka await) ultimately reaches a yield (the "bottom turtle"). In particular (see below), yield from future does in fact yield that same future after some wrapper work, and that yield is the actual "switch out point" where another task takes over. But it is incorrect for your code to directly yield a Future up to the current Task because you would bypass the necessary wrapper. The objection having been answered, and its practical coding considerations being noted, the point I wish to make from the above quote remains: that a suitable yield in Python async code is ultimately the one thing which, having suspended code execution in the standard way that any other yield would do, now futher engages the scheduler to bring about a possible task switch. By implementation code asyncio/futures.py class Future: ... def __await__(self): if not self.done(): self._asyncio_future_blocking = True yield self # This tells Task to wait for completion. if not self.done(): raise RuntimeError("await wasn't used with future") return self.result() # May raise too. __iter__ = __await__ # make compatible with 'yield from'. Paraphrase: The line yield self is what tells the running task to sit out for now and let other tasks run, coming back to this one sometime after self is done. Almost all of your awaitables in asyncio world are (multiple layers of) wrappers around a Future. The event loop remains utterly blind to all higher level await awaitable expressions until the code execution trickles down to an await future or yield from future and then (as seen here) calls yield self, which yielded self is then "caught" by none other than the Task under which the present coroutine stack is running thereby signaling to the task to take a break. Possibly the one and only exception to the above "code suspends at yield self within await future" rule, in an asyncio context, is the potential use of a bare yield such as in asyncio.sleep(0). And since the sleep function is a topic of discourse in the comments of this post, let's look at that. asyncio/tasks.py @types.coroutine def __sleep0(): """Skip one event loop run cycle. This is a private helper for 'asyncio.sleep()', used when the 'delay' is set to 0. It uses a bare 'yield' expression (which Task.__step knows how to handle) instead of creating a Future object. 
""" yield async def sleep(delay, result=None, *, loop=None): """Coroutine that completes after a given time (in seconds).""" if delay <= 0: await __sleep0() return result if loop is None: loop = events.get_running_loop() else: warnings.warn("The loop argument is deprecated since Python 3.8, " "and scheduled for removal in Python 3.10.", DeprecationWarning, stacklevel=2) future = loop.create_future() h = loop.call_later(delay, futures._set_result_unless_cancelled, future, result) try: return await future finally: h.cancel() Note: We have here the two interesting cases at which control can shift to the scheduler: (1) The bare yield in __sleep0 (when called via an await). (2) The yield self immediately within await future. The crucial line (for our purposes) in asyncio/tasks.py is when Task._step runs its top-level coroutine via result = self._coro.send(None) and recognizes fourish cases: (1) result = None is generated by the coro (which, again, is a generator): the task "relinquishes control for one event loop iteration". (2) result = future is generated within the coro, with further magic member field evidence that the future was yielded in a proper manner from out of Future.__iter__ == Future.__await__: the task relinquishes control to the event loop until the future is complete. (3) A StopIteration is raised by the coro indicating the coroutine completed (i.e. as a generator it exhausted all its yields): the final result of the task (which is itself a Future) is set to the coroutine return value. (4) Any other Exception occurs: the task's set_exception is set accordingly. Modulo details, the main point for our concern is that coroutine segments in an asyncio event loop ultimately run via coro.send(). Initial startup and final termination aside, send() proceeds precisely from the last yield value it generated to the next one. By example import asyncio import types def task_print(s): print(f"{asyncio.current_task().get_name()}: {s}") async def other_task(s): task_print(s) class AwaitableCls: def __await__(self): task_print(" 'Jumped straight into' another `await`; the act of `await awaitable` *itself* doesn't 'pause' anything") yield task_print(" We're back to our awaitable object because that other task completed") asyncio.create_task(other_task("The event loop gets control when `yield` points (from an iterable coroutine) propagate up to the `current_task` through a suitable chain of `await` or `yield from` statements")) async def coro(): task_print(" 'Jumped straight into' coro; the `await` keyword itself does nothing to 'pause' the current_task") await AwaitableCls() task_print(" 'Jumped straight back into' coro; we have another pending task, but leaving an `__await__` doesn't 'pause' the task any more than entering the `__await__` does") @types.coroutine def iterable_coro(context): task_print(f"`{context} iterable_coro`: pre-yield") yield None # None or a Future object are the only legitimate yields to the task in asyncio task_print(f"`{context} iterable_coro`: post-yield") async def original_task(): asyncio.create_task(other_task("Aha, but a (suitably unconsumed) *`yield`* DOES 'pause' the current_task allowing the event scheduler to `_wakeup` another task")) task_print("Original task") await coro() task_print("'Jumped straight out of' coro. 
Leaving a coro, as with leaving/entering any awaitable, doesn't give control to the event loop") res = await iterable_coro("await") assert res is None asyncio.create_task(other_task("This doesn't run until the very end because the generated None following the creation of this task is consumed by the `for` loop")) for y in iterable_coro("for y in"): task_print(f"But 'ordinary' `yield` points (those which are consumed by the `current_task` itself) behave as ordinary without relinquishing control at the async/task-level; `y={y}`") task_print("Done with original task") asyncio.get_event_loop().run_until_complete(original_task()) run in python3.8 produces Task-1: Original task Task-1: 'Jumped straight into' coro; the await keyword itself does nothing to 'pause' the current_task Task-1: 'Jumped straight into' another await; the act of await awaitable itself doesn't 'pause' anything Task-2: Aha, but a (suitably unconsumed) yield DOES 'pause' the current_task allowing the event scheduler to _wakeup another task Task-1: We're back to our awaitable object because that other task completed Task-1: 'Jumped straight back into' coro; we have another pending task, but leaving an __await__ doesn't 'pause' the task any more than entering the __await__ does Task-1: 'Jumped straight out of' coro. Leaving a coro, as with leaving/entering any awaitable, doesn't give control to the event loop Task-1: await iterable_coro: pre-yield Task-3: The event loop gets control when yield points (from an iterable coroutine) propagate up to the current_task through a suitable chain of await or yield from statements Task-1: await iterable_coro: post-yield Task-1: for y in iterable_coro: pre-yield Task-1: But 'ordinary' yield points (those which are consumed by the current_task itself) behave as ordinary without relinquishing control at the async/task-level; y=None Task-1: for y in iterable_coro: post-yield Task-1: Done with original task Task-4: This doesn't run until the very end because the generated None following the creation of this task is consumed by the for loop Indeed, exercises such as the following can help one's mind to decouple the functionality of async/await from notion of "event loops" and such. The former is conducive to nice implementations and usages of the latter, but you can use async and await just as specially syntaxed generator stuff without any "loop" (whether asyncio or otherwise) whatsoever: import types # no asyncio, nor any other loop framework async def f1(): print(1) print(await f2(),'= await f2()') return 8 @types.coroutine def f2(): print(2) print((yield 3),'= yield 3') return 7 class F3: def __await__(self): print(4) print((yield 5),'= yield 5') print(10) return 11 task1 = f1() task2 = F3().__await__() """ You could say calls to send() represent our "manual task management" in this script. """ print(task1.send(None), '= task1.send(None)') print(task2.send(None), '= task2.send(None)') try: print(task1.send(6), 'try task1.send(6)') except StopIteration as e: print(e.value, '= except task1.send(6)') try: print(task2.send(9), 'try task2.send(9)') except StopIteration as e: print(e.value, '= except task2.send(9)') produces 1 2 3 = task1.send(None) 4 5 = task2.send(None) 6 = yield 3 7 = await f2() 8 = except task1.send(6) 9 = yield 5 10 11 = except task2.send(9) | 29 | 60 |
59,591,969 | 2020-1-4 | https://stackoverflow.com/questions/59591969/python-setuptools-quick-way-to-add-scripts-without-main-function-as-console | My request seems unorthodox, but I would like to quickly package an old repository, consisting mostly of python executable scripts. The problem is that those scripts were not designed as modules, so some of them execute code directly at the module top-level, and some other have the if __name__=='__main__' part. How would you distribute those scripts using setuptools, without too much rewrite? I know I could just put them under the scripts option of setup(), but it's not advised, and also it doesn't allow me to rename them. I would like to skip defining a main() function in all those scripts, also because some scripts call weird recursive functions with side effects on global variables, so I'm a bit afraid of breaking stuff. When I try providing only the module name as console_scripts (e.g "myscript=mypkg.myscript" instead of "myscript=mypkg.myscript:main"), it logically complains after installation that a module is not callable. Is there a way to create scripts from modules? At least when they have a if __name__=='__main__'? | There's some design considerations, but I would recommend using a __main__.py for this. it allows all command line invocations to share argument parsing logic you don't have to touch the scripts, at all it is explicit (no do-nothing functions that exist to trigger import) it enables refactoring out any other common stateful logic, since __main__.py is never imported when you import your package convention. People expect to find command line invocations in __main__.py. __main__.py from pathlib import Path from runpy import run_path pkg_dir = Path(__file__).resolve().parent def execute_script(): script_pth = pkg_dir / "local path to script" run_path(str(script_pth), run_name="__main__") Then, you can set your_package.__main__:execute_script as a console script in setup.py/pyproject.toml. You can obviously have as many scripts as you like this way. | 9 | 1 |
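For completeness, a sketch of how the wrapper could be wired up in setup.py; the package name mypkg and the script name are placeholders.

from setuptools import setup, find_packages

setup(
    name="mypkg",                       # placeholder
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # one entry per legacy script, all pointing at wrappers in __main__.py
            "old-script = mypkg.__main__:execute_script",
        ]
    },
)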
59,610,164 | 2020-1-6 | https://stackoverflow.com/questions/59610164/how-to-evaluate-coroutine-in-pycharms-interactive-debugger | When interrupting execution of python async code with PyCharm's interactive debugger (breakpoints) we can inspect the environment with PyCharm's debugging tools like "evaluate expression" or "Execute Line in Python Console". How can we evaluate coroutines within these debugging tools? | Update from October 2022 Make sure you have the latest version of PyCharm, because they said they added this | 10 | 3 |
59,579,859 | 2020-1-3 | https://stackoverflow.com/questions/59579859/set-niceness-of-each-process-in-a-multiprocessing-pool | How can I set the niceness for each process in a multiprocessing.Pool? I understand that I can increment niceness with os.nice(), but how do call it in the child process after creating the pool? If I call it in the mapped function it will be called every time the function executes, rather than once when the process is forked. import multiprocessing as mp NICENESS = 19 DATA = range(100000) def foo(bar): return bar * 2 pool = mp.Pool(100) # Somehow set niceness of each process to NICENESS pool.map(foo, DATA) | What about using an initializer for that? https://docs.python.org/3.8/library/multiprocessing.html#multiprocessing.pool.Pool The function is called once when the pool is started so the os.nice() call in the initializer sets the niceness for the proces after that. I've added some additional statements to show that it works in your worker function but the os.nice() calls should obviously be removed since you want a static niceness value. import multiprocessing as mp import os NICENESS = 3 DATA = range(6) def foo(bar): newniceness = os.nice(1) # remove this print('Additional niceness:', newniceness) # remove this return bar * 2 def set_nicesness(val): # the initializer newval = os.nice(val) # starts at 0 and returns newvalue print('niceness value:', newval) pool = mp.Pool(3, initializer=set_nicesness, initargs=(NICENESS,)) # Somehow set niceness of each process to NICENESS pool.map(foo, DATA) As you can see from the prints the niceness now starts at 3 (I've set this for NICENESS) and starts incrementing from there. Or as a useable snippet import multiprocessing as mp import os NICENESS = 3 def mp_function(bar: int) -> int: return bar * 2 if __name__ == '__main__': pool = mp.Pool(3, initializer=os.nice, initargs=(NICENESS,)) data = range(6) pool.map(mp_function, data) | 10 | 6 |
59,572,174 | 2020-1-3 | https://stackoverflow.com/questions/59572174/no-module-named-dotenv-python-3-8 | EDIT: Solved, if anyone comes across this: python3.8 -m pip install python-dotenv worked for me. I've tried reinstalling both dotenv and python-dotenv but I'm still getting the same error. I do have the .env file in the same directory as this script. #bot.py import os import discord from dotenv import load_dotenv load_dotenv() token=os.getenv('DISCORD_TOKEN') client = discord.Client() @client.event async def on_ready(): print(f'{client.user} has connected to Discord!') client.run(token) | In your package manager, if it's Ubuntu or Debian, try: apt install python3-dotenv You can also try sudo pip3 install dotenv to install via pip. Whatever you do, remember to explicitly include the missing 3 part. Debian/Ubuntu have separate packages, and as of the present time python means python2 and python3 means python3 in their apt repositories. However, when it comes to the locally installed python binary on your system, which python binary it defaults to may vary depending on what /usr/bin/python is symlinked to. On some systems it's symlinked to something like python2.7 and on others it may be something like python3.5. Similar issues exist with locally installed pip. Hence, using the '3' is important when installing or searching for python packages. | 105 | 38 |
59,590,993 | 2020-1-4 | https://stackoverflow.com/questions/59590993/where-can-i-download-a-pretrained-word2vec-map | I have been learning about NLP models and came across word embedding, and saw the examples in which it is possible to see relations between words by calculating their dot products and such. What I am looking for is just a dictionary, mapping words to their representative vectors, so I can play around with it. I know that I can build a model and train it and create my own map but I just want the already trained map as a python variable. | You can try out Google's word2vec model trained with about 100 billion words from various news articles. An interesting fact about word vectors, w2v(king) - w2v(man) + w2v(woman) ≈ w2v(queen) | 10 | 9 |
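A short sketch of loading those vectors with gensim once the archive has been downloaded; the filename is the one the Google News model is usually distributed under, so treat it as an assumption about your local copy.

from gensim.models import KeyedVectors

path = "GoogleNews-vectors-negative300.bin.gz"   # assumed local download
word_vectors = KeyedVectors.load_word2vec_format(path, binary=True)

vec = word_vectors["king"]                       # 300-dimensional numpy array
print(vec.shape)
print(word_vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))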
59,567,226 | 2020-1-2 | https://stackoverflow.com/questions/59567226/how-to-programmatically-determine-available-gpu-memory-with-tensorflow | For a vector quantization (k-means) program I like to know the amount of available memory on the present GPU (if there is one). This is needed to choose an optimal batch size in order to have as few batches as possible to run over the complete data set. I have written the following test program: import tensorflow as tf import numpy as np from kmeanstf import KMeansTF print("GPU Available: ", tf.test.is_gpu_available()) nn=1000 dd=250000 print("{:,d} bytes".format(nn*dd*4)) dic = {} for x in "ABCD": dic[x]=tf.random.normal((nn,dd)) print(x,dic[x][:1,:2]) print("done...") This is a typical output on my system with (ubuntu 18.04 LTS, GTX-1060 6GB). Please note the core dump. python misc/maxmem.py GPU Available: True 1,000,000,000 bytes A tf.Tensor([[-0.23787294 -2.0841186 ]], shape=(1, 2), dtype=float32) B tf.Tensor([[ 0.23762687 -1.1229591 ]], shape=(1, 2), dtype=float32) C tf.Tensor([[-1.2672468 0.92139906]], shape=(1, 2), dtype=float32) 2020-01-02 17:35:05.988473: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 953.67MiB (rounded to 1000000000). Current allocation summary follows. 2020-01-02 17:35:05.988752: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **************************************************************************************************xx 2020-01-02 17:35:05.988835: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1000,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc Segmentation fault (core dumped) Occasionally I do get an error from python instead of a core dump (see below). This would actually be better since I could catch it and thus determine by trial and error the maximum available memory. But it alternates with core dumps: python misc/maxmem.py GPU Available: True 1,000,000,000 bytes A tf.Tensor([[-0.73510283 -0.94611156]], shape=(1, 2), dtype=float32) B tf.Tensor([[-0.8458411 0.552555 ]], shape=(1, 2), dtype=float32) C tf.Tensor([[0.30532074 0.266423 ]], shape=(1, 2), dtype=float32) 2020-01-02 17:35:26.401156: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 953.67MiB (rounded to 1000000000). Current allocation summary follows. 
2020-01-02 17:35:26.401486: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **************************************************************************************************xx 2020-01-02 17:35:26.401571: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1000,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc Traceback (most recent call last): File "misc/maxmem.py", line 11, in <module> dic[x]=tf.random.normal((nn,dd)) File "/home/fritzke/miniconda2/envs/tf20b/lib/python3.7/site-packages/tensorflow_core/python/ops/random_ops.py", line 76, in random_normal value = math_ops.add(mul, mean_tensor, name=name) File "/home/fritzke/miniconda2/envs/tf20b/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 391, in add _six.raise_from(_core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1000,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Add] name: random_normal/ How could I reliably get this information for whatever system the software is running on? | This code will return free GPU memory in MegaBytes for each GPU: import subprocess as sp import os def get_gpu_memory(): command = "nvidia-smi --query-gpu=memory.free --format=csv" memory_free_info = sp.check_output(command.split()).decode('ascii').split('\n')[:-1][1:] memory_free_values = [int(x.split()[0]) for i, x in enumerate(memory_free_info)] return memory_free_values get_gpu_memory() This answer relies on nvidia-smi being installed (which is pretty much always the case for Nvidia GPUs) and therefore is limited to NVidia GPUs. | 33 | 49 |
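An alternative sketch that queries NVML directly through the pynvml bindings instead of parsing nvidia-smi output; it assumes an NVIDIA GPU with the NVML library available and the pynvml package installed.

import pynvml

def get_free_gpu_memory_mb(gpu_index=0):
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        return info.free // (1024 ** 2)   # NVML reports bytes
    finally:
        pynvml.nvmlShutdown()

print(get_free_gpu_memory_mb())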
59,621,736 | 2020-1-7 | https://stackoverflow.com/questions/59621736/despite-installing-the-torch-vision-pytorch-library-i-am-getting-an-error-sayin | The error that I am getting when I use import torchvision is this: Error Message "*Traceback (most recent call last): File "/Users/gokulsrin/Desktop/torch_basics/data.py", line 4, in <module> import torchvision ModuleNotFoundError: No module named 'torchvision'*" I don't know what to do. I have tried changing the version of python from the native one to the one downloaded through anaconda. I am using anaconda as a package manager and have installed torch vision through anaconda as well as through pip commands. | From PyTorch installing Docs you should follow these steps: In Anaconda use this command: conda install pytorch torchvision cpuonly -c pytorch In Pip use this command: pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html Note: If you have an enabled CUDA card you can change the cpuonly option to cudatoolkit=10.1 or cudatoolkit=9.2 After successfully installing the package you can import it with the command import torchvision and the output should look like this: Otherwise, there is something wrong when you are downloading the package from the Internet | 31 | 18 |
59,563,498 | 2020-1-2 | https://stackoverflow.com/questions/59563498/systemerror-class-pyodbc-error-returned-a-result-with-an-error-set | def insert(self): conn = pyodbc.connect( 'Driver={SQL Server};' 'Server=DESKTOP-S0VG212\SQLEXPRESS;' 'Database=MovieGuide;' 'Trusted_Connection=yes;' ) cursor = conn.cursor() The error occurs when executing the query, but I don't know what's causing it. cursor.execute('insert into Movies(MovieName,Genre,Rating,Username) values(?,?,?,?);', (self.moviename, self.moviegenre, self.ratebox, self.username)) conn.commit() | I know my answer is late, but it can be useful to someone. The SystemError: <class 'pyodbc.Error'> returned a result with an error set error appears when the query is wrong. Run the query in a SQL Server query window first; then you will be able to identify the problem. In the question, the semicolon should not come at the end of the query. If you still get an error, there is a chance that a column has a constraint issue. So follow this method when you are facing this issue. | 11 | 4 |
59,615,759 | 2020-1-6 | https://stackoverflow.com/questions/59615759/high-precision-word-alignment-algorithm-in-python | I am working on a project for building a high precision word alignment between sentences and their translations in other languages, for measuring translation quality. I am aware of Giza++ and other word alignment tools that are used as part of the pipeline for Statistical Machine Translation, but this is not what I'm looking for. I'm looking for an algorithm that can map words from the source sentence into the corresponding words in the target sentence, transparently and accurately given these restrictions: the two languages do not have the same word order, and the order keeps changing some words in the source sentence do not have corresponding words in the target sentence, and vice versa sometimes a word in the source correspond to multiple words in the target, and vice versa, and there can be many-to-many mapping there can be sentences where the same word is used multiple times in the sentence, so the alignment needs to be done with the words and their indexes, not only words Here is what I did: Start with a list of sentence pairs, say English-German, with each sentence tokenized to words Index all words in each sentence, and create an inverted index for each word (e.g. the word "world" occurred in sentences # 5, 16, 19, 26 ... etc), for both source and target words Now this inverted index can predict the correlation between any source word and any target word, as the intersection between the two words divided by their union. For example, if the tagret word "Welt" occurs in sentences 5, 16, 26,32, The correlation between (world, Welt) is the number of indexes in the intersection (3) divided by the number of indexes in the union (5), and hence the correlation is 0.6. Using the union gives lower correlation with high frequency words, such as "the", and the corresponding words in other languages Iterate over all sentence pairs again, and use the indexes for the source and target words for a given sentence pairs to create a correlation matrix Here is an example of a correlation matrix between an English and a German sentence. We can see the challenges discussed above. In the image, there is an example of the alignment between an English and German sentence, showing the correlations between words, and the green cells are the correct alignment points that should be identified by the word-alignment algorithm. Here is some of what I tried: It is possible in some cases that the intended alignment is simply the word pair with the highest correlation in its respective column and row, but in many cases it's not. I have tried things like Dijkstra's algorithm to draw a path connecting the alignment points, but it doesn't seem to work this way, because it seems you can jump back and forth to earlier words in the sentence because of the word order, and there is no sensible way to skip words for which there is no alignment. 
I think the optimum solution will involve something like expanding rectangles which start from the most likely correspondences, and span many-to-many correspondences, and skip words with no alignment, but I'm not exactly sure what would be a good way to implement this Here is the code I am using: import random src_words=["I","know","this"] trg_words=["Ich","kenne","das"] def match_indexes(word1,word2): return random.random() #adjust this to get the actual correlation value all_pairs_vals=[] #list for all the source (src) and taget (trg) indexes and the corresponding correlation values for i in range(len(src_words)): #iterate over src indexes src_word=src_words[i] #identify the correponding src word for j in range(len(trg_words)): #iterate over trg indexes trg_word=trg_words[j] #identify the correponding trg word val=match_indexes(src_word,trg_word) #get the matching value from the inverted indexes of each word (or from the data provided in the speadsheet) all_pairs_vals.append((i,j,val)) #add the sentence indexes for scr and trg, and the corresponding val all_pairs_vals.sort(key=lambda x:-x[-1]) #sort the list in descending order, to get the pairs with the highest correlation first selected_alignments=[] used_i,used_j=[],[] #exclude the used rows and column indexes for i0,j0,val0 in all_pairs_vals: if i0 in used_i: continue #if the current column index i0 has been used before, exclude current pair-value if j0 in used_j: continue #same if the current row was used before selected_alignments.append((i0,j0)) #otherwise, add the current pair to the final alignment point selection used_i.append(i0) #and include it in the used row and column indexes so that it will not be used again used_j.append(j0) for a in all_pairs_vals: #list all pairs and indicate which ones were selected i0,j0,val0=a if (i0,j0) in selected_alignments: print(a, "<<<<") else: print(a) It's problematic because it doesn't accomodate the many-to-many, or even the one to many alignments, and can err easily in the beginning by selecting a wrong pair with highest correlation, excluding its row and column from future selection. A good algorithm would factor in that a certain pair has the highest correlation in its respective row/column, but would also consider the proximity to other pairs with high correlations. Here is some data to try if you like, it's in Google sheets: https://docs.google.com/spreadsheets/d/1-eO47RH6SLwtYxnYygow1mvbqwMWVqSoAhW64aZrubo/edit?usp=sharing | I highly recommend testing Awesome-Align. It relies on multilingual BERT (mBERT) and the results look very promising. I even tested it with Arabic, and it did a great job on a difficult alignment example since Arabic is a morphology-rich language, and I believe it would be more challenging than a Latin-based language such as German. As you can see, one word in Arabic corresponds to multiple words in English, and yet Awesome-Align managed to handle the many-to-many mapping to a great extent. You may give it a try and I believe it will meet your needs. There is also a Google Colab demo at https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing#scrollTo=smW6s5JJflCN Good luck! | 8 | 6 |
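For the inverted-index correlation the question describes (intersection of sentence ids divided by their union), a minimal sketch could look like the following; the helper names are made up, and it could replace the random match_indexes stub.

from collections import defaultdict

def build_inverted_index(sentences):
    # word -> set of sentence ids in which it occurs
    index = defaultdict(set)
    for sent_id, words in enumerate(sentences):
        for w in words:
            index[w].add(sent_id)
    return index

def correlation(src_word, trg_word, src_index, trg_index):
    # |intersection| / |union| of the two sentence-id sets (Jaccard similarity)
    src_ids = src_index.get(src_word, set())
    trg_ids = trg_index.get(trg_word, set())
    union = src_ids | trg_ids
    return len(src_ids & trg_ids) / len(union) if union else 0.0

src_sents = [["I", "know", "this"], ["I", "see", "the", "world"]]
trg_sents = [["Ich", "kenne", "das"], ["Ich", "sehe", "die", "Welt"]]
src_index = build_inverted_index(src_sents)
trg_index = build_inverted_index(trg_sents)
print(correlation("world", "Welt", src_index, trg_index))  # 1.0 on this toy corpus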
59,620,543 | 2020-1-7 | https://stackoverflow.com/questions/59620543/setup-py-install-running-egg-info-error-errno-13-permission-denied-regarless | I encountered a bug-like feature of setup.py where I am getting the Permission denied error regardless of where I want to install the package without root privilege. I have a toy python package with a few tiny files, and there is no problem building it. There is nothing special in the setup.py file. I will list one or two of them. setup ( name='pmsi', entry_points={ 'console_scripts': [ 'pmsi = pmsi.pmsi:main', ] }, ) sudo python3 setup.py install Gave me no problem at all. I need to install this package to a particular place and have tried --user, --home, --prefix options; all gave me the same error message at the egg_info step. python3 setup.py install --user running install running bdist_egg running egg_info error: [Errno 13] Permission denied It appears that the install process always tries to copy the egg_info to some system place where I don't have permission to write. I am not an expert on setup.py, there must be some default rule that I can overwrite either on the command line or in setup.py. Or should I always install to a system place as root (that seems to be a bad choice, what if you want to test before a system install). | The reason for this particular difficulty is that I had previously run sudo in the package directory, which created some directories owned by root. Afterwards, I ran as a regular user and got permission issues. The fix is an ownership change. cd ~/lib/python3.8/site-packages sudo chown -R myuid:mygroup * After the above action, the problem was resolved. The actual python lib dir may be different in different setups. | 7 | 9 |
59,637,048 | 2020-1-7 | https://stackoverflow.com/questions/59637048/how-to-find-element-by-part-of-its-id-name-in-selenium-with-python | I'm using selenium with python,now I want to locate an element by part of its id name,what can I do? For example,now I've already located a item by id name coption5 : sixth_item = driver.find_element_by_id("coption5") Is there anyway I can locate this element only by using coption? | To find the element which you have located with: sixth_item = driver.find_element_by_id("coption5") To locate this element only by using coption you can use can use either of the following Locator Strategies: Using XPATH and starts-with(): sixth_item = driver.find_element_by_xpath("//*[starts-with(@id, 'coption')]") Using XPATH and contains(): sixth_item = driver.find_element_by_xpath("//*[contains(@id, 'coption')]") Using CSS_SELECTOR and ^ (wildcard of starts-with): sixth_item = driver.find_element_by_css_selector("[id^='coption']") Using CSS_SELECTOR and * (wildcard of contains): sixth_item = driver.find_element_by_css_selector("[id*='coption']") Reference You can find a detailed discussion on dynamic CssSelectors in: How to get selectors with dynamic part inside using Selenium with Python? Java Selenium webdriver expression finding dynamic element by ccs that starts with and ends with How to click a dynamic link with in a drupal 8 website using xpath/css selector while automating through Selenium and Python Finding elements by CSS selector with ChromeDriver (Selenium) in Python | 10 | 27 |
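Note that the find_element_by_* helpers shown above were deprecated in Selenium 4 and removed in later 4.x releases; here is a hedged sketch of the equivalent partial-id lookups with the newer API (the URL is only a placeholder, and a matching chromedriver is assumed to be on PATH).

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # assumes chromedriver is available
driver.get("https://example.com")    # placeholder URL, not from the question

# Same partial-id lookups with the find_element(By..., value) API
item_css = driver.find_element(By.CSS_SELECTOR, "[id^='coption']")
item_xpath = driver.find_element(By.XPATH, "//*[starts-with(@id, 'coption')]")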
59,622,573 | 2020-1-7 | https://stackoverflow.com/questions/59622573/pyspark-groupby-dataframe-without-aggregation-or-count | Can it iterate through the Pyspark groupBy dataframe without aggregation or count? For example code in Pandas: for i, d in df2: mycode .... ^^ if using pandas ^^ Is there a difference in how to iterate groupby in Pyspark or have to use aggregation and count? | At best you can use .first , .last to get respective values from the groupBy but not all in the way you can get in pandas. ex: from pyspark.sql import functions as f df.groupBy(df['some_col']).agg(f.first(df['col1']), f.first(df['col2'])).show() Since their is a basic difference between the way the data is handled in pandas and spark not all functionalities can be used in the same way. Their are a few work arounds to get what you want like: for diamonds DataFrame: +---+-----+---------+-----+-------+-----+-----+-----+----+----+----+ |_c0|carat| cut|color|clarity|depth|table|price| x| y| z| +---+-----+---------+-----+-------+-----+-----+-----+----+----+----+ | 1| 0.23| Ideal| E| SI2| 61.5| 55.0| 326|3.95|3.98|2.43| | 2| 0.21| Premium| E| SI1| 59.8| 61.0| 326|3.89|3.84|2.31| | 3| 0.23| Good| E| VS1| 56.9| 65.0| 327|4.05|4.07|2.31| | 4| 0.29| Premium| I| VS2| 62.4| 58.0| 334| 4.2|4.23|2.63| | 5| 0.31| Good| J| SI2| 63.3| 58.0| 335|4.34|4.35|2.75| +---+-----+---------+-----+-------+-----+-----+-----+----+----+----+ You can use: l=[x.cut for x in diamonds.select("cut").distinct().rdd.collect()] def groups(df,i): import pyspark.sql.functions as f return df.filter(f.col("cut")==i) #for multi grouping def groups_multi(df,i): import pyspark.sql.functions as f return df.filter((f.col("cut")==i) & (f.col("color")=='E'))# use | for or. for i in l: groups(diamonds,i).show(2) output : +---+-----+-------+-----+-------+-----+-----+-----+----+----+----+ |_c0|carat| cut|color|clarity|depth|table|price| x| y| z| +---+-----+-------+-----+-------+-----+-----+-----+----+----+----+ | 2| 0.21|Premium| E| SI1| 59.8| 61.0| 326|3.89|3.84|2.31| | 4| 0.29|Premium| I| VS2| 62.4| 58.0| 334| 4.2|4.23|2.63| +---+-----+-------+-----+-------+-----+-----+-----+----+----+----+ only showing top 2 rows +---+-----+-----+-----+-------+-----+-----+-----+----+----+----+ |_c0|carat| cut|color|clarity|depth|table|price| x| y| z| +---+-----+-----+-----+-------+-----+-----+-----+----+----+----+ | 1| 0.23|Ideal| E| SI2| 61.5| 55.0| 326|3.95|3.98|2.43| | 12| 0.23|Ideal| J| VS1| 62.8| 56.0| 340|3.93| 3.9|2.46| +---+-----+-----+-----+-------+-----+-----+-----+----+----+----+ ... In Function groups you can decide what kind of grouping you want for the data. It is a simple filter condition but it will get you all the groups separately. | 7 | 3 |
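Another workaround (my own sketch, not from the answer) is to gather each group's rows with collect_list and iterate over the groups on the driver; this only makes sense when every group fits comfortably in driver memory. The CSV path below is a placeholder.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
# Placeholder path; assumes a diamonds CSV like the one in the answer.
diamonds = spark.read.csv("diamonds.csv", header=True, inferSchema=True)

# Collect each group's values into a list, then iterate on the driver.
grouped = diamonds.groupBy("cut").agg(F.collect_list("price").alias("prices"))

for row in grouped.collect():
    print(row["cut"], len(row["prices"]))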
59,563,085 | 2020-1-2 | https://stackoverflow.com/questions/59563085/how-to-stop-training-when-it-hits-a-specific-validation-accuracy | I am training a convolutional network and I want to stop training once the validation error hits 90%. I thought about using EarlyStopping and setting baseline to .90 but then it stops training whenever the validation accuracy is below that baseline for given number of epochs(which is just 0 here). So my code is: es=EarlyStopping(monitor='val_acc',mode='auto',verbose=1,baseline=.90,patience=0) history = model.fit(training_images, training_labels, validation_data=(test_images, test_labels), epochs=30, verbose=2,callbacks=[es]) When I use this code my training stops after the first epoch with given results: Train on 60000 samples, validate on 10000 samples Epoch 1/30 60000/60000 - 7s - loss: 0.4600 - acc: 0.8330 - val_loss: 0.3426 - val_acc: 0.8787 What else can I try to stop my training once the validation accuracy hits 90% or above? Here is the rest of the code: tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(152, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer=Adam(learning_rate=0.001),loss='sparse_categorical_crossentropy', metrics=['accuracy']) es=EarlyStopping(monitor='val_acc',mode='auto',verbose=1,baseline=.90,patience=0) history = model.fit(training_images, training_labels, validation_data=(test_images, test_labels), epochs=30, verbose=2,callbacks=[es]) | Early Stopping Callback will search for a value that stopped increasing (or decreasing) so it's not a good use for your problem. However tf.keras allows you to use custom callbacks. For your example: class MyThresholdCallback(tf.keras.callbacks.Callback): def __init__(self, threshold): super(MyThresholdCallback, self).__init__() self.threshold = threshold def on_epoch_end(self, epoch, logs=None): val_acc = logs["val_acc"] if val_acc >= self.threshold: self.model.stop_training = True For TF version 2.3 or above, you might have to use "val_accuracy" instead of "val_acc". Thank you Christian Westbrook for the note in the comments. The above Callback, on each epoch end, will extract Validation Accuracy from all available logs. Then it will compare it with user defined threshold (in your case 90%). If the criterion is met the training will be stopped. With that you can simply call: my_callback = MyThresholdCallback(threshold=0.9) history = model.fit(training_images, training_labels, validation_data=(test_images, test_labels), epochs=30, verbose=2, callbacks=[my_callback]) Alternatively, you can use def on_batch_end(...) if you want to stop immediately. This however, requires parameters batch, logs instead of epoch, logs. | 9 | 10 |
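A slight, hedged variant of the callback above tolerates both metric names ("val_acc" vs "val_accuracy"), so it keeps working across TF/Keras versions.

import tensorflow as tf

class ThresholdCallback(tf.keras.callbacks.Callback):
    """Stop training once validation accuracy reaches `threshold`."""

    def __init__(self, threshold=0.9):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # The key is "val_acc" on older TF/Keras and "val_accuracy" on newer releases.
        val_acc = logs.get("val_accuracy", logs.get("val_acc"))
        if val_acc is not None and val_acc >= self.threshold:
            self.model.stop_training = True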
59,549,829 | 2020-1-1 | https://stackoverflow.com/questions/59549829/how-do-i-downgrade-my-version-of-python-from-3-7-5-to-3-6-5-on-ubuntu | So currently, I have ubuntu 19. And it comes by default with python 3.7.5. I need to downgrade to 3.6.5. EDIT: I am using virtualenv | The following talks about upgrade from 3.6.7 to 3.7.0 but you can use the same process for downgrade. You should not change the system python unless you really know what you're doing First Install Pyenv Installlation Instructions are here Look at Pyenv Options $ pyenv pyenv 1.2.14 Usage: pyenv <command> [<args>] Some useful pyenv commands are: commands List all available pyenv commands activate Activate virtual environment commands List all available pyenv commands deactivate Deactivate virtual environment doctor Verify pyenv installation and deevlopment tools to build pythons. exec Run an executable with the selected Python version global Set or show the global Python version help Display help for a command hooks List hook scripts for a given pyenv command init Configure the shell environment for pyenv install Install a Python version using python-build local Set or show the local application-specific Python version prefix Display prefix for a Python version rehash Rehash pyenv shims (run this after installing executables) root Display the root directory where versions and shims are kept shell Set or show the shell-specific Python version shims List existing pyenv shims uninstall Uninstall a specific Python version --version Display the version of pyenv version Show the current Python version and its origin version-file Detect the file that sets the current pyenv version version-name Show the current Python version version-origin Explain how the current Python version is set versions List all Python versions available to pyenv virtualenv Create a Python virtualenv using the pyenv-virtualenv plugin virtualenv-delete Uninstall a specific Python virtualenv virtualenv-init Configure the shell environment for pyenv-virtualenv virtualenv-prefix Display real_prefix for a Python virtualenv version virtualenvs List all Python virtualenvs found in `$PYENV_ROOT/versions/*'. whence List all Python versions that contain the given executable which Display the full path to an executable See `pyenv help <command>' for information on a specific command. For full documentation, see: https://github.com/pyenv/pyenv#readme Look at Python Versions $ pyenv versions system * 3.6.7 (set by /home/taarimalta/.pyenv/version) Install a new Python $ pyenv install 3.7.0 Installing Python-3.7.0... WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib? WARNING: The Python readline extension was not compiled. Missing the GNU readline lib? WARNING: The Python sqlite3 extension was not compiled. Missing the SQLite3 lib? Installed Python-3.7.0 to /home/taarimalta/.pyenv/versions/3.7.0 If you run into an issue with _ctypes install libffi-dev library Now look at the versions $ pyenv versions system * 3.6.7 (set by /home/taarimalta/.pyenv/version) 3.7.0 Select 3.7.0 for local environment $ pyenv local 3.7.0 See that the version changed $ pyenv versions system 3.6.7 * 3.7.0 (set by /home/taarimalta/.python-version) $ python Python 3.7.0 (default, Jan 1 2020, 10:52:57) [GCC 9.2.1 20191008] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> Switch to a different folder cd ../project2 pyenv versions system * 3.6.7 (set by /home/taarimalta/.pyenv/version) 3.7.0 The python version may be different here depending on which python version you have set locally Set pyenv version globally This globally sets a python version for a user pyenv global 3.7.0 Note that pyenv sets local version by adding a .python-version file $ pyenv local 3.7.0 $ cat .python-version 3.7.0 Note that pyenv knows the global version by looking at the ~/.pyenv/version file cat ~/.pyenv/version 3.8.2 | 9 | 30 |
59,618,368 | 2020-1-6 | https://stackoverflow.com/questions/59618368/sklearn-extra-installation-issue | [in]: from sklearn_extra.cluster import KMedoids [out]: ModuleNotFoundError: No module named 'sklearn_extra' Then, I tried installing sklearn_extra via [in]: python -m pip install sklearn_extra [out]: ERROR: Could not find a version that satisfies the requirement sklearn_extra (from versions: none) ERROR: No matching distribution found for sklearn_extra Then, I went to installation part of the website(https://scikit-learn-extra.readthedocs.io/en/latest/install.html) and did what it said: Installation Dependencies scikit-learn-extra requires, Python (>=3.5) scikit-learn (>=0.21), and its dependencies Cython (>0.28) User installation Latest development version can be installed with, pip install https://github.com/scikit-learn-contrib/scikit-learn-extra/archive/master.zip [in]: pip install https://github.com/scikit-learn-contrib/scikit-learn-extra/archive/master.zip [out]: ERROR: Command errored out with exit status 1: command: 'c:\users\m\appdata\local\programs\python\python37\python.exe' 'c:\users\m\appdata\local\programs\python\python37\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\m\AppData\Local\Temp\pip-build-env-yopprv13\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.28' numpy==1.14.5 cwd: None Complete output (14 lines): Traceback (most recent call last): File "c:\users\m\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\users\m\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "c:\users\m\appdata\local\programs\python\python37\lib\site-packages\pip\__main__.py", line 16, in <module> from pip._internal.main import main as _main # isort:skip # noqa File "c:\users\m\appdata\local\programs\python\python37\lib\site-packages\pip\_internal\main.py", line 8, in <module> import locale File "c:\users\m\appdata\local\programs\python\python37\lib\locale.py", line 16, in <module> import re File "c:\users\m\appdata\local\programs\python\python37\lib\re.py", line 143, in <module> class RegexFlag(enum.IntFlag): AttributeError: module 'enum' has no attribute 'IntFlag' I checked versions of Cython,Python and sklearn they satisfy the required range. Edit: the solution is to uninstall enum34 for me thanks to Balraj Ashwatt's comment. pip uninstall -y enum34 Then I was able to install sklearn_extra | Uninstalling enum34 worked for me and then I was able to install sklearn_extra pip uninstall -y enum34 | 12 | 0 |
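After reinstalling, a tiny smoke test confirms that the import works; the sample data and parameter values below are arbitrary, chosen only for illustration.

import numpy as np
from sklearn_extra.cluster import KMedoids

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.2, 7.9]])
model = KMedoids(n_clusters=2, random_state=0).fit(X)
print(model.labels_)           # cluster assignment per sample
print(model.cluster_centers_)  # the chosen medoids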
59,622,277 | 2020-1-7 | https://stackoverflow.com/questions/59622277/attributeerror-module-object-has-no-attribute-set-random-seed-when-i | The complete set of error messages are shown below: (FYP_v2) sg97-ubuntu@SG97-ubuntu:~/SGSN$ python2 ./train.py Traceback (most recent call last): File "./train.py", line 165, in <module> main() File "./train.py", line 65, in main tf.set_random_seed(args.random_seed) AttributeError: 'module' object has no attribute 'set_random_seed' (FYP_v2) sg97-ubuntu@SG97-ubuntu:~/SGSN$ I checked out this (AttributeError: 'module' object has no attribute 'set_random_seed') question on stackoverflow but it doesn't really apply to my situation since I'm not using Caffe. I've also provided the python code below for reference from __future__ import print_function import argparse from datetime import datetime from random import shuffle import os import sys import time import math import tensorflow as tf import numpy as np from utils import * from train_image_reader import * from net import * parser = argparse.ArgumentParser(description='') parser.add_argument("--snapshot_dir", default='./snapshots', help="path of snapshots") parser.add_argument("--image_size", type=int, default=256, help="load image size") parser.add_argument("--x_data_txt_path", default='./datasets/x_traindata.txt', help="txt of x images") parser.add_argument("--y_data_txt_path", default='./datasets/y_traindata.txt', help="txt of y images") parser.add_argument("--random_seed", type=int, default=1234, help="random seed") parser.add_argument('--base_lr', type=float, default=0.0002, help='initial learning rate for adam') parser.add_argument('--epoch', dest='epoch', type=int, default=50, help='# of epoch') parser.add_argument('--epoch_step', dest='epoch_step', type=int, default=20, help='# of epoch to decay lr') parser.add_argument("--lamda", type=float, default=10.0, help="L1 lamda") parser.add_argument('--beta1', dest='beta1', type=float, default=0.5, help='momentum term of adam') parser.add_argument("--summary_pred_every", type=int, default=200, help="times to summary.") parser.add_argument("--save_pred_every", type=int, default=8000, help="times to save.") parser.add_argument("--x_image_forpath", default='./datasets/train/X/images/', help="forpath of x training datas.") parser.add_argument("--x_label_forpath", default='./datasets/train/X/labels/', help="forpath of x training labels.") parser.add_argument("--y_image_forpath", default='./datasets/train/Y/images/', help="forpath of y training datas.") parser.add_argument("--y_label_forpath", default='./datasets/train/Y/labels/', help="forpath of y training labels.") args = parser.parse_args() def save(saver, sess, logdir, step): model_name = 'model' checkpoint_path = os.path.join(logdir, model_name) if not os.path.exists(logdir): os.makedirs(logdir) saver.save(sess, checkpoint_path, global_step=step) print('The checkpoint has been created.') def get_data_lists(data_path): f = open(data_path, 'r') datas=[] for line in f: data = line.strip("\n") datas.append(data) return datas def l1_loss(src, dst): return tf.reduce_mean(tf.abs(src - dst)) def gan_loss(src, dst): return tf.reduce_mean((src-dst)**2) def main(): if not os.path.exists(args.snapshot_dir): os.makedirs(args.snapshot_dir) x_datalists = get_data_lists(args.x_data_txt_path) # a list of x images y_datalists = get_data_lists(args.y_data_txt_path) # a list of y images tf.set_random_seed(args.random_seed) x_img = tf.placeholder(tf.float32,shape=[1, args.image_size, args.image_size,3],name='x_img') 
x_label = tf.placeholder(tf.float32,shape=[1, args.image_size, args.image_size,3],name='x_label') y_img = tf.placeholder(tf.float32,shape=[1, args.image_size, args.image_size,3],name='y_img') y_label = tf.placeholder(tf.float32,shape=[1, args.image_size, args.image_size,3],name='y_label') fake_y = generator(image=x_img, reuse=False, name='generator_x2y') # G fake_x_ = generator(image=fake_y, reuse=False, name='generator_y2x') # S fake_x = generator(image=y_img, reuse=True, name='generator_y2x') # G' fake_y_ = generator(image=fake_x, reuse=True, name='generator_x2y') # S' dy_fake = discriminator(image=fake_y, gen_label = x_label, reuse=False, name='discriminator_y') # D dx_fake = discriminator(image=fake_x, gen_label = y_label, reuse=False, name='discriminator_x') # D' dy_real = discriminator(image=y_img, gen_label = y_label, reuse=True, name='discriminator_y') # D dx_real = discriminator(image=x_img, gen_label = x_label, reuse=True, name='discriminator_x') #D' final_loss = gan_loss(dy_fake, tf.ones_like(dy_fake)) + gan_loss(dx_fake, tf.ones_like(dx_fake)) + args.lamda*l1_loss(x_label, fake_x_) + args.lamda*l1_loss(y_label, fake_y_) # final objective function dy_loss_real = gan_loss(dy_real, tf.ones_like(dy_real)) dy_loss_fake = gan_loss(dy_fake, tf.zeros_like(dy_fake)) dy_loss = (dy_loss_real + dy_loss_fake) / 2 dx_loss_real = gan_loss(dx_real, tf.ones_like(dx_real)) dx_loss_fake = gan_loss(dx_fake, tf.zeros_like(dx_fake)) dx_loss = (dx_loss_real + dx_loss_fake) / 2 dis_loss = dy_loss + dx_loss # discriminator loss final_loss_sum = tf.summary.scalar("final_objective", final_loss) dx_loss_sum = tf.summary.scalar("dx_loss", dx_loss) dy_loss_sum = tf.summary.scalar("dy_loss", dy_loss) dis_loss_sum = tf.summary.scalar("dis_loss", dis_loss) discriminator_sum = tf.summary.merge([dx_loss_sum, dy_loss_sum, dis_loss_sum]) x_images_summary = tf.py_func(cv_inv_proc, [x_img], tf.float32) #(1, 256, 256, 3) float32 y_fake_cv2inv_images_summary = tf.py_func(cv_inv_proc, [fake_y], tf.float32) #(1, 256, 256, 3) float32 x_label_summary = tf.py_func(label_proc, [x_label], tf.float32) #(1, 256, 256, 3) float32 x_gen_label_summary = tf.py_func(label_inv_proc, [fake_x_], tf.float32) #(1, 256, 256, 3) float32 image_summary = tf.summary.image('images', tf.concat(axis=2, values=[x_images_summary, y_fake_cv2inv_images_summary, x_label_summary, x_gen_label_summary]), max_outputs=3) summary_writer = tf.summary.FileWriter(args.snapshot_dir, graph=tf.get_default_graph()) g_vars = [v for v in tf.trainable_variables() if 'generator' in v.name] d_vars = [v for v in tf.trainable_variables() if 'discriminator' in v.name] lr = tf.placeholder(tf.float32, None, name='learning_rate') d_optim = tf.train.AdamOptimizer(lr, beta1=args.beta1) g_optim = tf.train.AdamOptimizer(lr, beta1=args.beta1) d_grads_and_vars = d_optim.compute_gradients(dis_loss, var_list=d_vars) d_train = d_optim.apply_gradients(d_grads_and_vars) # update weights of D and D' g_grads_and_vars = g_optim.compute_gradients(final_loss, var_list=g_vars) g_train = g_optim.apply_gradients(g_grads_and_vars) # update weights of G, G', S and S' train_op = tf.group(d_train, g_train) config = tf.ConfigProto() config.gpu_options.allow_growth = True sess = tf.Session(config=config) init = tf.global_variables_initializer() sess.run(init) saver = tf.train.Saver(var_list=tf.global_variables(), max_to_keep=50) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord, sess=sess) counter = 0 # training step for epoch in range(args.epoch): 
shuffle(x_datalists) # change the order of x images shuffle(y_datalists) # change the order of y images lrate = args.base_lr if epoch < args.epoch_step else args.base_lr*(args.epoch-epoch)/(args.epoch-args.epoch_step) for step in range(len(x_datalists)): counter += 1 x_image_resize, x_label_resize, y_image_resize, y_label_resize = TrainImageReader(args.x_image_forpath, args.x_label_forpath, args.y_image_forpath, args.y_label_forpath, x_datalists, y_datalists, step, args.image_size) batch_x_image = np.expand_dims(np.array(x_image_resize).astype(np.float32), axis = 0) batch_x_label = np.expand_dims(np.array(x_label_resize).astype(np.float32), axis = 0) batch_y_image = np.expand_dims(np.array(y_image_resize).astype(np.float32), axis = 0) batch_y_label = np.expand_dims(np.array(y_label_resize).astype(np.float32), axis = 0) start_time = time.time() feed_dict = { lr : lrate, x_img : batch_x_image, x_label : batch_x_label, y_img : batch_y_image, y_label : batch_y_label} if counter % args.save_pred_every == 0: final_loss_value, dis_loss_value, _ = sess.run([final_loss, dis_loss, train_op], feed_dict=feed_dict) save(saver, sess, args.snapshot_dir, counter) elif counter % args.summary_pred_every == 0: final_loss_value, dis_loss_value, final_loss_sum_value, discriminator_sum_value, image_summary_value, _ = \ sess.run([final_loss, dis_loss, final_loss_sum, discriminator_sum, image_summary, train_op], feed_dict=feed_dict) summary_writer.add_summary(final_loss_sum_value, counter) summary_writer.add_summary(discriminator_sum_value, counter) summary_writer.add_summary(image_summary_value, counter) else: final_loss_value, dis_loss_value, _ = \ sess.run([final_loss, dis_loss, train_op], feed_dict=feed_dict) print('epoch {:d} step {:d} \t final_loss = {:.3f}, dis_loss = {:.3f}'.format(epoch, step, final_loss_value, dis_loss_value)) coord.request_stop() coord.join(threads) if __name__ == '__main__': main() | Use tf.random.set_seed() instead of tf.set_random_seed. Link to the tensorflow doc here: https://www.tensorflow.org/api_docs/python/tf/random/set_seed?version=stable | 7 | 31 |
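If the training script above needs to run under both TF 1.x and TF 2.x, a small guard like the following (a sketch, not part of the original answer) avoids hard-coding either name; tf.compat.v1.set_random_seed is another option on TF 2.x.

import tensorflow as tf

def set_global_seed(seed):
    # TF 1.x exposes tf.set_random_seed; TF 2.x replaced it with tf.random.set_seed.
    if hasattr(tf, "set_random_seed"):
        tf.set_random_seed(seed)
    else:
        tf.random.set_seed(seed)

set_global_seed(1234)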
59,589,647 | 2020-1-4 | https://stackoverflow.com/questions/59589647/what-is-the-most-time-efficient-way-to-remove-unordered-duplicates-in-a-2d-array | I've generated a list of combinations using itertools and I'm getting a result that looks like this: nums = [-5,5,4,-3,0,0,4,-2] x = [x for x in set(itertools.combinations(nums, 4)) if sum(x)==target] >>> x = [(-5, 5, 0, 4), (-5, 5, 4, 0), (5, 4, -3, -2), (5, -3, 4, -2)] What is the most time-efficient (complexity-wise) way of removing unordered duplicates, such as x[0] and x[1], which are duplicates of each other? Is there anything built in to handle this? My general approach would be to create a counter of all elements in one and compare it to the next. Would this be the best approach? Thank you for any guidance. | Since you want to find unordered duplicates, the best way to go is normalization plus typecasting: sort each tuple so that order no longer matters, then build a set of the results. Because a set can only contain immutable (hashable) elements, the sorted lists are cast back to tuples, giving a set of tuples. Note: The best way to eliminate duplicates is by making a set of the given elements. >>> set(map(tuple,map(sorted,x))) {(-3, -2, 4, 5), (-5, 0, 4, 5)} | 7 | 7 |
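Applied back to the original combination search, the same normalization can be done while generating, so duplicates never enter the result. This is only a sketch: target = 4 is an assumption, since the question never states its value.

import itertools

nums = [-5, 5, 4, -3, 0, 0, 4, -2]
target = 4  # assumed; the question does not define target

seen = set()
unique_combos = []
for combo in itertools.combinations(nums, 4):
    if sum(combo) != target:
        continue
    key = tuple(sorted(combo))   # order-insensitive fingerprint
    if key not in seen:
        seen.add(key)
        unique_combos.append(key)

print(unique_combos)  # [(-5, 0, 4, 5), (-3, -2, 4, 5)]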
59,580,517 | 2020-1-3 | https://stackoverflow.com/questions/59580517/how-to-document-callable-classes-annotated-with-dataclass-in-sphinx | I researched this topic and cannot see a clear solution. There is a similar SO question My problem is that I have a class with attr.dataclass and typing_extensions.final annotations and I don't want them to be documented but I still want to describe the class from the point of how it would be called. For instance, @final @dataclass(frozen=True, slots=True) class Casting(object): _int_converter_function = int _float_converter_function = float def __call__(self, casting, value_to_cast): if casting['type'] == 'integer': return self._int_converter_function(value_to_cast) return self._float_converter_function(value_to_cast) This is approximately equivalent to this (which is far away from being accurate): class Casting(object): def __init__( self, int_converter_function = int, float_converter_function = float, ): self.int_converter_function = int_converter_function self.float_converter_function = float_converter_function def converter(self, casting, value): self.value = value yield type = casting['type'] if type == 'integer': yield self.int_converter_function(value) else: yield self.float_converter_function(value) and with the latest it is clear that I can document each method with docstrings and in Sphinx do: .. autoclass:: package.Casting :members: .. automethod:: __init__(self, int_converter_function, float_converter_function) How to do the same with annotations? UPDATE: I figured out that my questions should be more specific. I want to Eliminate dataclass completely from the doc but nevertheless, keep the class in the documentation. It messes classes so much that the docs are unreadable. Make a docstring on the __init__ but also keep it separate from the callable description. I left a comment. Example of the doc: """Cast one type of code to another. Constructor arguments: :param int_converter_function: function to convert to int :param float_converter_function: function to convert to float Callable arguments: :param casting: :term:`casting` object :type casting: dict :param value_to_cast: input value :return: Casted value Example >>> cast = Casting(int) >>> cast({'type': 'integer'}, '123') 123 >>> cast({'type': 'decimal'}, '123.12') Decimal('123.12') """ UPDATE 2: The full class as it is below: # -*- coding: utf-8 -*- from attr import dataclass from typing_extensions import final @final @dataclass(frozen=True, slots=True) class Casting(object): """Cast one type of code to another. Constructor arguments: :param int_converter_function: function to convert to int :param float_converter_function: function to convert to float Callable arguments: :param casting: :term:`casting` object :type casting: dict :param value_to_cast: input value :return: Casted value Example >>> cast = Casting(int) >>> cast({'type': 'integer'}, '123') 123 >>> cast({'type': 'decimal'}, '123.12') Decimal('123.12') """ _int_converter_function = int _float_converter_function = float def __call__(self, casting, value_to_cast): if casting['type'] == 'integer': return self._int_converter_function(value_to_cast) return self._float_converter_function(value_to_cast) I want to eliminate package.casting.dataclass from the doc. | As @mzjn mentioned in comments :exclude-members: dataclass should do the job if automodule configured correctly. I made a dumb mistake that was hard to track. 
If you write :exclude-members: and <name-of-module> on separate lines, then all classes in the file will be ignored. The other part, about making the constructor and callable-function documentation look pretty, I extracted into a separate SO question. | 7 | 3 |
59,623,952 | 2020-1-7 | https://stackoverflow.com/questions/59623952/weird-issue-when-using-dataclass-and-property-together | I ran into a strange issue while trying to use a dataclass together with a property. I have it down to a minumum to reproduce it: import dataclasses @dataclasses.dataclass class FileObject: _uploaded_by: str = dataclasses.field(default=None, init=False) uploaded_by: str = None def save(self): print(self.uploaded_by) @property def uploaded_by(self): return self._uploaded_by @uploaded_by.setter def uploaded_by(self, uploaded_by): print('Setter Called with Value ', uploaded_by) self._uploaded_by = uploaded_by p = FileObject() p.save() This outputs: Setter Called with Value <property object at 0x7faeb00150b0> <property object at 0x7faeb00150b0> I would expect to get None instead of Am I doing something wrong here or have I stumbled across a bug? After reading @juanpa.arrivillaga answer I thought that making uploaded_by and InitVar might fix the issue, but it still return a property object. I think it is because of the this that he said: the datalcass machinery interprets any assignment to a type-annotated variable in the class body as the default value to the created __init__. The only option I can find that works with the default value is to remove the uploadedby from the dataclass defintion and write an actual __init__. That has an unfortunate side effect of requiring you to write an __init__ for the dataclass manually which negates some of the value of using a dataclass. Here is what I did: import dataclasses @dataclasses.dataclass class FileObject: _uploaded_by: str = dataclasses.field(default=None, init=False) uploaded_by: dataclasses.InitVar=None other_attrs: str = None def __init__(self, uploaded_by=None, other_attrs=None): self._uploaded_by = uploaded_by self.other_attrs = other_attrs def save(self): print("Uploaded by: ", self.uploaded_by) print("Other Attrs: ", self.other_attrs) @property def uploaded_by(self): if not self._uploaded_by: print("Doing expensive logic that should not be repeated") return self._uploaded_by p = FileObject(other_attrs="More Data") p.save() p2 = FileObject(uploaded_by='Already Computed', other_attrs="More Data") p2.save() Which outputs: Doing expensive logic that should not be repeated Uploaded by: None Other Attrs: More Data Uploaded by: Already Computed Other Attrs: More Data The negatives of doing this: You have to write boilerplate __init__ (My actual use case has about 20 attrs) You lose the uploaded_by in the __repr__, but it is there in _uploaded_by Calls to asdict, astuple, dataclasses.replace aren't handled correctly So it's really not a fix for the issue I have filed a bug on the Python Bug Tracker: https://bugs.python.org/issue39247 | So, unfortunately, the @property syntax is always interpreted as an assignment to uploaded_by (since, well, it is). The dataclass machinery is interpreting that as a default value, hence why it is passing the property object! 
It is equivalent to this: In [11]: import dataclasses ...: ...: @dataclasses.dataclass ...: class FileObject: ...: uploaded_by: str ...: _uploaded_by: str = dataclasses.field(repr=False, init=False) ...: def save(self): ...: print(self.uploaded_by) ...: ...: def _get_uploaded_by(self): ...: return self._uploaded_by ...: ...: def _set_uploaded_by(self, uploaded_by): ...: print('Setter Called with Value ', uploaded_by) ...: self._uploaded_by = uploaded_by ...: uploaded_by = property(_get_uploaded_by, _set_uploaded_by) ...: p = FileObject() ...: p.save() Setter Called with Value <property object at 0x10761e7d0> <property object at 0x10761e7d0> Which is essentially acting like this: In [13]: @dataclasses.dataclass ...: class Foo: ...: bar:int = 1 ...: bar = 2 ...: In [14]: Foo() Out[14]: Foo(bar=2) I don't think there is a clean way around this, and perhaps it could be considered a bug, but really, not sure what the solution should be, because essentially, the datalcass machinery interprets any assignment to a type-annotated variable in the class body as the default value to the created __init__. You could perhaps either special-case the @property syntax, or maybe just the property object itself, so at least the behavior for @property and x = property(set_x, get_x) would be consistent... To be clear, the following sort of works: In [22]: import dataclasses ...: ...: @dataclasses.dataclass ...: class FileObject: ...: uploaded_by: str ...: _uploaded_by: str = dataclasses.field(repr=False, init=False) ...: @property ...: def uploaded_by(self): ...: return self._uploaded_by ...: @uploaded_by.setter ...: def uploaded_by(self, uploaded_by): ...: print('Setter Called with Value ', uploaded_by) ...: self._uploaded_by = uploaded_by ...: ...: p = FileObject(None) ...: print(p.uploaded_by) Setter Called with Value None None In [23]: FileObject() Setter Called with Value <property object at 0x1086debf0> Out[23]: FileObject(uploaded_by=<property object at 0x1086debf0>) But notice, you cannot set a useful default value! It will always take the property... Even worse, IMO, if you don't want a default value it will always create one! EDIT: Found a potential workaround! This should have been obvious, but you can just set the property object on the class. import dataclasses import typing @dataclasses.dataclass class FileObject: uploaded_by:typing.Optional[str]=None def _uploaded_by_getter(self): return self._uploaded_by def _uploaded_by_setter(self, uploaded_by): print('Setter Called with Value ', uploaded_by) self._uploaded_by = uploaded_by FileObject.uploaded_by = property( FileObject._uploaded_by_getter, FileObject._uploaded_by_setter ) p = FileObject() print(p) print(p.uploaded_by) | 8 | 6 |
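One workaround sometimes used (a sketch, under the assumption that the property object ends up as the field "default" exactly as described above) is to detect the leaked property object in __post_init__ and replace it with the intended default:

import dataclasses

@dataclasses.dataclass
class FileObject:
    uploaded_by: str = None  # overwritten below; the property becomes the "default"

    def __post_init__(self):
        # With no argument passed, the dataclass default is the property object
        # itself; detect that case and swap in the intended default value.
        if isinstance(self.uploaded_by, property):
            self._uploaded_by = None

    @property
    def uploaded_by(self):
        return self._uploaded_by

    @uploaded_by.setter
    def uploaded_by(self, value):
        self._uploaded_by = value

print(FileObject().uploaded_by)      # None
print(FileObject("me").uploaded_by)  # me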
59,594,317 | 2020-1-4 | https://stackoverflow.com/questions/59594317/how-can-i-add-a-python-package-to-a-shell-nix-if-its-not-in-nixpkgs | I have a shell.nix that I use for Python development that looks like this: with import <nixpkgs> {}; (( python37.withPackages (ps: with ps; [ matplotlib spacy pandas spacy_models.en_core_web_lg plotly ])).override({ignoreCollisions=true; })).env It works fine for these packages. The problem is, I also want to use colormath, which doesn't seem to be in nixpkgs. How can I import that package? I can generate a requirements.nix with pypi2nix -V python3 -e colormath, and I've tried to import it with something like this: with import <nixpkgs> {}; let colormath = import ./requirements.nix { inherit pkgs; } in (( python37.withPackages (ps: with ps; [ matplotlib spacy ... colormath ])).override({ignoreCollisions=true; })).env Edit: here's a gist of the output of requirements.nix. I've also tried to make a python package nix expression, as in Nixpkgs, and it seems to build OK: { buildPythonPackage , fetchPypi , networkx , numpy , lib , pytest }: buildPythonPackage rec { pname = "colormath"; version = "3.0.0"; src = fetchPypi { inherit version; inherit pname; sha256 = "3d4605af344527da0e4f9f504fad7ddbebda35322c566a6c72e28edb1ff31217"; }; checkInputs = [ pytest ]; checkPhase = '' pytest ''; # Tests seem to hang # doCheck = false; propagatedBuildInputs = [ networkx numpy ]; meta = { homepage = "https://github.com/gtaylor/python-colormath"; license = lib.licenses.bsd2; description = "Color math and conversion library."; }; } (I even made a pull request for it.) But I just don't know how to import this into my development environment. I'm still off. Is there an easy way to combine nixpkgs and non-nixpkgs python modules? | I did solve a similar problem like that: with import <nixpkgs> {}; ( let colormath = pkgs.python37Packages.buildPythonPackage rec { pname = "colormath"; version = "3.0.0"; src = pkgs.python37Packages.fetchPypi{ inherit version; inherit pname; sha256 = "05qjycgxp3p2f9n6lmic68sxmsyvgnnlyl4z9w7dl9s56jphaiix"; }; buildInputs = [ pkgs.python37Packages.numpy pkgs.python37Packages.networkx ]; }; in pkgs.python37.buildEnv.override rec { extraLibs = [ pkgs.python37 pkgs.python37Packages.matplotlib pkgs.python37Packages.spacy pkgs.python37Packages.pandas pkgs.python37Packages.spacy_models.en_core_web_lg pkgs.python37Packages.plotly colormath ]; } ).env There is probably room for improvements but this worked for me. | 7 | 7 |
59,633,558 | 2020-1-7 | https://stackoverflow.com/questions/59633558/python-based-dockerfile-throws-locale-error-unsupported-locale-setting | I have a problem with passing the host's (Centos7) locales to the python3 docker image. Only the following locales end up in the image, even though I used the suggestion described in the link below: C C.UTF-8 POSIX Why does locale.getpreferredencoding() return 'ANSI_X3.4-1968' instead of 'UTF-8'? My Dockerfile has: FROM python:3.7.5 ENV LC_ALL C.UTF-8 WORKDIR /data ADD ./requirements.txt /data/requirements.txt RUN pip install -r requirements.txt COPY . /data CMD [ "python3", "./test.py" ] When I run this command: locale.setlocale(locale.LC_ALL,'ru_RU') it throws this error: Traceback (most recent call last): File "./test.py", line 10, in <module> locale.setlocale(locale.LC_ALL,'ru_RU') File "/usr/local/lib/python3.7/locale.py", line 608, in setlocale return _setlocale(category, locale) locale.Error: unsupported locale setting If I set ENV LANG ru_RU.UTF-8 ENV LC_ALL ru_RU.UTF-8 Then I get: locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_COLLATE to default locale: No such file or directory locale.getdefaultlocale ('ru_RU', 'UTF-8') locale.getpreferredencoding UTF-8 Exception: unsupported locale setting Please, explain how can I add a ru_RU locale into the python image? | What I would do for Debian based docker image: FROM python:3.7.5 RUN apt-get update && \ apt-get install -y locales && \ sed -i -e 's/# ru_RU.UTF-8 UTF-8/ru_RU.UTF-8 UTF-8/' /etc/locale.gen && \ dpkg-reconfigure --frontend=noninteractive locales ENV LANG ru_RU.UTF-8 ENV LC_ALL ru_RU.UTF-8 and then in python: import locale locale.setlocale(locale.LC_ALL,'ru_RU.UTF-8') | 11 | 27 |
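Once the image is rebuilt, a quick in-container check can confirm the locale is really available; this is a sketch, and the expected outputs are what a correctly generated ru_RU.UTF-8 locale should produce.

import locale

locale.setlocale(locale.LC_ALL, 'ru_RU.UTF-8')
print(locale.getlocale())              # expect ('ru_RU', 'UTF-8')
print(locale.getpreferredencoding())   # expect 'UTF-8'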
59,636,923 | 2020-1-7 | https://stackoverflow.com/questions/59636923/create-a-grid-of-plots-with-holoviews-hvplot-and-set-the-max-number-of-columns | I'd like to plot several subsets of data into a grid using holoviews/hvplot, based on one dimension which contains several unique data points. Consider this example: import seaborn as sns import hvplot.pandas iris = sns.load_dataset('iris') plot = iris.hvplot.scatter(x="sepal_length", y="sepal_width", col="species") hvplot.show(plot) The above code creates several plots, based on the species part of the iris data set, resulting in the picture below: But now imagine there were not 3 different species, but twenty. The plot would get too wide, so I'd like to break the line after a few plots. But I couldn't find any "maximum columns" parameter. A normal grid expects another column to define the rows, which I don't have. Any suggestions would help. | In your case I would not create a GridSpace (by using the keywords 'row' and 'col') but a Layout. When you have a Layout you can adjust the number of columns easily with .cols(2). Using hvplot you have to use the keyword 'by' and 'subplots=True' instead of 'col'. See the code below: iris.hvplot.scatter( x='sepal_length', y='sepal_width', by='species', subplots=True, ).cols(2) Resulting plot: | 7 | 3 |
59,604,595 | 2020-1-5 | https://stackoverflow.com/questions/59604595/how-to-extract-features-from-fft | I am gathering data from X, Y and Z accelerometer sensors sampled at 200 Hz. The 3 axis are combined into a single signal called 'XYZ_Acc'. I followed tutorials on how to transform time domain signal into frequency domain using scipy fftpack library. The code I'm using is the below: from scipy.fftpack import fft # get a 500ms slice from dataframe sample500ms = df.loc[pd.to_datetime('2019-12-15 11:01:31.000'):pd.to_datetime('2019-12-15 11:01:31.495')]['XYZ_Acc'] f_s = 200 # sensor sampling frequency 200 Hz T = 0.005 # 5 milliseconds between successive observation T =1/f_s N = 100 # 100 samples in 0.5 seconds f_values = np.linspace(0.0, f_s/2, N//2) fft_values = fft(sample500ms) fft_mag_values = 2.0/N * np.abs(fft_values[0:N//2]) Then I plot the frequency vs the magnitude fig_fft = plt.figure(figsize=(5,5)) ax = fig_fft.add_axes([0,0,1,1]) ax.plot(f_values,fft_mag_values) Screenshot: My difficulty now is how to extract features out of this data, such as Irregularity, Fundamental Frequency, Flux... Can someone guide me into the right direction? Update 06/01/2019 - adding more context to my question. I'm relatively new in machine learning, so any feedback is appreciated. X, Y, Z are linear acceleration signals, sampled at 200 Hz from a smart phone. I'm trying to detect road anomalies by analysing spectral and temporal statistics. Here's a sample of the csv file which is being parsed into a pandas dataframe with the timestamp as the index. X,Y,Z,Latitude,Longitude,Speed,timestamp 0.8756,-1.3741,3.4166,35.894833,14.354166,11.38,2019-12-15 11:01:30:750 1.0317,-0.2728,1.5602,35.894833,14.354166,11.38,2019-12-15 11:01:30:755 1.0317,-0.2728,1.5602,35.894833,14.354166,11.38,2019-12-15 11:01:30:760 1.0317,-0.2728,1.5602,35.894833,14.354166,11.38,2019-12-15 11:01:30:765 -0.1669,-1.9912,-4.2043,35.894833,14.354166,11.38,2019-12-15 11:01:30:770 -0.1669,-1.9912,-4.2043,35.894833,14.354166,11.38,2019-12-15 11:01:30:775 -0.1669,-1.9912,-4.2043,35.894833,14.354166,11.38,2019-12-15 11:01:30:780 In answer to 'francis', two columns are then added via this code: df['XYZ_Acc_Mag'] = (abs(df['X']) + abs(df['Y']) + abs(df['Z'])) df['XYZ_Acc'] = (df['X'] + df['Y'] + df['Z']) 'XYZ_Acc_Mag' is to be used to extract temporal statistics. 'XYZ_Acc' is to be used to extract spectral statistics. Data 'XYZ_Acc_Mag' is then re sampled in 0.5 second frequency and temporal stats such as mean, standard-deviation, etc have been extracted in a new dataframe. Pair plots reveal the anomaly shown at time 11:01:35 in the line plot above. Now back to my original question. I'm re sampling data 'XYZ_Acc', also at 0.5 seconds, and obtaining the magnitude array 'fft_mag_values'. The question is how do I extract temporal features such as Irregularity, Fundamental Frequency, Flux out of it? | Since 'XYZ_Acc' is defined as a linear combination of the components of the signal, taking its DFT makes sense. It is equivalent to using a 1D accelometer in direction (1,1,1). But a more physical energy-related viewpoint can be adopted. Computing the DFT is similar to writing the signal as a sum of sines. If the acceleration vector writes : The corresponding velocity vector could write: and the specific kinetic energy writes: This method requires computing the DFT a each component before the magnitude corresponding to each frequency. 
Another issue is that the DFT is intended to compute the Discrete Fourrier Transform of a periodic signal, that signal being build by periodizing the frame. Nevertheless, the actual frame is never a period of a periodic signal and repeating the period creates artificial discontinuities at the end/begin of the frame. The effects strong discontinuities in the spectral domain, deemded spectral leakage, could be reduced by windowing the frame. Computing the real-to-complex DFT result in a power distribution, featuring peaks at particular frequencies. In addition the frequency of a given peak is better estimated as the mean frequency with respect to power density, as shown in Why are frequency values rounded in signal using FFT? Another tool to estimate fundamental frequencies is to compute the autocorrelation of the signal: it is higher near the periods of the signal. Since the signal is a vector of 3 components, an autocorelation matrix can be built. It is a 3x3 Hermitian matrix for each time and therefore features real eigenvalues. The maxima of the higher eigen value can be picture as the magnitude of vaibrations while the correponding eigenvector is a complex direction, somewhat similar to the direction of vibrations combined to angular offsets. The angular offset may signal an ellipsoidal vibration. Here is a fake signal, build by adding a guassian noise and sine waves: Here is the power density spectrum for a given frame overlapping on sine wave: Here is the resulting eigenvalues of the autocorrelation of the same frame, where the period of the 50Hz sine wave is visible. Vertical scaling is wrong: Here goes a sample code: import matplotlib.pyplot as plt import numpy as np import scipy.signal n=2000 t=np.linspace(0.,n/200,num=n,endpoint=False) # an artificial signal, just for tests ax=0.3*np.random.normal(0,1.,n) ay=0.3*np.random.normal(0,1.,n) az=0.3*np.random.normal(0,1.,n) ay[633:733]=ay[633:733]+np.sin(2*np.pi*30*t[633:733]) az[433:533]=az[433:533]+np.sin(2*np.pi*50*t[433:533]) #ax=np.sin(2*np.pi*10*t) #ay=np.sin(2*np.pi*30*t) #az=np.sin(2*np.pi*50*t) plt.plot(t,ax, label='x') plt.plot(t,ay, label='y') plt.plot(t,az, label='z') plt.xlabel('t, s') plt.ylabel('acc, m.s^-2') plt.legend() plt.show() #splitting the sgnal into frames of 0.5s noiseheight=0. for i in range(2*(n/200)): print 'frame', i,' time ', i*0.5, ' s' framea=np.zeros((100,3)) framea[:,0]=ax[i*100:i*100+100] framea[:,1]=ay[i*100:i*100+100] framea[:,2]=az[i*100:i*100+100] #for that frame, apply window. Factor 2 so that average remains 1. window = np.hanning(100) framea[:,0]=framea[:,0]*window*2 framea[:,1]=framea[:,1]*window*2 framea[:,2]=framea[:,2]*window*2 #DFT transform. hatacc=np.fft.rfft(framea,axis=0, norm=None) # scaling by length of frame. hatacc=hatacc/100. 
#computing the magnitude : all non-zero frequency are doubled to merge energy in bin N-k exp(-2ik/n) to bin k accmag=2*(np.abs(hatacc[:,0])*np.abs(hatacc[:,0])+np.abs(hatacc[:,1])*np.abs(hatacc[:,1])+np.abs(hatacc[:,2])*np.abs(hatacc[:,2])) accmag[0]=accmag[0]*0.5 #first frame says something about noise if i==0: noiseheight=2.*np.max(accmag) if np.max(accmag)>noiseheight: peaks, peaksdat=scipy.signal.find_peaks(accmag, height=noiseheight) timestep=0.005 freq= np.fft.fftfreq(100, d=timestep) #see https://stackoverflow.com/questions/54714169/why-are-frequency-values-rounded-in-signal-using-fft/54775867#54775867 # frequencies of peaks are better estimated as mean frequency of peak, with respect to power density for ind in peaks: totalweight=accmag[ind-2]+accmag[ind-1]+accmag[ind]+accmag[ind+1]+accmag[ind+2] totalweightedfreq=accmag[ind-2]*freq[ind-2]+accmag[ind-1]*freq[ind-1]+accmag[ind]*freq[ind]+accmag[ind+1]*freq[ind+1]+accmag[ind+2]*freq[ind+2] print 'found peak at frequency' , totalweightedfreq/totalweight, ' of height', accmag[ind] #ploting plt.plot(freq[0:50],accmag[0:50], label='||acc||^2') plt.xlabel('frequency, Hz') plt.ylabel('||acc||^2, m^2.s^-4') plt.legend() plt.show() #another approach to find fundamental frequencies: computing the autocorrelation of the windowed signal and searching for maximums. #building the autocorellation matrix autocorr=np.zeros((100,3,3), dtype=complex) acxfft=np.fft.fft(framea[:,0],axis=0, norm=None) acyfft=np.fft.fft(framea[:,1],axis=0, norm=None) aczfft=np.fft.fft(framea[:,2],axis=0, norm=None) acxfft[0]=0. acyfft[0]=0. aczfft[0]=0. autocorr[:,0,0]=np.fft.ifft(acxfft*np.conj(acxfft),axis=0, norm=None) autocorr[:,0,1]=np.fft.ifft(acxfft*np.conj(acyfft),axis=0, norm=None) autocorr[:,0,2]=np.fft.ifft(acxfft*np.conj(aczfft),axis=0, norm=None) autocorr[:,1,0]=np.fft.ifft(acyfft*np.conj(acxfft),axis=0, norm=None) autocorr[:,1,1]=np.fft.ifft(acyfft*np.conj(acyfft),axis=0, norm=None) autocorr[:,1,2]=np.fft.ifft(acyfft*np.conj(aczfft),axis=0, norm=None) autocorr[:,2,0]=np.fft.ifft(aczfft*np.conj(acxfft),axis=0, norm=None) autocorr[:,2,1]=np.fft.ifft(aczfft*np.conj(acyfft),axis=0, norm=None) autocorr[:,2,2]=np.fft.ifft(aczfft*np.conj(aczfft),axis=0, norm=None) # at a given time, the 3x3 matrix autocorr is Hermitian. #Its eigenvalues are real, its unitary eigenvectors signals directions of vibrations and phase between components. autocorreigval=np.zeros((100,3)) autocorreigvec=np.zeros((100,3,3), dtype=complex) for j in range(100): autocorreigval[j,:], autocorreigvec[j,:,:]=np.linalg.eigh(autocorr[j,:,:],UPLO='L') peaks, peaksdat=scipy.signal.find_peaks(autocorreigval[:50,2], 0.3*autocorreigval[0,2]) cleared=np.zeros(len(peaks)) peakperiod=np.zeros(len(peaks)) for j in range(len(peaks)): totalweight=autocorreigval[peaks[j]-1,2]+autocorreigval[peaks[j],2]+autocorreigval[peaks[j]+1,2] totalweightedperiod=0.005*(autocorreigval[peaks[j]-1,2]*(peaks[j]-1)+autocorreigval[peaks[j],2]*(peaks[j])+autocorreigval[peaks[j]+1,2]*(peaks[j]+1)) peakperiod[j]=totalweightedperiod/totalweight #cleared[0]=1. 
fundfreq=1 for j in range(len(peaks)): if cleared[j]==0: print "found fundamental frequency :", 1.0/(peakperiod[j]), 'eigenvalue', autocorreigval[peaks[j],2],' dir vibration ', autocorreigvec[peaks[j],:,2] for k in range(j,len(peaks),1): mm=np.zeros(1) np.floor_divide(peakperiod[k],peakperiod[j],out=mm) if ( np.abs(peakperiod[k]-peakperiod[j]*mm[0])< 0.2*peakperiod[j] or np.abs(peakperiod[k]-(peakperiod[j])*(mm[0]+1))< 0.2*peakperiod[j]) : cleared[k]=fundfreq #else : # print k,j,mm[0] # print peakperiod[k], peakperiod[j]*mm[0], peakperiod[j]*(mm[0]+1) , peakperiod[j] fundfreq=fundfreq+1 plt.plot(t[i*100:i*100+100],autocorreigval[:,2], label='autocorrelation, large eigenvalue') plt.plot(t[i*100:i*100+100],autocorreigval[:,1], label='autocorrelation, medium eigenvalue') plt.plot(t[i*100:i*100+100],autocorreigval[:,0], label='autocorrelation, small eigenvalue') plt.xlabel('t, s') plt.ylabel('acc^2, m^2.s^-4') plt.legend() plt.show() The output is: frame 0 time 0.0 s frame 1 time 0.5 s frame 2 time 1.0 s frame 3 time 1.5 s frame 4 time 2.0 s found peak at frequency 50.11249238149811 of height 0.2437842149351196 found fundamental frequency : 50.31467771196368 eigenvalue 47.03344783764712 dir vibration [-0.11441502+0.00000000e+00j 0.0216911 +2.98101624e-18j -0.9931962 -5.95276353e-17j] frame 5 time 2.5 s frame 6 time 3.0 s found peak at frequency 30.027895460975156 of height 0.3252387031089667 found fundamental frequency : 29.60690406120401 eigenvalue 61.51059682797539 dir vibration [ 0.11384195+0.00000000e+00j -0.98335779-4.34688198e-17j -0.14158908+3.87566125e-18j] frame 7 time 3.5 s found peak at frequency 26.39622018109896 of height 0.042081187689137545 found fundamental frequency : 67.65844834016518 eigenvalue 6.875616417422696 dir vibration [0.8102307 +0.00000000e+00j 0.32697001-8.83058693e-18j 0.48643275-4.76094302e-17j] frame 8 time 4.0 s frame 9 time 4.5 s Frequencies 50Hz and 30Hz got caught as 50.11/50.31Hz and 30.02/29.60Hz and directions are quite accurate as well. The last feature at 26.39Hz/67.65Hz is likely garbage, as it features different frequencies for the two methods and lower magnitude/eigenvalue. Regarding monitoring of road surface to improve maintenance, I know of a project at my compagny, called Aigle3D. A laser fitted at the back of a van scans the road at highway speed at milimetric accuracy. The van is also fitted with a server, cameras and other sensors, thus providing a huge amount of data on road geometry and defects, presently covering hundreds of km of the french national road network. Detecting and repairing small early defects and cracks may extend the life expectancy of the road at limited cost. If useful, data from accelerometers of daily users could indeed complete the monitoring system, allowing a faster reaction whenether a large pothole appears. | 7 | 6 |
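To tie this back to the original question about scalar features (fundamental frequency, flux, ...), here is a small, hedged sketch of common spectral descriptors computed from the per-frame magnitude spectrum; definitions vary slightly between references, and the function name spectral_features is my own.

import numpy as np

def spectral_features(fft_mag_values, f_values, prev_frame_mag=None):
    """A few scalar descriptors of one frame's magnitude spectrum."""
    power = fft_mag_values ** 2
    total = power.sum() + 1e-12
    # Dominant ("fundamental") frequency: location of the strongest peak.
    dominant_freq = f_values[np.argmax(fft_mag_values)]
    # Spectral centroid: power-weighted mean frequency.
    centroid = (f_values * power).sum() / total
    # Spectral flux: change of the magnitude spectrum versus the previous frame.
    flux = 0.0
    if prev_frame_mag is not None:
        flux = np.sqrt(((fft_mag_values - prev_frame_mag) ** 2).sum())
    return {"dominant_freq": dominant_freq, "centroid": centroid, "flux": flux}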
59,559,788 | 2020-1-2 | https://stackoverflow.com/questions/59559788/pandas-zigzag-segmentation-of-data-based-on-local-minima-maxima | I have a timeseries data. Generating data date_rng = pd.date_range('2019-01-01', freq='s', periods=400) df = pd.DataFrame(np.random.lognormal(.005, .5,size=(len(date_rng), 3)), columns=['data1', 'data2', 'data3'], index= date_rng) s = df['data1'] I want to create a zig-zag line connecting between the local maxima and local minima, that satisfies the condition that on the y-axis, |highest - lowest value| of each zig-zag line must exceed a percentage (say 20%) of the distance of the previous zig-zag line, AND a pre-stated value k (say 1.2) I can find the local extrema using this code: # Find peaks(max). peak_indexes = signal.argrelextrema(s.values, np.greater) peak_indexes = peak_indexes[0] # Find valleys(min). valley_indexes = signal.argrelextrema(s.values, np.less) valley_indexes = valley_indexes[0] # Merge peaks and valleys data points using pandas. df_peaks = pd.DataFrame({'date': s.index[peak_indexes], 'zigzag_y': s[peak_indexes]}) df_valleys = pd.DataFrame({'date': s.index[valley_indexes], 'zigzag_y': s[valley_indexes]}) df_peaks_valleys = pd.concat([df_peaks, df_valleys], axis=0, ignore_index=True, sort=True) # Sort peak and valley datapoints by date. df_peaks_valleys = df_peaks_valleys.sort_values(by=['date']) but I don't know how to apply the threshold condition to it. Please advise me on how to apply such condition. Since the data could contain million timestamps, an efficient calculation is highly recommended For clearer description: Example output, from my data: # Instantiate axes. (fig, ax) = plt.subplots() # Plot zigzag trendline. ax.plot(df_peaks_valleys['date'].values, df_peaks_valleys['zigzag_y'].values, color='red', label="Zigzag") # Plot original line. ax.plot(s.index, s, linestyle='dashed', color='black', label="Org. line", linewidth=1) # Format time. ax.xaxis_date() ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m-%d")) plt.gcf().autofmt_xdate() # Beautify the x-labels plt.autoscale(tight=True) plt.legend(loc='best') plt.grid(True, linestyle='dashed') My desired output (something similar to this, the zigzag only connect the significant segments) | I have answered to my best understanding of the question. Yet it is not clear to how the variable K influences the filter. You want to filter the extrema based on a running condition. I assume that you want to mark all extrema whose relative distance to the last marked extremum is larger than p%. I further assume that you always consider the first element of the timeseries a valid/relevant point. I implemented this with the following filter function: def filter(values, percentage): previous = values[0] mask = [True] for value in values[1:]: relative_difference = np.abs(value - previous)/previous if relative_difference > percentage: previous = value mask.append(True) else: mask.append(False) return mask To run your code, I first import dependencies: from scipy import signal import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates To make the code reproduceable I fix the random seed: np.random.seed(0) The rest from here is copypasta. Note that I decreased the amount of sample to make the result clear. date_rng = pd.date_range('2019-01-01', freq='s', periods=30) df = pd.DataFrame(np.random.lognormal(.005, .5,size=(len(date_rng), 3)), columns=['data1', 'data2', 'data3'], index= date_rng) s = df['data1'] # Find peaks(max). 
peak_indexes = signal.argrelextrema(s.values, np.greater) peak_indexes = peak_indexes[0] # Find valleys(min). valley_indexes = signal.argrelextrema(s.values, np.less) valley_indexes = valley_indexes[0] # Merge peaks and valleys data points using pandas. df_peaks = pd.DataFrame({'date': s.index[peak_indexes], 'zigzag_y': s[peak_indexes]}) df_valleys = pd.DataFrame({'date': s.index[valley_indexes], 'zigzag_y': s[valley_indexes]}) df_peaks_valleys = pd.concat([df_peaks, df_valleys], axis=0, ignore_index=True, sort=True) # Sort peak and valley datapoints by date. df_peaks_valleys = df_peaks_valleys.sort_values(by=['date']) Then we use the filter function: p = 0.2 # 20% filter_mask = filter(df_peaks_valleys.zigzag_y, p) filtered = df_peaks_valleys[filter_mask] And plot as you did both your previous plot as well as the newly filtered extrema: # Instantiate axes. (fig, ax) = plt.subplots(figsize=(10,10)) # Plot zigzag trendline. ax.plot(df_peaks_valleys['date'].values, df_peaks_valleys['zigzag_y'].values, color='red', label="Extrema") # Plot zigzag trendline. ax.plot(filtered['date'].values, filtered['zigzag_y'].values, color='blue', label="ZigZag") # Plot original line. ax.plot(s.index, s, linestyle='dashed', color='black', label="Org. line", linewidth=1) # Format time. ax.xaxis_date() ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m-%d")) plt.gcf().autofmt_xdate() # Beautify the x-labels plt.autoscale(tight=True) plt.legend(loc='best') plt.grid(True, linestyle='dashed') EDIT: If want to both consider the first as well as the last point as valid, then you can adapt the filter function as follows: def filter(values, percentage): # the first value is always valid previous = values[0] mask = [True] # evaluate all points from the second to (n-1)th for value in values[1:-1]: relative_difference = np.abs(value - previous)/previous if relative_difference > percentage: previous = value mask.append(True) else: mask.append(False) # the last value is always valid mask.append(True) return mask | 12 | 5 |
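(The answer above notes that it is unclear how the threshold k from the question should act. A hedged sketch of one plausible reading — require both the relative move and an absolute move of at least k before keeping a new extremum; the function name and the interpretation of k are my assumptions.)

```python
import numpy as np

def filter_extrema(values, percentage, k):
    """Keep an extremum only if it moved more than `percentage` (relative)
    AND more than `k` (absolute) away from the last kept extremum."""
    values = np.asarray(values, dtype=float)
    previous = values[0]
    mask = [True]                       # the first point is always kept
    for value in values[1:]:
        move = np.abs(value - previous)
        if move / np.abs(previous) > percentage and move > k:
            previous = value
            mask.append(True)
        else:
            mask.append(False)
    return mask

# Usage on the sorted peak/valley frame built in the answer:
# mask = filter_extrema(df_peaks_valleys['zigzag_y'].values, 0.2, 1.2)
# filtered = df_peaks_valleys[mask]
```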
59,636,631 | 2020-1-7 | https://stackoverflow.com/questions/59636631/importerror-cannot-import-name-mutablemapping-from-collections | I have installed the AWS CLI using pip on my Python 3.9.0a1 alpine Docker image. Installation went fine. When I run the aws command, I'm getting the following error. aws Traceback (most recent call last): File "/usr/local/bin/aws", line 27, in <module> sys.exit(main()) File "/usr/local/bin/aws", line 23, in main return awscli.clidriver.main() File "/usr/local/lib/python3.9/site-packages/awscli/clidriver.py", line 68, in main driver = create_clidriver() File "/usr/local/lib/python3.9/site-packages/awscli/clidriver.py", line 77, in create_clidriver load_plugins(session.full_config.get('plugins', {}), File "/usr/local/lib/python3.9/site-packages/awscli/plugin.py", line 44, in load_plugins modules = _import_plugins(plugin_mapping) File "/usr/local/lib/python3.9/site-packages/awscli/plugin.py", line 61, in _import_plugins module = __import__(path, fromlist=[module]) File "/usr/local/lib/python3.9/site-packages/awscli/handlers.py", line 42, in <module> from awscli.customizations.history import register_history_mode File "/usr/local/lib/python3.9/site-packages/awscli/customizations/history/__init__.py", line 24, in <module> from awscli.customizations.history.db import DatabaseConnection File "/usr/local/lib/python3.9/site-packages/awscli/customizations/history/db.py", line 19, in <module> from collections import MutableMapping ImportError: cannot import name 'MutableMapping' from 'collections' (/usr/local/lib/python3.9/collections/__init__.py) python --version Python 3.9.0a1 Do I need to install any other module to fix this error message? | collections.MutableMapping has been deprecated since Python 3.3, and was officially removed since Python 3.9. Excerpt from the documentation: Deprecated since version 3.3, will be removed in version 3.9: Moved Collections Abstract Base Classes to the collections.abc module. You can either wait for a Python 3.9-compatible version of awscli to be released, or patch the aws script (under your /usr/local/bin) yourself like this for the time being: ... import collections from collections import abc collections.MutableMapping = abc.MutableMapping import awscli.clidriver | 14 | 19 |
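(A slightly more defensive variant of the same patch — my own suggestion, not part of the accepted answer — guards the alias so the patched script still runs on Python versions where the name was never removed.)

```python
# Compatibility shim: restore the old top-level aliases only when they are missing
import collections
import collections.abc

for _name in ("Mapping", "MutableMapping", "Sequence", "MutableSet"):
    if not hasattr(collections, _name):
        setattr(collections, _name, getattr(collections.abc, _name))

# import awscli.clidriver  # the import chain that needed collections.MutableMapping now works
```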
59,636,344 | 2020-1-7 | https://stackoverflow.com/questions/59636344/how-to-remove-extra-whitespace-from-image-in-opencv | I have the following image which is a receipt image and a lot of white space around the receipt in focus. I would like to crop the white space. I can't manually crop it so I'm looking for a way that I could do it. Cropped one: Tried this code from the following post: How to remove whitespace from an image in OpenCV? gray = load_image(IMG_FILE) # image file gray = 255*(gray < 128).astype(np.uint8) coords = cv2.findNonZero(gray) # Find all non-zero points (text) x, y, w, h = cv2.boundingRect(coords) # Find minimum spanning bounding box rect = load_image(IMG_FILE)[y:y+h, x:x+w] # Crop the image - note we do this on the original image it's cropping a tiny part of the white space. | Here's a simple approach: Obtain binary image. Load the image, convert to grayscale, apply a large Gaussian blur, and then Otsu's threshold Perform morphological operations. We first morph open with a small kernel to remove noise then morph close with a large kernel to combine the contours Find enclosing bounding box and crop ROI. We find the coordinates of all non-zero points, find the bounding rectangle, and crop the ROI. Here's the detected ROI to crop highlighted in green Cropped ROI import cv2 # Load image, grayscale, Gaussian blur, Otsu's threshold image = cv2.imread('1.jpg') original = image.copy() gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (25,25), 0) thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Perform morph operations, first open to remove noise, then close to combine noise_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3)) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, noise_kernel, iterations=2) close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7,7)) close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, close_kernel, iterations=3) # Find enclosing boundingbox and crop ROI coords = cv2.findNonZero(close) x,y,w,h = cv2.boundingRect(coords) cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2) crop = original[y:y+h, x:x+w] cv2.imshow('thresh', thresh) cv2.imshow('close', close) cv2.imshow('image', image) cv2.imshow('crop', crop) cv2.waitKey() | 7 | 9 |
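(If a little white margin should be kept around the receipt rather than cropping it flush, one hedged variation is to pad the detected bounding box before slicing. This short version skips the morphological clean-up from the answer for brevity, so it assumes a reasonably clean scan, and the pad size is an arbitrary choice.)

```python
import cv2

image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (25, 25), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

coords = cv2.findNonZero(thresh)           # every "ink" pixel after thresholding
x, y, w, h = cv2.boundingRect(coords)      # tight box around them

pad = 15                                   # pixels of margin to keep on each side
y0, y1 = max(y - pad, 0), min(y + h + pad, image.shape[0])
x0, x1 = max(x - pad, 0), min(x + w + pad, image.shape[1])
cv2.imwrite('receipt_with_margin.png', image[y0:y1, x0:x1])
```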
59,635,147 | 2020-1-7 | https://stackoverflow.com/questions/59635147/looping-through-multiple-arrays-concatenating-values-in-pandas | I've a dataframe with list of items separated by , commas as below. +----------------------+ | Items | +----------------------+ | X1,Y1,Z1 | | X2,Z3 | | X3 | | X1,X2 | | Y2,Y4,Z2,Y5,Z3 | | X2,X3,Y1,Y2,Z2,Z4,X1 | +----------------------+ Also I've 3 list of arrays which has all items said above clubbed into specific groups as below X = [X1,X2,X3,X4,X5] Y = [Y1,Y2,Y3,Y4,Y5] Z = [Z1,Z2,Z3,Z4,Z5] my task is to split the each value in the dataframe & check individual items in the 3 arrays and if an item is in any of the array, then it should concatenate the name of the groups which it is found, separated with &. Also if many items are in the same group/array, then it should mention the number of occurrence as well. My desired output is as below. refer Category column +----------------------+--------------+ | Items | Category | +----------------------+--------------+ | X1,Y1,Z1 | X & Y & Z | | X2,Z3 | X & Z | | X3 | X | | X1,X2 | 2X | | Y2,Y4,Z2,Y5,Z3 | 3Y & 2Z | | X2,X3,Y1,Y2,Z2,Z4,X1 | 3X & 2Y & 2Z | +----------------------+--------------+ X,Y, and Z are the name of the arrays. how shall I start to solve this using pandas? please guide. | Setup df = pd.DataFrame( [['X1,Y1,Z1'], ['X2,Z3'], ['X3'], ['X1,X2'], ['Y2,Y4,Z2,Y5,Z3'], ['X2,X3,Y1,Y2,Z2,Z4,X1']], columns=['Items'] ) X = ['X1', 'X2', 'X3', 'X4', 'X5'] Y = ['Y1', 'Y2', 'Y3', 'Y4', 'Y5'] Z = ['Z1', 'Z2', 'Z3', 'Z4', 'Z5'] Counter from collections import Counter M = {**dict.fromkeys(X, 'X'), **dict.fromkeys(Y, 'Y'), **dict.fromkeys(Z, 'Z')} num = lambda x: {1: ''}.get(x, x) cat = ' & '.join fmt = lambda c: cat(f'{num(v)}{k}' for k, v in c.items()) cnt = lambda x: Counter(map(M.get, x.split(','))) df.assign(Category=[*map(fmt, map(cnt, df.Items))]) Items Category 0 X1,Y1,Z1 X & Y & Z 1 X2,Z3 X & Z 2 X3 X 3 X1,X2 2X 4 Y2,Y4,Z2,Y5,Z3 3Y & 2Z 5 X2,X3,Y1,Y2,Z2,Z4,X1 3X & 2Y & 2Z OLD STUFF pandas.Series.str.get_dummies and groupby First convert the definitions of X, Y, and Z into one dictionary, then use that as the argument for groupby on axis=1 M = {**dict.fromkeys(X, 'X'), **dict.fromkeys(Y, 'Y'), **dict.fromkeys(Z, 'Z')} counts = df.Items.str.get_dummies(',').groupby(M, axis=1).sum() counts X Y Z 0 1 1 1 1 1 0 1 2 1 0 0 3 2 0 0 4 0 3 2 5 3 2 2 Add the desired column Work in Progress I don't like this solution def fmt(row): a = [f'{"" if v == 1 else v}{k}' for k, v in row.items() if v > 0] return ' & '.join(a) df.assign(Category=counts.apply(fmt, axis=1)) Items Category 0 X1,Y1,Z1 X & Y & Z 1 X2,Z3 X & Z 2 X3 X 3 X1,X2 2X 4 Y2,Y4,Z2,Y5,Z3 3Y & 2Z 5 X2,X3,Y1,Y2,Z2,Z4,X1 3X & 2Y & 2Z NOT TO BE TAKEN SERIOUSLY Because I'm leveraging the character of your contrived example and there is nowai you should depend on the first character of your values to be the thing that differentiates them. from operator import itemgetter df.Items.str.get_dummies(',').groupby(itemgetter(0), axis=1).sum() X Y Z 0 1 1 1 1 1 0 1 2 1 0 0 3 2 0 0 4 0 3 2 5 3 2 2 | 7 | 6 |
59,634,937 | 2020-1-7 | https://stackoverflow.com/questions/59634937/variable-foo-class-is-not-valid-as-type-but-why | I have something similar to this: from typing import Type class Foo: pass def make_a_foobar_class(foo_class: Type[Foo]) -> Type[Foo]: class FooBar(foo_class): # this.py:10: error: Variable "foo_class" is not valid as a type # this.py:10: error: Invalid base class "foo_class" pass return FooBar print(make_a_foobar_class(Foo)()) Running mypy throws these two errors (added as comments ^) at line class FooBar(foo_class): The code seems to work just fine: $ python this.py <__main__.make_a_foobar_class.<locals>.FooBar object at 0x10a422be0> What am I doing wrong? | Mypy, and the PEP 484 ecosystem in general, does not support creating classes with dynamic base types. This is likely because supporting such a feature is not worth the additional complexity: the type checker would need to implement additional logic/additional passes since it can no longer cleanly determine what exactly the parent type is by just examining the set of variable names that are currently in scope, and can also no longer accurately type check code using the new dynamic class in the general case. In any case, I would recommend either redesigning your code to avoid doing this, perhaps by using composition over inheritance or something. Alternatively, you can suppress the errors mypy is generating by adding a # type: ignore annotation. This annotation will filter out all errors associated with that particular line once type checking is done. For example: from typing import Type class Foo: pass def make_a_foobar_class(foo_class: Type[Foo]) -> Type[Foo]: class FooBar(foo_class): # type: ignore pass return FooBar print(make_a_foobar_class(Foo)()) | 17 | 14 |
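(A hedged sketch of the composition-over-inheritance route mentioned in the answer — wrap the instance instead of subclassing a dynamic type; the class and method names are invented for illustration.)

```python
from typing import Type


class Foo:
    def greet(self) -> str:
        return "foo"


class FooBarWrapper:
    """Wraps a Foo (or Foo subclass) instance instead of inheriting from it."""

    def __init__(self, wrapped: Foo) -> None:
        self._wrapped = wrapped

    def greet(self) -> str:
        return self._wrapped.greet() + " + bar"


def make_a_foobar(foo_class: Type[Foo]) -> FooBarWrapper:
    # The base type is no longer dynamic, so mypy can check this without complaint.
    return FooBarWrapper(foo_class())


print(make_a_foobar(Foo).greet())  # foo + bar
```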
59,630,622 | 2020-1-7 | https://stackoverflow.com/questions/59630622/how-to-enable-zoom-in-out-and-zoom-to-percentage-buttons-in-plots-pane-in-spyder | The zoom in, zoom out and zoom to percentage buttons are disabled in the Plots pane in Spyder as shown here. Any idea how to enable them? They should be enabled as seen here. Specs Spyder version: 4.0.0 OS: elementary OS Python: 3.7.5 64-bit Kernel: Linux 5.4.7-050407-generic Laptop: Thinkpad E585 Failed attempts The following changes in preferences didn't help: 1. Changing Backend from inline to automatic. 2. Changing Format from png to svg. 3. Changing Resolution from 72 to 150 dpi. | (Spyder maintainer here) To be able to use those buttons you need to deactivate the option called Fit plots to window, present in the Options menu of the Plots pane: | 8 | 9 |
59,629,795 | 2020-1-7 | https://stackoverflow.com/questions/59629795/how-can-i-split-columns-with-regex-to-move-trailing-caps-into-a-separate-column | I'm trying to split a column using regex, but can't seem to get the split correctly. I'm trying to take all the trailing CAPS and move them into a separate column. So I'm getting all the CAPS that are either 2-4 CAPS in a row. However, it's only leaving the 'Name' column while the 'Team' column is blank. Here's my code: import pandas as pd url = "https://www.espn.com/nba/stats/player/_/table/offensive/sort/avgAssists/dir/desc" df = pd.read_html(url)[0].join(pd.read_html(url)[1]) df[['Name','Team']] = df['Name'].str.split('[A-Z]{2,4}', expand=True) I want this: print(df.head(5).to_string()) RK Name POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER 0 1 LeBron JamesLA SF 35 35.1 24.9 9.6 19.7 48.6 2.0 6.0 33.8 3.7 5.5 67.7 7.9 11.0 1.3 0.5 3.7 28 9 26.10 1 2 Ricky RubioPHX PG 30 32.0 13.6 4.9 11.9 41.3 1.2 3.7 31.8 2.6 3.1 83.7 4.6 9.3 1.3 0.2 2.5 12 1 16.40 2 3 Luka DoncicDAL SF 32 32.8 29.7 9.6 20.2 47.5 3.1 9.4 33.1 7.3 9.1 80.5 9.7 8.9 1.2 0.2 4.2 22 11 31.74 3 4 Ben SimmonsPHIL PG 36 35.4 14.9 6.1 10.8 56.3 0.1 0.1 40.0 2.7 4.6 59.0 7.5 8.6 2.2 0.7 3.6 19 3 19.49 4 5 Trae YoungATL PG 34 35.1 28.9 9.3 20.8 44.8 3.5 9.4 37.5 6.7 7.9 85.0 4.3 8.4 1.2 0.1 4.8 11 1 23.47 to become this: print(df.head(5).to_string()) RK Name Team POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER 0 1 LeBron James LA SF 35 35.1 24.9 9.6 19.7 48.6 2.0 6.0 33.8 3.7 5.5 67.7 7.9 11.0 1.3 0.5 3.7 28 9 26.10 1 2 Ricky Rubio PHX PG 30 32.0 13.6 4.9 11.9 41.3 1.2 3.7 31.8 2.6 3.1 83.7 4.6 9.3 1.3 0.2 2.5 12 1 16.40 2 3 Luka Doncic DAL SF 32 32.8 29.7 9.6 20.2 47.5 3.1 9.4 33.1 7.3 9.1 80.5 9.7 8.9 1.2 0.2 4.2 22 11 31.74 3 4 Ben Simmons PHIL PG 36 35.4 14.9 6.1 10.8 56.3 0.1 0.1 40.0 2.7 4.6 59.0 7.5 8.6 2.2 0.7 3.6 19 3 19.49 4 5 Trae Young ATL PG 34 35.1 28.9 9.3 20.8 44.8 3.5 9.4 37.5 6.7 7.9 85.0 4.3 8.4 1.2 0.1 4.8 11 1 23.47 | You may extract the data into two columns by using a regex like ^(.*?)([A-Z]+)$ or ^(.*[^A-Z])([A-Z]+)$: df[['Name','Team']] = df['Name'].str.extract('^(.*?)([A-Z]+)$', expand=True) This will keep all up to the last char that is not an uppercase letter in Group "Name" and the last uppercase letters in Group "Team". See regex demo #1 and regex demo #2 Details ^ - start of a string (.*?) - Capturing group 1: any zero or more chars other than line break chars, as few as possible or (.*[^A-Z]) - any zero or more chars other than line break chars, as many as possible, up to the last char that is not an ASCII uppercase letter (granted the subsequent patterns match) (note that this pattern implies there is at least 1 char before the last uppercase letters) ([A-Z]+) - Capturing group 2: one or more ASCII uppercase letters $ - end of string. | 11 | 9 |
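(A tiny hedged check of the pattern on a few hand-typed names; the output in the comment is what I would expect from pandas, not something taken from the post, and column spacing may differ slightly.)

```python
import pandas as pd

names = pd.Series(['LeBron JamesLA', 'Ricky RubioPHX', 'Ben SimmonsPHIL'])
split = names.str.extract(r'^(.*?)([A-Z]+)$')
split.columns = ['Name', 'Team']
print(split)
#            Name  Team
# 0  LeBron James    LA
# 1   Ricky Rubio   PHX
# 2   Ben Simmons  PHIL
```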
59,627,976 | 2020-1-7 | https://stackoverflow.com/questions/59627976/integrating-dash-apps-into-flask-minimal-example | I want to create a Flask web-application. I want to integrate several Dash-Apps into this site and display links to each Dash-app on the home page. Here is a minimal example: The home page should look like this: from flask import Flask app = Flask(__name__) @app.route("/") def main(): return "Hello World" if __name__ == "__main__": app.run(debug = True) Lets say we have a Dash app that looks like this: import dash import dash_html_components as html app = dash.Dash(__name__) app.layout = html.Div("Hello world 1") if __name__=="__main__": app.run_server(debug=True) My question is now: How can I integrate the Dash app into the Flask app in the following way: 1) there should be a link in the Flask-app leading to the Dash app 2) the location of the Dash app should be for example /dash (the home page is in that case located at/) 3) if there is a second and third Dash-app, it should be easy to add another link and location (e.g. /dash2, /dash3,...) There are many posts dealing with this issue - however, I found no minimal example. | Combining One or More Dash Apps with Existing WSGI Apps The following example illustrates this approach by combining two Dash apps with a Flask app. flask_app.py from flask import Flask flask_app = Flask(__name__) @flask_app.route('/') def index(): return 'Hello Flask app' app1.py import dash import dash_html_components as html app = dash.Dash( __name__, requests_pathname_prefix='/app1/' ) app.layout = html.Div("Dash app 1") app2.py import dash import dash_html_components as html app = dash.Dash( __name__, requests_pathname_prefix='/app2/' ) app.layout = html.Div("Dash app 2") wsgi.py from werkzeug.wsgi import DispatcherMiddleware from app1 import app as app1 from app2 import app as app2 application = DispatcherMiddleware(flask_app, { '/app1': app1.server, '/app2': app2.server, }) In this example, the Flask app has been mounted at / and the two Dash apps have been mounted at /app1 and /app2. In this approach, we do not pass in a Flask server to the Dash apps, but let them create their own, which the DispatcherMiddleware routes requests to based on the prefix of the incoming requests. Within each Dash app, requests_pathname_prefix must be specified as the app's mount point, in order to match the route prefix set by the DispatcherMiddleware. Note that the application object in wsgi.py is of type werkzeug.wsgi.DispatcherMiddleware, which does not have a run method. This can be run as a WSGI app like so: $ gunicorn wsgi:application Alternatively, you can use the Werkzeug development server (which is not suitable for production) to run the app: run.py from werkzeug.wsgi import DispatcherMiddleware from werkzeug.serving import run_simple from app1 import app as app1 from app2 import app as app2 application = DispatcherMiddleware(flask_app, { '/app1': app1.server, '/app2': app2.server, }) if __name__ == '__main__': run_simple('localhost', 8050, application) If you need access to the Dash development tools when using this approach (whether running with a WSGI server, or using the Werkzeug development server) you must invoke them manually for each Dash app. The following lines can be added before the initialisation of the DispatcherMiddleware to do this: app1.enable_dev_tools(debug=True) app2.enable_dev_tools(debug=True) Note: debug mode should not be enabled in production. 
When using debug mode with Gunicorn, the --reload command line flag is required for hot reloading to work. In this example, the existing app being combined with two Dash apps is a Flask app, however this approach enables the combination of any web application implementing the WSGI specification. A list of WSGI web frameworks can be found in the WSGI documentation with one or more Dash apps. Reference - https://dash.plot.ly/integrating-dash Edited: Multiple Dash app without WSGI from dash import Dash from werkzeug.wsgi import DispatcherMiddleware import flask from werkzeug.serving import run_simple import dash_html_components as html server = flask.Flask(__name__) dash_app1 = Dash(__name__, server = server, url_base_pathname='/dashboard/') dash_app2 = Dash(__name__, server = server, url_base_pathname='/reports/') dash_app1.layout = html.Div([html.H1('Hi there, I am Dash1')]) dash_app2.layout = html.Div([html.H1('Hi there, I am Dash2')]) @server.route('/') @server.route('/hello') def hello(): return 'hello world!' @server.route('/dashboard/') def render_dashboard(): return flask.redirect('/dash1') @server.route('/reports/') def render_reports(): return flask.redirect('/dash2') app = DispatcherMiddleware(server, { '/dash1': dash_app1.server, '/dash2': dash_app2.server }) run_simple('0.0.0.0', 8080, app, use_reloader=True, use_debugger=True) | 12 | 14 |
59,559,294 | 2020-1-2 | https://stackoverflow.com/questions/59559294/error-in-installing-python-package-flair-about-a-dependent-package-not-hosted-i | I am trying to install flair. It throws the error below when executing the following command: pip install flair ERROR: Packages installed from PyPI cannot depend on packages which are not also hosted on PyPI. tiny-tokenizer depends on SudachiDict_core@ https://object-storage.tyo2.conoha.io/v1/nc_2520839e1f9641b08211a5c85243124a/sudachi/SudachiDict_core-20190927.tar.gz I thought installing this package explicitly might fix the error but it doesn't. The error remains the same. The installed version of SudachiDict-core is below: SudachiDict-core 0.0.0 Below is the environment: OS: Windows 10 Python: 3.6 (64 bit) Any hint is appreciated. Thank you! Note: the first hurdle when installing flair was the torch package. It was resolved simply by installing the torch package. The error looked like below: ERROR: Could not find a version that satisfies the requirement torch>=1.1.0 (from flair) (from verERROR: No matching distribution found for torch>=1.1.0 (from flair) | It is strange, but running the command below solved the problem. pip install flair==0.4.3 I assume that the problem is in the latest version, 0.4.4 (and its dependencies). Note: I already had the torch==1.1.0 package installed. | 7 | 0 |
59,624,729 | 2020-1-7 | https://stackoverflow.com/questions/59624729/re-findallabcd-string-vs-re-findallabcd-string | In a Python regular expression, I encounter this singular problem. Could you explain the differences between re.findall('(ab|cd)', string) and re.findall('(ab|cd)+', string)? import re string = 'abcdla' result = re.findall('(ab|cd)', string) result2 = re.findall('(ab|cd)+', string) print(result) print(result2) The actual output is: ['ab', 'cd'] ['cd'] I'm confused as to why the second result doesn't contain 'ab' as well? | I don't know if this will make things clearer, but let's try to imagine what happens under the hood in a simple way; we are going to simulate what happens using match # group(0) returns the matched string; the captured groups are returned in groups or you can access them # using group(1), group(2)....... in your case there is only one group, and one group will capture only # one part so when you do this string = 'abcdla' print(re.match('(ab|cd)', string).group(0)) # only 'ab' is matched and the group will capture 'ab' print(re.match('(ab|cd)+', string).group(0)) # this will match 'abcd' but the group will capture only the last iteration, 'cd' findall matches and consumes the string at the same time; let's imagine what happens with the regex '(ab|cd)': 'abcdabla' ---> 1: match: 'ab' | capture : ab | left to process: 'cdabla' 'cdabla' ---> 2: match: 'cd' | capture : cd | left to process: 'abla' 'abla' ---> 3: match: 'ab' | capture : ab | left to process: 'la' 'la' ---> 4: match: '' | capture : None | left to process: '' --- final : result captured ['ab', 'cd', 'ab'] Now the same thing with '(ab|cd)+' 'abcdabla' ---> 1: match: 'abcdab' | capture : 'ab' | left to process: 'la' 'la' ---> 2: match: '' | capture : None | left to process: '' ---> final result : ['ab'] I hope this clears things up a little bit. | 18 | 6 |
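(One hedged way to see both behaviours side by side is re.finditer, which exposes the full match and the group capture separately; the commented output is what I would expect for the question's string.)

```python
import re

string = 'abcdla'

for m in re.finditer(r'(ab|cd)', string):
    print('consumed:', m.group(0), '| captured:', m.group(1))
# consumed: ab | captured: ab
# consumed: cd | captured: cd

for m in re.finditer(r'(ab|cd)+', string):
    print('consumed:', m.group(0), '| captured:', m.group(1))
# consumed: abcd | captured: cd   <- the group keeps only its last repetition
```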
59,563,746 | 2020-1-2 | https://stackoverflow.com/questions/59563746/how-to-clean-a-tox-environment-after-running | I have the following tox.ini file: [tox] envlist = flake8,py{35,36,37,38}{,-keyring} [testenv] usedevelop = True install_command = pip install -U {opts} {packages} deps = .[test] keyring: .[keyring] setenv = COVERAGE_FILE = .coverage.{envname} commands= pytest {toxinidir}/tests -n 4 {posargs} [testenv:flake8] basepython = python3 deps = flake8 commands= flake8 src tests [flake8] ignore: F401,E402,E501,W605,W503 When I run the tox command, it creates a .tox folder containing a folder for every environment specified in the [tox] section of the tox.ini. I would like to automatically get rid of these particular folders after the test have succeeded when running tox without having to manually run rm -rf .tox/NAME_OF_THE_ENV. I have searched through the tox documentation but I have found nothing. Is it possible to do so? If yes, how? | I found a way by creating a tox hook. This hook runs the shutil.rmtree command after the tests have been run inside the env. In a tox_clean_env.py file: import shutil from tox import hookimpl @hookimpl def tox_runtest_post(venv): try: shutil.rmtree(venv.path) except Exception as e: print("An exception occurred while removing '{}':".format(venv.path)) print(e) I created a package around this code and I just need to install it using pip. In my setup.py, in the setup function: entry_points={"tox": ["clean_env = tox_clean_env"]}, | 10 | 5 |
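(For completeness, a hedged sketch of the packaging that the answer alludes to with its entry_points line; the project name and version are invented.)

```python
# setup.py for a hypothetical tox-clean-env plugin package
from setuptools import setup

setup(
    name="tox-clean-env",
    version="0.1.0",
    py_modules=["tox_clean_env"],   # the module that defines tox_runtest_post
    install_requires=["tox"],
    entry_points={"tox": ["clean_env = tox_clean_env"]},
)
```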
59,614,014 | 2020-1-6 | https://stackoverflow.com/questions/59614014/pypdf4-exported-pdf-file-size-too-big | I have a PDF file of around 7000 pages and 479 MB. I have create a python script using PyPDF4 to extract only specific pages if the pages contain specific words. The script works but the new PDF file, even though it has only 650 pages from the original 7000, now has more MB that the original file (498 MB to be exactly). Is there any way to lower the filesize of the new PDF? The script I used: from PyPDF4 import PdfFileWriter, PdfFileReader import os import re output = PdfFileWriter() input = PdfFileReader(open('Binder.pdf', 'rb')) # open input for i in range(0, input.getNumPages()): content = "" content += input.getPage(i).extractText() + "\n" #Format 1 RS = re.search('FIGURE', content) RS1 = #... Only one search given as example. I have more, but are irrelevant for the question. #.... # Format 2 RS20 = re.search('FIG.', content) RS21 = #... Only one search given as example. I have more, but are irrelevant for the question. #.... if (all(v is not None for v in [RS, RS1, RS2, RS3, RS4, RS5, RS6, RS7, RS8, RS9]) or all(v is not None for v in [RS20, RS21, RS22, RS23, RS24, RS25, RS26, RS27, RS28, RS29, RS30, RS30])): p = input.getPage(i) output.addPage(p) #Save pages to new PDF file with open('ExtractedPages.pdf', 'wb') as f: output.write(f) | After a lot of searching found some solutions. The only problem with the exported PDF file was that it was uncompressed. So I needed a solution to compress a PDF file: PyPDF2 and/or PyPDF4 do not have an option to compress PDFs. PyPDF2 had the compressContentStreams() method, which doesn't work. Found a few other solutions that claim to compress PDFs, but none worked for me (adding them here just in case they work for others): pylovepdf ; pdfsizeopt ; pdfc The first solution that worked for me was Adobe Acrobat professional. It reduced the size from 498 MB to 2.99 MB. [Best Solution] As an alternative, open source solution that works, I found coherentpdf. For Windows you can download the pre-built PDF squeezer tool. Then in cmd: cpdfsqueeze.exe input.pdf output.pdf This actually compressed the PDF even more than Adobe Acrobat. From 498 MB to 2.48 MB. Compressed to 0.5% from original. I think this is the best solution as it can be added to your Python Code. Edit: Found another Free solution that has a GUI also. PDFsam. You can use the Merge feature on one PDF file, and in the advanced Settings make sure you have the Compress Output checked. This compressed from 498 to 3.2 MB. | 8 | 11 |
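(A hedged sketch of calling the cpdfsqueeze step from the same Python script, assuming the executable is on the PATH — on Windows the binary is cpdfsqueeze.exe; the output file name is invented.)

```python
import subprocess

# After `output.write(f)` has produced the uncompressed PDF:
subprocess.run(
    ["cpdfsqueeze", "ExtractedPages.pdf", "ExtractedPagesCompressed.pdf"],
    check=True,  # raise CalledProcessError if the external tool fails
)
```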
59,620,431 | 2020-1-6 | https://stackoverflow.com/questions/59620431/what-is-a-buffer-in-pytorch | I understand what register_buffer does and the difference between register_buffer and register_parameters. But what is the precise definition of a buffer in PyTorch? | This can be answered looking at the implementation: def register_buffer(self, name, tensor): if '_buffers' not in self.__dict__: raise AttributeError( "cannot assign buffer before Module.__init__() call") elif not isinstance(name, torch._six.string_classes): raise TypeError("buffer name should be a string. " "Got {}".format(torch.typename(name))) elif '.' in name: raise KeyError("buffer name can't contain \".\"") elif name == '': raise KeyError("buffer name can't be empty string \"\"") elif hasattr(self, name) and name not in self._buffers: raise KeyError("attribute '{}' already exists".format(name)) elif tensor is not None and not isinstance(tensor, torch.Tensor): raise TypeError("cannot assign '{}' object to buffer '{}' " "(torch Tensor or None required)" .format(torch.typename(tensor), name)) else: self._buffers[name] = tensor That is, the buffer's name: must be a string: not isinstance(name, torch._six.string_classes) cannot contain a . (dot): '.' in name cannot be an empty string: name == '' cannot be an attribute of the Module: hasattr(self, name) should be unique: name not in self._buffers and the tensor (guess what?): should be a Tensor: isinstance(tensor, torch.Tensor) So, the buffer is just a tensor with these properties, registered in the _buffers attribute of a Module; | 11 | 5 |
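(A small hedged usage example of what registration buys you: the tensor travels with the module's state_dict and device moves, but is not a Parameter, so optimizers ignore it. The module here is made up for illustration.)

```python
import torch
import torch.nn as nn


class RunningSum(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("total", torch.zeros(3))  # a buffer, not a Parameter

    def forward(self, x):
        self.total += x.detach()
        return x


m = RunningSum()
m(torch.ones(3))
print(list(m.parameters()))  # []             -> nothing for an optimizer to update
print(m.state_dict())        # {'total': ...} -> saved and loaded with the model
```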
59,612,745 | 2020-1-6 | https://stackoverflow.com/questions/59612745/can-i-set-a-default-value-of-prometheus-labels-in-python | I'm using the official Python (2.7) client. I want to define a metric with some labels but I don't always have all the labels to send. When I send only some of them I get the error: AttributeError: 'Counter' object has no attribute '_value' This is the code I used: c = Counter("counterTest", "explain this counter", labelnames=("label1", "label2",), namespace="namespace") c.labels(label1="1").inc(1) Is this a limitation in the Python library? Or maybe it's a limitation on the Prometheus end? | You must always specify all labels; how else would we know which series it is that you want to increment? You can specify an empty string as a label value, though this may cause confusion for your users. | 7 | 4 |
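(A hedged example following that advice with the question's metric; the empty string is just a stand-in for the label value that is not available at call time.)

```python
from prometheus_client import Counter

c = Counter(
    "counter_test",
    "explain this counter",
    labelnames=("label1", "label2"),
    namespace="namespace",
)

# Every call has to pin down both labels; use "" (or a sentinel such as "unknown")
# when a value is genuinely missing.
c.labels(label1="1", label2="").inc(1)
```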
59,619,940 | 2020-1-6 | https://stackoverflow.com/questions/59619940/what-is-the-point-of-built-distributions-for-pure-python-packages | One can share Python as a source distribution (.tar.gz format) or as a built distribution (wheels format). As I understand it, the point of built distributions is: Save time: Compilation might be pretty time-consuming. We can do this once on the server and share it for many users. Reduce requirements: The user does not have to have a compiler installed However, those two arguments for bdist files seem not to hold for pure-python packages. Still, I see that natsort comes in both, a sdist and a bdist. Is there any advantage of sharing a pure-python package in bdist format? | From pythonwheels.com: Advantages of wheels Faster installation for pure Python and native C extension packages. Avoids arbitrary code execution for installation. (Avoids setup.py) Installation of a C extension does not require a compiler on Linux, Windows or macOS. Allows better caching for testing and continuous integration. Creates .pyc files as part of installation to ensure they match the Python interpreter used. More consistent installs across platforms and machines. So for me, I think the first and second points are most meaningful for a pure Python package. It's smaller, faster and also more secure. | 7 | 6 |
59,580,304 | 2020-1-3 | https://stackoverflow.com/questions/59580304/extract-individual-field-from-table-image-to-excel-with-ocr | I have scanned images which have tables as shown in this image: I am trying to extract each box separately and perform OCR but when I try to detect horizontal and vertical lines and then detect boxes it's returning the following image: And when I try to perform other transformations to detect text (erode and dilate) some remains of lines are still coming along with text like below: I cannot detect text only to perform OCR and proper bounding boxes aren't being generated like below: I cannot get clearly separated boxes using real lines, I've tried this on an image that was edited in paint(as shown below) to add digits and it works. I don't know which part I'm doing wrong but if there's anything I should try or maybe change/add in my question please please tell me. #Loading all required libraries %pylab inline import cv2 import numpy as np import pandas as pd import pytesseract import matplotlib.pyplot as plt import statistics from time import sleep import random img = cv2.imread('images/scan1.jpg',0) # for adding border to an image img1= cv2.copyMakeBorder(img,50,50,50,50,cv2.BORDER_CONSTANT,value=[255,255]) # Thresholding the image (thresh, th3) = cv2.threshold(img1, 255, 255,cv2.THRESH_BINARY|cv2.THRESH_OTSU) # to flip image pixel values th3 = 255-th3 # initialize kernels for table boundaries detections if(th3.shape[0]<1000): ver = np.array([[1], [1], [1], [1], [1], [1], [1]]) hor = np.array([[1,1,1,1,1,1]]) else: ver = np.array([[1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1], [1]]) hor = np.array([[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]]) # to detect vertical lines of table borders img_temp1 = cv2.erode(th3, ver, iterations=3) verticle_lines_img = cv2.dilate(img_temp1, ver, iterations=3) # to detect horizontal lines of table borders img_hor = cv2.erode(th3, hor, iterations=3) hor_lines_img = cv2.dilate(img_hor, hor, iterations=4) # adding horizontal and vertical lines hor_ver = cv2.add(hor_lines_img,verticle_lines_img) hor_ver = 255-hor_ver # subtracting table borders from image temp = cv2.subtract(th3,hor_ver) temp = 255-temp #Doing xor operation for erasing table boundaries tt = cv2.bitwise_xor(img1,temp) iii = cv2.bitwise_not(tt) tt1=iii.copy() #kernel initialization ver1 = np.array([[1,1], [1,1], [1,1], [1,1], [1,1], [1,1], [1,1], [1,1], [1,1]]) hor1 = np.array([[1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,1,1,1,1,1]]) #morphological operation temp1 = cv2.erode(tt1, ver1, iterations=2) verticle_lines_img1 = cv2.dilate(temp1, ver1, iterations=1) temp12 = cv2.erode(tt1, hor1, iterations=1) hor_lines_img2 = cv2.dilate(temp12, hor1, iterations=1) # doing or operation for detecting only text part and removing rest all hor_ver = cv2.add(hor_lines_img2,verticle_lines_img1) dim1 = (hor_ver.shape[1],hor_ver.shape[0]) dim = (hor_ver.shape[1]*2,hor_ver.shape[0]*2) # resizing image to its double size to increase the text size resized = cv2.resize(hor_ver, dim, interpolation = cv2.INTER_AREA) #bitwise not operation for fliping the pixel values so as to apply morphological operation such as dilation and erode want = cv2.bitwise_not(resized) if(want.shape[0]<1000): kernel1 = np.array([[1,1,1]]) kernel2 = np.array([[1,1], [1,1]]) kernel3 = np.array([[1,0,1],[0,1,0], [1,0,1]]) else: kernel1 = np.array([[1,1,1,1,1,1]]) kernel2 = np.array([[1,1,1,1,1], [1,1,1,1,1], [1,1,1,1,1], [1,1,1,1,1]]) tt1 = cv2.dilate(want,kernel1,iterations=2) # getting image 
back to its original size resized1 = cv2.resize(tt1, dim1, interpolation = cv2.INTER_AREA) # Find contours for image, which will detect all the boxes contours1, hierarchy1 = cv2.findContours(resized1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) #function to sort contours by its x-axis (top to bottom) def sort_contours(cnts, method="left-to-right"): # initialize the reverse flag and sort index reverse = False i = 0 # handle if we need to sort in reverse if method == "right-to-left" or method == "bottom-to-top": reverse = True # handle if we are sorting against the y-coordinate rather than # the x-coordinate of the bounding box if method == "top-to-bottom" or method == "bottom-to-top": i = 1 # construct the list of bounding boxes and sort them from top to # bottom boundingBoxes = [cv2.boundingRect(c) for c in cnts] (cnts, boundingBoxes) = zip(*sorted(zip(cnts, boundingBoxes), key=lambda b:b[1][i], reverse=reverse)) # return the list of sorted contours and bounding boxes return (cnts, boundingBoxes) #sorting contours by calling fuction (cnts, boundingBoxes) = sort_contours(contours1, method="top-to-bottom") #storing value of all bouding box height heightlist=[] for i in range(len(boundingBoxes)): heightlist.append(boundingBoxes[i][3]) #sorting height values heightlist.sort() sportion = int(.5*len(heightlist)) eportion = int(0.05*len(heightlist)) #taking 50% to 95% values of heights and calculate their mean #this will neglect small bounding box which are basically noise try: medianheight = statistics.mean(heightlist[-sportion:-eportion]) except: medianheight = statistics.mean(heightlist[-sportion:-2]) #keeping bounding box which are having height more then 70% of the mean height and deleting all those value where # ratio of width to height is less then 0.9 box =[] imag = iii.copy() for i in range(len(cnts)): cnt = cnts[i] x,y,w,h = cv2.boundingRect(cnt) if(h>=.7*medianheight and w/h > 0.9): image = cv2.rectangle(imag,(x+4,y-2),(x+w-5,y+h),(0,255,0),1) box.append([x,y,w,h]) # to show image ###Now we have badly detected boxes image as shown | You're on the right track. Here's a continuation of your approach with slight modifications. The idea is: Obtain binary image. Load image, convert to grayscale, and Otsu's threshold. Remove all character text contours. We create a rectangular kernel and perform opening to only keep the horizontal/vertical lines. This will effectively make the text into tiny noise so we find contours and filter using contour area to remove them. Repair horizontal/vertical lines and extract each ROI. We morph close to fix and broken lines and smooth the table. From here we sort the box field contours using imutils.sort_contours() with the top-to-bottom parameter. Next we find contours and filter using contour area then extract each ROI. 
Here's a visualization of each box field and the extracted ROI Code import cv2 import numpy as np from imutils import contours # Load image, grayscale, Otsu's threshold image = cv2.imread('1.jpg') original = image.copy() gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Remove text characters with morph open and contour filtering kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3)) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1) cnts = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area < 500: cv2.drawContours(opening, [c], -1, (0,0,0), -1) # Repair table lines, sort contours, and extract ROI close = 255 - cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1) cnts = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] (cnts, _) = contours.sort_contours(cnts, method="top-to-bottom") for c in cnts: area = cv2.contourArea(c) if area < 25000: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), -1) ROI = original[y:y+h, x:x+w] # Visualization cv2.imshow('image', image) cv2.imshow('ROI', ROI) cv2.waitKey(20) cv2.imshow('opening', opening) cv2.imshow('close', close) cv2.imshow('image', image) cv2.waitKey() | 13 | 7 |
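(Since the original goal was getting each field into Excel, a hedged follow-on to the accepted answer: run pytesseract on every extracted box and dump the recognised text to a spreadsheet. This continuation is my own, the --psm 6 setting is a guess at a sensible page-segmentation mode, and writing .xlsx assumes openpyxl is installed.)

```python
import cv2
import pandas as pd
import pytesseract
from imutils import contours

image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Same clean-up as the answer: remove character-sized blobs, then repair the grid
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
cnts = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    if cv2.contourArea(c) < 500:
        cv2.drawContours(opening, [c], -1, (0, 0, 0), -1)
close = 255 - cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)

cnts = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
(cnts, _) = contours.sort_contours(cnts, method="top-to-bottom")

texts = []
for c in cnts:
    if cv2.contourArea(c) < 25000:
        x, y, w, h = cv2.boundingRect(c)
        roi = image[y:y + h, x:x + w]
        texts.append(pytesseract.image_to_string(roi, config='--psm 6').strip())

pd.DataFrame({'field': texts}).to_excel('fields.xlsx', index=False)
```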