Dataset columns: question_id (int64, range 59.5M to 79.4M), creation_date (string, 8 to 10 chars), link (string, 60 to 163 chars), question (string, 53 to 28.9k chars), accepted_answer (string, 26 to 29.3k chars), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482).
64,689,342
2020-11-4
https://stackoverflow.com/questions/64689342/plotly-how-to-add-volume-to-a-candlestick-chart
code: from plotly.offline import init_notebook_mode, iplot, iplot_mpl def plot_train_test(train, test, date_split): data = [Candlestick(x=train.index, open=train['open'], high=train['high'], low=train['low'], close=train['close'],name='train'), Candlestick(x=test.index, open=test['open'], high=test['high'], low=test['low'], close=test['close'],name='test') ] layout = { 'shapes': [ {'x0': date_split, 'x1': date_split, 'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper', 'line': {'color': 'rgb(0,0,0)', 'width': 1}}], 'annotations': [{'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'left','text': ' test data'}, {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'right', 'text': 'train data '}] } figure = Figure(data=data, layout=layout) iplot(figure) The above code is ok.But now I want to 'volume' in this candlestick chart code: from plotly.offline import init_notebook_mode, iplot, iplot_mpl def plot_train_test(train, test, date_split): data = [Candlestick(x=train.index, open=train['open'], high=train['high'], low=train['low'], close=train['close'],volume=train['volume'],name='train'), Candlestick(x=test.index, open=test['open'], high=test['high'], low=test['low'],close=test['close'],volume=test['volume'],name='test')] layout = { 'shapes': [ {'x0': date_split, 'x1': date_split, 'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper', 'line': {'color': 'rgb(0,0,0)', 'width': 1}} ], 'annotations': [ {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'left', 'text': ' test data'}, {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'right', 'text': 'train data '} ] } figure = Figure(data=data, layout=layout) iplot(figure) error: ValueError: Invalid property specified for object of type plotly.graph_objs.Candlestick: 'volume'
You haven't provided a complete code snippet with a data sample, so I'm going to have to suggest a solution that builds on an example here. In any case, you're getting that error message simply because go.Candlestick does not have a Volume attribute. And it might not seem so at first, but you can easily set up go.Candlestick as an individual trace, and then include an individual go.Bar() trace for Volumes using: fig = make_subplots(specs=[[{"secondary_y": True}]]) fig.add_traces(go.Candlestick(...), secondary_y=True) fig.add_traces(go.Bar(...), secondary_y=False) Plot: Complete code: import plotly.graph_objects as go from plotly.subplots import make_subplots import pandas as pd # data df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv') # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # include candlestick with rangeselector fig.add_trace(go.Candlestick(x=df['Date'], open=df['AAPL.Open'], high=df['AAPL.High'], low=df['AAPL.Low'], close=df['AAPL.Close']), secondary_y=True) # include a go.Bar trace for volumes fig.add_trace(go.Bar(x=df['Date'], y=df['AAPL.Volume']), secondary_y=False) fig.layout.yaxis2.showgrid=False fig.show()
17
28
64,694,102
2020-11-5
https://stackoverflow.com/questions/64694102/matplotlib-does-not-print-any-plot-on-databricks
%matplotlib inline corr = df.corr() f, ax = plt.subplots(figsize=(11, 9)) ax = sns.heatmap( corr, vmin=-1, vmax=1, center=0, cmap=sns.diverging_palette(20, 220, n=500), linewidths=.50, cbar_kws={"shrink": .7}, square=True ) ax.set_xticklabels( ax.get_xticklabels(), rotation=45, horizontalalignment='right' ); plt.show() This code is does not give any plot display on azure data bricks, only displays <Figure size 1100x900 with 2 Axes> whereas the same code worked fine and displayed a corr plot earlier, not sure whats going wrong here. I get the same output even when I try this. mask = np.triu(np.ones_like(corr, dtype=bool)) f, ax = plt.subplots(figsize=(11, 9)) cmap = sns.diverging_palette(20, 220, as_cmap=True) sns.heatmap(corr, mask=mask, cmap=cmap, vmax=0.3, center=0, square=True, linewidths=.1, cbar_kws={"shrink": .7}) plt.show()
It looks like an issue with matplotlib versions above 3.3.0. To find the exact reason, I would suggest reporting it here: https://github.com/matplotlib/matplotlib/issues As per the test from our end, you will experience this behaviour with matplotlib versions above 3.3.0. If you have installed a matplotlib version above 3.3.0, I would suggest downgrading to matplotlib 3.2.2 or lower. Once I installed matplotlib==3.2.2, I was able to successfully display the plot.
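For reference, a minimal way to pin the version from inside a Databricks notebook cell (assuming a runtime recent enough to support the %pip magic and dbutils.library.restartPython(); otherwise use the cluster library UI) is:

%pip install matplotlib==3.2.2

# Restart the Python process so the downgraded module is picked up
dbutils.library.restartPython()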
6
8
64,687,375
2020-11-4
https://stackoverflow.com/questions/64687375/get-labels-from-dataset-when-using-tensorflow-image-dataset-from-directory
I wrote a simple CNN using tensorflow (v2.4) + keras in python (v3.8.3). I am trying to optimize the network, and I want more info on what it is failing to predict. I am trying to add a confusion matrix, and I need to feed tensorflow.math.confusion_matrix() the test labels. My problem is that I cannot figure out how to access the labels from the dataset object created by tf.keras.preprocessing.image_dataset_from_directory() My images are organized in directories having the label as the name. The documentation says the function returns a tf.data.Dataset object. If label_mode is None, it yields float32 tensors of shape (batch_size, image_size[0], image_size[1], num_channels), encoding images (see below for rules regarding num_channels). Otherwise, it yields a tuple (images, labels), where images has shape (batch_size, image_size[0], image_size[1], num_channels), and labels follows the format described below. Here is the code: import tensorflow as tf from tensorflow.keras import layers #import matplotlib.pyplot as plt import numpy as np import random import PIL import PIL.Image import os import pathlib #load the IMAGES dataDirectory = '/p/home/username/tensorflow/newBirds' dataDirectory = pathlib.Path(dataDirectory) imageCount = len(list(dataDirectory.glob('*/*.jpg'))) print('Image count: {0}\n'.format(imageCount)) #test display an image # osprey = list(dataDirectory.glob('OSPREY/*')) # ospreyImage = PIL.Image.open(str(osprey[random.randint(1,100)])) # ospreyImage.show() # nFlicker = list(dataDirectory.glob('NORTHERN FLICKER/*')) # nFlickerImage = PIL.Image.open(str(nFlicker[random.randint(1,100)])) # nFlickerImage.show() #set parameters batchSize = 32 height=224 width=224 (trainData, trainLabels) = tf.keras.preprocessing.image_dataset_from_directory( dataDirectory, labels='inferred', label_mode='categorical', validation_split=0.2, subset='training', seed=324893, image_size=(height,width), batch_size=batchSize) testData = tf.keras.preprocessing.image_dataset_from_directory( dataDirectory, labels='inferred', label_mode='categorical', validation_split=0.2, subset='validation', seed=324893, image_size=(height,width), batch_size=batchSize) #class names and sampling a few images classes = trainData.class_names testClasses = testData.class_names #plt.figure(figsize=(10,10)) # for images, labels in trainData.take(1): # for i in range(9): # ax = plt.subplot(3, 3, i+1) # plt.imshow(images[i].numpy().astype("uint8")) # plt.title(classes[labels[i]]) # plt.axis("off") # plt.show() #buffer to hold the data in memory for faster performance autotune = tf.data.experimental.AUTOTUNE trainData = trainData.cache().shuffle(1000).prefetch(buffer_size=autotune) testData = testData.cache().prefetch(buffer_size=autotune) #augment the dataset with zoomed and rotated images #use convolutional layers to maintain spatial information about the images #use max pool layers to reduce #flatten and then apply a dense layer to predict classes model = tf.keras.Sequential([ #layers.experimental.preprocessing.RandomFlip('horizontal', input_shape=(height, width, 3)), #layers.experimental.preprocessing.RandomRotation(0.1), #layers.experimental.preprocessing.RandomZoom(0.1), layers.experimental.preprocessing.Rescaling(1./255, input_shape=(height, width, 3)), layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(128, 3, padding='same', 
activation='relu'), layers.MaxPooling2D(), layers.Conv2D(256, 3, padding='same', activation='relu'), layers.MaxPooling2D(), # layers.Conv2D(512, 3, padding='same', activation='relu'), # layers.MaxPooling2D(), #layers.Conv2D(1024, 3, padding='same', activation='relu'), #layers.MaxPooling2D(), #dropout prevents overtraining by not allowing each node to see each datapoint #layers.Dropout(0.5), layers.Flatten(), layers.Dense(512, activation='relu'), layers.Dense(len(classes)) ]) model.compile(optimizer='adam', loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.summary() epochs=2 history = model.fit( trainData, validation_data=testData, epochs=epochs ) #create confusion matrix predictions = model.predict_classes(testData) confusionMatrix = tf.math.confusion_matrix(labels=testClasses, predictions=predictions).numpy() I have tried using (foo, foo1) = tf.keras.preprocessing.image_dataset_from_directory(dataDirectory, etc), but I get (trainData, trainLabels) = tf.keras.preprocessing.image_dataset_from_directory( ValueError: too many values to unpack (expected 2) And if I try to return as one variable and then split it as so: train = tf.keras.preprocessing.image_dataset_from_directory( dataDirectory, labels='inferred', label_mode='categorical', validation_split=0.2, subset='training', seed=324893, image_size=(height,width), batch_size=batchSize) trainData = train[0] trainLabels = train[1] I get TypeError: 'BatchDataset' object is not subscriptable I can access the labels via testClasses = testData.class_names, but I get: 2020-11-03 14:15:14.643300: W tensorflow/core/framework/op_kernel.cc:1740] OP_REQUIRES failed at cast_op.cc:121 : Unimplemented: Cast string to int64 is not supported Traceback (most recent call last): File "birdFake.py", line 115, in confusionMatrix = tf.math.confusion_matrix(labels=testClasses, predictions=predictions).numpy() File "/p/home/username/miniconda3/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper return target(*args, **kwargs) File "/p/home/username/miniconda3/lib/python3.8/site-packages/tensorflow/python/ops/confusion_matrix.py", line 159, in confusion_matrix labels = math_ops.cast(labels, dtypes.int64) File "/p/home/username/miniconda3/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper return target(*args, **kwargs) File "/p/home/username/miniconda3/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 966, in cast x = gen_math_ops.cast(x, base_type, name=name) File "/p/home/username/miniconda3/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1827, in cast _ops.raise_from_not_ok_status(e, name) File "/p/home/username/miniconda3/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 6862, in raise_from_not_ok_status six.raise_from(core._status_to_exception(e.code, message), None) File "", line 3, in raise_from tensorflow.python.framework.errors_impl.UnimplementedError: Cast string to int64 is not supported [Op:Cast] I am open to any method to get those labels into the confusion matrix. Any ideas as to why what I am doing is not working would also be appreciated. 
UPDATE: I tried the method proposed by Alexandre Catalano, and I get the following error Traceback (most recent call last): File "./birdFake.py", line 118, in labels = np.concatenate([labels, np.argmax(y.numpy(), axis=-1)]) File "<array_function internals>", line 5, in concatenate ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 0 dimension(s) I printed the first element of the labels array, and it is zero
If I were you, I'd iterate over the entire testData, save the predictions and labels along the way, and build the confusion matrix at the end. testData = tf.keras.preprocessing.image_dataset_from_directory( dataDirectory, labels='inferred', label_mode='categorical', seed=324893, image_size=(height,width), batch_size=32) predictions = np.array([]) labels = np.array([]) for x, y in testData: predictions = np.concatenate([predictions, model.predict_classes(x)]) labels = np.concatenate([labels, np.argmax(y.numpy(), axis=-1)]) tf.math.confusion_matrix(labels=labels, predictions=predictions).numpy() and the result is: Found 4 files belonging to 2 classes. array([[2, 0], [2, 0]], dtype=int32)
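A note on this approach: Sequential.predict_classes was deprecated and later removed in newer TensorFlow releases, so if the call above fails, a sketch of the same loop using plain predict plus argmax (same variable names as in the answer) is:

predictions = np.array([])
labels = np.array([])
for x, y in testData:
    # argmax over the model outputs replaces the removed predict_classes
    predictions = np.concatenate([predictions, np.argmax(model.predict(x), axis=-1)])
    labels = np.concatenate([labels, np.argmax(y.numpy(), axis=-1)])

confusion = tf.math.confusion_matrix(labels=labels, predictions=predictions).numpy()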
18
16
64,703,065
2020-11-5
https://stackoverflow.com/questions/64703065/how-to-check-if-a-column-contains-list
import pandas as pd df = pd.DataFrame({"col1": ["a", "b", "c", ["a", "b"]]}) I have a dataframe like this, and I want to find the rows that contains list in that column. I tried value_counts() but it tooks so long and throws error at the end. Here is the error: TypeError Traceback (most recent call last) pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.map_locations() TypeError: unhashable type: 'list' Exception ignored in: 'pandas._libs.index.IndexEngine._call_map_locations' Traceback (most recent call last): File "pandas/_libs/hashtable_class_helper.pxi", line 1709, in pandas._libs.hashtable.PyObjectHashTable.map_locations TypeError: unhashable type: 'list' c 1 a 1 [a, b] 1 b 1 Name: col1, dtype: int64 For bigger dataframes this tooks forever. Here is how the desired output look like: col1 c 1 b 1 [a,b] 1 dtype: int64
Lists are mutable, so they cannot be hashed, which means you can neither count the values nor set them as index. You would need to convert to tuple or frozenset (thanks @CameronRiddell) to be able to count: df['col1'].apply(lambda x: tuple(x) if isinstance(x, list) else x).value_counts() Output: c 1 b 1 a 1 (a, b) 1 Name: col1, dtype: int64
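If the goal is only to locate the offending rows rather than count values, a small sketch using a boolean mask (same df as above) avoids the conversion entirely:

mask = df['col1'].apply(lambda x: isinstance(x, list))
rows_with_lists = df[mask]   # rows whose col1 holds a list
print(rows_with_lists)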
6
2
64,700,652
2020-11-5
https://stackoverflow.com/questions/64700652/how-to-assign-pandas-column-to-other-column-or-default-value-if-nan
I have df= a 1 nan 3 I want some syntax for df["b"] = df["a"] or 5 to create a b 1 1 nan 5 3 3 Does pandas support something like this? BONUS: what about different default values per index/group/anything?
import pandas as pd import numpy as np df = pd.DataFrame({"a": [1, np.nan, 3]}) df["b"] = df["a"].fillna(5) print(df) a b 0 1.0 1.0 1 NaN 5.0 2 3.0 3.0 Digging through the docs gives the standard pandas solution; no need to go through numpy.
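For the bonus question (different defaults per index or group), fillna also accepts a Series that is aligned on the index, so per-row or per-group defaults can be supplied that way. A sketch, where the grouping column "g" is a hypothetical addition to the example frame:

# per-index defaults: a Series of fallback values aligned with df's index
defaults = pd.Series([5, 6, 7], index=df.index)
df["b"] = df["a"].fillna(defaults)

# per-group defaults: fill each group's NaNs with that group's own mean
df["b"] = df["a"].fillna(df.groupby("g")["a"].transform("mean"))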
6
6
64,696,711
2020-11-5
https://stackoverflow.com/questions/64696711/longest-path-finding-with-condition
I'm trying to solve a problem in Python/Pandas which I think is closely related to the longest path algorithm. The DataFrame I'm working with has the following structure: import numpy as np import pandas as pd data = { "cusID": ["001", "001", "001", "001", "001", "001", "002", "002", "002"], "start": ["A", "B", "C", "D", "A", "E", "B", "C", "D"], "end": ["B", "C", "D", "A", "E", "A", "C", "D", "E"] } df = pd.DataFrame(data) print(df) cusID start end 0 001 A B 1 001 B C 2 001 C D 3 001 D A 4 001 A E 5 001 E A 6 002 B C 7 002 C D 8 002 D E For each customer, I want to find the longest sequence which does not contain A. For instance, for customer 001, the sequence can be viewed as follows: A -> B -> C -> D -> A -> E -> A where B -> C -> D is the longest sequence of size 3. The resulting DataFrame I'm looking for is the following: cusID longestSeq 0 001 3 1 002 4 Although I wasn't able to write much code to solve this problem, here are some of my thoughts: First, it's obvious that I need to group the original DataFrame by cusID to analyse each of the two sequences separately. One idea I had was to apply some function to transform the DataFrame into this format: cusID seq 0 001 [A, B, C, D, A, E, A] 1 002 [B, C, D, E] and then working on each list separately, and use some sort of counter to take the maximum of the length of the paths that exclude A. My problem is transcribing that logic into code (if correct). Any help would be appreciated.
First step is to normalize the sequences. seqs = pd.concat([ df.drop(columns="end").rename(columns={"start":"node"}), df.groupby("cusID").tail(1).drop(columns="start").rename(columns={"end":"node"}) ]) seqs = seqs.sort_values("cusID", kind="mergesort").reset_index(drop=True) >>> seqs cusID node 0 001 A 1 001 B 2 001 C 3 001 D 4 001 A 5 001 E 6 001 A 7 002 B 8 002 C 9 002 D 10 002 E Then, using zero_runs we define: def longest_non_a(seq): eqa = seq == "A" runs = zero_runs(eqa) return (runs[:,1] - runs[:,0]).max() result = seqs.groupby("cusID")["node"].apply(longest_non_a) >>> result cusID 001 3 002 4 Name: node, dtype: int64
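The zero_runs helper used above is not defined in the answer; a common implementation (a sketch based on a well-known NumPy idiom for finding runs of zeros) looks like this:

import numpy as np

def zero_runs(a):
    # Return an (m, 2) array of [start, stop) indices for each run of zeros
    # (False values) in the 1-D array-like `a`.
    a = np.asarray(a)
    iszero = np.concatenate(([0], (a == 0).astype(np.int8), [0]))
    absdiff = np.abs(np.diff(iszero))
    return np.where(absdiff == 1)[0].reshape(-1, 2)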
7
5
64,695,324
2020-11-5
https://stackoverflow.com/questions/64695324/upload-via-the-youtube-via-api-set-to-private-locked
I have been using the youtube API to upload remotely. But after a while of messing around with the code all the videos that get uploaded gets "Private (locked)" due to Terms and policies. I cant appeal it either due to "Appealing this violation is not available". Just to clarify I have been able to upload before and only recently started getting this error. Code: youtube-uploader #!/usr/bin/python import argparse import http.client import httplib2 import os import random import time import videoDetails import google.oauth2.credentials import google_auth_oauthlib.flow from googleapiclient.discovery import build from googleapiclient.errors import HttpError from googleapiclient.http import MediaFileUpload from google_auth_oauthlib.flow import InstalledAppFlow from oauth2client import client # Added from oauth2client import tools # Added from oauth2client.file import Storage # Added # Explicitly tell the underlying HTTP transport library not to retry, since # we are handling retry logic ourselves. httplib2.RETRIES = 1 # Maximum number of times to retry before giving up. MAX_RETRIES = 10 # Always retry when these exceptions are raised. RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError, http.client.NotConnected, http.client.IncompleteRead, http.client.ImproperConnectionState, http.client.CannotSendRequest, http.client.CannotSendHeader, http.client.ResponseNotReady, http.client.BadStatusLine) # Always retry when an apiclient.errors.HttpError with one of these status # codes is raised. RETRIABLE_STATUS_CODES = [500, 502, 503, 504] CLIENT_SECRETS_FILE = 'client_secrets.json' SCOPES = ['https://www.googleapis.com/auth/youtube.upload'] API_SERVICE_NAME = 'youtube' API_VERSION = 'v3' VALID_PRIVACY_STATUSES = ('public', 'private', 'unlisted') def get_authenticated_service(): # Modified credential_path = os.path.join('./', 'credentials.json') store = Storage(credential_path) credentials = store.get() if not credentials or credentials.invalid: flow = client.flow_from_clientsecrets(CLIENT_SECRETS_FILE, SCOPES) credentials = tools.run_flow(flow, store) return build(API_SERVICE_NAME, API_VERSION, credentials=credentials) def initialize_upload(youtube, options): tags = None if options.keywords: tags = options.keywords.split(',') body=dict( snippet=dict( title=options.getFileName("video").split(".", 1)[0], description=options.description, tags=tags, categoryId=options.category ), status=dict( privacyStatus=options.privacyStatus ) ) # Call the API's videos.insert method to create and upload the video. videoPath = "/Users\caspe\OneDrive\Documents\Övrigt\Kodning\AwsCSGO\Video\%s" % (options.getFileName("video")) insert_request = youtube.videos().insert( part=','.join(body.keys()), body=body, media_body=MediaFileUpload(videoPath, chunksize=-1, resumable=True) ) resumable_upload(insert_request, options) # This method implements an exponential backoff strategy to resume a # failed upload. def resumable_upload(request, options): response = None error = None retry = 0 while response is None: try: print('Uploading file...') status, response = request.next_chunk() if response is not None: if 'id' in response: print ('The video with the id %s was successfully uploaded!' 
% response['id']) # upload thumbnail for Video options.insertThumbnail(youtube, response['id']) else: exit('The upload failed with an unexpected response: %s' % response) except HttpError as e: if e.resp.status in RETRIABLE_STATUS_CODES: error = 'A retriable HTTP error %d occurred:\n%s' % (e.resp.status, e.content) else: raise except RETRIABLE_EXCEPTIONS as e: error = 'A retriable error occurred: %s' % e if error is not None: print (error) retry += 1 if retry > MAX_RETRIES: exit('No longer attempting to retry.') max_sleep = 2 ** retry sleep_seconds = random.random() * max_sleep print ('Sleeping %f seconds and then retrying...') % sleep_seconds time.sleep(sleep_seconds) if __name__ == '__main__': args = videoDetails.Video() youtube = get_authenticated_service() try: initialize_upload(youtube, args) except HttpError as e: print ('An HTTP error %d occurred:\n%s') % (e.resp.status, e.content) videoDetails import os from googleapiclient.http import MediaFileUpload class Video: description = "test description" category = "22" keywords = "test" privacyStatus = "public" def getFileName(self, type): for file in os.listdir("/Users\caspe\OneDrive\Documents\Övrigt\Kodning\AwsCSGO\Video"): if type == "video" and file.split(".", 1)[1] != "jpg": return file break elif type == "thumbnail" and file.split(".", 1)[1] != "mp4": return file break def insertThumbnail(self, youtube, videoId): thumnailPath = "/Users\caspe\OneDrive\Documents\Övrigt\Kodning\AwsCSGO\Video\%s" % (self.getFileName("thumbnail")) request = youtube.thumbnails().set( videoId=videoId, media_body=MediaFileUpload(thumnailPath) ) response = request.execute() print(response)
If you check the documentation for Videos.insert you will find a notice at the top of the page: this is a new policy that has recently begun to be enforced. Until your application has been verified, all videos you upload will be set to private. You need to go through the audit first; then you will be able to upload public videos. Note that once your application has been verified, previously uploaded videos will not automatically be set to public; you will need to do that yourself.
8
10
64,689,483
2020-11-5
https://stackoverflow.com/questions/64689483/how-to-do-multiclass-classification-with-keras
I want to make simple classifier with Keras that will classify my data. Features are numeric data and results are string/categorical data. I'm predicting 15 different categories/classes. This is how my code looks: model = Sequential() model.add(Dense(16, input_dim = x_train.shape[1], activation = 'relu')) # input layer requires input_dim param model.add(Dense(16, activation = 'relu')) model.add(Dense(16, activation = 'relu')) model.add(Dense(1, activation='relu')) model.compile(loss="binary_crossentropy", optimizer= "adam", metrics=['accuracy']) #es = EarlyStopping(monitor='loss', min_delta=0.005, patience=1, verbose=1, mode='auto') model.fit(x_train, y_train, epochs = 100, shuffle = True, batch_size=128, verbose=2) scores = model.evaluate(x_test, y_test) print(model.metrics_names[0], model.metrics_names[1]) the problem is that I'm always getting this error: ValueError: could not convert string to float 'category1' What am I doing wrong? When I replace my classes names "category1", "category2" etc with integer numbers, my code works but it always give me accuracy of 0. I have tried to change number of nodes and layers and activation functions but the result is always 0. It seams like the model thinks that I'm doing regression not classification. What is the correct way to do classification with Keras lib If my categorical values are not just 1 or 0?
You need to convert your string categories to integers and then one-hot encode them; Keras has a utility for the one-hot step: y_train = tf.keras.utils.to_categorical(y_train, num_classes=num_classes) Also, the last layer for multi-class classification should be something like: model.add(Dense(NUM_CLASSES, activation='softmax')) And finally, for multi-class classification, the correct loss would be categorical cross-entropy: model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy']) This is a nice example available from tensorflow: Classification Example
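To make the first step concrete: to_categorical expects integer class indices, so the string labels have to be mapped to integers first. A minimal sketch, assuming scikit-learn is available (any manual mapping works just as well):

from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

encoder = LabelEncoder()
y_train_int = encoder.fit_transform(y_train)   # 'category1', 'category2', ... -> 0, 1, ...
y_test_int = encoder.transform(y_test)

num_classes = len(encoder.classes_)            # 15 in this case
y_train_onehot = to_categorical(y_train_int, num_classes=num_classes)
y_test_onehot = to_categorical(y_test_int, num_classes=num_classes)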
8
13
64,673,038
2020-11-4
https://stackoverflow.com/questions/64673038/how-to-split-cell-in-vscode-jupyter-notebook
How can a Jupyter notebook cell be split in VSCode? I.e., how to split a single cell with multiple lines into two cells with the top lines (above the cursor) in one cell and the bottom lines (below the cursor) in another cell? I've tried Cntrl Shift - using the Daily Insiders Python Extension, but it doesn't seem to do anything.
The Ctrl Shift - shortcut is bound to zooming out the display by default in VS Code. This feature has been requested on GitHub for a long time; here is the request: Jupyter Split Cell and Select Multiple Cells command. That issue is still open. Although 'Notebooks are getting revamped!' exists, it applies to VS Code Insiders, not the current stable VS Code. Maybe the product team can improve this in the future.
19
8
64,686,950
2020-11-4
https://stackoverflow.com/questions/64686950/whats-the-python-equivalent-of-julias-edit-macro
In Julia, calling a function with the @edit macro from the REPL will open the editor and put the cursor at the line where the method is defined. So, doing this: julia> @edit 1 + 1 jumps to julia/base/int.jl and puts the cursor on the line: (+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y) As does the function form: edit(+, (Int, Int)) Is there an equivalent decorator/function in Python that does the same from the Python REPL?
Disclaimer: In the Python ecosystem, this is not the job of the core language/runtime but rather tools such as IDEs. For example, the ipython shell has the ?? special syntax to get improved help including source code. Python 3.8.5 (default, Jul 21 2020, 10:42:08) Type 'copyright', 'credits' or 'license' for more information IPython 7.18.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: import random In [2]: random.uniform?? Signature: random.uniform(a, b) Source: def uniform(self, a, b): "Get a random number in the range [a, b) or [a, b] depending on rounding." return a + (b-a) * self.random() File: /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/random.py Type: method The Python runtime itself allows viewing source code of objects via inspect.getsource. This uses a heuristic to search the source code as available; the objects themselves do not carry their source code. Python 3.8.5 (default, Jul 21 2020, 10:42:08) [Clang 11.0.0 (clang-1100.0.33.17)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import inspect >>> print(inspect.getsource(inspect.getsource)) def getsource(object): """Return the text of the source code for an object. The argument may be a module, class, method, function, traceback, frame, or code object. The source code is returned as a single string. An OSError is raised if the source code cannot be retrieved.""" lines, lnum = getsourcelines(object) return ''.join(lines) It is not possible to resolve arbitrary expressions or statements to their source; since all names in Python are resolved dynamically, the vast majority of expressions does not have a well-defined implementation unless executed. A debugger, e.g. as provided by pdb.set_trace(), allows inspecting the expression as it is executed.
8
5
64,692,669
2020-11-5
https://stackoverflow.com/questions/64692669/discord-bot-not-responding-to-commands-python
I've just gotten into writing discord bots. While trying to follow online instructions and tutorials, my bot would not respond to commands. It responded perfectly fine to on_message(), but no matter what I try it won't respond to commands. I'm sure it's something simple, but I would appreciate the help. import discord from discord.ext.commands import Bot from discord.ext import commands bot = commands.Bot(command_prefix='$') TOKEN = '<token-here>' @bot.event async def on_ready(): print(f'Bot connected as {bot.user}') @bot.event async def on_message(message): if message.content == 'test': await message.channel.send('Testing 1 2 3') @bot.command(name='go') async def dosomething(ctx): print("command called") #Tried putting this in help in debugging await message.channel.send("I did something") bot.run(TOKEN) Picture of me prompting the bot and the results
Ok. First of all, the only import statement that you need at the top is from discord.ext import commands. The other two are not necessary. Second of all, I tried messing around with your code myself and found that the on_message() function seems to interfere with the commands so taking that out should help. Third of all, I only found this out when I duplicated one of my own working bots and slowly changed all of the code until it was identical to yours. For some reason python didn't like it when I just copied and pasted your code. I have never seen something like this before so I honestly don't know what to say other than that your code is correct and should work as long as you take the on_message() function out. Here's the final code that I got working: from discord.ext import commands bot = commands.Bot(command_prefix="$") TOKEN = "<token-here>" @bot.event async def on_ready(): print(f'Bot connected as {bot.user}') @bot.command() async def dosomething(ctx): await ctx.send("I did something") bot.run(TOKEN) As you can see the only things that I have changed from your code are that I removed the redundant imports at the top, and I deleted the on_message() function. It works perfectly like this on my end so I would suggest that you re-type it out like this in a new file and see if that works. If that doesn't work for you then my next guess would be that there is a problem with your installation of discord.py so you might try uninstalling it and then reinstalling it. If none of that helps let me know and I will see if I can help you find anything else that might be the cause of the problem.
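For completeness, if you do want to keep a custom on_message handler alongside commands, discord.py requires forwarding messages to the command processor at the end of it; a sketch of that variant:

@bot.event
async def on_message(message):
    if message.author == bot.user:
        return
    if message.content == 'test':
        await message.channel.send('Testing 1 2 3')
    # Overriding on_message swallows commands unless they are explicitly re-dispatched:
    await bot.process_commands(message)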
5
0
64,677,450
2020-11-4
https://stackoverflow.com/questions/64677450/plotly-how-to-put-two-3d-graphs-on-the-same-plot-with-plotly-graph-objects
In below code, I draw 2 3D graphs with plotly.graph_objects. I'm unable to put them together. import plotly.graph_objects as go import numpy as np pts = np.loadtxt(np.DataSource().open('https://raw.githubusercontent.com/plotly/datasets/master/mesh_dataset.txt')) x, y, z = pts.T ### First graph fig = go.Figure(data=[go.Mesh3d(x=x, y=y, z=z, alphahull=5, opacity=0.4, color='cyan')]) fig.show() ####### x = [0, 1, 0] y = [0, 2, 3] tvects = [x,y] orig = [0,0,0] df=[] coords = [[orig, np.sum([orig, v],axis=0)] for v in tvects] for i,c in enumerate(coords): X1, Y1, Z1 = zip(c[0]) X2, Y2, Z2 = zip(c[1]) vector = go.Scatter3d(x = [X1[0],X2[0]], y = [Y1[0],Y2[0]], z = [Z1[0],Z2[0]], marker = dict(size = [0,5], color = ['blue'], line=dict(width=5, color='DarkSlateGrey')), name = 'Vector'+str(i+1)) data.append(vector) ### Second graph fig = go.Figure(data=data) fig.show() ######## Could you please elaborate on how to do so, and how to have the arrow with head rather than a line in the second graph?
If you'd like to build on your first fig definition, just include the following after your first call to go.Figure() data = fig._data data is now a list which will have new elements appended to it in the rest of your already exising code under: for i,c in enumerate(coords): X1, Y1, Z1 = zip(c[0]) X2, Y2, Z2 = zip(c[1]) vector = go.Scatter3d(<see details in snippet below>) data.append(vector) Result: Regarding the arrows, the only options dierectly available to you through go.Scatter3D are: ['circle', 'circle-open', 'square', 'square-open','diamond', 'diamond-open', 'cross', 'x'] I hope one of those options will suit your needs. You can specify which one in: marker = dict(size = [15,15], color = ['blue'], symbol = 'diamond', line=dict(width=500, #color='red' )), Complete code: import plotly.graph_objects as go import numpy as np pts = np.loadtxt(np.DataSource().open('https://raw.githubusercontent.com/plotly/datasets/master/mesh_dataset.txt')) x, y, z = pts.T ### First graph fig = go.Figure(data=[go.Mesh3d(x=x, y=y, z=z, alphahull=5, opacity=0.4, color='cyan')]) #fig.show() ####### data = fig._data x = [0, 1, 0] y = [0, 2, 3] tvects = [x,y] orig = [0,0,0] df=[] coords = [[orig, np.sum([orig, v],axis=0)] for v in tvects] # ['circle', 'circle-open', 'square', 'square-open','diamond', 'diamond-open', 'cross', 'x'] for i,c in enumerate(coords): X1, Y1, Z1 = zip(c[0]) X2, Y2, Z2 = zip(c[1]) vector = go.Scatter3d(x = [X1[0],X2[0]], y = [Y1[0],Y2[0]], z = [Z1[0],Z2[0]], marker = dict(size = [15,15], color = ['blue'], symbol = 'diamond', line=dict(width=500, #color='red' )), name = 'Vector'+str(i+1)) data.append(vector) ### Second graph fig = go.Figure(data=data) fig.show() ########
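A small variation on the same idea that avoids touching the private fig._data attribute (a sketch; the rest of the snippet stays unchanged) is to keep the first figure and add each vector trace to it with the public API:

# instead of collecting traces in `data` and building a second figure:
for i, c in enumerate(coords):
    X1, Y1, Z1 = zip(c[0])
    X2, Y2, Z2 = zip(c[1])
    fig.add_trace(go.Scatter3d(x=[X1[0], X2[0]], y=[Y1[0], Y2[0]], z=[Z1[0], Z2[0]],
                               marker=dict(size=[0, 5], color=['blue']),
                               name='Vector' + str(i + 1)))
fig.show()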
6
5
64,686,981
2020-11-4
https://stackoverflow.com/questions/64686981/make-helix-from-two-objects
I have a plane and a sine curve in it. How to rotate these two objects, please? I mean to slowly incline the plane on the interval -0.1 to 0.4 in order to be, for instance, perpendicular to z at point 0.4? After longer rotation, the maximal and minimal value of the plane and sine would construct "the surface of a cylinder with the axis from the point [0, -0.1, 0] to the point [0, 0.4, 0]". I hope it is clear what I mean. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d import proj3d fig = plt.figure() ax = fig.add_subplot(111, projection='3d') plane1 = -0.1 plane2 = 0.4 h = 0.03 # Plane xp = np.array([[0, 0], [0, 0]]) yp = np.array([[plane1, plane2], [plane1, plane2]]) zp = np.array([[-h, -h], [h, h]]) ax.plot_surface(xp, yp, zp, alpha=0.4, color = 'red') # Sine f = 100 amp = h y = np.linspace(plane1, plane2, 5000) z = amp*np.sin(y*f) ax.plot(y, z, zdir='x') plt.show() The desired result is a plane filled by the sine curve of the following shape: but no so rotated, 30° on the whole interval is enough.
To answer the title question, to create a helix, you are looking for a simple 3D function: amp, f = 1, 1 low, high = 0, math.pi*20 n = 1000 y = np.linspace(low, high, n) x = amp*np.cos(f*y) z = amp*np.sin(f*y) ax.plot(x,y,z) This gives: One way to find this yourself is to think about: what does it look like from each direction? Making a 2D graph in the y/z plane, you would see a cos graph, and a 2D graph in the y/x plane, you would see a sin graph. And a 2D graph in the x/z plane you would see a circle. And that tells you everything about the 3D function! If you actually want to rotate the image of the sine wave in 3D space, it gets complicated. Also, it won't create a helix like you are expecting because the radius of the "cylinder" you are trying to create will change with the varying y values. But, since you asked how to do the rotation... You will want to use affine transformation matrices. A single rotation can be expressed as a 4x4 matrix which you matrix multiply a homogeneous coordinate to find the resulting point. (See link for illustrations of this) rotate_about_y = np.array([ [cos(theta), 0, sin(theta), 0], [0, 1, , 0], [-sin(theta), 0, cos(theta), 0], [0, 0, 0, 1], ]) Then, to apply this across a whole list of points, you could do something like this: # Just using a mathematical function to generate the points for illustation low, high = 0, math.pi*16 n = 1000 y = np.linspace(low, high, n) x = np.zeros_like(y) z = np.cos(y) xyz = np.stack([x, y, z], axis=1) # shaped [n, 3] min_rotation, max_rotation = 0, math.pi*16 homogeneous_points = np.concatenate([xyz, np.full([n, 1], 1)], axis=1) # shaped [n, 4] rotation_angles = np.linspace(min_rotation, max_rotation, n) rotate_about_y = np.zeros([n, 4, 4]) rotate_about_y[:, 0, 0] = np.cos(rotation_angles) rotate_about_y[:, 0, 2] = np.sin(rotation_angles) rotate_about_y[:, 1, 1] = 1 rotate_about_y[:, 2, 0] = -np.sin(rotation_angles) rotate_about_y[:, 2, 2] = np.cos(rotation_angles) rotate_about_y[:, 3, 3] = 1 # This broadcasts matrix multiplication over the whole homogeneous_points array rotated_points = (homogeneous_points[:, None] @ rotate_about_y).squeeze(1) # Remove tacked on 1 new_points = rotated_points[:, :3] ax.plot(x, y, z) ax.plot(new_points[:, 0], new_points[:, 1], new_points[:, 2]) For this case (where low == min_rotation and high == max_rotation), you get a helix-like structure, but, as I said, it's warped by the fact we're rotating about the y axis, and the function goes to y=0. Note: the @ symbol in numpy means "matrix multiplication". "Homogeneous points" are just those with 1's tacked on the end; I'm not going to get into why they make it work, but they do. Note #2: The above code assumes that you want to rotate about the y axis (i.e. around the line x=0, z=0). If you want to rotate about a different line you need to compose transformations. I won't go too much into it here, but the process is, roughly: Transform the points such that the line you want to rotate around is mapped to the y axis Perform the above transformation Do the inverse of the first transformation in this list And you can compose them by matrix multiplying each transformation with each other. Here's a document I found that explains how and why affine transformation matrices work. But, there's plenty of information out there on the topic.
6
2
64,689,560
2020-11-5
https://stackoverflow.com/questions/64689560/measuring-the-distance-of-a-point-to-a-mask-in-opencv-python
Suppose that I have a mask of an object and a point. I want to find the closest point of the object mask to the point. For example, in my drawing, there is an object, the blue shape in the image (assume inside is also the part of the object mask). And the red point is the point from which I want to find the closest distance to the object mask. So, I want to find the thick green line as it is the shortest distance to the mask not the other ones (pink, orange, etc). I can do this using one of the following ways: An inefficient way is to find the distances of all the pixels to the point using something like this (brute-force). Another way is to create many lines towards the mask with epsilon angle difference and find the closest point over that line, which is also not very nice. I can create lines over the edge, and find the closest point of each of those lines on the object border. (This may not be as easy as I thought, first I need to find the outer border, etc.) But none of these methods are is elegant. I am wondering what is a more elegant and the most efficient way to determine this?
You could do a sort of binary search. Let's call P your point and consider circles centred on P. Pick any point M on the mask; the circle through M will intersect the mask. Now repeat until convergence: if the circle intersects the mask, reduce the radius, otherwise increase it (by binary-search-type amounts). This will not work if your mask is not nicely connected, but in that case I doubt you can do better than brute force... The total cost should be linear for the circle-mask intersection check times log for the binary search.
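A rough sketch of that binary search with NumPy (a disk-versus-mask intersection test plus halving of the radius; it assumes the point is given as (row, col), the mask is a 2-D boolean array, and the tolerance and bounds are illustrative):

import numpy as np

def disk_hits_mask(mask, point, r):
    # True if any mask pixel lies within radius r of `point`.
    pr, pc = point
    ri = int(np.ceil(r))
    r0, r1 = max(pr - ri, 0), min(pr + ri + 1, mask.shape[0])
    c0, c1 = max(pc - ri, 0), min(pc + ri + 1, mask.shape[1])
    yy, xx = np.ogrid[r0:r1, c0:c1]
    inside = (yy - pr) ** 2 + (xx - pc) ** 2 <= r * r
    return np.any(mask[r0:r1, c0:c1] & inside)

def closest_mask_distance(mask, point, tol=0.5):
    lo, hi = 0.0, float(np.hypot(*mask.shape))   # a disk of diagonal radius always reaches the mask
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if disk_hits_mask(mask, point, mid):
            hi = mid    # circle already reaches the mask: shrink
        else:
            lo = mid    # circle misses the mask: grow
    return hi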
6
1
64,679,865
2020-11-4
https://stackoverflow.com/questions/64679865/error-while-installing-pytorch-using-pip-cannot-build-wheel
I get the following output when I try to run pip3 install pytorch or pip install pytorch Collecting pytorch Using cached pytorch-1.0.2.tar.gz (689 bytes) Building wheels for collected packages: pytorch Building wheel for pytorch (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/chaitanya/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3v4wd97t/pytorch/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3v4wd97t/pytorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-8rsdyb8e cwd: /tmp/pip-install-3v4wd97t/pytorch/ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-3v4wd97t/pytorch/setup.py", line 15, in <module> raise Exception(message) Exception: You tried to install "pytorch". The package named for PyTorch is "torch" ---------------------------------------- ERROR: Failed building wheel for pytorch Running setup.py clean for pytorch Failed to build pytorch Installing collected packages: pytorch Running setup.py install for pytorch ... error ERROR: Command errored out with exit status 1: command: /home/chaitanya/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3v4wd97t/pytorch/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3v4wd97t/pytorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-eld9j0g4/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/chaitanya/.local/include/python3.8/pytorch cwd: /tmp/pip-install-3v4wd97t/pytorch/ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-3v4wd97t/pytorch/setup.py", line 11, in <module> raise Exception(message) Exception: You tried to install "pytorch". The package named for PyTorch is "torch" ---------------------------------------- ERROR: Command errored out with exit status 1: /home/chaitanya/anaconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3v4wd97t/pytorch/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3v4wd97t/pytorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-eld9j0g4/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/chaitanya/.local/include/python3.8/pytorch Check the logs for full command output. I downloaded the matching wheel from here, but am couldn't figure out what to do with it. My Python installation is using anaconda3, if that's needed. What should I do from here? Tips on how I could have resolved this on my own would also be appreciated.
From your error: Exception: You tried to install "pytorch". The package named for PyTorch is "torch" which tells you what you need to know: instead of pip install pytorch it should be pip install torch I downloaded the matching wheel from here, but am couldn't figure out what to do with it Installing .whl files is as easy as pip install <path to .whl file> My Python installation is using anaconda3 That is very relevant. You should generally avoid using pip in your conda environment as much as possible. Instead, you can find the correct conda install command for your setup (CUDA version etc.) on pytorch.org, e.g. for CUDA 11 it would be conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
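Once installed (via either pip or conda), a quick sanity check that the right package is importable and that CUDA is visible:

import torch

print(torch.__version__)            # should print the installed torch version
print(torch.cuda.is_available())    # True if the CUDA build found a usable GPU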
14
38
64,676,672
2020-11-4
https://stackoverflow.com/questions/64676672/how-can-i-make-a-distance-matrix-with-own-metric-using-no-loop
I have a np.arrray like this: [[ 1.3 , 2.7 , 0.5 , NaN , NaN], [ 2.0 , 8.9 , 2.5 , 5.6 , 3.5], [ 0.6 , 3.4 , 9.5 , 7.4 , NaN]] And a function to compute the distance between two rows: def nan_manhattan(X, Y): nan_diff = np.absolute(X - Y) length = nan_diff.size return np.nansum(nan_diff) * length / (length - np.isnan(nan_diff).sum()) I need all pairwise distances, and I don't want to use a loop. How do I do that?
Leveraging broadcasting - def manhattan_nan(a): s = np.nansum(np.abs(a[:,None,:] - a), axis=-1) m = ~np.isnan(a) k = m.sum(1) r = a.shape[1]/np.minimum.outer(k,k) out = s*r return out Benchmarking From OP's comments, the use-case seems to be a tall array. Let's reproduce one for benchmarking re-using given sample data : In [2]: a Out[2]: array([[1.3, 2.7, 0.5, nan, nan], [2. , 8.9, 2.5, 5.6, 3.5], [0.6, 3.4, 9.5, 7.4, nan]]) In [3]: a = np.repeat(a, 100, axis=0) # @Dani Mesejo's soln In [4]: %timeit pdist(a, nan_manhattan) 1.02 s ± 35.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # Naive for-loop version In [18]: n = a.shape[0] In [19]: %timeit [[nan_manhattan(a[i], a[j]) for i in range(j+1,n)] for j in range(n)] 991 ms ± 45.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # With broadcasting In [9]: %timeit manhattan_nan(a) 8.43 ms ± 49.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
6
4
64,672,497
2020-11-4
https://stackoverflow.com/questions/64672497/unit-testing-mock-gcs
I have a hard time to find a way to make a unit test for the read and write methods present in this class. I am trying to create a mock with the mock patch library in order to avoid calling Google Storage but I have a hard time figure out how to do it. from google.cloud import storage class GCSObject(str): def __init__(self, uri=""): self.base, self.bucket, self.path = self.parse_uri(uri) def parse_uri(self, uri): uri = uri.lstrip("gs://").replace("//", "/").split("/", 1) if len(uri) > 1: return ("gs://", uri[0], uri[1]) else: return ("gs://", uri[0], "") def read(self) -> bytes: storage_client = storage.Client() bucket = storage_client.bucket(self.bucket) return bucket.blob(self.path).download_as_string() def write(self, content: bytes): storage_client = storage.Client() bucket = storage_client.get_bucket(self.bucket) blob = bucket.blob(self.path) blob.upload_from_string(content) What I am currently trying to do @mock.patch("packages.pyGCP.pyGCP.gcs.storage.Client") def test_upload(client): gcs_path = GCSObject("fake_path") reader = gcs_path.read() # confused what I should assert
I am using google-cloud-storage==1.32.0 and python 3.7.5. Here is the unit test solution: gcs.py: from google.cloud import storage class GCSObject(str): def __init__(self, uri=""): self.base, self.bucket, self.path = self.parse_uri(uri) def parse_uri(self, uri): uri = uri.lstrip("gs://").replace("//", "/").split("/", 1) if len(uri) > 1: return ("gs://", uri[0], uri[1]) else: return ("gs://", uri[0], "") def read(self) -> bytes: storage_client = storage.Client() bucket = storage_client.bucket(self.bucket) return bucket.blob(self.path).download_as_string() def write(self, content: bytes): storage_client = storage.Client() bucket = storage_client.get_bucket(self.bucket) blob = bucket.blob(self.path) blob.upload_from_string(content) test_gcs.py: import unittest from unittest import mock from unittest.mock import Mock from gcs import GCSObject class TestGCSObject(unittest.TestCase): @mock.patch('gcs.storage') def test_read(self, mock_storage): mock_gcs_client = mock_storage.Client.return_value mock_bucket = Mock() mock_bucket.blob.return_value.download_as_string.return_value = "teresa teng".encode('utf-8') mock_gcs_client.bucket.return_value = mock_bucket gcs = GCSObject('fake_path') actual = gcs.read() mock_storage.Client.assert_called_once() mock_gcs_client.bucket.assert_called_once_with('fake_path') mock_bucket.blob.assert_called_once_with('') mock_bucket.blob.return_value.download_as_string.assert_called_once() self.assertEqual(actual, "teresa teng".encode('utf-8')) @mock.patch('gcs.storage') def test_write(self, mock_storage): mock_gcs_client = mock_storage.Client.return_value mock_bucket = Mock() mock_gcs_client.get_bucket.return_value = mock_bucket gcs = GCSObject('fake_path') gcs.write(b'teresa teng') mock_storage.Client.assert_called_once() mock_gcs_client.get_bucket.assert_called_once_with('fake_path') mock_bucket.blob.assert_called_once_with('') mock_bucket.blob.return_value.upload_from_string.assert_called_once_with(b'teresa teng') if __name__ == '__main__': unittest.main() unit test result: .. ---------------------------------------------------------------------- Ran 2 tests in 0.004s OK Name Stmts Miss Cover Missing ---------------------------------------------------------------------- src/stackoverflow/64672497/gcs.py 18 1 94% 12 src/stackoverflow/64672497/test_gcs.py 29 0 100% ---------------------------------------------------------------------- TOTAL 47 1 98%
6
8
64,613,706
2020-10-30
https://stackoverflow.com/questions/64613706/animate-update-a-matplotlib-plot-in-vs-code-notebook
Using Jupyter Notebook, I can create an animated plot (based on this sample code): %matplotlib notebook import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig, ax = plt.subplots() x = np.arange(0, 2*np.pi, 0.01) line, = ax.plot(x, np.sin(x)) def init(): line.set_ydata([np.nan] * len(x)) return line, def animate(i): line.set_ydata(np.sin(x + i / 100)) # update the data. return line, ani = animation.FuncAnimation( fig, animate, init_func=init, interval=2, blit=True, save_count=50) plt.show() Is it possible to do so in Visual Studio Code's notebook editor? I think it involves the magic %matplotlib notebook mode which VS Code does not seem to support, but I don't know if there is an alternative.
Looks as though VS Code supports ipywidgets (https://github.com/microsoft/vscode-python/issues/3429), so you can use the ipympl backend to matplotlib. Install it with pip install ipympl (also available on conda-forge). To use it, run the %matplotlib ipympl magic. %matplotlib notebook does some JavaScript injection that is very specific to Jupyter Notebook, so it will not work in VS Code or even JupyterLab.
22
34
64,610,269
2020-10-30
https://stackoverflow.com/questions/64610269/sqlalchemy-hangs-during-insert-while-querying-information-schema-tables
I have a Python process that uses SQLAlchemy to insert some data into a MS SQL Server DB. When the Python process runs it hangs during the insert. I turned on SQLAlchemy logging to get some more information. I found that it hangs at this point where SQLAlchemy seems to be requesting table schema info about the entire DB: 2020-10-30 08:12:07 [11444:6368] sqlalchemy.engine.base.Engine._execute_context(base.py:1235) INFO: SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] FROM [INFORMATION_SCHEMA].[TABLES] WHERE [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) ORDER BY [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] 2020-10-30 08:12:07 [11444:6368] sqlalchemy.engine.base.Engine._execute_context(base.py:1240) INFO: ('dbo', 'BASE TABLE') I have other "stuff" going on in the DB at this time, including some open transactions and my guess is that for whatever reason querying [INFORMATION_SCHEMA].[TABLES] creates some deadlock or blocks somehow. I've also read (here) that [INFORMATION_SCHEMA].[TABLES] is a view that cannot cause a deadlock which would contradict my guess of what is causing this issue. My question is: Can I alter the configuration/settings of SQLAlchemy so that it does not make this query in the first place? UPDATE 1: The Python code for the insert is like this: with sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params).connect() as connection: # df is a Pandas DataFrame df.to_sql(name=my_table, con=connection, if_exists='append', index=False) Note that the code works without any problems when I run the Python script during other times of the day when I don't have those other DB transactions going on. In those cases, the log continues immediately like this, listing all the tables in the DB: 2020-10-30 08:13:03 [11444:6368] sqlalchemy.engine.base.Engine._execute_context(base.py:1235) INFO: SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] FROM [INFORMATION_SCHEMA].[TABLES] WHERE [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) ORDER BY [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] 2020-10-30 08:13:03 [11444:6368] sqlalchemy.engine.base.Engine._execute_context(base.py:1240) INFO: ('dbo', 'BASE TABLE') 2020-10-30 08:13:03 [11444:6368] sqlalchemy.engine.base.Engine._init_metadata(result.py:810) DEBUG: Col ('TABLE_NAME',) 2020-10-30 08:13:03 [11444:6368] sqlalchemy.engine.base.Engine.process_rows(result.py:1260) DEBUG: Row ('t_table1',) 2020-10-30 08:13:03 [11444:6368] sqlalchemy.engine.base.Engine.process_rows(result.py:1260) DEBUG: Row ('t_table2',) ... UPDATE 2: Apparently when a table or other object is created in an open transaction and not committed yet, querying [INFORMATION_SCHEMA].[TABLES] will get blocked (source). Is anyone familiar with the internals of SQLAlchemy to suggest how to prevent it from making this query in the first place? UPDATE 3: After posting this issue on the SQLAlchemy github (issue link) the SQLAlchemy devs confirmed that the query of [INFORMATION_SCHEMA].[TABLES] is in fact being caused by the Pandas function to_sql(). So, my new question is does anyone know how to disable this behavior in the Pandas to_sql() function? I looked over the documentation and could not find anything that would seem to help.
As of pandas v.2.2.0 you can override the pandas method that runs the check which causes the block/deadlock. Add this before calling to_sql: from pandas.io.sql import SQLDatabase def pass_check_case_sensitive(*args, **kwargs): pass SQLDatabase.check_case_sensitive = pass_check_case_sensitive
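A sketch of where that override sits relative to the to_sql call from the question (same engine, params, my_table and df names as above):

from pandas.io.sql import SQLDatabase

def pass_check_case_sensitive(*args, **kwargs):
    pass

# Skip the case-sensitivity check that queries INFORMATION_SCHEMA.TABLES
SQLDatabase.check_case_sensitive = pass_check_case_sensitive

with sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params).connect() as connection:
    df.to_sql(name=my_table, con=connection, if_exists='append', index=False)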
8
2
64,631,086
2020-11-1
https://stackoverflow.com/questions/64631086/how-can-i-add-new-layers-on-pre-trained-model-with-pytorch-keras-example-given
I am working with Keras and trying to analyze the effects on accuracy that models which are built with some layers with meaningful weights, and some layers with random initializations. Keras: I load VGG19 pre-trained model with include_top = False parameter on load method. model = keras.applications.VGG19(include_top=False, weights="imagenet", input_shape=(img_width, img_height, 3)) PyTorch: I load VGG19 pre-trained model until the same layer with the previous model which loaded with Keras. model = torch.hub.load('pytorch/vision:v0.6.0', 'vgg19', pretrained=True) new_base = (list(model.children())[:-2])[0] After loaded models following images shows summary of them. (Pytorch, Keras) So far there is no problem. After that, I want to add a Flatten layer and a Fully connected layer on these pre-trained models. I did it with Keras but I couldn't with PyTorch. The output of new_model.summary() is that: My question is, how can I add a new layer in PyTorch?
If all you want to do is to replace the classifier section, you can simply do so. That is : model = torch.hub.load('pytorch/vision:v0.6.0', 'vgg19', pretrained=True) model.classifier = nn.Linear(model.classifier[0].in_features, 4096) print(model) will give you: Before: VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU(inplace=True) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU(inplace=True) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU(inplace=True) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU(inplace=True) (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (17): ReLU(inplace=True) (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU(inplace=True) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU(inplace=True) (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (24): ReLU(inplace=True) (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (26): ReLU(inplace=True) (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (31): ReLU(inplace=True) (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (33): ReLU(inplace=True) (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (35): ReLU(inplace=True) (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(7, 7)) (classifier): Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): Linear(in_features=4096, out_features=4096, bias=True) (4): ReLU(inplace=True) (5): Dropout(p=0.5, inplace=False) (6): Linear(in_features=4096, out_features=1000, bias=True) ) ) After: VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU(inplace=True) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU(inplace=True) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU(inplace=True) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU(inplace=True) (16): Conv2d(256, 256, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (17): ReLU(inplace=True) (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU(inplace=True) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU(inplace=True) (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (24): ReLU(inplace=True) (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (26): ReLU(inplace=True) (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (31): ReLU(inplace=True) (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (33): ReLU(inplace=True) (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (35): ReLU(inplace=True) (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(7, 7)) (classifier): Linear(in_features=25088, out_features=4096, bias=True) ) Also note that when you want to alter an existing architecture, you have two phases. You first get the modules you want (that's what you have done there) and then you must wrap that in a nn.Sequential because your list does not implement a forward() and thus you can't really feed it anything. It's just a collection of modules. So you need to do something like this in general (as an example): features = nn.ModuleList(your_model.children())[:-1] model = nn.Sequential(*features) # carry on with what other changes you want to perform on your model Note that if you want to create a new model and you intend on using it like: output = model(imgs) You need to wrap your features and new layers in a second sequential. That is, do something like this: features = nn.ModuleList(your_model.children())[:-1] model_features = nn.Sequential(*features) some_more_layers = nn.Sequential(Layer1, Layer2, ... ) model = nn.Sequential(model_features, some_more_layers) # output = model(imgs) otherwise you had to do something like : features_output = model.features(imgs) output = model.classifier(features_output)
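A minimal sketch (not from the original answer) of the question's actual goal — a Flatten layer plus a fully connected layer on top of the truncated VGG19. The 224×224 input size and the 4096 output units are assumptions for illustration, not values taken from the post:

import torch
import torch.nn as nn

# Load VGG19 and keep only the convolutional feature extractor
# (drops avgpool and the original classifier).
vgg = torch.hub.load('pytorch/vision:v0.6.0', 'vgg19', pretrained=True)
features = nn.Sequential(*list(vgg.children())[:-2])

# Assumption: 224x224 RGB input, so the feature maps are 512 x 7 x 7 = 25088 values.
new_model = nn.Sequential(
    features,
    nn.Flatten(),                   # analogous to Keras' Flatten layer
    nn.Linear(512 * 7 * 7, 4096),   # new fully connected layer, randomly initialized
)

out = new_model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 4096])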
8
9
64,624,092
2020-10-31
https://stackoverflow.com/questions/64624092/how-to-solve-bug-on-snake-wall-teleportation
I'm doing a snake game and I got a bug I can't figure out how to solve, I want to make my snake teleport trough walls, when the snake colllides with a wall it teleports to another with the opposite speed and position, like the classic game, but with my code when the snake gets near the wall it duplicates to the opposite wall, but it was supposed to not even detect collision yet important thing: in the grid the snake is on the side of the wall, like this SSSS WWWW and not like this: SSSS NNNN WWWW when S represents the snake, W represents the wall and N represents nothing. edit: whole code import pygame, random, os, sys from pygame.locals import * pygame.init() screen = pygame.display.set_mode((1020, 585)) pygame.display.set_caption('2snakes!') #files location current_path = os.path.dirname(__file__) data_path = os.path.join(current_path, 'data') icon = pygame.image.load(os.path.join(data_path, 'icon.png')) press_any = pygame.image.load(os.path.join(data_path, 'press_any.png')) pygame.display.set_icon(icon) #collission def collision(c1,c2): return (c1[0] == c2[0]) and (c1[1] == c2[1]) #game over def gameOverBlue(): print("blue") main() def gameOverRed(): print("red") main() fps = 15 def main(): #variables direction = 'RIGHT' direction2 = 'RIGHT' change_to = direction change2_to = direction2 #snake size = 15 s_posx = 60 s_posy = 60 snake = [(s_posx + size * 2, s_posy),(s_posx + size, s_posy),(s_posx, s_posy)] s_skin = pygame.Surface((size, size)) s_skin.fill((82,128,208)) #snake2 size2 = 15 s2_posx = 390 s2_posy = 390 snake2 = [(s2_posx + size2 * 2, s2_posy),(s2_posx + size2, s2_posy),(s2_posx, s2_posy)] s2_skin = pygame.Surface((size2, size2)) s2_skin.fill((208,128,82)) #apple apple = pygame.Surface((size, size)) apple_pos = ((random.randint(0, 67)) * 15, (random.randint(0, 38)) * 15) #endregion while True: pygame.time.Clock().tick(fps) for event in pygame.event.get(): if event.type == QUIT: pygame.quit() #input elif event.type == pygame.KEYDOWN: #snake if event.key == ord('w'): change_to = 'UP' if event.key == ord('s'): change_to = 'DOWN' if event.key == ord('a'): change_to = 'LEFT' if event.key == ord('d'): change_to = 'RIGHT' #snake2 if event.key == pygame.K_UP: change2_to = 'UP' if event.key == pygame.K_DOWN: change2_to = 'DOWN' if event.key == pygame.K_LEFT: change2_to = 'LEFT' if event.key == pygame.K_RIGHT: change2_to = 'RIGHT' #quit game if event.key == pygame.K_ESCAPE: pygame.quit() sys.quit() #smooth snake movement #snake if change_to == 'UP' and direction != 'DOWN': direction = 'UP' if change_to == 'DOWN' and direction != 'UP': direction = 'DOWN' if change_to == 'LEFT' and direction != 'RIGHT': direction = 'LEFT' if change_to == 'RIGHT' and direction != 'LEFT': direction = 'RIGHT' #snake2 if change2_to == 'UP' and direction2 != 'DOWN': direction2 = 'UP' if change2_to == 'DOWN' and direction2 != 'UP': direction2 = 'DOWN' if change2_to == 'LEFT' and direction2 != 'RIGHT': direction2 = 'LEFT' if change2_to == 'RIGHT' and direction2 != 'LEFT': direction2 = 'RIGHT' #movement #snake new_pos = None if direction == 'DOWN': new_pos = (snake[0][0], snake[0][1] + size) if direction == 'UP': new_pos = (snake[0][0], snake[0][1] - size) if direction == 'LEFT': new_pos = (snake[0][0] - size, snake[0][1]) if direction == 'RIGHT': new_pos = (snake[0][0] + size, snake[0][1]) if new_pos: snake = [new_pos] + snake del snake[-1] #snake2 new_pos2 = None if direction2 == 'DOWN': new_pos2 = (snake2[0][0], snake2[0][1] + size2) if direction2 == 'UP': new_pos2 = (snake2[0][0], snake2[0][1] - size2) if 
direction2 == 'LEFT': new_pos2 = (snake2[0][0] - size2, snake2[0][1]) if direction2 == 'RIGHT': new_pos2 = (snake2[0][0] + size2, snake2[0][1]) if new_pos2: snake2 = [new_pos2] + snake2 del snake2[-1] #apple collision #snake if collision(snake[0], apple_pos): snake.append((-20,-20)) size = 15 s_skin = pygame.Surface((size, size)) s_skin.fill((82,128,208)) apple_pos = ((random.randint(0, 32)) * 15, (random.randint(0, 19)) * 15) #snake2 if collision(snake2[0], apple_pos): snake2.append((-20,-20)) apple_pos = ((random.randint(0, 67)) * 15, (random.randint(0, 38)) * 15) #wall collisison #snake _pos = None if snake[0][0] == 15: _pos = (990, snake[0][1]) elif snake[0][1] == 15: _pos = (snake[0][0], 555) elif snake[0][0] == 990: _pos = (15, snake[0][1]) elif snake[0][1] == 555: _pos = (snake[0][0], 15) if _pos: snake = [_pos] + snake del snake[-1] #snake2 _pos2 = None if snake2[0][0] == 15: _pos2 = (1005, snake2[0][1]) elif snake2[0][1] == 0: _pos2 = (snake2[0][0], 570) elif snake2[0][0] == 1005: _pos2 = (0, snake2[0][1]) elif snake2[0][1] == 570: _pos2 = (snake2[0][0], 0) if _pos2: snake2 = [_pos2] + snake2 del snake2[-1] #self collisison #snake if snake[0] in snake[1:]: print("self collision") gameOverBlue() #snake2 if snake2[0] in snake2[1:]: print("self collision") gameOverRed() #snakes colliding with each other if snake2[0] == snake[0]: print("head to head collisions") if snake[0] in snake2: gameOverRed() if snake2[0] in snake: gameOverBlue() #rendering apple.fill((255,0,0)) screen.fill((10,10,10)) screen.blit(apple,apple_pos) for pos in snake: screen.blit(s_skin,pos) for pos2 in snake2: screen.blit(s2_skin,pos2) pygame.display.update() while True: pygame.time.Clock().tick(fps) for event in pygame.event.get(): if event.type == QUIT: pygame.quit() #input elif event.type == pygame.KEYDOWN: main() screen.blit(press_any, (0,0)) pygame.display.update() edit: the red dot is the food/apple
You want to implement a teleporter. Therefore, once the snake is over the edge of the window, you will have to teleport to the other side. The size of your window is 1020x585. The snake is out of the window if x == -15, y == -15, x == 1020 or y == 585 Hence you have to do the following teleportations: if x = 1020 teleport to x = 0 if x = -15 teleport to x = 1005 if y = 585 teleport to y = 0 if y = -15 teleport to y = 570 _pos = None if snake[0][0] == -15: _pos = (1005, snake[0][1]) elif snake[0][1] == -15: _pos = (snake[0][0], 570) elif snake[0][0] == 1020: _pos = (0, snake[0][1]) elif snake[0][1] == 585: _pos = (snake[0][0], 0) if _pos: snake = [_pos] + snake del snake[-1] Another simplest solution is to use the %(modulo) operator: x1 = (x1 + x1_change) % dis_width y1 = (y1 + y1_change) % dis_height >>> width = 100 >>> 101 % width 1 >>> -1 % width 99 >>> Minimal example: repl.it/@Rabbid76/PyGame-ContinuousMovement import pygame pygame.init() window = pygame.display.set_mode((300, 300)) clock = pygame.time.Clock() rect = pygame.Rect(0, 0, 20, 20) rect.center = window.get_rect().center vel = 5 run = True while run: clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.KEYDOWN: print(pygame.key.name(event.key)) keys = pygame.key.get_pressed() dx = (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * vel dy = (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * vel rect.centerx = (rect.centerx + dx) % window.get_width() rect.centery = (rect.centery + dy) % window.get_height() window.fill(0) pygame.draw.rect(window, (255, 0, 0), rect) pygame.display.flip() pygame.quit() exit() Note: >>> width = 100 >>> 101 % width 1 >>> -1 % width 99 >>>
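As a small self-contained illustration of the modulo idea with the 1020×585 window and 15-pixel steps from the question (treat it as a sketch rather than a drop-in patch for the snake code):

# Wrap-around for the head position: out-of-range coordinates come back
# in on the opposite side.
WIDTH, HEIGHT, STEP = 1020, 585, 15

def wrap(pos):
    """Wrap an (x, y) head position back into the window."""
    return (pos[0] % WIDTH, pos[1] % HEIGHT)

print(wrap((1020, 300)))   # (0, 300)    right wall teleports to the left
print(wrap((-15, 300)))    # (1005, 300) left wall teleports to the right
print(wrap((300, 585)))    # (300, 0)
print(wrap((300, -15)))    # (300, 570)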
6
4
64,630,130
2020-11-1
https://stackoverflow.com/questions/64630130/pipreqs-requirements-txt-is-not-correct
Hello, I am having trouble with the pipreqs library in Python. It doesn't generate the correct requirements.txt file. I am using a Python virtual environment and the only packages I have installed are pipreqs and selenium, with pip install pipreqs pip install selenium Structure of the project: MyProject |- test.py And test.py has only one line: from selenium import webdriver First, when I do pipreqs ./ I get the error UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 3474: character maps to <undefined> which I managed to solve by using pipreqs ./ --encoding=utf-8 But now the generated requirements.txt doesn't match my expectations. In my opinion, it should be equal to: selenium==1.341.0 But it is equal to: brotli==1.0.9 cryptography==3.2.1 ipaddr==2.2.0 lxml==4.6.1 mock==4.0.2 ordereddict==1.1 protobuf==3.13.0 pyOpenSSL==19.1.0 simplejson==3.17.2 Now when I try to clone this code and do pip install -r requirements.txt it doesn't install selenium and the code doesn't run. What is happening here?
So the issue I had was that my actual workspace was: MyProject |- .venv // <- My Python Virtual Environment |- test.py My Python virtual environment was in my project folder, so when I ran the command pipreqs ./ it looked at the dependencies of all the files in the folder (including my virtual environment), and that is why it was generating a weird requirements.txt file. To fix this, I used the --ignore option of pipreqs: pipreqs ./ --ignore .venv And the generated requirements.txt is: selenium==3.141.0 You may also want to ignore other folders that are causing issues like this: pipreqs --ignore bin,etc,include,lib,lib64,.venv The --force flag will also overwrite the existing requirements.txt
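Putting the question's encoding workaround and the answer's flags together, a single invocation for the layout described above might look like this (the --encoding flag comes from the question, not from pipreqs' defaults):

pipreqs ./ --encoding=utf-8 --ignore .venv --force
cat requirements.txt   # expected to contain only: selenium==3.141.0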
11
22
64,611,388
2020-10-30
https://stackoverflow.com/questions/64611388/exclude-a-function-from-coverage
I am using coverage.py to get the test coverage of the code. Suppose I have two functions with the same name in two different modules: # foo/foo.py def get_something(): # fetch something # 10 lines of branch code return "something foo/foo.py" # bar/foo.py def get_something(): # fetch something # 20 lines of branch code return "something bar/foo.py" How can I exclude the bar.foo.get_something(...) function "completely"?
We can use a pragma comment at the function definition level, which tells coverage.py to exclude the function completely. # bar/foo.py def get_something(): # pragma: no cover # fetch something # 20 lines of branch code return "something bar/foo.py" Note: if we have a coverage.py config file with an exclude_lines setting in it, make sure pragma: no cover is included in that setting, because it overrides the default.
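A sketch of what that config can look like (file name and the extra pattern are illustrative; the key point is that a custom exclude_lines list replaces the default, so pragma: no cover has to be re-added):

# .coveragerc
[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.: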
28
40
64,664,094
2020-11-3
https://stackoverflow.com/questions/64664094/i-cannot-use-opencv2-and-received-importerror-libgl-so-1-cannot-open-shared-obj
**env:**ubuntu16.04 anaconda3 python3.7.8 cuda10.0 gcc5.5 command: conda activate myenv python import cv2 error: Traceback (most recent call last): File "", line 1, in File "/home/.conda/envs/myenv/lib/python3.7/site-packages/cv2/__init__.py", line 5, in from .cv2 import * ImportError: libGL.so.1: cannot open shared object file: No such file or directory I tried: RUN apt install libgl1-mesa-glx -y RUN apt-get install 'ffmpeg'\ 'libsm6'\ 'libxext6' -y but this is already installed and the latest version(libgl1-mesa-glx18.0.5-0ubuntu0~16.04.1). then i tried: sudo apt-get install --reinstall libgl1-mesa-glx it doesn't work. finally,I tried to remove the package: sudo apt-get --purge remove libgl1-mesa-glx another error occurred: Reading package list... Done Analyzing the dependency tree of the package Reading status information... Done Some packages cannot be installed. If you are using an unstable distribution, this may be Because the system cannot reach the state you requested. There may be some software you need in this version The packages have not been created yet or they have been moved out of the Incoming directory. The following information may be helpful in solving the problem: The following packages have unmet dependencies: libqt5multimedia5-plugins: Dependency: libqgsttools-p1 (>= 5.5.1) but it will not be installed E: Error, pkgProblemResolver::Resolve failed. This may be due to a software package being required to maintain the status quo. Any help would be really helpful.Thanks in advance. conda list: # packages in environment at /home/lwy/.conda/envs/mmdet1: # # Name Version Build Channel _libgcc_mutex 0.1 conda_forge https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge _openmp_mutex 4.5 1_gnu https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge addict 2.3.0 <pip> albumentations 0.5.1 <pip> appdirs 1.4.4 <pip> asynctest 0.13.0 <pip> attrs 20.2.0 <pip> ca-certificates 2020.6.20 hecda079_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge certifi 2020.6.20 py37he5f6b98_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge chardet 3.0.4 <pip> cityscapesScripts 2.1.7 <pip> codecov 2.1.10 <pip> coloredlogs 14.0 <pip> coverage 5.3 <pip> cycler 0.10.0 <pip> Cython 0.29.21 <pip> decorator 4.4.2 <pip> flake8 3.8.4 <pip> future 0.18.2 <pip> humanfriendly 8.2 <pip> idna 2.10 <pip> imagecorruptions 1.1.0 <pip> imageio 2.9.0 <pip> imgaug 0.4.0 <pip> importlib-metadata 2.0.0 <pip> iniconfig 1.1.1 <pip> isort 5.6.4 <pip> kiwisolver 1.3.1 <pip> kwarray 0.5.10 <pip> ld_impl_linux-64 2.35 h769bd43_9 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge libffi 3.2.1 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free libgcc-ng 9.3.0 h5dbcf3e_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge libgomp 9.3.0 h5dbcf3e_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge libstdcxx-ng 9.3.0 h2ae2ef3_17 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge matplotlib 3.3.2 <pip> mccabe 0.6.1 <pip> mmcv 1.1.6 <pip> mmdet 1.2.0+unknown <pip> ncurses 6.2 he1b5a44_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge networkx 2.5 <pip> numpy 1.19.4 <pip> opencv-python 4.4.0.46 <pip> openssl 1.1.1h h516909a_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge ordered-set 4.0.2 <pip> packaging 20.4 <pip> Pillow 6.2.2 <pip> pip 20.2.4 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge pluggy 0.13.1 <pip> py 1.9.0 <pip> pycocotools 2.0 <pip> pycodestyle 2.6.0 <pip> 
pyflakes 2.2.0 <pip> pyparsing 2.4.7 <pip> pyquaternion 0.9.9 <pip> pytest 6.1.2 <pip> pytest-cov 2.10.1 <pip> pytest-runner 5.2 <pip> python 3.7.8 h6f2ec95_1_cpython https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge python-dateutil 2.8.1 <pip> python_abi 3.7 1_cp37m https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge PyWavelets 1.1.1 <pip> PyYAML 5.3.1 <pip> readline 8.0 he28a2e2_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge requests 2.24.0 <pip> scikit-image 0.17.2 <pip> scipy 1.5.3 <pip> setuptools 49.6.0 py37he5f6b98_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge Shapely 1.7.1 <pip> six 1.15.0 <pip> sqlite 3.33.0 h4cf870e_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge tifffile 2020.10.1 <pip> tk 8.6.10 hed695b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge toml 0.10.2 <pip> torch 1.5.0+cu92 <pip> torchvision 0.6.0+cu92 <pip> tqdm 4.51.0 <pip> typing 3.7.4.3 <pip> ubelt 0.9.3 <pip> urllib3 1.25.11 <pip> wheel 0.35.1 pyh9f0ad1d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge xdoctest 0.15.0 <pip> xz 5.2.5 h516909a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge yapf 0.30.0 <pip> zipp 3.4.0 <pip> zlib 1.2.11 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
I have solved this problem! First, find the file: find /usr -name libGL.so.1 I found /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1. Then I created a soft link to it: ln -s /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so.1 Finally, I verified that the import works: # python import cv2
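If the import still fails after creating the link, one optional check (not part of the original answer) is to ask the dynamic linker which libraries the cv2 extension resolves; the path below is taken from the traceback in the question:

ldd /home/.conda/envs/myenv/lib/python3.7/site-packages/cv2/cv2*.so | grep -i libgl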
6
4
64,619,387
2020-10-31
https://stackoverflow.com/questions/64619387/how-to-call-the-linkedin-api-using-python
I tried so many methods, but none seem to work. Help me make a connection with LinkedIn using Python. The issue is in generating the Access Token: I received the CODE but it doesn't work. I have Python 3.9. Please post a sample of basic code that establishes a connection and gets an Access Token, and tell me which redirectUri I have to use. Can I use any website link for the redirectUri? I tried to check the API through curl and Postman but didn't get a solution; it says Unauthorized Access. https://github.com/ozgur/python-linkedin <--- This is where I got some idea of how to use the API to receive the Access Token.
First solution valid for any (including free) applications, it useses so-called 3-Legged OAuth 2.0 Authentication: Login to your account in the browser. Create new application by this link. If you already have application you may use it by selecting it here and changing its options if needed. In application credentials copy Client ID and Client Secret, you'll need them later. On your application's server side create Authorization request URL by next code and send/redirect it to client. If your Python code runs locally you may just open this URL in your browser with import webbrowser; webbrowser.open(url) code. Fill in all fields with your values too. There is redirect_uri in the code, this is URL where authorization response is sent back, for locally running script you have to run Python HTTP web server to retrieve result. # Needs: python -m pip install requests import requests, secrets url = requests.Request( 'GET', 'https://www.linkedin.com/oauth/v2/authorization', params = { 'response_type': 'code', # Always should equal to fixed string "code" # ClientID of your created application 'client_id': 'REPLACE_WITH_YOUR_CLIENT_ID', # The URI your users are sent back to after authorization. # This value must match one of the OAuth 2.0 Authorized Redirect # URLs defined in your application configuration. # This is basically URL of your server that processes authorized requests like: # https://your.server.com/linkedin_authorized_callback 'redirect_uri': 'REPLACE_WITH_REDIRECT_URL', # Replace this with your value # state, any unique non-secret randomly generated string like DCEeFWf45A53sdfKef424 # that identifies current authorization request on server side. # One way of generating such state is by using standard "secrets" module like below. # Store generated state string on your server for further identifying this authorization session. 'state': secrets.token_hex(8).upper(), # Requested permissions, below is just example, change them to what you need. # List of possible permissions is here: # https://learn.microsoft.com/en-us/linkedin/shared/references/migrations/default-scopes-migration#scope-to-consent-message-mapping 'scope': ' '.join(['r_liteprofile', 'r_emailaddress', 'w_member_social']), }, ).prepare().url # You may now send this url from server to user # Or if code runs locally just open browser like below import webbrowser webbrowser.open(url) After user authorized your app by previous URL his browser will be redirected to redirect_uri and two fields code and state will be attached to this URL, code is unique authorization code that you should store on server, code expires after 30 minutes if not used, state is a copy of state from previous code above, this state is like unique id of your current authorization session, use same state string only once and generate it randomly each time, also state is not a secret thing because you send it to user inside authorization URL, but should be unique and quite long. Example of full redirected URL is https://your.server.com/linkedin_authorized_callback?code=987ab12uiu98onvokm56&state=D5B1C1348F110D7C. Next you have to exchange code obtained previously to access_token by next code, next code should be run on your server or where your application is running, because it uses client_secret of your application and this is a secret value, you shouldn't show it to public, never share ClientSecret with anyone except maybe some trusted people, because such people will have ability to pretend (fake) to be your application while they are not. 
# Needs: python -m pip install requests import requests access_token = requests.post( 'https://www.linkedin.com/oauth/v2/accessToken', params = { 'grant_type': 'authorization_code', # This is code obtained on previous step by Python script. 'code': 'REPLACE_WITH_CODE', # This should be same as 'redirect_uri' field value of previous Python script. 'redirect_uri': 'REPLACE_WITH_REDIRECT_URL', # Client ID of your created application 'client_id': 'REPLACE_WITH_YOUR_CLIENT_ID', # Client Secret of your created application 'client_secret': 'REPLACE_WITH_YOUR_CLIENT_SECRET', }, ).json()['access_token'] print(access_token) access_token obtained by previous script is valid for 60 days! So quite long period. If you're planning to use your application for yourself only or your friends then you can just pre-generate manually once in two months by hands several tokens for several people without need for servers. Next use access_token for any API calls on behalf of just authorized above user of LinkedIn. Include Authorization: Bearer ACCESS_TOKEN HTTP header in all calls. Example of one such API code below: import requests print(requests.get( 'https://api.linkedin.com/v2/jobs', params = { # Any API params go here }, headers = { 'Authorization': 'Bearer ' + access_token, # Any other needed HTTP headers go here }, ).json()) More details can be read here. Regarding how your application is organized, there are 3 options: Your application is running fully on remote server, meaning both authentication and running application (API calls) are done on some dedicated remote server. Then there is no problem with security, server doesn't share any secrets like client_secret, code, access_token. Your application is running locally on user's machine while authentication is runned once in a while by your server, also some other things like storing necessary data in DataBase can be done by server. Then your server doesn't need to share client_secret, code, but shares access_token which is sent back to application to user's machine. It is also OK, then your server can keep track of what users are using your application, also will be able to revoke some or all of access_tokens if needed to block user. Your application is fully run on local user's machine, no dedicated server is used at all. In this case all of client_secret, code, access_token are stored on user's machine. In this case you can't revoke access to your application of some specific users, you can only revoke all of them by regenerating client_secret in your app settings. Also you can't track any work of your app users (although maybe there is some usage statistics in your app settings/info pages). In this case any user can look into your app code and copy client_secret, unless you compile Python to some .exe/.dll/.so and encrypt you client secret there. If anyone got client_secret he can pretend (fake) to be your application meaning that if you app contacts other users somehow then he can try to authorize other people by showing your app interface while having some other fraudulent code underneath, basically your app is not that secure or trusted anymore. Also local code can be easily modified so you shouldn't trust your application to do exactly your code. Also in order to authorize users like was done in previous steps 5)-7) in case of local app you have to start Python HTTP Server to be able to retrieve redirected results of step 5). 
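For the "Python HTTP Server" mentioned just above, a minimal sketch of a local callback receiver could look like the following. The port and path are assumptions and must match whatever redirect_uri is registered for the application (e.g. http://localhost:8080/linkedin_authorized_callback):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # LinkedIn redirects the browser here with ?code=...&state=... attached.
        params = parse_qs(urlparse(self.path).query)
        self.server.auth_code = params.get('code', [None])[0]
        self.server.auth_state = params.get('state', [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Authorization received, you can close this tab.')

server = HTTPServer(('localhost', 8080), CallbackHandler)
server.handle_request()  # serve exactly one request, i.e. the redirect
print('code:', server.auth_code, 'state:', server.auth_state)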
Below is a second solution valid only if your application is a part of LinkedIn Developer Enterprise Products paid subscription, also then you need to Enable Client Credentials Flow in your application settings, next steps uses so-called 2-Legged OAuth 2.0 Authentication: Login to your account in the browser. Create new application by this link. If you already have application you may use it by selecting it here and changing its options if needed. In application credentials copy ClientID and ClientSecret, you'll need them later. Create AccessToken by next Python code (put correct client id and client secret), you should run next code only on your server side or on computers of only trusted people, because code uses ClientSecret of your application which is a secret thing and shouldn't be showed to public: # Needs: python -m pip install requests import requests access_token = requests.post( 'https://www.linkedin.com/oauth/v2/accessToken', params = { 'grant_type': 'client_credentials', 'client_id': 'REPLACE_WITH_YOUR_CLIENT_ID', 'client_secret': 'REPLACE_WITH_YOUR_CLIENT_SECRET', }, ).json()['access_token'] print(access_token) Copy access_token from previous response, it expires after 30 minutes after issue so you need to use previous script often to gain new access token. Now you can do any API requests that you need using this token, like in code below (access_token is taken from previous steps): import requests print(requests.get( 'https://api.linkedin.com/v2/jobs', params = { # Any API params go here }, headers = { 'Authorization': 'Bearer ' + access_token, # Any other needed HTTP headers go here }, ).json()) More details can be read here or here.
5
14
64,565,901
2020-10-28
https://stackoverflow.com/questions/64565901/how-to-retrieve-attributes-from-selected-datum-in-altair
I have a Streamlit dashboard which lets me interactively explore a t-SNE embedding using an Altair plot. I am trying to figure out how to access the metadata of the selected datum so that I can visualize the corresponding image. In other words, given: selector = alt.selection_single() chart = ( alt.Chart(df) .mark_circle() .encode(x="tSNE_dim1", y="tSNE_dim2", color="predicted class", tooltip=["image url", "predicted class"]) .add_selection(selector) ) ...is there something akin to selected_metadata = selector.tooltip update_dashboard_img(img=selected_metadata["image url"], caption=selected_metadata["predicted class"]) I am aware about image marks but the images are on S3 and there are too many of them to fit in the plot.
I hate to disagree with the creator of Altair, but I was able to achieve this using streamlit-vega-lite package. This works by wrapping the call to the chart creation function with altair_component(): from streamlit_vega_lite import altair_component ... event_dict = altair_component(altair_chart=create_tsne_chart(tsne_df)) # note: works with selector = alt.selection_interval(), not selection_single() dim1_bounds, dim2_bounds = event_dict.get("dim1"), event_dict.get("dim2") if dim1_bounds: (dim1_min, dim1_max), (dim2_min, dim2_max) = dim1_bounds, dim2_bounds selected_images = tsne_df[ (tsne_df.dim1 >= dim1_min) & (tsne_df.dim1 <= dim1_max) & (tsne_df.dim2 >= dim2_min) & (tsne_df.dim2 <= dim2_max) ] st.write("Selected Images") st.write(selected_images) if len(selected_images) > 0: for _index, row in selected_images.iterrows(): img = get_img(row["image url"]) st.image(img, caption=f"{row['image url']} {row['predicted class']}", use_column_width=True) The event_dict only contains information about the selector bounds. So, you have to use those values to reselect the data that was selected in the interactive chart. Note that this package is a POC and has various limitations. Please upvote the Streamlit feature request created by the author of streamlit_vega_lite.
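The answer references a create_tsne_chart helper without defining it; a hypothetical sketch consistent with the event keys used above (dim1/dim2 as the encoded fields, and an interval selection so that bounds are reported) might be:

import altair as alt

def create_tsne_chart(tsne_df):
    # Interval (brush) selection: streamlit-vega-lite reports its bounds
    # keyed by the encoded field names, here "dim1" and "dim2".
    selector = alt.selection_interval()
    return (
        alt.Chart(tsne_df)
        .mark_circle()
        .encode(x="dim1", y="dim2", color="predicted class",
                tooltip=["image url", "predicted class"])
        .add_selection(selector)
    )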
7
4
64,622,210
2020-10-31
https://stackoverflow.com/questions/64622210/how-to-extract-classes-from-prefetched-dataset-in-tensorflow-for-confusion-matri
I was trying to plot a confusion matrix for my image classifier with the following code but I got an error message: 'PrefetchDataset' object has no attribute 'classes' Y_pred = model.predict(validation_dataset) y_pred = np.argmax(Y_pred, axis=1) print('Confusion Matrix') print(confusion_matrix(validation_dataset.classes, y_pred)) # ERROR message generated PrefetchDataset' object has no attribute 'classes'
Disclaimer: this won't work for shuffled datasets. You can use tf.stack to concatenate all the dataset values. Like so: true_categories = tf.concat([y for x, y in test_dataset], axis=0) For reproducibility, let's say you have a dataset, a neural network, and a training loop: import tensorflow_datasets as tfds import tensorflow as tf from sklearn.metrics import confusion_matrix data, info = tfds.load('iris', split='train', as_supervised=True, shuffle_files=True, with_info=True) AUTOTUNE = tf.data.experimental.AUTOTUNE train_dataset = data.take(120).batch(4).prefetch(buffer_size=AUTOTUNE) test_dataset = data.skip(120).take(30).batch(4).prefetch(buffer_size=AUTOTUNE) model = tf.keras.Sequential([ tf.keras.layers.Dense(8, activation='relu'), tf.keras.layers.Dense(16, activation='relu'), tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax') ]) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics='accuracy') history = model.fit(train_dataset, validation_data=test_dataset, epochs=50, verbose=0) Now that your model has been fitted, you can predict the test set: y_pred = model.predict(test_dataset) array([[2.2177568e-05, 3.0841196e-01, 6.9156587e-01], [4.3539176e-06, 1.2779665e-01, 8.7219906e-01], [1.0816366e-03, 9.2667454e-01, 7.2243840e-02], [9.9921310e-01, 7.8686583e-04, 9.8775059e-09]], dtype=float32) This is going to be a (n_samples, 3) array because we're working with three categories. We want a (n_samples, 1) array for sklearn.metrics.confusion_matrix, so take the argmax: predicted_categories = tf.argmax(y_pred, axis=1) <tf.Tensor: shape=(30,), dtype=int64, numpy= array([2, 2, 2, 0, 2, 2, 2, 2, 1, 1, 2, 0, 0, 2, 1, 1, 1, 2, 0, 2, 1, 2, 1, 0, 2, 0, 1, 2, 1, 0], dtype=int64)> Then, we can take all the y values from the prefetch dataset: true_categories = tf.concat([y for x, y in test_dataset], axis=0) [<tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 1, 1, 0], dtype=int64)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([2, 2, 2, 2], dtype=int64)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 1, 1, 0], dtype=int64)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([0, 2, 1, 1], dtype=int64)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 2, 0, 2], dtype=int64)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([1, 2, 1, 0], dtype=int64)>, <tf.Tensor: shape=(4,), dtype=int64, numpy=array([2, 0, 1, 2], dtype=int64)>, <tf.Tensor: shape=(2,), dtype=int64, numpy=array([1, 0], dtype=int64)>] Then, you are ready to get the confusion matrix: confusion_matrix(predicted_categories, true_categories) array([[ 9, 0, 0], [ 0, 9, 0], [ 0, 2, 10]], dtype=int64) (9 + 9 + 10) / 30 = 0.933 is the accuracy score. 
It corresponds to model.evaluate(test_dataset): 8/8 [==============================] - 0s 785us/step - loss: 0.1907 - accuracy: 0.9333 Also the results are consistent with sklearn.metrics.classification_report: precision recall f1-score support 0 1.00 1.00 1.00 8 1 0.82 1.00 0.90 9 2 1.00 0.85 0.92 13 accuracy 0.93 30 macro avg 0.94 0.95 0.94 30 weighted avg 0.95 0.93 0.93 30 Here's the entire code: import tensorflow_datasets as tfds import tensorflow as tf from sklearn.metrics import confusion_matrix data, info = tfds.load('iris', split='train', as_supervised=True, shuffle_files=True, with_info=True) AUTOTUNE = tf.data.experimental.AUTOTUNE train_dataset = data.take(120).batch(4).prefetch(buffer_size=AUTOTUNE) test_dataset = data.skip(120).take(30).batch(4).prefetch(buffer_size=AUTOTUNE) model = tf.keras.Sequential([ tf.keras.layers.Dense(8, activation='relu'), tf.keras.layers.Dense(16, activation='relu'), tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax') ]) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics='accuracy') history = model.fit(train_dataset, validation_data=test_dataset, epochs=50, verbose=0) y_pred = model.predict(test_dataset) predicted_categories = tf.argmax(y_pred, axis=1) true_categories = tf.concat([y for x, y in test_dataset], axis=0) confusion_matrix(predicted_categories, true_categories) More generally, you can plot a confusion matrix with sklearn.metrics.ConfusionMatrixDisplay.from_predictions.
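As a small follow-up to the last sentence, the plotting call could look like this (ConfusionMatrixDisplay.from_predictions needs scikit-learn 1.0 or newer):

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# true_categories and predicted_categories are the tensors computed above;
# from_predictions accepts any array-like of labels.
ConfusionMatrixDisplay.from_predictions(true_categories, predicted_categories)
plt.show()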
10
12
64,580,500
2020-10-28
https://stackoverflow.com/questions/64580500/sort-sounds-by-similarity-based-on-timbretone
Explanation I want to be able to sort a collection of sounds in a list based on the timbre(tone) of the sound. Here is a toy example where I manually sorted the spectrograms for 12 sound files that I created and uploaded to this repo. I know that these are sorted correctly because the sound produced for each file, is exactly the same as the sound in the file before it, but with one effect or filter added to it. For example, a correct sorting of sounds x, y and z where sounds x and y are the same, but y has a distortion effect sounds y and z are the same, but z filters out high frequencies sounds x and z are the same, but z has a distortion effect, and z filters out high frequencies Would be x, y, z Just by looking at the spectrograms, I can see some visual indicators that hint at how the sounds should be sorted, but I would like to automate the sorting process by having a computer recognize such indicators. The sound files for the sounds in the image above are all the same length all the same note/pitch all start at exactly the same time. all the same amplitude (level of loudness) I would like my sorting to work even if all of these conditions are not true(but I'll accept the best answer even if it doesn't solve this) For example, in the image below the start of MFCC_8 is shifted in comparison to MFCC_8 in the first image MFCC_9 is identical to MFCC_9 in the first image, but is duplicated (so it is twice as long) If MFCC_8 and MFCC_9 in the first image were replaced with MFCC_8 and MFCC_9 in the image below, I would like the sorting of sounds to remain the exact same. For my real program, I intend to break up an mp3 file by sound changes like this My program so far Here is the program which produces the first image in this post. I need the code in the function sort_sound_files to be replaced with some code that actually sorts the sound files based on timbre. The part which needs to be done is near the bottom and the sound files on on this repo. I also have this code in a jupyter notebook, which also includes a second example that is more similar to what I actually want this program to do import librosa import librosa.display import matplotlib.pyplot as plt import numpy as np import math from os import path from typing import List class Spec: name: str = '' sr: int = 44100 class MFCC(Spec): mfcc: np.ndarray # Mel-frequency cepstral coefficient delta_mfcc: np.ndarray # delta Mel-frequency cepstral coefficient delta2_mfcc: np.ndarray # delta2 Mel-frequency cepstral coefficient n_mfcc: int = 13 def __init__(self, soundFile: str): self.name = path.basename(soundFile) y, sr = librosa.load(soundFile, sr=self.sr) self.mfcc = librosa.feature.mfcc(y, n_mfcc=self.n_mfcc, sr=sr) self.delta_mfcc = librosa.feature.delta(self.mfcc, mode="nearest") self.delta2_mfcc = librosa.feature.delta(self.mfcc, mode="nearest", order=2) def get_mfccs(sound_files: List[str]) -> List[MFCC]: ''' :param sound_files: Each item is a path to a sound file (wav, mp3, ...) 
''' mfccs = [MFCC(sound_file) for sound_file in sound_files] return mfccs def draw_specs(specList: List[Spec], attribute: str, title: str): ''' Takes a list of same type audio features, and draws a spectrogram for each one ''' def draw_spec(spec: Spec, attribute: str, fig: plt.Figure, ax: plt.Axes): img = librosa.display.specshow( librosa.amplitude_to_db(getattr(spec, attribute), ref=np.max), y_axis='log', x_axis='time', ax=ax ) ax.set_title(title + str(spec.name)) fig.colorbar(img, ax=ax, format="%+2.0f dB") specLen = len(specList) fig, axs = plt.subplots(math.ceil(specLen/3), 3, figsize=(30, specLen * 2)) for spec in range(0, len(specList), 3): draw_spec(specList[spec], attribute, fig, axs.flat[spec]) if (spec+1 < len(specList)): draw_spec(specList[spec+1], attribute, fig, axs.flat[spec+1]) if (spec+2 < len(specList)): draw_spec(specList[spec+2], attribute, fig, axs.flat[spec+2]) sound_files_1 = [ '../assets/transients_1/4.wav', '../assets/transients_1/6.wav', '../assets/transients_1/1.wav', '../assets/transients_1/11.wav', '../assets/transients_1/13.wav', '../assets/transients_1/9.wav', '../assets/transients_1/3.wav', '../assets/transients_1/7.wav', '../assets/transients_1/12.wav', '../assets/transients_1/2.wav', '../assets/transients_1/5.wav', '../assets/transients_1/10.wav', '../assets/transients_1/8.wav' ] mfccs_1 = get_mfccs(sound_files_1) ################################################################## def sort_sound_files(sound_files: List[str]): # TODO: Complete this function. The soundfiles must be sorted based on the content in the file, do not use the name of the file # This is the correct order that the sounds should be sorted in return [f"../assets/transients_1/{num}.wav" for num in range(1, 14)] # TODO: remove(or comment) once method is completed ################################################################## sorted_sound_files_1 = sort_sound_files(sound_files_1) mfccs_1 = get_mfccs(sorted_sound_files_1) draw_specs(mfccs_1, 'mfcc', "Transients_1 Sorted MFCC-") plt.savefig('sorted_sound_spectrograms.png') EDIT I didn't realize this until later, but another pretty important thing is that there's going to be lot's of properties that are oscillating. The difference between sound 5 and sound 6 from the first set for example is that sound 6 is sound 5 but with oscillation on the volume (an LFO), this type of oscillation can be placed on a frequency filter, an effect (like distortion) or even pitch. I realize this makes the problem a lot trickier and it's outside the scope of what I asked. Do you have any advice? I could even use several different sorts, and only look at one property at one time.
I came up with a method, not sure if it does exactly what you are hoping but for your first dataset it is very close. Basically I'm looking at the power spectral density of the power spectral density of your .wav files and sorting by the normalized integral of that. (I have no good signal processing reason for doing this. The PSD gives you an idea of how much energy is at each frequency. I initially tried sorting by the PSD and got bad results. Thinking that as you treat the files you were creating more variability, I thought that would alter variation in the spectral density in this way and just tried it.) If this does what you need, I hope you can find a justification for the approach. Step 1: This is pretty straightforward, just change y to self.y to add it to your MFCC class: class MFCC(Spec): mfcc: np.ndarray # Mel-frequency cepstral coefficient delta_mfcc: np.ndarray # delta Mel-frequency cepstral coefficient delta2_mfcc: np.ndarray # delta2 Mel-frequency cepstral coefficient n_mfcc: int = 13 def __init__(self, soundFile: str): self.name = path.basename(soundFile) self.y, sr = librosa.load(soundFile, sr=self.sr) # <--- This line is changed self.mfcc = librosa.feature.mfcc(self.y, n_mfcc=self.n_mfcc, sr=sr) self.delta_mfcc = librosa.feature.delta(self.mfcc, mode="nearest") self.delta2_mfcc = librosa.feature.delta(self.mfcc, mode="nearest", order=2) Step 2: Calculate the PSD of the PSD and integrate (or really just sum): def spectra_of_spectra(mfcc): # first calculate the psd fft = np.fft.fft(mfcc.y) fft = fft[:len(fft)//2+1] psd1 = np.real(fft * np.conj(fft)) # then calculate the psd of the psd fft = np.fft.fft(psd1/sum(psd1)) fft = fft[:len(fft)//2+1] psd = np.real(fft * np.conj(fft)) return(np.sum(psd)/len(psd)) Dividing by the length (normalizing) helps to compare different files of different lengths. Step 3: Sort def sort_mfccs(mfccs): values = [spectra_of_spectra(mfcc) for mfcc in mfccs] sorted_order = [i[0] for i in sorted(enumerate(values), key=lambda x:x[1], reverse = True)] return([i for i in sorted_order], [values[i] for i in sorted_order]) TEST mfccs_1 = get_mfccs(sound_files_1) sort_mfccs(mfccs_1) 1.wav 2.wav 3.wav 4.wav 5.wav 6.wav 7.wav 8.wav 9.wav 10.wav 12.wav 11.wav 13.wav Note that other than 11.wav and 12.wav the files are ordered in the way you would expect. I'm not sure if you agree with the order for your second set of files. I guess that's the test of how useful my method might be. mfccs_2 = get_mfccs(sorted_sound_files_2) sort_mfccs(mfccs_2) 12.wav 22.wav 26.wav 31.wav 4.wav 13.wav 34.wav 30.wav 21.wav 23.wav 7.wav 38.wav 11.wav 3.wav 9.wav 36.wav 16.wav 17.wav 33.wav 37.wav 8.wav 28.wav 5.wav 25.wav 20.wav 1.wav 39.wav 29.wav 18.wav 0.wav 27.wav 14.wav 35.wav 15.wav 24.wav 10.wav 19.wav 32.wav 2.wav 6.wav Last point regarding question in code re: UserWarning I am not familiar with the module you are using here, but it looks like it is trying to do a FFT with a window length of 2048 on a file of length 1536. FFTs are a building block of any sort of frequency analysis. In your line self.mfcc = librosa.feature.mfcc(self.y, n_mfcc=self.n_mfcc, sr=sr) you can specify the kwarg n_fft to remove this, for example, n_fft = 1024. However, I am not sure why librosa uses 2048 as a default so you may want to examine closely before changing. EDIT Plotting the values would help to show the comparison a bit more. The bigger the difference in the values, the bigger the difference in the files. 
def diff_matrix(L, V, mfccs): plt.figure() plt.semilogy(V, '.') for i in range(len(V)): plt.text(i, V[i], mfccs[L[i]].name.split('.')[0], fontsize = 8) plt.xticks([]) plt.ylim([0.001, 1]) plt.ylabel('Value') Here are the results for your first set and the second set Based on how close the values are relative to each other (think % change rather than difference), the sorting the second set will be quite sensitive to any tweaks compared to the first. EDIT 2 My best stab at your answer below would be to try something like this. For simplicity, I am going to describe pitch frequency as the frequency of the note and spectral frequency as the frequency variations from the signal processing perspective. I hope that makes sense. I would expect an oscillation on the volume to hit all pitches and so the contribution to the PSD would depend on the how the volume is oscillating in terms of the spectral frequencies. When different pitch frequencies get damped differently, you would need to start thinking about which pitch frequencies are important for what you're doing. I think the reason my sorting was so successful in your first example is probably because the variation was ubiquitous (or almost ubiquitous) across pitch frequencies. Perhaps there's a way to consider looking at the PSD at different pitch frequencies or pitch frequency bands. I haven't fully absorbed the info in the paper referenced in the other answer, but if you understand the math I'd start there. As a disclaimer, I kind of just played around and made something up to try to answer your question. You may want to consider asking a follow-up question on a site more focused on questions like this.
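To connect this back to the sort_sound_files stub in the question, a small sketch (reusing get_mfccs from the question and spectra_of_spectra / sort_mfccs from above) could be:

def sort_sound_files(sound_files):
    # Compute the MFCC objects (which also keep the raw signal y),
    # rank them by the normalized PSD-of-PSD value, and return the
    # file paths in that order.
    mfccs = get_mfccs(sound_files)
    order, _values = sort_mfccs(mfccs)
    return [sound_files[i] for i in order]

sorted_sound_files_1 = sort_sound_files(sound_files_1)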
10
4
64,648,253
2020-11-2
https://stackoverflow.com/questions/64648253/error-img-empty-in-function-imwrite
I want to create frames from the video named project.avi and save them to the frameIn folder, but some errors are not letting me get it done. How can I solve this problem? Here is the code: cap = cv2.VideoCapture('project.avi') currentFrame = 0 while(True): ret, frame = cap.read() name = 'frameIn/frame' + str(currentFrame) + '.jpg' print ("Creating file... " + name) cv2.imwrite(name, frame) frames.append(name) currentFrame += 1 cap.release() cv2.destroyAllWindows() The error is: Traceback (most recent call last): File "videoreader.py", line 28, in <module> cv2.imwrite(name, frame) cv2.error: OpenCV(4.4.0) /private/var/folders/nz/vv4_9tw56nv9k3tkvyszvwg80000gn/T/pip-req-build-2rx9f0ng/opencv/modules/imgcodecs/src/loadsave.cpp:738: error: (-215:Assertion failed) !_img.empty() in function 'imwrite'
The cause may be that the image is empty, so you should check whether the video is opened correctly before reading frames, using cap.isOpened(). Then, after executing ret, frame = cap.read(), check whether ret is true to ensure that the frame was grabbed correctly, and break out of the loop once it is false (otherwise the loop never ends and frame becomes None). The code, to be clear: cap = cv2.VideoCapture('project.avi') if cap.isOpened(): current_frame = 0 while True: ret, frame = cap.read() if not ret: break name = f'frameIn/frame{current_frame}.jpg' print(f"Creating file... {name}") cv2.imwrite(name, frame) frames.append(name) current_frame += 1 cap.release() cv2.destroyAllWindows() I hope it helps.
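One optional extra check, not from the original answer: OpenCV can report how many frames the container claims to have, which helps tell an unreadable file apart from a video that simply ended:

import cv2

cap = cv2.VideoCapture('project.avi')
print("Opened:", cap.isOpened())
print("Frames reported:", int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
cap.release()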
7
11
64,635,913
2020-11-1
https://stackoverflow.com/questions/64635913/why-im-getting-this-error-while-building-docker-image
I got the following error while building a docker image by "docker-compose build". ERROR: Couldn't connect to Docker daemon at http://127.0.0.1:2375 - is it running? If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable. Even if I try with "sudo", I got this: Building web Step 1/8 : FROM python:3.8.3-alpine ---> 8ecf5a48c789 Step 2/8 : WORKDIR /usr/src/app ---> Using cache ---> 87bb0088a0ba Step 3/8 : ENV PYTHONDONTWRITEBYTECODE 1 ---> Using cache ---> 4f1a6ddf9e1f Step 4/8 : ENV PYTHONUNBUFFERED 1 ---> Using cache ---> 5d22b6b7a0f5 Step 5/8 : RUN pip install --upgrade pip ---> Using cache ---> 169ee831f728 Step 6/8 : COPY ./requirements.txt . ---> Using cache ---> 4b4351e31632 Step 7/8 : RUN pip install -r requirements.txt ---> Running in a4dae2fe3761 Collecting asgiref==3.2.10 Downloading asgiref-3.2.10-py3-none-any.whl (19 kB) Collecting cffi==1.14.3 Downloading cffi-1.14.3.tar.gz (470 kB) Collecting cryptography==3.2.1 Downloading cryptography-3.2.1.tar.gz (540 kB) Installing build dependencies: started Installing build dependencies: finished with status 'error' ERROR: Command errored out with exit status 1: command: /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-p3ocmpkd/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.6.0' wheel 'cffi>=1.8,!=1.11.3; platform_python_implementation != '"'"'PyPy'"'"'' cwd: None Complete output (128 lines): Collecting setuptools>=40.6.0 Downloading setuptools-50.3.2-py3-none-any.whl (785 kB) Collecting wheel Downloading wheel-0.35.1-py2.py3-none-any.whl (33 kB) Collecting cffi!=1.11.3,>=1.8 Using cached cffi-1.14.3.tar.gz (470 kB) Collecting pycparser Downloading pycparser-2.20-py2.py3-none-any.whl (112 kB) Building wheels for collected packages: cffi Building wheel for cffi (setup.py): started Building wheel for cffi (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_eoslhz1/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_eoslhz1/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-99ngzpxw cwd: /tmp/pip-install-_eoslhz1/cffi/ Complete output (50 lines): unable to execute 'gcc': No such file or directory unable to execute 'gcc': No such file or directory No working compiler found, or bogus compiler options passed to the compiler from Python's standard "distutils" module. See the error messages above. Likely, the problem is not related to CFFI but generic to the setup.py of any Python package that tries to compile C code. (Hints: on OS/X 10.8, for errors about -mno-fused-madd see http://stackoverflow.com/questions/22313407/ Otherwise, see https://wiki.python.org/moin/CompLangPython or the IRC channel #python on irc.freenode.net.) Trying to continue anyway. If you are trying to install CFFI from a build done in a different context, you can ignore this warning. 
running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cffi copying cffi/api.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/model.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/error.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/lock.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.8/cffi warning: build_py: byte-compiling is disabled, skipping. running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.8 -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.8/c/_cffi_backend.o unable to execute 'gcc': No such file or directory error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for cffi Running setup.py clean for cffi Failed to build cffi Installing collected packages: setuptools, wheel, pycparser, cffi Running setup.py install for cffi: started Running setup.py install for cffi: finished with status 'error' ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_eoslhz1/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_eoslhz1/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-pah2aui6/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-p3ocmpkd/overlay --compile --install-headers /tmp/pip-build-env-p3ocmpkd/overlay/include/python3.8/cffi cwd: /tmp/pip-install-_eoslhz1/cffi/ Complete output (50 lines): unable to execute 'gcc': No such file or directory unable to execute 'gcc': No such file or directory No working compiler found, or bogus compiler options passed to the compiler from Python's standard "distutils" module. See the error messages above. Likely, the problem is not related to CFFI but generic to the setup.py of any Python package that tries to compile C code. (Hints: on OS/X 10.8, for errors about -mno-fused-madd see http://stackoverflow.com/questions/22313407/ Otherwise, see https://wiki.python.org/moin/CompLangPython or the IRC channel #python on irc.freenode.net.) Trying to continue anyway. 
If you are trying to install CFFI from a build done in a different context, you can ignore this warning. running install running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cffi copying cffi/api.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/model.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/error.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/lock.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.8/cffi warning: build_py: byte-compiling is disabled, skipping. running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.8 -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.8/c/_cffi_backend.o unable to execute 'gcc': No such file or directory error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_eoslhz1/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_eoslhz1/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-pah2aui6/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-p3ocmpkd/overlay --compile --install-headers /tmp/pip-build-env-p3ocmpkd/overlay/include/python3.8/cffi Check the logs for full command output. ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-p3ocmpkd/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.6.0' wheel 'cffi>=1.8,!=1.11.3; platform_python_implementation != '"'"'PyPy'"'"'' Check the logs for full command output. 
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1 That is my Dockerfile: # pull official base image FROM python:3.8.3-alpine # set work directory WORKDIR /usr/src/app # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # install dependencies RUN pip install --upgrade pip COPY ./requirements.txt . RUN pip install -r requirements.txt # copy project COPY . . and docker-compose.yml: version: '3.7' services: web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - ./:/usr/src/app/ ports: - 8000:8000 env_file: - ./.env.dev I will add that I bought my laptop a few months ago, and already had problems with Docker on my previous system on this device (that's one of the reasons I changed my system). I switched from Fedora to Ubuntu. Additionally, there is no "python" alias for python3 in my shell, and because of the unknown reason I need to put "python3 -m" before simple "pip freeze". I hope that information may be useful. Thank You. . . . After adding a "RUN apk add builder-base" in my Dockerfile, following error appears: Step 8/9 : RUN pip install -r requirements.txt ---> Running in cd9c74fbd831 Collecting asgiref==3.2.10 Downloading asgiref-3.2.10-py3-none-any.whl (19 kB) Collecting cffi==1.14.3 Downloading cffi-1.14.3.tar.gz (470 kB) Collecting cryptography==3.2.1 Downloading cryptography-3.2.1.tar.gz (540 kB) Installing build dependencies: started Installing build dependencies: finished with status 'error' ERROR: Command errored out with exit status 1: command: /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-gbfaltlj/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.6.0' wheel 'cffi>=1.8,!=1.11.3; platform_python_implementation != '"'"'PyPy'"'"'' cwd: None Complete output (104 lines): Collecting setuptools>=40.6.0 Downloading setuptools-50.3.2-py3-none-any.whl (785 kB) Collecting wheel Downloading wheel-0.35.1-py2.py3-none-any.whl (33 kB) Collecting cffi!=1.11.3,>=1.8 Using cached cffi-1.14.3.tar.gz (470 kB) Collecting pycparser Downloading pycparser-2.20-py2.py3-none-any.whl (112 kB) Building wheels for collected packages: cffi Building wheel for cffi (setup.py): started Building wheel for cffi (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-myhajkmm/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-myhajkmm/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-9jsg5veu cwd: /tmp/pip-install-myhajkmm/cffi/ Complete output (38 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cffi copying cffi/api.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/model.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/error.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/ffiplatform.py -> 
build/lib.linux-x86_64-3.8/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/lock.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.8/cffi warning: build_py: byte-compiling is disabled, skipping. running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.8 -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.8/c/_cffi_backend.o c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory 15 | #include <ffi.h> | ^~~~~~~ compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for cffi Running setup.py clean for cffi Failed to build cffi Installing collected packages: setuptools, wheel, pycparser, cffi Running setup.py install for cffi: started Running setup.py install for cffi: finished with status 'error' ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-myhajkmm/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-myhajkmm/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-s86a1yn7/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-gbfaltlj/overlay --compile --install-headers /tmp/pip-build-env-gbfaltlj/overlay/include/python3.8/cffi cwd: /tmp/pip-install-myhajkmm/cffi/ Complete output (38 lines): running install running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cffi copying cffi/api.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/model.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/error.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/lock.py -> build/lib.linux-x86_64-3.8/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.8/cffi copying 
cffi/_cffi_include.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-3.8/cffi copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.8/cffi warning: build_py: byte-compiling is disabled, skipping. running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.8 -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.8/c/_cffi_backend.o c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory 15 | #include <ffi.h> | ^~~~~~~ compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-myhajkmm/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-myhajkmm/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-s86a1yn7/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-gbfaltlj/overlay --compile --install-headers /tmp/pip-build-env-gbfaltlj/overlay/include/python3.8/cffi Check the logs for full command output. ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-gbfaltlj/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.6.0' wheel 'cffi>=1.8,!=1.11.3; platform_python_implementation != '"'"'PyPy'"'"'' Check the logs for full command output. ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
Alpine Linux does not support the binary wheels Python packages ship under the manylinux tag, so you have to compile things like cffi and cryptography yourself. To do so you'll need a compiler and the correct set of headers. This is documented in the cryptography installation documentation for Alpine. Update September 2021: There is now a musllinux standard which allows binary wheels that work with Alpine. To use them you must have pip 21.2.4 or greater.
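A minimal sketch of that approach for the Dockerfile in the question, assuming the apk package names from the cryptography Alpine instructions (adjust to what your requirements actually need; newer cryptography releases additionally require rust/cargo):

# sketch: add the compiler and headers needed to build cffi/cryptography from source
FROM python:3.8.3-alpine

RUN apk add --no-cache gcc musl-dev libffi-dev openssl-dev

WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .

Alternatively, switching the base image to a non-Alpine one such as python:3.8.3-slim avoids the compile step entirely, because the prebuilt manylinux wheels work there.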
7
14
64,663,862
2020-11-3
https://stackoverflow.com/questions/64663862/cant-install-h5py
I am trying to install h5py on Windows 10 64-bit, Python 3.8.5, pip 20.2.4. I used this command: pip install h5py But this throws an error: ERROR: Could not build wheels for h5py which use PEP 517 and cannot be installed directly It looks like this is a well-known issue for PEP 517 and other packages, so I tried the usual solutions like pip install --no-use-pep517 h5py and pip install --no-binary h5py But nothing works. How can I install h5py?
Found a solution - I was trying to install on a 32-bit Python 3.8.5. Switching to 64-bit solved the issue. I saw that the latest version doesn't support win32; check this: github.com/h5py/h5py/issues/1753
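A quick, generic way to confirm which interpreter is actually being used (not specific to h5py; only standard-library calls):

import struct
import platform

# prints 64 on a 64-bit interpreter, 32 on a 32-bit one
print(struct.calcsize("P") * 8)
# e.g. ('64bit', 'WindowsPE') on a 64-bit Windows build
print(platform.architecture())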
8
3
64,645,343
2020-11-2
https://stackoverflow.com/questions/64645343/plotly-update-subplot-titles-after-traces-where-created
I have a plotly plot composed of subplots - fig = make_subplots( rows=3, cols=1) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=1, col=1) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=2, col=1) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=3, col=1) I want to add a title for every one of the subplots, but only after the figure is created and the traces are added, i.e. not when I create the figure like fig = make_subplots(rows=3, cols=1, subplot_titles=['a', 'b', 'c']). Can I do it via fig.update_layout or something similar?
When make_subplots is used it's creating (correctly placed) annotations. So this can be mostly duplicated using annotation methods. In general, for the first: fig.add_annotation(xref="x domain",yref="y domain",x=0.5, y=1.2, showarrow=False, text="a", row=1, col=1) The downside is you may need to adjust x and y to your conditions/tastes. Full example, using bold text: import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots(rows=3, cols=1) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=1, col=1) fig.add_annotation(xref="x domain",yref="y domain",x=0.5, y=1.2, showarrow=False, text="<b>Hey</b>", row=1, col=1) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=2, col=1) fig.add_annotation(xref="x domain",yref="y domain",x=0.5, y=1.2, showarrow=False, text="<b>Bee</b>", row=2, col=1) fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=3, col=1) fig.add_annotation(xref="x domain",yref="y domain",x=0.5, y=1.2, showarrow=False, text="<b>See</b>", row=3, col=1) fig.show()
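If there are many subplots, the same annotation call can be driven by a loop; this is just a sketch reusing the fig from the answer above, with placeholder titles:

# add one bold title annotation per row, using the approach shown above
titles = ["Hey", "Bee", "See"]
for row, title in enumerate(titles, start=1):
    fig.add_annotation(xref="x domain", yref="y domain",
                       x=0.5, y=1.2, showarrow=False,
                       text=f"<b>{title}</b>", row=row, col=1)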
14
10
64,611,050
2020-10-30
https://stackoverflow.com/questions/64611050/python-change-exception-printable-output-eg-overload-builtins
I am searching for a way to change the printable output of an Exception to a silly message in order to learn more about python internals (and mess with a friend ;), so far without success. Consider the following code try: x # is not defined except NameError as exc: print(exc) The code shall output name 'x' is not defined I would like to change that output to the name 'x' you suggested is not yet defined, my lord. Improve your coding skills. So far, I understood that you can't change __builtins__ because they're "baked in" as C code, unless: You use the forbiddenfruit.curse method which adds / changes properties of any object You manually override the dictionaries of an object I've tried both solutions, but without success: forbiddenfruit solution: from forbiddenfruit import curse curse(BaseException, 'repr', lambda self: print("Test message for repr")) curse(BaseException, 'str', lambda self: print("Test message for str")) try: x except NameError as exc: print(exc.str()) # Works, shows test message print(exc.repr()) # Works, shows test message print(repr(exc)) # Does not work, shows real message print(str(exc)) # Does not work, shows real message print(exc) # Does not work, shows real message Dictionary overriding solution: import gc underlying_dict = gc.get_referents(BaseException.__dict__)[0] underlying_dict["__repr__"] = lambda self: print("test message for repr") underlying_dict["__str__"] = lambda self: print("test message for str") underlying_dict["args"] = 'I am an argument list' try: x except NameError as exc: print(exc.__str__()) # Works, shows test message print(exc.__repr__()) # Works, shows test message print(repr(exc)) # Does not work, shows real message print(str(exc)) # Does not work, shows real message print(exc) # Does not work, shows real message AFAIK, using print(exc) should rely on either __repr__ or __str__, but it seems like the print function uses something else, which I cannot find even when reading all properties of BaseException via print(dir(BaseException)). Could anyone give me an insight into what print uses in this case, please? [EDIT] To add a bit more context: The problem I'm trying to solve began as a joke to mess with a programmer friend, but it has now become a challenge for me to understand more of Python's internals. There's no real business problem I'm trying to solve; I just want to get a deeper understanding of things in Python. I'm quite puzzled that print(exc) won't actually make use of BaseException.__repr__ or __str__. [/EDIT]
I'll just explain the behaviour you described: exc.__repr__() This will just call your lambda function and return the expected string. Btw you should return the string, not print it in your lambda functions. print(repr(exc)) Now, this is going a different route in CPython and you can see this in a GDB session, it's something like this: Python/bltinmodule.c:builtin_repr will call Objects/object.c:PyObject_Repr - this function gets the PyObject *v as the only parameter that it will use to get and call a function that implements the built-in function repr(), BaseException_repr in this case. This function will format the error message based on a value from args structure field: (gdb) p ((PyBaseExceptionObject *) self)->args $188 = ("name 'x' is not defined",) The args value is set in Python/ceval.c:format_exc_check_arg based on a NAME_ERROR_MSG macro set in the same file. Update: Sun 8 Nov 20:19:26 UTC 2020 test.py: import sys import dis def main(): try: x except NameError as exc: tb = sys.exc_info()[2] frame, i = tb.tb_frame, tb.tb_lasti code = frame.f_code arg = code.co_code[i + 1] name = code.co_names[arg] print(name) if __name__ == '__main__': main() Test: # python test.py x Note: I would also recommend to watch this video from PyCon 2016.
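Building on the explanation above that BaseException renders its message from the args field, here is a small sketch of what that implies in practice; the excepthook part is a separate, standard hook for uncaught exceptions and is not something the answer covers:

import sys

try:
    x  # not defined
except NameError as exc:
    # str()/repr() are built from args, so replacing args changes print(exc)
    exc.args = ("the name 'x' you suggested is not yet defined, my lord.",)
    print(exc)

# for uncaught exceptions, the displayed message can be restyled globally
def silly_excepthook(exc_type, exc_value, exc_tb):
    print(f"{exc_type.__name__}: {exc_value} -- improve your coding skills.")

sys.excepthook = silly_excepthook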
8
1
64,611,957
2020-10-30
https://stackoverflow.com/questions/64611957/assertionerror-would-build-wheel-with-unsupported-tag-cp310-cp310-linux
I've got this message when I try to install numpy using Python 3.10. How to fix this? Copying numpy.egg-info to build/bdist.linux-x86_64/wheel/numpy-1.19.3-py3.10.egg-info running install_scripts Traceback (most recent call last): File "/home/walenty/.local/lib/python3.10/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module> main() File "/home/walenty/.local/lib/python3.10/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/home/walenty/.local/lib/python3.10/site-packages/pip/_vendor/pep517/_in_process.py", line 204, in build_wheel return _build_backend().build_wheel(wheel_directory, config_settings, File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 211, in build_wheel return self._build_with_temp_dir(['bdist_wheel'], '.whl', File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 197, in _build_with_temp_dir self.run_setup() File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 248, in run_setup super(_BuildMetaLegacyBackend, File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 142, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 508, in <module> setup_package() File "setup.py", line 500, in setup_package setup(**metadata) File "/tmp/pip-install-p3yq92pw/numpy/numpy/distutils/core.py", line 169, in setup return old_setup(**new_attr) File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 165, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python3.10/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.10/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.10/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 328, in run impl_tag, abi_tag, plat_tag = self.get_tag() File "/tmp/pip-build-env-plb3t7s6/overlay/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 278, in get_tag assert tag in supported_tags, "would build wheel with unsupported tag {}".format(tag) AssertionError: would build wheel with unsupported tag ('cp310', 'cp310', 'linux_x86_64') ---------------------------------------- ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly
It's a bug in Python 3.10; a workaround is installing numpy with the --no-use-pep517 flag. E.g.: pip3.10 install numpy --no-use-pep517 There's a fix for this on the way though, so just waiting is an option as well.
7
8
64,654,838
2020-11-2
https://stackoverflow.com/questions/64654838/pytorch-tutorial-freeze-support-issue
I tried following the tutorial from PyTorch here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py. Full code is here: import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np import torch.nn as nn import torch.nn.functional as F import torch.optim as optim # Loading and normalizing CIFAR10 transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # Shows training images, DOESN'T WORK def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) # define a convolutional neural network class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() # Define a loss function and optimizer criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) # Train the network for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 # DOESN'T WORK for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') # save trained model PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) # test the network on the test data dataiter = iter(testloader) images, labels = dataiter.next() # print images dataiter = iter(testloader) images, labels = dataiter.next() imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) # load back saved model net = Net() net.load_state_dict(torch.load(PATH)) # see what the nueral network thinks these examples above are: ouputs = net(images) # index of the highest energy _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) # accuracy on the whole dataset correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 
1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) # classes that perfomed well vs classes that didn't perform well class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i])) if __name__ == '__main__': torch.multiprocessing.freeze_support() However I got this issue: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. I'm just trying to run this in a regular python file. When I added if __name__ == '__main__': freeze_support() to the end of my file, I still get the error.
To anyone else with this issue, I believe you need to define a main function and run the training there. Then add: if __name__ == '__main__': main() at the end of the python file. This fixed the freeze_support() issue for me on a different PyTorch training program.
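A minimal sketch of that restructuring applied to the tutorial code from the question (only the data-loading part is shown; the rest of the training loop moves inside main() unchanged):

import torch
import torchvision
import torchvision.transforms as transforms

def main():
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    # DataLoader workers are what trigger the multiprocessing bootstrap error
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    # ... define the network, loss, optimizer and run the training loop here ...

if __name__ == '__main__':
    main()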
7
12
64,596,394
2020-10-29
https://stackoverflow.com/questions/64596394/importerror-cannot-import-name-docevents-from-botocore-docs-bcdoc-in-aws-co
ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/python3.7/site-packages/botocore/docs/bcdoc/init.py) Traceback (most recent call last): File "/root/.pyenv/versions/3.7.6/bin/aws", line 19, in <module> import awscli.clidriver File "/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/clidriver.py", line 36, in <module> from awscli.help import ProviderHelpCommand File "/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/help.py", line 23, in <module> from botocore.docs.bcdoc import docevents ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/botocore/docs/bcdoc/__init__.py) [Container] 2020/10/29 16:48:39 Command did not exit successfully aws --version exit status 1 The failure occurs in the PRE_BUILD. And this is my spec build file: buildspec-cd.yml pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} - REPOSITORY_URI=${CONTAINER_REGISTRY}/${APPLICATION_NAME} - aws --version - echo Logging in to Amazon ECR... - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email) The codebuild was working correctly and nothing has been changed. Only stopped working.
Reading this GitHub issue (#2596), I fixed my error. Just before the PRE_BUILD section, I added this line to my buildspec-cd.yml file: pip3 install --upgrade awscli install: commands: - pip3 install awsebcli --upgrade - eb --version - pip3 install --upgrade awscli pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} ...
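If the same ImportError appears outside CodeBuild, a hedged local equivalent (an assumption based on the linked issue, not part of the original answer) is simply reinstalling awscli so it pulls a botocore release that matches it:

pip3 install --upgrade awscli botocore
aws --version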
85
176
64,654,805
2020-11-2
https://stackoverflow.com/questions/64654805/how-do-you-fix-runtimeerror-package-fails-to-pass-a-sanity-check-for-numpy-an
This is the error I am getting and, as far as I can tell, there is nothing useful on the error link to fix this. RuntimeError: The current Numpy installation ('...\\venv\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://developercommunity.visualstudio.com/content/problem/1207405/fmod-after-an-update-to-windows-2004-is-causing-a.html I have tried multiple versions of Python (3.8.6 and 3.9.0) and numpy and pandas. I am currently using PyCharm to do all this.
This error occurs when using Python 3.9 and numpy 1.19.4, so uninstalling numpy 1.19.4 and installing 1.19.3 will work. Edit: As of January 5th 2021, numpy version 1.19.5 is out and appears to solve the problem.
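A minimal sketch of that workaround run inside the project's virtual environment (the version pins follow the answer):

pip uninstall -y numpy
pip install numpy==1.19.3
# or, once the fixed release became available:
pip install numpy==1.19.5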
140
193
64,588,486
2020-10-29
https://stackoverflow.com/questions/64588486/address-already-in-use-fastapi
I keep getting [Errno 98] Address already in use, but the address is not in use. I tried to change the IP and port, but it isn't budging. from fastapi import FastAPI app = FastAPI() @app.get("/") async def main(): return {"message": "Helloworld,FastAPI"} if __name__ == '__main__': import uvicorn uvicorn.run(app, host="127.0.0.1", port=8000) uvicorn main:app --reload I also tried uvicorn main:app --host=172.0.0.2 --port=5000 and then it gives [Errno 99] error while attempting to bind on address ('172.0.0.2', 5000): cannot assign requested address I tried running a Flask dev server and it was running on 172.0.0.1 without a problem. I am using Arch/Manjaro Linux. I used nmap to see what the fuss was about, but only 2 ports are in use on 127.0.0.1: PORT STATE SERVICE 631/tcp open ipp 8000/tcp open http-alt I would use another IP and port, but it gives an error that it can't be assigned.
Basically, you can do this. It will kill the process that listens for TCP connections on port 8000: sudo lsof -t -i tcp:8000 | xargs kill -9
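If killing the process is not desirable, another option consistent with the question's own code is to keep the loopback host and simply bind to a different free port; the 172.0.0.2 attempt fails with Errno 99 because that address is not assigned to any local interface. A small sketch:

from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/")
async def main():
    return {"message": "Hello world, FastAPI"}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8001)  # any unused port works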
21
52
64,638,010
2020-11-1
https://stackoverflow.com/questions/64638010/compare-csv-files-content-with-filecmp-and-ignore-metadata
import filecmp comparison = filecmp.dircmp(dir_local, dir_server) comparison.report_full_closure() I want to compare all CSV files kept on my local machine to files kept on a server. The folder structure is the same for both of them. I only want to do a data comparison and not metadata (like time of creation, etc). I am using filecmp but it seems to perform metadata comparison. Is there a way to do what I want?
There are multiple ways to compare the .csv files between the 2 repositories (server file system and local file system). Method 1: using hashlib This method uses the Python module hashlib. I used the hashing algorithm sha256 to compute the hash digest for the files. I compare the hashes for files with the exact file name. This method works well, but it will overlook any file that doesn't exist in both directories. import hashlib def compare_common_files_by_hash(directory_one, directory_two): d1_files = set(os.listdir(directory_one)) d2_files = set(os.listdir(directory_two)) common_files = list(d1_files & d2_files) if common_files: for filename in common_files: hash_01 = hashlib.sha256(open(f'{directory_one}/{filename}', 'rb').read()).hexdigest() hash_02 = hashlib.sha256(open(f'{directory_two}/{filename}', 'rb').read()).hexdigest() if hash_01 == hash_02: print(f'The file - {filename} is identical in the directories {directory_one} and {directory_two}') elif hash_01 != hash_02: print(f'The file - {filename} is different in the directories {directory_one} and {directory_two}') Method 2: using os st_size This method uses the Python module os. In this example, I compared the size of files. This method works ok, but it will misclassify any file that has any data change that doesn't change the size of the file. import os def compare_common_files_by_size(directory_one, directory_two): d1_files = set(os.listdir(directory_one)) d2_files = set(os.listdir(directory_two)) common_files = list(d1_files & d2_files) if common_files: for filename in common_files: file_01 = os.stat(f'{directory_one}/{filename}') file_02 = os.stat(f'{directory_two}/{filename}') if file_01.st_size == file_02.st_size: print(f'The file - {filename} is identical in the directories {directory_one} and {directory_two}') elif file_01.st_size != file_02.st_size: print(f'The file - {filename} is different in the directories {directory_one} and' f' {directory_two}') Method 3: using os st_size and st_mtime This method also uses the Python module os. In this example, I compared not only the size of the file, but also the last modification time. This method works good, but it will misclassify files as being identical. In testing, I saved a file with no data modifications and os.st_mtime flagged the file as being different, but in reality it wasn't really different. import os def compare_common_files_by_metadata(directory_one, directory_two): d1_files = set(os.listdir(directory_one)) d2_files = set(os.listdir(directory_two)) common_files = list(d1_files & d2_files) if common_files: for filename in common_files: file_01 = os.stat(f'{directory_one}/{filename}') file_02 = os.stat(f'{directory_two}/{filename}') if file_01.st_size == file_02.st_size and file_01.st_mtime == file_02.st_mtime: print(f'The file - {filename} is identical in the directories {directory_one} and {directory_two}') elif file_01.st_size != file_02.st_size or file_01.st_mtime != file_02.st_mtime: print(f'The file - {filename} is different in the directories {directory_one} and' f' {directory_two}') Method 4: using set() This example uses Python set() to determine the line to line differences between 2 csv files with the same name. This method will output the exact change between the 2 csv files. 
import os def compare_common_files_by_lines(directory_one, directory_two): d1_files = set(os.listdir(directory_one)) d2_files = set(os.listdir(directory_two)) common_files = list(d1_files & d2_files) if common_files: for filename in common_files: if fileName.endswith('.csv'): file_01 = open(f'{directory_one}/{filename}', 'r', encoding='ISO-8859-1') file_02 = open(f'{directory_two}/{filename}', 'r', encoding='ISO-8859-1') csv_file_01 = set(map(tuple, csv.reader(file_01))) csv_file_02 = set(map(tuple, csv.reader(file_02))) different = csv_file_01 ^ csv_file_02 for row in sorted(different, key=lambda x: x, reverse=True): if row: print(f'This row: \n {row} \n was different between the file {fileName} in the directories' f' {directory_one} and {directory_two}') Method 5: using filecmp.cmp This method uses the Python module filecmp. In this example I used filecmp.cmp with shallow set to False. Setting this parameter to False instructs filecmp to look at the contents of the files and not the metadata, such as filesize, which is the default for filecmp.cmp. This method works as well as Method 1, that used hashlib. import filecmp def compare_common_files(directory_one, directory_two): d1_files = set(os.listdir(directory_one)) d2_files = set(os.listdir(directory_two)) common_files = list(d1_files & d2_files) if common_files: for filename in common_files: file_01 = f'{directory_one}/{filename}' file_02 = f'{directory_two}/{filename}' comparison = filecmp.cmp(file_01, file_02, shallow=False) if comparison: print(f'The file - {filename} is identical in the directories - {directory_one} and {directory_two}') elif not comparison: print(f'The file - {filename} is different in the directories - {directory_one} and {directory_two}') Method 6: using filecmp.dircmp This method also uses the Python module filecmp. In this example I used filecmp.dircmp, which allows me to not only identify files that are non-common between the 2 directories and find those files that have similar names, but different content. import filecmp def directory_recursive(directory_one, directory_two): files = filecmp.dircmp(directory_one, directory_two) for filename in files.diff_files: print(f'The file - {filename} is different in the directories - {files.left} and {files.right}') for filename in files.left_only: print(f'The file - {filename} - was only found in the directory {files.left}') for filename in files.right_only: print(f'The file - {filename} - was only found in the directory {files.right}') Method 7: line-by-line comparison This example does a line-by-line comparison of 2 csv files and output the line that are different. The output can be added to either Python dictionary or to JSON file for secondary. import csv def get_csv_file_lines(file): with open(file, 'r', encoding='utf-8') as csv_file: rows = csv.reader(csv_file) for row in rows: yield row def compare_csv_files_line_by_line(csv_file_one, csv_file_two): csvfile_02 = get_csv_file_lines(csv_file_two) for line_one in get_csv_file_lines(csv_file_one): line_two = csvfile_02.__next__() if line_two != line_one: print('File names being compared:') print(f'csv_file_one: {csv_file_one}') print(f'csv_file_two: {csv_file_two}') print(f'The following rows have difference in the files being compared.') print('csv_file_one:', line_one) print('csv_file_two:', line_two) print('\n') Local file system to S3 bucket using hashlib The example below is a real world use case for comparing files between a local file system and a remote S3 bucket. 
I originally was going to use object.e_tag that AWS S3 creates, but that tag can have issues and shouldn't be used in a hashing comparison operation. I decided to query S3 and load an individual file into a memory file system that could be queried and emptied during each comparison operation. This method worked very well and have no adverse impact to my system performance. import fs import os import boto3 import hashlib def create_temp_memory_filesystem(): mem_fs = fs.open_fs('mem://') virtual_disk = mem_fs.makedir('hidden_dir') return mem_fs, virtual_disk def query_s3_file_by_name(filename, memory_filesystem, temp_directory): s3 = boto3.resource('s3', aws_access_key_id='your_access_key_id', aws_secret_access_key='your_secret_access_key') bucket = s3.Bucket('your_bucket_name') for obj in bucket.objects.all(): if obj.key == filename: body = obj.get()['Body'].read() with memory_filesystem.open(f'{temp_directory}/s3_{filename}', 'w') as f: f.write(str(body)) f.close() def compare_local_files_to_s3_files(local_csv_files): virtual_disk = create_temp_memory_filesystem() directory_name = str(virtual_disk[1]).split('/')[1] files = set(os.listdir(local_csv_files)) for filename in files: if filename.endswith('.csv'): local_file_hash = hashlib.sha256(open(f'{local_csv_files}/{filename}', 'rb').read()).hexdigest() query_s3_file_by_name(filename, virtual_disk[0], directory_name) virtual_files = virtual_disk[0].opendir(directory_name) for file_name in virtual_files.listdir('/'): s3_file_hash = hashlib.sha256(open(file_name, 'rb').read()).hexdigest() if local_file_hash == s3_file_hash: print(f'The file - {filename} is identical in both the local file system and the S3 bucket.') elif local_file_hash != s3_file_hash: print(f'The file - {filename} is different between the local file system and the S3 bucket.') virtual_files.remove(file_name) virtual_disk[0].close() Local file system to S3 bucket using filecmp This example is the same as the one above except I use filecmp.cmp instead of hashlib for the comparison operation. 
import fs import os import boto3 import filecmp def create_temp_memory_filesystem(): mem_fs = fs.open_fs('mem://') virtual_disk = mem_fs.makedir('hidden_dir') return mem_fs, virtual_disk def query_s3_file_by_name(filename, memory_filesystem, temp_directory): s3 = boto3.resource('s3', aws_access_key_id='your_access_key_id', aws_secret_access_key='your_secret_access_key') bucket = s3.Bucket('your_bucket_name') for obj in bucket.objects.all(): if obj.key == filename: body = obj.get()['Body'].read() with memory_filesystem.open(f'{temp_directory}/s3_{filename}', 'w') as f: f.write(str(body)) f.close() def compare_local_files_to_s3_files(local_csv_files): virtual_disk = create_temp_memory_filesystem() directory_name = str(virtual_disk[1]).split('/')[1] files = set(os.listdir(local_csv_files)) for filename in files: if filename.endswith('.csv'): local_file = f'{local_csv_files}/{filename}' query_s3_file_by_name(filename, virtual_disk[0], directory_name) virtual_files = virtual_disk[0].opendir(directory_name) for file_name in virtual_files.listdir('/'): comparison = filecmp.cmp(local_file, file_name, shallow=False) if comparison: print(f'The file - {filename} is identical in both the local file system and the S3 bucket.') elif not comparison: print(f'The file - {filename} is different between the local file system and the S3 bucket.') virtual_files.remove(file_name) virtual_disk[0].close() Local file system to Google Cloud storage bucket using hashlib This example is similar to the S3 hashlib code example above, but it uses a Google Cloud storage bucket. import fs import os import hashlib from google.cloud import storage def create_temp_memory_filesystem(): mem_fs = fs.open_fs('mem://') virtual_disk = mem_fs.makedir('hidden_dir') return mem_fs, virtual_disk def query_google_cloud_storage_file_by_name(filename, memory_filesystem, temp_directory): client = storage.Client.from_service_account_json('path_to_your_credentials.json') bucket = client.get_bucket('your_bucket_name') blobs = bucket.list_blobs() for blob in blobs: if blob.name == filename: with memory_filesystem.open(f'{temp_directory}/{filename}', 'w') as f: f.write(str(blob.download_to_filename(blob.name))) f.close() def compare_local_files_to_google_storage_files(local_csv_files): virtual_disk = create_temp_memory_filesystem() directory_name = str(virtual_disk[1]).split('/')[1] files = set(os.listdir(local_csv_files)) for filename in files: if filename.endswith('.csv'): local_file_hash = hashlib.sha256(open(f'{local_csv_files}/{filename}', 'rb').read()).hexdigest() query_google_cloud_storage_file_by_name(filename, virtual_disk[0], directory_name) virtual_files = virtual_disk[0].opendir(directory_name) for file_name in virtual_files.listdir('/'): gs_file_hash = hashlib.sha256(open(file_name, 'rb').read()).hexdigest() if local_file_hash == gs_file_hash: print(f'The file - {filename} is identical in both the local file system and the Google Cloud bucket.') elif local_file_hash != gs_file_hash: print(f'The file - {filename} is different between the local file system and the Google Cloud bucket.') virtual_files.remove(file_name) virtual_disk[0].close() Local file system to Google Cloud storage bucket using filecmp This example is similar to the S3 filecmp code example above, but it uses a Google Cloud storage bucket. 
import fs import os import filecmp from google.cloud import storage def create_temp_memory_filesystem(): mem_fs = fs.open_fs('mem://') virtual_disk = mem_fs.makedir('hidden_dir') return mem_fs, virtual_disk def query_google_cloud_storage_file_by_name(filename, memory_filesystem, temp_directory): client = storage.Client.from_service_account_json('path_to_your_credentials.json') bucket = client.get_bucket('your_bucket_name') blobs = bucket.list_blobs() for blob in blobs: if blob.name == filename: with memory_filesystem.open(f'{temp_directory}/{filename}', 'w') as f: f.write(str(blob.download_to_filename(blob.name))) f.close() def compare_local_files_to_google_storage_files(local_csv_files): virtual_disk = create_temp_memory_filesystem() directory_name = str(virtual_disk[1]).split('/')[1] files = set(os.listdir(local_csv_files)) for filename in files: if filename.endswith('.csv'): local_file = f'{local_csv_files}/{filename}' query_google_cloud_storage_file_by_name(filename, virtual_disk[0], directory_name) virtual_files = virtual_disk[0].opendir(directory_name) for file_name in virtual_files.listdir('/'): comparison = filecmp.cmp(local_file, file_name, shallow=False) if comparison: print(f'The file - {filename} is identical in both the local file system and the Google Cloud bucket.') elif not comparison: print(f'The file - {filename} is different between the local file system and the Google Cloud bucket.') virtual_files.remove(file_name) virtual_disk[0].close()
8
6
64,639,526
2020-11-2
https://stackoverflow.com/questions/64639526/numba-data-type-error-cannot-unify-array
I am using Numba to speed up a series of functions as shown below. if I set the step_size variable in function PosMomentSingle to a float (e.g. step_size = 0.5), instead of an integer (e.g step_size = 1.0), I get the following error: Cannot unify array(float32, 1d, C) and array(float64, 1d, C) for 'axle_coords.2', defined at <ipython-input-182-37c789ca2187> (12) File "<ipython-input-182-37c789ca2187>", line 12: def nbSimpleSpanMoment(L, axles, spacings, step_size): <source elided> while np.min(axle_coords) < L: I found it quite hard to understand what the problem is, but my guess is there is an issue with the function after @jit (nbSimpleSpanMoment), with some kind of a datatype mismatch. I tried setting all variables to float32, then to float64 (e.g. L = np.float32(L)) but whatever I try creates a new set of errors. Since the error message is quite cryptic, I am unable to debug the issue. Can someone with numba experience explain what I am doing wrong here? I placed my code below to recreate the problem. Thank you for the help! import numba as nb import numpy as np @nb.vectorize(nopython=True) def nbvectMoment(L,x): if x<L/2.0: return 0.5*x else: return 0.5*(L-x) @nb.jit(nopython=True) def nbSimpleSpanMoment(L, axles, spacings, step_size): travel = L + np.sum(spacings) maxmoment = 0 axle_coords = -np.cumsum(spacings) moment_inf = np.empty_like(axles) while np.min(axle_coords) < L: axle_coords = axle_coords + step_size y = nbvectMoment(L,axle_coords) for k in range(y.shape[0]): if axle_coords[k] >=0 and axle_coords[k] <= L: moment_inf[k] = y[k] else: moment_inf[k] = 0.0 moment = np.sum(moment_inf * axles) if maxmoment < moment: maxmoment = moment return np.around(maxmoment,1) def PosMomentSingle(current_axles, current_spacings): data_list = [] for L in range (1,201): L=float(L) if L <= 40: step_size = 0.5 else: step_size = 0.5 axles = np.array(current_axles, dtype='f') spacings = np.array(current_spacings, dtype='f') axles_inv = axles[::-1] spacings_inv = spacings[::-1] spacings = np.insert(spacings,0,0) spacings_inv = np.insert(spacings_inv,0,0) left_to_right = nbSimpleSpanMoment(L, axles, spacings, step_size) right_to_left = nbSimpleSpanMoment(L, axles_inv, spacings_inv, step_size) data_list.append(max(left_to_right, right_to_left)) return data_list load_effects = [] for v in range(14,31): load_effects.append(PosMomentSingle([8, 32, 32], [14, v])) load_effects = np.array(load_effects)
After removing all type conversions in your code, the following error was returned TypingError: Cannot unify array(int64, 1d, C) and array(float64, 1d, C) for 'axle_coords.2' This helped me to trace back the error to the dtype of spacings. In your code this initialized as a C compatible single, which seems to be different from a python float32, see here. After changing this to np.float64 the code now runs. The code below now runs and unify error does not occur anymore. import numba as nb import numpy as np @nb.vectorize(nopython=True) def nbvectMoment(L,x): if x<L/2.0: return 0.5*x else: return 0.5*(L-x) @nb.jit(nopython=True) def nbSimpleSpanMoment(L, axles, spacings, step_size): travel = L + np.sum(spacings) maxmoment = 0 axle_coords = -np.cumsum(spacings) moment_inf = np.empty_like(axles) while np.min(axle_coords) < L: axle_coords = axle_coords + step_size y = nbvectMoment(L,axle_coords) for k in range(y.shape[0]): if axle_coords[k] >=0 and axle_coords[k] <= L: moment_inf[k] = y[k] else: moment_inf[k] = 0.0 moment = np.sum(moment_inf * axles) if maxmoment < moment: maxmoment = moment return np.around(maxmoment,1) def PosMomentSingle(current_axles, current_spacings): data_list = [] for L in range (1,201): L=float(L) if L <= 40: step_size = 0.5 else: step_size = 0.5 axles = np.array(current_axles, np.float32) spacings = np.array(current_spacings, dtype=np.float64) axles_inv = axles[::-1] spacings_inv = spacings[::-1] spacings = np.insert(spacings,0,0) spacings_inv = np.insert(spacings_inv,0,0) left_to_right = nbSimpleSpanMoment(L, axles, spacings, step_size) right_to_left = nbSimpleSpanMoment(L, axles_inv, spacings_inv, step_size) data_list.append(max(left_to_right, right_to_left)) return data_list load_effects = [] for v in range(14,31): load_effects.append(PosMomentSingle([8, 32, 32], [14, v])) load_effects = np.array(load_effects)
9
9
64,665,978
2020-11-3
https://stackoverflow.com/questions/64665978/any-workaround-to-do-forward-forecasting-for-estimating-time-series-in-python
I want to make forward forecasting for monthly times series of air pollution data such as what would be 3~6 months ahead of estimation on air pollution index. I tried scikit-learn models for forecasting and fitting data to the model works fine. But what I wanted to do is making a forward period estimate such as what would be 6 months ahead of the air pollution output index is going to be. In my current attempt, I could able to train the model by using scikit-learn. But I don't know how that forward forecasting can be done in python. To make a forward period estimate, what should I do? Can anyone suggest a possible workaround to do this? Any idea? my attempt import pandas as pd from sklearn.preprocessing StandardScaler from sklearn.metrics import accuracy_score from sklearn.linear_model import BayesianRidge url = "https://gist.githubusercontent.com/jerry-shad/36912907ba8660e11cd27be0d3e30639/raw/424f0891dc46d96cd5f867f3d2697777ac984f68/pollution.csv" df = pd.read_csv(url, parse_dates=['dates']) df.drop(columns=['Unnamed: 0'], inplace=True) resultsDict={} predictionsDict={} split_date ='2017-12-01' df_training = df.loc[df.index <= split_date] df_test = df.loc[df.index > split_date] df_tr = df_training.drop(['pollution_index'],axis=1) df_te = df_test.drop(['pollution_index'],axis=1) scaler = StandardScaler() scaler.fit(df_tr) X_train = scaler.transform(df_tr) y_train = df_training['pollution_index'] X_test = scaler.transform(df_te) y_test = df_test['pollution_index'] X_train_df = pd.DataFrame(X_train,columns=df_tr.columns) X_test_df = pd.DataFrame(X_test,columns=df_te.columns) reg = linear_model.BayesianRidge() reg.fit(X_train, y_train) yhat = reg.predict(X_test) resultsDict['BayesianRidge'] = accuracy_score(df_test['pollution_index'], yhat) new update 2 this is my attempt using ARMA model from statsmodels.tsa.arima_model import ARIMA index = len(df_training) yhat = list() for t in tqdm(range(len(df_test['pollution_index']))): temp_train = df[:len(df_training)+t] model = ARMA(temp_train['pollution_index'], order=(1, 1)) model_fit = model.fit(disp=False) predictions = model_fit.predict(start=len(temp_train), end=len(temp_train), dynamic=False) yhat = yhat + [predictions] yhat = pd.concat(yhat) resultsDict['ARMA'] = evaluate(df_test['pollution_index'], yhat.values) but this can't help me to make forward forecasting of estimating my time series data. what I want to do is, what would be 3~6 months ahead of estimated values of pollution_index. Can anyone suggest me a possible workaround to do this? How to overcome the limitation of my current attempt? What should I do? Can anyone suggest me a better way of doing this? Any thoughts? update: goal for the clarification, I am not expecting which model or approach works best, but what I am trying to figure it out is, how to make reliable forward forecasting for given time series (pollution index), how should I correct my current attempt if it is not efficient and not ready to do forward period estimation. Can anyone suggest any possible way to do this? update-desired output here is my sketch desired forecasting plot that I want to make:
In order to obtain your desired output, I think you need to use a model that can return the standard deviation in the predicted value. Therefore, I adopt Gaussian process regression. From the code you provided in your post, I don't see how this is a time series forecasting task, so in my solution below, I also treat this task as a usual regression task. First, prepare the data import pandas from sklearn.preprocessing import StandardScaler from sklearn.gaussian_process import GaussianProcessRegressor url = "https://gist.githubusercontent.com/jerry-shad/36912907ba8660e11cd27be0d3e30639/raw/424f0891dc46d96cd5f867f3d2697777ac984f68/pollution.csv" df = pd.read_csv(url,parse_dates=['date']) df.drop(columns=['Unnamed: 0'],axis=1,inplace=True) # sort the dataframe by date and reset the index df = df.sort_values(by='date').reset_index(drop=True) # after sorting the dataframe, split the dataframe split_date ='2017-12-01' df_training = df.loc[(df.date <= split_date).values] df_test = df.loc[(df.date > split_date).values] # drop the date column df_training.drop(columns=['date'],axis=1,inplace=True) df_test.drop(columns=['date'],axis=1,inplace=True) y_train = df_training['pollution_index'] y_test = df_test['pollution_index'] df_training.drop(['pollution_index'],axis=1) df_test.drop(['pollution_index'],axis=1) scaler = StandardScaler() scaler.fit(df_training) X_train = scaler.transform(df_training) X_test = scaler.transform(df_test) X_train_df = pd.DataFrame(X_train,columns=df_training.columns) X_test_df = pd.DataFrame(X_test,columns=df_test.columns) with the dataframes prepared above, you can train a GaussianProcessRegressor and make predictions by gpr = GaussianProcessRegressor(normalize_y=True).fit(X_train_df,y_train) pred,std = gpr.predict(X_test_df,return_std=True) in which std is an array of standard deviations in the predicted values. Then, you can plot the data by import numpy as np from matplotlib import pyplot as plt fig,ax = plt.subplots(figsize=(12,8)) plot_start = 225 # plot the training data ax.plot(y_train.index[plot_start:],y_train.values[plot_start:],'navy',marker='o',label='observed') # plot the test data ax.plot(y_test.index,y_test.values,'navy',marker='o') ax.plot(y_test.index,pred,'darkgreen',marker='o',label='pred') sigma = np.sqrt(std) ax.fill(np.concatenate([y_test.index,y_test.index[::-1]]), np.concatenate([pred-1.960*sigma,(pred+1.9600*sigma)[::-1]]), alpha=.5,fc='silver',ec='tomato',label='95% confidence interval') ax.legend(loc='upper left',prop={'size':16}) the output plot looks like UPDATE I thought pollution_index is something that can be predicted by 'dew', 'temp', 'press', 'wnd_spd', 'rain'. 
If you want a one-step ahead forecasting, here is what you can do import numpy as np import pandas as pd from statsmodels.tsa.arima_model import ARIMA from matplotlib import pyplot as plt import matplotlib.dates as mdates url = "https://gist.githubusercontent.com/jerry-shad/36912907ba8660e11cd27be0d3e30639/raw/424f0891dc46d96cd5f867f3d2697777ac984f68/pollution.csv" df = pd.read_csv(url,parse_dates=['date']) df.drop(columns=['Unnamed: 0'],axis=1,inplace=True) # sort the dataframe by date and reset the index df = df.sort_values(by='date').reset_index(drop=True) # after sorting the dataframe, split the dataframe split_date ='2017-12-01' df_training = df.loc[(df.date <= split_date).values] df_test = df.loc[(df.date > split_date).values] # extract the relevant info train_date,train_polltidx = df_training['date'].values,df_training['pollution_index'].values test_date,test_polltidx = df_test['date'].values,df_test['pollution_index'].values # train an ARIMA model model = ARIMA(train_polltidx,order=(1,1,1)) model_fit = model.fit(disp=0) # you can predict as many as you want, here I only predict len(test_dat.index) days forecast,stderr,conf = model_fit.forecast(len(test_date)) # plot the result fig,ax = plt.subplots(figsize=(12,8)) plot_start = 225 # plot the training data plt.plot(train_date[plot_start:],train_polltidx[plot_start:],'navy',marker='o',label='observed') # plot the test data plt.plot(test_date,test_polltidx,'navy',marker='o') plt.plot(test_date,forecast,'darkgreen',marker='o',label='pred') # ax.errorbar(np.arange(len(pred)),pred,std,fmt='r') plt.fill(np.concatenate([test_date,test_date[::-1]]), np.concatenate((conf[:,0],conf[:,1][::-1])), alpha=.5,fc='silver',ec='tomato',label='95% confidence interval') plt.legend(loc='upper left',prop={'size':16}) ax = plt.gca() ax.set_xlim([df_training['date'].values[plot_start],df_test['date'].values[-1]]) ax.xaxis.set_major_locator(mdates.MonthLocator(interval=6)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) plt.gcf().autofmt_xdate() plt.show() The output figure is Clearly, the prediction is very bad, because I haven't done any preprocessing to the training data. UPDATE 2 Since I'm not familiar with ARIMA, I implement one-step forecasting using GaussianProcessRegressor with the help of this wonderful post. 
import numpy as np import pandas as pd from matplotlib import pyplot as plt import matplotlib.dates as mdates from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.preprocessing import StandardScaler url = "https://gist.githubusercontent.com/jerry-shad/36912907ba8660e11cd27be0d3e30639/raw/424f0891dc46d96cd5f867f3d2697777ac984f68/pollution.csv" df = pd.read_csv(url,parse_dates=['date']) df.drop(columns=['Unnamed: 0'],axis=1,inplace=True) # sort the dataframe by date and reset the index df = df.sort_values(by='date').reset_index(drop=True) # after sorting the dataframe, split the dataframe split_date ='2017-12-01' df_training = df.loc[(df.date <= split_date).values] df_test = df.loc[(df.date > split_date).values] # extract the relevant info train_date,train_polltidx = df_training['date'].values,df_training['pollution_index'].values[:,None] test_date,test_polltidx = df_test['date'].values,df_test['pollution_index'].values[:,None] # preprocessing scalar = StandardScaler() scalar.fit(train_polltidx) train_polltidx = scalar.transform(train_polltidx) test_polltidx = scalar.transform(test_polltidx) def series_to_supervised(data,n_in,n_out): df = pd.DataFrame(data) cols = list() for i in range(n_in,0,-1): cols.append(df.shift(i)) for i in range(0, n_out): cols.append(df.shift(-i)) agg = pd.concat(cols,axis=1) agg.dropna(inplace=True) return agg.values months_look_back = 1 # train pollt_series = series_to_supervised(train_polltidx,months_look_back,1) x_train,y_train = pollt_series[:,:months_look_back],pollt_series[:,-1] # test pollt_series = series_to_supervised(test_polltidx,months_look_back,1) x_test,y_test = pollt_series[:,:months_look_back],pollt_series[:,-1] print("The first %i months in the test set won't be predicted." % months_look_back) def walk_forward_validation(x_train,y_train,x_test,y_test): predictions = [] history_x = x_train.tolist() history_y = y_train.tolist() for rep,target in zip(x_test,y_test): # train model gpr = GaussianProcessRegressor(alpha=1e-4,normalize_y=False).fit(history_x,history_y) pred,std = gpr.predict([rep],return_std=True) predictions.append([pred,std]) history_x.append(rep) history_y.append(target) return predictions predictions = walk_forward_validation(x_train,y_train,x_test,y_test) pred_test,pred_std = zip(*predictions) # put back pred_test = scalar.inverse_transform(pred_test) pred_std = scalar.inverse_transform(pred_std) train_polltidx = scalar.inverse_transform(train_polltidx) test_polltidx = scalar.inverse_transform(test_polltidx) # plot the result fig,ax = plt.subplots(figsize=(12,8)) plot_start = 100 # plot the training data plt.plot(train_date[plot_start:],train_polltidx[plot_start:],'navy',marker='o',label='observed') # plot the test data plt.plot(test_date[months_look_back:],test_polltidx[months_look_back:],'navy',marker='o') plt.plot(test_date[months_look_back:],pred_test,'darkgreen',marker='o',label='pred') sigma = np.sqrt(pred_std) ax.fill(np.concatenate([test_date[months_look_back:],test_date[months_look_back:][::-1]]), np.concatenate([pred_test-1.960*sigma,(pred_test+1.9600*sigma)[::-1]]), alpha=.5,fc='silver',ec='tomato',label='95% confidence interval') plt.legend(loc='upper left',prop={'size':16}) ax = plt.gca() ax.set_xlim([df_training['date'].values[plot_start],df_test['date'].values[-1]]) ax.xaxis.set_major_locator(mdates.MonthLocator(interval=6)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) plt.gcf().autofmt_xdate() plt.show() The idea of this script is to cast the time series forecasting task into a 
supervised regression task. The plot_start is a parameter that controls from which year we want to plot, clearly plot_start cannot be greater than the length of the training data. The output figure of the script is as you can see, the first month in the test dataset is not predicted, because we need to look back one month to make a prediction. In order to further make predictions about unseen data, based on this post on CV site, you can train a new model using the predicted value from the last step, therefore, here is how you can do it unseen_dates = pd.date_range(test_date[-1],periods=180,freq='D').values all_data = series_to_supervised(df['pollution_index'].values,months_look_back,months_to_predict) def predict_unseen(unseen_dates,all_data,days_look_back): predictions = [] history_x = all_data[:,:days_look_back].tolist() history_y = all_data[:,-1].tolist() inds = np.arange(unseen_dates.shape[0]) for ind in inds: # train model gpr = GaussianProcessRegressor(alpha=1e-2,normalize_y=False).fit(history_x,history_y) rep = np.array(history_y[-days_look_back:]).reshape(days_look_back,1) pred,std = gpr.predict(rep,return_std=True) predictions.append([pred,std]) history_x.append(history_y[-days_look_back:]) history_y.append(pred) return predictions predictions = predict_unseen(unseen_dates,all_data,days_look_back=1) pred_test,pred_std = zip(*predictions) fig,ax = plt.subplots(figsize=(12,8)) plot_start = 100 # plot the test data plt.plot(unseen_dates,pred_test,'navy',marker='o') sigma = np.sqrt(pred_std) ax.fill(np.concatenate([unseen_dates,unseen_dates[::-1]]), np.concatenate([pred_test-1.960*sigma,(pred_test+1.9600*sigma)[::-1]]), alpha=.5,fc='silver',ec='tomato',label='95% confidence interval') plt.legend(loc='upper left',prop={'size':16}) ax = plt.gca() ax.xaxis.set_major_locator(mdates.DayLocator(interval=7)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) plt.gcf().autofmt_xdate() plt.show() One very important thing to note: The timestep of the real data is a month, using such data to make predictions about days may not be correct.
6
3
64,648,186
2020-11-2
https://stackoverflow.com/questions/64648186/using-a-data-converter-to-display-3d-volume-as-images
I would like to write a data converter tool. I need analyze the bitstream in a file to display the 2D cross-sections of a 3D volume. The dataset I am trying to view can be found here: https://figshare.com/articles/SSOCT_test_dataset_for_OCTproZ/12356705. It's the file titled: burned_wood_with_tape_1664x512x256_12bit.raw (832 MB) Would extremely appreciate some direction. Willing to award a bounty if I could get some software to display the dataset as images using a data conversion. As I'm totally new to this concept, I don't have code to show for this problem. However, here's a little something I tried using inspiration from other questions on SO: import rawpy import imageio path = "Datasets/burned_wood_with_tape_1664x512x256_12bit.raw" for item in path: item_path = path + item raw = rawpy.imread(item_path) rgb = raw.postprocess() rawpy.imshow(rgb)
Down below I implemented next visualization. Example RAW file burned_wood_with_tape_1664x512x256_12bit.raw consists of 1664 samples per A-scan, 512 A-scans per B-scan, 16 B-scans per buffer, 16 buffers per volume, and 2 volumes in this file, each sample is encoded as 2-bytes unsigned integer in little endian order, only 12 higher bits are used, lower 4 bits contain zeros. Samples are centered approximately around 2^15, to be precise data has these stats min 0 max 47648 mean 32757 standard deviation 454.5. I draw gray images of size 1664 x 512, there are total 16 * 16 * 2 = 512 such images (frames) in a file. I draw animated frames on screen using matplotlib library, also rendering these animation into GIF file. One example of rendered GIF at reduced quality is located after code. To render/draw images of different resulting resolution you need to change code line with plt.rcParams['figure.figsize'], this fig size contains (widht_in_inches, height_in_inches), by default DPI (dots per inch) equals to 100, meaning that if you want to have resulting GIF of resolution 720x265 then you need to set this figure size to (7.2, 2.65). Also resulting GIF contains animation of a bit smaller resolution because axes and padding is included into resulting figure size. My next code needs pip modules to be installed one time by command python -m pip install numpy matplotlib. Try it online! # Needs: python -m pip install numpy matplotlib def oct_show(file, *, begin = 0, end = None): import os, numpy as np, matplotlib, matplotlib.pyplot as plt, matplotlib.animation plt.rcParams['figure.figsize'] = (7.2, 2.65) # (4.8, 1.75) (7.2, 2.65) (9.6, 3.5) sizeX, sizeY, cnt, bits = 1664, 512, 16 * 16 * 2, 12 stepX, stepY = 16, 8 fps = 5 try: fsize, opened_here = None, False if type(file) is str: fsize = os.path.getsize(file) file, opened_here = open(file, 'rb'), True by = (bits + 7) // 8 if end is None and fsize is not None: end = fsize // (sizeX * sizeY * by) imgs = [] file.seek(begin * sizeY * sizeX * by) a = file.read((end - begin) * sizeY * sizeX * by) a = np.frombuffer(a, dtype = np.uint16) a = a.reshape(end - begin, sizeY, sizeX) amin, amax, amean, stdd = np.amin(a), np.amax(a), np.mean(a), np.std(a) print('min', amin, 'max', amax, 'mean', round(amean, 1), 'std_dev', round(stdd, 3)) a = (a.astype(np.float32) - amean) / stdd a = np.maximum(0.1, np.minimum(a * 128 + 128.5, 255.1)).astype(np.uint8) a = a[:, :, :, None].repeat(3, axis = -1) fig, ax = plt.subplots() plt.subplots_adjust(left = 0.08, right = 0.99, bottom = 0.06, top = 0.97) for i in range(a.shape[0]): title = ax.text( 0.5, 1.02, f'Frame {i}', size = plt.rcParams['axes.titlesize'], ha = 'center', transform = ax.transAxes, ) imgs.append([ax.imshow(a[i], interpolation = 'antialiased'), title]) ani = matplotlib.animation.ArtistAnimation(plt.gcf(), imgs, interval = 1000 // fps) print('Saving animated frames to GIF...', flush = True) ani.save(file.name + '.gif', writer = 'imagemagick', fps = fps) print('Showing animated frames on screen...', flush = True) plt.show() finally: if opened_here: file.close() oct_show('burned_wood_with_tape_1664x512x256_12bit.raw') Example output GIF:
6
1
64,658,304
2020-11-3
https://stackoverflow.com/questions/64658304/determining-lunar-eclipse-in-skyfield
I am given a list of dates in UTC, all hours cast to 00:00. I'd like to determine if a (lunar) eclipse occurred on a given day (i.e. in the past 24 hours). Consider the Python snippet from skyfield.api import load eph = load('de421.bsp') def eclipticangle(t): moon, earth = eph['moon'], eph['earth'] e = earth.at(t) x, y, _ = e.observe(moon).apparent().ecliptic_latlon() return x.degrees I am assuming one is able to determine if an eclipse occurred within 24 hours of a time t by checking that the first angle is close enough to 180 (easy) and checking if the second angle is close enough to 0 (not so easy?). Now, as the answer in the comments suggests, it is not so trivial to solve the second problem simply by testing if the angle is close to 0. Therefore, my question is: can someone provide a function to determine if a lunar eclipse occurred on a given day t? Edit: this question was edited to reflect the feedback from Brandon Rhodes left in the comments below.
I just went through section 11.2.3 of the Explanatory Supplement to the Astronomical Almanac and tried turning it into Skyfield Python code. Here is what I came up with: import numpy as np from skyfield.api import load from skyfield.constants import ERAD from skyfield.functions import angle_between, length_of from skyfield.searchlib import find_maxima eph = load('de421.bsp') earth = eph['earth'] moon = eph['moon'] sun = eph['sun'] def f(t): e = earth.at(t).position.au s = sun.at(t).position.au m = moon.at(t).position.au return angle_between(s - e, m - e) f.step_days = 5.0 ts = load.timescale() start_time = ts.utc(2019, 1, 1) end_time = ts.utc(2020, 1, 1) t, y = find_maxima(start_time, end_time, f) e = earth.at(t).position.m m = moon.at(t).position.m s = sun.at(t).position.m solar_radius_m = 696340e3 moon_radius_m = 1.7371e6 pi_m = np.arcsin(ERAD / length_of(m - e)) pi_s = np.arcsin(ERAD / length_of(s - e)) s_s = np.arcsin(solar_radius_m / length_of(s - e)) pi_1 = 0.998340 * pi_m sigma = angle_between(s - e, e - m) s_m = np.arcsin(moon_radius_m / length_of(e - m)) penumbral = sigma < 1.02 * (pi_1 + pi_s + s_s) + s_m partial = sigma < 1.02 * (pi_1 + pi_s - s_s) + s_m total = sigma < 1.02 * (pi_1 + pi_s - s_s) - s_m mask = penumbral | partial | total t = t[mask] penumbral = penumbral[mask] partial = partial[mask] total = total[mask] print(t.utc_strftime()) print(0 + penumbral + partial + total) It produces a vector of times at which lunar eclipses occurred, and then a rating of how total the eclipse is: ['2019-01-21 05:12:51 UTC', '2019-07-16 21:31:27 UTC'] [3 2] Its eclipse times are within 3 seconds of the times given in the huge table of lunar ephemerides at NASA: https://eclipse.gsfc.nasa.gov/5MCLE/5MKLEcatalog.txt
7
9
64,622,708
2020-10-31
https://stackoverflow.com/questions/64622708/typing-how-to-bind-owner-class-to-generic-descriptor
Can I implement a generic descriptor in Python in a way it will support/respect/understand inheritance hierarchy of his owners? It should be more clear in the code: from typing import ( Generic, Optional, TYPE_CHECKING, Type, TypeVar, Union, overload, ) T = TypeVar("T", bound="A") # noqa class Descr(Generic[T]): @overload def __get__(self: "Descr[T]", instance: None, owner: Type[T]) -> "Descr[T]": ... @overload def __get__(self: "Descr[T]", instance: T, owner: Type[T]) -> T: ... def __get__(self: "Descr[T]", instance: Optional[T], owner: Type[T]) -> Union["Descr[T]", T]: if instance is None: return self return instance class A: attr: int = 123 descr = Descr[T]() # I want to bind T here, but don't know how class B(A): new_attr: int = 123 qwerty: str = "qwe" if __name__ == "__main__": a = A() if TYPE_CHECKING: reveal_type(a.descr) # mypy guess it is T? but I want A* print("a.attr =", a.descr.attr) # mypy error: T? has no attribute "attr" # no runtime error b = B() if TYPE_CHECKING: reveal_type(b.descr) # mypy said it's T? but I want B* print("b.new_attr =", b.descr.new_attr) # mypy error: T? has no attribute "new_attr" # no runtime error print("b.qwerty =", b.descr.qwerty) # mypy error: T? has no attribute "qwerty" # (no runtime error) gist - almost the same code snippet on gist
I am not sure if you need to have the descriptor class as generic; it will probably just suffice to have __get__ on an instance of Type[T] to return T: T = TypeVar("T") # noqa class Descr: @overload def __get__(self, instance: None, owner: Type[T]) -> "Descr": ... @overload def __get__(self, instance: T, owner: Type[T]) -> T: ... def __get__(self, instance: Optional[T], owner: Type[T]) -> Union["Descr", T]: if instance is None: return self return instance class A: attr: int = 123 descr = Descr() class B(A): new_attr: int = 123 qwerty: str = "qwe" And every example of yours works as you wanted, and you will get an error for print("b.spam =", b.descr.spam) which produces error: "B" has no attribute "spam"
8
8
64,664,813
2020-11-3
https://stackoverflow.com/questions/64664813/get-the-public-ipv4-address-of-a-newly-created-amazon-ec2-instance-with-boto3
I am creating an EC2 instance with boto3 and I want to print the IP address of that new instance. ec2 = boto3.resource('ec2') # create the instance new_instance = ec2.create_instances( ImageId='###', MinCount = 1, MaxCount = 1, InstanceType = 't2.nano', KeyName = "key", SecurityGroupIds = ["###"] ) ... wait until running ... ip = new_instance[0].ipv4 # something like this Is there a way to do something like this after it is running?
ec2.create_instances returns a list of ec2.Instance objects. ec2.Instance objects have an attribute named private_ip_address. You can use that to get the private IP address. A side note (based on the comments in your code example) you can also use the wait_until_running waiter to have your code halt until the instance is running. # create the instance new_instance = ec2.create_instances(...) # use a waiter on the instance to wait until running new_instance[0].wait_until_running() ip = new_instance[0].private_ip_address public_ip = new_instance[0].public_ip_address
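A follow-up note (not part of the original answer): the boto3 resource object caches its attributes from the moment create_instances returned, so public_ip_address can still be None even after the waiter finishes. A minimal sketch of refreshing it, where the AMI id is a hypothetical placeholder:
import boto3

ec2 = boto3.resource('ec2')
new_instance = ec2.create_instances(
    ImageId='ami-xxxxxxxx',   # placeholder, use your own AMI id
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.nano',
)
instance = new_instance[0]
instance.wait_until_running()
# The resource caches its attributes from creation time, so refresh them;
# the public IPv4 is only assigned once the instance is running.
instance.reload()
print(instance.public_ip_address)   # public IPv4 (may be None if the subnet assigns none)
print(instance.private_ip_address)  # private IPv4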
5
11
64,613,552
2020-10-30
https://stackoverflow.com/questions/64613552/gcloud-sdk-install-for-mac
I have an issue to install the gcloud sdk on my mac. I have the following error when I do the ./install.sh. Source: https://cloud.google.com/sdk/docs/quickstart Welcome to the Google Cloud SDK! Traceback (most recent call last): File "/Users/kevin/Downloads/google-cloud-sdk/bin/bootstrapping/install.py", line 12, in <module> import bootstrapping File "/Users/kevin/Downloads/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 32, in <module> import setup # pylint:disable=g-import-not-at-top File "/Users/kevin/Downloads/google-cloud-sdk/bin/bootstrapping/setup.py", line 57, in <module> from googlecloudsdk.core.util import platforms File "/Users/kevin/Downloads/google-cloud-sdk/lib/googlecloudsdk/__init__.py", line 23, in <module> from googlecloudsdk.core.util import importing File "/Users/kevin/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/importing.py", line 23, in <module> import imp File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/imp.py", line 23, in <module> from importlib import util File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/importlib/util.py", line 2, in <module> from . import abc File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/importlib/abc.py", line 17, in <module> from typing import Protocol, runtime_checkable File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/typing.py", line 26, in <module> import re as stdlib_re # Avoid confusion with the re we export. File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/re.py", line 124, in <module> import enum File "/Users/kevin/Downloads/google-cloud-sdk/lib/third_party/enum/__init__.py", line 26, in <module> spec = importlib.util.find_spec('enum') AttributeError: module 'importlib' has no attribute 'util' And when I do gcloud init Traceback (most recent call last): File "/Users/kevin/Downloads/google-cloud-sdk/lib/gcloud.py", line 104, in <module> main() File "/Users/kevin/Downloads/google-cloud-sdk/lib/gcloud.py", line 62, in main from googlecloudsdk.core.util import encoding File "/Users/kevin/Downloads/google-cloud-sdk/lib/googlecloudsdk/__init__.py", line 23, in <module> from googlecloudsdk.core.util import importing File "/Users/kevin/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/importing.py", line 23, in <module> import imp File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/imp.py", line 23, in <module> from importlib import util File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/importlib/util.py", line 2, in <module> from . import abc File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/importlib/abc.py", line 17, in <module> from typing import Protocol, runtime_checkable File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/typing.py", line 26, in <module> import re as stdlib_re # Avoid confusion with the re we export. File "/Users/kevin/.pyenv/versions/3.9.0/lib/python3.9/re.py", line 124, in <module> import enum File "/Users/kevin/Downloads/google-cloud-sdk/lib/third_party/enum/__init__.py", line 26, in <module> spec = importlib.util.find_spec('enum') AttributeError: module 'importlib' has no attribute 'util' I think this is a Python issue. but I'm running Python3: python -V shows Python 3.9.0 I have installed it with homebrew. What can be the problem?
This is a known issue across Mac, Windows and Linux: https://issuetracker.google.com/170125513. I'd suggest using one of the recommended Python versions mentioned here (3.5 to 3.8). Also, this does not only affect the Cloud SDK but other tools as well (for example, as mentioned here).
5
5
64,644,449
2020-11-2
https://stackoverflow.com/questions/64644449/recover-from-segfault-in-python
I have a few functions in my code that are randomly causing SegmentationFault error. I've identified them by enabling the faulthandler. I'm a bit stuck and have no idea how to reliably eliminate this problem. I'm thinking about some workaround. Since the functions are crashing randomly, I could potentially retry them after a failure. The problem is that there's no way to recover from SegmentationFault crash. The best idea I have for now is to rewrite these functions a bit and run them via subprocess. This solution will help me, that a crashed function won't crash the whole application, and can be retried. Some of the functions are quite small and often executed, so it will significantly slow down my app. Is there any method to execute function in a separate context, faster than a subprocess that won't crash whole program in case of segfault?
I had some unreliable C extensions throw segfaults every once in a while and, since there was no way I was going to be able to fix that, what I did was create a decorator that would run the wrapped function in a separate process. That way you can stop segfaults from killing the main process. Something like this: https://gist.github.com/joezuntz/e7e7764e5b591ed519cfd488e20311f1 Mine was a bit simpler, and it did the job for me. Additionally it lets you choose a timeout and a default return value in case there was a problem: #! /usr/bin/env python3 # std imports import logging import multiprocessing as mp def parametrized(dec): """This decorator can be used to create other decorators that accept arguments""" def layer(*args, **kwargs): def repl(f): return dec(f, *args, **kwargs) return repl return layer @parametrized def sigsev_guard(fcn, default_value=None, timeout=None): """Used as a decorator with arguments. The decorated function will be called with its input arguments in another process. If the execution lasts longer than *timeout* seconds, it will be considered failed. If the execution fails, *default_value* will be returned. """ def _fcn_wrapper(*args, **kwargs): q = mp.Queue() p = mp.Process(target=lambda q: q.put(fcn(*args, **kwargs)), args=(q,)) p.start() p.join(timeout=timeout) exit_code = p.exitcode if exit_code == 0: return q.get() logging.warning('Process did not exit correctly. Exit code: {}'.format(exit_code)) return default_value return _fcn_wrapper So you would use it like: @sigsev_guard(default_value=-1, timeout=60) def your_risky_function(a,b,c,d): ...
6
16
64,660,458
2020-11-3
https://stackoverflow.com/questions/64660458/how-to-properly-deprecate-a-custom-exception-in-python
I have custom inheriting exceptions in my Python project and I want to deprecate one of them. What is the proper way of doing it? Exceptions I have: class SDKException(Exception): pass class ChildException(SDKException): pass class ChildChildException(ChildException): # this one is to be deprecated pass I want to deprecate the ChildChildException, considering the exception is used, raised and chained with other exceptions in the project.
You could use a decorator which shows a warning DeprecationWarning category on each instantiation of exception class: import warnings warnings.filterwarnings("default", category=DeprecationWarning) def deprecated(cls): original_init = cls.__init__ def __init__(self, *args, **kwargs): warnings.warn(f"{cls.__name__} is deprecated", DeprecationWarning, stacklevel=2) original_init(self, *args, **kwargs) cls.__init__ = __init__ return cls class SDKException(Exception): pass class ChildException(SDKException): pass @deprecated class ChildChildException(ChildException): # this one is to be deprecated pass try: raise ChildChildException() except ChildChildException: pass app.py:7: DeprecationWarning: ChildChildException is deprecated Update: Also, you can create custom warning class and pass it to the warn function: class ExceptionDeprecationWarning(Warning): pass warnings.warn(f"{cls.__name__} is deprecated", ExceptionDeprecationWarning)
8
4
64,670,318
2020-11-3
https://stackoverflow.com/questions/64670318/how-to-create-a-new-conda-env-based-on-a-yml-file-but-with-different-python-vers
I have a conda environment with python=3.6.0 and all its dependencies. Now I would like to use this YAML file to create another environment with the same dependencies but with python=3.7.0, without the need to install the packages with the right version one by one.
# Activate old environment conda activate so # Save the list of package to a file: conda list > log # Extract the package name but not the version or hash cat log | awk '{print $1}' > log2 # make the list of packages tr '\n' ' ' < log2 > log3 # print the list of packages cat log3 Use notepad to replace python by python==3.7. Then create a new environment with the edited list of packages. conda create --name so2 conda activate so2 conda install _libgcc_mutex ... python==3.7 ... zstd Conda will try to install all packages with the same name but different version.
6
1
64,669,355
2020-11-3
https://stackoverflow.com/questions/64669355/how-to-copy-download-file-created-in-pyodide-in-browser
I managed to run Pyodide in the browser. I created a hello.txt file, but how can I access it? Pyodide: https://github.com/iodide-project/pyodide/blob/master/docs/using_pyodide_from_javascript.md pyodide.runPython('open("hello.txt", "w")') What I tried in Chrome devtools: pyodide.runPython('os.chdir("../")') pyodide.runPython('os.listdir()') pyodide.runPython('os.path.realpath("hello.txt")') Output for listdir: ["hello.txt", "lib", "proc", "dev", "home", "tmp"] Output for realpath: "/hello.txt" Also, pyodide.runPython('import platform') pyodide.runPython('platform.platform()') Output: "Emscripten-1.0-x86-JS-32bit" All outputs are in the Chrome devtools console. The file is created in the root folder, but how can it be accessed in a file explorer, or is there any way to copy the file to the Downloads folder? Thanks
Indeed pyodide operates in an in-memory (MEMFS) filesystem created by Emscripten. You can't directly write files to disk from pyodide since it's executed in the browser sandbox. You can however, pass your file to JavaScript, create a Blob out of it and then download it. For instance, using, let txt = pyodide.runPython(` with open('/test.txt', 'rt') as fh: txt = fh.read() txt `); const blob = new Blob([txt], {type : 'application/text'}); let url = window.URL.createObjectURL(blob); window.location.assign(url); It should have been also possible to do all of this from the Python side, using the type conversions included in pyodide, i.e. from js import Blob, document from js import window with open('/test.txt', 'rt') as fh: txt = fh.read() blob = Blob.new([txt], {type : 'application/text'}) url = window.URL.createObjectURL(blob) window.location.assign(url) however at present, this unfortunately doesn't work, as it depends on pyodide#788 being resolved first.
8
7
64,666,718
2020-11-3
https://stackoverflow.com/questions/64666718/dataframe-removes-duplicate-when-certain-values-are-reached
I have a data frame that contains duplicates and I would like to remove these duplicates. I also found this function from pandas: df.drop_duplicates(subset=['Action', 'Name']). Unfortunately, this function removes too much, because a row should only be removed if the time difference is less than or equal to 5 minutes. How can I do this, and how could I print how many rows are dropped? I would be very happy about any help. How do you recognize duplicates? The columns (Action, Name) are identical and the time difference is less than or equal to 5 minutes. Note: the time format is 01.10.2019, 9:56:52 (date and time are separated by a comma). import pandas as pd d = {'Time': ['01.10.2019, 9:56:52', '01.10.2019, 9:57:15', '02.10.2019 12:56:12', '02.10.2019 13:02:58', '02.10.2019 13:11:58'] ,'Action': ['Opened', 'Opened', 'Closed', 'Opened', 'Opened'] ,'Name': ['Max', 'Max', 'Susan', 'Michael', 'Michael']} df = pd.DataFrame(data=d) display(df.head()) (The current output, desired output, and further details were shown as images in the original post.)
IIUC you can create a group number by getting the time difference, and then groupby and first: print (df.assign(group=pd.to_datetime(df["Time"]).diff().dt.seconds.gt(300).cumsum()) .groupby(["group", "Action", "Name"]).first()) Time Action Name group 0 01.10.2019, 9:56:52 Opened Max 1 02.10.2019 12:56:12 Closed Susan 2 02.10.2019 13:02:58 Opened Michael 3 02.10.2019 13:11:58 Opened Michael
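The question also asks how to print the number of dropped rows; one way to get it, reusing the same grouping logic as above (a sketch, not part of the original answer):
deduped = (df.assign(group=pd.to_datetime(df["Time"]).diff().dt.seconds.gt(300).cumsum())
             .groupby(["group", "Action", "Name"], as_index=False)
             .first())
# Rows removed = original length minus deduplicated length.
print(f"{len(df) - len(deduped)} row(s) dropped")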
6
2
64,664,437
2020-11-3
https://stackoverflow.com/questions/64664437/how-do-you-open-multiple-pages-asynchronously-with-playwright-python
I want to open multiple urls at once using Playwright for Python. But I am struggling to figure out how. This is from the async documentation: async def main(): async with async_playwright() as p: for browser_type in [p.chromium, p.firefox, p.webkit]: browser = await browser_type.launch() page = await browser.newPage() await page.goto("https://scrapingant.com/") await page.screenshot(path=f"scrapingant-{browser_type.name}.png") await browser.close() asyncio.get_event_loop().run_until_complete(main()) This opens each browser_type sequentially. How would I go about it if I wanted to do it in parallel? And how would I go about it if I wanted to do something similar with a list of urls? I tried doing this: urls = [ "https://scrapethissite.com/pages/ajax-javascript/#2015", "https://scrapethissite.com/pages/ajax-javascript/#2014", ] async def main(url): async with async_playwright() as p: browser = await p.chromium.launch(headless=False) page = await browser.newPage() await page.goto(url) await browser.close() async def go_to_url(): tasks = [main(url) for url in urls] await asyncio.wait(tasks) go_to_url() But that gave me the following error: 92: RuntimeWarning: coroutine 'go_to_url' was never awaited go_to_url() RuntimeWarning: Enable tracemalloc to get the object allocation traceback
I believe you need to call your go_to_url function using the same recipe: asyncio.get_event_loop().run_until_complete(go_to_url())
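For the "multiple URLs in parallel" part of the question, one common pattern is to share a single browser and open one page per URL with asyncio.gather. A rough sketch; the import path (playwright.async_api vs. playwright) and method names (new_page vs. newPage) depend on the installed Playwright version:
import asyncio
from playwright.async_api import async_playwright

urls = [
    "https://scrapethissite.com/pages/ajax-javascript/#2015",
    "https://scrapethissite.com/pages/ajax-javascript/#2014",
]

async def visit(browser, url):
    # Each URL gets its own page; the pages run concurrently in one browser.
    page = await browser.new_page()
    await page.goto(url)
    await page.close()

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        await asyncio.gather(*(visit(browser, url) for url in urls))
        await browser.close()

asyncio.get_event_loop().run_until_complete(main())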
8
1
64,662,085
2020-11-3
https://stackoverflow.com/questions/64662085/fix-not-load-dynamic-library-for-tensorflow-gpu
I want to use my GPU for Tensorflow. I tried this Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation Unfortunately, I keep getting an error Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found. How can I fix this? Python-version: 3.8.3, CUDA 10.1 2020-11-03 12:30:28.832014: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2020-11-03 12:30:28.832688: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found 2020-11-03 12:30:28.833342: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found 2020-11-03 12:30:28.833994: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found 2020-11-03 12:30:28.834645: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found 2020-11-03 12:30:28.835297: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found 2020-11-03 12:30:28.835948: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found 2020-11-03 12:30:28.836594: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found 2020-11-03 12:30:28.836789: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1761] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2020-11-03 12:30:28.837575: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-11-03 12:30:28.838495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1265] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-11-03 12:30:28.838708: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1271] 2020-11-03 12:30:28.838831: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
Well, you can see that your Tensorflow installation is looking for Cuda libraries of versions 11 and 10, while you have 10.1. So in order to fix this, install the proper Cuda version. Why it is looking for different versions, I have no idea. But you can find valid combinations of Cuda, Tensorflow, and CUDNN here. EDIT: Removed 8 from the Cuda version; Tensorflow is actually looking for CUDNN version 8. So don't forget to install CUDNN as well (my guess is that you are installing the latest version of Tensorflow, which is why it is looking for the latest Cuda and CUDNN releases).
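To double-check which CUDA/cuDNN versions your installed TensorFlow wheel was built against, and whether a GPU is visible at all, something like this should help (tf.sysconfig.get_build_info is available on recent TF 2.x builds):
import tensorflow as tf

print(tf.__version__)
# On recent TF 2.x builds this dict includes cuda_version and cudnn_version.
print(tf.sysconfig.get_build_info())
# An empty list here means TensorFlow cannot see any GPU.
print(tf.config.list_physical_devices("GPU"))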
9
3
64,636,104
2020-11-1
https://stackoverflow.com/questions/64636104/websocket-handshaking-error-in-python-django
I am getting issue with websocket connection as it is getting closed due to handshake error. The error message is as below: WebSocket HANDSHAKING /ws/polData/ [127.0.0.1:59304] Exception inside application: object.__init__() takes exactly one argument (the instance to initialize) Traceback (most recent call last): File "/usr/lib/python3.7/site-packages/channels/routing.py", line 71, in __call__ return await application(scope, receive, send) File "/usr/lib/python3.7/site-packages/channels/sessions.py", line 47, in __call__ return await self.inner(dict(scope, cookies=cookies), receive, send) File "/usr/lib/python3.7/site-packages/channels/sessions.py", line 172, in __call__ return await self.inner(self.scope, receive, self.send) File "/usr/lib/python3.7/site-packages/channels/auth.py", line 181, in __call__ return await super().__call__(scope, receive, send) File "/usr/lib/python3.7/site-packages/channels/middleware.py", line 26, in __call__ return await self.inner(scope, receive, send) File "/usr/lib/python3.7/site-packages/channels/routing.py", line 160, in __call__ send, File "/usr/local/lib/python3.7/dist-packages/asgiref/compatibility.py", line 33, in new_application instance = application(scope) File "/usr/lib/python3.7/site-packages/channels/generic/websocket.py", line 159, in __init__ super().__init__(*args, **kwargs) TypeError: object.__init__() takes exactly one argument (the instance to initialize) WebSocket DISCONNECT /ws/polData/ [127.0.0.1:59304]\
Check the routing.py in your app: under websocket_urlpatterns, in the re_path you may have missed .as_asgi(), e.g.: websocket_urlpatterns = [ re_path(r'ws/chat/(?P<room_name>\w+)/$', consumers.ChatConsumer.as_asgi()), ]
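Adapted to the path from the question, a routing.py would look roughly like this (the consumer name PolDataConsumer is just an illustrative placeholder for whatever consumer class the project defines):
# myapp/routing.py (sketch)
from django.urls import re_path
from . import consumers

websocket_urlpatterns = [
    re_path(r'ws/polData/$', consumers.PolDataConsumer.as_asgi()),
]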
5
17
64,597,425
2020-10-29
https://stackoverflow.com/questions/64597425/how-to-set-a-repr-for-a-function-itself
__repr__ is used to return a string representation of an object, but in Python a function is also an object itself, and can have attributes. How do I set the __repr__ of a function? I see here that an attribute can be set for a function outside the function, but typically one sets a __repr__ within the object definition itself, so I'd like to set the repr within the function definition itself. My use case is that I am using tenacity to retry a networking function with exponential backoff, and I want to log the (informative) name of the function I have called last. retry_mysql_exception_types = (InterfaceError, OperationalError, TimeoutError, ConnectionResetError) def return_last_retry_outcome(retry_state): """return the result of the last call attempt""" return retry_state.outcome.result() def my_before_sleep(retry_state): print("Retrying {}: attempt {} ended with: {}\n".format(retry_state.fn, retry_state.attempt_number, retry_state.outcome)) @tenacity.retry(wait=tenacity.wait_random_exponential(multiplier=1, max=1200), stop=tenacity.stop_after_attempt(30), retry=tenacity.retry_if_exception_type(retry_mysql_exception_types), retry_error_callback=return_last_retry_outcome, before_sleep=my_before_sleep) def connect_with_retries(my_database_config): connection = mysql.connector.connect(**my_database_config) return connection Currently retry_state.fn displays something like <function <lambda> at 0x1100f6ee0> like @chepner says, but I'd like to add more information to it.
I think a custom decorator could help: import functools class reprable: """Decorates a function with a repr method. Example: >>> @reprable ... def foo(): ... '''Does something cool.''' ... return 4 ... >>> foo() 4 >>> foo.__name__ 'foo' >>> foo.__doc__ 'Does something cool.' >>> repr(foo) 'foo: Does something cool.' >>> type(foo) <class '__main__.reprable'> """ def __init__(self, wrapped): self._wrapped = wrapped functools.update_wrapper(self, wrapped) def __call__(self, *args, **kwargs): return self._wrapped(*args, **kwargs) def __repr__(self): return f'{self._wrapped.__name__}: {self._wrapped.__doc__}' Demo: http://tpcg.io/uTbSDepz.
5
4
64,650,877
2020-11-2
https://stackoverflow.com/questions/64650877/install-of-opencv-python-headless-takes-a-long-time
When I install opencv-python-headless in Google Colab, it takes 15 minutes to complete. My code: ! pip install --upgrade pip ! pip install opencv-python-headless Here's a notebook with this code which recreates the problem: https://colab.research.google.com/gist/mherzog01/38b6cf71942a443da072f09bc097387f/slow-install-of-opencv-python-headless.ipynb. The process eventually completes, but I'd like to reduce the install time. In `Building wheel for opencv-python (PEP 517) ... -` runs forever I saw a discussion about compiling OpenCV, which could very well be what is happening here. However, that same SO post states that if you upgrade pip, it will use pre-built wheels. Edit: Added @intsco's workaround to the Google Colab notebook.
Might be related to changes in OpenCV >=4.3 wheels https://github.com/skvark/opencv-python#backward-compatibility Starting from 4.3.0 and 3.4.10 builds the Linux build environment was updated from manylinux1 to manylinux2014. This dropped support for old Linux distributions. My workaround: pip install "opencv-python-headless<4.3"
8
12
64,590,535
2020-10-29
https://stackoverflow.com/questions/64590535/how-to-make-pipenv-install-package-use-ssl-certificate-of-firewall
Sitting behind a very strict firewall with SSL decryption, I usually install python packages (on macOS 10.15.) with these options pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org <packagename>. But pipenv install --trusted-host pypi.org --trusted-host files.pythonhosted.org <packagename> doesn't work: pipenv.vendor.requirementslib.exceptions.RequirementError: Failed parsing requirement from '--trusted-host' Since ignoring SSL didn't work, I tried to place the certificate of the firewall into a folder and set REQUESTS_CA_BUNDLE=/path/to/company/certificates.pem but without success (maybe I did it wrong). User @Shanti made a promising comment in this question, but I don't know how he accomplished feeding the certificate to pipenv. So on the bottom line I am looking for a way to make pipenv use my firewall's certificate. EDIT: here's the output when running pipenv install: Creating a virtualenv for this project… Pipfile: /Users/admin/Code/test/Pipfile Using /Users/admin/.pyenv/versions/3.8.6/bin/python3.8 (3.8.6) to create virtualenv… ⠧ Creating virtual environment...created virtual environment CPython3.8.6.final.0-64 in 404ms creator CPython3Posix(dest=/Users/admin/.local/share/virtualenvs/test-NSydZlln, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/admin/Library/Application Support/virtualenv) added seed packages: pip==20.2.4, setuptools==50.3.2, wheel==0.35.1 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator ✔ Successfully created virtual environment! Virtualenv location: /Users/admin/.local/share/virtualenvs/test-NSydZlln Pipfile.lock not found, creating… Locking [dev-packages] dependencies… Locking [packages] dependencies… Building requirements... Resolving dependencies... ✘ Locking Failed! 
Traceback (most recent call last): File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/contrib/pyopenssl.py", line 488, in wrap_socket cnx.do_handshake() File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/OpenSSL/SSL.py", line 1934, in do_handshake self._raise_ssl_error(self._ssl, result) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/OpenSSL/SSL.py", line 1671, in _raise_ssl_error _raise_current_error() File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue raise exception_type(errors) OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/connectionpool.py", line 670, in urlopen httplib_response = self._make_request( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/connectionpool.py", line 976, in _validate_conn conn.connect() File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/connection.py", line 361, in connect self.sock = ssl_wrap_socket( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/util/ssl_.py", line 377, in ssl_wrap_socket return context.wrap_socket(sock, server_hostname=server_hostname) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/contrib/pyopenssl.py", line 494, in wrap_socket raise ssl.SSLError("bad handshake: %r" % e) ssl.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/connectionpool.py", line 724, in urlopen retries = retries.increment( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/urllib3/util/retry.py", line 439, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /pypi/wheel/json (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])"))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/resolver.py", line 807, in <module> main() File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/resolver.py", line 802, in main _main(parsed.pre, parsed.clear, parsed.verbose, parsed.system, parsed.write, File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/resolver.py", line 785, in _main resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages) File 
"/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/resolver.py", line 746, in resolve_packages results, resolver = resolve( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/resolver.py", line 728, in resolve return resolve_deps( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/utils.py", line 1378, in resolve_deps results, hashes, markers_lookup, resolver, skipped = actually_resolve_deps( File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/utils.py", line 1096, in actually_resolve_deps results = resolver.clean_results() File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/utils.py", line 1002, in clean_results collected_hashes = self.collect_hashes(ireq) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/utils.py", line 885, in collect_hashes r = session.get(pkg_url, timeout=10) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/requests/sessions.py", line 543, in get return self.request('GET', url, **kwargs) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/Users/admin/.pyenv/versions/3.8.6/lib/python3.8/site-packages/pipenv/vendor/requests/adapters.py", line 514, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /pypi/wheel/json (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
As already stated in the comments, setting the environment variable would solve the problem. It should look like this: export REQUESTS_CA_BUNDLE=/path/to/certificates.pem Complete Chain In certificates.pem you must have a complete chain that includes the root certificate. Therefore certificates.pem should look like this: -----BEGIN CERTIFICATE----- MII... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MII... -----END CERTIFICATE----- ... You can split the file into single files with suffix .pem including the begin and end marker like so: -----BEGIN CERTIFICATE----- MII... -----END CERTIFICATE----- In Finder you can now select the individual .pem files, enter <alt> + <tab> so that you can see the contents of each certificate. The chain must be complete, e.g. you should find the corresponding signing certificate for each certificate when you look in the 'Issuer' section under 'Common Name'. If one or more are missing, use the Keychain Access application (/Applications/Utilities/) to search for the certificate with the missing 'Common Name', export the cert in .PEM format and simply append the resulting file to the end of your certificates.pem file. Test Locally tested like this: setting a HTTPS proxy (in this case Charles) save the Charles certificate in a .pem file try to call pipenv install requests (or any other package), it fails with a SSLCertVerificationError set REQUESTS_CA_BUNDLE environment variable call pipenv install requests again -> works Screenshot
10
18
64,646,867
2020-11-2
https://stackoverflow.com/questions/64646867/downloading-huggingface-pre-trained-models
Once I have downloaded a pre-trained model in a Colab notebook, it disappears after I reset the notebook variables. Is there a way I can keep the downloaded model so I can use it on a second occasion? tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
Mount your google drive: from google.colab import drive drive.mount('/content/drive') Do your stuff and save your models: from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') tokenizer.save_pretrained('/content/drive/My Drive/tokenizer/') Reload it in a new session: tokenizer2 = BertTokenizer.from_pretrained('/content/drive/My Drive/tokenizer/')
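The same pattern works for the model weights, not only the tokenizer; a small sketch assuming the same Drive mount as above:
from transformers import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
model.save_pretrained('/content/drive/My Drive/bert-base-uncased/')

# In a later session, load it back from Drive instead of re-downloading:
model2 = BertModel.from_pretrained('/content/drive/My Drive/bert-base-uncased/')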
5
12
64,643,427
2020-11-2
https://stackoverflow.com/questions/64643427/python-regular-expressions-in-xpaths-using-selenium
I try to get the id of certain HTML tags using Python and Selenium. There is html code: <tr id="10"> <td colspan="5"> <div class="card-view"> <span class="value">PROVIDER_628_54678931</span> </div> </td> </tr> <tr id="11"> <td colspan="5"> <div class="card-view"> <span class="value">PROVIDER_629_54678932</span> </div> </td> </tr> <tr id="12"> <td colspan="5"> <div class="card-view"> <span class="value">PROVIDER_730_54678933</span> </div> </td> </tr> <tr id="13"> <td colspan="5"> <div class="card-view"> <span class="value">PROVIDER_6542_54678934</span> </div> </td> </tr> For extract id of only one parent tag i do : elem = browser.find_element_by_xpath("//span[contains(@class, 'value') and text()='PROVIDER_628_54678931']") parent = elem.find_element_by_xpath('../../..') print(parent.get_attribute("id")) How to use regex in XPath to get parent id-s of "span" element where the text contains "PROVIDER_6XX", but not "PROVIDER_7" and PROVIDER_6542?
I found a solution here: link import re def findTrunksByRegExp(): pattern = re.compile(r"PROVIDER_6\d{2}") elements = browser.find_elements_by_xpath("//span[contains(@class, 'value')]") for element in elements: match = pattern.match(element.text) if match: parent = element.find_element_by_xpath('../../..') print(parent.get_attribute("id"))
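One caveat about the pattern above (an addition, not part of the original answer): re.match only anchors at the start of the string, so PROVIDER_6\d{2} would also accept PROVIDER_6542_... because the first two digits after the 6 satisfy \d{2}. Requiring the trailing underscore keeps exactly three digits, for example:
import re

pattern = re.compile(r"PROVIDER_6\d{2}_")
print(bool(pattern.match("PROVIDER_628_54678931")))   # True
print(bool(pattern.match("PROVIDER_6542_54678934")))  # False
print(bool(pattern.match("PROVIDER_730_54678933")))   # False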
6
4
64,603,280
2020-10-30
https://stackoverflow.com/questions/64603280/finding-a-pattern-in-a-grid-python
I have randomly generated grid containing 0 and 1: 1 1 0 0 0 1 0 1 1 1 1 0 1 1 1 1 1 0 0 0 1 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 0 0 1 1 0 0 1 1 1 1 1 0 0 1 0 0 1 0 1 1 How can I iterate through the grid to find the largest cluster of 1s, that is equal or larger than 4 items (across row and column)? I assume I need to keep a count of each found cluster while iterating and ones its more than 4 items, record and count in a list and then find the largest number. The problem is that I cannot figure out how to do so across both rows and columns and record the count. I have iterated through the grid but not sure how to move further than two rows. For example in the above example, the largest cluster is 8. There are some other clusters in the grid, but they have 4 elements: A A 0 0 0 1 0 1 A A 1 0 1 1 1 1 1 0 0 0 1 0 1 1 0 0 1 0 1 0 1 1 1 1 B B 0 0 1 1 0 0 B B 1 1 1 0 0 1 0 0 1 0 1 1 The code I tried: rectcount = [] for row in range(len(grid)): for num in range(len(grid[row])): # count = 0 try: # if grid[row][num] == 1: # if grid[row][num] == grid[row][num + 1] == grid[row + 1][num] == grid[row + 1][num + 1]: # count += 1 if grid[row][num] == grid[row][num + 1]: if grid[row + 1][num] == grid[row][num + 1]: count += 1 # if grid[row][num] == grid[row][num + 1] and grid[row][num] == grid[row + 1][num]: # count += 1 else: count = 0 if grid[row][num] == grid[row + 1][num]: count += 1 except: pass
I've implemented three algorithms. First algorithm is Simple, using easiest approach of nested loops, it has O(N^5) time complexity (where N is one side of input grid, 10 for our case), for our inputs of size 10x10 time of O(10^5) is quite alright. Algo id in code is algo = 0. If you just want to see this algorithm jump to line ------ Simple Algorithm inside code. Second algorithm is Advanced, using Dynamic Programming approach, its complexity is O(N^3) which is much faster than first algorithm. Algo id in code is algo = 1. Jump to line ------- Advanced Algorithm inside code. Third algorithm Simple-ListComp I implemented just for fun, it is almost same like Simple, same O(N^5) complexity, but using Python's list comprehensions instead of regular loops, that's why it is shorter, also a bit slower because doesn't use some optimizations. Algo id in code is algo = 2. Jump to line ------- Simple-ListComp Algorithm inside code to see algo. The rest of code, besides algorithms, implements checking correctness of results (double-checking between algorithms), printing results, producing text inputs. Code is split into solving-task function solve() and testing function test(). solve() function has many arguments to allow configuring behavior of function. All main code lines are documented by comments, read them to learn how to use code. Basically if s variable contains multi-line text with grid elements, same like in your question, you just run solve(s, text = True) and it will solve task and print results. Also you may choose algorithm out of two versions (0 (Simple) and 1 (Advanced) and 2 (Simple-ListComp)) by giving next arguments to solve function algo = 0, check = False (here 0 for algo 0). Look at test() function body to see simplest example of usage. Algorithms output to console by default all clusters, from largest to smallest, largest is signified by . symbol, the rest by B, C, D, ..., Z symbols. You may set argument show_non_max = False in solve function if you want only first (largest) cluster to be shown. I'll explain Simple algorithm: Basically what algorithm does - it searches through all possible angled 1s rectangles and stores info about maximal of them into ma 2D array. Top-left point of such rectangle is (i, j), top-right - (i, k), bottom-left - (l, j + angle_offset), bottom-right - (l, k + angle_offset), all 4 corners, that's why we have so many loops. In outer two i (row) , j (column) loops we iterate over whole grid, this (i, j) position will be top-left point of 1s rectangle, we need to iterate whole grid because all possible 1s rectangles may have top-left at any (row, col) point of whole grid. At start of j loop we check that grid at (i, j) position should always contain 1 because inside loops we search for all rectangle with 1s only. k loop iterates through all possible top-right positions (i, k) of 1s rectangle. We should break out of loop if (i, k) equals to 0 because there is no point to extend k further to right because such rectangle will always contain 0. In previous loops we fixed top-left and top-right corners of rectangle. Now we need to search for two bottom corners. For that we need to extend rectangle downwards at different angles till we reach first 0. off loop tries extending rectangle downwards at all possible angles (0 (straight vertical), +1 (45 degrees shifted to the right from top to bottom), -1 (-45 degrees)), off basically is such number that grid[y][x] is "above" (corresponds to by Y) grid[y + 1][x + off]. 
l tries to extend rectangle downwards (in Y direction) at different angles off. It is extended till first 0 because it can't be extended further then (because each such rectangle will already contain 0). Inside l loop there is if grid[l][max(0, j + off * (l - i)) : min(k + 1 + off * (l - i), c)] != ones[:k - j + 1]: condition, basically this if is meant to check that last row of rectangle contains all 1 if not this if breaks out of loop. This condition compares two list slices for non-equality. Last row of rectangle spans from point (l, j + angle_offset) (expression max(0, j + off * (l - i)), max-limited to be 0 <= X) to point (l, k + angle_offset) (expression min(k + 1 + off * (l - i), c), min-limited to be X < c). Inside l loop there are other lines, ry, rx = l, k + off * (l - i) computes bottom-right point of rectangle (ry, rx) which is (l, k + angle_offset), this (ry, rx) position is used to store found maximum inside ma array, this array stores all maximal found rectangles, ma[ry][rx] contains info about rectangle that has bottom-right at point (ry, rx). rv = (l + 1 - i, k + 1 - j, off) line computes new possible candidate for ma[ry][rx] array entry, possible because ma[ry][rx] is updated only if new candidate has larger area of 1s. Here rv[0] value inside rv tuple contains height of such rectangle, rv[1] contains width of such rectangle (width equals to the length of bottom row of rectangle), rv[2] contains angle of such rectangle. Condition if rv[0] * rv[1] > ma[ry][rx][0] * ma[ry][rx][1]: and its body just checks if rv area is larger than current maximum inside array ma[ry][rx] and if it is larger then this array entry is updated (ma[ry][rx] = rv). I'll remind that ma[ry][rx] contains info (width, height, angle) about current found maximal-area rectangle that has bottom-right point at (ry, rx) and that has these width, height and angle. Done! After algorithm run array ma contains information about all maximal-area angled rectangles (clusters) of 1s so that all clusters can be restored and printed later to console. Largest of all such 1s-clusters is equal to some rv0 = ma[ry0][rx0], just iterate once through all elements of ma and find such point (ry0, rx0) so that ma[ry0][rx0][0] * ma[ry0][rx0][1] (area) is maximal. Then largest cluster will have bottom-right point (ry0, rx0), bottom-left point (ry0, rx0 - rv0[1] + 1), top-right point (ry0 - rv0[0] + 1, rx0 - rv0[2] * (rv0[0] - 1)), top-left point (ry0 - rv0[0] + 1, rx0 - rv0[1] + 1 - rv0[2] * (rv0[0] - 1)) (here rv0[2] * (rv0[0] - 1) is just angle offset, i.e. how much shifted is first row along X compared to last row of rectangle). Try it online! # ----------------- Main function solving task ----------------- def solve( grid, *, algo = 1, # Choose algorithm, 0 - Simple, 1 - Advanced, 2 - Simple-ListComp check = True, # If True run all algorithms and check that they produce same results, otherwise run just chosen algorithm without checking text = False, # If true then grid is a multi-line text (string) having grid elements separated by spaces print_ = True, # Print results to console show_non_max = True, # When printing if to show all clusters, not just largest, as B, C, D, E... (chars from "cchars") cchars = ['.'] + [chr(ii) for ii in range(ord('B'), ord('Z') + 1)], # Clusters-chars, these chars are used to show clusters from largest to smallest one = None, # Value of "one" inside grid array, e.g. if you have grid with chars then one may be equal to "1" string. Defaults to 1 (for non-text) or "1" (for text). 
offs = [0, +1, -1], # All offsets (angles) that need to be checked, "off" is such that grid[i + 1][j + off] corresponds to next row of grid[i][j] debug = False, # If True, extra debug info is printed ): # Preparing assert algo in [0, 1, 2], algo if text: grid = [l.strip().split() for l in grid.splitlines() if l.strip()] if one is None: one = 1 if not text else '1' r, c = len(grid), len(grid[0]) sgrid = '\n'.join([''.join([str(grid[ii][jj]) for jj in range(c)]) for ii in range(r)]) mas, ones = [], [one] * max(c, r) # ----------------- Simple Algorithm, O(N^5) Complexity ----------------- if algo == 0 or check: ma = [[(0, 0, 0) for jj in range(c)] for ii in range(r)] # Array containing maximal answers, Lower-Right corners for i in range(r): for j in range(c): if grid[i][j] != one: continue for k in range(j + 1, c): # Ensure at least 2 ones along X if grid[i][k] != one: break for off in offs: for l in range(i + 1, r): # Ensure at least 2 ones along Y if grid[l][max(0, j + off * (l - i)) : min(k + 1 + off * (l - i), c)] != ones[:k - j + 1]: l -= 1 break ry, rx = l, k + off * (l - i) rv = (l + 1 - i, k + 1 - j, off) if rv[0] * rv[1] > ma[ry][rx][0] * ma[ry][rx][1]: ma[ry][rx] = rv mas.append(ma) ma = None # ----------------- Advanced Algorithm using Dynamic Programming, O(N^3) Complexity ----------------- if algo == 1 or check: ma = [[(0, 0, 0) for jj in range(c)] for ii in range(r)] # Array containing maximal answers, Lower-Right corners for off in offs: d = [[(0, 0, 0) for jj in range(c)] for ii in range(c)] for i in range(r): f, d_ = 0, [[(0, 0, 0) for jj in range(c)] for ii in range(c)] for j in range(c): if grid[i][j] != one: f = j + 1 continue if f >= j: # Check that we have at least 2 ones along X continue df = [(0, 0, 0) for ii in range(c)] for k in range(j, -1, -1): t0 = d[j - off][max(0, k - off)] if 0 <= j - off < c and k - off < c else (0, 0, 0) if k >= f: t1 = (t0[0] + 1, t0[1], off) if t0 != (0, 0, 0) else (0, 0, 0) t2 = (1, j - k + 1, off) t0 = t1 if t1[0] * t1[1] >= t2[0] * t2[1] else t2 # Ensure that we have at least 2 ones along Y t3 = t1 if t1[0] > 1 else (0, 0, 0) if k < j and t3[0] * t3[1] < df[k + 1][0] * df[k + 1][1]: t3 = df[k + 1] df[k] = t3 else: t0 = d_[j][k + 1] if k < j and t0[0] * t0[1] < d_[j][k + 1][0] * d_[j][k + 1][1]: t0 = d_[j][k + 1] d_[j][k] = t0 if ma[i][j][0] * ma[i][j][1] < df[f][0] * df[f][1]: ma[i][j] = df[f] d = d_ mas.append(ma) ma = None # ----------------- Simple-ListComp Algorithm using List Comprehension, O(N^5) Complexity ----------------- if algo == 2 or check: ma = [ [ max([(0, 0, 0)] + [ (h, w, off) for h in range(2, i + 2) for w in range(2, j + 2) for off in offs if all( cr[ max(0, j + 1 - w - off * (h - 1 - icr)) : max(0, j + 1 - off * (h - 1 - icr)) ] == ones[:w] for icr, cr in enumerate(grid[max(0, i + 1 - h) : i + 1]) ) ], key = lambda e: e[0] * e[1]) for j in range(c) ] for i in range(r) ] mas.append(ma) ma = None # ----------------- Checking Correctness and Printing Results ----------------- if check: # Check that we have same answers for all algorithms masx = [[[cma[ii][jj][0] * cma[ii][jj][1] for jj in range(c)] for ii in range(r)] for cma in mas] assert all([masx[0] == e for e in masx[1:]]), 'Maximums of algorithms differ!\n\n' + sgrid + '\n\n' + ( '\n\n'.join(['\n'.join([' '.join([str(e1).rjust(2) for e1 in e0]) for e0 in cma]) for cma in masx]) ) ma = mas[0 if not check else algo] if print_: cchars = ['.'] + [chr(ii) for ii in range(ord('B'), ord('Z') + 1)] # These chars are used to show clusters from largest to smallest res = 
[[grid[ii][jj] for jj in range(c)] for ii in range(r)] mac = [[ma[ii][jj] for jj in range(c)] for ii in range(r)] processed = set() sid = 0 for it in range(r * c): sma = sorted( [(mac[ii][jj] or (0, 0, 0)) + (ii, jj) for ii in range(r) for jj in range(c) if (ii, jj) not in processed], key = lambda e: e[0] * e[1], reverse = True ) if len(sma) == 0 or sma[0][0] * sma[0][1] <= 0: break maxv = sma[0] if it == 0: maxvf = maxv processed.add((maxv[3], maxv[4])) show = True for trial in [True, False]: for i in range(maxv[3] - maxv[0] + 1, maxv[3] + 1): for j in range(maxv[4] - maxv[1] + 1 - (maxv[3] - i) * maxv[2], maxv[4] + 1 - (maxv[3] - i) * maxv[2]): if trial: if mac[i][j] is None: show = False break elif show: res[i][j] = cchars[sid] mac[i][j] = None if show: sid += 1 if not show_non_max and it == 0: break res = '\n'.join([''.join([str(res[ii][jj]) for jj in range(c)]) for ii in range(r)]) print( 'Max:\nArea: ', maxvf[0] * maxvf[1], '\nSize Row,Col: ', (maxvf[0], maxvf[1]), '\nLowerRight Row,Col: ', (maxvf[3], maxvf[4]), '\nAngle: ', ("-1", " 0", "+1")[maxvf[2] + 1], '\n', sep = '' ) print(res) if debug: # Print all computed maximums, for debug purposes for cma in [ma, mac]: print('\n' + '\n'.join([' '.join([f'({e0[0]}, {e0[1]}, {("-1", " 0", "+1")[e0[2] + 1]})' for e0_ in e for e0 in (e0_ or ('-', '-', 0),)]) for e in cma])) print(end = '-' * 28 + '\n') return ma # ----------------- Testing ----------------- def test(): # Iterating over text inputs or other ways of producing inputs for s in [ """ 1 1 0 0 0 1 0 1 1 1 1 0 1 1 1 1 1 0 0 0 1 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 0 0 1 1 0 0 1 1 1 1 1 0 0 1 0 0 1 0 1 1 """, """ 1 0 1 1 0 1 0 0 0 1 1 0 1 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 1 0 1 1 1 1 0 1 1 1 1 0 0 0 1 0 0 0 1 1 1 0 1 0 1 """, """ 0 1 1 0 1 0 1 1 0 0 1 1 0 0 0 1 0 0 0 1 1 0 1 0 1 1 0 0 1 1 1 0 0 1 1 0 0 1 1 0 0 0 1 0 1 0 1 1 1 0 0 1 0 0 0 0 0 1 1 0 1 1 0 0 """ ]: solve(s, text = True) if __name__ == '__main__': test() Output: Max: Area: 8 Size Row,Col: (4, 2) LowerRight Row,Col: (4, 7) Angle: 0 CC000101 CC1011.. 100010.. 001010.. 1BBB00.. 00BBBDD0 010010DD ---------------------------- Max: Area: 6 Size Row,Col: (3, 2) LowerRight Row,Col: (2, 1) Angle: -1 10..0100 0..01001 ..000001 0BBB0101 0BBB1011 CC000100 0CC10101 ---------------------------- Max: Area: 12 Size Row,Col: (6, 2) LowerRight Row,Col: (5, 7) Angle: +1 0..01011 00..0001 000..010 BB00..10 0BB00..0 001010.. 10010000 01101100 ----------------------------
9
2
64,635,630
2020-11-1
https://stackoverflow.com/questions/64635630/pytorch-runtimeerror-expected-scalar-type-float-but-found-byte
I am working on the classic example with digits. I want to create a my first neural network that predict the labels of digit images {0,1,2,3,4,5,6,7,8,9}. So the first column of train.txt has the labels and all the other columns are the features of each label. I have defined a class to import my data: class DigitDataset(Dataset): """Digit dataset.""" def __init__(self, file_path, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.data = pd.read_csv(file_path, header = None, sep =" ") self.transform = transform def __len__(self): return len(self.data) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() labels = self.data.iloc[idx,0] images = self.data.iloc[idx,1:-1].values.astype(np.uint8).reshape((1,16,16)) if self.transform is not None: sample = self.transform(sample) return images, labels And then I am running these commands to split my dataset into batches, to define a model and a loss: train_dataset = DigitDataset("train.txt") train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=4) # Model creation with neural net Sequential model model=nn.Sequential(nn.Linear(256, 128), # 1 layer:- 256 input 128 o/p nn.ReLU(), # Defining Regular linear unit as activation nn.Linear(128,64), # 2 Layer:- 128 Input and 64 O/p nn.Tanh(), # Defining Regular linear unit as activation nn.Linear(64,10), # 3 Layer:- 64 Input and 10 O/P as (0-9) nn.LogSoftmax(dim=1) # Defining the log softmax to find the probablities for the last output unit ) # defining the negative log-likelihood loss for calculating loss criterion = nn.NLLLoss() images, labels = next(iter(train_loader)) images = images.view(images.shape[0], -1) logps = model(images) #log probabilities loss = criterion(logps, labels) #calculate the NLL-loss And I take the error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-2-7f4160c1f086> in <module> 47 images = images.view(images.shape[0], -1) 48 ---> 49 logps = model(images) #log probabilities 50 loss = criterion(logps, labels) #calculate the NLL-loss ~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input) 115 def forward(self, input): 116 for module in self: --> 117 input = module(input) 118 return input 119 ~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input) 91 92 def forward(self, input: Tensor) -> Tensor: ---> 93 return F.linear(input, self.weight, self.bias) 94 95 def extra_repr(self) -> str: ~/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1688 if input.dim() == 2 and bias is not None: 1689 # fused op is marginally faster -> 1690 ret = torch.addmm(bias, input, weight.t()) 1691 else: 1692 output = input.matmul(weight.t()) 
RuntimeError: expected scalar type Float but found Byte Do you know what is wrong? Thank you for your patience and help!
This line is the cause of your error: images = self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16)) images are uint8 (byte) while the neural network needs inputs as floating point in order to calculate gradients (you can't calculate gradients for backprop using integers as those are not continuous and non-differentiable). You can use torchvision.transforms.functional.to_tensor to convert the image into float and into [0, 1] like this: import torchvision images = torchvision.transforms.functional.to_tensor( self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16)) ) or simply divide by 255 to get values into [0, 1].
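A minimal sketch of how the fix could be applied directly inside the Dataset's __getitem__ from the question (same column layout assumed, pixel values assumed 0-255 as implied by the uint8 cast; dividing by 255 keeps values in [0, 1] and float32 is what the Linear layers expect):

import numpy as np
import torch
from torch.utils.data import Dataset

class DigitDataset(Dataset):
    # ... __init__ and __len__ as in the question ...

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        labels = self.data.iloc[idx, 0]
        # cast to float32 and scale to [0, 1] so the network can compute gradients
        images = self.data.iloc[idx, 1:-1].values.astype(np.float32).reshape((1, 16, 16)) / 255
        return images, labels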
15
20
64,633,018
2020-11-1
https://stackoverflow.com/questions/64633018/removing-white-or-light-colors-from-matplotlib-color-palette
The matplotlib color palettes often feature white or very light colors which do not show up well on scatter or line plots. I am making a plot in which I use norm = mpl.colors.Normalize(vmin=0, vmax=1) cmap = mpl.cm.ScalarMappable(norm=norm, cmap=mpl.cm.Blues) plt.plot(x, y, c=cmap.to_rgba(z)) cbar = plt.colorbar(cmap) to plot numerous lines. I want to modify the color palette to remove the first, say, 30% of the colors. How can this be achieved? My solution, which is fairly poor, is to modify two of the lines as follows: norm = mpl.colors.Normalize(vmin=-0.4, vmax=1) cbar = plt.colorbar(cmap, boundaries=[0,0.2,0.4,0.6,0.8,1])
You can easily create a custom colormap using LinearSegmentedColormap and choosing the colors that you want (in this case, a subset of the original colormap): import numpy as np import matplotlib import matplotlib.pyplot as plt min_val, max_val = 0.3,1.0 n = 10 orig_cmap = plt.cm.Blues colors = orig_cmap(np.linspace(min_val, max_val, n)) cmap = matplotlib.colors.LinearSegmentedColormap.from_list("mycmap", colors) demo: gradient = np.linspace(0, 1, 256) gradient = np.vstack((gradient, gradient)) fig, (ax1, ax2) = plt.subplots(2,1, figsize=(5,2)) ax1.imshow(gradient, cmap=orig_cmap, aspect='auto') ax1.set_title('original') ax2.imshow(gradient, cmap=cmap, aspect='auto') ax2.set_title('custom') plt.tight_layout()
6
10
64,631,371
2020-11-1
https://stackoverflow.com/questions/64631371/unpivot-multiindex-dataframe-with-pd-melt
I would like to unpivot a DataFrame with MultiIndex columns, but I struggle to get the exact output I want. I played with all the parameters of the pd.melt() function but couldn't make it... Here is the kind of input I have : import pandas as pd indexes = [['TC1', 'TC2'], ['x', 'z', 'Temp']] data = pd.DataFrame(columns=pd.MultiIndex.from_product(indexes)) data.loc[0,('TC1', 'x')] = 10 data.loc[0,('TC1', 'z')] = 100 data.loc[0,('TC1', 'Temp')] = 250 data.loc[0,('TC2', 'x')] = 20 data.loc[0,('TC2', 'z')] = 200 data.loc[0,('TC2', 'Temp')] = 255 And here is the kind of output I would like, with "Time" column being the index of data Time TC x z Temp 0 0 TC1 10 100 250 1 0 TC2 20 200 255 My real data have far more columns of kind TCx. Any clue?
Give this a try data_out = data.stack(level=0).rename_axis(['Time','TC']).reset_index() Out[87]: Time TC Temp x z 0 0 TC1 250 10 100 1 0 TC2 255 20 200
5
7
64,617,670
2020-10-31
https://stackoverflow.com/questions/64617670/best-way-for-installing-non-conda-dependencies-in-snakemake-conda-environments
I would like to be able to install R packages from GitHub in a R conda environment created by Snakemake, as well as python libraries via pip in a python environment. I'll use these environments in a whole set of rules thereafter. My initial thought was to create a rule running a script to install the specified packages. For instance, my initial run was: snakemake -j1 --use-conda -R create_r_environment. My Snakefile: rule create_r_environment: conda: "envs/r.yaml" script: "scripts/r-dependencies.R" rule create_python_environment: conda: "envs/python.yaml" script: "scripts/python-dependencies.py" My envs/r.yaml file: channels: - conda-forge dependencies: - r=4.0 My r-dependencies.R file: remotes::install_github("ramiromagno/gwasrapidd", upgrade = "never") My envs/pyton.yaml file: channels: - conda-forge dependencies: - python=3.8.2 My python-dependencies.py file: !pip install gseapy The log output: Building DAG of jobs... Creating conda environment envs/r.yaml... Downloading and installing remote packages. Environment for envs/r.yaml created (location: .snakemake/conda/388,repos = "http://cran.us.r-project.org")f7df8) Using shell: /usr/bin/bash Provided cores: 1 (use --cores to define parallelism) Rules claiming more threads will be scaled down. Job counts: count jobs 1 create_r_environment 1 [Fri Oct 30 22:38:56 2020] rule create_r_environment: jobid: 0 Activating conda environment: /home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/.snakemake/conda/388f7df8 [Fri Oct 30 22:38:57 2020] Error in rule create_r_environment: jobid: 0 conda-env: /home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/.snakemake/conda/388f7df8 RuleException: CalledProcessError in line 5 of /home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/Snakefile: Command 'source /home/cmcouto-silva/miniconda3/bin/activate '/home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/.snakemake/conda/388f7df8'; set -euo pipefail; Rscript --vanilla /home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/.snakemake/scripts/tmpa6jdxovx.r-dependencies.R' returned non-zero exit status 1. File "/home/cmcouto-silva/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 2168, in run_wrapper File "/home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/Snakefile", line 5, in __rule_create_r_environment File "/home/cmcouto-silva/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 529, in _callback File "/home/cmcouto-silva/miniconda3/envs/snakemake/lib/python3.8/concurrent/futures/thread.py", line 57, in run File "/home/cmcouto-silva/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 515, in cached_or_run File "/home/cmcouto-silva/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 2199, in run_wrapper Shutting down, this might take some time. Exiting because a job execution failed. Look above for error message Complete log: /home/cmcouto-silva/[email protected]/lab_files/phd_data/SO/.snakemake/log/2020-10-30T223743.852983.snakemake.log My folder structure: . ├── envs │ ├── python.yaml │ └── r.yaml ├── scripts │ ├── python-dependencies.py │ └── r-dependencies.R └── Snakefile It successfully creates the environment but fails when running the script, and I don't know why. I've changed the envs/r.yaml file content to install.packages("data.table") to see if there was an issue with the github package, but it's not. It fails anyway. 
The same occurs when I run the rule create_python_environment (output not showed here). Any help? Edit after the accepted answer As @dariober pointed out, I forgot to install the remotes package before calling it in the script. I did it in the .yaml file, and it worked well. Also, I installed the pip libraries using shell instead of a python file. I would like to highlight some points though, just in case anyone's facing the same or similar problem: First, I could successfully install further packages I needed to, but some of them require specific libraries (e.g. libcurl), which is installed in my system, but it's not recognized inside the Snakemake conda environment, forcing me to either install it in the Snakemake conda environment (which is good for reproducibility, although I don't know how to do that yet) or specify the path library. Maybe a better option would be using a container just like @merv commented out. Second, I figured out that Snakemake already provides a way to install pip libraries using the .yaml file. From the documentation, it looks like this: name: stats2 channels: - javascript dependencies: - python=3.6 # or 2.7 - bokeh=0.9.2 - numpy=1.9.* - nodejs=0.10.* - flask - pip: - Flask-Testing
I think there are quite a few wrong things: remotes::install_github("ramiromagno/gwasrapidd", upgrade = "never"): In your r.yaml you should include the remotes package. !pip install gseapy is not valid python code. If anything, it is code to be executed by shell but I'm not sure that leading ! is correct. Also, gseapy is available from bioconda I don;t see why you should install it with pip. Before OP edited the question My envs/r.yaml file: remotes::install_github("ramiromagno/gwasrapidd", upgrade = "never") It's odd that you get the conda environment correctly created since that r.yaml is not a valid environment file. This is what I tried to recreate your issue: r.yaml cat r.yaml remotes::install_github("ramiromagno/gwasrapidd", upgrade = "never") Snakefile: cat Snakefile rule create_r_environment: conda: "r.yaml" script: "r-dependencies.R" Execute: snakemake -j1 --use-conda -R create_r_environment Building DAG of jobs... Creating conda environment r.yaml... Downloading and installing remote packages. CreateCondaEnvironmentException: Could not create conda environment from /home/dario/Downloads/r.yaml: # >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<< Traceback (most recent call last): File "/home/dario/miniconda3/lib/python3.7/site-packages/conda/exceptions.py", line 1079, in __call__ return func(*args, **kwargs) File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/cli/main.py", line 80, in do_call exit_code = getattr(module, func_name)(args, parser) File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/cli/main_create.py", line 80, in execute directory=os.getcwd()) File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/specs/__init__.py", line 40, in detect if spec.can_handle(): File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/specs/yaml_file.py", line 18, in can_handle self._environment = env.from_file(self.filename) File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/env.py", line 151, in from_file return from_yaml(yamlstr, filename=filename) File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/env.py", line 137, in from_yaml data = validate_keys(data, kwargs) File "/home/dario/miniconda3/lib/python3.7/site-packages/conda_env/env.py", line 35, in validate_keys new_data = data.copy() if data else {} AttributeError: 'str' object has no attribute 'copy' `$ /home/dario/miniconda3/bin/conda-env create --file /home/dario/Downloads/.snakemake/conda/095b0ca2.yaml --prefix /home/dario/Downloads/.snakemake/conda/095b0ca2` environment variables: CIO_TEST=<not set> CMAKE_PREFIX_PATH=/home/dario/miniconda3/envs/tritume:/home/dario/miniconda3/envs/tritum e/x86_64-conda-linux-gnu/sysroot/usr CONDA_AUTO_UPDATE_CONDA=false CONDA_BUILD_SYSROOT=/home/dario/miniconda3/envs/tritume/x86_64-conda-linux-gnu/sysroot CONDA_DEFAULT_ENV=tritume CONDA_EXE=/home/dario/miniconda3/bin/conda CONDA_PREFIX=/home/dario/miniconda3/envs/tritume CONDA_PROMPT_MODIFIER=(tritume) CONDA_PYTHON_EXE=/home/dario/miniconda3/bin/python CONDA_ROOT=/home/dario/miniconda3 CONDA_SHLVL=1 DEFAULTS_PATH=/usr/share/gconf/ubuntu.default.path MANDATORY_PATH=/usr/share/gconf/ubuntu.mandatory.path PATH=/home/dario/miniconda3/envs/tritume/bin:/home/dario/miniconda3/condabi n:/opt/gradle/gradle-5.2/bin:/home/dario/.local/share/umake/bin:/home/ dario/.local/bin:/home/dario/bin:/opt/gradle/gradle-5.2/bin:/usr/local /sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/loc 
al/games:/snap/bin:/usr/lib/jvm/java-10-oracle/bin:/usr/lib/jvm/java-1 0-oracle/db/bin REQUESTS_CA_BUNDLE=<not set> SSL_CERT_FILE=<not set> WINDOWPATH=2 active environment : tritume active env location : /home/dario/miniconda3/envs/tritume shell level : 1 user config file : /home/dario/.condarc populated config files : /home/dario/.condarc conda version : 4.8.3 conda-build version : not installed python version : 3.7.6.final.0 virtual packages : __glibc=2.27 base environment : /home/dario/miniconda3 (writable) channel URLs : https://conda.anaconda.org/conda-forge/linux-64 https://conda.anaconda.org/conda-forge/noarch https://conda.anaconda.org/bioconda/linux-64 https://conda.anaconda.org/bioconda/noarch https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /home/dario/miniconda3/pkgs /home/dario/.conda/pkgs envs directories : /home/dario/miniconda3/envs /home/dario/.conda/envs platform : linux-64 user-agent : conda/4.8.3 requests/2.22.0 CPython/3.7.6 Linux/4.15.0-91-generic ubuntu/18.04.4 glibc/2.27 UID:GID : 1001:1001 netrc file : None offline mode : False An unexpected error has occurred. Conda has prepared the above report. If submitted, this report will be used by core maintainers to improve future releases of conda. Would you like conda to send this report to the core maintainers? [y/N]: Timeout reached. No report sent. File "/home/dario/miniconda3/envs/tritume/lib/python3.6/site-packages/snakemake/deployment/conda.py", line 320, in create Anyway, your error says: ... r-dependencies.R' returned non-zero exit status 1 What do you have in r-dependencies.R?
6
2
64,629,702
2020-11-1
https://stackoverflow.com/questions/64629702/pytorch-transform-totensor-changes-image
I want to convert images to tensor using torchvision.transforms.ToTensor(). After processing, I printed the image but the image was not right. Here is my code: trans = transforms.Compose([ transforms.ToTensor()]) demo = Image.open(img) demo_img = trans(demo) demo_array = demo_img.numpy()*255 print(Image.fromarray(demo_array.astype(np.uint8))) The original image is: After processing, it looks like:
It seems that the problem is with the channel axis. If you look at torchvision.transforms docs, especially on ToTensor() Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] So once you perform the transformation and return to numpy.array your shape is: (C, H, W) and you should change the positions, you can do the following: demo_array = np.moveaxis(demo_img.numpy()*255, 0, -1) This will transform the array to shape (H, W, C) and then when you return to PIL and show it will be the same image. So in total: import numpy as np from PIL import Image from torchvision import transforms trans = transforms.Compose([transforms.ToTensor()]) demo = Image.open(img) demo_img = trans(demo) demo_array = np.moveaxis(demo_img.numpy()*255, 0, -1) print(Image.fromarray(demo_array.astype(np.uint8)))
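If you prefer to stay in PyTorch, the same axis reordering can be done with Tensor.permute instead of np.moveaxis; a small sketch reusing the demo_img tensor from the answer:

import numpy as np
from PIL import Image

# (C, H, W) -> (H, W, C), then back to uint8 so PIL can display it
demo_array = (demo_img.permute(1, 2, 0).numpy() * 255).astype(np.uint8)
print(Image.fromarray(demo_array))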
10
20
64,626,073
2020-10-31
https://stackoverflow.com/questions/64626073/solve-equation-with-sum-and-index-using-sympy
After having tried many things, I thought it would be good to ask on SO. My problem is fairly simple: how can I solve the following equation using Sympy? Equation I want to solve this for lambda_0 and q is an array of size J containing elements between 0 and 1 that sum up to 1 (discrete probability distribution). I tried the following: from sympy.solvers import solve from sympy import symbols, summation p = [0.2, 0.3, 0.3, 0.1, 0.1] l = symbols('l') j = symbols('j') eq= summation(j*q[j]/(l-j), (j, 0, 4)) s= solve(eq, l) But this gives me an error for q[j] as j is a Symbol object here and not an integer. If I don't make j a symbol, I cannot evaluate the eq expression. Does anyone know how to do this? Edit: p = 1-q in the above, hence q[j] should have been replaced by (1-p[j]).
List p needs to be converted into symbolic array before it can be indexed with symbolic value j. from sympy.solvers import solve from sympy import symbols, summation, Array p = Array([0.2, 0.3, 0.3, 0.1, 0.1]) l, j = symbols('l j') eq = summation(j * (1 - p[j]) / (l - j), (j, 0, 4)) s = solve(eq - 1, l) # [1.13175762143963 + 9.29204634892077e-30*I, 2.23358705810004 - 1.36185313905566e-29*I, 3.4387382449005 + 3.71056356734273e-30*I, 11.5959170755598 + 6.15921474293073e-31*I] (assuming your p stands for 1 - q)
6
2
64,627,112
2020-10-31
https://stackoverflow.com/questions/64627112/adding-multiple-columns-in-pyspark-dataframe-using-a-loop
I need to add a number of columns (4000) into the data frame in pyspark. I am using the withColumn function, but getting assertion error. df3 = df2.withColumn("['ftr' + str(i) for i in range(0, 4000)]", [expr('ftr[' + str(x) + ']') for x in range(0, 4000)]) Not sure what is wrong.
Try to do something like this (lit needs to be imported from pyspark.sql.functions): from pyspark.sql.functions import lit df2 = df3 for i in range(0, 4000): df2 = df2.withColumn(f"ftr{i}", lit(f"ftr{i}"))
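Since the question extracts elements of an array column named ftr, a single select with a list comprehension is usually a better fit than 4000 chained withColumn calls (one projection instead of 4000 plan nodes). A rough sketch, assuming df2 has an array column named ftr:

from pyspark.sql import functions as F

df3 = df2.select(
    "*",
    *[F.col("ftr").getItem(i).alias(f"ftr{i}") for i in range(4000)]
)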
6
2
64,621,585
2020-10-31
https://stackoverflow.com/questions/64621585/pytorch-optimizer-adamw-and-adam-with-weight-decay
Is there any difference between torch.optim.Adam(weight_decay=0.01) and torch.optim.AdamW(weight_decay=0.01)? Link to the docs: torch.optim.
Yes, Adam and AdamW weight decay are different. Loshchilov and Hutter pointed out in their paper (Decoupled Weight Decay Regularization) that the way weight decay is implemented in Adam in every library seems to be wrong, and proposed a simple way (which they call AdamW) to fix it. In Adam, the weight decay is usually implemented by adding wd*w (wd is weight decay here) to the gradients (Ist case), rather than actually subtracting from weights (IInd case). # Ist: Adam weight decay implementation (L2 regularization) final_loss = loss + wd * all_weights.pow(2).sum() / 2 # IInd: equivalent to this in SGD w = w - lr * w.grad - lr * wd * w These methods are the same for vanilla SGD, but as soon as we add momentum, or use a more sophisticated optimizer like Adam, L2 regularization (first equation) and weight decay (second equation) become different. AdamW follows the second equation for weight decay. In Adam: weight_decay (float, optional) – weight decay (L2 penalty) (default: 0) In AdamW: weight_decay (float, optional) – weight decay coefficient (default: 1e-2) Read more on the fastai blog.
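As a small usage sketch (the model and learning rate here are arbitrary, not from the question), the two optimizers are constructed the same way in PyTorch; only the decay semantics described above differ:

import torch

model = torch.nn.Linear(10, 2)  # any model
adam = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.01)    # L2 penalty folded into the gradients
adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)  # decoupled weight decay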
65
66
64,618,631
2020-10-31
https://stackoverflow.com/questions/64618631/how-to-filter-and-paginate-in-listview-django
I have a problem when I want to paginate the filter that I create with django_filter, in my template it shows me the query set and filter but paginate does not work, I would like to know why this happens and if you could help me. I'll insert snippets of my code so you can see. This is my views.py PD: i have all the necesary imports. @method_decorator(staff_member_required, name='dispatch') class EmployeeListView(ListView): model = Employee paginate_by = 4 def dispatch(self, request, *args, **kwargs): if not request.user.has_perm('employee.view_employee'): return redirect(reverse_lazy('home')) return super(EmployeeListView, self).dispatch(request, *args, **kwargs) def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['filter'] = EmployeeFilter(self.request.GET, queryset = self.get_queryset()) return context filters.py import django_filters from .models import Employee, Accident class EmployeeFilter(django_filters.FilterSet): class Meta: model = Employee fields = { 'rutEmployee' : ['startswith'] }
You should override get_queryset.This means you have to put your filter in get_queryset like this: @method_decorator(staff_member_required, name='dispatch') class EmployeeListView(ListView): model = Employee paginate_by = 4 def dispatch(self, request, *args, **kwargs): if not request.user.has_perm('employee.view_employee'): return redirect(reverse_lazy('home')) return super(EmployeeListView, self).dispatch(request, *args, **kwargs) def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['filter'] = EmployeeFilter(self.request.GET, queryset = self.get_queryset()) return context def get_queryset(self): queryset = super().get_queryset() return EmployeeFilter(self.request.GET, queryset=queryset).qs and use object_list instead of filter in employee_list.html like this: {% for employee in object_list|dictsort:"id" reversed %}
7
8
64,617,770
2020-10-31
https://stackoverflow.com/questions/64617770/why-pytest-deselect-all-the-tests-when-run-with-python-m-test
I can run my tests by executing (on Windows) pytest .\tests\test_x.py Result: ================================= test session starts ================================== platform win32 -- Python 3.8.3, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 rootdir: C:\Users\...... collected 9 items tests\test_x.py ......... [100%] ================================== 9 passed in 3.67s =================================== However, the following two commands pytest -m tests pytest -m test got the following result. Why all the tests are deselected while they can be run as script? PS C:\Users\......> pytest -m test ================================= test session starts ================================== platform win32 -- Python 3.8.3, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 rootdir: C:\Users\...... collected 9 items / 9 deselected ================================ 9 deselected in 3.78s =================================
You're using -m which filters which tests to run according to how you mark your tests. You're telling pytest to only run tests tagged @pytest.mark.test. Presumably, you don't have any tests marked as such. https://docs.pytest.org/en/stable/example/markers.html#mark-run
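For completeness, a sketch of what selecting tests by mark looks like ("slow" is just an example mark name, not something from the question):

# test_x.py
import pytest

@pytest.mark.slow
def test_heavy_computation():
    assert 1 + 1 == 2

# run only the marked tests:   pytest -m slow
# run everything except them:  pytest -m "not slow"
# register the mark under [pytest] markers in pytest.ini to avoid the unknown-mark warning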
6
5
64,616,582
2020-10-30
https://stackoverflow.com/questions/64616582/unordered-list-as-dict-key
I want to be able to do something like: foo = Counter(['bar', 'shoo', 'bar']) tmp = {} tmp[foo] = 5 In other words, is there a hashable equivalent for Counter? Note that I can't use frozenset since I have repeated elements that I want to keep in the key. Edit: In my actual application, the objects in foo may not be comparable with each other so the list cannot be sorted.
What you seem to require is a way to use unordered pairs of key-amount as keys. A frozenset is probably the way to go, although you will have to create it out of the items of the Counter and not its keys. foo = Counter(['bar', 'shoo', 'bar']) tmp = {} tmp[frozenset(foo.items())] = 5 # tmp: {frozenset({('bar', 2), ('shoo', 1)}): 5} If this is satisfying, you could implement this transformation by defining you own mapping type like so: from collections import Counter class CounterDict: def __init__(self): self.data = {} def __setitem__(self, key, value): if isinstance(key, Counter): self.data[frozenset(key.items())] = value else: raise TypeError def __getitem__(self, key): if isinstance(key, Counter): return self.data[frozenset(key.items())] else: raise TypeError foo = Counter(['bar', 'shoo', 'bar']) tmp = CounterDict() tmp[foo] = 42 tmp[foo] # 42 You could make this implementation richer by making CounterDict a subclass of collections.UserDict.
8
4
64,616,462
2020-10-30
https://stackoverflow.com/questions/64616462/python-how-to-decode-jwt-header
I have a token that includes the following header eyJraWQiOiI4NkQ4OEtmIiwiYWxnIjoiUlMyNTYifQ. How can I obtain the following JSON decoding of it as jwt.io provides? { "kid": "86D88Kf", "alg": "RS256" } jwt.decode() doesn't give this header. Thanks!
This is an unencrpyted header. Its a URL-safe base64 encoding of a JSON encoding of the data you want. You need to add padding characters to the end of the encoded string to make sure its on a 4 character boundary, then decode. >>> import json >>> import base64 >>> token = "eyJraWQiOiI4NkQ4OEtmIiwiYWxnIjoiUlMyNTYifQ" >>> padded = token + "="*divmod(len(token),4)[1] >>> padded 'eyJraWQiOiI4NkQ4OEtmIiwiYWxnIjoiUlMyNTYifQ==' >>> jsondata = base64.urlsafe_b64decode(padded) >>> jsondata b'{"kid":"86D88Kf","alg":"RS256"}' >>> data = json.loads(jsondata) >>> data {'kid': '86D88Kf', 'alg': 'RS256'}
5
14
64,616,163
2020-10-30
https://stackoverflow.com/questions/64616163/pandas-read-csv-ignore-ending-semicolon-of-last-column
My data file looks like this: data.txt user,activity,timestamp,x-axis,y-axis,z-axis 0,33,Jogging,49105962326000,-0.6946376999999999,12.680544,0.50395286; 1,33,Jogging,49106062271000,5.012288,11.264028,0.95342433; 2,33,Jogging,49106112167000,4.903325,10.882658000000001,-0.08172209; 3,33,Jogging,49106222305000,-0.61291564,18.496431,3.0237172; As can be seen, the last column ends with a semicolon, so when I read into pandas, the column is inferred as type object (ending with the semicolon. df = pd.read_csv('data.txt') df user activity timestamp x-axis y-axis z-axis 0 33 Jogging 49105962326000 -0.694638 12.680544 0.50395286; 1 33 Jogging 49106062271000 5.012288 11.264028 0.95342433; 2 33 Jogging 49106112167000 4.903325 10.882658 -0.08172209; 3 33 Jogging 49106222305000 -0.612916 18.496431 3.0237172; How do I make pandas ignore that semicolon?
The problem with your txt is that it has mixed content: as far as I can see, the header line doesn't have the semicolon as a termination character. If you change the first line by adding the semicolon, it's quite simple: pd.read_csv("data.txt", lineterminator=";")
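An alternative sketch that leaves the file untouched: read it normally and strip the trailing semicolon from the last column afterwards (column name taken from the question):

import pandas as pd

df = pd.read_csv("data.txt")
df["z-axis"] = df["z-axis"].str.rstrip(";").astype(float)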
14
14
64,615,988
2020-10-30
https://stackoverflow.com/questions/64615988/what-does-vertical-bar-pipe-in-function-arguments-type-annotations-mean
I came across function with signature like this: def get_quantile(numbers: List[float], q: float | int ) -> float | int | None : What does it mean? It's a syntax error on my python 3.8. Do I need to import something from future to make it work?
According to PEP 604, | will be used to designate union types from Python 3.10. So float | int will mean Union[float, int], i.e. a float or an int.
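On Python 3.8 you can either postpone the evaluation of annotations (PEP 563) so the | spelling is accepted, or fall back to typing.Union. A sketch of both, assuming the annotations are not inspected at runtime in the first variant:

from __future__ import annotations  # must appear at the top of the file; annotations stay unevaluated
from typing import List, Optional, Union

def get_quantile(numbers: List[float], q: float | int) -> float | int | None:
    ...

# spelling that also works when annotations are evaluated at runtime on 3.8
def get_quantile_compat(numbers: List[float], q: Union[float, int]) -> Optional[Union[float, int]]:
    ...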
19
31
64,567,464
2020-10-28
https://stackoverflow.com/questions/64567464/mask-0-values-during-normalization
I am doing normalization for datasets but the data contains a lot of 0 because of padding. I can mask them during model training but apparently, these zero will be affected when I applied normalization. from sklearn.preprocessing import StandardScaler,MinMaxScaler I am currently using the Sklearn library to do the normalization For example, given a 3D array with dimension (4,3,5) as (batch, step, features) The number of zero-padding varied from batch to batch as these are the features I extracted from audio files, that have varying lengths, using a fixed window size. [[[0 0 0 0 0], [0 0 0 0 0], [0 0 0 0 0]] [[1 2 3 4 5], [4 5 6 7 8], [9 10 11 12 13]], [[14 15 16 17 18], [0 0 0 0 0], [24 25 26 27 28]], [[0 0 0 0 0], [423 2 230 60 70], [0 0 0 0 0]] ] I wish to perform normalization by column so scaler = MinMaxScaler() X_train = scaler.fit_transform(X_train.reshape(-1,X_train.shape[-1])).reshape(X_train.shape) X_test = scaler.transform(X_test.reshape(-1,X_test.shape[-1])).reshape(X_test.shape) However, in this case, zeros are treated as effective values. For example, the minimum value of the first column should be 1 instead of 0. Further, the 0's values are also changed after applying the scalers but I wish to keep them as 0's so I can mask them during training. model.add(tf.keras.layers.Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2]))) Is there any way to mask them during normalization so only the 2nd step and 3rd step in this example are used in normalization? In addition, The actual dimension of the array for my project is bigger as (2000,50,68) among the 68 features, the difference in values of the 68 features can be very large. I tried to normalize them by dividing each element by the biggest element in their row to avoid the impact from 0's but this did not work out well.
The task of just MinMaxScaler() masking can be solved by next code. Each other operation needs separate way of handling, if you'll mention all operations that need masking then we can solve them one-by-one basis and I'll extend my answer. E.g. keras layers can be masked by tf.keras.layers.Masking() layer as you mentioned. Next code min/max-scales only non zero features, the rest remain zeros. import numpy as np from sklearn.preprocessing import MinMaxScaler X = np.array([ [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], [[1, 2, 3, 4, 5], [4, 5, 6, 7, 8], [9, 10, 11, 12, 13]], [[14, 15, 16, 17, 18], [0, 0, 0, 0, 0], [24, 25, 26, 27, 28]], [[0, 0, 0, 0, 0], [423, 2, 230, 60, 70], [0, 0, 0, 0, 0]] ], dtype = np.float64) nz = np.any(X, -1) X[nz] = MinMaxScaler().fit_transform(X[nz]) print(X) Output: [[[0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. ] [0. 0. 0. 0. 0. ]] [[0. 0. 0. 0. 0. ] [0.007109 0.13043478 0.01321586 0.05357143 0.04615385] [0.01895735 0.34782609 0.03524229 0.14285714 0.12307692]] [[0.03080569 0.56521739 0.05726872 0.23214286 0.2 ] [0. 0. 0. 0. 0. ] [0.05450237 1. 0.10132159 0.41071429 0.35384615]] [[0. 0. 0. 0. 0. ] [1. 0. 1. 1. 1. ] [0. 0. 0. 0. 0. ]]] If you need to train MinMaxScaler() on one dataset and apply it later on others then you can do next: scaler = MinMaxScaler().fit(X[np.any(X, -1)]) X[np.any(X, -1)] = scaler.transform(X[np.any(X, -1)]) Y[np.any(Y, -1)] = scaler.transform(Y[np.any(Y, -1)])
8
3
64,594,493
2020-10-29
https://stackoverflow.com/questions/64594493/filter-out-nan-values-from-a-pytorch-n-dimensional-tensor
This question is very similar to filtering np.nan values from pytorch in a -Dimensional tensor. The difference is that I want to apply the same concept to tensors of 2 or higher dimensions. I have a tensor that looks like this: import torch tensor = torch.Tensor( [[1, 1, 1, 1, 1], [float('nan'), float('nan'), float('nan'), float('nan'), float('nan')], [2, 2, 2, 2, 2]] ) >>> tensor.shape >>> [3, 5] I would like to find the most pythonic / PyTorch way of to filter out (remove) the rows of the tensor which are nan. By filtering this tensor along the first (0th axis) I want to obtain a filtered_tensor which looks like this: >>> print(filtered_tensor) >>> torch.Tensor( [[1, 1, 1, 1, 1], [2, 2, 2, 2, 2]] ) >>> filtered_tensor.shape >>> [2, 5]
Use PyTorch's isnan() together with any() to slice tensor's rows using the obtained boolean mask as follows: filtered_tensor = tensor[~torch.any(tensor.isnan(),dim=1)] Note that this will drop any row that has a nan value in it. If you want to drop only rows where all values are nan replace torch.any with torch.all. For an N-dimensional tensor you could just flatten all the dims apart from the first dim and apply the same procedure as above: #Flatten: shape = tensor.shape tensor_reshaped = tensor.reshape(shape[0],-1) #Drop all rows containing any nan: tensor_reshaped = tensor_reshaped[~torch.any(tensor_reshaped.isnan(),dim=1)] #Reshape back: tensor = tensor_reshaped.reshape(tensor_reshaped.shape[0],*shape[1:])
7
11
64,594,693
2020-10-29
https://stackoverflow.com/questions/64594693/how-to-delete-multiple-defined-and-undefined-variables-safely-in-python
I am currently using jupyter notebook, and I want to delete variables that are used only within the cell, so that I won't accidentally misuse these variables in other cells. For example, I want to remove the variable myvar and loop variable i in the following codes: start = 1 stop = 2 for i in range(start, stop): pass myvar = "A" # other statements # ... del i, myvar # Runs ok, and variables i and myvar are deleted and I won't accidentally use i or myvar in another jupyter notebook cell This works fine, but there are cases where some variables are actually not defined. In the following example, this throws an error since i is not defined because the loop is never run. The defined variables myvar is not deleted. start = 1 stop = 1 # Now we have stop equal to 1 for i in range(start, stop): pass myvar = "A" # other statements # ... del i, myvar # NameError, since i is not defined, and myvar is not deleted I have used contextlib or try-except statements to avoid the error, but still myvar is not deleted. import contextlib start = 1 stop = 1 for i in range(start, stop): pass myvar = "A" # other statements # ... with contextlib.suppress(NameError): del i, myvar # Error suppressed, but myvar is not deleted The only work-around is to wrap del statement for every variable I want to delete. import contextlib start = 1 stop = 1 for i in range(start, stop): pass myvar = "A" # other statements # ... with contextlib.suppress(NameError): del i with contextlib.suppress(NameError): del myvar # ... # It works but codes get messy if I need to delete many variables However, this makes codes messy, especially if you have a lot of variables to delete (n lines if there are n variables). Is there any way to delete all defined and undefined variables safely with less messy codes, in one/two lines?
A one-liner to do del x safely: globals().pop('x', None); There are many ways to do that, but they need more than 1 line of code, which I guess is not what you're looking for. Note the ; at the end, which prevents the variable from being printed by Jupyter.
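The same idea extends to several possibly-undefined names; a tiny sketch:

for name in ("i", "myvar"):
    globals().pop(name, None)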
9
3
64,591,665
2020-10-29
https://stackoverflow.com/questions/64591665/unpacking-x-y-x-y-x-y-what-is-the-difference
What is the difference in Python between unpacking a function call with [], with () or with nothing? def f(): return 0, 1 a, b = f() # 1 [a, b] = f() # 2 (a, b) = f() # 3
There is no difference. Regardless of what kind of syntactic sequence you use, the same byte code is generated. >>> def f(): ... return 0, 1 ... >>> import dis >>> dis.dis('[a,b] = f()') 1 0 LOAD_NAME 0 (f) 2 CALL_FUNCTION 0 4 UNPACK_SEQUENCE 2 6 STORE_NAME 1 (a) 8 STORE_NAME 2 (b) 10 LOAD_CONST 0 (None) 12 RETURN_VALUE >>> dis.dis('(a,b) = f()') 1 0 LOAD_NAME 0 (f) 2 CALL_FUNCTION 0 4 UNPACK_SEQUENCE 2 6 STORE_NAME 1 (a) 8 STORE_NAME 2 (b) 10 LOAD_CONST 0 (None) 12 RETURN_VALUE >>> dis.dis('a, b = f()') 1 0 LOAD_NAME 0 (f) 2 CALL_FUNCTION 0 4 UNPACK_SEQUENCE 2 6 STORE_NAME 1 (a) 8 STORE_NAME 2 (b) 10 LOAD_CONST 0 (None) 12 RETURN_VALUE In every case, you simply call f, then use UNPACK_SEQUENCE to produce the values to assign to a and b. Even if you want to argue that byte code is an implementation detail of CPython, the definition of a chained assignment is not. Given x = [a, b] = f() the meaning is the same as tmp = f() x = tmp [a, b] = tmp x is assigned the result of f() (a tuple), not the "list" [a, b]. Finally, here is the grammar for an assignment statement: assignment_stmt ::= (target_list "=")+ (starred_expression | yield_expression) target_list ::= target ("," target)* [","] target ::= identifier | "(" [target_list] ")" | "[" [target_list] "]" | attributeref | subscription | slicing | "*" target Arguably, the "[" [target_list] "]" could and should have been removed in Python 3. Such a breaking change would be difficult to implement now, given the stated preference to avoid any future changes to Python on the scale of the 2-to-3 transition.
30
36
64,590,557
2020-10-29
https://stackoverflow.com/questions/64590557/how-to-get-the-predict-proba-for-the-class-predicted-by-predict-in-random-fo
from sklearn import ensemble model = ensemble.RandomForestClassifier(n_estimators=10) model.fit(x,y) predictions = model.predict(new) I know predict() uses predict_proba() to get the predictions, by computing the mean of the predicted class probabilities of the trees in the forest. I want to get the result of predict_proba() for the class predicted by the predict() method. What I'm doing is: first call predict() like in the above code, and for the probability I'm extracting the max probability from the trees like so: all_probabilities = model.predict_proba() class_probabilities = np.array([]) for tree in all_probabilities: class_probabilites = np.append(class_probabilities, tree.max()) Is this correct? If not, how can I extract the probability for the predicted class?
The predict_proba() method returns a two-dimensional array, containing the estimated probabilities for each instance and each class: import numpy as np from sklearn.ensemble import RandomForestClassifier X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]) y = np.array([0, 0, 1, 1]) model = RandomForestClassifier() model.fit(X, y) model.predict_proba(X) array([[0.91, 0.09], [0.91, 0.09], [0.25, 0.75], [0.05, 0.95]]) As you note, for each instance the predicted class is the class with the maximum probability. So one simple way to get the estimated probabilities for the predicted classes is to use np.max(): np.max(model.predict_proba(X), axis=1) array([0.91, 0.91, 0.75, 0.95])
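If you also want the predicted labels in the same pass, a sketch using argmax over the probability matrix (model and X as defined above):

import numpy as np

proba = model.predict_proba(X)                        # shape (n_samples, n_classes)
pred_idx = proba.argmax(axis=1)                       # column index of the predicted class
pred_labels = model.classes_[pred_idx]                # matches model.predict(X)
pred_proba = proba[np.arange(len(proba)), pred_idx]   # probability of the predicted class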
7
12
64,579,258
2020-10-28
https://stackoverflow.com/questions/64579258/sentence-embedding-using-t5
I would like to use state-of-the-art LM T5 to get sentence embedding vector. I found this repository https://github.com/UKPLab/sentence-transformers As I know, in BERT I should take the first token as [CLS] token, and it will be the sentence embedding. In this repository I see the same behaviour on T5 model: cls_tokens = output_tokens[:, 0, :] # CLS token is first token Does this behaviour correct? I have taken encoder from T5 and encoded two phrases with it: "I live in the kindergarden" "Yes, I live in the kindergarden" The cosine similarity between them was only "0.2420". I just need to understand how sentence embedding works - should I train network to find similarity to reach correct results? Or I it is enough of base pretrained language model?
In order to obtain the sentence embedding from T5, you need to take the last_hidden_state from the T5 encoder output: output = model.encoder(input_ids=s, attention_mask=attn, return_dict=True) pooled_sentence = output.last_hidden_state # shape is [batch_size, seq_len, hidden_size] # pooled_sentence will represent the embeddings for each word in the sentence # you need to sum/average the pooled_sentence pooled_sentence = torch.mean(pooled_sentence, dim=1) You now have sentence embeddings from T5
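A more complete sketch of the same idea, assuming the HuggingFace transformers library and the t5-base checkpoint (both are assumptions, not from the question); the mean pooling is masked so padding tokens don't dilute the embedding:

import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5EncoderModel.from_pretrained("t5-base")

sentences = ["I live in the kindergarden", "Yes, I live in the kindergarden"]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state          # [batch, seq_len, hidden_size]

mask = enc.attention_mask.unsqueeze(-1)              # [batch, seq_len, 1]; zeros out padding positions
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)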
6
7
64,579,002
2020-10-28
https://stackoverflow.com/questions/64579002/error-when-installing-pywin32-on-ubuntu
I'm trying to install the pywin32 module on Ubuntu for python 3.6, I've tried pip3 install pywin32 and got the following output: Collecting pywin32 Could not find a version that satisfies the requirement pywin32 (from versions: ) No matching distribution found for pywin32 Then I tried pip3 install pypiwin32 and got the following output: Collecting pypiwin32 Using cached https://files.pythonhosted.org/packages/d0/1b/2f292bbd742e369a100c91faa0483172cd91a1a422a6692055ac920946c5/pypiwin32-223-py3-none-any.whl Collecting pywin32>=223 (from pypiwin32) Could not find a version that satisfies the requirement pywin32>=223 (from pypiwin32) (from versions: ) No matching distribution found for pywin32>=223 (from pypiwin32)
The pywin32 and pypiwin32 packages are "Python extensions for Microsoft Windows Provides access to much of the Win32 API, the ability to create and use COM objects, and the Pythonwin environment." The only supported OS is Microsoft Windows, because the Win32 API can only be accessed from Windows, so these packages cannot be installed on Ubuntu. Source: https://pypi.org/project/pywin32/
8
7
64,569,062
2020-10-28
https://stackoverflow.com/questions/64569062/how-to-handle-custom-exceptions-with-grpc-in-python
I need to implement custom exceptions for handling gRPC request errors with Python. For HTTP requests it's straightforward - requests library catches it well when there's an error code etc. I am looking for analogous ways for gRPC to do something like: try: # send gRPC request except SomeGRPCException as e: # custom handle Is there a way to handle gRPC errors like that in Python? Or with gRPC it wouldn't work like in the example?
For simple RPC error handling, you can use try-catch: try: response = stub.SayHello(...) except grpc.RpcError as rpc_error: if rpc_error.code() == grpc.StatusCode.CANCELLED: pass elif rpc_error.code() == grpc.StatusCode.UNAVAILABLE: pass else: print(f"Received unknown RPC error: code={rpc_error.code()} message={rpc_error.details()}") For complex RPC error handling, you may need to utilize the trailing metadata: https://github.com/grpc/grpc/tree/master/examples/python/errors In most cases, try-catch should be sufficient.
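If you want your own exception types on top of that, one option is to translate the gRPC status codes at the call site; a rough sketch (the exception names and the SayHello stub are just illustrative):

import grpc

class ServiceUnavailableError(Exception):
    pass

class RequestCancelledError(Exception):
    pass

def call_say_hello(stub, request):
    try:
        return stub.SayHello(request)
    except grpc.RpcError as rpc_error:
        if rpc_error.code() == grpc.StatusCode.UNAVAILABLE:
            raise ServiceUnavailableError(rpc_error.details()) from rpc_error
        if rpc_error.code() == grpc.StatusCode.CANCELLED:
            raise RequestCancelledError(rpc_error.details()) from rpc_error
        raise  # re-raise anything we don't translate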
9
16
64,577,138
2020-10-28
https://stackoverflow.com/questions/64577138/implement-iter-and-next-in-different
I'm reading a book on Python which illustrates how to implement the iterator protocol. class Fibbs: def __init__(self): self.a = 0 self.b = 1 def __next__(self): self.a, self.b = self.b, self.a + self.b return self.a def __iter__(self): return self Here, self itself is the iterable and iterator, I believe? However, the para below says: Note that the iterator implements the __iter__ method, which will, in fact, return the iterator itself. In many cases, you would put the __iter__ method in another object, which you would use in the for loop. That would then return your iterator. It is recommended that iterators implement an __iter__ method of their own in addition (returning self, just as I did here), so they themselves can be used directly in for loops. Does this mean you can put __iter__() and __next__() in two different objects? Can it be done for objects belonging to different classes? Can it only be done for objects belonging to different classes? It might be a bit bizarre way of implementing the iterator protocol. But I just want to see how, provided it can actually be implemented like that.
How you make iterators and iterables There are two ways to do this: Implement __iter__ to return self and nothing else, implement __next__ on the same class. You've written an iterator. Implement __iter__ to return some other object that follows the rules of #1 (a cheap way to do this is to write it as a generator function so you don't have to hand-implement the other class). Don't implement __next__. You've written an iterable that is not an iterator. For correctly implemented versions of each protocol, the way you tell them apart is the __iter__ method. If the body is just return self (maybe with a logging statement or something, but no other side-effects), then either it's an iterator, or it was written incorrectly. If the body is anything else, then either it's a non-iterator iterable, or it was written incorrectly. Anything else is violating the requirements for the protocols. In case #2, the other object would be of another class by definition (because you either have an idempotent __iter__ and implement __next__, or you only have __iter__, without __next__, which produces a new iterator). Why the protocol is designed this way The reason you need __iter__ even on iterators is to support patterns like: iterable = MyIterable(...) iterator = iter(iterable) # Invokes MyIterable.__iter__ next(iterator, None) # Throw away first item for x in iterator: # for implicitly calls iterator's __iter__; dies if you don't provide __iter__ The reason you always return a new iterator for iterables, rather than just making them iterators and resetting the state when __iter__ is invoked is to handle the above case (if MyIterable just returned itself and reset iteration, the for loop's implicit call to __iter__ would reset it again and undo the intended skip of the first element) and to support patterns like this: for x in iterable: for y in iterable: # Operating over product of all elements in iterable If __iter__ reset itself to the beginning and only had a single state, this would: Get the first item and put it in x Reset, then iterate through the whole of iterable putting each value in y Try to continue outer loop, discover it's already exhausted, never give any other value to x It's also needed because Python assumes that iter(x) is x is a safe, side-effect free way to test if an iterable is an iterator. If your __iter__ modifies your own state, it's not side-effect free. At worst, for iterables, it should waste a little time making an iterator that is immediately thrown away. For iterators, it should be effectively free (since it just returns itself). To answer your questions directly: Does this mean you can put __iter__() and __next__() in two different objects? For iterators, you can't (it must have both methods, though __iter__ is trivial). For non-iterator iterables, you must (it must only have __iter__, and return some other iterator object). There is no "can". Can it be done for objects belonging to different classes? Yes. Can it only be done for objects belonging to different classes? Yes. 
Examples Example of iterable: class MyRange: def __init__(self, start, stop): self.start = start self.stop = stop def __iter__(self): return MyRangeIterator(self) # Returns new iterator, as this is a non-iterator iterable # Likely to have other methods (because iterables are often collections of # some sort and support many other behaviors) # Does *not* have __next__, as this is not an iterator Example of iterator: class MyRangeIterator: # Class is often non-public and or defined inside the iterable as # nested class; it exists solely to store state for iterator def __init__(self, rangeobj): # Constructed from iterable; could pass raw values if you preferred self.current = rangeobj.start self.stop = rangeobj.stop def __iter__(self): return self # Returns self, because this is an iterator def __next__(self): # Has __next__ because this is an iterator retval = self.current # Must cache current because we need to modify it before we return if retval >= self.stop: raise StopIteration # Indicates iterator exhausted self.current += 1 # Ensure state updated for next call return retval # Return cached value # Unlikely to have other methods; iterators are generally iterated and that's it Example of "easy iterable" where you don't implement your own iterator class, by making __iter__ a generator function: class MyEasyRange: def __init__(self, start, stop): ... # Same as for MyRange def __iter__(self): # Generator function is simpler (and faster) # than writing your own iterator class current = self.start # Can't mutate attributes, because multiple iterators might rely on this one iterable while current < self.stop: yield current # Produces value and freezes generator until iteration resumes current += 1 # reaching the end of the function acts as implicit StopIteration for a generator
7
20
64,568,775
2020-10-28
https://stackoverflow.com/questions/64568775/tf-idf-vectorizer-to-extract-ngrams
How can I use TF-IDF vectorizer from the scikit-learn library to extract unigrams and bigrams of tweets? I want to train a classifier with the output. This is the code from scikit-learn: from sklearn.feature_extraction.text import TfidfVectorizer corpus = [ 'This is the first document.', 'This document is the second document.', 'And this is the third one.', 'Is this the first document?', ] vectorizer = TfidfVectorizer() X = vectorizer.fit_transform(corpus)
TfidfVectorizer has an ngram_range parameter to determin the range of n-grams you want in the final matrix as new features. In your case, you want (1,2) to go from unigrams to bigrams: vectorizer = TfidfVectorizer(ngram_range=(1,2)) X = vectorizer.fit_transform(corpus).todense() pd.DataFrame(X, columns=vectorizer.get_feature_names()) and and this document document is first first document \ 0 0.000000 0.000000 0.314532 0.000000 0.388510 0.388510 1 0.000000 0.000000 0.455513 0.356824 0.000000 0.000000 2 0.357007 0.357007 0.000000 0.000000 0.000000 0.000000 3 0.000000 0.000000 0.282940 0.000000 0.349487 0.349487 is is the is this one ... the the first \ 0 0.257151 0.314532 0.000000 0.000000 ... 0.257151 0.388510 1 0.186206 0.227756 0.000000 0.000000 ... 0.186206 0.000000 2 0.186301 0.227873 0.000000 0.357007 ... 0.186301 0.000000 3 0.231322 0.000000 0.443279 0.000000 ... 0.231322 0.349487 ...
7
4
64,522,040
2020-10-25
https://stackoverflow.com/questions/64522040/dynamically-create-literal-alias-from-list-of-valid-values
I have a function which validates its argument to accept only values from a given list of valid options. Typing-wise, I reflect this behavior using a Literal type alias, like so: from typing import Literal VALID_ARGUMENTS = ['foo', 'bar'] Argument = Literal['foo', 'bar'] def func(argument: 'Argument') -> None: if argument not in VALID_ARGUMENTS: raise ValueError( f'argument must be one of {VALID_ARGUMENTS}' ) # ... This is a violation of the DRY principle, because I have to rewrite the list of valid arguments in the definition of my Literal type, even if it is already stored in the variable VALID_ARGUMENTS. How can I create the Argument Literal type dynamically, given the VALID_ARGUMENTS variable? The following things do not work: from typing import Literal, Union, NewType Argument = Literal[*VALID_ARGUMENTS] # SyntaxError: invalid syntax Argument = Literal[VALID_ARGUMENTS] # Parameters to generic types must be types Argument = Literal[Union[VALID_ARGUMENTS]] # TypeError: Union[arg, ...]: each arg must be a type. Got ['foo', 'bar']. Argument = NewType( 'Argument', Union[ Literal[valid_argument] for valid_argument in VALID_ARGUMENTS ] ) # Expected type 'Type[_T]', got 'list' instead Can it be done at all?
Go the other way around, and build VALID_ARGUMENTS from Argument: Argument = typing.Literal['foo', 'bar'] VALID_ARGUMENTS: typing.Tuple[Argument, ...] = typing.get_args(Argument) I've used a tuple for VALID_ARGUMENTS here, but if for some reason you really prefer a list, you can get one: VALID_ARGUMENTS: typing.List[Argument] = list(typing.get_args(Argument)) It's possible at runtime to build Argument from VALID_ARGUMENTS, but doing so is incompatible with static analysis, which is the primary use case of type annotations. Doing so is also considered semantically invalid - the spec forbids parameterizing Literal with dynamically computed parameters. The runtime implementation simply doesn't have the information it would need to validate this. Building VALID_ARGUMENTS from Argument is the way to go.
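With that in place, the runtime check from the question can reuse the same tuple, so the valid options live in exactly one place; a short sketch:

import typing

Argument = typing.Literal['foo', 'bar']
VALID_ARGUMENTS: typing.Tuple[Argument, ...] = typing.get_args(Argument)

def func(argument: Argument) -> None:
    if argument not in VALID_ARGUMENTS:
        raise ValueError(f'argument must be one of {VALID_ARGUMENTS}')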
98
109
64,535,462
2020-10-26
https://stackoverflow.com/questions/64535462/plotly-how-to-change-line-style-using-px-line
I have a dataframe that looks similar to this: >>>Hour Level value 0 7 H 1.435 1 7 M 3.124 2 7 L 5.578 3 8 H 0.435 4 8 M 2.124 5 8 L 4.578 I want to create a line chart in plotly that will have a different line style based on the column "level". Right now I have the line chart with the default line style: import plotly.graph_objects as go fig = px.line(group, x="Hour", y="value",color='level', title='Graph',category_orders={'level':['H','M','L']} ,color_discrete_map={'H':'royalblue','M':'orange','L':'firebrick'}) fig.show() I would like to control the line style for each level. Until now, the only way I saw to do this is to add a trace for each "level" using add_trace, as follows: # Create and style traces fig.add_trace(go.Scatter(x="Hour", y="value", name='H', line=dict(dash='dash'))) fig.add_trace(go.Scatter(x="Hour", y="value", name = 'M', line=dict(dash='dot'))) fig.show() but I keep getting this error: ValueError: Invalid value of type 'builtins.str' received for the 'x' property of scatter Received value: 'Hour' The 'x' property is an array that may be specified as a tuple, list, numpy array, or pandas Series My end goal is to control the line style of the lines in my charts, preferably from inside the px.line call.
One way you can set different styles through variables in your dataframe is: line_dash='Level' Plot Complete code import plotly.graph_objects as go import pandas as pd import numpy as np import plotly.io as pio import plotly.express as px group = pd.DataFrame({'Hour': {0: 7, 1: 7, 2: 7, 3: 8, 4: 8, 5: 8}, 'Level': {0: 'H', 1: 'M', 2: 'L', 3: 'H', 4: 'M', 5: 'L'}, 'value': {0: 1.435, 1: 3.1239999999999997, 2: 5.577999999999999, 3: 0.435, 4: 2.124, 5: 4.578}}) import plotly.graph_objects as go fig = px.line( group, x="Hour", y="value", color='Level', title='Graph', category_orders={'Level':['H','M','L']}, color_discrete_map={'H':'royalblue','M':'orange','L':'firebrick'}, line_dash='Level' ) fig.show()
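If you want to pin a specific dash pattern to each level instead of letting Plotly assign them, px.line also accepts a line_dash_map (analogous to color_discrete_map); a sketch reusing the group dataframe and px import from the code above:

fig = px.line(
    group, x="Hour", y="value", color="Level",
    category_orders={"Level": ["H", "M", "L"]},
    color_discrete_map={"H": "royalblue", "M": "orange", "L": "firebrick"},
    line_dash="Level",
    line_dash_map={"H": "dash", "M": "dot", "L": "solid"},
)
fig.show()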
9
12
64,530,101
2020-10-26
https://stackoverflow.com/questions/64530101/the-black-formatter-python
I just started using the 'Black' formatter module with Visual Studio Code. Everything was going well till I just noticed that it uses double quotes over single quotes which I already was using in my code... And it overrode that... So, is there an Black argument that I could add to Visual Studio Code which solves this problem?
You can use the --skip-string-normalization option at the command line, or in your Visual Studio Code options. See The Black code style, Strings. For example: { ... "python.formatting.provider": "black", "python.formatting.blackArgs": [ "--skip-string-normalization", "--line-length", "100" ] ... }
12
28
64,517,048
2020-10-24
https://stackoverflow.com/questions/64517048/pandas-loc-and-pep8
I've tried to search this a number of times but I don't see it answered so here goes... I often use pandas to clean up a dataframe and conform it to my needs. With this comes a lot of .loc accessing to query it and return values. Depending on what I am doing (and column lengths), this can get pretty lengthy. Given PEP8 constrains to 79 characters a line, are there any best practices? Some examples below (these are simplified and for explanatory purposes): missing_address_df = address_df.loc[address_df['address'].notnull()].copy() or multiple query points: nc_drive_df = address.loc[(address_df['address'].str.contains('drive')) & (address_df['state'] == 'NC')]
I'd advise two things Ignore PEP 8's 80 char advice, but try to keep to 120 or 150 characters per line Keeping some line length requirement makes sense to aid readability, but if you're trying to keep to 80 chars in (for example) a class method, it will lead to worse and less-readable code PEP 8 actually has a section on this, A Foolish Consistency is the Hobgoblin of Little Minds, which describes cases you should deviate from its other advice, for example When applying the guideline would make the code less readable, even for someone who is used to reading code that follows this PEP split the .loc contents onto multiple lines nc_drive_df = address.loc[ (address_df['address'].str.contains('drive')) & \ (address_df['state'] == 'NC') ] It's hard to be objective about when code "looks bad", despite being valid syntax, but you will experience it. Practically, PEP 8 and Cyclomatic Complexity checkers are tools which will help you fight against and defend and propose code styles in a scientific way. If you have a great many boolean statements, you (often must) break them up with parentheses to clarify their order nc_drive_df = address.loc[ ( (address_df['address'].str.contains('drive')) & \ (address_df['state'] == 'NC') ) | ( address_df['zip'] == "00000" ) ] This is somewhat in conflict with conventional Python operators, which are suggested to precede lines (PEP8), but I challenge this when forming a boolean array because the member Series must be the same shape to get a good result and it's generally easier to observe and reason about them when working with several DataFrames when they're visually aligned. Finally, often when doing scientific Python, you should absolutely try many possibilities against partial and full data if possible to draw good performance conclusions, consider their readability to be second, and provide excellent comments about and linking to your research, etc. over any particular style.
6
7
64,506,283
2020-10-23
https://stackoverflow.com/questions/64506283/create-a-pandas-table
Using pandas, how can I display a table similar to this one? I think I have to use a dataframe similar to df = pandas.DataFrame(results) and display it with display.display(df), but from there I don't know what to do.
You can pass in a dictionary as data when you use pd.DataFrame: >>> import pandas as pd >>> d = { ... 'Algothime': ['KNN', 'SVM', 'MLP'], ... 'Param. 1': ['-', '-', '-'], ... 'Param. 2': ['-', '-', '-'], ... 'Plage param. 1': ['-', '-', '-'], ... 'Plage param. 2': ['-', '-', '-'], ... } >>> df = pd.DataFrame(data=d) >>> df Algothime Param. 1 Param. 2 Plage param. 1 Plage param. 2 0 KNN - - - - 1 SVM - - - - 2 MLP - - - - If you want that specific style you can use Google Colab or something similar if you don't have Jupyter installed locally:
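If the results arrive as rows rather than columns, the same table can also be built from a list of lists plus column names (a sketch with placeholder values):

import pandas as pd

columns = ['Algothime', 'Param. 1', 'Param. 2', 'Plage param. 1', 'Plage param. 2']
rows = [
    ['KNN', '-', '-', '-', '-'],
    ['SVM', '-', '-', '-', '-'],
    ['MLP', '-', '-', '-', '-'],
]
df = pd.DataFrame(rows, columns=columns)
print(df)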
6
7
64,467,644
2020-10-21
https://stackoverflow.com/questions/64467644/add-density-curve-on-the-histogram
I am able to make a histogram in Python but I am unable to add a density curve. I have seen code that uses different ways to add a density curve to a histogram, but I am not sure how to apply it to my code. I have added density=True but am not able to get a density curve on the histogram. df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) X=df['A'] hist, bins = np.histogram(X, bins=10,density=True) width = 0.7 * (bins[1] - bins[0]) center = (bins[:-1] + bins[1:]) / 2 plt.bar(center, hist, align='center', width=width) plt.show()
distplot is deprecated and will be removed in a future version of seaborn. Therefore, the alternatives are to use histplot and displot. sns.histplot import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) X = df['A'] sns.histplot(X, kde=True, bins=20) plt.show() sns.displot import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) X = df['A'] sns.displot(X, kde=True, bins=20) plt.show() sns.distplot (deprecated) Here is an approach using the distplot method of seaborn, as also mentioned in the comments: import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) X = df['A'] sns.distplot(X, kde=True, bins=20, hist=True) plt.show()
7
6
64,462,578
2020-10-21
https://stackoverflow.com/questions/64462578/laplace-transform-using-numerical-integration-in-python-has-very-poor-precision
I have written a function to compute the Laplace transform of a function using scipy.integrate.quad. It is not a very sophisticated function and currently performs poorly on the probability density function of an Erlang distribution. I have included all my work below. I first compute the Laplace transform and then the inverse in order to compare it to the original p.d.f. of the Erlang. I use mpmath for this. The mpmath.invertlaplace is not the problem as it manages to convert the closed-form Laplace transform back to the original p.d.f. quite perfectly. Please help me understand what the problem is with my numerical laplace transform. I get the following error but have not been able to resolve it. IntegrationWarning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated. a=0,b=np.inf,limit=limit)[0] PLOT CODE import numpy as np import matplotlib.pyplot as plt import math as m import mpmath as mp import scipy.stats as st from scipy.integrate import quad def get_laplace(func,limit=10000): ''' Returns laplace transfrom function ''' def laplace(s): '''Numerical laplace transform''' # Seperate into real and imaginary parts x = np.real(s) y = np.imag(s) def real_func(t): return m.exp(-x*t)*m.cos(y*t)*func(t) def imag_func(t): return m.exp(-x*t)*m.sin(y*t)*func(t) real_integral = quad(real_func, a=0,b=np.inf,limit=limit)[0] imag_intergal = quad(real_func, a=0,b=np.inf,limit=limit)[0] return complex(real_integral,-imag_intergal) return laplace def L_erlang(s,lam,k): ''' Closed form laplace transform of Erlang or Gamma distribution. ''' return (lam/(lam+s))**k if __name__ == '__main__': # Setup Erlang probability density function k = 5 lam = 1 pdf = st.erlang(a=k,scale=1/lam).pdf # Laplace transforms Lnum = get_laplace(pdf) # numerical approximation L = lambda s: L_erlang(s,lam,k) # closed form # Use mpmath library to perform inverse laplace # Invserse transfrom on numerical laplace function invLnum = lambda t: mp.invertlaplace(Lnum,t, method='dehoog', dps=5, degree=3) # Invserse transfrom on closed-form laplace function invL = lambda t: mp.invertlaplace(L,t, method='dehoog', dps=5, degree=3) # Grid to visualise T = np.linspace(0.1,10,10) # Get values of inverse transforms lnum = np.array([invLnum(t) for t in T]) l = np.array([invL(t) for t in T]) # Plot plt.plot(T,lnum,label='from numerical laplace') plt.plot(T,l,label='from closed-form laplace') plt.plot(T,pdf(T),label='original pdf',linestyle='--') plt.legend(loc='best') plt.show() UPDATE After two cups of VERY strong coffee, I managed to see the obvious mistake and make the code work. It quite embarrassing actually. Have a look at this line of code: imag_intergal = quad(real_func, a=0,b=np.inf,limit=limit)[0] Hmmm, real_func hey? So it should read: imag_intergal = quad(imag_func_func, a=0,b=np.inf,limit=limit)[0] The one gets this lovely plot: Conclusion So why go through all the trouble to numerically perform the Laplace transform of something that we have a closed form solution for. That is because the interest lies somewhere else. We do not have a closed expression for the conditional future lifetime distribution which is similar to the hazard function. Lets call it h. Then for the Erlang distribution erl = st.erlang(a=k,scale=1/lam) that has been active for tau units of time we have h = lambda t: erl.pdf(t+tau)/erl.sf(tau). This distribution can be used as a holding time in a Semi-Markov Model (SMP). 
To analyse the transient behaviour of the SMP one uses Laplace transforms. Usually only pdfs are used, but now I can use hazard functions. It's pretty cool because it means one can model transient behaviour without assuming everything is new.
Good day. I am repeating the Update section from the original question, as this is the solution to the question. This way, the question can be marked as resolved. UPDATE After two cups of VERY strong coffee, I managed to see the obvious mistake and make the code work. It's quite embarrassing actually. Have a look at this line of code: imag_intergal = quad(real_func, a=0,b=np.inf,limit=limit)[0] Hmmm, real_func hey? So it should read: imag_intergal = quad(imag_func, a=0,b=np.inf,limit=limit)[0] Then one gets this lovely plot: Conclusion So why go through all the trouble of numerically performing the Laplace transform of something that we have a closed-form solution for? That is because the interest lies somewhere else. We do not have a closed expression for the conditional future lifetime distribution, which is similar to the hazard function. Let's call it h. Then for the Erlang distribution erl = st.erlang(a=k,scale=1/lam) that has been active for tau units of time we have h = lambda t: erl.pdf(t+tau)/erl.sf(tau). This distribution can be used as a holding time in a Semi-Markov Model (SMP). To analyse the transient behaviour of the SMP one uses Laplace transforms. Usually only pdfs are used, but now I can use hazard functions. It's pretty cool because it means one can model transient behaviour without assuming everything is new.
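For reference, a corrected version of the numerical transform from the question — only the second quad call changes, everything else is as posted:

import math as m
import numpy as np
from scipy.integrate import quad

def get_laplace(func, limit=10000):
    '''Returns a numerical Laplace transform of func as a function of s.'''
    def laplace(s):
        # Separate s into real and imaginary parts
        x = np.real(s)
        y = np.imag(s)

        def real_func(t):
            return m.exp(-x * t) * m.cos(y * t) * func(t)

        def imag_func(t):
            return m.exp(-x * t) * m.sin(y * t) * func(t)

        real_integral = quad(real_func, a=0, b=np.inf, limit=limit)[0]
        imag_integral = quad(imag_func, a=0, b=np.inf, limit=limit)[0]  # was real_func
        return complex(real_integral, -imag_integral)
    return laplace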
8
1
64,495,333
2020-10-23
https://stackoverflow.com/questions/64495333/pydantic-dynamically-create-a-model-with-multiple-base-classes
From the pydantic docs I understand this: import pydantic class User(pydantic.BaseModel): id: int name: str class Student(pydantic.BaseModel): semester: int # this works as expected class Student_User(User, Student): building: str print(Student_User.__fields__.keys()) #> dict_keys(['semester', 'id', 'name', 'building']) However, when I want to create a similar object dynamically (following the section dynamic-model-creation): # this results in a TypeError pydantic.create_model("Student_User2", __base__=(User, Student)) I get: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases Question: How to dynamically create a class like Student_User
As of pydantic==1.9.2, Student_User2 = pydantic.create_model("Student_User2", __base__=(User, Student), building=(str, ...)) runs successfully and print(Student_User2.__fields__.keys()) returns dict_keys(['semester', 'id', 'name', 'building'])
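Putting it together with the two model classes from the question, a runnable sketch (the example values passed to the model are made up):

import pydantic

class User(pydantic.BaseModel):
    id: int
    name: str

class Student(pydantic.BaseModel):
    semester: int

Student_User2 = pydantic.create_model(
    "Student_User2", __base__=(User, Student), building=(str, ...)
)

print(Student_User2.__fields__.keys())
# dict_keys(['semester', 'id', 'name', 'building'])
print(Student_User2(id=1, name="Ada", semester=3, building="B"))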
8
5
64,558,200
2020-10-27
https://stackoverflow.com/questions/64558200/python-requests-in-docker-compose-containers
Problem I have a 2-container docker-compose.yml file. One of the containers is a small FastAPI app. The other is just trying to hit the API using Python's requests package. I can access the app container from outside with the exact same code as is in the Python package trying to hit it, and it works, but it will not work within the package. docker-compose.yml version: "3.8" services: read-api: build: context: ./read-api depends_on: - "toy-api" networks: - ds-net toy-api: build: context: ./api networks: - ds-net ports: - "80:80" networks: ds-net: Relevant requests code from requests import Session def post_to_api(session, raw_input, path): print(f"The script is sending: {raw_input}") print(f"The script is sending it to: {path}") response = session.post(path, json={"payload": raw_input}) print(f"The script received: {response.text}") def get_from_api(session, path): print(f"The datalake script is trying to GET from: {path}") response = session.get(path) print(f"The datalake script received: {response.text}") session = Session() session.trust_env = False ### I got that from here: https://stackoverflow.com/a/50326101/534238 get_from_api(session, path="http://localhost/test") post_to_api(session, "this is a test", path="http://localhost/raw") Running It REPL-Style If I create an interactive session and run those exact commands above in the requests code portion, it works: >>> get_from_api(session, path="http://localhost/test") The script is trying to GET from: http://localhost/test The script received: {"payload":"Yes, you reached here..."} >>> post_to_api(session, "this is a test", path="http://localhost/raw") The script is sending: this is a test The script is sending it to: http://localhost/raw The script received: {"payload":"received `raw_input`: this is a test"} To be clear: the API code is still being run as a container, and that container was still created with the docker-compose.yml file. (In other words, the API container is working properly, when accessed from the host.) Running Within Container Doing the same thing within the container, I get the following (fairly long) errors: read-api_1 | The script is trying to GET from: http://localhost/test read-api_1 | Traceback (most recent call last): read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 159, in _new_conn read-api_1 | conn = connection.create_connection( read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 84, in create_connection read-api_1 | raise err read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 74, in create_connection read-api_1 | sock.connect(sa) read-api_1 | ConnectionRefusedError: [Errno 111] Connection refused read-api_1 | read-api_1 | During handling of the above exception, another exception occurred: . . . 
read-api_1 | Traceback (most recent call last): read-api_1 | File "access_api.py", line 99, in <module> read-api_1 | get_from_api(session, path="http://localhost/test") read-api_1 | File "access_datalake.py", line 86, in get_from_api read-api_1 | response = session.get(path) read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 543, in get read-api_1 | return self.request('GET', url, **kwargs) read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 530, in request read-api_1 | resp = self.send(prep, **send_kwargs) read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 643, in send read-api_1 | r = adapter.send(request, **kwargs) read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 516, in send read-api_1 | raise ConnectionError(e, request=request) read-api_1 | requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /test (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa9c69b3a0>: Failed to establish a new connection: [Errno 111] Connection refused')) ai_poc_work_read-api_1 exited with code 1 Attempts to Solve I thought it was with how the host identified itself within the container group, or whether that origin could be accessed, so I have already tried to change the following, with no success: Instead of using localhost as the host, I used read-api. Actually, I started with read-api, and had no luck, but once using localhost, I could at least use REPL on the host machine, as shown above. I also tried 0.0.0.0, no luck. (I did not expect that to fix it.) I have changed what CORS ORIGINS are allowed in the API, including all of the possible paths for the container that is trying to read, and just using "*" to flag all CORS origins. No luck. What am I doing wrong? It seems the problem must be with the containers, or maybe how requests interacts with containers, but I cannot figure out what. Here are some relevant GitHub issues or SO answers I found, but none solved it: GitHub issue: Docker Compose problems with requests GitHub issue: Solving high latency requests in Docker containers SO problem: containers communicating with requests
Within the Docker network, applications must be accessed using the service names defined in the docker-compose.yml. If you're trying to access the toy-api service, use get_from_api(session, path="http://toy-api/test") You can access the application via http://localhost/test on your host machine because Docker exposes the application to the host machine. However, loosely speaking, within the Docker network, localhost does not refer to the host's localhost but only to the container's own localhost. And in the case of the read-api service, there is no application listening at http://localhost/test.
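One way to keep the same script working both from the host and from inside the Compose network is to make the base URL configurable; a sketch of the reading side (API_BASE_URL is a made-up variable name, not something Compose sets for you):

import os
from requests import Session

# Inside the Compose network the service name resolves through Docker's DNS;
# on the host you would set API_BASE_URL=http://localhost instead.
API_BASE_URL = os.environ.get("API_BASE_URL", "http://toy-api")

session = Session()
session.trust_env = False
print(session.get(f"{API_BASE_URL}/test").text)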
12
2
64,499,294
2020-10-23
https://stackoverflow.com/questions/64499294/validate-on-entire-validation-set-when-using-ddp-backend-with-pytorch-lightning
I'm training an image classification model with PyTorch Lightning and running on a machine with more than one GPU, so I use the recommended distributed backend for best performance, ddp (DistributedDataParallel). This naturally splits up the dataset, so each GPU will only ever see one part of the data. However, for validation, I would like to compute metrics like accuracy on the entire validation set and not just on a part. How would I do that? I found some hints in the official documentation, but they do not work as expected or are confusing to me. What's happening is that validation_epoch_end is called num_gpus times with 1/num_gpus of the validation data each. I would like to aggregate all results and only run the validation_epoch_end once. In this section they state that when using dp/ddp2 you can add an additional function called like this def validation_step(self, batch, batch_idx): loss, x, y, y_hat = self.step(batch) return {"val_loss": loss, 'y': y, 'y_hat': y_hat} def validation_step_end(self, *args, **kwargs): # do something here, I'm not sure what, # as it gets called in ddp directly after validation_step with the exact same values return args[0] However, the results are not being aggregated and validation_epoch_end is still called num_gpus times. Is this kind of behavior not available for ddp? Is there some other way to achieve this aggregation behavior?
training_epoch_end() and validation_epoch_end() receive data that is aggregated from all training / validation batches of the particular process. They simply receive a list of what you returned in each training or validation step. When using the DDP backend, there's a separate process running for every GPU. There's no simple way to access the data that another process is processing, but there's a mechanism for synchronizing a particular tensor between the processes. The easiest approach for computing a metric on the entire validation set is to calculate the metric in pieces and then synchronize the resulting tensor, for example by taking the average. self.log() calls will automatically synchronize the value between GPUs when you use sync_dist=True. How the value is synchronized is determined by the reduce_fx argument, which by default is torch.mean. If you're happy with averaging the metric over batches too, you don't need to override training_epoch_end() or validation_epoch_end() — self.log() will do the averaging for you. If the metric cannot be calculated separately for each GPU and then averaged, it can get a bit more challenging. It's possible to update some state variables at each step, and then synchronize the state variables at the end of an epoch and calculate the metric. The recommended way is to create a class that derives from the Metric class from the TorchMetrics project. Add the state variables in the constructor using add_state() and override the update() and compute() methods. The API will take care of synchronizing the state variables between the GPU processes. There's already an accuracy metric in TorchMetrics and the source code is a good example of how to use the API.
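For the simple averaged case, the validation step inside your LightningModule might look roughly like this (a sketch; the accuracy computation is a stand-in for your own metric):

import torch
import torch.nn.functional as F

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.cross_entropy(y_hat, y)
    acc = (y_hat.argmax(dim=-1) == y).float().mean()
    # sync_dist=True reduces the logged values across the DDP processes
    # (torch.mean by default), so the epoch-level metric covers all GPUs.
    self.log("val_loss", loss, sync_dist=True)
    self.log("val_acc", acc, sync_dist=True)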
7
6
64,517,793
2020-10-24
https://stackoverflow.com/questions/64517793/why-is-this-function-slower-in-jax-vs-numpy
I have the following numpy function as seen below that I'm trying to optimize by using JAX but for whatever reason, it's slower. Could someone point out what I can do to improve the performance here? I suspect it has to do with the list comprehension taking place for Cg_new but breaking that apart doesn't yield any further performance gains in JAX. import numpy as np def testFunction_numpy(C, Mi, C_new, Mi_new): Wg_new = np.zeros((len(Mi_new[:,0]), len(Mi[0]))) Cg_new = np.zeros((1, len(Mi[0]))) invertCsensor_new = np.linalg.inv(C_new) Wg_new = np.dot(invertCsensor_new, Mi_new) Cg_new = [np.dot(((-0.5*(Mi_new[:,m].conj().T))), (Wg_new[:,m])) for m in range(0, len(Mi[0]))] return C_new, Mi_new, Wg_new, Cg_new C = np.random.rand(483,483) Mi = np.random.rand(483,8) C_new = np.random.rand(198,198) Mi_new = np.random.rand(198,8) %timeit testFunction_numpy(C, Mi, C_new, Mi_new) #1000 loops, best of 3: 1.73 ms per loop Here's the JAX equivalent: import jax.numpy as jnp import numpy as np import jax def testFunction_JAX(C, Mi, C_new, Mi_new): Wg_new = jnp.zeros((len(Mi_new[:,0]), len(Mi[0]))) Cg_new = jnp.zeros((1, len(Mi[0]))) invertCsensor_new = jnp.linalg.inv(C_new) Wg_new = jnp.dot(invertCsensor_new, Mi_new) Cg_new = [jnp.dot(((-0.5*(Mi_new[:,m].conj().T))), (Wg_new[:,m])) for m in range(0, len(Mi[0]))] return C_new, Mi_new, Wg_new, Cg_new C = np.random.rand(483,483) Mi = np.random.rand(483,8) C_new = np.random.rand(198,198) Mi_new = np.random.rand(198,8) C = jnp.asarray(C) Mi = jnp.asarray(Mi) C_new = jnp.asarray(C_new) Mi_new = jnp.asarray(Mi_new) jitter = jax.jit(testFunction_JAX) %timeit jitter(C, Mi, C_new, Mi_new) #1 loop, best of 3: 4.96 ms per loop
For general considerations on benchmark comparisons between JAX and NumPy, see https://jax.readthedocs.io/en/latest/faq.html#is-jax-faster-than-numpy As for your particular code: when JAX jit compilation encounters Python control flow, including list comprehensions, it effectively flattens the loop and stages the full sequence of operations. This can lead to slow jit compile times and suboptimal code. Fortunately, the list comprehension in your function is readily expressed in terms of native numpy broadcasting. Additionally, there are two other improvements you can make: there is no need to forward declare Wg_new and Cg_new before computing them when computing dot(inv(A), B), it is much more efficient and precise to use np.linalg.solve rather than explicitly computing the inverse. Making these three improvements to both the numpy and JAX versions result in the following: def testFunction_numpy_v2(C, Mi, C_new, Mi_new): Wg_new = np.linalg.solve(C_new, Mi_new) Cg_new = -0.5 * (Mi_new.conj() * Wg_new).sum(0) return C_new, Mi_new, Wg_new, Cg_new @jax.jit def testFunction_JAX_v2(C, Mi, C_new, Mi_new): Wg_new = jnp.linalg.solve(C_new, Mi_new) Cg_new = -0.5 * (Mi_new.conj() * Wg_new).sum(0) return C_new, Mi_new, Wg_new, Cg_new %timeit testFunction_numpy_v2(C, Mi, C_new, Mi_new) # 1000 loops, best of 3: 1.11 ms per loop %timeit testFunction_JAX_v2(C_jax, Mi_jax, C_new_jax, Mi_new_jax) # 1000 loops, best of 3: 1.35 ms per loop Both functions are a fair bit faster than they were previously due to the improved implementation. You'll notice, however, that JAX is still slower than numpy here; this is somewhat to be expected because for a function of this level of simplicity, JAX and numpy are both generating effectively the same short series of BLAS and LAPACK calls executed on a CPU architecture. There's simply not much room for improvement over numpy's reference implementation, and with such small arrays JAX's overhead is apparent.
5
11
64,559,768
2020-10-27
https://stackoverflow.com/questions/64559768/flask-socketio-import-wont-work-error-no-module-named-flask-socketio
I'm trying to create a connection between Flask-SocketIO and react-native socketio. I have already prepared the client side with react-native socketio, BUT I have run into a problem with importing flask_socketio on the RPi. I'm trying to use the simplest implementation possible; this is my code: from flask import Flask from flask_socketio import SocketIO, emit app = Flask(__name__) @app.route('/', methods=['GET']) def hello_world(): return "Hello World" if __name__ == '__main__': app.run(host='0.0.0.0', port=5005) Without line 2 it works perfectly, but I need to use flask_socketio. Here is a photo too (the first run is without importing flask_socketio, then I tried to import it and it won't work). I have tried reinstalling flask_socketio twice and rebooting, but nothing works:
The problem was that I was installing it with sudo python pip install flask_socketio, BUT I had to use python3, so the right installation is python3 -m pip install flask_socketio
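A quick way to confirm which interpreter actually has the package (purely a sanity check, not part of the app):

# Run this with the same interpreter that starts the app, i.e. python3
import flask_socketio
print(flask_socketio.__file__)

If that import fails under python3 while the app is started with python3, the package was installed for a different interpreter.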
5
8
64,514,398
2020-10-24
https://stackoverflow.com/questions/64514398/python-multiprocessing-within-flask-request-with-gunicorn-nginx
I want to build a service that will be able to handle: a low volume of requests a high compute cost for each request but where the high compute cost can be parallelized. My understanding of a pre-fork server is that something like the following happens: server starts Gunicorn creates multiple OS processes, also called workers, ready to accept requests request comes in. Nginx forwards to Gunicorn. Gunicorn sends to one of the workers. What I want to understand is what happens if, in my Flask code, when handling the request, I have this: from multiprocessing import Pool as ProcessPool with ProcessPool(4) as pool: pool.map(some_expensive_function, some_data) In particular: Will additional OS processes be started? Will the speedup be what I expect? (I.e., similar to if I ran the ProcessPool outside of a Flask production context?) If Gunicorn created 4 web workers, will there now be 7 OS processes running? 9? Is there a risk of making too many? Does Gunicorn assume that each worker will not fork or does it not care? If a web-worker dies or is killed after starting the ProcessPool, will it be closed by the context manager properly? Is this a sane thing to do? What are the alternatives?
Great question! With Python multiprocessing, there are 3 "start methods" that can be used, and they all have implications for your questions. As the docs explain, they are: 'spawn': The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object’s run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver. Available on Unix and Windows. The default on Windows and macOS. 'fork': The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic. Available on Unix only. The default on Unix. 'forkserver' When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited. Available on Unix platforms which support passing file descriptors over Unix pipes. As for Gunicorn's pre-fork model, you've explained it well. Each of the workers is running in its own process. Since you're trying to use multiprocessing within a worker, rather than alongside Gunicorn, this should be doable, but will still be a bit error-prone. import multiprocessing mp = multiprocessing.get_context('spawn') This code gives us the mp object, which has the same API as the multiprocessing module, but with a set start method. In the case of the code above, it is set to 'spawn'. This is the safest route for using multiprocessing within a Gunicorn worker, as it is the most isolated from the process that created it, and less likely to run into problems around accidentally shared resources. with mp.Pool(processes=4) as pool: pool.map(some_expensive_function, some_data) We then use the mp object to create a process pool as you've done. This code must be inside a function/module that is only called/used within the worker processes. If it is used within the server process it could cause problems. Will additional OS processes be started? Will the speedup be what I expect? (I.e., similar to if I ran the ProcessPool outside of a Flask production context?) Quite a lot of questions packed in here. Additional OS processes will be started. The speedup could vary massively, and will depend on a number of factors such as: How many other processes are running? How many worker processors is Gunicorn running? Is the server under heavy load? How many cores does the processor have? How parallelizable is the work? Does some_expensive_function(data_1) have to wait for some_expensive_function(data_2) before it can do its work? To figure out if using multiprocessing is faster, and how much faster it will be, you'll have to test it. Best you can do prior to that is form a rough estimate based on factors like those listed above. (cont.) If Gunicorn created 4 web workers, will there now be 7 OS processes running? 9? Is there a risk of making too many? Does Gunicorn assume that each worker will not fork or does it not care? 
If there are 4 Gunicorn worker processes, and each of them is fulfilling a request that uses multiprocessing with 4 processes, then there will be 1 Gunicorn parent process + 4 worker processes + 4 * 4 worker subprocesses = 21 processes, not to mention the processes being used by Nginx. Gunicorn recommends you create (2 * num_cores) + 1 workers, but in your case you may want to decrease that, perhaps by dividing it by 4, to account for the fact that your worker processes themselves work best when using multiple cores. To find the most efficient configuration, you'll have to benchmark various configurations to find out what works best for you. If a web-worker dies or is killed after starting the ProcessPool, will it be closed by the context manager properly? This depends on how the worker dies. If it is killed via SIGKILL, or encounters a segmentation fault, or some other critical error, then it will abruptly die without getting to run any finalization code. The context manager can only do its job in the cases where a try-finally block would be able to execute the 'finally' block. For more about that, check out this answer: Does 'finally' always execute in Python? Is this a sane thing to do? What are the alternatives? It's not insane per se, but it's not the kind of approach I'd generally recommend. One alternative would be to have some_expensive_function implemented with its own server. Your Gunicorn workers could use IPC or network communication to send work to the some_expensive_function server process, and it would handle dividing this work among sub processes. One advantage of a design like that is that the some_expensive_function server process can easily be moved to run on another computer if performance demands it. It's similar to how databases are generally run as their own server process, and can either be located on the same computer or on a separate computer (potentially behind a load balancer for read-only queries, or a sharding configuration) depending on what performance requirements must be met. If you decide to go that route, you may find the Python package Celery useful for distributing the work from the Gunicorn workers. If you want to do this, you should probably be running Gunicorn with preload_app=True.
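As a rough sketch of the in-worker approach described above (the route, function name and pool size are only illustrative):

import multiprocessing
from flask import Flask, jsonify, request

app = Flask(__name__)
mp = multiprocessing.get_context("spawn")  # safest start method inside a pre-forked worker

def some_expensive_function(item):
    # placeholder for the real CPU-bound work
    return item * item

@app.route("/compute", methods=["POST"])
def compute():
    some_data = request.get_json()["data"]
    # The pool only lives for the duration of this request
    with mp.Pool(processes=4) as pool:
        results = pool.map(some_expensive_function, some_data)
    return jsonify(results=results)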
29
32
64,457,733
2020-10-21
https://stackoverflow.com/questions/64457733/django-dumpdata-fails-on-special-characters
I'm trying to dump my entire DB to JSON. When I run python manage.py dumpdata > data.json I get an error: (env) PS C:\dev\watch_something> python manage.py dumpdata > data.json CommandError: Unable to serialize database: 'charmap' codec can't encode character '\u0130' in position 1: character maps to <undefined> Exception ignored in: <generator object cursor_iter at 0x0460C140> Traceback (most recent call last): File "C:\dev\watch_something\env\lib\site-packages\django\db\models\sql\compiler.py", line 1602, in cursor_iter cursor.close() sqlite3.ProgrammingError: Cannot operate on a closed database. It's because one of the characters in my DB is a special character. How can I dump the DB correctly? FYI, all other DB functionalities work fine.
One solution is to use ./manage.py dumpdata -o data.json instead of ./manage.py dumpdata > data.json. Another solution is to use Python's UTF-8 mode, run: python -Xutf8 ./manage.py dumpdata > data.json
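If you prefer not to pass the flag every time, UTF-8 mode can also be enabled through the PYTHONUTF8 environment variable, which is equivalent to -Xutf8; for example in PowerShell (adjust for your shell):

$env:PYTHONUTF8 = "1"
python manage.py dumpdata > data.json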
24
92
64,555,101
2020-10-27
https://stackoverflow.com/questions/64555101/receive-an-error-from-lingnutls-hogweed-when-importing-cv2
I've never seen an error like this and don't know where to start. I installed opencv with conda install opencv and am running Ubuntu Linux 18.04 using a conda environment named fpn. How should I even approach debugging this? Traceback (most recent call last): File "test.py", line 5, in <module> import cv2 ImportError: /home/s/miniconda3/envs/fpn/lib/python3.7/site-packages/../../././libgnutls.so.30: symbol mpn_add_1 version HOGWEED_4 not defined in file libhogweed.so.4 with link time reference
There seems to be a problem with the recent releases of opencv packages for Conda. I have tested all the 4.x releases and found that the problem occurs starting from 4.3. Unless you really depend on >=4.3, forcing a version prior to 4.3 solves the problem, e.g. with an environment file like: name: test channels: - anaconda - conda-forge dependencies: - python>=3.8 - opencv<4.3 In my case this installed 4.2.0. Importing cv2 in Python works fine then. Note that using conda update didn't work for me and I still got the error; I had to first remove the environment and then re-create it. I think this behavior indicates that the error is rooted in some dependency of opencv, which is not properly downgraded when conda update is used.
7
2
64,517,366
2020-10-24
https://stackoverflow.com/questions/64517366/python-error-while-installing-matplotlib
OS: Windows 10 Python ver: 3.9.0 Error code: ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. I tried: python -m pip install -U pip python -m pip install -U matplotlib didn't work. and then I tried: pip install --upgrade setuptools didn't solve the problem. I read on SO that maybe if I open the shell in administrator mode it could solve the problem but it didn't work too. I saw someone mentioning ez-setup for this error code. I installed it but that didn't work too. I don't know if it has something to do but my C directory looks like this: C:\Users\METİNUSTA It has an uppercase i character which sometimes can cause problems with applications. I can't change it because I am using my school's Windows key and it don't let me do any change. Because of this I installed python on D: . Also here my pip list for extra information: ez-setup 0.9 flake8 3.8.4 mccabe 0.6.1 pip 20.2.4 pycodestyle 2.6.0 pyflakes 2.2.0 setuptools 50.3.2 wheel 0.35.1 and finally whole error log that I get on windows powershell: ERROR: Command errored out with exit status 1: command: 'd:\python\python39\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\METİNUSTA\\AppData\\Local\\Temp\\pip-install-8iv10tb_\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\METİNUSTA\\AppData\\Local\\Temp\\pip-install-8iv10tb_\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\METİNUSTA\AppData\Local\Temp\pip-pip-egg-info-elosrn6m' cwd: C:\Users\METİNUSTA\AppData\Local\Temp\pip-install-8iv10tb_\matplotlib\ Complete output (99 lines): WARNING: Missing build requirements in pyproject.toml for numpy>=1.15 from https://files.pythonhosted.org/packages/bf/e8/15aea783ea72e2d4e51e3ec365e8dc4a1a32c9e5eb3a6d695b0d58e67cdd/numpy-1.19.2.zip#sha256=0d310730e1e793527065ad7dde736197b705d0e4c9999775f212b03c44a8484c. WARNING: The project does not specify a build backend, and pip cannot fall back to setuptools without 'setuptools>=40.8.0' and 'wheel'. ERROR: Command errored out with exit status 1: command: 'd:\python\python39\python.exe' 'd:\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\METNUS~1\AppData\Local\Temp\tmpqz3brme_' cwd: C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy Complete output (49 lines): Error in sitecustomize; set PYTHONVERBOSE for traceback: SyntaxError: (unicode error) 'utf-8' codec can't decode byte 0xdd in position 0: unexpected end of data (sitecustomize.py, line 21) Running from numpy source directory. 
setup.py:470: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() Error in sitecustomize; set PYTHONVERBOSE for traceback: SyntaxError: (unicode error) 'utf-8' codec can't decode byte 0xdd in position 0: unexpected end of data (sitecustomize.py, line 21) Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Traceback (most recent call last): File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy\tools\cythonize.py", line 59, in process_pyx from Cython.Compiler.Version import version as cython_version ModuleNotFoundError: No module named 'Cython' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy\tools\cythonize.py", line 235, in <module> main() File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy\tools\cythonize.py", line 231, in main find_process_files(root_dir) File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy\tools\cythonize.py", line 222, in find_process_files process(root_dir, fromfile, tofile, function, hash_db) File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy\tools\cythonize.py", line 188, in process processor_function(fromfile, tofile) File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-wheel-l2wpf1i8\numpy\tools\cythonize.py", line 64, in process_pyx raise OSError('Cython needs to be installed in Python as a module') OSError: Cython needs to be installed in Python as a module Cythonizing sources Traceback (most recent call last): File "d:\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 280, in <module> main() File "d:\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 263, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "d:\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 133, in prepare_metadata_for_build_wheel return hook(metadata_directory, config_settings) File "d:\python\python39\lib\site-packages\setuptools\build_meta.py", line 161, in prepare_metadata_for_build_wheel self.run_setup() File "d:\python\python39\lib\site-packages\setuptools\build_meta.py", line 253, in run_setup super(_BuildMetaLegacyBackend, File "d:\python\python39\lib\site-packages\setuptools\build_meta.py", line 145, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 499, in <module> setup_package() File "setup.py", line 479, in setup_package generate_cython() File "setup.py", line 274, in generate_cython raise RuntimeError("Running cythonize failed!") RuntimeError: Running cythonize failed! ---------------------------------------- ERROR: Command errored out with exit status 1: 'd:\python\python39\python.exe' 'd:\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\METNUS~1\AppData\Local\Temp\tmpqz3brme_' Check the logs for full command output. 
Traceback (most recent call last): File "d:\python\python39\lib\site-packages\setuptools\installer.py", line 126, in fetch_build_egg subprocess.check_call(cmd) File "d:\python\python39\lib\subprocess.py", line 373, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['d:\\python\\python39\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\METNUS~1\\AppData\\Local\\Temp\\tmppoh8r2c9', '--quiet', 'numpy>=1.15']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\METİNUSTA\AppData\Local\Temp\pip-install-8iv10tb_\matplotlib\setup.py", line 242, in <module> setup( # Finally, pass this all along to distutils to do the heavy lifting. File "d:\python\python39\lib\site-packages\setuptools\__init__.py", line 152, in setup _install_setup_requires(attrs) File "d:\python\python39\lib\site-packages\setuptools\__init__.py", line 147, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "d:\python\python39\lib\site-packages\setuptools\dist.py", line 673, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "d:\python\python39\lib\site-packages\pkg_resources\__init__.py", line 764, in resolve dist = best[req.key] = env.best_match( File "d:\python\python39\lib\site-packages\pkg_resources\__init__.py", line 1049, in best_match return self.obtain(req, installer) File "d:\python\python39\lib\site-packages\pkg_resources\__init__.py", line 1061, in obtain return installer(requirement) File "d:\python\python39\lib\site-packages\setuptools\dist.py", line 732, in fetch_build_egg return fetch_build_egg(self, req) File "d:\python\python39\lib\site-packages\setuptools\installer.py", line 128, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['d:\\python\\python39\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\METNUS~1\\AppData\\Local\\Temp\\tmppoh8r2c9', '--quiet', 'numpy>=1.15']' returned non-zero exit status 1. Edit setup.cfg to change the build options; suppress output with --quiet. BUILDING MATPLOTLIB matplotlib: yes [3.3.2] python: yes [3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]] platform: yes [win32] sample_data: yes [installing] tests: no [skipping due to configuration] macosx: no [Mac OS-X only] ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Edit: matplotlib has now released wheels for Python 3.9, so pip install --upgrade matplotlib should work. Original answer: matplotlib hasn't made a wheel yet for version 3.9, so your pip attempted to build it from source. You should downgrade to Python 3.8 and then everything should work.
7
8