question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
61,670,081 | 2020-5-8 | https://stackoverflow.com/questions/61670081/django-secret-key-setting-must-not-be-empty-with-github-workflow | I have a GitHub workflow for Django and when it gets to migrating the database it gives the error django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty. the secret key is stored in a .env file and loaded with from dotenv import load_dotenv load_dotenv() from pathlib import Path env_path = Path('.') / '.env' load_dotenv(dotenv_path=env_path) SECRET_KEY = os.getenv("secret_key") Here is the file tree C:. | db.sqlite3 | manage.py | \---djangosite | .env | asgi.py | settings.py | urls.py | wsgi.py | __init__.py | \---__pycache__ ... This is the manage.py, it is the regular django one with the load .env code from settings.py #!/usr/bin/env python """Django's command-line utility for administrative tasks.""" from dotenv import load_dotenv load_dotenv() from pathlib import Path env_path = Path('.') / '.env' load_dotenv(dotenv_path=env_path) import os import sys def main(): os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangosite.settings') try: from django.core.management import execute_from_command_line except ImportError as exc: raise ImportError( "Couldn't import Django. Are you sure it's installed and " "available on your PYTHONPATH environment variable? Did you " "forget to activate a virtual environment?" ) from exc execute_from_command_line(sys.argv) if __name__ == '__main__': main() when I run manage.py on my PC it loads the key and runs the server, but GitHub gives the error above. How do I stop this error from happening? | If you have stored the SECRET_KEY in your system's environment variable, then for GitHub workflow, you can add a dummy environment variable in the YAML file. The settings.py should look like this import os ... SECRET_KEY = os.environ.get('SECRET_KEY') # Or the name by which you stored environment variable ... The steps are given below: Step 1: Generate a dummy SECRET_KEY. You can create it yourself by import secrets print(secrets.token_hex(25)) Or generate from a site like this. Step 2: In your .github/workflows YAML file (e.g., django.yml), add this steps: ... - name: Run Tests env: SECRET_KEY: your-genereated-secret_key run: | python manage.py test Then everything will work fine with the same version of code in your local environment, production environment, and GitHub workflow. | 9 | 13 |
61,706,535 | 2020-5-10 | https://stackoverflow.com/questions/61706535/keras-validation-loss-and-accuracy-stuck-at-0 | I am trying to train a simple 2 layer Fully Connected neural net for Binary Classification in Tensorflow keras. I have split my data into Training and Validation sets with a 80-20 split using sklearn's train_test_split(). When I call model.fit(X_train, y_train, validation_data=[X_val, y_val]), it shows 0 validation loss and accuracy for all epochs, but it trains just fine. Also, when I try to evaluate it on the validation set, the output is non-zero. Can someone please explain why I am facing this 0 loss 0 accuracy error on validation. Thanks for your help. Here is the complete sample code (MCVE) for this error: https://colab.research.google.com/drive/1P8iCUlnD87vqtuS5YTdoePcDOVEKpBHr?usp=sharing | If you use keras instead of tf.keras everything works fine. With tf.keras, I even tried validation_data = [X_train, y_train], this also gives zero accuracy. Here is a demonstration: model.fit(X_train, y_train, validation_data=[X_train.to_numpy(), y_train.to_numpy()], epochs=10, batch_size=64) Epoch 1/10 8/8 [==============================] - 0s 6ms/step - loss: 0.7898 - accuracy: 0.6087 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 2/10 8/8 [==============================] - 0s 6ms/step - loss: 0.6710 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 3/10 8/8 [==============================] - 0s 5ms/step - loss: 0.6748 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 4/10 8/8 [==============================] - 0s 6ms/step - loss: 0.6716 - accuracy: 0.6370 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 5/10 8/8 [==============================] - 0s 6ms/step - loss: 0.6085 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 6/10 8/8 [==============================] - 0s 6ms/step - loss: 0.6744 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 7/10 8/8 [==============================] - 0s 6ms/step - loss: 0.6102 - accuracy: 0.6522 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 8/10 8/8 [==============================] - 0s 6ms/step - loss: 0.7032 - accuracy: 0.6109 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 9/10 8/8 [==============================] - 0s 5ms/step - loss: 0.6283 - accuracy: 0.6717 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 10/10 8/8 [==============================] - 0s 5ms/step - loss: 0.6120 - accuracy: 0.6652 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 So, definitely there is some issue with tensorflow implementation of fit. I dug up the source, and it seems the part responsible for validation_data: ... ... # Run validation. if validation_data and self._should_eval(epoch, validation_freq): val_x, val_y, val_sample_weight = ( data_adapter.unpack_x_y_sample_weight(validation_data)) val_logs = self.evaluate( x=val_x, y=val_y, sample_weight=val_sample_weight, batch_size=validation_batch_size or batch_size, steps=validation_steps, callbacks=callbacks, max_queue_size=max_queue_size, workers=workers, use_multiprocessing=use_multiprocessing, return_dict=True) val_logs = {'val_' + name: val for name, val in val_logs.items()} epoch_logs.update(val_logs) internally calls model.evaluate, as we have already established evaluate works fine, I realized the only culprit could be unpack_x_y_sample_weight. 
So, I looked into the implementation: def unpack_x_y_sample_weight(data): """Unpacks user-provided data tuple.""" if not isinstance(data, tuple): return (data, None, None) elif len(data) == 1: return (data[0], None, None) elif len(data) == 2: return (data[0], data[1], None) elif len(data) == 3: return (data[0], data[1], data[2]) raise ValueError("Data not understood.") It's crazy, but if you just pass a tuple instead of a list, everything works fine due to the check inside unpack_x_y_sample_weight. (Your labels are missing after this step and somehow the data is getting fixed inside evaluate, so you're training with no reasonable labels, this seems like a bug but the documentation clearly states to pass tuple) The following code gives correct validation accuracy and loss: model.fit(X_train, y_train, validation_data=(X_train.to_numpy(), y_train.to_numpy()), epochs=10, batch_size=64) Epoch 1/10 8/8 [==============================] - 0s 7ms/step - loss: 0.5832 - accuracy: 0.6696 - val_loss: 0.6892 - val_accuracy: 0.6674 Epoch 2/10 8/8 [==============================] - 0s 7ms/step - loss: 0.6385 - accuracy: 0.6804 - val_loss: 0.8984 - val_accuracy: 0.5565 Epoch 3/10 8/8 [==============================] - 0s 7ms/step - loss: 0.6822 - accuracy: 0.6391 - val_loss: 0.6556 - val_accuracy: 0.6739 Epoch 4/10 8/8 [==============================] - 0s 6ms/step - loss: 0.6276 - accuracy: 0.6609 - val_loss: 1.0691 - val_accuracy: 0.5630 Epoch 5/10 8/8 [==============================] - 0s 7ms/step - loss: 0.7048 - accuracy: 0.6239 - val_loss: 0.6474 - val_accuracy: 0.6326 Epoch 6/10 8/8 [==============================] - 0s 7ms/step - loss: 0.6545 - accuracy: 0.6500 - val_loss: 0.6659 - val_accuracy: 0.6043 Epoch 7/10 8/8 [==============================] - 0s 7ms/step - loss: 0.5796 - accuracy: 0.6913 - val_loss: 0.6891 - val_accuracy: 0.6435 Epoch 8/10 8/8 [==============================] - 0s 7ms/step - loss: 0.5915 - accuracy: 0.6891 - val_loss: 0.5307 - val_accuracy: 0.7152 Epoch 9/10 8/8 [==============================] - 0s 7ms/step - loss: 0.5571 - accuracy: 0.7000 - val_loss: 0.5465 - val_accuracy: 0.6957 Epoch 10/10 8/8 [==============================] - 0s 7ms/step - loss: 0.7133 - accuracy: 0.6283 - val_loss: 0.7046 - val_accuracy: 0.6413 So, as this seems to be a bug, I have just opened a relevant issue at Tensorflow Github repo: https://github.com/tensorflow/tensorflow/issues/39370 | 26 | 39 |
61,705,858 | 2020-5-10 | https://stackoverflow.com/questions/61705858/keras-unboundlocalerror-local-variable-logs-referenced-before-assignment | I am relatively new to python, and while attempting to train a chatbot I received the error: ‘UnboundLocalError: local variable 'logs' referenced before assignment‘. I used model.fit to train: model.fit(x_train, y_train, epochs=7) And I received the error: UnboundLocalError Traceback (most recent call last) <ipython-input-10-847c83704a3f> in <module>() 2 x_train, 3 y_train, ----> 4 epochs=7 5 ) 1 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs) 64 def _method_wrapper(self, *args, **kwargs): 65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access ---> 66 return method(self, *args, **kwargs) 67 68 # Running inside `run_distribute_coordinator` already. /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 854 logs = tmp_logs # No error, now safe to assign to logs. 855 callbacks.on_train_batch_end(step, logs) --> 856 epoch_logs = copy.copy(logs) 857 858 # Run validation. UnboundLocalError: local variable 'logs' referenced before assignment I ran this in google colab, with the link here: https://colab.research.google.com/drive/18uTvvKYDrd8CQi31kg6vX2Dbxg1gD20X?usp=sharing I used the chatterbot/english dataset on kaggle: https://www.kaggle.com/kausr25/chatterbotenglish | This issue looks similar to the problem I had while working with small datasets and it is covered in this thread: #38064. I solved my particular issue setting a smaller batch_size, in my case: batch_size = 2 | 15 | 21 |
61,740,748 | 2020-5-11 | https://stackoverflow.com/questions/61740748/python-dataclass-generate-hash-and-exclude-unsafe-fields | I have this dataclass: from dataclasses import dataclass, field from typing import List @dataclass class Person: name: str dob: str friends: List['Person'] = field(default_factory=list, init=False) name and dob are immutable and friends is mutable. I want to generate a hash of each person object. Can I somehow specify which field to be included and excluded for generating the __hash__ method? In this case, name and dob should be included in generating the hash and friends shouldn't. This is my attempt but it doesn't work @dataclass class Person: name: str = field(hash=True) dob: str = field(hash=True) friends: List['Person'] = field(default_factory=list, init=False, hash=False) >>> hash(Person("Mike", "01/01/1900")) Traceback (most recent call last): File "<pyshell#43>", line 1, in <module> hash(Person("Mike", "01/01/1900")) TypeError: unhashable type: 'Person' I also can't find a way to set name and dob to be frozen. And I'd refrain from setting unsafe_hash to True, just by the sound of it. Any suggestions? Also, is what I'm doing considered good practice? If not, can you suggest some alternatives? Thank you Edit: This is just a toy example and we can assume that the name and dob fields are unique. Edit: I gave an example to demonstrate the error. | Just indicate that the friends field should not be taken in account when comparing instances with __eq__, and pass hash=True to field instances on the desired fields. Then, pass the unsafe_hash=True argument to the dataclass decorator itself - it will work as you intend (mostly): In case of hash, the language restriction is that if one instance compares equal with another (__eq__), the hash of of both must be equal as well. The implication in this case is that if you have two instances of the "same" person with the same "name" and "dob" fields, they will be considered equal, even if they feature different friends lists. Other than that, this should work: from dataclasses import dataclass, field from typing import List @dataclass(unsafe_hash=True) class Person: name: str = field(hash=True) dob: str = field(hash=True) friends: List['Person'] = field(default_factory=list, init=False, compare=False, hash=False) Then, remember to behave like a "consenting adult" and not change the "name" and "dob" fields of Person instances in any place, and you are set. | 8 | 13 |
61,675,812 | 2020-5-8 | https://stackoverflow.com/questions/61675812/from-excel-to-list-of-tuples | I have an Excel (.xlsx) file that has two columns of phrases. For example: John I have a dog Mike I need a cat Nick I go to school I want to import it into Python and get a list of tuples like: [('John', 'I have a dog'), ('Mike', 'I need a cat'), ('Nick', 'I go to school'), ...] What could I do? | You can read the Excel file using pd.read_excel. You need to check whether the file has a header row or not. As you said, it returns a dataframe. In my case, I have the following. df = pd.read_excel("data.xlsx") print(df) # name message # 0 John I have a dog # 1 Mike I need a cat # 2 Nick I go to school Then, it's possible to get the values of the dataframe using to_numpy, which returns a numpy array. If you want a list, use the numpy method tolist to convert it to a list: out = df.to_numpy().tolist() print(out) # [['John', 'I have a dog'], # ['Mike', 'I need a cat'], # ['Nick', 'I go to school']] As you can see, the output is a list of lists. If you want a list of tuples, just cast them: # for getting a list of tuples out = [tuple(elt) for elt in out] print(out) # [('John', 'I have a dog'), # ('Mike', 'I need a cat'), # ('Nick', 'I go to school')] Note: An older solution was to call values instead of to_numpy(). However, the documentation clearly recommends using to_numpy and avoiding values. Hope that helps! | 10 | 5 |
61,718,126 | 2020-5-10 | https://stackoverflow.com/questions/61718126/how-to-multiply-two-2d-rfft-arrays-fftpack-to-be-compatible-with-numpys-fft | I'm trying to multiply two 2D arrays that were transformed with fftpack_rfft2d() (SciPy's FFTPACK RFFT) and the result is not compatible with what I get from scipy_rfft2d() (SciPy's FFT RFFT). The image below shares the output of the script, which displays: The initialization values of both input arrays; Both arrays after they were transform with SciPy's FFT implementation for RFFT using scipy_rfft2d(), followed by the output of the multiplication after its transformed backwards with scipy_irfft2d(); The same things using SciPy's FFTPACK implementation for RFFT with fftpack_rfft2d() and fftpack_irfft2d(); The result of a test with np.allclose() that checks if the result of both multiplications are the same after they were transformed back with their respective implementations for IRFFT. Just to be clear, the red rectangles display the multiplication result after the inverse transform IRFFT: the rectangle on the left uses SciPy's FFT IRFFT; the rectangle on the right, SciPy's FFTPACK IRFFT. They should present the same data when the multiplication with the FFTPACK version is fixed. I think the multiplication result with the FFTPACK version is not correct because scipy.fftpack returns the real and imaginary parts in the resulting RFFT array differently than the RFFT from scipy.fft: I believe that RFFT from scipy.fftpack returns an array where one element contains the real part and the next element holds its imaginary counterpart; In RFFT from scipy.fft, each element is a complex number and therefore is able to hold the real and imaginary parts simultaneously; Please correct me if I'm wrong! I would also like to point out that since scipy.fftpack doesn't provide functions for transforming 2D arrays like rfft2() and irfft2(), I'm providing my own implementations in the code below: import numpy as np from scipy import fftpack as scipy_fftpack from scipy import fft as scipy_fft # SCIPY RFFT 2D def scipy_rfft2d(matrix): fftRows = [scipy_fft.rfft(row) for row in matrix] return np.transpose([scipy_fft.fft(row) for row in np.transpose(fftRows)]) # SCIPY IRFFT 2D def scipy_irfft2d(matrix, s): fftRows = [scipy_fft.irfft(row) for row in matrix] return np.transpose([scipy_fft.ifft(row) for row in np.transpose(fftRows)]) # FFTPACK RFFT 2D def fftpack_rfft2d(matrix): fftRows = [scipy_fftpack.rfft(row) for row in matrix] return np.transpose([scipy_fftpack.rfft(row) for row in np.transpose(fftRows)]) # FFTPACK IRFFT 2D def fftpack_irfft2d(matrix): fftRows = [scipy_fftpack.irfft(row) for row in matrix] return np.transpose([scipy_fftpack.irfft(row) for row in np.transpose(fftRows)]) print('\n#################### INPUT DATA ###################\n') # initialize two 2D arrays with random data for testing in1 = np.array([[0, 0, 0, 0], \ [0, 255, 255, 0], \ [0, 0, 255, 255], \ [0, 0, 0, 0]]) print('\nin1 shape=', in1.shape, '\n', in1) in2 = np.array([[0, 0, 0, 0], \ [0, 0, 255, 0], \ [0, 255, 255, 0], \ [0, 255, 0, 0]]) print('\nin2 shape=', in2.shape, '\n', in2) print('\n############### SCIPY: 2D RFFT (MULT) ###############\n') # transform both inputs with SciPy RFFT for 2D scipy_rfft1 = scipy_fft.rfftn(in1) scipy_rfft2 = scipy_fft.rfftn(in2) print('* Output from scipy_fft.rfftn():') print('scipy_fft1 shape=', scipy_rfft1.shape, '\n', scipy_rfft1.real) print('\nscipy_fft2 shape=', scipy_rfft2.shape, '\n', scipy_rfft2.real) # perform multiplication between two 2D 
arrays from SciPy RFFT scipy_rfft_mult = scipy_rfft1 * scipy_rfft2 # perform inverse RFFT for 2D arrays using SciPy scipy_data = scipy_fft.irfftn(scipy_rfft_mult, in1.shape) # passing shape guarantees the output will have the original data size print('\n* Output from scipy_fft.irfftn():') print('scipy_data shape=', scipy_data.shape, '\n', scipy_data) print('\n############### FFTPACK: 2D RFFT (MULT) ###############\n') # transform both inputs with FFTPACK RFFT for 2D fftpack_rfft1 = fftpack_rfft2d(in1) fftpack_rfft2 = fftpack_rfft2d(in2) print('* Output from fftpack_rfft2d():') print('fftpack_rfft1 shape=', fftpack_rfft1.shape, '\n', fftpack_rfft1) print('\nfftpack_rfft2 shape=', fftpack_rfft2.shape, '\n', fftpack_rfft2) # TODO: perform multiplication between two 2D arrays from FFTPACK RFFT fftpack_rfft_mult = fftpack_rfft1 * fftpack_rfft2 # this doesn't work # perform inverse RFFT for 2D arrays using FFTPACK fftpack_data = fftpack_irfft2d(fftpack_rfft_mult) print('\n* Output from fftpack_irfft2d():') print('fftpack_data shape=', fftpack_data.shape, '\n', fftpack_data) print('\n##################### RESULT #####################\n') # compare FFTPACK result with SCIPY print('\nIs fftpack_data equivalent to scipy_data?', np.allclose(fftpack_data, scipy_data), '\n') Assuming my guess is correct, what would be the correct implementation for a function that multiplies two 2D arrays that were generated from fftpack_rfft2d()? Remember: the resulting array must be able to be transformed back with fftpack_irfft2d(). Only answers that address the problem in 2-dimensions are invited. Those interested in how to multiply 1D FFTPACK arrays can check this thread. | Correct functions: import numpy as np from scipy import fftpack as scipy_fftpack from scipy import fft as scipy # FFTPACK RFFT 2D def fftpack_rfft2d(matrix): fftRows = scipy_fftpack.fft(matrix, axis=1) fftCols = scipy_fftpack.fft(fftRows, axis=0) return fftCols # FFTPACK IRFFT 2D def fftpack_irfft2d(matrix): ifftRows = scipy_fftpack.ifft(matrix, axis=1) ifftCols = scipy_fftpack.ifft(ifftRows, axis=0) return ifftCols.real You calculated the 2D FFT in wrong way. Yes, the first FFT (by columns in your case) can be calculated using rfft(), but the second FFT calculation must be provided on the complex output of the first FFT (by columns), so the output of the rfft() must be converted into true complex spectrum. Moreover, this mean, that you must use fft() instead of rfft() for the second FFT by rows. Consiquently, it is more convenient to use fft() in both calculations. Moreover, you have input data as a numpy 2D arrays, why do you use list comprehension? Use fftpack.fft() directly, this is much faster. 
If you already have only 2D arrays calculated by wrong functions and need multiply them: then, my opinion, to try reconstruct the input data from the wrong 2D FFT using the same 'wrong' way and then calculate correct 2D FFT ================================================================ The full testing code with new functions version: import numpy as np from scipy import fftpack as scipy_fftpack from scipy import fft as scipy_fft # FFTPACK RFFT 2D def fftpack_rfft2d(matrix): fftRows = scipy_fftpack.fft(matrix, axis=1) fftCols = scipy_fftpack.fft(fftRows, axis=0) return fftCols # FFTPACK IRFFT 2D def fftpack_irfft2d(matrix): ifftRows = scipy_fftpack.ifft(matrix, axis=1) ifftCols = scipy_fftpack.ifft(ifftRows, axis=0) return ifftCols.real print('\n#################### INPUT DATA ###################\n') # initialize two 2D arrays with random data for testing in1 = np.array([[0, 0, 0, 0], \ [0, 255, 255, 0], \ [0, 0, 255, 255], \ [0, 0, 0, 0]]) print('\nin1 shape=', in1.shape, '\n', in1) in2 = np.array([[0, 0, 0, 0], \ [0, 0, 255, 0], \ [0, 255, 255, 0], \ [0, 255, 0, 0]]) print('\nin2 shape=', in2.shape, '\n', in2) print('\n############### SCIPY: 2D RFFT (MULT) ###############\n') # transform both inputs with SciPy RFFT for 2D scipy_rfft1 = scipy_fft.fftn(in1) scipy_rfft2 = scipy_fft.fftn(in2) print('* Output from scipy_fft.rfftn():') print('scipy_fft1 shape=', scipy_rfft1.shape, '\n', scipy_rfft1) print('\nscipy_fft2 shape=', scipy_rfft2.shape, '\n', scipy_rfft2) # perform multiplication between two 2D arrays from SciPy RFFT scipy_rfft_mult = scipy_rfft1 * scipy_rfft2 # perform inverse RFFT for 2D arrays using SciPy scipy_data = scipy_fft.irfftn(scipy_rfft_mult, in1.shape) # passing shape guarantees the output will # have the original data size print('\n* Output from scipy_fft.irfftn():') print('scipy_data shape=', scipy_data.shape, '\n', scipy_data) print('\n############### FFTPACK: 2D RFFT (MULT) ###############\n') # transform both inputs with FFTPACK RFFT for 2D fftpack_rfft1 = fftpack_rfft2d(in1) fftpack_rfft2 = fftpack_rfft2d(in2) print('* Output from fftpack_rfft2d():') print('fftpack_rfft1 shape=', fftpack_rfft1.shape, '\n', fftpack_rfft1) print('\nfftpack_rfft2 shape=', fftpack_rfft2.shape, '\n', fftpack_rfft2) # TODO: perform multiplication between two 2D arrays from FFTPACK RFFT fftpack_rfft_mult = fftpack_rfft1 * fftpack_rfft2 # this doesn't work # perform inverse RFFT for 2D arrays using FFTPACK fftpack_data = fftpack_irfft2d(fftpack_rfft_mult) print('\n* Output from fftpack_irfft2d():') print('fftpack_data shape=', fftpack_data.shape, '\n', fftpack_data) print('\n##################### RESULT #####################\n') # compare FFTPACK result with SCIPY print('\nIs fftpack_data equivalent to scipy_data?', np.allclose(fftpack_data, scipy_data), '\n') The output is: #################### INPUT DATA ################### in1 shape= (4, 4) [[ 0 0 0 0] [ 0 255 255 0] [ 0 0 255 255] [ 0 0 0 0]] in2 shape= (4, 4) [[ 0 0 0 0] [ 0 0 255 0] [ 0 255 255 0] [ 0 255 0 0]] ############### SCIPY: 2D RFFT (MULT) ############### * Output from scipy_fft.rfftn(): scipy_fft1 shape= (4, 4) [[1020. -0.j -510. +0.j 0. -0.j -510. -0.j] [-510.-510.j 0. +0.j 0. +0.j 510.+510.j] [ 0. -0.j 0.+510.j 0. -0.j 0.-510.j] [-510.+510.j 510.-510.j 0. -0.j 0. -0.j]] scipy_fft2 shape= (4, 4) [[1020. -0.j -510.-510.j 0. -0.j -510.+510.j] [-510. +0.j 510.+510.j 0.-510.j 0. -0.j] [ 0. -0.j 0. +0.j 0. -0.j 0. -0.j] [-510. -0.j 0. 
+0.j 0.+510.j 510.-510.j]] * Output from scipy_fft.irfftn(): scipy_data shape= (4, 4) [[130050. 65025. 65025. 130050.] [ 65025. 0. 0. 65025.] [ 65025. 0. 0. 65025.] [130050. 65025. 65025. 130050.]] ############### FFTPACK: 2D RFFT (MULT) ############### * Output from fftpack_rfft2d(): fftpack_rfft1 shape= (4, 4) [[1020. -0.j -510. +0.j 0. -0.j -510. +0.j] [-510.-510.j 0. +0.j 0. +0.j 510.+510.j] [ 0. +0.j 0.+510.j 0. +0.j 0.-510.j] [-510.+510.j 510.-510.j 0. +0.j 0. +0.j]] fftpack_rfft2 shape= (4, 4) [[1020. -0.j -510.-510.j 0. -0.j -510.+510.j] [-510. +0.j 510.+510.j 0.-510.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j 0. +0.j] [-510. +0.j 0. +0.j 0.+510.j 510.-510.j]] * Output from fftpack_irfft2d(): fftpack_data shape= (4, 4) [[130050.+0.j 65025.+0.j 65025.+0.j 130050.+0.j] [ 65025.+0.j 0.+0.j 0.+0.j 65025.+0.j] [ 65025.+0.j 0.+0.j 0.+0.j 65025.+0.j] [130050.+0.j 65025.+0.j 65025.-0.j 130050.+0.j]] ##################### RESULT ##################### Is fftpack_data equivalent to scipy_data? True | 10 | 4 |
61,723,421 | 2020-5-11 | https://stackoverflow.com/questions/61723421/how-python-interpreter-treats-the-position-of-the-function-definition-having-def | Why the first code outputs 51 and the second code outputs 21. I understand the second code should output 21, but the way I understood, the first code should also output 21 (The value of b changed to 20 and then is calling the function f). What am I missing? b = 50 def f(a, b=b): return a + b b = 20 print(f(1)) Output: 51 b = 50 b = 20 def f(a, b=b): return a + b print(f(1)) Output: 21 Edit: This is different from How to change default value of optional function parameter in Python 2.7? because here the unintentional change happening to the default parameter is being discussed, not how to intentionally change the value of default parameter, ie here the question focuses on how the python interpreter treats the position of function definition for functions having default parameters. | Tip for python beginners : If you use IDEs like pycharm - you can put a debugger and see what is happening with the variables. We can get a better understanding of what is going on using the id(b) which gets us the address of the particular object in memory: Return the “identity” of an object. This is an integer which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. CPython implementation detail: This is the address of the object in memory. Let me modify your code to the following : b = 50 print("b=50 :", id(b)) def f(a, b=b): print("b in the function f :", id(b)) print(id(b)) return a + b b = 20 print("b=20 :", id(b)) print(f(1)) The output is as following: b=50 : 4528710960 b=20 : 4528710000 b in the function f : 4528710960 4528710960 51 As you can see the b inside the function and the b=50 have the same address. When you do b=20 a new object was created. In Python, (almost) everything is an object. What we commonly refer to as "variables" in Python are more properly called names. Likewise, "assignment" is really the binding of a name to an object. Each binding has a scope that defines its visibility, usually the block in which the name originates. In python When you do b=50 a binding of b to an int object is created in the scope of the block When we later say b=20 the int object b=50 is unaffected. These both are essentially two different objects. You can read more about it in these links. Is Python call-by-value or call-by-reference? Neither. Parameter Passing Python id() | 11 | 10 |
61,692,952 | 2020-5-9 | https://stackoverflow.com/questions/61692952/how-to-pass-debug-to-build-ext-when-invoking-setup-py-install | When I execute a command python setup.py install or python setup.py develop it would execute build_ext command as one of the steps. How can I pass --debug option to it as if it was invoked as python setup.py build_ext --debug? UPDATE Here is a setup.py very similar to mine: https://github.com/pybind/cmake_example/blob/11a644072b12ad78352b6e6649db9dfe7f406676/setup.py#L43 I'd like to invoke python setup.py install but turn debug property in build_ext class instance to 1. | A. If I am not mistaken, one could achieve that by adding the following to a setup.cfg file alongside the setup.py file: [build_ext] debug = 1 B.1. For more flexibility, I believe it should be possible to be explicit on the command line: $ path/to/pythonX.Y setup.py build_ext --debug install B.2. Also if I understood right it should be possible to define so-called aliases # setup.cfg [aliases] release_install = build_ext install debug_install = build_ext --debug install $ path/to/pythonX.Y setup.py release_install $ path/to/pythonX.Y setup.py debug_install References https://docs.python.org/3/distutils/configfile.html https://setuptools.readthedocs.io/en/latest/setuptools.html#alias-define-shortcuts-for-commonly-used-commands | 8 | 13 |
61,754,736 | 2020-5-12 | https://stackoverflow.com/questions/61754736/django-3-model-save-when-providing-a-default-for-the-primary-key | I'm working on upgrading my project from Django 2 to Django 3, I have read their release notes of Django 3 and there is a point that I don't really understand what it will impact to my current project. Here they say: As I understand, if we try to call Model.save(), it would always create a new record instead of updating if the model is an existing record. For example: car = Car.objects.first() car.name = 'Honda' car.save() # does it INSERT or UPDATE? I suspect it is an "INSERT" statement as their explanation and "UPDATE" statement in Django 2. I have had an experimented and it is still the same behaviour as Django 2, not sure what they mean. In [5]: u = User.objects.first() (0.001) SELECT "accounts_user"."id", "accounts_user"."password", "accounts_user"."last_login", "accounts_user"."is_superuser", "accounts_user"."username", "accounts_user"."first_name", "accounts_user"."last_name", "accounts_user"."is_staff", "accounts_user"."is_active", "accounts_user"."date_joined", "accounts_user"."email", "accounts_user"."avatar", "accounts_user"."last_location"::bytea, "accounts_user"."uuid", "accounts_user"."country", "accounts_user"."city", "accounts_user"."phone" FROM "accounts_user" ORDER BY "accounts_user"."id" ASC LIMIT 1; args=() In [6]: u.save() (0.006) UPDATE "accounts_user" SET "password" = 'pbkdf2_sha256_sha512$180000$FbFcNuPMrOZ6$GwIftEo+7+OpsORwn99lycye46aJn/aJNAtc50N478Y=', "last_login" = NULL, "is_superuser" = false, "username" = '[email protected]', "first_name" = 'Noah', "last_name" = 'Spencer', "is_staff" = false, "is_active" = true, "date_joined" = '2020-05-12T07:06:20.605650+00:00'::timestamptz, "email" = '[email protected]', "avatar" = 'account/user_avatar/example_HseJquC.jpg', "last_location" = NULL, "uuid" = 'f6992866-e476-409e-9f1b-098afadce5b7'::uuid, "country" = NULL, "city" = NULL, "phone" = NULL WHERE "accounts_user"."id" = 1; args=('pbkdf2_sha256_sha512$180000$FbFcNuPMrOZ6$GwIftEo+7+OpsORwn99lycye46aJn/aJNAtc50N478Y=', False, '[email protected]', 'Noah', 'Spencer', False, True, datetime.datetime(2020, 5, 12, 7, 6, 20, 605650, tzinfo=<UTC>), '[email protected]', 'account/user_avatar/example_HseJquC.jpg', UUID('f6992866-e476-409e-9f1b-098afadce5b7'), 1) Update: In [38]: u1 = User.objects.first() (0.000) SELECT "accounts_user"."id", "accounts_user"."password", "accounts_user"."last_login", "accounts_user"."is_superuser", "accounts_user"."username", "accounts_user"."first_name", "accounts_user"."last_name", "accounts_user"."is_staff", "accounts_user"."is_active", "accounts_user"."date_joined", "accounts_user"."email", "accounts_user"."avatar", "accounts_user"."last_location"::bytea, "accounts_user"."uuid", "accounts_user"."country", "accounts_user"."city", "accounts_user"."phone" FROM "accounts_user" ORDER BY "accounts_user"."id" ASC LIMIT 1; args=() In [39]: u1.pk Out[39]: 1 In [40]: u2 = User(pk=1) In [41]: u2.email = '[email protected]' In [42]: u2.save() (0.006) UPDATE "accounts_user" SET "password" = '', "last_login" = NULL, "is_superuser" = false, "username" = '[email protected]', "first_name" = '', "last_name" = '', "is_staff" = false, "is_active" = true, "date_joined" = '2020-05-13T01:20:47.718449+00:00'::timestamptz, "email" = '[email protected]', "avatar" = '', "last_location" = NULL, "uuid" = '89ba0924-03a7-44d2-bc6d-5fd2dcb0de0b'::uuid, "country" = NULL, "city" = NULL, "phone" = NULL WHERE "accounts_user"."id" = 1; args=('', 
False, '[email protected]', '', '', False, True, datetime.datetime(2020, 5, 13, 1, 20, 47, 718449, tzinfo=<UTC>), '[email protected]', '', UUID('89ba0924-03a7-44d2-bc6d-5fd2dcb0de0b'), 1) | Consider this example. Suppose we have a simple model as CONSTANT = 10 def foo_pk_default(): return CONSTANT class Foo(models.Model): id = models.IntegerField(primary_key=True, default=foo_pk_default) name = models.CharField(max_length=10) The main thing I have done in this example is, I did set a default callable function for Primary Keys. Also, I returned only a single value from the function, for the sake of demonstration. ## Django 2.2 In [5]: foo_instance_1 = Foo(name='foo_name_1') In [6]: foo_instance_1.save() In [7]: print(foo_instance_1.__dict__) {'_state': , 'id': 10, 'name': 'foo_name_1'} In [8]: foo_instance_2 = Foo(name='foo_name_2') In [9]: foo_instance_2.save() In [10]: print(foo_instance_2.__dict__) {'_state': , 'id': 10, 'name': 'foo_name_2'} ## Django 3.X In [6]: foo_instance_1 = Foo(name='foo_name_1') In [7]: foo_instance_1.save() In [8]: print(foo_instance_1.__dict__) {'_state': , 'id': 10, 'name': 'foo_name_1'} In [9]: foo_instance_2 = Foo(name='foo_name_2') In [10]: foo_instance_2.save() # Raised "IntegrityError: UNIQUE constraint failed: music_foo.id" Conclusion In Django<3.0, the Model.save() will do an update or insert operation if there is a PK value associated with the model instance whereas in Django>=3.0, only perform an insert operation hence the UNIQUE constraint failed exception. Impact of this change in the current project Since this Django change is only applicable when a new instance is created and we usually don't set any default value functions for Primary Keys. In short, this change will not make any problem unless you are providing default value during model instance creation. | 13 | 15 |
61,756,716 | 2020-5-12 | https://stackoverflow.com/questions/61756716/dataclass-argument-choices-with-a-default-option | I'm making a dataclass with a field for which I'd like there to only be a few possible values. I was thinking something like this: @dataclass class Person: name: str = field(default='Eric', choices=['Eric', 'John', 'Graham', 'Terry']) I know that one solution is to validate arguments in the __post_init__ method but is there a cleaner way using something like the syntax above? | python 3.8 introduced a new type called Literal that can be used here: from dataclasses import dataclass from typing import Literal @dataclass class Person: name: Literal['Eric', 'John', 'Graham', 'Terry'] = 'Eric' Type checkers like mypy have no problems interpreting it correctly, Person('John') gets a pass, and Person('Marc') is marked as incompatible. Note that this kind of hint requires a type checker in order to be useful, it won't do anything on its own when you're just running the code. If you're on an older python version and can't upgrade to 3.8, you can also get access to the Literal type through the official pip-installable backport package typing-extensions, and import it with from typing_extensions import Literal instead. If you need to do actual checks of the passed values during runtime, you should consider using pydantic to define your dataclasses instead. Its main goal is to extend on dataclass-like structures with a powerful validation engine which will inspect the type hints in order to enforce them, i.e. what you considered to write by hand in the __post_init__. | 7 | 13 |
61,669,873 | 2020-5-8 | https://stackoverflow.com/questions/61669873/python-venv-env-fails-winerror-2-the-system-cannot-find-the-file-specified | I installed the latest version of Python, 3.8.2, on a Windows 10 machine. I previously had Python 3.7, which I uninstalled and confirmed in the System PATH it was no longer referenced. After installing the latest version, I run through CMD as Admin: py -m venv env and I get this error: Error: [WinError 2] The system cannot find the file specified: 'C:\Users\test_user\Documents\app_test\env' I know the Python path is in the System PATH environment settings, but not specifically for the user (don't know if that makes a difference?). I have also tried to uninstall virtualenv using PowerShell and reinstalling, but have the same result. Any ideas on where else to look to solve this? | I found out that Windows Defender now has a feature that blocks changes to protected folders. It added my Documents folder by default, somehow preventing me from creating any folders in CMD despite having Admin access. I hope this helps someone else!! In short -- you may have to revise or disable your Windows 10 "Ransomware protection" setting to allow you to write files to your directories. | 34 | 26 |
61,693,769 | 2020-5-9 | https://stackoverflow.com/questions/61693769/vs-code-run-debug-configuration-for-a-console-script-in-module-mode | My setup.py has the following console_scripts as entry points: entry_points={ 'console_scripts': ['script=myapp.app:do_something', 'script2=myapp.app:do_something2'], }, with the following structure . ├── myapp │ ├── __init__.py │ ├── app.py │ ├── mod.py │ ├── mod2.py │ └── submodules │ ├── __init__.py │ └── mod3.py ├── requirements.txt └── setup.py and app looking like ##my_app.app def do_something(): #do stuff def do_something2(): #do other stuff How can I get a VS Code debug configuration to enter at these module attributes? I have this that can run the module if I use if __name__ == "__main__": do_something() but want separate launch.json files depending on the console_scripts ##launch.json { "configurations": [ { "name": "Python: Module", "type": "python", "request": "launch", "cwd": "${workspaceFolder}", "module": "myapp.app", "args": ["--hello world"] } ] } Thought you might be able to do similar with: "module": "myapp.app:do_something", but alas: No module named myapp.app:do_something | There currently isn't a way to make this work. At minimum you need a separate module per entry point, or a single module that takes a command-line argument and chooses which function to call. (A hedged sketch of the per-entry-point wrapper approach appears after this table.) | 12 | 9 |
61,688,882 | 2020-5-8 | https://stackoverflow.com/questions/61688882/how-is-numpy-stack-different-from-numpy-v-stack-and-h-stack | I understand that numpy hstack stacks column wise and vstack stacks row wise. Then what is the function of numpy stack? | The key difference is in the documentation for np.stack (emphasis mine): Join a sequence of arrays along a new axis. Consider the following arrays: arr1=np.array([[1,2,3],[7,8,9]]) arr2=np.array([[4,5,6],[10,11,12]]) arr3=np.array([['a','b','c'],['d','e','f']]) [[1 2 3] [7 8 9]] [[ 4 5 6] [10 11 12]] [['a' 'b' 'c'] ['d' 'e' 'f']] Then consider the following results: #stack horizontally on existing axis np.hstack([arr1,arr2,arr3]) array([['1', '2', '3', '4', '5', '6', 'a', 'b', 'c'], ['7', '8', '9', '10', '11', '12', 'd', 'e', 'f']], dtype='<U11') shape: (2, 9) #stack vertically on existing axis np.vstack([arr1,arr2,arr3]) array([['1', '2', '3'], ['7', '8', '9'], ['4', '5', '6'], ['10', '11', '12'], ['a', 'b', 'c'], ['d', 'e', 'f']], dtype='<U11') shape: (6, 3) #stack depth-wise, adding an axis 2 np.dstack([arr1,arr2,arr3]) array([[['1', '4', 'a'], ['2', '5', 'b'], ['3', '6', 'c']], [['7', '10', 'd'], ['8', '11', 'e'], ['9', '12', 'f']]], dtype='<U11') shape: (2, 3, 3) Note that in all cases but dstack, the two arrays are joined along an existing axis (axis 0, axis 1, and dstack adds a new axis 2) Then, in light of the above results, what if we use stack instead, only changing the stacking axis? for i in [0,1,2]: stacked=np.stack([arr1,arr2,arr3],axis=i) print(f'Stacked on axis {i}\n',stacked, '\n',f'array shape:{stacked.shape}','\n') Stacked on axis 0 [[['1' '2' '3'] ['7' '8' '9']] [['4' '5' '6'] ['10' '11' '12']] [['a' 'b' 'c'] ['d' 'e' 'f']]] array shape:(3, 2, 3) Stacked on axis 1 [[['1' '2' '3'] ['4' '5' '6'] ['a' 'b' 'c']] [['7' '8' '9'] ['10' '11' '12'] ['d' 'e' 'f']]] array shape:(2, 3, 3) Stacked on axis 2 [[['1' '4' 'a'] ['2' '5' 'b'] ['3' '6' 'c']] [['7' '10' 'd'] ['8' '11' 'e'] ['9' '12' 'f']]] array shape:(2, 3, 3) Note that these are all 3 dimensional arrays, only the order of the elements changes based on the direction they are stacked | 13 | 14 |
61,758,398 | 2020-5-12 | https://stackoverflow.com/questions/61758398/aws-lambda-unable-to-marshal-response-error | I am trying to set up an AWS Lambda function that will start an SSM session in my EC2 instance and run a command. For simplicity right now I am just trying to run ls. Here is my Lambda function: import json import boto3 def lambda_handler(commands, context): """Runs commands on remote linux instances :param client: a boto/boto3 ssm client :param commands: string, each one a command to execute on the instance :return: the response from the send_command function (check the boto3 docs for ssm client.send_command() ) """ client = boto3.client('ssm') print ("Hello world") resp = client.send_command( DocumentName="AWS-RunShellScript", # Would normally pass commands param here but hardcoding it instead for testing Parameters={"commands":["ls"]}, InstanceIds=["i-01112223333"], ) return resp However when I run "Test" on this function I get the following log output: START RequestId: d028de04-4004-4a0p-c8d2-975755c84777 Version: $LATEST Hello world [ERROR] Runtime.MarshalError: Unable to marshal response: datetime.datetime(2020, 5, 12, 19, 34, 36, 349000, tzinfo=tzlocal()) is not JSON serializable END RequestId: d028de04-4asdd4-4a0f-b8d2-9asd847813 REPORT RequestId: d42asd04-44d4-4a0f-b9d2-275755c6557 Duration: 1447.76 ms Billed Duration: 1500 ms Memory Size: 128 MB Max Memory Used: 76 MB Init Duration: 301.68 ms I am not sure what this Runtime.MarshalError: Unable to marshal response error means or how to fix it. The payload I'm passing to run the Test Lambda shouldn't matter since I'm not using the commands parameter but it is: { "command": "ls" } Any help would be appreciated | The error has pretty clear wording; you're just fixating on the wrong part. Your handler returns resp, and the Lambda runtime then tries to JSON-serialize that return value. The SSM response contains datetime objects, and as the message says, datetime.datetime(2020, 5, 12, 19, 34, 36, 349000, tzinfo=tzlocal()) is not JSON serializable. This is a common error for people running into datetime for the first time; here's a pretty comprehensive question about it: How to overcome "datetime.datetime not JSON serializable"? (A minimal sketch of one way to return a JSON-safe response appears after this table.) | 22 | 6 |
61,754,710 | 2020-5-12 | https://stackoverflow.com/questions/61754710/create-pandas-column-with-the-max-of-two-calculated-values-from-other-columns | I want to create a column with the maximum value between 2 values calculated from other columns of the data frame. import pandas as pd df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]}) df['Max Col'] = max(df['A']*3, df['B']+df['A']) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). The desired outcome is a new df column ['Max Col'] with the maximum value of the above calculations. I know there is the long solution of creating two new columns with the calculations and then apply .max(axis=1). I am looking for a straight solution. Thanks. | Use np.maximum: df['max'] =np.maximum(df['A']*3, df['B']+df['A']) Output: A B max 0 1 -2 3 1 2 8 10 2 3 1 9 | 8 | 11 |
61,750,811 | 2020-5-12 | https://stackoverflow.com/questions/61750811/dropdown-menu-for-plotly-choropleth-map-plots | I am trying to create choropleth maps. Below is an example that works: df = px.data.gapminder().query("year==2007") fig = go.Figure(data=go.Choropleth( locations=happy['iso'], # Spatial coordinates z = happy['Happiness'].astype(float), # Data to be color-coded colorbar_title = "Happiness Score", )) fig.update_layout( title_text = 'Life Expectancy in 2007' ) fig.show() However, I would like to create a dropdown menu that will change the plotted values between different variables (e.g., Life Expectancy, GDP, Population). I believe that this is possible but have not seen any tutorial online. Most of them just uses other kind of barcharts or scatterplots. Here is what I have gotten so far: # Initialize figure fig = go.Figure() # Add Traces fig.add_trace(go.Figure(data=go.Choropleth( locations=df['iso_alpha'], # Spatial coordinates z = df['lifeExp'].astype(float), # Data to be color-coded colorbar_title = "Life Expectancy"))) fig.add_trace(go.Figure(data=go.Choropleth( locations=df['iso_alpha'], # Spatial coordinates z = df['gdpPercap'].astype(float), # Data to be color-coded colorbar_title = "GDP per capita"))) But I am not sure how to proceed from here. Do I need to update the layout of the figure via fig.update_layout or something? | There are two ways to solve this Dash # save this as app.py import pandas as pd import plotly.graph_objs as go import plotly.express as px import dash import dash_core_components as dcc import dash_html_components as html # Data df = px.data.gapminder().query("year==2007") df = df.rename(columns=dict(pop="Population", gdpPercap="GDP per Capita", lifeExp="Life Expectancy")) cols_dd = ["Population", "GDP per Capita", "Life Expectancy"] app = dash.Dash() app.layout = html.Div([ dcc.Dropdown( id='demo-dropdown', options=[{'label': k, 'value': k} for k in cols_dd], value=cols_dd[0] ), html.Hr(), dcc.Graph(id='display-selected-values'), ]) @app.callback( dash.dependencies.Output('display-selected-values', 'figure'), [dash.dependencies.Input('demo-dropdown', 'value')]) def update_output(value): fig = go.Figure() fig.add_trace(go.Choropleth( locations=df['iso_alpha'], # Spatial coordinates z=df[value].astype(float), # Data to be color-coded colorbar_title=value)) fig.update_layout(title=f"<b>{value}</b>", title_x=0.5) return fig if __name__ == '__main__': app.run_server() run this as python app.py and go to http://127.0.0.1:8050 Plotly In this case we need to play with visibility of different traces and create buttons in a way they show one traces and hide all the others. 
import pandas as pd import numpy as np import plotly.graph_objs as go import plotly.express as px # Data df = px.data.gapminder().query("year==2007") df = df.rename(columns=dict(pop="Population", gdpPercap="GDP per Capita", lifeExp="Life Expectancy")) cols_dd = ["Population", "GDP per Capita", "Life Expectancy"] # we need to add this to select which trace # is going to be visible visible = np.array(cols_dd) # define traces and buttons at once traces = [] buttons = [] for value in cols_dd: traces.append(go.Choropleth( locations=df['iso_alpha'], # Spatial coordinates z=df[value].astype(float), # Data to be color-coded colorbar_title=value, visible= True if value==cols_dd[0] else False)) buttons.append(dict(label=value, method="update", args=[{"visible":list(visible==value)}, {"title":f"<b>{value}</b>"}])) updatemenus = [{"active":0, "buttons":buttons, }] # Show figure fig = go.Figure(data=traces, layout=dict(updatemenus=updatemenus)) # This is in order to get the first title displayed correctly first_title = cols_dd[0] fig.update_layout(title=f"<b>{first_title}</b>",title_x=0.5) fig.show() | 7 | 9 |
61,746,001 | 2020-5-12 | https://stackoverflow.com/questions/61746001/plotly-how-to-specify-colors-for-a-group-using-go-bar | How to use plotly.graph_objs to plot pandas data in a similar way to plotly.express - specifically to color various data types? The plotly express functionality to group data types based on a value in a pandas column is really useful. Unfortunately I can't use express in my system (as I need to send the graph object to orca) I'm able to get the same functionality by specifically mapping Type to colours (full_plot in the example below), however I have types A-Z, is there a better way of mapping each possible Type in the dataframe to a color? import pandas as pd import plotly.express as px import plotly.graph_objs as go d = {'Scenario': [1, 2, 3, 1, 2,3], 'Type': ["A", "A", "A", "B", "B", "B"], 'VAL_1': [100, 200, 300, 400 , 500, 600], 'VAL_2': [1000, 2000, 3000, 4000, 5000, 6000]} df = pd.DataFrame(data=d) def quick_plot(df): fig = px.bar(df, y='VAL_1', x='Scenario', color="Type", barmode='group') fig['layout'].update(title = "PX Plot", width = 600, height = 400, xaxis = dict(showgrid=False)) fig.show() def full_plot(df): colors = {'A': 'blue', 'B': 'red'} s0=df.query('Type=="A"') s1=df.query('Type=="B"') fig = go.Figure() fig.add_trace(go.Bar( name='A', y=s0['VAL_1'],x=s0['Scenario'], marker={'color': colors['A']})) fig.add_trace(go.Bar( name='B', y=s1['VAL_1'],x=s1['Scenario'], marker={'color': colors['B']})) fig['layout'].update(title = "Full Plot", width = 600, height = 400) fig.update_layout(barmode='group') fig.show() quick_plot(df) full_plot(df) | You could simply use a dictionary like this: colors = {'A':'steelblue', 'B':'firebrick'} The only challenge lies in grouping the dataframe for each unique type and adding a new trace for each type using a for loop. The code snippet below takes care of that to produce this plot: # imports import pandas as pd import plotly.express as px import plotly.graph_objs as go # data d = {'Scenario': [1, 2, 3, 1, 2,3], 'Type': ["A", "A", "A", "B", "B", "B"], 'VAL_1': [100, 200, 300, 400 , 500, 600], 'VAL_2': [1000, 2000, 3000, 4000, 5000, 6000]} df = pd.DataFrame(data=d) # assign colors to type using a dictionary colors = {'A':'steelblue', 'B':'firebrick'} # plotly figure fig=go.Figure() for t in df['Type'].unique(): dfp = df[df['Type']==t] fig.add_traces(go.Bar(x=dfp['Scenario'], y = dfp['VAL_1'], name=t, marker_color=colors[t])) fig.show() | 9 | 11 |
61,744,288 | 2020-5-12 | https://stackoverflow.com/questions/61744288/assigning-column-names-while-creating-dataframe-results-in-nan-values | I have a list of dicts which is being converted to a dataframe. When I attempt to pass the columns argument the output values are all NaN. # This code does not result in desired output l = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}] pd.DataFrame(l, columns=['c', 'd']) c d 0 NaN NaN 1 NaN NaN # This code does result in desired output l = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}] df = pd.DataFrame(l) df.columns = ['c', 'd'] df c d 0 1 2 1 3 4 Why is this happening? | Because when you pass a list of dictionaries, the DataFrame constructor creates the new column names from the dictionary keys: l = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}] print (pd.DataFrame(l)) a b 0 1 2 1 3 4 If you pass a columns parameter containing values that do not exist among the dictionary keys, the columns coming from the dictionaries are filtered down to those names, and columns of missing values are created for the non-existent names, in the order given in the columns list: # changed order works, because the a and b keys exist in at least one dictionary print (pd.DataFrame(l, columns=['b', 'a'])) b a 0 2 1 1 4 3 # a is kept, d is filled with missing values - d is not a key in any dictionary print (pd.DataFrame(l, columns=['a', 'd'])) a d 0 1 NaN 1 3 NaN # b is kept, c is filled with missing values - c is not a key in any dictionary print (pd.DataFrame(l, columns=['c', 'b'])) c b 0 NaN 2 1 NaN 4 # a, b are kept, c, d are filled with missing values - c, d are not keys in any dictionary print (pd.DataFrame(l, columns=['c', 'd','a','b'])) c d a b 0 NaN NaN 1 2 1 NaN NaN 3 4 So if you want different column names, you need to rename them or set new ones, as in your second code. | 7 | 10 |
61,638,496 | 2020-5-6 | https://stackoverflow.com/questions/61638496/python-remove-square-brackets-and-extraneous-information-between-them | I'm trying to handle a file, and I need to remove extraneous information in the file; notably, I'm trying to remove brackets [] including text inside and between bracket [] [] blocks, Saying that everything between these blocks including them itself but print everything outside it. Below is my text File with data sample: $ cat smb Hi this is my config file. Please dont delete it [homes] browseable = No comment = Your Home create mode = 0640 csc policy = disable directory mask = 0750 public = No writeable = Yes [proj] browseable = Yes comment = Project directories csc policy = disable path = /proj public = No writeable = Yes [] This last second line. End of the line. Desired Output: Hi this is my config file. Please dont delete it This last second line. End of the line. What i have tried based on my understanding and re-search: $ cat test.py with open("smb", "r") as file: for line in file: start = line.find( '[' ) end = line.find( ']' ) if start != -1 and end != -1: result = line[start+1:end] print(result) Output: $ ./test.py homes proj | with one regex import re with open("smb", "r") as f: txt = f.read() txt = re.sub(r'(\n\[)(.*?)(\[]\n)', '', txt, flags=re.DOTALL) print(txt) regex explanation: (\n\[) find a sequence where there is a linebreak followed by a [ (\[]\n) find a sequence where there are [] followed by a linebreak (.*?) remove everything in the middle of (\n\[) and (\[]\n) re.DOTALL is used to prevent unnecessary backtracking !!! PANDAS UPDATE !!! The same solution with the same logic can be carried out with pandas import re import pandas as pd # read each line in the file (one raw -> one line) txt = pd.read_csv('smb', sep = '\n', header=None) # join all the line in the file separating them with '\n' txt = '\n'.join(txt[0].to_list()) # apply the regex to clean the text (the same as above) txt = re.sub(r'(\n\[)(.*?)(\[]\n)', '\n', txt, flags=re.DOTALL) print(txt) | 12 | 8 |
61,712,618 | 2020-5-10 | https://stackoverflow.com/questions/61712618/why-does-mypy-not-consider-a-class-as-iterable-if-it-has-len-and-getitem | I was playing around with mypy and some basic iteration in Python and wrote the below code base: from typing import Iterator from datetime import date, timedelta class DateIterator: def __init__(self, start_date, end_date): self.start_date = start_date self.end_date = end_date self._total_dates = self._get_all_dates() def _get_all_dates(self) -> Iterator[date]: current_day = self.start_date while current_day <= self.end_date: yield current_day current_day += timedelta(days=1) def __len__(self): print("Calling the len function...") return len(self._total_dates) def __getitem__(self, index): print(f"Calling getitem with value of index as {index}") return self._total_dates[index] if __name__ == "__main__": date_iterator = DateIterator(date(2019, 1, 1), date(2019, 1, 15)) for new_date in date_iterator: print(new_date) date_str = ",".join([str(new_date) for new_date in date_iterator]) print(date_str) print(f"Checking the length of the collection {len(date_iterator)}") print(f"Checking if indexing works : {date_iterator[4]}") Now to also play around with mypy i got the below issues: iterator_test_with_getitem.py:30: error: Cannot assign to a type iterator_test_with_getitem.py:30: error: "DateIterator" has no attribute "__iter__" (not iterable) iterator_test_with_getitem.py:33: error: "DateIterator" has no attribute "__iter__" (not iterable) Found 3 errors in 1 file (checked 1 source file) Can someone please guide me that if an object is iterable with the addition of __len__ and __getitem__ methods then why is mypy complaining when it has no iter method Also can someone please tell me what the issue with line 30 is . I don't find any logical explanation of that error as well. | Mypy -- and PEP 484 type checkers in general -- define an iterable to be any class that defines the __iter__ method. You can see the exact definition of the Iterable type in Typeshed, the collection of type hints for the standard library: https://github.com/python/typeshed/blob/master/stdlib/3/typing.pyi#L146 The reason why Iterable is defined in this way and excludes types that only define __getitem__ and __len__ is because: The only way to say that a type must either implement __iter__ or implement __getitem__/__len__ is to use a Union -- and defining Iterable to be a union complicates life a bit for anybody who wants to make extensive use of the Iterable type in their own code. Conversely, it's trivial for a class that defines __getitem__ and __len__ to define their own __iter__ method. For example, you could do something as simple as this: def __iter__(self) -> Iterator[date]: for i in range(len(self)): yield self[i] Or alternatively, something like this (assuming you fixed your constructor so self._total_dates is a list, not a generator): def __iter__(self) -> Iterator[date]: return iter(self._total_dates) So, given this cost-benefit tradeoff, it makes sense to define Iterable to just be any object that implements __iter__. It's not much of a burden for people who are defining custom classes and simplifies life for people who want to write functions manipulating iterables. | 9 | 9 |
61,720,708 | 2020-5-11 | https://stackoverflow.com/questions/61720708/how-do-you-save-a-tensorflow-dataset-to-a-file | There are at least two more questions like this on SO but not a single one has been answered. I have a dataset of the form: <TensorSliceDataset shapes: ((512,), (512,), (512,), ()), types: (tf.int32, tf.int32, tf.int32, tf.int32)> and another of the form: <BatchDataset shapes: ((None, 512), (None, 512), (None, 512), (None,)), types: (tf.int32, tf.int32, tf.int32, tf.int32)> I have looked and looked but I can't find the code to save these datasets to files that can be loaded later. The closest I got was this page in the TensorFlow docs, which suggests serializing the tensors using tf.io.serialize_tensor and then writing them to a file using tf.data.experimental.TFRecordWriter. However, when I tried this using the code: dataset.map(tf.io.serialize_tensor) writer = tf.data.experimental.TFRecordWriter('mydata.tfrecord') writer.write(dataset) I get an error on the first line: TypeError: serialize_tensor() takes from 1 to 2 positional arguments but 4 were given How can I modify the above (or do something else) to accomplish my goal? | TFRecordWriter seems to be the most convenient option, but unfortunately it can only write datasets with a single tensor per element. Here are a couple of workarounds you can use. First, since all your tensors have the same type and similar shape, you can concatenate them all into one, and split them back later on load: import tensorflow as tf # Write a = tf.zeros((100, 512), tf.int32) ds = tf.data.Dataset.from_tensor_slices((a, a, a, a[:, 0])) print(ds) # <TensorSliceDataset shapes: ((512,), (512,), (512,), ()), types: (tf.int32, tf.int32, tf.int32, tf.int32)> def write_map_fn(x1, x2, x3, x4): return tf.io.serialize_tensor(tf.concat([x1, x2, x3, tf.expand_dims(x4, -1)], -1)) ds = ds.map(write_map_fn) writer = tf.data.experimental.TFRecordWriter('mydata.tfrecord') writer.write(ds) # Read def read_map_fn(x): xp = tf.io.parse_tensor(x, tf.int32) # Optionally set shape xp.set_shape([1537]) # Do `xp.set_shape([None, 1537])` if using batches # Use `x[:, :512], ...` if using batches return xp[:512], xp[512:1024], xp[1024:1536], xp[-1] ds = tf.data.TFRecordDataset('mydata.tfrecord').map(read_map_fn) print(ds) # <MapDataset shapes: ((512,), (512,), (512,), ()), types: (tf.int32, tf.int32, tf.int32, tf.int32)> But, more generally, you can simply have a separate file per tensor and then read them all: import tensorflow as tf # Write a = tf.zeros((100, 512), tf.int32) ds = tf.data.Dataset.from_tensor_slices((a, a, a, a[:, 0])) for i, _ in enumerate(ds.element_spec): ds_i = ds.map(lambda *args: args[i]).map(tf.io.serialize_tensor) writer = tf.data.experimental.TFRecordWriter(f'mydata.{i}.tfrecord') writer.write(ds_i) # Read NUM_PARTS = 4 parts = [] def read_map_fn(x): return tf.io.parse_tensor(x, tf.int32) for i in range(NUM_PARTS): parts.append(tf.data.TFRecordDataset(f'mydata.{i}.tfrecord').map(read_map_fn)) ds = tf.data.Dataset.zip(tuple(parts)) print(ds) # <ZipDataset shapes: (<unknown>, <unknown>, <unknown>, <unknown>), types: (tf.int32, tf.int32, tf.int32, tf.int32)> It is possible to have the whole dataset in a single file with multiple separate tensors per element, namely as a file of TFRecords containing tf.train.Examples, but I don't know if there is a way to create those within TensorFlow, that is, without having to get the data out of the dataset into Python and then write it to the records file. | 17 | 10 |
61,734,304 | 2020-5-11 | https://stackoverflow.com/questions/61734304/label-outliers-in-a-boxplot-python | I am analysing extreme weather events. My Dataframe is called df and looks like this: | Date | Qm | |------------|--------------| | 1993-01-01 | 4881.977061 | | 1993-02-01 | 4024.396839 | | 1993-03-01 | 3833.664650 | | 1993-04-01 | 4981.192526 | | 1993-05-01 | 6286.879798 | | 1993-06-01 | 6939.726070 | | 1993-07-01 | 6492.936065 | | ... | ... | I want to know whether the extreme events happened in the same year as an outlier measured. Thus, I did my boxplot using seaborn: # Qm boxplot analysis boxplot = sns.boxplot(x=df.index.month,y=df['Qm']) plt.show() Now, I would like to present within the same figure the years corresponding to the outliers. Hence, label them with their date. I have checked in multiple libraries that include boxplots, but there is no clue on how to label them. PD: I used seaborn in this example, but any library that could help will be highly appreciated Thanks! | You could iterate through the dataframe and compare each value against the limits for the outliers. Default these limits are 1.5 times the IQR past the low and high quartiles. For each value outside that range, you can plot the year next to it. Feel free to adapt this definition if you would like to display more or less years. Here is some code to illustrate the idea. In the code the two last digits of the year are shown next to the position of the outlier. import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns Y = 26 df = pd.DataFrame({'Date': pd.date_range('1993-01-01', periods=12 * Y, freq='M'), 'Qm': np.random.normal(np.tile(5000 + 1000 * np.sin(np.linspace(0, 2 * np.pi, 12)), Y), 1000)}) df.set_index('Date', inplace=True) boxplot = sns.boxplot(x=df.index.month, y=df['Qm']) month_q1 = df.groupby(df.index.month).quantile(0.25)['Qm'].to_numpy() month_q3 = df.groupby(df.index.month).quantile(0.75)['Qm'].to_numpy() outlier_top_lim = month_q3 + 1.5 * (month_q3 - month_q1) outlier_bottom_lim = month_q1 - 1.5 * (month_q3 - month_q1) for row in df.itertuples(): month = row[0].month - 1 val = row.Qm if val > outlier_top_lim[month] or val < outlier_bottom_lim[month]: plt.text(month, val, f' {row[0].year % 100:02d}', ha='left', va='center') plt.xlabel('Month') plt.tight_layout() plt.show() | 7 | 11 |
61,628,503 | 2020-5-6 | https://stackoverflow.com/questions/61628503/flask-uploads-importerror-cannot-import-name-secure-filename | I want to create a form that allows to send a picture with a description using flask forms. I tried to use this video: https://www.youtube.com/watch?v=Exf8RbgKmhM but I had troubles when launching app.py: ➜ website git:(master) ✗ python3.6 app.py Traceback (most recent call last): File "app.py", line 10, in <module> from flask.ext.uploads import UploadSet, configure_uploads, IMAGES ModuleNotFoundError: No module named 'flask.ext' I had to replace flask.ext.uploads by flask_uploads but now I get: Traceback (most recent call last): File "app.py", line 10, in <module> from flask_uploads import UploadSet, configure_uploads, IMAGES File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask_uploads.py", line 26, in <module> from werkzeug import secure_filename, FileStorage ImportError: cannot import name 'secure_filename' My imports and config looks like this: from datetime import datetime from flask_sqlalchemy import SQLAlchemy from flask import Flask, session, render_template, url_for, redirect, flash, request from wtforms import Form, fields,TextField, StringField, PasswordField, BooleanField,validators from wtforms.validators import InputRequired, Email, Length, DataRequired from flask_wtf import FlaskForm from flask_uploads import UploadSet, configure_uploads, IMAGES from flask_login import LoginManager, UserMixin, login_user, login_required, logout_user, current_user I couldn't solve this issue, do you have any idea of what can I do ? | In flask_uploads.py Change from werkzeug import secure_filename,FileStorage to from werkzeug.utils import secure_filename from werkzeug.datastructures import FileStorage | 56 | 159 |
61,723,675 | 2020-5-11 | https://stackoverflow.com/questions/61723675/crop-a-video-in-python | I am wondering to create a function which can crop a video in a certain frame and save it on my disk (OpenCV,moviepy,or something like that) I am specifying my function with parameters as dimension of frame along with source and target name (location) def vid_crop(src,dest,l,t,r,b): # something # goes # here left = 1 #any number (pixels) top = 2 # '''' right = 3 # '''' bottom = 4 # '''' vid_crop('myvideo.mp4','myvideo_edit.mp4',left,top,right,bottom) Any suggestions and ideas are really helpful | Ok I think you want this, import numpy as np import cv2 # Open the video cap = cv2.VideoCapture('vid.mp4') # Initialize frame counter cnt = 0 # Some characteristics from the original video w_frame, h_frame = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) fps, frames = cap.get(cv2.CAP_PROP_FPS), cap.get(cv2.CAP_PROP_FRAME_COUNT) # Here you can define your croping values x,y,h,w = 0,0,100,100 # output fourcc = cv2.VideoWriter_fourcc(*'XVID') out = cv2.VideoWriter('result.avi', fourcc, fps, (w, h)) # Now we start while(cap.isOpened()): ret, frame = cap.read() cnt += 1 # Counting frames # Avoid problems when video finish if ret==True: # Croping the frame crop_frame = frame[y:y+h, x:x+w] # Percentage xx = cnt *100/frames print(int(xx),'%') # Saving from the desired frames #if 15 <= cnt <= 90: # out.write(crop_frame) # I see the answer now. Here you save all the video out.write(crop_frame) # Just to see the video in real time cv2.imshow('frame',frame) cv2.imshow('croped',crop_frame) if cv2.waitKey(1) & 0xFF == ord('q'): break else: break cap.release() out.release() cv2.destroyAllWindows() | 8 | 14 |
61,720,217 | 2020-5-10 | https://stackoverflow.com/questions/61720217/error-cannot-use-a-string-pattern-on-a-bytes-like-object | Hy am using Python RegEx to show all internet wirless profiles connected to a computer.There is error (TypeError: cannot use a string pattern on a bytes-like object) in my Second last line pls anyone help to identifi my mistake.Thanks My Program import subprocess,re command = "netsh wlan show profile" output = subprocess.check_output(command, shell=True) network_names = re.search("(Profile\s*:\s)(.*)", output) print(network_names.group(0)) ..................................................... ERROR line 8, in <module> return _compile(pattern, flags).search(string) TypeError: cannot use a string pattern on a bytes-like object | Python 3 distinguishes "bytes" and "string" types; this is especially important for Unicode strings, where each character may be more than one byte, depending on the character and the encoding. Regular expressions can work on either, but it has to be consistent — searching for bytes within bytes, or strings within strings. Depending on what you need, there are two solutions: Decode the output variable before searching in it; for instance, with: output_text = output.decode('utf-8') This depends on the encoding that you are using; UTF-8 is the most common these days. The matched group will be a string. Search with bytes by adding a b prefix to the regular expression. A regular expression should also use the r prefix, so it becomes: re.search(br"(Profile\s*:\s)(.*)", output) The matched group will be a bytes object. | 12 | 21 |
61,718,947 | 2020-5-10 | https://stackoverflow.com/questions/61718947/when-does-dataloader-shuffle-happen-for-pytorch | I have beening using shuffle option for pytorch dataloader for many times. But I was wondering when this shuffle happens and whether it is performed dynamically during iteration. Take the following code as an example: namesDataset = NamesDataset() namesTrainLoader = DataLoader(namesDataset, batch_size=16, shuffle=True) for batch_data in namesTrainLoader: print(batch_data) When we define "namesTrainLoader", does that mean the shuffling is finished and the following iteration will be based on a fixed order of data? Will there be any randomness in the for loop after namesTrainLoader was defined? I was trying to replace half of "batch_data" with some special value: for batch_data in namesTrainLoader: batch_data[:8] = special_val pre = model(batch_data) Let us say there will be infinite number of epoches, will "model" eventually see all the data in "namesTrainLoader"? Or half of the data of "namesTrainLoader" is actually lost to "model"? | The shuffling happens when the iterator is created. In the case of the for loop, that happens just before the for loop starts. You can create the iterator manually with: # Iterator gets created, the data has been shuffled at this point. data_iterator = iter(namesTrainLoader) By default the data loader uses torch.utils.data.RandomSampler if you set shuffle=True (without providing your own sampler). Its implementation is very straight forward and you can see where the data is shuffled when the iterator is created by looking at the RandomSampler.__iter__ method: def __iter__(self): n = len(self.data_source) if self.replacement: return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist()) return iter(torch.randperm(n).tolist()) The return statement is the important part, where the shuffling takes place. It simply creates a random permutation of the indices. That means you will see your entire dataset every time you fully consume the iterator, just in a different order every time. Therefore there is no data lost (not including cases with drop_last=True) and your model will see all data at every epoch. | 9 | 9 |
61,712,660 | 2020-5-10 | https://stackoverflow.com/questions/61712660/how-to-efficiently-get-the-mean-of-the-elements-in-two-list-of-lists-in-python | I have two lists as follows. mylist1 = [["lemon", 0.1], ["egg", 0.1], ["muffin", 0.3], ["chocolate", 0.5]] mylist2 = [["chocolate", 0.5], ["milk", 0.2], ["carrot", 0.8], ["egg", 0.8]] I want to get the mean of the common elements in the two lists as follows. myoutput = [["chocolate", 0.5], ["egg", 0.45]] My current code is as follows for item1 in mylist1: for item2 in mylist2: if item1[0] == item2[0]: print(np.mean([item1[1], item2[1]])) However, since there are two for loops (O(n^2) complexity) this is very inefficient for very long lists. I am wondering if there is more standard/efficient way of doing this in Python. | You can do it in O(n) (single pass over each list) by converting 1 to a dict, then per item in the 2nd list access that dict (in O(1)), like this: mylist1 = [["lemon", 0.1], ["egg", 0.1], ["muffin", 0.3], ["chocolate", 0.5]] mylist2 = [["chocolate", 0.5], ["milk", 0.2], ["carrot", 0.8], ["egg", 0.8]] l1_as_dict = dict(mylist1) myoutput = [] for item,price2 in mylist2: if item in l1_as_dict: price1 = l1_as_dict[item] myoutput.append([item, (price1+price2)/2]) print(myoutput) Output: [['chocolate', 0.5], ['egg', 0.45]] | 51 | 37 |
61,711,896 | 2020-5-10 | https://stackoverflow.com/questions/61711896/combine-gridsearchcv-and-stackingclassifier | I want to use StackingClassifier to combine some classifiers and then use GridSearchCV to optimize the parameters: clf1 = RandomForestClassifier() clf2 = LogisticRegression() dt = DecisionTreeClassifier() sclf = StackingClassifier(estimators=[clf1, clf2],final_estimator=dt) params = {'randomforestclassifier__n_estimators': [10, 50], 'logisticregression__C': [1,2,3]} grid = GridSearchCV(estimator=sclf, param_grid=params, cv=5) grid.fit(x, y) But this turns out an error: 'RandomForestClassifier' object has no attribute 'estimators_' I have used n_estimators. Why it warns me that no estimators_? Usually GridSearchCV is applied to single model so I just need to write the name of parameters of the single model in a dict. I refer to this page https://groups.google.com/d/topic/mlxtend/5GhZNwgmtSg but it uses parameters of early version. Even though I change the newly parameters it doesn't work. Btw, where can I learn the details of the naming rule of these params? | First of all, the estimators need to be a list containing the models in tuples with the corresponding assigned names. estimators = [('model1', model()), # model() named model1 by myself ('model2', model2())] # model2() named model2 by myself Next, you need to use the names as they appear in sclf.get_params(). Also, the name is the same as the one you gave to the specific model in the bove estimators list. So, here for model1 parameters you need: params = {'model1__n_estimators': [5,10]} # model1__SOME_PARAM Working toy example: from sklearn.datasets import make_classification from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import StackingClassifier from sklearn.model_selection import GridSearchCV X, y = make_classification(n_samples=1000, n_features=4, n_informative=2, n_redundant=0, random_state=0, shuffle=False) estimators = [('rf', RandomForestClassifier(n_estimators=10, random_state=42)), ('logreg', LogisticRegression())] sclf = StackingClassifier(estimators= estimators , final_estimator=DecisionTreeClassifier()) params = {'rf__n_estimators': [5,10]} grid = GridSearchCV(estimator=sclf, param_grid=params, cv=5) grid.fit(X, y) | 9 | 11 |
61,702,357 | 2020-5-9 | https://stackoverflow.com/questions/61702357/how-to-add-a-spacy-model-to-a-requirements-txt-file | I have an app that uses the Spacy model "en_core_web_sm". I have tested the app on my local machine and it works fine. However when I deploy it to Heroku, it gives me this error: "Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory." My requirements file contains spacy==2.2.4. I have been doing some research on this error and found that the model needs to be downloaded separately using this command: python -m spacy download en_core_web_sm I have been looking for ways to add the same to my requirements.txt file but haven't been able to find one that works! I tried this as well - added the below to the requirements file: -e git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz#egg=en_core_web_sm==2.2.0 but it gave this error: "Cloning git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz to /app/.heroku/src/en-core-web-sm Running command git clone -q git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz /app/.heroku/src/en-core-web-sm fatal: remote error: explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz is not a valid repository name" Is there a way to get this Spacy model to load from the requirements file? Or any other fix that is possible? Thank you. | Ok, so after some more Googling and hunting for a solution, I found this solution that worked: I downloaded the tarball from the url that @tausif shared in his answer, to my local system. Saved it in the directory which had my requirements.txt file. Then I added this line to my requirements.txt file: ./en_core_web_sm-2.2.5.tar.gz Proceeded with deploying to Heroku - it succeeded and the app works perfectly now. | 31 | 7 |
61,686,382 | 2020-5-8 | https://stackoverflow.com/questions/61686382/change-the-text-color-of-cells-in-plotly-table-based-on-value-string | I have the following script that turns a pandas dataframe into an interactive html table with Plotly: goals_output_df = pd.read_csv(os.path.join(question_dir, "data/output.csv")) fig = go.Figure(data=[go.Table( columnwidth = [15,170,15,35,35], header=dict(values=['<b>' + x + '</b>' for x in list(goals_output_df.columns)], # fill_color='#b9e2ff', line_color='darkslategray', align='center', font=dict(color='black', family="Lato", size=20), height=30 ), cells=dict(values=[goals_output_df[column] for column in goals_output_df.columns], # fill_color='#e6f2fd', line_color='darkslategray', align='left', font=dict(color='black', family="Lato", size=20), height=30 )) ]) fig.update_layout( title="<b>Output summary for %s</b>"%question.strip('question'), font=dict( family="Lato", size=18, color="#000000")) fig.write_html(os.path.join(question_dir, "results/output.html")) The table contains a column named "Output" which can have one of three values for each row: "YES", "NO" and "BORDERLINE". I want to be able to change the color of the text in the "Output" column such that "YES" is green, "NO" is red and "BORDERLINE" is blue. Any idea how I could do that? Plotly documentation doesn't seem to help. | It's kind of implicit on this section cell-color-based-on-variable. Here what I'll do in your case. Data import pandas as pd import plotly.graph_objects as go df = pd.DataFrame({"name":["a", "b", "c", "d"], "value":[100,20,30,40], "output":["YES", "NO", "BORDERLINE", "NO"]}) map_color = {"YES":"green", "NO":"red", "BORDERLINE":"blue"} df["color"] = df["output"].map(map_color) cols_to_show = ["name", "value", "output"] Where cols_to_show are the only columns you want to show in your table. Fill color Here you want to have a standard cell background in all columns but the output one fill_color = [] n = len(df) for col in cols_to_show: if col!='output': fill_color.append(['#e6f2fd']*n) else: fill_color.append(df["color"].to_list()) Table data=[go.Table( # columnwidth = [15,20,30], header=dict(values=[f"<b>{col}</b>" for col in cols_to_show], # fill_color='#b9e2ff', line_color='darkslategray', align='center', font=dict(color='black', family="Lato", size=20), height=30 ), cells=dict(values=df[cols_to_show].values.T, fill_color=fill_color, line_color='darkslategray', align='left', font=dict(color='black', family="Lato", size=20), height=30 )) ] fig = go.Figure(data=data) fig.show() UPDATE Change text color In case you want to change text color text_color = [] n = len(df) for col in cols_to_show: if col!='output': text_color.append(["black"] * n) else: text_color.append(df["color"].to_list()) and data=[go.Table( # columnwidth = [15,20,30], header=dict(values=[f"<b>{col}</b>" for col in cols_to_show], # fill_color='#b9e2ff', line_color='darkslategray', align='center', font=dict(color='black', family="Lato", size=20), height=30 ), cells=dict(values=df[cols_to_show].values.T, line_color='darkslategray', align='left', font=dict(color=text_color, family="Lato", size=20), height=30 )) ] fig = go.Figure(data=data) fig.show() | 7 | 17 |
61,698,133 | 2020-5-9 | https://stackoverflow.com/questions/61698133/docker-py-permissionerror13 | While I was running >>> import docker >>> client = docker.from_env() >>> client.containers.list() I encountered the following error requests.exceptions.ConnectionError: ('Connection aborted.', PermissionError(13, 'Permission denied')) I think it is because docker-py is not able to get access of the docker daemon. So how do I fix this? | According to Docker docs you should create a group and attach your user to that group. Create Group sudo groupadd docker Attach User to Group sudo usermod -aG docker $USER Reload su -s ${USER} | 8 | 12 |
61,697,523 | 2020-5-9 | https://stackoverflow.com/questions/61697523/typeerror-int-argument-must-be-a-string-a-bytes-like-object-or-a-number-not | I am trying to run the simple below snippet port = int(os.getenv('PORT')) print("Starting app on port %d" % port) I can understand the PORT is s string but I need to cast to a int. Why I am getting the error TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' | You don't have an environment variable called PORT. os.getenv('PORT') -> returns None -> throws exception when you try to convert it to int Before running your script, create in your terminal the environment variable by: export PORT=1234 Or, you can provide a default port in case it's not defined as an environment variable on your machine: DEFAULT_PORT = 1234 port = int(os.getenv('PORT',DEFAULT_PORT)) print("Starting app on port %d" % port) | 7 | 9 |
61,694,481 | 2020-5-9 | https://stackoverflow.com/questions/61694481/splitting-a-list-of-tuples-to-several-lists-by-the-same-tuple-items | I am presented with a list made entirely of tuples, such as: lst = [("hello", "Blue"), ("hi", "Red"), ("hey", "Blue"), ("yo", "Green")] How can I split lst into as many lists as there are colours? In this case, 3 lists [("hello", "Blue"), ("hey", "Blue")] [("hi", "Red")] [("yo", "Green")] I just need to be able to work with these lists later, so I don't want to just output them to screen. Details about the list I know that every element of lst is strictly a double-element tuple. The colour is also always going to be that second element of each tuple. The problem Problem is,lst is dependant on user input, so I won't always know how many colours there are in total and what they are. That is why I couldn't predefine variables to store these lists in them. So how can this be done? | You could use a collections.defaultdict to group by colour: from collections import defaultdict lst = [("hello", "Blue"), ("hi", "Red"), ("hey", "Blue"), ("yo", "Green")] colours = defaultdict(list) for word, colour in lst: colours[colour].append((word, colour)) print(colours) # defaultdict(<class 'list'>, {'Blue': [('hello', 'Blue'), ('hey', 'Blue')], 'Red': [('hi', 'Red')], 'Green': [('yo', 'Green')]}) Or if you prefer using no libraries, dict.setdefault is an option: colours = {} for word, colour in lst: colours.setdefault(colour, []).append((word, colour)) print(colours) # {'Blue': [('hello', 'Blue'), ('hey', 'Blue')], 'Red': [('hi', 'Red')], 'Green': [('yo', 'Green')]} If you just want the colour tuples separated into nested lists of tuples, print the values() as a list: print(list(colours.values())) # [[('hello', 'Blue'), ('hey', 'Blue')], [('hi', 'Red')], [('yo', 'Green')]] Benefit of the above approaches is they automatically initialize empty lists for new keys as you add them, so you don't have to do that yourself. | 11 | 9 |
61,690,632 | 2020-5-9 | https://stackoverflow.com/questions/61690632/why-pandas-dataframe-sumaxis-0-returns-sum-of-values-in-each-column-where-axis | In pandas, axis=0 represent rows and axis=1 represent columns. Therefore to get the sum of values in each row in pandas, df.sum(axis=0) is called. But it returns a sum of values in each columns and vice-versa. Why??? import pandas as pd df=pd.DataFrame({"x":[1,2,3,4,5],"y":[2,4,6,8,10]}) df.sum(axis=0) Dataframe: x y 0 1 2 1 2 4 2 3 6 3 4 8 4 5 10 Output: x 15 y 30 Expected Output: 0 3 1 6 2 9 3 12 4 15 | I think the right way to interpret the axis parameter is what axis you sum 'over' (or 'across'), rather than the 'direction' the sum is computed in. Specifying axis = 0 computes the sum over the rows, giving you a total for each column; axis = 1 computes the sum across the columns, giving you a total for each row. | 12 | 16 |
61,690,731 | 2020-5-9 | https://stackoverflow.com/questions/61690731/pandas-read-csv-from-bytesio | I have a BytesIO file-like object, containing a CSV. I want to read it into a Pandas dataframe, without writing to disk in between. MWE In my use case I downloaded the file straight into BytesIO. For this MWE I'll have a file on disk, read it into BytesIO, then read that into Pandas. The disk step is just to make a MWE. file.csv a,b 1,2 3,4 Script: import pandas as pd from io import BytesIO bio = BytesIO() with open('file.csv', 'rb') as f: bio.write(f.read()) # now we have a BytesIO with a CSV df = pd.read_csv(bio) Result: Traceback (most recent call last): File "pandas-io.py", line 8, in <module> df = pd.read_csv(bio) File "/home/ec2-user/.local/lib/python3.6/site-packages/pandas/io/parsers.py", line 685, in parser_f return _read(filepath_or_buffer, kwds) File "/home/ec2-user/.local/lib/python3.6/site-packages/pandas/io/parsers.py", line 457, in _read parser = TextFileReader(fp_or_buf, **kwds) File "/home/ec2-user/.local/lib/python3.6/site-packages/pandas/io/parsers.py", line 895, in __init__ self._make_engine(self.engine) File "/home/ec2-user/.local/lib/python3.6/site-packages/pandas/io/parsers.py", line 1135, in _make_engine self._engine = CParserWrapper(self.f, **self.options) File "/home/ec2-user/.local/lib/python3.6/site-packages/pandas/io/parsers.py", line 1917, in __init__ self._reader = parsers.TextReader(src, **kwds) File "pandas/_libs/parsers.pyx", line 545, in pandas._libs.parsers.TextReader.__cinit__ pandas.errors.EmptyDataError: No columns to parse from file Note that this sounds like a similar problem to the title of this post, but the error messages are different, and that post has the X-Y problem. | The error says the file is empty. That's because after writing to a BytesIO object, the file pointer is at the end of the file, ready to write more. So when Pandas tries to read it, it starts reading after the last byte that was written. So you need to move the pointer back to the start, for Pandas to read. bio.seek(0) df = pd.read_csv(bio) | 12 | 34 |
61,686,780 | 2020-5-8 | https://stackoverflow.com/questions/61686780/python-colorama-print-all-colors | I am new to learning Python, and I came across colorama. As a test project, I wanted to print out all the available colors in colorama. from colorama import Fore from colorama import init as colorama_init colorama_init(autoreset=True) colors = [x for x in dir(Fore) if x[0] != "_"] for color in colors: print(color + f"{color}") of course this outputs all black output like this: BLACKBLACK BLUEBLUE CYANCYAN ... because the Dir(Fore) just gives me a string representation of Fore.BLUE, Fore.GREEN, ... Is there a way to access all the Fore Color property so they actually work, as in: print(Fore.BLUE + "Blue") Or in other words, this may express my problem better. I wanted to write this: print(Fore.BLACK + 'BLACK') print(Fore.BLUE + 'BLUE') print(Fore.CYAN + 'CYAN') print(Fore.GREEN + 'GREEN') print(Fore.LIGHTBLACK_EX + 'LIGHTBLACK_EX') print(Fore.LIGHTBLUE_EX + 'LIGHTBLUE_EX') print(Fore.LIGHTCYAN_EX + 'LIGHTCYAN_EX') print(Fore.LIGHTGREEN_EX + 'LIGHTGREEN_EX') print(Fore.LIGHTMAGENTA_EX + 'LIGHTMAGENTA_EX') print(Fore.LIGHTRED_EX + 'LIGHTRED_EX') print(Fore.LIGHTWHITE_EX + 'LIGHTWHITE_EX') print(Fore.LIGHTYELLOW_EX + 'LIGHTYELLOW_EX') print(Fore.MAGENTA + 'MAGENTA') print(Fore.RED + 'RED') print(Fore.RESET + 'RESET') print(Fore.WHITE + 'WHITE') print(Fore.YELLOW + 'YELLOW') in a shorter way: for color in all_the_colors_that_are_available_in_Fore: print('the word color in the representing color') #or something like this? print(Fore.color + color) | The reason why it's printing the color name twice is well described in Patrick's comment on the question. Is their a way to access all the Fore Color property so they actualy work as in According to: https://pypi.org/project/colorama/ You can print a colored string using other ways than e.g.print(Fore.RED + 'some red text') You can use colored function from termcolor module which takes a string and a color to colorize that string. But not all Fore colors are supported so you can do the following: from colorama import Fore from colorama import init as colorama_init from termcolor import colored colorama_init(autoreset=True) colors = [x for x in dir(Fore) if x[0] != "_"] colors = [i for i in colors if i not in ["BLACK", "RESET"] and "LIGHT" not in i] for color in colors: print(colored(color, color.lower())) Hope this answered your question. EDIT: I read more about Fore items and I found that you can retrieve a dictionary containing each color as keys and it's code as values, so you can do the following to include all the colors in Fore: from colorama import Fore from colorama import init as colorama_init colorama_init(autoreset=True) colors = dict(Fore.__dict__.items()) for color in colors.keys(): print(colors[color] + f"{color}") | 10 | 10 |
61,669,939 | 2020-5-8 | https://stackoverflow.com/questions/61669939/git-pre-commit-hooks-keeps-modifying-files-even-after-i-have-staged-previously-m | I'm running git pre-commit and running black as one of the hooks. Now when I run commit, black fails and says: All done! ✨ 🍰 ✨ 15 files reformatted, 1 file left unchanged. I reviewed the reformatted files and I'm fine with them. So I stage those files and try running commit again, but I keep getting the same message as above. I have tried the following commands with no success. git add . git add -A git add -u This is my .pre-commit-config.yaml file: repos: - repo: https://github.com/psf/black rev: 19.10b0 hooks: - id: black language_version: python3.6 - repo: https://github.com/pre-commit/pre-commit-hooks rev: v2.5.0 hooks: - id: check-merge-conflict - id: check-docstring-first - id: check-json - id: check-yaml - id: debug-statements - id: double-quote-string-fixer - id: end-of-file-fixer - id: name-tests-test args: [--django] - id: requirements-txt-fixer - id: trailing-whitespace - repo: https://gitlab.com/pycqa/flake8 rev: 3.7.9 hooks: - id: flake8 additional_dependencies: [flake8-typing-imports==1.6.0] - repo: https://github.com/asottile/reorder_python_imports rev: v1.4.0 hooks: - id: reorder-python-imports args: [--py3-plus] - repo: https://github.com/Lucas-C/pre-commit-hooks-bandit rev: v1.0.4 hooks: - id: python-bandit-vulnerability-check args: [-l, --recursive, -x, tests] files: .py$ - repo: local hooks: - id: tests name: run tests entry: venv/bin/pytest -v -m fast language: python additional_dependencies: [pre-commit, pytest] always_run: true pass_filenames: false types: [python] stages: [commit] - repo: local hooks: - id: tests name: run tests entry: venv/bin/pytest -x language: system types: [python] stages: [push] When I do git status --short, I get this: M .pre-commit-config.yaml M pytest.ini M setup.cfg RM tests/tests_report.html -> tests/commit_pytest_report.html R report.html -> tests/commit_tests_report.html AM tests/coverage/index.html A tests/coverage/file_1.png When I run git commit -m "test", after running git add ., git add -A, or git add -u; I get this: black....................................................................Failed - hook id: black - files were modified by this hook reformatted <filename> ... All done! ✨ 🍰 ✨ 15 files reformatted, 1 file left unchanged. Check for merge conflicts................................................Passed Check docstring is first.................................................Passed Check JSON...............................................................Passed Check Yaml...............................................................Passed Debug Statements (Python)................................................Passed Fix double quoted strings................................................Failed - hook id: double-quote-string-fixer - exit code: 1 - files were modified by this hook Fixing strings in <file_name> ... Fix End of Files.........................................................Failed - hook id: end-of-file-fixer - exit code: 1 - files were modified by this hook Fixing <file_name> ... Tests should end in _test.py.............................................Passed Fix requirements.txt.................................(no files to check)Skipped Trim Trailing Whitespace.................................................Passed flake8...................................................................Failed - hook id: flake8 - exit code: 1 <file_name>: <some flake8 error> ... 
Reorder python imports...................................................Passed bandit...................................................................Passed run tests................................................................Failed - hook id: tests - files were modified by this hook ============================= test session starts ============================== platform darwin -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 <test details> (0.00 durations hidden. Use -vv to show these durations.) ====================== 2 passed, 113 deselected in 2.51s ======================= I'm not sure what I'm doing wrong; git doesn't seem to have updated my commits with blacks formatting. I couldn't find anything through my Google research. Thanks! | It appears you're using black and double-quote-strings-fixer together the former likes double quoted strings in python (you can disable this by configuring black to skip-string-normalization in pyproject.toml) the latter likes single quoted strings in python (you can remove it if you'd like double quoted strings) If two formatters fight, the end result will be a failure as pre-commit checks to make sure everything resolves disclaimer: I'm the author of pre-commit and pre-commit-hooks | 13 | 24 |
61,676,156 | 2020-5-8 | https://stackoverflow.com/questions/61676156/how-to-use-the-new-numpy-random-number-generator | The fact that NumPy now recommends that new code uses the default_rng() instance instead of numpy.random for new code has got me thinking about how it should be used to yield good results, both performance vice and statistically. This first example is how I first wanted to write: import numpy as np class fancy_name(): def __init__(self): self.rg = np.random.default_rng() self.gamma_shape = 1.0 self.gamma_scale = 1.0 def public_method(self, input): # Do intelligent stuff with input return self.rg.gamma(self.gamma_shape, slef.gamma_scale) But I have also considered creating a new instance in every function call: import numpy as np class fancy_name(): def __init__(self): self.gamma_shape = 1.0 self.gamma_scale = 1.0 def public_method(self, input): # Do intelligent stuff with input rg = np.random.default_rng() return rg.gamma(self.gamma_shape, slef.gamma_scale) A third alternative would be to pass the rng as an argument in the function call. This way, the same rng can be used in other parts of the code as well. This is used in a simulation environment that will be called often to sample, for example, transition times. I guess the question is if there are arguments for any of these three methods and if there exists some kind of praxis? Also, any reference to more in-depth explanations of using these random number generators (except for the NumPy doc and Random Sampling article) is of great interest! | default_rng() isn't a singleton. It makes a new Generator backed by a new instance of the default BitGenerator class. Quoting the docs: Construct a new Generator with the default BitGenerator (PCG64). ... If seed is not a BitGenerator or a Generator, a new BitGenerator is instantiated. This function does not manage a default global instance. This can also be tested empirically: In [1]: import numpy In [2]: numpy.random.default_rng() is numpy.random.default_rng() Out[2]: False This is expensive. You should usually call default_rng() once in your program and pass the generator to anything that needs it. (Yes, this is awkward.) | 9 | 9 |
61,644,638 | 2020-5-6 | https://stackoverflow.com/questions/61644638/conda-concurrent-futures-process-brokenprocesspool-a-process-in-the-process-po | I want to install Anaconda on my mac (version 10.9.5). The command I used: sh Anaconda3-2020.02-MacOSX-x86_64.sh Led to this error: Unpacking payload ... Traceback (most recent call last): File "entry_point.py", line 69, in <module> File "concurrent/futures/process.py", line 483, in _chain_from_iterable_of_lists File "concurrent/futures/_base.py", line 598, in result_iterator File "concurrent/futures/_base.py", line 435, in result File "concurrent/futures/_base.py", line 384, in __get_result concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. [1061] Failed to execute script entry_point After rooting around, I found this suggestion, that I check the hash. I typed this: shasum -a 512 /Users/Slowat/Anaconda3-2020.02-MacOSX-x86_64.sh and the output was: aa1ed0c40646ba9041abf59c13ce38da1dc51bf15de239b6f966a0b02b4c09c960ae33698c72aa46db41731f8e67938d1972fcb76fa4c5c8081bc0272bb1b535 /Users/Slowat/Anaconda3-2020.02-MacOSX-x86_64.sh The hash listed here does not match this. So then I thought it was an issue with downloaded, so i deleted the bash script and the directory that anaconda attempted to make during the failed installation. I double checked my python version: localhost:~ Slowat$ python --version Python 3.7.4 and re-downloaded Anaconda3-2020.02-MacOSX-x86_64.sh from here. I re-checked the hash: aa1ed0c40646ba9041abf59c13ce38da1dc51bf15de239b6f966a0b02b4c09c960ae33698c72aa46db41731f8e67938d1972fcb76fa4c5c8081bc0272bb1b535 /Users/Slowat/Anaconda3-2020.02-MacOSX-x86_64.sh and it's still wrong. Any ideas what I'm doing wrong here (mainly for the original area, checking the hash was just an idea I had). | I have discovered that conda cannot be run on mac v. 10.9.5; i first updated the mac to 10.11, and then from 10.11 to 10.13 (you have to do it in those two steps). Then conda installed and ran fine. | 11 | 0 |
61,631,230 | 2020-5-6 | https://stackoverflow.com/questions/61631230/plotly-how-to-change-the-background-color-of-each-subplot | I am trying to create different subplot and I want for each subplot to have a different background color. I'm using plotly and it makes the thing a little bit more difficult. Any idea? I think the equivalent in matplot is face_color or something like this. fig = make_subplots(rows=1, cols=2) fig.add_trace( go.Scatter( x=list(range(sample_points)), y=data_gx.iloc[scene], name='X-axis', line=dict(color='green', width=2) ), row=1, col=1 ) fig.add_trace( go.Scatter( x=list(range(sample_points)), y=data_ax.iloc[scene], name='X-axis', line=dict(color='green', width=2) ), row=1, col=2 ) | fig.add_trace(go.Scatter(x=[-1,2],y= [2,2],fill='tozeroy'),row=1, col=1) This worked for me. This will draw a line from (-1,2) to (2,2) and will color everything below this points. It can help you color your background of each subplot just by adding in the row and col of subplots. It worked for me, I hope it will work for you too. | 9 | 2 |
61,657,240 | 2020-5-7 | https://stackoverflow.com/questions/61657240/iterating-over-dictionary-using-getitem-in-python | I've implemented a python class to generate data as follows: class Array: def __init__(self): self.arr = [1,2,3,4,5,6,7,8] def __getitem__(self,key): return self.arr[key] a = Array() for i in a: print(i, end = " ") It runs as expected and I get following 1 2 3 4 5 6 7 8 However, I want to do the same thing for dictionary. Why am I not able to iterate over dictionary like this? class Dictionary: def __init__(self): self.dictionary = {'a' : 1, 'b' : 2, 'c': 3} def __getitem__(self,key): return self.dictionary[key] d = Dictionary() for key in d: print(key, d[key], end=" ") I expect following output a 1 b 2 c 3 But when I run above code, I get following error : Traceback (most recent call last): File "<ipython-input-33-4547779db7ec>", line 8, in <module> for key in d: File "<ipython-input-33-4547779db7ec>", line 6, in __getitem__ return self.dictionary[key] KeyError: 0 We can iterate over normal dictionary like this : for key in d. This iterates over all the keys, is it not possible to iterate like this using __getitem__()? | A for loop works with iterators, objects you can pass to next. An object is an iterator if it has a __next__ method. Neither of your classes does, so Python will first pass your object to iter to get an iterator. The first thing iter tries to do is call the object's __iter__ method. Neither of your classes defines __iter__, either, so iter next checks if its object defines __getitem__. Both of your classes do, so iter returns an object of type iterator, whose __next__ method can be imagined to be something like def __next__(self): try: rv = self.thing.__getitem__(self.i) except IndexError: raise StopIteration self.i += 1 return rv (The iterator holds a reference to the thing which defined __getitem__, as well as the value of i to track state between calls to __next__. i is presumed to be initialized to 0.) For Array, this works, because it has integer indices. For Dictionary, though, 0 is not a key, and instead of raising an IndexError, you get a KeyError with the __next__ method does not catch. (This is alluded to in the documentation for __getitem__: Note for loops expect that an IndexError will be raised for illegal indexes to allow proper detection of the end of the sequence. ) To make your Dictionary class iterable, define __iter__ class Dictionary: def __init__(self): self.dictionary = {'a' : 1, 'b' : 2, 'c': 3} def __getitem__(self,key): return self.dictionary[key] def __iter__(self): return iter(self.dictionary) dict.__iter__ returns a value of type dict_keyiterator, which is the thing that yields the dict's keys, which you can use with Dictionary.__getitem__. | 8 | 9 |
61,659,293 | 2020-5-7 | https://stackoverflow.com/questions/61659293/sums-of-variable-size-chunks-of-a-list-where-sizes-are-given-by-other-list | I would like to make the following sum given two lists: a = [0,1,2,3,4,5,6,7,8,9] b = [2,3,5] The result should be the sum of the every b element of a like: b[0] = 2 so the first sum result should be: sum(a[0:2]) b[1] = 3 so the second sum result should be: sum(a[2:5]) b[2] = 5 so the third sum result should be: sum(a[5:10]) The printed result: 1,9,35 | You can make use of np.bincount with weights: groups = np.repeat(np.arange(len(b)), b) np.bincount(groups, weights=a) Output: array([ 1., 9., 35.]) | 8 | 9 |
61,650,474 | 2020-5-7 | https://stackoverflow.com/questions/61650474/valueerror-columns-must-be-same-length-as-key-in-pandas | i have df below Cost,Reve 0,3 4,0 0,0 10,10 4,8 len(df['Cost']) = 300 len(df['Reve']) = 300 I need to divide df['Cost'] / df['Reve'] Below is my code df[['Cost','Reve']] = df[['Cost','Reve']].apply(pd.to_numeric) I got the error ValueError: Columns must be same length as key df['C/R'] = df[['Cost']].div(df['Reve'].values, axis=0) I got the error ValueError: Wrong number of items passed 2, placement implies 1 | Problem is duplicated columns names, verify: #generate duplicates df = pd.concat([df, df], axis=1) print (df) Cost Reve Cost Reve 0 0 3 0 3 1 4 0 4 0 2 0 0 0 0 3 10 10 10 10 4 4 8 4 8 df[['Cost','Reve']] = df[['Cost','Reve']].apply(pd.to_numeric) print (df) # ValueError: Columns must be same length as key You can find this columns names: print (df.columns[df.columns.duplicated(keep=False)]) Index(['Cost', 'Reve', 'Cost', 'Reve'], dtype='object') If same values in columns is possible remove duplicated by: df = df.loc[:, ~df.columns.duplicated()] df[['Cost','Reve']] = df[['Cost','Reve']].apply(pd.to_numeric) #simplify division df['C/R'] = df['Cost'].div(df['Reve']) print (df) Cost Reve C/R 0 0 3 0.0 1 4 0 inf 2 0 0 NaN 3 10 10 1.0 4 4 8 0.5 | 7 | 12 |
61,632,584 | 2020-5-6 | https://stackoverflow.com/questions/61632584/understanding-input-shape-to-pytorch-lstm | This seems to be one of the most common questions about LSTMs in PyTorch, but I am still unable to figure out what should be the input shape to PyTorch LSTM. Even after following several posts (1, 2, 3) and trying out the solutions, it doesn't seem to work. Background: I have encoded text sequences (variable length) in a batch of size 12 and the sequences are padded and packed using pad_packed_sequence functionality. MAX_LEN for each sequence is 384 and each token (or word) in the sequence has a dimension of 768. Hence my batch tensor could have one of the following shapes: [12, 384, 768] or [384, 12, 768]. The batch will be my input to the PyTorch rnn module (lstm here). According to the PyTorch documentation for LSTMs, its input dimensions are (seq_len, batch, input_size) which I understand as following. seq_len - the number of time steps in each input stream (feature vector length). batch - the size of each batch of input sequences. input_size - the dimension for each input token or time step. lstm = nn.LSTM(input_size=?, hidden_size=?, batch_first=True) What should be the exact input_size and hidden_size values here? | You have explained the structure of your input, but you haven't made the connection between your input dimensions and the LSTM's expected input dimensions. Let's break down your input (assigning names to the dimensions): batch_size: 12 seq_len: 384 input_size / num_features: 768 That means the input_size of the LSTM needs to be 768. The hidden_size is not dependent on your input, but rather how many features the LSTM should create, which is then used for the hidden state as well as the output, since that is the last hidden state. You have to decide how many features you want to use for the LSTM. Finally, for the input shape, setting batch_first=True requires the input to have the shape [batch_size, seq_len, input_size], in your case that would be [12, 384, 768]. import torch import torch.nn as nn # Size: [batch_size, seq_len, input_size] input = torch.randn(12, 384, 768) lstm = nn.LSTM(input_size=768, hidden_size=512, batch_first=True) output, _ = lstm(input) output.size() # => torch.Size([12, 384, 512]) | 26 | 28 |
61,648,065 | 2020-5-7 | https://stackoverflow.com/questions/61648065/split-list-into-n-sublists-with-approximately-equal-sums | I have a list of integers, and I need to split it into a given number of sublists (with no restrictions on order or the number of elements in each), in a way that minimizes the average difference in sums of each sublist. For example: >>> x = [4, 9, 1, 5] >>> sublist_creator(x, 2) [[9], [4, 1, 5]] because list(map(sum, sublist_creator(x, 2))) yields [9, 10], minimizing the average distance. Alternatively, [[9, 1], [4, 5]] would have been equally correct, and my use case has no preference between two possibilities. The only way I can think of to do this is by checking, iteratively, all possible combinations, but I'm working with a list of ~5000 elements and need to split it into ~30 sublists, so that approach is prohibitively expensive. | Here's the outline: create N empty lists sort() your input array in ascending order pop() the last element from the sorted array append() the popped element to the list with the lowest sum() of the elements repeat 3 and 4 until input array is empty profit!!! With M=5000 elements and N=30 lists this approach might take about O(N*M) if you carefully store the intermediate sums of the sublists instead of calculating them from the scratch every time. | 11 | 7 |
61,631,955 | 2020-5-6 | https://stackoverflow.com/questions/61631955/python-requests-ssl-error-during-requests | I'm learning API requests using python requests for personal interest. I'm trying to simply download the URL 'https://live.euronext.com/fr/product/equities/fr0000120578-xpar/'. It works perfectly using postman : Screenshot of postman GET request I'm trying the same request in python using this code : import requests headers = { "Accept": "text/html,application/xhtml+xml,application/" \ "xml;q=0.9,image/webp,image/apng,*/*;q=0.8", "Accept-Encoding": "gzip, deflate", "Accept-Language": "en-GB,en;q=0.9,en-US;q=0.8,ml;q=0.7", "Connection": "keep-alive", "Host": 'live.euronext.com', "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) " \ "Gecko/20100101 Firefox/75.0" } url = "https://live.euronext.com/fr/product/equities/fr0000120578-xpar/" r = requests.get(url, headers=headers, verify="/etc/ssl/certs/ca-certificates.crt") print(r) I've read the requests doc, i've search for similar issues, i've tried various options like verify=False or verify="/etc/ssl/certs/ca-certificates.crt" pointing to some valid certificates. I've also tried many headers options. None option is working. I still have a [SSL: WRONG_SIGNATURE_TYPE] wrong signature type error. Please I need help to understand the issue. Thanks, Here is the full error text : --------------------------------------------------------------------------- SSLError Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 664 # Make the request on the httplib connection object. --> 665 httplib_response = self._make_request( 666 conn, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 375 try: --> 376 self._validate_conn(conn) 377 except (SocketTimeout, BaseSSLError) as e: /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 996 conn.connect() 997 /usr/lib/python3/dist-packages/urllib3/connection.py in connect(self) 351 --> 352 self.sock = ssl_wrap_socket( 353 sock=conn, /usr/lib/python3/dist-packages/urllib3/util/ssl_.py in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password) 369 if HAS_SNI and server_hostname is not None: --> 370 return context.wrap_socket(sock, server_hostname=server_hostname) 371 /usr/lib/python3.8/ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session) 499 # ctx._wrap_socket() --> 500 return self.sslsocket_class._create( 501 sock=sock, /usr/lib/python3.8/ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session) 1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") -> 1040 self.do_handshake() 1041 except (OSError, ValueError): /usr/lib/python3.8/ssl.py in do_handshake(self, block) 1308 self.settimeout(None) -> 1309 self._sslobj.do_handshake() 1310 finally: SSLError: [SSL: WRONG_SIGNATURE_TYPE] wrong signature type (_ssl.c:1108) During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 438 if not chunked: --> 439 resp = conn.urlopen( 440 method=request.method, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 718 --> 719 retries = retries.increment( 720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] /usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 435 if new_retry.is_exhausted(): --> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 437 MaxRetryError: HTTPSConnectionPool(host='live.euronext.com', port=443): Max retries exceeded with url: /fr/product/equities/fr0000120578-xpar/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_SIGNATURE_TYPE] wrong signature type (_ssl.c:1108)'))) During handling of the above exception, another exception occurred: SSLError Traceback (most recent call last) <ipython-input-2-032056a6c771> in <module> 13 } 14 url = "https://live.euronext.com/fr/product/equities/fr0000120578-xpar/" ---> 15 r = requests.get(url, headers=headers, verify="/etc/ssl/certs/ca-certificates.crt") 16 print(r) /usr/lib/python3/dist-packages/requests/api.py in get(url, params, **kwargs) 73 74 kwargs.setdefault('allow_redirects', True) ---> 75 return request('get', url, params=params, **kwargs) 76 77 /usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs) 58 # cases, and look like a memory leak in others. 59 with sessions.Session() as session: ---> 60 return session.request(method=method, url=url, **kwargs) 61 62 /usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 531 } 532 send_kwargs.update(settings) --> 533 resp = self.send(prep, **send_kwargs) 534 535 return resp /usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs) 644 645 # Send the request --> 646 r = adapter.send(request, **kwargs) 647 648 # Total elapsed time of the request (approximately) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 512 if isinstance(e.reason, _SSLError): 513 # This branch is for urllib3 v1.22 and later. --> 514 raise SSLError(e, request=request) 515 516 raise ConnectionError(e, request=request) SSLError: HTTPSConnectionPool(host='live.euronext.com', port=443): Max retries exceeded with url: /fr/product/equities/fr0000120578-xpar/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_SIGNATURE_TYPE] wrong signature type (_ssl.c:1108)'))) | I finally managed to find the issue, thanks to https://github.com/psf/requests/issues/4775 import requests import ssl from urllib3 import poolmanager url = 'https://live.euronext.com/fr/product/equities/FR0000120271-XPAR' class TLSAdapter(requests.adapters.HTTPAdapter): def init_poolmanager(self, connections, maxsize, block=False): """Create and initialize the urllib3 PoolManager.""" ctx = ssl.create_default_context() ctx.set_ciphers('DEFAULT@SECLEVEL=1') self.poolmanager = poolmanager.PoolManager( num_pools=connections, maxsize=maxsize, block=block, ssl_version=ssl.PROTOCOL_TLS, ssl_context=ctx) session = requests.session() session.mount('https://', TLSAdapter()) res = session.get(url) print(res) And the output was <Response [200]> ! | 13 | 21 |
61,642,034 | 2020-5-6 | https://stackoverflow.com/questions/61642034/displaying-full-content-of-dataframe-cell-without-ellipsis-truncating-the-text | I am trying to display the contents of an Excel file in a Jupyter Notebook. However, a column named definition of the Excel sheet contains long strings. So when I display the DataFrame in the notebook, the long strings are truncated by ellipsis (...). Is there a way to display the complete contents of the column in Jupyter Notebook, since there is clearly space to the right that can be utilized by the Definition column? | You can use options.display.max_colwidth to specify how much you want to see in the default representation: In [2]: df Out[2]: one 0 one 1 two 2 This is very long string very long string very... In [3]: pd.options.display.max_colwidth Out[3]: 50 In [4]: pd.options.display.max_colwidth = 100 In [5]: df Out[5]: one 0 one 1 two 2 This is very long string very long string very long string veryvery long string reference - Print very long string completely in pandas dataframe | 11 | 11 |
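As a small runnable sketch of the option used in the answer above (assuming only that pandas is installed), the limit can also be lifted temporarily with option_context, so the rest of the notebook keeps the default width:

import pandas as pd

df = pd.DataFrame({"definition": ["a very long description " * 10]})

# Show full cell contents for this display only; None removes the limit
# (older pandas versions used -1 instead of None).
with pd.option_context("display.max_colwidth", None):
    print(df)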
61,634,759 | 2020-5-6 | https://stackoverflow.com/questions/61634759/python-futurewarning-indexing-with-multiple-keys-implicitly-converted-to-a-tup | I've recently upgraded Python to 3.7.6 and my existing code: df['Simple_Avg_Return'] = df.groupby(['YF_Ticker'])['Share_Price_Delta_Percent', 'Dividend_Percent'].transform( sum).divide(2).round(2) Is now throwing this warning: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead. How would I go about converting this to a list as advised and where? | You need to use an extra bracket around ['Share_Price_Delta_Percent', 'Dividend_Percent'] Like this, df['Simple_Avg_Return'] = df.groupby(['YF_Ticker'])[['Share_Price_Delta_Percent', 'Dividend_Percent']].transform( sum).divide(2).round(2) Quoting @ALollz comment The decision was made https://github.com/pandas-dev/pandas/issues/23566. To keep compatibility between 0.25 and 1.0 they didn't remove the feature but added a warning in 1.0. Likely it will be removed in the next major deprecation cycle. Source | 11 | 22 |
61,627,708 | 2020-5-6 | https://stackoverflow.com/questions/61627708/what-does-dictstr-any-mean-in-python | I'm referring to the documentation available here to understand how the return types are defined while using function annotations. I couldn't understand what the str in Dict[str, Any] refers to. Does str refer to the dictionary's keys and Any (meaning, it can be a string or an int) refer to the type of the dictionary's value? EDIT: In the above-mentioned link, it is mentioned The PEP 484 type Dict[str, Any] would be suitable, but it is too lenient, as arbitrary string keys can be used, and arbitrary values are valid. Could someone explain what does arbitrary string keys refer to? I understand keys are strings, but when we say arbitrary string keys do we just mean that the dictionary can take any key that is a string? Or does the word arbitrary hold any other significance here? | Yep! Normally python variables are mutable (types can change), but specifying it like this is good documentation and makes it very clear what goes where. More usage documentation can be found here! https://docs.python.org/3/library/typing.html EDIT: Elaborate on answer on updated question The PEP documentation you refer to identifies how type-hints, albeit useful, are still error prone. By specifying Dict[str, Any], the str can be "arbitrary", meaning it can be a string key of anything (ie. name, age, height, humidity). Out of these keys, one may consider "humidity" shouldn't be one of the string-keys for this dict, but there isn't a way to check, or enforce that until some error occurs down the road (hence, it is "arbitrary"; no rules to govern what are "allowed" string-keys of this dict). Therefore, per the documentation, specifying this class (derived from TypedDict): class Movie(TypedDict): name: str year: int will specifically limit users to create a new kind of dict (a TypedDict) with string-keys name and year (with respective values typed str and int). Users will not be able to "arbitrarily" add a new key (ie. humidity) to this "Movie" TypedDict, or otherwise assign a non-int value to Movie["year"]. | 13 | 11 |
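A minimal sketch of the TypedDict idea mentioned at the end of the answer above (Python 3.8+; the example values are made up). Note that the constraint is enforced by a type checker, not by Python at runtime:

from typing import Any, Dict, TypedDict

class Movie(TypedDict):
    name: str
    year: int

loose: Dict[str, Any] = {"name": "Blade Runner", "year": 1982, "humidity": 0.4}  # any string key is accepted

ok: Movie = {"name": "Blade Runner", "year": 1982}      # matches the declared keys
bad: Movie = {"name": "Blade Runner", "humidity": 0.4}  # flagged by mypy/pyright; Python itself does not complain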
61,620,036 | 2020-5-5 | https://stackoverflow.com/questions/61620036/how-to-run-python3-code-in-vscode-bin-sh-1-python-not-found | I'm trying to run a python file in VSCode using python3. I can run using integrated terminal like it says in the microsoft vscode tutorial on python. However, I would like the program to print in the output tab and not take up the terminal window. However I'm getting this error. The standard code runner config file launch.json, looks like this; "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" } ] I've tried to set my python path in VSCode in settings.json ... "python.pythonPath": "python3", "code-runner.executorMap": { "python3": "/usr/bin/python3" } I've also set an alias for python -> python3 (as my ubuntu 20.04 doesn't come with python2 anymore) alias python="python3" However, I keep getting the above error. Any Ideas? | Solution in 2024 Step1 : Goto settings of Code Runner extension Step2 : Find the section Code-runner: Executor Map And click on Edit in settings.json Step3 : Now change the setting for python Before "code-runner.executorMap": { ... "python": "python -u" ... } Change this to "code-runner.executorMap": { ... "python": "python3 -u" ... } | 34 | 52 |
61,505,749 | 2020-4-29 | https://stackoverflow.com/questions/61505749/tensorflowcan-save-best-model-only-with-val-acc-available-skipping | I have an issue with tf.callbacks.ModelChekpoint. As you can see in my log file, the warning comes always before the last iteration where the val_acc is calculated. Therefore, Modelcheckpoint never finds the val_acc Epoch 1/30 1/8 [==>...........................] - ETA: 19s - loss: 1.4174 - accuracy: 0.3000 2/8 [======>.......................] - ETA: 8s - loss: 1.3363 - accuracy: 0.3500 3/8 [==========>...................] - ETA: 4s - loss: 1.3994 - accuracy: 0.2667 4/8 [==============>...............] - ETA: 3s - loss: 1.3527 - accuracy: 0.3250 6/8 [=====================>........] - ETA: 1s - loss: 1.3042 - accuracy: 0.3333 WARNING:tensorflow:Can save best model only with val_acc available, skipping. 8/8 [==============================] - 4s 482ms/step - loss: 1.2846 - accuracy: 0.3375 - val_loss: 1.3512 - val_accuracy: 0.5000 Epoch 2/30 1/8 [==>...........................] - ETA: 0s - loss: 1.0098 - accuracy: 0.5000 3/8 [==========>...................] - ETA: 0s - loss: 0.8916 - accuracy: 0.5333 5/8 [=================>............] - ETA: 0s - loss: 0.9533 - accuracy: 0.5600 6/8 [=====================>........] - ETA: 0s - loss: 0.9523 - accuracy: 0.5667 7/8 [=========================>....] - ETA: 0s - loss: 0.9377 - accuracy: 0.5714 WARNING:tensorflow:Can save best model only with val_acc available, skipping. 8/8 [==============================] - 1s 98ms/step - loss: 0.9229 - accuracy: 0.5750 - val_loss: 1.2507 - val_accuracy: 0.5000 This is my code for training the CNN. callbacks = [ TensorBoard(log_dir=r'C:\Users\reda\Desktop\logs\{}'.format(Name), histogram_freq=1), ModelCheckpoint(filepath=r"C:\Users\reda\Desktop\checkpoints\{}".format(Name), monitor='val_acc', verbose=2, save_best_only=True, mode='max')] history = model.fit_generator( train_data_gen, steps_per_epoch=total_train // batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=total_val // batch_size, callbacks=callbacks) | I know how frustrating these things can be sometimes..but tensorflow requires that you explicitly write out the name of metric you are wanting to calculate You will need to actually say 'val_accuracy' metric = 'val_accuracy' ModelCheckpoint(filepath=r"C:\Users\reda.elhail\Desktop\checkpoints\{}".format(Name), monitor=metric, verbose=2, save_best_only=True, mode='max')] Hope this helps =) *** As later noted by BlueTurtle (Give their answer a thumbs up please, likely still beneath this) you also need to use the full metric name to match your model.compile, ModelCheckpoint, and EarlyStopping. | 24 | 23 |
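A minimal sketch of the fix described above, assuming a tf.keras model compiled with metrics=['accuracy'] so that Keras logs the validation metric as 'val_accuracy':

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

metric = 'val_accuracy'  # 'val_' + the exact name used in metrics=[...]
callbacks = [
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', monitor=metric,
                                       save_best_only=True, mode='max', verbose=1),
    tf.keras.callbacks.EarlyStopping(monitor=metric, patience=5, mode='max'),
]
# model.fit(train_data_gen, validation_data=val_data_gen, callbacks=callbacks, ...)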
61,564,284 | 2020-5-2 | https://stackoverflow.com/questions/61564284/when-is-string-swapcase-swapcase-not-equal-to-string | Documentation for str.swapcase() method says: Return a copy of the string with uppercase characters converted to lowercase and vice versa. Note that it is not necessarily true that s.swapcase().swapcase() == s. I can't think of an example where s.swapcase().swapcase() != s, can anyone think of one? | A simple example would be: s = "ß" print(s.swapcase().swapcase()) Output: ss ß is the German lowercase double s (the "correct" uppercase version would be ẞ). The reason this happens is that the Unicode standard has defined the capitalization of ß to be SS: The data in this file, combined with # the simple case mappings in UnicodeData.txt, defines the full case mappings # Lowercase_Mapping (lc), Titlecase_Mapping (tc), and Uppercase_Mapping (uc). ... # The entries in this file are in the following machine-readable format: # # <code>; <lower>; <title>; <upper>; (<condition_list>;)? # <comment> ... # The German es-zed is special--the normal mapping is to SS. # Note: the titlecase should never occur in practice. It is equal to titlecase(uppercase(<es-zed>)) 00DF; 00DF; 0053 0073; 0053 0053; # LATIN SMALL LETTER SHARP S (00DF is ß, 0053 is S, and 0073 is s) | 7 | 8 |
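A quick way to look for more such characters is to scan a range of code points and keep the ones where the round trip fails (only a sketch; widen the range to cover more of Unicode):

examples = []
for codepoint in range(0x2000):               # first 8192 code points
    ch = chr(codepoint)
    if ch.swapcase().swapcase() != ch:
        examples.append((ch, ch.swapcase(), ch.swapcase().swapcase()))

for original, swapped, round_tripped in examples:
    print(repr(original), '->', repr(swapped), '->', repr(round_tripped))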
61,578,899 | 2020-5-3 | https://stackoverflow.com/questions/61578899/disable-black-formatting-of-dict-expression-within-mapping-comprehensions | I'm currently researching Black as our default formatter, but, I'm having some edge cases that don't format well and I want to know if there's a way to get the result I want. Black's documentation partially explores my problem, I have a dictionary expression spread horizontally, and I want to keep it that way since I'm expecting lines to be added, e.g.: # Black would keep this as-is because of the trailing comma TRANSLATIONS = { "en_us": "English (US)", "pl_pl": "polski", } But in my case the dictionary is inside a list comprehension: res = [ { 'id': item.id, 'name': item.name, } for item in items.select() ] Which Black collapses, regardless of the trailing comma, like so: res = [ {"id": item.id, "name": item.name,} for item in items.select() ] Is there a way of telling Black to retain the horizontal structure in these cases? | It seems that black addressed this issue. At the time of writing this answer, using black version 20.8b1, the formatting is done as I was hoping for. As long as there is a magic trailing comma on the last item in the dictionary expression, black will format the code within the list comprehension. res = [ { "id": item.id, "name": item.name, } for item in items.select() ] Will format to: res = [ { "id": item.id, "name": item.name, } for item in items.select() ] | 16 | 10 |
61,544,854 | 2020-5-1 | https://stackoverflow.com/questions/61544854/from-future-import-annotations | Python doc __future__ In the python docs about __future__ there is a table where it shows that annotations are "optional in" 3.7.0b1 and "mandatory in" 4.0 but I am still able to use annotations in 3.8.2 without importing annotations. Given that, what is the use of it? >>> def add_int(a:int, b:int) -> int: ... return a + b >>> add_int.__annotations__ {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>} I doubt I clearly understand the meaning of "optional in" and "mandatory in" here | Mandatory is an interesting word choice. I guess it means that it's by default in the language. You don't have to enable it with from __future__ import annotations The annotations feature are referring to the PEP 563: Postponed evaluation of annotations. It's an enhancement to the existing annotations feature which was initially introduced in python 3.0 and redefined as type hints in python 3.5, that's why your code works under python 3.8. Here's what optional from __future__ import annotations changes in python 3.7+: class A: def f(self) -> A: # NameError: name 'A' is not defined pass but this works from __future__ import annotations class A: def f(self) -> A: pass See this chapter in python 3.7 what's new about postponed annotations: Since this change breaks compatibility, the new behavior needs to be enabled on a per-module basis in Python 3.7 using a __future__ import: from __future__ import annotations It will become the default in Python 3.10*. * it was announced to be default in 3.10 (when python3.7 was released), but it was now moved to a later release | 115 | 167 |
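A small sketch of what the import actually changes: annotations are stored as plain strings and only resolved on demand, which is why the forward reference in the answer above stops failing.

from __future__ import annotations
import typing

class A:
    def f(self) -> A:          # no NameError: the annotation is kept as the string 'A'
        return self

print(A.f.__annotations__)          # {'return': 'A'}
print(typing.get_type_hints(A.f))   # {'return': <class '__main__.A'>}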
61,606,095 | 2020-5-5 | https://stackoverflow.com/questions/61606095/false-boolean-is-not-showing-in-proto3-python | I'm using protobuf (proto3) in Python and it is not showing bool fields correctly. entity_proto message Foo { string id = 1; bool active = 3; } Setting values in Python: foo = entity_proto.Foo(id='id-123', active=True) print(foo) # id: id-123 # active: True # But if you set the value False, it does not show 'active' in the print statement foo = entity_proto.Foo(id='id-123', active=False) print(foo) # id: id-123 If you try print(foo.active), the output is False, which is OK. The main problem comes when I use HTTP transcoding: if I try console.log(foo.active) it gives me undefined, not false (lang: JavaScript). Can someone please let me know why it is not shown for False values? | Protobuf has default values for fields, such as boolean false, numeric zero, or the empty string. It doesn't bother encoding those since it's a waste of space and/or bandwidth (when transmitting). That's probably why it's not showing up. A good way to check this would be to set id to an empty string and see if it behaves similarly: foo = entity_proto.Foo(id='', active=True) print(foo) # active: True (I suspect). The solution really depends on where that undefined is coming from. Either JavaScript has a real undefined value, in which case you can use the null/undefined coalescing operator: console.log(foo.active ?? false) Or, if this HTTP transcoder is doing something like creating a literal "undefined" string, you'll have to figure out how to turn (what is probably) None into "false". | 13 | 9 |
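A way to see the default-value behaviour directly from Python, assuming entity_pb2 is the module generated from the Foo message in the question: fields holding their default are simply left off the wire.

import entity_pb2  # assumed name of the generated module

foo = entity_pb2.Foo(id='id-123', active=False)
print(foo.active)                # False - reading the field still works
print(foo.SerializeToString())   # only the 'id' field is encoded

empty = entity_pb2.Foo()         # every field at its default value
print(empty.SerializeToString()) # b'' - nothing gets encoded at all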
61,507,845 | 2020-4-29 | https://stackoverflow.com/questions/61507845/model-clean-vs-model-clean-fields | What is the difference between the Model.clean() method and the Model.clean_fields() method?. When should I use one or the other?. According to the Model.clean_fields() method documentation: This method will validate all fields on your model. The optional exclude argument lets you provide a list of field names to exclude from validation. It will raise a ValidationError if any fields fail validation. . . . So the Model.clean_fields() method is used to validate the fields of my model. According to the Model.clean() method documentation: This method should be used to provide custom model validation, and to modify attributes on your model if desired. For instance, you could use it to automatically provide a value for a field, or to do validation that requires access to more than a single field . . . But, this method should be used to make other validations, but among those, you can perform validations to different fields. And in the Django examples of the Model.clean() method, validations are made to fields: class Article(models.Model): ... def clean(self): # Don't allow draft entries to have a pub_date. if self.status == 'draft' and self.pub_date is not None: raise ValidationError(_('Draft entries may not have a publication date.')) # Set the pub_date for published items if it hasn't been set already. if self.status == 'published' and self.pub_date is None: self.pub_date = datetime.date.today() So for example, if I want to validate a field, in which of the two methods should I do it? And for what exactly should the methods be used? since I don't understand very well why they should be used exactly. It would be very helpful to provide examples where you can see the difference between the methods, to know exactly what the methods should be used for. | if I want to validate a field, in which of the two methods should I do it? Then you add that normally as validator [Django-doc]: from django.core.exceptions import ValidationError def even_validator(value): if value % 2: ValidationError('The value should be even') class MyModel(models.Model): my_fields = models.IntegerField(validators=[even_validator]) The .clean_fields() method [Django-doc] will then call the cleaning methods and validation methods of the individual fields, and you can pass parameters to specify if you want to exclude validating a certain field. But normally you do not override .clean_fields() itself. The .clean() method [Django-doc] on the other hand is a validation method that is used if multiple fields are involved. You can override this, although in that case you normally also call the super().clean() method to do validations defined at the base class. | 8 | 8 |
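A short sketch (model and field names invented for illustration) of where each hook fits; Model.full_clean() runs clean_fields(), then clean(), then validate_unique():

from django.core.exceptions import ValidationError
from django.db import models


def non_negative(value):
    if value < 0:
        raise ValidationError('Must be non-negative.')


class Booking(models.Model):
    start = models.DateField()
    end = models.DateField()
    seats = models.IntegerField(validators=[non_negative])  # single-field rule -> checked by clean_fields()

    def clean(self):
        # cross-field rule -> needs more than one field, so it lives in clean()
        if self.end < self.start:
            raise ValidationError('end must not be before start.')

# booking.full_clean() triggers clean_fields(), clean() and validate_unique() in that order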
61,546,201 | 2020-5-1 | https://stackoverflow.com/questions/61546201/python-typehint-int-as-positive | I am curious what would be the best way to specify that a type is not just a int but a positive int in Python. Examples: # As function argument def my_age(age: int) -> str: return f"You are {age} years old." # As class property class User: id: int In both of these situations a negative value would be erroneous. It would be nice to be warned by my IDE/linter. Is there a simple way to specify an integer as positive using type-hints? | Little late to the party but there is now a library called annotated-types that provides exactly what you want and is the 'official' way to go about this, per PEP-593. So for your example of positive integers, you could make use of the built-in typing.Annotated and annotated-types' Gt predicate, like this: from typing import Annotated from annotated_types import Gt def my_age(age: Annotated[int, Gt(0)]) -> str: return f"You are {age} years old." # As class property class User: id: Annotated[int, Gt(0)] Note that like all things related to type hints in Python, this doesn't actually enforce the constraint at runtime. If you want to enforce it you would still have to build assertions yourself. | 24 | 22 |
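Since the annotation itself is not enforced at runtime, here is a sketch of reading the metadata back for a manual check (it assumes annotated-types' Gt stores its bound on a .gt attribute):

from typing import Annotated, get_type_hints
from annotated_types import Gt

PositiveInt = Annotated[int, Gt(0)]

def my_age(age: PositiveInt) -> str:
    hints = get_type_hints(my_age, include_extras=True)
    for meta in hints['age'].__metadata__:
        if isinstance(meta, Gt) and not age > meta.gt:
            raise ValueError(f"age must be greater than {meta.gt}")
    return f"You are {age} years old."

print(my_age(30))   # OK
# my_age(-1)        # would raise ValueError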
61,569,324 | 2020-5-3 | https://stackoverflow.com/questions/61569324/type-annotation-for-callable-that-takes-kwargs | There is a function (f) which consumes a function signature (g) that takes a known first set of arguments and any number of keyword arguments **kwargs. Is there a way to include the **kwargs in the type signature of (g) that is described in (f)? For example: from typing import Callable, Any from functools import wraps import math def comparator(f: Callable[[Any, Any], bool]) -> Callable[[str], bool]: @wraps(f) def wrapper(input_string: str, **kwargs) -> bool: a, b, *_ = input_string.split(" ") return f(eval(a), eval(b), **kwargs) return wrapper @comparator def equal(a, b): return a == b @comparator def equal_within(a, b, rel_tol=1e-09, abs_tol=0.0): return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol) # All following statements should print `True` print(equal("1 1") == True) print(equal("1 2") == False) print(equal_within("5.0 4.99998", rel_tol=1e-5) == True) print(equal_within("5.0 4.99998") == False) The function comparator wraps its argument f with wrapper, which consumes the input for f as a string, parses it and evaluates it using f. In this case, Pycharm gives a warning that return f(eval(a), eval(b), **kwargs) calls f with the unexpected argument **kwargs, which doesn't match the expected signature. This post on Reddit suggests adding either Any or ... to the type signature of f like f: Callable[[Any, Any, ...], bool] f: Callable[[Any, Any, Any], bool] The former causes a TypeError [1], while the latter seems to misleading, since f accepts at least 2 arguments, rather than exactly 3. Another workaround is to leave the Callable args definition open with ... like f: Callable[..., bool], but I'm wondering if there is a more appropriate solution. TypeError: Callable[[arg, ...], result]: each arg must be a type. Got Ellipsis. | tl;dr: Protocol may be the closest feature that's implemented, but it's still not sufficient for what you need. See this issue for details. Full answer: I think the closest feature to what you're asking for is Protocol, which was introduced in Python 3.8 (and backported to older Pythons via typing_extensions). It allows you to define a Protocol subclass that describes the behaviors of the type, pretty much like an "interface" or "trait" in other languages. For functions, a similar syntax is supported: from typing import Protocol # from typing_extensions import Protocol # if you're using Python 3.6 class MyFunction(Protocol): def __call__(self, a: Any, b: Any, **kwargs) -> bool: ... def decorator(func: MyFunction): ... @decorator # this type-checks def my_function(a, b, **kwargs) -> bool: return a == b In this case, any function that have a matching signature can match the MyFunction type. However, this is not sufficient for your requirements. In order for the function signatures to match, the function must be able to accept an arbitrary number of keyword arguments (i.e., have a **kwargs argument). To this point, there's still no way of specifying that the function may (optionally) take any keyword arguments. This GitHub issue discusses some possible (albeit verbose or complicated) solutions under the current restrictions. For now, I would suggest just using Callable[..., bool] as the type annotation for f. It is possible, though, to use Protocol to refine the return type of the wrapper: class ReturnFunc(Protocol): def __call__(self, s: str, **kwargs) -> bool: ... def comparator(f: Callable[..., bool]) -> ReturnFunc: .... 
This gets rid of the "unexpected keyword argument" error at equal_within("5.0 4.99998", rel_tol=1e-5). | 24 | 21 |
61,556,618 | 2020-5-2 | https://stackoverflow.com/questions/61556618/plotly-how-to-display-and-filter-a-dataframe-with-multiple-dropdowns | I'm new to Python, Pandas and Plotly so maybe the answer is easy but I couldn't find anything on the forum or anywhere else … I don’t want to use Dash nor ipywidgets since I want to be able to export in HTML using plotly.offline.plot (I need an interactive HTML file to dynamically control the figure without any server running like Dash seems to do). Well my problem is that I would like to filter a plotly figure using several (cumulative) dropdown buttons (2 in this example, but it could be more) by filtering the original data with the selected value in the dropdown lists. num label color value 1 A red 0.4 2 A blue 0.2 3 A green 0.3 4 A red 0.6 5 A blue 0.7 6 A green 0.4 7 B blue 0.2 8 B green 0.4 9 B red 0.4 10 B green 0.2 11 C red 0.1 12 C blue 0.3 13 D red 0.8 14 D blue 0.4 15 D green 0.6 16 D yellow 0.5 In this example, if I choose label ‘A’ and color ‘red’ I would like to display ONLY the values of rows with label ‘A’ AND color ‘red’, as follow : num label color value 1 A red 0.4 4 A red 0.6 Then, the figure should display only 2 values 1) So here is the code I have for the moment (see below) but I don’t know how to continue. Do you have any idea ? 2) Extra question : is it possible to use checkboxes instead of dropdown lists, to be able to select multiple values inside a criteria, for example : Labels filter could be A or B, not only one in the list … Thanks in advance for your help ! import pandas as pd import plotly.graph_objects as go d = { 'num' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], 'label' : ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D', 'D'], 'color' : ['red', 'blue', 'green', 'red', 'blue', 'green', 'blue', 'green', 'red', 'green', 'red', 'blue', 'red', 'blue', 'green', 'yellow'], 'value' : [0.4, 0.2, 0.3, 0.6, 0.7, 0.4, 0.2, 0.4, 0.4, 0.2, 0.1, 0.3, 0.8, 0.4, 0.6, 0.5] } # Build dataframe df = pd.DataFrame(data=d) # Build dropdown Labels labels = df["label"].unique() buttonsLabels = [dict(label = "All labels", method = "restyle", args = [{'y' : [df["value"] * 100]}] # or what else ? )] for label in labels: buttonsLabels.append(dict(label = label, method = "restyle", visible = True, #args = [{'y' : ??? }] )) # Build dropdown Colors colors = df["color"].unique() buttonsColors = [dict(label = "All colors", method = "restyle", args = [{'y' : [df["value"] * 100]}] # or what else ? )] for color in colors: buttonsColors.append(dict(label = color, method = "restyle", visible = True, # args = [{'y' : ??? }] )) # Display figure fig = go.Figure(data = [ go.Scatter(x = df["num"], y = df["value"] * 100 ) ]) fig.update_layout(updatemenus = [ dict(buttons = buttonsLabels, showactive = True), dict(buttons = buttonsColors, showactive = True, y = 0.8) ]) fig.show() | It's certainly possible to display and filter a dataframe with multiple dropdowns. The code snippet below will do exactly that for you. The snippet has quite a few elements in common with your provided code, but I had to build it from scratch to make sure everything harmonized. Run the snippet below, and select A and Red to see that you will in fact get: num label color value 1 A red 0.4 4 A red 0.6 Plot: There's still room for improvement. I'll polish the code and improve the layout when I find the time. First, please let me know if this is in fact what you were looking for. 
Complete code: # Imports import plotly.graph_objs as go import pandas as pd import numpy as np # source data df = pd.DataFrame({0: {'num': 1, 'label': 'A', 'color': 'red', 'value': 0.4}, 1: {'num': 2, 'label': 'A', 'color': 'blue', 'value': 0.2}, 2: {'num': 3, 'label': 'A', 'color': 'green', 'value': 0.3}, 3: {'num': 4, 'label': 'A', 'color': 'red', 'value': 0.6}, 4: {'num': 5, 'label': 'A', 'color': 'blue', 'value': 0.7}, 5: {'num': 6, 'label': 'A', 'color': 'green', 'value': 0.4}, 6: {'num': 7, 'label': 'B', 'color': 'blue', 'value': 0.2}, 7: {'num': 8, 'label': 'B', 'color': 'green', 'value': 0.4}, 8: {'num': 9, 'label': 'B', 'color': 'red', 'value': 0.4}, 9: {'num': 10, 'label': 'B', 'color': 'green', 'value': 0.2}, 10: {'num': 11, 'label': 'C', 'color': 'red', 'value': 0.1}, 11: {'num': 12, 'label': 'C', 'color': 'blue', 'value': 0.3}, 12: {'num': 13, 'label': 'D', 'color': 'red', 'value': 0.8}, 13: {'num': 14, 'label': 'D', 'color': 'blue', 'value': 0.4}, 14: {'num': 15, 'label': 'D', 'color': 'green', 'value': 0.6}, 15: {'num': 16, 'label': 'D', 'color': 'yellow', 'value': 0.5}, 16: {'num': 17, 'label': 'E', 'color': 'purple', 'value': 0.68}} ).T df_input = df.copy() # split df by labels labels = df['label'].unique().tolist() dates = df['num'].unique().tolist() # dataframe collection grouped by labels dfs = {} for label in labels: dfs[label]=pd.pivot_table(df[df['label']==label], values='value', index=['num'], columns=['color'], aggfunc=np.sum) # find row and column unions common_cols = [] common_rows = [] for df in dfs.keys(): common_cols = sorted(list(set().union(common_cols,list(dfs[df])))) common_rows = sorted(list(set().union(common_rows,list(dfs[df].index)))) # find dimensionally common dataframe df_common = pd.DataFrame(np.nan, index=common_rows, columns=common_cols) # reshape each dfs[df] into common dimensions dfc={} for df_item in dfs: #print(dfs[unshaped]) df1 = dfs[df_item].copy() s=df_common.combine_first(df1) df_reshaped = df1.reindex_like(s) dfc[df_item]=df_reshaped # plotly start fig = go.Figure() # one trace for each column per dataframe: AI and RANDOM for col in common_cols: fig.add_trace(go.Scatter(x=dates, visible=True, marker=dict(size=12, line=dict(width=2)), marker_symbol = 'diamond',name=col ) ) # menu setup updatemenu= [] # buttons for menu 1, names buttons=[] # create traces for each color: # build argVals for buttons and create buttons for df in dfc.keys(): argList = [] for col in dfc[df]: #print(dfc[df][col].values) argList.append(dfc[df][col].values) argVals = [ {'y':argList}] buttons.append(dict(method='update', label=df, visible=True, args=argVals)) # buttons for menu 2, colors b2_labels = common_cols # matrix to feed all visible arguments for all traces # so that they can be shown or hidden by choice b2_show = [list(b) for b in [e==1 for e in np.eye(len(b2_labels))]] buttons2=[] buttons2.append({'method': 'update', 'label': 'All', 'args': [{'visible': [True]*len(common_cols)}]}) # create buttons to show or hide for i in range(0, len(b2_labels)): buttons2.append(dict(method='update', label=b2_labels[i], args=[{'visible':b2_show[i]}] ) ) # add option for button two to hide all buttons2.append(dict(method='update', label='None', args=[{'visible':[False]*len(common_cols)}] ) ) # some adjustments to the updatemenus updatemenu=[] your_menu=dict() updatemenu.append(your_menu) your_menu2=dict() updatemenu.append(your_menu2) updatemenu[1] updatemenu[0]['buttons']=buttons updatemenu[0]['direction']='down' updatemenu[0]['showactive']=True 
updatemenu[1]['buttons']=buttons2 updatemenu[1]['y']=0.6 fig.update_layout(showlegend=False, updatemenus=updatemenu) fig.update_layout(yaxis=dict(range=[0,df_input['value'].max()+0.4])) # title fig.update_layout( title=dict( text= "<i>Filtering with multiple dropdown buttons</i>", font={'size':18}, y=0.9, x=0.5, xanchor= 'center', yanchor= 'top')) # button annotations fig.update_layout( annotations=[ dict(text="<i>Label</i>", x=-0.2, xref="paper", y=1.1, yref="paper", align="left", showarrow=False, font = dict(size=16, color = 'steelblue')), dict(text="<i>Color</i>", x=-0.2, xref="paper", y=0.7, yref="paper", align="left", showarrow=False, font = dict(size=16, color = 'steelblue') ) ]) fig.show() | 10 | 11 |
61,535,744 | 2020-5-1 | https://stackoverflow.com/questions/61535744/return-aggregate-for-all-unique-in-a-group | The trouble is this. Lets say we have a pandas df that can be generated using the following: month=['dec','dec','dec','jan','feb','feb','mar','mar'] category =['a','a','b','b','a','b','b','b'] sales=[1,10,2,5,12,4,3,1] df = pd.DataFrame(list(zip(month,category,sales)), columns =['month', 'cat','sales']) print(df) | month cat sales | |--------------------| | 0 dec a 1 | | 1 dec a 10 | | 2 dec b 2 | | 3 jan b 5 | | 4 feb a 12 | | 5 feb b 4 | | 6 mar b 3 | | 7 mar b 1 | then let's suppose we would like a count of each category by month. so we go and do something like df=df.groupby(['month','cat']).sales.sum().reset_index() print(df) | month cat sales | |--------------------| | 0 dec a 11 | | 1 dec b 2 | | 2 feb a 12 | | 3 feb b 4 | | 4 jan b 5 | | 5 mar b 4 | But what we'd like to see is: | month cat sales | |--------------------| | 0 dec a 11 | | 1 dec b 2 | | 2 feb a 12 | | 3 feb b 4 | | 4 jan b 5 | | 5 jan a 0 | | 6 mar b 4 | | 7 mar a 0 | Where the difference is categories that did not show up in a particular month would still show up just with zero as their total. It's probable this has been asked before, but I couldn't find it. If you point me in the direction of the question, we'll go ahead and delete this one. | Continuing from where you stopped, a combo of stack and unstack will give you your required output: res = ( df.groupby(['month', 'cat']) .sales.sum() .unstack(fill_value=0) # Unstack and fill value for the null column .stack() # Return to groupby form and reset .reset_index(name='sales') ) The output of res: >>> res month cat sales 0 dec a 11 1 dec b 2 2 feb a 12 3 feb b 4 4 jan a 0 5 jan b 5 6 mar a 0 7 mar b 4 You can also work with categoricals and set observed to False; this will ensure that all possible combinations are presented in the final output. (df.astype({'month' : 'category', 'cat' : 'category'}) .groupby(['month', 'cat'], as_index = False, observed = False) .sum(numeric_only = True) ) month cat sales 0 dec a 11 1 dec b 2 2 feb a 12 3 feb b 4 4 jan a 0 5 jan b 5 6 mar a 0 7 mar b 4 | 8 | 14 |
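An alternative sketch for the same result, building the full month/category index explicitly and reindexing onto it (data taken from the question):

import pandas as pd

month = ['dec','dec','dec','jan','feb','feb','mar','mar']
category = ['a','a','b','b','a','b','b','b']
sales = [1,10,2,5,12,4,3,1]
df = pd.DataFrame({'month': month, 'cat': category, 'sales': sales})

full_index = pd.MultiIndex.from_product(
    [df['month'].unique(), df['cat'].unique()], names=['month', 'cat'])

res = (df.groupby(['month', 'cat'])['sales'].sum()
         .reindex(full_index, fill_value=0)   # missing combinations become 0
         .reset_index())
print(res)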
61,553,778 | 2020-5-2 | https://stackoverflow.com/questions/61553778/why-is-my-venv-using-a-different-pip-version-than-i-have-installed | I'm setting up virtual env. I was getting warnings about an outdated pip (19.2) so I updated pip on my (macos) system globally, sudo -H python3 -m pip install --upgrade pip. It seems to have worked, but when I make a new venv, I'm still getting the old pip version. % pip --version pip 20.1 from /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip (python 3.8) % python3 -m pip --version pip 20.1 from /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip (python 3.8) % rm -rf .venv # make sure % python3 -m venv .venv % . .venv/bin/activate (.venv) % python3 -m pip --version pip 19.2.3 from /Users/marvin/.venv/lib/python3.8/site-packages/pip (python 3.8) (.venv) % pip --version pip 19.2.3 from /Users/marvin/.venv/lib/python3.8/site-packages/pip (python 3.8) Where is the older version coming from? | Pip is installed anew in any freshly created venv. The venv's default pip version is associated with the Python version, and is completely independent from whatever pip version you may have installed on the system. The older version comes from a wheel file bundled with the stdlib ensurepip module. This allows users to create a venv even with no internet connection available, as the venv docs mention: Unless the --without-pip option is given, ensurepip will be invoked to bootstrap pip into the virtual environment You can check the bundled pip version with ensurepip.version: >>> import ensurepip >>> ensurepip.version() '19.2.3' Python 3.8.2 is vendoring pip 19.2.3 and setuptools 41.2.0, matching what you've seen. To create venvs directly with the latest pip version, rather than creating them with an older pip and then upgrading the pip version, refer to this answer: How to get “python -m venv” to directly install latest pip version | 29 | 29 |
61,607,367 | 2020-5-5 | https://stackoverflow.com/questions/61607367/how-to-encrypt-json-in-python | I have a JSON file. I am running a program, in python, where data is extracted from the JSON file. Is there any way to encrypt the JSON file with a key, so that if someone randomly opens the file, it would be a mess of characters, but when the key is fed to the program, it decrypts it and is able to read it? Thanks in advance. | Yes, you can encrypt a .json file. Make sure you install the cryptography package by typing pip install cryptography # or on windows: python -m pip install cryptography Then, you can make a program similar to mine: #this imports the cryptography package from cryptography.fernet import Fernet #this generates a key and opens a file 'key.key' and writes the key there key = Fernet.generate_key() with open('key.key','wb') as file: file.write(key) #this just opens your 'key.key' and assings the key stored there as 'key' with open('key.key','rb') as file: key = file.read() #this opens your json and reads its data into a new variable called 'data' with open('filename.json','rb') as f: data = f.read() #this encrypts the data read from your json and stores it in 'encrypted' fernet = Fernet(key) encrypted = fernet.encrypt(data) #this writes your new, encrypted data into a new JSON file with open('filename.json','wb') as f: f.write(encrypted) Note that this block: with open('key.key','wb') as file: file.write(key) #this just opens your 'key.key' and assigns the key stored there as 'key' with open('key.key','rb') as file: key = file.read() isn't necessary. It is just a way to store the generated key in a safe place, and read it back in. You can delete that block if you want. Let me know if you need further assistance :) | 9 | 12 |
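The matching decryption step, reusing the file names from the answer above, looks roughly like this:

import json
from cryptography.fernet import Fernet

# read the key and the encrypted file written by the snippet above
with open('key.key', 'rb') as file:
    key = file.read()

with open('filename.json', 'rb') as f:
    encrypted = f.read()

fernet = Fernet(key)
decrypted = fernet.decrypt(encrypted)   # raises InvalidToken if the key does not match

data = json.loads(decrypted)            # back to a normal Python object
print(data)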
61,545,680 | 2020-5-1 | https://stackoverflow.com/questions/61545680/postgresql-partition-and-sqlalchemy | SQLAlchemy doc explain how to create a partitioned table. But it does not explains how to create partitions. So if I have this : #Skipping create_engine and metadata Base = declarative_base() class Measure(Base): __tablename__ = 'measures' __table_args__ = { postgresql_partition_by: 'RANGE (log_date)' } city_id = Column(Integer, not_null=True) log_date = Columne(Date, not_null=True) peaktemp = Column(Integer) unitsales = Column(Integer) class Measure2020(Base): """How am I suppposed to declare this ? """ I know that most of the I'll be doing SELECT * FROM measures WHERE logdate between XX and YY. But that seems interesting. | Maybe a bit late, but I would like to share what I built upon @moshevi 's and @Seb 's answers: In my IoT use-case, I required actual sub-partitioning (first level year, second level nodeid). Also I wanted to generalize it slightly. This is what I came up with: from sqlalchemy.ext.declarative import DeclarativeMeta from sqlalchemy.sql.ddl import DDL from sqlalchemy import event class PartitionByMeta(DeclarativeMeta): def __new__(cls, clsname, bases, attrs, *, partition_by, partition_type): @classmethod def get_partition_name(cls_, suffix): return f'{cls_.__tablename__}_{suffix}' @classmethod def create_partition(cls_, suffix, partition_stmt, subpartition_by=None, subpartition_type=None): if suffix not in cls_.partitions: partition = PartitionByMeta( f'{clsname}{suffix}', bases, {'__tablename__': cls_.get_partition_name(suffix)}, partition_type = subpartition_type, partition_by=subpartition_by, ) partition.__table__.add_is_dependent_on(cls_.__table__) event.listen( partition.__table__, 'after_create', DDL( # For non-year ranges, modify the FROM and TO below # LIST: IN ('first', 'second'); # RANGE: FROM ('{key}-01-01') TO ('{key+1}-01-01') f""" ALTER TABLE {cls_.__tablename__} ATTACH PARTITION {partition.__tablename__} {partition_stmt}; """ ) ) cls_.partitions[suffix] = partition return cls_.partitions[suffix] if partition_by is not None: attrs.update( { '__table_args__': attrs.get('__table_args__', ()) + (dict(postgresql_partition_by=f'{partition_type.upper()}({partition_by})'),), 'partitions': {}, 'partitioned_by': partition_by, 'get_partition_name': get_partition_name, 'create_partition': create_partition } ) return super().__new__(cls, clsname, bases, attrs) Which is to be used as follows, assuming the respective VehicleDataMixin class to be created as introduced by @moshevi class VehicleData(VehicleDataMixin, Project, metaclass=PartitionByMeta, partition_by='timestamp',partition_type='RANGE'): __tablename__ = 'vehicle_data' __table_args__ = ( Index('ts_ch_nod_idx', "timestamp", "nodeid", "channelid", postgresql_using='brin'), UniqueConstraint('timestamp','nodeid','channelid', name='ts_ch_nod_constr') ) Which can then be subpartitoned iteratively like so (to be adapted) for y in range(2017, 2021): # Creating tables for all known nodeids tbl_vehid_y = VehicleData.create_partition( f"{y}", partition_stmt=f"""FOR VALUES FROM ('{y}-01-01') TO ('{y+1}-01-01')""", subpartition_by='nodeid', subpartition_type='LIST' ) for i in {3, 4, 7, 9}: # Creating all the years below these nodeids including a default partition tbl_vehid_y.create_partition( f"nid{i}", partition_stmt=f"""FOR VALUES IN ('{i}')""" ) # Defaults (nodeid) per year partition tbl_vehid_y.create_partition("def", partition_stmt="DEFAULT") # Default to any other year than anticipated 
VehicleData.create_partition("def", partition_stmt="DEFAULT") partition_by='timestamp' <= This is the column to partition by partition_type='RANGE' <= This is the (PSQL specific) partition type partition_stmt=f"""FOR VALUES IN ('{i}')""" <= This is the (PSQL specific) partitioning statement. | 18 | 5 |
61,582,142 | 2020-5-3 | https://stackoverflow.com/questions/61582142/test-pydantic-settings-in-fastapi | Suppose my main.py is like this (this is a simplified example, in my app I use an actual database and I have two different database URIs for development and testing): from fastapi import FastAPI from pydantic import BaseSettings app = FastAPI() class Settings(BaseSettings): ENVIRONMENT: str class Config: env_file = ".env" case_sensitive = True settings = Settings() databases = { "dev": "Development", "test": "Testing" } database = databases[settings.ENVIRONMENT] @app.get("/") def read_root(): return {"Environment": database} while the .env is ENVIRONMENT=dev Suppose I want to test my code and I want to set ENVIRONMENT=test to use a testing database. What should I do? In FastAPI documentation (https://fastapi.tiangolo.com/advanced/settings/#settings-and-testing) there is a good example but it is about dependencies, so it is a different case as far as I know. My idea was the following (test.py): import pytest from fastapi.testclient import TestClient from main import app @pytest.fixture(scope="session", autouse=True) def test_config(monkeypatch): monkeypatch.setenv("ENVIRONMENT", "test") @pytest.fixture(scope="session") def client(): return TestClient(app) def test_root(client): response = client.get("/") assert response.status_code == 200 assert response.json() == {"Environment": "Testing"} but it doesn't work. Furthermore I get this error: ScopeMismatch: You tried to access the 'function' scoped fixture 'monkeypatch' with a 'session' scoped request object, involved factories test.py:7: def test_config(monkeypatch) env\lib\site-packages\_pytest\monkeypatch.py:16: def monkeypatch() while from pytest official documentation it should work (https://docs.pytest.org/en/3.0.1/monkeypatch.html#example-setting-an-environment-variable-for-the-test-session). I have the latest version of pytest installed. I tried to use specific test environment variables because of this: https://pydantic-docs.helpmanual.io/usage/settings/#field-value-priority. To be honest I'm lost, my only real aim is to have a different test configuration (in the same way Flask works: https://flask.palletsprojects.com/en/1.1.x/tutorial/tests/#setup-and-fixtures). Am I approaching the problem the wrong way? | PydanticSettings are mutable, so you can simply override them in your test.py: from main import settings settings.ENVIRONMENT = 'test' | 21 | 20 |
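Because the override mutates a module-level object, a sketch like the following (assuming the main.py from the question) restores the original value after each test so the change does not leak into other tests:

import pytest
from main import settings


@pytest.fixture(autouse=True)
def test_environment():
    original = settings.ENVIRONMENT
    settings.ENVIRONMENT = 'test'      # pydantic BaseSettings instances are mutable
    yield settings
    settings.ENVIRONMENT = original    # restore for the next test


def test_environment_is_overridden():
    assert settings.ENVIRONMENT == 'test'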
61,552,475 | 2020-5-1 | https://stackoverflow.com/questions/61552475/properly-set-up-exponential-decay-of-learning-rate-in-tensorflow | I need to apply an exponential decay of learning rate every 10 epochs. Initial learning rate is 0.000001, and decay factor is 0.95 is this the proper way to set it up? lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=0.000001, decay_steps=(my_steps_per_epoch*10), decay_rate=0.05) opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9) The formula of exponential decay is current_lr = initial_lr * (1 - decay_factor)^t Except that in the code it is implemented as : decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps) To my knowledge, decay_rate should be 1 - decay_factor and decay_steps should mean how many steps are performed before applying the decay, in my case my_steps_per_epoch*10. Is that correct? EDIT: If I pause and save my model (using callbacks) after the 10th epoch, and then resume by loading the model and calling model.fit with initial_epoch=10 and epochs=11, will it start in the 11th epoch and apply the exponential decay? | decay_steps can be used to state after how many steps (processed batches) you will decay the learning rate. I find it quite useful to just specify the initial and the final learning rate and calculate the decay_factor automatically via the following: initial_learning_rate = 0.1 final_learning_rate = 0.0001 learning_rate_decay_factor = (final_learning_rate / initial_learning_rate)**(1/epochs) steps_per_epoch = int(train_size/batch_size) lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=initial_learning_rate, decay_steps=steps_per_epoch, decay_rate=learning_rate_decay_factor, staircase=True) | 10 | 10 |
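One way to sanity-check whichever decay_rate you settle on: the schedule object is callable with a global step, so you can print the learning rate per epoch without training (my_steps_per_epoch is an assumed value here):

import tensorflow as tf

my_steps_per_epoch = 100          # assumed for illustration
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.000001,
    decay_steps=my_steps_per_epoch * 10,
    decay_rate=0.95,              # multiplies the LR by 0.95 every 10 epochs
    staircase=True)               # hold the LR constant inside each 10-epoch block

for epoch in (0, 10, 20, 30):
    print(epoch, float(lr_schedule(epoch * my_steps_per_epoch)))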
61,582,897 | 2020-5-3 | https://stackoverflow.com/questions/61582897/how-to-serve-a-flutter-web-app-with-django | After building a flutter web app with flutter build web I want to serve the app using a simple Django server. How can I do that? | After running the command, copy the build\web directory from the flutter project to your django project. Let's say you put it at the root and renamed it to flutter_web_app. So we have, djangoproject | djangoproject | settings.py | urls.py | ... | flutter_web_app | ... | other apps Now edit the base tag in flutter_web_app/index.html to give the application a prefix. Note that the prefix/base should start and end with a /. <head> ... <base href="/flutter_web_app/"> ... </head> Now, in your djangoproject/urls.py, match the prefix and serve your flutter application. from django.urls import path from django.views.static import serve import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) FLUTTER_WEB_APP = os.path.join(BASE_DIR, 'flutter_web_app') def flutter_redirect(request, resource): return serve(request, resource, FLUTTER_WEB_APP) urlpatterns = [ ... path('flutter_web_app/', lambda r: flutter_redirect(r, 'index.html')), path('flutter_web_app/<path:resource>', flutter_redirect), ] Running this locally, url 127.0.0.1/flutter_web_app will serve our flutter_web_app/index.html. All the files required by the flutter application are prefixed with flutter_web_app. For eg. main.dart.js is requested with a GET to flutter_web_app/main.dart.js. We extract the path after the prefix as resource and serve the corresponding file from flutter_web_app directory (flutter_redirect function). | 9 | 14 |
61,494,958 | 2020-4-29 | https://stackoverflow.com/questions/61494958/postgres-on-conflict-do-update-only-non-null-values-in-python | I have a table for which, when processing records, I get either the full record or only the columns to be updated. I want to write a query to handle updates but only update the columns with non-null values. For example, Existing table: 1 | John Doe | USA 2 | Jane Doe | UK Incoming records: (3, Kate Bill, Canada) (2, null, USA) I want to insert the first record and, on a key conflict for the second record, ONLY update the last column. I'm not sure how to write this using an execute_values method call: execute_values(cursor, "INSERT INTO user_data\ (id, name, country) VALUES %s ON CONFLICT DO UPDATE SET \ <how to only set non null values here>", vendor_records) I'm using psycopg2 to execute this. | In the on conflict clause (pay particular attention to the do update set name = expression part) you can use coalesce with the excluded values to update columns to the specified values (if not null), or keep the existing values (if the specified value is null). This would be the format: -- setup create table test1(id integer primary key, col1 text,col2 text); insert into test1(id, col1, col2) values (1, 'John Doe', 'USA') , (2, 'Jane Doe', 'UK'); select * from test1; -- test on conflict insert into test1 as t(id, col1, col2) values (3, 'Kate Bill', 'Canada') , (2, null, 'USA') on conflict(id) do update set col1 = coalesce(excluded.col1, t.col1) , col2 = coalesce(excluded.col2, t.col2); -- validate select * from test1; | 9 | 13 |
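Feeding that SQL back into the asker's execute_values call could look roughly like this (connection details are placeholders; EXCLUDED is PostgreSQL's row of values proposed by the insert):

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=mydb user=myuser")   # placeholder connection string
cursor = conn.cursor()

vendor_records = [(3, 'Kate Bill', 'Canada'), (2, None, 'USA')]

execute_values(cursor, """
    INSERT INTO user_data AS t (id, name, country)
    VALUES %s
    ON CONFLICT (id) DO UPDATE SET
        name    = COALESCE(EXCLUDED.name,    t.name),
        country = COALESCE(EXCLUDED.country, t.country)
""", vendor_records)

conn.commit()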
61,547,620 | 2020-5-1 | https://stackoverflow.com/questions/61547620/import-error-no-module-named-secrets-python-manage-py-not-working-after-pul | I'm following along a course - Django development to deployment. After pulling it to Digital Ocean everything else ran smoothly. Until I tried running python manage.py help (env) djangoadmin@ubuntu-1:~/pyapps/btre_project_4$ python manage.py help and I get this error. Traceback (most recent call last): File "manage.py", line 21, in <module> main() File "manage.py", line 17, in main execute_from_command_line(sys.argv) File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line utility.execute() File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/core/management/__init__.py", line 377, in execute django.setup() File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/__init__.py", line 16, in setup from django.urls import set_script_prefix File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/urls/__init__.py", line 1, in <module> from .base import ( File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/urls/base.py", line 9, in <module> from .exceptions import NoReverseMatch, Resolver404 File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/urls/exceptions.py", line 1, in <module> from django.http import Http404 File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/http/__init__.py", line 2, in <module> from django.http.request import ( File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/http/request.py", line 10, in <module> from django.core import signing File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/core/signing.py", line 45, in <module> from django.utils.crypto import constant_time_compare, salted_hmac File "/home/djangoadmin/pyapps/env/lib/python3.5/site-packages/django/utils/crypto.py", line 6, in <module> import secrets ImportError: No module named 'secrets' I'm a newbie and have been stuck on this for a while. I just want to know what could possibly cause this. | The secrets module was added to Python in version 3.6. Your host is using Python 3.5, hence the secrets module is unavailable. You need a host with Python 3.6+, or a version of Django that doesn't depend on the secrets module | 8 | 9 |
61,573,928 | 2020-5-3 | https://stackoverflow.com/questions/61573928/using-ipython-display-audio-to-play-audio-in-jupyter-notebook-not-working-when-u | When using the code below the sound plays: import IPython.display as ipd import numpy sr = 22050 # sample rate T = 0.5 # seconds t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.5*numpy.sin(2*numpy.pi*440*t) # pure sine wave at 440 Hz ipd.Audio(x, rate=sr, autoplay=True) # load a NumPy array But when I use it inside a function it stops working: import IPython.display as ipd import numpy def SoundNotification(): sr = 22050 # sample rate T = 0.5 # seconds t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.5*numpy.sin(2*numpy.pi*440*t) # pure sine wave at 440 Hz ipd.Audio(x, rate=sr, autoplay=True) # load a NumPy array SoundNotification() I've tried to assign the audio to a variable and return it which works: import IPython.display as ipd import numpy def SoundNotification(): sr = 22050 # sample rate T = 0.5 # seconds t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.5*numpy.sin(2*numpy.pi*440*t) # pure sine wave at 440 Hz sound = ipd.Audio(x, rate=sr, autoplay=True) # load a NumPy array return sound sound = SoundNotification() sound But I want to use the sound in a different function: import IPython.display as ipd import numpy def SoundNotification(): sr = 22050 # sample rate T = 0.5 # seconds t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.5*numpy.sin(2*numpy.pi*440*t) # pure sine wave at 440 Hz sound = ipd.Audio(x, rate=sr, autoplay=True) # load a NumPy array return sound def WhereIWantToUseTheSound(): sound = SoundNotification() sound WhereIWantToUseTheSound() How do I make this work and what causes this behavior? The kernel for the notebook is Python 3. Edit: I want to play the sound in a scheduled event: import IPython.display as ipd import numpy import sched, time sound = [] def SoundNotification(): sr = 22050 # sample rate T = 0.5 # seconds t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.5*numpy.sin(2*numpy.pi*440*t) # pure sine wave at 440 Hz sound = ipd.Audio(x, rate=sr, autoplay=True) # load a NumPy array return sound def do_something(sc): print("Doing stuff...") # do your stuff sound_ = SoundNotification() s.enter(interval, 1, do_something, (sc,)) return sound_ s = sched.scheduler(time.time, time.sleep) interval = int(input("Interval between captures in seconds: ")) s.enter(0, 1, do_something, (s,)) s.run() I don't know how to return the sound and schedule the next event within the same function. | I was having this same problem, the sound was played when I called: from IPython.display import Audio Audio('/path/beep.mp3', autoplay=True) But it didn't work when it was inside a function. The problem is that the function call doesn't really play the sound, it's actually played by the resulting HTML that is returned to Jupyter output. So to overcome this, you can force the function to render the HTML using display( ) function from IPython. This will work: from IPython.display import Audio from IPython.core.display import display def beep(): display(Audio('/path/beep.mp3', autoplay=True)) beep(); | 15 | 34 |
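Combining the two snippets above into a self-contained beep (tone generated with numpy, wrapped in display() so it also plays from inside a function):

import numpy
from IPython.display import Audio, display

def beep(duration=0.5, freq=440, sr=22050):
    t = numpy.linspace(0, duration, int(duration * sr), endpoint=False)
    tone = 0.5 * numpy.sin(2 * numpy.pi * freq * t)
    display(Audio(tone, rate=sr, autoplay=True))   # display() forces the audio widget to render

beep()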
61,497,145 | 2020-4-29 | https://stackoverflow.com/questions/61497145/pydantic-model-for-array-of-jsons | I am using FastAPI to write a web service. It is good and fast. FastAPI is using pydantic models to validate input and output data, everything is good but when I want to declare a nested model for array of jsons like below: [ { "name": "name1", "family": "family1" }, { "name": "name2", "family": "family2" } ] I get empty response. I think there is a problem with my model which is: class Test(BaseModel): name: str family: str class Config: orm_mode = True class Tests(BaseModel): List[Test] class Config: orm_mode = True So, my question is how should I write a model for array of jsons? | Update (26/09/2020) In Python 3.9 (not yet released), you can do the same as below but with the built-in list generic type (which is always in scope) rather than needing to import the capitalized List type from typing, e.g. @app.get("/tests", response_model=list[Test]) The issue here is that you are trying to create a pydantic model where it is not needed. If you want to serialize/deserialize a list of objects, just wrap your singular model in a List[] from python's builtin typing module. There is no need to try to create a plural version of your object with a pydantic BaseModel (and as you can see, it does not work anyway). With that said, the simplest way to do what you want is to just specify a List[Test] at any point where you need a list of Tests, e.g. from typing import List from fastapi import FastAPI from pydantic import BaseModel existing_tests = [ { "name": "name1", "family": "family1" }, { "name": "name2", "family": "family2" } ] class Test(BaseModel): name: str family: str class Config: orm_mode = True app = FastAPI() @app.get("/tests", response_model=List[Test]) async def fetch_tests(): return existing_tests @app.post("/tests") async def submit_tests(new_tests: List[Test]): print(new_tests) But of course if you find yourself repeatedly (or only) specifying Test as a list, you can of course just assign this to a variable and then use that variable where needed, like so: Tests = List[Test] @app.get("/tests", response_model=Tests) async def fetch_tests(): return existing_tests @app.post("/tests") async def submit_tests(new_tests: Tests): print(new_tests) I think the first option is probably slightly clearer in your code though, and unless you are specifying List[Test] many times, using a variable for this purpose is probably not worth the extra layer of indirection. | 13 | 16 |
61,561,112 | 2020-5-2 | https://stackoverflow.com/questions/61561112/how-to-solve-getting-default-adapter-failed-error-when-launching-chrome-and-tr | I have updated Selenium but the error keeps occurring even though the web page loads. However, in some instances, the driver starts but it is stagnant. Is this causing an issue and if so, how do I resolve it? [11556:9032:0502/152954.314:ERROR:device_event_log_impl.cc(162)] [15:29:54.314] Bluetooth: bluetooth_adapter_winrt.cc:1055 Getting Default Adapter failed. | This error message... ERROR:device_event_log_impl.cc(162)] [15:29:54.314] Bluetooth: bluetooth_adapter_winrt.cc:1055 Getting Default Adapter failed. ...implies that ScopedClosureRunner on_init failed in BluetoothAdapterWinrt::OnGetDefaultAdapter(). Analysis This error is defined in bluetooth_adapter_winrt.cc as follows: void BluetoothAdapterWinrt::OnGetDefaultAdapter( base::ScopedClosureRunner on_init, ComPtr<IBluetoothAdapter> adapter) { DCHECK_CALLED_ON_VALID_THREAD(thread_checker_); if (!adapter) { BLUETOOTH_LOG(ERROR) << "Getting Default Adapter failed."; return; } Solution Ensure that: Selenium is upgraded to current levels Version 3.141.59. ChromeDriver is updated to current ChromeDriver v84.0 level. Chrome is updated to current Chrome Version 84.0 level. (as per ChromeDriver v84.0 release notes) If your base Web Client version is too old, then uninstall it and install a recent GA and released version of Web Client. Additional considerations However it was observed that this error can be supressed by running Chrome as root user (administrator) on Linux. but that would be a deviation from the documentation in ChromeDriver - WebDriver for Chrome where it is mentioned: A common cause for Chrome to crash during startup is running Chrome as root user (administrator) on Linux. While it is possible to work around this issue by passing '--no-sandbox' flag when creating your WebDriver session, i.e. the ChromeDriver session as such a configuration is unsupported and highly discouraged. Ideally, you need to configure your environment to run Chrome as a regular user instead. Suppressing the error Finally, as per the documentation in Selenium Chrome Driver: Resolve Error Messages Regarding Registry Keys and Experimental Options these error logs can be supressed by adding the argument: excludeSwitches: ['enable-logging'] So your effective code block will be: from selenium import webdriver options = webdriver.ChromeOptions() options.add_experimental_option("excludeSwitches", ["enable-logging"]) driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe') driver.get("https://www.google.com/") | 21 | 28 |
61,529,817 | 2020-4-30 | https://stackoverflow.com/questions/61529817/automate-outlook-on-mac-with-python | I can automate Outlook on windows with win32/COM, but does anyone know of a pure python way to do the same on mac osx? A simple use case would be: Open outlook/connect to the active instance Launch a blank new email I want to create an app to create email templates and attach files, then let the user finish editing the email and send when ready, NOT just send emails. Is there a python wrapper for applescript that may work? (I don't know anything about applescript, so an example would help). | @ajrwhite adding attachments had one trick, you need to use 'Alias' from mactypes to convert a string/path object to a mactypes path. I'm not sure why but it works. here's a working example which creates messages with recipients and can add attachments: from appscript import app, k from mactypes import Alias from pathlib import Path def create_message_with_attachment(): subject = 'This is an important email!' body = 'Just kidding its not.' to_recip = ['[email protected]', '[email protected]'] msg = Message(subject=subject, body=body, to_recip=to_recip) # attach file p = Path('path/to/myfile.pdf') msg.add_attachment(p) msg.show() class Outlook(object): def __init__(self): self.client = app('Microsoft Outlook') class Message(object): def __init__(self, parent=None, subject='', body='', to_recip=[], cc_recip=[], show_=True): if parent is None: parent = Outlook() client = parent.client self.msg = client.make( new=k.outgoing_message, with_properties={k.subject: subject, k.content: body}) self.add_recipients(emails=to_recip, type_='to') self.add_recipients(emails=cc_recip, type_='cc') if show_: self.show() def show(self): self.msg.open() self.msg.activate() def add_attachment(self, p): # p is a Path() obj, could also pass string p = Alias(str(p)) # convert string/path obj to POSIX/mactypes path attach = self.msg.make(new=k.attachment, with_properties={k.file: p}) def add_recipients(self, emails, type_='to'): if not isinstance(emails, list): emails = [emails] for email in emails: self.add_recipient(email=email, type_=type_) def add_recipient(self, email, type_='to'): msg = self.msg if type_ == 'to': recipient = k.to_recipient elif type_ == 'cc': recipient = k.cc_recipient msg.make(new=recipient, with_properties={k.email_address: {k.address: email}}) | 8 | 11 |
61,560,056 | 2020-5-2 | https://stackoverflow.com/questions/61560056/extracting-key-phrases-from-text-based-on-the-topic-with-python | I have a large dataset with 3 columns, columns are text, phrase and topic. I want to find a way to extract key-phrases (phrases column) based on the topic. Key-Phrase can be part of the text value or the whole text value. import pandas as pd text = ["great game with a lot of amazing goals from both teams", "goalkeepers from both teams made misteke", "he won all four grand slam championchips", "the best player from three-point line", "Novak Djokovic is the best player of all time", "amazing slam dunks from the best players", "he deserved yellow-card for this foul", "free throw points"] phrase = ["goals", "goalkeepers", "grand slam championchips", "three-point line", "Novak Djokovic", "slam dunks", "yellow-card", "free throw points"] topic = ["football", "football", "tennis", "basketball", "tennis", "basketball", "football", "basketball"] df = pd.DataFrame({"text":text, "phrase":phrase, "topic":topic}) print(df.text) print(df.phrase) I'm having big trouble with finding a path to do something like this, because I have more than 50000 rows in my dataset and around 48000 of unique values of phrases, and 3 different topics. I guess that building a dataset with all football, basketball and tennis topics are not really the best solution. So I was thinking about making some kind of ML model for this, but again that means that I will have 2 features (text and topic) and one result (phrase), but I will have more than 48000 of different classes in my result, and that is not a good approach. I was thinking about using text column as a feature and applying classification model in order to find sentiment. After that I can use predicted sentiment to extract key features, but I do not know how to extract them. One more problem is that I get only 66% accuracy when I try to classify sentiment by using CountVectorizer or TfidfTransformer with Random Forest, Decision Tree, or any other classifying algorithm, and also 66% of accuracy if Im using TextBlob for sentiment analysis. Any help? | It looks like a good approach here would be to use a Latent Dirichlet allocation model, which is an example of what are known as topic models. A LDA is a an unsupervised model that finds similar groups among a set of observations, which you can then use to assign a topic to each of them. Here I'll go through what could be an approach to solve this by training a model using the sentences in the text column. Though in the case the phrases are representative enough an contain the necessary information to be captured by the models, then they could also be a good (possibly better) candidate for training the model, though that you'll better judge by yourself. Before you train the model, you need to apply some preprocessing steps, including tokenizing the sentences, removing stopwords, lemmatizing and stemming. 
For that you can use nltk: from nltk.stem import WordNetLemmatizer from nltk.corpus import stopwords from nltk.tokenize import word_tokenize import lda from sklearn.feature_extraction.text import CountVectorizer ignore = set(stopwords.words('english')) stemmer = WordNetLemmatizer() text = [] for sentence in df.text: words = word_tokenize(sentence) stemmed = [] for word in words: if word not in ignore: stemmed.append(stemmer.lemmatize(word)) text.append(' '.join(stemmed)) Now we have more appropriate corpus to train the model: print(text) ['great game lot amazing goal team', 'goalkeeper team made misteke', 'four grand slam championchips', 'best player three-point line', 'Novak Djokovic best player time', 'amazing slam dunk best player', 'deserved yellow-card foul', 'free throw point'] We can then convert the text to a matrix of token counts through CountVectorizer, which is the input LDA will be expecting: vec = CountVectorizer(analyzer='word', ngram_range=(1,1)) X = vec.fit_transform(text) Note that you can use the ngram parameter to spacify the n-gram range you want to consider to train the model. By setting ngram_range=(1,2) for instance you'd end up with features containing all individual words as well as 2-grams in each sentence, here's an example having trained CountVectorizer with ngram_range=(1,2): vec.get_feature_names() ['amazing', 'amazing goal', 'amazing slam', 'best', 'best player', .... The advantage of using n-grams is that you could then also find Key-Phrases other than just single words. Then we can train the LDA with whatever amount of topics you want, in this case I'll just be selecting 3 topics (note that this has nothing to do with the topics column), which you can consider to be the Key-Phrases - or words in this case - that you mention. Here I'll be using lda, though there are several options such as gensim. Each topic will have associated a set of words from the vocabulary it has been trained with, with each word having a score measuring the relevance of the word in a topic. model = lda.LDA(n_topics=3, random_state=1) model.fit(X) Through topic_word_ we can now obtain these scores associated to each topic. We can use argsort to sort the vector of scores, and use it to index the vector of feature names, which we can obtain with vec.get_feature_names: topic_word = model.topic_word_ vocab = vec.get_feature_names() n_top_words = 3 for i, topic_dist in enumerate(topic_word): topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1] print('Topic {}: {}'.format(i, ' '.join(topic_words))) Topic 0: best player point Topic 1: amazing team slam Topic 2: yellow novak card The printed results don't really represent much in this case, since the model has been trained with the sample from the question, however you should see more clear and meaningful topics by training with your entire corpus. Also note that for this example I've use the whole vocabulary to train the model. However it seems that in your case what would make more sense, is to split the text column into groups according to the different topics you already have, and train a separate model on each group. But hopefully this gives you a good idea on how to proceed. | 12 | 14 |
61,581,645 | 2020-5-3 | https://stackoverflow.com/questions/61581645/how-do-i-define-a-patch-globally-with-autouse | Right now I do this in a few tests to mock a 3rd party API: @patch("some.api.execute") def sometest( mock_the_api, blah, otherstuff ): mock_the_api.return_value = "mocked response" my_func_that_uses_api(blah, otherstuff) This works and prevents my_func_that_uses_api() (which calls the API) from making actual outbound calls. But I do this 4-5 times and will probably add more in the future. I'd like to mock this globally for all my tests. I see in the docs this example: @pytest.fixture(autouse=True) def no_requests(monkeypatch): """Remove requests.sessions.Session.request for all tests.""" monkeypatch.delattr("requests.sessions.Session.request") How do I do that but with patching the API response? I tried monkeypatch.patch("some.api.execute") but get error AttributeError: 'MonkeyPatch' object has no attribute 'patch' Also, to add, I'm not using any classes in my pytest tests (like test cases) - I'd like to avoid uses classes in my pytests tests for now. | You can just use: from unittest import mock import pytest @pytest.fixture(autouse=True) def my_api_mock(): with mock.patch("some.api.execute") as api_mock: api_mock.return_value = "mocked response" yield api_mock def test_something(blah, otherstuff): my_func_that_uses_api(blah, otherstuff) The fixture lives as long as each function, so the patching is reverted at the end of each function. Note that yielding the mock is not needed in this case, but if we want to change the mock in some test case, this gives you the possibility to access it: def test_something(blah, otherstuff): my_func_that_uses_api(blah, otherstuff) def test_something_else(my_api_mock, blah, otherstuff): my_api_mock.return_value = "other response for this test" my_func_that_uses_api(blah, otherstuff) For completeness, without auto-use it would be: @pytest.fixture def my_api_mock(): with mock.patch("some.api.execute") as api_mock: api_mock.return_value = "mocked response" yield api_mock def test_something(my_api_mock, blah, otherstuff): my_func_that_uses_api(blah, otherstuff) | 9 | 11 |
61,578,927 | 2020-5-3 | https://stackoverflow.com/questions/61578927/use-a-local-file-as-the-set-image-file-discord-py | I am aware that in discord.py, you can make the set_image of an embed a url of an image. But, I want to use a local file on my computer for the set_image instead of a url of an image. embed = discord.Embed(title="Title", description="Desc", color=0x00ff00) embed.set_image(url = "https://example.com/image.png") #this is for set_image using url How can I achieve this? Is there another function or something? | Ok I just got it. The code for it is the following: embed = discord.Embed(title="Title", description="Desc", color=0x00ff00) #creates embed file = discord.File("path/to/image/file.png", filename="image.png") embed.set_image(url="attachment://image.png") await ctx.send(file=file, embed=embed) The only thing you should be changing is line 2 where it says "path/to/image/file.png" Note: on lines 2 and 3, there is an image.png. Fret not about it since that's what Discord is calling the uploaded file (Example: I have a file called duck.png, Discord uploads it to their servers as image.png). So you don't need to change the image.png part. However, if you are using a file type where the specific extension matters, remember to change the image.png to the desired extension. An example of a file that requires a specific extension is a GIF, so remember to change image.png to, for example, image.gif if you are using a GIF. You can read more here at discord.py's official documentation: https://discordpy.readthedocs.io/en/latest/faq.html#how-do-i-use-a-local-image-file-for-an-embed-image | 7 | 21 |
61,606,279 | 2020-5-5 | https://stackoverflow.com/questions/61606279/how-do-i-create-a-pdf-file-containing-a-signature-field-using-python | In order to be able to sign a PDF document using a token based DSC, I need a so-called signature field in my PDF. This is a rectangular field you can fill with a digital signature using e.g. Adobe Reader or Adobe Acrobat. I want to create this signable PDF in Python. I'm starting from plain text, or a rich-text document (Image & Text) in .docx format. How do I generate a PDF file with this field, in Python? | Unfortunately, I couldn't find any (free) solutions. Just Python programs that sign PDF documents. But there is a Python PDF SDK called PDFTron that has a free trial. Here's a link to a specific article showing how to "add a certification signature field to a PDF document and sign it". # Open an existing PDF doc = PDFDoc(docpath) page1 = doc.GetPage(1) # Create a text field that we can lock using the field permissions feature. annot1 = TextWidget.Create(doc.GetSDFDoc(), Rect(50, 550, 350, 600), "asdf_test_field") page1.AnnotPushBack(annot1) # Create a new signature form field in the PDFDoc. The name argument is optional; # leaving it empty causes it to be auto-generated. However, you may need the name for later. # Acrobat doesn't show digsigfield in side panel if it's without a widget. Using a # Rect with 0 width and 0 height, or setting the NoPrint/Invisible flags makes it invisible. certification_sig_field = doc.CreateDigitalSignatureField(cert_field_name) widgetAnnot = SignatureWidget.Create(doc, Rect(0, 100, 200, 150), certification_sig_field) page1.AnnotPushBack(widgetAnnot) ... # Save the PDFDoc. Once the method below is called, PDFNet will also sign the document using the information provided. doc.Save(outpath, 0) | 8 | 1 |
61,602,547 | 2020-5-4 | https://stackoverflow.com/questions/61602547/how-to-use-python-subprocess-with-bytes-instead-of-files | I can convert a mp4 to wav, using ffmpeg, by doing this: ffmpeg -vn test.wav -i test.mp4 I can also use subprocess to do the same, as long as my input and output are filepaths. But what if I wanted to use ffmpeg directly on bytes or a "file-like" object like io.BytesIO()? Here's an attempt at it: import subprocess from io import BytesIO b = BytesIO() with open('test.mp4', 'rb') as stream: command = ['ffmpeg', '-i'] proc = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=b) proc.communicate(input=stream.read()) proc.wait() proc.stdin.close() proc.stdout.close() Gives me: --------------------------------------------------------------------------- UnsupportedOperation Traceback (most recent call last) <ipython-input-84-0ddce839ebc9> in <module> 5 with open('test.mp4', 'rb') as stream: 6 command = ['ffmpeg', '-i'] ----> 7 proc = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=b) ... 1486 # Assuming file-like object -> 1487 c2pwrite = stdout.fileno() 1488 1489 if stderr is None: UnsupportedOperation: fileno Of course, I could use temp files to funnel my bytes, but I'd like to be able to avoid writing to the disk (because this step is just one link in a pipeline of transformations). | Base on @thorwhalen's answer, here's how it would work from bytes to bytes. What you were probably missing @thorwhalen, is the actual pipe-to-pipe way to send and get data when interacting with a process. When sending bytes, the stdin should be closed before the process can read from it. def from_bytes_to_bytes( input_bytes: bytes, action: str = "-f wav -acodec pcm_s16le -ac 1 -ar 44100")-> bytes or None: command = f"ffmpeg -y -i /dev/stdin -f nut {action} -" ffmpeg_cmd = subprocess.Popen( shlex.split(command), stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False ) b = b'' # write bytes to processe's stdin and close the pipe to pass # data to piped process ffmpeg_cmd.stdin.write(input_bytes) ffmpeg_cmd.stdin.close() while True: output = ffmpeg_cmd.stdout.read() if len(output) > 0: b += output else: error_msg = ffmpeg_cmd.poll() if error_msg is not None: break return b | 9 | 9 |
61,567,599 | 2020-5-2 | https://stackoverflow.com/questions/61567599/huggingface-bert-inputs-embeds-giving-unexpected-result | The HuggingFace BERT TensorFlow implementation allows us to feed in a precomputed embedding in place of the embedding lookup that is native to BERT. This is done using the model's call method's optional parameter inputs_embeds (in place of input_ids). To test this out, I wanted to make sure that if I did feed in BERT's embedding lookup, I would get the same result as having fed in the input_ids themselves. The result of BERT's embedding lookup can be obtained by setting the BERT configuration parameter output_hidden_states to True and extracting the first tensor from the last output of the call method. (The remaining 12 outputs correspond to each of the 12 hidden layers.) Thus, I wrote the following code to test my hypothesis: import tensorflow as tf from transformers import BertConfig, BertTokenizer, TFBertModel bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') input_ids = tf.constant(bert_tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :] attention_mask = tf.stack([tf.ones(shape=(len(sent),)) for sent in input_ids]) token_type_ids = tf.stack([tf.ones(shape=(len(sent),)) for sent in input_ids]) config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True) bert_model = TFBertModel.from_pretrained('bert-base-uncased', config=config) result = bert_model(inputs={'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}) inputs_embeds = result[-1][0] result2 = bert_model(inputs={'inputs_embeds': inputs_embeds, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}) print(tf.reduce_sum(tf.abs(result[0] - result2[0]))) # 458.2522, should be 0 Again, the output of the call method is a tuple. The first element of this tuple is the output of the last layer of BERT. Thus, I expected result[0] and result2[0] to match. Why is this not the case? I am using Python 3.6.10 with tensorflow version 2.1.0 and transformers version 2.5.1. EDIT: Looking at some of the HuggingFace code, it seems that the raw embeddings that are looked up when input_ids is given or assigned when inputs_embeds is given are added to the positional embeddings and token type embeddings before being fed into subsequent layers. If this is the case, then it may be possible that what I'm getting from result[-1][0] is the raw embedding plus the positional and token type embeddings. This would mean that they are erroneously getting added in again when I feed result[-1][0] as inputs_embeds in order to calculate result2. Could someone please tell me if this is the case and if so, please explain how to get the positional and token type embeddings, so I can subtract them out? 
Below is what I came up with for positional embeddings based on the equations given here (but according to the BERT paper, the positional embeddings may actually be learned, so I'm not sure if these are valid): import numpy as np positional_embeddings = np.stack([np.zeros(shape=(len(sent),768)) for sent in input_ids]) for s in range(len(positional_embeddings)): for i in range(len(positional_embeddings[s])): for j in range(len(positional_embeddings[s][i])): if j % 2 == 0: positional_embeddings[s][i][j] = np.sin(i/np.power(10000., j/768.)) else: positional_embeddings[s][i][j] = np.cos(i/np.power(10000., (j-1.)/768.)) positional_embeddings = tf.constant(positional_embeddings) inputs_embeds += positional_embeddings | My intuition about positional and token type embeddings being added in turned out to be correct. After looking closely at the code, I replaced the line: inputs_embeds = result[-1][0] with the lines: embeddings = bert_model.bert.get_input_embeddings().word_embeddings inputs_embeds = tf.gather(embeddings, input_ids) Now, the difference is 0.0, as expected. | 8 | 3 |
61,606,054 | 2020-5-5 | https://stackoverflow.com/questions/61606054/passing-ipython-variables-as-string-arguments-to-shell-command | How do I execute a shell command from Ipython/Jupyter notebook passing the value of a python string variable as a string in the bash argument like in this example: sp_name = 'littleGuy' #the variable sp_details = !az ad app list --filter "DisplayName eq '$sp_name'" #the shell command I've tried using $sp_name alone, ${sp_name}, {sp_name} etc as outlined in this related question, but none have worked. The kicker here is the variable name needs to be quoted as a string in the shell command. EDIT1: @manu190466. I was judging from the string output that your solution worked. It appears for some reason it does not in practice. I wonder if az ad app list URL encodes the query or something...? Thoughts? | The main problem you encounters seems to come from the quotes needed in your string. You can keep the quotes in your string by using a format instruction and a raw string. Use a 'r' before the whole string to indicate it is to be read as raw string, ie: special caracters have to not be interpreted. A raw string is not strictly required in your case because the string constructor of python is able to keep single quotes in a double quotes declaration but I think it's a good habit to use raw string declarators when there are non alphanumerics in it. There are at least two way to format strings : Older method herited from ancient langages with % symbols: sp_name = 'littleGuy' #the variable sp_query = r"DisplayName eq '%s'"%(sp_name) sp_details = !az ad app list --filter {sp_query} Newer method with {} symbols and the format() method : sp_name = 'littleGuy' #the variable sp_query = r"DisplayName eq '{}'".format(sp_name) sp_details = !az ad app list --filter {sp_query} | 8 | 14 |
61,574,984 | 2020-5-3 | https://stackoverflow.com/questions/61574984/no-module-named-pkg-resources-py2-warn-pyinstaller | I'm trying to make an executable file (.exe file for windows) for the code here. The main file to run is src/GUI.py. I found that pyinstaller is a better option to create the exe file. I tried both one folder and single executable file options. I tried creating the exe from root directory and as well as in src directory. pyinstaller src/GUI.py pyinstaller src/GUI.py -F cd src pyinstaller GUI.py pyinstaller GUI.py -F GUI.exe gets created with all the above methods. But whenever I tried to run the GUI.exe file, I get the error no module named pkg_resources.py2_warn pyinstaller. I tried running GUI.exe in the dist directory where it is created, in the root directory and in the src directory as well. Everywhere, I get the same error. How can I fix this? PS: Ideally I would like to have a single .exe file which I can distribute and they can run it standalone, without any need to install dependencies or recreating the folder structure. But I got to know that pyinstaller only packages the code files and I've to share the images separately and when running the exe file, the same structure has to be recreated. I'm okay with this as well. I'm even okay to share the one folder exe as well. I just want to share a file or a folder, which users can run without installing any dependencies. Is it possible at all? PPS: I'm open to using tools other than pyinstaller as well. | This is an issue with setuptools as explained in this github ticket. Consider downgrading your setuptools to 44.0 or below with the command pip install --upgrade 'setuptools<45.0.0' | 17 | 9 |
61,594,447 | 2020-5-4 | https://stackoverflow.com/questions/61594447/python-kubernetes-client-equivalent-of-kubectl-get-custom-resource | With kubectl I can execute the following command: kubectl get serviceentries Then I receive some information back. But serviceentries is a custom resource. So how do I go about getting the same information back but then with the kubernetes client? Yaml looks like this for example: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: external-svc-https spec: hosts: - api.dropboxapi.com - www.googleapis.com - api.facebook.com location: MESH_EXTERNAL ports: - number: 443 name: https protocol: TLS resolution: DNS Anyone know the right method to use? | you should be able to pull it using the python client like this: kubernetes.client.CustomObjectsApi().list_cluster_custom_object(group="networking.istio.io", version="v1alpha3", plural="serviceentries") That method applies to every custom resource within kubernetes and doesn't require any further definition to the python client. | 7 | 21 |
61,565,153 | 2020-5-2 | https://stackoverflow.com/questions/61565153/passing-arguments-to-an-entry-point-python-script-using-argparser | I am looking to pass user-entered arguments from the command line to an entry point for a Python script. Thus far I have tried to use argparse to pass the arguments from the command line to the test.py script. When I try to pass the arguments they are not recognised, and I receive the following error. load_entry_point('thesaurus==0.1', 'console_scripts', 'thesaurus')() TypeError: find_synonym() missing 1 required positional argument: 'argv' I have looked at other examples on here but have not been able to get any of the solutions to work. def main(argv): if argv is None: argv = sys.argv parser = argparse.ArgumentParser(description='Enter string') parser.add_argument('string', type=str, help='Enter word or words', nargs='*') args = parser.parse_args(argv[1:]) print(args) if __name__ == "__main__": sys.exit(main(sys.argv)) My setup.py script entry point looks like the following setup(entry_points={'console_scripts': ['test=test_folder.main:main']}) What I expect to happen is similar to when I run python main.py main foo, which successfully prints out hello when it is passed to the function. | So it works now when I remove the arguments from the function. def main(): parser = argparse.ArgumentParser(description='Enter string') parser.add_argument('string', type=str, help='Enter word or words', nargs='*') args = parser.parse_args() print(args.string) if __name__ == "__main__": main() I believe that it might work with the sys.argv used in the first example, as my actual problem was that when I re-downloaded this script from GitHub using pip install git+ URL, the script was not updating, so the error persisted. This required deleting all the files related to the GitHub repository and re-installing it using the same pip command. | 9 | 7 |
61,577,643 | 2020-5-3 | https://stackoverflow.com/questions/61577643/python-how-to-use-fastapi-and-uvicorn-run-without-blocking-the-thread | I'm looking for a possibility to use uvicorn.run() with a FastAPI app but without uvicorn.run() is blocking the thread. I already tried to use processes, subprocessesand threads but nothing worked. My problem is that I want to start the Server from another process that should go on with other tasks after starting the server. Additinally I have problems closing the server like this from another process. Has anyone an idea how to use uvicorn.run() non blocking and how to stop it from another process? | According to Uvicorn documentation there is no programmatically way to stop the server. instead, you can stop the server only by pressing ctrl + c (officially). But I have a trick to solve this problem programmatically using multiprocessing standard lib with these three simple functions : A run function to run the server. A start function to start a new process (start the server). A stop function to join the process (stop the server). from multiprocessing import Process import uvicorn # global process variable proc = None def run(): """ This function to run configured uvicorn server. """ uvicorn.run(app=app, host=host, port=port) def start(): """ This function to start a new process (start the server). """ global proc # create process instance and set the target to run function. # use daemon mode to stop the process whenever the program stopped. proc = Process(target=run, args=(), daemon=True) proc.start() def stop(): """ This function to join (stop) the process (stop the server). """ global proc # check if the process is not None if proc: # join (stop) the process with a timeout setten to 0.25 seconds. # using timeout (the optional arg) is too important in order to # enforce the server to stop. proc.join(0.25) With the same idea you can : use threading standard lib instead of using multiprocessing standard lib. refactor these functions into a class. Example of usage : from time import sleep if __name__ == "__main__": # to start the server call start function. start() # run some codes .... # to stop the server call stop function. stop() You can read more about : Uvicorn server. multiprocessing standard lib. threading standard lib. Concurrency to know more about multi processing and threading in python. | 29 | 2 |
61,624,276 | 2020-5-5 | https://stackoverflow.com/questions/61624276/group-related-constants-in-python | I'm looking for a pythonic way to define multiple related constants in a single file to be used in multiple modules. I came up with multiple options, but all of them have downsides. Approach 1 - simple global constants # file resources/resource_ids.py FOO_RESOURCE = 'foo' BAR_RESOURCE = 'bar' BAZ_RESOURCE = 'baz' QUX_RESOURCE = 'qux' # file runtime/bar_handler.py from resources.resource_ids import BAR_RESOURCE # ... def my_code(): value = get_resource(BAR_RESOURCE) This is simple and universal, but has a few downsides: _RESOURCE has to be appended to all constant names to provide context Inspecting the constant name in IDE will not display other constant values Approach 2 - enum # file resources/resource_ids.py from enum import Enum, unique @unique class ResourceIds(Enum): foo = 'foo' bar = 'bar' baz = 'baz' qux = 'qux' # file runtime/bar_handler.py from resources.resource_ids import ResourceIds # ... def my_code(): value = get_resource(ResourceIds.bar.value) This solves the problems of the first approach, but the downside of this solution is the need of using .value in order to get the string representation (assuming we need the string value and not just a consistent enum value). Failure to append .value can result in hard to debug issues in runtime. Approach 3 - class variables # file resources/resource_ids.py class ResourceIds: foo = 'foo' bar = 'bar' baz = 'baz' qux = 'qux' # file runtime/bar_handler.py from resources.resource_ids import ResourceIds # ... def my_code(): value = get_resource(ResourceIds.bar) This approach is my favorite, but it may be misinterpreted - classes are made to be instantiated. And while code correctness wouldn't suffer from using an instance of the class instead of the class itself, I would like to avoid this waste. Another disadvantage of this approach that the values are not actually constant. Any code client can potentially change them. Is it possible to prevent a class from being instantiated? Am I missing some idiomatic way of grouping closely related constants? | Use Enum and mix in str: @unique class ResourceIds(str, Enum): foo = 'foo' bar = 'bar' baz = 'baz' qux = 'qux' Then you won't need to compare against .value: >>> ResourceIds.foo == 'foo' True And you still get good debugging info: >>> ResourceIds.foo <ResourceIds.foo: 'foo'> >>> list(ResourceIds.foo.__class__) [ <ResourceIds.foo: 'foo'>, <ResourceIds.bar: 'bar'>, <ResourceIds.baz: 'baz'>, <ResourceIds.qux: 'qux'>, ] | 8 | 10 |
61,506,470 | 2020-4-29 | https://stackoverflow.com/questions/61506470/is-it-possible-to-have-a-red-squiggly-line-appear-under-words-in-a-tkinter-text | As per the question title: Is it possible to have a red squiggly line appear under words in a Tkinter text widget without using a canvas widget? (The same squiggle as when you misspell a word) I'm going for something like this: If so where would I start? | This is just an example of using user-defined XBM as the bgstipple of part of the text inside a Text widget to simulate the squiggly line effect: create a XBM image, for example squiggly.xbm, like below: A XBM with 10x20 pixels then you can config a tag in Text widget using the above XBM image file as bgstipple in red color: # config a tag with squiggly.xbm as bgstipple in red color textbox.tag_config("squiggly", bgstipple="@squiggly.xbm", background='red') and apply the tag to the portion of text inside Text widget: textbox.insert("end", "hello", "squiggly") # add squiggly line Below is a sample code: import tkinter as tk root = tk.Tk() textbox = tk.Text(root, width=30, height=10, font=('Courier New',12), spacing1=1) textbox.pack() # config a tag with squiggly.xbm as bgstipple in red color textbox.tag_config("squiggly", bgstipple="@squiggly.xbm", background='red') textbox.insert("end", "hello", "squiggly") # add squiggly line textbox.insert("end", " world! ") textbox.insert("end", "Python", "squiggly") # add squiggly line textbox.insert("end", "\nthis is second line") root.mainloop() And the output: Note that the height of the XBM image need to match the font size and spacing between lines. | 20 | 18 |
61,606,746 | 2020-5-5 | https://stackoverflow.com/questions/61606746/python-pandas-valueerror-must-have-equal-len-keys-and-value-when-setting-with | I have a DataFrame that I want to change using df.loc[rowId,colId] = myDict to assign a dict to the entry [rowId,colId]. As a result I get the following error: ValueError: Must have equal len keys and value when setting with an iterable Setting df.loc[rowId,colId] = 0 works! In my opinion the style used to assign the value in the first approach is the right one, so what is wrong? | It is possible by wrapping the dict in a list: df.loc[rowId,colId] = [myDict] That said, for better performance it is best to work only with scalar values in pandas. | 8 | 1 |
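For illustration, a minimal self-contained version of the accepted fix (the DataFrame, the object-dtype column and the dict here are invented for the example, and exact casting behaviour may vary slightly between pandas versions):

import pandas as pd

# an object-dtype column avoids numeric-casting surprises when storing a dict
df = pd.DataFrame({"payload": [None, None]}, index=["r1", "r2"], dtype=object)

my_dict = {"name": "foo", "value": 1}
df.loc["r1", "payload"] = [my_dict]    # wrapping the dict in a list avoids the ValueError

print(df.loc["r1", "payload"])         # {'name': 'foo', 'value': 1}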
61,599,363 | 2020-5-4 | https://stackoverflow.com/questions/61599363/evaluate-consonant-vowel-composition-of-word-string-in-python | I'm trying to transform a Python string from its original form to its vowel/consonant combinations. Eg - 'Dog' becomes 'cvc' and 'Bike' becomes 'cvcv' In R I was able to employ the following method: con_vowel <- gsub("[aeiouAEIOU]","V",df$col_name) con_vowel <- gsub("[^V]","C",con_vowel) df[["composition"]] <- con_vowel This would assess whether the character is vowel and if true assign the character 'V', then assess that string and replace anything that wasn't 'V' with 'C', then place the results into a new column called 'composition' within the dataframe. In Python I have written some code in an attepmpt to replicate the functionality but it does not return the desired result. Please see below. word = 'yoyo' for i in word.lower(): if i in "aeiou": word = i.replace(i ,'v') else: word = i.replace(i ,'c') print(word) The theory here is that each character would be evaluated and, if it isn't a vowel, then by deduction it must be a consonant. However the result I get is: v I underastand why this is happening, but I am no clearer as to how to achieve my desired result. Please note that I also need the resultant code to be applied to a dataframe column and create a new column from these results. If you could explain the workings of your answer it would help me greatly. Thanks in advance. | use string.replace with some regex to avoid the loop df = pd.DataFrame(['Cat', 'DOG', 'bike'], columns=['words']) # use string.replace df['new_word'] = df['words'].str.lower().str.replace(r"[^aeiuo]", 'c').str.replace(r"[aeiou]", 'v') print(df) words new_word 0 Cat cvc 1 DOG cvc 2 bike cvcv | 9 | 3 |
61,589,427 | 2020-5-4 | https://stackoverflow.com/questions/61589427/matplotlib-bar-vs-barh-plots-problem-with-barh | I've just noticed something strange and was wondering if someone could explain what's going on? I have a dictionary i'm plotting a horizontal and vertical bar charts from. plt.bar(food.keys(), food.values()) #works fine, but: plt.barh(food.keys(), food.values() #gives "unhashable type: 'dict_keys'" error. if dictionary is unhashable, why does it let me plot a normal bar graph? Is this just a quirk of the barh function or am I doing something wrong? Here's my test dataset: food = {'blueberries':2, 'pizza':3.50, 'apples':0.50} Thanks. | The other answer is of course correct in how to solve the problem - simply convert to a list. The reason why it doesn't work in the first place involves digging into the matplotlib source code. bar and barh do both call matplotlib.axes.Axes.bar under the hood. When you plot a bar, x is the x position of the bars and height is the y value. However, calling a barh, the y values (i.e. ['blueberries', 'pizza', 'apples']) is actually set to the argument bottom and the length of the bars is given by width. The following line is called within the source code func(ax, *map(sanitize_sequence, args), **kwargs) Where sanitize_sequence is: def sanitize_sequence(data): """ Convert dictview objects to list. Other inputs are returned unchanged. """ return (list(data) if isinstance(data, collections.abc.MappingView) else data) The problem is that the argument bottom is a kwarg, and is therefore not passed to sanitize_sequence and is never converted to a list. Whereas in a normal bar, x is converted to a list by sanitize_sequence So the issue is with how barh is implemented under the hood. | 8 | 13 |
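Putting the two parts together, the practical fix for the question's data is simply to materialise the dict views as lists before calling barh (a short sketch using the sample food dict from the question):

import matplotlib.pyplot as plt

food = {'blueberries': 2, 'pizza': 3.50, 'apples': 0.50}

# barh never sees a dict_keys/dict_values object this way
plt.barh(list(food.keys()), list(food.values()))
plt.show()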
61,587,974 | 2020-5-4 | https://stackoverflow.com/questions/61587974/how-to-create-a-list-of-lists-where-each-sub-list-increments-as-follows-1-0 | This works but is unwieldy and not very 'Pythonic'. I'd also like to be able to run through different values for 'numValues', say 4 to 40... innerList = [] outerList = [] numValues = 12 loopIter = 0 for i in range(numValues): innerList.append(0) for i in range(numValues): copyInnerList = innerList.copy() outerList.append(copyInnerList) for i in range(len(innerList)): for j in range(loopIter + 1): outerList[i][j] = 1 loopIter += 1 print(outerList) | numValues = 12 result = [ [1] * i + [0] * (numValues - i) for i in range(1, numValues+1) ] | 11 | 12 |
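As a quick sanity check, running the comprehension with a small numValues shows the expected staircase pattern (output shown as a comment), and the same line works unchanged for any value from 4 to 40:

numValues = 4
result = [[1] * i + [0] * (numValues - i) for i in range(1, numValues + 1)]
print(result)
# [[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]]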
61,583,991 | 2020-5-4 | https://stackoverflow.com/questions/61583991/opencv-python-error-unsupported-data-type-4-in-function-cvopt-avx2getmo | I'm trying to remove noise with morphology and the kernel is giving me errors: import skimage.io as io import numpy as np import cv2 c=io.imread('circles.png').astype('bool')*1 x=np.random.random_sample(c.shape) c[np.nonzero(x>0.95)]= 0 c[np.nonzero(x<=0.05)] = 1 opening = cv2.morphologyEx(c, cv2.MORPH_OPEN, np.ones((2,2),np.uint8)) io.imshow(opening) error: error: OpenCV(4.1.2) C:/projects/opencv-python/opencv/modules/imgproc/src/morph.simd.hpp:756: error: (-213:The function/feature is not implemented) Unsupported data type (=4) in function 'cv::opt_AVX2::getMorphologyRowFilter' | Your data type (=4) is CV_32SC1, which is 32-bit signed single channel -- you need to convert your data into another data type, I'd recommend using CV_8UC1 because of the smallest memory footprint and ease of use: c = c.astype('uint8') # or c.astype(np.byte) | 9 | 13 |
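Applied to the snippet from the question, the whole fix is one conversion before the morphology call (a sketch reusing the question's variable names and its circles.png input, which is assumed to exist):

import skimage.io as io
import numpy as np
import cv2

c = io.imread('circles.png').astype('bool') * 1   # ends up as a signed integer array, which morphologyEx rejects
c = c.astype('uint8')                             # convert to 8-bit unsigned first
opening = cv2.morphologyEx(c, cv2.MORPH_OPEN, np.ones((2, 2), np.uint8))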
61,528,860 | 2020-4-30 | https://stackoverflow.com/questions/61528860/sqlalchemy-engine-from-airflow-database-hook | What's the best way to get a SQLAlchemy engine from an Airflow connection ID? Currently I am creating a hook, retrieving its URI, then using it to create a SQLAlchemy engine. postgres_hook = PostgresHook(self.postgres_conn_id) engine = create_engine(postgres_hook.get_uri()) This works but both commands make a connection to the database. When I have "extra" parameters on the connection a third connection is needed to retrieve those (see Retrieve full connection URI from Airflow Postgres hook) Is there a shorter and more direct method? | To be clear, indeed your commands will make two database connections, but it's to two separate databases (unless you're trying to connect to your Postgres Airflow database). The first line of initializing the hook should not make any connections. Only the second line first grabs the connection details from the Airflow database (which I don't think you can avoid), then uses that to connect to the Postgres database (which I think is the point). You can make slightly simpler though with: postgres_hook = PostgresHook(self.postgres_conn_id) engine = postgres_hook.get_sqlalchemy_engine() That seems pretty clean, but if you want to get even more direct without going through PostgresHook, you could fetch it directly by querying Airflow's database. However, that means you're going to end up duplicating the code to build a URI from the connection object. The underlying implementation of get_connection() is a good example if you want to proceed with this. from airflow.settings import Session conn = session.query(Connection).filter(Connection.conn_id == self.postgres_conn_id).one() ... # build uri from connection create_engine(uri) Additionally, if you want to be able to access the extras without a separate database fetch beyond what get_uri() or get_sqlalchemy_engine() does, is you can override BaseHook.get_connection() to save the connection object to an instance variable for reuse. This would require creating your own hook on top of PostgresHook, so I understand that may not be ideal. class CustomPostgresHook(PostgresHook): @classmethod def get_connection(cls, conn_id): # type: (str) -> Connection conn = super().get_connection(conn_id) self.conn_obj = conn # can't use self.conn because PostgresHook will overriden in https://github.com/apache/airflow/blob/1.10.10/airflow/hooks/postgres_hook.py#L93 by a different type of connection return conn postgres_hook = CustomPostgresHook(self.postgres_conn_id) uri = postgres_hook.get_uri() # do something with postgres_hook.conn_obj.extras_dejson Some built in Airflow hooks have this behavior already (grpc, samba, tableau), but it's definitely not standardized. | 14 | 17 |
61,578,694 | 2020-5-3 | https://stackoverflow.com/questions/61578694/difference-between-rect-move-and-rect-move-ip-in-pygame | I was just going through the .rect method of pygame in the official docs. We have two cases: pygame.rect.move(arg1,arg2), which is used to move a .rect object on the screen, and pygame.rect.move_ip(arg1,arg2), which is, according to the docs, also used to move a .rect object on the screen, but it moves it in place. I didn't quite get what that means. Can anyone explain what "move in place" means? | "In place" means the operation acts on the object itself (self). While rect.move_ip changes the pygame.Rect object itself, rect.move does not change the object; instead it returns a new object with the same size and the "moved" position. Note, the return value of rect.move_ip is None, while the return value of rect.move is a new pygame.Rect object. rect.move_ip(x, y) does the same as rect = rect.move(x, y) | 9 | 15 |
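A tiny sketch that makes the difference concrete (the printed values assume a Rect starting at x=10):

import pygame

r = pygame.Rect(10, 10, 50, 50)

moved = r.move(5, 0)      # returns a NEW Rect; r itself is untouched
print(r.x, moved.x)       # 10 15

result = r.move_ip(5, 0)  # modifies r "in place" and returns None
print(r.x, result)        # 15 None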
61,576,659 | 2020-5-3 | https://stackoverflow.com/questions/61576659/how-to-hot-reload-in-reactjs-docker | This might sound simple, but I have this problem. I have two docker containers running. One is for my front-end and other is for my backend services. these are the Dockerfiles for both services. front-end Dockerfile : # Use an official node runtime as a parent image FROM node:8 WORKDIR /app # Install dependencies COPY package.json /app RUN npm install --silent # Add rest of the client code COPY . /app EXPOSE 3000 CMD npm start backend Dockerfile : FROM python:3.7.7 WORKDIR /usr/src/app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY server.py /usr/src/app COPY . /usr/src/app EXPOSE 8083 # CMD ["python3", "-m", "http.server", "8080"] CMD ["python3", "./server.py"] I am building images with the docker-compose.yaml as below: version: "3.2" services: frontend: build: ./frontend ports: - 80:3000 depends_on: - backend backend: build: ./backends/banuka ports: - 8080:8083 How can I make this two services Update whenever there is a change to front-end or back-end? I found this repo, which is a booilerplate for reactjs, python-flask and posgresel, which says it has enabled Hot reload for both reactjs frontend and python-flask backend. But I couldn't find anything related to that. Can someone help me? repo link What I want is: after every code change the container should b e up-to-date automatically ! | Try this in your docker-compose.yml version: "3.2" services: frontend: build: ./frontend environment: CHOKIDAR_USEPOLLING: "true" volumes: - /app/node_modules - ./frontend:/app ports: - 80:3000 depends_on: - backend backend: build: ./backends/banuka environment: CHOKIDAR_USEPOLLING: "true" volumes: - ./backends/banuka:/app ports: - 8080:8083 Basically you need that chokidar environment to enable hot reloading and you need volume bindings to make your code on your machine communicate with code in container. See if this works. | 27 | 36 |
61,577,168 | 2020-5-3 | https://stackoverflow.com/questions/61577168/filter-array-of-objects-in-python | I'm using Python to dig through a pretty big project and dig up information about it. I'm able to create an array of ProjectFiles, however I'm having a hard time figuring out how to filter it. class ProjectFile: def __init__(self, filename: str, number_of_lines: int, language: str, repo: str, size: int): self.filename = filename self.number_of_lines = number_of_lines self.language = language self.repo = repo self.size = size How would I filter an array of ProjectFile objects for a specific repo? For instance, let's say I wanted to filter for objects whose repo property is SomeCocoapod. I've looked for examples of filter, but everything I've found uses simple examples like lists of str or int. | You can select attributes of a class using the dot notation. Suppose arr is an array of ProjectFile objects. Now you filter for SomeCocoapod using. filter(lambda p: p.repo == "SomeCocoapod", arr) NB: This returns a filter object, which is a generator. To have a filtered list back you can wrap it in a list constructor. As a very Pythonic alternative you can use list comprehensions: filtered_arr = [p for p in arr if p.repo == "SomeCocoapod"] | 18 | 21 |
61,564,609 | 2020-5-2 | https://stackoverflow.com/questions/61564609/does-a-python-virtual-environment-avoid-redundant-installs | I'm fairly new to python and I'm beginning to work with python virtual environments. When I create a virtual environment, I have to reinstall all the modules that I need for the current project that I'm working on. I'm wondering if the virtual environment somehow avoids redundancies in installation of modules if the same version of the module had already been installed in another project or on a system-wide level? Also, would there be any point in installing modules on a system-wide level rather than just in virtual environments, since I would need to install the modules in the virtual environment anyway? | Short Answer : If you work with virtual environment, you need to install every dependecy (package) that you need for your project even if you installed this package in another virtual environment before. That's exactly the purpose of virtual environment : each project has its own dependencies. That allows you to manage the dependencies clearly for each project without affecting others. Of course you can install a dependecy (a package) globally by doing pip install <Package name> But before doing that be sure to not activate any virtual environment . This will install the package at the root installation of python which is the main environment. But it is strongly not recommended to do that and working with virtual environment is always a good practice. Extras : Now just in addition to this answer you can use the command : pip freeze > requirements.txt This will create a file call requirements.txt in your project's root folder. This file looks like this : numpy==1.18.1 pandas==0.25.3 In this example i installed in my virtual environment the package numpy version 1.18.1 and the package pandas in its version 0.25.3 In this way, if another project needs the package numpy in a newer or older version, i can manage this directly in its requirements.txt without affecting other projects . This file will also help you to re-install quickly and easily your dependencies of your environment (for example if you want to create another project with the same starting dependencies as your current project) by just doing : pip install -r requirements.txt Of course : be sure to first copy this requirements.txt file to your target's root folder of the new project and activate its virtual environment before doing that Quick summary in commands : 1) Install virtual env (Linux): pip3 install virtualenv Install virtual env (Windows): pip install virtualenv 2) Create a new virtual environment (linux and Windows ) : virtualenv venv Be sure to do cd to your root project folder before doing that. This will create a new folder called "venv" (or whatever name you put after the virtualenv command but venv is a convention and a goo practice). If you are not in your root folder and for any reason you do not want to, you always can add a -p flag to this command inn order to precise the path you want to install your virtual environment like so : virtualenv -p /YOUR_PROJECT_PATH/ venv 3) activate the virtual environment (Linux) : $ source YOUR_PROJECT_PATH/venv/bin/activate If you call your virtual environment differently than venv be sure to replace venv by whatever you did call. 
activate the virtual environment (Windows) : C:\> YOUR_PROJECT_PATH\venv\Scripts\activate.bat After this you should have this prompt : Linux : (venv) $ Windows : (venv) C:\> 4) Install a package : Linux (venv) $ pip3 install <package_name> Windows (venv) C:\> pip install <package_name> This command will install the <package_name> only in the venv site-packages folder, and this dependency will only be available for this virtual environment. 5) Freeze your dependencies : Linux (venv) $ pip freeze > requirements.txt Windows (venv) C:\> pip freeze > requirements.txt This, as said above, will create the requirements.txt in your project root folder (it will contain a list of all the package names and versions you installed in this virtual environment). 6) Deactivate your virtual environment : deactivate If you create a new environment by doing the above steps and you want this new environment to have the same dependencies as the first one : cp YOUR_FIRST_PROJECT_PATH\requirements.txt YOUR_NEW_PROJECT_PATH cd YOUR_NEW_PROJECT_PATH Here, create and activate your new virtual environment (as explained above), then : pip install -r requirements.txt 7) Install a package globally (not recommended) : if you have a currently activated venv, first : deactivate Then : pip install <package_name> This will install the package_name at the root installation of Python. For further reading : https://docs.python.org/3/library/venv.html | 15 | 14 |
61,566,808 | 2020-5-2 | https://stackoverflow.com/questions/61566808/manytomany-relationship-between-two-models-in-django | I am trying to build a website that users can add the courses they are taking. I want to know how should I add the ManyToMany relationship. Such that we can get all users in a course based on the course code or instructor or any field. And we can also get the courses user is enrolled in. Currently, my Database structure is: class Course(models.Model): course_code = models.CharField(max_length=20) course_university = models.CharField(max_length=100) course_instructor = models.CharField(max_length=100) course_year = models.IntegerField(('year'), validators=[MinValueValidator(1984), max_value_current_year]) def __str__(self): return self.course_code and my user model: class Profile(AbstractUser): bio = models.TextField() image = models.ImageField(default='defaults/user/default_u_i.png', courses = models.ManyToManyField('home.Course',related_name='courses') def __str__(self): return self.username I was wondering should ManyToMany relationship be in User model or the course model? Or will it make any difference at all? EDIT: For adding course to post object now I am using this view but it seems to not work: @login_required def course_add(request): if request.method == "POST": form = CourseForm(request.POST or none) if form.is_valid(): course = form.save() request.user.add(course) else: form = CourseForm context = { 'form':form } return render(request,'home/courses/course_add.html', context) | For a relational databases, the model where you define the ManyToManyField does not matter. Django will create an extra table with two ForeignKeys to the two models that are linked by the ManyToManyField. The related managers that are added, etc. is all Django logic. Behind the curtains, it will query the table in the middle. You however need to fix the related_name=… parameter [Django-doc]. The related_name specifies the name of the relation in reverse so from Course to Profile in this case. It thus should be something like 'profiles': class Profile(AbstractUser): bio = models.TextField() image = models.ImageField(default='defaults/user/default_u_i.png', courses = models.ManyToManyField('home.Course', related_name='profiles') def __str__(self): return self.username You thus can obtain the people that particiate in a Course object with: mycourse.profiles.all() and you can access the courses in which a Profile is enrolled with: myprofile.courses.all() For more information, see the Many-to-many relationships section of the documentation. You can add a course to the courses of a user with: @login_required def course_add(request): if request.method == 'POST': form = CourseForm(request.POST) if form.is_valid(): course = form.save() request.user.courses.add(course) else: form = CourseForm() context = { 'form': form } return render(request,'home/courses/course_add.html', context) | 10 | 16 |
61,566,711 | 2020-5-2 | https://stackoverflow.com/questions/61566711/which-characters-are-considered-whitespace-by-split | I am porting some Python 2 code that calls split() on strings, so I need to know its exact behavior. The documentation states that when you do not specify the sep argument, "runs of consecutive whitespace are regarded as a single separator". Unfortunately, it does not specify which characters that would be. There are some obvious contenders (like space, tab, and newline), but Unicode contains plenty of other candidates. Which characters are considered to be whitespace by split()? Since the answer might be implementation-specific, I'm targeting CPython. (Note: I researched the answer to this myself since I couldn't find it anywhere, so I'll be posting it here, hopefully for the benefit of others.) | Unfortunately, it depends on whether your string is an str or a unicode (at least, in CPython - I don't know whether this behavior is actually mandated by a specification anywhere). If it is an str, the answer is straightforward: 0x09 Tab 0x0a Newline 0x0b Vertical Tab 0x0c Form Feed 0x0d Carriage Return 0x20 Space Source: these are the characters with PY_CTF_SPACE in Python/pyctype.c, which are used by Py_ISSPACE, which is used by STRINGLIB_ISSPACE, which is used by split_whitespace. If it is a unicode, there are 29 characters, which in addition to the above are: U+001c through 0x001f: File/Group/Record/Unit Separator U+0085: Next Line U+00a0: Non-Breaking Space U+1680: Ogham Space Mark U+2000 through 0x200a: various fixed-size spaces (e.g. Em Space), but note that Zero-Width Space is not included U+2028: Line Separator U+2029: Paragraph Separator U+202f: Narrow No-Break Space U+205f: Medium Mathematical Space U+3000: Ideographic Space Note that the first four are also valid ASCII characters, which means that an ASCII-only string might split differently depending on whether it is an str or a unicode! Source: these are the characters listed in _PyUnicode_IsWhitespace, which is used by Py_UNICODE_ISSPACE, which is used by STRINGLIB_ISSPACE (it looks like they use the same function implementations for both str and unicode, but compile it separately for each type, with certain macros implemented differently). The docstring describes this set of characters as follows: Unicode characters having the bidirectional type 'WS', 'B' or 'S' or the category 'Zs' | 19 | 24 |
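A quick way to see the str/unicode difference described above, run in a Python 2 interpreter (U+001C, the File Separator, is whitespace only for unicode):

>>> 'a\x1cb'.split()     # str: \x1c is not one of the six byte-string whitespace characters
['a\x1cb']
>>> u'a\x1cb'.split()    # unicode: \x1c is in the 29-character whitespace set
[u'a', u'b']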
61,541,776 | 2020-5-1 | https://stackoverflow.com/questions/61541776/seaborn-ordering-of-facets | In a statement like: g = sns.FacetGrid(df, row="variable", col="state") I would like to control the order in which the variables and the states appear in the FacetGrid. I can't find where this is even possible. | You can use row_order and col_order: from sklearn.datasets import load_iris import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt data = load_iris() df = pd.DataFrame(data.data, columns=['sepal.length','sepal.width','petal.length','petal.width']) df['species'] = data.target df['subset'] = np.random.choice(['A','B','C'],150,replace=True) g = sns.FacetGrid(df, col="species", col_order=[0,2,1],row="subset",row_order=['C','B','A']) g = g.map(plt.scatter, "sepal.length", "sepal.width") | 13 | 16 |
61,522,624 | 2020-4-30 | https://stackoverflow.com/questions/61522624/how-to-create-an-optional-field-in-a-dataclass-that-is-inherited | from typing import Optional @dataclass class Event: id: str created_at: datetime updated_at: Optional[datetime] #updated_at: datetime = field(default_factory=datetime.now) CASE 1 #updated_at: Optional[datetime] = None CASE 2 @dataclass class NamedEvent(Event): name: str When creating an event instance I will generally not have an updated_at field. I can either pass the current time as default value or add a value to it when making the insertion in the database and fetch it in subsequent uses of the object. Which is the better way? As per my understanding, I cannot create a NamedEvent instance without passing the updated_at field in case1 and case2 since I do not have a default value in name field. | The underlying problem that you have seems to be the same one that is described here. The short version of that post is that in a function signature (including the dataclass-generated __init__ method), obligatory arguments (like NamedEvent's name) can not follow after arguments with default values (which are necessary to define the behavior of Event's updated_at) - a child's fields will always follow after those of its parent. So either you have no default values in your parent class (which doesn't work in this case) or all your child's fields need default values (which is annoying, and sometimes simply not feasible). The post I linked above discusses some patterns that you can apply to solve your problem, but as a nicer alternative you can also use the third part party package pydantic which already solved this problem for you. A sample implementation could look like this: import pydantic from datetime import datetime class Event(pydantic.BaseModel): id: str created_at: datetime = None updated_at: datetime = None @pydantic.validator('created_at', pre=True, always=True) def default_created(cls, v): return v or datetime.now() @pydantic.validator('updated_at', pre=True, always=True) def default_modified(cls, v, values): return v or values['created_at'] class NamedEvent(Event): name: str The default-value specification through validators is a bit cumbersome, but overall it's a very useful package that fixes lots of the shortcomings that you run into when using dataclasses, plus some more. Using the class definition, an instance of NamedEvent can be created like this: >>> NamedEvent(id='1', name='foo') NamedEvent(id='1', created_at=datetime.datetime(2020, 5, 2, 18, 50, 12, 902732), updated_at=datetime.datetime(2020, 5, 2, 18, 50, 12, 902732), name='foo') | 35 | 26 |
61,559,359 | 2020-5-2 | https://stackoverflow.com/questions/61559359/could-not-build-wheels-for-pyaudio-since-package-wheel-is-not-installed | I tried to install pyaudio, but it returns an error like this: Could not build wheels for pyaudio, since package 'wheel' is not installed. How can I fix this? | It looks like you just need to install the wheel package. You can do this by running pip install wheel at the terminal. | 18 | 23 |