Dataset schema (column name, type, observed range):
question_id: int64, 59.5M to 79.4M
creation_date: string, 8 to 10 characters
link: string, 60 to 163 characters
question: string, 53 to 28.9k characters
accepted_answer: string, 26 to 29.3k characters
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
61,122,395
2020-4-9
https://stackoverflow.com/questions/61122395/is-it-possible-to-test-a-while-true-loop-with-pytest-i-try-with-a-timeout
I have a python function foo with a while True loop inside. For background: It is expected to stream info from the web, do some writing and run indefinitely. The asserts test if the writing was done correctly. Clearly I need it to stop sometime, in order to test. What I did was to run via multiprocessing and introduce a timeout there, however when I see the test coverage, the functions which ran through multiprocessing are not marked as covered. Question 1: Why does pytest not work this way? Question 2: How can I make this work? I was thinking it's probably because I technically exit the loop, so maybe pytest does not mark this as tested.... import time import multiprocessing def test_a_while_loop(): # Start through multiprocessing in order to have a timeout. p = multiprocessing.Process( target=foo, name="Foo", ) try: p.start() # my timeout time.sleep(10) p.terminate() finally: # Cleanup. p.join() # Asserts below ... More info I looked into adding a decorator such as @pytest.mark.timeout(5), but that did not work and it stops the whole function, so I never get to the asserts (as suggested here). If I don't find a way, I will just test the parts, but ideally I would like to find a way to test by breaking the loop. I know I can re-write my code in order to make it have a timeout, but that would mean changing the code to make it testable, which I don't think is a good design. Mocks I have not tried (as suggested here), because I don't believe I can mock what I do, since it writes info from the web. I need to actually see the "original" working.
Break out the functionality you want to test into a helper method. Test the helper method. def scrape_web_info(url): data = get_it(url) return data # In production: while True: scrape_web_info(...) # During test: def test_web_info(): assert scrape_web_info(...) == ...
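A minimal, hedged sketch of the pattern this answer describes (function names and the URL are placeholders, not the asker's real code): keep the per-iteration work in a helper that the unit test calls directly, and leave the while True wrapper as a thin production-only entry point.

def scrape_web_info(url):
    # Stand-in for the real work: fetch one chunk and report what was written.
    return {"url": url, "ok": True}

def run_forever(url):
    # Production entry point: loops indefinitely over the tested helper.
    while True:
        scrape_web_info(url)

def test_scrape_web_info():
    # pytest exercises one iteration's worth of work; no timeout or subprocess needed.
    result = scrape_web_info("https://example.com/stream")
    assert result["ok"] is True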
9
14
61,122,276
2020-4-9
https://stackoverflow.com/questions/61122276/keras-not-training-on-entire-dataset
So I've been following Google's official tensorflow guide and trying to build a simple neural network using Keras. But when it comes to training the model, it does not use the entire dataset (with 60000 entries) and instead uses only 1875 entries for training. Any possible fix? import tensorflow as tf from tensorflow import keras import numpy as np fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() train_images = train_images / 255.0 test_images = test_images / 255.0 class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot'] model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss= tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit(train_images, train_labels, epochs=10) Output: Epoch 1/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3183 - accuracy: 0.8866 Epoch 2/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3169 - accuracy: 0.8873 Epoch 3/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3144 - accuracy: 0.8885 Epoch 4/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3130 - accuracy: 0.8885 Epoch 5/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3110 - accuracy: 0.8883 Epoch 6/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3090 - accuracy: 0.8888 Epoch 7/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3073 - accuracy: 0.8895 Epoch 8/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3057 - accuracy: 0.8900 Epoch 9/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3040 - accuracy: 0.8905 Epoch 10/10 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3025 - accuracy: 0.8915 <tensorflow.python.keras.callbacks.History at 0x7fbe0e5aebe0> Here's the original google colab notebook where I've been working on this: https://colab.research.google.com/drive/1NdtzXHEpiNnelcMaJeEm6zmp34JMcN38
The number 1875 shown during fitting the model is not the training samples; it is the number of batches. model.fit includes an optional argument batch_size, which, according to the documentation: If unspecified, batch_size will default to 32. So, what happens here is - you fit with the default batch size of 32 (since you have not specified anything different), so the total number of batches for your data is 60000/32 = 1875
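To make the arithmetic above concrete, a small sketch (numbers taken from the question; the model.fit line is shown only as a comment because it needs the question's model object):

import math

num_samples = 60000   # Fashion-MNIST training images
batch_size = 32       # Keras default when batch_size is not specified
print(math.ceil(num_samples / batch_size))   # 1875 batches per epoch, as printed by fit()

# Passing batch_size explicitly changes the reported step count, e.g.:
# model.fit(train_images, train_labels, epochs=10, batch_size=64)   # -> 938 steps per epoch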
16
25
61,114,822
2020-4-9
https://stackoverflow.com/questions/61114822/invoking-google-cloud-function-from-python-using-service-account-for-authenticat
I have a cloud function with trigger type set to HTTP and also have a service account which is having permissions to Invoke the cloud function. I want to invoke the cloud function from a python script. I am using the following script to invoke the function: from google.oauth2 import service_account from google.auth.transport.urllib3 import AuthorizedHttp credentials = service_account.Credentials.from_service_account_file('/path/to/service-account-credentials.json') scoped_credentials = credentials.with_scopes(['https://www.googleapis.com/auth/cloud-platform']) authed_http = AuthorizedHttp(scoped_credentials) response = authed_http.request('GET', 'https://test-123456.cloudfunctions.net/my-cloud-function') print(response.status) I am getting Unauthorized ( 401 ) error in response. Is this the correct way of invocation?
To be able to call your cloud function you need an ID Token against a Cloud Functions end point from google.oauth2 import service_account from google.auth.transport.requests import AuthorizedSession url = 'https://test-123456.cloudfunctions.net/my-cloud-function' creds = service_account.IDTokenCredentials.from_service_account_file( '/path/to/service-account-credentials.json', target_audience=url) authed_session = AuthorizedSession(creds) # make authenticated request and print the response, status_code resp = authed_session.get(url) print(resp.status_code) print(resp.text)
8
12
61,114,520
2020-4-9
https://stackoverflow.com/questions/61114520/how-to-fix-valueerror-multiclass-format-is-not-supported
This is my code and I try to calculate ROC score but I have a problem with ValueError: multiclass format is not supported. I'm already looking sci-kit learn but it doesn't help. In the end, I'm still have ValueError: multiclass format is not supported. This is my code from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier from sklearn.metrics import confusion_matrix,zero_one_loss from sklearn.metrics import classification_report,matthews_corrcoef,accuracy_score from sklearn.metrics import roc_auc_score, auc dtc = DecisionTreeClassifier() bc = BaggingClassifier(base_estimator=dtc, n_estimators=10, random_state=17) bc.fit(train_x, train_Y) pred_y = bc.predict(test_x) fprate, tprate, thresholds = roc_curve(test_Y, pred_y) results = confusion_matrix(test_Y, pred_y) error = zero_one_loss(test_Y, pred_y) roc_auc_score(test_Y, pred_y) FP = results.sum(axis=0) - np.diag(results) FN = results.sum(axis=1) - np.diag(results) TP = np.diag(results) TN = results.sum() - (FP + FN + TP) print('\n Time Processing: \n',time.process_time()) print('\n Confussion Matrix: \n', results) print('\n Zero-one classification loss: \n', error) print('\n True Positive: \n', TP) print('\n True Negative: \n', TN) print('\n False Positive: \n', FP) print('\n False Negative: \n', FN) print ('\n The Classification report:\n',classification_report(test_Y,pred_y, digits=6)) print ('MCC:', matthews_corrcoef(test_Y,pred_y)) print ('Accuracy:', accuracy_score(test_Y,pred_y)) print (auc(fprate, tprate)) print ('ROC Score:', roc_auc_score(test_Y,pred_y)) This is the traceback
From the docs, roc_curve: "Note: this implementation is restricted to the binary classification task." Are your label classes (y) either 1 or 0? If not, I think you have to add the pos_label parameter to your roc_curve call. fprate, tprate, thresholds = roc_curve(test_Y, pred_y, pos_label='your_label') Or: test_Y = your_test_y_array # these are either 1's or 0's fprate, tprate, thresholds = roc_curve(test_Y, pred_y)
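If the labels really are multiclass (more than two classes), another hedged option besides pos_label is to score class probabilities with roc_auc_score's multi_class mode. A self-contained sketch on synthetic data, not the asker's dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Synthetic 3-class problem standing in for the question's train/test data.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=4, random_state=17)
# base_estimator matches the question's code; newer scikit-learn versions call it estimator.
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=10, random_state=17)
clf.fit(X, y)

proba = clf.predict_proba(X)                        # shape (n_samples, n_classes)
print(roc_auc_score(y, proba, multi_class='ovr'))   # accepts multiclass labels when given probabilities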
10
7
61,114,350
2020-4-9
https://stackoverflow.com/questions/61114350/error-blahfile-is-not-utf-8-encoded-saving-disabled
So, I'm trying to write a gzip file, actually from the net, but to simplify I wrote some very basic test. import gzip LINES = [b'I am a test line' for _ in range(100_000)] f = gzip.open('./test.text.gz', 'wb') for line in LINES: f.write(line) f.close() It runs great, and I can see in Jupyter that it has created the test.txt.gz file in the directory listing. So I click on it expecting a whole host of garbage characters indicative of a binary file, like you would see in Notepad. However, instead I get this ... Error! test.text.gz is not UTF-8 encoded. Saving disabled. See console for more details Which makes me think, oh my god, coding error, something is wrong with my encoding, my saving, can I save bytes ? Am I using the correct routines ?? And then spend 5 hours trying all combinations of code and modules.
The very simple answer to this is none of the above. This is a very misleading error message, especially when the code you've written was designed to save a binary file with a weird extension. What this actually means is ... I HAVE NO IDEA HOW TO DISPLAY THIS DATA ! - Yours Jupyter So, go to your File Explorer, Finder navigate to the just saved file and open it. Voila !! Everything worked exactly as planned, there is no error. Hope this saves other people many hours of debugging, and please Jupyter, change your error message.
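A tiny sketch to back up the point above: the archive written this way is perfectly valid gzip; it just is not something Jupyter's text preview can render. Reading it back in Python shows the bytes are intact.

import gzip

with gzip.open('./test.text.gz', 'wb') as f:
    f.write(b'I am a test line')

with gzip.open('./test.text.gz', 'rb') as f:
    print(f.read())   # b'I am a test line'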
19
50
61,110,188
2020-4-8
https://stackoverflow.com/questions/61110188/how-to-display-a-gif-in-jupyter-notebook-using-google-colab
I am using google colab and would like to embed a gif. Does anyone know how to do this? I am using the code below and it is not animating the gif in the notebook. I would like the notebook to be interactive so that one can see what the code animates without having to run it. I found many ways to do so that did not work in Google colab. The code and the GIF of interest is below. import matplotlib.pyplot as plt import matplotlib.image as mpimg img = mpimg.imread("/content/animationBrownianMotion2d.gif") plt.imshow(img) I tried some of the solutions provided. import IPython from IPython.display import Image Image(filename='/content/animationBrownianMotion2d.gif') and similarly import IPython from IPython.display import Image Image(filename='/content/animationBrownianMotion2d.gif',embed=True) but got the error, --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-27-56bf6cd2b134> in <module>() 1 import IPython 2 from IPython.display import Image ----> 3 Image(filename='/content/animationBrownianMotion2d.gif',embed=True) 4 /usr/local/lib/python3.6/dist-packages/IPython/core/display.py in __init__(self, data, url, filename, format, embed, width, height, retina, unconfined, metadata) 1013 1014 if self.embed and self.format not in self._ACCEPTABLE_EMBEDDINGS: -> 1015 raise ValueError("Cannot embed the '%s' image format" % (self.format)) 1016 self.width = width 1017 self.height = height ValueError: Cannot embed the 'gif' image format both times.
For external gif, you can use Jupyter's display as @knoop's answer. from IPython.display import Image Image(url='https://upload.wikimedia.org/wikipedia/commons/e/e3/Animhorse.gif') But for a local file, you need to read the bytes and display it. !wget https://upload.wikimedia.org/wikipedia/commons/e/e3/Animhorse.gif Image(open('Animhorse.gif','rb').read())
11
28
61,108,376
2020-4-8
https://stackoverflow.com/questions/61108376/force-dask-to-parquet-to-write-single-file
When using dask.to_parquet(df, filename) a subfolder filename is created and several files are written to that folder, whereas pandas.to_parquet(df, filename) writes exactly one file. Can I use dask's to_parquet (without using compute() to create a pandas df) to just write a single file?
Writing to a single file is very hard within a parallelism system. Sorry, such an option is not offered by Dask (nor probably any other parallel processing library). You could in theory perform the operation with a non-trivial amount of work on your part: you would need to iterate through the partitions of your dataframe, write to the target file (which you keep open) and accumulate the output row-groups into the final metadata footer of the file. I would know how to go about this with fastparquet, but that library is not being much developed any more.
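Roughly what the manual route described above could look like (a hedged sketch, not a supported Dask feature; fastparquet's append behaviour may vary by version): compute one partition at a time and append each as a new row-group to a single target file, so nothing runs in parallel.

import dask.dataframe as dd
import pandas as pd
import fastparquet

# Toy dataframe standing in for the real Dask dataframe.
ddf = dd.from_pandas(pd.DataFrame({"a": range(100)}), npartitions=4)

target = "single_file.parquet"
for i, delayed_part in enumerate(ddf.to_delayed()):
    pdf = delayed_part.compute()                    # one partition as a pandas frame
    fastparquet.write(target, pdf, append=(i > 0))  # first call creates the file, later calls append row-groups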
7
2
61,104,138
2020-4-8
https://stackoverflow.com/questions/61104138/how-i-can-swap-3-dimensions-with-each-other-in-pytorch
I have a a= torch.randn(28, 28, 8) and I want to swap dimensions (0, 1, 2) to (2, 0, 1). I tried b = a.transpose(2, 0, 1) , but I received this error: TypeError: transpose() received an invalid combination of arguments - got (int, int, int), but expected one of: * (name dim0, name dim1) * (int dim0, int dim1) Is there any way that I can swap all at once?
You can use Pytorch's permute() function to swap all at once, >>>a = torch.randn(28, 28, 8) >>>b = a.permute(2, 0, 1) >>>b.shape torch.Size([8, 28, 28])
8
9
61,104,317
2020-4-8
https://stackoverflow.com/questions/61104317/modulenotfounderror-no-module-named-tf
I'm having problem with tensorflow. I want to use ImageDataGenerator, but I'm receiving error ModuleNotFoundError: No module named 'tf'. Not sure what is the problem. I added this tf.version to test will it work, and it shows the version of tensorflow. import tensorflow as tf from tensorflow import keras print(tf.__version__) from tf.keras.preprocessing.image import ImageDataGenerator When I run this code, I get this: 2.1.0 Traceback (most recent call last): File "q:/TF/Kamen papir maaze/rks.py", line 14, in <module> from tf.keras.preprocessing.image import ImageDataGenerator ModuleNotFoundError: No module named 'tf'
The line import tensorflow as tf means you are importing tensorflow under the alias tf so you can call its modules/functions. You cannot use the alias to import other modules. For your case, if you call tf.keras.preprocessing.image.ImageDataGenerator(...) directly, it will work. Or you can import the module under its real name, i.e. from tensorflow.keras.preprocessing.image import ImageDataGenerator
10
19
61,101,919
2020-4-8
https://stackoverflow.com/questions/61101919/how-can-i-add-an-element-to-a-pytorch-tensor-along-a-certain-dimension
I have a tensor inps, which has a size of [64, 161, 1] and I have some new data d which has a size of [64, 161]. How can I add d to inps such that the new size is [64, 161, 2]?
There is a cleaner way by using .unsqueeze() and torch.cat(), which makes direct use of the PyTorch interface: import torch # create two sample vectors inps = torch.randn([64, 161, 1]) d = torch.randn([64, 161]) # bring d into the same format, and then concatenate tensors new_inps = torch.cat((inps, d.unsqueeze(2)), dim=-1) print(new_inps.shape) # [64, 161, 2] Essentially, unsqueezing the second dimension already brings the two tensors into the same shape; you just have to be careful to unsqueeze along the right dimension. Similarly, the concatenation is unfortunately named differently from the otherwise similarly named NumPy function, but behave the same. Note that instead of letting torch.cat figure out the dimension by providing dim=-1, you can also explicitly provide the dimension to concatenate along, in this case by replacing it with dim=2. Keep in mind the difference between concatenation and stacking, which is helpful for similar problems with tensor dimensions.
7
12
61,097,665
2020-4-8
https://stackoverflow.com/questions/61097665/no-such-file-or-directory-but-file-exists
I'm writing a python script where I need to open a ".txt" folder and analyse the text in there. I have saved this ".txt" document in the same folder as my Python script. But, when I go to open the file; file = open("words.txt",'r') I get the error: No such file or directory: 'words.txt'. I don't understand why this is happening?
Maybe it's because your current working directory is different from the directory where your files are stored. Try giving the full path to the file: file = open(r"<full_path>\words.txt", 'r')
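A common, more robust variant of the same fix (a sketch, assuming the file really does sit next to the script): build the path from the script's own location instead of relying on the current working directory.

import os

script_dir = os.path.dirname(os.path.abspath(__file__))
words_path = os.path.join(script_dir, "words.txt")

with open(words_path, "r") as f:
    text = f.read()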
10
12
61,096,522
2020-4-8
https://stackoverflow.com/questions/61096522/pytorch-slice-matrix-with-vector
Say I have one matrix and one vector as follows: import torch x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) y = torch.tensor([0, 2, 1]) is there a way to slice it x[y] so the result is: res = [1, 6, 8] So basically I take the first element of y and take the element in x that corresponds to the first row and the elements' column.
You can specify the corresponding row index as: import torch x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) y = torch.tensor([0, 2, 1]) x[range(x.shape[0]), y] tensor([1, 6, 8])
15
15
61,037,557
2020-4-5
https://stackoverflow.com/questions/61037557/should-i-commit-lock-file-changes-separately-what-should-i-write-for-the-commi
I'm using poetry for my Python package manager but I believe this would apply to any programming practices. I've been doing this without knowing exactly what I'm doing, or how I should be doing. When you use a package manager and install a new package, there's usually a .lock file change to keep your build deterministic. Usually, I would commit these changes like: $ git add poetry.lock pyproject.toml $ git commit -m "Install packages: beautifulsoup4" i.e, I make a commit every time I install/remove a package. I do it because I FEEL like this is what I should do, but I have 0 clue if this is actually a correct way to handle this. Am I doing great? or is there any other specific convention & rules I should abide by to make it follow the best practices as close as possible?
Disclaimer Please refer to this answer for the official stance and justification on the topic, which should be the top post in this thread. Below is my original answer, which I'll leave as it was. The official recommendation of the poetry maintainers is to commit the lockfile if you develop a deployable application (as opposed to a library). That being said, my personal experience has been that it isn't necessary to commit lockfiles to VCS. The pyproject.toml file is the reference for correct build instructions, and the lockfile is the reference for a single successful deployment. Now, I don't know what the spec for poetry.lock is, but I had them backfire on me often enough during collaboration with colleagues in ways where only deleting them would fix the problem. A usual problem was that using different operation systems or python versions would lead to different lockfiles, and that just doesn't fly. I'll gladly let our CI build and persist an authoritative reference-lockfile to enable re-builds, but it still wouldn't be committed to the repository. If maintaining a shared lockfile is viable given your workflow - great! You avoid a step in your pipeline, have one less artifact to worry about, and cut down dramatically on build time (even a medium-size project can take minutes to do a full dependency-resolution). But as far as best practices go, I'd consider adding poetry.lock to the .gitignore a better practice than what you do, and only commit pyproject.toml changes when you add dependencies.
51
45
61,019,498
2020-4-3
https://stackoverflow.com/questions/61019498/flake8-linting-for-databricks-python-code-in-github-using-workflows
I have my databricks python code in github. I setup a basic workflow to lint the python code using flake8. This fails because the names that are implicitly available to my script (like spark, sc, dbutils, getArgument etc) when it runs on databricks are not available when flake8 lints it outside databricks (in github ubuntu vm). How can I lint databricks notebooks in github using flake8? E.g. errors I get: test.py:1:1: F821 undefined name 'dbutils' test.py:3:11: F821 undefined name 'getArgument' test.py:5:1: F821 undefined name 'dbutils' test.py:7:11: F821 undefined name 'spark' my notebook in github: dbutils.widgets.text("my_jdbcurl", "default my_jdbcurl") jdbcurl = getArgument("my_jdbcurl") dbutils.fs.ls(".") df_node = spark.read.format("jdbc")\ .option("driver", "org.mariadb.jdbc.Driver")\ .option("url", jdbcurl)\ .option("dbtable", "my_table")\ .option("user", "my_username")\ .option("password", "my_pswd")\ .load() my .github/workflows/lint.yml on: pull_request: branches: [ master ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v1 with: python-version: 3.8 - run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: Lint with flake8 run: | pip install flake8 flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
TL;DR Don't use the built-in variable dbutils in code that would need to run locally (IDE, Unit tests, ...) and in Databricks (production). Create your own instance of DBUtils class instead. Here is what we ended up doing: Created a new dbk_utils.py from pyspark.sql import SparkSession def get_dbutils(spark: SparkSession): try: from pyspark.dbutils import DBUtils return DBUtils(spark) except ModuleNotFoundError: import IPython return IPython.get_ipython().user_ns["dbutils"] And update the code that uses dbutils to use this utility: from dbk_utils import get_dbutils my_dbutils = get_dbutils(spark) my_dbutils.widgets.text("my_jdbcurl", "default my_jdbcurl") my_dbutils.fs.ls(".") jdbcurl = my_dbutils.widgets.getArgument("my_jdbcurl") df_node = spark.read.format("jdbc")\ .option("driver", "org.mariadb.jdbc.Driver")\ .option("url", jdbcurl)\ .option("dbtable", "my_table")\ .option("user", "my_username")\ .option("password", "my_pswd")\ .load() If you're trying to do unit testing as well, then check out: How to mock an import and ModuleNotFoundError: No module named 'pyspark.dbutils'
8
1
60,987,997
2020-4-2
https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with
On a Windows 10 PC with an NVidia GeForce 820M I installed CUDA 9.2 and cudnn 7.1 successfully, and then installed PyTorch using the instructions at pytorch.org: pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html But I get: >>> import torch >>> torch.cuda.is_available() False
Your graphics card does not support CUDA 9.0. Since I've seen a lot of questions that refer to issues like this I'm writing a broad answer on how to check if your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer. The system requirements to use PyTorch with CUDA are as follows: Your graphics card must support the required version of CUDA Your graphics card driver must support the required version of CUDA The PyTorch binaries must be built with support for the compute capability of your graphics card Note: If you install pre-built binaries (using either pip or conda) then you do not need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library. 1. How to check if your GPU/graphics card supports a particular CUDA version First, identify the model of your graphics card. Before moving forward ensure that you've got an NVIDIA graphics card. AMD and Intel graphics cards do not support CUDA. NVIDIA doesn't do a great job of providing CUDA compatibility information in a single location. The best resource is probably this section on the CUDA Wikipedia page. To determine which versions of CUDA are supported Locate your graphics card model in the big table and take note of the compute capability version. For example, the GeForce 820M compute capability is 2.1. In the bullet list preceding the table check to see if the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute compatibility 2.1. If your card doesn't support the required CUDA version then see the options in section 4 of this answer. Note: Compute capability refers to the computational features supported by your graphics card. Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA. 2. How to check if your GPU/graphics driver supports a particular CUDA version The graphics driver is the software that allows your operating system to communicate with your graphics card. Since CUDA relies on low-level communication with the graphics card you need to have an up-to-date driver in order use the latest versions of CUDA. First, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from NVIDIA's website. If you've installed the latest driver version then your graphics driver probably supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 2 in the CUDA release notes. In rare cases I've heard of the latest recommended graphics drivers not supporting the latest CUDA releases. You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn't required. If you can't, or don't want to upgrade the graphics driver then you can check to see if your current driver supports the specific CUDA version as follows: On Windows Determine your current graphics driver version (Source https://www.nvidia.com/en-gb/drivers/drivers-faq/) Right-click on your desktop and select NVIDIA Control Panel. 
From the NVIDIA Control Panel menu, select Help > System Information. The driver version is listed at the top of the Details window. For more advanced users, you can also get the driver version number from the Windows Device Manager. Right-click on your graphics device under display adapters and then select Properties. Select the Driver tab and read the Driver version. The last 5 digits are the NVIDIA driver version number. Visit the CUDA release notes and scroll down to Table 2. Use this table to verify your graphics driver is new enough to support the required version of CUDA. On Linux/OS X Run the following command in a terminal window nvidia-smi This should result in something like the following Sat Apr 4 15:31:57 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 206... Off | 00000000:01:00.0 On | N/A | | 0% 35C P8 16W / 175W | 502MiB / 7974MiB | 1% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1138 G /usr/lib/xorg/Xorg 300MiB | | 0 2550 G /usr/bin/compiz 189MiB | | 0 5735 G /usr/lib/firefox/firefox 5MiB | | 0 7073 G /usr/lib/firefox/firefox 5MiB | +-----------------------------------------------------------------------------+ Driver Version: ###.## is your graphic driver version. In the example above the driver version is 435.21. CUDA Version: ##.# is the latest version of CUDA supported by your graphics driver. In the example above the graphics driver supports CUDA 10.1 as well as all compatible CUDA versions before 10.1. Note: The CUDA Version displayed in this table does not indicate that the CUDA toolkit or runtime are actually installed on your system. This just indicates the latest version of CUDA your graphics driver is compatible with. To be extra sure that your driver supports the desired CUDA version you can visit Table 2 on the CUDA release notes page. 3. How to check if a particular version of PyTorch is compatible with your GPU/graphics card compute capability Even if your graphics card supports the required version of CUDA then it's possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, in PyTorch 0.3.1 support for compute capability <= 5.0 was dropped. First, verify that your graphics card and driver both support the required CUDA version (see Sections 1 and 2 above), the information in this section assumes that this is the case. The easiest way to check if PyTorch supports your compute capability is to install the desired version of PyTorch with CUDA support and run the following from a python interpreter >>> import torch >>> torch.zeros(1).cuda() If you get an error message that reads Found GPU0 XXXXX which is of cuda capability #.#. PyTorch no longer supports this GPU because it is too old. then that means PyTorch was not compiled with support for your compute capability. If this runs without issue then you should be good to go. 
Update If you're installing an old version of PyTorch on a system with a newer GPU then it's possible that the old PyTorch release wasn't compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, then you should be able to rebuild PyTorch from source with the desired CUDA version or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities. 4. Conclusion If your graphics card and driver support the required version of CUDA (section 1 and 2) but the PyTorch binaries don't support your compute capability (section 3) then your options are Compile PyTorch from source with support for your compute capability (see here) Install PyTorch without CUDA support (CPU-only) Install an older version of the PyTorch binaries that support your compute capability (not recommended as PyTorch 0.3.1 is very outdated at this point). AFAIK compute capability older than 3.X has never been supported in the pre-built binaries Upgrade your graphics card If your graphics card doesn't support the required version of CUDA (section 1) then your options are Install PyTorch without CUDA support (CPU-only) Install an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don't support your compute capability) Upgrade your graphics card
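As a small companion to the manual checks above, PyTorch itself can report the relevant facts at runtime (these are standard torch APIs; the capability value in the comment is the GeForce 820M example from section 1):

import torch

print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # CUDA version the binaries were built against (None for CPU-only builds)
print(torch.cuda.is_available())   # False if the card, driver, or compute capability is unsupported

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))   # e.g. (2, 1) for a GeForce 820M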
130
244
61,079,178
2020-4-7
https://stackoverflow.com/questions/61079178/how-to-include-git-branch-in-installing-from-requirements-in-python
Hi I need to install from a branch of a git repo. I want to include it on the requirements.txt so that it would install using the command pip install -r requirements.txt What I know is how to install from master branch (See git ssh entry below): This is my requirements.txt networkx==2.4 numpy==1.18.1 opencv-python==4.2.0.32 scipy==1.4.1 git+ssh://[email protected]/project/project-utils.git What if I want to install from a specific branch namely 1-fix-test on ssh://[email protected]/project/project-utils.git. How do I include the branch name with the ssh address?
According to the document, you can add branch name or commit hash after @: git+ssh://[email protected]/project/project-utils.git@1-fix-test
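For reference, the same idea in requirements.txt form (hedged examples; the URL is the question's placeholder, and the tag and commit entries are made up to show the pattern). Anything after @ may be a branch, a tag, or a full commit hash.

# branch
git+ssh://[email protected]/project/project-utils.git@1-fix-test
# tag (hypothetical)
git+ssh://[email protected]/project/project-utils.git@v1.2.0
# commit hash (placeholder)
git+ssh://[email protected]/project/project-utils.git@<full-commit-sha>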
9
12
61,038,373
2020-4-5
https://stackoverflow.com/questions/61038373/should-i-use-python-magic-methods-directly
I heard from one guy that you should not use magic methods directly. and I think in some use cases I would have to use magic methods directly. So experienced devs, should I use python magic methods directly?
I intended to show some benefits of not using magic methods directly: 1- Readability: Using built-in functions like len() is much more readable than its relevant magic/special method __len__(). Imagine a source code full of only magic methods instead of built-in function... thousands of underscores... 2- Comparison operators: class C: def __lt__(self, other): print('__lt__ called') class D: pass c = C() d = D() d > c d.__gt__(c) I haven't implemented __gt__ for neither of those classes, but in d > c when Python sees that class D doesn't have __gt__, it checks to see if class C implements __lt__. It does, so we get '__lt__ called' in output which isn't the case with d.__gt__(c). 3- Extra checks: class C: def __len__(self): return 'boo' obj = C() print(obj.__len__()) # fine print(len(obj)) # error or: class C: def __str__(self): return 10 obj = C() print(obj.__str__()) # fine print(str(obj)) # error As you see, when Python calls that magic methods implicitly, it does some extra checks as well. 4- This is the least important but using let's say len() on built-in data types such as str gives a little bit of speed as compared to __len__(): from timeit import timeit string = 'abcdefghijklmn' print(timeit("len(string)", globals=globals(), number=10_000_000)) print(timeit("string.__len__()", globals=globals(), number=10_000_000)) output: 0.5442426 0.8312854999999999 It's because of the lookup process(__len__ in the namespace), If you create a bound method before timing, it's gonna be faster. bound_method = string.__len__ print(timeit("bound_method()", globals=globals(), number=10_000_000))
9
12
60,969,987
2020-4-1
https://stackoverflow.com/questions/60969987/how-can-i-save-the-original-index-after-sorting-a-list
Let's say I have the following array: a = [4,2,3,1,4] Then I sort it: b = sorted(A) = [1,2,3,4,4] How could I have a list that map where each number was, ex: position(b,a) = [3,1,2,0,4] to clarify this list contains the positions not values) (ps' also taking in account that first 4 was in position 0)
b = sorted(enumerate(a), key=lambda i: i[1]) This results is a list of tuples, the first item of which is the original index and second of which is the value: [(3, 1), (1, 2), (2, 3), (0, 4), (4, 4)]
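A short follow-up sketch extracting just the original positions from those pairs, which is the list the question asks for, plus an equivalent one-liner:

a = [4, 2, 3, 1, 4]

pairs = sorted(enumerate(a), key=lambda i: i[1])    # [(3, 1), (1, 2), (2, 3), (0, 4), (4, 4)]
positions = [index for index, value in pairs]
print(positions)                                    # [3, 1, 2, 0, 4]

# Equivalent one-liner: sort the indices by the value they point to.
print(sorted(range(len(a)), key=a.__getitem__))     # [3, 1, 2, 0, 4]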
8
14
61,028,232
2020-4-4
https://stackoverflow.com/questions/61028232/how-to-find-which-library-prevents-updating-a-package-in-conda
I have set up couple of environments with Data Science libraries like pandas, numpy, matplotlib, scikit-learn, tensorflow etc.. However I cannot update some packages to the latest version. E.g. conda update pandas will tell me I have the latest version available however I know for sure the latest version is 1.+ (mine is 0.25) Is there a way to see which packages prevent a specific package from updating?
There is a way to do it using the drop-in replacement mamba. All you have to do is provide the version of the package you want to update to, and mamba will tell you what's preventing it from updating. E.g., in my case, I wanted to update snakemake to version > 7. But mamba update snakemake only gave me 6.15. So I ran: mamba install snakemake=7, and the result was informative: Looking for: ['snakemake=7'] Pinned packages: - python 3.8.* - bcbio-gff 0.6.7.* Encountered problems while solving: - nothing provides yte >=1.0,<2.0 needed by snakemake-minimal-7.0.0-pyhdfd78af_0 It turns out I had forgotten to include -c conda-forge which is where yte was to come from.
12
3
61,044,136
2020-4-5
https://stackoverflow.com/questions/61044136/modulenotfounderror-when-trying-to-use-mock-patch-on-a-method
My pytest unit test keeps returning the error ModuleNotFoundError: No module name billing. Oddly enough the send_invoices method in the billing module is able to be called when I remove the patch statement. Why is mock.patch unable to find the billing module and patch the method if this is the case? billing.py import pdfkit from django.template.loader import render_to_string from django.core.mail import EmailMessage from projectxapp.models import User Class Billing: #creates a pdf of invoice. Takes an invoice dictionary def create_invoice_pdf(self, invoice, user_id): #get the html of the invoice file_path ='/{}-{}.pdf'.format(user_id, invoice['billing_period']) invoice_html = render_to_string('templates/invoice.html', invoice) pdf = pdfkit.from_file(invoice_html, file_path) return file_path, pdf #sends invoice to customer def send_invoices(self, invoices): for user_id, invoice in invoices.items(): #get customers email email = User.objects.get(id=user_id).email billing_period = invoice['billing_period'] invoice_pdf = self.create_invoice_pdf(invoice, user_id) email = EmailMessage( 'Statement of credit: {}-{}'.format(user_id, billing_period), 'Attached is your statement of credit.\ This may also be viewed on your dashboard when you login.', '[email protected]', [email], ).attach(invoice_pdf) email.send(fail_silently=False) return True test.py from mock import patch from projectxapp import billing @pytest.mark.django_db def test_send_invoice(): invoices = { 1: { 'transaction_processing_fee': 2.00, 'service_fee': 10.00, 'billing_period': '2020-01-02' }, 2: { 'transaction_processing_fee': 4.00, 'service_fee': 20.00, 'billing_period': '2020-01-02' } } with patch('services.billing.Billing().create_invoice_pdf') as p1: p1.return_value = '/invoice.pdf' test = billing.Billing().send_invoices(invoices) assert test == True
Solution Since I had already imported the module of the method I needed to patch. I didn't need to use the full path including the package name. Changed patch('projectxapp.billing.Billing.create_invoice_pdf') to this patch('billing.Billing.create_invoice_pdf') From the unittest documentation: target should be a string in the form 'package.module.ClassName'. The target is imported and the specified object replaced with the new object, so the target must be importable from the environment you are calling patch() from. The target is imported when the decorated function is executed, not at decoration time.
8
5
60,971,502
2020-4-1
https://stackoverflow.com/questions/60971502/python-poetry-how-to-install-optional-dependencies
Python's poetry dependency manager allows specifying optional dependencies via command: $ poetry add --optional redis Which results in this configuration: [tool.poetry.dependencies] python = "^3.8" redis = {version="^3.4.1", optional=true} However how do you actually install them? Docs seem to hint to: $ poetry install -E redis but that just throws and error: Installing dependencies from lock file [ValueError] Extra [redis] is not specified.
You need to add a tool.poetry.extras group to your pyproject.toml if you want to use the -E flag during install, as described in this section of the docs: [tool.poetry.extras] caching = ["redis"] The key refers to the word that you use with poetry install -E, and the value is a list of packages that were marked as --optional when they were added. There currently is no support for making optional packages part of a specific group during their addition, so you have to maintain this section in your pyproject.toml file by hand. The reason behind this additional layer of abstraction is that extra-installs usually refer to some optional functionality (in this case caching) that is enabled through the installation of one or more dependencies (in this case just redis). poetry simply mimics setuptools' definition of extra-installs here, which might explain why it's so sparingly documented.
59
61
61,003,308
2020-4-3
https://stackoverflow.com/questions/61003308/conda-install-psycopg2-errors
New mackbookpro running Catalina. Installed anaconda using homebrew. Tried to install psycopg2 using the command conda install -c anaconda psycopg2 but failed due to package conflicts. Here's some of the output from the attempted install: $ conda install -c anaconda psycopg2 Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: / Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed \ UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions Package six conflicts for: pyopenssl -> cryptography[version='>=2.8'] -> six[version='>=1.4.1'] pytest-doctestplus -> pytest[version='>=3.0'] -> six[version='>=1.10.0'] python-dateutil -> six[version='>=1.5'] nltk -> six Any thoughts on what's happening or how to fix?
The reason might be there are too many conflicts between [anaconda==2020.02] and [70+ PACKAGES] Try the following worked for me: conda -V conda update -n base conda To ensure you are in version conda 4.8.2 Then conda update --all Then the following packages will be DOWNGRADED: anaconda 2020.02-py37_0 --> custom-py37_1 Then conda install psycopg2 Then packages libpq,psycopg2 would be installed, some packages would be updated, some packages would be superseded by higher-priority channel.
10
6
61,000,501
2020-4-2
https://stackoverflow.com/questions/61000501/json-serialization-of-nested-dataclasses
I would need to take the question about json serialization of @dataclass from Make the Python json encoder support Python's new dataclasses a bit further: consider when they are in a nested structure. Consider: import json from attr import dataclass from dataclasses_json import dataclass_json @dataclass @dataclass_json class Prod: id: int name: str price: float prods = [Prod(1,'A',25.3),Prod(2,'B',79.95)] pjson = json.dumps(prods) That gives us: TypeError: Object of type Prod is not JSON serializable Note the above does incorporate one of the answers https://stackoverflow.com/a/59688140/1056563 . It claims to support the nested case via the dataclass_json decorator . Apparently that does not actually work. I also tried another of the answers https://stackoverflow.com/a/51286749/1056563 : class EnhancedJSONEncoder(json.JSONEncoder): def default(s, o): if dataclasses.is_dataclass(o): return dataclasses.asdict(o) return super().default(o) And I created a helper method for it: def jdump(s,foo): return json.dumps(foo, cls=s.c.EnhancedJSONEncoder) But using that method also did not effect the (error) result. Any further tips?
You can use a pydantic library. From the example in documentation from pydantic import BaseModel class BarModel(BaseModel): whatever: int class FooBarModel(BaseModel): banana: float foo: str bar: BarModel m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123}) # returns a dictionary: print(m.dict()) """ { 'banana': 3.14, 'foo': 'hello', 'bar': {'whatever': 123}, } """ print(m.dict(include={'foo', 'bar'})) #> {'foo': 'hello', 'bar': {'whatever': 123}} print(m.dict(exclude={'foo', 'bar'})) #> {'banana': 3.14}
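A hedged side note on why the original attempt likely failed: the question's snippet imports dataclass from attr, which is not the standard-library decorator, so dataclasses.is_dataclass() never matches and the custom encoder is never used. With the stdlib decorator, the encoder approach from the linked answer does handle a list of nested dataclasses (a sketch, with a hypothetical nested Price class):

import dataclasses
import json

@dataclasses.dataclass
class Price:
    amount: float
    currency: str

@dataclasses.dataclass
class Prod:
    id: int
    name: str
    price: Price

class EnhancedJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if dataclasses.is_dataclass(o):
            return dataclasses.asdict(o)   # asdict recurses into nested dataclasses
        return super().default(o)

prods = [Prod(1, 'A', Price(25.3, 'USD')), Prod(2, 'B', Price(79.95, 'USD'))]
print(json.dumps(prods, cls=EnhancedJSONEncoder))
# [{"id": 1, "name": "A", "price": {"amount": 25.3, "currency": "USD"}}, ...]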
14
12
61,075,295
2020-4-7
https://stackoverflow.com/questions/61075295/shortcut-to-colllaspe-fold-all-methods-in-pycharm
I'm mainly working on PyCharm, and lot of times, I come into a situation where it would be much better if I can collapse/fold all method bodies, and leave their names only. The picture below is the the result I want, but I can't find the shortcut to do this thing. If you know any, let me know.
You can collapse the code as shown in the screenshot going to Code > Folding > Expand All to Level > 1 or using the keyboard shortcut Ctrl + Shift + NumPad *, 1. The level is absolute relative to the module level. If you have nested methods or classes you can select them individually an use the equivalent Expand to Level instead of Expand All to Level.
13
17
61,083,004
2020-4-7
https://stackoverflow.com/questions/61083004/dense-object-has-no-attribute-op
I am trying to make a fully connected model using tensorflow.keras, here is my code from tensorflow.keras.models import Model from tensorflow.keras.layers import Input, Dense, Flatten def load_model(input_shape): input = Input(shape = input_shape) dense_shape = input_shape[0] x = Flatten()(input) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) output = Dense(10, activation='softmax') model = Model(input , output) model.summary() return model but when I call the model model = load_model((120,)) I have this error 'Dense' object has no attribute 'op' How can I fix this?
You are missing (x) after your output layer. Try output = Dense(10 , activation = 'softmax')(x)
25
47
60,997,189
2020-4-2
https://stackoverflow.com/questions/60997189/how-can-i-make-faceted-plots-in-plotly-have-their-own-individual-yaxes-tick-labe
When I use Plotly express to plot different parameters with different ranges - in the example below, BloodPressureHigh, Height(cm), Weight(kg), and BloodPressureLow - using the facet_col argument, I am unable to get the resulting plot to display the unique YTicks for each of the faceted plots. Is there an easy method for the fig object to show each set of YTicks in the resulting faceted plot? Otherwise, as you can see in the resulting image, it is unclear that each box plot is on its own unique YAxis. import plotly.express as px import pandas as pd temp = [ {"Clinic": "A", "Subject": "Bill", "Height(cm)": 182, "Weight(kg)": 101, "BloodPressureHigh": 128, "BloodPressureLow": 90}, {"Clinic": "A", "Subject": "Susie", "Height(cm)": 142, "Weight(kg)": 67, "BloodPressureHigh": 120, "BloodPressureLow": 70}, {"Clinic": "B", "Subject": "John", "Height(cm)": 202, "Weight(kg)": 89, "BloodPressureHigh": 118, "BloodPressureLow": 85}, {"Clinic": "B", "Subject": "Stacy", "Height(cm)": 156, "Weight(kg)": 78, "BloodPressureHigh": 114, "BloodPressureLow": 76}, {"Clinic": "B", "Subject": "Lisa", "Height(cm)": 164, "Weight(kg)": 59, "BloodPressureHigh": 112, "BloodPressureLow": 74} ] df = pd.DataFrame(temp) # Melt the dataframe so I can use plotly express to plot distributions of all variables df_melted = df.melt(id_vars=["Clinic", "Subject"]) # Plot distributions, with different parameters in different columns fig = px.box(df_melted, x="Clinic", y="value", facet_col="variable", boxmode="overlay" ) # Update the YAxes so that the faceted column plots no longer share common YLimits fig.update_yaxes(matches=None) # Last step needed: Add tick labels to each yaxis so that the difference in YLimits is clear?
Does this help you? fig = px.box(df_melted, x="Clinic", y="value", facet_col="variable", boxmode="overlay") fig.update_yaxes(matches=None) fig.for_each_yaxis(lambda yaxis: yaxis.update(showticklabels=True)) fig.show()
25
43
61,025,973
2020-4-4
https://stackoverflow.com/questions/61025973/how-to-avoid-arrow-key-values-in-python-input
I'm getting Arrow Key values in my Python Input using input(). This only happens during the time of execution of a Python Script. It doesn't happen if Input is taken from the Interpreter. The Arrow Key values I'm referring to: Why does the terminal show "^[[A" "^[[B" "^[[C" "^[[D" when pressing the arrow keys in Ubuntu? Contents of Script File: s = input("Enter Something: ") print(s) Terminal Output: $ python input_example.py Enter Something: Now Pressing Left Arrow Key^[[D^[[D^[[D^[[D Now Pressing Left Arrow Key I'm unable to navigate (or say change cursor position) left or right while writing input cause the Arrow Key Values Show up in the Input. Is there any way to avoid them? In Terminal, generally, one can change the cursor position, this problem doesn't happen unlike Python's input(). P.s. I don't want to change any settings in bash cause I'm trying to write a script that works in all consoles. I'm a noob so I don't understand many things. I hope this community can help me out.
Found a way to prevent this! You just have to import the readline module import readline This will make the standard input() method utilize some of its utilities, enabling normal arrow-key usage and more.
12
19
61,041,707
2020-4-5
https://stackoverflow.com/questions/61041707/plotly-log-scale-in-subplot-python
I have a 3 columns in my dataframe. I have charted them all in plotly, and the below code puts them side by side in a subplot. I would like to change the third chart 'c' to have a logarithmic scale. Is this possible? fig = make_subplots(rows=1, cols=3) fig.add_trace( go.Scatter(x = df.index,y = df['a'],mode = 'lines+markers',name = 'Daily') fig.add_trace( go.Scatter(x = df.index,y = df['b'],mode = 'lines+markers',name = 'Total' ) fig.add_trace( go.Scatter(x = df.index,y = df['c'],mode = 'lines+markers',name = 'Total' ) fig.layout() fig.update_layout(height=600, width=800, title_text="Subplots") fig.show()
See the section on "Customizing Subplot Axes" in the Plotly documentation. I included an example below for 2 subplots, but the logic is the same regardless of the number of subplots. import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2, subplot_titles=("Default Scale", "Logarithmic Scale")) # subplot in default scale fig.add_trace(go.Scatter(x=[0.1, 0.2, 0.3, 0.4, 0.5], y=[1.105, 1.221, 1.35, 1.492, 1.649]), row=1, col=1) fig.update_xaxes(title_text="x-axis in default scale", row=1, col=1) fig.update_yaxes(title_text="y-axis in default scale", row=1, col=1) # subplot in logarithmic scale fig.add_trace(go.Scatter(x=[0.1, 0.2, 0.3, 0.4, 0.5], y=[1.105, 1.221, 1.35, 1.492, 1.649]), row=1, col=2) fig.update_xaxes(title_text="x-axis in logarithmic scale", type="log", row=1, col=2) fig.update_yaxes(title_text="y-axis in logarithmic scale", type="log", row=1, col=2) fig.update_layout(showlegend=False) fig.show()
18
30
61,021,252
2020-4-3
https://stackoverflow.com/questions/61021252/aws-cdk-s3-bucket-creation-error-bucket-name-already-exisits
I am new to using CloudFormation / CDK and am having trouble figuring out to deploy my stacks without error. Currently I am using the python CDK to create a bucket. This bucket will hold model files and I need to ensure that the bucket deployed in this stack retains data over time / new deployments. From my initial tests, it seems that if bucket_name is not specified, the CDK will randomly generate a new bucket name on deployment, which is not ideal. Here is the snippet used to create the bucket: bucket = aws_s3.Bucket(self, "smartsearch-bucket", bucket_name= 'mybucketname') The first time I run cdk deploy, there are no problems and the bucket is created. The second time I run cdk deploy, I get an error stating that my S3 bucket already exists. What else is needed so that I can redeploy my stack using a predetermined S3 bucket name?
I ran into the same issue, and it was because the bucket had already been created by me manually earlier for some testing, NOT by the CDK stack initially. Deleting the bucket definitely makes the CDK deployment work fine, as it did for you; I tested this by running the deployment multiple times. Ensure that no stack resources are pre-created manually. The way CloudFormation identifies whether it has to re-create a resource is via the tags it attaches to the resources it creates.
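For the data-retention part of the question, a hedged CDK sketch (CDK v1-style Python; the construct names mirror the question's snippet, and removal_policy is my addition, not part of the answer above): let the stack create and own the bucket, but tell CloudFormation to keep the bucket and its objects if the resource is ever removed or replaced.

from aws_cdk import aws_s3, core

class StorageStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        aws_s3.Bucket(
            self, "smartsearch-bucket",
            bucket_name="mybucketname",                 # must be globally unique and not pre-created by hand
            removal_policy=core.RemovalPolicy.RETAIN,   # keep the bucket and its data on stack deletion/replacement
        )

app = core.App()
StorageStack(app, "smartsearch-storage")
app.synth()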
8
3
60,992,109
2020-4-2
https://stackoverflow.com/questions/60992109/valueerror-invalid-elements-received-for-the-data-property
I encounter an issue with plotly. I would like to display different figures but, somehow, I can't manage to achieve what I want. I created 2 sources of data: from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter( x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace12 = go.Scatter( x=[0, 1, 2], y=[1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) trace21 = go.Scatter( x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1}) ) trace22 = go.Scatter( x=[0, 1, 2], y=[1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1}) ) data1 = [trace11, trace12] data2 = [trace21, trace22] Then, I created a subplot with 1 row and 2 columns and tried to add this data to the subplot: from plotly import tools fig = tools.make_subplots(rows=1, cols=2) fig.append_trace(data1, 1, 1) fig.append_trace(data2, 1, 2) fig.show() That resulted in the following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-ba20e4900d41> in <module> 1 from plotly import tools 2 fig = tools.make_subplots(rows=1, cols=2) ----> 3 fig.append_trace(data1, 1, 1) 4 fig.append_trace(data2, 1, 2) 5 fig.show() ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in append_trace(self, trace, row, col) 1797 ) 1798 -> 1799 self.add_trace(trace=trace, row=row, col=col) 1800 1801 def _set_trace_grid_position(self, trace, row, col, secondary_y=False): ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in add_trace(self, trace, row, col, secondary_y) 1621 rows=[row] if row is not None else None, 1622 cols=[col] if col is not None else None, -> 1623 secondary_ys=[secondary_y] if secondary_y is not None else None, 1624 ) 1625 ~\Anaconda3\lib\site-packages\plotly\basedatatypes.py in add_traces(self, data, rows, cols, secondary_ys) 1684 1685 # Validate traces -> 1686 data = self._data_validator.validate_coerce(data) 1687 1688 # Set trace indexes ~\Anaconda3\lib\site-packages\_plotly_utils\basevalidators.py in validate_coerce(self, v, skip_invalid) 2667 2668 if invalid_els: -> 2669 self.raise_invalid_elements(invalid_els) 2670 2671 v = to_scalar_or_list(res) ~\Anaconda3\lib\site-packages\_plotly_utils\basevalidators.py in raise_invalid_elements(self, invalid_els) 296 pname=self.parent_name, 297 invalid=invalid_els[:10], --> 298 valid_clr_desc=self.description(), 299 ) 300 ) ValueError: Invalid element(s) received for the 'data' property of Invalid elements include: [[Scatter({ 'line': {'color': 'rgb(0, 0, 128)', 'width': 1}, 'x': [0, 1, 2], 'y': [0, 0, 0] }), Scatter({ 'line': {'color': 'rgb(128, 0, 0)', 'width': 1}, 'x': [0, 1, 2], 'y': [1, 1, 1] })]] The 'data' property is a tuple of trace instances that may be specified as: - A list or tuple of trace instances (e.g. [Scatter(...), Bar(...)]) - A single trace instance (e.g. Scatter(...), Bar(...), etc.) 
- A list or tuple of dicts of string/value properties where: - The 'type' property specifies the trace type One of: ['area', 'bar', 'barpolar', 'box', 'candlestick', 'carpet', 'choropleth', 'choroplethmapbox', 'cone', 'contour', 'contourcarpet', 'densitymapbox', 'funnel', 'funnelarea', 'heatmap', 'heatmapgl', 'histogram', 'histogram2d', 'histogram2dcontour', 'image', 'indicator', 'isosurface', 'mesh3d', 'ohlc', 'parcats', 'parcoords', 'pie', 'pointcloud', 'sankey', 'scatter', 'scatter3d', 'scattercarpet', 'scattergeo', 'scattergl', 'scattermapbox', 'scatterpolar', 'scatterpolargl', 'scatterternary', 'splom', 'streamtube', 'sunburst', 'surface', 'table', 'treemap', 'violin', 'volume', 'waterfall'] - All remaining properties are passed to the constructor of the specified trace type (e.g. [{'type': 'scatter', ...}, {'type': 'bar, ...}]) I mist be doing something wrong. What is weird is that my data seems correctly shaped since I can run the following code without any issue: fig = go.Figure(data1) fig.show() I hope you can help me find a solution. Thanks!
The reason why you are getting an error it is because the function append_trace() is expecting a single trace in the form you've declared them. However, the graph object Figure has the function add_traces() with which you can pass the data parameter as a list with more than one trace. Therefore, I suggest two simple solutions: Solution 1: Append traces individually from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter(x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace12 = go.Scatter(x = [0, 1, 2], y = [1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) trace21 = go.Scatter(x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace22 = go.Scatter(x = [0, 1, 2], y = [1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2) fig.append_trace(trace11, row=1, col=1) fig.append_trace(trace12, row=1, col=1) fig.append_trace(trace21, row=1, col=2) fig.append_trace(trace22, row=1, col=2) fig.show() Solution 2: Use the function add_traces(data,rows,cols) instead from plotly.graph_objs.scatter import Line import plotly.graph_objs as go trace11 = go.Scatter(x = [0, 1, 2], y = [0, 0, 0], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace12 = go.Scatter(x = [0, 1, 2], y = [1, 1, 1], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) trace21 = go.Scatter(x = [0, 1, 2], y = [0.5, 0.5, 0.5], line = Line({'color': 'rgb(0, 0, 128)', 'width': 1})) trace22 = go.Scatter(x = [0, 1, 2], y = [1.5, 1.5, 1.5], line = Line({'color': 'rgb(128, 0, 0)', 'width': 1})) from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2) data1 = [trace11, trace12] data2 = [trace21, trace22] fig.add_traces(data1, rows=1, cols=1) fig.add_traces(data2, rows=1, cols=2) fig.show()
12
6
61,063,676
2020-4-6
https://stackoverflow.com/questions/61063676/command-errored-out-with-exit-status-1-python-setup-py-egg-info-check-the-logs
I am trying to download auto-py-to-exe on a different (windows) device than I usually use through pip. However when run I get the error (sorry it is so very very long): ERROR: Command errored out with exit status 1: command: 'c:\users\tom\appdata\local\programs\python\python38-32\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\tom\\AppData\\Local\\Temp\\pip-install-aplljhe0\\gevent\\setup.py'"'"'; __file__='"'"'C:\\Users\\tom\\AppData\\Local\\Temp\\pip-install-aplljhe0\\gevent\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\tom\AppData\Local\Temp\pip-install-aplljhe0\gevent\pip-egg-info' cwd: C:\Users\tom\AppData\Local\Temp\pip-install-aplljhe0\gevent\ Complete output (113 lines): Traceback (most recent call last): File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 489, in _find_latest_available_vc_ver return self.find_available_vc_vers()[-1] IndexError: list index out of range During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules yield saved File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup _execfile(setup_script, ns) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile exec(code, globals, locals) File "C:\Users\tom\AppData\Local\Temp\easy_install-29et0so1\cffi-1.14.0\setup.py", line 127, in <module> HUB_PRIMITIVES = Extension(name="gevent.__hub_primitives", File "C:\Users\tom\AppData\Local\Temp\easy_install-29et0so1\cffi-1.14.0\setup.py", line 105, in uses_msvc include_dirs=include_dirs) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\command\config.py", line 225, in try_compile self._compile(body, headers, include_dirs, lang) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\command\config.py", line 132, in _compile self.compiler.compile([src], include_dirs=include_dirs) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\_msvccompiler.py", line 360, in compile self.initialize() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\_msvccompiler.py", line 253, in initialize vc_env = _get_vc_env(plat_spec) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 185, in msvc14_get_vc_env return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 843, in __init__ self.si = SystemInfo(self.ri, vc_ver) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 485, in __init__ self.vc_ver = vc_ver or self._find_latest_available_vc_ver() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 492, in _find_latest_available_vc_ver raise distutils.errors.DistutilsPlatformError(err) distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is 
required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/ During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\tom\AppData\Local\Temp\pip-install-aplljhe0\gevent\setup.py", line 427, in <module> run_setup(EXT_MODULES, run_make=_BUILDING) File "C:\Users\tom\AppData\Local\Temp\pip-install-aplljhe0\gevent\setup.py", line 328, in run_setup setup( File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\__init__.py", line 144, in setup _install_setup_requires(attrs) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\dist.py", line 716, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\pkg_resources\__init__.py", line 780, in resolve dist = best[req.key] = env.best_match( File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\pkg_resources\__init__.py", line 1065, in best_match return self.obtain(req, installer) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\pkg_resources\__init__.py", line 1077, in obtain return installer(requirement) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\dist.py", line 786, in fetch_build_egg return cmd.easy_install(req) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 679, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 705, in install_item dists = self.install_eggs(spec, download, tmpdir) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs return self.build_and_install(setup_script, setup_base) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 1158, in build_and_install self.run_setup(setup_script, setup_base, args) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 1144, in run_setup run_setup(setup_script, args) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup raise File "c:\users\tom\appdata\local\programs\python\python38-32\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\users\tom\appdata\local\programs\python\python38-32\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules saved_exc.resume() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 141, in resume six.reraise(type, exc, self._tb) File 
"c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise raise value.with_traceback(tb) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules yield saved File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup _execfile(setup_script, ns) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile exec(code, globals, locals) File "C:\Users\tom\AppData\Local\Temp\easy_install-29et0so1\cffi-1.14.0\setup.py", line 127, in <module> HUB_PRIMITIVES = Extension(name="gevent.__hub_primitives", File "C:\Users\tom\AppData\Local\Temp\easy_install-29et0so1\cffi-1.14.0\setup.py", line 105, in uses_msvc include_dirs=include_dirs) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\command\config.py", line 225, in try_compile self._compile(body, headers, include_dirs, lang) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\command\config.py", line 132, in _compile self.compiler.compile([src], include_dirs=include_dirs) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\_msvccompiler.py", line 360, in compile self.initialize() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\distutils\_msvccompiler.py", line 253, in initialize vc_env = _get_vc_env(plat_spec) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 185, in msvc14_get_vc_env return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 843, in __init__ self.si = SystemInfo(self.ri, vc_ver) File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 485, in __init__ self.vc_ver = vc_ver or self._find_latest_available_vc_ver() File "c:\users\tom\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 492, in _find_latest_available_vc_ver raise distutils.errors.DistutilsPlatformError(err) distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. Even though it does state that I need Visual Studio C++ 14.0 my computer won't install it and I have not needed it before. I checked This Stack Overflow Question but it relates to another pip install and has no answers. If the only answer is to install Visual Studio then I am kinda screwed.
Edit Since this answer was posted, gevent has released several new versions, including prebuilt wheels for Python 3.8 on Windows, so the pip install gevent --pre shouldn't be necessary anymore - just run pip install auto-py-to-exe as usual and it should work. Original answer Allow prerelease gevent versions via $ pip install gevent --pre $ pip install auto-py-to-exe Explanation: auto-py-to-exe is installable on Python 3.8 on Windows without any issues (this can be verified e.g. by running pip install auto-py-to-exe --no-deps). However, it requires bottle-websocket to be installed, which in turn has gevent dependency. gevent did not release a stable version that offers prebuilt wheels for Python 3.8 yet (this would be 1.5), so pip doesn't pick up prebuilt wheels and tries to build gevent==1.4 from source dist. Installing the prerelease 1.5 version of gevent avoids this.
45
27
61,010,431
2020-4-3
https://stackoverflow.com/questions/61010431/how-to-start-with-the-instagramapi-in-python
I want to play with the Instagram API and write some code, for example to get a list of my followers and similar things. I am really new to this topic. What is the best way to do this? Is there a Python library for handling those JSON requests, or should I send them directly to the (new?) Instagram API (Graph API, Display API)? I appreciate any advice I can get. Thanks :)
LevPasha's Instagram-API-python, instabot, and many other APIs are no longer functional as of Oct 24, 2020, after Facebook deprecated the legacy API in favor of a new, authentication-required API. You now have to register your app with Facebook to get access to many of the API features (via oEmbed) that were previously available without any authentication. See https://developers.facebook.com/docs/instagram/oembed/ for more details on the new implementation and how to migrate. You should still be able to get a list of your followers, etc. via the new oEmbed API and Python: it requires registering the app, making a call to the new GET API with your authentication key via the Python requests package, and then processing the result.
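For illustration, here is a minimal sketch of such a call with the requests package. The endpoint version, the post URL, and the access token are placeholders I am assuming for the example; check the Facebook docs linked above for the current endpoint and the permissions your registered app needs.

import requests

ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN"                   # issued after registering the app with Facebook (placeholder)
POST_URL = "https://www.instagram.com/p/SOME_POST_ID/"   # the post you want data for (placeholder)

# Query the Graph API oEmbed endpoint and print the returned JSON
resp = requests.get(
    "https://graph.facebook.com/v8.0/instagram_oembed",
    params={"url": POST_URL, "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()
print(resp.json())   # embed HTML, author name, thumbnail URL, etc.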
9
10
61,062,303
2020-4-6
https://stackoverflow.com/questions/61062303/deploy-python-app-to-heroku-slug-size-too-large
I'm trying to deploy a Streamlit app written in python to Heroku. My whole directory is 4.73 MB, where 4.68 MB is my ML model. My requirements.txt looks like this: absl-py==0.9.0 altair==4.0.1 astor==0.8.1 attrs==19.3.0 backcall==0.1.0 base58==2.0.0 bleach==3.1.3 blinker==1.4 boto3==1.12.29 botocore==1.15.29 cachetools==4.0.0 certifi==2019.11.28 chardet==3.0.4 click==7.1.1 colorama==0.4.3 cycler==0.10.0 decorator==4.4.2 defusedxml==0.6.0 docutils==0.15.2 entrypoints==0.3 enum-compat==0.0.3 future==0.18.2 gast==0.2.2 google-auth==1.11.3 google-auth-oauthlib==0.4.1 google-pasta==0.2.0 grpcio==1.27.2 h5py==2.10.0 idna==2.9 importlib-metadata==1.5.2 ipykernel==5.2.0 ipython==7.13.0 ipython-genutils==0.2.0 ipywidgets==7.5.1 jedi==0.16.0 Jinja2==2.11.1 jmespath==0.9.5 joblib==0.14.1 jsonschema==3.2.0 jupyter-client==6.1.1 jupyter-core==4.6.3 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 Markdown==3.2.1 MarkupSafe==1.1.1 matplotlib==3.2.1 mistune==0.8.4 nbconvert==5.6.1 nbformat==5.0.4 notebook==6.0.3 numpy==1.18.2 oauthlib==3.1.0 opencv-python==4.2.0.32 opt-einsum==3.2.0 pandas==1.0.3 pandocfilters==1.4.2 parso==0.6.2 pathtools==0.1.2 pickleshare==0.7.5 Pillow==7.0.0 prometheus-client==0.7.1 prompt-toolkit==3.0.4 protobuf==3.11.3 pyasn1==0.4.8 pyasn1-modules==0.2.8 pydeck==0.3.0b2 Pygments==2.6.1 pyparsing==2.4.6 pyrsistent==0.16.0 python-dateutil==2.8.0 pytz==2019.3 pywinpty==0.5.7 pyzmq==19.0.0 requests==2.23.0 requests-oauthlib==1.3.0 rsa==4.0 s3transfer==0.3.3 scikit-learn==0.22.2.post1 scipy==1.4.1 Send2Trash==1.5.0 six==1.14.0 sklearn==0.0 streamlit==0.56.0 tensorboard==2.1.1 tensorflow==2.1.0 tensorflow-estimator==2.1.0 termcolor==1.1.0 terminado==0.8.3 testpath==0.4.4 toml==0.10.0 toolz==0.10.0 tornado==5.1.1 traitlets==4.3.3 tzlocal==2.0.0 urllib3==1.25.8 validators==0.14.2 watchdog==0.10.2 wcwidth==0.1.9 webencodings==0.5.1 Werkzeug==1.0.0 widgetsnbextension==3.5.1 wincertstore==0.2 wrapt==1.12.1 zipp==3.1.0 When I push my app to Heroku, the message is: remote: -----> Discovering process types remote: Procfile declares types -> web remote: remote: -----> Compressing... remote: ! Compiled slug size: 623.5M is too large (max is 500M). remote: ! See: http://devcenter.heroku.com/articles/slug-size remote: remote: ! Push failed How can my slug size be too large? Is it the size of the requirements? Then how is it possible to deploy a python app using tensorflow to Heroku after all? Thanks for the help!
I have already answered this here. Turns out the Tensorflow 2.0 module is very large (more than 500MB, the limit for Heroku) because of its GPU support. Since Heroku doesn't support GPU, it doesn't make sense to install the module with GPU support. Solution: Simply replace tensorflow with tensorflow-cpu in your requirements. This worked for me, hope it works for you too!
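Concretely, in the requirements.txt from the question the fix is a one-line swap; the pinned version below just mirrors the question, so use whichever CPU-only release matches your code:

# requirements.txt (excerpt)
# tensorflow==2.1.0      removed: the GPU-enabled wheel is what pushes the slug past 500MB
tensorflow-cpu==2.1.0    # CPU-only wheel; Heroku has no GPU anyway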
22
61
60,999,753
2020-4-2
https://stackoverflow.com/questions/60999753/pandas-future-warning-indexing-with-multiple-keys
Pandas throws a Future Warning when I apply a function to multiple columns of a groupby object. It suggests to use a list as index instead of tuples. How would one go about this? >>> df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]]) >>> df.groupby([0,1])[1,2].apply(sum) <stdin>:1: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead. 1 2 0 1 1 2 2 3 4 5 5 6 7 8 8 9
This warning was introduced in pandas 1.0.0, following a discussion on GitHub. So best use what was suggested there: df.groupby([0, 1])[[1, 2]].apply(sum) It's also possible to move the slicing operation to the end, but that is not as efficient: df.groupby([0, 1]).apply(sum).loc[:, 1:] Thanks @ALollz and @cmosig for helpful comments.
73
76
61,028,653
2020-4-4
https://stackoverflow.com/questions/61028653/msys2-with-python-3-8-importerror-cannot-import-name-open-code-from-io
NOTE: There have been several EDITs to the question, as per comments. They are indicated below, and separated by lines. As of now, the only remaining issue seems to be that numpy cannot load, possibly (but not certainly) due to two alternative python 3.8 systems present. I have updated my msys2 system a couple of months ago. That apparently included an update from python 3.7 to 3.8, but which left me with two broken pythons: I can start python when it is 3.7, but there are no associated packages, and I cannot start python when it is 3.8, which is the version holding packages. I do not know what went wrong with that, or what did I do wrong. I just noticed this now with the first time I mean to use python again after the upgrade. I will describe here a sequence of steps I followed and what I found. I will post supporting code below for clarity. I can start python, but pandas (e.g.) and many other packages are not found in python. Checking further, /mingw64/lib/python3.7/site-packages is essentially empty (surely emptied when upgrading to 3.8). Looking for the pandas package, I found I have one version installed. The pandas version is for python 3.8, surely upgraded from 3.7. I redirected PYTHONPATH from 3.7 to 3.8 Now I cannot even start python. EDIT Now I can start python, with some misconfiguration issues (i.e., partially fixed). Now the question is How can I fix python3.8, which gives the error below? ImportError: cannot import name 'open_code' from 'io' (unknown location) How can I fix python3.8, which gives the problems below? New problems: 5.1. I should have python pointing to 3.8, and also fix packages. 5.2. Some modules are not found, some other are. Note: I don't know if Msys2 upgrade breaks python2-pyqt5 has anything to do with this. Related: https://github.com/tox-dev/tox/issues/1334 https://github.com/yan12125/python3-android/issues/19 https://python-forum.io/Thread-Fatal-Python-error-init-sys-streams-can-t-initialize-sys-standard-streams-Attribute TL;DR: Supporting code pandas not found $ python Python 3.7.4 (default, Jul 11 2019, 10:29:54) [GCC 9.1.0] on msys Type "help", "copyright", "credits" or "license" for more information. Reading /home/user1/.pythonrc readline is in /usr/lib/python3.7/lib-dynload/readline.cpython-37m.dll >>> import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'pandas' >>> pandas is actually installed $ pacman -Sl | grep python | grep installed mingw64 mingw-w64-x86_64-python 3.8.2-1 [installed: 3.8.1-1] mingw64 mingw-w64-x86_64-python-apipkg 1.5-1 [installed] ... mingw64 mingw-w64-x86_64-python-pandas 1.0.3-1 [installed: 1.0.1-1] ... mingw64 mingw-w64-x86_64-python2-setuptools 44.1.0-1 [installed: 42.0.2-1] msys python 3.7.4-1 [installed] msys python2 2.7.17-1 [installed] My pandas version is for python 3.8 $ pacman -Ql mingw-w64-x86_64-python-pandas | head -5 mingw-w64-x86_64-python-pandas /mingw64/ mingw-w64-x86_64-python-pandas /mingw64/lib/ mingw-w64-x86_64-python-pandas /mingw64/lib/python3.8/ mingw-w64-x86_64-python-pandas /mingw64/lib/python3.8/site-packages/ mingw-w64-x86_64-python-pandas /mingw64/lib/python3.8/site-packages/pandas-1.0.1-py3.8.egg-info/ I redirected PYTHONPATH from 3.7 to 3.8 Changed export PYVERSION="3.7" export PYTHONDIR2="${MINGW_HOME}/lib/python${PYVERSION}" export PYTHONPATH="${PYTHONDIR2}:${PYTHONDIR2}/site-packages" to export PYVERSION="3.8" ... Now I cannot even start python. 
EDIT: Old problem: $ python Fatal Python error: init_sys_streams: can't initialize sys standard streams Traceback (most recent call last): File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/io.py", line 54, in <module> ImportError: cannot import name 'open_code' from 'io' (unknown location) Aborted (core dumped) New problems: $ python --version Python 3.7.4 $ type python python is hashed (/usr/bin/python) $ ls /usr/bin/python /usr/bin/python $ python3.8 Python 3.8.2 (default, Apr 9 2020, 13:17:39) [GCC 9.3.0 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Reading C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/home/user1/.pythonrc Module readline not available. Traceback (most recent call last): File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/home/user1/.pythonrc", line 42, in <module> del os, atexit, readline, rlcompleter, save_history, historyPath NameError: name 'readline' is not defined >>> import readline Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'readline' >>> import zipfile >>> import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pandas/__init__.py", line 16, in <module> raise ImportError( ImportError: Unable to import required dependencies: numpy: DLL load failed while importing _ctypes: No se puede encontrar el módulo especificado. >>> import numpy Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/__init__.py", line 142, in <module> from . import core File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/core/__init__.py", line 106, in <module> from . import _dtype_ctypes File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/core/_dtype_ctypes.py", line 25, in <module> import _ctypes ImportError: DLL load failed while importing _ctypes: No se puede encontrar el módulo especificado. >>> exit() Error in atexit._run_exitfuncs: Traceback (most recent call last): File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'readline' EDIT #2: Adding info as requested. I just noticed I don't have pip. That matches the fact that I never installed any package with pip... 
$ echo $PATH /usr/local/bin:/usr/bin:/bin:/opt/bin:/c/Windows/System32:/c/Windows:/c/Windows/System32/Wbem:/c/Windows/System32/WindowsPowerShell/v1.0/:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/c/Users/user1/OneDrive/Documents/soft-hard-ware/linux-ubuntu:/c/Users/user1/OneDrive/Documents/soft-hard-ware/linux-ubuntu/rsync:/c/Users/user1/Documents/appls_mydocs/science-math-visualization/gp524-win64-mingw_3/gnuplot/bin:/mingw64/bin $ echo $PYTHONPATH /c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8:/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages $ echo $PYTHONSTARTUP /home/user1/.pythonrc $ which python3.8 /mingw64/bin/python3.8 $ python3.8 -m pip freeze C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/bin/python3.8.exe: No module named pip $ python3.8 -c "import sys; print(sys.builtin_module_names)" ('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_symtable', '_thread', '_tracemalloc', '_warnings', '_weakref', '_winapi', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'msvcrt', 'nt', 'sys', 'time', 'winreg', 'xxsubtype') EDIT #3: Posting as requested, plus additional info. $ cat .pythonrc import os print( "Reading " + os.path.realpath(__file__) ) # enable syntax completion try: import readline print( "readline is in " + readline.__file__ ) except ImportError: print("Module readline not available.") else: import rlcompleter readline.parse_and_bind("tab: complete") # From https://docs.python.org/2/tutorial/interactive.html # Add auto-completion and a stored history file of commands to your Python # interactive interpreter. Requires Python 2.0+, readline. Autocomplete is # bound to the Esc key by default (you can change it - see readline docs). # # Store the file in ~/.pystartup, and set an environment variable to point # to it: "export PYTHONSTARTUP=~/.pystartup" in bash. import atexit import os #import readline #import rlcompleter historyPath = os.path.expanduser("~/.pyhistory") def save_history(historyPath=historyPath): import readline readline.write_history_file(historyPath) if os.path.exists(historyPath): #import readline readline.read_history_file(historyPath) atexit.register(save_history) del os, atexit, readline, rlcompleter, save_history, historyPath I don't see why which python3.8 and PYTHONPATH were out of sync: $ cygpath -w $(which python3.8) C:\Users\user1\Documents\appls_mydocs\PortableApps\MSYS2Portable\App\msys32\mingw64\bin\python3.8.exe $ echo $PYTHONPATH /c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8:/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages $ cygpath -w /c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8 C:\Users\user1\Documents\appls_mydocs\PortableApps\MSYS2Portable\App\msys32\mingw64\lib\python3.8 $ cygpath -w /c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages C:\Users\user1\Documents\appls_mydocs\PortableApps\MSYS2Portable\App\msys32\mingw64\lib\python3.8\site-packages $ which python /usr/bin/python I seem to have two incomplete/broken python installations (3.7, 3.8). I don't know what led to an "incomplete" upgrade. 
A couple of observations (see code below): python points to 3.7 readline is available for 3.7 and not for 3.8. I don't know why. pandas (and many others) is available for 3.8 and not for 3.7. Many of them won't import in 3.8 anyway, due to missing dependencies (which I guess are available in 3.7). I don't know why. python3.8 reports the location of .pythonrc in Windows format, while 3.7 reports it in Cygwin format. Is that normal? Removing PYTHONPATH does not help. $ unset PYTHONPATH $ python3.8 Python 3.8.2 (default, Apr 9 2020, 13:17:39) [GCC 9.3.0 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Reading C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/home/user1/.pythonrc Module readline not available. Traceback (most recent call last): File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/home/user1/.pythonrc", line 42, in <module> del os, atexit, readline, rlcompleter, save_history, historyPath NameError: name 'readline' is not defined >>> import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pandas/__init__.py", line 16, in <module> raise ImportError( ImportError: Unable to import required dependencies: numpy: DLL load failed while importing _ctypes: No se puede encontrar el módulo especificado. >>> exit() Error in atexit._run_exitfuncs: Traceback (most recent call last): File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'readline' $ python Python 3.7.4 (default, Jul 11 2019, 10:29:54) [GCC 9.1.0] on msys Type "help", "copyright", "credits" or "license" for more information. Reading /home/user1/.pythonrc readline is in /usr/lib/python3.7/lib-dynload/readline.cpython-37m.dll >>> import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'pandas' >>> exit() EDIT #4: I checked for all python packages, and I found something strange. It seems the basic python installation comprises two separate packages, one mingw64 and another msys. I wouldn't know what does each one do. $ pacman -Sl | grep "python" | grep "installed" ... mingw64 mingw-w64-x86_64-python 3.8.2-2 [installed] ... msys python 3.8.2-1 [installed: 3.7.4-1] ... On one hand, there was a mismatch in installed versions mingw64 (3.8.2-2) vs msys (3.7.4-1). On the other hand, the subversions available are not exactly the same for both (3.8.2-2) vs (3.8.2-1). Anyway, I proceeded upgrading msys python, and this fixed things significantly. $ pacman -S python ... $ python Python 3.8.2 (default, Apr 16 2020, 15:31:48) [GCC 9.3.0] on msys Type "help", "copyright", "credits" or "license" for more information. Reading /home/user1/.pythonrc Traceback (most recent call last): File "/home/user1/.pythonrc", line 9, in <module> import readline File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/readline.py", line 6, in <module> from pyreadline.rlmain import Readline File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pyreadline/__init__.py", line 12, in <module> from . 
import logger, clipboard, lineeditor, modes, console File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pyreadline/clipboard/__init__.py", line 13, in <module> from .win32_clipboard import GetClipboardText, SetClipboardText File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pyreadline/clipboard/win32_clipboard.py", line 37, in <module> import ctypes.wintypes as wintypes File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/ctypes/wintypes.py", line 20, in <module> class VARIANT_BOOL(ctypes._SimpleCData): ValueError: _type_ 'v' not supported Failed calling sys.__interactivehook__ Traceback (most recent call last): File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site.py", line 412, in register_readline import readline File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/readline.py", line 6, in <module> from pyreadline.rlmain import Readline File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pyreadline/__init__.py", line 12, in <module> from . import logger, clipboard, lineeditor, modes, console File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pyreadline/clipboard/__init__.py", line 13, in <module> from .win32_clipboard import GetClipboardText, SetClipboardText File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/pyreadline/clipboard/win32_clipboard.py", line 37, in <module> import ctypes.wintypes as wintypes File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/ctypes/wintypes.py", line 20, in <module> class VARIANT_BOOL(ctypes._SimpleCData): ValueError: _type_ 'v' not supported So I have readline in 3.8 now. But there is another issue that pops up during loading of .pythonrc. Plus, there is a problem with numpy. It does not stem from the python version (/usr/bin/python points now to 3.8). $ python --version Python 3.8.2 $ python ... >>> import numpy Traceback (most recent call last): File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/core/__init__.py", line 24, in <module> from . import multiarray File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/core/multiarray.py", line 14, in <module> from . import overrides File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/core/overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ModuleNotFoundError: No module named 'numpy.core._multiarray_umath' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/__init__.py", line 142, in <module> from . 
import core File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site-packages/numpy/core/__init__.py", line 54, in <module> raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy c-extensions failed. - Try uninstalling and reinstalling numpy. - If you have already done that, then: 1. Check that you expected to use Python3.8 from "/usr/bin/python.exe", and that you have no directories in your PATH or PYTHONPATH that can interfere with the Python and numpy version "1.18.3" you're trying to use. 2. If (1) looks fine, you can open a new issue at https://github.com/numpy/numpy/issues. Please include details on: - how you installed Python - how you installed numpy - your operating system - whether or not you have multiple versions of Python installed - if you built from source, your compiler versions and ideally a build log - If you're working with a numpy git repository, try `git clean -xdf` (removes all files not under version control) and rebuild numpy. Note: this error has many possible causes, so please don't comment on an existing issue about this - open a new one instead. Original error was: No module named 'numpy.core._multiarray_umath' EDIT #5 As per @a_guest suggestion, and pointing me to "ValueError: _type_ 'v' not supported" error after installing PyReadline, I removed pyreadline, and this problem is gone. Now the only remaining issue seems to be that numpy cannot load, possibly (but not certainly) due to two alternative python 3.8 systems present. So the questions now is: Msys2: Two python installations?
The ImportError: cannot import name 'open_code' from 'io' (unknown location) comes from the fact that there are two different versions of Python conflicting with each other. python still points to the old version 3.7 but PYTHONPATH got updated to point to the new 3.8 version. As the documentation of PYTHONPATH states, it becomes prepended to the module search path and hence shadows any builtin modules: The default search path is installation dependent, but generally begins with prefix/lib/pythonversion (see PYTHONHOME above). It is always appended to PYTHONPATH. You can reproduce that behavior by creating two different virtual environments and then start one while having PYTHONPATH point to the other. In the following I used Miniconda to create two different environments, py37 and py38, containing a 3.7 and 3.8 installation respectively. (py37) user@pc:~$ python --version Python 3.7.6 (py37) user@pc:~$ PYTHONPATH=~/miniconda3/envs/py38/lib/python3.8/ python Fatal Python error: init_sys_streams: can't initialize sys standard streams Traceback (most recent call last): File "/home/user/miniconda3/envs/py38/lib/python3.8/io.py", line 54, in <module> ImportError: cannot import name 'open_code' from 'io' (unknown location) Aborted (core dumped)
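If it helps to verify this on the broken setup, a small diagnostic run under each interpreter (python and python3.8) shows which executable actually starts and which directories it searches, so the 3.7/3.8 mix-up becomes visible immediately; this is only a checking aid, not a fix:

import sys

print(sys.executable)   # which python binary is actually running
print(sys.version)      # 3.7.x vs 3.8.x
for p in sys.path:      # entries coming from PYTHONPATH show up near the front
    print(p)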
13
14
61,072,873
2020-4-7
https://stackoverflow.com/questions/61072873/hex-size-in-matplotlib-hexbins-based-on-density-of-nearby-points
I've got the following code which produces the following figure import numpy as np np.random.seed(3) import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame() df['X'] = list(np.random.randint(100, size=100)) + list(np.random.randint(30, size=100)) df['Y'] = list(np.random.randint(100, size=100)) + list(np.random.randint(30, size=100)) df['Bin'] = df.apply(lambda row: .1 if row['X'] < 30 and row['Y'] < 30 else .9, axis=1) fig, ax = plt.subplots(figsize=(10,10)) plt.scatter(df['X'], df['Y']) I graphed the data using hexbins, as noted below from matplotlib import cm fig, ax = plt.subplots(figsize=(10,10)) hexbin = ax.hexbin(df['X'], df['Y'], C=df['Bin'], gridsize=20, cmap= cm.get_cmap('RdYlBu_r'),edgecolors='black') plt.show() I'd like to change the size of the hexagons based on the density of the points plotted in the area that a hexagon covers. For example, the hexagons in the bottom left (where the points are compact) will be larger than the hexagons everywhere else (where the points are sparse). Is there a way to do this? Edit: I tried this solution, but I can't figure out how to color the hexes based on df['Bin'], or how to set the min and max hex size. from matplotlib.collections import PatchCollection from matplotlib.path import Path from matplotlib.patches import PathPatch fig, ax = plt.subplots(figsize=(10,10)) hexbin = ax.hexbin(df['X'], df['Y'], C=df['Bins'], gridsize=20, cmap= cm.get_cmap('RdYlBu_r'),edgecolors='black') def sized_hexbin(ax,hc): offsets = hc.get_offsets() orgpath = hc.get_paths()[0] verts = orgpath.vertices values = hc.get_array() ma = values.max() patches = [] for offset,val in zip(offsets,values): v1 = verts*val/ma+offset path = Path(v1, orgpath.codes) patch = PathPatch(path) patches.append(patch) pc = PatchCollection(patches, cmap=cm.get_cmap('RdYlBu_r'), edgecolors='black') pc.set_array(values) ax.add_collection(pc) hc.remove() sized_hexbin(ax,hexbin) plt.show()
You may want to spend sometime in understanding color mapping. import numpy as np np.random.seed(3) import pandas as pd import matplotlib.pyplot as plt from matplotlib.collections import PatchCollection from matplotlib.path import Path from matplotlib.patches import PathPatch df = pd.DataFrame() df['X'] = list(np.random.randint(100, size=100)) + list(np.random.randint(30, size=100)) df['Y'] = list(np.random.randint(100, size=100)) + list(np.random.randint(30, size=100)) df['Bin'] = df.apply(lambda row: .1 if row['X'] < 30 and row['Y'] < 30 else .9, axis=1) #fig, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True) ax1 = plt.scatter(df['X'], df['Y']) fig,ax2 = plt.subplots(figsize=(10,10)) hexbin = ax2.hexbin(df['X'], df['Y'], C=df['Bin'], gridsize=20,edgecolors='black',cmap= 'RdBu', reduce_C_function=np.bincount) #** def sized_hexbin(ax,hc): offsets = hc.get_offsets() orgpath = hc.get_paths()[0] verts = orgpath.vertices values = hc.get_array() ma = values.max() patches = [] for offset,val in zip(offsets,values): v1 = verts*val/ma + offset path = Path(v1, orgpath.codes) patch = PathPatch(path) patches.append(patch) pc = PatchCollection(patches, cmap= 'RdBu', edgecolors='black') pc.set_array(values) ax.add_collection(pc) hc.remove() sized_hexbin(ax2,hexbin) cb = plt.colorbar(hexbin, ax=ax2) plt.show() To plot the chart based on df['bins'] values - Need to change the reduce_C_function in #** marked line - hexbin = ax2.hexbin(df['X'], df['Y'], C=df['Bin'], gridsize=20,edgecolors='black',cmap= 'RdBu', reduce_C_function=np.sum) [![enter image description here][2]][2] [1]: https://i.sstatic.net/kv0U4.png [2]: https://i.sstatic.net/mb0gD.png # Another variation of the chart : # Where size is based on count of points in the bins and color is based on values of the df['bin']./ Also added if condition to control minimum hexbin size. import numpy as np np.random.seed(3) import pandas as pd import matplotlib.pyplot as plt from matplotlib.collections import PatchCollection from matplotlib.path import Path from matplotlib.patches import PathPatch from functools import partial mycmp = 'coolwarm' df = pd.DataFrame() df['X'] = list(np.random.randint(100, size=100)) + list(np.random.randint(30, size=100)) df['Y'] = list(np.random.randint(100, size=100)) + list(np.random.randint(30, size=100)) df['Bin'] = df.apply(lambda row: .1 if row['X'] < 30 and row['Y'] < 30 else .9, axis=1) #fig, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True) ax1 = plt.scatter(df['X'], df['Y']) fig,ax2 = plt.subplots(figsize=(10,10)) hexbin = ax2.hexbin(df['X'], df['Y'], C=df['Bin'], gridsize=15,edgecolors='black',cmap= newcmp , reduce_C_function=np.bincount) hexbin2 = ax2.hexbin(df['X'], df['Y'], C=df['Bin'], gridsize=15,edgecolors='black',cmap= newcmp , reduce_C_function=np.mean) def sized_hexbin(ax,hc,hc2): offsets = hc.get_offsets() orgpath = hc.get_paths()[0] verts = orgpath.vertices values1 = hc.get_array() values2 = hc2.get_array() ma = values1.max() patches = [] for offset,val in zip(offsets,values1): # Adding condition for minimum size if (val/ma) < 0.2: val_t = 0.2 else: val_t = val/ma v1 = verts*val_t + offset path = Path(v1, orgpath.codes) print(path) patch = PathPatch(path) patches.append(patch) pc = PatchCollection(patches, cmap= newcmp) #edgecolors='black' pc.set_array(values2) ax.add_collection(pc) hc.remove() hc2.remove() sized_hexbin(ax2,hexbin,hexbin2) cb = plt.colorbar(hexbin2, ax=ax2) plt.xlim((-5, 100)) plt.ylim((-5, 100)) plt.show()
9
3
61,042,524
2020-4-5
https://stackoverflow.com/questions/61042524/create-a-nxn-matrix-from-one-column-pandas
i have dataframe with each row having a list value. id list_of_value 0 ['a','b','c'] 1 ['d','b','c'] 2 ['a','b','c'] 3 ['a','b','c'] i have to do a calculate a score with one row and against all the other rows For eg: Step 1: Take value of id 0: ['a','b','c'], Step 2: find the intersection between id 0 and id 1 , resultant = ['b','c'] Step 3: Score Calculation => resultant.size / id.size repeat step 2,3 between id 0 and id 1,2,3, similarly for all the ids. and create a N x N dataframe; such as this: - 0 1 2 3 0 1 0.6 1 1 1 1 1 1 1 2 1 1 1 1 3 1 1 1 1 Right now my code has just one for loop: def scoreCalc(x,queryTData): #mathematical calculation commonTData = np.intersect1d(np.array(x),queryTData) return commonTData.size/queryTData.size ids = list(df['feed_id']) dfSim = pd.DataFrame() for indexQFID in range(len(ids)): queryTData = np.array(df.loc[df['id'] == ids[indexQFID]]['list_of_value'].values.tolist()) dfSim[segmentDfFeedIds[indexQFID]] = segmentDf['list_of_value'].apply(scoreCalc,args=(queryTData,)) Is there a better way to do this? can i just write one apply function instead doing a for-loop iteration. can i make it faster?
If you data is not too big, you can use get_dummies to encode the values and do a matrix multiplication: s = pd.get_dummies(df.list_of_value.explode()).sum(level=0) s.dot(s.T).div(s.sum(1)) Output: 0 1 2 3 0 1.000000 0.666667 1.000000 1.000000 1 0.666667 1.000000 0.666667 0.666667 2 1.000000 0.666667 1.000000 1.000000 3 1.000000 0.666667 1.000000 1.000000 Update: Here's a short explanation for the code. The main idea is to turn the given lists into one-hot-encoded: a b c d 0 1 1 1 0 1 0 1 1 1 2 1 1 1 0 3 1 1 1 0 Once we have that, the size of intersection of the two rows, say, 0 and 1 is just their dot product, because a character belongs to both rows if and only if it is represented by 1 in both. With that in mind, first use df.list_of_value.explode() to turn each cell into a series and concatenate all of those series. Output: 0 a 0 b 0 c 1 d 1 b 1 c 2 a 2 b 2 c 3 a 3 b 3 c Name: list_of_value, dtype: object Now, we use pd.get_dummies on that series to turn it to a one-hot-encoded dataframe: a b c d 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 1 0 1 0 0 1 0 0 1 0 2 1 0 0 0 2 0 1 0 0 2 0 0 1 0 3 1 0 0 0 3 0 1 0 0 3 0 0 1 0 As you can see, each value has its own row. Since we want to combine those belong to the same original row to one row, we can just sum them by the original index. Thus s = pd.get_dummies(df.list_of_value.explode()).sum(level=0) gives the binary-encoded dataframe we want. The next line s.dot(s.T).div(s.sum(1)) is just as your logic: s.dot(s.T) computes dot products by rows, then .div(s.sum(1)) divides counts by rows.
12
7
61,071,022
2020-4-7
https://stackoverflow.com/questions/61071022/pywintypes-com-error-2147221008-coinitialize-has-not-been-called-none-n
When I try to run this code as is I get this error "IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch) pywintypes.com_error: (-2147221008, 'CoInitialize has not been called.', None, None)" , however if I run stp_tracker alone it works fine and if I run notify stp's alone it works just fine. I appreciate anyones input. Thanks import time import win32com.client # import sys from watchdog.observers import Observer from watchdog.events import PatternMatchingEventHandler # import watchdog class MyHandler(PatternMatchingEventHandler): patterns = ["*.stp", "*.step", "*.txt"] def process(self, event): """ event.event_type 'modified' | 'created' | 'moved' | 'deleted' event.is_directory True | False event.src_path path/to/observed/file """ # the file will be processed there print(event.src_path, event.event_type) def on_modified(self, event): self.process(event) notify_stps() def on_created(self, event): self.process(event) notify_stps() def on_deleted(self, event): self.process(event) notify_stps() def stp_tracker(): # /if __name__ == '__main__': path = r"W:\TestFolder" observer = Observer() observer.schedule(MyHandler(), path) observer.start() try: while True: time.sleep(1) except KeyboardInterrupt: observer.stop() observer.join() def notify_stps(): const = win32com.client.constants olMailItem = 0x0 obj = win32com.client.Dispatch("Outlook.Application") newMail = obj.CreateItem(olMailItem) newMail.Subject = "I AM SUBJECT!!" newMail.Body = "Step files in directory" # newMail.BodyFormat = 2 # olFormatHTML https://msdn.microsoft.com/en-us/library/office/aa219371(v=office.11).aspx # newMail.HTMLBody = "<HTML><BODY>Enter the <span style='color:red'>message</span> text here.</BODY></HTML>" newMail.To = '[email protected]' # attachment1 = r"C:\Temp\example.pdf" # newMail.Attachments.Add(Source=attachment1) newMail.Send() stp_tracker()
Apologies for that; after searching the internet I found something that helped: calling pythoncom.CoInitialize(). I had come across the same suggestion earlier and assumed it was deprecated, because autocomplete in PyCharm was not picking anything up when I typed pythoncom.CoInitialize(), which made me think the information was outdated. Strive Sun explained the same thing.
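For reference, a minimal sketch of what helped: watchdog dispatches events on its own worker thread, and COM has to be initialized on whichever thread creates the Outlook object, so call pythoncom.CoInitialize() at the top of notify_stps. The names follow the question's code and the recipient address is a placeholder.

import pythoncom
import win32com.client

def notify_stps():
    pythoncom.CoInitialize()               # initialize COM on the watchdog worker thread
    try:
        outlook = win32com.client.Dispatch("Outlook.Application")
        new_mail = outlook.CreateItem(0)   # 0 = olMailItem
        new_mail.Subject = "I AM SUBJECT!!"
        new_mail.Body = "Step files in directory"
        new_mail.To = "[email protected]"
        new_mail.Send()
    finally:
        pythoncom.CoUninitialize()         # balance the CoInitialize call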
8
4
61,088,235
2020-4-7
https://stackoverflow.com/questions/61088235/flat-file-nosql-solution
Is there a built-in way in SQLite (or similar) to keep the best of both worlds SQL / NoSQL, for small projects, i.e.: stored in a (flat) file like SQLite (no client/server scheme, no server to install; more precisely : nothing else to install except pip install <package>) possibility to store rows as dict, without having a common structure for each row, like NoSQL databases support of simple queries Example: db = NoSQLite('test.db') db.addrow({'name': 'john doe', 'balance': 1000, 'data': [1, 73.23, 18]}) db.addrow({'name': 'alice', 'balance': 2000, 'email': '[email protected]'}) for row in db.find('balance > 1500'): print(row) # {'id': 'f565a9fd3a', 'name': 'alice', 'balance': 2000, 'email': '[email protected]'} # id was auto-generated Note: I have constantly been amazed along the years by how many interesting features are in fact possible with SQLite in a few lines of code, that's why I'm asking if what I describe here could maybe be available simply with SQLite by using only a few SQLite core features. PS: shelve could look like a solution but in fact it's just a persistent key/value store, and it doesn't have query/find functions; also bsddb (BerkeleyDB for Python) looks deprecated and has no query feature with a similar API.
It's possible via using the JSON1 extension to query JSON data stored in a column, yes: sqlite> CREATE TABLE test(data TEXT); sqlite> INSERT INTO test VALUES ('{"name":"john doe","balance":1000,"data":[1,73.23,18]}'); sqlite> INSERT INTO test VALUES ('{"name":"alice","balance":2000,"email":"[email protected]"}'); sqlite> SELECT * FROM test WHERE json_extract(data, '$.balance') > 1500; data -------------------------------------------------- {"name":"alice","balance":2000,"email":"[email protected]"} If you're going to be querying the same field a lot, you can make it more efficient by adding an index on the expression: CREATE INDEX test_idx_balance ON test(json_extract(data, '$.balance')); will use that index on the above query instead of scanning every single row.
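To get the API shape sketched in the question on top of this, a thin wrapper over the standard-library sqlite3 module is enough. This is a minimal sketch, assuming your Python's sqlite3 build ships the JSON1 functions; the class name, method names, and the generated id are illustrative, not an existing package.

import json
import sqlite3
import uuid

class NoSQLite:
    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, data TEXT)")

    def addrow(self, doc):
        doc_id = uuid.uuid4().hex[:10]       # auto-generated id, as in the example
        self.conn.execute("INSERT INTO docs VALUES (?, ?)", (doc_id, json.dumps(doc)))
        self.conn.commit()
        return doc_id

    def find(self, field, op, value):
        if op not in ('=', '!=', '<', '<=', '>', '>='):   # whitelist instead of raw string conditions
            raise ValueError(op)
        sql = "SELECT id, data FROM docs WHERE json_extract(data, ?) " + op + " ?"
        for doc_id, data in self.conn.execute(sql, ('$.' + field, value)):
            row = json.loads(data)
            row['id'] = doc_id
            yield row

# usage, mirroring the example in the question
db = NoSQLite('test.db')
db.addrow({'name': 'john doe', 'balance': 1000, 'data': [1, 73.23, 18]})
db.addrow({'name': 'alice', 'balance': 2000, 'email': '[email protected]'})
for row in db.find('balance', '>', 1500):
    print(row)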
14
8
60,974,077
2020-4-1
https://stackoverflow.com/questions/60974077/how-to-save-keras-model-as-frozen-graph
I am working with Tensorflow 2.0 and want to store the following Keras model as frozen graph. import tensorflow as tf model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(64, input_shape=[100])) model.add(tf.keras.layers.Dense(32, activation='relu')) model.add(tf.keras.layers.Dense(16, activation='relu')) model.add(tf.keras.layers.Dense(2, activation='softmax')) model.summary() model.save('./models/') I can't find any good examples how to do this in Tensorflow 2.0. I have found the freeze_graph.py file in the Tensorflow Github repository but find it hard to wrap my head around it. I load the file mentioned above using: from tensorflow.python.tools.freeze_graph import freeze_graph But what exactly do I have to provide to the freeze_graph function itself? Here I marked the arguments where I am not sure with a questionmark. freeze_graph(input_graph=?, input_saver='', input_binary=False, input_checkpoint=?, output_node_names=?, restore_op_name='', filename_tensor_name='', output_graph='./frozen_graph.pb', clear_devices=True, initializer_nodes='') Can someone provide a simple example that shows how I can store the model above as a frozen graph using the freeeze_graph function?
Freeze_Graph is now gone in Tensorflow 2.0. You can check it here Tensorflow 2.0 : frozen graph support. Except for the .save method that you have in your code. .save Method is already saving a .pb ready for inference. As an alternative, you can also use the below code. You can also use convert_variables_to_constants_v2 Below is the sample code. import tensorflow as tf import os from tensorflow.python.tools import freeze_graph from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(64, input_shape=(1,))) model.add(tf.keras.layers.Dense(32, activation='relu')) model.add(tf.keras.layers.Dense(16, activation='relu')) model.add(tf.keras.layers.Dense(1, activation='softmax')) model.compile(optimizer='adam', loss='mse') model.summary() # Convert Keras model to ConcreteFunction full_model = tf.function(lambda x: model(x)) full_model = full_model.get_concrete_function( tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="yourInputName")) # Get frozen ConcreteFunction frozen_func = convert_variables_to_constants_v2(full_model) frozen_func.graph.as_graph_def() layers = [op.name for op in frozen_func.graph.get_operations()] print("-" * 50) print("Frozen model layers: ") for layer in layers: print(layer) print("-" * 50) print("Frozen model inputs: ") print(frozen_func.inputs) print("Frozen model outputs: ") print(frozen_func.outputs) # Save frozen graph from frozen ConcreteFunction to hard drive tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir="./frozen_models", name="frozen_graph.pb", as_text=False) ### USAGE ## def wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False): def _imports_graph_def(): tf.compat.v1.import_graph_def(graph_def, name="") wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, []) import_graph = wrapped_import.graph print("-" * 50) print("Frozen model layers: ") layers = [op.name for op in import_graph.get_operations()] if print_graph == True: for layer in layers: print(layer) print("-" * 50) return wrapped_import.prune( tf.nest.map_structure(import_graph.as_graph_element, inputs), tf.nest.map_structure(import_graph.as_graph_element, outputs)) ## Example Usage ### # Load frozen graph using TensorFlow 1.x functions with tf.io.gfile.GFile("./frozen_models/frozen_graph.pb", "rb") as f: graph_def = tf.compat.v1.GraphDef() loaded = graph_def.ParseFromString(f.read()) # Wrap frozen graph to ConcreteFunctions frozen_func = wrap_frozen_graph(graph_def=graph_def, inputs=["yourInputName:0"], outputs=["Identity:0"], print_graph=True) print("-" * 50) print("Frozen model inputs: ") print(frozen_func.inputs) print("Frozen model outputs: ") print(frozen_func.outputs) # Get predictions for test images predictions = frozen_func(yourInputName=tf.constant([[3.]])) # Print the prediction for the first image print("-" * 50) print("Example prediction reference:") print(predictions[0].numpy())
9
14
61,074,714
2020-4-7
https://stackoverflow.com/questions/61074714/open-cv-contour-area-miscalculation
I am just starting to play with OpenCV and I have found some very strange behaviour from the contourArea function. See this image. It has three non connected areas, the left is a grouping of long strokes and on the top center there is a single dot and finally a big square on the right. When I run my function, I get this result Contour[0] Area: 221, Length: 70, Colour: Red Contour[1] Area: 13772, Length: 480, Colour: Green Contour[2] Area: 150, Length: 2370, Colour: Blue While I havent actually counted the area of the left part, It seems as if it encompasses much more than 150 pixels and would certainly have a higher value than the dot in the top center, I would say that dot should be able to fit in to the left part at least 10 times. The area of the square does work out. Square Area width = 118 height = 116 118 * 116 = 13,688 13,688 is really close to what opencv gave as the area (13,772), the difference is likely measurement error on my behalf. I manually calculated the area of the dot Dot Area width = 27 height = 6 27*6 = 162 Not too far off from what opencv said it would be (221) Reading from the OpenCV docs page on contourArea it says that it will give wrong results for contours with self intersections. Not really understanding what self intersections are, I made a test image. As you can see I have a rectangle on the left and a cross in the middle and another cross rotated 45 deg. I would expect the cross to have slightly less than double the area of the rectangle due to the overlap in the center. Contour[0] Area: 1805, Length: 423, Colour: Red Contour[1] Area: 947, Length: 227, Colour: Green Contour[2] Area: 1825, Length: 415, Colour: Blue As you can see the area of the two crosses are slightly less than double the area of the rectangle. As expected. I am not interested in capturing the inside of the square or getting a box drawn around the shape on the left and the dot (though it would be tangentially interesting) it's not specifically what I'm asking about in this question. So my question: Why is the area of my irregular shape severly underestimated? Am I using the wrong function? Am I using the right function incorrectly? Have I found a bug in opencv? Does self intersections have a meaning that wasn't demonstrated in my test? I copied most of this code from this tutorial I have stripped down my code to this self contained example below. def contour_test(name): import cv2 as cv colours = [{'name': 'Red ', 'bgr': (0, 0, 255)}, {'name': 'Green ', 'bgr': (0, 255, 0)}, {'name': 'Blue ', 'bgr': (255, 0, 0)}] src = cv.imread(cv.samples.findFile(name)) src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY) src_gray = cv.blur(src_gray, (3,3)) threshold = 100 canny_output = cv.Canny(src_gray, threshold, threshold * 2) contours, _ = cv.findContours(canny_output, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) # Get the moments mu = [None for i in contours] for i in range(len(contours)): mu[i] = cv.moments(contours[i]) # Get the mass centers mc = [None for i in contours] for i in range(len(contours)): mc[i] = (mu[i]['m10'] / (mu[i]['m00'] + 1e-5), mu[i]['m01'] / (mu[i]['m00'] + 1e-5)) # Draw contours drawing = np.zeros((canny_output.shape[0], canny_output.shape[1], 3), dtype=np.uint8) for i, j in enumerate(contours): colour = colours[i]['bgr'] cv.drawContours(drawing, contours, i, colour, 2) area = int(cv.contourArea(contours[i])) length = int(cv.arcLength(contours[i], True)) print('Contour[{0}] Area: {1}, Length: {2}, Colour: {3}'.format(i, area, length, colours[i]['name']))
The inner part of the contours the findContours finds is supposed to be of filled with white color. Don't use cv.Canny before findContours (cv.blur is also not required). Make sure the contours are white and not black. You may use cv.threshold with cv.THRESH_BINARY_INV option for inverting polarity. It is recommended to add cv.THRESH_OTSU option for automatic threshold. You may replace cv.blur and cv.Canny and cv.findContours(canny_output... with: _, src_thresh = cv.threshold(src_gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU) contours, _ = cv.findContours(src_thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) Result (of top image): Contour[0] Area: 13531, Length: 476, Colour: Red Contour[1] Area: 184, Length: 71, Colour: Green Contour[2] Area: 4321, Length: 1202, Colour: Blue Here is the complete (updated) code: import numpy as np def contour_test(name): import cv2 as cv colours = [{'name': 'Red ', 'bgr': (0, 0, 255)}, {'name': 'Green ', 'bgr': (0, 255, 0)}, {'name': 'Blue ', 'bgr': (255, 0, 0)}] src = cv.imread(cv.samples.findFile(name)) src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY) #src_gray = cv.blur(src_gray, (3,3)) #threshold = 100 #canny_output = cv.Canny(src_gray, threshold, threshold * 2) #contours, _ = cv.findContours(canny_output, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) _, src_thresh = cv.threshold(src_gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU) cv.imshow('src_thresh', src_thresh);cv.waitKey(0);cv.destroyAllWindows() # Show src_thresh for testing contours, _ = cv.findContours(src_thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) # Get the moments mu = [None for i in contours] for i in range(len(contours)): mu[i] = cv.moments(contours[i]) # Get the mass centers mc = [None for i in contours] for i in range(len(contours)): mc[i] = (mu[i]['m10'] / (mu[i]['m00'] + 1e-5), mu[i]['m01'] / (mu[i]['m00'] + 1e-5)) # Draw contours drawing = np.zeros((src_thresh.shape[0], src_thresh.shape[1], 3), dtype=np.uint8) for i, j in enumerate(contours): colour = colours[i]['bgr'] cv.drawContours(drawing, contours, i, colour, 2) area = int(cv.contourArea(contours[i])) length = int(cv.arcLength(contours[i], True)) print('Contour[{0}] Area: {1}, Length: {2}, Colour: {3}'.format(i, area, length, colours[i]['name'])) cv.imshow('drawing', drawing);cv.waitKey(0);cv.destroyAllWindows() # Show drawing for testing contour_test('img.jpg') I added cv.imshow in two places for testing.
8
6
61,051,161
2020-4-6
https://stackoverflow.com/questions/61051161/find-xarray-indices-where-conditions-are-satisfied
I want to get the indices of an xarray data array where some condition is satisfied. An answer provided in a related thread (here) for how to find the location for the maximum did not work for me either. In my case, I want to find out the locations for other types of conditions too, not just maximum. Here is what I tried: h=xr.DataArray(np.random.randn(3,4)) h.where(h==h.max(),drop=True).squeeze() # This is the output I got: <xarray.DataArray ()> array(1.66065694) This does not return the position as shown in the example I linked to, even though I am executing the same command. Am not sure why.
I updated the linked example to show the indexes more clearly. Because xarray no longer adds default indexes, the previous example finds the max location but doesn't show the indexes. Copied below: In [17]: da = xr.DataArray( np.random.rand(2,3), dims=list('ab'), coords=dict(a=list('xy'), b=list('ijk')) ) In [18]: da.where(da==da.max(), drop=True).squeeze() Out[18]: <xarray.DataArray ()> array(0.96213673) Coordinates: a <U1 'x' b <U1 'j'
9
7
61,082,381
2020-4-7
https://stackoverflow.com/questions/61082381/xgboost-produce-prediction-result-and-probability
I am probably looking right over it in the documentation, but I wanted to know if there is a way with XGBoost to generate both the prediction and probability for the results? In my case, I am trying to predict a multi-class classifier. it would be great if I could return Medium - 88%. Classifier = Medium Probability of Prediction = 88% parameters params = { 'max_depth': 3, 'objective': 'multi:softmax', # error evaluation for multiclass training 'num_class': 3, 'n_gpus': 0 } prediction pred = model.predict(D_test) results array([2., 2., 1., ..., 1., 2., 2.], dtype=float32) User friendly (label encoder) pred_int = pred.astype(int) label_encoder.inverse_transform(pred_int[:5]) array(['Medium', 'Medium', 'Low', 'Low', 'Medium'], dtype=object) EDIT: @Reveille suggested predict_proba. I am not instantiating XGBClassifer(). Should I be? How would I modify my pipeline to use that, if so? params = { 'max_depth': 3, 'objective': 'multi:softmax', # error evaluation for multiclass training 'num_class': 3, 'n_gpus': 0 } steps = 20 # The number of training iterations model = xgb.train(params, D_train, steps)
You can try pred_p = model.predict_proba(D_test) An example I had around (not multi-class though): import xgboost as xgb from sklearn.datasets import make_moons from sklearn.model_selection import train_test_split X, y = make_moons(noise=0.3, random_state=0) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1) xgb_clf = xgb.XGBClassifier() xgb_clf = xgb_clf.fit(X_train, y_train) print(xgb_clf.predict(X_test)) print(xgb_clf.predict_proba(X_test)) [1 1 1 0 1 0 1 0 0 1] [[0.0394336 0.9605664 ] [0.03201818 0.9679818 ] [0.1275925 0.8724075 ] [0.94218 0.05782 ] [0.01464975 0.98535025] [0.966953 0.03304701] [0.01640552 0.9835945 ] [0.9297296 0.07027044] [0.9580196 0.0419804 ] [0.02849442 0.9715056 ]] Note as mentioned in the comments by @scarpacci (ref): predict_proba() method only exists for the scikit-learn interface
19
25
61,076,688
2020-4-7
https://stackoverflow.com/questions/61076688/django-form-dateinput-with-widget-in-update-loosing-the-initial-value
I need a DateInput field in a ModelForm with the default HTML datepicker (I'm not using 3rd party libraries). Since the DateInput is rendered with <input type = "text"> by default, the datepicker is missing (it comes for free with <input type = "date">) I've found some examples explaining how to change the input type by handling widget parameters (below the code I've done so far) The issue I have the datepicker working correctly but in "update mode" when passing initial date value to the form (see view part), the date remains empty in the HTML. I've tried to find the cause and it seems that the 'type': 'date' part in the widget customization is clearing the initial value is some way; in fact, removing it, the initial value date is displayed again, but I loose the datepicker of course. In the view the date is passed with a valid value I also found another similar unanswered question where the field was declared as class DateInput(forms.DateInput): input_type = 'date' date_effet = forms.DateField(widget=forms.DateInput(format='%d-%m-%Y'), label='Date effet') the problem still remains My code model.py class TimesheetItem(models.Model): date = models.DateField() description = models.CharField(max_length=100) # many other fields here form.py class TimesheetItemForm(forms.ModelForm): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # init is used for other fields initialization and crispy forms class Meta: model = TimesheetItem fields = ['date', 'description'] widgets = { 'date': forms.DateInput( format=('%d/%m/%Y'), attrs={'class': 'form-control', 'placeholder': 'Select a date', 'type': 'date' # <--- IF I REMOVE THIS LINE, THE INITIAL VALUE IS DISPLAYED }), } view.py def edit(request, uuid): try: timesheet_entry_item = TimesheetItem.objects.get(uuid=uuid) if request.method == 'POST': form = TimesheetItemForm( data=request.POST, instance=timesheet_entry_item ) if form.is_valid(): pass # save the form else: form = TimesheetItemForm(initial={ 'date': timesheet_entry_item.date, # <--- the date here has a valid value 'description': timesheet_entry_item.description }) return render(request, 'template.html', {'form': form}) except ObjectDoesNotExist: raise Http404("error") Thanks for any help M.
I managed to make it work. Following the cause of the issue, I hope it can be useful to others. The HTML <input type='date'> element wants a date in the format YYYY-mm-dd; in fact an example of working HTML must be like this: <input type="date" name="date" value="2020-03-31" class="form-control dateinput form-control" placeholder="Select a date" required="" id="id_date"> Since by default the form.DateInput produces the element <input type='text'>, it expects a date in the local format: let's say '31/03/2020'. Forcing the 'type': 'date' and local format format=('%d/%m/%Y') or not passing a format at all, it ignores the value passed since the <input type='date'> wants format=('%Y-%m-%d') At last the correct instruction was: widgets = { 'date': forms.DateInput( format=('%Y-%m-%d'), attrs={'class': 'form-control', 'placeholder': 'Select a date', 'type': 'date' }), }
19
32
61,058,798
2020-4-6
https://stackoverflow.com/questions/61058798/python-relative-import-in-jupyter-notebook
Let's say I have the following structure: dir_1 ├── functions.py └── dir_2 └── code.ipynb In, code.ipynb, I simply want to access a function inside functions.py and tried this: from ..functions import some_function I get the error: attempted relative import with no known parent package I have checked a bunch of similar posts but not yet figured this out... I am running jupyter notebook from a conda env and my python version is 3.7.6.
In your notebook do: import os, sys dir2 = os.path.abspath('') dir1 = os.path.dirname(dir2) if not dir1 in sys.path: sys.path.append(dir1) from functions import some_function
14
10
61,052,890
2020-4-6
https://stackoverflow.com/questions/61052890/import-could-not-be-resolved-reported-by-pyright
I've just started using Pyright. Running it on files that run perfectly well I get plenty of errors. This question is similar, but refers to one's own modules. For example Import "numpy" could not be resolved. What does it mean, and how do I resolve it?
On my computer I have 3 Pythons, a 3.6 from Anaconda, and a 2.7 & 3.7 that are regular python. Prompted by a nudge from this GH issue, I switched from the Anaconda 3.6 to the 3.7, and back again, and the problem went away. I think that this is the case because your .vscode/settings.json (the following is mine), doesn't have that last line until you change your python, at which point, that last line is put in and Pyright has something to look at. { "python.linting.enabled": true, "python.formatting.provider": "black", "python.pythonPath": "C:\\Users\\ben\\Anaconda3\\python.exe" }
35
68
61,077,802
2020-4-7
https://stackoverflow.com/questions/61077802/how-to-use-a-datepicker-in-a-modelform-in-django
I am using django 3.0 and I am trying to display a datepicker widget in my ModelForm, but I can't figure out how (all I can get is text field). I have tried looking for some solutions, but couldn't find any. This is how my Model and my ModelForm look like: class Membership(models.Model): start_date = models.DateField(default=datetime.today, null=True) owner = models.ForeignKey(Client, on_delete=models.CASCADE, null=True) type = models.ForeignKey(MembershipType, on_delete=models.CASCADE, null=True) class MembershipForm(ModelForm): class Meta: model = Membership fields = ['owner', 'start_date', 'type'] widgets = { 'start_date': forms.DateInput } And this is my html: <form class="container" action="" method="POST"> {% csrf_token %} {{ form|crispy }} <button type="submit" class="btn btn-primary">Submit</button> </form>
This is the expected behavior. A DateInput widget [Django-doc] is just a <input type="text"> element with an optional format parameter. You can make use of a package, like for example django-bootstrap-datepicker-plus [pypi] , and then define a form with the DatePickerInput: from bootstrap_datepicker_plus import DatePickerInput class MembershipForm(ModelForm): class Meta: model = Membership fields = ['owner', 'start_date', 'type'] widgets = { 'start_date': DatePickerInput } In the template you will need to render the media of the form and load the bootstrap css and javascript: {% load bootstrap4 %} {% bootstrap_css %} {% bootstrap_javascript jquery='full' %} {{ form.media }} <form class="container" action="" method="POST"> {% csrf_token %} {{ form|crispy }} <button type="submit" class="btn btn-primary">Submit</button> </form>
7
0
61,041,214
2020-4-5
https://stackoverflow.com/questions/61041214/making-a-tqdm-progress-bar-for-asyncio
Am attempting a tqdm progress bar with asyncio tasks gathered. Want the progress bar to be progressively updated upon completion of a task. Tried the code: import asyncio import tqdm import random async def factorial(name, number): f = 1 for i in range(2, number+1): await asyncio.sleep(random.random()) f *= i print(f"Task {name}: factorial {number} = {f}") async def tq(flen): for _ in tqdm.tqdm(range(flen)): await asyncio.sleep(0.1) async def main(): # Schedule the three concurrently flist = [factorial("A", 2), factorial("B", 3), factorial("C", 4)] await asyncio.gather(*flist, tq(len(flist))) asyncio.run(main()) ...but this simply completes the tqdm bar and then processes factorials. Is there a way to make the progress bar move upon completion of each asyncio task?
Made a couple of small changes to Dragos' code in pbar format and used tqdm.write() to get almost what I want, as follows: import asyncio import random import tqdm async def factorial(name, number): f = 1 for i in range(2, number + 1): await asyncio.sleep(random.random()) f *= i return f"Task {name}: factorial {number} = {f}" async def tq(flen): for _ in tqdm.tqdm(range(flen)): await asyncio.sleep(0.1) async def main(): flist = [factorial("A", 2), factorial("B", 3), factorial("C", 4)] pbar = tqdm.tqdm(total=len(flist), position=0, ncols=90) for f in asyncio.as_completed(flist): value = await f pbar.set_description(desc=value, refresh=True) tqdm.tqdm.write(value) pbar.update() if __name__ == '__main__': asyncio.run(main())
22
4
61,071,271
2020-4-7
https://stackoverflow.com/questions/61071271/how-does-one-use-pytest-monkeypatch-to-patch-a-class
I would like to use pytest monkeypatch to mock a class which is imported into a separate module. Is this actually possible, and if so how does one do it? It seems like I have not seen an example for this exact situation. Suppose you have app with and imported class A in something.py from something import A #Class is imported class B : def __init__(self) : self.instance = A() #class instance is created def f(self, value) : return self.instance.g(value) inside my test.py I want to mock A inside B from something import B #this is where I would mock A such that def mock_A : def g(self, value) : return 2*value #Then I would call B c = B() print(c.g(2)) #would be 4 I see how monkeypatch can be used to patch instances of classes, but how is it done for classes that have not yet been instantiated? Is it possible? Thanks!
tested this, works for me: def test_thing(monkeypatch): def patched_g(self, value): return value * 2 monkeypatch.setattr(A, 'g', patched_g) b = B() assert b.f(2) == 4
13
8
61,061,435
2020-4-6
https://stackoverflow.com/questions/61061435/modulenotfounderror-no-module-named-jose
I am using python-social-auth in my django project to use social platforms for authentication in my project. It all worked well but am getting this error ModuleNotFoundError: No module named 'jose' This is the whole error: [05/Apr/2020 14:01:00] "GET /accounts/login/ HTTP/1.1" 200 3058 Internal Server Error: /login/twitter/ Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\social_core\backends\utils.py", line 50, in get_backend return BACKENDSCACHE[name] KeyError: 'twitter' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Ahmed\AppData\Roaming\Python\Python37\site-packages\django\core\handlers\exception.py", line 34, in inner response = get_response(request) File "C:\Users\Ahmed\AppData\Roaming\Python\Python37\site-packages\django\core\handlers\base.py", line 115, in _get_response response = self.process_exception_by_middleware(e, request) File "C:\Users\Ahmed\AppData\Roaming\Python\Python37\site-packages\django\core\handlers\base.py", line 113, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\Ahmed\AppData\Roaming\Python\Python37\site-packages\django\views\decorators\cache.py", line 44, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "C:\Program Files\Python37\lib\site-packages\social_django\utils.py", line 46, in wrapper backend, uri) File "C:\Program Files\Python37\lib\site-packages\social_django\utils.py", line 27, in load_backend Backend = get_backend(BACKENDS, name) File "C:\Program Files\Python37\lib\site-packages\social_core\backends\utils.py", line 53, in get_backend load_backends(backends, force_load=True) File "C:\Program Files\Python37\lib\site-packages\social_core\backends\utils.py", line 35, in load_backends backend = module_member(auth_backend) File "C:\Program Files\Python37\lib\site-packages\social_core\utils.py", line 62, in module_member module = import_module(mod) File "C:\Program Files\Python37\lib\site-packages\social_core\utils.py", line 56, in import_module __import__(name) File "C:\Program Files\Python37\lib\site-packages\social\backends\google.py", line 3, in <module> from social_core.backends.google_openidconnect import GoogleOpenIdConnect File "C:\Program Files\Python37\lib\site-packages\social_core\backends\google_openidconnect.py", line 5, in <module> from .open_id_connect import OpenIdConnectAuth File "C:\Program Files\Python37\lib\site-packages\social_core\backends\open_id_connect.py", line 6, in <module> from jose import jwk, jwt ModuleNotFoundError: No module named 'jose' [05/Apr/2020 14:01:06] "GET /login/twitter/?next=/accounts/login/ HTTP/1.1" 500 132103 I am pretty new to python and can't figure out what the issue is.
Install jose by running: pip install python-jose>=3.0.0
10
22
61,057,046
2020-4-6
https://stackoverflow.com/questions/61057046/list-of-dicts-to-multilevel-dict-based-on-depth-info
I have some data, more or less like this: [ {"tag": "A", "level":0}, {"tag": "B", "level":1}, {"tag": "D", "level":2}, {"tag": "F", "level":3}, {"tag": "G", "level":4}, {"tag": "E", "level":2}, {"tag": "H", "level":3}, {"tag": "I", "level":3}, {"tag": "C", "level":1}, {"tag": "J", "level":2}, ] I want to turn it into a multilevel dict based on depth level (key "level"): { "A": {"level": 0, "children": { "B": {"level": 1, "children": { "D": {"level": 2, "children": { "F": {"level": 3, "children": { "G": {"level": 4, "children": {}}}}}}, "E": {"level": 2, "children": { "H": {"level": 3, "children": {}}, "I": {"level": 3, "children": {}}}}}}, "C": {"level": 1, "children": { "J": {"level": 2, "children": {}}}}}} } All I can come up with right now is this little piece of code... which obviously breaks after few items: def list2multilevel(list): children = {} parent = list.pop(0) tag = parent.get("Tag") level = parent.get("Level") for child in list: ctag = child.get("Tag") clevel = child.get("Level") if clevel == level + 1: children.update(list2multilevel(list)) elif clevel <= level: print(clevel, level) break return {tag: children} Originally sat down to it on Friday and it was supposed to be just a small exercise....
data = [ {"tag": "A", "level": 0}, {"tag": "B", "level": 1}, {"tag": "D", "level": 2}, {"tag": "F", "level": 3}, {"tag": "G", "level": 4}, {"tag": "E", "level": 2}, {"tag": "H", "level": 3}, {"tag": "I", "level": 3}, {"tag": "C", "level": 1}, {"tag": "J", "level": 2}, ] root = {'level': -1, 'children': {}} parents = {-1: root} for datum in data: level = datum['level'] parents[level] = parents[level - 1]['children'][datum['tag']] = { 'level': datum['level'], 'children': {}, } result = root['children'] print(result) output: {'A': {'level': 0, 'children': {'B': {'level': 1, 'children': {'D': {'level': 2, 'children': {'F': {'level': 3, 'children': {'G': {'level': 4, 'children': {}}}}}}, 'E': {'level': 2, 'children': {'H': {'level': 3, 'children': {}}, 'I': {'level': 3, 'children': {}}}}}}, 'C': {'level': 1, 'children': {'J': {'level': 2, 'children': {}}}}}}} restriction: level >= 0 Any level cannot be bigger than +1 of max level appeared before. explanation: parents is a dictionary to remember last element for each level. root is a starting point(dummy element). logic: Start with -1 level which indicates the root. Make an item and register it into parent's children. Update same item to parents dictionary. Repeat. Extract root['children'].
9
6
61,049,310
2020-4-5
https://stackoverflow.com/questions/61049310/how-to-avoid-reloading-ml-model-every-time-when-i-call-python-script
I have two files, file1.py which have ML model size of 1GB and file2.py which calls get_vec() method from file1 and receives vectors in return. ML model is being loaded everytime when file1 get_vec() method is called. This is where it is taking lots of time (around 10s) to load the model from disk. I want to tell file1 somehow not to reload model every time but utilize loaded model from earlier calls. Sample code is as follows # File1.py import spacy nlp = spacy.load('model') def get_vec(post): doc = nlp(post) return doc.vector File2.py from File1 import get_vec df['vec'] = df['text'].apply(lambda x: get_vec(x)) So here, it is taking 10 to 12 seconds in each call. This seems small code but it is a part of a large project and I can not put both in the same file. Update1: I have done some research and came to know that I can use Redis to store model in cache first time it runs and thereafter I can read the model from cache directly. I tried it for testing with Redis as follows import spacy import redis nlp = spacy.load('en_core_web_lg') r = redis.Redis(host = 'localhost', port = 6379, db = 0) r.set('nlp', nlp) It throws an error DataError: Invalid input of type: 'English'. Convert to a bytes, string, int or float first. Seems, type(nlp) is English() and it need to convert in a suitable format. So I tried to use pickle as well to convert it. But again, pickle is taking lots of time in encoding and decoding. Is there anyway to store this in Redis? Can anybody suggest me how can I make it faster? Thanks.
Heres how to do it Step 1) create a function in python and load your model in that function model=None def load_model(): global model model = ResNet50(weights="imagenet") if you carefully observe first I assigned variable model to None. Then inside load_model function I loaded a model. Also I made sure the variable model is made global so that it can be accessed from outside this function. The intuition here is we load model object in a global variable. So that we can access this variable anywhere within the code. Now that we have our tools ready (i.e we can access the model from anywhere within this code ) lets freeze this model in your computers RAM. This is done by: if __name__ == "__main__": print(("* Loading Keras model and Flask starting server..." "please wait until server has fully started")) load_model() app.run() Now what's the use of freezing model in RAM without using it. So, to use it I use POST request in flask @app.route("/predict", methods=["POST"]) def predict(): if flask.request.method == "POST": output=model.predict(data) #what you want to do with frozen model goes here So using this trick you can freeze model in RAM, access it using a global variable. and then use it in your code.
10
11
61,054,415
2020-4-6
https://stackoverflow.com/questions/61054415/find-least-common-denominator-for-a-list-of-fractions-in-python
I have a list of fractions that I need to transform. from fractions import Fraction fractions_list=[Fraction(3,14),Fraction(1,7),Fraction(9,14)] The output should be a list with the numerators for each fraction, followed by the least common denominator for all of them. For above example the result (3/14, 2/14, 9/14) would be represented as follows [3,2,9,14] Is there an elegant solution for this? All I can think of involves a lot of intermediate lists to store some variables, and scales horribly.
import numpy as np fractions_list=[Fraction(3,14),Fraction(1,7),Fraction(9,14)] lcm = np.lcm.reduce([fr.denominator for fr in fractions_list]) vals = [int(fr.numerator * lcm / fr.denominator) for fr in fractions_list] vals.append(lcm)
8
11
61,049,744
2020-4-5
https://stackoverflow.com/questions/61049744/import-dataset-into-google-colab-from-another-drive-account
Lately I'm working on Google Colab. There is a dataset called CelebA that sits in a Google Drive account; the account is not mine, but I have access to browse it. Because of internet problems and drive capacity I cannot download the dataset and then upload it to my own drive... so the question is: is there any way to let Google Colab get access to this dataset, or some way to import the path? I have this function call below create_celebahq_cond_continuous('/content/drive/My Drive/kiki96/results/tfrecords','https://drive.google.com/open?id=0B7EVK8r0v71pWEZsZE9oNnFzTm8','https://drive.google.com/open?id=0B4qLcYyJmiz0TXY1NG02bzZVRGs',4,100,False) where I have tried to put the shareable link of the dataset, but it does not work. Please help
To download a file to Colab If you want to download the file directly into your Google Colab instance, then you can use gdown. Note that the file must be shared to the public. If the link to your dataset is https://drive.google.com/file/d/10vAwF6hFUjvw3pf6MmB_S0jZm9CLWbSx/view?usp=sharing, you can use: !gdown --id "10vAwF6hFUjvw3pf6MmB_S0jZm9CLWbSx" To download the file to your Drive Instead, if you want to download it to your drive then Mount your Google Drive from google.colab import drive drive.mount('/content/drive') Change the directory to a folder in your Google Drive cd '/content/drive/My Drive/datasets/' Download the file into your Google Drive folder !gdown --id "10vAwF6hFUjvw3pf6MmB_S0jZm9CLWbSx" To download a folder If you are trying to download a folder, follow these steps: Open the shared folder Click "Add shortcut to my drive" and select a folder Mount your Google Drive to Google Colab Go to the folder where you added the shortcut You can see the newly-added folder, being referenced by its Google Drive folder ID.
12
18
61,050,767
2020-4-5
https://stackoverflow.com/questions/61050767/how-to-force-zero-0-to-the-center-of-an-axis-in-matplotlib
I'm trying to plot percent change data and would like to plot it such that the y axis is symmetric about 0. i.e. 0 is in the center of the axis. import matplotlib.pyplot as plt import pandas as pd data = pd.DataFrame([1,2,3,4,3,6,7,8], columns=['Data']) data['PctChange'] = data['Data'].pct_change() data['PctChange'].plot() This is different from How to draw axis in the middle of the figure?. The goal here is not to move the x axis, but rather, change the limits of the y axis such that the zero is in the center. Specifically in a programmatic way that changes in relation to the data.
After plotting the data find the maximum absolute value between the min and max axis values. Then set the min and max limits of the axis to the negative and positive (respectively) of that value. import matplotlib.pyplot as plt import pandas as pd data = pd.DataFrame([1,2,3,4,3,6,7,8], columns=['Data']) data['PctChange'] = data['Data'].pct_change() ax = data['PctChange'].plot() yabs_max = abs(max(ax.get_ylim(), key=abs)) ax.set_ylim(ymin=-yabs_max, ymax=yabs_max)
11
16
61,047,555
2020-4-5
https://stackoverflow.com/questions/61047555/indirect-fixture-error-using-pytest-what-is-wrong
def fatorial(n): if n <= 1: return 1 else: return n*fatorial(n - 1) import pytest @pytest.mark.parametrize("entrada","esperado",[ (0,1), (1,1), (2,2), (3,6), (4,24), (5,120) ]) def testa_fatorial(entrada,esperado): assert fatorial(entrada) == esperado The error: ERROR collecting Fatorial_pytest.py ____________________________________________________________________ In testa_fatorial: indirect fixture '(0, 1)' doesn't exist I don't know why I got "indirect fixture". Any idea? I am using Python 3.7 and Windows 10 64-bit.
TL;DR - The problem is with the line @pytest.mark.parametrize("entrada","esperado",[ ... ]) It should be written as a comma-separated string: @pytest.mark.parametrize("entrada, esperado",[ ... ]) You got the indirect fixture because pytest couldn't unpack the given argvalues since it got a wrong argnames parameter. You need to make sure all parameters are written as one string. Please see the documentation: The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Parameters: 1. argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings. 2. argvalues – The list of argvalues determines how often a test is invoked with different argument values. Meaning, you should write the arguments you want to parametrize as a single string and separate them using a comma. Therefore, your test should look like this: @pytest.mark.parametrize("n, expected", [ (0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120) ]) def test_factorial(n, expected): assert factorial(n) == expected
62
164
60,962,274
2020-4-1
https://stackoverflow.com/questions/60962274/plotly-how-to-change-the-colorscheme-of-a-plotly-express-scatterplot
I am trying to work with plotly, specifically ploty express, to build a few visualizations. One of the things I am building is a scatterplot I have some code below, that produces a nice scatterplot: import plotly.graph_objs as go, pandas as pd, plotly.express as px df = pd.read_csv('iris.csv') fig = px.scatter(df, x='sepal_length', y='sepal_width', color='species', marker_colorscale=px.colors.sequential.Viridis) fig.show() However, I want to try and change the colorscheme, i.e., the colors presented for each species. I have read: https://plotly.com/python/builtin-colorscales/ https://plotly.com/python/colorscales/ https://plotly.com/python/v3/colorscales/ But can not get the colors to change. Trying: fig = px.scatter(df, x='sepal_length', y='sepal_width', color='species', marker_colorscale=px.colors.sequential.Viridis) yields: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-78a9d58dce23> in <module> 2 # https://plotly.com/python/line-and-scatter/ 3 fig = px.scatter(df, x='sepal_length', y='sepal_width', ----> 4 color='species', marker_colorscale=px.colors.sequential.Viridis) 5 fig.show() TypeError: scatter() got an unexpected keyword argument 'marker_colorscale' Trying Trying: fig = px.scatter(df, x='sepal_length', y='sepal_width', color='species', continuous_colorscale=px.colors.sequential.Viridis) yields: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-78a9d58dce23> in <module> 2 # https://plotly.com/python/line-and-scatter/ 3 fig = px.scatter(df, x='sepal_length', y='sepal_width', ----> 4 color='species', continuous_colorscale=px.colors.sequential.Viridis) 5 fig.show() TypeError: scatter() got an unexpected keyword argument 'continuous_colorscale' How can I change the colors used in a plotly visualization?
Generally, changing the color scheme for a plotly express figure is very straight-forward. What's causing the problems here is the fact that species is a categorical variable. Continuous or numerical values are actually easier, but we'll get to that in a bit. For categorical values, using color_discrete_map is a perfectly valid, albeit cumbersome approach. I prefer using the keyword argument continuous_colorscale in combination with px.colors.qualitative.Antique, where Antique can be changed to any of the discrete color schemes available in plotly express. Just run dir(px.colors.qualitative) to see what are available to you in the plotly version you are running: ['Alphabet', 'Antique', 'Bold', 'D3', 'Dark2', 'Dark24', 'G10',......] Code 1: import plotly.express as px df = px.data.iris() fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", color_discrete_sequence=px.colors.qualitative.Antique) fig.show() Plot 1: So what about continuous variables? Consider the following snippet: import plotly.express as px df = px.data.iris() fig = px.scatter(df, x="sepal_width", y="sepal_length", color="sepal_length", color_continuous_scale=px.colors.sequential.Viridis) fig.show() Running this will produce this plot: You can change the colors to any other theme available under dir(px.colors.sequential), for example color_continuous_scale=px.colors.sequential.Inferno, and get this plot: What's possibly causing confusion here, is that setting color='species, and keeping color_continuous_scale=px.colors.sequential.Inferno will give you this plot: The figure now jumps straight back to using the default plotly colors, without giving you any warning about color_continuous_scale=px.colors.sequential.Inferno not having an effect. This is because species is a categorical variable with these different values : ['setosa', 'versicolor', 'virginica'], so color_continuous_scale is simply ignored. For color_continuous_scale to take effect you'll have to use a numerical value, like sepal_length = [5.1, 4.9, 4.7, 4.6, 5. , 5.4, ...] And this brings us right back to my initial answer for categorical values: Use the keyword argument continuous_colorscale in combination with px.colors.qualitative
26
35
61,020,313
2020-4-3
https://stackoverflow.com/questions/61020313/is-there-a-way-to-add-autofilter-to-all-columns-using-xlsxwriter-without-specify
I have a dataframe which I am writing to excel using xlsxwriter and I want there to be autofilter applied to all columns where the header is not blank in my spreadsheet without having to specify a range (e.g. A1:D1). Is there any way to do this?
You will need to specify the range in some way but you can do it programatically based on the shape() of the data frame. For example: import xlsxwriter import pandas as pd df = pd.DataFrame({'A' : [1, 2, 3, 4, 5, 6, 7, 8], 'B' : [1, 2, 3, 4, 5, 6, 7, 8], 'C' : [1, 2, 3, 4, 5, 6, 7, 8], 'D' : [1, 2, 3, 4, 5, 6, 7, 8]}) writer = pd.ExcelWriter('test.xlsx', engine = 'xlsxwriter') # Convert the dataframe to an XlsxWriter Excel object. df.to_excel(writer, sheet_name='Sheet1') # Get the xlsxwriter objects from the dataframe writer object. workbook = writer.book worksheet = writer.sheets['Sheet1'] # Apply the autofilter based on the dimensions of the dataframe. worksheet.autofilter(0, 0, df.shape[0], df.shape[1]) workbook.close() writer.save() Output:
9
17
61,024,263
2020-4-4
https://stackoverflow.com/questions/61024263/python-logging-does-not-log-pd-info
import logging import pandas as pd logger = logging.getLogger('train') logger.setLevel(logging.DEBUG) # Data data = {'Name': ['Tom', 'nick', 'krish', 'jack'], 'Age': [20, 21, 19, 18]} # Create DataFrame df = pd.DataFrame(data) logger.info(type(df)) logger.info(df.info()) . . . <other_processes> . The above code outputs: <class 'pandas.core.frame.DataFrame'> None . . . And at the end of the logs (after all other processes), it also outputs: <class 'pandas.core.frame.DataFrame'> RangeIndex: 4 entries, 0 to 3 Data columns (total 2 columns): Name 4 non-null object Age 4 non-null int64 dtypes: int64(1), object(1) memory usage: 144.0+ bytes Why does it print None when I try to log df.info()? How can I get df.info() at the intended location in my logs?
Pass a StringIO object as the buf parameter of DataFrame.info, then read the captured text with .getvalue(): from io import StringIO buf = StringIO() df.info(buf=buf) logger.info(type(df)) logger.info(buf.getvalue())
9
11
61,022,248
2020-4-4
https://stackoverflow.com/questions/61022248/i-can%c2%b4t-install-anaconda-on-linux
When I try to install Anaconda on Linux, I get to this point: Anaconda3 will now be installed into this location: /home/jorge/anaconda3 - Press ENTER to confirm the location - Press CTRL-C to abort the installation - Or specify a different location below [/home/jorge/anaconda3] >>> PREFIX=/home/jorge/anaconda3 Unpacking payload ... Then I receive the following error message: concurrent.futures.process._RemoteTraceback: ''' Traceback (most recent call last): File "concurrent/futures/process.py", line 367, in _queue_management_worker File "multiprocessing/connection.py", line 251, in recv TypeError: __init__() missing 1 required positional argument: 'msg' ''' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "entry_point.py", line 69, in <module> File "concurrent/futures/process.py", line 483, in _chain_from_iterable_of_lists File "concurrent/futures/_base.py", line 598, in result_iterator File "concurrent/futures/_base.py", line 435, in result File "concurrent/futures/_base.py", line 384, in __get_result concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. [1770] Failed to execute script entry_point What can I do? I was following all the instructions step by step
Did you verify the integrity of the installer's data? This error commonly happens when the download is corrupted or incomplete, so checking the file before executing the script is the step you should take to make sure it is OK. This post helped me a lot the first time I installed it. https://www.digitalocean.com/community/tutorials/how-to-install-anaconda-on-ubuntu-18-04-quickstart
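For reference, a minimal Python sketch of such an integrity check. The installer filename below is only a placeholder for whichever .sh file was actually downloaded; compare the printed digest with the one published on Anaconda's hashes page.

import hashlib

# Placeholder name -- use the installer file you actually downloaded
installer = "Anaconda3-2020.02-Linux-x86_64.sh"

sha256 = hashlib.sha256()
with open(installer, "rb") as f:
    # Read in 1 MB chunks so the large installer does not need to fit in memory
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

# A mismatch with the published hash means the download is corrupted or incomplete
print(sha256.hexdigest())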
11
6
61,016,110
2020-4-3
https://stackoverflow.com/questions/61016110/plot-multiple-confusion-matrices-with-plot-confusion-matrix
I am using plot_confusion_matrix from sklearn.metrics. I want to represent those confusion matrices next to each other like subplots, how could I do this?
Let's use the good'ol iris dataset to reproduce this, and fit several classifiers to plot their respective confusion matrices with plot_confusion_matrix: from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from matplotlib import pyplot as plt from sklearn.datasets import load_iris from sklearn.metrics import plot_confusion_matrix data = load_iris() X = data.data y = data.target Set up - X_train, X_test, y_train, y_test = train_test_split(X, y) classifiers = [LogisticRegression(solver='lbfgs'), AdaBoostClassifier(), GradientBoostingClassifier(), SVC()] for cls in classifiers: cls.fit(X_train, y_train) So the way you could compare all matrices at simple sight, is by creating a set of subplots with plt.subplots. Then iterate both over the axes objects and the trained classifiers (plot_confusion_matrix expects the as input) and plot the individual confusion matrices: fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15,10)) for cls, ax in zip(classifiers, axes.flatten()): plot_confusion_matrix(cls, X_test, y_test, ax=ax, cmap='Blues', display_labels=data.target_names) ax.title.set_text(type(cls).__name__) plt.tight_layout() plt.show()
7
26
60,983,836
2020-4-2
https://stackoverflow.com/questions/60983836/complete-set-of-punctuation-marks-for-python-not-just-ascii
Is there a listing or library that has all punctuations that we might commonly come across? Normally I use string.punctuation, but some punctuation characters are not included in it, for example: >>> "'" in string.punctuation True >>> "’" in string.punctuation False
You might do better with this check: >>> import unicodedata >>> unicodedata.category("'").startswith("P") True >>> unicodedata.category("’").startswith("P") True The Unicode categories P* are specifically for Punctuation: connector (Pc), dash (Pd), initial quote (Pi), final quote (Pf), open (Ps), close (Pe), other (Po) To prepare the exhaustive collection, which you can subsequently use for fast membership checks, use a set comprehension: >>> import sys >>> from unicodedata import category >>> codepoints = range(sys.maxunicode + 1) >>> punctuation = {c for i in codepoints if category(c := chr(i)).startswith("P")} >>> "'" in punctuation True >>> "’" in punctuation True Assignment expression here requires Python 3.8+, equivalent for older Python versions: chrs = (chr(i) for i in range(sys.maxunicode + 1)) punctuation = set(c for c in chrs if category(c).startswith("P")) Beware that some of the other characters in string.punctuation are actually in Unicode category Symbol. It's easy to add those in also if you want.
41
63
61,008,937
2020-4-3
https://stackoverflow.com/questions/61008937/python-round-to-next-highest-power-of-10
How would I manage to perform math.ceil such that a number is assigned to the next highest power of 10? # 0.04 -> 0.1 # 0.7 -> 1 # 1.1 -> 10 # 90 -> 100 # ... My current solution is a dictionary that checks the range of the input number, but it's hardcoded and I would prefer a one-liner solution. Maybe I am missing a simple mathematical trick or a corresponding numpy function here?
You can use math.ceil with math.log10 to do this: >>> 10 ** math.ceil(math.log10(0.04)) 0.1 >>> 10 ** math.ceil(math.log10(0.7)) 1 >>> 10 ** math.ceil(math.log10(1.1)) 10 >>> 10 ** math.ceil(math.log10(90)) 100 log10(n) gives you the solution x that satisfies 10 ** x == n, so if you round up x it gives you the exponent for the next highest power of 10. Note that for a value n where x is already an integer, the "next highest power of 10" will be n: >>> 10 ** math.ceil(math.log10(0.1)) 0.1 >>> 10 ** math.ceil(math.log10(1)) 1 >>> 10 ** math.ceil(math.log10(10)) 10
52
70
60,969,101
2020-4-1
https://stackoverflow.com/questions/60969101/how-to-build-a-population-pyramid-with-python
I'm trying to build a population pyramid from a pandas df using seaborn. The problem is that some data isn't displayed. As you can see from the plot I created there's some missing data. The Y-axis ticks are 21 and the df's age classes are 21 so why don't they match? What am I missing? Here's the code I wrote: import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns df = pd.DataFrame({'Age': ['0-4','5-9','10-14','15-19','20-24','25-29','30-34','35-39','40-44','45-49','50-54','55-59','60-64','65-69','70-74','75-79','80-84','85-89','90-94','95-99','100+'], 'Male': [-49228000, -61283000, -64391000, -52437000, -42955000, -44667000, -31570000, -23887000, -22390000, -20971000, -17685000, -15450000, -13932000, -11020000, -7611000, -4653000, -1952000, -625000, -116000, -14000, -1000], 'Female': [52367000, 64959000, 67161000, 55388000, 45448000, 47129000, 33436000, 26710000, 25627000, 23612000, 20075000, 16368000, 14220000, 10125000, 5984000, 3131000, 1151000, 312000, 49000, 4000, 0]}) AgeClass = ['100+','95-99','90-94','85-89','80-84','75-79','70-74','65-69','60-64','55-59','50-54','45-49','40-44','35-39','30-34','25-29','20-24','15-19','10-14','5-9','0-4'] bar_plot = sns.barplot(x='Male', y='Age', data=df, order=AgeClass) bar_plot = sns.barplot(x='Female', y='Age', data=df, order=AgeClass) bar_plot.set(xlabel="Population (hundreds of millions)", ylabel="Age-Group", title = "Population Pyramid")
As explained by JohanC, the data is not missing, it's just very small compared to the other bars. Another factor is that you seem to have a white border around each of your bars, which hides the very small bars at the top. Try putting lw=0 in your call to barplot. This is what I am getting: bar_plot = sns.barplot(x='Male', y='Age', data=df, order=AgeClass, lw=0) bar_plot = sns.barplot(x='Female', y='Age', data=df, order=AgeClass, lw=0)
8
7
60,996,205
2020-4-2
https://stackoverflow.com/questions/60996205/python-datetime-timezone-conversion-off-by-4-minutes
When I run this code: #!/usr/bin/env python3 from datetime import datetime, timedelta from dateutil import tz from pytz import timezone time = "2020-01-15 10:14:00" time = datetime.strptime(time, "%Y-%m-%d %H:%M:%S") print("time1 = " + str(time)) time = time.replace(tzinfo=timezone('America/New_York')) print("time2 = " + str(time)) time = time.astimezone(tz.gettz('UTC')) # explicitly convert to UTC time print("time3 = " + str(time)) time = datetime.strftime(time, "%Y-%m-%d %H:%M:%S") # output format print("done time4 = " + str(time)) I get this output: time1 = 2020-01-15 10:14:00 time2 = 2020-01-15 10:14:00-04:56 time3 = 2020-01-15 15:10:00+00:00 done time4 = 2020-01-15 15:10:00 I would have expected the final time to be "2020-01-15 15:14:00". Does anyone have any ideas why it's off by 4 minutes? I don't understand why the offset in time2 would be "-04:56" instead of "-05:00"
From pytz documentation: This library differs from the documented Python API for tzinfo implementations; if you want to create local wallclock times you need to use the localize() method documented in this document. In addition, if you perform date arithmetic on local times that cross DST boundaries, the result may be in an incorrect timezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get 2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). So, you are using pytz incorrectly. Following is both correct, and erroneous code Following code shows results of your use of pytz (datetime.replace(tzinfo=pytz.timezone)), and the recommended way of using pytz with datetime (pytz.timezone.localize(datetime)). from datetime import datetime, date, time, timezone from dateutil import tz import pytz d = date(2019, 1, 27) t = time(19, 32, 00) t1 = datetime.combine(d, t) t1_epoch = t1.timestamp() print("t1_epoch " + str(t1_epoch)) print("t1 " + str(t1)) # your approach/code nytz = pytz.timezone('America/New_York') t3 = t1.replace(tzinfo=nytz) t3_epoch = t3.timestamp() print("t3_epoch " + str(t3_epoch)) print("t3 " + str(t3)) # recommended approach/code using localize nytz = pytz.timezone('America/New_York') t6 = nytz.localize(t1) t6_epoch = t6.timestamp() print("t6_epoch " + str(t6_epoch)) print("t6 " + str(t6)) Output of above code: t1_epoch 1548617520.0 t1 2019-01-27 19:32:00 t3_epoch 1548635280.0 t3 2019-01-27 19:32:00-04:56 t6_epoch 1548635520.0 t6 2019-01-27 19:32:00-05:00 t3 is what you are doing, and it is giving incorrect offset (-4:56). Note that POSIX time is also incorrect in this case. POSIX time, by definition, does not change with timezone. t6 has been created using pytz.timezone.localize() method, and gives correct UTC offset (-5:00). Update: Updated language of the answer as one user found the answer confusing.
12
16
61,008,229
2020-4-3
https://stackoverflow.com/questions/61008229/flatten-a-list-of-lists-containing-single-strings-to-a-list-of-ints
I have a list of lists of strings that looks like allyears #[['1916'], ['1919'], ['1922'], ['1912'], ['1924'], ['1920']] I need to have output like this: #[1916, 1919, 1922, 1912, 1924, 1920] I have been trying this: for i in range(0, len(allyears)): allyears[i] = int(allyears[i]) but I get the error >>> TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
You can simply do this: allyears = [int(i[0]) for i in allyears] Because every element in your allyears is a list with only one element, I take that element with i[0]. The error is because you can't convert a list to an int
11
9
61,006,189
2020-4-3
https://stackoverflow.com/questions/61006189/can-i-define-functions-other-than-fixtures-in-conftest-py
Can I define functions other than fixtures in conftest.py? If I have a function add() defined in conftest.py, can I use the function inside the test_add.py file by just calling add()?
There are two possible ways. You could just import the function add from conftest.py, or move it somewhere more appropriate (for example utils.py). Or you could create a fixture that returns a function and use it in your tests. Something like this. conftest.py import pytest @pytest.fixture def add(): def inner_add(x, y): return x + y return inner_add test_all.py def test_all(add): assert add(1, 2) == 3 I cannot imagine a situation where that would be a good solution, but I am sure one exists.
10
8
61,001,812
2020-4-2
https://stackoverflow.com/questions/61001812/pip-cant-find-django-3-x-error-no-matching-distribution-found-for-django-3
When I run: pip3 install django==3.0.5 I get the error ERROR: Could not find a version that satisfies the requirement django==3.0.5 (from versions: 1.1.3, ... 2.2.11, 2.2.12) ERROR: No matching distribution found for django==3.0.4 I need to update some references somewhere, but I am not sure how. Please help.
As noted in the comments, Django 3.x is only available for Python 3.6 or greater. If you attempt to install Django 3 while using an older version of Python (e.g. Python 3.5 in the case of the OP), pip will be unable to find a matching package. The solution is to simply upgrade to a more modern version of Python.
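A quick way to confirm whether the interpreter is the culprit, sketched here under the assumption that pip3 and python3 point at the same installation:

import sys

# Django 3.x requires Python 3.6 or newer
print(sys.version)
if sys.version_info < (3, 6):
    print("This interpreter is too old for Django 3.x -- upgrade Python first.")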
11
10
60,996,892
2020-4-2
https://stackoverflow.com/questions/60996892/how-to-replace-loss-function-during-training-tensorflow-keras
I want to replace the loss function related to my neural network during training, this is the network: model = tensorflow.keras.models.Sequential() model.add(tensorflow.keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=input_shape)) model.add(tensorflow.keras.layers.Conv2D(64, (3, 3), activation="relu")) model.add(tensorflow.keras.layers.MaxPooling2D(pool_size=(2, 2))) model.add(tensorflow.keras.layers.Dropout(0.25)) model.add(tensorflow.keras.layers.Flatten()) model.add(tensorflow.keras.layers.Dense(128, activation="relu")) model.add(tensorflow.keras.layers.Dropout(0.5)) model.add(tensorflow.keras.layers.Dense(output_classes, activation="softmax")) model.compile(loss=tensorflow.keras.losses.categorical_crossentropy, optimizer=tensorflow.keras.optimizers.Adam(0.001), metrics=['accuracy']) history = model.fit(x_train, y_train, batch_size=128, epochs=5, validation_data=(x_test, y_test)) so now I want to change tensorflow.keras.losses.categorical_crossentropy with another, so I made this: model.compile(loss=tensorflow.keras.losses.mse, optimizer=tensorflow.keras.optimizers.Adam(0.001), metrics=['accuracy']) history = model.fit(x_improve, y_improve, epochs=1, validation_data=(x_test, y_test)) #FIXME bug during training but I have this error: ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0']. Why? How can I fix it? There is another way to change loss function? Thanks
So, a straightforward answer I would give is: switch to pytorch if you want to play this kind of games. Since in pytorch you define your training and evaluation functions, it takes just an if statement to switch from a loss function to another one. Also, I see in your code that you want to switch from cross_entropy to mean_square_error, the former is suitable for classification the latter for regression, so this is not really something you can do, in the code that follows I switched from mean squared error to mean squared logarithmic error, which are both loss suitable for regression. Despite other answers offers solutions to your question (see change-loss-function-dynamically-during-training) it is not clear wether you can trust or not the results. Some people found that even with a customised function sometimes Keras keep training with the first loss. Solution: My solution is based on train_on_batch, which allows us to train a model in a for loop and therefore stop training it whenever we prefer to recompile the model with a new loss function. Please note that recompiling the model does not reset the weights (see:Does recompiling a model re-initialize the weights?). The dataset can be found here Boston housing dataset # Regression Example With Boston Dataset: Standardized and Larger from pandas import read_csv from keras.models import Sequential from keras.layers import Dense from sklearn.model_selection import train_test_split from keras.losses import mean_squared_error, mean_squared_logarithmic_error from matplotlib import pyplot import matplotlib.pyplot as plt # load dataset dataframe = read_csv("housing.csv", delim_whitespace=True, header=None) dataset = dataframe.values # split into input (X) and output (Y) variables X = dataset[:,0:13] y = dataset[:,13] trainX, testX, trainy, testy = train_test_split(X, y, test_size=0.33, random_state=42) # create model model = Sequential() model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu')) model.add(Dense(6, kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal')) batch_size = 25 # have to define manually a dict to store all epochs scores history = {} history['history'] = {} history['history']['loss'] = [] history['history']['mean_squared_error'] = [] history['history']['mean_squared_logarithmic_error'] = [] history['history']['val_loss'] = [] history['history']['val_mean_squared_error'] = [] history['history']['val_mean_squared_logarithmic_error'] = [] # first compiling with mse model.compile(loss='mean_squared_error', optimizer='adam', metrics=[mean_squared_error, mean_squared_logarithmic_error]) # define number of iterations in training and test train_iter = round(trainX.shape[0]/batch_size) test_iter = round(testX.shape[0]/batch_size) for epoch in range(2): # train iterations loss, mse, msle = 0, 0, 0 for i in range(train_iter): start = i*batch_size end = i*batch_size + batch_size batchX = trainX[start:end,] batchy = trainy[start:end,] loss_, mse_, msle_ = model.train_on_batch(batchX,batchy) loss += loss_ mse += mse_ msle += msle_ history['history']['loss'].append(loss/train_iter) history['history']['mean_squared_error'].append(mse/train_iter) history['history']['mean_squared_logarithmic_error'].append(msle/train_iter) # test iterations val_loss, val_mse, val_msle = 0, 0, 0 for i in range(test_iter): start = i*batch_size end = i*batch_size + batch_size batchX = testX[start:end,] batchy = testy[start:end,] val_loss_, val_mse_, val_msle_ = model.test_on_batch(batchX,batchy) 
val_loss += val_loss_ val_mse += val_mse_ val_msle += msle_ history['history']['val_loss'].append(val_loss/test_iter) history['history']['val_mean_squared_error'].append(val_mse/test_iter) history['history']['val_mean_squared_logarithmic_error'].append(val_msle/test_iter) # recompiling the model with new loss model.compile(loss='mean_squared_logarithmic_error', optimizer='adam', metrics=[mean_squared_error, mean_squared_logarithmic_error]) for epoch in range(2): # train iterations loss, mse, msle = 0, 0, 0 for i in range(train_iter): start = i*batch_size end = i*batch_size + batch_size batchX = trainX[start:end,] batchy = trainy[start:end,] loss_, mse_, msle_ = model.train_on_batch(batchX,batchy) loss += loss_ mse += mse_ msle += msle_ history['history']['loss'].append(loss/train_iter) history['history']['mean_squared_error'].append(mse/train_iter) history['history']['mean_squared_logarithmic_error'].append(msle/train_iter) # test iterations val_loss, val_mse, val_msle = 0, 0, 0 for i in range(test_iter): start = i*batch_size end = i*batch_size + batch_size batchX = testX[start:end,] batchy = testy[start:end,] val_loss_, val_mse_, val_msle_ = model.test_on_batch(batchX,batchy) val_loss += val_loss_ val_mse += val_mse_ val_msle += msle_ history['history']['val_loss'].append(val_loss/test_iter) history['history']['val_mean_squared_error'].append(val_mse/test_iter) history['history']['val_mean_squared_logarithmic_error'].append(val_msle/test_iter) # Some plots to check what is going on # loss function pyplot.subplot(311) pyplot.title('Loss') pyplot.plot(history['history']['loss'], label='train') pyplot.plot(history['history']['val_loss'], label='test') pyplot.legend() # Only mean squared error pyplot.subplot(312) pyplot.title('Mean Squared Error') pyplot.plot(history['history']['mean_squared_error'], label='train') pyplot.plot(history['history']['val_mean_squared_error'], label='test') pyplot.legend() # Only mean squared logarithmic error pyplot.subplot(313) pyplot.title('Mean Squared Logarithmic Error') pyplot.plot(history['history']['mean_squared_logarithmic_error'], label='train') pyplot.plot(history['history']['val_mean_squared_logarithmic_error'], label='test') pyplot.legend() plt.tight_layout() pyplot.show() The resulting plot confirm that the loss function is changing after the second epoch: The drop in the loss function is due to the fact that the model is switching from normal mean squared error to the logarithmic one, which has much lower values. Printing the scores also prove that the used loss truly changed: print(history['history']['loss']) [599.5209197998047, 570.4041115897043, 3.8622902120862688, 2.1578191178185597] print(history['history']['mean_squared_error']) [599.5209197998047, 570.4041115897043, 510.29034205845426, 425.32058388846264] print(history['history']['mean_squared_logarithmic_error']) [8.624503476279122, 6.346359729766846, 3.8622902120862688, 2.1578191178185597] In the first two epochs the values of loss are equal to ones of mean_square_error and during the third and fourth epochs the values becomes equal to the ones of mean_square_logarithmic_error, which is the new loss that was set. So it seems that using train_on_batch allows to change loss function, nevertheless I want to stress out again that this is basically what one should do on pytoch to achieve the same results, with the difference that the behaviour of pytorch (in this scenario and in my opinion) is more reliable.
8
5
60,999,816
2020-4-2
https://stackoverflow.com/questions/60999816/argparse-not-parsing-boolean-arguments
I am trying to make a build script like this: import glob import os import subprocess import re import argparse import shutil def create_parser(): parser = argparse.ArgumentParser(description='Build project') parser.add_argument('--clean_logs', type=bool, default=True, help='If true, old debug logs will be deleted.') parser.add_argument('--run', type=bool, default=True, help="If true, executable will run after compilation.") parser.add_argument('--clean_build', type=bool, default=False, help="If true, all generated files will be deleted and the" " directory will be reset to a pristine condition.") return parser.parse_args() def main(): parser = create_parser() print(parser) However no matter how I try to pass the argument I only get the default values. I always get Namespace(clean_build=False, clean_logs=True, run=True). I have tried: python3 build.py --run False python3 build.py --run=FALSE python3 build.py --run FALSE python3 build.py --run=False python3 build.py --run false python3 build.py --run 'False' It's always the same thing. What am I missing?
You are misunderstanding how argparse handles boolean arguments. Basically you should use action='store_true' or action='store_false' instead of the default value, with the understanding that not specifying the argument will give you the opposite of the action, e.g. parser.add_argument('-x', action='store_true') will cause: python3 command -x to have x set to True and python3 command to have x set to False. While action='store_false' will do the opposite. Setting bool as type does not behave as you expect and this is a known issue. The reason for the current behavior is that type is expected to be a callable which is used as argument = type(argument). bool('False') evaluates to True, so you need to set a different type for the behavior you expect to happen.
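As an illustration of how the flags from the question could be declared this way (the --skip-clean-logs spelling is just an invented example for a flag whose default should stay True):

import argparse

parser = argparse.ArgumentParser(description='Build project')
# Absent -> False, present -> True
parser.add_argument('--run', action='store_true',
                    help='If given, the executable will run after compilation.')
# Absent -> True, present -> False (hypothetical inverse flag)
parser.add_argument('--skip-clean-logs', dest='clean_logs', action='store_false',
                    help='If given, old debug logs will be kept.')

print(parser.parse_args(['--run']))              # Namespace(clean_logs=True, run=True)
print(parser.parse_args(['--skip-clean-logs']))  # Namespace(clean_logs=False, run=False)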
12
21
60,999,164
2020-4-2
https://stackoverflow.com/questions/60999164/is-the-ordering-of-pathlibs-glob-method-consistent-between-runs
Will Path('.').glob('*.ext') produce consistent ordering of results (assuming the files being globbed don't change)? It seems the glob ordering is based on the file system order (at least, for the old glob package). Will pathlib's glob order be changed by adding files to the directory (which will not be included in the glob)? Will this order be changed by the file system even if nothing is added to the specific directory (e.g., when other large file changes are made elsewhere on the system)? Over the course of several days? Or will the ordering remain consistent in all these cases? Just to clarify, I can't simple convert to a list and sort as there are too many file paths to fit into memory simultaneously. I'm hoping to achieve the same order each time as I will be doing some ML training, and want to set aside every nth file as validation data. This training will take several days, which is why I'm interested to know if the order remains stable over long times on the file system.
Checking the source code for the pathlib module, by chance, the latest commit points us directly to the relevant place: Use os.scandir() as context manager in Path.glob(). So under the hood Path.glob uses os.scandir to get the directory entries. The docs of this function report that the results are unordered: Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path. The entries are yielded in arbitrary order, and the special entries '.' and '..' are not included. (emphasis mine)
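Given that the order is arbitrary, one way to get a stable train/validation split without sorting everything in memory is to decide per file based only on its name — a minimal sketch, where the every_nth value and the crc32 choice are just illustrative assumptions:

import zlib
from pathlib import Path

def is_validation(path, every_nth=10):
    # Depends only on the file name, so the decision is stable across runs
    # regardless of the order in which the file system yields entries.
    return zlib.crc32(path.name.encode('utf-8')) % every_nth == 0

val_paths = (p for p in Path('.').glob('*.ext') if is_validation(p))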
11
10
60,995,830
2020-4-2
https://stackoverflow.com/questions/60995830/debug-python-code-thats-in-a-pip-package-in-vs-code
My debugging is setup in VS code, I can hit break points in the file I'm running via the launch.json config, but I can't get breakpoints in packages that are installed with PIP. How do I get breakpoints in these package files?
If you know where you python installation is on your computer do this: Know where you python packets were installed. File -> add folder to workspace Add the breakpoints where necessary. As an alternative i would advise to create a virtual environment and do the same thing but with the safety of virtual environment. Hope that helps.
12
8
60,989,409
2020-4-2
https://stackoverflow.com/questions/60989409/telethon-leads-to-runtimewarning-coroutine-messagemethods-send-message-was-n
I'm trying to run this first code snippet provided by the Telethon documentation. But, after multiple problems (here and here), I ended up with this modified version: import os import sys from telethon.sync import TelegramClient, events # import nest_asyncio # nest_asyncio.apply() session_name = "<session_name>" api_id = <api_id> api_hash = "<api_hash>" os.chdir(sys.path[0]) if f"{session_name}.session" in os.listdir(): os.remove(f"{session_name}.session") async with TelegramClient(session_name, api_id, api_hash) as client: client.send_message('me', 'Hello, myself!') print(client.download_profile_photo('me')) @client.on(events.NewMessage(pattern='(?i).*Hello')) async def handler(event): await event.reply('Hey!') client.run_until_disconnected() However now I'm getting these warnings: usr/local/lib/python3.7/site-packages/ipykernel_launcher.py:23: RuntimeWarning: coroutine 'MessageMethods.send_message' was never awaited RuntimeWarning: Enable tracemalloc to get the object allocation traceback /usr/local/lib/python3.7/site-packages/ipykernel_launcher.py:24: RuntimeWarning: coroutine 'DownloadMethods.download_profile_photo' was never awaited RuntimeWarning: Enable tracemalloc to get the object allocation traceback /usr/local/lib/python3.7/site-packages/ipykernel_launcher.py:30: RuntimeWarning: coroutine 'UpdateMethods._run_until_disconnected' was never awaited RuntimeWarning: Enable tracemalloc to get the object allocation traceback when running the code on Jupyter. Now here are my questions: what those warning messages mean and how should I address them? what is the expected result of this code if working properly? Should I receive a message in Telegram or something? Because I don't recive any messages other than the signin code. What does the @ symbol at the beginning of the @client.on... line mean? what does that line is supposed to do? From this line onwards I do not understand the code. Would appreciate if you could help me understand it.
Just add await to client.send_message('me', 'Hello, myself!') to solve that error, and print only after download_profile_photo has done its work — it downloads the image to the local machine, so that may be why you don't see anything. You should read the Telethon documentation thoroughly, including how to use photo downloads correctly. All the calls to the client take time and should always be awaited so that your code doesn't get blocked; you should also read the asyncio tutorial. The correct code would be:

async with TelegramClient(session_name, api_id, api_hash) as client:
    await client.send_message('me', 'Hello, myself!')
    print(await client.download_profile_photo('me'))

    @client.on(events.NewMessage(pattern='(?i).*Hello'))
    async def handler(event):
        await event.reply('Hey!')

    #await client.run_until_disconnected()

The @ is a decorator, and you should read the PEP related to decorators, but in short words, they execute a function before yours. In this case @client.on(events.NewMessage(...)) means: when there is a new event that happens to be a message matching the specified pattern, handle it with this function called handler.
9
7
60,989,914
2020-4-2
https://stackoverflow.com/questions/60989914/add-id-found-in-list-to-new-column-in-pandas-dataframe
Say I have the following dataframe (a column of integers and a column with a list of integers)... ID Found_IDs 0 12345 [15443, 15533, 3433] 1 15533 [2234, 16608, 12002, 7654] 2 6789 [43322, 876544, 36789] And also a separate list of IDs... bad_ids = [15533, 876544, 36789, 11111] Given that, and ignoring the df['ID'] column and any index, I want to see if any of the IDs in the bad_ids list are mentioned in the df['Found_IDs'] column. The code I have so far is: df['bad_id'] = [c in l for c, l in zip(bad_ids, df['Found_IDs'])] This works but only if the bad_ids list is longer than the dataframe and for the real dataset the bad_ids list is going to be a lot shorter than the dataframe. If I set the bad_ids list to only two elements... bad_ids = [15533, 876544] I get a very popular error (I have read many questions with the same error)... ValueError: Length of values does not match length of index I have tried converting the list to a series (no change in the error). I have also tried adding the new column and setting all values to False before doing the comprehension line (again no change in the error). Two questions: How do I get my code (below) to work for a list that is shorter than a dataframe? How would I get the code to write the actual ID found back to the df['bad_id'] column (more useful than True/False)? Expected output for bad_ids = [15533, 876544]: ID Found_IDs bad_id 0 12345 [15443, 15533, 3433] True 1 15533 [2234, 16608, 12002, 7654] False 2 6789 [43322, 876544, 36789] True Ideal output for bad_ids = [15533, 876544] (ID(s) are written to a new column or columns): ID Found_IDs bad_id 0 12345 [15443, 15533, 3433] 15533 1 15533 [2234, 16608, 12002, 7654] False 2 6789 [43322, 876544, 36789] 876544 Code: import pandas as pd result_list = [[12345,[15443,15533,3433]], [15533,[2234,16608,12002,7654]], [6789,[43322,876544,36789]]] df = pd.DataFrame(result_list,columns=['ID','Found_IDs']) # works if list has four elements # bad_ids = [15533, 876544, 36789, 11111] # fails if list has two elements (less elements than the dataframe) # ValueError: Length of values does not match length of index bad_ids = [15533, 876544] # coverting to Series doesn't change things # bad_ids = pd.Series(bad_ids) # print(type(bad_ids)) # setting up a new column of false values doesn't change things # df['bad_id'] = False print(df) df['bad_id'] = [c in l for c, l in zip(bad_ids, df['Found_IDs'])] print(bad_ids) print(df)
Using np.intersect1d to get the intersect of the two lists: df['bad_id'] = df['Found_IDs'].apply(lambda x: np.intersect1d(x, bad_ids)) ID Found_IDs bad_id 0 12345 [15443, 15533, 3433] [15533] 1 15533 [2234, 16608, 12002, 7654] [] 2 6789 [43322, 876544, 36789] [876544] Or with just vanilla python using intersect of sets: bad_ids_set = set(bad_ids) df['Found_IDs'].apply(lambda x: list(set(x) & bad_ids_set))
12
9
60,975,243
2020-4-1
https://stackoverflow.com/questions/60975243/not-able-to-start-django-project-in-local-as-well-as-in-docker
I am using Docker to deploy Python2.7 application with Django1.8. I am facing some issue from last two days and I found error as below. Docker Image: python:2.7-slim-buster Error: root@64f8c580dd0a:/code# python manage.py runserver read completed! read completed! Traceback (most recent call last): File "manage.py", line 11, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/usr/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/usr/local/lib/python2.7/site-packages/django/apps/config.py", line 86, in create module = import_module(entry) File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/usr/local/lib/python2.7/site-packages/imagekit/__init__.py", line 2, in <module> from . import conf File "/usr/local/lib/python2.7/site-packages/imagekit/conf.py", line 1, in <module> from appconf import AppConf File "/usr/local/lib/python2.7/site-packages/appconf/__init__.py", line 1, in <module> from .base import AppConf # noqa File "/usr/local/lib/python2.7/site-packages/appconf/base.py", line 107 class AppConf(metaclass=AppConfMetaClass): ^ SyntaxError: invalid syntax uwsgi configuration: [uwsgi] wsgi-file = /code/config/wsgi.py callable = application uid = nginx gid = nginx socket = /tmp/uwsgi.sock chown-socket = nginx:nginx chmod-socket = 664 master = true cheaper = 5 processes = 15 vacuum = true I have installed below dependencies: Babel==2.8.0 beautifulsoup4==4.4.1 boto==2.38.0 boto3==1.4.4 botocore==1.5.95 cffi==1.14.0 contextlib2==0.6.0.post1 copyleaks==2.5.1 cryptography==2.8 dce-lti-py==0.7.4 Django==1.8 django-admin-honeypot==1.0.0 django-allauth==0.31.0 django-appconf==1.0.4 django-ckeditor==5.2.2 django-colorful==1.0.1 django-common-helpers==0.9.2 django-cors-headers==1.0.0 django-cron==0.5.0 django-debug-toolbar==1.6 django-environ==0.4.3 django-extensions==1.5.0 django-filter==1.1.0 django-hosts==2.0 django-imagekit==4.0.1 django-model-utils==2.0.3 django-mysql==2.2.0 django-phonenumber-field==1.3.0 django-polymorphic==0.6 django-recaptcha==1.3.0 django-redis==4.8.0 django-rest-framework-docs==0.1.7 django-rest-swagger==0.3.3 django-storages==1.1.8 django-uuslug==1.1.8 djangorestframework==3.1.0 docutils==0.16 drf-extensions==0.3.1 drf-nested-routers==0.11.1 enum34==1.1.6 et-xmlfile==1.0.1 future==0.18.2 futures==3.3.0 html5lib==1.0b8 httplib2==0.17.0 inflect==0.2.4 ipaddress==1.0.23 jdcal==1.4.1 jmespath==0.9.5 jsonpickle==0.9.0 lxml==3.7.3 MySQL-python==1.2.5 oauth2client==4.1.3 oauthlib==3.1.0 openpyxl==2.6.4 paypalrestsdk==1.13.1 paytm==0.1.8 phonenumberslite==8.12.1 pilkit==2.0 Pillow==3.2.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.20 pycrypto==2.6.1 pyOpenSSL==19.1.0 pyotp==2.2.6 pyPdf==1.13 PyPDF2==1.26.0 python-dateutil==2.4.0 python-openid==2.2.5 python-slugify==1.2.4 pytz==2019.3 PyYAML==3.11 raven==6.1.0 redis==3.4.1 reportlab==3.3.0 requests==2.9.1 requests-oauthlib==1.3.0 rsa==4.0 s3transfer==0.1.13 six==1.9.0 sqlparse==0.3.1 Unidecode==0.4.18 uWSGI==2.0.18 xhtml2pdf==0.0.6 xl2dict==0.1.5 xlrd==1.1.0 XlsxWriter==1.0.5
django-appconf version 1.0.4 only supports Django 1.11 and up and Python 3.5 and up (https://github.com/django-compressor/django-appconf/blob/v1.0.4/setup.py). You need to downgrade to version 1.0.2 or earlier, which still supports Python 2.6+ (its setup.py does not state which Django versions: https://github.com/django-compressor/django-appconf/blob/v1.0.2/setup.py).
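A minimal way to apply the downgrade, assuming pip manages the environment:

pip install "django-appconf==1.0.2"

or, equivalently, pin django-appconf==1.0.2 in the project's requirements file.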
11
19
60,978,672
2020-4-1
https://stackoverflow.com/questions/60978672/python-string-to-camelcase
This is a question from Codewars: Complete the method/function so that it converts dash/underscore delimited words into camel casing. The first word within the output should be capitalized only if the original word was capitalized (known as Upper Camel Case, also often referred to as Pascal case). The input test cases are as follows: test.describe("Testing function to_camel_case") test.it("Basic tests") test.assert_equals(to_camel_case(''), '', "An empty string was provided but not returned") test.assert_equals(to_camel_case("the_stealth_warrior"), "theStealthWarrior", "to_camel_case('the_stealth_warrior') did not return correct value") test.assert_equals(to_camel_case("The-Stealth-Warrior"), "TheStealthWarrior", "to_camel_case('The-Stealth-Warrior') did not return correct value") test.assert_equals(to_camel_case("A-B-C"), "ABC", "to_camel_case('A-B-C') did not return correct value") This is what I've tried so far: def to_camel_case(text): str=text str=str.replace(' ','') for i in str: if ( str[i] == '-'): str[i]=str.replace('-','') str[i+1].toUpperCase() elif ( str[i] == '_'): str[i]=str.replace('-','') str[i+1].toUpperCase() return str It passes the first two tests but not the main ones : test.assert_equals(to_camel_case("the_stealth_warrior"), "theStealthWarrior", "to_camel_case('the_stealth_warrior') did not return correct value") test.assert_equals(to_camel_case("The-Stealth-Warrior"), "TheStealthWarrior", "to_camel_case('The-Stealth-Warrior') did not return correct value") test.assert_equals(to_camel_case("A-B-C"), "ABC", "to_camel_case('A-B-C') did not return correct value") What am I doing wrong?
You may have a working implementation with slight errors as mentioned in your comments, but I propose that you:

split by the delimiters
apply a capitalization for all but the first of the tokens
rejoin the tokens

My implementation is:

def to_camel_case(text):
    s = text.replace("-", " ").replace("_", " ")
    s = s.split()
    if len(text) == 0:
        return text
    return s[0] + ''.join(i.capitalize() for i in s[1:])

IMO it makes a bit more sense. The output from running tests is:

>>> to_camel_case("the_stealth_warrior")
'theStealthWarrior'
>>> to_camel_case("The-Stealth-Warrior")
'TheStealthWarrior'
>>> to_camel_case("A-B-C")
'ABC'
9
19
60,976,758
2020-4-1
https://stackoverflow.com/questions/60976758/how-can-i-get-the-mse-of-a-tensor-across-a-specific-dimension
I have 2 tensors with .size of torch.Size([2272, 161]). I want to get mean-squared-error between them. However, I want it along each of the 161 channels, so that my error tensor has a .size of torch.Size([161]). How can I accomplish this? It seems that torch.nn.MSELoss doesn't let me specify a dimension.
For nn.MSELoss you can specify the option reduction='none'. This then gives you back the squared error for each entry position of your two tensors, and you can apply torch.sum/torch.mean along the dimension you need:

a = torch.randn(2272, 161)
b = torch.randn(2272, 161)
loss = nn.MSELoss(reduction='none')
loss_result = torch.sum(loss(a, b), dim=0)

I don't think there is a direct way to specify, at the initialisation of the loss, which dimension to apply mean/sum to. Hope that helps!
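Since the question asks for the mean squared error per channel (a tensor of shape [161]), taking the mean over the batch dimension instead of the sum gives that directly — a small sketch:

import torch
import torch.nn as nn

a = torch.randn(2272, 161)
b = torch.randn(2272, 161)
# Per-channel MSE: average the elementwise squared errors over dim 0.
per_channel_mse = nn.MSELoss(reduction='none')(a, b).mean(dim=0)  # shape: [161]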
11
15
60,962,196
2020-4-1
https://stackoverflow.com/questions/60962196/plotly-how-to-plot-rectangle-with-gradient-color-in-plotly
Can a shape such as a rectangle have a smooth color gradient in Plotly? I define the shape with a solid fill color as: shapes=[dict( type='rect', xref='x', yref='paper', x0=box_from, x1=box_to, y0=0, y1=1, fillcolor='Green', opacity=0.07, layer='below', line=dict(width=0), )] But I'd like the box not to have a solid color fill, but to have a smooth color gradient. This is the example in the documentation I'm following: https://plotly.com/python/shapes/#highlighting-time-series-regions-with-rectangle-shapes The docs on fillcolor aren't very extensive: https://plotly.com/python/reference/#layout-shapes-items-shape-fillcolor I guess colorscales don't apply to shapes: https://plotly.com/python/builtin-colorscales/ My guess is the answer is a simple "not supported", but perhaps someone else knows better.
Someone will correct me if I'm wrong but I think that no, there is no straight implementation to fill with a gradient a shape. But to achieve a similar results you could plot several lines inside the rectangle specifying decreasing rgb values. For example I added this for loop after the first rectangle definition in the documentation code (changing also the rectangle fillcolor to white). for i in range(100): fig.add_shape(type='line', xref="x", yref="y", x0=2.5, x1=3.5, y0=i*(2/100), y1=i*(2/100), line=dict( color='rgb({}, {}, {})'.format((i/100*255),(i/100*255),(i/100*255)), width=3, )) And the result is: I know it's impractical and that it can increase a bit the running time but if you care only about the aesthetic it does the job.
8
8
60,925,137
2020-3-30
https://stackoverflow.com/questions/60925137/using-mypy-with-with-lazy-initialization-of-instance-attributes
I'm trying to use mypy in my projects, but many of the instance attributes I use are only initialized after __init__, and not inside it. However, I do want to keep the good practice of declaring all instance attributes at __init__, so I need some complicated solutions to make this work. An example to how I want this to behave (currently mypy is complaining): from typing import Optional class Foo: def __init__(self, x: int): self.x = x self.y: int = None # will initialize later, but I know it will be an int def fill_values(self): self.y = self.x**2 def do(self) -> int: return self.x + self.y Currently mypy complains about the assignment of self.y, and wants it to be Optional or None. If I agree with it and change the line to self.y: Optional[int] = None, then mypy complains on the return value of do, because self.y might be None. The only way I found around it is to add as assert before using self.y, like: assert self.y is not None, which mypy picks up and understands. However, starting each method with many asserts is quite hard. I have many such values, and usually one method that initializes all of them, and all other methods runs after it. I understand that mypy is rightfully complaining (the method do can be called before fill_values), but even when I try to prevent it I can't get mypy to accept this. I can extend this example by adding more functionality but mypy can't infer this: from typing import Optional class Foo: def __init__(self, x: int): self.x = x self.y: int = None # will initialize later, but I know it will be an int def fill_values(self): self.y = x**2 def check_values(self): assert self.y is not None def do(self) -> int: if self.y is None: self.fill_values() self.check_values() return self.x + self.y Any idea of a more elegant solution that multiple assert statements and Optional types which obscure the code?
I have found this to work for me: class Foo: def __init__(self, x: int): self.x = x self.y: int # Give self.y a type but no value def fill_values(self): self.y = self.x ** 2 def do(self) -> int: return self.x + self.y Essentially all you are doing is telling mypy that self.y will be an integer when (and if) it is initialised. Trying to call self.y before it is initialised will raise an error and you can check if it has been initialised using hasattr(self, "y"). Run it on mypy playground.
17
18
60,917,800
2020-3-29
https://stackoverflow.com/questions/60917800/how-to-get-the-opencv-image-from-python-and-use-it-in-c-in-pybind11
I'm trying to figure out how it is possible to receive an OpenCV image from a Python in C++. I'm trying to send a callback function, from C++ to my Python module, and then when I call a specific python method in my C++ app, I can access the needed image. Before I add more details, I need to add that there are already several questions in this regard including : how-to-convert-opencv-image-data-from-python-to-c pass-image-data-from-python-to-cvmat-in-c writing-python-bindings-for-c-code-that-use-opencv c-conversion-from-numpy-array-to-mat-opencv but none of them have anything about Pybind11. In fact they are all using the PyObject (from Python.h header) with and without Boost.Python. So my first attempt is to know how it is possible in Pybind11 knowing that it has support for Numpy arrays, so it can hopefully make things much easier. Also On the C++ side, OpenCV has two versions, 3.x and 4.x which 4.x as I've recently found, is C++11 compliant. on Python side, I used OpenCV 3.x and I'm on a crossroad of which one to choose and what implications it has when it comes to Pybind11. What I have tried so far: I made a quick dummy callback and tried passing a simple cv::Mat& like this : #include <pybind11/embed.h> #include <pybind11/numpy.h> #include <pybind11/stl.h> #include <pybind11/functional.h> namespace py = pybind11; ... void cpp_callback1(bool i, std::string id, cv::Mat img) { auto timenow = chrono::system_clock::to_time_t(chrono::system_clock::now()); cout <<"arg1: " << i << " arg2: " << id<<" arg3: " << typeid(img).name() <<" " << ctime(&timenow)<<endl; } and used it like this : py::list callback_lst; callback_lst.attr("append")(py::cpp_function(cpp_callback1)); py::dict core_kwargs = py::dict("callback_list"_a = callback_lst, "debug_show_feed"_a = true); py::object core_obj = core_cls(**core_kwargs); core_obj.attr("start")(); but it fails with an exception on python part which says : 29/03/2020 21:56:47 : exception occured ("(): incompatible function arguments. The following argument types are supported:\n 1. (arg0: bool, arg1: str, arg2: cv::Mat) -> None\n\nInvoked with: True, '5', array([[[195, 217, 237],\n [195, 217, 237],\n [196, 218, 238],\n ...,\n [211, 241, 255],\n [211, 241, 255],\n [211, 241, 255]],\n\n [[195, 217, 237],\n [195, 217, 237],\n [195, 217, 237],\n ...,\n [211, 241, 255],\n [211, 241, 255],\n [211, 241, 255]],\n\n [[195, 217, 237],\n [195, 217, 237],\n [195, 217, 237],\n ...,\n [211, 241, 255],\n [211, 241, 255],\n [211, 241, 255]],\n\n ...,\n\n [[120, 129, 140],\n [110, 120, 130],\n [113, 122, 133],\n ...,\n [196, 209, 245],\n [195, 207, 244],\n [195, 207, 244]],\n\n [[120, 133, 142],\n [109, 121, 130],\n [114, 120, 131],\n ...,\n [195, 208, 242],\n [195, 208, 242],\n [195, 208, 242]],\n\n [[121, 134, 143],\n [106, 119, 128],\n [109, 114, 126],\n ...,\n [194, 207, 241],\n [195, 208, 242],\n [195, 208, 242]]], dtype=uint8)",) Traceback (most recent call last): File "C:\Users\Master\Anaconda3\Lib\site-packages\F\utils.py", line 257, in start self._main_loop() File "C:\Users\Master\Anaconda3\Lib\site-packages\F\utils.py", line 301, in _main_loop self._execute_callbacks(is_valid, name, frame) File "C:\Users\Master\Anaconda3\Lib\site-packages\F\utils.py", line 142, in _execute_callbacks callback(*args) TypeError: (): incompatible function arguments. The following argument types are supported: 1. 
(arg0: bool, arg1: str, arg2: cv::Mat) -> None Invoked with: True, '5', array([[[195, 217, 237], [195, 217, 237], [196, 218, 238], ..., [211, 241, 255], [211, 241, 255], [211, 241, 255]], [[195, 217, 237], [195, 217, 237], [195, 217, 237], ..., Using py::object or py::array_t<uint8_t> instead of cv::Mat doesn't cause any errors, but I can't seem to find a way to cast them back to a cv::Mat properly! I tried to cast the numpy array into a cv::Mat as instructed in the comments but the output is garbage: void cpp_callback1(bool i, std::string id, py::array_t<uint8_t>& img) { auto im = img.unchecked<3>(); auto rows = img.shape(0); auto cols = img.shape(1); auto type = CV_8UC3; //py::buffer_info buf = img.request(); cv::Mat img2(rows, cols, type, img.ptr()); cv::imshow("test", img2); } results in : It seems to me, the strides, or something in that direction is messed up that image is showing like this. what am I doing wrong here? I couldn't use the img.strides() though! when printed it using py::print, it shows 960 or something like that. So I'm completely clueless how to interpret that!
I ultimately could successfully get this to work thanks to @DanMasek and this link: void cpp_callback1(py::array_t<uint8_t>& img) { py::buffer_info buf = img.request(); cv::Mat mat(buf.shape[0], buf.shape[1], CV_8UC3, (unsigned char*)buf.ptr); cv::imshow("test", mat); } note that the cast is necessary, or otherwise, you'd get a blackish screen only! However, if somehow there was a way like py::return_value_policy that we could use to change the type of reference, so even though the python part ends, the c++ side wouldn't crash would be great. side note : it seems the ptr property exposed in the numpy array, is actually not a py::handle but a PyObject*&. I couldn't have a successful conversion and thus resorted to the solution I posted above. I'll update this answer, when I figure this out. Update: I found out, the arrays data holds a pointer to the underlying buffer and can be used easily as well. From <pybind11/numpy.h> L681: /// Pointer to the contained data. If index is not provided, points to the /// beginning of the buffer. May throw if the index would lead to out of bounds access. So my original code that used img.ptr(), can work using img.data() like this : void cpp_callback1(py::array_t<uint8_t>& img) { //auto im = img.unchecked<3>(); auto rows = img.shape(0); auto cols = img.shape(1); auto type = CV_8UC3; cv::Mat img2(rows, cols, type, (unsigned char*)img.data()); cv::imshow("test", img2); }
12
7
60,897,536
2020-3-28
https://stackoverflow.com/questions/60897536/python-count-and-replace-regular-expression-in-same-pass
I can globally replace a regular expression with re.sub(), and I can count matches with for match in re.finditer(): count++ Is there a way to combine these two, so that I can count my substitutions without making two passes through the source string? Note: I'm not interested in whether the substitution matched, I'm interested in the exact count of matches in the same call, avoiding one call to count and one call to substitute.
You can use re.subn. re.subn(pattern, repl, string, count=0, flags=0) it returns (new_string, number_of_subs_made) For example purposes, I'm using the same example as @Shubham Sharma used. text = "Jack 10, Lana 11, Tom 12, Arthur, Mark" out_str, count = re.subn(r"(\d+)", repl='repl', string=text) # out_str--> 'Jack repl, Lana repl, Tom repl, Arthur, Mark' # count---> 3
11
10
60,873,454
2020-3-26
https://stackoverflow.com/questions/60873454/how-can-i-list-all-the-virtual-environments-created-with-venv
Someone's just asked me how to list all the virtual environments created with venv. I could only think of searching for pyvenv.cfg files to find them. Something like: from pathlib import Path venv_list = [str(p.parent) for p in Path.home().rglob('pyvenv.cfg')] This could potentially include some false positives. Is there a better way to list all the virtual environment created with venv? NB: The question is about venv specifically, NOT Anaconda, virtualenv, etc.
On Linux/macOS this should get most of it find ~ -d -name "site-packages" 2>/dev/null Looking for directories under your home that are named "site-packages" which is where venv puts its pip-installed stuff. the /dev/null bit cuts down on the chattiness of things you don't have permission to look into. Or you can look at the specifics of a particular expected file. For example, activate has nondestructive as content. Then you need to look for a pattern than matches venv but not anaconda and the rest. find ~ -type f -name "activate" -exec egrep -l nondestructive /dev/null {} \; 2>/dev/null macos mdfind On macos, this is is pretty fast, using mdfind (locate on Linux would probably have similar performance. mdfind -name activate | egrep /bin/activate$| xargs -o egrep -l nondestructive 2>/dev/null | xargs -L 1 dirname | xargs -L 1 dirname So we : look for all activate files egrep to match only bin/activate files (mdfind matches on things like .../bin/ec2-activate-license) look for that nondestructive and print filename where there is a match. the 2 xargs -L 1 dirname allow us to "climb up" from /bin/activate to the virtual env's root. Helper function with -v flag to show details. jvenvfindall(){ # search for Python virtual envs. -v for verbose details unset verbose OPTIND=1 while getopts 'v' OPTION; do case "$OPTION" in v) verbose=1 ;; ?) ;; esac done shift "$(($OPTIND -1))" local bup=$PWD for dn in $(mdfind -name activate | egrep /bin/activate$| xargs -o egrep -l nondestructive 2>/dev/null | xargs -L 1 dirname | xargs -L 1 dirname) do if [[ -z "$verbose" ]]; then printf "$dn\n" else printf "\n\nvenv info for $dn:\n" cd $dn echo space usage, $(du -d 0 -h) #requires the jq and jc utilities... to extract create and modification times echo create, mod dttm: $(stat . | jc --stat | jq '.[]|{birth_time, change_time}') tree -d -L 1 lib fi done cd $bup } output: ... venv info for /Users/me/kds2/issues2/067.pip-stripper/010.fixed.p1.check_venv/venvtest: space usage, 12M . create, mod dttm: { "birth_time": "Apr 16 13:04:43 2019", "change_time": "Sep 30 00:00:39 2019" } lib └── python3.6 ... Hmmm, disk usage is not that bad, but something similar for node_modules might save some real space.
16
13
60,902,650
2020-3-28
https://stackoverflow.com/questions/60902650/how-does-the-quotechar-parameter-of-the-csv-reader-function-work
My current understanding of the quotechar parameter is that it surrounds the fields that are separated by a comma. I'm reading the csv documentation for python and have written a similar code to theirs as such: import csv with open("test.csv", newline="") as file: reader = csv.reader(file, delimiter=",", quotechar="|") for row in reader: print(row) My csv file contains the following: |Hello|,|My|,|name|,|is|,|John| The output gives a list of strings as expected: ['Hello', 'My', 'name', 'is', 'John'] The problem arises when I have whitespace in between the commas in my csv file. For example, if i have a whitespace after the closing | of a field like such: |Hello| ,|My| ,|name| ,|is| ,|John| It gives the same output as before but now there's a whitespace included in the strings in the list: ['Hello ', 'My ', 'name ', 'is ', 'John'] It was my understanding that the quotechar parameter would only consider what was between the | symbol. Any help is greatly appreciated!
The quotechar argument is A one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or which contain new-line characters. It defaults to '"'. For example, If your csv file contains data of the form |Hello|,|My|,|name|,|is|,|"John"| |Hello|,|My|,|name|,|is|,|"Tom"| then in that case you can't use the default quotechar which is " because its already present in entities of the csv data so to instruct the csv reader that you want "John" to be included as it is in the output you would specify the some other quotechar, it may be | or ; or any character depending on the requirements. The output now include John and Tom in quotation marks, ['Hello', 'My', 'name', 'is', '"John"'] ['Hello', 'My', 'name', 'is', '"Tom"'] Consider another example where csv field itself contains delimiter, consider the csv file contains "Fruit","Quantity","Cost" "Strawberry","1000","$2,200" "Apple","500","$1,100" Now in such case you have to specify the quotechar explicitly to instruct the csv reader so that it can distinguish between actual delimiter (control character) and comma (literal characters) in the csv field. Now in this case the quotechar " will also work. Now coming to your code, you have to replace the extra white space before the delimiter in the csv file with the empty string. You can do this in the following way: Try this: from io import StringIO with open("test.csv", newline="") as f: file = StringIO(f.read().replace(" ,", ",")) reader = csv.reader(file, delimiter=",", quotechar="|") for row in reader: print(row) This outputs, ['Hello', 'My', 'name', 'is', 'John']
10
8
60,939,392
2020-3-30
https://stackoverflow.com/questions/60939392/django-annotate-whether-exists-or-not
I have a query I'm using: people = Person.objects.all().annotate(num_pets=Count('pets')) for p in people: print(p.name, p.num_pets == 0) (Pet is ManyToOne with Person) But i'm actually not interested in the number of pets, but only on whether a person has any pets or not. How can this be done?
You can make use of an Exists expression [Django-doc] to determine if there exists a Pet for that Person. For example:

from django.db.models import Exists, OuterRef

Person.objects.annotate(
    has_pet=Exists(Pet.objects.filter(person=OuterRef('pk')))
)

Here the model is thus Pet that has a ForeignKey named person to Person. If the fields are named differently, then you should of course update the query accordingly.
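With that annotation in place, the loop from the question presumably becomes the following, and the queryset can also be filtered on the flag directly:

from django.db.models import Exists, OuterRef

people = Person.objects.annotate(
    has_pet=Exists(Pet.objects.filter(person=OuterRef('pk')))
)
for p in people:
    print(p.name, p.has_pet)

# keep only the people who have at least one pet
people_with_pets = people.filter(has_pet=True)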
15
26
60,921,001
2020-3-29
https://stackoverflow.com/questions/60921001/internalerror-spectrum-scan-error-s3-to-redshift-copy-command
I am trying to copy some data from S3 bucket to redshift table by using the COPY command. The format of the file is PARQUET. When I run the execute the COPY command query, I get InternalError_: Spectrum Scan Error. This is the first time I tried copying from a parquet file. Please help me if there is a solution for this. I am using boto3 in python.
This generally happens for the following reasons: there is a mismatch in the number of columns between the table and the file, or a column type in your file schema is incompatible with the corresponding target table column type. Try going into the error logs; you might find a partial log in CloudWatch. From the screenshot you have uploaded, you can also find the query number you have run. Go to the AWS Redshift query editor and run the query below to get the full log:

select message
from svl_s3log
where query = '<<your query number>>'
order by query, segment, slice;

Hope this helps!
10
29
60,847,083
2020-3-25
https://stackoverflow.com/questions/60847083/attributeerror-torch-return-types-max-object-has-no-attribute-dim-maxpool
I'm trying to do maxpooling over channel dimension: class ChannelPool(nn.Module): def forward(self, input): return torch.max(input, dim=1) but I get the error AttributeError: 'torch.return_types.max' object has no attribute 'dim'
The torch.max function called with dim returns a namedtuple, so:

class ChannelPool(nn.Module):
    def forward(self, input):
        input_max, max_indices = torch.max(input, dim=1)
        return input_max

From the documentation of torch.max: Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).
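Since the result is a namedtuple, an equivalent one-liner is to take its values field directly — a small sketch:

import torch
import torch.nn as nn

class ChannelPool(nn.Module):
    def forward(self, input):
        # Max over the channel dimension; .values drops the argmax indices.
        return torch.max(input, dim=1).values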
17
28
60,927,188
2020-3-30
https://stackoverflow.com/questions/60927188/django-3-x-error-mysql-connector-django-isnt-an-available-database-backend
Having recently upgraded a Django project from 2.x to 3.x, I noticed that the mysql.connector.django backend (from mysql-connector-python) no longer works. The last version of Django that it works with is 2.2.11. It breaks with 3.0. I am using mysql-connector-python==8.0.19. When running manage.py runserver, the following error occurs: django.core.exceptions.ImproperlyConfigured: 'mysql.connector.django' isn't an available database backend. Try using 'django.db.backends.XXX', where XXX is one of: 'mysql', 'oracle', 'postgresql', 'sqlite3' I am aware that this is not an official Django backend but I have to use it on this project for reasons beyond my control. I am 80% sure this is an issue with the library but I'm just looking to see if there is anything that can be done to resolve it beyond waiting for an update. UPDATE: mysql.connector.django now works with Django 3+.
For Django 3.0 and Django 3.1 I managed to have it working with mysql-connector-python 8.0.22. See this https://dev.mysql.com/doc/relnotes/connector-python/en/news-8-0-22.html.
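For reference, a typical settings.py entry for this backend looks roughly like the sketch below — the connection values are placeholders, and the ENGINE string is the one from the question:

DATABASES = {
    'default': {
        'ENGINE': 'mysql.connector.django',
        'NAME': 'mydatabase',      # placeholder
        'USER': 'myuser',          # placeholder
        'PASSWORD': 'mypassword',  # placeholder
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}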
8
5
60,885,641
2020-3-27
https://stackoverflow.com/questions/60885641/problem-while-installing-virtualenvwrapper-with-pyenv-pipx
I'm trying to install virtualenvwrapper (not pyenv-virtualenvwrapper) in my macOS (using zsh). I'm using pyenv to mange multiple python versions and pipx to install CLI stuff. I'm using Python 3.8.1 $ pyenv versions system 2.7.17 * 3.8.1 (set by /Users/my_user/.pyenv/version) I installed virtualenvwrapper with pipx $ pipx install virtualenvwrapper $ pipx list venvs are in /Users/my_user/.local/pipx/venvs apps are exposed on your $PATH at /Users/my_user/.local/bin package sshuttle 0.78.5, Python 3.8.1 - sshuttle package virtualenv 20.0.15, Python 3.8.1 - virtualenv package virtualenvwrapper 4.8.4, Python 3.8.1 - virtualenvwrapper.sh - virtualenvwrapper_lazy.sh and I inserted in my .zshrc the following lines: export WORKON_HOME=$HOME/.virtualenvs source /Users/my_user/.local/pipx/venvs/virtualenvwrapper/bin/virtualenvwrapper.sh export PIP_VIRTUALENV_BASE=$WORKON_HOME But when I launch the shell I get the following error: /Users/my_user/.pyenv/versions/3.8.1/bin/python: Error while finding module specification for 'virtualenvwrapper.hook_loader' (ModuleNotFoundError: No module named 'virtualenvwrapper') virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenvwrapper has been installed for VIRTUALENVWRAPPER_PYTHON=/Users/my_user/.pyenv/shims/python and that PATH is set properly. $ How can I fix this problem?
Fixed specifying a specific VIRTUALENVWRAPPER_PYTHON without pointing to the shim export WORKON_HOME=$HOME/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/Users/my_user/.local/pipx/venvs/virtualenvwrapper/bin/python3.8 source /Users/my_user/.local/pipx/venvs/virtualenvwrapper/bin/virtualenvwrapper.sh
8
14
60,949,936
2020-3-31
https://stackoverflow.com/questions/60949936/why-bilinear-scaling-of-images-with-pil-and-pytorch-produces-different-results
In order to feed an image to the pytorch network I first need to downscale it to some fixed size. At first I've done it using PIL.Image.resize() method, with interpolation mode set to BILINEAR. Then I though it would be more convenient to first convert a batch of images to pytorch tensor and then use torch.nn.functional.interpolate() function to scale the whole tensor at once on a GPU ('bilinear' interpolation mode as well). This lead to a decrease of the model accuracy because now during inference a type of scaling (torch) was different from the one used during training (PIL). After that, I compared two methods of downscaling visually and found out that they produce different results. Pillow downscaling seems more smooth. Do these methods perform different operations under the hood though both being bilinear? If so, I am also curious if there is a way to achieve the same result as Pillow image scaling with torch tensor scaling? Original image (the well-known Lenna image) Pillow scaled image: Torch scaled image: Mean channel absolute difference map: Demo code: import numpy as np from PIL import Image import torch import torch.nn.functional as F from torchvision import transforms import matplotlib.pyplot as plt pil_to_torch = transforms.ToTensor() res_shape = (128, 128) pil_img = Image.open('Lenna.png') torch_img = pil_to_torch(pil_img) pil_image_scaled = pil_img.resize(res_shape, Image.BILINEAR) torch_img_scaled = F.interpolate(torch_img.unsqueeze(0), res_shape, mode='bilinear').squeeze(0) pil_image_scaled_on_torch = pil_to_torch(pil_image_scaled) relative_diff = torch.abs((pil_image_scaled_on_torch - torch_img_scaled) / pil_image_scaled_on_torch).mean().item() print('relative pixel diff:', relative_diff) pil_image_scaled_numpy = pil_image_scaled_on_torch.cpu().numpy().transpose([1, 2, 0]) torch_img_scaled_numpy = torch_img_scaled.cpu().numpy().transpose([1, 2, 0]) plt.imsave('pil_scaled.png', pil_image_scaled_numpy) plt.imsave('torch_scaled.png', torch_img_scaled_numpy) plt.imsave('mean_diff.png', np.abs(pil_image_scaled_numpy - torch_img_scaled_numpy).mean(-1)) Python 3.6.6, requirements: cycler==0.10.0 kiwisolver==1.1.0 matplotlib==3.2.1 numpy==1.18.2 Pillow==7.0.0 pyparsing==2.4.6 python-dateutil==2.8.1 six==1.14.0 torch==1.4.0 torchvision==0.5.0
"Bilinear interpolation" is an interpolation method. But downscaling an image is not necessarily only accomplished using interpolation. It is possible to simply resample the image as a lower sampling rate, using an interpolation method to compute new samples that don't coincide with old samples. But this leads to aliasing (which is what you get when higher frequency components in the image cannot be represented at the lower sampling density, "aliasing" the energy of these higher frequencies onto lower frequency components; that is, new low frequency components appear in the image after the resampling). To avoid aliasing, some libraries apply a low-pass filter (remove high frequencies that cannot be represented at the lower sampling frequency) before resampling. The subsampling algorithm in these libraries do much more than just interpolating. The difference you see is because these two libraries take different approaches, one tries to avoid aliasing by low-pass filtering, the other doesn't. To obtain the same results in Torch as in Pillow, you need to explicitly low-pass filter the image yourself. To get identical results you will have to figure out exactly how Pillow filters the image, there are different methods and different possible parameter settings. Looking at the source code is the best way to find out exactly what they do.
8
9
60,912,744
2020-3-29
https://stackoverflow.com/questions/60912744/install-pytorch-from-requirements-txt
Torch documentation says use pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html to install the latest version of PyTorch. This works when I do it manually but when I add it to req.txt and do pip install -r req.txt, it fails and says ERROR: No matching distribution. Edit: adding the whole line from req.txt and error here. torch==1.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.htmltorch==1.4.0+cpu ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from -r requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0) ERROR: No matching distribution found for torch==1.4.0+cpu (from -r requirements.txt (line 1))
Add --find-links in requirements.txt before torch:

--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.2.0+cpu

Source: https://github.com/pytorch/pytorch/issues/29745#issuecomment-553588171
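Applied to the versions from the question, the requirements file would presumably look like this:

--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.4.0+cpu
torchvision==0.5.0+cpu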
49
69
60,897,366
2020-3-28
https://stackoverflow.com/questions/60897366/how-to-read-rtf-file-and-convert-into-python3-strings-and-can-be-stored-in-pyth
I am having a .rtf file and I want to read the file and store strings into list using python3 by using any package but it should be compatible with both Windows and Linux. I have tried striprtf but read_rtf is not working. from striprtf.striprtf import rtf_to_text from striprtf.striprtf import read_rtf rtf = read_rtf("file.rtf") text = rtf_to_text(rtf) print(text) But in this code, the error is: cannot import name 'read_rtf' Please can anyone suggest any way to get strings from .rtf file in python3?
Have you tried this?

with open('yourfile.rtf', 'r') as file:
    text = file.read()
    print(text)

For a super large file, try this:

with open('yourfile.rtf') as infile:
    for line in infile:
        do_something_with(line)
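If the goal is plain text rather than raw RTF markup, the striprtf package from the question can still be used — only the non-existent read_rtf import has to go; a minimal sketch:

from striprtf.striprtf import rtf_to_text

with open('file.rtf') as f:
    text = rtf_to_text(f.read())

lines = text.splitlines()  # list of strings, as asked
print(lines)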
9
8
60,868,719
2020-3-26
https://stackoverflow.com/questions/60868719/how-can-a-google-cloud-python-function-access-a-private-python-package
I am using Google Cloud Function using Python. Several other functions are in production. However, for this, I have additionally created a custom Python package that is available on github as a private repo. I need to install the package in the Google Function WHAT I HAVE DONE I run the Google Function in local using functions-framework I have a requirements.txt which has a link to the package. This is done by adding the following line to requirements.txt: -e git+https://github.com/<repo>/<project>#egg=<project> I run pip install -r requirements.txt. And the package is successfully installed. Now in the python code of the function using import <pkg-name> works and I am able to access all the functions. CHALLENGES WHEN PUSHING THE FUNCTION TO THE CLOUD As per the documentation, to push the Cloud function to Google Cloud, I issue the command: gcloud functions \ deploy <function-name> \ --trigger-http \ --entry-point <entry-function> \ --runtime python37 \ --project=<my-project> As expected this gives an error because it does not have access to the private repo in git. I created a Google Cloud Repository and linked it to the git repo, hoping that in some way I could specify that in the requirements.txt. Just do not know how. I tried setting environment variables for username and password (not a good idea, I agree) in Google Cloud Function and specify them in the requirements.txt as: -e git+https://${AUTH_USER}:${AUTH_PASSWORD}@github.com/<repo>/<project>#egg=<project> That too gave an error. Any help or direction will be greatly appreciated.
You can not access the private repo from cloud function. According to the official documentation: " Using private dependencies Dependencies are installed in a Cloud Build environment that does not provide access to SSH keys. Packages hosted in repositories that require SSH-based authentication must be vendored and uploaded alongside your project's code, as described in the previous section. You can use the pip install command with the -t DIRECTORY flag to copy private dependencies into a local directory before deploying your app, as follows: Copy your dependency into a local directory: pip install -t DIRECTORY DEPENDENCY Add an empty init.py file to the DIRECTORY directory to turn it into a module. Import from this module to use your dependency: import DIRECTORY.DEPENDENCY " Specifying dependencies in Python
11
4
60,894,682
2020-3-27
https://stackoverflow.com/questions/60894682/jupyter-notebook-exported-html-dark-color
I am using JupyterLab with light theme and when I exported my notebook as HTML I saw this: What I am expecting to see is something like this: any ideas of the setting ?
I had the exact same issue. After a couple hours debugging I realized it had to do (for me at least) with the jupyter-theme library. I had a dark theme installed, and I think nbconverter uses whichever settings your jupyter is also using, so the dark settings were affecting the html conversion. Solution was simply to restore defaults with: $ jt -r If that doesn't work, then refer to this thread: https://github.com/dunovank/jupyter-themes/issues/86
12
5
60,860,121
2020-3-26
https://stackoverflow.com/questions/60860121/plotly-how-to-make-an-annotated-confusion-matrix-using-a-heatmap
I like to use Plotly to visualize everything, I'm trying to visualize a confusion matrix by Plotly, this is my code: def plot_confusion_matrix(y_true, y_pred, class_names): confusion_matrix = metrics.confusion_matrix(y_true, y_pred) confusion_matrix = confusion_matrix.astype(int) layout = { "title": "Confusion Matrix", "xaxis": {"title": "Predicted value"}, "yaxis": {"title": "Real value"} } fig = go.Figure(data=go.Heatmap(z=confusion_matrix, x=class_names, y=class_names, hoverongaps=False), layout=layout) fig.show() and the result is How can I show the number inside corresponding cell instead of hovering, like this
You can use annotated heatmaps with ff.create_annotated_heatmap() to get this: Complete code: import plotly.figure_factory as ff z = [[0.1, 0.3, 0.5, 0.2], [1.0, 0.8, 0.6, 0.1], [0.1, 0.3, 0.6, 0.9], [0.6, 0.4, 0.2, 0.2]] x = ['healthy', 'multiple diseases', 'rust', 'scab'] y = ['healthy', 'multiple diseases', 'rust', 'scab'] # change each element of z to type string for annotations z_text = [[str(y) for y in x] for x in z] # set up figure fig = ff.create_annotated_heatmap(z, x=x, y=y, annotation_text=z_text, colorscale='Viridis') # add title fig.update_layout(title_text='<i><b>Confusion matrix</b></i>', #xaxis = dict(title='x'), #yaxis = dict(title='x') ) # add custom xaxis title fig.add_annotation(dict(font=dict(color="black",size=14), x=0.5, y=-0.15, showarrow=False, text="Predicted value", xref="paper", yref="paper")) # add custom yaxis title fig.add_annotation(dict(font=dict(color="black",size=14), x=-0.35, y=0.5, showarrow=False, text="Real value", textangle=-90, xref="paper", yref="paper")) # adjust margins to make room for yaxis title fig.update_layout(margin=dict(t=50, l=200)) # add colorbar fig['data'][0]['showscale'] = True fig.show()
10
18
60,935,289
2020-3-30
https://stackoverflow.com/questions/60935289/ego-graph-in-networkx
I have bipartite graph with nodes such as(a1,a2,...a100, m1,m2,...). I want to find the induced subgraph for certain nodes say(a1,a2 and a10). I can do this by using networkx.ego_graph, but it takes one vertex at one time and returns the induced graph. I want to know if there is any way to do this at once for all the nodes that i am interested in and then select the one that is largest.
For the general case, the ego graph can be obtained using nx.ego_graph. Though in your specific case, it looks like you want to find the largest induced ego graph in the network. For that you can first find the node with a highest degree, and then obtain its ego graph. Let's create an example bipartite graph: import networkx as nx B = nx.Graph() B.add_nodes_from([1, 2, 3, 4, 5, 6], bipartite=0) B.add_nodes_from(['a', 'b', 'c', 'j', 'k'], bipartite=1) B.add_edges_from([(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), (3, 'c'), (4, 'a'), (2, 'b'), (3, 'a'), (5, 'k'), (6, 'k'), (6, 'j')]) rcParams['figure.figsize'] = 12, 6 nx.draw(B, node_color='lightblue', with_labels=True) And as mentioned in the question, say we want to select among the following list of nodes: l = [1,'a',6] It looks like you want to select the one that has the highest centrality degree among these. For that you could do: deg_l = {i:B.degree(i) for i in l} highest_centrality_node = max(deg_l.items(), key=lambda x: x[1])[0] Now we could plot the corresponding ego_graph with: ego_g = nx.ego_graph(B, highest_centrality_node) d = dict(ego_g.degree) nx.draw(ego_g, node_color='lightblue', with_labels=True, nodelist=d, node_size=[d[k]*300 for k in d])
8
6
60,896,993
2020-3-28
https://stackoverflow.com/questions/60896993/importerror-cannot-import-name-url-encode-from-werkzeug
I am currently running a conda environment with flask-wtf version 0.14.2 and wtforms version 2.21 and I have trouble solving this ImportError: cannot import name 'url_encode' from 'werkzeug' The following code is the complete traceback. Traceback (most recent call last): File "run.py", line 1, in <module> from flaskblog import app File "/Users/justinding/Desktop/test/test_wesite/flaskblog/__init__.py", line 10, in <module> from flaskblog import routes File "/Users/justinding/Desktop/test/test_wesite/flaskblog/routes.py", line 4, in <module> from flaskblog.forms import RegistrationForm,LoginForm File "/Users/justinding/Desktop/test/test_wesite/flaskblog/forms.py", line 1, in <module> from flask_wtf import FlaskForm File "/opt/anaconda3/envs/smartbox/lib/python3.7/site-packages/flask_wtf/__init__.py", line 17, in <module> from .recaptcha import * File "/opt/anaconda3/envs/smartbox/lib/python3.7/site-packages/flask_wtf/recaptcha/__init__.py", line 2, in <module> from .fields import * File "/opt/anaconda3/envs/smartbox/lib/python3.7/site-packages/flask_wtf/recaptcha/fields.py", line 3, in <module> from . import widgets File "/opt/anaconda3/envs/smartbox/lib/python3.7/site-packages/flask_wtf/recaptcha/widgets.py", line 5, in <module> from werkzeug import url_encode ImportError: cannot import name 'url_encode' from 'werkzeug' (/opt/anaconda3/envs/smartbox/lib/python3.7/site-packages/werkzeug/__init__.py)
Setting werkzeug==0.16.1 in your requirements file fixes it. The issue is with the 1.0.0 version
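For context, Werkzeug 1.0.0 removed the deprecated top-level re-exports (url_encode now lives in werkzeug.urls), which is why older Flask-WTF releases break. A minimal way to apply the pin, assuming pip:

pip install "werkzeug==0.16.1"

or add the line werkzeug==0.16.1 to requirements.txt and reinstall.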
26
43
60,926,079
2020-3-30
https://stackoverflow.com/questions/60926079/pipenv-install-runtimeerror-location-not-created-nor-specified
I am using Pipenv to manage project dependencies. It was working fine so far until now. Now I am trying to bootstrap an environment with pipenv install and I am getting the following error: ❯ pipenv install --dev --skip-lock Creating a virtualenv for this project… Pipfile: /Users/user/project/Pipfile Using /usr/bin/python3 (3.7.3) to create virtualenv… ⠧ Creating virtual environment...created virtual environment CPython3.7.3.final.0-64 in 399ms creator CPython3Posix(dest=/Users/user/.local/share/virtualenvs/sql_runner-ABIm84c6, clear=False, global=False) seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/Users/user/Library/Application Support/virtualenv/seed-app-data/v1) activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator ✔ Successfully created virtual environment! Virtualenv location: /Users/user/.local/share/virtualenvs/sql_runner-ABIm84c6 Traceback (most recent call last): File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/bin/pipenv", line 8, in <module> sys.exit(cli()) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/core.py", line 717, in main rv = self.invoke(ctx) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/decorators.py", line 64, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/vendor/click/decorators.py", line 17, in new_func return f(get_current_context(), *args, **kwargs) File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/cli/command.py", line 235, in install retcode = do_install( File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/core.py", line 1734, in do_install ensure_project( File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/core.py", line 579, in ensure_project path_to_python = which("python") or which("py") File "/usr/local/Cellar/pipenv/2018.11.26_4/libexec/lib/python3.8/site-packages/pipenv/core.py", line 108, in which raise RuntimeError("location not created nor specified") RuntimeError: location not created nor specified The Pipfile is the following [[source]] name = "pypi" url = "https://pypi.org/simple" verify_ssl = true [dev-packages] pytest = "==4.6.3" flake8 = "==3.7.7" autopep8 = "==1.4.4" pytest-cov = "==2.7.1" moto = "==1.3.13" Sphinx = "==2.3.1" [packages] psycopg2-binary = "==2.8.2" boto3 = "==1.9.166" pymongo = "==3.8.0" deprecated = "==1.2.5" paramiko = "==2.6.0" pandas = "==0.24.2" pyarrow = "==0.14.0" 
SQLAlchemy = "==1.3.15" s3fs = "==0.4.0" [requires] python_version = "3.7" I have installed Pipenv with Homebrew. I am not sure what could have changed to stop working. Other older projects keep working, but every time that I try to create an environment I get this error. Thanks!
So I managed to make it work. My default system Python installation was 3.7.3; however, pipenv didn't like that one for some reason. I installed Python 3.7.7 with Homebrew, and pipenv was able to locate that version properly and use it to create a virtual environment. In summary, to fix this issue try installing Python again. In my case: brew install python
9
9
60,879,701
2020-3-27
https://stackoverflow.com/questions/60879701/socketio-flask-detect-disconnect
I had a different question here, but realized it simplifies to this: how do you detect when a client disconnects from a page (closes their tab or clicks a link), i.e. when the socket connection closes?

I want to make a chat app with an updating user list, and I'm using Flask on Python. When a user connects, the browser sends a socket.emit() with an event and the username so the server knows a new user exists; the server then messages all clients with socket.emit(), so that every client appends the new user to its user list. However, I also want the clients to send a message containing their username to the server on disconnect, and I couldn't figure out how to get the triggers right.

Note: I'm just using a simple HTML file with script tags for the page. I'm not sure how to add a separate JS file to go along with it, though I can figure that out if it's necessary for this.
Figured it out. socket.on('disconnect') did turn out to be right; however, by default the server only pings each client about once a minute, so it took a long time to see the disconnect event fire.
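For reference, a minimal server-side sketch of how this can look with Flask-SocketIO — the 'register'/'user_list' event names and the users dict are just assumptions for illustration, and ping_timeout/ping_interval are lowered so a closed tab is noticed within seconds instead of a minute:

from flask import Flask, request
from flask_socketio import SocketIO, emit

app = Flask(__name__)
# Lower ping settings so dropped clients are detected quickly.
socketio = SocketIO(app, ping_timeout=5, ping_interval=5)

users = {}  # socket session id -> username


@socketio.on('register')  # the client emits this right after connecting
def handle_register(username):
    users[request.sid] = username
    emit('user_list', list(users.values()), broadcast=True)


@socketio.on('disconnect')  # fires when the tab closes or the user navigates away
def handle_disconnect():
    users.pop(request.sid, None)
    emit('user_list', list(users.values()), broadcast=True)


if __name__ == '__main__':
    socketio.run(app)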
7
7
60,960,535
2020-3-31
https://stackoverflow.com/questions/60960535/split-a-python-list-into-chunks-with-maximum-memory-size
Given a python list of bytes values:

# actual str values un-important
[
    b'foo',
    b'bar',
    b'baz',
    ...
]

How can the list be broken into chunks where each chunk's total memory size stays below a certain ceiling?

For example, if the ceiling were 7 bytes, the original list would be broken up into a list of lists:

[
    [b'foo', b'bar'],  # sublist 0
    [b'baz'],          # sublist 1
    ...
]

and each sublist would be at most 7 bytes, based on the accumulated length of its contents.

Note: each sub-list should be maximally packed, in the order of the original list. In the example above, the first 2 values were grouped because that is the maximum possible under the 7-byte limit.

Thank you in advance for your consideration and response.
This solution uses functools.reduce.

from functools import reduce

l = [b'abc', b'def', b'ghi', b'jklm', b'nopqrstuv', b'wx', b'yz']

reduce(lambda a, b, size=7: a[-1].append(b) or a if a and sum(len(x) for x in a[-1]) + len(b) <= size else a.append([b]) or a, l, [])

How it works:

a starts as an empty list and b is an item from the original list.
if a and sum(len(x) for x in a[-1]) + len(b) <= size checks that a is not empty and that the combined length of the bytes in the last appended sublist plus the length of b does not exceed size.
a[-1].append(b) or a appends b to the last sublist of a and returns a when the condition is True.
a.append([b]) or a makes a new list containing b, appends it to a, and returns a otherwise.

Output:

[[b'abc', b'def'], [b'ghi', b'jklm'], [b'nopqrstuv'], [b'wx', b'yz']]
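If the one-liner is hard to read, here is an equivalent plain-loop sketch of the same greedy grouping (chunk_by_size is just an illustrative name; it keeps the same behaviour, including giving an item larger than the ceiling its own sublist):

def chunk_by_size(items, size=7):
    # Greedily pack items into sublists whose combined byte length stays within `size`.
    chunks = []
    current, current_len = [], 0
    for item in items:
        if current and current_len + len(item) > size:
            chunks.append(current)
            current, current_len = [], 0
        current.append(item)
        current_len += len(item)
    if current:
        chunks.append(current)
    return chunks


print(chunk_by_size([b'abc', b'def', b'ghi', b'jklm', b'nopqrstuv', b'wx', b'yz']))
# [[b'abc', b'def'], [b'ghi', b'jklm'], [b'nopqrstuv'], [b'wx', b'yz']]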
7
2
60,886,568
2020-3-27
https://stackoverflow.com/questions/60886568/google-colab-not-loading-image-files-while-using-tensorflow-2-0-batched-dataset
A little bit of background: I am loading about 60,000 images to Colab to train a GAN. I have already uploaded them to Drive, and the directory structure contains folders for the different classes (about 7-8) inside root. I am loading them to Colab as follows:

root = "drive/My Drive/data/images"
root = pathlib.Path(root)

list_ds = tf.data.Dataset.list_files(str(root/'*/*'))

for f in list_ds.take(3):
    print(f.numpy())

which gives the output:

b'drive/My Drive/data/images/folder_1/2994.jpg'
b'drive/My Drive/data/images/folder_1/6628.jpg'
b'drive/My Drive/data/images/folder_2/37872.jpg'

I am further processing them as follows:

def process_path(file_path):
    label = tf.strings.split(file_path, '/')[-2]
    image = tf.io.read_file(file_path)
    image = tf.image.decode_jpeg(image)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image  # , label

ds = list_ds.map(process_path)

BUFFER_SIZE = 60000
BATCH_SIZE = 128

train_dataset = ds.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

Each image is of size 128x128. Now, coming to the problem: when I try to view a batch in Colab, the execution goes on forever and never stops. For example, with this code:

for batch in train_dataset.take(4):
    print([arr.numpy() for arr in batch])

Earlier I thought batch_size might be the issue, so I tried changing it, but the problem remained. Can it be a problem due to Colab, since I am loading a large number of files? Or due to the size of the images, as it was working with MNIST (28x28)? If so, what are the possible solutions? Thanks in advance.

EDIT: After removing the shuffle statement, the last line executes within a few seconds. So I thought it could be a problem due to the BUFFER_SIZE of shuffle, but even with a reduced BUFFER_SIZE it again takes a very long time to execute. Any workaround?
Here is how I load a 1.12GB zipped FLICKR image dataset from my personal Google Drive. First, I unzip the dataset in the Colab environment. Two features that can speed up the performance are prefetch and AUTOTUNE. Additionally, I use the local Colab cache to store the processed images. This takes ~20 seconds to execute the first time (assuming you have unzipped the dataset); the cache then allows subsequent calls to load very fast.

Assuming you have authorized the Google Drive API, I start by unzipping the folder(s):

!unzip /content/drive/My\ Drive/Flickr8k
!unzip Flickr8k_Dataset
!ls

I then used your code with the addition of prefetch(), AUTOTUNE, and a cache file:

import pathlib
import tensorflow as tf


def prepare_for_training(ds, cache, BUFFER_SIZE, BATCH_SIZE):
    if cache:
        if isinstance(cache, str):
            ds = ds.cache(cache)
        else:
            ds = ds.cache()
    ds = ds.shuffle(buffer_size=BUFFER_SIZE)
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds


AUTOTUNE = tf.data.experimental.AUTOTUNE

root = "Flicker8k_Dataset"
root = pathlib.Path(root)

list_ds = tf.data.Dataset.list_files(str(root/'**'))

for f in list_ds.take(3):
    print(f.numpy())


def process_path(file_path):
    label = tf.strings.split(file_path, '/')[-2]
    img = tf.io.read_file(file_path)
    img = tf.image.decode_jpeg(img)
    img = tf.image.convert_image_dtype(img, tf.float32)
    # resize the image to the desired size.
    img = tf.image.resize(img, [128, 128])
    return img  # , label


ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
train_dataset = prepare_for_training(ds, cache="./custom_ds.tfcache", BUFFER_SIZE=600000, BATCH_SIZE=128)

for batch in train_dataset.take(4):
    print([arr.numpy() for arr in batch])

Here is a way to do it with keras flow_from_directory(). The benefit of this approach is that you avoid the tensorflow shuffle(), which, depending on the buffer size, may require processing the whole dataset. Keras gives you an iterator which you can call to fetch the data batch, and it has the random shuffling built in.

import pathlib
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

root = "Flicker8k_Dataset"
BATCH_SIZE = 128

train_datagen = ImageDataGenerator(
    rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    directory=root,  # This is the source directory for training images
    target_size=(128, 128),  # All images will be resized
    batch_size=BATCH_SIZE,
    shuffle=True,
    seed=42,  # for the shuffle
    classes=[''])

i = 4
for batch in range(i):
    [print(x[0]) for x in next(train_generator)]
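One more side note, based on the assumption that your code is reading the images straight from the mounted Drive path: listing and opening tens of thousands of small files directly from Drive is itself very slow in Colab, so copying the folder to the local VM disk first (which is effectively what unzipping into the Colab environment does above) often removes most of the delay. A rough sketch, with hypothetical paths you would adjust to your own layout:

import shutil

drive_root = "/content/drive/My Drive/data/images"  # mounted Drive folder (assumed path)
local_root = "/content/images_local"                # local Colab disk

# One-time copy from Drive to the local disk; the destination must not exist yet.
shutil.copytree(drive_root, local_root)

# Then build the dataset from the local copy instead of the Drive path, e.g.:
# list_ds = tf.data.Dataset.list_files(local_root + "/*/*")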
8
3