Dataset columns:
question_id: int64 (range 59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (range 1 to 410)
answer_vote: int64 (range -9 to 482)
70,268,140
2021-12-7
https://stackoverflow.com/questions/70268140/could-not-load-dynamic-library-libcublaslt-so-11-dlerror-libcublaslt-so-11
I just updated my graphics cards drives with sudo apt install nvidia-driver-470 sudo apt install cuda-drivers-470 I decided to install them in this manner because they were being held back when trying to sudo apt upgrade. I mistakenly then did sudo apt autoremove to cleanup old packages. After restarting my computer for new drivers to get setup properly, I could no longer use GPU acceleration with tensorflow. import tensorflow as tf tf.test.is_gpu_available() WARNING:tensorflow:From <stdin>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2021-12-07 16:52:01.771391: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-12-07 16:52:01.807283: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 16:52:01.807973: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-12-07 16:52:01.808017: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory 2021-12-07 16:52:01.808048: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory 2021-12-07 16:52:01.856391: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory 2021-12-07 16:52:01.856466: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory 2021-12-07 16:52:01.857601: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... False
You can create symlinks inside of /usr/lib/x86_64-linux-gnu directory. I found it by: $ whereis libcudart libcudart: /usr/lib/x86_64-linux-gnu/libcudart.so /usr/share/man/man7/libcudart.7.gz Within this folder you can find other versions of those cuda libraries. Then create symlinks like this. Your specific version that you are linking to might be slightly different. $ sudo ln -s libcublas.so.10.2.1.243 libcublas.so.11 $ sudo ln -s libcublasLt.so.10.2.1.243 libcublasLt.so.11 $ sudo ln -s libcusolver.so.10.2.0.243 libcusolver.so.11 $ sudo ln -s libcusparse.so.10.3.0.243 libcusparse.so.11 Now your GPU should be detected. import tensorflow as tf >>> tf.test.is_gpu_available() WARNING:tensorflow:From <stdin>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2021-12-07 17:07:26.914296: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-12-07 17:07:26.950731: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 17:07:27.029687: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 17:07:27.030421: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 17:07:27.325218: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 17:07:27.325642: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 17:07:27.326022: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-12-07 17:07:27.326408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /device:GPU:0 with 9280 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:06:00.0, compute capability: 8.6 True This method works because these cuda libraries are similar enough that even NVIDIA build them with symlinks often. If tensorflow is looking for libcublas.so.11, you can create a file with that name that just points to another version of libcublas that is already installed.
9
6
70,265,343
2021-12-7
https://stackoverflow.com/questions/70265343/python-fastapi-server-how-to-extend-connection-timeout
I'm using the FastAPI Python framework to build a simple POST/GET HTTPS server. On the client side, I send heartbeat POST messages every 10 seconds and I'd like to keep my connection open during this period. However, for some reason I see that on every new heartbeat my connection gets disconnected by the peer, so I need to re-establish it. If the idle period between 2 consecutive keepalives is 1 second, the connection remains active and can be reused. I'm using HTTP/1.1 with Connection: keep-alive, but it is entirely up to the server how long it will keep the connection alive, and I'm looking for a way to extend this timeout to ~15 seconds. Is there any suitable way to do it? Or even just make the server print a proper log message when it decides to disconnect the client peer... P.S. In order to start the server I'm using the following command; perhaps it needs to be modified? uvicorn main:app --port 44444 --host 0.0.0.0 --reload --ssl-keyfile ./key.pem --ssl-certfile ./certificate.pem --log-level debug
From the uvicorn docs: --timeout-keep-alive <int> - Close Keep-Alive connections if no new data is received within this timeout. Default: 5. But it's probably not a great idea to set this to 10 minutes. What is your problem with dropping the connection for a heartbeat?
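A minimal sketch of applying that option to the command from the question. The CLI flag is quoted from the uvicorn docs above; the programmatic form assumes timeout_keep_alive is the matching uvicorn Config keyword.
# Command line:
#   uvicorn main:app --port 44444 --host 0.0.0.0 --timeout-keep-alive 15
# Programmatic equivalent (assumed mapping of the flag to a keyword argument):
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=44444, timeout_keep_alive=15)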
8
12
70,262,199
2021-12-7
https://stackoverflow.com/questions/70262199/different-behavior-between-python-2-and-3-of-vars-in-list-comprehension
I am currently converting a script from Python 2 to Python 3. While debugging it, I stumbled upon a part of the code that behaves differently between both versions. However, I am unable to explain this difference. Here is a reproducer: variable_1 = "x" variable_2 = "y" list_of_variables = ['variable_1', 'variable_2'] existing_variables = vars() print([variable for variable in list_of_variables if variable in vars()]) print([variable for variable in list_of_variables if variable in existing_variables]) Python 2.7.18 shows the following output: ['variable_1', 'variable_2'] ['variable_1', 'variable_2'] Whereas Python 3.9.0 displays: [] ['variable_1', 'variable_2'] Why is the first list comprehension not working in Python 3? And why is it working when storing the content of vars() within a variable?
It's working in both: it's just working differently. In Python 3, list comprehensions create their own local scope to avoid leaking variable names into the calling scope. The call to vars() inside the list comprehension is just returning the variables defined in the list comprehension's own scope, not the scope where the list comprehension is used. From https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries: However, aside from the iterable expression in the leftmost for clause, the comprehension is executed in a separate implicitly nested scope. This ensures that names assigned to in the target list don’t “leak” into the enclosing scope.
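A small demonstration of that separate scope, as a sketch. The exact names shown are a CPython 3.9 implementation detail ('.0' is how the comprehension receives its iterator), so treat the printed output as illustrative.
variable_1 = "x"
list_of_variables = ['variable_1']

# Inside the comprehension, vars() only sees the comprehension's own locals,
# not the module-level names.
print([sorted(vars().keys()) for _ in list_of_variables])
# e.g. [['.0', '_']]  -- no 'variable_1' here

# Capturing vars() outside first gives the enclosing scope, which is why the
# second print in the question works.
existing_variables = vars()
print([v for v in list_of_variables if v in existing_variables])
# ['variable_1']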
4
10
70,259,880
2021-12-7
https://stackoverflow.com/questions/70259880/how-to-capture-messages-written-to-stderr-by-opencv
In case of invalid parameters, cv2.VideoWriter writes stuff to stderr. here is a minimal example: import cv2 cv2.VideoWriter("foo.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 90000, (240, 240)) Running like so python3 video_verification/verification.py 2> err.txt one can see the expected errors in err.txt: [mpeg4 @ 0x13e4640] timebase 1/90000 not supported by MPEG 4 standard, the maximum admitted value for the timebase denominator is 65535 [ERROR:0] global /tmp/pip-req-build-13uokl4r/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (2705) open Could not open codec mpeg4, error: Unspecified error [ERROR:0] global /tmp/pip-req-build-13uokl4r/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (2722) open VIDEOIO/FFMPEG: Failed to initialize VideoWriter However, I'm unable to capture this directly in Python. Here are my two (failing) attemps so far: import io from contextlib import redirect_stdout, redirect_stderr import cv2 f_out = io.StringIO() f_err = io.StringIO() with redirect_stdout(f_out): with redirect_stderr(f_err): fps = 90000 writer = cv2.VideoWriter( "foo.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (240, 240) ) print(f"stdout: {f_out.getvalue()}") print(f"stderr: {f_err.getvalue()}") import io import sys import cv2 def flush(): sys.stdout.flush() sys.stderr.flush() flush() original_stdout = sys.stdout original_stderr = sys.stderr captured_stdout = io.StringIO() captured_stderr = io.StringIO() sys.stdout = captured_stdout sys.stderr = captured_stderr flush() cv2.VideoWriter("foo.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 90000, (240, 240)) flush() sys.stdout = original_stdout sys.stderr = original_stderr print(f"stdout: {captured_stdout.getvalue()}") print(f"stderr: {captured_stderr.getvalue()}") I would love to learn what I'm doing wrong. :-)
I've found the wurlitzer library, which can do exactly that, i.e., capture the streams written to by a C library: import cv2 from wurlitzer import pipes with pipes() as (out, err): cv2.VideoWriter("foo.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 90000, (240, 240)) print(f"stderr: {err.read()}")
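For reference, a dependency-free sketch of the same idea: the FFmpeg messages go to the process-level stderr file descriptor (fd 2), which reassigning sys.stderr cannot intercept, so the descriptor itself has to be swapped. The helper name here is made up for illustration.
import os
import tempfile
import cv2

def capture_fd_stderr(func, *args, **kwargs):
    saved_fd = os.dup(2)                      # keep the original stderr fd
    with tempfile.TemporaryFile(mode="w+b") as tmp:
        os.dup2(tmp.fileno(), 2)              # point fd 2 at the temp file
        try:
            result = func(*args, **kwargs)
        finally:
            os.dup2(saved_fd, 2)              # restore stderr
            os.close(saved_fd)
        tmp.seek(0)
        return result, tmp.read().decode(errors="replace")

writer, err = capture_fd_stderr(
    cv2.VideoWriter, "foo.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 90000, (240, 240)
)
print(f"stderr: {err}")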
5
4
70,259,852
2021-12-7
https://stackoverflow.com/questions/70259852/what-is-the-difference-between-pathlib-and-os-path
Reading the official documentation of both libraries: os.path's "See also" says: The pathlib module offers high-level path objects. pathlib's "See also" says: For low-level path manipulation on strings, you can also use the os.path module. So os.path is for low-level use and pathlib for high-level use. But what does that mean? I don't understand the low vs high distinction in this context. There are many examples out there about pathlib vs os.path, for example Pathlib vs. os.path.join in Python, but none of them explains the low vs high concept. Is there an example that explains what exactly is low and what exactly is high? Is it correct that low level means the need to manipulate the path as a string object?
I see it as: use pathlib if you are interested in working with the path as an abstract entity, and use os.path if you're really interested in the resources represented by that path; in most cases you are interested in using the path to learn about or handle resources. High level here means 'the path as an abstract entity', low level means 'the resources represented by a path'. While os.path also has functionality for pathname manipulation etc., it's all there to enable you to get to the resources you're really interested in. From the pathlib docs: If you've never used this module before or just aren't sure which class is right for your task, Path is most likely what you need. It instantiates a concrete path for the platform the code is running on. Pure paths are useful in some special cases; for example: If you want to manipulate Windows paths on a Unix machine (or vice versa). You cannot instantiate a WindowsPath when running on Unix, but you can instantiate PureWindowsPath. You want to make sure that your code only manipulates paths without actually accessing the OS. In this case, instantiating one of the pure classes may be useful since those simply don't have any OS-accessing operations.
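A short illustrative sketch of that split: pure-path operations never touch the OS, while a concrete Path (and much of os.path) can.
import os.path
from pathlib import Path, PureWindowsPath

# Pure manipulation: works on any platform, never touches the filesystem
p = PureWindowsPath(r"C:\logs") / "app" / "today.log"
print(p.suffix, p.parent)                           # .log  C:\logs\app

# The same kind of manipulation with os.path on strings
s = os.path.join("/var/log", "app", "today.log")
print(os.path.splitext(s)[1], os.path.dirname(s))   # .log  /var/log/app

# A concrete Path can also query the real filesystem
print(Path("today.log").exists())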
7
4
70,254,535
2021-12-7
https://stackoverflow.com/questions/70254535/how-to-list-all-packages-in-apples-conda-channel
I know Apple provides tensorflow-metal on their conda -c apple channel. How do I see the rest of the packages in that channel? The "answers" I found online address finding a package to install, like conda search *tensorflow* but this is not what I want. Instead, I would like to see what packages Apple has in their -c apple channel that I can install: in short "browse apple repository", a concept that I'm surprised is so difficult to achieve under conda... p.s. I have tried conda search -c apple but it only lists all the packages from conda-forge, even after I did a conda config --add channels apple.
Web Browser You can browse all packages in an Anaconda Cloud channel by visiting: https://anaconda.org/<channel_name>/repo For the apple channel, the tensorflow-deps package is the only package (right now). Conda CLI Alternatively, one can also use conda search with something like: $ conda search --override-channels -c apple '*' Loading channels: done # Name Version Build Channel tensorflow-deps 2.5.0 0 apple tensorflow-deps 2.5.0 1 apple tensorflow-deps 2.6.0 0 apple tensorflow-deps 2.7.0 0 apple to list all packages. Note that the selector * must be escaped ('*') otherwise the shell will interpret it. PyPI The tensorflow-macos and tensorflow-metal packages are on PyPI, not Anaconda Cloud, and are uploaded by user Kulin Seth.
5
4
70,241,246
2021-12-6
https://stackoverflow.com/questions/70241246/is-it-possible-to-fit-a-scikit-learn-model-in-a-loop-or-with-an-iterator
Usually people use scikit-learn to train a model this way: from sklearn.ensemble import GradientBoostingClassifier as gbc clf = gbc() clf.fit(X_train, y_train) predicted = clf.predict(X_test) It works fine as long as users' memory is large enough to accommodate the entire dataset. The dilemma for me is exactly this--the dataset is too big for my memory. My current solution is to enlarge the virtual memory of my machine and I have already made the system extremely slow by having too much virtual memory--so I start to think whether or not is it possible to feed the fit() method with samples in batches like this (and the answre is no, please keep reading and stop reminding me that the answer is no): clf = gbc() for i in range(X_train.shape[0]): clf.fit(X_train[i], y_train[i]) so that I can read the training set from hard drive only when needed. I read the sklearn's manual and it seems to me that it does not support this: Calling fit() more than once will overwrite what was learned by any previous fit() So, is this possible?
After reading the section 6. Strategies to scale computationally: bigger data of the official manual mentioned by @StupidWolf in this post, I am aware that this question is more to this than meets the eye. The real difficulty is about the design of a lot of models. Take Random Forest as an example, one of the most important techniques used to improve its performance compared with the simpler Decision Tree is the application of bagging, which means that the algorithm has to pick some random samples from the entire dataset to construct several weak learners as the basis of the Random Forest. It means that feeding the model with one sample after another won't work with this design. Although it is still possible for scikit-learn to define an interface for end-users to implement so that scikit-learn can pick a random sample by calling this interface and end-users will decide how their implementation of the interface is about to return the needed data by scanning the dataset on the hard drive, it becomes way more complicated than I initially thought and the performance gain may not be very significant given that the IO-heavy "full table scan" (in database's term) is frequently needed.
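For completeness, a hedged sketch of the batch-wise interface scikit-learn does offer: estimators with a partial_fit method (e.g. SGDClassifier) can be fed chunks read from disk. Note this is a different estimator, not GradientBoostingClassifier, and load_batches_from_disk is a placeholder generator you would write yourself.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")      # logistic regression fitted with SGD ("log" in scikit-learn < 1.1)
classes = np.array([0, 1])                # all labels must be declared on the first call

for X_batch, y_batch in load_batches_from_disk():   # placeholder: yields (X, y) chunks
    clf.partial_fit(X_batch, y_batch, classes=classes)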
5
2
70,249,059
2021-12-6
https://stackoverflow.com/questions/70249059/sort-a-subset-of-columns-of-a-pandas-dataframe-alphabetically-by-column-name
I'm having trouble finding the solution to a fairly simple problem. I would like to alphabetically arrange certain columns of a pandas dataframe that has over 100 columns (i.e. so many that I don't want to list them manually). Example df: import pandas as pd subject = [1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,4,4,4,4,4,4] timepoint = [1,2,3,4,5,6,1,2,3,4,5,6,1,2,4,1,2,3,4,5,6] c = [2,3,4,5,6,7,3,4,1,2,3,4,5,4,5,8,4,5,6,2,3] d = [2,3,4,5,6,7,3,4,1,2,3,4,5,4,5,8,4,5,6,2,3] a = [2,3,4,5,6,7,3,4,1,2,3,4,5,4,5,8,4,5,6,2,3] b = [2,3,4,5,6,7,3,4,1,2,3,4,5,4,5,8,4,5,6,2,3] df = pd.DataFrame({'subject':subject, 'timepoint':timepoint, 'c':c, 'd':d, 'a':a, 'b':b}) df.head() subject timepoint c d a b 0 1 1 2 2 2 2 1 1 2 3 3 3 3 2 1 3 4 4 4 4 3 1 4 5 5 5 5 4 1 5 6 6 6 6 How could I rearrange the column names to generate a df.head() that looks like this: subject timepoint a b c d 0 1 1 2 2 2 2 1 1 2 3 3 3 3 2 1 3 4 4 4 4 3 1 4 5 5 5 5 4 1 5 6 6 6 6 i.e. keep the first two columns where they are and then alphabetically arrange the remaining column names. Thanks in advance.
You can split your your dataframe based on column names, using normal indexing operator [], sort alphabetically the other columns using sort_index(axis=1), and concat back together: >>> pd.concat([df[['subject','timepoint']], df[df.columns.difference(['subject', 'timepoint'])]\ .sort_index(axis=1)],ignore_index=False,axis=1) subject timepoint a b c d 0 1 1 2 2 2 2 1 1 2 3 3 3 3 2 1 3 4 4 4 4 3 1 4 5 5 5 5 4 1 5 6 6 6 6 5 1 6 7 7 7 7 6 2 1 3 3 3 3 7 2 2 4 4 4 4 8 2 3 1 1 1 1 9 2 4 2 2 2 2 10 2 5 3 3 3 3 11 2 6 4 4 4 4 12 3 1 5 5 5 5 13 3 2 4 4 4 4 14 3 4 5 5 5 5 15 4 1 8 8 8 8 16 4 2 4 4 4 4 17 4 3 5 5 5 5 18 4 4 6 6 6 6 19 4 5 2 2 2 2 20 4 6 3 3 3 3
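A more compact sketch of the same idea: select the fixed columns first and append the remaining column names in sorted order.
fixed = ['subject', 'timepoint']
df = df[fixed + sorted(c for c in df.columns if c not in fixed)]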
4
6
70,228,997
2021-12-4
https://stackoverflow.com/questions/70228997/how-to-connect-sqlalchemy-to-snowflake-database-using-oauth2
I need to connect to Snowflake using SQLAlchemy but the trick is, I need to authenticate using OAuth2. Snowflake documentation only describes connecting using username and password and this cannot be used in the solution I'm building. I can authenticate using Snowflake's python connector but I see no simple path how to glue it with SQLAlchemy. I'd like to know if there is a ready solution before I write a custom interface for this.
Use snowflake.connector.connect to create a PEP-249 Connection to the database - see documentation. Then use param creator of create_engine (docs) - it takes a callable that returns PEP-249 Connection. If you use it then URL param is ignored. Example code: def get_connection(): return snowflake.connector.connect( user="<username>", host="<hostname>", account="<account_identifier>", authenticator="oauth", token="<oauth_access_token>", warehouse="test_warehouse", database="test_db", schema="test_schema" ) engine = create_engine("snowflake://not@used/db", creator=get_connection)
5
5
70,242,667
2021-12-6
https://stackoverflow.com/questions/70242667/how-to-create-venv
I have been using Python v3.9 in my virtual environment; it contains all the packages and scripts I have been using for so long. But now, with the release of Python v3.10, it installed itself globally, though I wanted it to be installed in the same venv I was using for Python v3.9. So, could anyone help me with how I can install Python v3.10 in the same venv as my v3.9? My IDE is PyCharm.
Simply put all the dependencies of your Python 3.9 venv in a requirements.txt file: pip freeze > requirements.txt Create a new folder, move that file inside the newly created folder, then execute the following command; it will create a new virtual environment with Python 3.10: python -m venv newenv Activate the newly created environment with source newenv/bin/activate and then install the required dependencies with pip install -r requirements.txt Note: If your OS does not have the 'venv' module, simply install it by using pip install venv
7
14
70,235,264
2021-12-5
https://stackoverflow.com/questions/70235264/beautiful-soup-select-google-image-returns-empty-list
I would like to retrieve information from Google Arts & Culture using BeautifulSoup. I have checked many of the stackoverflow posts ([1], [2], [3], [4], [5]), and still couldn't retrieve the information. I would like each tile (picture)'s (li) information such as href, however, find_all and select one return empty list or None. Could you help me get the below href value of anchor tag of class "e0WtYb HpzMff PJLMUc" ? href="/entity/claude-monet/m01xnj?categoryId=artist" Below are what I had tried. import requests from bs4 import BeautifulSoup url = 'https://artsandculture.google.com/category/artist?tab=time&date=1850' html = requests.get(url) soup = BeautifulSoup(html.text, 'html.parser') print(soup.find_all('li', class_='DuHQbc')) # [] print(soup.find_all('a', class_='PJLMUc')) # [] print(soup.find_all('a', class_='e0WtYb HpzMff PJLMUc')) # [] print(soup.select_one('#tab_time > div > div:nth-child(2) > div > ul > li:nth-child(2) > a')) # None for elem in soup.find_all('a', class_=['e0WtYb', 'HpzMff', 'PJLMUc'], href=True): print(elem) # others with class 'e0WtYb' ... # and then something like elem['href'] https://artsandculture.google.com/category/artist?tab=time&date=1850 Copied selector from Chrome #tab_time > div > div:nth-child(2) > div > ul > li:nth-child(2) > a
Unfortunately, the problem is not that you're using BeautifulSoup wrong. The webpage that you're requesting appears to be missing its content! I saved html.text to a file for inspection: Why does this happen? Because the webpage actually loads its content using JavaScript. When you open the site in your browser, the browser executes the JavaScript, which adds all of the artist squares to the webpage. (You may even notice the brief moment during which the squares aren't there when you first load the site.) On the other hand, requests does NOT execute JavaScript—it just downloads the contents of the webpage and saves them to a string. What can you do about it? Unfortunately, this means that scraping the website will be really tough. In such cases, I would suggest looking for an alternative source of information or using an API provided by the website.
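As an illustration of that alternative route, a hedged sketch using Selenium so a real browser executes the JavaScript before BeautifulSoup parses the result. The class selector is the one from the question and may change at any time on Google's side.
import time
from bs4 import BeautifulSoup
from selenium import webdriver

url = "https://artsandculture.google.com/category/artist?tab=time&date=1850"
driver = webdriver.Chrome()          # requires a matching chromedriver to be available
driver.get(url)
time.sleep(5)                        # crude wait for the JavaScript to populate the tiles
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

for a in soup.select("a.e0WtYb.HpzMff.PJLMUc"):
    print(a.get("href"))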
5
2
70,235,696
2021-12-5
https://stackoverflow.com/questions/70235696/checking-folder-and-if-it-doesnt-exist-create-it
I am quite new to Python so I need some help. I would like to create code that checks if a folder with a given name exists; if not, it creates it. If it does exist, the code goes into it and checks whether there is another folder with a given name inside, and if not, creates it. In my code I don't know how to go to the folder that already exists or has just been created and repeat the IF. import os MYDIR = ("test") CHECK_FOLDER = os.path.isdir(MYDIR) if not CHECK_FOLDER: os.makedirs(MYDIR) print("created folder : ", MYDIR) else: print(MYDIR, "folder already exists.")
If you want to create a folder inside a second folder, or check the existence of them, you can use the builtin os library: import os PATH = 'folder_1/folder_2' if not os.path.exists(PATH): os.makedirs(PATH) os.makedirs() creates all folders in the PATH if they do not exist.
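A slightly shorter variant of the same thing: os.makedirs accepts exist_ok, so the existence check can be dropped, and pathlib offers equivalent behaviour.
import os
from pathlib import Path

os.makedirs('folder_1/folder_2', exist_ok=True)                # no error if it already exists
Path('folder_1/folder_2').mkdir(parents=True, exist_ok=True)   # pathlib equivalent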
11
28
70,232,617
2021-12-5
https://stackoverflow.com/questions/70232617/heatmap-error-nonetype-object-is-not-callable-when-using-with-dataframe
I have this issue with heatmap from seaborn. I don't know how, but seaborn.heatmap() refuses to take in a dataframe; it instead shows the mentioned error. Seaborn, matplotlib and pandas are up to date and I'm using Python 3.10 on Visual Studio. The code is just sample code from seaborn.heatmap itself: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt flights = sns.load_dataset("flights") flights = flights.pivot("month", "year", "passengers") ax=sns.heatmap(flights) plt.show()
Use Python 3.9 (or 3.8, 3.7, 3.6), as it seems like both pandas and matplotlib are not quite ready to be used with Python 3.10.
6
8
70,229,777
2021-12-4
https://stackoverflow.com/questions/70229777/can-you-pre-install-libraries-on-databricks-pool-nodes
We have a number of Python Databricks jobs that all use the same underlying Wheel package to install their dependencies. Installing this Wheel package even with a node that has been idling in a Pool still takes 90 seconds. Some of these jobs are very long-running so we would like to use Jobs computer clusters for the lower cost in DBUs. Some of these jobs are much shorter-running (<10 seconds) where the 90 second install time seems more significant. We have been considering using a hot cluster (All-Purpose Compute) for these shorter jobs. We would like to avoid the extra cost of the All-Purpose Compute if possible. Reading the Databricks documentation suggests that the Idle instances in the Pool are reserved for us but not costing us DBUs. Is there a way for us to pre-install the required libraries on our Idle instances so that when a job comes through we are able to immediately start processing it? Is there an alternate approach that can fulfill a similar use case?
You can't install libraries directly into nodes from pool, because the actual code is executed in the Docker container corresponding to Databricks Runtime. There are several ways to speedup installation of the libraries: Create your own Docker image with all necessary libraries pre-installed, and pre-load Databricks Runtime version and your Docker image - this part couldn't be done via UI, so you need to use REST API (see description of preloaded_docker_images attribute), databrick-cli, or Databricks Terraform provider. The main disadvantage of custom Docker images is that some functionality isn't available out of box, for example, arbitrary files in Repos, web terminal, etc. (don't remember full list) Put all necessary libraries and their dependencies onto DBFS and install them via cluster init script. It's very important that you collect binary dependencies, not packages only with the source code, so you won't need to compile them when installing. This could be done once: for Python this could be done with pip download --prefer-binary lib1 lib2 ... for Java/Scala you can use mvn dependency:get -Dartifact=<maven_coordinates>, that will download dependencies into ~/.m2/repository folder, from which you can copy jars to DBFS and in init script use cp /dbfs/.../jars/* /databricks/jars/ command for R, it's slightly more complicated, but is also doable
6
3
70,231,487
2021-12-5
https://stackoverflow.com/questions/70231487/output-dimensions-of-convolution-in-pytorch
The size of my input images are 68 x 224 x 3 (HxWxC), and the first Conv2d layer is defined as conv1 = torch.nn.Conv2d(3, 16, stride=4, kernel_size=(9,9)). Why is the size of the output feature volume 16 x 15 x 54? I get that there are 16 filters, so there is a 16 in the front, but if I use [(W−K+2P)/S]+1 to calculate dimensions, the dimensions are not divisible. Can someone please explain?
The calculation of the feature map size is [(W−K+2P)/S]+1, where the [] brackets mean floor division. In your example padding is zero, so the calculation is [(68-9+2*0)/4]+1 -> [14.75]+1 = 14+1 = 15 and [(224-9+2*0)/4]+1 -> [53.75]+1 = 53+1 = 54. import torch conv1 = torch.nn.Conv2d(3, 16, stride=4, kernel_size=(9,9)) input = torch.rand(1, 3, 68, 224) print(conv1(input).shape) # torch.Size([1, 16, 15, 54]) You may see different formulas to calculate feature map sizes, for example the one given in the PyTorch docs, but the result in both cases is the same.
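The same formula written out as a small helper, as a sketch using the values from the question, for checking other layer configurations:
import math

def conv_output_size(size, kernel, stride=1, padding=0):
    # [(W - K + 2P) / S] + 1, with [] meaning floor
    return math.floor((size - kernel + 2 * padding) / stride) + 1

print(conv_output_size(68, 9, stride=4))    # 15  (height)
print(conv_output_size(224, 9, stride=4))   # 54  (width)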
8
11
70,227,330
2021-12-4
https://stackoverflow.com/questions/70227330/maximum-path-sum-of-2-lists
My question is about this kata on Codewars. The function takes two sorted lists with distinct elements as arguments. These lists might or might not have common items. The task is find the maximum path sum. While finding the sum, if there any common items you can choose to change your path to the other list. The given example is like this: list1 = [0, 2, 3, 7, 10, 12] list2 = [1, 5, 7, 8] 0->2->3->7->10->12 => 34 0->2->3->7->8 => 20 1->5->7->8 => 21 1->5->7->10->12 => 35 (maximum path) I solved the kata but my code doesn't match the performance criteria so I get execution timed out. What can I do for it? Here is my solution: def max_sum_path(l1:list, l2:list): common_items = list(set(l1).intersection(l2)) if not common_items: return max(sum(l1), sum(l2)) common_items.sort() s = 0 new_start1 = 0 new_start2 = 0 s1 = 0 s2 = 0 for item in common_items: s1 = sum(itertools.islice(l1, new_start1, l1.index(item))) s2 = sum(itertools.islice(l2, new_start2, l2.index(item))) new_start1 = l1.index(item) new_start2 = l2.index(item) s += max(s1, s2) s1 = sum(itertools.islice(l1, new_start1, len(l1))) s2 = sum(itertools.islice(l2, new_start2, len(l2))) s += max(s1, s2) return s
Your algorithm is actually fast, just your implementation is slow. The two things that make it take overall O(n²) time: l1.index(item) always searches from the start of the list. Should be l1.index(item, new_start1). itertools.islice(l1, new_start1, ...) creates an iterator for l1 and iterates over the first new_start1 elements before it reaches the elements you want. So just use a normal list slice instead. Then it's just O(n log n) for the sorting and O(n) for everything else. And the sorting's O(n log n) is fast, might easily take less time than the O(n) part for any allowed input and even larger ones. Here's the rewritten version, gets accepted in about 6 seconds, just like the solutions from the other answers. def max_sum_path(l1:list, l2:list): common_items = list(set(l1).intersection(l2)) if not common_items: return max(sum(l1), sum(l2)) common_items.sort() s = 0 new_start1 = 0 new_start2 = 0 s1 = 0 s2 = 0 for item in common_items: next_start1 = l1.index(item, new_start1) # changed next_start2 = l2.index(item, new_start2) # changed s1 = sum(l1[new_start1 : next_start1]) # changed s2 = sum(l2[new_start2 : next_start2]) # changed new_start1 = next_start1 # changed new_start2 = next_start2 # changed s += max(s1, s2) s1 = sum(l1[new_start1:]) # changed s2 = sum(l2[new_start2:]) # changed s += max(s1, s2) return s Or you could use iterators instead of indexes. Here's your solution rewritten to do that, also gets accepted in about 6 seconds: def max_sum_path(l1:list, l2:list): common_items = sorted(set(l1) & set(l2)) s = 0 it1 = iter(l1) it2 = iter(l2) for item in common_items: s1 = sum(iter(it1.__next__, item)) s2 = sum(iter(it2.__next__, item)) s += max(s1, s2) + item s1 = sum(it1) s2 = sum(it2) s += max(s1, s2) return s I'd combine the last four lines into one, just left it like you had so it's easier to compare.
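For comparison, a hedged sketch of a two-pointer variant that walks both (already sorted) lists once, without building a set or sorting, in O(n + m) time:
def max_sum_path_two_pointers(l1, l2):
    i = j = 0
    s1 = s2 = total = 0
    while i < len(l1) and j < len(l2):
        if l1[i] < l2[j]:
            s1 += l1[i]; i += 1
        elif l1[i] > l2[j]:
            s2 += l2[j]; j += 1
        else:                       # common element: keep the better branch so far
            total += max(s1, s2) + l1[i]
            s1 = s2 = 0
            i += 1; j += 1
    s1 += sum(l1[i:])
    s2 += sum(l2[j:])
    return total + max(s1, s2)

print(max_sum_path_two_pointers([0, 2, 3, 7, 10, 12], [1, 5, 7, 8]))  # 35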
8
5
70,202,457
2021-12-2
https://stackoverflow.com/questions/70202457/sorting-multiple-lists-together-in-place
I have lists a,b,c,... of equal length. I'd like to sort all of them the order obtained by sorting a, i.e., I could do the decorate-sort-undecorate pattern a, b, c = map(list, zip(*sorted(zip(a, b, c)))) or something like that. However, I'd like that the lists are sorted in place (I assume that sorted pulls everything from the temporary iterator passed to it to a temporary list, and then zip stuff into three output lists, so every datum in the input is copied twice unnecessarily) without creating temporary objects. So what I don't mean is: a_sorted, b_sorted, c_sorted = map(list, zip(*sorted(zip(a, b, c)))) a[:] = a_sorted b[:] = b_sorted c[:] = c_sorted How can I achieve that?
I think "without creating temporary objects" is impossible, especially since "everything is an object" in Python. You could get O(1) space / number of objects if you implement some sorting algorithm yourself, though if you want O(n log n) time and stability, it's difficult. If you don't care about stability (seems likely, since you say you want to sort by a but then actually sort by a, b and c), heapsort is reasonably easy: def sort_together_heapsort(a, b, c): n = len(a) def swap(i, j): a[i], a[j] = a[j], a[i] b[i], b[j] = b[j], b[i] c[i], c[j] = c[j], c[i] def siftdown(i): while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: return swap(i, imax) i = imax for i in range(n // 2)[::-1]: siftdown(i) while n := n - 1: swap(0, n) siftdown(0) Anyway, if someone's interested in just saving some amount of memory, that can be done by decorating in-place (building tuples and storing them in a): def sort_together_decorate_in_a(a, b, c): for i, a[i] in enumerate(zip(a, b, c)): pass a.sort() for i, [a[i], b[i], c[i]] in enumerate(a): pass Or if you trust that list.sort will ask for keys for the elements in order (at least in CPython it does, already did so when the key parameter was introduced 18 years ago, and I suspect will keep doing so): def sort_together_iter_key(a, b, c): it = iter(a) b.sort(key=lambda _: next(it)) it = iter(a) c.sort(key=lambda _: next(it)) a.sort() Testing memory and time with three lists of 100,000 elements: 15,072,520 bytes 152 ms sort_together_sorted_zip 15,072,320 bytes 166 ms sort_together_sorted_zip_2 14,272,576 bytes 152 ms sort_together_sorted_zip_X 6,670,708 bytes 126 ms sort_together_decorate_in_a 6,670,772 bytes 177 ms sort_together_decorate_in_first_X 5,190,212 bytes 342 ms sort_multi_by_a_guest_X 1,597,400 bytes 100 ms sort_together_iter_key 1,597,448 bytes 102 ms sort_together_iter_key_X 744 bytes 1584 ms sort_together_heapsort 704 bytes 1663 ms sort_together_heapsort_X 168 bytes 1326 ms sort_together_heapsort_opti 188 bytes 1512 ms sort_together_heapsort_opti_X Note: The second solution is a shortened/improved version of yours, no need for temporary variables and conversions to lists. The solutions with _X suffix are versions that take arbitrarily many lists as parameters. The @a_guest is from their answer. Runtime-wise it currently benefits from my data being random, as that doesn't expose that solution's worst case complexity O(m * n²), where m is the number of lists and n is the length of each list. 
Testing memory and time with ten lists of 100,000 elements: 19,760,808 bytes 388 ms sort_together_sorted_zip_X 12,159,100 bytes 425 ms sort_together_decorate_in_first_X 5,190,292 bytes 1249 ms sort_multi_by_a_guest_X 1,597,528 bytes 393 ms sort_together_iter_key_X 704 bytes 4186 ms sort_together_heapsort_X 188 bytes 4032 ms sort_together_heapsort_opti_X The whole code (Try it online!): import tracemalloc as tm from random import random from timeit import timeit def sort_together_sorted_zip(a, b, c): a_sorted, b_sorted, c_sorted = map(list, zip(*sorted(zip(a, b, c)))) a[:] = a_sorted b[:] = b_sorted c[:] = c_sorted def sort_together_sorted_zip_2(a, b, c): a[:], b[:], c[:] = zip(*sorted(zip(a, b, c))) def sort_together_sorted_zip_X(*lists): sorteds = zip(*sorted(zip(*lists))) for lst, lst[:] in zip(lists, sorteds): pass def sort_together_decorate_in_a(a, b, c): for i, a[i] in enumerate(zip(a, b, c)): pass a.sort() for i, [a[i], b[i], c[i]] in enumerate(a): pass def sort_together_decorate_in_first_X(*lists): first = lists[0] for i, first[i] in enumerate(zip(*lists)): pass first.sort() for i, values in enumerate(first): for lst, lst[i] in zip(lists, values): pass def sort_together_iter_key(a, b, c): it = iter(a) b.sort(key=lambda _: next(it)) it = iter(a) c.sort(key=lambda _: next(it)) a.sort() def sort_together_iter_key_X(*lists): for lst in lists[1:]: it = iter(lists[0]) lst.sort(key=lambda _: next(it)) lists[0].sort() def sort_together_heapsort(a, b, c): n = len(a) def swap(i, j): a[i], a[j] = a[j], a[i] b[i], b[j] = b[j], b[i] c[i], c[j] = c[j], c[i] def siftdown(i): while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: return swap(i, imax) i = imax for i in range(n // 2)[::-1]: siftdown(i) while n := n - 1: swap(0, n) siftdown(0) def sort_together_heapsort_X(*lists): a = lists[0] n = len(a) def swap(i, j): for lst in lists: lst[i], lst[j] = lst[j], lst[i] def siftdown(i): while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: return swap(i, imax) i = imax for i in range(n // 2)[::-1]: siftdown(i) while n := n - 1: swap(0, n) siftdown(0) def sort_together_heapsort_opti(a, b, c): # Avoid inner functions and range-loop to minimize memory. # Makes it faster, too. But duplicates code. Not recommended. n = len(a) i0 = n // 2 - 1 while i0 >= 0: i = i0 while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: break a[i], a[imax] = a[imax], a[i] b[i], b[imax] = b[imax], b[i] c[i], c[imax] = c[imax], c[i] i = imax i0 -= 1 while n := n - 1: a[0], a[n] = a[n], a[0] b[0], b[n] = b[n], b[0] c[0], c[n] = c[n], c[0] i = 0 while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: break a[i], a[imax] = a[imax], a[i] b[i], b[imax] = b[imax], b[i] c[i], c[imax] = c[imax], c[i] i = imax def sort_together_heapsort_opti_X(*lists): # Avoid inner functions and range-loop to minimize memory. # Makes it faster, too. But duplicates code. Not recommended. 
a = lists[0] n = len(a) i0 = n // 2 - 1 while i0 >= 0: i = i0 while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: break for lst in lists: lst[i], lst[imax] = lst[imax], lst[i] i = imax i0 -= 1 while n := n - 1: for lst in lists: lst[0], lst[n] = lst[n], lst[0] i = 0 while (kid := 2*i+1) < n: imax = kid if a[kid] > a[i] else i kid += 1 if kid < n and a[kid] > a[imax]: imax = kid if imax == i: break for lst in lists: lst[i], lst[imax] = lst[imax], lst[i] i = imax def sort_multi_by_a_guest_X(a, *lists): indices = list(range(len(a))) indices.sort(key=lambda i: a[i]) a.sort() for lst in lists: for i, j in enumerate(indices): while j < i: j = indices[j] lst[i], lst[j] = lst[j], lst[i] funcs = [ sort_together_sorted_zip, sort_together_sorted_zip_2, sort_together_sorted_zip_X, sort_together_decorate_in_a, sort_together_decorate_in_first_X, sort_multi_by_a_guest_X, sort_together_iter_key, sort_together_iter_key_X, sort_together_heapsort, sort_together_heapsort_X, sort_together_heapsort_opti, sort_together_heapsort_opti_X, ] n = 100000 a0 = [random() for _ in range(n)] b0 = [x + 1 for x in a0] c0 = [x + 2 for x in a0] for _ in range(3): for func in funcs: a, b, c = a0[:], b0[:], c0[:] time = timeit(lambda: func(a, b, c), number=1) assert a == sorted(a0) assert b == sorted(b0) assert c == sorted(c0) a, b, c = a0[:], b0[:], c0[:] tm.start() func(a, b, c) memory = tm.get_traced_memory()[1] tm.stop() print(f'{memory:10,} bytes {int(time * 1e3):4} ms {func.__name__}') print()
11
11
70,227,908
2021-12-4
https://stackoverflow.com/questions/70227908/iterating-over-rows-in-a-dataframe-in-pandas-is-there-a-difference-between-usin
When iterating through rows in a dataframe in Pandas, is there a difference in performance between using: for index in df.index: .... And: for index, row in df.iterrows(): .... ? Which one should be preferred?
When we do a for loop over df.index, looking up the data requires an additional loc call: for index in df.index: value = df.loc[index, 'col'] When we use df.iterrows: for index, row in df.iterrows(): value = row['col'] Since you are already using pandas, neither of them is recommended, unless you need certain functionality that cannot be vectorized. However, IMO, I prefer df.index.
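A small sketch of what "vectorize instead" means for a trivial case (the column names here are made up for illustration):
import pandas as pd

df = pd.DataFrame({"col": [1, 2, 3]})

# looping, whether via df.index or df.iterrows(), processes one row at a time
doubled_loop = [df.loc[i, "col"] * 2 for i in df.index]

# the vectorized form operates on the whole column at once and is usually much faster
df["doubled"] = df["col"] * 2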
5
5
70,207,122
2021-12-2
https://stackoverflow.com/questions/70207122/fastapi-some-requests-are-failing-due-to-10s-timeout
We have deployed a model prediction service in production that is using FastAPI and unfortunately, some of the requests are failing due to a 10s timeout. In terms of concurrent requests, we typically only load about 2/3 requests per second, so I wouldn't think that would be too much strain on FastAPI. The first thing we tried to do is isolate the FastAPI framework from the model itself, and when we performed some tracing, we noticed that a lot of time (6 seconds) was spent on this segment: starlette.exceptions:ExceptionMiddleware.__call__. The gunicorn configuration we are using didn't seem to help either: """gunicorn server configuration.""" import os ​ threads = 2 workers = 4 timeout = 60 keepalive = 1800 graceful_timeout = 1800 bind = f":{os.environ.get('PORT', '80')}" worker_class = "uvicorn.workers.UvicornWorker" Would really appreciate some guidance on what the above segment implies and what is causing timeout issues for some requests under a not too strenuous load.
"guidance on what the above segment implies" Here you have the official gunicorn config file with a lot of explanations included. Since you use gunicorn to manage uvicorn workers, forcing your timeout to 60 sec should work just fine for long-running tasks (you should think about using an asynchronous task queue or job queue like Celery). But what is your route returning? The first thing would be to see the error thrown by your API. starlette.exceptions:ExceptionMiddleware.__call__ Since you have expanded the list, you can see that what takes the most time (as expected) is not FastAPI nor Starlette but your function in app.api.routes.predictions. "so I wouldn't think that would be too much strain on FastAPI" It is not too much strain on FastAPI, since it is not involved in processing your request. Remember that FastAPI is "just" a framework, so when your function takes time it's your function/development that's at fault. Here it can be one or a combination of these things that cause long-running tasks: a sync route, a blocking I/O call or heavy processing in your route function, a prediction algorithm that takes a lot of time (too much, maybe), or a bad worker class configuration for your type of workload. When you do AI or NLP work, it often takes a lot of processing time; regarding the integration of such models in an API, you usually use a task queue like Celery. If your API is not at fault and your route is not returning an error, just that it takes a lot of time, you should look at implementing a task queue.
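One of the points above (a blocking call inside an async route) can be illustrated with a hedged sketch; predict here is a placeholder for the slow model call, not the actual code from the question.
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool

app = FastAPI()

def predict(payload: dict) -> dict:
    # placeholder for the blocking model inference
    ...

@app.post("/predict")
async def predict_route(payload: dict):
    # run the blocking work in a thread pool so the event loop keeps serving other requests
    result = await run_in_threadpool(predict, payload)
    return {"result": result}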
7
2
70,215,049
2021-12-3
https://stackoverflow.com/questions/70215049/attributeerror-tfidfvectorizer-object-has-no-attribute-get-feature-names-out
Why do I keep on getting this error? I try other codes too, but once it uses the get_feature_names_out function it will pop out this error. Below is my code: from sklearn.datasets._twenty_newsgroups import fetch_20newsgroups from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB # fast to train and achieves a decent F-score from sklearn import metrics import numpy as np def show_top10(classifier, vectorizer, categories): feature_names = vectorizer.get_feature_names_out() for i, category in enumerate(categories): top10 = np.argsort(classifier.coef_[i])[-10:] print("%s: %s" % (category, " ".join(feature_names[top10]))) newsgroups_train = fetch_20newsgroups(subset='train') print(list(newsgroups_train.target_names)) cats = ['alt.atheism', 'sci.space', 'rec.sport.baseball', 'rec.sport.hockey'] newsgroups_train = fetch_20newsgroups(subset='train', categories=cats) print(list(newsgroups_train.target_names)) print(newsgroups_train.filenames.shape) vectorizer = TfidfVectorizer() vectors = vectorizer.fit_transform(newsgroups_train.data) print(vectors.shape)
This is probably because you are using an older scikit-learn version than the one this code was written for. get_feature_names_out is a method of the class sklearn.feature_extraction.text.TfidfVectorizer since scikit-learn 1.0. Previously, there was a similar method called get_feature_names. So you should update your scikit-learn package, or use the old method (not recommended).
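A small compatibility shim, as a sketch, that works with either scikit-learn version and reuses the names from the question:
import numpy as np

def get_feature_names_compat(vectorizer):
    if hasattr(vectorizer, "get_feature_names_out"):
        return vectorizer.get_feature_names_out()          # scikit-learn >= 1.0
    return np.asarray(vectorizer.get_feature_names())      # older releases return a list

feature_names = get_feature_names_compat(vectorizer)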
15
14
70,211,643
2021-12-3
https://stackoverflow.com/questions/70211643/understanding-sklearn-calibratedclassifiercv
Hi all I am having trouble understanding how to use the output of sklearn.calibration.CalibratedClassifierCV. I have calibrated my binary classifier using this method, and results are greatly improved. However I am not sure how to interpret the results. sklearn guide states that, after calibration, the output of predict_proba method can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that among the samples to which it gave a predict_proba value close to 0.8, approximately 80% actually belong to the positive class. Now I would like to reduce false positive by applying a cutoff at .6 for the model to predict label True. Without the calibration, I would have simply used my_model.predict_proba() > .6. However, it seems that after calibration the meaning of predict_proba has changed, so I am not sure if I can do that anymore. From a quick testing it seems that predict and predict_proba follow the same logic I would expect before calibration. The output of: pred = my_model.predict(valid_x) proba= my_model.predict_proba(valid_x) pd.DataFrame({"label": pred, "proba": proba[:,1]}) is the following: Where everything that has a probability of above .5 gets to be classifed as True, and everything below .5 as False. Can you confirm that, after calibration, I can still use predict_proba to apply a different cutoff to identify my labels? 2 https://scikit-learn.org/stable/modules/calibration.html#calibration
For me, you can actually use predict_proba() after calibration to apply a different cutoff. What happens within class CalibratedClassifierCV (as you noticed) is effectively that the output of predict() is based on the output of predict_proba() (see here for reference), i.e. np.argmax(self.predict_proba(X), axis=1) == self.predict(X). On the other side, for the non-calibrated classifier that you're passing to CalibratedClassifierCV (depending on whether it is a probabilistic classifier or not) the above equality may or may not hold (e.g. it does not for an SVC() classifier - see here, for instance, for some other details on this).
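So, in terms of the question's goal, applying the 0.6 cutoff would look like this (a sketch using the names from the question):
proba = my_model.predict_proba(valid_x)[:, 1]   # calibrated probability of the positive class
pred_at_60 = proba > 0.6                        # stricter threshold to reduce false positives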
7
2
70,213,579
2021-12-3
https://stackoverflow.com/questions/70213579/how-to-install-libappindicator1-on-python-of-docker-image
I'd like to install goole chrome on Python of Docker image. So, I need install libappindicator1. However when I build this Dockerfile, I got error on libappindicator1 Dockerfile FROM python:3.9 # Install manually all the missing libraries RUN apt-get update RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils fonts-takao-* # Install Chrome RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install Error message E: Unable to locate package libappindicator1 How can I install libappindicator1 on Python of Docker image?
I solved this problem by modifying the Python image tag: python:3.8 -> python:3.8-buster. When I used python:3.8-bullseye I got the same error, so this error seems to be related to Debian 11 (bullseye). Note: buster is Debian 10. This is the reason why Debian 11 (bullseye) cannot install libappindicator1 (from the bullseye release notes, 5.3.1 Noteworthy obsolete packages): The deprecated libappindicator libraries are no longer provided. As a result, the related packages libappindicator1, libappindicator3-1 and libappindicator-dev are no longer available. This is expected to cause dependency errors for third-party software that still depends on libappindicator to provide system tray and indicator support.
6
12
70,209,111
2021-12-3
https://stackoverflow.com/questions/70209111/how-to-swap-words-multiple-characters-in-a-string
Consider the following examples: string_now = 'apple and avocado' stringthen = string_now.swap('apple', 'avocado') # stringthen = 'avocado and apple' and: string_now = 'fffffeeeeeddffee' stringthen = string_now.swap('fffff', 'eeeee') # stringthen = 'eeeeefffffddffee' Approaches discussed in Swap character of string in Python do not work, as the mapping technique used there only takes one character into consideration. Python's built-in str.maketrans() also only supports one-character translations, as when I try to do multiple characters, it throws the following error: Traceback (most recent call last): File "main.py", line 4, in <module> s.maketrans(mapper) ValueError: string keys in translate table must be of length 1 A chain of replace() methods is not only far from ideal (since I have many replacements to do, chaining replaces would be a big chunk of code) but because of its sequential nature, it will not translate things perfectly as: string_now = 'apple and avocado' stringthen = string_now.replace('apple', 'avocado').replace('avocado', 'apple') gives 'apple and apple' instead of 'avocado and apple'. What's the best way to achieve this?
Given that we want to swap words x and y, and that we don't care about the situation where they overlap, we can: split the string on occurrences of x within each piece, replace y with x join the pieces with y Essentially, we use split points within the string as a temporary marker to avoid the problem with sequential replacements. Thus: def swap_words(s, x, y): return y.join(part.replace(y, x) for part in s.split(x)) Test it: >>> swap_words('apples and avocados and avocados and apples', 'apples', 'avocados') 'avocados and apples and apples and avocados' >>>
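An alternative sketch using a regular expression with a replacement callback, which also scales to swapping several words in a single pass (if one key is a prefix of another, sort the keys longest-first before building the pattern):
import re

def swap_words_re(s, mapping):
    # mapping: each word mapped to its replacement, e.g. {'apple': 'avocado', 'avocado': 'apple'}
    pattern = re.compile("|".join(re.escape(word) for word in mapping))
    return pattern.sub(lambda m: mapping[m.group(0)], s)

print(swap_words_re('apple and avocado', {'apple': 'avocado', 'avocado': 'apple'}))
# avocado and apple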
10
17
70,205,486
2021-12-2
https://stackoverflow.com/questions/70205486/clickable-hyperlinks-in-plotly-dash-datatable
There are several similar questions out there that I will reference in this, but I have a Dash DataTable with a column that I would like to make a clickable hyperlink. The table essentially looks like so: Date Ticket ID Work Order Link (s) 2018-08-30 22:52:25 1444008 119846184 google.com/woNum=119846184 2021-09-29 13:33:49 1724734 122445397, 122441551 google.com/woNum=122445397, google.com/woNum=122441551 Without the hyperlinks, I am creating the table through a Pandas dataframe and the Data and Column references for Dash DataTable like this: # works fine searchFrame = searchFrame.drop(columns=['ContentNoStop']) columns = [{'name': col, 'id': col} for col in searchFrame.columns] The links are created via: woLink = r'http://corp.com/uniqueid=' df['WO Link'] = df['Work Order'].str.replace('(\d+)', rf'{woLink}\1') crLink = r'http://corp.com/uniqueid=' df['Ticket Link'] = crLink + df['Ticket ID'].astype(str) Now, following this question from Plotly forums, I edited to fit mine: columns = [ {'name': col, 'id': col} for col in searchFrame.loc[ :, [ 'Reported Date', 'Site','Ticket ID', 'Ticket Link', 'Work Order',\ 'Score', 'Level', 'First', 'Last', 'Department', \ 'Detection', 'Code', 'Content', 'Description', 'Owner', 'Owner Group', 'Document ID' ], { "name": "WO Link", "id": "WO Link", 'type': 'text', "presentation" :'markdown' } ], ] Directly copying that code throws a Syntax Error: Invalid Syntax. So, I edited to: columns = [ {'name': col, 'id': col} for col in searchFrame.loc[ :, [ 'Reported Date', 'Site','Ticket ID', 'Ticket Link', 'Work Order',\ 'Score', 'Level', 'First', 'Last', 'Department', \ 'Detection', 'Code', 'Content', 'Description', 'Owner', 'Owner Group', 'Document ID' ], { "name": "WO Link", "id": "WO Link", 'type': 'text', "presentation" :'markdown' } ]] However, this throws a raise IndexingError("Too many indexers") pandas.core.indexing.IndexingError: Too many indexers Using this SO question, I did the following which also threw the Too many indexers. The other answer on that question I tried as well (below) threw TypeError: unhashable type: 'slice' idx = pd.IndexSlice columns = [ {'name': col, 'id': col} for col in searchFrame.loc[idx[ :, [ 'Reported Date', 'Site','Ticket ID', 'Ticket Link', 'Work Order',\ 'Score', 'Level', 'First', 'Last', 'Department', \ 'Detection', 'Code', 'Content', 'Description', 'Owner', 'Owner Group', 'Document ID' ], { "name": "WO Link", "id": "WO Link", 'type': 'text', "presentation" :'markdown' } ], :] ] columns = [ {'name': col, 'id': col} for col in searchFrame.loc(axis=0)[ :, [ 'Reported Date', 'Site','Ticket ID', 'Ticket Link', 'Work Order',\ 'Score', 'Level', 'First', 'Last', 'Department', \ 'Detection', 'Code', 'Content', 'Description', 'Owner', 'Owner Group', 'Document ID' ], { "name": "WO Link", "id": "WO Link", 'type': 'text', "presentation" :'markdown' } ]] What am I doing wrong here? I feel like it should not be super complicated to create links for these. Also, you will be my forever hero if you can accommodate for multiple links in some rows.
If you make sure that the links are in Markdown format the solution suggested in the Plotly Dash community forum should work: import dash import dash_html_components as html import dash_table import pandas as pd df = pd.DataFrame({ 'Date': ['2018-08-30 22:52:25', '2021-09-29 13:33:49'], 'Ticket ID': [1444008, 1724734], 'Work Order': ['119846184', '122445397'], 'Link(s)': ['[Google](https://www.google.com)', '[Twitter](https://www.twitter.com), [Facebook](https://www.facebook.com)'], }) app = dash.Dash() app.layout = html.Div(children=[ dash_table.DataTable( data=df.to_dict(orient='records'), columns=[{'id': x, 'name': x, 'presentation': 'markdown'} if x == 'Link(s)' else {'id': x, 'name': x} for x in df.columns], style_table={'position': 'relative', 'top': '5vh', 'left': '5vw', 'width': '60vw'} ), ]) if __name__ == '__main__': app.run_server(host='127.0.0.1', debug=True)
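To get from the question's raw "Work Order" column to that Markdown format, including rows with several IDs, something along these lines should work (a sketch; the URL prefix is the one from the question):
wo_link = 'http://corp.com/uniqueid='
df['WO Link'] = df['Work Order'].astype(str).apply(
    lambda wos: ', '.join(f'[{w.strip()}]({wo_link}{w.strip()})' for w in wos.split(','))
)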
6
5
70,196,164
2021-12-2
https://stackoverflow.com/questions/70196164/how-to-draw-oriented-edges-on-pyvis
I'm trying to plot an oriented graph with pyvis. In the documentation they suggest using the following command for creating an oriented edge: net.add_edge(4,1,from=1,to=4) The problems are two: I'm getting this error TypeError: add_edge() got multiple values for argument 'to' from is a python keyword so it can't be used as a parameter. Any suggestion?
You don't need to directly specify to and from in your add_edge function if you had specified directed=True when you created your network. The order of the nodes in the add_edge function is enough to describe the direction. Below is an example: from pyvis.network import Network net = Network(directed =True) net.add_node(0, label='a') net.add_node(1, label='b') net.add_edge(0,1) net.show('mygraph.html') And the output gives:
6
11
70,197,646
2021-12-2
https://stackoverflow.com/questions/70197646/python-add-weeks-to-date-from-df
How would I add two df columns together (date + weeks): This works for me: df['Date'] = pd.to_datetime(startDate, format='%Y-%m-%d') + datetime.timedelta(weeks = 3) But when I try to add weeks from a column, I get a type error: unsupported type for timedelta weeks component: Series df['Date'] = pd.to_datetime(startDate, format='%Y-%m-%d') + datetime.timedelta(weeks = df['Duration (weeks)']) Would appreciate any help thank you!
You can use the pandas to_timedelta function to transform the number of weeks column to a timedelta, like this: import pandas as pd import numpy as np # create a DataFrame with a `date` column df = pd.DataFrame( pd.date_range(start='1/1/2018', end='1/08/2018'), columns=["date"] ) # add a column `weeks` with a random number of weeks df['weeks'] = np.random.randint(1, 6, df.shape[0]) # use `pd.to_timedelta` to transform the number of weeks column to a timedelta # and add it to the `date` column df["new_date"] = df["date"] + pd.to_timedelta(df["weeks"], unit="W") df.head() date weeks new_date 0 2018-01-01 5 2018-02-05 1 2018-01-02 2 2018-01-16 2 2018-01-03 2 2018-01-17 3 2018-01-04 4 2018-02-01 4 2018-01-05 3 2018-01-26
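Applied to the names from the question itself (startDate and the 'Duration (weeks)' column), that becomes this one-line sketch:
df['Date'] = pd.to_datetime(startDate, format='%Y-%m-%d') + pd.to_timedelta(df['Duration (weeks)'], unit='W')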
5
4
70,197,944
2021-12-2
https://stackoverflow.com/questions/70197944/webdriver-object-has-no-attribute-switch-to-frame
I cannot switch to the successfully identified iframe(s). The script identifies the iframe (checked in the debugger), but the switch to the iframe fails and runs into the exception trap. A short while ago it worked perfectly.
Message='WebDriver' object has no attribute 'switch_to_frame'
What happened in the meantime? Chromedriver has been updated from version 95.0.4638.17 to ChromeDriver 96.0.4664.45. Is the Chromedriver no longer compatible with the latest Selenium version?
...
driver.switch_to.default_content()
try:
    # find the frame
    wait.until(EC.element_to_be_clickable((By.ID, "wysiwygTextarea_ifr")))
    frame2 = driver.find_element(By.XPATH, "//iframe[@id='wysiwygTextarea_ifr']");
    # switch to frame
    driver.switch_to.frame(frame2.tag_name);
    print("--------------iframe found-------------------");
except:
    print("--------------iframe not found-------------------");
...
While switching to a frame, the supported notations are:
Switch to a frame using the frame name: driver.switch_to.frame('frame_name')
Switch to a frame using the frame index: driver.switch_to.frame(1)
Switch to a frame using the frame element: driver.switch_to.frame(driver.find_elements_by_tag_name("iframe")[0])
Switch to the parent frame: driver.switch_to.parent_frame()
Switch to the default content: driver.switch_to.default_content()
This use case
To switch to the frame you have used driver.switch_to.frame(frame2.tag_name);, i.e. the TAG_NAME, which isn't supported. Hence you see the error:
Message='WebDriver' object has no attribute 'switch_to_frame'
Solution
You can use the following lines of code:
# find the frame
wait.until(EC.element_to_be_clickable((By.ID, "wysiwygTextarea_ifr")))
frame2 = driver.find_element(By.XPATH, "//iframe[@id='wysiwygTextarea_ifr']");
# switch to frame by frame element
driver.switch_to.frame(frame2);
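As an alternative sketch (not part of the original answer), Selenium's expected_conditions module also provides frame_to_be_available_and_switch_to_it, which waits for the frame and switches to it in a single step; the locator below reuses the iframe id from the question:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until the iframe is available, then switch to it in one call
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.ID, "wysiwygTextarea_ifr"))
)
# ... interact with elements inside the frame ...
driver.switch_to.default_content()  # switch back when done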
6
5
70,185,150
2021-12-1
https://stackoverflow.com/questions/70185150/return-all-possible-entity-types-from-spacy-model
Is there a method to extract all possible named entity types from a model in spaCy? You can manually figure it out by running on sample text, but I imagine there is a more programmatic way to do this? For example: import spacy model=spacy.load("en_core_web_sm") model.*returns_entity_types*
The statistical pipeline components like ner provide their labels under .labels: import spacy nlp = spacy.load("en_core_web_sm") nlp.get_pipe("ner").labels
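If you want the labels from every component in the pipeline (not just ner), a small sketch like the following should work; it iterates over the pipe names and collects the labels of each component that exposes them:
import spacy

nlp = spacy.load("en_core_web_sm")

# collect the labels of every pipeline component that exposes a `labels` attribute
labels_per_pipe = {
    name: nlp.get_pipe(name).labels
    for name in nlp.pipe_names
    if hasattr(nlp.get_pipe(name), "labels")
}
print(labels_per_pipe["ner"])  # the entity types, e.g. ('CARDINAL', 'DATE', ..., 'WORK_OF_ART')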
12
23
70,187,208
2021-12-1
https://stackoverflow.com/questions/70187208/in-pandas-how-to-pivot-a-dataframe-on-a-categorical-series-with-missing-categor
I have a pandas dataframe with a categorical series that has missing categories. In the example shown below, group has the categories "a", "b", and "c", but there are no cases of "c" in the dataframe. import pandas as pd dfr = pd.DataFrame({ "id": ["111", "222", "111", "333"], "group": ["a", "a", "b", "b"], "value": [1, 4, 9, 16]}) dfr["group"] = pd.Categorical(dfr["group"], categories=["a", "b", "c"]) dfr.pivot(index="id", columns="group") The resulting pivoted dataframe has columns a and b. I expected a c column containing all missing value as well. value group a b id 111 1.0 9.0 222 4.0 NaN 333 NaN 16.0 How can I pivot a dataframe on a categorical series to include columns with all categories, regardless of whether they were present in the original dataframe?
pd.pivot_table has a dropna argument which dictates dropping or not value columns full of NaNs. Try setting it to False: import pandas as pd dfr = pd.DataFrame({ "id": ["111", "222", "111", "333"], "group": ["a", "a", "b", "b"], "value": [1, 4, 9, 16]}) dfr["group"] = pd.Categorical(dfr["group"], categories=["a", "b", "c"]) pd.pivot_table(dfr, index="id", columns="group", dropna=False)
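If you prefer to keep using DataFrame.pivot, an alternative sketch (not part of the original answer) is to reindex the resulting columns with the full list of categories, so that "c" appears as an all-NaN column:
# pivot as before, then force the column axis to contain every category
pivoted = dfr.pivot(index="id", columns="group", values="value")
pivoted = pivoted.reindex(columns=dfr["group"].cat.categories)
print(pivoted)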
6
5
70,184,440
2021-12-1
https://stackoverflow.com/questions/70184440/how-can-i-unpack-a-pydantic-basemodel-into-kwargs
I am trying to make a function that takes a pydantic BaseModel as an input to run another function. I need to unpack the BaseModel into kwargs. I tried doing this: def run_routing_from_job(job): return run_routing( job.inputs.input_file, **job.inputs.config.dict() ) where job is of the format class Job(BaseModel): client_info: ClientInfo # Another BaseModel inputs: RoutingJobInputs # Another BaseModel uid: UUID = Field(default_factory=uuid4) status: str = "job_queued" result: int = None However, doing .dict() parses all of the items recursively into a dictionary format. I want to keep the client_info and inputs as a BaseModel class, not convert it into a dictionary. I could make a way to do it, but I can't find a clean way to do it.
I worked it out, just replace .dict() with __dict__ def run_routing_from_job(job): return run_routing( job.inputs.input_file, **job.inputs.config.__dict__ )
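The difference is that .dict() recursively converts nested models into plain dictionaries, while __dict__ simply exposes the instance's attribute dictionary, leaving nested BaseModel instances untouched (at least in Pydantic v1, which the question appears to use). A minimal sketch with hypothetical model names to illustrate:
from pydantic import BaseModel

class Config(BaseModel):       # hypothetical nested model
    depth: int

class Inputs(BaseModel):
    input_file: str
    config: Config

inputs = Inputs(input_file="data.csv", config=Config(depth=3))

print(type(inputs.dict()["config"]))    # <class 'dict'> – fully converted
print(type(inputs.__dict__["config"]))  # <class '__main__.Config'> – still a BaseModel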
7
3
70,184,091
2021-12-1
https://stackoverflow.com/questions/70184091/matplotlib-axes-axes-set-xticks-throws-set-ticks-takes-2-positional-arguments
This simple example from matplotlib throws the mentioned error:
https://matplotlib.org/stable/gallery/lines_bars_and_markers/barchart.html
I also cannot find any typo according to the documentation of set_xticks():
https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xticks.html
Can anyone provide clarification and some help, please?
Kind regards, Daniel
This error depends on the matplotlib version you use. The signature of set_xticks changed from matplotlib version 3.4.x to 3.5.x (3.5 added a labels argument, which the gallery example relies on). Try using the newer version. You can install it via
$ pip install matplotlib==3.5.0
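If upgrading isn't an option, a version-agnostic sketch is to set the tick positions and the tick labels in two separate calls, which works on both older and newer matplotlib releases (x and labels stand in for the arrays used in the gallery example):
import matplotlib.pyplot as plt
import numpy as np

labels = ['G1', 'G2', 'G3', 'G4', 'G5']
x = np.arange(len(labels))

fig, ax = plt.subplots()
# works on matplotlib < 3.5 as well: positions first, labels separately
ax.set_xticks(x)
ax.set_xticklabels(labels)
plt.show()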
11
12
70,111,401
2021-11-25
https://stackoverflow.com/questions/70111401/python-3-10-pattern-matching-pep-634-wildcard-in-string
I got a large list of JSON objects that I want to parse depending on the start of one of the keys, and just wildcard the rest. A lot of the keys are similar, like "matchme-foo" and "matchme-bar". There is a builtin wildcard, but it is only used for whole values, kinda like an else. I might be overlooking something but I can't find a solution anywhere in the proposal: https://docs.python.org/3/whatsnew/3.10.html#pep-634-structural-pattern-matching Also a bit more about it in PEP-636: https://www.python.org/dev/peps/pep-0636/#going-to-the-cloud-mappings My data looks like this: data = [{ "id" : "matchme-foo", "message": "hallo this is a message", },{ "id" : "matchme-bar", "message": "goodbye", },{ "id" : "anotherid", "message": "completely diffrent event" }, ...] I want to do something that can match the id without having to make a long list of |'s. Something like this: for event in data: match event: case {'id':'matchme-*'}: # Match all 'matchme-' no matter what comes next log.INFO(event['message']) case {'id':'anotherid'}: log.ERROR(event['message']) It's a relatively new addition to Python so there aren't many guides on how to use it yet.
You can use a guard. for event in data: match event: case {'id': x} if x.startswith("matchme"): # guard print(event["message"]) case {'id':'anotherid'}: print(event["message"]) Quoting from the official documentation: Guard We can add an if clause to a pattern, known as a “guard”. If the guard is false, match goes on to try the next case block. Note that value capture happens before the guard is evaluated: match point: case Point(x, y) if x == y: print(f"The point is located on the diagonal Y=X at {x}.") case Point(x, y): print(f"Point is not on the diagonal.") See also: PEP 622 - Guards PEP 636 - Adding conditions to patterns PEP 634 - Guards
29
38
70,177,062
2021-11-30
https://stackoverflow.com/questions/70177062/cartopy-not-able-to-identify-geos-for-proj-install-on-windows
I am trying to install Cartopy on Windows. I have installed all the dependencies from their website; however, when I go to run pip install Cartopy I get:
Complete output (5 lines):
setup.py:117: UserWarning: Unable to determine GEOS version. Ensure you have 3.7.2 or later installed, or installation may fail.
  warnings.warn(
setup.py:166: UserWarning: Unable to determine Proj version. Ensure you have 8.0.0 or later installed, or installation may fail.
  warnings.warn(
Proj version 0.0.0 is installed, but cartopy requires at least version 8.0.0
I have run and successfully completed
pip install proj
pip install geos
Installing Cartopy on Windows using pip is not trivial. Nevertheless, here is a cartopy installation overview using the method that worked for me, specifically for Windows and without using conda. Start by uninstalling proj, geos, and shapely if they are already installed, otherwise skip to step 2. This will facilitate linking them in later steps. pip uninstall shapely pip uninstall proj pip uninstall geos Install proj and geos from OSGeo4W. You cannot use pip to install these because pip points to other projects of the same name. Instead, use the OSGeo4W installer: https://trac.osgeo.org/osgeo4w/ Run as admin and use all the default options, including default installation directories (Advanced Install -> Install from Internet -> All Users -> Next -> Direct Connection -> download.osgeo.org). Then search proj, expand Libs and click the top two "skip"s (proj and proj-data) once each to toggle to the latest release. Now search geos, expand Libs again, and toggle the first "skip" (geos) once to the latest version. Then click next, allow the installer to load dependencies, and click next. The installation took ~5 minutes for me. You now have proj and geos installed. Install shapely from the .whl. You cannot use the method listed in the cartopy install instructions; it fails to link shapely to geos and you will get an error when importing cartopy. Instead, head to https://www.lfd.uci.edu/~gohlke/pythonlibs/#shapely https://pypi.org/project/shapely/#files and download the version that suits your python installation (e.g. if you run 64-bit python 3.10, download shapely‑2.0.2‑cp310‑cp310‑win_amd64.whl) Now you can run pip install \{path}\{to}\{whl}\{Shapely_file.whl} Install cartopy from the .whl. You can download one that suits you here: https://www.lfd.uci.edu/~gohlke/pythonlibs/#cartopy https://pypi.org/project/Cartopy/#files Pick the one that suits your system (e.g. if you run 64-bit python 3.10, download Cartopy‑0.22.0‑cp310‑cp310‑win_amd64.whl) Now you can run pip install \{path}\{to}\{whl}\{Cartopy_file.whl} That's it! It took me a long while and sifting through at least a couple dozen "just use conda" threads to figure this out. Select relevant discussions: https://github.com/SciTools/cartopy/issues/1471 https://towardsdatascience.com/install-shapely-on-windows-72b6581bb46c Note: This answer was initially tested and worked with shapely v1.8.2 and cartopy v0.20.2.
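Once the installation has finished, a quick way to confirm that cartopy and its GEOS/PROJ dependencies are wired up correctly is a minimal plot like the sketch below (assuming matplotlib is also installed):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# draw a simple PlateCarree map with coastlines to verify the install
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
plt.show()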
8
20
70,118,412
2021-11-26
https://stackoverflow.com/questions/70118412/keeping-endpoint-function-declarations-in-separate-modules-with-fastapi
I have an API that uses FastAPI. In a single file (main.py), I have the call to the function that creates the API from fastapi import FastAPI # ... app = FastAPI() As well as all the endpoints: @app.post("/sum") async def sum_two_numbers(number1: int, number2: int): return {'result': number1 + number2} But as the application gets larger, the file is becoming messy and hard to maintain. The obvious solution would be to keep function definitions in separate modules and just import them and use them in main.py, like this: from mymodules.operations import sum_two_numbers # ... @app.post("/sum") sum_two_numbers(number1: int, number2: int) Only that doesn't work. I don't know if I'm doing it wrong or it can't be done, but I get this error from VSCode: Expected function or class declaration after decorator | Pylance (My program has so many errors that I haven't seen the actual interpreter complaint, but if that's important, I can try debug it and post it here) So is this impossible to do and I have to stick to the one-file API, or it is possible to do, but I'm doing it wrong? If the second, what is the right way?
So is this impossible to do and I have to stick to the one-file API, or it is possible to do, but I'm doing it wrong?
A one-file API is just for demo/test purposes; in the real world you always build a multi-file API, especially with a framework like FastAPI where you use different types of files such as Pydantic models, DB models, etc.
If the second, what is the right way?
There is no "right way"; there are ways that fit your needs. You can follow the advanced user guide to see a good example.
What the docs suggest:
.
├── app
│   ├── __init__.py
│   ├── main.py
│   ├── dependencies.py
│   └── routers
│   │   ├── __init__.py
│   │   ├── items.py
│   │   └── users.py
│   └── internal
│       ├── __init__.py
│       └── admin.py
What I use when I have DB models, Pydantic models, etc.:
.
├── app
│   ├── __init__.py
│   ├── main.py
│   ├── dependencies.py
│   └── routers
│   │   ├── __init__.py
│   │   ├── items.py
│   │   └── users.py
│   └── models
│   │   ├── __init__.py
│   │   ├── items.py
│   │   └── users.py
│   └── schemas
│   │   ├── __init__.py
│   │   ├── items.py
│   │   └── users.py
│   └── internal
│       ├── __init__.py
│       └── admin.py
Here models represent DB models and schemas represent Pydantic models.
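To make the split concrete, here is a minimal sketch (not taken from the answer; the module and endpoint names are illustrative, reusing the /sum example from the question) of how endpoints defined in a separate module are wired into the main app with APIRouter:
# app/routers/operations.py
from fastapi import APIRouter

router = APIRouter()

@router.post("/sum")
async def sum_two_numbers(number1: int, number2: int):
    return {"result": number1 + number2}

# app/main.py
from fastapi import FastAPI
from app.routers import operations

app = FastAPI()
app.include_router(operations.router)  # the /sum endpoint is now part of the app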
7
6
70,098,351
2021-11-24
https://stackoverflow.com/questions/70098351/how-to-mock-a-http-request-post-in-python-unittest
First time write unittest. Production code: def get_session_token(organization_id): endpoint = get_endpoint(organization_id) response = requests.post( endpoint + "/some/url/part", data=json.dumps({ "Login": CONFIG[organization_id]["username"], "Password": CONFIG[organization_id]["password"], }), headers={"Accept": "application/json"} ) if response.status_code != 201: log.error("filename.get_session_token(%r): couldn't auth: %r %r", organization_id, response, response.text) raise ValueError() return response.json() def member(organization_id): session_key = get_session_token(organization_id) (some other code...) I need to test member. And I have test code: @patch('requests.post') def test_member(self, mock_post): mock_post().status_code = 201 mock_response = mock_post("some/url", data=ANY, headers={"Accept": "application/json"}) mock_response.status_code = 201 (some other code...) Every time I run test, it always raises ValueError()(which is a 403 error) How can I bypass that requests.post and just get a 201? Thank you!
See Where to patch. And, you should provide a mock return value for the mock_post using return_value - The value returned when the mock is called. E.g. member.py: import requests import json def get_session_token(organization_id): endpoint = 'http://localhost:3000/api' response = requests.post( endpoint + "/some/url/part", data=json.dumps({ "Login": 'python', "Password": '123456', }), headers={"Accept": "application/json"} ) if response.status_code != 201: raise ValueError() return response.json() def member(organization_id): session_key = get_session_token(organization_id) return session_key test_member.py: from unittest.mock import patch import unittest import json from member import member class TestMember(unittest.TestCase): @patch('member.requests.post') def test_member_success(self, mock_post): mock_post.return_value.status_code = 201 mock_post.return_value.json.return_value = 'mock response' actual = member(1) self.assertEqual(actual, 'mock response') mock_post.assert_called_once_with( 'http://localhost:3000/api/some/url/part', data=json.dumps({ "Login": 'python', "Password": '123456', }), headers={"Accept": "application/json"} ) @patch('member.requests.post') def test_member_failure(self, mock_post): mock_post.return_value.status_code = 400 self.assertRaises(ValueError, member, 1) if __name__ == '__main__': unittest.main() Test result: . ---------------------------------------------------------------------- Ran 1 test in 0.001s OK Name Stmts Miss Cover Missing ------------------------------------------------------------------------- src/stackoverflow/70098351/member.py 11 2 82% 18, 23 src/stackoverflow/70098351/test_member.py 11 0 100% ------------------------------------------------------------------------- TOTAL 22 2 91%
6
14
70,175,977
2021-11-30
https://stackoverflow.com/questions/70175977/multiprocessing-no-space-left-on-device
When i run the multiprocessing example on a OSX. I get the Error OSError: [Errno 28] No space left on device. The ENOSPC ("No space left on device") error will be triggered in any situation in which the data or the metadata associated with an I/O operation can't be written down anywhere because of lack of space. This doesn't always mean disk space – it could mean physical disk space, logical space (e.g. maximum file length), space in a certain data structure or address space. For example you can get it if there isn't space in the directory table (vfat) or there aren't any inodes left. It roughly means “I can't find where to write this down”. Source: https://stackoverflow.com/a/6999259/330658 What i don't understand, where files are written down in my code below? Any help highly appreicated. Example Code: #! /usr/bin/env python3 import sys import os import multiprocessing as mp import time def foo_pool(x): time.sleep(2) return x*x result_list = [] def log_result(result): result_list.append(result) print(result) def apply_async_with_callback(): pool = mp.Pool() for i in range(10): pool.apply_async(foo_pool, args = (i, ), callback = log_result) pool.close() pool.join() print(result_list) if __name__ == '__main__': apply_async_with_callback() Full Error: python3 test.py Traceback (most recent call last): File "test.py", line 32, in <module> apply_async_with_callback() File "test.py", line 23, in apply_async_with_callback pool = mp.Pool() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 191, in __init__ self._setup_queues() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 343, in _setup_queues self._inqueue = self._ctx.SimpleQueue() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 113, in SimpleQueue return SimpleQueue(ctx=self.get_context()) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/queues.py", line 342, in __init__ self._rlock = ctx.Lock() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 68, in Lock return Lock(ctx=self.get_context()) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 162, in __init__ SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 57, in __init__ sl = self._semlock = _multiprocessing.SemLock( OSError: [Errno 28] No space left on device
One possible reason to run into this error (as in my case), is that the system reaches a limit of allowed POSIX semaphores. This limit can be inspected by the sysctl kern.posix.sem.max command and is 10000 on my macOS 13.0.1. To set it, for example to 15000 until next reboot, you can use: sudo sysctl -w kern.posix.sem.max=15000 While this allowed the Python script to run, I wasn't able to find out which processes were actually using up the semaphores. The only way I found of listing this type of semaphores was lsof. It shows them as type PSXSEM, e.g.: sudo lsof | grep PSXSEM But it found only a couple of the semaphores -- not nearly enough to justify reaching the limit. So, I suspect a bug in the system, where semaphores are not cleaned up correctly. Further evidence to this is that after a reboot the script was able to run with the initial limit set.
5
8
70,141,277
2021-11-28
https://stackoverflow.com/questions/70141277/is-it-possible-to-use-python-3-10-in-google-colab
I would like to use the Structural Pattern Matching feature from Python 3.10 in Google Colab so using the commands !sudo apt-get install python3.10 !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1 !sudo update-alternatives --set python3 /usr/bin/python3.10 I was able to make !python --version output 3.10.0, but the print(sys.version) still outputs 3.7.12 in code cells and so the match cases statement raises SyntaxError number = 1 match number: case 0: print("Error") case _: print(number) Is there any way to make this work?
You can use this notebook:
make a copy of it
run the first cell
reload (Ctrl + R, or Cmd + R)
run the second cell
See this video demo by 1littlecoder.
16
6
70,136,985
2021-11-27
https://stackoverflow.com/questions/70136985/linting-error-on-bitbucket-typeerror-linterstats-object-is-not-subscriptable
I am using BitBucket pipelines to perform linting checks with pylint. It was working fine a few hours ago. I have been facing the following error even though the final score is well past the minimum criteria (8.0): Your code has been rated at 9.43/10 Traceback (most recent call last): File "/usr/local/bin/pylint-fail-under", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.6/dist-packages/pylint_fail_under/__main__.py", line 42, in main score = results.linter.stats["global_note"] TypeError: 'LinterStats' object is not subscriptable
Do not use pylint-fail-under; pylint has had a fail-under option since pylint 2.5.0, and pylint-fail-under's maintainer will not update their package for newer pylint versions. Change pylint-fail-under --fail_under 8.0 to pylint --fail-under=8.0 and remove the dependency on pylint-fail-under. See also https://github.com/PyCQA/pylint/issues/5405 and https://github.com/TNThieding/pylint-fail-under/issues/8#issuecomment-626369567
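If you prefer to keep the threshold out of the CI command line, the same option can also be set in pylint's configuration file. This is just a sketch for a .pylintrc (the section is named [MAIN] in newer pylint releases):
# .pylintrc
[MASTER]
fail-under=8.0
With that in place, a plain pylint my_package invocation exits with a non-zero status whenever the score drops below 8.0.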
6
9
70,127,649
2021-11-26
https://stackoverflow.com/questions/70127649/how-to-have-a-single-source-of-truth-for-poetry-and-pre-commit-package-version
I'm looking into this Python project template. They use poetry to define dev dependencies [tool.poetry.dev-dependencies] black = {version = "*", allow-prereleases = true} flake8 = "*" isort = "^5.6" mypy = ">0.900,<1" ... They use also pre-commit to check housekeeping things (e.g., formatting, linting, issorting, ...), both for git and for CI workflows: minimum_pre_commit_version: 2.8.0 default_stages: [commit, push, manual] repos: - repo: https://github.com/psf/black rev: 21.11b1 hooks: - id: black - repo: https://github.com/pycqa/flake8 rev: 4.0.1 hooks: - id: flake8 args: [--max-line-length=88] - repo: https://github.com/pycqa/isort rev: 5.10.1 hooks: - id: isort args: [--filter-files] - ... In my case, I definitely want a local version of dev packages managed by poetry for my IDE, and I'd like also to harness the pre-commit framework "as is", without switching to language: system. Working this way, I need to manage each package version in two different places. Is there a non-manual way to keep dev packages versions (i.e,. black, flake8, isort, mypy, ...) aligned to a single source of truth? A Coockiecutter template could be an option, but it looks overkilling.
Finally, I created a pre-commit hook to do the job: https://github.com/floatingpurr/sync_with_poetry Edit: If you use PDM, you can refer to this one: https://github.com/floatingpurr/sync_with_pdm This hook just keeps in sync the repos rev in .pre-commit-config.yaml with the packages version locked into poetry.lock. If you use: Poetry for dependency management local dev packages for your IDE (via Poetry) pre-commit hooks this meta-hook can be useful to (un-)bump repos rev for you.
27
20
70,128,335
2021-11-26
https://stackoverflow.com/questions/70128335/what-is-the-proper-way-to-make-an-object-with-unpickable-fields-pickable
For me what I do is detect what is unpickable and make it into a string (I guess I could have deleted it too but then it will falsely tell me that field didn't exist but I'd rather have it exist but be a string). But I wanted to know if there was a less hacky more official way to do this. Current code I use: def make_args_pickable(args: Namespace) -> Namespace: """ Returns a copy of the args namespace but with unpickable objects as strings. note: implementation not tested against deep copying. ref: - https://stackoverflow.com/questions/70128335/what-is-the-proper-way-to-make-an-object-with-unpickable-fields-pickable """ pickable_args = argparse.Namespace() # - go through fields in args, if they are not pickable make it a string else leave as it # The vars() function returns the __dict__ attribute of the given object. for field in vars(args): field_val: Any = getattr(args, field) if not dill.pickles(field_val): field_val: str = str(field_val) setattr(pickable_args, field, field_val) return pickable_args Context: I think I do it mostly to remove the annoying tensorboard object I carry around (but I don't think I will need the .tb field anymore thanks to wandb/weights and biases). Not that this matters a lot but context is always nice. Related: What does it mean for an object to be picklable (or pickle-able)? Python - How can I make this un-pickleable object pickleable? Edit: Since I decided to move away from dill - since sometimes it cannot recover classes/objects (probably because it cannot save their code or something) - I decided to only use pickle (which seems to be the recommended way to be done in PyTorch). So what is the official (perhaps optimized) way to check for pickables without dill or with the official pickle? Is this the best: def is_picklable(obj): try: pickle.dumps(obj) except pickle.PicklingError: return False return True thus current soln: def make_args_pickable(args: Namespace) -> Namespace: """ Returns a copy of the args namespace but with unpickable objects as strings. note: implementation not tested against deep copying. ref: - https://stackoverflow.com/questions/70128335/what-is-the-proper-way-to-make-an-object-with-unpickable-fields-pickable """ pickable_args = argparse.Namespace() # - go through fields in args, if they are not pickable make it a string else leave as it # The vars() function returns the __dict__ attribute of the given object. for field in vars(args): field_val: Any = getattr(args, field) # - if current field value is not pickable, make it pickable by casting to string if not dill.pickles(field_val): field_val: str = str(field_val) elif not is_picklable(field_val): field_val: str = str(field_val) # - after this line the invariant is that it should be pickable, so set it in the new args obj setattr(pickable_args, field, field_val) return pickable_args def make_opts_pickable(opts): """ Makes a namespace pickable """ return make_args_pickable(opts) def is_picklable(obj: Any) -> bool: """ Checks if somehting is pickable. Ref: - https://stackoverflow.com/questions/70128335/what-is-the-proper-way-to-make-an-object-with-unpickable-fields-pickable """ import pickle try: pickle.dumps(obj) except pickle.PicklingError: return False return True Note: one of the reasons I want something "offical"/tested is because I am getting pycharm halt on the try catch: How to stop PyCharm's break/stop/halt feature on handled exceptions (i.e. only break on python unhandled exceptions)? which is not what I want...I want it to only halt on unhandled exceptions.
What is the proper way to make an object with unpickable fields pickable? I believe the answer to this belongs in the question you linked -- Python - How can I make this un-pickleable object pickleable?. I've added a new answer to that question explaining how you can make an unpicklable object picklable the proper way, without using __reduce__. So what is the official (perhaps optimized) way to check for pickables without dill or with the official pickle? Objects that are picklable are defined in the docs as follows: None, True, and False integers, floating point numbers, complex numbers strings, bytes, bytearrays tuples, lists, sets, and dictionaries containing only picklable objects functions defined at the top level of a module (using def, not lambda) built-in functions defined at the top level of a module classes that are defined at the top level of a module instances of such classes whose dict or the result of calling getstate() is picklable (see section Pickling Class Instances for details). The tricky parts are (1) knowing how functions/classes are defined (you can probably use the inspect module for that) and (2) recursing through objects, checking against the rules above. There are a lot of caveats to this, such as the pickle protocol versions, whether the object is an extension type (defined in a C extension like numpy, for example) or an instance of a 'user-defined' class. Usage of __slots__ can also impact whether an object is picklable or not (since __slots__ means there's no __dict__), but can be pickled with __getstate__. Some objects may also be registered with a custom function for pickling. So, you'd need to know if that has happened as well. Technically, you can implement a function to check for all of this in Python, but it will be quite slow by comparison. The easiest (and probably most performant, as pickle is implemented in C) way to do this is to simply attempt to pickle the object you want to check. I tested this with PyCharm pickling all kinds of things... it doesn't halt with this method. The key is that you must anticipate pretty much any kind of exception (see footnote 3 in the docs). The warnings are optional, they're mostly explanatory for the context of this question. def is_picklable(obj: Any) -> bool: try: pickle.dumps(obj) return True except (pickle.PicklingError, pickle.PickleError, AttributeError, ImportError): # https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled return False except RecursionError: warnings.warn( f"Could not determine if object of type {type(obj)!r} is picklable" "due to a RecursionError that was supressed. " "Setting a higher recursion limit MAY allow this object to be pickled" ) return False except Exception as e: # https://docs.python.org/3/library/pickle.html#id9 warnings.warn( f"An error occurred while attempting to pickle" f"object of type {type(obj)!r}. Assuming it's unpicklable. The exception was {e}" ) return False Using the example from my other answer I linked above, you could make your object picklable by implementing __getstate__ and __setstate__ (or subclassing and adding them, or making a wrapper class) adapting your make_args_pickable... 
class Unpicklable: """ A simple marker class so we can distinguish when a deserialized object is a string because it was originally unpicklable (and not simply a string to begin with) """ def __init__(self, obj_str: str): self.obj_str = obj_str def __str__(self): return self.obj_str def __repr__(self): return f'Unpicklable(obj_str={self.obj_str!r})' class PicklableNamespace(Namespace): def __getstate__(self): """For serialization""" # always make a copy so you don't accidentally modify state state = self.__dict__.copy() # Any unpicklables will be converted to a ``Unpicklable`` object # with its str format stored in the object for key, val in state.items(): if not is_picklable(val): state[key] = Unpicklable(str(val)) return state def __setstate__(self, state): self.__dict__.update(state) # or leave unimplemented In action, I'll pickle a namespace whose attributes contain a file handle (normally not picklable) and then load the pickle data. # Normally file handles are not picklable p = PicklableNamespace(f=open('test.txt')) data = pickle.dumps(p) del p loaded_p = pickle.loads(data) # PicklableNamespace(f=Unpicklable(obj_str="<_io.TextIOWrapper name='test.txt' mode='r' encoding='cp1252'>"))
6
2
70,104,873
2021-11-25
https://stackoverflow.com/questions/70104873/how-to-access-relationships-with-async-sqlalchemy
import asyncio from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy import ForeignKey from sqlalchemy import func from sqlalchemy import Integer from sqlalchemy import String from sqlalchemy.ext.asyncio import AsyncSession from sqlalchemy.ext.asyncio import create_async_engine from sqlalchemy.future import select from sqlalchemy.orm import declarative_base from sqlalchemy.orm import relationship from sqlalchemy.orm import selectinload from sqlalchemy.orm import sessionmaker engine = create_async_engine( "postgresql+asyncpg://user:pass@localhost/db", echo=True, ) # expire_on_commit=False will prevent attributes from being expired # after commit. async_session = sessionmaker( engine, expire_on_commit=False, class_=AsyncSession ) Base = declarative_base() class A(Base): __tablename__ = "a" id = Column(Integer, primary_key=True) name = Column(String, unique=True) data = Column(String) create_date = Column(DateTime, server_default=func.now()) bs = relationship("B") # required in order to access columns with server defaults # or SQL expression defaults, subsequent to a flush, without # triggering an expired load __mapper_args__ = {"eager_defaults": True} class B(Base): __tablename__ = "b" id = Column(Integer, primary_key=True) a_id = Column(ForeignKey("a.id")) data = Column(String) async with engine.begin() as conn: await conn.run_sync(Base.metadata.drop_all) await conn.run_sync(Base.metadata.create_all) async with async_session() as session: async with session.begin(): session.add_all( [ A(bs=[B(), B()], data="a1"), A(bs=[B()], data="a2"), ] ) async with async_session() as session: result = await session.execute(select(A).order_by(A.id)) a1 = result.scalars().first() # no issue: print(a1.name, a1.data) # throws error: print(a1.bs) Trying to access a1.bs gives this error: 59 current = greenlet.getcurrent() 60 if not isinstance(current, _AsyncIoGreenlet): ---> 61 raise exc.MissingGreenlet( 62 "greenlet_spawn has not been called; can't call await_() here. " 63 "Was IO attempted in an unexpected place?" MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/14/xd2s)
This is how:
from sqlalchemy.orm import selectinload

async with async_session() as session:
    result = await session.execute(select(A).order_by(A.id)
                                   .options(selectinload(A.bs)))
    a = result.scalars().first()
    print(a.bs)
The key is using the selectinload eager-loading option to prevent implicit IO.
UPDATE
There are a few alternatives to selectinload, like joinedload and lazyload. I am still trying to understand the differences.
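As a brief, non-authoritative note on those alternatives: selectinload emits a second SELECT ... IN query for the related rows, while joinedload pulls them in with a LEFT OUTER JOIN; when joinedload targets a collection, SQLAlchemy expects you to de-duplicate the parent rows with .unique(). A sketch of the joinedload variant using the same models:
from sqlalchemy.orm import joinedload

async with async_session() as session:
    result = await session.execute(
        select(A).order_by(A.id).options(joinedload(A.bs))
    )
    # joinedload against a collection requires uniquing the result rows
    a = result.unique().scalars().first()
    print(a.bs)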
26
23
70,105,907
2021-11-25
https://stackoverflow.com/questions/70105907/django-full-test-suite-failing-when-adding-a-testcase-but-full-test-suite-pas
So this seems to be an issue talked about here and there on StackOverflow with no real solution. So I have a bunch of tests that all pass when run individual. They even pass when run as a full test suite, EXCEPT when I add in my TestCase ExploreFeedTest. Now ExploreFeedTest passes when run by itself and it actually doesn't fail when run in the full test suite as in running python manage.py test, it causes another test HomeTest to fail, which passes on it's own and passes when ExploreFeedTest is commented out from the init.py under the test folder. I hear this is an issue with Django not cleaning up data properly? All my TestCase classes are from django.test.TestCase, because apparently if you don't use that class Django doesn't teardown the data properly, so I don't really know how to solve this. I'm also running Django 3.2.9, which is supposedly the latest. Anyone have a solution for this? ExploreFeedTest.py from django.test import TestCase from django.urls import reverse from rest_framework import status class ExploreFeedTest(TestCase): Folder setup Here are some others having similar issue: why would a django test fail only when the full test suite is run? Inconsistent Django test results depending upon how the test is called in Django 1.5.1 running on Python 2.7.4 here's a small snippet of the failing test I am getting .............................FE.E.E.EFE.E.E.EFEFE.E.E.E.E.E.E.E.E.E.E.............................. ====================================================================== ERROR: test_all_posts_contains_post_by_user_followees_and_follow_goal (cheers.test.PostTests.HomeTest.HomeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\ProgramData\Anaconda3\envs\cheers\lib\site-packages\django\db\backends\utils.py", line 82, in _execute return self.cursor.execute(sql) psycopg2.errors.ForeignKeyViolation: insert or update on table "cheers_post" violates foreign key constraint "cheers_post_join_goal_id_da1e6957_fk_cheers_joingoal_uuid" DETAIL: Key (join_goal_id)=(0947b806-f9f8-4e2c-98df-6072eaa61533) is not present in table "cheers_joingoal". All the E I'm getting in the tests are essentially because of this. Keep in mind these all pass just fine when I comment out ExploreFeedTest and ExploreFeedTest passes successfully when ran alone. Also as you can see it's HomeTest failing. 
Here's an example of one of the tests that's failing due to this error as well class HomeTest(TestCase): @classmethod # Generates Test DB data to persist throughout all tests def setUpTestData(cls) -> None: cls.generate_count = 4 cls.access_token = get_test_user_access_token() cls.num_posts = 0 cls.user = create_test_user_in_DB() GoalCategory.objects.create(name='health', emoji_url='url') cls.goal = GoalFactory() cls.join_goal = JoinGoal.objects.create(joiner=cls.user, goal=cls.goal) # Posts by the current user for i in range(cls.generate_count): cls.num_posts += 1 PostFactory() cls.followee = User.objects.create(uuid=uuid.uuid4(), username='Test') followee_join_goal = JoinGoal.objects.create(joiner=cls.followee, goal=cls.goal) FollowUser.objects.create(followee=cls.followee, follower=cls.user) # Posts by user the current user is following for i in range(cls.generate_count): cls.num_posts += 1 Post.objects.create(creator=cls.followee, join_goal=followee_join_goal, type=PostType.UPDATE, body='test') random_user = User.objects.create(uuid=uuid.uuid4(), username='Random') cls.followed_join_goal = JoinGoal.objects.create(joiner=random_user, goal=cls.goal) FollowGoal.objects.create(follower=cls.user, join_goal=cls.followed_join_goal) # Posts of goal current user is following for i in range(cls.generate_count): cls.num_posts += 1 Post.objects.create(creator=random_user, join_goal=cls.followed_join_goal, type=PostType.UPDATE, body='test') cls.count = int(cls.num_posts / 2) - 1 cls.post_list = list(Post.objects.all().order_by('uuid')) cls.mid_idx, cls.mid_post = get_mid_idx_and_post_from_post_list(cls.post_list) def test_all_posts_contains_post_by_user_followees_and_follow_goal(self): response = self.client.get(reverse('get_initial_home_feed', kwargs={'count': self.num_posts}), **{'HTTP_AUTHORIZATION': f'bearer {self.access_token}'}) self.assertEqual(response.status_code, status.HTTP_200_OK) self.assertEqual(len(response.data), self.num_posts) posts_by_user = list(Post.objects.filter(creator=self.user).values_list('uuid', flat=True)) posts_by_followees = list(Post.objects.filter(creator=self.followee).values_list('uuid', flat=True)) posts_of_followed_join_goal = list( Post.objects.filter(join_goal=self.followed_join_goal).values_list('uuid', flat=True)) uuids_of_posts = [uuid.UUID(data['uuid']) for data in list(response.data)] self.assertTrue(all(uuid in uuids_of_posts for uuid in posts_by_user)) self.assertTrue(all(uuid in uuids_of_posts for uuid in posts_by_followees)) self.assertTrue(all(uuid in uuids_of_posts for uuid in posts_of_followed_join_goal)) Solutions I've tried so far: Change ordering of tests (ExploreFeedTest started failing and HomeTest all succeeded when I did this
I posted the answer on the Stack Overflow question Django - Serializer throwing "Invalid pk - object does not exist" when setting ManyToMany attribute where foreign keyed object does exist. I was also using factory_boy, which doesn't seem to play nicely with the test suite: the test runner doesn't seem to know how to roll back the DB without getting rid of the factory_boy-generated data.
5
0
70,118,990
2021-11-26
https://stackoverflow.com/questions/70118990/flask-fails-with-error-while-importing-x-an-importerror-was-raised-but-do
When starting a Flask app with: $ flask run I received the error: Error: While importing 'wsgi', an ImportError was raised. Usage: flask [OPTIONS] COMMAND [ARGS]...` ... However, there is no stack trace or other information provided. What is the best way to get the ImportError stack trace?
Import the Flask app at the Python interpreter prompt To see the ImportError stack trace, open a Python interpreter prompt and import the module that loads the Flask app (usually app.py or wsgi.py). If applicable, be sure that your virtual environment is activated. $ python >>> from my_app_folder import app Set the FLASK_APP environment variable If you can import the Flask app module using the Python interpreter without error, try setting the FLASK_APP environment variable to point to the Flask app module. $ FLASK_APP='my_app_folder/app' FLASK_ENV=development flask run
4
7
70,110,392
2021-11-25
https://stackoverflow.com/questions/70110392/mqtt-tls-certificate-verify-failed-self-signed-certificate
I am trying to implement TLS for MQTT and have followed the tutorial from the link below:
http://www.steves-internet-guide.com/mosquitto-tls/
I followed the instructions exactly to generate the certificates using openssl, copied them into the Mosquitto directory, changed the Mosquitto conf, and restarted the service. But when I try to connect to MQTT using TLS it shows the error message below:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1124)
And the Python code is:
client1 = paho.Client("control1")
client1.tls_set(ca_certs="ca.crt")
client1.tls_insecure_set(True)
client1.connect("localhost", 8883)
client1.loop_forever()
where ca.crt is in the project directory.
The message you are receiving indicates that the broker's server certificate is not trusted (because it is self-signed), therefore paho is not being correctly told it is trustworthy. It is possible your fake certificate authority's root certificate (the ca.crt file you feed to paho) is not properly signed or generated, or the certificates that Mosquitto is using are not signed correctly. Either way, you likely need to start the entire process over to be 100% certain everything was done right. Generate the fake certificate authority's (CA) signing key $ openssl genrsa -des3 -out ca.key 2048 Generate a certificate signing request for the fake CA $ openssl req -new -key ca.key -out ca-cert-request.csr -sha256 Give the organization a name like "Fake Authority" and do not enter a common name (since your fake CA does not actually live on a server with a name) Create the fake CA's root certificate $ openssl x509 -req -in ca-cert-request.csr -signkey ca.key -out ca-root-cert.crt -days 365 -sha256 Create the server / mqtt broker's keypair $ openssl genrsa -out server.key 2048 Create a certificate signing request using the server key to send to the fake CA for identity verification $ openssl req -new -key server.key -out server-cert-request.csr -sha256 Give the organization a name like "Localhost MQTT Broker Inc." and the common name should be localhost or the exact domain you use to connect to the mqtt broker Now acting as the fake CA, you receive the server's request for your signature. You have verified the server is who it says it is (an MQTT broker operating on localhost), so create a new certificate & sign it with all the power of your fake authority. $ openssl x509 -req -in server-cert-request.csr -CA ca-root-cert.crt -CAkey ca.key -CAcreateserial -out server.crt -days 360 Now you have everything you need. Make sure (as in Steve's tutorial) Mosquitto is loading the following in mosquitto.conf: listener 8883 cafile certs\ca-root-cert.crt keyfile certs\server.key certfile certs\server.crt Make sure paho-mqtt is loading the fake CA's root certificate. client1.tls_set(ca_certs="ca-root-cert.crt") This is how it knows that mosquitto's server.crt is legitimately signed by a "real and trusted authority" and is not "self-signed" and thus untrusted. Mosquitto and paho should now be able to securely connect and communicate.
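A quick, optional way to verify the broker's certificate chain independently of paho is to probe port 8883 with openssl's s_client (assuming openssl is available on the client machine):
$ openssl s_client -connect localhost:8883 -CAfile ca-root-cert.crt
# look for "Verify return code: 0 (ok)" in the output;
# any other code means the server certificate is still not trusted by that CA file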
7
13
70,118,623
2021-11-26
https://stackoverflow.com/questions/70118623/valueerror-after-attempting-to-use-onehotencoder-and-then-normalize-values-with
So I was trying to convert my data's timestamps from Unix timestamps to a more readable date format. I created a simple Java program to do so and write to a .csv file, and that went smoothly. I tried using it for my model by one-hot encoding it into numbers and then turning everything into normalized data. However, after my attempt to one-hot encode (which I am not sure if it even worked), my normalization process using make_column_transformer failed. # model 4 # next model import tensorflow as tf import matplotlib.pyplot as plt import pandas as pd import numpy as np from tensorflow.keras import layers from sklearn.compose import make_column_transformer from sklearn.preprocessing import MinMaxScaler, OneHotEncoder from sklearn.model_selection import train_test_split np.set_printoptions(precision=3, suppress=True) btc_data = pd.read_csv( "/content/drive/MyDrive/Science Fair/output2.csv", names=["Time", "Open"]) X_btc = btc_data[["Time"]] y_btc = btc_data["Open"] enc = OneHotEncoder(handle_unknown="ignore") enc.fit(X_btc) X_btc = enc.transform(X_btc) print(X_btc) X_train, X_test, y_train, y_test = train_test_split(X_btc, y_btc, test_size=0.2, random_state=62) ct = make_column_transformer( (MinMaxScaler(), ["Time"]) ) ct.fit(X_train) X_train_normal = ct.transform(X_train) X_test_normal = ct.transform(X_test) callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3) btc_model_4 = tf.keras.Sequential([ layers.Dense(100, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(1, activation="linear") ]) btc_model_4.compile(loss = tf.losses.MeanSquaredError(), optimizer = tf.optimizers.Adam()) history = btc_model_4.fit(X_train_normal, y_train, batch_size=8192, epochs=100, callbacks=[callback]) btc_model_4.evaluate(X_test_normal, y_test, batch_size=8192) y_pred = btc_model_4.predict(X_test_normal) btc_model_4.save("btc_model_4") btc_model_4.save("btc_model_4.h5") # plot model def plot_evaluations(train_data=X_train_normal, train_labels=y_train, test_data=X_test_normal, test_labels=y_test, predictions=y_pred): print(test_data.shape) print(predictions.shape) plt.figure(figsize=(100, 15)) plt.scatter(train_data, train_labels, c='b', label="Training") plt.scatter(test_data, test_labels, c='g', label="Testing") plt.scatter(test_data, predictions, c='r', label="Results") plt.legend() plot_evaluations() # plot loss curve pd.DataFrame(history.history).plot() plt.ylabel("loss") plt.xlabel("epochs") My normal data format is like so: 2015-12-05 12:52:00,377.48 2015-12-05 12:53:00,377.5 2015-12-05 12:54:00,377.5 2015-12-05 12:56:00,377.5 2015-12-05 12:57:00,377.5 2015-12-05 12:58:00,377.5 2015-12-05 12:59:00,377.5 2015-12-05 13:00:00,377.5 2015-12-05 13:01:00,377.79 2015-12-05 13:02:00,377.5 2015-12-05 13:03:00,377.79 2015-12-05 13:05:00,377.74 2015-12-05 13:06:00,377.79 2015-12-05 13:07:00,377.64 2015-12-05 13:08:00,377.79 2015-12-05 13:10:00,377.77 2015-12-05 13:11:00,377.7 2015-12-05 13:12:00,377.77 2015-12-05 13:13:00,377.77 2015-12-05 13:14:00,377.79 2015-12-05 13:15:00,377.72 2015-12-05 13:16:00,377.5 2015-12-05 13:17:00,377.49 2015-12-05 13:18:00,377.5 2015-12-05 13:19:00,377.5 2015-12-05 13:20:00,377.8 2015-12-05 13:21:00,377.84 2015-12-05 13:22:00,378.29 2015-12-05 13:23:00,378.3 2015-12-05 13:24:00,378.3 2015-12-05 13:25:00,378.33 2015-12-05 13:26:00,378.33 2015-12-05 13:28:00,378.31 2015-12-05 13:29:00,378.68 The first is 
the date and the second value after the comma is the price of BTC at that time. Now after "one-hot encoding", I added a print statement to print the value of those X values, and that gave the following value: (0, 0) 1.0 (1, 1) 1.0 (2, 2) 1.0 (3, 3) 1.0 (4, 4) 1.0 (5, 5) 1.0 (6, 6) 1.0 (7, 7) 1.0 (8, 8) 1.0 (9, 9) 1.0 (10, 10) 1.0 (11, 11) 1.0 (12, 12) 1.0 (13, 13) 1.0 (14, 14) 1.0 (15, 15) 1.0 (16, 16) 1.0 (17, 17) 1.0 (18, 18) 1.0 (19, 19) 1.0 (20, 20) 1.0 (21, 21) 1.0 (22, 22) 1.0 (23, 23) 1.0 (24, 24) 1.0 : : (2526096, 2526096) 1.0 (2526097, 2526097) 1.0 (2526098, 2526098) 1.0 (2526099, 2526099) 1.0 (2526100, 2526100) 1.0 (2526101, 2526101) 1.0 (2526102, 2526102) 1.0 (2526103, 2526103) 1.0 (2526104, 2526104) 1.0 (2526105, 2526105) 1.0 (2526106, 2526106) 1.0 (2526107, 2526107) 1.0 (2526108, 2526108) 1.0 (2526109, 2526109) 1.0 (2526110, 2526110) 1.0 (2526111, 2526111) 1.0 (2526112, 2526112) 1.0 (2526113, 2526113) 1.0 (2526114, 2526114) 1.0 (2526115, 2526115) 1.0 (2526116, 2526116) 1.0 (2526117, 2526117) 1.0 (2526118, 2526118) 1.0 (2526119, 2526119) 1.0 (2526120, 2526120) 1.0 Following fitting for normalization, I receive the following error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/sklearn/utils/__init__.py in _get_column_indices(X, key) 408 try: --> 409 all_columns = X.columns 410 except AttributeError: 5 frames AttributeError: columns not found During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/sklearn/utils/__init__.py in _get_column_indices(X, key) 410 except AttributeError: 411 raise ValueError( --> 412 "Specifying the columns using strings is only " 413 "supported for pandas DataFrames" 414 ) ValueError: Specifying the columns using strings is only supported for pandas DataFrames Am I one-hot encoding correctly? What is the appropriate way to do this? Should I directly implement the one-hot encoder in my normalization process?
Using OneHotEncoder is not the way to go here; it's better to extract features from the Time column as separate features like year, month, day, hour, minutes, etc., and give these columns as input to your model.
btc_data['Year'] = btc_data['Date'].astype('datetime64[ns]').dt.year
btc_data['Month'] = btc_data['Date'].astype('datetime64[ns]').dt.month
btc_data['Day'] = btc_data['Date'].astype('datetime64[ns]').dt.day
The issue here comes from the OneHotEncoder, which returns a scipy sparse matrix and gets rid of the column "Time", so to correct this you must transform the output back into a pandas DataFrame and re-add the "Time" column.
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(X_btc)
X_btc = enc.transform(X_btc)
X_btc = pd.DataFrame(X_btc.todense())
X_btc["Time"] = btc_data["Time"]
One way to work around the memory issue is:
Generate two indexes with the same random_state, one for the pandas DataFrame and one for the scipy sparse matrix:
X_train, X_test, y_train, y_test = train_test_split(X_btc, y_btc, test_size=0.2, random_state=62)
X_train_pd, X_test_pd, y_train_pd, y_test_pd = train_test_split(btc_data, y_btc, test_size=0.2, random_state=62)
Use the pandas DataFrame for the MinMaxScaler():
ct = make_column_transformer((MinMaxScaler(), ["Time"]))
ct.fit(X_train_pd)
result_train = ct.transform(X_train_pd)
result_test = ct.transform(X_test_pd)
Use generators to load the data in the train and test phases (this will get rid of the memory issue) and include the scaled time in the generators:
def nn_batch_generator(X_data, y_data, scaled, batch_size):
    samples_per_epoch = X_data.shape[0]
    number_of_batches = samples_per_epoch / batch_size
    counter = 0
    index = np.arange(np.shape(y_data)[0])
    while True:
        index_batch = index[batch_size * counter:batch_size * (counter + 1)]
        scaled_array = scaled[index_batch]
        X_batch = X_data[index_batch, :].todense()
        y_batch = y_data.iloc[index_batch]
        counter += 1
        yield np.array(np.hstack((np.array(X_batch), scaled_array))), np.array(y_batch)
        if (counter > number_of_batches):
            counter = 0

def nn_batch_generator_test(X_data, scaled, batch_size):
    samples_per_epoch = X_data.shape[0]
    number_of_batches = samples_per_epoch / batch_size
    counter = 0
    index = np.arange(np.shape(X_data)[0])
    while True:
        index_batch = index[batch_size * counter:batch_size * (counter + 1)]
        scaled_array = scaled[index_batch]
        X_batch = X_data[index_batch, :].todense()
        counter += 1
        yield np.hstack((X_batch, scaled_array))
        if (counter > number_of_batches):
            counter = 0

Finally, fit the model:
history = btc_model_4.fit(nn_batch_generator(X_train, y_train, scaled=result_train, batch_size=2),
                          steps_per_epoch=#Todetermine, batch_size=2, epochs=10, callbacks=[callback])
btc_model_4.evaluate(nn_batch_generator(X_test, y_test, scaled=result_test, batch_size=2),
                     batch_size=2, steps=#Todetermine)
y_pred = btc_model_4.predict(nn_batch_generator_test(X_test, scaled=result_test, batch_size=2),
                             steps=#Todetermine)
5
4
70,118,083
2021-11-25
https://stackoverflow.com/questions/70118083/how-to-write-an-array-tag-in-a-variant-structure-on-an-openopc-server
I'm trying to communicate with an OPC DA server and need to write in a tag which is in an array format. We can connect with a simulation server, read tags (int, real, array) and write tags (int, real, str). The problem comes when we need to write in an array tag. The developper of the OpenOPC library (Barry Barnreiter) recommand to use a VARIANT variable because OPC "expect to see a Windows VARIANT structure when writing complex objects such as arrays". I did install Pywin32 (build 217) as suggested here. I tried to send a simple integer instead of an array in a VARIANT structure. Here's the code: from win32com.client import VARIANT import pythoncom import OpenOPC opc_local = OpenOPC.open_client() opc_local.connect('Matrikon.OPC.Simulation','localhost') values = VARIANT(pythoncom.VT_ARRAY | pythoncom.VT_R8, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) w = opc_local.write(('Bucket Brigade.ArrayOfReal8', values)) print(w) Here's the error that we get when the line with opc_local.write gets executed: AttributeError: 'module' object has no attribute 'VARIANT' Here's the entire traceback: runfile('C:/Users/nadmin/Downloads/sanstitre0.py', wdir='C:/Users/nadmin/Downloads') Traceback (most recent call last): File "<ipython-input-5-6799f41ab928>", line 1, in <module> runfile('C:/Users/nadmin/Downloads/sanstitre0.py', wdir='C:/Users/nadmin/Downloads') File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 95, in execfile exec(compile(scripttext, filename, 'exec'), glob, loc) File "C:/Users/nadmin/Downloads/sanstitre0.py", line 14, in <module> w = opc_local.write(('Bucket Brigade.ArrayOfReal8', values)) File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\Pyro\core.py", line 381, in __call__ return self.__send(self.__name, args, kwargs) File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\Pyro\core.py", line 456, in _invokePYRO return self.adapter.remoteInvocation(name, Pyro.constants.RIF_VarargsAndKeywords, vargs, kargs) File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\Pyro\protocol.py", line 497, in remoteInvocation return self._remoteInvocation(method, flags, *args) File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\Pyro\protocol.py", line 572, in _remoteInvocation answer.raiseEx() File "C:\Users\nadmin\AppData\Local\Continuum\anaconda2\lib\site-packages\Pyro\errors.py", line 72, in raiseEx raise self.excObj And here's the configuration of the computer: Windows 10 Python 2.7 Pyro 3.16 Pywin32 Build 223 OpenOPC 1.3.1 win32-py27
You have to change your line opc_local = OpenOPC.open_client() to opc_local = OpenOPC.client(). This will make you connect directly to the OPC server, as opposed to going through the OpenOPC Gateway Service. The VARIANT structure is not included inside the Gateway Service exe. Note that the Gateway Service exe is its own frozen Python distribution, so it only includes the Python modules it needs to run and nothing else. By avoiding the Gateway Service you should not have this problem, since you'll be executing your code entirely with the Python distribution that you installed yourself on your PC.
8
3
70,098,133
2021-11-24
https://stackoverflow.com/questions/70098133/npm-error-cant-find-python-executable-in-macos-big-sur
I've been looking for the answer to this for a good solid week now, with no success. I've looked at every StackOverflow post, every article from Google and every related Github issue I could find. Most related errors seem to be older, so I'm wondering if my issue is slightly different due to me being on macOS Big Sur. The issue: When I try to run yarn install in my local repo, I receive an error related to node-gyp and a python executable that is unable to be found. Here is what my terminal shows: yarn install v1.22.17 ...other stuff [4/4] 🔨 Building fresh packages... [6/13] ⠐ node-sass [2/13] ⠐ node-sass [10/13] ⠐ metrohash [4/13] ⠐ fsevents error /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash: Command failed. Exit code: 1 Command: node-gyp rebuild Arguments: Directory: /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash Output: gyp info it worked if it ends with ok gyp info using [email protected] gyp info using [email protected] | darwin | x64 gyp ERR! configure error gyp ERR! stack Error: Can't find Python executable "/usr/local/opt/[email protected]/bin/python3", you can set the PYTHON env variable. gyp ERR! stack at PythonFinder.failNoPython (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/node-gyp/lib/configure.js:484:19) gyp ERR! stack at PythonFinder.<anonymous> (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/node-gyp/lib/configure.js:406:16) gyp ERR! stack at F (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/which/which.js:68:16) gyp ERR! stack at E (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/which/which.js:80:29) gyp ERR! stack at /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/which/which.js:89:16 gyp ERR! stack at /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/isexe/index.js:42:5 gyp ERR! stack at /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/isexe/mode.js:8:5 gyp ERR! stack at FSReqCallback.oncomplete (fs.js:167:21) gyp ERR! System Darwin 20.6.0 gyp ERR! command "/Users/jimmiejackson/.nvm/versions/node/v12.18.0/bin/node" "/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash/node_modules/.bin/node-gyp" "rebuild" gyp ERR! cwd /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash I'm not entirely sure what this error means or why this node module is searching for python3. I've tried running npm set config /path/to/python, downloading python3, setting the PYTHON path in my .zshrc profile, but nothing seems to be working. It's entirely possible that my lack of understanding of the issue means that I'm on the right path but didn't quite get something right. Any ideas?
Reading the gyp-node source might help. Here are some steps you can try. Install python2. Make sure that in the terminal, which -a python2 only returns one python2 and python2 -V returns the correct 2.x version. Override the PYTHON env variable: export PYTHON=python2. Rerun the install. If there's still an error, the error message will probably be different.
75
34
70,107,143
2021-11-25
https://stackoverflow.com/questions/70107143/rename-and-refactor-a-python-file-in-vs-code
On PyCharm you can right-click a file -> Refactor -> Rename. And it would rename that file and update all import statements in all files in the project. In VS Code, while I can rename and refactor symbols by highlighting them -> F2, this only works for modules, classes, their members, and variables. E.g. I have an utils/__init__.py with: from utils.readers import CSVReader utils/readers.py with: class CSVReader: pass And main.py with: from utils import CSVReader r = CSVReader() I would like to, for example, rename utils/readers.py -> utils/local_readers.py and have VS Code auto-update utils/__init__.py with: from utils.local_readers import CSVReader Dozens of google search results point to Move TS (for TypeScript only). Is there a similar extension for Python, or some built-in hotkey I missed?
Check out the enhanced module renaming coming in VS Code v1.63 (see the release notes: Python module rename refactoring). Module rename refactoring: You can now more easily rename modules with the Python and Pylance extensions. Once you rename a Python module, you’ll be prompted to choose whether you’d like to change all imports and references throughout your code. If you’re not sure, you can first preview what all the changes will look like before you make the decision. Once you’re confident, you can click on Apply Refactoring, or Discard Refactoring if you don't want the changes applied.
7
4
70,170,480
2021-11-30
https://stackoverflow.com/questions/70170480/how-to-scrape-product-data-from-website-pages-that-uses-graphql
I previously used the code below to scrape the search results for a word search, for example book, on https://www.walmart.com/. They have since changed their request and response parameters, and this code no longer gets any response. params = { 'query': 'book', 'cat_id': 0, 'ps': 24, 'offset': 0, 'prg': 'desktop', 'stores': re.search(r'store/(\d+)', url).group(1) } try: data1 = requests.get(api_url, params=params).json() except Exception as e: print("Sleeping for 10 seconds", e) time.sleep(10) try: data1 = requests.get(api_url, params=params).json() except Exception as e: print("sleeping for 60 seconds", e) time.sleep(60) try: data1 = requests.get(api_url, params=params).json() except Exception as e: print("sleeping for 360 seconds") time.sleep(360) data1 = requests.get(api_url, params=params).json() I want to get the JSON response for a product page, for example the product in this URL: https://www.walmart.com/ip/SKIPPY-Natural-Creamy-Peanut-Butter-Spread-15-oz/37447671 How could I rewrite the code with their current parameters to get the JSON response?
According to your question, to get the json response, You can follow my working solution as an example. Actually, the hidden api calls json response is here. The interesing matter is that the request method is post but it sends query string parameters & request payload/formdata and the next pages at the same time which type of response I face first time ever and I have to make both types of parameters to get desired json response. I've also made the pagination following json response and you can increase or decrease it according to json response maxpage. import requests import json data= { "query":"query Browse( $query:String $page:Int $prg:Prg! $facet:String $sort:Sort $catId:String! $max_price:String $min_price:String $module_search:String $affinityOverride:AffinityOverride $ps:Int $ptss:String $beShelfId:String $fitmentFieldParams:JSON ={}$fitmentSearchParams:JSON ={}$rawFacet:String $seoPath:String $trsp:String $fetchMarquee:Boolean! $fetchSkyline:Boolean! $additionalQueryParams:JSON ={}){search( query:$query page:$page prg:$prg facet:$facet sort:$sort cat_id:$catId max_price:$max_price min_price:$min_price module_search:$module_search affinityOverride:$affinityOverride additionalQueryParams:$additionalQueryParams ps:$ps ptss:$ptss trsp:$trsp _be_shelf_id:$beShelfId ){query searchResult{...BrowseResultFragment}}contentLayout( channel:\"WWW\" pageType:\"BrowsePage\" tenant:\"WM_GLASS\" version:\"v1\" searchArgs:{query:$query cat_id:$catId _be_shelf_id:$beShelfId prg:$prg}){modules{...ModuleFragment configs{...on EnricherModuleConfigsV1{zoneV1}__typename...on _TempoWM_GLASSWWWSearchSortFilterModuleConfigs{facetsV1{...FacetFragment}}...on TempoWM_GLASSWWWPillsModuleConfigs{moduleSource pillsV2{...PillsModuleFragment}}...on TempoWM_GLASSWWWSearchFitmentModuleConfigs{fitments( fitmentSearchParams:$fitmentSearchParams fitmentFieldParams:$fitmentFieldParams ){...FitmentFragment sisFitmentResponse{...BrowseResultFragment}}}...on TempoWM_GLASSWWWStoreSelectionHeaderConfigs{fulfillmentMethodLabel storeDislayName}...on TempoWM_GLASSWWWBreadcrumbConfigs{_rawConfigs}...on TempoWM_GLASSWWWSponsoredProductCarouselConfigs{_rawConfigs}...PopularInModuleFragment...CopyBlockModuleFragment...BannerModuleFragment...HeroPOVModuleFragment...InlineSearchModuleFragment...MarqueeDisplayAdConfigsFragment @include(if:$fetchMarquee)...SkylineDisplayAdConfigsFragment @include(if:$fetchSkyline)...HorizontalChipModuleConfigsFragment}}...LayoutFragment pageMetadata{location{postalCode stateOrProvinceCode city storeId}pageContext}}seoBrowseMetaData( id:$catId facets:$rawFacet path:$seoPath facet_query_param:$facet _be_shelf_id:$beShelfId ){metaTitle metaDesc metaCanon h1}}fragment BrowseResultFragment on SearchInterface{title aggregatedCount...BreadCrumbFragment...DebugFragment...ItemStacksFragment...PageMetaDataFragment...PaginationFragment...RequestContextFragment...ErrorResponse modules{facetsV1{...FacetFragment}pills{...PillsModuleFragment}}}fragment ModuleFragment on TempoModule{name version type moduleId schedule{priority}matchedTrigger{zone}}fragment LayoutFragment on ContentLayout{layouts{id layout}}fragment BreadCrumbFragment on SearchInterface{breadCrumb{id name url}}fragment DebugFragment on SearchInterface{debug{sisUrl}}fragment ItemStacksFragment on SearchInterface{itemStacks{displayMessage meta{adsBeacon{adUuid moduleInfo max_ads}query stackId stackType title layoutEnum totalItemCount totalItemCountDisplay viewAllParams{query cat_id sort facet affinityOverride recall_set min_price 
max_price}}itemsV2{...ItemFragment...InGridMarqueeAdFragment}}}fragment ItemFragment on Product{__typename id usItemId fitmentLabel name checkStoreAvailabilityATC seeShippingEligibility brand type shortDescription imageInfo{...ProductImageInfoFragment}canonicalUrl externalInfo{url}category{path{name url}}badges{flags{...on BaseBadge{key text type id}}tags{...on BaseBadge{key text type}}}classType averageRating numberOfReviews esrb mediaRating salesUnitType sellerId sellerName hasSellerBadge availabilityStatusV2{display value}productLocation{displayValue aisle{zone aisle}}badge{type dynamicDisplayName}fulfillmentSpeed offerId preOrder{...PreorderFragment}priceInfo{...ProductPriceInfoFragment}variantCriteria{...VariantCriteriaFragment}fulfillmentBadge fulfillmentTitle fulfillmentType brand manufacturerName showAtc sponsoredProduct{spQs clickBeacon spTags}showOptions}fragment ProductImageInfoFragment on ProductImageInfo{thumbnailUrl}fragment ProductPriceInfoFragment on ProductPriceInfo{priceRange{minPrice maxPrice}currentPrice{...ProductPriceFragment}wasPrice{...ProductPriceFragment}unitPrice{...ProductPriceFragment}listPrice{...ProductPriceFragment}shipPrice{...ProductPriceFragment}subscriptionPrice{priceString subscriptionString}priceDisplayCodes{priceDisplayCondition finalCostByWeight}}fragment PreorderFragment on PreOrder{isPreOrder preOrderMessage preOrderStreetDateMessage}fragment ProductPriceFragment on ProductPrice{price priceString}fragment VariantCriteriaFragment on VariantCriterion{name type id isVariantTypeSwatch variantList{id images name rank swatchImageUrl availabilityStatus products selectedProduct{canonicalUrl usItemId}}}fragment InGridMarqueeAdFragment on MarqueePlaceholder{__typename type moduleLocation lazy}fragment PageMetaDataFragment on SearchInterface{pageMetadata{storeSelectionHeader{fulfillmentMethodLabel storeDislayName}title canonical description location{addressId}}}fragment PaginationFragment on SearchInterface{paginationV2{maxPage pageProperties}}fragment RequestContextFragment on SearchInterface{requestContext{vertical isFitmentFilterQueryApplied searchMatchType categories{id name}}}fragment ErrorResponse on SearchInterface{errorResponse{correlationId source errors{errorType statusCode statusMsg source}}}fragment PillsModuleFragment on PillsSearchInterface{title url image:imageV1{src alt}baseSeoURL}fragment BannerModuleFragment on TempoWM_GLASSWWWSearchBannerConfigs{moduleType viewConfig{title image imageAlt displayName description url urlAlt appStoreLink appStoreLinkAlt playStoreLink playStoreLinkAlt}}fragment PopularInModuleFragment on TempoWM_GLASSWWWPopularInBrowseConfigs{seoBrowseRelmData(id:$catId){relm{id name url}}}fragment CopyBlockModuleFragment on TempoWM_GLASSWWWCopyBlockConfigs{copyBlock(id:$catId){cwc}}fragment FacetFragment on Facet{name type layout min max selectedMin selectedMax unboundedMax stepSize values{id name description type itemCount isSelected baseSeoURL}}fragment FitmentFragment on Fitments{partTypeIDs result{status formId position quantityTitle extendedAttributes{...FitmentFieldFragment}labels{...LabelFragment}resultSubTitle}labels{...LabelFragment}savedVehicle{vehicleYear{...VehicleFieldFragment}vehicleMake{...VehicleFieldFragment}vehicleModel{...VehicleFieldFragment}additionalAttributes{...VehicleFieldFragment}}fitmentFields{...VehicleFieldFragment}fitmentForms{id fields{...FitmentFieldFragment}title labels{...LabelFragment}}}fragment LabelFragment on 
FitmentLabels{ctas{...FitmentLabelEntityFragment}messages{...FitmentLabelEntityFragment}links{...FitmentLabelEntityFragment}images{...FitmentLabelEntityFragment}}fragment FitmentLabelEntityFragment on FitmentLabelEntity{id label}fragment VehicleFieldFragment on FitmentVehicleField{id label value}fragment FitmentFieldFragment on FitmentField{id displayName value extended data{value label}dependsOn}fragment HeroPOVModuleFragment on TempoWM_GLASSWWWHeroPovConfigsV1{povCards{card{povStyle image{mobileImage{...TempoCommonImageFragment}desktopImage{...TempoCommonImageFragment}}heading{text textColor textSize}subheading{text textColor}detailsView{backgroundColor isTransparent}ctaButton{button{linkText clickThrough{value}}}logo{...TempoCommonImageFragment}links{link{linkText}}}}}fragment TempoCommonImageFragment on TempoCommonImage{src alt assetId uid clickThrough{value}}fragment InlineSearchModuleFragment on TempoWM_GLASSWWWInlineSearchConfigs{headingText placeholderText}fragment MarqueeDisplayAdConfigsFragment on TempoWM_GLASSWWWMarqueeDisplayAdConfigs{_rawConfigs ad{...DisplayAdFragment}}fragment DisplayAdFragment on Ad{...AdFragment adContent{type data{__typename...AdDataDisplayAdFragment}}}fragment AdFragment on Ad{status moduleType platform pageId pageType storeId stateCode zipCode pageContext moduleConfigs adsContext adRequestComposite}fragment AdDataDisplayAdFragment on AdData{...on DisplayAd{json status}}fragment SkylineDisplayAdConfigsFragment on TempoWM_GLASSWWWSkylineDisplayAdConfigs{_rawConfigs ad{...SkylineDisplayAdFragment}}fragment SkylineDisplayAdFragment on Ad{...SkylineAdFragment adContent{type data{__typename...SkylineAdDataDisplayAdFragment}}}fragment SkylineAdFragment on Ad{status moduleType platform pageId pageType storeId stateCode zipCode pageContext moduleConfigs adsContext adRequestComposite}fragment SkylineAdDataDisplayAdFragment on AdData{...on DisplayAd{json status}}fragment HorizontalChipModuleConfigsFragment on TempoWM_GLASSWWWHorizontalChipModuleConfigs{chipModuleSource:moduleSource chipModule{title url{linkText title clickThrough{type value}}}chipModuleWithImages{title url{linkText title clickThrough{type value}}image{alt clickThrough{type value}height src title width}}}", "variables":{ "id":"", "affinityOverride":"default", "dealsId":"", "query":"", "page":1, "prg":"desktop", "catId":"3920", "facet":"", "sort":"best_seller", "rawFacet":"", "seoPath":"", "ps":40, "ptss":"", "trsp":"", "beShelfId":"", "recall_set":"", "module_search":"", "min_price":"", "max_price":"", "storeSlotBooked":"", "additionalQueryParams":None, "fitmentFieldParams":None, "fitmentSearchParams":{ "id":"", "affinityOverride":"default", "dealsId":"", "query":"", "page":1, "prg":"desktop", "catId":"3920", "facet":"", "sort":"best_seller", "rawFacet":"", "seoPath":"", "ps":40, "ptss":"", "trsp":"", "beShelfId":"", "recall_set":"", "module_search":"", "min_price":"", "max_price":"", "storeSlotBooked":"", "additionalQueryParams":None, "cat_id":"3920", "_be_shelf_id":"" }, "fetchMarquee":True, "fetchSkyline":True, "fetchSbaTop":False } } headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36', 'content-type':'application/json', 'wm_mp': 'true', 'wm_page_url': 'https://www.walmart.com/browse/books/3920?sort=best_seller&affinityOverride=default', 'wm_qos.correlation_id': 'FWpup9KEKUrLFOY68gppqfprABL16K6qE76g', 'x-apollo-operation-name': 'Browse', 'x-enable-server-timing': '1', 'x-latency-trace': '1', 'x-o-ccm': 
'server', 'x-o-correlation-id': 'FWpup9KEKUrLFOY68gppqfprABL16K6qE76g', 'x-o-gql-query': 'query Browse', 'x-o-market': 'us', 'x-o-platform': 'rweb', 'x-o-platform-version': 'main-176-e8acb5', 'x-o-segment': 'oaoh' } params= { "affinityOverride": "default", "page": "1", "prg": "desktop", "catId": "3920", "sort": "best_seller", "ps": "40", "fetchMarquee": "true", "fetchSkyline": "true", "fetchSbaTop": "false"} for i in range(1,25,1): params['maxPage']=i api_url='https://www.walmart.com/orchestra/home/graphql/browse' resp = requests.post(api_url, data=json.dumps(data), headers=headers,params=params) r=resp.json() print(r) # items = r['data']['search']['searchResult']['itemStacks'][0]['itemsV2'] # for item in items: # price = item['priceInfo']['currentPrice']['price'] # print(price) Output: 'https://i5.walmartimages.com/asr/deaaef8d-ca7f-4fc1-b556-1209c4c9000c.c6df15d971f1b9cddde27f80f17ae80b.jpeg?odnHeight=180&odnWidth=180&odnBg=ffffff'}, 'canonicalUrl': '/ip/Sonic-The-Hedgehog-Coloring-Book-For-Kids-Girls-Adults-Toddlers-Kids-ages-2-8-Unofficial-25-high-quality-illustrations-Pages-8-5-x-11-Paperback-9781677024223/614029660?athbdg=L1600', 'externalInfo': None, 'category': {'path': None}, 'badges': {'flags': [{'key': 'BESTSELLER', 'text': 'Best seller', 'type': 'LABEL', 'id': 'L1600'}], 'tags': [{'key': 'THREE_PLUS_DAY_SHIPPING', 'text': '3+ day shipping', 'type': 'LABEL'}, {'key': 'SAVE_WITH_W_PLUS', 'text': 'Save with', 'type': 'ICON'}]}, 'classType': 'REGULAR', 'averageRating': 3.9, 'numberOfReviews': 8, 'esrb': None, 'mediaRating': None, 'salesUnitType': 'EACH', 'sellerId': 'F55CDC31AB754BB68FE0B39041159D63', 'sellerName': 'Walmart.com', 'hasSellerBadge': None, 'availabilityStatusV2': {'display': 'In stock', 'value': 'IN_STOCK'}, 'productLocation': None, 'badge': [{'type': 'bestSeller', 'dynamicDisplayName': None}], 'fulfillmentSpeed': None, 'offerId': '27E4BA43A8704A1DABF0B37693611D16', 'preOrder': {'isPreOrder': False, 'preOrderMessage': None, 'preOrderStreetDateMessage': None}, 'priceInfo': {'priceRange': None, 'currentPrice': {'price': 6.99, 'priceString': '$6.99'}, 'wasPrice': None, 'unitPrice': None, 'listPrice': None, 'shipPrice': None, 'subscriptionPrice': None, 'priceDisplayCodes': {'priceDisplayCondition': None, 'finalCostByWeight': None}}, 'variantCriteria': [], 'fulfillmentBadge': None, 'fulfillmentTitle': 'title_shipToHome_not_available', 'fulfillmentType': 'FC', 'manufacturerName': None, 'showAtc': True, 'sponsoredProduct': None, 'showOptions': False}, {'__typename': 'Product', 'id': '16FA2JT4ZT52', 'usItemId': '491355610', 'fitmentLabel': None, 'name': 'Bible Word Search Books: Word Search Bible Puzzle Book - Extra Large Print: Bible Word Search Large Print Puzzles for Seniors and Adults - Beginners Edition (Large Print) (Paperback)', 'checkStoreAvailabilityATC': False, 'seeShippingEligibility': False, 'brand': None, 'type': 'REGULAR', 'shortDescription': '<li>Format:Paperback</li><li>Publication Date: 2019-11-22</li>', 'imageInfo': {'thumbnailUrl': 'https://i5.walmartimages.com/asr/46269778-a1bc-4a7a-aff2-75a825e35cf9.62e71231ecd42f6cda6d3701a3281b53.jpeg?odnHeight=180&odnWidth=180&odnBg=ffffff'}, 'canonicalUrl': '/ip/Bible-Word-Search-Books-Puzzle-Book-Extra-Large-Print-Print-Puzzles-Seniors-Adults-Beginners-Edition-Large-Print-Paperback-9781710478792/491355610', 'externalInfo': None, 'category': {'path': None}, 'badges': {'flags': None, 'tags': [{'key': 'THREE_PLUS_DAY_SHIPPING', 'text': '3+ day shipping', 'type': 'LABEL'}, {'key': 'SAVE_WITH_W_PLUS', 'text': 'Save with', 
'type': 'ICON'}]}, 'classType': 'REGULAR', 'averageRating': 4.7, 'numberOfReviews': 13, 'esrb': None, 'mediaRating': None, 'salesUnitType': 'EACH', 'sellerId': 'F55CDC31AB754BB68FE0B39041159D63', 'sellerName': 'Walmart.com', 'hasSellerBadge': None, 'availabilityStatusV2': {'display': 'In stock', 'value': 'IN_STOCK'}, 'productLocation': None, 'badge': None, 'fulfillmentSpeed': None, 'offerId': '0E3D37D69AE14435A4E83D3AE2789B7F', 'preOrder': {'isPreOrder': False, 'preOrderMessage': None, 'preOrderStreetDateMessage': None}, 'priceInfo': {'priceRange': None, 'currentPrice': {'price': 6.99, 'priceString': '$6.99'}, 'wasPrice': None, 'unitPrice': None, 'listPrice': None, 'shipPrice': None, 'subscriptionPrice': None, 'priceDisplayCodes': {'priceDisplayCondition': None, 'finalCostByWeight': None}}, 'variantCriteria': [], 'fulfillmentBadge': None, 'fulfillmentTitle': 'title_shipToHome_not_available', 'fulfillmentType': 'FC', 'manufacturerName': None, 'showAtc': True, 'sponsoredProduct': None, 'showOptions': False}, {'__typename': 'Product', 'id': '72BDRK2VT8QQ', 'usItemId': '599380007', 'fitmentLabel': None, 'name': 'Trace Letters and Numbers Workbook: Learn How to Write Alphabet Upper and Lower Case and Numbers (Series #2) (Paperback)', 'checkStoreAvailabilityATC': False, 'seeShippingEligibility': False, 'brand': None, 'type': 'REGULAR', 'shortDescription': '9781794540767', 'imageInfo': {'thumbnailUrl': 'https://i5.walmartimages.com/asr/535dff68-7946-4e20-899b-20d05015b05a_1.22a1f229111edd3725505c0db3fe1371.jpeg?odnHeight=180&odnWidth=180&odnBg=ffffff'}, 'canonicalUrl': '/ip/Trace-Letters-and-Numbers-Workbook-Learn-How-to-Write-Alphabet-Upper-and-Lower-Case-and-Numbers-Series-2-Paperback/599380007?athbdg=L1600', 'externalInfo': None, 'category': {'path': None}, 'badges': {'flags': [{'key': 'BESTSELLER', 'text': 'Best seller', 'type': 'LABEL', 'id': 'L1600'}], 'tags': [{'key': 'THREE_PLUS_DAY_SHIPPING', 'text': '3+ day shipping', 'type': 'LABEL'}, {'key': 'SAVE_WITH_W_PLUS', 'text': 'Save with', 'type': 'ICON'}]}, 'classType': 'REGULAR', 'averageRating': 4.7, 'numberOfReviews': 37, 'esrb': None, 'mediaRating': None, 'salesUnitType': 'EACH', 'sellerId': 'F55CDC31AB754BB68FE0B39041159D63', 'sellerName': 'Walmart.com', 'hasSellerBadge': None, 'availabilityStatusV2': {'display': 'In stock', 'value': 'IN_STOCK'}, 'productLocation': None, 'badge': [{'type': 'bestSeller', 'dynamicDisplayName': None}], 'fulfillmentSpeed': None, 'offerId': 'B701DAA6361D4A97A599815F29FA450D', 'preOrder': {'isPreOrder': False, 'preOrderMessage': None, 'preOrderStreetDateMessage': None}, 'priceInfo': {'priceRange': None, 'currentPrice': {'price': 6.95, 'priceString': '$6.95'}, 'wasPrice': None, 'unitPrice': None, 'listPrice': None, 'shipPrice': None, 'subscriptionPrice': None, 'priceDisplayCodes': {'priceDisplayCondition': None, 'finalCostByWeight': None}}, 'variantCriteria': [], 'fulfillmentBadge': None, 'fulfillmentTitle': 'title_shipToHome_not_available', 'fulfillmentType': 'FC', 'manufacturerName': None, 'showAtc': True, 'sponsoredProduct': None, 'showOptions': False}, {'__typename': 'Product', 'id': '72DZILK2NY05', 'usItemId': '817841366', 'fitmentLabel': None, 'name': 'Toddler Coloring Book for Kids Age 1-3 : aby Activity Book Boys or Girls, Preschool coloring for Their Fun Early Learning of First Easy Number Shape and Color (Paperback)', 'checkStoreAvailabilityATC': False, 'seeShippingEligibility': False, 'brand': None, 'type': 'REGULAR', 'shortDescription': '<li>Format:Paperback</li><li>Publication Date: 
2019-08-02</li>', 'imageInfo': {'thumbnailUrl': 'https://i5.walmartimages.com/asr/d2f8e8be-7fa1-4a25-a80c-e1741c6b2f6f.6392f3a67fccf0b396e1fb6ee2848b4b.jpeg?odnHeight=180&odnWidth=180&odnBg=ffffff'}, 'canonicalUrl': '/ip/Toddler-Coloring-Book-Kids-Age-1-3-aby-Activity-Boys-Girls-Preschool-coloring-Their-Fun-Early-Learning-First-Easy-Number-Shape-Color-Paperback-9781086986501/817841366?athbdg=L1600', 'externalInfo': None, 'category': {'path': None}, 'badges': {'flags': [{'key': 'BESTSELLER', 'text': 'Best seller', 'type': 'LABEL', 'id': 'L1600'}], 'tags': [{'key': 'THREE_PLUS_DAY_SHIPPING', 'text': '3+ day shipping', 'type': 'LABEL'}, {'key': 'SAVE_WITH_W_PLUS', 'text': 'Save with', 'type': 'ICON'}]}, 'classType': 'REGULAR', 'averageRating': 5, 'numberOfReviews': 2, 'esrb': None, 'mediaRating': None, 'salesUnitType': 'EACH', 'sellerId': 'F55CDC31AB754BB68FE0B39041159D63', 'sellerName': 'Walmart.com', 'hasSellerBadge': None, 'availabilityStatusV2': {'display': 'In stock', 'value': 'IN_STOCK'}, 'productLocation': None, 'badge': [{'type': 'bestSeller', 'dynamicDisplayName': None}], 'fulfillmentSpeed': None, 'offerId': 'D3D88022487D4BF19D540BED3742A75D', 'preOrder': {'isPreOrder': False, 'preOrderMessage': None, 'preOrderStreetDateMessage': None}, 'priceInfo': {'priceRange': None, 'currentPrice': {'price': 6.95, 'priceString': '$6.95'}, 'wasPrice': None, 'unitPrice': None, 'listPrice': None, 'shipPrice': None, 'subscriptionPrice': None, 'priceDisplayCodes': {'priceDisplayCondition': None, 'finalCostByWeight': None}}, 'variantCriteria': [], 'fulfillmentBadge': None, 'fulfillmentTitle': 'title_shipToHome_not_available', 'fulfillmentType': 'FC', 'manufacturerName': None, 'showAtc': True, 'sponsoredProduct': None, 'showOptions': False}, {'__typename': 'Product', 'id': '46CGMFA2PY1Y', 'usItemId': '56172624', 'fitmentLabel': None, 'name': 'Crystals for Beginners : The Guide to Get Started with the Healing Power of Crystals (Paperback)', 'checkStoreAvailabilityATC': False, 'seeShippingEligibility': False, 'brand': None, 'type': 'VARIANT', 'shortDescription': '<li>Format:Paperback</li><li>Publication Date: 2017-10-17</li>', 'imageInfo': {'thumbnailUrl': 'https://i5.walmartimages.com/asr/d2954574-c30c-48af-8297-900867a2458e_1.03867d2efc65af18a6fdbff418a68afa.jpeg?odnHeight=180&odnWidth=180&odnBg=ffffff'}, 'canonicalUrl': '/ip/Crystals-for-Beginners-The-Guide-to-Get-Started-with-the-Healing-Power-of-Crystals-Paperback-9781623159917/56172624?athbdg=L1600', 'externalInfo': None, 'category': {'path': None}, 'badges': {'flags': [{'key': 'BESTSELLER', 'text': 'Best seller', 'type': 'LABEL', 'id': 'L1600'}], 'tags': [{'key': 'THREE_PLUS_DAY_SHIPPING', 'text': '3+ day shipping', 'type': 'LABEL'}, {'key': 'SAVE_WITH_W_PLUS', 'text': 'Save with', 'type': 'ICON'}]}, 'classType': 'VARIANT', 'averageRating': 5, 'numberOfReviews': 13, 'esrb': None, 'mediaRating': None, 'salesUnitType': 'EACH', 'sellerId': 'F55CDC31AB754BB68FE0B39041159D63', 'sellerName': 'Walmart.com', 'hasSellerBadge': None, 'availabilityStatusV2': {'display': 'In stock', 'value': 'IN_STOCK'}, 'productLocation': None, 'badge': [{'type': 'bestSeller', 'dynamicDisplayName': None}], 'fulfillmentSpeed': None, 'offerId': '5F59F4B1DE6945728E7F2EC9A3005472', 'preOrder': {'isPreOrder': False, 'preOrderMessage': None, 'preOrderStreetDateMessage': None}, 'priceInfo': {'priceRange': None, 'currentPrice': {'price': 8.99, 'priceString': '$8.99'}, 'wasPrice': None, 'unitPrice': None, 'listPrice': {'price': 14.99, 'priceString': '$14.99'}, 'shipPrice': None, 
'subscriptionPrice': None, 'priceDisplayCodes': {'priceDisplayCondition': None, 'finalCostByWeight': None}}, 'variantCriteria': [], 'fulfillmentBadge': None, 'fulfillmentTitle': 'title_shipToHome_not_available', 'fulfillmentType': 'FC', 'manufacturerName': None, 'showAtc': True, 'sponsoredProduct': None, 'showOptions': False}, {'__typename': 'Product', 'id': '7FF8DA7PEPDT', 'usItemId': '136868031', 'fitmentLabel': None, 'name': 'Hack Learning: Hacking School Discipline : 9 Ways to Create a Culture of Empathy and Responsibility Using Restorative Justice (Series #22) (Paperback)', 'checkStoreAvailabilityATC': False, 'seeShippingEligibility': False, 'brand': None, 'type': 'REGULAR', 'shortDescription': '<li>Format:Paperback</li><li>Publication Date: 2019-03-12</li>', 'imageInfo': {'thumbnailUrl': 'https://i5.walmartimages.com/asr/4c639aa7-2580-4782-84ce-33a428cae000.571d547af78d39ffc35bdd28f988023f.jpeg?odnHeight=180&odnWidth=180&odnBg=ffffff'}, 'canonicalUrl': '/ip/Hack-Learning-Hacking-School-Discipline-9-Ways-Create-Culture-Empathy-Responsibility-Using-Restorative-Justice-Series-22-Paperback-9781948212137/136868031?athbdg=L1600', 'externalInfo': None, 'category': {'path': None}, 'badges': {'flags': [{'key': 'BESTSELLER', 'text': 'Best seller', 'type': 'LABEL', 'id': 'L1600'}], 'tags': [{'key': 'THREE_PLUS_DAY_SHIPPING', 'text': '3+ day shipping', 'type': 'LABEL'}, {'key': 'SAVE_WITH_W_PLUS', 'text': 'Save with', 'type': 'ICON'}]}, 'classType': Output: Extracted price as example: 22.99 1.22 6.99 9.95 5.95 9.81 5.99 13.17 4.52 6.99 4.99 7.99 6.79 5.99 6.5 6.95 6.99 5.99 4.99 5 4.99 11.93 5.99 4.99 6.99 6.99 6.95 6.95 8.99 14.81 5.13 7.29 3.95 5.99 5.5 5.99 16.88 6.99 6.99 1.99 22.99 1.22 6.99 9.95 5.95 9.81 5.99 13.17 4.52 6.99 4.99 7.99 6.7
6
7
70,172,127
2021-11-30
https://stackoverflow.com/questions/70172127/how-to-generate-a-uuid-field-with-fastapi-sqlalchemy-and-sqlmodel
I'm struggling to get the syntax to create a UUID field when creating a model in my FastAPI application. I'm using SQLModel. So basically, my models.py file looks like this: from datetime import datetime from typing import Optional import uuid from sqlalchemy import Column, DateTime from sqlalchemy.dialects import postgresql as psql from sqlmodel import SQLModel, Field class ModelBase(SQLModel): """ Base class for database models. """ id: Optional[int] = Field(default=None, primary_key=True) created_at: datetime = Field(sa_column=Column(DateTime(timezone=True), default=datetime.utcnow)) updated_at: datetime = Field(sa_column=Column(DateTime(timezone=True), onupdate=datetime.utcnow, default=datetime.utcnow)) class UUIDModelBase(ModelBase, table=True): """ Base class for UUID-based models. """ uuid: uuid.UUID = Field(sa_column=Column(psql.UUID(as_uuid=True)), default=uuid.uuid4) The above errors out with AttributeError: 'FieldInfo' object has no attribute 'UUID' I also tried id: uuid.UUID = Column(psql.UUID(as_uuid=True), default=uuid.uuid4) TypeError: Boolean value of this clause is not defined Also uuid: uuid.UUID = Column(psql.UUID(as_uuid=True), default=uuid.uuid4) AttributeError: Neither 'Column' object nor 'Comparator' object has an attribute 'UUID' and uuid: uuid.UUID = Field(default_factory=uuid.uuid4, index=True, nullable=False) AttributeError: 'FieldInfo' object has no attribute 'UUID' You get the idea. The errors are not helping me, I just need the right syntax. In this case, I'm not actually looking to use UUID as a primary key. And as you can tell from the imports, I'm using postgreSQL. The database is based on a postgres:12 docker image.
Your field named uuid shadows the imported uuid package, so the uuid.UUID annotation might be resolved against the field instead of the module. Aliasing the import avoids the clash, so you can change the code as follows. import uuid as uuid_pkg from sqlmodel import Field class UUIDModelBase(ModelBase): """ Base class for UUID-based models. """ uuid: uuid_pkg.UUID = Field( default_factory=uuid_pkg.uuid4, primary_key=True, index=True, nullable=False, ) Reference: https://github.com/tiangolo/sqlmodel/issues/140
10
18
70,169,219
2021-11-30
https://stackoverflow.com/questions/70169219/what-is-total-loss-loss-cls-etc
I want to train a custom dataset using faster_rcnn or mask_rcnn with PyTorch and Detectron2. Everything works well, but I want to understand the results I get. [11/29 20:16:31 d2.utils.events]: eta: 0:24:04 iter: 19 total_loss: 9.6 loss_cls: 1.5 loss_box_reg: 0.001034 loss_mask: 0.6936 loss_rpn_cls: 6.773 loss_rpn_loc: 0.5983 time: 1.4664 data_time: 0.0702 lr: 4.9953e-06 max_mem: 2447M I have this as a result and I want to know what all of this means.
Those are metrics printed out at every iteration of the training loop. The most important ones are the loss values, but below are basic descriptions of them all (eta and iter are self-explanatory I think). total_loss: This is a weighted sum of the following individual losses calculated during the iteration. By default, the weights are all one. loss_cls: Classification loss in the ROI head. Measures the loss for box classification, i.e., how good the model is at labelling a predicted box with the correct class. loss_box_reg: Localisation loss in the ROI head. Measures the loss for box localisation (predicted location vs true location). loss_rpn_cls: Classification loss in the Region Proposal Network. Measures the "objectness" loss, i.e., how good the RPN is at labelling the anchor boxes as foreground or background. loss_rpn_loc: Localisation loss in the Region Proposal Network. Measures the loss for localisation of the predicted regions in the RPN. loss_mask: Mask loss in the Mask head. Measures how "correct" the predicted binary masks are. For more details on the losses (1) and (2), take a look at the Fast R-CNN paper and the code. For more details on the losses (3) and (4), take a look at the Faster R-CNN paper and the code. For more details on the loss (5), take a look at the Mask R-CNN paper and the code. time: Time taken by the iteration. data_time: Time taken by the dataloader in that iteration. lr: The learning rate in that iteration. max_mem: Maximum GPU memory occupied by tensors in bytes.
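As a quick sanity check of the "weighted sum" description (assuming the default weights of 1.0), you can add up the individual losses from the log line in the question; note that the logger smooths each metric over recent iterations, so the sum only approximately matches total_loss:
# Values copied from the log line in the question
losses = {
    "loss_cls": 1.5,
    "loss_box_reg": 0.001034,
    "loss_mask": 0.6936,
    "loss_rpn_cls": 6.773,
    "loss_rpn_loc": 0.5983,
}
print(sum(losses.values()))  # ~9.57, close to the reported total_loss of 9.6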
9
12
70,091,290
2021-11-24
https://stackoverflow.com/questions/70091290/tensorflow-datasets-crop-resize-images-per-batch-after-dataset-batch
Is it possible to Crop/Resize images per batch ? I'm using Tensorflow dataset API as below: dataset = dataset.shuffle().repeat().batch(batch_size, drop_remainder=True) I want, within the batch all the images should have the same size. However across the batches it can have different sizes. For example, 1st batch has all the images of shape (batch_size, 300, 300, 3). Next batch can have images of shape (batch_size, 224, 224, 3). Another batch can have images of shape (batch_size, 400, 400, 3). Basically I want to have dynamically shaped batches, however all the images within the batch have static shapes. If we do as follow: dataset = dataset.shuffle().repeat().batch(batch_size, drop_remainder=True).map(lambda x, y: map_fn(x, y)) Does the above .map() applies to each batch separately or over the entire dataset ? If above .map() doesn't apply to each batch separately, how can we do this ? Can we define any iterator after dataset.batch(), apply tf.image.crop_and_resize() over each image per batch and later use dataset.concatenate() to combine all transformed batches ? I'm creating the dataset as below: # Dataset creation (read image data from files of COCO dataset) dataset = tf.data.Dataset.list_files(self._file_pattern, shuffle=False) dataset = dataset.shard(dataset_num_shards, dataset_shard_index) dataset = dataset.shuffle(tf.cast(256 / dataset_num_shards, tf.int64)) dataset = dataset.interleave(map_func=tf.data.TFRecordDataset(filename).prefetch(1), cycle_length=32, block_length=1, num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.map(tf_example_decoder.TfExampleDecoder().decode, num_parallel_calls=64) dataset = dataset.shuffle(64).repeat() # Parse each image for preprocessing dataset = dataset.map(lambda data, _: _parse_example(data), num_parallel_calls=64) dataset = dataset.batch(batch_size=batch_size, drop_remainder=True) # Below code suggested by you to resize images to fixed shape in each batch def resize_data(images, labels): tf.print('Original shape -->', tf.shape(images)) SIZE = (300, 300) return tf.image.resize(images, SIZE), labels dataset = dataset.map(resize_data) dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE) tf.estimator.Estimator(...).train( input_fn=dataset, steps=steps, hooks=train_hooks)
Generally, you can try something like this: import tensorflow as tf import numpy as np dataset1 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 300, 300, 3))) dataset2 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 224, 224, 3))) dataset3 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 400, 400, 3))) dataset = dataset1.concatenate(dataset2.concatenate(dataset3)) dataset = dataset.shuffle(1).repeat().batch(32, drop_remainder=True) def resize_data(images): tf.print('Original shape -->', tf.shape(images)) SIZE = (180, 180) return tf.image.resize(images, SIZE) dataset = dataset.map(resize_data) for images in dataset.take(3): tf.print('New shape -->', tf.shape(images)) Original shape --> [32 300 300 3] New shape --> [32 180 180 3] Original shape --> [32 224 224 3] New shape --> [32 180 180 3] Original shape --> [32 400 400 3] New shape --> [32 180 180 3] You could also use tf.image.resize_with_crop_or_pad if you want: def resize_data(images): tf.print('Original shape -->', tf.shape(images)) SIZE = (180, 180) return tf.image.resize_with_crop_or_pad(images, SIZE[0], SIZE[1]) dataset = dataset.map(resize_data) for images in dataset.take(3): tf.print('New shape -->', tf.shape(images)) Note that using repeat() will create an infinite dataset. Update 1 If you want a random size for each batch, try something like this: import tensorflow as tf import numpy as np dataset1 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 300, 300, 3))) dataset2 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 224, 224, 3))) dataset3 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 400, 400, 3))) dataset = dataset1.concatenate(dataset2.concatenate(dataset3)) dataset = dataset.batch(32, drop_remainder=True).shuffle(96) def resize_data(images): batch_size = tf.shape(images)[0] images_resized = tf.TensorArray(dtype=tf.float32, size = 0, dynamic_size=True) SIZE = tf.random.uniform((2,), minval=300, maxval=500, dtype=tf.int32) for i in range(batch_size): images_resized = images_resized.write(images_resized.size(), tf.image.resize(images[i], SIZE)) return images_resized.stack() dataset = dataset.map(resize_data) for images in dataset: tf.print('New shape -->', tf.shape(images)) New shape --> [32 392 385 3] New shape --> [32 468 459 3] New shape --> [32 466 461 3] Update 2 A very flexible option that works for any batch size would look like this: import tensorflow as tf import numpy as np dataset1 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 300, 300, 3))) dataset2 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 224, 224, 3))) dataset3 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 400, 400, 3))) dataset = dataset1.concatenate(dataset2.concatenate(dataset3)) def resize_and_batch(dataset, batch_size): final_dataset = None duration = len(dataset)//batch_size random_sizes = [tf.random.uniform((2,), minval=300, maxval=500, dtype=tf.int32) for _ in range(duration)] for i, size in zip(range(duration), random_sizes): idx = i * batch_size if i == 0: final_dataset = tf.data.Dataset.from_tensor_slices([tf.image.resize(x, size) for x in dataset.take(batch_size)]) else: final_dataset = final_dataset.concatenate(tf.data.Dataset.from_tensor_slices([tf.image.resize(x, size) for x in dataset.skip(idx).take(batch_size)])) return final_dataset batch_size = 10 ds = resize_and_batch(dataset, batch_size) ds = ds.batch(batch_size).shuffle(len(ds)) for images in ds: tf.print('New shape -->', images.shape) New shape --> TensorShape([10, 399, 348, 3]) 
New shape --> TensorShape([10, 356, 329, 3]) New shape --> TensorShape([10, 473, 373, 3]) New shape --> TensorShape([10, 489, 489, 3]) New shape --> TensorShape([10, 421, 335, 3]) New shape --> TensorShape([10, 447, 455, 3]) New shape --> TensorShape([10, 355, 382, 3]) New shape --> TensorShape([10, 310, 396, 3]) New shape --> TensorShape([10, 345, 356, 3])
5
2
70,168,829
2021-11-30
https://stackoverflow.com/questions/70168829/use-ffmpeg-command-to-push-rtsp-stream-it-doesnt-contain-sps-and-pps-frame
I use Python and opencv-python to capture frames from a video, then use an ffmpeg command to push an RTSP stream through a pipe. I can play the RTSP stream via GStreamer and VLC. However, a display device cannot decode and play the RTSP stream because it cannot receive SPS and PPS frames. Capturing the stream with Wireshark shows that it doesn't send SPS and PPS frames, only IDR frames. The key code is as follows. # ffmpeg command command = ['ffmpeg', '-re', '-y', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-pix_fmt', 'bgr24', '-s', "{}x{}".format(width, height), '-r', str(fps), '-i', '-', '-c:v', 'libx264', '-preset', 'ultrafast', '-f', 'rtsp', '-flags', 'local_headers', '-rtsp_transport', 'tcp', '-muxdelay', '0.1', rtsp_url] p = sp.Popen(command, stdin=sp.PIPE) while (cap.isOpened()): ret, frame = cap.read() if not ret: cap = cv2.VideoCapture(video_path) continue p.stdin.write(frame.tobytes()) Maybe I'm missing some options in the ffmpeg command?
Try adding the arguments '-bsf:v', 'dump_extra'. According to FFmpeg Bitstream Filters Documentation: dump_extra Add extradata to the beginning of the filtered packets except when said packets already exactly begin with the extradata that is intended to be added. The filter supposed to add SPS and PPS NAL units with every key frame. Here is a complete code sample: import subprocess as sp import cv2 rtsp_url = 'rtsp://localhost:31415/live.stream' video_path = 'input.mp4' # We have to start the server up first, before the sending client (when using TCP). See: https://trac.ffmpeg.org/wiki/StreamingGuide#Pointtopointstreaming ffplay_process = sp.Popen(['ffplay', '-rtsp_flags', 'listen', rtsp_url]) # Use FFplay sub-process for receiving the RTSP video. cap = cv2.VideoCapture(video_path) width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) # Get video frames width height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) # Get video frames height fps = int(cap.get(cv2.CAP_PROP_FPS)) # Get video framerate # FFmpeg command command = ['ffmpeg', '-re', '-y', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-pix_fmt', 'bgr24', '-s', "{}x{}".format(width, height), '-r', str(fps), '-i', '-', '-c:v', 'libx264', '-preset', 'ultrafast', '-f', 'rtsp', #'-flags', 'local_headers', '-rtsp_transport', 'tcp', '-muxdelay', '0.1', '-bsf:v', 'dump_extra', rtsp_url] p = sp.Popen(command, stdin=sp.PIPE) while (cap.isOpened()): ret, frame = cap.read() if not ret: break p.stdin.write(frame.tobytes()) p.stdin.close() # Close stdin pipe p.wait() # Wait for FFmpeg sub-process to finish ffplay_process.kill() # Forcefully close FFplay sub-process Notes: '-flags', 'local_headers' are not valid arguments in my version of FFmpeg. I don't know how to verify my solution, so I could be wrong...
5
5
70,169,519
2021-11-30
https://stackoverflow.com/questions/70169519/how-can-i-save-more-metadata-on-an-mlflow-model
I am trying to save a model to MLFlow, but as I have a custom prediction pipeline to retrieve data, I need to save extra metadata with the model. I tried using my custom signature class, which does the job correctly and saves the model with the extra metadata in the MLModel file (YAML format). But when I want to load the model from the MLFlow registry, the signature is not easily accessible. mlflow.sklearn.log_model(model, "model", signature = signature) I've also tried to save an extra dictionary in the log_model call, but it saves it in the conda.yaml file: mlflow.sklearn.log_model(model, "model", {"metadata1":"value1", "metadata2":"value2"}) Should I make my own flavour? Or my own Model subclass? I've seen here that the PyFuncModel receives some metadata class and an implementation to solve this, but I don't know where I should pass my own implementations to PyFuncModel in an experiment script. Here's a minimal example: import mlflow import numpy as np import pandas as pd from sklearn.linear_model import LogisticRegression metadata_dic = {"metadata1": "value1", "metadata2": "value2"} X = np.array([[-2, -1, 0, 1, 2, 1],[-2, -1, 0, 1, 2, 1]]).T y = np.array([0, 0, 1, 1, 1, 0]) X = pd.DataFrame(X, columns=["X1", "X2"]) y = pd.DataFrame(y, columns=["y"]) model = LogisticRegression() model.fit(X, y) mlflow.sklearn.log_model(model, "model")
Finally, I made a class that contains all the metadata and saved it as a model attribute: model = LogisticRegression() model.fit(X, y) model.metadata = ModelMetadata(**metadata_dic) mlflow.sklearn.log_model(model, "model") This loses the customizable predict process, and after reading the MLFlow documentation it is still not very clear how to proceed. If anyone finds a better approach, it would be very appreciated.
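For completeness, a sketch of reading the attribute back after logging. This assumes the sklearn flavour serializes the whole estimator object (so custom attributes survive) and that ModelMetadata simply exposes its keyword arguments as attributes — treat both as assumptions; <run_id> is a placeholder:
import mlflow

# <run_id> is a placeholder for the run that logged the model above
loaded = mlflow.sklearn.load_model("runs:/<run_id>/model")
print(loaded.metadata.metadata1)  # expected "value1", assuming ModelMetadata stores the dict as attributes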
5
1
70,167,811
2021-11-30
https://stackoverflow.com/questions/70167811/how-to-load-custom-model-in-pytorch
I'm trying to load my pretrained model (yolov5n) and test it with the following code in PyTorch: import os import torch model = torch.load(os.getcwd()+'/weights/last.pt') # Images imgs = ['https://example.com/img.jpg'] # Inference results = model(imgs) # Results results.print() results.save() # or .show() results.xyxy[0] # img1 predictions (tensor) results.pandas().xyxy[0] # img1 predictions (pandas) and I'm getting the following error: ModuleNotFoundError Traceback (most recent call last) in 3 import torch 4 ----> 5 model = torch.load(os.getcwd()+'/weights/last.pt') My model is located at /weights/last.pt; I'm not sure what I'm doing wrong. Could you please tell me what is missing in my code?
You should be able to find the weights in this directory: yolov5/runs/train/exp/weights/last.pt Then you load the weights with a line like this: model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp/weights/last.pt', force_reload=True) I have an example notebook that trains a model and then loads the custom weights from that directory here: https://github.com/pylabel-project/samples/blob/main/pylabeler.ipynb
4
10
70,177,467
2021-11-30
https://stackoverflow.com/questions/70177467/databricks-pyspark-vs-pandas
I have a python script where I'm using pandas for transformations/manipulation of my data. I know I have some "inefficient" blocks of code. My question is, if pyspark is supposed to be much faster, can I just replace these blocks using pyspark instead of pandas or do I need everything to be in pyspark? If I'm in Databricks, how much does this really matter since it's already on a spark cluster?
If the data is small enough that you can use pandas to process it, then you likely don't need pyspark. Spark is useful when you have such large data sizes that it doesn't fit into memory in one machine since it can perform distributed computation. That being said, if the computation is complex enough that it could benefit from a lot of parallelization, then you could see an efficiency boost using pyspark. I'm more comfortable with pyspark's APIs than pandas, so I might end up using pyspark anyways, but whether you'll see an efficiency boost depends a lot on the problem.
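If you only want to move the heavy blocks to Spark and keep the rest in pandas, a minimal, hypothetical sketch for a Databricks notebook (where a spark session is predefined) could look like this; the path and column name are placeholders:
import pandas as pd

pdf = pd.read_csv("/dbfs/path/to/data.csv")   # existing pandas code

sdf = spark.createDataFrame(pdf)              # hand the heavy part to Spark
counts = sdf.groupBy("some_key").count()      # distributed transformation

result = counts.toPandas()                    # back to pandas for the small result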
11
20
70,174,054
2021-11-30
https://stackoverflow.com/questions/70174054/how-to-reorder-rows-in-pandas-dataframe-by-factor-level-in-python
I've created a small dataset comparing coffee drink prices per cup size. When I pivot my dataset the output automatically reorders the index (the 'Size' column) alphabetically. Is there a way to assign the different sizes a numerical level (e.g. small = 0, medium = 1, large = 2) and reorder the rows this way instead? I know this can be done in R using the forcats library (using fct_relevel for example), but I'm not aware of how to do this in Python. I would prefer to keep the solution to using numpy and pandas. data = {'Item': np.repeat(['Latte', 'Americano', 'Cappuccino'], 3), 'Size': ['Small', 'Medium', 'Large']*3, 'Price': [2.25, 2.60, 2.85, 1.95, 2.25, 2.45, 2.65, 2.95, 3.25] } df = pd.DataFrame(data, columns = ['Item', 'Size', 'Price']) df = pd.pivot_table(df, index = ['Size'], columns = 'Item') df # Price # Item Americano Cappuccino Latte # Size # Large 2.45 3.25 2.85 # Medium 2.25 2.95 2.60 # Small 1.95 2.65 2.25
You can use a Categorical type with ordered=True: df.index = pd.Categorical(df.index, categories=['Small', 'Medium', 'Large'], ordered=True) df = df.sort_index() output: Price Item Americano Cappuccino Latte Small 1.95 2.65 2.25 Medium 2.25 2.95 2.60 Large 2.45 3.25 2.85 You can access the codes with: >>> df.index.codes array([0, 1, 2], dtype=int8) If this was a Series: >>> series.cat.codes
4
4
70,173,576
2021-11-30
https://stackoverflow.com/questions/70173576/how-to-add-a-line-to-a-plotly-express-bar-chart
I am trying to add a very simple line across a bar chart in plotly and am struggling to do this. My dataframe contains one column with bins and the other with returns data and can be copied here: {'bins': {0: '(-23.077, 25.877]', 1: '(25.877, 34.666]', 2: '(34.666, 42.552]', 3: '(42.552, 46.044]', 4: '(46.044, 49.302]', 5: '(49.302, 52.746]', 6: '(52.746, 57.075]', 7: '(57.075, 62.349]', 8: '(62.349, 69.171]', 9: '(69.171, 90.975]'}, 'returns': {0: 0.39754, 1: 0.6817, 2: -0.1918399999999998, 3: -0.44406, 4: -0.6611199999999998, 5: -0.0742857142857142, 6: 0.25304, 7: 0.4166, 8: 0.97648, 9: 0.0539999999999999}} I have created a plotly bar chart from this using the following code: fig = px.bar(dfs, x='bins', y='returns') fig.show() I want to add a constant line across the bar chart that represents a benchmark score and have looked at this: Plotly: How to add trendline to a bar chart? The methods seem to be deprecated and I can't seem to find any way to do this. The benchmark list is this: [0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858, 0.14080542857142858] I want it to look like this (the line is supposed to be straight; apologies for the terrible paint-job edit). Would anyone know how to do this?
You can use Plotly's horizontal and vertical shapes to add a horizontal line: fig.add_hline(y=0.14080542857142858)
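Putting it together with the data from the question (returns rounded for brevity, and using the first value of the benchmark list since it is constant), a sketch could look like:
import pandas as pd
import plotly.express as px

dfs = pd.DataFrame({
    'bins': ['(-23.077, 25.877]', '(25.877, 34.666]', '(34.666, 42.552]',
             '(42.552, 46.044]', '(46.044, 49.302]', '(49.302, 52.746]',
             '(52.746, 57.075]', '(57.075, 62.349]', '(62.349, 69.171]',
             '(69.171, 90.975]'],
    'returns': [0.39754, 0.6817, -0.19184, -0.44406, -0.66112,
                -0.07429, 0.25304, 0.4166, 0.97648, 0.054],
})

fig = px.bar(dfs, x='bins', y='returns')
fig.add_hline(y=0.14080542857142858)  # constant benchmark drawn as a horizontal line
fig.show()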
5
5
70,163,883
2021-11-30
https://stackoverflow.com/questions/70163883/google-colab-modulenotfounderror-no-module-named-sklearn-externals-joblib
My initial imports look like this, and this code block runs fine. # Libraries to help with reading and manipulating data import numpy as np import pandas as pd # Libraries to help with data visualization import matplotlib.pyplot as plt import seaborn as sns sns.set() # Removes the limit for the number of displayed columns pd.set_option("display.max_columns", None) # Sets the limit for the number of displayed rows pd.set_option("display.max_rows", 200) # to split the data into train and test from sklearn.model_selection import train_test_split # to build linear regression_model from sklearn.linear_model import LinearRegression # to check model performance from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score But when I try the following command I get the error ModuleNotFoundError: No module named 'sklearn.externals.joblib'. I tried using !pip to install all the modules and other suggestions for this error, but it didn't work. This is Google Colab, so I'm not sure what I am missing. from mlxtend.feature_selection import SequentialFeatureSelector as SFS
For the second part you can do this to fix it, I copied the rest of your code as well, and added the bottom part. # Libraries to help with reading and manipulating data import numpy as np import pandas as pd # Libraries to help with data visualization import matplotlib.pyplot as plt import seaborn as sns sns.set() # Removes the limit for the number of displayed columns pd.set_option("display.max_columns", None) # Sets the limit for the number of displayed rows pd.set_option("display.max_rows", 200) # to split the data into train and test from sklearn.model_selection import train_test_split # to build linear regression_model from sklearn.linear_model import LinearRegression # to check model performance from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score # I changed this part !pip install mlxtend import joblib import sys sys.modules['sklearn.externals.joblib'] = joblib from mlxtend.feature_selection import SequentialFeatureSelector as SFS it works for me.
4
6
70,160,450
2021-11-29
https://stackoverflow.com/questions/70160450/subprocess-command-shows-filenotfounderror-errno-2-no-such-file-or-directory
I'm trying to run shell commands using python by using subprocess module in the below code, but I don't why my script is throwing an error like below. Can someone help me what I'm missing? Traceback (most recent call last): File "/Scripts/test_file.py", line 6, in <module> p2 = subprocess.Popen('sed s\'/&quot;/ /g\'', stdin=p1.stdout, stdout=subprocess.PIPE) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 854, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 1702, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: "sed s'/&quot;/ /g'"` import subprocess #output3.txt='/Users/boggulv/Desktop/output3.txt' p1 = subprocess.Popen( ['cat', 'output3.txt'], stdout=subprocess.PIPE) print(p1) p2 = subprocess.Popen('sed s\'/&quot;/ /g\'', stdin=p1.stdout, stdout=subprocess.PIPE) p3 = subprocess.Popen('grep "sO"', stdin=p2.stdout,stdout=subprocess.PIPE) p4 = subprocess.Popen('grep -v "get"', stdin=p3.stdout, stdout=subprocess.PIPE) p5 = subprocess.Popen('cut -d \',\' -f2', stdin=p4.stdout, stdout=subprocess.PIPE) p6 = subprocess.Popen('sed \'s/"//g\'', stdin=p5.stdout, stdout=subprocess.PIPE) p7 = subprocess.Popen('sort', stdin=p6.stdout, stdout=subprocess.PIPE) p8 = subprocess.Popen('sort', stdin=p8.stdout, stdout=subprocess.PIPE) p9 = subprocess.Popen('uniq -c', stdin=p8.stdout, stdout=subprocess.PIPE) p0 = subprocess.Popen('sort -nr', stdin=p9.stdout, stdout=subprocess.PIPE) print(p01.communicate()) Tried now changing to lists. p2 = subprocess.Popen('sed \'s/&quot;/ /g\'', stdin=p1.stdout, stdout=subprocess.PIPE, shell=True) p3 = subprocess.Popen(['grep','"shipOption"'], stdin=p2.stdout,stdout=subprocess.PIPE,shell = True) p4 = subprocess.Popen(['grep','-v', '"getShipMethod"'], stdin=p3.stdout, stdout=subprocess.PIPE,shell = True) p5 = subprocess.Popen(['cut','-d','\',\'', '-f2'], stdin=p4.stdout, stdout=subprocess.PIPE,shell = True) p6 = subprocess.Popen(['sed','\'s/"//g\''],stdin=p5.stdout, stdout=subprocess.PIPE,shell = True) p7 = subprocess.Popen(['sort'], stdin=p6.stdout, stdout=subprocess.PIPE,shell = True) p8 = subprocess.Popen(['uniq', '-c'], stdin=p7.stdout, stdout=subprocess.PIPE,shell = True) p9 = subprocess.Popen(['sort', '-nr'], stdin=p8.stdout, stdout=subprocess.PIPE,shell = True) p0 = subprocess.Popen(['head', '-10'], stdin=p9.stdout, stdout=subprocess.PIPE,shell = True)``` New Error: `usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]] [-e pattern] [-f file] [--binary-files=value] [--color=when] [--context[=num]] [--directories=action] [--label] [--line-buffered] [--null] [pattern] [file ...] usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]] [-e pattern] [-f file] [--binary-files=value] [--color=when] [--context[=num]] [--directories=action] [--label] [--line-buffered] [--null] [pattern] [file ...] usage: cut -b list [-n] [file ...] cut -c list [file ...] cut -f list [-s] [-w | -d delim] [file ...] (b'', None) cat: stdin: Input/output error`
Your commands are still wrong. If you just want to run these commands like the shell does, the absolutely easiest way to do that is to ... use the shell. result = subprocess.run(''' # useless cat, but bear with cat output3.txt | sed 's/&quot;/ /g' | grep "shipOption" | grep -v "getShipMethod" | cut -d ',' -f2 | sed 's/"//g' | sort | uniq -c | sort -nr | head -10 ''', # Probably add these too check=True, capture_output=True, # We are using the shell for piping etc shell=True) If you want to remove the shell=True and manually run all these processes, you have to understand how the shell works. In particular, you need to fix the quoting so that the commands you run have the quotes which remain after the shell has processed the syntactic quotes. p1 = subprocess.Popen(['cat', 'output3.txt'], stdout=subprocess.PIPE) # still useless p2 = subprocess.Popen(['sed','s/&quot;/ /g'], stdin=p1.stdout, stdout=subprocess.PIPE) p3 = subprocess.Popen(['grep', "shipOption"], stdin=p2.stdout,stdout=subprocess.PIPE) p4 = subprocess.Popen(['grep', '-v', "getShipMethod"], stdin=p3.stdout, stdout=subprocess.PIPE) p5 = subprocess.Popen(['cut', '-d', ',', '-f2'], stdin=p4.stdout, stdout=subprocess.PIPE) p6 = subprocess.Popen(['sed', 's/"//g'],stdin=p5.stdout, stdout=subprocess.PIPE) p7 = subprocess.Popen(['sort'], stdin=p6.stdout, stdout=subprocess.PIPE) p8 = subprocess.Popen(['uniq', '-c'], stdin=p7.stdout, stdout=subprocess.PIPE) p9 = subprocess.Popen(['sort', '-nr'], stdin=p8.stdout, stdout=subprocess.PIPE) p0 = subprocess.Popen(['head', '-10'], stdin=p9.stdout, stdout=subprocess.PIPE) Notice in particular how the arguments to sed and grep have their outer quotes removed, and how we removed shell=True everywhere. As a rule of thumb, if the first argument to Popen (or other subprocess methods) is a list, you should not use shell=True, and vice versa. (There are situations where you can pass a list to shell=True but ... let's not even begin to go there.) All of this seems rather moot, though, since Python can eminently well do all of these things. from collections import Counter counts = Counter() with open('output3.txt', 'r', encoding='utf-8') as lines: for line in lines: line = line.rstrip('\n').replace('&quot;', ' ') if "shipOption" in line and "getShipMethod" not in line: field = line.split(',')[1].replace('"', '') counts[field] += 1 print(counts.most_common(10)) Probably you would want to put the rstrip and replace inside the if to avoid unnecessary work. The same refactoring could be done to the shell pipeline, of course.
8
4
70,142,953
2021-11-28
https://stackoverflow.com/questions/70142953/matplotlib-colormaps-choosing-a-different-color-for-each-graph-line-subject
I created a script that reads and plots .txt files and their content (numbers/values). Each .txt file is located in a different folder. Each folder, in turn, represents one subject from which the data stems. This code works fine. Python reads each single .txt. file and plots 23 individual graphs/lines into one single plot. Python uses some standard colors here, i.e., each graph is automatically presented in a different color. What I would like to do is the following: Instead of using the standard colors that are assigned by python automatically without adding any color related code, I would like to use a specific colormap (for example "plasma") from matplotlib. The problem: no matter what code from the internet I use, all graphs/lines/subjects always receive the same color (e.g. the first color or last color from the plasma colormap). How do I specify the code so that every line gets one distinct color from a colormap of choice? Here is my code: # Initialize import numpy as np import matplotlib.pyplot as plt from scipy import signal from matplotlib.pyplot import cm # Numpy.loadtxt – Loads data from a textfile. Scipy.signal.welch – Creation of the FFT/power-spectrum. f, Pxx_den creates the ideal frequencies/FFT (f, Welch = Power Spectrum or Power Spectral Density) Subjects = ["Subject1", "Subject2", "Subject3", "Subject4", "Subject5", "Subject7", "Subject8", "Subject9", "Subject10", "Subject11", "Subject12", "Subject13", "Subject14", "Subject15", "Subject16", "Subject17", "Subject18", "Subject19", "Subject20", "Subject22", "Subject23", "Subject24", "Subject25"] for Subject in Subjects: Subject = np.loadtxt("/volumes/SanDisk2/fmri/dataset/processed/extracted_timeseriespython/restingstate/{0}/TimeSeries.SPC.Core_ROI.{0}.txt".format(Subject), comments="#", delimiter=None, converters=None, skiprows=0, usecols=0, unpack=False, ndmin=0, encoding=None, max_rows=None, like=None) f, Welch = signal.welch(Subject, fs=1.0, window="hann", nperseg=None, noverlap=None, nfft=1024, detrend="constant", return_onesided=True, scaling="density", axis=-1, average="mean") cmap = plt.get_cmap("inferno") slicedCM = cmap(np.linspace(0, 1, len(Subjects))) plt.plot(f, Welch, c=slicedCM[Subjects.index(Subject)]) # Grid labels plt.title("Power Spectrum for all subjects", fontsize=12, fontweight="bold") plt.xlabel("Log Frequency [Hz]", fontsize=11, fontweight="bold") plt.ylabel("Log Power [Hz]", fontsize=11, fontweight="bold") # Grid dimenions and style plt.xlim([0.005, 0.2]) # x-axis range plt.ylim([0, 100]) # y-axis range plt.xticks(np.arange(0, 0.21, 0.025)) # x ticks range (start, end, step) plt.yticks(np.arange(0, 101, 10)) # y ticks range (start, end, step) plt.grid(True) # Show grid plt.rc("axes", axisbelow=True) # Grid behind figures plt.rc("grid", linestyle="-", color="black") # Grid look # Show result plt.show() Here is the resulting screenshot, showing that the standard colors are used instead of the desired plasma colormap: I'm running matplotlib 3.5.0 with MacOSX as backend.
One way to achieve your goal is to slice-up a colormap and then plot each line with one of the resulting colors. See the lines below that can be integrated in your code in appropriate places. import numpy as np import matplotlib.pyplot as plt # 1. Choose your desired colormap cmap = plt.get_cmap('plasma') # 2. Segmenting the whole range (from 0 to 1) of the color map into multiple segments slicedCM = cmap(np.linspace(0, 1, len(Subjects))) # 3. Color the i-th line with the i-th color, i.e. slicedCM[i] plt.plot(f, Welch, c=slicedCM[Subjects.index(Subject)]) (The first two lines can be put in the beginning and you can replace the line plotting curves in your code with the third line of code suggested above.) Alternatively, and perhaps a neater approach, is using the below lines inside your main loop through Subjects: cmap = plt.get_cmap('inferno') plt.plot(f, Welch, c=cmap(Subjects.index(Subject)/len(Subjects))) (I see in your question that you are changing Subject when you load the file again into Subject. Just use another variable name, say, data = np.loadtxt... and then f, Welch = signal.welch(data, ..... Keep the codes for plotting with different colors as suggested above and you won't have any problem.)
5
4
70,127,049
2021-11-26
https://stackoverflow.com/questions/70127049/what-is-the-use-of-dmatrix
The docs say: Data Matrix used in XGBoost. DMatrix is an internal data structure that is used by XGBoost, which is optimized for both memory efficiency and training speed. You can construct DMatrix from multiple different sources of data. I get this bit but what's the difference/use of DMatrix instead of a Pandas Dataframe?
When using the XGBoost Python package you can choose between two different APIs to train your model. XGB's own Learning API and the Scikit-Learn API. When using the Scikit-Learn API data is passed to the model as numpy array or pandas dataframes. When using the Learning API data is passed using the DMatrix. Have a look at the python examples, to see both APIs used. Basically you already found the "use of DMatrix instead of a Pandas Dataframe" in the docs: It is a data structure the XGBoost developers created for "memory efficiency and training speed" with their machine learning library.
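For illustration, here is a minimal sketch of the two APIs side by side; the data, parameter values and round counts are made-up placeholders, not anything prescribed by XGBoost:
import numpy as np
import pandas as pd
import xgboost as xgb

X = pd.DataFrame(np.random.rand(100, 4), columns=list("abcd"))
y = np.random.randint(2, size=100)

# Learning API: wrap the data in a DMatrix first
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# Scikit-Learn API: pass the DataFrame / numpy array directly
clf = xgb.XGBClassifier(n_estimators=10, objective="binary:logistic")
clf.fit(X, y)
Internally the Scikit-Learn wrapper builds a DMatrix for you, so either way the training data ends up in that structure.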
21
25
70,159,221
2021-11-29
https://stackoverflow.com/questions/70159221/runtimeerror-mean-input-dtype-should-be-either-floating-point-or-complex-dty
I wrote below code using PyTorch and ran into the runtime error: tns = torch.tensor([1,0,1]) tns.mean() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-666-194e5ab56931> in <module> ----> 1 tns.mean() RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead. However, if I change the tensor to float, the error goes away: tns = torch.tensor([1.,0,1]) tns.mean() --------------------------------------------------------------------------- tensor(0.6667) My question is why the error happens. The data type of the first tensor is int64 instead of Long, why does PyTorch take it as Long?
This is because torch.int64 and torch.long both refer to the same data type, of 64-bit signed integers. See here for an overview of all data types.
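A quick way to see this, plus the usual fix of casting before taking the mean:
import torch

tns = torch.tensor([1, 0, 1])
print(tns.dtype)                  # torch.int64
print(torch.long == torch.int64)  # True: they are the same dtype, just two names
print(tns.float().mean())         # tensor(0.6667) once cast to a floating-point dtype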
8
6
70,158,335
2021-11-29
https://stackoverflow.com/questions/70158335/how-to-download-files-in-customized-location-using-selenium-chromedriver-and-chr
I want to download a txt and pdf files to a specific folder. But it just downloads them on another folder. The website http://demo.automationtesting.in/FileDownload.html. Is there something wrong with the code or I didn't put the correct location of the folder? import time from selenium import webdriver from selenium.webdriver.chrome.options import Options chromeOptions=Options() chromeOptions.add_experimental_option("prefs", {"download.default_dictionary": "C:\DownloadedAutomationFiles"}) driver=webdriver.Chrome(executable_path="D:\ChromeDriverExtracted\chromedriver", chrome_options=chromeOptions) driver.get("http://demo.automationtesting.in/FileDownload.html") driver.maximize_window() driver.find_element_by_id("textbox").send_keys("testing") driver.find_element_by_id("createTxt").click() #generate file button driver.find_element_by_id("link-to-download").click() #dowload link #Download PDF FILE driver.find_element_by_id("pdfbox").send_keys("testing download text file") driver.find_element_by_id("createPdf").click() #generate file button driver.find_element_by_id("pdf-link-to-download").click() #dowload link time.sleep(2) driver.close()
To download the required file within Automation Demo Site to a specific folder using Selenium, ChromeDriver and google-chrome you need to pass the preference "download.default_directory" along with the value (location of the directory) through add_experimental_option() and you can use the following solution: Code Block: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('useAutomationExtension', False) options.add_experimental_option("prefs", { "download.default_directory": r"C:\Data_Files\output_files" }) s = Service('C:\\BrowserDrivers\\chromedriver.exe') driver = webdriver.Chrome(service=s, options=options) driver.get("http://demo.automationtesting.in/FileDownload.html") driver.execute_script("return arguments[0].scrollIntoView(true);", WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "Download")))) WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "textbox"))).send_keys("testing") WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "createTxt"))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "link-to-download"))).click() Snapshot:
5
5
70,145,904
2021-11-28
https://stackoverflow.com/questions/70145904/transposition-tables-for-python-chess-engine
This is a follow-up to my last post. The code works without any errors and can calculate the next best move. I've been looking into how to incorporate transposition tables and move ordering into my negamax function to make it run faster and more accurately, but it seems somewhat difficult and advanced for a beginner like myself. You can find my code here. While researching on the chess programming wiki I found some sample code for transposition tables: def negamax(node, depth, alpha, beta, color): alphaOrig = alpha ## Transposition Table Lookup; node is the lookup key for ttEntry ttEntry = transpositionTableLookup(node) if ttEntry.is_valid is True and ttEntry.depth >= depth: if ttEntry.flag == EXACT : return ttEntry.value if ttEntry.flag == LOWERBOUND: alpha = max(alpha, ttEntry.value) if ttEntry.flag == UPPERBOUND: beta = min(beta, ttEntry.value) if alpha >= beta: return ttEntry.value if depth == 0 or node is terminal_node: return color* heuristic_value_of_node childNodes = domove(node) childNodes = orderMoves(childNodes) bestValue = -99999 for child in childNodes: bestValue = max(bestValue, -negamax(child, depth - 1, -beta, -alpha, -color)) alpha = max(alpha, bestValue) if alpha >= beta: break ##Transposition Table Store; node is the lookup key for ttEntry ttEntry.value = bestValue if bestValue <= alphaOrig: ttEntry.flag = UPPERBOUND if bestValue >= beta: ttEntry.flag = LOWERBOUND else: ttEntry.flag = EXACT ttEntry.depth = depth transpositionTableStore(node, ttEntry) return bestValue I tried making a few modifications to integrate it into my code, but I didn't get any results out of it. I've also seen something about storing hash keys with a Zobrist key for the positions, but I didn't understand well how it worked so I dropped the idea. Currently somewhat stuck with these problems and don't know what the next step is.
To use transposition tables you "need" to use Zobrist hashing. The hashing gives each position an (almost) unique code that you store in the transposition table along with its evaluation. Then, to explain it simply: if the current position you are searching is found in your transposition table, you won't have to evaluate it again, you just use the stored value. Zobrist keys are a nightmare to get right and very hard to debug. If it helps, you can check out my implementation in the Endamat Chess Engine, but since you might have a different approach, it might be easier to just read up on how Zobrist keys work and try to get it right for your implementation.
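To make the idea more concrete, here is a minimal sketch of a Zobrist table for a python-chess board. It is a simplified illustration only (it ignores side to move, castling rights and en passant), not the engine's actual code:
import random
import chess  # assuming python-chess here; adapt the lookups to your own board type

random.seed(0)
# one random 64-bit number per (square, piece type, colour); piece_type runs 1..6, hence size 7
ZOBRIST = [[[random.getrandbits(64) for _ in range(2)]
            for _ in range(7)]
           for _ in range(64)]

def zobrist_hash(board: chess.Board) -> int:
    h = 0
    for square, piece in board.piece_map().items():
        h ^= ZOBRIST[square][piece.piece_type][int(piece.color)]
    return h

transposition_table = {}  # zobrist_hash(board) -> (depth, flag, value)
If you are already on python-chess, it also ships a ready-made chess.polyglot.zobrist_hash(board) that you can use instead of rolling your own table.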
6
4
70,147,659
2021-11-28
https://stackoverflow.com/questions/70147659/importing-only-a-specific-class-from-a-python-module-with-importlib
How would one go about importing only a specific class from a Python module using its path? I need to import a specific class from a Python file using the file path. I have no control over the file and its completely outside of my package. file.py: class Wanted(metaclass=MyMeta): ... class Unwanted(metaclass=MyMeta): ... The metaclass implementation is not relevant here. However, I will point out that it's part of my package and I have full control over it. Import example: spec = importlib.util.spec_from_file_location(name='Wanted', location="path_to_module/mudule.py") module = importlib.util.module_from_spec(spec) spec.loader.exec_module(module) This works, and Wanted is imported. The problem is that Unwanted is also imported. In fact, as long as there is any string value given for name (including an empty string), both Wanted and Unwanted are imported from the module. This has the same effect as in the example before, where both Wanted and Unwanted are imported: importlib.util.spec_from_file_location(name='random string', location="path_to_module/mudule.py") I'm not looking for a specific solution using importlib; any reasonable way will do. I will point out that I don't have a need of using the class when it's imported, I only need the import to happen and my metaclass will take care of the rest.
If I am not mistaken, the name parameter is just used to name the module you are importing. But, more importantly, when you are importing any module, you are executing the whole file, which means that in your case both of these classes will be created. It would not matter whether you wrote from file import Wanted, import file or used any other form of the import statement. A Python program is constructed from code blocks. A block is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition. Source: https://docs.python.org/3/reference/executionmodel.html#structure-of-a-program
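If the end goal is simply to have Wanted available (while accepting that the whole file still runs), you can execute the module and bind only that attribute. A sketch built on the question's own snippet:
import importlib.util

spec = importlib.util.spec_from_file_location(name='anything', location="path_to_module/mudule.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)   # this still executes the whole file, defining both classes
Wanted = module.Wanted            # but only this name is bound in your namespace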
5
2
70,138,641
2021-11-27
https://stackoverflow.com/questions/70138641/using-sqlalchemy-declarative-base-with-sql-model
In one of our project, we used SQL Alchemy declarative Base for all of our models for years (and there is many). We want to try the new SQLModel library for our latest model declaration. For that, we tried to declare it separately from the Base object, and call the create_all methods for both. i.e : Base.metadata.create_all() and SQLModel.metadata.create_all(). But the model declared with SQLModel does not recognize the table declared with the Base. And at this moment, we cannot change all previous models declaration from Base to SQLModel. Here is a reproducible code : from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Column from sqlalchemy import Integer from typing import Optional from sqlmodel import Field, SQLModel from sqlalchemy import String # Declarative base object Base = declarative_base() class DummySATable(Base): __tablename__ = 'dummy_table' id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(32)) class DummyModelTable(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) dummy_table_id : int = Field(default=None, foreign_key='dummy_table.id') name: str Base.metadata.create_all(engine) SQLModel.metadata.create_all(engine) Here is the traceback : NoReferencedTableError Traceback (most recent call last) /tmp/ipykernel_307893/3665898561.py in <module> 24 25 Base.metadata.create_all(engine) ---> 26 SQLModel.metadata.create_all(engine) 27 ~/project/venv/lib/python3.9/site-packages/sqlalchemy/sql/schema.py in create_all(self, bind, tables, checkfirst) 4783 if bind is None: 4784 bind = _bind_or_error(self) -> 4785 bind._run_ddl_visitor( 4786 ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables 4787 ) ~/project/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py in _run_ddl_visitor(self, visitorcallable, element, **kwargs) 3108 def _run_ddl_visitor(self, visitorcallable, element, **kwargs): 3109 with self.begin() as conn: -> 3110 conn._run_ddl_visitor(visitorcallable, element, **kwargs) 3111 3112 @util.deprecated_20( ~/project/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py in _run_ddl_visitor(self, visitorcallable, element, **kwargs) 2111 2112 """ -> 2113 visitorcallable(self.dialect, self, **kwargs).traverse_single(element) 2114 2115 @util.deprecated( ~/project/venv/lib/python3.9/site-packages/sqlalchemy/sql/visitors.py in traverse_single(self, obj, **kw) 522 meth = getattr(v, "visit_%s" % obj.__visit_name__, None) 523 if meth: --> 524 return meth(obj, **kw) 525 526 def iterate(self, obj): ~/project/venv/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py in visit_metadata(self, metadata) 820 tables = list(metadata.tables.values()) 821 --> 822 collection = sort_tables_and_constraints( 823 [t for t in tables if self._can_create_table(t)] 824 ) ~/project/venv/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py in sort_tables_and_constraints(tables, filter_fn, extra_dependencies, _warn_for_cycles) 1284 continue 1285 -> 1286 dependent_on = fkc.referred_table 1287 if dependent_on is not table: 1288 mutable_dependencies.add((dependent_on, table)) ~/project/venv/lib/python3.9/site-packages/sqlalchemy/sql/schema.py in referred_table(self) 3703 3704 """ -> 3705 return self.elements[0].column.table 3706 3707 def _validate_dest_table(self, table): ~/project/venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py in __get__(self, obj, cls) 1111 if obj is None: 1112 return self -> 1113 obj.__dict__[self.__name__] = result = self.fget(obj) 1114 return result 1115 
~/project/venv/lib/python3.9/site-packages/sqlalchemy/sql/schema.py in column(self)
   2408
   2409         if tablekey not in parenttable.metadata:
-> 2410             raise exc.NoReferencedTableError(
   2411                 "Foreign key associated with column '%s' could not find "
   2412                 "table '%s' with which to generate a "
NoReferencedTableError: Foreign key associated with column 'dummymodeltable.dummy_table_id' could not find table 'dummy_table' with which to generate a foreign key to target column 'id'
What did I miss? Is it even possible (is there any workaround)?
I've finally found a simple way to do that, according to this thread. Since SQLModel inherits the Metadata object from SQLAlchemy, We can simply bind the metadata object of SQLModel to the metadata object from SQLAlchemy : # Declarative base object Base = declarative_base() SQLModel.metadata = Base.metadata # Table declaration.... SQLModel.metadata.create_all(engine)
5
7
70,134,026
2021-11-27
https://stackoverflow.com/questions/70134026/no-speedup-when-summing-uint16-vs-uint64-arrays-with-numpy
I have to do a large number of operations (additions) on relatively small integers, and I started considering which datatype would give the best performance on a 64 bit machine. I was convinced that adding together 4 uint16 would take the same time as one uint64, since the ALU could make 4 uint16 additions using only 1 uint64 adder. (Carry propagation means this doesn't work that easily for a single 64-bit adder, but this is how integer SIMD instructions work.) Apparently this is not the case: In [3]: data = np.random.rand(10000) In [4]: int16 = data.astype(np.uint16) In [5]: int64 = data.astype(np.uint64) In [6]: int32 = data.astype(np.uint32) In [7]: float32 = data.astype(np.float32) In [8]: float64 = data.astype(np.float64) In [9]: %timeit int16.sum() 13.4 µs ± 43.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [10]: %timeit int32.sum() 13.9 µs ± 347 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [11]: %timeit int64.sum() 9.33 µs ± 47.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [12]: %timeit float32.sum() 5.79 µs ± 6.51 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [13]: %timeit float64.sum() 6 µs ± 3.54 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) All int operations take the same time (more than the float ops) with no speedup. Is this due the C implementation of numpy not being fully optimized or is there some hardware limitation that I am not aware of?
TL;DR: I made an experimental analysis on Numpy 1.21.1. Experimental results show that np.sum does NOT (really) make use of SIMD instructions: no SIMD instructions are used for integers, and scalar SIMD instructions are used for floating-point numbers! Moreover, Numpy converts the integers to 64-bit values for smaller integer types by default so as to avoid overflows!
Note that this may not reflect all Numpy versions since there is ongoing work to provide SIMD support for commonly used functions (the not-yet-released Numpy 1.22.0rc1 continues this long-standing work). Moreover, the compiler or the processor used may significantly impact the results. The following experiments have been done using a Numpy retrieved from pip on a Debian Linux with an i5-9600KF processor.
Under the hood of np.sum
For floating-point numbers, Numpy uses a pairwise algorithm which is known to be quite numerically stable while being relatively fast. This can be seen in the code, but also simply using a profiler: TYPE_pairwise_sum is the C function called to compute the sum at runtime (where TYPE is DOUBLE or FLOAT).
For integers, Numpy uses a classical naive reduction. The C function called is ULONG_add_avx2 on AVX2-compatible machines. It also, surprisingly, converts items to 64-bit ones if the type is not np.int64.
Here is the hot part of the assembly code executed by the DOUBLE_pairwise_sum function:
 3,65 │ a0:┌─→add    $0x8,%rcx                  ; Loop iterator
 3,60 │    │  prefetcht0 (%r8,%rax,1)           ; Prefetch data
 9,46 │    │  vaddsd (%rax,%rbp,1),%xmm1,%xmm1  ; Accumulate an item in xmm1[0]
 4,65 │    │  vaddsd (%rax),%xmm0,%xmm0         ; Same for xmm0[0]
 6,91 │    │  vaddsd (%rax,%rdx,1),%xmm4,%xmm4  ; Etc.
 7,77 │    │  vaddsd (%rax,%rdx,2),%xmm7,%xmm7
 7,41 │    │  vaddsd (%rax,%r10,1),%xmm2,%xmm2
 7,27 │    │  vaddsd (%rax,%rdx,4),%xmm6,%xmm6
 6,80 │    │  vaddsd (%rax,%r11,1),%xmm3,%xmm3
 7,46 │    │  vaddsd (%rax,%rbx,1),%xmm5,%xmm5
 3,46 │    │  add    %r12,%rax                  ; Switch to the next items (x8)
 0,13 │    ├──cmp    %rcx,%r9                   ; Should the loop continue?
 3,27 │    └──jg     a0                         ; Jump to the beginning if so
The Numpy compiled code clearly uses the scalar SIMD instruction vaddsd (computing only a single double-precision item) although it successfully unrolls the loop 8 times. The same code is generated for FLOAT_pairwise_sum: vaddss is called 8 times.
For np.uint32, here is the hot part of the generated assembly code:
 2,37 │160:┌─→add    $0x1,%rax     ; Loop iterator
95,95 │    │  add    (%rdi),%rdx   ; Accumulate the values in %rdx
 0,06 │    │  add    %r10,%rdi     ; Switch to the next item
      │    ├──cmp    %rsi,%rax     ; Should the loop continue?
 1,08 │    └──jne    160           ; Jump to the beginning if so
Numpy obviously does not use SIMD instructions for the np.uint32 type. It does not even unroll the loop. The add (%rdi),%rdx instruction performing the accumulation is the bottleneck here due to the data dependency on the accumulator. The same loop can be seen for np.uint64 (despite the function being named ULONG_add_avx2). However, the np.uint32 version first calls the C function _aligned_contig_cast_uint_to_ulong in order to convert the integer items to a wider type. Numpy does that to avoid integer overflows. The same thing can be seen for the types np.uint8 and np.uint16 (although the name of the function differs). Fortunately, this cast function makes use of SIMD instructions (SSE), but it still takes a significant portion of the execution time (~30% of the np.sum time).
EDIT: as pointed out by @user2357112supportsMonica, the dtype parameter of np.sum can be explicitly specified. When it matches the dtype of the input array, the conversion is not performed. This results in a smaller execution time at the expense of a possible overflow.
Benchmark results
Here is the result on my machine:
uint16:  7.17 µs ± 80 ns per loop (mean ± std. dev. of 7 runs, 20000 loops each)
uint32:  7.11 µs ± 12.3 ns per loop (mean ± std. dev. of 7 runs, 20000 loops each)
uint64:  5.05 µs ± 8.57 ns per loop (mean ± std. dev. of 7 runs, 20000 loops each)
float32: 2.88 µs ± 9.27 ns per loop (mean ± std. dev. of 7 runs, 20000 loops each)
float64: 3.06 µs ± 10.6 ns per loop (mean ± std. dev. of 7 runs, 20000 loops each)
First of all, note that the results are very similar to the ones provided in the question, meaning that the behavior seen on my machine can be successfully reproduced on other machines. Thus, the explanation should also be consistent.
As you can see, the 64-bit version is faster than the other integer-based versions. This is due to the overhead of the conversion. The first two are equally fast because of the scalar loop and the add instruction being equally fast for 8-bit, 16-bit and 32-bit integers (this should be true for most 64-bit mainstream platforms). The integer implementations are slower than the floating-point ones because of the lack of (proper) loop unrolling.
The floating-point implementations are equally fast due to the scalar SIMD instructions. Indeed, the instructions vaddss (for np.float32) and vaddsd (for np.float64) have the same latency and throughput (at least on all modern Intel processors). Thus the throughput of the two implementations is the same, since the loops of the two implementations are similar (same unrolling).
Additional notes
It is a pity that np.sum does not fully make use of SIMD instructions, as this would speed up the computations a lot (especially for small integers).
[UPDATE] Looking at the Numpy code, I discovered that the code is not vectorized because the array stride is a runtime value and the compiler does not generate a specialized version where the stride is 1. In fact, this can be partially seen in the previous assembly code: the compiler used the instruction add %r10,%rdi because %r10 (the stride of the target array) is not known at compile-time. There is currently no optimization for this specific case of reduction in the Numpy code yet (the functions are relatively generic). This may change in the near future.
In addition to the stride problem, one big point makes it hard for compilers to automatically vectorize such code: floating-point addition is not associative (unless flags like -ffast-math are used).
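Going back to the dtype remark above: with the arrays from the question, passing the accumulator dtype explicitly skips the cast step, at the risk of the sum wrapping around on overflow:
int16.sum()                  # converts to a wide (64-bit) accumulator first: safe but slower
int16.sum(dtype=np.uint16)   # no conversion, faster, but the result may overflow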
7
12
70,147,889
2021-11-28
https://stackoverflow.com/questions/70147889/how-to-reduce-a-string-by-another-string-in-python
I would like to remove all characters from a first string s1 exactly the number of times they appear in another string s2, i.e. if s1 = "AAABBBCCCCCCD" and s2 = "ABBCCC" then the result should be s = "AABCCCD". (The order of the characters in the resulting string is actually irrelevant but it's a plus if it can be preserved.) The following rather crude code can do this: def reduce_string(s1, s2): s = s1 for c in s2: if c in s: s = s.replace(c, "", 1) return(s) # examples reduce_string("AAABBBCCCCCCD", "ABBCCC") reduce_string("AAABBBCCCCCCD", "ABBCCCE") My question is, can the same be achieved by clever use of some built-in function or at least in a more elegant way? Thank you for all your answers!
You can use counter objects. Subtract one against the other and join the remaining elements together. from collections import Counter s1 = "AAABBBCCCCCCD" s2 = "ABBCCC" counter = Counter(s1) counter.subtract(Counter(s2)) result = ''.join(counter.elements()) print(result) AABCCCD As a one-liner: print(''.join((Counter(s1) - Counter(s2)).elements()))
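This also handles the question's second example, where s2 contains a character that never appears in s1: the subtraction simply leaves a non-positive count for it, and both elements() and the - operator ignore such counts. A quick check:
print(''.join((Counter("AAABBBCCCCCCD") - Counter("ABBCCCE")).elements()))  # AABCCCD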
4
6
70,146,417
2021-11-28
https://stackoverflow.com/questions/70146417/python-dictionary-with-enum-as-key
Let's say I have an enum class Color(Enum): RED = "RED" GREEN = "GREEN" BLUE = "BLUE" I wanted to create a ColorDict class that works as a native python dictionary but only takes the Color enum or its corresponding string value as key. d = ColorDict() # I want to implement a ColorDict class such that ... d[Color.RED] = 123 d["RED"] = 456 # I want this to override the previous value d[Color.RED] # ==> 456 d["foo"] = 789 # I want this to produce an KeyError exception What's the "pythonic way" of implementing this ColorDict class? Shall I use inheritance (overriding python's native dict) or composition (keep a dict as a member)?
A simple solution would be to slightly modify your Color object and then subclass dict to add a test for the key. I would do something like this: class Color(Enum): RED = "RED" GREEN = "GREEN" BLUE = "BLUE" @classmethod def is_color(cls, color): if isinstance(color, cls): color=color.value if not color in cls.__members__: return False else: return True class ColorDict(dict): def __setitem__(self, k, v): if Color.is_color(k): super().__setitem__(Color(k), v) else: raise KeyError(f"Color {k} is not valid") def __getitem__(self, k): if isinstance(k, str): k = Color(k.upper()) return super().__getitem__(k) d = ColorDict() d[Color.RED] = 123 d["RED"] = 456 d[Color.RED] d["foo"] = 789 In the Color class, I have added a test function to return True or False if a color is/isn't in the allowed list. The upper() function puts the string in upper case so it can be compared to the pre-defined values. Then I have subclassed the dict object to override the __setitem__ special method to include a test of the value passed, and an override of __getitem__ to convert any key passed as str into the correct Enum. Depending on the specifics of how you want to use the ColorDict class, you may need to override more functions. There's a good explanation of that here: How to properly subclass dict and override __getitem__ & __setitem__
9
7
70,143,131
2021-11-28
https://stackoverflow.com/questions/70143131/application-default-credentials-in-google-cloud-build
Within my code, I am attempting to gather the Application Default Credentials from the associated service account in Cloud Build: from google.auth import default credentials, project_id = default() This works fine in my local space because I have set the environment variable GOOGLE_APPLICATION_CREDENTIALS appropriately. However, when this line is executed (via a test step in my build configuration) within Cloud Build, the following error is raised: google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started This is confusing me because, according to the docs: By default, Cloud Build uses a special service account to execute builds on your behalf. This service account is called the Cloud Build service account and it is created automatically when you enable the Cloud Build API in a Google Cloud project. Read Here If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the service account that is attached to the resource that is running your code. Read Here So why is the default call not able to access the Cloud Build service account credentials?
There is a trick: you have to define the network to use in your Docker build. Use the parameter --network=cloudbuild, like that steps: - name: gcr.io/cloud-builders/docker entrypoint: 'docker' args: - build - '--no-cache' - '--network=cloudbuild' - '-t' - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA' - . - '-f' - 'Dockerfile' ... You can find the documentation here
10
23
70,145,289
2021-11-28
https://stackoverflow.com/questions/70145289/fastest-way-to-find-all-pairs-of-close-numbers-in-a-numpy-array
Say I have a Numpy array of N = 10 random float numbers: import numpy as np np.random.seed(99) N = 10 arr = np.random.uniform(0., 10., size=(N,)) print(arr) out[1]: [6.72278559 4.88078399 8.25495174 0.31446388 8.08049963 5.6561742 2.97622499 0.46695721 9.90627399 0.06825733] I want to find all unique pairs of numbers that are not different from each other more than a tolerance tol = 1. (i.e. absolute difference <= 1). Specifically, I want to get all unique pairs of indexes. The indexes of each close pair should be sorted, and all close pairs should be sorted by the first index. I managed to write the following working code: def all_close_pairs(arr, tol=1.): res = set() for i, x1 in enumerate(arr): for j, x2 in enumerate(arr): if i == j: continue if np.isclose(x1, x2, rtol=0., atol=tol): res.add(tuple(sorted([i, j]))) res = np.array(list(res)) return res[res[:,0].argsort()] print(all_close_pairs(arr, tol=1.)) out[2]: [[1 5] [2 4] [3 7] [3 9] [7 9]] However, in reality I have an array of N = 1000 numbers, and my code becomes extremely slow due to the nested for loops. I believe there are much more efficient ways to do this with Numpy vectorization. Does anyone know the fastest way to do this in Numpy?
This is a solution with pure numpy operations. It seems pretty fast on my machine, but I don't know what kind of speed we're looking for. def all_close_pairs(arr, tol=1.): N = arr.shape[0] # get indices in the array to consider using meshgrid pair_coords = np.array(np.meshgrid(np.arange(N), np.arange(N))).T # filter out pairs so we get indices in increasing order pair_coords = pair_coords[pair_coords[:, :, 0] < pair_coords[:, :, 1]] # compare indices in your array for closeness is_close = np.isclose(arr[pair_coords[:, 0]], arr[pair_coords[:, 1]], rtol=0, atol=tol) return pair_coords[is_close, :]
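An equivalent way to build the i < j index pairs, sketched here as an alternative to the meshgrid step, is np.triu_indices:
def all_close_pairs_triu(arr, tol=1.):
    i, j = np.triu_indices(arr.shape[0], k=1)   # all index pairs with i < j
    mask = np.abs(arr[i] - arr[j]) <= tol       # same test as isclose with rtol=0, atol=tol
    return np.column_stack((i[mask], j[mask]))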
4
3
70,143,797
2021-11-28
https://stackoverflow.com/questions/70143797/pycharm-type-warnings-iterable-vs-valuesview-keysview-itemsview
Lately in PyCharm (I don't know which version started it, I'm currently running 2021.2.3 Pro), I'm getting warnings that don't make sense. For example, this snippet: d = {1: 2, 3: 4, 5: 6} for v in d.values(): print(v) Triggers the following warning: Expected type 'collections.Iterable', got 'ValuesView' instead In the above snippet, replacing values() with keys() gives a similar warning. BTW, the return value of d.values() is dict_values and not ValuesView: type(d.values()) <class 'dict_values'> Why does PyCharm give me this warning, when this has always been the correct way to iterate over dictionary keys/values? It might be a bug in PyCharm, but maybe I'm missing something. EDIT: Even the sample code at https://docs.python.org/3.8/library/stdtypes.html#dict-views gets this warning, see screenshot.
This is a regression of a known bug in the PyCharm 2021.2.3 linter, see PY-41457 on the JetBrains bug tracker. It doesn't happen for me using the immediately previous PyCharm 2021.2.2 Pro version. The solution, for now, is to report the regression on the JetBrains bug tracker and wait for a fix. There's nothing wrong with your code.
5
3
70,143,961
2021-11-28
https://stackoverflow.com/questions/70143961/what-is-a-pythonic-way-to-conditionally-compose-a-sequence
In Perl I can do something like this to dynamically create an array based on some conditions: my @arr = ( $foo_defined ? 'foo' : (), $bar_defined ? 'bar' : (), $baz_defined ? 'baz' : (), ); This creates an array of up to 3 elements based on the values of the variables. What would be the closest alternative to this in Python?
A close literal translation might be
from itertools import chain

arr = list(chain(
    ['foo'] if foo_defined else [],
    ['bar'] if bar_defined else [],
    ['baz'] if baz_defined else [],
))
a if b else c is the Python equivalent of b ? a : c. chain (from itertools) concatenates multiple iterable values into a single sequence, similar to how ( (), (), () ) builds an array consisting of array elements (rather than nesting the arrays). list turns the chain object into an actual list.
The differing semantics of a Perl array and a Python list make it difficult to do exactly the same thing as tersely as Perl does. Python lacks the ability to have an expression evaluate to nothing to simulate the Perl array handling in your example. (Or put another way, [ [], [], [] ] in Python creates a list containing 3 empty lists, rather than a single empty list.)
Patrick Artner's answer takes advantage of the fact that a list comprehension can selectively include values from one iterable into the list under construction. A combination of his and my answers might look like
arr = [s for s in ['foo' if foo_defined else None,
                   'bar' if bar_defined else None,
                   'baz' if baz_defined else None]
       if s is not None]
The difference, though, is that a full list is built first, then a second, final list is built from the first one.
Rolv Apneseth's answer dispenses with a single expression, building the result from an initially empty list and conditionally adding values to it one item at a time. It's terse, but the simulation of Perl-style statement modifiers isn't really idiomatic in Python. Instead, one would just use if statements:
arr = []
if foo_defined:
    arr.append("foo")
if bar_defined:
    arr.append("bar")
if baz_defined:
    arr.append("baz")
4
4
70,135,719
2021-11-27
https://stackoverflow.com/questions/70135719/creating-a-nested-recursive-list-without-slicing
I need to write a function that receives an non-negative integer and returns: [] for n=0 [[]] for n=1 [[],[[]]] for n=2 [[],[[]],[[],[[]]]] for n=3 And so on. For n, we will receive an n sized list, so that in index i there will be all the i-1 elements from the list. I don't know how to explain that better, English isn't my first language. I'm not allowed to use list slicing or loops and I'm supposed to create deep copies of each list, without the copy module. I'm not allowed to let 2 different lists or indexes point to the same list in memory. This is what I tried: def list_seq(x, outer_list=[]): if x == 0: return [] outer_list.append(list_seq(x-1,outer_list)) return outer_list And the output for print(list_seq(2)) is [[], [...]].
If you can't use loops, you can use the following: def recursive_list(n): if n == 0: return [] else: return recursive_list(n-1) + [recursive_list(n-1)] EDIT You can do the following if you want to use append: def recursive_list(n: int) -> list: if n: result = recursive_list(n-1) result.append(recursive_list(n-1)) return result return [] NOTE as pointed out in the comments, caching introduces some reference issues, so I have removed the cached versions.
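A quick sanity check against the outputs listed in the question:
print(recursive_list(0))  # []
print(recursive_list(1))  # [[]]
print(recursive_list(3))  # [[], [[]], [[], [[]]]]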
6
4
70,138,815
2021-11-27
https://stackoverflow.com/questions/70138815/fastapi-responding-slowly-when-calling-through-other-python-app-but-fast-in-cur
I have an issue that I can't wrap my head around. I have an API service built using FastAPI, and when I try to call any endpoint from another Python script on my local machine, the response takes 2+ seconds. When I send the same request through cURL or the built-in Swagger docs, the response is nearly instant. The entire server script is this: from fastapi import FastAPI import uvicorn app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} if __name__ == '__main__': uvicorn.run(app, host='0.0.0.0', port=8000) I then call it from a test script using HTTPX. I also tried with the requests package, and it is the same result. import httpx r = httpx.get('http://localhost:8000/') print(r.elapsed) This prints something like: 0:00:02.069705 I then do the same thing using cURL: curl -w "@curl-format.txt" -o /dev/null -X 'GET' 'http://localhost:8000/' This prints: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 32 100 32 0 0 941 0 --:--:-- --:--:-- --:--:-- 969 time_namelookup: 0.006436s time_connect: 0.006747s time_appconnect: 0.000000s time_pretransfer: 0.006788s time_redirect: 0.000000s time_starttransfer: 0.034037s ---------- time_total: 0.034093s The issue isn't what the endpoint does, but rather that it doesn't even start executing for 2 seconds. I have a debugger running on the endpoint, and the first line only gets executed after those 2 seconds. I tried to inspect the request to see whether there are any headers or similar in the request that could slow it down, but nothing. When I try again with the headers generated by HTTPX, it still executes fast: curl -w "@curl-format.txt" -o /dev/null -X 'GET' \ 'http://localhost:8000/events' \ -H 'accept: */*' \ -H 'host: localhost:8000' \ -H 'accept-encoding: gzip, deflate' \ -H 'connection: keep-alive' \ -H 'user-agent: python-httpx/0.20.0' Here is a screenshot of the request in PyCharm, it unfortunately can't be dumped to JSON directly. I'm starting to think that it has something to do with Uvicorn and how it runs the app, but I can't figure out why.
Try using "127.0.0.1" instead of "localhost" to refer to your machine. I once had a similar issue, where the DNS lookup for localhost on Windows was taking half a second or longer. I don't have an explanation for that behaviour, but at least one other person on SO seems to have struggled with it as well…
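In the test script from the question that is just:
import httpx

r = httpx.get('http://127.0.0.1:8000/')
print(r.elapsed)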
7
19
70,138,169
2021-11-27
https://stackoverflow.com/questions/70138169/what-is-the-point-of-an-abstractmethod-with-default-implementation
I’ve seen code that declares abstract methods that actually have a non-trivial body. What is the point of this since you have to implement in any concrete class anyway? Is it just to allow you to do something like this? def method_a(self): super(self).method_a()
I've used this before in cases where it was possible to have the concrete implementation, but I wanted to force subclass implementers to consider if that implementation is appropriate for them. One specific example: I was implementing an abstract base class with an abstract factory method so subclasses can define their own __init__ function but have a common interface to create them. It was something like class Foo(ABC): def __init__(self, a, b, c): self.a = a self.b = b self.c = c @classmethod @abstractmethod def from_args(cls, a, b, c) -> "Foo": return cls(a, b, c) Subclasses usually only need a subset of the arguments. When testing, it's cumbersome to access the framework which actually uses this factory method, so it's likely someone will forget to implement the from_args factory function since it wouldn't come up in their usual testing. Making it an abstractmethod would make it impossible to initialize the class without first implementing it, and will definitely come up during normal testing.
10
7
70,131,817
2021-11-27
https://stackoverflow.com/questions/70131817/fourier-transform-time-series-in-python
I've got a time series of sunspot numbers, where the mean number of sunspots is counted per month, and I'm trying to use a Fourier Transform to convert from the time domain to the frequency domain. The data used is from https://wwwbis.sidc.be/silso/infosnmtot. The first thing I'm confused about is how to express the sampling frequency as once per month. Do I need to convert it to seconds, eg. 1/(seconds in 30 days)? Here's what I've got so far: fs = 1/2592000 #the sampling frequency is 1/(seconds in a month) fourier = np.fft.fft(sn_value) #sn_value is the mean number of sunspots measured each month freqs = np.fft.fftfreq(sn_value.size,d=fs) power_spectrum = np.abs(fourier) plt.plot(freqs,power_spectrum) plt.xlim(0,max(freqs)) plt.title("Power Spectral Density of the Sunspot Number Time Series") plt.grid(True) I don't think this is correct - namely because I don't know what the scale of the x-axis is. However I do know that there should be a peak at (11years)^-1. The second thing I'm wondering from this graph is why there seems to be two lines - one being a horizontal line just above y=0. It's more clear when I change the x-axis bounds to: plt.xlim(0,1). Am I using the fourier transform functions incorrectly?
You can use any units you want. Feel free to express your sampling frequency as fs=12 (samples/year), the x-axis will then be 1/year units. Or use fs=1 (sample/month), the units will then be 1/month. The extra line you spotted comes from the way you plot your data. Look at the output of the np.fft.fftfreq call. The first half of that array contains positive values from 0 to 1.2e6 or so, the other half contain negative values from -1.2e6 to almost 0. By plotting all your data, you get a data line from 0 to the right, then a straight line from the rightmost point to the leftmost point, then the rest of the data line back to zero. Your xlim call makes it so you don’t see half the data plotted. Typically you’d plot only the first half of your data, just crop the freqs and power_spectrum arrays.
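A minimal sketch of both points, reusing sn_value from the question; with one sample per month the 11-year cycle should show up near 1/132 ≈ 0.0076 cycles per month:
import numpy as np
import matplotlib.pyplot as plt

fs = 1.0                                      # 1 sample per month -> frequencies in cycles/month
fourier = np.fft.fft(sn_value)
freqs = np.fft.fftfreq(sn_value.size, d=1/fs)
half = sn_value.size // 2                     # keep only the positive-frequency half
plt.plot(freqs[:half], np.abs(fourier[:half])**2)
plt.axvline(1/132, color='r', ls='--')        # ~11-year cycle = 132 months
plt.xlabel("Frequency [1/month]")
plt.show()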
7
4
70,132,993
2021-11-27
https://stackoverflow.com/questions/70132993/does-numpy-array-really-take-less-memory-than-python-list
Please refer to below execution - import sys _list = [2,55,87] print(f'1 - Memory used by Python List - {sys.getsizeof(_list)}') narray = np.array([2,55,87]) size = narray.size * narray.itemsize print(f'2 - Memory usage of np array using itemsize - {size}') print(f'3 - Memory usage of np array using getsizeof - {sys.getsizeof(narray)}') Here is what I get in result 1 - Memory used by Python List - 80 2 - Memory usage of np array using itemsize - 12 3 - Memory usage of np array using getsizeof - 116 One way of calculation suggests numpy array is consuming way too less memory but other says it is consuming more than regular python list? Shouldn't I be using getSizeOf with numpy array. What I am doing wrong here? Edit - I just checked, an empty python list is consuming 56 bytes whereas an empty np array 104. Is this space being used in pointing to associated built-in methods and attributes?
The calculation using: size = narray.size * narray.itemsize does not include the memory consumed by non-element attributes of the array object. This can be verified by the documentation of ndarray.nbytes: >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 In the above link, it can be read that ndarray.nbytes: Does not include memory consumed by non-element attributes of the array object. Note that from the code above you can conclude that your calculation excludes non-element attributes given that the value is equal to the one from ndarray.nbytes. A list of the non-element attributes can be found in the section Array Attributes, including here for completeness: ndarray.flags Information about the memory layout of the array. ndarray.shape Tuple of array dimensions. ndarray.strides Tuple of bytes to step in each dimension when traversing an array. ndarray.ndim Number of array dimensions. ndarray.data Python buffer object pointing to the start of the array’s data. ndarray.size Number of elements in the array. ndarray.itemsize Length of one array element in bytes. ndarray.nbytes Total bytes consumed by the elements of the array. ndarray.base Base object if memory is from some other object. With regards to sys.getsizeof it can be read in the documentation (emphasis mine) that: Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.
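A quick way to see the two parts separately (the exact numbers depend on the platform and the default integer width):
import sys
import numpy as np

narray = np.array([2, 55, 87])
print(narray.nbytes)                          # 12 or 24 depending on the default int size
print(sys.getsizeof(narray) - narray.nbytes)  # the fixed per-object overhead, ~104 bytes in the question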
8
7
70,121,429
2021-11-26
https://stackoverflow.com/questions/70121429/how-to-plot-scatter-graph-with-markers-based-on-column-value
I am trying to plot a scatter graph on some data with grouping. They are grouped by the column group and I want them to have different marker styles based on the group. Minimal working code import matplotlib.pyplot as plt import pandas as pd import numpy as np colors = ['r','g','b','y'] markers = ['o', '^', 's', 'P'] df = pd.DataFrame() df["index"] = list(range(100)) df["data"] = np.random.randint(100, size=100) df["group"] = np.random.randint(4, size=100) df["color"] = df.apply(lambda x: colors[x["group"]], axis=1) df["marker"] = df.apply(lambda x: markers[x["group"]], axis=1) plt.scatter(x=df["index"], y=df["data"], c=df["color"]) # What I thought would have worked # plt.scatter(x=df["index"], y=df["data"], c=df["color"], marker=df["marker"]) plt.show() What I want I want the groups to have different marker styles as well. For example the red entries will have marker "o" (big dot), green entries with marker "^" (upward triangle) and so on. What I tried I thought plt.scatter(x=df["index"], y=df["data"], c=df["color"], marker=df["marker"]) would have worked but nope... TypeError: 'Series' objects are mutable, thus they cannot be hashed I can for loop over the DataFrame and group the entries by their group. Then plot them with the marker argument set with the list defined (like plt.scatter(..., marker=markers[group]). That would result in 4 plt.scatter(...) as there are 4 groups in total. But that is ugly IMO to loop through a DataFrame row by row and I strongly believe there is a better way. Thanks in advance!
matplotlib that is ugly IMO to loop through a DataFrame row by row and I strongly believe there is a better way With matplotlib, I don't think there is a better way than to loop. Note that if you groupby the markers, it does not loop row by row, just group by group (so 4 times in this case). This will call plt.scatter 4 times (once per marker): for marker, d in df.groupby('marker'): plt.scatter(x=d['index'], y=d['data'], c=d['color'], marker=marker, label=marker) plt.legend() seaborn As r-beginners commented, sns.scatterplot supports multiple markers via style: sns.scatterplot(x=df['index'], y=df['data'], c=df['color'], style=df['marker'])
4
8
70,123,328
2021-11-26
https://stackoverflow.com/questions/70123328/how-to-set-environment-variables-in-github-actions-using-python
I want to execute a python script to set some environment variables in GitHub actions. I want to use those environment variables later in my GitHub actions steps. My python script looks like: new_ver = get_version_from_commit(commit_msg) if new_ver: if new_ver == "false": os.environ["SHOULD_PUSH"] = "0" print("Not pushing the image to k8s") exit(0) else: new_tag = app_name + ":" + str(new_ver) os.environ["DOCKER_IMAGE_TAG"] = new_tag os.environ["SHOULD_PUSH"] = "1" print("New tag: " + new_tag) exit(0) Part of my GitHub actions file, after the execution of the above python script looks like: - name: Print env var run: echo ${{ env.DOCKER_IMAGE_TAG }} - name: Build and push id: docker_build uses: docker/build-push-action@v2 with: push: true tags: ${{ secrets.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_TAG }} But using os.environ won't expose the environment variable outside of the python process. How can I fix this ?
You cannot set environment variables directly. Instead, you need to write your environment variables into a file, whose name you can get via $GITHUB_ENV. In a simple workflow step, you can append it to the file like so (from the docs): echo "{name}={value}" >> $GITHUB_ENV In python, you can do it like so: import os env_file = os.getenv('GITHUB_ENV') with open(env_file, "a") as myfile: myfile.write("MY_VAR=MY_VALUE") Given this python script, you can set and use your new environment variable like the following: - run: python write-env.py - run: echo ${{ env.MY_VAR }}
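Adapted to the two variables from the question's script; note the trailing newlines, which matter as soon as you append more than one variable:
import os

env_file = os.getenv('GITHUB_ENV')
with open(env_file, "a") as myfile:
    myfile.write(f"DOCKER_IMAGE_TAG={new_tag}\n")
    myfile.write("SHOULD_PUSH=1\n")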
18
30
70,110,429
2021-11-25
https://stackoverflow.com/questions/70110429/pytorch-runtimeerror-result-type-float-cant-be-cast-to-the-desired-output-typ
I have a model which looks as follows: IMG_WIDTH = IMG_HEIGHT = 224 class AlexNet(nn.Module): def __init__(self, output_dim): super(AlexNet, self).__init__() self._to_linear = None self.x = torch.randn(3, IMG_WIDTH, IMG_HEIGHT).view(-1, 3, IMG_WIDTH, IMG_HEIGHT) self.features = nn.Sequential( nn.Conv2d(3, 64, 3, 2, 1), # in_channels, out_channels, kernel_size, stride, padding nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(64, 192, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(192, 384, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(384, 256, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(512, 256, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True) ) self.conv(self.x) self.classifier = nn.Sequential( nn.Dropout(.5), nn.Linear(self._to_linear, 4096), nn.ReLU(inplace=True), nn.Dropout(.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, output_dim), ) def conv(self, x): x = self.features(x) if self._to_linear is None: self._to_linear = x.shape[1] * x.shape[2] * x.shape[3] return x def forward(self, x): x = self.conv(x) h = x.view(x.shape[0], -1) x = self.classifier(h) return x, h Here is my optimizer and loss functions: optimizer = torch.optim.Adam(model.parameters()) criterion = nn.BCEWithLogitsLoss().to(device) Here is my train and evaluate functions: def train(model, iterator, optimizer, criterion, device): epoch_loss, epoch_acc = 0, 0 model.train() for (x, y) in iterator: # features and labels to the device x = x.to(device) y = y.to(device).long() # Zero the gradients optimizer.zero_grad() y_pred, _ = model(x) # Calculate the loss and accuracy loss = criterion(y_pred.squeeze(), y) acc = binary_accuracy(y_pred, y) # Backward propagate loss.backward() # Update the weights optimizer.step() epoch_loss +=loss.item() epoch_acc += acc.item() return epoch_loss/len(iterator), epoch_acc/len(iterator) def evaluate(model, iterator, criterion, device): epoch_loss, epoch_acc = 0, 0 model.eval() with torch.no_grad(): for (x, y) in iterator: x = x.to(device) y = y.to(device).long() y_pred, _ = model(x) loss = criterion(y_pred, y) acc = binary_accuracy(y_pred, y) epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss/len(iterator), epoch_acc/len(iterator) This is the error that I'm getting: RuntimeError: result type Float can't be cast to the desired output type Long What may be possibly my problem because I have tried to convert my labels to long tensors as follows: y = y.to(device).long() But it seems not to work.
I was getting the same error doing this: loss_fn(output, target) where the output was Tensor torch.float32 and target was Tensor torch.int64. What solved this problem was calling the loss function like this: loss_fn(output, target.float())
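Applied to the training loop in the question, that means casting the labels to float instead of long, since BCEWithLogitsLoss expects floating-point targets:
y = y.to(device).float()   # not .long(): BCEWithLogitsLoss needs float targets
y_pred, _ = model(x)
loss = criterion(y_pred.squeeze(), y)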
25
45
70,122,203
2021-11-26
https://stackoverflow.com/questions/70122203/pandas-multiindex-intersection-on-partial-levels
say I have two dataframes with multiindices, where one of the indices is deeper than the other. Now I want to select only those rows from the one (deeper) dataframe where their partial index is included in the other dataframe. Example input: df = pandas.DataFrame( { "A": ["a1", "a1", "a1", "a2", "a2", "a2"], "B": ["b1", "b1", "b2", "b1", "b2", "b2"], "C": ["c1", "c2", "c1", "c1", "c1", "c2"], "V": [1, 2, 3, 4, 5, 6], } ).set_index(["A", "B", "C"]) df2 = pandas.DataFrame( { "A": ["a1", "a1", "a2", "a2"], "B": ["b1", "b3", "b1", "b3"], "X": [1, 2, 3, 4] } ).set_index(["A", "B"]) Visual: V A B C a1 b1 c1 1 c2 2 b2 c1 3 a2 b1 c1 4 b2 c1 5 c2 6 X A B a1 b1 1 b3 2 a2 b1 3 b3 4 Desired output: result = pandas.DataFrame( { "A": ["a1", "a1", "a2"], "B": ["b1", "b1", "b1"], "C": ["c1", "c2", "c1"], "V": [1, 2, 4], } ).set_index(["A", "B", "C"]) Visual: V A B C a1 b1 c1 1 c2 2 a2 b1 c1 4 I tried df.loc[df2.index] and df.loc[df.index.intersection(df2.index)] but that does not work. I guess I could do df.join(df2, how="inner") and afterwards remove all the columns of df2 that were added, but that is cumbersome. Or is there a way to take away all the columns of df2? I would appreciate any help.
One option is to use isin on the specific labels common to both, and use the resulting boolean to filter df: df.loc[df.index.droplevel('C').isin(df2.index)] V A B C a1 b1 c1 1 c2 2 a2 b1 c1 4
4
8
70,119,576
2021-11-26
https://stackoverflow.com/questions/70119576/exception-value-failed-loading-libasound-so-2-libasound-so-2-cannot-open-shar
I made a drowsiness detector for driving using django. I tried deploying it on heroku and everything seems to be working fine except as soon as I try to open the camera, I get the error: Exception Value: Failed loading libasound.so.2: libasound.so.2: cannot open shared object file: No such file or directory All the other pages are working fine, you can try it out here (login credentials username: temp password: Temp@123). As soon as you click on Start Driving it throws this exception. The app is working fine in localhost. Please help! Log after git push heroku master: Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 12 threads Compressing objects: 100% (2/2), done. Writing objects: 100% (3/3), 288 bytes | 288.00 KiB/s, done. Total 3 (delta 1), reused 0 (delta 0) remote: Compressing source files... done. remote: Building source: remote: remote: -----> Building on the Heroku-20 stack remote: -----> Using buildpack: heroku/python remote: -----> Python app detected remote: -----> Using Python version specified in runtime.txt remote: -----> No change in requirements detected, installing from cache remote: -----> Using cached install of python-3.8.12 remote: -----> Installing pip 21.3.1, setuptools 57.5.0 and wheel 0.37.0 remote: -----> Installing SQLite3 remote: -----> Installing requirements with pip remote: -----> $ python manage.py collectstatic --noinput remote: 130 static files copied to '/tmp/build_fa3901e7/staticfiles'. remote: remote: -----> Discovering process types remote: Procfile declares types -> web remote: remote: -----> Compressing... remote: Done: 319.5M remote: -----> Launching... remote: ! Warning: Your slug size (319 MB) exceeds our soft limit (300 MB) which may affect boot time. remote: Released v13 remote: https://dontsleepapp.herokuapp.com/ deployed to Heroku remote: remote: Verifying deploy... done. To https://git.heroku.com/dontsleepapp.git After heroku logs --tail 2021-11-26T06:07:53.697167+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response 2021-11-26T06:07:53.697168+00:00 app[web.1]: response = wrapped_callback(request, *callback_args, **callback_kwargs) 2021-11-26T06:07:53.697169+00:00 app[web.1]: File "/app/myapp/views.py", line 126, in StartDrive 2021-11-26T06:07:53.697169+00:00 app[web.1]: pygame.mixer.init() 2021-11-26T06:07:53.697169+00:00 app[web.1]: pygame.error: Failed loading libasound.so.2: libasound.so.2: cannot open shared object file: No such file or directory 2021-11-26T06:07:53.697834+00:00 app[web.1]: 10.1.4.217 - - [25/Nov/2021:22:07:53 -0800] "GET /myapp/drive/ HTTP/1.1" 500 55987 "https://dontsleepapp.herokuapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" 2021-11-26T06:07:53.701240+00:00 heroku[router]: at=info method=GET path="/myapp/drive/" host=dontsleepapp.herokuapp.com request_id=4f1d6ac0-8d16-4bd0-8ac9-381ed4758b91 fwd="73.93.42.18" dyno=web.1 connect=0ms service=66ms status=500 bytes=56254 protocol=https
Your machine is missing libasound; install it with: sudo apt-get install libasound2
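Since you cannot run sudo apt-get on a Heroku dyno, a common workaround (my assumption here, not something stated in the question) is the heroku-community/apt buildpack together with an Aptfile that lists the missing package:
# Aptfile (placed in the repository root)
libasound2

# add the apt buildpack in front of the Python one, then redeploy
heroku buildpacks:add --index 1 heroku-community/apt
git add Aptfile && git commit -m "Install libasound2 via apt buildpack" && git push heroku master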
4
10
70,119,439
2021-11-26
https://stackoverflow.com/questions/70119439/return-the-index-of-the-first-element-in-the-list-from-where-incremental-increas
Suppose I have a list like this, where numbers increase in different steps: [ 0, 4, 6, 8, 12, 15, 19, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] I want to return the index for the first element in the list where the increase is incremental (+1 step only). In this case, 23 is the first location from which point the increase becomes incremental, and its index would be 8, which is what I want as an output. What would be an elegant simple way to achieve this? This is what I have tried: >>> for (a,b) in zip(l, l[1:]): ... if b-a == 1: ... print(l.index(a)) ... break UPDATE: In this particular setup, once the increase becomes incremental it will continue to stay that way. It is possible that the increase will never become incremental.
Solution 1: operator from operator import sub, indexOf L = [ 0, 4, 6, 8, 12, 15, 19, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] print(indexOf(map(sub, L[1:], L), 1)) # prints 8 Raises ValueError: sequence.index(x): x not in sequence if difference 1 never occurs, so might want to use try/except for that. Solution 2: bisect This one only takes O(log n) time, using the monotonicity of incrementalness (as you commented "once the increase becomes incremental it will continue to stay that way"). from bisect import bisect L = [ 0, 4, 6, 8, 12, 15, 19, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] class IsIncremental: def __getitem__(_, i): return L[i+1] - L[i] == 1 print(bisect(IsIncremental(), False, 0, len(L) - 1)) # prints 8 Prints len(L) - 1 if difference 1 never occurs. Btw... readability As PEP 8 says: Never use the characters 'l' (lowercase letter el), [...] as single character variable names. In some fonts, these characters are indistinguishable from the numerals one and zero. When tempted to use 'l', use 'L' instead.
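To make Solution 1 robust when the increase never becomes incremental, a small sketch of the try/except guard mentioned above:
from operator import sub, indexOf

L = [0, 4, 6, 8, 12, 15, 19, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]

try:
    idx = indexOf(map(sub, L[1:], L), 1)   # first position where L[i+1] - L[i] == 1
except ValueError:
    idx = None                             # the increase never becomes incremental

print(idx)   # 8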
4
6
70,115,749
2021-11-25
https://stackoverflow.com/questions/70115749/is-there-any-reason-for-changing-the-channels-order-of-an-image-from-rgb-to-bgr
I've been following this keras video classification tutorial where, in the data preparation section, they load the frames of a video in the load_video function pretty generically, but what caught my eye was this line: frame = frame[:, :, [2, 1, 0]] This is the first time I have encountered this; most of the time you just append the frame "as-is" to your list of frames, but here they change the order of the channels (if I'm not mistaken) from RGB to BGR. I couldn't find anything related to it on the web or in their docs. Can someone give me some insight into this decision?
From experience, the reason why the order can change is dependent on the framework you are using to load in images. OpenCV in particular orders the channels in BGR format because of mostly historical reasons that are now outdated. Because of this, we are unfortunately stuck with this design choice. Images in the regular RGB format can be seen with scikit-image, matplotlib and Pillow. In fact, if you look at the load_video function, it uses OpenCV to open up a video so the frames coming in are BGR format. Therefore, the swapping of channels is mandatory to get it to RGB format: def load_video(path, max_frames=0): cap = cv2.VideoCapture(path) frames = [] try: while True: ret, frame = cap.read() if not ret: break frame = crop_center(frame) frame = frame[:, :, [2, 1, 0]] frames.append(frame) if len(frames) == max_frames: break finally: cap.release() return np.array(frames) You of course do not need to reverse the channels as a neural network will learn based on the input data that it was provided, but people tend to do this so that it's easy to debug images and not have to worry about continuously reversing the channels for display. Specifically, if a neural network was trained in BGR ordering, if you loaded in images in RGB format then the reversal of the channels needs to be done as that was how the image channels were represented in training. All in all, it depends on the framework but you need to keep this in mind when using a neural network after it has been trained. If the data was trained in BGR format, if your images are read in RGB format you'll need to reverse the channels prior to inference. In fact, this is a common bug when using networks! Be extremely diligent and understand how the image data was preprocessed for the network before using it.
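A minimal sketch of the two equivalent ways to do the swap, assuming some_frame.jpg is any image file on disk (OpenCV reads it in BGR order):
import cv2

frame_bgr = cv2.imread("some_frame.jpg")                    # hypothetical file, loaded as BGR

frame_rgb_manual = frame_bgr[:, :, [2, 1, 0]]               # reverse the channels by indexing
frame_rgb_cv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # same result, more explicit

assert (frame_rgb_manual == frame_rgb_cv).all()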
4
8
70,110,804
2021-11-25
https://stackoverflow.com/questions/70110804/fast-removal-of-only-zero-columns-in-pandas-dataframe
Using: import pandas as pd import numpy as np df = pd.DataFrame(np.random.randint(0,3,(100000,5000))) df = df.loc[:, (df != 0).any(axis=0)] to get rid of columns containing only zeros is way too slow for a very large (1000000x2000) dataframe. Any suggestions how to speed this up? Thanks
There is a much faster way to implement that using Numba. Indeed, most of the Numpy implementation will create huge temporary arrays that are slow to fill and read. Moreover, Numpy will iterate over the full dataframe while this is often not needed (at least in your example). The point is that you can very quickly know if you need to keep a column by just iteratively check column values and early stop the computation of the current column if there is any 0 (typically at the beginning). Moreover, there is no need to always copy the entire dataframe (using about 1.9 GiB of memory): when all the columns are kept. Finally, you can perform the computation in parallel. However, there are performance-critical low-level catches. First, Numba cannot deal with Pandas dataframes, but the conversion to a Numpy array is almost free using df.values (the same thing applies for the creation of a new dataframe). Moreover, regarding the memory layout of the array, it could be better to iterate either over the lines or over the columns in the innermost loop. This layout can be fetched by checking the strides of the input dataframe Numpy array. Note that the example use a row-major dataframe due to the (unusual) Numpy random initialization, but most dataframes tend to be column major. Here is an optimized implementation: import numba as nb @nb.njit('int_[:,:](int_[:,:])', parallel=True) def filterNullColumns(dfValues): n, m = dfValues.shape s0, s1 = dfValues.strides columnMajor = s0 < s1 toKeep = np.full(m, False, dtype=np.bool_) # Find the columns to keep # Only-optimized for column-major dataframes (quite complex otherwise) for colId in nb.prange(m): for rowId in range(n): if dfValues[rowId, colId] != 0: toKeep[colId] = True break # Optimization: no columns are discarded if np.all(toKeep): return dfValues # Create a new dataframe newColCount = np.sum(toKeep) res = np.empty((n,newColCount), dtype=dfValues.dtype) if columnMajor: newColId = 0 for colId in nb.prange(m): if toKeep[colId]: for rowId in range(n): res[rowId, newColId] = dfValues[rowId, colId] newColId += 1 else: for rowId in nb.prange(n): newColId = 0 for colId in range(m): res[rowId, newColId] = dfValues[rowId, colId] newColId += toKeep[colId] return res result = pd.DataFrame(filterNullColumns(df.values)) Here are the result on my 6-core machine: Reference: 1094 ms Valdi_Bo answer: 1262 ms This implementation: 0.056 ms (300 ms with discarded columns) This, the implementation is about 20 000 times faster than the reference implementation on the provided example (no discarded column) and 4.2 times faster on more pathological cases (only one column discarded). If you want to reach even faster performance, then you can perform the computation in-place (dangerous, especially due to Pandas) or use smaller datatypes (like np.uint8 or np.int16) since the computation is mainly memory-bound.
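If you would rather stay in plain pandas/NumPy, a smaller tweak (just a sketch, not benchmarked here) is to build the column mask from the underlying array, which avoids materialising the full boolean DataFrame that df != 0 creates:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 3, (100000, 5000)))

mask = df.values.any(axis=0)   # True for columns holding at least one non-zero value
df = df.loc[:, mask]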
4
4
70,110,613
2021-11-25
https://stackoverflow.com/questions/70110613/str-wrapper-in-custom-exception
Why does the below code print error msg instead of ABC\nerror msg? class CustomException(Exception): """ABC""" def __init__(self, *args): super().__init__(*args) self.__str__ = self._wrapper(self.__str__) def _wrapper(self, f): def _inner(*args, **kwargs): return self.__doc__ + '\n' + f(*args, **kwargs) return _inner print(CustomException('error msg'))
Operations backed by special methods usually explicitly look up the special method as a proper method not just as a callable attribute. Concretely, instead of self.__str__ the interpreter roughly looks at type(self).__str__.__get__(self, type(self)) – i.e. a descriptor __str__ on the class to be bound with the instance. To override a special method, it is thus necessary to override the class' descriptor instead of the instance' attribute. This can be done by a) declaring the special method as a slot, which handles the type(self).__str__ part, and b) assigning a function, which handles the __get__(self, type(self)) part. class CustomException(Exception): """ABC""" __slots__ = ("__str__",) # <<< magic def __init__(self, *args): super().__init__(*args) # vvv self.__str__ is the class' slot self.__str__ = self._wrapper(super().__str__) # AAA real __str__ lives on the super class def _wrapper(self, f): def _inner(*args, **kwargs): return self.__doc__ + '\n' + f(*args, **kwargs) return _inner print(CustomException('error msg')) Note that since every instance behaves the same in this case, it is advisable to just define a new __str__ method in practice.
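A sketch of the simpler approach recommended in the last sentence, i.e. just defining __str__ on the class:
class CustomException(Exception):
    """ABC"""
    def __str__(self):
        # super().__str__() is the plain exception message, e.g. "error msg"
        return self.__doc__ + '\n' + super().__str__()

print(CustomException('error msg'))
# ABC
# error msg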
5
1
70,102,323
2021-11-24
https://stackoverflow.com/questions/70102323/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least
I trained a model for sequence classification using transformers (BertForSequenceClassification) and I get the error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select) I don't really get where is the problem, if it's on my model, on how I tokenize the data, or what. Here is my code: LOADING THE PRETRAINED MODEL model_state_dict = torch.load("../MODELOS/TRANSFORMERS/TransformersNormal", map_location='cpu') #Doesnt work with map_location='cuda:0' neither model = BertForSequenceClassification.from_pretrained(pretrained_model_name_or_path="bert-base-uncased", state_dict=model_state_dict, cache_dir='./data') CREATING DATALOAD def crearDataLoad(dfv,tokenizer): dft=dfv # usamos el del validacion para que nos salga los resultados y no tener que cambiar mucho codigo #validation=dfv['text'] validation=dfv['text'].str.lower() # para modelos uncased # el fichero que hemos llamado test es usado en la red neuronal validation_labels=dfv['label'] validation_inputs = crearinputs (validation,tokenizer) validation_masks= crearmask (validation_inputs) validation_inputs = torch.tensor(validation_inputs) validation_labels = torch.tensor(validation_labels.values) validation_masks = torch.tensor(validation_masks) from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler# The DataLoader needs to know our batch size for training, so we specify it #Colab batch_size = 32 #local #batch_size = 15 validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels) validation_sampler = SequentialSampler(validation_data) validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size) return validation_dataloader SHOWING RESULTS def resultados(validation_dataloader, model, tokenizer): model.eval() # Tracking variables predictions , true_labels = [], [] pred = [] t_label =[] # Predict for batch in validation_dataloader: # Add batch to GPU , como no tengo lo dejo aquí batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Telling the model not to compute or store gradients, saving memory and # speeding up prediction with torch.no_grad(): # Forward pass, calculate logit predictions outputs = model(b_input_ids, #toktype_ids=None, # attention_mask=b_input_mask) #I GET THE ERROR HERE logits = outputs[0] # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Store predictions and true labels # Store predictions and true labels predictions.append(logits) true_labels.append(label_ids) for l in logits: # para cada tupla del logits, se selecciona 0 o 1 dependiendo del valor # que sea el mayor (argmax) pred_labels_i = np.argmax(l).item() pred.append(pred_labels_i) #Si no me equivoco, en pred guardamos las predicciones hechas por el modelo pred=np.asarray(pred).tolist() t_label = [val for sublist in true_labels for val in sublist] # para aplanar la lista de etiquetas #print('predicciones',pred) #print('t_labels',t_label) #print('validation_labels',validation_labels ) print("RESULTADOS KFOLD validacion cruzada") from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report print(classification_report(t_label, pred)) print ("Distribution test {}".format(Counter(t_label))) from sklearn.metrics import confusion_matrix print(confusion_matrix(t_label, pred)) from 
sklearn.metrics import roc_auc_score print('AUC ROC:') print(roc_auc_score(t_label, pred)) from sklearn.metrics import f1_score result=f1_score(t_label, pred, average='binary',labels=[0,1],pos_label=1,zero_division=0) print('f1-score macro:') print(result) print("****************************************************************") return result I get the error at this line in function resultados: with torch.no_grad(): # Forward pass, calculate logit predictions outputs = model(b_input_ids, #toktype_ids=None, # attention_mask=b_input_mask) #Esto falla MAIN PROGRAM trial_data = pd.DataFrame(trial_dataset) device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': print('no hay gpu') print('Found GPU at: {}'.format(device_name)) #import torch# If there's a GPU available... if torch.cuda.is_available(): # Tell PyTorch to use the GPU. device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) # If not... else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') validation_dataloader = crearDataLoad(trial_data,tokenizer) # obteniendo metricas del modelo generado en el paso anterior model.eval() result= resultados(validation_dataloader, model,tokenizer)
You did not move your model to device, only the data. You need to call model.to(device) before using it with data located on device.
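A minimal sketch of that fix applied to the loading code from the question (the checkpoint path is taken from the question, not verified here):
import torch
from transformers import BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_state_dict = torch.load("../MODELOS/TRANSFORMERS/TransformersNormal", map_location=device)
model = BertForSequenceClassification.from_pretrained(
    pretrained_model_name_or_path="bert-base-uncased",
    state_dict=model_state_dict,
    cache_dir="./data",
)
model.to(device)   # <-- the missing step: parameters now live on the same device as the batches
model.eval()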
11
18
70,099,593
2021-11-24
https://stackoverflow.com/questions/70099593/pandas-find-start-and-stop-point-of-non-null-values
I'd like to find the start and stop points of a column and flag them as shown below: value flag NaN NaN NaN NaN 1 start 2 NaN 1 NaN 3 NaN 2 stop NaN NaN 1 start 2 stop
start occurs when the current value is notnull and the previous value isnull stop occurs when the current value is notnull and the next value isnull Generate these conditions using shift and assign using loc: start = df.value.notnull() & df.value.shift().isnull() stop = df.value.notnull() & df.value.shift(-1).isnull() df.loc[start, 'flag'] = 'start' df.loc[stop, 'flag'] = 'stop' # value flag # 0 NaN NaN # 1 NaN NaN # 2 1.0 start # 3 2.0 NaN # 4 1.0 NaN # 5 3.0 NaN # 6 2.0 stop # 7 NaN NaN # 8 1.0 start # 9 2.0 stop Alternatively assign using mask: df['flag'] = df['flag'].mask(start, 'start') df['flag'] = df['flag'].mask(stop, 'stop')
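An equivalent one-liner for the assignment step using np.select (a sketch, assuming df is the frame from the question; rows matching neither condition are left as None):
import numpy as np

start = df.value.notnull() & df.value.shift().isnull()
stop = df.value.notnull() & df.value.shift(-1).isnull()

df['flag'] = np.select([start, stop], ['start', 'stop'], default=None)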
5
5
70,104,101
2021-11-24
https://stackoverflow.com/questions/70104101/valueerror-method-eth-maxpriorityfeepergas-not-supported-web3-py-with-ganache
I'm running the following code with web3.py: transaction = SimpleStorage.constructor().buildTransaction( {"chainId": chain_id, "from": my_address, "nonce": nonce} ) And I am running into the following error: Traceback (most recent call last): File "/Users/patrick/code/web3_py_simple_storage/deploy.py", line 64, in <module> transaction = SimpleStorage.constructor().buildTransaction( File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/eth_utils/decorators.py", line 18, in _wrapper return self.method(obj, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/contract.py", line 684, in buildTransaction return fill_transaction_defaults(self.web3, built_transaction) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/_utils/transactions.py", line 121, in fill_transaction_defaults default_val = default_getter(web3, transaction) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/_utils/transactions.py", line 71, in <lambda> web3.eth.max_priority_fee + (2 * web3.eth.get_block('latest')['baseFeePerGas']) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/eth.py", line 549, in max_priority_fee return self._max_priority_fee() File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/module.py", line 57, in caller result = w3.manager.request_blocking(method_str, File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/manager.py", line 198, in request_blocking return self.formatted_response(response, File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/web3/manager.py", line 171, in formatted_response raise ValueError(response["error"]) ValueError: {'message': 'Method eth_maxPriorityFeePerGas not supported.', 'code': -32000, 'data': {'stack': 'Error: Method eth_maxPriorityFeePerGas not supported.\n at GethApiDouble.handleRequest (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/lib/subproviders/geth_api_double.js:70:16)\n at next (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:136:18)\n at GethDefaults.handleRequest (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/lib/subproviders/gethdefaults.js:15:12)\n at next (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:136:18)\n at SubscriptionSubprovider.FilterSubprovider.handleRequest (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/subproviders/filters.js:89:7)\n at SubscriptionSubprovider.handleRequest (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/subproviders/subscriptions.js:137:49)\n at next (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:136:18)\n at DelayedBlockFilter.handleRequest (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/lib/subproviders/delayedblockfilter.js:31:3)\n at next (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:136:18)\n 
at RequestFunnel.handleRequest (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/lib/subproviders/requestfunnel.js:32:12)\n at next (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:136:18)\n at Web3ProviderEngine._handleAsync (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:123:3)\n at Timeout._onTimeout (/Applications/Ganache.app/Contents/Resources/static/node/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:107:12)\n at listOnTimeout (internal/timers.js:531:17)\n at processTimers (internal/timers.js:475:7)', 'name': 'Error'}} How do I fix this?
This is an issue with newer versions of web3.py. You need to add gasPrice to your transaction, like so: transaction = SimpleStorage.constructor().buildTransaction( {"chainId": chain_id, "gasPrice": w3.eth.gas_price, "from": my_address, "nonce": nonce} ) This is due to the EIP-1559 changes and web3.py being updated to support them.
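For completeness, a sketch of what typically follows buildTransaction in web3.py v5, namely signing and sending the deployment transaction. Here private_key is a placeholder you would supply yourself; it is not taken from the question.
signed_txn = w3.eth.account.sign_transaction(transaction, private_key=private_key)
tx_hash = w3.eth.send_raw_transaction(signed_txn.rawTransaction)
tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print(tx_receipt.contractAddress)   # address of the deployed SimpleStorage contract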
13
38
70,091,826
2021-11-24
https://stackoverflow.com/questions/70091826/sum-dataframe-values-in-for-loop-inside-a-for-loop
I have a large polygon file, small polygon file and points file. What I do here is loop through large polygons to find which small polygons intersect. Then calculate the area of each small polygon within the large one. And then I loop through the small polygons to find points statistics in each of them. I have found number_of_somethin value in each small polygon. And the question would be how to can I sum all number_of_somethin small polygons values within the large polygon and store the results in original large_polygon file as a new column, let's say large_polygon['smth_sum']? With df_res_2.loc[idx, 'smth'] = number_of_somethin I get number_of_somethin values in each small polygon inside the large ones. Now I need to sum them in large_polygon['smth_sum'] Note: FID is the id for large polygons and ID is the id for small polygons import geopandas as gpd small_polygon = gpd.read_file(r'R:\...\small.shp') large_polygon = gpd.read_file(r'R:\...\large.shp') points = gpd.read_file(r'R:\...\points.shp') SmallJoin =gpd.sjoin(small_polygon, large_polygon)[['FID', 'ID', 'someValue','geometry']] for i in large_polygon.index: df_i = SmallJoin[SmallJoin['FID'] == i] # i do something here, f.e. calculate small polgyon area df_res = gpd.overlay(large_polygon, df_i, how='intersection') df_res['area'] = round((df_res.apply(lambda row: row.geometry.area, axis=1)), 4) # now i know area for each small polygon within large polygon df_res_2 = df_res[df_res['FID_1'] == i] # now point statistics in small polygons PointsJoin =gpd.sjoin(points, df_res)[['ID','someAttribute', 'someAttribute2','geometry']] for idx, val in df_res_2['ID'].items(): df_idx = PointsJoin[PointsJoin['ID'] == val] number_of_somethin = df_idx ['someAttribute'] + 121 + df_idx['someAttribute2'] df_res_2.loc[idx, 'smth'] = number_of_somethin I had a few ideas how to do this, but none of them are not wokring, so I assume that there is some other way. large_polygon.loc[i, 'smth_sum'] = df_res_2['smth'] large_polygon.loc[i, 'smth_sum'] = df_res_2['smth'].sum() large_polygon['smth_sum'] = large_polygon[large_polygon['FID'] == df_res_2['FID_1'].sum()]
you describe three GeoDataFrame large - have used country boundaries for this small - have used UTM zone boundaries for this point - have used randomly generated points that mostly overlap 2 you define that you want two outputs for each large geometry (country here) area - sum of intersection area of each small geometry value - sum of value of points that is within a small geometry that spatially joins to a large geometry all of the above can be achieved with spatial joins and pandas merge() and groupby() to make this clearer - also included a way to visualise all of this import geopandas as gpd import shapely.geometry import requests import numpy as np import plotly.express as px # get some sample data.... # fmt: off gdf_utm = gpd.GeoDataFrame.from_features(requests.get("https://opendata.arcgis.com/datasets/b294795270aa4fb3bd25286bf09edc51_0.geojson").json()).set_crs("EPSG:4326") gdf_countries = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) large_polygon = gdf_countries.loc[lambda d: d["iso_a3"].isin(["BEL", "LUX", "NLD", "DEU", "AUT"])] # large_polygon.boundary.plot() small_polygon = gpd.sjoin(gdf_utm, large_polygon).loc[:, gdf_utm.columns].groupby(["FID", "ZONE"]).first().reset_index() # fmt: on # some points within geometry of small_polygon b = small_polygon.total_bounds POINTS = 10 points = gpd.GeoDataFrame( geometry=[ shapely.geometry.Point(x, y) for x, y in zip( np.random.uniform(*b[[0, 2]], POINTS), np.random.uniform(*b[[1, 3]], POINTS), ) ], data={"value": np.arange(POINTS)}, crs="EPSG:4326", ) # spatial small to large with geometry from large SmallJoin = gpd.sjoin(small_polygon, large_polygon).merge( large_polygon["geometry"], left_on="index_right", right_index=True, suffixes=("", "_large"), ) SmallJoin["area"] = SmallJoin.intersection(gpd.GeoSeries(SmallJoin["geometry_large"])).area # get sums of area of overlap and sum of values from points Final = ( SmallJoin.rename(columns={"index_right": "index_large"}) .sjoin(points) .groupby("index_large") .agg({"area": "sum", "value": "sum", "geometry_large": "first"}) ) output index_large area value 114 24.6382 25 121 90.3565 45 128 0.603031 20 129 7.65999 20 130 10.5284 20 visualise it px.choropleth_mapbox( Final, geojson=gpd.GeoSeries(Final["geometry_large"]), locations=Final.index, color="value", hover_data=["area"], ).add_traces( px.scatter_mapbox( points, lat=points.geometry.y, lon=points.geometry.x, color="value", ) .update_traces(marker_coloraxis="coloraxis2", marker_size=10) .data ).update_layout( mapbox={ "style": "carto-positron", "center": {"lon": sum(b[[0, 2]]) / 2, "lat": sum(b[[1, 3]]) / 2}, "zoom": 3, "layers": [{"source": small_polygon.__geo_interface__, "type": "line"}], }, coloraxis2={ "colorbar": {"x": -0.1, "title": "scatter"}, "colorscale": [[0, "blue"], [1, "blue"]], }, coloraxis={"colorscale": [[0, "white"], [1, "green"]]}, margin={"l": 0, "r": 0, "t": 0, "b": 0}, )
5
2
70,100,524
2021-11-24
https://stackoverflow.com/questions/70100524/iterate-dict-in-python-using-next
Is it possible to iterate dictionary in python using next(). (I need to access keys and values). Also would it work for extendable dictionaries with non-fixed length?
Use items to get the pairs key/value and iter() to be able to call next content = dict(zip(range(10), range(10, 20))) print(content) # {0: 10, 1: 11, 2: 12, 3: 13, 4: 14, 5: 15, 6: 16, 7: 17, 8: 18, 9: 19} iterable = iter(content.items()) print(next(iterable)) # (0, 10) print(next(iterable)) # (1, 11) print(next(iterable)) # (2, 12) print(next(iterable)) # (3, 13)
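A sketch of driving the iterator manually with a sentinel default so next() never raises StopIteration. Note that a plain dict cannot change size while you iterate over it (Python raises RuntimeError: dictionary changed size during iteration), so any extension of the dictionary has to happen between full passes.
content = {"a": 1, "b": 2, "c": 3}

it = iter(content.items())
while (pair := next(it, None)) is not None:   # None is the sentinel returned on exhaustion
    key, value = pair
    print(key, value)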
4
8
70,096,110
2021-11-24
https://stackoverflow.com/questions/70096110/valueerror-on-inverse-transform-using-ordinalencoder-with-dictionary
I can transform the target column to desired ordered numerical value using categorical encoding and ordinal encoding. But I am unable to perform inverse_transform as an error is showing which is written below. import pandas as pd import category_encoders as ce from sklearn.preprocessing import OrdinalEncoder lst = [ 'BRANCHING/ELONGATION', 'EARLY', 'EARLY', 'EARLY', 'EARLY', 'MID', 'MID', 'ADVANCED/TILLERING', 'FLOWERING', 'FLOWERING', 'FLOWERING', 'SEEDLING/EMERGED'] filtered_df = pd.DataFrame(lst, columns =['growth_state']) filtered_df['growth_state'].value_counts() EARLY 4 FLOWERING 3 MID 2 ADVANCED/TILLERING 1 SEEDLING/EMERGED 1 BRANCHING/ELONGATION 1 Name: growth_state, dtype: int64 dictionary = [{'col': 'growth_state', 'mapping':{'SEEDLING/EMERGED':0, 'EARLY':1, 'MID':2, 'ADVANCED/TILLERING':3, 'BRANCHING/ELONGATION':4, 'FLOWERING':5 }}] # instiating encoder encoder = ce.OrdinalEncoder(cols = 'growth_state', mapping= dictionary) filtered_df['growth_state'] = encoder.fit_transform(filtered_df['growth_state']) filtered_df growth_state 0 4 1 1 2 1 3 1 4 1 5 2 6 2 7 3 8 5 9 5 10 5 11 0 But when I perform inverse_transform: newCol = encoder.inverse_transform(filtered_df['growth_state']) AttributeError Traceback (most recent call last) <ipython-input-26-b6505b4be1e1> in <module> ----> 1 newCol = encoder.inverse_transform(filtered_df['growth_state']) d:\users\tiwariam\appdata\local\programs\python\python36\lib\site-packages\category_encoders\ordinal.py in inverse_transform(self, X_in) 266 for switch in self.mapping: 267 column_mapping = switch.get('mapping') --> 268 inverse = pd.Series(data=column_mapping.index, index=column_mapping.values) 269 X[switch.get('col')] = X[switch.get('col')].map(inverse).astype(switch.get('data_type')) 270 AttributeError: 'dict' object has no attribute 'index' Note: the above column is a target column, I could have applied a label encoder as this is a classification-related problem. But I have adopted the above combination of categorical and ordinal encoding as variables are ordered in nature.
The error comes from this line in the inverse_transform source code: inverse = pd.Series(data=column_mapping.index, index=column_mapping.values) It seems that even though the category_encoders documentation says that the mapping should be provided as a dictionary, their inverse_transform code is actually looking for a pd.Series: import pandas as pd from category_encoders import OrdinalEncoder df = pd.DataFrame({ 'growth_state': ['BRANCHING/ELONGATION', 'EARLY', 'EARLY', 'EARLY', 'EARLY', 'MID', 'MID', 'ADVANCED/TILLERING', 'FLOWERING', 'FLOWERING', 'FLOWERING', 'SEEDLING/EMERGED'] }) mapping = [{ 'col': 'growth_state', 'mapping': pd.Series(data={'SEEDLING/EMERGED': 0, 'EARLY': 1, 'MID': 2, 'ADVANCED/TILLERING': 3, 'BRANCHING/ELONGATION': 4, 'FLOWERING': 5}), 'data_type': object }] enc = OrdinalEncoder(cols=['growth_state'], mapping=mapping) df_transformed = enc.fit_transform(df) df_transformed.head() # growth_state # 0 4 # 1 1 # 2 1 # 3 1 # 4 1 df_inverse = enc.inverse_transform(df_transformed) df_inverse.head() # growth_state # 0 BRANCHING/ELONGATION # 1 EARLY # 2 EARLY # 3 EARLY # 4 EARLY
5
3
70,051,619
2021-11-21
https://stackoverflow.com/questions/70051619/pass-on-value-from-dependencies-in-include-router-to-routes-fastapi
I was wondering if it was possible to pass the results from the dependencies kwarg in include_router to the router that is passed to it. What I want to do is decode a JWT from the x-token header of a request and pass the decoded payload to the books routes. I know that I could just write authenticate_and_decode_JWT as a dependency of each of the routes in routers/book.py, but this would be quite repetitive for a large app. main.py from typing import Optional from jose import jwt from fastapi import FastAPI, Depends, Header, HTTPException, status from jose.exceptions import JWTError from routers import books app = FastAPI() def authenticate_and_decode_JWT(x_token: str = Header(None)): try: payload = jwt.decode(x_token.split(' ')[1], 'secret key', algorithms=['HS256']) return payload # pass decoded user information from here to books.router routes somehow except JWTError: raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED) app.include_router( books.router, prefix="/books", dependencies=[Depends(authenticate_and_decode_JWT)], ) @app.get("/") def read_root(): return {"Hello": "World"} routers/books.py from fastapi import APIRouter router = APIRouter() @router.get('/') def get_book(): # do stuff with decoded authenticated user data from JWT payload here pass @router.post('/') def create_book(): # do stuff with decoded authenticated user data from JWT payload here pass
While the answer above about writing a middleware does work, if you don't want a middleware and just want to use include_router, you can take the authenticate_and_decode_JWT method and extend it to write the JWT payload into the request object and then later have your routes read it back out of the request object. This way you only need to modify a few lines in your code for it to work (note that the request parameter has to come before the parameters with default values): from fastapi import Request # <== new import def authenticate_and_decode_JWT(request: Request, x_token: str = Header(None)): # <== request is a new param try: payload = jwt.decode(x_token.split(' ')[1], 'secret key', algorithms=['HS256']) request.token_payload = payload # type: ignore # <== store the token payload in request return payload except JWTError: raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED) And then read from request: from fastapi import Request # <== new import @router.get('/') def get_book(request: Request): # <== request is a new param # do stuff with decoded authenticated user data from JWT payload here print(request.token_payload) # <== access your token payload
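A variant of the same idea that stores the payload on request.state, which Starlette provides exactly for per-request data, instead of setting an ad-hoc attribute on the Request object. This is a sketch of an alternative, not something from the question; router is the APIRouter from routers/books.py and the jose imports come from the question's code.
from fastapi import Header, HTTPException, Request, status
from jose import jwt
from jose.exceptions import JWTError

def authenticate_and_decode_JWT(request: Request, x_token: str = Header(None)):
    try:
        payload = jwt.decode(x_token.split(' ')[1], 'secret key', algorithms=['HS256'])
        request.state.token_payload = payload   # request.state accepts arbitrary attributes
        return payload
    except JWTError:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED)

@router.get('/')
def get_book(request: Request):
    payload = request.state.token_payload
    # do stuff with the decoded user data here
    return payload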
7
5
70,077,422
2021-11-23
https://stackoverflow.com/questions/70077422/why-python-matches-a-list-as-a-tuple
With Python 3.11.0a2+, and the following code: def my_fun(e): match e: case (1,): print("tuple (1,)") case [1]: print("list [1]") case _: print("I don't understand") Calling the function with my_fun([1]) prints "tuple (1,)". Is this behavior correct? If I explicitly match against tuple((1, )) instead of (1,), it works as expected. If this is not a bug of the interpreter, what is the reason behind this seemingly weird behavior?
This is documented under Structural Pattern Matching Like unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. Technically, the subject must be a sequence. Therefore, an important exception is that patterns don’t match iterators. Also, to prevent a common mistake, sequence patterns don’t match strings. and in PEP 635 -- Structural Pattern Matching: Motivation and Rationale As in iterable unpacking, we do not distinguish between 'tuple' and 'list' notation. [a, b, c], (a, b, c) and a, b, c are all equivalent. While this means we have a redundant notation and checking specifically for lists or tuples requires more effort (e.g. case list([a, b, c])), we mimic iterable unpacking as much as possible.
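If you do need to distinguish the two, the class-pattern form that both the question and PEP 635 mention works. A small sketch of the function rewritten that way (Python 3.10+):
def my_fun(e):
    match e:
        case tuple((1,)):      # isinstance check for tuple, then the sequence pattern
            print("tuple (1,)")
        case list([1]):        # isinstance check for list, then the sequence pattern
            print("list [1]")
        case _:
            print("I don't understand")

my_fun([1])    # list [1]
my_fun((1,))   # tuple (1,)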
4
6
70,060,847
2021-11-22
https://stackoverflow.com/questions/70060847/how-to-work-with-openai-maximum-context-length-is-2049-tokens
I'd like to send the text from various PDF's to OpenAI's API. Specifically the Summarize for a 2nd grader or the TL;DR summarization API's. I can extract the text from PDF's using PyMuPDF and prepare the OpenAI prompt. Question: How best to prepare the prompt when the token count is longer than the allowed 2049? Do I just truncate the text then send multiple requests? Or is there a way to sample the text to "compress" it to lose key points?
I faced the same problem. Here is the strategy I used to send text that is much, much longer than OpenAIs GPT3 token limit. Depending on the model (Davinci, Curie, etc.) used, requests can use up to 4097 tokens shared between prompt and completion. Prompt being the input you send to OpenAI, i.e. your "command", e.g. "Summarize the following text" plus the text itself Completion being the response, i.e. the entire summary of your text If your prompt is 4000 tokens, your completion can be 97 tokens at most. For more information on OpenAI tokens and how to count them, see here. To ensure that we don’t exceed the maximum length limit for prompt plus completion, we need to ensure that prompt (i.e. your text) and completion (i.e. the summary) put together always fits into the 4097 token boundary. For that reason we split the entire text into multiple text chunks, summarize each chunk independently and finally merge all summarized chunks using a simple " ".join() function. Maximum Number of Words - Token-to-Word Conversion OpenAI has a fixed limit on the number of tokens. However, a token is not the same as a word. Hence, we first need to calculate the maximum number of words we can send to OpenAI. The documentation says: Given the token-to-word ratio, we can send approximately 2900 words to OpenAI's GPT3 assuming a 5 sentence summary per text chunk. Max tokens per request: 4000 tokens (leaving 97 tokens as a safety buffer) = 3000 words Max prompt tokens: “Summarize the following text in five sentences” has 7 words = 10 tokens Max tokens of returned summary (5 sentences): 20 words per sentence. 5 * 20 = 100 words = 133 tokens Max tokens of text chunk: 4000 - 10 - 133 = 3857 tokens = 2900 words Text Chunking We can choose from a plethora of strategies to split up the entire text into smaller chunks. The simplest approach is creating a single list of all words by splitting the entire text on whitespaces, and then creating buckets of words with words evenly distributed across all buckets. The downside is that we are likely to split a sentence half-way through and lose the meaning of the sentence because GPT ends up summarizing the first half of the sentence independently from the second half — ignoring any relations between the two chunks. Other options include tokenizers such as SentencePiece and spaCy’s sentence splitter. Choosing the later generates the most stable results. Implementation of Text Chunking with spaCy The following example splits the text “My first birthday was great. My 2. was even better.” into a list of two sentences. python -m spacy download en_core_web_sm import spacy from spacy.lang.en import English nlp = spacy.load("en_core_web_sm") text = "My first birthday was great. My 2. was even better." for sentence in nlp(text).sents: print(sentence.text) Output My first birthday was great. My 2. was even better. spaCy correctly detected the second sentence instead of splitting it after the “2.”. Now, let’s write a text_to_chunks helper function to generate chunks of sentences where each chunk holds at most 2700 words. 2900 words was the initially calculated word limit, but we want to ensure to have enough buffer for words that are longer than 1.33 tokens. 
def text_to_chunks(text): chunks = [[]] chunk_total_words = 0 sentences = nlp(text) for sentence in sentences.sents: chunk_total_words += len(sentence.text.split(" ")) if chunk_total_words > 2700: chunks.append([]) chunk_total_words = len(sentence.text.split(" ")) chunks[len(chunks)-1].append(sentence.text) return chunks An alternative approach to determine the number of tokens of a text was recently introduced by OpenAI. The approach uses tiktoken and is tailored towards OpenAI's models. import tiktoken encoding = tiktoken.encoding_for_model("gpt-3.5-turbo") number_of_tokens = len(encoding.encode("tiktoken is great!")) print(number_of_tokens) Next, we wrap the text summarization logic into a summarize_text function. def summarize_text(text): prompt = f"Summarize the following text in 5 sentences:\n{text}" response = openai.Completion.create( engine="text-davinci-003", prompt=prompt, temperature=0.3, max_tokens=150, # = 112 words top_p=1, frequency_penalty=0, presence_penalty=1 ) return response["choices"][0]["text"] Our final piece of code looks like this: chunks = text_to_chunks(one_large_text) chunk_summaries = [] for chunk in chunks: chunk_summary = summarize_text(" ".join(chunk)) chunk_summaries.append(chunk_summary) summary = " ".join(chunk_summaries) References How to count tokens with tiktoken, OpenAI Cookbook
25
31
70,035,997
2021-11-19
https://stackoverflow.com/questions/70035997/i-cant-read-excel-file-using-dt-fread-from-datatable-attributeerror
Hello I'm trying to read an excel file 'myFile.xlsx' using datatable.fread (version 1.0.0) function to speedup data manipulation. The problem is I had an AttributeError: module 'xlrd' has no attribute 'xlsx'. The command I used is: import datatable as dt DT = dt.fread("myFile.xlsx") I checked the module where the error occurred is the module xls of datatable package: def read_xls_workbook(filename, subpath): try: import xlrd # Fixes the warning # "PendingDeprecationWarning: This method will be removed in future # versions. Use 'tree.iter()' or 'list(tree.iter())' instead." xlrd.xlsx.ensure_elementtree_imported(False, None) # Here xlrd.xlsx.Element_has_iter = True # and Here Is there any solution to fix this issue? please.
The issue is that datatable package is not updated yet to make use of xlrd>1.2.0, so in order to make it work you have to install xlrd = 1.2.0 pip install xlrd==1.2.0 I hope it helped.
5
6
70,051,750
2021-11-21
https://stackoverflow.com/questions/70051750/pytorch-runtimeerror-requires-grad-is-not-supported-on-scriptmodules
I'm running Python code from an old repo that seems to no longer work but I can't figure out why or how to fix it. SETUP CODE (Google Colab Notebook) #@title Setup import pkg_resources print(pkg_resources.get_distribution("torch").version) from IPython.utils import io with io.capture_output() as captured: !git clone https://github.com/openai/CLIP # !pip install taming-transformers !git clone https://github.com/CompVis/taming-transformers.git !rm -Rf clipit !git clone https://github.com/mfrashad/clipit.git !pip install ftfy regex tqdm omegaconf pytorch-lightning !pip install kornia !pip install imageio-ffmpeg !pip install einops !pip install torch-optimizer !pip install easydict !pip install braceexpand !pip install git+https://github.com/pvigier/perlin-numpy # ClipDraw deps !pip install svgwrite !pip install svgpathtools !pip install cssutils !pip install numba !pip install torch-tools !pip install visdom !pip install gradio !git clone https://github.com/BachiLi/diffvg %cd diffvg # !ls !git submodule update --init --recursive !python setup.py install %cd .. !mkdir -p steps !mkdir -p models import sys sys.path.append("clipit") THEN ERRORS ON SECOND TO LAST INE import clipit # To reset settings to default clipit.reset_settings() # You can use "|" to separate multiple prompts prompts = "underwater city" # You can trade off speed for quality: draft, normal, better, best quality = "normal" # Aspect ratio: widescreen, square aspect = "widescreen" # Add settings clipit.add_settings(prompts=prompts, quality=quality, aspect=aspect) # Apply these settings and run settings = clipit.apply_settings() clipit.do_init(settings) # generates error clipit.do_run(settings) Working with z of shape (1, 256, 16, 16) = 65536 dimensions. loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth VQLPIPSWithDiscriminator running with hinge loss. Restored from models/vqgan_imagenet_f16_16384.ckpt --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () 12 # Apply these settings and run 13 settings = clipit.apply_settings() ---> 14 clipit.do_init(settings) 15 clipit.do_run(settings) 1 frames /usr/local/lib/python3.7/dist-packages/torch/jit/_script.py in fail(self, *args, **kwargs) 912 def _make_fail(name): 913 def fail(self, *args, **kwargs): --> 914 raise RuntimeError(name + " is not supported on ScriptModules") 915 916 return fail RuntimeError: requires_grad_ is not supported on ScriptModules
For any unlucky soul that comes across this issue, this URL saved my soul: https://github.com/mfrashad/text2art/issues/5
5
1
70,065,235
2021-11-22
https://stackoverflow.com/questions/70065235/understanding-torch-nn-layernorm-in-nlp
I'm trying to understanding how torch.nn.LayerNorm works in a nlp model. Asuming the input data is a batch of sequence of word embeddings: batch_size, seq_size, dim = 2, 3, 4 embedding = torch.randn(batch_size, seq_size, dim) print("x: ", embedding) layer_norm = torch.nn.LayerNorm(dim) print("y: ", layer_norm(embedding)) # outputs: """ x: tensor([[[ 0.5909, 0.1326, 0.8100, 0.7631], [ 0.5831, -1.7923, -0.1453, -0.6882], [ 1.1280, 1.6121, -1.2383, 0.2150]], [[-0.2128, -0.5246, -0.0511, 0.2798], [ 0.8254, 1.2262, -0.0252, -1.9972], [-0.6092, -0.4709, -0.8038, -1.2711]]]) y: tensor([[[ 0.0626, -1.6495, 0.8810, 0.7060], [ 1.2621, -1.4789, 0.4216, -0.2048], [ 0.6437, 1.0897, -1.5360, -0.1973]], [[-0.2950, -1.3698, 0.2621, 1.4027], [ 0.6585, 0.9811, -0.0262, -1.6134], [ 0.5934, 1.0505, -0.0497, -1.5942]]], grad_fn=<NativeLayerNormBackward0>) """ From the document's description, my understanding is that the mean and std are computed by all embedding values per sample. So I try to compute y[0, 0, :] manually: mean = torch.mean(embedding[0, :, :]) std = torch.std(embedding[0, :, :]) print((embedding[0, 0, :] - mean) / std) which gives tensor([ 0.4310, -0.0319, 0.6523, 0.6050]) and that's not the right output. I want to know what is the right way to compute y[0, 0, :]?
Pytorch layer norm states mean and std calculated over last D dimensions. Based on this as I expect for (batch_size, seq_size, embedding_dim) here calculation should be over (seq_size, embedding_dim) for layer norm as last 2 dimensions excluding batch dim. A similar question and answer with layer norm implementation can be found here, layer Normalization in pytorch?. In some paper below it shows different layer norm application in NLP. Explanation of Intance vs Layer vs Group Norm From group norm paper Layer Normalization (LN) operates along the channel dimension LN computes µ and σ along the (C, H, W) axes for each sample. Different Application Example In pytorch doc for NLP 3d tensor example mean and std instead are calculated over only last dim embedding_dim. In this paper it shows similar to pytorch doc example, almost all NLP tasks take variable length sequences as input, which is very suitable for LN that only calculates statistics in the channel dimension without involving the batch and sequence length dimension. Example shown in Another paper, LN normalizes across the channel/feature dimension as shown in Figure 1. Manual Layer Norm with only Embed Dim import torch batch_size, seq_size, dim = 2, 3, 4 last_dims = 4 embedding = torch.randn(batch_size, seq_size, dim) print("x: ", embedding) layer_norm = torch.nn.LayerNorm(last_dims, elementwise_affine = False) layer_norm_out = layer_norm(embedding) print("y: ", layer_norm_out) eps: float = 0.00001 mean = torch.mean(embedding[0, :, :], dim=(-1), keepdim=True) var = torch.square(embedding[0, :, :] - mean).mean(dim=(-1), keepdim=True) y_custom = (embedding[0, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[0], y_custom), 'Tensors do not match.' eps: float = 0.00001 mean = torch.mean(embedding[1, :, :], dim=(-1), keepdim=True) var = torch.square(embedding[1, :, :] - mean).mean(dim=(-1), keepdim=True) y_custom = (embedding[1, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[1], y_custom), 'Tensors do not match.' Output x: tensor([[[-0.0594, -0.8702, -1.9837, 0.2914], [-0.4774, 1.0372, 0.6425, -1.1357], [ 0.3872, -0.9190, -0.5774, 0.3281]], [[-0.5548, 0.0815, 0.2333, 0.3569], [ 1.0380, -0.1756, -0.7417, 2.2930], [-0.0075, -0.3623, 1.9310, -0.7043]]]) y: tensor([[[ 0.6813, -0.2454, -1.5180, 1.0822], [-0.5700, 1.1774, 0.7220, -1.3295], [ 1.0285, -1.2779, -0.6747, 0.9241]], [[-1.6638, 0.1490, 0.5814, 0.9334], [ 0.3720, -0.6668, -1.1513, 1.4462], [-0.2171, -0.5644, 1.6809, -0.8994]]]) y_custom: tensor([[ 0.6813, -0.2454, -1.5180, 1.0822], [-0.5700, 1.1774, 0.7220, -1.3295], [ 1.0285, -1.2779, -0.6747, 0.9241]]) y_custom: tensor([[-1.6638, 0.1490, 0.5814, 0.9334], [ 0.3720, -0.6668, -1.1513, 1.4462], [-0.2171, -0.5644, 1.6809, -0.8994]]) Manual Layer Norm over 4D Tensor import torch batch_size, c, h, w = 2, 3, 2, 4 last_dims = [c, h, w] embedding = torch.randn(batch_size, c, h, w) print("x: ", embedding) layer_norm = torch.nn.LayerNorm(last_dims, elementwise_affine = False) layer_norm_out = layer_norm(embedding) print("y: ", layer_norm_out) eps: float = 0.00001 mean = torch.mean(embedding[0, :, :], dim=(-3, -2, -1), keepdim=True) var = torch.square(embedding[0, :, :] - mean).mean(dim=(-3, -2, -1), keepdim=True) y_custom = (embedding[0, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[0], y_custom), 'Tensors do not match.' 
eps: float = 0.00001 mean = torch.mean(embedding[1, :, :], dim=(-3, -2, -1), keepdim=True) var = torch.square(embedding[1, :, :] - mean).mean(dim=(-3, -2, -1), keepdim=True) y_custom = (embedding[1, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[1], y_custom), 'Tensors do not match.' Output x: tensor([[[[ 1.0902, -0.8648, 1.5785, 0.3087], [ 0.0249, -1.3477, -0.9565, -1.5024]], [[ 1.8024, -0.2894, 0.7284, 0.7822], [ 1.4385, -0.2848, -0.3114, 0.4633]], [[ 0.9061, 0.3066, 0.9916, 0.9284], [ 0.3356, 0.9162, -0.4579, 1.0669]]], [[[-0.8292, 0.9111, -0.7307, -1.1003], [ 0.3441, -1.9823, 0.1313, 0.2048]], [[-0.2838, 0.1147, -0.1605, -0.4637], [-2.1343, -0.4402, 1.6685, 0.4455]], [[ 0.6895, -2.7331, 1.1693, -0.6999], [-0.3497, -0.2942, -0.0028, -1.3541]]]]) y: tensor([[[[ 0.8653, -1.3279, 1.4131, -0.0114], [-0.3298, -1.8697, -1.4309, -2.0433]], [[ 1.6643, -0.6824, 0.4594, 0.5198], [ 1.2560, -0.6772, -0.7071, 0.1619]], [[ 0.6587, -0.0137, 0.7547, 0.6838], [ 0.0188, 0.6701, -0.8715, 0.8392]]], [[[-0.4938, 1.2220, -0.3967, -0.7610], [ 0.6629, -1.6306, 0.4531, 0.5256]], [[ 0.0439, 0.4368, 0.1655, -0.1335], [-1.7805, -0.1103, 1.9686, 0.7629]], [[ 1.0035, -2.3707, 1.4764, -0.3663], [-0.0211, 0.0337, 0.3210, -1.0112]]]]) y_custom: tensor([[[ 0.8653, -1.3279, 1.4131, -0.0114], [-0.3298, -1.8697, -1.4309, -2.0433]], [[ 1.6643, -0.6824, 0.4594, 0.5198], [ 1.2560, -0.6772, -0.7071, 0.1619]], [[ 0.6587, -0.0137, 0.7547, 0.6838], [ 0.0188, 0.6701, -0.8715, 0.8392]]]) y_custom: tensor([[[-0.4938, 1.2220, -0.3967, -0.7610], [ 0.6629, -1.6306, 0.4531, 0.5256]], [[ 0.0439, 0.4368, 0.1655, -0.1335], [-1.7805, -0.1103, 1.9686, 0.7629]], [[ 1.0035, -2.3707, 1.4764, -0.3663], [-0.0211, 0.0337, 0.3210, -1.0112]]]) Example of custom layer norm implementation from typing import Union, List import torch batch_size, seq_size, embed_dim = 2, 3, 4 embedding = torch.randn(batch_size, seq_size, embed_dim) print("x: ", embedding) print(embedding.shape) print() layer_norm = torch.nn.LayerNorm(embed_dim, elementwise_affine=False) layer_norm_output = layer_norm(embedding) print("y: ", layer_norm_output) print(layer_norm_output.shape) print() def custom_layer_norm( x: torch.Tensor, dim: Union[int, List[int]] = -1, eps: float = 0.00001 ) -> torch.Tensor: mean = torch.mean(x, dim=(dim,), keepdim=True) var = torch.square(x - mean).mean(dim=(dim,), keepdim=True) return (x - mean) / torch.sqrt(var + eps) custom_layer_norm_output = custom_layer_norm(embedding) print("y_custom: ", custom_layer_norm_output) print(custom_layer_norm_output.shape) assert torch.allclose(layer_norm_output, custom_layer_norm_output), 'Tensors do not match.' Output x: tensor([[[-0.4808, -0.1981, 0.4538, -1.2653], [ 0.3578, 0.6592, 0.2161, 0.3852], [ 1.2184, -0.4238, -0.3415, -0.3487]], [[ 0.9874, -1.7737, 0.1886, 0.0448], [-0.5162, 0.7872, -0.3433, -0.3266], [-0.5459, -0.0371, 1.2625, -1.6030]]]) torch.Size([2, 3, 4]) y: tensor([[[-0.1755, 0.2829, 1.3397, -1.4471], [-0.2916, 1.5871, -1.1747, -0.1208], [ 1.7301, -0.6528, -0.5334, -0.5439]], [[ 1.1142, -1.6189, 0.3235, 0.1812], [-0.8048, 1.7141, -0.4709, -0.4384], [-0.3057, 0.1880, 1.4489, -1.3312]]]) torch.Size([2, 3, 4]) y_custom: tensor([[[-0.1755, 0.2829, 1.3397, -1.4471], [-0.2916, 1.5871, -1.1747, -0.1208], [ 1.7301, -0.6528, -0.5334, -0.5439]], [[ 1.1142, -1.6189, 0.3235, 0.1812], [-0.8048, 1.7141, -0.4709, -0.4384], [-0.3057, 0.1880, 1.4489, -1.3312]]]) torch.Size([2, 3, 4])
10
20