question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
63,162,354 | 2020-7-29 | https://stackoverflow.com/questions/63162354/pip-install-via-requirements-txt-specify-a-direct-github-private-repo-branch-n | I am trying to add a package to my requirements.txt file that is: From a private GitHub repo I'm a member of the private repo I have ssh configured for the private repo From a branch besides master, whose name has a slash in it Using ssh protocol All over the internet, there are questions on this topic. Here are the pip docs on this: pip install -e "vcs+protocol://repo_url/#egg=pkg&subdirectory=pkg_dir" And an answer for GitHub from How to state in requirements.txt a direct github source git+git://github.com/path/to/package-two@master#egg=package-two Attempt Method #1 I am trying to use this answer in my requirements.txt file: -e git+ssh://github.com/Organization/repo-name.git@branch/name#egg=foo I am getting an error: ERROR: Command errored out with exit status 128: git clone -q ssh://github.com/Organization/repo-name.git /path/to/venv/src/foo Check the logs for full command output. And when I view the logs using the --log option: Using pip 20.2 from /path/to/venv/lib/python3.8/site-packages/pip (python 3.8) Non-user install because user site-packages disabled Created temporary directory: /private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-ephem-wheel-cache-yysggsvl Created temporary directory: /private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-req-tracker-ckqzav7i Initialized build tracking at /private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-req-tracker-ckqzav7i Created build tracker: /private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-req-tracker-ckqzav7i Entered build tracker: /private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-req-tracker-ckqzav7i Created temporary directory: /private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-install-c9xw78wg Looking in indexes: https://pypi.org/simple Obtaining foo from git+ssh://github.com/Organization/repo-name.git@branch/name#egg=foo (from -r requirements.txt (line 6)) Cloning ssh://github.com/Organization/repo-name.git (to revision branch/name) to ./venv/src/foo ERROR: Command errored out with exit status 128: git clone -q ssh://github.com/Organization/repo-name.git /path/to/venv/src/foo Check the logs for full command output. 
Exception information: Traceback (most recent call last): File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 216, in _main status = self.run(options, args) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 182, in wrapper return func(self, options, args) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 324, in run requirement_set = resolver.resolve( File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/resolution/legacy/resolver.py", line 183, in resolve discovered_reqs.extend(self._resolve_one(requirement_set, req)) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/resolution/legacy/resolver.py", line 388, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/resolution/legacy/resolver.py", line 326, in _get_abstract_dist_for return self.preparer.prepare_editable_requirement(req) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 523, in prepare_editable_requirement req.update_editable(not self._download_should_save) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/req/req_install.py", line 664, in update_editable vcs_backend.obtain(self.source_dir, url=hidden_url) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/vcs/versioncontrol.py", line 641, in obtain self.fetch_new(dest, url, rev_options) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/vcs/git.py", line 230, in fetch_new self.run_command(make_command('clone', '-q', url, dest)) File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/vcs/versioncontrol.py", line 771, in run_command return call_subprocess(cmd, cwd, File "/path/to/venv/lib/python3.8/site-packages/pip/_internal/vcs/versioncontrol.py", line 166, in call_subprocess raise SubProcessError(exc_msg) pip._internal.exceptions.SubProcessError: Command errored out with exit status 128: git clone -q ssh://github.com/Organization/repo-name.git /path/to/venv/src/foo Check the logs for full command output. Removed build tracker: '/private/var/folders/yl/tnzf__856m90wwx0jrqwmhbw0000gn/T/pip-req-tracker-ckqzav7i' Attempt Method #2 Another way I try for requirements.txt: -e [email protected]:Organization/repo-name.git#egg=foo The cloning in this actually works! It also prints this warning: DEPRECATION: This form of VCS requirement is being deprecated Unfortunately, I cannot figure out how to specify a branch name in this format. What am I doing wrong? Am I missing something? FYI, my versions: Python==3.8.5 pip==20.2 setuptools==47.1.0 | I figured out my problem in both cases... syntax Attempt #1 Mistake: needed to say [email protected] Correct method: -e git+ssh://[email protected]/Organization/repo-name.git@branch/name#egg=foo Attempt #2 Mistake: didn't know one can use @ twice Correct method: -e [email protected]:Organization/repo-name.git@branch/name#egg=foo | 13 | 14 |
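For readability, here are the two corrected requirement lines from the answer above as they would sit in a requirements.txt file (organization, repository, and branch names are placeholders; the explicit `git@` SSH user is the part that was missing in Attempt #1):

```
# requirements.txt sketch -- placeholder names throughout
# ssh:// form: note the explicit git@ user in front of github.com
-e git+ssh://git@github.com/Organization/repo-name.git@branch/name#egg=foo
# scp-like form (pip flags it as deprecated): the second @ pins the branch
-e git+git@github.com:Organization/repo-name.git@branch/name#egg=foo
```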
63,207,385 | 2020-8-1 | https://stackoverflow.com/questions/63207385/how-do-i-install-pip-for-python-3-8-on-ubuntu-without-changing-any-defaults | I'm trying to install pip for Python 3.8 on an Ubuntu 18.04 LTS. I know this has been asked way too many times. But those questions do not concern keeping Ubuntu's defaults specifically. And the answers on those questions either don't work or go on to suggest something so drastic that it would break the system - e.g. change default python3 version from 3.6 to 3.8, which you shouldn't! So far, I've been able to install python3.8 successfully using the PPA - ppa:deadsnakes/ppa: sudo add-apt-repository ppa:deadsnakes/ppa sudo apt update sudo apt install python3.8 I changed python command from python2 to python3.8 using update-alternatives: update-alternatives --remove python /usr/bin/python2 sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.8 10 Now, I get Python 3.8 when I run python --version: Python 3.8.5 The problem is, I still can't install pip for Python 3.8. If I try to install python3-pip, it installs pip for Python 3.6 since python3 still points to python3.6.9, and I intend to keep it that way. Try installing python-pip, and it will install pip for Python 2.7. Also there's no such package as python3.8-pip, so I can't install it like: sudo apt install python3.8-pip Output: E: Unable to locate package python3.8-pip E: Couldn't find any package by glob 'python3.8-pip' E: Couldn't find any package by regex 'python3.8-pip' What can I do to install pip for Python 3.8 on an Ubuntu 18.04? | While we can use pip directly as a Python module (the recommended way): python -m pip --version This is how I installed it (so it can be called directly): Firstly, make sure that command pip is available and it isn't being used by pip for Python 2.7 sudo apt remove python-pip Now if you write pip in the Terminal, you'll get that nothing is installed there: pip --version Output: Command 'pip' not found, but can be installed with: sudo apt install python-pip Install python3.8 and setup up correct version on python command using update-alternatives (as done in the question). Make sure, you have python3-pip installed: (This won't work without python3-pip. Although this will install pip 9.0.1 from python 3.6, we'll need it.) sudo apt install python3-pip This will install pip 9.0.1 as pip3: pip3 --version Output: pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6) Now, to install pip for Python 3.8, I used pip by calling it as a python module (ironic!): python -m pip install pip Output: Collecting pip Downloading https://files.pythonhosted.org/packages/36/74/38c2410d688ac7b48afa07d413674afc1f903c1c1f854de51dc8eb2367a5/pip-20.2-py2.py3-none-any.whl (1.5MB) 100% |████████████████████████████████| 1.5MB 288kB/s Installing collected packages: pip Successfully installed pip-20.2 It looks like, when I called pip (which was installed for Python 3.6, BTW) as a module of Python 3.8, and installed pip, it actually worked. 
Now, make sure your ~/.local/bin directory is set in PATH environment variable: Open ~/.bashrc using your favourite editor (if you're using zsh, replace .bashrc with .zshrc) nano ~/.bashrc And paste the following at the end of the file # set PATH so it includes user's private bin if it exists if [ -d "$HOME/.local/bin" ] ; then PATH="$HOME/.local/bin:$PATH" fi Finally, source your .bashrc (or restart the Terminal window): source ~/.bashrc Now if you try running pip directly it'll give you the correct version: pip --version Output: pip 20.2 from /home/qumber/.local/lib/python3.8/site-packages/pip (python 3.8) Sweet! | 38 | 42 |
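When several Python versions coexist like this, a quick diagnostic from inside the interpreter itself shows which pip belongs to which Python; a small sketch, not part of the answer above:

```python
# Diagnostic sketch: confirm which interpreter/pip pair you are actually using.
# Run it with the interpreter you care about, e.g. `python3.8 check_pip.py`.
import sys
import pip  # assumes pip is importable for this interpreter

print("interpreter:", sys.executable)
print("pip version:", pip.__version__)
print("pip lives in:", pip.__file__)
```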
63,161,585 | 2020-7-29 | https://stackoverflow.com/questions/63161585/what-is-the-purpose-of-pydantics-secretstr | I am learning the Pydantic module, trying to adopt its features/benefits via a toy FastAPI web backend as an example implementation. I chose to use Pydantic's SecretStr to "hide" passwords. I know it is not really secure, and I am also using passlib for proper password encryption in DB storage (and using HTTPS for security in transit). But this got me thinking: if there is no real security to SecretStr, what is its purpose? I don't mean for this to sound inflammatory; Pydantic does not claim that the Secret Types are secure. The only claim they provide is this: You can use the SecretStr and the SecretBytes data types for storing sensitive information that you do not want to be visible in logging or tracebacks. But I do not understand this: how does SecretStr help in hiding logging or tracebacks? Can't I just make sure not to log the password at all? Can someone provide an explanation + example to help me better understand when and how it can be helpful? I am struggling to find its real purpose... and if there is no benefit, then it is better to just use an str for the model/schema instead of SecretStr. | You already answered a big part of the question yourself. You can use the SecretStr and the SecretBytes data types for storing sensitive information that you do not want to be visible in logging or tracebacks. I would like to add another benefit. Developers are constantly reminded that they are working with secrets because they need to invoke .get_secret_value() to read the real value. We might consider this as syntactic salt (analog to syntactic sugar which makes things easier). Syntactic salt makes things harder to mess up. If you would try to send a SecretStr in a response model in e.g. FastAPI, you'd need to proactively enable that functionality. From the docs: class SimpleModelDumpable(BaseModel): password: SecretStr class Config: json_encoders = { SecretStr: lambda v: v.get_secret_value() if v else None } | 17 | 33 |
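A minimal sketch of the masking behaviour described above, in Pydantic v1 style (the model and field names are made up):

```python
from pydantic import BaseModel, SecretStr

class Credentials(BaseModel):
    username: str
    password: SecretStr

creds = Credentials(username="alice", password="hunter2")

# str/repr/logging show a masked value, so an accidental log line leaks nothing
print(creds)                              # username='alice' password=SecretStr('**********')
print(creds.password)                     # **********

# reading the real value requires an explicit, easy-to-grep call
print(creds.password.get_secret_value())  # hunter2
```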
63,182,075 | 2020-7-30 | https://stackoverflow.com/questions/63182075/python-opencv-centroid-determination-in-bacterial-clusters | I am currently working on developing an algorithm to determine centroid positions from (Brightfield) microscopy images of bacterial clusters. This is currently a major open problem in image processing. This question is a follow-up to: Python/OpenCV — Matching Centroid Points of Bacteria in Two Images. Currently, the algorithm is effective for sparse, spaced-out bacteria. However, it becomes totally ineffective when the bacteria become clustered together. In these images, notice how the bacterial centroids are located effectively. Bright-Field Image #1 Bright-Field Image #2 Bright-Field Image #3 However, the algorithm fails when the bacteria cluster at varying levels. Bright-Field Image #4 Bright-Field Image #5 Bright-Field Image #6 Bright-Field Image #7 Bright-Field Image #8 Original Images Bright-Field Image #1 Bright-Field Image #2 Bright-Field Image #3 Bright-Field Image #4 Bright-Field Image #5 Bright-Field Image #6 Bright-Field Image #7 Bright-Field Image #8 I'd like to optimize my current algorithm so it's more robust for these type of images. This is the program I'm running. import cv2 import numpy as np import os kernel = np.array([[0, 0, 1, 0, 0], [0, 1, 1, 1, 0], [1, 1, 1, 1, 1], [0, 1, 1, 1, 0], [0, 0, 1, 0, 0]], dtype=np.uint8) def e_d(image, it): image = cv2.erode(image, kernel, iterations=it) image = cv2.dilate(image, kernel, iterations=it) return image path = r"(INSERT IMAGE DIRECTORY HERE)" img_files = [file for file in os.listdir(path)] def segment_index(index: int): segment_file(img_files[index]) def segment_file(img_file: str): img_path = path + "\\" + img_file print(img_path) img = cv2.imread(img_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Applying adaptive mean thresholding th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 2) # Removing small noise th = e_d(th.copy(), 1) # Finding contours with RETR_EXTERNAL flag and removing undesired contours and # drawing them on a new image. cnt, hie = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cntImg = th.copy() for contour in cnt: x, y, w, h = cv2.boundingRect(contour) # Eliminating the contour if its width is more than half of image width # (bacteria will not be that big). if w > img.shape[1] / 2: continue cntImg = cv2.drawContours(cntImg, [cv2.convexHull(contour)], -1, 255, -1) # Removing almost all the remaining noise. # (Some big circular noise will remain along with bacteria contours) cntImg = e_d(cntImg, 3) # Finding new filtered contours again cnt2, hie2 = cv2.findContours(cntImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Now eliminating circular type noise contours by comparing each contour's # extent of overlap with its enclosing circle. finalContours = [] # This will contain the final bacteria contours for contour in cnt2: # Finding minimum enclosing circle (x, y), radius = cv2.minEnclosingCircle(contour) center = (int(x), int(y)) radius = int(radius) # creating a image with only this circle drawn on it(filled with white colour) circleImg = np.zeros(img.shape, dtype=np.uint8) circleImg = cv2.circle(circleImg, center, radius, 255, -1) # creating a image with only the contour drawn on it(filled with white colour) contourImg = np.zeros(img.shape, dtype=np.uint8) contourImg = cv2.drawContours(contourImg, [contour], -1, 255, -1) # White pixels not common in both contour and circle will remain white # else will become black. 
union_inter = cv2.bitwise_xor(circleImg, contourImg) # Finding ratio of the extent of overlap of contour to its enclosing circle. # Smaller the ratio, more circular the contour. ratio = np.sum(union_inter == 255) / np.sum(circleImg == 255) # Storing only non circular contours(bacteria) if ratio > 0.55: finalContours.append(contour) finalContours = np.asarray(finalContours) # Finding center of bacteria and showing it. bacteriaImg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for bacteria in finalContours: M = cv2.moments(bacteria) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) bacteriaImg = cv2.circle(bacteriaImg, (cx, cy), 5, (0, 0, 255), -1) cv2.imshow("bacteriaImg", bacteriaImg) cv2.waitKey(0) # Segment Each Image for i in range(len(img_files)): segment_index(i) Ideally I would like at least to improve on a couple of the posted images. | The mask is always the weak point in identifying objects, and the most important step. This will improve identifying images with high numbers of bacteria. I have modified your e_d function by adding an OPEN and another ERODE pass with the kernal, and changed the it (number of iterations) variable (to 1, 2 instead of 1,3) for your code to do this. This is by no means a finished effort, but I hope it will give you an idea of what you might try to enhance it further. I used the images you provided, and since they already have a red dot, this may be interfering with my result images... but you can see it is able to identify more bacteria on most. Some of my results show two dots, and the image with only one bacteria, I missed it, each quite possibly because it was already marked. Try it with the raw images and see how it does. Also, since the bacteria are relatively uniform in both size and shape, I think you could work with the ratio and/or average of height to width of each bacteria to filter out the extreme shapes (small or large) and the skinny, long shapes too. You can measure enough bacteria to see what is the average contour length, or height and width, or height/width ratio, etc., to find reasonable tolerances rather than the proportion to the image size itself. Another suggestion, would be to rethink how you are masking the images all together, possibly to try it in two steps. One to find the boundary of the long shape containing the bacteria, and then to find the bacteria within it. This assumes all of your images will be similar to these, and if that is so, it may help to eliminate the stray hits outside of this boundary, that are never bacteria. 
#!usr/bin/env python # https://stackoverflow.com/questions/63182075/python-opencv-centroid-determination-in-bacterial-clusters import cv2 import numpy as np import os kernel = np.array([[0, 0, 1, 0, 0], [0, 1, 1, 1, 0], [1, 1, 1, 1, 1], [0, 1, 1, 1, 0], [0, 0, 1, 0, 0]], dtype=np.uint8) def e_d(image, it): print(it) image = cv2.erode(image, kernel, iterations=it) image = cv2.dilate(image, kernel, iterations=it) image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations = 1) image = cv2.morphologyEx(image, cv2.MORPH_ERODE, kernel, iterations = 1) return image #path = r"(INSERT IMAGE DIRECTORY HERE)" path = r"E:\stackimages" img_files = [file for file in os.listdir(path)] def segment_index(index: int): segment_file(img_files[index]) def segment_file(img_file: str): img_path = path + "\\" + img_file print(img_path) head, tail = os.path.split(img_path) img = cv2.imread(img_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) cv2.imshow("bacteriaImg-1", img) cv2.waitKey(0) # Applying adaptive mean thresholding th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 2) # Removing small noise th = e_d(th.copy(), 1) # Finding contours with RETR_EXTERNAL flag and removing undesired contours and # drawing them on a new image. cnt, hie = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cntImg = th.copy() for contour in cnt: x, y, w, h = cv2.boundingRect(contour) # Eliminating the contour if its width is more than half of image width # (bacteria will not be that big). if w > img.shape[1] / 2: continue else: cntImg = cv2.drawContours(cntImg, [cv2.convexHull(contour)], -1, 255, -1) # Removing almost all the remaining noise. # (Some big circular noise will remain along with bacteria contours) cntImg = e_d(cntImg, 2) cv2.imshow("bacteriaImg-2", cntImg) cv2.waitKey(0) # Finding new filtered contours again cnt2, hie2 = cv2.findContours(cntImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Now eliminating circular type noise contours by comparing each contour's # extent of overlap with its enclosing circle. finalContours = [] # This will contain the final bacteria contours for contour in cnt2: # Finding minimum enclosing circle (x, y), radius = cv2.minEnclosingCircle(contour) center = (int(x), int(y)) radius = int(radius) # creating a image with only this circle drawn on it(filled with white colour) circleImg = np.zeros(img.shape, dtype=np.uint8) circleImg = cv2.circle(circleImg, center, radius, 255, -1) # creating a image with only the contour drawn on it(filled with white colour) contourImg = np.zeros(img.shape, dtype=np.uint8) contourImg = cv2.drawContours(contourImg, [contour], -1, 255, -1) # White pixels not common in both contour and circle will remain white # else will become black. union_inter = cv2.bitwise_xor(circleImg, contourImg) # Finding ratio of the extent of overlap of contour to its enclosing circle. # Smaller the ratio, more circular the contour. ratio = np.sum(union_inter == 255) / np.sum(circleImg == 255) # Storing only non circular contours(bacteria) if ratio > 0.55: finalContours.append(contour) finalContours = np.asarray(finalContours) # Finding center of bacteria and showing it. 
bacteriaImg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for bacteria in finalContours: M = cv2.moments(bacteria) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) bacteriaImg = cv2.circle(bacteriaImg, (cx, cy), 5, (0, 0, 255), -1) cv2.imshow("bacteriaImg", bacteriaImg) cv2.waitKey(0) # Segment Each Image for i in range(len(img_files)): segment_index(i) | 9 | 5 |
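A technique often combined with this kind of mask tuning to split touching blobs (not part of the answer above) is a distance-transform/watershed pass; a rough sketch, assuming `mask` is the cleaned-up binary image (uint8, 0/255) and `img_bgr` is the original 3-channel image:

```python
import cv2
import numpy as np

def split_touching_blobs(mask, img_bgr):
    """Separate touching objects in a binary mask via distance transform + watershed."""
    # peaks of the distance transform give roughly one seed per object
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)

    # label the seeds; 0 is reserved for the "unknown" band between objects
    n_labels, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    unknown = cv2.subtract(mask, sure_fg)
    markers[unknown == 255] = 0

    # watershed grows each seed until neighbouring objects meet
    markers = cv2.watershed(img_bgr, markers)

    # label 1 is background, -1 marks the watershed boundaries
    centroids = []
    for label in range(2, n_labels + 1):
        ys, xs = np.where(markers == label)
        if len(xs):
            centroids.append((int(xs.mean()), int(ys.mean())))
    return markers, centroids
```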
63,163,251 | 2020-7-29 | https://stackoverflow.com/questions/63163251/how-to-easily-share-a-sample-dataframe-using-df-to-dict | Despite the clear guidance on How do I ask a good question? and How to create a Minimal, Reproducible Example, many just seem to ignore to include a reproducible data sample in their question. So what is a practical and easy way to reproduce a data sample when a simple pd.DataFrame(np.random.random(size=(5, 5))) is not enough? How can you, for example, use df.to_dict() and include the output in a question? | The answer: In many situations, using an approach with df.to_dict() will do the job perfectly! Here are two cases that come to mind: Case 1: You've got a dataframe built or loaded in Python from a local source Case 2: You've got a table in another application (like Excel) The details: Case 1: You've got a dataframe built or loaded from a local source Given that you've got a pandas dataframe named df, just run df.to_dict() in you console or editor, and copy the output that is formatted as a dictionary, and paste the content into pd.DataFrame(<output>) and include that chunk in your now reproducible code snippet. Case 2: You've got a table in another application (like Excel) Depending on the source and separator like (',', ';' '\\s+') where the latter means any spaces, you can simply: Ctrl+C the contents run df=pd.read_clipboard(sep='\\s+') in your console or editor, and run df.to_dict(), and include the output in df=pd.DataFrame(<output>) In this case, the start of your question would look something like this: import pandas as pd df = pd.DataFrame({0: {0: 0.25474768796402636, 1: 0.5792136563952824, 2: 0.5950396800676201}, 1: {0: 0.9071073567355232, 1: 0.1657288354283053, 2: 0.4962367707789421}, 2: {0: 0.7440601352930207, 1: 0.7755487356392468, 2: 0.5230707257648775}}) Of course, this gets a little clumsy with larger dataframes. But very often, all anyone who seeks to answer your question need is a little sample of your real world data to take the structure of your data into consideration. And there are two ways you can handle larger dataframes: run df.head(20).to_dict() to only include the first 20 rows, and change the format of your dict using, for example, df.to_dict('split') (there are other options besides 'split') to reshape your output to a dict that requires fewer lines. Here's an example using the iris dataset, among other places available from plotly express. If you just run: import plotly.express as px import pandas as pd df = px.data.iris() df.to_dict() This will produce an output of nearly 1000 lines, and won't be very practical as a reproducible sample. But if you include .head(25), you'll get: {'sepal_length': {0: 5.1, 1: 4.9, 2: 4.7, 3: 4.6, 4: 5.0, 5: 5.4, 6: 4.6, 7: 5.0, 8: 4.4, 9: 4.9}, 'sepal_width': {0: 3.5, 1: 3.0, 2: 3.2, 3: 3.1, 4: 3.6, 5: 3.9, 6: 3.4, 7: 3.4, 8: 2.9, 9: 3.1}, 'petal_length': {0: 1.4, 1: 1.4, 2: 1.3, 3: 1.5, 4: 1.4, 5: 1.7, 6: 1.4, 7: 1.5, 8: 1.4, 9: 1.5}, 'petal_width': {0: 0.2, 1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.4, 6: 0.3, 7: 0.2, 8: 0.2, 9: 0.1}, 'species': {0: 'setosa', 1: 'setosa', 2: 'setosa', 3: 'setosa', 4: 'setosa', 5: 'setosa', 6: 'setosa', 7: 'setosa', 8: 'setosa', 9: 'setosa'}, 'species_id': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1}} And now we're getting somewhere. But depending on the structure and content of the data, this may not cover the complexity of the contents in a satisfactory manner. 
But you can include more data on fewer lines by including to_dict('split') like this: import plotly.express as px df = px.data.iris().head(10) df.to_dict('split') Now your output will look like: {'index': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'columns': ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species', 'species_id'], 'data': [[5.1, 3.5, 1.4, 0.2, 'setosa', 1], [4.9, 3.0, 1.4, 0.2, 'setosa', 1], [4.7, 3.2, 1.3, 0.2, 'setosa', 1], [4.6, 3.1, 1.5, 0.2, 'setosa', 1], [5.0, 3.6, 1.4, 0.2, 'setosa', 1], [5.4, 3.9, 1.7, 0.4, 'setosa', 1], [4.6, 3.4, 1.4, 0.3, 'setosa', 1], [5.0, 3.4, 1.5, 0.2, 'setosa', 1], [4.4, 2.9, 1.4, 0.2, 'setosa', 1], [4.9, 3.1, 1.5, 0.1, 'setosa', 1]]} And now you can easily increase the number in .head(10) without cluttering your question too much. But there's one minor drawback. Now you can no longer use the input directly in pd.DataFrame. But if you include a few specifications with regards to index, column, and data you'll be just fine. So for this particluar dataset, my preferred approach would be: import pandas as pd import plotly.express as px sample = {'index': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], 'columns': ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species', 'species_id'], 'data': [[5.1, 3.5, 1.4, 0.2, 'setosa', 1], [4.9, 3.0, 1.4, 0.2, 'setosa', 1], [4.7, 3.2, 1.3, 0.2, 'setosa', 1], [4.6, 3.1, 1.5, 0.2, 'setosa', 1], [5.0, 3.6, 1.4, 0.2, 'setosa', 1], [5.4, 3.9, 1.7, 0.4, 'setosa', 1], [4.6, 3.4, 1.4, 0.3, 'setosa', 1], [5.0, 3.4, 1.5, 0.2, 'setosa', 1], [4.4, 2.9, 1.4, 0.2, 'setosa', 1], [4.9, 3.1, 1.5, 0.1, 'setosa', 1], [5.4, 3.7, 1.5, 0.2, 'setosa', 1], [4.8, 3.4, 1.6, 0.2, 'setosa', 1], [4.8, 3.0, 1.4, 0.1, 'setosa', 1], [4.3, 3.0, 1.1, 0.1, 'setosa', 1], [5.8, 4.0, 1.2, 0.2, 'setosa', 1]]} df = pd.DataFrame(index=sample['index'], columns=sample['columns'], data=sample['data']) df Now you'll have this dataframe to work with: sepal_length sepal_width petal_length petal_width species species_id 0 5.1 3.5 1.4 0.2 setosa 1 1 4.9 3.0 1.4 0.2 setosa 1 2 4.7 3.2 1.3 0.2 setosa 1 3 4.6 3.1 1.5 0.2 setosa 1 4 5.0 3.6 1.4 0.2 setosa 1 5 5.4 3.9 1.7 0.4 setosa 1 6 4.6 3.4 1.4 0.3 setosa 1 7 5.0 3.4 1.5 0.2 setosa 1 8 4.4 2.9 1.4 0.2 setosa 1 9 4.9 3.1 1.5 0.1 setosa 1 10 5.4 3.7 1.5 0.2 setosa 1 11 4.8 3.4 1.6 0.2 setosa 1 12 4.8 3.0 1.4 0.1 setosa 1 13 4.3 3.0 1.1 0.1 setosa 1 14 5.8 4.0 1.2 0.2 setosa 1 Which will increase your chances of receiving useful answers significantly! Edit: df_to_dict() will not be able to read timestamps like 1: Timestamp('2020-01-02 00:00:00') without also including from pandas import Timestamp | 15 | 20 |
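If you share samples often, a tiny helper that prints a ready-to-paste snippet saves a few steps; a convenience sketch (the function name is made up):

```python
import pandas as pd

def to_question_snippet(df: pd.DataFrame, n: int = 10) -> str:
    """Return a copy-pasteable pd.DataFrame(...) line for the first n rows."""
    return f"df = pd.DataFrame({df.head(n).to_dict()})"

# usage
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
print(to_question_snippet(df, n=2))
# df = pd.DataFrame({'a': {0: 1, 1: 2}, 'b': {0: 'x', 1: 'y'}})
```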
63,196,320 | 2020-7-31 | https://stackoverflow.com/questions/63196320/installing-opencv-with-conda-and-spyder | I'm having trouble installing OpenCV with Conda. I tried running numerous commands, none of which worked. For example, when I ran conda install -c anaconda opencv (as per https://anaconda.org/anaconda/opencv) I get this error: UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment: Specifications: - opencv -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0'] Your python: python=3.8 If python is on the left-most side of the chain, that's the version you've asked for. When python appears to the right, that indicates that the thing on the left is somehow not available for the python version you are constrained to. Note that conda will not change your python version to a different minor version unless you explicitly specify that. Why is this happening and how can I install OpenCV in Spyder? Thanks. | The answer for this issue is updating Anaconda to the newest version: conda update -n base -c defaults conda then you can install opencv normally using: conda install -c conda-forge opencv | 10 | 12 |
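After installing, a quick check from Python (for example in Spyder's console) confirms that the interpreter picks up the package conda just installed; a small sketch:

```python
# Sanity check: run inside the conda environment you installed into
import cv2

print("OpenCV version:", cv2.__version__)
# the module path reveals which environment the package was loaded from
print("Loaded from:", cv2.__file__)
```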
63,163,027 | 2020-7-29 | https://stackoverflow.com/questions/63163027/how-to-use-pyinstaller-with-matplotlib-in-use | I have this script that I attached a GUI to the front of and wanted to distribute it to other DnD DMs for them to use to overlay grids onto images. Only issue is that everytime I try to package the python script using Pyinstaller, it keeps throwing two different errors. If I run pyinstaller --hidden-import matplotlib myscript.py it returns NameError: name 'defaultParams' is not defined [7532] Failed to execute script ImageGridder So I decided to try and run the command again with the --onefile option. When I do so it returns, RuntimeError: Could not find the matplotlib data files [18884] Failed to execute script ImageGridder Now in both examples, the packaging process completes, and all of the files seem to be generated correctly. It's just when I run the .exe it generates, it crashes. I know the file itself runs properly as I have gotten it to run and work properly on both my desktop and laptop. I've done a good few hours of searching for anything that would help, but nothing really seems to work. The script itself, from PIL import Image import tkinter as tk from tkinter import filedialog import matplotlib.pyplot as PLT import matplotlib.ticker as plticker class gridder(tk.Tk): def __init__(self): tk.Tk.__init__(self) self.initialize() def initialize(self): self.iWidth = tk.StringVar() self.iHeight = tk.StringVar() self.imgSelect = tk.StringVar() self.squareLength= tk.IntVar() self.ratioX=tk.IntVar() self.ratioY=tk.IntVar() self.checkSquare = tk.IntVar() self.colorLine= tk.StringVar() self.colorLine.set('k') self.checkSquare.set(0) self.ratioX.set(10) self.ratioY.set(10) self.squareLength.set(120) # row 1 labelDisclaim = tk.Label(self, text='Currently only works with jpegs') labelDisclaim.grid(column=2, row=1) # row 2 buttonOpen = tk.Button(self, text="Select an Image", command=self.openExplorer) buttonOpen.grid(column=1, row=2) labelSelected= tk.Label(self, text="Selected Image: ") labelSelected.grid(column=2,row=2) labelImgName = tk.Label(self, textvariable=self.imgSelect) labelImgName.grid(column=3,row=2) # row 3 labelStaticImg= tk.Label(self, text="Width of image, in pixels: ") labelStaticImg.grid(column=1,row=3) labelImgWidth = tk.Label(self, textvariable=self.iWidth, anchor='w') labelImgWidth.grid(column=2,row=3) labelStaticHeight= tk.Label(self, text="Height of image, in pixels: ") labelStaticHeight.grid(column=3,row=3) labelImgHeight = tk.Label(self, textvariable=self.iHeight, anchor='w') labelImgHeight.grid(column=4,row=3) # row 4 labelRatioX = tk.Label(self, text="Enter the Ratio along the X axis, default is 10: ") labelRatioX.grid(column=1,row=4) entryRatioX = tk.Entry(self, textvariable=self.ratioX) entryRatioX.grid(column=2,row=4) labelRatioY =tk.Label(self, text="Enter the Ratio along the Y axis, default is 10: ") labelRatioY.grid(column=3,row=4) entryRatioY = tk.Entry(self, textvariable=self.ratioY) entryRatioY.grid(column=4,row=4) # row 5 labelSquare = tk.Label(self, text="For strict squares, in the sense of a battle map, check this ->") labelSquare.grid(column=1,row=5) checkboxSquare = tk.Checkbutton(self, variable=self.checkSquare, text="If checked, it will ignore the ratio and apply squares that are specified by the entry, (default 120x120) ->",wraplength=150) checkboxSquare.grid(column=2,row=5) labelSquareLength = tk.Label(self, text="Side length of Square: ") labelSquareLength.grid(column=3,row=5) entrySquareLength = tk.Entry(self, 
textvariable=self.squareLength) entrySquareLength.grid(column=4,row=5) # row 6 labelColor= tk.Label(self, text="Enter a color for the grid, valid choices black=k, blue=b, green=g, red=r, white=w, brown=brown, yellow=yellow, cyan=c. Default is black: ",wraplength=250) labelColor.grid(column=1,row=6) entryColor = tk.Entry(self, textvariable=self.colorLine) entryColor.grid(column=2,row=6) execButton = tk.Button(self, text="Gridify", command=self.gridify) execButton.grid(column=4,row=6) # row 9 button = tk.Button(self,text="Exit",command=self.closeProgram) button.grid(column=2,row=9) # row 10 labelSig = tk.Label(self, text='By Johnathan Keith, 2020. Ver 1.0. This is free-to-use, and will always be. This was willingly distributed to the public.',wraplength=350) labelSig.grid(column=2,row=10) labelDisclaimer = tk.Label(self, text="This program does NOT generate pop up windows for bad data entries. If the image does not generate into the folder the script is in, you did something wrong.",wraplength=200) labelDisclaimer.grid(column=4,row=10) def openFile(self, imagefilename): Img = Image.open(imagefilename) height, width = Img.size self.iHeight.set(height) self.iWidth.set(width) def gridify(self): ratioX=0 ratioY=0 sidelengthy=0 sidelengthx=0 if self.checkSquare.get(): ratioX=int(self.squareLength.get()) ratioY=int(self.squareLength.get()) sidelengthx=ratioX sidelengthy=ratioY else: ratioX=int(self.ratioX.get()) ratioY=int(self.ratioY.get()) sidelengthy=int(self.iWidth.get())/ratioY sidelengthx=int(self.iHeight.get())/ratioX image=Image.open(self.imgSelect.get()) my_dpi=300. #set the figure up fig=PLT.figure(figsize=(float(image.size[0])/my_dpi,float(image.size[1])/my_dpi),dpi=my_dpi) ax=fig.add_subplot(111) #remove whitespace fig.subplots_adjust(left=0,right=1,bottom=0,top=1) #set gridding interval locx = plticker.MultipleLocator(base=sidelengthx) locy = plticker.MultipleLocator(base=sidelengthy) ax.xaxis.set_major_locator(locx) ax.yaxis.set_major_locator(locy) #add the grid ax.grid(which='major', axis='both', linestyle='-',color=self.colorLine.get()) ax.imshow(image) token=self.imgSelect.get().split('/') saveName= "gridded_"+token[-1] # Save the figure fig.savefig(saveName,dpi=my_dpi) def closeProgram(self): self.destroy() exit() def dataEntry(self): if type(int) == type(int(bHeight)): self.bHeight = int(entryHeight.get()) else: return def openExplorer(self): filename= filedialog.askopenfilename(initialdir="/", title="Select an Image", filetypes=(("jpeg files", "*.jpg"),("all files", "*.*"))) if filename: self.imgSelect.set(filename) self.openFile(filename) if __name__ == "__main__": app = gridder() app.title('Image Gridder') app.mainloop() I am running python 3.8, matplotlib 3.3.0, tkinter, and PIL 6.2.2, and pyinstaller 3.6. The contents of the warning file while using the --onefile command: This file lists modules PyInstaller was not able to find. This does not necessarily mean this module is required for running you program. Python and Python 3rd-party packages include a lot of conditional or optional modules. For example the module 'ntpath' only exists on Windows, whereas the module 'posixpath' only exists on Posix systems. Types if import: * top-level: imported at the top-level - look at these first * conditional: imported within an if-statement * delayed: imported from within a function * optional: imported within a try-except-statement IMPORTANT: Do NOT post this list to the issue-tracker. Use it as a basis for yourself tracking down the missing module. Thanks! 
missing module named _posixsubprocess - imported by subprocess (optional), multiprocessing.util (delayed) missing module named 'org.python' - imported by copy (optional), xml.sax (delayed, conditional), setuptools.sandbox (conditional) missing module named _frozen_importlib_external - imported by importlib._bootstrap (delayed), importlib (optional), importlib.abc (optional), zipimport (top-level) excluded module named _frozen_importlib - imported by importlib (optional), importlib.abc (optional), zipimport (top-level), PyInstaller.loader.pyimod02_archive (delayed, conditional) missing module named urllib.pathname2url - imported by urllib (conditional), PyInstaller.lib.modulegraph._compat (conditional) missing module named _posixshmem - imported by multiprocessing.resource_tracker (conditional), multiprocessing.shared_memory (conditional) missing module named multiprocessing.set_start_method - imported by multiprocessing (top-level), multiprocessing.spawn (top-level) missing module named multiprocessing.get_start_method - imported by multiprocessing (top-level), multiprocessing.spawn (top-level) missing module named _scproxy - imported by urllib.request (conditional) missing module named termios - imported by tty (top-level), getpass (optional) missing module named resource - imported by posix (top-level), test.support (optional) missing module named 'java.lang' - imported by platform (delayed, optional), xml.sax._exceptions (conditional) missing module named vms_lib - imported by platform (delayed, conditional, optional) missing module named java - imported by platform (delayed) missing module named _winreg - imported by platform (delayed, optional), numpy.distutils.cpuinfo (delayed, conditional, optional), pkg_resources._vendor.appdirs (delayed, conditional) missing module named multiprocessing.get_context - imported by multiprocessing (top-level), multiprocessing.pool (top-level), multiprocessing.managers (top-level), multiprocessing.sharedctypes (top-level) missing module named multiprocessing.TimeoutError - imported by multiprocessing (top-level), multiprocessing.pool (top-level) missing module named multiprocessing.BufferTooShort - imported by multiprocessing (top-level), multiprocessing.connection (top-level) missing module named multiprocessing.AuthenticationError - imported by multiprocessing (top-level), multiprocessing.connection (top-level) missing module named asyncio.DefaultEventLoopPolicy - imported by asyncio (delayed, conditional), asyncio.events (delayed, conditional) missing module named readline - imported by cmd (delayed, conditional, optional), code (delayed, conditional, optional), pdb (delayed, optional) missing module named org - imported by pickle (optional) missing module named grp - imported by shutil (optional), tarfile (optional), pathlib (delayed), distutils.archive_util (optional) missing module named pwd - imported by posixpath (delayed, conditional), shutil (optional), tarfile (optional), pathlib (delayed, conditional, optional), http.server (delayed, optional), webbrowser (delayed), netrc (delayed, conditional), getpass (delayed), distutils.util (delayed, conditional, optional), distutils.archive_util (optional) missing module named posix - imported by os (conditional, optional), shutil (conditional) missing module named 'multiprocessing.forking' - imported by C:\Users\Owner\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\PyInstaller\loader\rthooks\pyi_rth_multiprocessing.py (optional) missing module named 'win32com.gen_py' - imported by 
win32com (conditional, optional), C:\Users\Owner\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\PyInstaller\loader\rthooks\pyi_rth_win32comgenpy.py (top-level) missing module named pyimod03_importers - imported by PyInstaller.loader.pyimod02_archive (delayed, conditional), C:\Users\Owner\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\PyInstaller\loader\rthooks\pyi_rth_pkgres.py (top-level) missing module named 'pkg_resources.extern.pyparsing' - imported by pkg_resources._vendor.packaging.requirements (top-level), pkg_resources._vendor.packaging.markers (top-level) missing module named _uuid - imported by uuid (optional) missing module named __builtin__ - imported by PIL.Image (optional), numpy.core.numerictypes (conditional), numpy.core.numeric (conditional), numpy.lib.function_base (conditional), numpy.lib._iotools (conditional), numpy.ma.core (conditional), numpy.distutils.misc_util (delayed, conditional), numpy (conditional), pyparsing (conditional), pkg_resources._vendor.pyparsing (conditional), setuptools._vendor.pyparsing (conditional) missing module named ordereddict - imported by pyparsing (optional), pkg_resources._vendor.pyparsing (optional), setuptools._vendor.pyparsing (optional) missing module named StringIO - imported by PyInstaller.lib.modulegraph._compat (conditional), PyInstaller.lib.modulegraph.zipio (conditional), setuptools._vendor.six (conditional), numpy.lib.utils (delayed, conditional), numpy.lib.format (delayed, conditional), numpy.testing._private.utils (conditional), six (conditional), pkg_resources._vendor.six (conditional) missing module named 'com.sun' - imported by pkg_resources._vendor.appdirs (delayed, conditional, optional) missing module named com - imported by pkg_resources._vendor.appdirs (delayed) missing module named pkg_resources.extern.packaging - imported by pkg_resources.extern (top-level), pkg_resources (top-level) missing module named pkg_resources.extern.appdirs - imported by pkg_resources.extern (top-level), pkg_resources (top-level) missing module named 'pkg_resources.extern.six.moves' - imported by pkg_resources (top-level), pkg_resources._vendor.packaging.requirements (top-level) missing module named pkg_resources.extern.six - imported by pkg_resources.extern (top-level), pkg_resources (top-level), pkg_resources.py31compat (top-level) missing module named pytest - imported by numpy._pytesttester (delayed), matplotlib (delayed, optional) missing module named commands - imported by numpy.distutils.cpuinfo (conditional) missing module named setuptools.extern.packaging - imported by setuptools.extern (top-level), setuptools.dist (top-level), setuptools.command.egg_info (top-level) missing module named 'setuptools.extern.six' - imported by setuptools (top-level), setuptools.extension (top-level) missing module named 'setuptools.extern.packaging.version' - imported by setuptools.config (top-level), setuptools.msvc (top-level) missing module named setuptools.extern.six.moves.filterfalse - imported by setuptools.extern.six.moves (top-level), setuptools.dist (top-level), setuptools.msvc (top-level) missing module named setuptools.extern.six.moves.filter - imported by setuptools.extern.six.moves (top-level), setuptools.dist (top-level), setuptools.ssl_support (top-level), setuptools.command.py36compat (top-level) missing module named _manylinux - imported by setuptools.pep425tags (delayed, optional) missing module named 'setuptools.extern.packaging.utils' - imported by setuptools.wheel (top-level) missing module named 
wincertstore - imported by setuptools.ssl_support (delayed, optional) missing module named 'backports.ssl_match_hostname' - imported by setuptools.ssl_support (optional) missing module named backports - imported by setuptools.ssl_support (optional) missing module named 'setuptools._vendor.six.moves' - imported by 'setuptools._vendor.six.moves' (top-level) missing module named 'setuptools.extern.pyparsing' - imported by setuptools._vendor.packaging.markers (top-level), setuptools._vendor.packaging.requirements (top-level) missing module named setuptools.extern.six.moves.map - imported by setuptools.extern.six.moves (top-level), setuptools.dist (top-level), setuptools.command.easy_install (top-level), setuptools.sandbox (top-level), setuptools.package_index (top-level), setuptools.ssl_support (top-level), setuptools.command.egg_info (top-level), setuptools.namespaces (top-level) runtime module named setuptools.extern.six.moves - imported by setuptools.dist (top-level), setuptools.py33compat (top-level), configparser (top-level), setuptools.command.easy_install (top-level), setuptools.sandbox (top-level), setuptools.command.setopt (top-level), setuptools.package_index (top-level), setuptools.ssl_support (top-level), setuptools.command.egg_info (top-level), setuptools.command.py36compat (top-level), setuptools.namespaces (top-level), setuptools.msvc (top-level), 'setuptools._vendor.six.moves' (top-level) missing module named setuptools.extern.six - imported by setuptools.extern (top-level), setuptools.monkey (top-level), setuptools.dist (top-level), setuptools.extern.six.moves (top-level), setuptools.py33compat (top-level), setuptools.config (top-level), setuptools.command.easy_install (top-level), setuptools.sandbox (top-level), setuptools.py27compat (top-level), setuptools.package_index (top-level), setuptools.wheel (top-level), setuptools.pep425tags (top-level), setuptools.command.egg_info (top-level), setuptools.command.sdist (top-level), setuptools.command.bdist_egg (top-level), setuptools.unicode_utils (top-level), setuptools.command.develop (top-level) missing module named 'numpy_distutils.cpuinfo' - imported by numpy.f2py.diagnose (delayed, conditional, optional) missing module named 'numpy_distutils.fcompiler' - imported by numpy.f2py.diagnose (delayed, conditional, optional) missing module named 'numpy_distutils.command' - imported by numpy.f2py.diagnose (delayed, conditional, optional) missing module named numpy_distutils - imported by numpy.f2py.diagnose (delayed, optional) missing module named __svn_version__ - imported by numpy.f2py.__version__ (optional) missing module named numarray - imported by numpy.distutils.system_info (delayed, conditional, optional) missing module named Numeric - imported by numpy.distutils.system_info (delayed, conditional, optional) missing module named ConfigParser - imported by numpy.distutils.system_info (conditional), numpy.distutils.npy_pkg_config (conditional) missing module named _curses - imported by curses (top-level), curses.has_key (top-level) missing module named _dummy_threading - imported by dummy_threading (optional) missing module named 'nose.plugins' - imported by numpy.testing._private.noseclasses (top-level), numpy.testing._private.nosetester (delayed) missing module named scipy - imported by numpy.testing._private.nosetester (delayed, conditional) missing module named 'nose.util' - imported by numpy.testing._private.noseclasses (top-level) missing module named nose - imported by numpy.testing._private.utils (delayed, optional), 
numpy.testing._private.decorators (delayed), numpy.testing._private.noseclasses (top-level) missing module named dummy_thread - imported by numpy.core.arrayprint (conditional, optional) missing module named thread - imported by numpy.core.arrayprint (conditional, optional), PyInstaller.loader.pyimod02_archive (conditional) missing module named cPickle - imported by numpy.core.numeric (conditional) missing module named cStringIO - imported by cPickle (top-level) missing module named copy_reg - imported by cPickle (top-level), cStringIO (top-level), numpy.core (conditional) missing module named pickle5 - imported by numpy.core.numeric (conditional, optional) missing module named numpy.core.number - imported by numpy.core (delayed), numpy.testing._private.utils (delayed) missing module named numpy.core.signbit - imported by numpy.core (delayed), numpy.testing._private.utils (delayed) missing module named numpy.core.float64 - imported by numpy.core (delayed), numpy.testing._private.utils (delayed) missing module named numpy.core.float32 - imported by numpy.core (top-level), numpy.testing._private.utils (top-level) missing module named numpy.lib.i0 - imported by numpy.lib (top-level), numpy.dual (top-level) missing module named numpy.core.integer - imported by numpy.core (top-level), numpy.fft.helper (top-level) missing module named numpy.core.sqrt - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft.fftpack (top-level) missing module named numpy.core.conjugate - imported by numpy.core (top-level), numpy.fft.fftpack (top-level) missing module named numpy.core.divide - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.object_ - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.intp - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.geterrobj - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.add - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.complexfloating - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.inexact - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.cdouble - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.csingle - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.double - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named numpy.core.single - imported by numpy.core (top-level), numpy.linalg.linalg (top-level) missing module named future_builtins - imported by numpy.lib.npyio (conditional) missing module named urllib2 - imported by numpy.lib._datasource (delayed, conditional) missing module named urlparse - imported by numpy.lib._datasource (delayed, conditional) missing module named numpy.recarray - imported by numpy (top-level), numpy.ma.mrecords (top-level) missing module named numpy.dtype - imported by numpy (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level) missing module named numpy.expand_dims - imported by numpy (top-level), numpy.ma.core (top-level) missing module named numpy.array - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.extras (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level) 
missing module named numpy.bool_ - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.mrecords (top-level) missing module named numpy.iscomplexobj - imported by numpy (top-level), numpy.ma.core (top-level) missing module named numpy.amin - imported by numpy (top-level), numpy.ma.core (top-level) missing module named numpy.amax - imported by numpy (top-level), numpy.ma.core (top-level) missing module named numpy.ndarray - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.extras (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level) missing module named numpy.histogramdd - imported by numpy (delayed), numpy.lib.twodim_base (delayed) missing module named numpy.eye - imported by numpy (delayed), numpy.core.numeric (delayed) missing module named six.moves.zip - imported by six.moves (top-level), cycler (top-level) runtime module named six.moves - imported by cycler (top-level), dateutil.tz.tz (top-level), dateutil.tz._factories (top-level), dateutil.tz.win (top-level), dateutil.rrule (top-level) missing module named six.moves.range - imported by six.moves (top-level), dateutil.rrule (top-level) missing module named colorama - imported by tornado.log (optional) missing module named typing_extensions - imported by tornado.ioloop (conditional), tornado.websocket (conditional) missing module named fcntl - imported by tornado.platform.posix (top-level) missing module named dateutil.tz.tzfile - imported by dateutil.tz (top-level), dateutil.zoneinfo (top-level) missing module named shiboken - imported by matplotlib.backends.qt_compat (delayed, conditional) missing module named PySide - imported by matplotlib.backends.qt_compat (delayed, conditional) missing module named PyQt4 - imported by matplotlib.backends.qt_compat (delayed) missing module named shiboken2 - imported by matplotlib.backends.qt_compat (delayed, conditional) missing module named PySide2 - imported by PIL.ImageQt (conditional, optional), matplotlib.backends.qt_compat (delayed, conditional) missing module named sip - imported by matplotlib.backends.qt_compat (delayed, conditional, optional), PyQt5 (top-level) missing module named matplotlib.axes.Axes - imported by matplotlib.axes (top-level), matplotlib.pyplot (top-level), matplotlib.legend (delayed), matplotlib.projections.geo (top-level), matplotlib.projections.polar (top-level), mpl_toolkits.mplot3d.axes3d (top-level), matplotlib.figure (top-level) missing module named 'IPython.core' - imported by matplotlib.backend_bases (delayed), matplotlib.pyplot (delayed, conditional, optional) missing module named IPython - imported by matplotlib.backend_bases (delayed), matplotlib.pyplot (delayed, conditional, optional) missing module named matplotlib.tri.Triangulation - imported by matplotlib.tri (top-level), matplotlib.tri.trifinder (top-level), matplotlib.tri.tritools (top-level), matplotlib.tri.triinterpolate (top-level) missing module named matplotlib.axes.Subplot - imported by matplotlib.axes (top-level), matplotlib.pyplot (top-level) missing module named olefile - imported by PIL.MicImagePlugin (top-level), PIL.FpxImagePlugin (top-level) missing module named UserDict - imported by PIL.PdfParser (optional) missing module named Tkinter - imported by PIL.ImageTk (conditional) missing module named 'PySide.QtCore' - imported by PIL.ImageQt (conditional, optional) missing module named 'PyQt4.QtCore' - imported by PIL.ImageQt (conditional, optional) missing module named 'PySide2.QtCore' - imported by PIL.ImageQt (conditional, optional) missing 
module named pathlib2 - imported by PIL.Image (optional) missing module named cffi - imported by PIL.Image (optional), PIL.PyAccess (top-level), win32ctypes.core (optional), PIL.ImageTk (delayed, conditional, optional) | You can try to solve this problem by installing an older version of the matplotlib package, e.g.: pip install matplotlib==3.2.2 | 9 | 8 |
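If pinning matplotlib back is not an option, it can help to compare where matplotlib finds its data files in the working (unfrozen) interpreter versus inside a frozen build; a small diagnostic sketch, not a fix in itself:

```python
# Diagnostic sketch: print where matplotlib expects its data files.
# Compare the output of the plain interpreter with that of a (working) frozen build.
import sys
import matplotlib

print("frozen:", getattr(sys, "frozen", False))
print("_MEIPASS:", getattr(sys, "_MEIPASS", "not a one-file bundle"))
print("matplotlib data path:", matplotlib.get_data_path())
```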
63,220,629 | 2020-8-2 | https://stackoverflow.com/questions/63220629/inverse-of-numpy-gradient-function | I need to create a function which would be the inverse of the np.gradient function. Where the Vx,Vy arrays (Velocity component vectors) are the input and the output would be an array of anti-derivatives (Arrival Time) at the datapoints x,y. I have data on a (x,y) grid with scalar values (time) at each point. I have used the numpy gradient function and linear interpolation to determine the gradient vector Velocity (Vx,Vy) at each point (See below). I have achieved this by: #LinearTriInterpolator applied to a delaunay triangular mesh LTI= LinearTriInterpolator(masked_triang, time_array) #Gradient requested at the mesh nodes: (Vx, Vy) = LTI.gradient(triang.x, triang.y) The first image below shows the velocity vectors at each point, and the point labels represent the time value which formed the derivatives (Vx,Vy) The next image shows the resultant scalar value of the derivatives (Vx,Vy) plotted as a colored contour graph with associated node labels. So my challenge is: I need to reverse the process! Using the gradient vectors (Vx,Vy) or the resultant scalar value to determine the original Time-Value at that point. Is this possible? Knowing that the numpy.gradient function is computed using second order accurate central differences in the interior points and either first or second order accurate one-sides (forward or backwards) differences at the boundaries, I am sure there is a function which would reverse this process. I was thinking that taking a line derivative between the original point (t=0 at x1,y1) to any point (xi,yi) over the Vx,Vy plane would give me the sum of the velocity components. I could then divide this value by the distance between the two points to get the time taken.. Would this approach work? And if so, which numpy integrate function would be best applied? An example of my data can be found here [http://www.filedropper.com/calculatearrivaltimefromgradientvalues060820] Your help would be greatly appreciated EDIT: Maybe this simplified drawing might help understand where I'm trying to get to.. EDIT: Thanks to @Aguy who has contibuted to this code.. I Have tried to get a more accurate representation using a meshgrid of spacing 0.5 x 0.5m and calculating the gradient at each meshpoint, however I am not able to integrate it properly. I also have some edge affects which are affecting the results that I don't know how to correct. 
import numpy as np from scipy import interpolate from matplotlib import pyplot from mpl_toolkits.mplot3d import Axes3D #Createmesh grid with a spacing of 0.5 x 0.5 stepx = 0.5 stepy = 0.5 xx = np.arange(min(x), max(x), stepx) yy = np.arange(min(y), max(y), stepy) xgrid, ygrid = np.meshgrid(xx, yy) grid_z1 = interpolate.griddata((x,y), Arrival_Time, (xgrid, ygrid), method='linear') #Interpolating the Time values #Formatdata X = np.ravel(xgrid) Y= np.ravel(ygrid) zs = np.ravel(grid_z1) Z = zs.reshape(X.shape) #Calculate Gradient (dx,dy) = np.gradient(grid_z1) #Find gradient for points on meshgrid Velocity_dx= dx/stepx #velocity ms/m Velocity_dy= dy/stepx #velocity ms/m Resultant = (Velocity_dx**2 + Velocity_dy**2)**0.5 #Resultant scalar value ms/m Resultant = np.ravel(Resultant) #Plot Original Data F(X,Y) on the meshgrid fig = pyplot.figure() ax = fig.add_subplot(projection='3d') ax.scatter(x,y,Arrival_Time,color='r') ax.plot_trisurf(X, Y, Z) ax.set_xlabel('X-Coordinates') ax.set_ylabel('Y-Coordinates') ax.set_zlabel('Time (ms)') pyplot.show() #Plot the Derivative of f'(X,Y) on the meshgrid fig = pyplot.figure() ax = fig.add_subplot(projection='3d') ax.scatter(X,Y,Resultant,color='r',s=0.2) ax.plot_trisurf(X, Y, Resultant) ax.set_xlabel('X-Coordinates') ax.set_ylabel('Y-Coordinates') ax.set_zlabel('Velocity (ms/m)') pyplot.show() #Integrate to compare the original data input dxintegral = np.nancumsum(Velocity_dx, axis=1)*stepx dyintegral = np.nancumsum(Velocity_dy, axis=0)*stepy valintegral = np.ma.zeros(dxintegral.shape) for i in range(len(yy)): for j in range(len(xx)): valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2], dyintegral[i, len(yy) // 2], dxintegral[i, j], - dxintegral[i, len(xx) // 2]]) valintegral = valintegral * np.isfinite(dxintegral) Now the np.gradient is applied at every meshnode (dx,dy) = np.gradient(grid_z1) Now in my process I would analyse the gradient values above and make some adjustments (There is some unsual edge effects that are being create which I need to rectify) and would then integrate the values to get back to a surface which would be very similar to f(x,y) shown above. I need some help adjusting the integration function: #Integrate to compare the original data input dxintegral = np.nancumsum(Velocity_dx, axis=1)*stepx dyintegral = np.nancumsum(Velocity_dy, axis=0)*stepy valintegral = np.ma.zeros(dxintegral.shape) for i in range(len(yy)): for j in range(len(xx)): valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2], dyintegral[i, len(yy) // 2], dxintegral[i, j], - dxintegral[i, len(xx) // 2]]) valintegral = valintegral * np.isfinite(dxintegral) And now I need to calculate the new 'Time' values at the original (x,y) point locations. UPDATE (08-09-20) : I am getting some promising results using the help from @Aguy. The results can be seen below (with the blue contours representing the original data, and the red contours representing the integrated values). 
I am still working on an integration approach which can remove the inaccuarcies at the areas of min(y) and max(y) from matplotlib.tri import (Triangulation, UniformTriRefiner, CubicTriInterpolator,LinearTriInterpolator,TriInterpolator,TriAnalyzer) import pandas as pd from scipy.interpolate import griddata import matplotlib.pyplot as plt import numpy as np from scipy import interpolate #------------------------------------------------------------------------- # STEP 1: Import data from Excel file, and set variables #------------------------------------------------------------------------- df_initial = pd.read_excel( r'C:\Users\morga\PycharmProjects\venv\Development\Trial' r'.xlsx') Inputdata can be found here link df_initial = df_initial .sort_values(by='Delay', ascending=True) #Update dataframe and sort by Delay x = df_initial ['X'].to_numpy() y = df_initial ['Y'].to_numpy() Arrival_Time = df_initial ['Delay'].to_numpy() # Createmesh grid with a spacing of 0.5 x 0.5 stepx = 0.5 stepy = 0.5 xx = np.arange(min(x), max(x), stepx) yy = np.arange(min(y), max(y), stepy) xgrid, ygrid = np.meshgrid(xx, yy) grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear') # Interpolating the Time values # Calculate Gradient (velocity ms/m) (dy, dx) = np.gradient(grid_z1) # Find gradient for points on meshgrid Velocity_dx = dx / stepx # x velocity component ms/m Velocity_dy = dy / stepx # y velocity component ms/m # Integrate to compare the original data input dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy valintegral = np.ma.zeros(dxintegral.shape) # Makes an array filled with 0's the same shape as dx integral for i in range(len(yy)): for j in range(len(xx)): valintegral[i, j] = np.ma.sum( [dxintegral[0, len(xx) // 2], dyintegral[i, len(xx) // 2], dxintegral[i, j], - dxintegral[i, len(xx) // 2]]) valintegral[np.isnan(dx)] = np.nan min_value = np.nanmin(valintegral) valintegral = valintegral + (min_value * -1) ##Plot Results fig = plt.figure() ax = fig.add_subplot() ax.scatter(x, y, color='black', s=7, zorder=3) ax.set_xlabel('X-Coordinates') ax.set_ylabel('Y-Coordinates') ax.contour(xgrid, ygrid, valintegral, levels=50, colors='red', zorder=2) ax.contour(xgrid, ygrid, grid_z1, levels=50, colors='blue', zorder=1) ax.set_aspect('equal') plt.show() | TL;DR; You have multiple challenges to address in this issue, mainly: Potential reconstruction (scalar field) from its gradient (vector field) But also: Observation in a concave hull with non rectangular grid; Numerical 2D line integration and numerical inaccuracy; It seems it can be solved by choosing an adhoc interpolant and a smart way to integrate (as pointed out by @Aguy). MCVE In a first time, let's build a MCVE to highlight above mentioned key points. Dataset We recreate a scalar field and its gradient. 
import numpy as np from scipy import interpolate import matplotlib.pyplot as plt def f(x, y): return x**2 + x*y + 2*y + 1 Nx, Ny = 21, 17 xl = np.linspace(-3, 3, Nx) yl = np.linspace(-2, 2, Ny) X, Y = np.meshgrid(xl, yl) Z = f(X, Y) zl = np.arange(np.floor(Z.min()), np.ceil(Z.max())+1, 2) dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1) V = np.hypot(dZdx, dZdy) The scalar field looks like: axe = plt.axes(projection='3d') axe.plot_surface(X, Y, Z, cmap='jet', alpha=0.5) axe.view_init(elev=25, azim=-45) And, the vector field looks like: axe = plt.contour(X, Y, Z, zl, cmap='jet') axe.axes.quiver(X, Y, dZdx, dZdy, V, units='x', pivot='tip', cmap='jet') axe.axes.set_aspect('equal') axe.axes.grid() Indeed gradient is normal to potential levels. We also plot the gradient magnitude: axe = plt.contour(X, Y, V, 10, cmap='jet') axe.axes.set_aspect('equal') axe.axes.grid() Raw field reconstruction If we naively reconstruct the scalar field from the gradient: SdZx = np.cumsum(dZdx, axis=1)*np.diff(xl)[0] SdZy = np.cumsum(dZdy, axis=0)*np.diff(yl)[0] Zhat = np.zeros(SdZx.shape) for i in range(Zhat.shape[0]): for j in range(Zhat.shape[1]): Zhat[i,j] += np.sum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]]) Zhat += Z[0,0] - Zhat[0,0] We can see the global result is roughly correct, but levels are less accurate where the gradient magnitude is low: Interpolated field reconstruction If we increase the grid resolution and pick a specific interpolant (usual when dealing with mesh grid), we can get a finer field reconstruction: r = np.stack([X.ravel(), Y.ravel()]).T Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel()) Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel()) Nx, Ny = 200, 200 xli = np.linspace(xl.min(), xl.max(), Nx) yli = np.linspace(yl.min(), yl.max(), Nx) Xi, Yi = np.meshgrid(xli, yli) ri = np.stack([Xi.ravel(), Yi.ravel()]).T dZdxi = Sx(ri).reshape(Xi.shape) dZdyi = Sy(ri).reshape(Xi.shape) SdZxi = np.cumsum(dZdxi, axis=1)*np.diff(xli)[0] SdZyi = np.cumsum(dZdyi, axis=0)*np.diff(yli)[0] Zhati = np.zeros(SdZxi.shape) for i in range(Zhati.shape[0]): for j in range(Zhati.shape[1]): Zhati[i,j] += np.sum([SdZyi[i,0], -SdZyi[0,0], SdZxi[i,j], -SdZxi[i,0]]) Zhati += Z[0,0] - Zhati[0,0] Which definitely performs way better: So basically, increasing the grid resolution with an adhoc interpolant may help you to get more accurate result. The interpolant also solve the need to get a regular rectangular grid from a triangular mesh to perform integration. Concave and convex hull You also have pointed out inaccuracy on the edges. Those are the result of the combination of the interpolant choice and the integration methodology. The integration methodology fails to properly compute the scalar field when it reach concave region with few interpolated points. The problem disappear when choosing a mesh-free interpolant able to extrapolate. 
To illustrate it, let's remove some data from our MCVE: q = np.full(dZdx.shape, False) q[0:6,5:11] = True q[-6:,-6:] = True dZdx[q] = np.nan dZdy[q] = np.nan Then the interpolant can be constructed as follow: q2 = ~np.isnan(dZdx.ravel()) r = np.stack([X.ravel(), Y.ravel()]).T[q2,:] Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel()[q2]) Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel()[q2]) Performing the integration we see that in addition of classical edge effect we do have less accurate value in concave regions (swingy dot-dash lines where the hull is concave) and we have no data outside the convex hull as Clough Tocher is a mesh-based interpolant: Vl = np.arange(0, 11, 1) axe = plt.contour(X, Y, np.hypot(dZdx, dZdy), Vl, cmap='jet') axe.axes.contour(Xi, Yi, np.hypot(dZdxi, dZdyi), Vl, cmap='jet', linestyles='-.') axe.axes.set_aspect('equal') axe.axes.grid() So basically the error we are seeing on the corner are most likely due to integration issue combined with interpolation limited to the convex hull. To overcome this we can choose a different interpolant such as RBF (Radial Basis Function Kernel) which is able to create data outside the convex hull: Sx = interpolate.Rbf(r[:,0], r[:,1], dZdx.ravel()[q2], function='thin_plate') Sy = interpolate.Rbf(r[:,0], r[:,1], dZdy.ravel()[q2], function='thin_plate') dZdxi = Sx(ri[:,0], ri[:,1]).reshape(Xi.shape) dZdyi = Sy(ri[:,0], ri[:,1]).reshape(Xi.shape) Notice the slightly different interface of this interpolator (mind how parmaters are passed). The result is the following: We can see the region outside the convex hull can be extrapolated (RBF are mesh free). So choosing the adhoc interpolant is definitely a key point to solve your problem. But we still need to be aware that extrapolation may perform well but is somehow meaningless and dangerous. Solving your problem The answer provided by @Aguy is perfectly fine as it setups a clever way to integrate that is not disturbed by missing points outside the convex hull. But as you mentioned there is inaccuracy in concave region inside the convex hull. If you wish to remove the edge effect you detected, you will have to resort to an interpolant able to extrapolate as well, or find another way to integrate. Interpolant change Using RBF interpolant seems to solve your problem. 
Here is the complete code: df = pd.read_excel('./Trial-Wireup 2.xlsx') x = df['X'].to_numpy() y = df['Y'].to_numpy() z = df['Delay'].to_numpy() r = np.stack([x, y]).T #S = interpolate.CloughTocher2DInterpolator(r, z) #S = interpolate.LinearNDInterpolator(r, z) S = interpolate.Rbf(x, y, z, epsilon=0.1, function='thin_plate') N = 200 xl = np.linspace(x.min(), x.max(), N) yl = np.linspace(y.min(), y.max(), N) X, Y = np.meshgrid(xl, yl) #Zp = S(np.stack([X.ravel(), Y.ravel()]).T) Zp = S(X.ravel(), Y.ravel()) Z = Zp.reshape(X.shape) dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1) SdZx = np.nancumsum(dZdx, axis=1)*np.diff(xl)[0] SdZy = np.nancumsum(dZdy, axis=0)*np.diff(yl)[0] Zhat = np.zeros(SdZx.shape) for i in range(Zhat.shape[0]): for j in range(Zhat.shape[1]): #Zhat[i,j] += np.nansum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]]) Zhat[i,j] += np.nansum([SdZx[0,N//2], SdZy[i,N//2], SdZx[i,j], -SdZx[i,N//2]]) Zhat += Z[100,100] - Zhat[100,100] lz = np.linspace(0, 5000, 20) axe = plt.contour(X, Y, Z, lz, cmap='jet') axe = plt.contour(X, Y, Zhat, lz, cmap='jet', linestyles=':') axe.axes.plot(x, y, '.', markersize=1) axe.axes.set_aspect('equal') axe.axes.grid() Which graphically renders as follow: The edge effect is gone because of the RBF interpolant can extrapolate over the whole grid. You can confirm it by comparing the result of mesh-based interpolants. Linear Clough Tocher Integration variable order change We can also try to find a better way to integrate and mitigate the edge effect, eg. let's change the integration variable order: Zhat[i,j] += np.nansum([SdZy[N//2,0], SdZx[N//2,j], SdZy[i,j], -SdZy[N//2,j]]) With a classic linear interpolant. The result is quite correct, but we still have an edge effect on the bottom left corner: As you noticed the problem occurs at the middle of the axis in region where the integration starts and lacks a reference point. | 9 | 7 |
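A compact way to sanity-check the reconstruction idea in the answer above (cumulative integration of the two gradient components anchored by a single reference value) is to round-trip a known analytic field. This is a minimal sketch; the field f and the grid are illustrative assumptions, not the asker's data.

```python
import numpy as np

# Known analytic field on a regular grid (illustrative choice).
x = np.linspace(-3, 3, 201)
y = np.linspace(-2, 2, 161)
X, Y = np.meshgrid(x, y)
Z = X**2 + X*Y + 2*Y + 1

# Forward step: numerical gradient, same axis convention as np.gradient.
dZdy, dZdx = np.gradient(Z, y, x, edge_order=2)

# Inverse step: cumulatively integrate each component, then anchor the
# result with one known reference value (here Z[0, 0]).
Sx = np.cumsum(dZdx, axis=1) * (x[1] - x[0])
Sy = np.cumsum(dZdy, axis=0) * (y[1] - y[0])
Zhat = Sy[:, [0]] + Sx - Sx[:, [0]]
Zhat += Z[0, 0] - Zhat[0, 0]

# Residual comes from the one-sided cumulative sum and shrinks with finer spacing.
print(np.abs(Zhat - Z).max())
```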
63,201,036 | 2020-8-1 | https://stackoverflow.com/questions/63201036/add-additional-layers-to-the-huggingface-transformers | I want to add additional Dense layer after pretrained TFDistilBertModel, TFXLNetModel and TFRobertaModel Huggingface models. I have already seen how I can do this with the TFBertModel, e.g. in this notebook: output = bert_model([input_ids,attention_masks]) output = output[1] output = tf.keras.layers.Dense(32,activation='relu')(output) So, here I need to use the second item(i.e. item with index 1) of the BERT output tuple. According to the docs TFBertModel has pooler_output at this tuple index. But the other three models don't have pooler_output. So, how can I add additional layers to the other three model outputs? | It looks like pooler_output is a Roberta and Bert specific output. But instead of using pooler_output we can use a few hidden_states(so, not only last hidden state) with all models, we want to use them because papers report that hidden_states can give more accuracy than just one last_hidden_state. # Import the needed model(Bert, Roberta or DistilBert) with output_hidden_states=True transformer_model = TFBertForSequenceClassification.from_pretrained('bert-large-cased', output_hidden_states=True) input_ids = tf.keras.Input(shape=(128, ),dtype='int32') attention_mask = tf.keras.Input(shape=(128, ), dtype='int32') transformer = transformer_model([input_ids, attention_mask]) hidden_states = transformer[1] # get output_hidden_states hidden_states_size = 4 # count of the last states hiddes_states_ind = list(range(-hidden_states_size, 0, 1)) selected_hiddes_states = tf.keras.layers.concatenate(tuple([hidden_states[i] for i in hiddes_states_ind])) # Now we can use selected_hiddes_states as we want output = tf.keras.layers.Dense(128, activation='relu')(selected_hiddes_states) output = tf.keras.layers.Dense(1, activation='sigmoid')(output) model = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = output) model.compile(tf.keras.optimizers.Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy']) | 7 | 7 |
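For the TF models named in the question that lack pooler_output (e.g. TFDistilBertModel), another common option besides concatenating several hidden states is to pool the last hidden state yourself, for example by taking the first ([CLS]) token. The sketch below is a hedged illustration assuming a recent transformers version where the TF model returns the last hidden state as its first output; the exact output layout differs between library versions.

```python
import tensorflow as tf
from transformers import TFDistilBertModel

# Assumptions: distilbert-base-uncased and a fixed 128-token input length.
transformer = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

input_ids = tf.keras.Input(shape=(128,), dtype="int32", name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype="int32", name="attention_mask")

sequence_output = transformer([input_ids, attention_mask])[0]  # (batch, seq_len, hidden)
cls_token = sequence_output[:, 0, :]                           # simple pooling: first token

x = tf.keras.layers.Dense(32, activation="relu")(cls_token)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output)
```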
63,200,489 | 2020-8-1 | https://stackoverflow.com/questions/63200489/pygame-basic-calculator | I am trying to make a basic calculator but the problem I am having is how do I output the text? How do I make it so when I click plus it allows me to add or if I click divide it allows me to divide and shows the output on the yellow part on my screen This is what I have right now. You could run it; there is nothing special. My question is how could I make the calculator make it so when I click plus and then go and add up numbers it allows me to add or when I click divide it allows me to divide my numbers and show the outputs on the screen? import pygame,math pygame.init() window_height = 500 window_width = 500 window = pygame.display.set_mode((window_height,window_width)) # the buttons for the shop MENU class button(): def __init__(self, color, x,y,width,height, text=''): self.color = color self.x = x self.y = y self.width = width self.height = height self.text = text self.over = False def draw(self,window,outline=None): #Call this method to draw the button on the screen if outline: pygame.draw.rect(window, outline, (self.x-2,self.y-2,self.width+4,self.height+4),0) pygame.draw.rect(window, self.color, (self.x,self.y,self.width,self.height),0) if self.text != '': font = pygame.font.SysFont('comicsans', 60) text = font.render(self.text, 1, (0,0,0)) window.blit(text, (self.x + (self.width/2 - text.get_width()/2), self.y + (self.height/2 - text.get_height()/2))) def isOver(self, pos): #Pos is the mouse position or a tuple of (x,y) coordinates if pos[0] > self.x and pos[0] < self.x + self.width: if pos[1] > self.y and pos[1] < self.y + self.height: return True return False def playSoundIfMouseIsOver(self, pos, sound): if self.isOver(pos): if not self.over: beepsound.play() self.over = True else: self.over = False white = (255,255,255) # the numbers for the calcaltor s_1s = button((0,255,0),40,450,30,30, '1') s_2s = button((0,255,0),40,400,30,30, '2') s_3s = button((0,255,0),40,350,30,30, '3') s_4s = button((0,255,0),100,450,30,30, '4') s_5s = button((0,255,0),100,400,30,30, '5') s_6s = button((0,255,0),100,350,30,30, '6') s_7s = button((0,255,0),150,450,30,30, '7') s_8s = button((0,255,0),150,400,30,30, '8') s_9s = button((0,255,0),150,350,30,30, '9') s_0s = button((0,255,0),200,450,30,30, '0') numbers = [s_1s,s_2s,s_3s,s_4s,s_5s,s_6s,s_7s,s_8s,s_9s,s_0s] # the symbols! 
d_1s = button((0,255,0),260,450,30,30, '+') d_2s = button((0,255,0),260,400,30,30, '-') d_3s = button((0,255,0),260,350,30,30, 'x') d_4s = button((0,255,0),200,400,30,30, '÷') symbols = [d_1s,d_2s,d_3s,d_4s] # input tap inputtap = button((253,100,32),10,280,450,50,"") # redraw window def redraw(): # draw all the numbers for button in numbers: button.draw(window) # the symbols for button in symbols: button.draw(window) inputtap.draw(window) def Symbols(): if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() if d_1s.isOver(pos): print("+") if d_2s.isOver(pos): print("-") if d_3s.isOver(pos): print("x") if d_4s.isOver(pos): print("÷") def MOUSEOVERnumbers(): if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() if s_1s.isOver(pos): print("1") if s_2s.isOver(pos): print("2") if s_3s.isOver(pos): print("3") if s_4s.isOver(pos): print("4") if s_5s.isOver(pos): print("5") if s_6s.isOver(pos): print("6") if s_7s.isOver(pos): print("7") if s_8s.isOver(pos): print("8") if s_9s.isOver(pos): print("9") if s_0s.isOver(pos): print("0") # the main loop run = True while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False MOUSEOVERnumbers() Symbols() redraw() pygame.display.update() pygmae.quit() | You could add a = button, so every time user clicks it, calculate the user input with python eval() function. As for the user input, you first need to record it globally . Then you can pass user input to the string field of inputtap = button((253,100,32),10,280,450,50,"") to show it on the window. import pygame, math pygame.init() window_height = 500 window_width = 600 window = pygame.display.set_mode((window_height,window_width)) # the buttons for the shop MENU class button(): def __init__(self, color, x,y,width,height, text=''): self.color = color self.x = x self.y = y self.width = width self.height = height self.text = text self.over = False def draw(self,window,outline=None): #Call this method to draw the button on the screen if outline: pygame.draw.rect(window, outline, (self.x-2,self.y-2,self.width+4,self.height+4),0) pygame.draw.rect(window, self.color, (self.x,self.y,self.width,self.height),0) if self.text != '': font = pygame.font.SysFont('comicsans', 60) text = font.render(self.text, 1, (0,0,0)) window.blit(text, (self.x + (self.width/2 - text.get_width()/2), self.y + (self.height/2 - text.get_height()/2))) def isOver(self, pos): #Pos is the mouse position or a tuple of (x,y) coordinates if pos[0] > self.x and pos[0] < self.x + self.width: if pos[1] > self.y and pos[1] < self.y + self.height: return True return False def playSoundIfMouseIsOver(self, pos, sound): if self.isOver(pos): if not self.over: beepsound.play() self.over = True else: self.over = False white = (255,255,255) # the numbers for the calcaltor s_1s = button((0,255,0),40,450,30,30, '1') s_2s = button((0,255,0),40,400,30,30, '2') s_3s = button((0,255,0),40,350,30,30, '3') s_4s = button((0,255,0),100,450,30,30, '4') s_5s = button((0,255,0),100,400,30,30, '5') s_6s = button((0,255,0),100,350,30,30, '6') s_7s = button((0,255,0),150,450,30,30, '7') s_8s = button((0,255,0),150,400,30,30, '8') s_9s = button((0,255,0),150,350,30,30, '9') s_0s = button((0,255,0),200,450,30,30, '0') numbers = [s_1s,s_2s,s_3s,s_4s,s_5s,s_6s,s_7s,s_8s,s_9s,s_0s] # the symbols! 
d_1s = button((0,255,0),260,450,30,30, '+') d_2s = button((0,255,0),260,400,30,30, '-') d_3s = button((0,255,0),260,350,30,30, 'x') d_4s = button((0,255,0),200,400,30,30, '÷') d_5s = button((0,255,0),200,350,30,30, '=') d_6s = button((0,255,0),260,500,30,30, 'C') symbols = [d_1s,d_2s,d_3s,d_4s,d_5s,d_6s] # redraw window def redraw(inputtap): # draw all the numbers for button in numbers: button.draw(window) # the symbols for button in symbols: button.draw(window) inputtap.draw(window) def Symbols(): global user_input global python_input global is_finished if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() try: if is_finished or user_input[-1] in ["+", "-", "x", "÷", "="]: # User shouldn't type two symbols continuously # User shouldn't input any symbols when game finished because there is no number return except IndexError: # User shouldn't input any symbols if there is no number return if d_1s.isOver(pos): print("+") user_input += "+" python_input += "+" if d_2s.isOver(pos): print("-") user_input += "-" python_input += "-" if d_3s.isOver(pos): print("x") user_input += "x" python_input += "*" if d_4s.isOver(pos): print("÷") user_input += "÷" python_input += "/" if d_5s.isOver(pos): print("=") result = eval(python_input) python_input = "" user_input += f"={result:.2f}" is_finished = True if d_6s.isOver(pos): print("C") python_input = "" user_input = "" def MOUSEOVERnumbers(): global user_input global python_input global is_finished if event.type == pygame.MOUSEBUTTONDOWN: if is_finished: user_input = "" python_input = "" is_finished = False pos = pygame.mouse.get_pos() if s_1s.isOver(pos): print("1") user_input += "1" python_input += "1" if s_2s.isOver(pos): print("2") user_input += "2" python_input += "2" if s_3s.isOver(pos): print("3") user_input += "3" python_input += "3" if s_4s.isOver(pos): print("4") user_input += "4" python_input += "4" if s_5s.isOver(pos): print("5") user_input += "5" python_input += "5" if s_6s.isOver(pos): print("6") user_input += "6" python_input += "6" if s_7s.isOver(pos): print("7") user_input += "7" python_input += "7" if s_8s.isOver(pos): print("8") user_input += "8" python_input += "8" if s_9s.isOver(pos): print("9") user_input += "9" python_input += "9" if s_0s.isOver(pos): print("0") user_input += "0" python_input += "0" # the main loop run = True user_input = "" python_input = "" is_finished = True while run: # input tap inputtap = button((253,100,32),10,280,450,50,f"{user_input}") for event in pygame.event.get(): if event.type == pygame.QUIT: run = False MOUSEOVERnumbers() Symbols() redraw(inputtap) pygame.display.update() pygame.quit() You then can add a reset button to reset the user input. Also after user clicks = button, start a new user input rather then concatting on the old one. The reset button is labeled with C in this example. Every time user clicks it, empty the user input string and the python input string. I also use a global is_finished boolean variable to check if user clicks = button. If user clicks it, it means user has finished the calculation, so that next time user clicks any symbols button, the user input string is cleared. In the meanwhile, user shouldn't input two sysmbols except C button at the same time. I judge it by comparing the last character user inputs and the current character user inputs. Also, user shouldn't input any symbol before inputting any number. I judge it with global variable is_finished. 
If is_finished is true, it means the user hasn't started inputting yet, so there is no value in the user input string. I also use an IndexError exception just in case, because an empty user input string can't be indexed with a negative index. To distinguish between an integer and a float result, you can check whether there is a dot in the result: >>> '.' in '45.3' True >>> '.' in '453' False Lastly, you can also simplify those if chains with the button.text property, like Rabbid76 does: for number_button in numbers: if number_button.isOver(pos): print(number_button.text) user_input += number_button.text python_input += number_button.text | 6 | 6 |
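One small refinement to the answer above: the result formatting (integer vs. float, mentioned at the end) and the eval call can be wrapped in a helper so a division error doesn't crash the event loop. This is an illustrative sketch; the function name and rounding choices are assumptions, not part of the original answer.

```python
def evaluate_expression(python_input):
    """Evaluate the accumulated calculator string and format the result."""
    try:
        # The string only ever contains digits and + - * / built by the buttons.
        result = eval(python_input)
    except ZeroDivisionError:
        return "Err"
    except SyntaxError:
        return ""
    # Show whole numbers without a decimal point, everything else with 2 decimals.
    if isinstance(result, int) or float(result).is_integer():
        return str(int(result))
    return f"{result:.2f}"

# Inside Symbols(), the '=' branch could then become:
# user_input += "=" + evaluate_expression(python_input)
```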
63,248,354 | 2020-8-4 | https://stackoverflow.com/questions/63248354/how-to-check-does-a-file-imports-from-another-file-in-python | Suppose i have a project structure like this src └── app ├── main.py ├── db │ └── database.py ├── models │ ├── model_a.py │ └── model_b.py └── tests ├── test_x.py └── test_y.py I want to check which file uses a class or a function from another file. I have a class called Test in main.py class Test: pass I used that class in model_a, from ..main import Test But in model_b i used from ..main import Test from ..db.database import Data I want to to check which file uses another file, just like tree command, just a folder name is enough so i tried an old method but it was inefficient ,dirty and that was not something that i expect. The method was i created a file in src named check.py, i imported all packages from app.db import database from app.models import model_a, model_b from app.tests import test_x, test_y from app import main print('__file__={0:<35} | __name__={1:<20} | __package__={2:<20}'.format(__file__,__name__,str(__package__))) And i added this line in the bottom of all files print('__file__={0:<35} | __name__={1:<20} | __package__={2:<20}'.format(__file__,__name__,str(__package__))) So when i run check.py i get this result __file__=/home/yagiz/Desktop/struct/src/app/main.py | __name__=app.main | __package__=app __file__=/home/yagiz/Desktop/struct/src/app/db/database.py | __name__=app.db.database | __package__=app.db __file__=/home/yagiz/Desktop/struct/src/app/models/model_a.py | __name__=app.models.model_a | __package__=app.models __file__=/home/yagiz/Desktop/struct/src/app/models/model_b.py | __name__=app.models.model_b | __package__=app.models __file__=/home/yagiz/Desktop/struct/src/app/tests/test_x.py | __name__=app.tests.test_x | __package__=app.tests __file__=/home/yagiz/Desktop/struct/src/app/tests/test_y.py | __name__=app.tests.test_y | __package__=app.tests __file__=/home/yagiz/Desktop/struct/src/check.py | __name__=__main__ | __package__=None The result is dirty and doesn't meet my expectations is there a way to get a output like this? main.py = app/models/model_a, app/models/model_b # These files imports something from main.py models_b = None # No file imports from models_b Update, i tried @Hessam Korki's suggestion it doesn't works. I looked up the source code of modulefinder and i found it adds a badmodule in every import statement which is not useful for me. Here is how did it go, first i created a function, also i created an another project structure. 
src ├── empty.py ├── __init__.py ├── main.py ├── module_finder.py ├── other │ └── other.py ├── test │ └── some_app.py └── this_imports.py Here is the module_finder.py that contains my function from modulefinder import ModuleFinder file_names = ["this_imports.py", "main.py", "test/some_app.py", "other/other.py", "empty.py"] def check_imports(file_names): finder = ModuleFinder() for file in file_names: finder.run_script(file) print("\n", file) for name, mod in finder.modules.items(): print('%s: ' % name, end='') print(','.join(list(mod.globalnames.keys())[:3])) print('\n'.join(finder.badmodules.keys())) Empty file is empty(as expected), in main.py i have class Test: pass In this_imports.py i only have from src.main import Test In other/other.py i have from src.main import Test from src.test import DifferentTest And for the last one in test/some_app.py i have from src.main import Test class DifferentTest: pass So the result should be: empty.py = None main.py = None other/other.py = src.main , src.test test/some_app.py = src.main this_imports.py = src.main But the function gives a wrong result, here is the output: Filename: this_imports.py __main__: Test src.main Filename: main.py __main__: Test,__module__,__qualname__ src.main Filename: test/some_app.py __main__: Test,__module__,__qualname__ src.main Filename: other/other.py __main__: Test,__module__,__qualname__ src.main src.test Filename: empty.py __main__: Test,__module__,__qualname__ src.main src.test | What you are looking for is to find import dependencies in your package modules. You can run a static analysis on your package directory and parse the import nodes in the syntax trees (ast), and build a dependency graph. Something like below: import os from ast import NodeVisitor, parse import networkx as nx class Dependency(): def __init__(self, root): self.root = root self.base = os.path.basename(root) self.dependency = nx.DiGraph() self.visitor = NodeVisitor() self.visitor.visit_ImportFrom = self.visit_ImportFrom self.current_node = None self.dependency.add_node = self.base def visit_ImportFrom(self, node): self.dependency.add_edge(node.module, self.current_node) self.visitor.generic_visit(node) def run(self): for root, dirs, files in os.walk(self.root): for file in files: full_path = os.path.join(root+os.sep, file) loc = full_path.split(self.root+os.sep)[1].replace(os.sep,'.') self.current_node = self.base+'.'+loc with open(full_path) as fp: src = fp.read() tree = parse(src) self.visitor.generic_visit(tree) dependency = {} for src, target in nx.dfs_edges(self.dependency): if src in dependency: dependency[src].add(target) else: dependency[src] = set([target]) return dependency For the root location of any package you want to map the import dependencies, you need to do the following then: root = "path/to/your/src" d = Dependency(root) d.run() This will return the dependency tree (as a dict). Note, we parsed only ImportFrom, you need to add Import to make it complete. Also, all imports are assumed absolute here (i.e. no .. etc). If required, you can add that too (check the level field of the ImportFrom node to do that). | 9 | 3 |
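The answer above notes that only ImportFrom nodes are parsed and that plain import statements still need handling. A minimal sketch of that missing piece, written to follow the same conventions as the Dependency class (this addition is assumed, not part of the original answer):

```python
# Inside Dependency.__init__, also register the handler:
#     self.visitor.visit_Import = self.visit_Import

def visit_Import(self, node):
    """Record edges for statements like `import os` or `import a.b as c`."""
    for alias in node.names:
        # alias.name is the dotted module path being imported
        self.dependency.add_edge(alias.name, self.current_node)
    self.visitor.generic_visit(node)
```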
63,232,732 | 2020-8-3 | https://stackoverflow.com/questions/63232732/how-to-use-the-past-with-huggingface-transformers-gpt-2 | I have: context = torch.tensor(context, dtype=torch.long, device=self.device) context = context.unsqueeze(0) generated = context with torch.no_grad(): past_outputs = None for i in trange(num_words): print(i, num_words) inputs = {"input_ids": generated} outputs, past_outputs = self.model( **inputs, past=past_outputs ) next_token_logits = outputs[ 0, -1, :] / (temperature if temperature > 0 else 1.0) # reptition penalty from CTRL # (https://arxiv.org/abs/1909.05858) for _ in set(generated.view(-1).tolist()): next_token_logits[_] /= repetition_penalty filtered_logits = top_k_top_p_filtering( next_token_logits, top_k=top_k, top_p=top_p) if temperature == 0: # greedy sampling: next_token = torch.argmax(filtered_logits).unsqueeze(0) else: next_token = torch.multinomial( F.softmax(filtered_logits, dim=-1), num_samples=1) generated = torch.cat( (generated, next_token.unsqueeze(0)), dim=1) This works for the first iteration, but then I get an error for the next iteration: File "/Users/shamoon/Sites/wordblot/packages/ml-server/generator.py", line 143, in sample_sequence past=past_outputs File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 601, in forward output_hidden_states=output_hidden_states, File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 470, in forward position_embeds = self.wpe(position_ids) File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self Is there something I'm doing wrong? | I did: outputs, past_outputs = self.models[model_name]( context, past=past_outputs ) context = next_token.unsqueeze(0) | 7 | 0 |
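The accepted answer is terse, but the key point is that once a cache is passed back in, only the newly generated token should be fed to the model rather than the whole sequence. Below is a hedged, standalone sketch of that pattern with a plain GPT-2 model; note that newer transformers versions call the argument past_key_values (older ones used past), and the output layout differs between versions, which the hasattr checks try to paper over.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("The meaning of life is", return_tensors="pt")
generated = input_ids
past = None

with torch.no_grad():
    for _ in range(20):
        outputs = model(input_ids, past_key_values=past, use_cache=True)
        logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
        past = outputs.past_key_values if hasattr(outputs, "past_key_values") else outputs[1]
        next_token = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=1)
        input_ids = next_token  # feed only the new token on the next step

print(tokenizer.decode(generated[0]))
```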
63,195,577 | 2020-7-31 | https://stackoverflow.com/questions/63195577/how-to-locate-qr-code-in-large-image-to-improve-decoding-performance | Background I need to detect and decode a relatively small QR code (110x110 pixels) in a large image (2500x2000) on a Raspberry Pi. The QR code can be at any location in the frame, but the orientation is expected to be normal, i.e. top-up. We are using high quality industrial cameras and lenses, so images are generally good quality and in focus. Currently, I am able to detect and decode the image reliably with pyzbar when I crop the image around the QR code using a window of aprox 600x500. If I attempt to decode the full image, the symbol is not detected/decoded. What I Have Tried I have written a loop that slides a crop window over the image, and attempts to decode each cropped frame separately. I move the window by 50% each iteration to ensure I don't miss any symbols at the edge of the window. I have also tried using OpenCV for detection/decoding but the performance was no better than with pyzbar Problems With My Solution Problems which affect my current project: The sliding window approach is difficult to tune, inefficient and slow b/c: it causes the entire area to be analyzed nearly 4 times; a side effect of shifting the window by 50%, the most reliable window sizes tend to be small and require many iterations, the symbol size may vary due to being closer/further from the camera. Problems that may affect other projects where I would use this approach: The sliding window may catch a symbol more than once, making it difficult to determine if the symbol was present more than once. The Question How can I find the approximate location of the QR code(s) so I can crop the image accordingly? I am interested in any solutions to improve the detection/decoding performance, but prefer ones that (a) use machine learning techniques (I'm a ML newbie but willing to learn), (b) use OpenCV image pre-processing or (c) make improvements to my basic cropping algorithm. Sample Image Here is one of the sample images that I'm using for testing. It's purposely poor lighting quality to approximate the worst case scenario, however the individual codes still detect and decode correctly when cropped. | I think I have found a simple yet reliable way in which the corners of the QR code can be detected. However, my approach assumes there is some contrast (the more the better) between the QR and its surrounding area. Also, we have to keep in mind that neither pyzbar nor opencv.QRCodeDetector are 100% reliable. So, here is my approach: Resize image. After some experimentation I have come to the conclusion that pyzbar is not completely scale invariant. Although I don't have references that can back this claim, I still use small to medium images for barcode detection as a rule of thumb. You can skip this step as it might seem completely arbitrary. image = cv2.imread("image.jpg") scale = 0.3 width = int(image.shape[1] * scale) height = int(image.shape[0] * scale) image = cv2.resize(image, (width, height)) Thresholding. We can take advantage on the fact that barcodes are generally black on white surfaces. The more contrast the better. gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) 3. Dilation + contours. This step is a little bit trickier and I do apologize if my english is not completely clear here. We can see from the previous image that there are black spaces in between the white inside the QR code. 
If we were to just find the contours, then opencv will assume these spaces are separate entities and not part of a whole. If we want to transform the QR code and make it seem as just a white square, we have to do a bit of morphological operations. Namely, we have to dilate the image. # The bigger the kernel, the more the white region increases. # If the resizing step was ignored, then the kernel will have to be bigger # than the one given here. kernel = np.ones((3, 3), np.uint8) thresh = cv2.dilate(thresh, kernel, iterations=1) contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) 4. Filtering and getting bounding boxes. Most of the found contours are too small to contain a barcode, so we have to filter them in order to make our search space smaller. After filtering out the weak candidates, we can fetch the bounding boxes of the strong ones. EDIT: In this case we are filtering by area (small area = weak candidate), but we can also filter by the extent of the detection. Basically what the extent measures is the rectangularity of an object, and we can use that information since we know a QR code is a square. I chose the extent to be greater than pi / 4, since that is the extent of a perfect circle, meaning we are also filtering out circular objects. bboxes = [] for cnt in contours: area = cv2.contourArea(cnt) xmin, ymin, width, height = cv2.boundingRect(cnt) extent = area / (width * height) # filter non-rectangular objects and small objects if (extent > np.pi / 4) and (area > 100): bboxes.append((xmin, ymin, xmin + width, ymin + height)) 5. Detect barcodes. We have reduced our search space to just the actual QR codes! Now we can finally use pyzbar without worrying too much about it taking too long to do barcode detection. qrs = [] info = set() for xmin, ymin, xmax, ymax in bboxes: roi = image[ymin:ymax, xmin:xmax] detections = pyzbar.decode(roi, symbols=[pyzbar.ZBarSymbol.QRCODE]) for barcode in detections: info.add(barcode.data) # bounding box coordinates x, y, w, h = barcode.rect qrs.append((xmin + x, ymin + y, xmin + x + w, ymin + y + height)) Unfortunately, pyzbar was only able to decode the information of the largest QR code (b'3280406-001'), even though both barcodes were in the search space. With regard to knowing how many times was a particular code detected, you can use a Counter object from the collections standard module. If you don't mind having that information, then you can just use a set as I did here. Hope this could be of help :). | 17 | 22 |
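The answer mentions collections.Counter for counting how many times a particular code was detected but does not show it. A small sketch of that final bookkeeping step, assuming the bounding boxes, image array, and pyzbar import from the answer above:

```python
from collections import Counter

decoded_payloads = []
for xmin, ymin, xmax, ymax in bboxes:
    roi = image[ymin:ymax, xmin:xmax]
    for barcode in pyzbar.decode(roi, symbols=[pyzbar.ZBarSymbol.QRCODE]):
        decoded_payloads.append(barcode.data)

# Counter tells us whether the same symbol was picked up more than once.
counts = Counter(decoded_payloads)
for payload, n in counts.items():
    print(payload, "seen", n, "time(s)")
```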
63,160,370 | 2020-7-29 | https://stackoverflow.com/questions/63160370/how-can-i-accept-and-run-users-code-securely-on-my-web-app | I am working on a django based web app that takes python file as input which contains some function, then in backend i have some lists that are passed as parameters through the user's function,which will generate a single value output.The result generated will be used for some further computation. Here is how the function inside the user's file look like : def somefunctionname(list): ''' some computation performed on list''' return float value At present the approach that i am using is taking user's file as normal file input. Then in my views.py i am executing the file as module and passing the parameters with eval function. Snippet is given below. Here modulename is the python file name that i had taken from user and importing as module exec("import "+modulename) result = eval(f"{modulename}.{somefunctionname}(arguments)") Which is working absolutely fine. But i know this is not the secured approach. My question , Is there any other way through which i can run users file securely as the method that i am using is not secure ? I know the proposed solutions can't be full proof but what are the other ways in which i can run this (like if it can be solved with dockerization then what will be the approach or some external tools that i can use with API )? Or if possible can somebody tell me how can i simply sandbox this or any tutorial that can help me..? Any reference or resource will be helpful. | It is an important question. In python sandboxing is not trivial. It is one of the few cases where the question which version of python interpreter you are using. For example, Jyton generates Java bytecode, and JVM has its own mechanism to run code securely. For CPython, the default interpreter, originally there were some attempts to make a restricted execution mode, that were abandoned long time ago. Currently, there is that unofficial project, RestrictedPython that might give you what you need. It is not a full sandbox, i.e. will not give you restricted filesystem access or something, but for you needs it may be just enough. Basically the guys there just rewrote the python compilation in a more restricted way. What it allows to do is to compile a piece of code and then execute, all in a restricted mode. For example: from RestrictedPython import safe_builtins, compile_restricted source_code = """ print('Hello world, but secure') """ byte_code = compile_restricted( source_code, filename='<string>', mode='exec' ) exec(byte_code, {__builtins__ = safe_builtins}) >>> Hello world, but secure Running with builtins = safe_builtins disables the dangerous functions like open file, import or whatever. There are also other variations of builtins and other options, take some time to read the docs, they are pretty good. EDIT: Here is an example for you use case from RestrictedPython import safe_builtins, compile_restricted from RestrictedPython.Eval import default_guarded_getitem def execute_user_code(user_code, user_func, *args, **kwargs): """ Executed user code in restricted env Args: user_code(str) - String containing the unsafe code user_func(str) - Function inside user_code to execute and return value *args, **kwargs - arguments passed to the user function Return: Return value of the user_func """ def _apply(f, *a, **kw): return f(*a, **kw) try: # This is the variables we allow user code to see. @result will contain return value. 
restricted_locals = { "result": None, "args": args, "kwargs": kwargs, } # If you want the user to be able to use some of your functions inside his code, # you should add this function to this dictionary. # By default many standard actions are disabled. Here I add _apply_ to be able to access # args and kwargs and _getitem_ to be able to use arrays. Just think before you add # something else. I am not saying you shouldn't do it. You should understand what you # are doing thats all. restricted_globals = { "__builtins__": safe_builtins, "_getitem_": default_guarded_getitem, "_apply_": _apply, } # Add another line to user code that executes @user_func user_code += "\nresult = {0}(*args, **kwargs)".format(user_func) # Compile the user code byte_code = compile_restricted(user_code, filename="<user_code>", mode="exec") # Run it exec(byte_code, restricted_globals, restricted_locals) # User code has modified result inside restricted_locals. Return it. return restricted_locals["result"] except SyntaxError as e: # Do whaever you want if the user has code that does not compile raise except Exception as e: # The code did something that is not allowed. Add some nasty punishment to the user here. raise Now you have a function execute_user_code, that receives some unsafe code as a string, a name of a function from this code, arguments, and returns the return value of the function with the given arguments. Here is a very stupid example of some user code: example = """ def test(x, name="Johny"): return name + " likes " + str(x*x) """ # Lets see how this works print(execute_user_code(example, "test", 5)) # Result: Johny likes 25 But here is what happens when the user code tries to do something unsafe: malicious_example = """ import sys print("Now I have the access to your system, muhahahaha") """ # Lets see how this works print(execute_user_code(malicious_example, "test", 5)) # Result - evil plan failed: # Traceback (most recent call last): # File "restr.py", line 69, in <module> # print(execute_user_code(malitious_example, "test", 5)) # File "restr.py", line 45, in execute_user_code # exec(byte_code, restricted_globals, restricted_locals) # File "<user_code>", line 2, in <module> #ImportError: __import__ not found Possible extension: Pay attention that the user code is compiled on each call to the function. However, it is possible that you would like to compile the user code once, then execute it with different parameters. So all you have to do is to save the byte_code somewhere, then to call exec with a different set of restricted_locals each time. EDIT2: If you want to use import, you can write your own import function that allows to use only modules that you consider safe. Example: def _import(name, globals=None, locals=None, fromlist=(), level=0): safe_modules = ["math"] if name in safe_modules: globals[name] = __import__(name, globals, locals, fromlist, level) else: raise Exception("Don't you even think about it {0}".format(name)) safe_builtins['__import__'] = _import # Must be a part of builtins restricted_globals = { "__builtins__": safe_builtins, "_getitem_": default_guarded_getitem, "_apply_": _apply, } .... i_example = """ import math def myceil(x): return math.ceil(x) """ print(execute_user_code(i_example, "myceil", 1.5)) Note that this sample import function is VERY primitive, it will not work with stuff like from x import y. You can look here for a more complex implementation. 
EDIT3 Note, that lots of python built in functionality is not available out of the box in RestrictedPython, it does not mean it is not available at all. You may need to implement some function for it to become available. Even some obvious things like sum or += operator are not obvious in the restricted environment. For example, the for loop uses _getiter_ function that you must implement and provide yourself (in globals). Since you want to avoid infinite loops, you may want to put some limits on the number of iterations allowed. Here is a sample implementation that limits number of iterations to 100: MAX_ITER_LEN = 100 class MaxCountIter: def __init__(self, dataset, max_count): self.i = iter(dataset) self.left = max_count def __iter__(self): return self def __next__(self): if self.left > 0: self.left -= 1 return next(self.i) else: raise StopIteration() def _getiter(ob): return MaxCountIter(ob, MAX_ITER_LEN) .... restricted_globals = { "_getiter_": _getiter, .... for_ex = """ def sum(x): y = 0 for i in range(x): y = y + i return y """ print(execute_user_code(for_ex, "sum", 6)) If you don't want to limit loop count, just use identity function as _getiter_: restricted_globals = { "_getiter_": labmda x: x, Note that simply limiting the loop count does not guarantee security. First, loops can be nested. Second, you cannot limit the execution count of a while loop. To make it secure, you have to execute unsafe code under some timeout. Please take a moment to read the docs. Note that not everything is documented (although many things are). You have to learn to read the project's source code for more advanced things. Best way to learn is to try and run some code, and to see what kind function is missing, then to see the source code of the project to understand how to implement it. EDIT4 There is still another problem - restricted code may have infinite loops. To avoid it, some kind of timeout is required on the code. Unfortunately, since you are using django, that is multi threaded unless you explicitly specify otherwise, simple trick for timeouts using signeals will not work here, you have to use multiprocessing. Easiest way in my opinion - use this library. Simply add a decorator to execute_user_code so it will look like this: @timeout_decorator.timeout(5, use_signals=False) def execute_user_code(user_code, user_func, *args, **kwargs): And you are done. The code will never run more than 5 seconds. Pay attention to use_signals=False, without this it may have some unexpected behavior in django. Also note that this is relatively heavy on resources (and I don't really see a way to overcome this). I mean not really crazy heavy, but it is an extra process spawn. You should hold that in mind in your web server configuration - the api which allows to execute arbitrary user code is more vulnerable to ddos. | 14 | 11 |
63,174,054 | 2020-7-30 | https://stackoverflow.com/questions/63174054/what-is-a-good-design-pattern-to-combine-datasets-that-are-related-but-stored-in | Suppose we want to construct a stock portfolio. To decide which stocks to include in the portfolio and what weight to assign to these stocks, we use different metrics such as e.g., price, earnings-per-share (eps), dividend yield, etc... All these metrics are stored in individual pandas dataframes where rows specify a certain point in time and columns are associated with a specific stock (e.g., IBM, MSFT, ...): import pandas as pd price = pd.DataFrame([[-1.332298, 0.396217, 0.574269, -0.679972, -0.470584, 0.234379], [-0.222567, 0.281202, -0.505856, -1.392477, 0.941539, 0.974867], [-1.139867, -0.458111, -0.999498, 1.920840, 0.478174, -0.315904], [-0.189720, -0.542432, -0.471642, 1.506206, -1.506439, 0.301714]], columns=['IBM', 'MSFT', 'APPL', 'ORCL','FB','TWTR'], index=pd.date_range('2000', freq='D', periods=4)) eps = pd.DataFrame([[-1.91, 1.63, 0.51, -.32, -0.84, 0.37], [-0.56, 0.02, 0.56, 1.77, 0.99, 0.97], [-1.67, -0.41, -0.98, 1.20, 0.74, -0.04], [-0.80, -0.43, -0.12, 1.06, 1.59, 0.34]], columns=['IBM', 'MSFT', 'APPL', 'ORCL','FB','TWTR'], index=pd.date_range('2000', freq='D', periods=4)) price IBM MSFT APPL ORCL FB TWTR 2000-01-01 -1.332298 0.396217 0.574269 -0.679972 -0.470584 0.234379 2000-01-02 -0.222567 0.281202 -0.505856 -1.392477 0.941539 0.974867 2000-01-03 -1.139867 -0.458111 -0.999498 1.920840 0.478174 -0.315904 2000-01-04 -0.189720 -0.542432 -0.471642 1.506206 -1.506439 0.301714 eps IBM MSFT APPL ORCL FB TWTR 2000-01-01 -1.91 1.63 0.51 -0.32 -0.84 0.37 2000-01-02 -0.56 0.02 0.56 1.77 0.99 0.97 2000-01-03 -1.67 -0.41 -0.98 1.20 0.74 -0.04 2000-01-04 -0.80 -0.43 -0.12 1.06 1.59 0.34 The different dataframes are obviously closely connected. However, they are all stored in separate variables. In a large application, it can become difficult to keep track of which variables belong together and form a coherent unit. What is a good design paradigm to arrange this kind of related datasets? Using an object-oriented design pattern, I would construct something like a StockList() object that stores individual Stock() objects, which in turn store the information (time series) that correspond to a specific stock. class Stock(): def __init__(self, price_series, eps_series, div_yield_series): self.price = price_series self.eps = eps_series self.div_yield = div_yield_series class StockList(): def __init__(self, stock_list): self.stock_list = stock_list def append(self, stock): self.stock_list.append(stock) But is this a viable option when working with dataframes? I think taking the time series apart and merging them back together when queried, leads to a considerable loss in performance and a superfluous set of operations. Alternatively, the StockList() could store the dataframes directly, without constructing single Stock() objects (serving more or less as a data structure). However, is this an appropriate compromise? I generally wonder whether a separate object should be created at all or if these individual dataframes should just be left as separate variables. This most likely would increase performance, reduce memory usage, support parallel computing and foster a functional programming style. But how can we then bundle data that belongs together? | If I understand your questions correctly, you basically have 2 (or multiple) dataframes that are related and you want to join together let's try it out with 2. 
I do realize that this mainly a design pattern question but I'm showing you that you can easily have them as separate dataframes and then efficiently combine them together to end up with one whenever needed. tl;dr We'll do those steps: change how the dataframes are represented to those columns [date, stock, DF_NAME] DF_NAME value would be price/eps/..etc merge them together (outer merge to retain all data) based on [date, stock]. [Optional] pivot back into a multi-index. [Optional] squash the headers to end up with a single index import pandas as pd def reshape_df(df, value_name): """reshapes the dataframe by resetting the index and melting""" df = df.reset_index() df = df.melt(id_vars=['index']) df.columns = ['date', 'stock', value_name] return df price = pd.DataFrame([[-1.332298, 0.396217, 0.574269, -0.679972, -0.470584, 0.234379], [-0.222567, 0.281202, -0.505856, -1.392477, 0.941539, 0.974867], [-1.139867, -0.458111, -0.999498, 1.920840, 0.478174, -0.315904], [-0.189720, -0.542432, -0.471642, 1.506206, -1.506439, 0.301714]], columns=['IBM', 'MSFT', 'APPL', 'ORCL','FB','TWTR'], index=pd.date_range('2000', freq='D', periods=4)) eps = pd.DataFrame([[-1.91, 1.63, 0.51, -.32, -0.84, 0.37], [-0.56, 0.02, 0.56, 1.77, 0.99, 0.97], [-1.67, -0.41, -0.98, 1.20, 0.74, -0.04], [-0.80, -0.43, -0.12, 1.06, 1.59, 0.34]], columns=['IBM', 'MSFT', 'APPL', 'ORCL','FB','TWTR'], index=pd.date_range('2000', freq='D', periods=4)) price_fixed = reshape_df(price, 'price') eps_fixed = reshape_df(eps, 'eps') merged = price_fixed.merge(eps_fixed, on=['date', 'stock'], how='outer') # Optional merged_pivoted = merged.pivot(columns='stock', index='date') merged_pivoted_fixed_header = merged_pivoted.copy() merged_pivoted_fixed_header.columns = ['-'.join(col).strip() for col in merged_pivoted_fixed_header.columns.values] Breakdown We'll first start by changing how the data is represented by changing it to a 3 column representation [date, stock, DF_NAME] using this function def rearrange(df, value_name): """rearranges the dataframe by resetting the index and melting""" df = df.reset_index() df = df.melt(id_vars=['index']) df.columns = ['date', 'stock', value_name] return df which when calling on price for example price_fixed = reshape_df(price, 'price') would give you date stock price 0 2000-01-01 IBM -1.332298 1 2000-01-02 IBM -0.222567 2 2000-01-03 IBM -1.139867 3 2000-01-04 IBM -0.189720 4 2000-01-01 MSFT 0.396217 5 2000-01-02 MSFT 0.281202 6 2000-01-03 MSFT -0.458111 7 2000-01-04 MSFT -0.542432 8 2000-01-01 APPL 0.574269 9 2000-01-02 APPL -0.505856 10 2000-01-03 APPL -0.999498 11 2000-01-04 APPL -0.471642 12 2000-01-01 ORCL -0.679972 13 2000-01-02 ORCL -1.392477 14 2000-01-03 ORCL 1.920840 15 2000-01-04 ORCL 1.506206 16 2000-01-01 FB -0.470584 17 2000-01-02 FB 0.941539 18 2000-01-03 FB 0.478174 19 2000-01-04 FB -1.506439 20 2000-01-01 TWTR 0.234379 21 2000-01-02 TWTR 0.974867 22 2000-01-03 TWTR -0.315904 23 2000-01-04 TWTR 0.301714 and we do the same for eps with eps_fixed = reshape_df(eps, 'eps') we merge them with merged = price_fixed.merge(eps_fixed, on=['date', 'stock'], how='outer') which gives us date stock price eps 0 2000-01-01 IBM -1.332298 -1.91 1 2000-01-02 IBM -0.222567 -0.56 2 2000-01-03 IBM -1.139867 -1.67 3 2000-01-04 IBM -0.189720 -0.80 4 2000-01-01 MSFT 0.396217 1.63 5 2000-01-02 MSFT 0.281202 0.02 6 2000-01-03 MSFT -0.458111 -0.41 7 2000-01-04 MSFT -0.542432 -0.43 8 2000-01-01 APPL 0.574269 0.51 9 2000-01-02 APPL -0.505856 0.56 10 2000-01-03 APPL -0.999498 -0.98 11 2000-01-04 APPL -0.471642 -0.12 12 
2000-01-01 ORCL -0.679972 -0.32 13 2000-01-02 ORCL -1.392477 1.77 14 2000-01-03 ORCL 1.920840 1.20 15 2000-01-04 ORCL 1.506206 1.06 16 2000-01-01 FB -0.470584 -0.84 17 2000-01-02 FB 0.941539 0.99 18 2000-01-03 FB 0.478174 0.74 19 2000-01-04 FB -1.506439 1.59 20 2000-01-01 TWTR 0.234379 0.37 21 2000-01-02 TWTR 0.974867 0.97 22 2000-01-03 TWTR -0.315904 -0.04 23 2000-01-04 TWTR 0.301714 0.34 [Optional] Change back to the previous representation If you'd like to have it in the same representation you had before, this is a job of pivot which can be done with merged_pivoted = merged.pivot(columns='stock', index='date') giving you price eps stock APPL FB IBM MSFT ORCL TWTR APPL FB IBM MSFT ORCL TWTR date 2000-01-01 0.574269 -0.470584 -1.332298 0.396217 -0.679972 0.234379 0.51 -0.84 -1.91 1.63 -0.32 0.37 2000-01-02 -0.505856 0.941539 -0.222567 0.281202 -1.392477 0.974867 0.56 0.99 -0.56 0.02 1.77 0.97 2000-01-03 -0.999498 0.478174 -1.139867 -0.458111 1.920840 -0.315904 -0.98 0.74 -1.67 -0.41 1.20 -0.04 2000-01-04 -0.471642 -1.506439 -0.189720 -0.542432 1.506206 0.301714 -0.12 1.59 -0.80 -0.43 1.06 0.34 And since you mentioned that you don't want to work with Multi-index, you can squash the headers like merged_pivoted_fixed_header = merged_pivoted.copy() merged_pivoted_fixed_header.columns = ['-'.join(col).strip() for col in merged_pivoted_fixed_header.columns.values] giving us price-APPL price-FB price-IBM price-MSFT price-ORCL price-TWTR eps-APPL eps-FB eps-IBM eps-MSFT eps-ORCL eps-TWTR date 2000-01-01 0.574269 -0.470584 -1.332298 0.396217 -0.679972 0.234379 0.51 -0.84 -1.91 1.63 -0.32 0.37 2000-01-02 -0.505856 0.941539 -0.222567 0.281202 -1.392477 0.974867 0.56 0.99 -0.56 0.02 1.77 0.97 2000-01-03 -0.999498 0.478174 -1.139867 -0.458111 1.920840 -0.315904 -0.98 0.74 -1.67 -0.41 1.20 -0.04 2000-01-04 -0.471642 -1.506439 -0.189720 -0.542432 1.506206 0.301714 -0.12 1.59 -0.80 -0.43 1.06 0.34 | 7 | 1 |
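An alternative to the melt/merge/pivot round trip in the answer above, when the goal is simply to bundle the metric frames into one object, is to concatenate them with keys, which builds the MultiIndex columns directly. This is a short sketch using the price and eps frames from the question, offered as an alternative technique rather than the accepted approach:

```python
import pandas as pd

# price and eps are the original wide frames (dates x tickers) from the question.
combined = pd.concat({"price": price, "eps": eps}, axis=1)

# Access one metric as a plain wide frame again:
prices_only = combined["price"]

# Or one ticker across all metrics:
ibm_all_metrics = combined.xs("IBM", axis=1, level=1)
```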
63,235,326 | 2020-8-3 | https://stackoverflow.com/questions/63235326/works-with-urrlib-request-but-doesnt-work-with-requests | I am trying to send a request wtih post method to an API, my code looks like the following one: import urllib.request import json url = "https://api.cloudflareclient.com/v0a745/reg" referrer = "e7b507ed-5256-4bfc-8f17-2652d3f0851f" body = {"referrer": referrer} data = json.dumps(body).encode('utf8') headers = {'User-Agent': 'okhttp/3.12.1'} req = urllib.request.Request(url, data, headers) response = urllib.request.urlopen(req) status_code = response.getcode() print (status_code) Actually it works fine but i want to use "requests" library instead as it's more updated and more flexible with proxies with following code: import requests import json url = "https://api.cloudflareclient.com/v0a745/reg" referrer = "e7b507ed-5256-4bfc-8f17-2652d3f0851f" data = {"referrer": referrer} headers = {'User-Agent': 'okhttp/3.12.1'} req = requests.post(url, headers=headers, json=data) status_code = req.status_code print (status_code) But it returns 403 status code, how can i fix it ? Keep in mind that this API is open to everyone and you can just run the code with no worries. EDIT-1: i have tried removing json.dumps(body).encode('utf8') or just .encode('utf8') from the second code by @tomasz-wojcik advice but i am still getting 403 while the first code still works! EDIT-2: i tried requesting with postman that successfully made the request and returned 200 status code. postman generated the following python code: import requests url = "https://api.cloudflareclient.com/v0a745/reg" payload = "{\"referrer\": \"e7b507ed-5256-4bfc-8f17-2652d3f0851f\"}" headers = { 'Content-Type': 'application/x-www-form-urlencoded', 'User-Agent': 'okhttp/3.12.1', 'Host': 'api.cloudflareclient.com' } response = requests.request("POST", url, headers=headers, data=payload) status_code = response.status_code print (status_code) If you run the code outside of postman, it still returns 403 status code, i'm a litte confused, i am thinking that maybe "requests" library doesn't changing the user-agent in the second code. EDIT-3: I have looked into it and found out that it works on python 2.7.16 but doesn't work on python 3.8.5! EDIT-4: Some Developers are reporting that the second code works on python 3.6 too but the main thing is why it is working on other versions but not working on 3.8 or 3.7 ? Python Versions that returned 403 status code(second code): 3.8.5 & 3.7 Python Versions that returned 200 status code(second code): 3.6 & 2.7.16 | The issue seems to be with how the host is handling ssl. Newer versions of requests uses certifi which in your case is having issues with the host server. I downgraded requests to an earlier version and it worked. (2.1.0). You can fix the version in your requirements.txt and it should work with any python version. https://requests.readthedocs.io/en/master/user/advanced/#ca-certificates Before version 2.16, Requests bundled a set of root CAs that it trusted, sourced from the Mozilla trust store. The certificates were only updated once for each Requests version. When certifi was not installed, this led to extremely out-of-date certificate bundles when using significantly older versions of Requests. For the sake of security we recommend upgrading certifi frequently! | 11 | 3 |
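If downgrading requests is undesirable, the TLS negotiation can sometimes be adjusted instead by mounting an adapter with a custom SSL context on a session. This is a speculative sketch and not verified against this particular API; whether it helps depends on the server and the local OpenSSL build.

```python
import ssl
import requests
from requests.adapters import HTTPAdapter

class TLSAdapter(HTTPAdapter):
    """Adapter that relaxes the default cipher settings for one session."""
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers("DEFAULT@SECLEVEL=1")  # OpenSSL-specific; illustrative only
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", TLSAdapter())

resp = session.post(
    "https://api.cloudflareclient.com/v0a745/reg",
    headers={"User-Agent": "okhttp/3.12.1"},
    json={"referrer": "e7b507ed-5256-4bfc-8f17-2652d3f0851f"},
)
print(resp.status_code)
```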
63,215,752 | 2020-8-2 | https://stackoverflow.com/questions/63215752/how-to-use-fileupload-widget-in-jupyter-lab | I want to use the FileUpload widget in jupyter lab. I have the following lines of code in my notebook cell: uploader = widgets.FileUpload() uploader In jupyter notebook, the output of the cell is a clickable button that I can use to upload a file. In jupyter lab, the output is the following: FileUpload(value={}, description='Upload') Here's the info on the uploader object: Type: FileUpload String form: FileUpload(value={}, description='Upload') File: ~/miniconda3/envs/fastai2/lib/python3.7/site-packages/ipywidgets/widgets/widget_upload.py Is it possible to make this widget work on jupyter lab? And if so, how should I proceed? | If you're using JupyterLab out of the box, it doesn't have ipywidgets enabled by default; you need to rebuild it after enabling the extension. Follow the steps from here: Install nodeJS pip install ipywidgets jupyter nbextension enable --py widgetsnbextension jupyter labextension install @jupyter-widgets/jupyterlab-manager (may need to restart your lab) It says that newer JupyterLab has it enabled, but I still had trouble with it, depending on the platform. A manual install is usually the way to go. | 18 | 15 |
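A minimal sketch of reading the uploaded bytes once the widget renders, assuming ipywidgets 7.x, where uploader.value is a dict keyed by filename (ipywidgets 8 later changed this to a tuple of dicts):

import ipywidgets as widgets
from IPython.display import display

uploader = widgets.FileUpload(multiple=False)
display(uploader)

# run in a later cell, after a file has been chosen
for filename, item in uploader.value.items():
    raw_bytes = item['content']                 # file content as bytes
    print(filename, len(raw_bytes), 'bytes')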
63,160,595 | 2020-7-29 | https://stackoverflow.com/questions/63160595/implementing-a-recursive-algorithm-in-pyspark-to-find-pairings-within-a-datafram | I have a spark dataframe (prof_student_df) that lists student/professor pair for a timestamp. There are 4 professors and 4 students for each timestamp and each professor-student pair has a “score” (so there are 16 rows per time frame). For each time frame, I need to find the one to one pairing between professors/students that maximizes the overall score. Each professor can only be matched with one student for a single time frame. For example, here are the pairings/scores for one time frame. +------------+--------------+------------+-------+----------+ | time | professor_id | student_id | score | is_match | +------------+--------------+------------+-------+----------+ | 1596048041 | p1 | s1 | 0.7 | FALSE | | 1596048041 | p1 | s2 | 0.5 | TRUE | | 1596048041 | p1 | s3 | 0.3 | FALSE | | 1596048041 | p1 | s4 | 0.2 | FALSE | | 1596048041 | p2 | s1 | 0.9 | TRUE | | 1596048041 | p2 | s2 | 0.1 | FALSE | | 1596048041 | p2 | s3 | 0.15 | FALSE | | 1596048041 | p2 | s4 | 0.2 | FALSE | | 1596048041 | p3 | s1 | 0.2 | FALSE | | 1596048041 | p3 | s2 | 0.3 | FALSE | | 1596048041 | p3 | s3 | 0.4 | FALSE | | 1596048041 | p3 | s4 | 0.8 | TRUE | | 1596048041 | p4 | s1 | 0.2 | FALSE | | 1596048041 | p4 | s2 | 0.3 | FALSE | | 1596048041 | p4 | s3 | 0.35 | TRUE | | 1596048041 | p4 | s4 | 0.4 | FALSE | +------------+--------------+------------+-------+----------+ The goal Is to get this is_match column. It can be a boolean or a 0/1 bit or whatever works. In the above example, p1 matched with s2, p2 matched with s1, p3 matched with s4 and p4 matched with s3 because that is the combination that maximized the total score (yields a score of 2.55). There is one weird edge case - it is possible to have LESS than 4 professors or students for a given time frame. If there are 4 professors and 3 students then 1 professor would be without a pairing and all of his is_match would be false. Similarly, if there are 3 professors and 4 students, 1 student would be without a pairing and all of his is_match would be false. Does anyone know how I might accomplish this? i am thinking I would partition or group by time and then feed the data into some UDF that spits out the pairings and then maybe I would have to join that back to the original rows (although I am not sure). I am trying to implement this logic in pyspark and can use spark sql/sql or pyspark. Ideally, I would like this to be as efficient as possible as there will be millions of rows. In the question, I mentioned a recursive algorithm because this is a traditional recursive type problem, but if there is a quicker solution that doesn't use recursion I am open to that. many thanks, I am new to spark and a little stumped with how to do this. EDIT: clarifying the question as I realize in my example I did not specify this for a single day, there will be up to 14 professors and 14 students to choose from. I am just looking at one day at a time which is why I didnt have the date in the dataframe. at any one time frame, there is at most 4 professors and 4 students. this dataframe just shows one time frame. but for the next time frame it is possible that the 4 professors are p5, p1, p7, p9 or something like that. the students might still be s1, s2, s3, s4. 
| Edit: As discussed in comments, to fix the issue mentioned in your update, we can convert student_id at each time into generalized sequence-id using dense_rank, go through Step 1 to 3 (using student column) and then use join to convert student at each time back to their original student_id. see below Step-0 and Step-4. in case there are less than 4 professors in a timeUnit, dimension will be resize to 4 in Numpy-end (using np_vstack() and np_zeros()), see the updated function find_assigned. You can try pandas_udf and scipy.optimize.linear_sum_assignment(note: the backend method is the Hungarian algorithm as mentioned by @cronoik in the main comments), see below: from pyspark.sql.functions import pandas_udf, PandasUDFType, first, expr, dense_rank from pyspark.sql.types import StructType from scipy.optimize import linear_sum_assignment from pyspark.sql import Window import numpy as np df = spark.createDataFrame([ ('1596048041', 'p1', 's1', 0.7), ('1596048041', 'p1', 's2', 0.5), ('1596048041', 'p1', 's3', 0.3), ('1596048041', 'p1', 's4', 0.2), ('1596048041', 'p2', 's1', 0.9), ('1596048041', 'p2', 's2', 0.1), ('1596048041', 'p2', 's3', 0.15), ('1596048041', 'p2', 's4', 0.2), ('1596048041', 'p3', 's1', 0.2), ('1596048041', 'p3', 's2', 0.3), ('1596048041', 'p3', 's3', 0.4), ('1596048041', 'p3', 's4', 0.8), ('1596048041', 'p4', 's1', 0.2), ('1596048041', 'p4', 's2', 0.3), ('1596048041', 'p4', 's3', 0.35), ('1596048041', 'p4', 's4', 0.4) ] , ['time', 'professor_id', 'student_id', 'score']) N = 4 cols_student = [*range(1,N+1)] Step-0: add an extra column student, and create a new dataframe df3 with all unique combos of time + student_id + student. w1 = Window.partitionBy('time').orderBy('student_id') df = df.withColumn('student', dense_rank().over(w1)) +----------+------------+----------+-----+-------+ | time|professor_id|student_id|score|student| +----------+------------+----------+-----+-------+ |1596048041| p1| s1| 0.7| 1| |1596048041| p2| s1| 0.9| 1| |1596048041| p3| s1| 0.2| 1| |1596048041| p4| s1| 0.2| 1| |1596048041| p1| s2| 0.5| 2| |1596048041| p2| s2| 0.1| 2| |1596048041| p3| s2| 0.3| 2| |1596048041| p4| s2| 0.3| 2| |1596048041| p1| s3| 0.3| 3| |1596048041| p2| s3| 0.15| 3| |1596048041| p3| s3| 0.4| 3| |1596048041| p4| s3| 0.35| 3| |1596048041| p1| s4| 0.2| 4| |1596048041| p2| s4| 0.2| 4| |1596048041| p3| s4| 0.8| 4| |1596048041| p4| s4| 0.4| 4| +----------+------------+----------+-----+-------+ df3 = df.select('time','student_id','student').dropDuplicates() +----------+----------+-------+ | time|student_id|student| +----------+----------+-------+ |1596048041| s1| 1| |1596048041| s2| 2| |1596048041| s3| 3| |1596048041| s4| 4| +----------+----------+-------+ Step-1: use pivot to find the matrix of professors vs students, notice we set negative of scores to the values of pivot so that we can use scipy.optimize.linear_sum_assignment to find the min cost of an assignment problem: df1 = df.groupby('time','professor_id').pivot('student', cols_student).agg(-first('score')) +----------+------------+----+----+-----+----+ | time|professor_id| 1| 2| 3| 4| +----------+------------+----+----+-----+----+ |1596048041| p4|-0.2|-0.3|-0.35|-0.4| |1596048041| p2|-0.9|-0.1|-0.15|-0.2| |1596048041| p1|-0.7|-0.5| -0.3|-0.2| |1596048041| p3|-0.2|-0.3| -0.4|-0.8| +----------+------------+----+----+-----+----+ Step-2: use pandas_udf and scipy.optimize.linear_sum_assignment to get column indices and then assign the corresponding column name to a new column assigned: # returnSchema contains one more StringType 
column `assigned` than schema from the input pdf: schema = StructType.fromJson(df1.schema.jsonValue()).add('assigned', 'string') # since the # of students are always N, we can use np.vstack to set the N*N matrix # below `n` is the number of professors/rows in pdf # sz is the size of input Matrix, sz=4 in this example def __find_assigned(pdf, sz): cols = pdf.columns[2:] n = pdf.shape[0] n1 = pdf.iloc[:,2:].fillna(0).values _, idx = linear_sum_assignment(np.vstack((n1,np.zeros((sz-n,sz))))) return pdf.assign(assigned=[cols[i] for i in idx][:n]) find_assigned = pandas_udf(lambda x: __find_assigned(x,N), schema, PandasUDFType.GROUPED_MAP) df2 = df1.groupby('time').apply(find_assigned) +----------+------------+----+----+-----+----+--------+ | time|professor_id| 1| 2| 3| 4|assigned| +----------+------------+----+----+-----+----+--------+ |1596048041| p4|-0.2|-0.3|-0.35|-0.4| 3| |1596048041| p2|-0.9|-0.1|-0.15|-0.2| 1| |1596048041| p1|-0.7|-0.5| -0.3|-0.2| 2| |1596048041| p3|-0.2|-0.3| -0.4|-0.8| 4| +----------+------------+----+----+-----+----+--------+ Note: per suggestion from @OluwafemiSule, we can use the parameter maximize instead of negate the score values. this parameter is available SciPy 1.4.0+: _, idx = linear_sum_assignment(np.vstack((n1,np.zeros((N-n,N)))), maximize=True) Step-3: use SparkSQL stack function to normalize the above df2, negate the score values and filter rows with score is NULL. the desired is_match column should have assigned==student: df_new = df2.selectExpr( 'time', 'professor_id', 'assigned', 'stack({},{}) as (student, score)'.format(len(cols_student), ','.join("int('{0}'), -`{0}`".format(c) for c in cols_student)) ) \ .filter("score is not NULL") \ .withColumn('is_match', expr("assigned=student")) df_new.show() +----------+------------+--------+-------+-----+--------+ | time|professor_id|assigned|student|score|is_match| +----------+------------+--------+-------+-----+--------+ |1596048041| p4| 3| 1| 0.2| false| |1596048041| p4| 3| 2| 0.3| false| |1596048041| p4| 3| 3| 0.35| true| |1596048041| p4| 3| 4| 0.4| false| |1596048041| p2| 1| 1| 0.9| true| |1596048041| p2| 1| 2| 0.1| false| |1596048041| p2| 1| 3| 0.15| false| |1596048041| p2| 1| 4| 0.2| false| |1596048041| p1| 2| 1| 0.7| false| |1596048041| p1| 2| 2| 0.5| true| |1596048041| p1| 2| 3| 0.3| false| |1596048041| p1| 2| 4| 0.2| false| |1596048041| p3| 4| 1| 0.2| false| |1596048041| p3| 4| 2| 0.3| false| |1596048041| p3| 4| 3| 0.4| false| |1596048041| p3| 4| 4| 0.8| true| +----------+------------+--------+-------+-----+--------+ Step-4: use join to convert student back to student_id (use broadcast join if possible): df_new = df_new.join(df3, on=["time", "student"]) +----------+-------+------------+--------+-----+--------+----------+ | time|student|professor_id|assigned|score|is_match|student_id| +----------+-------+------------+--------+-----+--------+----------+ |1596048041| 1| p1| 2| 0.7| false| s1| |1596048041| 2| p1| 2| 0.5| true| s2| |1596048041| 3| p1| 2| 0.3| false| s3| |1596048041| 4| p1| 2| 0.2| false| s4| |1596048041| 1| p2| 1| 0.9| true| s1| |1596048041| 2| p2| 1| 0.1| false| s2| |1596048041| 3| p2| 1| 0.15| false| s3| |1596048041| 4| p2| 1| 0.2| false| s4| |1596048041| 1| p3| 4| 0.2| false| s1| |1596048041| 2| p3| 4| 0.3| false| s2| |1596048041| 3| p3| 4| 0.4| false| s3| |1596048041| 4| p3| 4| 0.8| true| s4| |1596048041| 1| p4| 3| 0.2| false| s1| |1596048041| 2| p4| 3| 0.3| false| s2| |1596048041| 3| p4| 3| 0.35| true| s3| |1596048041| 4| p4| 3| 0.4| false| s4| 
+----------+-------+------------+--------+-----+--------+----------+ df_new = df_new.drop("student", "assigned") | 7 | 5 |
63,248,340 | 2020-8-4 | https://stackoverflow.com/questions/63248340/sqlalchemy-when-does-an-object-become-not-persistent | I have a function that has a semi-long running session that I use for a bunch of database rows... and at a certain point I want to reload or "refresh" one of the rows to make sure none of the state has changed. most of the time this code works fine, but every now and then I get this error sqlalchemy.exc.InvalidRequestError: Instance '<Event at 0x58cb790>' is not persistent within this Session I've been reading up on state but cannot understand why an object would stop being persistent? I'm still within a session, so I'm not sure why I would stop being persistent. Can someone explain what could cause my object to be "not persistent" within the session? I'm not doing any writing to the object prior to this point. db_event below is the object that is becoming "not persistent" async def event_white_check_mark_handler( self: Events, ctx, channel: TextChannel, member: discord.Member, message: Message ): """ This reaction is for completing an event """ session = database_objects.SESSION() try: message_id = message.id db_event = self.get_event(session, message_id) if not db_event: return logger.debug(f"{member.display_name} wants to complete an event {db_event.id}") db_guild = await db.get_or_create( session, db.Guild, name=channel.guild.name, discord_id=channel.guild.id ) db_member = await db.get_or_create( session, db.Member, name=member.name, discord_id=member.id, nick=member.display_name, guild_id=db_guild.discord_id, ) db_scheduler_config: db.SchedulerConfig = ( session.query(db.SchedulerConfig) .filter(db.SchedulerConfig.guild_id == channel.guild.id) .one() ) # reasons to not complete the event if len(db_event) == 0: await channel.send( f"{member.display_name} you cannot complete an event with no one on it!" ) elif ( db_member.discord_id == db_event.creator_id or await db_scheduler_config.check_permission( ctx, db_event.event_name, member, db_scheduler_config.MODIFY ) ): async with self.EVENT_LOCKS[db_event.id]: session.refresh(db_event) ########### <---- right here is when I get the error thrown db_event.status = const.COMPLETED session.commit() self.DIRTY_EVENTS.add(db_event.id) member_list = ",".join( filter( lambda x: x not in const.MEMBER_FIELD_DEFAULT, [str(x.mention) for x in db_event.members], ) ) await channel.send(f"Congrats on completing a event {member_list}!") logger.info(f"Congrats on completing a event {member_list}!") # await self.stop_tracking_event(db_event) del self.REMINDERS_BY_EVENT_ID[db_event.id] else: await channel.send( f"{member.display_name} you did not create this event and do not have permission to delete the event!" ) logger.warning(f"{member.display_name} you did not create this event!") except Exception as _e: logger.error(format_exc()) session.rollback() finally: database_objects.SESSION.remove() | I am fairly certain that the root cause in this case is a race condition. Using a scoped session in its default configuration manages scope based on the thread only. Using coroutines on top can mean that 2 or more end up sharing the same session, and in case of event_white_check_mark_handler they then race to commit/rollback and to remove the session from the scoped session registry, effectively closing it and expunging all remaining instances from the now-defunct session, making the other coroutines unhappy. 
A solution is to not use scoped sessions at all in event_white_check_mark_handler, because it fully manages its session's lifetime, and seems to pass the session forward as an argument. If on the other hand there are some paths that use the scoped session database_objects.SESSION instead of receiving the session as an argument, define a suitable scopefunc when creating the registry: https://docs.sqlalchemy.org/en/13/orm/contextual.html#using-custom-created-scopes SQLAlchemy+Tornado: How to create a scopefunc for SQLAlchemy's ScopedSession? Correct usage of sqlalchemy scoped_session with python asyncio | 8 | 8 |
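A minimal sketch of the custom-scopefunc option mentioned above, keying the session registry by the running asyncio task instead of the thread so concurrent coroutines stop sharing one session; the in-memory SQLite engine is only a placeholder:

from asyncio import current_task
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")      # placeholder engine for illustration
SESSION = scoped_session(
    sessionmaker(bind=engine),
    scopefunc=current_task,              # one session per running task, not per thread
)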
63,248,112 | 2020-8-4 | https://stackoverflow.com/questions/63248112/how-to-implement-oauth-to-fastapi-with-client-id-secret | I have followed the docs about Oauth2 but it does not describe the proccess to add client id and secret https://fastapi.tiangolo.com/advanced/security/oauth2-scopes/ and what this does class UserInDB(User): hashed_password: str from the original example | In documentation it uses OAuth2PasswordRequestForm to authenticate the user this class has basically 6 different Fields, grant_type: str = Form(None, regex="password"), username: str = Form(...), password: str = Form(...), scope: str = Form(""), client_id: Optional[str] = Form(None), client_secret: Optional[str] = Form(None), So you can add client_id and client_secret,if you are interested Repository here. But i usally prefer authlib, it saves so much time makes it easier. Here is a complete example of how you can create a OAuth with authlib First create a OAuth Client from authlib.integrations.starlette_client import OAuth from starlette.config import Config config = Config('.env') # read config from .env file oauth = OAuth(config) oauth.register( name='google', server_metadata_url='https://accounts.google.com/.well-known/openid-configuration', client_kwargs={ 'scope': 'openid email profile' } ) We don't need to add client_id and client_secret here, because they are in .env file. You are not supposed to hard code them in the code in real products.Google has an OpenID discovery endpoint, we can use this URL for server_metadata_url. Authlib will automatically fetch this server_metadata_url to configure the OAuth client for you. Now we will create a FastAPI application to define a login route. from fastapi import FastAPI, Request from starlette.middleware.sessions import SessionMiddleware app = FastAPI() app.add_middleware(SessionMiddleware, secret_key="secret-string") We need this SessionMiddleware, because Authlib will use request.session to store temporary codes and states. The below code which is /login endpoint, will redirect you to Google account website. @app.route('/login') async def login(request: Request): redirect_uri = request.url_for('auth') return await oauth.google.authorize_redirect(request, redirect_uri When you grant access from Google website, Google will redirect back to your given redirect_uri, which is request.url_for('auth'): @app.route('/auth') async def auth(request: Request): token = await oauth.google.authorize_access_token(request) user = await oauth.google.parse_id_token(request, token) return user The above code will obtain a token which contains access_token and id_token. An id_token contains user info, we just need to parse it to get the login user's information. Sources: Authlib-FastAPI-Google-Login Also if you still wanna use Pure FastAPI check this link FastAPI OAuth2PasswordRequestForm | 12 | 11 |
63,255,631 | 2020-8-4 | https://stackoverflow.com/questions/63255631/mlflow-invalid-parameter-value-unsupported-uri-mlruns-for-model-registry-s | I got this error when I was trying to have a model registered in the model registry. Could someone help me? RestException: INVALID_PARAMETER_VALUE: Unsupported URI './mlruns' for model registry store. Supported schemes are: ['postgresql', 'mysql', 'sqlite', 'mssql']. See https://www.mlflow.org/docs/latest/tracking.html#storage for how to setup a compatible server. | MLflow requires a DB as the datastore for the Model Registry, so you have to run the tracking server with a DB backend store and log the model to that tracking server. The easiest way is to use SQLite. mlflow server \ --backend-store-uri sqlite:///mlflow.db \ --default-artifact-root ./artifacts \ --host 0.0.0.0 Then set the MLFLOW_TRACKING_URI environment variable to http://localhost:5000, or call mlflow.set_tracking_uri("http://localhost:5000") After that, go to http://localhost:5000 and you can register a logged model from the UI or from code. | 10 | 32 |
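Once the tracking server is backed by a database, a model can also be registered from code along these lines; the sklearn flavor, the unfitted estimator and the "MyModel" name are placeholders, not part of the accepted answer:

import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")

model = LogisticRegression()                      # placeholder model
with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="MyModel",          # creates/updates the registry entry
    )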
63,246,100 | 2020-8-4 | https://stackoverflow.com/questions/63246100/how-to-draw-character-with-gradient-colors-using-pil | I have the function that generates character images from a font file using PIL. For the current example, it generates a white background image and a red character text. What I want now is that instead of pure red or any other color I can generate a gradient color. Is this possible with my current code? I have seen this post but it didn't help me. Edit 1: Currently, I am generating English alphabet images from font files using PIL. The fonts variable in my code has N number of ".ttf" files. lets suppose N=3 all in different styles e.g. style1, style2, style3. My current code will always generate these N different styles with fixed white background and fixed red character color. As shown in the below figure. Instead of red color for the characters, I would like to apply gradients for each style. i.e. all characters in style1 font images should have the same gradient, style 2 font style should have a different gradient from style1 characters but should be the same for all of its characters and so on. As shown below (styles are different from the above images. Its just for demonstration of what I want). My code so far: fonts = glob.glob(os.path.join(fonts_dir, '*.ttf')) for font in fonts: image = Image.new('RGB', (IMAGE_WIDTH, IMAGE_HEIGHT), color='white') font = ImageFont.truetype(font, 150) drawing = ImageDraw.Draw(image) w, h = drawing.textsize(character, font=font) drawing.text( ((IMAGE_WIDTH-w)/2, (IMAGE_HEIGHT-h)/2), character, fill='red', font=font ) image.save(file_path, 'PNG') | One fairly easy way of doing it is to draw the text in white on a black background and then use that as the alpha/transparency channel over a background with a gradient. Here's a background gradient: #!/usr/bin/env python3 from PIL import Image, ImageDraw, ImageFont w, h = 400, 150 image = Image.open('gradient.jpg').rotate(90).resize((w,h)) font = ImageFont.truetype('/System/Library/Fonts/MarkerFelt.ttc', 80) # Create new alpha channel - solid black alpha = Image.new('L', (w,h)) draw = ImageDraw.Draw(alpha) draw.text((20,10),'Some Text',fill='white',font=font) alpha.save('alpha.png') # Use text cutout as alpha channel for gradient image image.putalpha(alpha) image.save('result.png') The alpha.png looks like this: And the result.png looks like this: Note that the area around the text is transparent. but you can easily paste it onto a white or black background. So, say you wanted the background yellow, add the following to the bottom of the code above: solid = Image.new('RGB', (w,h), 'yellow') solid.paste(image,image) solid.save('result2.png') | 9 | 11 |
63,255,485 | 2020-8-4 | https://stackoverflow.com/questions/63255485/python-argparse-how-to-reference-a-parameter-with-a-dash-in-it | I'm using argparse and I'd like to specify the positional argument with a dash in it. argparse seems to let me do this. Indeed, it shows up in the Namespace of parse_args(), but I cannot figure out how to reference the corresponding value. Here's a minimal example (notice the dash in 'a-string'): #!/usr/bin/env python3 import argparse parser = argparse.ArgumentParser() parser.add_argument('a-string', help='A string') args = parser.parse_args() # AttributeError: 'Namespace' object has no attribute 'a_string' #print("Argument was: " + args.a_string) # TypeError: 'Namespace' object is not subscriptable #print("Argument was: " + args['a-string']) # AttributeError: 'Namespace' object has no attribute 'a' #print("Argument was: " + args.a-string) # I give up. Ask StackOverflow. I initially thought to address this with the dest argument to add_argument, but I get "ValueError: dest supplied twice for positional argument" if I add dest to a positional argument. How do I reference the value of this positional argument? | The arguments are given as attributes of the namespace object returned by parse_args(). But identifiers (including attributes) can't have a hyphen in them (at least not if you want to access them directly), because it's a minus sign, and that means something else. If you want your argument to be named "a-string" to the user but accessible in your code as a_string (which was one of your attempted approaches) you can use the metavar argument to specify how it's described to the user. parser.add_argument('a_string', help='A string', metavar='a-string') args = parser.parse_args() print(args.a_string) The argument will appear in the usage information as: positional arguments: a-string A string https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_argument | 6 | 10 |
63,247,803 | 2020-8-4 | https://stackoverflow.com/questions/63247803/pipenv-requires-python-3-7-but-installed-version-is-3-8-and-wont-install | I know a little of Python and more than a year ago I wrote a small script, using pipenv to manage the dependencies. The old platform was Windows 7, the current platform is Windows 10. At that time I probably had Python 3.7 installed, now I have 3.8.3 but running: pipenv install Complained that: Warning: Python 3.7 was not found on your system… Neither 'pyenv' nor 'asdf' could be found to install Python. You can specify specific versions of Python with: $ pipenv --python path\to\python This is the Pipfile [[source]] url = "https://pypi.org/simple" verify_ssl = true name = "pypi" [packages] python-ldap = {path = "./dependencies/python_ldap-3.1.0-cp37-cp37m-win_amd64.whl"} requests = "~=2.0" mysqlclient = "~=1.0" [dev-packages] [requires] python_version = "3.7" I manually edited that last line to allow 3.8, but how do I properly fix that? I think 3.7 should be a minimum requirement — well, the script is so simple that I think even 3.0 should work. | You can download Python 3.7 from the official site - https://www.python.org/downloads/ | 33 | 3 |
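If keeping Python 3.8 is preferable to installing 3.7, a sketch of the usual fix is to relax the pin in the Pipfile and rebuild the virtualenv; note that the cp37 python_ldap wheel pinned under [packages] would also need a cp38 build:

# Pipfile
[requires]
python_version = "3.8"

# then recreate the environment
pipenv --rm        # drop the old 3.7 virtualenv
pipenv install     # rebuild it with the interpreter matching the Pipfile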
63,241,608 | 2020-8-4 | https://stackoverflow.com/questions/63241608/install-opencv-from-source-to-conda-environment | I would like to install opencv to my conda environment from source. Since I'm using Jetson, there is no pip or conda packages that are available for opencv. I use this command for installing from source, -D BUILD_EXAMPLES=OFF -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=${PREFIX} -D CUDA_ARCH_BIN=5.3,6.2,7.2 -D CUDA_ARCH_PTX= -D CUDA_FAST_MATH=ON -D CUDNN_VERSION='8.0' -D EIGEN_INCLUDE_PATH=/usr/include/eigen3 -D ENABLE_NEON=ON -D OPENCV_DNN_CUDA=ON -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/tmp/build_opencv/opencv_contrib/modules -D OPENCV_GENERATE_PKGCONFIG=ON -D WITH_CUBLAS=ON -D WITH_CUDA=ON -D WITH_CUDNN=ON -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D WITH_OPENGL=ON" How do I install the python dependencies to my conda environment instead of installing it to usr/local/python? | By default it will install to your system Python path which you can see by entering: which python in the terminal. In your cmake commands (the above list you posted) you need to tell it which python executable path you want to build to. At the moment your build is pointing to the above default Python location, and now you want to point it to your Conda Python path. So for example, my base path for my Python environment in Anaconda is: /home/robert/anaconda3/ You can get a list of your Anaconda environments and their location by entering this in the terminal: conda env list To do this, you'll need to update the cmake commands to tell it where the Python path which you want to build to is located. I've used this post before to help me correctly specify the Python executable build path, and it has worked for me when specifying the Python path for a venv. For example, if I wanted to install to one of my Anaconda environments I would do something like this in my cmake: -D PYTHON_DEFAULT_EXECUTABLE=$(/home/robert/anaconda3/envs/venv_openvcv/python3) When you build cmake, scroll through the output and pay particular attention to the line which says something like: Python (for build): /home/robert/anaconda3/envs/venv_openvcv/python3 This is your way of confirming if it is about to build opencv to the correct Python executable (the Anaconda one you have specified). Edit: Additionally here is a tutorial which outlines in detail the steps to compile OpenCV for an Anaconda environment - Installing OpenCV for Conda Virtual Environments | 23 | 8 |
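For illustration, these are CMake flags commonly used to point an OpenCV build at a specific conda environment; the paths below are assumptions to adapt to the environment shown by conda env list and to the Python version inside it:

cmake <other flags from the question> \
  -D PYTHON3_EXECUTABLE=/home/robert/anaconda3/envs/venv_opencv/bin/python3 \
  -D PYTHON3_INCLUDE_DIR=/home/robert/anaconda3/envs/venv_opencv/include/python3.8 \
  -D PYTHON3_PACKAGES_PATH=/home/robert/anaconda3/envs/venv_opencv/lib/python3.8/site-packages \
  -D PYTHON3_NUMPY_INCLUDE_DIRS=/home/robert/anaconda3/envs/venv_opencv/lib/python3.8/site-packages/numpy/core/include \
  ..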
63,187,161 | 2020-7-31 | https://stackoverflow.com/questions/63187161/error-while-import-pytorch-module-the-specified-module-could-not-be-found | I just newly install python 3.8 via anaconda installer and install pytorch using command conda install pytorch torchvision cpuonly -c pytorch when i try to import torch, I got this error message. OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\chunc\anaconda3\lib\site-packages\torch\lib\asmjit.dll" or one of its dependencies. I can see dll files are still in the directory. I ran Dependency Walker and it gave me this result. I am with this problem for a day. What should i do if i want to use PyTorch module? | I had the same problem, you should check if you installed Microsoft Visual C++ Redistributable, because if you didn't this may lead to the DLL load failure. Here is a link to download it: https://aka.ms/vs/16/release/vc_redist.x64.exe | 9 | 30 |
63,232,724 | 2020-8-3 | https://stackoverflow.com/questions/63232724/how-to-add-documentation-for-required-query-parameters | I'm trying to create a fastapi API endpoint that relies on HTTP GET parameters, has them documented and uses fastapi's validation capabilities. Consider the following minimal example: import fastapi app = fastapi.FastAPI( ) @app.get("/endpoint") def example_endpoint( par1: int = fastapi.Query( None, description="example documentation1", ), par2: int = fastapi.Query( None, description="example documentation2", ), ): return {"test": par1 + par2} This has the documentation support and works over HTTP GET parameters, but doesn't validate them - http://localhost:8000/endpoint?par1=2&par2=3 works fine, but http://localhost:8000/endpoint crashes with an internal server error, instead of notifying the user that a parameter was expected. Is there a way to make par1 and par2 required and keep the documentation feature? | You can use Ellipsis, If you hadn't seen that ... before: it is a special single value that makes query required from fastapi import Query Query(...,description="example documentation1") So in your case answer below can do the job @app.get("/endpoint") def example_endpoint( par1: int = fastapi.Query(..., description="example documentation1",), par2: int = fastapi.Query(..., description="example documentation2",), ): if par1 and par2: return {"test": par1 + par2} raise ValueError("Missing query parameters") Also you can use example=1 Query(..., description="example documentation2", example=1) | 6 | 11 |
63,231,163 | 2020-8-3 | https://stackoverflow.com/questions/63231163/what-is-the-usermixin-in-flask | from datetime import datetime from werkzeug.security import generate_password_hash from werkzeug.security import check_password_hash from flask_login import UserMixin from app import db class User(UserMixin, db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(64), index=True, unique=True) email = db.Column(db.String(64), index=True, unique=True) password_hash = db.Column(db.String(64)) posts = db.relationship('Post', backref='author', lazy='dynamic') def set_password(self, password): self.password_hash = generate_password_hash(password) def check_password(self, password): return check_password_hash(self.password_hash, password) def __repr__(self): return '<User{}>'.format(self.username) After reading the official documentation I couldn't yet understand the purpose of UserMixin. Can anyone please describe it in brief? | Flask-Login requires a User model with the following properties: has an is_authenticated() method that returns True if the user has provided valid credentials has an is_active() method that returns True if the user’s account is active has an is_anonymous() method that returns True if the current user is an anonymous user has a get_id() method which, given a User instance, returns the unique ID for that object The UserMixin class provides the implementation of these properties. It's the reason you can call, for example, is_authenticated to check whether the provided login credentials are correct, instead of having to write a method to do that yourself. | 24 | 51 |
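A tiny self-contained sketch of what the mixin provides; note that in recent Flask-Login versions these are exposed as properties rather than methods, which is why no parentheses are needed below:

from flask_login import UserMixin

class User(UserMixin):              # toy example without a database
    def __init__(self, id):
        self.id = id                # UserMixin.get_id() returns str(self.id)

u = User(1)
print(u.is_authenticated)           # True  - provided by UserMixin
print(u.is_active)                  # True
print(u.is_anonymous)               # False
print(u.get_id())                   # '1'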
63,231,615 | 2020-8-3 | https://stackoverflow.com/questions/63231615/apply-function-to-pandas-row-row-cross-product | I have two pandas DataFrames / Series containing one row each. df1 = pd.DataFrame([1, 2, 3, 4]) df2 = pd.DataFrame(['one', 'two', 'three', 'four']) I now want to get all possible combinations into an n*n matrix / DataFrame with values for all cross-products being the output from a custom function. def my_function(x, y): return f"{x}:{y}" This should therefore result in: df = pd.DataFrame([['1:one', '2:one', '3:one', '4:one'], ['1:two', '2:two', '3:two', '4:two'], ['1:three', '2:three', '3:three', '4:three'], ['1:four', '2:four', '3:four', '4:four']]) 0 1 2 3 0 1:one 2:one 3:one 4:one 1 1:two 2:two 3:two 4:two 2 1:three 2:three 3:three 4:three 3 1:four 2:four 3:four 4:four While I can build my own matrix through itertools.product, this seems like a very inefficient way for larger datasets and I was wondering if there is a more pythonic way. Thank you in advance. | Let us try np.add.outer df = pd.DataFrame(np.add.outer(df1[0].astype(str).values,':'+df2[0].values).T) Out[258]: 0 1 2 3 0 1:one 2:one 3:one 4:one 1 1:two 2:two 3:two 4:two 2 1:three 2:three 3:three 4:three 3 1:four 2:four 3:four 4:four | 14 | 9 |
63,229,017 | 2020-8-3 | https://stackoverflow.com/questions/63229017/how-can-i-count-a-pandas-dataframe-over-duplications | My initial dataframe is: Name Info1 Info2 0 Name1 Name1-Info1 Name1-Info2 1 Name1 Name1-Info1 Name1-Info2 2 Name1 Name1-Info1 Name1-Info2 3 Name2 Name2-Info1 Name2-Info2 4 Name2 Name2-Info1 Name2-Info2 and i would like to return the number of repetitions of each row as such: Name Info1 Info2 Count 0 Name1 Name1-Info1 Name1-Info2 3 1 Name2 Name2-Info1 Name2-Info2 2 How can I count a pandas dataframe over duplications? | df.groupby(['Name', 'Info1', 'Info2']).size().reset_index().rename(columns={0:"count"}) | 9 | 9 |
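A self-contained run of the same idea; reset_index(name='Count') is just an equivalent spelling that skips the separate rename step:

import pandas as pd

df = pd.DataFrame({
    'Name':  ['Name1'] * 3 + ['Name2'] * 2,
    'Info1': ['Name1-Info1'] * 3 + ['Name2-Info1'] * 2,
    'Info2': ['Name1-Info2'] * 3 + ['Name2-Info2'] * 2,
})

out = (df.groupby(['Name', 'Info1', 'Info2'])
         .size()
         .reset_index(name='Count'))
print(out)
#     Name        Info1        Info2  Count
# 0  Name1  Name1-Info1  Name1-Info2      3
# 1  Name2  Name2-Info1  Name2-Info2      2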
63,225,707 | 2020-8-3 | https://stackoverflow.com/questions/63225707/plotly-dash-refreshing-global-data-on-reload | Imagine I have a dash application where I want the global data to refresh on page reload. I'm using a function to serve the layout as described here. However, I'm note sure how/where I should define df such that I can use it in callbacks (like in a case where I'd like to subset the df based on some input and pass it to a layout table). My code below reloads the data on page refresh, but the callback cannot access the df. I'm very new to dash so apologies in advance for potentially dumb question. def serve_layout(): df = # Fetch data from DB return # Layout app.layout = serve_layout @app.callback() def my_func: # Here I want to reference df | The most common approach for sharing data between callbacks is to save the data in a dash_core_components.Store object, def serve_layout(): df = # Fetch data from DB store = Store(id="mystore", data=df.to_json()) # The store must be added to the layout return # Layout You can then add the store as a State argument for the callbacks that need access to the data, @app.callback(..., [State("mystore", "data")]) def my_func(..., data): df = pd.read_json(data) The main drawback of this approach is that the data is exchanged between the client and the server each time a callback is invoked. If the data frame is small, it doesn't really matter, but if it is large, the data exchange (and the serialization to/from JSON) might cause severe performance issues. It can be avoided by caching the data frame server side, either manually as demonstrated in the documentation or using the enriched components from dash-extensions. Here is a small example of the latter, import dash_core_components as dcc import dash_html_components as html import numpy as np import pandas as pd from dash_extensions.enrich import Dash, ServersideOutput, Output, Input, Trigger app = Dash() app.layout = html.Div([dcc.Store(id="store"), # this is the store that holds the data html.Div(id="onload"), # this div is used to trigger the query_df function on page load html.Div(id="log")]) @app.callback(ServersideOutput("store", "data"), Trigger("onload", "children")) def query_df(): return pd.DataFrame(data=np.random.rand(int(10)), columns=["rnd"]) # some random example data @app.callback(Output("log", "children"), Input("store", "data")) def print_df(df): return df.to_json() # do something with the data if __name__ == '__main__': app.run_server() tested with dash-extensions==0.0.27rc1. Disclaimer: I am the author of dash-extensions. | 8 | 10 |
63,226,700 | 2020-8-3 | https://stackoverflow.com/questions/63226700/overwrite-existing-files-with-pythons-wget | I have installed wget on my Python, and I'm downloading files from different URLs with it. So far my code looks like this: import wget urls = ['https://www.iedb.org/downloader.php?file_name=doc/epitope_full_v3.zip', 'https://www.iedb.org/downloader.php?file_name=doc/tcell_full_v3.zip', 'https://www.iedb.org/downloader.php?file_name=doc/bcell_full_v3.zip', 'https://www.iedb.org/downloader.php?file_name=doc/mhc_ligand_full_single_file.zip'] path = '/home/david/data/files/zip_files' for url in urls: wget.download(url, path) I want my code to overwrite the downloaded files if they exist, so that way every time I run the code I get the latest version of that files, instead of keeping the old ones and downloading the new ones with a different name (e.g, if epitope_full_v3.zip already exists, when I execute the code it will download it again, but will keep the old one and rename the new one to epitope_full_v3_1.zip). I know that wget can take an -O argument in the shell that allows you to do that, but I have not seen that for the python version on the documentation. I appreciate your help. | Although wget doesn't mentioned that, you could change it by yourself.Use os.path.basename() to get the filename, and check whether it exists.Like this: import wget import os urls = ['https://www.iedb.org/downloader.php?file_name=doc/epitope_full_v3.zip', 'https://www.iedb.org/downloader.php?file_name=doc/tcell_full_v3.zip', 'https://www.iedb.org/downloader.php?file_name=doc/bcell_full_v3.zip', 'https://www.iedb.org/downloader.php?file_name=doc/mhc_ligand_full_single_file.zip'] path = '/home/david/data/files/zip_files' for url in urls: filename = path + '/' + os.path.basename(url) # get the full path of the file if os.path.exists(filename): os.remove(filename) # if exist, remove it directly wget.download(url, out=filename) # download it to the specific path. | 7 | 8 |
63,200,201 | 2020-7-31 | https://stackoverflow.com/questions/63200201/create-a-bigquery-table-from-pandas-dataframe-without-specifying-schema-explici | I have a pandas dataframe and want to create a BigQuery table from it. I understand that there are many posts asking about this question, but all the answers I can find so far require explicitly specifying the schema of every column. For example: from google.cloud import bigquery as bq client = bq.Client() dataset_ref = client.dataset('my_dataset', project = 'my_project') table_ref = dataset_ref.table('my_table') job_config = bq.LoadJobConfig( schema=[ bq.SchemaField("a", bq.enums.SqlTypeNames.STRING), bq.SchemaField("b", bq.enums.SqlTypeNames.INT64), bq.SchemaField("c", bq.enums.SqlTypeNames.FLOAT64), ] ) client.load_table_from_dataframe(my_df, table_ref, job_config=job_config).result() However, sometimes I have a dataframe of many columns (for example, 100 columns), it's really non-trival to specify all the columns. Is there a way to do it efficiently? Btw, I found this post with similar question: Efficiently write a Pandas dataframe to Google BigQuery But seems like bq.Schema.from_dataframe does not exist: AttributeError: module 'google.cloud.bigquery' has no attribute 'Schema' | Here's a code snippet to load a DataFrame to BQ: import pandas as pd from google.cloud import bigquery # Example data df = pd.DataFrame({'a': [1,2,4], 'b': ['123', '456', '000']}) # Load client client = bigquery.Client(project='your-project-id') # Define table name, in format dataset.table_name table = 'your-dataset.your-table' # Load data to BQ job = client.load_table_from_dataframe(df, table) If you want to specify only a subset of the schema and still import all the columns, you can switch the last row with # Define a job config object, with a subset of the schema job_config = bigquery.LoadJobConfig(schema=[bigquery.SchemaField('b', 'STRING')]) # Load data to BQ job = client.load_table_from_dataframe(df, table, job_config=job_config) | 13 | 20 |
63,217,735 | 2020-8-2 | https://stackoverflow.com/questions/63217735/import-pyzbar-pyzbar-unable-to-find-zbar-shared-library | I want to make a script for detecting and reading QR codes from photos. I would like to use PyZbar for that, but I have a problem with some errors. I'm working in google colaboratory !sudo apt install tesseract-ocr !pip install pytesseract !pip install pyzbar[scripts] import shutil import os import random import re import cv2 import numpy as np import pytesseract from pytesseract import Output %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.pylab as pylab import glob import pyzbar.pyzbar from PIL import Image this is an error I'm struggling with: ImportError Traceback (most recent call last) <ipython-input-25-d8758fa4db37> in <module>() 24 import glob 25 # ZBAR - Bar Code Reader is an open source software suite for reading bar codes from various sources, such as video streams, image files and raw intensity sensors ---> 26 import pyzbar.pyzbar 27 # PIL - Python Imaging Library 28 from PIL import Image 4 frames /usr/local/lib/python3.6/dist-packages/pyzbar/zbar_library.py in load() 63 path = find_library('zbar') 64 if not path: ---> 65 raise ImportError('Unable to find zbar shared library') 66 libzbar = cdll.LoadLibrary(path) 67 dependencies = [] ImportError: Unable to find zbar shared library Thank You ind advance for your answers | Before you can !pip install pyzbar, you need to install libzbar with this command. !apt install libzbar0 Then, pyzbar should work. | 7 | 9 |
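Once libzbar0 is in place, a minimal decoding sketch (the image path is a placeholder):

from PIL import Image
from pyzbar.pyzbar import decode

results = decode(Image.open('qr_photo.png'))    # placeholder path
for r in results:
    print(r.type, r.data.decode('utf-8'))       # e.g. QRCODE <payload>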
63,223,548 | 2020-8-3 | https://stackoverflow.com/questions/63223548/pycharm-error-cannot-run-program-error-2-no-such-file-or-directory | I am getting the following error message when attempting to execute Python code in PyCharm: Cannot run program "/Users/x/.virtualenvs/untitled/bin/python" (in directory "/Users/x/PycharmProjects/untitled"): error=2, No such file or directory I made sure everything was updated and restarted my computer, but I still get the same error. I have no idea what the problem is. Edit I just opened my terminal and was faced with this error message: virtualenvwrapper_run_hook:12: no such file or directory: /usr/local/bin/python3.7 virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenvwrapper has been installed for VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3.7 and that PATH is set properly. I have no idea what happened here. I certainly didn't touch any of this. Edit 2 If I execute Python3 --version, I get Python 3.8.5. Edit 3 I followed this, but this error remains: Edit 4 This is the current state: I think this is related. | If it helps at all, this is what my venv settings look like. I don't have the answer as to why it happens, but I find it's usually when renaming the project. In the past I've recreated the project and copied the project files directly from the old folder to the new one in a file explorer (not PyCharm) and it's fixed it. | 12 | 4 |
63,221,997 | 2020-8-2 | https://stackoverflow.com/questions/63221997/how-to-get-class-path-in-python | In Python, we can define a class and print it as such: class a: num = 1 b = a() print(b) And we would get the output: <class '__main__.a'> I'm trying to identify unique classes, and I have some classes with longer paths. I would like to extract the "class path", or __main__.a in the case above. For example, if I print some longer class I would get: <class 'django_seed.tests.Game'> <class 'django_seed.tests.Player'> <class 'django_seed.tests.Action'> And I would like to extract: 'django_seed.tests.Game' 'django_seed.tests.Player' 'django_seed.tests.Action' Since I can cast <class 'django_seed.tests.Game'> to a string, I can substring it quite easily with '<class 'django_seed.tests.Game'>'[8:-2], but I'm sure there must be a cleaner way. Thanks! | The __module__ attribute can be used to access the "path" of a class (where it was defined) and the __name__ attribute returns the name of the class as a string print(f'{YourClass.__module__}.{YourClass.__name__}') | 8 | 13 |
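A small helper along the same lines, using __qualname__ instead of __name__ so nested classes keep their full dotted name (an addition, not part of the accepted answer):

def full_class_path(obj_or_cls):
    cls = obj_or_cls if isinstance(obj_or_cls, type) else type(obj_or_cls)
    return f"{cls.__module__}.{cls.__qualname__}"

class Outer:
    class Inner:
        pass

print(full_class_path(Outer.Inner))     # __main__.Outer.Inner
print(full_class_path(Outer.Inner()))   # __main__.Outer.Inner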
63,218,987 | 2020-8-2 | https://stackoverflow.com/questions/63218987/convert-x-escaped-string-into-readable-string-in-python | Is there a way to convert a \x escaped string like "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80" into readable form: "語言"? >>> a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80" >>> print(a) \xe8\xaa\x9e\xe8\xa8\x80 I am aware that there is a similar question here, but it seems the solution is only for latin characters. How can I convert this form of string into readable CJK characters? | Decode it first using 'unicode-escape', then as 'utf8': a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80" decoded = a.encode('latin1').decode('unicode_escape').encode('latin1').decode('utf8') print(decoded) # 語言 Note that since we can only decode bytes objects, we need to transparently encode it in between, using 'latin1'. | 7 | 14 |
63,201,014 | 2020-8-1 | https://stackoverflow.com/questions/63201014/display-grouped-list-of-items-from-python-list-cyclically | I have an array, given number of items in a group and number of groups, I need to print the array cyclically in a loop. Array-[1,2,3,4,5,6] Group- 4 Iterations- 7 Output should be: ['1', '2', '3', '4'] ['5', '6', '1', '2'] ['3', '4', '5', '6'] ['1', '2', '3', '4'] ['5', '6', '1', '2'] ['3', '4', '5', '6'] ['1', '2', '3', '4'] | One solution is to combine itertools.cycle with itertools.slice. from itertools import cycle, islice def format_print(iterable, group_size, iterations): iterable = cycle(iterable) for _ in range(iterations): print(list(islice(iterable, 0, group_size))) format_print(range(1, 7), 4, 7) Output: [1, 2, 3, 4] [5, 6, 1, 2] [3, 4, 5, 6] [1, 2, 3, 4] [5, 6, 1, 2] [3, 4, 5, 6] [1, 2, 3, 4] If it is required to print string lists, cycle(iterable) can be replaced with cycle(map(str, iterable)). | 10 | 5 |
63,212,888 | 2020-8-2 | https://stackoverflow.com/questions/63212888/sum-of-consecutive-pairs-in-a-list-including-a-sum-of-the-last-element-with-the | I have a list of elements like [1,3,5,6,8,7]. I want a list of sums of two consecutive elements of the list in a way that the last element is also added with the first element of the list. I mean in the above case, I want this list: [4,8,11,14,15,8] But when it comes to the addition of the last and the first element during for loop, index out of range occurs. Consider the following code: List1 = [1,3,5,6,8,7] List2 = [List1[i] + List1[i+1] for i in range (len(List1))] print(List2) | List2 = [List1[i] + List1[(i+1)%len(List1)] for i in range (len(List1))] | 7 | 4 |
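An equivalent spelling without index arithmetic, pairing the list with a rotated copy of itself:

List1 = [1, 3, 5, 6, 8, 7]
List2 = [a + b for a, b in zip(List1, List1[1:] + List1[:1])]
print(List2)   # [4, 8, 11, 14, 15, 8]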
63,207,734 | 2020-8-1 | https://stackoverflow.com/questions/63207734/implementing-inplace-operations-for-methods-in-a-class | In pandas lot's of methods have the keyword argument inplace. This means if inplace=True, the called function will be performed on the object itself, and returns None, on the other hand if inplace=False the original object will be untouched, and the method is performed on the returned new instance. I've managed to implement this functionality as follows: from copy import copy class Dummy: def __init__(self, x: int): self.x = x def increment_by(self, increment: int, inplace=True): if inplace: self.x += increment else: obj = copy(self) obj.increment_by(increment=increment, inplace=True) return obj def __copy__(self): cls = self.__class__ klass = cls.__new__(cls) klass.__dict__.update(self.__dict__) return klass if __name__ == "__main__": a = Dummy(1) a.increment_by(1) assert a.x == 2 b = a.increment_by(2, inplace=False) assert a.x == 2 assert b.x == 4 It works as expected. However I have many methods where I repeat that same template: def function(self, inplace=True, **kwds) if inplace: # do something else: obj = copy(self) obj.function(inplace=True, *args, **kwds) return obj To avoid repetition, I would like to create a decorator and mark functions which can be executed inplace and also non-inplace. I would like to use it this way from copy import copy class Dummy: def __init__(self, x: int): self.x = x @inplacify def increment_by(self, increment: int): self.x += increment # just the regular inplace way def __copy__(self): cls = self.__class__ klass = cls.__new__(cls) klass.__dict__.update(self.__dict__) return klass and I expect it to behave as the same as the example ahove. I've tried writing different decorators (something starting like this def inplacify(method): def inner(self, *method_args, **method_kwds): inplace = method_kwds.pop("inplace", True) def new_method(inplace, *method_args, **method_kwds): ) but I got stuck every time. I need the reference for self in order to return a copy of the class, which I don't have there. Also it feels a little vague to change the function signature with a decorator. I have several questions: Is this behaviour can be implemented? Do I need a class decorator? Is it considered to be a bad practice, and if so, what would be the best option to deal with such issue? | If your method has return self, the following works: import copy def inplacify(method): def wrap(self,*a,**k): inplace = k.pop("inplace",True) if inplace: method(self,*a,**k) else: return method(copy.copy(self),*a,**k) return wrap class classy: def __init__(self,n): self.n=n @inplacify def func(self,val): self.n+=val return self I tested it: inst = classy(5) print(inst.n) inst.func(4) print(inst.n) obj = inst.func(3,inplace=False) print(inst.n,obj.n) And got the expected result: 5 9 9 12 Hope this works for your needs. | 7 | 5 |
63,185,157 | 2020-7-31 | https://stackoverflow.com/questions/63185157/can-i-make-my-custom-pytorch-modules-behave-differently-when-train-or-eval-a | According to the official documents, using train() or eval() will have effects on certain modules. However, now I wish to achieve a similar thing with my custom module, i.e. it does something when train() is turned on, and something different when eval() is turned on. How can I do this? | Yes, you can. As you can see in the source code, eval() and train() are basically changing a flag called self.training (note that it is called recursively): def train(self: T, mode: bool = True) -> T: self.training = mode for module in self.children(): module.train(mode) return self def eval(self: T) -> T: return self.train(False) This flag is available in every nn.Module. If your custom module inherits this base class, then it is quite simple to achieve what you want: import torch.nn as nn class MyCustomModule(nn.Module): def __init__(self): super().__init__() # [...] def forward(self, x): if self.training: # train() -> training logic else: # eval() -> inference logic | 7 | 7 |
63,194,140 | 2020-7-31 | https://stackoverflow.com/questions/63194140/how-to-sort-a-group-in-a-way-that-i-get-the-largest-number-in-the-first-row-and | So I have a df like this In [1]:data= {'Group': ['A','A','A','A','A','A','B','B','B','B'], 'Name': [ ' Sheldon Webb',' Traci Dean',' Chad Webster',' Ora Harmon',' Elijah Mendoza',' June Strickland',' Beth Vasquez',' Betty Sutton',' Joel Gill',' Vernon Stone'], 'Performance':[33,64,142,116,122,68,95,127,132,80]} In [2]:df = pd.DataFrame(data, columns = ['Group', 'Name','Performance']) Out[1]: Group Name Performance 0 A Sheldon Webb 33 1 A Traci Dean 64 2 A Chad Webster 142 3 A Ora Harmon 116 4 A Elijah Mendoza 122 5 A June Strickland 68 6 B Beth Vasquez 95 7 B Betty Sutton 127 8 B Joel Gill 132 9 B Vernon Stone 80 I want to sort it in such an alternating way that within a group, say group "A", the first row should have its highest performing person (in this case "Chad Webster") and then in the second row the least performing (which is "Sheldon Webb"). The output I am looking for would look something like this: Out[2]: Group Name Performance 0 A Chad Webster 142 1 A Sheldon Webb 33 2 A Elijah Mendoza 122 3 A Traci Dean 64 4 A Ora Harmon 116 5 A June Strickland 68 6 B Joel Gill 132 7 B Vernon Stone 80 8 B Betty Sutton 127 9 B Beth Vasquez 95 You can see the sequence is alternating between the highest and lowest within a group. | Take the sorted order and then apply a quadratic function to it where the root is 1/2 the length of the array (plus some small offset). This way the highest rank is given to the extremal values (the sign of the eps offset determines whether you want a the highest value ranked above the lowest value). I added a small group at the end to show how it properly handles repeated values or an odd group size. def extremal_rank(s): eps = 10**-4 y = (pd.Series(np.arange(1, len(s)+1), index=s.sort_values().index) - (len(s)+1)/2 + eps)**2 return y.reindex_like(s) df['rnk'] = df.groupby('Group')['Performance'].apply(extremal_rank) df = df.sort_values(['Group', 'rnk'], ascending=[True, False]) Group Name Performance rnk 2 A Chad Webster 142 6.2505 0 A Sheldon Webb 33 6.2495 4 A Elijah Mendoza 122 2.2503 1 A Traci Dean 64 2.2497 3 A Ora Harmon 116 0.2501 5 A June Strickland 68 0.2499 8 B Joel Gill 132 2.2503 9 B Vernon Stone 80 2.2497 7 B Betty Sutton 127 0.2501 6 B Beth Vasquez 95 0.2499 11 C b 110 9.0006 12 C c 68 8.9994 10 C a 110 4.0004 13 C d 68 3.9996 15 C f 70 1.0002 16 C g 70 0.9998 14 C e 70 0.0000 | 8 | 4 |
63,194,917 | 2020-7-31 | https://stackoverflow.com/questions/63194917/python-how-to-solve-oserror-errno-22-invalid-argument | I am learning about file objects in python but whenever I try to open a file it shows the following error. I have already checked that the file is in the same directory and it exists. This error occurs only if I name my file test; if I use any other name then it works fine. here's my CODE f = open('C:\\Users\Tanishq\Desktop\python tutorials\test.txt', 'r') here's the ERROR Traceback (most recent call last): File "C:/Users/Tanishq/Desktop/question.py", line 1, in <module> f = open('C:\\Users\Tanishq\Desktop\python tutorials\test.txt', 'r') OSError: [Errno 22] Invalid argument: 'C:\\Users\\Tanishq\\Desktop\\python tutorials\test.txt' | Your issue is with backslashing characters like \T: Try: f = open(r'C:\\Users\Tanishq\Desktop\python tutorials\test.txt', 'r') Python uses \ to denote special characters. Therefore, the string you provided does not actually truly represent the correct filepath, since Python will interpret \Tanishq\ differently than the raw string itself. This is why we place the r in front of it. This lets Python know that we do indeed want to use the raw string and to treat \ as a normal character. | 10 | 14 |
63,191,845 | 2020-7-31 | https://stackoverflow.com/questions/63191845/confusion-related-to-pythons-in-operator | I found strange behavior with Python's in operator d = {} 'k' in d == False # False! I thought it's because of precedence: ('k' in d) == False # True, it's okay 'k' in (d == False) # Error, it's also okay But, what precedence evaluates the following expression then? d = {} 'k' in d == False If it's because of wrong precedence why it doesn't fire an error like if: 'k' in (d == False) In other words, what happens under the hood of Python with this expression? 'k' in d == False | in is considered a comparison operator, and so it is subject to comparison chaining. 'k' in d == False is equivalent to 'k' in d and d == False because both in and == are comparison operators. You virtually never need direct comparison to Boolean literals, though. The "correct" expression here is 'k' not in d. For reference, this is described in the Python documentation, under 6.10. Comparisons: comparison ::= or_expr (comp_operator or_expr)* comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "!=" | "is" ["not"] | ["not"] "in" and Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false). | 13 | 23 |
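A quick runnable check of the chaining rule described above:

d = {}
print('k' in d == False)      # False - parsed as ('k' in d) and (d == False)
print(('k' in d) == False)    # True  - explicit grouping changes the meaning
print('k' not in d)           # True  - the idiomatic test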
63,183,234 | 2020-7-31 | https://stackoverflow.com/questions/63183234/trying-to-understand-init-py-combined-with-getattr | I am trying to understand a piece of Python code. First, I have the following file structure: folder1-- model __init__.py file1.py file2.py __init__.py The __init__.py in the folder folder1 has nothing. The __init__.py in the folder model has the following: import os files = os.listdir(os.path.split(os.path.realpath(__file__))[0]) files.remove('__init__.py') for file in files: if file.endswith('.py'): exec('from .{} import *'.format(file[:-3])) With that said, I have some code in python that uses all the above Now, I am trying to understand the following code from folder1 import model as mymodel My first question is what does this do? I mean model is a folder name right? it is not an object. or is it? What is exactly importing as mymodel here? then later in the same code it says global args, cfg, is_fov120 args = parser.parse_args() model = getattr(mymodel, args.arch)(sync_bn=False) Apparently there is some arguments called arch. What is happening here and what does model have after this? Edit When I do print(mymodel) I get <module 'folder1.model' from 'C:\\path\\to\\folder1\\model\\__init__.py'> Investigating even more I can see that I have imported all the objects from the files in the folder model. mymodel.files gives the list of files in the folder, and I can call mymodel.somevariable if some variable was defined in file1.py or file2.py. As for classes I have to create an object first like x=mymodel.aClass() and then I can access the elements of the object x.someElement. Finally I found out that getattr is getting a class from the files inside model and I can guess that the sync_bn=False is a parameter to the constructor of that class. So in the end, model is an object of that class. | If you desire to have a folder as a python module, the folder must contain an __init__.py, even if it is empty. Then you can import the rest. import os files = os.listdir(os.path.split(os.path.realpath(__file__))[0]) #get the folder's content files.remove('__init__.py') #remove __init__.py since it is empty for file in files: #loop through the files if file.endswith('.py'): #if it is a python file exec('from .{} import *'.format(file[:-3])) #import The above code, imports every other .py files than the __init__, which is empty. from folder1 import model as mymodel Here folder1 is the module, and model is the object you imported from (folder) model in this case, since it is now imported to folder1's __init__.py, and now it is part of folder1 (which is a module as discussed). model = getattr(mymodel, args.arch)(sync_bn=False) This line is equal to: mymodel.attr, where attr is the desired attribute of the object. Can you please post more code around the getattr, since I can't tell what args.arch are refering to. As Pyzard suggested, the getattr method gets an attribute, which is a function, since it is getting called, and method is the value that this function returned. In this case sync_bn is irrevelant, but knowing more about args.arch would still help. More about the getattr function, how import works. Better explanation of how init.py works. | 7 | 5 |
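A self-contained sketch of the getattr pattern from the end of the question; SomeNet and the sys.modules lookup are stand-ins for the real model package and the value of args.arch:

import argparse
import sys

class SomeNet:                              # stand-in for a class exported by model/*.py
    def __init__(self, sync_bn=False):
        self.sync_bn = sync_bn

mymodel = sys.modules[__name__]             # stand-in for `from folder1 import model as mymodel`

parser = argparse.ArgumentParser()
parser.add_argument('--arch', default='SomeNet')
args = parser.parse_args([])

ModelClass = getattr(mymodel, args.arch)    # look the class up by its name string
model = ModelClass(sync_bn=False)           # then call its constructor
print(type(model).__name__, model.sync_bn)  # SomeNet False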
63,184,392 | 2020-7-31 | https://stackoverflow.com/questions/63184392/pandas-merge-and-keep-only-non-matching-records | How can I merge/join these two dataframes ONLY on "id". Produce 3 new dataframes: 1)R1 = Merged records 2)R2 = (DF1 - Merged records) 3)R3 = (DF2 - Merged records) Using pandas in Python. First dataframe (DF1) | id | name | |-----------|-------| | 1 | Mark | | 2 | Dart | | 3 | Julia | | 4 | Oolia | | 5 | Talia | Second dataframe (DF2) | id | salary | |-----------|--------| | 1 | 20 | | 2 | 30 | | 3 | 40 | | 4 | 50 | | 6 | 33 | | 7 | 23 | | 8 | 24 | | 9 | 28 | My solution for R1 =pd.merge(DF1, DF2, on='id', how='inner') I am unsure that is the easiest way to get R2 and R3 R2 should look like | id | name | |-----------|-------| | 5 | Talia | R3 should look like: | id | salary | |-----------|--------| | 6 | 33 | | 7 | 23 | | 8 | 24 | | 9 | 28 | | You can turn on indicator in merge and look for the corresponding values: total_merge = df1.merge(df2, on='id', how='outer', indicator=True) R1 = total_merge[total_merge['_merge']=='both'] R2 = total_merge[total_merge['_merge']=='left_only'] R3 = total_merge[total_merge['_merge']=='right_only'] Update: Ben's suggestion would be something like this: dfs = {k:v for k,v in total_merge.groupby('_merge')} and then you can do, for examples: dfs['both'] | 6 | 14 |
63,162,731 | 2020-7-29 | https://stackoverflow.com/questions/63162731/how-can-i-automate-slicing-of-a-dataframe-into-batches-so-as-to-avoid-memoryerro | I have a dataframe with 2.7 million rows as you see below- df Out[10]: ClaimId ServiceSubCodeKey ClaimRowNumber SscRowNumber 0 1902659 183 1 1 1 1902659 2088 1 2 2 1902663 3274 2 1 3 1902674 12 3 1 4 1902674 23 3 2 ... ... ... ... 2793010 2563847 3109 603037 4 2793011 2563883 3109 603038 1 2793012 2564007 3626 603039 1 2793013 2564007 3628 603039 2 2793014 2564363 3109 603040 1 [2793015 rows x 4 columns] I am trying to Hot Encode this in python below but I end up with a Memory error: import pandas as pd columns = ( pd.get_dummies(df["ServiceSubCodeKey"]) .reindex(range(df.ServiceSubCodeKey.min(), df.ServiceSubCodeKey.max()+1), axis=1, fill_value=0) # now it has all digits .astype(str) ) # this will create codes codes_values = [int(''.join(r)) for r in columns.itertuples(index=False)] codes = pd.Series({'test': codes_values}).explode() codes.index = df.index # groupby and aggregate the values into lists dfg = codes.groupby(df.ClaimId).agg(list).reset_index() # sum the lists; doing this with a pandas function also does not work, so no .sum or .apply summed_lists = list() for r, v in dfg.iterrows(): summed_lists.append(str(sum(v[0]))) # assign the list of strings to a column dfg['sums'] = summed_lists # perform the remainder of the functions on the sums column dfg['final'] = dfg.sums.str.pad(width=columns.shape[1], fillchar='0').str.rstrip('0') # merge df and dfg.final dfm = pd.merge(df, dfg[['ClaimId', 'final']], on='ClaimId') dfm File "pandas/_libs/lib.pyx", line 574, in pandas._libs.lib.astype_str MemoryError How can I do this in automated batches so it doesnt give me a memory error? | onehot = [] for groupi, group in df.groupby(df.index//1e5): # encode each group separately onehot.expand(group_onehot) df = df.assign(onehot=onehot) Will give you 28 groups to work on individually. However, looking at your code, the line: codes_values = [int(''.join(r)) for r in columns.itertuples(index=False)] Is creating a string possibly up to 4k digits long and trying to create an integer in the range 10e4000, which will cause an overflow ( see https://numpy.org/devdocs/user/basics.types.html) Edit An alternative encoding method. Starting with this df: df = pd.DataFrame({ 'ClaimId': [1902659, 1902659, 1902663, 1902674, 1902674, 2563847, 2563883, 2564007, 2564007, 2564363], 'ServiceSubCodeKey': [183, 2088, 3274, 12, 23, 3109, 3109, 3626, 3628, 3109] }) The code: scale = df.ServiceSubCodeKey.max() + 1 onehot = [] for claimid, ssc in df.groupby('ClaimId').ServiceSubCodeKey: ssc_list = ssc.to_list() onehot.append([claimid, ''.join(['1' if i in ssc_list else '0' for i in range(1, scale)])]) onehot = pd.DataFrame(onehot, columns=['ClaimId', 'onehot']) print(onehot) Output ClaimId onehot 0 1902659 0000000000000000000000000000000000000000000000... 1 1902663 0000000000000000000000000000000000000000000000... 2 1902674 0000000000010000000000100000000000000000000000... 3 2563847 0000000000000000000000000000000000000000000000... 4 2563883 0000000000000000000000000000000000000000000000... 5 2564007 0000000000000000000000000000000000000000000000... 6 2564363 0000000000000000000000000000000000000000000000... 
This fixes the overflow issue in your method and avoids calling pd.get_dummies() to create a 600K x 4K dummy dataframe, with the handicap of iterating a grouped series and building a list-comprehension on every group (neither take advantage of built-in C implementations from pandas). From here you can: Recomended: carry on keeping a summary of one-hot encodings per ClaimId, or What you ask for: merge into df as you want, duplicating the same encoding as many times as ClaimId is duplicated in df with df = df.merge(onehot, on='ClaimId') Output ClaimId ServiceSubCodeKey onehot 0 1902659 183 0000000000000000000000000000000000000000000000... 1 1902659 2088 0000000000000000000000000000000000000000000000... 2 1902663 3274 0000000000000000000000000000000000000000000000... 3 1902674 12 0000000000010000000000100000000000000000000000... 4 1902674 23 0000000000010000000000100000000000000000000000... 5 2563847 3109 0000000000000000000000000000000000000000000000... 6 2563883 3109 0000000000000000000000000000000000000000000000... 7 2564007 3626 0000000000000000000000000000000000000000000000... 8 2564007 3628 0000000000000000000000000000000000000000000000... 9 2564363 3109 0000000000000000000000000000000000000000000000... | 7 | 2 |
63,179,049 | 2020-7-30 | https://stackoverflow.com/questions/63179049/whats-the-purpose-of-the-new-method-int-as-integer-ratio-in-python-3-8 | I was taking a look at the new features that were implemented in python 3.8, and I found out about a new function numerator, denominator = x.as_integer_ratio(). They state in the documentation that it Return a pair of integers whose ratio is exactly equal to the original integer and with a positive denominator. The integer ratio of integers (whole numbers) is always the integer as the numerator and 1 as the denominator. Basically this code x = 10 numerator, denominator = x.as_integer_ratio() print(numerator) print(denominator) Output 10 1 I was just wondering what is the point of having a function that will always return the same value and 1 ? I also saw that it was previously available for float which makes sense. | It appears the main reason for the changes were to implement uniformity, and typing for mypy, so that an int can be a subtype of float msg313780 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-03-13 21:25 Goal: make int() more interoperable with float by making a float/Decimal method also available on ints. This will let mypy treat ints as a subtype of floats. See: https://mail.python.org/pipermail/python-dev/2018-March/152384.html Open question: Is this also desired for fractions.Fraction and numbers.Rational? and digging into the pipermail it appears it helps with being able to change the domain of a type via decomposition, etc: [Python-Dev] Symmetry arguments for API expansion Guido van Rossum guido at python.org Tue Mar 13 15:07:15 EDT 2018 Previous message (by thread): [Python-Dev] Symmetry arguments for API expansion Next message (by thread): [Python-Dev] Symmetry arguments for API expansion Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] OK, please make it so. On Tue, Mar 13, 2018 at 11:39 AM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: On Mar 13, 2018, at 10:43 AM, Guido van Rossum wrote: So let's make as_integer_ratio() the standard protocol for "how to make a Fraction out of a number that doesn't implement numbers.Rational". We already have two examples of this (float and Decimal) and perhaps numpy or the sometimes proposed fixed-width decimal type can benefit from it too. If this means we should add it to int, that's fine with me. I would like that outcome. The signature x.as_integer_ratio() -> (int, int) is pleasant to work with. The output is easy to explain, and the denominator isn't tied to powers of two or ten. Since Python ints are exact and unbounded, there isn't worry about range or rounding issues. In contrast, math.frexp(float) ->(float, int) is a bit of pain because it still leaves you in the domain of floats rather than letting you decompose to more more basic types. It's nice to have a way to move down the chain from ℚ, ℝ, or ℂ to the more basic ℤ (of course, that only works because floats and complex are implemented in a way that precludes exact irrationals). Raymond see: https://bugs.python.org/issue33073 | 7 | 3 |
63,177,236 | 2020-7-30 | https://stackoverflow.com/questions/63177236/how-to-calculate-signal-to-noise-ratio-using-python | Dear experts, I have a data set and I just want to calculate the signal-to-noise ratio of the data. The data is loaded here https://i.fluffy.cc/jwg9d7nRNDFqdzvg1Qthc0J7CNtKd5CV.html and my code is given below: import numpy as np from scipy import signaltonoise import scipy.io dat=scipy.io.loadmat('./data.mat') arr=dat['dn'] snr=scipy.stats.signaltonoise(arr, axis=0, ddof=0) But I am getting an error like ImportError: cannot import name 'signaltonoise' from 'scipy' If it doesn't exist, how do I calculate the SNR? Please suggest some other way to do it with this data set using Python. | scipy.stats.signaltonoise was removed in scipy 1.0.0. You can either downgrade your scipy version or create the function yourself: def signaltonoise(a, axis=0, ddof=0): a = np.asanyarray(a) m = a.mean(axis) sd = a.std(axis=axis, ddof=ddof) return np.where(sd == 0, 0, m/sd) Source: https://github.com/scipy/scipy/blob/v0.16.0/scipy/stats/stats.py#L1963 See the github link for the docstring. Edit: the full script would then look as follows: import numpy as np import scipy.io def signaltonoise(a, axis=0, ddof=0): a = np.asanyarray(a) m = a.mean(axis) sd = a.std(axis=axis, ddof=ddof) return np.where(sd == 0, 0, m/sd) dat = scipy.io.loadmat('./data.mat') arr = dat['dn'] snr = signaltonoise(arr) | 8 | 12 |
63,166,479 | 2020-7-30 | https://stackoverflow.com/questions/63166479/valueerror-validation-split-is-only-supported-for-tensors-or-numpy-arrays-fo | When I tried to add validation_split to my LSTM model, I got this error: ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found: (<tensorflow.python.keras.preprocessing.sequence.TimeseriesGenerator object) This is the code: from keras.preprocessing.sequence import TimeseriesGenerator train_generator = TimeseriesGenerator(df_scaled, df_scaled, length=n_timestamp, batch_size=1) model.fit(train_generator, epochs=50,verbose=2,callbacks=[tensorboard_callback], validation_split=0.1) ---------- ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found: (<tensorflow.python.keras.preprocessing.sequence.TimeseriesGenerator object) One reason I could think of is that validation_split expects a tensor or numpy array, as mentioned in the error; however, passing the train data through TimeseriesGenerator changes the dimensions of the train data to a 3D array. And since TimeseriesGenerator is mandatory when using LSTM, does this mean we can't use validation_split for LSTM? | Your first intuition is right: you can't use validation_split when using a dataset generator. You have to understand how a dataset generator works. The model.fit API does not know how many records or batches your dataset has in its first epoch, because the data is generated and supplied to the model one batch at a time during training. So there is no way for the API to know how many records there are initially and then carve a validation set out of them. For this reason you cannot use validation_split with a dataset generator. You can read it in their documentation. Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. Note the last two lines, where it says that this argument is not supported for a dataset or generator. What you can do instead is use the following code to split the dataset. You can read about it in detail here; I am just writing the important part from the link below. # Splitting the dataset for training and testing. def is_test(x, _): return x % 4 == 0 def is_train(x, y): return not is_test(x, y) recover = lambda x, y: y # Split the dataset for training. test_dataset = dataset.enumerate() \ .filter(is_test) \ .map(recover) # Split the dataset for testing/validation. train_dataset = dataset.enumerate() \ .filter(is_train) \ .map(recover) I hope my answer helps you. | 8 | 17 |
63,157,909 | 2020-7-29 | https://stackoverflow.com/questions/63157909/how-to-determine-if-two-sentences-talk-about-similar-topics | I would like to ask you a question. Is there any algorithm/tool which can allow me to do some association between words? For example: I have the following group of sentences: (1) "My phone is on the table" "I cannot find the charger". # no reference on phone (2) "My phone is on the table" "I cannot find the phone's charger". What I would like to do is to find a connection, probably a semantic connection, which can allow me to say that the first two sentences are talking about a topic (phone) as two terms (phone and charger) are common within it (in general). Same for the second pair of sentences. I should have something that can connect phone to charger, in the first sentence. I was thinking of using Word2vec, but I am not sure if this is something that I can do with it. Do you have any suggestions about algorithms that I can use to determine similarity of topics (i.e. sentences which are formulated in a different way but have the same topic)? | In Python I'm pretty sure you can use the SequenceMatcher: from difflib import SequenceMatcher def similar(a, b): return SequenceMatcher(None, a, b).ratio() If you want your own algorithm I would suggest the Levenshtein distance (it calculates how many operations you need to turn one string (sentence) into another; it might be useful). I coded it myself like this for two strings: def levenshtein(str1, str2): edits = [[x for x in range(len(str1) + 1)] for y in range(len(str2) + 1)] for i in range(len(str2) + 1): edits[i][0] = i for i in range(1, len(str2) + 1): for j in range(1, len(str1) + 1): if str2[i-1] == str1[j-1]: edits[i][j] = edits[i-1][j-1] else: edits[i][j] = 1 + min(edits[i-1][j-1], edits[i-1][j], edits[i][j-1]) return edits[-1][-1] [EDIT] Since you want to compare whether the sentences are about a similar topic, I would suggest any of the following approaches (all are pretty easy): Jaccard similarity, K-means and hierarchical clustering (dendrogram), cosine similarity. | 7 | 4 |
63,152,656 | 2020-7-29 | https://stackoverflow.com/questions/63152656/installing-rdkit-in-google-colab | I cannot figure out how to fix the following issue. Up until today I was using the following code snippet for installing RDKit in Google Colab: !wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh !chmod +x Miniconda3-latest-Linux-x86_64.sh !time bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local !time conda install -q -y -c conda-forge rdkit import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') However, today I started to get the following error: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-d24c24e2d1f9> in <module>() ----> 1 from rdkit import Chem 2 import networkx as nx ModuleNotFoundError: No module named 'rdkit' I've tried using the full Anaconda distribution instead of Miniconda, as well as changing the python version to 3.6 and 3.8 but nothing seems to work. | I think you need to specify python 3.7 when you install Miniconda (the current rdkit build supports python 3.7), the latest Miniconda version is py3.8: !wget -c https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.3-Linux-x86_64.sh !chmod +x Miniconda3-py37_4.8.3-Linux-x86_64.sh !time bash ./Miniconda3-py37_4.8.3-Linux-x86_64.sh -b -f -p /usr/local !time conda install -q -y -c conda-forge rdkit import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') https://colab.research.google.com/drive/1MAZyv3O4-TrI8c1MD4JVmwExDquaprRT?usp=sharing | 6 | 3 |
63,156,252 | 2020-7-29 | https://stackoverflow.com/questions/63156252/python-compare-two-sentence-by-words-using-difflib | Im using difflib and tried to compare the two sentence and get the difference. Somewhat like this. i have this code but instead of word by word it analyzed letter by letter. import difflib # define original text # taken from: https://en.wikipedia.org/wiki/Internet_Information_Services original = ["IIS 8.5 has several improvements related"] # define modified text edited = ["It has several improvements related"] # initiate the Differ object d = difflib.Differ() # calculate the difference between the two texts diff = d.compare(original, edited) # output the result print ('\n'.join(diff)) | If you remove the []'s from your strings, and call .split() on them in the .compare() perhaps you'll get what you want. import difflib # define original text # taken from: https://en.wikipedia.org/wiki/Internet_Information_Services original = "IIS 8.5 has several improvements related" # define modified text edited = "It has several improvements related" # initiate the Differ object d = difflib.Differ() # calculate the difference between the two texts diff = d.compare(original.split(), edited.split()) # output the result print ('\n'.join(diff)) Output + It - IIS - 8.5 has several improvements related | 6 | 15 |
63,153,629 | 2020-7-29 | https://stackoverflow.com/questions/63153629/use-data-coords-for-x-axis-coords-for-y-for-text-annotations | From the documentation: The default transform specifies that text is in data coords, alternatively, you can specify text in axis coords (0,0 is lower-left and 1,1 is upper-right). The example below places text in the center of the axes: >>> text(0.5, 0.5, 'matplotlib', horizontalalignment='center', ... verticalalignment='center', transform=ax.transAxes) Can I instead use both data and axis coords? For x and y respectively. Example code: import random import matplotlib.pyplot as plt values = [random.randint(2,30) for _ in range(15)] plt.violinplot(values, positions=[1]) # This might place the text outside the figure plt.gca().text(1, 30, "Text") # I would like to use axis coords for y instead of data coords. Example call # would be something like this: # text_mixed_coords(xdata=1, yaxis=0.9, "Text") plt.savefig("plot.png") Potential results: See also: Putting text in top left corner of matplotlib plot | This is known as a "blended transformation" you can create a blended transformation that uses the data coordinates for the x axis and axes coordinates for the y axis, like so: import matplotlib.transforms as transforms trans = transforms.blended_transform_factory(ax.transData, ax.transAxes) In your minimal example: import random import matplotlib.pyplot as plt import matplotlib.transforms as transforms fig, ax = plt.subplots() values = [random.randint(2,30) for _ in range(15)] ax.violinplot(values, positions=[1]) # the x coords of this transformation are data, and the # y coord are axes trans = transforms.blended_transform_factory( ax.transData, ax.transAxes) ax.text(1, 0.9, "Text", transform=trans) plt.savefig("plot.png") Also worth noting this from the matplotlib tutorial which makes it a little easier in this particular case: Note: The blended transformations where x is in data coords and y in axes coordinates is so useful that we have helper methods to return the versions mpl uses internally for drawing ticks, ticklabels, etc. The methods are matplotlib.axes.Axes.get_xaxis_transform() and matplotlib.axes.Axes.get_yaxis_transform(). So in the example above, the call to blended_transform_factory() can be replaced by get_xaxis_transform: trans = ax.get_xaxis_transform() | 19 | 21 |
63,145,532 | 2020-7-29 | https://stackoverflow.com/questions/63145532/how-to-convert-ragged-tensor-to-tensor-in-python | I have a ragged tensor, and upon trying to create a model and use model.fit(), I get an error: TypeError: Failed to convert object of type <class 'tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor'> to Tensor. Contents: tf.RaggedTensor(values=Tensor("Cast_1:0", shape=(None,), dtype=float32), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(None,), dtype=int64)). Consider casting elements to a supported type. Is this an issue with the shape of my data? Perhaps the data altogether? Maybe I need to use a sparse or dense tensor instead? Here is my full traceback error, as well as my data: | Every RaggedTensor has an associated to_tensor() method; call that. Check your input data transformations; that is where the RaggedTensor is getting created. Some ops return RaggedTensor. | 9 | 19 |
63,146,892 | 2020-7-29 | https://stackoverflow.com/questions/63146892/how-to-load-a-keras-model-saved-as-pb | I'd like to load a keras model that I've trained and saved as .pb. Here's the code; I am using a jupyter notebook. The model is successfully saved as saved_model.pb under the same directory, but the code is unable to access it. Can anybody help me figure out how I can access this keras model that's saved with the .pb extension? I checked several other places for a solution but had no luck. The model is saved at model/saved_model.pb. I've taken out the .pb file and placed it in the same directory where my code file exists. | The function tf.keras.models.load_model loads a SavedModel into a tf.keras model. The argument of the function is the path to the saved model. So try model = tf.keras.models.load_model('model') | 10 | 14 |
63,125,259 | 2020-7-28 | https://stackoverflow.com/questions/63125259/what-is-the-proper-way-to-type-hint-the-return-value-of-an-asynccontextmanager | What is the proper way to add type hints for the return of a function with the @asynccontextmanager decorator? Here are two attempts I made that both fail. from contextlib import asynccontextmanager from typing import AsyncContextManager async def caller(): async with return_str() as i: print(i) async with return_AsyncContextManager() as j: print(j) @asynccontextmanager async def return_str() -> str: yield "hello" @asynccontextmanager async def return_AsyncContextManager() -> AsyncContextManager[str]: yield "world" For both i and j Pylance in vscode shows type Any. Other ideas I've considered: I thought maybe I could pass in the type info to the decorator itself (like @asynccontextmanager(cls=str), but I can't find any examples of that, or any description of the args I could pass in. async with return_str() as i: # type: str doesn't work either. Even if it did, I'd rather hint at the function definition, not at every invocation. Type comments are not very DRY. I tried to create an AsyncContextManager class with __aenter()__ and __aexit()__ functions, but was not successful. I would fall back to that if it worked, but I'd prefer to make the decorator work because it's much more concise. Here's a screencap of me hovering the return_AsyncContextManager() function, and showing the Pylance popup saying it returns AsyncContextManager[_T] | You have to hint AsyncIterator as the return type, like this: @asynccontextmanager async def my_acm() -> AsyncIterator[str]: yield "this works" async def caller(): async with my_acm() as val: print(val) This is because the yield keyword is used to create generators. Consider: def a_number_generator(): for x in range(10): # type: int yield x g = a_number_generator() # g is type Generator[int] This makes sense given the type hints for @asynccontextmanager: asynccontextmanager(func: Callable[_P, AsyncIterator[_T_co]]) -> Callable[_P, _AsyncGeneratorContextManager[_T_co]] That's a lot to parse but it says that the asynccontextmanager takes a function which returns AsyncIterator and transforms it into a new function that returns AsyncContextManager. The generic types _P and _T_co are preserved as well. Here is a screenshot showing the type hint transferring into the caller function. | 16 | 17 |
63,107,594 | 2020-7-27 | https://stackoverflow.com/questions/63107594/how-to-deal-with-multi-level-column-names-downloaded-with-yfinance | I have a list of tickers (tickerStrings) that I have to download all at once. When I try to use Pandas' read_csv it doesn't read the CSV file in the way it does when I download the data from yfinance. I usually access my data by ticker like this: data['AAPL'] or data['AAPL'].Close, but when I read the data from the CSV file it does not let me do that. if path.exists(data_file): data = pd.read_csv(data_file, low_memory=False) data = pd.DataFrame(data) print(data.head()) else: data = yf.download(tickerStrings, group_by="Ticker", period=prd, interval=intv) data.to_csv(data_file) Here's the print output: Unnamed: 0 OLN OLN.1 OLN.2 OLN.3 ... W.1 W.2 W.3 W.4 W.5 0 NaN Open High Low Close ... High Low Close Adj Close Volume 1 Datetime NaN NaN NaN NaN ... NaN NaN NaN NaN NaN 2 2020-06-25 09:30:00-04:00 11.1899995803833 11.220000267028809 11.010000228881836 11.079999923706055 ... 201.2899932861328 197.3000030517578 197.36000061035156 197.36000061035156 112156 3 2020-06-25 09:45:00-04:00 11.130000114440918 11.260000228881836 11.100000381469727 11.15999984741211 ... 200.48570251464844 196.47999572753906 199.74000549316406 199.74000549316406 83943 4 2020-06-25 10:00:00-04:00 11.170000076293945 11.220000267028809 11.119999885559082 11.170000076293945 ... 200.49000549316406 198.19000244140625 200.4149932861328 200.4149932861328 88771 The error I get when trying to access the data: Traceback (most recent call last): File "getdata.py", line 49, in processData avg = data[x].Close.mean() AttributeError: 'Series' object has no attribute 'Close' | In dealing with financial data from multiple tickers, specifically using yfinance and pandas, the process can be broken down into a few key steps: downloading the data, organizing it in a structured format, and accessing it in a way that aligns with the user's needs. Below, the answer is organized into clear, actionable segments. Downloading Data for Multiple Tickers Direct Download and DataFrame Creation Single Ticker, Single DataFrame Approach: For individual tickers, the DataFrame downloaded directly from yfinance comes with single-level column names but lacks a ticker column. By iterating over each ticker, adding a ticker column, and then combining these into a single DataFrame, a clear structure for each ticker's data is maintained. import yfinance as yf import pandas as pd tickerStrings = ['AAPL', 'MSFT'] df_list = [] for ticker in tickerStrings: data = yf.download(ticker, group_by="Ticker", period='2d') data['ticker'] = ticker # Add ticker column df_list.append(data) # Combine all dataframes into a single dataframe df = pd.concat(df_list) df.to_csv('ticker.csv') Condensed Single DataFrame Approach: Achieve the same result as above with a one-liner using list comprehension, streamlining the process of fetching and combining data. # Download 2 days of data for each ticker in tickerStrings, add a 'ticker' column for identification, and concatenate into a single DataFrame with continuous indexing. df = pd.concat([yf.download(ticker, group_by="Ticker", period='2d').assign(ticker=ticker) for ticker in tickerStrings], ignore_index=True) Multi-Ticker, Structured DataFrame Approach When downloading data for multiple tickers simultaneously, yfinance groups data by ticker, resulting in a DataFrame with multi-level column headers. This structure can be reorganized for easier access. 
Unstacking Column Levels: # Define a list of ticker symbols to download tickerStrings = ['AAPL', 'MSFT'] # Download 2 days of data for each ticker, grouping by 'Ticker' to structure the DataFrame with multi-level columns df = yf.download(tickerStrings, group_by='Ticker', period='2d') # Transform the DataFrame: stack the ticker symbols to create a multi-index (Date, Ticker), then reset the 'Ticker' level to turn it into a column df = df.stack(level=0).rename_axis(['Date', 'Ticker']).reset_index(level=1) Handling CSV Files with Multi-Level Column Names To read a CSV file that has been saved with yfinance data (which often includes multi-level column headers), adjustments are necessary to ensure the DataFrame is accessible in the desired format. Reading and Adjusting Multi-Level Columns: # Read the CSV file. The file has multi-level headers, hence header=[0, 1]. df = pd.read_csv('test.csv', header=[0, 1]) # Drop the first row as it contains only the Date information in one column, which is redundant after setting the index. df.drop(index=0, inplace=True) # Convert the 'Unnamed: 0_level_0', 'Unnamed: 0_level_1' column (which represents dates) to datetime format. # This assumes the dates are in the 'YYYY-MM-DD' format. df[('Unnamed: 0_level_0', 'Unnamed: 0_level_1')] = pd.to_datetime(df[('Unnamed: 0_level_0', 'Unnamed: 0_level_1')]) # Set the datetime column as the index of the DataFrame. This makes time series analysis more straightforward. df.set_index(('Unnamed: 0_level_0', 'Unnamed: 0_level_1'), inplace=True) # Clear the name of the index to avoid confusion, as it previously referred to the multi-level column names. df.index.name = None Flattening Multi-Level Columns for Easier Access Depending on the initial structure of the DataFrame, multi-level columns many need to be flattened to a single level, adding clarity and simplicity to the dataset. Flattening and Reorganizing Based on Ticker Level: For DataFrames where the ticker symbol is at the top level of the column headers: df.stack(level=0).rename_axis(['Date', 'Ticker']).reset_index(level=1) If the ticker symbol is at the bottom level: df.stack(level=1).rename_axis(['Date', 'Ticker']).reset_index(level=1) Individual Ticker File Management For those preferring to manage each ticker's data separately, downloading and saving each ticker's data to individual files can be a straightforward approach. Downloading and Saving Individual Ticker Data: for ticker in tickerStrings: # Downloads historical market data from Yahoo Finance for the specified ticker. # The period ('prd') and interval ('intv') for the data are specified as string variables. data = yf.download(ticker, group_by="Ticker", period='prd', interval='intv') # Adds a new column named 'ticker' to the DataFrame. This column is filled with the ticker symbol. # This step is helpful for identifying the source ticker when multiple DataFrames are combined or analyzed separately. data['ticker'] = ticker # Saves the DataFrame to a CSV file. The file name is dynamically generated using the ticker symbol, # allowing each ticker's data to be saved in a separate file for easy access and identification. # For example, if the ticker symbol is 'AAPL', the file will be named 'ticker_AAPL.csv'. data.to_csv(f'ticker_{ticker}.csv') Consolidating Multiple Ticker Files into a Single DataFrame If data for each ticker is stored in separate files, combining these into a single DataFrame can be accomplished through file reading and concatenation. 
Reading Multiple Files into One DataFrame: # Import the Path class from the pathlib module, which provides object-oriented filesystem paths from pathlib import Path # Create a Path object 'p' that represents the directory containing the CSV files p = Path('path_to_files') # Use the .glob method to create an iterator over all files in the 'p' directory that match the pattern 'ticker_*.csv'. # This pattern will match any files that start with 'ticker_' and end with '.csv', which are presumably files containing ticker data. files = p.glob('ticker_*.csv') # Read each CSV file matched by the glob pattern into a separate pandas DataFrame, then concatenate all these DataFrames into one. # The 'ignore_index=True' parameter is used to reindex the new DataFrame, preventing potential index duplication. # This results in a single DataFrame 'df' that combines all the individual ticker data files into one comprehensive dataset. df = pd.concat([pd.read_csv(file) for file in files], ignore_index=True) This structured approach ensures that regardless of the initial data format or how it's stored, you can effectively organize and access financial data for multiple tickers using yfinance and pandas. Overview of Data Representations This seciton showcases examples of financial data represented in both multi-level and single-level column formats. These representations are crucial for understanding different data structures and their implications for data analysis in financial contexts. Multi-Level Column Data Multi-level column data can be complex but allows for the organization of related data under broader categories. This structure is especially useful for datasets where each entity (e.g., a stock ticker) has multiple attributes (e.g., Open, High, Low, Close prices). Example: DataFrame with Multi-Level Columns Below is a sample DataFrame showcasing multi-level column data for two stock tickers, AAPL and MSFT. Each ticker has multiple attributes, such as Open, High, Low, Close, Adjusted Close, and Volume. AAPL MSFT Open High Low Close Adj Close Volume Open High Low Close Adj Close Volume Date 1980-12-12 0.513393 0.515625 0.513393 0.513393 0.405683 117258400 NaN NaN NaN NaN NaN NaN 1980-12-15 0.488839 0.488839 0.486607 0.486607 0.384517 43971200 NaN NaN NaN NaN NaN NaN 1980-12-16 0.453125 0.453125 0.450893 0.450893 0.356296 26432000 NaN NaN NaN NaN NaN NaN 1980-12-17 0.462054 0.464286 0.462054 0.462054 0.365115 21610400 NaN NaN NaN NaN NaN NaN 1980-12-18 0.475446 0.477679 0.475446 0.475446 0.375698 18362400 NaN NaN NaN NaN NaN NaN Example: CSV Format of Multi-Level Columns Representing the above DataFrame in CSV format poses a unique challenge, as shown below. The multi-level structure is flattened into two header rows followed by the data rows. ,AAPL,AAPL,AAPL,AAPL,AAPL,AAPL,MSFT,MSFT,MSFT,MSFT,MSFT,MSFT ,Open,High,Low,Close,Adj Close,Volume,Open,High,Low,Close,Adj Close,Volume Date,,,,,,,,,,,, 1980-12-12,0.5133928656578064,0.515625,0.5133928656578064,0.5133928656578064,0.40568336844444275,117258400,,,,,, 1980-12-15,0.4888392984867096,0.4888392984867096,0.4866071343421936,0.4866071343421936,0.3845173120498657,43971200,,,,,, 1980-12-16,0.453125,0.453125,0.4508928656578064,0.4508928656578064,0.3562958240509033,26432000,,,,,, Single-Level Column Data For datasets where each entity shares a uniform set of attributes, single-level column data structures are ideal. This simpler format facilitates easier data manipulation and analysis, making it a common choice for many applications. 
Example: DataFrame with Single-Level Columns Below is a sample DataFrame displaying single-level column data for the MSFT stock ticker. It includes attributes such as Open, High, Low, Close, Adjusted Close, and Volume, alongside the ticker symbol for each entry. This format is straightforward, enabling direct access to each attribute of the stock data. Open High Low Close Adj Close Volume ticker Date 1986-03-13 0.088542 0.101562 0.088542 0.097222 0.062205 1031788800 MSFT 1986-03-14 0.097222 0.102431 0.097222 0.100694 0.064427 308160000 MSFT 1986-03-17 0.100694 0.103299 0.100694 0.102431 0.065537 133171200 MSFT 1986-03-18 0.102431 0.103299 0.098958 0.099826 0.063871 67766400 MSFT 1986-03-19 0.099826 0.100694 0.097222 0.098090 0.062760 47894400 MSFT Example: CSV Format of Single-Level Columns When single-level column data is exported to a CSV format, it results in a straightforward, easily readable file. Each row corresponds to a specific date, and each column header directly represents an attribute of the stock data. This simplicity enhances the CSV's usability for both humans and software applications. Date,Open,High,Low,Close,Adj Close,Volume,ticker 1986-03-13,0.0885416641831398,0.1015625,0.0885416641831398,0.0972222238779068,0.0622050017118454,1031788800,MSFT 1986-03-14,0.0972222238779068,0.1024305522441864,0.0972222238779068,0.1006944477558136,0.06442664563655853,308160000,MSFT 1986-03-17,0.1006944477558136,0.1032986119389534,0.1006944477558136,0.1024305522441864,0.0655374601483345,133171200,MSFT 1986-03-18,0.1024305522441864,0.1032986119389534,0.0989583358168602,0.0998263880610466,0.06387123465538025,67766400,MSFT 1986-03-19,0.0998263880610466,0.1006944477558136,0.0972222238779068,0.0980902761220932,0.06276042759418488,47894400,MSFT This section exemplifies how single-level column data is organized, providing an intuitive and accessible way to work with financial datasets. Whether in DataFrame or CSV format, single-level data structures support efficient data processing and analysis tasks. | 13 | 42 |
63,135,648 | 2020-7-28 | https://stackoverflow.com/questions/63135648/how-to-reorder-the-keys-of-a-dictionary | I have multiple dictionaries inside the list. I want to sort the dictionary with the custom key. In my case, I want to sort it using Date key. By that, I mean to move the Date key to the first position. What is the efficient way to sort the dictionary using Date key? PS: I don't want to sort by the value of the Date. [ { "AmazonS3":6.54, "AmazonEC2":27.55, "AmazonCloudWatch":0.51, "Date":"2020-07-01" }, { "AmazonEC2":27.8, "Date":"2020-07-02" }, { "AmazonElastiCache":0.01, "AmazonEC2":35.34, "Date":"2020-07-03" } ] Expected output: ... { "Date":"2020-07-03", "AmazonElastiCache":0.01, "AmazonEC2":35.34 } ... | If you are using a Python version that preserves key insertion order (i.e. 3.7 or newer) you can do this: print([{"Date": di["Date"], **di} for di in my_list]) [ { 'Date': '2020-07-01', 'AmazonS3': 6.54, 'AmazonEC2': 27.55, 'AmazonCloudWatch': 0.51 }, { 'Date': '2020-07-02', 'AmazonEC2': 27.8 }, { 'Date': '2020-07-03', 'AmazonElastiCache': 0.01, 'AmazonEC2': 35.34 } ] | 7 | 9 |
63,085,356 | 2020-7-25 | https://stackoverflow.com/questions/63085356/oserror-errno-30-read-only-file-system-user-macos-catalina | I was coding sorter for downloads folder. I'm getting this error, I tried to change permissions: chmod: Unable to change file mode on Users: Operation not permitted import os from_dir = os.path.dirname('/Users/user/Downloads/') working_dir = os.walk(from_dir) to_dir = os.path.dirname('/User/user/Downloads/New Folder/') def move(folder): for roots, dirs, files in folder: for file in files: src_folder = from_dir + '/' + file to_folder = to_dir + '/' + file if not os.path.exists(to_dir): os.makedirs(to_dir) os.rename(src_folder, to_folder) move(working_dir) Maybe there is another way to write this code just without touching root folders? Full error: Traceback (most recent call last): File "/Users/beknazarnurbek/Documents/PycharmProjects/Move Files/move.py", line 19, in <module> move(working_dir) File "/Users/beknazarnurbek/Documents/PycharmProjects/Move Files/move.py", line 14, in move os.makedirs(to_dir) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/os.py", line 211, in makedirs makedirs(head, exist_ok=exist_ok) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/os.py", line 211, in makedirs makedirs(head, exist_ok=exist_ok) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/os.py", line 211, in makedirs makedirs(head, exist_ok=exist_ok) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/os.py", line 221, in makedirs mkdir(name, mode) OSError: [Errno 30] Read-only file system: '/User' | That error message is bit misleading. In this case, the problem is that there is no /User directory on macOS. The directory is named /Users. In the following line to_dir = os.path.dirname('/User/user/Downloads/New Folder/') User should be Users to_dir = os.path.dirname('/Users/user/Downloads/New Folder/') What's happening is that os.mkdirs() is attempting to create a directory User in /. Which is not writable. Which is causing the error message. | 14 | 7 |
63,059,979 | 2020-7-23 | https://stackoverflow.com/questions/63059979/cannot-install-tensorflow-1-x | I am trying to install Tensorflow 1.14 for a package that I am trying to use. I tried: pip3 uninstall tensorflow Then I tried to install Tensorflow 1.14 using: pip3 install tensorflow==1.14 and I get the following error ERROR: Could not find a version that satisfies the requirement tensorflow==1.14 (from versions: 2.2.0rc3, 2.2.0rc4, 2.2.0, 2.3.0rc0, 2.3.0rc1, 2.3.0rc2) ERROR: No matching distribution found for tensorflow==1.14 I also tried making a new virtual env and tried the following commands but it didn't work. Is there any way to install Tensorflow 1? | What I've found on discourse: You just need to make sure you’re using Python 3.5, 3.6 or 3.7. TensorFlow 1.15 does not support Python 3.8 | 23 | 25 |
63,119,831 | 2020-7-27 | https://stackoverflow.com/questions/63119831/how-to-solve-valueerror-operands-could-not-be-broadcast-together-with-shapes | I have to sum 2 arrays with broadcasting. This is the first: a = [0 1 2 3] And this is the second: A = [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23]] This is the code I had tried until now: a = np.array(a) A = np.array(A) G = a + A print(G) But when I run, it throws this error: ValueError: operands could not be broadcast together with shapes (4,) (4,6) How to solve it? | Arrays need to have compatible shapes and same number of dimensions when performing a mathematical operation. That is, you can't add two arrays of shape (4,) and (4, 6), but you can add arrays of shape (4, 1) and (4, 6). You can add that extra dimension as follows: a = np.array(a) a = np.expand_dims(a, axis=-1) # Add an extra dimension in the last axis. A = np.array(A) G = a + A Upon doing this and broadcasting, a will practically become [[0 0 0 0 0 0] [1 1 1 1 1 1] [2 2 2 2 2 2] [3 3 3 3 3 3]] for the purpose of addition (the actual value of a won't change, a will still be [[0] [1] [2] [3]]; the above is the array A will be added to). | 9 | 13 |
63,101,307 | 2020-7-26 | https://stackoverflow.com/questions/63101307/does-python-3-8-2-have-ensurepip-do-i-need-to-install-ensurepip-to-use-it | I'm using python 3.8.2 on ubuntu on windows 10. I'm reading an OOP pdf and I'm at the third-party libraries section. It says that pip doesn't come with python, but python 3.4 contains a useful tool called ensurepip, which will install it: python -m ensurepip. But when I press enter, it says no module named ensurepip /usr/bin/python3: No module named ensurepip So I thought that I already have pip so I tried to install pygame with pip but it says there's no module named pip. What am I doing wrong? Thanks. | The module ensurepip is part of Python's standard library. It should be there. You say you're on Windows, but then you show /usr/bin/python3 in your question, which is obviously not a Windows path (rather Linux). My assumption is that you might be using WSL (or WSL2), which is actually Linux running on Windows (without going into details). By default WSL runs a Ubuntu distribution. This distribution (and other Debian-related distributions) typically breaks up Python and its standard library in multiple pieces. So you might need to install an additional system package, I believe it could be the python3-venv system package that contains the Python ensurepip module in your case: sudo apt-get install python3-venv | 8 | 26 |
63,108,452 | 2020-7-27 | https://stackoverflow.com/questions/63108452/python-specify-type-with-multiple-bases-typing-and-operator | I understand (as explained in this question and the docs) that a type hint of X or Y can be expressed as: Union[X,Y] But how does one express a type hint of X and Y? This would be useful when expressing that the object in question must be a subclass of both X and Y. The following example works as long as all classes that inherit both X and Y are known in advance: class X: pass class Y: pass class A(X,Y): pass class B(X,Y): pass def some_function(arg: Union[A,B]): pass # do stuff with arg that only depends on inherited members from X and Y But what if another package which depends on the code above defines: class C(X,Y): pass C also will work in some_function by design. I am looking for a better type hint to use instead of Union[X,Y] that includes any possible class that sub-classes both X and Y. I understand a workaround could be to define: class XY(X,Y): pass And then use it as a base class and type hint: class A(XY): pass class B(XY): pass class C(XY): pass def some_function(arg: XY): pass But I am not sure if it would be worthwhile to define a new class only for type hint, which doesn't effect runtime anyway. How do we create a type hint for any class that is a subclass of both X and Y? | Python type hints does not support explicit intersection annotation. But you have at least two workarounds: You could introduce a mix class, e.g: class A: def foo_a(self): pass class B: def foo_b(self): pass class Mix(A, B): pass def foo(p: Mix) -> None: p.foo_a() p.foo_b() Or use structural subtyping, Protocol, e.g.: from typing import Protocol class Supports_ab(Protocol): def a(self): pass def b(self): pass class A: def a(self): pass class B: def b(self): pass class Derived(A, B): pass class SomeClassAB: # no need superclass def a(self): pass def b(self): pass def foo(p: Supports_ab) -> None: p.a() p.b() foo(SomeClassAB()) foo(Derived()) | 11 | 11 |
63,084,049 | 2020-7-25 | https://stackoverflow.com/questions/63084049/sslerrorcant-connect-to-https-url-because-the-ssl-module-is-not-available | In my Ubuntu 20.04. I am using two python versions. One of them is Python3.8.2 which came with my Ubuntu installation and another one is Python3.7.5. I installed Python3.7.5 using update-alternatives alongside the system default version. But now the problem is no pip command is working on Python3.7.5. Although pip is available in this (Python3.7.5) installation and while printing the version it shows the following (using command pip3.7 -V): pip 19.2.3 from /usr/local/lib/python3.7/site-packages/pip (python 3.7) But whenever I try to install a package using it always shows the error mentioned in the title. For example while installing the following package: sudo pip3.7 install intel-tensorflow==1.15.2 The following error is thrown: WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Collecting intel-tensorflow==1.15.2 WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/intel-tensorflow/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/intel-tensorflow/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/intel-tensorflow/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/intel-tensorflow/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/intel-tensorflow/ Could not fetch URL https://pypi.org/simple/intel-tensorflow/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/intel-tensorflow/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping ERROR: Could not find a version that satisfies the requirement intel-tensorflow==1.15.2 (from versions: none) ERROR: No matching distribution found for intel-tensorflow==1.15.2 WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping Why this is actually happening? Again it shows the same error for all pip3.7 installations no matter which module I am going to install. Also there is no such problems like that whenever I am using the system default python version (Python3.8.2). | TL;DR you're probably missing some system dependencies. sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget Read below for the full details on how we got there. 
The error states that the SSL python module is not available; meaning you either don't have an appropriate ssl lib installed (probably not since you state the system python can pip install fine), or the python you built from source or otherwise installed doesn't include the ssl module. If you built it from source, you need to be sure to set the configuration option --with-openssl. Also, I'd really caution against sudo pip installing anything. Use something like virtualenv to have separate python environments from the system python or other python versions you install. EDIT: Upon closer examination, the python configure script appears to enable ssl support by default, assuming the dev headers are present. Make sure all the dev headers are present with sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget, then retry configuring and building python. | 15 | 22 |
63,105,799 | 2020-7-26 | https://stackoverflow.com/questions/63105799/understanding-python-contextvars | Regarding the following SO answer . I've made some changes in order to understand the difference between do use Contextvars and don't. I expect at some point the variable myid gets corrupted but changing the range to a higher number seems doesn't affect at all. import asyncio import contextvars # declare context var request_id = contextvars.ContextVar('Id of request.') async def some_inner_coroutine(myid): # get value print('Processed inner coroutine of myid : {}'.format(myid)) print('Processed inner coroutine of request: {}'.format(request_id.get())) if myid != request_id.get(): print("ERROR") async def some_outer_coroutine(req_id): # set value request_id.set(req_id) await some_inner_coroutine(req_id) # get value print('Processed outer coroutine of request: {}'.format(request_id.get())) async def main(): tasks = [] for req_id in range(1, 1250): tasks.append(asyncio.create_task(some_outer_coroutine(req_id))) await asyncio.gather(*tasks) if __name__ == '__main__': asyncio.run(main()) Results Processed inner coroutine of myid : 1 Processed inner coroutine of request: 1 Processed outer coroutine of request: 1 Processed inner coroutine of myid : 2 Processed inner coroutine of request: 2 Processed outer coroutine of request: 2 Processed inner coroutine of myid : 3 Processed inner coroutine of request: 3 Processed outer coroutine of request: 3 Processed inner coroutine of myid : 4 Processed inner coroutine of request: 4 Processed outer coroutine of request: 4 ... ... Processed inner coroutine of myid : 1244 Processed inner coroutine of request: 1244 Processed outer coroutine of request: 1244 Processed inner coroutine of myid : 1245 Processed inner coroutine of request: 1245 Processed outer coroutine of request: 1245 Processed inner coroutine of myid : 1246 Processed inner coroutine of request: 1246 Processed outer coroutine of request: 1246 Processed inner coroutine of myid : 1247 Processed inner coroutine of request: 1247 Processed outer coroutine of request: 1247 Processed inner coroutine of myid : 1248 Processed inner coroutine of request: 1248 Processed outer coroutine of request: 1248 Processed inner coroutine of myid : 1249 Processed inner coroutine of request: 1249 Processed outer coroutine of request: 1249 What should I change to see an unexpected behaviour of the variable myid? | Context variables are convenient when you need to pass a variable along the chain of calls so that they share the same context, in the case when this cannot be done through a global variable in case of concurrency. Context variables can be used as an alternative to global variables both in multi-threaded code and in asynchronous (with coroutines). Context variables are natively supported in asyncio and are ready to be used without any extra configuration. Because when a Task is created it copies the current context and later runs its coroutine in the copied context: # asyncio/task.py class Task: def __init__(self, coro): ... # Get the current context snapshot. self._context = contextvars.copy_context() self._loop.call_soon(self._step, context=self._context) def _step(self, exc=None): ... # Every advance of the wrapped coroutine is done in # the task's context. self._loop.call_soon(self._step, context=self._context) ... 
Below is your example showing the corruption of a global variable compared to context vars: import asyncio import contextvars # declare context var current_request_id_ctx = contextvars.ContextVar('') current_request_id_global = '' async def some_inner_coroutine(): global current_request_id_global # simulate some async work await asyncio.sleep(0.1) # get value print('Processed inner coroutine of request: {}'.format(current_request_id_ctx.get())) if current_request_id_global != current_request_id_ctx.get(): print(f"ERROR! global var={current_request_id_global}") async def some_outer_coroutine(req_id): global current_request_id_global # set value current_request_id_ctx.set(req_id) current_request_id_global = req_id await some_inner_coroutine() # get value print('Processed outer coroutine of request: {}\n'.format(current_request_id_ctx.get())) async def main(): tasks = [] for req_id in range(1, 10000): tasks.append(asyncio.create_task(some_outer_coroutine(req_id))) await asyncio.gather(*tasks) if __name__ == '__main__': asyncio.run(main()) Output: ... Processed inner coroutine of request: 458 ERROR! global var=9999 Processed outer coroutine of request: 458 Processed inner coroutine of request: 459 ERROR! global var=9999 Processed outer coroutine of request: 459 ... An example of converting code that uses threading.local() can be found in PEP 567 | 24 | 46 |
63,136,177 | 2020-7-28 | https://stackoverflow.com/questions/63136177/how-to-set-the-maximum-image-size-to-upload-image-in-django-ckeditor | I am using django-ckeditor for my project to upload images along with text content. I used body = RichTextUploadingField(blank=True, null=True) in the model. Now I want to prevent the user from uploading images in the content that are larger than a predefined maximum size or height/width. I want the uploaded images in the content to have a particular height/width and a size of less than 1 MB. How can I predefine a maximum image height and width as well as a maximum image size limit? Is there any way to define it in the django-ckeditor configuration? Or how can I resize the uploaded images at the backend after the user submits the form? Here is my models.py: class Post(models.Model): STATUS_CHOICES = { ('draft', 'Draft'), ('published', 'Published'), } title = models.CharField(max_length=250) slug = models.SlugField(max_length=250, unique=True) author = models.ForeignKey(User, on_delete=models.CASCADE) body = RichTextUploadingField(blank=True, null=True) status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='draft') I tried a lot to solve it but failed. Any suggestion to solve the issue? Thanks in advance. | You can set CKEDITOR_THUMBNAIL_SIZE in settings.py: CKEDITOR_THUMBNAIL_SIZE = (500, 500) With the pillow backend, you can change the thumbnail size with the CKEDITOR_THUMBNAIL_SIZE setting (formerly THUMBNAIL_SIZE). Default value: (75, 75) With the pillow backend, you can convert and compress the uploaded images to jpeg, to save disk space. Set the CKEDITOR_FORCE_JPEG_COMPRESSION setting to True (default False) You can change the CKEDITOR_IMAGE_QUALITY setting (formerly IMAGE_QUALITY), which is passed to Pillow: The image quality, on a scale from 1 (worst) to 95 (best). The default is 75. Values above 95 should be avoided; 100 disables portions of the JPEG compression algorithm and results in large files with hardly any gain in image quality. This feature is disabled for animated images. Check the official docs: https://github.com/django-ckeditor/django-ckeditor/blob/master/README.rst | 9 | 0 |
63,057,468 | 2020-7-23 | https://stackoverflow.com/questions/63057468/how-to-ignore-and-initialize-missing-keys-in-state-dict | My saved state_dict does not contain all the layers that are in my model. How can I ignore the Missing key(s) in state_dict error and initialize the remaining weights? | This can be achieved by passing strict=False to load_state_dict. load_state_dict(state_dict, strict=False) Documentation | 13 | 32 |
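A small self-contained sketch of how that looks in practice; the model and the fake checkpoint below are made up for illustration, and the returned named tuple shows which keys were left at their freshly initialized values.

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)   # new layer, not present in the old checkpoint

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Pretend this is the saved state_dict: it only contains fc1 weights.
old_state = {"fc1.weight": torch.randn(8, 4), "fc1.bias": torch.randn(8)}

model = Net()                                        # fc2 keeps its default initialization
result = model.load_state_dict(old_state, strict=False)
print(result.missing_keys)                           # ['fc2.weight', 'fc2.bias']
print(result.unexpected_keys)                        # []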
63,090,280 | 2020-7-25 | https://stackoverflow.com/questions/63090280/i-cant-get-certain-guild-with-discord-py | I'm trying to take certain roles when bot starting. First i need to take guild but i couldn't achieve it. takenGuild = client.get_guild(myServerID) takenGuild returning None When i try to loop guilds for guild in client.guilds: print(guild.id) it's not printing anything. My whole code : import discord from discord.ext import commands, tasks from discord.utils import get client = commands.Bot(command_prefix = '.') takenGuild = client.get_guild(123123123123123123) print(takenGuild.id) for guild in client.guilds: print(guild) print(guild.id) client.run('Token') | You have to wait for the bot to be ready you can use this. FYI bot is more common now after the updates than client import discord from discord.ext import commands, tasks from discord.utils import get bot = commands.Bot(command_prefix = '.') @bot.event async def on_ready(): print("I am running on " + bot.user.name) print("With the ID: " + bot.user.id) print('Bot is ready to be used') # after it is ready do it takenGuild = bot.get_guild(123123123123123123) print(takenGuild.id) for guild in bot.guilds: print(guild) print(guild.id) bot.run('Token') | 8 | 8 |
63,069,190 | 2020-7-24 | https://stackoverflow.com/questions/63069190/how-to-capture-arbitrary-paths-at-one-route-in-fastapi | I'm serving React app from FastAPI by mounting app.mount("/static", StaticFiles(directory="static"), name="static") @app.route('/session') async def renderReactApp(request: Request): return templates.TemplateResponse("index.html", {"request": request}) by this React app get served and React routing also works fine at client side but as soon as client reloads on a route which is not defined on server but used in React app FastAPI return not found to fix this I did something as below. @app.route('/network') @app.route('/gat') @app.route('/session') async def renderReactApp(request: Request): return templates.TemplateResponse("index.html", {"request": request}) but it seems weird and wrong to me as I need to add every route at the back-end as well as at frontend. I'm sure there must be something like Flask @flask_app.add_url_rule('/<path:path>', 'index', index) in FastAPI which will server all arbitrary path | Since FastAPI is based on Starlette, you can use what they call "converters" with your route parameters, using type path in this case, which "returns the rest of the path, including any additional / characers." See https://www.starlette.io/routing/#path-parameters for reference. If your react (or vue or ...) app is using a base path, you can do something like this, which assigns anything after /my-app/ to the rest_of_path variable: @app.get("/my-app/{rest_of_path:path}") async def serve_my_app(request: Request, rest_of_path: str): print("rest_of_path: "+rest_of_path) return templates.TemplateResponse("index.html", {"request": request}) If you are not using a unique base path like /my-app/ (which seems to be your use case), you can still accomplish this with a catch-all route, which should go after any other routes so that it doesn't overwrite them: @app.route("/{full_path:path}") async def catch_all(request: Request, full_path: str): print("full_path: "+full_path) return templates.TemplateResponse("index.html", {"request": request}) (In fact you would want to use this catch-all regardless in order to catch the difference between requests for /my-app/ and /my-app) | 55 | 59 |
63,080,326 | 2020-7-24 | https://stackoverflow.com/questions/63080326/could-not-find-module-atari-py-ale-interface-ale-c-dll-or-one-of-its-dependenc | I'm trying to work with the openai gym module but I get this error: >>> import atari_py Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\ssit5\AppData\Local\Programs\Python\Python38\lib\site-packages\atari_py\__init__.py", line 1, in <module> from .ale_python_interface import * File "C:\Users\ssit5\AppData\Local\Programs\Python\Python38\lib\site-packages\atari_py\ale_python_interface.py", line 17, in <module> ale_lib = cdll.LoadLibrary(os.path.join(os.path.dirname(__file__), File "C:\Users\ssit5\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", line 451, in LoadLibrary return self._dlltype(name) File "C:\Users\ssit5\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", line 373, in __init__ self._handle = _dlopen(self._name, mode) FileNotFoundError: Could not find module 'C:\Users\ssit5\AppData\Local\Programs\Python\Python38\lib\site-packages\atari_py\ale_interface\ale_c.dll' (or one of its dependencies). Try using the full path with constructor syntax. I don't have an ale_c.dll and tried finding solutions but nothing worked. I followed the solution here https://github.com/openai/gym/issues/1726 but when trying to import atari_py it comes up with the same error. I don't see why the __init__ would search for something that didn't come with the module either. There were other StackOverflow questions that I looked at but they also yielded no results. The only solution I can think of is to get a copy of ale_c.dll but I don't know how I would get it. | I was facing the same error. Fortunately, I was able to find one workaround. Follow this steps and you should be good to go. Download ale_c.dll from here. Copy it in C:\Users\Deep Raval\AppData\Local\Programs\Python\Python38\Lib\site-packages\atari_py\ale_interface (Your path can be different). | 9 | 15 |
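If you are unsure where that ale_interface folder lives on your machine, a small helper like the sketch below locates the installed atari_py package without importing it (a plain import would fail while the DLL is missing); only the package name is assumed.

import importlib.util
import os

# find_spec locates the package on disk without executing its __init__.py
spec = importlib.util.find_spec("atari_py")
if spec is None:
    print("atari_py is not installed in this environment")
else:
    pkg_dir = os.path.dirname(spec.origin)
    print("copy ale_c.dll into:", os.path.join(pkg_dir, "ale_interface"))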
63,140,037 | 2020-7-28 | https://stackoverflow.com/questions/63140037/how-to-remove-hidden-marks-from-images-using-python-opencv | I wanted to work on a small project to challenge my computer vision and image processing skills. I came across a project where I want to remove the hidden marks from the image. Hidden here refers to the watermarks that are not easily visible in rgb space but when you convert into hsv or some other space the marks become visible. Here's one example: BGR SPACE: HSV SPACE: I've tried different ways but was able to implement a solution that would remove those watermarks from the image. I am posting this question here to get different ideas to tackle this problem. What I have tried: I have tried various approaches but none of them worked, sharing the code might not help. It is not necessary to provide code for it, a pseudo code, idea or any lead would be appreciated. I noticed that the hidden marks are all the colors similar to RGB(90,94,105). And when I showed R, G, and B separately I noticed that the watermarks were only visible in B channel. I thought that if adjust/remove the marks in B channel and merge the image again, may be I could get better results. Code: b,g,r = cv2.split(img) b = b//2; r = cv2.merge((r,g,b)) cv2.imshow("image",r) Problems: This doesn't does solve the problem, it did make the colors little dimmer but the image colors were also disturbed. I tried playing around with B channel to see if could accomplish something. I also noticed that if we convert the image to LUV space then the marks are visible in V space. | I didn't find any answer that completely solved the question. I appreciate everyone's effort though (Thank you). I did something on my own and would like to share. It results in little quality loss (a little bluish blurriness) but successfully removes the watermarks. The solution is very simple but took time to analyze the image. I WOULD BE VERY GLAD IF SOMEONE CAN EXTEND THIS APPROACH AND COME UP WITH SOMETHING EVEN BETTER I observed that the watermarks were only visible in B space (out of RGB) and there were no traces of watermarks in R and G space. B space: I also red somewhere that blue light contributes little to the overall image compared to R and G channel so here's what I decided to do. Blur the B channel by a large enough amount to remove traces of those patterns. Here's how the B channel would appear afterwards: Finally, merge the image with the new B channel, previous R and previous G channel. Here's how the RGB channel would appear afterwards: The advantage of using approach is that the traces are gone. The only disadvantage is that the bluish and purplish colors appear at the black edges and the image is a little bluish in general. My Code: import cv2 from matplotlib import pyplot as plt import numpy as np img = cv2.imread("img.png") b, g, r = cv2.split(img) # split into B,G,R spaces b = cv2.GaussianBlur(b, None, 8) plt.imshow(cv2.merge((r,g,b)), cmap='gray') | 13 | 2 |
63,069,595 | 2020-7-24 | https://stackoverflow.com/questions/63069595/how-to-transpile-python-compare-ast-nodes-to-c | Let's start by considering python3.8.5's grammar, in this case I'm interested to figure out how to transpile python Comparisons to c. For the sake of simplicity, let's assume we're dealing with a very little python trivial subset and we just want to transpile trivial Compare expressions: expr = Compare(expr left, cmpop* ops, expr* comparators) If I'm not mistaken, in python an expression such as a<b<c is converted into something like a<b && b<c where b is only evaluated once... so I guess in c you should do something like bool v0=a<b; bool v1=v0<c in order to prevent b being evaluated more than once in case the first clause is true. Unfortunately I don't know how to put that into code, so far this is what I've got: import ast import shutil import textwrap from subprocess import PIPE from subprocess import Popen class Visitor(ast.NodeVisitor): def visit(self, node): ret = super().visit(node) if ret is None: raise Exception("Unsupported node") return ret def visit_Expr(self, node): return f"{self.visit(node.value)};" def visit_Eq(self, node): return "==" def visit_Lt(self, node): return "<" def visit_LtE(self, node): return "<=" def visit_Load(self, node): return "//load" def visit_Name(self, node): return f"{node.id}" def visit_Compare(self, node): left = self.visit(node.left) ops = [self.visit(x) for x in node.ops] comparators = [self.visit(x) for x in node.comparators] if len(ops) == 1 and len(comparators) == 1: return f"({left} {ops[0]} {comparators[0]})" else: lhs = ",".join([f"'{v}'" for v in ops]) rhs = ",".join([f"{v}" for v in comparators]) return f"cmp<{lhs}>({rhs})" def visit_Call(self, node): func = self.visit(node.func) args = [self.visit(x) for x in node.args] # keywords = [self.visit(x) for x in node.keywords] return f"{func}({','.join(args)})" def visit_Module(self, node): return f"{''.join([self.visit(x) for x in node.body])}" def visit_Num(self, node): return node.n if __name__ == "__main__": out = Visitor().visit( ast.parse( textwrap.dedent( """ 1 == 1<3 1 == (1<3) 1 == (0 < foo(0 <= bar() < 3, baz())) < (4 < 5) foo(0 <= bar() < 3, baz()) """ ) ) ) if shutil.which("clang-format"): cmd = "clang-format -style webkit -offset 0 -length {} -assume-filename None" p = Popen( cmd.format(len(out)), stdout=PIPE, stdin=PIPE, stderr=PIPE, shell=True ) out = p.communicate(input=out.encode("utf-8"))[0].decode("utf-8") print(out) else: print(out) As you can see, the output will be some sort of non compilable c output: cmp<'==', '<'>(1, 3); (1 == (1 < 3)); cmp<'==', '<'>((0 < foo(cmp<'<=', '<'>(bar(), 3), baz())), (4 < 5)); foo(cmp<'<=', '<'>(bar(), 3), baz()); Question, what'd be the algorithm (a python working example would be ideal here but just some general pseudocode that allowed me to improve the provided snippet would be also fine) that'd allowed me to convert python Compare expressions to c? | An additional complication when converting Compare expressions is that you want to prevent sub-expressions that are used more than once after the split from being evaluated more than once, which is particularly important if there are side effects such as a function call. One could take the sub-expressions and declare them as variables in advance to avoid multiple evaluations. There is a clever method for converting Python comparison expressions to JavaScript from a guy named Alexander Schepanovski. 
He explains his whole solution in detail in his blog post: http://hackflow.com/blog/2015/04/12/metaprogramming-beyond-decency-part-2/. Basically the same can be applied for a transpilation to C. He determines pairs of adjacent operands. This is necessary to convert chained comparisons into separate comparisons in which the 'middle' operand is then copied and is the left operand of the splited second subcomparison. A kind of symbol table could be used to associate the variables with sub-expressions. The naming of the variable can be done by a simple counter. The variables can be output when visiting an expression node. To get an output in C for the expressions given as an example in the question, you can simply emit a printf. For further simplification we could assume that the assumed small, trivial Python subset has only to deal with int expressions. Python Code I have taken your snippet and slightly modified it according to the above points so that it is a self-contained example that outputs compilable C code for your sample expressions. import ast import itertools import textwrap def pairwise(iterable): """s -> (s0,s1), (s1,s2), (s2, s3), ...""" a, b = itertools.tee(iterable) next(b, None) return zip(a, b) class Visitor(ast.NodeVisitor): def __init__(self): self.varCounter = 0 self.varTable = [] def visit_Expr(self, node): code = self.visit(node.value) variables = '\n'.join(self.varTable) self.varTable = [] return f'{variables}\nprintf("%d\\n", {code});\n' def visit_Eq(self, node): return "==" def visit_Lt(self, node): return '<' def visit_LtE(self, node): return '<=' def visit_Gt(self, node): return ">" def visit_GtE(self, node): return ">=" def visit_Name(self, node): return str(node.id) # see http://hackflow.com/blog/2015/04/12/metaprogramming-beyond-decency-part-2/ def visit_Compare(self, node): ops = node.ops operands = [node.left] + node.comparators variables = [] for o in operands: self.varCounter += 1 num = self.varCounter op = self.visit(o) variables.append((num, op)) self.varTable.append(f'int t{num} = {op};') pairs = pairwise(variables) # adjacent pairs of operands return ' && '.join('%s(%s %s %s)' % ('!' if isinstance(op, ast.NotIn) else '', f't{l[0]}', self.visit(op), f't{r[0]}') for op, (l, r) in zip(ops, pairs)) def visit_Call(self, node): args = [self.visit(x) for x in node.args] return self.visit(node.func) + "(" + ", ".join(args) + ")" def visit_Num(self, node): return str(node.n) def main(): analyzer = Visitor() tree = ast.parse( textwrap.dedent( """ 1 == 1<3 1 == (1<3) 1 == (0 < foo(0 <= bar() < 3, baz())) < (4 < 5) foo(0 <= bar() < 3, baz()) """ ) ) # print(ast.dump(tree)) for node in ast.iter_child_nodes(tree): c = analyzer.visit(node) print(c) if __name__ == '__main__': main() Test Run When you run the Python program, the following is displayed in the debug console: int t1 = 1; int t2 = 1; int t3 = 3; printf("%d\n", (t1 == t2) && (t2 < t3)); int t4 = 1; int t6 = 1; int t7 = 3; int t5 = (t6 < t7); printf("%d\n", (t4 == t5)); int t8 = 1; int t10 = 0; int t12 = 0; int t13 = bar(); int t14 = 3; int t11 = foo((t12 <= t13) && (t13 < t14), baz()); int t9 = (t10 < t11); int t16 = 4; int t17 = 5; int t15 = (t16 < t17); printf("%d\n", (t8 == t9) && (t9 < t15)); int t18 = 0; int t19 = bar(); int t20 = 3; printf("%d\n", foo((t18 <= t19) && (t19 < t20), baz())); Of course there is a way to simplify this further. For example, constant expressions do not need to be assigned to a variable. And of course there are many more details to consider. 
But this should be a starting point that outputs compilable C code for your example data. | 7 | 2 |
63,094,847 | 2020-7-26 | https://stackoverflow.com/questions/63094847/how-to-scale-target-values-of-a-keras-autoencoder-model-using-a-sklearn-pipeline | I'm using sklearn pipelines to build a Keras autoencoder model and use gridsearch to find the best hyperparameters. This works fine if I use a Multilayer Perceptron model for classification; however, in the autoencoder I need the output values to be the same as input. In other words, I am using a StandardScalar instance in the pipeline to scale the input values and therefore this leads to my question: how can I make the StandardScalar instance inside the pipeline to work on both the input data as well as target data, so that they end up to be the same? I'm providing a code snippet as an example. from sklearn.datasets import make_classification from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV, KFold from keras.models import Sequential from keras.layers import Dense, Dropout from keras.optimizers import RMSprop, Adam from tensorflow.keras.wrappers.scikit_learn import KerasRegressor X, y = make_classification (n_features = 50, n_redundant = 0, random_state = 0, scale = 100, n_clusters_per_class = 1) # Define wrapper def create_model (learn_rate = 0.01, input_shape, metrics = ['mse']): model = Sequential () model.add (Dense (units = 64, activation = 'relu', input_shape = (input_shape, ))) model.add (Dense (32, activation = 'relu')) model.add (Dense (8, activation = 'relu')) model.add (Dense (32, activation = 'relu')) model.add (Dense (input_shape, activation = None)) model.compile (loss = 'mean_squared_error', optimizer = Adam (lr = learn_rate), metrics = metrics) return model # Create scaler my_scaler = StandardScaler () steps = list () steps.append (('scaler', my_scaler)) standard_scaler_transformer = Pipeline (steps) # Create classifier clf = KerasRegressor (build_fn = create_model, verbose = 2) # Assemble pipeline # How to scale input and output?? clf = Pipeline (steps = [('scaler', my_scaler), ('classifier', clf)], verbose = True) # Run grid search param_grid = {'classifier__input_shape' : [X.shape [1]], 'classifier__batch_size' : [50], 'classifier__learn_rate' : [0.001], 'classifier__epochs' : [5, 10]} cv = KFold (n_splits = 5, shuffle = False) grid = GridSearchCV (estimator = clf, param_grid = param_grid, scoring = 'neg_mean_squared_error', verbose = 1, cv = cv) grid_result = grid.fit (X, X) print ('Best: %f using %s' % (grid_result.best_score_, grid_result.best_params_)) | You can use TransformedTargetRegressor to apply arbitrary transformations on the target values (i.e. y) by providing either a function (i.e. using func argument) or a transformer (i.e. transformer argument). In this case (i.e. fitting an auto-encoder model), since you want to apply the same StandardScalar instance on the target values as well, you can use transformer argument. And it could be done in one of the following ways: You can use it as one of the pipeline steps, wrapping the regressor: scaler = StandardScaler() regressor = KerasRegressor(...) pipe = Pipeline(steps=[ ('scaler', scaler), ('ttregressor', TransformedTargetRegressor(regressor, transformer=scaler)) ]) # Use `__regressor` to access the regressor hyperparameters param_grid = {'ttregressor__regressor__hyperparam_name' : ...} gridcv = GridSearchCV(estimator=pipe, param_grid=param_grid, ...) 
gridcv.fit(X, X) Alternatively, you can wrap it around the GridSearchCV like this: ttgridcv = TransformedTargetRegressor(GridSearchCV(...), transformer=scalar) ttgridcv.fit(X, X) # Use `regressor_` attribute to access the fitted regressor (i.e. `GridSearchCV` instance) print(ttgridcv.regressor_.best_score_, ttgridcv.regressor_.best_params_)) | 7 | 2 |
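For reference, here is a minimal, runnable version of the first variant with an ordinary scikit-learn Ridge regressor standing in for the KerasRegressor (an assumption made purely so the target-scaling mechanics can be run without TensorFlow); the data is synthetic and the Keras-specific pieces elided with ... above are intentionally left out.

import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Ridge               # stand-in for the KerasRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy autoencoder-style setup: the targets are the inputs themselves.
X = np.random.RandomState(0).rand(100, 5) * 100

pipe = Pipeline(steps=[
    ("scaler", StandardScaler()),                     # scales the inputs X
    ("ttregressor", TransformedTargetRegressor(       # scales the targets y (= X here)
        regressor=Ridge(),
        transformer=StandardScaler(),
    )),
])

# Hyperparameters of the wrapped regressor are reached via ttregressor__regressor__...
param_grid = {"ttregressor__regressor__alpha": [0.1, 1.0, 10.0]}
grid = GridSearchCV(pipe, param_grid, scoring="neg_mean_squared_error", cv=3)
grid.fit(X, X)
print(grid.best_params_, grid.best_score_)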
63,035,551 | 2020-7-22 | https://stackoverflow.com/questions/63035551/how-to-scrape-all-topics-from-twitter | All topics in twitter can be found in this link I would like to scrape all of them with each of the subcategory inside. BeautifulSoup doesn't seem to be useful here. I tried using selenium, but I don't know how to match the Xpaths that come after clicking the main category. from selenium import webdriver from selenium.common import exceptions url = 'https://twitter.com/i/flow/topics_selector' driver = webdriver.Chrome('absolute path to chromedriver') driver.get(url) driver.maximize_window() main_topics = driver.find_elements_by_xpath('/html/body/div[1]/div/div/div[1]/div[2]/div/div/div/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/div/div/div/span') topics = {} for main_topic in main_topics[2:]: print(main_topic.text.strip()) topics[main_topic.text.strip()] = {} I know I can click the main category using main_topics[3].click(), but I don't know how I can maybe recursively click through them until I find only the ones with Follow on the right. | To scrape all the main topics e.g. Arts & culture, Business & finance, etc using Selenium and python you have to induce WebDriverWait for visibility_of_all_elements_located() and you can use either of the following Locator Strategies: Using XPATH and text attribute: driver.get("https://twitter.com/i/flow/topics_selector") print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//span[contains(., 'see top Tweets about them in your timeline')]//following::div[@role='button']/div/span")))]) Using XPATH and get_attribute(): driver.get("https://twitter.com/i/flow/topics_selector") print([my_elem.get_attribute("textContent") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//span[contains(., 'see top Tweets about them in your timeline')]//following::div[@role='button']/div/span")))]) Console Output: ['Arts & culture', 'Business & finance', 'Careers', 'Entertainment', 'Fashion & beauty', 'Food', 'Gaming', 'Lifestyle', 'Movies and TV', 'Music', 'News', 'Outdoors', 'Science', 'Sports', 'Technology', 'Travel'] To scrape all the main and sub topics using Selenium and WebDriver you can use the following Locator Strategy: Using XPATH and get_attribute("textContent"): driver.get("https://twitter.com/i/flow/topics_selector") elements = WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//span[contains(., 'see top Tweets about them in your timeline')]//following::div[@role='button']/div/span"))) for element in elements: element.click() print([my_elem.get_attribute("textContent") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@role='button']/div/span[text()]")))]) driver.quit() Console Output: ['Arts & culture', 'Animation', 'Art', 'Books', 'Dance', 'Horoscope', 'Theater', 'Writing', 'Business & finance', 'Business personalities', 'Business professions', 'Cryptocurrencies', 'Careers', 'Education', 'Fields of study', 'Entertainment', 'Celebrities', 'Comedy', 'Digital creators', 'Entertainment brands', 'Podcasts', 'Popular franchises', 'Theater', 'Fashion & beauty', 'Beauty', 'Fashion', 'Food', 'Cooking', 'Cuisines', 'Gaming', 'Esports', 'Game development', 'Gaming hardware', 'Gaming personalities', 'Tabletop gaming', 'Video games', 'Lifestyle', 'Animals', 'At home', 'Collectibles', 'Family', 'Fitness', 'Unexplained phenomena', 'Movies and TV', 'Movies', 
'Television', 'Music', 'Alternative', 'Bollywood music', 'C-pop', 'Classical music', 'Country music', 'Dance music', 'Electronic music', 'Hip-hop & rap', 'J-pop', 'K-hip hop', 'K-pop', 'Metal', 'Musical instruments', 'Pop', 'R&B and soul', 'Radio stations', 'Reggae', 'Reggaeton', 'Rock', 'World music', 'News', 'COVID-19', 'Local news', 'Social movements', 'Outdoors', 'Science', 'Biology', 'Sports', 'American football', 'Australian rules football', 'Auto racing', 'Baseball', 'Basketball', 'Combat Sports', 'Cricket', 'Extreme sports', 'Fantasy sports', 'Football', 'Golf', 'Gymnastics', 'Hockey', 'Lacrosse', 'Pub sports', 'Rugby', 'Sports icons', 'Sports journalists & coaches', 'Tennis', 'Track & field', 'Water sports', 'Winter sports', 'Technology', 'Computer programming', 'Cryptocurrencies', 'Data science', 'Information security', 'Operating system', 'Tech brands', 'Tech personalities', 'Travel', 'Adventure travel', 'Destinations', 'Transportation'] Note : You have to add the following imports : from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC | 7 | 8 |
63,067,807 | 2020-7-24 | https://stackoverflow.com/questions/63067807/sharing-array-of-objects-with-python-multiprocessing | For this question, I refer to the example in Python docs discussing the "use of the SharedMemory class with NumPy arrays, accessing the same numpy.ndarray from two distinct Python shells". A major change that I'd like to implement is manipulate array of class objects rather than integer values as I demonstrate below. import numpy as np from multiprocessing import shared_memory # a simplistic class example class A(): def __init__(self, x): self.x = x # numpy array of class objects a = np.array([A(1), A(2), A(3)]) # create a shared memory instance shm = shared_memory.SharedMemory(create=True, size=a.nbytes, name='psm_test0') # numpy array backed by shared memory b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf) # copy the original data into shared memory b[:] = a[:] print(b) # array([<__main__.Foo object at 0x7fac56cd1190>, # <__main__.Foo object at 0x7fac56cd1970>, # <__main__.Foo object at 0x7fac56cd19a0>], dtype=object) Now, in a different shell, we attach to the shared memory space and try to manipulate the contents of the array. import numpy as np from multiprocessing import shared_memory # attach to the existing shared space existing_shm = shared_memory.SharedMemory(name='psm_test0') c = np.ndarray((3,), dtype=object, buffer=existing_shm.buf) Even before we are able to manipulate c, printing it will result in a segmentation fault. Indeed I can not expect to observe a behaviour that has not been written into the module, so my question is what can I do to work with a shared array of objects? I'm currently pickling the list but protected read/writes add a fair bit of overhead. I've also tried using Namespace, which was quite slow because indexed writes are not allowed. Another idea could be to use share Ctypes Structure in a ShareableList but I wouldn't know where to start with that. In addition there is also a design aspect: it appears that there is an open bug in shared_memory that may affect my implementation wherein I have several processes working on different elements of the array. Is there a more scalable way of sharing a large list of objects between several processes so that at any given time all running processes interact with a unique object/element in the list? UPDATE: At this point, I will also accept partial answers that talk about whether this can be achieved with Python at all. | So, I did a bit of research (Shared Memory Objects in Multiprocessing) and came up with a few ideas: Pass numpy array of bytes Serialize the objects, then save them as byte strings to a numpy array. Problematic here is that One one needs to pass the data type from the creator of 'psm_test0' to any consumer of 'psm_test0'. This could be done with another shared memory though. pickle and unpickle is essentailly like deepcopy, i.e. it actually copies the underlying data. 
The code for the 'main' process reads: import pickle from multiprocessing import shared_memory import numpy as np # a simplistic class example class A(): def __init__(self, x): self.x = x def pickle(self): return pickle.dumps(self) @classmethod def unpickle(self, bts): return pickle.loads(bts) if __name__ == '__main__': # Test pickling procedure a = A(1) print(A.unpickle(a.pickle()).x) # >>> 1 # numpy array of byte strings a_arr = np.array([A(1).pickle(), A(2).pickle(), A('This is a really long test string which should exceed 42 bytes').pickle()]) # create a shared memory instance shm = shared_memory.SharedMemory( create=True, size=a_arr.nbytes, name='psm_test0' ) # numpy array backed by shared memory b_arr = np.ndarray(a_arr.shape, dtype=a_arr.dtype, buffer=shm.buf) # copy the original data into shared memory b_arr[:] = a_arr[:] print(b_arr.dtype) # 'S105' and for the consumer import numpy as np from multiprocessing import shared_memory from test import A # attach to the existing shared space existing_shm = shared_memory.SharedMemory(name='psm_test0') c = np.ndarray((3,), dtype='S105', buffer=existing_shm.buf) # Test data transfer arr = [a.x for a in list(map(A.unpickle, c))] print(arr) # [1, 2, ...] I'd say you have a few ways of going forward: Stay with simple data types. Implement something using the C api, but I can't really help you there. Use Rust Use Mangers. You maybe loose out on some performance (I'd like to see a real benchmark though), but You can get a relatively safe and simple interface for shared objects. Use Redis, which also has Python bindings... | 10 | 3 |
63,078,267 | 2020-7-24 | https://stackoverflow.com/questions/63078267/x-axis-in-matplotlib-print-random-numbers-instead-of-the-years | I'm new to Pandas and Matplotlib. I followed an example from a book and apparently it gives me a warning "MatplotlibDeprecationWarning: The epoch2num function was deprecated in Matplotlib 3.3 and will be removed two minor releases later. base = dates.epoch2num(dt.asi8 / 1.0e9)" and the X-axis values change from years to some random numbers import matplotlib.pyplot as plt from pandas_datareader import data AMZ = data.DataReader('AMZN', start='2011', end='2018', data_source='yahoo') AMZ = AMZ['Close'] AMZ.plot() AMZ.resample('BA').mean().plot(style=':') AMZ.asfreq('BA').plot(style='--') plt.show() | This was caused by a temporary bad interaction between Matplotlib and Pandas and is fixed in both projects. To work around it until the new versions are available: plt.rcParams['date.epoch'] = '0000-12-31' | 8 | 10 |
63,129,163 | 2020-7-28 | https://stackoverflow.com/questions/63129163/how-can-i-correctly-include-a-path-dependency-in-pyproject-toml | I have 2 projects structured as below: /abc-lib / abc / __init__.py / main.py / pyproject.toml /abc-web-api / src / __init__.py / main.py / pyproject.toml I attempted to include abc-lib as a dependency in abc-web-api, thus having a abc-web-api/pyproject.toml as below: [tool.poetry] name = "abc-web-api" version = "0.0.1" description = "Some description." authors = ["Someone <[email protected]>"] repository = "https://github.com/someone/abc-web-api" readme = "README.md" [tool.poetry.scripts] serve = "src.main:app" [tool.poetry.dependencies] python = "~3.6.8" abc-lib = { path="../abc-lib" } [tool.poetry.dev-dependencies] pytest = "^3.10.1" yapf = "^0.30.0" flake8 = "^3.8.3" [build-system] requires = ["poetry>=0.12"] build-backend = "poetry.masonry.api" When I execute poetry install, I receive the following message: Package operations: 1 installs, 0 updates, 0 removals - Installing abc-lib (1.0.0 ../abc-lib) [ModuleOrPackageNotFound] No file/folder found for package abc-lib The version number shown in the "Installing" statement is correct, so I am quite confused about the meaning of [ModuleOrPackageNotFound]. Does anyone know how can I resolve it? Thanks | Your folder structure looks a bit weird. It looks like your prefer the "src" variant. So I would suggest the following: ./ ├── abc-lib │ ├── pyproject.toml │ └── src │ └── abc_lib │ ├── __init__.py │ └── main.py └── abc-web-api ├── pyproject.toml └── src └── abc_web_api ├── __init__.py └── main.py With this pyproject.toml in abc-lib: [tool.poetry] name = "abc-lib" version = "0.1.0" description = "" authors = ["Someone <[email protected]>"] [tool.poetry.dependencies] python = "^3.6" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry>=1.0"] build-backend = "poetry.masonry.api" And this in abc-web-api: [tool.poetry] name = "abc-web-api" version = "0.1.0" description = "" authors = ["Someone <[email protected]>"] [tool.poetry.dependencies] python = "^3.6" abc-lib = {path = "../abc-lib"} [tool.poetry.dev-dependencies] [build-system] requires = ["poetry>=1.0"] build-backend = "poetry.masonry.api" | 13 | 11 |
63,115,867 | 2020-7-27 | https://stackoverflow.com/questions/63115867/isolation-forest-vs-robust-random-cut-forest-in-outlier-detection | I am examining different methods in outlier detection. I came across sklearn's implementation of Isolation Forest and Amazon sagemaker's implementation of RRCF (Robust Random Cut Forest). Both are ensemble methods based on decision trees, aiming to isolate every single point. The more isolation steps there are, the more likely the point is to be an inlier, and the opposite is true. However, even after looking at the original papers of the algorithms, I am failing to understand exactly the difference between both algorithms. In what way do they work differently? Is one of them more efficient than the other? EDIT: I am adding the links to the research papers for more information, as well as some tutorials discussing the topics. Isolation Forest: Paper Tutorial Robust Random Cut Forest: Paper Tutorial | In part of my answers I'll assume you refer to Sklearn's Isolation Forest. I believe those are the 4 main differences: Code availability: Isolation Forest has a popular open-source implementation in Scikit-Learn (sklearn.ensemble.IsolationForest), while both AWS implementation of Robust Random Cut Forest (RRCF) are closed-source, in Amazon Kinesis and Amazon SageMaker. There is an interesting third party RRCF open-source implementation on GitHub though: https://github.com/kLabUM/rrcf ; but unsure how popular it is yet Training design: RRCF can work on streams, as highlighted in the paper and as exposed in the streaming analytics service Kinesis Data Analytics. On the other hand, the absence of partial_fit method hints me that Sklearn's Isolation Forest is a batch-only algorithm that cannot readily work on data streams Scalability: SageMaker RRCF is more scalable. Sklearn's Isolation Forest is single-machine code, which can nonetheless be parallelized over CPUs with the n_jobs parameter. On the other hand, SageMaker RRCF can be used over one machine or multiple machines. Also, it supports SageMaker Pipe mode (streaming data via unix pipes) which makes it able to learn on much bigger data than what fits on disk the way features are sampled at each recursive isolation: RRCF gives more weight to dimension with higher variance (according to SageMaker doc), while I think isolation forest samples at random, which is one reason why RRCF is expected to perform better in high-dimensional space (picture from the RRCF paper) | 8 | 12 |
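To make the code-availability point concrete, here is a minimal scikit-learn Isolation Forest sketch on synthetic, purely illustrative data; there is no comparable first-party open-source snippet for SageMaker's RRCF beyond the third-party rrcf package linked above.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
X_train = rng.normal(0, 1, size=(500, 4))                # mostly "normal" points
X_test = np.vstack([rng.normal(0, 1, size=(5, 4)),       # a few inliers
                    rng.uniform(6, 8, size=(5, 4))])     # a few obvious outliers

clf = IsolationForest(n_estimators=100, contamination="auto",
                      random_state=42, n_jobs=-1)         # n_jobs parallelizes over CPUs
clf.fit(X_train)

print(clf.predict(X_test))        # +1 = inlier, -1 = outlier
print(clf.score_samples(X_test))  # lower scores = more anomalous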
63,128,641 | 2020-7-28 | https://stackoverflow.com/questions/63128641/weighted-average-of-pytorch-tensors | I have two Pytorch tensors of the form [y11, y12] and [y21, y22]. How do I get the weighted mean of the two tensors? | You can scale each tensor by its weight, add them with torch.add, and then take the mean of the resulting tensor with torch.mean. Assuming a weight of 0.6 for tensor1 ([y11, y12]) and 0.4 for tensor2 ([y21, y22]), for example: tensor1 = tensor1 * 0.6 # multiplying with its weight tensor2 = tensor2 * 0.4 # multiplying with its weight pt_addition_result_ex = tensor1.add(tensor2) # addition of the two weighted tensors torch.mean(pt_addition_result_ex) # mean of the resulting tensor | 9 | 8 |
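A small runnable version of the idea above, with concrete example values filled in (the 0.6/0.4 weights are kept from the answer purely as an example).

import torch

t1 = torch.tensor([1.0, 2.0])     # plays the role of [y11, y12]
t2 = torch.tensor([3.0, 4.0])     # plays the role of [y21, y22]
w1, w2 = 0.6, 0.4                 # example weights, w1 + w2 == 1

weighted = w1 * t1 + w2 * t2      # element-wise weighted combination -> tensor([1.8, 2.8])
print(weighted)
print(torch.mean(weighted))       # scalar weighted mean over all elements -> tensor(2.3)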
63,132,402 | 2020-7-28 | https://stackoverflow.com/questions/63132402/how-to-combine-numeric-columns-in-pandas-dataframe-with-nan | I have a dataframe with this format: ID measurement_1 measurement_2 0 3 NaN 1 NaN 5 2 NaN 7 3 NaN NaN I want to combine to: ID measurement measurement_type 0 3 1 1 5 2 2 7 2 For each row there will be a value in either measurement_1 or measurement_2 column, not in both, the other column will be NaN. In some rows both columns will be NaN. I want to add a column for the measurement type (depending on which column has the value) and take the actual value out of both columns, and remove the rows that have NaN in both columns. Is there an easy way of doing this? Thanks! | Use DataFrame.stack to reshape the dataframe then use reset_index and use DataFrame.assign to assign the column measurement_type by using Series.str.split + Series.str[:1] on level_1: df1 = ( df.set_index('ID').stack().reset_index(name='measurement') .assign(mesurement_type=lambda x: x.pop('level_1').str.split('_').str[-1]) ) Result: print(df1) ID measurement mesurement_type 0 0 3.0 1 1 1 5.0 2 2 2 7.0 2 | 12 | 12 |
63,109,897 | 2020-7-27 | https://stackoverflow.com/questions/63109897/keyerror-requested-level-date-does-not-match-index-name-none | I got error while reshaping my dataframe data. KeyError: 'Requested level (date) does not match index name (None)' More details are as below: # dataframe # print(df.head(3)) ... account_id entity ae is_pc is_new_customer agency related_entity type medium our_side_entity settlement_title settlement_short_title settlement_type system_value account_status date sale 12323 entity1 ae1 PC yes MB EC TWITTER our_side_entity1 settlement_title settlement_short_title 1 0.2 active 2020-07-01 jimmy 12323 entity1 ae1 PC yes MB EC GOOGLE our_side_entity2 settlement_title settlement_short_title 1 0.5 active 2020-07-02 jimmy 1037093 Bentity1 ae1 PC yes MB APP Google our_side_entity3 settlement_title settlement_short_title 2 0 disable 2020-07-03 jimmy 1037093 Bentity1 ae1 PC yes MB APP Google our_side_entity3 settlement_title settlement_short_title 2 2020-07-04 jimmy 1037093 Bentity1 ae1 PC yes MB APP Google our_side_entity3 settlement_title settlement_short_title 2 2020-07-05 jimmy ... Then I want group by account, date and sum the total system_value of the account. I tried with below codes but failed: indices = OrderedDict([ ('account_id', 'ID'), ('entity', 'entity'), ('ae', 'AE'), ('is_pc', 'PC'), ('is_new_customer', 'new_customer'), ('agency', 'agency'), ('related_entity', 'related_entity'), ('type', 'type'), ('medium', 'medium'), ('our_side_entity', 'our_side_entity'), ('settlement_title', 'settlement_title'), ('settlement_short_title', 'settlement_short_title'), ('settlement_type', 'settlement_type'), ('account_status', 'account_status'), ('sale', 'sale'), ('date', 'date'), ]) df = df.groupby(list(indices.keys())).system_value.sum() \ .unstack('date', fill_value=None) \ .assign(total=lambda x: x.sum(1)) \ .reset_index() print(df) df = df.rename(columns=indices). \ set_index(indices['account_id']) error like below: KeyError: 'Requested level (date) does not match index name (None)' Could you please tell me what's wrong with my trial? Thanks. 
Update more details of my trial Below codes can reproduce the error all the time import pandas as pd from collections import OrderedDict s = [ {'account_id': '123123213', 'entity': 'entity2', 'ae': 'ae1', 'is_pc': 'PC', 'is_new_customer': 'yes', 'agency': 'BV', 'related_entity': None, 'type': 'EC', 'medium': 'Facebook', 'our_side_entity': 'our_side_entity', 'settlement_title': 'settlement_title', 'settlement_short_title': 'SS', 'settlement_type': 'unknown', 'system_value': None, 'account_status': None, 'date': '2020-07-22', 'sale': 'sale1'}, ] indices = OrderedDict([ ('account_id', 'ID'), ('entity', 'Entity'), ('ae', 'AE'), ('is_pc', 'PC'), ('is_new_customer', 'NEW_CUSTOMER'), ('agency', 'agency'), ('related_entity', 'related_entity'), ('type', 'type'), ('medium', 'medium'), ('our_side_entity', 'our_side_entity'), ('settlement_title', 'settlement_title'), ('settlement_short_title', 'settlement_short_title'), ('settlement_type', 'settlement_type'), ('sale', 'sale'), ('date', 'date'), ]) df = pd.DataFrame.from_records(s) # print df.to_dict() {'account_id': {0: '123123213'}, 'entity': {0: 'entity2'}, 'ae': {0: 'ae1'}, 'is_pc': {0: 'PC'}, 'is_new_customer': {0: 'yes'}, 'agency': {0: 'BV'}, 'related_entity': {0: None}, 'type': {0: 'EC'}, 'medium': {0: 'Facebook'}, 'our_side_entity': {0: 'our_side_entity'}, 'settlement_title': {0: 'settlement_title'}, 'settlement_short_title': {0: 'SS'}, 'settlement_type': {0: 'unknown'}, 'system_value': {0: None}, 'account_status': {0: None}, 'date': {0: '2020-07-22'}, 'sale': {0: 'sale1'}} df = df.groupby(list(indices.keys())).system_value.sum() \ .unstack('date', fill_value=None) \ .assign(total=lambda x: x.sum(1)) \ .reset_index() indices["account_status"] = "status" df = df.rename(columns=indices). \ set_index(indices['account_id']) print(df) | You are grouping by columns with all none values. In your example, the value for related_entity is None, which leads to an empty dataframe: In [7]: df.groupby(list(indices.keys())).sum() Out[7]: Empty DataFrame Columns: [] Index: [] I suggest that you remove this column from the groupby clause [EDIT]: to set the value of related_entity to the value of entity, you can simply do: df['related_entity'] = df['entity'] Or assuming you have some values in it that you don't want to replace: df['related_entity'] = df['related_entity'].fillna(df['entity']) | 7 | 5 |
63,127,402 | 2020-7-28 | https://stackoverflow.com/questions/63127402/how-to-get-the-time-of-5-minutes-ago-in-python | I'm using Python to get the time from 5 minutes ago. Code: import datetime import time now = datetime.datetime.now() current_time = now.strftime(f"%Y-%m-%d %H:%M") print(current_time) 2020-07-27 08:35:00 My question is how to get the time from 5 minutes ago, something like current_time - 5 mins. | You may use datetime.timedelta(): datetime.datetime.now() - datetime.timedelta(minutes=5) | 18 | 28 |
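Putting the answer together with the formatting already used in the question:

import datetime

now = datetime.datetime.now()
five_minutes_ago = now - datetime.timedelta(minutes=5)

print(now.strftime("%Y-%m-%d %H:%M"))               # e.g. 2020-07-27 08:35
print(five_minutes_ago.strftime("%Y-%m-%d %H:%M"))  # e.g. 2020-07-27 08:30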
63,109,987 | 2020-7-27 | https://stackoverflow.com/questions/63109987/nameerror-name-mysql-is-not-defined-after-setting-change-to-mysql | I have a running Django blog with sqlite3 db at my local machine. What I want is to convert sqlite3 db to mysql db change Django settings.py file to serve MySQL db Before I ran into the first step, I jumped into the second first. I followed this web page (on MacOS). I created databases called djangolocaldb on root user and have those infos in /etc/mysql/my.cnf like this: # /etc/mysql/my.cnf [client] database=djangolocaldb user=root password=ROOTPASSWORD default-character-set=utf8 Of course I created db, but not table within it. mysql> show databases; +--------------------+ | Database | +--------------------+ | djangolocaldb | | employees | | information_schema | | mydatabase | | mysql | | performance_schema | | sys | +--------------------+ 7 rows in set (0.00 sec) I changed settings.py like this as the web page suggested. Here's how: # settings.py ... # Database # https://docs.djangoproject.com/en/3.0/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', #'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), 'OPTIONS' : { 'read_default_file': '/etc/mysql/my.cnf', } } } ... Now, when I run python manage.py runserver with my venv activated, I got a brutal traceback like this(I ran python manage.py migrate first, and the traceback looked almost the same anyway): (.venv) ➜ django-local-blog git:(master) ✗ python manage.py runserver Watching for file changes with StatReloader Exception in thread django-main-thread: Traceback (most recent call last): File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/MySQLdb/__init__.py", line 18, in <module> from . 
import _mysql ImportError: dlopen(/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/MySQLdb/_mysql.cpython-37m-darwin.so, 2): Library not loaded: @rpath/libmysqlclient.21.dylib Referenced from: /Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/MySQLdb/_mysql.cpython-37m-darwin.so Reason: image not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/gwanghyeongim/.pyenv/versions/3.7.6/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/Users/gwanghyeongim/.pyenv/versions/3.7.6/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/utils/autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run autoreload.raise_last_exception() File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception raise _exception[1] File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 357, in execute autoreload.check_errors(django.setup)() File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/utils/autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/apps/registry.py", line 114, in populate app_config.import_models() File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/apps/config.py", line 211, in import_models self.models_module = import_module(models_module_name) File "/Users/gwanghyeongim/.pyenv/versions/3.7.6/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/contrib/auth/models.py", line 2, in <module> from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/contrib/auth/base_user.py", line 47, in <module> class AbstractBaseUser(models.Model): File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/models/base.py", line 121, in __new__ new_class.add_to_class('_meta', Options(meta, app_label)) File 
"/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/models/base.py", line 325, in add_to_class value.contribute_to_class(cls, name) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/models/options.py", line 208, in contribute_to_class self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/__init__.py", line 28, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/utils.py", line 207, in __getitem__ backend = load_backend(db['ENGINE']) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/utils.py", line 111, in load_backend return import_module('%s.base' % backend_name) File "/Users/gwanghyeongim/.pyenv/versions/3.7.6/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 16, in <module> import MySQLdb as Database File "/Users/gwanghyeongim/Documents/py/coreyMS_pj/django-local-blog/.venv/lib/python3.7/site-packages/MySQLdb/__init__.py", line 24, in <module> version_info, _mysql.version_info, _mysql.__file__ NameError: name '_mysql' is not defined So this NameError: name '_mysql' is not defined is the problem. I installed mysqlclient before, changed settings.py, made db in mysql, but none of the steps made it any helpful yet. And I noticed that even I changed my settings.py back to sqlite3, my blog spit the same _mysql not defined error. So I ended up reverting my commit and now I'm back to sqlite3 (at least my blog is running with it). I'm guessing it could be that I didn't convert data first, but I'm not 100% sure of it. Any suggestion? | So as a full answer: If you use the python package mysqlclient you still need to install the mysql client from Oracle/MySQL. This contains the C-library that the python package uses. To make things more confusing: the python package is in fact written in C for speed increases. To install this library on MacOS: % brew install mysql-client There's also a pure python package, with a more attractive MIT License, which can be a solution if your company or client does not allow GPL. However, it's not officially supported and some subtle bugs can occur in between releases. YMMV. | 40 | 14 |
63,109,860 | 2020-7-27 | https://stackoverflow.com/questions/63109860/how-to-install-python-packages-for-spyder | I am using the IDE called Spyder to learn Python. I would like to know how to go about installing Python packages for Spyder. | Step 1: Open Spyder and click Tools --> Open command prompt. For more details, visit this link: https://miamioh.instructure.com/courses/38817/pages/downloading-and-installing-packages | 23 | 9 |
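Once that command prompt (or Spyder's IPython console) is open, packages are installed with pip against the same interpreter Spyder is running; a small sketch, with requests used only as an example package name.

# Run inside Spyder's IPython console or the command prompt opened from Tools.
# sys.executable ensures pip installs into the interpreter Spyder itself uses.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])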
63,106,109 | 2020-7-26 | https://stackoverflow.com/questions/63106109/how-to-display-graphs-of-loss-and-accuracy-on-pytorch-using-matplotlib | I am new to pytorch, and i would like to know how to display graphs of loss and accuraccy And how exactly should i store these values,knowing that i'm applying a cnn model for image classification using CIFAR10. here is my current implementation : def train(num_epochs,optimizer,criterion,model): for epoch in range(num_epochs): for i, (images, labels) in enumerate(trainloader): # origin shape: [4, 3, 32, 32] = 4, 3, 1024 # input_layer: 3 input channels, 6 output channels, 5 kernel size images = images.to(device) labels = labels.to(device) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (i+1) % 2000 == 0: print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}') PATH = './cnn.pth' torch.save(model.state_dict(), PATH) def test (): with torch.no_grad(): n_correct = 0 n_samples = 0 n_class_correct = [0 for i in range(10)] n_class_samples = [0 for i in range(10)] for images, labels in testloader: images = images.to(device) labels = labels.to(device) outputs = model(images) # max returns (value ,index) _, predicted = torch.max(outputs, 1) n_samples += labels.size(0) n_correct += (predicted == labels).sum().item() for i in range(batch_size): label = labels[i] pred = predicted[i] if (label == pred): n_class_correct[label] += 1 n_class_samples[label] += 1 acc = 100.0 * n_correct / n_samples print(f'Accuracy of the network: {acc} %') for i in range(10): acc = 100.0 * n_class_correct[i] / n_class_samples[i] print(f'Accuracy of {classes[i]}: {acc} %') test_score = np.mean([100 * n_class_correct[i] / n_class_samples[i] for i in range(10)]) print("the score test is : {0:.3f}%".format(test_score)) return acc | What you need to do is: Average the loss over all the batches and then append it to a variable after every epoch and then plot it. Implementation would be something like this: import matplotlib.pyplot as plt def my_plot(epochs, loss): plt.plot(epochs, loss) def train(num_epochs,optimizer,criterion,model): loss_vals= [] for epoch in range(num_epochs): epoch_loss= [] for i, (images, labels) in enumerate(trainloader): # rest of the code loss.backward() epoch_loss.append(loss.item()) # rest of the code # rest of the code loss_vals.append(sum(epoch_loss)/len(epoch_loss)) # rest of the code # plotting my_plot(np.linspace(1, num_epochs, num_epochs).astype(int), loss_vals) my_plot([1, 2, 3, 4, 5], [100, 90, 60, 30, 10]) You can do a similar calculation for accuracy. | 7 | 10 |
63,103,703 | 2020-7-26 | https://stackoverflow.com/questions/63103703/calculate-min-and-max-value-of-a-transition-with-index-of-first-occurrence-in-pa | I have a DataFrame: df = pd.DataFrame({'ID':['a','b','d','d','a','b','c','b','d','a','b','a'], 'sec':[3,6,2,0,4,7,10,19,40,3,1,2]}) print(df) ID sec 0 a 3 1 b 6 2 d 2 3 d 0 4 a 4 5 b 7 6 c 10 7 b 19 8 d 40 9 a 3 10 b 1 11 a 2 I want to calculate how many times a transition has occurred. Here in the ID column a->b is considered as a transition, similarly for b->d, d->d, d->a, b->c, c->b, b->a. I can do this using Counter like: Counter(zip(df['ID'].to_list(),df['ID'].to_list()[1:])) Counter({('a', 'b'): 3, ('b', 'd'): 2, ('d', 'd'): 1, ('d', 'a'): 2, ('b', 'c'): 1, ('c', 'b'): 1, ('b', 'a'): 1}) I also need to get min and max of the sec column of those transitions. Here for example a->b has occurred 3 times out of them min sec value is 1 and max sec value is 7. Also I want to get where this transition first occurred for a->b its 0. For the transition_index column I consider the first value of a transition, i.e. index of a and for calculating, min, max I take the second value of the transition, i.e. value at b. Here is the final output I want to get: df = pd.DataFrame({'ID_1':['a','b','d','d','b','c','b'], 'ID_2':['b','d','d','a','c','b','a'], 'sec_min':[1,2,0,3,10,19,2], 'sec_max':[7,40,0,4,10,19,2], 'transition_index':[0,1,2,3,5,6,10], 'count':[3,2,1,2,1,1,1]}) print(df) ID_1 ID_2 sec_min sec_max transition_index count 0 a b 1 7 0 3 1 b d 2 40 1 2 2 d d 0 0 2 1 3 d a 3 4 3 2 4 b c 10 10 5 1 5 c b 19 19 6 1 6 b a 2 2 10 1 How can I achieve this in Python? Also I have a huge amount of data, so I'm looking for the fastest way possible. | You have transitions of the form from -> to. 'transition_index' is based on the index of the "from" row, while the 'sec' aggregations are based on the value associated with the "to" row. We can shift the index and group on the ID and the shifted the ID, allowing us to use a single groupby with named aggregations to get the desired output. df = df.reset_index() df['index'] = df['index'].shift().astype('Int64') (df.groupby([df['ID'].shift(1).rename('ID_1'), df['ID'].rename('ID_2')], sort=False) .agg(sec_min=('sec', 'min'), sec_max=('sec', 'max'), transition_index=('index', 'first'), count=('sec', 'size')) .reset_index() ) ID_1 ID_2 sec_min sec_max transition_index count 0 a b 1 7 0 3 1 b d 2 40 1 2 2 d d 0 0 2 1 3 d a 3 4 3 2 4 b c 10 10 5 1 5 c b 19 19 6 1 6 b a 2 2 10 1 | 10 | 10 |
63,101,601 | 2020-7-26 | https://stackoverflow.com/questions/63101601/import-error-no-module-named-utils-when-using-pickle-load | I first dump some objects into a pickle file using pickle.dump in utils.load_data; my project hierarchy looks like this project1 -utils -__init__.py -load_data.py -data (other folder...) It outputs a pickle file into a data folder. Then I move the .pickle file to another project, whose hierarchy is project2 -data -main.py When I run a pickle.load() operation in this main.py, it raises the error in the title. However, if I move main.py back to the project1 folder, the error disappears, so the issue must come from the file. My question is, why does pickle try to import the package from where it was born? Could anyone share a good explanation for this? I got quite confused. | By default, unpickling will import any class that it finds in the pickle data. This means that if you have pickled a custom class and you are unpickling it somewhere else, pickle will try to import the module (utils in this case). So you need to have the utils module inside the project2 folder. Follow this for more information | 6 | 10 |
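A sketch of the two usual fixes in project2/main.py: either make the original utils package importable again by putting project1 on sys.path (shown below), or copy the utils package into project2. The concrete paths and pickle file name are assumptions based on the layout in the question.

# project2/main.py -- adjust the path to wherever project1 actually lives.
import pickle
import sys

sys.path.insert(0, "/path/to/project1")        # makes `import utils` resolvable again

with open("data/my_object.pickle", "rb") as f: # hypothetical pickle file name
    obj = pickle.load(f)                       # unpickling imports utils.load_data under the hood
print(obj)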
63,103,090 | 2020-7-26 | https://stackoverflow.com/questions/63103090/how-do-i-count-specific-values-across-multiple-columns-in-pandas | I have the DataFrame df = pd.DataFrame({ 'colA':['?',2,3,4,'?'], 'colB':[1,2,'?',3,4], 'colC':['?',2,3,4,5] }) I would like to get the count of the number of '?' in each column and return the following output - colA - 2 colB - 1 colC - 1 Is there a way to return this output at once? Right now the only way I know how to do it is to write a for loop for each column. | It looks like the simple way is df[df == '?'].count() the result is colA 2 colB 1 colC 1 dtype: int64 where df[df == '?'] gives us a DataFrame with '?' and NaN colA colB colC 0 ? NaN ? 1 NaN NaN NaN 2 NaN ? NaN 3 NaN NaN NaN 4 ? NaN NaN and count() then counts the non-NA cells in each column. Please also look at the other solutions: they are more readable and faster | 6 | 10 |
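One of the other, more idiomatic solutions is presumably the eq/sum idiom, which counts matches directly instead of building the intermediate NaN frame; shown here on the same toy DataFrame.

import pandas as pd

df = pd.DataFrame({'colA': ['?', 2, 3, 4, '?'],
                   'colB': [1, 2, '?', 3, 4],
                   'colC': ['?', 2, 3, 4, 5]})

# eq builds a boolean mask, sum counts the True values column by column
print(df.eq('?').sum())
# colA    2
# colB    1
# colC    1
# dtype: int64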
63,101,913 | 2020-7-26 | https://stackoverflow.com/questions/63101913/unable-to-parse-string-at-position-0-problem | I use """Data taken from https://datos.gob.mx/busca/organization/conapo and https://es.wikipedia.org/wiki/Anexo:Entidades_federativas_de_M%C3%A9xico_por_superficie,_poblaci%C3%B3n_y_densidad """ total_population_segmentation = pd.read_html('professional_segmentation_mexico.html') population_segmentation = pd.read_html('population_segmentation.html') followed by total_population_segmentation = population_segmentation[2] total_population_segmentation = total_population_segmentation['Población histórica de México'] total_population_segmentation = total_population_segmentation.drop('Pos',axis=1) total_population_segmentation = total_population_segmentation.sort_values('Entidad').reset_index().drop('index',axis=1) Therefore, I am working with the following DataFrame total_population_segmentation.head(5) I used total_population_segmentation.dtypes and I got Entidad object 2010 object 2015 object 2020 object 2025 object 2030 object dtype: object I used pd.to_numeric(total_population_segmentation['2010']) to check if it works but I got ValueError Traceback (most recent call last) pandas\_libs\lib.pyx in pandas._libs.lib.maybe_convert_numeric() ValueError: Unable to parse string "1 195 787" During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-202-28db64f185e1> in <module>() ----> 1 pd.to_numeric(total_population_segmentation['2010']) ~\Anaconda3\lib\site-packages\pandas\core\tools\numeric.py in to_numeric(arg, errors, downcast) 148 try: 149 values = lib.maybe_convert_numeric( --> 150 values, set(), coerce_numeric=coerce_numeric 151 ) 152 except (ValueError, TypeError): pandas\_libs\lib.pyx in pandas._libs.lib.maybe_convert_numeric() ValueError: Unable to parse string "1 195 787" at position 0 When I look at each one of the values I obtain data that is decoded differently In [1]: total_population_segmentation['2010'][4] Out[1]: '4\xa0933\xa0755' How can I convert this type of data to float? | It looks like you've got the NO-BREAK SPACE character (\xa0) in your strings. You should normalize your data first and then convert it from string to a numeric type. One way to do this (shown for just one column) is: $ df = pd.DataFrame([ {'Entidad':'BajaCaliforniaSur', '2010': '3\xa0224\xa0884', '2015': '763\xa0321', '2030': '763\xa0321'}, {'Entidad':'BajaCaliforniaSur', '2010': '5\xa0224\xa0684', '2015': '763\xa0321', '2030': '763\xa0321'}, {'Entidad':'BajaCaforniaSur', '2010': '4\xa0214\xa0784' , '2015': '762\xa0321', '2030': '762\xa0321'}, {'Entidad':'BajaCaorniaSur', '2010': '8\xa0234\xa0684' , '2015': '761\xa0321', '2030': '761\xa0321'}, {'Entidad':'BajaCaorniaSur', '2010': '8\xa0234\xa0684' , '2015': '761\xa0321', '2030': '761\xa0321'}, {'Entidad':'BajaCalrniaSur', '2010': '2\xa0274\xa0084' , '2015': '769\xa0321', '2030': '769\xa0321'}]) $ from unidecode import unidecode $ df['2010'][0] '3\xa0224\xa0884' $ df['2010'] = df['2010'].apply(lambda x: (unidecode(x).replace(' ',''))).astype(float) $ df['2010'][0] 3224884.0 | 7 | 2
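If pulling in the unidecode dependency is not desirable, a lighter-weight sketch (my suggestion, not the accepted approach) is to strip the non-breaking spaces with pandas' vectorized string methods before converting:

```python
import pandas as pd

# Small sample mimicking the question's values
s = pd.Series(['1\xa0195\xa0787', '4\xa0933\xa0755'])

# Drop both NO-BREAK SPACE (\xa0) and ordinary spaces, then convert to numbers
cleaned = s.str.replace('\xa0', '', regex=False).str.replace(' ', '', regex=False)
numbers = pd.to_numeric(cleaned)
print(numbers)
# 0    1195787
# 1    4933755
# dtype: int64
```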
63,096,810 | 2020-7-26 | https://stackoverflow.com/questions/63096810/python-destructor-called-in-the-wrong-order-based-on-reference-count | As far as I understand Python destructors should be called when the reference count of an object reaches 0. But this assumption seems not to be correct. Look at the following code: class A: def __init__(self, b): self.__b = b print("Construct A") def __del__(self): # It seems that the destructor of B is called here. print("Delete A") # But it should be called here class B: def __init__(self): print("Construct B") def __del__(self): print("Delete B") b = B() a = A(b) Outputs Construct B Construct A Delete B Delete A But A has a reference to B, so I would expect the following output: Construct B Construct A Delete A Delete B What am I not getting? | So, since the objects are still alive when the interpreter shuts down, you are actually not even guaranteed that __del__ will be called. At this point, the language makes no guarantees about when the finalizer is called. From the docs: It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits. Note, if you change the script to: (py38) 173-11-109-137-SFBA:~ juan$ cat test.py class A: def __init__(self, b): self.__b = b print("Construct A") def __del__(self): # It seems that the destructor of B is called here. print("Delete A") # But it should be called here class B: def __init__(self): print("Construct B") def __del__(self): print("Delete B") b = B() a = A(b) del a del b Then, executed: (py38) 173-11-109-137-SFBA:~ juan$ python test.py Construct B Construct A Delete A Delete B Although del does not delete objects, it deletes references, so it forces the reference count to reach 0 while the interpreter is still running, so the order is as you would expect. Sometimes, __del__ won't be called at all. A common circumstance is file-objects created by f = open('test.txt') That have live references in the global scope. If not closed explicitly, it might not call __del__ and the file will not flush and you won't get anything written. Which is a great reason to use the file object as a context-manager... | 9 | 6 |
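A tiny sketch of the context-manager point made at the end of the answer (the file name is arbitrary): the with block closes, and therefore flushes, the file deterministically instead of relying on __del__ at interpreter exit.

```python
# Relying on __del__ to flush a file at interpreter shutdown is not guaranteed;
# a context manager closes (and flushes) it as soon as the block ends.
with open('test.txt', 'w') as f:
    f.write('this line is written regardless of when finalizers run\n')
```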
63,094,078 | 2020-7-25 | https://stackoverflow.com/questions/63094078/pandas-drop-duplicate-rows-including-index | I know how to drop duplicate rows based on column data. I also know how to drop duplicate rows based on row index. My question is: is there a way to drop duplicate rows based on index and one column? Thanks! | This can be done by turning the index into a column. Below is a sample data set (fyi, I think someone downvoted your question because it didn't include a sample data set): df=pd.DataFrame({'a':[1,2,2,3,4,4,5], 'b':[2,2,2,3,4,5,5]}, index=[0,1,1,2,3,5,5]) Output: a b 0 1 2 1 2 2 1 2 2 2 3 3 3 4 4 5 4 5 5 5 5 Then you can use the following line. The first reset_index() makes a new column with the index numbers. Then you can drop duplicates based on the new index column and the other column (b in this case). Afterward, you can set the index to the original index values with set_index('index'): df.reset_index().drop_duplicates(subset=['index','b']).set_index('index') Output: a b index 0 1 2 1 2 2 2 3 3 3 4 4 5 4 5 | 7 | 6
63,093,045 | 2020-7-25 | https://stackoverflow.com/questions/63093045/keras-softmax-output-and-accuracy | This is the last layer of a Keras model. model.add(Dense(3, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) I know that the output of the softmax layer is an array, with probability summing up to 1, such as [0.1, 0.4, 0.5]. I have one question about using accuracy as a metric. e.g., when the true class is [0, 0, 1] and predicted probability is [0.1, 0.4, 0.5], even if 0.5 is the largest probability, the accuracy of this prediction should be 0, because 0.5 != 1. Is that correct? More generally, when the output layer activation is softmax, we will normally get floating probability predictions, and in very very little chance will we get integer probability predictions like [0, 0, 1]. So we can't use accuracy as a metric when using softmax as activation. Is that correct? | e.g., when the true class is [0, 0, 1] and predicted probability is [0.1, 0.4, 0.5], even if 0.5 is the largest probability, the accuracy of this prediction should be 0, because 0.5 != 1. Is that correct? No. You treat the index with the maximum value as the prediction of the model. So in your example, this sample prediction would count towards increasing the accuracy. This is normally called Top-1 accuracy. In image classification, the Top-5 accuracy is also often used (the top 5 maximum values in the softmax layer are treated as guesses of the NN and they are considered for the accuracy). More generally, when the output layer activation is softmax, we will normally get floating probability predictions, and in very very little chance will we get integer probability predictions like [0, 0, 1]. So we can't use accuracy as a metric when using softmax as activation. Is that correct? Technically speaking, you will never get integer values for the softmax layer output since the type is float. But yeah, there's a very teeny tiny chance of getting [0.0, 0.0, 1.0]. And this assumption of yours is incorrect since the premise does not hold. Nevertheless, accuracy is a valid metric when using Softmax as the classification layer of a neural network. | 8 | 6 |
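A small illustration of the top-1 convention described in the answer, using made-up predictions (a sketch, not from the original answer):

```python
import numpy as np

y_true = np.array([[0, 0, 1],
                   [1, 0, 0]])
y_pred = np.array([[0.1, 0.4, 0.5],   # argmax is index 2, matches the true class
                   [0.3, 0.6, 0.1]])  # argmax is index 1, true class is 0 -> miss

# Top-1 accuracy compares the index of the largest probability with the true class
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))
print(accuracy)  # 0.5
```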
63,087,188 | 2020-7-25 | https://stackoverflow.com/questions/63087188/can-you-run-google-colab-on-your-local-computer | My PC is rocking a 2080TI so I don't really need the GPU computation of Google Colab, but I do find it a nice development environment (in comparison to jupyter notebooks) and I like the fact that I can access my files from anywhere, so, is it possible to use Google Colab but let my local PC do the computation? | Steps: Go to Google Colab and click on "Connect to local runtime"; a pop-up appears. Go to your terminal and execute: jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0 If this shows an error: ERROR: the notebook server could not be started because port 8888 is not available. then run the following to kill whatever process is using the port (or use another port): lsof -wni tcp:8888 kill -9 <JOB_ID> If successful, the command gives a link, for example: http://localhost:8888/?token=bb80b05aef71999353fe4715e0f06be40d22911648dbdcd6 Paste it into the pop-up in Colab and you are set to go. | 10 | 9
63,076,512 | 2020-7-24 | https://stackoverflow.com/questions/63076512/how-to-set-a-help-message-for-a-flask-command-group | I'm trying to adapt an example from Flask documentation to create a custom command in a group: import click from flask import Flask from flask.cli import AppGroup app = Flask(__name__) user_cli = AppGroup('user') @user_cli.command('create') @click.argument('name') def create_user(name): ... app.cli.add_command(user_cli) $ flask user create demo This appears to work fine, however when I run flask --help I see the commands listed without any help messages, e.g.: Commands: user foo db Perform database migrations. How can I add a help message to a group of commands ('user' in this case)? | Use the short_help parameter. AppGroup inherits from Group which inherits from MultiCommand which inherits from Command. See Click source code for Command. For example: import click from flask import Flask from flask.cli import AppGroup user_cli = AppGroup('user', short_help="Adds a user") @user_cli.command('create') @click.argument('name') def create_user(name): print(name) app = Flask(__name__) app.cli.add_command(user_cli) @app.route('/') def hello_world(): return 'Hello World!' if __name__ == '__main__': app.run() Gives the following output (Windows PyCharm terminal): (flask_cli_group) D:\Paul\PycharmProjects\flask_cli_group>flask Usage: flask [OPTIONS] COMMAND [ARGS]... A general utility script for Flask applications. Provides commands from Flask, extensions, and the application. Loads the application defined in the FLASK_APP environment variable, or from a wsgi.py file. Setting the FLASK_ENV environment variable to 'development' will enable debug mode. > set FLASK_APP=hello.py > set FLASK_ENV=development > flask run Options: --version Show the flask version --help Show this message and exit. Commands: routes Show the routes for the app. run Run a development server. shell Run a shell in the app context. user Adds a user (flask_cli_group) D:\Paul\PycharmProjects\flask_cli_group> | 10 | 5 |
63,068,639 | 2020-7-24 | https://stackoverflow.com/questions/63068639/valueerror-unknown-layer-functional | I made a CNN in colab and saved the models at every epoch. I exported the h5 file and now am trying to run the model on some test images. Here's the main error: ValueError: Unknown layer: Functional Here's the code I used to run the model and save at each epoch: epochs = 50 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir='./logs'), keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"), ] model.compile( optimizer=keras.optimizers.Adam(1e-3), loss="binary_crossentropy", metrics=["accuracy"], ) model.fit( train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds, ) After the model ran I just downloaded the h5 file from the colab sidebar locally. I re-uploaded the file from the local disk, and here's how I'm trying to load the model: # load and evaluate a saved model from tensorflow.keras.models import load_model # load model# loaded_model = load_model('save_at_47.h5') loaded_model.layers[0].input_shape Here's the full traceback: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-4-6af7396280fa> in <module>() 3 4 # load model# ----> 5 loaded_model = load_model('save_at_47.h5') 6 loaded_model.layers[0].input_shape 5 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile) 182 if (h5py is not None and ( 183 isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))): --> 184 return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) 185 186 if sys.version_info >= (3, 4) and isinstance(filepath, pathlib.Path): /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile) 176 model_config = json.loads(model_config.decode('utf-8')) 177 model = model_config_lib.model_from_config(model_config, --> 178 custom_objects=custom_objects) 179 180 # set weights /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects) 53 '`Sequential.from_config(config)`?') 54 from tensorflow.python.keras.layers import deserialize # pylint: disable=g-import-not-at-top ---> 55 return deserialize(config, custom_objects=custom_objects) 56 57 /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects) 107 module_objects=globs, 108 custom_objects=custom_objects, --> 109 printable_module_name='layer') /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 360 config = identifier 361 (cls, cls_config) = class_and_config_for_serialized_keras_object( --> 362 config, module_objects, custom_objects, printable_module_name) 363 364 if hasattr(cls, 'from_config'): /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name) 319 cls = get_registered_object(class_name, custom_objects, module_objects) 320 if cls is None: --> 321 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name) 322 323 cls_config = config['config'] ValueError: Unknown layer: Functional It seems there have been several similar questions here, and here. Changing the import method hasn't helped yet, and trying to make some kind of custom object has not worked either. | Rebuilt the network from scratch: image_size = (212, 212) batch_size = 32 data_augmentation = keras.Sequential( [ layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"), layers.experimental.preprocessing.RandomRotation(0.8), ] ) def make_model(input_shape, num_classes): inputs = keras.Input(shape=input_shape) # Image augmentation block x = data_augmentation(inputs) # Entry block x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x) x = layers.Conv2D(32, 3, strides=2, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) x = layers.Conv2D(64, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) previous_block_activation = x # Set aside residual for size in [128, 256, 512, 728]: x = layers.Activation("relu")(x) x = layers.SeparableConv2D(size, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) x = layers.SeparableConv2D(size, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.MaxPooling2D(3, strides=2, padding="same")(x) # Project residual residual = layers.Conv2D(size, 1, strides=2, padding="same")( previous_block_activation ) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual x = layers.SeparableConv2D(1024, 3, padding="same")(x) x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) x = layers.GlobalAveragePooling2D()(x) if num_classes == 2: activation = "sigmoid" units = 1 else: activation = "softmax" units = num_classes x = layers.Dropout(0.5)(x) outputs = layers.Dense(units, activation=activation)(x) return keras.Model(inputs, outputs) model = make_model(input_shape=image_size + (3,), num_classes=2) keras.utils.plot_model(model, show_shapes=False) Loaded the weights: model.load_weights('save_at_47.h5') And ran a prediction on an image: # Running inference on new data img = keras.preprocessing.image.load_img( "le_image.jpg", target_size=image_size ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create batch axis predictions = model.predict(img_array) score = predictions[0] print( "This image is %.2f percent negative and %.2f percent positive." % (100 * (1 - score), 100 * score) ) | 24 | 9
63,068,332 | 2020-7-24 | https://stackoverflow.com/questions/63068332/does-pytorch-allow-to-apply-given-transformations-to-bounding-box-coordinates-of | In Pytorch, I know that certain image processing transformations can be composed as such: import torchvision.transforms as transforms transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) In my case, each image has a corresponding annotation of bounding box coordinates with YOLO format. Does Pytorch allow to apply these transformations to the bounding box coordinates of the image as well, and later save them as new annotations? Thanks. | The transformations that you used as examples do not change the bounding box coordinates. ToTensor() converts a PIL image to a torch tensor and Normalize() is used to normalize the channels of the image. Transformations such as RandomCrop() and RandomRotation() will cause a mismatch between the location of the bounding box and the (modified) image. However, Pytorch makes it very flexible for you to create your own transformations and have control over what happens with the bounding box coordinates. Docs for more details: https://pytorch.org/docs/stable/torchvision/transforms.html#functional-transforms As an example (modified from the documentation): import torchvision.transforms.functional as TF import random def my_rotation(image, bonding_box_coordinate): if random.random() > 0.5: angle = random.randint(-30, 30) image = TF.rotate(image, angle) bonding_box_coordinate = TF.rotate(bonding_box_coordinate, angle) # more transforms ... return image, bonding_box_coordinate Hope that helps =) | 7 | 6 |
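For YOLO-format boxes specifically, a custom paired transform along the lines the answer suggests might look like this sketch (an assumption on my part, with coordinates taken to be normalized (x_center, y_center, w, h) in [0, 1]):

```python
import random
import torchvision.transforms.functional as TF

class RandomHorizontalFlipWithBoxes:
    """Flip a PIL image and mirror its YOLO-format boxes with probability p."""
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, image, boxes):
        if random.random() < self.p:
            image = TF.hflip(image)
            # A horizontal flip only changes x_center: x -> 1 - x
            boxes = [(1.0 - xc, yc, w, h) for (xc, yc, w, h) in boxes]
        return image, boxes
```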
63,065,134 | 2020-7-24 | https://stackoverflow.com/questions/63065134/how-to-reference-self-in-dataclass-fields | I am trying to do the equivalent of: class A: def __init__(self): self.b = self.get_b() def get_b(self): return 1 using @dataclass. I want to use a @dataclass here because there are other (omitted) fields initialized from provided constructor arguments. b is one of the fields that is initialized from an instance method's result. Executing this: @dataclass class A: b = self.get_b() def get_b(self): return 1 shows that self is not defined in b's scope. Is it possible to get a reference to self here? | Use the __post_init__ method. from dataclasses import dataclass, field @dataclass class A: b: int = field(init=False) def __post_init__(self): self.b = self.get_b() Not exactly an improvement, is it? The default value assigned to a field is equivalent to a default parameter value in __init__, e.g., def __init__(self, b=0): self.b = b Just like you can't use self in such a parameter default, you can't use it as the field default value. The default value has to exist before the instance actually exists. We create the field explicitly so that we can pass init=False, preventing the generated __init__ method from expecting an argument to initialize self.b. If the function get_b is really independent of an instance (that is, you don't need an instance of A in order to return the value 1), you can use an already defined function as a default factory. from dataclasses import dataclass, field @dataclass class A: def get_b(self=None): return 1 b: int = field(default_factory=get_b, init=False) Here, get_b will be called as a function with zero arguments (hence the default value for self in the definition) rather than be invoked from a class instance. This is, though, rather unorthodox and I wouldn't recommend structuring your code like this. | 12 | 12 |
63,061,398 | 2020-7-23 | https://stackoverflow.com/questions/63061398/all-arguments-should-have-the-same-length-plotly | I am trying to make a bar graph using plotly.express but I get this error: All arguments should have the same length. The length of argument y is 51, whereas the length of previously-processed arguments ['x'] is 4399 and this is my code: import pandas as pd import plotly.express as px df= pd.read_csv('...../datasets-723010-1257097-fatal-police-shootings-data1.csv.xls') c = df['state'].value_counts() fig =px.bar(c , x = df['state']) fig.show() and this is a sample of my data (shown as an image in the original post). | df['state'] has all the rows from the dataframe, while c only contains a row for each unique value of state. You should use c.index instead: px.bar(y=c, x=c.index) | 7 | 3
63,057,939 | 2020-7-23 | https://stackoverflow.com/questions/63057939/altair-create-a-mark-line-chart-with-a-max-min-band-similar-to-mark-errorband | I've been working to create a chart similar to this EIA Chart (data at linked page): I've seen a similar example using Altair in the line chart with confidence interval band gallery example but I do not see a way to explicitly set the "extent" with my own values using the mark_errorband method. The documentation provides that you can use one of 4 methods to set the extent but I can't figure out how to pass in my own values. The mark_errorband examples make me believe that this must be possible however I am at a loss as to how to accomplish it. I'd appreciate any guidance on how an min-max band may be achieved in Altair. | You can use an area mark with the y and y2 encodings. For example: import altair as alt import pandas as pd import numpy as np x = np.linspace(0, 10) y = np.sin(x) + 0.1 * np.random.randn(len(x)) df = pd.DataFrame({ 'x': x, 'y': y, 'upper': y + 0.5 * (1 + np.random.rand(len(x))), 'lower': y - 0.5 * (1 + np.random.rand(len(x))) }) line = alt.Chart(df).mark_line( color='black' ).encode( x='x', y='y' ) band = alt.Chart(df).mark_area( opacity=0.5, color='gray' ).encode( x='x', y='lower', y2='upper' ) band + line Under the hood, mark_errorband is essentially a macro within Vega-Lite that computes lower/upper bounds and automatically populates the y and y2 encodings for you. | 8 | 12 |