Columns (each record below lists these fields in this order): question_id (int64), creation_date (string), link (string), question (string), accepted_answer (string), question_vote (int64), answer_vote (int64)
62,541,192
2020-6-23
https://stackoverflow.com/questions/62541192/display-pytorch-tensor-as-image-using-matplotlib
I am trying to display an image stored as a pytorch tensor. trainset = datasets.ImageFolder('data/Cat_Dog_data/train/', transform=transforms) trainload = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True) images, labels = iter(trainload).next() image = images[0] image.shape >>> torch.Size([3, 224, 224]) # pyplot doesn't like this, so reshape image = image.reshape(224,224,3) plt.imshow(image.numpy()) This method is displaying a 3 by 3 grid of the same image, always in greyscale. For example: How do I fix this so that the single color image is displayed correctly?
That's very odd. Try putting the channels last by permuting rather than reshaping: image.permute(1, 2, 0) (a runnable sketch follows this entry)
8
16
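A hedged, self-contained sketch of the permute fix from the answer above; the random tensor stands in for images[0] from the DataLoader so the snippet runs on its own:

import torch
import matplotlib.pyplot as plt

image = torch.rand(3, 224, 224)  # stand-in for images[0], shape (C, H, W)

# permute reorders the axes to (H, W, C); reshape would scramble the pixel layout
plt.imshow(image.permute(1, 2, 0).numpy())
plt.show()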
62,537,703
2020-6-23
https://stackoverflow.com/questions/62537703/how-to-find-inflection-point-in-python
I have a histogram of an image in RGB which represents the three curves of the three components R, G and B. I want to find the inflection points of each curve. I tried using the second derivative to find them, but it does not work: the second derivative never equals zero, so the search returns nothing. So how can I find the inflection points? Is there any other method to find them? import os, cv2, random import numpy as np import matplotlib.pyplot as plt import math from sympy import * image = cv2.imread('C:/Users/Xers/Desktop/img.jpg') CHANNELS = ['r', 'g', 'b'] for i, channel in enumerate( CHANNELS ): histogram = cv2.calcHist([image], [i], None, [256], [0,256]) histogram = cv2.GaussianBlur( histogram, (5,5), 0) plt.plot(histogram, color = channel) x= plt.xlim([0,256]) y = plt.ylim([0, 24000]) derivative1= np.diff(histogram, axis=0) derivative2= np.diff(derivative1, axis=0) inf_point = np.where ( derivative2 == 0)[0] print(inf_point) plt.show()
There are two issues of a numerical nature with your code: (1) the data does not seem to be continuous enough to rely on the second derivative computed from two subsequent np.diff() applications; (2) even if it were, the chances of it being exactly 0 are very slim. To address the first point, you should smooth your histogram (e.g. using a uniform or Gaussian filter on the histogram itself). To solve the second point, instead of looking for == 0, look for positive-to-negative (and vice versa) switching points. Here is a minimal example of a possible approach: import numpy as np import matplotlib.pyplot as plt from scipy.ndimage import gaussian_filter1d np.random.seed(0) # generate noisy data raw = np.cumsum(np.random.normal(5, 100, 1000)) raw /= np.max(raw) # smooth smooth = gaussian_filter1d(raw, 100) # compute second derivative smooth_d2 = np.gradient(np.gradient(smooth)) # find switching points infls = np.where(np.diff(np.sign(smooth_d2)))[0] # plot results plt.plot(raw, label='Noisy Data') plt.plot(smooth, label='Smoothed Data') plt.plot(smooth_d2 / np.max(smooth_d2), label='Second Derivative (scaled)') for i, infl in enumerate(infls, 1): plt.axvline(x=infl, color='k', label=f'Inflection Point {i}') plt.legend(bbox_to_anchor=(1.55, 1.0))
12
28
62,518,389
2020-6-22
https://stackoverflow.com/questions/62518389/how-to-convert-a-dataframe-of-counts-to-a-probability-density-function
Suppose that I have the following observations of integers: df = pd.DataFrame({'observed_scores': [100, 100, 90, 85, 100, ...]}) I know that this can be used as an input to make a density plot: df['observed_scores'].plot.density() but suppose that what I have is a counts table: df = pd.DataFrame({'observed_scores': [100, 95, 90, 85, ...], 'counts': [1534, 1399, 3421, 8764, ...]}) which is cheaper to store than the full observed_scores Series (I have LOTS of observations). I know it's possible to plot the histogram using the counts, but how do I plot the density plot? If possible, can it be done without having to unstack/unravel the counts table into thousands of rows?
IIUC (if I understand correctly), statsmodels lets you fit a weighted KDE: import pandas as pd import matplotlib.pyplot as plt from statsmodels.nonparametric.kde import KDEUnivariate df = pd.DataFrame({'observed_scores': [100, 95, 90, 85], 'counts': [1534, 1399, 3421, 8764]}) kde1 = KDEUnivariate(df.observed_scores) kde_noweight = KDEUnivariate(df.observed_scores) kde1.fit(weights=df.counts, fft=False) kde_noweight.fit() plt.plot(kde1.support, kde1.density) plt.plot(kde_noweight.support, kde_noweight.density) plt.legend(['weighted', 'unweighted']) Output: (plot omitted)
8
3
62,536,189
2020-6-23
https://stackoverflow.com/questions/62536189/select-rows-where-value-of-column-a-starts-with-value-of-column-b
I have a pandas dataframe and want to select rows where values of a column start with values of another column. I have tried the following: import pandas as pd df = pd.DataFrame({'A': ['apple', 'xyz', 'aa'], 'B': ['app', 'b', 'aa']}) df_subset = df[df['A'].str.startswith(df['B'])] But it errors out, and the solutions that I found also have not helped. KeyError: "None of [Float64Index([nan, nan, nan], dtype='float64')] are in the [columns]" np.where(df['A'].str.startswith(df['B']), True, False), suggested elsewhere, also returns True for all.
For row wise comparison, we can use DataFrame.apply: m = df.apply(lambda x: x['A'].startswith(x['B']), axis=1) df[m] A B 0 apple app 2 aa aa The reason your code is not working is because Series.str.startswith accepts a character sequence (a string scalar), and you are using a pandas Series. Quoting the docs: pat : str Character sequence. Regular expressions are not accepted.
7
8
62,532,237
2020-6-23
https://stackoverflow.com/questions/62532237/how-can-i-create-an-api-token-on-pypi-for-a-new-project
I am trying to upload a package to PyPI using API tokens. I would like to use a project specific API token instead of an account specific token, as this seems more secure. However, since the project is not created on PyPI yet, there is no project for me to select when I try to create a new API token on the PyPI website. Since I have activated 2-factor authentication, I get an authentication error when trying to upload with twine. This is closely related to How to upload package to PyPi with Two Factor enabled?, except that the accepted answers does not address the particular issue of project versus account API tokens. I have also tried browsing through https://pypi.org/help/, but cannot seem to find any information there. So the question is then, how can I create an API token for a not-yet-created PyPI project?
So the question is then, how can I create an API token for a not-yet-created PyPI project? You cannot. Create and use an account-scoped token for the first upload; once the project exists on PyPI, you can replace it with a project-scoped token.
8
4
62,525,771
2020-6-23
https://stackoverflow.com/questions/62525771/python-range-with-uneven-gap
Today I had a python exam where following question was asked: Given the following code extract, Complete the code so the output is: 10 7 5. nums = list (range (?,?,?)) print(nums) How is it possible to get such output in python using range function?
Not sure if this answers the question, but provided we can fill in any syntax for the ? placeholders, as long as it produces the result: 1st ? = 10 2nd ? = 4 3rd ? = -3))+(([5] # nums = list(range( ? , ? , ? )) nums = list(range( 10 , 4 , -3))+(([5] )) print(nums) # nums = [10,7,5]
23
28
62,525,295
2020-6-22
https://stackoverflow.com/questions/62525295/how-to-use-python-to-schedule-tasks-in-a-django-application
I'm new to Django and web frameworks in general. I have an app that is all set up and works perfectly fine on my localhost. The program uses Twitter's API to gather a bunch of tweets and displays them to the user. The only problem is I need my python program that gets the tweets to be run in the background every-so-often. This is where using the schedule module would make sense, but once I start the local server it never runs the schedule functions. I tried reading up on cronjobs and just can't seem to get it to work. How can I get Django to run a specific python file periodically?
I've encountered a similar situation and have had a lot of success with django-apscheduler. It is all self-contained - it runs with the Django server and jobs are tracked in the Django database, so you don't have to configure any external cron jobs or anything to call a script. Below is a basic way to get up and running quickly, but the links at the end of this post have far more documentation and details as well as more advanced options. Install with pip install django-apscheduler then add it to your INSTALLED_APPS: INSTALLED_APPS = [ ... 'django_apscheduler', ... ] Once installed, make sure to run makemigrations and migrate on the database. Create a scheduler python package (a folder in your app directory named scheduler with a blank __init__.py in it). Then, in there, create a file named scheduler.py, which should look something like this: from apscheduler.schedulers.background import BackgroundScheduler from django_apscheduler.jobstores import DjangoJobStore, register_events from django.utils import timezone from django_apscheduler.models import DjangoJobExecution import sys # This is the function you want to schedule - add as many as you want and then register them in the start() function below def deactivate_expired_accounts(): today = timezone.now() ... # get accounts, expire them, etc. ... def start(): scheduler = BackgroundScheduler() scheduler.add_jobstore(DjangoJobStore(), "default") # run this job every 24 hours scheduler.add_job(deactivate_expired_accounts, 'interval', hours=24, name='clean_accounts', jobstore='default') register_events(scheduler) scheduler.start() print("Scheduler started...", file=sys.stdout) In your apps.py file (create it if it doesn't exist): from django.apps import AppConfig class AppNameConfig(AppConfig): name = 'your_app_name' def ready(self): from scheduler import scheduler scheduler.start() A word of caution: when using this with DEBUG = True in your settings.py file, run the development server with the --noreload flag set (i.e. python manage.py runserver localhost:8000 --noreload), otherwise the scheduled tasks will start and run twice. Also, django-apscheduler does not allow you to pass any parameters to the functions that are scheduled to be run. It is a limitation, but I've never had a problem with it. You can load them from some external source, like the Django database, if you really need to. You can use all the standard Django libraries, packages and functions inside the apscheduler tasks (functions). For example, to query models, call external APIs, parse responses/data, etc. etc. It's seamlessly integrated. Some additional links: Project repository: https://github.com/jarekwg/django-apscheduler More documentation: https://medium.com/@mrgrantanderson/replacing-cron-and-running-background-tasks-in-django-using-apscheduler-and-django-apscheduler-d562646c062e
10
30
62,519,791
2020-6-22
https://stackoverflow.com/questions/62519791/finding-duplicates-in-two-dataframes-and-removing-the-duplicates-from-one-datafr
Working in Python / pandas / dataframes I have these two dataframes: Dataframe one: 1 2 3 1 Stockholm 100 250 2 Stockholm 150 376 3 Stockholm 105 235 4 Stockholm 109 104 5 Burnley 145 234 6 Burnley 100 250 Dataframe two: 1 2 3 1 Stockholm 100 250 2 Stockholm 117 128 3 Stockholm 105 235 4 Stockholm 100 250 5 Burnley 145 234 6 Burnley 100 953 And I would like to find the duplicate rows found in Dataframe one and Dataframe two and remove the duplicates from Dataframe one. As in data frame two, you can find row 1, 3, 5 in data frame one, which would remove them from data frame on and create the below: 1 2 3 1 Stockholm 150 376 2 Stockholm 109 104 3 Burnley 100 250
Use: df_merge = pd.merge(df1, df2, on=[1,2,3], how='inner') df1 = df1.append(df_merge) df1['Duplicated'] = df1.duplicated(keep=False) # keep=False marks the duplicated row with a True df_final = df1[~df1['Duplicated']] # selects only rows which are not duplicated. del df_final['Duplicated'] # delete the indicator column The idea is as follows: do an inner join on all the columns; append the output of the inner join to df1; identify the duplicated rows in df1; select the not-duplicated rows in df1. Each step corresponds to one line of code. (An alternative using merge's indicator flag is sketched after this entry.)
11
11
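A hedged alternative to the append/duplicated approach in the answer above: a left merge with indicator=True marks which rows of Dataframe one also appear in Dataframe two. The integer column names follow the question's example; the sample values are abbreviated.

import pandas as pd

df1 = pd.DataFrame({1: ['Stockholm', 'Stockholm', 'Burnley'],
                    2: [100, 150, 100],
                    3: [250, 376, 250]})
df2 = pd.DataFrame({1: ['Stockholm', 'Burnley'],
                    2: [100, 100],
                    3: [250, 953]})

# indicator=True adds a '_merge' column saying where each row was found
merged = df1.merge(df2.drop_duplicates(), on=[1, 2, 3], how='left', indicator=True)

# keep only rows found solely in df1, then drop the helper column
result = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
print(result)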
62,436,766
2020-6-17
https://stackoverflow.com/questions/62436766/cant-login-to-instagram-using-requests
I'm trying to login to Instagram using requests library. I succeeded using following script, however it doesn't work anymore. The password field becomes encrypted (checked the dev tools while logging in manually). I've tried : import re import requests from bs4 import BeautifulSoup link = 'https://www.instagram.com/accounts/login/' login_url = 'https://www.instagram.com/accounts/login/ajax/' payload = { 'username': 'someusername', 'password': 'somepassword', 'enc_password': '', 'queryParams': {}, 'optIntoOneTap': 'false' } with requests.Session() as s: r = s.get(link) csrf = re.findall(r"csrf_token\":\"(.*?)\"",r.text)[0] r = s.post(login_url,data=payload,headers={ "user-agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36", "x-requested-with": "XMLHttpRequest", "referer": "https://www.instagram.com/accounts/login/", "x-csrftoken":csrf }) print(r.status_code) print(r.url) I found using dev tools: username: someusername enc_password: #PWD_INSTAGRAM_BROWSER:10:1592421027:ARpQAAm7pp/etjy2dMjVtPRdJFRPu8FAGILBRyupINxLckJ3QO0u0RLmU5NaONYK2G0jQt+78BBDBxR9nrUsufbZgR02YvR8BLcHS4uN8Gu88O2Z2mQU9AH3C0Z2NpDPpS22uqUYhxDKcYS5cA== queryParams: {"oneTapUsers":"[\"36990119985\"]"} optIntoOneTap: false How can I login to Instagram using requests?
You can use authentication version 0 - plain password, no encryption: import re import requests from bs4 import BeautifulSoup from datetime import datetime link = 'https://www.instagram.com/accounts/login/' login_url = 'https://www.instagram.com/accounts/login/ajax/' time = int(datetime.now().timestamp()) payload = { 'username': '<USERNAME HERE>', 'enc_password': f'#PWD_INSTAGRAM_BROWSER:0:{time}:<PLAIN PASSWORD HERE>', # <-- note the '0' - that means we want to use plain passwords 'queryParams': {}, 'optIntoOneTap': 'false' } with requests.Session() as s: r = s.get(link) csrf = re.findall(r"csrf_token\":\"(.*?)\"",r.text)[0] r = s.post(login_url,data=payload,headers={ "user-agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36", "x-requested-with": "XMLHttpRequest", "referer": "https://www.instagram.com/accounts/login/", "x-csrftoken":csrf }) print(r.status_code) print(r.url) print(r.text) Prints: 200 https://www.instagram.com/accounts/login/ajax/ {"authenticated": true, "user": true, "userId": "XXXXXXXX", "oneTapPrompt": true, "reactivated": true, "status": "ok"}
8
24
62,521,777
2020-6-22
https://stackoverflow.com/questions/62521777/how-to-declare-python-dataclass-member-field-same-as-the-dataclass-type
How can I have a member field of a dataclass same as the class name in python3.7+ ? I am trying to define a class like so (which can be done in Java or C++) -- which might be used as a class for LinkedList node @dataclass class Node: val:str next:Node prev:Node However, all I get is NameError: name 'Node' is not defined. What should be the correct way to have self referential member variables in dataclasses
You need to add the following import at the top of your file: from __future__ import annotations This enables deferred evaluation of annotations, as proposed in PEP 563. Deferred annotations allow you to reference a class that is not yet defined; in your example, the Node class is not yet defined while you are still inside its body. (A complete sketch follows this entry.)
9
15
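A minimal, runnable sketch of the fix from the answer above; the field names mirror the question, and the Optional defaults are an added assumption so a node can be created without neighbours.

from __future__ import annotations  # defer evaluation of annotations (PEP 563)

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    val: str
    next: Optional[Node] = None  # the self-reference works because annotations are deferred
    prev: Optional[Node] = None


# quick check: link two nodes together
a = Node("a")
b = Node("b", prev=a)
a.next = b
print(b.prev.val)  # a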
62,522,117
2020-6-22
https://stackoverflow.com/questions/62522117/how-to-calculate-execution-time-of-a-view-in-django
This is my view: def post_detail(request, year, month, day, slug): post = get_object_or_404(models.Post, slug=slug, status='published', publish__year=year, publish__month=month, publish__day=day) comment_form = forms.CommentForm() comments = post.comments.filter(active=True) context = { 'comments': comments, 'post': post, 'comment_form': comment_form, } return render(request, 'blog/post_detail.html', context) Is there any way to calculate time of execution in Django ?
You can write a timer decorator to output the results in your console from functools import wraps import time def timer(func): """helper function to estimate view execution time""" @wraps(func) # used for copying func metadata def wrapper(*args, **kwargs): # record start time start = time.time() # func execution result = func(*args, **kwargs) duration = (time.time() - start) * 1000 # output execution time to console print('view {} takes {:.2f} ms'.format( func.__name__, duration )) return result return wrapper @timer def your_view(request): pass
7
12
62,513,005
2020-6-22
https://stackoverflow.com/questions/62513005/how-does-poetry-work-regarding-binary-dependencies-esp-numpy
Until now I have been using conda as virtual environment and dependency management. However, some stuff does not work as expected when transfering my environment.yml file from my development machine to the production server. Now, I would like to look into alternatives. Poetry seems nice, especially because poetry also maintains a lock file, and it has a benefit over pipenv because it keeps track of which packages are subdependencies. (https://realpython.com/effective-python-environment/#poetry) which might improve stability quite a bit. However, I am working on science-heavy projects (matrices, data science, machine learning), so in practise I need the scipy stack (e.g. numpy, pandas, scitkit-learn). Python became too slow for some pure computational workloads so numpy and scipy were born. [...] They are written in C and just wrapped as a python library. Compiling such libraries brings a set of challenges since they (more or less) have to be compiled on your machine for maximum performance and proper linking with libraries like glibc. Conda was introduced as an all-in-one solution to manage python environments for the scientific community. [...] Instead of using a fragile process of compiling libraries on your machine, libraries are precompiled and just downloaded when you request them. Unfortunately, the solution comes with a caveat - conda does not use PyPI, the most popular index of python packages. (https://modelpredict.com/python-dependency-management-tools#fnref:conda-compiling-challenges) As far as I know, this doesn't even do Conda justice, because it does quite a bit of optimization to get the most out of my CPU/GPU/architecture for numpy. (https://jakevdp.github.io/blog/2016/08/25/conda-myths-and-misconceptions/#Myth-#6:-Now-that-pip-uses-wheels,-conda-is-no-longer-necessary) https://numpy.org/install/ itself advises to use conda, but also says that one can install via pip (and poetry uses pypi) For users who know, from personal preference or reading about the main differences between conda and pip below, they prefer a pip/PyPI-based solution, we recommend: [...] Use Poetry as the most well-maintained tool that provides a dependency resolver and environment management capabilities in a similar fashion as conda does. I would like to get the stability of the poetry setup and the speed of the conda setup. How does poetry handle binary dependencies? Does it also, like conda, consider my hardware? If poetry not deliver in this regard, can I combine it with conda?
numpy provides several wheel files for different OSes, CPU architectures and Python versions. Wheel packages are precompiled, so the target system doesn't have to compile the package. Poetry is able to choose the right wheel for you, depending on your system. That said, I would recommend using Poetry as long as you only need Python packages that are also available on PyPI. As soon as you need other, non-Python tools, stick to conda. (Disclaimer: I'm one of the maintainers of Poetry.) Also related: https://github.com/python-poetry/poetry/issues/1904
15
9
62,505,041
2020-6-21
https://stackoverflow.com/questions/62505041/add-autopep8-and-linting-to-jupyter-in-vs-code-python-notebook
Question Error highlighting and autoformatting can be great tools to help one create great notebooks. I am trying to change the settings on the VS code to allow me to autoformat to pep8 in my python notebooks. On this page for Jupiter notebooks have found that I have to put some lines in my .json files in the settings>preference of VSCode in order to do this. I am particularly interested in changing my code to the pep8 coding convention and also adding linting in order to highlight errors. linting (error highlighting) autoformatting (autopep8) I am using VS Code on Ubuntu 18.04. Below is my attempt that led to an error "Code language not supported or defined". Attempt After installing the Python extension and the autopep8 extension in VS code and running pip3 install autopep8 I got an error message and was unable to use pep8. If you may know how to set up an efficient working environment in VS Code for Jupyter notebooks I would really appreciate any assistance Summary How to set up: linting (error highlighting) autoformatting (autopep8) in VS code for python notebooks. Edit 1: I also tried running autopep8 in the command palette and got the error Command 'autopep8' resulted in an error (Running the contributed command: 'extension.sayHello' failed.)
Nbextensions are notebook extensions and only work within the notebook itself. VS Code does not support native notebooks, so these extensions won't work at this time. They are planning to add it in future releases, per the link.
9
6
62,498,436
2020-6-21
https://stackoverflow.com/questions/62498436/how-to-run-a-django-project-with-pyc-files-without-using-source-codes
I have a django project and i want to create the .pyc files and remove the source code. My project folder name is mysite and I ran the command python -m compileall mysite. The .pyc files are created. After that i tried to run my project with python __pycache__/manage.cpython-37.pyc runserver command but i've got an error such as ModuleNotFoundError: No module named 'mysite' There are two things I would like to ask about this. First, how can I solve this problem and run my project with .pyc file? Secondly, is it enough to move the .pyc files created to a separate folder in accordance with the django project structure?
First of all, I created a new folder in another directory, like a new Django project, and manually created my app folders, static folder, templates folder, etc., mirroring the Django project architecture I had created before. Then I moved the .pyc files produced by the compileall command into the new project folders. As you know, while creating .pyc files, a .cpython-37 suffix is added to the file names automatically (for example, manage.py -> manage.cpython-37.pyc). I removed that suffix and renamed the files to manage.pyc, views.pyc, etc. So my file structure was like this: mysite/ manage.pyc mysite/ __init__.pyc settings.pyc urls.pyc wsgi.pyc app/ migrations/ __init__.pyc __init__.pyc admin.pyc apps.pyc models.pyc tests.pyc views.pyc ... After I created this Django project structure with .pyc files, I ran the python manage.pyc runserver command and it works. (A sketch that automates the compile-and-rename step follows this entry.)
8
3
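A minimal sketch of how the compile-and-rename step described above could be automated with the standard library; legacy=True makes compileall write plain manage.pyc files next to the sources (no __pycache__ directory, no .cpython-37 suffix). The project path is an assumption, and deleting the .py sources is of course irreversible.

import compileall
import pathlib

project_dir = pathlib.Path('mysite')  # hypothetical project path

# legacy=True writes manage.pyc next to manage.py instead of
# __pycache__/manage.cpython-37.pyc, so no manual renaming is needed
compileall.compile_dir(project_dir, legacy=True, force=True)

# optionally remove the .py sources once the .pyc files are in place
for py_file in project_dir.rglob('*.py'):
    py_file.unlink()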
62,500,697
2020-6-21
https://stackoverflow.com/questions/62500697/django-model-attribute-and-database-field-with-different-name-in-model
I have a database table called Person contains following columns: Id, first_name, last_name, So is there any way to assign different name to table fields in django model. like this class Person(models.Model): firstname = models.CharField(max_length = 30) lastname = models.CharField(max_length = 30) firstname instead of first_name and lastname instead of last_name
You can pass db_column to a model field to customise its database column name: class Person(models.Model): firstname = models.CharField(max_length=30, db_column='first_name') lastname = models.CharField(max_length=30, db_column='last_name')
7
12
62,495,381
2020-6-21
https://stackoverflow.com/questions/62495381/how-to-compare-2-files-having-random-numbers-in-non-sequential-order
There are 2 files named compare 1.txt and compare2.txt having random numbers in non-sequential order cat compare1.txt 57 11 13 3 889 014 91 cat compare2.txt 003 889 13 14 57 12 90 Aim Output list of all the numbers which are present in compare1 but not in compare 2 and vice versa If any number has zero in its prefix, ignore zeros while comparing ( basically the absolute value of number must be different to be treated as a mismatch ) Example - 3 should be considered matching with 003 and 014 should be considered matching with 14, 008 with 8 etc Note - It is not necessary that matching must necessarily happen on the same line. A number present in the first line in compare1 should be considered matched even if that same number is present on other than the first line in compare2 Expected output 90 91 12 11 PS ( I don't necessarily need this exact order in expected output, just these 4 numbers in any order would do ) What I tried? Obviously I didn't have hopes of getting the second condition correct, I tried only fulfilling the first condition but couldn't get correct results. I had tried these commands grep -Fxv -f compare1.txt compare2.txt && grep -Fxv -f compare2.txt compare1.txt cat compare1.txt compare2.txt | sort |uniq Edit - A Python solution is also fine
Could you please try following, written and tested with shown samples in GNU awk. awk ' { $0=$0+0 } FNR==NR{ a[$0] next } ($0 in a){ b[$0] next } { print } END{ for(j in a){ if(!(j in b)){ print j } } } ' compare1.txt compare2.txt Explanation: Adding detailed explanation for above. awk ' ##Starting awk program from here. { $0=$0+0 ##Adding 0 will remove extra zeros from current line,considering that your file doesn't have float values. } FNR==NR{ ##Checking condition FNR==NR which will be TRUE when 1st Input_file is being read. a[$0] ##Creating array a with index of current line here. next ##next will skip all further statements from here. } ($0 in a){ ##Checking condition if current line is present in a then do following. b[$0] ##Creating array b with index of current line. next ##next will skip all further statements from here. } { print } ##will print current line from 2nd Input_file here. END{ ##Starting END block of this code from here. for(j in a){ ##Traversing through array a here. if(!(j in b)){ print j } ##Checking condition if current index value is NOT present in b then print that index. } } ' compare1.txt compare2.txt ##Mentioning Input_file names here.
13
14
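Since the question says a Python solution is also fine, here is a hedged sketch of the same idea using set differences; numbers are normalised with int() so leading zeros are ignored, and the file names follow the question.

def read_numbers(path):
    # int() drops leading zeros, so '014' and '14' compare equal
    with open(path) as fh:
        return {int(line) for line in fh if line.strip()}

nums1 = read_numbers('compare1.txt')
nums2 = read_numbers('compare2.txt')

# symmetric difference: numbers present in exactly one of the two files
for n in sorted(nums1 ^ nums2):
    print(n)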
62,497,603
2020-6-21
https://stackoverflow.com/questions/62497603/is-there-a-plugin-similar-to-gitlens-for-pycharm-or-other-products
My question is very simple: as you can read in the title, I want a plugin similar to GitLens, which I found in VS Code. As you know, with GitLens you can easily see the difference between two or more commits. I searched and found GitToolBox, but I don't know how to install it, and I don't think it's quite like GitLens...
You can use GitToolBox, linked here. Features: Git status: number of ahead/behind commits for the current branch as a status bar widget; ahead/behind, current branch, and tags on HEAD as Project View decoration on modules; a status bar widget with detailed information and additional actions. Git blame: inline blame - shows blame for the line at the caret in the active editor.
10
15
62,498,581
2020-6-21
https://stackoverflow.com/questions/62498581/typeerror-when-merging-dictionaries-unsupported-operand-types-for-dict-a
I wanted to join two dictionaries using | operator and I got the following error: TypeError: unsupported operand type(s) for |: 'dict' and 'dict' The MWE code is the following: d1 = {'k': 1, 'l': 2, 'm':4} d2 = {'g': 3, 'm': 7} e = d1 | d2
The merge (|) and update (|=) operators for dictionaries were introduced in Python 3.9 so they do not work in older versions. You have an option to either update your Python interpreter to Python 3.9 or use one of the alternatives: # option 1: e = d1.copy() e.update(d2) # option 2: e = {**d1, **d2} However, should you want to update to Python 3.9 you can save some memory updating dictionary d1 directly instead of creating another dictionary using in-place merge operation: d1 |= d2 Which is equivalent of the following in the older versions of Python: d1.update(d2)
9
8
62,446,077
2020-6-18
https://stackoverflow.com/questions/62446077/0-accuracy-with-lstm
I trained LSTM classification model, but got weird results (0 accuracy). Here is my dataset with preprocessing steps: import pandas as pd from sklearn.model_selection import train_test_split import tensorflow as tf from tensorflow import keras import numpy as np url = 'https://raw.githubusercontent.com/MislavSag/trademl/master/trademl/modeling/random_forest/X_TEST.csv' X_TEST = pd.read_csv(url, sep=',') url = 'https://raw.githubusercontent.com/MislavSag/trademl/master/trademl/modeling/random_forest/labeling_info_TEST.csv' labeling_info_TEST = pd.read_csv(url, sep=',') # TRAIN TEST SPLIT X_train, X_test, y_train, y_test = train_test_split( X_TEST.drop(columns=['close_orig']), labeling_info_TEST['bin'], test_size=0.10, shuffle=False, stratify=None) ### PREPARE LSTM x = X_train['close'].values.reshape(-1, 1) y = y_train.values.reshape(-1, 1) x_test = X_test['close'].values.reshape(-1, 1) y_test = y_test.values.reshape(-1, 1) train_val_index_split = 0.75 train_generator = keras.preprocessing.sequence.TimeseriesGenerator( data=x, targets=y, length=30, sampling_rate=1, stride=1, start_index=0, end_index=int(train_val_index_split*X_TEST.shape[0]), shuffle=False, reverse=False, batch_size=128 ) validation_generator = keras.preprocessing.sequence.TimeseriesGenerator( data=x, targets=y, length=30, sampling_rate=1, stride=1, start_index=int((train_val_index_split*X_TEST.shape[0] + 1)), end_index=None, #int(train_test_index_split*X.shape[0]) shuffle=False, reverse=False, batch_size=128 ) test_generator = keras.preprocessing.sequence.TimeseriesGenerator( data=x_test, targets=y_test, length=30, sampling_rate=1, stride=1, start_index=0, end_index=None, shuffle=False, reverse=False, batch_size=128 ) # convert generator to inmemory 3D series (if enough RAM) def generator_to_obj(generator): xlist = [] ylist = [] for i in range(len(generator)): x, y = train_generator[i] xlist.append(x) ylist.append(y) X_train = np.concatenate(xlist, axis=0) y_train = np.concatenate(ylist, axis=0) return X_train, y_train X_train_lstm, y_train_lstm = generator_to_obj(train_generator) X_val_lstm, y_val_lstm = generator_to_obj(validation_generator) X_test_lstm, y_test_lstm = generator_to_obj(test_generator) # test for shapes print('X and y shape train: ', X_train_lstm.shape, y_train_lstm.shape) print('X and y shape validate: ', X_val_lstm.shape, y_val_lstm.shape) print('X and y shape test: ', X_test_lstm.shape, y_test_lstm.shape) and here is my model with resuslts: ### MODEL model = keras.models.Sequential([ keras.layers.LSTM(124, return_sequences=True, input_shape=[None, 1]), keras.layers.LSTM(258), keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(X_train_lstm, y_train_lstm, epochs=10, batch_size=128, validation_data=[X_val_lstm, y_val_lstm]) # history = model.fit_generator(train_generator, epochs=40, validation_data=validation_generator, verbose=1) score, acc = model.evaluate(X_val_lstm, y_val_lstm, batch_size=128) historydf = pd.DataFrame(history.history) historydf.head(10) Why do I get 0 accuracy?
You're using sigmoid activation, which means your labels must be in range 0 and 1. But in your case, the labels are 1. and -1. Just replace -1 with 0. for i, y in enumerate(y_train_lstm): if y == -1.: y_train_lstm[i,:] = 0. for i, y in enumerate(y_val_lstm): if y == -1.: y_val_lstm[i,:] = 0. for i, y in enumerate(y_test_lstm): if y == -1.: y_test_lstm[i,:] = 0. Sidenote: The signals are very close, it would be hard to distinguish them. So, probably accuracy won't be high with simple models. After training with 0. and 1. labels, model = keras.models.Sequential([ keras.layers.LSTM(124, return_sequences=True, input_shape=(30, 1)), keras.layers.LSTM(258), keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(X_train_lstm, y_train_lstm, epochs=5, batch_size=128, validation_data=(X_val_lstm, y_val_lstm)) # history = model.fit_generator(train_generator, epochs=40, validation_data=validation_generator, verbose=1) score, acc = model.evaluate(X_val_lstm, y_val_lstm, batch_size=128) historydf = pd.DataFrame(history.history) historydf.head(10) Epoch 1/5 12/12 [==============================] - 5s 378ms/step - loss: 0.7386 - accuracy: 0.4990 - val_loss: 0.6959 - val_accuracy: 0.4896 Epoch 2/5 12/12 [==============================] - 4s 318ms/step - loss: 0.6947 - accuracy: 0.5133 - val_loss: 0.6959 - val_accuracy: 0.5104 Epoch 3/5 12/12 [==============================] - 4s 318ms/step - loss: 0.6941 - accuracy: 0.4895 - val_loss: 0.6930 - val_accuracy: 0.5104 Epoch 4/5 12/12 [==============================] - 4s 332ms/step - loss: 0.6946 - accuracy: 0.5269 - val_loss: 0.6946 - val_accuracy: 0.5104 Epoch 5/5 12/12 [==============================] - 4s 334ms/step - loss: 0.6931 - accuracy: 0.4901 - val_loss: 0.6929 - val_accuracy: 0.5104 3/3 [==============================] - 0s 73ms/step - loss: 0.6929 - accuracy: 0.5104 loss accuracy val_loss val_accuracy 0 0.738649 0.498980 0.695888 0.489583 1 0.694708 0.513256 0.695942 0.510417 2 0.694117 0.489463 0.692987 0.510417 3 0.694554 0.526852 0.694613 0.510417 4 0.693118 0.490143 0.692936 0.510417 Source code in colab: https://colab.research.google.com/drive/10yRf4TfGDnp_4F2HYoxPyTlF18no-8Dr?usp=sharing
8
7
62,493,590
2020-6-21
https://stackoverflow.com/questions/62493590/creating-a-new-column-assigning-same-index-to-repeated-values-in-pandas-datafram
How can I generate a new column listing repeated values? For example, my dataframe is: id color 123 white 123 white 123 white 345 blue 345 blue 678 red This is the desired output: # id color 1 123 white 1 123 white 1 123 white 2 345 blue 2 345 blue 3 678 red
Check with factorize: df['#'] = df.id.factorize()[0]+1 df id color # 0 123 white 1 1 123 white 1 2 123 white 1 3 345 blue 2 4 345 blue 2 5 678 red 3 Another method: df.groupby('id').ngroup()+1 0 1 1 1 2 1 3 2 4 2 5 3 dtype: int64 To add it at the first position: df.insert(loc=0, column='#', value=df.id.factorize()[0]+1) df # id color 0 1 123 white 1 1 123 white 2 1 123 white 3 2 345 blue 4 2 345 blue 5 3 678 red
7
10
62,475,443
2020-6-19
https://stackoverflow.com/questions/62475443/regex-to-find-a-pair-of-adjacent-digits-with-different-digits-around-them
I want to find if there are two of the same digits next to each other, and the digit behind and in front of the pair is different. For example, 123456678 should match as there is a double 6, 1234566678 should not match as there is no double with different surrounding numbers. 12334566 should match because there are two 3s. So far i have this which works only with 1, and as long as the double is not at the start or end of the string, however I can deal with that by adding a letter at the start and end. ^.*([^1]11[^1]).*$ I know I can use [0-9] instead of the 1s but the problem is having them all be the same digit.
With regex, it is much more convenient to use a PyPi regex module with the (*SKIP)(*FAIL) based pattern: import regex rx = r'(\d)\1{2,}(*SKIP)(*F)|(\d)\2' l = ["123456678", "1234566678"] for s in l: print(s, bool(regex.search(rx, s)) ) See the Python demo. Output: 123456678 True 1234566678 False Regex details (\d)\1{2,}(*SKIP)(*F) - a digit and then two or more occurrences of the same digit | - or (\d)\2 - a digit and then the same digit. The point is to match all chunks of identical 3 or more digits and skip them, and then match a chunk of two identical digits. See the regex demo.
58
33
62,487,112
2020-6-20
https://stackoverflow.com/questions/62487112/python-issue-with-for-loop-and-append
I'm having trouble understanding the output of a piece of python code. mani=[] nima=[] for i in range(3): nima.append(i) mani.append(nima) print(mani) The output is [[0,1,2], [0,1,2], [0,1,2]] I can't for the life of me understand why it is not [[0], [0,1], [0,1,2]] Any help much appreciated.
It's because when you append nima into mani, it isn't a copy of nima, but a reference to nima. So as nima changes, the reference at each location in mani just points to the changed nima. Since nima ends up as [0, 1, 2], each reference appended into mani refers to the same object. (A sketch of the fix follows this entry.)
9
6
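A minimal sketch, following the explanation above, showing that appending a snapshot copy of the list produces the output the asker expected:

mani = []
nima = []
for i in range(3):
    nima.append(i)
    mani.append(list(nima))  # append a copy, not a reference to the same list

print(mani)  # [[0], [0, 1], [0, 1, 2]]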
62,484,597
2020-6-20
https://stackoverflow.com/questions/62484597/understanding-width-shift-range-and-height-shift-range-arguments-in-kerass
The Keras documentation of ImageDataGenerator class says— width_shift_range: Float, 1-D array-like or int - float: fraction of total width, if < 1, or pixels if >= 1. - 1-D array-like: random elements from the array. - int: integer number of pixels from interval (-width_shift_range, +width_shift_range) - With width_shift_range=2 possible values are integers [-1, 0, +1], same as with width_shift_range=[-1, 0, +1], while with width_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0). height_shift_range: Float, 1-D array-like or int - float: fraction of total height, if < 1, or pixels if >= 1. - 1-D array-like: random elements from the array. - int: integer number of pixels from interval (-height_shift_range, +height_shift_range) - With height_shift_range=2 possible values are integers [-1, 0, +1], same as with height_shift_range=[-1, 0, +1], while with height_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0). I’m new in Keras and machine learning, and I just have started learning it. I am struggling to understand the documentation and use of these two arguments of Keras ImageDataGenerator class, named width_shift_range and height_shift_range. I have searched out a lot, but couldn't find any good documentation other than the official. What exactly do these two arguments do? When have to use them? This talk may seem inappropriate here, but since there is no discussion anywhere on the internet, I think it would be nice to have the discussion here. If anyone helps me understanding these, I would be grateful. Thank you very much.
These two arguments are used by the ImageDataGenerator class, which preprocesses images before feeding them into the network. If you want to make your model more robust, a small amount of data is not enough; that is where data augmentation comes in handy. These arguments are used to generate randomly transformed data. width_shift_range: it shifts the image to the left or right (horizontal shifts). If the value is a float and <= 1, it is treated as a fraction of the total width. Suppose the image width is 100px: with width_shift_range = 1.0 the shift is drawn from -100% to +100%, i.e. -100px to +100px, and the image is shifted randomly within that range. A randomly selected positive value shifts the image to the right and a negative value shifts it to the left. We can also specify the range in pixels: setting width_shift_range = 100 has the same effect. In short, an integer value >= 1 counts pixels as the range, while a float value <= 1 counts a fraction of the total width. The images below were generated with width_shift_range = 1.0. height_shift_range: it works the same as width_shift_range but shifts vertically (up or down). The images below were generated with height_shift_range=0.2, fill_mode="constant". fill_mode: it sets the rule for filling the newly exposed pixels in the input area. ## fill_mode: One of {"constant", "nearest", "reflect" or "wrap"}. ## Points outside the boundaries of the input are filled according to the given mode: ## "constant": kkkkkkkk|abcd|kkkkkkkk (cval=k) ## "nearest": aaaaaaaa|abcd|dddddddd ## "reflect": abcddcba|abcd|dcbaabcd ## "wrap": abcdabcd|abcd|abcdabcd For more you can check this blog. (A usage sketch follows this entry.)
26
31
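A minimal usage sketch for the arguments discussed above; the image shape and the particular parameter values are assumptions chosen for illustration.

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# one fake 100x100 RGB image, with a leading batch dimension
images = np.random.randint(0, 256, size=(1, 100, 100, 3)).astype('float32')

datagen = ImageDataGenerator(
    width_shift_range=0.2,   # shift horizontally by up to 20% of the width
    height_shift_range=10,   # shift vertically by up to 10 pixels
    fill_mode='constant',    # fill newly exposed pixels with cval
    cval=0.0,
)

# draw a few randomly shifted variants of the same image
batches = datagen.flow(images, batch_size=1)
augmented = [next(batches)[0] for _ in range(4)]
print(len(augmented), augmented[0].shape)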
62,478,839
2020-6-19
https://stackoverflow.com/questions/62478839/sklearn-set-config-is-erroring
I am facing an issue where the sklearn set_config is failing. I am using Google Colab and also it is failing on Jupyter Notebook as well. Even the code when copied from https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_0_23_0.html#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py also failing with same error. from sklearn import set_config set_config(display='diagram') Error: TypeError: set_config() got an unexpected keyword argument 'display' Please suggest how to resolve this.
You need to upgrade scikit-learn on Colab: its preinstalled version is 0.22.2.post1, while the display parameter of set_config() was introduced in v0.23. !pip install --upgrade scikit-learn Then restart the runtime; display should work now. (A quick version check is sketched after this entry.)
10
15
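A small sketch of the check implied above, to confirm after the restart that the installed version supports the display option (it assumes the upgrade landed on scikit-learn >= 0.23):

import sklearn
from sklearn import set_config

print(sklearn.__version__)  # should print 0.23.0 or newer

# only available from scikit-learn 0.23 onwards
set_config(display='diagram')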
62,479,386
2020-6-19
https://stackoverflow.com/questions/62479386/no-module-named-application-error-while-deploying-simple-web-app-to-elastic-be
I am deploying a web app to elastic beanstalk using this tutorial and the same 'application.py' file they have: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-flask.html#python-flask-setup-venv I get a 502 error when going to the site, and degraded/severe health on the environment. When I check the logs, I see this (which I assume is the root of the problem): Jun 19 22:05:18 ip-172-31-15-237 web: File "/usr/lib64/python3.7/importlib/__init__.py", line 127, in import_module Jun 19 22:05:18 ip-172-31-15-237 web: return _bootstrap._gcd_import(name[level:], package, level) Jun 19 22:05:18 ip-172-31-15-237 web: File "<frozen importlib._bootstrap>", line 1006, in _gcd_import Jun 19 22:05:18 ip-172-31-15-237 web: File "<frozen importlib._bootstrap>", line 983, in _find_and_load Jun 19 22:05:18 ip-172-31-15-237 web: File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked Jun 19 22:05:18 ip-172-31-15-237 web: ModuleNotFoundError: No module named 'application' Here is my application.py file: from flask import Flask # print a nice greeting. def say_hello(username = "World"): return '<p>Hello %s!</p>\n' % username # some bits of text for the page. header_text = ''' <html>\n<head> <title>EB Flask Test</title> </head>\n<body>''' instructions = ''' <p><em>Hint</em>: This is a RESTful web service! Append a username to the URL (for example: <code>/Thelonious</code>) to say hello to someone specific.</p>\n''' home_link = '<p><a href="/">Back</a></p>\n' footer_text = '</body>\n</html>' # EB looks for an 'application' callable by default. application = Flask(__name__) # add a rule for the index page. application.add_url_rule('/', 'index', (lambda: header_text + say_hello() + instructions + footer_text)) # add a rule when the page is accessed with a name appended to the site # URL. application.add_url_rule('/<username>', 'hello', (lambda username: header_text + say_hello(username) + home_link + footer_text)) # run the app. if __name__ == "__main__": # Setting debug to True enables debug output. This line should be # removed before deploying a production app. #application.debug = True application.run() And here is my requirements.txt file: click==7.1.2 Flask==1.1.2 itsdangerous==1.1.0 Jinja2==2.11.2 MarkupSafe==1.1.1 numpy==1.16.3 pandas==0.24.2 python-dateutil==2.8.1 pytz==2020.1 six==1.15.0 Werkzeug==1.0.1 The zipped folder that I upload to elastic beanstalk consists of just these two files. I did have a virtual environment in there too, but the tutorial says you don't need it so I got rid of it. Also I am running Python 3.7.1 so I have pip3. And I should note that the web app works when I just run the python code.
A possible reason is the use of an Amazon Linux 2 environment instead of Amazon Linux 1. The list of Python environments and their Linux distributions is here. From the link you provided: In this tutorial we use Python 3.6 and the corresponding Elastic Beanstalk platform version. Python 3.6 is supported in the Amazon Linux 1 environment, while you are using Python 3.7, which runs on Amazon Linux 2. There are many differences between AL1 and AL2, which make them incompatible.
13
6
62,479,608
2020-6-19
https://stackoverflow.com/questions/62479608/lambdatype-vs-functiontype
What's the difference? docs show nothing on this, and their help() is identical. Is there an object for which isinstance will fail with one but not other?
Back in 1994 I wasn't sure that we would always be using the same implementation type for lambda and def. That's all there is to it. It would be a pain to remove it, so we're just leaving it (it's only one line). If you want to add a note to the docs, feel free to submit a PR.
13
26
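To complement the answer above, a short sketch confirming that the two names refer to the same implementation type, so isinstance can never distinguish them:

import types

def f():
    pass

g = lambda: None

# both names point at the very same type object
print(types.LambdaType is types.FunctionType)  # True
print(isinstance(f, types.LambdaType))         # True
print(isinstance(g, types.FunctionType))       # True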
62,475,991
2020-6-19
https://stackoverflow.com/questions/62475991/how-to-write-an-app-layout-in-dash-such-that-two-graphs-are-side-by-side
I want to plot two charts side by side (and not one above the other) in Dash by Plotly. The tutorials did not have an example where the graphs are plotted side by side. I am writing the app.layout in the following way app.layout = html.Div(className = 'row', children= [ html.H1("Tips database analysis (First dashboard)"), dcc.Dropdown(id='d', options = col_options, value = 'Sun'), dcc.Graph(id="graph1"), dcc.Graph(id="graph2") ] ) but after this graph1 appears above graph2 rather than side by side
You can achieve this by wrapping the graphs in a div and adding display: inline-block css property to each of the graphs. app.layout = html.Div(className='row', children=[ html.H1("Tips database analysis (First dashboard)"), dcc.Dropdown(), html.Div(children=[ dcc.Graph(id="graph1", style={'display': 'inline-block'}), dcc.Graph(id="graph2", style={'display': 'inline-block'}) ]) ]) Result EDIT I removed the display: flex property from the wrapping div, and instead added the display: inline-block property to each of the graphs. This makes it so the second graph does not get cut off, and will instead stack on smaller screens.
13
22
62,470,743
2020-6-19
https://stackoverflow.com/questions/62470743/change-line-width-of-specific-line-in-line-plot-pandas-matplotlib
I am plotting a dataframe that looks like this. Date 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 Date 01 Jan 12.896 13.353 12.959 13.011 13.073 12.721 12.643 12.484 12.876 13.102 02 Jan 12.915 13.421 12.961 13.103 13.125 12.806 12.644 12.600 12.956 13.075 03 Jan 12.926 13.379 13.012 13.116 13.112 12.790 12.713 12.634 12.959 13.176 04 Jan 13.051 13.414 13.045 13.219 13.051 12.829 12.954 12.724 13.047 13.187 05 Jan 13.176 13.417 13.065 13.148 13.115 12.874 12.956 12.834 13.098 13.123 The code for plotting is here. ice_data_dates.plot(figsize=(20,12), title='Arctic Sea Ice Extent', lw=3, fontsize=16, ax=ax, grid=True) This plots a line plot for each of the years listed in the dataframe over each day in the year. However, I would like to make the line for 2020 much thicker than the others so it stands out more clearly. Is there a way to do that using this one line of code? Or do I need to manually plot all of the years such that I can control the thickness of each line separately? A current picture is attached, where the line thicknesses are all the same.
You can iterate over the lines in the plot, which can be retrieved with ax.get_lines, and increase the width using set_linewidth if its label matches the value of interest: fig, ax = plt.subplots() df.plot(figsize=(20,12), title='Arctic Sea Ice Extent', lw=3, fontsize=16, ax=ax, grid=True) for line in ax.get_lines(): if line.get_label() == '2020': line.set_linewidth(15) plt.show()
12
21
62,470,439
2020-6-19
https://stackoverflow.com/questions/62470439/vscode-python-jedienabled-false-showing-as-unknown-configuration-setting
This is the settings.json file code: { "python.autoComplete.addBrackets": true, "python.linting.enabled": true, "python.pythonPath": "C:\\Program Files\\Python37\\python.exe", "python.jediEnabled": false, "python.languageServer": "Microsoft" } In this, the line "python.jediEnabled": false shows the error "Unknown Configuration Setting". Please give the solution.
With vscode-python's release on June 16th 2020 they removed the python.jediEnabled setting in favor of the python.languageServer setting. From the changelog: Removed python.jediEnabled setting in favor of python.languageServer. Instead of "python.jediEnabled": true please use "python.languageServer": "Jedi". (#7010)
8
17
62,455,693
2020-6-18
https://stackoverflow.com/questions/62455693/access-all-column-values-of-joined-tables-with-sqlalchemy
Imagine one has two SQL tables objects_stock id | number and objects_prop id | obj_id | color | weight that should be joined on objects_stock.id=objects_prop.obj_id, hence the plain SQL-query reads select * from objects_prop join objects_stock on objects_stock.id = objects_prop.obj_id; How can this query be performed with SqlAlchemy such that all returned columns of this join are accessible? When I execute query = session.query(ObjectsStock).join(ObjectsProp, ObjectsStock.id == ObjectsProp.obj_id) results = query.all() with ObjectsStock and ObjectsProp the appropriate mapped classes, the list results contains objects of type ObjectsStock - why is that? What would be the correct SqlAlchemy-query to get access to all fields corresponding to the columns of both tables?
Just in case someone encounters a similar problem: the best way I have found so far is listing the columns to fetch explicitly, query = session.query(ObjectsStock.id, ObjectsStock.number, ObjectsProp.color, ObjectsProp.weight).\ select_from(ObjectsStock).join(ObjectsProp, ObjectsStock.id == ObjectsProp.obj_id) results = query.all() Then one can iterate over the results and access the properties by their original column names, e.g. for r in results: print(r.id, r.color, r.number)
8
7
62,460,182
2020-6-18
https://stackoverflow.com/questions/62460182/how-to-invoke-another-command-inside-another-one-in-discord-py
I want my bot to play a specific song when typing +playtest using already defined function (+play) but i got an error says "Discord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: 'Command' object is not callable" an entire code work perfectly fine except for this command i wonder does ctx.invoke enable passing arguments? or i just missed something here is my brief code import discord import wavelink from discord.ext import commands import asyncio from bs4 import BeautifulSoup import requests import datetime queue = [] class Bot(commands.Bot): def __init__(self): super(Bot, self).__init__(command_prefix=['+']) self.add_cog(Music(self)) async def on_ready(self): print(f'Logged in as {self.user.name} | {self.user.id}') class Music(commands.Cog): def __init__(self, bot): self.bot = bot if not hasattr(bot, 'wavelink'): self.bot.wavelink = wavelink.Client(bot=self.bot) self.bot.loop.create_task(self.start_nodes()) self.bot.remove_command("help") async def start_nodes(self): await self.bot.wait_until_ready() await self.bot.wavelink.initiate_node(host='127.0.0.1', port=80, rest_uri='http://127.0.0.1:80', password='testing', identifier='TEST', region='us_central') @commands.command(name='connect') async def connect_(self, ctx, *, channel: discord.VoiceChannel = None): @commands.command() async def help(self, ctx): @commands.command() async def play(self, ctx, *, query: str): @commands.command(aliases=['sc']) async def soundcloud(self, ctx, *, query: str): @commands.command() async def leave(self, ctx): @commands.command(aliases=['queue', 'q']) async def check_queue(self, ctx): @commands.command(aliases=['clearq', 'clearqueue']) async def clear_queue(self, ctx): @commands.command(aliases=['removequeue', 'removeq', 'req']) async def remove_queue(self, ctx, num: int): @commands.command() async def skip(self, ctx): @commands.command(aliases=['eq']) async def equalizer(self, ctx: commands.Context, *, equalizer: str): @commands.command() async def playtest(self,ctx): await ctx.invoke(self.play('hi')) bot = Bot() bot.run('sd')
ctx.invoke does allow passing arguments, but they need to be handled in a different way to how you may be used to ( function(params) ) The parameters must be explicitly shown in the invoke (e.g. param = 'value') and the command must be a command object. This would be how you could invoke a command: @commands.command() async def playtest(self, ctx): await ctx.invoke(self.bot.get_command('play'), query='hi')
8
12
62,429,677
2020-6-17
https://stackoverflow.com/questions/62429677/how-to-use-str-replace-to-replace-multiple-pairs-at-once
Currently I am using the following code to make replacements which is a little cumbersome: df1['CompanyA'] = df1['CompanyA'].str.replace('.','') df1['CompanyA'] = df1['CompanyA'].str.replace('-','') df1['CompanyA'] = df1['CompanyA'].str.replace(',','') df1['CompanyA'] = df1['CompanyA'].str.replace('ltd','limited') df1['CompanyA'] = df1['CompanyA'].str.replace('&','and') df1['Address1A'] = df1['Address1A'].str.replace('.','') df1['Address1A'] = df1['Address1A'].str.replace('-','') df1['Address1A'] = df1['Address1A'].str.replace('&','and') df1['Address1A'].str.replace(r'\brd\b', 'road') df1['Address2A'] = df1['Address2A'].str.replace('.','') df1['Address2A'] = df1['Address2A'].str.replace('-','') df1['Address2A'] = df1['Address2A'].str.replace('&','and') df1['Address2A'].str.replace(r'\brd\b', 'road') In order to make changing on the fly easier my ideal scenario would be something like: df1['CompanyA'] = df1['CompanyA'].str.replace(('&','and'), ('.', ''), ('-','')....) df1['Address1A'] = df1['Address1A'].str.replace(('&','and'), ('.', ''), ('-','')....) df1['Address2A'] = df1['Address2A'].str.replace(('&','and'), ('.', ''), ('-','')....) This is so I could just input/change what I wanted to replace for a particular column without having to adjust multiple lines of code. Is this possible at all?
You can create a dictionary and pass it to the function replace() without needing to chain or name the function so many times. replacers = {',':'','.':'','-':'','ltd':'limited'} #etc.... df1['CompanyA'] = df1['CompanyA'].replace(replacers) (A caveat about substring replacement is sketched after this entry.)
11
29
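A hedged caveat on the answer above: Series.replace with a plain dict only replaces cells whose entire value matches a key, whereas the question's str.replace calls replace substrings. Below is a sketch of a substring-safe variant using regex=True, with the keys escaped because characters like '.' are regex metacharacters; the sample data is made up.

import re
import pandas as pd

df1 = pd.DataFrame({'CompanyA': ['a.b.c ltd', 'foo & bar - co,']})

# map raw substrings to replacements, then escape the keys for regex use
replacers = {'.': '', '-': '', ',': '', '&': 'and', 'ltd': 'limited'}
pattern_map = {re.escape(k): v for k, v in replacers.items()}

df1['CompanyA'] = df1['CompanyA'].replace(pattern_map, regex=True)
print(df1)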
62,459,704
2020-6-18
https://stackoverflow.com/questions/62459704/np-reshape-with-padding-if-there-are-not-enough-elements
Is it possible to reshape a np.array() and, in case of inconsistency of the new shape, fill the empty spaces with NaN? Ex: arr = np.array([1,2,3,4,5,6]) Target, for instance a 2x4 Matrix: [1 2 3 4] [5 6 NaN NaN] I need this to bypass the error: ValueError: cannot reshape array of size 6 into shape (2,4)
We'll use np.pad first, then reshape: m, n = 2, 4 np.pad(arr.astype(float), (0, m*n - arr.size), mode='constant', constant_values=np.nan).reshape(m,n) array([[ 1., 2., 3., 4.], [ 5., 6., nan, nan]]) The assumption here is that arr is a 1D array. Add an assertion before this code to fail on unexpected cases.
7
12
62,436,382
2020-6-17
https://stackoverflow.com/questions/62436382/how-to-get-the-name-of-a-property-in-python
How do you get the name of a property in python? Any suggestions welcome. For functions and methods it is as simple as f.__name__. But properties do not have the __name__ attribute.
The property does not have a name, but you are probably really looking for the name of its fget attribute, which (ignoring any shuffling done after the fact) will be the name of the class attribute to which the property instance is bound. class A: @property def foo(self): return 3 assert A.foo.fget.__name__ == "foo"
8
14
62,453,270
2020-6-18
https://stackoverflow.com/questions/62453270/why-do-different-strings-have-the-same-id-in-python
It is stated that strings are immutable objects, and when we make changes in that variable it actually creates a new string object. So I wanted to test this phenomenon with this piece of code: result_str = "" print("string 1 (unedited):", id(result_str)) for a in range(1,11): result_str = result_str + str(a) print(f"string {a+1}:", id(result_str)) And I got the following IDs: string 1 (unedited): 2386354993840 string 2: 2386357170336 string 3: 2386357170336 string 4: 2386357170336 string 5: 2386357170336 string 6: 2386357170336 string 7: 2386357170336 string 8: 2386357170336 string 9: 2386360410800 string 10: 2386360410800 string 11: 2386360410800 So, if each string is different from each other, then why do the strings 2-8 and 9-11 have the same ID? And, if somehow this question is explained why does the ID change at string 9 specifically?
The string you bind to result_str reaches the end of its lifetime at the next assignment, hence the possibility of a duplicate id. Here's the doc: Return the "identity" of an object. This is an integer which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. (A sketch demonstrating this follows this entry.)
9
4
62,449,983
2020-6-18
https://stackoverflow.com/questions/62449983/how-to-specify-return-value-of-mocked-function-with-pytest-mock
The below prints False. Is this not how mocking works? I tried changing the path to the function, but it errors out, so the path seems correct. What am I missing? import pytest from deals.services.services import is_user_valid class TestApi: def test_api(self, mocker): mocker.patch('deals.services.services.is_user_valid', return_value=True) print(is_user_valid("sdfds", "sdfsdf"))
The issue here is that you're essentially doing the following: from deals.services.services import is_user_valid import deals.services.services deals.services.services.is_user_valid = Mock(return_value=True) # call local is_user_valid By importing the "terminal" symbol itself you've short-circuited any possibility of mocking: it's now a local reference, so updating the "remote" reference will have no effect on the local version. Instead, keep a handle on the module itself, so that the relevant symbol gets resolved on each access: from deals.services import services def test_api(mocker): mocker.patch('deals.services.services.is_user_valid', return_value=True) print(services.is_user_valid("sdfds", "sdfsdf")) should work better. This is also an issue with any module using such imports: they require patching the point of use rather than the point of definition, because by the time the mock runs, the consuming module most likely already has its own copy. See the documentation for some more details.
20
20
62,449,644
2020-6-18
https://stackoverflow.com/questions/62449644/multiple-insert-columns-if-not-exist-pandas
I have the following df list_columns = ['A', 'B', 'C'] list_data = [ [1, '2', 3], [4, '4', 5], [1, '2', 3], [4, '4', 6] ] df = pd.DataFrame(columns=list_columns, data=list_data) I want to check if multiple columns exist, and if not to create them. Example: If B,C,D do not exist, create them(For the above df it will create only D column) I know how to do this with one column: if 'D' not in df: df['D']=0 Is there a way to test if all my columns exist, and if not create the one that are missing? And not to make an if for each column
A loop is not necessary here - use DataFrame.reindex with Index.union: cols = ['B','C','D'] df = df.reindex(df.columns.union(cols, sort=False), axis=1, fill_value=0) print (df) A B C D 0 1 2 3 0 1 4 4 5 0 2 1 2 3 0 3 4 4 6 0
18
27
62,440,193
2020-6-17
https://stackoverflow.com/questions/62440193/passing-multiple-parameters-in-threadpoolexecutor-map
The following code: import concurrent.futures def worker(item, truefalse): print(item, truefalse) return item processed = [] with concurrent.futures.ThreadPoolExecutor() as pool: for res in pool.map(worker, [1,2,3], False): processed.append(res) Yields an exception: TypeError: zip argument #2 must support iteration I also tried: for res in pool.map(worker, ([1,2,3], False)): Which yields: TypeError: worker() missing 1 required positional argument: 'truefalse' How does one pass multiple arguments to the function in calling ThreadPoolExecutor.map()?
If you're trying to call the worker function with 1, False, then 2, False, then 3, False, you need to extend your single False to an iterable of Falses at least as long as the other iterable. Two approaches that work: Multiply a sequence: for res in pool.map(worker, [1,2,3], [False] * 3): Use itertools.repeat to make it as long as needed (map will stop when the shortest iterable is exhausted). Add from itertools import repeat to the top of the file, then use: for res in pool.map(worker, [1,2,3], repeat(False)): For the record, this is also how you'd do this with regular map, not just ThreadPoolExecutor.
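A minimal end-to-end version of the second approach, reusing the names from the question:
import concurrent.futures
from itertools import repeat

def worker(item, truefalse):
    print(item, truefalse)
    return item

processed = []
with concurrent.futures.ThreadPoolExecutor() as pool:
    for res in pool.map(worker, [1, 2, 3], repeat(False)):
        processed.append(res)
print(processed)   # [1, 2, 3]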
12
25
62,436,786
2020-6-17
https://stackoverflow.com/questions/62436786/attributeerror-module-time-has-no-attribute-clock-in-sqlalchemy-python-3-8
Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\Anirudh\Documents\flask_app\connecting_to_database\application.py", line 2, in <module> from flask_sqlalchemy import SQLAlchemy File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\flask_sqlalchemy\__init__.py", line 18, in <module> import sqlalchemy File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\sqlalchemy\__init__.py", line 9, in <module> from .sql import ( File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\sqlalchemy\sql\__init__.py", line 8, in <module> from .expression import ( File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\sqlalchemy\sql\expression.py", line 34, in <module> from .visitors import Visitable File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\sqlalchemy\sql\visitors.py", line 28, in <module> from .. import util File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\sqlalchemy\util\__init__.py", line 8, in <module> from .compat import callable, cmp, reduce, \ File "C:\Users\Anirudh\AppData\Local\Programs\Python\Python38\lib\site-packages\sqlalchemy\util\compat.py", line 234, in <module> time_func = time.clock AttributeError: module 'time' has no attribute 'clock'
The error occurs because time.clock() existed in Python 2 (and was deprecated from Python 3.3 onwards), but it was removed in Python 3.8 in favour of time.perf_counter(). Just replace every occurrence of time.clock with time.perf_counter, and it should be fine. For more info: https://www.webucator.com/blog/2015/08/python-clocks-explained/
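If editing the installed package is not practical, one workaround (a temporary shim, not a fix; upgrading SQLAlchemy to a release that supports Python 3.8 is the cleaner route) is to install the alias before SQLAlchemy is first imported:
import time
if not hasattr(time, 'clock'):
    time.clock = time.perf_counter   # shim for libraries that still reference time.clock
from flask_sqlalchemy import SQLAlchemy   # imported only after the shim is in place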
10
4
62,433,286
2020-6-17
https://stackoverflow.com/questions/62433286/truncate-f-string-float-without-rounding
I want to print a very very close-to-one float, truncating it to 2 decimal places without rounding, preferably with the least amount of code possible. a = 0.99999999999 print(f'{a:0.2f}') Expected: 0.99 Actual: 1.00
I don't think you need f-strings or math functions, if I understand you correctly. Plain old string manipulation should get you there: a = 0.987654321 print(str(a)[:4]) output: 0.98
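If the value might be negative or greater than 10, slicing the string is fragile; a more general truncation (a sketch using the decimal module) could look like:
from decimal import Decimal, ROUND_DOWN

def truncate(value, places=2):
    exp = Decimal(10) ** -places              # e.g. Decimal('0.01') for two places
    return Decimal(str(value)).quantize(exp, rounding=ROUND_DOWN)

print(truncate(0.99999999999))   # 0.99
print(truncate(-12.3456))        # -12.34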
8
2
62,427,205
2020-6-17
https://stackoverflow.com/questions/62427205/in-python-3-using-pytest-how-do-we-test-for-exit-code-exit1-and-exit0-fo
I am new to Pytest in python . I am facing a tricky scenario where I need to test for exit codes - exit(1) and exit(0) , using Pytest module. Below is the python program : def sample_script(): count_file = 0 if count_file == 0: print("The count of files is zero") exit(1) else: print("File are present") exit(0) Now I want to test the above program for exit codes, exit(1) and exit(0) . Using Pytest how we can frame the test code so that we can test or asset the exit code of the function sample_script ? Please help me.
Once you put the exit(1) inside the if block as suggested, you can test for the SystemExit exception: import pytest from some_package import sample_script def test_exit(): with pytest.raises(SystemExit) as pytest_wrapped_e: sample_script() assert pytest_wrapped_e.type == SystemExit assert pytest_wrapped_e.value.code == 1 The pattern is adapted from here: https://medium.com/python-pandemonium/testing-sys-exit-with-pytest-10c6e5f7726f UPDATE: Here's a complete working example you can copy/paste to test: import pytest def sample_func(): exit(1) def test_exit(): with pytest.raises(SystemExit) as e: sample_func() assert e.type == SystemExit assert e.value.code == 1 if __name__ == '__main__': test_exit()
15
26
62,362,693
2020-6-13
https://stackoverflow.com/questions/62362693/how-do-i-read-project-dependencies-from-pyproject-toml-from-my-setup-py-to-avoi
We're upgrading to use BeeWare's Briefcase 0.3.1 for packaging, which uses pyproject.toml instead of setup.py to specify how to package, including which dependencies to include in a package. Here's a minimal example of a pyproject.toml for briefcase: [tool.briefcase.app.exampleapp] formal_name = "exampleapp" description = "something" requires = ['PyQt5', 'qtconsole'] sources = ['exampleapp'] We'd like to access the list of requires from setup.py, so we wouldn't have to replicate it in both files, and keep them in sync. We're not ready to switch away from setuptools, this is only for packaging. The alternative is of course to let setup.py auto-generate the pyproject.toml file, but that seems a little backwards to the intention with PEP 518.
This answer might be outdated. I do not have time to investigate right now. I recommend checking the briefcase resources for more up-to-date information. For example this section of the doc might be relevant: https://briefcase.readthedocs.io/en/latest/reference/configuration.html#pep621-compatibility As far as I can tell, briefcase isn't actually PEP 517 compatible (at least not by default). It uses a pyproject.toml file, but doesn't fill up the [build-system] section, so it should be possible to set an actual PEP 517 build backend in that file without causing conflict. pyproject.toml [build-system] build-backend = 'setuptools.build_meta' requires = [ 'setuptools', 'toml', ] [tool.briefcase.app.exampleapp] formal_name = 'exampleapp' description = 'something' requires = ['PyQt5', 'qtconsole'] sources = ['exampleapp'] setup.py #!/usr/bin/env python3 import pathlib import pkg_resources import setuptools import toml def _parse_briefcase_toml(pyproject_path, app_name): pyproject_text = pyproject_path.read_text() pyproject_data = toml.loads(pyproject_text) briefcase_data = pyproject_data['tool']['briefcase'] app_data = briefcase_data['app'][app_name] setup_data = { 'name': pkg_resources.safe_name(app_data['formal_name']), 'version': briefcase_data['version'], 'install_requires': app_data['requires'], # ... } return setup_data def _setup(): app_name = 'exampleapp' pyproject_path = pathlib.Path('pyproject.toml') setup_data = _parse_briefcase_toml(pyproject_path, app_name) setuptools.setup(**setup_data) if __name__ == '__main__': _setup() Then pip and other PEP 517-compatible frontends should be able to build and install the project by delegating to setuptools while taking care to correctly setup a build environment containing both setuptools and toml. I guess it would be also possible to let briefcase handle the parsing of the pyproject.toml file (maybe with briefcase.config.parse_config(...)) but it's not documented so I don't know how stable these APIs are.
10
8
62,408,128
2020-6-16
https://stackoverflow.com/questions/62408128/buffererror-local-queue-full-in-python
import logging from confluent_kafka import Producer import os logger = logging.getLogger("main") BOOTSTRAP_SERVERS = os.environ['BOOTSTRAP_SERVERS'] APPLICATION_ID = os.getenv('APPLICATION_ID', default = "nke-data-source") RECONNECT_BACKOFF_MS = os.getenv('RECONNECT_BACKOFF_MS', default = 1000) REQUEST_TIMEOUT_MS = os.getenv('REQUEST_TIMEOUT_MS', default = 40000) ACKS = os.getenv('ACKS', default = "all") RETRIES = os.getenv('RETRIES', default = 15) RETRY_BACK_OFF = os.getenv('RETRY_BACK_OFF', default = 1000) MAX_IN_FLIGHT_REQUESTS = os.getenv('MAX_IN_FLIGHT_REQUESTS', default = 1) topic = os.getenv('OUTBOUND_TOPIC', default = "tti-nke-raw") p = Producer({'bootstrap.servers': BOOTSTRAP_SERVERS, 'client.id': APPLICATION_ID, 'reconnect.backoff.ms': RECONNECT_BACKOFF_MS, 'request.timeout.ms': REQUEST_TIMEOUT_MS, 'acks': ACKS, 'retries': RETRIES, 'retry.backoff.ms': RETRY_BACK_OFF, 'max.in.flight.requests.per.connection': MAX_IN_FLIGHT_REQUESTS, 'compression.type': "lz4"}) def send(key, event): try: logger.info("Sending key: [{0}] value: [{1}]".format(key, event)) p.produce(topic=topic, value=event.encode('utf-8'), key=key) except Exception: logger.error("error sending events to kafka", exc_info=True) Error:- Traceback (most recent call last): BufferError: Local: Queue full File "/app/sender.py", line 30, in send p.produce(topic=topic, value=event.encode('utf-8'), key=key) Can anyone help me in this as I'am new in python
This queue is implemented in the librdkafka library (which confluent_kafka binds to). There is an internal queue that collects the producer delivery reports and waits for the producer to deal with them; you need to trigger that processing by calling poll. You should call producer.poll(0) after every call to produce, so change: p.produce(topic=topic, value=event.encode('utf-8'), key=key) to: p.produce(topic=topic, value=event.encode('utf-8'), key=key) p.poll(0) This will trigger the queue cleaning. Don't worry about performance, because this is a very simple function that doesn't really do much; as the author of librdkafka wrote: poll() is cheap to call, it will not have a performance impact, so please add it to your producer loop. Basically it does: call poll() at regular intervals to serve the producer's delivery report callbacks. Consider reading about this in this Issue too
11
21
62,410,871
2020-6-16
https://stackoverflow.com/questions/62410871/how-do-i-test-if-point-is-in-polygon-multipolygon-with-geopandas-in-python
I have the Polygon data from the States from the USA from the website arcgis and I also have an excel file with coordinates of citys. I have converted the coordinates to geometry data (Points). Now I want to test if the Points are in the USA. Both are dtype: geometry. I thought with this I can easily compare, but when I use my code I get for every Point the answer false. Even if there are Points that are in the USA. The code is: import geopandas as gp import pandas as pd import xlsxwriter import xlrd from shapely.geometry import Point, Polygon df1 = pd.read_excel('PATH') gdf = gp.GeoDataFrame(df1, geometry= gp.points_from_xy(df1.longitude, df1.latitude)) US = gp.read_file('PATH') print(gdf['geometry'].contains(US['geometry'])) Does anybody know what I do wrong?
contains in GeoPandas currently works on a pairwise 1-to-1 basis, not 1-to-many. For this purpose, use sjoin. points_within = gp.sjoin(gdf, US, predicate='within') That will return only those points within the US. Alternatively, you can filter the polygons which contain points. polygons_contains = gp.sjoin(US, gdf, predicate='contains')
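One pitfall worth checking before the join: sjoin expects both layers to share a CRS, and a GeoDataFrame built from raw longitude/latitude columns has none by default. A sketch, assuming the coordinates are WGS84 lat/lon (note that older GeoPandas versions spell the keyword op instead of predicate):
gdf = gp.GeoDataFrame(df1, geometry=gp.points_from_xy(df1.longitude, df1.latitude), crs='EPSG:4326')
US = gp.read_file('PATH').to_crs(gdf.crs)   # reproject the polygons so both layers match
points_within = gp.sjoin(gdf, US, predicate='within')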
11
13
62,363,657
2020-6-13
https://stackoverflow.com/questions/62363657/how-can-i-plot-validation-curves-using-the-results-from-gridsearchcv
I am training a model with GridSearchCV in order to find the best parameters Code: grid_params = { 'n_estimators': [100, 200, 300, 400], 'criterion': ['gini', 'entropy'], 'max_features': ['auto', 'sqrt', 'log2'] } gs = GridSearchCV( RandomForestClassifier(), grid_params, cv=2, verbose=1, n_jobs=-1 ) clf = gs.fit(X_train, y_train) This is a slow process, and after this, I print the confusion matrix, but I want to plot the validation curve to in order to check if there are overfitting for this I use the following code: train_scores, valid_scores = validation_curve(clf.best_estimator_, X, y) The problem is that I need to set param_name, param_range, but I don't want to train again, because it is a too slow process. Another choice is to use gs, instead of clf.best_estimator_, but I need the gs trained, in order to get another information. How can I plot the validation curve, and keep the gs trainer, without train two times?
You can use the cv_results_ attribute of GridSearchCV and get the results for each combination of hyperparameters. Validation Curve is meant to depict the impact of single parameter in training and cross validation scores. Since fine tuning is done for multiple parameters in GridSearchCV, multiple plots are required to vizualise the impact of each parameter. Point to note is that the other parameter's impact has to be averaged out. This can be achieved by doing groupby on each parameter separately. For mean train and test - the mean of means would work out, but for standard deviation we have to use pooled variance because std deviations for each combination in Cross validation is almost constant. from sklearn.datasets import make_classification from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.ensemble import RandomForestClassifier import matplotlib.pyplot as plt import numpy as np X, y = make_classification(n_samples=1000, n_features=100, n_informative=2, class_sep=0.1,random_state=7) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) grid_params = { 'n_estimators': [10, 20], 'max_features': ['sqrt', 'log2'], 'criterion': ['gini', 'entropy'], 'max_depth': [2, 5,] } cv = 5 gs = GridSearchCV( RandomForestClassifier(random_state=42), grid_params, cv=cv, verbose=1, n_jobs=-1, return_train_score=True # set this for train score ) gs.fit(X_train, y_train) import pandas as pd df = pd.DataFrame(gs.cv_results_) results = ['mean_test_score', 'mean_train_score', 'std_test_score', 'std_train_score'] # https://en.wikipedia.org/wiki/Pooled_variance#Pooled_standard_deviation def pooled_var(stds): n = cv # size of each group return np.sqrt(sum((n-1)*(stds**2))/ len(stds)*(n-1)) fig, axes = plt.subplots(1, len(grid_params), figsize = (5*len(grid_params), 7), sharey='row') axes[0].set_ylabel("Score", fontsize=25) lw = 2 for idx, (param_name, param_range) in enumerate(grid_params.items()): grouped_df = df.groupby(f'param_{param_name}')[results]\ .agg({'mean_train_score': 'mean', 'mean_test_score': 'mean', 'std_train_score': pooled_var, 'std_test_score': pooled_var}) previous_group = df.groupby(f'param_{param_name}')[results] axes[idx].set_xlabel(param_name, fontsize=30) axes[idx].set_ylim(0.0, 1.1) axes[idx].plot(param_range, grouped_df['mean_train_score'], label="Training score", color="darkorange", lw=lw) axes[idx].fill_between(param_range, grouped_df['mean_train_score'] - grouped_df['std_train_score'], grouped_df['mean_train_score'] + grouped_df['std_train_score'], alpha=0.2, color="darkorange", lw=lw) axes[idx].plot(param_range, grouped_df['mean_test_score'], label="Cross-validation score", color="navy", lw=lw) axes[idx].fill_between(param_range, grouped_df['mean_test_score'] - grouped_df['std_test_score'], grouped_df['mean_test_score'] + grouped_df['std_test_score'], alpha=0.2, color="navy", lw=lw) handles, labels = axes[0].get_legend_handles_labels() fig.suptitle('Validation curves', fontsize=40) fig.legend(handles, labels, loc=8, ncol=2, fontsize=20) fig.subplots_adjust(bottom=0.25, top=0.85) plt.show() Note: line plots are not the right one for parameters with string values like criterion, you could modify it to be bar plots with error bars.
8
16
62,349,875
2020-6-12
https://stackoverflow.com/questions/62349875/how-long-does-colabs-usage-limit-lasts
This message keeps popping out after I used two GPUs simultaneously for two notebooks from the same account for about half an hour (Colab wasn't running for 12 hours): Photo of pop-out message You cannot currently connect to a GPU due to usage limits in Colab. It has been about two hours since I last used colab, but the message still pops up. It would be really great if I know for how long does this lasts. I think it could be 12 hours but would want to know from someone who has experienced this. In my case, the GPUs are available, but due to recent excess computing and running one cell for long, I have reached my usage limit for gpus. However, I want to know that after how much waiting will colab let me use its GPUs again. Update: It has been more than 2 days and colab still doesn't allow me to use GPUs. The usage limit message still pops out.
The usage limit is pretty dynamic and depends on how much/long you use colab. I was able to use the GPUs after 5 days; however, my account again reached usage limit right after 30mins of using the GPUs (google must have decreased it further for my account). The situation really became normal after months of not using colab from that account. My suggestion is to have multiple google accounts for colab, so you could use the other accounts when facing usage limits. Sorry for not replying to the comments.
22
8
62,411,746
2020-6-16
https://stackoverflow.com/questions/62411746/flaskenv-or-env-file-not-being-read
I have a flask app that uses some enviroment variables, I don't now what has changed but now it doesn't read the variables in the .flaskenv file, already tried changing its name to .env, still not working. This is the .flaskenv file: FLASK_APP=app FLASK_ENV=development CONSUMER_KEY= CONSUMER_SECRET= ACCESS_KEY= ACCESS_SECRET= SEC_USERNAME= SEC_PASSWORD= In the app.py file: from flask import Flask, render_template, url_for, request, redirect, session, g import os s_user = os.environ.get('SEC_USERNAME') s_pass = os.environ.get('SEC_PASSWORD') print(s_user) print(s_pass) if __name__ == "__main__": app.run() It prints "none". If i do flask run : Usage: flask run [OPTIONS] Error: Could not locate Flask application. You did not provide the FLASK_APP environment variable. For more information see http://flask.pocoo.org/docs/latest/quickstart/ pip list: (venv) λ pip list Package Version ----------------- ---------- certifi 2020.4.5.2 chardet 3.0.4 click 6.7 Flask 0.12.2 idna 2.9 itsdangerous 0.24 Jinja2 2.10 MarkupSafe 1.0 oauthlib 3.1.0 pip 19.2.3 PySocks 1.7.1 python-dotenv 0.13.0 requests 2.23.0 requests-oauthlib 1.3.0 setuptools 41.2.0 six 1.15.0 tweepy 3.8.0 urllib3 1.25.9 Werkzeug 0.13
As written in the Flask docs: If python-dotenv is installed, running the flask command will set environment variables defined in the files .env and .flaskenv. This can be used to avoid having to set FLASK_APP manually every time you open a new terminal, and to set configuration using environment variables similar to how some deployment services work. Make sure to run pip install python-dotenv Edit: This Flask installation (0.12.2 according to your pip list) seems to be very old; try updating it with pip install -U Flask
9
18
62,346,091
2020-6-12
https://stackoverflow.com/questions/62346091/how-can-i-disable-hide-the-grouping-of-variables-in-vscode-python
Recently the ms-python extension (v2020.5.86806) for vscode implements grouping of variables in the debug console/variable explorer. They appear as: <object> > special variables > function variables Is there a way to disable this behavior? EDIT: Screenshot added:
There's no single flag to revert to old behavior, but you can fine-tune it on a per-group basis in your launch.json: { "version": "0.2.0", "configurations": [ { "name": ..., "module": ..., ... "variablePresentation": { "all": "inline", "class": "group", "function": "hide", "protected": ..., "special": ... } } ] } "all" applies to all groups, and sets the default that can be overridden as needed; other group names are self-explanatory. For values, "group" is the default behavior, "hide" removes those variables entirely, and "inline" places them without grouping. Note that the VSCode JSON schema hasn't been updated to reflect this yet, so you'll get squiggles when editing. It'll still work, though.
12
10
62,331,439
2020-6-11
https://stackoverflow.com/questions/62331439/how-to-terminate-current-colab-session-from-notebook-cell
I'm trying to be a good citizen and make sure my notebook session is terminated immediately after running even if I'm not sitting at my machine. Is there any code I can run in a notebook cell to achieve this?
We have a way to do this correctly now: from google.colab import runtime runtime.unassign()
18
7
62,312,308
2020-6-10
https://stackoverflow.com/questions/62312308/typeerror-file-must-have-read-and-readline-attributes
1st approach: I am trying to make the below code work since morning. I have read many answers here in stackoverflow and tutorials on google about python, I have done 0 progress. Can you help me with this error: Using TensorFlow backend. Traceback (most recent call last): File "source_code_modified.py", line 65, in <module> dict1 = pickle.load(f1,encoding='bytes') TypeError: file must have 'read' and 'readline' attributes The code that explodes is this part: class load_train_data: os.open("/home/just_learning/Desktop/CNN/datasets/training_data/images", os.O_RDONLY) def __enter__(self): return self def __exit__(self, type, value, traceback): pass class load_test_data: os.open("/home/just_learning/Desktop/CNN/datasets/test_data/images",os.O_RDONLY) def __enter__(self): return self def __exit__(self, type, value, traceback): pass with load_train_data() as f1: dict1 = pickle.load(f1,encoding='bytes') 2nd approach: Ok, I did that, the error is: Using TensorFlow backend. Traceback (most recent call last): File "source_code_modified.py", line 74, in <module> with open_train_data() as f1: File "source_code_modified.py", line 47, in open_train_data return open('/home/just_learning/Desktop/CNN/datasets/training_data/images','rb') IsADirectoryError: [Errno 21] Is a directory: '/home/just_learning/Desktop/CNN/datasets/training_data/images' And the code explodes on these points: def open_train_data(): return open('/home/just_learning/Desktop/CNN/datasets/training_data/images','rb') <--- explodes here def open_test_data(): return open('/home/just_learning/Desktop/CNN/datasets/test_data/images','rb') with open_train_data() as f1: dict1 = pickle.load(f1) <--- explodes here 3rd approach: I found this: "IsADirectoryError: [Errno 21] Is a directory: " It is a file and I changed the code to this: def open_train_data(): return os.listdir('/home/just_learning/Desktop/CNN/datasets/training_data/images') def open_test_data(): return os.listdir('/home/just_learning/Desktop/CNN/datasets/test_data/images') with open_train_data() as f1: <-------------- explodes here dict1 = pickle.load(f1) #,encoding='bytes') and the error is this: Using TensorFlow backend. Traceback (most recent call last): File "source_code_modified.py", line 74, in <module> with open_train_data() as f1: AttributeError: __enter__ and this error I solved it by searching in github/google and was the result depicted on the first approach, that I have posted above...
Just sharing some stupid mistake for anyone else affected (by stupidity)^^: I had the error TypeError: file must have 'read' and 'readline' attributes because I used the plain text file_path instead of the opened f file object as the parameter of the pickle function. Correct: file_path = 'path/to/filename' with open(file_path , 'rb') as f: dict1 = pickle.load(f) Wrong: file_path = 'path/to/filename' with open(file_path , 'rb') as f: dict1 = pickle.load(file_path)
8
15
62,314,556
2020-6-10
https://stackoverflow.com/questions/62314556/how-to-install-virtualenv-on-ubuntu-20-04-gcp-instance
I am trying to install python3 virtualenv. I get the following message when I try to run virtualenv. virtualenv Command 'virtualenv' not found, but can be installed with: apt install python3-virtualenv but if I run install command, I get the following error. apt install python3-virtualenv Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package python3-virtualenv For python3 -m venv, I get message to install using apt-get install python3-venv but when I try it, I get the same message. sudo apt-get install python3-venv Reading package lists... Done Building dependency tree Reading state information... Done Package python3-venv is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'python3-venv' has no installation candidate I am running this as root. wget also works.
AFAIU the latest versions of Ubuntu removed Python2 altogether so Python3 is now just the Python. Try: apt-get update apt-get install python3-virtualenv
38
83
62,409,303
2020-6-16
https://stackoverflow.com/questions/62409303/how-to-handle-missing-values-nan-in-categorical-data-when-using-scikit-learn-o
I have recently started learning python to develop a predictive model for a research project using machine learning methods. I have a large dataset comprised of both numerical and categorical data. The dataset has lots of missing values. I am currently trying to encode the categorical features using OneHotEncoder. When I read about OneHotEncoder, my understanding was that for a missing value (NaN), OneHotEncoder would assign 0s to all the feature's categories, as such: 0 Male 1 Female 2 NaN After applying OneHotEncoder: 0 10 1 01 2 00 However, when running the following code: # Encoding categorical data from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer([('encoder', OneHotEncoder(handle_unknown='ignore'), [1])], remainder='passthrough') obj_df = np.array(ct.fit_transform(obj_df)) print(obj_df) I am getting the error ValueError: Input contains NaN So I am guessing my previous understanding of how OneHotEncoder handles missing values is wrong. Is there a way for me to get the functionality described above? I know imputing the missing values before encoding will resolve this issue, but I am reluctant to do this as I am dealing with medical data and fear that imputation may decrease the predictive accuracy of my model. I found this question that is similar but the answer doesn't offer a detailed enough solution on how to deal with the NaN values. Let me know what your thoughts are, thanks.
You will need to impute the missing values before. You can define a Pipeline with an imputing step using SimpleImputer setting a constant strategy to input a new category for null fields, prior to the OneHot encoding: from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline import numpy as np categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('encoder', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer( transformers=[ ('cat', categorical_transformer, [0]) ]) df = pd.DataFrame(['Male', 'Female', np.nan]) preprocessor.fit_transform(df) array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
28
14
62,295,889
2020-6-10
https://stackoverflow.com/questions/62295889/how-to-set-environment-variables-in-vscode-when-running-flask-app
I have a Python Flask app and have lots of environment variables that I need to set when running my app. I normally run my app like so... python3 -m app.py I would like it so that I can set all the environment variables my app needs so that I do not need to export each time I reopen my terminal. It would be nice if it could be workspace-specific or project-specific. I know other editors like Pycharm can do similar things and was wondering how to do this in VS Code?
For Flask apps, you can create a launch.json configuration that will run your Flask app using VS Code's debugger. VS Code's launch.json supports a number of options, including the setting of environment variables that your Flask app needs. Start with installing the Python extension for VS Code, to add support for "debugging of a number of types of Python applications, which includes Flask. Then, follow the Flask Tutorial from the VS Code docs. Basically, you'll need to create a launch.json file in your workspace's .vscode folder. { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "run-flask-app", "type": "python", "request": "launch", "module": "flask", "env": { "FLASK_APP": "/path/to/app.py", "FLASK_ENV": "development", "FLASK_DEBUG": "0", "AAA": "value of AAA env var", "BBB": "value of BBB env var", "CCC": "value of CCC env var" }, "args": [ "run", "--no-debugger", "--no-reload" ], "jinja": true }, ] } The env portion is where you set all the environment variables. By default, it includes the required FLASK_*-related variables because it uses the flask command line tool for running apps. Here, as an example, I've also set custom AAA and BBB and CCC vars. You can access that from code as normal env vars: @app.route('/test') def root(): aaa = os.environ.get("AAA") bbb = os.environ.get("BBB") ccc = os.environ.get("CCC") print(f'{aaa}, {bbb}, {ccc}') return f'{aaa}, {bbb}, {ccc}' Then run it from the Debug panel: Just hit the play button and you should be able to see the printed env values from the Terminal panel or from the browser after accessing the /test endpoint. (If you have multiple terminals, it should appear under the Python Debug Console). Lastly, by saving the .vscode/launch.json file under the workspace folder, then it's workspace-specific, and it will only affect the runtime environment used to launch your Flask app.
11
20
62,375,034
2020-6-14
https://stackoverflow.com/questions/62375034/find-non-overlapping-area-between-two-kde-plots
I was attempting to determine whether a feature is important or not base on its kde distribution for target variable. I am aware how to plot the kde plot and guess after looking at the plots, but is there a more formal doing this? Such as can we calculate the area of non overlapping area between two curves? When I googled for the area between two curves there are many many links but none of them could solve my exact problem. NOTE: The main aim of this plot is to find whether the feature is important or not. So, please suggest me further if I am missing any hidden concepts here. What I am trying to do is set some threshold such as 0.2, if the non-overlapping area > 0.2, then assert that the feature is important, otherwise not. MWE: import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df = sns.load_dataset('titanic') x0 = df.loc[df['survived']==0,'fare'] x1 = df.loc[df['survived']==1,'fare'] sns.kdeplot(x0,shade=1) sns.kdeplot(x1,shade=1) Output Similar links Fill area of overlap between two normal distributions in seaborn / matplotlib Python: Overlap between two functions (PDF of kde and normal) Fill area between two curves in python
Here are my ideas about the computational part of the question: In order to compare the kde's, they need to be calculated with the same bandwidth. (The default bandwidth depends on the number of x-values, which can be different for both sets.) The intersection of two positive curves is just their minimum. The area of a curve can be approximated via the trapezium rule: np.trapz. Here are these ideas converted to some example code and illustrating plot: import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from scipy.stats import gaussian_kde df = sns.load_dataset('titanic') x0 = df.loc[df['survived'] == 0, 'fare'] x1 = df.loc[df['survived'] == 1, 'fare'] kde0 = gaussian_kde(x0, bw_method=0.3) kde1 = gaussian_kde(x1, bw_method=0.3) xmin = min(x0.min(), x1.min()) xmax = max(x0.max(), x1.max()) dx = 0.2 * (xmax - xmin) # add a 20% margin, as the kde is wider than the data xmin -= dx xmax += dx x = np.linspace(xmin, xmax, 500) kde0_x = kde0(x) kde1_x = kde1(x) inters_x = np.minimum(kde0_x, kde1_x) plt.plot(x, kde0_x, color='b', label='No') plt.fill_between(x, kde0_x, 0, color='b', alpha=0.2) plt.plot(x, kde1_x, color='orange', label='Yes') plt.fill_between(x, kde1_x, 0, color='orange', alpha=0.2) plt.plot(x, inters_x, color='r') plt.fill_between(x, inters_x, 0, facecolor='none', edgecolor='r', hatch='xx', label='intersection') area_inters_x = np.trapz(inters_x, x) handles, labels = plt.gca().get_legend_handles_labels() labels[2] += f': {area_inters_x * 100:.1f} %' plt.legend(handles, labels, title='Survived?') plt.title('Fare vs Survived') plt.tight_layout() plt.show()
10
14
62,345,198
2020-6-12
https://stackoverflow.com/questions/62345198/extract-individual-links-from-a-single-youtube-playlist-link-using-python
I need a python script that takes link to a single youtube playlist and then gives out a list containing the links to individual videos in the playlist. I realize that same question was asked few years ago, but it was asked for python2.x and the codes in the answer don't work properly. They are very weird, they work sometimes but give empty output once in a while(maybe some of the packages used there have been updated, I don't know). I've included one of those code below. If any of you don't believe, run this code several times you'll receive empty list once in a while, but most of the time it does the job of breaking down a playlist. from bs4 import BeautifulSoup as bs import requests r = requests.get('https://www.youtube.com/playlist?list=PL3D7BFF1DDBDAAFE5') page = r.text soup=bs(page,'html.parser') res=soup.find_all('a',{'class':'pl-video-title-link'}) for l in res: print(l.get("href")) In case of some playlists the code just doesn't work at all. Also, if beautifulsoup can't do the job, any other popular python library will do.
It seems youtube loads sometimes different versions of the page, sometimes with html organized like you expected using links with pl-video-title-link class : <td class="pl-video-title"> <a class="pl-video-title-link yt-uix-tile-link yt-uix-sessionlink spf-link " dir="ltr" href="/watch?v=GtWXOzsD5Fw&amp;list=PL3D7BFF1DDBDAAFE5&amp;index=101&amp;t=0s" data-sessionlink="ei=TJbjXtC8NYri0wWCxarQDQ&amp;feature=plpp_video&amp;ved=CGoQxjQYYyITCNCSmqHD_OkCFQrxtAodgqIK2ij6LA"> Android Application Development Tutorial - 105 - Spinners and ArrayAdapter </a> <div class="pl-video-owner"> de <a href="/user/thenewboston" class=" yt-uix-sessionlink spf-link " data-sessionlink="ei=TJbjXtC8NYri0wWCxarQDQ&amp;feature=playlist&amp;ved=CGoQxjQYYyITCNCSmqHD_OkCFQrxtAodgqIK2ij6LA" >thenewboston</a> </div> <div class="pl-video-bottom-standalone-badge"> </div> </td> Sometimes with data embedded in a JS variables and loaded dynamically : window["ytInitialData"] = { .... very big json here .... }; For the second version, you will need to use regex to parse Javascript unless you want to use tools like selenium to grab the content after page load. The best way is to use the official API which is straightforward to get the playlist items : Go to Google Developer Console, search Youtube Data API / enable Youtube Data API v3 Click on Create Credentials / Youtube Data API v3 / Public data Alternatively (For Credentials Creation) Go to Credentials / Create Credentials / API key install google api client for python : pip3 install --upgrade google-api-python-client Use the API key in the script below. This script fetch playlist items for playlist with id PL3D7BFF1DDBDAAFE5, use pagination to get all of them, and re-create the link from the videoId and playlistID : import googleapiclient.discovery from urllib.parse import parse_qs, urlparse #extract playlist id from url url = 'https://www.youtube.com/playlist?list=PL3D7BFF1DDBDAAFE5' query = parse_qs(urlparse(url).query, keep_blank_values=True) playlist_id = query["list"][0] print(f'get all playlist items links from {playlist_id}') youtube = googleapiclient.discovery.build("youtube", "v3", developerKey = "YOUR_API_KEY") request = youtube.playlistItems().list( part = "snippet", playlistId = playlist_id, maxResults = 50 ) response = request.execute() playlist_items = [] while request is not None: response = request.execute() playlist_items += response["items"] request = youtube.playlistItems().list_next(request, response) print(f"total: {len(playlist_items)}") print([ f'https://www.youtube.com/watch?v={t["snippet"]["resourceId"]["videoId"]}&list={playlist_id}&t=0s' for t in playlist_items ]) Output: get all playlist items links from PL3D7BFF1DDBDAAFE5 total: 195 [ 'https://www.youtube.com/watch?v=SUOWNXGRc6g&list=PL3D7BFF1DDBDAAFE5&t=0s', 'https://www.youtube.com/watch?v=857zrsYZKGo&list=PL3D7BFF1DDBDAAFE5&t=0s', 'https://www.youtube.com/watch?v=Da1jlmwuW_w&list=PL3D7BFF1DDBDAAFE5&t=0s', ........... 'https://www.youtube.com/watch?v=1j4prh3NAZE&list=PL3D7BFF1DDBDAAFE5&t=0s', 'https://www.youtube.com/watch?v=s9ryE6GwhmA&list=PL3D7BFF1DDBDAAFE5&t=0s' ]
9
25
62,390,517
2020-6-15
https://stackoverflow.com/questions/62390517/no-module-named-sklearn-utils-linear-assignment
I am trying to run a project from GitHub: several object counter applications using the SORT algorithm. I can't run any of them because of a specific error (error screenshot attached). Can anyone help me fix this issue?
The linear_assignment function is deprecated in 0.21 and will be removed in 0.23, but sklearn.utils.linear_assignment_ can be replaced by scipy.optimize.linear_sum_assignment. You can use: from scipy.optimize import linear_sum_assignment as linear_assignment then you can run the file without needing to change the rest of the code.
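One caveat: the two functions return their results in different shapes. The removed sklearn helper returned a single (n, 2) array of matched (row, col) pairs, while scipy returns a (row_ind, col_ind) tuple, so code that iterates over matched pairs may still need a small adapter such as:
import numpy as np
from scipy.optimize import linear_sum_assignment

def linear_assignment(cost_matrix):
    rows, cols = linear_sum_assignment(cost_matrix)
    return np.array(list(zip(rows, cols)))   # (n, 2) array of matched index pairs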
26
77
62,352,670
2020-6-12
https://stackoverflow.com/questions/62352670/deserialization-of-large-numpy-arrays-using-pickle-is-order-of-magnitude-slower
I am deserializing large numpy arrays (500MB in this example) and I find the results vary by orders of magnitude between approaches. Below are the 3 approaches I've timed. I'm receiving the data from the multiprocessing.shared_memory package, so the data comes to me as a memoryview object. But in these simple examples, I just pre-create a byte array to run the test. I wonder if there are any mistakes in these approaches, or if there are other techniques I didn't try. Deserialization in Python is a real pickle of a problem if you want to move data fast and not lock the GIL just for the IO. A good explanation as to why these approaches vary so much would also be a good answer. """ Deserialization speed test """ import numpy as np import pickle import time import io sz = 524288000 sample = np.random.randint(0, 255, size=sz, dtype=np.uint8) # 500 MB data serialized_sample = pickle.dumps(sample) serialized_bytes = sample.tobytes() serialized_bytesio = io.BytesIO() np.save(serialized_bytesio, sample, allow_pickle=False) serialized_bytesio.seek(0) result = None print('Deserialize using pickle...') t0 = time.time() result = pickle.loads(serialized_sample) print('Time: {:.10f} sec'.format(time.time() - t0)) print('Deserialize from bytes...') t0 = time.time() result = np.ndarray(shape=sz, dtype=np.uint8, buffer=serialized_bytes) print('Time: {:.10f} sec'.format(time.time() - t0)) print('Deserialize using numpy load from BytesIO...') t0 = time.time() result = np.load(serialized_bytesio, allow_pickle=False) print('Time: {:.10f} sec'.format(time.time() - t0)) Results: Deserialize using pickle... Time: 0.2509949207 sec Deserialize from bytes... Time: 0.0204288960 sec Deserialize using numpy load from BytesIO... Time: 28.9850852489 sec The second option is the fastest, but notably less elegant because I need to explicitly serialize the shape and dtype information.
I found your question useful, I'm looking for best numpy serialization and confirmed that np.load() was best except it was beaten by pyarrow in my add on test below. Arrow is now a super popular data serialization framework for distributed compute (E.g. Spark, ...) """ Deserialization speed test """ import numpy as np import pickle import time import io import pyarrow as pa sz = 524288000 sample = np.random.randint(0, 255, size=sz, dtype=np.uint8) # 500 MB data pa_buf = pa.serialize(sample).to_buffer() serialized_sample = pickle.dumps(sample) serialized_bytes = sample.tobytes() serialized_bytesio = io.BytesIO() np.save(serialized_bytesio, sample, allow_pickle=False) serialized_bytesio.seek(0) result = None print('Deserialize using pickle...') t0 = time.time() result = pickle.loads(serialized_sample) print('Time: {:.10f} sec'.format(time.time() - t0)) print('Deserialize from bytes...') t0 = time.time() result = np.ndarray(shape=sz, dtype=np.uint8, buffer=serialized_bytes) print('Time: {:.10f} sec'.format(time.time() - t0)) print('Deserialize using numpy load from BytesIO...') t0 = time.time() result = np.load(serialized_bytesio, allow_pickle=False) print('Time: {:.10f} sec'.format(time.time() - t0)) print('Deserialize pyarrow') t0 = time.time() restored_data = pa.deserialize(pa_buf) print('Time: {:.10f} sec'.format(time.time() - t0)) Results from i3.2xlarge on Databricks Runtime 8.3ML Python 3.8, Numpy 1.19.2, Pyarrow 1.0.1 Deserialize using pickle... Time: 0.4069395065 sec Deserialize from bytes... Time: 0.0281322002 sec Deserialize using numpy load from BytesIO... Time: 0.3059172630 sec Deserialize pyarrow Time: 0.0031735897 sec Your BytesIO results were about 100x more than mine, which I don't know why.
11
3
62,393,032
2020-6-15
https://stackoverflow.com/questions/62393032/custom-loss-function-with-weights-in-keras
I'm new with neural networks. I wanted to make a custom loss function in TensorFlow, but I need to get a vector of weights, so I did it in this way: def my_loss(weights): def custom_loss(y, y_pred): return weights*(y - y_pred) return custom_loss model.compile(optimizer='adam', loss=my_loss(weights), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=None, validation_data=(x_test, y_test), epochs=100) When I launch it, I receive this error: InvalidArgumentError: Incompatible shapes: [50000,10] vs. [32,10] The shapes are: print(weights.shape) print(y_train.shape) (50000, 10) (50000, 10) So I thought that it was a problem with the batches, I don't have a strong background with TensorFlow, so I tried to solve in a naive way using a global variable batch_index = 0 and then updating it within a custom callback into the "on_batch_begin" hook. But it didn't work and it was a horrible solution. So, how can I get the exact part of the weights with the corresponding y? Do I have a way to get the current batch index inside the custom loss? Thank you in advance for your help
This is a workaround to pass additional arguments to a custom loss function, in your case an array of weights. The trick consists in using extra fake inputs, which make it possible to build and apply the loss correctly. Don't forget that Keras works with a fixed batch dimension. I provide a dummy example for a regression problem. def mse(y_true, y_pred, weights): error = y_true-y_pred return K.mean(K.square(error) + K.sqrt(weights)) X = np.random.uniform(0,1, (1000,10)) y = np.random.uniform(0,1, 1000) w = np.random.uniform(0,1, 1000) inp = Input((10,)) true = Input((1,)) weights = Input((1,)) x = Dense(32, activation='relu')(inp) out = Dense(1)(x) m = Model([inp,true,weights], out) m.add_loss( mse( true, out, weights ) ) m.compile(loss=None, optimizer='adam') m.fit(x=[X, y, w], y=None, epochs=3) ## final fitted model to compute predictions (remove W if not needed) final_m = Model(inp, out)
8
5
62,416,223
2020-6-16
https://stackoverflow.com/questions/62416223/how-to-select-only-few-columns-in-scikit-learn-column-selector-pipeline
I was reading the scikitlearn tutorial about column transformer. The given example (https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_selector.html#sklearn.compose.make_column_selector) works, but when I tried to select only few columns, It gives me error. MWE import numpy as np import pandas as pd import seaborn as sns from sklearn.compose import make_column_transformer from sklearn.compose import make_column_selector df = sns.load_dataset('tips') mycols = ['tip','sex'] ct = make_column_transformer(make_column_selector(pattern=mycols) ct.fit_transform(df) Required I want only the select columns in the output. NOTE Of course, I know I can do df[mycols], I am looking for scikit learn pipeline example.
If you don't mind mlxtend, it has built-in transformer for that. Using mlxtend from mlxtend.feature_selection import ColumnSelector pipe = ColumnSelector(mycols) pipe.fit_transform(df) For sklearn >= 0.20 Reference: https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline import seaborn as sns df = sns.load_dataset('tips') mycols = ['tip','sex'] pipeline = Pipeline([ ("selector", ColumnTransformer([ ("selector", "passthrough", mycols) ], remainder="drop")) ]) pipeline.fit_transform(df) For sklearn < 0.20 from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline class FeatureSelector(BaseEstimator, TransformerMixin): def __init__(self, columns): self.columns = columns def fit(self, X, y=None): return self def transform(self, X, y=None): return X[self.columns] pipeline = Pipeline([('selector', FeatureSelector(columns=mycols)) ]) pipeline.fit_transform(df)[:5]
16
19
62,418,465
2020-6-16
https://stackoverflow.com/questions/62418465/why-does-djangos-apps-get-model-return-a-fake-mymodel-object
I am writing a custom Django migration script. As per the django docs on custom migrations, I should be able to use my model vis-a-vis apps.get_model(). However, when trying to do this I get the following error: AttributeError: type object 'MyModel' has no attribute 'objects' I think this has to do with the apps registry not being ready, but I am not sure. Sample code: def do_thing(apps, schema_editor): my_model = apps.get_model('app', 'MyModel') objects_ = my_model.objects.filter( some_field__isnull=True).prefetch_related( 'some_field__some_other_field') # exc raised here class Migration(migrations.Migration): atomic = False dependencies = [ ('app', '00xx_auto_xxx') ] operations = [ migrations.RunPython(do_thing), ] A simple print statement of apps.get_model()'s return value shows the following: <class '__fake__.MyModel'>. I'm not sure what this is, and if it is a result of not being ready. EDIT: I couldn't find any resources to explain why I am getting a __fake__ object so I decided to tinker with the code. I got it to work by preempting apps from args, as can be seen here: def do_thing(apps, schema_editor): from django.apps import apps my_model = apps.get_model('app', 'MyModel') objects_ = my_model.objects.filter( some_field__isnull=True).prefetch_related( 'some_field__some_other_field') # no more exc raised here I am still confused and any help would be appreciated.
The fake objects are historical models. Here's the explanation from Django docs: When you run migrations, Django is working from historical versions of your models stored in the migration files. [...] Because it’s impossible to serialize arbitrary Python code, these historical models will not have any custom methods that you have defined. They will, however, have the same fields, relationships, managers (limited to those with use_in_migrations = True) and Meta options (also versioned, so they may be different from your current ones). In case objects is a custom manager, you can set use_in_migrations = True to make it available in migrations.
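A minimal sketch of that opt-in, assuming MyModel uses a custom default manager (the field shown is a placeholder):
from django.db import models

class MyModelManager(models.Manager):
    use_in_migrations = True   # make this manager available on historical models

class MyModel(models.Model):
    some_field = models.DateTimeField(null=True)
    objects = MyModelManager()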
16
12
62,397,170
2020-6-15
https://stackoverflow.com/questions/62397170/python-pandas-how-to-select-rows-where-objects-start-with-letters-pl
I have a specific problem with pandas: I need to select rows in a dataframe which start with specific letters. Details: I've imported my data into a dataframe and selected the columns that I need. I've also narrowed it down to the row index I need. Now I also need to select rows in another column where the values START with the letters 'pl'. Is there any solution to select rows based only on the first two characters? I was thinking about pl = df['Code'] == pl* but it won't work due to row indexing. Advice appreciated!
If you use a string method on the Series, it should return a true/false result. You can then use that as a filter combined with .loc to create your data subset. new_df = df.loc[df['Code'].str.startswith('pl')].copy()
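If the column can contain missing values, str.startswith returns NaN for them and the boolean filter fails; passing na=False treats missing values as non-matches:
new_df = df.loc[df['Code'].str.startswith('pl', na=False)].copy()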
14
2
62,395,559
2020-6-15
https://stackoverflow.com/questions/62395559/input-0-of-layer-lstm-5-is-incompatible-with-the-layer-expected-ndim-3-found-n
I am trying to create an image captioning model. Could you please help with this error? input1 is the image vector, input2 is the caption sequence. 32 is the caption length. I want to concatenate the image vector with the embedding of the sequence and then feed it to the decoder model. def define_model(vocab_size, max_length): input1 = Input(shape=(512,)) input1 = tf.keras.layers.RepeatVector(32)(input1) print(input1.shape) input2 = Input(shape=(max_length,)) e1 = Embedding(vocab_size, 512, mask_zero=True)(input2) print(e1.shape) dec1 = tf.concat([input1,e1], axis=2) print(dec1.shape) dec2 = LSTM(512)(dec1) dec3 = LSTM(256)(dec2) dec4 = Dropout(0.2)(dec3) dec5 = Dense(256, activation="relu")(dec4) output = Dense(vocab_size, activation="softmax")(dec5) model = tf.keras.Model(inputs=[input1, input2], outputs=output) model.compile(loss="categorical_crossentropy", optimizer="adam") print(model.summary()) return model ValueError: Input 0 of layer lstm_5 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 512]
This error occurs when an LSTM layer gets input in 2D instead of 3D. For instance: (64, 100) The correct format is (n_samples, time_steps, features): (64, 5, 100) In this case, the mistake you did was that the input of dec3, which is an LSTM layer, was the output of dec2, which is also an LSTM layer. By default, the argument return_sequences in an LSTM layer is False. This means that the first LSTM returned a 2D tensor, which was incompatible with the next LSTM layer. I solved your issue by setting return_sequences=True in your first LSTM layer. Also, there was an error in this line: model = tf.keras.Model(inputs=[input1, input2], outputs=output) input1 was not an input layer because you reassigned it. See: input1 = Input(shape=(512,)) input1 = tf.keras.layers.RepeatVector(32)(input1) I renamed the second one e0, consistent with how you're naming your variables. Now, everything is working: import tensorflow as tf from tensorflow.keras.layers import * from tensorflow.keras import Input vocab_size, max_length = 1000, 32 input1 = Input(shape=(128)) e0 = tf.keras.layers.RepeatVector(32)(input1) print(input1.shape) input2 = Input(shape=(max_length,)) e1 = Embedding(vocab_size, 128, mask_zero=True)(input2) print(e1.shape) dec1 = Concatenate()([e0, e1]) print(dec1.shape) dec2 = LSTM(16, return_sequences=True)(dec1) dec3 = LSTM(16)(dec2) dec4 = Dropout(0.2)(dec3) dec5 = Dense(32, activation="relu")(dec4) output = Dense(vocab_size, activation="softmax")(dec5) model = tf.keras.Model(inputs=[input1, input2], outputs=output) model.compile(loss="categorical_crossentropy", optimizer="adam") print(model.summary()) Model: "model_2" _________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================= input_24 (InputLayer) [(None, 128)] 0 _________________________________________________________________________________ input_25 (InputLayer) [(None, 32)] 0 _________________________________________________________________________________ repeat_vector_12 (RepeatVector) (None, 32, 128) 0 input_24[0][0] _________________________________________________________________________________ embedding_11 (Embedding) (None, 32, 128) 128000 input_25[0][0] _________________________________________________________________________________ concatenate_7 (Concatenate) (None, 32, 256) 0 repeat_vector_12[0][0] embedding_11[0][0] _________________________________________________________________________________ lstm_12 (LSTM) (None, 32, 16) 17472 concatenate_7[0][0] _________________________________________________________________________________ lstm_13 (LSTM) (None, 16) 2112 lstm_12[0][0] _________________________________________________________________________________ dropout_2 (Dropout) (None, 16) 0 lstm_13[0][0] _________________________________________________________________________________ dense_4 (Dense) (None, 32) 544 dropout_2[0][0] _________________________________________________________________________________ dense_5 (Dense) (None, 1000) 33000 dense_4[0][0] ================================================================================= Total params: 181,128 Trainable params: 181,128 Non-trainable params: 0 _________________________________________________________________________________
9
2
62,335,424
2020-6-11
https://stackoverflow.com/questions/62335424/tkinter-how-to-bind-to-shifttab
I'm trying to bind the SHIFT+TAB keys, but I can't seem to get it to work. The widget I'm binding to is an Entry widget. I've tried binding the keys with widget.bind('<Shift_Tab>', func), but I get an error message saying: File "/usr/lib64/python3.8/tkinter/init.py", line 1337, in _bind self.tk.call(what + (sequence, cmd)) _tkinter.TclError: bad event type or keysym "Shift_Tab" Update I'm still having a problem detecting SHIFT+TAB. Here is my test code. My OS is Linux. The tab key works, just not SHIFT+TAB. Seems like a simple problem to solve, so I must be going about it wrong? I'm trying to tab between columns in a Treeview that I have overlaid widgets on a row to simulate inline editing. There can be only one active widget on a line. I keep track of what column I'm in and when the user presses SHIFT+TAB or TAB, I remove the current widget and display a new widget in the corresponding column. Here is a link to the complete project: The project is in one file and has no imports. The code below is my attempt and it doesn't work. import tkinter as tk import tkinter.ttk as ttk class App(tk.Tk): def __init__(self): super().__init__() self.rowconfigure(0, weight=1) self.columnconfigure(0, weight=1) self.title('Default Demo') self.geometry('420x200') wdg = ttk.Entry(self) wdg.grid() def tab(_): print('Tab pressed.') def shift_tab(_): print('Shift tab pressed.') wdg.bind('<Tab>', tab) wdg.bind('<Control-ISO_Left_Tab>', shift_tab) def main(): app = App() app.mainloop() if __name__ == '__main__': main()
Not a direct answer and too long for a comment. You can answer this yourself with a simple trick: bind <Key> to a function and print the key event argument passed to the bound function, so you can see exactly which key is pressed. Try multiple key combinations to see what the state is and what their keysym or keycode is. import tkinter as tk def key_press(evt): print(evt) root = tk.Tk() root.bind("<Key>", key_press) root.mainloop() It will output the following for pressing the SHIFT + TAB combo on macOS. <KeyPress event state=Shift keysym=Tab keycode=3145753 char='\x19' x=-5 y=-50> Here, state=Shift means the key event's state is SHIFT. keysym=Tab means the tab key is pressed. If we just press SHIFT, the keysym will be Shift_L or Shift_R (it shows Shift_L for both shift keys on mac). keycode is a unique code for each key, even for different key combos: for example, the keycode for Left Shift is 131330 and the keycode for TAB is 3145737, but when SHIFT + TAB is pressed the code is not the same as either, it is 3145753. (I'm not sure if these are the same codes for every OS, but one can figure it out by printing them to the console.) Also, see all the event attributes. Since binding the '<Shift-Tab>' key combination works well on Mac, it can also be used like so...
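The same event-printing trick applies on Linux, where the question is being run. As a hedged, untested sketch: on X11 the printout for SHIFT + TAB typically reports keysym=ISO_Left_Tab (the keysym the question already mentions), so a binding along these lines, without the Control modifier, may be what works for the Entry example:

def shift_tab(_):
    print('Shift tab pressed.')

wdg.bind('<Tab>', tab)
wdg.bind('<ISO_Left_Tab>', shift_tab)  # keysym typically reported for Shift+Tab on X11 (assumption)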
8
5
62,378,481
2020-6-14
https://stackoverflow.com/questions/62378481/typeerror-input-filename-of-readfile-op-has-type-float32-that-does-not-matc
I am running this code from the tutorial here: https://keras.io/examples/vision/image_classification_from_scratch/ with a custom dataset, that is divided in 2 datasets as in the tutorial. However, I got this error: TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string. I made this casting. I tried this: is_jfif = str(tf.compat.as_bytes("JFIF")) in fobj.peek(10) but nothing changed as far as the error I am trying all day to figure out how to solve it, without any success. Can someone help me? Thank you...
The simplest way I found is to create a subfolder and copy the files to that subfolder. I.e., let's assume your files are 0.jpg, 1.jpg, 2.jpg, ..., 2000.jpg and they sit in a directory named "patterns". It seems the Keras API does not accept them because the files are named with bare numbers, which end up being read as float32. To overcome this issue, you can either rename the files as another answer suggests, or you can simply create a subfolder under "patterns" (i.e. "patterndir"). So now your image files are under ...\patterns\patterndir Keras is (internally) possibly using the subdirectory name and attaching it in front of the image file name, thus making it a string (something like patterndir_01.jpg, patterndir_02.jpg). [Note: this is my interpretation, it does not mean that it is true.] When you run it this time, you will see that it works and you will get a message like: Found 2001 files belonging to 1 classes. Using 1601 files for training. Found 2001 files belonging to 1 classes. Using 400 files for validation. My code looks like this import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers #Generate a dataset image_size = (28, 28) batch_size = 32 train_ds = tf.keras.preprocessing.image_dataset_from_directory( "patterns", validation_split=0.2, subset="training", seed=1337, image_size=image_size, batch_size=batch_size, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( "patterns", validation_split=0.2, subset="validation", seed=1337, image_size=image_size, batch_size=batch_size, )
8
17
62,303,980
2020-6-10
https://stackoverflow.com/questions/62303980/python-version-in-azure-databricks
I am trying to find out the python version I am using in Databricks. To find out I tried import sys print(sys.version) And I got the output as 3.7.3 However when I went to Cluster --> SparkUI --> Environment I see that the cluster Python version is 2. Which version does this refer to ? When I tried running %sh python --version I still get Python 3.7.3 Can there be a different python version for each worker / driver node ? Note: I am using a setup where there is 1 worker node and 1 driver node (2 nodes in total with the same spec) and Databricks Runtime Version is 6.5 ML
Update: This issue has been fixed. For new clusters: if you create a new cluster, it will have the Python environment variable set to 3. For existing clusters: you need to add the environment variable below under Cluster Configuration > Advanced > Environment Variables; that corrects the reported version. PYSPARK_PYTHON=/databricks/python3/bin/python3 Thanks for bringing this to our attention. This is a product bug; currently I'm working with the product team to fix the issue asap. The default Python version for clusters created using the UI is Python 3. As part of the repro, I created a cluster with Databricks Runtime Version 6.5 ML and observed the same behaviour: Cluster --> SparkUI --> Environment shows the incorrect version.
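If you want to confirm which interpreter the worker nodes actually use (as opposed to what the Spark UI reports), one quick check is to run a tiny job that returns sys.version from the executors. This is only a sketch and assumes it runs in a Databricks notebook where sc (the SparkContext) is already defined:

import sys

print("driver:", sys.version)

def worker_version(_):
    import sys
    return sys.version

# two partitions, so at least one task runs on a worker
print("workers:", set(sc.parallelize(range(2), 2).map(worker_version).collect()))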
15
8
62,316,405
2020-6-11
https://stackoverflow.com/questions/62316405/how-to-get-sliding-window-of-a-values-for-each-element-in-both-direction-forwar
I have a list of values like this, lst = [1, 2, 3, 4, 5, 6, 7, 8] Desired Output: window size = 3 1 # first element in the list forward = [2, 3, 4] backward = [] 2 # second element in the list forward = [3, 4, 5] backward = [1] 3 # third element in the list forward = [4, 5, 6] backward = [1, 2] 4 # fourth element in the list forward = [5, 6, 7] backward = [1, 2, 3] 5 # fifth element in the list forward = [6, 7, 8] backward = [2, 3, 4] 6 # sixth element in the list forward = [7, 8] backward = [3, 4, 5] 7 # seventh element in the list forward = [8] backward = [4, 5, 6] 8 # eight element in the list forward = [] backward = [5, 6, 7] Lets assume a window size of 4, now my desired output: for each_element in the list, I want 4 values in-front and 4 values backward ignoring the current value. I was able to use this to get sliding window of values but this also not giving me the correct required output. import more_itertools list(more_itertools.windowed([1, 2, 3, 4, 5, 6, 7, 8], n=3))
Code: arr = [1, 2, 3, 4, 5, 6, 7, 8] window = 3 for backward, current in enumerate(range(len(arr)), start = 0-window): if backward < 0: backward = 0 print(arr[current+1:current+1+window], arr[backward:current]) Output: [2, 3, 4], [] [3, 4, 5], [1] [4, 5, 6], [1, 2] [5, 6, 7], [1, 2, 3] [6, 7, 8], [2, 3, 4] [7, 8], [3, 4, 5] [8], [4, 5, 6] [], [5, 6, 7] One Liner: print(dict([(e, (lst[i+1:i+4], lst[max(i-3,0):i])) for i,e in enumerate(lst)])) Output: {1: ([2, 3, 4], []), 2: ([3, 4, 5], [1]), 3: ([4, 5, 6], [1, 2]), 4: ([5, 6, 7], [1, 2, 3]), 5: ([6, 7, 8], [2, 3, 4]), 6: ([7, 8], [3, 4, 5]), 7: ([8], [4, 5, 6]), 8: ([], [5, 6, 7])} Credit: thanks to suggestions from @FeRD and @Androbin, the solution now looks better
29
15
62,414,423
2020-6-16
https://stackoverflow.com/questions/62414423/google-drive-api-list-files-in-a-shared-folder-that-i-have-not-accessed-yet
I am trying to automate the downloading of files from a Google Drive shared folder. The contents of the folder change daily. The folder is shared to anyone with a link to the folder. My problem is that the query does not return the new files that I have not opened yet unless I open the new files in Google Drive. folder_id = 'xxxx...xxx' results = drive_service.files().list(q=f"parents in '{folder_id}' and trashed = false", fields="nextPageToken, files(id, name)").execute() Is there a way to list files in a shared folder that I have not opened yet? I tried to negate the sharedWithMe field but it does not work.
Yes, there is, but you need to specify that you want to include shared folders in the search. You can do this by setting includeItemsFromAllDrives and supportsAllDrives to true. In Python you can implement it with folder_id = 'xxxx...xxx' results = drive_service.files().list(supportsAllDrives=True, includeItemsFromAllDrives=True, q=f"parents in '{folder_id}' and trashed = false", fields = "nextPageToken, files(id, name)").execute() UPDATE Mind that shared with anyone with a link means that until you open a file, it is not explicitly shared with you and thus won't show up as part of your file list. If you want to change this, you need to ask the owner to specify you as a viewer of the folder.
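Since files().list returns results one page at a time, a folder with many files also needs the nextPageToken to be followed. A rough, untested sketch of that loop, reusing the same query as above (pageToken is the standard Drive v3 list parameter):

files = []
page_token = None
while True:
    response = drive_service.files().list(
        supportsAllDrives=True,
        includeItemsFromAllDrives=True,
        q=f"parents in '{folder_id}' and trashed = false",
        fields="nextPageToken, files(id, name)",
        pageToken=page_token,
    ).execute()
    files.extend(response.get("files", []))
    page_token = response.get("nextPageToken")
    if page_token is None:
        break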
9
21
62,408,115
2020-6-16
https://stackoverflow.com/questions/62408115/updating-a-matplotlib-figure-during-simulation
I try to implement a matplotlib figure that updates during the simulation of my environment. The following Classes works fine in my test but doesn't update the figure when I use it in my environment. During the simulation of the environment, the graph is shown, but no lines are plotted. My guess is that .draw() is not working how I think it does. Can anyone figure out the issue here? class Visualisation: def __init__(self, graphs): self.graphs_dict = {} for graph in graphs: fig = plt.figure() ax = fig.add_subplot(111) line, = ax.plot(graph.x, graph.y, 'r-') self.graphs_dict[graph.title] = {"fig": fig, "ax": ax, "line": line, "graph": graph} self.graphs_dict[graph.title]["fig"].canvas.draw() plt.ion() plt.show() def update(self, graph): graph = self.graphs_dict[graph.title]["graph"] self.graphs_dict[graph.title]["line"].set_xdata(graph.x) self.graphs_dict[graph.title]["line"].set_ydata(graph.y) self.graphs_dict[graph.title]["fig"].canvas.flush_events() x_lim, y_lim = self.get_lim(graph) self.graphs_dict[graph.title]["ax"].set_xlim(x_lim) self.graphs_dict[graph.title]["ax"].set_ylim(y_lim) self.graphs_dict[graph.title]["fig"].canvas.draw() @staticmethod def get_lim(graph): if graph.x_lim is None: x = np.array(graph.x) y = np.array(graph.y) x_lim = [x.min(), x.max()] y_lim = [y.min(), y.max()] else: x_lim = graph.x_lim y_lim = graph.y_lim return x_lim, y_lim class Graph: def __init__(self, title, x, y, x_label="", y_label=""): """ Sets up a graph for Matplotlib Parameters ---------- title : String Title of the plot x : float y : float x_label : String x Label y_label : String y Label """ self.title = title self.x = x self.y = y self.x_label = x_label self.y_label = y_label self.x_lim, self.y_lim = None, None def set_lim(self, x_lim, y_lim): self.x_lim = x_lim self.y_lim = y_lim class Environment: def __init__(self, [..], verbose=0): """verbose : int 0 - No Visualisation 1 - Visualisation 2 - Visualisation and Logging""" self.vis = None self.verbose = verbose [......] def simulate(self): for _ in range(self.n_steps): [...] self.visualize() def visualize(self): if self.verbose == 1 or self.verbose == 2: if self.vis is None: graphs = [Graph(title="VariableY", x=[], y=[])] graphs[0].set_lim(x_lim=[0, 100], y_lim=[0, 300]) self.vis = Visualisation(graphs=graphs) else: self.vis.graphs_dict["VariableY"]["graph"].x.append(self.internal_step) self.vis.graphs_dict["VariableY"]["graph"].y.append(150) self.vis.update(self.vis.graphs_dict["VariableY"]["graph"]) When I run the code I more or less just write: env.simulate(). The code runs fine here: class TestSingularVisualisation(unittest.TestCase): def setUp(self): self.graph = Graph(title="Test", x=[0], y=[0]) self.vis = Visualisation(graphs=[self.graph]) class TestSingleUpdate(TestSingularVisualisation): def test_repeated_update(self): for i in range(5): self.graph.x.append(i) self.graph.y.append(np.sin(i)) self.vis.update(self.graph) time.sleep(1)
Turns out your code works the way it is set up. Here is the sole problem with the code you provided: self.vis.graphs_dict["VariableY"]["graph"].x.append(self.internal_step) self.vis.graphs_dict["VariableY"]["graph"].y.append(150) You are plotting a line and correctly updating the canvas, however, you keep appending the exact same (x, y) coordinate. So the simulation does update the line, but the line simplifies to a point. Your test case does not do this. You can run a dummy example with your code by simply adding a line like this: self.internal_step += 5 before adding the new x point, and you will produce a horizontal line. Let me know if this solves your problem.
8
6
62,403,763
2020-6-16
https://stackoverflow.com/questions/62403763/how-to-add-planes-in-a-3d-scatter-plot
Using Blender created this model that can be seen in A-frame in this link This model is great and it gives an overview of what I'm trying to accomplish here. Basically, instead of having the names, I'd have dots that symbolize one specific platform. The best way to achieve it with current state of the art, at my sight, is through Plotly 3D Scatter Plots. I've got the following scatterplot import plotly.express as px import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/tiago-peres/immersion/master/Platforms_dataset.csv') fig = px.scatter_3d(df, x='Functionality ', y='Accessibility', z='Immersion', color='Platforms') fig.show() that by going to this link you'll be able to click a button and open it in Colab This nearly looks like the model. Yet, still in need to add three planes to the plot in specific locations. More precisely, in x=?, y=? and z=? (I'm using question mark because the value can be anything stablished). In other words, want to add three planes to that scatterplot x = 10 y = 30 z = 40 In the documentation, what closely resembles what I want was 3D Surface Plots. I've done research and found two similar questions with R Insert 2D plane into a 3D Plotly scatter plot in R Add Regression Plane to 3d Scatter Plot in Plotly
I think you might be looking for the add_trace function in plotly so you can just create the surfaces and then add them to the figure: Also, note, there's definitely ways to simplify this code, but for a general idea: import plotly.express as px import pandas as pd import plotly.graph_objects as go import numpy as np fig = px.scatter_3d(df, x='Functionality ', y='Accessibility', z='Immersion', color='Platforms') bright_blue = [[0, '#7DF9FF'], [1, '#7DF9FF']] bright_pink = [[0, '#FF007F'], [1, '#FF007F']] light_yellow = [[0, '#FFDB58'], [1, '#FFDB58']] # need to add starting point of 0 to each dimension so the plane extends all the way out zero_pt = pd.Series([0]) z = zero_pt.append(df['Immersion'], ignore_index = True).reset_index(drop = True) y = zero_pt.append(df['Accessibility'], ignore_index = True).reset_index(drop = True) x = zero_pt.append(df['Functionality '], ignore_index = True).reset_index(drop = True) length_data = len(z) z_plane_pos = 40*np.ones((length_data,length_data)) fig.add_trace(go.Surface(x=x, y=y, z=z_plane_pos, colorscale=light_yellow, showscale=False)) fig.add_trace(go.Surface(x=x.apply(lambda x: 10), y=y, z = np.array([z]*length_data), colorscale= bright_blue, showscale=False)) fig.add_trace(go.Surface(x=x, y= y.apply(lambda x: 30), z = np.array([z]*length_data).transpose(), colorscale=bright_pink, showscale=False))
10
14
62,401,591
2020-6-16
https://stackoverflow.com/questions/62401591/python-3-3-internal-string-representation
I was looking into how Python represents string after PEP 393 and I am not understanding the difference between PyASCIIObject and PyCompactUnicodeObject. My understanding is that strings are represented with the following structures: typedef struct { PyObject_HEAD Py_ssize_t length; /* Number of code points in the string */ Py_hash_t hash; /* Hash value; -1 if not set */ struct { unsigned int interned:2; unsigned int kind:3; unsigned int compact:1; unsigned int ascii:1; unsigned int ready:1; unsigned int :24; } state; wchar_t *wstr; /* wchar_t representation (null-terminated) */ } PyASCIIObject; typedef struct { PyASCIIObject _base; Py_ssize_t utf8_length; char *utf8; Py_ssize_t wstr_length; } PyCompactUnicodeObject; typedef struct { PyCompactUnicodeObject _base; union { void *any; Py_UCS1 *latin1; Py_UCS2 *ucs2; Py_UCS4 *ucs4; } data; } PyUnicodeObject; Correct me if I am wrong, but my understanding is that PyASCIIObject is used for strings with ASCII characters only, PyCompactUnicodeObject uses the PyASCIIObject structure and it is used for strings with at least one non-ASCII character, and PyUnicodeObject is used for legacy functions. Is that correct? Also, why PyASCIIObject uses wchar_t? Isn't a char enough to represent ASCII strings? In addition, if PyASCIIObject already has a wchar_t pointer, why does PyCompactUnicodeObject also have a char pointer? My understanding is that both pointers point to the same location, but why would you include both?
PEP 393 is really the best reference for your questions, though the C-API docs are sometimes needed too. Let's address your questions one by one: You have the types right. But there is one non-obvious wrinkle: when you're using either of the "compact" types (either PyASCIIObject or PyCompactUnicodeObject), the structure itself is just a header. The string's actual data is stored immediately after the structure in memory. The encoding used by the data is described by the kind field, and will depend on the largest character value in the string. The wstr and utf8 pointers in the first two structures are places where a transformed representation can be stored if one is requested by C code. For an ASCII string (using the PyASCIIObject), no cache pointer is needed for UTF-8 data, since the ASCII data itself is UTF-8 compatible. The wide character cache is only used by deprecated functions. The two cache pointers will never point to the same place, since their types are not directly compatible. For compact strings, they are only allocated when a function that needs a UTF-8 buffer (e.g. PyUnicode_AsUTF8AndSize) or a Py_UNICODE buffer (e.g. the deprecated PyUnicode_AS_UNICODE) gets called. For strings created with the deprecated Py_UNICODE based APIs, the wstr pointer has an extra use. It points to the only version of the string data until the PyUnicode_READY macro is called on the string. The first time the string is readied, a new data buffer will be created, and the characters will be stored in it, using the most compact encoding possible among Latin-1, UTF-16 and UTF-32. The wstr buffer will be kept, as it might be needed later by other deprecated API functions that want to look up a Py_UNICODE string. It is interesting that you're asking about CPython's internal string representations right now, as there's a discussion currently ongoing about whether deprecated string API functions and implementation details like the wchar * pointer can be removed in an upcoming version of Python. It looks like it might happen for Python 3.11.0 (which is expected to be released in 2022), though plans could still change before then, especially if the impact on code being used in the wild is more severe than expected.
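You can see the effect of these compact layouts from pure Python with sys.getsizeof; a rough illustration (the exact byte counts vary with the CPython version and platform, so treat the numbers as indicative only):

import sys

ascii_s = "a" * 10            # ASCII: 1 byte per code point, smallest header
latin_s = "é" * 10            # Latin-1 range: still 1 byte per code point, larger header
ucs2_s  = "\u20ac" * 10       # needs 2 bytes per code point
ucs4_s  = "\U0001F600" * 10   # needs 4 bytes per code point

for s in (ascii_s, latin_s, ucs2_s, ucs4_s):
    print(len(s), sys.getsizeof(s))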
8
6
62,416,819
2020-6-16
https://stackoverflow.com/questions/62416819/runtimeerror-given-groups-1-weight-of-size-32-3-16-16-16-expected-input
RuntimeError: Given groups=1, weight of size [32, 3, 16, 16, 16], expected input[100, 16, 16, 16, 3] to have 3 channels, but got 16 channels instead This is the portion of code I think where the problem is. def __init__(self): super(Lightning_CNNModel, self).__init__() self.conv_layer1 = self._conv_layer_set(3, 32) self.conv_layer2 = self._conv_layer_set(32, 64) self.fc1 = nn.Linear(2**3*64, 128) self.fc2 = nn.Linear(128, 10) # num_classes = 10 self.relu = nn.LeakyReLU() self.batch=nn.BatchNorm1d(128) self.drop=nn.Dropout(p=0.15) def _conv_layer_set(self, in_c, out_c): conv_layer = nn.Sequential( nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0), nn.LeakyReLU(), nn.MaxPool3d((2, 2, 2)), ) return conv_layer def forward(self, x): out = self.conv_layer1(x) out = self.conv_layer2(out) out = out.view(out.size(0), -1) out = self.fc1(out) out = self.relu(out) out = self.batch(out) out = self.drop(out) out = self.fc2(out) return out This is the code I am working on
nn.Conv3d expects the input to have size [batch_size, channels, depth, height, width]. The first convolution expects 3 channels, but with your input having size [100, 16, 16, 16, 3], that would be 16 channels. Assuming that your data is given as [batch_size, depth, height, width, channels], you need to swap the dimensions around, which can be done with torch.Tensor.permute: # From: [batch_size, depth, height, width, channels] # To: [batch_size, channels, depth, height, width] input = input.permute(0, 4, 1, 2, 3)
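A quick standalone sanity check of that reshaping (not tied to the model in the question):

import torch
import torch.nn as nn

x = torch.randn(100, 16, 16, 16, 3)        # [batch, depth, height, width, channels]
x = x.permute(0, 4, 1, 2, 3).contiguous()  # -> [batch, channels, depth, height, width]
conv = nn.Conv3d(3, 32, kernel_size=(3, 3, 3))
print(conv(x).shape)                        # torch.Size([100, 32, 14, 14, 14])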
14
17
62,400,420
2020-6-16
https://stackoverflow.com/questions/62400420/given-two-lists-of-2d-points-how-to-find-the-closest-point-in-the-2nd-list-for
I have two large numpy arrays of randomly sorted 2d points, let's say they're A and B. What I need to do is find the number of "matches" between the two arrays, where a match is a point in A (call it A') being within some given radius R with a point in B (call it B'). This means that every point in A must match with either 1 or no points in B. It would also be nice to return the list indices of the matches between the two arrays, however this isn't necessary. Since there can be many points in this radius R, it seems better to find the point which is closest to A' in B, and then checking if it's within the radius R. This is tested simply with the distance formula dx^2 + dy^2. Obviously there's the brute force O(n^2) solution of looping through both arrays, but I need something faster, hopefully O(n log n). What I've seen is that a Voronoi diagram can be used for a problem like this, however I'm not sure how this would be implemented. I'm unfamiliar with Voronoi diagrams, so I'm generating it with scipy.spatial.Voronoi. Is there a fast algorithm for this problem by using these diagrams or is there another?
I think there are several options. I ginned up a small comparison test to explore a few. The first couple of these only go as far as finding how many points are mutually within radius of each other to make sure I was getting consistent results on the main part of the problem. It does not answer the mail on the part of your problem about finding the closest, which I think would be just a bit more work on a few of them--did it for the last option, see bottom of post. The driver of the problem is doing all of the comparisons, and I think you can make some hay by some sorting (last notion here) to limit comparisons. Naive Python Use brute force point-to-point comparison. Clearly O(n^2). Scipy's cdist module Works great & fastest for "small" data. With large data, this starts to blow up because of size of matrix output in memory. Probably infeasible for 1M x 1M application. Scipy's KDTree module From other solution. Fast, but not as fast as cdist or "sectioning" (below). Perhaps there is a different way to employ KDTree for this task... I'm not very experienced with it. This approach (below) seemed logical. Sectioning the compare-to array This works very well because you are not interested in all of the distances, you just want ones that are within a radius. So, by sorting the target array and only looking within a rectangular window around it for "contenders" you can get very fast performance w/ native python and no "memory explosion." Probably still a bit "left on the table" here for enhancement maybe by embedding cdist within this implementation or (gulp) trying to multithread it. Other ideas... This is a tight "mathy" loop so trying something in cython or splitting up one of the arrays and multi-threading it would be novel. And pickling the result so you don't have to run this often seems prudent. I think any of these you could augment the tuples with the index within the array pretty easily to get a list of the matches. My older iMac does 100K x 100K in 90 seconds via sectioning, so that does not bode well for 1M x 1M Comparison: # distance checker from random import uniform import time import numpy as np from scipy.spatial import distance, KDTree from bisect import bisect from operator import itemgetter import sys from matplotlib import pyplot as plt sizes = [100, 500, 1000, 2000, 5000, 10000, 20000] #sizes = [20_000, 30_000, 40_000, 50_000, 60_000] # for the playoffs. 
:) naive_times = [] cdist_times = [] kdtree_times = [] sectioned_times = [] delta = 0.1 for size in sizes: print(f'\n *** running test with vectors of size {size} ***') r = 20 # radius to match r_squared = r**2 A = [(uniform(-1000,1000), uniform(-1000,1000)) for t in range(size)] B = [(uniform(-1000,1000), uniform(-1000,1000)) for t in range(size)] # naive python print('naive python') tic = time.time() matches = [(p1, p2) for p1 in A for p2 in B if (p1[0] - p2[0])**2 + (p1[1] - p2[1])**2 <= r_squared] toc = time.time() print(f'found: {len(matches)}') naive_times.append(toc-tic) print(toc-tic) print() # using cdist module print('cdist') tic = time.time() dist_matrix = distance.cdist(A, B, 'euclidean') result = np.count_nonzero(dist_matrix<=r) toc = time.time() print(f'found: {result}') cdist_times.append(toc-tic) print(toc-tic) print() # KDTree print('KDTree') tic = time.time() my_tree = KDTree(A) results = my_tree.query_ball_point(B, r=r) # for count, r in enumerate(results): # for t in r: # print(count, A[t]) result = sum(len(lis) for lis in results) toc = time.time() print(f'found: {result}') kdtree_times.append(toc-tic) print(toc-tic) print() # python with sort and sectioning print('with sort and sectioning') result = 0 tic = time.time() B.sort() for point in A: # gather the neighborhood in x-dimension within x-r <= x <= x+r+1 # if this has any merit, we could "do it again" for y-coord.... contenders = B[bisect(B,(point[0]-r-delta, 0)) : bisect(B,(point[0]+r+delta, 0))] # further chop down to the y-neighborhood # flip the coordinate to support bisection by y-value contenders = list(map(lambda p: (p[1], p[0]), contenders)) contenders.sort() contenders = contenders[bisect(contenders,(point[1]-r-delta, 0)) : bisect(contenders,(point[1]+r+delta, 0))] # note (x, y) in contenders is still inverted, so need to index properly matches = [(point, p2) for p2 in contenders if (point[0] - p2[1])**2 + (point[1] - p2[0])**2 <= r_squared] result += len(matches) toc = time.time() print(f'found: {result}') sectioned_times.append(toc-tic) print(toc-tic) print('complete.') plt.plot(sizes, naive_times, label = 'naive') plt.plot(sizes, cdist_times, label = 'cdist') plt.plot(sizes, kdtree_times, label = 'kdtree') plt.plot(sizes, sectioned_times, label = 'sectioning') plt.legend() plt.show() Results for one of the sizes and plots: *** running test with vectors of size 20000 *** naive python found: 124425 101.40657806396484 cdist found: 124425 2.9293079376220703 KDTree found: 124425 18.166933059692383 with sort and sectioning found: 124425 2.3414530754089355 complete. Note: In first plot, cdist overlays the sectioning. Playoffs are shown in second plot. The "playoffs" Modified sectioning code This code finds the minimum within the points within radius. Runtime is equivalent to the sectioning code above. print('with sort and sectioning, and min finding') result = 0 pairings = {} tic = time.time() B.sort() def dist_squared(a, b): # note (x, y) in point b will be inverted (below), so need to index properly return (a[0] - b[1])**2 + (a[1] - b[0])**2 for idx, point in enumerate(A): # gather the neighborhood in x-dimension within x-r <= x <= x+r+1 # if this has any merit, we could "do it again" for y-coord.... 
contenders = B[bisect(B,(point[0]-r-delta, 0)) : bisect(B,(point[0]+r+delta, 0))] # further chop down to the y-neighborhood # flip the coordinate to support bisection by y-value contenders = list(map(lambda p: (p[1], p[0]), contenders)) contenders.sort() contenders = contenders[bisect(contenders,(point[1]-r-delta, 0)) : bisect(contenders,(point[1]+r+delta, 0))] matches = [(dist_squared(point, p2), point, p2) for p2 in contenders if dist_squared(point, p2) <= r_squared] if matches: pairings[idx] = min(matches)[1] # pair the closest point in B with the point in A toc = time.time() print(toc-tic)
8
12
62,388,701
2020-6-15
https://stackoverflow.com/questions/62388701/are-executables-produced-with-cython-really-free-of-the-source-code
I have read Making an executable in Cython and BuvinJ's answer to How to obfuscate Python code effectively? and would like to test if the source code compiled with Cython is really "no-more-there" after the compilation. It is indeed a popular opinion that using Cython is a way to protect a Python source code, see for example the article Protecting Python Sources With Cython. Let's take this simple example test.pyx: import json, time # this will allow to see what happens when we import a library print(json.dumps({'key': 'hello world'})) time.sleep(3) print(1/0) # division error! Then let's use Cython: cython test.pyx --embed This produces a test.c. Let's compile it: call "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x64 cl test.c /I C:\Python37\include /link C:\Python37\libs\python37.lib It works! It produces a 140KB test.exe executable, nice! But in this answer How to obfuscate Python code effectively? it is said implicitly that this "compilation" will hide the source code. It does not seem true, if you run test.exe, you will see: Traceback (most recent call last): File "test.pyx", line 4, in init test print(1/0) # division error! <-- the source code and even the comments are still there! ZeroDivisionError: integer division or modulo by zero which shows that the source code in human-readable form is still there. Question: Is there a way to compile code with Cython, such that the claim "the source code is no longer revealed" is true? Note: I'm looking for a solution where neither the source code nor the bytecode (.pyc) is present (if the bytecode/.pyc is embedded, it's trivial to recover the source code with uncompyle6) PS: I remembered I did the same observation a few years ago but I could not find it anymore, after deeper research here it is: Is it possible to decompile a .dll/.pyd file to extract Python Source Code?
The code is found in the original pyx-file next to your exe. Delete/don't distribute this pyx-file with your exe. When you look at the generated C-code, you will see why the error message is shown by your executable: For a raised error, Cython will emit code similar to the following: __PYX_ERR(0, 11, __pyx_L3_error) where __PYX_ERR is a macro defined as: #define __PYX_ERR(f_index, lineno, Ln_error) \ { \ __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ } and the variable __pyx_f is defined as static const char *__pyx_f[] = { "test.pyx", "stringsource", }; Basically __pyx_f[0] tells where the original code can be found. Now, when an exception is raised, the (embedded) Python interpreter looks for your original pyx-file and finds the corresponding code (this can be looked up in __Pyx_AddTraceback, which is called when an error is raised). Once this pyx-file is not around, the original source code will no longer be known to the Python interpreter/anybody else. However, the error trace will still show the names of the functions and the line numbers, but no longer any code snippets. The resulting executable (or extension, if one creates one) doesn't contain any bytecode (as in pyc-files) and cannot be decompiled with tools like uncompyle: bytecode is produced when a py-file is translated into Python opcodes, which are then evaluated in a huge loop in ceval.c. Yet for builtin/cython modules no bytecode is needed, because the resulting code uses Python's C-API directly, cutting out the need to have/evaluate the opcodes - these modules skip interpretation, which is a reason for them being faster. Thus no bytecode will be in the executable. One important note though: one should check that the linker doesn't include debug information (and thus the C-code, where the pyx-file content can be found as comments). MSVC with the /Z7 option is such an example. However, the resulting executable can be disassembled to assembler and then the generated C-code can be reverse engineered - so while cythonizing is OK for making the code hard to understand, it is not the right tool to conceal keys or security algorithms.
26
30
62,412,976
2020-6-16
https://stackoverflow.com/questions/62412976/writing-tensor-to-a-file-in-a-visually-readable-manner
In pytorch, I want to write a tensor to a file and visually read the file contents. For example, consider T = torch.tensor([3,4,5,6]). I want to write the tensor T to a file, say file_T.txt, and want to visually read the contents of the file_T.txt, which will be 3,4,5 and 6. How can I achieve this?
You can use numpy: import numpy as np np.savetxt('my_file.txt', torch.Tensor([3,4,5,6]).numpy())
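If you also want to read the values back later, numpy can do the round trip as well; this sketch assumes a 1-D or 2-D tensor, since np.savetxt only handles those:

import numpy as np
import torch

T = torch.tensor([3, 4, 5, 6])
np.savetxt('file_T.txt', T.numpy(), fmt='%d')   # one value per line, human readable

T_back = torch.from_numpy(np.loadtxt('file_T.txt', dtype=np.int64))
print(T_back)   # tensor([3, 4, 5, 6])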
9
13
62,412,754
2020-6-16
https://stackoverflow.com/questions/62412754/python-asyncio-errors-oserror-winerror-6-the-handle-is-invalid-and-runtim
I am having some difficulties with properly with my code, as I get the following error after my code finishes executing while debugging on VSCode: Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000188AB3259D0> Traceback (most recent call last): File "c:\users\gam3p\appdata\local\programs\python\python38\lib\asyncio\proactor_events.py", line 116, in __del__ self.close() File "c:\users\gam3p\appdata\local\programs\python\python38\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "c:\users\gam3p\appdata\local\programs\python\python38\lib\asyncio\base_events.py", line 719, in call_soon self._check_closed() File "c:\users\gam3p\appdata\local\programs\python\python38\lib\asyncio\base_events.py", line 508, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed I also get the following error if I run the code using the command line: Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped=<pending, 0x25822633550> cb=[_ProactorReadPipeTransport._loop_reading()]> Traceback (most recent call last): File "c:\users\gam3p\appdata\local\programs\python\python38\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid The code below is an asynchronous API wrapper for Reddit. Since Reddit asks bots to be limited to a rate of 60 requests per minute, I have decided to implement a throttled loop that processes the requests in a queue, in a separate thread. To run it, you would need to create a Reddit app and use your login credentials as well as the bot ID and secret. If it helps, I am using Python 3.8.3 64-bit on Windows 10. import requests import aiohttp import asyncio import threading from types import SimpleNamespace from time import time oauth_url = 'https://oauth.reddit.com/' base_url = 'https://www.reddit.com/' agent = 'windows:reddit-async:v0.1 (by /u/UrHyper)' class Reddit: def __init__(self, username: str, password: str, app_id: str, app_secret: str): data = {'grant_type': 'password', 'username': username, 'password': password} auth = requests.auth.HTTPBasicAuth(app_id, app_secret) response = requests.post(base_url + 'api/v1/access_token', data=data, headers={'user-agent': agent}, auth=auth) self.auth = response.json() if 'error' in self.auth: msg = f'Failed to authenticate: {self.auth["error"]}' if 'message' in self.auth: msg += ' - ' + self.auth['message'] raise ValueError(msg) token = 'bearer ' + self.auth['access_token'] self.headers = {'Authorization': token, 'User-Agent': agent} self._rate = 1 self._last_loop_time = 0.0 self._loop = asyncio.new_event_loop() self._queue = asyncio.Queue(0) self._end_loop = False self._loop_thread = threading.Thread(target=self._start_loop_thread) self._loop_thread.start() def stop(self): self._end_loop = True def __del__(self): self.stop() def _start_loop_thread(self): asyncio.set_event_loop(self._loop) self._loop.run_until_complete(self._process_queue()) async def _process_queue(self): while True: if self._end_loop and self._queue.empty(): await self._queue.join() break start_time = time() if self._last_loop_time < self._rate: await asyncio.sleep(self._rate - self._last_loop_time) try: queue_item = self._queue.get_nowait() url = queue_item['url'] callback = queue_item['callback'] data = await self._get_data(url) self._queue.task_done() callback(data) except asyncio.QueueEmpty: pass finally: self._last_loop_time = time() - start_time async 
def _get_data(self, url): async with aiohttp.ClientSession() as session: async with session.get(url, headers=self.headers) as response: assert response.status == 200 data = await response.json() data = SimpleNamespace(**data) return data async def get_bot(self, callback: callable): url = oauth_url + 'api/v1/me' await self._queue.put({'url': url, 'callback': callback}) async def get_user(self, user: str, callback: callable): url = oauth_url + 'user/' + user + '/about' await self._queue.put({'url': url, 'callback': callback}) def callback(data): print(data['name']) async def main(): reddit = Reddit('', '', '', '') await reddit.get_bot(lambda bot: print(bot.name)) reddit.stop() if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main())
I ran into a similar problem with asyncio. Since Python 3.8 they changed the default event loop on Windows to ProactorEventLoop instead of SelectorEventLoop, and there are some issues with it. So adding asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) above loop = asyncio.get_event_loop() will get the old event loop back without issues.
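In the script from the question, that would look roughly like this; only the policy line is new, and the sys.platform guard is an extra assumption so the same file still runs on other operating systems:

import sys
import asyncio

if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())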
8
10
62,408,749
2020-6-16
https://stackoverflow.com/questions/62408749/how-to-reset-keras-metrics
To do some parameter tuning, I like to loop over some training function with Keras. However, I realized that when using tensorflow.keras.metrics.AUC() as a metric, for every training loop, an integer gets added to the auc metric name (e.g. auc_1, auc_2, ...). So actually the keras metrics are somehow stored even when coming out of the training function. This first of all leads to the callbacks not recognizing the metric anymore and also makes me wonder if there are not other things stored like the model weights. How can I reset the metrics and are there other things that get stored by keras that I need to reset to get a clean restart for training? Below you can find a minimal working example: edit: this example seems to only work with tensorflow 2.2 import numpy as np import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.metrics import AUC def dummy_network(input_shape): model = keras.Sequential() model.add(keras.layers.Dense(10, input_shape=input_shape, activation=tf.nn.relu, kernel_initializer='he_normal', kernel_regularizer=keras.regularizers.l2(l=1e-3))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(11, activation='sigmoid')) model.compile(optimizer='adagrad', loss='binary_crossentropy', metrics=[AUC()]) return model def train(): CB_lr = tf.keras.callbacks.ReduceLROnPlateau( monitor="val_auc", patience=3, verbose=1, mode="max", min_delta=0.0001, min_lr=1e-6) CB_es = tf.keras.callbacks.EarlyStopping( monitor="val_auc", min_delta=0.00001, verbose=1, patience=10, mode="max", restore_best_weights=True) callbacks = [CB_lr, CB_es] y = [np.ones((11, 1)) for _ in range(1000)] x = [np.ones((37, 12, 1)) for _ in range(1000)] dummy_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size=100).repeat() val_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size=100).repeat() model = dummy_network(input_shape=((37, 12, 1))) model.fit(dummy_dataset, validation_data=val_dataset, epochs=2, steps_per_epoch=len(x) // 100, validation_steps=len(x) // 100, callbacks=callbacks) for i in range(3): print(f'\n\n **** Loop {i} **** \n\n') train() The output is: **** Loop 0 **** 2020-06-16 14:37:46.621264: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f991e541f10 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-06-16 14:37:46.621296: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version Epoch 1/2 10/10 [==============================] - 0s 44ms/step - loss: 0.1295 - auc: 0.0000e+00 - val_loss: 0.0310 - val_auc: 0.0000e+00 - lr: 0.0010 Epoch 2/2 10/10 [==============================] - 0s 10ms/step - loss: 0.0262 - auc: 0.0000e+00 - val_loss: 0.0223 - val_auc: 0.0000e+00 - lr: 0.0010 **** Loop 1 **** Epoch 1/2 10/10 [==============================] - ETA: 0s - loss: 0.4751 - auc_1: 0.0000e+00WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_1,val_loss,val_auc_1,lr WARNING:tensorflow:Early stopping conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_1,val_loss,val_auc_1,lr 10/10 [==============================] - 0s 36ms/step - loss: 0.4751 - auc_1: 0.0000e+00 - val_loss: 0.3137 - val_auc_1: 0.0000e+00 - lr: 0.0010 Epoch 2/2 10/10 [==============================] - ETA: 0s - loss: 0.2617 - auc_1: 0.0000e+00WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_auc` which is not available. 
Available metrics are: loss,auc_1,val_loss,val_auc_1,lr WARNING:tensorflow:Early stopping conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_1,val_loss,val_auc_1,lr 10/10 [==============================] - 0s 10ms/step - loss: 0.2617 - auc_1: 0.0000e+00 - val_loss: 0.2137 - val_auc_1: 0.0000e+00 - lr: 0.0010 **** Loop 2 **** Epoch 1/2 10/10 [==============================] - ETA: 0s - loss: 0.1948 - auc_2: 0.0000e+00WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_2,val_loss,val_auc_2,lr WARNING:tensorflow:Early stopping conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_2,val_loss,val_auc_2,lr 10/10 [==============================] - 0s 34ms/step - loss: 0.1948 - auc_2: 0.0000e+00 - val_loss: 0.0517 - val_auc_2: 0.0000e+00 - lr: 0.0010 Epoch 2/2 10/10 [==============================] - ETA: 0s - loss: 0.0445 - auc_2: 0.0000e+00WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_2,val_loss,val_auc_2,lr WARNING:tensorflow:Early stopping conditioned on metric `val_auc` which is not available. Available metrics are: loss,auc_2,val_loss,val_auc_2,lr 10/10 [==============================] - 0s 10ms/step - loss: 0.0445 - auc_2: 0.0000e+00 - val_loss: 0.0389 - val_auc_2: 0.0000e+00 - lr: 0.0010
Your reproducible example failed in several places for me, so I changed just a few things (I'm using TF 2.1). After getting it to run, I was able to get rid of the additional metric names by specifying metrics=[AUC(name='auc')]. Here's the full (fixed) reproducible example: import numpy as np import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.metrics import AUC def dummy_network(input_shape): model = keras.Sequential() model.add(keras.layers.Dense(10, input_shape=input_shape, activation=tf.nn.relu, kernel_initializer='he_normal', kernel_regularizer=keras.regularizers.l2(l=1e-3))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(11, activation='softmax')) model.compile(optimizer='adagrad', loss='binary_crossentropy', metrics=[AUC(name='auc')]) return model def train(): CB_lr = tf.keras.callbacks.ReduceLROnPlateau( monitor="val_auc", patience=3, verbose=1, mode="max", min_delta=0.0001, min_lr=1e-6) CB_es = tf.keras.callbacks.EarlyStopping( monitor="val_auc", min_delta=0.00001, verbose=1, patience=10, mode="max", restore_best_weights=True) callbacks = [CB_lr, CB_es] y = tf.keras.utils.to_categorical([np.random.randint(0, 11) for _ in range(1000)]) x = [np.ones((37, 12, 1)) for _ in range(1000)] dummy_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size=100).repeat() val_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size=100).repeat() model = dummy_network(input_shape=((37, 12, 1))) model.fit(dummy_dataset, validation_data=val_dataset, epochs=2, steps_per_epoch=len(x) // 100, validation_steps=len(x) // 100, callbacks=callbacks) for i in range(3): print(f'\n\n **** Loop {i} **** \n\n') train() Train for 10 steps, validate for 10 steps Epoch 1/2 1/10 [==>...........................] - ETA: 6s - loss: 0.3426 - auc: 0.4530 7/10 [====================>.........] - ETA: 0s - loss: 0.3318 - auc: 0.4895 10/10 [==============================] - 1s 117ms/step - loss: 0.3301 - auc: 0.4893 - val_loss: 0.3222 - val_auc: 0.5085 This happens because every loop, you created a new metric without a specified name by doing this: metrics=[AUC()]. The first iteration of the loop, TF automatically created a variable in the name space called auc, but at the second iteration of your loop, the name 'auc' was already taken, so TF named it auc_1 since you didn't specify a name. But, your callback was set to be based on auc, which is a metric that this model didn't have (it was the metric of the model from the previous loop). So, you either do name='auc' and overwrite the previous metric name, or define it outside of the loop, like this: import numpy as np import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.metrics import AUC auc = AUC() def dummy_network(input_shape): model = keras.Sequential() model.add(keras.layers.Dense(10, input_shape=input_shape, activation=tf.nn.relu, kernel_initializer='he_normal', kernel_regularizer=keras.regularizers.l2(l=1e-3))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(11, activation='softmax')) model.compile(optimizer='adagrad', loss='binary_crossentropy', metrics=[auc]) return model And don't worry about keras resetting the metrics. It takes care of all that in the fit() method. 
If you want more flexibility and/or do it yourself, I suggest using custom training loops, and reset it yourself: auc = tf.keras.metrics.AUC() auc.update_state(np.random.randint(0, 2, 10), np.random.randint(0, 2, 10)) print(auc.result()) auc.reset_states() print(auc.result()) Out[6]: <tf.Tensor: shape=(), dtype=float32, numpy=0.875> # state updated Out[8]: <tf.Tensor: shape=(), dtype=float32, numpy=0.0> # state reset
8
6
62,390,314
2020-6-15
https://stackoverflow.com/questions/62390314/how-to-call-asynchronous-function-in-django
The following doesn't execute foo and gives RuntimeWarning: coroutine 'foo' was never awaited # urls.py async def foo(data): # process data ... @api_view(['POST']) def endpoint(request): data = request.data.get('data') # How to call foo here? foo(data) return Response({})
Found a way to do it. Create another file bar.py in the same directory as urls.py. # bar.py def foo(data): # process data # urls.py from multiprocessing import Process from .bar import foo @api_view(['POST']) def endpoint(request): data = request.data.get('data') p = Process(target=foo, args=(data,)) p.start() return Response({})
12
8
62,399,546
2020-6-16
https://stackoverflow.com/questions/62399546/python-datetime-now-as-a-default-function-parameter-return-same-value-in-diffe
Now I have a problem that I can't explain or fix. This is my first Python module, TimeHelper.py from datetime import datetime def fun1(currentTime = datetime.now()): print(currentTime) and another is Main.py from TimeHelper import fun1 import time fun1() time.sleep(5) fun1() When I run Main.py, the output is 2020-06-16 09:17:52.316714 2020-06-16 09:17:52.316714 My problem is: why is the time the same in both results? Is there any restriction when passing datetime.now() in as a default parameter?
I think I found the answer, thanks to @user2864740. The default value datetime.now() is evaluated only once, when the function is defined, not on every call, so each call that relies on the default reuses that same timestamp. So I changed my TimeHelper.py to this from datetime import datetime def fun1(currentTime = None): if currentTime is None: currentTime = datetime.now() print(currentTime) and everything works as I expected.
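A minimal way to see the evaluate-once behaviour in isolation, independent of the two files above:

from datetime import datetime
import time

def stamped(ts=datetime.now()):   # default computed once, when the def statement runs
    return ts

print(stamped())
time.sleep(2)
print(stamped())                  # prints the same value as the first call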
13
14
62,378,782
2020-6-14
https://stackoverflow.com/questions/62378782/py-datatable-in-operator
I am unable to perform a standard in operation with a pre-defined list of items. I am looking to do something like this: # Construct a simple example frame from datatable import * df = Frame(V1=['A','B','C','D'], V2=[1,2,3,4]) # Filter frame to a list of items (THIS DOES NOT WORK) items = ['A','B'] df[f.V1 in items,:] This example results in the error: TypeError: A boolean value cannot be used as a row selector Unfortunately, there doesn't appear to be a built-in object for in operations. I would like to use something like the %in% operator that is native to the R language. Is there any method for accomplishing this in python? I can take this approach with the use of multiple 'equals' operators, but this is inconvenient when you want to consider a large number of items: df[(f.V1 == 'A') | (f.V1 == 'B'),:] datatable 0.10.1 python 3.6
You could also try this out: First import all the necessary packages as, import datatable as dt from datatable import by,f,count import functools import operator Create a sample datatable: DT = dt.Frame(V1=['A','B','C','D','E','B','A'], V2=[1,2,3,4,5,6,7]) Make a list of values to be filtered among the observations, in your case it is sel_obs = ['A','B'] Now create a filter expression using the functools and operator modules, filter_rows = functools.reduce(operator.or_,(f.V1==obs for obs in sel_obs)) Finally apply the above created filter on the datatable DT[filter_rows,:] Its output is: Out[6]: | V1 V2 -- + -- -- 0 | A 1 1 | B 2 2 | B 6 3 | A 7 [4 rows x 2 columns] You can just play around with operators to do different types of filtering. @sammyweemy's solution should also work.
8
6
62,395,983
2020-6-15
https://stackoverflow.com/questions/62395983/how-to-create-a-text-shape-with-python-pptx
I want to add a text box to a presentation with python pptx. I would like to add a text box with several paragraphs in the specific place and then format it (fonts, color, etc.). But since text shape object always comes with the one paragraph in the beginning, I cannot edit first of my paragraphs. The code sample looks like this: txBox = slide.shapes.add_textbox(left, top, width, height) tf = txBox.text_frame p = tf.add_paragraph() p.text = "This is a first paragraph" p.font.size = Pt(11) p = tf.add_paragraph() p.text = "This is a second paragraph" p.font.size = Pt(11) Which creates output like this: I can add text to this first line with tf.text = "This is text inside a textbox", but it won't be editable in terms of fonts or colors. So is there any way how I can omit or edit that line, so all paragraphs in the box would be the same?
Access the first paragraph differently, using: p = tf.paragraphs[0] Then you can add runs, set fonts and all the rest of it just like with a paragraph you get back from tf.add_paragraph().
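Applied to the snippet in the question, that could look like this sketch (txBox is the text box created there; Pt comes from pptx.util):

from pptx.util import Pt

tf = txBox.text_frame

p = tf.paragraphs[0]                      # the paragraph the text frame already contains
p.text = "This is a first paragraph"
p.font.size = Pt(11)

p = tf.add_paragraph()
p.text = "This is a second paragraph"
p.font.size = Pt(11)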
10
12
62,384,215
2020-6-15
https://stackoverflow.com/questions/62384215/best-way-to-construct-a-graphql-query-string-in-python
I'm trying to do this (see title), but it's a bit complicated since the string I'm trying to build has to have the following properties: mulitiline contains curly braces I want to inject variables into it Using a normal '''''' multiline string makes injecting variables difficult. Using multiple f-strings makes injecting variables easy, but every curly brace, of which there are a lot, has to be doubled. And an f has to be prepended to each line. On the other hand, if I try using format, it also gets confused by all the curly braces. Is there a better way that I haven't considered yet?
You can use the """ multiline string method. For injecting variables, make sure to use the $ sign while defining the string and use the variables object in the JSON parameter of the requests.post method. Here is an example. ContactInput is one of the types I defined in my GraphQL schema. query = """ mutation ($input:[ContactInput!]!) { AddContacts(contacts: $input) { user_id } } """ variables = {'input': my_arrofcontacts} r = requests.post(url, json={'query': query , 'variables': variables})
31
42
62,380,562
2020-6-15
https://stackoverflow.com/questions/62380562/sort-list-of-dicts-by-two-keys
I have this list of dicts: [{'score': '1.9', 'id': 756, 'factors': [1.25, 2.25, 2.5, 2.0, 1.75]}, {'score': '2.0', 'id': 686, 'factors': [2.0, 2.25, 2.75, 1.5, 2.25]}, {'score': '2.0', 'id': 55, 'factors': [1.5, 3.0, 2.5, 1.5, 1.5]}, {'score': '1.9', 'id': 863, 'factors': [1.5, 3.0, 1.5, 2.5, 1.5]}] I can sort by score with : sorted(l, key=lambda k: k['score'], reverse=True). However I have tied scores. How can I sort first by score and then by id asc or desc?
You can sort by a tuple: sorted(l, key=lambda k: (float(k['score']), k['id']), reverse=True) This will sort by score descending, then id descending. Note that since score is a string value, it needs to be converted to float for comparison. [ {'score': '2.0', 'id': 686, 'factors': [2.0, 2.25, 2.75, 1.5, 2.25]}, {'score': '2.0', 'id': 55, 'factors': [1.5, 3.0, 2.5, 1.5, 1.5]}, {'score': '1.9', 'id': 863, 'factors': [1.5, 3.0, 1.5, 2.5, 1.5]}, {'score': '1.9', 'id': 756, 'factors': [1.25, 2.25, 2.5, 2.0, 1.75]} ] To sort by id ascending, use -k['id'] (sorting negated numbers descending is equivalent to sorting the non-negated numbers ascending): sorted(l, key=lambda k: (float(k['score']), -k['id']), reverse=True) [ {'score': '2.0', 'id': 55, 'factors': [1.5, 3.0, 2.5, 1.5, 1.5]}, {'score': '2.0', 'id': 686, 'factors': [2.0, 2.25, 2.75, 1.5, 2.25]}, {'score': '1.9', 'id': 756, 'factors': [1.25, 2.25, 2.5, 2.0, 1.75]}, {'score': '1.9', 'id': 863, 'factors': [1.5, 3.0, 1.5, 2.5, 1.5]} ]
8
15
62,364,030
2020-6-13
https://stackoverflow.com/questions/62364030/keyboard-interrupt-from-python-does-not-abort-rust-function-pyo3
I have a Python library written in Rust with PyO3, and it involves some expensive calculations (up to 10 minutes for a single function call). How can I abort the execution when calling from Python ? Ctrl+C seems to only be handled after the end of the execution, so is essentially useless. Minimal reproducible example: # Cargo.toml [package] name = "wait" version = "0.0.0" authors = [] edition = "2018" [lib] name = "wait" crate-type = ["cdylib"] [dependencies.pyo3] version = "0.10.1" features = ["extension-module"] // src/lib.rs use pyo3::wrap_pyfunction; #[pyfunction] pub fn sleep() { std::thread::sleep(std::time::Duration::from_millis(10000)); } #[pymodule] fn wait(_py: Python, m: &PyModule) -> PyResult<()> { m.add_wrapped(wrap_pyfunction!(sleep)) } $ rustup override set nightly $ cargo build --release $ cp target/release/libwait.so wait.so $ python3 >>> import wait >>> wait.sleep() Immediately after having entered wait.sleep() I type Ctrl + C, and the characters ^C are printed to the screen, but only 10 seconds later do I finally get >>> wait.sleep() ^CTraceback (most recent call last): File "<stdin>", line 1, in <module> KeyboardInterrupt >>> The KeyboardInterrupt was detected, but was left unhandled until the end of the call to the Rust function. Is there a way to bypass that ? The behavior is the same when the Python code is put in a file and executed from outside the REPL.
One option would be to spawn a separate process to run the Rust function. In the child process, we can set up a signal handler to exit the process on interrupt. Python will then be able to raise a KeyboardInterrupt exception as desired. Here's an example of how to do it: // src/lib.rs use pyo3::prelude::*; use pyo3::wrap_pyfunction; use ctrlc; #[pyfunction] pub fn sleep() { ctrlc::set_handler(|| std::process::exit(2)).unwrap(); std::thread::sleep(std::time::Duration::from_millis(10000)); } #[pymodule] fn wait(_py: Python, m: &PyModule) -> PyResult<()> { m.add_wrapped(wrap_pyfunction!(sleep)) } # wait.py import wait import multiprocessing as mp def f(): wait.sleep() p = mp.Process(target=f) p.start() p.join() print("Done") Here's the output I get on my machine after pressing CTRL-C: $ python3 wait.py ^CTraceback (most recent call last): File "wait.py", line 9, in <module> p.join() File "/home/kerby/miniconda3/lib/python3.7/multiprocessing/process.py", line 140, in join res = self._popen.wait(timeout) File "/home/kerby/miniconda3/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait return self.poll(os.WNOHANG if timeout == 0.0 else 0) File "/home/kerby/miniconda3/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll pid, sts = os.waitpid(self.pid, flag) KeyboardInterrupt
9
3
62,376,164
2020-6-14
https://stackoverflow.com/questions/62376164/how-to-change-max-iter-in-optimize-function-used-by-sklearn-gaussian-process-reg
I am using sklearn's GPR library, but occasionally run into this annoying warning: ConvergenceWarning: lbfgs failed to converge (status=2): ABNORMAL_TERMINATION_IN_LNSRCH. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html _check_optimize_result("lbfgs", opt_res) Not only can I find almost no documentation on this warning, max_iter is not a parameter in sklearn's GPR model at all. I tried to rescale the data as suggested, but it didn't work and frankly I didn't understand it (do I also need to scale the output? Again, little documentation). Increasing the maximum iterations in the optimization process makes sense, but sklearn does not appear to have a way to do that, which is frustrating because they suggest it in response to this warning. Looking at the GPR source code, this is how sklearn calls the optimizer, def _constrained_optimization(self, obj_func, initial_theta, bounds): if self.optimizer == "fmin_l_bfgs_b": opt_res = scipy.optimize.minimize( obj_func, initial_theta, method="L-BFGS-B", jac=True, bounds=bounds) _check_optimize_result("lbfgs", opt_res) theta_opt, func_min = opt_res.x, opt_res.fun elif callable(self.optimizer): theta_opt, func_min = \ self.optimizer(obj_func, initial_theta, bounds=bounds) else: raise ValueError("Unknown optimizer %s." % self.optimizer) return theta_opt, func_min where scipy.optimize.minimize() has default values of scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxcor': 10, 'ftol': 2.220446049250313e-09, 'gtol': 1e-05, 'eps': 1e-08, 'maxfun': 15000, 'maxiter': 15000, 'iprint': -1, 'maxls': 20}) according to the scipy docs. I would like to use exactly the optimizer as shown above in the GPR source code, but change maxiter to a higher number. In other words, I don't want to change the behavior of the optimizer other than the changes made by increasing the maximum iterations. The challenge there is that other parameters like obj_func, initial_theta, bounds are set within the GPR source code and are not accessible from the GPR object. This is how I'm calling GPR, note that these are mostly default parameters with the exception of n_restarts_optimizer and the kernel. for kernel in kernels: gp = gaussian_process.GaussianProcessRegressor( kernel = kernel, alpha = 1e-10, copy_X_train = True, optimizer = "fmin_l_bfgs_b", n_restarts_optimizer= 25, normalize_y = False, random_state = None)
You want to extend and/or modify the behavior of an existing Python object, which sounds like a good use case for inheritance. A solution could be to inherit from the scikit-learn implementation, and ensure that the usual optimizer is called with the arguments you'd like. Here's a sketch, but note that this is not tested.
from sklearn.gaussian_process import GaussianProcessRegressor
import scipy.optimize

class MyGPR(GaussianProcessRegressor):
    def __init__(self, *args, max_iter=15000, **kwargs):
        super().__init__(*args, **kwargs)
        self._max_iter = max_iter

    def _constrained_optimization(self, obj_func, initial_theta, bounds):
        def new_optimizer(obj_func, initial_theta, bounds):
            # scipy.optimize.minimize takes the iteration limit via its
            # `options` dict, and a callable optimizer must return the
            # (theta_opt, func_min) tuple that the parent class expects
            opt_res = scipy.optimize.minimize(
                obj_func,
                initial_theta,
                method="L-BFGS-B",
                jac=True,
                bounds=bounds,
                options={"maxiter": self._max_iter},
            )
            return opt_res.x, opt_res.fun

        self.optimizer = new_optimizer
        return super()._constrained_optimization(obj_func, initial_theta, bounds)
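With that subclass in place, the original loop from the question would only need to swap in the new class (MyGPR and its max_iter keyword are the assumptions introduced by the sketch above; the remaining arguments mirror the question's call):
for kernel in kernels:
    gp = MyGPR(
        kernel=kernel,
        alpha=1e-10,
        copy_X_train=True,
        optimizer="fmin_l_bfgs_b",
        n_restarts_optimizer=25,
        normalize_y=False,
        random_state=None,
        max_iter=100000,  # raise the L-BFGS-B iteration limit
    )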
22
7
62,377,883
2020-6-14
https://stackoverflow.com/questions/62377883/how-can-i-get-user-input-in-a-python-discord-bot
I have a python discord bot and I need it to get user input after a command, how can I do this? I am new to python and making discord bots. Here is my code: import discord, datetime, time from discord.ext import commands from datetime import date, datetime prefix = "!!" client = commands.Bot(command_prefix=prefix, case_insensitive=True) times_used = 0 @client.event async def on_ready(): print(f"I am ready to go - {client.user.name}") await client.change_presence(activity=discord.Activity(type=discord.ActivityType.watching, name=f"{client.command_prefix}python_help. This bot is made by drakeerv.")) @client.command(name="ping") async def _ping(ctx): global times_used await ctx.send(f"Ping: {client.latency}") times_used = times_used + 1 @client.command(name="time") async def _time(ctx): global times_used from datetime import date, datetime now = datetime.now() if (now.strftime("%H") <= "12"): am_pm = "AM" else: am_pm = "PM" datetime = now.strftime("%m/%d/%Y, %I:%M") await ctx.send("Current Time:" + ' ' + datetime + ' ' + am_pm) times_used = times_used + 1 @client.command(name="times_used") async def _used(ctx): global times_used await ctx.send(f"Times used since last reboot:" + ' ' + str(times_used)) times_used = times_used + 1 @client.command(name="command") #THIS LINE async def _command(ctx): global times_used await ctx.send(f"y or n") times_used = times_used + 1 @client.command(name="python_help") async def _python_help(ctx): global times_used msg = '\r\n'.join(["!!help: returns all of the commands and what they do.", "!!time: returns the current time.", "!!ping: returns the ping to the server."]) await ctx.send(msg) times_used = times_used + 1 client.run("token") I am using python version 3.8.3. I have already looked at other posts but they did not answer my question or gave me errors. Any help would be greatly appreciated!
You'll be wanting to use Client.wait_for(): @client.command(name="command") async def _command(ctx): global times_used await ctx.send(f"y or n") # This will make sure that the response will only be registered if the following # conditions are met: def check(msg): return msg.author == ctx.author and msg.channel == ctx.channel and \ msg.content.lower() in ["y", "n"] msg = await client.wait_for("message", check=check) if msg.content.lower() == "y": await ctx.send("You said yes!") else: await ctx.send("You said no!") times_used = times_used + 1 And with a timeout: import asyncio # To get the exception @client.command(...) async def _command(ctx): # code try: msg = await client.wait_for("message", check=check, timeout=30) # 30 seconds to reply except asyncio.TimeoutError: await ctx.send("Sorry, you didn't reply in time!") References: Client.wait_for() - More examples in here Message.author Message.channel Message.content asyncio.TimeoutError
10
16
62,376,042
2020-6-14
https://stackoverflow.com/questions/62376042/calculating-and-displaying-a-convexhull
I'm trying to calculate and show a convex hull for some random points in python. This is my current code:
import numpy as np
import random
import matplotlib.pyplot as plt
import cv2
from scipy.spatial import ConvexHull

points = np.random.rand(25,2)
hull = ConvexHull(points)
plt.plot(points[:,0], points[:,1], 'o',color='c')
for simplex in hull.simplices:
    plt.plot(points[simplex, 0], points[simplex, 1], 'r')
plt.plot(points[hull.vertices,0], points[hull.vertices,1], 'r', lw=-1)
plt.plot(points[hull.vertices[0],0], points[hull.vertices[0],1], 'r-')
plt.show()
My questions: How can I change the X,Y labels and limit the points to the range 0 to 9? For example (0,1,2,3,4,5,6,7,8,9) How can I mark the points on the convex hull with a circle? As in the example image.
Replacing np.rand() with randint(0, 10) will generate the coordinates as integers from 0,1,... to 9. Using '.' as marker will result in smaller markers for the given points. Using 'o' as marker, setting a markeredgecolor and setting the main color to 'none' will result in a circular marker, which can be used for the points on the hull. The size of the marker can be adapted via markersize=. fig, axes = plt.subplots(ncols=..., nrows=...) is a handy way to create multiple subplots. Here is some code for a minimal example: from scipy.spatial import ConvexHull import matplotlib.pyplot as plt import numpy as np points = np.random.randint(0, 10, size=(15, 2)) # Random points in 2-D hull = ConvexHull(points) fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 3)) for ax in (ax1, ax2): ax.plot(points[:, 0], points[:, 1], '.', color='k') if ax == ax1: ax.set_title('Given points') else: ax.set_title('Convex hull') for simplex in hull.simplices: ax.plot(points[simplex, 0], points[simplex, 1], 'c') ax.plot(points[hull.vertices, 0], points[hull.vertices, 1], 'o', mec='r', color='none', lw=1, markersize=10) ax.set_xticks(range(10)) ax.set_yticks(range(10)) plt.show() PS: To show the plots in separate windows: from scipy.spatial import ConvexHull import matplotlib.pyplot as plt import numpy as np points = np.random.randint(0, 10, size=(15, 2)) # Random points in 2-D hull = ConvexHull(points) for plot_id in (1, 2): fig, ax = plt.subplots(ncols=1, figsize=(5, 3)) ax.plot(points[:, 0], points[:, 1], '.', color='k') if plot_id == 1: ax.set_title('Given points') else: ax.set_title('Convex hull') for simplex in hull.simplices: ax.plot(points[simplex, 0], points[simplex, 1], 'c') ax.plot(points[hull.vertices, 0], points[hull.vertices, 1], 'o', mec='r', color='none', lw=1, markersize=10) ax.set_xticks(range(10)) ax.set_yticks(range(10)) plt.show()
8
12
62,375,432
2020-6-14
https://stackoverflow.com/questions/62375432/is-there-a-dunder-method-corresponding-to-pipe-equal-update-for-dicts-i
In python 3.9, dictionaries gained combine | and update |= operators. Is there a dunder/magic method which will enable this to be used for other classes? I've tried looking in the python source but found it a bit bewildering.
Yes, | and |= correspond to __or__ and __ior__ (with __ror__ as the reflected variant of |). Don't look at the Python source code, look at the documentation; in particular, the data model. See here. And note, this isn't specific to Python 3.9.
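As a minimal sketch (my example, not from the linked docs), any class can opt into the same operators by defining those methods:
class Bag:
    def __init__(self, items=None):
        self.items = dict(items or {})

    def __or__(self, other):
        # `self | other` returns a new, merged Bag
        merged = dict(self.items)
        merged.update(other.items)
        return Bag(merged)

    def __ior__(self, other):
        # `self |= other` updates in place and returns self
        self.items.update(other.items)
        return self

a = Bag({"x": 1})
b = Bag({"y": 2})
c = a | b   # calls Bag.__or__
a |= b      # calls Bag.__ior__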
10
14
62,372,081
2020-6-14
https://stackoverflow.com/questions/62372081/what-is-the-advantage-of-using-multiple-cursors-in-psycopg2-for-postgresql-queri
What is the difference between using a single cursor in psycopg2 to perform all your queries, versus using multiple cursors? I.e., say I do this:
import psycopg2 as pg2
con = pg2.connect(...)
cur = con.cursor()
cur.execute(...)
....
....
cur.execute(...)
...
and every time I wish to execute a query thereafter, I use the same cursor cur. Alternatively I could do this every time I want to query my database:
with con.cursor() as cur:
    cur.execute(...)
In which case, my cursor cur would be deleted after every use. Which method is better? Does one have an advantage over another? Is one faster than the other? More generally, why are multiple cursors for one connection even needed?
The two options are comparable; you can always benchmark both to see if there's a meaningful difference, but psycopg2 cursors are pretty lightweight (they don't represent an actual server-side, DECLAREd cursor, unless you pass a name argument) and I wouldn't expect any substantial slowdown from either route. The reason psycopg2 has cursors at all is twofold. The first is to be able to represent server-side cursors for situations where the result set is larger than memory, and can't be retrieved from the DB all at once; in this case the cursor serves as the client-side interface for interacting with the server-side cursor. The second is that psycopg2 cursors are not thread-safe; a connection object can be freely used by any thread, but each cursor should be used by at most one thread. Having a cursor-per-thread allows for multithreaded applications to access the DB from any thread, while sharing the same connection. See the psycopg2 usage docs on server side cursors and thread and process safety for more details.
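To make the server-side case concrete, here is a minimal sketch (the connection string and table name are hypothetical placeholders); passing name= is what turns a psycopg2 cursor into the client-side handle for a server-side cursor:
import psycopg2

con = psycopg2.connect("dbname=test")  # hypothetical connection string

# A named cursor is server-side: rows are streamed in batches instead of
# being loaded into memory all at once.
with con.cursor(name="big_query") as cur:
    cur.itersize = 2000  # rows fetched per network round trip
    cur.execute("SELECT * FROM big_table")  # hypothetical table
    for row in cur:
        print(row)  # replace with your own processing

con.close()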
11
17
62,372,762
2020-6-14
https://stackoverflow.com/questions/62372762/delete-an-element-from-torch-tensor
I'm trying to delete an item from a tensor. In the example below, How can I remove the third item from the tensor ? tensor([[-5.1949, -6.2621, -6.2051, -5.8983, -6.3586, -6.2434, -5.8923, -6.1901, -6.5713, -6.2396, -6.1227, -6.4196, -3.4311, -6.8903, -6.1248, -6.3813, -6.0152, -6.7449, -6.0523, -6.4341, -6.8579, -6.1961, -6.5564, -6.6520, -5.9976, -6.3637, -5.7560, -6.7946, -5.4101, -6.1310, -3.3249, -6.4584, -6.2202, -6.3663, -6.9293, -6.9262]], grad_fn=<SqueezeBackward1>)
You can slice the tensor around the index you want to drop and then concatenate the two parts:
t.shape
torch.Size([1, 36])

# keeps columns 0..2 and 4.., i.e. removes the element at index 3;
# adjust the slice boundaries to remove a different position
t = torch.cat((t[:, :3], t[:, 4:]), axis=1)

t.shape
torch.Size([1, 35])
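An equivalent, slightly more general pattern (my addition, not part of the original answer) is to build a boolean mask over the columns, which avoids hard-coding two slices:
import torch

t = torch.randn(1, 36)
drop = 3                                # column index to remove
keep = torch.arange(t.size(1)) != drop  # boolean mask over columns
t = t[:, keep]
print(t.shape)  # torch.Size([1, 35])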
11
2
62,370,427
2020-6-14
https://stackoverflow.com/questions/62370427/read-xlsx-from-azure-blob-storage-to-pandas-dataframe-without-creating-temporary
I am trying to read a xlsx file from an Azure blob storage to a pandas dataframe without creating a temporary local file. I have seen many similar questions, e.g. Issues Reading Azure Blob CSV Into Python Pandas DF, but haven't managed to get the proposed solutions to work. Below code snippet results in a UnicodeDecodeError: 'utf-8' codec can't decode byte 0x87 in position 14: invalid start byte exception. from io import StringIO import pandas as pd from azure.storage.blob import BlobClient, BlobServiceClient blob_client = BlobClient.from_blob_url(blob_url = url + container + "/" + blobname, credential = token) blob = blob_client.download_blob().content_as_text() df = pd.read_excel(StringIO(blob)) Using a temporary file, I do manage to make it work with the following code snippet: blob_service_client = BlobServiceClient(account_url = url, credential = token) blob_client = blob_service_client.get_blob_client(container=container, blob=blobname) with open(tmpfile, "wb") as my_blob: download_stream = blob_client.download_blob() my_blob.write(download_stream.readall()) data = pd.read_excel(tmpfile)
Similar to what you have already done, we could use download_blob() to get the StorageStreamDownloader object into memory, then content_as_text() to decode the contents to a string. Then we can read the contents from the CSV StringIO buffer into a pandas DataFrame with pandas.read_csv().
from io import StringIO
import pandas as pd
from azure.storage.blob import BlobClient, BlobServiceClient
import os

connection_string = os.getenv('AZURE_STORAGE_CONNECTION_STRING')
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container="blobs", blob="test.csv")
blob = blob_client.download_blob().content_as_text()

df = pd.read_csv(StringIO(blob))
Update
If we are working with XLSX files, use content_as_bytes() to return bytes instead of a string, and convert to a pandas DataFrame with pandas.read_excel():
from io import StringIO
import pandas as pd
from azure.storage.blob import BlobClient, BlobServiceClient
import os

connection_string = os.getenv('AZURE_STORAGE_CONNECTION_STRING')
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container="blobs", blob="test.xlsx")
blob = blob_client.download_blob().content_as_bytes()

df = pd.read_excel(blob)
Since content_as_text() uses UTF-8 encoding by default, this is probably causing the UnicodeDecodeError when decoding bytes. We could still use this with pandas.read_excel() if we set the encoding to None:
blob = blob_client.download_blob().content_as_text(encoding=None)
df = pd.read_excel(blob)
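One caveat worth adding (my note, not from the original answer): some pandas versions won't accept raw bytes in read_excel() directly; if that happens, wrapping the downloaded bytes in a BytesIO buffer should work:
from io import BytesIO
import pandas as pd

blob = blob_client.download_blob().content_as_bytes()
df = pd.read_excel(BytesIO(blob))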
8
9
62,369,326
2020-6-14
https://stackoverflow.com/questions/62369326/what-is-the-purpose-of-floating-point-index-in-pandas
s.index=[0.0,1.1,2.2,3.3,4.4,5.5] s.index # Float64Index([0.0, 1.1, 2.2, 3.3, 4.4, 5.5], dtype='float64') s # 0.0 141.125 # 1.1 142.250 # 2.2 143.375 # 3.3 143.375 # 4.4 144.500 # 5.5 145.125 s.index=s.index.astype('float32') # s.index # Float64Index([ 0.0, 1.100000023841858, 2.200000047683716, # 3.299999952316284, 4.400000095367432, 5.5], # dtype='float64') What's the intuition behind floating point indices? Struggling to understand when we would use them instead of int indices (it seems like you can have three types of indices: int64, float64, or object, e.g. s.index=['a','b','c','d','e','f']). From the code above, it also looks like Pandas really wants float indices to be in 64-bit, as these 64-bit floats are getting cast to 32-bit floats and then back to 64-bit floats, with the dtype of the index remaining 'float64'. How do people use float indicies? Is the idea that you might have some statistical calculation over data and want to rank on the result of it, but those results may be floats? And we want to force float64 to avoid losing resolution?
Float indices are generally useless for label-based indexing, because of general floating point restrictions. Of course, pd.Float64Index is there in the API for completeness but that doesn't always mean you should use it. Jeff (core library contributor) has this to say on github:
[...] It is rarely necessary to actually use a float index; you are often better off served by using a column. The point of the index is to make individual elements faster, e.g. df[1.0], but this is quite tricky; this is the reason for having an issue about this.
The tricky part is that a label which merely displays as 1.0 is not necessarily bit-for-bit equal to the literal 1.0 you look it up with, so exact label matches can silently fail.
Floating indices are useful in a few situations (as cited in the github issue), mainly for recording a temporal axis (time), or extremely minute/accurate measurements in, for example, astronomical data. For most other cases there's pd.cut or pd.qcut for binning your data, because working with categorical data is usually easier than continuous data.
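To make that pitfall concrete, here is a tiny illustration (my example, not from the answer) of an exact-label lookup failing on a float index:
import pandas as pd

s = pd.Series([10, 20], index=[0.1 + 0.2, 0.4])
print(s.index[0])  # 0.30000000000000004, not 0.3

try:
    s[0.3]
except KeyError:
    print("0.3 is not an exact label in the index")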
9
8
62,366,211
2020-6-13
https://stackoverflow.com/questions/62366211/vscode-modulenotfounderror-no-module-named-x
I am trying to build a new package, however, when I try to run any of the files from inside VSCode or from terminal, I am coming across this error: ModuleNotFoundError: No module name 'x' My current folder structure is as follows: package |---module |------__init__.py |------calculations.py |------miscfuncs.py |---tests |------__init__.py |------test_calcs.py |---setup.py |---requirements.txt However, when I run my tests (PyTest) through VSCode and using import module.calculations as calc or from module.calculations import Class in test_calcs.py, the tests work as expected - which is confusing me. I know this is a commonly asked question, but I cannot fathom out a solution that will work here. I have tried checking the working directory is in system path using the code below. The first item on the returned list of directories is the one I am working in. import sys print(sys.path) I have also used the following in the files to no avail: import module.calculations import .module.calculations from . import miscfuncs When trying import .module.calculations, I get the following: ModuleNotFoundError: No module named '__main__.module'; '__main__' is not a package When trying from . import miscfuncs in calculations.py, I get the following error: ImportError: cannot import name 'miscfuncs' When working on a file within the module folder, I can use a relative import: import calculations and it works fine. This is fine for files within module, but not when I am working in test_calcs.py. In my setup.py, I have a line for: packages=['module'] Happy to post more info if required or a link to my repo for the full code. EDIT Following remram's solution: I have updated launch.json to include the CWD and PYTHONPATH variables. The module name is still not recognised, however, IntelliSense within VSCode is picking up the functions within the imported file just fine. "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "cwd": "${workspaceFolder}", "env": {"PYTHONPATH": "${cwd}" } } ]
Make sure you are running from the package folder (not from package/module) if you want import module.calculations to work. You can also set the PYTHONPATH environment variable to the path to the package folder.
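As a rough sketch of what that means in practice (the paths below are hypothetical placeholders), the folder that contains module/ has to be on sys.path, which is exactly what running from the package folder or exporting PYTHONPATH achieves:
import sys
print(sys.path)  # the package folder must appear here for `import module.calculations` to resolve

# equivalent ad-hoc fix from inside a script or test:
sys.path.insert(0, "/home/you/projects/package")  # hypothetical absolute path
import module.calculations as calc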
35
8
62,363,953
2020-6-13
https://stackoverflow.com/questions/62363953/how-to-create-toggle-switch-button-in-qt-designer
I am trying to create toggle button in qt designer. I refer on internet also but i couldn't find how to do that. Can anyone know how to do toggle switch button. I have attached a sample button image. EDIT I created a text box below that i want this toggle button. When i try to add it throws me error. How to add the button below the text area? i have attached the code snippet also. from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtCore import QPropertyAnimation, QRectF, QSize, Qt, pyqtProperty from PyQt5.QtGui import QPainter from PyQt5.QtWidgets import ( QAbstractButton, QApplication, QHBoxLayout, QSizePolicy, QWidget, ) class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(472, 180) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.url = QtWidgets.QLineEdit(self.centralwidget) self.url.setGeometry(QtCore.QRect(30, 20, 411, 31)) font = QtGui.QFont() font.setFamily("MS Shell Dlg 2") font.setPointSize(10) font.setBold(True) font.setWeight(75) self.url.setFont(font) self.url.setAutoFillBackground(False) self.url.setStyleSheet("border-radius:10px;") self.url.setAlignment(QtCore.Qt.AlignCenter) self.url.setCursorMoveStyle(QtCore.Qt.LogicalMoveStyle) self.url.setObjectName("url") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 472, 21)) self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.url.setPlaceholderText(_translate("MainWindow", "Playlist URL")) if __name__ == "__main__": import sys main() app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_())
Qt Designer can set the position and initialize some properties of the widgets in a window, but it does not let you create custom-painted widgets such as this switch, so you will have to implement it in Python. A while ago I implemented a switch for a project, so I will share that code:
from PyQt5.QtCore import QObject, QSize, QPointF, QPropertyAnimation, QEasingCurve, pyqtProperty, pyqtSlot, Qt
from PyQt5.QtGui import QPainter, QPalette, QLinearGradient, QGradient
from PyQt5.QtWidgets import QAbstractButton, QApplication, QWidget, QHBoxLayout, QLabel


class SwitchPrivate(QObject):
    def __init__(self, q, parent=None):
        QObject.__init__(self, parent=parent)
        self.mPointer = q
        self.mPosition = 0.0
        self.mGradient = QLinearGradient()
        self.mGradient.setSpread(QGradient.PadSpread)

        self.animation = QPropertyAnimation(self)
        self.animation.setTargetObject(self)
        self.animation.setPropertyName(b'position')
        self.animation.setStartValue(0.0)
        self.animation.setEndValue(1.0)
        self.animation.setDuration(200)
        self.animation.setEasingCurve(QEasingCurve.InOutExpo)

        self.animation.finished.connect(self.mPointer.update)

    @pyqtProperty(float)
    def position(self):
        return self.mPosition

    @position.setter
    def position(self, value):
        self.mPosition = value
        self.mPointer.update()

    def draw(self, painter):
        r = self.mPointer.rect()
        margin = r.height()/10
        shadow = self.mPointer.palette().color(QPalette.Dark)
        light = self.mPointer.palette().color(QPalette.Light)
        button = self.mPointer.palette().color(QPalette.Button)
        painter.setPen(Qt.NoPen)

        self.mGradient.setColorAt(0, shadow.darker(130))
        self.mGradient.setColorAt(1, light.darker(130))
        self.mGradient.setStart(0, r.height())
        self.mGradient.setFinalStop(0, 0)
        painter.setBrush(self.mGradient)
        painter.drawRoundedRect(r, r.height()/2, r.height()/2)

        self.mGradient.setColorAt(0, shadow.darker(140))
        self.mGradient.setColorAt(1, light.darker(160))
        self.mGradient.setStart(0, 0)
        self.mGradient.setFinalStop(0, r.height())
        painter.setBrush(self.mGradient)
        painter.drawRoundedRect(r.adjusted(margin, margin, -margin, -margin), r.height()/2, r.height()/2)

        self.mGradient.setColorAt(0, button.darker(130))
        self.mGradient.setColorAt(1, button)

        painter.setBrush(self.mGradient)

        x = r.height()/2.0 + self.mPosition*(r.width()-r.height())
        painter.drawEllipse(QPointF(x, r.height()/2), r.height()/2-margin, r.height()/2-margin)

    @pyqtSlot(bool, name='animate')
    def animate(self, checked):
        self.animation.setDirection(QPropertyAnimation.Forward if checked else QPropertyAnimation.Backward)
        self.animation.start()


class Switch(QAbstractButton):
    def __init__(self, parent=None):
        QAbstractButton.__init__(self, parent=parent)
        self.dPtr = SwitchPrivate(self)
        self.setCheckable(True)
        self.clicked.connect(self.dPtr.animate)

    def sizeHint(self):
        return QSize(84, 42)

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.setRenderHint(QPainter.Antialiasing)
        self.dPtr.draw(painter)

    def resizeEvent(self, event):
        self.update()


if __name__ == '__main__':
    import sys
    app = QApplication(sys.argv)
    w = Switch()
    w.show()
    sys.exit(app.exec_())
Note: Another possible solution within the Qt world is to use the QML Switch component.
Note: In this post I point out how to add custom widgets to a .ui file.
9
6
62,363,774
2020-6-13
https://stackoverflow.com/questions/62363774/python-pip-install-wheel-dependencies-from-a-folder
I know that I can create a wheel by first writing a setup.py and then typing python setup.py bdist_wheel If my wheels depend only on packages in pypi I know that I can install them by doing: pip install mypkg.whl Question: if my wheels depend on other of my wheels, can I have pip automatically install them from a folder? Essentially using a folder as a poor man's private pypi To be more specific, if in pkg1 I have a setup.py: from setuptools import setup setup( ... name = "pkg1", install_requires = ["requests"], ... ) And in pkg2 I have: from setuptools import setup setup( ... name = "pkg2", install_requires = ["pkg1"], ... ) This will fail on installation because pip will try to look for pkg1 in pypi. Is it possible to tell it to just look in a folder?
pip install --find-links /path/to/wheel/dir/ pkg2 If you want to completely disable access to PyPI add --no-index: pip install --no-index --find-links /path/to/wheel/dir/ pkg2
18
30
62,352,767
2020-6-12
https://stackoverflow.com/questions/62352767/cant-install-open3d-libraries-errorcould-not-find-a-version-that-satisfies-th
I use pyCharm software in windows 10, and when I tried to install open3d the following error appeared: ERROR: Could not find a version that satisfies the requirement open3d (from versions: none) ERROR: No matching distribution found for open3d I tried to install it using cmd but the same error appeared, also pip version is 20.1.1.
Solved: by installing Python 3.7.7 and running the code with it instead of 3.8. At the time, open3d did not publish wheels for Python 3.8, which is why pip could not find a matching distribution.
9
4
62,351,462
2020-6-12
https://stackoverflow.com/questions/62351462/fastapi-app-running-locally-but-not-in-docker-container
I have a FastAPI app that is working as expected when running locally, however, I get an 'Internal Server Error' when I try to run in a Docker container. Here's the code for my app: from fastapi import FastAPI from pydantic import BaseModel import pandas as pd from fbprophet import Prophet class Data(BaseModel): length: int ds: list y: list model: str changepoint: float = 0.5 daily: bool = False weekly: bool = False annual: bool = False upper: float = None lower: float = 0.0 national_holidays: str = None app = FastAPI() @app.post("/predict/") async def create_item(data: Data): # Create df from base model df = pd.DataFrame(list(zip(data.ds, data.y)), columns =['ds', 'y']) # Add the cap and floor to df for logistic model if data.model == "logistic": df['y'] = 10 - df['y'] df['cap'] = data.upper df['floor'] = data.lower # make basic prediction m = Prophet(growth=data.model, changepoint_prior_scale=data.changepoint, weekly_seasonality=data.weekly, daily_seasonality=data.daily, yearly_seasonality=data.annual ) # Add national holidays if data.national_holidays is not None: m.add_country_holidays(country_name=data.national_holidays) # Fit data frame m.fit(df) # Create data frame for future future = m.make_future_dataframe(periods=data.length) # Add the cap and floor to future for logistic model if data.model == "logistic": future['cap'] = 6 future['floor'] = 1.5 # forecast forecast = m.predict(future) # Print values print(list(forecast[['ds']].values)) # Return results # {'ds': forecast[['ds']], 'yhat': forecast[['yhat']], 'yhat_lower': forecast[['yhat_lower']], 'yhat_upper': forecast[['yhat_upper']] } return [forecast[['ds']], forecast[['yhat']], forecast[['yhat_lower']], forecast[['yhat_upper']]] Which is working locally with uvicorn main:app, but not when I build using this Dockerfile: FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 COPY ./app /app RUN pip install -r requirements.txt and start with docker run -d --name mycontainer -p 8000:80 myimage I'm seeing Internal Server Error in Postman. Is there something wrong with my dockerfile or docker commands? Or else how do I debug this?
Run the container without the -d parameter (or check docker logs mycontainer) so the traceback is printed to the console; that will give you more clues about what's failing. If I were to guess, I might say that you're missing some Python requirement inside the image.
12
7
62,340,498
2020-6-12
https://stackoverflow.com/questions/62340498/open-database-files-db-using-python
I have a data base file .db in SQLite3 format and I was attempting to open it to look at the data inside it. Below is my attempt to code using python. import sqlite3 # Create a SQL connection to our SQLite database con = sqlite3.connect(dbfile) cur = con.cursor() # The result of a "cursor.execute" can be iterated over by row for row in cur.execute("SELECT * FROM "): print(row) # Be sure to close the connection con.close() For the line ("SELECT * FROM ") , I understand that you have to put in the header of the table after the word "FROM", however, since I can't even open up the file in the first place, I have no idea what header to put. Hence how can I code such that I can open up the data base file to read its contents?
So, your analysis is right: after the FROM you have to put in a table name. You can find the table names like this:
SELECT name FROM sqlite_master WHERE type = 'table'
In code this looks like this:
# loading in modules
import sqlite3

# creating file path
dbfile = '/home/niklas/Desktop/Stuff/StockData-IBM.db'

# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)

# creating cursor
cur = con.cursor()

# reading all table names
table_list = [a for a in cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
# here is your table list
print(table_list)

# Be sure to close the connection
con.close()
That worked very well for me. Your data-reading code is already right; just paste in one of the table names.
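Once you know a table name, the original loop works as intended; for example (the table name below is a hypothetical placeholder, since I don't know what's in your file):
import sqlite3

dbfile = 'StockData-IBM.db'  # path to your .db file
con = sqlite3.connect(dbfile)
cur = con.cursor()

for row in cur.execute("SELECT * FROM stock_prices"):  # 'stock_prices' is a made-up name
    print(row)

con.close()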
19
29
62,330,675
2020-6-11
https://stackoverflow.com/questions/62330675/get-local-time-zone-name-on-windows-python-3-9-zoneinfo
Checking out the zoneinfo module in Python 3.9, I was wondering if it also offers a convenient option to retrieve the local time zone (OS setting) on Windows. On GNU/Linux, you can do from datetime import datetime from zoneinfo import ZoneInfo naive = datetime(2020, 6, 11, 12) aware = naive.replace(tzinfo=ZoneInfo('localtime')) but on Windows, that throws ZoneInfoNotFoundError: 'No time zone found with key localtime' so would I still have to use a third-party library? e.g. import time import dateutil tzloc = dateutil.tz.gettz(time.tzname[time.daylight]) aware = naive.replace(tzinfo=tzloc) Since time.tzname[time.daylight] returns a localized name (German in my case, e.g. 'Mitteleuropäische Sommerzeit'), this doesn't work either: aware = naive.replace(tzinfo=ZoneInfo(tzloc)) Any thoughts? p.s. to try this on Python < 3.9, use backports (see also this answer): pip install backports.zoneinfo pip install tzdata # needed on Windows
You don't need to use zoneinfo to use the system local time zone. You can simply pass None (or omit) the time zone when calling datetime.astimezone. From the docs: If called without arguments (or with tz=None) the system local timezone is assumed. The .tzinfo attribute of the converted datetime instance will be set to an instance of timezone with the zone name and offset obtained from the OS. Thus: from datetime import datetime naive = datetime(2020, 6, 11, 12) aware = naive.astimezone()
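To see what was attached (my addition, purely illustrative; the exact output depends on your OS settings), the resulting tzinfo exposes the OS-derived name and offset:
from datetime import datetime

aware = datetime(2020, 6, 11, 12).astimezone()
print(aware.tzname())     # e.g. 'Mitteleuropäische Sommerzeit' on a German Windows machine
print(aware.utcoffset())  # e.g. 2:00:00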
12
10
62,330,374
2020-6-11
https://stackoverflow.com/questions/62330374/input-image-dtype-is-bool-interpolation-is-not-defined-with-bool-data-type
I am facing this issue while using Mask_RCNN to train on my custom dataset with multiple classes. This error occurs when I start training. This is what I get: /home/parth/anaconda3/envs/compVision/lib/python3.7/site-packages/skimage/transform/_warps.py:830: FutureWarning: Input image dtype is bool. Interpolation is not defined with bool data type. Please set order to 0 or explicitely cast input image to another data type. Starting from version 0.19 a ValueError will be raised instead of this warning. order = _validate_interpolation_order(image.dtype, order) I keep getting this for like a hundred times and then the kernel dies. Please Help!!
Maybe you can try skimage version 0.16.2. When I used version 0.17.2, I faced the same issue. I don't know why. Good luck!
14
21
62,328,661
2020-6-11
https://stackoverflow.com/questions/62328661/what-is-the-difference-between-higher-order-functions-and-decorators
I do understand that higher-order functions are functions that take functions as parameters or return functions. I also know that decorators are functions that add some functionality to other functions. What are they exactly? Are they the functions that are passed in as parameters or are they the higher-order functions themselves? Note: If you give an example, use python.
A higher order function is a function that takes a function as an argument OR* returns a function. A decorator in Python is (typically) an example of a higher-order function, but there are decorators that aren't (class decorators**, and decorators that aren't functions), and there are higher-order functions that aren't decorators, for example those that take two required arguments that are functions. Not decorator, not higher-order function: def hello(who): print("Hello", who) Not decorator, but higher-order function: def compose(f, g): def wrapper(*args, **kwargs): return g(f(*args, **kwargs)) return wrapper Decorator, not higher-order function: def classdeco(cls): cls.__repr__ = lambda self: "WAT" return cls # Usage: @classdeco class Foo: pass Decorator, higher-order function: def log_calls(fn): def wrapper(*args, **kwargs): print("Calling", fn.__name__) return fn(*args, **kwargs) return wrapper * Not XOR ** Whether or not you consider class decorators to be higher-order functions, because classes are callable etc. is up for debate, I guess..
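For completeness, a small usage sketch (my addition) of the log_calls example above: applying it with @ is just syntactic sugar for passing the function through the higher-order function.
@log_calls
def add(a, b):
    return a + b

add(2, 3)  # prints "Calling add", returns 5

# equivalent without the decorator syntax:
def sub(a, b):
    return a - b

sub = log_calls(sub)
sub(5, 3)  # prints "Calling sub", returns 2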
9
13