question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
70,395,238 | 2021-12-17 | https://stackoverflow.com/questions/70395238/add-multiple-annotations-at-once-to-plotly-line-chart | I want to add many annotations with an arrow to my line plot. However, I don't want to add them all manually (like I did in the code below). This will be a hell of a job and I would rather add the annotations directly from the column df['text'] (or a list from this column). import plotly.express as px import pandas as pd # assign data of lists. data = {'x': [0, 1, 2, 3, 4, 5, 6, 7, 8], 'y': [0, 1, 3, 2, 4, 3, 4, 6, 5], 'text':["","","Annotation1","","Annotation2","","","",""]} # Create DataFrame df = pd.DataFrame(data) fig = px.line(df, x='x', y='y', title='I want to add annotation with the tekst column in my dataframe (instead of manually)') fig.add_annotation(x=2, y=3, text="Annotation1 (added manual)", showarrow=True, arrowhead= 2) fig.add_annotation(x=4, y=4, text="Annotation2 (added manual)", showarrow=True, arrowhead= 2) fig.update_layout(showlegend=False) fig.show() The expected outcome looks like this (but then I want to add the annotations all at once with a list or something like that): Many thanks in advance for your help. | I figured it out myself. See below the answer: import plotly.express as px import pandas as pd # assign data of lists. data = {'x': [0, 1, 2, 3, 4, 5, 6, 7, 8], 'y': [0, 1, 3, 2, 4, 3, 4, 6, 5], 'text':["","","Annotation1","","Annotation2","","","",""]} # Create DataFrame df = pd.DataFrame(data) fig = px.line(df, x='x', y='y', title='I want to add annotation with the tekst column in my dataframe (instead of manually)') arrow_list=[] counter=0 for i in df['text'].tolist(): if i != "": arrow=dict(x=df['x'].values[counter],y=df['y'].values[counter],xref="x",yref="y",text=i,arrowhead = 2, arrowwidth=1.5, arrowcolor='rgb(255,51,0)',) arrow_list.append(arrow) counter+=1 else: counter+=1 fig.update_layout(annotations=arrow_list) fig.show() | 5 | 4 |
70,403,906 | 2021-12-18 | https://stackoverflow.com/questions/70403906/type-hint-for-a-list-with-0-or-more-items | I have a list which can have one or more items: a_list = ["apple"] But it can also be empty: a_list = [] In this case, which of List[str] and List[Optional[str]] is the appropriate type-hint for this variable and why? | List[str] includes all lists of strings, including the empty list. (From a typing perspective, an empty list of type List[str] is distinct from an empty list of type List[int]). Optional[str] is shorthand for Union[None, str], so List[Optional[str]] is the type of lists that can contain str values and Nones, not the type of list that may or may not have a str value. | 7 | 11 |
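A minimal runnable illustration of the distinction drawn in the accepted answer above; standard `typing` semantics, with variable names invented for the example:

```python
from typing import List, Optional

a_list: List[str] = ["apple"]                  # a list of strings
a_list = []                                    # still fine: the empty list is a valid List[str]

b_list: List[Optional[str]] = ["apple", None]  # the *elements* may be None
# b_list = None                                # still rejected: the list itself is not Optional
```

A checker such as mypy accepts both assignments to `a_list`; uncommenting the last line would produce an error.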
70,402,667 | 2021-12-18 | https://stackoverflow.com/questions/70402667/how-to-use-create-all-for-sqlalchemy-orm-objects-across-files | I have declared multiple orm classes in a.py: Base = declarative_base() class Message(Base): __tablename__ = "message" __table_args__ = {"schema": "stocktwits"} id = Column(BigInteger, primary_key=True) user_id = Column(BigInteger, nullable=False) body = Column(String, nullable=False) created_at = Column(DateTime, nullable=False) __table_args__ = ( Index("created_at_idx", created_at), ) ... I am initializing the database in another file b.py: def init_db(self): engine = create_engine(db_url) meta.create_all(engine) session = sessionmaker(bind=engine)() I have two questions about such structure: First, how can I refer to my orm objects declared in a.py when I call the method meta.create_all(engine) in b.py? Should I use Base.metadata.create_all(bind=engine)? In that case I have to figure out a way to share the Base object across files. Second, I have read posts about importing objects before calling the meta.create_all() , since python does not allowed wildcard import within a function, does it mean I have to import all my orm objects individually, like from a.py import a, b, c, d, .... | Should I use Base.metadata.create_all(bind=engine)? Yes - all model classes that inherit from Base are registered in its metadata, so calling Base.metadata.create_all(engine) will create all the associated tables. In that case I have to figure out a way to share the Base object across files from a import Base in b.py should be sufficient. Obviously you should only define Base once. | 5 | 5 |
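A minimal sketch of what b.py could look like under the accepted answer above, assuming the models in a.py inherit from the `Base` shown in the question (the plain `init_db` signature and the `db_url` argument are simplifications):

```python
# b.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from a import Base  # importing a registers Message (and any other models) on Base.metadata

def init_db(db_url: str):
    engine = create_engine(db_url)
    Base.metadata.create_all(engine)    # creates every table declared on Base
    return sessionmaker(bind=engine)()  # a ready-to-use session
```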
70,398,579 | 2021-12-17 | https://stackoverflow.com/questions/70398579/matplotlib-background-matches-vscode-theme-on-dark-mode-and-cant-see-axis | I just got a new PC and downloaded visual studio code. I'm trying to run the exact same plots as the code I had on my other PC (just plt.plot(losses)) but now matplotlib seems to have a dark background instead of white: I found this and this that had opposite problems. To clarify, I'm asking how to change the matplotlib background plots to white (note that in my other machine I didn't have to hard code any matplotlib background information so I think it's a visual studio problem, but couldn't figure it out) | Difficult to be sure since I cannot reproduce your problem. Two things to try (both presume that you import matplotlib using import matplotlib.pyplot as plt): if you use plt.figure, add facecolor='white' parameter. Or try to run fig.set_facecolor('white') (fig here is the variable that stored the figure which facecolor you are changing. If you don't have any, use plt.gcf().set_facecolor('white') once the figure is created; gcf() returns current figure, see this doc). Try to change plt.style.context as in this matplotlib example. | 9 | 6 |
70,393,064 | 2021-12-17 | https://stackoverflow.com/questions/70393064/what-does-event-pos0-mean-in-the-pygame-library-i-saw-an-example-using-it | I don't understand how it works. I don't know if I understood the purpose of this function wrong. I tried to search what posx=event.pos[0] means but all I found was that if you want to take x, write the code of posx,posy=pygame.mouse.get_pos() and then take posx. But I still can't understand the method he followed in the example I saw. | See pygame.event module. The MOUSEMOTION, MOUSEBUTTONUP and MOUSEBUTTONDOWN events provide a position property pos with the position of the mouse cursor. pos is a tuple with 2 components, the x and y coordinate. e.g.: for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: print("mouse cursor x", event.pos[0]) print("mouse cursor y", event.pos[1]) pygame.mouse.get_pos() returns a Tuple and event.pos is a Tuple. Both give you the position of the mouse pointer as a tuple with 2 components: ex, ey = event.pos mx, my = pygame.mouse.get_pos() pygame.mouse.getpos() returns the current position of the mouse. The pos attribute stores the position of the mouse when the event occurred. Note that you can call pygame.event.get() much later than the event occurred. If you want to know the position of the mouse at the time of the event, you can call it up using the pos attribute. | 6 | 2 |
70,387,080 | 2021-12-17 | https://stackoverflow.com/questions/70387080/what-allows-bare-class-instances-to-have-assignable-attributes | I am trying to fill in a gap in my understanding of how Python objects and classes work. A bare object instance does not support attribute assignment in any way: > object().a = 5 # or > setattr(object(), 'a', 5) AttributeError: 'object' object has no attribute 'a' I assume that this is because a bare object instance does not possess an __dict__ attribute: > '__dict__' in dir(object()) False However a bare object instance does have a defined __setattr__ attribute, so this is a bit confusing to me: > '__setattr__' in dir(object()) True An instance of a regular empty class on the other hand has full ability of attribute assignment: class B(object): pass > B().a = 5 > setattr(B(), 'a', 5) My question is: what inherent difference between an object instance and a class B(object) instance allows the latter to have assignable attributes, if B inherits directly from object? | The object() class is like a fundamental particle of the python universe, and is the base class (or building block) for all objects (read everything) in Python. As such, the stated behavior is logical, for not all objects can (or should) have arbitrary attributes set. For example, it wouldn't make sense if a NoneType object could have attributes set, and, just like object(), a None object also does not have a __dict__ attribute. In fact, the only difference in the two is that a None object has a __bool__ attribute. For example: n = None o = object() type(n) >>> <class 'NoneType'> set(dir(n)) - set(dir(o)) >>> {'__bool__'} isinstance(n, object) >>> True bool(n) >>> False Inheritance from object is automatic, and, just like any other means of inheriting, one can add their own class methods and attributes to the child. Python automatically adds the __dict__ attribute for custom data types as you already showed. In short, it is much easier to add an object's __dict__ attribute than to take it away for objects that do not have custom writable attributes (i.e. the NoneType). Update based on comment: Original comment: Would it then be safe to assume that it is __setattr__ that checks for the existence of __dict__ and raises an exception accordingly? β bool3max In CPython, the logic behind object.__setattr__(self, name, value) is implemented by Objects/object.c _PyObject_GenericSetAttrWithDict (see CPython source code). Specifically, it looks like if the name argument is a string, then the object is checked for a __dict__ object in one of its slots and makes sure it is "ready" (see this line). The readiness state of the object is determined by the PyType_Ready(), briefly described and quoted from here: Defining a Python type in C involves populating the fields of a PyTypeObject struct with the values you care about. We call each of those fields a βslotβ. On[c]e the definition is ready, we pass it into the PyType_Ready() function, which does a number of things, inclulding exposing most of the type definition to Pythonβs attribute lookup mechanism. | 11 | 6 |
70,396,519 | 2021-12-17 | https://stackoverflow.com/questions/70396519/make-sure-each-print-is-on-a-newline-when-threading-indefinitely-in-python | just a small problem. I'm using around 6 threads, all of which are printing something every couple of seconds. Occasionally they print on the same line like this: OUTPUT OUTPUT OUTPUTOUTPUT OUTPUT OUTPUT This leaves an empty line and a double print as you can see. Is there a way that I can make sure this doesn't happen. I saw something saying: print("OUTPUT", end="\n") This didn't work so I've come back to stack overflow! | One way to manage this is to have a wrapper function for print and utilise a threading.Lock. Here's an example: from threading import Thread, Lock import time LOCK = Lock() def doprint(msg): with LOCK: print(msg) def athread(): doprint('Thread is starting') time.sleep(0.1) doprint('Thread is ending') threads = [] for _ in range(10): t = Thread(target=athread) t.start() threads.append(t) for t in threads: t.join() In this way, print() can only ever be invoked by a thread that has successfully acquired the lock | 7 | 8 |
70,393,970 | 2021-12-17 | https://stackoverflow.com/questions/70393970/how-to-add-missing-spaces-after-periods-using-regex-without-changing-decimals | I have a large piece of text that is missing spaces after some of the periods. However the text also contains decimal numbers. Here's what I have so far to fix the problem using regex (I'm using python): re.sub(r"(?!\d\.\d)(?!\. )\.", '. ', my_string) But the first escape group doesn't seem to work. It still matches periods in decimal numbers. Here is sample text to make sure any potential solution works: this is a.match this should also match.1234 and this should 123.match this should NOT match. Has space after period this also should NOT match 1.23 | You can use re.sub(r'\.(?!(?<=\d\.)\d) ?', '. ', text) See the regex demo. The trailing space is matched optionally, so if it is there, it will be removed and put back. Details \. - a dot (?!(?<=\d\.)\d) - do not match any further if the dot before was a dot between two digit ? - an optional space. See a Python demo: import re text = "this is a.match\nthis should also match.1234\nand this should 123.match\n\nthis should NOT match. Has space after period\nthis also should NOT match 1.23" print(re.sub(r'\.(?!(?<=\d\.)\d) ?', '. ', text)) Output: this is a. match this should also match. 1234 and this should 123. match this should NOT match. Has space after period this also should NOT match 1.23 Alternatively, use a (?! ) lookahead as in your attempt: re.sub(r'\.(?!(?<=\d\.)\d)(?! )', '. ', text) See the regex demo and the Python demo. | 7 | 1 |
70,392,403 | 2021-12-17 | https://stackoverflow.com/questions/70392403/dividing-an-even-number-into-n-parts-each-part-being-a-multiple-of-2 | Let's assume I have the number 100 which I need to divide into N parts each of which shouldn't exceed 30 initially. So the initial grouping would be (30,30,30). The remainder (which is 10) is to be distributed among these three groups by adding 2 to each group in succession, thus ensuring that each group is a multiple of 2. The desired output should therefore look like (34,34,32). Note: The original number is always even. I tried solving this in Python and this is what I came up with. Clearly it's not working in the way I thought it would. It distributes the remainder by adding 1 (and not 2, as desired) iteratively to each group. num = 100 parts = num//30 #Number of parts into which 'num' is to be divided def split(a, b): result = ([a//b + 1] * (a%b) + [a//b] * (b - a%b)) return(result) print(split(num, parts)) Output: [34, 33, 33] Desired output: [34, 34, 32] | Simplified problem: forget about multiples of 2 First, let's simplify your problem for a second. Forget about the multiples of 2. Imagine you want to split a non-necessarily-even number n into k non-necessarily-even parts. Obviously the most balanced solution is to have some parts be n // k, and some parts be n // k + 1. How many of which? Let's call r the number of parts with n // k + 1. Then there are k - r parts with n // k, and all the parts sum up to: (n // k) * (k - r) + (n // k + 1) * r == (n // k) * (k - r) + (n // k) * r + r == (n // k) * (k - r + r) + r == (n // k) * k + r But the parts should sum up to n, so we need to find r such that: n == (n // k) * k + r Happily, you might recognise Euclidean division here, with n // k being the quotient and r being the remainder. This gives us our split function: def split(n, k): d,r = divmod(n, k) return [d+1]*r + [d]*(k-r) Testing: print( split(50, 3) ) # [17, 17, 16] Splitting into multiples of 2 Now back to your split_even problem. Now that we have the generic function split, a simple way to solve split_even is to use split: def split_even(n, k): return [2 * x for x in split(n // 2, k)] Testing: print( split_even(100, 3) ) # [34, 34, 32] Generalisation: multiples of m It's trivial to do the same thing with multiples of a number m other than 2: def split_multiples(n, k, m=2): return [m * x for x in split(n // m, k)] Testing: print( split_multiples(102, 4, 3) ) # [27, 27, 24, 24] | 7 | 9 |
70,381,559 | 2021-12-16 | https://stackoverflow.com/questions/70381559/ensure-that-an-argument-can-be-iterated-twice | Suppose I have the following function: def print_twice(x): for i in x: print(i) for i in x: print(i) When I run: print_twice([1,2,3]) or: print_twice((1,2,3)) I get the expected result: the numbers 1,2,3 are printed twice. But when I run: print_twice(zip([1,2,3],[4,5,6])) the pairs (1,4),(2,5),(3,6) are printed only once. Probably, this is because the zip returns a generator that terminates after one pass. How can I modify the function print_twice such that it will correctly handle all inputs? I could insert a line at the beginning of the function: x = list(x). But this might be inefficient in case x is already a list, a tuple, a range, or any other iterator that can be iterated more than once. Is there a more efficient solution? | I could insert a line at the beginning of the function: x = list(x). But this might be inefficient in case x is already a list, a tuple, a range, or any other iterator that can be iterated more than once. Is there a more efficient solution? Copying single-use iterables to a list is perfectly adequate, and reasonably efficient even for multi-use iterables. The list (and to some extend tuple) type is one of the most optimised data structures in Python. Common operations such as copying a list or tuple to a list are internally optimised;1 even for iterables that are not special-cased, copying them to a list is significantly faster than any realistic work done by two (or more) loops. def print_twice(x): x = list(x) for i in x: print(i) for i in x: print(i) Copying indiscriminately can also be advantageous in the context of concurrency, when the iterable may be modified while the function is running. Common cases are threading and weakref collections. In case one wants to avoid needless copies, checking whether the iterable is a Collection is a reasonable guard. from collections.abc import Collection x = list(x) if not isinstance(x, Collection) else x Alternatively, one can check whether the iterable is in fact an iterator, since this implies statefulness and thus single-use. from collections.abc import Iterator x = list(x) if isinstance(x, Iterator) else x x = list(x) if iter(x) is x else x Notably, the builtins zip, filter, map, ... and generators all are iterators. 1Copying a list of 128 items is roughly as fast as checking whether it is a Collection. | 9 | 2 |
70,390,050 | 2021-12-17 | https://stackoverflow.com/questions/70390050/pythonic-way-to-make-a-dictionary-from-lists-of-unequal-length-without-padding-n | I have a list of 'Id's' that I wish to associate with a property from another list, their 'rows'. I have found a way to do it by making smaller dictionaries and concatenating them together which works, but I wondered if there was a more pythonic way to do it? Code row1 = list(range(1, 6, 1)) row2 = list(range(6, 11, 1)) row3 = list(range(11, 16, 1)) row4 = list(range(16, 21, 1)) row1_dict = {} row2_dict = {} row3_dict = {} row4_dict = {} for n in row1: row1_dict[n] = 1 for n in row2: row2_dict[n] = 2 for n in row3: row3_dict[n] = 3 for n in row4: row4_dict[n] = 4 id_to_row_dict = {} id_to_row_dict = {**row1_dict, **row2_dict, **row3_dict, **row4_dict} print('\n') for k, v in id_to_row_dict.items(): print(k, " : ", v) Output of dictionary which I want to replicate more pythonically 1 : 1 2 : 1 3 : 1 4 : 1 5 : 1 6 : 2 7 : 2 8 : 2 9 : 2 10 : 2 11 : 3 12 : 3 13 : 3 14 : 3 15 : 3 16 : 4 17 : 4 18 : 4 19 : 4 20 : 4 Desired output Same as my output above, I just want to see if there is a better way to do it? | This dict-comprehension should do it: rows = [row1, row2, row3, row4] {k: v for v, row in enumerate(rows, 1) for k in row} | 8 | 10 |
70,385,155 | 2021-12-16 | https://stackoverflow.com/questions/70385155/why-is-this-float-conversion-made | I have this dataframe Python 3.9.0 (v3.9.0:9cf6752276, Oct 5 2020, 11:29:23) [Clang 6.0 (clang-600.0.57)] on darwin >>> import pandas as pd >>> import datetime as datetime >>> pd.__version__ '1.3.5' >>> dates = [datetime.datetime(2012, 2, 3) , datetime.datetime(2012, 2, 4)] >>> x = pd.DataFrame({'Time': dates, 'Selected': [0, 0], 'Nr': [123.4, 25.2]}) >>> x.set_index('Time', inplace=True) >>> x Selected Nr Time 2012-02-03 0 123.4 2012-02-04 0 25.2 An integer value from an integer column is converted to a float in the example but I do not see the reason for this conversion. In both cases I assume I pick the value from the 'Selected' column from the first row. What is going on? >>> x['Selected'].iloc[0] 0 >>> x.iloc[0]['Selected'] 0.0 >>> x['Selected'].dtype dtype('int64') | x.iloc[0] selects a single "row". A new series object is actually created. When it decides on the dtype of that row, a pd.Series, it uses a floating point type, since that would not lose information in the "Nr" column. On the other hand, x['Selected'].iloc[0] first selects a column, which will always preserve the dtype. pandas is fundamentally "column oriented". You can think of a dataframe as a dictionary of columns (it isn't, although I believe it used to essentially have that under the hood, but now it uses a more complex "block manager" approach, but these are internal implementation details) | 6 | 3 |
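A short reproduction of the point made above, reusing the frame from the question: selecting the column first keeps its int64 dtype, while selecting the row first builds a new Series whose common dtype is float64.

```python
import datetime
import pandas as pd

dates = [datetime.datetime(2012, 2, 3), datetime.datetime(2012, 2, 4)]
x = pd.DataFrame({'Time': dates, 'Selected': [0, 0], 'Nr': [123.4, 25.2]}).set_index('Time')

print(x['Selected'].dtype)     # int64   -- the column is untouched
print(x.iloc[0].dtype)         # float64 -- the row Series is upcast so 'Nr' fits
print(x.iloc[0]['Selected'])   # 0.0
print(x['Selected'].iloc[0])   # 0
```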
70,377,512 | 2021-12-16 | https://stackoverflow.com/questions/70377512/can-a-function-and-local-variable-have-the-same-name | Here's an example of what I mean: def foo(): foo = 5 print(foo + 5) foo() # => 10 The code doesn't produce any errors and runs perfectly. This contradicts the idea that variables and functions shouldn't have the same name unless you overwrite them. Why does it work? And when applied to real code, should I use different function/local variable names, or is this perfectly fine? | foo = 5 creates a local variable inside your function. def foo creates a global variable. That's why they can both have the same name. If you refer to foo inside your foo() function, you're referring to the local variable. If you refer to foo outside that function, you're referring to the global variable. Since it evidently causes confusion for people trying to follow the code, you probably shouldn't do this. | 47 | 54 |
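A small follow-up example of the scoping rule described in the answer above: the assignment creates a local name inside the call, while the global name keeps pointing at the function.

```python
def foo():
    foo = 5            # local variable; shadows the global name only inside this call
    print(foo + 5)     # 10 -- uses the local int

foo()                  # the global `foo` is still the function object
print(type(foo))       # <class 'function'>
```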
70,376,424 | 2021-12-16 | https://stackoverflow.com/questions/70376424/other-than-max-value-replace-all-value-to-0-in-each-column-pandas-python | i have a df which I want to replace values that are not the max value for each column to 0. code: data = { "A": [1, 2, 3], "B": [3, 5, 1], "C": [9, 0, 1] } df = pd.DataFrame(data) sample df: A B C 0 1 3 9 1 2 5 0 2 3 1 1 result trying to get: A B C 0 0 0 9 1 0 5 0 2 3 0 0 kindly advise. many thanks | Try: df[df!=df.max()] = 0 Output: A B C 0 0 0 9 1 0 5 0 2 3 0 0 | 6 | 3 |
70,374,346 | 2021-12-16 | https://stackoverflow.com/questions/70374346/how-to-remove-special-characters-from-rows-in-pandas-dataframe | I have a column in pandas data frame like the one shown below; LGA Alpine (S) Ararat (RC) Ballarat (C) Banyule (C) Bass Coast (S) Baw Baw (S) Bayside (C) Benalla (RC) Boroondara (C) What I want to do, is to remove all the special characters from the ending of each row. ie. (S), (RC). Desired output should be; LGA Alpine Ararat Ballarat Banyule Bass Coast Baw Baw Bayside Benalla Boroondara I am not quite sure how to get desired output mentioned above. Any help would be appreciated. Thanks | I have different approach using regex. It will delete anything between brackets: import re import pandas as pd df = {'LGA': ['Alpine (S)', 'Ararat (RC)', 'Bass Coast (S)'] } df = pd.DataFrame(df) df['LGA'] = [re.sub("[\(\[].*?[\)\]]", "", x).strip() for x in df['LGA']] # delete anything between brackets | 6 | 2 |
70,369,790 | 2021-12-15 | https://stackoverflow.com/questions/70369790/python-requests-response-403-forbidden | So I am trying to scrape this website: https://www.auto24.ee I was able to scrape data from it without any problems, but today it gives me "Response 403". I tried using proxies, passing more information to headers, but unfortunately nothing seems to work. I could not find any solution on the internet, I tried different methods. The code that worked before without any problems: import requests headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36', } page = requests.get("https://www.auto24.ee/", headers=headers) print(page) | The code here import requests headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36'} page = requests.get("https://www.auto24.ee/", headers=headers) print(page.text) Always will get something as the following <div class="cf-section cf-wrapper"> <div class="cf-columns two"> <div class="cf-column"> <h2 data-translate="why_captcha_headline">Why do I have to complete a CAPTCHA?</h2> <p data-translate="why_captcha_detail">Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.</p> </div> <div class="cf-column"> <h2 data-translate="resolve_captcha_headline">What can I do to prevent this in the future?</h2> <p data-translate="resolve_captcha_antivirus">If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware.</p> The website is protected by CloudFlare. By standard means, there is minimal chance of being able to access the WebSite through automation such as requests or selenium. You are seeing 403 since your client is detected as a robot. There may be some arbitrary methods to bypass CloudFlare that could be found elsewhere, but the WebSite is working as intended. There must be a ton of data submitted through headers and cookies that show your request is valid, and since you are simply submitting only a user agent, CloudFlare is triggered. Simply spoofing another user-agent is not even close to enough to not trigger a captcha, CloudFlare checks for MANY things. I suggest you look at selenium here since it simulates a real browser, or research guides to (possibly?) bypass Cloudflare with requests. Update Found 2 python libraries cloudscraper and cfscrape. Both are not usable for this site since it uses cloudflare v2 unless you pay for a premium version. | 6 | 13 |
70,369,697 | 2021-12-15 | https://stackoverflow.com/questions/70369697/winsorizing-on-column-with-nan-does-not-change-the-max-value | Please note that a similar question was asked a while back but never answered (see Winsorizing does not change the max value). I am trying to winsorize a column in a dataframe using winsorize from scipy.stats.mstats. If there are no NaN values in the column then the process works correctly. However, NaN values seem to prevent the process from working on the top (but not the bottom) of the distribution. Regardless of what value I set for nan_policy, the NaN values are set to the maximum value in the distribution. I feel like a must be setting the option incorrectly some how. Below is an example that can be used to reproduce both correct winsorizing when there are no NaN values and the problem behavior I am experiencing when there NaN values are present. Any help on sorting this out would be appreciated. #Import import pandas as pd import numpy as np from scipy.stats.mstats import winsorize # initialise data of lists. data = {'Name':['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T'], 'Age':[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0]} # Create 2 DataFrames df = pd.DataFrame(data) df2 = pd.DataFrame(data) # Replace two values in 2nd DataFrame with np.nan df2.loc[5,'Age'] = np.nan df2.loc[8,'Age'] = np.nan # Winsorize Age in both DataFrames winsorize(df['Age'], limits=[0.1, 0.1], inplace = True, nan_policy='omit') winsorize(df2['Age'], limits=[0.1, 0.1], inplace = True, nan_policy='omit') # Check min and max values of Age in both DataFrames print('Max/min value of Age from dataframe without NaN values') print(df['Age'].max()) print(df['Age'].min()) print() print('Max/min value of Age from dataframe with NaN values') print(df2['Age'].max()) print(df2['Age'].min()) | It looks like the nan_policy is being ignored. But winsorization is just clipping, so you can handle this with pandas. def winsorize_with_pandas(s, limits): """ s : pd.Series Series to winsorize limits : tuple of float Tuple of the percentages to cut on each side of the array, with respect to the number of unmasked data, as floats between 0. and 1 """ return s.clip(lower=s.quantile(limits[0], interpolation='lower'), upper=s.quantile(1-limits[1], interpolation='higher')) winsorize_with_pandas(df['Age'], limits=(0.1, 0.1)) 0 3.0 1 3.0 2 3.0 3 4.0 4 5.0 5 6.0 6 7.0 7 8.0 8 9.0 9 10.0 10 11.0 11 12.0 12 13.0 13 14.0 14 15.0 15 16.0 16 17.0 17 18.0 18 18.0 19 18.0 Name: Age, dtype: float64 winsorize_with_pandas(df2['Age'], limits=(0.1, 0.1)) 0 2.0 1 2.0 2 3.0 3 4.0 4 5.0 5 NaN 6 7.0 7 8.0 8 NaN 9 10.0 10 11.0 11 12.0 12 13.0 13 14.0 14 15.0 15 16.0 16 17.0 17 18.0 18 19.0 19 19.0 Name: Age, dtype: float64 | 5 | 3 |
70,368,592 | 2021-12-15 | https://stackoverflow.com/questions/70368592/pandas-group-by-cumsum-with-a-flag-condition | Assuming i have the following data frame date flag user num 0 2019-01-01 1 a 10 1 2019-01-02 0 a 20 2 2019-01-03 1 b 30 3 2019-03-04 1 b 40 I want to create a cumulative sum of the nums grouped by user only if flag == 1 so i will get this: date flag user num cumsum 0 2019-01-01 1 a 10 10 1 2019-01-02 0 a 20 10 2 2019-01-03 1 b 30 30 3 2019-03-04 1 b 40 70 So far i was able to cumsum by flag, disregarding the group by user df['cumsum'] = df[df['flag'] == 1 ]['num'].transform(pd.Series.cumsum) or cumsum by user disregarding the flag df['cumsum'] = df.groupby('user')['num'].transform(pd.Series.cumsum) I need help making them work together. | You could multiply num by flag to make num = 0 where flag = 0, group by user, and cumsum: df['cumsum'] = df['num'].mul(df['flag']).groupby(df['user']).cumsum() Output: >>> df date flag user num cumsum 0 2019-01-01 1 a 10 10 1 2019-01-02 0 a 20 10 2 2019-01-03 1 b 30 30 3 2019-03-04 1 b 40 70 | 6 | 4 |
70,365,296 | 2021-12-15 | https://stackoverflow.com/questions/70365296/how-to-use-conda-update-n-base-conda-properly | I have two very simple questions regarding updating conda. I.e. when updating one of my environments with conda update --all, I get a warning ==> WARNING: A newer version of conda exists. <== current version: xyz1 latest version: xyz2 Please update conda by running $ conda update -n base conda My setup comprises a base environment and two actual work environments, say, (env1) and (env2). The latter two environments are kept up to date with conda update --all, issued within each of those environments. The base environment was only generated in the installation process of Anaconda. Question 1: Should one run conda update -n base conda on the command line of the OS (linux) prior to activating any environment? Or should one activate a particular environment? Or is the environment out of which this command is issued irrelevant? Question 2: After running conda update -n base conda from out of whatever environment, as determined by the answer to question 1, would a subsequent conda update --all issued within one of my two work environments (env1,2) install or update any additional stuff, only as a consequence of the previous conda update -n base conda? (PS.: I find many questions on stackoverflow regarding conda update conda, but they don't seem to cover this one.) | Q1: -n explicitly specifies environment - this command will run in any environment and even if you have no environment active. Q2: In all but very few cases updating conda will not affect the packages ought to be installed in other environments. conda plays the role as a package manager. The packages are pulled from an index that is independent of conda's version. | 28 | 7 |
70,363,722 | 2021-12-15 | https://stackoverflow.com/questions/70363722/python-pandas-groupby-and-if-any-condition | I have a dataframe similar to the one below, and I would like to create a new variable which contains true/false if for each project the sector "a" has been covered at least once. I'm trying with the group.by() function, and wanted to use the .transform() method but since my data is text, I don't know how to use it. project sector 01 a 01 b 02 b 02 b 03 a 03 a project sector new_col 01 a true 01 b true 02 b false 02 b false 03 a true 03 a true | You could try the following: df['new_col'] = df.groupby('project')['sector'].transform(lambda x: (x == 'a').any() ) This will group by project and check if any 'a' is in the groups sectors | 9 | 4 |
70,363,072 | 2021-12-15 | https://stackoverflow.com/questions/70363072/group-together-consecutive-numbers-in-a-list | I have an ordered Python list of forms: [1, 2, 3, 4, 5, 12, 13, 14, 15, 20, 21, 22, 23, 30, 35, 36, 37, 38, 39, 40] How can I group together consecutive numbers in a list. A group like this: [[1, 2, 3, 4, 5], [12, 13, 14, 15], [20, 21, 22, 23,], [30], [35, 36, 37, 38, 39, 40]] I tried using groupby from here but was not able to tailor it to my need. Thanks, | You could use negative indexing: def group_by_missing(seq): if not seq: return seq grouped = [[seq[0]]] for x in seq[1:]: if x == grouped[-1][-1] + 1: grouped[-1].append(x) else: grouped.append([x]) return grouped Example Usage: >>> lst = [1, 2, 3, 4, 5, 12, 13, 14, 15, 20, 21, 22, 23, 30, 35, 36, 37, 38, 39, 40] >>> group_by_missing(lst) [[1, 2, 3, 4, 5], [12, 13, 14, 15], [20, 21, 22, 23], [30], [35, 36, 37, 38, 39, 40]] | 5 | 6 |
70,359,226 | 2021-12-15 | https://stackoverflow.com/questions/70359226/circular-objects-rotate-angle-detection | I'm trying to detect angle difference between two circular objects, which be shown as 2 image below. I'm thinking about rotate one of image with some small angle. Every time one image rotated, SSIM between rotated image and the another image will be calculated. The angle with maximum SSIM will be the angle difference. But, finding the extremes is never an easy problem. So my question is: Are there another algorithms (opencv) can be used is this case? IMAGE #1 IMAGE #2 EDIT: Thanks @Micka, I just do the same way he suggest and remove black region like @Yves Daoust said to improve processing time. Here is my final result: ORIGINAL IMAGE ROTATED + SHIFTED IMAGE | Here's a way to do it: detect circles (for the example I assume circle is in the image center and radius is 50% of the image width) unroll circle images by polar coordinates make sure that the second image is fully visible in the first image, without a "circle end overflow" simple template matching Result for the following code: min: 9.54111e+07 pos: [0, 2470] angle-right: 317.571 angle-left: -42.4286 I think this should work quite well in general. int main() { // load images cv::Mat image1 = cv::imread("C:/data/StackOverflow/circleAngle/circleAngle1.jpg"); cv::Mat image2 = cv::imread("C:/data/StackOverflow/circleAngle/circleAngle2.jpg"); // generate circle information. Here I assume image center and image is filled by the circles. // use houghCircles or a RANSAC based circle detection instead, if necessary cv::Point2f center1 = cv::Point2f(image1.cols/2.0f, image1.rows/2.0f); cv::Point2f center2 = cv::Point2f(image2.cols / 2.0f, image2.rows / 2.0f); float radius1 = image1.cols / 2.0f; float radius2 = image2.cols / 2.0f; cv::Mat unrolled1, unrolled2; // define a size for the unrolling. Best might be to choose the arc-length of the circle. The smaller you choose this, the less resolution is available (the more pixel information of the circle is lost during warping) cv::Size unrolledSize(radius1, image1.cols * 2); // unroll the circles by warpPolar cv::warpPolar(image1, unrolled1, unrolledSize, center1, radius1, cv::WARP_POLAR_LINEAR); cv::warpPolar(image2, unrolled2, unrolledSize, center2, radius2, cv::WARP_POLAR_LINEAR); // double the first image (720Β° of the circle), so that the second image is fully included without a "circle end overflow" cv::Mat doubleImg1; cv::vconcat(unrolled1, unrolled1, doubleImg1); // the height of the unrolled image is exactly 360Β° of the circle double degreesPerPixel = 360.0 / unrolledSize.height; // template matching. 
Maybe correlation could be the better matching metric cv::Mat matchingResult; cv::matchTemplate(doubleImg1, unrolled2, matchingResult, cv::TemplateMatchModes::TM_SQDIFF); double minVal; double maxVal; cv::Point minLoc; cv::Point maxLoc; cv::Point matchLoc; cv::minMaxLoc(matchingResult, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat()); std::cout << "min: " << minVal << std::endl; std::cout << "pos: " << minLoc << std::endl; // angles in clockwise direction: std::cout << "angle-right: " << minLoc.y * degreesPerPixel << std::endl; std::cout << "angle-left: " << minLoc.y * degreesPerPixel -360.0 << std::endl; double foundAngle = minLoc.y * degreesPerPixel; // visualizations: // display the matched position cv::Rect pos = cv::Rect(minLoc, cv::Size(unrolled2.cols, unrolled2.rows)); cv::rectangle(doubleImg1, pos, cv::Scalar(0, 255, 0), 4); // resize because the images are too big cv::Mat resizedResult; cv::resize(doubleImg1, resizedResult, cv::Size(), 0.2, 0.2); cv::resize(unrolled1, unrolled1, cv::Size(), 0.2, 0.2); cv::resize(unrolled2, unrolled2, cv::Size(), 0.2, 0.2); double startAngleUpright = 0; cv::ellipse(image1, center1, cv::Size(100, 100), 0, startAngleUpright, startAngleUpright + foundAngle, cv::Scalar::all(255), -1, 0); cv::resize(image1, image1, cv::Size(), 0.5, 0.5); cv::imshow("image1", image1); cv::imshow("unrolled1", unrolled1); cv::imshow("unrolled2", unrolled2); cv::imshow("resized", resizedResult); cv::waitKey(0); } This is how the intermediate images and results look like: unrolled image 1 / unrolled 2 / unrolled 1 (720Β°) / best match of unrolled 2 in unrolled 1 (720Β°): | 7 | 8 |
70,359,583 | 2021-12-15 | https://stackoverflow.com/questions/70359583/drf-from-django-conf-urls-import-url-in-django-4-0 | I've a project in django 3.2 and I've updated (pip install -r requirements.txt) to version 4.0 (new release) and I've currently the below error when I run the server in a virtual environment. I use DRF. Can't import => from rest_framework import routers in urls.py from django.conf.urls import url ImportError: cannot import name 'url' from 'django.conf.urls' | As per the Django docs here: url(regex, view, kwargs=None, name=None)ΒΆ This function is an alias to django.urls.re_path(). Deprecated since version 3.1: Alias of django.urls.re_path() for backwards compatibility. django.conf.urls.url() was deprecated since Django 3.1, and as per the release notes here, is removed in Django 4.0+. You can resolve the issue using path or re_path. https://docs.djangoproject.com/en/4.0/ref/urls/ | 5 | 2 |
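A hedged sketch of the migration suggested in the answer above; `year_archive` is a placeholder view invented for the example, not something from the question:

```python
# urls.py (Django 4.0+) -- url() was removed; use re_path() or path() instead
from django.http import HttpResponse
from django.urls import path, re_path  # replaces: from django.conf.urls import url

def year_archive(request, year):       # placeholder view for the sketch
    return HttpResponse(f"archive for {year}")

urlpatterns = [
    # old style: url(r'^articles/(?P<year>[0-9]{4})/$', year_archive)
    re_path(r'^articles/(?P<year>[0-9]{4})/$', year_archive),
    # or, equivalently, with path converters:
    path('articles/<int:year>/', year_archive),
]
```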
70,344,193 | 2021-12-14 | https://stackoverflow.com/questions/70344193/pandas-dataframe-set-categories-the-inplace-parameter-in-pandas-categorical | I have following statement in my code: mcap_summary['cap'].cat.set_categories(['Large','Mid','Small','None'],inplace=True) Which now generates a warning as: D:\Python\Python39\lib\site-packages\pandas\core\arrays\categorical.py:2630: FutureWarning: The inplace parameter in pandas.Categorical.set_categories is deprecated and will be removed in a future version. Removing unused categories will always return a new Categorical object. res = method(*args, **kwargs) How am I suppose to write to avoid this warning and future errors? Thanks in advance | pandas v1.3.0+ deprecated the inplace option. Deprecated the inplace parameter of Categorical.remove_categories(), Categorical.add_categories(), Categorical.reorder_categories(), Categorical.rename_categories(), Categorical.set_categories() and will be removed in a future version (GH37643) β https://pandas.pydata.org/pandas-docs/version/1.3.0/whatsnew/v1.3.0.html#other-deprecations so you can code like below: mcap_summary['cap'] = mcap_summary['cap'].cat.set_categories(['Large', 'Mid', 'Small', 'None']) | 6 | 7 |
70,342,277 | 2021-12-13 | https://stackoverflow.com/questions/70342277/vs-code-python-execute-statement-during-debug | Is it possible to execute statements while the debug mode is active, possibly in the interactive mode? Let's say I'm working with a dataframe, and it doesn't behave as I want. I go line by line in debug mode, and I want to check some properties while doing that, for example the number of NaN values. Using the variable window to check the entries is obviously a waste of time for such a task. I could use write a print statement into my code, stop the debugging, start the debugging again, and then the print statement is part of the code and will be executed. But that would mean always stopping and starting debug, as soon as I run into an unforeseen problem and trying to find out what's happening. Is it possible to execute statements while still remaining in debug, at the line you currently are at. It would be especially good if there is a solution with running the statement in Interactive Mode because then I wouldn't need to mess with the original program. I hope it's clear but if not, I can try to construct an example with screenshots. Any help would be appreciated! | When on a breakpoint you can use the Debug Console to run Python code in the current context. It's in the same tab as "Problems", "Output" and "Terminal", typically under the Editor pane. See in the upper menu "View > Debug Console". More info is available in the official Visual Studio Code documentation: Debug Console REPL | 6 | 14 |
70,339,355 | 2021-12-13 | https://stackoverflow.com/questions/70339355/are-python-coroutines-stackless-or-stackful | I've seen conflicting views on whether Python coroutines (I primarily mean async/await) are stackless or stackful. Some sources say they're stackful: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2074r0.pdf 'Python coroutines are stackful.' How do coroutines in Python compare to those in Lua? Yes, Python coroutines are stackful, first-class and asymmetric. While others seem to imply they're stackless, e.g. https://gamelisp.rs/reference/coroutines.html GameLisp's coroutines follow the model set by Rust, Python, C# and C++. Our coroutines are "stackless" In general my understanding always was that any meaningful async/await implementation implies stackless coroutines, while stackful ones are basically fibers (userspace threads, often switched more or less cooperatively), like goroutines, Boost.Coroutine, apparently those in Lua etc. Is my understanding correct? Or do Python coroutines somehow fundamentally differ from those in say C++, and are stackful? Or do the authors of the source above mean different things? | TLDR: Going by the C++ definition, Python's coroutines too are stackless. The defining feature is that the coroutine's persistent state is stored separately from the stack. Coroutines are stackless: they suspend execution by returning to the caller and the data that is required to resume execution is stored separately from the stack. [β¦] (cppreference: Coroutines (C++20)) On a less technical level, Python requires await, async for, etc. to allow child coroutines to suspend their parent(s). It is not possible for a regularly called function to suspend its parent. async def foo(): a = await some_awaitable() # this may suspend `foo` as well b = some_function() # this cannot suspend `foo` return a + b In contrast to a stackless coroutine a stackful coroutine can be suspended from within a nested stackframe. (Boost: Coroutines) Stack and Coroutines The interaction of stack and coroutine is not as clearcut as for regular routines. A routine is either executing or done. This naturally maps to function execution adding a single execution frame to the stack;1 once that frame completes the routine completes and the frame is popped from the stack and discarded. In contrast, a coroutine can be executing, suspended, or done. This raises the question how to handle the state during suspension β practically, what to do with local variables. There are two prominent means: We can treat each partial execution like a regular routine. To get coroutine semantics, when such a routine finishes we safe its state. This allows us to switch between individual coroutines. We can treat the entire execution like a regular routine. To get coroutine semantics, when such a routine suspends we safe the entire stack. This allows us to switch between individual stacks. Solution 1. is usually called stackless β the coroutine lives separately from the stack. This gives a lot of power since each step is first-class, but it exposes a lot of implementation details. We must explicitly emulate the stacking of nested coroutine execution β usually, this is what await is there for. Solution 2. is usually called stackfull β the coroutine lives as part of the stack itself. This removes a lot of power since execution happens internally, but hides a lot of implementation details. The distinction of routines versus coroutines is hidden or even inconsequential β any routine may suspend at any point. 
These definitions are not universal. For example, you may find 1. being called stackfull since the coroutine itself keeps its own suspended stack. When in doubt, compare the semantics instead of the naming. However, this appears to be the most prevalent naming scheme (see references). Where does Python fit in? From a technical standpoint, Python's coroutines are stackless. Coroutine functions create first-class coroutine objects that allow partial execution and store state internally during suspension (as cr_frame). Coroutines are nested using explicit await and routines cannot await anything. Notably, Python itself does not support stackful coroutines: Regular routines cannot suspend execution.2 1The semantics of a frame stack do not necessarily mean that the implementation has a 1:1 relation of function execution to frames on a memory stack. Rather, consider that "the stack" as an abstract description of a runtime can be defined by function execution. Implementations may differ without having observable difference for high-level semantics of a language. Consider CPython which has a C stack running a Python stack. 2The absence of stackfull coroutines is not strictly guaranteed by the language semantics. There are third-party extensions to add stackfull coroutines, namely greenlet. Fittingly, this was introduced by Stackless Python β the "Stackless" refers to the use of the C-Stack by the interpreter, not to how coroutines use the stack. However, this is a separate suspension mechanism from Python's own await and yield suspension. References: Boost: Stackfull Coroutines Boost: Stackless Coroutines We Never Needed Stackfull Coroutines Stackless vs. Stackful Coroutines | 15 | 12 |
70,297,043 | 2021-12-9 | https://stackoverflow.com/questions/70297043/when-and-how-to-use-polynomial-fit-as-opposed-to-polyfit | Using Python 3.10.0 and NumPy 1.21.4. I'm trying to understand why Polynomial.fit() calculates wildly different coefficient values from polyfit(). In the following code: import numpy as np def main(): x = np.array([3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000, 5200, 5400, 5600, 5800, 6000, 6200, 6400, 6600, 6800, 7000]) y = np.array([5183.17702344, 5280.24520952, 5758.94478531, 6070.62698406, 6584.21169885, 8121.20863245, 7000.57326186, 7380.01493624, 7687.97802847, 7899.71417408, 8506.90860692, 8421.73816463, 8705.58403352, 9275.46094996, 9552.44715196, 9850.70796049, 9703.53073907, 9833.39941224, 9900.21604921, 9901.06392084, 9974.51206378]) c1 = np.polynomial.polynomial.polyfit(x, y, 2) c2 = np.polynomial.polynomial.Polynomial.fit(x, y, 2).coef print(c1) print(c2) if __name__ == '__main__': main() c1 contains: [-3.33620814e+03 3.44704650e+00 -2.18221029e-04] which produces the the line of best fit when plugged a + bx + cx^2 that I predicted while c2 contains: [8443.4986422 2529.67242075 -872.88411679] which results in a very different line when plugged into the same formula. The documentation seems to imply that Polynomial.fit() is the new preferred way of calculating the line but it keeps outputting the wrong coefficients (unless my understanding of polynomial regression is completely wrong). If I am not using the functions correctly, what is the correct way of using them? If I am using both functions correctly, why would I use Polynomial.fit() over polyfit(), as the documentation seems to imply I should? | According to Polynomial.fit() documentation, it returns: A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do new_series.convert().coef. You can find in https://numpy.org/doc/stable/reference/routines.polynomials.html#transitioning-from-numpy-poly1d-to-numpy-polynomial that coefficients are given in the scaled domain defined by the linear mapping between the window and domain. convert can be used to get the coefficients in the unscaled data domain. You can check import numpy as np def main(): x = np.array([3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000, 5200, 5400, 5600, 5800, 6000, 6200, 6400, 6600, 6800, 7000]) y = np.array([5183.17702344, 5280.24520952, 5758.94478531, 6070.62698406, 6584.21169885, 8121.20863245, 7000.57326186, 7380.01493624, 7687.97802847, 7899.71417408, 8506.90860692, 8421.73816463, 8705.58403352, 9275.46094996, 9552.44715196, 9850.70796049, 9703.53073907, 9833.39941224, 9900.21604921, 9901.06392084, 9974.51206378]) c1 = np.polynomial.polynomial.polyfit(x, y, 2) c2 = np.polynomial.polynomial.Polynomial.fit(x, y, 2).convert().coef c3 = np.polynomial.polynomial.Polynomial.fit(x, y, 2, window=(x.min(), x.max())).coef print(c1) print(c2) print(c3) if __name__ == '__main__': main() # [-3.33620814e+03 3.44704650e+00 -2.18221029e-04] # [-3.33620814e+03 3.44704650e+00 -2.18221029e-04] # [-3.33620814e+03 3.44704650e+00 -2.18221029e-04] Another argument for using np.polynomial.Polynomial class is stated here in the docs https://numpy.org/doc/stable/reference/routines.polynomials.package.html | 5 | 9 |
70,337,120 | 2021-12-13 | https://stackoverflow.com/questions/70337120/create-pydantic-model-for-optional-field-with-alias | Pydantic model for compulsory field with alias is created as follows class MedicalFolderUpdate(RWModel): id : str = Field(alias='_id') university : Optional[str] How to add optional field university's alias name 'school' as like of id? | It is not documented on the Pydantic website how to use the typing Optional with the Fields Default besides their allowed types in which they include the mentioned Optional: Optional[x] is simply shorthand for Union[x, None]; see Unions below for more detail on parsing and validation and Required Fields for details about required fields that can receive None as a value. for that, you would have to use their field customizations as in the example: class Figure(BaseModel): name: str = Field(alias='Name') edges: str = Field(default=None, alias='Edges') without the default value, it breaks because the optional does not override that the field is required and needs a default value. Which is the solution I used to overcome this problem while using Pydantic with fast API to manage mongo resources | 6 | 13 |
70,272,742 | 2021-12-8 | https://stackoverflow.com/questions/70272742/how-to-check-for-floatnan-in-python | In some data I am processing I am encountering data of the type float, which are filled with 'nan', i.e. float('nan'). However checking for it does not work as expected: float('nan') == float('nan') >> False You can check it with math.isnan, but as my data also contains strings (For example: 'nan', but also other user input), it is not that convenient: import math math.isnan(float('nan')) >> True math.isnan('nan') >> TypeError: must be real number, not str In the ideal world I would like to check if a value is in a list of all possible NaN values, for example: import numpy as np if x in ['nan', np.nan, ... ]: # Do something pass Now the question: How can I still use this approach but also check for float('nan') values? And why equals float('nan') == float('nan') False | Why not just wrap whatever you pass to math.isnan with another float conversion? If it was already a float (i.e. float('nan')) you just made a "redundant" call: import math def is_nan(value): return math.isnan(float(value)) And this seems to give your expected results: >>> is_nan(float('nan')) True >>> is_nan('nan') True >>> is_nan(np.nan) True >>> is_nan(5) False >>> is_nan('5') False This will still raise a ValueError for non-numeric (except 'nan') strings. If that's a problem, you can wrap with try/except. As long as the float conversion worked, there is no reason for isnan to fail. So we are basically catching non-numeric strings that my fail the float conversion: def is_nan(value): try: return math.isnan(float(value)) except ValueError: return False Any non-numeric string is surely not a NaN value so return False. | 16 | 13 |
70,318,352 | 2021-12-11 | https://stackoverflow.com/questions/70318352/how-to-get-the-price-of-a-crypto-at-a-given-time-in-the-past | Is there any way I can use ccxt to extract the price of a crypto currency at a given time in the past? Example: get price of BTC on binance at time 2018-01-24 11:20:01 | You can use the fetch_ohlcv method on the binance class in CCXT def fetch_ohlcv(self, symbol, timeframe='1m', since=None, limit=None, params={}): You'll need the date as a timestamp in milliseconds, and you can only get it precise to the minute, so take away the seconds, or you'll get the price for the minute after timestamp = int(datetime.datetime.strptime("2018-01-24 11:20:00", "%Y-%m-%d %H:%M:%S").timestamp() * 1000) You can only get the price of BTC in comparison to another currency, we'll use USDT(closely matches USD) as our comparison currency, so we will look up the price of BTC in the BTC/USDT market When we use the method, we will set since to your timestamp, but set the limit as one so that we only get one price import ccxt from pprint import pprint print('CCXT Version:', ccxt.__version__) exchange = ccxt.binance() timestamp = int(datetime.datetime.strptime("2018-01-24 11:20:00+00:00", "%Y-%m-%d %H:%M:%S%z").timestamp() * 1000) response = exchange.fetch_ohlcv('BTC/USDT', '1m', timestamp, 1) pprint(response) Which will return candlestick values for one candle [ 1516792860000, // timestamp 11110, // value at beginning of minute, so the value at exactly "2018-01-24 11:20:00" 11110.29, // highest value between "2018-01-24 11:20:00" and "2018-01-24 11:20:59" 11050.91, // lowest value between "2018-01-24 11:20:00" and "2018-01-24 11:20:59" 11052.27, // value just before "2018-01-24 11:21:00" 39.882601 // The volume traded during this minute ] | 7 | 11 |
70,297,011 | 2021-12-9 | https://stackoverflow.com/questions/70297011/why-is-numba-so-fast | I want to write a function which will take an index lefts of shape (N_ROWS,) I want to write a function which will create a matrix out = (N_ROWS, N_COLS) matrix such that out[i, j] = 1 if and only if j >= lefts[i]. A simple example of doing this in a loop is here: class Looped(Strategy): def copy(self, lefts): out = np.zeros([N_ROWS, N_COLS]) for k, l in enumerate(lefts): out[k, l:] = 1 return out Now I wanted it to be as fast as possible, so I have different implementations of this function: The plain python loop numba with @njit A pure c++ implementation which I call with ctypes Here are the results of the average of 100 runs: Looped took 0.0011599776260009093 Numba took 8.886413300206186e-05 CPP took 0.00013200821400096175 So numba is about 1.5 times than the next fastest implementation which is the c++ one. My question is why? I have heard in similar questions cython was slower because it wasn't being compiled with all the optimisation flags set, but the cpp implementation was compiled with -O3 is that enough for me to have all the possible optimisations the compiler will give me? I do not fully understand how to hand the numpy array to c++, am I inadvertently making a copy of the data here? # numba implementation @njit def numba_copy(lefts): out = np.zeros((N_ROWS, N_COLS), dtype=np.float32) for k, l in enumerate(lefts): out[k, l:] = 1. return out class Numba(Strategy): def __init__(self) -> None: # avoid compilation time when timing numba_copy(np.array([1])) def copy(self, lefts): return numba_copy(lefts) // array copy cpp extern "C" void copy(const long *lefts, float *outdatav, int n_rows, int n_cols) { for (int i = 0; i < n_rows; i++) { for (int j = lefts[i]; j < n_cols; j++){ outdatav[i*n_cols + j] = 1.; } } } // compiled to a .so using g++ -O3 -shared -o array_copy.so array_copy.cpp # using cpp implementation class CPP(Strategy): def __init__(self) -> None: lib = ctypes.cdll.LoadLibrary("./array_copy.so") fun = lib.copy fun.restype = None fun.argtypes = [ ndpointer(ctypes.c_long, flags="C_CONTIGUOUS"), ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"), ctypes.c_long, ctypes.c_long, ] self.fun = fun def copy(self, lefts): outdata = np.zeros((N_ROWS, N_COLS), dtype=np.float32, ) self.fun(lefts, outdata, N_ROWS, N_COLS) return outdata The full code with the timing etc: import time import ctypes from itertools import combinations import numpy as np from numpy.ctypeslib import ndpointer from numba import njit N_ROWS = 1000 N_COLS = 1000 class Strategy: def copy(self, lefts): raise NotImplementedError def __call__(self, lefts): s = time.perf_counter() n = 1000 for _ in range(n): out = self.copy(lefts) print(f"{type(self).__name__} took {(time.perf_counter() - s)/n}") return out class Looped(Strategy): def copy(self, lefts): out = np.zeros([N_ROWS, N_COLS]) for k, l in enumerate(lefts): out[k, l:] = 1 return out @njit def numba_copy(lefts): out = np.zeros((N_ROWS, N_COLS), dtype=np.float32) for k, l in enumerate(lefts): out[k, l:] = 1. 
return out class Numba(Strategy): def __init__(self) -> None: numba_copy(np.array([1])) def copy(self, lefts): return numba_copy(lefts) class CPP(Strategy): def __init__(self) -> None: lib = ctypes.cdll.LoadLibrary("./array_copy.so") fun = lib.copy fun.restype = None fun.argtypes = [ ndpointer(ctypes.c_long, flags="C_CONTIGUOUS"), ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"), ctypes.c_long, ctypes.c_long, ] self.fun = fun def copy(self, lefts): outdata = np.zeros((N_ROWS, N_COLS), dtype=np.float32, ) self.fun(lefts, outdata, N_ROWS, N_COLS) return outdata def copy_over(lefts): strategies = [Looped(), Numba(), CPP()] outs = [] for strategy in strategies: o = strategy(lefts) outs.append(o) for s_0, s_1 in combinations(outs, 2): for a, b in zip(s_0, s_1): np.testing.assert_allclose(a, b) if __name__ == "__main__": copy_over(np.random.randint(0, N_COLS, size=N_ROWS)) | Numba currently uses LLVM-Lite to compile the code efficiently to a binary (after the Python code has been translated to an LLVM intermediate representation). The code is optimized like a C++ code compiled with Clang with the flags -O3 and -march=native. This last parameter is very important as it enables LLVM to use wider SIMD instructions on relatively-recent x86-64 processors: AVX and AVX2 (possibly AVX512 for very recent Intel processors). Otherwise, by default, Clang and GCC use only the SSE/SSE2 instructions (because of backward compatibility). Another difference come from the comparison between GCC and the LLVM code from Numba. Clang/LLVM tends to aggressively unroll the loops while GCC often don't. This has a significant performance impact on the resulting program. In fact, you can see that the generated assembly code from Clang: With Clang (128 items per loops): .LBB0_7: vmovups ymmword ptr [r9 + 4*r8 - 480], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 448], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 416], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 384], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 352], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 320], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 288], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 256], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 224], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 192], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 160], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 128], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 96], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 64], ymm0 vmovups ymmword ptr [r9 + 4*r8 - 32], ymm0 vmovups ymmword ptr [r9 + 4*r8], ymm0 sub r8, -128 add rbp, 4 jne .LBB0_7 With GCC (8 items per loops): .L5: mov rdx, rax vmovups YMMWORD PTR [rax], ymm0 add rax, 32 cmp rdx, rcx jne .L5 Thus, to be fair, you need to compare the Numba code with the C++ code compiled with Clang and the above optimization flags. Note that regarding your needs and the size of your last-level processor cache, you can write a faster platform-specific C++ code using non-temporal stores (NT-stores). NT-stores tell to the processor not to store the array in its cache. Writing data using NT-stores is faster to write huge arrays in RAM, but this can slower when you read the stored array after the copy if the array could fit in the cache (since the array will have to be reloaded from the RAM). In your case (4 MiB array), this is unclear either this would be faster. | 7 | 9 |
70,329,648 | 2021-12-13 | https://stackoverflow.com/questions/70329648/type-friendly-delegation-in-python | Consider the following code sample def sum(a: int, b: int): return a + b def wrap(*args, **kwargs): # delegate to sum return sum(*args, **kwargs) The code works well except that type hint is lost. It's very common in Python to use *args, **kwargs to implement delegation pattern. It would be great to have a way to keep the type hint while using them, but I don't know if it is possible and how. | See https://github.com/python/typing/issues/270 for a long discussion of this problem. You can achieve this by decorating wrap with an appropriately typed identity function: F = TypeVar("F", bound=Callable) def copy_signature(_: F) -> Callable[..., F]: return lambda f: f def s(x: int, y: int) -> int: return x + y @copy_signature(s) def wrap(*args, **kwargs): s(*args, **kwargs) reveal_type(wrap) # Revealed type is "def (x: int, y: int) -> int" As far as I know, the decorator is necessary - it is still not possible to do this using type hints alone, even with PEP612. Since it was already good practice to use the functools.wraps decorator in this situation (which copies the runtime type information), this is not such a loss - you could instead define def wraps(f: F) -> Callable[..., F]: return functools.wraps(f) # type: ignore and then both the runtime and static type information should be correct so long as you use this decorator. (Sadly the typeshed stubs for functools.wraps included with mypy aren't quite restrictive enough to get this working out of the box.) PEP612 adds the ability to add/remove arguments in your wrapper (by combining ParamSpec with Concatenate), but it doesn't remove the need for some kind of higher-order function (like a decorator) to let the type system infer the signature of wrap from that of s. | 11 | 6 |
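For the decorator-style delegation case, PEP 612's ParamSpec (in typing on Python 3.10+, or in typing_extensions) avoids the identity-function trick above; a hedged sketch, covering only the case where the wrapper forwards the signature unchanged:

import functools
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def delegate(func: Callable[P, R]) -> Callable[P, R]:
    @functools.wraps(func)
    def inner(*args: P.args, **kwargs: P.kwargs) -> R:
        return func(*args, **kwargs)  # forwarding preserves func's exact signature
    return inner

def s(x: int, y: int) -> int:
    return x + y

wrap = delegate(s)  # type checkers see wrap as (x: int, y: int) -> int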
70,356,656 | 2021-12-14 | https://stackoverflow.com/questions/70356656/google-colab-change-variable-name-everywhere-in-code-cell | When I highlight a variable within a cell in Colab, all the instances of that variable light up. Is it possible to change all these instances of the variable at the same time and, if so, how? | Google Colab uses the same keyboard shortcuts as Microsoft VS Code: press Ctrl+D to add the next occurrence to the selection, or Ctrl+Shift+L to select all occurrences at once. Once the occurrences are selected, you can edit them all by simply typing the new name. If you use very short variable names (such as x), this method will probably not work well, as every word containing an x will also be selected. | 7 | 10 |
70,319,606 | 2021-12-11 | https://stackoverflow.com/questions/70319606/importerror-cannot-import-name-url-from-django-conf-urls-after-upgrading-to | After upgrading to Django 4.0, I get the following error when running python manage.py runserver ... File "/path/to/myproject/myproject/urls.py", line 16, in <module> from django.conf.urls import url ImportError: cannot import name 'url' from 'django.conf.urls' (/path/to/my/venv/lib/python3.9/site-packages/django/conf/urls/__init__.py) My urls.py is as follows: from django.conf.urls from myapp.views import home urlpatterns = [ url(r'^$', home, name="home"), url(r'^myapp/', include('myapp.urls'), ] | django.conf.urls.url() was deprecated in Django 3.0, and is removed in Django 4.0+. The easiest fix is to replace url() with re_path(). re_path uses regexes like url, so you only have to update the import and replace url with re_path. from django.urls import include, re_path from myapp.views import home urlpatterns = [ re_path(r'^$', home, name='home'), re_path(r'^myapp/', include('myapp.urls'), ] Alternatively, you could switch to using path. path() does not use regexes, so you'll have to update your URL patterns if you switch to path. from django.urls import include, path from myapp.views import home urlpatterns = [ path('', home, name='home'), path('myapp/', include('myapp.urls'), ] If you have a large project with many URL patterns to update, you may find the django-upgrade library useful to update your urls.py files. | 204 | 323 |
70,300,675 | 2021-12-10 | https://stackoverflow.com/questions/70300675/fastapi-uvicorn-run-always-create-3-instances-but-i-want-it-1-instance | I run FastAPI at PyCharm IDE and it always run 3 workers. I don't know why, but, the last instance created is being accessed on every API call. Could anyone can help how can I get single running worker? Code: import uvicorn from fastapi import FastAPI from fastapi.templating import Jinja2Templates from starlette.middleware.cors import CORSMiddleware app = FastAPI() app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]) print(f"main.py with :{app}") @app.get('/') def home(): return "Hello" if __name__ == "__main__": uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=False, log_level="debug", debug=True, workers=1, limit_concurrency=1, limit_max_requests=1) Console output: /Users/user/.pyenv/versions/3.7.10/bin/python /Users/user/github/my-project/backend/main.py main.py with :<fastapi.applications.FastAPI object at 0x102b35d50> INFO: Will watch for changes in these directories: ['/Users/user/github/my-project/backend'] INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) INFO: Started reloader process [96259] using statreload main.py with :<fastapi.applications.FastAPI object at 0x10daadf50> main.py with :<fastapi.applications.FastAPI object at 0x1106bfe50> INFO: Started server process [96261] INFO: Waiting for application startup. INFO: Application startup complete. | Thanks to your last comment I understood better your question. Your real question So in fact your question is why is FastAPI object is created 3 times. In the log one can indeed see that you have 3 different memory adresses 0x102b35d50, 0x10daadf50, 0x1106bfe50 This doesn't mean that you have 3 workers, just that the FastAPI object is created 3 times. The last one this the one that your API will use. Why it happens The object is created multiple times, because : First, you run main.py that go through the all code (one creation of FastAPI object), and then reach the __main__ Then uvicorn launch main:app so it go once again to the file main.py and build another FastAPI object. The last one is created by the debug=True when you set it to False you have one less FastAPI object created. I'm not quite sure why. The solution The solution is to separate the API definition from the start of the API. For example, one could create a run.py file with : import uvicorn if __name__ == "__main__": uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=False, log_level="debug", debug=True, workers=1, limit_concurrency=1, limit_max_requests=1) and launch this file. Another option would be to launch your API in command line: uvicorn main:app --host=0.0.0.0 --port=8000 --log-level=debug --limit-max-requests=1 --limit-concurrency=1 You can find all the command line arguments here | 15 | 24 |
70,339,062 | 2021-12-13 | https://stackoverflow.com/questions/70339062/read-spark-data-with-column-that-clashes-with-partition-name | I have the following file paths that we read with partitions on s3 prefix/company=abcd/service=xyz/date=2021-01-01/file_01.json prefix/company=abcd/service=xyz/date=2021-01-01/file_02.json prefix/company=abcd/service=xyz/date=2021-01-01/file_03.json When I read these with pyspark self.spark \ .read \ .option("basePath", 'prefix') \ .schema(self.schema) \ .json(['company=abcd/service=xyz/date=2021-01-01/']) All the files have the same schema and get loaded in the table as rows. A file could be something like this: {"id": "foo", "color": "blue", "date": "2021-12-12"} The issue is that sometimes the files have the date field that clashes with my partition code, like date. So I want to know if it is possible to load the files without the partition columns, rename the JSON date column and then add the partition columns. Final table would be: | id | color | file_date | company | service | date | ------------------------------------------------------------- | foo | blue | 2021-12-12 | abcd | xyz | 2021-01-01 | | bar | red | 2021-10-10 | abcd | xyz | 2021-01-01 | | baz | green | 2021-08-08 | abcd | xyz | 2021-01-01 | EDIT: More information: I have 5 or 6 partitions sometimes and date is one of them (not the last). And I need to read multiple date partitions at once too. The schema that I pass to Spark contains also the partition columns which makes it more complicated. I don't control the input data so I need to read as is. I can rename the file columns but not the partition columns. Would it be possible to add an alias to file columns as we would do when joining 2 dataframes? Spark 3.1 | One way is to list the files under prefix S3 path using for example Hadoop FS API, then pass that list to spark.read. This way Spark won't detect them as partitions and you'll be able to rename the file columns if needed. After you load the files into a dataframe, loop through the df columns and rename those which are also present in your partitions_colums list (by adding file prefix for example). Finally, parse the partitions from the input_file_name() using regexp_extract function. Here's an example: from pyspark.sql import functions as F Path = sc._gateway.jvm.org.apache.hadoop.fs.Path conf = sc._jsc.hadoopConfiguration() s3_path = "s3://bucket/prefix" file_cols = ["id", "color", "date"] partitions_cols = ["company", "service", "date"] # listing all files for input path json_files = [] files = Path(s3_path).getFileSystem(conf).listFiles(Path(s3_path), True) while files.hasNext(): path = files.next().getPath() if path.getName().endswith(".json"): json_files.append(path.toString()) df = spark.read.json(json_files) # you can pass here the schema of the files without the partition columns # renaming file column in if exists in partitions df = df.select(*[ F.col(c).alias(c) if c not in partitions_cols else F.col(c).alias(f"file_{c}") for c in df.columns ]) # parse partitions from filenames for p in partitions_cols: df = df.withColumn(p, F.regexp_extract(F.input_file_name(), f"/{p}=([^/]+)/", 1)) df.show() #+-----+----------+---+-------+-------+----------+ #|color| file_date| id|company|service| date| #+-----+----------+---+-------+-------+----------+ #|green|2021-08-08|baz| abcd| xyz|2021-01-01| #| blue|2021-12-12|foo| abcd| xyz|2021-01-01| #| red|2021-10-10|bar| abcd| xyz|2021-01-01| #+-----+----------+---+-------+-------+----------+ | 6 | 3 |
70,321,132 | 2021-12-12 | https://stackoverflow.com/questions/70321132/mypy-cant-find-types-for-my-local-package | I have two python packages. One is a util library, and one is an application that will use the util library (eventually I will have more apps sharing the library. I am using poetry for both, and the app specifies the common library as a dependency using the path and develop properties. For example, my layout looks something like this: - common/ - common/ - __init__.py - py.typed - pyproject.toml - myapp/ - myapp/ - __init__.py - py.typed - pyproject.toml And the myapp\pyproject.toml looks something like this: [tool.poetry] name = "myapp" version = "0.1.0" description = "" authors = ["Your Name <[email protected]>"] [tool.poetry.dependencies] python = "^3.9" common = { path = "../common", develop = true } [tool.poetry.dev-dependencies] mypy = "^0.910" flake8 = "^4.0.1" black = {version = "^21.12b0", allow-prereleases = true} pytest = "^6.2.5" pytest-cov = "^3.0.0" pytest-mock = "^3.6.1" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" When I run mypy on myapp I get something like: myapp/__init__.py:1:1: error: Cannot find implementation or library stub for module named "common" [import] | Assuming the "util" package has type hints, you'll want to add a py.typed file to it (in the root directory of the package) so mypy understands it comes with type hints. py.typed should be empty, it's just a flag file. If your package does not have type hints, then you'd have to add stubs files (.pyi). More details: https://mypy.readthedocs.io/en/stable/installed_packages.html#creating-pep-561-compatible-packages | 12 | 7 |
70,356,867 | 2021-12-14 | https://stackoverflow.com/questions/70356867/solverproblemerror-on-install-tensorfow-with-poetry | When I add tensorflow with poetry (poetry add tensorflow) I get this error: Using version ^2.7.0 for tensorflow Updating dependencies Resolving dependencies... (0.8s) SolverProblemError The current project's Python requirement (>=3.6,<4.0) is not compatible with some of the required packages Python requirement: - tensorflow-io-gcs-filesystem requires Python >=3.6, <3.10, so it will not be satisfied for Python >=3.10,<4.0 - tensorflow-io-gcs-filesystem requires Python >=3.6, <3.10, so it will not be satisfied for Python >=3.10,<4.0 - tensorflow-io-gcs-filesystem requires Python >=3.6, <3.10, so it will not be satisfied for Python >=3.10,<4.0 .... For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to ">=3.6,<3.10" For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to ">=3.6,<3.10" For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to ">=3.6,<3.10" | The error message tells you that your project claims to be compatible with Python >=3.6,<4.0 (you probably have ^3.6 in your pyproject.toml), but tensorflow's dependency tensorflow-io-gcs-filesystem declares compatibility only with >=3.6,<3.10, which is a subset of your project's range. Poetry doesn't care about your current environment; it cares that the project definition is satisfiable for every Python version it claims to support. The solution is already suggested in the error message: set the version range for python in your pyproject.toml to >=3.6,<3.10. | 4 | 8 |
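Concretely, the change the error message asks for is one line in the project's pyproject.toml (versions taken from the question):

[tool.poetry.dependencies]
python = ">=3.6,<3.10"
tensorflow = "^2.7.0"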
70,351,937 | 2021-12-14 | https://stackoverflow.com/questions/70351937/loading-pandas-dataframe-from-parquet-lists-are-deserialized-as-numpys-ndarra | import pandas as pd df = pd.DataFrame({ "col1" : ["a", "b", "c"], "col2" : [[1,2,3], [4,5,6,7], [8,9,10,11,12]] }) df.to_parquet("./df_as_pq.parquet") df = pd.read_parquet("./df_as_pq.parquet") [type(val) for val in df["col2"].tolist()] Output: [<class 'numpy.ndarray'>, <class 'numpy.ndarray'>, <class 'numpy.ndarray'>] Is there any way in which I can read the parquet file and get the list values as pythonic lists (just like at creation)? Preferably using pandas but willing to try alternatives. The problem I'm facing is that I have no prior knowledge which columns hold lists, so I check types similarly to what I do in the code. Assuming I'm not interested in adding numpy currently as a dependency, is there any way to check if a variable is array-like without explicitly importing and specifying np.ndarray? | You can't change this behavior in the API, either when loading the parquet file into an arrow table or converting the arrow table to pandas. But you can write your own function that would look at the schema of the arrow table and convert every list field to a python list import pyarrow as pa import pyarrow.parquet as pq def load_as_list(file): table = pq.read_table(file) df = table.to_pandas() for field in table.schema: if pa.types.is_list(field.type): df[field.name] = df[field.name].apply(list) return df load_as_list("./df_as_pq.parquet") | 11 | 5 |
70,350,364 | 2021-12-14 | https://stackoverflow.com/questions/70350364/automatic-change-of-discord-about-me | THE PROBLEM: In Discord, as you may know, there is an "About me" section. This section is a description of a profile that you can write yourself. Bots can have an "About me" section. What I want to do is to edit this "About me" section automatically in discord.py; For example, the "About me" section of the bot changes every hour. WHAT I TRIED: I searched for some answers for a long time but didn't find anything relevant. I saw that you could modify the "about me" with the Developer portal, but it's not automated. I saw that some people said "This will be able in discord.py V2" but didn't find it It may be possible to resolve this problem with HTTP requests but it's only a supposition, I'm not very good at this topic. CODE: @bot.event async def on_ready(): while 1: await change_the_about_me_section(str(random.randint(0,1000)) time.sleep(3600) # change_the_about_me_section isn't a real function # I just wanted to show an exemple of what I wanted to do | There is an answer. You can fully automate it with Python in a few lines. With the requests library. You just have to first include requests with: import requests Then, do a patch request requests.patch(url="https://discord.com/api/v9/users/@me", headers= {"authorization": token}, json = {"bio": abio} ) # With token your token, and abio a string like "hello" And... that's it :) (Note: You can also do that with the color of the Account with accent_color instead of bio, and the banner with banner instead of bio) | 5 | 8 |
70,353,961 | 2021-12-14 | https://stackoverflow.com/questions/70353961/tenacity-output-the-messages-of-retrying | The code import logging from tenacity import retry, wait_incrementing, stop_after_attempt import tenacity @retry(wait=wait_incrementing(start=10, increment=10, max=100), stop=stop_after_attempt(3)) def print_msg(): logging.info('Hello') logging.info("World") raise Exception('Test error') if __name__ == '__main__': logging.basicConfig( format='%(asctime)s,%(msecs)d %(levelname)-8s [%(filename)s:%(lineno)d] %(message)s', datefmt='%d-%m-%Y:%H:%M:%S', level=logging.INFO) logging.info('Starting') print_msg() output 21-11-2018:12:40:48,586 INFO [retrier.py:18] Starting 21-11-2018:12:40:48,586 INFO [retrier.py:8] Hello 21-11-2018:12:40:48,586 INFO [retrier.py:9] World 21-11-2018:12:40:58,592 INFO [retrier.py:8] Hello 21-11-2018:12:40:58,592 INFO [retrier.py:9] World 21-11-2018:12:41:18,596 INFO [retrier.py:8] Hello 21-11-2018:12:41:18,596 INFO [retrier.py:9] World 21-11-2018:12:41:18,596 ERROR [retrier.py:22] Received Exception .... How to log it's retrying? Such as Error. Retrying 1... ... Error. Retrying 2... ... | You can write your own callback function to get attempt_number from retry_state. code: def log_attempt_number(retry_state): """return the result of the last call attempt""" logging.error(f"Retrying: {retry_state.attempt_number}...") log the attempt after every call of the function as shown below @retry(wait=wait_incrementing(start=10, increment=10, max=100), stop=stop_after_attempt(3), after=log_attempt_number) Output: 14-12-2021:19:01:26,716 INFO [<ipython-input-15-f6916dbe7ec1>:24] Starting 14-12-2021:19:01:26,718 INFO [<ipython-input-15-f6916dbe7ec1>:15] Hello 14-12-2021:19:01:26,720 INFO [<ipython-input-15-f6916dbe7ec1>:16] World 14-12-2021:19:01:26,723 ERROR [<ipython-input-15-f6916dbe7ec1>:11] Retrying: 1... 14-12-2021:19:01:36,731 INFO [<ipython-input-15-f6916dbe7ec1>:15] Hello 14-12-2021:19:01:36,733 INFO [<ipython-input-15-f6916dbe7ec1>:16] World 14-12-2021:19:01:36,735 ERROR [<ipython-input-15-f6916dbe7ec1>:11] Retrying: 2... 14-12-2021:19:01:56,756 INFO [<ipython-input-15-f6916dbe7ec1>:15] Hello 14-12-2021:19:01:56,758 INFO [<ipython-input-15-f6916dbe7ec1>:16] World 14-12-2021:19:01:56,759 ERROR [<ipython-input-15-f6916dbe7ec1>:11] Retrying: 3... | 4 | 13 |
70,351,360 | 2021-12-14 | https://stackoverflow.com/questions/70351360/keep-getting-307-temporary-redirect-before-returning-status-200-hosted-on-fast | Edit: I found the problem but not sure why this happens. Whenever I query: http://localhost:4001/hello/ with the "/" in the end - I get a proper 200 status response. I do not understand why. Original Post: Whenever I send a query to my app - I keep getting a 307 redirect. How to get my app to return regular status 200 instead of redirecting it through 307 This is the request output: abm | INFO: 172.18.0.1:46476 - "POST /hello HTTP/1.1" 307 Temporary Redirect abm | returns the apples data. nothing special here. abm | INFO: 172.18.0.1:46480 - "POST /hello/ HTTP/1.1" 200 OK pytest returns: E assert 307 == 200 E + where 307 = <Response [307]>.status_code test_main.py:24: AssertionError in my root dir: /__init__.py file: from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware # from .configs import cors from .subapp import router_hello from .potato import router_potato from .apple import router_apple abm = FastAPI( title = "ABM" ) # potato.add_middleware(cors) abm.add_middleware( CORSMiddleware, allow_origins=["*"], allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) abm.include_router(router_hello.router) abm.include_router(router_potato.router) abm.include_router(router_apple.router) @abm.post("/test", status_code = 200) def test(): print('test') return 'test' /subapp/router_hello.py file: router = APIRouter( prefix='/hello', tags=['hello'], ) @router.post("/", status_code = 200) def hello(req: helloBase, apple: appleHeader = Depends(set_apple_header), db: Session = Depends(get_db)) -> helloResponse: db_apple = apple_create(apple, db, req.name) if db_apple: return set_hello_res(db_apple.potato.api, db_apple.name, 1) else: return "null" in /Dockerfile: CMD ["uvicorn", "abm:abm", "--reload", "--proxy-headers", "--host", "0.0.0.0", "--port", "4001", "--forwarded-allow-ips", "*", "--log-level", "debug"] I tried with and without "--forwarded-allow-ips", "*" part. | It happens because the exact path defined by you for your view is yourdomainname/hello/, so when you hit it without / at the end, it first attempts to get to that path but as it is not available it checks again after appending / and gives a redirect status code 307 and then when it finds the actual path it returns the status code that is defined in the function/view linked with that path, i.e status code 200 in your case. You can also read more about the issue here: https://github.com/tiangolo/fastapi/issues/2060#issuecomment-834868906 | 58 | 114 |
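If you would rather serve POST /hello directly without the redirect, one option (hedged; behaviour can vary slightly across FastAPI/Starlette versions, and the original dependencies are trimmed here) is to register the route at the router prefix itself by using an empty path instead of "/":

from fastapi import APIRouter

router = APIRouter(prefix='/hello', tags=['hello'])

@router.post("", status_code=200)   # registers exactly /hello, so no 307
def hello():
    return "hello"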
70,324,030 | 2021-12-12 | https://stackoverflow.com/questions/70324030/using-apply-function-works-however-using-assign-the-same-function-does-not | I am a bit stuck, I have a working function that can be utilised using .apply(), however, I cannot seem to get it to work with .assign(). I'd like this to work with assign, so I can chain a number of transformations together. Could anyone point me in the right direction to resolving the issue? This works data = {'heading': ['some men', 'some men', 'some women']} dataframe = pd.DataFrame(data=data) def add_gender(x): if re.search("(womens?)", x.heading, re.IGNORECASE): return 'women' elif re.search("(mens?)", x.heading, re.IGNORECASE): return 'men' else: return 'unisex' dataframe['g'] = dataframe.apply(lambda ref: add_gender(ref), axis=1) This does not work dataframe = dataframe.assign(gender = lambda ref: add_gender(ref)) TypeError: expected string or bytes-like object Is this because .assign() does not provide an axis argument? So perhaps the function is not looking for the right thing? Having read the documentation .assign states you can generate a new column, so I assumed the output would be the same as .apply(axis=1) | From the documentation of DataFrame.assign: DataFrame.assign(**kwargs) (...) Parameters **kwargs : dict of {str: callable or Series} The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesnβt check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned. This means that in dataframe = dataframe.assign(gender=lambda ref: add_gender(ref)) ref stands for the calling DataFrame, i.e. dataframe, and thus you are passing the whole dataframe to the function add_gender. However, according to how it's defined, add_gender expects a single row (Series object) to be passed as the argument x, not the whole DataFrame. if re.search("(womens?)", x.heading, re.IGNORECASE): In the case of assign, x.heading stands for the whole column heading of dataframe (x), which is a Series object. However, re.search only works with string or bytes-like objects, so the error is raised. While in the case of apply, x.heading corresponds to the field heading of each individual row x of dataframe, which are string values. To solve this just use assign with apply. Note that the lambda in lambda ref: add_gender(ref) is redundant, it's equivalent to just passing add_gender. dataframe = dataframe.assign(gender=lambda df: df.apply(add_gender, axis=1)) As a suggestion, here is a more concise way of defining add_gender, using Series.str.extract and Series.fillna. def add_gender(df): pat = r'\b(men|women)s?\b' return df['heading'].str.extract(pat, flags=re.IGNORECASE).fillna('unisex') Regarding the regex pattern '\b(men|women)s?\b': \b matches a word boundary (men|women) matches men or women literally and captures the group s? matches s zero or one times Series.str.extract extract the capture group of each string value of the column heading. Non-matches are set to NaN. Then, Series.fillna replaces the NaNs with 'unisex'. In this case, add_gender expects the whole DataFrame to be passed. 
With this definition, you can simply do dataframe = dataframe.assign(gender=add_gender) Setup: import pandas as pd import re data = {'heading': ['some men', 'some men', 'some women', 'x mens', 'y womens', 'other', 'blahmenblah', 'blahwomenblah']} dataframe = pd.DataFrame(data=data) def add_gender(df): pat = r'\b(men|women)s?\b' return df['heading'].str.extract(pat, flags=re.IGNORECASE).fillna('unisex') Output: >>> dataframe heading 0 some men 1 some men 2 some women 3 x mens 4 y womens 5 other 6 blahmenblah 7 blahwomenblah >>> dataframe = dataframe.assign(gender = add_gender) >>> dataframe heading gender 0 some men men 1 some men men 2 some women women 3 x mens men 4 y womens women 5 other unisex 6 blahmenblah unisex 7 blahwomenblah unisex | 8 | 8 |
70,347,526 | 2021-12-14 | https://stackoverflow.com/questions/70347526/filtering-a-mutli-index | C1 C2 C3 C4 A 12 True 89 9 False 77 5 True 23 B 9 True 45 5 True 45 2 False 78 C 11 True 10 8 False 08 12 False 09 C1 & C2 are the multi index. I'm hoping to get a result which gives me only values in C1 which have values both lower than 10 and greater than or equal to 10 in C2. So in the table above C1 - B should go, with the final result should look like this: C1 C2 C3 C4 A 12 True 89 9 False 77 5 True 23 C 11 True 10 8 False 08 12 False 09 I tried df.loc[(df.C2 < 10 ) & (df.C2 >= 10)] but this didn't work. I also tried: filter1 = df.index.get_level_values('C2') < 10 filter2 = df.index.get_level_values('C2') >= 10 df.iloc[filter1 & filter2] Which I saw suggested on another post that also didn't work. Any one know how to solve this? Thanks | Use GroupBy.transform with GroupBy.any for test at least one condition match per groups, so possible last filter by m DataFrame: filter1 = df.index.get_level_values('C2') < 10 filter2 = df.index.get_level_values('C2') >= 10 m = (df.assign(filter1= filter1, filter2=filter2) .groupby(level=0)[['filter1','filter2']] .transform('any')) print (m) filter1 filter2 C1 C2 A 12 True True 9 True True 5 True True B 9 True False 5 True False 2 True False C 11 True True 8 True True 12 True True df = df[m.filter1 & m.filter2] print (df) C3 C4 C1 C2 A 12 True 89 9 False 77 5 True 23 C 11 True 10 8 False 8 12 False 9 Alternative solution: filter1 = df.index.get_level_values('C2') < 10 filter2 = df.index.get_level_values('C2') >= 10 lvl1 = df.index[filter1].remove_unused_levels().levels[0] lvl2 = df.index[filter2].remove_unused_levels().levels[0] df1 = df.loc[set(lvl1).intersection(lvl2)] print (df1) C3 C4 C1 C2 A 12 True 89 9 False 77 5 True 23 C 11 True 10 8 False 8 12 False 9 | 7 | 2 |
70,290,958 | 2021-12-9 | https://stackoverflow.com/questions/70290958/asking-isinstance-on-a-union-type | I'm trying to ask an isinstance question on a user defined type: ConstData = Union[int, str]: from typing import Union, Optional ConstData = Union[int, str] def foo(x) -> Optional[ConstData]: if isinstance(x, ConstData): # <--- this doesn't work # if isinstance(x, (int, str)): <--- this DOES work ... return x return None Sadly enough, it doesn't work: $ mypy main.py main.py:4: error: Parameterized generics cannot be used with class or instance checks main.py:4: error: Argument 2 to "isinstance" has incompatible type "object"; expected "Union[type, Tuple[Union[type, Tuple[Any, ...]], ...]]" Found 2 errors in 1 file (checked 1 source file) Is there any elegant way to solve this? | About: "Still, with if isinstance(x, get_args(ConstData)): return x mypy can't infer that x has the correct return type. It complains it's Any" I think it is a bug in reveal_type(), Mypy undertands the narrowing of x type. As you can see in the following code, it does not raise an error when test() is called with x: def test(k:Union[int, str]) -> None: pass def foo(x: Any) -> Optional[ConstData]: if isinstance(x, get_args(ConstData)): reveal_type(x) # Mypy: Revealed type is "Any" test(x) # Mypy: no error return x # Mypy: no error return None | 8 | 2 |
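Written out in full, the get_args variant quoted above looks like this; at runtime get_args(ConstData) is just the tuple (int, str), which isinstance accepts, and since x is typed Any the return passes the checker even though it is not narrowed:

from typing import Any, Optional, Union, get_args

ConstData = Union[int, str]

def foo(x: Any) -> Optional[ConstData]:
    if isinstance(x, get_args(ConstData)):  # equivalent to isinstance(x, (int, str))
        return x
    return None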
70,344,739 | 2021-12-14 | https://stackoverflow.com/questions/70344739/backend-youtube-dl-py-line-54-in-fetch-basic-self-dislikes-self-ydl-info | I have the below code that has been used to download youtube videos. I automatically detect if it's a playlist or single video. However all the sudden it is giving the above error. What can be the problem? import pafy from log import * import tkinter.filedialog import pytube url = input("Enter url :") directory = tkinter.filedialog.askdirectory() def single_url(url,directory): print("==================================================================================================================") video = pafy.new(url) print(url) print(video.title) #logs(video.title,url) file_object = open(directory+"/links.log", "a") file_object.write(video.title +' '+ url + '\n') file_object.close() print('Rating :',video.rating,', Duration :',video.duration,', Likes :',video.likes, ', Dislikes : ', video.dislikes) #print(video.description) best = video.getbest() print(best.resolution, best.extension) best.download(quiet=False, filepath=directory+'/'+video.title+"." + best.extension) print("saved at :", directory, " directory") print("==================================================================================================================") def playlist_func(url,directory): try: playlist = pytube.Playlist(url) file_object = open(directory+"/links.log", "a") file_object.write('Playlist Url :'+ url + '\n') file_object.close() print('There are {0}'.format(len(playlist.video_urls))) for url in playlist.video_urls: single_url(url,directory) except: single_url(url,directory) playlist_func(url,directory) | Your issue doesn't have anything to do with your code. Youtube does no longer have a dislike count, they simply removed it. You just have to wait for the pafy package to be updated accordingly, or patch the package locally and remove that part by yourself. Keep in mind there are at least 5 different pull requests open trying to fix it. | 4 | 5 |
70,343,666 | 2021-12-14 | https://stackoverflow.com/questions/70343666/python-boto3-float-types-are-not-supported-use-decimal-types-instead | I'm using Python 3.7 to store data in a DynamoDB database and I'm encountering the following error message when I try to write an item to the database: Float types are not supported. Use Decimal types instead. My code: ddb_table = my_client.Table(table_name) with ddb_table.batch_writer() as batch: for item in items: item_to_put: dict = json.loads(json.dumps(item), parse_float=Decimal) # Send record to database. batch.put_item(Item=item_to_put) "items" is a list of Python dicts. If I print out the types of the "item_to_put" dict, they are all of type str. Thanks in advance for any assistance. | Ran into the same issue: even though the values looked like strings when printed, some of them were still reaching boto3 as plain floats. The error went away once every offending value was converted explicitly before put_item, for example by wrapping it in str() (or by converting it to Decimal, as the error message suggests). | 33 | 10 |
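If you would rather keep numeric types than stringify everything, a common workaround (the helper name dynamo_safe is mine) is to swap every float for a Decimal just before writing:

from decimal import Decimal

def dynamo_safe(value):
    # boto3 rejects float; Decimal(str(...)) avoids binary-float precision noise
    if isinstance(value, float):
        return Decimal(str(value))
    if isinstance(value, dict):
        return {k: dynamo_safe(v) for k, v in value.items()}
    if isinstance(value, list):
        return [dynamo_safe(v) for v in value]
    return value

# usage inside the batch writer:
# batch.put_item(Item=dynamo_safe(item))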
70,343,350 | 2021-12-14 | https://stackoverflow.com/questions/70343350/what-does-sys-executable-do-in-jupyter-notebook | I bought a book which comes with jupyter notebook. In the first chapter, it asks me to install required libraries. It use {sys.executable} -m. I never see it before. what does {sys.executable} and -m do? also why use --user at the end? typically, I just use ! pip install numpy==1.19.2 Anyone can help me understand it? Thank you! import sys !{sys.executable} -m pip install numpy==1.19.2 --user !{sys.executable} -m pip install scipy==1.6.2 --user !{sys.executable} -m pip install tensorflow==2.4.0 --user !{sys.executable} -m pip install tensorflow-probability==0.11.0 --user !{sys.executable} -m pip install scikit-learn==0.24.1 --user !{sys.executable} -m pip install statsmodels==0.12.2 --user !{sys.executable} -m pip install ta --user | Let's split this question up into multiple parts. Part 1 From the Python documentation: sys.executable A string giving the absolute path of the executable binary for the Python interpreter, on systems where this makes sense. If Python is unable to retrieve the real path to its executable, sys.executable will be an empty string or None. Formatting that in, we get: ...\python.exe -m pip install <package> --user Part 2 Also from the docs: -m <module-name> Search sys.path for the named module and execute its contents as the __main__ module. This is generally the same as just pip install <package> --user, however if you have multiple versions of Python installed, the wrong version of pip might get invoked. By using -m, a matching version of pip will always be invoked. Part 3 This time from the pip documentation: Install to the Python user install directory for your platform. Typically ~/.local/, or %APPDATA%\Python on Windows. (See the Python documentation for site.USER_BASE for full details.) Basically, this means that instead of installing to the normal package directory (which could require administrator privileges), it installs to %APPDATA%\Python, which should always be accessible as it is in your user folder. | 7 | 7 |
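Putting the three pieces together, the notebook line expands to something like the following; the interpreter path shown in the comment is only an example and is machine-specific:

import sys
print(sys.executable)
# e.g. C:\Users\me\Anaconda3\python.exe
# so the cell effectively runs:
#   C:\Users\me\Anaconda3\python.exe -m pip install numpy==1.19.2 --user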
70,339,388 | 2021-12-13 | https://stackoverflow.com/questions/70339388/using-numba-with-np-concatenate-is-not-efficient-in-parallel | Iβm having some trouble getting np.concatenate to parallelize efficiently. Here is a minimal working example. (I know that here I could sum a and b separately, but I am focussing on parallelizing the concatenate operation since this is what I need to do in my project. I would then do further operations on the concatenated array, such as sorting.) However many cores I run this on, it always seems to take the same time (~10 seconds). If anything, it is slower on more cores. I tried releasing the GIL on cc using nogil=True in the decorator, but to no avail. Note that all cores are clearly in use during the calculation, even though there is no speed-up. Can anybody help me? Many thanks from numba import prange, njit import numpy as np @njit() def cc(): r = np.random.rand(20) a = r[r < 0.5] b = r[r > 0.7] c = np.concatenate((a, b)) return np.sum(c) @njit(parallel=True) def cc_wrap(): n = 10 ** 7 result = np.empty(n) for i in prange(n): result[i] = cc() return result cc_wrap() | The main issue comes from contention of the allocator. The cc function creates many implicit small temporary arrays. For example np.random.rand do that as well as r < 0.5 or even a = r[condition], not to mention np.concatenate. Temporary arrays generally needs to be allocated in the heap using a given allocator. The default allocator provided by the standard library is no guaranteed to scale well using multiple threads. Allocations do not perfectly scale because an expensive implicit synchronisation is needed between the threads allocating data. For example, a thread can allocate an array deleted by another one. In the worst case, the allocator could serialize the allocations/deletes. Because the cost of allocating data is huge compared to the work to perform on the allocated data, the overhead of the synchronization is predominant and the overall execution is serialized. Actually, it is even worse since the sequential time is already dominated by overheads. Note that an aggressively optimizing compiler could allocate the arrays on the stack because they do not escape the function. However, this is unfortunately not visibly done by Numba. Moreover, the allocator could be tuned to scale very-well using per-thread pools assuming Numba threads never delete data allocated by other ones (which is probably the case although I am not totally sure). Still, the allocated memory pool needs to be requested to the Operating System that generally do not scale well too (especially Windows AFAIK). The best solution is simply avoid creating implicit temporary arrays. This is possible using a per-worker local temporary array combined with a partitioning algorithm. Note that the functions can be compiled ahead-of-time to time by specifying types to Numba. Here is the resulting implementation: import numba as nb import numpy as np import random @nb.njit('float64(float64[::1])') def cc(tempBuffer): assert tempBuffer.size >= 20 # View to the temporary scratch buffer r = tempBuffer[0:20] # Generate 20 random numbers without any allocation for i in range(20): r[i] = random.random() j = 0 # Partition the array so to put values smaller than # a condition in the beginning. # After the loop, r[0:split] contains the filtered values. for i in range(r.size): if r[i] < 0.5: r[i], r[j] = r[j], r[i] j += 1 split = j # Partition the rest of the array. # After the loop, r[split:j] contains the other filtered values. 
for i in range(j, r.size): if r[i] > 0.7: r[i], r[j] = r[j], r[i] j += 1 # Note that extracting contiguous views it cheap as # it does not create a new temporary array # a = r[0:split] # b = r[split:j] c = r[0:j] return np.sum(c) @nb.njit('float64[:]()', parallel=True) def cc_wrap(): n = 10 ** 7 result = np.empty(n) # Preallocate some space once for all threads globalTempBuffer = np.empty((nb.get_num_threads(), 64), dtype=np.float64) for i in nb.prange(n): threadId = nb.np.ufunc.parallel._get_thread_id() threadLocalBuffer = globalTempBuffer[threadId] result[i] = cc(threadLocalBuffer) return result cc_wrap() Note that the thread-local manipulation is a bit tricky and often not needed. In this case, there is visible a speed up only using the partitioning algorithm to reduce allocations. However, the overhead of the allocation is still pretty due to the very very small size of the temporary array and the number of allocations. Note also that the r array is not strictly required in this code as random numbers can be summed in-place. This may not fit your need regarding your real-word use-cases. Here is a (much simpler implementation): @nb.njit('float64()') def cc(): s = 0.0 for i in range(20): e = random.random() if e < 0.5 or e > 0.7: s += e return s @nb.njit('float64[:]()', parallel=True) def cc_wrap(): n = 10 ** 7 result = np.empty(n) for i in nb.prange(n): result[i] = cc() return result cc_wrap() Here are timings on my 6-core machine: # Initial (sequential): 8.1 s # Initial (parallel): 9.0 s # Array-based (sequential): 2.50 s # Array-based (parallel): 0.41 s # In-place (sequential): 1.09 s # In-place (parallel): 0.19 s In the end, the fastest parallel version is 47 times faster than the original one (and scale almost perfectly). | 5 | 10 |
70,332,426 | 2021-12-13 | https://stackoverflow.com/questions/70332426/type-hinting-a-function-that-takes-the-return-type-as-parameter | Is there a way to type hint a function that takes as argument its return type? I naively tried to do this: # NOTE: # This code does not work! # # If I define `ret_type = TypeVar("ret_type")` it becomes syntactically # correct, but type hinting is still broken. # def mycoerce(data: Any, ret_type: type) -> ret_type return ret_type(data) a = mycoerce("123", int) # desired: a type hinted to int b = mycoerce("123", float) # desired: b type hinted to float but it doesn't work. | Have a look at Generics, especially TypeVar. You can do something like this: from typing import TypeVar, Callable R = TypeVar("R") D = TypeVar("D") def mycoerce(data: D, ret_type: Callable[[D], R]) -> R: return ret_type(data) a = mycoerce("123", int) # desired: a type hinted to int b = mycoerce("123", float) # desired: b type hinted to float print(a, b) | 5 | 5 |
70,335,028 | 2021-12-13 | https://stackoverflow.com/questions/70335028/how-do-i-convert-an-ioc-country-code-to-country-name | I have a pandas dataframe import pandas as pd s = pd.DataFrame({'ioc' : ['ESP', 'CYP', 'USA', 'ESP', 'NED']}) and I want to return out = pd.DataFrame( {'ioc' : ['ESP', 'CYP', 'USA', 'ESP', 'NED'], 'countryName' : ['Spain', 'Cyprus', 'United States of America', 'Spain', 'Netherlands']}) | Use List of IOC country codes ioc = pd.read_html('https://en.wikipedia.org/wiki/List_of_IOC_country_codes')[0] ioc = ioc.assign(Code=ioc['Code'].str[-3:]).set_index('Code')['National Olympic Committee'] s['countryName'] = s['ioc'].map(ioc) print(s) # Output: ioc countryName 0 ESP Spain 1 CYP Cyprus 2 USA United States 3 ESP Spain 4 NED Netherlands | 4 | 6 |
70,328,519 | 2021-12-12 | https://stackoverflow.com/questions/70328519/find-cosine-similarity-between-two-columns-of-type-arraydouble-in-pyspark | I am trying to find the cosine similarity between two columns of type array in a pyspark dataframe and add the cosine similarity as a third column, as shown below Col1 Col2 Dot Prod [0.5, 0.6 ... 0.7] [0.5, 0.3 .... 0.1] dotProd(Col1, Col2) The current implementation I have is: import pyspark.sql.functions as func def cosine_similarity(df, col1, col2): df_cosine = df.select(func.sum(df[col1] * df[col2]).alias('dot'), func.sqrt(func.sum(df[col1]**2)).alias('norm1'), func.sqrt(func.sum(df[col2] **2)).alias('norm2')) d = df_cosine.rdd.collect()[0].asDict() return d['dot']/(d['norm1'] * d['norm2']) But I guess the above code only for works for columns with integer values. Is there anyway I would be able to extend the above function to achieve a similar behavior for array columns | For the array of double, you can use the aggregate function. df = spark.createDataFrame([[[0.1, 0.5, 2.0, 1.0], [3.0, 2.4, 0.2, 1.1]]], ['Col1', 'Col2']) df.show() +--------------------+--------------------+ | Col1| Col2| +--------------------+--------------------+ |[0.1, 0.5, 2.0, 1.0]|[3.0, 2.4, 0.2, 1.1]| +--------------------+--------------------+ df.withColumn('dot', f.expr('aggregate(arrays_zip(Col1, Col2), 0D, (acc, x) -> acc + (x.Col1 * x.Col2))')) \ .withColumn('norm1', f.expr('sqrt(aggregate(Col1, 0D, (acc, x) -> acc + (x * x)))')) \ .withColumn('norm2', f.expr('sqrt(aggregate(Col2, 0D, (acc, x) -> acc + (x * x)))')) \ .withColumn('cosine', f.expr('dot / (norm1 * norm2)')) \ .show(truncate=False) +--------------------+--------------------+---+-----------------+-----------------+------------------+ |Col1 |Col2 |dot|norm1 |norm2 |cosine | +--------------------+--------------------+---+-----------------+-----------------+------------------+ |[0.1, 0.5, 2.0, 1.0]|[3.0, 2.4, 0.2, 1.1]|3.0|2.293468988235943|4.001249804748511|0.3269133956691457| +--------------------+--------------------+---+-----------------+-----------------+------------------+ | 5 | 4 |
70,318,373 | 2021-12-11 | https://stackoverflow.com/questions/70318373/copying-a-list-of-numpy-arrays | I am working with (lists of) lists of numpy arrays. As a bare bones example, consider this piece of code: a = [np.zeros(5)] b = a.copy() b[0] += 1 Here, I copy a list of one array from a to b. However, the array itself is not copied, so: print(a) print(b) both give [array([1., 1., 1., 1., 1.])]. If I want to make a copy of the array as well, I could do something like: b = [arr.copy() for arr in a] and a would remain unchanged. This works well for a simple list, but it becomes more complicated when working with nested lists of arrays where the number of arrays in each list is not always the same. Is there a simple way to copy a multi-level list and every object that it contains without keeping references to the objects in the original list? Basically, I would like to avoid nested loops as well as dealing with the size of each individual sub-list. | What you are looking for is a deepcopy import numpy as np import copy a = [np.zeros(5)] b = copy.deepcopy(a) b[0] += 1 # a[0] is not changed This is actually method recommended in numpy doc for the deepcopy of object arrays. | 5 | 4 |
70,327,505 | 2021-12-12 | https://stackoverflow.com/questions/70327505/python-type-checking-literalfalse-overload-and-noreturn | I have the following (typed) Python function: from typing import overload, Literal, NoReturn @overload def check(condition: Literal[False], msg: str) -> NoReturn: pass @overload def check(condition: Literal[True], msg: str) -> None: pass def check(condition, msg): if not condition: raise Exception(msg) Pyright type-checker complains: Overloaded function implementation is not consistent with signature of overload 1 Function return type "NoReturn" is incompatible with type "None" Type cannot be assigned to type "None" I'm confused by this-- Pyright apparently can't determine that check will always throw an error if condition is False. How can I massage this to make it work? | I posted this to the Pyright issue tracker and was informed that I had to add a Union[NoReturn, None] annotation to the check implementation to resolve the error: [This] falls out of pyright's rules for return type inference. Pyright will infer a NoReturn type only in the case that all code paths raise an exception. It's quite common for some code paths raise an exception, and it would produce many false positives if pyright were to include NoReturn in the union in that case, so NoReturn is always elided from a union in an inferred return type. Mypy doesn't have any support for return type inference, so that explains why this doesn't occur for mypy. The correct way to annotate this is to include an explicit None | NoReturn return type for the implementation. Unfortunately, this defeats the purpose of the overloads in the first place, which is to allow PyRight to infer from the arguments when NoReturn is the return type. I asked whether it is possible to use overloads to conditionally express NoReturn to the type checker. Apparently it is not: Unfortunately, pyright cannot make this determination because of its core architecture. The "reachability" of nodes within a code flow graph cannot rely on type evaluations because type evaluations depend on reachability of code flow nodes. To work around this chicken-and-egg problem, the logic that determines reachability does some basic checks to determine if the called function is potentially a "NoReturn" function, but these basic checks are not sophisticated enough to handle overload evaluation. Evaluating overloads requires the full type evaluator, and that depends on reachability. | 7 | 5 |
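Spelled out, the resolution described above looks like this; the implementation's annotations are the only change from the original snippet (the parameter annotations are added here for completeness), and this reflects the quoted Pyright guidance rather than every checker version:

from typing import Literal, NoReturn, Union, overload

@overload
def check(condition: Literal[False], msg: str) -> NoReturn:
    pass

@overload
def check(condition: Literal[True], msg: str) -> None:
    pass

def check(condition: bool, msg: str) -> Union[None, NoReturn]:
    if not condition:
        raise Exception(msg)
    return None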
70,316,221 | 2021-12-11 | https://stackoverflow.com/questions/70316221/difference-between-gridsearchcv-and-cross-val-score | I have a binary time series classification problem. Since it is a time series, I can't just train_test_split my data. So, I used the object tscv = TimeSeriesSplit() from this link, and got something like this: I can see from GridSearchCV and cross_val_score that I can pass my split strategy cv = tscv as a parameter. But my question is, what's the difference between GridSearchCV and cross_val_score? Is using one of them enough to train/test my model, or should I use both? First GridSearchCV to get the best hyperparameters and then cross_val_score? | Grid search is a method for evaluating a model under different hyperparameter settings (the values of which you define in advance). The grid search can use cross-validation to score each parameter setting (hence GridSearchCV). After the grid search has finished, you can look up the parameters with which your model performed best via the best_params_ attribute (a dict). So grid search is essentially a brute-force strategy in which you fit and evaluate the model for every hyperparameter combination you listed. With cross_val_score there is no grid search at all: you evaluate a single, fixed estimator and simply get back its cross-validation scores. For hyperparameter tuning, GridSearchCV on its own is therefore enough, since it already cross-validates each candidate internally. | 5 | 5 |
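A hedged end-to-end sketch for the time-series case; the classifier, the parameter grid and X/y are placeholders of my choosing, and only the cv=tscv wiring is the point:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit, cross_val_score

tscv = TimeSeriesSplit(n_splits=5)
param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=tscv, scoring="roc_auc")
search.fit(X, y)  # X, y: your feature matrix and binary labels
print(search.best_params_, search.best_score_)

# optional second look at the chosen model with the same splitter:
scores = cross_val_score(search.best_estimator_, X, y, cv=tscv, scoring="roc_auc")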
70,326,002 | 2021-12-12 | https://stackoverflow.com/questions/70326002/typeerror-webdriver-init-got-an-unexpected-keyword-argument-firefox-opti | I'm trying to create a script which downloads a file from a website and for this I want to change the download filepath. When I try to do this with the Firefox options it gives me this error: TypeError: WebDriver.__init__() got an unexpected keyword argument 'firefox_options' Code: from selenium import webdriver from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.firefox.options import Options from selenium.webdriver.common.keys import Keys import time options = Options() options.add_argument("download.default_directory=C:\\Music") browser = webdriver.Firefox(firefox_options=options, executable_path=r'C:\\selenium\\geckodriver.exe') browser.get('https://duckduckgo.com/') | The browser option parameter firefox_options was deprecated in Selenium 3.8.0 Browser option parameters are now standardized across drivers as options. firefox_options, chrome_options, and ie_options are now deprecated Instead you have to use options as follows: from selenium.webdriver.firefox.options import Options options = Options() options.add_argument("download.default_directory=C:\\Music") browser = webdriver.Firefox(options=options, executable_path=r'C:\\selenium\\geckodriver.exe') | 15 | 15 |
70,286,615 | 2021-12-9 | https://stackoverflow.com/questions/70286615/extract-mutiple-tables-from-excel | I have seen many relatable posts for my question but couldn't find the proper solutions. read excel sheet containing multiple tables, tables that have headers with a non white background cell color here is the link of that excel data: https://docs.google.com/spreadsheets/d/1m4v_wbIJCGWBigJx53BRnBpMDHZ9CwKf/edit?usp=sharing&ouid=107579116880049687042&rtpof=true&sd=true So far what i have tried: import pandas as pd df = pd.read_excel("dell.xlsx") df =df.dropna() The above code deletes the wanted data because it has nan. df.iloc[1,2:5]=['Description','Qty','Price'] print(df) nul_rows = list(df[df.isnull().all(axis=1)].index) list_of_dataframes = [] for i in range(len(nul_rows) - 1): list_of_dataframes.append(df.iloc[nul_rows[i]+1:nul_rows[i+1],:]) cleaned_tables = [] for _df in list_of_dataframes: cleaned_tables.append(_df.dropna(axis=1, how='all')) for p in cleaned_tables: print(p.dropna()) couldn't get the data I want because those were not in the header format, its in unnamed. I want to extract these data "Sku Description" "Qty" "Price" "Total" from the excel in the link. Hope I get some response. NOTE! the file content and format differs always, so based on one file solution cant be used for next file but the header's name won't change for example qty, description,total. | Here is one way of creating dataframes out of that excel file. See comments in the code itself. You can uncomment some print() to see how this is developed. Code import pandas as pd import numpy as np def get_dataframes(fn): """ Returns 3 dataframes from the 3 tables given inside excel file fn. """ # (1) Read file df_start = pd.read_excel(fn) df = df_start.copy() # preserve df_start, just in case we need it later # print(df.to_string()) # study layout # (2) Clean df df = pd.DataFrame(np.vstack([df.columns, df])) # reset column index, so we can see 0, 1, 2 ... df = df.drop([0], axis=1) # drop index column 0, it is not needed df = df.dropna(how='all') # drop rows if all cells are empty df = df.reset_index(drop=True) # reset row index so we can see the correct row no. 0, 1, ... df = pd.DataFrame(np.vstack([df.columns, df])) # reset col index so we can see the correct col no. 0, 1, ... # (3) Tasks # We have 3 dataframes to create: # 1) df1 is for the first table the one with "Description". # 2) df2 is the one with the first "Sku Description:". # 3) df3 is the one with the second "Sku Description:". # (4) Get all the indexes (row and col) of "Description" and "Sku Description:" texts, # we will use it to extract data in order to create dataframes df1, df2 and df3. # Get the index of "Description" we will get row index and col index from it, by knowing # the row index, we will be able to extract at correct row index on this particular table. i_desc = df.where(df=='Description').dropna(how='all').dropna(how='all', axis=1) # Assume that there is only one "Description" and this is true according to the given sample excel files. i_desc_row = i_desc.index[0] # print(f'row index of "Description": {i_desc_row}') # Get the index of "Sku Description:". i_sku = df.where(df=='Sku Description:').dropna(how='all').dropna(how='all', axis=1) # There are 2 "Sku Description:", get the row indexes of each. 
i_sku_row = i_sku.index # print(f'row indexes of "Sku Description:": {i_sku_row}') i_sku_row_1 = i_sku_row[0] i_sku_row_2 = i_sku_row[1] # print(f'i_sku_row_1: {i_sku_row_1}, i_sku_row_2: {i_sku_row_2}') # There are 2 "Sku Description:", get the col indexes of each. i_sku_col = i_sku.columns # print(f'col indexes of "Sku Description:": {i_sku_col}') if len(i_sku_col) == 2: i_sku_col_1 = i_sku_col[0] i_sku_col_2 = i_sku_col[1] else: i_sku_col_1 = i_sku_col[0] i_sku_col_2 = i_sku_col_1 # print(f'i_sku_col_1: {i_sku_col_1}, i_sku_col_2: {i_sku_col_2}') # (5) Create df1 cols = ['Description', 'Qty', 'Price', 'Total'] df1 = df.iloc[i_desc_row+1 : i_sku_row_1-2, 0:4] # [start_row:end_row, start_col:end_col] df1.columns = cols # print(df1) df1 = df1.reset_index(drop=True) # reset the row index so we can see 0, 1, ... # print(df1) # (6) Create df2 cols = ['Sku Description:', 'Qty:'] df2 = df.iloc[i_sku_row_1+1 : i_sku_row_2, i_sku_col_1:i_sku_col_1+2] df2.columns = cols df2 = df2.reset_index(drop=True) # print(df2) # (7) Create df3 cols = ['Sku Description:', 'Qty:'] df3 = df.iloc[i_sku_row_2+1:, i_sku_col_2:i_sku_col_2+2] df3.columns = cols df3 = df3.reset_index(drop=True) # print(df3) return df1, df2, df3 def process_file(): # fn = 'F:\\Downloads\\dell.xlsx' fn = 'F:\\Downloads\\dell2.xlsx' desc_df, sku1_df, sku2_df = get_dataframes(fn) print(f'file: {fn}') print(f'desc_df:\n{desc_df}\n') print(f'sku1_df:\n{sku1_df}\n') print(f'sku2_df:\n{sku2_df}\n') # Start process_file() Output file: F:\Downloads\dell2.xlsx desc_df: Description Qty Price Total 0 PowerEdge R640 Server 2.0 6390.0 12780 1 PowerEdge R640 Server 8.0 4360.0 34880 sku1_df: Sku Description: Qty: 0 PowerEdge R640 Server 1.0 1 PowerEdge R640 MLK Motherboard 1.0 2 Intel Xeon Silver 4216 2.1G, 16C/32T, 9.6GT/s,... 2.0 3 Intel Xeon Silver 4216 2.1G, 16C/32T, 9.6GT/s,... 2.0 4 iDRAC Group Manager, Enabled 1.0 5 iDRAC,Factory Generated Password 1.0 6 Additional Processor Selected 1.0 7 2.5 Chassis with up to 10 Hard Drives and 3PCI... 1.0 8 Standard Bezel 1.0 9 Riser Config 2, 3x16 LP 1.0 10 PowerEdge R640 Shipping Material for 4 and 10 ... 1.0 11 PowerEdge R640 Shipping(ICC), for 1300W below, V2 1.0 12 No Quick Sync 1.0 13 Dell EMC Luggage Tag for x10 1.0 14 Performance Optimized 1.0 15 3200MT/s RDIMMs 1.0 16 DIMM Blanks for System with 2 Processors 1.0 17 32GB RDIMM, 3200MT/s, Dual Rank 16Gb BASE x8 4.0 18 iDRAC9,Enterprise 1.0 19 2.4TB 10K RPM SAS 12Gbps 512e 2.5in Hot-plug H... 8.0 20 3.84TB SSD SATA Read Intensive 6Gbps 512 2.5in... 2.0 21 BOSS controller card + with 2 M.2 Sticks 480GB... 1.0 22 PERC H750 Adapter, Low Profile 1.0 23 8 Standard Fans for R640 1.0 24 Performance BIOS Settings 1.0 25 Standard 1U Heatsink 2.0 26 No Internal Optical Drive 1.0 27 Dual, Hot-plug, Redundant Power Supply (1+1), ... 1.0 28 Jumper Cord - C13/C14, 2M, 250V, 10A 2.0 29 Power Cord - C13, 1.8M, 250V, 10A 2.0 30 Trusted Platform Module 2.0 V3 1.0 31 PowerEdge R640 No CE Marking, ICC, for 1300W P... 1.0 32 Broadcom 5720 Quad Port 1GbE BASE-T, rNDC 1.0 33 No Operating System 1.0 34 No Systems Documentation, No OpenManage DVD Kit 1.0 35 Basic Deployment Dell Server R Series 1U/2U 1.0 36 PowerEdge-SE02 Handling n Insurance Charges(In... 1.0 37 ReadyRails Sliding Rails With Cable Management... 1.0 38 Unconfigured RAID 1.0 39 UEFI BIOS Boot Mode with GPT Partition 1.0 40 OpenManage Enterprise Advanced 1.0 41 Basic Next Business Day 36 Months 1.0 42 ProSupport and Next Business Day Onsite Servic... 1.0 43 ProSupport and Next Business Day Onsite Servic... 
1.0 44 INFO: Thank you for choosing Dell 1.0 45 Mod Specs Info 1.0 sku2_df: Sku Description: Qty: 0 PowerEdge R640 Server 1.0 1 PowerEdge R640 MLK Motherboard 1.0 2 Intel Xeon Silver 4216 2.1G, 16C/32T, 9.6GT/s,... 1.0 3 iDRAC,Factory Generated Password 1.0 4 iDRAC Group Manager, Enabled 1.0 5 2.5 Chassis with up to 10 Hard Drives and 3PCI... 1.0 6 Standard Bezel 1.0 7 Riser Config 4, 2x16 LP 1.0 8 PowerEdge R640 Shipping(ICC), for 1300W below, V2 1.0 9 PowerEdge R640 Shipping Material for 4 and 10 ... 1.0 10 No Quick Sync 1.0 11 Dell EMC Luggage Tag for x10 1.0 12 Performance Optimized 1.0 13 32GB RDIMM, 3200MT/s, Dual Rank 16Gb BASE x8 2.0 14 Blank for 1CPU Configuration 1.0 15 3200MT/s RDIMMs 1.0 16 Blank for 1CPU Configuration 1.0 17 No Additional Processor 1.0 18 iDRAC9,Enterprise 1.0 19 2.4TB 10K RPM SAS 12Gbps 512e 2.5in Hot-plug H... 3.0 20 BOSS controller card + with 2 M.2 Sticks 480GB... 1.0 21 PERC H750 Adapter, Low Profile 1.0 22 Performance BIOS Settings 1.0 23 5 Standard Fans for R640 1.0 24 Standard 1U Heatsink 1.0 25 1.92TB SSD vSAS Read Intensive 12Gbps 512e 2.5... 2.0 26 No Internal Optical Drive 1.0 27 Dual, Hot-plug, Redundant Power Supply (1+1), ... 1.0 28 Power Cord - C13, 1.8M, 250V, 10A 2.0 29 Jumper Cord - C13/C14, 2M, 250V, 10A 2.0 30 Trusted Platform Module 2.0 V3 1.0 31 PowerEdge R640 No CE Marking, ICC, for 1300W P... 1.0 32 Broadcom 5720 Quad Port 1GbE BASE-T, rNDC 1.0 33 No Operating System 1.0 34 No Systems Documentation, No OpenManage DVD Kit 1.0 35 Basic Deployment Dell Server R Series 1U/2U 1.0 36 PowerEdge-SE02 Handling n Insurance Charges 1.0 37 ReadyRails Sliding Rails With Cable Management... 1.0 38 Unconfigured RAID 1.0 39 UEFI BIOS Boot Mode with GPT Partition 1.0 40 OpenManage Enterprise Advanced 1.0 41 Basic Next Business Day 36 Months 1.0 42 ProSupport and Next Business Day Onsite Servic... 1.0 43 ProSupport and Next Business Day Onsite Servic... 1.0 44 INFO: Thank you for choosing Dell 1.0 45 Mod Specs Info 1.0 | 5 | 4 |
70,315,657 | 2021-12-11 | https://stackoverflow.com/questions/70315657/get-width-and-height-of-plotly-figure | e.g. if I write import plotly.express as px df = px.data.tips() fig = px.scatter(df, x="total_bill", y="tip", facet_col="sex") then how can I get the width and height of fig? If I do fig.layout.width then I get None. | That is the correct way to get the width. fig.layout.width is None because the width has not been set yet. If you set it explicitly you can retrieve it: fig = px.scatter(df, x="total_bill", y="tip", facet_col="sex", width=200) >>> fig.layout.width 200 If not set explicitly the width is initialised when executing the show method based on defaults defined by plotly.js. show(*args, **kwargs) Show a figure using either the default renderer(s) or the renderer(s) specified by the renderer argument Parameters ... width (int or float) – An integer or float that determines the number of pixels wide the plot is. The default is set in plotly.js. ... If we look at the plotly.js documentation we see the default width is 700 and the default height is 450. If you set fig.layout.autosize = False you can see that these defaults are correct. Otherwise the width and height are re-initialised on each relayout. autosize: Determines whether or not a layout width or height that has been left undefined by the user is initialized on each relayout. Note that, regardless of this attribute, an undefined layout width or height is always initialized on the first call to plot. https://plotly.com/javascript/reference/layout/#layout-autosize | 4 | 10 |
70,315,213 | 2021-12-11 | https://stackoverflow.com/questions/70315213/either-of-the-two-pydantic-attributes-should-be-optional | I want validate a payload schema & I am using Pydantic to do that. The class created by inheriting Pydantic's BaseModel is named as PayloadValidator and it has two attributes, addCustomPages which is list of dictionaries & deleteCustomPages which is a list of strings. class NestedCustomPages(BaseModel): """This is the schema for each custom page.""" html: str label: str type: int class PayloadValidator(BaseModelWithContext): """This class defines the payload load schema and also validates it.""" addCustomPages: Optional[List[NestedCustomPages]] deleteCustomPages: Optional[List[str]] I want to declare either of the attributes of class PayloadValidator as optional. I have tried looking for the solution for this but couldn't find anything. | There was a question on this a while ago on the pydantic Github page: https://github.com/samuelcolvin/pydantic/issues/506. The conclusion there includes a toy example with a model that requires either a or b to be filled by using a validator: from typing import Optional from pydantic import validator from pydantic.main import BaseModel class MyModel(BaseModel): a: Optional[str] = None b: Optional[str] = None @validator('b', always=True) def check_a_or_b(cls, b, values): if not values.get('a') and not b: raise ValueError('Either a or b is required') return b mm = MyModel() | 5 | 6 |
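A short usage sketch for the validator approach in the answer above (assuming pydantic v1, where a ValueError raised inside a validator surfaces as a ValidationError); note that the trailing mm = MyModel() in the example would itself raise, since neither field is provided:

from pydantic import ValidationError

MyModel(a="hello")   # valid: only a is set
MyModel(b="world")   # valid: only b is set
try:
    MyModel()        # neither field is set, so the validator raises
except ValidationError as exc:
    print(exc)       # reports "Either a or b is required"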
70,311,592 | 2021-12-10 | https://stackoverflow.com/questions/70311592/numba-np-convolve-really-slow | I'm trying to speed up a piece of code convolving a 1D array (filter) over each column of a 2D array. Somehow, when I run it with numba's njit, I get a 7x slowdown. My thoughts: Maybe column indexing is slowing it down, but switching to row indexing didn't affect performance. Maybe slice indexing the results of the convolution is slow, but removing it didn't change anything. I've checked that numba understands all the types properly (tested on Windows 10, python 3.9.4 from conda, numpy 1.12.2, numba 0.53.1) Can anyone tell me why this code is slow? import numpy as np from numba import njit def f1(a1, filt): l2 = filt.size // 2 res = np.empty(a1.shape) for i in range(a1.shape[1]): res[:, i] = np.convolve(a1[:, i], filt)[l2:-l2] return res @njit def f1_jit(a1, filt): l2 = filt.size // 2 res = np.empty(a1.shape) for i in range(a1.shape[1]): res[:, i] = np.convolve(a1[:, i], filt)[l2:-l2] return res a1 = np.random.random((6400, 1000)) filt = np.random.random((65)) f1(a1, filt) f1_jit(a1, filt) %timeit f1(a1, filt) # 404 ms ± 19.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit f1_jit(a1, filt) # 2.8 s ± 66.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | The problem comes from the Numba implementation of np.convolve. This is a known issue. It turns out that the current Numba implementation is much slower than the Numpy one (version <=0.54.1 tested on Windows). Under the hood On one hand, the Numpy implementation calls correlate which itself performs a dot product that should be implemented by the fast BLAS library available on your system. On the other hand, the Numba implementation calls _get_inner_prod which uses np.dot that should also use the same BLAS library (assuming a BLAS is detected, which should be the case)... That being said, there are multiple issues related to the dot product: First of all, if the internal variable _HAVE_BLAS of numba/np/arraymath.py is manually disabled, Numba uses a fallback implementation of the dot product that is supposed to be significantly slower. However, it turns out that using the fallback dot product implementation used by np.convolve results in a 5 times faster execution than with the BLAS wrapper on my machine! Additionally, using the parameter fastmath=True in the njit Numba decorator results in an overall 8.7 times faster execution! Here is the testing code: import numpy as np import numba as nb def npConvolve(a, b): return np.convolve(a, b) @nb.njit('float64[:](float64[:], float64[:])') def nbConvolveUncont(a, b): return np.convolve(a, b) @nb.njit('float64[::1](float64[::1], float64[::1])') def nbConvolveCont(a, b): return np.convolve(a, b) a = np.random.random(6400) b = np.random.random(65) %timeit -n 100 npConvolve(a, b) %timeit -n 100 nbConvolveUncont(a, b) %timeit -n 100 nbConvolveCont(a, b) Here are the raw results: With _HAVE_BLAS=True (default): 126 µs ± 292 ns per loop 1.6 ms ± 21.3 µs per loop 1.6 ms ± 18.5 µs per loop With _HAVE_BLAS=False: 125 µs ± 359 ns per loop 311 µs ± 1.18 µs per loop 268 µs ± 4.26 µs per loop With _HAVE_BLAS=False and fastmath=True: 125 µs ± 757 ns per loop 327 µs ± 3.69 µs per loop 183 µs ± 654 ns per loop Moreover, np_convolve of Numba internally flips some array parameters and then performs the dot product using a flipped array that has a non-trivial stride (i.e. not 1). Such a non-trivial stride may have an impact on the dot product performance.
More generally, any transformation preventing the compiler from knowing that arrays are contiguous will very likely have a strong impact on performance. Indeed, the following test shows the impact of working on a contiguous array with the dot product implementation of Numba: import numpy as np import numba as nb def np_dot(a, b): return np.dot(a, b) @nb.njit('float64(float64[::1], float64[::1])') def nb_dot_cont(a, b): return np.dot(a, b) @nb.njit('float64(float64[::1], float64[:])') def nb_dot_stride(a, b): return np.dot(a, b) v = np.random.random(128*1024) %timeit -n 200 np_dot(v, v) # 36.5 µs ± 4.9 µs per loop %timeit -n 200 nb_dot_stride(v, v) # 361.0 µs ± 17.1 µs per loop (x10 !!!) %timeit -n 200 nb_dot_cont(v, v) # 34.1 µs ± 2.9 µs per loop Some general notes about Numpy and Numba Note that Numba can hardly accelerate Numpy calls when they work on fairly big arrays, since Numba re-implements Numpy functions mostly in Python and uses a JIT compiler (llvmlite) to speed them up, while Numpy is mostly implemented in plain C (with rather slow Python wrapping code). The Numpy code uses low-level processor features like SIMD instructions to speed up the execution of many functions considerably. Both appear to use BLAS libraries known to be highly optimized. Numpy tends to be more optimized since it is currently more mature than Numba: Numpy has had many more contributors working on it for a longer time. | 6 | 12 |
70,302,056 | 2021-12-10 | https://stackoverflow.com/questions/70302056/define-a-pydantic-nested-model | If I use GET (given an id) I get a JSON like: { "data": { "id": "81", "ks": { "k1": 25, "k2": 5 }, "items": [ { "id": 1, "name": "John", "surname": "Smith" }, { "id": 2, "name": "Jane", "surname": "Doe" } ] }, "server-time": "2021-12-09 14:18:40" } with the particular case (if id does not exist): { "data": { "id": -1, "ks": "", "items": [] }, "server-time": "2021-12-10 09:35:22" } I would like to create a Pydantic model for managing this data structure (I mean to formally define these objects). What is the smartest way to manage this data structure by creating classes (possibly nested)? | If you don't need data validation that pydantic offers, you can use data classes along with the dataclass-wizard for this same task. It's slightly easier as you don't need to define a mapping for lisp-cased keys such as server-time. Simple example below: from __future__ import annotations from dataclasses import dataclass from datetime import datetime from dataclass_wizard import fromdict @dataclass class Something: data: Data # or simply: # server_time: str server_time: datetime @dataclass class Data: id: int ks: dict[str, int] items: list[Person] @dataclass class Person: id: int name: str surname: str # note: data is defined in the OP above input_data = ... print(fromdict(Something, input_data)) Output: Something(data=Data(id=81, ks={'k1': 25, 'k2': 5}, items=[Person(id=1, name='John', surname='Smith'), Person(id=2, name='Jane', surname='Doe')]), server_time=datetime.datetime(2021, 12, 9, 14, 18, 40)) | 8 | 4 |
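For reference, roughly the same structure can also be modelled with pydantic itself, which is what the question asks about; this is only a sketch assuming pydantic v1, using Field(alias=...) for the lisp-cased server-time key and a Union to cover the empty-string case of ks:

from typing import Dict, List, Union
from pydantic import BaseModel, Field

class Person(BaseModel):
    id: int
    name: str
    surname: str

class Data(BaseModel):
    id: int
    ks: Union[Dict[str, int], str]  # "" is returned when the id does not exist
    items: List[Person]

class Payload(BaseModel):
    data: Data
    server_time: str = Field(alias="server-time")

payload = Payload.parse_obj(input_data)  # input_data is the JSON dict shown in the question
print(payload.data.items)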
70,303,895 | 2021-12-10 | https://stackoverflow.com/questions/70303895/python-3-10-asyncio-gather-shows-deprecationwarning-there-is-no-current-event | I have a Django app and in one of its views I use asyncio in order to make some concurent requests to an external component. Here is the idea: import asyncio async def do_request(project): result = ... return result def aggregate_results(projects: list): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) results = loop.run_until_complete( asyncio.gather(*(do_request(project) for project in projects)) ) loop.close() return zip(projects, results) Well, when I run the tests I get DeprecationWarning: There is no current event loop at this line: asyncio.gather(*(do_request(project) for project in projects)) How should I interpret this warning and what do I need to change to get rid of it? Thanks! | According to the documentation, this happens because there is no event loop running at the time when you call gather. Deprecated since version 3.10: Deprecation warning is emitted if no positional arguments are provided or not all positional arguments are Future-like objects and there is no running event loop. As you've probably noticed, your code works. It will continue to work and you can ignore the deprecation warning as long as you use 3.10. At some point in the future, though, this may change to a runtime error. If you'll bear with me a moment, the recommended way to run an event loop is with run, not loop.run_until_complete. def aggregate_results(projects: list): results = asyncio.run(asyncio.gather(*(do_request(project) for project in projects))) return zip(projects, results) This, however, won't actually work. You'll instead get an exception ValueError: a coroutine was expected, got <_GatheringFuture pending> The fix is to instead await gather from another coroutine. async def get_project_results(projects: list): results = await asyncio.gather(*(do_request(project) for project in projects)) return results def aggregate_results(projects: list): results = asyncio.run(get_project_results(projects)) return zip(projects, results) (You could also use get_project_results with your version of aggregate_results.) | 5 | 7 |
70,297,207 | 2021-12-9 | https://stackoverflow.com/questions/70297207/python-should-i-upload-my-coverage-file-to-my-github-repository | I'm using Python 3.9 and coverage 6.2. I would like to have a record of my most recent coverage but I'm not sure if I should upload my .coverage file. I'm guessing no since it sort of has info about my directory layout. So I would like to know how I should go about that; is it even standard to upload such a thing? If not, why not? I also generated the htmlcov folder but I didn't upload it since it has a default gitignore for the entire folder. | Most people don't upload the .coverage or HTML report, because they don't need to track it over time. But there's no harm in uploading them. You mention directory layout as if it was a secret to protect, but isn't your layout already committed to GitHub? If you want to commit the HTML report, you will need to remove the .gitignore file in the directory. | 8 | 7 |
70,301,475 | 2021-12-10 | https://stackoverflow.com/questions/70301475/difference-between-functools-cache-and-lru-cache | Recently I came across functools.cache and didn't know how it differs from functools.lru_cache. I found posts about the difference between functools.cached_property and lru_cache but nothing specifically for cache and lru_cache. | functools.cache was newly added in version 3.9. The documentation states: Simple lightweight unbounded function cache. Sometimes called "memoize". Returns the same as lru_cache(maxsize=None), creating a thin wrapper around a dictionary lookup for the function arguments. Because it never needs to evict old values, this is smaller and faster than lru_cache() with a size limit. Example from the docs: @cache def factorial(n): return n * factorial(n-1) if n else 1 >>> factorial(10) # no previously cached result, makes 11 recursive calls 3628800 >>> factorial(5) # just looks up cached value result 120 >>> factorial(12) # makes two new recursive calls, the other 10 are cached 479001600 So, in short: cache and lru_cache(maxsize=None) are exactly the same (link to cpython source). But in cases where you don't want to limit the cache size, using cache may make the code clearer, since a least recently used cache without limit doesn't make much sense. | 50 | 65 |
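A quick sketch illustrating the equivalence described above (requires Python 3.9+ for functools.cache); both wrappers expose the same cache_info() interface because cache is just the unbounded lru_cache:

from functools import cache, lru_cache

@cache
def fib_a(n):
    return n if n < 2 else fib_a(n - 1) + fib_a(n - 2)

@lru_cache(maxsize=None)
def fib_b(n):
    return n if n < 2 else fib_b(n - 1) + fib_b(n - 2)

print(fib_a(50) == fib_b(50))                  # True
print(fib_a.cache_info(), fib_b.cache_info())  # both report maxsize=None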
70,298,050 | 2021-12-9 | https://stackoverflow.com/questions/70298050/how-to-send-a-document-with-telegram-bot | I followed a tutorial in youtube on how to create a Telegram Bot and for now it can only send messages but I want to send even files like documents or audio, video, photos etc... For now im just trying to send a file but I'm pretty confused and I don't know how to do it. The source code of the bot is divided in 2 main files. One responser.py: def responses(input_text): user_message = str(input_text).lower() if user_message in ("test", "testing"): return "123 working..." return "The command doesn't exists. Type /help to see the command options." and main.py: import constants as key from telegram.ext import * import responser as r print("Hello. Cleint has just started.") def start_command(update): update.message.reply_text("The Bot Has Started send you command sir.") def help_command(update): update.message.reply_text(""" Welcome to the Cleint Bot. For this purchase the following commands are available: send - send command is to send the log file from the other side of computer""") def send_document(update, context): doc_file = open("image1.png", "rb") chat_id = update.effctive_chat.id return context.bot.send_document(chat_id, doc_file) def handle_message(update, context): text = str(update.message.text).lower() response = r.responses(text) update.message.reply_text(response) def error(update, context): print(f"Update {update} cause error: {context.error}") def main(): updater = Updater(key.API_KEY, use_context=True) dp = updater.dispatcher dp.add_handler(CommandHandler("start", start_command)) dp.add_handler(CommandHandler("help", help_command)) dp.add_handler(MessageHandler(Filters.text, handle_message)) dp.add_error_handler(error) updater.start_polling() updater.idle() main() could someone help me out? | Try the following as your send_document function: def send_document(update, context): chat_id = update.message.chat_id document = open('image1.png', 'rb') context.bot.send_document(chat_id, document) And add the command 'send' to the bot in the main function like this: dp.add_handler(CommandHandler("send", send_document)) This will make it so if you type /send in Telegram, the bot will send you the document. | 5 | 11 |
70,296,308 | 2021-12-9 | https://stackoverflow.com/questions/70296308/how-to-create-substrings-efficiently | Given a string, typically a sentence, I want to extract all substrings of lengths 3, 4, 5, 6. How can I achieve this efficiently using only Python's standard library? Here is my approach; I am looking for one which is faster. To me it seems the three outer loops are inevitable either way, but maybe there is a low-level optimized solution with itertools or so. import time def naive(test_sentence, start, end): grams = [] for word in test_sentence: for size in range(start, end): for i in range(len(word)): k = word[i:i+size] if len(k)==size: grams.append(k) return grams n = 10**6 start, end = 3, 7 test_sentence = "Hi this is a wonderful test sentence".split(" ") start_time = time.time() for _ in range(n): naive(test_sentence, start, end) end_time = time.time() print(f"{end_time-start_time} seconds for naive approach") Output of naive(): ['thi', 'his', 'this', 'won', 'ond', 'nde', 'der', 'erf', 'rfu', 'ful', 'wond', 'onde', 'nder', 'derf', 'erfu', 'rful', 'wonde', 'onder', 'nderf', 'derfu', 'erful', 'wonder', 'onderf', 'nderfu', 'derful', 'tes', 'est', 'test', 'sen', 'ent', 'nte', 'ten', 'enc', 'nce', 'sent', 'ente', 'nten', 'tenc', 'ence', 'sente', 'enten', 'ntenc', 'tence', 'senten', 'entenc', 'ntence'] Second version: def naive2(test_sentence,start,end): grams = [] for word in test_sentence: if len(word) >= start: for size in range(start,end): for i in range(len(word)-size+1): grams.append(word[i:i+size]) return grams | Well, I do not think it is possible to improve the algorithm, but you can micro-optimize the function: def naive3(test_sentence,start,end): rng = range(start,end) return [word[i:i+size] for word in test_sentence if len(word) >= start for size in rng for i in range(len(word)+1-size)] Python 3.8 introduces assignment expressions that are quite useful for performance. Thus if you can use a recent version, then you can write: def naive4(test_sentence,start,end): rng = range(start,end) return [word[i:i+size] for word in test_sentence if (lenWord := len(word)+1) > start for size in rng for i in range(lenWord-size)] Here are performance results: naive2: 8.28 µs ± 55 ns per call naive3: 7.28 µs ± 124 ns per call naive4: 6.86 µs ± 48 ns per call (20% faster than naive2) Note that half of the time of naive4 is spent in creating the word[i:i+size] string objects and the rest is mainly spent in the CPython interpreter (mainly due to the creation/reference-counting/deletion of variable-sized integer objects). | 5 | 4 |
70,281,103 | 2021-12-8 | https://stackoverflow.com/questions/70281103/can-winget-install-an-older-version-of-python | Currently, the latest Python 3 offered through winget is 3.10.150.0: Name Id Version Match Source ----------------------------------------------------------------------------------------------------------------------------------- Python 3 Python.Python.3 3.10.150.0 Command: python winget but I'd like to install 3.9 and keep using 3.9. Is it possible to do that with winget? | winget install -e --id Python.Python -v 3.9.0 | 18 | 21 |
70,275,016 | 2021-12-8 | https://stackoverflow.com/questions/70275016/split-torch-dataset-without-shuffling | I'm using Pytorch to run Transformer model. when I want to split data (tokenized data) i'm using this code: train_dataset, test_dataset = torch.utils.data.random_split( tokenized_datasets, [train_size, test_size]) torch.utils.data.random_split using shuffling method, but I don't want to shuffle. I want to split it sequentially. Any advice? thanks | The random_split method has no parameter that can help you create a non-random sequential split. The easiest way to achieve a sequential split is by directly passing the indices for the subset you want to create: # Created using indices from 0 to train_size. train_dataset = torch.utils.data.Subset(tokenized_datasets, range(train_size)) # Created using indices from train_size to train_size + test_size. test_dataset = torch.utils.data.Subset(tokenized_datasets, range(train_size, train_size + test_size)) Refer: PyTorch docs. | 6 | 18 |
70,272,486 | 2021-12-8 | https://stackoverflow.com/questions/70272486/convert-each-row-of-a-dataframe-to-list | I have a dataframe like this: df = pd.DataFrame({'A': ['1', '2', '3'], 'B': ['aa', 'b', 'c']}) A B 0 1 aa 1 2 b 2 3 c I want to convert each row of column B to a list. For example, my desired output is something like this: df_new A B 0 1 [aa] 1 2 [b] 2 3 [c] | You can use str.split together with apply to turn each value into a list: import pandas as pd df = pd.DataFrame({'A': ['1', '2', '3'], 'B': ['a', 'b', 'c']}) df['B'] = df['B'].apply(lambda x: x.split(',')) print(df) | 4 | 6 |
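If the goal is literally to wrap each value of column B in a one-element list (rather than split on commas), a slightly more direct variant of the answer above would be:

df['B'] = df['B'].apply(lambda x: [x])

This also behaves correctly for values that happen to contain a comma.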
70,270,181 | 2021-12-8 | https://stackoverflow.com/questions/70270181/enter-doesnt-work-in-vs-code-for-python | I installed Anaconda. Somehow, afterwards, when I press Enter in Python files, it no longer works. This error shows up: command 'pythonIndent.newlineAndIndent' not found How do I solve this problem? | Try checking your keybindings.json file. Also, disable all VS Code extensions and re-enable them one by one; the problem may be caused by a buggy extension. | 9 | 3 |
70,185,942 | 2021-12-1 | https://stackoverflow.com/questions/70185942/why-i-am-getting-notimplementederror-database-objects-do-not-implement-truth-v | I am trying to connect Django with MongoDB using Djongo. I have changed the Database parameter but I am getting this error: NotImplementedError: Database objects do not implement truth value testing or bool(). when I am running the makemigration command. Please can anybody explain why I am getting this error and how to resolve it? I have included my settings.py file, the error log, and a screenshot of the MongoDB Compass setup. settings.py """ Django settings for Chatify project. Generated by 'django-admin startproject' using Django 3.2.9. For more information on this file, see https://docs.djangoproject.com/en/3.2/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/3.2/ref/settings/ """ from pathlib import Path # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'django-insecure-1k4mo05el_0112guspx^004n-i&3h#u4gyev#27u)tkb8t82_%' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'api.apps.ApiConfig', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'Chatify.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'Chatify.wsgi.application' # Database # https://docs.djangoproject.com/en/3.2/ref/settings/#databases # DATABASES = { # 'default': { # 'ENGINE': 'django.db.backends.sqlite3', # 'NAME': BASE_DIR / 'db.sqlite3', # } # } DATABASES = { 'default': { 'ENGINE': 'djongo', 'NAME': 'users', } } # Password validation # https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/3.2/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/3.2/howto/static-files/ STATIC_URL = '/static/' # Default primary key field type # 
https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' Error Log File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\django\core\management\__init__.py", line 419, in execute_from_command_line utility.execute() File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\django\core\management\__init__.py", line 413, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\django\core\management\base.py", line 367, in run_from_argv connections.close_all() File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\django\db\utils.py", line 213, in close_all connection.close() File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\django\utils\asyncio.py", line 33, in inner return func(*args, **kwargs) File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\django\db\backends\base\base.py", line 294, in close self._close() File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\djongo\base.py", line 208, in _close if self.connection: File "C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\pymongo\database.py", line 829, in __bool__ raise NotImplementedError("Database objects do not implement truth " NotImplementedError: Database objects do not implement truth value testing or bool(). Please compare with None instead: database is not None MongoDB Compass | The problem is with the new version of pymongo (4.0 from 29.11.2021) which is not supported by Djongo 1.3.6. You need to install pymongo 3.12.1. | 28 | 87 |
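A sketch of the corresponding install command for the fix described in the answer (the version pin is the one named there):

pip install "pymongo==3.12.1"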
70,187,831 | 2021-12-1 | https://stackoverflow.com/questions/70187831/hide-variable-e-g-password-from-print-and-call-of-pydantic-data-model | I am using pydantic for some user/password data model. When the model is printed, I want to replace the value of password with something else (*** for example) to prevent the password from accidentally being written to log files or the console. from pydantic import BaseModel class myUserClass(BaseModel): User = 'foo' Password = 'bar' def __str__(self): return "Hidden parameters." U = myUserClass() print(U) # > Hidden parameters. # Should be User='foo', Password='***' or similar U # This will still show my password. #> myUserClass(User='foo', Password='bar') How can I access the string that is normally printed and replace only 'bar' with '***', but keeping all parameters? How can I also do that when just calling U? This may be less important, as for logging and console output print is usually called. | Pydantic provides the SecretStr data type, which is replaced by *** when converted to str (e.g. printed); the actual value can be obtained by the get_secret_value() method: class Foobar(BaseModel): password: SecretStr empty_password: SecretStr # Initialize the model. f = Foobar(password='1234', empty_password='') # Assert str and repr are correct. assert str(f.password) == '**********' assert str(f.empty_password) == '' # Assert retrieval of secret value is correct assert f.password.get_secret_value() == '1234' assert f.empty_password.get_secret_value() == '' | 5 | 7 |
70,189,517 | 2021-12-1 | https://stackoverflow.com/questions/70189517/requests-chunkedencodingerror-with-requests-get-but-not-when-using-postman | I have a URL that can only be accessed from within our VPC. If I enter the URL via Postman, I am able to see the content (which is a PDF) and I'm able to save the output to a file perfectly fine. However, when trying to automate this using python requests with import requests r = requests.get(url, params=params) I get an exception ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read)) The other Stackoverflow questions haven't really helped much with this, and this is consistently reproducible with requests. | I think the reason for this error is that the server is not correctly encoding the chunks. Some of the chunk-size lines are not valid integers (an empty b'' is received), which is why you are getting the chunked encoding error. Try the code below: import requests from requests.exceptions import ChunkedEncodingError response = requests.get(url, params=params, stream=True) try: for data in response.iter_content(chunk_size=1024): print(data) except ChunkedEncodingError as ex: print(f"Invalid chunk encoding {str(ex)}") | 11 | 15 |
70,219,200 | 2021-12-3 | https://stackoverflow.com/questions/70219200/python-fastapi-base-path-control | When I use FastAPI, how can I specify a base path for the web-service? To put it another way - are there arguments to the FastAPI object that can set the end-point and any others I define, to a different root path? For example, if I had the code with the spurious argument root below, it would attach my /my_path end-point to /my_server_path/my_path? from fastapi import FastAPI, Request app = FastAPI(debug = True, root = 'my_server_path') @app.get("/my_path") def service( request : Request ): return { "message" : "my_path" } | You can use an APIRouter and add it to the app after adding the paths: from fastapi import APIRouter, FastAPI, Request app = FastAPI() prefix_router = APIRouter(prefix="/my_server_path") # Add the paths to the router instead @prefix_router.get("/my_path") def service( request : Request ): return { "message" : "my_path" } # Now add the router to the app app.include_router(prefix_router) When adding the router first and then adding paths, it does not work. It seems that the paths are not detected dynamically. Note: a path prefix must start with / (Thanks to Harshal Perkh for this) | 10 | 17 |
70,199,494 | 2021-12-2 | https://stackoverflow.com/questions/70199494/how-to-make-vscode-honor-black-excluded-files-in-pyproject-toml-configuration-wh | I have the following pyproject.toml for configuring black: [tool.black] exclude = 'foo.py' If I run black . from the project's root folder that only contains foo.py, I get "No Python files are present to be formatted. Nothing to do" as expected. However, when I save foo.py from within VS Code (I have black configured as the formatter and enabled Format On Save), the file is still formatted by black. Interestingly, VS Code seems to honor other configurations, e.g. line-length. Is there a way to make VSCode honor the exclude configuration? | The --force-exclude option also excludes files if they are explicitly listed. Thus, it also works with formatOnSave in VS Code. In the example above, simply use [tool.black] force-exclude = 'foo.py' | 5 | 2 |
70,263,918 | 2021-12-7 | https://stackoverflow.com/questions/70263918/how-to-prevent-multiprocessing-from-inheriting-imports-and-globals | I'm using multiprocessing in a larger code base where some of the import statements have side effects. How can I run a function in a background process without having it inherit global imports? # helper.py: print('This message should only print once!') # main.py: import multiprocessing as mp import helper # This prints the message. def worker(): pass # Unfortunately this also prints the message again. if __name__ == '__main__': mp.set_start_method('spawn') process = mp.Process(target=worker) process.start() process.join() Background: Importing TensorFlow initializes CUDA which reserves some amount of GPU memory. As a result, spawning too many processes leads to a CUDA OOM error, even though the processes don't use TensorFlow. Similar question without an answer: How to avoid double imports with the Python multiprocessing module? | Is there a resource that explains exactly what the multiprocessing module does when starting an mp.Process? Super quick version (using the spawn context, not fork): Some stuff (a pair of pipes for communication, cleanup callbacks, etc.) is prepared, then a new process is created with fork()/exec(). On Windows it's CreateProcessW(). The new Python interpreter is called with a startup script spawn_main() and passed the communication pipe file descriptors via a crafted command string and the -c switch. The startup script cleans up the environment a little bit, then unpickles the Process object from its communication pipe. Finally it calls the run method of the process object. So what about importing of modules? Pickle semantics handle some of it, but __main__ and sys.modules need some TLC, which is handled here (during the "cleans up the environment" bit). | 5 | 2 |
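Building on the spawn behaviour described in the answer above, one common workaround for the question's problem (only a sketch, not the only option) is to keep the side-effect import out of module level, so the re-imported __main__ in the child process never executes it:

# main.py
import multiprocessing as mp

def worker():
    pass  # the child does not need helper, so nothing heavy is imported here

def parent_only_setup():
    import helper  # deferred import: runs only when the parent calls this function

if __name__ == '__main__':
    mp.set_start_method('spawn')
    parent_only_setup()  # the message prints once, in the parent
    process = mp.Process(target=worker)
    process.start()
    process.join()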
70,184,494 | 2021-12-1 | https://stackoverflow.com/questions/70184494/on-what-systems-does-python-not-use-ieee-754-double-precision-floats | Python makes various references to IEEE 754 floating point operations, but doesn't guarantee 1 2 that it'll be used at runtime. I'm therefore wondering where this isn't the case. CPython source code defers to whatever the C compiler is using for a double, which in practice is an IEEE 754-2008 binary64 on all common systems I'm aware of, e.g.: Linux and BSD distros (e.g. FreeBSD, OpenBSD, NetBSD) Intel i386/x86 and x86-64 ARM: AArch64 Power: PPC64 MacOS all architectures supported are 754 compatible Windows x86 and x86-64 systems I'm aware there are other platforms it's known to build on but don't know how these work out in practice. | Update: Since I wrote the original answer below, the situation has changed slightly. CPython versions 3.11 and later now require that the platform C double follows the IEEE 754 binary64 format. This was mostly a matter of convenience for developers - it allowed us to remove special-case code that was in practice close to untestable. Python the language still does not stipulate that IEEE 754 is required, and there's nothing to stop someone from patching CPython to add support for an unusual platform that doesn't follow IEEE 754; it would still be reasonable to call the result "Python". Moreover, even for CPython there remains no documented guarantee that the format will be IEEE 754 binary64 - the developers could decide to reverse the IEEE 754 binary64 requirement. (I personally think that that's extremely unlikely to happen, at least within the next decade, but it's possible.) In theory, as you say, CPython is designed to be buildable and usable on any platform without caring about what floating-point format their C double is using. In practice, two things are true: To the best of my knowledge, CPython has not met a system that's not using IEEE 754 binary64 format for its C double within the last 15 years (though I'd love to hear stories to the contrary; I've been asking about this at conferences and the like for a while). My knowledge is a long way from perfect, but I've been involved with mathematical and floating-point-related aspects of CPython core development for at least 13 of those 15 years, and paying close attention to floating-point related issues in that time. I haven't seen any indications on the bug tracker or elsewhere that anyone has been trying to run CPython on systems using a floating-point format other than IEEE 754 binary64. I strongly suspect that the first time modern CPython does meet such a system, there will be a significant number of test failures, and so the core developers are likely to find out about it fairly quickly. While we've made an effort to make things format-agnostic, it's currently close to impossible to do any testing of CPython on other formats, and it's highly likely that there are some places that implicitly assume IEEE 754 format or semantics, and that will break for something more exotic. We have yet to see any reports of such breakage. There's one exception to the "no bug reports" report above. It's this issue: https://bugs.python.org/issue27444. There, Greg Stark reported that there were indeed failures using VAX floating-point. It's not clear to me whether the original bug report came from a system that emulated VAX floating-point. I joined the CPython core development team in 2008. 
Back then, while I was working on floating-point-related issues I tried to keep in mind 5 different floating-point formats: IEEE 754 binary64, IBM's hex floating-point format as used in their zSeries mainframes, the Cray floating-point format used in the SV1 and earlier machines, and the VAX D-float and G-float formats; anything else was too ancient to be worth worrying about. Since then, the VAX formats are no longer worth caring about. Cray machines now use IEEE 754 floating-point. The IBM hex floating-point format is very much still in existence, but in practice the relevant IBM hardware also has support for IEEE 754, and the IBM machines that Python meets all seem to be using IEEE 754 floating-point. Rather than exotic floating-point formats, the modern challenges seem to be more to do with variations in adherence to the rest of the IEEE 754 standard: systems that don't support NaNs, or treat subnormals differently, or allow use of higher precision for intermediate operations, or where compilers make behaviour-changing optimizations. The above is all about CPython-the-implementation, not Python-the-language. But the story for the Python language is largely similar. In theory, it makes no assumptions about the floating-point format. In practice, I don't know of any alternative Python implementations that don't end up using an IEEE 754 binary format (if not semantics) for the float type. IronPython and Jython both target runtimes that are explicit that floating-point will be IEEE 754 binary64. JavaScript-based versions of Python will similarly presumably be using JavaScript's Number type, which is required to be IEEE 754 binary64 by the ECMAScript standard. PyPy runs on more-or-less the same platforms that CPython does, with the same floating-point formats. MicroPython uses single-precision for its float type, but as far as I know that's still IEEE 754 binary32 in practice. | 10 | 15 |
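For anyone who wants to inspect what their own interpreter uses, a small sketch (sys.float_info is standard; float.__getformat__ is a CPython-specific helper mainly intended for tests):

import sys
print(sys.float_info)                 # mant_dig=53, max_exp=1024, ... on binary64 builds
print(float.__getformat__("double"))  # e.g. 'IEEE, little-endian' on IEEE 754 platforms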
70,236,730 | 2021-12-5 | https://stackoverflow.com/questions/70236730/ssl-sslcertverificationerror-ssl-certificate-verify-failed-certificate-verif | I was playing with some web frameworks for Python, when I tried to use the framework aiohhtp with this code (taken from the documentation): import aiohttp import asyncio #******************************** # a solution I found on the forum: # https://stackoverflow.com/questions/50236117/scraping-ssl-certificate-verify-failed-error-for-http-en-wikipedia-org?rq=1 import ssl ssl._create_default_https_context = ssl._create_unverified_context # ... but it doesn't work :( #******************************** async def main(): async with aiohttp.ClientSession() as session: async with session.get("https://python.org") as response: print("Status:", response.status) print("Content-type:", response.headers["content-type"]) html = await response.text() print("Body:", html[:15], "...") loop = asyncio.get_event_loop() loop.run_until_complete(main()) When I run this code I get this traceback: DeprecationWarning: There is no current event loop loop = asyncio.get_event_loop() Traceback (most recent call last): File "c:\Python310\lib\site-packages\aiohttp\connector.py", line 986, in _wrap_create_connection return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa File "c:\Python310\lib\asyncio\base_events.py", line 1080, in create_connection transport, protocol = await self._create_connection_transport( File "c:\Python310\lib\asyncio\base_events.py", line 1110, in _create_connection_transport await waiter File "c:\Python310\lib\asyncio\sslproto.py", line 528, in data_received ssldata, appdata = self._sslpipe.feed_ssldata(data) File "c:\Python310\lib\asyncio\sslproto.py", line 188, in feed_ssldata self._sslobj.do_handshake() File "c:\Python310\lib\ssl.py", line 974, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:997) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\chris\Documents\Programmi_in_Python_offline\Esercitazioni\Python_commands\aioWebTest.py", line 21, in <module> loop.run_until_complete(main()) File "c:\Python310\lib\asyncio\base_events.py", line 641, in run_until_complete return future.result() File "c:\Users\chris\Documents\Programmi_in_Python_offline\Esercitazioni\Python_commands\aioWebTest.py", line 12, in main async with session.get("https://python.org") as response: File "c:\Python310\lib\site-packages\aiohttp\client.py", line 1138, in __aenter__ self._resp = await self._coro File "c:\Python310\lib\site-packages\aiohttp\client.py", line 535, in _request conn = await self._connector.connect( File "c:\Python310\lib\site-packages\aiohttp\connector.py", line 542, in connect proto = await self._create_connection(req, traces, timeout) File "c:\Python310\lib\site-packages\aiohttp\connector.py", line 907, in _create_connection _, proto = await self._create_direct_connection(req, traces, timeout) File "c:\Python310\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection raise last_exc File "c:\Python310\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection transp, proto = await self._wrap_create_connection( File "c:\Python310\lib\site-packages\aiohttp\connector.py", line 988, in _wrap_create_connection raise ClientConnectorCertificateError(req.connection_key, exc) from exc 
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host python.org:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:997)')] From the final row I have thought that it was a problem with a certificate that is expired, so I searched on the internet and I tried to solve installing some certificates: COMODO ECC Certification Authority; three certificates that I took from the site www.python.org under Bango's advice for the question: I'm sorry for the long question, but I searched a lot on the internet and I couldn't find the solution for my case. Thank you in advance, guys <3 | Picking up on the comment by @salparadise, the following worked for me: session.get("https://python.org", ssl=False) Edit (2023-03-10): I've run into this problem again and have found this answer to a similar question, which provides a much better long-term solution. In short: use the certificates in the certifi package to create the aiohttp client session (as @salparadise also suggested earlier). You'll find the code to do so at the link above. It worked for me just as well as disabling ssl, and is of course a much better way of solving the problem. | 11 | 6 |
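A sketch of the certifi-based alternative mentioned at the end of the answer (instead of disabling verification), assuming the certifi package is installed:

import asyncio
import ssl
import aiohttp
import certifi

ssl_context = ssl.create_default_context(cafile=certifi.where())

async def main():
    connector = aiohttp.TCPConnector(ssl=ssl_context)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("https://python.org") as response:
            print("Status:", response.status)

asyncio.run(main())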
70,184,634 | 2021-12-1 | https://stackoverflow.com/questions/70184634/setting-a-default-value-based-on-another-value | I have the following Pydantic model: from pydantic import BaseModel import key class Wallet(BaseModel): private_key: str = Field(default_factory=key.generate_private_key) address: str I want address to have a default_factory as a function that takes a private_key as input and returns an address. My intentions would be something along the lines of the following: address: str = Field(default_factory=key.generate_address(self.private_key) How can I achieve this? | Another option is to just use @validator because in it you can access previously validated fields. From the documentation: validators are "class methods", so the first argument value they receive is the UserModel class, not an instance of UserModel. the second argument is always the field value to validate; it can be named as you please you can also add any subset of the following arguments to the signature (the names must match): values: a dict containing the name-to-value mapping of any previously-validated fields Example: class Wallet(BaseModel): private_key: str = Field(default_factory=key.generate_private_key) address: str = "" # "" seems better than None to use the correct type @validator("address", always=True) def get_address(cls, address: str, values: Dict[str, Any]) -> str: if address == "" and "private_key" in values: return key.generate_address(values["private_key"]) return address It can be argued that @validator should be preferred over @root_validator if you just want to generate a value for a single field. There are two important aspects of this approach that must be considered: The "previously-validated fields" from the documentation means that in your case private_key must be defined before address. The values of fields defined after the field that is validated are not available to the validator. If the field that is validated has a default value and you still want the validator to be executed in that case, you have to use always=True. | 9 | 6 |
70,253,218 | 2021-12-6 | https://stackoverflow.com/questions/70253218/macos-m1-system-is-detected-as-arm-by-python-package-even-though-im-using-roset | I'm on a Macbook with M1 (Apple ARM architecture) and I've tried running the following Python code using the layoutparser library, which indirectly uses pycocotools: import layoutparser as lp lp.Detectron2LayoutModel() And I've received the error: [...] ImportError: dlopen([...]/.venv/lib/python3.9/site-packages/pycocotools/_mask.cpython-39-darwin.so, 0x0002): tried: '[...]/.venv/lib/python3.9/site-packages/pycocotools/_mask.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/_mask.cpython-39-darwin.so' (no such file), '/usr/lib/_mask.cpython-39-darwin.so' (no such file) The crucial info for me seems to be [...] is an incompatible architecture (have 'x86_64', need 'arm64e') [...]. Indeed, I am using the Apple ARM architecture, and sometimes it is not supported by some software. This is usually solved by using Rosetta, which simulates an Intel-x64 architecture. So I start a terminal with Rosetta (arch -x86_64 zsh), create a new virtual environment, make a fresh install of the dependencies, and try to run the code again ... ... and I receive the same error that I had also had without Rosetta: [...] is an incompatible architecture (have 'x86_64', need 'arm64e') [...] I've double-checked that Rosetta is really activated: > uname -m x86_64 Rosetta seems to be working. And yet, according to the error message, it seems not to be working. Any ideas what could be the problem with Rosetta, or the library, or whatever, and how I could try fixing it? | Charles Duffy explained the problem in the comments, thank you! When I checked the platform in Python, it was indeed ARM: > python -c 'import platform; print(platform.platform())' macOS-12.0.1-arm64-i386-64bit So I had been using a Python installation for ARM. Now I installed brew and then python3 from the Rosetta terminal and used the newly installed Python to initiate a fresh virtual environment, and this fixed it. (This article helped me a bit with it.) Update: When creating Python environments with conda, it is possible to specify whether they should use Apple ARM or Intel-x64: CONDA_SUBDIR=osx-arm64 conda create -n my_env python makes an ARM environment CONDA_SUBDIR=osx-64 conda create -n my_env python makes an x64 environment | 6 | 12 |
70,229,552 | 2021-12-4 | https://stackoverflow.com/questions/70229552/package-requirement-psycopg2-2-9-1-not-satisfied-pycharm-macos | after long hours of trying: i installed psycopg2==2.9.1 with pip installed with pip I tried adding it to all the interpreter paths i could find but still keep getting this message: error message I tried as well restarting pycharm, invalidate caches .. When trying to just install it with pycharm i get this error messege: Collecting psycopg2==2.9.1 Using cached psycopg2-2.9.1.tar.gz (379 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Building wheels for collected packages: psycopg2 Building wheel for psycopg2 (setup.py): started Building wheel for psycopg2 (setup.py): finished with status 'error' Running setup.py clean for psycopg2 Failed to build psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2: started Running setup.py install for psycopg2: finished with status 'error' ERROR: Command errored out with exit status 1: command: '/Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/bin/python' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/setup.py'"'"'; __file__='"'"'/private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-wheel-lbvr1lzp cwd: /private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/ Complete output (53 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.13.0-x86_64-3.9 creating build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/_json.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/extras.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/errorcodes.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/tz.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/_range.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/_ipaddress.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/__init__.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/extensions.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/errors.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/sql.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/pool.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 warning: build_py: byte-compiling is disabled, skipping. 
running build_ext building 'psycopg2._psycopg' extension creating build/temp.macosx-10.13.0-x86_64-3.9 creating build/temp.macosx-10.13.0-x86_64-3.9/psycopg /usr/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/MISSING/DEPS//stage/include -I/MISSING/DEPS//stage/unixodbc/include -I/MISSING/DEPS//stage/include -I/MISSING/DEPS//stage/unixodbc/include -DPSYCOPG_VERSION=2.9.1 (dt dec pq3 ext lo64) -DPSYCOPG_DEBUG=1 -DPG_VERSION_NUM=140001 -DHAVE_LO64=1 -DPSYCOPG_DEBUG=1 -I/Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/include -I/Applications/QGIS.app/Contents/MacOS/include/python3.9 -I. -I/opt/homebrew/include -I/opt/homebrew/include/postgresql/server -I/opt/homebrew/Cellar/icu4c/69.1/include -I/opt/homebrew/opt/[email protected]/include -I/opt/homebrew/opt/readline/include -c psycopg/adapter_asis.c -o build/temp.macosx-10.13.0-x86_64-3.9/psycopg/adapter_asis.o In file included from psycopg/adapter_asis.c:28: ./psycopg/psycopg.h:35:10: error: 'Python.h' file not found with <angled> include; use "quotes" instead #include <Python.h> ^~~~~~~~~~ "Python.h" ./psycopg/psycopg.h:35:10: warning: non-portable path to file '<python.h>'; specified path differs in case from file name on disk [-Wnonportable-include-path] #include <Python.h> ^~~~~~~~~~ <python.h> In file included from psycopg/adapter_asis.c:28: In file included from ./psycopg/psycopg.h:35: psycopg/Python.h:31:2: error: "psycopg requires Python 3.6" #error "psycopg requires Python 3.6" ^ psycopg/Python.h:34:10: fatal error: 'structmember.h' file not found #include <structmember.h> ^~~~~~~~~~~~~~~~ 1 warning and 3 errors generated. It appears you are missing some prerequisite to build the package from source. You may install a binary package by installing 'psycopg2-binary' from PyPI. If you want to install psycopg2 from source, please install the packages required for the build and try again. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>). 
error: command '/usr/bin/clang' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for psycopg2 ERROR: Command errored out with exit status 1: command: '/Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/bin/python' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/setup.py'"'"'; __file__='"'"'/private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-record-fkjtibr5/install-record.txt --single-version-externally-managed --compile --install-headers '/Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/include/site/python3.9/psycopg2' cwd: /private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/ Complete output (55 lines): running install /Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib.macosx-10.13.0-x86_64-3.9 creating build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/_json.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/extras.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/errorcodes.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/tz.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/_range.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/_ipaddress.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/__init__.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/extensions.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/errors.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/sql.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 copying lib/pool.py -> build/lib.macosx-10.13.0-x86_64-3.9/psycopg2 warning: build_py: byte-compiling is disabled, skipping. running build_ext building 'psycopg2._psycopg' extension creating build/temp.macosx-10.13.0-x86_64-3.9 creating build/temp.macosx-10.13.0-x86_64-3.9/psycopg /usr/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/MISSING/DEPS//stage/include -I/MISSING/DEPS//stage/unixodbc/include -I/MISSING/DEPS//stage/include -I/MISSING/DEPS//stage/unixodbc/include -DPSYCOPG_VERSION=2.9.1 (dt dec pq3 ext lo64) -DPSYCOPG_DEBUG=1 -DPG_VERSION_NUM=140001 -DHAVE_LO64=1 -DPSYCOPG_DEBUG=1 -I/Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/include -I/Applications/QGIS.app/Contents/MacOS/include/python3.9 -I. 
-I/opt/homebrew/include -I/opt/homebrew/include/postgresql/server -I/opt/homebrew/Cellar/icu4c/69.1/include -I/opt/homebrew/opt/[email protected]/include -I/opt/homebrew/opt/readline/include -c psycopg/adapter_asis.c -o build/temp.macosx-10.13.0-x86_64-3.9/psycopg/adapter_asis.o In file included from psycopg/adapter_asis.c:28: ./psycopg/psycopg.h:35:10: error: 'Python.h' file not found with <angled> include; use "quotes" instead #include <Python.h> ^~~~~~~~~~ "Python.h" ./psycopg/psycopg.h:35:10: warning: non-portable path to file '<python.h>'; specified path differs in case from file name on disk [-Wnonportable-include-path] #include <Python.h> ^~~~~~~~~~ <python.h> In file included from psycopg/adapter_asis.c:28: In file included from ./psycopg/psycopg.h:35: psycopg/Python.h:31:2: error: "psycopg requires Python 3.6" #error "psycopg requires Python 3.6" ^ psycopg/Python.h:34:10: fatal error: 'structmember.h' file not found #include <structmember.h> ^~~~~~~~~~~~~~~~ 1 warning and 3 errors generated. It appears you are missing some prerequisite to build the package from source. You may install a binary package by installing 'psycopg2-binary' from PyPI. If you want to install psycopg2 from source, please install the packages required for the build and try again. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>). error: command '/usr/bin/clang' failed with exit code 1 ---------------------------------------- ERROR: Command errored out with exit status 1: '/Users//Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/bin/python' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/setup.py'"'"'; __file__='"'"'/private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-install-myo1zngt/psycopg2_e9a9698f63464cb99bc5bf3655675aa2/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/fj/blrljwxs22q3csr4fqw7ycmc0000gn/T/pip-record-fkjtibr5/install-record.txt --single-version-externally-managed --compile --install-headers '/Users/Library/Application Support/QGIS/QGIS3/profiles/default/python/plugins/data_index-development/venv/include/site/python3.9/psycopg2' Check the logs for full command output. What can fix it? | Usually had something similar happen to me while using anaconda, and I had to manually delete the folder location and reinstall to be importable (even with new enviroment). I would suggest you try anaconda, miniconda with the same requirements and see if those help you out with your environment problem. Note: This requires separate installation. Follow some of the creation guides but in general the essence would be something like: conda create --name testenv python=3.8 conda activate testenv conda install psycopg2"==2.9.1" -c conda-forge After doing the last command in terminal, you are going to get something like this (just different version, but added into the command for 2.9.1): You generally can search on anaconda website if you aren't familiar with the channels and the commands. 
To check if everything is alright in the environment itself: Now try to use this environment in PyCharm (by changing the base interpreter to the testenv one); if you still have problems, do include the error message. This also separates the issue from PyCharm or the pip wheel, and lets you see whether you can get a running environment. | 8 | -2 |
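A quick sanity check — a minimal sketch, assuming the testenv environment above was created and activated — confirms that the interpreter in use is the conda one and that the conda-forge psycopg2 build is importable:

import sys
import psycopg2

# Both lines should point at the testenv environment, not the QGIS-bundled Python
print(sys.executable)        # e.g. .../envs/testenv/bin/python
print(psycopg2.__version__)  # e.g. 2.9.1 (dt dec pq3 ext lo64)

If PyCharm prints a different interpreter path, it is still bound to the old virtualenv and the build error will keep appearing there.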
70,192,880 | 2021-12-2 | https://stackoverflow.com/questions/70192880/how-to-know-what-version-of-a-github-action-to-use | I've noticed in various GitHub Action workflow examples, often when calling a pre-defined action (with the uses: syntax) then a particular version of that action is specified. For example: steps: - uses: actions/checkout@v2 - name: Set up Python uses: actions/setup-python@v2 with: python-version: '3.x' The above workflow specifies @v2 for both actions/checkout and actions/setup-python. The question is, how does one know that @v2 is the best version to use? And how will I know when @v3 becomes available? Even more confusing is the case of the action used to publish to pypi, pypa/gh-action-pypi-publish. In examples I have looked at, I have seen at least four different versions specified: pypa/gh-action-pypi-publish@27b31702a0e7fc50959f5ad993c78deac1bdfc29 pypa/gh-action-pypi-publish@master pypa/gh-action-pypi-publish@v1 pypa/gh-action-pypi-publish@release/v1 How do I know which one to use? And in general, how do you know which one's are available, and what the differences are? | How to know which version to use? When writing a workflow and including an action, I recommend looking at the Release tab on the GitHub repository. For actions/setup-python, that would be https://github.com/actions/setup-python/releases On that page, you should see what versions there are and what the latest one is. You want to use the latest version, because that way you can be sure you're not falling behind and upgrading doesn't become too painful in the future. How to reference a version? By convention, actions are published with specific tags (e.g. v1.0.1) as well as a major tag (e.g. v1). This allows you to reference an action like so actions/setup-python@v1. As soon as version v1.0.2 is published, you will automatically use that one. This means you profit from bug fixes and new features, but you're prevented from pulling in breaking changes. However, note that this is only by convention. Not every author of an action publishes a major tag and moves that along as new tags are published. Furthermore, an author might introduce a breaking change without bumping the major version. When to use other formats As you said there are other ways you can reference an action such as a specific commit (e.g. actions/setup-python@27b31702a0e7fc50959f5ad993c78deac1bdfc29) and others. In general, you want to stick to tags as described above. In particular, referencing @main or @master is dangerous, because you'll always get the latest changes, which might break your workflow. If there is an action that advises you to reference their default branch and they don't publish tags, I recommend creating an issue in their GitHub repository asking to publish tags. Using a git hash can be useful if you need to use a specific version. A use-case could be that you want to test if a specific version would fix a problem or if you see that the author of the action has pushed some new commits with changes that are not tagged yet. You could then test that version of the action. Security From a security perspective, using a tag (e.g. @v1 or @v1.1.0) or a branch (e.g. @main) is problematic, because the author of that repository could change where it refers to. A malicious author of an action could add malicious code to that branch or even simply not be careful enough when reviewing a PR and thereby introduce a vulnerability (e.g. via a transitive dependency). By using hashes (e.g. 
@27b31702a0e7fc50959f5ad993c78deac1bdfc29) you know exactly what you get and it doesn't change unless you choose to change the version by updating the hash (at which point you can carefully review the changes). As of early 2022, using hashes instead of tags is not widely adopted, but for example GitHub does this for their docs repository. As supply chain security becomes more important, tools are created to help with "pinning" (point to a specific version by hash rather than tag), such as sethvargo/ratchet. But even Dependabot (see below) should be able to update hashes to the latest hash. How to know when there is a new version? You can use Dependabot for that: Keeping your actions up to date with Dependabot. Dependabot is a tool that creates a pull request in your repository as soon as a new version of any of your actions is available such that you can review what the changes are and keep your workflow up to date. Here's a sample Dependabot configuration that keeps your actions up to date by creating PRs: version: 2 updates: - package-ecosystem: "github-actions" directory: "/" schedule: interval: "daily" | 13 | 21 |
70,184,543 | 2021-12-1 | https://stackoverflow.com/questions/70184543/import-numpy-could-not-be-resolved-pylance | I have written the following code: import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 5, 100) y = x**2 plt.plot(x, y) plt.xlabel("X-axis") plt.ylabel("Y-axis") plt.title("graph of $x^2$") plt.show() When I run the code, it shows the following error: I have installed NumPy successfully, but I am still getting this warning. How can I get rid of it? | Try this in the VS Code IDE: (1) ctrl-shift-p (2) In the search box, type python and then select Python: Select Interpreter (3) select Python 3.10.4 or whichever one matches your version. The problem is usually caused by the IDE not picking up the right interpreter. | 19 | 49 |
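To double-check that the interpreter selected in step (3) really is the one that has NumPy installed, a tiny sketch run in the VS Code integrated terminal (or a scratch file) is enough:

import sys
import numpy as np

print(sys.executable)   # should match the interpreter chosen via "Python: Select Interpreter"
print(np.__version__)   # confirms numpy is importable from that interpreter, so Pylance can resolve it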
70,266,649 | 2021-12-7 | https://stackoverflow.com/questions/70266649/python-difference-of-typing-with-asyncgenerator-or-asynciterator | I am creating a discord asynchronous library that is fully typed. I have a method that creates objects from aiohttp GET requests, such as the following example: async def get_bans(self): """|coro| Fetches all the bans in the guild. """ data = await self._http.get(f"guilds/{self.id}/bans") for ban_data in data: yield Ban.from_dict(construct_client_dict(self._client, ban_data)) I was wondering about the return type of this code snippet and whether it should be an AsyncGenerator[Ban, None] or AsyncIterator[Ban, None]. To be honest I have been searching for a bit, and I could not find any information that would give me a clear idea on the subject. | Official documentation describes AsyncIterator as: ... that provide __aiter__ and __anext__ methods ... AsyncGenerator leads from documentation to PEP525. ... generator is any function containing one or more yield expressions ... The result of calling an asynchronous generator function is an asynchronous generator object ... Asynchronous generators define both of these methods (__aiter__ and __anext__) So I think, because of the yield in your function, it's better to use AsyncGenerator. | 5 | 8 |
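A self-contained sketch of the distinction (count_down stands in for the get_bans coroutine from the question):

import asyncio
from typing import AsyncGenerator, AsyncIterator

async def count_down(n: int) -> AsyncGenerator[int, None]:
    # any async def containing `yield` is an asynchronous generator
    for i in range(n, 0, -1):
        yield i

async def main() -> None:
    gen: AsyncIterator[int] = count_down(3)  # every AsyncGenerator is also an AsyncIterator
    async for value in gen:
        print(value)

asyncio.run(main())

Note that AsyncIterator takes a single type parameter (AsyncIterator[Ban]); the two-argument form only exists for AsyncGenerator, where the second argument is the send type (None here).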
70,240,506 | 2021-12-6 | https://stackoverflow.com/questions/70240506/why-is-numpy-native-on-m1-max-greatly-slower-than-on-old-intel-i5 | I just got my new MacBook Pro with M1 Max chip and am setting up Python. I've tried several combinational settings to test speed - now I'm quite confused. First put my questions here: Why python run natively on M1 Max is greatly (~100%) slower than on my old MacBook Pro 2016 with Intel i5? On M1 Max, why there isn't significant speed difference between native run (by miniforge) and run via Rosetta (by anaconda) - which is supposed to be slower ~20%? On M1 Max and native run, why there isn't significant speed difference between conda installed Numpy and TensorFlow installed Numpy - which is supposed to be faster? On M1 Max, why run in PyCharm IDE is constantly slower ~20% than run from terminal, which doesn't happen on my old Intel Mac. Evidence supporting my questions is as follows: Here are the settings I've tried: 1. Python installed by Miniforge-arm64, so that python is natively run on M1 Max Chip. (Check from Activity Monitor, Kind of python process is Apple). Anaconda. Then python is run via Rosseta. (Check from Activity Monitor, Kind of python process is Intel). 2. Numpy installed by conda install numpy: numpy from original conda-forge channel, or pre-installed with anaconda. Apple-TensorFlow: with python installed by miniforge, I directly install tensorflow, and numpy will also be installed. It's said that, numpy installed in this way is optimized for Apple M1 and will be faster. Here is the installation commands: conda install -c apple tensorflow-deps python -m pip install tensorflow-macos python -m pip install tensorflow-metal 3. Run from Terminal. PyCharm (Apple Silicon version). Here is the test code: import time import numpy as np np.random.seed(42) a = np.random.uniform(size=(300, 300)) runtimes = 10 timecosts = [] for _ in range(runtimes): s_time = time.time() for i in range(100): a += 1 np.linalg.svd(a) timecosts.append(time.time() - s_time) print(f'mean of {runtimes} runs: {np.mean(timecosts):.5f}s') and here are the results: +-----------------------------------+-----------------------+--------------------+ | Python installed by (run on)β | Miniforge (native M1) | Anaconda (Rosseta) | +----------------------+------------+------------+----------+----------+---------+ | Numpy installed by β | Run from β | Terminal | PyCharm | Terminal | PyCharm | +----------------------+------------+------------+----------+----------+---------+ | Apple Tensorflow | 4.19151 | 4.86248 | / | / | +-----------------------------------+------------+----------+----------+---------+ | conda install numpy | 4.29386 | 4.98370 | 4.10029 | 4.99271 | +-----------------------------------+------------+----------+----------+---------+ This is quite slow. For comparison, run the same code on my old MacBook Pro 2016 with i5 chip - it costs 2.39917s. another post (but not in English) reports that run with M1 chip (not Pro or Max), miniforge+conda_installed_numpy is 2.53214s, and miniforge+apple_tensorflow_numpy is 1.00613s. you may also try on it your own. Here is the CPU information details: My old i5: $ sysctl -a | grep -e brand_string -e cpu.core_count machdep.cpu.brand_string: Intel(R) Core(TM) i5-6360U CPU @ 2.00GHz machdep.cpu.core_count: 2 My new M1 Max: % sysctl -a | grep -e brand_string -e cpu.core_count machdep.cpu.brand_string: Apple M1 Max machdep.cpu.core_count: 10 I follow instructions strictly from tutorials - but why would all these happen? 
Is it because of my installation flaws, or because of M1 Max chip? Since my work relies heavily on local runs, local speed is very important to me. Any suggestions to possible solution, or any data points on your own device would be greatly appreciated :) | Update Mar 28 2022: Please see @AndrejHribernik's comment below. How to install numpy on M1 Max, with the most accelerated performance (Apple's vecLib)? Here's the answer as of Dec 6 2021. Steps I. Install miniforge So that your Python is run natively on arm64, not translated via Rosseta. Download Miniforge3-MacOSX-arm64.sh, then Run the script, then open another shell $ bash Miniforge3-MacOSX-arm64.sh Create an environment (here I use name np_veclib) $ conda create -n np_veclib python=3.9 $ conda activate np_veclib II. Install Numpy with BLAS interface specified as vecLib To compile numpy, first need to install cython and pybind11: $ conda install cython pybind11 Compile numpy by (Thanks @Marijn's answer) - don't use conda install! $ pip install --no-binary :all: --no-use-pep517 numpy An alternative of 2. is to build from source $ git clone https://github.com/numpy/numpy $ cd numpy $ cp site.cfg.example site.cfg $ nano site.cfg Edit the copied site.cfg: add the following lines: [accelerate] libraries = Accelerate, vecLib Then build and install: $ NPY_LAPACK_ORDER=accelerate python setup.py build $ python setup.py install After either 2 or 3, now test whether numpy is using vecLib: >>> import numpy >>> numpy.show_config() Then, info like /System/Library/Frameworks/vecLib.framework/Headers should be printed. III. For further installing other packages using conda Make conda recognize packages installed by pip conda config --set pip_interop_enabled true This must be done, otherwise if e.g. conda install pandas, then numpy will be in The following packages will be installed list and installed again. But the new installed one is from conda-forge channel and is slow. Comparisons to other installations: 1. Competitors: Except for the above optimal one, I also tried several other installations A. np_default: conda create -n np_default python=3.9 numpy B. np_openblas: conda create -n np_openblas python=3.9 numpy blas=*=*openblas* C. np_netlib: conda create -n np_netlib python=3.9 numpy blas=*=*netlib* The above ABC options are directly installed from conda-forge channel. numpy.show_config() will show identical results. To see the difference, examine by conda list - e.g. openblas packages are installed in B. Note that mkl or blis is not supported on arm64. D. np_openblas_source: First install openblas by brew install openblas. Then add [openblas] path /opt/homebrew/opt/openblas to site.cfg and build Numpy from source. M1 and i9β9880H in this post. My old i5-6360U 2cores on MacBook Pro 2016 13in. 2. Benchmarks: Here I use two benchmarks: mysvd.py: My SVD decomposition import time import numpy as np np.random.seed(42) a = np.random.uniform(size=(300, 300)) runtimes = 10 timecosts = [] for _ in range(runtimes): s_time = time.time() for i in range(100): a += 1 np.linalg.svd(a) timecosts.append(time.time() - s_time) print(f'mean of {runtimes} runs: {np.mean(timecosts):.5f}s') dario.py: A benchmark script by Dario RadeΔiΔ at the post above. 3. 
Results: +-------+-----------+------------+-------------+-----------+--------------------+----+----------+----------+ | sec | np_veclib | np_default | np_openblas | np_netlib | np_openblas_source | M1 | i9-9880H | i5-6360U | +-------+-----------+------------+-------------+-----------+--------------------+----+----------+----------+ | mysvd | 1.02300 | 4.29386 | 4.13854 | 4.75812 | 12.57879 | / | / | 2.39917 | +-------+-----------+------------+-------------+-----------+--------------------+----+----------+----------+ | dario | 21 | 41 | 39 | 323 | 40 | 33 | 23 | 78 | +-------+-----------+------------+-------------+-----------+--------------------+----+----------+----------+ | 19 | 18 |
70,265,306 | 2021-12-7 | https://stackoverflow.com/questions/70265306/python-selenium-aws-lambda-change-webgl-vendor-renderer-for-undetectable-headles | Concept: Using AWS Lambda functions with Python and Selenium, I want to create a undetectable headless chrome scraper by passing a headless chrome test. I check the undetectability of my headless scraper by opening up the test and taking a screenshot. I ran this test on a Local IDE and on a Lambda server. Implementation: I will be using a python library called selenium-stealth and will follow their basic configuration: stealth(driver, languages=["en-US", "en"], vendor="Google Inc.", platform="Win32", webgl_vendor="Intel Inc.", renderer="Intel Iris OpenGL Engine", fix_hairline=True, ) I implemented this configuration on a Local IDE as well as an AWS Lambda Server to compare the results. Local IDE: Found below are the test results running on a local IDE: Lambda Server: When I run this on a Lambda server, both the WebGL Vendor and Renderer are blank. as shown below: I even tried to manually change the WebGL Vendor/Renderer using the following JavaScript command: driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {"source": "WebGLRenderingContext.prototype.getParameter = function(parameter) {if (parameter === 37445) {return 'VENDOR_INPUT';}if (parameter === 37446) {return 'RENDERER_INPUT';}return getParameter(parameter);};"}) Then I thought maybe that it could be something wrong with the parameter number. I configured the command execution without the if statement, but the same thing happened: It worked on my Local IDE but had no effect on an AWS Lambda Server. Simply Put: Is it possible to add Vendor/Renderer on AWS Lambda? In my efforts, it seems that there is no possible way. I made sure to submit this issue on the selenium-stealth GitHub Repository. | A solution I found for the missing WebGL Vendor/Renderer was using a docker container instead of the normal Lambda layers when creating a function. Not only does the storage increase by a factor of 40X but it also solves the WebGL Vendor/Renderer problem: | 10 | 1 |
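For anyone reproducing this, the vendor/renderer the page actually sees can also be read back directly through Selenium instead of screenshotting a test page; this is a minimal sketch assuming a driver object configured as in the question:

js = """
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
if (!gl) return ['no webgl', 'no webgl'];
const info = gl.getExtension('WEBGL_debug_renderer_info');
if (!info) return ['', ''];
return [gl.getParameter(info.UNMASKED_VENDOR_WEBGL),
        gl.getParameter(info.UNMASKED_RENDERER_WEBGL)];
"""
vendor, renderer = driver.execute_script(js)
print(vendor, renderer)  # blank values reproduce the Lambda symptom described above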
70,203,927 | 2021-12-2 | https://stackoverflow.com/questions/70203927/how-to-prevent-alembic-revision-autogenerate-from-making-revision-file-if-it-h | I have project where I'm using SQLAlchemy for models and I'm trying to integrate Alembic for making migrations. Everything works as expected when I change models and Alembic sees that models have changed -> it creates good migration file with command: alembic revision --autogenerate -m "model changed" But when I have NOT changed anything in models and I use the same command: alembic revision --autogenerate -m "should be no migration" revision gives me 'empty' revision file like this: """next Revision ID: d06d2a8fed5d Revises: 4461d5328f57 Create Date: 2021-12-02 18:09:42.208607 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'd06d2a8fed5d' down_revision = '4461d5328f57' branch_labels = None depends_on = None def upgrade(): # ### commands auto generated by Alembic - please adjust! ### pass # ### end Alembic commands ### def downgrade(): # ### commands auto generated by Alembic - please adjust! ### pass # ### end Alembic commands ### What is purpose of this file? Could I prevent creation of this 'empty file' when alembic revision --autogenerate will not see any changes? To compare when I use Django and it's internal migration when I type command: python manage.py makemigrations I get output something like: No changes detected and there is not migration file created. Is there a way to do the same with Alembic revision? Or is there other command that could check if there were changes in models and if there were then I could simply run alembic revision and upgrade? | Accepted answer does not answer the question. The correct answer is: Yes, you can call alembic revision --autogenerate and be sure that only if there are changes a revision file would be generated: As per alembic's documentation Implemented in Flask-Migrate (exactly in this file), it's just a change to env.py to account for the needed feature, namely to not autogenerate a revision if there are no changes to the models. You would still run alembic revision --autogenerate -m "should be no migration" but the change you would make to the env.py is, in short: def run_migrations_online(): # almost identical to Flask-Migrate (Thanks miguel!) # this callback is used to prevent an auto-migration from being generated # when there are no changes to the schema def process_revision_directives(context, revision, directives): if config.cmd_opts.autogenerate: script = directives[0] if script.upgrade_ops.is_empty(): directives[:] = [] print('No changes in schema detected.') connectable = engine_from_config( config.get_section(config.config_ini_section), prefix="sqlalchemy.", poolclass=pool.NullPool, ) with connectable.connect() as connection: context.configure( connection=connection, target_metadata=target_metadata, process_revision_directives=process_revision_directives ) with context.begin_transaction(): context.run_migrations() Now, you can easily call alembic revision --autogenerate without risking the creation of a new empty revision. | 8 | 10 |
70,193,443 | 2021-12-2 | https://stackoverflow.com/questions/70193443/colab-notebook-cannot-import-name-container-abcs-from-torch-six | I'm trying to run the deit colab notebook found here: https://colab.research.google.com/github/facebookresearch/deit/blob/colab/notebooks/deit_inference.ipynb but I'm running into an issue in the second cell, specifically the import timm line, which returns this: ImportError: cannot import name 'container_abcs' from 'torch._six' | There is an issue related to this error tracked upstream; try a specific version of the timm library: !pip install timm==0.3.2 | 5 | 3 |
70,183,853 | 2021-12-1 | https://stackoverflow.com/questions/70183853/send-pathlib-path-data-to-fastapi-posixpath-is-not-json-serializable | I have built an API using FastAPI and am trying to send data to it from a client. Both the API and the client use a similar Pydantic model for the data that I want to submit. This includes a field that contains a file path, which I store in a field of type pathlib.path. However, FastAPI does not accept the submission because it apparently cannot handle the path object: TypeError: Object of type PosixPath is not JSON serializable Here's a minimal test file that shows the problem: import pathlib from pydantic import BaseModel from fastapi import FastAPI from fastapi.testclient import TestClient api = FastAPI() client = TestClient(api) class Submission(BaseModel): file_path: pathlib.Path @api.post("/", response_model=Submission) async def add_submission(subm: Submission): print(subm) # add submission to database return subm def test_add_submission(): data = {"file_path": "/my/path/to/file.csv"} print("original data:", data) # create a Submission object, which casts filePath to pathlib.Path: submission = Submission(**data) print("submission object:", submission) payload = submission.dict() print("payload:", payload) response = client.post("/", json=payload) # this throws the error assert response.ok test_add_submission() When I change the model on the client side to use a string instead of a Path for file_path, things go through. But then I lose the pydantic power of casting the input to a Path when a Submission object is created, and then having a Path attribute with all its possibilities. Surely, there must be better way? What is the correct way to send a pathlib.PosixPath object to a FastAPI API as part of the payload? (This is Python 3.8.9, fastapi 0.68.1, pydantic 1.8.2 on Ubuntu) | The problem with you code is, that you first transform the pydantic model into a dict, which you then pass to the client, which uses its own json serializer and not the one provided by pydantic. submission.dict() converts any pydantic model into a dict but keeps any other datatype. With client.post("/", json=payload) requests json serializer is used, which cannot handle pathlib.Path. The solution is, not to convert the pydantic model into dict first, but use the json() method of the pydantic model itself and pass it to the client. response = client.post("/", data=submission.json()) Notice that you have to change the parameter json to data. | 8 | 3 |
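The difference between the two serialization paths can be seen in isolation with a short sketch (pydantic v1, matching the 1.8.2 version from the question):

import pathlib
from pydantic import BaseModel

class Submission(BaseModel):
    file_path: pathlib.Path

s = Submission(file_path="/my/path/to/file.csv")
print(s.dict())  # {'file_path': PosixPath('/my/path/to/file.csv')} -> json.dumps() chokes on the Path
print(s.json())  # {"file_path": "/my/path/to/file.csv"} -> pydantic's own encoder renders Path as a string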
70,251,565 | 2021-12-6 | https://stackoverflow.com/questions/70251565/how-to-implement-the-hindenburg-omen-indicator | As defined here the Hindenburg omen indicator is: The daily number of new 52-week highs and 52-week lows in a stock market index are greater than a threshold amount (typically 2.2%). To me it means, we roll daily and look back 52 weeks or 252 business/trading days, then count the number of highs (or lows) and finally compute the return of that or pct_change, which is the ratio of new highs (or lows) they want to monitor e.g., being above 2.2% import pandas as pd import numpy as np import yfinance as yf # download the S&P500 df = yf.download('^GSPC') # compute the "highs" and "lows" df['Highs'] = df['Close'].rolling(252).apply(lambda x: x.cummax().diff(). apply(lambda x: np.where(x > 0, 1, 0)).sum()).pct_change() df['Lows'] = df['Close'].rolling(252).apply(lambda x: x.cummin().diff(). apply(lambda x: np.where(x < 0, 1, 0)).sum()).pct_change() Did we understand it the same way? is there a better way to do it? | Interesting question! Could I suggest the following code - it runs much faster than the apply solution because it is vectorised, and also lays out the steps a bit more clearly so you can inspect the interim results. I got a different result to your code - you can compare by also plotting your result on the timeseries at bottom. import pandas as pd import numpy as np import yfinance as yf # download the S&P500 df = yf.download('^GSPC') # Constants n_trading_day_window = 252 # Simplify the input dataframe to only the relevant column df_hin_omen = df[['Close']] # Calculate rolling highs and lows df_hin_omen.insert(1, 'rolling_high', df_hin_omen['Close'].rolling(n_trading_day_window).max()) df_hin_omen.insert(2, 'rolling_low', df_hin_omen['Close'].rolling(n_trading_day_window).min()) # High and low are simply when the given row matches the 252 day high or low df_hin_omen.insert(3, 'is_high', df_hin_omen.Close == df_hin_omen.rolling_high) df_hin_omen.insert(4, 'is_low', df_hin_omen.Close == df_hin_omen.rolling_low) # Calculate rolling percentages df_hin_omen.insert(5, 'percent_highs', df_hin_omen.is_high.rolling(n_trading_day_window).sum() / n_trading_day_window) df_hin_omen.insert(6, 'percent_lows', df_hin_omen.is_low.rolling(n_trading_day_window).sum() / n_trading_day_window) Once you have run this, you can inspect the results as follows: import matplotlib, matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(16, 6)) df_hin_omen.resample('w').mean().percent_highs.plot(ax=ax) df_hin_omen.resample('w').mean().percent_lows.plot(ax=ax) From the Hindenburg Omen Definition, "The Hindenburg Omen looks for a statistical deviation from the premise that under normal conditions, some stocks are either making new 52-week highs or new 52-week lows. It would be abnormal if both were occurring at the same time." So from a look at our graph, my interpretation is that the stock market is currently closing at a lot of 52 week highs, but is not showing many 52 week lows. Please also note the article cited states that "It was reported that it had correctly predicted a significant stock market decline only 25% of the time." so I'm not sure if we can read too much into this... Edit I've had a look at your code and I don't think that the use of the pct_change function is correct - that will calculate the change on the rolling differential, so a movement from eg 0.10% to 0.11% would actually equate to a 10% change. 
Instead you want the rolling sum over the past year and divide that by the number of days in the year, per my code above. | 5 | 1 |
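To make that last point concrete, a tiny sketch with made-up counts (not market data) shows what each expression measures:

import pandas as pd

ratio = pd.Series([0.10, 0.11])      # share of days at new 52-week highs on two consecutive readings
print(ratio.pct_change())            # second value is 0.10 -> reported as a "10% change", not a 0.01-point move

is_high = pd.Series([1, 0, 1, 1])    # 1 = the close was a new 252-day high that day
print(is_high.rolling(4).sum() / 4)  # fraction of days in the window that set new highs, as in percent_highs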
70,229,330 | 2021-12-4 | https://stackoverflow.com/questions/70229330/embedding-jupyter-notebook-google-colab-in-django-app | I wanted to build a website and embed the jupyter notebook functionality of being able to create cells and run python code within it into my website For creating a website I m using Django and I would like to embed either the google collab or jupyter notebook By the way I have researched enough and have been stuck with the StackOverflow links where there no answer about this or the one where they want to use django in jupyter notebook Thanks in advance for any guidance or any reference that you guys can provide. | You have many options to do that: Note:: I used "jupyter-lab" you can use "jupyter notebook" 1- The first option to redirect to "jupyter notebook" django view.py from django.shortcuts import redirect,HttpResponse import subprocess import time def open_jupiter_notbook(request): b= subprocess.check_output("jupyter-lab list".split()).decode('utf-8') if "9999" not in b: a=subprocess.Popen("jupyter-lab --no-browser --port 9999".split()) start_time = time.time() unreachable_time = 10 while "9999" not in b: timer = time.time() elapsed_time = timer-start_time b= subprocess.check_output("jupyter-lab list".split()).decode('utf-8') if "9999" in b: break if elapsed_time > unreachable_time: return HttpResponse("Unreachable") path = b.split('\n')[1].split('::',1)[0] #You can here add data to your path if you want to open file or anything return redirect(path) if you want to implement it in template instead of redirect, you can use the following code in Django template: <iframe src="{% url 'open_jupiter_notbook' %}" width= 600px height=200px></iframe> 2- The second option: just use jupyter notebook commands by using this subprocess.check_output("your command".split()) | 7 | 4 |
70,247,344 | 2021-12-6 | https://stackoverflow.com/questions/70247344/save-video-in-opencv-with-h264-codec | I am using opencv-python==4.5.1.48 in a Python 3.9 Docker image. I want to save a video in h264 format. Here is my function to save a video: import cv2 def save_video(frames): fps = 30 video_path = '/home/save_test.mp4' fourcc = cv2.VideoWriter_fourcc(*'h264') video_writer = cv2.VideoWriter(video_path, fourcc, fps, (112, 112)) for frame in frames: video_writer.write(frame) video_writer.release() When I use .mp4 format to save the video, I get the following error: OpenCV: FFMPEG: tag 0x34363268/'h264' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)' OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1' Could not find encoder for codec id 27: Encoder not found I searched and read some solutions but none of them solved my issue. Update: I also installed libx264-dev, which was recommended in this post, and it did not work. | Finally, I found the solution. I solved my problem in the ubuntu:20.04 Docker image. The important thing to notice is that you should install OpenCV via apt-get install python3-opencv, not via pip. | 13 | 3 |
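A hedged way to see whether a given cv2 build can encode H.264 at all (the underlying difference between the pip wheel and the apt package here) is to inspect its build information:

import cv2

info = cv2.getBuildInformation()
start = info.find("Video I/O")
# The FFMPEG entry must be YES, and the linked libavcodec needs an H.264
# encoder (e.g. built against libx264) for VideoWriter to honour 'avc1'/'h264'.
print(info[start:start + 600])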
70,221,003 | 2021-12-3 | https://stackoverflow.com/questions/70221003/patch-request-not-patching-403-returned-django-rest-framework | I'm trying to test an API endpoint with a patch request to ensure it works. I'm using APILiveServerTestCase but can't seem to get the permissions required to patch the item. I created one user (adminuser) who is a superadmin with access to everything and all permissions. My test case looks like this: class FutureVehicleURLTest(APILiveServerTestCase): def setUp(self): # Setup users and some vehicle data we can query against management.call_command("create_users_and_vehicle_data", verbosity=0) self.user = UserFactory() self.admin_user = User.objects.get(username="adminuser") self.future_vehicle = f.FutureVehicleFactory( user=self.user, last_updated_by=self.user, ) self.vehicle = f.VehicleFactory( user=self.user, created_by=self.user, modified_by=self.user, ) self.url = reverse("FutureVehicles-list") self.full_url = self.live_server_url + self.url time = str(datetime.now()) self.form_data = { "signature": "TT", "purchasing": True, "confirmed_at": time, } I've tried this test a number of different ways - all giving the same result (403). I have setup the python debugger in the test, and I have tried actually going to http://localhost:xxxxx/admin/ in the browser and logging in manually with any user but the page just refreshes when I click to login and I never get 'logged in' to see the admin. I'm not sure if that's because it doesn't completely work from within a debugger like that or not. My test looks like this (using the Requests library): def test_patch_request_updates_object(self): data_dict = { "signature": "TT", "purchasing": "true", "confirmed_at": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"), } url = self.full_url + str(self.future_vehicle.id) + "/" client = requests.Session() client.auth = HTTPBasicAuth(self.admin_user.username, "test") client.headers.update({"x-test": "true"}) response = client.get(self.live_server_url + "/admin/") csrftoken = response.cookies["csrftoken"] # interact with the api response = client.patch( url, data=json.dumps(data_dict), cookies=response.cookies, headers={ "X-Requested-With": "XMLHttpRequest", "X-CSRFTOKEN": csrftoken, }, ) # RESPONSE GIVES 403 PERMISSION DENIED fte_future_vehicle = FutureVehicle.objects.filter( id=self.future_vehicle.id ).first() # THIS ERRORS WITH '' not equal to 'TT' self.assertEqual(fte_future_vehicle.signature, "TT") I have tried it very similarly to the documentation using APIRequestFactory and forcing authentication: def test_patch_request_updates_object(self): data_dict = { "signature": "TT", "purchasing": "true", "confirmed_at": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"), } url = self.full_url + str(self.future_vehicle.id) + "/" api_req_factory = APIRequestFactory() view = FutureVehicleViewSet.as_view({"patch": "partial_update"}) api_request = api_req_factory.patch( url, json.dumps(data_dict), content_type="application/json" ) force_authenticate(api_request, self.admin_user) response = view(api_request, pk=self.future_assignment.id) fte_future_assignment = FutureVehicle.objects.filter( id=self.future_assignment.id ).first() self.assertEqual(fte_future_assignment.signature, "TT") If I enter the debugger to look at the responses, it's always a 403. 
The viewset itself is very simple: class FutureVehicleViewSet(ModelViewSet): serializer_class = FutureVehicleSerializer def get_queryset(self): queryset = FutureVehicle.exclude_denied.all() user_id = self.request.query_params.get("user_id", None) if user_id: queryset = queryset.filter(user_id=user_id) return queryset The serializer is just as basic as it gets - it's just the FutureVehicle model and all fields. I just can't figure out why my user won't login - or if maybe I'm doing something wrong in my attempts to patch? I'm pretty new to Django Rest Framework in general, so any guidances is helpful! Edit to add - my DRF Settings look like this: REST_FRAMEWORK = { "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination", "DATETIME_FORMAT": "%m/%d/%Y - %I:%M:%S %p", "DATE_INPUT_FORMATS": ["%Y-%m-%d"], "DEFAULT_AUTHENTICATION_CLASSES": [ # Enabling this it will require Django Session (Including CSRF) "rest_framework.authentication.SessionAuthentication" ], "DEFAULT_PERMISSION_CLASSES": [ # Globally only allow IsAuthenticated users access to API Endpoints "rest_framework.permissions.IsAuthenticated" ], } I'm certain adminuser is the user we wish to login - if I go into the debugger and check the users, they exist as a user. During creation, any user created has a password set to 'test'. | Recommended Solution The test you have written is also testing the Django framework logic (ie: Django admin login). I recommend testing your own functionality, which occurs after login to the Django admin. Django's testing framework offers a helper for logging into the admin, client.login. This allows you to focus on testing your own business logic/not need to maintain internal django authentication business logic tests, which may change release to release. from django.test import TestCase, Client def TestCase(): client.login(username=self.username, password=self.password) Alternative Solution However, if you must replicate and manage the business logic of what client.login is doing, here's some of the business logic from Django: def login(self, **credentials): """ Set the Factory to appear as if it has successfully logged into a site. Return True if login is possible or False if the provided credentials are incorrect. """ from django.contrib.auth import authenticate user = authenticate(**credentials) if user: self._login(user) return True return False def force_login(self, user, backend=None): def get_backend(): from django.contrib.auth import load_backend for backend_path in settings.AUTHENTICATION_BACKENDS: backend = load_backend(backend_path) if hasattr(backend, 'get_user'): return backend_path if backend is None: backend = get_backend() user.backend = backend self._login(user, backend) def _login(self, user, backend=None): from django.contrib.auth import login # Create a fake request to store login details. request = HttpRequest() if self.session: request.session = self.session else: engine = import_module(settings.SESSION_ENGINE) request.session = engine.SessionStore() login(request, user, backend) # Save the session values. request.session.save() # Set the cookie to represent the session. 
session_cookie = settings.SESSION_COOKIE_NAME self.cookies[session_cookie] = request.session.session_key cookie_data = { 'max-age': None, 'path': '/', 'domain': settings.SESSION_COOKIE_DOMAIN, 'secure': settings.SESSION_COOKIE_SECURE or None, 'expires': None, } self.cookies[session_cookie].update(cookie_data) References: Django client.login: https://github.com/django/django/blob/main/django/test/client.py#L596-L646 | 10 | 5 |
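As a complement to the client.login route above, DRF's own test client offers a similar shortcut that skips session and CSRF handling altogether — a sketch reusing the fixtures from the question (note this swaps in force_authenticate, a different technique from the accepted answer):

from rest_framework.test import APIClient

def test_patch_with_drf_client(self):
    client = APIClient()
    client.force_authenticate(user=self.admin_user)   # no password or CSRF token needed
    url = self.url + f"{self.future_vehicle.id}/"     # self.url comes from reverse("FutureVehicles-list") in setUp
    response = client.patch(url, {"signature": "TT"}, format="json")
    self.assertEqual(response.status_code, 200)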
70,267,576 | 2021-12-7 | https://stackoverflow.com/questions/70267576/merge-two-files-and-add-computation-and-sorting-the-updated-data-in-python | I need help to make the snippet below. I need to merge two files and performs computation on matched lines I have oldFile.txt which contains old data and newFile.txt with an updated sets of data. I need to to update the oldFile.txt based on the data in the newFile.txt and compute the changes in percentage. Any idea will be very helpful. Thanks in advance from collections import defaultdict num = 0 data=defaultdict(int) with open("newFile.txt", encoding='utf8', errors='ignore') as f: for line in f: grp, pname, cnt, cat = line.split(maxsplit=3) data[(pname.strip(),cat.replace('\n','').strip(),grp,cat)]+=int(cnt) sorteddata = sorted([[k[0],v,k[1],k[2]] for k,v in data.items()], key=lambda x:x[1], reverse=True) for subl in sorteddata[:10]: num += 1 line = " ".join(map(str, subl)) print ("{:>5} -> {:>}".format(str(num), line)) with open("oldFile.txt", 'a', encoding='utf8', errors='ignore') as l: l.write(" ".join(map(str, subl)) + '\n') oldFile.txt #col1 #col2 #col3 #col4 1,396 c15e89f2149bcc0cbd5fb204 4 HUH_Token (HUH) 279 9e4d81c8fc15870b15aef8dc 3 BABY BNB (BBNB) 231 31b5c07636dab8f0909dbd2d 6 Buff Unicorn (BUFFUN...) 438 1c6bc8e962427deb4106ae06 8 Charge (Charge) 2,739 6ea059a29eccecee4e250414 2 MAXIMACASH (MAXCAS...) newFile.txt #-- updated data with additional lines not found in oldFile.txt #col1 #col2 #col3 #col4 8,739 6ea059a29eccecee4e250414 60 MAXIMACASH (MAXCAS...) 138 1c6bc8e962427deb4106ae06 50 Charge (Charge) 860 31b5c07636dab8f0909dbd2d 40 Buff Unicorn (BUFFUN...) 200 9e4d81c8fc15870b15aef8dc 30 BABY BNB (BBNB) #-- not found in the oldFile.txt 20 5esdsds2sd15870b15aef8dc 30 CharliesAngel (CA) 1,560 c15e89f2149bcc0cbd5fb204 20 HUH_Token (HUH) Need Improvement: #-- With additional columns (col5, col6) and sorted based on (col3) values #col1 #col2 #col3 #col4 #col5 (oldFile-newFile) #col6 (oldFile-newFile) 8,739 6ea059a29eccecee4e250414 62 MAXIMACASH (MAXCAS...) 2900.00 % (col3 2-60) 219.06 % (col1 2,739-8,739) 138 1c6bc8e962427deb4106ae06 58 Charge (Charge) 625.00 % (col3 8-50) -68.49 % (col1 438-138) 860 31b5c07636dab8f0909dbd2d 46 Buff Unicorn (BUFFUN...) 566.67 % (col3 6-40) 272.29 % (col1 231-860) 200 9e4d81c8fc15870b15aef8dc 33 BABY BNB (BBNB) 900.00 % (col3 3-30) -28.32 % (col1 279-200) 20 5esdsds2sd15870b15aef8dc 30 CharliesAngel (CA) 0.00 % (col3 0-30) 20.00 % (col1 0-20) 1,560 c15e89f2149bcc0cbd5fb204 24 HUH_Token (HUH) 400.00 % (col3 4-20) 11.75 % (col1 1,396-1,560) | Here is a sample code to output what you need. I use the formula below to calculate pct change. percentage_change = 100*(new-old)/old If old is 0 it is changed to 1 to avoid division by zero error. import pandas as pd def read_file(fn): """ Read file fn and convert data into a dict of dict. data = {pname1: {grp: grp1, pname: pname1, cnt: cnt1, cat: cat1}, pname2: {gpr: grp2, ...} ...} """ data = {} with open(fn, 'r') as f: for lines in f: line = lines.rstrip() grp, pname, cnt, cat = line.split(maxsplit=3) data.update({pname: {'grp': float(grp.replace(',', '')), 'pname': pname, 'cnt': int(cnt), 'cat': cat}}) return data def process_data(oldfn, newfn): """ Read old and new files, update the old file based on new file. Save output to text, and csv files. """ # Get old and new data in dict. old = read_file(oldfn) new = read_file(newfn) # Update old data based on new data u_data = {} for ko, vo in old.items(): if ko in new: n = new[ko] # Update cnt. 
old_cnt = vo['cnt'] new_cnt = n['cnt'] u_cnt = old_cnt + new_cnt # cnt change, if old is zero we set it to 1 to avoid division by zero error. tmp_old_cnt = 1 if old_cnt == 0 else old_cnt cnt_change = 100 * (new_cnt - tmp_old_cnt) / tmp_old_cnt # grp change old_grp = vo['grp'] new_grp = n['grp'] grp_change = 100 * (new_grp - old_grp) / old_grp u_data.update({ko: {'grp': n['grp'], 'pname': n['pname'], 'cnt': u_cnt, 'cat': n['cat'], 'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)}}) # add new data to u_data, that is not in old data for kn, vn in new.items(): if kn not in old: # Since this is new item its old cnt is zero, we set it to 1 to avoid division by zero error. old_cnt = 1 new_cnt = vn['cnt'] cnt_change = 100 * (new_cnt - old_cnt) / old_cnt # grp change is similar to cnt change old_grp = 1 new_grp = vn['grp'] grp_change = 100 * (new_grp - old_grp) / old_grp # Update new columns. vn.update({'cnt_change%': round(cnt_change, 2), 'grp_change%': round(grp_change, 2)}) u_data.update({kn: vn}) # Create new data mydata list from u_data, and only extract the dict value. mydata = [] for _, v in u_data.items(): mydata.append(v) # Convert mydata into pandas dataframe to easier manage the data. df = pd.DataFrame(mydata) df = df.sort_values(by=['cnt'], ascending=False) # sort on cnt column # Save to csv file. df.to_csv('output.csv', index=False) # Save to text file. with open('output.txt', 'w') as w: w.write(f'{df.to_string(index=False)}') # Print in console. print(df.to_string(index=False)) # Start oldfn = 'F:/Tmp/oldFile.txt' newfn = 'F:/Tmp/newFile.txt' process_data(oldfn, newfn) Console output: grp pname cnt cat cnt_change% grp_change% 8739.0 6ea059a29eccecee4e250414 62 MAXIMACASH (MAXCAS...) 2900.00 219.06 138.0 1c6bc8e962427deb4106ae06 58 Charge (Charge) 525.00 -68.49 860.0 31b5c07636dab8f0909dbd2d 46 Buff Unicorn (BUFFUN...) 566.67 272.29 200.0 9e4d81c8fc15870b15aef8dc 33 BABY BNB (BBNB) 900.00 -28.32 20.0 5esdsds2sd15870b15aef8dc 30 CharliesAngel (CA) 2900.00 1900.00 1560.0 c15e89f2149bcc0cbd5fb204 24 HUH_Token (HUH) 400.00 11.75 text output: grp pname cnt cat cnt_change% grp_change% 8739.0 6ea059a29eccecee4e250414 62 MAXIMACASH (MAXCAS...) 2900.00 219.06 138.0 1c6bc8e962427deb4106ae06 58 Charge (Charge) 525.00 -68.49 860.0 31b5c07636dab8f0909dbd2d 46 Buff Unicorn (BUFFUN...) 566.67 272.29 200.0 9e4d81c8fc15870b15aef8dc 33 BABY BNB (BBNB) 900.00 -28.32 20.0 5esdsds2sd15870b15aef8dc 30 CharliesAngel (CA) 2900.00 1900.00 1560.0 c15e89f2149bcc0cbd5fb204 24 HUH_Token (HUH) 400.00 11.75 csv output: grp,pname,cnt,cat,cnt_change%,grp_change% 8739.0,6ea059a29eccecee4e250414,62,MAXIMACASH (MAXCAS...),2900.0,219.06 138.0,1c6bc8e962427deb4106ae06,58,Charge (Charge),525.0,-68.49 860.0,31b5c07636dab8f0909dbd2d,46,Buff Unicorn (BUFFUN...),566.67,272.29 200.0,9e4d81c8fc15870b15aef8dc,33,BABY BNB (BBNB),900.0,-28.32 20.0,5esdsds2sd15870b15aef8dc,30,CharliesAngel (CA),2900.0,1900.0 1560.0,c15e89f2149bcc0cbd5fb204,24,HUH_Token (HUH),400.0,11.75 | 12 | 9 |
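As a sanity check on the percentage formula used above, here is the arithmetic for the MAXIMACASH row from the expected output in the question (cnt 2 -> 60, grp 2,739 -> 8,739):

old_cnt, new_cnt = 2, 60
old_grp, new_grp = 2739, 8739

print(100 * (new_cnt - old_cnt) / old_cnt)            # 2900.0 -> the 2900.00 % figure
print(round(100 * (new_grp - old_grp) / old_grp, 2))  # 219.06 -> matches the grp_change% column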
70,246,591 | 2021-12-6 | https://stackoverflow.com/questions/70246591/delete-directory-and-all-symlinks-recursively | I tried to use shutil to delete a directory and all contained files, as follows: import shutil from os.path import exists if exists(path_dir): shutil.rmtree(path_dir) Unfortunately, my solution does not work, throwing the following error: FileNotFoundError: [Errno 2] No such file or directory: '._image1.jpg' A quick search showed that I'm not alone in having this problem. In my understanding, the rmtree function is equivalent to the rm -Rf $DIR shell command - but this doesn't seem to be the case. p.s. for reconstruction purposes. Please create a symbolic link for example using ln -s /path/to/original /path/to/link | That is strange, I have no issues with shutil.rmtree() with or without symlink under the folder to be deleted, both in windows 10 and Ubuntu 20.04.2 LTS. Anyhow try the following code. I tried it in windows 10 and Ubuntu. from pathlib import Path import shutil def delete_dir_recursion(p): """ Delete folder, sub-folders and files. """ for f in p.glob('**/*'): if f.is_symlink(): f.unlink(missing_ok=True) # missing_ok is added in python 3.8 print(f'symlink {f.name} from path {f} was deleted') elif f.is_file(): f.unlink() print(f'file: {f.name} from path {f} was deleted') elif f.is_dir(): try: f.rmdir() # delete empty sub-folder print(f'folder: {f.name} from path {f} was deleted') except OSError: # sub-folder is not empty delete_dir_recursion(f) # recurse the current sub-folder except Exception as exception: # capture other exception print(f'exception name: {exception.__class__.__name__}') print(f'exception msg: {exception}') try: p.rmdir() # time to delete an empty folder print(f'folder: {p.name} from path {p} was deleted') except NotADirectoryError: p.unlink() # delete folder even if it is a symlink, linux print(f'symlink folder: {p.name} from path {p} was deleted') except Exception as exception: print(f'exception name: {exception.__class__.__name__}') print(f'exception msg: {exception}') def delete_dir(folder): p = Path(folder) if not p.exists(): print(f'The path {p} does not exists!') return # Attempt to delete the whole folder at once. try: shutil.rmtree(p) except Exception as exception: print(f'exception name: {exception.__class__.__name__}') print(f'exception msg: {exception}') # continue parsing the folder else: # else if no issues on rmtree() if not p.exists(): # verify print(f'folder {p} was successfully deleted by shutil.rmtree!') return print(f'Parse the folder {folder} ...') delete_dir_recursion(p) if not p.exists(): # verify print(f'folder {p} was successfully deleted!') # start folder_to_delete = '/home/zz/tmp/sample/b' # delete folder b delete_dir(folder_to_delete) Sample output: We are going to delete the folder b. . βββ 1.txt βββ a βββ b β βββ 1 β βββ 1.txt -> ../1.txt β βββ 2 β β βββ 21 β β βββ 21.txt β βββ 3 β β βββ 31 β βββ 4 β β βββ c -> ../../c β βββ a -> ../a β βββ b.txt βββ c Parse the folder /home/zz/tmp/sample/b ... 
symlink a from path /home/zz/tmp/sample/b/a was deleted symlink c from path /home/zz/tmp/sample/b/4/c was deleted folder: 4 from path /home/zz/tmp/sample/b/4 was deleted symlink 1.txt from path /home/zz/tmp/sample/b/1.txt was deleted file: b.txt from path /home/zz/tmp/sample/b/b.txt was deleted file: 21.txt from path /home/zz/tmp/sample/b/2/21/21.txt was deleted folder: 21 from path /home/zz/tmp/sample/b/2/21 was deleted folder: 2 from path /home/zz/tmp/sample/b/2 was deleted folder: 1 from path /home/zz/tmp/sample/b/1 was deleted folder: 31 from path /home/zz/tmp/sample/b/3/31 was deleted folder: 3 from path /home/zz/tmp/sample/b/3 was deleted folder: b from path /home/zz/tmp/sample/b was deleted folder /home/zz/tmp/sample/b was successfully deleted! | 11 | 6 |
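If the only goal is to get past stray entries such as the macOS '._image1.jpg' resource-fork file from the original traceback, a much smaller alternative sketch (path_dir as in the question) is to let shutil swallow or report the errors rather than recursing manually:

import shutil
from os.path import exists

if exists(path_dir):
    # ignore_errors skips entries that vanish mid-walk (e.g. "._*" files);
    # use onerror=<callback> instead if the failures should be logged.
    shutil.rmtree(path_dir, ignore_errors=True)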
70,264,157 | 2021-12-7 | https://stackoverflow.com/questions/70264157/logistic-regression-and-gridsearchcv-using-python-sklearn | I am trying code from this page. I ran up to the part LR (tf-idf) and got the similar results After that I decided to try GridSearchCV. My questions below: 1) #lets try gridsearchcv #https://www.kaggle.com/enespolat/grid-search-with-logistic-regression from sklearn.model_selection import GridSearchCV grid={"C":np.logspace(-3,3,7), "penalty":["l2"]}# l1 lasso l2 ridge logreg=LogisticRegression(solver = 'liblinear') logreg_cv=GridSearchCV(logreg,grid,cv=3,scoring='f1') logreg_cv.fit(X_train_vectors_tfidf, y_train) print("tuned hpyerparameters :(best parameters) ",logreg_cv.best_params_) print("best score :",logreg_cv.best_score_) #tuned hpyerparameters :(best parameters) {'C': 10.0, 'penalty': 'l2'} #best score : 0.7390325593588823 Then I calculated f1 score manually. why it is not matching? logreg_cv.predict_proba(X_train_vectors_tfidf)[:,1] final_prediction=np.where(logreg_cv.predict_proba(X_train_vectors_tfidf)[:,1]>=0.5,1,0) #https://www.statology.org/f1-score-in-python/ from sklearn.metrics import f1_score #calculate F1 score f1_score(y_train, final_prediction) 0.9839388145315489 If I try scoring='precision' why does it give below error? I am not clear mainly because I have relatively balanced dataset (55-45%) and f1 which requires precision is getting calculated without any problems #lets try gridsearchcv #https://www.kaggle.com/enespolat/grid-search-with-logistic-regression from sklearn.model_selection import GridSearchCV grid={"C":np.logspace(-3,3,7), "penalty":["l2"]}# l1 lasso l2 ridge logreg=LogisticRegression(solver = 'liblinear') logreg_cv=GridSearchCV(logreg,grid,cv=3,scoring='precision') logreg_cv.fit(X_train_vectors_tfidf, y_train) print("tuned hpyerparameters :(best parameters) ",logreg_cv.best_params_) print("best score :",logreg_cv.best_score_) /usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1308: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) /usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1308: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) /usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1308: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) /usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1308: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) /usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1308: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) /usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1308: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. 
Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) tuned hpyerparameters :(best parameters) {'C': 0.1, 'penalty': 'l2'} best score : 0.9474200393672962 is there any easier way to get predictions on the train data back? we already have the logreg_cv object. I used below method to get the predictions back. Is there a better way to do the same? logreg_cv.predict_proba(X_train_vectors_tfidf)[:,1] ############################ ############update 1 Please answer question 1 from above. In the comment for the question it says The best score in GridSearchCV is calculated by taking the average score from cross validation for the best estimators. That is, it is calculated from data that is held out during fitting. From what I can tell, you are calculating predicted values from the training data and calculating an F1 score on that. Since the model was trained on that data, that is why the F1 score is so much larger compared to the results in the grid search is that the reason I get below results #tuned hpyerparameters :(best parameters) {'C': 10.0, 'penalty': 'l2'} #best score : 0.7390325593588823 but when i do manually i get f1_score(y_train, final_prediction) 0.9839388145315489 2) I tried to tune using f1_micro as suggested in the answer below. No error message. I am still not clear why f1_micro is not failing when precision fails from sklearn.model_selection import GridSearchCV grid={"C":np.logspace(-3,3,7), "penalty":["l2"], "solver":['liblinear','newton-cg'], 'class_weight':[{ 0:0.95, 1:0.05 }, { 0:0.55, 1:0.45 }, { 0:0.45, 1:0.55 },{ 0:0.05, 1:0.95 }]}# l1 lasso l2 ridge #logreg=LogisticRegression(solver = 'liblinear') logreg=LogisticRegression() logreg_cv=GridSearchCV(logreg,grid,cv=3,scoring='f1_micro') logreg_cv.fit(X_train_vectors_tfidf, y_train) tuned hpyerparameters :(best parameters) {'C': 10.0, 'class_weight': {0: 0.45, 1: 0.55}, 'penalty': 'l2', 'solver': 'newton-cg'} best score : 0.7894909688013136 | You end up with the error with precision because some of your penalization is too strong for this model, if you check the results, you get 0 for f1 score when C = 0.001 and C = 0.01 res = pd.DataFrame(logreg_cv.cv_results_) res.iloc[:,res.columns.str.contains("split[0-9]_test_score|params",regex=True)] params split0_test_score split1_test_score split2_test_score 0 {'C': 0.001, 'penalty': 'l2'} 0.000000 0.000000 0.000000 1 {'C': 0.01, 'penalty': 'l2'} 0.000000 0.000000 0.000000 2 {'C': 0.1, 'penalty': 'l2'} 0.973568 0.952607 0.952174 3 {'C': 1.0, 'penalty': 'l2'} 0.863934 0.851064 0.836449 4 {'C': 10.0, 'penalty': 'l2'} 0.811634 0.769547 0.787838 5 {'C': 100.0, 'penalty': 'l2'} 0.789826 0.762162 0.773438 6 {'C': 1000.0, 'penalty': 'l2'} 0.781003 0.750000 0.763871 You can check this: lr = LogisticRegression(C=0.01).fit(X_train_vectors_tfidf,y_train) np.unique(lr.predict(X_train_vectors_tfidf)) array([0]) And that the probabilities predicted drift towards the intercept: # expected probability np.exp(lr.intercept_)/(1+np.exp(lr.intercept_)) array([0.41764462]) lr.predict_proba(X_train_vectors_tfidf) array([[0.58732636, 0.41267364], [0.57074279, 0.42925721], [0.57219143, 0.42780857], ..., [0.57215605, 0.42784395], [0.56988186, 0.43011814], [0.58966184, 0.41033816]]) For the question on "get predictions on the train data back", i think that's the only way. The model is refitted on the whole training set using the best parameters, but the predictions or predicted probabilities are not stored. 
If you are looking for the values obtained during train / test, you can check cross_val_predict | 8 | 5 |
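A minimal sketch of the point above: `best_score_` is the mean F1 over held-out CV folds, while scoring the refit model on its own training data is optimistic. The snippet uses synthetic data from `make_classification` (an assumption, not the questioner's TF-IDF features) and shows the honest comparison via `cross_val_predict`.

```python
# Synthetic stand-in for the TF-IDF features above, illustrating why the scores differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, cross_val_predict

X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

grid = {"C": np.logspace(-3, 3, 7), "penalty": ["l2"]}
search = GridSearchCV(LogisticRegression(solver="liblinear"), grid,
                      cv=3, scoring="f1")
search.fit(X, y)

# Mean F1 over the held-out folds for the best parameters.
print("cross-validated f1 (best_score_):", search.best_score_)

# F1 of the refit model on data it has already seen -- typically at least as high.
print("f1 on the training set          :", f1_score(y, search.predict(X)))

# Out-of-fold predictions give an estimate comparable to best_score_.
oof = cross_val_predict(search.best_estimator_, X, y, cv=3)
print("f1 on out-of-fold predictions   :", f1_score(y, oof))
```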
70,199,088 | 2021-12-2 | https://stackoverflow.com/questions/70199088/selenium-with-proxy-not-working-wrong-options | I have the following working test-solutions which outputs the IP-address and information - Now I want to use this with my ScraperAPI-Account with other Proxies. But when I uncomment this 2 lines - # PROXY = f'http://scraperapi:{SCRAPER_API}@proxy-server.scraperapi.com:8001' # options.add_argument('--proxy-server=%s' % PROXY) the solution is not working anymore - How can I use my proxies with selenium / that code? (scraperAPI is recommending using the selenium-wire module but I donΒ΄t like it cause it has some dependencies to a specific version of other tools - so I would like to use the proxies without that) Is this possible? import time from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from sys import platform import os, sys from selenium.webdriver.support.ui import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager from fake_useragent import UserAgent from dotenv import load_dotenv, find_dotenv WAIT = 10 load_dotenv(find_dotenv()) SCRAPER_API = os.environ.get("SCRAPER_API") # PROXY = f'http://scraperapi:{SCRAPER_API}@proxy-server.scraperapi.com:8001' srv=Service(ChromeDriverManager().install()) ua = UserAgent() userAgent = ua.random options = Options() options.add_argument('--headless') options.add_experimental_option ('excludeSwitches', ['enable-logging']) options.add_argument("start-maximized") options.add_argument('window-size=1920x1080') options.add_argument('--no-sandbox') options.add_argument('--disable-gpu') options.add_argument(f'user-agent={userAgent}') # options.add_argument('--proxy-server=%s' % PROXY) path = os.path.abspath (os.path.dirname (sys.argv[0])) if platform == "win32": cd = '/chromedriver.exe' elif platform == "linux": cd = '/chromedriver' elif platform == "darwin": cd = '/chromedriver' driver = webdriver.Chrome (service=srv, options=options) waitWebDriver = WebDriverWait (driver, 10) link = "https://whatismyipaddress.com/" driver.get (link) time.sleep(WAIT) soup = BeautifulSoup (driver.page_source, 'html.parser') tmpIP = soup.find("span", {"id": "ipv4"}) tmpP = soup.find_all("p", {"class": "information"}) for e in tmpP: tmpSPAN = e.find_all("span") for e2 in tmpSPAN: print(e2.text) print(tmpIP.text) driver.quit() | There are a couple of things you need to look back: First of all, there seems be a typo. There is a space character between get and () which may cause: IndexError: list index out of range Not sure what the following line does as I'm able to execute without the line. You may like to comment it. 
from dotenv import load_dotenv, find_dotenv load_dotenv(find_dotenv()) If you want to stop using SCRAPER_API do comment out the following line as well: SCRAPER_API = os.environ.get("SCRAPER_API") Making those minor tweaks and optimizing your code: import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager from fake_useragent import UserAgent from bs4 import BeautifulSoup WAIT = 10 srv=Service(ChromeDriverManager().install()) ua = UserAgent() userAgent = ua.random options = Options() options.add_argument('--headless') options.add_experimental_option ('excludeSwitches', ['enable-logging']) options.add_argument("start-maximized") options.add_argument('window-size=1920x1080') options.add_argument('--no-sandbox') options.add_argument('--disable-gpu') options.add_argument(f'user-agent={userAgent}') driver = webdriver.Chrome (service=srv, options=options) waitWebDriver = WebDriverWait (driver, 10) link = "https://whatismyipaddress.com/" driver.get(link) driver.save_screenshot("whatismyipaddress.png") time.sleep(WAIT) soup = BeautifulSoup (driver.page_source, 'html.parser') tmpIP = soup.find("span", {"id": "ipv4"}) tmpP = soup.find_all("p", {"class": "information"}) for e in tmpP: tmpSPAN = e.find_all("span") for e2 in tmpSPAN: print(e2.text) print(tmpIP.text) driver.quit() Console Output: [WDM] - [WDM] - ====== WebDriver manager ====== [WDM] - Current google-chrome version is 96.0.4664 [WDM] - Get LATEST driver version for 96.0.4664 [WDM] - Driver [C:\Users\Admin\.wdm\drivers\chromedriver\win32\96.0.4664.45\chromedriver.exe] found in cache ISP: Jio City: Pune Region: Maharashtra Country: India 123.12.234.23 Saved Screenshot: Using the proxy import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager from fake_useragent import UserAgent from bs4 import BeautifulSoup WAIT = 10 load_dotenv(find_dotenv()) SCRAPER_API = os.environ.get("SCRAPER_API") PROXY = f'http://scraperapi:{SCRAPER_API}@proxy-server.scraperapi.com:8001' srv=Service(ChromeDriverManager().install()) ua = UserAgent() userAgent = ua.random options = Options() options.add_argument('--headless') options.add_experimental_option ('excludeSwitches', ['enable-logging']) options.add_argument("start-maximized") options.add_argument('window-size=1920x1080') options.add_argument('--no-sandbox') options.add_argument('--disable-gpu') options.add_argument(f'user-agent={userAgent}') options.add_argument('--proxy-server={}'.format(PROXY)) path = os.path.abspath (os.path.dirname (sys.argv[0])) if platform == "win32": cd = '/chromedriver.exe' elif platform == "linux": cd = '/chromedriver' elif platform == "darwin": cd = '/chromedriver' driver = webdriver.Chrome (service=srv, options=options) waitWebDriver = WebDriverWait (driver, 10) link = "https://whatismyipaddress.com/" driver.get(link) driver.save_screenshot("whatismyipaddress.png") time.sleep(WAIT) soup = BeautifulSoup (driver.page_source, 'html.parser') tmpIP = soup.find("span", {"id": "ipv4"}) tmpP = soup.find_all("p", {"class": "information"}) for e in tmpP: tmpSPAN = e.find_all("span") for e2 in tmpSPAN: print(e2.text) print(tmpIP.text) driver.quit() Note: 
print(f'http://scraperapi:{SCRAPER_API}@proxy-server.scraperapi.com:8001') and ensure the SCRAPER_API returns a result. References You can find a couple of relevant detailed discussions in: How to rotate Selenium webrowser IP address using http proxy with selenium Geckodriver | 6 | 2 |
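A quicker, hedged sanity check for the proxy wiring discussed above: load httpbin.org/ip, whose JSON body carries the address the request arrived from in its "origin" field, instead of parsing whatismyipaddress.com with BeautifulSoup. The proxy URL below is a placeholder, not real credentials.

```python
import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager

PROXY = "http://user:password@proxy.example.com:8001"  # placeholder -- use your own

options = Options()
options.add_argument("--headless")
options.add_argument("--proxy-server={}".format(PROXY))

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()),
                          options=options)
try:
    driver.get("https://httpbin.org/ip")
    # The page body is a small JSON document; its "origin" field is the
    # effective outgoing IP, i.e. the proxy's address if the proxy is applied.
    body = driver.find_element(By.TAG_NAME, "body").text
    print("Effective IP:", json.loads(body)["origin"])
finally:
    driver.quit()
```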
70,236,617 | 2021-12-5 | https://stackoverflow.com/questions/70236617/inheriting-generic-classes-with-restricted-typevar | Consider a simple pair of generic classes: T = TypeVar("T", str, int) class Base(Generic[T]): def __init__(self, v: T): self.v: T = v @property def value(self) -> T: return self.v class Child(Base[T]): def __init__(self, v: T): super().__init__(v) x = Child(123) reveal_type(x.value) While using T = TypeVar("T") works as expected. The restricted TypeVar as shown yields the following errors: error: Argument 1 to "__init__" of "Base" has incompatible type "str"; expected "T" error: Argument 1 to "__init__" of "Base" has incompatible type "int"; expected "T" note: Revealed type is 'builtins.int*' Note that reveal_type still works. Another difference, is that the restricted TypeVar requires the type annotation for self.v assignment whereas the unrestricted does not. In the full use-case, I actually have Callable[[Any], T], but the issue is the same. This is with mypy 0.910 and Python 3.9.7. | Bounding T to an Union[int,str] should do the job: T = TypeVar("T", bound=Union[str, int]) class Base(Generic[T]): def __init__(self, v: T): self.v: T = v @property def value(self) -> T: return self.v class Child(Base[T]): def __init__(self, v: T): super().__init__(v) x = Child(123) reveal_type(x.value) y = Child('a') reveal_type(y.value) | 6 | 2 |
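A compact sketch contrasting the two `TypeVar` flavours from this exchange; the bounded version is essentially the accepted answer's code and is the one that type-checks for the generic subclass.

```python
from typing import Generic, TypeVar, Union

# Shown only for contrast: the value-restricted form from the question.
T_restricted = TypeVar("T_restricted", str, int)
# The upper-bounded form from the answer, used below.
T = TypeVar("T", bound=Union[str, int])

class Base(Generic[T]):
    def __init__(self, v: T) -> None:
        self.v: T = v

    @property
    def value(self) -> T:
        return self.v

class Child(Base[T]):
    def __init__(self, v: T) -> None:
        super().__init__(v)

x = Child(123)    # mypy infers Child[int]; x.value is int
y = Child("abc")  # mypy infers Child[str]; y.value is str
```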
70,261,541 | 2021-12-7 | https://stackoverflow.com/questions/70261541/use-on-conflict-do-update-with-sqlalchemy-orm | I am currently using SQLAlchemy ORM to deal with my db operations. Now I have a SQL command which requires ON CONFLICT (id) DO UPDATE. The method on_conflict_do_update() seems to be the correct one to use. But the post here says the code have to switch to SQLAlchemy core and the high-level ORM functionalities are missing. I am confused by this statement since I think the code like the demo below can achieve what I want while keep the functionalities of SQLAlchemy ORM. class Foo(Base): ... bar = Column(Integer) foo = Foo(bar=1) insert_stmt = insert(Foo).values(bar=foo.bar) do_update_stmt = insert_stmt.on_conflict_do_update( set_=dict( bar=insert_stmt.excluded.bar, ) ) session.execute(do_update_stmt) I haven't tested it on my project since it will require a huge amount of modification. Can I ask if this is the correct way to deal with ON CONFLICT (id) DO UPDATE with SQLALchemy ORM? | As noted in the documentation, the constraint= argument is The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute. so we need to pass the name of the PK constraint to .on_conflict_do_update(). We can get the PK constraint name via the inspection interface: insp = inspect(engine) pk_constraint_name = insp.get_pk_constraint(Foo.__tablename__)["name"] print(pk_constraint_name) # tbl_foo_pkey new_bar = 123 insert_stmt = insert(Foo).values(id=12345, bar=new_bar) do_update_stmt = insert_stmt.on_conflict_do_update( constraint=pk_constraint_name, set_=dict(bar=new_bar) ) with Session(engine) as session, session.begin(): session.execute(do_update_stmt) | 7 | 6 |
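A self-contained sketch of the upsert above; the connection URL and table name are placeholders, and `index_elements` is shown as an alternative to looking up the primary-key constraint name through the inspector.

```python
from sqlalchemy import BigInteger, Column, Integer, create_engine
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "tbl_foo"          # placeholder table name
    id = Column(BigInteger, primary_key=True)
    bar = Column(Integer)

# Placeholder URL -- assumes a reachable PostgreSQL database.
engine = create_engine("postgresql+psycopg2://user:pass@localhost/mydb")
Base.metadata.create_all(engine)

stmt = insert(Foo).values(id=12345, bar=123)
upsert = stmt.on_conflict_do_update(
    index_elements=["id"],             # or constraint=<pk constraint name>
    set_={"bar": stmt.excluded.bar},
)

with Session(engine) as session, session.begin():
    session.execute(upsert)
```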
70,265,961 | 2021-12-7 | https://stackoverflow.com/questions/70265961/install-specific-version-of-bazel-with-bazelisk | I am trying to use Bazel to rebuild Tensorflow with compiler flags, but have been getting errors. In the documentation explaining how to build Tensorflow from the source, they say that Bazelisk will download the correct version for Tensorflow... but when I was getting the errors, I decided to check the Bazel version and it said 4.2.2 (which is the latest version). The tested builds the Bazel version is only 3.7.2 for 2.7 version of Tensorflow (which is what I am using)... Is there a way to use Bazelisk to install version 3.7.2? Or do I have to manually download Bazel and add it to the path? If so, how do I do it? The way I installed it before was just downloading and running the .exe file from the website... and there wasn't any opportunity to enter a version for Bazel... If not, how do I uninstall Bazelisk? I tried looking for a way to uninstall, but couldn't find anything... | Bazelisk will look in the WORKSPACE root for a file named .bazelversion (see here). This file is supposed to contain the Bazel version number you want to use. There are also other options to tell Bazelisk which version to use: How does Bazelisk know which Bazel version to run? To use for instance Bazel 0.26.1 you can Bazelisk this way: $ USE_BAZEL_VERSION=0.26.1 bazelisk version | 7 | 9 |
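For the concrete case in the question (TensorFlow 2.7's tested build expects Bazel 3.7.2), a small sketch of the `.bazelversion` route from the answer, written in Python for consistency with the other examples; it assumes you run it from the workspace root (the directory holding the WORKSPACE file).

```python
# Pin Bazel 3.7.2 by writing .bazelversion into the workspace root;
# bazelisk reads this file to decide which Bazel release to download and run.
from pathlib import Path

workspace_root = Path(".")  # assumed: the TensorFlow source checkout
(workspace_root / ".bazelversion").write_text("3.7.2\n")
```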