Columns: question_id (int64), creation_date (string), link (string), question (string), accepted_answer (string), question_vote (int64), answer_vote (int64)
62,691,279
2020-7-2
https://stackoverflow.com/questions/62691279/how-to-disable-tokenizers-parallelism-true-false-warning
I use PyTorch to train a huggingface-transformers model, but every epoch it outputs the warning: The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) How can I disable this warning?
Set the environment variable to the string "false", either with TOKENIZERS_PARALLELISM=false in your shell, or in the Python script with: import os; os.environ["TOKENIZERS_PARALLELISM"] = "false"
73
103
62,701,493
2020-7-2
https://stackoverflow.com/questions/62701493/3d-gridded-data-interpolation-in-julia
I'm struggling to convert some MATLAB code into Julia. I have some 3D gridded data (temperature that varies bi-dimensionally and over time) and want to change from an (x,y,t) mesh to a looser (xi,yi,ti) mesh. In MATLAB it would be a simple interp(x,y,t,T,xi,yi,ti). I tried using Interpolations and Dierckx, but both seemed to work only on 2D gridded data. Am I getting something wrong? I'm quite new to Julia programming... I'm already considering the possibility of solving the problem via PyCall with some NumPy/SciPy function. Thanks!
What led you to believe that Interpolations.jl works only for two-dimensional data? julia> a = rand(1:100, 10, 10, 10); julia> using Interpolations julia> itp = interpolate(a, BSpline(Linear())); julia> v = itp(1.4, 2.3, 3.7) 55.24
7
9
62,696,796
2020-7-2
https://stackoverflow.com/questions/62696796/singledispatchmethod-and-class-method-decorators-in-python-3-8
I am trying to use one of the new capabilities of Python 3.8 (currently using 3.8.3). Following the documentation I tried the example provided in the docs: from functools import singledispatchmethod class Negator: @singledispatchmethod @classmethod def neg(cls, arg): raise NotImplementedError("Cannot negate a") @neg.register @classmethod def _(cls, arg: int): return -arg @neg.register @classmethod def _(cls, arg: bool): return not arg Negator.neg(1) This, however, yields the following error: ... TypeError: Invalid first argument to `register()`: <classmethod object at 0x7fb9d31b2460>. Use either `@register(some_class)` or plain `@register` on an annotated function. How can I create a generic class method? Is there something I am missing in my example? Update: I have read Aashish A's answer and it seems like an ongoing issue. I have managed to solve my problem in the following way. from functools import singledispatchmethod class Negator: @singledispatchmethod @staticmethod def neg(arg): raise NotImplementedError("Cannot negate a") @neg.register def _(arg: int): return -arg @neg.register def _(arg: bool): return not arg print(Negator.neg(False)) print(Negator.neg(-1)) This seems to work in versions 3.8.1 and 3.8.3; however, it seems it shouldn't, as I am not using the staticmethod decorator on either of the underscore functions. This DOES work with classmethods, even though the issue seems to indicate the opposite. Keep in mind, if you are using an IDE, that the linter won't be happy with this approach and will throw a lot of errors.
This seems to be a bug in the functools library documented in this issue.
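For reference, a minimal sketch of the plain instance-method form of singledispatchmethod (no classmethod/staticmethod stacking), which as far as I can tell is unaffected by the bug on Python 3.8; this is an illustration, not the fix from the linked issue:

from functools import singledispatchmethod

class Negator:
    @singledispatchmethod
    def neg(self, arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    def _(self, arg: int):
        return -arg

    @neg.register
    def _(self, arg: bool):
        return not arg

print(Negator().neg(1))     # -1
print(Negator().neg(True))  # False (bool is registered more specifically than int)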
9
6
62,690,377
2020-7-2
https://stackoverflow.com/questions/62690377/tensorflow-compatibility-with-keras
I am using Python 3.6 and TensorFlow 2.0, and have some Keras code: import keras from keras.models import Sequential from keras.layers import Dense model = Sequential() model.add(Dense(1)) model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy']) When I run this code, I get the following error: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via pip install tensorflow I checked https://keras.io/, and it says Keras was built on TensorFlow 2.0, so I am confused. What exact version of TensorFlow does the latest Keras support, and how do I fix the above error? Thanks!
The problem is that the latest keras version (2.4.x) is just a wrapper on top of tf.keras, which I do not think is what you want, and this is why it specifically requires TensorFlow 2.2 or newer. What you can do is install Keras 2.3.1, which supports TensorFlow 2.x and 1.x and is the latest real release of Keras. You can also install Keras 2.2.4, which only supports TensorFlow 1.x. You can install specific versions like this: pip install --user keras==2.3.1
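Another option, if you are on TensorFlow 2.0 and would rather not pin a standalone Keras version, is to use the Keras API bundled with TensorFlow itself, which always matches the installed TensorFlow version. A minimal sketch of the same model from the question:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])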
12
16
62,691,561
2020-7-2
https://stackoverflow.com/questions/62691561/how-to-apply-different-border-widths-for-subregions-in-python-geopandas-chorople
I am making choropleth maps with geopandas. I want to draw maps with two layers of borders: thinner ones for national states (geopandas default), and thicker ones for various economic communities. Is this doable in geopandas? Here is an example: import geopandas as gpd import numpy as np import matplotlib.pyplot as plt world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) africa = world.query('continent == "Africa"') EAC = ["KEN", "RWA", "TZA", "UGA", "BDI"] africa["EAC"] = np.where(np.isin(africa["iso_a3"], EAC), 1, 0) africa.plot(column="pop_est") plt.show() I have created a dummy variable for countries belonging to group EAC. I would like to draw a thicker border surrounding the countries in this group, while leaving the national borders in as well. EDIT: I still don't know how to make this work in subplots. Here is an example: axs = ["ax1", "ax2"] vars = ["pop_est", "gdp_md_est"] fig, axs = plt.subplots(ncols=len(axs), figsize=(10, 10), sharex=True, sharey=True, constrained_layout=True) for ax, var in zip(axs, vars): africa.plot(ax=ax, column=var, edgecolor="black", missing_kwds={ "color": "lightgrey", "hatch": "///" }) ax.set_title(var) plt.show() I wasn't able to apply Martin's solution directly.
Specify the background plot as an axis and use it within the second plot, plotting only EAC countries. To have only outlines, you need facecolor='none'. ax = africa.plot(column="pop_est") africa.loc[africa['EAC'] == 1].plot(ax=ax, facecolor='none', edgecolor='red', linewidth=2) If you want a boundary only around those countries, you have to dissolve geometries before. africa.loc[africa['EAC'] == 1].dissolve('EAC').plot(ax=ax, facecolor='none', edgecolor='red', linewidth=2)
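Regarding the EDIT in the question about subplots: one way to combine this with a loop over axes is to dissolve the EAC geometries once and overlay them on each axis. A rough sketch, assuming a geopandas version that still ships the naturalearth_lowres sample dataset used in the question:

import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt

world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
africa = world.query('continent == "Africa"').copy()
EAC = ["KEN", "RWA", "TZA", "UGA", "BDI"]
africa["EAC"] = np.where(np.isin(africa["iso_a3"], EAC), 1, 0)

# dissolve once, then draw the thick outline on top of every choropleth axis
eac_outline = africa.loc[africa["EAC"] == 1].dissolve("EAC")

cols = ["pop_est", "gdp_md_est"]
fig, axs = plt.subplots(ncols=len(cols), figsize=(10, 5), sharex=True, sharey=True,
                        constrained_layout=True)
for ax, col in zip(axs, cols):
    africa.plot(ax=ax, column=col, edgecolor="black")
    eac_outline.plot(ax=ax, facecolor="none", edgecolor="red", linewidth=2)
    ax.set_title(col)
plt.show()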
8
10
62,678,411
2020-7-1
https://stackoverflow.com/questions/62678411/how-to-plot-a-paired-histogram-using-seaborn
I would like to make a paired histogram like the one shown here using the seaborn distplot. This kind of plot can also be referred to as the back-to-back histogram shown here, or a bihistogram inverted/mirrored along the x-axis as discussed here. Here is my code: import numpy as np import matplotlib.pyplot as plt import seaborn as sns green = np.random.normal(20,10,1000) blue = np.random.poisson(60,1000) fig, ax = plt.subplots(figsize=(8,6)) sns.distplot(blue, hist=True, kde=True, hist_kws={'edgecolor':'black'}, kde_kws={'linewidth':2}, bins=10, color='blue') sns.distplot(green, hist=True, kde=True, hist_kws={'edgecolor':'black'}, kde_kws={'linewidth':2}, bins=10, color='green') ax.set_xticks(np.arange(-20,121,20)) ax.set_yticks(np.arange(0.0,0.07,0.01)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) plt.show() Here is the output: When I use the method discussed here (plt.barh), I get the bar plot shown just below, which is not what I am looking for. Or maybe I haven't understood the workaround well enough... A simple/short implementation of python-seaborn-distplot similar to these kinds of plots would be perfect. I edited the figure of my first plot above to show the kind of plot I hope to achieve (though y-axis not upside down): Any leads would be greatly appreciated.
Here is a possible approach using seaborn's displots. Seaborn doesn't return the created graphical elements, but the ax can be interrogated. To make sure the ax only contains the elements you want upside down, those elements can be drawn first. Then, all the patches (the rectangular bars) and the lines (the curve for the kde) can be given their height in negative. Optionally the x-axis can be set at y == 0 using ax.spines['bottom'].set_position('zero'). import numpy as np import matplotlib.pyplot as plt import seaborn as sns green = np.random.normal(20, 10, 1000) blue = np.random.poisson(60, 1000) fig, ax = plt.subplots(figsize=(8, 6)) sns.distplot(green, hist=True, kde=True, hist_kws={'edgecolor': 'black'}, kde_kws={'linewidth': 2}, bins=10, color='green') for p in ax.patches: # turn the histogram upside down p.set_height(-p.get_height()) for l in ax.lines: # turn the kde curve upside down l.set_ydata(-l.get_ydata()) sns.distplot(blue, hist=True, kde=True, hist_kws={'edgecolor': 'black'}, kde_kws={'linewidth': 2}, bins=10, color='blue') ax.set_xticks(np.arange(-20, 121, 20)) ax.set_yticks(np.arange(0.0, 0.07, 0.01)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) pos_ticks = np.array([t for t in ax.get_yticks() if t > 0]) ticks = np.concatenate([-pos_ticks[::-1], [0], pos_ticks]) ax.set_yticks(ticks) ax.set_yticklabels([f'{abs(t):.2f}' for t in ticks]) ax.spines['bottom'].set_position('zero') plt.show()
7
7
62,684,213
2020-7-1
https://stackoverflow.com/questions/62684213/asyncio-task-was-destroyed-but-it-is-pending
I am working on a sample program that reads from a data source (csv or rdbms) in chunks, makes some transformations and sends it via socket to a server. But because the csv is very large, for testing purposes I want to stop the reading after a few chunks. Unfortunately something goes wrong and I do not know what or how to fix it. Probably I have to do some cancellation, but I'm not sure where and how. I get the following error: Task was destroyed but it is pending! task: <Task pending coro=<<async_generator_athrow without __name__>()>> The sample code is: import asyncio import json async def readChunks(): # this is basically a dummy alternative for reading csv in chunks df = [{"chunk_" + str(x) : [r for r in range(10)]} for x in range(10)] for chunk in df: await asyncio.sleep(0.001) yield chunk async def send(row): j = json.dumps(row) print(f"to be sent: {j}") await asyncio.sleep(0.001) async def main(): i = 0 async for chunk in readChunks(): for k, v in chunk.items(): await asyncio.gather(send({k:v})) i += 1 if i > 5: break #print(f"item in main via async generator is {chunk}") loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close()
Many async resources, such as generators, need to be cleaned up with the help of an event loop. When an async for loop stops iterating an async generator via break, the generator is cleaned up by the garbage collector only. This means the task is pending (waits for the event loop) but gets destroyed (by the garbage collector). The most straightforward fix is to aclose the generator explicitly: async def main(): i = 0 aiter = readChunks() # name iterator in order to ... try: async for chunk in aiter: ... i += 1 if i > 5: break finally: await aiter.aclose() # ... clean it up when done These patterns can be simplified using the asyncstdlib (disclaimer: I maintain this library). asyncstdlib.islice allows to take a fixed number of items before cleanly closing the generator: import asyncstdlib as a async def main(): async for chunk in a.islice(readChunks(), 5): ... If the break condition is dynamic, scoping the iterator guarantees cleanup in any case: import asyncstdlib as a async def main(): async with a.scoped_iter(readChunks()) as aiter: async for idx, chunk in a.enumerate(aiter): ... if idx >= 5: break
7
7
62,683,076
2020-7-1
https://stackoverflow.com/questions/62683076/is-there-a-way-to-do-conditionals-inside-python-3-for-loops
Coming from primarily coding in Java and wanted to know if Python could use conditionals and different kinds of incrementing inside its for loops like Java and C can. Sorry if this seems like a simple question. i.e.: boolean flag = True for(int i = 1; i < 20 && flag; i *= 2) { //Code in here }
Not directly. A for loop iterates over a pre-generated sequence, rather than generating the sequence itself. The naive translation would probably look something like flag = True i = 1 while i < 20: if not flag: break ... if some_condition: flag = False i *= 2 However, your code probably could execute the break statement wherever you set flag to False, so you could probably get rid of the flag altogether. i = 1 while i < 20: ... if some_condition: break i *= 2 Finally, you can define your own generator to iterate over def powers_of_two(): i = 1 while True: yield i i *= 2 for i in powers_of_two(): ... if some_condition: break
15
22
62,682,024
2020-7-1
https://stackoverflow.com/questions/62682024/how-to-apply-pandas-map-where-the-function-takes-more-than-1-argument
Suppose I have a dataframe containing a column of probability. Now I create a map function which returns 1 if the probability is greater than a threshold value, otherwise returns 0. Now the catch is that I want to specify the threshold by giving it as an argument to the function, and then mapping it on the pandas dataframe. Take the code example below: def partition(x,threshold): if x<threshold: return 0 else: return 1 df = pd.DataFrame({'probability':[0.2,0.8,0.4,0.95]}) df2 = df.map(partition) My question is, how would the last line work, i.e. how do I pass the threshold value inside my map function?
We can use DataFrame.applymap: df2 = df.applymap(lambda x: partition(x, threshold=0.5)) Or, if only one column is needed: df['probability'] = df['probability'].apply(lambda x: partition(x, threshold=0.5)) But it is not necessary here. You can simply do: df2 = df.ge(threshold).astype(int) which I would recommend.
9
9
62,681,223
2020-7-1
https://stackoverflow.com/questions/62681223/pycall-cant-find-scipy-in-julia
I'm currently rewriting a bunch of MATLAB code in Julia. This code involves a lot of math and, particularly, interpolation functions for a 3D mesh. It is easy to deal with this in MATLAB: all I need to do is use the interp3 function. Since I couldn't find any simple way to do something similar in Julia, I'm trying to use some SciPy features through PyCall. Now, the problem: I've already installed PyCall and changed ENV["PYTHON"] to the path of my own installed Anaconda. No matter what, and I have looked extensively for solutions, I still get the following error message: julia> pyimport("scipy") ERROR: PyError (PyImport_ImportModule The Python package scipy could not be found by pyimport. Usually this means that you did not install scipy in the Python version being used by PyCall. PyCall is currently configured to use the Python version at: /usr/bin/python3 and you should use whatever mechanism you usually use (apt-get, pip, conda, etcetera) to install the Python package containing the scipy module. One alternative is to re-configure PyCall to use a different Python version on your system: set ENV["PYTHON"] to the path/name of the python executable you want to use, run Pkg.build("PyCall"), and re-launch Julia. Another alternative is to configure PyCall to use a Julia-specific Python distribution via the Conda.jl package (which installs a private Anaconda Python distribution), which has the advantage that packages can be installed and kept up-to-date via Julia. As explained in the PyCall documentation, set ENV["PYTHON"]="", run Pkg.build("PyCall"), and re-launch Julia. Then, To install the scipy module, you can use `pyimport_conda("scipy", PKG)`, where PKG is the Anaconda package the contains the module scipy, or alternatively you can use the Conda package directly (via `using Conda` followed by `Conda.add` etcetera). ) <class 'ModuleNotFoundError'> ModuleNotFoundError("No module named 'scipy'",) Stacktrace: [1] pyimport(::String) at /home/gabriel/.julia/packages/PyCall/zqDXB/src/PyCall.jl:536 [2] top-level scope at none:0 Also, everything I tried, I tried both on Windows 10 and Linux. I don't know what to do anymore! I would much appreciate the help! Thanks in advance!
Install scipy with Conda - Julia's interface to Python's packages: using Conda Conda.add("scipy") Now pyimport("scipy") will work like a charm. Note that with a custom Python installation various things can happen (and you are left on your own to manage that), hence I recommend using the Python built into Julia. This is how you switch back to the built-in Python: using Pkg ENV["PYTHON"]="" Pkg.build("PyCall")
7
6
62,671,883
2020-7-1
https://stackoverflow.com/questions/62671883/discordbot-using-threading-raise-runtimeerror-set-wakeup-fd-only-works-in-main
I am using the threading module to host a web server and a Discord bot at the same time. Everything runs fine on Windows but as soon as I load it onto my Linux server I get the following error: Starting Bot Exception in thread Bot: Traceback (most recent call last): File "/usr/lib/python3.8/asyncio/unix_events.py", line 95, in add_signal_handler signal.set_wakeup_fd(self._csock.fileno()) ValueError: set_wakeup_fd only works in main thread During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/home/webadmin/discordbot/bot/moduls/m_threadingmaker.py", line 15, in run self.client.run(self.args[0]) File "/home/webadmin/discordbot/bot/venv/lib/python3.8/site-packages/discord/client.py", line 614, in run loop.add_signal_handler(signal.SIGINT, lambda: loop.stop()) File "/usr/lib/python3.8/asyncio/unix_events.py", line 97, in add_signal_handler raise RuntimeError(str(exc)) RuntimeError: set_wakeup_fd only works in main thread I have upgraded from python 3.7 up to python 3.8 but I still have the same error. Here is my code: main.py (webserver worked) dcbot = m_threadingmaker.myThread("Bot", client, secrets.token) webserver = m_threadingmaker.myThread("Flask", app, 'localhost', '7010') #webserver.start() dcbot.start() M_threadingmaker.py from threading import Thread class myThread (Thread): def __init__(self, name, client, *args): Thread.__init__(self) self.name = name self.client = client self.args = args def run(self): print("Starting " + self.name) if self.name == "Flask": self.client.run(host=self.args[0], port=self.args[1]) else: self.client.run(self.args[0]) print("Exiting " + self.name)
I'd suggest using client.start() in an async coroutine instead of client.run() in a separate thread. There is a more detailed example here.
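As a rough illustration of that suggestion (a sketch only, not the linked example; client, app and secrets.token are assumed to be the objects from the question): run the Discord client on the main thread's event loop via the client.start() coroutine, and move the blocking Flask server into a background thread instead, so the signal handlers stay on the main thread.

import asyncio
import threading

def run_flask():
    # Flask's app.run() blocks, so it lives in a daemon thread
    app.run(host="localhost", port=7010)

async def main():
    threading.Thread(target=run_flask, daemon=True).start()
    # client.start() is a coroutine; awaiting it keeps discord.py on the main thread
    await client.start(secrets.token)

asyncio.run(main())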
7
1
62,671,692
2020-7-1
https://stackoverflow.com/questions/62671692/how-to-unpack-a-single-variable-tuple-in-python3
I have a tuple: ('[email protected]',). I want to unpack it to get '[email protected]'. How can I do so? I am new to Python, so please excuse me.
tu = ('[email protected]',) value = tu[0] print(value) # prints '[email protected]' (avoid naming the variable str, which would shadow the built-in) A tuple is a sequence type, which means the elements can be accessed by their indices.
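Indexing works; as a small addition, sequence unpacking also handles the one-element case directly (note the trailing comma on the left-hand side):

tu = ('[email protected]',)
(email,) = tu      # or equivalently: email, = tu
print(email)       # '[email protected]'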
14
4
62,668,987
2020-7-1
https://stackoverflow.com/questions/62668987/does-python-3-gzip-closes-the-fileobj
The gzip docs for Python 3 state that "Calling a GzipFile object’s close() method does not close fileobj, since you might wish to append more material after the compressed data". Does this mean that the gzip file handle f_in is not closed if we do the following? import gzip import shutil with gzip.open('/home/joe/file.txt.gz', 'rb') as f_in: with open('/home/joe/file.txt', 'wb') as f_out: shutil.copyfileobj(f_in, f_out) If so, will this cause a leak if this code is executed multiple times?
The warning about fileobj not being closed only applies when you open the file, and pass it to the GzipFile via the fileobj= parameter. When you pass only a filename, GzipFile "owns" the file handle and will also close it.
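To make the two cases concrete, here is a small sketch (file paths taken from the question, so it assumes those files exist):

import gzip

# Case 1: opened by filename -- GzipFile owns the underlying file handle,
# and the with-block closes it. The question's code is this case: no leak.
with gzip.open('/home/joe/file.txt.gz', 'rb') as f_in:
    data = f_in.read()

# Case 2: you open the file yourself and pass it via fileobj= -- closing the
# GzipFile does NOT close `raw`, so close it yourself (e.g. with a with-block).
with open('/home/joe/file.txt.gz', 'rb') as raw:
    with gzip.GzipFile(fileobj=raw) as gz:
        data = gz.read()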
7
8
62,630,875
2020-6-29
https://stackoverflow.com/questions/62630875/how-to-change-the-plot-order-of-the-categorical-x-axis
I have a dataframe which looks like this: df: Time of Day Season value Day Shoulder 30.581606 Day Summer 25.865560 Day Winter 42.644530 Evening Shoulder 39.954759 Evening Summer 32.053458 Evening Winter 53.678297 Morning Shoulder 32.171245 Morning Summer 25.070815 Morning Winter 42.876667 Night Shoulder 22.082042 Night Summer 17.510290 Night Winter 33.262356 I am plotting the values in a line plot with seaborn using the following code: g = sns.lineplot(x='Time of Day',y='value',data=df,hue='Season') It generates the following graph: The problem with the graph is that the x-axis is not in the desired order. I want to change the order of the x-axis values to ['Morning', 'Day','Evening','Night']. I tried to change it with the following command: g.set_xticklabels(['Morning','Day','Evening','Night']) But this command only changes the labels of the x-axis, not the order of the data points. Could anyone help me fix the issue?
Use pandas.Categorical to set the categorical order of 'Time of Day' in the df. Tested in python 3.11, pandas 1.5.3, matplotlib 3.7.1, seaborn 0.12.2 import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = {'Time of Day': ['Day', 'Day', 'Day', 'Evening', 'Evening', 'Evening', 'Morning', 'Morning', 'Morning', 'Night', 'Night', 'Night'], 'Season': ['Shoulder', 'Summer', 'Winter', 'Shoulder', 'Summer', 'Winter', 'Shoulder', 'Summer', 'Winter', 'Shoulder', 'Summer', 'Winter'], 'value': [30.581606, 25.865560000000002, 42.644529999999996, 39.954759, 32.053458, 53.678297, 32.171245, 25.070815, 42.876667, 22.082042, 17.510289999999998, 33.262356]} # create dataframe df = pd.DataFrame(data) # set categorical order df['Time of Day'] = pd.Categorical(df['Time of Day'], categories=['Morning', 'Day', 'Evening', 'Night'], ordered=True) # plot ax = sns.lineplot(x='Time of Day', y='value', data=df, hue='Season') sns.move_legend(ax, bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) If you are plotting data against a categorical x-axis, then consider using a categorical plot like sns.pointplot, where the x-axis can be ordered with the order parameter. The figure-level plot, sns.catplot with kind='point' also works. order=['Morning', 'Day', 'Evening', 'Night'] ax = sns.pointplot(x='Time of Day', y='value', data=df, hue='Season', order=order, marker='') sns.move_legend(ax, bbox_to_anchor=(1, 0.5), loc='center left', frameon=False)
8
16
62,604,916
2020-6-27
https://stackoverflow.com/questions/62604916/how-to-build-an-sdist-with-pip
I'm in the process of converting my projects to use flit as their build backend, using pyproject.toml as defined in PEP 517. I still have some projects that will continue to use setuptools as their build backend. Some projects may not be PEP 517 compliant and will use the legacy setup.py build system. I'm using the latest pip (20.1) as the build frontend, regardless of whether the build backend for a particular project is setuptools or flit. However, I can't figure out how to generate an sdist with pip. I can generate a wheel with pip, regardless of the build backend: $ pip wheel . -w dist --no-deps But there doesn't appear to be a similar pip command to generate an sdist, even though all build backends must support creating sdists, as defined in PEP 517. Shouldn't all build frontends support building sdists? Should I be using another tool? I'd like to use the same command to build all of my projects regardless of what their build backend is.
The Python Packaging Guide recommends building packages using the new build library maintained by the same Python Packaging Authority that maintains pip. https://github.com/pypa/build python -m build . --sdist
13
6
62,662,564
2020-6-30
https://stackoverflow.com/questions/62662564/how-do-i-clear-the-cache-from-cached-property-decorator
I have a function called "value" that performs a heavy calculation... The result of the function is always the same if the dataset is not changed for the identifier. Once the dataset is changed for some identifier, I want to clear the cache and let the function calculate it again. You can understand me better by looking at this code: from functools import cached_property class Test: identifiers = {} dataset = an empty object of dataset type def __init__(self, identifier, ...) self.identifier = identifier ... Test.identifiers[identifier] = self ... @cached_property def value(self): result = None # heavy calculate based on dataset return result @classmethod def get(cls, identifier): if identifier in cls.identifiers: return cls.identifiers[identifier] else: return cls(identifier, ...) @classmethod def update(cls, dataset): for block in dataset: # assume there is block['identifier'] in each block # here I want to clear the cache of value() function instance = cls.get(block['identifier']) # clear @cached_property of instance cls.dataset.append(block)
As you can read in the CPython source, the value for a cached_property in Python 3.8 is stored in an instance variable of the same name. This is not documented, so it may be an implementation detail that you should not rely upon. But if you just want to get it done without regard to compatibility, you can remove the cache with del instance.value. As of Python 3.9, this is documented.
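A minimal sketch of that invalidation pattern (a toy class for illustration, not the Test class from the question):

from functools import cached_property

class Report:
    @cached_property
    def value(self):
        print("computing...")
        return 42

r = Report()
r.value        # prints "computing...", caches 42 in r.__dict__["value"]
r.value        # cached: no recomputation

del r.value    # clears the cache (raises AttributeError if nothing is cached yet)
r.value        # prints "computing..." again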
30
26
62,618,680
2020-6-28
https://stackoverflow.com/questions/62618680/overwrite-an-excel-sheet-with-pandas-dataframe-without-affecting-other-sheets
I want to overwrite an existing sheet in an Excel file with a Pandas dataframe, but don't want any changes in the other sheets of the same file. How can this be achieved? I tried the code below, but instead of overwriting, it appends the data to 'Sheet2'. import pandas as pd from openpyxl import load_workbook book = load_workbook('sample.xlsx') writer = pd.ExcelWriter('sample.xlsx', engine = 'openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) df.to_excel(writer, 'sheet2', index = False) writer.save()
I didn't find any option other than this; it should be a quick solution for you. I believe there is still no direct way to do this (correct me if I'm wrong), which is why we need to work around it like this. import pandas as pd def write_excel(filename,sheetname,dataframe): with pd.ExcelWriter(filename, engine='openpyxl', mode='a') as writer: workBook = writer.book try: workBook.remove(workBook[sheetname]) except: print("Worksheet does not exist") finally: dataframe.to_excel(writer, sheet_name=sheetname,index=False) writer.save() df = pd.DataFrame({'Col1':[1,2,3,4,5,6], 'col2':['foo','bar','foobar','barfoo','foofoo','barbar']}) write_excel('PRODUCT.xlsx','PRODUCTS',df) Let me know if you find this helpful, or ignore it if you need a better solution.
16
25
62,584,640
2020-6-25
https://stackoverflow.com/questions/62584640/suggested-way-to-run-multiple-sql-statements-in-python
What would be the suggested way to run something like the following in python: self.cursor.execute('SET FOREIGN_KEY_CHECKS=0; DROP TABLE IF EXISTS %s; SET FOREIGN_KEY_CHECKS=1' % (table_name,)) For example, should this be three separate self.cursor.execute(...) statements? Is there a specific method that should be used other than cursor.execute(...) to do something like this, or what is the suggested practice for doing this? Currently the code I have is as follows: self.cursor.execute('SET FOREIGN_KEY_CHECKS=0;') self.cursor.execute('DROP TABLE IF EXISTS %s;' % (table_name,)) self.cursor.execute('SET FOREIGN_KEY_CHECKS=1;') self.cursor.execute('CREATE TABLE %s select * from mytable;' % (table_name,)) As you can see, everything is run separately...so I'm not sure if this is a good idea or not (or rather -- what the best way to do the above is). Perhaps BEGIN...END ?
I would create a stored procedure: DROP PROCEDURE IF EXISTS CopyTable; DELIMITER $$ CREATE PROCEDURE CopyTable(IN _mytable VARCHAR(64), _table_name VARCHAR(64)) BEGIN SET FOREIGN_KEY_CHECKS=0; SET @stmt = CONCAT('DROP TABLE IF EXISTS ',_table_name); PREPARE stmt1 FROM @stmt; EXECUTE stmt1; SET FOREIGN_KEY_CHECKS=1; SET @stmt = CONCAT('CREATE TABLE ',_table_name,' as select * from ', _mytable); PREPARE stmt1 FROM @stmt; EXECUTE stmt1; DEALLOCATE PREPARE stmt1; END$$ DELIMITER ; and then just run: args = ['mytable', 'table_name'] cursor.callproc('CopyTable', args) keeping it simple and modular. Of course you should do some kind of error checking and you could even have the stored procedure return a code to indicate success or failure.
39
13
62,603,598
2020-6-26
https://stackoverflow.com/questions/62603598/enforcing-units-on-numbers-using-python-type-hints
Is there a way to use Python type hints as units? The type hint docs show some examples that suggest it might be possible using NewType, but also those examples show that addition of two values of the same "new type" do not give a result of the "new type" but rather the base type. Is there a way to enrich the type definition so that you can specify type hints that work like units (not insofar as they convert, but just so that you get a type warning when you get a different unit)? Something that would allow me to do this or similar: Seconds = UnitType('Seconds', float) Meters = UnitType('Meters', float) time1 = Seconds(5)+ Seconds(8) # gives a value of type `Seconds` bad_units1 = Seconds(1) + Meters(5) # gives a type hint error, but probably works at runtime time2 = Seconds(1)*5 # equivalent to `Seconds(1*5)` # Multiplying units together of course get tricky, so I'm not concerned about that now. I know runtime libraries for units exist, but my curiosity is if type hints in python are capable of handling some of that functionality.
You can do this by creating a type stub file, which defines the acceptable types for the __add__/__radd__ methods (which define the + operator) and __sub__/__rsub__ methods (which define the - operator). There are many more similar methods for other operators of course, but for the sake of brevity this example only uses those. units.py Here we define the units as simple aliases of int. This minimises the runtime cost, since we aren't actually creating a new class. Seconds = int Meters = int units.pyi This is a type stub file. It tells type checkers the types of everything defined in units.py, instead of having the types defined within the code there. Type checkers assume this is the source of truth, and don't raise errors when it differs from what is actually defined in units.py. from typing import Generic, TypeVar T = TypeVar("T") class Unit(int, Generic[T]): def __add__(self, other: T) -> T: ... def __radd__(self, other: T) -> T: ... def __sub__(self, other: T) -> T: ... def __rsub__(self, other: T) -> T: ... def __mul__(self, other: int) -> T: ... def __rmul__(self, other: int) -> T: ... class Seconds(Unit["Seconds"]): ... class Meters(Unit["Meters"]): ... Here we define Unit as a generic type inheriting from int, where adding/subtracting takes and returns values of type parameter T. Seconds and Meters are then defined as subclasses of Unit, with T equal to Seconds and Meters respectively. This way, the type checker knows that adding/subtracting with Seconds takes and returns other values of type Seconds, and similarly for Meters. Also, we define __mul__ and __rmul__ on Unit as taking a parameter of type int and returning T - so Seconds(1) * 5 should have type Seconds. main.py This is your code. from units import Seconds, Meters time1 = Seconds(5) + Seconds(8) # time1 has type Seconds, yay! bad_units1 = Seconds(1) + Meters(5) # I get a type checking error: # Operator "+" not supported for types "Meters" and "Seconds" # Yay! time2 = Seconds(1) * 5 # time2 has type Seconds, yay! meter_seconds = Seconds(1) * Meters(5) # This is valid because `Meters` is a subclass of `int` (as far # as the type checker is concerned). meter_seconds ends up being # type Seconds though - as you say, multiplying gets tricky. Of course, all of this is just type checking. You can do what you like at run time, and the pyi file won't even be loaded.
14
6
62,584,184
2020-6-25
https://stackoverflow.com/questions/62584184/understanding-the-shape-of-spectrograms-and-n-mels
I am going through these two librosa docs: melspectrogram and stft. I am working on datasets of audio of variable lengths, but I don't quite get the shapes. For example: (waveform, sample_rate) = librosa.load('audio_file') spectrogram = librosa.feature.melspectrogram(y=waveform, sr=sample_rate) dur = librosa.get_duration(waveform) spectrogram = torch.from_numpy(spectrogram) print(spectrogram.shape) print(sample_rate) print(dur) Output: torch.Size([128, 150]) 22050 3.48 What I understand so far is the following: The sample rate means you get N samples each second, in this case 22050 samples each second. The window length is the stretch of audio over which the FFT is calculated. The STFT is the calculation of the FFT over small windows of time of the audio. The shape of the output is (n_mels, t). t = duration/window_of_fft. I am trying to understand or calculate: What is n_fft? I mean, what exactly is it doing to the audio wave? I read in the documentation the following: n_fft : int > 0 [scalar] length of the windowed signal after padding with zeros. The number of rows in the STFT matrix D is (1 + n_fft/2). The default value, n_fft=2048 samples, corresponds to a physical duration of 93 milliseconds at a sample rate of 22050 Hz, i.e. the default sample rate in librosa. This means that in each window 2048 samples are taken, which means that 1/22050 * 2048 = 93[ms]. So the FFT is being calculated for every 93[ms] of the audio? Does this mean that the window size and the window are for filtering the signal in this frame? In the example above, I understand I am getting 128 Mel spectrograms, but what exactly does that mean? And what is hop_length? Reading the docs, I understand that it is how far to shift the window from one FFT window to the next, right? If this value is 512 and n_fft is also 512, what does that mean? Does this mean that it will take a window of 23[ms], calculate the FFT for this window and skip the next 23[ms]? How can I specify that I want to overlap from one FFT window to another? Please help, I have watched many videos on calculating spectrograms but I just can't seem to see it in real life.
The essential parameter to understanding the output dimensions of spectrograms is not necessarily the length of the used FFT (n_fft), but the distance between consecutive FFTs, i.e., the hop_length. When computing an STFT, you compute the FFT for a number of short segments. These segments have the length n_fft. Usually these segments overlap (in order to avoid information loss), so the distance between two segments is often not n_fft, but something like n_fft/2. The name for this distance is hop_length. It is also defined in samples. So when you have 1000 audio samples, and the hop_length is 100, you get 10 features frames (note that, if n_fft is greater than hop_length, you may need to pad). In your example, you are using the default hop_length of 512. So for audio sampled at 22050 Hz, you get a feature frame rate of frame_rate = sample_rate/hop_length = 22050 Hz/512 = 43 Hz Again, padding may change this a little. So for 10s of audio at 22050 Hz, you get a spectrogram array with the dimensions (128, 430), where 128 is the number of Mel bins and 430 the number of features (in this case, Mel spectra).
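A quick way to sanity-check this is to compute the frame count for a known-length signal. A small sketch (the exact frame count may vary by one with librosa's default centering/padding):

import numpy as np
import librosa

sr = 22050
y = np.zeros(10 * sr, dtype=np.float32)          # 10 seconds of "audio"
S = librosa.feature.melspectrogram(y=y, sr=sr)   # defaults: n_mels=128, n_fft=2048, hop_length=512
print(S.shape)          # typically (128, 431): 128 Mel bins, roughly len(y)/hop_length frames
print(len(y) / 512)     # 430.66...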
7
15
62,569,594
2020-6-25
https://stackoverflow.com/questions/62569594/request-header-field-access-control-allow-origin-is-not-allowed-by-access-contr
I created an API endpoint using Google Cloud Functions and am trying to call it from a JS fetch function. I am running into errors that I am pretty sure are related to either CORS or the output format, but I'm not really sure what is going on. A few other SO questions are similar, and helped me realize I needed to remove the mode: "no-cors". Most mention enabling CORS on the BE, so I added response.headers.set('Access-Control-Allow-Origin', '*') - which I learned of in this article - to ensure CORS would be enabled... But I still get the "Failed to fetch" error. The Full Errors (reproducible in the live demo linked below) are: Uncaught Error: Cannot add node 1 because a node with that id is already in the Store. (This one is probably unrelated?) Access to fetch at 'https://us-central1-stargazr-ncc-2893.cloudfunctions.net/nearest_csc?lat=37.75&lon=-122.5' from origin 'https://o2gxx.csb.app' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. GET https://us-central1-stargazr-ncc-2893.cloudfunctions.net/nearest_csc?lat=37.75&lon=-122.5 net::ERR_FAILED Uncaught (in promise) TypeError: Failed to fetch See Code Snippets below, please note where I used <---- *** Message *** to denote parts of the code that have recently changed, giving me one of those two errors. Front End Code: function getCSC() { let lat = 37.75; let lng = -122.5; fetch( `https://us-central1-stargazr-ncc-2893.cloudfunctions.net/nearest_csc?lat=${lat}&lon=${lng}`, { method: "GET", // mode: "no-cors", <---- **Uncommenting this predictably gets rid of CORS error but returns a Opaque object which seems to have no data** headers: { // Accept: "application/json", <---- **Originally BE returned stringified json. Not sure if I should be returning it as something else or if this is still needed** Origin: "https://lget3.csb.app", "Access-Control-Allow-Origin": "*" } } ) .then(response => { console.log(response); console.log(response.json()); }); } Back End Code: import json import math import os import flask def nearest_csc(request): """ args: request object w/ args for lat/lon returns: String, either with json representation of nearest site information or an error message """ lat = request.args.get('lat', type = float) lon = request.args.get('lon', type = float) # Get list of all csc site locations with open(file_path, 'r') as f: data = json.load(f) nearby_csc = [] # Removed from snippet for clarity: # populate nearby_csc (list) with sites (dictionaries) as elems # Determine which site is the closest, assigned to var 'closest_site' # Grab site url and return site data if within 100 km if dist_km < 100: closest_site['dist_km'] = dist_km // return json.dumps(closest_site) <--- **Original return statement. Added 4 lines below in an attempt to get CORS set up, but did not seem to work** response = flask.jsonify(closest_site) response.headers.set('Access-Control-Allow-Origin', '*') response.headers.set('Access-Control-Allow-Methods', 'GET, POST') return response return "No sites found within 100 km" Fuller context for code snippets above: Here is a Code Sandbox Demo of the above. Here is the full BE code on GitHub, minus the most recent attempt at adding CORS. The API endpoint. I'm also wondering if it's possible that CodeSandbox does CORS in a weird way, but have had the same issue running it on localhost:3000, and of course in prod would have this on my own personal domain. 
The Error would appear to be CORS-related ( 'https://o2gxx.csb.app' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response.) but I thought adding response.headers.set('Access-Control-Allow-Origin', '*') would solve that. Do I need to change something else on the BE? On the FE? TLDR; I am getting the Errors "Failed to fetch" and "field access-control-allow-origin is not allowed by Access-Control-Allow-Headers" even after attempts to enable CORS on backend and add headers to FE. See the links above for live demo of code.
Drop the part of your frontend code that adds a Access-Control-Allow-Origin header. Never add Access-Control-Allow-Origin as a request header in your frontend code. The only effect that’ll ever have is a negative one: it’ll cause browsers to do CORS preflight OPTIONS requests even in cases when the actual (GET, POST, etc.) request from your frontend code would otherwise not trigger a preflight. And then the preflight will fail with this message: Request header field Access-Control-Allow-Origin is not allowed by Access-Control-Allow-Headers in preflight response …that is, it’ll fail with that unless the server the request is being made to has been configured to send an Access-Control-Allow-Headers: Access-Control-Allow-Origin response header. But you never want Access-Control-Allow-Origin in the Access-Control-Allow-Headers response-header value. If that ends up making things work, you’re actually just fixing the wrong problem. Because the real fix is: never set Access-Control-Allow-Origin as a request header. Intuitively, it may seem logical to look at it as “I’ve set Access-Control-Allow-Origin both in the request and in the response, so that should be better than just having it in the response” — but it’s actually worse than only setting it in the response (for the reasons described above). So the bottom line: Access-Control-Allow-Origin is solely a response header, not a request header. You only ever want to set it in server-side response code, not frontend JavaScript code. The code in the question was also trying to add an Origin header. You also never want to try to set that header in your frontend JavaScript code. Unlike the case with the Access-Control-Allow-Origin header, Origin is actually a request header — but it’s a special header that’s controlled completely by browsers, and browsers won’t ever allow your frontend JavaScript code to set it. So don’t ever try to.
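On the server side, the usual pattern for an HTTP Cloud Function is to answer the preflight OPTIONS request and attach the CORS headers to the real response. A rough sketch, not the question's full function; closest_site here is a placeholder standing in for the real lookup:

import flask

def nearest_csc(request):
    cors_headers = {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET',
        'Access-Control-Allow-Headers': 'Content-Type',
        'Access-Control-Max-Age': '3600',
    }
    # Answer the CORS preflight request with no body.
    if request.method == 'OPTIONS':
        return ('', 204, cors_headers)

    closest_site = {'dist_km': 42.0}  # placeholder for the real computation
    response = flask.jsonify(closest_site)
    for key, value in cors_headers.items():
        response.headers[key] = value
    return response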
17
38
62,555,987
2020-6-24
https://stackoverflow.com/questions/62555987/lightgbm-ranking-example
Can anyone share a minimal example with data for how to train a ranking model with lightgbm? Preferably with the scikit-learn API? What I am struggling with is how to pass the label data. My data are page impressions and look like this: X: user1, feature1, ... user2, feature1, ... y: user1, page1, 10 impressions user1, page2, 6 impressions user2, page1, 9 impressions So far I think I have figured out that the length of my training data has to be the length of the above y (3): one line per (user, page) group. The parameter group in the scikit-learn API (set_group() in the standard API) is a list of length set(user_ids), where each entry is the number of distinct pages that this user has visited. In the above example, that would be (2, 1). The sum of this list would equal the length of my training set. But how do I give the information that for user1, page1 has been visited more often than page2?
Here is how I used LightGBM LambdaRank. First we import some libraries and define our dataset import numpy as np import pandas as pd import lightgbm df = pd.DataFrame({ "query_id":[i for i in range(100) for j in range(10)], "var1":np.random.random(size=(1000,)), "var2":np.random.random(size=(1000,)), "var3":np.random.random(size=(1000,)), "relevance":list(np.random.permutation([0,0,0,0,0, 0,0,0,1,1]))*100 }) Here is the dataframe: query_id var1 var2 var3 relevance 0 0 0.624776 0.191463 0.598358 0 1 0 0.258280 0.658307 0.148386 0 2 0 0.893683 0.059482 0.340426 0 3 0 0.879514 0.526022 0.712648 1 4 0 0.188580 0.279471 0.062942 0 .. ... ... ... ... ... 995 99 0.509672 0.552873 0.166913 0 996 99 0.244307 0.356738 0.925570 0 997 99 0.827925 0.827747 0.695029 1 998 99 0.476761 0.390823 0.670150 0 999 99 0.241392 0.944994 0.671594 0 [1000 rows x 5 columns] The structure of this dataset is important. In learning to rank tasks, you probably work with a set of queries. Here I define a dataset of 1000 rows, with 100 queries, each of 10 rows. These queries could also be of variable length. Now for each query, we have some variables and we also get a relevance. I used numbers 0 and 1 here, so this is basically the task that for each query (set of 10 rows), I want to create a model that assigns higher relevance to the 2 rows that have a 1 for relevance. Anyway, we continue with the setup for LightGBM. I split the dataset into a training set and validation set, but you can do whatever you want. I would recommend using at least 1 validation set during training. train_df = df[:800] # first 80% validation_df = df[800:] # remaining 20% qids_train = train_df.groupby("query_id")["query_id"].count().to_numpy() X_train = train_df.drop(["query_id", "relevance"], axis=1) y_train = train_df["relevance"] qids_validation = validation_df.groupby("query_id")["query_id"].count().to_numpy() X_validation = validation_df.drop(["query_id", "relevance"], axis=1) y_validation = validation_df["relevance"] Now this is probably the thing you were stuck at. We create these 3 vectors/matrices for each dataframe. The X_train is the collection of your indepedent variables, so the input data for your model. y_train is your dependent variable, what you are trying to predict/rank. Lastly, qids_train are you query ids. They look like this: array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]) Also this is X_train: var1 var2 var3 0 0.624776 0.191463 0.598358 1 0.258280 0.658307 0.148386 2 0.893683 0.059482 0.340426 3 0.879514 0.526022 0.712648 4 0.188580 0.279471 0.062942 .. ... ... ... 795 0.014315 0.302233 0.255395 796 0.247962 0.871073 0.838955 797 0.605306 0.396659 0.940086 798 0.904734 0.623580 0.577026 799 0.745451 0.951092 0.861373 [800 rows x 3 columns] and this is y_train: 0 0 1 0 2 0 3 1 4 0 .. 795 0 796 0 797 1 798 0 799 0 Name: relevance, Length: 800, dtype: int64 Note that both of them are pandas dataframes, LightGBM supports them, however numpy arrays would also work. As you can see they indicate the length of each query. If your queries would be of variable lenght, then the numbers in this list would also be different. In my example, all queries are the same length. 
We do the exact same thing for the validation set, and then we are ready to start the LightGBM model setup and training. I use the SKlearn API since I am familiar with that one. model = lightgbm.LGBMRanker( objective="lambdarank", metric="ndcg", ) I only use the very minimum amount of parameters here. Feel free to take a look ath the LightGBM documentation and use more parameters, it is a very powerful library. To start the training process, we call the fit function on the model. Here we specify that we want NDCG@10, and want the function to print the results every 10th iteration. model.fit( X=X_train, y=y_train, group=qids_train, eval_set=[(X_validation, y_validation)], eval_group=[qids_validation], eval_at=10, verbose=10, ) which starts the training and prints: [10] valid_0's ndcg@10: 0.562929 [20] valid_0's ndcg@10: 0.55375 [30] valid_0's ndcg@10: 0.538355 [40] valid_0's ndcg@10: 0.548532 [50] valid_0's ndcg@10: 0.549039 [60] valid_0's ndcg@10: 0.546288 [70] valid_0's ndcg@10: 0.547836 [80] valid_0's ndcg@10: 0.552541 [90] valid_0's ndcg@10: 0.551994 [100] valid_0's ndcg@10: 0.542401 I hope I could sufficiently illustrate the process with this simple example. Let me know if you have any questions left.
9
14
62,658,215
2020-6-30
https://stackoverflow.com/questions/62658215/convergencewarning-lbfgs-failed-to-converge-status-1-stop-total-no-of-iter
I have a dataset consisting of both numeric and categorical data and I want to predict adverse outcomes for patients based on their medical characteristics. I defined a prediction pipeline for my dataset like so: X = dataset.drop(columns=['target']) y = dataset['target'] # define categorical and numeric transformers numeric_transformer = Pipeline(steps=[ ('knnImputer', KNNImputer(n_neighbors=2, weights="uniform")), ('scaler', StandardScaler())]) categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) # dispatch object columns to the categorical_transformer and remaining columns to numerical_transformer preprocessor = ColumnTransformer(transformers=[ ('num', numeric_transformer, selector(dtype_exclude="object")), ('cat', categorical_transformer, selector(dtype_include="object")) ]) # Append classifier to preprocessing pipeline. # Now we have a full prediction pipeline. clf = Pipeline(steps=[('preprocessor', preprocessor), ('classifier', LogisticRegression())]) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) clf.fit(X_train, y_train) print("model score: %.3f" % clf.score(X_test, y_test)) However, when running this code, I get the following warning message: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) model score: 0.988 Can someone explain to me what this warning means? I am new to machine learning so am a little lost as to what I can do to improve the prediction model. As you can see from the numeric_transformer, I scaled the data through standardisation. I am also confused as to how the model score is quite high and whether this is a good or bad thing.
The warning mainly means what it says: suggestions to try to make the solver (the algorithm) converge. lbfgs stands for "Limited-memory Broyden–Fletcher–Goldfarb–Shanno Algorithm". It is one of the solver algorithms provided by the Scikit-Learn library. The term limited-memory simply means it stores only a few vectors that represent the gradient approximation implicitly. It has better convergence on relatively small datasets. But what is algorithm convergence? In simple words: if the solving error is ranging within a very small range (i.e., it is almost not changing), then that means the algorithm reached a solution (not necessarily the best solution, as it might be stuck at a so-called "local optimum"). On the other hand, if the error is varying noticeably (even if the error is relatively small [like in your case, where the score was good], but the differences between the errors per iteration are greater than some tolerance), then we say the algorithm did not converge. Now, you need to know that the Scikit-Learn API sometimes provides the user the option to specify the maximum number of iterations the algorithm should take while it's searching for the solution in an iterative manner: LogisticRegression(... solver='lbfgs', max_iter=100 ...) As you can see, the default solver in LogisticRegression is 'lbfgs' and the maximum number of iterations is 100 by default. Finally, please note that increasing the maximum number of iterations does not necessarily guarantee convergence, but it certainly helps! Update: Based on your comment below, some tips to try (out of many) that might help the algorithm to converge are: Increase the number of iterations: As in this answer; Try a different optimizer: Look here; Scale your data: Look here; Add engineered features: Look here; Data pre-processing: Look here - use case and here; Add more data: Look here.
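For the first tip, a minimal sketch of what that change looks like in the pipeline from the question (preprocessor is assumed to be the ColumnTransformer defined there):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

clf = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', LogisticRegression(max_iter=1000))])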
128
192
62,658,237
2020-6-30
https://stackoverflow.com/questions/62658237/it-seems-that-the-version-of-the-libffi-library-seen-at-runtime-is-different-fro
This traceback messes up my whole program and I still can't fix it. I have tried every method I could find and nothing helped! Here's the problem: ffi_prep_closure(): bad user_data (it seems that the version of the libffi library seen at runtime is different from the 'ffi.h' file seen at compile-time)
It is a cffi Python package issue. Try downloading the source package (tar.gz) from https://pypi.org/project/cffi/#files and installing it manually using: python setup.py install
17
1
62,629,644
2020-6-29
https://stackoverflow.com/questions/62629644/what-the-difference-between-att-mask-and-key-padding-mask-in-multiheadattnetion
What is the difference between attn_mask and key_padding_mask in MultiheadAttention of PyTorch? key_padding_mask – if provided, specified padding elements in the key will be ignored by the attention. When given a binary mask and a value is True, the corresponding value on the attention layer will be ignored. When given a byte mask and a value is non-zero, the corresponding value on the attention layer will be ignored attn_mask – 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all the batches while a 3D mask allows to specify a different mask for the entries of each batch. Thanks in advance.
The key_padding_mask is used to mask out positions that are padding, i.e., after the end of the input sequence. This is always specific to the input batch and depends on how long the sequences in the batch are compared to the longest one. It is a 2D tensor of shape batch size × input length. On the other hand, attn_mask says which key-value pairs are valid. In a Transformer decoder, a triangular mask is used to simulate inference time and prevent attending to the "future" positions. This is what attn_mask is usually used for. If it is a 2D tensor, the shape is input length × input length. You can also have a mask that is specific to every item in a batch. In that case, you can use a 3D tensor of shape (batch size × num heads) × input length × input length. (So, in theory, you can simulate key_padding_mask with a 3D attn_mask.)
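To make the shapes concrete, a small sketch (batch of 2 sequences of length 5, 16-dimensional embeddings, 4 heads; boolean masks where True means "ignore", which recent PyTorch versions accept):

import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=4)
x = torch.randn(5, 2, 16)   # (seq_len, batch, embed_dim)

# key_padding_mask: (batch, seq_len) -- True marks padding positions.
key_padding_mask = torch.tensor([[False, False, False, True, True],
                                 [False, False, False, False, False]])

# attn_mask: (seq_len, seq_len) -- here a causal/triangular mask that
# blocks attention to future positions.
attn_mask = torch.triu(torch.ones(5, 5, dtype=torch.bool), diagonal=1)

out, weights = mha(x, x, x, key_padding_mask=key_padding_mask, attn_mask=attn_mask)
print(out.shape)   # torch.Size([5, 2, 16])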
22
30
62,599,950
2020-6-26
https://stackoverflow.com/questions/62599950/is-storing-data-in-thread-local-storage-in-a-django-application-safe-in-cases
I have seen in many places that using thread-local storage to store any data in a Django application is not good practice. But this is the only way I could store my request object. I need to store it because my application has a complex structure, and I can't keep passing the request object to each function call or class initialization. I need the cookies and headers from my request object to be passed to some API calls I'm making at different places in the application. I'm using this for reference: https://blndxp.wordpress.com/2016/03/04/django-get-current-user-anywhere-in-your-code-using-a-middleware/ So I'm using a middleware, as mentioned in the reference. This is how the request is stored: from threading import local _thread_locals = local() _thread_locals.request = request And this is how the data is fetched: getattr(_thread_locals, "request", None) So is the data stored in the thread local to that particular request? Or if another request takes place at the same time, do both of them use the same data? (Which is certainly not what I want.) Or is there any new way of dealing with this old problem (storing the request object globally)? Note: I'm also using async in places in my Django application (if that matters).
Yes, using thread-local storage in Django is safe. Django uses one thread to handle each request. Django also uses thread-local data itself, for instance for storing the currently activated locale. While appservers such as Gunicorn and uwsgi can be configured to utilize multiple threads, each request will still be handled by a single thread. However, there have been conflicting opinions on whether using thread-locals is an elegant and well-designed solution. The reasons against using thread-locals boil down to the same reasons why global variables are considered bad practice. This answer discusses a number of them. Still, storing the request object in thread-local data has become a widely used pattern in the Django community. There is even an app Django-CRUM that contains a CurrentRequestUserMiddleware class and the functions get_current_user() and get_current_request(). Note that as of version 3.0, Django has started to implement asynchronous support. I'm not sure what its implications are for apps like Django-CRUM. For the foreseeable future, however, thread-locals can safely be used with Django.
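For completeness, a minimal sketch of the middleware pattern being discussed (illustrative names, similar in spirit to what Django-CRUM does; clearing the attribute at the end avoids leaking a request to the next job handled by the same thread):

import threading

_thread_locals = threading.local()

def get_current_request():
    return getattr(_thread_locals, "request", None)

class CurrentRequestMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        _thread_locals.request = request
        try:
            return self.get_response(request)
        finally:
            del _thread_locals.request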
7
18
62,580,240
2020-6-25
https://stackoverflow.com/questions/62580240/django-cannot-import-name-config-from-decouple
I'm trying to run this project locally, but when I run manage.py makemigrations I keep getting the following error: ImportError: cannot import name 'config' from 'decouple' Here are my steps: Clone the repository from GitHub Create a virtual environment Install the dependencies I did some research but found nothing about what could be generating that error. Can anyone help me out with this? Thanks in advance! I'm running Django 3.
You might have decouple installed in addition to python-decouple (two different packages). If that is the case, simply uninstall decouple: pip uninstall decouple And ensure you have python-decouple installed: pip install python-decouple
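Once the right package is installed, the import in question works as usual. A small usage sketch of python-decouple (the setting names are just examples):

from decouple import config

SECRET_KEY = config('SECRET_KEY')                   # read from a .env file or the environment
DEBUG = config('DEBUG', default=False, cast=bool)   # with a default and a type cast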
46
181
62,636,860
2020-6-29
https://stackoverflow.com/questions/62636860/why-do-nan-values-make-min-and-max-sensitive-to-order
> import numpy as np > min(50, np.NaN) 50 > min(np.NaN, 50) nan (Same behaviour occurs with max) I know that I can avoid this behaviour by using numpy.nanmin. But what causes the change when the order is reversed? Is min sensitive to input order?
Is min sensitive to input order? Yes. https://docs.python.org/3/library/functions.html#min "If multiple items are minimal, the function returns the first one encountered." The documentation does not specify exactly how "minimal" is defined in the face of items that don't have a consistent order, but it's likely that min is based on looping over the elements and using the < operator to determine if the new element is smaller than the smallest item found so-far. To confirm this hypothesis we can read the source code (search for builtin_min and min_max in https://github.com/python/cpython/blob/c96d00e88ead8f99bb6aa1357928ac4545d9287c/Python/bltinmodule.c ), it's slightly confusing because the implementations for min and max are combined and the variable names seem to be based on it being a max function but it's not too hard to follow. And it does indeed loop through the elements in order and performs the comparison with a call to PyObject_RichCompareBool with an "opid" of Py_LT which is the C API equivalent of the python < operator. Comparisons between NaN and numbers return false, so in a list containing numbers and NaNs if there is a NaN in the first position it will be considered the minimum as no number will be "less than" it. On the other hand, if the NaN is not in the first position then it will be effectively skipped over as it is not "less than" any number.
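A quick demonstration of the behaviour described above (illustrative, using NumPy's NaN):

import numpy as np

# Every comparison involving NaN is False, so the first argument always "wins".
print(np.nan < 50, 50 < np.nan)          # False False
print(min(50, np.nan), min(np.nan, 50))  # 50 nan
print(max(50, np.nan), max(np.nan, 50))  # 50 nan

# The NaN-aware alternatives ignore NaNs regardless of order.
print(np.nanmin([np.nan, 50]), np.nanmax([50, np.nan]))  # 50.0 50.0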
19
15
62,584,959
2020-6-25
https://stackoverflow.com/questions/62584959/python-mariadb-pip-install-failed-missing-mariadb-config
I am using Linux Ubuntu 18.04 and python 3. I am trying to build a connection between a maria-db and my python scripts. Therefore I have to install the mariadb package. I have already installed: sudo apt install mariadb-server But when i try: pip install mariadb I get following error: Collecting mariadb Using cached mariadb-1.0.0.tar.gz (78 kB) ERROR: Command errored out with exit status 1: command: /home/niklas/Desktop/Stuff/venv/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pycharm-packaging/mariadb/setup.py'"'"'; __file__='"'"'/tmp/pycharm-packaging/mariadb/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-wfnscxnz cwd: /tmp/pycharm-packaging/mariadb/ Complete output (12 lines): /bin/sh: 1: mariadb_config: not found Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pycharm-packaging/mariadb/setup.py", line 26, in <module> cfg = get_config(options) File "/tmp/pycharm-packaging/mariadb/mariadb_posix.py", line 49, in get_config cc_version = mariadb_config(config_prg, "cc_version") File "/tmp/pycharm-packaging/mariadb/mariadb_posix.py", line 27, in mariadb_config "mariadb_config not found.\nPlease make sure, that MariaDB Connector/C is installed on your system, edit the configuration file 'site.cfg' and set the 'mariadb_config'\noption, which should point to the mariadb_config utility.") OSError: mariadb_config not found. Please make sure, that MariaDB Connector/C is installed on your system, edit the configuration file 'site.cfg' and set the 'mariadb_config' option, which should point to the mariadb_config utility. ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. Can anybody help me ? Edit: I have now been able to connect to the server but not with te mariadb package. (https://linuxhint.com/connect_mariadb_pymysql/)
To install the mariadb Python module, you have to install a recent version of MariaDB Connector/C; the minimum required version is 3.1.5, and as far as I know Ubuntu 18.04 ships 3.0.3. A current version of Connector/C for bionic is available on the MariaDB Connector/C download page. If you want to install it in a non-standard directory, make sure that PATH and LD_LIBRARY_PATH point to its bin and lib directories (see the sketch below). I also recommend using a recent version of MariaDB Server; in particular, the very fast executemany() method will be much slower on MariaDB Server < 10.2.
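A rough sketch of what that might look like; the install prefix /opt/mariadb-connector-c is only an example path, and the exact lib subdirectory depends on how the Connector/C package is laid out:

# assuming Connector/C >= 3.1.5 was unpacked under /opt/mariadb-connector-c
export PATH=/opt/mariadb-connector-c/bin:$PATH
# adjust to wherever libmariadb.so ended up (lib/ or lib/mariadb/)
export LD_LIBRARY_PATH=/opt/mariadb-connector-c/lib:$LD_LIBRARY_PATH
mariadb_config --version   # should now report 3.1.5 or newer
pip install mariadb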
8
7
62,660,347
2020-6-30
https://stackoverflow.com/questions/62660347/airflow-send-email-with-aws-ses
Trying to send an email from apache airflow using AWS Simple Email Service (SES), and it's returning errors that are not helping me solve the problem. I believe it's a configuration issue within SES, but I'm not sure what to change. General info: New SES instance, verified email. Airflow 1.10.10 running on Ubuntu 18.04 (local laptop). Same situation from EC2 instance on a separate AWS account. Running 1 DAG with a python operator that works fine. Sending email works with gmail smtp settings and an app password. Abbreviated DAG code: ... from airflow.operators.email_operator import EmailOperator ... email_status = EmailOperator( task_id="sending_status_email", to="[email protected]", subject="Test from SES", html_content="Trying to send an email from airflow through SES.", dag=dag ) ... airflow.cfg SMTP Settings: smtp_host = email-smtp.us-east-1.amazonaws.com smtp_starttls = True smtp_ssl = False smtp_user = AWSUSERKEY smtp_password = PASSWORDFROMAWSSMTP smtp_port = 587 smtp_mail_from = [email protected] Errors Received while trying various changes to starttls, ssl, and port settings. ERROR - (554, b'Transaction failed: Unsupported encoding us_ascii.') ERROR - STARTTLS extension not supported by server. ERROR - (SSL: WRONG_VERSION_NUMBER) wrong version number (_ssl.c:852)
Not sure about the others but we just ran into this error today: ERROR - (554, b'Transaction failed: Unsupported encoding us_ascii.') This is the default value in the class's __init__ method, which isn't valid: https://github.com/apache/airflow/blob/1.10.10/airflow/operators/email_operator.py#L63 You can fix it by passing in a valid value, like "utf-8": email_status = EmailOperator( mime_charset='utf-8', task_id="sending_status_email", to="[email protected]", subject="Test from SES", html_content="Trying to send an email from airflow through SES.", dag=dag )
10
11
62,576,326
2020-6-25
https://stackoverflow.com/questions/62576326/python3-process-and-display-webcam-stream-at-the-webcams-fps
How can I read a camera and display the images at the cameras frame rate? I want to continuously read images from my webcam, (do some fast preprocessing) and then display the image in a window. This should run at the frame rate, that my webcam provides (29 fps). It seems like the OpenCV GUI and Tkinter GUI is too slow, to display images at such a frame rate. These are clearly the bottlenecks in my experiments. Even without the preprocessing, the images are not displayed fast enough. I am on a MacBook Pro 2018. Here is what I tried. The webcam is always read with OpenCV: Everything happens in the main thread, the images are displayed with OpenCV: 12 fps Read camera and do preprocessing in separate threads, show image with OpenCV in the main thread: 20 fps multithreaded like above, but do not show the image: 29 fps multithreaded like above, but show the images with Tkinter: don't know the exact fps but it feels like <10 fps. Here is the code: Single loop, OpenCV GUI: import cv2 import time def main(): cap = cv2.VideoCapture(0) window_name = "FPS Single Loop" cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) start_time = time.time() frames = 0 seconds_to_measure = 10 while start_time + seconds_to_measure > time.time(): success, img = cap.read() img = img[:, ::-1] # mirror time.sleep(0.01) # simulate some processing time cv2.imshow(window_name, img) cv2.waitKey(1) frames = frames + 1 cv2.destroyAllWindows() print( f"Captured {frames} in {seconds_to_measure} seconds. FPS: {frames/seconds_to_measure}" ) if __name__ == "__main__": main() Captured 121 in 10 seconds. FPS: 12.1 Multithreaded, opencv gui: import logging import time from queue import Full, Queue from threading import Thread, Event import cv2 logger = logging.getLogger("VideoStream") def setup_webcam_stream(src=0): cap = cv2.VideoCapture(src) width, height = ( cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT), ) logger.info(f"Camera dimensions: {width, height}") logger.info(f"Camera FPS: {cap.get(cv2.CAP_PROP_FPS)}") grabbed, frame = cap.read() # Read once to init if not grabbed: raise IOError("Cannot read video stream.") return cap def video_stream_loop(video_stream: cv2.VideoCapture, queue: Queue, stop_event: Event): while not stop_event.is_set(): try: success, img = video_stream.read() # We need a timeout here to not get stuck when no images are retrieved from the queue queue.put(img, timeout=1) except Full: pass # try again with a newer frame def processing_loop(input_queue: Queue, output_queue: Queue, stop_event: Event): while not stop_event.is_set(): try: img = input_queue.get() img = img[:, ::-1] # mirror time.sleep(0.01) # simulate some processing time # We need a timeout here to not get stuck when no images are retrieved from the queue output_queue.put(img, timeout=1) except Full: pass # try again with a newer frame def main(): stream = setup_webcam_stream(0) webcam_queue = Queue() processed_queue = Queue() stop_event = Event() window_name = "FPS Multi Threading" cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) start_time = time.time() frames = 0 seconds_to_measure = 10 try: Thread( target=video_stream_loop, args=[stream, webcam_queue, stop_event] ).start() Thread( target=processing_loop, args=[webcam_queue, processed_queue, stop_event] ).start() while start_time + seconds_to_measure > time.time(): img = processed_queue.get() cv2.imshow(window_name, img) cv2.waitKey(1) frames = frames + 1 finally: stop_event.set() cv2.destroyAllWindows() print( f"Captured {frames} frames in {seconds_to_measure} seconds. 
FPS: {frames/seconds_to_measure}" ) print(f"Webcam queue: {webcam_queue.qsize()}") print(f"Processed queue: {processed_queue.qsize()}") if __name__ == "__main__": logging.basicConfig(level=logging.DEBUG) main() INFO:VideoStream:Camera dimensions: (1280.0, 720.0) INFO:VideoStream:Camera FPS: 29.000049 Captured 209 frames in 10 seconds. FPS: 20.9 Webcam queue: 0 Processed queue: 82 Here you can see that there are images remaining in the second queue where the images get fetched for displaying them. When I uncomment these two lines: cv2.imshow(window_name, img) cv2.waitKey(1) then the output is: INFO:VideoStream:Camera dimensions: (1280.0, 720.0) INFO:VideoStream:Camera FPS: 29.000049 Captured 291 frames in 10 seconds. FPS: 29.1 Webcam queue: 0 Processed queue: 0 So it is able to process all frames at the webcams speed without a GUI displaying them. Multithreaded, Tkinter gui: import logging import time import tkinter from queue import Full, Queue, Empty from threading import Thread, Event import PIL from PIL import ImageTk import cv2 logger = logging.getLogger("VideoStream") def setup_webcam_stream(src=0): cap = cv2.VideoCapture(src) width, height = cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT) logger.info(f"Camera dimensions: {width, height}") logger.info(f"Camera FPS: {cap.get(cv2.CAP_PROP_FPS)}") grabbed, frame = cap.read() # Read once to init if not grabbed: raise IOError("Cannot read video stream.") return cap, width, height def video_stream_loop(video_stream: cv2.VideoCapture, queue: Queue, stop_event: Event): while not stop_event.is_set(): try: success, img = video_stream.read() # We need a timeout here to not get stuck when no images are retrieved from the queue queue.put(img, timeout=1) except Full: pass # try again with a newer frame def processing_loop(input_queue: Queue, output_queue: Queue, stop_event: Event): while not stop_event.is_set(): try: img = input_queue.get() img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = img[:, ::-1] # mirror time.sleep(0.01) # simulate some processing time # We need a timeout here to not get stuck when no images are retrieved from the queue output_queue.put(img, timeout=1) except Full: pass # try again with a newer frame class App: def __init__(self, window, window_title, image_queue: Queue, image_dimensions: tuple): self.window = window self.window.title(window_title) self.image_queue = image_queue # Create a canvas that can fit the above video source size self.canvas = tkinter.Canvas(window, width=image_dimensions[0], height=image_dimensions[1]) self.canvas.pack() # After it is called once, the update method will be automatically called every delay milliseconds self.delay = 1 self.update() self.window.mainloop() def update(self): try: frame = self.image_queue.get(timeout=0.1) # Timeout to not block this method forever self.photo = ImageTk.PhotoImage(image=PIL.Image.fromarray(frame)) self.canvas.create_image(0, 0, image=self.photo, anchor=tkinter.NW) self.window.after(self.delay, self.update) except Empty: pass # try again next time def main(): stream, width, height = setup_webcam_stream(0) webcam_queue = Queue() processed_queue = Queue() stop_event = Event() window_name = "FPS Multi Threading" try: Thread(target=video_stream_loop, args=[stream, webcam_queue, stop_event]).start() Thread(target=processing_loop, args=[webcam_queue, processed_queue, stop_event]).start() App(tkinter.Tk(), window_name, processed_queue, (width, height)) finally: stop_event.set() print(f"Webcam queue: {webcam_queue.qsize()}") print(f"Processed 
queue: {processed_queue.qsize()}") if __name__ == "__main__": logging.basicConfig(level=logging.DEBUG) main() INFO:VideoStream:Camera dimensions: (1280.0, 720.0) INFO:VideoStream:Camera FPS: 29.000049 Webcam queue: 0 Processed queue: 968
In this answer I share some considerations on camera FPS vs display FPS and some code examples that demonstrate: The basics of FPS calculation; How to increase the display FPS from 29 fps to 300+ fps; How to use threading and a queue efficiently to capture at the closest maximum fps supported by the camera; For anyone going through your issue, here are a couple of important questions that need to be answered first: What's the size of the images being captured? How many FPS does your webcam support? (camera FPS) How fast can you grab a frame from the webcam and display it in a window? (display FPS) Camera FPS VS Display FPS The camera fps refers to what the hardware of the camera is capable of. For instance, ffmpeg reports that at 640x480 my camera can return 15 fps minimum and 30 at maximum, among other formats: ffmpeg -list_devices true -f dshow -i dummy ffmpeg -f dshow -list_options true -i video="HP HD Camera" [dshow @ 00000220181cc600] vcodec=mjpeg min s=640x480 fps=15 max s=640x480 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=320x180 fps=15 max s=320x180 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=320x240 fps=15 max s=320x240 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=424x240 fps=15 max s=424x240 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=640x360 fps=15 max s=640x360 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=848x480 fps=15 max s=848x480 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=960x540 fps=15 max s=960x540 fps=30 [dshow @ 00000220181cc600] vcodec=mjpeg min s=1280x720 fps=15 max s=1280x720 fps=30 The important realization here is that despite being able to capture 30 fps internally, there is NO guarantee that an application will be able to pull those 30 frames from the camera in a second. The reasons behind this are clarified in the following sections. The display fps refers to how many images can be drawn in a window per second. This number is not limited by the camera at all and it's usually much higher than the camera fps. As you'll see later, it's possible to create an application that pulls 29 images per second from the camera and draws them more than 300 times a second. That means that the same image from the camera is drawn multiple times in a window before the next frame is pulled from the camera. How many FPS can my webcam capture? 
The following application simply demonstrates how to print the default settings used by the camera (size, fps) and how to retrieve frames from it, display it in a window and compute the amount of FPS being rendered: import numpy as np import cv2 import datetime def main(): # create display window cv2.namedWindow("webcam", cv2.WINDOW_NORMAL) # initialize webcam capture object cap = cv2.VideoCapture(0) # retrieve properties of the capture object cap_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) cap_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) cap_fps = cap.get(cv2.CAP_PROP_FPS) fps_sleep = int(1000 / cap_fps) print('* Capture width:', cap_width) print('* Capture height:', cap_height) print('* Capture FPS:', cap_fps, 'ideal wait time between frames:', fps_sleep, 'ms') # initialize time and frame count variables last_time = datetime.datetime.now() frames = 0 # main loop: retrieves and displays a frame from the camera while (True): # blocks until the entire frame is read success, img = cap.read() frames += 1 # compute fps: current_time - last_time delta_time = datetime.datetime.now() - last_time elapsed_time = delta_time.total_seconds() cur_fps = np.around(frames / elapsed_time, 1) # draw FPS text and display image cv2.putText(img, 'FPS: ' + str(cur_fps), (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA) cv2.imshow("webcam", img) # wait 1ms for ESC to be pressed key = cv2.waitKey(1) if (key == 27): break # release resources cv2.destroyAllWindows() cap.release() if __name__ == "__main__": main() Output: * Capture width: 640.0 * Capture height: 480.0 * Capture FPS: 30.0 wait time between frames: 33 ms As mentioned earlier, my camera is able to capture 640x480 images at 30 fps by default and even though the loop above is pretty simple, my display FPS is lower: I'm only able to retrieve frames and display them at 28 or 29 fps and that's without performing any custom image processing in between. What's going on? The reality is that even though the loop looks pretty simple, there are things happening under the hood that costs just enough processing time to make it difficult for one iteration of the loop to happen in less than 33ms: cap.read() executes I/O calls to the camera driver in order to pull the new data. This function blocks execution of your application until the data has been transferred completely; a numpy array needs to be setup with the new pixels; other calls are required to display a window and draw the pixels in it, namely cv2.imshow(), which is usually slow operation; there's also a 1ms delay thanks to cv2.waitKey(1) which is required to keep the window opened; All of these operations, as small as they are, make it incredibly difficult for an application to call cap.read(), get a new frame and display it at precisely 30 fps. There's a number of things you can try to speed up the application to be able to display more frames than the camera driver allows and this post covers them well. Just remember this: you won't be able to capture more frames from the camera than what the driver says it supports. You will, however, be able to display more frames. How to increase the display FPS to 300+? A threading example. One of the approaches used to increase the amount of images being displayed per second relies on the threading package to create a separate thread to continuously pull frames from the camera. 
This happens because the main loop of the application is not blocked on cap.read() anymore waiting for it to return a new frame, thus increasing the number of frames that can be displayed (or draw) per second. Note: this approach renders the same image multiple times on a window until the next image from the camera is retrieved. Keep in mind that it might even draw an image while it's contents are still being updated with new data from the camera. The following application is just an academic example, not something I recommend as production code, to increase the amount of frames per second that are display in a window: import numpy as np import cv2 import datetime from threading import Thread # global variables stop_thread = False # controls thread execution img = None # stores the image retrieved by the camera def start_capture_thread(cap): global img, stop_thread # continuously read fames from the camera while True: _, img = cap.read() if (stop_thread): break def main(): global img, stop_thread # create display window cv2.namedWindow("webcam", cv2.WINDOW_NORMAL) # initialize webcam capture object cap = cv2.VideoCapture(0) # retrieve properties of the capture object cap_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) cap_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) cap_fps = cap.get(cv2.CAP_PROP_FPS) fps_sleep = int(1000 / cap_fps) print('* Capture width:', cap_width) print('* Capture height:', cap_height) print('* Capture FPS:', cap_fps, 'wait time between frames:', fps_sleep) # start the capture thread: reads frames from the camera (non-stop) and stores the result in img t = Thread(target=start_capture_thread, args=(cap,), daemon=True) # a deamon thread is killed when the application exits t.start() # initialize time and frame count variables last_time = datetime.datetime.now() frames = 0 cur_fps = 0 while (True): # blocks until the entire frame is read frames += 1 # measure runtime: current_time - last_time delta_time = datetime.datetime.now() - last_time elapsed_time = delta_time.total_seconds() # compute fps but avoid division by zero if (elapsed_time != 0): cur_fps = np.around(frames / elapsed_time, 1) # TODO: make a copy of the image and process it here if needed # draw FPS text and display image if (img is not None): cv2.putText(img, 'FPS: ' + str(cur_fps), (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA) cv2.imshow("webcam", img) # wait 1ms for ESC to be pressed key = cv2.waitKey(1) if (key == 27): stop_thread = True break # release resources cv2.destroyAllWindows() cap.release() if __name__ == "__main__": main() How to capture at the closest maximum fps supported by the camera? A threading and queue example. The problem of using a queue is that, performance-wise, what you get depends on how many frames per second the application can pull from the camera. If the camera supports 30 fps then that's what your application might get as long as the image processing operations being done are fast. Otherwise, there will be a drop in the number of frames being displayed (per second) and the size of the queue will slowly increase until all your RAM memory runs out. To avoid that problem, make sure to set queueSize with a number that prevents the queue from growing beyond what your OS can handle. 
The following code is a naive implementation that creates a dedicated thread to grab frames from the camera and puts them in a queue that is later used by the main loop of the application: import numpy as np import cv2 import datetime import queue from threading import Thread # global variables stop_thread = False # controls thread execution def start_capture_thread(cap, queue): global stop_thread # continuously read fames from the camera while True: _, img = cap.read() queue.put(img) if (stop_thread): break def main(): global stop_thread # create display window cv2.namedWindow("webcam", cv2.WINDOW_NORMAL) # initialize webcam capture object cap = cv2.VideoCapture(0) #cap = cv2.VideoCapture(0 + cv2.CAP_DSHOW) # retrieve properties of the capture object cap_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) cap_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) cap_fps = cap.get(cv2.CAP_PROP_FPS) print('* Capture width:', cap_width) print('* Capture height:', cap_height) print('* Capture FPS:', cap_fps) # create a queue frames_queue = queue.Queue(maxsize=0) # start the capture thread: reads frames from the camera (non-stop) and stores the result in img t = Thread(target=start_capture_thread, args=(cap, frames_queue,), daemon=True) # a deamon thread is killed when the application exits t.start() # initialize time and frame count variables last_time = datetime.datetime.now() frames = 0 cur_fps = 0 while (True): if (frames_queue.empty()): continue # blocks until the entire frame is read frames += 1 # measure runtime: current_time - last_time delta_time = datetime.datetime.now() - last_time elapsed_time = delta_time.total_seconds() # compute fps but avoid division by zero if (elapsed_time != 0): cur_fps = np.around(frames / elapsed_time, 1) # retrieve an image from the queue img = frames_queue.get() # TODO: process the image here if needed # draw FPS text and display image if (img is not None): cv2.putText(img, 'FPS: ' + str(cur_fps), (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA) cv2.imshow("webcam", img) # wait 1ms for ESC to be pressed key = cv2.waitKey(1) if (key == 27): stop_thread = True break # release resources cv2.destroyAllWindows() cap.release() if __name__ == "__main__": main() Earlier I said might and here is what I meant: even when I use a dedicated thread to pull frames from the camera and a queue to store them, the displayed fps is still capped to 29.3 when it should have been 30 fps. In this case, I assume that the camera driver or the backend implementation used by VideoCapture can be blamed for the issue. On Windows, the backend used by default is MSMF. It is possible to force VideoCapture to use a different backend by passing the right arguments on the constructor: cap = cv2.VideoCapture(0 + cv2.CAP_DSHOW) My experience with DShow was terrible: the returned CAP_PROP_FPS from the camera was 0 and the displayed FPS got stuck around 14. This is just an example to illustrate how the backend capture driver can interfere negatively with the camera capture. But that's something you can explore. Maybe using a different backend on your OS can provide better results. Here's a nice high-level overview of the Video I/O module from OpenCV that lists the supported backends: Update In one of the comments of this answer, the OP upgraded OpenCV 4.1 to 4.3 on Mac OS and observed a noticeable improvement on FPS rendering. It looks like it was a performance issue related to cv2.imshow().
10
15
62,610,782
2020-6-27
https://stackoverflow.com/questions/62610782/fishers-linear-discriminant-in-python
I have the fisher's linear discriminant that i need to use it to reduce my examples A and B that are high dimensional matrices to simply 2D, that is exactly like LDA, each example has classes A and B, therefore if i was to have a third example they also have classes A and B, fourth, fifth and n examples would always have classes A and B, therefore i would like to separate them in a simple use of fisher's linear discriminant. Im pretty much new to machine learning, so i dont know how to separate my classes, i've been following the formula by eye and coding on the go. From what i was reading, i need to apply a linear transformation to my data so i can find a good threshold for it, but first i'd need to find the maximization function. For such task, i managed to find Sw and Sb, but i don't know how to go from there... Where i also need to find the maximization function. That maximization function gives me an eigen value solution: What i have for each classes are matrices 5x2 of 2 examples. For instance: Example 1 Class_A = [ 201, 103, 40, 43, 23, 50, 12, 123, 99, 78 ] Class_B = [ 201, 129, 114, 195, 180, 90, 69, 62, 76, 90 ] Example 2 Class_A = [ 68, 98, 201, 203, 78, 212, 49, 5, 204, 78 ] Class_B = [ 52, 19, 220, 219, 159, 195, 99, 23, 46, 50 ] I tried finding Sw for the example above like this: Example_1_Class_A = np.dot(Example_1_Class_A, np.transpose(Example_1_Class_A)) Example_1_Class_B = np.dot(Example_1_Class_B, np.transpose(Example_1_Class_B)) Example_2_Class_A = np.dot(Example_2_Class_A, np.transpose(Example_2_Class_A)) Example_2_Class_B = np.dot(Example_2_Class_B, np.transpose(Example_2_Class_B)) Sw = sum([Example_1_Class_A, Example_1_Class_B, Example_2_Class_A, Example_2_Class_B], axis=0) As for Sb, i tried like this: Example_1_Class_A_mean = Example_1_Class_A.mean(axis=0) Example_1_Class_B_mean = Example_1_Class_B.mean(axis=0) Example_2_Class_A_mean = Example_2_Class_A.mean(axis=0) Example_2_Class_B_mean = Example_2_Class_B.mean(axis=0) Example_1_Class_A_Sb = np.dot(Example_1_Class_A_mean, np.transpose(Example_1_Class_A_mean)) Example_1_Class_B_Sb = np.dot(Example_1_Class_B_mean, np.transpose(Example_1_Class_B_mean)) Example_2_Class_A_Sb = np.dot(Example_2_Class_A_mean, np.transpose(Example_2_Class_A_mean)) Example_2_Class_B_Sb = np.dot(Example_2_Class_B_mean, np.transpose(Example_2_Class_B_mean)) Sb = sum([Example_1_Class_A_Sb, Example_1_Class_B_Sb, Example_2_Class_A_Sb, Example_2_Class_B_Sb], axis=0) The problem is, i have no idea what else to do with my Sw and Sb, i am completely lost. Basically, what i need to do is get from here to this: How for given Example A and Example B, do i separate a cluster only for classes As and only for classes b
Before answering your question, I will first touch on the basic difference between PCA and (F)LDA. In PCA you don't know anything about the underlying classes, but you assume that the information about class separability lies in the variance of the data. So you rotate your original axes (sometimes this is called projecting all the data onto new ones) in such a way that your first new axis points in the direction of most variance, the second one is perpendicular to the first and points in the direction of most residual variance, and so on. This way a PCA transformation results in a (sub)space of the same dimensionality as the original one. Then you can take only the first 2 dimensions, rejecting the rest, hence getting a dimensionality reduction from k dimensions to only 2. LDA works a bit differently. In this case you know in advance how many classes there are in your data, and you can find their mean and covariance matrices. What the Fisher criterion does is find a direction in which the distance between the class means is maximized, while at the same time the total variability is minimized (the total variability is the mean of the within-class covariance matrices). And for every two classes there is only one such line. This is why when your data has C classes, LDA can provide you with at most C-1 dimensions, regardless of the original data dimensionality. In your case this means that as you have only 2 classes A and B, you will get a one-dimensional projection, i.e. a line. And this is exactly what you have in your picture: the original 2D data is projected onto a line. The direction of the line is the solution of the eigenproblem. Let's generate data that is similar to your picture: a = np.random.multivariate_normal((1.5, 3), [[0.5, 0], [0, .05]], 30) b = np.random.multivariate_normal((4, 1.5), [[0.5, 0], [0, .05]], 30) plt.plot(a[:,0], a[:,1], 'b.', b[:,0], b[:,1], 'r.') mu_a, mu_b = a.mean(axis=0).reshape(-1,1), b.mean(axis=0).reshape(-1,1) Sw = np.cov(a.T) + np.cov(b.T) inv_S = np.linalg.inv(Sw) res = inv_S.dot(mu_a-mu_b) # the trick #### # more general solution # # Sb = (mu_a-mu_b)*((mu_a-mu_b).T) # eig_vals, eig_vecs = np.linalg.eig(inv_S.dot(Sb)) # res = sorted(zip(eig_vals, eig_vecs), reverse=True)[0][1] # take only eigenvec corresponding to largest (and the only one) eigenvalue # res = res / np.linalg.norm(res) plt.plot([-res[0], res[0]], [-res[1], res[1]]) # this is the solution plt.plot(mu_a[0], mu_a[1], 'cx') plt.plot(mu_b[0], mu_b[1], 'yx') plt.gca().axis('square') # let's project data point on it r = res.reshape(2,) n2 = np.linalg.norm(r)**2 for pt in a: prj = r * r.dot(pt) / n2 plt.plot([prj[0], pt[0]], [prj[1], pt[1]], 'b.:', alpha=0.2) for pt in b: prj = r * r.dot(pt) / n2 plt.plot([prj[0], pt[0]], [prj[1], pt[1]], 'r.:', alpha=0.2) The resulting projection is calculated using a neat trick for the two-class problem. You can read details on it here in section 1.6. Regarding the "examples" you mention in your question: I believe you need to repeat the process for each example, as each is a different set of data points, probably with different distributions. Also, note that the estimated means (mu_a, mu_b) and class covariance matrices will be slightly different from the ones the data was generated with, especially for a small sample size.
13
12
62,658,540
2020-6-30
https://stackoverflow.com/questions/62658540/how-to-combine-a-custom-protocol-with-the-callable-protocol
I have a decorator that takes a function and returns the same function with some added attributes: import functools from typing import * def decorator(func: Callable) -> Callable: func.attr1 = "spam" func.attr2 = "eggs" return func How do I type hint the return value of decorator? I want the type hint to convey two pieces of information: the return value is a Callable the return value has attributes attr1 and attr2 If I write a protocol, class CallableWithAttrs(Protocol): attr1: str attr2: str then I lose Callable. And apparently I can't make the protocol inherit from Callable; class CallableWithAttrs(Callable, Protocol): attr1: str attr2: str mypy says: error: Invalid base class "Callable" On the other hand, if I just use Callable, I lose the information about the added attributes. This is perhaps even more complicated when introducing type variables, i.e. when the decorator must return the same type of callable as the given function func, as pointed out by MisterMiyagi in the comments. import functools from typing import * C = TypeVar('C', bound=Callable) def decorator(func: C) -> C: func.attr1 = "spam" func.attr2 = "eggs" return func Now what do I do? I can't inherit from a type variable: class CallableWithAttrs(C, Protocol): attr1: str attr2: str error: Invalid base class "C"
One can parameterise a Protocol by a Callable: from typing import Callable, TypeVar, Protocol C = TypeVar('C', bound=Callable) # placeholder for any Callable class CallableObj(Protocol[C]): # Protocol is parameterised by Callable C ... attr1: str attr2: str __call__: C # ... which defines the signature of the protocol This creates an intersection of the Protocol itself with an arbitrary Callable. A function that takes any callable C can thus return CallableObj[C], a callable of the same signature with the desired attributes: def decorator(func: C) -> CallableObj[C]: ... MyPy properly recognizes both the signature and attributes: def dummy(arg: str) -> int: ... reveal_type(decorator(dummy)) # CallableObj[def (arg: builtins.str) -> builtins.int]' reveal_type(decorator(dummy)('Hello')) # int reveal_type(decorator(dummy).attr1) # str decorator(dummy)(b'Fail') # error: Argument 1 to "dummy" has incompatible type "bytes"; expected "str" decorator(dummy).attr3 # error: "CallableObj[Callable[[str], int]]" has no attribute "attr3"; maybe "attr2"?
17
11
62,622,704
2020-6-28
https://stackoverflow.com/questions/62622704/attributeerror-module-tensorflow-has-no-attribute-compat-when-loading-tf-co
I can see that this question has been asked before here tensorflow-has-no-attribute-compat but the answer given was to Microsoft Visual C++ 2015-2019 Redistributable (x64) It did not work for the previous member it has not worked for me either. I have visual studio 2019 installed. I downloaded it anyways and ran a repair of (MV C++) just in case. Still getting same error. So that being said I have not found a valid solution for this anywhere on google or stackoverflow. Here are some spec details of what I have installed. tensorflow-gpu 2.1 python 3.7.7 CUDA 10.1 Anaconda 3.7 Looks like gpu started successfully. 2020-06-28 07:19:47.851257: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll python packages for 3.7.7. All packages installed via pip. Only tensorflow installed via conda. Dont think it makes a difference. I used conda so I can specify install version 2.1, pip would automatically install 2.2. Package Version ---------------------- ---------------------------- absl-py 0.9.0 appdirs 1.4.4 astor 0.7.1 astroid 2.4.2 astropy 4.0.1.post1 attrs 19.3.0 backcall 0.2.0 bayesian-optimization 1.2.0 black 19.10b0 bleach 3.1.5 blinker 1.4 brotlipy 0.7.0 cachetools 4.1.0 certifi 2020.6.20 cffi 1.14.0 chardet 3.0.4 click 7.1.2 cloudpickle 1.3.0 colorama 0.4.3 confuse 1.3.0 cryptography 2.9.2 cycler 0.10.0 decorator 4.4.2 defusedxml 0.6.0 entrypoints 0.3 future 0.18.2 gast 0.2.2 gitdb 4.0.5 GitPython 3.1.3 google-auth 1.17.2 google-auth-oauthlib 0.4.1 google-pasta 0.2.0 grpcio 1.27.2 gym 0.17.2 h2o 3.30.0.5 h5py 2.10.0 htmlmin 0.1.12 idna 2.10 ImageHash 4.1.0 importlib-metadata 1.7.0 invoke 1.4.1 ipykernel 5.3.0 ipython 7.16.1 ipython-genutils 0.2.0 ipywidgets 7.5.1 isort 4.3.21 jedi 0.17.1 Jinja2 2.11.2 joblib 0.15.1 json5 0.9.5 jsonschema 3.2.0 jupyter 1.0.0 jupyter-client 6.1.3 jupyter-console 6.1.0 jupyter-core 4.6.3 jupyterlab 2.1.5 jupyterlab-server 1.1.5 kaggle 1.5.6 Keras 2.4.3 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 kiwisolver 1.2.0 lazy-object-proxy 1.4.3 lightgbm 2.3.1 llvmlite 0.33.0 Markdown 3.2.2 MarkupSafe 1.1.1 matplotlib 3.2.2 mccabe 0.6.1 missingno 0.4.2 mistune 0.8.4 mkl-service 2.3.0 nbconvert 5.6.1 nbdime 2.0.0 nbformat 5.0.7 networkx 2.4 notebook 6.0.3 numba 0.50.1 numpy 1.19.0 oauthlib 3.0.1 opt-einsum 0+untagged.56.g2664021.dirty packaging 20.4 pandas 1.0.5 pandas-profiling 2.8.0 pandocfilters 1.4.2 parso 0.7.0 path 13.1.0 path.py 12.4.0 pathspec 0.8.0 patsy 0.5.1 phik 0.10.0 pickleshare 0.7.5 Pillow 7.1.2 pip 20.1.1 plotly 4.8.2 prometheus-client 0.8.0 prompt-toolkit 3.0.5 protobuf 3.12.3 py4j 0.10.9 pyasn1 0.4.8 pyasn1-modules 0.2.7 pycparser 2.20 pyglet 1.5.0 Pygments 2.6.1 PyJWT 1.7.1 pylint 2.5.3 pyOpenSSL 19.1.0 pyparsing 2.4.7 pyreadline 2.1 pyrsistent 0.16.0 PySocks 1.7.1 pyspark 3.0.0 python-dateutil 2.8.1 python-slugify 4.0.0 pytz 2020.1 PyWavelets 1.1.1 pywin32 228 pywinpty 0.5.7 PyYAML 5.3.1 pyzmq 19.0.1 qtconsole 4.7.5 QtPy 1.9.0 regex 2020.6.8 requests 2.24.0 requests-oauthlib 1.2.0 retrying 1.3.3 rsa 4.6 scikit-learn 0.23.1 scipy 1.5.0 seaborn 0.10.1 Send2Trash 1.5.0 setuptools 47.3.1.post20200616 six 1.15.0 smmap 3.0.4 statsmodels 0.11.1 tabulate 0.8.7 tangled-up-in-unicode 0.0.6 tensorboard 2.2.2 tensorboard-plugin-wit 1.6.0.post3 tensorflow 2.1.0 tensorflow-estimator 2.2.0 termcolor 1.1.0 terminado 0.8.3 testpath 0.4.4 text-unidecode 1.3 threadpoolctl 2.1.0 toml 0.10.1 tornado 6.0.4 tqdm 4.46.1 traitlets 4.3.3 typed-ast 1.4.1 urllib3 1.24.3 visions 0.4.4 wcwidth 0.2.5 
webencodings 0.5.1 Werkzeug 0.16.1 wheel 0.34.2 widgetsnbextension 3.5.1 win-inet-pton 1.1.0 wincertstore 0.2 wrapt 1.12.1 zipp 3.1.0 This is the error I get once i try to import tensorflow. (NOTE: I get this error on GPU install and CPU install of Tensorflow 2.1) Python 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf 2020-07-05 09:48:26.577683: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll Traceback (most recent call last): File "<stdin>", line 1, in <module> File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow\__init__.py", line 101, in <module> from tensorflow_core import * File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\__init__.py", line 46, in <module> from . _api.v2 import compat File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\__init__.py", line 39, in <module> from . import v1 File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\v1\__init__.py", line 32, in <module> from . import compat File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\v1\compat\__init__.py", line 39, in <module> from . import v1 File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\v1\compat\v1\__init__.py", line 29, in <module> from tensorflow._api.v2.compat.v1 import app File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\__init__.py", line 39, in <module> from . import v1 File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\v1\__init__.py", line 32, in <module> from . import compat File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\v1\compat\__init__.py", line 39, in <module> from . 
import v1 File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_core\_api\v2\compat\v1\compat\v1\__init__.py", line 667, in <module> from tensorflow_estimator.python.estimator.api._v1 import estimator File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_estimator\__init__.py", line 10, in <module> from tensorflow_estimator._api.v1 import estimator File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_estimator\_api\v1\estimator\__init__.py", line 10, in <module> from tensorflow_estimator._api.v1.estimator import experimental File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_estimator\_api\v1\estimator\experimental\__init__.py", line 10, in <module> from tensorflow_estimator.python.estimator.canned.dnn import dnn_logit_fn_builder File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_estimator\python\estimator\canned\dnn.py", line 33, in <module> from tensorflow_estimator.python.estimator import estimator File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 53, in <module> from tensorflow_estimator.python.estimator import util as estimator_util File "G:\ProgramFiles\Anaconda37\envs\tensorflow\lib\site-packages\tensorflow_estimator\python\estimator\util.py", line 75, in <module> class _DatasetInitializerHook(tf.compat.v1.train.SessionRunHook): AttributeError: module 'tensorflow' has no attribute 'compat'
This is usually caused by a broken/mismatched tensorflow-estimator module (your pip list shows tensorflow 2.1.0 alongside tensorflow-estimator 2.2.0). Simply do: pip install tensorflow-estimator==2.1.*
16
16
62,559,540
2020-6-24
https://stackoverflow.com/questions/62559540/how-can-i-set-up-a-pusher-server-with-flask
I am trying to setup a simple flask server: import envkey import pysher from flask import Flask # from predictor import PythonPredictor app = Flask(__name__) pusher = pysher.Pusher(envkey.get('PUSHER_KEY')) def my_func(*args, **kwargs): print("processing Args:", args) print("processing Kwargs:", kwargs) # We can't subscribe until we've connected, so we use a callback handler # to subscribe when able def connect_handler(data): print('connect habndler') channel = pusher.subscribe('mychannel') channel.bind('myevent', my_func) pusher.connection.bind('pusher:connection_established', connect_handler) @app.route('/') def index(): pusher.connect() return 'Server Works!' But I Get an error: RuntimeError: cannot join current thread What am I doing wrong?
Specifying my Pusher cluster during initialization helped me get rid of that issue: pusher = pysher.Pusher( key=envkey.get('PUSHER_KEY'), # Or however you get the key cluster="eu", # Add cluster! )
9
3
62,666,926
2020-6-30
https://stackoverflow.com/questions/62666926/str-function-of-class-ported-from-rust-to-python-using-pyo3-doesnt-get-used
I am using the pyo3 rust crate (version 0.11.1) in order to port rust code, into cpython (version 3.8.2) code. I have created a class called my_class and defined the following functions: new, __str__, and __repr__. TL;DR: The __str__ function exists on a class ported from rust using the pyo3 crate, but doesn't get printed when just using print(obj), and instead having to write print(obj.__str__()) The my_class definition is here: use pyo3::prelude::*; #[pyclass] struct my_class { #[pyo3(get, set)] num: i32, #[pyo3(get, set)] debug: bool, } #[pymethods] impl my_class { #[new] fn new(num: i32, debug: bool) -> Self { my_class {num, debug} } fn __str__(&self) -> PyResult<String> { Ok(format!("[__str__] Num: {}, Debug: {}", self.num, self.debug)) } fn __repr__(&self) -> PyResult<String> { Ok(format!("[__repr__] Num: {}, Debug: {}", self.num, self.debug)) } } #[pymodule] fn pymspdb(_py: Python, m: &PyModule) -> PyResult<()> { m.add_class::<my_class>()?; Ok(()) } I build this (into release mode), and test the code with the following code: from my_module import my_class def main(): dsa = my_class(1, True) print(dsa) print(dsa.__str__()) if __name__ == "__main__": main() When running the test python code, I get the following output: <my_class object at 0x7fb7828ae950> [__str__] Num: 1, Debug: true Now I have thought of possible solutions to this. One solution might be the pyo3 rust crate actually acts as a proxy, and in order to port classes into python might implement some sort of object which transfers all actions over to the ported class. So it might not implement its own __str__ therefore not giving me what I want. The second possible solution I thought of was I might not be overloading the __str__ function properly, therefore when python tries to use the print function it doesn't access the correct function and just does the default behavior. Thanks for reading so far, hope I can find an answer since I didn't find anything online for this.
I'm pretty sure this is because you need to implement these methods through the PyObjectProtocol trait. Many Python __magic__ methods correspond to C-level function pointer slots in a type object's memory layout. A type implemented in C needs to provide a function pointer in the slot, and Python will automatically generate a method to wrap the pointer for explicit method calls. A type implemented in Python will automatically have function pointers inserted that delegate to the magic methods. The Python internals will usually look for the function pointer rather than the corresponding magic method, and if Python doesn't find the function pointer, it will behave as though the method doesn't exist. That's why, for example, you had to use #[new] to mark your constructor instead of implementing a __new__ static method. __str__ and __repr__ also correspond to function pointers - specifically, tp_str and tp_repr. If you just try to implement them as regular methods, pyo3 won't generate the function pointers needed. PyObjectProtocol is the pyo3 interface to go through for that.
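To make that concrete, here is a sketch of how it might look with pyo3 0.11's #[pyproto]/PyObjectProtocol API; this replaces the __str__/__repr__ methods currently in the #[pymethods] block and has not been tested against your exact setup:

use pyo3::prelude::*;
use pyo3::PyObjectProtocol;

#[pyproto]
impl PyObjectProtocol for my_class {
    // Fills the tp_str slot, so print(obj) and str(obj) use it.
    fn __str__(&self) -> PyResult<String> {
        Ok(format!("[__str__] Num: {}, Debug: {}", self.num, self.debug))
    }

    // Fills the tp_repr slot, so repr(obj) and the REPL display use it.
    fn __repr__(&self) -> PyResult<String> {
        Ok(format!("[__repr__] Num: {}, Debug: {}", self.num, self.debug))
    }
}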
11
12
62,643,102
2020-6-29
https://stackoverflow.com/questions/62643102/creating-a-dsl-expressions-parser-rules-engine
I'm building an app which has a feature for embedding expressions/rules in a config yaml file. So, for example, a user can reference a variable defined in the yaml file like ${variables.name == 'John'} or ${is_equal(variables.name, 'John')}. I can probably get by with simple expressions, but I want to support complex rules/expressions such as ${variables.name == 'John'} and (${variables.age > 18} OR ${variables.adult == true}) I'm looking for a parsing/dsl/rules-engine library that can support these types of expressions and normalize them. I'm open to using ruby, javascript, java, or python if anyone knows of a library for those languages. One option I thought of was to just support javascript as conditions/rules and basically pass it through eval with the right context set up, with access to variables and other reference-able vars.
I don't know if you use Golang or not, but if you use it, I recommend this https://github.com/antonmedv/expr. I have used it for parsing bot strategy that (stock options bot). This is from my test unit: func TestPattern(t *testing.T) { a := "pattern('asdas asd 12dasd') && lastdigit(23asd) < sma(50) && sma(14) > sma(12) && ( macd(5,20) > macd_signal(12,26,9) || macd(5,20) <= macd_histogram(12,26,9) )" r, _ := regexp.Compile(`(\w+)(\s+)?[(]['\d.,\s\w]+[)]`) indicator := r.FindAllString(a, -1) t.Logf("%v\n", indicator) t.Logf("%v\n", len(indicator)) for _, i := range indicator { t.Logf("%v\n", i) if strings.HasPrefix(i, "pattern") { r, _ = regexp.Compile(`pattern(\s+)?\('(.+)'\)`) check1 := r.ReplaceAllString(i, "$2") t.Logf("%v\n", check1) r, _ = regexp.Compile(`[^du]`) check2 := r.FindAllString(check1, -1) t.Logf("%v\n", len(check2)) } else if strings.HasPrefix(i, "lastdigit") { r, _ = regexp.Compile(`lastdigit(\s+)?\((.+)\)`) args := r.ReplaceAllString(i, "$2") r, _ = regexp.Compile(`[^\d]`) parameter := r.FindAllString(args, -1) t.Logf("%v\n", parameter) } else { } } } Combine it with regex and you have good (if not great, string translator). And for Java, I personally use https://github.com/ridencww/expression-evaluator but not for production. It has similar feature with above link. It supports many condition and you don't have to worry about Parentheses and Brackets. Assignment = Operators + - * / DIV MOD % ^ Logical < <= == != >= > AND OR NOT Ternary ? : Shift << >> Property ${<id>} DataSource @<id> Constants NULL PI Functions CLEARGLOBAL, CLEARGLOBALS, DIM, GETGLOBAL, SETGLOBAL NOW PRECISION Hope it helps.
10
3
62,572,389
2020-6-25
https://stackoverflow.com/questions/62572389/django-drf-yasg-how-to-add-description-to-tags
The Swagger documentation says you can do that: https://swagger.io/docs/specification/grouping-operations-with-tags/ But unfortunately drf-yasg does not implement this feature: https://github.com/axnsan12/drf-yasg/issues/454 It is said that I can add a custom generator class, but that is a very general answer. Now I see that drf_yasg.openapi.Swagger gets an info block, and I have the feeling that this might be the right place to put a global tags section as an additional init argument, but that goes deeper than customizing the generator class and I lack knowledge of this module. Does anybody have a solution to this particular problem, or at least a link to some sort of tutorial on how to properly customize the generator class?
Unfortunately, this is a current issue with drf-yasg. To actually achieve this, you need to create your own schema generator class: from drf_yasg.generators import OpenAPISchemaGenerator class CustomOpenAPISchemaGenerator(OpenAPISchemaGenerator): def get_schema(self, request=None, public=False): """Generate a :class:`.Swagger` object with custom tags""" swagger = super().get_schema(request, public) swagger.tags = [ { "name": "api", "description": "everything about your API" }, { "name": "users", "description": "everything about your users" }, ] return swagger Make sure to also include it in your Schema View from drf_yasg.views import get_schema_view from drf_yasg import openapi schema_view = get_schema_view( openapi.Info( title="My API", default_version='v1', ), generator_class=CustomOpenAPISchemaGenerator, ) Hope this works for you!
11
5
62,624,980
2020-6-28
https://stackoverflow.com/questions/62624980/vscode-python-pandas-dataframe-intellisense-doesnt-show-attributes-method
After importing Pandas, when creating a pandas dataframe, Intellisense doesn't show the available attributes/methods of the created object.(Image 2, where I try to use the .head() function). It detects the module pd(pandas) methods without any problem (see Image 1). I don't have this problem when running a Jupyter Notebook or Jupyter Lab on the browser. I'm using: Windows 7 Python 3.8.3 in a Conda environment. VSCODE 1.46.1 Python extension 2020.6.90262 Microsoft Language Server Visual Studio Intellicode 1.2.8 IMAGE 1: It uses intellisense to detect the module methods/attributes IMAGE 2: Intellisense doesn't show the pandas object available attributes/methods
The detection isn't working because IntelliSense has a hard time with pandas (and pandas.read_csv() especially). It works in Jupyter because it's accessing the live data while IntelliSense has to infer everything from the source code statically. I would advise trying out Pylance as it's the new language server from Microsoft and we have tried to support pandas appropriately. If Pylance doesn't work then try different values for your python.languageServer setting and see which one gives you the best result.
11
9
62,654,908
2020-6-30
https://stackoverflow.com/questions/62654908/layer-is-not-connected-no-input-to-return-error-while-trying-to-get-intermedi
I'm trying to access predictions of intermediate layers of a model during training using custom callback. Following stripped down version of the actual code demonstrates the issue. import tensorflow as tf import numpy as np class Model(tf.keras.Model): def __init__(self, input_shape=None, name="cus_model", **kwargs): super(Model, self).__init__(name=name, **kwargs) def build(self, input_shape): self.dense1 = tf.keras.layers.Dense(input_shape=input_shape, units=32) def call(self, input_tensor): return self.dense1(input_tensor) class CustomCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): get_output = tf.keras.backend.function( inputs = self.model.layers[0].input, outputs = self.model.layers[0].output ) print("Layer output: ",get_output.outputs) X = np.ones((8,16)) y = np.sum(X, axis=1) model = Model() model.compile(optimizer='adam',loss='mean_squared_error', metrics='accuracy') model.fit(X,y, epochs=8, callbacks=[CustomCallback()]) The callback is written as suggested in this answer. Getting following error: <ipython-input-3-635fd53dbffc> in on_epoch_end(self, epoch, logs) 12 def on_epoch_end(self, epoch, logs=None): 13 get_output = tf.keras.backend.function( ---> 14 inputs = self.model.layers[0].input, 15 outputs = self.model.layers[0].output 16 ) . . AttributeError: Layer dense is not connected, no input to return. What's causing this? How to resolve it?
I also cannot get the self.layers[0].input because of the same error, but maybe u can directly call function defined in Model like this: class Model(tf.keras.Model): def __init__(self, input_shape=None, name="cus_model", **kwargs): super(Model, self).__init__(name=name, **kwargs) if not input_shape: input_shape = (10,) self.dense1 = tf.keras.layers.Dense(input_shape=input_shape, units=32) self.dev_dataset = np.ones((8,16)) def call(self, input_tensor): return self.dense1(input_tensor) class CustomCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): self.model.call(self.model.dev_dataset) X = np.ones((8,16)) y = np.sum(X, axis=1) model = Model() model.compile(optimizer='adam',loss='mean_squared_error', metrics='accuracy') model.fit(X,y, epochs=1, callbacks=[CustomCallback()])
8
2
62,667,158
2020-6-30
https://stackoverflow.com/questions/62667158/how-do-i-increase-the-line-thickness-for-sns-lineplot
I have a few seaborn lineplots and I can't figure out how to increase the width of my lines. Here is my code #graph 1 sns.lineplot(x="date", y="nps", data=df_nps, ax=ax1, label="NPS", color='#0550D0') sns.lineplot(x="date", y="ema28", data=df_nps, ax=ax1, label="EMA28", color='#7DF8F3') sns.lineplot(x="date", y="ema7", data=df_nps, ax=ax1, label="EMA7", color='orange') #graph 2 dfz_nps_lineplot = sns.lineplot(x="date", y="nps", data=dfz_nps, ax=ax2, label="NPS", color='#0550D0') dfz_nps_lineplot = sns.lineplot(x="date", y="ema28", data=dfz_nps, ax=ax2, label="EMA28", color='#7DF8F3') dfz_nps_lineplot = sns.lineplot(x="date", y="ema7", data=dfz_nps, ax=ax2, label="EMA7", color='orange') #graph3 dfp_nps_lineplot = sns.lineplot(x="date", y="nps", data=dfp_nps, ax=ax3, label="NPS", color='#0550D0') dfp_nps_lineplot = sns.lineplot(x="date", y="ema28", data=dfp_nps, ax=ax3, label="EMA28", color='#7DF8F3') dfp_nps_lineplot = sns.lineplot(x="date", y="ema7", data=dfp_nps, ax=ax3, label="EMA7", color='orange') # formatting plt.show()
As you can see from the seaborn.lineplot documentation, the function accepts matplotlib.axes.Axes.plot() arguments, which means you can pass the same arguments you can to the matplotlib function in that documentation. If you want to simply adjust the width of your lineplots, I find this the easiest: pass the argument linewidth = your_desired_line_width_in_float, for example linewidth = 1.5, in your sns.lineplot() calls. You can find additional possible arguments in the documentation linked. Example output on random data: seaborn.lineplot() without a linewidth argument provided seaborn.lineplot() with linewidth = 3
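Applied to the code in the question, that would look roughly like this (df_nps and ax1 come from the question's own setup):

import seaborn as sns

# same call as before, just with an explicit line width
sns.lineplot(x="date", y="nps", data=df_nps, ax=ax1, label="NPS",
             color='#0550D0', linewidth=2.5)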
34
50
62,658,112
2020-6-30
https://stackoverflow.com/questions/62658112/how-to-download-all-the-python-packages-mentioned-in-the-requirement-txt-to-a-fo
I want to download all the python packages mentioned in the requirement.txt to a folder in Linux. I don't want to install them. I just need to download them. python version is 3.6 list of packages in the requirement.txt aiodns==0.3.2 aiohttp==1.1.5 amqp==1.4.7 anyjson==0.3.3 astroid==1.3.2 asyncio==3.4.3 asyncio-redis==0.14.1 billiard==3.3.0.20 blist==1.3.6 boto==2.38.0 celery==3.1 pexpect==4.0 pycryptodomex==3.7.0 pycurl==7.19.5.1 pyinotify==0.9.6 pylint==1.4.0 pyminifier==2.1 pyOpenSSL==0.15.1 pypacker==2.9 pyquery==1.2.9 pysmi==0.3.2 pysnmp==4.4.4 PyStaticConfiguration==0.9.0 python-daemon==2.1.2 python-dateutil==2.4.2 python-ldap==3.2.0 python-libnmap==0.6.2 python-otrs==0.4.3 pytz==2015.4 PyYAML==3.11 query-string==0.0.2 queuelib==1.2.2 redis==2.10.3 requests==2.22.1 requests-aws4auth==0.9 requests-oauthlib==0.5.0 requests-toolbelt==0.5.0 scp==0.10.2 six==1.10.0 South==1.0.1 tlslite==0.4.9 u-msgpack-python==2.1 urllib3==1.14 w3lib==1.12.0 websockets==3.3 Werkzeug==0.10.4 xlrd==1.0.0 XlsxWriter==1.0.5 zope.interface==4.1.2 GitPython==2.1.3
The documentation gives what you want: pip download. pip download does the same resolution and downloading as pip install, but instead of installing the dependencies, it collects the downloaded distributions into the directory provided (source). So you may try this option with pip download: pip download -r requirement.txt -d your_directory
8
5
62,658,847
2020-6-30
https://stackoverflow.com/questions/62658847/issue-with-using-snowflake-connector-python-with-python-3-x
I've spent half a day trying to figure it out on my own but now I've run out of ideas and googling requests. So basically what I want is to connect to our Snowflake database using snowflake-connector-python package. I was able to install the package just fine (together with all the related packages that were installed automatically) and my current pip3 list results in this: Package Version -------------------------- --------- asn1crypto 1.3.0 azure-common 1.1.25 azure-core 1.6.0 azure-storage-blob 12.3.2 boto3 1.13.26 botocore 1.16.26 certifi 2020.6.20 cffi 1.14.0 chardet 3.0.4 cryptography 2.9.2 docutils 0.15.2 gitdb 4.0.5 GitPython 3.1.3 idna 2.9 isodate 0.6.0 jmespath 0.10.0 msrest 0.6.17 oauthlib 3.1.0 oscrypto 1.2.0 pip 20.1.1 pyasn1 0.2.3 pyasn1-modules 0.0.9 pycparser 2.20 pycryptodomex 3.9.8 PyJWT 1.7.1 pyOpenSSL 19.1.0 python-dateutil 2.8.1 pytz 2020.1 requests 2.23.0 requests-oauthlib 1.3.0 s3transfer 0.3.3 setuptools 47.3.1 six 1.15.0 smmap 3.0.4 snowflake-connector-python 2.2.8 urllib3 1.25.9 wheel 0.34.2 Just to be clear, it's a clean python-venv although I've tried it on the main one, too. When running the following code in VScode: #!/usr/bin/env python import snowflake.connector # Gets the version ctx = snowflake.connector.connect( user='user', password='pass', account='acc') I'm getting this error: AttributeError: module 'snowflake' has no attribute 'connector' Does anyone have any idea what could be the issue here?
AttributeError: module 'snowflake' has no attribute 'connector' Your test code is likely in a file named snowflake.py which is causing a conflict in the import (it is ending up importing itself). Rename the file to some other name and it should allow you to import the right module and run the connector functions.
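A quick way to confirm that this is what is happening, as a hedged diagnostic sketch (run it from the same directory as your script, after renaming the old file or from a throwaway script):
import snowflake

# If this prints the path of your own snowflake.py, the local file is
# shadowing the installed connector package; if it prints None or a
# site-packages path, the real package is being picked up.
print(getattr(snowflake, "__file__", None))
print(snowflake)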
13
16
62,649,745
2020-6-30
https://stackoverflow.com/questions/62649745/is-it-possible-to-change-font-sizes-according-to-node-sizes
According to NetworkX, in draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds), node_size can be a scalar or an array but font_size needs to be an integer. How can I make the font size bigger if the nodes are big? In fact, is it possible to change font sizes according to node sizes?
There isn't really a way of passing an array of font sizes. Both nx.draw and draw_networkx_labels only accept integers as font sizes for all labels. You'll have to loop over the nodes and add the text via matplotlib specifying some size. Here's an example, scaling proportionally to the node degree: from matplotlib.pyplot import figure, text G=nx.Graph() e=[(1,2),(1,5),(2,3),(3,6),(5,6),(4,2),(4,3),(3,5),(1,3)] G.add_edges_from(e) pos = nx.spring_layout(G) figure(figsize=(10,6)) d = dict(G.degree) nx.draw(G, pos=pos,node_color='orange', with_labels=False, node_size=[d[k]*300 for k in d]) for node, (x, y) in pos.items(): text(x, y, node, fontsize=d[node]*5, ha='center', va='center')
10
20
62,620,268
2020-6-28
https://stackoverflow.com/questions/62620268/display-gpu-usage-while-code-is-running-in-colab
I have a program running on Google Colab in which I need to monitor GPU usage while it is running. I am aware that usually you would use nvidia-smi on the command line to display GPU usage, but since Colab only allows one cell to run at any one time, this isn't an option. Currently, I am using GPUtil and monitoring GPU and VRAM usage with GPUtil.getGPUs()[0].load and GPUtil.getGPUs()[0].memoryUsed, but I can't find a way for those pieces of code to execute at the same time as the rest of my code, so the usage numbers are much lower than they actually should be. Is there any way to print the GPU usage while other code is running?
I used wandb to log system metrics: !pip install wandb import wandb wandb.init() This outputs a URL at which you can view various graphs of different system metrics.
21
23
62,652,159
2020-6-30
https://stackoverflow.com/questions/62652159/how-to-get-dynamodb-to-only-return-certain-columns
Hello, I have a simple DynamoDB table here filled with placeholder values. How would I go about retrieving only sort_number, current_balance and side with a query/scan? I'm using Python and boto3; however, just stating what to configure for each of the expressions and parameters is also enough.
Within the Boto3 SDK you can use: get_item if you're trying to retrieve a specific value; query if you're trying to get values from a single partition (the hash key); scan if you're trying to retrieve values from across multiple partitions. Each of these has a parameter named ProjectionExpression; using this parameter provides the following functionality: "A string that identifies one or more attributes to retrieve from the specified table or index. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas." You would specify the attributes that you want to retrieve, comma separated. Be aware that this does not reduce the RCU cost that is applied for performing the interaction.
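As an illustration, here is a hedged boto3 sketch (the table name is a placeholder and the attribute names are taken from the question; if any of them ever collide with a DynamoDB reserved word you would also need ExpressionAttributeNames):
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my_table")  # hypothetical table name

# ProjectionExpression limits which attributes come back for each item
response = table.scan(ProjectionExpression="sort_number, current_balance, side")
for item in response.get("Items", []):
    print(item)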
8
9
62,649,203
2020-6-30
https://stackoverflow.com/questions/62649203/how-did-python-implement-type-free-variables-from-a-statically-typed-language
I know most of Python is implemented in C. I was wondering how Python works under the hood (in terms of its C implementation) when it comes to determining the type of a variable. In this case, let's say x = 5; if we check the type of x it will say class int, but how is that implemented in C? What checks are made under the hood to determine that it belongs to class int?
This is a huge subject; the document below will give you a deeper understanding: https://intopythoncom.files.wordpress.com/2017/04/internalsofcpython3-6-1.pdf As you said, take integer types as a simple example. To hold an integer-type object, there is a structure defined in C as shown below: typedef struct { PyObject_HEAD long ob_ival; } PyIntObject; An object of the C structure PyIntObject (section 7.5 of the above document) holds the objects that are of integer type. If you are more interested, set up the environment and debug as described in the same section 7.5 of the above document: open Objects/intobject.c, place a breakpoint on line number 89 and start debugging the application. PyTypeObject sits at a higher level and represents the types themselves (see section 7.3 of the above document). As a programmer, it is natural to be curious about the internals, but do not spend too much time on them unless you work at the interpreter level.
10
6
62,646,573
2020-6-29
https://stackoverflow.com/questions/62646573/np-uint16-isnt-the-same-as-np-uint16
I'm attempting to map numpy dtypes to associated values using a dictionary lookup. I observe the following counterintuitive behavior: dtype = np.uint16 x = np.array([0, 1, 2], dtype=dtype) assert x.dtype == dtype d = {np.uint8: 8, np.uint16: 16, np.uint32: 32, np.float32: 32} print(dtype in d) # prints True print(x.dtype in d) # prints False Using other dtypes produces similar results. So we have that np.uint16 == x.dtype, but the former is found in the dictionary's keys while the latter is not. Any explanation and/or simple workaround would be appreciated.
Dtypes don't work like they look at first glance. np.uint16 isn't a dtype object. It's just convertible to one. np.uint16 is a type object representing the type of array scalars of uint16 dtype. x.dtype is an actual dtype object, and dtype objects implement == in a weird way that's non-transitive and inconsistent with hash. dtype == other is basically implemented as dtype == np.dtype(other) when other isn't already a dtype. You can see the details in the source. Particularly, x.dtype compares equal to np.uint16, but it doesn't have the same hash, so the dict lookup doesn't find it.
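A simple workaround, consistent with the explanation above, is to normalise every key to an actual dtype object before doing the lookup (just one possible sketch):
import numpy as np

# key the mapping by np.dtype objects instead of scalar type objects
d = {np.dtype(np.uint8): 8, np.dtype(np.uint16): 16,
     np.dtype(np.uint32): 32, np.dtype(np.float32): 32}

x = np.array([0, 1, 2], dtype=np.uint16)

# x.dtype is already a dtype object, so it hashes consistently with the keys
print(x.dtype in d)   # True
print(d[x.dtype])     # 16

# alternatively, keep the original dict and look up the scalar type instead:
# d[x.dtype.type] works because x.dtype.type is np.uint16 itself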
7
10
62,641,627
2020-6-29
https://stackoverflow.com/questions/62641627/how-to-tell-pip-that-a-packageopencv-has-been-compiled-from-source
Because of some specific requirements I needed to compile a package (opencv with cuda support) from source. After successfull compilation my python-environment is able to import opencv without a problem: $ python Python 3.7.7 (default, Mar 10 2020, 15:16:38) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import cv2 >>> cv2.__version__ '4.3.0' >>> But if I try pip list opencv-python is not part of it: Package Version -------------------- -------- absl-py 0.9.0 astor 0.8.1 dlib 19.20.99 gast 0.3.3 google-pasta 0.2.0 grpcio 1.30.0 h5py 2.10.0 importlib-metadata 1.6.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.2 Markdown 3.2.2 numpy 1.19.0 pip 20.1.1 protobuf 3.12.2 setuptools 47.3.1 six 1.15.0 tensorboard 1.14.0 tensorflow-estimator 1.14.0 tensorflow-gpu 1.14.0 termcolor 1.1.0 Werkzeug 1.0.1 wheel 0.34.2 wrapt 1.12.1 zipp 3.1.0 The problem is that afterwards I need to install more packages via pip install -r requirements.txt and some of the packages listed in requirements.txt have opencv as a dependency. As pip is not aware of the opencv installation it now installes a different opencv version. Having two different versions installed along side each other does not sound like a clever solution for me... I could uninstall the pip install opencv later but that does not seem to be a good solution either... So how can I make pip aware of the other opencv installation before running the pip install?
After you compile opencv, you can install the package with pip or python setup.py install. I would recommend building a Python wheel for opencv+cuda, and then installing that wheel. Having a wheel will make installing easier if you ever need to reinstall or make a new environment. The general steps are: Compile opencv Change into the opencv python directory (with setup.py) and run python setup.py bdist_wheel Run auditwheel repair my-python-wheel-1.5.2-cp35-cp35m-linux_x86_64.whl (change the wheel filename) (from https://stackoverflow.com/a/42106034/5666087) You can get auditwheel with pip install auditwheel. Another useful reference is the build documentation for the opencv-python wheels, available at https://github.com/skvark/opencv-python#build-process. You can modify those steps to your build.
10
2
62,641,506
2020-6-29
https://stackoverflow.com/questions/62641506/in-numpy-how-to-compare-all-values-in-an-axis
For a numpy array, how can I change the value only if all elements along an axis are equal to another array? For example... array = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0], [1, 0, 1]]) I want to replace all [1, 0, 1] with [1, 1, 1]... so that array becomes array([[1, 1, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]]) When I use a boolean array, it checks each individual number. How can I compare the entire row at once instead?
Try with: array[(array == [1, 0, 1]).all(axis=1)] = [1, 1, 1]
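To make the mechanics a little more visible, you can inspect the row-wise boolean mask on its own before assigning (same array as in the question):
import numpy as np

array = np.array([[1, 0, 1],
                  [0, 0, 1],
                  [1, 1, 0],
                  [0, 0, 0],
                  [1, 0, 1]])

mask = (array == [1, 0, 1]).all(axis=1)   # one True/False per row
print(mask)                               # [ True False False False  True]

array[mask] = [1, 1, 1]                   # broadcast the replacement row
print(array)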
7
11
62,641,851
2020-6-29
https://stackoverflow.com/questions/62641851/how-to-make-two-lists-out-of-two-elements-tuples-that-are-stored-in-a-list-of-li
I have a list which contains many lists, and each of those contains 4 tuples. my_list = [[(12, 1), (10, 3), (4, 0), (2, 0)], [(110, 1), (34, 2), (12, 1), (55, 3)]] I want them in two separate lists like: my_list2 = [12,10,4,2,110,34,12,55] my_list3 = [1,3,0,0,1,2,1,3] My attempt was to use the map function for this: my_list2 , my_list3 = map(list, zip(*my_list)) but this is giving me an error: ValueError: too many values to unpack (expected 2)
Your approach is quite close, but you need to flatten first: from itertools import chain my_list = [[(12, 1), (10, 3), (4, 0), (2, 0)], [(110, 1), (34, 2), (12, 1), (55, 3)]] my_list2 , my_list3 = map(list,zip(*chain.from_iterable(my_list))) my_list2 # [12, 10, 4, 2, 110, 34, 12, 55] my_list3 # [1, 3, 0, 0, 1, 2, 1, 3]
9
10
62,639,387
2020-6-29
https://stackoverflow.com/questions/62639387/python-expand-list-of-strings-by-adding-n-elements-for-each-original-element
I have the following list of strings: l1 = ['one','two','three'] I want to obtain a list that has, say, these same elements repeated n times. If n=3 I'd get: l2 = ['one','one','one','two','two','two','three','three','three'] What I am trying is this: l2 = [3*i for i in l1] But what I obtain is this: l2 = ['oneoneone','twotwotwo','threethreethree'] If I try this: l2 = [3*(str(i)+",") for i in l1] I obtain: l2 = ['one,one,one','two,two,two','three,three,three'] What am I missing?
l2 = [j for i in l1 for j in 3*[i]] This gives: ['one', 'one', 'one', 'two', 'two', 'two', 'three', 'three', 'three'] This is equivalent to: l2 = [] for i in l1: for j in 3*[i]: l2.append(j) Note that 3*[i] creates a list with 3 repeated elements (e.g. ['one', one', 'one'])
9
14
62,620,539
2020-6-28
https://stackoverflow.com/questions/62620539/how-to-append-a-total-row-to-pandas-dataframe-with-multiindex
Suppose you have a simple pandas dataframe with a MultiIndex: df = pd.DataFrame(1, index=pd.MultiIndex.from_tuples([('one', 'elem1'), ('one', 'elem2'), ('two', 'elem1'), ('two', 'elem2')]), columns=['col1', 'col2']) Printed as a table: col1 col2 one elem1 1 1 elem2 1 1 two elem1 1 1 elem2 1 1 Question: How do you add a "Total" row to that Dataframe? Expected output: col1 col2 one elem1 1.0 1.0 elem2 1.0 1.0 two elem1 1.0 1.0 elem2 1.0 1.0 Total 4.0 4.0 First attempt: Naive implementation If I am just ignoring the MultiIndex and follow the standard way df.loc['Total'] = df.sum() Output: col1 col2 (one, elem1) 1 1 (one, elem2) 1 1 (two, elem1) 1 1 (two, elem2) 1 1 Total 4 4 It seems to be correct, but the MultiIndex is transformed to Index([('one', 'elem1'), ('one', 'elem2'), ('two', 'elem1'), ('two', 'elem2'), 'Total'], dtype='object') Second attempt: Be explicit df.loc['Total', :] = df.sum() or (being frustrated and changing the axis just out of spite) df.loc['Total', :] = df.sum(axis=1) Output (the same for both calls): col1 col2 one elem1 1.0 1.0 elem2 1.0 1.0 two elem1 1.0 1.0 elem2 1.0 1.0 Total NaN NaN The MultiIndex is not transformed, but the Total is wrong (NaN != 4).
The solution You have to remove the index of df.sum() and just use the values: df.loc['Total', :] = df.sum().values Output: col1 col2 one elem1 1.0 1.0 elem2 1.0 1.0 two elem1 1.0 1.0 elem2 1.0 1.0 Total 4.0 4.0 Why was the second attempt wrong? The second attempt was almost correct. But df.sum() has the Index(['col1', 'col2'], dtype='object'). Consequently, pandas isn't able to match the index. The new index ('Total', '') is appended but without values. But why did df.loc['Total', :] = df.sum(axis=1) also fail? It has the correct Multiindex. Pandas does exactly what you told it, i.e. sum the columns. So, df.sum(axis=1) gives you the following dataframe: one elem1 2 elem2 2 two elem1 2 elem2 2 This dataframe can't be matched with the original df in any meaningful sense.
8
10
62,578,276
2020-6-25
https://stackoverflow.com/questions/62578276/error-no-matching-distribution-found-for-wheel-dash-bootstrap-components
I am trying to install packages in an offline manner. However, when I downloaded all the packages and tried to install them on another computer, an error emerged as shown in the following figure. It seems like it failed to install the "dash-bootstrap-components" package. How can I solve it? By the way, the "dash-bootstrap-components" package is packaged as a "tar.gz" file; does this cause the failure? The following commands can reproduce the problem even on the same computer: pip download dash-bootstrap-components pip install --no-index --find-links ./ dash-bootstrap-components My goal is to configure a Python environment on a network-free computer. If this method is unavailable, are there any other methods that can work around it?
I solved this problem by manually downloading the "wheel" package from the internet and putting it into the folder.
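For completeness, a hedged sketch of the full sequence (./packages is just a placeholder path): because dash-bootstrap-components ships as a tar.gz source distribution, the offline machine also needs the wheel package available locally so pip can build it, which is what the manual download fixes.
# on the machine with internet access
pip download wheel setuptools -d ./packages
pip download -r requirement.txt -d ./packages

# on the offline machine, after copying ./packages over
pip install --no-index --find-links ./packages wheel
pip install --no-index --find-links ./packages -r requirement.txt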
10
13
62,614,078
2020-6-27
https://stackoverflow.com/questions/62614078/why-does-mutating-a-list-in-a-tuple-raise-an-exception-but-mutate-it-anyway
I am not sure I quite understand what's happening in the mini snippet below (on Python 3.6.7). It would be great if someone could explain how we can mutate the list successfully even though Python throws an error. I know that we can mutate a list and update it, but what's with the error? I was under the impression that if there's an error, then x should remain the same. x = ([1, 2], ) x[0] += [3,4] # ------ (1) The traceback thrown at line (1) is > TypeError: 'tuple' object doesn't support item assignment. I understand what the error means but I am unable to get the context of it. But now if I try to print the value of my variable x, Python says it's: print(x) # returns ([1, 2, 3, 4]) As far as I can understand, the exception happened after Python allowed the mutation of the list to happen and then it tried re-assigning it back. It blew up there, I think, as tuples are immutable. Can someone explain what's happening under the hood? Edit 1: the error from the IPython console, attached as an image.
My gut feeling is that the line x[0] += [3, 4] first modifies the list itself, so [1, 2] becomes [1, 2, 3, 4], and then tries to adjust the content of the tuple, which throws a TypeError. The tuple always points to the same list, so its content (in terms of pointers) is not modified, while the object pointed at is modified. We can verify it this way: a_list = [1, 2, 3] a_tuple = (a_list,) print(a_tuple) >>> ([1, 2, 3],) a_list.append(4) print(a_tuple) >>> ([1, 2, 3, 4], ) This does not throw an error and does modify the list in place, despite it being stored in an "immutable" tuple.
15
9
62,611,167
2020-6-27
https://stackoverflow.com/questions/62611167/plotly-round-hover-decimals-in-charts
How do you round numbers for display in a plotly graph? I included an MRE below. Essentially I wanted rounded numbers to appear when the user hovers over the bar. import plotly.express as px import pandas as pd df = pd.DataFrame({'num': [1, 2, 3], 'sqrt': pd.Series([1, 2, 3]) ** 0.5}) fig = px.bar(df, x='num', y='sqrt', title='Square root') fig.show()
You can do it two ways like this: METHOD-1: Using pd.series.round function. import plotly.express as px import pandas as pd df = pd.DataFrame({'num': [1, 2, 3], 'sqrt': (pd.Series([1, 2, 3]) ** 0.5).round(2)}) fig = px.bar(df, x='num', y='sqrt', title='Square root') fig.show() METHOD-2: Using python builtin round function. For this to work I will have to use list comprehension instead of pandas series. As series doesn't support it directly. import plotly.express as px import pandas as pd df = pd.DataFrame({'num': [1, 2, 3], 'sqrt': [round(x**(1/2),2) for x in [1,2,3]]}) fig = px.bar(df, x='num', y='sqrt', title='Square root') fig.show() EDIT: METHOD-3: As answered by @M. Forsythe below you can also do it using hover_data parameter of plotly. import plotly.express as px import pandas as pd df = pd.DataFrame({'num': [1, 2, 3], 'sqrt': pd.Series([1, 2, 3]) ** 0.5}) fig = px.bar(df, x='num', y='sqrt', title='Square root',hover_data={'sqrt':':.2f'}) fig.show()
10
13
62,604,893
2020-6-27
https://stackoverflow.com/questions/62604893/what-is-right-extension-for-plotly-in-jupyterlab
Plotly is not working in JupyterLab. I assume that there is a conflict in the required extensions, but I'm not sure. Checking the Plotly troubleshooting page https://plotly.com/python/troubleshooting/ , they advise removing the extensions and installing them again. But I found that there is an additional extension that came with the JupyterLab update, called 'jupyterlab-plotly-extension', which is not mentioned by Plotly in their instructions for making it work in JupyterLab https://plotly.com/python/getting-started/#jupyterlab-support-python-35 My question is: which extensions should be installed to make Plotly work in JupyterLab? jupyterlab-plotly as mentioned in the Plotly support, or jupyterlab-plotly-extension that came with JupyterLab?
Run 'jupyter labextension list' in a terminal or command prompt to check the status of your environment. The example below shows my environment information with 'jupyter lab' running successfully. xxxxx-no-iMac:~ xxxxx$ jupyter labextension list JupyterLab v2.1.5 Known labextensions: app dir: /Library/Frameworks/Python.framework/Versions/3.6/share/jupyter/lab @jupyter-widgets/jupyterlab-manager v2.0.0 enabled OK @jupyterlab/git v0.20.0 enabled OK @lckr/jupyterlab_variableinspector v0.5.0 enabled OK jupyterlab-plotly v1.5.4 enabled OK nbdime-jupyterlab v2.0.0 enabled OK plotlywidget v1.5.4 enabled OK
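To address the question directly (hedged, since extension names have shifted between releases): in the working listing above the relevant extensions are jupyterlab-plotly together with plotlywidget, which are the ones Plotly's own JupyterLab instructions install for plotly 4.x; jupyterlab-plotly-extension is an older, separate community extension and does not need to be installed alongside them. A minimal install sketch (Plotly's instructions pin these extensions to a version matching your plotly release; the pins are omitted here):
jupyter labextension install jupyterlab-plotly
jupyter labextension install @jupyter-widgets/jupyterlab-manager plotlywidget
jupyter lab build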
13
2
62,606,345
2020-6-27
https://stackoverflow.com/questions/62606345/tensorflow-2-2-0-error-predictions-must-be-0-condition-x-y-did-not-hold
I get the following error message when working on a named-entity-recognition task: tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (bidirectional_lstm_model/time_distributed/Reshape_1:0) = ] [[[-0.100267865 -0.104010895 0.04090859...]]...] [y (Cast_2/x:0) = ] [0] [[{{node assert_greater_equal/Assert/AssertGuard/else/_1/Assert}}]] [Op:__inference_train_function_6216] Function call stack: train_function How can I troubleshoot this? I have checked my input train_x and train_y tensors and they seem fine (Some examples provided towards the end). I was originally using a Conditional Random Field decoder. I replaced that with a Dense layer instead, to see if that changes the error message. The error remains the same though, and is somehow related to the RNN component of the model. In general, what strategy do you use to troubleshoot such errors deep from within the guts of TF? I tried to set up a debugging session on PyCharm and jumped through a bunch of TF files, without learning anything useful about how to solve my problem. The following is my network architecture: Model: "bidirectional_lstm_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= encoder_input (InputLayer) [(None, None)] 0 _________________________________________________________________ encoder_embedding (Embedding (None, None, 300) 2013300 _________________________________________________________________ encoder_bidirectional_rnn (B (None, None, 32) 40576 _________________________________________________________________ time_distributed (TimeDistri (None, None, 25) 825 ================================================================= Total params: 2,054,701 Trainable params: 41,401 Non-trainable params: 2,013,300 _________________________________________________________________ Above + more details (losses, optimizer etc): # Create model encoder_input = keras.Input(shape=(None,), name='encoder_input') encoder_embedding = layers.Embedding(input_dim=input_vocabulary, output_dim=embedding_vector_len, embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix), trainable=False, name='encoder_embedding')(encoder_input) encoder_rnn = layers.LSTM(16, return_sequences=True, name='encoder_rnn') encoder_bidirectional_rnn = layers.Bidirectional(encoder_rnn, name='encoder_bidirectional_rnn')(encoder_embedding) decoder_dense = layers.TimeDistributed(layers.Dense(number_of_tags, name='decoder_dense'))(encoder_bidirectional_rnn) model = keras.Model(inputs=encoder_input, outputs=decoder_dense, name='bidirectional_lstm_model') model.summary() metrics_precision = tf.keras.metrics.Precision() metrics_recall = tf.keras.metrics.Recall() model.compile( loss=tf.keras.losses.categorical_crossentropy, optimizer='adam', metrics=[metrics_precision, metrics_recall] ) Here is what my train_x and train_y arrays look like: # Shapes train_x.shape # (9775, 47) (np.ndarray type) train_y.shape # TensorShape([9775, 47, 25]) (Obtained from tf.one_hot) # Sample (Zero-padded from the right) train_x[0, :] # array([4917, 2806, 6357, 2287, 6059, 0, 0, 0, 0, 0, 0, # 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, # 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, # 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, # 0, 0, 0]) train_y[0, :, :] # array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], # Non "O" tag # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], # Non "O" tag # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], # [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]], dtype=float32)
You are missing the activation on the last layer: decoder_dense = layers.TimeDistributed(layers.Dense(number_of_tags, name='decoder_dense'))(encoder_bidirectional_rnn) You should specify that you want a softmax; leaving the activation at its default is actually a linear activation, meaning that you can get any value, hence the negative ones. You should create the last Dense layer as follows: decoder_dense = layers.TimeDistributed(layers.Dense(number_of_tags, activation='softmax', name='decoder_dense'))(encoder_bidirectional_rnn)
11
12
62,573,039
2020-6-25
https://stackoverflow.com/questions/62573039/prevent-changing-indentation-from-tabs-to-spaces
I have VS Code installed and Python 3.6.8. I use tabs for my indentation, but whenever I save the file, all the tabs are converted to spaces. This might be because of the formatter I use, i.e. Black. How do I prevent the formatter from doing that (i.e. do all the formatting except swapping tab indents for spaces)? Thank you
Assuming you don't have VS Code itself set up to convert tabs into spaces, then Black will very likely replace them with spaces, as that's the norm in the Python community and Black takes a very opinionated view on how to format Python code. You could try another formatter like yapf or autopep8 to see if they will leave physical tabs in. But do note that the vast majority of the Python community uses spaces, so you are potentially facing an uphill battle.
9
0
62,599,036
2020-6-26
https://stackoverflow.com/questions/62599036/python-requests-is-slow-and-takes-very-long-to-complete-http-or-https-request
When requesting a web resource or website or web service with the requests library, the request takes a long time to complete. The code looks similar to the following: import requests requests.get("https://www.example.com/") This request takes over 2 minutes (exactly 2 minutes 10 seconds) to complete! Why is it so slow and how can I fix it?
There can be multiple possible solutions to this problem. There are a multitude of answers on StackOverflow for any of these, so I will try to combine them all to save you the hassle of searching for them. In my search I have uncovered the following layers to this: First, try logging For many problems, activating logging can help you uncover what goes wrong (source): import requests import logging import http.client http.client.HTTPConnection.debuglevel = 1 # You must initialize logging, otherwise you'll not see debug output. logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) requests_log = logging.getLogger("requests.packages.urllib3") requests_log.setLevel(logging.DEBUG) requests_log.propagate = True requests.get("https://www.example.com") In case the debug output does not help you solve the problem, read on. If you only need to check if the server is up, try a HEAD or streaming request It can be faster to not request all data, but to only send a HEAD request (source): requests.head("https://www.example.com") Some servers don't support this, then you can try to stream the response (source): requests.get("https://www.example.com", stream=True) For multiple requests in a row, try utilizing a Session If you send multiple requests in a row, you can speed up the requests by utilizing a requests.Session. This makes sure the connection to the server stays open and configured and also persists cookies as a nice benefit. Try this (source): import requests session = requests.Session() for _ in range(10): session.get("https://www.example.com") To parallelize your requests (try for > 10 requests), use requests-futures If you send a very large number of requests at once, each request blocks execution. You can parallelize this utilizing, e.g., requests-futures (idea from kederrac): from concurrent.futures import as_completed from requests_futures.sessions import FuturesSession with FuturesSession() as session: futures = [session.get("https://www.example.com") for _ in range(10)] for future in as_completed(futures): response = future.result() Be careful not to overwhelm the server with too many requests at the same time. If this also does not solve your problem, read on... The reason might not lie with requests, but the server or your connection In many cases, the reason might lie with the server you are requesting from. First, verify this by requesting any other URL in the same fashion: requests.get("https://www.google.com") If this works fine, you can focus your efforts on the following possible problems: The server only allows specific user-agent strings The server might specifically block requests, or they might utilize a whitelist, or some other reason. To send a nicer user-agent string, try this (source): headers = {"User-Agent": "Mozilla/5.0 (X11; CrOS x86_64 12871.102.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.141 Safari/537.36"} requests.get("https://www.example.com", headers=headers) The server rate-limits you If this problem only occurs sometimes, e.g. after a few requests, the server might be rate-limiting you. Check the response to see if it reads something along those lines (i.e. "rate limit reached", "work queue depth exceeded" or similar; source). Here, the solution is just to wait longer between requests, for example by using time.sleep(). The server response is incorrectly formatted, leading to parsing problems You can check this by not reading the response you receive from the server. 
If the code is still slow, this is not your problem, but if this fixed it, the problem might lie with parsing the response. In case some headers are set incorrectly, this can lead to parsing errors which prevents chunked transfer (source). In other cases, setting the encoding manually might resolve parsing problems (source). To fix those, try: r = requests.get("https://www.example.com") r.raw.chunked = True # Fix issue 1 r.encoding = 'utf-8' # Fix issue 2 print(response.text) IPv6 does not work, but IPv4 does This might be the worst problem of all to find. An easy, albeit weird, way to check this, is to add a timeout parameter as follows: requests.get("https://www.example.com/", timeout=5) If this returns a successful response, the problem should lie with IPv6. The reason is that requests first tries an IPv6 connection. When that times out, it tries to connect via IPv4. By setting the timeout low, you force it to switch to IPv4 within a shorter amount of time. Verify by utilizing, e.g., wget or curl: wget --inet6-only https://www.example.com -O - > /dev/null # or curl --ipv6 -v https://www.example.com In both cases, we force the tool to connect via IPv6 to isolate the issue. If this times out, try again forcing IPv4: wget --inet4-only https://www.example.com -O - > /dev/null # or curl --ipv4 -v https://www.example.com If this works fine, you have found your problem! But how to solve it, you ask? A brute-force solution is to disable IPv6 completely. You may also disable IPv6 for the current session only. You may just want to force requests to use IPv4. (In the linked answer, you have to adapt the code to always return socket.AF_INET for IPv4.) If you want to fix this problem for SSH, here is how to force IPv4 for SSH. (In short, add AddressFamily inet to your SSH config.) You may also want to check if the problem lies with your DNS or TCP.
59
165
62,597,959
2020-6-26
https://stackoverflow.com/questions/62597959/seaborn-violinplot-transparency
I would like to have increasingly transparent violins in a seaborn.violinplot. I tried the following: import seaborn as sns tips = sns.load_dataset("tips") ax = sns.violinplot(x="day", y="total_bill", data=tips, color='r', alpha=[0.8, 0.6, 0.4, 0.2]) Which does not result in the desired output:
I found this thread looking to change alpha values for violin plots in general. It seems you need to access the matplotlib PolyCollections from your ax to even be able to set the alpha values, but since you need to access them anyway, you might as well set the alpha values individually (at least in your case, since you want individual alpha values). From my understanding, ax.collections contains both matplotlib PolyCollections and PathCollections; you only need the PolyCollections, so I did the following and it seems to work: ax = sns.violinplot(x = 'day', y = 'total_bill', data = tips, color = 'r') for violin, alpha in zip(ax.collections[::2], [0.8,0.6,0.4,0.2]): violin.set_alpha(alpha) ax.collections[::2] ignores the PathCollections, as ax.collections comes in the format [PolyCollection1, PathCollection1, PolyCollection2, PathCollection2, ...] Output:
17
19
62,589,193
2020-6-26
https://stackoverflow.com/questions/62589193/how-to-get-class-diagram-from-python-source-code
I try to get a class diagram from Python source code in Client folder with pyreverse but it requires __init__.py (venv) C:\Users\User\Desktop\project> pyreverse Client parsing Client\__init__.py... Failed to import module Client\__init__.py with error: No module named Client\__init__.py. I don't find any solution for this. Is there a way to get the diagram? Update: There are many files in Client folder: Client.py GUI.py script.py ... This is a part of the Client.py code: import threading class Client: def __init__(self): self.socket = None self.listen_socket = None self.buff_dict = {} self.message_list_dict = {} self.lock = threading.Lock() self.target = None self.listen_flag = True This is a part of the GUI.py code: import tkinter as tk class Window(object): def __init__(self, title, font, client): self.title = title self.font = font self.client = client self.root = tk.Tk() self.root.title(title) self.build_window() def build_window(self): pass class LoginWindow(Window): def __init__(self, client, font): super(LoginWindow, self).__init__('Login', font, client) self.build_window()
Thanks to @Anwarvic and @bruno, I came up with the solution for this. Firstly, create an empty __init__.py file inside the Client folder: (venv) C:\Users\User\Desktop\project\Client> type NUL > __init__.py Then go to the parent folder of the Client folder, where I want to get the class diagram: (venv) C:\Users\User\Desktop\project> pyreverse Client -o png But I got this error: The output format 'png' is currently not available. Please install 'Graphviz' to have other output formats than 'dot' or 'vcg'. After some searching, I found the solution (installing Graphviz). Then I could run pyreverse without any error. This is the class diagram I got using pyreverse:
9
10
62,586,878
2020-6-26
https://stackoverflow.com/questions/62586878/why-does-the-pip-requirements-file-contain-file-instead-of-version-number
I created the requirements.txt with pip freeze > requirements.txt. Some modules show the @file..... instead of the version #. What does it mean and why it show? Conda: 4.8.3 Here is the result of requirements.txt. e.g. astroid, flask-admin, matplotlib shows "@ file" below astroid @ file:///opt/concourse/worker/volumes/live/b22b518b-f584-4586-5ee9-55bfa4fca96e/volume/astroid_1592495912194/work bcrypt==3.1.7 blinker==1.4 certifi==2020.6.20 cffi==1.14.0 click==7.1.2 cycler==0.10.0 dnspython==1.16.0 ecdsa==0.13 email-validator @ file:///home/conda/feedstock_root/build_artifacts/email_validator_1589962946737/work flake8==3.8.3 Flask==1.1.2 Flask-Admin @ file:///tmp/build/80754af9/flask-admin_1592429635880/work Flask-Bcrypt==0.7.1 Flask-Login==0.5.0 Flask-Mail==0.9.1 flask-msearch==0.2.9 Flask-SQLAlchemy==2.4.3 Flask-WTF==0.14.3 gunicorn==20.0.4 idna==2.9 importlib-metadata==1.6.1 isort==4.3.21 itsdangerous==1.1.0 Jinja2==2.11.2 kiwisolver==1.2.0 lazy-object-proxy==1.4.3 MarkupSafe==1.1.1 matplotlib @ file:///Users/runner/miniforge3/conda-bld/matplotlib-base_1592576116805/work mccabe==0.6.1 mkl-fft==1.1.0 mkl-random==1.1.1 mkl-service==2.3.0 numpy==1.18.5 pandas @ file:///opt/concourse/worker/volumes/live/38d1301c-8fa9-4d2f-662e-34dddf33b183/volume/pandas_1592841668171/work psycopg2==2.8.4 pycodestyle @ file:///home/conda/feedstock_root/build_artifacts/pycodestyle_1589305246696/work pycparser==2.20 pycryptodome==3.9.7 pyflakes==2.2.0 pylint @ file:///opt/concourse/worker/volumes/live/42ede439-2571-4cb2-513c-394625d2381b/volume/pylint_1592496039330/work pyparsing==2.4.7 python-dateutil==2.8.1 pytz==2020.1 six @ file:///home/conda/feedstock_root/build_artifacts/six_1590081179328/work SQLAlchemy==1.3.17 toml @ file:///tmp/build/80754af9/toml_1592853716807/work tornado==6.0.4 typed-ast==1.4.1 Werkzeug==1.0.1 wrapt==1.11.2 WTForms==2.3.1 xlrd==1.2.0 zipp==3.1.0 Here is the conda list astroid 2.4.2 py37_0 anaconda bcrypt 3.1.7 py37h9bfed18_1 conda-forge blas 1.0 mkl anaconda blinker 1.4 py_1 conda-forge ca-certificates 2020.1.1 0 anaconda certifi 2020.6.20 py37_0 anaconda cffi 1.14.0 py37h356ff06_0 conda-forge click 7.1.2 py_0 anaconda cycler 0.10.0 py_2 conda-forge dnspython 1.16.0 py_1 conda-forge ecdsa 0.13 py_0 conda-forge email_validator 1.1.1 pyh9f0ad1d_0 conda-forge flake8 3.8.3 py_0 anaconda flask 1.1.2 py_0 anaconda flask-admin 1.5.4 py_0 anaconda flask-bcrypt 0.7.1 py_1 conda-forge flask-login 0.5.0 py_0 anaconda flask-mail 0.9.1 py_2 conda-forge flask-msearch 0.2.9 pypi_0 pypi flask-sqlalchemy 2.4.3 pypi_0 pypi flask-wtf 0.14.3 py_0 anaconda freetype 2.10.2 h8da9a1a_0 conda-forge gmp 6.2.0 h4a8c4bd_2 conda-forge gunicorn 20.0.4 py37_0 anaconda idna 2.9 py_1 conda-forge importlib-metadata 1.6.1 py37_0 anaconda intel-openmp 2020.1 216 anaconda isort 4.3.21 py37_0 anaconda itsdangerous 1.1.0 py37_0 anaconda jinja2 2.11.2 py_0 anaconda kiwisolver 1.2.0 py37ha1cc60f_0 conda-forge krb5 1.16.4 hddcf347_0 anaconda lazy-object-proxy 1.4.3 py37h1de35cc_0 anaconda libcxx 10.0.0 1 libedit 3.1.20191231 haf1e3a3_0 libffi 3.2.1 h0a44026_6 libgfortran 3.0.1 h93005f0_2 anaconda libpng 1.6.37 hbbe82c9_1 conda-forge libpq 11.2 h051b688_0 anaconda markupsafe 1.1.1 py37h1de35cc_0 anaconda matplotlib 3.2.2 0 conda-forge matplotlib-base 3.2.2 py37hddda452_0 conda-forge mccabe 0.6.1 py37_1 anaconda mkl 2019.4 233 anaconda mkl-service 2.3.0 py37hfbe908c_0 anaconda mkl_fft 1.1.0 py37hc64f4ea_0 anaconda mkl_random 1.1.1 py37h959d312_0 anaconda ncurses 6.2 h0a44026_1 numpy 1.18.5 py37h1da2735_0 anaconda numpy-base 
1.18.5 py37h3304bdc_0 anaconda openssl 1.1.1g h1de35cc_0 anaconda pandas 1.0.5 py37h959d312_0 anaconda pip 20.1.1 py37_1 psycopg2 2.8.4 py37ha12b0ac_0 anaconda pycodestyle 2.6.0 pyh9f0ad1d_0 conda-forge pycparser 2.20 py_0 conda-forge pycryptodome 3.9.7 py37h51495b9_1 conda-forge pyflakes 2.2.0 py_0 anaconda pylint 2.5.3 py37_0 anaconda pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge python 3.7.6 h359304d_2 python-dateutil 2.8.1 py_0 anaconda python_abi 3.7 1_cp37m conda-forge pytz 2020.1 py_0 anaconda readline 7.0 h1de35cc_5 setuptools 47.3.1 py37_0 six 1.15.0 pyh9f0ad1d_0 conda-forge sqlalchemy 1.3.17 pypi_0 pypi sqlite 3.32.3 hffcf06c_0 tk 8.6.10 hb0a8c7a_0 toml 0.10.1 py_0 anaconda tornado 6.0.4 py37h9bfed18_1 conda-forge typed-ast 1.4.1 py37h1de35cc_0 anaconda werkzeug 1.0.1 py_0 anaconda wheel 0.34.2 py37_0 wrapt 1.11.2 py37h1de35cc_0 anaconda wtforms 2.3.1 py_0 anaconda xlrd 1.2.0 py37_0 anaconda xz 5.2.5 h1de35cc_0 zipp 3.1.0 py_0 anaconda zlib 1.2.11 h1de35cc_3 Finally I plan to deploy app in Heroku, so I thought that requirements.txt may be required.
This is a special syntax (supported since pip 19.1) for installing packages from direct references, such as VCS repositories or local paths: package_name @ git+https://githost/<repo>.git@<commit_id> See https://pip.readthedocs.io/en/stable/reference/pip_install/#requirement-specifiers and https://www.python.org/dev/peps/pep-0440/#direct-references
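In this particular freeze the direct references are file:// paths left behind by conda-built packages, which will not resolve on Heroku. One hedged workaround is to regenerate the requirements with plain pinned versions instead, for example:
pip list --format=freeze > requirements.txt
This writes name==version lines rather than the @ file references; it is worth double-checking the resulting file (and that the pinned versions exist on PyPI) before deploying.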
12
7
62,578,492
2020-6-25
https://stackoverflow.com/questions/62578492/what-is-the-time-complexity-of-checking-membership-in-dict-items
What is the time complexity of checking membership in dict.items()? According to the documentation: Keys views are set-like since their entries are unique and hashable. If all values are hashable, so that (key, value) pairs are unique and hashable, then the items view is also set-like. (Values views are not treated as set-like since the entries are generally not unique.) For set-like views, all of the operations defined for the abstract base class collections.abc.Set are available (for example, ==, <, or ^). So I did some testing with the following code: from timeit import timeit def membership(val, container): val in container r = range(100000) s = set(r) d = dict.fromkeys(r, 1) d2 = {k: [1] for k in r} items_list = list(d2.items()) print('set'.ljust(12), end='') print(timeit(lambda: membership(-1, s), number=1000)) print('dict'.ljust(12), end='') print(timeit(lambda: membership(-1, d), number=1000)) print('d_keys'.ljust(12), end='') print(timeit(lambda: membership(-1, d.keys()), number=1000)) print('d_values'.ljust(12), end='') print(timeit(lambda: membership(-1, d.values()), number=1000)) print('\n*With hashable dict.values') print('d_items'.ljust(12), end='') print(timeit(lambda: membership((-1, 1), d.items()), number=1000)) print('*With unhashable dict.values') print('d_items'.ljust(12), end='') print(timeit(lambda: membership((-1, 1), d2.items()), number=1000)) print('d_items'.ljust(12), end='') print(timeit(lambda: membership((-1, [1]), d2.items()), number=1000)) print('\nitems_list'.ljust(12), end='') print(timeit(lambda: membership((-1, [1]), items_list), number=1000)) With the output: set 0.00034419999999998896 dict 0.0003307000000000171 d_keys 0.0004200000000000037 d_values 2.4773092 *With hashable dict.values d_items 0.0004413000000003109 *With unhashable dict.values d_items 0.00042879999999989593 d_items 0.0005549000000000248 items_list 3.5529328 As you can see, when the dict.values are all hashable (int), the execution time for the membership is similar to that of a set or d_keys, because items view is set-like. The last two examples are on the dict.values with unhashable objects (list). So I assumed the execution time would be similar to that of a list. However, they are still similar to that of a set. Does this mean that even though dict.values are unhashable objects, the implementation of items view is still very efficient, resulting O(1) time complexity for checking the membership? Am I missing something here? EDITED per @chepner's comment: dict.fromkeys(r, [1]) -> {k: [1] for k in r} EDITED per @MarkRansom's comment: another test case list(d2.items())
Lookup in an instance of dict_items is an O(1) operation (though one with an arbitrarily large constant, related to the complexity of comparing values.) dictitems_contains doesn't simply try to hash the tuple and look it up in a set-like collection of key/value pairs. (Note: all of the following links are just to different lines of dictitems_contain, if you don't want to click on them individually.) To evaluate (-1, [1]) in d2.items() it first extracts the key from the tuple, then tries to find that key in the underlying dict. If that lookup fails, it immediately returns false. Only if the key is found does it then compare the value from the tuple to the value mapped to the key in the dict. At no point does dictitems_contains need to hash the second element of the tuple. It's not clear in what ways an instance of dict_items is not set-like when the values are non-hashable, as mentioned in the documentation. A simplified, pure-Python implementation of dict_items.__contains__ might look something like class DictItems: def __init__(self, d): self.d = d def __contains__(self, t): key = t[0] value = t[1] try: dict_value = self.d[key] # O(1) lookup except KeyError: return False return value == dict_value # Arbitrarily expensive comparison ... where d.items() returns DictItems(d).
20
11
62,585,395
2020-6-25
https://stackoverflow.com/questions/62585395/not-able-to-install-jaxlib
I am trying to install jaxlib on my Windows 10 machine with the following command, which I found in the documentation: pip install jaxlib It shows the following error: Collecting jaxlib Could not find a version that satisfies the requirement jaxlib (from versions: None) No matching distribution found for jaxlib
Jaxlib is not supported on Windows; you can see it here: https://github.com/google/jax/issues/438
15
10
62,564,117
2020-6-24
https://stackoverflow.com/questions/62564117/why-sorted-in-python-didnt-accept-positional-arguments
a=[1,2,3,4] def func(x): return x**x b=sorted(a,func) This line always gives an error -> TypeError: sorted expected 1 argument, got 2. In fact, the syntax of sorted is sorted(iterable,key,reverse), in which key and reverse are optional, so according to this, the second parameter I pass should go to key. And when I define my own func: def func2(x,y=4,z=10): print(x,y,z) func2(100,200)--->output-->>100 200 10 Here 200 is automatically passed as the y argument for func2. How does this work?
In addition to @user4815162342's answer: from the documentation, sorted(iterable, *, key=None, reverse=False) Notice the * between the iterable and key parameters. That is the Python syntax for specifying that every parameter after * must be passed as a keyword argument. So your custom function would have to be defined as follows to get the same behaviour: def func2(x, *, y=4, z=10): print(x, y, z) func2(100, 200) TypeError: func2() takes 1 positional argument but 2 were given
7
8
62,561,254
2020-6-24
https://stackoverflow.com/questions/62561254/print-in-scientific-format-with-powers-of-ten-being-only-multiples-of-3
I haven't found a way to only get exponents which are multiples of 3, when displaying numbers in the scientific format. Neither did I succeed writing a simple custom formatting function. Here is a quick example: Normal behaviour using scientific notation with pythons .format(): numbers = [1.2e-2, 1.3e-3, 1.5e5, 1.6e6] for n in numbers: print("{:.E}".format(n)) >>> 1.20E-02 1.30E-03 1.50E+05 1.60E+06 I, however, need the following output: >>> 12.00E-03 1.30E-03 15.00E+06 1.60E+06 Does anyone know a convenient way for me to get the desired formatting?
Well, it depends on if you want the output format to always adjust to the nearest power of 3, or if you want it to adjust to the nearest lower power of 3. Basically it comes to: how you handle 1.50E+05? Should it be 150.00E+03 or 0.15E+06? Case 1: nearest lower power of 3 from math import log10,floor numbers = [1.2e-2, 1.3e-3, 1.5e5, 1.6e6] def adjusted_scientific_notation(val,num_decimals=2,exponent_pad=2): exponent_template = "{:0>%d}" % exponent_pad mantissa_template = "{:.%df}" % num_decimals order_of_magnitude = floor(log10(abs(val))) nearest_lower_third = 3*(order_of_magnitude//3) adjusted_mantissa = val*10**(-nearest_lower_third) adjusted_mantissa_string = mantissa_template.format(adjusted_mantissa) adjusted_exponent_string = "+-"[nearest_lower_third<0] + exponent_template.format(abs(nearest_lower_third)) return adjusted_mantissa_string+"E"+adjusted_exponent_string for n in numbers: print("{0:.2E} -> {1: >10}".format(n,adjusted_scientific_notation(n))) which prints out: 1.20E-02 -> 12.00E-03 1.30E-03 -> 1.30E-03 1.50E+05 -> 150.00E+03 1.60E+06 -> 1.60E+06 Case 2: nearest power of 3 def adjusted_scientific_notation(val,num_decimals=2,exponent_pad=2): exponent_template = "{:0>%d}" % exponent_pad mantissa_template = "{:.%df}" % num_decimals order_of_magnitude = floor(log10(abs(val))) nearest_third = 3*(order_of_magnitude//3+int(order_of_magnitude%3==2)) adjusted_mantissa = val*10**(-nearest_third) adjusted_mantissa_string = mantissa_template.format(adjusted_mantissa) adjusted_exponent_string = "+-"[nearest_third<0] + exponent_template.format(abs(nearest_third)) return adjusted_mantissa_string+"E"+adjusted_exponent_string for n in numbers: print("{0:.2E} -> {1: >10}".format(n,adjusted_scientific_notation(n))) which prints out: 1.20E-02 -> 12.00E-03 1.30E-03 -> 1.30E-03 1.50E+05 -> 0.15E+06 1.60E+06 -> 1.60E+06
8
6
62,554,840
2020-6-24
https://stackoverflow.com/questions/62554840/how-to-change-only-the-maximum-value-of-a-group-in-pandas-dataframe
I have following dataset Item Count A 60 A 20 A 21 B 33 B 33 B 32 Code to reproduce: import pandas as pd df = pd.DataFrame([ ['A', 60], ['A', 20], ['A', 21], ['B', 33], ['B', 33], ['B', 32], ], columns=['Item', 'Count']) Suppose I have to Change only the maximum value of each group of "Item" column by adding 1. the output should be like this: Item Count New_Count A 60 61 A 20 20 A 21 21 B 33 34 B 33 34 B 32 32 I tried df['New_Count']=df.groupby(['Item'])['Count'].transform(lambda x: max(x)+1) but all the values in "Count" was replaced by max value of each group +1. Item Count New_Count A 60 61 A 20 61 A 21 61 B 33 34 B 33 34 B 32 34
Use idxmax: idx = df.groupby("Item")["Count"].idxmax() df["New_Count"] = df["Count"] df.loc[idx, "New_Count"] += 1 This will only increment the first occurrence of th maximum in each group. If you want to increment all the maximum values in the case of a tie, you can use transform instead. Just replace the first line above with: idx = df.groupby("Item")["Count"].transform(max) == df["Count"]
18
12
62,554,991
2020-6-24
https://stackoverflow.com/questions/62554991/how-do-i-install-python-on-alpine-linux
How do I install python3 and python3-pip on an alpine based image (without using a python image)? $ apk add --update python3.8 python3-pip ERROR: unsatisfiable constraints: python3-pip (missing): required by: world[python3-pip] python3.8 (missing): required by: world[python3.8]
This is what I use in a Dockerfile for an alpine image: # Install python/pip ENV PYTHONUNBUFFERED=1 RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python RUN python3 -m ensurepip RUN pip3 install --no-cache --upgrade pip setuptools
181
263
62,549,990
2020-6-24
https://stackoverflow.com/questions/62549990/what-does-next-and-iter-do-in-pytorchs-dataloader
I have the following code: import torch import numpy as np import pandas as pd from torch.utils.data import TensorDataset, DataLoader # Load dataset df = pd.read_csv(r'../iris.csv') # Extract features and target data = df.drop('target',axis=1).values labels = df['target'].values # Create tensor dataset iris = TensorDataset(torch.FloatTensor(data),torch.LongTensor(labels)) # Create random batches iris_loader = DataLoader(iris, batch_size=105, shuffle=True) next(iter(iris_loader)) What do next() and iter() do in the above code? I have gone through PyTorch's documentation and still can't quite understand what next() and iter() are doing here. Can anyone help explain this? Many thanks in advance.
These are built-in functions of Python; they are used for working with iterables. Basically, iter() calls the __iter__() method on iris_loader, which returns an iterator. next() then calls the __next__() method on that iterator to get the first iteration. Running next() again will get the second item of the iterator, etc. This logic often happens 'behind the scenes', for example when running a for loop: it calls the __iter__() method on the iterable and then calls __next__() on the returned iterator until it reaches the end of the iterator, at which point a StopIteration is raised and the loop stops. Please see the documentation for further details and some nuances: https://docs.python.org/3/library/functions.html#iter
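A stripped-down illustration of the same protocol on a plain list (nothing PyTorch-specific; a DataLoader works the same way, just yielding batches instead of numbers):
numbers = [10, 20, 30]

it = iter(numbers)   # calls numbers.__iter__() and returns an iterator
print(next(it))      # 10 -- calls it.__next__()
print(next(it))      # 20
print(next(it))      # 30
# one more next(it) would raise StopIteration, which is what ends a for loop

# so next(iter(iris_loader)) simply grabs the first batch the DataLoader yields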
38
43
62,547,848
2020-6-24
https://stackoverflow.com/questions/62547848/should-isinstance-check-against-typing-or-collections-abc
Both typing and collections.abc includes similar type such as Mapping, Sequence, etc. Based on the python documentation, it seems that collections.abc is preferred for type checking: This module provides abstract base classes that can be used to test whether a class provides a particular interface; for example, whether it is hashable or whether it is a mapping. https://docs.python.org/3/library/collections.abc.html but using typing also works and I'd rather not import Mapping from both typing and collections.abc. So is there any catch in using typing with isinstance()?
Many of the typing generic classes are just aliases to the abc ones. Just as an example from the docs, Hashable: class typing.Hashable An alias to collections.abc.Hashable Also, isinstance(abc.Hashable, typing.Hashable) isinstance(typing.Hashable, abc.Hashable) are both True, making it clear they are equivalent in terms of the class hierarchy. In Python you can check an alias's origin using the origin field: >>> typing.Hashable.__origin__ <class 'collections.abc.Hashable'> No reason I can see to import abc if you do not need it. The two packages provide close but different uses, so I would import only what you need. This is more of an opinion, but if your use-case is only checking against isinstance (i.e. no interesting argument annotation) both seem equivalent to me, though abc may be the more usual tool for this.
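A short runnable sketch of the equivalence described above (the dictionary is made up; only standard-library modules are used — note that subscripted forms like typing.Mapping[str, int] cannot be used with isinstance):
from collections.abc import Mapping as AbcMapping
from typing import Mapping as TypingMapping

d = {"a": 1}

# Both checks pass; the typing alias points at the abc class via __origin__
print(isinstance(d, AbcMapping))              # True
print(isinstance(d, TypingMapping))           # True
print(TypingMapping.__origin__ is AbcMapping) # True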
12
4
62,510,114
2020-6-22
https://stackoverflow.com/questions/62510114/converting-from-py-to-ipynb
I wrote a Jupyter notebook that has been converted to .py somehow. I would like it back in the original format. Does anyone know how to do that? There is a previous Stack Overflow question about this, but the solution doesn't work for me: Converting to (not from) ipython Notebook format
The question was edited so this isn't as direct of an answer as it once was. Nevertheless, if you accidentally changed the extension of your Python notebook from .ipynb to .py as the OP did when originally asking the question, this is the answer for you. Just rename it changing the extension e.g. for linux/macos mv <file>.py <file>.ipynb or right-click rename for windows and type the full name with the extension (Since it seems that the contents are .ipynb contents already)
57
12
62,423,613
2020-6-17
https://stackoverflow.com/questions/62423613/installing-aws-cli-v2-through-pip-on-windows
Is it possible to install AWS CLI v2 through PIP on Windows? In the instructions the recommended way to install is via MSI, but I want to use PIP. What if I install CLI like given on Github in a Linux way: python -m pip install awscli Will it install v1 or v2 by default?
pip install awscliv2 This single command should help you install AWS CLI v2
12
2
62,473,806
2020-6-19
https://stackoverflow.com/questions/62473806/how-to-cache-a-variable-with-flask
I am building a web form using Flask and would like the user to be able to enter multiple entries, and give them the opportunity to undo an entry with an undo button, before sending the data to the database. I am trying to use Flask-Caching but have not managed to set it up properly. I have followed The Flask Mega-Tutorial for setting up Flask (this is my first Flask app). +---app | | forms.py | | routes.py | | __init__.py | +---static | +---templates I wonder how I need to configure the Flask app to basically be able to do the following things: cache.add("variable_name", variable_data) variable_name = cache.get("variable_name") cache.clear() in one of the pages (functions with @app.route decorators)? In app/__init__.py I have: from flask import Flask from config import Config from flask_caching import Cache app = Flask(__name__) app.config.from_object(Config) cache = Cache(app, config={'CACHE_TYPE': 'simple'}) from app import routes In routes.py I have: from flask import current_app and I use the code below when I try to call the cache: current_app.cache.add("variable_name", variable_data) What I get when trying to use the form is the following error: AttributeError: 'Flask' object has no attribute 'cache' Pretty much all tutorials I've found have simply had the app declaration and all the routes in the same module. But how do I access the cache when I have the routes in another module?
Your first statement makes me wonder if you are really looking for caching. It seems you are looking for session data storage. Some possibilities... 1. Session Data Storage Client-Side: Store session client data as cookies using built-in Flask session objects. From docs: This can be any small, basic information about that client or their interactions for quick retrieval (up to 4kB). Example of explicitly storing the session variable username: from flask import Flask, session, request app = Flask(__name__) app.config["SECRET_KEY"] = "any random string" @app.route("/login", methods=["GET", "POST"]) def login(): if request.method == "POST": session["username"] = request.form["username"] # to get value use session["username"] Server-Side: For configurations or long-standing session data that usually still don't require a database. For a more comprehensive usage example, look at the Flask-Session package. 2. Caching (Flask-Caching package) It must be stated that: A cache's primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer So if you need to speed things up on your site... this is the recommended approach by the Flask team. Example of caching a variable username that will expire after CACHE_DEFAULT_TIMEOUT: from flask import Flask, request from flask_caching import Cache app = Flask(__name__) app.config["SECRET_KEY"] = "any random string" app.config["CACHE_TYPE"] = "SimpleCache" app.config["CACHE_DEFAULT_TIMEOUT"] = 300 # timeout in seconds cache = Cache(app) @app.route("/login", methods=["GET", "POST"]) def login(): if request.method == "POST": cache.set("username", request.form["username"]) # to get value use cache.get("username")
7
11
62,489,359
2020-6-20
https://stackoverflow.com/questions/62489359/why-does-pandas-use-nan-from-numpy-instead-of-its-own-null-value
This is somewhat of a broad topic, but I will try to pare it to some specific questions. In starting to answer questions on SO, I have found myself sometimes running into a silly error like this when making toy data: In[0]: import pandas as pd df = pd.DataFrame({"values":[1,2,3,4,5,6,7,8,9]}) df[df < 5] = np.nan Out[0]: NameError: name 'np' is not defined I'm so used to automatically importing numpy with pandas that this doesn't usually occur in real code. However, it did make me wonder why pandas doesn't have it's own value/object for representing null values. I only recently realized that you could just use the Python None instead for a similar situation: import pandas as pd df = pd.DataFrame({"values":[1,2,3,4,5,6,7,8,9]}) df[df < 5] = None Which works as expected and doesn't produce an error. But I have felt like the convention on SO that I have seen is to use np.nan, and that people are usually referring to np.nan when discussing null values (this is perhaps why I hadn't realized None can be used, but maybe that was my own idiosyncrasy). Briefly looking into this, I have seen now that pandas does have a pandas.NA value since 1.0.0, but I have never seen anyone use it in a post: In[0]: import pandas as pd import numpy as np df = pd.DataFrame({'values':np.random.rand(20,)}) df['above'] = df['values'] df['below'] = df['values'] df['above'][df['values']>0.7] = np.nan df['below'][df['values']<0.3] = pd.NA df['names'] = ['a','b','c','a','b','c','a','b','c','a']*2 df.loc[df['names']=='a','names'] = pd.NA df.loc[df['names']=='b','names'] = np.nan df.loc[df['names']=='c','names'] = None df Out[0]: values above below names 0 0.323531 0.323531 0.323531 <NA> 1 0.690383 0.690383 0.690383 NaN 2 0.692371 0.692371 0.692371 None 3 0.259712 0.259712 NaN <NA> 4 0.473505 0.473505 0.473505 NaN 5 0.907751 NaN 0.907751 None 6 0.642596 0.642596 0.642596 <NA> 7 0.229420 0.229420 NaN NaN 8 0.576324 0.576324 0.576324 None 9 0.823715 NaN 0.823715 <NA> 10 0.210176 0.210176 NaN <NA> 11 0.629563 0.629563 0.629563 NaN 12 0.481969 0.481969 0.481969 None 13 0.400318 0.400318 0.400318 <NA> 14 0.582735 0.582735 0.582735 NaN 15 0.743162 NaN 0.743162 None 16 0.134903 0.134903 NaN <NA> 17 0.386366 0.386366 0.386366 NaN 18 0.313160 0.313160 0.313160 None 19 0.695956 0.695956 0.695956 <NA> So it seems that for numerical values, the distinction between these different null values doesn't matter, but they are represented differently for strings (and perhaps for other data types?). My questions based on the above: Is it conventional to use np.nan (rather than None) to represent null values in pandas? Why did pandas not have its own null value for most of its lifetime (until last year)? What was the motivation for adding? In cases where you can have multiple types of missing values in one Series or column, is there any difference between them? Why are they not represented identically (as with numerical data)? I fully anticipate that I may have a flawed interpretation of things and the distinction between pandas and numpy, so please correct me.
A main dependency of pandas is numpy, in other words, pandas is built on-top of numpy. Because pandas inherits and uses many of the numpy methods, it makes sense to keep things consistent, that is, missing numeric data are represented with np.NaN. (This choice to build upon numpy has consequences for other things too. For instance date and time operations are built upon the np.timedelta64 and np.datetime64 dtypes, not the standard datetime module.) One thing you may not have known is that numpy has always been there with pandas import pandas as pd pd.np? pd.np.nan Though you might think this behavior could be better since you don't import numpy, this is discouraged and in the near future will be deprecated in favor of directly importing numpy FutureWarning: The pandas.np module is deprecated and will be removed from pandas in a future version. Import numpy directly instead Is it conventional to use np.nan (rather than None) to represent null values in pandas? If the data are numeric then yes, you should use np.NaN. None requires the dtype to be Object and with pandas you want numeric data stored in a numeric dtype. pandas will generally coerce to the proper null-type upon creation or import so that it can use the correct dtype pd.Series([1, None]) #0 1.0 #1 NaN <- None became NaN so it can have dtype: float64 #dtype: float64 Why did pandas not have its own null value for most of its lifetime (until last year)? What was the motivation for adding? pandas did not have it's own null value because it got by with np.NaN, which worked for the majority of circumstances. However with pandas it's very common to have missing data, an entire section of the documentation is devoted to this. NaN, being a float, does not fit into an integer container which means that any numeric Series with missing data is upcast to float. This can become problematic because of floating point math, and some integers cannot be represented perfectly by a floating point number. As a result, any joins or merges could fail. # Gets upcast to float pd.Series([1,2,np.NaN]) #0 1.0 #1 2.0 #2 NaN #dtype: float64 # Can safely do merges/joins/math because things are still Int pd.Series([1,2,np.NaN]).astype('Int64') #0 1 #1 2 #2 <NA> #dtype: Int64
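A short sketch of the practical differences described above (purely illustrative; only standard pandas/numpy behaviour is shown, and pd.NA requires pandas >= 1.0):
import numpy as np
import pandas as pd

# All three missing-value markers are recognised by pandas
print(pd.isna(None), pd.isna(np.nan), pd.isna(pd.NA))   # True True True

# Default behaviour: None is coerced to NaN and the column is upcast to float
s = pd.Series([1, None, np.nan])
print(s.dtype)        # float64

# Nullable integer dtype: missing values become <NA> and the dtype stays integer
s_int = pd.Series([1, None, np.nan], dtype="Int64")
print(s_int.dtype)    # Int64
print(s_int)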
9
8
62,527,331
2020-6-23
https://stackoverflow.com/questions/62527331/what-does-hexdigest-do-in-python
We need such a code for hashing: from hashlib import sha256 Hash = sha256(b"hello").hexdigest() #Hash = '2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824' hexdigest seems to be doing the main thing, because without it we will get the following result: Hash = sha256(b"hello") #Hash = <sha256 HASH object @ 0x000001E92939B950> The use of hexdigest is mandatory because if it is not used, another output will be obtained, but what does it do?
The actual digest is a really big number. It is conventionally represented as a sequence of hex digits, as we humans aren't very good at dealing with numbers with more than a handful of digits (and hex has the advantage that it reveals some types of binary patterns really well; for example, you'd be hard pressed to reason about a number like 4,262,789,120 whereas its hex representation FE150000 readily reveals that the low 16 bits are all zeros) but the object is more than just a number; it's a class instance with methods which allow you e.g. to add more data in chunks, so that you can calculate the digest of a large file or a stream of data successively, without keeping all of it in memory. You can think of the digest object as a collection of states which permit this operation to be repeated many times, and the hex digest method as a way to query its state at the current point in the input stream. You could argue that the interface could be different - for example, str(Hash) could produce the hex representation; but this only pushes the problem to a different, and arguably more obscure, corner. For completeness, hexdigest is not well-defined in Python generally. It is the name of a set of methods within the hashlib module in the standard library, and this exposition discusses that particular use case. Other libraries could have a method with the same name but completely different semantics; or they could have a method with the same purpose but a completely different name.
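A small sketch of the point about chunked updates and the different output forms (standard-library hashlib only; nothing here goes beyond the question's b"hello" example):
from hashlib import sha256

h = sha256()
# Feed the data in chunks -- the digest object keeps internal state between calls
h.update(b"hel")
h.update(b"lo")

print(h.digest())      # the raw 32-byte value
print(h.hexdigest())   # the same value written out as 64 hex characters
print(h.hexdigest() == sha256(b"hello").hexdigest())   # True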
12
16
62,419,767
2020-6-17
https://stackoverflow.com/questions/62419767/how-to-reload-python-package-after-pip-install-in-visual-studio-code
I wonder how to reload Python package after pip install in Visual Studio Code? pip install package-A pip list package-A does not exist Restart 'Visual Studio Code' Is the only way to restart?
The best answer I've found is to use Developer: Reload Window in the command palette like @rioV8 suggested. You can either use the command palette or you can change the key mappings as described here. There's already a Ctrl + R key mapping for reloading the window, but it's got a 'when' condition attached to it so I changed that to true. I also had to delete key mappings for other extensions so there was no overlap.
16
17
62,523,166
2020-6-22
https://stackoverflow.com/questions/62523166/how-can-i-generate-an-azure-blob-sas-url-in-python
I am trying to generate blob SAS URLs on the fly using the azure-storage-blob package. This solution only works if you have the now-deprecated azure-storage package, which cannot be installed anymore. I need a way to mimic the behaviour of BlockBlobService.generate_blob_shared_access_signature to generate a blob SAS URL, like this: from datetime import datetime, timedelta from azure.storage.blob import ( BlockBlobService, ContainerPermissions, BlobPermissions, PublicAccess, ) AZURE_ACC_NAME = '<account_name>' AZURE_PRIMARY_KEY = '<account_key>' AZURE_CONTAINER = '<container_name>' AZURE_BLOB='<blob_name>' block_blob_service = BlockBlobService(account_name=AZURE_ACC_NAME, account_key=AZURE_PRIMARY_KEY) sas_url = block_blob_service.generate_blob_shared_access_signature(AZURE_CONTAINER,AZURE_BLOB,permission=BlobPermissions.READ,expiry= datetime.utcnow() + timedelta(hours=1)) print('https://'+AZURE_ACC_NAME+'.blob.core.windows.net/'+AZURE_CONTAINER+'/'+AZURE_BLOB+'?'+sas_url) The above solution works if you have the deprecated package, but I need a solution which doesn't need it.
Take a look to the following code: from datetime import datetime, timedelta from azure.storage.blob import BlobClient, generate_blob_sas, BlobSasPermissions account_name = 'STORAGE_ACCOUNT_NAME' account_key = 'STORAGE_ACCOUNT_ACCESS_KEY' container_name = 'CONTAINER_NAME' blob_name = 'IMAGE_PATH/IMAGE_NAME' def get_blob_sas(account_name,account_key, container_name, blob_name): sas_blob = generate_blob_sas(account_name=account_name, container_name=container_name, blob_name=blob_name, account_key=account_key, permission=BlobSasPermissions(read=True), expiry=datetime.utcnow() + timedelta(hours=1)) return sas_blob blob = get_blob_sas(account_name,account_key, container_name, blob_name) url = 'https://'+account_name+'.blob.core.windows.net/'+container_name+'/'+blob_name+'?'+blob Check this documentation for more detail: link
10
21
62,488,423
2020-6-20
https://stackoverflow.com/questions/62488423/brokenprocesspool-while-running-code-in-jupyter-notebook
I am learning about multiprocessing in python. I have the following code snippet: import time import concurrent.futures def wait(seconds): print(f'Waiting {seconds} seconds...') time.sleep(seconds) return f'Done' if __name__ == "__main__": with concurrent.futures.ProcessPoolExecutor() as executor: p = executor.submit(wait,1) print(p.result()) Running it gives me this error: BrokenProcessPool Traceback (most recent call last) <ipython-input-19-8eff57e3a077> in <module> 9 with concurrent.futures.ProcessPoolExecutor() as executor: 10 p = executor.submit(do_something,1) ---> 11 print(p.result()) ~\.conda\envs\w\lib\concurrent\futures\_base.py in result(self, timeout) 433 raise CancelledError() 434 elif self._state == FINISHED: --> 435 return self.__get_result() 436 else: 437 raise TimeoutError() ~\.conda\envs\w\lib\concurrent\futures\_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 else: 386 return self._result BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. I am on a Windows computer and I have used if __name__ == "__main__": in my code. But I still get this error.
I got it to work! I saved the wait function in a separate python file called wait.py and imported it in jupyter notebook. wait.py: import time def wait(seconds): print(f'Waiting {seconds} seconds...') time.sleep(seconds) return f'Done' ipynb file: import concurrent.futures import wait #import the wait file if __name__ == "__main__": with concurrent.futures.ProcessPoolExecutor() as executor: p = executor.submit(wait.wait,1) print(p.result())
15
13
62,469,881
2020-6-19
https://stackoverflow.com/questions/62469881/how-to-convert-docx-to-pdf-on-mac-os-with-python
I've looked up several SO and other web pages but I haven't found anything that works. The script I wrote, opens a docx, changes some words and then saves it in a certain folder as a docx. However, I want it to save it as a pdf but I don't know how to. This is an example of the code I'm working with: # Opening the original document doc = Document('./myDocument.docx') # Some code which changes the doc # Saving the changed doc as a docx doc.save('/my/folder/myChangedDocument.docx') The things I tried to do for it to save as a pdf: from docx2pdf import convert # This after it was saved as a docx convert('/my/folder/myChangedDocument.docx', '/my/folder/myChangedDocument.pdf') But it says that Word needs permission to open the saved file and I have to select the file to give it the permission. After that, it just says: 0%| | 0/1 [00:03<?, ?it/s] {'input': '/my/folder/contractsomeVariable.docx', 'output': '/my/folder/contractsomeVariable.pdf', 'result': 'error', 'error': 'Error: An error has occurred.'} And I tried to simply put .pdf instead of .docx after the document name when I saved it but that didn't work either as the module docx can't do that. So does someone know how I can save a docx as a pdf using Python?
You can use docx2pdf by making the changes first and then converting. Use pip to install it on Mac (I am guessing you already have pip, but it is still good to include): pip install docx2pdf Once docx2pdf is installed, you can put your docx file in inputFile and an empty .pdf file in outputFile. from docx2pdf import convert inputFile = "document.docx" outputFile = "document2.pdf" file = open(outputFile, "w") file.close() convert(inputFile, outputFile)
8
8
62,511,086
2020-6-22
https://stackoverflow.com/questions/62511086/how-to-document-kwargs-according-to-numpy-style-docstring
So, I've found posts related to other styles and I am aware of this NumPy page about the documentation but I am confused. I didn't understand how to add each kwargs to the parameters section of a method. This is from the given web page: def foo(var1, var2, *args, long_var_name='hi', **kwargs): r"""Summarize the function in one line. Several sentences providing an extended description. Refer to variables using back-ticks, e.g. `var`. Parameters ---------- var1 : array_like Array_like means all those objects -- lists, nested lists, etc. -- that can be converted to an array. We can also refer to variables like `var1`. var2 : int The type above can either refer to an actual Python type (e.g. ``int``), or describe the type of the variable in more detail, e.g. ``(N,) ndarray`` or ``array_like``. *args : iterable Other arguments. long_var_name : {'hi', 'ho'}, optional Choices in brackets, default first when optional. **kwargs : dict Keyword arguments. It is not clear how to add each kwargs here. I also saw this sphinx page "Example NumPy Style Python Docstring", here is the section about the kwargs: def module_level_function(param1, param2=None, *args, **kwargs): """This is an example of a module level function. Function parameters should be documented in the ``Parameters`` section. The name of each parameter is required. The type and description of each parameter is optional, but should be included if not obvious. If \*args or \*\*kwargs are accepted, they should be listed as ``*args`` and ``**kwargs``. The format for a parameter is:: name : type description The description may span multiple lines. Following lines should be indented to match the first line of the description. The ": type" is optional. Multiple paragraphs are supported in parameter descriptions. Parameters ---------- param1 : int The first parameter. param2 : :obj:`str`, optional The second parameter. *args Variable length argument list. **kwargs Arbitrary keyword arguments. Nope, I am still confused. Is it something like this? """ Dummy docstring. Parameters ---------- **kwargs: dict first_kwarg: int This is an integer second_kwarg: str This is a string """
Summary The **kwargs are not typically listed in the function, but instead the final destination of the **kwargs is mentioned. For example: **kwargs Instructions on how to decorate your plots. The keyword arguments are passed to `matplotlib.axes.Axes.plot()` If there are multiple possible targets, they are all listed (see below) If you happen to use some automation tool to interpolate and link your documentation, then you might list the possible keyword arguments in **kwargs for the convenience of the end users. This kind of approach is used in matplotlib, for example. (see below) How and when document **kwargs (Numpydoc) 1) When to use **kwargs? First thing to note here is that **kwargs should be used to pass arguments to underlying functions and methods. If the argument inside **kwargs would be used in the function (and not passed down), it should be written out as normal keyword argument, instead. 2) Where to put **kwargs decription? The location of **kwargs description is in the Parameters section. Sometimes it is appropriate to list them in the Other Parameters section, but remember: Other Parameters should only be used if a function has a large number of keyword parameters, to prevent cluttering the Parameters section. matplotlib.axes.Axes.grid has **kwargs in Parameters section. matplotlib.axes.Axes.plot has **kwargs in Other Parameters section (reasoning probably to large number of keyword arguments). 3) Syntax for **kwargs decription The syntax for the description for the **kwargs is, following Numpydoc styleguide Parameters ---------- ... (other lines) **kwargs : sometype Some description on what the kwargs are used for. or Parameters ---------- ... (other lines) **kwargs Some description on what the kwargs are used for. The one describing the type is more appropriate, as [source]. For the parameter types, be as precise as possible One exception for this is for example when the **kwargs could be passed to one of many functions based on other parameter values, as in seaborn.kdeplot. Then, the line for the type would become too long for describing all the types and it would be cleaner to use a bullet point list, which also describes the conditions on when the **kwargs are forwarded to where. Eg.: Parameters ---------- fill: bool or None If True, fill in the area under univariate density curves or between bivariate contours. If None, the default depends on multiple. **kwargs Other keyword arguments are passed to one of the following matplotlib functions: * matplotlib.axes.Axes.plot() (univariate, fill=False), * matplotlib.axes.Axes.fill_between() (univariate, fill=True), * matplotlib.axes.Axes.contour() (bivariate, fill=False), * matplotlib.axes.contourf() (bivariate, fill=True). You may also add listing of the valid keyword arguments in **kwargs like in matplotlib.axes.Axes.grid. Here is the interpolated python doc/text version: Parameters ---------- ... (other lines) **kwargs : `.Line2D` properties Define the line properties of the grid, e.g.:: grid(color='r', linestyle='-', linewidth=2) Valid keyword arguments are: Properties: agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha: float or None animated: bool antialiased or aa: bool clip_box: `.Bbox` clip_on: bool clip_path: Patch or (Path, Transform) or None color or c: color contains: unknown dash_capstyle: {'butt', 'round', 'projecting'} dash_joinstyle: {'miter', ' ... (more lines) This is convenient for the user, but challenging for the developer. 
In matplotlib this kind of luxury is made possible with the automatization using some special documentation decorators and linking1. Manual writing of allowed kwargs will surely become a code maintenance nightmare. 4) Notes related to **kwargs / Extended help Some additional info about the **kwargs could be included in the Notes section. For example matplotlib.axes.Axes.plot discusses marker styles, line styles and colors in the Notes section. [2] [1] They use a @docstring.dedent_interpd decorator which pulls the meaning of the kwargs to the final docs. So that is happening in place of %(Line2D:kwdoc)s, for example. [2] See: help(ax.plot) where ax is instance of matplotlib.axes.Axes.
11
11
62,493,718
2020-6-21
https://stackoverflow.com/questions/62493718/how-asyncio-sleep-isnt-blocking-thread
I'm reading 'Fluent Python' by Luciano Ramalho over and over, but I couldn't understand asyncio.sleep's behavior inside asyncio. The book says in one part: Never use time.sleep in asyncio coroutines unless you want to block the main thread, therefore freezing the event loop and probably the whole application as well. (...) it should yield from asyncio.sleep(DELAY). In another part: Every Blocking I/O function in the Python standard library releases the GIL (...) The time.sleep() function also releases the GIL. Since time.sleep() releases the GIL, code on other threads can run, but it blocks the current thread. Since asyncio is single-threaded, I understand that time.sleep blocks the asyncio loop. But how is asyncio.sleep() not blocking the thread? Is it possible to not delay the event loop and wait at the same time?
The function asyncio.sleep simply registers a future to be called in x seconds while time.sleep suspends the execution for x seconds. You can test how both behave with this small example and see how asyncio.sleep(1) doesn't actually give you any clue on how long it will "sleep" because it's not what it really does: import asyncio import time from datetime import datetime async def sleep_demo(): print("async sleep start 1s: ", datetime.now().time()) await asyncio.sleep(1) print("async sleep end: ", datetime.now().time()) async def I_block_everyone(): print("regular sleep start 3s: ", datetime.now().time()) time.sleep(3) print("regular sleep end: ", datetime.now().time()) asyncio.gather(*[sleep_demo(), I_block_everyone()]) This prints: async sleep start 1s: 04:46:55 regular sleep start 3s: 04:46:55 regular sleep end: 04:46:58 async sleep end: 04:46:58 The blocking call time.sleep prevents the event loop from scheduling the future that resumes sleep_demo. In the end, it gains control back only after approximately 3 seconds (even though we explicitly requested a 1 second async sleep). Now concerning "The time.sleep() function also releases the GIL.", this is not a contradiction as it will only allow another thread to execute (but the current thread will remain pending for x seconds). The two look somewhat similar: in one case the GIL is released to make room for another thread; with asyncio.sleep, the event loop gains control back to schedule another task.
10
10
62,453,756
2020-6-18
https://stackoverflow.com/questions/62453756/how-to-move-jupyter-notebook-cells-up-down-using-keyboard-shortcut
Does anyone know the keyboard shortcut to move cells up or down in a Jupyter notebook? I cannot find the shortcut, any clues?
Further to honeybadger's response, you can see when you open up the Edit Command Mode shortcuts dialog box that there are no shortcuts defined for moving a cell up and down, by default: I simply typed in my preferred combination Ctrl-Shift-Down and Ctrl-Shift-Up in the 'add shortcut' field, and pressed Enter. This is the same in Windows/Mac. Cheers!
39
8
62,528,272
2020-6-23
https://stackoverflow.com/questions/62528272/what-does-asyncio-create-task-do
What does asyncio.create_task() do? A bit of code that confuses me is this: import asyncio async def counter_loop(x, n): for i in range(1, n + 1): print(f"Counter {x}: {i}") await asyncio.sleep(0.5) return f"Finished {x} in {n}" async def main(): slow_task = asyncio.create_task(counter_loop("Slow", 4)) fast_coro = counter_loop("Fast", 2) print("Awaiting Fast") fast_val = await fast_coro print("Finished Fast") print("Awaiting Slow") slow_val = await slow_task print("Finished Slow") print(f"{fast_val}, {slow_val}") asyncio.run(main()) This outputs: 001 | Awaiting Fast 002 | Counter Fast: 1 003 | Counter Slow: 1 004 | Counter Fast: 2 005 | Counter Slow: 2 006 | Finished Fast 007 | Awaiting Slow 008 | Counter Slow: 3 009 | Counter Slow: 4 010 | Finished Slow 011 | Finished Fast in 2, Finished Slow in 4 I don't understand quite how this is working. Shouldn't the slow_task not be able to run until the completion of the fast_coro because it was never used in an asyncio.gather() method? Why do we have to await slow_task? Why is "Awaiting Slow" printed after the coroutine appears to have started? What really is a task? I know that what gather is doing is scheduling a task. And create_task supposedly creates a task.
What does asyncio.create_task() do? It submits the coroutine to run "in the background", i.e. concurrently with the current task and all other tasks, switching between them at await points. It returns an awaitable handle called a "task" which you can also use to cancel the execution of the coroutine. It's one of the central primitives of asyncio, the asyncio equivalent of starting a thread. (In the same analogy, awaiting the task with await is the equivalent of joining a thread.) Shouldn't the slow_task not be able to run until the completion of the fast_coro No, because you explicitly used create_task to start slow_task in the background. Had you written something like: slow_coro = counter_loop("Slow", 4) fast_coro = counter_loop("Fast", 2) fast_val = await fast_coro ...indeed slow_coro would not run because no one would have yet submitted it to the event loop. But create_task does exactly that: submit it to the event loop for execution concurrently with other tasks, the point of switching being any await. because it was never used in an asyncio.gather method? asyncio.gather is not the only way to achieve concurrency in asyncio. It's just a utility function that makes it easier to wait for a number of coroutines to all complete, and submit them to the event loop at the same time. create_task does just the submitting, it should have probably been called start_coroutine or something like that. Why do we have to await slow_task? We don't have to, it just serves to wait for both coroutines to finish cleanly. The code could have also awaited asyncio.sleep() or something like that. Returning from main() (and the event loop) immediately with some tasks still pending would have worked as well, but it would have printed a warning message indicating a possible bug. Awaiting (or canceling) the task before stopping the event loop is just cleaner. What really is a task? It's an asyncio construct that tracks execution of a coroutine in a concrete event loop. When you call create_task, you submit a coroutine for execution and receive back a handle. You can await this handle when you actually need the result, or you can never await it, if you don't care about the result. This handle is the task, and it inherits from Future, which makes it awaitable and also provides the lower-level callback-based interface, such as add_done_callback.
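A minimal self-contained sketch of the task handle described above (illustrative only; the coroutine names and delays are invented):
import asyncio

async def work(name, delay):
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Submitted to the event loop right away; runs concurrently with what follows
    task = asyncio.create_task(work("background", 0.1))

    await asyncio.sleep(0.05)   # the task makes progress during this await

    # Awaiting the handle later is the rough equivalent of joining a thread
    print(await task)           # background done

asyncio.run(main())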
131
157
62,532,559
2020-6-23
https://stackoverflow.com/questions/62532559/list-of-object-attributes-in-pydantic-model
I use Fast API to create a web service. There are following sqlAlchemy models: class User(Base): __tablename__ = 'user' account_name = Column(String, primary_key=True, index=True, unique=True) email = Column(String, unique=True, index=True, nullable=False) roles = relationship("UserRole", back_populates="users", lazy=False, uselist=True) class UserRole(Base): __tablename__ = 'user_role' __table_args__ = (UniqueConstraint('role_name', 'user_name', name='user_role_uc'),) role_name = Column(String, ForeignKey('role.name'), primary_key=True) user_name = Column(String, ForeignKey('user.account_name'), primary_key=True) users = relationship("User", back_populates="roles") Pydantic schemas are below: class UserRole(BaseModel): role_name: str class Config: orm_mode = True class UserBase(BaseModel): account_name: str email: EmailStr roles: List[UserRole] = [] class Config: orm_mode = True What I have now is: { "account_name": "Test.Test", "email": "[email protected]", "roles": [ { "role_name": "all:admin" }, { "role_name": "all:read" } ] } What I want to achieve is to get user from api in following structure: { "account_name": "Test.Test", "email": "[email protected]", "roles": [ "all:admin", "all:read" ] } Is that possible? How should I change schemas to get this?
If you are okay with handling the how to "get user from api" problem statement by modifying the fastapi path definition, see below. Can you change the response model used by the fastapi path definition in order to handle the desired output format? Example pydantic response model definition: class UserResponse(BaseModel): account_name: str email: EmailStr roles: List[str] Example sqlalchemy query + serialization function: def get_user_response(user_id) -> UserResponse: user = User.query.get(user_id) user_roles = UserRole.query.filter(user=user_id).all() role_names = [r.role_name for r in user_roles] response = UserResponse( account_name=user.account_name, email=user.email, roles=role_names ) return response Example fastapi path definition: @app.get("/users/{user_id}", response_model=UserResponse) async def read_item(user_id): return get_user_response(user_id) Considerations: I'm using a user_id for the user queries but this can be replaced with whatever you end up using as your primary key for that table. The UserResponse response model is very similar to UserBase (you could potentially subclass UserBase instead of model to avoid the redefinition of account_name and email, with the tradeoff of having to override the class' Config). There may be a way to override the serialization format of the UserBase sqlalchemy model object that gets automatically serialized when you query the model from the database and allows you to eliminate or reduce the code in the get_user_response() function in the example definition above.
9
6
62,444,612
2020-6-18
https://stackoverflow.com/questions/62444612/should-i-list-class-methods-in-the-class-docstring
I am a bit confused by the PEP257 standard for documenting classes. It says, "The docstring for a class should summarize its behavior and list the public methods and instance variables" But it also says that all functions should have dosctrings (which, of course, I want, so that help() works). But this seems to involve duplication i.e. class foo: """A class Attributes ---------- bar : str A string Methods ------- __init__(fish): Constructor, fish is a str which self.bar will be set to. baz(): A function which does blah """ def __init__(self, fish): """ Constructs an XRTProductRequest object. Parameters ---------- fish : str A string, which the self.bar attribute will be set to. """ etc... This then is rather error prone because it means that when I realise that __init__ also needs to recieve an int, then I have to remember to update the docs in 2 places, which I can guarantee I will forget. It also makes the pydoc output duplicated: it prints my class docstring, but then says, "Methods defined here" and goes on to list all of the methods, via their own docstrings. So, is this duplication really part of PEP257, or am I mis-reading it? Should I drop the "Methods"section of the class docstring, since each method has its own docstring? Or is this duplication really part of the standard? TIA
Yes, just drop the methods section from the class docstring. I've never ever seen something like that used. (It is used in a few places in the standard library.) The class docstring needs to just describe the class, and the docstrings of the individual methods then handle describing themselves. Also, the wording in the PEP to me means that the class docstring "should" list the public methods, but not describe them in any other way. (This is also how the above standard library example does it.) But as said, I would never even do that, since the code speaks for itself and that kind of listing is bound to get out of date. Final note: I personally prefer to use the Google docstring style, because to me it's the clearest and cleanest.
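As a rough sketch of what that leaves you with -- a short class docstring plus self-describing method docstrings (Google style here only because the answer mentions preferring it; the class itself is made up):
class Account:
    """A simple customer account.

    Attributes:
        balance: Current balance in cents.
    """

    def __init__(self, balance: int = 0):
        """Creates an account with an optional starting balance."""
        self.balance = balance

    def deposit(self, amount: int) -> int:
        """Adds amount to the balance and returns the new balance."""
        self.balance += amount
        return self.balance


help(Account)   # help()/pydoc shows each docstring exactly once, with no duplication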
8
9
62,456,558
2020-6-18
https://stackoverflow.com/questions/62456558/is-one-hot-encoding-required-for-using-pytorchs-cross-entropy-loss-function
For example, if I want to solve the MNIST classification problem, we have 10 output classes. With PyTorch, I would like to use the torch.nn.CrossEntropyLoss function. Do I have to format the targets so that they are one-hot encoded or can I simply use their class labels that come with the dataset?
nn.CrossEntropyLoss expects integer labels. What it does internally is that it doesn't end up one-hot encoding the class label at all, but uses the label to index into the output probability vector to calculate the loss should you decide to use this class as the final label. This small but important detail makes computing the loss easier and is the equivalent operation to performing one-hot encoding, measuring the output loss per output neuron as every value in the output layer would be zero with the exception of the neuron indexed at the target class. Therefore, there's no need to one-hot encode your data if you have the labels already provided. The documentation has some more insight on this: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html. In the documentation you'll see targets, which is one of the input parameters. These are your labels, and their description there clearly shows how the input should be shaped and what is expected. If you in fact wanted to one-hot encode your data, you would need to use torch.nn.functional.one_hot. To best replicate what the cross entropy loss is doing under the hood, you'd also need nn.functional.log_softmax as the final output and you'd have to additionally write your own loss layer since none of the PyTorch layers use log softmax inputs and one-hot encoded targets. However, nn.CrossEntropyLoss combines both of these operations together and is preferred if your outputs are simply class labels so there is no need to do the conversion.
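A minimal sketch of the point about plain integer labels (the random logits and label values are just for illustration; nothing here depends on the MNIST data itself):
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)            # batch of 4 samples, 10 classes
labels = torch.tensor([3, 7, 0, 9])    # plain class indices, no one-hot needed

loss = nn.CrossEntropyLoss()(logits, labels)
print(loss.item())

# Equivalent "manual" computation: log-softmax followed by NLL loss
manual = F.nll_loss(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(loss, manual))    # True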
19
27
62,543,342
2020-6-23
https://stackoverflow.com/questions/62543342/gunicorn-gevent-workers-vs-uvicorn-asgi
I'm currently developing a service in Django which makes use of a slow external API (takes about 10s to get a response), which means the connections to my server are kept open waiting for the external API to respond, and occupying worker time/resources. I know I can use gunicorn's thread or gevent workers to add concurrency, but can't seem to grasp the exact difference between using gunicorn with gevent workers and uvicorn (or any other server) with the asgi interface. What would be the criteria for using one over the other? Django still doesn't fully support async/await views. Would it be better if I just stick with gevent workers?
Gunicorn has a pre-fork worker model. A pre-fork worker model basically means a master process creates forks which handle each request. A fork is a completely separate *nix process. Uvicorn is an ASGI server running uvloop. Python async needs an event loop to use its async features, and uvloop is an alternative to the asyncio loop. So if you have async calls in your code, you could use uvloop internally or use uvicorn as your ASGI server. Note: you still have just a single event loop. You can choose to use gunicorn and uvicorn together to have multiple workers and use uvloop. In your Django application it makes sense to use uvloop internally (if you choose to) and use gunicorn (with uvicorn workers) as your ASGI server.
40
18
62,468,402
2020-6-19
https://stackoverflow.com/questions/62468402/query-parameters-from-pydantic-model
Is there a way to convert a pydantic model to query parameters in fastapi? Some of my endpoints pass parameters via the body, but some others pass them directly in the query. All this endpoints share the same data model, for example: class Model(BaseModel): x: str y: str I would like to avoid duplicating my definition of this model in the definition of my "query-parameters endpoints", like for example test_query in this code: class Model(BaseModel): x: str y: str @app.post("/test-body") def test_body(model: Model): pass @app.post("/test-query-params") def test_query(x: str, y: str): pass What's the cleanest way of doing this?
The documentation gives a shortcut to avoid this kind of repetitions. In this case, it would give: from fastapi import Depends @app.post("/test-query-params") def test_query(model: Model = Depends()): pass This will allow you to request /test-query-params?x=1&y=2 and will also produce the correct OpenAPI description for this endpoint. Similar solutions can be used for using Pydantic models as form-data descriptors.
38
45
62,461,847
2020-6-19
https://stackoverflow.com/questions/62461847/django-insert-or-update-record
the insert works but multiple data enters, when two data are inserted when I try to update, it does not update the record I just want that when the data is already have record in the database, just update. if not, it will insert def GroupOfProduct(request): global productOrderList relatedid_id = request.POST.get("relatedid") groups = ProductRelatedGroup(id=id) productOrderList=[] try: for products_id in request.POST.getlist("product"): products = Product(id=products_id) insert_update_groupofproduct = ProductRelatedGroupAndProduct( product = products ) insert_update_groupofproduct.save() except ProductRelatedGroupAndProduct.DoesNotExist: for products_id in request.GET.getlist("relatedid"): products = Product(id=products_id) insert_update_groupofproduct = ProductRelatedGroupAndProduct.objects.get(id=products) insert_update_groupofproduct.product = products insert_update_groupofproduct.save() return redirect(relatedgroup) this is my models.py class Product(models.Model): product = models.CharField(max_length=500) class ProductRelatedGroup(models.Model): category = models.CharField(max_length=500, blank=True) class ProductRelatedGroupAndProduct(models.Model): productrelatedgroup = models.ForeignKey(ProductRelatedGroup,on_delete=models.SET_NULL, null=True, blank=True,verbose_name="Product Related Group") product = models.ForeignKey(Product,on_delete=models.SET_NULL, null=True, blank=True,verbose_name="Product") UPDATE I tried this, the insert works fine, but the update does not work def GroupOfProduct(request): global productOrderList groups = ProductRelatedGroup(id=id) idproduct = request.POST.get('relatedid') if ProductRelatedGroupAndProduct.objects.filter(id=idproduct).exists(): print("update") for products_id in request.GET.getlist("relatedid"): products = Product(id=products_id) insert_update_groupofproduct = ProductRelatedGroupAndProduct.objects.get(id=products) insert_update_groupofproduct.product = products insert_update_groupofproduct.save() return redirect(relatedgroup) else: productOrderList = [] for isa in request.POST.getlist('relatedid'): productOrderList.append(isa) i = 0 for i in productOrderList: for products_id in request.POST.getlist("product"): products = Product(id=products_id) insert_update_groupofproduct = ProductRelatedGroupAndProduct( productrelatedgroup=groups, product=products ) insert_update_groupofproduct.save() return redirect(relatedgroup) return redirect(relatedgroup) FLOW OF MY PROGRAM (with picture) when the admin-user Insert data just (like this) the batch insert work perfectly fine and when i tried to update (batch update) only one data updated and when i tried to insert again (just like this) no result In Insert <QueryDict: {'csrfmiddlewaretoken': ['F3evgRJwNw4p5XCOVE0qeFhP3gmGG5ay4GBbpoZQg3P5l6TNXHY7KN2lD56s6NCU'], 'relatedid': ['200', '201']}> This is my html, {% for relatedgroups in relatedgroups %} <input type="hidden" name="relatedid" value="{{relatedgroups.id}}"> {% endfor %} <fieldset class="module aligned "> <div class="form-row field-user_permissions"> <div> <div class="related-widget-wrapper"> <select name="product" id="id_user_permissions" multiple class="selectfilter" data-field-name="Product" data-is-stacked="0"> {% for relatedgroups in relatedgroups %} <option value="{{relatedgroups.product.id}}" selected>{{relatedgroups.product}}</option> {% endfor %} {% for product in products %} <option value="{{product.id}}">{{product.id}}-{{product}}</option> {% endfor %} </select> </div> </div> </div> </fieldset> when the data is selected or already have 
data in the database, it will show in the Chosen Product box.
Please use the update_or_create method. If a matching record exists, this method updates its details; otherwise it inserts a new one. Reference: https://www.kite.com/python/docs/django.db.models.QuerySet.update_or_create https://djangosnippets.org/snippets/1114/ def GroupOfProduct(request): group_id = request.POST.get('group') groups = ProductRelatedGroup(id=group_id) idproduct = request.POST.get('product') for products_id in request.POST.getlist("product"): products = Product(id=products_id) ProductRelatedGroupAndProduct.objects.update_or_create(id=idproduct,defaults={'product':products,'productrelatedgroup':groups}) return redirect(relatedgroup)
10
8
62,494,622
2020-6-21
https://stackoverflow.com/questions/62494622/python-how-to-remove-default-options-on-typer-cli
I made a simple CLI using Typer and Pillow to change image opacity, and this program only has one option: opacity. But when I run python opacity.py --help it gives me the two Typer CLI completion options: Options: --install-completion [bash|zsh|fish|powershell|pwsh] Install completion for the specified shell. --show-completion [bash|zsh|fish|powershell|pwsh] Show completion for the specified shell, to copy it or customize the installation. --help Show this message and exit. Is there a way to disable them? I didn't find anything in the docs.
I met the same problem today. I couldn't find anything except this question, so I dived into the source to find how Typer automatically adds this line to the app, and I found this: when Typer initializes itself it automatically sets add_completion to True. class Typer: def __init__(add_completion: bool = True) So when you initialize your app you can add this: app = typer.Typer(add_completion=False) This is how it looks after you add add_completion=False: Usage: main.py [OPTIONS] COMMAND [ARGS]... Options: --help Show this message and exit.
12
17
62,436,302
2020-6-17
https://stackoverflow.com/questions/62436302/extract-target-from-tensorflow-prefetchdataset
I am still learning tensorflow and keras, and I suspect this question has a very easy answer I'm just missing due to lack of familiarity. I have a PrefetchDataset object: > print(tf_test) $ <PrefetchDataset shapes: ((None, 99), (None,)), types: (tf.float32, tf.int64)> ...made up of features and a target. I can iterate over it using a for loop: > for example in tf_test: > print(example[0].numpy()) > print(example[1].numpy()) > exit() $ [[-0.31 -0.94 -1.12 ... 0.18 -0.27] [-0.22 -0.54 -0.14 ... 0.33 -0.55] [-0.60 -0.02 -1.41 ... 0.21 -0.63] ... [-0.03 -0.91 -0.12 ... 0.77 -0.23] [-0.76 -1.48 -0.15 ... 0.38 -0.35] [-0.55 -0.08 -0.69 ... 0.44 -0.36]] [0 0 1 0 1 0 0 0 1 0 1 1 0 1 0 0 0 ... 0 1 1 0] However, this is very slow. What I'd like to do is access the tensor corresponding to the class labels and turn that into a numpy array, or a list, or any sort of iterable that can be fed into scikit-learn's classification report and/or confusion matrix: > y_pred = model.predict(tf_test) > print(y_pred) $ [[0.01] [0.14] [0.00] ... [0.32] [0.03] [0.00]] > y_pred_list = [int(x[0]) for x in y_pred] # assumes value >= 0.5 is positive prediction > y_true = [] # what I need help with > print(sklearn.metrics.confusion_matrix(y_true, y_pred_list) ...OR access the data such that it could be used in tensorflow's confusion matrix: > labels = [] # what I need help with > predictions = y_pred_list # could we just use a tensor? > print(tf.math.confusion_matrix(labels, predictions) In both cases, the general ability to grab the target data from the original object in a manner that isn't computationally expensive would be very helpful (and might help with my underlying intuitions re: tensorflow and keras). Any advice would be greatly appreciated.
You can convert it to a list with list(ds) and then recompile it as a normal Dataset with tf.data.Dataset.from_tensor_slices(list(ds)). From there your nightmare begins again but at least it's a nightmare that other people have had before. Note that for more complex datasets (e.g. nested dictionaries) you will need more preprocessing after calling list(ds), but this should work for the example you asked about. This is far from a satisfying answer but unfortunately the class is entirely undocumented and none of the standard Dataset tricks work.
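If the goal is just the label array (as in the question), a cheaper sketch is to pull only the label component out of each batch; this is illustrative only, with a small stand-in dataset in place of the question's tf_test:
import numpy as np
import tensorflow as tf

# Stand-in for tf_test: a batched, prefetched (features, label) dataset
ds = tf.data.Dataset.from_tensor_slices(
    (np.random.rand(8, 99).astype("float32"),
     np.array([0, 1, 0, 1, 1, 0, 0, 1], dtype="int64"))
).batch(4).prefetch(1)

# Concatenate the label tensor of every batch into one flat numpy array
y_true = np.concatenate([labels.numpy() for _, labels in ds])
print(y_true)   # usable directly with sklearn.metrics.confusion_matrix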
34
15
62,436,243
2020-6-17
https://stackoverflow.com/questions/62436243/attributeerror-smote-object-has-no-attribute-validate-data
I'm resampling my data (multiclass) by using SMOTE. sm = SMOTE(random_state=1) X_res, Y_res = sm.fit_resample(X_train, Y_train) However, I'm getting this attribute error. Can anyone help?
Short answer You need to upgrade scikit-learn to version 0.23.1. Long answer The newest version 0.7.0 of imbalanced-learn seems to have an undocumented dependency on scikit-learn v0.23.1. It would give you AttributeError: 'SMOTE' object has no attribute '_validate_data' if your scikit-learn is 0.22 or below. If you are using Anaconda, installing scikit-learn version 0.23.1 might be tricky. conda update scikit-learn might not update scikit-learn to version 0.23 or higher because the newest scikit-learn version Conda has at this point in time is 0.22.1. If you try to install it using conda install scikit-learn=0.23.1 or pip install scikit-learn==0.23.1, you will get tons of compatibility checks and installation might not be quick. Therefore the easiest way to install scikit-learn version 0.23.1 in Anaconda is to create a new virtual environment with minimal packages so that there are fewer or no conflict issues. Then, in the new virtual environment install scikit-learn version 0.23.1 followed by version 0.7.0 of imbalanced-learn. conda create -n test python=3.7.6 conda activate test pip install scikit-learn==0.23.1 pip install imbalanced-learn==0.7.0 Finally, you need to reinstall your IDE in the new virtual environment in order to use these packages. However, once scikit-learn version 0.23.1 becomes available in Conda and there are no compatibility issues, you can install it in the base environment directly.
18
20
62,442,212
2020-6-18
https://stackoverflow.com/questions/62442212/aws-elastic-beanstalk-container-commands-failing
I've been having a hard time trying to get a successful deployment of my Django Web App to AWS' Elastic Beanstalk. I am able to deploy my app from the EB CLI on my local machine with no problem at all until I add a list of container_commands config file inside a .ebextensions folder. Here are the contents of my config file: container_commands: 01_makeAppMigrations: command: "django-admin.py makemigrations" leader_only: true 02_migrateApps: command: "django-admin.py migrate" leader_only: true 03_create_superuser_for_django_admin: command: "django-admin.py createfirstsuperuser" leader_only: true 04_collectstatic: command: "django-admin.py collectstatic --noinput" I've dug deep into the logs and found these messages in the cfn-init-cmd.log to be the most helpful: 2020-06-18 04:01:49,965 P18083 [INFO] Config postbuild_0_DjangoApp_smt_prod 2020-06-18 04:01:49,991 P18083 [INFO] ============================================================ 2020-06-18 04:01:49,991 P18083 [INFO] Test for Command 01_makeAppMigrations 2020-06-18 04:01:49,995 P18083 [INFO] Completed successfully. 2020-06-18 04:01:49,995 P18083 [INFO] ============================================================ 2020-06-18 04:01:49,995 P18083 [INFO] Command 01_makeAppMigrations 2020-06-18 04:01:49,998 P18083 [INFO] -----------------------Command Output----------------------- 2020-06-18 04:01:49,998 P18083 [INFO] /bin/sh: django-admin.py: command not found 2020-06-18 04:01:49,998 P18083 [INFO] ------------------------------------------------------------ 2020-06-18 04:01:49,998 P18083 [ERROR] Exited with error code 127 I'm not sure why it can't find that command in this latest environment. I've deployed this same app with this same config file to a prior beanstalk environment with no issues at all. The only difference now is that this new environment was launched within a VPC and is using the latest recommended platform. Old Beanstalk environment platform: Python 3.6 running on 64bit Amazon Linux/2.9.3 New Beanstalk environment platform: Python 3.7 running on 64bit Amazon Linux 2/3.0.2 I've ran into other issues during this migration related to syntax updates with this latest platform. I'm hoping this issue is also just a simple syntax issue, but I've dug far and wide with no luck... If someone could point out something obvious that I'm missing here, I would greatly appreciate it! Please let me know if I can provide some additional info!
Finally got to the bottom of it all, after deep-diving through the AWS docs and forums... Essentially, there were a lot of changes that came along with Beanstalk moving from Amazon Linux to Amazon Linux 2. A lot of these changes are vaguely mentioned here. One major difference for the Python platform as mentioned in the link above is that "the path to the application's directory on Amazon EC2 instances of your environment is /var/app/current. It was /opt/python/current/app on Amazon Linux AMI platforms." This is crucial for when you're trying to create the Django migrate scripts as I'll explain further in detail below, or when you eb ssh into the Beanstalk instance and navigate it yourself. Another major difference is the introduction of Platform hooks, which is mentioned in this wonderful article here. According to this article, "Platform hooks are a set of directories inside the application bundle that you can populate with scripts." Essentially these scripts will now handle what the previous container_commands handled in the .ebextensions config files. Here is the directory structure of these Platform hooks: Knowing this, and walking through this forum here, where wonderful community members went through the trouble of filling in the gaps in Amazon's docs, I was able to successfully deploy with the following file set up: (Please note that "MDGOnline" is the name of my Django app) .ebextensions\01_packages.config: packages: yum: git: [] postgresql-devel: [] libjpeg-turbo-devel: [] .ebextensions\django.config: container_commands: 01_sh_executable: command: find .platform/hooks/ -type f -iname "*.sh" -exec chmod +x {} \; option_settings: aws:elasticbeanstalk:application:environment: DJANGO_SETTINGS_MODULE: MDGOnline.settings aws:elasticbeanstalk:environment:proxy:staticfiles: /static: static /static_files: static_files aws:elasticbeanstalk:container:python: WSGIPath: MDGOnline.wsgi:application .platform\hooks\predeploy\01_migrations.sh: #!/bin/bash source /var/app/venv/*/bin/activate cd /var/app/staging python manage.py makemigrations python manage.py migrate python manage.py createfirstsuperuser python manage.py collectstatic --noinput Please note that the '.sh' scripts need to be linux-based. I ran into an error for a while where the deployment would fail and provide this message in the logs: .platform\hooks\predeploy\01_migrations.sh failed with error fork/exec .platform\hooks\predeploy\01_migrations.sh: no such file or directory . Turns out this was due to the fact that I created this script on my windows dev environment. My solution was to create it on the linux environment, and copy it over to my dev environment directory within Windows. There are methods to convert DOS to Unix out there I'm sure. This one looks promising dos2unix! I really wish AWS could document this migration better, but I hope this answer can save someone the countless hours I spent getting this deployment to succeed. Please feel free to ask me for clarification on any of the above! EDIT: I've added a "container_command" to my config file above as it was brought to my attention that another user also encountered the "permission denied" error for the platform hook when deploying. This "01_sh_executable" command is to chmod all of the .sh scripts within the hooks directory of the app, so that Elastic Beanstalk can have the proper permission to execute them during the deployment process. I found this container command solution in this forum here:
20
32
62,457,956
2020-6-18
https://stackoverflow.com/questions/62457956/decorators-on-python-abstractmethods
I have an abstract base class in Python which defines an abstract method. I want to decorate it with a timer function such that every class extending and implementing this base class is timed and doesn't need to be manually annotated. Here's what I have:

import functools
import time
import abc


class Test(metaclass=abc.ABCMeta):
    @classmethod
    def __subclasshook__(cls, subclass):
        return (hasattr(subclass, 'apply') and callable(subclass.apply))

    @abc.abstractmethod
    def apply(self, a: str) -> str:
        raise NotImplementedError

    def timer(func):
        @functools.wraps(func)
        def wrapper_timer(*args, **kwargs):
            start_time = time.perf_counter()
            value = func(*args, **kwargs)
            end_time = time.perf_counter()
            run_time = end_time - start_time
            print(f"Finished {func.__name__!r} in {run_time:.4f} secs")
            return value
        return wrapper_timer

    def __getattribute__(self, name):
        if name == "apply":
            func = getattr(type(self), "apply")
            return self.timer(func)
        return object.__getattribute__(self, name)


class T2(Test):
    def apply(self, a: str) -> str:
        return a


if __name__ == '__main__':
    t = T2()
    t.apply('a')

The error I get is as follows:

Traceback (most recent call last):
  File "/Users/blah/test.py", line 41, in <module>
    t.apply('a')
  File "/Users/blah/test.py", line 20, in wrapper_timer
    value = func(*args, **kwargs)
TypeError: apply() missing 1 required positional argument: 'a'

I think I understand the error: Python thinks that the apply method of the T2() object is a classmethod. However, I am not sure why, given that I call getattr(type(self), "apply"). Is there a way to get the instance method?
Use __init_subclass__ to apply the timer decorator for you. (timer, by the way, doesn't need to be defined in the class; it's more general than that.) __init_subclass__ is also a more appropriate place to determine if apply is callable.

import abc
import functools
import time


def timer(func):
    @functools.wraps(func)
    def wrapper_timer(*args, **kwargs):
        start_time = time.perf_counter()
        value = func(*args, **kwargs)
        end_time = time.perf_counter()
        run_time = end_time - start_time
        print(f"Finished {func.__name__!r} in {run_time:.4f} secs")
        return value
    return wrapper_timer


class Test(metaclass=abc.ABCMeta):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # ABCMeta doesn't let us get this far if cls.apply isn't defined
        if not callable(cls.apply):
            raise TypeError("apply not callable")
        cls.apply = timer(cls.apply)

    @abc.abstractmethod
    def apply(self, a: str) -> str:
        raise NotImplementedError


class T2(Test):
    def apply(self, a: str) -> str:
        return a


if __name__ == '__main__':
    t = T2()
    t.apply('a')
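A small extension of the example above (the class name T3 is invented here purely for illustration): because __init_subclass__ runs for every new subclass, any further implementation of apply is timed automatically, with no extra decorator needed at the subclass level.

class T3(Test):
    def apply(self, a: str) -> str:
        return a * 2


T3().apply('b')  # also prints a "Finished 'apply' in ... secs" line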
9
7
62,495,112
2020-6-21
https://stackoverflow.com/questions/62495112/aligning-and-cropping-same-scene-images
Hello, I have different images taken using exposure bracketing (same scene, different exposures). I need to align the images and crop each one of them so that they match exactly, since there was camera shake when these images were taken. I don't want to merge them; I just want to cut, rotate, scale, etc. each one so that they are exactly aligned, then save them. I would have added a code sample if I knew how to do this, but I have no idea; I'm new to OpenCV. Here's a real example of a sample (this sample has a huge misalignment; most of the samples need just small adjustments because of shaking, unlike this one). What I need is to crop each one of the images to make them identical (keep only the shared area). Thank you!
In this answer I describe an approach to achieve image alignment which consists of using the euclidean model to transform image 1 (on the left) and image 3 (on the right) according to image 2, the center image.

However, I would like to quickly point out that the images shared are very challenging: not only is there a big difference in contrast, but they also have a significant difference in translation, scale, rotation, and maybe even a little bit of shearing. The fact that they were posted at very low resolution also doesn't help. Anyway, I pointed out their differences in terms of 2D transformations because that's something you always need to keep in mind when selecting the appropriate model to perform image alignment. A tutorial that describes these models in greater detail includes a nice picture of them.

The approach consists of the following steps:

Use the Enhanced Correlation Coefficient (ECC) algorithm to perform image alignment using the euclidean model between images 2 and 1;
The returned transformation matrix is then used to transform image 1 with the help of cv2.warpAffine() and to calculate the approximate rectangular area of the transformation in image 1;
Repeat the same steps to transform image 3: use the ECC algorithm to perform image alignment using the euclidean model between images 2 and 3;
The returned transformation matrix is then used to transform image 3 with the help of cv2.warpAffine() and to calculate the approximate rectangular area of the transformation.

The results of these operations are the aligned images, with green rectangles showing the area of transformation in images 1 and 3. The red rectangle in the center image, the reference image used to create the model for the transformations, is the intersection between the areas on images 1 and 3 and can be seen as the common area between all 3 images. The red rectangle can then be used to crop images 1, 2, and 3 and give that nice look of image alignment. Notice how the terrain and the sky on all these images appear to be perfectly aligned with each other.

It's interesting to notice the cost of this approach: because image 1 didn't capture all those terrain features that can be easily seen on images 2 and 3, the final result is that images 2 and 3 end up losing that part of the terrain. Thus, all the images show the exact same area of the photo.
Python source code:

###
# reference:
# https://www.learnopencv.com/image-alignment-ecc-in-opencv-c-python/
###
import numpy as np
import cv2


# internalRect: returns the intersection between two rectangles
#
#  p1 ---------------- p2
#   |                  |
#   |                  |
#   |                  |
#  p4 ---------------- p3
def internalRect(r1, r2):
    x = 0
    y = 1
    w = 2
    h = 3

    rect1_pt1 = [ r1[x], r1[y] ]
    rect1_pt2 = [ r1[x]+r1[w], r1[y] ]
    rect1_pt3 = [ r1[x]+r1[w], r1[y]+r1[h] ]
    rect1_pt4 = [ r1[x], r1[y]+r1[h] ]

    rect2_pt1 = [ r2[x], r2[y] ]
    rect2_pt2 = [ r2[x]+r2[w], r2[y] ]
    rect2_pt3 = [ r2[x]+r2[w], r2[y]+r2[h] ]
    rect2_pt4 = [ r2[x], r2[y]+r2[h] ]

    int_pt1 = [ max(rect1_pt1[x], rect2_pt1[x]), max(rect1_pt1[y], rect2_pt1[y]) ]
    int_pt2 = [ min(rect1_pt2[x], rect2_pt2[x]), max(rect1_pt2[y], rect2_pt2[y]) ]
    int_pt3 = [ min(rect1_pt3[x], rect2_pt3[x]), min(rect1_pt3[y], rect2_pt3[y]) ]
    int_pt4 = [ max(rect1_pt4[x], rect2_pt4[x]), min(rect1_pt4[y], rect2_pt4[y]) ]

    rect = [ int_pt1[x], int_pt1[y], int_pt2[x]-int_pt1[x], int_pt4[y]-int_pt1[y] ]
    return rect


# align_image: use src1 as the reference image to transform src2
def align_image(src1, src2, warp_mode=cv2.MOTION_TRANSLATION):
    # convert images to grayscale
    img1_gray = cv2.cvtColor(src1, cv2.COLOR_BGR2GRAY)
    img2_gray = cv2.cvtColor(src2, cv2.COLOR_BGR2GRAY)

    # define 2x3 or 3x3 matrices and initialize it to a identity matrix
    if warp_mode == cv2.MOTION_HOMOGRAPHY:
        warp_matrix = np.eye(3, 3, dtype=np.float32)
    else:
        warp_matrix = np.eye(2, 3, dtype=np.float32)

    # number of iterations:
    num_iters = 1000

    # specify the threshold of the increment in the correlation coefficient between two iterations
    termination_eps = 1e-8

    # Define termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, num_iters, termination_eps)

    print('findTransformECC() may take a while...')

    # perform ECC: use the selected model to calculate the transformation required to align src2 with src1.
    # The resulting transformation matrix is stored in warp_matrix:
    (cc, warp_matrix) = cv2.findTransformECC(img1_gray, img2_gray, warp_matrix, warp_mode, criteria, inputMask=None, gaussFiltSize=1)

    if (warp_mode == cv2.MOTION_HOMOGRAPHY):
        img2_aligned = cv2.warpPerspective(src2, warp_matrix, (src1.shape[1], src1.shape[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    else:
        # use warpAffine() for: translation, euclidean and affine models
        img2_aligned = cv2.warpAffine(src2, warp_matrix, (src1.shape[1], src1.shape[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP, borderMode=cv2.BORDER_CONSTANT, borderValue=0)

    #print('warp_matrix shape', warp_matrix.shape, 'data=\n', warp_matrix)
    #print(warp_matrix, warp_matrix)

    # compute the cropping area to remove the black bars from the transformed image
    x = 0
    y = 0
    w = src1.shape[1]
    h = src1.shape[0]

    if (warp_matrix[0][2] < 0):
        x = warp_matrix[0][2] * -1
        w -= x
    if (warp_matrix[1][2] < 0):
        y = warp_matrix[1][2] * -1
        h -= y
    if (warp_matrix[1][2] > 0):
        h -= warp_matrix[1][2]

    matchArea = [ int(x), int(y), int(w), int(h) ]

    #print('src1 w=', src1.shape[1], 'h=', src1.shape[0])
    #print('matchedRect=', matchArea[0], ',', matchArea[1], '@', matchArea[2], 'x', matchArea[3], '\n')
    return img2_aligned, matchArea


##########################################################################################

img1 = cv2.imread("img1.png")
img2 = cv2.imread("img2.png")
img3 = cv2.imread("img3.png")

# TODO: adjust contrast on all input images

###
# resize images to be the same size as the smallest image for debug purposes
###
max_h = img1.shape[0]
max_h = max(max_h, img2.shape[0])
max_h = max(max_h, img3.shape[0])
max_w = img1.shape[1]
max_w = max(max_w, img2.shape[1])
max_w = max(max_w, img3.shape[1])

img1_padded = cv2.resize(img1, (max_w, max_h), interpolation=cv2.INTER_AREA)
img2_padded = cv2.resize(img2, (max_w, max_h), interpolation=cv2.INTER_AREA)
img3_padded = cv2.resize(img3, (max_w, max_h), interpolation=cv2.INTER_AREA)

# stack them horizontally for display
hStack = np.hstack((img1_padded, img2_padded))      # stack images side-by-side
input_stacked = np.hstack((hStack, img3_padded))    # stack images side-by-side
cv2.imwrite("input_stacked.jpg", input_stacked)
cv2.imshow("input_stacked", input_stacked)
cv2.waitKey(0)

###
# perform image alignment
###

# specify the motion model
warp_mode = cv2.MOTION_EUCLIDEAN   # cv2.MOTION_TRANSLATION, cv2.MOTION_EUCLIDEAN, cv2.MOTION_AFFINE, cv2.MOTION_HOMOGRAPHY

# for testing purposes: img2 will be the reference image
img1_aligned, matchArea1 = align_image(img2, img1, warp_mode)
img1_aligned_cpy = img1_aligned.copy()
cv2.rectangle(img1_aligned_cpy, (matchArea1[0], matchArea1[1]), (matchArea1[0]+matchArea1[2], matchArea1[1]+matchArea1[3]), (0, 255, 0), 2)
cv2.imwrite("img1_aligned.jpg", img1_aligned_cpy)

print('\n###############################################\n')

# for testing purposes: img2 will be the reference image again
img3_aligned, matchArea3 = align_image(img2, img3, warp_mode)
img3_aligned_cpy = img3_aligned.copy()
cv2.rectangle(img3_aligned_cpy, (matchArea3[0], matchArea3[1]), (matchArea3[0]+matchArea3[2], matchArea3[1]+matchArea3[3]), (0, 255, 0), 2)
cv2.imwrite("img3_aligned.jpg", img3_aligned_cpy)

# compute the crop area in the reference image and draw a red rectangle
cropRect = internalRect(matchArea1, matchArea3)
print('cropRect=', cropRect[0], ',', cropRect[1], '@', cropRect[2], 'x', cropRect[3], '\n')

img2_eq_cpy = img2.copy()
cv2.rectangle(img2_eq_cpy, (cropRect[0], cropRect[1]), (cropRect[0]+cropRect[2], cropRect[1]+cropRect[3]), (0, 0, 255), 2)
cv2.imwrite("img2_eq.jpg", img2_eq_cpy)

# stack results horizontally for display
res_hStack = np.hstack((img1_aligned_cpy, img2_eq_cpy))         # stack images side-by-side
aligned_stacked = np.hstack((res_hStack, img3_aligned_cpy))     # stack images side-by-side
cv2.imwrite("aligned_stacked.jpg", aligned_stacked)
cv2.imshow("aligned_stacked", aligned_stacked)
cv2.waitKey(0)

print('\n###############################################\n')

# crop images to the smallest internal area between them
img1_aligned_cropped = img1_aligned[cropRect[1] : cropRect[1]+cropRect[3], cropRect[0] : cropRect[0]+cropRect[2]]
img3_aligned_cropped = img3_aligned[cropRect[1] : cropRect[1]+cropRect[3], cropRect[0] : cropRect[0]+cropRect[2]]
img2_eq_cropped = img2[cropRect[1] : cropRect[1]+cropRect[3], cropRect[0] : cropRect[0]+cropRect[2]]

cropped_hStack = np.hstack((img1_aligned_cropped, img2_eq_cropped))     # stack images side-by-side
cropped_stacked = np.hstack((cropped_hStack, img3_aligned_cropped))     # stack images side-by-side
cv2.imwrite("cropped_stacked.jpg", cropped_stacked)
cv2.imshow("cropped_stacked", cropped_stacked)
cv2.waitKey(0)
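One hedged note on the "# TODO: adjust contrast on all input images" line in the script above: since the bracketed exposures differ a lot in brightness, equalizing the grayscale histograms before calling findTransformECC() may help the ECC optimization converge when the exposure gap is large. A minimal sketch, using the same file names as the script:

import cv2

# sketch: equalize the grayscale histograms of two differently exposed inputs
# before feeding them to the ECC alignment step
img1_gray = cv2.equalizeHist(cv2.cvtColor(cv2.imread("img1.png"), cv2.COLOR_BGR2GRAY))
img2_gray = cv2.equalizeHist(cv2.cvtColor(cv2.imread("img2.png"), cv2.COLOR_BGR2GRAY))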
9
12
62,545,411
2020-6-23
https://stackoverflow.com/questions/62545411/how-to-create-a-title-with-a-newline-for-a-chart-in-plotly
I need to create a plot in Plotly with a newline in the title. Preferably the text on the top line is larger than the second line, but that is not necessary.

fig.update_layout(
    title=go.layout.Title(
        text=title,
        xref="paper",
        x=0.5,
    ),
)

For the title I have tried:

title = "Hello \n World"
title = "Hello" + "\n" + "World"
title = "$Hello \\ World$"
title = "$Hello \newline World$"

All run without error, but the newline is ignored. I am not sure how to implement this. Thank you for the help.
You can add a <br> tag to the title and it will be interpreted as an HTML line break. This is supported by this doc. You can see the result in the following example:

from dash import Dash
import dash_core_components as dcc
import dash_html_components as html

external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app = Dash(__name__, external_stylesheets=external_stylesheets)

app.layout = html.Div(children=[
    dcc.Graph(
        id='example-graph',
        figure={
            'data': [
                {'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'SF'},
                {'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'},
            ],
            'layout': {
                'title': 'Dash Data <br> Visualization'
            }
        }
    )
])

if __name__ == '__main__':
    app.run_server(debug=True)
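To tie this back to the update_layout call in the question, here is a minimal sketch using plotly.graph_objects directly (the bar data is invented for illustration); the <br> inside the title text is what produces the line break:

import plotly.graph_objects as go

fig = go.Figure(data=go.Bar(x=[1, 2, 3], y=[4, 1, 2]))
fig.update_layout(
    title=go.layout.Title(
        text="Hello<br>World",  # <br> is rendered as a line break in the title
        xref="paper",
        x=0.5,
    ),
)
fig.show()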
11
34
62,543,302
2020-6-23
https://stackoverflow.com/questions/62543302/python-pylintraising-format-tuple-exception-arguments-suggest-string-formattin
With a simple custom exception class defined as:

class MyError(Exception):
    pass

And this call:

foo = 'Some more info'
raise MyError("%s: there was an error", foo)

pylint gives:

Exception arguments suggest string formatting might be intended pylint(raising-format-tuple)

What does this message mean?
Any one of these fixes the message, depending on your version of Python:

foo = 'Some more info'
raise MyError("%s: there was an error" % foo)
raise MyError("{}: there was an error".format(foo))
raise MyError(f"{foo}: there was an error")

The message is triggered when pylint sees a %s placeholder in the string and the value is passed as a separate argument instead of being formatted into the string. Instead of raising an exception with the string "Some more info: there was an error", you'll get an exception whose arguments are a tuple, where the first element is the literal "%s: there was an error" and the second is the contents of foo. This is likely not the intended effect.

In the code I was using, there was extensive use of logging, and I suspect the original author confused the exception raising with lazy logging.
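To make the lazy-logging comparison concrete, here is a minimal sketch (the logger setup and exception class are included only so it runs standalone): the comma-separated form is correct for the logging API, which defers the %s formatting, but an exception constructor needs the string formatted up front.

import logging

class MyError(Exception):
    pass

logging.basicConfig()
logger = logging.getLogger(__name__)
foo = 'Some more info'

# fine for logging: the logging module applies the %s formatting lazily
logger.error("%s: there was an error", foo)

# for exceptions, format the message yourself before raising
raise MyError("%s: there was an error" % foo)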
9
19
62,502,668
2020-6-21
https://stackoverflow.com/questions/62502668/can-flake8-fix-my-python-whitespace-problems
I just learned about flake8, which calls itself "Flake8: Your Tool For Style Guide Enforcement." While flake8 will find many Python whitespace errors and enforce PEP 8, it does not appear to have an option to automatically fix problematic Python code. autopep8 does appear to have this option (called --in-place), but flake8 seems to have much wider support. Is there a way to make flake8 fix my code?
No, flake8 is a linter only; that is, it only checks your code. (Technically, flake8 doesn't even check your code itself: it is just a framework for other linters to plug into, and it provides inclusion / exclusion / etc. on top of other tools.) If you want something that fixes your code, you'll want a code formatter to do code formatting (such as autopep8 / add-trailing-comma / yapf / black / etc.).
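If the goal is simply to clean up whitespace before flake8 checks it, here is a minimal sketch of the formatter route (assuming autopep8 is installed; fix_code is its documented Python entry point, and the sample string is made up):

import autopep8

messy = "x=1\ny =  2\n"
print(autopep8.fix_code(messy))  # prints the source with PEP 8 whitespace fixes applied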
21
29
62,539,255
2020-6-23
https://stackoverflow.com/questions/62539255/grouping-two-numpy-arrays-to-a-dict-of-lists
I have two large NumPy arrays, each with a shape of (519990,), that look something like this:

Order = array([0, 0, 0, 5, 6, 10, 14, 14, 14, 23, 23, 39])
Letters = array([A, B, C, D, E, F, G, H, I, J, K, L])

As you can see, the first array is always sorted in ascending order and its values are non-negative. I would like to group the Letters by Order so the result looks like this:

{0:[A,B,C], 5:[D], 6:[E], 10:[F], 14:[G, H, I], 23:[J, K], 39:[L]}

The code I have to do this is:

df = pd.DataFrame()
df['order'] = Order
df['letters'] = Letters
linearDict = df.groupby('order').apply(lambda dfg: dfg.drop('order', axis=1).to_dict(orient='list')).to_dict()
endProduct = {}
for k, v in linearDict.items():
    endProduct[k] = np.array(linearDict[k]['letters'][0:])

endProduct = {0:array([A,B,C]), 5:array([D]), 6:array([E]), 10:array([F]), 14:array([G, H, I]), 23:array([J, K]), 39:array([L])}

My problem is that this process is BEYOND slow. It's such a drain on the system that it causes my Jupyter Notebook to crash. Is there a faster way of doing this?
We could leverage the fact that Order is sorted to simply slice Letters after getting the interval indices, like so:

def numpy_slice(Order, Letters):
    Order = np.asarray(Order)
    Letters = np.asarray(Letters)
    idx = np.flatnonzero(np.r_[True, Order[:-1] != Order[1:], True])
    return {Order[i]: Letters[i:j] for (i, j) in zip(idx[:-1], idx[1:])}

Sample run -

In [66]: Order
Out[66]: array([16, 16, 16, 16, 23, 30, 33, 33, 39, 39, 39, 39, 39, 39, 39])

In [67]: Letters
Out[67]:
array(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O'], dtype='<U1')

In [68]: numpy_slice(Order, Letters)
Out[68]:
{16: array(['A', 'B', 'C', 'D'], dtype='<U1'),
 23: array(['E'], dtype='<U1'),
 30: array(['F'], dtype='<U1'),
 33: array(['G', 'H'], dtype='<U1'),
 39: array(['I', 'J', 'K', 'L', 'M', 'N', 'O'], dtype='<U1')}
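An alternative sketch that leans on NumPy's built-ins (the function name unique_split is invented here, and it makes the same assumption that Order is sorted): np.unique can return the first index of each key, and np.split then slices Letters at those positions.

import numpy as np

def unique_split(Order, Letters):
    Order = np.asarray(Order)
    Letters = np.asarray(Letters)
    # first_idx holds the position of the first occurrence of each key in the sorted Order array
    keys, first_idx = np.unique(Order, return_index=True)
    # split Letters at those boundaries and pair each slice with its key
    return dict(zip(keys, np.split(Letters, first_idx[1:])))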
9
8