question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
69,923,305 | 2021-11-11 | https://stackoverflow.com/questions/69923305/is-python-available-for-windows-on-arm | I know that Python is available for Mac and Linux on ARM because I have Python installed via Homebrew on Mac and it's ARM and I have Python installed via apt on Ubuntu and it's ARM. However, I can't find any download links for Python for Windows on ARM. The Windows download link on the Python website contains amd64 so it is for x86, not ARM, and the list of all links on the Releases page only contains links for 32-bit and 64-bit x86. Is there any way to get Python natively on ARM on Windows? Maybe there's a beta or experimental build, or an unofficial build? | It will be supported starting with Python 3.11: https://bugs.python.org/issue33125 I'll probably make the ARM64 packages available through the Windows Store for 3.11's prereleases, and possibly as a side-loadable MSIX from python.org. The same ticket also mentions an unofficial build for testing purposes: https://www.nuget.org/packages/pythonarm64/ | 6 | 5 |
69,920,761 | 2021-11-10 | https://stackoverflow.com/questions/69920761/how-to-hide-internal-modules-in-a-packages-namespace | My Python package, tinted, finally works. However, when I run dir(tinted), the core.py and sequences.py files exist in the package! I want only the function tint to be included in the package. Current output of dir(tinted): ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'core', 'sequences', 'tint'] Desired output of dir(tinted): ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'tint'] This is my first time ever uploading a package to PyPi, so please forgive me if I've done anything wrong. Anyone know how to fix this? | Let's say you have your Python package in a folder named tinted. You have a function named tint defined in module core.py inside that folder. But you want to expose that function at the top level of the package's namespace. Which is good practice for important functions, they should be at the top level. Then in __init__.py you would have this: from .core import tint (Or from tinted.core import tint if you happen to use absolute imports.) However, as you have noted, this also puts the core module into the package's top-level namespace: >>> import tinted >>> dir(tinted) ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'core', 'tint'] If you want to keep that namespace squeaky-clean, you can remove the internal module after the import in __init__.py. Like so: from .core import tint del core Then: >>> dir(tinted) ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'tint'] This is however not commonly done. Even packages in Python's standard library don't usually remove these kind of imports. For example, the logging package imports threading and just leaves it there. Though sometimes they do clean up a bit. Like the copy module, which does del types, weakref at the end. So there's nothing wrong with it either. In this case, the object was a function. Just note that, when it's a class, you cannot completely hide the package structure anyway. Consider this example from the popular Pandas library: >>> import pandas >>> pandas.DataFrame <class 'pandas.core.frame.DataFrame'> | 5 | 2 |
69,918,148 | 2021-11-10 | https://stackoverflow.com/questions/69918148/deprecationwarning-executable-path-has-been-deprecated-please-pass-in-a-servic | I started a selenium tutorial today and have run into this error when trying to run the code. I've tried other methods but ultimately get the same error. I'm on MacOS using VSC. My Code: from selenium import webdriver PATH = '/Users/blutch/Documents/Chrom Web Driver\chromedriver.exe' driver = webdriver.Chrome(PATH) driver.get("https://www.google.com") I've also tried inserting C: in front of /Users. Can anyone guide me on why this is happening/how to fix it? | This error message... DeprecationWarning: executable_path has been deprecated, please pass in a Service object ...implies that the key executable_path will be deprecated in the upcoming releases. This change is inline with the Selenium 4.0 Beta 1 changelog which mentions: Deprecate all but Options and Service arguments in driver instantiation. (#9125,#9128) Solution Once the key executable_path is deprecated you have to use an instance of the Service() class as follows: from selenium import webdriver from selenium.webdriver.chrome.service import Service s = Service('C:/Users/.../chromedriver.exe') driver = webdriver.Chrome(service=s) TL; DR You can find a couple of relevant detailed discussion in: Bug Report: deprecate all but Options and Service arguments in driver instantiation Pull Request: deprecate all but Options and Service arguments in driver instantiation | 11 | 32 |
69,916,200 | 2021-11-10 | https://stackoverflow.com/questions/69916200/get-column-as-pl-series-not-as-pl-dataframe-in-polars | I'm trying to get the column of a DataFrame as a Series. df['a'] always returns a pl.DataFrame. Right now I'm doing it this way pl.Series('GID_1',df['GID_1'].to_numpy().flatten().tolist()) I don't think that's the best way to do it. Does anyone have an idea? | I don't really understand. This snippet runs, so it returns a pl.Series. df = pl.DataFrame({ "A": [1, 2, 3], "B": [1, 2, 3] }) assert isinstance(df["A"], pl.Series) | 5 | 8 |
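As an aside (these accessors are assumptions about the polars API and are not mentioned in the accepted answer), polars also exposes explicit single-column accessors that avoid the NumPy round-trip:

```python
import polars as pl

df = pl.DataFrame({"GID_1": [1, 2, 3]})

s = df.get_column("GID_1")   # assumed API: fetch one column by name as a Series
s2 = df.to_series(0)         # assumed API: fetch the column at a given index
assert isinstance(s, pl.Series) and isinstance(s2, pl.Series)
```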
69,914,867 | 2021-11-10 | https://stackoverflow.com/questions/69914867/filling-up-shuffle-buffer-this-may-take-a-while | I have a dataset that includes video frames partially 1000 real videos and 1000 deep fake videos. each video after preprocessing phase converted to the 300 frames in other worlds I have a dataset with 300000 images with Real(0) label and 300000 images with Fake(1) label. I want to train MesoNet with this data. I used costum DataGenerator class to handle train, validation, test data with 0.8,0.1,0.1 ratios but when I run the project show this message: Filling up shuffle buffer (this may take a while): What can I do to solve this problem? You can see the DataGenerator class below. class DataGenerator(keras.utils.Sequence): 'Generates data for Keras' def __init__(self, df, labels, batch_size =32, img_size = (224,224), n_classes = 2, shuffle=True): 'Initialization' self.batch_size = batch_size self.labels = labels self.df = df self.img_size = img_size self.n_classes = n_classes self.shuffle = shuffle self.batch_labels = [] self.batch_names = [] self.on_epoch_end() def __len__(self): 'Denotes the number of batches per epoch' return int(np.floor(len(self.df) / self.batch_size)) def __getitem__(self, index): batch_index = self.indexes[index * self.batch_size : (index + 1) * self.batch_size] frame_paths = self.df.iloc[batch_index]["framePath"].values frame_label = self.df.iloc[batch_index]["label"].values imgs = [cv2.imread(frame) for frame in frame_paths] imgs = [cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for img in imgs] imgs = [ cv2.resize(img, self.img_size) for img in imgs if img.shape != self.img_size ] batch_imgs = np.asarray(imgs) labels = list(map(int, frame_label)) y = np.array(labels) self.batch_labels.extend(labels) self.batch_names.extend([str(frame).split("\\")[-1] for frame in frame_paths]) return ( batch_imgs,y ) def on_epoch_end(self): 'Updates indexes after each epoch' self.indexes = np.arange(len(self.df)) if self.shuffle == True: np.random.shuffle(self.indexes) | Note that this is not an error, but a log message: https://github.com/tensorflow/tensorflow/blob/42b5da6659a75bfac77fa81e7242ddb5be1a576a/tensorflow/core/kernels/data/shuffle_dataset_op.cc#L138 It seems you may be choosing too large a dataset if it's taking too long: https://github.com/tensorflow/tensorflow/issues/30646 You can address this by lowering your buffer size: https://support.huawei.com/enterprise/en/doc/EDOC1100164821/2610406b/what-do-i-do-if-training-times-out-due-to-too-many-dataset-shuffle-operations | 6 | 3 |
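The "Filling up shuffle buffer" log line is emitted by tf.data's shuffle op (the first link above), so the suggested fix boils down to passing a smaller buffer_size. A hedged sketch with a placeholder dataset and an arbitrary buffer value:

```python
import tensorflow as tf

# Placeholder dataset standing in for the real frame pipeline
dataset = tf.data.Dataset.from_tensor_slices(list(range(100_000)))

# A buffer_size equal to the full dataset gives a perfect shuffle but fills slowly;
# a smaller buffer trades shuffle quality for startup time and memory.
dataset = dataset.shuffle(buffer_size=1000).batch(32)
```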
69,912,744 | 2021-11-10 | https://stackoverflow.com/questions/69912744/periodically-restart-python-multiprocessing-pool | I have a Python multiprocessing pool doing a very long job that even after a thorough debugging is not robust enough not to fail every 24 hours or so, because it depends on many third-party, non-Python tools with complex interactions. Also, the underlying machine has certain problems that I cannot control. Note that by failing I don't mean the whole program crashing, but some or most of the processes becoming idle because of some errors, and the app itself either hanging or continuing the job just with the processes that haven't failed. My solution right now is to periodically kill the job, manually, and then just restart from where it was. Even if it's not ideal, what I want to do now is the following: restart the multiprocessing pool periodically, programatically, from the Python code itself. I don't really care if this implies killing the pool workers in the middle of their job. Which would be the best way to do that? My code looks like: with Pool() as p: for _ in p.imap_unordered(function, data): save_checkpoint() log() What I have in mind would be something like: start = 0 end = 1000 # magic number while start + 1 < len(data): current_data = data[start:end] with Pool() as p: for _ in p.imap_unordered(function, current_data): save_checkpoint() log() start += 1 end += 1 Or: start = 0 end = 1000 # magic number while start + 1 < len(data): current_data = data[start:end] start_timeout(time=TIMEOUT) # which would be the best way to to do that without breaking multiprocessing? try: with Pool() as p: for _ in p.imap_unordered(function, current_data): save_checkpoint() log() start += 1 end += 1 except Timeout: pass Or any suggestion you think would be better. Any help would be much appreciated, thanks! | The problem with your current code is that it iterates the multiprocessed results directly, and that call will block. Fortunately there's an easy solution: use apply_async exactly as suggested in the docs. But because of how you describe the use-case here and the failure, I've adapted it somewhat. Firstly, a mock task: from multiprocessing import Pool, TimeoutError, cpu_count from time import sleep from random import randint def log(): print("logging is a dangerous activity: wear a hard hat.") def work(d): sleep(randint(1, 100) / 100) print("finished working") if randint(1, 10) == 1: print("blocking...") while True: sleep(0.1) return d This work function will fail with a probabilty of 0.1, blocking indefinitely. We create the tasks: data = list(range(100)) nproc = cpu_count() And then generate futures for all of them: while data: print(f"== Processing {len(data)} items. ==") with Pool(nproc) as p: tasks = [p.apply_async(work, (d,)) for d in data] Then we can try to get the tasks out manually: for task in tasks: try: res = task.get(timeout=1) data.remove(res) log() except TimeoutError: failed.append(task) if len(failed) < nproc: print( f"{len(failed)} processes are blocked," f" but {nproc - len(failed)} remain." ) else: break The controlling timeout here is the timeout to .get. It should be as long as you expect the longest process to take. Note that we detect when the whole pool is tied up and give up. But since in the scenario you describe some threads are going to take longer than others, we can give 'failed' processes some time to recover. 
Thus every time a task fails we quickly check if the others have in fact succeeded: for task in failed: try: res = task.get(timeout=0.01) data.remove(res) failed.remove(task) log() except TimeoutError: continue Whether this is a good addition in your case depends on whether your tasks really are as flaky as I'm guessing they are. Exiting the context manager for the pool will terminate the pool, so we don't even need to handle that ourselves. If you have significant variation you might want to increase the pool size (thus increasing the number of tasks which are allowed to stall) or allow tasks a grace period before considering them 'failed'. | 5 | 3 |
69,909,234 | 2021-11-10 | https://stackoverflow.com/questions/69909234/pandas-select-columns-based-on-row-values | I have a very large pandas.Dataframe and want to create a new Dataframe by selecting all columns where one row has a specific value. A B C D E Region Nord Süd West Nord Nord value 2.3 1.2 4.2 0.5 1.3 value2 20 400 30 123 200 Now i want to create a new DataFrame with all columns where the row "Region" has the value "Nord". How can it be done? The result should look like this: A D E Region Nord Nord Nord value 2.3 0.5 1.3 value2 20 123 200 Thanks in advance | Use first DataFrame.loc for select all rows (:) by mask compred selected row Region by another loc: df = df.loc[:, df.loc['Region'] == 'Nord'] print (df) A D E Region Nord Nord Nord value 2.3 0.5 1.3 value2 20 123 200 Better is crated MultiIndex by first row with original columns, then is possible select by DataFrame.xs: df.columns = [df.columns, df.iloc[0]] df = df.iloc[1:].rename_axis((None, None), axis=1) print (df) A B C D E Nord Süd West Nord Nord value 2.3 1.2 4.2 0.5 1.3 value2 20 400 30 123 200 print (df.xs('Nord', axis=1, level=1)) A D E value 2.3 0.5 1.3 value2 20 123 200 print (df.xs('Nord', axis=1, level=1, drop_level=False)) A D E Nord Nord Nord value 2.3 0.5 1.3 value2 20 123 200 | 6 | 7 |
69,906,944 | 2021-11-10 | https://stackoverflow.com/questions/69906944/approaches-to-changing-language-at-runtime-with-python-gettext | I have read lots of posts about using Python gettext, but none of them addressed the issue of changing languages at runtime. Using gettext, strings are translated by the function _() which is added globally to builtins. The definition of _ is language-specific and will change during execution when the language setting changes. At certain points in the code, I need strings in an object to be translated to a certain language. This happens by: (Re)define the _ function in builtins to translate to the chosen language (Re)evaluate the desired object using the new _ function - guaranteeing that any calls to _ within the object definition are evaluated using the current definition of _. Return the object I am wondering about different approaches to step 2. I thought of several but they all seem to have fundamental flaws. What is the best way to achieve step 2 in practice? Is it theoretically possible to achieve step 2 for any arbitrary object, without knowledge of its implementation? Possible approaches If all translated text is defined in functions that can be called in step 2, then it's straightforward: calling the function will evaluate using the current definition of _. But there are lots of situations where that's not the case, for instance, translated strings could be module-level variables evaluated at import time, or attributes evaluated when instantiating an object. Minimal example of this problem with module-level variables is here. Re-evaluation Manually reload modules Module-level variables can be re-evaluated at the desired time using importlib.reload. This gets more complicated if the module imports another module that also has translated strings. You have to reload every module that's a (nested) dependency. With knowledge of the module's implementation, you can manually reload the dependencies in the right order: if A imports B, importlib.reload(B) importlib.reload(A) # use A... Problems: Requires knowledge of the module's implementation. Only reloads module-level variables. Automatically reload modules Without knowledge of the module's implementation, you'd need to automate reloading dependencies in the right order. You could do this for every module in the package, or just the (recursive) dependencies. To handle more complex situations, you'd need to generate a dependency graph and reload modules in breadth-first order from the roots. Problems: Requires complex reloading algorithm. There are likely edge cases where it's not possible (cyclic dependencies, unusual package structure, from X import Y-style imports). Only reloads module-level variables. Re-evaluate only the desired object? eval allows you to evaluate dynamically generated expressions. Instead could you re-evaluate an existing object's static expression, given a dynamic context (builtins._)? I guess this would involve recursively re-evaluating the object, and every object referenced in its definition, and every object referenced in their definitions... I looked through the inspect module and didn't find any obvious solution. Problems: Not sure if this is possible. Security issues with eval and similar. Delayed evaluation Lazy evaluation The Flask-Babel project provides a LazyString that delays evaluation of a translated string. If it could be completely delayed until step 2, that seems like the cleanest solution. Problems: A LazyString can still get evaluated before it's supposed to. 
Lots of things may call its __str__ function and trigger evaluation, such as string formatting and concatenating. Deferred translation The python gettext docs demonstrate temporarily re-defining the _ function, and only calling the actual translation function when the translated string is needed. Problems: Requires knowledge of the object's structure, and code customized to each object, to find the strings to translate. Doesn't allow concatenation or formatting of translated strings. Refactoring All translated strings could be factored out into a separate module, or moved to functions such that they can be completely evaluated at a given time. Problems: As I understand it the point of gettext and the global _ function is to minimize the impact of translation on existing code. Refactoring like this could require significant design changes and make the code more confusing. | The only plausible, general approach is to rewrite all relevant code to not only use _ to request translation but to never cache the result. That’s not a fun idea and it’s not a new idea—you already list Refactoring and Deferred translation that rely on the cooperation of the gettext clients—but it is the “best way […] in practice”. You can try to do a super-reload by removing many things from sys.modules and then doing a real reimport. This approach avoids understanding the import relationships, but works only if the relevant modules are all written in Python and you can guarantee that the state of your program will retain no references to any objects (including types and modules) that used the old language. (I’ve done this, but only in a context where the overarching program was a sort of supervisor utterly uninterested in the features of the discarded modules.) You can try to walk the whole object graph and replace the strings, but even aside from the intrinsic technical difficulty of such an algorithm (consider __slots__ in base classes and co_consts for just the mildest taste), it would involve untranslating them, which changes from hard to impossible when some sort of transformation has already been performed. That transformation might just be concatenating the translated strings, or it might be pre-substituting known values to format, or padding the string, or storing a hash of it: it’s certainly undecidable in general. (I’ve done this too for other data types, but only with data constructed by a file reader whose output used known, simple structures.) Any approach based on partial reevaluation combines the problems of the methods above. The only other possible approach is a super-LazyString that refuses to translate for longer by implementing operations like + to return objects that encode the transformations to eventually apply, but it’s impossible to know when to force those operations unless you control all mechanisms used to display or transmit strings. It’s also impossible to defer past, say, if len(_("…"))>80:. | 5 | 3 |
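A minimal sketch of the "never cache the result" rewrite recommended above: store only message IDs and look up the current language's catalog at call time. The domain name, locale directory and language-switching helper are assumptions for illustration.

```python
import gettext

_current_lang = "en"

def set_language(lang):
    global _current_lang
    _current_lang = lang

def _(message):
    # Look up the catalog for the *current* language on every call,
    # so no translated text is ever cached at import time.
    catalog = gettext.translation(
        "messages", localedir="locales", languages=[_current_lang], fallback=True
    )
    return catalog.gettext(message)

# Module-level data stores the untranslated ID; translation happens at use time.
GREETING_ID = "Hello"

def render_greeting():
    return _(GREETING_ID)
```

In practice you would cache one translation object per language rather than re-reading the catalog on every call, but the point of the sketch is the same: nothing translated is stored at import time.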
69,822,702 | 2021-11-3 | https://stackoverflow.com/questions/69822702/poetry-was-not-installed-with-the-recommended-installer-cannot-update-automatic | How to I upgrade to the latest version? Specification: Windows 10, Visual Studio Code, Ubuntu Bash. Current Version: me@PF2DCSXD:/mnt/c/Users/user/Documents/GitHub/workers-python/workers/composite_key/compositekey/tests$ python3 --version Python 3.8.10 Attempt to update | poetry self update: me@PF2DCSXD:/mnt/c/Users/user/Documents/GitHub/workers-python/workers/composite_key/compositekey/tests$ poetry self update RuntimeError Poetry was not installed with the recommended installer. Cannot update automatically. at ~/.local/lib/python3.8/site-packages/poetry/console/commands/self/update.py:389 in _check_recommended_installation 385│ current = Path(__file__) 386│ try: 387│ current.relative_to(self.home) 388│ except ValueError: → 389│ raise RuntimeError( 390│ "Poetry was not installed with the recommended installer. " 391│ "Cannot update automatically." 392│ ) 393│ Please let me know if there is anything else I can add to post. | The error message suggests you've probably installed poetry with pip, which does not support automatic poetry updates. You should uninstall the poetry version currently installed, and reinstall it using the recommended method, which uses a custom installation script. On osx/linux, you'll just have to run curl -sSL https://install.python-poetry.org | python3 - to download and run this installation script. | 9 | 10 |
69,875,125 | 2021-11-7 | https://stackoverflow.com/questions/69875125/find-element-by-commands-are-deprecated-in-selenium | When starting the function def run(driver_path): driver = webdriver.Chrome(executable_path=driver_path) driver.get('https://tproger.ru/quiz/real-programmer/') button = driver.find_element_by_class_name("quiz_button") button.click() run(driver_path) I'm getting errors like these: <ipython-input-27-c5a7960e105f>:6: DeprecationWarning: executable_path has been deprecated, please pass in a Service object driver = webdriver.Chrome(executable_path=driver_path) <ipython-input-27-c5a7960e105f>:10: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead button = driver.find_element_by_class_name("quiz_button") but I can't understand why. I'm using WebDriver at the latest version for my Chrome's version. I don't why I get find_element_by_* commands are deprecated when it's in the documentation that the command exists. | This error message: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead implies that the find_element_by_* commands are deprecated in the latest Selenium Python libraries. As AutomatedTester mentions: This DeprecationWarning was the reflection of the changes made with respect to the decision to simplify the APIs across the languages and this does that. Solution Instead you have to use find_element(). As an example: You have to include the following imports from selenium.webdriver.common.by import By Using class_name: button = driver.find_element_by_class_name("quiz_button") Needs be replaced with: button = driver.find_element(By.CLASS_NAME, "quiz_button") Along the lines of, you also have to change the following: Using id: element = find_element_by_id("element_id") Needs be replaced with: element = driver.find_element(By.ID, "element_id") Using name: element = find_element_by_name("element_name") Needs be replaced with: element = driver.find_element(By.NAME, "element_name") Using link_text: element = find_element_by_link_text("element_link_text") Needs be replaced with: element = driver.find_element(By.LINK_TEXT, "element_link_text") Using partial_link_text: element = find_element_by_partial_link_text("element_partial_link_text") Needs be replaced with: element = driver.find_element(By.PARTIAL_LINK_TEXT, "element_partial_link_text") Using tag_name: element = find_element_by_tag_name("element_tag_name") Needs be replaced with: element = driver.find_element(By.TAG_NAME, "element_tag_name") Using css_selector: element = find_element_by_css_selector("element_css_selector") Needs be replaced with: element = driver.find_element(By.CSS_SELECTOR, "element_css_selector") Using xpath: element = find_element_by_xpath("element_xpath") Needs be replaced with: element = driver.find_element(By.XPATH, "element_xpath") Note: If you are searching and replacing to implement the above changes, you will need to do the same thing for find_elements_*, i.e., the plural forms of find_element_*. You may also find this upgrade guide useful as it covers some other unrelated changes you may need to make when upgrading: Upgrade to Selenium 4 | 92 | 205 |
69,844,072 | 2021-11-4 | https://stackoverflow.com/questions/69844072/why-isnt-python-newtype-compatible-with-isinstance-and-type | This doesn't seem to work: from typing import NewType MyStr = NewType("MyStr", str) x = MyStr("Hello World") isinstance(x, MyStr) I don't even get False, but TypeError: isinstance() arg 2 must be a type or tuple of types because MyStr is a function and isinstance wants one or more type. Even assert type(x) == MyStr or is MyStr fails. What am I doing wrong? | The purpose of NewType is purely for static type checking, but for dynamic purposes it produces the wrapped type. It does not make a new type at all, it returns a callable that the static type checker can see, that's all. When you do: x = MyStr("Hello World") it doesn't produce a new instance of MyStr, it returns "Hello World" entirely unchanged, it's still the original str passed in, even down to identity: >>> s = ' '.join(["a", "b", "c"]) # Make a new string in a way that foils interning, just to rule out any weirdness from caches >>> ms = MyStr(s) # "Convert" it to MyStr >>> type(ms) # It's just a str str >>> s is ms # It's even the *exact* same object you passed in True The point is, what NewType(...) returns is effectively a callable that: Acts as the identity function (it returns whatever you give it unchanged); in modern Python, NewType itself is a class that begins with __call__ = _idfunc, that's literally just saying when you make a call with an instance, return the argument unchanged. (In modern Python) Has some useful features like overloading | to produce Unions like other typing-friendly things. but you can't use it usefully for isinstance, not because it's not producing instances of anything. As other answers have mentioned, if you need runtime, dynamic checking, subclassing is the way to go. The other answers are doing both more (unnecessarily implementing __new__) and less (allowing arbitrary attributes, bloating instances of the subclass for a benefit you won't use) than necessary, so here's what you want for something that: Is considered a subclass of str for both static and runtime checking purposes Does not bloat the memory usage per-instance any more than absolutely necessary class MyStr(str): # Inherit all behaviors of str __slots__ = () # Prevent subclass from having __dict__ and __weakref__ slots, saving 16 bytes # per instance on 64 bit CPython builds, and avoiding weirdness like allowing # instance attributes on a logical str That's it, just two lines (technically, a one-liner like class MyStr(str): __slots__ = () is syntactically legal, but it's bad style, so I avoid it), and you've got what you need. | 8 | 1 |
69,848,969 | 2021-11-5 | https://stackoverflow.com/questions/69848969/how-to-build-numpy-from-source-linked-to-apple-accelerate-framework | It is my understanding that NumPy dropped support for using the Accelerate BLAS and LAPACK at version 1.20.0. According to the release notes for NumPy 1.21.1, these bugs have been resolved and building NumPy from source using the Accelerate framework on MacOS >= 11.3 is now possible again: https://numpy.org/doc/stable/release/1.21.0-notes.html, but I cannot find any documentation on how to do so. This seems like it would be an interesting thing to try and do because the Accelerate framework is supposed to be highly-optimized for M-series processors. I imagine the process is something like this: Download numpy source code folder and navigate to this folder. Make a site.cfg file that looks something like: [DEFAULT] library_dirs = /some/directory/ include_dirs = /some/other/directory/ [accelerate] libraries = Accelerate, vecLib Run python setup.py build The problem is I do not know 1. what the variables library_dirs and include_dirs should be so that NumPy knows to use Accelerate BLAS and LAPACK and 2. if there are any other additional steps that need to be taken. If anyone knows how to do this or can provide any insight, it would be greatly appreciated. | I actually attempted this earlier today and these are the steps I used: In the site.cfg file, put [accelerate] libraries = Accelerate, vecLib Build with NPY_LAPACK_ORDER=accelerate python3 setup.py build Install with python3 setup.py install Afterwards, np.show_config() returned the following blas_mkl_info: NOT AVAILABLE blis_info: NOT AVAILABLE openblas_info: NOT AVAILABLE accelerate_info: extra_compile_args = ['-I/System/Library/Frameworks/vecLib.framework/Headers'] extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3), ('HAVE_CBLAS', None)] blas_opt_info: extra_compile_args = ['-I/System/Library/Frameworks/vecLib.framework/Headers'] extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3), ('HAVE_CBLAS', None)] lapack_mkl_info: NOT AVAILABLE openblas_lapack_info: NOT AVAILABLE openblas_clapack_info: NOT AVAILABLE flame_info: NOT AVAILABLE lapack_opt_info: extra_compile_args = ['-I/System/Library/Frameworks/vecLib.framework/Headers'] extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3), ('HAVE_CBLAS', None)] Supported SIMD extensions in this NumPy install: baseline = NEON,NEON_FP16,NEON_VFPV4,ASIMD found = ASIMDHP,ASIMDDP not found = and my quick test suggest significant performance boost relative to OpenBlas. | 7 | 4 |
69,906,416 | 2021-11-9 | https://stackoverflow.com/questions/69906416/forecast-future-values-with-lstm-in-python | This code predicts the values of a specified stock up to the current date but not a date beyond the training dataset. This code is from an earlier question I had asked and so my understanding of it is rather low. I assume the solution would be a simple variable change to add the extra time but I am unaware as to which value needs to be manipulated. import pandas as pd import numpy as np import yfinance as yf import os import matplotlib.pyplot as plt from IPython.display import display from keras.models import Sequential from keras.layers import LSTM, Dense from sklearn.preprocessing import MinMaxScaler os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' pd.options.mode.chained_assignment = None # download the data df = yf.download(tickers=['AAPL'], period='2y') # split the data train_data = df[['Close']].iloc[: - 200, :] valid_data = df[['Close']].iloc[- 200:, :] # scale the data scaler = MinMaxScaler(feature_range=(0, 1)) scaler.fit(train_data) train_data = scaler.transform(train_data) valid_data = scaler.transform(valid_data) # extract the training sequences x_train, y_train = [], [] for i in range(60, train_data.shape[0]): x_train.append(train_data[i - 60: i, 0]) y_train.append(train_data[i, 0]) x_train = np.array(x_train) y_train = np.array(y_train) # extract the validation sequences x_valid = [] for i in range(60, valid_data.shape[0]): x_valid.append(valid_data[i - 60: i, 0]) x_valid = np.array(x_valid) # reshape the sequences x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1) x_valid = x_valid.reshape(x_valid.shape[0], x_valid.shape[1], 1) # train the model model = Sequential() model.add(LSTM(units=50, return_sequences=True, input_shape=x_train.shape[1:])) model.add(LSTM(units=50)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(x_train, y_train, epochs=50, batch_size=128, verbose=1) # generate the model predictions y_pred = model.predict(x_valid) y_pred = scaler.inverse_transform(y_pred) y_pred = y_pred.flatten() # plot the model predictions df.rename(columns={'Close': 'Actual'}, inplace=True) df['Predicted'] = np.nan df['Predicted'].iloc[- y_pred.shape[0]:] = y_pred df[['Actual', 'Predicted']].plot(title='AAPL') display(df) plt.show() | You could train your model to predict a future sequence (e.g. the next 30 days) instead of predicting the next value (the next day) as it is currently the case. In order to do that, you need to define the outputs as y[t: t + H] (instead of y[t] as in the current code) where y is the time series and H is the length of the forecast period (i.e. the number of days ahead that you want to forecast). You also need to set the number of outputs of the last layer equal to H (instead of equal to 1 as in the current code). You can still define the inputs as y[t - T: t] where T is the length of the lookback period (or number of timesteps), and therefore the model's input shape is still (T, 1). The lookback period T is usually longer than the forecast period H (i.e. T > H) and it's often set equal to a multiple of H (i.e. T = m * H where m > 1 is an integer.). 
import numpy as np import pandas as pd import yfinance as yf import tensorflow as tf from tensorflow.keras.layers import Dense, LSTM from tensorflow.keras.models import Sequential from sklearn.preprocessing import MinMaxScaler pd.options.mode.chained_assignment = None tf.random.set_seed(0) # download the data df = yf.download(tickers=['AAPL'], period='1y') y = df['Close'].fillna(method='ffill') y = y.values.reshape(-1, 1) # scale the data scaler = MinMaxScaler(feature_range=(0, 1)) scaler = scaler.fit(y) y = scaler.transform(y) # generate the input and output sequences n_lookback = 60 # length of input sequences (lookback period) n_forecast = 30 # length of output sequences (forecast period) X = [] Y = [] for i in range(n_lookback, len(y) - n_forecast + 1): X.append(y[i - n_lookback: i]) Y.append(y[i: i + n_forecast]) X = np.array(X) Y = np.array(Y) # fit the model model = Sequential() model.add(LSTM(units=50, return_sequences=True, input_shape=(n_lookback, 1))) model.add(LSTM(units=50)) model.add(Dense(n_forecast)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X, Y, epochs=100, batch_size=32, verbose=0) # generate the forecasts X_ = y[- n_lookback:] # last available input sequence X_ = X_.reshape(1, n_lookback, 1) Y_ = model.predict(X_).reshape(-1, 1) Y_ = scaler.inverse_transform(Y_) # organize the results in a data frame df_past = df[['Close']].reset_index() df_past.rename(columns={'index': 'Date', 'Close': 'Actual'}, inplace=True) df_past['Date'] = pd.to_datetime(df_past['Date']) df_past['Forecast'] = np.nan df_past['Forecast'].iloc[-1] = df_past['Actual'].iloc[-1] df_future = pd.DataFrame(columns=['Date', 'Actual', 'Forecast']) df_future['Date'] = pd.date_range(start=df_past['Date'].iloc[-1] + pd.Timedelta(days=1), periods=n_forecast) df_future['Forecast'] = Y_.flatten() df_future['Actual'] = np.nan results = df_past.append(df_future).set_index('Date') # plot the results results.plot(title='AAPL') See this answer for a different approach. | 8 | 22 |
69,890,200 | 2021-11-8 | https://stackoverflow.com/questions/69890200/how-to-configure-os-specific-dependencies-in-a-pyproject-toml-file-maturin | I have a rust and python project that I am building using Maturin(https://github.com/PyO3/maturin). It says that it requires a pyproject.toml file for the python dependencies. I have a dependency of uvloop, which is not supported on windows and arm devices. I have added the code that conditionally imports these packages. However, I do not know how to conditionally install these packages. Right now, these packages are getting installed by default on every OS. Here is the pyproject.toml file. [project] name = "robyn" dependencies = [ "watchdog>=2.1.3,<3", "uvloop>=0.16.0,<0.16.1", "multiprocess>=0.70.12.2,<0.70.12.3" ] And the github link, jic anyone is interested: https://github.com/sansyrox/robyn/pull/94/files#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711R21 | The syntax for environment markers is specified in PEP 508 – Dependency specification for Python Software Packages. I will show below how to exclude uvloop as a dependency on Windows platform with a marker for platform.system() which returns: "Linux" on Linux "Darwin" on macOS "Windows" on Windows Using pyproject.toml: [project] dependencies = [ 'uvloop ; platform_system != "Windows"', ] Using setup.cfg: [options] install_requires = uvloop ; platform_system != "Windows" Using setup.py: setup( install_requires=[ 'uvloop ; platform_system != "Windows"', ] ) | 10 | 18 |
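If you want to see how such a marker evaluates on the current interpreter, the packaging library (my suggestion, not part of the answer) can parse and test PEP 508 markers directly; the combined marker below also excludes 64-bit ARM Linux, matching the question's ARM concern:

```python
from packaging.markers import Marker

marker = Marker('platform_system != "Windows" and platform_machine != "aarch64"')
# True on e.g. x86-64 Linux; False on Windows (any arch) or aarch64 Linux
print(marker.evaluate())
```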
69,828,508 | 2021-11-3 | https://stackoverflow.com/questions/69828508/warning-ignoring-xdg-session-type-wayland-on-gnome-use-qt-qpa-platform-wayland | I try to use library cv2 for changing picture. In mode debug I found out that problem in function cv2.namedWindow: def run(self): name_of_window = 'Test_version' image_cv2 = cv2.imread('external_data/probe.jpg') cv2.namedWindow(name_of_window, cv2.WINDOW_NORMAL) cv2.imshow(name_of_window, image_cv2) cv2.waitKey(0) cv2.destroyAllWindows() After cv2.namedWindow appears warning and program stops. I will be pleasure if somebody give the advice. When I call os.environ , appears this: environ({ 'PATH': '/home/spartak/PycharmProjects/python_base/lesson_016/env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin', 'LC_MEASUREMENT': 'ru_RU.UTF-8', 'XAUTHORITY': '/run/user/1000/.mutter-Xwaylandauth.MJ52B1', 'INVOCATION_ID': 'dd129fae7f7c452cb8fa8cd53b9da873', 'XMODIFIERS': '@im=ibus', 'LC_TELEPHONE': 'ru_RU.UTF-8', 'XDG_DATA_DIRS': '/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop', 'GDMSESSION': 'ubuntu', 'LC_TIME': 'ru_RU.UTF-8', 'SNAP_COMMON': '/var/snap/pycharm-community/common', 'SNAP_INSTANCE_KEY': '', 'SNAP_USER_COMMON': '/home/spartak/snap/pycharm-community/common', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'IDE_PROJECT_ROOTS': '/home/spartak/PycharmProjects/python_base', 'PS1': '(env) ', 'SNAP_REVISION': '256', 'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME', 'JOURNAL_STREAM': '8:37824', 'LC_PAPER': 'ru_RU.UTF-8', 'SESSION_MANAGER': 'local/spartak-pc:@/tmp/.ICE-unix/2082,unix/spartak-pc:/tmp/.ICE-unix/2082', 'USERNAME': 'spartak', 'LOGNAME': 'spartak', 'PWD': '/home/spartak/PycharmProjects/python_base/lesson_016', 'MANAGERPID': '1951', 'IM_CONFIG_PHASE': '1', 'PYCHARM_HOSTED': '1', 'GJS_DEBUG_TOPICS': 'JS ERROR;JS LOG', 'PYTHONPATH': '/home/spartak/PycharmProjects/python_base:/home/spartak/PycharmProjects/python_base/lesson_012/python_snippets:/home/spartak/PycharmProjects/python_base/chatbot:/snap/pycharm-community/256/plugins/python-ce/helpers/third_party/thriftpy:/snap/pycharm-community/256/plugins/python-ce/helpers/pydev:/home/spartak/.cache/JetBrains/PyCharmCE2021.2/cythonExtensions:/home/spartak/PycharmProjects/python_base/lesson_016', 'SHELL': '/bin/bash', 'LC_ADDRESS': 'ru_RU.UTF-8', 'GIO_LAUNCHED_DESKTOP_FILE': '/var/lib/snapd/desktop/applications/pycharm-community_pycharm-community.desktop', 'BAMF_DESKTOP_FILE_HINT': '/var/lib/snapd/desktop/applications/pycharm-community_pycharm-community.desktop', 'IPYTHONENABLE': 'True', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'GTK_MODULES': 'gail:atk-bridge', 'VIRTUAL_ENV': '/home/spartak/PycharmProjects/python_base/lesson_016/env', 'SNAP_ARCH': 'amd64', 'SYSTEMD_EXEC_PID': '2099', 'XDG_SESSION_DESKTOP': 'ubuntu', 'GNOME_SETUP_DISPLAY': ':1', 'SNAP_LIBRARY_PATH': '/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void', 'SSH_AGENT_LAUNCHER': 'gnome-keyring', 'SHLVL': '0', 'LC_IDENTIFICATION': 'ru_RU.UTF-8', 'LC_MONETARY': 'ru_RU.UTF-8', 'SNAP_NAME': 'pycharm-community', 'QT_IM_MODULE': 'ibus', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/etc/xdg', 'LANG': 'en_US.UTF-8', 'SNAP_INSTANCE_NAME': 'pycharm-community', 'XDG_SESSION_TYPE': 'wayland', 'SNAP_USER_DATA': '/home/spartak/snap/pycharm-community/256', 'PYDEVD_LOAD_VALUES_ASYNC': 'True', 'DISPLAY': ':0', 'SNAP_REEXEC': '', 'WAYLAND_DISPLAY': 'wayland-0', 'LIBRARY_ROOTS': 
'/home/spartak/.pyenv/versions/3.9.7/lib/python3.9:/home/spartak/.pyenv/versions/3.9.7/lib/python3.9/lib-dynload:/home/spartak/PycharmProjects/python_base/lesson_016/env/lib/python3.9/site-packages:/home/spartak/.cache/JetBrains/PyCharmCE2021.2/python_stubs/-1475777083:/snap/pycharm-community/256/plugins/python-ce/helpers/python-skeletons:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stdlib:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/jwt:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/six:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/mock:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/nmap:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/annoy:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/attrs:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/polib:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/retry:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/docopt:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/enum34:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/orjson:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/pysftp:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/xxhash:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/chardet:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/futures:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/pyaudio:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/tzlocal:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/Markdown:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/Werkzeug:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/aiofiles:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/colorama:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/filelock:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/paramiko:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/pathlib2:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/requests:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/waitress:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/freezegun:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/ipaddress:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/pyRFC3339:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/typed-ast:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/Deprecated:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/cachetools:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/frozendict:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/pyfarmhash:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/JACK-Client:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/contextvars:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/atomicwrites:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/cryptography:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/DateTimeRange:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/click-spinner:/snap/pycharm-community/256/plug
ins/python-ce/helpers/typeshed/stubs/pkg_resources:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/python-gflags:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/python-slugify:/snap/pycharm-community/256/plugins/python-ce/helpers/typeshed/stubs/python-dateutil', 'SNAP_VERSION': '2021.2.3', 'LC_NAME': 'ru_RU.UTF-8', 'XDG_SESSION_CLASS': 'user', '_': '/usr/bin/gnome-session', 'SNAP_DATA': '/var/snap/pycharm-community/256', 'PYTHONIOENCODING': 'UTF-8', 'DESKTOP_SESSION': 'ubuntu', 'SNAP': '/snap/pycharm-community/256', 'USER': 'spartak', 'SNAP_REAL_HOME': '/home/spartak', 'XDG_MENU_PREFIX': 'gnome-', 'GIO_LAUNCHED_DESKTOP_FILE_PID': '122719', 'QT_ACCESSIBILITY': '1', 'PYTHONDONTWRITEBYTECODE': '1', 'LC_NUMERIC': 'ru_RU.UTF-8', 'GJS_DEBUG_OUTPUT': 'stderr', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'PYTHONUNBUFFERED': '1', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'SNAP_CONTEXT': 'AoM6cqDJGx0xxWBUHLXWyVdhoNwTuHJsXSu2foumZWGYLCOaeHoL', 'XDG_RUNTIME_DIR': '/run/user/1000', 'SNAP_COOKIE': 'AoM6cqDJGx0xxWBUHLXWyVdhoNwTuHJsXSu2foumZWGYLCOaeHoL', 'HOME': '/home/spartak', 'QT_QPA_PLATFORM_PLUGIN_PATH': '/home/spartak/PycharmProjects/python_base/lesson_016/env/lib/python3.9/site-packages/cv2/qt/plugins', 'QT_QPA_FONTDIR': '/home/spartak/PycharmProjects/python_base/lesson_016/env/lib/python3.9/site-packages/cv2/qt/fonts', 'LD_LIBRARY_PATH': '/home/spartak/PycharmProjects/python_base/lesson_016/env/lib/python3.9/site-packages/cv2/../../lib64:'}) | I reverted back to Xorg from wayland and its working, no more warnings Here are the steps: Disabled Wayland by uncommenting WaylandEnable=false in the /etc/gdm3/custom.conf Add QT_QPA_PLATFORM=xcb in /etc/environment Check whether you are on Wayland or Xorg using: echo $XDG_SESSION_TYPE | 24 | 17 |
69,872,686 | 2021-11-7 | https://stackoverflow.com/questions/69872686/how-to-upload-file-from-python-flask-web-app-to-supabase-storage | I want to be able to upload a file from Flask to Supabase Storage, but it only has documentation for the javascript api link to docs. Also, I can't find any examples or any open source project that does that. Here it is my function to upload: def upload_file(self): if 'file' not in request.files: flash('No file part') return redirect('/') file = request.files['file'] if file.filename == '': flash('No selected file') return redirect('/') filename = secure_filename(file.filename) # upload to supabase storage return file.path | from storage3 import create_client url = "https://<your_supabase_id>.supabase.co/storage/v1" key = "<your api key>" headers = {"apiKey": key, "Authorization": f"Bearer {key}"} storage_client = create_client(url, headers, is_async=False) def upload_file(self): if 'file' not in request.files: flash('No file part') return redirect('/') file = request.files['file'] if file.filename == '': flash('No selected file') return redirect('/') filename = secure_filename(file.filename) buckets = storage_client.list_buckets() bucket = buckets[0] return bucket.upload(filename, file) I didn't find official docs for python upload. Didn't test the code above so appreciate any feedback. I've based this on the github repo https://github.com/supabase-community/storage-py and the docs here https://supabase-community.github.io/storage-py/api/bucket.html | 5 | 1 |
69,870,135 | 2021-11-7 | https://stackoverflow.com/questions/69870135/attributeerror-module-backend-interagg-has-no-attribute-figurecanvas | I am using verision 3.6.0 of matplotlib and version 2.6.3 of networkx and for some reason my code is giving me AttributeError: module 'backend_interagg' has no attribute 'FigureCanvas' as an error. Code: import networkx as nx import matplotlib.pyplot as plt import numpy as np import matplotlib G = nx.DiGraph() nodes = np.arange(0, 8).tolist() G.add_nodes_from(nodes) G.add_edges_from([(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6), (2, 7)]) pos = {0: (10, 10), 1: (7.5, 7.5), 2: (12.5, 7.5), 3: (6, 6), 4: (9, 6), 5: (11, 6), 6: (14, 6), 7: (17, 6)} labels = {0: "CEO", 1: "Team A Lead", 2: "Team B Lead", 3: "Staff A", 4: "Staff B", 5: "Staff C", 6: "Staff D", 7: "Staff E"} nx.draw_networkx(G, pos=pos, labels=labels, arrows=True, node_shape="s", node_color="white") plt.title("Organogram of a company.") plt.savefig("Output/plain organogram using networkx.jpeg", dpi=300) Full error message: C:\Users\Flow\AppData\Local\Programs\Python\Python39\python.exe C:/Users/Flow/Shirt/Test.py Traceback (most recent call last): File "C:\Users\Flow\Shirt\Test.py", line 21, in <module> nx.draw_networkx(G, pos=pos, labels=labels, arrows=True, File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\networkx\drawing\nx_pylab.py", line 333, in draw_networkx draw_networkx_nodes(G, pos, **node_kwds) File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\networkx\drawing\nx_pylab.py", line 445, in draw_networkx_nodes ax = plt.gca() File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 2225, in gca return gcf().gca() File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 830, in gcf return figure() File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\_api\deprecation.py", line 454, in wrapper return func(*args, **kwargs) File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 771, in figure manager = new_figure_manager( File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 346, in new_figure_manager _warn_if_gui_out_of_main_thread() File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 336, in _warn_if_gui_out_of_main_thread if (_get_required_interactive_framework(_get_backend_mod()) and File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 206, in _get_backend_mod switch_backend(dict.__getitem__(rcParams, "backend")) File "C:\Users\Flow\AppData\Local\Programs\Python\Python39\lib\site-packages\matplotlib\pyplot.py", line 266, in switch_backend canvas_class = backend_mod.FigureCanvas AttributeError: module 'backend_interagg' has no attribute 'FigureCanvas' Process finished with exit code 1 | This is a fairly common issue with many causes (edited) tldr matplotlib can't find a backend that supports canvas drawing This usually happens on OSX (where tkinter might not be linked due to how OSX does applications) or linux (where tkinter might not be installed because it comes separately and not by default) try setting a backend manually, since I am on windows, use matplotlib.use('TkAgg') | 5 | 13 |
69,869,534 | 2021-11-7 | https://stackoverflow.com/questions/69869534/files-and-folders-in-google-colab | I just started using Google colab for my projects and I tried to create and parse text files. But I don't quite understand how the file directory works here. Below are my questions: On the left navigation pane (in the picture) that show the list of folders and files. Are they in my drive, if they are, where they are located? How can I find? How do I create/upload a file and store in google drive in order to parse it in my app? I have created one file in the folder "Contents" yesterday. But I woke up this morning, it's gone. Can anyone explain what happened? Appreciate your time, thank you. | The google colab folders are temporary and they will disappear after 8 hours I think. You need to save them to your mounted google drive location. The content folder is part of colab and will be deleted. You need to mount google drive to your Colab session. from google.colab import drive drive.mount('/content/gdrive') Then you can write to google drive like with open('/content/gdrive/My Drive/file.txt', 'w') as f: f.write('content') Or even save stuff like pandas files to csv there like df.to_csv('/content/gdrive/My Drive/file.csv') You can also read files from there like this import pandas as pd df = pd.read_csv('/content/gdrive/My Drive/file.csv') All of this will only work after you mount the drive of course. | 5 | 11 |
69,834,335 | 2021-11-4 | https://stackoverflow.com/questions/69834335/loading-yolo-invalid-index-to-scalar-variable | Getting an error for IndexError: invalid index to scalar variable on the yolo_layers line. network = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights') layers = network.getLayerNames() yolo_layers = [layers[i[0] - 1] for i in network.getUnconnectedOutLayers()] This code won't work on my Jupyter notebook but will run fine on Google Colab. No idea why. Could it be my Python version? | It may be caused by the different versions of cv2. The version of the cv2 module with CUDA support will give you a 2-D array when calling network.getUnconnectedOutLayers(). However, the version without CUDA support will give a 1-D array. You may try taking out the brackets that select index 0 (i.e. use i - 1 instead of i[0] - 1). | 9 | 14 |
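A version-tolerant variant (my own sketch, not from the accepted answer, reusing the question's config and weights files): flatten whatever getUnconnectedOutLayers() returns so the same indexing works for both the 1-D and 2-D cases.

```python
import cv2
import numpy as np

network = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
layers = network.getLayerNames()

# np.ravel() copes with both return shapes: [200, 227, 254] and [[200], [227], [254]]
out_idx = np.ravel(network.getUnconnectedOutLayers())
yolo_layers = [layers[i - 1] for i in out_idx]  # indices are 1-based
```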
69,874,192 | 2021-11-7 | https://stackoverflow.com/questions/69874192/combined-aggregate-based-on-valid-values | I have a df with this structure: id a1_l1 a2_l1 a3_l1 a1_l2 a2_l2 a3_l2 1 1 5 3 1 2 3 2 1 5 3 1 2 3 3 2 5 3 5 5 3 4 5 5 3 5 5 3 5 5 5 2 6 5 5 2 7 5 5 2 8 2 5 2 9 3 5 1 10 3 5 1 I want to summarize in a table such that I get: l1 l2 a1 0.4 0.5 a2 1 0.5 a3 0 0 What I'm doing is counting how many times 5 was present and dividing by the number of valid responses, so that for example: a1, l1 is equal to .4 as I have 4 values of 5 divided by 10, and a2, l1 equals .5 as I have 2 values of 5 divided by 4 valid responses per column. Thanks! | You can reshape to have a dataframe with MultiIndex, then divide the count of values equal to 5 by the number of non-NA values. Finally, unstack: df2 = df.set_index('id') df2.columns = df2.columns.str.split('_', expand = True) df2 = (df2.eq(5).sum()/df2.notna().sum()).unstack() output: l1 l2 a1 0.4 0.5 a2 1.0 0.5 a3 0.0 0.0 | 7 | 4 |
69,842,280 | 2021-11-4 | https://stackoverflow.com/questions/69842280/if-condition-with-a-dataframe | I want if the conditions are true if df[df["tg"] > 10 and df[df["tg"] < 32 then multiply by five otherwise divide by two. However, I get the following error ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). d = {'year': [2001, 2001, 2001, 2001, 2001, 2001, 2001, 2001], 'day': [1, 2, 3, 4, 1, 2, 3, 4,], 'month': [1, 1, 1, 1, 2, 2, 2, 2], 'tg': [10, 11, 12, 13, 50, 21, -1, 23], 'rain': [1, 2, 3, 2, 4, 1, 2, 1]} df = pd.DataFrame(data=d) print(df) [OUT] year day month tg rain 0 2001 1 1 10 1 1 2001 2 1 11 2 2 2001 3 1 12 3 3 2001 4 1 13 2 4 2001 1 2 50 4 5 2001 2 2 21 1 6 2001 3 2 -1 2 7 2001 4 2 23 1 df["score"] = (df["tg"] * 5) if ((df[df["tg"] > 10]) and (df[df["tg"] < 32])) else (df["tg"] / 2) [OUT] ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). What I want year day month tg rain score 0 2001 1 1 10 1 5 1 2001 2 1 11 2 55 2 2001 3 1 12 3 60 3 2001 4 1 13 2 65 4 2001 1 2 50 4 25 5 2001 2 2 21 1 42 6 2001 3 2 -1 2 0.5 7 2001 4 2 23 1 46 | You can use where: df['score'] = (df['tg']*5).where(df['tg'].between(10, 32), df['tg']/5) | 10 | 5 |
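np.where is a common alternative to Series.where for this pattern; a sketch applying the question's stated rule (multiply by 5 inside the open interval (10, 32), otherwise divide by 2), rebuilt from the question's data:

```python
import numpy as np
import pandas as pd

d = {'year': [2001] * 8, 'day': [1, 2, 3, 4, 1, 2, 3, 4],
     'month': [1, 1, 1, 1, 2, 2, 2, 2],
     'tg': [10, 11, 12, 13, 50, 21, -1, 23], 'rain': [1, 2, 3, 2, 4, 1, 2, 1]}
df = pd.DataFrame(d)

mask = (df['tg'] > 10) & (df['tg'] < 32)          # strict bounds, per the question text
df['score'] = np.where(mask, df['tg'] * 5, df['tg'] / 2)
```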
69,884,878 | 2021-11-8 | https://stackoverflow.com/questions/69884878/replacing-imp-with-importlib-maintaining-old-behavior | I inherited some code that I need to rework since it is using deprecated imp module instead of importlib To test the functionality I have created a simple test script: # test.py def main(): print("main()") if __name__ == "__main__": print("__main__") main() When I run that with the old code (below as a minimal example) # original imp based minimal example import imp import os import site script = r"./test.py" script_dir = os.path.dirname(script) script_name = os.path.splitext(os.path.basename(script))[0] site.addsitedir(script_dir) fp, pathname, description = imp.find_module(script_name, [script_dir]) imp.load_source('__main__', pathname, fp) Would output the following: __main__ main() Now I have some trouble mimicking this verbatim using the importlib module since I do not get the output as above with the following code: # my try so far using importlib import importlib import os import site script = os.path.abspath(r"./test.py") script_dir = os.path.dirname(script) script_name = os.path.splitext(os.path.basename(script))[0] site.addsitedir(script_dir) spec = importlib.util.spec_from_file_location(script_name, script) module = importlib.util.module_from_spec(spec) sys.modules[script_name] = module spec.loader.exec_module(module) Instead it outputs nothing. :-( Any suggestions on how to mimic the old (imp) behavior using importlib? | Eventually I solved it like below based on https://github.com/epfl-scitas/spack/blob/a60ae07083a5744607064221a0cd48204a54394e/lib/spack/llnl/util/lang.py#L598-L625: if sys.version_info[0] == 3: if sys.version_info[1] >= 5: import importlib.util spec = importlib.util.spec_from_file_location(module_name, module_path) module = importlib.util.module_from_spec(spec) spec.loader.exec_module(module) elif sys.version_info[1] < 5: import importlib.machinery loader = importlib.machinery.SourceFileLoader(module_name, module_path) loader.load_module() elif sys.version_info[0] == 2: import imp imp.load_source(module_name, module_path) | 6 | 1 |
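Since the original imp call loaded the file under the name '__main__' (which is what makes the if __name__ == '__main__' block fire), one hedged alternative that sidesteps the spec plumbing entirely is the standard-library runpy module; this is a suggestion of mine, not part of the quoted answer:

```python
import runpy

# Executes ./test.py as if it were run as a script, i.e. with __name__ == "__main__"
runpy.run_path("./test.py", run_name="__main__")
```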
69,837,716 | 2021-11-4 | https://stackoverflow.com/questions/69837716/error-could-not-find-a-version-that-satisfies-the-requirement-busio-from-versi | When Running Installation, pip install busio Getting ERROR, ERROR: Could not find a version that satisfies the requirement busio (from versions: none) ERROR: No matching distribution found for busio Python version is 3.7.3. | There is a module called "busio" (GitHub) by Adafruit which they describe as providing “hardware-driven interfaces for I2C, SPI, UART”. This can be installed via Adafruit’s Blinka package: pip3 install adafruit-blinka | 4 | 4 |
69,830,902 | 2021-11-3 | https://stackoverflow.com/questions/69830902/poetry-installation-with-windows-wsl-not-working-ignoring-home | I have a WSL instance, Ubuntu 20.04 and I have created another Ubuntu 18.04 WSL instance. I installed Poetry on the 20.04 without issues. I am trying to install Poetry on the Ubuntu 18.04 instance, using the curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3 - command. At the moment, my $HOME env var points to /home/fromzeroedu. However, after installation, Poetry is installed on my Windows user home: $ which poetry /mnt/c/Users/j/.poetry/bin/poetry And if I try getting the version, I get: $ poetry --version /usr/bin/env: ‘python\r’: Permission denied I even tried setting the POETRY_HOME prior to installation: export POETRY_HOME=/home/fromzeroedu/.poetry/bin/poetry But Poetry still installs in the Windows user directory. Sometimes I love Poetry... | That was because bash didn't knew where to look for the bin so it found only the Windows executable (PATH is shared between wsl and windows) to solve it you needed to add the following to your ~/.bashrc (preferably on top) export PATH="$HOME/.poetry/bin:$PATH" With the new installer (poetry 1.1.7 onward) the Bin path changed export PATH="$HOME/.local/bin:$PATH" I faced this issue cause I'm using autocomplete plugin for oh-my-zsh and poetry needs to be added to PATH before the plugin is loaded. but the installation script append it at the end. | 5 | 13 |
69,864,793 | 2021-11-6 | https://stackoverflow.com/questions/69864793/efficient-summation-in-python | I am trying to efficiently compute a summation of a summation in Python: WolframAlpha is able to compute it too a high n value: sum of sum. I have two approaches: a for loop method and an np.sum method. I thought the np.sum approach would be faster. However, they are the same until a large n, after which the np.sum has overflow errors and gives the wrong result. I am trying to find the fastest way to compute this sum. import numpy as np import time def summation(start,end,func): sum=0 for i in range(start,end+1): sum+=func(i) return sum def x(y): return y def x2(y): return y**2 def mysum(y): return x2(y)*summation(0, y, x) n=100 # method #1 start=time.time() summation(0,n,mysum) print('Slow method:',time.time()-start) # method #2 start=time.time() w=np.arange(0,n+1) (w**2*np.cumsum(w)).sum() print('Fast method:',time.time()-start) | (fastest methods, 3 and 4, are at the end) In a fast NumPy method you need to specify dtype=np.object so that NumPy does not convert Python int to its own dtypes (np.int64 or others). It will now give you correct results (checked it up to N=100000). # method #2 start=time.time() w=np.arange(0, n+1, dtype=np.object) result2 = (w**2*np.cumsum(w)).sum() print('Fast method:', time.time()-start) Your fast solution is significantly faster than the slow one. Yes, for large N's, but already at N=100 it is like 8 times faster: start=time.time() for i in range(100): result1 = summation(0, n, mysum) print('Slow method:', time.time()-start) # method #2 start=time.time() for i in range(100): w=np.arange(0, n+1, dtype=np.object) result2 = (w**2*np.cumsum(w)).sum() print('Fast method:', time.time()-start) Slow method: 0.06906533241271973 Fast method: 0.008007287979125977 EDIT: Even faster method (by KellyBundy, the Pumpkin) is by using pure python. Turns out NumPy has no advantage here, because it has no vectorized code for np.objects. # method #3 import itertools start=time.time() for i in range(100): result3 = sum(x*x * ysum for x, ysum in enumerate(itertools.accumulate(range(n+1)))) print('Faster, pure python:', (time.time()-start)) Faster, pure python: 0.0009944438934326172 EDIT2: Forss noticed that numpy fast method can be optimized by using x*x instead of x**2. For N > 200 it is faster than pure Python method. For N < 200 it is slower than pure Python method (the exact value of boundary may depend on machine, on mine it was 200, its best to check it yourself): # method #4 start=time.time() for i in range(100): w = np.arange(0, n+1, dtype=np.object) result2 = (w*w*np.cumsum(w)).sum() print('Fast method x*x:', time.time()-start) | 32 | 21 |
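As a side note, the double sum also has a closed form: each term is x**2 * x*(x+1)/2, so the total equals one half of (sum of x**4 plus sum of x**3) over x in 0..n. A sketch using Faulhaber's formulas, checked against the pure-Python method from the answer:

import itertools

def nested_sum_closed_form(n):
    s3 = (n * (n + 1) // 2) ** 2                                     # sum of x**3 for x in 0..n
    s4 = n * (n + 1) * (2 * n + 1) * (3 * n * n + 3 * n - 1) // 30   # sum of x**4 for x in 0..n
    return (s4 + s3) // 2

def nested_sum_direct(n):
    return sum(x * x * ysum for x, ysum in enumerate(itertools.accumulate(range(n + 1))))

assert nested_sum_closed_form(100) == nested_sum_direct(100)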
69,875,694 | 2021-11-7 | https://stackoverflow.com/questions/69875694/pip-failed-to-build-package-cytoolz | I'm trying to install eth-brownie using 'pipx install eth-brownie' but I get an error saying pip failed to build package: cytoolz Some possibly relevant errors from pip install: build\lib.win-amd64-3.10\cytoolz\functoolz.cp310-win_amd64.pyd : fatal error LNK1120: 1 unresolved externals error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\link.exe' failed with exit code 1120 I've had a look at the log file and it shows that it failed to build cytoolz. It also mentions "ALERT: Cython not installed. Building without Cython.". From my limited understanding Cytoolz is apart of Cython so i think the reason why the installation for eth-brownie failed is because it could not build cytoolz as it was trying to build it without Cython. The thing is I already have cython installed: C:\Users\alaiy>pip install cython Requirement already satisfied: cython in c:\python310\lib\site-packages (0.29.24) Extract from the log file (I can paste the whole thing but its lengthy): Building wheels for collected packages: bitarray, cytoolz, lru-dict, parsimonious, psutil, pygments-lexer-solidity, varint, websockets, wrapt Building wheel for bitarray (setup.py): started Building wheel for bitarray (setup.py): finished with status 'done' Created wheel for bitarray: filename=bitarray-1.2.2-cp310-cp310-win_amd64.whl size=55783 sha256=d4ae97234d659ed9ff1f0c0201e82c7e321bd3f4e122f6c2caee225172e7bfb2 Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\1d\29\a8\5364620332cc833df35535f54074cf1e51f94d07d2a660bd6d Building wheel for cytoolz (setup.py): started Building wheel for cytoolz (setup.py): finished with status 'error' Running setup.py clean for cytoolz Building wheel for lru-dict (setup.py): started Building wheel for lru-dict (setup.py): finished with status 'done' Created wheel for lru-dict: filename=lru_dict-1.1.7-cp310-cp310-win_amd64.whl size=12674 sha256=6a7e7b2068eb8481650e0a2ae64c94223b3d2c018f163c5a0e7c1d442077450a Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\47\0a\dc\b156cb52954bbc1c31b4766ca3f0ed9eae9b218812bca89d7b Building wheel for parsimonious (setup.py): started Building wheel for parsimonious (setup.py): finished with status 'done' Created wheel for parsimonious: filename=parsimonious-0.8.1-py3-none-any.whl size=42724 sha256=f9235a9614af0f5204d6bb35b8bd30b9456eae3021b5c2a9904345ad7d07a49d Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\b1\12\f1\7a2f39b30d6780ae9f2be9a52056595e0d97c1b4531d183085 Building wheel for psutil (setup.py): started Building wheel for psutil (setup.py): finished with status 'done' Created wheel for psutil: filename=psutil-5.8.0-cp310-cp310-win_amd64.whl size=246135 sha256=834ab1fd1dd0c18e574fc0fbf07922e605169ac68be70b8a64fb90c49ad4ae9b Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\12\a3\6d\615295409067d58a62a069d30d296d61d3ac132605e3a9555c Building wheel for pygments-lexer-solidity (setup.py): started Building wheel for pygments-lexer-solidity (setup.py): finished with status 'done' Created wheel for pygments-lexer-solidity: filename=pygments_lexer_solidity-0.7.0-py3-none-any.whl size=7321 sha256=46355292f790d07d941a745cd58b64c5592e4c24357f7cc80fe200c39ab88d32 Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\36\fd\bc\6ff4fe156d46016eca64c9652a1cd7af6411070c88acbeabf5 Building wheel for varint 
(setup.py): started Building wheel for varint (setup.py): finished with status 'done' Created wheel for varint: filename=varint-1.0.2-py3-none-any.whl size=1979 sha256=36b744b26ba7534a494757e16ab6e171d9bb60a4fe4663557d57034f1150b678 Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\39\48\5e\33919c52a2a695a512ca394a5308dd12626a40bbcd288de814 Building wheel for websockets (setup.py): started Building wheel for websockets (setup.py): finished with status 'done' Created wheel for websockets: filename=websockets-9.1-cp310-cp310-win_amd64.whl size=91765 sha256=a00a9c801269ea2b86d72c0b0b654dc67672519721afeac8f912a157e52901c0 Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\79\f7\4e\873eca27ecd6d7230caff265283a5a5112ad4cd1d945c022dd Building wheel for wrapt (setup.py): started Building wheel for wrapt (setup.py): finished with status 'done' Created wheel for wrapt: filename=wrapt-1.12.1-cp310-cp310-win_amd64.whl size=33740 sha256=ccd729b6e3915164ac4994aef731f21cd232466b3f6c4823c9fda14b07e821c3 Stored in directory: c:\users\alaiy\appdata\local\pip\cache\wheels\8e\61\d3\d9e7053100177668fa43216a8082868c55015f8706abd974f2 Successfully built bitarray lru-dict parsimonious psutil pygments-lexer-solidity varint websockets wrapt Failed to build cytoolz Installing collected packages: toolz, eth-typing, eth-hash, cytoolz, six, pyparsing, eth-utils, varint, urllib3, toml, rlp, pyrsistent, pycryptodome, py, pluggy, parsimonious, packaging, netaddr, multidict, iniconfig, idna, hexbytes, eth-keys, colorama, charset-normalizer, certifi, base58, attrs, atomicwrites, yarl, typing-extensions, requests, python-dateutil, pytest, multiaddr, jsonschema, inflection, eth-rlp, eth-keyfile, eth-abi, chardet, bitarray, async-timeout, websockets, wcwidth, tomli, sortedcontainers, semantic-version, regex, pywin32, pytest-forked, pyjwt, pygments, protobuf, platformdirs, pathspec, mythx-models, mypy-extensions, lru-dict, ipfshttpclient, execnet, eth-account, dataclassy, click, asttokens, aiohttp, wrapt, web3, vyper, vvm, tqdm, pyyaml, pythx, python-dotenv, pytest-xdist, pygments-lexer-solidity, py-solc-x, py-solc-ast, psutil, prompt-toolkit, lazy-object-proxy, hypothesis, eth-event, eip712, black, eth-brownie Running setup.py install for cytoolz: started Running setup.py install for cytoolz: finished with status 'error' PIP STDERR ---------- WARNING: The candidate selected for download or install is a yanked version: 'protobuf' candidate (version 3.18.0 at https://files.pythonhosted.org/packages/74/4e/9f3cb458266ef5cdeaa1e72a90b9eda100e3d1803cbd7ec02f0846da83c3/protobuf-3.18.0-py2.py3-none-any.whl#sha256=615099e52e9fbc9fde00177267a94ca820ecf4e80093e390753568b7d8cb3c1a (from https://pypi.org/simple/protobuf/)) Reason for being yanked: This version claims to support Python 2 but does not ERROR: Command errored out with exit status 1: command: 'C:\Users\alaiy\.local\pipx\venvs\eth-brownie\Scripts\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\alaiy\\AppData\\Local\\Temp\\pip-install-d1bskwa2\\cytoolz_f765f335272241adba2138f1920a35cd\\setup.py'"'"'; __file__='"'"'C:\\Users\\alaiy\\AppData\\Local\\Temp\\pip-install-d1bskwa2\\cytoolz_f765f335272241adba2138f1920a35cd\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 
'C:\Users\alaiy\AppData\Local\Temp\pip-wheel-pxzumeav' cwd: C:\Users\alaiy\AppData\Local\Temp\pip-install-d1bskwa2\cytoolz_f765f335272241adba2138f1920a35cd\ Complete output (70 lines): ALERT: Cython not installed. Building without Cython. running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.10 creating build\lib.win-amd64-3.10\cytoolz copying cytoolz\compatibility.py -> build\lib.win-amd64-3.10\cytoolz copying cytoolz\utils_test.py -> build\lib.win-amd64-3.10\cytoolz | Managed to get it working with python 3.10.1 on Win10 x64 installing cython and cytoolz first: python -m pip install --user cython python -m pip install --user cytoolz python -m pip install --user eth-brownie https://github.com/eth-brownie/brownie/issues/1315 | 8 | 9 |
69,894,628 | 2021-11-9 | https://stackoverflow.com/questions/69894628/scipy-stats-bootstrap-not-importing-python | I have tried pip install scipy and everything appears fine. Going through the installed package's path, I opened the files and couldn't find any mention of bootstrap, despite it being documented on the SciPy website: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bootstrap.html Looking on GitHub at https://github.com/scipy/scipy/blob/master/scipy/stats/_bootstrap.py I can see there was an update 5 days ago, although I last ran the code three days ago with no issues. | I had this issue and solved it by re-installing the scipy package with pip install -U scipy in order to upgrade to version 1.7. | 4 | 6 |
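Once scipy>=1.7 is installed, a minimal call looks roughly like this (the normal sample below is made up; note the data must be passed as a sequence of samples, hence the one-element tuple):

import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)
res = bootstrap((data,), np.mean, confidence_level=0.95, random_state=rng)
print(res.confidence_interval)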
69,818,376 | 2021-11-3 | https://stackoverflow.com/questions/69818376/localhost5000-unavailable-in-macos-v12-monterey | I cannot access a web server on localhost port 5000 on macOS v12 (Monterey) (Flask or any other). E.g., use the built-in HTTP server, I cannot get onto port 5000: python3 -m http.server 5000 ... (stack trace) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socketserver.py", line 466, in server_bind self.socket.bind(self.server_address) OSError: [Errno 48] Address already in use If you have Flask installed and you run the Flask web server, it does not fail on start. Let's take the minimum Flask example code: # Save as hello.py in the current working directory. from flask import Flask app = Flask(__name__) @app.route("/") def hello_world(): return "<p>Hello, World!</p>" Then run it (provided you have Flask/Python 3 installed): export FLASK_APP=hello flask run Output: * Running on http://127.0.0.1:5000/ However, if you try to access this server (from a browser or with anything else), it is denied: curl -I localhost:5000 HTTP/1.1 403 Forbidden Content-Length: 0 Server: AirTunes/595.13.1 | macOS Monterey introduced AirPlay Receiver running on port 5000. This prevents your web server from serving on port 5000. Receiver already has the port. You can either: turn off AirPlay Receiver, or; run the server on a different port (normally best). Turn off AirPlay Receiver Go to System Preferences → Sharing → Untick Airplay Receiver. See more details You should be able to rerun the server now on port 5000 and get a response: python3 -m http.server 5000 Serving HTTP on :: port 5000 (http://[::]:5000/) ... Run the server on a different port than 5000 It's probably a better idea to no longer use port 5000 as that's reserved for Airplay Receiver on macOS Monterey. Just to run the server on a different port. There isn't any need to turn off Airplay Receiver. python3 -m http.server 4999 or export FLASK_APP=hello flask run -p 4999 | 55 | 154 |
69,860,233 | 2021-11-5 | https://stackoverflow.com/questions/69860233/cant-install-python-package-on-alpine-docker-anymore | I have a problem that started very recently. The Docker Alpine Python library is not installable any more: apk update && apk upgrade && apk add python fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz v3.14.2-119-g9c4e1aa60c [https://dl-cdn.alpinelinux.org/alpine/v3.14/main] v3.14.2-120-g90167408c8 [https://dl-cdn.alpinelinux.org/alpine/v3.14/community] OK: 14943 distinct packages available (1/1) Upgrading alpine-keys (2.3-r1 -> 2.4-r0) OK: 7 MiB in 16 packages ERROR: unable to select packages: python (no such package): required by: world[python] Exited with code exit status 1 | You are trying to use the python (alias) library instead of python3. Try to use apk update && apk upgrade && apk add python3 instead. | 20 | 26 |
69,876,843 | 2021-11-7 | https://stackoverflow.com/questions/69876843/importerror-cannot-import-name-dcc-from-partially-initialized-module-dash | I'm very new to python/dash/plotly and I keep getting the same error: ImportError: cannot import name 'dcc' from partially initialized module 'dash' (most likely due to a circular import) Does anyone know how to fix this? I've imported the following: from dash import dcc from dash import html from dash.dependencies import Input, Output import plotly.io as pio | "most likely due to a circular import": this usually means your own script is named dash.py or shadows another module name. In my case, though, I got the error message ImportError: cannot import name 'dcc' from 'dash', and reinstalling dash fixed the issue: pip3 uninstall dash pip3 install dash | 4 | 11 |
69,888,603 | 2021-11-8 | https://stackoverflow.com/questions/69888603/how-to-find-peaks-of-fft-graph-using-python | I am using Python to perform a Fast Fourier Transform on some data. I then need to extract the locations of the peaks in the transform in the form of the x-values. Right now I am using Scipy's fft tool to perform the transform, which seems to be working. However, when i use Scipy's find_peaks I only get the y-values, not the x-position that I need. I also get the warning: ComplexWarning: Casting complex values to real discards the imaginary part Is there a better way for me to do this? Here is my code at the moment: import pandas as pd import matplotlib.pyplot as plt from scipy.fft import fft from scipy.signal import find_peaks headers = ["X","Y"] original_data = pd.read_csv("testdata.csv",names=headers) x = original_data["X"] y = original_data["Y"] a = fft(y) peaks = find_peaks(a) print(peaks) plt.plot(x,a) plt.title("Fast Fourier transform") plt.xlabel("Frequency") plt.ylabel("Amplitude") plt.show() | There seem to be two points of confusion here: What find_peaks is returning. How to interpret complex values that the FFT is returning. I will answer them separately. Point #1 find_peaks returns the indices in "a" that correspond to peaks, so I believe they ARE values you seek, however you must plot them differently. You can see the first example from the documentation here. But basically "peaks" is the index, or x value, and a[peaks] will be the y value. So to plot all your frequencies, and mark the peaks you could do: plt.plot(a) plt.plot(peaks, a[peaks]) Point #2 As for the second point, you should probably read up more on the output of FFTs, this post is a short summary but you may need more background to understand it. But basically, an FFT will return an array of complex numbers, which contains both phase and magnitude information. What you are currently doing is implicitly only looking at the real part of the solution (hence the warning that the imaginary portion is being discarded), what you probably want instead to take the magnitude of your "a" array, but without more information on your application it is impossible to say. | 6 | 7 |
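A self-contained sketch of the two points above, using a synthetic two-tone signal (the 5 Hz / 12 Hz components and 100 Hz sample rate are assumptions for illustration): find_peaks is run on the magnitude spectrum, and the returned indices are used to look up the corresponding frequencies.

import numpy as np
from scipy.fft import fft, fftfreq
from scipy.signal import find_peaks

fs = 100                                   # sample rate in Hz
t = np.arange(0, 2, 1 / fs)
y = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.abs(fft(y))                  # magnitude, so no ComplexWarning
freqs = fftfreq(len(y), d=1 / fs)          # frequency of each FFT bin

half = len(y) // 2                         # keep only the positive-frequency half
peaks, _ = find_peaks(spectrum[:half], height=10)
print(freqs[peaks])                        # x-positions of the peaks: [ 5. 12.]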
69,830,431 | 2021-11-3 | https://stackoverflow.com/questions/69830431/how-to-use-python3-10-on-ubuntu | I have installed Python 3.10 from deadsnakes on my Ubuntu 20.04 machine. How to use it? python3 --version returns Python 3.8.10 and python3.10 -m venv venv returns error (I've installed python3-venv as well). | python3.10 --version will work. python3-venv is for 3.8, so install python3.10-venv. For reference: deadsnakes packages for 3.10 for Focal. | 9 | 6 |
69,888,695 | 2021-11-8 | https://stackoverflow.com/questions/69888695/how-to-alias-generic-types-for-decorators | Consider the example of a typed decorator bound to certain classes. import unittest from typing import * T = TypeVar("T", bound=unittest.TestCase) def decorate(func: Callable[[T], None]) -> Callable[[T], None]: def decorated_function(self: T) -> None: return func(self) return decorated_function I also have a generator that creates these decorators and want a shorthand for them. What type do I give to the variables storing the decorator (simplified example, omitting the generator)? my_decorate: Callable[[Callable[[T], None]], Callable[[T], None]] = decorate This works, but is clunky. So the question is: How can I alias this type to avoid having to write the full signature? Things that don't work: TD = Callable[[Callable[[T], None]], Callable[[T], None]] my_decorate: TD[T] = decorator_variable Gives the error error: Type variable "mypytest.T" is unbound note: (Hint: Use "Generic[T]" or "Protocol[T]" base class to bind "T" inside a class) note: (Hint: Use "T" in function signature to bind "T" inside a function) In contrast, I can use TD[T] as argument type for a function. Just using my_decorate: TD = ... yields a --strict error error: Missing type parameters for generic type "TD" And it no longer detects wrong applications of my_decorate. | As in many cases, when Callable is too limited use a Protocol instead: class TD(Protocol): """Type of any callable `(T -> None) -> (T -> None)` for all `T`""" def __call__(self, __original: Callable[[T], None]) -> Callable[[T], None]: ... TD is not a generic type and thus does not need "filling in" a type variable. It can be used directly as an annotation: my_decorate: TD = decorate Notably, TD.__call__ is still a generic callable even though TD is not generic. Its T is filled by the context of each call as desired. | 11 | 3 |
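A small usage check of the Protocol-based alias from the answer, combined with the question's decorate function (the MyTest class is an assumption added only to exercise the annotation); mypy should accept this as written:

import unittest
from typing import Callable, Protocol, TypeVar

T = TypeVar("T", bound=unittest.TestCase)

def decorate(func: Callable[[T], None]) -> Callable[[T], None]:
    def decorated_function(self: T) -> None:
        return func(self)
    return decorated_function

class TD(Protocol):
    def __call__(self, __original: Callable[[T], None]) -> Callable[[T], None]: ...

my_decorate: TD = decorate          # no type parameter needed on TD itself

class MyTest(unittest.TestCase):
    @my_decorate                    # T is bound to MyTest at this call site
    def test_something(self) -> None: ...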
69,853,699 | 2021-11-5 | https://stackoverflow.com/questions/69853699/finding-weekly-combinations-of-items-bought-together-using-pandas-groupby | I have a df: date category subcategory order_id product_id branch 2021-05-04 A aa 10 5 web 2021-06-04 A dd 10 2 web 2021-05-06 B aa 18 3 shop 2021-07-06 A aa 50 10 web 2021-07-06 C cc 10 15 web 2021-07-05 A ff 101 30 shop 2021-10-04 D aa 100 15 shop I am trying to answer a question which items categories and subcategories are bought together per branch type weekly. I am thinking of grouping the order_ids and aggregating the category & subcategory to a list like so: a = (df.set_index('date') .groupby(['order_id','branch']) .resample('W-MON', label = 'left') .agg({'category':list, 'subcategory':list})) Which returns : category subcategory order_id branch date [A, A, A] [aa, dd, aa] 10 web 2021-05-04 ... ... 18 shop ... 50 web 100 web 101 shop I am trying to build a structure which would show the frequency of each variation of the categories and subcategories bought each week per branch, something similar to this: branch date 2021-05-04 2021-05-011 ... web category 3, [A, A, A] 2, [A, A] 2, [A, A, B, B] subcategory 5, [aa, dd, aa] 4, [dd, aa] 1, [dd] shop category 3, [A, A, A] 2, [A, A] 2, [A, A, B, B] subcategory 5, [aa, dd, aa] 4, [dd, aa] 1, [dd] Where the number before the list denotes the number of times a certain combinations of categories and subcategories were bought in the same order. I am unsure how to achieve such a structure or a similar one that would show the weekly combination frequencies by branch. The order of the product_id in the order does not matter as the final basket is the same. So the goal is to see the frequency of categories, subcategories & product_ids bought in the same order weekly. So if 2 different orders have the same products, the aggregated result would show 2, [A,B] [aa, bb] [5, 2] where the lists hold category, subcategory & product_id combinations. | This is what you need: import pandas as pd import numpy as np from datetime import timedelta from datetime import datetime as dt # df=pd.read_excel('demo.xlsx') df['date']=pd.to_datetime(df['date']) df['date']=df['date'].dt.strftime('%Y-%m-%d') df['date']=pd.to_datetime(df['date']) df['year_week'] = df['date'].dt.strftime('%Y_%U') df['orderid_year_week']=df['order_id'].astype(str)+'_'+df['year_week'] df=df.sort_values(['category', 'subcategory','product_id'], ascending=[True, True,True]) a = (df.set_index('orderid_year_week') .groupby(['year_week','order_id'],sort=False) .agg({'category':list, 'subcategory':list,'product_id':list})).reset_index() a['category'] =a['category'].astype(str) a['subcategory'] =a['subcategory'].astype(str) a['product_id'] =a['product_id'].astype(str) df=pd.pivot_table(a,index=['year_week','category','subcategory','product_id'],values='product_id',aggfunc='count').reset_index() df.rename({'order_id':'count'},axis=1,inplace=True) The output looks like this (I have added a few more entries on top of the sample that you provided): Some things in your explanations are not crystal clear. But let me know if this fully answers your question. | 9 | 5 |
69,883,423 | 2021-11-8 | https://stackoverflow.com/questions/69883423/google-business-profile-api-readmask | After the deprecation of my discovery url, I had to make some change on my code and now I get this error. googleapiclient.errors.HttpError: <HttpError 400 when requesting https://mybusinessbusinessinformation.googleapis.com/v1/accounts/{*accountid*}/locations?filter=locationKey.placeId%3{*placeid*}&readMask=paths%3A+%22locations%28name%29%22%0A&alt=json returned "Request contains an invalid argument.". Details: "[{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'read_mask', 'description': 'Invalid field mask provided'}]}]"> I am trying to use this end point accounts.locations.list I'm using : python 3.8 google-api-python-client 2.29.0 My current code look likes : from google.protobuf.field_mask_pb2 import FieldMask googleAPI = GoogleAPI.auth_with_credentials(client_id=config.GMB_CLIENT_ID, client_secret=config.GMB_CLIENT_SECRET, client_refresh_token=config.GMB_REFRESH_TOKEN, api_name='mybusinessbusinessinformation', api_version='v1', discovery_service_url="https://mybusinessbusinessinformation.googleapis.com/$discovery/rest") field_mask = FieldMask(paths=["locations(name)"]) outputLocation = googleAPI.service.accounts().locations().list(parent="accounts/{*id*}", filter="locationKey.placeId=" + google_place_id, readMask=field_mask ).execute() From the error, i tried a lot of fieldmask path and still don't know what they want. I've tried things like location.name, name, locations.name, locations.location.name and it did'nt work. I also try to pass the readMask params without use the FieldMask class with passing a string and same problem. So if someone know what is the format of the readMask they want it will be great for me ! . Can help: https://www.youtube.com/watch?v=T1FUDXRB7Ns https://developers.google.com/google-ads/api/docs/client-libs/python/field-masks | You have not set the readMask correctly. I have done a similar task in Java and Google returns the results. readMask is a String type, and what I am going to provide you in the following line includes all fields. You can omit anyone which does not serve you. I am also writing the request code in Java, maybe it can help you better to convert into Python. String readMask="storeCode,regularHours,name,languageCode,title,phoneNumbers,categories,storefrontAddress,websiteUri,regularHours,specialHours,serviceArea,labels,adWordsLocationExtensions,latlng,openInfo,metadata,profile,relationshipData,moreHours"; MyBusinessBusinessInformation.Accounts.Locations.List request= mybusinessaccountLocations.accounts().locations().list(accountName).setReadMask(readMask); ListLocationsResponse Response = request.execute(); List<Location>= Response.getLocations(); while (Response.getNextPageToken() != null) { locationsList.setPageToken(Response.getNextPageToken()); Response=request.execute(); locations.addAll(Response.getLocations()); } -- about the question in the comment you have asked this is what I have for placeId: | 4 | 7 |
69,852,812 | 2021-11-5 | https://stackoverflow.com/questions/69852812/how-to-add-a-percentage-computation-in-pandas-result | I have the following working code. I need to add a percentage column to monitor changes. I dont know much on how to do it in pandas. I need ideas on what part needs to be modified. import pandas as pd dl = [] with open('sampledata.txt') as f: for line in f: parts = line.split() # Cleaning data here.. Conversions to int/float etc, if not parts[3][:2].startswith('($'): parts.insert(3,'0') if len(parts) > 5: temp = ' '.join(parts[4:]) parts = parts[:4] + [temp] parts[1] = int(parts[1]) parts[2] = float(parts[2].replace(',', '')) parts[3] = float(parts[3].strip('($)')) dl.append(parts) headers = ['col1', 'col2', 'col3', 'col4', 'col5'] df = pd.DataFrame(dl,columns=headers) df = df.groupby(['col1','col5']).sum().reset_index() df = df.sort_values('col2',ascending=False) df['col4'] = '($' + df['col4'].astype(str) + ')' df = df[headers] print(df) sampledata.txt #-- Sample Data Source file alpha 1 54,00.01 ABC DSW2S bravo 3 500,000.00 ACDEF charlie 1 27,722.29 ($250.45) DGAS-CAS delta 2 11 ($10) SWSDSASS-CCSSW echo 5 143,299.00 ($101) ACS34S1 lima 6 45.00181 ($38.9) FGF5GGD-DDD falcon 3 0.1234 DSS2SFS3 echo 8 145,300 ($125.01) ACS34S1 charlie 10 252,336,733.383 ($492.06) DGAS-CAS romeo 12 980 ASDS SSSS SDSD falcon 5 9.19 DSS2SFS3 Current Output: #-- working result col1 col2 col3 col4 col5 4 echo 13 2.885990e+05 ($226.01) ACS34S1 7 romeo 12 9.800000e+02 ($0.0) ASDS SSSS SDSD 2 charlie 11 2.523645e+08 ($742.51) DGAS-CAS 5 falcon 8 9.313400e+00 ($0.0) DSS2SFS3 6 lima 6 4.500181e+01 ($38.9) FGF5GGD-DDD 1 bravo 3 5.000000e+05 ($0.0) ACDEF 3 delta 2 1.100000e+01 ($10.0) SWSDSASS-CCSSW 0 alpha 1 5.400010e+03 ($0.0) ABC DSW2S Improved Output: #-- with Additional Column for % col1 col2 col3 col4 col5 col6 4 echo 13 2.885990e+05 ($226.01) ACS34S1 60% #-- (5 + 8) = 13 7 romeo 12 9.800000e+02 ($0.0) ASDS SSSS SDSD 0% 2 charlie 11 2.523645e+08 ($742.51) DGAS-CAS 900% #-- (1 + 10) = 11 5 falcon 8 9.313400e+00 ($0.0) DSS2SFS3 66.67% #-- (3 + 5) = 8 6 lima 6 4.500181e+01 ($38.9) FGF5GGD-DDD 0% 1 bravo 3 5.000000e+05 ($0.0) ACDEF 0% 3 delta 2 1.100000e+01 ($10.0) SWSDSASS-CCSSW 0% 0 alpha 1 5.400010e+03 ($0.0) ABC DSW2S 0% | You can add the following lines just after your code: The function compute_percentage() is using the list variable dl. def compute_percentage(row): vl = [float(parts[1]) for parts in dl if parts[0] == row['col1']] i = round(100. * (vl[-1]-vl[0])/vl[0] if vl[0] != 0 else 0, 2) if float(int(i)) == i: i = int(i) return str(i) + '%' df['col6'] = df.apply(compute_percentage, axis=1) Output: col1 col2 col3 col4 col5 col6 4 echo 13 2.885990e+05 ($226.01) ACS34S1 60% 7 romeo 12 9.800000e+02 ($0.0) ASDS SSSS SDSD 0% 2 charlie 11 2.523645e+08 ($742.51) DGAS-CAS 900% 5 falcon 8 9.313400e+00 ($0.0) DSS2SFS3 66.67% 6 lima 6 4.500181e+01 ($38.9) FGF5GGD-DDD 0% 1 bravo 3 5.000000e+05 ($0.0) ACDEF 0% 3 delta 2 1.100000e+01 ($10.0) SWSDSASS-CCSSW 0% 0 alpha 1 5.400010e+03 ($0.0) ABC DSW2S 0% | 7 | 1 |
69,849,870 | 2021-11-5 | https://stackoverflow.com/questions/69849870/typeerror-load-missing-1-required-positional-argument-loader | I am trying to run this github repo found at this link: https://github.com/HowieMa/DeepSORT_YOLOv5_Pytorch After installing the requirements via pip install -r requirements.txt. I am running this in a python 3.8 virtual environment, on a dji manifold 2g which runs on an Nvidia jetson tx2. The following is the terminal output. $ python main.py --cam 0 --display Namespace(agnostic_nms=False, augment=False, cam=0, classes=[0], conf_thres=0.5, config_deepsort='./configs/deep_sort.yaml', device='', display=True, display_height=600, display_width=800, fourcc='mp4v', frame_interval=2, img_size=640, input_path='input_480.mp4', iou_thres=0.5, save_path='output/', save_txt='output/predict/', weights='yolov5/weights/yolov5s.pt') Initialize DeepSORT & YOLO-V5 Using CPU Using webcam 0 Traceback (most recent call last): File "main.py", line 259, in <module> with VideoTracker(args) as vdo_trk: File "main.py", line 53, in __init__ cfg.merge_from_file(args.config_deepsort) File "/home/dji/Desktop/targetTrackers/howieMa/DeepSORT_YOLOv5_Pytorch/utils_ds/parser.py", line 23, in merge_from_file self.update(yaml.load(fo.read())) TypeError: load() missing 1 required positional argument: 'Loader' I have found some suggestions on github, such as in here TypeError: load() missing 1 required positional argument: 'Loader' in Google Colab, which suggests to change yaml.load to yaml.safe_load This is the code block to modify: class YamlParser(edict): """ This is yaml parser based on EasyDict. """ def __init__(self, cfg_dict=None, config_file=None): if cfg_dict is None: cfg_dict = {} if config_file is not None: assert(os.path.isfile(config_file)) with open(config_file, 'r') as fo: cfg_dict.update(yaml.load(fo.read())) super(YamlParser, self).__init__(cfg_dict) def merge_from_file(self, config_file): with open(config_file, 'r') as fo: self.update(yaml.load(fo.read())) def merge_from_dict(self, config_dict): self.update(config_dict) However, changing yaml.load into yaml.safe_load leads me to this error instead $ python main.py --cam 0 --display Namespace(agnostic_nms=False, augment=False, cam=0, classes=[0], conf_thres=0.5, config_deepsort='./configs/deep_sort.yaml', device='', display=True, display_height=600, display_width=800, fourcc='mp4v', frame_interval=2, img_size=640, input_path='input_480.mp4', iou_thres=0.5, save_path='output/', save_txt='output/predict/', weights='yolov5/weights/yolov5s.pt') Initialize DeepSORT & YOLO-V5 Using CPU Using webcam 0 Done.. Camera ... Done. Create output file output/results.mp4 Illegal instruction (core dumped) Has anyone encountered anything similar ? Thank you ! | Try this: yaml.load(fo.read(), Loader=yaml.FullLoader) It seems that pyyaml>=5.1 requires a Loader argument. | 4 | 17 |
69,906,075 | 2021-11-9 | https://stackoverflow.com/questions/69906075/is-it-possible-to-maintain-type-information-when-unpacking-object-attributes | Imagine I have an object which is an instance of a class such as the following: @dataclass class Foo: bar: int baz: str I'm using dataclasses for convenience, but in the context of this question, there is no requirement that the class be a dataclass. Normally, if I want to unpack the attributes of such an object, I must implement __iter__, e.g. as follows: class Foo: ... def __iter__(self) -> Iterator[Any]: return iter(dataclasses.astuple(self)) bar, baz = Foo(1, "qux") However, from the perspective of a static type checker like pyright, I've now lost any type information for bar and baz, which it can only infer are of type Any. I could improve slightly by creating the iter tuple parameter manually: def __iter__(self) -> Iterator[Union[str, int]]: return iter((self.bar, self.baz)) But I still don't have specific types for bar and baz. I can annotate bar and baz and then use dataclasses.astuple directly as follows: bar: str baz: int bar, baz = dataclasses.astuple(Foo(1, "qux")) but that necessitates less readable multi-level list comprehensions such as bars: list[int] = [ bar for bar, _ in [dataclasses.astuple(foo) for foo in [(Foo(1, "qux"))]] ] and also ties me to dataclasses. Obviously, none of this is insurmountable. If I want to use a type checker, I can just not use the unpack syntax, but I would really like to if there's a clean way to do it. An answer that is specific to dataclasses, or better yet, attrs, is acceptable if a general method is not currently possible. | As juanpa.arrivillaga has pointed out, the assignment statements docs indicate that, in the case that the left hand side of an assignment statement is a comma separated list of one or more targets, The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets. Therefore, if one wants to unpack a bare object, one must necessarily implement __iter__, which will always have a return type of Iterator[Union[...]] or Iterator[SufficientlyGenericSubsumingType] when it includes multiple attribute types. A static type checker, therefore, cannot effectively reason about the specific types of unpacked variables. Presumably, when a tuple is on the right hand side of an assignment, even though the language specification indicates that it will be treated as an iterable, a static type checker can still reason effectively about the types of its constituents. As such, as juanpa.arrivillaga has also pointed out, a bespoke astuple method which emits a tuple[...] type is probably the best approach if one must unpack attributes, even though it does not avoid the pitfall of multi-level list comprehensions mentioned in the question. In terms of the question, we could now have: @dataclass class Foo: bar: int baz: str def astuple(self) -> tuple[int, str]: return self.bar, self.baz bar, baz = Foo(1, "qux").astuple() bars = [bar for bar, _ in [foo.astuple() for foo in [(Foo(1, "qux"))]]] Without any explicit target annotations, provided we're willing to write extra class boilerplate. Neither dataclasses's nor attrs's astuple functions return any better than tuple[Any, ...], so the targets must still be separately annotated if we opt to use those. However, for list comprehension, are these better than bars = [foo.bar for foo in [Foo(1, "qux")]] ? Probably not, in most cases. 
As a final note, attrs Why not? page mentions, in reference to "why not namedtuples?", that Since they are a subclass of tuples, namedtuples have a length and are both iterable and indexable. That’s not what you’d expect from a class and is likely to shadow subtle typo bugs. Iterability also implies that it’s easy to accidentally unpack a namedtuple which leads to hard-to-find bugs. I'm not sure I totally agree with either of those points, but something to consider for anyone else wanting to go this route. | 7 | 1 |
69,906,411 | 2021-11-9 | https://stackoverflow.com/questions/69906411/create-a-new-column-in-a-pandas-dataframe-from-existing-column-names | I want to deconstruct a pandas DataFrame, using column headers as a new data-column and create a list with all combinations of the row index and columns. Easier to show than explain: index_col = ["store1", "store2", "store3"] cols = ["January", "February", "March"] values = [[2,3,4],[5,6,7],[8,9,10]] df = pd.DataFrame(values, index=index_col, columns=cols) From this DataFrame I wish to get the following list: [['store1', 'January', 2], ['store1', 'February', 3], ['store1', 'March', 4], ['store2', 'January', 5], ['store2', 'February', 6], ['store2', 'March', 7], ['store3', 'January', 8], ['store3', 'February', 9], ['store3', 'March', 10]] Is there a convenient way to do this? | df.unstack().swaplevel().reset_index().values.tolist() #OR df.reset_index().melt(id_vars="index").values.tolist() # [['store1', 'January', 2], # ['store2', 'January', 5], # ['store3', 'January', 8], # ['store1', 'February', 3], # ['store2', 'February', 6], # ['store3', 'February', 9], # ['store1', 'March', 4], # ['store2', 'March', 7], # ['store3', 'March', 10]] With following, the order of elements will match the output in the question. df.transpose().unstack().reset_index().values.tolist() # [['store1', 'January', 2], # ['store1', 'February', 3], # ['store1', 'March', 4], # ['store2', 'January', 5], # ['store2', 'February', 6], # ['store2', 'March', 7], # ['store3', 'January', 8], # ['store3', 'February', 9], # ['store3', 'March', 10]] | 11 | 10 |
69,903,636 | 2021-11-9 | https://stackoverflow.com/questions/69903636/how-can-i-load-a-model-in-pytorch-without-having-to-remember-the-parameters-used | I am training a model in pytorch for which I have made a class like so: from torch import nn class myNN(nn.Module): def __init__(self, dense1=128, dense2=64, dense3=32, ...): self.MLP = nn.Sequential( nn.Linear(dense1, dense2), nn.ReLU(), nn.Linear(dense2, dense3), nn.ReLU(), nn.Linear(dense3, 1) ) ... In order to save it I am using: torch.save(model.state_dict(), checkpoint_model_path) and to load it I am using: model = myNN() # or with specified parameters model.load_state_dict(torch.load(model_file)) However, in order for this method to work I have to use the right values in myNN()'s constructor. That means that I would need to somehow remember or store which parameters (layer sizes) I have used in each case in order to properly load different models. Is there a flexible way to save/load models in pytorch where I would also read the size of the layers? E.g. by loading a myNN() object directly or somehow reading the layer sizes from the saved pickle file? I am hesitant to try the second method in Best way to save a trained model in PyTorch? due to the warnings mentioned there. Is there a better way to achieve what I want? | Indeed serializing the whole Python is quite a drastic move. Instead, you can always add user-defined items in the saved file: you can save the model's state along with its class parameters. Something like this would work: First save your arguments in the instance such that we can serialize them when saving the model: class myNN(nn.Module): def __init__(self, dense1=128, dense2=64, dense3=32): super().__init__() self.kwargs = {'dense1': dense1, 'dense2': dense2, 'dense3': dense3} self.MLP = nn.Sequential( nn.Linear(dense1, dense2), nn.ReLU(), nn.Linear(dense2, dense3), nn.ReLU(), nn.Linear(dense3, 1)) We can save the parameters of the model along with its initializer arguments: >>> torch.save([model.kwargs, model.state_dict()], path) Then load it: >>> kwargs, state = torch.load(path) >>> model = myNN(**kwargs) >>> model.load_state_dict(state) <All keys matched successfully> | 5 | 13 |
69,904,141 | 2021-11-9 | https://stackoverflow.com/questions/69904141/change-marker-style-by-a-dataframe-column-categorical-in-seaborn-stripplot | I was looking to visualise a categorical variable as marker style in seaborn stripplot, but it does not seem to be possible easily. Is there an easy way to do this. For example, I'm trying to run this code tips = sns.load_dataset("tips") sns.stripplot(x="day", y="total_bill", hue="time", style="sex", jitter=True, data=tips) which fails. An alternative is to use relplot which does provide the option but has no way to insert jitter which makes the visualisation less nice. sns.relplot(x="day", y="total_bill", hue="time", data=tips, style="sex") works providing this Is there any way of doing this using stripplot/catplot/swarmplot? EDIT: This question is related. However the solution there does not seem to allow generation of a legend for size (and is quite dated). | sns.relplot is a figure-level function which relies on the axes-level function sns.scatterplot. sns.scatterplot has a parameter x_jitter which unfortunately currently has no effect (seaborn 0.11.2). You can mimic the functionality by grasping the positions of the points, add some random jitter and assigning these positions again. Here is an example: from matplotlib import pyplot as plt import seaborn as sns import numpy as np tips = sns.load_dataset("tips") ax = sns.scatterplot(x="day", y="total_bill", hue="time", data=tips, style="sex") for points in ax.collections: vertices = points.get_offsets().data if len(vertices) > 0: vertices[:, 0] += np.random.uniform(-0.3, 0.3, vertices.shape[0]) points.set_offsets(vertices) xticks = ax.get_xticks() ax.set_xlim(xticks[0] - 0.5, xticks[-1] + 0.5) # the limits need to be moved to show all the jittered dots sns.move_legend(ax, bbox_to_anchor=(1.01, 1.02), loc='upper left') # needs seaborn 0.11.2 sns.despine() plt.tight_layout() plt.show() With sns.relplot you could iterate through all the subplots: g = sns.relplot(x="day", y="total_bill", hue="time", data=tips, style="sex") for ax in g.axes.flat: for points in ax.collections: vertices = points.get_offsets().data if len(vertices) > 0: vertices[:, 0] += np.random.uniform(-0.3, 0.3, vertices.shape[0]) points.set_offsets(vertices) xticks = ax.get_xticks() ax.set_xlim(xticks[0] - 0.5, xticks[-1] + 0.5) # the limits need to be moved to show all the jittered dots plt.show() | 4 | 5 |
69,900,954 | 2021-11-9 | https://stackoverflow.com/questions/69900954/when-creating-a-seaborn-heatmap-could-not-convert-string-to-float-valueerror | Hi everyone, I have the following dataframe and I want to create a heatmap from it with the following code. plt.figure(figsize=(10,10)) g = sns.heatmap( top_5_stations_hourly_total_traffic_by_time, square=True, cbar_kws={'fraction' : 0.01}, cmap='OrRd', linewidth=1 ) g.set_xticklabels(top_5_stations_hourly_total_traffic_by_time.STATION, rotation=45, horizontalalignment='right') g.set_yticklabels(top_5_stations_hourly_total_traffic_by_time.TIME, rotation=45, horizontalalignment='right') But it gives the following error: ValueError: could not convert string to float: '125 ST' | Pivot your data before calling heatmap: df_heatmap = df.pivot("STATION", "TIME", "HOURLY_TOTAL_TRAFFIC") >>> sns.heatmap(df_heatmap) | 5 | 3 |
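A compact end-to-end sketch of the fix with made-up station data (the values below are assumptions), using keyword arguments to pivot so only numeric traffic counts reach the heatmap:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame({
    'STATION': ['125 ST', '125 ST', '34 ST', '34 ST'],
    'TIME': ['08:00', '12:00', '08:00', '12:00'],
    'HOURLY_TOTAL_TRAFFIC': [1200, 900, 2300, 1800],
})
heat = df.pivot(index='STATION', columns='TIME', values='HOURLY_TOTAL_TRAFFIC')
sns.heatmap(heat, cmap='OrRd')   # the string labels now live on the axes, not in the values
plt.show()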
69,840,389 | 2021-11-4 | https://stackoverflow.com/questions/69840389/what-functions-or-modules-require-contiguous-input | As I understand, you need to call tensor.contiguous() explicitly whenever some function or module needs a contiguous tensor. Otherwise you get exceptions like: RuntimeError: invalid argument 1: input is not contiguous at .../src/torch/lib/TH/generic/THTensor.c:231 (E.g. via.) What functions or modules require contiguous input? Is this documented? Or phrased differently, what are situations where you need to call contiguous? E.g. Conv1d, does it require contiguous input? The documentation does not mention this. When the documentation does not mention this, this would always imply that it does not require contiguous input? (I remember in Theano, any op getting some non-contiguous input, which required it to be contiguous, would just convert it automatically.) | After additional digging under the hood through source_code, it seems that view is the only function that explicitly causes an exception when a non-contiguous input is passed. One would expect any operation using Tensor Views to have the potential of failing with non-contiguous input. In reality, it seems to be the case that most or all of these functions are: (a.) implemented with support for non-contiguous blocks (see example below), i.e. the tensor iterators can handle multiple pointers to the various chunks of the data in memory, perhaps at the expense of performance, or else (b.) a call to .contiguous() wraps the operation (One such example shown here for torch.tensor.diagflat()). reshape is essentially the contiguous()-wrapped form of view. By extension, it seems, the main benefit of view over reshape would be the explicit Exception when tensors are unexpectedly non-contiguous versus code silently handling this discrepancy at the cost of performance. This conclusion is based on: Testing of all Tensor View ops with non-contiguous inputs. Source code analysis of other non-Tensor View functions of interest (e.g. Conv1D, which includes calls to contiguous as necessary in all non-trivial input cases). Inference from pytorch's design philosophy as a simple, at times slow, easy-to-use language. Cross-posting on Pytorch Discuss. Extensive review of web reported errors involving non-contiguous errors, all of which revolve around problematic calls to view. I did not comprehensively test all pytorch functions, as there are thousands. EXAMPLE OF (a.): import torch import numpy import time # allocation start = time.time() test = torch.rand([10000,1000,100]) torch.cuda.synchronize() end = time.time() print("Allocation took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # view of a contiguous tensor start = time.time() test.view(-1) torch.cuda.synchronize() end = time.time() print("view() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # diagonal() on a contiguous tensor start = time.time() test.diagonal() torch.cuda.synchronize() end = time.time() print("diagonal() took {} sec. Data is at address {}. 
Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # Diagonal and a few tensor view ops on a non-contiguous tensor test = test[::2,::2,::2] # indexing is a Tensor View op resulting in a non-contiguous output print(test.is_contiguous()) # False start = time.time() test = test.unsqueeze(-1).expand([test.shape[0],test.shape[1],test.shape[2],100]).diagonal() torch.cuda.synchronize() end = time.time() print("non-contiguous tensor ops() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # reshape, which requires a tensor copy operation to new memory start = time.time() test = test.reshape(-1) + 1.0 torch.cuda.synchronize() end = time.time() print("reshape() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) The following is output: Allocation took 4.269254922866821 sec. Data is at address 139863636672576. Contiguous: True view() took 0.0002810955047607422 sec. Data is at address 139863636672576. Contiguous: True diagonal() took 6.532669067382812e-05 sec. Data is at address 139863636672576. Contiguous: True False non-contiguous tensor ops() took 0.00011277198791503906 sec. Data is at address 139863636672576. Contiguous: False reshape() took 0.13828253746032715 sec. Data is at address 94781254337664. Contiguous: True A few tensor view operations in block 4 are performed on a non-contiguous input tensor. The operation runs without error, maintains the data in the same memory addresses, and runs relatively faster than an operation requiring a copy to new memory addresses (such as reshape in block 5). Thus, it seems these operations are implemented in a way that handles non-contiguous inputs without requiring a data copy. | 6 | 3 |
69,898,774 | 2021-11-9 | https://stackoverflow.com/questions/69898774/how-to-update-multiple-objects-in-django | I'd like to update more than one object at the same time, whenever the registration date is more than 6 days old. The idea is to update issue_status from 'On Going' to 'Pending' for each such object. Is it necessary to iterate over them? Below is my current code and the error I get: models.py class MaintenanceIssue(models.Model): issue_status = models.CharField(max_length=30, choices=[('pending', 'Pending'), ('on going', 'On going'), ('done', 'Done')]) register_dt = models.DateTimeField(blank=True, null=True) @property def pending_issue(self): issue_time_diff = (datetime.now() - self.register_dt).days return issue_time_diff views.py: on_going_issues = MaintenanceIssue.objects.get(issue_status='On Going') if on_going_issues.pending_issue > 6: on_going_issues.issue_status = 'Pending' on_going_issues.save() get() returned more than one MaintenanceIssue -- it returned 61! | At: on_going_issues = MaintenanceIssue.objects.get(issue_status='On Going') if on_going_issues.pending_issue > 6: on_going_issues.issue_status = 'Pending' on_going_issues.save() you should filter by the field and then loop through each result: on_going_issues = MaintenanceIssue.objects.filter(issue_status='On Going') for one in on_going_issues: if one.pending_issue > 6: one.issue_status = "Pending" one.save() | 4 | 0 |
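Since the six-day age check can be expressed as a date filter, the loop can usually be collapsed into a single UPDATE with the queryset's update() method. A hedged sketch: it assumes register_dt is comparable to timezone.now() and that 'On Going'/'Pending' are the stored status values (the model's choices suggest they may actually be lowercase).

from datetime import timedelta
from django.utils import timezone

cutoff = timezone.now() - timedelta(days=6)
(MaintenanceIssue.objects
    .filter(issue_status='On Going', register_dt__lte=cutoff)   # registered at least 6 days ago
    .update(issue_status='Pending'))                            # one SQL UPDATE, no per-row save()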
69,898,015 | 2021-11-9 | https://stackoverflow.com/questions/69898015/unexpected-type-warning-raised-with-list-in-pycharm | Right to the point, here below is the sample code which will raise error in PyCharm: list1 = [0] * 5 list1[0] = '' list2 = [0 for n in range(5)] list2[0] = '' PyCharm then return an error on both line 2 and line 4 as below: Unexpected type(s):(int, str)Possible type(s):(SupportsIndex, int)(slice, Iterable[int]) Running the code would not cause any error, but PyCharm keep raising the above message when I am coding. Why PyCharm would give this error and how can I solve this with cleanest code? | In your case PyCharm sees you first line and thinks that the type of list is List[int]. I mean it is a list of integers. You may tell that you list is not int-typed and can accept any value this way: from typing import Any, List list1: List[Any] = [0] * 5 list1[0] = '' I used typing module just to explain the idea. Here is a simple way to announce that list1 is just a list of anything: list1: list = [0] * 5 list1[0] = '' Also consider to use as strict typing as possible. It may help you to prevent bugs. If you need both strings and ints then use this: from typing import List, Union list1: List[Union[int, str]] = [0] * 5 # Starting from Python 3.10 you may use List[int|str] instead Docs here: https://docs.python.org/3/library/typing.html | 5 | 7 |
69,897,460 | 2021-11-9 | https://stackoverflow.com/questions/69897460/how-to-make-matplotlib-markers-colorblind-friendly-in-a-simple-way | Currently I'm using command plt.errorbar(X,Y,yerr=myYerr, fmt="o", alpha=0.5,capsize=4) and I get default marker colours: But what should I do to force matplotlib to be more colorblind-friendly? | According to this [1] you can use the predefined colorblind style. It should be as simple as: import matplotlib.pyplot as plt plt.style.use('tableau-colorblind10') [1] https://matplotlib.org/stable/users/prev_whats_new/whats_new_2.2.html#new-style-colorblind-friendly-color-cycle Check out the last image from this link as well: https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html | 6 | 9 |
69,879,246 | 2021-11-8 | https://stackoverflow.com/questions/69879246/no-module-named-wtforms-compat | While we are trying to execute with python 3.6.8 version getting below module error from wtforms.compat import string_types, text_type ModuleNotFoundError: No module named 'wtforms.compat' when i tried installing or upgrading wtforms still it shows the same error Can any one pls suggest | Noticed this error today while running our Airflow 1.10.12 builds: from wtforms.compat import text_type ModuleNotFoundError: No module named 'wtforms.compat' Apparently, the issue has to do with the latest version of wtforms released yesterday (3.0.0). We managed to get around it by pinning it to the previous version: wtforms==2.3.3. Editing just to add more information: compat.py was completely removed once support for Python < 3.6 was dropped (see PR). If you are running Python >= 3.6, you can also work with the latest wtforms by simply using str instead of text_type and string_types, since those were just aliases: if sys.version_info[0] >= 3: text_type = str string_types = (str,) izip = zip And the imports should not be needed anymore. If running Python < 3.6, you may need to stick with wtforms<=2.3.3. | 20 | 35 |
69,879,919 | 2021-11-8 | https://stackoverflow.com/questions/69879919/how-to-add-stretch-for-qgridlayout-in-pyqt5 | I created widgets in a grid-layout. The widgets are stretching based on the window. Is it possible to avoid the stretching and align them as shown in picture below? I created a code to achieve this, but I feel it is not a straightforward solution. If there are any better solutions to achieve this, please share them. Grid layout result: from PyQt5.QtWidgets import * app =QApplication([]) window=QWidget() GL=QGridLayout(window) GL.addWidget(QPushButton('R1C1'),0,0) GL.addWidget(QPushButton('R1C2'),0,1) GL.addWidget(QPushButton('R2C1'),1,0) GL.addWidget(QPushButton('R1C1'),1,1) window.showMaximized() app.exec_() Required Result: My code: from PyQt5.QtWidgets import * app =QApplication([]) window=QWidget() VL=QVBoxLayout(window);HL=QHBoxLayout();VL.addLayout(HL) GL=QGridLayout();HL.addLayout(GL) GL.addWidget(QPushButton('R1C1'),0,0) GL.addWidget(QPushButton('R1C2'),0,1) GL.addWidget(QPushButton('R2C1'),1,0) GL.addWidget(QPushButton('R1C1'),1,1) HL.addStretch();VL.addStretch() window.showMaximized() app.exec_() | The QGridLaout class doesn't have any simple convenience methods like QBoxLayout.addStretch() to do this. But the same effect can be achieved by adding some empty, stretchable rows/columns, like this: GL.setRowStretch(GL.rowCount(), 1) GL.setColumnStretch(GL.columnCount(), 1) | 4 | 8 |
69,886,443 | 2021-11-8 | https://stackoverflow.com/questions/69886443/error-in-python-using-wikipedia-module-wikipedia-exceptions-pageerror-page-id | newbie in Python here when I run this simple code (to load Harry Potter page and simply print it) it returns me Error with the wrong name I wanted to search (harry plotter) can anyone tell me how to fix? thank you! import wikipedia page = wikipedia.page("Harry Potter") print(page.summary) Error message: Traceback (most recent call last): File "C:\Users\Lidor\PycharmProjects\pythonProject\search_engine.py", line 7, in <module> page = wikipedia.page(search[0]) File "C:\Users\Lidor\PycharmProjects\pythonProject\venv\lib\site- packages\wikipedia\wikipedia.py", line 276, in page return WikipediaPage(title, redirect=redirect, preload=preload) File "C:\Users\Lidor\PycharmProjects\pythonProject\venv\lib\site- packages\wikipedia\wikipedia.py", line 299, in __init__ self.__load(redirect=redirect, preload=preload) File "C:\Users\Lidor\PycharmProjects\pythonProject\venv\lib\site- packages\wikipedia\wikipedia.py", line 345, in __load raise PageError(self.title) wikipedia.exceptions.PageError: Page id "harry plotter" does not match any pages. Try another id! | This appears to be a strange result of auto_suggest being set by default. If you do wikipedia.page("Harry Potter", auto_suggest=False) It works fine. Otherwise it autocompletes potter to plotter, hence the error. | 6 | 13 |
69,875,734 | 2021-11-7 | https://stackoverflow.com/questions/69875734/how-to-hide-dataframe-index-on-streamlit | I want to use some pandas style resources and I want to hide table indexes on Streamlit. I tried this: import streamlit as st import pandas as pd table1 = pd.DataFrame({'N':[10, 20, 30], 'mean':[4.1, 5.6, 6.3]}) st.dataframe(table1.style.hide_index().format(subset=['mean'], decimal=',', precision=2).bar(subset=['mean'], align="mid")) but despite the .hide_index() I still got this: Any ideas to solve this? | The documentation for st.dataframe shows "Styler support is experimental!" and maybe this is the problem. But I can get a table without an index if I use .to_html() and st.write() import streamlit as st import pandas as pd df = pd.DataFrame({'N':[10, 20, 30], 'mean':[4.1, 5.6, 6.3]}) styler = df.style.hide_index().format(subset=['mean'], decimal=',', precision=2).bar(subset=['mean'], align="mid") st.write(styler.to_html(), unsafe_allow_html=True) #st.write(df.to_html(index=False), unsafe_allow_html=True) | 9 | 5 |
69,882,397 | 2021-11-8 | https://stackoverflow.com/questions/69882397/check-if-string-is-in-another-column-pandas | Below is my DF df= pd.DataFrame({'col1': ['[7]', '[30]', '[0]', '[7]'], 'col2': ['[0%, 7%]', '[30%]', '[30%, 7%]', '[7%]']}) col1 col2 [7] [0%, 7%] [30] [30%] [0] [30%, 7%] [7] [7%] The aim is to check if col1 value is contained in col2 below is what I've tried df['test'] = df.apply(lambda x: str(x.col1) in str(x.col2), axis=1) Below is the expected output col1 col2 col3 [7] [0%, 7%] True [30] [30%] True [0] [30%, 7%] False [7] [7%] True | You can also replace the square brackets with word boundaries \b and use re.search like in import re #... df.apply(lambda x: bool(re.search(x['col1'].replace("[",r"\b").replace("]",r"\b"), x['col2'])), axis=1) # => 0 True # 1 True # 2 False # 3 True # dtype: bool This will work because \b7\b will find a match in [0%, 7%] as 7 is neither preceded nor followed with letters, digits or underscores. There won't be any match found in [30%, 7%] as \b0\b does not match a zero after a digit (here, 3). | 6 | 2 |
69,868,258 | 2021-11-6 | https://stackoverflow.com/questions/69868258/how-to-pass-pandas-dataframe-to-airflow-tasks | I'm learning how to use Airflow to build a machine learning pipeline. But I didn't find a way to pass a pandas dataframe generated by one task into another task... It seems that I need to convert the data to JSON format or save the data in a database within each task? In the end, I had to put everything in one task... Is there any way to pass a dataframe between Airflow tasks? Here's my code: from datetime import datetime import pandas as pd import numpy as np import os import lightgbm as lgb from sklearn.model_selection import train_test_split from sklearn.model_selection import StratifiedKFold from sklearn.metrics import balanced_accuracy_score from airflow.decorators import dag, task from airflow.operators.python_operator import PythonOperator @dag(dag_id='super_mini_pipeline', schedule_interval=None, start_date=datetime(2021, 11, 5), catchup=False, tags=['ml_pipeline']) def baseline_pipeline(): def all_in_one(label): path_to_csv = os.path.join('~/airflow/data','leaf.csv') df = pd.read_csv(path_to_csv) y = df[label] X = df.drop(label, axis=1) folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=10) lgbm = lgb.LGBMClassifier(objective='multiclass', random_state=10) metrics_lst = [] for train_idx, val_idx in folds.split(X, y): X_train, y_train = X.iloc[train_idx], y.iloc[train_idx] X_val, y_val = X.iloc[val_idx], y.iloc[val_idx] lgbm.fit(X_train, y_train) y_pred = lgbm.predict(X_val) cv_balanced_accuracy = balanced_accuracy_score(y_val, y_pred) metrics_lst.append(cv_balanced_accuracy) avg_performance = np.mean(metrics_lst) print(f"Avg Performance: {avg_performance}") all_in_one_task = PythonOperator(task_id='all_in_one_task', python_callable=all_in_one, op_kwargs={'label':'species'}) all_in_one_task # dag invocation pipeline_dag = baseline_pipeline() | Although it is used in many ETL tasks, Airflow is not the right choice for that kind of operation; it is intended for workflow, not dataflow. But there are many ways to do that without passing the whole dataframe between tasks. You can pass information about the data using xcom.push and xcom.pull: a. Save the outcome of the first task somewhere (json, csv, etc.) b. Pass information about the saved file (e.g. file name, path) via xcom.push. c. Read this filename using xcom.pull in the other task and perform the needed operation. Or do everything above using database tables: a. In task_1 you can load data from table_1 into a dataframe, process it and save it in another table_2 (df.to_sql()). b. Pass the name of the table using xcom.push. c. In the other task get table_2 using xcom.pull and read it with pd.read_sql(). You can find information on how to use XCom in the Airflow examples. Example: https://github.com/apache/airflow/blob/main/airflow/example_dags/tutorial_etl_dag.py IMHO there are many other better ways; I have just written what I tried. | 11 | 17 |
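A minimal sketch of the first option above (persist the dataframe and pass only its path through XCom), using the TaskFlow API the question already imports; the scratch path, task names and DAG settings are assumptions, and both tasks must see the same filesystem:

```python
from datetime import datetime
import pandas as pd
from airflow.decorators import dag, task


@dag(dag_id="pass_dataframe_by_path", schedule_interval=None,
     start_date=datetime(2021, 11, 5), catchup=False)
def pass_dataframe_by_path():

    @task
    def prepare_data() -> str:
        # Persist the dataframe and return only its location; the returned
        # string is what travels through XCom, not the dataframe itself.
        df = pd.read_csv("~/airflow/data/leaf.csv")
        path = "/tmp/leaf_prepared.csv"  # assumed scratch location
        df.dropna().to_csv(path, index=False)
        return path

    @task
    def train_model(path: str) -> None:
        # The downstream task receives the path from XCom and reloads the data.
        df = pd.read_csv(path)
        print(f"training on {len(df)} rows")

    train_model(prepare_data())


pipeline_dag = pass_dataframe_by_path()
```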
69,880,739 | 2021-11-8 | https://stackoverflow.com/questions/69880739/numpy-concatenate-behaviour-how-to-concatenate-example-correctly | I have the following multi-dimensional array: windows = array([[[[[[0., 0.], [1., 0.]], [[0., 0.], [1., 0.]], [[0., 0.], [1., 0.]]], [[[0., 1.], [0., 0.]], [[0., 1.], [0., 0.]], [[1., 0.], [0., 0.]]], [[[1., 0.], [0., 0.]], [[0., 1.], [0., 0.]], [[0., 1.], [0., 0.]]]]]]) print(windows.shape) (1, 1, 3, 3, 2, 2) # (n, d, a, b, c, c) w = a * (c*c), h = b * (c*c) I want to get the following resulting array: mask = array([[ [[0., 0., 0., 0., 0., 0.], [1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 1., 0.], [0., 0., 0., 0., 0., 0.], [1., 0., 0., 1., 0., 1.], [0., 0., 0., 0., 0., 0.]] ]], dtype=np.float32) print(mask.shape) (1, 1, 6, 6) # (n, d, w, h) Basically, I want to squeeze the last 4 dimensions into a 2-d matrix so the final shape becomes (n, d, w, h), in this case (1, 1, 6, 6). I tried np.concatenate(windows, axis = 2), but it didn't concatenate along the 2nd dimension and for some reason reduced the first 'n' dimension (although I set axis = 2). Additional information: windows is the result of the following code snippet windows = np.lib.stride_tricks.sliding_window_view(arr, (c, c), axis=(-2,-1), writeable = True) # arr.shape == mask.shape windows = windows[:, :, ::c, ::c] # these are non-overlapping windows of arr with size (c,c) windows = ... # some modifications of windows Now I want to build from these windows an array with the shape of arr.shape; this array is called mask in the example above. A simple reshape doesn't work because it returns the elements in the wrong order. | IIUC, you want to merge dimensions 2+4 and 3+5, an easy way would be to swapaxes 4 and 5 (or -3 and -2), and reshape to (1,1,6,6): windows.swapaxes(-2,-3).reshape(1,1,6,6) output: array([[[[0., 0., 0., 0., 0., 0.], [1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 1., 0.], [0., 0., 0., 0., 0., 0.], [1., 0., 0., 1., 0., 1.], [0., 0., 0., 0., 0., 0.]]]]) | 5 | 2 |
69,854,335 | 2021-11-5 | https://stackoverflow.com/questions/69854335/optimize-the-calculation-of-horizontal-and-vertical-adjacency-using-numpy | I have following cells: cells = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 0, 1], [1, 0, 0], [1, 1, 1]]) and I want to calculate horizontal and vertical adjacencies to come to this result: # horizontal adjacency array([[3, 2, 1], [2, 1, 0], [1, 0, 0], [1, 0, 1], [1, 0, 0], [3, 2, 1]]) # vertical adjacency array([[6, 2, 1], [5, 1, 0], [4, 0, 0], [3, 0, 1], [2, 0, 0], [1, 1, 1]]) The actual sollution looks like this: def get_horizontal_adjacency(cells): adjacency_horizontal = np.zeros(cells.shape, dtype=int) for y in range(cells.shape[0]): span = 0 for x in reversed(range(cells.shape[1])): if cells[y, x] > 0: span += 1 else: span = 0 adjacency_horizontal[y, x] = span return adjacency_horizontal def get_vertical_adjacency(cells): adjacency_vertical = np.zeros(cells.shape, dtype=int) for x in range(cells.shape[1]): span = 0 for y in reversed(range(cells.shape[0])): if cells[y, x] > 0: span += 1 else: span = 0 adjacency_vertical[y, x] = span return adjacency_vertical The Algorithm is basically (for horizontal adjacency): loop throgh rows loop backward throgh columns if the x, y value of cells is not zero, add 1 to the actual span if the x, y value of cells is zero, reset actual span to zero set the span as new x, y value of the resulting array Since I need to loop two times over all array elements this is slow for bigger arrays (e.g. images). Is there a way to improve the algorithm using vectorization or some other numpy magic? Summary: Great suggestions have been made by joni and Mark Setchell! I created a small Repo with a sample image and a python file with the comparisons. The results are astonishing: Original Approach: 3.675 s Using Numba: 0.002 s Using Cython: 0.005 s | I had a really quick attempt at this with Numba but have not checked it too thoroughly though the results seem about right: #!/usr/bin/env python3 # https://stackoverflow.com/q/69854335/2836621 # magick -size 1920x1080 xc:black -fill white -draw "circle 960,540 960,1040" -fill black -draw "circle 960,540 960,800" a.png import cv2 import numpy as np import numba as nb def get_horizontal_adjacency(cells): adjacency_horizontal = np.zeros(cells.shape, dtype=int) for y in range(cells.shape[0]): span = 0 for x in reversed(range(cells.shape[1])): if cells[y, x] > 0: span += 1 else: span = 0 adjacency_horizontal[y, x] = span return adjacency_horizontal @nb.jit('void(uint8[:,::1], int32[:,::1])',parallel=True) def nb_get_horizontal_adjacency(cells, result): for y in nb.prange(cells.shape[0]): span = 0 for x in range(cells.shape[1]-1,-1,-1): if cells[y, x] > 0: span += 1 else: span = 0 result[y, x] = span return # Load image im = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE) %timeit get_horizontal_adjacency(im) result = np.zeros((im.shape[0],im.shape[1]),dtype=np.int32) %timeit nb_get_horizontal_adjacency(im, result) The timings are good, showing 4000x speed-up, if it works correctly: In [15]: %timeit nb_get_horizontal_adjacency(im, result) 695 µs ± 9.12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) In [17]: %timeit get_horizontal_adjacency(im) 2.78 s ± 44.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Input Input image was created in 1080p dimensions, i.e. 1920x1080, with ImageMagick using: magick -size 1920x1080 xc:black -fill white -draw "circle 960,540 960,1040" -fill black -draw "circle 960,540 960,800" a.png Output (contrast adjusted) | 6 | 4 |
69,879,845 | 2021-11-8 | https://stackoverflow.com/questions/69879845/how-to-plot-my-pandas-dataframe-in-matplotlib | I have the following code: import matplotlib.pyplot as plt import numpy as np import pandas as pd data = pd.read_csv("Ari_atlag.txt", sep = '\t', header = 0) #Num_array = pd.DataFrame(data).to_numpy() print(data.head()) data.plot() #data.columns = ['Date', 'Number_of_test', 'Avarage_of_ARI'] #print(Num_array) plt.show() Output: Date Number_of_test Avarage_of_ARI 0 2011-01 22 0.568734 1 2011-02 5 0.662637 2 2011-03 0 0.000000 3 2011-04 3 0.307692 4 2011-05 6 0.773611 Process finished with exit code 0 and the plot. But with this code the x axis of the plot is the index, and I want Date on the x axis. How can I plot Date against Number_of_test and Date against Avarage_of_ARI? I think I somehow need to convert the date strings to dates, but I don't know how to do that. | Use x='Date' as a parameter of plot: df.plot(x='Date') plt.show() | 5 | 4 |
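To plot Date against both value columns, as the question also asks, a sketch along these lines should work (column names are taken from the printed head above; parsing the dates is optional but gives a proper time axis):

```python
import matplotlib.pyplot as plt
import pandas as pd

data = pd.read_csv("Ari_atlag.txt", sep="\t", header=0)

# Optional: turn the 'YYYY-MM' strings into real dates for a time axis
data["Date"] = pd.to_datetime(data["Date"])

# Both series against Date in one figure...
data.plot(x="Date", y=["Number_of_test", "Avarage_of_ARI"])

# ...or each in its own subplot
data.plot(x="Date", y=["Number_of_test", "Avarage_of_ARI"], subplots=True)

plt.show()
```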
69,876,148 | 2021-11-7 | https://stackoverflow.com/questions/69876148/is-there-an-efficient-way-to-compare-two-dataframes-of-different-sizes | I found this post, but it's not quite my scenario. Is there an efficient way of comparing two data frames The reason I want to compare two dataframes is that I am looking for changes that may have occured. (Think "audit"). The two frames have exactly the same column layout, just that one may have more or less rows than the other, and values may have changed in two of the columns. ID Period Price OtherID 0 10100001 2019-10-01 8995.00 ABBD 1 38730001 2019-11-01 38227.71 EIRU 2 30100453 2019-12-01 22307.00 DDHF 3 92835543 2020-01-01 2310.00 DDGF 4 66453422 2020-02-01 12113.29 DGFH ID Period Price OtherID 0 10100001 2019-10-01 5.00 ABBD 1 38730001 2019-11-01 38227.71 XXXX 2 30100453 2019-12-01 22307.00 DDHF 3 92835543 2020-01-01 2310.00 DDGF 4 66453422 2020-02-01 12113.29 DGFH 5 22223422 2020-02-01 123.29 HHYG The two columns that I am suspicious about are "Price" and "OtherID". I want to find any changes in the Price or OtherID. I see three scenarios: 1. row has been added 2. row has been deleted 3. row doesn't match I can iterate through it all, but I'm wondering if there is some pandas magic out there that will do it in one fell swoop. The output I seek is something like this: ID Period Analysis 0 10100001 2019-10-01 Changed 1 38730001 2019-11-01 Changed 3 22223422 2020-02-01 New And just to be clear, ID by itself is not unique. And Period by itself is also not unique. The two together are. | Use compare after merge your 2 dataframes on ID and Period: out = pd.merge(df1, df2, on=['ID', 'Period'], how='outer', suffixes=('_df1', '_df2')).set_index(['ID', 'Period']) out.columns = pd.MultiIndex.from_tuples(out.columns.str.split('_').map(tuple)) \ .swaplevel() out = out['df1'].compare(out['df2']) Output: >>> out Price OtherID self other self other ID Period 10100001 2019-10-01 8995.0 5.00 NaN NaN 38730001 2019-11-01 NaN NaN EIRU XXXX 22223422 2020-02-01 NaN 123.29 NaN HHYG # Summarize >>> out.swaplevel(axis=1)['self'].isna().all(axis=1) \ .replace({True: 'New', False: 'Changed'}) ID Period 10100001 2019-10-01 Changed 38730001 2019-11-01 Changed 22223422 2020-02-01 New dtype: object If you append keep_shape=True parameter: >>> out['df1'].compare(out['df2'], keep_shape=True) Price OtherID self other self other ID Period 10100001 2019-10-01 8995.0 5.00 NaN NaN 38730001 2019-11-01 NaN NaN EIRU XXXX 30100453 2019-12-01 NaN NaN NaN NaN 92835543 2020-01-01 NaN NaN NaN NaN 66453422 2020-02-01 NaN NaN NaN NaN 22223422 2020-02-01 NaN 123.29 NaN HHYG | 4 | 7 |
69,876,305 | 2021-11-7 | https://stackoverflow.com/questions/69876305/pytorch-automatically-determine-the-input-shape-of-linear-layer-after-conv1d | I want to build a model with several Conv1d layers followed by several Linear layers. Conv1d layers will work for data of any given length, the problem comes at the first Linear layer, because the data length is unknown at initialization time. Every time the length of the input data changes, the output size of Conv1d layers will change, hence a change in the required in_features of the first Linear layer. Note: I learned CNN and I am aware of how to calculate the output dimensions by hand. I am looking for a programmatic way to determine it, because I have to experiment many times with different length of input data. Question: In pytorch, how do you automatically figure out the output dimension after many Conv1d layers and set the in_features for the following Linear layer? | You can use the builtin nn.LazyLinear which will find the in_features on the first inference and initialize the appropriate number of weights accordingly: linear = nn.LazyLinear(out_features) | 6 | 11 |
69,875,073 | 2021-11-7 | https://stackoverflow.com/questions/69875073/confusion-matrix-valueerror-classification-metrics-cant-handle-a-mix-of-binary | I'm currently trying to make a confusion matrix for my neural network model, but keep getting this error: ValueError: Classification metrics can't handle a mix of binary and continuous targets. I have a peptide dataset that I'm using with 100 positive and 100 negative examples, and the labels are 1s and 0s. I've converted each peptide into a Word2Vec embedding that was put into a ML model and trained. This is my code: pos = "/content/drive/MyDrive/pepfun/Training_format_pos (1).txt" neg = "/content/drive/MyDrive/pepfun/Training_format_neg.txt" # pos sequences extract into list f = open(pos, 'r') file_contents = f.read() data = file_contents f.close() newdatapos = data.splitlines() print(newdatapos) # neg sequences extract into list f2 = open(neg, 'r') file_contents2 = f2.read() data2 = file_contents2 f2.close() newdataneg = data2.splitlines() print(newdataneg) !pip install rdkit-pypi import rdkit from rdkit import Chem # set up embeddings import nltk from gensim.models import Word2Vec import multiprocessing EMB_DIM = 4 # embeddings pos w2vpos = Word2Vec([newdatapos], size=EMB_DIM, min_count=1) sequez = "VVYPWTQRF" w2vpos[sequez].shape words=list(w2vpos.wv.vocab) vectors = [] for word in words: vectors.append(w2vpos[word].tolist()) print(len(vectors)) print(vectors[1]) data = np.array(vectors) # embeddings neg w2vneg = Word2Vec([newdataneg], size=EMB_DIM, min_count=1) sequen = "GIGKFLHSAGKFGKAFLGEVMKS" w2vneg[sequen].shape wordsneg = list(w2vneg.wv.vocab) vectorsneg = [] for word in wordsneg: vectorsneg.append(w2vneg[word].tolist()) allvectors = vectorsneg + vectors print(len(allvectors)) arrayvectors = np.array(allvectors) labels = [] for i in range (100): labels.append(1) print(labels) for i in range (100): labels.append(0) print(labels) print(len(labels)) import seaborn as sns !pip install keras import keras from pylab import rcParams import matplotlib.pyplot as plt from matplotlib import rc from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, classification_report from sklearn.utils import shuffle import numpy as np import pandas as pd import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader from sklearn.preprocessing import StandardScaler !pip install tensorflow==2.7.0 import tensorflow as tf from keras import metrics from keras.models import Sequential from keras.layers import Dense from keras.layers import Conv3D, Flatten, Dropout import sklearn a = sklearn.utils.shuffle(arrayvectors, random_state=1) b = sklearn.utils.shuffle(labels, random_state=1) dfa = pd.DataFrame(a, columns=None) dfb = pd.DataFrame(b, columns=None) X = dfa.iloc[:] y = dfb.iloc[:] X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=300) X_train = np.asarray(X_train) X_test = np.asarray(X_test) y_train = np.asarray(y_train) y_test = np.asarray(y_test) y_train = y_train.astype(np.float32) y_test = y_test.astype(np.float32) # train data & test data tensor conversion class trainData(Dataset): def __init__(self, X_data, y_data): self.X_data = X_data self.y_data = y_data def __getitem__(self, index): return self.X_data[index], self.y_data[index] def __len__ (self): return len(self.X_data) train_data = trainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train)) ## test data class testData(Dataset): def __init__(self, X_data): 
self.X_data = X_data def __getitem__(self, index): return self.X_data[index] def __len__ (self): return len(self.X_data) test_data = testData(torch.FloatTensor(X_test)) train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True) test_loader = DataLoader(test_data, batch_size=1) # make model model = Sequential() model.add(Dense(64, activation='relu', input_shape=(4,))) model.add(Dropout(0.1)) model.add(Dense(32, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(16, input_dim=1, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(12,activation='relu')) model.add(Dropout(0.1)) model.add(Dense(1,activation='sigmoid')) model.summary() model.compile(loss='binary_crossentropy',optimizer='RMSprop', metrics=['accuracy','AUC']) history = model.fit(X_train, y_train, epochs=2000,batch_size=64, validation_data = (X_test, y_test), validation_batch_size=64) from sklearn.metrics import confusion_matrix, classification_report print(y_pred.round) print(classification_report(y_test,y_pred)) I've tried printing my y_pred value to see the problem. This is what I get: [[6.0671896e-01] [9.9999785e-01] [1.6576621e-01] [9.9999899e-01] [5.6016445e-04] [2.4935007e-02] [4.4204036e-11] [2.8884350e-11] [6.3217885e-05] [4.7181606e-02] [9.9742711e-03] [1.0780278e-01] [7.0868194e-01] [2.0298421e-02] [9.5819527e-01] [1.4784497e-01] [1.7605269e-01] [9.9643111e-01] [4.7657710e-01] [9.9991858e-01] [4.5830309e-03] [6.5091753e-01] [3.8710403e-01] [2.4756461e-02] [1.1719930e-01] [6.4381957e-03] [7.1598434e-01] [1.5749395e-02] [6.8473631e-01] [9.5499575e-01] [2.2420317e-02] [9.9999177e-01] [6.9633877e-01] [9.2811453e-01] [1.8373668e-01] [2.9298562e-07] [1.1250973e-03] [4.3785056e-01] [9.6832716e-01] [8.6754566e-01]] It's not 1s and 0s. I believe there's a problem there as well, but I'm not sure. | The model outputs the predicted probabilities, you need to transform them back to class labels before calculating the classification metrics, see below. import numpy as np import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, classification_report tf.random.set_seed(0) # generate the data X, y = make_classification(n_classes=2, n_features=4, n_informative=4, n_redundant=0, random_state=42) # split the data X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) # build the model model = Sequential() model.add(Dense(64, activation='relu', input_shape=(4,))) model.add(Dropout(0.1)) model.add(Dense(32, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(16, input_dim=1, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(12, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(1, activation='sigmoid')) # fit the model model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy', 'AUC']) model.fit(X_train, y_train, epochs=100, batch_size=64, validation_data=(X_test, y_test), validation_batch_size=64, verbose=0) # extract the predicted probabilities p_pred = model.predict(X_test) p_pred = p_pred.flatten() print(p_pred.round(2)) # [1. 0.01 0.91 0.87 0.06 0.95 0.24 0.58 0.78 ... # extract the predicted class labels y_pred = np.where(p_pred > 0.5, 1, 0) print(y_pred) # [1 0 1 1 0 1 0 1 1 0 0 0 0 1 1 0 1 0 0 0 0 ... 
print(confusion_matrix(y_test, y_pred)) # [[13 1] # [ 2 9]] print(classification_report(y_test, y_pred)) # precision recall f1-score support # 0 0.87 0.93 0.90 14 # 1 0.90 0.82 0.86 11 # accuracy 0.88 25 # macro avg 0.88 0.87 0.88 25 # weighted avg 0.88 0.88 0.88 25 | 4 | 8 |
69,871,651 | 2021-11-7 | https://stackoverflow.com/questions/69871651/yt-dlp-rate-limit-not-throttiling-speed-in-python-script | I have implemented yt-dlp as part of my Python script; it works well, but I am unable to get the rate-limit feature to work. If I run the same command from the CLI the rate is limited correctly. Is anyone able to tell me the correct syntax? I have tried several combinations such as rate-limit, limit-rate 0.5m, 500k, 500KiB, 500, and none seem to work: ydl_opts = { 'limit-rate': '500k', } with yt_dlp.YoutubeDL(ydl_opts) as ydl: ydl.download([link]) I am using the docs here: https://github.com/yt-dlp/yt-dlp But I am confused because the CLI command works and the embedded script version does not. I also tried replacing - with _, still to no effect. Do you have any ideas? Other options in ydl_opts work without issue. Hopefully we can find the correct syntax rather than having to use Trickle or throttle the socket. | Looking at the source code you'll find that the option you're looking for is called ratelimit. Its value should be a float: ydl_opts = { 'ratelimit': 500000 } with yt_dlp.YoutubeDL(params=ydl_opts) as ydl: ydl.download([link]) | 5 | 2 |
69,873,513 | 2021-11-7 | https://stackoverflow.com/questions/69873513/using-set-params-function-for-linearregression | I recently started working on Machine Learning with Linear Regression. I have used a LinearRegression (lr) to predict some values. Indeed, my predictions were bad, and I was asked to change the hyperparameters to obtain better results. I used the following command to obtain the hyperparameters: lr.get_params().keys() lr.get_params() and obtained the following: 'copy_X': True, 'fit_intercept': True, 'n_jobs': None, 'normalize': False, 'positive': False} and dict_keys(['copy_X', 'fit_intercept', 'n_jobs', 'normalize', 'positive']) Now, this is where issues started to raise. I have tried to find the correct syntax to use the .set_params() function, but every answer seemed outside my comprehension. I have tried to assign a positional arguments since commands such as lr.set_params('normalize'==True) returned TypeError: set_params() takes 1 positional argument but 2 were given and lr.set_params(some_params = {'normalize'}) returned ValueError (`ValueError: Invalid parameter some_params for estimator LinearRegression(). Check the list of available parameters with estimator.get_params().keys(). Can someone provide a simple explanation of how this function works? | The correct syntax is set_params(**params) where params is a dictionary containing the estimator's parameters, see the scikit-learn documentation. from sklearn.linear_model import LinearRegression reg = LinearRegression() reg.get_params() # {'copy_X': True, # 'fit_intercept': True, # 'n_jobs': None, # 'normalize': False, # 'positive': False} reg.set_params(**{ 'copy_X': False, 'fit_intercept': False, 'n_jobs': -1, 'normalize': True, 'positive': True }) reg.get_params() # {'copy_X': False, # 'fit_intercept': False, # 'n_jobs': -1, # 'normalize': True, # 'positive': True} | 5 | 3 |
69,866,838 | 2021-11-6 | https://stackoverflow.com/questions/69866838/modulenotfounderror-no-module-named-mxnet | I have been looking for the solution for this error for a whole morning. I created an separate environment for python 3.6 and I still got this error. I am using anacondas. So i am so frustrated. ModuleNotFoundError: No module named 'mxnet' from gluonts.model.deepar import DeepAREstimator from gluonts.trainer import Trainer import mxnet as mx import numpy as np np.random.seed(7) mx.random.seed(7) estimator = DeepAREstimator( prediction_length=28, context_length=100, freq='H', trainer=Trainer(ctx="gpu", # remove if running on windows epochs=5, learning_rate=1e-3, num_batches_per_epoch=100 ) ) predictor = estimator.train(train_ds) --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-14-d803033f31d5> in <module> ----> 1 from gluonts.model.deepar import DeepAREstimator 2 from gluonts.trainer import Trainer 3 import mxnet as mx 4 import numpy as np 5 ~\miniconda3\envs\deepar\lib\site-packages\gluonts\model\deepar\__init__.py in <module> 12 # permissions and limitations under the License. 13 ---> 14 from ._estimator import DeepAREstimator 15 16 __all__ = ["DeepAREstimator"] ~\miniconda3\envs\deepar\lib\site-packages\gluonts\model\deepar\_estimator.py in <module> 16 17 import numpy as np ---> 18 from mxnet.gluon import HybridBlock 19 20 from gluonts.core.component import DType, validated ModuleNotFoundError: No module named 'mxnet' I installed both gluonts successfully. then I tried to install mxnet by conda install mxnet, there are so many conflicts, not sure why, here are just part of conflicts. Thanks for your help Package notebook conflicts for: widgetsnbextension -> notebook[version='>=4.4.1'] jupyter -> notebook ipywidgets -> widgetsnbextension[version='>=3.5.0,<3.6.0'] -> notebook[version='>=4.4.1'] Package matplotlib-base conflicts for: matplotlib -> matplotlib-base[version='3.1.2|3.1.2|3.1.2|3.1.3|3.1.3|3.1.3|>=3.2.1,<3.2.2.0a0|>=3.2.2,<3.2.3.0a0|>=3.3.1,<3.3.2.0a0|>=3.4.3,<3.4.4.0a0|>=3.4.2,<3.4.3.0a0|>=3.3.4,<3.3.5.0a0|>=3.3.2,<3.3.3.0a0',build='py36h64f37c6_1|py36h64f37c6_0|py37h64f37c6_0|py38h64f37c6_0|py37h64f37c6_1|py38h64f37c6_1'] gluonts -> matplotlib~=3.0 -> matplotlib-base[version='3.1.2|3.1.2|3.1.2|3.1.3|3.1.3|3.1.3|>=3.2.1,<3.2.2.0a0|>=3.2.2,<3.2.3.0a0|>=3.3.1,<3.3.2.0a0|>=3.4.3,<3.4.4.0a0|>=3.4.2,<3.4.3.0a0|>=3.3.4,<3.3.5.0a0|>=3.3.2,<3.3.3.0a0',build='py36h64f37c6_1|py36h64f37c6_0|py37h64f37c6_0|py38h64f37c6_0|py37h64f37c6_1|py38h64f37c6_1'] Package pandocfilters conflicts for: notebook -> nbconvert -> pandocfilters[version='>=1.4.1'] nbconvert -> pandocfilters[version='>=1.4.1'] jupyter -> nbconvert -> pandocfilters[version='>=1.4.1'] Package fonttools conflicts for: matplotlib-base -> fonttools[version='>=4.22.0'] matplotlib -> matplotlib-base[version='>=3.4.3,<3.4.4.0a0'] -> fonttools[version='>=4.22.0'] Package pywinpty conflicts for: notebook -> terminado[version='>=0.8.1'] -> pywinpty terminado -> pywinpty Package sip conflicts for: matplotlib -> pyqt -> sip[version='4.18.*|>=4.19.4|>=4.19.4,<=4.19.8|4.19.13.*|>=4.19.13,<=4.19.14'] pyqt -> sip[version='4.18.*|>=4.19.4|>=4.19.4,<=4.19.8|4.19.13.*|>=4.19.13,<=4.19.14'] qtconsole -> pyqt -> sip[version='4.18.*|>=4.19.4|>=4.19.4,<=4.19.8|4.19.13.*|>=4.19.13,<=4.19.14'] Package scandir conflicts for: importlib_metadata -> pathlib2 -> scandir testpath -> pathlib2 -> scandir ipython -> pathlib2 -> scandir pickleshare -> 
pathlib2 -> scandir Package olefile conflicts for: pillow -> olefile matplotlib-base -> pillow[version='>=6.2.0'] -> olefile Package pandoc conflicts for: nbconvert -> pandoc[version='>=1.12.1|>=1.12.1,<2.0.0'] jupyter -> nbconvert -> pandoc[version='>=1.12.1|>=1.12.1,<2.0.0'] notebook -> nbconvert -> pandoc[version='>=1.12.1|>=1.12.1,<2.0.0'] Package async_generator conflicts for: nbclient -> async_generator nbconvert -> nbclient[version='>=0.5.0,<0.6.0'] -> async_generator | Conda is more useful when we want to install something that is not written in Python. That is not the case with MXNet. I would suggest using pip install for libraries written in Python. You may take a look at this link to better understand how to use conda environments: What is the difference between pip and conda? Also, here's the official documentation from Anaconda: https://www.anaconda.com/blog/understanding-conda-and-pip You may go through these. One common trick to decide between pip install and conda install is to check whether the source code of the library in question is written in Python or in another language. If it is in Python, then pip install is highly recommended; otherwise, conda install. Here, https://pypi.org/project/mxnet/ mentions that the latest version of MXNet works with all Python versions from 3.5 onwards, so pip install mxnet should work. | 4 | 2 |
69,866,469 | 2021-11-6 | https://stackoverflow.com/questions/69866469/subtract-two-xarrays-while-keeping-all-dimensions | this might be the most basic question out there but I just could not find a solution. I have two different xarrays containing wind data. Both xarrays have the dimensions (time: 60, plev: 19, lat: 90). I now need to take the difference between the two xarrays over all dimensions to find the anomaly between the two scenarios. I think that the xarray.DataArray.diff function is only for calculating the difference along an axis of one xarray (and not to calculate the difference between two xarrays). So, I tried using simply diff = wind1_xarray - wind2_xarray as well as diff = (wind1_xarray - wind2_xarray).compute() However, both methods give me an xarray with dimensions (time: 60, plev: 0, lat: 90). Why do I loose the pressure levels when calculating the difference? And how can I calculate the difference between two xarrays over all dimensions without loosing one dimension? Thanks everyone | The quick answer is that you're doing it right, but your dimensions are not aligned. xarray IS designed to subtract entire arrays, but the coordinate labels must be aligned exactly. You likely have a disagreement between elements of your plev coordinate, which you can check with xr.align: xr.align(wind1_array, wind2_array, join='exact') See the xarray docs on computation: automatic alignment for more information. Detailed example The biggest difference between xarray and numpy (assuming you're familiar with math using numpy) is that xarray relies on the coordinate labels along each dimension to align arrays before any broadcasting operations, not just the shape. As an example, let's consider two very simple arrays - one counting up from 0 to 19 and the other a block of ones, both reshaped to (4, 5). Subtracting these from each other in numpy is straightforward because they're the same shape: In [15]: arr1 = np.arange(20).reshape((4, 5)) In [16]: arr2 = np.ones(shape=(4, 5)) In [17]: arr1 - arr2 Out[17]: array([[-1., 0., 1., 2., 3.], [ 4., 5., 6., 7., 8.], [ 9., 10., 11., 12., 13.], [14., 15., 16., 17., 18.]]) The xarray equivalent is also straightforward, but we must introduce dimension names and coordinates. 
Let's assume your pressure levels are in increments of 10 hPa decreasing toward STP, and latitudes are also in increments of 10 from 20 to 60: In [18]: pressures = np.array([71.325, 81.325, 91.325, 101.325]) In [19]: lats = np.array([20, 30, 40, 50, 60]) In [20]: da1 = xr.DataArray(arr1, dims=['plev', 'lat'], coords=[pressures, lats]) In [21]: da2 = xr.DataArray(arr2, dims=['plev', 'lat'], coords=[pressures, lats]) In [22]: da2 Out[22]: <xarray.DataArray (plev: 4, lat: 5)> array([[1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.]]) Coordinates: * plev (plev) float64 71.33 81.33 91.33 101.3 * lat (lat) int64 20 30 40 50 60 In [23]: da1 Out[23]: <xarray.DataArray (plev: 4, lat: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) Coordinates: * plev (plev) float64 71.33 81.33 91.33 101.3 * lat (lat) int64 20 30 40 50 60 These arrays are aligned, so subtracting them is straightforward: In [24]: da1 - da2 Out[24]: <xarray.DataArray (plev: 4, lat: 5)> array([[-1., 0., 1., 2., 3.], [ 4., 5., 6., 7., 8.], [ 9., 10., 11., 12., 13.], [14., 15., 16., 17., 18.]]) Coordinates: * plev (plev) float64 71.33 81.33 91.33 101.3 * lat (lat) int64 20 30 40 50 60 But because xarray relies on these coordinates being aligned exactly, relying on float coordinates can be tricky. If we introduce even a small error to the pressure level dimension, the arrays are not aligned and we see a result similar to yours: In [25]: da2 = xr.DataArray(arr2, dims=['plev', 'lat'], coords=[pressures + 1e-8, lats]) In [26]: da1 - da2 Out[26]: <xarray.DataArray (plev: 0, lat: 5)> array([], shape=(0, 5), dtype=float64) Coordinates: * plev (plev) float64 * lat (lat) int64 20 30 40 50 60 This type of mis-alignment can happen for all sorts of reasons, including round-tripping data through storage, where changes in encoding can cause tiny numerical errors which show up as mis-aligned data. 
You can check whether this is the problem with xr.align using the join='exact' argument: In [27]: xr.align(da1, da2, join='exact') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-29-612460e52308> in <module> ----> 1 xr.align(da1, da2, join='exact') ~/miniconda3/envs/myenv/lib/python3.9/site-packages/xarray/core/alignment.py in align(join, copy, indexes, exclude, fill_value, *objects) 320 ): 321 if join == "exact": --> 322 raise ValueError(f"indexes along dimension {dim!r} are not equal") 323 joiner = _get_joiner(join, type(matching_indexes[0])) 324 index = joiner(matching_indexes) ValueError: indexes along dimension 'plev' are not equal To get around this problem, you might try rounding your coordinates to a known tolerance of the coordinate: In [32]: da2['plev'] = np.round(da2['plev'], 3) In [33]: da1 - da2 Out[33]: <xarray.DataArray (plev: 4, lat: 5)> array([[-1., 0., 1., 2., 3.], [ 4., 5., 6., 7., 8.], [ 9., 10., 11., 12., 13.], [14., 15., 16., 17., 18.]]) Coordinates: * plev (plev) float64 71.33 81.33 91.33 101.3 * lat (lat) int64 20 30 40 50 60 Alternatively, you could set a positional/integer coordinate, with the actual pressure level as a non-indexing coordinate: In [42]: da1 Out[42]: <xarray.DataArray (plev_ind: 4, lat: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) Coordinates: plev (plev_ind) float64 71.33 81.33 91.33 101.3 * lat (lat) int64 20 30 40 50 60 * plev_ind (plev_ind) int64 71325 81325 91325 101325 In [43]: da2 Out[43]: <xarray.DataArray (plev_ind: 4, lat: 5)> array([[1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.]]) Coordinates: plev (plev_ind) float64 71.33 81.33 91.33 101.3 * lat (lat) int64 20 30 40 50 60 * plev_ind (plev_ind) int64 71325 81325 91325 101325 In [44]: da1 - da2 Out[44]: <xarray.DataArray (plev_ind: 4, lat: 5)> array([[-1., 0., 1., 2., 3.], [ 4., 5., 6., 7., 8.], [ 9., 10., 11., 12., 13.], [14., 15., 16., 17., 18.]]) Coordinates: * lat (lat) int64 20 30 40 50 60 * plev_ind (plev_ind) int64 71325 81325 91325 101325 | 5 | 10 |
69,858,906 | 2021-11-5 | https://stackoverflow.com/questions/69858906/select-priority-item-from-list | Say I have a descending ordered list in terms of preference: person_choice = ['amy', 'bob', 'chad', 'dan', 'emily'] meaning, amy is preferred to bob, who is preferred to chad and so on. And I have a couple more lists like these (with names appearing in no particular order and list length): group1 = ['joyce', 'amy', 'emily', 'karen', 'rebecca'] group2 = ['chad', 'kyle', 'michael', 'neo', 'bob'] ... In each of the group1 and group2, I would like to select the person within this list that appears in person_choice with the highest preference. So, an expected output is like this: group1_choice = 'amy' group2_choice = 'bob' Assuming that for each groupX, there is at least one person appearing in person_choice, is there a one-liner to do this in Python without complicated for loop? | Using max with a custom key: person_choice = ['amy', 'bob', 'chad', 'dan', 'emily'] group1 = ['joyce', 'amy', 'emily', 'karen', 'rebecca'] group2 = ['chad', 'kyle', 'michael', 'neo', 'bob'] def choice(group): return max(group, key=lambda name: -person_choice.index(name) if name in person_choice else float('-inf')) print(choice(group1)) print(choice(group2)) Result: amy bob | 4 | 5 |
69,856,889 | 2021-11-5 | https://stackoverflow.com/questions/69856889/why-is-the-class-variable-not-updating-for-all-its-instances | I'm learning about classes and don't understand this: class MyClass: var = 1 one = MyClass() two = MyClass() print(one.var, two.var) # out: 1 1 one.var = 2 print(one.var, two.var) # out: 2 1 I thought that class variables are accessible by all instances, why is the Class variable not updating for all its instances? | It doesn't change for all of them because doing this: one.var = 2, creates a new instance variable with the same name as the class variable, but only for the instance one. After that, one will first find its instance variable and return that, while two will only find the class variable and return that. To change the class variable I suggest two options: create a class method to change the class variable (my preference) change it by using the class directly class MyClass: var = 1 @classmethod def change_var(cls, var): cls.var = var one = MyClass() two = MyClass() print(one.var, two.var) # out: 1 1 one.change_var(2) # option 1 print(one.var, two.var) # out: 2 2 MyClass.var = 3 # option 2 print(one.var, two.var) # out: 3 3 | 5 | 4 |
69,843,204 | 2021-11-4 | https://stackoverflow.com/questions/69843204/how-to-type-a-variable-in-fastapi-swaggerui-with-hyphen-in-its-name | If I send a request to this API: from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Response(BaseModel): var_name: str @app.put("/", response_model=Response) def simple_server(a: str): response = Response(var_name=a) return response I get a response with this JSON body: {"var_name1": "a"}. In addition, I get a very nice Swagger UI that illustrates the fields of the response. My question is: how can I get this JSON body {"var-name1": "a"} (with a hyphen instead of an underscore) with the same nice typing in the Swagger docs? Obviously, I cannot name the var_name attribute var-name in the Response class. | Modify your pydantic object slightly: from pydantic import BaseModel, Field class Response(BaseModel): var_name: str = Field(alias="var-name") class Config: allow_population_by_field_name = True The allow_population_by_field_name option is needed to allow creating the object with the field name; without it you could instantiate it only with the alias name. | 5 | 11 |
69,849,956 | 2021-11-5 | https://stackoverflow.com/questions/69849956/python-how-to-process-complex-nested-dictionaries-efficiently | I have a complex nested dictionary structured like this: example = { ('rem', 125): { ('emo', 35): { ('mon', 133): { ('ony', 33): 0 }, ('mor', 62): { ('ore', 23): 0 }, ('mot', 35): { ('ote', 22): 0 }, ('mos', 29): { ('ose', 29): 0 } }, ('emi', 32): { ('min', 109): { ('ine', 69): 0 }, ('mit', 58): { ('ite', 64): 0, ('ity', 45): 0 } }, ('eme', 26): { ('mer', 68): { ('ere', 24): 0, ('ery', 20): 0 } } }, ('iro', 30): { ('ron', 27): { ('oni', 94): { ('nic', 205): 0 }, ('one', 47): { ('ned', 86): 0, ('ner', 85): 0, ('nes', 26): 0, ('net', 20): 0 }, ('ona', 44): { ('nal', 246): 0 } } }, ('det', 122): { ('ete', 53): { ('ter', 212): { ('ery', 72): 0, ('era', 35): 0 }, ('ten', 65): { ('ene', 21): 0 } } }, ('uni', 217): { ('nin', 62): { ('ine', 32): { ('ned', 88): 0, ('ner', 74): 0, ('net', 25): 0 } }, ('nim', 31): { ('ima', 50): { ('mal', 23): 0 } }, ('niv', 25): { ('ive', 28): { ('ver', 48): 0 } } }, ('nat', 66): { ('ati', 21): { ('tiv', 517): { ('ive', 821): 0 }, ('tin', 230): { ('ine', 134): 0, ('ina', 22): 0 }, ('tic', 187): { ('ice', 23): 0 }, ('tis', 129): { ('ise', 25): 0 }, ('tiz', 59): { ('ize', 100): 0 }, ('tit', 52): { ('ite', 60): 0 }, ('tif', 43): { ('ify', 30): 0 }, ('til', 25): { ('ile', 79): 0, ('ily', 37): 0 } } }, ('mar', 286): { ('ari', 30): { ('rin', 156): { ('ine', 168): 0, ('ina', 24): 0 }, ('ris', 146): { ('ise', 31): 0 }, ('rit', 119): { ('ite', 174): 0, ('ity', 118): 0 }, ('ril', 63): { ('ily', 88): 0 }, ('riz', 56): { ('ize', 134): 0 }, ('ric', 30): { ('ice', 25): 0 }, ('rid', 25): { ('ide', 49): 0 }, ('rif', 24): { ('ify', 32): 0 } }, ('ara', 21): { ('rac', 84): { ('acy', 60): 0, ('ace', 33): 0 }, ('rat', 76): { ('ate', 451): 0 }, ('rag', 56): { ('age', 90): 0 }, ('rad', 47): { ('ade', 36): 0 } } }, ('ref', 153): { ('efo', 27): dict() }, ('gen', 150): { ('ene', 56): { ('nes', 644): { ('ese', 33): 0 }, ('ner', 118): { ('ery', 37): 0 } }, ('eni', 25): { ('nit', 112): { ('ite', 200): 0, ('ity', 93): 0 }, ('nin', 59): { ('ine', 54): 0 }, ('niz', 33): { ('ize', 178): 0 } } }, ('pol', 384): { ('oly', 255): dict(), ('oli', 35): { ('lit', 234): { ('ity', 698): 0, ('ite', 209): 0 }, ('lis', 79): { ('ise', 28): 0 }, ('lin', 72): { ('ine', 206): 0 }, ('lid', 36): { ('ide', 21): 0 }, ('lif', 24): { ('ify', 21): 0 }, ('liz', 22): { ('ize', 209): 0 } }, ('ola', 22): { ('lat', 165): { ('ate', 422): 0 }, ('lar', 48): { ('ary', 106): 0 }, ('lan', 27): { ('ane', 24): 0 } }, ('ole', 22): { ('len', 60): { ('ene', 58): 0 }, ('ler', 39): { ('ery', 36): 0 } } }, ('lam', 117): { ('ame', 33): { ('mer', 46): { ('ere', 24): 0, ('ery', 20): 0 } } }, ('mal', 213): { ('ala', 64): { ('lac', 67): { ('ace', 32): 0 }, ('lan', 65): { ('ane', 24): 0 }, ('lat', 58): { ('ate', 422): 0 }, ('lar', 31): { ('ary', 106): 0 } }, ('ale', 37): { ('len', 90): { ('ene', 58): 0 }, ('ler', 26): { ('ery', 36): 0 } } }, ('rev', 156): { ('eve', 76): { ('ver', 139): { ('ery', 27): 0 }, ('vel', 36): { ('ely', 168): 0 } }, ('evi', 44): { ('vit', 27): { ('ity', 64): 0 } }, ('evo', 28): dict() }, ('ura', 42): { ('ran', 25): { ('ani', 91): { ('nic', 76): 0 } } }, ('val', 78): { ('ale', 28): { ('len', 90): { ('ene', 58): 0 }, ('ler', 26): { ('ery', 36): 0 } } }, ('ven', 111): { ('ene', 26): { ('nes', 644): { ('ese', 33): 0 }, ('ner', 118): { ('ery', 37): 0 } } }, ('lat', 106): { ('ati', 39): { ('tiv', 517): { ('ive', 821): 0 }, ('tin', 230): { ('ine', 134): 0, ('ina', 22): 0 }, ('tic', 187): { ('ice', 
23): 0 }, ('tis', 129): { ('ise', 25): 0 }, ('tiz', 59): { ('ize', 100): 0 }, ('tit', 52): { ('ite', 60): 0 }, ('tif', 43): { ('ify', 30): 0 }, ('til', 25): { ('ile', 79): 0, ('ily', 37): 0 } }, ('ate', 27): { ('ter', 153): { ('ery', 72): 0, ('era', 35): 0 }, ('tel', 117): { ('ely', 112): 0 }, ('ten', 74): { ('ene', 21): 0 } } }, ('aci', 42): { ('cid', 21): { ('idi', 49): { ('dic', 20): 0 }, ('ida', 43): { ('dal', 118): 0 }, ('ide', 43): { ('der', 40): 0, ('des', 38): 0, ('ded', 20): 0 } } } } It is just a small fraction of the complex object, I posted such a complex example to signify how complex the object really is, the complete object can have as many as 15 nested levels from the first level of the example, and has many more first level keys. Let me explain what this is, this is to be used in Markov chain based pseudo-word generation, each key is a tuple composed of a three letter string and a positive integer, the string is either composed of a vowel letter, a consonant letter and a vowel letter or CVC. The string is a three-letter state, the integer represents the weight/likelihood/frequency of the state, each key of the successive level is a state that can follow the state of the previous level, a state can only follow another state iff the last two letters of the other state is the start of the first state, the number is the likelihood of transformation from the previous state to the next state. The first level states are the states that can be found at start of words, the second level states are states that follow the previous first state as second state in words. Each last level value should be 0, it means end of nesting, in the example, if the first nesting level has a nesting value of 0 and each successive nesting level adds nesting value by 1, the last nesting level's nesting value plus three (the length of the first states) should be six (the length of the word generated by walking through the tree should be). A word is generated by weighted random choosing a state, and weighted random choosing a state from its value, recursively, until the value is zero, then, take all letters of the first state, and the letters from the third letter on from the second state on, and join the letters. For example, if the choices are ('uni', 'nin', 'ine', 'ned'), the chunks to be taken are 'uni', 'n', 'e', 'd' and the generated word is 'unined'. Now I have three problems, the first problem is that the dictionary isn't sorted, this is the easiest to solve. The second is that the dictionary has empty nested dictionaries, these dictionaries mean the branch terminated prematurely, because the branches can't grow "legally", these branches are invalid and I would like to cut off the invalid branches recursively from last level to first level (from last level to first level, recursively pop keys backwards if the value dictionary is empty, until the values are no longer empty), this is a little more difficult. And the most difficult part, there are many branches with only one subbranch, and I need to merge such subbranches with their parent branches recursively, from last level to first level backwards. 
In short I need it to become this: { ('acid', 42): { ('idal', 43): 0, ('ide', 43): { ('ded', 20): 0, ('der', 40): 0, ('des', 38): 0 }, ('idic', 49): 0 }, ('dete', 122): { ('tene', 65): 0, ('ter', 212): { ('era', 35): 0, ('ery', 72): 0 } }, ('gen', 150): { ('ene', 56): { ('nery', 118): 0, ('nese', 644): 0 }, ('eni', 25): { ('nine', 59): 0, ('nit', 112): { ('ite', 200): 0, ('ity', 93): 0 }, ('nize', 33): 0 } }, ('iron', 30): { ('onal', 44): 0, ('one', 47): { ('ned', 86): 0, ('ner', 85): 0, ('nes', 26): 0, ('net', 20): 0 }, ('onic', 94): 0 }, ('lamer', 117): { ('ere', 24): 0, ('ery', 20): 0 }, ('lat', 106): { ('ate', 27): { ('tely', 117): 0, ('tene', 74): 0, ('ter', 153): { ('era', 35): 0, ('ery', 72): 0 } }, ('ati', 39): { ('tice', 187): 0, ('tify', 43): 0, ('til', 25): { ('ile', 79): 0, ('ily', 37): 0 }, ('tin', 230): { ('ina', 22): 0, ('ine', 134): 0 }, ('tise', 129): 0, ('tite', 52): 0, ('tive', 517): 0, ('tize', 59): 0 } }, ('mal', 213): { ('ala', 64): { ('lace', 67): 0, ('lane', 65): 0, ('lary', 31): 0, ('late', 58): 0 }, ('ale', 37): { ('lene', 90): 0, ('lery', 26): 0 } }, ('mar', 286): { ('ara', 21): { ('rac', 84): { ('ace', 33): 0, ('acy', 60): 0 }, ('rade', 47): 0, ('rage', 56): 0, ('rate', 76): 0 }, ('ari', 30): { ('rice', 30): 0, ('ride', 25): 0, ('rify', 24): 0, ('rily', 63): 0, ('rin', 156): { ('ina', 24): 0, ('ine', 168): 0 }, ('rise', 146): 0, ('rit', 119): { ('ite', 174): 0, ('ity', 118): 0 }, ('rize', 56): 0 } }, ('nati', 66): { ('tice', 187): 0, ('tify', 43): 0, ('til', 25): { ('ile', 79): 0, ('ily', 37): 0 }, ('tin', 230): { ('ina', 22): 0, ('ine', 134): 0 }, ('tise', 129): 0, ('tite', 52): 0, ('tive', 517): 0, ('tize', 59): 0 }, ('pol', 384): { ('ola', 22): { ('lane', 27): 0, ('lary', 48): 0, ('late', 165): 0 }, ('ole', 22): { ('lene', 60): 0, ('lery', 39): 0 }, ('oli', 35): { ('lide', 36): 0, ('lify', 24): 0, ('line', 72): 0, ('lise', 79): 0, ('lit', 234): { ('ite', 209): 0, ('ity', 698): 0 }, ('lize', 22): 0 } }, ('rem', 125): { ('emer', 26): { ('ere', 24): 0, ('ery', 20): 0 }, ('emi', 32): { ('mine', 109): 0, ('mit', 58): { ('ite', 64): 0, ('ity', 45): 0 } }, ('emo', 35): { ('mony', 133): 0, ('more', 62): 0, ('mose', 29): 0, ('mote', 35): 0 } }, ('rev', 156): { ('eve', 76): { ('vely', 36): 0, ('very', 139): 0 }, ('evity', 44): 0 }, ('uni', 217): { ('nimal', 31): 0, ('nine', 62): { ('ned', 88): 0, ('ner', 74): 0, ('net', 25): 0 }, ('niver', 25): 0 }, ('uranic', 42): 0, ('vale', 78): { ('lene', 90): 0, ('lery', 26): 0 }, ('vene', 111): { ('nery', 118): 0, ('nese', 644): 0 } } And here is how I do it: def merge(obj): for k, v in list(obj.items()): if isinstance(v, dict): merge(v) if len(v) == 1: a, b = k k1, v1 = list(obj.pop(k).items())[0] key = (a + k1[0][2:], b) obj[key] = v1 def pop_empty(obj): for k, v in list(obj.items()): if isinstance(v, dict): pop_empty(v) if not v: obj.pop(k) def nested_sort(dic): d = dict() for k, v in sorted(dic.items()): if isinstance(v, dict): d[k] = nested_sort(v) else: d[k] = v return d pop_empty(example) merge(example) nested_sort(example) I need to process an extremely complex object much more complex than this, what are more efficient ways to do these? The object has many repeated elements and repeated nested dictionaries and in general repetition is very heavy and memoization is a huge help, but two of the functions don't return any value and the one returns a value, trying to wrap it using lru_cache will raise unhashable type dict, can any one re-implement the same ideas using memoization? 
| I was able to get about 25 % faster by combining the three processes. def merge(obj, /): result = {} for key, val in sorted(obj.items()): if isinstance(val, dict): val = merge(val) if not val: continue if len(val) == 1: k1, val = next(iter(val.items())) key = (key[0] + k1[0][2:], key[1]) result[key] = val return result I don't know how much more is possible. | 5 | 4 |
69,848,807 | 2021-11-5 | https://stackoverflow.com/questions/69848807/how-do-i-find-the-row-of-a-string-index | I have a dataframe where the indexes are not numbers but strings (specifically, name of countries) and they are all unique. Given the name of a country, how do I find its row number (the 'number' value of the index)? I tried df[df.index == 'country_name'].index but this doesn't work. | We can use Index.get_indexer: df.index.get_indexer(['Peru']) [3] Or we can build a RangeIndex based on the size of the DataFrame then subset that instead: pd.RangeIndex(len(df))[df.index == 'Peru'] Int64Index([3], dtype='int64') Since we're only looking for a single label and the indexes are "all unique" we can also use Index.get_loc: df.index.get_loc('Peru') 3 Sample DataFrame: import pandas as pd df = pd.DataFrame({ 'A': [1, 2, 3, 4, 5] }, index=['Bahamas', 'Cameroon', 'Ecuador', 'Peru', 'Japan']) df: A Bahamas 1 Cameroon 2 Ecuador 3 Peru 4 Japan 5 | 8 | 9 |
69,846,902 | 2021-11-4 | https://stackoverflow.com/questions/69846902/how-to-plot-stacked-100-bar-plot-with-seaborn-for-categorical-data | I have a dataset that looks like this (assume this has 4 categories in Clicked, the head(10) only showed 2 categories): Rank Clicked 0 2.0 Cat4 1 2.0 Cat4 2 2.0 Cat4 3 1.0 Cat1 4 1.0 Cat4 5 2.0 Cat4 6 2.0 Cat4 7 3.0 Cat4 8 5.0 Cat4 9 5.0 Cat4 This is a code that returns this plot: eee = (df.groupby(['Rank','Clicked'])['Clicked'].count()/df.groupby(['Rank'])['Clicked'].count()) eee.unstack().plot.bar(stacked=True) plt.legend(['Cat1','Cat2','Cat3','Cat4']) plt.xlabel('Rank') Is there a way to achieve this with seaborn (or matplotlib) instead of the pandas plotting capabilities? I tried a few ways, both of running the seaborn code and of preprocessing the dataset so it's on the correct format, with no luck. | Seaborn doesn't support stacked barplot, so you need to plot the cumsum: # calculate the distribution of `Clicked` per `Rank` distribution = pd.crosstab(df.Rank, df.Clicked, normalize='index') # plot the cumsum, with reverse hue order sns.barplot(data=distribution.cumsum(axis=1).stack().reset_index(name='Dist'), x='Rank', y='Dist', hue='Clicked', hue_order = distribution.columns[::-1], # reverse hue order so that the taller bars got plotted first dodge=False) Output: Preferably, you can also reverse the cumsum direction, then you don't need to reverse hue order: sns.barplot(data=distribution.iloc[:,::-1].cumsum(axis=1) # we reverse cumsum direction here .stack().reset_index(name='Dist'), x='Rank', y='Dist', hue='Clicked', hue_order=distribution.columns, # forward order dodge=False) Output: | 10 | 7 |
69,840,223 | 2021-11-4 | https://stackoverflow.com/questions/69840223/way-to-pass-arguments-to-fastapi-app-via-command-line | I'm using Python 3.8.0 for my FastAPI app. It uses the .env file located in the root of the project directory. I am using the dotenv package, and the location of the .env file is hardcoded within the app. Here is my unit file: [Unit] Description=Gunicorn instance for my_app After=network.target [Service] User=nginx Group=nginx WorkingDirectory=/usr/share/nginx/html/my_app/ Environment="PATH=/usr/share/nginx/html/my_app/venv/bin" ExecStart=/usr/share/nginx/html/my_app/venv/bin/gunicorn --bind unix:/usr/share/nginx/html/my_app/my_app.sock -w 4 -k uvicorn.workers.UvicornWorker app.main:app [Install] WantedBy=multi-user.target The challenge is to run two versions (production and test) of the same app using two different .env files on two different ports. I'll have to create a second unit file. But how can I pass the two different env file names to the app for further usage? These files contain database connections, etc. I imagine it roughly like this: 1st unit file ExecStart=/usr/share/nginx/html/my_app/venv/bin/gunicorn --bind unix:/usr/share/nginx/html/my_app/my_app.sock -w 4 -k uvicorn.workers.UvicornWorker app.main:app --env_file_name=".env.prod" 2nd unit file ExecStart=/usr/share/nginx/html/my_app/venv/bin/gunicorn --bind unix:/usr/share/nginx/html/my_app/my_app.sock -w 4 -k uvicorn.workers.UvicornWorker app.main:app --env_file_name=".env.dev" | You can set the path from which systemd will read the environment for your process directly in the unit file configuration. The setting is called EnvironmentFile=. Just set the option to the path for .env.prod in one unit file and to the path for .env.test in the other. | 5 | 3 |
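A sketch of how the two unit files could then differ, assuming the env files sit next to the app; only the lines that change are shown and the exact paths and socket names are illustrative:

```ini
# my_app_prod.service
[Service]
EnvironmentFile=/usr/share/nginx/html/my_app/.env.prod
ExecStart=/usr/share/nginx/html/my_app/venv/bin/gunicorn --bind unix:/usr/share/nginx/html/my_app/my_app_prod.sock -w 4 -k uvicorn.workers.UvicornWorker app.main:app

# my_app_test.service
[Service]
EnvironmentFile=/usr/share/nginx/html/my_app/.env.dev
ExecStart=/usr/share/nginx/html/my_app/venv/bin/gunicorn --bind unix:/usr/share/nginx/html/my_app/my_app_test.sock -w 4 -k uvicorn.workers.UvicornWorker app.main:app
```

With this approach the variables land directly in the process environment, so the app can read them from os.environ rather than hard-coding a dotenv path.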
69,833,454 | 2021-11-4 | https://stackoverflow.com/questions/69833454/using-lambda-to-get-image-from-s3-returns-a-white-box-in-python | I'm trying to get my image from S3 bucket and return it. Here's the code: import base64 import boto3 import json import random s3 = boto3.client('s3') def lambda_handler(event, context): number = random.randint(0,1) if number == 1: response = s3.get_object( Bucket='bucket-name', Key='image.png', ) image = response['Body'].read() return { 'headers': { "Content-Type": "image/png" }, 'statusCode': 200, 'body': base64.b64encode(image).decode('utf-8'), 'isBase64Encoded': True } else: return { 'headers': { "Content-type": "text/html" }, 'statusCode': 200, 'body': "<h1>This is text</h1>", } When I hit my endpoint, an image of a tiny white box is returned. I know image.png exists in my bucket, and when I use the web GUI to open it in my browser, the image is loaded properly. What am I exactly doing wrong here? And in case it matters, here's how I'm uploading the image to S3 (from another Lambda): ... # Prepare image for S3 buffer = io.BytesIO() my_image.save(buffer, 'PNG') buffer.seek(0) # Rewind pointer back to start response = s3.put_object( Bucket=S3_BUCKET_NAME, Key=f'{S3_KEY}{filename}.png', Body=buffer, ContentType='image/png', ) ... In the above code, my_image is just an image I created using the PIL library. Thanks for any help! | Here it is how I do this: Your lambda with corrected body: import base64 import boto3 import json import random s3 = boto3.client('s3') def lambda_handler(event, context): response = s3.get_object( Bucket='bucket-name', Key='image.png', ) image = response['Body'].read() return { 'headers': { "Content-Type": "image/png" }, 'statusCode': 200, 'body': base64.b64encode(image), 'isBase64Encoded': True } API gateway settings Integration Request | 6 | 3 |
69,827,390 | 2021-11-3 | https://stackoverflow.com/questions/69827390/pandas-column-multiindex-into-row-multiindex | I have a pandas dataframe df = pd.DataFrame([[i+10*j for i in range(6)] for j in range(5)], index=[f"item{i}" for i in range(5)]) df.columns = pd.MultiIndex.from_product((["abc", "xyz"], ["one", "two", "three"])) abc xyz one two three one two three item0 0 1 2 3 4 5 item1 10 11 12 13 14 15 item2 20 21 22 23 24 25 item3 30 31 32 33 34 35 item4 40 41 42 43 44 45 I cannot figure out how to turn first level of column multiindex into row multiindex to get something like following one two three item0 abc 0 1 2 xyz 3 4 5 item1 abc 10 11 12 xyz 13 14 15 item2 abc 20 21 22 xyz 23 24 25 item3 abc 30 31 32 xyz 33 34 35 item4 abc 40 41 42 xyz 43 44 45 any hint appreciated. Thank you | Here you go: df.stack(level=0) | 4 | 4 |
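A quick usage check of the accepted answer against the frame built in the question itself:

    import pandas as pd

    df = pd.DataFrame([[i + 10 * j for i in range(6)] for j in range(5)],
                      index=[f"item{i}" for i in range(5)])
    df.columns = pd.MultiIndex.from_product((["abc", "xyz"], ["one", "two", "three"]))

    # Move the outer column level ("abc"/"xyz") into the row index.
    result = df.stack(level=0)
    print(result)  # rows are now a (item, abc/xyz) MultiIndex; the inner labels remain columns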
69,824,126 | 2021-11-3 | https://stackoverflow.com/questions/69824126/mypy-invalid-index-type-str-for-unionstr-dictstr-str-expected-type-u | Why am I getting the error? I have added the type properly, right? Invalid index type "str" for "Union[str, Dict[str, str]]"; expected type "Union[int, slice]" Code from typing import List, Dict, Union d = {"1": 1, "2": 2} listsOfDicts: List[Dict[str, Union[str, Dict[str, str]]]] = [ {"a": "1", "b": {"c": "1"}}, {"a": "2", "b": {"c": "2"}}, ] [d[i["b"]["c"]] for i in listsOfDicts] | Mypy expects dictionaries to have the same type. Using Union models a subtype relation, but since Dict type is invariant, the key-value pair must match exactly as defined in the type annotation—which is the type Union[str, Dict[str, str]], so the subtypes in the Union wouldn't get matched (neither str, Dict[str, str] are valid types). To define multiple types for different keys, you should use TypedDict. Usage as seen here: https://mypy.readthedocs.io/en/latest/more_types.html#typeddict. from typing import List, Dict, Union, TypedDict d = {"1": 1, "2": 2} dictType = TypedDict('dictType', {'a': str, 'b': Dict[str, str]}) listsOfDicts: List[dictType] = [ {"a": "1", "b": {"c": "1"}}, {"a": "2", "b": {"c": "2"}}, ] [d[i["b"]["c"]] for i in listsOfDicts] | 5 | 7 |
69,822,726 | 2021-11-3 | https://stackoverflow.com/questions/69822726/use-of-r-carriage-return-in-python-regex | I'm trying to use regex to match every character between a string and a \r character : text = 'Some text\rText to find !\r other text\r' I want to match 'Text to find !'. I already tried : re.search(r'Some text\r(.*)\r', text).group(1) But it gives me : 'Text to find !\r other text' It's surprising because it works perfectly when replacing \r by \n : re.search(r'Some text\n(.*)\n', 'Some text\nText to find !\n other text\n').group(1) returns Text to find ! Do you know why it behaves differently when we use \r and \n ? | .* is greedy in nature so it is matching longest match available in: r'Some text\r(.*)\r Hence giving you: re.findall(r'Some text\r(.*)\r', 'Some text\rText to find !\r other text\r') ['Text to find !\r other text'] However if you change to non-greedy then it gives expected result as in: re.findall(r'Some text\r(.*?)\r', 'Some text\rText to find !\r other text\r') ['Text to find !'] Reason why re.findall(r'Some text\n(.*)\n', 'Some text\nText to find !\n other text\n') gives just ['Text to find !'] is that DOT matches any character except line break and \n is a line break. If you enable DOTALL then again it will match longest match in: >>> re.findall(r'Some text\n([\s\S]*)\n', 'Some text\nText to find !\n other text\n') ['Text to find !\n other text'] >>> re.findall(r'(?s)Some text\n(.*)\n', 'Some text\nText to find !\n other text\n') ['Text to find !\n other text'] Which again changes behavior when you use non-greedy quantifier: re.findall(r'(?s)Some text\n(.*?)\n', 'Some text\nText to find !\n other text\n') ['Text to find !'] | 4 | 4 |
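Applied back to the question's original re.search call, the non-greedy fix from the answer looks like this:

    import re

    text = 'Some text\rText to find !\r other text\r'

    # The non-greedy (.*?) stops at the first following \r instead of the last one.
    match = re.search(r'Some text\r(.*?)\r', text)
    print(match.group(1))  # -> 'Text to find !'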
69,822,360 | 2021-11-3 | https://stackoverflow.com/questions/69822360/separate-fastapi-documentation-into-sections | Currently the OpenAPI documentation looks like this: Is it possible to separate it into multiple sections? For example, 2 sections, one being the "books" section that contains the methods from "/api/bookcollection/books/" endpoints and the other containing the endpoints with "/api/bookcollection/authors/". I have consulted the FastApi documentation, but I do not find anything close to the operation I want to do. | The OpenAPI allows the use of tags to group endpoints. FastAPI also supports this feature. The documentation section can be found here. Example: from fastapi import FastAPI tags_metadata = [ { "name": "users", "description": "Operations with users. The **login** logic is also here.", }, { "name": "items", "description": "Manage items. So _fancy_ they have their own docs.", "externalDocs": { "description": "Items external docs", "url": "https://fastapi.tiangolo.com/", }, }, ] app = FastAPI(openapi_tags=tags_metadata) @app.get("/users/", tags=["users"]) async def get_users(): return [{"name": "Harry"}, {"name": "Ron"}] @app.get("/items/", tags=["items"]) async def get_items(): return [{"name": "wand"}, {"name": "flying broom"}] | 4 | 9 |
69,819,337 | 2021-11-3 | https://stackoverflow.com/questions/69819337/how-keep-a-value-in-a-dataframe-using-the-values-of-another-dataframe-as-indexes | I have the following two DataFrames: import pandas as pd df = pd.DataFrame([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], index = [0, 0.25, 0.50, 0.75, 1], columns = [0, 0.25, 0.50, 0.75, 1]) df_cross = pd.DataFrame([[0.0, 0.25], [0.0, 0.75], [0.5, 1]], columns = ['indexes_to_keep', 'cols_to_keep']) df: 0.00 0.25 0.50 0.75 1.00 0.00 0 0 0 0 0 0.25 0 0 0 0 0 0.50 0 0 0 0 0 0.75 0 0 0 0 0 1.00 0 0 0 0 0 df_cross: indexes_to_keep cols_to_keep 0 0.0 0.25 1 0.0 0.75 2 0.5 1.00 In the df I have my storaged data, and the df_cross contains the indexes and columns that I want to keep the values. The values in df which the index and columns do not match with any row of df_cross I want to replace by a string (for example "NaN"). The expected output is: 0.00 0.25 0.50 0.75 1.00 0.00 NaN 0 NaN 0 NaN 0.25 NaN NaN NaN NaN NaN 0.50 NaN NaN NaN NaN 0 0.75 NaN NaN NaN NaN NaN 1.00 NaN NaN NaN NaN NaN Thanks in advance. | Let us try crosstab on df_cross, then use where to mask the values s = pd.crosstab(*df_cross.values.T) df.where(s == 1) 0.00 0.25 0.50 0.75 1.00 0.00 NaN 0.0 NaN 0.0 NaN 0.25 NaN NaN NaN NaN NaN 0.50 NaN NaN NaN NaN 0.0 0.75 NaN NaN NaN NaN NaN 1.00 NaN NaN NaN NaN NaN PS: pd.crosstab(*df_cross.values.T) is just a syntactical shortcut and is effectively equivalent to using pd.crosstab(df.indexes_to_keep, df.cols_to_keep) | 5 | 5 |
69,782,818 | 2021-10-30 | https://stackoverflow.com/questions/69782818/turn-a-tf-data-dataset-to-a-jax-numpy-iterator | I am interested about training a neural network using JAX. I had a look on tf.data.Dataset, but it provides exclusively tf tensors. I looked for a way to change the dataset into JAX numpy array and I found a lot of implementations that use Dataset.as_numpy_generator() to turn the tf tensors to numpy arrays. However I wonder if it is a good practice, as numpy arrays are stored in CPU memory and it is not what I want for my training (I use the GPU). So the last idea I found is to manually recast the arrays by calling jnp.array but it is not really elegant (I am afraid about the copy in GPU memory). Does anyone have a better idea for that? Quick code to illustrate: import os import jax.numpy as jnp import tensorflow as tf def generator(): for _ in range(2): yield tf.random.uniform((1, )) ds = tf.data.Dataset.from_generator(generator, output_types=tf.float32, output_shapes=tf.TensorShape([1])) ds1 = ds.take(1).as_numpy_iterator() ds2 = ds.skip(1) for i, batch in enumerate(ds1): print(type(batch)) for i, batch in enumerate(ds2): print(type(jnp.array(batch))) # returns: <class 'numpy.ndarray'> # not good <class 'jaxlib.xla_extension.DeviceArray'> # good but not elegant | Both tensorflow and JAX have the ability to convert arrays to dlpack tensors without copying memory, so one way you can create a JAX array from a tensorflow array without copying the underlying data buffer is to do it via dlpack: import numpy as np import tensorflow as tf import jax.dlpack tf_arr = tf.random.uniform((10,)) dl_arr = tf.experimental.dlpack.to_dlpack(tf_arr) jax_arr = jax.dlpack.from_dlpack(dl_arr) np.testing.assert_array_equal(tf_arr, jax_arr) By doing the round-trip to JAX, you can compare unsafe_buffer_pointer() to ensure that the arrays point at the same buffer, rather than copying the buffer along the way: def tf_to_jax(arr): return jax.dlpack.from_dlpack(tf.experimental.dlpack.to_dlpack(arr)) def jax_to_tf(arr): return tf.experimental.dlpack.from_dlpack(jax.dlpack.to_dlpack(arr)) jax_arr = jnp.arange(20.) tf_arr = jax_to_tf(jax_arr) jax_arr2 = tf_to_jax(tf_arr) print(jnp.all(jax_arr == jax_arr2)) # True print(jax_arr.unsafe_buffer_pointer() == jax_arr2.unsafe_buffer_pointer()) # True | 5 | 6 |
69,802,491 | 2021-11-1 | https://stackoverflow.com/questions/69802491/create-recursive-dataclass-with-self-referential-type-hints | I want to write a dataclass definition in Python, but can't refer to that same class inside the declaration. Mainly what I want to achieve is the typing of this nested structure, as illustrated below: @dataclass class Category: title: str children: [Category] # I can't refer to a "Category" tree = Category(title='title 1', children=[ Category('title 11', children=[]), Category('title 12', children=[]) ]) | Option #1 You can wrap class name in a string in order to forward-declare the annotation: from dataclasses import dataclass @dataclass class Category: title: str children: list['Category'] Note: The ability to use list[type] for type hinting in Python was introduced in Python 3.9 as part of PEP 585. For Python versions earlier than 3.9, one can use typing.List instead: from dataclasses import dataclass from typing import List @dataclass class Category: title: str children: List['Category'] Option #2 You can include a __future__ import so that all annotations by default are forward-declared as below. In this case, you can also eliminate the typing import and use new-style annotations in Python 3.7 and above. from __future__ import annotations from dataclasses import dataclass @dataclass class Category: title: str children: list[Category] | 30 | 37 |
69,752,434 | 2021-10-28 | https://stackoverflow.com/questions/69752434/how-can-i-represent-stdin-and-stderr-with-pathlib-path | I love the pathlib.Path api and use it a lot for quick cli tools. Especially with typer. I have a few tightly related questions: In UNIX cli commands - is the de facto standard for stdin. Is that the same under Windows? Is there a clean, cross-platform way to have a pathlib.Path object (or actually the {POSIX,Windows}Path it automatically becomes) to represent stdin? And what about stdout? | There is no cross-platform pseudo file name for standard input and standard output. In POSIX, it's /dev/stdin for standard input and /dev/stdout for standard output, although many CLI tools would accept - in the command line arguments as a shorthand for standard input or standard output. In Windows, it's CONIN$ for standard input and CONOUT$ for standard output, or simply CON for both standard input and standard output. | 7 | 1 |
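Because there is no portable pseudo-path, a common pattern for CLI tools (a sketch, not part of the accepted answer) is to treat '-' as stdin and only build a pathlib.Path for real files; on POSIX an explicit /dev/stdin argument also works with this code:

    import sys
    from pathlib import Path

    def read_input(arg: str) -> str:
        """Read from the given file path, or from stdin when arg is '-'."""
        if arg == "-":
            return sys.stdin.read()
        return Path(arg).read_text()

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "-"
        print(read_input(target))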
69,810,895 | 2021-11-2 | https://stackoverflow.com/questions/69810895/pandas-ignores-dropna-false-with-categorical-columns-in-groupby | I want to include NA values when using groupby() which does not happen by default. I think the option dropna=False make it happen. But when the column is of type Categorical the option has no effect. I assume the best would say there is a well thought design decision behind that. Or maybe it is related to this pandas bug which I do not fully understand? The pandas version I use here is 1.2.5. #!/usr/bin/env python3 import pandas as pd print(pd.__version__) # 1.2.5 # initial data df = pd.DataFrame( { '2019': [1, pd.NA, 0], 'N': [2, 0, 7], } ) print(df) ## groupby()'s working as expected # without NA res = df.groupby('2019').size() print(f'\n{res}') # include NA res = df.groupby('2019', dropna=False).size() print(f'\n{res}') ## now the problems ## convert to Category df['2019'] = df['2019'].astype('category') # PROBLEM: NA is ignored res = df.groupby('2019', dropna=False).size() print(f'\n{res}') # PROBLEM: NA is ignored even observed has no effect res = df.groupby('2019', dropna=False, observed=True).size() print(f'\n{res}') In the output you see the initial DataFrame first and then two groupby() outputs that behave as expected. But then the last two groupby() outputs ilustrating my problem. 1.2.5 2019 N 0 1 2 1 <NA> 0 2 0 7 2019 0 1 1 1 dtype: int64 2019 0.0 1 1.0 1 NaN 1 dtype: int64 2019 0 1 1 1 dtype: int64 2019 1 1 0 1 dtype: int64 >>> | This is a bug. It has been fixed and will be released in pandas 2.0. The simplest workaround is to temporarily undo the categories thing: orig = df['2019'].cat.categories.dtype if np.issubdtype(orig, np.integer) or orig == 'bool': orig = 'Int64' # Allow NA values. res = df.astype({'2019': orig}).groupby('2019', dropna=False, observed=True).size() | 8 | 4 |
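A self-contained version of the answer's workaround, with the imports it relies on spelled out and the question's frame rebuilt (behaviour as of pandas 1.x; the underlying bug is fixed in pandas 2.0):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'2019': [1, pd.NA, 0], 'N': [2, 0, 7]})
    df['2019'] = df['2019'].astype('category')

    # Temporarily undo the categorical dtype so that dropna=False is honoured.
    orig = df['2019'].cat.categories.dtype
    if np.issubdtype(orig, np.integer) or orig == 'bool':
        orig = 'Int64'  # nullable integer dtype, so the NA value survives the cast
    res = df.astype({'2019': orig}).groupby('2019', dropna=False, observed=True).size()
    print(res)  # now includes a NaN group of size 1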
69,786,885 | 2021-10-31 | https://stackoverflow.com/questions/69786885/after-conda-update-python-kernel-crashes-when-matplotlib-is-used | I have create this simple env with conda: conda create -n test python=3.8.5 pandas scipy numpy matplotlib seaborn jupyterlab The following code in jupyter lab crashes the kernel : import matplotlib.pyplot as plt plt.subplot() I don't face the problem on Linux. The problem is when I try on Windows 10. There are no errors on the jupyter lab console (where I started the server), and I have no idea where to investigate. | Update 2021-11-06 The default pkgs/main channel for conda has reverted to using freetype 2.10.4 for Windows, per main / packages / freetype. If you are still experiencing the issue, use conda list freetype to check the version: freetype != 2.11.0 If it is 2.11.0, then change the version, per the solution, or conda update --all (providing your default channel isn't changed in the .condarc config file). Solution If this is occurring after installing Anaconda, updating conda or freetype since Oct 27, 2021. Go to the Anaconda prompt and downgrade freetype 2.11.0 in any affected environment. conda install freetype=2.10.4 Relevant to any package using matplotlib and any IDE For example, pandas.DataFrame.plot and seaborn Jupyter, Spyder, VSCode, PyCharm, command line. Discovery An issue occurs after updating with the most current updates from conda, released Friday, Oct 29. After updating with conda update --all, there's an issue with anything related to matplotlib in any IDE (not just Jupyter). I tested this in JupyterLab, PyCharm, and python from the command prompt. PyCharm: Process finished with exit code -1073741819 JupyterLab: kernel just restarts and there are no associated errors or Traceback command prompt: a blank interactive matplotlib window will appear briefly, and then a new command line appears. The issue seems to be with conda update --all in (base), then any plot API that uses matplotlib (e.g. seaborn and pandas.DataFrame.plot) kills the kernel in any environment. I had to reinstall Anaconda, but do not do an update of (base), then my other environments worked. I have not figured out what specifically is causing the issue. I tested the issue with python 3.8.12 and python 3.9.7 Current Testing: Following is the conda revision log. 
Prior to conda update --all this environment was working, but after the updates, plotting with matplotlib crashes the python kernel 2021-10-31 10:47:22 (rev 3) bokeh {2.3.3 (defaults/win-64) -> 2.4.1 (defaults/win-64)} click {8.0.1 (defaults/noarch) -> 8.0.3 (defaults/noarch)} filelock {3.0.12 (defaults/noarch) -> 3.3.1 (defaults/noarch)} freetype {2.10.4 (defaults/win-64) -> 2.11.0 (defaults/win-64)} imagecodecs {2021.6.8 (defaults/win-64) -> 2021.8.26 (defaults/win-64)} joblib {1.0.1 (defaults/noarch) -> 1.1.0 (defaults/noarch)} lerc {2.2.1 (defaults/win-64) -> 3.0 (defaults/win-64)} more-itertools {8.8.0 (defaults/noarch) -> 8.10.0 (defaults/noarch)} pyopenssl {20.0.1 (defaults/noarch) -> 21.0.0 (defaults/noarch)} scikit-learn {0.24.2 (defaults/win-64) -> 1.0.1 (defaults/win-64)} statsmodels {0.12.2 (defaults/win-64) -> 0.13.0 (defaults/win-64)} sympy {1.8 (defaults/win-64) -> 1.9 (defaults/win-64)} tqdm {4.62.2 (defaults/noarch) -> 4.62.3 (defaults/noarch)} xlwings {0.24.7 (defaults/win-64) -> 0.24.9 (defaults/win-64)} The issue seems to be freetype Downgrading from 2.11.0 to 2.10.4 resolved the issue and made the environment work with matplotlib Went to post a bug report and discovered there is [Bug]: Matplotlib crashes Python #21511 | 46 | 75 |
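If you want to check from Python which FreeType build matplotlib is linked against before deciding whether to downgrade (a small sanity check, not part of the answer above):

    import matplotlib
    from matplotlib import ft2font

    print("matplotlib version:", matplotlib.__version__)
    print("freetype version  :", ft2font.__freetype_version__)
    # 2.11.0 here, on the affected conda builds, matches the crash described
    # above; 2.10.4 is the known-good version.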
69,796,919 | 2021-11-1 | https://stackoverflow.com/questions/69796919/how-to-generate-python-typing-information-for-a-library-that-supports-sync-and-a | I have a problem with both typing and type-hints in Python. I prepared an (executable) example that showcases the problem that I am facing for a library that should expose both, a synchronous and asynchronous interface. It is obviously a simplified example, cutting out the noise and focussing on the issue at hand: import asyncio import time class CommonFunctions: # I want to use this as the interface defintion # for my async and sync implementations below def do_something(self, i: int, text: str = "hello world") -> str: return text * i def do_something_else(self, i: int, j: int) -> str: return i * j # Imagine more functions here... This is # the library and where most development happens. class SyncLib(CommonFunctions): def __complex_operation(self): # This is a standin for sync-api calls time.sleep(1) def do_something(self, *args, **kwargs): self.__complex_operation() return super().do_something(*args, **kwargs) def do_something_else(self, *args, **kwargs): self.__complex_operation() return super().do_something_else(*args, **kwargs) class AsyncLib(CommonFunctions): async def __complex_async_operation(self): # This is a standin for async-api calls await asyncio.sleep(1) async def do_something(self, *args, **kwargs): await self.__complex_async_operation() return super().do_something(*args, **kwargs) async def do_something_else(self, *args, **kwargs): await self.__complex_async_operation() return super().do_something_else(*args, **kwargs) print("User using the sync-lib...") sync_lib = SyncLib() print(sync_lib.do_something(5)) print(sync_lib.do_something_else(5, 6)) print("User using the async-lib...") async_lib = AsyncLib() async def async_main(): print(await async_lib.do_something(5)) print(await async_lib.do_something_else(5, 6)) asyncio.get_event_loop().run_until_complete(async_main()) The classes SyncLib and AsyncLib are the interfaces that will be used by users of the library. For different usecases, the library should have a synchronous and asynchronous interface provided by the classes SyncLib and AsyncLib (for the interested: the actual library is based around the httpx sync/async clients, here symbolized by time.sleep()/asyncio.sleep() calls). Most of the library actually is synchronous, but for a small portion it depends on the calls to the underlying library, here symbolized by the __complex_operation calls. The problem For users of the library, the problem with this design is that the typing information and type-hints for the AsyncLib (and the SyncLib) is all out of whack. All public member functions look like they accept *args, **kwargs arguments. This, I can easily fix by adding a Protocol for the CommonFunctions-class that exactly defines the available arguments and argument types for each function. However, such a protocol would then only define the synchronous calls. The asynchronous calls would need a second declaration of their own Protocol that just would be a copy of the original sychronous Protocol with all function declarations being marked as async. All of a sudden, should a parameter need to be added to a function, a developer needs to maintain three interfaces: The CommonFunctions, the synchronous protocol, and the asynchronous protocol (plus the docstrings). There must be a better way?! 
My idea and question: Is there a way to generate an async-Protocol from the sync-Protocol in such a way that the typing library and/or type-hinting tools in IDEs can undestand them? What I tried... Well, I read a bunch of PEPs and such, but actually did not find a lot for converting a set of signatures from one Protocol to another. Even changing the signature of a single function from sync to async while preserving the argument lists from the sync function in a generic way seems complicated. For the single-function case I was thinking of something like a decorator that could do something like this: import typing Result = typing.TypeVar("Result") Args = typing.TypeVar("Args") def typing_sync_to_async(func: typing.Callable[Args, Result]) -> typting.Callable[Result, typing.Coroutine[typing.Any, typing.Any, Result]]: return func @typing_sync_to_async def sync_example(x: int) -> str: pass But well, this does not work because The argument list cannot be captures like this This does not cover **kwargs as I found out, because Callable does not implement a way to define types for keyword arguments, it seems. To me, it seems that the only way to do this is to define those Protocols for hundreds of functions manually (async and sync) and that I cannot rely on an existing feature that would provide this. Is that correct? Maybe, if I learn more about the typing library and contribute something myself? | This can be solved using Paramspec introduced in PEP-612 Here is the modified decorator from your example : from typing import ParamSpec, TypeVar from collections.abc import Awaitable, Callable P = ParamSpec("P") R = TypeVar("R") def to_async(func: Callable[P, R]) -> Callable[P, Awaitable[R]]: ... This will capture necessary function signatures, including key-word arguments. Example demonstrating correct type inference: | 5 | 4 |
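The answer's screenshot demonstrating the inferred types is not reproduced above; below is a hedged sketch of how such a ParamSpec-based decorator is typically completed and exercised (Python 3.10+, the wrapper body and names are illustrative, not the answerer's exact code):

    import asyncio
    from collections.abc import Awaitable, Callable
    from typing import ParamSpec, TypeVar

    P = ParamSpec("P")
    R = TypeVar("R")

    def to_async(func: Callable[P, R]) -> Callable[P, Awaitable[R]]:
        """Wrap a sync callable; P preserves its full signature for type checkers."""
        async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            return func(*args, **kwargs)
        return wrapper

    @to_async
    def do_something(i: int, text: str = "hello world") -> str:
        return text * i

    async def main() -> None:
        # mypy/pyright infer (i: int, text: str = ...) -> Awaitable[str],
        # so passing wrong argument types is flagged, including keyword args.
        print(await do_something(2, text="hi "))

    asyncio.run(main())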
69,748,975 | 2021-10-28 | https://stackoverflow.com/questions/69748975/virtual-environments-architecture-made-by-pipenv-become-intel-chip-on-apple-mac | I'm struggling with using python on mac m1, and I found that there's an issue on pipenv for making virtual environment with correct architecture. As you can see on the above picture, when I open the terminal with aram64 architecture and make virtual environment using pipenv, the architecture becomes i386. I'm not sure if this causes a big problem, it blocked me to use some of 3rd party packages such as numpy and pandas, although I failed to reproduce the error. (As I remember, it showed an error message like mach-o: but wrong architecture.) The version of pipenv I'm using is 2021.5.29. > arch arm64 > pipenv --python 3.8 Creating a virtualenv for this project... Pipfile: /Users/seewoolee/development/tmp/Pipfile Using /usr/bin/python3 (3.8.9) to create virtualenv... ⠦ Creating virtual environment...created virtual environment CPython3.8.9.final.0-64 in 388ms creator CPython3macOsFramework(dest=/Users/seewoolee/.local/share/virtualenvs/tmp-miv_sugU, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/seewoolee/Library/Application Support/virtualenv) added seed packages: pip==21.2.4, setuptools==58.1.0, wheel==0.37.0 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator ✔ Successfully created virtual environment! Virtualenv location: /Users/seewoolee/.local/share/virtualenvs/tmp-miv_sugU Creating a Pipfile for this project... > pipenv shell Launching subshell in virtual environment... . /Users/seewoolee/.local/share/virtualenvs/tmp-miv_sugU/bin/activate > arch i386 | I had this exact problem, and @Beel's comment gave me the clue that I needed to solve it. In my case, pipenv was referencing a version of python that was built for x86_64. Specifically: $ which pipenv /opt/anaconda3/bin/pipenv $ file /opt/anaconda3/bin/python /opt/anaconda3/bin/python: Mach-O 64-bit executable x86_64 This happened when I installed Anaconda and it put itself at the front of my PATH. After adjusting my path to resolve to the version of Pipenv I had installed, pipenv shell does not switch the arch as a side-effect, and I was able to install the correct versions of numpy, psutil, etc. | 6 | 3 |
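A quick way to confirm, from inside any pipenv environment, which architecture the running interpreter was built for (an extra check alongside the file/which commands in the answer):

    import platform
    import sys

    print(sys.executable)        # the python binary the environment actually resolved
    print(platform.machine())    # 'arm64' for native Apple Silicon, 'x86_64' under Rosetta
    print(platform.platform())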
69,808,514 | 2021-11-2 | https://stackoverflow.com/questions/69808514/how-to-fix-jupyter-extension-activation-failed-when-opening-python-files | I installed python lately on my macos system and when I try to open a python file I see this error popup about Jupyter extension : | Just ran into this today (I'm on MacOS for my work computer). In my case, upgrading to the pre-release version of the Jupyter extension (v2022.5.1001281006) solved it right away. If you're not an experienced programmer/software engineer, as is the case for me, I suggest trying to upgrade (or roll back) either VS code or the plugin itself before going with your own dev code for the plugin, as with @CAG2 pt2 answer. (Personally, I had trouble finding a 'product.json' file; perhaps you need an insider's build of VS code for that kind of thing [edit]: https://code.visualstudio.com/api/advanced-topics/using-proposed-api, in which case I think you'd be editing 'package.json'). | 7 | 2 |
69,783,897 | 2021-10-31 | https://stackoverflow.com/questions/69783897/compute-class-weight-function-issue-in-sklearn-library-when-used-in-keras-cl | The classifier script I wrote is working fine and recently added weight balancing to the fitting. Since I added the weight estimate function using 'sklearn' library I get the following error : compute_class_weight() takes 1 positional argument but 3 were given This error does not make sense per documentation. The script should have three inputs but not sure why it says expecting only one variable. Full error and code information is shown below. Apparently, this is failing only in VS code. I tested in the Jupyter notebook and working fine. So it seems an issue with VS code compiler. Any one notice? ( I am using Python 3.8 with other latest other libraries) from sklearn.utils import compute_class_weight train_classes = train_generator.classes class_weights = compute_class_weight( "balanced", np.unique(train_classes), train_classes ) class_weights = dict(zip(np.unique(train_classes), class_weights)), class_weights In Jupyter Notebook, | After spending a lot of time, this is how I fixed it. I still don't know why but when the code is modified as follows, it works fine. I got the idea after seeing this solution for a similar but slightly different issue. class_weights = compute_class_weight( class_weight = "balanced", classes = np.unique(train_classes), y = train_classes ) class_weights = dict(zip(np.unique(train_classes), class_weights)) class_weights | 35 | 92 |
69,785,084 | 2021-10-31 | https://stackoverflow.com/questions/69785084/running-cells-with-python-3-10-requires-ipykernel-installed | I just installed Python 3.10 on my laptop (Ubuntu 20.04). Running a Jupyter Notebook inside of VS Code works with Python 3.9 but not with Python 3.10. I get the error message: Running cells with 'Python 3.10.0 64 bit' requires ipykernel installed or requires an update. Update February 2022 Jalil Nourmohammadi Khiarak gave a more complete answere, it is now the new accepted answer. Update January 2022 It was a dumb error, I solved my problem (see accepted answer). Things I tried: Clicking on reinstall, which runs: /usr/bin/python3.10 /home/joris/.vscode/extensions/ms-python.python-2021.10.1365161279/pythonFiles/shell_exec.py /usr/bin/python3.10 -m pip install -U --force-reinstall ipykernel /tmp/tmp-12568krFMIDJVy4jp.log Running pip3 install --upgrade ipykernel jupyter notebook pyzmq (from this thread). Edits As asked in the comments, here is the output when I click the "reinstall" button: /usr/bin/python3.10 /home/joris/.vscode/extensions/ms-python.python-2021.10.1365161279/pythonFiles/shell_exec.py /usr/bin/python3.10 -m pip install -U --force-reinstall ipykernel /tmp/tmp-10997AnLZP3B079oV.log Executing command in shell >> /usr/bin/python3.10 -m pip install -U --force-reinstall ipykernel Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/lib/python3/dist-packages/pip/__main__.py", line 19, in <module> sys.exit(_main()) File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 73, in main command = create_command(cmd_name, isolated=("--isolated" in cmd_args)) File "/usr/lib/python3/dist-packages/pip/_internal/commands/__init__.py", line 96, in create_command module = importlib.import_module(module_path) File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 24, in <module> from pip._internal.cli.req_command import RequirementCommand File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 15, in <module> from pip._internal.index.package_finder import PackageFinder File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 21, in <module> from pip._internal.index.collector import parse_links File "/usr/lib/python3/dist-packages/pip/_internal/index/collector.py", line 12, in <module> from pip._vendor import html5lib, requests ImportError: cannot import name 'html5lib' from 'pip._vendor' (/usr/lib/python3/dist-packages/pip/_vendor/__init__.py) Traceback (most recent call last): File "/home/joris/.vscode/extensions/ms-python.python-2021.10.1365161279/pythonFiles/shell_exec.py", line 26, in <module> subprocess.check_call(shell_args, stdout=sys.stdout, stderr=sys.stderr) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise 
CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/python3.10', '-m', 'pip', 'install', '-U', '--force-reinstall', 'ipykernel']' returned non-zero exit status 1. Here is what my _vendor folder contains: joris@joris-N751JK:~$ ls /usr/lib/python3/dist-packages/pip/_vendor/ __init__.py __pycache__ Here is the output of reinstalling pip and checking the _vendor file: joris@joris-N751JK:~$ python3 -m pip install --upgrade --force-reinstall pip Defaulting to user installation because normal site-packages is not writeable Collecting pip Using cached pip-21.3.1-py3-none-any.whl (1.7 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 21.3.1 Uninstalling pip-21.3.1: Successfully uninstalled pip-21.3.1 Successfully installed pip-21.3.1 joris@joris-N751JK:~$ ls /usr/lib/python3/dist-packages/pip/_vendor __init__.py __pycache__ | I would like to add a comment for that: Your solution is correct but it didn't work for me when I have used it on my new Linux. I did the following job to solve the problem. Probably people after using the following comment: python3.10 -m pip install ipykernel Will get error for 'distutils.util'. So you should install first: sudo apt-get install python3.10-distutils Then again if you try to install you will get the other error: ImportError: cannot import name 'html5lib' from 'pip._vendor' (/usr/lib/python3/dist-packages/pip/_vendor/ To solve it you should use: curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 Finally, it would be best if you run: /bin/python3.10 ~/.vscode/extensions/ms-python.python-2022.0.1786462952/pythonFiles/shell_exec.py /bin/python3.10 -m pip install -U notebook /tmp/tmp-5290PWIe78U4HgLu.log | 17 | 16 |
69,809,832 | 2021-11-2 | https://stackoverflow.com/questions/69809832/ipykernel-jupyter-notebook-labs-cannot-import-name-filefind-from-traitlets | I installed Jupyter notebook and labs on an EC2 instance and for some reason I get the following error: ImportError: cannot import name 'filefind' from 'traitlets.utils' (/usr/lib/python3/dist-packages/traitlets/utils/__init__.py) Jupyter opens fine in the browser but I can't seem to be able to work in a Python notebook. | I discourage the OP's solution: downloading and overwriting python libraries is not the way to keep your system stable and clean! What I found is that the Jupyter notebook installation itself reported four significant errors, caused by python3 packages that were not installed at compatible versions. ERROR: ipykernel 6.6.0 has requirement traitlets<6.0,>=5.1.0, but you'll have traitlets 4.3.3 which is incompatible. ERROR: jupyterlab-pygments 0.1.2 has requirement pygments<3,>=2.4.1, but you'll have pygments 2.3.1 which is incompatible. ERROR: nbconvert 6.3.0 has requirement pygments>=2.4.1, but you'll have pygments 2.3.1 which is incompatible. ERROR: nbconvert 6.3.0 has requirement traitlets>=5.0, but you'll have traitlets 4.3.3 which is incompatible. The solution is simply to update those packages to the required versions with: pip3 install traitlets==5.1.1 pip3 install pygments==2.4.1 This applies to all similar cases where outdated packages break the installation. | 9 | 19 |
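To confirm that the upgrades took effect in the environment Jupyter actually uses (a small verification step, not part of the answer):

    import traitlets
    import pygments

    print("traitlets:", traitlets.__version__)  # needs >= 5.1.0 per the ipykernel 6.6.0 error above
    print("pygments :", pygments.__version__)   # needs >= 2.4.1 per the nbconvert 6.3.0 error above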
69,810,210 | 2021-11-2 | https://stackoverflow.com/questions/69810210/mediapipe-solutionfacedetection | I want to use mediapipe facedetection module to crop face Images from original images and videos, to build a dataset for emotion recognition. is there a way of getting the bounding boxes from mediapipe faceDetection solution? cap = cv2.VideoCapture(0) with mp_face_detection.FaceDetection( model_selection=0, min_detection_confidence=0.5) as face_detection: while cap.isOpened(): success, image = cap.read() if not success: print("Ignoring empty camera frame.") continue image.flags.writeable = False image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) results = face_detection.process(image) # Draw the face detection annotations on the image. image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) if results.detections: for detection in results.detections: mp_drawing.draw_detection(image, detection) ## ''' #### here i want to grab the bounding box for the detected faces in order to crop the face image ''' ## cv2.imshow('MediaPipe Face Detection', cv2.flip(image, 1)) if cv2.waitKey(5) & 0xFF == 27: break cap.release() thank you | In order to figure out format you can follow two ways: Check protobuf files in medipipe Check out for what "Detection" is: https://github.com/google/mediapipe/blob/master/mediapipe/framework/formats/detection.proto We need location_data. It should have format field, which should be BOUNDING_BOX, or RELATIVE_BOUNDING_BOX (but in fact only RELATIVE_BOUNDING_BOX). Checkout for drawing_utils contents: Just check for draw_detection method. You need line with cv2.rectangle call Here's a snippet results = face_detection.process(image) # Draw the face detection annotations on the image. image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) if results.detections: for detection in results.detections: draw_detection(image, detection) ## ''' #### here i want to grab the bounding box for the detected faces in order to crop the face image ''' ## location_data = detection.location_data if location_data.format == LocationData.RELATIVE_BOUNDING_BOX: bb = location_data.relative_bounding_box bb_box = [ bb.xmin, bb.ymin, bb.width, bb.height, ] print(f"RBBox: {bb_box}") | 5 | 4 |
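To actually crop the detected faces for a dataset, the relative bounding box from the answer still has to be scaled to pixel coordinates; a minimal sketch (field names follow mediapipe's relative_bounding_box as shown in the answer, the helper itself is illustrative):

    import cv2

    def crop_face(image, detection):
        """Crop one detected face out of a BGR frame using its relative bbox."""
        h, w = image.shape[:2]
        bb = detection.location_data.relative_bounding_box
        x1 = max(int(bb.xmin * w), 0)
        y1 = max(int(bb.ymin * h), 0)
        x2 = min(int((bb.xmin + bb.width) * w), w)
        y2 = min(int((bb.ymin + bb.height) * h), h)
        return image[y1:y2, x1:x2]

    # Inside the detection loop from the question:
    # for idx, detection in enumerate(results.detections):
    #     face = crop_face(image, detection)
    #     cv2.imwrite(f"face_{idx}.png", face)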
69,776,492 | 2021-10-30 | https://stackoverflow.com/questions/69776492/indexerror-tuple-index-out-of-range-when-i-try-to-create-an-executable-from-a-p | I have been trying out an open-sourced personal AI assistant script. The script works fine but I want to create an executable so that I can gift the executable to one of my friends. However, when I try to create the executable using the auto-py-to-exe, it states the below error: Running auto-py-to-exe v2.10.1 Building directory: C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x Provided command: pyinstaller --noconfirm --onedir --console --no-embed-manifest "C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py" Recursion Limit is set to 5000 Executing: pyinstaller --noconfirm --onedir --console --no-embed-manifest C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py --distpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\application --workpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\build --specpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x 42681 INFO: PyInstaller: 4.6 42690 INFO: Python: 3.10.0 42732 INFO: Platform: Windows-10-10.0.19042-SP0 42744 INFO: wrote C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\AI_Ass.spec 42764 INFO: UPX is not available. 42772 INFO: Extending PYTHONPATH with paths ['C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310'] 43887 INFO: checking Analysis 43891 INFO: Building Analysis because Analysis-00.toc is non existent 43895 INFO: Initializing module dependency graph... 43915 INFO: Caching module graph hooks... 43975 INFO: Analyzing base_library.zip ... 54298 INFO: Processing pre-find module path hook distutils from 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-distutils.py'. 54306 INFO: distutils: retargeting to non-venv dir 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib' 57474 INFO: Caching module dependency graph... 58088 INFO: running Analysis Analysis-00.toc 58132 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable required by C:\Users\Tarun\AppData\Local\Programs\Python\Python310\python.exe 58365 INFO: Analyzing C:\Users\Tarun\AppData\Local\Programs\Python\Python310\AI_Ass.py 59641 INFO: Processing pre-safe import module hook urllib3.packages.six.moves from 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-urllib3.packages.six.moves.py'. 
An error occurred while packaging Traceback (most recent call last): File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\auto_py_to_exe\packaging.py", line 131, in package run_pyinstaller() File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\__main__.py", line 124, in run run_build(pyi_config, spec_file, **vars(args)) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\__main__.py", line 58, in run_build PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 782, in main build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build')) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 714, in build exec(code, spec_namespace) File "C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\AI_Ass.spec", line 7, in <module> a = Analysis(['C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py'], File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 277, in __init__ self.__postinit__() File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\datastruct.py", line 155, in __postinit__ self.assemble() File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 439, in assemble priority_scripts.append(self.graph.add_script(script)) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 265, in add_script self._top_script_node = super().add_script(pathname) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1433, in add_script self._process_imports(n) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook target_package, target_module_partname = self._find_head_package( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package target_package = self._safe_import_module( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module self._process_imports(n) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File 
"C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook target_package, target_module_partname = self._find_head_package( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package target_package = self._safe_import_module( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module self._process_imports(n) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook target_package, target_module_partname = self._find_head_package( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package target_package = self._safe_import_module( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module self._process_imports(n) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook target_package, target_module_partname = self._find_head_package( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package target_package = self._safe_import_module( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module self._process_imports(n) File 
"C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook submodule = self._safe_import_module(head, mname, submodule) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module self._process_imports(n) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook submodule = self._safe_import_module(head, mname, submodule) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module self._process_imports(n) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports target_module = self._safe_import_hook(*import_info, **kwargs)[0] File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook target_modules = self.import_hook( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook submodule = self._safe_import_module(head, mname, submodule) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module return super()._safe_import_module(module_basename, module_name, parent_package) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2061, in _safe_import_module n = self._scan_code(module, co, co_ast) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2645, in _scan_code self._scan_bytecode( File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2749, in _scan_bytecode for inst in util.iterate_instructions(module_code_object): File 
"C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\util.py", line 147, in iterate_instructions yield from iterate_instructions(constant) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\util.py", line 139, in iterate_instructions yield from get_instructions(code_object) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\dis.py", line 338, in _get_instructions_bytes argval, argrepr = _get_const_info(arg, constants) File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\dis.py", line 292, in _get_const_info argval = const_list[const_index] IndexError: tuple index out of range Project output will not be moved to output folder Complete. I understand that there is a thread already about similar issue but it still doesn't solve the issue. Hence seeking out help I really have no idea why is the error occurring and how to resolve it. I am pasting the script below for your reference. Can some one please help? Thank you in advance #importing libraries import speech_recognition as sr import pyttsx3 import datetime import wikipedia import webbrowser import os import time import subprocess from ecapture import ecapture as ec import wolframalpha import json import requests #setting up speech engine engine=pyttsx3.init('sapi5') voices=engine.getProperty('voices') engine.setProperty('voice','voices[1].id') def speak(text): engine.say(text) engine.runAndWait() #Greet user def wishMe(): hour=datetime.datetime.now().hour if hour>=0 and hour<12: speak("Hello,Good Morning") print("Hello,Good Morning") elif hour>=12 and hour<18: speak("Hello,Good Afternoon") print("Hello,Good Afternoon") else: speak("Hello,Good Evening") print("Hello,Good Evening") #Setting up the command function for your AI assistant def takeCommand(): r=sr.Recognizer() with sr.Microphone() as source: print("Listening...") audio=r.listen(source) try: statement=r.recognize_google(audio,language='en-in') print(f"user said:{statement}\n") except Exception as e: speak("Pardon me, please say that again") return "None" return statement print("Loading your AI personal assistant Friday") speak("Loading your AI personal assistant Friday") wishMe() #main function if __name__=='__main__': while True: speak("Tell me how can I help you now?") statement = takeCommand().lower() if statement==0: continue if "good bye" in statement or "ok bye" in statement or "stop" in statement: speak('your personal assistant Friday is shutting down,Good bye') print('your personal assistant Friday is shutting down,Good bye') break if 'wikipedia' in statement: speak('Searching Wikipedia...') statement =statement.replace("wikipedia", "") results = wikipedia.summary(statement, sentences=10) webbrowser.open_new_tab("https://en.wikipedia.org/wiki/"+ statement) speak("According to Wikipedia") print(results) speak(results) elif 'open youtube' in statement: webbrowser.register('chrome', None, webbrowser.BackgroundBrowser("C://Program Files (x86)//Google//Chrome//Application//chrome.exe")) webbrowser.get('chrome').open_new_tab("https://www.youtube.com") #webbrowser.open_new_tab("https://www.youtube.com") speak("youtube is open now") time.sleep(5) elif 'open google' in statement: webbrowser.open_new_tab("https://www.google.com") speak("Google chrome is open now") time.sleep(5) elif 'open gmail' in statement: webbrowser.open_new_tab("gmail.com") speak("Google Mail open now") time.sleep(5) elif 'time' in statement: 
strTime=datetime.datetime.now().strftime("%H:%M:%S") speak(f"the time is {strTime}") elif 'news' in statement: news = webbrowser.open_new_tab("https://timesofindia.indiatimes.com/home/headlines") speak('Here are some headlines from the Times of India,Happy reading') time.sleep(6) elif "camera" in statement or "take a photo" in statement: ec.capture(0,"robo camera","img.jpg") elif 'search' in statement: statement = statement.replace("search", "") webbrowser.open_new_tab(statement) time.sleep(5) elif 'who are you' in statement or 'what can you do' in statement: speak('I am Friday version 1 point O your personal assistant. I am programmed to minor tasks like' 'opening youtube,google chrome, gmail and stackoverflow ,predict time,take a photo,search wikipedia,predict weather' 'In different cities, get top headline news from times of india and you can ask me computational or geographical questions too!') elif "who made you" in statement or "who created you" in statement or "who discovered you" in statement: speak("I was built by Mirthula") print("I was built by Mirthula") elif "log off" in statement or "sign out" in statement: speak("Ok , your pc will log off in 10 sec make sure you exit from all applications") subprocess.call(["shutdown", "/l"]) time.sleep(3) | This is a Python 3.10 issue. To fix it: You have to go to the folder "Python310\Lib" and edit the file 'dis.py'. In the 'dis.py' file you have to find this "def _unpack_opargs" and inside the else statement write a new line with this: "extended_arg = 0", then save the file. I did something like that: else: arg = None extended_arg = 0 yield (i, op, arg) and everything is working fine now. I found the solution here: https://bugs.python.org/issue45757 https://github.com/pyinstaller/pyinstaller/issues/6301 | 24 | 40 |
69,800,500 | 2021-11-1 | https://stackoverflow.com/questions/69800500/how-to-calculate-correlation-coefficients-using-sklearn-cca-module | I need to measure similarity between feature vectors using CCA module. I saw sklearn has a good CCA module available: https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html In different papers I reviewed, I saw that the way to measure similarity using CCA is to calculate the mean of the correlation coefficients, for example as done in this following notebook example: https://github.com/google/svcca/blob/1f3fbf19bd31bd9b76e728ef75842aa1d9a4cd2b/tutorials/001_Introduction.ipynb How to calculate the correlation coefficients (as shown in the notebook) using sklearn CCA module? from sklearn.cross_decomposition import CCA import numpy as np U = np.random.random_sample(500).reshape(100,5) V = np.random.random_sample(500).reshape(100,5) cca = CCA(n_components=1) cca.fit(U, V) cca.coef_.shape # (5,5) U_c, V_c = cca.transform(U, V) U_c.shape # (100,1) V_c.shape # (100,1) This is an example of the sklearn CCA module, however I have no idea how to retrieve correlation coefficients from it. | In reference to the notebook you provided which is a supporting artefact to and implements ideas from the following two papers "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability". Neural Information Processing Systems (NeurIPS) 2017 "Insights on Representational Similarity in Deep Neural Networks with Canonical Correlation". Neural Information Processing Systems (NeurIPS) 2018 The authors there calculate 50 = min(A_fake neurons, B_fake neurons) components and plot the correlations between the transformed vectors of each component (i.e. 50). With the help of the below code, using sklearn CCA, I am trying to reproduce their Toy Example. As we'll see the correlation plots match. The sanity check they used in the notebook came very handy - it passed seamlessly with this code as well. import numpy as np from matplotlib import pyplot as plt from sklearn.cross_decomposition import CCA # rows contain the number of samples for CCA and the number of rvs goes in columns X = np.random.randn(2000, 100) Y = np.random.randn(2000, 50) # num of components n_comps = min(X.shape[1], Y.shape[1]) cca = CCA(n_components=n_comps) cca.fit(X, Y) X_c, Y_c = cca.transform(X, Y) # calculate and plot the correlations of all components corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(n_comps)] plt.plot(corrs) plt.xlabel('cca_idx') plt.ylabel('cca_corr') plt.show() Output: For the sanity check, replace the Y data matrix by a scaled invertible transform of X and rerun the code. Y = np.dot(X, np.random.randn(100, 100)) Output: | 7 | 10 |
69,805,091 | 2021-11-2 | https://stackoverflow.com/questions/69805091/how-to-create-an-interactive-brain-shaped-graph | I'm working on a visualization project in networkx and plotly. Is there a way to create a 3D graph that resembles how a human brain looks like in networkx and then to visualize it with plotly (so it will be interactive)? The idea is to have the nodes on the outside (or only show the nodes if it's easier) and to color a set of them differently like the image above | Based on the clarified requirements, I took a new approach: Download accurate brain mesh data from BrainNet Viewer github repo; Plot a random graph with 3D-coordinates using Kamada-Kuwai cost function in three dimensions centered in a sphere containing the brain mesh; Radially expand the node positions away from the center of the brain mesh and then shift them back to the closest vertex actually on the brain mesh; Color some nodes red based on an arbitrary distance criterion from a randomly selected mesh vertex; Fiddle with a bunch of plotting parameters to make it look decent. There is a clearly delineated spot to add in different graph data as well as change the logic by which the node colors are decided. The key parameters to play with so that things look decent after introducing new graph data are: scale_factor: This changes how much the original Kamada-Kuwai calculated coordinates are translated radially away from the center of the brain mesh before they are snapped back to its surface. Larger values will make more nodes snap to the outer surface of the brain. Smaller values will leave more nodes positioned on the surfaces between the two hemispheres. opacity of the lines in the edge trace: Graphs with more edges will quickly clutter up field of view and make the overall brain shape less visible. This speaks to my biggest dissatisfaction with this overall approach -- that edges which appear outside of the mesh surface make it harder to see the overall shape of the mesh, especially between the temporal lobes. My other biggest caveat here is that there is no attempt has been made to check whether any nodes positioned on the brain surface happen to coincide or have any sort of equal spacing. Here is a screenshot and the live demo on Colab. Full copy-pasteable code below. There are a whole bunch of asides that could be discussed here, but for brevity I will only note two: Folks interested in this topic but feeling overwhelmed by programming details should absolutely check out BrainNet Viewer; There are plenty of other brain meshes in the BrainNet Viewer github repo that could be used. Even better, if you have any mesh which can be formatted or reworked to be compatible with this approach, you could at least try wrapping a set of nodes around any other non-brain and somewhat round-ish mesh representing any other object. 
import plotly.graph_objects as go import numpy as np import networkx as nx import math def mesh_properties(mesh_coords): """Calculate center and radius of sphere minimally containing a 3-D mesh Parameters ---------- mesh_coords : tuple 3-tuple with x-, y-, and z-coordinates (respectively) of 3-D mesh vertices """ radii = [] center = [] for coords in mesh_coords: c_max = max(c for c in coords) c_min = min(c for c in coords) center.append((c_max + c_min) / 2) radius = (c_max - c_min) / 2 radii.append(radius) return(center, max(radii)) # Download and prepare dataset from BrainNet repo coords = np.loadtxt(np.DataSource().open('https://raw.githubusercontent.com/mingruixia/BrainNet-Viewer/master/Data/SurfTemplate/BrainMesh_Ch2_smoothed.nv'), skiprows=1, max_rows=53469) x, y, z = coords.T triangles = np.loadtxt(np.DataSource().open('https://raw.githubusercontent.com/mingruixia/BrainNet-Viewer/master/Data/SurfTemplate/BrainMesh_Ch2_smoothed.nv'), skiprows=53471, dtype=int) triangles_zero_offset = triangles - 1 i, j, k = triangles_zero_offset.T # Generate 3D mesh. Simply replace with 'fig = go.Figure()' or turn opacity to zero if seeing brain mesh is not desired. fig = go.Figure(data=[go.Mesh3d(x=x, y=y, z=z, i=i, j=j, k=k, color='lightpink', opacity=0.5, name='', showscale=False, hoverinfo='none')]) # Generate networkx graph and initial 3-D positions using Kamada-Kawai path-length cost-function inside sphere containing brain mesh G = nx.gnp_random_graph(200, 0.02, seed=42) # Replace G with desired graph here mesh_coords = (x, y, z) mesh_center, mesh_radius = mesh_properties(mesh_coords) scale_factor = 5 # Tune this value by hand to have more/fewer points between the brain hemispheres. pos_3d = nx.kamada_kawai_layout(G, dim=3, center=mesh_center, scale=scale_factor*mesh_radius) # Calculate final node positions on brain surface pos_brain = {} for node, position in pos_3d.items(): squared_dist_matrix = np.sum((coords - position) ** 2, axis=1) pos_brain[node] = coords[np.argmin(squared_dist_matrix)] # Prepare networkx graph positions for plotly node and edge traces nodes_x = [position[0] for position in pos_brain.values()] nodes_y = [position[1] for position in pos_brain.values()] nodes_z = [position[2] for position in pos_brain.values()] edge_x = [] edge_y = [] edge_z = [] for s, t in G.edges(): edge_x += [nodes_x[s], nodes_x[t]] edge_y += [nodes_y[s], nodes_y[t]] edge_z += [nodes_z[s], nodes_z[t]] # Decide some more meaningful logic for coloring certain nodes. Currently the squared distance from the mesh point at index 42. node_colors = [] for node in G.nodes(): if np.sum((pos_brain[node] - coords[42]) ** 2) < 1000: node_colors.append('red') else: node_colors.append('gray') # Add node plotly trace fig.add_trace(go.Scatter3d(x=nodes_x, y=nodes_y, z=nodes_z, #text=labels, mode='markers', #hoverinfo='text', name='Nodes', marker=dict( size=5, color=node_colors ) )) # Add edge plotly trace. Comment out or turn opacity to zero if not desired. fig.add_trace(go.Scatter3d(x=edge_x, y=edge_y, z=edge_z, mode='lines', hoverinfo='none', name='Edges', opacity=0.1, line=dict(color='gray') )) # Make axes invisible fig.update_scenes(xaxis_visible=False, yaxis_visible=False, zaxis_visible=False) # Manually adjust size of figure fig.update_layout(autosize=False, width=800, height=800) fig.show() | 7 | 2 |
69,752,055 | 2021-10-28 | https://stackoverflow.com/questions/69752055/valueerror-none-values-not-supported-code-working-properly-on-cpu-gpu-but-not | I am trying to train a seq2seq model for language translation, and I am copy-pasting code from this Kaggle Notebook on Google Colab. The code is working fine with CPU and GPU, but it is giving me errors while training on a TPU. This same question has been already asked here. Here is my code: strategy = tf.distribute.experimental.TPUStrategy(resolver) with strategy.scope(): model = create_model() model.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy') model.fit_generator(generator = generate_batch(X_train, y_train, batch_size = batch_size), steps_per_epoch = train_samples // batch_size, epochs = epochs, validation_data = generate_batch(X_test, y_test, batch_size = batch_size), validation_steps = val_samples // batch_size) Traceback: Epoch 1/2 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-60-940fe0ee3c8b> in <module>() 3 epochs = epochs, 4 validation_data = generate_batch(X_test, y_test, batch_size = batch_size), ----> 5 validation_steps = val_samples // batch_size) 10 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 992 except Exception as e: # pylint:disable=broad-except 993 if hasattr(e, "ag_error_metadata"): --> 994 raise e.ag_error_metadata.to_exception(e) 995 else: 996 raise ValueError: in user code: /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function * return step_function(self, iterator) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:842 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) ... ValueError: None values not supported. I couldn't figure out the error, and I think the error is because of this generate_batch function: X, y = lines['english_sentence'], lines['hindi_sentence'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 34) def generate_batch(X = X_train, y = y_train, batch_size = 128): while True: for j in range(0, len(X), batch_size): encoder_input_data = np.zeros((batch_size, max_length_src), dtype='float32') decoder_input_data = np.zeros((batch_size, max_length_tar), dtype='float32') decoder_target_data = np.zeros((batch_size, max_length_tar, num_decoder_tokens), dtype='float32') for i, (input_text, target_text) in enumerate(zip(X[j:j + batch_size], y[j:j + batch_size])): for t, word in enumerate(input_text.split()): encoder_input_data[i, t] = input_token_index[word] for t, word in enumerate(target_text.split()): if t<len(target_text.split())-1: decoder_input_data[i, t] = target_token_index[word] if t>0: decoder_target_data[i, t - 1, target_token_index[word]] = 1. yield([encoder_input_data, decoder_input_data], decoder_target_data) My Colab notebook - here Kaggle dataset - here TensorFlow version - 2.6 Edit - Please don't tell me to down-grade TensorFlow/Keras version to 1.x. I can down-grade it to TensorFlow 2.0, 2.1, 2.3 but not 1.x. I don't understand TensorFlow 1.x. Also, there is no point in using a 3-year-old version. | As stated in the referenced answer in the link you provided, tensorflow.data API works better with TPUs. In order to adapt it in your case, try to use return instead of yield in generate_batch function: def generate_batch(X = X_train, y = y_train, batch_size = 128): ... 
return encoder_input_data, decoder_input_data, decoder_target_data encoder_input_data, decoder_input_data, decoder_target_data = generate_batch(X_train, y_train, batch_size=128) And then use tensorflow.data to structure your data: from tensorflow.data import Dataset encoder_input_data = Dataset.from_tensor_slices(encoder_input_data) decoder_input_data = Dataset.from_tensor_slices(decoder_input_data) decoder_target_data = Dataset.from_tensor_slices(decoder_target_data) ds = Dataset.zip((encoder_input_data, decoder_input_data, decoder_target_data)).map(map_fn).batch(1024) where map_fn is defined by: def map_fn(encoder_input, decoder_input, decoder_target): return (encoder_input, decoder_input), decoder_target And finally use Model.fit instead of Model.fit_generator: model.fit(x=ds, epochs=epochs) | 5 | 1 |
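One practical note on the accepted answer above: TPUs generally prefer static shapes, so it may help to drop the last partial batch and add prefetching when building `ds`. A sketch under that assumption, reusing the `Dataset` objects and `map_fn` from the answer:

```python
import tensorflow as tf

# drop_remainder=True keeps every batch the same shape (TPU-friendly);
# prefetch overlaps host-side input preparation with device execution.
ds = (Dataset.zip((encoder_input_data, decoder_input_data, decoder_target_data))
      .map(map_fn)
      .batch(1024, drop_remainder=True)
      .prefetch(tf.data.AUTOTUNE))
model.fit(x=ds, epochs=epochs)
```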
69,755,906 | 2021-10-28 | https://stackoverflow.com/questions/69755906/how-to-obtain-smooth-histogram-after-scaling-image | I am trying to linearly scale an image so the whole greyscale range is used. This is to improve the lighting of the shot. When plotting the histogram however I don't know how to get the scaled histogram so that its smoother so it's a curve as aspired to discrete bins. Any tips or points would be much appreciated. import cv2 as cv import numpy as np import matplotlib.pyplot as plt img = cv.imread(r'/Users/harold/Documents/Academia/Nottingham Uni/Year 4/ImageProcessing/Imaging_Task_Sheet/PointImage.jpeg', cv.IMREAD_GRAYSCALE) img_s = img/255 img_s = img_s / np.max(img_s) img_s = img_s*255 histogram = cv.calcHist([img], [0], None, [256], [0, 256]) histogram1 = cv.calcHist([img_s.astype('uint8')], [0], None, [256], [0, 256]) plt.figure() plt.title("Grayscale Histogram") plt.xlabel("grayscale value") plt.ylabel("pixels") plt.plot(histogram, label='Original Image') # <- or here plt.plot(histogram1, label='Equalised Image') # <- or here The histogram produced is: Which is from this picture: | I think what you have in mind is a spline curve that passes through your points. Here is how to do it: import cv2 as cv import numpy as np import matplotlib.pyplot as plt from scipy import interpolate img = cv.imread(r'3NKTJ.jpg', cv.IMREAD_GRAYSCALE) img_s = img/255 img_s = img_s / np.max(img_s) img_s = img_s*255 histogram = cv.calcHist([img], [0], None, [256], [0, 256]) histogram1 = cv.calcHist([img_s.astype('uint8')], [0], None, [256], [0, 256]) x=np.linspace(0,len(histogram1),len(histogram1)) # x: 0 --> 255 with step=1 X=np.where(histogram1>0)[0] # extract bins with non-zero histogram1 values Y=histogram1[X] # the corresponding Y values F=interpolate.splrep(X, Y) # spline representation of (X,Y) Ynew = interpolate.splev(x, F) # calculate interpolated Ynew plt.figure() plt.title("Grayscale Histogram") plt.xlabel("grayscale value") plt.ylabel("pixels") plt.plot(histogram, label='Original Image') # <- or here plt.plot(histogram1, label='Equalised Image') # <- or here plt.plot(x,Ynew, label='spline interpolation of Equalised Image') Below, the result: Best regards, Stéphane | 6 | 3 |
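If the goal is only a visually smooth curve rather than an exact interpolation, an alternative to the spline fit in the accepted answer is to blur the raw histogram with a Gaussian kernel — a sketch reusing `histogram1` and `plt` from the answer, assuming SciPy is installed:

```python
from scipy.ndimage import gaussian_filter1d

# cv.calcHist returns shape (256, 1); flatten it, then smooth with an adjustable sigma
smooth_hist = gaussian_filter1d(histogram1.ravel(), sigma=3)
plt.plot(smooth_hist, label='Gaussian-smoothed equalised histogram')
plt.legend()
plt.show()
```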
69,798,145 | 2021-11-1 | https://stackoverflow.com/questions/69798145/circleci-started-11-1-2021-can-t-find-python-executable-python-you-can-set | As of this morning, CircleCI is failing for me with this strange build error: Can't find Python executable "python", you can set the PYTHON env variable I noticed it on a new commit. of course, thinking it was my new commit I forced pushed my last known passing commit onto main branch. In particular, this seems to have started for me this morning (11/1), and the build is now failing on the very same commit that passed 16 hours ago (isn’t that fun) The full error is: #!/bin/bash -eo pipefail if [ ! -f "package.json" ]; then echo echo "---" echo "Unable to find your package.json file. Did you forget to set the app-dir parameter?" echo "---" echo echo "Current directory: $(pwd)" echo echo echo "List directory: " echo ls exit 1 fi case yarn in npm) if [[ "false" == "true" ]]; then npm install else npm ci fi ;; yarn) if [[ "false" == "true" ]]; then yarn install else yarn install --frozen-lockfile fi ;; esac yarn install v1.22.15 [1/4] Resolving packages... [2/4] Fetching packages... info [email protected]: The platform "linux" is incompatible with this module. info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation. info [email protected]: The platform "linux" is incompatible with this module. info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation. [3/4] Linking dependencies... warning " > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0". warning "@babel/preset-react > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0". warning "@babel/preset-react > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0". warning "@babel/preset-react > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0". warning "@babel/preset-react > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0". warning "@babel/preset-react > @babel/plugin-transform-react-jsx > @babel/[email protected]" has unmet peer dependency "@babel/core@^7.0.0-0". warning " > @reactchartjs/[email protected]" has incorrect peer dependency "chart.js@^2.3". warning " > [email protected]" has unmet peer dependency "react-is@>= 16.8.0". warning " > [email protected]" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0". warning "webpack-dev-server > [email protected]" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0". [4/4] Building fresh packages... error /home/circleci/project/node_modules/node-sass: Command failed. 
Exit code: 1 Command: node scripts/build.js Arguments: Directory: /home/circleci/project/node_modules/node-sass Output: Building: /usr/local/bin/node /home/circleci/project/node_modules/node-gyp/bin/node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library= gyp info it worked if it ends with ok gyp verb cli [ gyp verb cli '/usr/local/bin/node', gyp verb cli '/home/circleci/project/node_modules/node-gyp/bin/node-gyp.js', gyp verb cli 'rebuild', gyp verb cli '--verbose', gyp verb cli '--libsass_ext=', gyp verb cli '--libsass_cflags=', gyp verb cli '--libsass_ldflags=', gyp verb cli '--libsass_library=' gyp verb cli ] gyp info using [email protected] gyp info using [email protected] | linux | x64 gyp verb command rebuild [] gyp verb command clean [] gyp verb clean removing "build" directory gyp verb command configure [] gyp verb check python checking for Python executable "python2" in the PATH gyp verb `which` failed Error: not found: python2 gyp verb `which` failed at getNotFoundError (/home/circleci/project/node_modules/which/which.js:13:12) gyp verb `which` failed at F (/home/circleci/project/node_modules/which/which.js:68:19) gyp verb `which` failed at E (/home/circleci/project/node_modules/which/which.js:80:29) gyp verb `which` failed at /home/circleci/project/node_modules/which/which.js:89:16 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/index.js:42:5 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/mode.js:8:5 gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:198:21) gyp verb `which` failed python2 Error: not found: python2 gyp verb `which` failed at getNotFoundError (/home/circleci/project/node_modules/which/which.js:13:12) gyp verb `which` failed at F (/home/circleci/project/node_modules/which/which.js:68:19) gyp verb `which` failed at E (/home/circleci/project/node_modules/which/which.js:80:29) gyp verb `which` failed at /home/circleci/project/node_modules/which/which.js:89:16 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/index.js:42:5 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/mode.js:8:5 gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:198:21) { gyp verb `which` failed code: 'ENOENT' gyp verb `which` failed } gyp verb check python checking for Python executable "python" in the PATH gyp verb `which` failed Error: not found: python gyp verb `which` failed at getNotFoundError (/home/circleci/project/node_modules/which/which.js:13:12) gyp verb `which` failed at F (/home/circleci/project/node_modules/which/which.js:68:19) gyp verb `which` failed at E (/home/circleci/project/node_modules/which/which.js:80:29) gyp verb `which` failed at /home/circleci/project/node_modules/which/which.js:89:16 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/index.js:42:5 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/mode.js:8:5 gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:198:21) gyp verb `which` failed python Error: not found: python gyp verb `which` failed at getNotFoundError (/home/circleci/project/node_modules/which/which.js:13:12) gyp verb `which` failed at F (/home/circleci/project/node_modules/which/which.js:68:19) gyp verb `which` failed at E (/home/circleci/project/node_modules/which/which.js:80:29) gyp verb `which` failed at /home/circleci/project/node_modules/which/which.js:89:16 gyp verb `which` failed at /home/circleci/project/node_modules/isexe/index.js:42:5 gyp verb `which` 
failed at /home/circleci/project/node_modules/isexe/mode.js:8:5 gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:198:21) { gyp verb `which` failed code: 'ENOENT' gyp verb `which` failed } gyp ERR! configure error gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable. gyp ERR! stack at PythonFinder.failNoPython (/home/circleci/project/node_modules/node-gyp/lib/configure.js:484:19) gyp ERR! stack at PythonFinder.<anonymous> (/home/circleci/project/node_modules/node-gyp/lib/configure.js:406:16) gyp ERR! stack at F (/home/circleci/project/node_modules/which/which.js:68:16) gyp ERR! stack at E (/home/circleci/project/node_modules/which/which.js:80:29) gyp ERR! stack at /home/circleci/project/node_modules/which/which.js:89:16 gyp ERR! stack at /home/circleci/project/node_modules/isexe/index.js:42:5 gyp ERR! stack at /home/circleci/project/node_modules/isexe/mode.js:8:5 gyp ERR! stack at FSReqCallback.oncomplete (node:fs:198:21) gyp ERR! System Linux 4.15.0-1110-aws gyp ERR! command "/usr/local/bin/node" "/home/circleci/project/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" gyp ERR! cwd /home/circleci/project/node_modules/node-sass gyp ERR! node -v v16.13.0 gyp ERR! node-gyp -v v3.8.0 gyp ERR! not ok Build failed with error code: 1 info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command. Exited with code exit status 1 CircleCI received exit code 1 My circleci config file (which has not changed) is: version: 2.1 # Use 2.1 to enable using orbs and other features. # Declare the orbs that we'll use in our config. # read more about orbs: https://circleci.com/docs/2.0/using-orbs/ orbs: ruby: circleci/[email protected] node: circleci/node@2 jobs: build: # our first job, named "build" docker: - image: circleci/ruby:2.7.4-node-browsers # use a tailored CircleCI docker image. auth: username: mydockerhub-user password: $DOCKERHUB_PASSWORD # context / project UI env-var reference steps: - checkout # pull down our git code. - ruby/install-deps # use the ruby orb to install dependencies # use the node orb to install our packages # specifying that we use `yarn` and to cache dependencies with `yarn.lock` # learn more: https://circleci.com/docs/2.0/caching/ - node/install-packages: pkg-manager: yarn cache-key: "yarn.lock" test: # our next job, called "test" | Try using a next-generation Ruby image. In your case, change circleci/ruby:2.7.4-node-browsers to cimg/ruby:2.7.4-browsers. You can find the full list of images here. | 6 | 2 |
69,817,464 | 2021-11-2 | https://stackoverflow.com/questions/69817464/pyyaml-error-could-not-determine-a-constructor-for-the-tag-vault | I am trying to read a YAML file that has the tag !vault in it. I get the error: could not determine a constructor for the tag '!vault' Upon reading a couple of blogs, I understood that I need to specify some constructors to resolve this issue, but I'm not clear on how to do it. import yaml from yaml.loader import SafeLoader with open('test.yml' ) as stream: try: inventory_info = yaml.safe_load(stream) except yaml.YAMLError as exc: print(exc) User = inventory_info['all']['children']['linux']['vars']['user'] key = inventory_info['all']['children']['linux']['vars']['key_file'] The YAML file I am using: all: children: rhel: hosts: 172.18.144.98 centos: hosts: 172.18.144.98 linux: children: rhel: centos: vars: user: "o9ansibleuser" key_file: "test.pem" ansible_password: !vault | $ANSIBLE_VAULT;2.1;AES256 3234567899353936376166353 | Either use the from_yaml utility function: from ansible.parsing.utils.yaml import from_yaml # inventory_info = yaml.safe_load(stream) # Change this inventory_info = from_yaml(stream) # to this Or add the constructor to yaml.SafeLoader: from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode def construct_vault_encrypted_unicode(loader, node): value = loader.construct_scalar(node) return AnsibleVaultEncryptedUnicode(value) yaml.SafeLoader.add_constructor(u'!vault', construct_vault_encrypted_unicode) inventory_info = yaml.safe_load(stream) | 7 | 4 |
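A quick usage check for either option in the answer above — the vaulted value is loaded as an `AnsibleVaultEncryptedUnicode` object rather than plain text (decrypting it would additionally require the vault secret), which you can confirm like this:

```python
vaulted = inventory_info['all']['children']['linux']['vars']['ansible_password']
print(type(vaulted))
# <class 'ansible.parsing.yaml.objects.AnsibleVaultEncryptedUnicode'>
```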
69,776,414 | 2021-10-30 | https://stackoverflow.com/questions/69776414/pytzusagewarning-the-zone-attribute-is-specific-to-pytzs-interface-please-mig | I am writing a simple function that sends messages based on a schedule using AsyncIOScheduler. scheduler = AsyncIOScheduler() scheduler.add_job(job, "cron", day_of_week="mon-fri", hour = "16") scheduler.start() It seems to work, but I always get the following message: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html PytzUsageWarning: The localize method is no longer necessary, as this time zone supports the fold attribute (PEP 495) I am not familiar with time in python at all, so I am not really sure what this message is asking me to do. I visited the link provided in the message, but the explanation there is too complicated for me. As I understand, I have to migrate to using PEP495, but how exactly can I do just that? | To set a PIP495 compatible timezone in APScheduler, set a parameter when instantiating the scheduler: scheduler = AsyncIOScheduler(timezone="Europe/Berlin") scheduler.add_job(job, "cron", day_of_week="mon-fri", hour = "16") scheduler.start() With flask-APScheduler (version 1.12.2), add the timezone to the configuration class: """Basic Flask Example from flask-apscheduler examples, file jobs.py""" from flask import Flask from flask_apscheduler import APScheduler class Config: """App configuration.""" JOBS = [ { "id": "job1", "func": "jobs:job1", "args": (1, 2), "trigger": "interval", "seconds": 10, } ] SCHEDULER_API_ENABLED = True SCHEDULER_TIMEZONE = "Europe/Berlin" # <========== add here def job1(var_one, var_two): print(str(var_one) + " " + str(var_two)) if __name__ == "__main__": app = Flask(__name__) app.config.from_object(Config()) scheduler = APScheduler() scheduler.init_app(app) scheduler.start() app.run() | 15 | 22 |
69,742,016 | 2021-10-27 | https://stackoverflow.com/questions/69742016/multiple-strategies-for-same-function-parameter-in-python-hypothesis | I am writing a simple test code in Python using the Hypothesis package. It there a way to use multiple strategies for the same function parameter? As an example, use integers() and floats() to test my values parameter without writing two separate test functions? from hypothesis import given from hypothesis.strategies import lists, integers, floats, sampled_from @given( integers() ,floats() ) def test_lt_operator_interval_bin_numerical(values): interval_bin = IntervalBin(10, 20, IntervalClosing.Both) assert (interval_bin < values) == (interval_bin.right < values) The above code doesn't work, but it represents what i would like to achieve. NB: I have already tried the trivial solution of creating two different tests with two different strategies: def _test_lt(values): interval_bin = IntervalBin(10, 20, IntervalClosing.Both) assert (interval_bin < values) == (interval_bin.right < values) test_lt_operator_interval_bin_int = given(integers())(_test_lt) test_lt_operator_interval_bin_float = given(floats())(_test_lt) However I was wondering if there was a nicer way to do this: when the nummber of strategies becomes larger, it is quite redundant as code. | In general if you need your values to be one of several things (like ints or floats in your example) then we can combine strategies for separate things into one with | operator (which operates similar to one for sets -- a union operator and works by invoking __or__ magic method): from hypothesis import given from hypothesis.strategies import integers, floats @given( integers() | floats() ) def test_lt_operator_interval_bin_numerical(values): interval_bin = IntervalBin(10, 20, IntervalClosing.Both) assert (interval_bin < values) == (interval_bin.right < values) or you can use strategies.one_of as @Zac Hatfield-Dodds mentioned which does pretty much the same thing: from hypothesis import given from hypothesis.strategies import integers, floats, one_of @given( one_of(integers(), floats()) ) def test_lt_operator_interval_bin_numerical(values): interval_bin = IntervalBin(10, 20, IntervalClosing.Both) assert (interval_bin < values) == (interval_bin.right < values) | 5 | 5 |
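One caveat worth noting for the combined strategy above: `floats()` also generates `nan` and infinities by default, which can make ordering comparisons behave in surprising ways. If that is not desired, the strategy can be restricted — a sketch of the same test with those values excluded:

```python
from hypothesis import given
from hypothesis.strategies import floats, integers

@given(integers() | floats(allow_nan=False, allow_infinity=False))
def test_lt_operator_interval_bin_numerical(values):
    interval_bin = IntervalBin(10, 20, IntervalClosing.Both)
    assert (interval_bin < values) == (interval_bin.right < values)
```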
69,812,523 | 2021-11-2 | https://stackoverflow.com/questions/69812523/how-to-specify-requirements-in-python-packages-metadata | Core metadata specification documents the metadata field Requires-External which seems to be for specifying system (non-python) dependencies. How do you actually specify this field though? This is what I've tried: . ├── mypackage │ └── __init__.py └── setup.py Contents of setup.py from setuptools import setup setup( name="mypackage", description="blah blah", url='https://example.org', version="0.1", packages=["mypackage"], requires_external=[ "C", "libpng (>=1.5)", 'make; sys_platform != "win32"', ], ) When I build this package, that metadata was not included Metadata-Version: 2.1 Name: mypackage Version: 0.1 Summary: blah blah Home-page: https://example.org License: UNKNOWN Platform: UNKNOWN UNKNOWN So what is the syntax to pass Requires-External to setuptools/distutils? Note: this question is not asking about Requires-Dist metadata. | So what is the syntax to pass Requires-External to setuptools/distutils? There is none by default, as neither distutils nor setuptools support the field. Also, requires_external keyword arg is not supported as well - it is silently ignored, just as any other unknown keyword arg. To add local support for requires_external kwarg, you need to provide your own distribution implementation. Minimal example (requires Python 3.9): from setuptools.dist import Distribution as DistributionOrig class Distribution(DistributionOrig): _DISTUTILS_UNSUPPORTED_METADATA = DistributionOrig._DISTUTILS_UNSUPPORTED_METADATA | {'requires_external': dict} You can now pass requires_external to setup() when paired with distclass: setup( ..., requires_external=[ 'C', 'libpng (>=1.5)', 'make; sys_platform != "win32"', ], distclass=Distribution, ) The next goal is to actually write the contents of requires_external to PKG-INFO. This is quite tricky to do via distribution metadata itself because setuptools patches the relevant methods (DistributionMetadata.read_pkg_file() and DistributionMetadata.write_pkg_file()) with own monolithic impls. The easiest way for this IMO is to modify PKG-INFO post factum via a custom egg_info impl: import email from pathlib import Path from setuptools.command.egg_info import egg_info as egg_info_orig class egg_info(egg_info_orig): def run(self): super().run() # PKG-INFO is now guaranteed to exist pkg_info_file = Path(self.egg_info, 'PKG-INFO') pkg_info = email.message_from_bytes(pkg_info_file.read_bytes()) for req in self.distribution.metadata.requires_external: pkg_info.add_header('Requires-External', req) pkg_info_file.write_bytes(pkg_info.as_bytes()) Pass your own egg_info impl via cmdclass argument in setup(): setup( ..., requires_external=[ 'C', 'libpng (>=1.5)', 'make; sys_platform != "win32"', ], distclass=Distribution, cmdclass={'egg_info': egg_info}, ) | 6 | 0 |
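To sanity-check the custom `egg_info` above, the generated PKG-INFO can be read back with the same `email` module — a short sketch (the egg-info directory name depends on your project name, so adjust it accordingly):

```python
import email
from pathlib import Path

# Run `python setup.py egg_info` first; the directory name follows the package name.
pkg_info = email.message_from_bytes(Path('mypackage.egg-info', 'PKG-INFO').read_bytes())
print(pkg_info.get_all('Requires-External'))
# Expected: ['C', 'libpng (>=1.5)', 'make; sys_platform != "win32"']
```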
69,786,993 | 2021-10-31 | https://stackoverflow.com/questions/69786993/tuning-xgboost-hyperparameters-with-randomizedsearchcv | I''m trying to use XGBoost for a particular dataset that contains around 500,000 observations and 10 features. I'm trying to do some hyperparameter tuning with RandomizedSeachCV, and the performance of the model with the best parameters is worse than the one of the model with the default parameters. Model with default parameters: model = XGBRegressor() model.fit(X_train,y_train["speed"]) y_predict_speed = model.predict(X_test) from sklearn.metrics import r2_score print("R2 score:", r2_score(y_test["speed"],y_predict_speed, multioutput='variance_weighted')) R2 score: 0.3540656307310167 Best model from random search: booster=['gbtree','gblinear'] base_score=[0.25,0.5,0.75,1] ## Hyper Parameter Optimization n_estimators = [100, 500, 900, 1100, 1500] max_depth = [2, 3, 5, 10, 15] booster=['gbtree','gblinear'] learning_rate=[0.05,0.1,0.15,0.20] min_child_weight=[1,2,3,4] # Define the grid of hyperparameters to search hyperparameter_grid = { 'n_estimators': n_estimators, 'max_depth':max_depth, 'learning_rate':learning_rate, 'min_child_weight':min_child_weight, 'booster':booster, 'base_score':base_score } # Set up the random search with 4-fold cross validation random_cv = RandomizedSearchCV(estimator=regressor, param_distributions=hyperparameter_grid, cv=5, n_iter=50, scoring = 'neg_mean_absolute_error',n_jobs = 4, verbose = 5, return_train_score = True, random_state=42) random_cv.fit(X_train,y_train["speed"]) random_cv.best_estimator_ XGBRegressor(base_score=0.5, booster='gblinear', colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, gamma=None, gpu_id=-1, importance_type='gain', interaction_constraints=None, learning_rate=0.15, max_delta_step=None, max_depth=15, min_child_weight=3, missing=nan, monotone_constraints=None, n_estimators=500, n_jobs=16, num_parallel_tree=None, random_state=0, reg_alpha=0, reg_lambda=0, scale_pos_weight=1, subsample=None, tree_method=None, validate_parameters=1, verbosity=None) Using the best model: regressor = XGBRegressor(base_score=0.5, booster='gblinear', colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, gamma=None, gpu_id=-1, importance_type='gain', interaction_constraints=None, learning_rate=0.15, max_delta_step=None, max_depth=15, min_child_weight=3, monotone_constraints=None, n_estimators=500, n_jobs=16, num_parallel_tree=None, random_state=0, reg_alpha=0, reg_lambda=0, scale_pos_weight=1, subsample=None, tree_method=None, validate_parameters=1, verbosity=None) regressor.fit(X_train,y_train["speed"]) y_pred = regressor.predict(X_test) from sklearn.metrics import r2_score print("R2 score:", r2_score(y_test["speed"],y_pred, multioutput='variance_weighted')) R2 score: 0.14258774171629718 As you can see after 3 hours of running the randomized search the accuracy actually drops. If I change linear to tree the value goes up to 0.65, so why is the randomized search not working? I'm also getting a warning with the following: This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases. Does anyone have a suggestion regarding this hyperparameter tuning method? | As stated in the XGBoost Docs Parameter tuning is a dark art in machine learning, the optimal parameters of a model can depend on many scenarios. 
You asked for suggestions for your specific scenario, so here are some of mine. Drop the dimensions booster from your hyperparameter search space. You probably want to go with the default booster 'gbtree'. If you are interested in the performance of a linear model you could just try linear or ridge regression, but don't bother with it during your XGBoost parameter tuning. Drop the dimension base_score from your hyperparameter search space. This should not have much of an effect with sufficiently many boosting iterations (see XGB parameter docs). Currently you have 3200 hyperparameter combinations in your grid. Expecting to find a good one by looking at 50 random ones might be a bit too optimistic. After dropping the booster and base_score dimensions you would be down to hyperparameter_grid = { 'n_estimators': [100, 500, 900, 1100, 1500], 'max_depth': [2, 3, 5, 10, 15], 'learning_rate': [0.05, 0.1, 0.15, 0.20], 'min_child_weight': [1, 2, 3, 4] } which has 400 possible combinations. For a first shot I would simplify this a bit more. For example you could try something like hyperparameter_grid = { 'n_estimators': [100, 400, 800], 'max_depth': [3, 6, 9], 'learning_rate': [0.05, 0.1, 0.20], 'min_child_weight': [1, 10, 100] } There are only 81 combinations left and some of the very expensive combinations (e.g. 1500 trees of depth 15) are removed. Of course I don't know your data, so maybe it is necessary to consider such large / complex models. For a regression task with squared loss min_child_weight is just the number of instances in a child (again see XGB parameter docs). Since you have 500000 observations, it will probably not make (much of) a difference wether 1, 2, 3 or 4 observations end up in a leaf. Hence, I am suggesting [1, 10, 100] here. Maybe the random search finds something better than the default parameters in this grid? An alternative strategy could be: Run cross validation for each combination of hyperparameter_grid = { 'max_depth': [3, 6, 9], 'min_child_weight': [1, 10, 100] } fixing the learning rate at some constant value (not to low, e.g. 0.15). For each setting use early stopping to determine an appropriate number of trees. This is possible using the early_stopping_rounds parameter of the xgboost.cv method. Afterwards you know a good combination of max_depth and min_child_weight (e.g. how complex do the base learners need to be for the given problem?) and also a good number of trees for this combination and the fixed learning rate. Fine tuning could then involve doing another hyperparameter search "close to" the current (max_depth, min_child_weight) solution and/or reducing the learning rate while increasing the number of trees. And lastly, as answer is getting a bit long, there are other alternatives to a random search if an exhaustive grid search is to expensive. E.g. you could look at halving grid search and sequential model based optimization. | 11 | 21 |
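A minimal sketch of the alternative strategy described at the end of the answer (fix the learning rate, let early stopping pick the number of trees) using `xgboost.cv`; the parameter values are illustrative, not tuned for this dataset, and `X_train`/`y_train` come from the question:

```python
import xgboost as xgb

dtrain = xgb.DMatrix(X_train, label=y_train["speed"])
params = {"objective": "reg:squarederror", "eta": 0.15,
          "max_depth": 6, "min_child_weight": 10}

cv_results = xgb.cv(params, dtrain, num_boost_round=2000, nfold=5,
                    metrics="rmse", early_stopping_rounds=50, seed=42)
best_num_trees = len(cv_results)  # rounds kept after early stopping
print(best_num_trees, cv_results["test-rmse-mean"].iloc[-1])
```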
69,785,596 | 2021-10-31 | https://stackoverflow.com/questions/69785596/sklearn-manifold-tsne-typeerror-ufunc-multiply-did-not-contain-a-loop-with-si | I have run the sklearn.manifold.TSNE example code from the sklearn documentation, but I got the error described in the questions' title. I have already tried updating my sklearn version to the latest one (by !pip install -U scikit-learn) (scikit-learn=1.0.1). However, the problem is still there. Does anyone know how to fix it? python = 3.7.12 sklearn= 1.0.1 Example code: import numpy as np from sklearn.manifold import TSNE X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) X_embedded = TSNE(n_components=2, learning_rate='auto', init='random').fit_transform(X) X_embedded.shape The error line happened in: X_embedded = TSNE(n_components=2, learning_rate='auto', init='random').fit_transform(X) Error message: UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32') | Delete learning_rate='auto' solved my problem. Thanks @FlaviaGiammarino comment!! | 27 | 38 |
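In other words, with the scikit-learn version from the question the example runs once the string value is removed — a sketch (an explicit small perplexity is also a safe choice on such a tiny toy array, and a float learning rate such as 200.0 works too):

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
# Drop learning_rate='auto' entirely (or pass a float such as 200.0 instead).
X_embedded = TSNE(n_components=2, init='random', perplexity=3).fit_transform(X)
print(X_embedded.shape)  # (4, 2)
```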
69,736,380 | 2021-10-27 | https://stackoverflow.com/questions/69736380/using-nested-asyncio-gather-inside-another-asyncio-gather | I have a class with various methods. I have a method in that class something like : class MyClass: async def master_method(self): tasks = [self.sub_method() for _ in range(10)] results = await asyncio.gather(*tasks) async def sub_method(self): subtasks = [self.my_task() for _ in range(10)] results = await asyncio.gather(*subtasks) async def my_task(self): return "task done" So the question here is: Are there any issues, advantages/disadvantages with using asyncio.gather() inside co-routines that are being called from another asyncio.gather() ? Any performance issues? Are all tasks in all levels treated with the same priority by asyncio loop? Would this give the same performance as if I have called all the co-routines with a single asyncio.gather() from the master_method? | TLDR: Using gather instead of returning tasks simplifies usage and makes code easier to maintain. While gather has some overhead, it is negligible for any practical application. Why gather? The point of gather to accumulate child tasks before exiting a coroutine is to delay the completion of the coroutine until its child tasks are done. This encapsulates the implementation, and ensures that the coroutine appears as one single entity "doing its thing". The alternative is to return the child tasks, and expect the caller to run them to completion. For simplicity, let's look at a single layer – corresponding to the intermediate sub_method – but in different variations. async def child(i): await asyncio.sleep(0.2) # some non-trivial payload print("child", i, "done") async def encapsulated() -> None: await asyncio.sleep(0.1) # some preparation work children = [child() for _ in range(10)] await asyncio.gather(*children) async def task_children() -> 'List[asyncio.Task]': await asyncio.sleep(0.1) # some preparation work children = [asyncio.create_task(child()) for _ in range(10)] return children async def coro_children() -> 'List[Awaitable[None]]': await asyncio.sleep(0.1) # some preparation work children = [child() for _ in range(10)] return children All of encapsulated, task_children and coro_children in some way encode that there are sub-tasks. This allows the caller to run them in such a way that the actual goal is "done" reliably. However, each variant differs in how much it does by itself and how much the caller has to do: The encapsulated is the "heaviest" variant: all children are run in Tasks and there is an additional gather. However, the caller is not exposed to any of this: await encapsulated() This guarantees that the functionality works as intended, and its implementation can freely be changed. The task_children is the intermediate variant: all children are run in Tasks. The caller can decide if and how to wait for completion: tasks = await task_children() await asyncio.gather(*tasks) # can add other tasks here as well This guarantees that the functionality starts as intended. Its completion relies on the caller having some knowledge, though. The coro_children is the "lightest" variant: nothing of the children is actually run. The caller is responsible for the entire lifetime: tasks = await coro_children() # children don't actually run yet! await asyncio.gather(*tasks) # can add other tasks here as well This completely relies on the caller to start and wait for the sub-tasks. Using the encapsulated pattern is a safe default – it ensures that the coroutine "just works". 
Notably, a coroutine using an internal gather still appears like any other coroutine. gather speed? The gather utility a) ensures that its arguments are run as Tasks and b) provides a Future that triggers once the tasks are done. Since gather is usually used when one would run the arguments as Tasks anyway, there is no additional overhead from this; likewise, these are regular Tasks and have the same performance/priority characteristics¹ as everything else. The only overhead is from the wrapping Future; this takes care of bookkeeping (ensuring the arguments are tasks) and then only waits, i.e. does nothing. On my machine, measuring the overhead shows that it takes on average about twice as long as running a no-op Task. This by itself should already be negligible for any real-world task. In addition, the pattern of gathering child tasks inherently means that there is a tree of gather nodes. Thus the number of gather nodes is usually much lower than the number of tasks. For example, for the case of 10 tasks per gather, a total of only 11 gathers is needed to handle a total of 100 tasks. master_method 0 sub_method 0 1 2 3 4 5 ... my_task 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 ... ¹Which is to say, none. asyncio currently has no concept of Task priorities. | 10 | 9 |
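For completeness, a tiny driver for the class from the question, illustrating the answer's point that the nested gathers stay invisible to the caller — only the top-level coroutine needs to be awaited:

```python
import asyncio

# MyClass is the class from the question; one awaited call drives all 100 nested tasks.
asyncio.run(MyClass().master_method())
```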