question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
60,421,630 | 2020-2-26 | https://stackoverflow.com/questions/60421630/pytorch-tensor-save-produces-huge-files-for-small-tensors-from-mnist | I'm working with the MNIST dataset from a Kaggle challenge and am having trouble preprocessing the data. Furthermore, I don't know what the best practices are and was wondering if you could advise me on that. Disclaimer: I can't just use torchvision.datasets.mnist because I need to use Kaggle's data for training and submission. In this tutorial, it was advised to create a Dataset object loading .pt tensors from files, to fully utilize the GPU. In order to achieve that, I needed to load the csv data provided by Kaggle and save it as .pt files: import pandas as pd import torch import numpy as np # import data digits_train = pd.read_csv('data/train.csv') train_tensor = torch.tensor(digits_train.drop('label', axis=1).to_numpy(), dtype=torch.int) labels_tensor = torch.tensor(digits_train['label'].to_numpy()) for i in range(train_tensor.shape[0]): torch.save(train_tensor[i], "data/train-" + str(i) + ".pt") Each train_tensor[i].shape is torch.Size([1, 784]) However, each such .pt file has a size of about 130 MB. A tensor of the same size, with randomly generated integers, has a size of 6.6 kB. Why are these tensors so huge, and how can I reduce their size? The dataset is 42,000 samples. Should I even bother with batching this data? Should I bother with saving tensors to separate files, rather than loading them all into RAM and then slicing into batches? What is the most optimal approach here? | As explained in this discussion, torch.save() saves the whole tensor, not just the slice. You need to explicitly copy the data using clone(). Don't worry, at runtime the data is only allocated once unless you explicitly create copies. As general advice: If the data easily fits into your memory, just load it at once. For MNIST with 130 MB that's certainly the case. However, I would still batch the data because it converges faster. Look up the advantages of SGD for more details. | 9 | 18
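A minimal sketch of the clone() fix from the accepted answer (mine, not the answerer's; the shapes mirror the question's 42,000 × 784 data):

```python
import torch

big = torch.randint(0, 255, (42_000, 784), dtype=torch.int)

# torch.save(big[0], "row0.pt")         # saves a view -> serializes the whole underlying storage
torch.save(big[0].clone(), "row0.pt")   # clone() copies just the slice, so the file stays small
```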
60,421,663 | 2020-2-26 | https://stackoverflow.com/questions/60421663/is-python-3-semantically-versioned-and-forwards-compatible | I'm looking at some software that is wanting to bring in Python 3.6 for use in an environment where 3.5 is the standard. Reading up on Python's documentation I can't find anything about whether: 3.5 is representative of a semantic version number 3.6 would represent a forwards compatible upgrade (ie: code written for a 3.5 runtime is guaranteed to work in a 3.6 runtime) The fact that this page about porting to 3.7 exists makes me think strongly no but I can't see official docs on what the version numbers mean (if anything, ala Linux kernel versioning) In the more general sense - is there a PEP around compatibility standards within the 3.X release stream? | The short answer is "No", the long answer is "They strive for something close to it". As a rule, micro versions match semantic versioning rules; they're not supposed to break anything or add features, just fix bugs. This isn't always the case (e.g. 3.5.1 broke vars() on a namedtuple, because it caused a bug that was worse than the break when it came up), but it's very rare for code (especially Python level stuff, as opposed to C extensions) to break across a micro boundary. Minor versions mostly "add features", but they will also make backwards incompatible changes with prior warning. For example, async and await became keywords in Python 3.7, which meant code using them as variable names broke, but with warnings enabled, you would have seen a DeprecationWarning in 3.6. Many syntax changes are initially introduced as optional imports from the special __future__ module, with documented timelines for becoming the default behavior. None of the changes made in minor releases are broad changes; I doubt any individual deprecation or syntax change has affected even 1% of existing source code, but it does happen. If you've got a hundred third party dependencies, and you're jumping a minor version or two, there is a non-trivial chance that one of them will be broken by the change (example: pika prior to 0.12 used async as a variable name, and broke on Python 3.7; they released new versions that fixed the bug, but of course, moving from 0.11 and lower to 0.12 and higher changed their own API in ways that might break your code). Major versions are roughly as you'd expect; backwards incompatible changes are expected/allowed (though they're generally not made frivolously; the bigger the change, the bigger the benefit). Point is, it's close to semantic versioning, but in the interests of not having major releases every few years, while also not letting the language stagnate due to strict compatibility constraints, minor releases are allowed to break small amounts of existing code as long as there is warning (typically in the form of actual warnings from code using deprecated behavior, notes on the What's New documentation, and sometimes __future__ support to ease the migration path). This is all officially documented (with slightly less detail) in their Development Cycle documentation: To clarify terminology, Python uses a major.minor.micro nomenclature for production-ready releases. So for Python 3.1.2 final, that is a major version of 3, a minor version of 1, and a micro version of 2. 
new major versions are exceptional; they only come when strongly incompatible changes are deemed necessary, and are planned very long in advance; new minor versions are feature releases; they get released annually, from the current in-development branch; new micro versions are bugfix releases; they get released roughly every 2 months; they are prepared in maintenance branches. | 14 | 16 |
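As a concrete illustration of the minor-version breakage described in the answer above (this example is mine, not the answerer's): the following assignment only triggered a DeprecationWarning on Python 3.6 with warnings enabled, but is a hard SyntaxError on 3.7+, because async became a keyword:

```python
async = 42  # DeprecationWarning on Python 3.6 (with warnings enabled); SyntaxError on 3.7+
```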
60,418,497 | 2020-2-26 | https://stackoverflow.com/questions/60418497/how-do-i-use-kwargs-in-python-3-class-init-function | I am writing a class in Python 3 that I want to be able to take various keyword arguments from the user and to store these values for later use in class methods. An example code would be something like this: class MathematicalModel: def __init__(self, var1, var2, var3, **kwargs): self.var1 = var1 self.var2 = var2 self.var3 = var3 self.var4 = kwarg1 self.var5 = kwarg2 self.var6 = kwarg6 def calculation1(self): x = self.var1 + self.var2 + self.var3 return x def calculation2(self): y = self.var1 * self.var2 * var3 return y class MathematicalModelExtended(MathematicalModel): def __init__(self, var1, var2, var3, **kwargs): super.__init__(self, var1, var2, var3, **kwargs) def calculation1(self): '''Overrides calculation1 method from parent DoThis''' x = (self.var1 + self.var2 + self.var3) / self.kwarg1 return x def calculation2(self): '''Overrides calculation2 method from parent DoThis''' y = (self.var1 * self.var2 * self.var3) / (self.kwarg1 + self.kwarg2) return y a = MathematicalModel(1, 2, 3) b = MathematicalModelExtended(1, 2, 3, var4 = 4, var5 = 5, var6 = 6) However I am not sure how this works, for a few reasons: a) What if the user doesn't put an argument for b or c, or even a for that matter? Then the code will throw an error, so I am not sure how to initialize these attributes in that case. b) How do I access the values associated with the keywords, when I don't know what keyword argument the user passed beforehand? I plan to use the variables in mathematical formulas. Some variables (not included in kwargs) will be used in every formula, whereas others (the ones in kwargs) will only be used in other formulas. I plan to wrap MathematicalModel in another class like-so MathematicalModelExtended(MathematicalModel) in order to achieve that. Thank you! | General kwargs ideas When you load variables with self.var = value, it adds it to an internal dictionary that can be accessed with self.__dict__. class Foo1: def __init__(self, **kwargs): self.a = kwargs['a'] self.b = kwargs['b'] foo1 = Foo1(a=1, b=2) print(foo1.a) # 1 print(foo1.b) # 2 print(foo1.__dict__) # {'a': 1, 'b': 2} If you want to allow for arbitrary arguments, you can leverage the fact that kwargs is also a dictionary and use the update() function. class Foo2: def __init__(self, **kwargs): self.__dict__.update(kwargs) foo2 = Foo2(some_random_variable=1, whatever_the_user_supplies=2) print(foo2.some_random_variable) # 1 print(foo2.whatever_the_user_supplies) # 2 print(foo2.__dict__) # {'some_random_variable': 1, 'whatever_the_user_supplies': 2} This will prevent you from getting an error when you try to store a value that isn't there class Foo3: def __init__(self, **kwargs): self.a = kwargs['a'] self.b = kwargs['b'] foo3 = Foo3(a=1) # KeyError: 'b' If you wanted to ensure that variables a or b were set in the class regardless of what the user supplied, you could create class attributes or use kwargs.get() class Foo4: def __init__(self, **kwargs): self.a = kwargs.get('a', None) self.b = kwargs.get('b', None) foo4 = Foo4(a=1) print(foo4.a) # 1 print(foo4.b) # None print(foo4.__dict__) # {'a': 1, 'b': None} However, with this method, the variables belong to the class rather than the instance. This is why you see foo5.b return a string, but it's not in foo5.__dict__. 
class Foo5: a = 'Initial Value for A' b = 'Initial Value for B' def __init__(self, **kwargs): self.__dict__.update(kwargs) foo5 = Foo5(a=1) print(foo5.a) # 1 print(foo5.b) # Initial Value for B print(foo5.__dict__) # {'a': 1} If you are giving the users the freedom to specify any kwargs they want, you can iterate through the __dict__ in a function. class Foo6: def __init__(self, **kwargs): self.__dict__.update(kwargs) def do_something(self): for k, v in self.__dict__.items(): print(f"{k} -> {v}") foo6 = Foo6(some_random_variable=1, whatever_the_user_supplies=2) foo6.do_something() # some_random_variable -> 1 # whatever_the_user_supplies -> 2 However, depending on whatever else you have going on in your class, you might end up with a lot more instance attributes than the user supplied. Therefore, it might be good to have the user supply a dictionary as an argument. class Foo7: def __init__(self, user_vars): self.user_vars = user_vars def do_something(self): for k, v in self.user_vars.items(): print(f"{k} -> {v}") foo7 = Foo7({'some_random_variable': 1, 'whatever_the_user_supplies': 2}) foo7.do_something() # some_random_variable -> 1 # whatever_the_user_supplies -> 2 Addressing your code With your updated code, I would suggest using the self.__dict__.update(kwargs) method. Then you can either raise an error when you don't encounter a variable you're relying on (the option1 method) or you can have a default value for the variable in case it's not defined (the option2 method) class MathematicalModel: def __init__(self, var1, var2, var3, **kwargs): self.var1 = var1 self.var2 = var2 self.var3 = var3 self.__dict__.update(kwargs) # Store all the extra variables class MathematicalModelExtended(MathematicalModel): def __init__(self, var1, var2, var3, **kwargs): super().__init__(var1, var2, var3, **kwargs) def option1(self): # Trap error if you need var4 to be specified if 'var4' not in self.__dict__: raise ValueError("Please provide value for var4") x = (self.var1 + self.var2 + self.var3) / self.var4 return x def option2(self): # Use .get() to provide a default value when the user does not provide it. _var4 = self.__dict__.get('var4', 1) x = (self.var1 + self.var2 + self.var3) / _var4 return x a = MathematicalModel(1, 2, 3) b = MathematicalModelExtended(1, 2, 3, var4=4, var5=5, var6=6) print(b.option1()) # 1.5 print(b.option2()) # 1.5 Granted, if MathematicalModel will never use anything other than var1, var2, and var3, there's no point in passing the kwargs. class MathematicalModel: def __init__(self, var1, var2, var3, **kwargs): self.var1 = var1 self.var2 = var2 self.var3 = var3 class MathematicalModelExtended(MathematicalModel): def __init__(self, var1, var2, var3, **kwargs): super().__init__(var1, var2, var3) self.__dict__.update(kwargs) def option1(self): # Trap error if you need var4 to be specified if 'var4' not in self.__dict__: raise ValueError("Please provide value for var4") x = (self.var1 + self.var2 + self.var3) / self.var4 return x def option2(self): # Use .get() to provide a default value when the user does not provide it. _var4 = self.__dict__.get('var4', 1) x = (self.var1 + self.var2 + self.var3) / _var4 return x a = MathematicalModel(1, 2, 3) b = MathematicalModelExtended(1, 2, 3, var4=4, var5=5, var6=6) print(b.option1()) # 1.5 print(b.option2()) # 1.5 | 20 | 53
60,410,178 | 2020-2-26 | https://stackoverflow.com/questions/60410178/how-to-invoke-python-function-as-a-callback-inside-c-thread-using-pybind11 | I designed a C++ system that invokes user defined callbacks from procedure running in a separate thread. Simplified system.hpp looks like this: #pragma once #include <atomic> #include <chrono> #include <functional> #include <thread> class System { public: using Callback = std::function<void(int)>; System(): t_(), cb_(), stop_(true) {} ~System() { stop(); } bool start() { if (t_.joinable()) return false; stop_ = false; t_ = std::thread([this]() { while (!stop_) { std::this_thread::sleep_for(std::chrono::milliseconds(100)); if (cb_) cb_(1234); } }); return true; } bool stop() { if (!t_.joinable()) return false; stop_ = true; t_.join(); return true; } bool registerCallback(Callback cb) { if (t_.joinable()) return false; cb_ = cb; return true; } private: std::thread t_; Callback cb_; std::atomic_bool stop_; }; It works just fine and can be tested with this short example main.cpp: #include <iostream> #include "system.hpp" int g_counter = 0; void foo(int i) { std::cout << i << std::endl; g_counter++; } int main() { System s; s.registerCallback(foo); s.start(); while (g_counter < 3) { std::this_thread::sleep_for(std::chrono::milliseconds(1)); } s.stop(); return 0; } which will output 1234 a few times and then will stop. However I encountered a problem trying to create python bindings for my System. If I register a python function as a callback, my program will deadlock after calling System::stop. I investigated the topic a bit and it seems that I face the issue with GIL. Reproducible example: binding.cpp: #include "pybind11/functional.h" #include "pybind11/pybind11.h" #include "system.hpp" namespace py = pybind11; PYBIND11_MODULE(mysystembinding, m) { py::class_<System>(m, "System") .def(py::init<>()) .def("start", &System::start) .def("stop", &System::stop) .def("registerCallback", &System::registerCallback); } python script: #!/usr/bin/env python import mysystembinding import time g_counter = 0 def foo(i): global g_counter print(i) g_counter = g_counter + 1 s = mysystembinding.System() s.registerCallback(foo) s.start() while g_counter < 3: time.sleep(1) s.stop() I have read the pybind11 docs section about the possibility to acquire or release GIL on the C++ side. However I did not manage to get rid of the deadlock that occurs in my case: PYBIND11_MODULE(mysystembinding, m) { py::class_<System>(m, "System") .def(py::init<>()) .def("start", &System::start) .def("stop", &System::stop) .def("registerCallback", [](System* s, System::Callback cb) { s->registerCallback([cb](int i) { // py::gil_scoped_acquire acquire; // py::gil_scoped_release release; cb(i); }); }); } If I call py::gil_scoped_acquire acquire; before calling the callback, deadlock occurs anyway. If I call py::gil_scoped_release release; before calling the callback, I get Fatal Python error: PyEval_SaveThread: NULL tstate What should I do to register python functions as callbacks and avoid deadlocks? 
| Thanks to this discussion and many other resources (1, 2, 3) I figured out that guarding the functions that start and join the C++ thread with gil_scoped_release seems to solve the problem: PYBIND11_MODULE(mysystembinding, m) { py::class_<System>(m, "System") .def(py::init<>()) .def("start", &System::start, py::call_guard<py::gil_scoped_release>()) .def("stop", &System::stop, py::call_guard<py::gil_scoped_release>()) .def("registerCallback", &System::registerCallback); } Apparently deadlocks occurred because python was holding a lock while invoking a binding responsible for C++ thread manipulation. I am still not really sure if my reasoning is correct, so I would appreciate any expert's comments. | 9 | 7 |
60,410,625 | 2020-2-26 | https://stackoverflow.com/questions/60410625/get-django-allowed-hosts-env-variable-formated-right-in-settings-py | I'm facing the following issue. My .env file contains a line like: export SERVERNAMES="localhost domain1 domain2 domain3" <- exactly this kind of format. But the variable called SERVERNAMES is used multiple times at multiple locations of my deployment, so I can't declare it as a list of strings that settings.py can use immediately. Besides, I don't like setting multiple .env variables for basically the same thing. So my question is how I can format my ALLOWED_HOSTS to be compatible with my settings.py. Something like this does not seem to work: ALLOWED_HOSTS = os.environ.get('SERVERNAMES').split(',') Thanks and kind regards | Simply split your SERVERNAMES variable using space as the separator instead of comma: ALLOWED_HOSTS = os.environ.get('SERVERNAMES').split(' ') | 12 | 21
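A quick sketch of what that produces (note that the bare .split(), with no argument, is a slightly more robust alternative, since it collapses any run of whitespace):

```python
import os

os.environ["SERVERNAMES"] = "localhost domain1 domain2 domain3"  # as exported in the .env file

ALLOWED_HOSTS = os.environ.get("SERVERNAMES", "").split()
print(ALLOWED_HOSTS)  # ['localhost', 'domain1', 'domain2', 'domain3']
```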
60,345,503 | 2020-2-21 | https://stackoverflow.com/questions/60345503/pandas-parsererror-error-tokenizing-data-c-error-eof-inside-string | I have data that is over 400,000 lines long. When running this code: f=pd.read_csv(filename,error_bad_lines=False) I get the following error: pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 454751 My data by the end of the file looks like this: BTC 9948 8718 1.57E+12 ASK BTC 52 8718 1.57E+12 ASK BTC 120 8718 1.57E+12 ASK BTC 200 8718 1.57E+12 ASK BTC 150 8718 1.57E+12 ASK BTC 50 8718 1.57E+12 ASK BTC 10 8718 1.57E+12 ASK BTC 57 8718 1.57E+12 ASK BTC 50 8718 1.57E+12 ASK BTC 50191 8718 Line 454751 is this one: BTC 50 8718 1.57E+12 ASK I tried running error_bad_lines=False as seen above but that still doesn't work. I also searched for quotes in my file but I do not have any. | Changing the Parser engine from C to Python should solve your problem. Use the following line to read your csv: f = pd.read_csv(filename, error_bad_lines=False, engine="python") From the read_csv documentation: engine : {'c', 'python'}, optional Parser engine to use. The C engine is faster while the python engine is currently more feature-complete. | 16 | 28
60,312,374 | 2020-2-20 | https://stackoverflow.com/questions/60312374/what-are-all-these-deprecated-loop-parameters-in-asyncio | A lot of the functions in asyncio have deprecated loop parameters, scheduled to be removed in Python 3.10. Examples include as_completed(), sleep(), and wait(). I'm looking for some historical context on these parameters and their removal. What problems did loop solve? Why would one have used it in the first place? What was wrong with loop? Why is it being removed en masse? What replaces loop, now that it's gone? | What problems did loop solve? Why would one have used it in the first place? Prior to Python 3.6, asyncio.get_event_loop() was not guaranteed to return the event loop currently running when called from an asyncio coroutine or callback. It would return whatever event loop was previously set using set_event_loop(some_loop), or the one automatically created by asyncio. But sync code could easily create a different loop with another_loop = asyncio.new_event_loop() and spin it up using another_loop.run_until_complete(some_coroutine()). In this scenario, get_event_loop() called inside some_coroutine and the coroutines it awaits would return the global-default some_loop rather than another_loop which it actually runs in. This kind of thing wouldn't occur when using asyncio casually, but it had to be accounted for by async libraries which couldn't assume that they were running under the default event loop. (For example, in tests or in some usages involving threads, one might want to spin up an event loop without disturbing the global setting with set_event_loop.) The libraries would offer the explicit loop argument where you'd pass another_loop in the above case, and which you'd use whenever the running loop differed from the loop set up with asyncio.set_event_loop(). This issue would be fixed in Python 3.6 and 3.5.3, where get_event_loop() was modified to reliably return the running loop if called from inside one, returning another_loop in the above scenario. Python 3.7 would additionally introduce get_running_loop() which completely ignores the global setting and always returns the currently running loop, raising an exception if not inside one. See this thread for the original discussion. Once get_event_loop() became reliable, another problem was that of performance. Since the event loop was needed for some very frequently used calls, most notably call_soon, it was simply more efficient to pass around and cache the loop object. Asyncio itself did that, and many libraries followed suit. Eventually get_event_loop() was accelerated in C and was no longer a bottleneck. These two changes made the loop arguments redundant. What was wrong with loop? Why is it being removed en masse? As any other redundancy, it complicates the API and opens up possibilities for errors. Async code should almost never just randomly communicate with a different loop, and now that get_event_loop() is both correct and fast, there is no reason not to use it. Also, passing the loop through all the layers of abstraction of a typical application is simply tedious. With async/await becoming mainstream in other languages, it has become clear that manually propagating a global object is not ergonomic and should not be required from programmers. What replaces loop, now that it's gone? Just use get_event_loop() to get the loop when you need it. Alternatively, you can use get_running_loop() to assert that a loop is running. 
The need for accessing the event loop is somewhat reduced in Python 3.7, as some functions that were previously only available as methods on the loop, such as create_task, are now available as stand-alone functions. | 40 | 49 |
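To make the replacement concrete, a small sketch of the modern pattern on Python 3.7+ (my example, not from the original answer):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()  # always the loop this coroutine actually runs in
    loop.call_soon(print, "scheduled without threading a loop argument around")
    await asyncio.sleep(0)             # no deprecated loop= parameter needed

asyncio.run(main())
```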
60,330,837 | 2020-2-21 | https://stackoverflow.com/questions/60330837/jupyter-server-not-started-no-kernel-in-vs-code | I am trying to use Jupyter notebooks from VS Code and installed the Jupyter notebook extension, and I am using the (base) conda environment for execution. Then this happened: Error: Jupyter cannot be started. Error attempting to locate jupyter: at A.startServer (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:784356) at async A.ensureServerAndNotebookImpl (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:783811) at async A.ensureServerAndNotebook (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:783612) at async A.submitCode (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:780564) at async A.reexecuteCell (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:75:879318) How can I resolve this issue? | I had exactly the same problem when I installed Visual Studio Code and tried to run some Python code from a jupyter notebook on my fresh Ubuntu 18.04. How I solved it: Make sure you have installed the Jupyter Extension in VS Code. (for those who don't read the SO question :)) Press Command+Shift+P to open a new command palette Type >Python: Select Interpreter to start the jupyter notebook server Open the notebook again And it worked fine. Hope it works for you. | 54 | 81
60,370,869 | 2020-2-24 | https://stackoverflow.com/questions/60370869/print-underscore-separated-integer | Since Python 3.6, you can use underscores to separate the digits of an integer. For example x = 1_000_000 print(x) #1000000 This feature was added to make numbers with many digits easier to read, and I found it very useful. But when you print the number, you always get it without the separators. Is there a way to print the number with its digits separated by underscores? P.S. I want the output as an integer, not as a string. Not "1_000_000" but 1_000_000 | Try using this: >>> x = 1_000_000 >>> print(f"{x:_}") 1_000_000 Here are details Another way would be to use format explicitly: >>> x = 1_000_000 >>> print(format(x, '_d')) 1_000_000 | 11 | 33
60,345,426 | 2020-2-21 | https://stackoverflow.com/questions/60345426/json-to-protobuf-in-python | Hey I know there is a solution for this in Java, I'm curious to know if anyone knows of a Python 3 solution for converting a JSON object or file into protobuf format. I would accept either or as converting to an object is trivial. Searching the stackoverflow site, I only found examples of protobuf->json, but not the other way around. There is one extremely old repo that may do this but it is in Python 2 and our pipeline is Python 3. Any help is as always, appreciated. | The library you're looking for is google.protobuf.json_format. You can install it with the directions in the README here. The library is compatible with Python >= 2.7. Example usage: Given a protobuf message like this: message Thing { string first = 1; bool second = 2; int32 third = 3; } You can go from Python dict or JSON string to protobuf like: import json from google.protobuf.json_format import Parse, ParseDict d = { "first": "a string", "second": True, "third": 123456789 } message = ParseDict(d, Thing()) # or message = Parse(json.dumps(d), Thing()) print(message.first) # "a string" print(message.second) # True print(message.third) # 123456789 or from protobuf to Python dict or JSON string: from google.protobuf.json_format import MessageToDict, MessageToJson message_as_dict = MessageToDict(message) message_as_dict['first'] # == 'a string' message_as_dict['second'] # == True message_as_dict['third'] # == 123456789 # or message_as_json_str = MessageToJson(message) The documentation for the json_format module is here. | 26 | 52 |
60,368,298 | 2020-2-24 | https://stackoverflow.com/questions/60368298/could-not-load-dynamic-library-libnvinfer-so-6 | I am trying to normally import the TensorFlow python package, but I get the following error: Here is the text from the above terminal image: 2020-02-23 19:01:06.163940: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory 2020-02-23 19:01:06.164019: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory 2020-02-23 19:01:06.164030: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. <module 'tensorflow_core._api.v2.version' from '/home/saman/miniconda3/envs/testconda/lib/python3.7/site-packages/tensorflow_core/_api/v2/version/__init__.py' | This is a warning, not an error. You can still use TensorFlow. The shared libraries libnvinfer and libnvinfer_plugin are optional and required only if you are using nvidia's TensorRT capabilities. To suppress this and all other warnings, set the environment variable TF_CPP_MIN_LOG_LEVEL="2". | 51 | 53 |
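Note that the suppression the answer mentions has to happen before TensorFlow is imported, e.g. (a sketch, not from the original answer):

```python
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # hide INFO and WARNING messages from the C++ backend

import tensorflow as tf  # the libnvinfer warnings are no longer printed
```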
60,300,644 | 2020-2-19 | https://stackoverflow.com/questions/60300644/python-image-processing-on-captcha-how-to-remove-noise | I am very new to image processing, and what I'm trying to do is clear the noise from captchas; For captchas, I have different types of them: For the first one, what I did is: Firstly, I converted every pixel that is not black to black. Then, I found a pattern that is a noise from the image and deleted it. For the first captcha, it was easy to clear it and I found the text with tesseract. But I am looking for a solution for the second and the third. How should this go? I mean, what are the possible methods to clear them? This is how I delete patterns: def delete(searcher,h2,w2): h = h2 w = w2 search = searcher search = search.convert("RGBA") herear = np.asarray(search) bigar = np.asarray(imgCropped) hereary, herearx = herear.shape[:2] bigary, bigarx = bigar.shape[:2] stopx = bigarx - herearx + 1 stopy = bigary - hereary + 1 pix = imgCropped.load() for x in range(0, stopx): for y in range(0, stopy): x2 = x + herearx y2 = y + hereary pic = bigar[y:y2, x:x2] test = (pic == herear) if test.all(): for q in range(h): for k in range(w): pix[x+k,y+q] = (255,255,255,255) Sorry for the variable names, I was just testing the function. Thanks.. | Here is my solution. Firstly, I got the background pattern (edited by hand in Paint). From: After that, I created a blank image to fill it with the differences between the pattern and the image. img = Image.open("x.png").convert("RGBA") pattern = Image.open("y.png").convert("RGBA") pixels = img.load() pixelsPattern = pattern.load() new = Image.new("RGBA", (150, 50)) pixelNew = new.load() for i in range(img.size[0]): for j in range(img.size[1]): if(pixels[i,j] != pixelsPattern[i,j]): pixelNew[i,j] = pixels[i,j] new.save("differences.png") Here are the differences... And finally, I added blur and cleared the bits which are not black. Result: With pytesseract the result is 2041; it is wrong for this image, but the general success rate is around 60%. | 8 | 2
60,303,795 | 2020-2-19 | https://stackoverflow.com/questions/60303795/why-i-can-sometimes-use-functions-from-nested-modules-without-importing-the-whol | I was wondering what's the difference between these two cases? Is the inner structure of the modules somehow different? So why this one works: >>> import numpy >>> numpy.random.RandomState <class 'numpy.random.mtrand.RandomState'> But this one doesn't work until I import the nested module also: >>> import tkinter >>> tkinter.ttk.Spinbox Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'tkinter' has no attribute 'ttk' >>> import tkinter.ttk >>> tkinter.ttk.Spinbox <class 'tkinter.ttk.Spinbox'> I think that this must be something to do with the __init__.py files in each module, but a concrete example of an implementation would be helpful. | This kind of import behavior can be achieved by importing the deep class in the respective __init__.py files. Here's a quick demonstration of a project structure that works pretty much exactly like the real numpy and tkinter packages do: . ├── main.py ├── numpy │ ├── __init__.py │ └── random │ ├── __init__.py │ └── mtrand.py └── tkinter ├── __init__.py └── ttk.py numpy/__init__.py: from .random import RandomState numpy/random/__init__.py: from .mtrand import RandomState numpy/random/mtrand.py: class RandomState: pass tkinter/__init__.py: (left empty) tkinter/ttk.py: class Spinbox: pass main.py: import numpy import tkinter print(numpy.random.RandomState) try: print(tkinter.ttk.Spinbox) except AttributeError as e: print("Error:", e) import tkinter.ttk print(tkinter.ttk.Spinbox) Output: <class 'numpy.random.mtrand.RandomState'> Error: module 'tkinter' has no attribute 'ttk' <class 'tkinter.ttk.Spinbox'> | 8 | 4
60,347,349 | 2020-2-21 | https://stackoverflow.com/questions/60347349/attributeerror-tensor-object-has-no-attribute-numpy-in-tensorflow-2-1 | I am trying to convert the shape property of a Tensor in Tensorflow 2.1 and I get this error: AttributeError: 'Tensor' object has no attribute 'numpy' I already checked that the output of tf.executing_eagerly() is True. A bit of context: I load a tf.data.Dataset from TFRecords, then I apply a map. The mapping function is trying to convert the shape property of one of the dataset sample Tensors to numpy: def _parse_and_decode(serialized_example): """ parse and decode each image """ features = tf.io.parse_single_example( serialized_example, features={ 'encoded_image': tf.io.FixedLenFeature([], tf.string), 'kp_flat': tf.io.VarLenFeature(tf.int64), 'kp_shape': tf.io.FixedLenFeature([3], tf.int64), } ) image = tf.io.decode_png(features['encoded_image'], dtype=tf.uint8) image = tf.cast(image, tf.float32) kp_shape = features['kp_shape'] kp_flat = tf.sparse.to_dense(features['kp_flat']) kp = tf.reshape(kp_flat, kp_shape) return image, kp def read_tfrecords(records_dir, batch_size=1): # Read dataset from tfrecords tfrecords_files = glob.glob(os.path.join(records_dir, '*')) dataset = tf.data.TFRecordDataset(tfrecords_files) dataset = dataset.map(_parse_and_decode, num_parallel_calls=batch_size) return dataset def transform(img, labels): img_shape = img.shape # type: <class 'tensorflow.python.framework.ops.Tensor'> img_shape = img_shape.numpy() # <-- Throws the error # ... dataset = read_tfrecords(records_dir) This throws the error: dataset.map(transform, num_parallel_calls=1) While this perfectly works: for img, labels in dataset.take(1): print(img.shape.numpy()) Edit: trying to access img.numpy() instead of img.shape.numpy() results in the same behavior in the transformer and the code just above. I checked the type of img_shape and it is <class 'tensorflow.python.framework.ops.Tensor'>. Has anyone solved this sort of issue in new versions of Tensorflow? | The problem in your code is that you cannot use .numpy() inside functions that are mapped onto tf.data.Datasets, because .numpy() is Python code, not pure TensorFlow code. When you use a function like my_dataset.map(my_function), you can only use tf.* functions inside your my_function function. This is not a bug of TensorFlow 2.x versions, but rather a consequence of how static graphs are generated behind the scenes for performance purposes. If you want to use custom Python code inside a function which you map on your dataset, you have to use tf.py_function(), docs: https://www.tensorflow.org/api_docs/python/tf/py_function. There is really no other way to mix Python code and TensorFlow code when mapping on a dataset. You can also consult this question for further information; it's the exact question that I asked a couple of months ago: Is there an alternative to tf.py_function() for custom Python code? | 14 | 30
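A hedged sketch of the tf.py_function workaround applied to the question's transform (my example, not the answerer's; the Tout dtypes are assumptions about the dataset's element types):

```python
import tensorflow as tf

def transform_eager(img, labels):
    # Inside tf.py_function the arguments are eager tensors, so .numpy() works here.
    img_np = img.numpy()
    # ... arbitrary Python/NumPy processing of img_np goes here ...
    return img_np, labels

def transform(img, labels):
    img, labels = tf.py_function(
        transform_eager, inp=[img, labels], Tout=[tf.float32, tf.int64])
    return img, labels

# dataset = dataset.map(transform)  # dataset as returned by read_tfrecords()
```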
60,369,047 | 2020-2-24 | https://stackoverflow.com/questions/60369047/pytest-using-parametized-fixture-vs-pytest-mark-parametrize | I'm writing some unit tests using Pytest and came across two ways to parameterize test inputs. One is using parameterized fixtures and the other is using the pytest.mark.parametrize method. The two examples I have are: # method 1 def tokens(): yield from ["+", "*", "?"] @pytest.mark.parametrize("token", tokens()) def test_stuff(token): assert stuff and # method 2 @pytest.fixture(params=["+", "*", "?"]) def token(request): return request.param def test_stuff(token): assert stuff Both have different advantages and disadvantages from what I can tell: Method 1 Advantages Supports multiple parameters Supports lazy evaluation Disadvantages More boilerplate code when used for multiple test methods requires explicit parameter mapping for every method used, even if parameters are the same name Method 2 Advantages Less boilerplate code Disadvantages Only allows a single parameter to be passed to the unit test I'm still new to Pytest, so maybe there is a way around the disadvantages that I listed above for each method, but given those I have been having a hard time trying to decide which one to use. I would guess that the intended way to do what I am trying to do is to use @pytest.mark.parametrize, but when passing only a single parameter, having less boilerplate code by using a fixture seems like a big advantage. Can anyone tell me a reason not to do it this way, or is this a perfectly valid use case? | As pk786 mentions in his comment, you should use a fixture "...if you have something to set up and teardown for the test or using (the) same dataset for multiple tests then use fixture". For example, you may want to load several datasets that you test against in different test functions. Using a fixture allows you to only load these datasets once and share them across the test functions. You can use the params argument of @pytest.fixture to load and cache each dataset. Then, the test functions that use those fixtures will run against each loaded dataset. In code, this might look something like: import json import pytest test_files = ["test_file1.json", "test_file2.json"] @pytest.fixture(params=test_files) def example_data(request): with open(request.param, "r") as f: data = json.load(f) return data def test_function1(example_data): # run test with example data. # this test will be run once for each file in the list `test_files` above. ... def test_function2(example_data): # run a different test with example data. # this test will be run once for each file in the list `test_files` above. # this test takes advantage of automatic caching mechanisms of fixtures... # ...so that data is not loaded again. ... Alternatively, as pk786 states, "If you are using a set of data only once, then @pytest.mark.parametrize should be the approach". This statement applies to the example that you provided since you are not performing any setup in the fixture that you need to share across tests. In this case, even if you are using the "tokens" across multiple tests, I would consider decorating each function with @pytest.mark.parametrize since I believe this approach more explicitly states your intent and will be easier to understand for anyone else reading your code. This would look like this: ...
def tokens(): yield from ["+", "*", "?"] @pytest.mark.parametrize("token", tokens()) def test_stuff(token): assert stuff @pytest.mark.parametrize("token", tokens()) def test_other_stuff(token) assert other_stuff | 12 | 14 |
60,384,288 | 2020-2-24 | https://stackoverflow.com/questions/60384288/pyinstaller-modulenotfounderror | I have built a python script using tensorflow and I am now trying to convert it to an .exe file, but have run into a problem. After using pyinstaller and running the program from the command prompt I get the following error: File "site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 25, in <module> ModuleNotFoundError: No module named 'tensorflow.python.platform' I have tried --hidden-import tensorflow.python.platform but it seems to have fixed nothing. (The program runs just fine in the interpreter) Your help would be greatly appreciated. | EDIT: The latest versions of PyInstaller (4.0+) now include support for tensorflow out of the box. Create a directory structure like this: - main.py # Your code goes here - don't bother actually naming your file this - hooks - hook-tensorflow.py Copy the following into hook-tensorflow.py: from PyInstaller.utils.hooks import collect_all def hook(hook_api): packages = [ 'tensorflow', 'tensorflow_core', 'astor' ] for package in packages: datas, binaries, hiddenimports = collect_all(package) hook_api.add_datas(datas) hook_api.add_binaries(binaries) hook_api.add_imports(*hiddenimports) Then, when compiling, add the command line option --additional-hooks-dir=hooks. If you come across more 'not found' errors, simply add the full import name into the packages list. PS - for me, main.py was simply from tensorflow import * | 7 | 20
60,292,750 | 2020-2-19 | https://stackoverflow.com/questions/60292750/plotly-how-to-plot-a-bar-line-chart-combined-with-a-bar-chart-as-subplots | I am trying to plot two different charts in Python through Plotly. I have two plots: one plot consists of a merged graph (line and bar chart) like the following, and another one is a bar chart as follows. I wanted to display one single figure with these two charts combined. I have tried this in Plotly through make_subplots but I am not able to achieve the results properly. Below are the codes for creating these two charts, Line_Bar_chart Code: import plotly.graph_objects as go from plotly.offline import iplot trace1 = go.Scatter( mode='lines+markers', x = df['Days'], y = df['Perc_Cases'], name="Percentage Cases", marker_color='crimson' ) trace2 = go.Bar( x = df['Days'], y = df['Count_Cases'], name="Absolute_cases", yaxis='y2', marker_color ='green', marker_line_width=1.5, marker_line_color='rgb(8,48,107)', opacity=0.5 ) data = [trace1, trace2] layout = go.Layout( title_text='States_Name', yaxis=dict( range = [0, 100], side = 'right' ), yaxis2=dict( overlaying='y', anchor='y3', ) ) fig = go.Figure(data=data, layout=layout) iplot(fig, filename='multiple-axes-double') Bar_chart Code: trace2 = go.Bar( x = df['Days'], y = df['Perc_Cases'], yaxis='y2', marker_color ='green', marker_line_width=1.5, marker_line_color='rgb(8,48,107)', opacity=0.5, ) layout = go.Layout( title_text='States_Name', yaxis2=dict( overlaying='y', ) ) fig = go.Figure(data=trace2, layout=layout) iplot(fig, filename='multiple-axes-double') Any help on how to make subplots of these two graphs like below would be helpful, | The key here is to assign your traces to the subplot through row and col in fig.add_trace(). And you don't have to use from plotly.offline import iplot for the latest plotly updates. Plot: Code: # imports from plotly.subplots import make_subplots import plotly.graph_objects as go import pandas as pd import numpy as np # data df = pd.DataFrame({'Index': {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0, 5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 10.0}, 'A': {0: 15.0, 1: 6.0, 2: 5.0, 3: 4.0, 4: 3.0, 5: 2.0, 6: 1.0, 7: 0.5, 8: 0.3, 9: 0.1}, 'B': {0: 1.0, 1: 4.0, 2: 2.0, 3: 5.0, 4: 4.0, 5: 6.0, 6: 7.0, 7: 2.0, 8: 8.0, 9: 1.0}, 'C': {0: 12.0, 1: 6.0, 2: 5.0, 3: 4.0, 4: 3.0, 5: 2.0, 6: 1.0, 7: 0.5, 8: 0.2, 9: 0.1}}) # set up plotly figure fig = make_subplots(1,2) # add first bar trace at row = 1, col = 1 fig.add_trace(go.Bar(x=df['Index'], y=df['A'], name='A', marker_color = 'green', opacity=0.4, marker_line_color='rgb(8,48,107)', marker_line_width=2), row = 1, col = 1) # add first scatter trace at row = 1, col = 1 fig.add_trace(go.Scatter(x=df['Index'], y=df['B'], line=dict(color='red'), name='B'), row = 1, col = 1) # add first bar trace at row = 1, col = 2 fig.add_trace(go.Bar(x=df['Index'], y=df['C'], name='C', marker_color = 'green', opacity=0.4, marker_line_color='rgb(8,48,107)', marker_line_width=2), row = 1, col = 2) fig.show() | 11 | 7
60,309,060 | 2020-2-19 | https://stackoverflow.com/questions/60309060/cannot-load-library-libcairo | I have a problem when trying to run a website in Django: OSError: no library called "libcairo-2" was found cannot load library 'libcairo.so.2': /lib/x86_64-linux-gnu/libfontconfig.so.1: undefined symbol: FT_Done_MM_Var cannot load library 'libcairo.so': /lib/x86_64-linux-gnu/libfontconfig.so.1: undefined symbol: FT_Done_MM_Var cannot load library 'libcairo.2.dylib': libcairo.2.dylib: cannot open shared object file: No such file or directory cannot load library 'libcairo-2.dll': libcairo-2.dll: cannot open shared object file: No such file or directory Although the package is installed. I have installed weasyprint pip3 install weasyprint python -m pip install WeasyPrint sudo apt-get install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info I also tried sudo apt install libcairo2-dev sudo apt install libgirepository1.0-dev I have a Lubuntu system. Any ideas how I can fix it? Thanks in advance | For Ubuntu 20 and Debian-based distros, try: sudo apt-get install libpangocairo-1.0-0 | 13 | 18
60,381,208 | 2020-2-24 | https://stackoverflow.com/questions/60381208/ignoring-django-migrations-in-pyproject-toml-file-for-black-formatter | I just got Black and Pre-Commit set up for my Django repository. I used the default config for Black from the tutorial I followed and it's been working great, but I am having trouble excluding my migrations files from it. Here is the default configuration I've been using: pyproject.toml [tool.black] line-length = 79 include = '\.pyi?$' exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' I used Regex101.com to make sure that ^.*\b(migrations)\b.*$ matched apps/examples/migrations/test.py. [tool.black] line-length = 79 include = '\.pyi?$' exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist | ^.*\b(migrations)\b.*$ )/ ''' When I add that regex line to my config file, and run pre-commit run --all-files, it ignores the .git folder but still formats the migrations files. | Add the migration exclusion to your .pre-commit-config.yaml file - id: black exclude: ^.*\b(migrations)\b.*$ | 22 | 18 |
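In context, a minimal .pre-commit-config.yaml applying that exclusion might look like this (the repo URL and rev below are assumptions; keep whatever revision your hook is already pinned to):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0        # assumed pin; use your existing rev
    hooks:
      - id: black
        exclude: ^.*\b(migrations)\b.*$
```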
60,382,793 | 2020-2-24 | https://stackoverflow.com/questions/60382793/what-are-the-inputs-to-the-transformer-encoder-and-decoder-in-bert | I was reading the BERT paper and was not clear on the inputs to the transformer encoder and decoder. For learning the masked language model (Cloze task), the paper says that 15% of the tokens are masked and the network is trained to predict the masked tokens. Since this is the case, what are the inputs to the transformer encoder and decoder? Is the input to the transformer encoder this input representation (see image above)? If so, what is the decoder input? Further, how is the output loss computed? Is it a softmax for only the masked locations? Is the same linear layer used for all masked tokens? | Ah, but you see, BERT does not include a Transformer decoder. It is only the encoder part, with a classifier added on top. For masked word prediction, the classifier acts as a decoder of sorts, trying to reconstruct the true identities of the masked words. Classifying non-masked tokens is not included in the classification task and does not affect the loss. BERT is also trained on predicting whether a pair of sentences really does precede one another or not. I do not remember how the two losses are weighted. I hope this draws a clearer picture. | 7 | 6
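To make the "softmax only at masked locations" point concrete, here is a toy PyTorch sketch (mine, not the answerer's; all shapes and the vocabulary size are illustrative, not taken from BERT's actual code):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 8, 30522)         # (batch, seq_len, vocab) from the encoder + classifier
labels = torch.randint(0, 30522, (2, 8))  # true token ids
masked = torch.zeros(2, 8, dtype=torch.bool)
masked[:, 3] = True                       # pretend position 3 held [MASK]

# The same output layer scores every position, but cross-entropy (softmax)
# is computed only where tokens were masked; other positions add no loss.
loss = F.cross_entropy(logits[masked], labels[masked])
```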
60,298,514 | 2020-2-19 | https://stackoverflow.com/questions/60298514/how-to-reinstall-python2-from-homebrew | I have been having issues with openssl and python@2 with brew, which I have explained here (unresolved). The documented workaround to reinstall Python and openssl was not working, so I decided I would uninstall and reinstall Python. The problem is, when you try to install Python 2 with brew, you receive this message: brew install python@2 Error: No available formula with the name "python@2" ==> Searching for a previously deleted formula (in the last month)... Warning: homebrew/core is shallow clone. To get complete history run: git -C "$(brew --repo homebrew/core)" fetch --unshallow python@2 was deleted from homebrew/core in commit 028f11f9e: python@2: delete (https://github.com/Homebrew/homebrew-core/issues/49796) EOL 1 January 2020. We gave it 1 month more to live so that people had time to migrate. All in all, developers had 11 years to do their migration. You can use the `brew extract` command and maintain python@2 in your own tap if necessary: https://docs.brew.sh/How-to-Create-and-Maintain-a-Tap To show the formula before removal run: git -C "$(brew --repo homebrew/core)" show 028f11f9e^:Formula/python@2.rb If you still use this formula consider creating your own tap: https://docs.brew.sh/How-to-Create-and-Maintain-a-Tap Unfortunately I still have a number of brew formulas that depend on Brew's python@2. Those include awscli, letsencrypt, or sshuttle, for example: aws zsh: /usr/local/bin/aws: bad interpreter: /usr/local/opt/python@2/bin/python2.7: no such file or directory I don't know how to use this brew extract command they documented to reinstall Python@2. It needs a formula and a tap. I imagine the formula would be python@2. I'm not sure what the tap would need to be. Additionally reinstalling the taps such as aws or letsencrypt is not working very well either. After reinstalling awscli (brew reinstall awscli), running aws commands still gives errors. aws /usr/local/Cellar/awscli/2.0.0/libexec/lib/python3.8/site-packages/jmespath/visitor.py:32: SyntaxWarning: "is" with a literal. Did you mean "=="? if x is 0 or x is 1: /usr/local/Cellar/awscli/2.0.0/libexec/lib/python3.8/site-packages/jmespath/visitor.py:32: SyntaxWarning: "is" with a literal. Did you mean "=="? if x is 0 or x is 1: /usr/local/Cellar/awscli/2.0.0/libexec/lib/python3.8/site-packages/jmespath/visitor.py:34: SyntaxWarning: "is" with a literal. Did you mean "=="? elif y is 0 or y is 1: /usr/local/Cellar/awscli/2.0.0/libexec/lib/python3.8/site-packages/jmespath/visitor.py:34: SyntaxWarning: "is" with a literal. Did you mean "=="? elif y is 0 or y is 1: /usr/local/Cellar/awscli/2.0.0/libexec/lib/python3.8/site-packages/jmespath/visitor.py:260: SyntaxWarning: "is" with a literal. Did you mean "=="? if original_result is 0: usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: the following arguments are required: command | It seems that the Homebrew staff really make it as hard as possible to use Python 2.7 on macOS. The linked brew extract page is really not helpful; you need to look for answers here about how to make your own tap from extracted sources. The linked commit: 028f11f9e is wrong, as it contains the already deleted file. The brew extract command doesn't even work correctly, because of the @ in the package name. 
The solution is very simple though, you just need to download the latest known commit and install from that file: cd ~ wget https://raw.githubusercontent.com/Homebrew/homebrew-core/86a44a0a552c673a05f11018459c9f5faae3becc/Formula/python@2.rb brew install python@2.rb rm python@2.rb There might be a warning about this being "unstable", which I don't understand as a commit in a Git history is as stable as you can get. | 206 | 183
60,371,624 | 2020-2-24 | https://stackoverflow.com/questions/60371624/drawing-a-3d-box-in-a-3d-scatterplot-using-plotly | I was trying to plot a 3d box in a 3d scatterplot. Basically, this was the result of an optimization problem (background is here). The box is the largest empty box possible given all the points. In the plotly docs I noticed an example of a 3d cube built using 3dmesh. I copied this: import plotly.graph_objects as go x=[ 0.93855, 0.20203, 0.54967, 0.58658, 0.39931, 0.06736, 0.61786, 0.36016, 0.12761, 0.71581, 0.81998, 0.04528, 0.08231, 0.41814, 0.58679, 0.21181, 0.34489, 0.21812, 0.46830, 0.81898, 0.57360, 0.18453, 0.99792, 0.37970, 0.51954, 0.84264, 0.22431, 0.31440, 0.23893, 0.28493, 0.76353, 0.45365, 0.44480, 0.94911, 0.98050, 0.28615, 0.02626, 0.85477, 0.60404, 0.47469, 0.10588, 0.55919, 0.42194, 0.34432, 0.80530, 0.88291, 0.53627, 0.45454, 0.01345, 0.84411, 0.04520, 0.35532, 0.45255, 0.99365, 0.72259, 0.08634, 0.78806, 0.28674, 0.57993, 0.84025, 0.22766, 0.51236, 0.83945, 0.21910, 0.41881, 0.18910, 0.00183, 0.59310, 0.12687, 0.45273, 0.14348, 0.66694, 0.28690, 0.32822, 0.93954, 0.34411, 0.25276, 0.14377, 0.08142, 0.05422, 0.51448, 0.48659, 0.66585, 0.25156, 0.69205, 0.21175, 0.72413, 0.92027, 0.79572, 0.13293, 0.81984, 0.25584, 0.42517, 0.41333, 0.75978, 0.60823, 0.83418, 0.37497, 0.10177, 0.01215] y=[ 0.61424, 0.39918, 0.57526, 0.04537, 0.24058, 0.18701, 0.18450, 0.82907, 0.66274, 0.96315, 0.58458, 0.12807, 0.38695, 0.30646, 0.88417, 0.63859, 0.40404, 0.06445, 0.19149, 0.91259, 0.99317, 0.67468, 0.12954, 0.11868, 0.79252, 0.98170, 0.74706, 0.28944, 0.55650, 0.91190, 0.26978, 0.94868, 0.82534, 0.37846, 0.38055, 0.42637, 0.26349, 0.09109, 0.10308, 0.63728, 0.37470, 0.85528, 0.19407, 0.29683, 0.71095, 0.72789, 0.47052, 0.54725, 0.62322, 0.52442, 0.32547, 0.54581, 0.51336, 0.58652, 0.76841, 0.00042, 0.80743, 0.32560, 0.29931, 0.19091, 0.95850, 0.42236, 0.70728, 0.85435, 0.79661, 0.14909, 0.80658, 0.36827, 0.46344, 0.92196, 0.09802, 0.02856, 0.73966, 0.55969, 0.34595, 0.80634, 0.18350, 0.84283, 0.04560, 0.41515, 0.50151, 0.52665, 0.44211, 0.48040, 0.39643, 0.99743, 0.18206, 0.09721, 0.33793, 0.69245, 0.97670, 0.70870, 0.75288, 0.51147, 0.22298, 0.84305, 0.62014, 0.41474, 0.82815, 0.42865] z=[ 0.13338, 0.81253, 0.46946, 0.76145, 0.83335, 0.96434, 0.79175, 0.20481, 0.60056, 0.26519, 0.89917, 0.16271, 0.02890, 0.49017, 0.18970, 0.16751, 0.47065, 0.85533, 0.73768, 0.14031, 0.92923, 0.11933, 0.40330, 0.46713, 0.69964, 0.25784, 0.87656, 0.25886, 0.64603, 0.92604, 0.83728, 0.71988, 0.48486, 0.57123, 0.78618, 0.70429, 0.30544, 0.20687, 0.47584, 0.58176, 0.43336, 0.35453, 0.96509, 0.98293, 0.88605, 0.70571, 0.51733, 0.09292, 0.69618, 0.76415, 0.82743, 0.99876, 0.86101, 0.58373, 0.03917, 0.60540, 0.59567, 0.94481, 0.35552, 0.80555, 0.97449, 0.31020, 0.61952, 0.48569, 0.50740, 0.69248, 0.01918, 0.04973, 0.21958, 0.98663, 0.09143, 0.24220, 0.96312, 0.66227, 0.91103, 0.26285, 0.28079, 0.10938, 0.07499, 0.34065, 0.83692, 0.33815, 0.89640, 0.06275, 0.01852, 0.08153, 0.88351, 0.08171, 0.87036, 0.51620, 0.90021, 0.67128, 0.36607, 0.54804, 0.72661, 0.18951, 0.11629, 0.46170, 0.24500, 0.88841] fig = go.Figure(data=[ go.Scatter3d(x=x, y=y, z=z, mode='markers', marker=dict(size=2) ), go.Mesh3d( # 8 vertices of a cube x=[0.608, 0.608, 0.998, 0.998, 0.608, 0.608, 0.998, 0.998], y=[0.091, 0.963, 0.963, 0.091, 0.091, 0.963, 0.963, 0.091], z=[0.140, 0.140, 0.140, 0.140, 0.571, 0.571, 0.571, 0.571], i = [7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2], j = [3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3], k = [0, 7, 2, 
3, 6, 7, 1, 1, 5, 5, 7, 6], opacity=0.6, color='#DC143C' ) ]) fig.show() However, the picture really shows the individual triangles. Any better way to draw a 3d box (in Plotly)? | For me, using the argument flatshading=True did the job. Code: fig = go.Figure(data=[ go.Scatter3d(x=x, y=y, z=z, mode='markers', marker=dict(size=2) ), go.Mesh3d( # 8 vertices of a cube x=[0.608, 0.608, 0.998, 0.998, 0.608, 0.608, 0.998, 0.998], y=[0.091, 0.963, 0.963, 0.091, 0.091, 0.963, 0.963, 0.091], z=[0.140, 0.140, 0.140, 0.140, 0.571, 0.571, 0.571, 0.571], i = [7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2], j = [3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3], k = [0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6], opacity=0.6, color='#DC143C', flatshading = True ) ]) Output | 8 | 11
60,296,197 | 2020-2-19 | https://stackoverflow.com/questions/60296197/flask-orjson-instead-of-json-module-for-decoding | I'm using Flask and have a lot of requests. The json module, which is used by Flask, is quite slow. Flask automatically picks up simplejson if it is installed, but that's a bit slower, not faster. According to the documentation I can define a decoder (flask.json_decoder), but orjson doesn't have this class; it only has the functions loads and dumps. Can somebody explain how I can replace the json module with orjson? In the end I just want to use the loads and dumps functions, but I can't connect my loose ends. | A very basic implementation could look like this: import orjson from flask import Flask class ORJSONDecoder: def __init__(self, **kwargs): # eventually take into consideration when deserializing self.options = kwargs def decode(self, obj): return orjson.loads(obj) class ORJSONEncoder: def __init__(self, **kwargs): # eventually take into consideration when serializing self.options = kwargs def encode(self, obj): # decode back to str, as orjson returns bytes return orjson.dumps(obj).decode('utf-8') app = Flask(__name__) app.json_encoder = ORJSONEncoder app.json_decoder = ORJSONDecoder | 10 | 8
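A quick check that the swap is active, reusing the app object from the answer above (the route is hypothetical, added for illustration):

```python
from flask import jsonify

@app.route("/ping")
def ping():
    return jsonify(status="ok")  # now encoded via ORJSONEncoder -> orjson.dumps
```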
60,358,228 | 2020-2-23 | https://stackoverflow.com/questions/60358228/how-to-set-title-on-seaborn-jointplot | The jointplot documentation does not show a title: can it be set? http://seaborn.pydata.org/generated/seaborn.jointplot.html?highlight=reg | This worked for me: p = sns.jointplot(x = 'x_', y = 'y_', data = df, kind="kde") p.fig.suptitle("Your title here") p.ax_joint.collections[0].set_alpha(0) p.fig.tight_layout() p.fig.subplots_adjust(top=0.95) # Reduce plot to make room | 17 | 38
60,294,463 | 2020-2-19 | https://stackoverflow.com/questions/60294463/attributeerror-dataframe-object-has-no-attribute-set-value | I'm using Flask and getting an error at set_value. I'm reading the input from HTML and passing it to the code @app.route('/home', methods=['POST']) def first(): source = request.files['first'] destination = request.files['second'] df = pd.read_csv(source) df1 = pd.read_csv(destination) val1 = int(request.form['val1']) val2 = int(request.form['val2']) val3 = int(request.form['val3']) target = request.form['str'] df2 = df[df.columns[val2]] count = 0 for j in df[df.columns[val1]]: x = df1.loc[df1[df1.columns[val3]] == j].index.values for i in x: df1.set_value(i, target, df2[count]) count = count + 1 df1.to_csv('result.csv', index=False) | Check your pandas version. df.set_value() is deprecated since pandas version 0.21.0. Instead, use df.at: import pandas as pd df = pd.DataFrame({"A":[1, 5, 3, 4, 2], "B":[3, 2, 4, 3, 4], "C":[2, 2, 7, 3, 4], "D":[4, 3, 6, 12, 7]}) df.at[2,'B']=100 A B C D 0 1 3 2 4 1 5 2 2 3 2 3 100 7 6 3 4 3 3 12 4 2 4 4 7 | 16 | 48
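Applied to the question's own loop, the change is mechanical (a sketch reusing the question's variable names):

```python
count = 0
for j in df[df.columns[val1]]:
    x = df1.loc[df1[df1.columns[val3]] == j].index.values
    for i in x:
        df1.at[i, target] = df2[count]  # replaces df1.set_value(i, target, df2[count])
        count += 1
```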
60,310,647 | 2020-2-19 | https://stackoverflow.com/questions/60310647/is-tensorflow-data-dataset-the-same-as-datasetv1adapter | When I use: training_ds = tf.data.Dataset.from_generator(SomeTrainingDirectoryIterator, (tf.float32, tf.float32)) I expect for it to return a Tensorflow Dataset, but instead, training_ds is a DatasetV1Adapter object. Are they essentially the same thing? If not could I convert the DatasetV1Adapter to a Tf.Data.Dataset object? Also, what is the best way to view loop over and view my dataset? If I were to call: def show_batch(dataset): for batch, head in dataset.take(1): for labels, value in batch.items(): print("{:20s}: {}".format(labels, value.numpy())) With training_ds as my dataset, I am thrown this error: AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'items' UPDATE: I upgraded my TensorFlow version from 1.14 to 2.0. and now the Dataset is of a FlatMapDataset. But this is still not my expected return object, why am I not being returned a regular tf.data.Dataset? | If you're using Tensorflow 2.0 (or below) from_generator will give you DatasetV1Adapter. For the Tensorflow version greater than 2.0 from_generator will give you FlatMapDataset. The error you are facing is not related to the type of dataset from_generator returns, but with the way you are printing the dataset. batch.items() works if the from_generator is generating the data of <class 'dict'> type. Example 1 - Here I am using from_generator to create <class 'tuple'> type data. So If I print using batch.items(), then it throws the error you are facing. You can simply use list(dataset.as_numpy_iterator()) to print the dataset OR dataset.take(1).as_numpy_iterator() to print required number of records, here as it is take(1), it prints just one record. Have added print statements in the code to explain better. You can find details in the Output. import tensorflow as tf print(tf.__version__) import itertools def gen(): for i in itertools.count(1): yield (i, [1] * i) dataset = tf.data.Dataset.from_generator( gen, (tf.int64, tf.int64), (tf.TensorShape([]), tf.TensorShape([None]))) print("tf.data.Dataset type is:",dataset,"\n") for batch in dataset.take(1): print("My type is of:",type(batch),"\n") # This Works print("Lets print just the first row in dataset :","\n",list(dataset.take(1).as_numpy_iterator()),"\n") # This won't work because we have not created dict print("Lets print using the batch.items() :") for batch in dataset.take(1): for m1,m2 in batch.items(): print("{:20s}: {}".format(m1, m2)) Output - 2.2.0 tf.data.Dataset type is: <FlatMapDataset shapes: ((), (None,)), types: (tf.int64, tf.int64)> My type is of: <class 'tuple'> Lets print just the first row in dataset : [(1, array([1]))] Lets print using the batch.items() : --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-11-27bbc2c21d24> in <module>() 24 print("Lets print using the batch.items() :") 25 for batch in dataset.take(1): ---> 26 for m1,m2 in batch.items(): 27 print("{:20s}: {}".format(m1, m2)) AttributeError: 'tuple' object has no attribute 'items' Example 2 - Here I am using from_generator to create <class 'dict'> type data. So If I print using batch.items(), then it works without any issues. Being said that, you can simply use list(dataset.as_numpy_iterator()) to print the dataset. Have added print statements in the code to explain better. You can find details in the Output. 
import tensorflow as tf N = 100 # dictionary of arrays: metadata = {'m1': tf.zeros(shape=(N,2)), 'm2': tf.ones(shape=(N,3,5))} num_samples = N def meta_dict_gen(): for i in range(num_samples): ls = {} for key, val in metadata.items(): ls[key] = val[i] yield ls dataset = tf.data.Dataset.from_generator( meta_dict_gen, output_types={k: tf.float32 for k in metadata}, output_shapes={'m1': (2,), 'm2': (3, 5)}) print("tf.data.Dataset type is:",dataset,"\n") for batch in dataset.take(1): print("My type is of:",type(batch),"\n") print("Lets print just the first row in dataset :","\n",list(dataset.take(1).as_numpy_iterator()),"\n") print("Lets print using the batch.items() :") for batch in dataset.take(1): for m1, m2 in batch.items(): print("{:2s}: {}".format(m1, m2)) Output - tf.data.Dataset type is: <FlatMapDataset shapes: {m1: (2,), m2: (3, 5)}, types: {m1: tf.float32, m2: tf.float32}> My type is of: <class 'dict'> Lets print just the first row in dataset : [{'m1': array([0., 0.], dtype=float32), 'm2': array([[1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.]], dtype=float32)}] Lets print using the batch.items() : m1: [0. 0.] m2: [[1. 1. 1. 1. 1.] [1. 1. 1. 1. 1.] [1. 1. 1. 1. 1.]] Hope this answers your question. Happy Learning. | 10 | 6 |
60,358,216 | 2020-2-23 | https://stackoverflow.com/questions/60358216/python-requests-post-request-dropping-authorization-header | I'm trying to make an API POST request using the Python requests library. I am passing through an Authorization header but when I try debugging, I can see that the header is being dropped. I have no idea what's going on. Here's my code: access_token = get_access_token() bearer_token = base64.b64encode(bytes("'Bearer {}'".format(access_token)), 'utf-8') headers = {'Content-Type': 'application/json', 'Authorization': bearer_token} data = '{"FirstName" : "Jane", "LastName" : "Smith"}' response = requests.post('https://myserver.com/endpoint', headers=headers, data=data) As you can see above, I manually set the Authorization header in the request arguments, but it is missing from the actual request's headers: {'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'python-requests/2.4.3 CPython/2.7.9 Linux/4.1.19-v7+'}. An additional piece of information is that if I change the POST request to a GET request, the Authorization header passes through normally! Why would this library be dropping the header for POST requests and how do I get this to work? Using v2.4.3 of the requests lib and Python 2.7.9 | TLDR The url you are requesting redirects POST requests to a different host, so the requests library drops the Authorization header for fear of leaking your credentials. To fix that you can override the responsible method in requests' Session class. Details In requests 2.4.3, the only place where requests removes the Authorization header is when a request is redirected to a different host. This is the relevant code: if 'Authorization' in headers: # If we get redirected to a new host, we should strip out any # authentication headers. original_parsed = urlparse(response.request.url) redirect_parsed = urlparse(url) if (original_parsed.hostname != redirect_parsed.hostname): del headers['Authorization'] In newer versions of requests, the Authorization header will be dropped in additional cases (for example if the redirect is from a secure to a non-secure protocol). So what probably happens in your case, is that your POST requests get redirected to a different host. The only way you can provide authentication for a redirected host using the requests library, is through a .netrc file. Sadly that will only allow you to use HTTP Basic Auth, which doesn't help you much. In that case, the best solution is probably to subclass requests.Session and override this behavior, like so: from requests import Session class NoRebuildAuthSession(Session): def rebuild_auth(self, prepared_request, response): """ No code here means requests will always preserve the Authorization header when redirected. Be careful not to leak your credentials to untrusted hosts! """ session = NoRebuildAuthSession() response = session.post('https://myserver.com/endpoint', headers=headers, data=data) Edit I have opened a pull-request to the requests library on github to add a warning when this happens. It has been waiting for a second approval to be merged (three months already). | 12 | 23 |
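A quick sketch for verifying the diagnosis: resp.history lists the intermediate redirect responses, so you can check whether the host actually changes along the way:

import requests
from urllib.parse import urlparse

resp = requests.post('https://myserver.com/endpoint', headers=headers,
                     data=data, allow_redirects=True)

# Each entry in resp.history is an intermediate (redirect) response
for r in resp.history:
    print(r.status_code, r.url, '->', r.headers.get('Location'))

if resp.history:
    first_host = urlparse(resp.history[0].url).hostname
    final_host = urlparse(resp.url).hostname
    # True here means the Authorization header would be stripped
    print('cross-host redirect:', first_host != final_host)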
60,323,366 | 2020-2-20 | https://stackoverflow.com/questions/60323366/valueerror-numpy-ufunc-size-changed-may-indicate-binary-incompatibility-expec | In a Jupyter notebook I am running into this error. I just installed pytorch; previously it was working fine. import pyodbc import pandas as pd import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') When I run the above cell I get the following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-1051f624fd0e> in <module>() 1 import pyodbc ----> 2 import pandas as pd 3 import matplotlib.pyplot as plt 4 5 import warnings C:\Anaconda3\lib\site-packages\pandas\__init__.py in <module>() 29 30 try: ---> 31 from pandas._libs import hashtable as _hashtable, lib as _lib, tslib as _tslib 32 except ImportError as e: # pragma: no cover 33 # hack but overkill to use re C:\Anaconda3\lib\site-packages\pandas\_libs\__init__.py in <module>() 1 # flake8: noqa 2 ----> 3 from .tslibs import ( 4 NaT, 5 NaTType, C:\Anaconda3\lib\site-packages\pandas\_libs\tslibs\__init__.py in <module>() 1 # flake8: noqa 2 ----> 3 from .conversion import localize_pydatetime, normalize_date 4 from .nattype import NaT, NaTType, iNaT, is_null_datetimelike 5 from .np_datetime import OutOfBoundsDatetime __init__.pxd in init pandas._libs.tslibs.conversion() ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject | Which version of numpy do you have installed? I got the same error with 1.18.5; downgrading to 1.16.0 or 1.16.1 solved the issue. Also check which numpy version is required by pytorch, and by any other library requiring numpy; I found pipdeptree quite useful for that. | 17 | 8 |
60,394,664 | 2020-2-25 | https://stackoverflow.com/questions/60394664/cant-debug-django-unit-tests-within-visual-studio-code | I want to be able to run and debug unit tests for a Django project from within Visual Studio Code. I am an experienced developer, but fairly new to both Django and Visual Studio Code. The crux of the problem is that either the tests are undiscoverable within Visual Studio Code, or if they are discoverable, I get a ConnectionHandler exception when I run them. I provide details for both of these situations below. I apologize for the length of this question, but I thought I should list the many solutions to the problem that I have tried unsuccessfully. I also think that it might be useful to put all of them into one place, rather than have them scattered all over StackOverflow and the rest of the Internet. Best Solution So Far The best solution I have found is at Problem with Django app unit tests under Visual Studio Code . It involves putting these four lines into an __init__.py file: import os os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_app_project.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() This works in my simple tutorial projects, described below, but in a real application some of the tests fail. It looks as though either Django isn't running the database models in isolation from the other tests, or that the tests' tearDown( ) method isn't being called. Also, it's not clear which __init__.py file to put these lines in. Finally, this solution causes unit tests to fail when run from the command line. Other Solutions Tried The StackOverflow question at How to fix "TypeError: argument of type 'ConnectionHandler' is not iterable" when running a django test? seems to be similar to the ConnectionHandler aspect of my problem, but no one has proposed a solution there. I've found a Visual Studio Code extension that is supposed to resolve these problems: Django Test Runner (https://marketplace.visualstudio.com/items?itemName=Pachwenko.django-test-runner). It currently has an issue at https://github.com/Pachwenko/VSCode-Django-Test-Runner/issues/5 which describes the problem I'm having with the extension. This blog says you can run Django unit tests in VSC with pytest: https://technowhisp.com/django-unit-testing-in-vscode/ . But even after carefully following his steps, I can't get pytest to discover the unit tests within Visual Studio Code. The StackOverflow question at Django Unit Testing in Visual Studio 2017 or 2019: has no answer. The code the questioner presents as a partial solution is ugly and bloated. The question is tagged as a possible duplicate of Django Unit Testing in Visual Studio 2017 or 2019: , but that question has no answer either. Other questions on StackOverflow aren't related to Django or even Python. My Setup The rest of this question describes my environment and the exact nature of the problems I am having. I am working with an existing application but have managed to reproduce the problem on two tutorial projects--the one provided by the Django team in their "Writing your first Django app" online tutorial at https://docs.djangoproject.com/en/3.0/intro/ , and a second tutorial offered by the Visual Studio Code team at https://code.visualstudio.com/docs/python/tutorial-django . Structurally, the difference between the two projects is that the second has the manage.py script in the same directory as the project root, while the first project has the script one directory below the project root. 
I am able to run the tests from the command line with no problem. I want to be able to run them from within Visual Studio Code because, sometimes, I want to run them through the debugger. I am using python 3.7, Django 3.0.3, Visual Studio Code 1.42.1, Windows 10, the Windows Subsystem for Linux and the unittest framework. As for my project, I will describe the setup for the first tutorial, since that is most similar to the way my real project is set up. The root directory is called mysite. Cleaned-up output from the tree command gives this: . └── mysite ├── db.sqlite3 ├── manage.py ├── mysite │ ├── __init__.py │ ├── asgi.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py └── polls ├── __init__.py ├── admin.py ├── apps.py ├── migrations ├── models.py ├── mydate.py ├── tests.py ├── urls.py └── views.py Here is my VSC settings.json file: { "python.pythonPath": "venv/bin/python", "python.testing.unittestArgs": [ "-v", "-s", // "/full/path/to/mysite", "./mysite", // "mysite", "-p", "test*.py" ], "python.testing.pytestEnabled": false, "python.testing.nosetestsEnabled": false, "python.testing.unittestEnabled": true } The file shows two lines commented out after the "-s" argument. I get the same behavior no matter which of these three lines I use for the path. ConnectionHandler Error within VSC In Visual Studio Code, my polls/tests.py file looks like this: As you can see, the "Run Test" and "Debug Test" buttons indicate the test is discoverable. With the unittest import, the test runs correctly within VSC, but with the django.test import I get a ConnectionHandler error with the following message stack: ====================================================================== ERROR: setUpClass (polls.tests.TestDjango)---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jkurlandski/workspace/randd/djangoprojs/mysite/venv/lib/python3.7/site-packages/django/test/testcases.py", line 1123, in setUpClass super().setUpClass() File "/home/jkurlandski/workspace/randd/djangoprojs/mysite/venv/lib/python3.7/site-packages/django/test/testcases.py", line 197, in setUpClass cls._add_databases_failures() File "/home/jkurlandski/workspace/randd/djangoprojs/mysite/venv/lib/python3.7/site-packages/django/test/testcases.py", line 218, in _add_databases_failures cls.databases = cls._validate_databases() File "/home/jkurlandski/workspace/randd/djangoprojs/mysite/venv/lib/python3.7/site-packages/django/test/testcases.py", line 204, in _validate_databases if alias not in connections: TypeError: argument of type 'ConnectionHandler' is not iterable ---------------------------------------------------------------------- Ran 0 tests in 0.001s
(But note, as I've said, that from the command line the test runs correctly.) As an experiment, I created the file mydate.py in polls, the same directory as models.py, containing a function called getNow( ). I can import this file into tests.py without losing test discoverability, but running the test causes the same ConnectionHandler error described above. from django.test import TestCase # from unittest import TestCase # discoverable, but TypeError: argument of type 'ConnectionHandler' is not iterable: from polls.mydate import getNow # discoverable, but TypeError: argument of type 'ConnectionHandler' is not iterable: # from .mydate import getNow ConnectionHandler Error from Command Line As I've said, I can run the unit tests from the command line with python manage.py test. However, I can replicate the ConnectionHandler error by running python -m unittest from the manage.py directory. Thanks in advance for any help you guys can give me. | You need to load the Django configuration before running any tests; that is the reason for the code in the __init__.py file. If pytest is an option for you, install pytest and pytest-django pip install pytest pytest-django and create a pytest configuration (e.g. pytest.ini) at the same level as the manage.py file with the following content. [pytest] DJANGO_SETTINGS_MODULE=mysite.settings python_files=test*.py The DJANGO_SETTINGS_MODULE setting points to the project settings module (the settings.py file). By default, the file tests.py is created for every app when you use python manage.py startapp app_name, but this file doesn't match the default pytest file pattern; for that reason you have to add the python_files option if you want to use the default file name. Update the settings.json configuration file with: { "python.pythonPath": "venv/bin/python", "python.testing.pytestArgs": [ "mysite", "-s", "-vv" ], "python.testing.pytestEnabled": true, "python.testing.nosetestsEnabled": false, "python.testing.unittestEnabled": false } About the error unresolved import 'polls.models': you need to configure the Python extension in VSCode by creating a .env file in the root of your project with PYTHONPATH=source_code_folder (for more information see VS Code Python Environment): PYTHONPATH=mysite/ Finally, your project structure should be . <-- vs code root ├── .env <-- file created with the PYTHONPATH entry └── mysite ├── pytest.ini <-- pytest configuration ├── db.sqlite3 ├── manage.py ├── mysite │ ├── __init__.py │ ├── asgi.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py └── polls ├── __init__.py ├── admin.py ├── apps.py ├── migrations ├── models.py ├── mydate.py ├── tests.py ├── urls.py └── views.py | 16 | 6 |
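An alternative sketch, for when pytest is acceptable but pytest-django is not: a minimal conftest.py next to manage.py that initializes Django before pytest collects tests. This is a hypothetical addition, and note that tests touching the database still need the Django test runner or pytest-django to create the test database:

# conftest.py, placed next to manage.py: pytest imports this before
# collecting tests, so Django apps are configured in time
import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()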
60,305,098 | 2020-2-19 | https://stackoverflow.com/questions/60305098/modulenotfounderror-no-module-named-sklearn-preprocessing-data | My question is similar to this. I also use pickle to save & load model. I meet the below error during pickle.load( ) from sklearn.preprocessing import StandardScaler # SAVE scaler = StandardScaler().fit(X_train) X_trainScale = scaler.transform(X_train) pickle.dump(scaler, open('scaler.scl','wb')) # ================= # LOAD sclr = pickle.load(open('scaler.scl','rb')) # => ModuleNotFoundError: No module named 'sklearn.preprocessing._data' X_testScale = sclr.transform(X_test) ModuleNotFoundError: No module named 'sklearn.preprocessing._data' It looks like a sklearn version issue. My sklearn version is 0.20.3, Python version is 3.7.3. But I am using Python in an Anaconda .zip file. Is it possible to solve this without updating the version of sklearn? | I had exactly the same error message with StandardScaler using Anaconda. Fixed it by running: conda update --all I think the issue was caused by running the pickle dump for creating the scaler file on a machine with a newer version of scikit-learn, and then trying to run pickle load on machine with an older version of scikit-learn. (It gave the error when running pickle load on the machine with the older version of scikit-learn but no error when running pickle load on the machine with the newer version of scikit-learn. Both windows machines). Perhaps this is due to more recent versions using a different naming convention for functions regarding underscores (as mentioned above)? Anaconda would not let me update the scikit-learn library on it's own, because it claimed it required the older version (for some reason I could not understand). Perhaps another library was using it? So I had to fix it by updating all the libraries at the same time, which worked. | 14 | 5 |
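A version-tolerant sketch that sidesteps pickling the object altogether by persisting only the fitted statistics (an assumption-laden workaround for StandardScaler specifically, not a general replacement for pickling arbitrary estimators):

import json
import numpy as np
from sklearn.preprocessing import StandardScaler

# SAVE: persist only the fitted statistics, not the Python object
scaler = StandardScaler().fit(X_train)
with open('scaler.json', 'w') as f:
    json.dump({'mean': scaler.mean_.tolist(),
               'scale': scaler.scale_.tolist()}, f)

# LOAD: rebuild an equivalent scaler on any sklearn version
with open('scaler.json') as f:
    params = json.load(f)
restored = StandardScaler()
restored.mean_ = np.array(params['mean'])
restored.scale_ = np.array(params['scale'])
X_testScale = restored.transform(X_test)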
60,330,730 | 2020-2-21 | https://stackoverflow.com/questions/60330730/typing-interfaces | What is the correct way to type an "interface" in python 3? In the following sample: class One(object): def foo(self) -> int: return 42 class Two(object): def foo(self) -> int: return 142 def factory(a: str): if a == "one": return One() return Two() what would be the correct way to type the return value of the factory function? It should be something like "A type with a single method named foo that accepts no arguments and returns an integer". But not sure I can find how to do that. UPD: this question is exclusively dedicated to typing. | You could use a typing.Union but, it sounds like you really want structural typing not nominal. Python supports this using typing.Protocol, which is a supported part of the python type-hinting system, so mypy will understand it, for example: import typing class Fooable(typing.Protocol): def foo(self) -> int: ... class One(object): def foo(self) -> int: return 42 class Two(object): def foo(self) -> int: return 142 def factory(a: str) -> Fooable: if a == "one": return One() return Two() x = factory('one') x.foo() Note, structural typing fits well with Python's duck-typing ethos. Python's typing system supports both structural and nominal forms. | 11 | 14 |
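If you also want the structural check at runtime, typing.runtime_checkable (Python 3.8+, or typing_extensions on older versions) makes isinstance work against the protocol; a sketch:

import typing

@typing.runtime_checkable
class Fooable(typing.Protocol):
    def foo(self) -> int: ...

# isinstance here checks method presence (structure), not inheritance
assert isinstance(One(), Fooable)
assert isinstance(Two(), Fooable)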
60,377,747 | 2020-2-24 | https://stackoverflow.com/questions/60377747/return-predictions-wav2vec-fairseq | I'm trying to use wav2vec to train my own Automatic Speech Recognition System: https://github.com/pytorch/fairseq/tree/master/examples/wav2vec import torch from fairseq.models.wav2vec import Wav2VecModel cp = torch.load('/path/to/wav2vec.pt') model = Wav2VecModel.build_model(cp['args'], task=None) model.load_state_dict(cp['model']) model.eval() First of all how can I use a loaded model to return predictions from a wav file? Second, how can I pre-train using annotated data? I don't see any text mention in the manifest scripts. | After trying various things I was able to figure this out and trained a wav2vec model from scratch. Some background: wav2vec uses semi-supervised learning to learn vector representations for preprocessed sound frames. This is similar to what word2vec does to learn word embeddings a text corpus. In the case of wav2vec it samples random parts of the sound file and learns to predict if a given part is in the near future from a current offset position. This is somewhat similar to the masked word task used to train transformers such as BERT. The nice thing about such prediction tasks is that they are self-supervised: the algorithm can be trained on unlabeled data since it uses the temporal structure of the data to produce labels and it uses random sampling to produce contrasting negative examples. It is a binary classification task (is the proposed processed sound frame in the near future of the current offset or not). In training for this binary classification task, it learns vector representations of sound frames (one 512 dim vector for each 10ms of sound). These vector representations are useful features because they concentrate information relevant to predicting speech. These vectors can then be used instead of spectrogram vectors as inputs for speech to text algorithms such as wav2letter or deepSpeech. This is an important point: wav2vec is not a full automatic speech recognition (ASR) system. It is a useful component because by leveraging self-supervised learning on unlabeled data (audio files containing speech but without text transcriptions), it greatly reduces the need for labeled data (speech transcribed to text). Based on their article it appears that by using wav2vec in an ASR pipeline, the amount of labeled data needed can be reduced by a factor of at least 10 (10 to 100 times less transcribed speech is needed apparently). Since un-transcribed speech files are much easier to get than transcribed speech, this is a huge advantage of using wav2vec as an initial module in an ASR system. So wav2vec is trained with data which is not annotated (no text is used to train it). The thing which confused me was the following command for training (here) : python train.py /manifest/path --save-dir /model/path ...(etc.)......... 
It turns out that since wav2vec is part of fairseq, the following fairseq command line tool should be used to train it: fairseq-train As the arguments to this command are pretty long, this can be done using a bash script such as #!/bin/bash fairseq-train /home/user/4fairseq --save-dir /home/user/4fairseq --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \ --arch wav2vec --task audio_pretraining --lr 1e-06 --min-lr 1e-09 --optimizer adam --max-lr 0.005 --lr-scheduler cosine \ --conv-feature-layers "[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)]" \ --conv-aggregator-layers "[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]" \ --skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion binary_cross_entropy --num-negatives 10 \ --max-sample-size 150000 --max-tokens 1500000 Most of the arguments are those suggested here; only the first two (which are filesystem paths) must be modified for your system. Since I had audio voice files which were in mp3 format, I converted them to wav files using the following bash script: #!/bin/bash for file in /home/user/data/soundFiles/* do echo "$file" echo "${file%.*}.wav" ffmpeg -i "$file" "${file%.*}.wav" done They suggest that the audio files be of short duration; longer files should be split into smaller files. The files which I had were already pretty short so I did not do any splitting. The script wav2vec_manifest.py must be used to create a training data manifest before training. It will create two files (train.tsv and valid.tsv), basically creating lists of which audio files should be used for training and which should be used for validation. The path at which these two files are located is the first argument to the fairseq-train command. The second argument to fairseq-train is the path at which to save the model. After training there will be these two model files: checkpoint_best.pt checkpoint_last.pt These are updated at the end of each epoch, so I was able to terminate the training process early and still have those saved model files | 8 | 20 |
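For the first part of the question (getting outputs from a loaded model), a sketch following the usage documented in the fairseq README of that era; feature_extractor and feature_aggregator are the model methods described there, and torch.randn stands in for a real 16 kHz mono waveform:

import torch
from fairseq.models.wav2vec import Wav2VecModel

cp = torch.load('/path/to/wav2vec.pt')
model = Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
model.eval()

# waveform of shape (batch, samples) at 16 kHz
wav_input_16khz = torch.randn(1, 10000)
z = model.feature_extractor(wav_input_16khz)  # local latent vectors
c = model.feature_aggregator(z)               # context vectors, one 512-dim vector per ~10 ms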
60,367,378 | 2020-2-23 | https://stackoverflow.com/questions/60367378/finding-neighbourhoods-cliques-in-street-data-a-graph | I am looking for a way to automatically define neighbourhoods in cities as polygons on a graph. My definition of a neighbourhood has two parts: A block: An area enclosed between a number of streets, where the number of streets (edges) and intersections (nodes) is a minimum of three (a triangle). A neighbourhood: For any given block, all the blocks directly adjacent to that block and the block itself. See this illustration for an example: E.g. B4 is a block defined by 7 nodes and 6 edges connecting them. As with most of the examples here, the other blocks are defined by 4 nodes and 4 edges connecting them. Also, the neighbourhood of B1 includes B2 (and vice versa) while B2 also includes B3. I am using osmnx to get street data from OSM. Using osmnx and networkx, how can I traverse a graph to find the nodes and edges that define each block? For each block, how can I find the adjacent blocks? I am working myself towards a piece of code that takes a graph and a pair of coordinates (latitude, longitude) as input, identifies the relevant block and returns the polygon for that block and the neighbourhood as defined above. Here is the code used to make the map: import osmnx as ox import networkx as nx import matplotlib.pyplot as plt G = ox.graph_from_address('Nørrebrogade 20, Copenhagen Municipality', network_type='all', distance=500) and my attempt at finding cliques with different numbers of nodes and degrees. def plot_cliques(graph, number_of_nodes, degree): ug = ox.save_load.get_undirected(graph) cliques = nx.find_cliques(ug) cliques_nodes = [clq for clq in cliques if len(clq) >= number_of_nodes] print("{} cliques with more than {} nodes.".format(len(cliques_nodes), number_of_nodes)) nodes = set(n for clq in cliques_nodes for n in clq) h = ug.subgraph(nodes) deg = nx.degree(h) nodes_degree = [n for n in nodes if deg[n] >= degree] k = h.subgraph(nodes_degree) nx.draw(k, node_size=5) Theory that might be relevant: Enumerating All Cycles in an Undirected Graph | Finding city blocks using the graph is surprisingly non-trivial. Basically, this amounts to finding the smallest set of smallest rings (SSSR), which is an NP-complete problem. A review of this problem (and related problems) can be found here. On SO, there is one description of an algorithm to solve it here. As far as I can tell, there is no corresponding implementation in networkx (or in python for that matter). I tried this approach briefly and then abandoned it -- my brain is not up to scratch for that kind of work today. That being said, I will award a bounty to anybody that might visit this page at a later date and post a tested implementation of an algorithm that finds the SSSR in python. I have instead pursued a different approach, leveraging the fact that the graph is guaranteed to be planar. Briefly, instead of treating this as a graph problem, we treat this as an image segmentation problem. First, we find all connected regions in the image. We then determine the contour around each region, transform the contours in image coordinates back to longitudes and latitudes. Given the following imports and function definitions: #!/usr/bin/env python # coding: utf-8 """ Find house blocks in osmnx graphs.
""" import numpy as np import osmnx as ox import networkx as nx import matplotlib.pyplot as plt from matplotlib.path import Path from matplotlib.patches import PathPatch from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas from skimage.measure import label, find_contours, points_in_poly from skimage.color import label2rgb ox.config(log_console=True, use_cache=True) def k_core(G, k): H = nx.Graph(G, as_view=True) H.remove_edges_from(nx.selfloop_edges(H)) core_nodes = nx.k_core(H, k) H = H.subgraph(core_nodes) return G.subgraph(core_nodes) def plot2img(fig): # remove margins fig.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0) # convert to image # https://stackoverflow.com/a/35362787/2912349 # https://stackoverflow.com/a/54334430/2912349 canvas = FigureCanvas(fig) canvas.draw() img_as_string, (width, height) = canvas.print_to_buffer() as_rgba = np.fromstring(img_as_string, dtype='uint8').reshape((height, width, 4)) return as_rgba[:,:,:3] Load the data. Do cache the imports, if testing this repeatedly -- otherwise your account can get banned. Speaking from experience here. G = ox.graph_from_address('NΓΈrrebrogade 20, Copenhagen Municipality', network_type='all', distance=500) G_projected = ox.project_graph(G) ox.save_graphml(G_projected, filename='network.graphml') # G = ox.load_graphml('network.graphml') Prune nodes and edges that cannot be part of a cycle. This step is not strictly necessary but results in nicer contours. H = k_core(G, 2) fig1, ax1 = ox.plot_graph(H, node_size=0, edge_color='k', edge_linewidth=1) Convert plot to image and find connected regions: img = plot2img(fig1) label_image = label(img > 128) image_label_overlay = label2rgb(label_image[:,:,0], image=img[:,:,0]) fig, ax = plt.subplots(1,1) ax.imshow(image_label_overlay) For each labelled region, find the contour and convert the contour pixel coordinates back to data coordinates. # using a large region here as an example; # however we could also loop over all unique labels, i.e. # for ii in np.unique(labels.ravel()): ii = np.argsort(np.bincount(label_image.ravel()))[-5] mask = (label_image[:,:,0] == ii) contours = find_contours(mask.astype(np.float), 0.5) # Select the largest contiguous contour contour = sorted(contours, key=lambda x: len(x))[-1] # display the image and plot the contour; # this allows us to transform the contour coordinates back to the original data cordinates fig2, ax2 = plt.subplots() ax2.imshow(mask, interpolation='nearest', cmap='gray') ax2.autoscale(enable=False) ax2.step(contour.T[1], contour.T[0], linewidth=2, c='r') plt.close(fig2) # first column indexes rows in images, second column indexes columns; # therefor we need to swap contour array to get xy values contour = np.fliplr(contour) pixel_to_data = ax2.transData + ax2.transAxes.inverted() + ax1.transAxes + ax1.transData.inverted() transformed_contour = pixel_to_data.transform(contour) transformed_contour_path = Path(transformed_contour, closed=True) patch = PathPatch(transformed_contour_path, facecolor='red') ax1.add_patch(patch) Determine all points in the original graph that fall inside (or on) the contour. 
x = G.nodes.data('x') y = G.nodes.data('y') xy = np.array([(x[node], y[node]) for node in G.nodes]) eps = (xy.max(axis=0) - xy.min(axis=0)).mean() / 100 is_inside = transformed_contour_path.contains_points(xy, radius=-eps) nodes_inside_block = [node for node, flag in zip(G.nodes, is_inside) if flag] node_size = [50 if node in nodes_inside_block else 0 for node in G.nodes] node_color = ['r' if node in nodes_inside_block else 'k' for node in G.nodes] fig3, ax3 = ox.plot_graph(G, node_color=node_color, node_size=node_size) Figuring out if two blocks are neighbors is pretty easy. Just check if they share a node: if set(nodes_inside_block_1) & set(nodes_inside_block_2): # empty set evaluates to False print("Blocks are neighbors.") | 12 | 6 |
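Once each block is represented by its set of boundary nodes, the pairwise neighbor test generalizes to a small helper; a sketch, where blocks is a hypothetical dict mapping a block id to the node set found as above:

from itertools import combinations

def block_adjacency(blocks):
    # blocks: dict mapping block id -> set of node ids on its boundary
    neighbors = {b: set() for b in blocks}
    for a, b in combinations(blocks, 2):
        if blocks[a] & blocks[b]:  # shared node => adjacent blocks
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors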
60,338,062 | 2020-2-21 | https://stackoverflow.com/questions/60338062/why-does-assigning-with-versus-iloc-yield-different-results-in-pandas | I am so confused with different indexing methods using iloc in pandas. Let's say I am trying to convert a 1-d Dataframe to a 2-d Dataframe. First I have the following 1-d Dataframe a_array = [1,2,3,4,5,6,7,8] a_df = pd.DataFrame(a_array).T And I am going to convert that into a 2-d Dataframe with the size of 2x4. I start by presetting the 2-d Dataframe as follows: b_df = pd.DataFrame(columns=range(4),index=range(2)) Then I use a for-loop to help me convert a_df (1-d) to b_df (2-d) with the following code for i in range(2): b_df.iloc[i,:] = a_df.iloc[0,i*4:(i+1)*4] It only gives me the following results 0 1 2 3 0 1 2 3 4 1 NaN NaN NaN NaN But when I changed b_df.iloc[i,:] to b_df.iloc[i][:], the result is correct like the following, which is what I want 0 1 2 3 0 1 2 3 4 1 5 6 7 8 Could anyone explain to me what the difference between .iloc[i,:] and .iloc[i][:] is, and why .iloc[i][:] worked in my example above but not .iloc[i,:] | There is a very, very big difference between series.iloc[:] and series[:], when assigning back. (i)loc always checks to make sure whatever you're assigning from matches the index of the assignee. Meanwhile, the [:] syntax assigns to the underlying NumPy array, bypassing index alignment. s = pd.Series(index=[0, 1, 2, 3], dtype='float') s 0 NaN 1 NaN 2 NaN 3 NaN dtype: float64 # Let's get a reference to the underlying array with `copy=False` arr = s.to_numpy(copy=False) arr # array([nan, nan, nan, nan]) # Reassign using slicing syntax s[:] = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd']) s 0 1 1 2 2 3 3 4 dtype: int64 arr # array([1., 2., 3., 4.]) # underlying array has changed # Now, reassign again with `iloc` s.iloc[:] = pd.Series([5, 6, 7, 8], index=[3, 4, 5, 6]) s 0 NaN 1 NaN 2 NaN 3 5.0 dtype: float64 arr # array([1., 2., 3., 4.]) # `iloc` created a new array for the series # during reassignment leaving this unchanged s.to_numpy(copy=False) # the new underlying array, for reference # array([nan, nan, nan, 5.]) Now that you understand the difference, let's look at what happens in your code. Just print out the RHS of your loops to see what you are assigning: for i in range(2): print(a_df.iloc[0, i*4:(i+1)*4]) # output - first row 0 1 1 2 2 3 3 4 Name: 0, dtype: int64 # second row. Notice the index is different 4 5 5 6 6 7 7 8 Name: 0, dtype: int64 When assigning to b_df.iloc[i, :] in the second iteration, the indexes are different so nothing is assigned and you only see NaNs. However, changing b_df.iloc[i, :] to b_df.iloc[i][:] will mean you assign to the underlying NumPy array, so indexing alignment is bypassed. This operation is better expressed as for i in range(2): b_df.iloc[i, :] = a_df.iloc[0, i*4:(i+1)*4].to_numpy() b_df 0 1 2 3 0 1 2 3 4 1 5 6 7 8 It's also worth mentioning this is a form of chained assignment, which is not a good thing, and also makes your code harder to read and understand. | 13 | 4 |
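As a postscript, the whole conversion in the question can be done without any loop or alignment concerns at all; a sketch:

import numpy as np
import pandas as pd

a_array = [1, 2, 3, 4, 5, 6, 7, 8]
# Reshape once and let pandas build the frame directly
b_df = pd.DataFrame(np.array(a_array).reshape(2, 4))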
60,395,570 | 2020-2-25 | https://stackoverflow.com/questions/60395570/invalidargumentexception-message-invalid-argument-using-must-be-a-string | I'm very new to Python and trying to create reusable code. When I try to call the class Login and its function login_user in test_main.py, passing all the arguments that were used in the Login class, I get an error: InvalidArgumentException: Message: invalid argument: 'using' must be a string. The test_main.py file runs on pytest. Locators_test is the class in the test_Locators.py file where I have all my XPaths. test_Locators.py class Locators_test(): loginlink_xpath = "//a[@id='login-link']" login_email = "xxxxx" login_password = "xxxxx" loginemail_id = "dnn_ctr1179_Login_txtEmail" loginpassword_id = "dnn_ctr1179_Login_txtPassword" clicklogin_id = "dnn_ctr1179_Login_btnLogin" test_login.py from Smoketest.locatorfile.test_Locators import Locators_test class Login(): def __init__(self,driver): self.driver = driver def login_user(self,driver): try: loginButton = self.driver.find_element((By.XPATH, Locators_test.loginlink_xpath)) while loginButton.click() is True: break time.sleep(3) self.driver.execute_script("window.scrollBy(0,300);") EmailField = self.driver.find_element((By.ID, Locators_test.loginemail_id)) EmailField.send_keys(Locators_test.login_email) PasswordField = self.driver.find_element((By.ID, Locators_test.loginpassword_id)) PasswordField.send_keys(Locators_test.login_password) ClickLogin = self.driver.find_element((By.ID, Locators_test.clicklogin_id)) while ClickLogin.click() is True: break time.sleep(5) userName = self.driver.find_element((By.XPATH, Locators_test.username_xpath)) print("Logged in as", userName.text) except StaleElementReferenceException or ElementClickInterceptedException or TimeoutException as ex: print(ex.message) test_main.py def test_setup(): driver = webdriver.Chrome(executable_path= Locators_test.browser_path) driver.maximize_window() driver.delete_all_cookies() driver.get(homePage) driver.implicitly_wait(5) yield print("test complete") def test_login(test_setup): from Smoketest.pages.test_login import Login lo = Login(driver) lo.login_user(((Locators_test.loginlink_xpath,Locators_test.loginemail_id,Locators_test.login_email,Locators_test.loginpassword_id,Locators_test.login_password,Locators_test.clicklogin_id,Locators_test.username_xpath))) Indentations are all fine. | I fixed it myself by removing the extra pair of parentheses from the line loginButton = self.driver.find_element((By.XPATH, Locators_test.loginlink_xpath)) The right way is loginButton = self.driver.find_element(By.XPATH, Locators_test.loginlink_xpath) PS: this applies to all the lines. | 13 | 24 |
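A plausible source of the stray parentheses: Selenium's expected conditions do take a single (By, value) locator tuple, while find_element takes two separate arguments; a sketch of both forms side by side:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# find_element takes two separate arguments...
login_button = driver.find_element(By.XPATH, Locators_test.loginlink_xpath)

# ...while expected_conditions take a single (By, value) locator tuple,
# which is likely where the extra parentheses crept in
login_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, Locators_test.loginlink_xpath))
)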
60,359,268 | 2020-2-23 | https://stackoverflow.com/questions/60359268/how-to-output-the-second-layer-of-a-network | My model is trained on digit images (MNIST dataset). I am trying to print the output of the second layer of my network - an array of 128 numbers. After reading a lot of examples - for instance this, this, or this - I did not manage to do this on my own network. None of the solutions works for my own algorithm. Link to Colab: https://colab.research.google.com/drive/1MLbpWJmq8JZB4_zKongaHP2o3M1FpvAv?fbclid=IwAR20xRz2i6sFS-Nm6Xwfk5hztdXOuxY4tZaDRXxAx3b986HToa9-IaTgASU I received a lot of different error messages. I tried to handle each of them, but couldn't figure it out on my own. What am I missing? How do I output the second layer? If my shape is (28,28) - what should be the type & value of input_shape? Failed trials & errors, for example: (1) for layer in model.layers: get_2nd_layer_output = K.function([model.layers[0].input],[model.layers[2].output]) layer_output = get_2nd_layer_output(layer)[0] print('\nlayer output: get_2nd_layer_output=, layer=', layer, '\nlayer output: get_2nd_layer_output=', get_2nd_layer_output) TypeError: inputs should be a list or tuple. (2) input_shape=(28, 28) inp = model.input # input placeholder outputs = [layer.output for layer in model.layers] # all layer outputs functor = K.function([inp, K.learning_phase()], outputs ) # evaluation function # Testing test = np.random.random(input_shape)[np.newaxis,...] layer_outs = functor([test, 0.]) print('layer_outs',layer_outs) tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable dense_1/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_1/bias) [[{{node dense_1/BiasAdd/ReadVariableOp}}]] | Looks like you are mixing old keras (before tensorflow 2.0: import keras) and new keras (from tensorflow import keras). Try not to use old keras alongside tensorflow>=2.0 (and not to refer to the old documentation as in your first link), as it is easily confused with the new one (although nothing strictly illogical): from tensorflow import keras from keras.models import Model print(Model.__module__) #outputs 'keras.engine.training' from tensorflow.keras.models import Model print(Model.__module__) #outputs 'tensorflow.python.keras.engine.training' Behaviour will be highly unstable mixing those two libraries. Once this is done, using an answer from what you tried, m being your model, and my_input_shape being the shape of your model's input, i.e. the shape of one picture (here (28, 28), or (1, 28, 28) if you have batches): from tensorflow import keras as K my_input_data = np.random.rand(*my_input_shape) new_temp_model = K.Model(m.input, m.layers[3].output) #replace 3 with index of desired layer output_of_3rd_layer = new_temp_model.predict(my_input_data) #this is what you want If you have one image img you can directly write new_temp_model.predict(img) | 9 | 3 |
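If you want every layer's activations at once instead of one index at a time, a small extension of the same idea (a sketch reusing the same m and my_input_data as above):

from tensorflow import keras as K

# One model that returns every layer's output in a single predict call
extractor = K.Model(inputs=m.input,
                    outputs=[layer.output for layer in m.layers])
all_layer_outputs = extractor.predict(my_input_data)
second_layer_output = all_layer_outputs[1]  # the 128-number array from the question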
60,363,908 | 2020-2-23 | https://stackoverflow.com/questions/60363908/spark-why-does-python-significantly-outperform-scala-in-my-use-case | To compare the performance of Spark when using Python and Scala I created the same job in both languages and compared the runtime. I expected both jobs to take roughly the same amount of time, but the Python job took only 27min, while the Scala job took 37min (almost 40% longer!). I implemented the same job in Java as well and it took 37 minutes too. How is it possible that Python is so much faster? Minimal verifiable example: Python job: # Configuration conf = pyspark.SparkConf() conf.set("spark.hadoop.fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider") conf.set("spark.executor.instances", "4") conf.set("spark.executor.cores", "8") sc = pyspark.SparkContext(conf=conf) # 960 Files from a public dataset in 2 batches input_files = "s3a://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312025.20/warc/CC-MAIN-20190817203056-20190817225056-00[0-5]*" input_files2 = "s3a://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312128.3/warc/CC-MAIN-20190817102624-20190817124624-00[0-3]*" # Count occurrences of a certain string logData = sc.textFile(input_files) logData2 = sc.textFile(input_files2) a = logData.filter(lambda value: value.startswith('WARC-Type: response')).count() b = logData2.filter(lambda value: value.startswith('WARC-Type: response')).count() print(a, b) Scala job: // Configuration config.set("spark.executor.instances", "4") config.set("spark.executor.cores", "8") val sc = new SparkContext(config) sc.setLogLevel("WARN") sc.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider") // 960 Files from a public dataset in 2 batches val input_files = "s3a://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312025.20/warc/CC-MAIN-20190817203056-20190817225056-00[0-5]*" val input_files2 = "s3a://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312128.3/warc/CC-MAIN-20190817102624-20190817124624-00[0-3]*" // Count occurrences of a certain string val logData1 = sc.textFile(input_files) val logData2 = sc.textFile(input_files2) val num1 = logData1.filter(line => line.startsWith("WARC-Type: response")).count() val num2 = logData2.filter(line => line.startsWith("WARC-Type: response")).count() println(s"Lines with a: $num1, Lines with b: $num2") Just by looking at the code, they seem to be identical. I looked at the DAGs and they didn't provide any insights (or at least I lack the know-how to come up with an explanation based on them). I would really appreciate any pointers. | Your basic assumption, that Scala or Java should be faster for this specific task, is just incorrect. You can easily verify it with minimal local applications.
Scala one: import scala.io.Source import java.time.{Duration, Instant} object App { def main(args: Array[String]) { val Array(filename, string) = args val start = Instant.now() Source .fromFile(filename) .getLines .filter(line => line.startsWith(string)) .length val stop = Instant.now() val duration = Duration.between(start, stop).toMillis println(s"${start},${stop},${duration}") } } Python one import datetime import sys if __name__ == "__main__": _, filename, string = sys.argv start = datetime.datetime.now() with open(filename) as fr: # Not idiomatic or the most efficient but that's what # PySpark will use sum(1 for _ in filter(lambda line: line.startswith(string), fr)) end = datetime.datetime.now() duration = round((end - start).total_seconds() * 1000) print(f"{start},{end},{duration}") Results (300 repetitions each, Python 3.7.6, Scala 2.11.12), on Posts.xml from hermeneutics.stackexchange.com data dump with mix of matching and non matching patterns: Python 273.50 (258.84, 288.16) Scala 634.13 (533.81, 734.45) As you see Python is not only systematically faster, but also is more consistent (lower spread). The takeaway message is: don't believe unsubstantiated FUD. Languages can be faster or slower on specific tasks or with specific environments (for example here Scala can be hit by JVM startup and / or GC and / or JIT), but if you see claims like "XYZ is X4 faster" or "XYZ is slow as compared to ZYX (..) Approximately, 10x slower" it usually means that someone wrote really bad code to test things. Edit: To address some concerns raised in the comments: In the OP code data is passed in mostly in one direction (JVM -> Python) and no real serialization is required (this specific path just passes the bytestring as-is and decodes it as UTF-8 on the other side). That's as cheap as it gets when it comes to "serialization". What is passed back is just a single integer per partition, so in that direction the impact is negligible. Communication is done over local sockets (all communication on the worker beyond initial connect and auth is performed using the file descriptor returned from local_connect_and_auth, and it is nothing other than a socket-associated file). Again, as cheap as it gets when it comes to communication between processes. Considering the difference in raw performance shown above (much higher than what you see in your program), there is a lot of margin for the overheads listed above. This case is completely different from cases where either simple or complex objects have to be passed to and from the Python interpreter in a form that is accessible to both parties as pickle-compatible dumps (most notable examples include old-style UDF, some parts of old-style MLLib). Edit 2: Since jasper-m was concerned about startup cost here, one can easily prove that Python still has a significant advantage over Scala even if the input size is significantly increased. Here are results for 2003360 lines / 5.6G (the same input, just duplicated multiple times, 30 repetitions), which way exceeds anything you can expect in a single Spark task. Python 22809.57 (21466.26, 24152.87) Scala 27315.28 (24367.24, 30263.31) Please note non-overlapping confidence intervals. Edit 3: To address another comment from Jasper-M: The bulk of all the processing is still happening inside a JVM in the Spark case. That is simply incorrect in this particular case: The job in question is a map job with a single global reduce using PySpark RDDs.
PySpark RDD (unlike, let's say, DataFrame) implements the bulk of its functionality natively in Python, with the exception of input, output and inter-node communication. Since it is a single-stage job, and the final output is small enough to be ignored, the main responsibility of the JVM (if one was to nitpick, this is implemented mostly in Java, not Scala) is to invoke the Hadoop input format and push data through the socket file to Python. The read part is identical for the JVM and Python API, so it can be considered a constant overhead. It also doesn't qualify as the bulk of the processing, even for such a simple job as this one. | 17 | 14 |
60,403,545 | 2020-2-25 | https://stackoverflow.com/questions/60403545/flake8-not-giving-errors-warnings-on-missing-docstring-or-code-not-following-pep | I am trying to run Flake8 on my Python code; however, I'm noticing it's not giving me any of the PyDocStyle errors on a simple class with missing docstrings, or warning about my class name cars, which should be Cars according to the PEP8 style guide. Example code file (cars.py) class cars: def __init__(self, some_value): self.some_value = some_value When I run flake8 cars.py I get the following output: cars.py:3:37: W292 no newline at end of file I'm trying to configure Flake8 to run some common style checks but am not finding any help in the Flake8 documentation about how to enable PyDocStyle error codes. For comparison, I ran the same file against Pylint and here is the output ************* Module code.cars cars.py:3:0: C0304: Final newline missing (missing-final-newline) cars.py:1:0: C0114: Missing module docstring (missing-module-docstring) cars.py:1:0: C0103: Class name "cars" doesn't conform to PascalCase naming style (invalid-name) cars.py:1:0: C0115: Missing class docstring (missing-class-docstring) cars.py:1:0: R0903: Too few public methods (0/2) (too-few-public-methods) ------------------------------------ Your code has been rated at -6.67/10 I'm using python 3.7.6, flake8 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.7.6 on Linux | So what I found out was that Flake8 by default wraps pycodestyle 2.5.0, whose documentation says: Among other things, these features are currently not in the scope of the pycodestyle library: naming conventions: this kind of feature is supported through plugins. Install flake8 and the pep8-naming extension to use this feature. docstring conventions: they are not in the scope of this library; see the pydocstyle project. automatic fixing: see the section PEP8 Fixers in the related tools page. So I installed pep8-naming, as well as flake8-docstrings, and after running flake8 --version I got the below, which shows it is now using the installed plugins: 3.7.9 (flake8-docstrings: 1.5.0, pydocstyle: 5.0.2, mccabe: 0.6.1, naming: 0.8.2, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.7.6 on Darwin I reran my check flake8 cars.py and I got the below output: cars.py:1:1: D100 Missing docstring in public module cars.py:2:1: D101 Missing docstring in public class cars.py:2:8: N801 class name 'cars' should use CapWords convention cars.py:3:1: D107 Missing docstring in __init__ At first impression - after checking out the git repos for flake8 and the additional plugins I had to install - I am slightly skeptical of Flake8. The reason is that, at the time of writing, Pylint seems to come packaged with the behavior I need, as well as having been around longer, which benefits from stability and more contributors. Contrast that with flake8, which is newer, and where achieving the desired behavior requires installing 3rd party plugins/libraries that might be abandoned in a year or two, or break when flake8 upgrades. Either scenario is problematic, with the latter being troublesome if you're not careful about pinning specific versions/builds in CI pipelines. | 11 | 11 |
60,383,266 | 2020-2-24 | https://stackoverflow.com/questions/60383266/python-reuse-functions-in-dash-callbacks | I'm trying to make an app in the Python Dash framework which lets a user select a name from a list and use that name to populate two other input fields. There are six places where a user can select a name from (the same) list, and so a total of 12 callbacks that need to be performed. My question is, how can I use a single function definition to supply multiple callbacks? As I've seen other places (here for example), people reuse the same function name when doing multiple callbacks, e.g. @app.callback( Output('rp-mon1-health', 'value'), [Input('rp-mon1-name', 'value')] ) def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['health'] else: return 11 @app.callback( Output('rp-mon3-health', 'value'), [Input('rp-mon3-name', 'value')] ) def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['health'] else: return 11 @app.callback( Output('rp-mon1-health', 'value'), [Input('rp-mon1-name', 'value')] ) def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['health'] else: return 11 This is a ton of identical repetition and is bad if there's a fix I need to implement later. Ideally I'd be able to do something like: @app.callback( Output('rp-mon1-health', 'value'), [Input('rp-mon1-name', 'value')] ) @app.callback( Output('rp-mon2-health', 'value'), [Input('rp-mon2-name', 'value')] ) @app.callback( Output('rp-mon3-health', 'value'), [Input('rp-mon3-name', 'value')] ) def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['health'] else: return 11 However, the above ends up registering no callback on the first two outputs, only on the last. My code as it stands is below.
import json import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output monster_data = json.loads('''[{ "name": "Ares Mothership", "health": 14, "transition": 2 },{ "name": "Cthugrosh", "health": 7, "transition": 3 }]''') monster_names = [{'label': m['name'], 'value': m['name']} for m in monster_data] monster_names.append({'label': 'None', 'value': ''}) app = dash.Dash(__name__) def gen_monster(player, i): name = 'Monster #%d: ' % i id_gen = '%s-mon%d' % (player, i) output = html.Div([ html.Label('%s Name ' % name), html.Br(), dcc.Dropdown( options=monster_names, value='', id='%s-name' % id_gen ), html.Br(), html.Label('Health'), html.Br(), dcc.Input(value=11, type='number', id='%s-health' % id_gen), html.Br(), html.Label('Hyper Transition'), html.Br(), dcc.Input(value=6, type='number', id='%s-state' % id_gen), ], style={'border': 'dotted 1px black'}) return output app.layout = html.Div(children=[ html.H1(children='Monsterpocalypse Streaming Stats Manager'), html.Div([ html.Div([ html.Label('Left Player Name: '), dcc.Input(value='Mark', type='text', id='lp-name'), gen_monster('lp', 1), html.Br(), gen_monster('lp', 2), html.Br(), gen_monster('lp', 3) ], style={'width': '300px'}), html.Br(), html.Div([ html.Label('Right Player Name: '), dcc.Input(value='Benjamin', type='text'), gen_monster('rp', 1), html.Br(), gen_monster('rp', 2), html.Br(), gen_monster('rp', 3) ], style={'width': '300px'}) ], style={'columnCount': 2}), html.Div(id='dummy1'), html.Div(id='dummy2') ]) @app.callback( Output('rp-mon1-health', 'value'), [Input('rp-mon1-name', 'value')] ) def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['health'] else: return 11 @app.callback( Output('rp-mon1-state', 'value'), [Input('rp-mon1-name', 'value')] ) def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['transition'] else: return 6 if __name__ == '__main__': app.run_server(debug=True) | You could do something like this: def update_health(monster): if monster != '': relevant = [m for m in monster_data if m['name'] == monster] return relevant[0]['health'] else: return 11 @app.callback( Output('rp-mon1-health', 'value'), [Input('rp-mon1-name', 'value')] ) def monster_1_callback(*args, **kwargs): return update_health(*args, **kwargs) @app.callback( Output('rp-mon2-health', 'value'), [Input('rp-mon2-name', 'value')] ) def monster_2_callback(*args, **kwargs): return update_health(*args, **kwargs) @app.callback( Output('rp-mon3-health', 'value'), [Input('rp-mon3-name', 'value')] ) def monster_3_callback(*args, **kwargs): return update_health(*args, **kwargs) Now the function that contains the logic is only written once, and the other functions are simple passthroughs that you shouldn't ever need to update. | 9 | 9 |
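Another common pattern is to register the callbacks in a loop via a small factory function, so adding a seventh monster means changing one range; a sketch reusing update_health from the answer above:

from dash.dependencies import Input, Output

def register_monster_callbacks(app, i):
    # The decorator runs at registration time, so looping works as long
    # as `i` is bound per call (hence the helper function)
    @app.callback(Output(f'rp-mon{i}-health', 'value'),
                  [Input(f'rp-mon{i}-name', 'value')])
    def update_monster_health(monster):
        return update_health(monster)

for i in range(1, 4):
    register_monster_callbacks(app, i)

# The same factory can register the companion 'state' callback per monster.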
60,340,107 | 2020-2-21 | https://stackoverflow.com/questions/60340107/pandas-dataframe-bin-on-multiple-columns-get-statistics-on-another-column | Problem I have a target variable x and some additional variables A and B. I want to calculate averages (and other statistics) of x when certain conditions for A and B are met. A real world example would be to calculate the average air temperature (x) from a long series of measurements when solar radiation (A) and wind speed (B) fall into certain pre-defined bin ranges. Potential solutions I have been able to accomplish this with loops (see example below), but I've learned that I should avoid looping over dataframes. From my research on this site I feel like there is probably a much more elegant / vectorized solution using either pd.cut or np.select, but I frankly couldn't figure out how to do it. Example Generate sample data import pandas as pd import numpy as np n = 100 df = pd.DataFrame( { "x": np.random.randn(n), "A": np.random.randn(n)+5, "B": np.random.randn(n)+10 } ) df.head() output: x A B 0 -0.585313 6.038620 9.909762 1 0.412323 3.991826 8.836848 2 0.211713 5.019520 9.667349 3 0.710699 5.353677 9.757903 4 0.681418 4.452754 10.647738 Calculate bin averages # define bin ranges bins_A = np.arange(3, 8) bins_B = np.arange(8, 13) # prepare output lists A_mins= [] A_maxs= [] B_mins= [] B_maxs= [] x_means= [] x_stds= [] x_counts= [] # loop over bins for i_A in range(0, len(bins_A)-1): A_min = bins_A[i_A] A_max = bins_A[i_A+1] for i_B in range(0, len(bins_B)-1): B_min = bins_B[i_B] B_max = bins_B[i_B+1] # binning conditions for current step conditions = np.logical_and.reduce( [ df["A"] > A_min, df["A"] < A_max, df["B"] > B_min, df["B"] < B_max, ] ) # calculate statistics for x and store values in lists x_values = df.loc[conditions, "x"] x_means.append(x_values.mean()) x_stds.append(x_values.std()) x_counts.append(x_values.count()) A_mins.append(A_min) A_maxs.append(A_max) B_mins.append(B_min) B_maxs.append(B_max) Store the result in a new dataframe binned = pd.DataFrame( data={ "A_min": A_mins, "A_max": A_maxs, "B_min": B_mins, "B_max": B_maxs, "x_mean": x_means, "x_std": x_stds, "x_count": x_counts } ) binned.head() output: A_min A_max B_min B_max x_mean x_std x_count 0 3 4 8 9 0.971624 0.790972 2 1 3 4 9 10 0.302795 0.380102 3 2 3 4 10 11 0.447398 1.787659 5 3 3 4 11 12 0.462149 1.195844 2 4 4 5 8 9 0.379431 0.983965 4 | Approach #1 : Pandas + NumPy (some to none) We will try to keep it to pandas/NumPy so that we could leverage dataframe methods or array methods and ufuncs, while vectorizing it at their level. This makes it easier to extend the functionalities when complex problems are to be solved or statistics are to be generated, as that seems to be case here. Now, to solve the problem while keeping it close to pandas, would be to generate intermediate IDs or tags that resemble the combined tracking of A and B on the given bins bins_A and bins_B respectively. 
To do so, one way would be to use searchsorted on these two data separately - tagsA = np.searchsorted(bins_A,df.A) tagsB = np.searchsorted(bins_B,df.B) Now, we are interested in the within-the-bounds cases only, hence masking is needed - vm = (tagsB>0) & (tagsB<len(bins_B)) & (tagsA>0) & (tagsA<len(bins_A)) Let's apply this mask on the original dataframe - dfm = df.iloc[vm] Add in the tags for the valid ones, which would represent A_mins and B_min equivalents and hence would show up in the final output - dfm['TA'] = bins_A[(tagsA-1)[vm]] dfm['TB'] = bins_B[(tagsB-1)[vm]] So, our tagged dataframe is ready, which could then be describe-d to get the common stats after grouping on these two tags - df_out = dfm.groupby(['TA','TB'])['x'].describe() Sample run to make things clearer, while comparing against posted solution in question - In [46]: np.random.seed(0) ...: n = 100 ...: df = pd.DataFrame( ...: { ...: "x": np.random.randn(n), ...: "A": np.random.randn(n)+5, ...: "B": np.random.randn(n)+10 ...: } ...: ) In [47]: binned Out[47]: A_min A_max B_min B_max x_mean x_std x_count 0 3 4 8 9 0.400199 0.719007 5 1 3 4 9 10 -0.268252 0.914784 6 2 3 4 10 11 0.458746 1.499419 5 3 3 4 11 12 0.939782 0.055092 2 4 4 5 8 9 0.238318 1.173704 5 5 4 5 9 10 -0.263020 0.815974 8 6 4 5 10 11 -0.449831 0.682148 12 7 4 5 11 12 -0.273111 1.385483 2 8 5 6 8 9 -0.438074 NaN 1 9 5 6 9 10 -0.009721 1.401260 16 10 5 6 10 11 0.467934 1.221720 11 11 5 6 11 12 0.729922 0.789260 3 12 6 7 8 9 -0.977278 NaN 1 13 6 7 9 10 0.211842 0.825401 7 14 6 7 10 11 -0.097307 0.427639 5 15 6 7 11 12 0.915971 0.195841 2 In [48]: df_out Out[48]: count mean std ... 50% 75% max TA TB ... 3 8 5.0 0.400199 0.719007 ... 0.302472 0.976639 1.178780 9 6.0 -0.268252 0.914784 ... -0.001510 0.401796 0.653619 10 5.0 0.458746 1.499419 ... 0.462782 1.867558 1.895889 11 2.0 0.939782 0.055092 ... 0.939782 0.959260 0.978738 4 8 5.0 0.238318 1.173704 ... -0.212740 0.154947 2.269755 9 8.0 -0.263020 0.815974 ... -0.365103 0.449313 0.950088 10 12.0 -0.449831 0.682148 ... -0.436773 -0.009697 0.761038 11 2.0 -0.273111 1.385483 ... -0.273111 0.216731 0.706573 5 8 1.0 -0.438074 NaN ... -0.438074 -0.438074 -0.438074 9 16.0 -0.009721 1.401260 ... 0.345020 1.284173 1.950775 10 11.0 0.467934 1.221720 ... 0.156349 1.471263 2.240893 11 3.0 0.729922 0.789260 ... 1.139401 1.184846 1.230291 6 8 1.0 -0.977278 NaN ... -0.977278 -0.977278 -0.977278 9 7.0 0.211842 0.825401 ... 0.121675 0.398750 1.764052 10 5.0 -0.097307 0.427639 ... -0.103219 0.144044 0.401989 11 2.0 0.915971 0.195841 ... 0.915971 0.985211 1.054452 So, as mentioned earlier, we have our A_min and B_min in TA and TB, while the relevant statistics are captured in other headers. Note that this would be a multi-index dataframe. If we need to capture the equivalent array data, simply do : df_out.loc[:,['count','mean','std']].values for the stats, while np.vstack(df_out.loc[:,['count','mean','std']].index) for the bin interval-starts. 
Alternatively, to capture the equivalent stat data without describe, but using dataframe methods, we can do something like this - dfmg = dfm.groupby(['TA','TB'])['x'] dfmg.size().unstack().values dfmg.std().unstack().values dfmg.mean().unstack().values Alternative #1 : Using pd.cut We can also use pd.cut as was suggested in the question to replace searchsorted for a more compact one as the out-of-bounds ones are handled automatically, keeping the basic idea same - df['TA'] = pd.cut(df['A'],bins=bins_A, labels=range(len(bins_A)-1)) df['TB'] = pd.cut(df['B'],bins=bins_B, labels=range(len(bins_B)-1)) df_out = df.groupby(['TA','TB'])['x'].describe() So, this gives us the stats. For A_min and B_min equivalents, simply use the index levels - A_min = bins_A[df_out.index.get_level_values(0)] B_min = bins_B[df_out.index.get_level_values(1)] Or use some meshgrid method - mA,mB = np.meshgrid(bins_A[:-1],bins_B[:-1]) A_min,B_min = mA.ravel('F'),mB.ravel('F') Approach #2 : With bincount We can leverage np.bincount to get all those three stat metric values including standard-deviation, again in a vectorized manner - lA,lB = len(bins_A),len(bins_B) n = lA+1 x,A,B = df.x.values,df.A.values,df.B.values tagsA = np.searchsorted(bins_A,A) tagsB = np.searchsorted(bins_B,B) t = tagsB*n + tagsA L = n*lB countT = np.bincount(t, minlength=L) countT_x = np.bincount(t,x, minlength=L) avg_all = countT_x/countT count = countT.reshape(-1,n)[1:,1:-1].ravel('F') avg = avg_all.reshape(-1,n)[1:,1:-1].ravel('F') # Using numpy std definition for ddof case ddof = 1.0 # default one for pandas std grp_diffs = (x-avg_all[t])**2 std_all = np.sqrt(np.bincount(t,grp_diffs, minlength=L)/(countT-ddof)) stds = std_all.reshape(-1,n)[1:,1:-1].ravel('F') Approach #3 : With sorting to leverage reduceat methods - x,A,B = df.x.values,df.A.values,df.B.values vm = (A>bins_A[0]) & (A<bins_A[-1]) & (B>bins_B[0]) & (B<bins_B[-1]) xm = x[vm] tagsA = np.searchsorted(bins_A,A) tagsB = np.searchsorted(bins_B,B) tagsAB = tagsB*(tagsA.max()+1) + tagsA tagsABm = tagsAB[vm] sidx = tagsABm.argsort() tagsAB_s = tagsABm[sidx] xms = xm[sidx] cut_idx = np.flatnonzero(np.r_[True,tagsAB_s[:-1]!=tagsAB_s[1:],True]) N = (len(bins_A)-1)*(len(bins_B)-1) count = np.diff(cut_idx) avg = np.add.reduceat(xms,cut_idx[:-1])/count stds = np.empty(N) for ii,(s0,s1) in enumerate(zip(cut_idx[:-1],cut_idx[1:])): stds[ii] = np.std(xms[s0:s1], ddof=1) To get in the same or similar format as the pandas dataframe styled output, we need to reshape. Hence, it would be avg.reshape(-1,len(bins_A)-1).T and so on. | 14 | 12 |
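For reference, the pd.cut route above also collapses into a single groupby aggregation when only mean/std/count are needed; a small sketch reusing the question's df, bins_A and bins_B:

df['TA'] = pd.cut(df['A'], bins=bins_A)
df['TB'] = pd.cut(df['B'], bins=bins_B)
# one pass over the data; the group keys carry the bin intervals
binned = df.groupby(['TA', 'TB'])['x'].agg(['mean', 'std', 'count']).reset_index()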
60,390,709 | 2020-2-25 | https://stackoverflow.com/questions/60390709/working-with-mixed-datetime-formats-in-pandas | I read a file into a pandas dataframe with dates that vary in their format: either the ISO style: YYYY-MM-DD or the European: DD.MM.YYYY They come as strings. I would like to format them all as date objects so pandas.Series.dt can work with them, and ideally have them in the second format (DD.MM.YYYY). pandas.Series.dt gets confused with the two different spellings in one column. | Use to_datetime with each format separately; values that don't match a format become missing (NaT), so fill the gaps in the new column with Series.fillna: df = pd.DataFrame({'date': ['2000-01-12', '2015-01-23', '20.12.2015', '31.12.2009']}) print (df) date 0 2000-01-12 1 2015-01-23 2 20.12.2015 3 31.12.2009 date1 = pd.to_datetime(df['date'], errors='coerce', format='%Y-%m-%d') date2 = pd.to_datetime(df['date'], errors='coerce', format='%d.%m.%Y') df['date'] = date1.fillna(date2) print (df) date 0 2000-01-12 1 2015-01-23 2 2015-12-20 3 2009-12-31 and ideally have them in the second format The default format of datetimes in python/pandas is YYYY-MM-DD; a custom one is possible, but the values are converted to strings, so datetime-like functions will fail: df['date'] = df['date'].dt.strftime('%d.%m.%Y') print (df) date 0 12.01.2000 1 23.01.2015 2 20.12.2015 3 31.12.2009 print (type(df.loc[0, 'date'])) <class 'str'> | 9 | 13 |
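If more than two formats can occur, the fillna chain generalizes to a loop over formats; a sketch assuming the same df as above:

formats = ['%Y-%m-%d', '%d.%m.%Y']
parsed = pd.Series(pd.NaT, index=df.index)
for fmt in formats:
    # rows that don't match fmt stay NaT and get filled by a later format
    parsed = parsed.fillna(pd.to_datetime(df['date'], format=fmt, errors='coerce'))
df['date'] = parsed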
60,388,502 | 2020-2-25 | https://stackoverflow.com/questions/60388502/how-can-i-replace-the-first-occurrence-of-a-character-in-every-word | How can I replace the first occurrence of a character in every word? Say I have this string: hello @jon i am @@here or @@@there and want some@thing in '@here" # ^ ^^ ^^^ ^ ^ And I want to remove the first @ on every word, so that I end up having a final string like this: hello jon i am @here or @@there and want something in 'here # ^ ^ ^^ ^ ^ Just for clarification, "@" characters always appear together in every word, but can be in the beginning of the word or between other characters. I managed to remove the "@" character if it occurs just once by using a variation of the regex I found in Delete substring when it occurs once, but not when twice in a row in python, which uses a negative lookahead and negative lookbehind: @(?!@)(?<!@@) See the output: >>> s = "hello @jon i am @@here or @@@there and want some@thing in '@here" >>> re.sub(r'@(?!@)(?<!@@)', '', s) "hello jon i am @@here or @@@there and want something in 'here" So the next step is to replace the "@" when it occurs more than once. This is easy by doing s.replace('@@', '@') to remove the "@" from wherever it occurs again. However, I wonder: is there a way to do this replacement in one shot? | I would do a regex replacement on the following pattern: @(@*) And then just replace with the first capture group, which is all continous @ symbols, minus one. This should capture every @ occurring at the start of each word, be that word at the beginning, middle, or end of the string. inp = "hello @jon i am @@here or @@@there and want some@thing in '@here" out = re.sub(r"@(@*)", '\\1', inp) print(out) This prints: hello jon i am @here or @@there and want something in 'here | 45 | 50 |
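A quick self-check of the accepted pattern against the examples from the question, using only the standard library:

import re

tests = {'@jon': 'jon', '@@here': '@here', '@@@there': '@@there',
         'some@thing': 'something', "'@here": "'here"}
for raw, expected in tests.items():
    assert re.sub(r'@(@*)', r'\1', raw) == expected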
60,382,598 | 2020-2-24 | https://stackoverflow.com/questions/60382598/django-aws-elastic-beanstalk-error-improperlyconfigured-error-loading-mysqldb-m | I know this error have come to many people and I have tried different solutions and none of them worked. I am using aws eb cli. I am using following command eb deploy to deploy my application to server. Following are the configuration for my Django. under .ebextensions directory, I have following 2 files: 1: 01_packages.config packages: yum: git: [] python27-devel: [] mysql: [] mysql-devel: [] and another file is 2: 02_django.conf option_settings: "aws:elasticbeanstalk:application:environment": DJANGO_SETTINGS_MODULE: "settings.development" "PYTHONPATH": "/opt/python/current/app/src:$PYTHONPATH" "aws:elasticbeanstalk:container:python": WSGIPath: src/wsgi.py NumProcesses: 3 NumThreads: 20 "aws:elasticbeanstalk:container:python:staticfiles": "/static/": "static/" Following is my requirements.txt file after pip freeze in my local virtual environment. requirements.txt asn1crypto==0.24.0 awsebcli==3.17.1 backports.ssl-match-hostname==3.5.0.1 botocore==1.14.17 cement==2.8.2 cent==2.1.0 centrifuge==0.8.4 certifi==2017.11.5 cffi==1.11.2 chardet==3.0.4 colorama==0.3.9 cryptography==2.1.4 Django==1.8.18 django-colorfield==0.1.14 django-countries==5.0 django-debug-toolbar==1.9.1 django-environ==0.4.4 django-multiselectfield==0.1.8 django-simple-history==1.9.1 django-sslserver==0.20 docutils==0.15.2 enum34==1.1.6 future==0.16.0 google-api-python-client==1.6.4 hiredis==0.2.0 html5lib==1.0b8 httplib2==0.10.3 icalendar==4.0.0 idna==2.6 ipaddress==1.0.18 jmespath==0.9.4 jsonschema==2.4.0 mysqlclient==1.4.6 oauth2client==2.0.0 oauthclient==1.0.3 olefile==0.44 pathspec==0.5.9 paypalrestsdk==1.13.1 pdfcrowd==4.0.1 phonenumbers==8.8.6 Pillow==4.3.0 pyasn1==0.3.7 pyasn1-modules==0.1.5 pycparser==2.18 PyJWT==1.5.3 pyOpenSSL==17.5.0 PyPDF2==1.26.0 pypiwin32==219 pytesseract==0.1.7 python-dateutil==2.6.1 pytz==2017.3 PyYAML==5.2 reportlab==3.4.0 requests==2.18.4 rsa==3.4.2 semantic-version==2.5.0 six==1.11.0 sockjs-tornado==1.0.1 sqlparse==0.2.4 termcolor==1.1.0 toredis-fork==0.1.4 tornado==4.2.1 toro==0.8 twilio==6.9.1 uritemplate==3.0.0 urllib3==1.22 wcwidth==0.1.8 webencodings==0.5.1 xhtml2pdf==0.2.2 I kept this in my root directory. When I run eb deploy it deploys successfully. but when I run the browser to my url. I get this Internal Server Error page. So I tried to look in to the log files on the server under /var/log/httpd/error_log i get the following error: [Mon Feb 24 17:56:57.227427 2020] [:error] [pid 8054] [remote 101.50.93.65:188] mod_wsgi (pid=8054): Target WSGI script '/opt/python/current/app/src/wsgi.py' cannot be loaded as Python module. [Mon Feb 24 17:56:57.227450 2020] [:error] [pid 8054] [remote 101.50.93.65:188] mod_wsgi (pid=8054): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. 
[Mon Feb 24 17:56:57.227466 2020] [:error] [pid 8054] [remote 101.50.93.65:188] Traceback (most recent call last): [Mon Feb 24 17:56:57.227483 2020] [:error] [pid 8054] [remote 101.50.93.65:188] File "/opt/python/current/app/src/wsgi.py", line 17, in <module> [Mon Feb 24 17:56:57.227585 2020] [:error] [pid 8054] [remote 101.50.93.65:188] application = get_wsgi_application() [Mon Feb 24 17:56:57.227599 2020] [:error] [pid 8054] [remote 101.50.93.65:188] File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Mon Feb 24 17:56:57.227627 2020] [:error] [pid 8054] [remote 101.50.93.65:188] django.setup() [Mon Feb 24 17:56:57.227634 2020] [:error] [pid 8054] [remote 101.50.93.65:188] File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Mon Feb 24 17:56:57.227645 2020] [:error] [pid 8054] [remote 101.50.93.65:188] apps.populate(settings.INSTALLED_APPS) [Mon Feb 24 17:56:57.227651 2020] [:error] [pid 8054] [remote 101.50.93.65:188] File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 78, in populate [Mon Feb 24 17:56:57.227662 2020] [:error] [pid 8054] [remote 101.50.93.65:188] raise RuntimeError("populate() isn't reentrant") [Mon Feb 24 17:56:57.227676 2020] [:error] [pid 8054] [remote 101.50.93.65:188] RuntimeError: populate() isn't reentrant So I kept Googling, and found a post suggested to change wsgi file, and I did and then this error updated to the following lines of code. [Mon Feb 24 18:23:12.169850 2020] [mpm_prefork:notice] [pid 9081] AH00169: caught SIGTERM, shutting down [Mon Feb 24 18:23:13.289449 2020] [suexec:notice] [pid 10286] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Mon Feb 24 18:23:13.305290 2020] [so:warn] [pid 10286] AH01574: module wsgi_module is already loaded, skipping [Mon Feb 24 18:23:13.307373 2020] [http2:warn] [pid 10286] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive. [Mon Feb 24 18:23:13.307384 2020] [http2:warn] [pid 10286] AH02951: mod_ssl does not seem to be enabled [Mon Feb 24 18:23:13.307990 2020] [lbmethod_heartbeat:notice] [pid 10286] AH02282: No slotmem from mod_heartmonitor [Mon Feb 24 18:23:13.308050 2020] [:warn] [pid 10286] mod_wsgi: Compiled for Python/2.7.13. [Mon Feb 24 18:23:13.308057 2020] [:warn] [pid 10286] mod_wsgi: Runtime using Python/2.7.16. [Mon Feb 24 18:23:13.311200 2020] [mpm_prefork:notice] [pid 10286] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/2.7.16 configured -- resuming normal operations [Mon Feb 24 18:23:13.311217 2020] [core:notice] [pid 10286] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' [Mon Feb 24 18:23:16.367182 2020] [:error] [pid 10293] [remote 127.0.0.1:0] mod_wsgi (pid=10293): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:23:16.367240 2020] [:error] [pid 10293] [remote 127.0.0.1:0] RuntimeError: response has not been started [Mon Feb 24 18:23:17.744228 2020] [:error] [pid 10291] [remote 127.0.0.1:0] mod_wsgi (pid=10291): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. 
[Mon Feb 24 18:23:17.744288 2020] [:error] [pid 10291] [remote 127.0.0.1:0] RuntimeError: response has not been started [Mon Feb 24 18:23:19.116825 2020] [:error] [pid 10292] [remote 127.0.0.1:0] mod_wsgi (pid=10292): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:23:19.116892 2020] [:error] [pid 10292] [remote 127.0.0.1:0] RuntimeError: response has not been started [Mon Feb 24 18:23:20.493432 2020] [:error] [pid 10418] [remote 127.0.0.1:0] mod_wsgi (pid=10418): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:23:20.493489 2020] [:error] [pid 10418] [remote 127.0.0.1:0] RuntimeError: response has not been started [Mon Feb 24 18:36:44.987693 2020] [:error] [pid 10443] [remote 95.105.12.68:0] mod_wsgi (pid=10443): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:36:44.987766 2020] [:error] [pid 10443] [remote 95.105.12.68:0] RuntimeError: response has not been started [Mon Feb 24 18:55:28.298121 2020] [:error] [pid 10468] [remote 101.50.93.65:0] mod_wsgi (pid=10468): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:55:28.298180 2020] [:error] [pid 10468] [remote 101.50.93.65:0] RuntimeError: response has not been started [Mon Feb 24 18:55:30.126198 2020] [:error] [pid 10499] [remote 101.50.93.65:0] mod_wsgi (pid=10499): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:55:30.126260 2020] [:error] [pid 10499] [remote 101.50.93.65:0] RuntimeError: response has not been started [Mon Feb 24 18:55:31.671293 2020] [:error] [pid 10973] [remote 101.50.93.65:0] mod_wsgi (pid=10973): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:55:31.671358 2020] [:error] [pid 10973] [remote 101.50.93.65:0] RuntimeError: response has not been started [Mon Feb 24 18:55:32.858757 2020] [:error] [pid 11606] [remote 101.50.93.65:0] mod_wsgi (pid=11606): Exception occurred processing WSGI script '/opt/python/current/app/src/wsgi.py'. [Mon Feb 24 18:55:32.858821 2020] [:error] [pid 11606] [remote 101.50.93.65:0] RuntimeError: response has not been started and this is the new wsgi.py file. import os import logging os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.development") from django.core.wsgi import get_wsgi_application # application = get_wsgi_application() env_variables_to_pass = ['DJANGO_SETTINGS_MODULE', ] def application(environ, start_response): if environ['mod_wsgi.process_group'] != '': import signal os.kill(os.getpid(), signal.SIGINT) return ["killed"] and following was the old wsgi.py file import os import logging os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.development") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() env_variables_to_pass = ['DJANGO_SETTINGS_MODULE', ] So then I tried to run application on the server using python manage.py runserver as there was nothing else to do to fix this and I was not sure where the problem is. So I did ssh and I did used the virtualenvironment of server that was already there created by EB. after running python manage.py runserver. I get this following error. 
File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 27, in <module> raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb So I tried to follow any advice that I could see available on google. I tried pip install mysqlclient I get this error. Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/d0/97/7326248ac8d5049968bf4ec708a5d3d4806e412a42e74160d7f266a3e03a/mysqlclient-1.4.6.tar.gz Complete output from command python setup.py egg_info: sh: mysql_config: command not found sh: mariadb_config: command not found sh: mysql_config: command not found Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-build-MxFUOd/mysqlclient/setup.py", line 16, in <module> metadata, options = get_config() File "setup_posix.py", line 61, in get_config libs = mysql_config("libs") File "setup_posix.py", line 29, in mysql_config raise EnvironmentError("%s not found" % (_mysql_config_path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-MxFUOd/mysqlclient/ I tried both pip install mysqlclient --no-cache-dir and pip install mysqlclient It's Centos server so I cannot do the sudo apt-get commands. Only yum commands work. I tried this following command with sudo sudo pip install mysql-connector-python But I think this will be installed globally rather than env. So I tried without sudo, and it gave permission error. I used other commands to install mysql both with sudo and not sudo. pip install pymysql sudo yum install python-mysqldb No matter what I do I get this MySQL error. I don't want to move to other database as I would have to move data as well. UPDATE from given suggession of @Arun K i ran this following command which mysql_config i got this following response. /usr/bin/which: no mysql_config in (/opt/python/run/venv/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/bin) | Try running sudo yum install mysql-devel gcc python-devel Then pip install mysqlclient | 7 | 2 |
60,378,705 | 2020-2-24 | https://stackoverflow.com/questions/60378705/python-vs-julia-autocorrelation | I am trying to do autocorrelation using Julia and compare it to Python's result. How come they give different results? Julia code using StatsBase t = range(0, stop=10, length=10) test_data = sin.(exp.(t.^2)) acf = StatsBase.autocor(test_data) gives 10-element Array{Float64,1}: 1.0 0.13254954979179642 -0.2030283419321465 0.00029587850872956104 -0.06629381497277881 0.031309038331589614 -0.16633393452504994 -0.08482388975165675 0.0006905628640697538 -0.1443650483145533 Python code from statsmodels.tsa.stattools import acf import numpy as np t = np.linspace(0,10,10) test_data = np.sin(np.exp(t**2)) acf_result = acf(test_data) gives array([ 1. , 0.14589844, -0.10412699, 0.07817509, -0.12916543, -0.03469143, -0.129255 , -0.15982435, -0.02067688, -0.14633346]) | This is because your test_data is different: Python: array([ 0.84147098, -0.29102733, 0.96323736, 0.75441021, -0.37291918, 0.85600145, 0.89676529, -0.34006519, -0.75811102, -0.99910501]) Julia: [0.8414709848078965, -0.2910273263243299, 0.963237364649543, 0.7544102058854344, -0.3729191776326039, 0.8560014512776061, 0.9841238290665676, 0.1665709194875013, -0.7581110212957692, -0.9991050130774393] This happens because you are taking sin of enormous numbers. For example, with the last number in t being 10, exp(10^2) is ~2.7*10^43. At this scale, floating point inaccuracies are about 3*10^9. So if even the least significant bit is different for Python and Julia, the sin value will be way off. In fact, we can inspect the underlying binary values of the initial array t. For example, they differ in the third last value: Julia: julia> reinterpret(Int, range(0, stop=10, length=10)[end-2]) 4620443017702830535 Python: >>> import struct >>> s = struct.pack('>d', np.linspace(0,10,10)[-3]) >>> struct.unpack('>q', s)[0] 4620443017702830536 We can indeed see that they disagree by exactly one machine epsilon. And if we use Julia take sin of the value obtained by Python: julia> sin(exp(reinterpret(Float64, 4620443017702830536)^2)) -0.3400651855865199 We get the same value Python does. | 20 | 27 |
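The sensitivity is easy to demonstrate: nudging the input by a single ULP flips sin(exp(t**2)) completely. A sketch using the same linspace:

import numpy as np

t = np.linspace(0, 10, 10)
x = t[-3]
print(t.view(np.int64)[-3])                       # raw bit pattern of the double
print(np.sin(np.exp(x ** 2)))                     # value for this bit pattern
print(np.sin(np.exp(np.nextafter(x, 11) ** 2)))   # one ULP away: a completely different value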
60,368,956 | 2020-2-24 | https://stackoverflow.com/questions/60368956/attributeerrorbytes-object-has-no-attribute-encode | Trying to import a code from python2 to python 3 and this problem happens <ipython-input-53-e9f33b00348a> in aesEncrypt(text, secKey) 43 def aesEncrypt(text, secKey): 44 pad = 16 - len(text) % 16 ---> 45 text = text.encode("utf-8") + (pad * chr(pad)).encode("utf-8") 46 encryptor = AES.new(secKey, 2, '0102030405060708') 47 ciphertext = encryptor.encrypt(text) AttributeError:'bytes' object has no attribute 'encode' If I remove .encode("utf-8") the error is "can't concat str to bytes". Apparently pad*chr(pad) seems to be a byte string. It cannot use encode() <ipython-input-65-9e84e1f3dd26> in aesEncrypt(text, secKey) 43 def aesEncrypt(text, secKey): 44 pad = 16 - len(text) % 16 ---> 45 text = text.encode("utf-8") + (pad * chr(pad)) 46 encryptor = AES.new(secKey, 2, '0102030405060708') 47 ciphertext = encryptor.encrypt(text) TypeError: can't concat str to bytes However, the weird thing is that if i just try the part along. encode() works fine. text = { 'username': '', 'password': '', 'rememberLogin': 'true' } text=json.dumps(text) print(text) pad = 16 - len(text) % 16 print(type(text)) text = text + pad * chr(pad) print(type(pad * chr(pad))) print(type(text)) text = text.encode("utf-8") + (pad * chr(pad)).encode("utf-8") print(type(text)) {"username": "", "password": "", "rememberLogin": "true"} <class 'str'> <class 'str'> <class 'str'> <class 'bytes'> | If you don't know if a stringlike object is a Python 2 string (bytes) or Python 3 string (unicode). You could have a generic converter. Python3 shell: >>> def to_bytes(s): ... if type(s) is bytes: ... return s ... elif type(s) is str or (sys.version_info[0] < 3 and type(s) is unicode): ... return codecs.encode(s, 'utf-8') ... else: ... raise TypeError("Expected bytes or string, but got %s." % type(s)) ... >>> to_bytes("hello") b'hello' >>> to_bytes("hello".encode('utf-8')) b'hello' On Python 2 both these expressions evaluate to True: type("hello") == bytes and type("hello") == str. And type(u"hello") == str evaluates to False, while type(u"hello") == unicode is True. On Python 3 type("hello") == bytes is False, and type("hello") == str is True. And type("hello") == unicode raises a NameError exception since unicode isn't defined on 3. Python 2 shell: >>> to_bytes(u"hello") 'hello' >>> to_bytes("hello") 'hello' | 20 | 8 |
60,367,476 | 2020-2-23 | https://stackoverflow.com/questions/60367476/context-manager-that-handles-exceptions | I am trying to wrap my head around how to write a context manager that deals with writing some logs while handling any exceptions. The problem I am trying to solve is to make code like this: try: # code that can raise exception here except Exception as e: print('failed', e) print('all good') This is a repeated pattern I have in the code and I think it's best handled with a context manager like: with my_ctx_manager(success_msg='all good', failed_msg='failed): # code that can raise exception here this looks much better, but I don't know how to write the actual context manager to deal with any exceptions that could rise inside the context. @contextlib.contextmanager def my_ctx_manager(success_msg, failed_msg): try: # if no exception then print(success_msg) # How do I catch any exception here except Exception: print(failed_msg) # I need the exception to propagate as well raise I guess my question is more of the type: How do I make sure that the context manager correctly catches, logs and re-raise any exception for the code that is wrapping ? | The way the @contextmanager decorator works, you should write yield once within your context manager function, so that the with block will be executed while the yield statement pauses your function's execution. That means if the with block throws an exception, you can catch it by wrapping yield in a try/except block: from contextlib import contextmanager @contextmanager def example(): print('entered the context manager') managed_resource = 'some resource' try: yield managed_resource except Exception as e: print('caught:', e) # any cleanup that should only be done on failure raise else: # any cleanup that should only be done on success print('no exception was thrown') finally: # any cleanup that should always be done print('exited the context manager') with example() as resource: print('resource:', resource) raise ValueError('some error message') Output: entered the context manager resource: some resource caught: some error message exited the context manager Traceback (most recent call last): File "<stdin>", line 3, in <module> ValueError: some error message If you want to catch everything (not just Exception), then you can write a bare except: block and use sys.exc_info() to get the exception information. | 10 | 19 |
60,365,473 | 2020-2-23 | https://stackoverflow.com/questions/60365473/by-how-much-can-i-approx-reduce-disk-volume-by-using-dvc | I want to classify ~1m+ documents and have a Version Control System for in- and Output of the corresponding model. The data changes over time: sample size increases over time new Features might appear anonymization procedure might Change over time So basically "everything" might change: amount of observations, Features and the values. We are interested in making the ml model Building reproducible without using 10/100+ GB of disk volume, because we save all updated versions of Input data. Currently the volume size of the data is ~700mb. The most promising tool i found is: https://github.com/iterative/dvc. Currently the data is stored in a database in loaded in R/Python from there. Question: How much disk volume can be (very approx.) saved by using dvc? If one can roughly estimate that. I tried to find out if only the "diffs" of the data are saved. I didnt find much info by reading through: https://github.com/iterative/dvc#how-dvc-works or other documentation. I am aware that this is a very vague question. And it will highly depend on the dataset. However, i would still be interested in getting a very approximate idea. | Let me try to summarize how does DVC store data and I hope you'll be able to figure our from this how much space will be saved/consumed in your specific scenario. DVC is storing and deduplicating data on the individual file level. So, what does it usually mean from a practical perspective. I will use dvc add as an example, but the same logic applies to all commands that save data files or directories into DVC cache - dvc add, dvc run, etc. Scenario 1: Modifying file Let's imagine I have a single 1GB XML file. I start tracking it with DVC: $ dvc add data.xml On the modern file system (or if hardlinks, symlinks are enabled, see this for more details) after this command we still consume 1GB (even though file is moved into DVC cache and is still present in the workspace). Now, let's change it a bit and save it again: $ echo "<test/>" >> data.xml $ dvc add data.xml In this case we will have 2GB consumed. DVC does not do diff between two versions of the same file, neither it splits files into chunks or blocks to understand that only small portion of data has changed. To be precise, it calculates md5 of each file and save it in the content addressable key-value storage. md5 of the files serves as a key (path of the file in cache) and value is the file itself: (.env) [ivan@ivan ~/Projects/test]$ md5 data.xml 0c12dce03223117e423606e92650192c (.env) [ivan@ivan ~/Projects/test]$ tree .dvc/cache .dvc/cache βββ 0c βββ 12dce03223117e423606e92650192c 1 directory, 1 file (.env) [ivan@ivan ~/Projects/test]$ ls -lh data.xml data.xml ----> .dvc/cache/0c/12dce03223117e423606e92650192c (some type of link) Scenario 2: Modifying directory Let's now imagine we have a single large 1GB directory images with a lot of files: $ du -hs images 1GB $ ls -l images | wc -l 1001 $ dvc add images At this point we still consume 1GB. Nothing has changed. But if we modify the directory by adding more files (or removing some of them): $ cp /tmp/new-image.png images $ ls -l images | wc -l 1002 $ dvc add images In this case, after saving the new version we still close to 1GB consumption. DVC calculates diff on the directory level. It won't be saving all the files that were existing before in the directory. 
The same logic applies to all commands that save data files or directories into DVC cache - dvc add, dvc run, etc. Please, let me know if it's clear or we need to add more details, clarifications. | 9 | 14 |
60,352,850 | 2020-2-22 | https://stackoverflow.com/questions/60352850/wave-error-unknown-format-3-arises-when-trying-to-convert-a-wav-file-into-text | I need to record an audio from the microphone and convert it into text. I have tried this conversion process using several audio clips that I downloaded from the web and it works fine. But when I try to convert the audio clip I recorded from the microphone it gives the following error. Traceback (most recent call last): File "C:\Users\HP\AppData\Local\Programs\Python\Python37\lib\site-packages\speech_recognition__init__.py", line 203, in enter self.audio_reader = wave.open(self.filename_or_fileobject, "rb") File "C:\Users\HP\AppData\Local\Programs\Python\Python37\lib\wave.py", line 510, in open return Wave_read(f) File "C:\Users\HP\AppData\Local\Programs\Python\Python37\lib\wave.py", line 164, in init self.initfp(f) File "C:\Users\HP\AppData\Local\Programs\Python\Python37\lib\wave.py", line 144, in initfp self._read_fmt_chunk(chunk) File "C:\Users\HP\AppData\Local\Programs\Python\Python37\lib\wave.py", line 269, in _read_fmt_chunk raise Error('unknown format: %r' % (wFormatTag,)) wave.Error: unknown format: 3 The code I am trying is as follows. import speech_recognition as sr import sounddevice as sd from scipy.io.wavfile import write # recording from the microphone fs = 44100 # Sample rate seconds = 3 # Duration of recording myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=2) sd.wait() # Wait until recording is finished write('output.wav', fs, myrecording) # Save as WAV file sound = "output.wav" recognizer = sr.Recognizer() with sr.AudioFile(sound) as source: recognizer.adjust_for_ambient_noise(source) print("Converting audio file to text...") audio = recognizer.listen(source) try: text = recognizer.recognize_google(audio) print("The converted text:" + text) except Exception as e: print(e) I looked at the similar questions that were answered, and they say that we need to convert it into a different wav format. Can someone provide me a code or a library that I can use for this conversion? Thank you in advance. | You wrote the file in float format: soxi output.wav Input File : 'output.wav' Channels : 2 Sample Rate : 44100 Precision : 25-bit Duration : 00:00:03.00 = 132300 samples = 225 CDDA sectors File Size : 1.06M Bit Rate : 2.82M Sample Encoding: 32-bit Floating Point PCM and wave module can't read it. To store int16 format do like this: import numpy as np myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=2) sd.wait() # Wait until recording is finished write('output.wav', fs, myrecording.astype(np.int16)) # Save as WAV file in 16-bit format | 9 | 9 |
60,351,135 | 2020-2-22 | https://stackoverflow.com/questions/60351135/hours-and-minutes-as-labels-in-altair-plot-spanning-more-than-one-day | I'm trying to create in Altair a Vega-Lite specification of a plot of a time series whose time range spans a few days. Since in my case, it will be clear which day is which, I want to reduce noise in my axis labels by letting labels be of the form '%H:%M', even if this causes labels to be non-distinct. Here's some example data; my actual data has a five minute resolution, but I imagine that won't matter too much here: import altair as alt import numpy as np import pandas as pd # Create data spanning 30 hours, or just over one full day df = pd.DataFrame({'time': pd.date_range('2018-01-01', periods=30, freq='H'), 'data': np.arange(30)**.5}) By using the otherwise trivial yearmonthdatehoursminutes transform, I get the following: alt.Chart(df).mark_line().encode(x='yearmonthdatehoursminutes(time):T', y='data:Q') Now, my goal is to get rid of the dates in the labels on the horizontal axis, so they become something like ['00:00', '03:00', ..., '21:00', '00:00', '03:00'], or whatever spacing works best. The naive approach of just using hoursminutes as a transform won't work, as that bins the actual data: alt.Chart(df).mark_line().encode(x='hoursminutes(time):T', y='data:Q') So, is there a declarative way of doing this? Ultimately, the visualization will be making use of selections to define the horizontal axis limits, so specifying the labels explicitly using Axis does not seem appealing. | To expand on @fuglede's answer, there are two distinct concepts at play with dates and times in Altair. Time formats let you specify how times are displayed on an axis; they look like this: chart.encode( x=alt.X('time:T', axis=alt.Axis(format='%H:%M')) ) Altair uses format codes from d3-time-format. Time units let you specify how data will be grouped, and they also adjust the default time format to match. They look something like this: chart.encode( x=alt.X('time:T', timeUnit='hoursminutes') ) or via the shorthand: chart.encode( x='hoursminutes(time):T' ) Available time units are listed here. If you want to adjust axis formats only, use time formats. If you want to group based on timespans (i.e. group data by year, by month, by hour, etc.) then use a time unit. Examples of this appear in the Altair documentation, e.g. the Seattle Weather Heatmap in Altair's example gallery. | 8 | 6 |
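Putting the two concepts together for the question's goal: keep the raw temporal encoding (no time-unit grouping) and only override the label format; a sketch using the df from the question:

alt.Chart(df).mark_line().encode(
    x=alt.X('time:T', axis=alt.Axis(format='%H:%M')),  # labels like 00:00, 03:00, ...
    y='data:Q'
)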
60,351,804 | 2020-2-22 | https://stackoverflow.com/questions/60351804/no-validation-on-field-choices-django-postgres | I created a Student model with field choices. However, when I save it, it doesn't validate whether the choice is in the choices I specified in the model field. Why doesn't it prevent me from saving a new object with a choice I didn't specify in my model? Here is the model: class Student(models.Model): year_in_school = models.CharField( max_length=4, choices= [ ('FRES', 'Freshman'), ('SOPH', 'Sophomore'), ], ) And here is the code I wrote in the shell: >>> from app.models import Student >>> new_student = Student.objects.create(year_in_school='HACK') >>> new_student.year_in_school 'HA' | You might want to read more about choices here. The relevant part copied below: If choices are given, theyβre enforced by model validation Choices are not enforced at the database level. You need to perform model validation (by calling full_clean()) in order to check it. full_clean() will not be called automatically when you call your modelβs save() method. Youβll need to call it manually. | 13 | 13 |
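A sketch of the validating code path the answer refers to (full_clean must be called explicitly before save):

from django.core.exceptions import ValidationError

student = Student(year_in_school='HACK')
try:
    student.full_clean()  # raises for a value outside the declared choices
    student.save()
except ValidationError as e:
    print(e.message_dict)  # e.g. {'year_in_school': [...]}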
60,341,728 | 2020-2-21 | https://stackoverflow.com/questions/60341728/is-there-a-way-to-call-azure-devops-via-python-using-requests | So, from what I see from most sources, they say if you're trying to make a Python program call Azure DevOps API calls, it uses a Python import statement such as: from azure.devops.connection import Connection from msrest.authentication import BasicAuthentication ... Is there any way to use requests or other built-in import statements so I don't have to install these DevOps-specific modules? I'm coding over PuTTY so I don't have a way to install these modules. If anyone's got any solutions or ideas I'd be happy to hear them! | Yes, it is supported to use requests to call the Azure DevOps REST API. Firstly, you need to create a personal access token (PAT). Then you can use the PAT to create the basic auth header and make the request: import requests import base64 pat = 'tcd******************************tnq' authorization = str(base64.b64encode(bytes(':'+pat, 'ascii')), 'ascii') headers = { 'Accept': 'application/json', 'Authorization': 'Basic '+authorization } response = requests.get( url="https://dev.azure.com/jack0503/_apis/projects?api-version=5.1", headers=headers) print(response.text) | 7 | 21 |
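requests can also build the Basic auth header itself, which removes the manual base64 step; a sketch (the organization name is a placeholder):

import requests

pat = 'tcd******************************tnq'
response = requests.get(
    'https://dev.azure.com/your-org/_apis/projects?api-version=5.1',
    headers={'Accept': 'application/json'},
    auth=('', pat))  # empty user plus PAT produces the same Basic header
print(response.status_code)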
60,333,494 | 2020-2-21 | https://stackoverflow.com/questions/60333494/this-paymentmethod-was-previously-used-without-being-attached-to-a-customer-or-w | Having a weird issue here. Following the docs, I am attaching the PaymentMethod to an existing customer, but it's not working. Roughly, I: create a customer create a payment intent with the customer create a card element with the payment intent customer enters card info confirm payment succeeded and sent intent back to backend if the intent has succeeded and the customer chose to save their card, create the payment method with the intent method and customer get error The code: python: stripe.Customer.create(email=user.email, name=user.full_name) python: stripe.PaymentIntent.create(amount=amount, currency="aud", customer=user.stripe_customer_id) js: Stripe('{{ stripe_publishable_key }}').elements().create("card"); user: enters card info js: stripe.confirmCardPayment('{{ clientSecret }}', { payment_method: { card: card, billing_details: { // name: 'Jenny Rosen' }, } }).then(function (result) { if (result.error) { // Show error to your customer (e.g., insufficient funds) console.log(result.error.message); var displayError = document.getElementById('card-errors'); displayError.textContent = result.error.message; } else { // The payment has been processed! if (result.paymentIntent.status === 'succeeded') { // Show a success message to your customer // There's a risk of the customer closing the window before callback // execution. Set up a webhook or plugin to listen for the // payment_intent.succeeded event that handles any business critical // post-payment actions. $('#fake-submit').click(); } } }); python: stripe.PaymentMethod.attach(stripe.PaymentIntent.retrieve(intent_id).payment_method, customer=user.stripe_customer_id) error: Request req_request_id: This PaymentMethod was previously used without being attached to a Customer or was detached from a Customer, and may not be used again. | It looks like there is an issue with the Stripe documentation. On https://stripe.com/docs/payments/save-after-payment#web-collect-card-details they have: setup_future_usage: 'off_session' But on https://stripe.com/docs/payments/save-and-reuse#web-collect-card-details they are missing this critical line. But in your case, does the user select if they want to save their card on the frontend? Then you don't need to save the card on the backend and can save it in the confirmCardPayment call: https://stripe.com/docs/js/payment_intents/confirm_card_payment#stripe_confirm_card_payment-data-save_payment_method : save_payment_method boolean If the PaymentIntent is associated with a customer and this parameter is set to true, the provided payment method will be attached to the customer. Default is false. | 20 | 26 |
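If the save decision is instead made server-side, the flag goes on the PaymentIntent at creation time; a sketch with the stripe Python library (setup_future_usage is the parameter discussed above):

intent = stripe.PaymentIntent.create(
    amount=amount,
    currency='aud',
    customer=user.stripe_customer_id,
    setup_future_usage='off_session',  # ask Stripe to attach the card for later reuse
)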
60,345,906 | 2020-2-21 | https://stackoverflow.com/questions/60345906/alternative-segmentation-techniques-other-than-watershed-for-soil-particles-in-i | I am searching for an alternative way for segmenting the grains in the following image of soil grains other than watershed segmentation in python as it may mislead the right detection for the grains furthermore , I am working on the edge detection image ( using HED algorithm ) as attached .. I hope to find a better way to segment the grains for further processing as I would like to get the area of each polygon in the image in my project .. Thanks in advance I am asking also about random walker segmentation or any other available method. | You could try using Connected Components with Stats already implemented as cv2.connectedComponentsWithStats to perform component labeling. Using your binary image as input, here's the false-color image: The centroid of each object can be found in centroid parameter and other information such as area can be found in the status variable returned from cv2.connectedComponentsWithStats. Here's the image labeled with the area of each polygon. You could filter using a minimum threshold area to only keep larger polygons Code import cv2 import numpy as np # Load image, Gaussian blur, grayscale, Otsu's threshold image = cv2.imread('2.jpg') blur = cv2.GaussianBlur(image, (3,3), 0) gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Perform connected component labeling n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity=4) # Create false color image and color background black colors = np.random.randint(0, 255, size=(n_labels, 3), dtype=np.uint8) colors[0] = [0, 0, 0] # for cosmetic reason we want the background black false_colors = colors[labels] # Label area of each polygon false_colors_area = false_colors.copy() for i, centroid in enumerate(centroids[1:], start=1): area = stats[i, 4] cv2.putText(false_colors_area, str(area), (int(centroid[0]), int(centroid[1])), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 1) cv2.imshow('thresh', thresh) cv2.imshow('false_colors', false_colors) cv2.imshow('false_colors_area', false_colors_area) cv2.waitKey() | 9 | 8 |
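To act on the minimum-area suggestion, the stats array can drive a mask that keeps only the larger grains; a sketch continuing from the code above (the threshold value is arbitrary):

min_area = 100  # hypothetical minimum polygon area in pixels
keep = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area]
mask = np.isin(labels, keep).astype(np.uint8) * 255
cv2.imshow('large_grains', mask)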
60,342,896 | 2020-2-21 | https://stackoverflow.com/questions/60342896/how-to-use-type-hinting-with-dictionaries-and-google-protobuf-enum | I am trying to use a protobuf enum as the type of the values in a dictionary, but it does not work for some reason. My enum definition in proto is: enum Device { UNSPECIFIED = 0; ON = 1; OFF = 2; } After successful compilation and importing, the following code results in an error. from devices_pb2 import Device def foo(device: Device) -> Dict[str, Device]: pass Error message: def foo(device: Device) -> Dict[str, Device]: File "/home/ivan/anaconda3/envs/py37/lib/python3.7/typing.py", line 254, in inner return func(*args, **kwds) File "/home/ivan/anaconda3/envs/py37/lib/python3.7/typing.py", line 629, in __getitem__ params = tuple(_type_check(p, msg) for p in params) File "/home/ivan/anaconda3/envs/py37/lib/python3.7/typing.py", line 629, in <genexpr> params = tuple(_type_check(p, msg) for p in params) File "/home/ivan/anaconda3/envs/py37/lib/python3.7/typing.py", line 142, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Parameters to generic types must be types. Got <google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper object at 0x7f4df6d81850>. However, if I do not use a dictionary then it works just fine: def foo(device: Device) -> Device: pass I wonder if there is a solution to this problem? | Adding the following solved the problem: from __future__ import annotations For more details, please check here. | 11 | 11 |
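An equivalent fix that works without the future import is to quote the annotations so they are never evaluated at definition time; a sketch:

from typing import Dict
from devices_pb2 import Device

def foo(device: 'Device') -> 'Dict[str, Device]':
    pass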
60,330,655 | 2020-2-21 | https://stackoverflow.com/questions/60330655/bizarre-ordering-of-sets-in-python | When I convert a Python 3.8.0 list to a set, the resulting set ordering* is highly structured in a non-trivial way. How is this structure being extracted from the pseudo-random list? As part of an experiment I am running, I am generating a random set. I was surprised to see that plotting the set suddenly showed unexpected linear structure in the set. So there are two things puzzling me - why does converting to a set result have an ordering* which ends up highlighting this structure; and, to a lesser extent why does the pseudo-random set have this "hidden" structure at all? The code: X = [randrange(250) for i in range(30)] print(X) print(set(X)) which outputs, for example [238, 202, 245, 94, 111, 106, 148, 164, 154, 113, 128, 10, 196, 141, 69, 38, 106, 8, 40, 53, 160, 87, 85, 13, 38, 147, 204, 50, 162, 91] {128, 8, 10, 141, 13, 147, 148, 154, 160, 162, 164, 38, 40, 50, 53, 196, 69, 202, 204, 85, 87, 91, 94, 106, 238, 111, 113, 245} A plot** of the above list looks fairly random, as expected: whereas plotting the set (as it is ordered in the output) exhibits the structure present in the set: This behaviour 100% consistent on my machine (more examples below) with the values 250 and 30 used in the above code (the example I used is not cherry picked - it is just the last one I ran). Tuning these values sometimes results in slightly different structure (e.g. a subset of three arithmetic progressions*** instead of two). Is this reproducible on other people's machines? Of course, that such structure exists seems indicative of a not-so-great pseudo-random number generation, but this does not explain how converting to a set would in some sense 'extract' this structure. As far as I am aware, the there is no formal guarantee that the ordering of a set (when converted from a list) is deterministic (and even if it is, there is no sophisticated ordering being done in the background). So how is this happening?! (*): I know, sets are unordered collections, but I mean "ordered" in the sense that, when calling the print statement, the set is output in some order which consistently highlights the underlying set structure. (**): These plots are from Wolfram Alpha. Two more examples are below: (***): Two plots when changing the range of the random numbers from 250 to 500: | Basically, this is because of two things: A set in Python is implemented using a hashtable, The hash of an integer is the integer itself. Therefore, the index that an integer appears in the underlying array will be determined by the integer's value, modulo the length of the underlying array. So, integers will tend to stay in ascending order when you put a contiguous range of them into a set: >>> list(set(range(10000))) == list(range(10000)) True # this can't be an accident! If you don't have all of the numbers from a contiguous range, then the "modulo the length of the underlying array" part comes into play: >>> r = range(0, 50, 4) >>> set(r) {0, 32, 4, 36, 8, 40, 12, 44, 16, 48, 20, 24, 28} >>> sorted(r, key=lambda x: x % 32) [0, 32, 4, 36, 8, 40, 12, 44, 16, 48, 20, 24, 28] The sequence is predictable if you know the length of the underlying array, and the (deterministic) algorithm for adding elements. In this case the array's length is 32, because it's initially 8 and is quadrupled while elements are added. 
Except for a blip near the end (because the numbers 52 and 56 aren't in the set), the range is divided into two sequences 0, 4, 8, ... and 32, 36, 40, ... which alternate because the hashes, which are the numbers' values themselves, are taken modulo 32 to choose indices in the array. There are collisions; for example, 4 and 36 are equal modulo 32, but 4 was added to the set first so 36 ends up at a different index. Here's a chart for this sequence. The structure in your charts is just a noisier version, because you generated your numbers randomly rather than from a range with a step. The number of interleaved sequences will depend on the size of the set in proportion to the length of the range the numbers are sampled from, since that determines how many times the range's length "wraps around" modulo the length of the hashtable's underlying array. Here's an example with three interleaved sequences 0, 6, 12, ..., 66, 72, 78, ... and 36, 42, 48, ...: >>> set(range(0, 90, 6)) {0, 66, 36, 6, 72, 42, 12, 78, 48, 18, 84, 54, 24, 60, 30} | 17 | 18 |
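The slot arithmetic is easy to poke at directly, since hash(n) == n for small ints in CPython; a sketch reproducing the collision between 4 and 36 in a 32-slot table:

for v in (0, 32, 4, 36, 8, 40):
    print(v, '-> slot', hash(v) % 32)
# 4 and 36 land in the same slot, so insertion order decides their final placement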
60,324,614 | 2020-2-20 | https://stackoverflow.com/questions/60324614/suppress-output-on-library-import-in-python | I have a library that I need to import in my code. However, whenever it is imported it outputs several lines of data to the console. How can I suppress the output? Thanks | import os import sys # silence command-line output temporarily by redirecting to the null device sys.stdout = open(os.devnull, 'w') sys.stderr = open(os.devnull, 'w') # import the desired library import library # unsilence command-line output sys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__ | 7 | 11 |
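On Python 3, contextlib offers a tidier version of the same idea that restores the streams automatically; a sketch:

import contextlib
import io

with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(io.StringIO()):
    import library  # the noisy import, silenced for the duration of the block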
60,323,392 | 2020-2-20 | https://stackoverflow.com/questions/60323392/why-1-0-01-99-in-python | I imagine this is a classic floating point precision question, but I am trying to wrap my head around this result, running 1//0.01 in Python 3.7.5 yields 99. I imagine it is an expected result, but is there any way to decide when it is safer to use int(1/f) rather than 1//f ? | If this were division with real numbers, 1//0.01 would be exactly 100. Since they are floating-point approximations, though, 0.01 is slightly larger than 1/100, meaning the quotient is slightly smaller than 100. It's this 99.something value that is then floored to 99. | 33 | 25 |
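The difference is visible directly in the REPL; Decimal shows that the stored 0.01 is slightly above 1/100, which is the value // floors against:

from decimal import Decimal

print(Decimal(0.01))  # 0.01000000000000000020816681711721685...
print(1 / 0.01)       # 100.0 (the true quotient rounds to the nearest double)
print(1 // 0.01)      # 99.0  (floor of the slightly-below-100 exact quotient)
print(int(1 / 0.01))  # 100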
60,321,389 | 2020-2-20 | https://stackoverflow.com/questions/60321389/sklearn-importerror-cannot-import-name-plot-roc-curve | I am trying to plot a Receiver Operating Characteristics (ROC) curve with cross validation, following the example provided in sklearn's documentation. However, the following import gives an ImportError, in both python2 and python3. from sklearn.metrics import plot_roc_curve Error: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name plot_roc_curve python-2.7 sklearn version: 0.20.2. python-3.6 sklearn version: 0.21.3. I found that the following import works fine, but it's not quite the same as plot_roc_curve. from sklearn.metrics import roc_curve Is plot_roc_curve deprecated? Could somebody try the code and let me know the sklearn version if it works? | Plotting API was introduced in the version 0.22. As mentioned here, Scikit-learn 0.20 was the last version to support Python 2.7 and Python 3.4. Scikit-learn now requires Python 3.5 or newer. | 12 | 5 |
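Until an upgrade to scikit-learn >= 0.22 is possible, the same figure can be produced manually from roc_curve; a sketch (y_true and y_score are placeholders for your labels and model scores):

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, label='ROC (AUC = %.3f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()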
60,319,271 | 2020-2-20 | https://stackoverflow.com/questions/60319271/jupyterlab-how-to-clear-output-of-current-cell-using-a-keyboard-shortcut | This question has been asked and answered for Jupyter Notebooks here. There is one suggestion regarding JupyterLab there as well on how to hide cell output, but not to clear it. This is easy enough using the menu under Edit > Clear Outputs. But how do you do it using a keyboard shortcut? Many other commands under Edit already have their own assigned shortcuts, but not this one: | The answer: You'll have to assign a custom shortcut key under Settings > Advanced Settings Editor by inserting the following under User Preferences: {// List of Keyboard Shortcuts "shortcuts": [ { "command": "notebook:clear-cell-output", "keys": [ "F10" ], "selector": ".jp-Notebook.jp-mod-editMode" }, ] } I went for F10, but most other keys or combination of keys should work too. I've also used Ctrl Shift Enter. Where to put it: Some details: If you've assigned other shortcuts, make sure to add it in the correct place in the list of other shortcuts. {// List of Keyboard Shortcuts "shortcuts": [ { "command": "notebook:run-in-console", "keys": [ "F9" ], "selector": ".jp-Notebook.jp-mod-editMode" }, { "command": "notebook:clear-cell-output", "keys": [ "F10" ], "selector": ".jp-Notebook.jp-mod-editMode" }, ] } And a little mystery: If you insert the exact same thing as in the second box, you'll see that the item Run > Run Selected Text or Current Line in Console has gotten a nice F9 right next to it: This will not be the case for the item Edit > Clear Outputs, and I'll have to say that I don't know why. To my knowledge the "command": "notebook:clear-cell-output" that you're assigning to your chosen keyboard shortcut should be that exact functionality. But the good thing is that it works perfectly all the same. At least it does for me. Please note that this approach works best for newer versions of JupyterLab. The correct way will be a bit different for older versions. Here's a python snippet to test it out right away: import pandas as pd import numpy as np df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) print(df) | 10 | 12 |
60,309,604 | 2020-2-19 | https://stackoverflow.com/questions/60309604/aws-cognito-for-django3-drf-authentication | I'm trying to set up an AWS Cognito backend. I have a React frontend already working with it; now I need my DRF API to authenticate using Cognito as the backend. I have found a few Python packages for that, but none of them seem to be actively maintained: django-warrant doesn't work with Django3 and is pretty much dead; Django Cognito JWT seems to be my best bet, but it is also not actively maintained, the documentation is very poor, and there is a medium post on how to use it, not very detailed, but better than nothing. So, I tried to follow the documentation. Added the env vars on my settings COGNITO_AWS_REGION = 'us-east-1' COGNITO_USER_POOL = 'us-east-1_xxxxxxx' # same user pool id I'm using on the React app COGNITO_AUDIENCE = 'XXXXXXXXXXXXXXXXXXXXXX' # the same client id I'm using on the React app Then on my DRF authentication classes: 'DEFAULT_AUTHENTICATION_CLASSES': [ 'django_cognito_jwt.JSONWebTokenAuthentication', 'rest_framework_jwt.authentication.JSONWebTokenAuthentication', 'rest_framework.authentication.BasicAuthentication', 'rest_framework.authentication.SessionAuthentication', ], And finally the user model: AUTH_USER_MODEL = 'accounts.MyUser' COGNITO_USER_MODEL = "accounts.MyUser" My custom User Model: class MyUser(AbstractUser): """User model.""" username = None email = models.EmailField(_('email address'), unique=True) USERNAME_FIELD = 'email' REQUIRED_FIELDS = [] objects = UserManager() I'm also using the DRF JWT package, and if I try to log in with a Cognito user, something like: curl -X POST -H "Content-Type: application/json" -d '{"email":"[email protected]","password":"secretpassword"}' http://localhost/api-token-auth/ I get an error {"non_field_errors":["Unable to log in with provided credentials."]} On the other hand, if I try to log in with a local Django user via Django Rest Framework JWT it works fine and I get the JWT token as a response, so I guess the issue is the Cognito integration. Any idea what I'm missing, or how I can debug this in order to figure out what is happening? UPDATE After digging a bit more into the code, I found out a few things: Even doing a DRF JWT authentication, the code ends up at django/contrib/auth/init.py, where it loops through all the authentication backends for Django, not for DRF: for backend, backend_path in _get_backends(return_tuples=True): It is still using the ModelBackend to authenticate the user. So, I guess I also need to add some Cognito authentication backend for Django. I checked if I could just use the same backend used on DRF, but then I got an error of invalid argument: TypeError: authenticate() got an unexpected keyword argument 'email' UPDATE 2 It seems that one of the issues is that I use email instead of username to authenticate, and none of the packages seem to support it | Basically, there are 2 steps to achieve your goal. 1. Get the IdToken and AccessToken from Cognito using the Boto3 library. 2. Pass the IdToken in the pattern Authorization: Bearer <IdToken> when calling the API via curl. Unlike rest_framework_jwt, django_cognito_jwt only deals with the JWT token in the header; django_cognito_jwt does not cover step 1. That is why you are getting the error TypeError: authenticate() got an unexpected keyword argument 'email'. So the solution is to use boto3 to obtain the IdToken and then call curl with an Authorization: Bearer <IdToken> header. | 7 | 5 |
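A minimal sketch of step 1 from the answer above, assuming the user pool's app client has the USER_PASSWORD_AUTH flow enabled; the region, client id, and credentials are placeholders:

    import boto3

    client = boto3.client("cognito-idp", region_name="us-east-1")
    resp = client.initiate_auth(
        ClientId="XXXXXXXXXXXXXXXXXXXXXX",  # the app client id from settings
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": "[email protected]", "PASSWORD": "secretpassword"},
    )
    id_token = resp["AuthenticationResult"]["IdToken"]
    # Step 2: curl -H "Authorization: Bearer <id_token>" http://localhost/api/...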
60,306,156 | 2020-2-19 | https://stackoverflow.com/questions/60306156/simplehttpserver-not-found-python3 | I'm trying to write a simple server in python. So after watching tutorial, I'm trying to import a few modules. from http.server import HTTPServer from http.server import SimpleHTTPServer As the doc says, it has been moved, that's why i'm doing so. But it gives me this error : from http.server import SimpleHTTPServer ImportError: cannot import name 'SimpleHTTPServer' And without SimpleHTTPServer I can't use SimpleHTTPRequestHandler, as it is defined in SimpleHTTPServer.SimpleHTTPRequestHandler. How can I resolve this ? | The SimpleHTTPServer module was moved to be the module http.server. So the command is: python3 -m http.server Also, the new SimpleHTTPRequestHandler object is BaseHTTPRequestHandler. | 16 | 30 |
60,303,682 | 2020-2-19 | https://stackoverflow.com/questions/60303682/why-is-pip-installing-an-incompatible-package-version | I am using pip 20.0.2 on Ubuntu, and installing a bunch of requirements from a requirements file. For some reason, pip is deciding to install idna==2.9(link), even though that is not a compatible version with one of my directly listed dependencies. So I used python -m pipdeptree -r within the virtualenv that I'm installing everything to, and I see this listed for idna: idna==2.9 - cryptography==2.3.1 [requires: idna>=2.1] - requests==2.22.0 [requires: idna>=2.5,<2.9] - requests-oauthlib==1.3.0 [requires: requests>=2.0.0] - social-auth-core==3.2.0 [requires: requests-oauthlib>=0.6.1] - social-auth-app-django==2.1.0 [requires: social-auth-core>=1.2.0] - responses==0.10.9 [requires: requests>=2.0] - social-auth-core==3.2.0 [requires: requests>=2.9.1] - social-auth-app-django==2.1.0 [requires: social-auth-core>=1.2.0] As we can see, my two direct dependencies (cryptography and requests), are what require idna. According to those, it looks like pip should decide to install 2.8, because it is the latest version that will fulfill the constraints. Why is pip instead installing idna 2.9, as indicated by the top line of that output, and this error message when running pip install -r requirements.txt: ERROR: requests 2.22.0 has requirement idna<2.9,>=2.5, but you'll have idna 2.9 which is incompatible. EDIT: the contents of requirements.txt and it's children, as requested in the comments: # requirements.txt -r requirements/requirements-base.txt -r requirements/requirements-testing.txt # requirements-base.txt cryptography~=2.3.1 pyjwt~=1.6.4 requests~=2.22.0 social-auth-app-django~=2.1.0 # requirements-testing.txt hypothesis~=3.87.0 pytest~=3.6.2 pytest-django~=3.3.2 pytest-cov~=2.5.1 responses~=0.10.5 Edit 2: I've created a minimally viable example. 
For this example, here is requirements.txt: cryptography~=2.3.1 requests~=2.22.0 And here are the commands I ran from start to finish in a fresh directory: virtualenv -p python3.6 -v venv source venv/bin/activate pip install -r requirements.txt --no-cache-dir And the full output: Collecting cryptography~=2.3.1 Downloading cryptography-2.3.1-cp34-abi3-manylinux1_x86_64.whl (2.1 MB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 2.1 MB 2.0 MB/s Collecting requests~=2.22.0 Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 57 kB 18.5 MB/s Collecting asn1crypto>=0.21.0 Downloading asn1crypto-1.3.0-py2.py3-none-any.whl (103 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 103 kB 65.4 MB/s Collecting idna>=2.1 Downloading idna-2.9-py2.py3-none-any.whl (58 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 58 kB 71.4 MB/s Collecting six>=1.4.1 Downloading six-1.14.0-py2.py3-none-any.whl (10 kB) Collecting cffi!=1.11.3,>=1.7 Downloading cffi-1.14.0-cp36-cp36m-manylinux1_x86_64.whl (399 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 399 kB 30.3 MB/s Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 Downloading urllib3-1.25.8-py2.py3-none-any.whl (125 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 125 kB 46.7 MB/s Collecting certifi>=2017.4.17 Downloading certifi-2019.11.28-py2.py3-none-any.whl (156 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 156 kB 65.1 MB/s Collecting chardet<3.1.0,>=3.0.2 Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 133 kB 60.8 MB/s Collecting pycparser Downloading pycparser-2.19.tar.gz (158 kB) |━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━| 158 kB 25.0 MB/s Building wheels for collected packages: pycparser Building wheel for pycparser (setup.py) ... done Created wheel for pycparser: filename=pycparser-2.19-py2.py3-none-any.whl size=111031 sha256=030a1449dd5902f2f03e9e2f8f9cc6760503136a9243e965237a1ece1196502a Stored in directory: /tmp/pip-ephem-wheel-cache-c_dx8qi5/wheels/c6/6b/83/2608afaa57ecfb0a66ac89191a8d9bad71c62ca55ee499c2d0 Successfully built pycparser ERROR: requests 2.22.0 has requirement idna<2.9,>=2.5, but you'll have idna 2.9 which is incompatible. Installing collected packages: asn1crypto, idna, six, pycparser, cffi, cryptography, urllib3, certifi, chardet, requests Successfully installed asn1crypto-1.3.0 certifi-2019.11.28 cffi-1.14.0 chardet-3.0.4 cryptography-2.3.1 idna-2.9 pycparser-2.19 requests-2.22.0 six-1.14.0 urllib3-1.25.8 | Pip does not have a dependency resolver. If you tell it to install package foo without any qualifications, you're getting the newest version of foo, even if it conflicts with other packages you've already installed. Other solutions like poetry exist which do have the logic to keep everything compatible. If you need this, consider using something like that instead of plain pip. | 12 | 10 |
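A hedged workaround sketch for the entry above: with the resolver-less pip of that era, stating the transitive bound yourself as a top-level requirement made pip pick a compatible idna (pip 20.3+ later shipped a real resolver that handles this automatically):

    # requirements.txt: repeat requests' idna bound explicitly
    idna>=2.5,<2.9
    cryptography~=2.3.1
    requests~=2.22.0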
60,299,967 | 2020-2-19 | https://stackoverflow.com/questions/60299967/how-to-get-allocated-gpu-spec-in-google-colab | I'm using Google Colab for deep learning and I'm aware that they randomly allocate GPU's to users. I'd like to be able to see which GPU I've been allocated in any given session. Is there a way to do this in Google Colab notebooks? Note that I am using Tensorflow if that helps. | Since you can run bash command in colab, just run !nvidia-smi: | 51 | 62 |
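Since the entry above mentions TensorFlow, the allocated device can also be checked in-process with the TF 2.x API, in addition to !nvidia-smi:

    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))  # e.g. [PhysicalDevice(name='/physical_device:GPU:0', ...)]
    print(tf.test.gpu_device_name())               # e.g. '/device:GPU:0'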
60,294,634 | 2020-2-19 | https://stackoverflow.com/questions/60294634/select-first-row-when-there-are-multiple-rows-with-repeated-values-in-a-column | I want to select the first row when there are multiple rows with repeated values in a column. For example: import pandas as pd df = pd.DataFrame({'col1':['one', 'one', 'one', 'one', 'one', 'one', 'one', 'one'], 'col2':['ID=ABCD1234', 'ID=ABCD1234', 'ID=ABCD1234', 'ID=ABCD5678', 'ID=ABCD5678', 'ID=ABCD5678', 'ID=ABCD9102', 'ID=ABCD9102']}) The pandas dataframe looks like this: print(df) col1 col2 0 one ID=ABCD1234 1 one ID=ABCD1234 2 one ID=ABCD1234 3 one ID=ABCD5678 4 one ID=ABCD5678 5 one ID=ABCD5678 6 one ID=ABCD9102 7 one ID=ABCD9102 I want the row 0, row 3, and row 6 to be selected and output as a new dataframe. Expected output: col1 col2 0 one ID=ABCD1234 3 one ID=ABCD5678 6 one ID=ABCD9102 | You can use: df.drop_duplicates(subset = ['col2'], keep = 'first', inplace = True) | 8 | 13 |
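A quick check of the accepted answer above on the question's frame (keep='first' is the default, so it can be omitted; dropping inplace=True returns a new frame instead):

    out = df.drop_duplicates(subset=['col2'])
    print(out)
    #   col1         col2
    # 0  one  ID=ABCD1234
    # 3  one  ID=ABCD5678
    # 6  one  ID=ABCD9102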
60,254,571 | 2020-2-17 | https://stackoverflow.com/questions/60254571/no-module-named-socks | import requests, socket, socks ModuleNotFoundError: No module named 'socks' I have tried pip install socks, and followed instructions of other stackoverflow posts but none of them worked. I'm working on pycharm right now and I have also installed socks and socket on there, in fact, it does show I have installed it. And yes, I do have it on the right interpreter. How can I solve this? | I think you mean to install PySocks, so do pip install PySocks. pypi docs for PySocks pip install socks installs something different. pypi docs for socks | 14 | 31 |
60,193,899 | 2020-2-12 | https://stackoverflow.com/questions/60193899/venv-not-respecting-copies-argument | I am ssh'd into a development environment (vagrant Ubuntu box) and my project directory is mapped to another filesystem (via vbox) so symlinks are not supported. I am attempting to create a new venv, but the --copies flag isn't being respected. $sudo python -m venv --copies venv Error: [Errno 71] Protocol error: 'lib' -> '/home/vagrant/vagrant_projects/rurp/venv/lib64' If I use python 2.7 ($virtualenv venv --always-copy) it works, but not with the python3 venv --copies implementation. The --always-copy argument was a workaround for similar issues with python2.x. I could not find anything online indicating a bug in venv and am at a bit of a loss. Has anyone else had this issue? $ python -V Python 3.6.9 Thank you in advance. Edit: Also tested in python 3.8.1. | Per @chepner's comment above, it looks like the --copies argument is ignored on non-Windows systems (no mention of this in the documentation). I was able to work around the issue by creating the venv in a local directory, manually copying the symlinked lib64 to a real directory, moving the venv to my project folder and manually updating the activation scripts. Ugly, but it works. $cd ~ $python3 -m venv --copies --clear venv $cp -r --remove-destination `readlink lib64` lib64 $cp -r venv vagrant_project/rurp/ I would be happy to accept a more elegant answer. | 7 | 5 |
60,224,850 | 2020-2-14 | https://stackoverflow.com/questions/60224850/send-mail-python-asyncio | I'm trying to learn asyncio. If I run this program normally, without the asyncio library, then it takes less time, while it takes more time this way. So is this the right way to send mail using asyncio, or is there any other way? import smtplib import ssl import time import asyncio async def send_mail(receiver_email): try: print(f"trying..{receiver_email}") server = smtplib.SMTP(smtp_server, port) server.ehlo() server.starttls(context=context) server.ehlo() server.login(sender_email, password) message = "test" await asyncio.sleep(0) server.sendmail(sender_email, receiver_email, message) print(f"done...{receiver_email}") except Exception as e: print(e) finally: server.quit() async def main(): t1 = time.time() await asyncio.gather( send_mail("[email protected]"), send_mail("[email protected]"), send_mail("[email protected]"), send_mail("[email protected]") ) print(f"End in {time.time() - t1}sec") if __name__ == "__main__": smtp_server = "smtp.gmail.com" port = 587 # For starttls sender_email = "*****" password = "*****" context = ssl.create_default_context() asyncio.run(main()) | You are not really sending your emails asynchronously here; smtplib's calls are blocking. You should be using aiosmtplib for making asynchronous SMTP calls such as connect, starttls, login, etc. See the following example, which I have stripped down from a more complicated program that handled attachments. This code sends two emails asynchronously: #!/usr/bin/env python3 import asyncio import aiosmtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText MAIL_PARAMS = {'TLS': True, 'host': 'xxxxxxxx', 'password': 'xxxxxxxx', 'user': 'xxxxxxxx', 'port': 587} async def send_mail_async(sender, to, subject, text, textType='plain', **params): """Send an outgoing email with the given parameters. :param sender: From whom the email is being sent :type sender: str :param to: A list of recipient email addresses. :type to: list :param subject: The subject of the email. :type subject: str :param text: The text of the email. :type text: str :param textType: Mime subtype of text, defaults to 'plain' (can be 'html'). :type textType: str :param params: An optional set of parameters. (See below) :type params: dict Optional Parameters: :cc: A list of Cc email addresses. :bcc: A list of Bcc email addresses. 
""" # Default Parameters cc = params.get("cc", []) bcc = params.get("bcc", []) mail_params = params.get("mail_params", MAIL_PARAMS) # Prepare Message msg = MIMEMultipart() msg.preamble = subject msg['Subject'] = subject msg['From'] = sender msg['To'] = ', '.join(to) if len(cc): msg['Cc'] = ', '.join(cc) if len(bcc): msg['Bcc'] = ', '.join(bcc) msg.attach(MIMEText(text, textType, 'utf-8')) # Contact SMTP server and send Message host = mail_params.get('host', 'localhost') isSSL = mail_params.get('SSL', False) isTLS = mail_params.get('TLS', False) if isSSL and isTLS: raise ValueError('SSL and TLS cannot both be True') port = mail_params.get('port', 465 if isSSL else 25) # For aiosmtplib 3.0.1 we must set argument start_tls=False # because we will explicitly be calling starttls ourselves when # isTLS is True: smtp = aiosmtplib.SMTP(hostname=host, port=port, start_tls=False, use_tls=isSSL) await smtp.connect() if isTLS: await smtp.starttls() if 'user' in mail_params: await smtp.login(mail_params['user'], mail_params['password']) await smtp.send_message(msg) await smtp.quit() async def main(): email_address = 'xxxxxxxx'; co1 = send_mail_async(email_address, [email_address], 'Test 1', 'Test 1 Message', textType='plain' ) co2 = send_mail_async(email_address, [email_address], 'Test 2', 'Test 2 Message', textType='plain' ) await asyncio.gather(co1, co2) if __name__ == "__main__": asyncio.run(main()) | 7 | 18 |
60,247,157 | 2020-2-16 | https://stackoverflow.com/questions/60247157/how-can-i-get-stub-files-for-matplotlib-numpy-scipy-pandas-etc | I know that the stub files for the Python standard library, used for type checking and static analysis, come with the mypy or PyCharm installation. How can I get stub files for matplotlib, numpy, scipy, pandas, etc.? | Type stubs are sometimes packaged directly with the library. Otherwise, external packages sometimes provide them. Numpy Starting with numpy 1.20, type stubs are included in numpy itself. See this changelog and this PR adding them. Before that, they could be added with the library https://github.com/numpy/numpy-stubs Pandas and Matplotlib For pandas, official stub support can be found in the pandas-stubs repository, which is the official pandas stubs project. There is no official support for stubs in Matplotlib, but you can check for community-contributed stubs or create your own if needed. In some cases, you might also find unofficial stubs in projects like data-science-types, but for the most up-to-date and reliable stubs, it's recommended to use the official sources mentioned above. | 58 | 24 |
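Typical installation commands for the stub sources named in the answer above (a sketch; the package names are on PyPI, but check their current maintenance status):

    pip install "numpy>=1.20"       # numpy ships its own inline stubs from 1.20 on
    pip install pandas-stubs        # the official pandas stubs package
    pip install data-science-types  # unofficial community stubs (incl. matplotlib)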
60,229,375 | 2020-2-14 | https://stackoverflow.com/questions/60229375/solution-for-specificationerror-nested-renamer-is-not-supported-while-agg-alo | def stack_plot(data, xtick, col2='project_is_approved', col3='total'): ind = np.arange(data.shape[0]) plt.figure(figsize=(20,5)) p1 = plt.bar(ind, data[col3].values) p2 = plt.bar(ind, data[col2].values) plt.ylabel('Projects') plt.title('Number of projects aproved vs rejected') plt.xticks(ind, list(data[xtick].values)) plt.legend((p1[0], p2[0]), ('total', 'accepted')) plt.show() def univariate_barplots(data, col1, col2='project_is_approved', top=False): # Count number of zeros in dataframe python: https://stackoverflow.com/a/51540521/4084039 temp = pd.DataFrame(project_data.groupby(col1)[col2].agg(lambda x: x.eq(1).sum())).reset_index() # Pandas dataframe grouby count: https://stackoverflow.com/a/19385591/4084039 temp['total'] = pd.DataFrame(project_data.groupby(col1)[col2].agg({'total':'count'})).reset_index()['total'] temp['Avg'] = pd.DataFrame(project_data.groupby(col1)[col2].agg({'Avg':'mean'})).reset_index()['Avg'] temp.sort_values(by=['total'],inplace=True, ascending=False) if top: temp = temp[0:top] stack_plot(temp, xtick=col1, col2=col2, col3='total') print(temp.head(5)) print("="*50) print(temp.tail(5)) univariate_barplots(project_data, 'school_state', 'project_is_approved', False) Error: SpecificationError Traceback (most recent call last) <ipython-input-21-2cace8f16608> in <module>() ----> 1 univariate_barplots(project_data, 'school_state', 'project_is_approved', False) <ipython-input-20-856fcc83737b> in univariate_barplots(data, col1, col2, top) 4 5 # Pandas dataframe grouby count: https://stackoverflow.com/a/19385591/4084039 ----> 6 temp['total'] = pd.DataFrame(project_data.groupby(col1)[col2].agg({'total':'count'})).reset_index()['total'] 7 print (temp['total'].head(2)) 8 temp['Avg'] = pd.DataFrame(project_data.groupby(col1)[col2].agg({'Avg':'mean'})).reset_index()['Avg'] ~\AppData\Roaming\Python\Python36\site-packages\pandas\core\groupby\generic.py in aggregate(self, func, *args, **kwargs) 251 # but not the class list / tuple itself. 252 func = _maybe_mangle_lambdas(func) --> 253 ret = self._aggregate_multiple_funcs(func) 254 if relabeling: 255 ret.columns = columns ~\AppData\Roaming\Python\Python36\site-packages\pandas\core\groupby\generic.py in _aggregate_multiple_funcs(self, arg) 292 # GH 15931 293 if isinstance(self._selected_obj, Series): --> 294 raise SpecificationError("nested renamer is not supported") 295 296 columns = list(arg.keys()) SpecificationError: **nested renamer is not supported** | In this specific case you can change temp['total'] = pd.DataFrame(project_data.groupby(col1)[col2].agg({'total':'count'})).reset_index()['total'] temp['Avg'] = pd.DataFrame(project_data.groupby(col1)[col2].agg({'Avg':'mean'})).reset_index()['Avg'] to the new syntax temp['total'] = pd.DataFrame(project_data.groupby(col1)[col2].agg(total='count')).reset_index()['total'] temp['Avg'] = pd.DataFrame(project_data.groupby(col1)[col2].agg(Avg='mean')).reset_index()['Avg'] New syntax: df.groupby( [columns] ).agg( new_column_name=("column", aggregation_function) ) The reason for this is that in newer pandas versions, named aggregation is the recommended replacement for the deprecated dict-of-dicts approach to naming the output of column-specific aggregations ("Deprecate groupby.agg() with a dictionary when renaming"). source: https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.25.0.html | 61 | 78 |
60,251,799 | 2020-2-16 | https://stackoverflow.com/questions/60251799/how-to-start-a-new-django-project-using-poetry | How to start a new Django project using poetry? With virtualenv it is simple: virtualenv -p python3 env_name --no-site-packages source env_name/bin/activate pip install django django-admin.py startproject demo pip freeze > requirements.txt What will be equivalent to this using Poetry? | Create a new project folder and step in: $ mkdir djangodemo $ cd djangodemo Create a basic pyproject.toml with django as dependency: $ poetry init --no-interaction --dependency django Create venv with all dependencies needed: $ poetry install Init your demo-project: For Django versions after 4: $ poetry run django-admin startproject djangodemo For Django version less than 4: $ poetry run django-admin.py startproject djangodemo | 15 | 30 |
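Typical follow-up commands for the entry above (a sketch; "djangodemo" is the project name from the example, and the manage.py path is the one startproject creates):

    poetry add djangorestframework                     # add another dependency to pyproject.toml
    poetry run python djangodemo/manage.py runserver   # run the dev server inside the venv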
60,212,658 | 2020-2-13 | https://stackoverflow.com/questions/60212658/issues-with-pyenv-virtualenv-python-and-pip-not-changed-when-activating-deact | I installed pyenv-virtualenv using Linuxbrew (Homebrew 2.2.5) on my Ubuntu 16.04 VPS. The pyenv version is: 1.2.16. Now when I do a test like this: pyenv install 3.8.1 pyenv virtualenv 3.8.1 test cd /.pyenv/versions/3.8.1/envs/test pyenv local 3.8.1 Then entering / leaving the /.pyenv/versions/3.8.1/envs/test doesn't activate / deactivate the virtual environment and I don't see (test) username:~ in my shell. I also created a /home/users/test directory and .python-version there but still entering / leaving the directory does nothing. According to the documentation: If eval "$(pyenv virtualenv-init -)" is configured in your shell, pyenv-virtualenv will automatically activate/deactivate virtualenvs on entering/leaving directories which contain a .python-version file that contains the name of a valid virtual environment as shown in the output of pyenv virtualenvs (e.g., venv34 or 3.4.3/envs/venv34 in example above). .python-version files are used by pyenv to denote local Python versions and can be created and deleted with the pyenv local command. So the first question is: why doesn't this work? Why is the virtual environment not activated / deactivated automatically when entering / leaving a directory containing a .python-version file? Also, when I activate the virtualenv by hand with pyenv activate test and then check the Python version, it prints the system Python version and not the one from the environment: Python 3.8.1: python --version Python 3.7.6 I can get the right Python version only by directly referring to the virtualenv shims Python like this: which python /home/andre/.pyenv/shims/python /home/andre/.pyenv/shims/python --version Python 3.8.1 The behaviour is the same whether the virtualenv "test" is activated or not. I would expect that after activating "test" the command python --version returns Python 3.8.1 So the second question: why are pip and python not switched when activating / deactivating the virtual environment? Are these pyenv bugs? Or am I doing something wrong? | It turns out that in order to automatically activate / deactivate a venv when entering / leaving a directory, the .python-version file in there must contain the venv name and not the Python version associated with that venv. So executing: pyenv local 3.8.1 creates a .python-version file which only includes the Python version 3.8.1. Then entering / leaving a directory containing a .python-version file will set / unset the Python version specified in that file but will not activate / deactivate any venv. To create a .python-version file that will do both (activate a virtual environment and set the Python version), the command should look like: pyenv local test where test is a venv created with: pyenv virtualenv 3.8.1 test. So changing 3.8.1 to test in the .python-version fixed the problem. After I did this the venv was activated / deactivated when entering / leaving a directory containing .python-version. But still the Python version did not change to the one associated with the venv (in this case 3.8.1). Then I found out that I had two lines in my .profile that were causing this problem: alias python=/home/linuxbrew/.linuxbrew/bin/python3 alias pip=/home/linuxbrew/.linuxbrew/bin/pip3 After removing these lines everything works as expected. 
If you are still having problems, make sure you have these lines in your .profile or .bash_profile, whichever one you are using: export PATH="$HOME/.pyenv/bin:$PATH" eval "$(pyenv init -)" eval "$(pyenv virtualenv-init -)" if command -v pyenv 1>/dev/null 2>&1; then export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" eval "$(pyenv init --path)" eval "$(pyenv init -)" fi | 11 | 13 |
60,199,316 | 2020-2-13 | https://stackoverflow.com/questions/60199316/how-to-save-a-list-of-numpy-arrays-into-a-single-file-and-load-file-back-to-orig | I am currently trying to save a list of numpy arrays into a single file; an example of such a list is of the form below: import numpy as np np_list = [] for i in range(10): if i % 2 == 0: np_list.append(np.random.randn(64)) else: np_list.append(np.random.randn(32, 64)) I can combine all of them into a single file using savez by iterating through the list, but is there any other way? I am trying to save weights returned by the function model.get_weights(), which is a list of ndarrays, and after retrieving the weights from the saved file I intend to load those weights into another model using model.set_weights(np_list). Therefore the format of the list must remain the same. Let me know if anyone has an elegant way of doing this. | I would go with np.save and np.load because it's platform-independent, faster than savetxt, and works with lists of arrays, for example: import numpy as np a = [ np.arange(100), np.arange(200) ] np.save('a.npy', np.array(a, dtype=object), allow_pickle=True) b = np.load('a.npy', allow_pickle=True) This is the documentation for np.save and np.load. And in this answer you can find a better discussion: How to save and load numpy.array() data properly? Edit As @AlexP mentioned, numpy >= v1.24.2 no longer implicitly builds arrays from sequences of different sizes and types, which is why the explicit dtype=object cast is necessary. | 12 | 20 |
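A round-trip sketch for the use case in the question above (the object array loads back and converts to a plain list, suitable for the hypothetical Keras-style model.set_weights call):

    import numpy as np

    np_list = [np.random.randn(64), np.random.randn(32, 64)]
    np.save('weights.npy', np.array(np_list, dtype=object), allow_pickle=True)

    restored = list(np.load('weights.npy', allow_pickle=True))  # back to a list of ndarrays
    # model.set_weights(restored)  # hypothetical model from the question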
60,285,826 | 2020-2-18 | https://stackoverflow.com/questions/60285826/transaction-atomic-needed-for-bulk-create | I'm using the bulk_create method from Django to create many entries at once. To ensure that the changes are only committed if there is no exception I'm thinking about adding transaction.atomic() to the code blocks, but I'm not sure if I need to add it. From my understanding I only need to add it in Scenario 2 because in this case I'm executing more than one query. Scenario 1 Create 1.000 entries in one query Entry.objects.bulk_create([ Entry(headline='This is a test'), Entry(headline='This is only a test'), # ... ]) Scenario 2 Create 10.000 entries in batches of 1.000 Entry.objects.bulk_create([ Entry(headline='This is a test'), Entry(headline='This is only a test'), # ... ], batch_size=1_000) | No, you don't have to for either scenario. According to the Django source code, using transaction.atomic would be redundant for bulk_create, as that method already uses atomic transactions. | 11 | 17 |
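A hedged caveat sketch for the entry above: if several separate bulk_create calls must commit or roll back together (for example across models), an explicit wrapper is still the right tool; the Tag model and batch variables below are hypothetical:

    from django.db import transaction

    with transaction.atomic():  # all-or-nothing across both calls
        Entry.objects.bulk_create(entry_batch)
        Tag.objects.bulk_create(tag_batch)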
60,247,155 | 2020-2-16 | https://stackoverflow.com/questions/60247155/how-to-bypass-the-message-your-connection-is-not-private-on-non-secure-page-us | I'm trying to interact with the page "Your connection is not private". The solution of using options.add_argument('--ignore-certificate-errors') is not helpful for two reasons: I'm using an already open window. Even if I was using a "selenium opened window" the script runs non stop, and the issue I'm trying to solve is when my browser disconnects from a splunk dashboard and I want it to automatically connect again (and it pops the private connection window). How do I click on "Advanced" and then click on "Proceed to splunk_server (unsafe)"? | For chrome: from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument('--ignore-ssl-errors=yes') options.add_argument('--ignore-certificate-errors') driver = webdriver.Chrome(options=options) If that does not work, then try this: from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument('--allow-insecure-localhost') # may differ by driver version; can be ignored caps = options.to_capabilities() caps["acceptInsecureCerts"] = True driver = webdriver.Chrome(desired_capabilities=caps) For firefox: from selenium import webdriver profile = webdriver.FirefoxProfile() profile.accept_untrusted_certs = True driver = webdriver.Firefox(firefox_profile=profile) driver.get('https://cacert.org/') driver.close() If that does not work, then try this: capabilities = webdriver.DesiredCapabilities().FIREFOX capabilities['acceptSslCerts'] = True driver = webdriver.Firefox(capabilities=capabilities) driver.get('https://cacert.org/') driver.close() All of the above worked for me! | 21 | 50 |
60,219,622 | 2020-2-14 | https://stackoverflow.com/questions/60219622/python-convert-dcm-to-png-images-are-too-bright | I have to convert some files which come by default as .dcm to .png, I've found some code samples to achieve that around here but the end results are too bright. Could anybody have a look at this, please? def convert_to_png(file): ds = pydicom.dcmread(file) shape = ds.pixel_array.shape # Convert to float to avoid overflow or underflow losses. image_2d = ds.pixel_array.astype(float) # Rescaling grey scale between 0-255 image_2d_scaled = (np.maximum(image_2d,0) / image_2d.max()) * 255.0 # Convert to uint image_2d_scaled = np.uint8(image_2d_scaled) # Write the PNG file with open(f'{file.strip(".dcm")}.png', 'wb') as png_file: w = png.Writer(shape[1], shape[0], greyscale=True) w.write(png_file, image_2d_scaled) I've tweaked around the code but nothing seems to work. This is how the actual thing looks like as dicom and on the right side is the result of running this code | Some DICOM datasets require window center/width rescaling of the original pixel intensities (via the (0028,1050) Window Center and (0028,1051) Window Width elements in the VOI LUT Module) in order to reproduce the way they were "viewed". pydicom has a function apply_voi_lut() for applying this windowing: from pydicom import dcmread from pydicom.pixel_data_handlers.util import apply_voi_lut ds = dcmread(file) if 'WindowWidth' in ds: print('Dataset has windowing') windowed = apply_voi_lut(ds.pixel_array, ds) # Add code for rescaling to 8-bit... Depending on the dataset type you may need to use apply_modality_lut() beforehand. | 10 | 17 |
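A sketch completing the "# Add code for rescaling to 8-bit..." step from the answer above, assuming Pillow is installed, a non-constant image, and a standard (MONOCHROME2) photometric interpretation; "input.dcm" is a placeholder path:

    import numpy as np
    from PIL import Image
    from pydicom import dcmread
    from pydicom.pixel_data_handlers.util import apply_voi_lut

    ds = dcmread("input.dcm")
    arr = apply_voi_lut(ds.pixel_array, ds) if "WindowWidth" in ds else ds.pixel_array
    arr = arr.astype(float)
    arr = (arr - arr.min()) / (arr.max() - arr.min()) * 255.0  # rescale to 0..255
    Image.fromarray(arr.astype(np.uint8)).save("input.png")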
60,244,570 | 2020-2-16 | https://stackoverflow.com/questions/60244570/multivariate-multiple-regression-using-python-libraries | Is there any library to perform a Multivariate Multiple Regression (a Multiple Regression with multiple dependent variables) in Python? Greetings and thanks in advance | You can try the modules in sklearn; the response variable can be 2 or more dimensional, and I think it works for OLS (linear regression), lasso, ridge... The models in statsmodels can only do 1 response (just checked). Example dataset: import pandas as pd from sklearn.datasets import load_iris iris = load_iris() df = pd.DataFrame(data= iris['data'], columns= iris['feature_names'] ) df.shape (150, 4) Now we do the fit: from sklearn import linear_model clf = linear_model.LinearRegression() clf.fit(df[['sepal length (cm)']],df[['petal length (cm)','petal width (cm)']]) clf.coef_ array([[1.85843298], [0.75291757]]) You can see the coefficients are the same as when you fit one response in this case: clf.fit(df[['sepal length (cm)']],df[['petal width (cm)']]) clf.coef_ array([[0.75291757]]) | 7 | 5 |
60,279,160 | 2020-2-18 | https://stackoverflow.com/questions/60279160/compare-two-dataframes-pyspark | I'm trying to compare two data frames which have the same number of columns, i.e. 4 columns with id as the key column in both data frames df1 = spark.read.csv("/path/to/data1.csv") df2 = spark.read.csv("/path/to/data2.csv") Now I want to append a new column to DF2, i.e. column_names, which is the list of the columns with different values than df1 df2.withColumn("column_names",udf()) DF1 +------+------+------+---------+ | id | name | sal | Address | +------+------+------+---------+ | 1| ABC | 5000 | US | | 2| DEF | 4000 | UK | | 3| GHI | 3000 | JPN | | 4| JKL | 4500 | CHN | +------+------+------+---------+ DF2: +------+-------+------+---------+ | id | name | sal | Address | +------+-------+------+---------+ | 1| ABC | 5000 | US | | 2| DEF | 4000 | CAN | | 3| GHI | 3500 | JPN | | 4| JKL_M | 4800 | CHN | +------+-------+------+---------+ Now I want DF3 DF3: +------+-------+------+---------+--------------+ | id | name | sal | Address | column_names | +------+-------+------+---------+--------------+ | 1| ABC | 5000 | US | [] | | 2| DEF | 4000 | CAN | [address] | | 3| GHI | 3500 | JPN | [sal] | | 4| JKL_M | 4800 | CHN | [name,sal] | +------+-------+------+---------+--------------+ I saw this SO question, How to compare two dataframe and print columns that are different in scala. Tried that, however the result is different. I'm thinking of going with a UDF by passing a row from each dataframe to the udf, comparing column by column, and returning a column list. However for that both the data frames should be in sorted order so that the same id rows will be sent to the udf. Sorting is a costly operation here. Any solution? | Assuming that we can use id to join these two datasets, I don't think that there is a need for a UDF. This could be solved just by using inner join, array and array_remove functions among others. First let's create the two datasets: df1 = spark.createDataFrame([ [1, "ABC", 5000, "US"], [2, "DEF", 4000, "UK"], [3, "GHI", 3000, "JPN"], [4, "JKL", 4500, "CHN"] ], ["id", "name", "sal", "Address"]) df2 = spark.createDataFrame([ [1, "ABC", 5000, "US"], [2, "DEF", 4000, "CAN"], [3, "GHI", 3500, "JPN"], [4, "JKL_M", 4800, "CHN"] ], ["id", "name", "sal", "Address"]) First we do an inner join between the two datasets, then we generate the condition df1[col] != df2[col] for each column except id. When the columns aren't equal we return the column name, otherwise an empty string. These conditions become the items of an array, from which we finally remove the empty items: from pyspark.sql.functions import col, array, when, lit, array_remove # get conditions for all columns except id conditions_ = [when(df1[c]!=df2[c], lit(c)).otherwise("") for c in df1.columns if c != 'id'] select_expr = [ col("id"), *[df2[c] for c in df2.columns if c != 'id'], array_remove(array(*conditions_), "").alias("column_names") ] df1.join(df2, "id").select(*select_expr).show() # +---+-----+----+-------+------------+ # | id| name| sal|Address|column_names| # +---+-----+----+-------+------------+ # | 1| ABC|5000| US| []| # | 3| GHI|3500| JPN| [sal]| # | 2| DEF|4000| CAN| [Address]| # | 4|JKL_M|4800| CHN| [name, sal]| # +---+-----+----+-------+------------+ | 11 | 20 |
60,286,623 | 2020-2-18 | https://stackoverflow.com/questions/60286623/python-loses-connection-to-mysql-database-after-about-a-day | I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand. The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe. I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid. I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one. I've looked around, and most people that have been receiving these errors have huge or corrupted tables, or something to that effect. That is not the case here. The old PHP application still runs fine, the tables all have less than about 50,000 rows, and less than 30 columns, and the Python application runs fine until it has sat for about a day. So, here's to hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely, if so hopefully someone knows. | I have it working now. Using pooled connections seemed to fix the issue for me. mysql.connector.connect( host='10.0.0.25', user='xxxxxxx', passwd='xxxxxxx', database='xxxxxxx', pool_name='batman', pool_size = 3 ) def connection(): """Get a connection and a cursor from the pool""" db = mysql.connector.connect(pool_name = 'batman') return (db, db.cursor()) I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though. Edit I have since found a better solution. (I was still occasionally running into issues with the pooled connections). There is actually a dedicated library for Flask to handle mysql connections, which is almost a drop-in replacement. From bash: pip install Flask-MySQL Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object: from flaskext.mysql import MySQL mysql = MySQL() mysql.init_app(app) And to get a connection: mysql.get_db().cursor() All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now. | 10 | 6 |
60,236,745 | 2020-2-15 | https://stackoverflow.com/questions/60236745/is-it-true-that-in-multiprocessing-each-process-gets-its-own-gil-in-cpython-h | Are there any caveats to it? I have a few questions related to it. How costly is it to create more GILs? Is it any different from creating a separate python runtime? Once a new GIL is created, will it create everything (objects, variables, stack, heap) from scratch as required in that process, or will a copy of everything in the present heap and stack be created? (Garbage collection would malfunction if they are working on the same objects.) Are the pieces of code being executed also copied to new CPU cores? Also, can I relate one GIL to one CPU core? Now, copying things is a fairly CPU-intensive task (correct me if I am wrong); what would be the threshold to decide whether to go for multiprocessing? PS: I am talking about CPython but please feel free to extend the answer to whatever you feel is necessary. | Looking back at this question after 6 months, I feel I can clarify the doubts of my younger self. I hope this will be helpful to people who stumble upon it. Yes, it is true that in the multiprocessing module, each process has a separate GIL and there are no caveats to it. But the question's understanding of the runtime and the GIL is flawed and needs to be corrected. I will clear up the doubts / answer the questions with a series of statements. Python code is run (compiled to CPython bytecode, and then this bytecode is interpreted) by the CPython virtual machine. This is what constitutes the python runtime. When we create a new process, an entirely new python virtual machine is launched (which we call the python process) with its stack and heap memory. Yes, this is a costly process, but not too costly, because the python virtual machine is a piece of C code precompiled to machine code. To put it in perspective, the reason they do not use multiprocessing in Java is that it would create multiple JVMs, which would be terrible, as the JVM needs a lot of memory and, unlike CPython, is not precompiled machine code. The GIL is just a piece of code within the python virtual machine which lets the CPython interpreter execute only one line of CPython bytecode (or one instruction) at a time. So, all questions related to GIL creation and cost are moot; basically, the intention was to ask about the CPython virtual machine. Can I relate 1 GIL to 1 CPU core? Better to ask whether 1 Python process can be related to 1 CPU core: no. It is the kernel's job to decide which core the process runs on (and this will keep changing from time to time; the process has no control over it). The only thing is that at any given point in time, one python process cannot be running on multiple cores, and one python process will execute only one instruction of CPython bytecode (due to the GIL). What gets copied into cores, and how the OS tries to keep a process on the core it is working on, is a separate and very deep topic in itself. The final question is a subjective one, but with all this understanding, it's basically a cost-to-benefit ratio that may vary from program to program and might depend on how CPU-intensive a process is, how many cores the machine has, etc. So it cannot be generalised. | 10 | 12 |
60,257,377 | 2020-2-17 | https://stackoverflow.com/questions/60257377/encountering-warn-procfsmetricsgetter-exception-when-trying-to-compute-pagesi | I installed Spark and when trying to run it, I am getting the error: WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped Can someone help me with that? | The same problem occurred for me because the Python path was not added to the system environment. I added it to the environment and now it works perfectly. Adding a PYTHONPATH environment variable with the value: %SPARK_HOME%\python;%SPARK_HOME%\python\lib\py4j-<version>-src.zip;%PYTHONPATH% helped resolve this issue. Just check what py4j version you have in your spark/python/lib folder. | 25 | 11 |
60,197,392 | 2020-2-12 | https://stackoverflow.com/questions/60197392/high-performance-replacement-for-multiprocessing-queue | My distributed application consists of many producers that push tasks into several FIFO queues, and multiple consumers for every one of these queues. All these components live on a single node, so no networking is involved. This pattern is perfectly supported by Python's built-in multiprocessing.Queue, however when I am scaling up my application the queue implementation seems to be a bottleneck. I am not sending large amounts of data, so memory sharing does not solve the problem. What I need is fast guaranteed delivery of 10^4-10^5 small messages per second. Each message is about 100 bytes. I am new to the world of fast distributed computing and I am very confused by the sheer amount of options. There is RabbitMQ, Redis, Kafka, etc. ZeroMQ is a more focused and compact alternative, which also has successors such as nanomsg and nng. Also, implementing something like a many-to-many queue with guaranteed delivery seems nontrivial without a broker. I would really appreciate it if someone could point me to a "standard" way of doing something like this with one of the faster frameworks. | After trying a few available implementations and frameworks, I still could not find anything that would be suitable for my task. Either too slow or too heavy. To solve the issue my colleagues and I developed this: https://github.com/alex-petrenko/faster-fifo faster-fifo is a drop-in replacement for Python's multiprocessing.Queue and is significantly faster. In fact, it is up to 30x faster in the configurations I cared about (many producers, few consumers) because it additionally supports a get_many() method on the consumer side. It is brokerless, lightweight, supports arbitrary many-to-many configurations, and is implemented for Posix systems using pthread synchronization primitives. | 8 | 10 |
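A hypothetical usage sketch for the library in the answer above, based on my reading of its README (the exact constructor arguments and get_many signature are assumptions; check the repository for the current API):

    from faster_fifo import Queue

    q = Queue()          # drop-in for multiprocessing.Queue
    q.put("hello")
    print(q.get_many())  # consumers can drain many messages per call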
60,239,051 | 2020-2-15 | https://stackoverflow.com/questions/60239051/pytorch-runtimeerror-expected-object-of-scalar-type-double-but-got-scalar-type | I am trying to implement a custom dataset for my neural network. But I got this error when running the forward function. The code is as follows. import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import Dataset, DataLoader import numpy as np class ParamData(Dataset): def __init__(self,file_name): self.data = torch.Tensor(np.loadtxt(file_name,delimiter = ',')) #first place def __len__(self): return self.data.size()[0] def __getitem__(self,i): return self.data[i] class Net(nn.Module): def __init__(self,in_size,out_size,layer_size=200): super(Net,self).__init__() self.layer = nn.Linear(in_size,layer_size) self.out_layer = nn.Linear(layer_size,out_size) def forward(self,x): x = F.relu(self.layer(x)) x = self.out_layer(x) return x datafile = 'data1.txt' net = Net(100,1) dataset = ParamData(datafile) n_samples = len(dataset) #dataset = torch.Tensor(dataset,dtype=torch.double) #second place #net.float() #third place net.forward(dataset[0]) #fourth place The file data1.txt is a csv-formatted text file containing certain numbers, and each dataset[i] is a size 100 by 1 torch.Tensor object of dtype torch.float64. The error message is as follows: Traceback (most recent call last): File "Z:\Wrong.py", line 33, in <module> net.forward(dataset[0]) File "Z:\Wrong.py", line 23, in forward x = F.relu(self.layer(x)) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "E:\Python38\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "E:\Python38\lib\site-packages\torch\nn\functional.py", line 1372, in linear output = input.matmul(weight.t()) RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm It seems that I should change the dtype of the numbers in dataset to torch.double. I tried things like changing the line at the first place to self.data = torch.tensor(np.loadtxt(file_name,delimiter = ','),dtype=torch.double) changing the line at the fourth place to net.forward(dataset[0].double()) uncommenting one of the two lines at the second or the third place I think these are the solutions I have seen from similar questions, but they either give new errors or don't do anything. What should I do? Update: So I got it working by changing the first place to self.data = torch.from_numpy(np.loadtxt(file_name,delimiter = ',')).float() which is weird because it is exactly the opposite of the error message. Is this a bug? I'd still like an explanation. | Now that I have more experience with pytorch, I think I can explain the error message. It seems that the line RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm is actually referring to the weights of the linear layer when the matrix multiplication is called. Since the input is double while the weights are float, it makes sense for the line output = input.matmul(weight.t()) to expect the weights to be double. | 21 | 10 |
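Two common fixes in sketch form, reusing net and dataset from the question (either side of the matmul can be cast so the dtypes agree):

    x = dataset[0].float()  # option 1: cast the input down to float32 (the usual choice)
    net(x)

    net = net.double()      # option 2: cast the model weights up to float64 (slower)
    net(dataset[0])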
60,186,698 | 2020-2-12 | https://stackoverflow.com/questions/60186698/create-a-numba-typed-list-without-looping-over-a-python-list | I want to use a numba.typed.List (going to call it List) to pass into my function which is wrapped in njit. However this List should be created from an existing python list. When I look at the documentation it seems the way you create a List is to initialize it and then append elements to it. However this requires you to loop over an already existing list in python, which seems inefficient for large lists. For example: from numba.typed import List numba_list = List() py_list = ["a", "b", "c"] for e in py_list: numba_list.append(e) In [17]: numba_list[0] Out[17]: 'a' Is there a way to set a List to the values of a python list without explicitly looping over the python list? I am using numba.__version__ = '0.47.0' | I am working on numba 0.49.1, where you can pass the list to the constructor: py_list = [2,3,5] number_list = numba.typed.List(py_list) | 10 | 13 |
60,239,099 | 2020-2-15 | https://stackoverflow.com/questions/60239099/csv-file-with-arabic-characters-is-displayed-as-symbols-in-excel | I am using python to extract Arabic tweets from twitter and save them as a CSV file, but when I open the saved file in excel the Arabic text displays as symbols. However, inside python, notepad, or word, it looks fine. May I know where the problem is? | This is a problem I face frequently with Microsoft Excel when opening CSV files that contain Arabic characters. Try the following workaround that I tested on the latest versions of Microsoft Excel on both Windows and MacOS: Open Excel on a blank workbook Within the Data tab, click on the From Text button (if not activated, make sure an empty cell is selected) Browse and select the CSV file In the Text Import Wizard, change the File_origin to "Unicode (UTF-8)" Go next and from the Delimiters, select the delimiter used in your file e.g. comma Finish and select where to import the data The Arabic characters should show correctly. | 24 | 54 |
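Alternatively, the file can be fixed at write time on the Python side: the "utf-8-sig" encoding prepends a BOM, which is what makes Excel auto-detect UTF-8 when the file is double-clicked (a sketch with sample rows):

    import csv

    rows = [["اسم", "تغريدة"], ["مثال", "مرحبا"]]  # sample Arabic header + row
    with open("tweets.csv", "w", newline="", encoding="utf-8-sig") as f:
        csv.writer(f).writerows(rows)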
60,244,570 | 2020-2-16 | https://stackoverflow.com/questions/60244570/how-to-bypass-python-function-definition-with-decorator | I would like to know if its possible to control Python function definition based on global settings (e.g. OS). Example: @linux def my_callback(*args, **kwargs): print("Doing something @ Linux") return @windows def my_callback(*args, **kwargs): print("Doing something @ Windows") return Then, if someone is using Linux, the first definition of my_callback will be used and the second will be silently ignored. Its not about determining the OS, its about function definition / decorators. | If the goal is to have the same sort of effect in your code that #ifdef WINDOWS / #endif has.. here's a way to do it (I'm on a mac btw). Simple Case, No Chaining >>> def _ifdef_decorator_impl(plat, func, frame): ... if platform.system() == plat: ... return func ... elif func.__name__ in frame.f_locals: ... return frame.f_locals[func.__name__] ... else: ... def _not_implemented(*args, **kwargs): ... raise NotImplementedError( ... f"Function {func.__name__} is not defined " ... f"for platform {platform.system()}.") ... return _not_implemented ... ... >>> def windows(func): ... return _ifdef_decorator_impl('Windows', func, sys._getframe().f_back) ... >>> def macos(func): ... return _ifdef_decorator_impl('Darwin', func, sys._getframe().f_back) So with this implementation you get same syntax you have in your question. >>> @macos ... def zulu(): ... print("world") ... >>> @windows ... def zulu(): ... print("hello") ... >>> zulu() world >>> What the code above is doing, essentially, is assigning zulu to zulu if the platform matches. If the platform doesn't match, it'll return zulu if it was previously defined. If it wasn't defined, it returns a placeholder function that raises an exception. Decorators are conceptually easy to figure out if you keep in mind that @mydecorator def foo(): pass is analogous to: foo = mydecorator(foo) Here's an implementation using a parameterized decorator: >>> def ifdef(plat): ... frame = sys._getframe().f_back ... def _ifdef(func): ... return _ifdef_decorator_impl(plat, func, frame) ... return _ifdef ... >>> @ifdef('Darwin') ... def ice9(): ... print("nonsense") Parameterized decorators are analogous to foo = mydecorator(param)(foo). I've updated the answer quite a bit. In response to comments, I've expanded its original scope to include application to class methods and to cover functions defined in other modules. In this last update, I've been able to greatly reduce the complexity involved in determining if a function has already been defined. [A little update here... I just couldn't put this down - it's been a fun exercise] I've been doing some more testing of this, and found it works generally on callables - not just ordinary functions; you could also decorate class declarations whether callable or not. And it supports inner functions of functions, so things like this are possible (although probably not good style - this is just test code): >>> @macos ... class CallableClass: ... ... @macos ... def __call__(self): ... print("CallableClass.__call__() invoked.") ... ... @macos ... def func_with_inner(self): ... print("Defining inner function.") ... ... @macos ... def inner(): ... print("Inner function defined for Darwin called.") ... ... @windows ... def inner(): ... print("Inner function for Windows called.") ... ... inner() ... ... @macos ... class InnerClass: ... ... @macos ... def inner_class_function(self): ... 
print("Called inner_class_function() Mac.") ... ... @windows ... def inner_class_function(self): ... print("Called inner_class_function() for windows.") The above demonstrates the basic mechanism of decorators, how to access the caller's scope, and how to simplify multiple decorators that have similar behavior by having an internal function containing the common algorithm defined. Chaining Support To support chaining these decorators indicating whether a function applies to more than one platform, the decorator could be implemented like so: >>> class IfDefDecoratorPlaceholder: ... def __init__(self, func): ... self.__name__ = func.__name__ ... self._func = func ... ... def __call__(self, *args, **kwargs): ... raise NotImplementedError( ... f"Function {self._func.__name__} is not defined for " ... f"platform {platform.system()}.") ... >>> def _ifdef_decorator_impl(plat, func, frame): ... if platform.system() == plat: ... if type(func) == IfDefDecoratorPlaceholder: ... func = func._func ... frame.f_locals[func.__name__] = func ... return func ... elif func.__name__ in frame.f_locals: ... return frame.f_locals[func.__name__] ... elif type(func) == IfDefDecoratorPlaceholder: ... return func ... else: ... return IfDefDecoratorPlaceholder(func) ... >>> def linux(func): ... return _ifdef_decorator_impl('Linux', func, sys._getframe().f_back) That way you support chaining: >>> @macos ... @linux ... def foo(): ... print("works!") ... >>> foo() works! The comments below don't really apply to this solution in its present state. They were made during the first iterations on finding a solution and no longer apply. For instance the statement, "Note that this only works if macos and windows are defined in the same module as zulu." (upvoted 4 times) applied to the earliest version, but has been addressed in the current version; which is the case for most of the statements below. It's curious that the comments that validated the current solution have been removed. | 70 | 61 |
60,240,694 | 2020-2-15 | https://stackoverflow.com/questions/60240694/suppress-scientific-notation-in-sklearn-metrics-plot-confusion-matrix | I was trying to plot a confusion matrix nicely, so I used the built-in plot_confusion_matrix function from scikit-learn's newer version 0.22. However, one value of my confusion matrix is 153, but it appears as 1.5e+02 in the confusion matrix plot: Following scikit-learn's documentation, I spotted this parameter called values_format, but I do not know how to manipulate this parameter so that it can suppress the scientific notation. My code is as follows. from sklearn import svm, datasets from sklearn.model_selection import train_test_split from sklearn.metrics import plot_confusion_matrix # import some data to play with X = pd.read_csv("datasets/X.csv") y = pd.read_csv("datasets/y.csv") class_names = ['Not Fraud (positive)', 'Fraud (negative)'] # Split the data into a training set and a test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) # Run classifier, using a model that is too regularized (C too low) to see # the impact on the results logreg = LogisticRegression() logreg.fit(X_train, y_train) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix titles_options = [("Confusion matrix, without normalization", None), ("Normalized confusion matrix", 'true')] for title, normalize in titles_options: disp = plot_confusion_matrix(logreg, X_test, y_test, display_labels=class_names, cmap=plt.cm.Greens, normalize=normalize, values_format = '{:.5f}'.format) disp.ax_.set_title(title) print(title) print(disp.confusion_matrix) plt.show() | Just remove ".format" and the {} brackets from the values_format argument, so that you pass a plain format specification string: disp = plot_confusion_matrix(logreg, X_test, y_test, display_labels=class_names, cmap=plt.cm.Greens, normalize=normalize, values_format = '.5f') In addition, you can use '.5g' to avoid trailing decimal zeros. Taken from the source. | 15 | 15 |
60,199,213 | 2020-2-13 | https://stackoverflow.com/questions/60199213/pandas-read-csv-error-due-to-pandas-io-common-not-importing-is-url-in-1-0-x | I'm trying to read in a regular csv file into pandas through pd.read_csv(). I have done this on my local desktop many times before but I am using a virtual machine now and am getting this error : ImportError: cannot import name 'is_url' from 'pandas.io.common' (/opt/conda/lib/python3.7/site-packages/pandas/io/common.py) Can anyone help me understand what's happening and how to resolve it? I have tried updating as well as uninstalling and reinstalling pandas already. | conda update --force-reinstall pandas this worked for me. | 10 | 7 |
60,288,953 | 2020-2-18 | https://stackoverflow.com/questions/60288953/how-to-change-the-crs-of-a-raster-with-rasterio | I am trying to change the CRS of a raster tif file. When I assign a new CRS using the following code: with rio.open(solar_path, mode='r+') as raster: raster.crs = rio.crs.CRS({'init': 'epsg:27700'}) show((raster, 1)) print(raster.crs) The print function returns 'EPSG:27700', however after plotting the image the CRS clearly hasn't changed? | Unlike Geopandas, rasterio requires manual re-projection when changing the crs. import rasterio as rio from rasterio.warp import calculate_default_transform, reproject, Resampling def reproject_raster(in_path, out_path, crs): """Reproject the raster at in_path to the target crs and write it to out_path.""" # reproject raster to project crs with rio.open(in_path) as src: src_crs = src.crs transform, width, height = calculate_default_transform(src_crs, crs, src.width, src.height, *src.bounds) kwargs = src.meta.copy() kwargs.update({ 'crs': crs, 'transform': transform, 'width': width, 'height': height}) with rio.open(out_path, 'w', **kwargs) as dst: for i in range(1, src.count + 1): reproject( source=rio.band(src, i), destination=rio.band(dst, i), src_transform=src.transform, src_crs=src.crs, dst_transform=transform, dst_crs=crs, resampling=Resampling.nearest) return out_path | 9 | 10 |
60,212,552 | 2020-2-13 | https://stackoverflow.com/questions/60212552/pytest-deprecation-junit-family-default-value-will-change-to-xunit2 | I'm getting a deprecation warning from my pipelines at circleci. Message. /home/circleci/evobench/env/lib/python3.7/site-packages/_pytest/junitxml.py:436: PytestDeprecationWarning: The 'junit_family' default value will change to 'xunit2' in pytest 6.0. Command - run: name: Tests command: | . env/bin/activate mkdir test-reports python -m pytest --junitxml=test-reports/junit.xml How should I modify the command to use xunit? Is it possible to set a default tool, as it is mentioned in the message? I mean without specifying xunit or junit. Here's the full pipeline. | Run your command in one of these ways. with xunit2 python -m pytest -o junit_family=xunit2 --junitxml=test-reports/junit.xml with xunit1 python -m pytest -o junit_family=xunit1 --junitxml=test-reports/junit.xml or python -m pytest -o junit_family=legacy --junitxml=test-reports/junit.xml This here describes the change in detail: The default value of junit_family option will change to xunit2 in pytest 6.0, given that this is the version supported by default in modern tools that manipulate this type of file. In order to smooth the transition, pytest will issue a warning in case the --junitxml option is given in the command line but junit_family is not explicitly configured in pytest.ini: PytestDeprecationWarning: The `junit_family` default value will change to 'xunit2' in pytest 6.0. Add `junit_family=legacy` to your pytest.ini file to silence this warning and make your suite compatible. In order to silence this warning, users just need to configure the junit_family option explicitly: [pytest] junit_family=legacy | 13 | 14 |
60,289,405 | 2020-2-18 | https://stackoverflow.com/questions/60289405/a-good-way-to-make-classes-for-more-complex-playing-card-types-than-those-found | I am extremely new to object-oriented programming, and am trying to begin learning in python by making a simple card game (as seems to be traditional!). I have done the following example which works fine, and teaches me about making multiple instances of the PlayingCard() class to create an instance of the Deck() class: class PlayingCard(object): def __init__(self, suit, val): self.suit = suit self.value = val def print_card(self): print("{} of {}".format(self.value, self.suit)) class Deck(object): def __init__(self): self.playingcards = [] self.build() def build(self): for s in ["Spades", "Clubs", "Diamonds", "Hearts"]: for v in range(1,14): self.playingcards.append(PlayingCard(s,v)) deck = Deck() I want to make something now with more complex cards, not just a standard 52-card deck (which has nicely incrementing values). The deck I have in mind is the [Monopoly card game][1]: There are 3 fundamental types of cards - ACTION cards, PROPERTY cards, and MONEY cards. The action cards perform different actions, the property cards belong to different colour sets, and the money cards can have different values. Additionally, the property cards can be "wildcards", and can be used as part of one of two sets. Finally, every card also has an equivalent money value (indicated in the top corner of each card). In the rent action cards, the card can only apply to the colour property indicated on the card. My question is just generally how to handle a situation like this, and what would be a nice way to include these different cards in a class-based python program? Should I keep my single PlayingCard() class, and just have many inputs, such as PlayingCard(type="PROPERTY", value="3M"). Or would it be better to create separate classes such as ActionPlayingCard(), PropertyPlayingCard(), etc.? Or is there a better way? As I say, I am at the beginning of my learning here, and am wondering how to organise these types of situations in terms of the higher level design. Many thanks. | When you are approaching a problem with OOP, you usually want to model behavior and properties in a reusable way, i.e., you should think of abstractions and organize your class hierarchy based on that. I would write something like the following: class Card: def __init__(self, money_value=0): self.money_value = money_value class ActionCard(Card): def __init__(self, action, money_value=0): super().__init__(money_value=money_value) self.action = action class RentActionCard(ActionCard): def __init__(self, action, color, money_value=0): super().__init__(action, money_value=money_value) self.color = color def apply(self, property_card): if property_card.color != self.color: return # Don't apply # Apply the rent action here class PropertyCard(Card): def __init__(self, color, money_value=0): super().__init__(money_value=money_value) self.color = color class WildcardPropertyCard(PropertyCard): def __init__(self, color, money_value=0): super().__init__(color, money_value=money_value) class MoneyCard(Card): def __init__(self, money_value=0): super().__init__(money_value=money_value) Due to Python being a dynamically typed language, OOP is a little harder to justify in my opinion, since we can just rely on duck typing and dynamic binding, the way you organize your hierarchy is less important.
If I were to model this problem in C# for example, I would without a doubt use the hierarchy shown above, because I could rely on polymorphism to represent different types and guide the flow of my logic based on what type of card is being analyzed. A couple of final remarks: Python has very powerful builtin types, but most of the time using new custom types that build on them makes your life easier. You don't have to inherit from object since types in Python 3 (which is the only one maintained as of today) inherit from object by default. But, at the end of the day, there isn't a perfect answer; the best way would be to try both of the approaches and see what you're more comfortable with. | 9 | 3 |
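A brief usage sketch of the class hierarchy in the answer above. Everything here is hypothetical (the colors, values, and the hand itself are invented), and apply() is still a stub in the answer, so the call below is effectively a no-op:

# Hypothetical usage of the Card hierarchy from the answer above.
hand = [
    PropertyCard(color='green', money_value=2),
    RentActionCard(action='collect rent', color='green', money_value=1),
    MoneyCard(money_value=5),
]

# Every card exposes money_value through the shared Card base class.
print(sum(card.money_value for card in hand))  # 8

# Behavior can be dispatched on the concrete subclass at runtime.
for card in hand:
    if isinstance(card, RentActionCard):
        card.apply(hand[0])  # only takes effect when the colors match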
60,226,033 | 2020-2-14 | https://stackoverflow.com/questions/60226033/no-module-named-cython-with-pip-installation-of-tar-gz | I use Poetry to build tar.gz and whl files for my example package (https://github.com/iamishalkin/cyrtd) and then try to install the package inside a pipenv environment. tar.gz installation fails and this is a piece of logs: $ poetry build ... $ pip install dist/cyrtd-0.1.0.tar.gz Processing c:\work2\cyrtd\dist\cyrtd-0.1.0.tar.gz Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Requirement already satisfied: cython<0.30.0,>=0.29.13 in c:\users\ivan.mishalkin\.virtualenvs\cyrtd-tpdvsw8x\lib\site-packages (from cyrtd==0.1.0) (0.29.15) Building wheels for collected packages: cyrtd Building wheel for cyrtd (PEP 517) ... error ERROR: Command errored out with exit status 1: ... from Cython.Build import cythonize ModuleNotFoundError: No module named 'Cython' ---------------------------------------- ERROR: Failed building wheel for dxpyfeed Failed to build dxpyfeed ERROR: Could not build wheels for dxpyfeed which use PEP 517 and cannot be installed directly Cython is installed and is callable from virtual interpreter. Even in logs it is written, that requirements for cython are satisfied. What is strange - everything worked fine a couple of months ago. I also tried conda venv, upgraded cython and poetry, nothing helped. Also tried weakly related workaround from setup_requires with Cython? - still no luck UPD: I found some dirty workaround here: https://luminousmen.com/post/resolve-cython-and-numpy-dependencies The idea is to add from setuptools import dist dist.Distribution().fetch_build_eggs(['cython']) before Cython.Build import After this I get these logs: $ pip install dist/cyrtd-0.1.0.tar.gz Processing c:\work2\cyrtd\dist\cyrtd-0.1.0.tar.gz Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Requirement already satisfied: cython<0.30.0,>=0.29.13 in c:\users\ivan.mishalkin\.virtualenvs\cyrtd-tpdvsw8x\lib\site-packages (from cyrtd==0.1.0) (0.29.15) Building wheels for collected packages: cyrtd Building wheel for cyrtd (PEP 517) ... done Created wheel for cyrtd: filename=cyrtd-0.1.0-cp37-cp37m-win_amd64.whl size=33062 sha256=370a90657759d3183f3c11ebbdf1d23c3ca857d41dd45a86386ba33a6baf9a07 Stored in directory: c:\users\ivan.mishalkin\appdata\local\pip\cache\wheels\45\d1\6b\52daecf1cc5234ca4d9e9e49b2f195e7adb83941424116432e Successfully built cyrtd Installing collected packages: cyrtd Attempting uninstall: cyrtd Found existing installation: cyrtd 0.1.0 Uninstalling cyrtd-0.1.0: Successfully uninstalled cyrtd-0.1.0 Successfully installed cyrtd-0.1.0 Still looking for a better solution UPD2: main files content: build.py: from setuptools import Extension from Cython.Build import cythonize cyfuncs_ext = Extension(name='cyrtd.cymod.cyfuncs', sources=['cyrtd/cymod/cyfuncs.pyx'] ) EXTENSIONS = [ cyfuncs_ext ] def build(setup_kwargs): setup_kwargs.update({ 'ext_modules': cythonize(EXTENSIONS, language_level=3), 'zip_safe': False, 'setup_requires':['setuptools>=18.0', 'cython'] }) | Adding cython to the build-system section in pyproject.toml helped me pyproject.toml: ... [build-system] requires = ["poetry>=0.12", "cython"] ... | 14 | 2 |
60,238,873 | 2020-2-15 | https://stackoverflow.com/questions/60238873/python-jupyter-notebook-shap-force-plot-how-to-change-the-background-color-or-t | Is there any way that I can change the shap plot background color or text color in the dark theme? I need either the white background or white text. The plot is an object of IPython.core.display.HTML. It is generated by shap.force_plot(explainer.expected_value[1], shap_values[1][0,:], X_test.iloc[0,:],link="logit") Thanks for the help! | After adding the argument of matplotlib=True, the problem is solved. shap.force_plot(explainer.expected_value[1], shap_values[1][0,:], X_test.iloc[0,:],link="logit", matplotlib=True) It seems the plot is otherwise created as an HTML file. To plot properly, firstly I used style.use('seaborn-dark') in the dark mode of Jupyter notebook. | 7 | 7 |
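If the figure still needs to survive a dark notebook theme, a small hedged sketch of forcing a white background when saving; this assumes the explainer, shap_values, and X_test objects from the question already exist, and uses show=False so the figure can be touched before rendering:

import matplotlib.pyplot as plt
import shap

# Draw the force plot onto the current matplotlib figure instead of HTML.
shap.force_plot(explainer.expected_value[1], shap_values[1][0, :],
                X_test.iloc[0, :], link="logit", matplotlib=True, show=False)
fig = plt.gcf()
fig.patch.set_facecolor('white')  # white canvas behind the force plot
plt.savefig('force_plot.png', facecolor='white', bbox_inches='tight')
plt.show()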
60,267,911 | 2020-2-17 | https://stackoverflow.com/questions/60267911/keras-inconsistent-prediction-time | I tried to get an estimate of the prediction time of my keras model and realised something strange. Apart from being fairly fast normally, every once in a while the model needs quite a long time to come up with a prediction. And not only that, those times also increase the longer the model runs. I added a minimal working example to reproduce the error. import time import numpy as np from sklearn.datasets import make_classification from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten # Make a dummy classification problem X, y = make_classification() # Make a dummy model model = Sequential() model.add(Dense(10, activation='relu',name='input',input_shape=(X.shape[1],))) model.add(Dense(2, activation='softmax',name='predictions')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(X, y, verbose=0, batch_size=20, epochs=100) for i in range(1000): # Pick a random sample sample = np.expand_dims(X[np.random.randint(99), :], axis=0) # Record the prediction time 10x and then take the average start = time.time() for j in range(10): y_pred = model.predict_classes(sample) end = time.time() print('%d, %0.7f' % (i, (end-start)/10)) The time does not depend on the sample (it is being picked randomly). If the test is repeated, the indices in the for loop where the prediction takes longer are going to be (nearly) the same again. I'm using: tensorflow 2.0.0 python 3.7.4 For my application I need to guarantee the execution in a certain time. This is however impossible considering that behaviour. What is going wrong? Is it a bug in Keras or a bug in the tensorflow backend? EDIT: predict_on_batch shows the same behavior, however, more sparse: y_pred = model(sample, training=False).numpy() shows some heavy outliers as well, however, they are not increasing. EDIT 2: I downgraded to the latest tensorflow 1 version (1.15). Not only is the problem not existent anymore, also the "normal" prediction time significantly improved! I do not see the two spikes as problematic, as they didn't appear when I repeated the test (at least not at the same indices and linearly increasing) and are, in percentage terms, not as large as in the first plot. We can thus conclude that this seems to be a problem inherent to tensorflow 2.0, which shows similar behaviour in other situations as @OverLordGoldDragon mentions. | TF2 generally exhibits poor and bug-like memory management in several instances I've encountered - brief description here and here. With prediction in particular, the most performant feeding method is via model(x) directly - see here, and its linked discussions. In a nutshell: model(x) acts via its __call__ method (which it inherits from base_layer.Layer), whereas predict(), predict_classes(), etc. involve a dedicated loop function via _select_training_loop(); each utilizes different data pre- and post-processing methods suited for different use-cases, and model(x) in 2.1 was designed specifically to yield fastest small-model / small-batch (and maybe any-size) performance (and still fastest in 2.0). Quoting a TensorFlow dev from linked discussions: You can predict the output using model call, not model predict, i.e., calling model(x) would make this much faster because there are no "conversion to dataset" part, and also it's directly calling a cached tf.function.
Note: this should be less of an issue in 2.1, and especially 2.2 - but test each method anyway. Also I realize this doesn't directly answer your question on the time spikes; I suspect it's related to Eager caching mechanisms, but the surest way to determine is via TF Profiler, which is broken in 2.1. Update: regarding increasing spikes, possible GPU throttling; you've done ~1000 iters, try 10,000 instead - eventually, the increasing should stop. As you noted in your comments, this doesn't occur with model(x); makes sense as one less GPU step is involved ("conversion to dataset"). Update2: you could bug the devs here about it if you face this issue; it's mostly me singing there | 19 | 10 |
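A minimal timing sketch contrasting the two feeding methods discussed in the answer above; the model and input shapes are placeholders chosen only for illustration:

import time
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(20,))])
sample = np.random.rand(1, 20).astype('float32')

start = time.time()
for _ in range(100):
    model.predict(sample)  # goes through the dedicated predict loop each call
print('predict():', (time.time() - start) / 100)

start = time.time()
for _ in range(100):
    model(sample, training=False)  # direct __call__, no dataset conversion
print('model(x):', (time.time() - start) / 100)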
60,275,455 | 2020-2-18 | https://stackoverflow.com/questions/60275455/using-yolo-or-other-image-recognition-techniques-to-identify-all-alphanumeric-te | I have multiple diagram images, all of which contain labels as alphanumeric characters instead of just the text label itself. I want my YOLO model to identify all the numbers & alphanumeric characters present in it. How can I train my YOLO model to do the same? The dataset can be found here. https://drive.google.com/open?id=1iEkGcreFaBIJqUdAADDXJbUrSj99bvoi For example: see the bounding boxes. I want YOLO to detect wherever text is present. However, currently it's not necessary to identify the text inside it. Also the same needs to be done for these types of images The images can be downloaded here This is what I have tried using opencv but it does not work for all the images in the dataset. import cv2 import numpy as np import pytesseract pytesseract.pytesseract.tesseract_cmd = r"C:\Users\HPO2KOR\AppData\Local\Tesseract-OCR\tesseract.exe" image = cv2.imread(r'C:\Users\HPO2KOR\Desktop\Work\venv\Patent\PARTICULATE DETECTOR\PD4.png') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] clean = thresh.copy() horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1)) detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2) cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(clean, [c], -1, 0, 3) vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,30)) detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2) cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(clean, [c], -1, 0, 3) cnts = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area < 100: cv2.drawContours(clean, [c], -1, 0, 3) elif area > 1000: cv2.drawContours(clean, [c], -1, 0, -1) peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) x,y,w,h = cv2.boundingRect(c) if len(approx) == 4: cv2.rectangle(clean, (x, y), (x + w, y + h), 0, -1) open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2)) opening = cv2.morphologyEx(clean, cv2.MORPH_OPEN, open_kernel, iterations=2) close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,2)) close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, close_kernel, iterations=4) cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x,y,w,h = cv2.boundingRect(c) area = cv2.contourArea(c) if area > 500: ROI = image[y:y+h, x:x+w] ROI = cv2.GaussianBlur(ROI, (3,3), 0) data = pytesseract.image_to_string(ROI, lang='eng',config='--psm 6') if data.isalnum(): cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2) print(data) cv2.imwrite('image.png', image) cv2.imwrite('clean.png', clean) cv2.imwrite('close.png', close) cv2.imwrite('opening.png', opening) cv2.waitKey() Is there any model or any opencv technique or some pre-trained model that can do the same for me? I just need the bounding boxes around all the alphanumeric characters present in the images. After that I need to identify what's present in it.
However the second part is not important currently. | A possible approach is to use the EAST (Efficient and Accurate Scene Text) deep learning text detector based on Zhou et al.'s 2017 paper, EAST: An Efficient and Accurate Scene Text Detector. The model was originally trained for detecting text in natural scene images but it may be possible to apply it on diagram images. EAST is quite robust and is capable of detecting blurred or reflective text. Here is a modified version of Adrian Rosebrock's implementation of EAST. Instead of applying the text detector directly on the image, we can try to remove as many non-text objects from the image as possible before performing text detection. The idea is to remove horizontal lines, vertical lines, and non-text contours (curves, diagonals, circular shapes) before applying detection. Here are the results with some of your images: Input -> Non-text contours to remove in green Result Other images The pretrained frozen_east_text_detection.pb model necessary to perform text detection can be found here. Although the model catches most of the text, the results are not 100% accurate and have occasional false positives probably due to how it was trained on natural scene images. To obtain more accurate results you would probably have to train your own custom model. But if you want a decent out-of-the-box solution then this should work for you. Check out Adrian's OpenCV Text Detection (EAST text detector) blog post for a more comprehensive explanation of the EAST text detector. Code from imutils.object_detection import non_max_suppression import numpy as np import cv2 def EAST_text_detector(original, image, confidence=0.25): # Set the new width and height and determine the changed ratio (h, W) = image.shape[:2] (newW, newH) = (640, 640) rW = W / float(newW) rH = h / float(newH) # Resize the image and grab the new image dimensions image = cv2.resize(image, (newW, newH)) (h, W) = image.shape[:2] # Define the two output layer names for the EAST detector model that # we are interested -- the first is the output probabilities and the # second can be used to derive the bounding box coordinates of text layerNames = [ "feature_fusion/Conv_7/Sigmoid", "feature_fusion/concat_3"] net = cv2.dnn.readNet('frozen_east_text_detection.pb') # Construct a blob from the image and then perform a forward pass of # the model to obtain the two output layer sets blob = cv2.dnn.blobFromImage(image, 1.0, (W, h), (123.68, 116.78, 103.94), swapRB=True, crop=False) net.setInput(blob) (scores, geometry) = net.forward(layerNames) # Grab the number of rows and columns from the scores volume, then # initialize our set of bounding box rectangles and corresponding # confidence scores (numRows, numCols) = scores.shape[2:4] rects = [] confidences = [] # Loop over the number of rows for y in range(0, numRows): # Extract the scores (probabilities), followed by the geometrical # data used to derive potential bounding box coordinates that # surround text scoresData = scores[0, 0, y] xData0 = geometry[0, 0, y] xData1 = geometry[0, 1, y] xData2 = geometry[0, 2, y] xData3 = geometry[0, 3, y] anglesData = geometry[0, 4, y] # Loop over the number of columns for x in range(0, numCols): # If our score does not have sufficient probability, ignore it if scoresData[x] < confidence: continue # Compute the offset factor as our resulting feature maps will # be 4x smaller than the input image (offsetX, offsetY) = (x * 4.0, y * 4.0) # Extract the rotation angle for the prediction and then # compute the sin and cosine angle =
anglesData[x] cos = np.cos(angle) sin = np.sin(angle) # Use the geometry volume to derive the width and height of # the bounding box h = xData0[x] + xData2[x] w = xData1[x] + xData3[x] # Compute both the starting and ending (x, y)-coordinates for # the text prediction bounding box endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x])) endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x])) startX = int(endX - w) startY = int(endY - h) # Add the bounding box coordinates and probability score to # our respective lists rects.append((startX, startY, endX, endY)) confidences.append(scoresData[x]) # Apply non-maxima suppression to suppress weak, overlapping bounding # boxes boxes = non_max_suppression(np.array(rects), probs=confidences) # Loop over the bounding boxes for (startX, startY, endX, endY) in boxes: # Scale the bounding box coordinates based on the respective # ratios startX = int(startX * rW) startY = int(startY * rH) endX = int(endX * rW) endY = int(endY * rH) # Draw the bounding box on the image cv2.rectangle(original, (startX, startY), (endX, endY), (36, 255, 12), 2) return original # Convert to grayscale and Otsu's threshold image = cv2.imread('1.png') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] clean = thresh.copy() # Remove horizontal lines horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1)) detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2) cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(clean, [c], -1, 0, 3) # Remove vertical lines vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,30)) detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2) cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(clean, [c], -1, 0, 3) # Remove non-text contours (curves, diagonals, circular shapes) cnts = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area > 1500: cv2.drawContours(clean, [c], -1, 0, -1) peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) x,y,w,h = cv2.boundingRect(c) if len(approx) == 4: cv2.rectangle(clean, (x, y), (x + w, y + h), 0, -1) # Bitwise-and with original image to remove contours filtered = cv2.bitwise_and(image, image, mask=clean) filtered[clean==0] = (255,255,255) # Perform EAST text detection result = EAST_text_detector(image, filtered) cv2.imshow('filtered', filtered) cv2.imshow('result', result) cv2.waitKey() | 13 | 8 |
60,270,233 | 2020-2-17 | https://stackoverflow.com/questions/60270233/trying-to-create-dynamic-subdags-from-parent-dag-based-on-array-of-filenames | I am trying to move s3 files from a "non-deleting" bucket (meaning I can't delete the files) to GCS using airflow. I cannot be guaranteed that new files will be there every day, but I must check for new files every day. My problem is the dynamic creation of subdags. If there ARE files, I need subdags. If there are NOT files, I don't need subdags. My problem is the upstream/downstream settings. In my code, it does detect files, but does not kick off the subdags as they are supposed to. I'm missing something. here's my code: from airflow import models from airflow.utils.helpers import chain from airflow.providers.amazon.aws.hooks.s3 import S3Hook from airflow.operators.python_operator import PythonOperator, BranchPythonOperator from airflow.operators.dummy_operator import DummyOperator from airflow.operators.subdag_operator import SubDagOperator from airflow.contrib.operators.s3_to_gcs_operator import S3ToGoogleCloudStorageOperator from airflow.utils import dates from airflow.models import Variable import logging args = { 'owner': 'Airflow', 'start_date': dates.days_ago(1), 'email': ['[email protected]'], 'email_on_failure': True, 'email_on_success': True, } bucket = 'mybucket' prefix = 'myprefix/' LastBDEXDate = int(Variable.get("last_publish_date")) maxdate = LastBDEXDate files = [] parent_dag = models.DAG( dag_id='My_Ingestion', default_args=args, schedule_interval='@daily', catchup=False ) def Check_For_Files(**kwargs): s3 = S3Hook(aws_conn_id='S3_BOX') s3.get_conn() bucket = bucket LastBDEXDate = int(Variable.get("last_publish_date")) maxdate = LastBDEXDate files = s3.list_keys(bucket_name=bucket, prefix='myprefix/file') for file in files: print(file) print(file.split("_")[-2]) print(file.split("_")[-2][-8:]) ##proves I can see a date in the file name is ok.
maxdate = maxdate if maxdate > int(file.split("_")[-2][-8:]) else int(file.split("_")[-2][-8:]) if maxdate > LastBDEXDate: return 'Start_Process' return 'finished' def create_subdag(dag_parent, dag_id_child_prefix, file_name): # dag params dag_id_child = '%s.%s' % (dag_parent.dag_id, dag_id_child_prefix) # dag subdag = models.DAG(dag_id=dag_id_child, default_args=args, schedule_interval=None) # operators s3_to_gcs_op = S3ToGoogleCloudStorageOperator( task_id=dag_id_child, bucket=bucket, prefix=file_name, dest_gcs_conn_id='GCP_Account', dest_gcs='gs://my_files/To_Process/', replace=False, gzip=True, dag=subdag) return subdag def create_subdag_operator(dag_parent, filename, index): tid_subdag = 'file_{}'.format(index) subdag = create_subdag(dag_parent, tid_subdag, filename) sd_op = SubDagOperator(task_id=tid_subdag, dag=dag_parent, subdag=subdag) return sd_op def create_subdag_operators(dag_parent, file_list): subdags = [create_subdag_operator(dag_parent, file, file_list.index(file)) for file in file_list] # chain subdag-operators together chain(*subdags) return subdags check_for_files = BranchPythonOperator( task_id='Check_for_s3_Files', provide_context=True, python_callable=Check_For_Files, dag=parent_dag ) finished = DummyOperator( task_id='finished', dag=parent_dag ) decision_to_continue = DummyOperator( task_id='Start_Process', dag=parent_dag ) if len(files) > 0: subdag_ops = create_subdag_operators(parent_dag, files) check_for_files >> decision_to_continue >> subdag_ops[0] >> subdag_ops[-1] >> finished check_for_files >> finished | Below is the recommended way to create a dynamic DAG or sub-DAG in airflow, though there are other ways also, but I guess this would be largely applicable to your problem. First, create a file (yaml/csv) which includes the list of all s3 files and locations; in your case you have written a function to store them in a list. I would say store them in a separate yaml file and load it at run time in the airflow env and then create DAGs. Below is a sample yaml file: dynamicDagConfigFile.yaml job: dynamic-dag bucket_name: 'bucket-name' prefix: 'bucket-prefix' S3Files: - File1: 'S3Loc1' - File2: 'S3Loc2' - File3: 'S3Loc3' You can modify your Check_For_Files function to store them in a yaml file. Now we can move on to dynamic dag creation: First define two tasks using dummy operators, i.e. the start and the end task. Such tasks are the ones in which we are going to build upon our DAG by dynamically creating tasks between them: start = DummyOperator( task_id='start', dag=dag ) end = DummyOperator( task_id='end', dag=dag) Dynamic DAG: We will use PythonOperators in airflow. The function should receive as arguments the task id; a python function to be executed, i.e., the python_callable for the Python operator; and a set of args to be used during the execution. Include the task id as an argument. So, we can exchange data among tasks generated in a dynamic way, e.g., via XCOM. You can specify your operation function within this dynamic dag like s3_to_gcs_op. def createDynamicDAG(task_id, callableFunction, args): task = PythonOperator( task_id = task_id, provide_context=True, #Eval is used since the callableFunction var is of type string #while the python_callable argument for PythonOperators only receives objects of type callable not strings.
python_callable = eval(callableFunction), op_kwargs = args, xcom_push = True, dag = dag, ) return task Finally based on the location present in the yaml file you can create dynamic dags, first read the yaml file as below and create dynamic dag: with open('/usr/local/airflow/dags/config_files/dynamicDagConfigFile.yaml') as f: # use safe_load instead to load the YAML file configFile = yaml.safe_load(f) #Extract file list S3Files = configFile['S3Files'] #In this loop tasks are created for each table defined in the YAML file for S3File in S3Files: for S3File, fieldName in S3File.items(): #Remember task id is provided in order to exchange data among tasks generated in dynamic way. get_s3_files = createDynamicDAG('{}-getS3Data'.format(S3File), 'getS3Data', {}) #your configs here. #Second step is upload S3 to GCS upload_s3_toGCS = createDynamicDAG('{}-uploadDataS3ToGCS'.format(S3File), 'uploadDataS3ToGCS', {'previous_task_id':'{}-'}) #write your configs again here like S3 bucket name prefix extra or read from yaml file, and other GCS config. Final DAG definition: The idea is that #once tasks are generated they should be linked with the #dummy operators generated in the start and end tasks. start >> get_s3_files get_s3_files >> upload_s3_toGCS upload_s3_toGCS >> end Full airflow code in order: import yaml import airflow from airflow import DAG from datetime import datetime, timedelta, time from airflow.operators.python_operator import PythonOperator from airflow.operators.dummy_operator import DummyOperator start = DummyOperator( task_id='start', dag=dag ) def createDynamicDAG(task_id, callableFunction, args): task = PythonOperator( task_id = task_id, provide_context=True, #Eval is used since the callableFunction var is of type string #while the python_callable argument for PythonOperators only receives objects of type callable not strings. python_callable = eval(callableFunction), op_kwargs = args, xcom_push = True, dag = dag, ) return task end = DummyOperator( task_id='end', dag=dag) with open('/usr/local/airflow/dags/config_files/dynamicDagConfigFile.yaml') as f: configFile = yaml.safe_load(f) #Extract file list S3Files = configFile['S3Files'] #In this loop tasks are created for each table defined in the YAML file for S3File in S3Files: for S3File, fieldName in S3File.items(): #Remember task id is provided in order to exchange data among tasks generated in dynamic way. get_s3_files = createDynamicDAG('{}-getS3Data'.format(S3File), 'getS3Data', {}) #your configs here. #Second step is upload S3 to GCS upload_s3_toGCS = createDynamicDAG('{}-uploadDataS3ToGCS'.format(S3File), 'uploadDataS3ToGCS', {'previous_task_id':'{}-'}) #write your configs again here like S3 bucket name prefix extra or read from yaml file, and other GCS config. start >> get_s3_files get_s3_files >> upload_s3_toGCS upload_s3_toGCS >> end | 12 | 4 |
60,229,970 | 2020-2-14 | https://stackoverflow.com/questions/60229970/aws-cli-errorrootcode-for-hash-md5-was-not-found | When trying to run the AWS CLI, I am getting this error: aws ERROR:root:code for hash md5 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md5 ERROR:root:code for hash sha1 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha1 ERROR:root:code for hash sha224 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha224 ERROR:root:code for hash sha256 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha256 ERROR:root:code for hash sha384 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha384 ERROR:root:code for hash sha512 was not found. 
Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha512 Traceback (most recent call last): File "/usr/local/bin/aws", line 19, in <module> import awscli.clidriver File "/usr/local/lib/python2.7/site-packages/awscli/clidriver.py", line 17, in <module> import botocore.session File "/usr/local/lib/python2.7/site-packages/botocore/session.py", line 29, in <module> import botocore.configloader File "/usr/local/lib/python2.7/site-packages/botocore/configloader.py", line 19, in <module> from botocore.compat import six File "/usr/local/lib/python2.7/site-packages/botocore/compat.py", line 25, in <module> from botocore.exceptions import MD5UnavailableError File "/usr/local/lib/python2.7/site-packages/botocore/exceptions.py", line 15, in <module> from botocore.vendored import requests File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/__init__.py", line 58, in <module> from . import utils File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/utils.py", line 26, in <module> from .compat import parse_http_list as _parse_list_header File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/compat.py", line 7, in <module> from .packages import chardet File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/packages/__init__.py", line 3, in <module> from . import urllib3 File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/__init__.py", line 10, in <module> from .connectionpool import ( File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py", line 31, in <module> from .connection import ( File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/connection.py", line 45, in <module> from .util.ssl_ import ( File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/util/__init__.py", line 5, in <module> from .ssl_ import ( File "/usr/local/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/util/ssl_.py", line 2, in <module> from hashlib import md5, sha1, sha256 ImportError: cannot import name md5 I tried the solutions from this issue but they do not work: brew reinstall python@2 ==> Reinstalling python@2 Error: An exception occurred within a child process: FormulaUnavailableError: No available formula with the name "/usr/local/opt/python@2/.brew/[email protected]" I thought it might not be installed, but it already is: brew install python@2 Warning: python@2 2.7.15_1 is already installed and up-to-date To reinstall 2.7.15_1, run `brew reinstall python@2` Running brew doctor shows that python is unlinked, but running brew link python fails because of a symlink belonging to python@2. brew link python Linking /usr/local/Cellar/python/3.7.6_1... Error: Could not symlink Frameworks/Python.framework/Headers Target /usr/local/Frameworks/Python.framework/Headers is a symlink belonging to python@2.
You can unlink it: brew unlink python@2 To force the link and overwrite all conflicting files: brew link --overwrite python To list all files that would be deleted: brew link --overwrite --dry-run python The commands recommended seem to go in circles and none of them manage to solve the issue. I am a bit stuck - how do I recover from these errors? | Ran into a similar issue with brew install python2 error when trying to use pip. It's probably because python@2 was deleted from homebrew/core in commit 028f11f9e: python@2: delete (https://github.com/Homebrew/homebrew-core/issues/49796) EOL 1 January 2020. See this post here https://github.com/Homebrew/homebrew-core/pull/49796 for more details. To fix this, run brew uninstall python@2 to uninstall brew-installed python@2; this should make the default python2 fall back to the python2 that came with macOS. Uninstall the current aws cli by running sudo rm -rf /usr/local/bin/aws sudo rm -rf /usr/local/aws rm AWSCLIV2.pkg Install the aws cli again by running sudo curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" sudo installer -pkg AWSCLIV2.pkg -target / If you run aws --version you should see it's linked to python3 instead of python2. Hope it helps. | 23 | 56 |
60,225,185 | 2020-2-14 | https://stackoverflow.com/questions/60225185/how-to-throw-http-error-code-with-aws-lambda-using-lambda-proxy | I created an AWS Lambda function using Python 3.8 with a Lambda Proxy API Gateway trigger: It is indeed possible to return custom HTTP error codes: def lambda_handler(event, context): return { 'statusCode': 400, 'body': json.dumps('This is a bad request!') } However, some online examples (e.g. 1, 2) simply raise an Exception to return the error with a custom message. If an Exception is thrown on the handler as below, the server returns a 502 as the response is not in the expected format for Proxy Integration: def lambda_handler(event, context): raise Exception('This is an exception!') I think the examples rely on some Integration Response templates and do not use the Proxy Integration. Is it possible to achieve the same thing with Lambda Proxy? I'd like to avoid globally catching exceptions in the Lambda handler to build custom error responses. | DurandA - I believe you are absolutely correct: the simplified Lambda Proxy Integration approach relies on you catching your exceptions and returning the standardized format: def lambda_handler(event, context): return { 'statusCode': 400, 'body': json.dumps('This is a bad request!') } The simplified Lambda Proxy Integration feature was announced in a September 2016 blog post, but one of the examples you cited was posted earlier, in a June 2016 blog post, back when the more complicated Integration Response method was the only way. Maybe you have stumbled on an out of date example. You also posted a link to the product documentation for error handling, at the top in the section covering the Lambda proxy integration feature, it says: With the Lambda proxy integration, Lambda is required to return an output of the following format: { "isBase64Encoded" : "boolean", "statusCode": "number", "headers": { ... }, "body": "JSON string" } Here is a working example that returns a HTTP 400 with message "This is an exception!" using Lambda Proxy Integration. import json def exception_handler(e): # exception to status code mapping goes here... status_code = 400 return { 'statusCode': status_code, 'body': json.dumps(str(e)) } def lambda_handler(event, context): try: raise Exception('This is an exception!') return { 'statusCode': 200, 'body': json.dumps('This is a good request!') } except Exception as e: return exception_handler(e) Output from the above: $ http https://**********.execute-api.us-east-2.amazonaws.com/test HTTP/1.1 400 Bad Request Connection: keep-alive Content-Length: 23 Content-Type: application/json Date: Sun, 23 Feb 2020 05:06:59 GMT X-Amzn-Trace-Id: Root=1-********-************************;Sampled=0 x-amz-apigw-id: **************** x-amzn-RequestId: ********-****-****-****-************ "This is an exception!" I understand your frustration that you do not want to build a custom exception handler. Fortunately, you only have to build a single handler wrapping your lambda_handler function. Wishing you all the best! | 18 | 34 |
60,278,766 | 2020-2-18 | https://stackoverflow.com/questions/60278766/best-way-to-insert-python-numpy-array-into-postgresql-database | Our team uses software that is heavily reliant on dumping NumPy data into files, which slows our code quite a lot. If we could store our NumPy arrays directly in PostgreSQL we would get a major performance boost. Other performant methods of storing NumPy arrays in any database or searchable database-like structure are welcome, but PostgreSQL would be preferred. My question is very similar to one asked previously. However, I am looking for a more robust and performant answer and I wish to store any arbitrary NumPy array. | Not sure if this is what you are after, but assuming you have read/write access to an existing postgres DB: import numpy as np import psycopg2 as psy import pickle db_connect_kwargs = { 'dbname': '<YOUR_DBNAME>', 'user': '<YOUR_USRNAME>', 'password': '<YOUR_PWD>', 'host': '<HOST>', 'port': '<PORT>' } connection = psy.connect(**db_connect_kwargs) connection.set_session(autocommit=True) cursor = connection.cursor() cursor.execute( """ DROP TABLE IF EXISTS numpy_arrays; CREATE TABLE numpy_arrays ( uuid VARCHAR PRIMARY KEY, np_array_bytes BYTEA ) """ ) The gist of this approach is to store any numpy array (of arbitrary shape and data type) as a row in the numpy_arrays table, where uuid is a unique identifier to be able to later retrieve the array. The actual array would be saved in the np_array_bytes column as bytes. Inserting into the database: some_array = np.random.rand(1500,550) some_array_uuid = 'some_array' cursor.execute( """ INSERT INTO numpy_arrays(uuid, np_array_bytes) VALUES (%s, %s) """, (some_array_uuid, pickle.dumps(some_array)) ) Querying from the database: uuid = 'some_array' cursor.execute( """ SELECT np_array_bytes FROM numpy_arrays WHERE uuid=%s """, (uuid,) ) some_array = pickle.loads(cursor.fetchone()[0]) Performance? If we could store our NumPy arrays directly in PostgreSQL we would get a major performance boost. I haven't benchmarked this approach in any way, so I can't confirm nor refute this... Disk Space? My guess is that this approach takes as much disk space as dumping the arrays to a file using np.save('some_array.npy', some_array). If this is an issue consider compressing the bytes before insertion. | 13 | 14 |
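A sketch of the compression idea mentioned at the end of the answer above, wrapping the pickled bytes in zlib before the INSERT and unwrapping after the SELECT; the compression level of 6 is an arbitrary choice:

import pickle
import zlib
import numpy as np

some_array = np.random.rand(1500, 550)

payload = zlib.compress(pickle.dumps(some_array), 6)   # store this as the BYTEA value
restored = pickle.loads(zlib.decompress(payload))      # after fetching it back
assert np.array_equal(some_array, restored)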
60,215,436 | 2020-2-13 | https://stackoverflow.com/questions/60215436/how-to-correctly-set-specific-module-to-debug-in-vs-code | I was following the instructions on VS Code's website but it seemed that nothing that I tried worked. I created a new configuration as required but whenever I put the path it refuses to work in VS code although the path VS code complains about in the integrated terminal window works fine when I call it manually. The error the debugger throws is the following: (automl-meta-learning) brandomiranda~/automl-meta-learning/automl/experiments ❯ env PTVSD_LAUNCHER_PORT=59729 /Users/brandomiranda/miniconda3/envs/automl-meta-learning/bin/python /Users/brandomiranda/.vscode/extensions/ms-python.python-2020.2.63072/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/launcher -m /Users/brandomiranda/automl-meta-learning/automl/experiments/experiments_model_optimization.py E+00000.025: Error determining module path for sys.argv Traceback (most recent call last): File "/Users/brandomiranda/.vscode/extensions/ms-python.python-2020.2.63072/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/../ptvsd/server/cli.py", line 220, in run_module spec = find_spec(options.target) File "/Users/brandomiranda/miniconda3/envs/automl-meta-learning/lib/python3.7/importlib/util.py", line 94, in find_spec parent = __import__(parent_name, fromlist=['__path__']) ModuleNotFoundError: No module named '/Users/brandomiranda/automl-meta-learning/automl/experiments/experiments_model_optimization' Stack where logged: File "/Users/brandomiranda/miniconda3/envs/automl-meta-learning/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/Users/brandomiranda/miniconda3/envs/automl-meta-learning/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/Users/brandomiranda/.vscode/extensions/ms-python.python-2020.2.63072/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/__main__.py", line 45, in <module> cli.main() File "/Users/brandomiranda/.vscode/extensions/ms-python.python-2020.2.63072/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/../ptvsd/server/cli.py", line 361, in main run() File "/Users/brandomiranda/.vscode/extensions/ms-python.python-2020.2.63072/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/../ptvsd/server/cli.py", line 226, in run_module log.exception("Error determining module path for sys.argv") /Users/brandomiranda/miniconda3/envs/automl-meta-learning/bin/python: Error while finding module specification for '/Users/brandomiranda/automl-meta-learning/automl/experiments/experiments_model_optimization.py' (ModuleNotFoundError: No module named '/Users/brandomiranda/automl-meta-learning/automl/experiments/experiments_model_optimization') then I tried running the file it complains about manually and it runs just fine... (automl-meta-learning) brandomiranda~/automl-meta-learning/automl/experiments ❯ python /Users/brandomiranda/automl-meta-learning/automl/experiments/experiments_model_optimization.py --> main in differentiable SGD -------> Inside Experiment Code <-------- ---> hostname: device = cpu Files already downloaded and verified Files already downloaded and verified Files already downloaded and verified even when I hover over the path name and click it with command + click then it takes me to the path from within VS code. Which seems bizarre. So somehow only when I run it in debugger mode does it not work. Why? Launch.json { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: Experiments Protype1", "type": "python", "request": "launch", "module": "${workspaceFolder}/automl/experiments/experiments_model_optimization.py" // ~/automl-meta-learning/automl/experiments/experiments_model_optimization.py }, { "name": "Python: Current File (Integrated Terminal)", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" }, { "name": "Python: Remote Attach", "type": "python", "request": "attach", "port": 5678, "host": "localhost", "pathMappings": [ { "localRoot": "${workspaceFolder}", "remoteRoot": "." } ] }, { "name": "Python: Module", "type": "python", "request": "launch", "module": "enter-your-module-name-here", "console": "integratedTerminal" }, { "name": "Python: Django", "type": "python", "request": "launch", "program": "${workspaceFolder}/manage.py", "console": "integratedTerminal", "args": [ "runserver", "--noreload", "--nothreading" ], "django": true }, { "name": "Python: Flask", "type": "python", "request": "launch", "module": "flask", "env": { "FLASK_APP": "app.py" }, "args": [ "run", "--no-debugger", "--no-reload" ], "jinja": true }, { "name": "Python: Current File (External Terminal)", "type": "python", "request": "launch", "program": "${file}", "console": "externalTerminal" } ] } Cross-posted: Quora: https://qr.ae/TzkO4L reddit: https://www.reddit.com/r/vscode/comments/f3hm9r/how_to_correctly_set_specific_module_to_debug_in/ gitissue: https://github.com/microsoft/ptvsd/issues/2088 | You are using module instead of program in launch.json. When using module you must pass only the module\sub-module name, not the entire path. Visual Studio will then load the specified module and execute its __main__.py file. This would be the correct input, assuming automl is a module and experiments is a submodule: "module": "automl.experiments" If you want to point to your script directly you can use the path you were using previously, just changing module to program: "program": "${workspaceFolder}/automl/experiments/experiments_model_optimization.py" | 13 | 6 |
60,279,762 | 2020-2-18 | https://stackoverflow.com/questions/60279762/migrating-flask-web-application-currently-using-uwsgi-web-server-to-asgi-web-ser | I currently have a flask web application using uWSGI web server that implements the WSGI standard and need to migrate this app to uvicorn web server that implements the ASGI standard. If I choose to use uvicorn web server from the many available options say, Hypercorn, Daphne, then which web microframework (instead of flask) should I opt for from the available options say, Starlette, Quart, Django/Channels to get this migration done smoothly? The hierarchy is like: Uvicorn: an ASGI server Starlette: (uses Uvicorn) a web microframework FastAPI: (uses Starlette) an API microframework with several additional features for building APIs, with data validation, etc. From what I have read so far, Quart is a Python web microframework based on Asyncio. It is intended to provide the easiest way to use asyncio in a web context, especially with existing Flask apps. and FastAPI has been shown to be a Python web framework with one of the best performances, as measured by third-party benchmarks, thanks to being based on and powered by Starlette. https://fastapi.tiangolo.com/benchmarks/ Please suggest the best approach | So here I would like to add something that I have concluded so far: FastAPI learned from Flask (and several of its plug-ins) several things, including its simplicity. For example, the way you declare routes is very similar. That makes it easy to migrate from Flask to FastAPI (which I see a lot of people doing). Flask is a framework based on the current/old standard for Python web frameworks: WSGI. FastAPI is based on Starlette, which uses the newer standard for asynchronous web frameworks: ASGI. Starlette would be more comparable to Flask, in being a pure "micro-framework". Almost everything that you can do with Flask, you can do with Starlette (and so with FastAPI). Nevertheless, Starlette has some features not available in Flask (nor in many other WSGI frameworks, like Django, at least by default), like WebSockets, background tasks and others. As FastAPI is based on Starlette, it inherits all its features. Including WebSockets, GraphQL support, templates, etc. So, at the minimum, with FastAPI you can do virtually everything you can with Flask. FastAPI is also a micro-framework (maybe mini-framework, as it includes some extra features for APIs). So, you can structure the project however you want, in many cases, you can even use most of the same files from a Flask project (I've done that). This has been explained in this amazing post: https://www.quora.com/What-are-the-advantages-of-using-FastAPI-over-flask Also, Docker image with Uvicorn managed by Gunicorn for high-performance FastAPI web applications in Python 3.7 and 3.6 with performance auto-tuning can be used for minimal implementation. https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker | 10 | 12 |
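To illustrate how similar the route declarations are when migrating, a minimal side-by-side sketch; the route and payload are invented for the comparison, and the two snippets belong in separate files:

# flask_app.py -- WSGI, served by e.g. uWSGI or gunicorn
from flask import Flask, jsonify

flask_app = Flask(__name__)

@flask_app.route('/items/<int:item_id>')
def read_item_flask(item_id):
    return jsonify({'item_id': item_id})

# fastapi_app.py -- ASGI, served by e.g. uvicorn fastapi_app:api
from fastapi import FastAPI

api = FastAPI()

@api.get('/items/{item_id}')
def read_item_fastapi(item_id: int):
    return {'item_id': item_id}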