question_id (int64, 59.5M to 79.4M) | creation_date (string, length 8 to 10) | link (string, length 60 to 163) | question (string, length 53 to 28.9k) | accepted_answer (string, length 26 to 29.3k) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482) |
---|---|---|---|---|---|---|
59,933,946 | 2020-1-27 | https://stackoverflow.com/questions/59933946/difference-between-typevart-a-b-and-typevart-bound-uniona-b | What's the difference between the following two TypeVars? from typing import TypeVar, Union class A: pass class B: pass T = TypeVar("T", A, B) T = TypeVar("T", bound=Union[A, B]) I believe that in Python 3.12 this is the difference between these two bounds class Foo[T: (A, B)]: ... class Foo[T: A | B]: ... Here's an example of something I don't get: this passes type checking... T = TypeVar("T", bound=Union[A, B]) class AA(A): pass class X(Generic[T]): pass class XA(X[A]): pass class XAA(X[AA]): pass ...but with T = TypeVar("T", A, B), it fails with error: Value of type variable "T" of "X" cannot be "AA" Related: this question on the difference between Union[A, B] and TypeVar("T", A, B). | When you do T = TypeVar("T", bound=Union[A, B]), you are saying T can be bound to either Union[A, B] or any subtype of Union[A, B]. It's upper-bounded to the union. So for example, if you had a function of type def f(x: T) -> T, it would be legal to pass in values of any of the following types: Union[A, B] (or a union of any subtypes of A and B such as Union[A, BChild]) A (or any subtype of A) B (or any subtype of B) This is how generics behave in most programming languages: they let you impose a single upper bound. But when you do T = TypeVar("T", A, B), you are basically saying T must be either upper-bounded by A or upper-bounded by B. That is, instead of establishing a single upper-bound, you get to establish multiple! So this means while it would be legal to pass in values of either types A or B into f, it would not be legal to pass in Union[A, B] since the union is neither upper-bounded by A nor B. So for example, suppose you had a iterable that could contain either ints or strs. If you want this iterable to contain any arbitrary mixture of ints or strs, you only need a single upper-bound of a Union[int, str]. For example: from typing import TypeVar, Union, List, Iterable mix1: List[Union[int, str]] = [1, "a", 3] mix2: List[Union[int, str]] = [4, "x", "y"] all_ints = [1, 2, 3] all_strs = ["a", "b", "c"] T1 = TypeVar('T1', bound=Union[int, str]) def concat1(x: Iterable[T1], y: Iterable[T1]) -> List[T1]: out: List[T1] = [] out.extend(x) out.extend(y) return out # Type checks a1 = concat1(mix1, mix2) # Also type checks (though your type checker may need a hint to deduce # you really do want a union) a2: List[Union[int, str]] = concat1(all_ints, all_strs) # Also type checks a3 = concat1(all_strs, all_strs) In contrast, if you want to enforce that the function will accept either a list of all ints or all strs but never a mixture of either, you'll need multiple upper bounds. T2 = TypeVar('T2', int, str) def concat2(x: Iterable[T2], y: Iterable[T2]) -> List[T2]: out: List[T2] = [] out.extend(x) out.extend(y) return out # Does NOT type check b1 = concat2(mix1, mix2) # Also does NOT type check b2 = concat2(all_ints, all_strs) # But this type checks b3 = concat2(all_ints, all_ints) | 82 | 125 |
59,931,566 | 2020-1-27 | https://stackoverflow.com/questions/59931566/how-to-monitor-celery-task-completion-with-prometheus | I am trying to use Prometheus to monitor Celery tasks, which I am relatively new to, and I have a problem with incrementing a counter. It's just not incrementing when I try to do it inside of a Celery task. E.g. from celery import Celery from prometheus_client import Counter app = Celery('tasks', broker='redis://localhost') TASKS = Counter('tasks', 'Count of tasks') @app.task def add(x, y): TASKS.inc(1) return x + y When I visit an endpoint to see which metrics are exposed I can see tasks_total, but its value doesn't change no matter how many add tasks have been executed. However, when I try to increment the same counter from a regular function, it works. E.g. def dummy_add(x, y): TASKS.inc() return x + y Could you please explain to me what I am doing wrong? | By default, Celery uses a process pool for workers. This doesn't play well with the Prometheus Python client, and you have to use its multiprocessing mode. You might rather want to use one of the existing Celery exporters (such as this one), which take a different approach: they start their own process and listen for events from workers, thus overcoming the drawbacks mentioned above. | 6 | 3 |
59,926,511 | 2020-1-27 | https://stackoverflow.com/questions/59926511/pyspark-cannot-import-name-onehotencoderestimator | I have just started learning Spark. Currently, I am trying to perform one-hot encoding on a single column from my dataframe. However, I cannot import the OneHotEncoderEstimator from pyspark. I have tried to import the OneHotEncoder (deprecated in 3.0.0); Spark can import it, but it lacks the transform function. Here is the output from my code below. If anyone has encountered a similar problem, please help. Thank you so much for your time!! | Your first problem is the "encoder object has no 'transform'" error. This is a category indexer. Before you can transform columns of an object, you must train the OneHotEncoderEstimator using the fit() function. That way your encoder object will learn from the data and will be able to transform it into encoded category vectors. Most category indexer models require the fit() function to learn from the data itself. So what you should do is encoder = OneHotEncoderEstimator(dropLast=False, inputCol="AgeIndex", outputCol="AgeVec") model = encoder.fit(df) encoded = model.transform(df) encoded.show() I also recommend reading the documentation before starting a project if you are new to something; documentation helps a lot. The section of Spark that covers transformation operations is posted here as a link: Spark Transformation Operations. Your second problem is the import error. Since you are using a notebook, I suggest you check your notebook's environment. Also, your version is a preview version, which is mostly aimed at developers and testers. For starters one should always go for the latest tested release. Try switching back to spark-2.4.4 and check the notebook's environment. | 6 | 4 |
59,926,723 | 2020-1-27 | https://stackoverflow.com/questions/59926723/select-rows-if-string-begins-with-certain-characters-in-pandas | I have a csv file as in the picture below. I'm trying to find any word that starts with the letter A or G (or any list of letters that I want), but my code returns an error. Any ideas what I'm doing wrong? This is my code: if len(sys.argv) == 1: print("please provide a CSV file to analyse") else: fileinput = sys.argv[1] wdata = pd.read_csv(fileinput) print( list(filter(startswith("a","g"), wdata)) ) | Use Series.str.startswith, convert the list to a tuple, and filter by DataFrame.loc with boolean indexing: wdata = pd.DataFrame({'words':['what','and','how','good','yes']}) L = ['a','g'] s = wdata.loc[wdata['words'].str.startswith(tuple(L)), 'words'] print (s) 1 and 3 good Name: words, dtype: object | 7 | 5 |
59,925,873 | 2020-1-27 | https://stackoverflow.com/questions/59925873/best-practice-for-conditionally-getting-values-from-python-dictionary | I have a dictionary in python with a pretty standard structure. I want to retrieve one value if present, and if not retrieve another value. If for some reason both values are missing I need to throw an error. Example of dicts: # Data that has been modified data_a = { "date_created": "2020-01-23T16:12:35+02:00", "date_modified": "2020-01-27T07:15:00+02:00" } # Data that has NOT been modified data_b = { "date_created": "2020-01-23T16:12:35+02:00", } What is the best practice for this? What could be an intuitive and easily readable way to do this? The way I do this at the moment: mod_date_a = data_a.get('date_modified', data_a.get('date_created', False)) # mod_data_a = "2020-01-27T07:15:00+02:00" mod_date_b = data_b.get('date_modified', data_b.get('date_created', False)) # mod_data_b = "2020-01-23T16:12:35+02:00" if not mod_date_a or not mod_date_b: log.error('Some ERROR') This nested get() just seems a little clumsy so I was wondering if anyone had a better solution for this. | Assuming there are no false values and you don't really need False but just anything false: mod_date_a = data_a.get('date_modified') or data_a.get('date_created') | 6 | 4 |
59,925,384 | 2020-1-27 | https://stackoverflow.com/questions/59925384/python-remove-elements-that-are-greater-than-a-threshold-from-a-list | I would like to remove elements that are greater than a threshold from a list. For example, a list with elements a = [1,9,2,10,3,6]. I would like to remove all elements that are greater than 5. The return should be [1,2,3]. I tried using enumerate and pop but it doesn't work. for i,x in enumerate(a): if x > 5: a.pop(i) | Try using a list comprehension: >>> a = [1,9,2,10,3,6] >>> [x for x in a if x <= 5] [1, 2, 3] This says, "make a new list of x values where x comes from a, but only if x is less than or equal to the threshold 5." The issue with the enumerate() and pop() approach is that it mutates the list while iterating over it -- somewhat akin to sawing off a tree limb while you're still sitting on the limb. So when (i, x) is (1, 9), the pop(i) changes a to [1,2,10,3,6], but then iteration advances to (2, 10), meaning that the value 2 never gets examined. It falls apart from there. FWIW, if you need to mutate the list in place, just reassign it with a slice: a[:] = [x for x in a if x <= 5] Hope this helps :-) | 21 | 39 |
59,923,717 | 2020-1-26 | https://stackoverflow.com/questions/59923717/is-there-a-way-of-subclassing-from-dict-and-collections-abc-mutablemapping-toget | Let's for the sake of example assume I want to subclass dict and have all keys capitalized: class capdict(dict): def __init__(self,*args,**kwds): super().__init__(*args,**kwds) mod = [(k.capitalize(),v) for k,v in super().items()] super().clear() super().update(mod) def __getitem__(self,key): return super().__getitem__(key.capitalize()) def __setitem__(self,key,value): super().__setitem__(key.capitalize(),value) def __delitem__(self,key): super().__detitem__(key.capitalize()) This works to an extent, >>> ex = capdict(map(reversed,enumerate("abc"))) >>> ex {'A': 0, 'B': 1, 'C': 2} >>> ex['a'] 0 but, of course, only for methods I remembered to implement, for example >>> 'a' in ex False is not the desired behavior. Now, the lazy way of filling in all the methods that can be derived from the "core" ones would be mixing in collections.abc.MutableMapping. Only, it doesn't work here. I presume because the methods in question (__contains__ in the example) are already provided by dict. Is there a way of having my cake and eating it? Some magic to let MutableMapping only see the methods I've overridden so that it reimplements the others based on those? | What you could do: This likely won't work out well (i.e. not the cleanest design), but you could inherit from MutableMapping first and then from dict second. Then MutableMapping would use whatever methods you've implemented (because they are the first in the lookup chain): >>> class D(MutableMapping, dict): def __getitem__(self, key): print(f'Intercepted a lookup for {key!r}') return dict.__getitem__(self, key) >>> d = D(x=10, y=20) >>> d.get('x', 0) Intercepted a lookup for 'x' 10 >>> d.get('z', 0) Intercepted a lookup for 'z' 0 Better way: The cleanest approach (easy to understand and test) is to just inherit from MutableMapping and then implement the required methods using a regular dict as the base data store (with composition rather than inheritance): >>> class CapitalizingDict(MutableMapping): def __init__(self, *args, **kwds): self.store = {} self.update(*args, **kwds) def __getitem__(self, key): key = key.capitalize() return self.store[key] def __setitem__(self, key, value): key = key.capitalize() self.store[key] = value def __delitem__(self, key): del self.store[key] def __len__(self): return len(self.store) def __iter__(self): return iter(self.store) def __repr__(self): return repr(self.store) >>> d = CapitalizingDict(x=10, y=20) >>> d {'X': 10, 'Y': 20} >>> d['x'] 10 >>> d.get('x', 0) 10 >>> d.get('z', 0) 0 >>> d['w'] = 30 >>> d['W'] 30 | 27 | 31 |
59,912,147 | 2020-1-25 | https://stackoverflow.com/questions/59912147/why-does-subclassing-in-python-slow-things-down-so-much | I was working on a simple class that extends dict, and I realized that key lookup and use of pickle are very slow. I thought it was a problem with my class, so I did some trivial benchmarks: (venv) marco@buzz:~/sources/python-frozendict/test$ python --version Python 3.9.0a0 (venv) marco@buzz:~/sources/python-frozendict/test$ sudo pyperf system tune --affinity 3 [sudo] password for marco: Tune the system configuration to run benchmarks Actions ======= CPU Frequency: Minimum frequency of CPU 3 set to the maximum frequency System state ============ CPU: use 1 logical CPUs: 3 Perf event: Maximum sample rate: 1 per second ASLR: Full randomization Linux scheduler: No CPU is isolated CPU Frequency: 0-3=min=max=2600 MHz CPU scaling governor (intel_pstate): performance Turbo Boost (intel_pstate): Turbo Boost disabled IRQ affinity: irqbalance service: inactive IRQ affinity: Default IRQ affinity: CPU 0-2 IRQ affinity: IRQ affinity: IRQ 0,2=CPU 0-3; IRQ 1,3-17,51,67,120-131=CPU 0-2 Power supply: the power cable is plugged Advices ======= Linux scheduler: Use isolcpus=<cpu list> kernel parameter to isolate CPUs Linux scheduler: Use rcu_nocbs=<cpu list> kernel parameter (with isolcpus) to not schedule RCU on isolated CPUs (venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s ' x = {0:0, 1:1, 2:2, 3:3, 4:4} ' 'x[4]' ......................................... Mean +- std dev: 35.2 ns +- 1.8 ns (venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s ' class A(dict): pass x = A({0:0, 1:1, 2:2, 3:3, 4:4}) ' 'x[4]' ......................................... Mean +- std dev: 60.1 ns +- 2.5 ns (venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s ' x = {0:0, 1:1, 2:2, 3:3, 4:4} ' '5 in x' ......................................... Mean +- std dev: 31.9 ns +- 1.4 ns (venv) marco@buzz:~/sources/python-frozendict/test$ python -m pyperf timeit --rigorous --affinity 3 -s ' class A(dict): pass x = A({0:0, 1:1, 2:2, 3:3, 4:4}) ' '5 in x' ......................................... Mean +- std dev: 64.7 ns +- 5.4 ns (venv) marco@buzz:~/sources/python-frozendict/test$ python Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) [GCC 9.2.1 20190909] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from timeit import timeit >>> class A(dict): ... def __reduce__(self): ... return (A, (dict(self), )) ... >>> timeit("dumps(x)", """ ... from pickle import dumps ... x = {0:0, 1:1, 2:2, 3:3, 4:4} ... """, number=10000000) 6.70694484282285 >>> timeit("dumps(x)", """ ... from pickle import dumps ... x = A({0:0, 1:1, 2:2, 3:3, 4:4}) ... """, number=10000000, globals={"A": A}) 31.277778962627053 >>> timeit("loads(x)", """ ... from pickle import dumps, loads ... x = dumps({0:0, 1:1, 2:2, 3:3, 4:4}) ... """, number=10000000) 5.767975459806621 >>> timeit("loads(x)", """ ... from pickle import dumps, loads ... x = dumps(A({0:0, 1:1, 2:2, 3:3, 4:4})) ... """, number=10000000, globals={"A": A}) 22.611666693352163 The results are really a surprise. While key lookup is 2x slower, pickle is 5x slower. How can this be? Other methods, like get(),__eq__() and __init__(), and iteration over keys(), values() and items() are as fast as dict. 
EDIT: I took a look at the source code of Python 3.9, and in Objects/dictobject.c it seems that the __getitem__() method is implemented by dict_subscript(). And dict_subscript() slows down subclasses only if the key is missing, since the subclass can implement __missing__() and it tries to see if it exists. But the benchmark was with an existing key. But I noticed something: __getitem__() is defined with the flag METH_COEXIST. And also __contains__(), the other method that is 2x slower, has the same flag. From the official documentation: The method will be loaded in place of existing definitions. Without METH_COEXIST, the default is to skip repeated definitions. Since slot wrappers are loaded before the method table, the existence of a sq_contains slot, for example, would generate a wrapped method named contains() and preclude the loading of a corresponding PyCFunction with the same name. With the flag defined, the PyCFunction will be loaded in place of the wrapper object and will co-exist with the slot. This is helpful because calls to PyCFunctions are optimized more than wrapper object calls. So if I understood correctly, in theory METH_COEXIST should speed things up, but it seems to have the opposite effect. Why? EDIT 2: I discovered something more. __getitem__() and __contains()__ are flagged as METH_COEXIST, because they are declared in PyDict_Type two times. They are both present, one time, in the slot tp_methods, where they are explicitly declared as __getitem__() and __contains()__. But the official documentation says that tp_methods are not inherited by subclasses. So a subclass of dict does not call __getitem__(), but calls the subslot mp_subscript. Indeed, mp_subscript is contained in the slot tp_as_mapping, that permits a subclass to inherit its subslots. The problem is that both __getitem__() and mp_subscript use the same function, dict_subscript. Is it possible that it's only the way it was inherited that slows it down? | Indexing and in are slower in dict subclasses because of a bad interaction between a dict optimization and the logic subclasses use to inherit C slots. This should be fixable, though not from your end. The CPython implementation has two sets of hooks for operator overloads. There are Python-level methods like __contains__ and __getitem__, but there's also a separate set of slots for C function pointers in the memory layout of a type object. Usually, either the Python method will be a wrapper around the C implementation, or the C slot will contain a function that searches for and calls the Python method. It's more efficient for the C slot to implement the operation directly, as the C slot is what Python actually accesses. Mappings written in C implement the C slots sq_contains and mp_subscript to provide in and indexing. Ordinarily, the Python-level __contains__ and __getitem__ methods would be automatically generated as wrappers around the C functions, but the dict class has explicit implementations of __contains__ and __getitem__, because the explicit implementations are a bit faster than the generated wrappers: static PyMethodDef mapp_methods[] = { DICT___CONTAINS___METHODDEF {"__getitem__", (PyCFunction)(void(*)(void))dict_subscript, METH_O | METH_COEXIST, getitem__doc__}, ... (Actually, the explicit __getitem__ implementation is the same function as the mp_subscript implementation, just with a different kind of wrapper.) 
Ordinarily, a subclass would inherit its parent's implementations of C-level hooks like sq_contains and mp_subscript, and the subclass would be just as fast as the superclass. However, the logic in update_one_slot looks for the parent implementation by trying to find the generated wrapper methods through an MRO search. dict doesn't have generated wrappers for sq_contains and mp_subscript, because it provides explicit __contains__ and __getitem__ implementations. Instead of inheriting sq_contains and mp_subscript, update_one_slot ends up giving the subclass sq_contains and mp_subscript implementations that perform an MRO search for __contains__ and __getitem__ and call those. This is much less efficient than inheriting the C slots directly. Fixing this will require changes to the update_one_slot implementation. Aside from what I described above, dict_subscript also looks up __missing__ for dict subclasses, so fixing the slot inheritance issue won't make subclasses completely on par with dict itself for lookup speed, but it should get them a lot closer. As for pickling, on the dumps side, the pickle implementation has a dedicated fast path for dicts, while the dict subclass takes a more roundabout path through object.__reduce_ex__ and save_reduce. On the loads side, the time difference is mostly just from the extra opcodes and lookups to retrieve and instantiate the __main__.A class, while dicts have a dedicated pickle opcode for making a new dict. If we compare the disassembly for the pickles: In [26]: pickletools.dis(pickle.dumps({0: 0, 1: 1, 2: 2, 3: 3, 4: 4})) 0: \x80 PROTO 4 2: \x95 FRAME 25 11: } EMPTY_DICT 12: \x94 MEMOIZE (as 0) 13: ( MARK 14: K BININT1 0 16: K BININT1 0 18: K BININT1 1 20: K BININT1 1 22: K BININT1 2 24: K BININT1 2 26: K BININT1 3 28: K BININT1 3 30: K BININT1 4 32: K BININT1 4 34: u SETITEMS (MARK at 13) 35: . STOP highest protocol among opcodes = 4 In [27]: pickletools.dis(pickle.dumps(A({0: 0, 1: 1, 2: 2, 3: 3, 4: 4}))) 0: \x80 PROTO 4 2: \x95 FRAME 43 11: \x8c SHORT_BINUNICODE '__main__' 21: \x94 MEMOIZE (as 0) 22: \x8c SHORT_BINUNICODE 'A' 25: \x94 MEMOIZE (as 1) 26: \x93 STACK_GLOBAL 27: \x94 MEMOIZE (as 2) 28: ) EMPTY_TUPLE 29: \x81 NEWOBJ 30: \x94 MEMOIZE (as 3) 31: ( MARK 32: K BININT1 0 34: K BININT1 0 36: K BININT1 1 38: K BININT1 1 40: K BININT1 2 42: K BININT1 2 44: K BININT1 3 46: K BININT1 3 48: K BININT1 4 50: K BININT1 4 52: u SETITEMS (MARK at 31) 53: . STOP highest protocol among opcodes = 4 we see that the difference between the two is that the second pickle needs a whole bunch of opcodes to look up __main__.A and instantiate it, while the first pickle just does EMPTY_DICT to get an empty dict. After that, both pickles push the same keys and values onto the pickle operand stack and run SETITEMS. | 10 | 15 |
59,901,350 | 2020-1-24 | https://stackoverflow.com/questions/59901350/what-exactly-does-the-gc-collect-function-do | I am having trouble understanding what exactly the Python function gc.collect() does. Does the function only collect those objects that are collectable, to free up the space at a later time once the gc threshold is reached? Or does gc.collect() collect objects and also automatically get rid of whatever it collected, so that the space can be freed immediately after executing the code? | It depends! If called with no argument or with generation=2 as an argument, it would free the objects that are collectable. If called with generation=1 it would not clear the free lists. From the documentation: With no arguments, run a full collection. The optional argument generation may be an integer specifying which generation to collect (from 0 to 2). A ValueError is raised if the generation number is invalid. The number of unreachable objects found is returned. The free lists maintained for a number of built-in types are cleared whenever a full collection or collection of the highest generation (2) is run. Not all items in some free lists may be freed due to the particular implementation, in particular float. | 7 | 5 |
59,879,577 | 2020-1-23 | https://stackoverflow.com/questions/59879577/pandas-getting-typeerror-only-integer-scalar-arrays-can-be-converted-to-a-sca | After renaming a DataFrame's column(s), I get an error when merging on the new column(s): import pandas as pd df1 = pd.DataFrame({'a': [1, 2]}) df2 = pd.DataFrame({'b': [3, 1]}) df1.columns = [['b']] df1.merge(df2, on='b') TypeError: only integer scalar arrays can be converted to a scalar index | Replaced the code tmp.columns = [['POR','POR_PORT']] with tmp.rename(columns={'Locode':'POR', 'Port Name':'POR_PORT'}, inplace=True) and it worked. | 11 | 6 |
59,904,631 | 2020-1-24 | https://stackoverflow.com/questions/59904631/python-class-constants-in-dataclasses | Understanding that the below are not true constants, attempting to follow PEP 8 I'd like to make a "constant" in my @dataclass in Python 3.7. @dataclass class MyClass: data: DataFrame SEED = 8675309 # Jenny's Constant My code used to be: class MyClass: SEED = 8675309 # Jenny's Constant def __init__(data): self.data = data Are these two equivalent in there handling of the seed? Is the seed now part of the init/eq/hash? Is there a preferred style for these constants? | They are the same. dataclass ignores unannotated variables when determining what to use to generate __init__ et al. SEED is just an unhinted class attribute. If you want to provide a type hint for a class attribute, you use typing.ClassVar to specify the type, so that dataclass won't mistake it for an instance attribute. from dataclasses import dataclass from typing import ClassVar @dataclass class MyClass: data: DataFrame SEED: ClassVar[int] = 8675309 | 33 | 53 |
59,902,102 | 2020-1-24 | https://stackoverflow.com/questions/59902102/why-is-imperative-mood-important-for-docstrings | The error code D401 for pydocstyle reads: First line should be in imperative mood. I often run into cases where I write a docstring, have this error thrown by my linter, and rewrite it -- but the two docstrings are semantically identical. Why is it important to have imperative mood for docstrings? | From the docstring of check_imperative_mood itself: """D401: First line should be in imperative mood: 'Do', not 'Does'. [Docstring] prescribes the function or method's effect as a command: ("Do this", "Return that"), not as a description; e.g. don't write "Returns the pathname ...". (We'll ignore the irony that this docstring itself would fail the test.) | 34 | 70 |
59,859,885 | 2020-1-22 | https://stackoverflow.com/questions/59859885/how-to-define-python-enum-properties-if-mysql-enum-values-have-space-in-their-na | I have Python Enum class like this: from enum import Enum class Seniority(Enum): Intern = "Intern" Junior_Engineer = "Junior Engineer" Medior_Engineer = "Medior Engineer" Senior_Engineer = "Senior Engineer" In MYSQL database, seniority ENUM column has values "Intern", "Junior Engineer", "Medior Engineer", "Senior Engineer". The problem is that I get an error: LookupError: "Junior Engineer" is not among the defined enum values This error has occurred when I call query like: UserProperty.query.filter_by(full_name='John Doe').first() seniority is enum property in the UserProperty model. class UserProperty(db.Model): ... seniority = db.Column(db.Enum(Seniority), nullable=True) ... For this class I've defined Schema class using marshmallow Schema and EnumField from marshmallow_enum package: class UserPropertySchema(Schema): ... seniority = EnumField(Seniority, by_value=True) ... What to do in this situation, because I can't define python class property name with space. How to force python to use values of defined properties instead of property names? | As Shenanigator stated in the comment of my question, we can use aliases to solve this problem. Seniority = Enum( value='Seniority', names=[ ('Intern', 'Intern'), ('Junior Engineer', 'Junior Engineer'), ('Junior_Engineer', 'Junior_Engineer'), ('Medior Engineer', 'Medior Engineer'), ('Medior_Engineer', 'Medior_Engineer'), ('Senior Engineer', 'Senior Engineer'), ('Senior_Engineer', 'Senior_Engineer') ] ) | 11 | 4 |
59,898,490 | 2020-1-24 | https://stackoverflow.com/questions/59898490/how-to-find-code-that-is-missing-type-annotations | I have a project that is fully annotated. Or at least I hope so, because it is entirely possible that there is a function or two somewhere in there that is missing type annotations. How can I find such functions (or any other blocks of code)? | You can use mypy for this. Just add some switches to the command call: $ mypy --disallow-untyped-calls --disallow-untyped-defs --disallow-incomplete-defs projectname This will find you all untyped defines plus incomplete defines and also warns you if you call a untyped function. For further information have a look at the untyped-definitions-and-calls section of the mypy documentation. | 12 | 15 |
59,894,720 | 2020-1-24 | https://stackoverflow.com/questions/59894720/keras-and-tensorboard-attributeerror-sequential-object-has-no-attribute-g | I am using keras and trying to plot the logs using tensorboard. Bellow you can find out the error I am getting and also the list of packages versions I am using. I can not understand it is giving me the error of 'Sequential' object has no attribute '_get_distribution_strategy'. Package: Keras 2.3.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 tensorboard 2.1.0 tensorflow 2.1.0 tensorflow-estimator 2.1.0 MODEL: model = Sequential() model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_shape=(X.shape[1],))) model.add(GlobalAveragePooling1D()) #model.add(Dense(10, activation='sigmoid')) model.add(Dense(len(CATEGORIES), activation='softmax')) model.summary() #opt = 'adam' # Here we can choose a certain optimizer for our model opt = 'rmsprop' model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) # Here we choose the loss function, input our optimizer choice, and set our metrics. # Create a TensorBoard instance with the path to the logs directory tensorboard = TensorBoard(log_dir='logs/{}'.format(time()), histogram_freq = 1, embeddings_freq = 1, embeddings_data = X) history = model.fit(X, Y, epochs=epochs, batch_size=batch_size, validation_split=0.1, callbacks=[tensorboard]) ERROR: C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\callbacks\tensorboard_v2.py:102: UserWarning: The TensorBoard callback does not support embeddings display when using TensorFlow 2.0. Embeddings-related arguments are ignored. warnings.warn('The TensorBoard callback does not support ' C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\indexed_slices.py:433: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. " Train on 1123 samples, validate on 125 samples Traceback (most recent call last): File ".\NN_Training.py", line 128, in <module> history = model.fit(X, Y, epochs=epochs, batch_size=batch_size, validation_split=0.1, callbacks=[tensorboard]) # Feed in the train set for X and y and run the model!!! File "C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 1239, in fit validation_freq=validation_freq) File "C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training_arrays.py", line 119, in fit_loop callbacks.set_model(callback_model) File "C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\callbacks\callbacks.py", line 68, in set_model callback.set_model(model) File "C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\callbacks\tensorboard_v2.py", line 116, in set_model super(TensorBoard, self).set_model(model) File "C:\Users\Bruno\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 1532, in set_model self.log_dir, self.model._get_distribution_strategy()) # pylint: disable=protected-access AttributeError: 'Sequential' object has no attribute '_get_distribution_strategy'``` | You are mixing imports between keras and tf.keras, they are not the same library and doing this is not supported. You should make all imports from one of the libraries, either keras or tf.keras. | 13 | 14 |
59,878,319 | 2020-1-23 | https://stackoverflow.com/questions/59878319/can-you-reverse-a-pytorch-neural-network-and-activate-the-inputs-from-the-output | Can we activate the outputs of a NN to gain insight into how the neurons are connected to input features? If I take a basic NN example from the PyTorch tutorials. Here is an example of a f(x,y) training example. import torch N, D_in, H, D_out = 64, 1000, 100, 10 x = torch.randn(N, D_in) y = torch.randn(N, D_out) model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), ) loss_fn = torch.nn.MSELoss(reduction='sum') learning_rate = 1e-4 for t in range(500): y_pred = model(x) loss = loss_fn(y_pred, y) model.zero_grad() loss.backward() with torch.no_grad(): for param in model.parameters(): param -= learning_rate * param.grad After I've finished training the network to predict y from x inputs. Is it possible to reverse the trained NN so that it can now predict x from y inputs? I don't expect y to match the original inputs that trained the y outputs. So I expect to see what features the model activates on to match x and y. If it is possible, then how do I rearrange the Sequential model without breaking all the weights and connections? | It is possible but only for very special cases. For a feed-forward network (Sequential) each of the layers needs to be reversible; that means the following arguments apply to each layer separately. The transformation associated with one layer is y = activation(W*x + b) where W is the weight matrix and b the bias vector. In order to solve for x we need to perform the following steps: Reverse activation; not all activation functions have an inverse though. For example the ReLU function does not have an inverse on (-inf, 0). If we used tanh on the other hand we can use its inverse which is 0.5 * log((1 + x) / (1 - x)). Solve W*x = inverse_activation(y) - b for x; for a unique solution to exist W must have similar row and column rank and det(W) must be non-zero. We can control the former by choosing a specific network architecture while the latter depends on the training process. So for a neural network to be reversible it must have a very specific architecture: all layers must have the same number of input and output neurons (i.e. square weight matrices) and the activation functions all need to be invertible. Code: Using PyTorch we will have to do the inversion of the network manually, both in terms of solving the system of linear equations as well as finding the inverse activation function. Consider the following example of a 1-layer neural network (since the steps apply to each layer separately extending this to more than 1 layer is trivial): import torch N = 10 # number of samples n = 3 # number of neurons per layer x = torch.randn(N, n) model = torch.nn.Sequential( torch.nn.Linear(n, n), torch.nn.Tanh() ) y = model(x) z = y # use 'z' for the reverse result, start with the model's output 'y'. for step in list(model.children())[::-1]: if isinstance(step, torch.nn.Linear): z = z - step.bias[None, ...] z = z[..., None] # 'torch.solve' requires N column vectors (i.e. shape (N, n, 1)). z = torch.solve(z, step.weight)[0] z = torch.squeeze(z) # remove the extra dimension that we've added for 'torch.solve'. elif isinstance(step, torch.nn.Tanh): z = 0.5 * torch.log((1 + z) / (1 - z)) print('Agreement between x and z: ', torch.dist(x, z)) | 12 | 10 |
59,880,963 | 2020-1-23 | https://stackoverflow.com/questions/59880963/how-to-print-docstring-for-class-attribute-element | I have a class: class Holiday(ActivitiesBaseClass): """Holiday is an activity that involves taking time off work""" Hotel = models.CharField(max_length=255) """ Name of hotel """ I can print the class docstring by typing: print(Holiday.__doc__) This outputs as: Holiday is an activity that involves taking time off work How can I print the docstring for Hotel which is a Class attribute/element? What I've tried print(Holiday.Hotel.__doc__) returns: A wrapper for a deferred-loading field. When the value is read from this object the first time, the query is executed. I've also tried the help() function to no avail. N.B Sphinx seems to be able to extract the docstrings of class attributes and include them in readthedocs so I’m hoping there’s a way | Don't think it's possible to add docstring for specific field, but you can use field's help_text argument instead: Hotel = models.CharField(max_length=255, help_text="Name of hotel") | 7 | 5 |
59,858,898 | 2020-1-22 | https://stackoverflow.com/questions/59858898/how-to-convert-a-video-on-disk-to-a-rtsp-stream | I have a video file on my local disk and i want to create an rtsp stream from it, which i am going to use in one of my project. One way is to create a rtsp stream from vlc but i want to do it with code (python would be better). I have tried opencv's VideoWritter like this import cv2 _dir = "/path/to/video/file.mp4" cap = cv2.VideoCapture(_dir) framerate = 25.0 out = cv2.VideoWriter( "appsrc ! videoconvert ! x264enc noise-reduction=10000 speed-preset=ultrafast tune=zerolatency ! rtph264pay config-interval=1 pt=96 ! tcpserversink host=127.0.0.1 port=5000 sync=false", 0, framerate, (1920, 1080), ) counter = 0 while cap.isOpened(): ret, frame = cap.read() if ret: out.write(frame) print(f"Read {counter} frames",sep='',end="\r",flush=True) counter += 1 if cv2.waitKey(1) & 0xFF == ord("q"): break else: break cap.release() out.release() But when i stream it on vlc like this vlc -v rtsp://127.0.0.1:5000 I am getting [00007fbb307a3e18] access_realrtsp access error: cannot connect to 127.0.0.1:5000 [00007fbb2c189f08] core input error: open of `rtsp://127.0.0.1:5000' failed [00007fbb307a4278] live555 demux error: Failed to connect with rtsp://127.0.0.1:5000 Gstreamer is another option but as i have never used it, so would be nice if someone points me in the right direction. | You tried to expose RTP protocol via TCP server but please note that RTP is not RTSP and that RTP (and RTCP) can only be part of RTSP. Anyways, there is a way to create RTSP server with GStreamer and Python by using GStreamer's GstRtspServer and Python interface for Gstreamer (gi package). Assuming that you already have Gstreamer on your machine, first install gi python package and then install Gstreamer RTSP server (which is not part of standard Gstreamer installation). Python code to expose mp4 container file via simple RTSP server #!/usr/bin/env python import sys import gi gi.require_version('Gst', '1.0') gi.require_version('GstRtspServer', '1.0') from gi.repository import Gst, GstRtspServer, GObject, GLib loop = GLib.MainLoop() Gst.init(None) class TestRtspMediaFactory(GstRtspServer.RTSPMediaFactory): def __init__(self): GstRtspServer.RTSPMediaFactory.__init__(self) def do_create_element(self, url): #set mp4 file path to filesrc's location property src_demux = "filesrc location=/path/to/dir/test.mp4 ! qtdemux name=demux" h264_transcode = "demux.video_0" #uncomment following line if video transcoding is necessary #h264_transcode = "demux.video_0 ! decodebin ! queue ! x264enc" pipeline = "{0} {1} ! queue ! rtph264pay name=pay0 config-interval=1 pt=96".format(src_demux, h264_transcode) print ("Element created: " + pipeline) return Gst.parse_launch(pipeline) class GstreamerRtspServer(): def __init__(self): self.rtspServer = GstRtspServer.RTSPServer() factory = TestRtspMediaFactory() factory.set_shared(True) mountPoints = self.rtspServer.get_mount_points() mountPoints.add_factory("/stream1", factory) self.rtspServer.attach(None) if __name__ == '__main__': s = GstreamerRtspServer() loop.run() Note that this code will expose RTSP stream named stream1 on default port 8554 I used qtdemux to get video from MP4 container. You could extend above pipeline to extract audio too (and expose it too via RTSP server) to decrease CPU processing you can only extract video without decoding it and encoding it again to H264. 
However, if transcoding is needed, I left one commented line that will do the job (but it might choke less powerful CPUs). You can play this with VLC vlc -v rtsp://127.0.0.1:8554/stream1 or with Gstreamer gst-launch-1.0 playbin uri=rtsp://127.0.0.1:8554/stream1 However, If you actually do not need RTSP but just end-to-end RTP following Gstreamer pipeline (that utilizes rtpbin) will do the job gst-launch-1.0 -v rtpbin name=rtpbin \ filesrc location=test.mp4 ! qtdemux name=demux \ demux.video_0 ! decodebin ! x264enc ! rtph264pay config-interval=1 pt=96 ! rtpbin.send_rtp_sink_0 \ rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 sync=true async=false and VLC can play it with vlc -v rtp://127.0.0.1:5000 | 10 | 16 |
59,860,465 | 2020-1-22 | https://stackoverflow.com/questions/59860465/pybind11-importerror-dll-not-found-when-trying-to-import-pyd-in-python-int | I built a .pyd in Visual Studio 2019 (Community) that provides a wrapper for some functionality that's only present in the LibRaw. The solution compiles successfully without any warnings or errors. The project uses LibRaw, OpenCV and pybind11 as well as Python.h and the corresponding .lib-file. When i try to import the .pyd inside the Python Interpreter i get: C:\Users\Tim.Hilt\source\repos\cr3_converter\Release>dir Datenträger in Laufwerk C: ist Acer Volumeseriennummer: EC36-E45E Verzeichnis von C:\Users\Tim.Hilt\source\repos\cr3_converter\Release 22.01.2020 11:28 <DIR> . 22.01.2020 11:28 <DIR> .. 22.01.2020 11:28 808 cr3_converter.exp 22.01.2020 11:28 3.068.361 cr3_converter.iobj 22.01.2020 11:28 785.552 cr3_converter.ipdb 22.01.2020 11:28 1.908 cr3_converter.lib 22.01.2020 11:28 4.190.208 cr3_converter.pdb 22.01.2020 11:28 953.856 cr3_converter.pyd 31.10.2019 16:22 26.408.085 IMG_0482_raw.CR3 7 Datei(en), 35.408.778 Bytes 2 Verzeichnis(se), 77.160.587.264 Bytes frei C:\Users\Tim.Hilt\source\repos\cr3_converter\Release>python Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 22:39:24) [MSC v.1916 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import cr3_converter Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: DLL load failed while importing cr3_converter: The specified module was not found. >>> import cr3_converter.pyd Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: DLL load failed while importing cr3_converter: The specified module was not found. >>> The Paths to the needed .dlls (Python and OpenCV in this case; LibRaw was linked fully statically) are set in the system-path. I ran Dependency-Walker, but could not find anything suspicious. Here is the corresponding Dependency Walker image. I also tried out another tool (Dependencies.exe, which is essentially a rewrite of Dependency Walker, but takes into account the API-MS-WIN-CORE-....dlls) and got an error, which looked like this: When i hover over the missing .dll, i can see a api-ms-win... module could not be found on disk. I have searched and found the module and added its directory-path to the system-path. Now the module isn't highlighted red anymore, but the C:\WINDOWS\SysWOW64\WS2_32.dll (red highlight at top of screenshot) still shows to have missing imports. Could this be an issue? How i've produced the .pyd: Created empty Visual Studio project (Win32; Python is also the 32-bit-installation) Changed project-settings to .dll-configuration and .pyd-fileextension Added include-paths to the header files for OpenCV, LibRaw and Pybind11 Added paths and input files for the linker for OpenCV, LibRaw and Python3.8 Built the solution (No Errors, No Warnings) Tried to import the resulting .pyd in the python-interpreter I've seen this question, where the OP had the problem of different Python-.dlls being loaded by the library, but in my case the library referenced by Dependency Walker is the same as the one i have in my path-variable. | I quietly assumed, that Windows searches for .dlls in the same directories as the ones listed in the systems (/users) PATH-variable. However, that is not the case, as ProcMon revealed. For now, i copied the missing .dlls to the folder that contains the .pyd and everything works. | 8 | 7 |
59,870,637 | 2020-1-23 | https://stackoverflow.com/questions/59870637/unsupported-operand-types-for-windowspath-and-str | The code I'm working on throws the error Unsupported operand type(s) for +: 'WindowsPath' and 'str'. I have tried many things, and none have fixed this (aside from removing the line with the error, but that's not helpful). For context, this script (when it's done) is supposed to: Find a file (mp3) based on the ID you type in (in the directory specified in SongsPath.txt) Back it up Then replace it with another file (renamed to the previous file's name) so that the program that fetches these files plays the new song instead of the old one, but can be restored to the original song at any time. (the songs are downloaded from newgrounds and saved by their newgrounds audio portal ID) I'm using Python 3.6.5 import os import pathlib from pathlib import Path nspt = open ("NewSongsPath.txt", "rt") nsp = Path (nspt.read()) spt = open("SongsPath.txt", "rt") sp = (Path(spt.read())) print("type the song ID:") ID = input() csp = str(path sp + "/" + ID + ".mp3") # this is the line throwing the error. sr = open(csp , "rb") sw = open(csp, "wb") print (sr.read()) | What is happening is that you are using the "+" character to concatenate 2 different types of data Instead of using the error line: csp = str(path sp + "/" + ID + ".mp3") Try to use it this way: csp = str(Path(sp)) fullpath = csp + "/" + ID + ".mp3" Use the 'fullpath' variable to open file. | 12 | 23 |
59,859,095 | 2020-1-22 | https://stackoverflow.com/questions/59859095/reinstall-packages-automatically-into-virtual-environment-after-python-minor-ver | I've got several virtual environments (dozens) lying on my disk made by the venv module of Python 3.6. Now I've upgraded to Ubuntu 19.10 in a haste and only afterwards noticed that 3.6 is not available at all for Ubuntu 19.10 from the generally acknowledged sources. I've managed to upgrade the Python versions of these virtual environments by locating bin/python3 under my home directory and running python3.7 -mvenv --upgrade on the containing folders. Now, while python3.7 -mvenv --upgrade upgrades the Python in the virtual environment, it does nothing to reinstall my previous package versions in the lib/python3.7/site-packages under that venv. I guess I could have done this by installing Python 3.6, pip freezeing the requirements from the venv and then upgrading the venv to Python 3.7, pip install -ring - if only there was Python 3.6 install available for my new OS. Is there any other way to do this in a rather automated fashion (perhaps mainly the pip freeze using the old lib/python3.6 directory) without me having to install Python 3.6 from source, using conda or installing 3.6 from some random PPAs? I want to upgrade all environments en masse so that in the future when I need to do something with a random environment it would continue working with Python 3.7. | In your new 3.7 venv you should have pkg_resources available - setuptools is automatically installed when created. If not, just pip install setuptools. setuptools library code is actually what pip is vendoring to make pip freeze work. But you can just freeze it manually. # in 3.7 runtime... import pkg_resources old_site_dir = ".venv/lib/python3.6/site-packages/" working_set = pkg_resources.WorkingSet([old_site_dir]) for dist in working_set: print(dist.as_requirement()) You can throw that output in a requirements.txt file and likely have a working reconstructed site, no python3.6 runtime required. Note that this method may not be 100% foolproof, because it is possible for projects to declare separate dependency trees for python3.6 and python3.7 by using environment markers in their distribution metadata (see PEP 508). It is also possible that items installed in your 3.6 site do not support 3.7 at all. However it is pretty uncommon to see that in a minor version bump between 3.6 and 3.7, so just using the working set should be "good enough" in practice. | 11 | 12 |
59,852,831 | 2020-1-22 | https://stackoverflow.com/questions/59852831/write-a-multi-line-string-to-a-text-file-using-python | I'm trying to write multiple lines of a string to a text file in Python3, but it only writes the single line. e.g Let's say printing my string returns this in the console; >> print(mylongstring) https://www.link1.com https://www.link2.com https://www.link3.com https://www.link4.com https://www.link5.com https://www.link6.com And i go to write that into a text file f = open("temporary.txt","w+") f.write(mylongstring) All that reads in my text file is the first link (link1.com) Any help? I can elaborate more if you want, it's my first post here after all. | Try closing the file: f = open("temporary.txt","w+") f.write(mylongstring) f.close() If that doesn't work try using: f = open("temporary.txt","w+") f.writelines(mylongstring) f.close() If that still doesn't work use: f = open("temporary.txt","w+") f.writelines([i + '\n' for i in mylongstring]) f.close() | 9 | 5 |
59,783,735 | 2020-1-17 | https://stackoverflow.com/questions/59783735/dataframe-convert-header-row-to-row-pandas | have a df with values df: 165 156 1 test greater 56gsa ------------------------------------- spin 201 2 normal lesser 12asgs pine 202 3 fast greater 5sasgs required output: 0 1 2 3 4 5 ------------------------------------- 165 156 1 test greater 56gsa spin 201 2 normal lesser 12asgs pine 202 3 fast greater 5sasgs | If DataFrame is created from file then header=None parameter is your friend: df = pd.read_csv(file, header=None) pandas >= 2.0: If not then convert column to one row DataFrame and concatenate (append not working anymore) to original data: df = pd.concat([df.columns.to_frame().T, df]) df.columns = range(len(df.columns)) print (df) 0 1 2 3 4 5 0 165 156 1 test greater 56gsa 1 spin 201 2 normal lesser 12asgs 2 pine 202 3 fast greater 5sasgs pandas < 2.0 (original answer): If not then convert column to one row DataFrame and DataFrame.append to original data: df = df.columns.to_frame().T.append(df, ignore_index=True) df.columns = range(len(df.columns)) print (df) 0 1 2 3 4 5 0 165 156 1 test greater 56gsa 1 spin 201 2 normal lesser 12asgs 2 pine 202 3 fast greater 5sasgs | 17 | 19 |
59,801,341 | 2020-1-18 | https://stackoverflow.com/questions/59801341/how-to-use-np-max-for-empty-numpy-array-without-valueerror-zero-size-array-to-r | I get a case that when I tried to use np.max() in an empty numpy array it will report such error messages. # values is an empty numpy array here max_val = np.max(values) ValueError: zero-size array to reduction operation maximum which has no identity So the way I think to fix it is that I try to deal with the empty numpy array first before calling the np.max() like follows: # add some values as missing values on purposes. def deal_empty_np_array(a:np.array): if a.size == 0: a = np.append(a, [-999999, -999999]) return a values = deal_empty_np_array(values) max_val = np.max(values); OR use the try catch way like this link. So I am wondering if there is a better solution for this awkward case. Thanks in advance. PS: Sorry for not giving a clean description before. | In [3]: np.max([]) --------------------------------------------------------------------------- ... ValueError: zero-size array to reduction operation maximum which has no identity But check the docs. In newer numpy ufunc like max take an initial parameter that lets you work with an empty array: In [4]: np.max([], initial=10) Out[4]: 10.0 | 15 | 24 |
59,753,517 | 2020-1-15 | https://stackoverflow.com/questions/59753517/what-order-should-these-elements-be-in-a-python-module | When present, what is the order that these elements should be declared in a Python module? Hash bang (#!/usr/bin/env python) Encoding (# coding: utf-8) Future imports (from __future__ import unicode_literals, ...) Docstring If declared last, will the docstring work in a call help(module)? | Hash bang. The kernel literally looks at the first two bytes of the file to see if they are equal to #!, so it won't work otherwise. Encoding. According to the Python Language Reference it must be "on the first or second line". Docstring. According to PEP 257, a docstring is "a string literal that occurs as the first statement in a module, function, class, or method definition", so it cannot go after any import statements. You can see for yourself that help(module) no longer reports your docstring if you put it in a different place. Future imports, because they cannot go before any of the above. | 13 | 14 |
59,801,387 | 2020-1-18 | https://stackoverflow.com/questions/59801387/how-to-install-mod-wsgi-into-apache-on-windows | Other similar answers are out of date or focus on a particular error and not the whole process. What is the full installation process of mod_wsgi into an Apache installation on Windows 10? | Install Microsoft Visual C++ Build Tools: https://visualstudio.microsoft.com/visual-cpp-build-tools/ Point MOD_WSGI_APACHE_ROOTDIR to your installation (default is C:/Apache24). Use forward slashes: set MOD_WSGI_APACHE_ROOTDIR=C:/Users/me/apache For PowerShell this would be (with backward slashes): $env:MOD_WSGI_APACHE_ROOTDIR="C:\Users\me\apache" Install mod-wsgi package: pip install mod-wsgi Note: Make sure that the python version you're using has the same architecture (32/64 bit) as your Apache version. Get module configuration: mod_wsgi-express module-config Copy the output of the previous command into your Apache's httpd.conf. When you restart Apache, mod_wsgi will be loaded. Check out the quickstart Hello World example to test your mod_wsgi installation. | 20 | 31 |
59,744,589 | 2020-1-15 | https://stackoverflow.com/questions/59744589/how-can-i-convert-the-string-2020-01-06t000000-000z-into-a-datetime-object | As the question says, I have a series of strings like '2020-01-06T00:00:00.000Z'. How can I convert this series to datetime using Python? I prefer the method on pandas. If not is there any method to solve this task? Thank all. string '2020-01-06T00:00:00.000Z' convert to 2020-01-06 00:00:00 under datetime object | With Python 3.7+, that can be achieved with datetime.fromisoformat() and some tweaking of the source string: from datetime import datetime datetime.fromisoformat('2020-01-06T00:00:00.000Z'[:-1] + '+00:00') Output: datetime.datetime(2020, 1, 6, 0, 0, tzinfo=datetime.timezone.utc) And here is a more Pythonic way to achieve the same result: from datetime import datetime from datetime import timezone datetime.fromisoformat( '2020-01-06T00:00:00.000Z'[:-1] ).astimezone(timezone.utc) Output: datetime.datetime(2020, 1, 6, 3, 0, tzinfo=datetime.timezone.utc) Finally, to format it as %Y-%m-%d %H:%M:%S, you can do: d = datetime.fromisoformat( '2020-01-06T00:00:00.000Z'[:-1] ).astimezone(timezone.utc) d.strftime('%Y-%m-%d %H:%M:%S') Output: '2020-01-06 00:00:00' | 20 | 26 |
59,775,038 | 2020-1-16 | https://stackoverflow.com/questions/59775038/visual-studio-code-syntax-highlighting-not-working | I am using Visual Studio Code (VSC) as my IDE. My computer just updated to Catalina 10.15.2 (19C57) and since the update, VSCode is no longer highlighting syntax errors. The extensions I have seem to be working and it recognizes my miniconda Python environment. Is there a solution for this yet? I was avoiding Catalina as I know it has caused a lot of issues but now that I was forced to install it I need a solution. | In my case, the Catalina installation didn't remove my Python installation. After checking as suggested by @Brett Cannon in his comment, the update to Catalina uninstalled some extensions from VS Code. These are not available in the VS Code extension Marketplace anymore so there must be an issue regarding compatibility. I fixed it by doing the following: Open the command palette (Command + Shift + P) Type Python: Select Linter and select pylint Select the Install with Conda option Restart VS Code Now it's working correctly, though it's still not shown in my extensions section in VS Code. It's necessary to point out that you will have to install pylint in every Python environment you are using, in my case I have multiple Conda environments. | 24 | 7 |
59,791,884 | 2020-1-17 | https://stackoverflow.com/questions/59791884/set-the-legend-location-of-a-pandas-plot | I know how to set the legend location of matplotlib plot with plt.legend(loc='lower left'), however, I am plotting with pandas method df.plot() and need to set the legend location to 'lower left'. Does anyone know how to do it? Edited: I am actually looking for a way to do it through pandas' df.plot(), not via plt.legend(loc='lower left') | With pandas 1.5.3 you can chain legend() behind plot() see matplotlib. Example: matched.set_index( matched.index.date ).plot(kind='barh', stacked=True ).legend( bbox_to_anchor=(1.0, 1.0), fontsize='small', ); | 38 | 11 |
59,796,680 | 2020-1-18 | https://stackoverflow.com/questions/59796680/disable-pip-install-timeout-for-slow-connections | I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day. For this post I would like to try to figure out how to deal with this in pip. The Problem Almost every time I pip install something it ends out timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a time out. This happens with many different things I have tried, big or small. Every time an install fails the next time starts all over again from 0%, no matter how far I got before. I get something along the lines of pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. What I want to happen Ideally I would like to either extend the definition of time pip uses before it declares a timeout or be able to disable the option of a timeout all together. I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well. Other Information Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too. | Use option --timeout <sec> to set socket time out. Also, as @Iain Shelvington mentioned, timeout = <sec> in pip configuration will also work. TIP: Every time you want to know something (maybe an option) about a command (tool), before googling, check the manual page of the command by using man <command> or use <command> --help or check that command's docs online will be very useful too. | 7 | 13 |
59,752,632 | 2020-1-15 | https://stackoverflow.com/questions/59752632/airflow-jinja-templating-in-params | I have an Airflow operator which allows me to query Athena which accepts a Jinja templated file as the query input. Usually, I pass variables such as table/database names, etc to the template for create table and add partition statements. This works fine for defined strings. My task definition looks like this: db = 'sample_db' table = 'sample_table' out = 's3://sample' p1='2020' p2='1' add_partition_task= AWSAthenaOperator( task_id='add_partition', query='add_partition.sql', params={'database': db, 'table_name': table, 'p1': p1 'p2': p2}, database=db, output_location=out ) The SQL file being templated looks like: ALTER TABLE {{ params.database }}.{{ params.table_name }} ADD IF NOT EXISTS PARTITION (partition1= '{{ params.p1 }}', partition2 = '{{ params.p2 }}') This templating works fine. The extension to this is to allow 'partition1' and 'partition2' to be defined by a jinja templated variable containing an XCOM pull from an earlier task which converts a date into Financial Year and Period. Using date as the partition is a possibility but I am interested in whether params can be forced to accept Jinja templates. The code I would like to use looks like the following: db = 'sample_db' table = 'sample_table' out = 's3://sample' p1 = '{{ task_instance.xcom_pull(task_ids="convert_to_partition", key="p1") }}' p2 = '{{ task_instance.xcom_pull(task_ids="convert_to_partition", key="p2") }}' add_partition_task= AWSAthenaOperator( task_id='add_partition', query='add_partition.sql', params={'database': db, 'table_name': table, 'p1': p1 'p2': p2}, database=db, output_location=out ) So now params.p1 and params.p2 contain a Jinja template. Obviously, params does not support jinja templating as the SQL rendered contains the string literal '{{ task_instance....' rather than the rendered XCOM value. Adding params to the template_fields in the operator implementation is not enough to force it to render the template. My operator only extends BaseOperator and uses an AthenaHook which extends AwsHook. Does anyone have some experience with passing templated variables in a params like structure or an alternative approach? | Since AWSAthenaOperator has both query as a templated field and accepts file extension .sql, you can include the jinja template in the files themselves. I modified your AWSAthenaOperator a bit to fit the example. add_partition_task= AWSAthenaOperator( task_id='add_partition', query='add_partition.sql', params={ 'database': db, 'table_name': table, } ) Here is what the add_partition.sql could look like. INSERT OVERWRITE TABLE {{ params.database }}.{{ params.table_name }} (day, month, year) SELECT * FROM db.table WHERE p1 = "{{ task_instance.xcom_pull(task_ids='convert_to_partition', key='p1') }}" AND p2 = "{{ task_instance.xcom_pull(task_ids='convert_to_partition', key='p2') }}" ; | 7 | 4 |
59,763,933 | 2020-1-16 | https://stackoverflow.com/questions/59763933/what-is-the-difference-between-the-random-choices-and-random-sample-function | I have the following list: list = [1,1,2,2]. After applying the sample method (rd.sample(list, 3)) the, output is [1, 1, 2]. After applying the choices method (rd.choices(list, 3)), the output is: [2, 1, 2]. What is the difference between these two methods? When should one be preferred over the other? | The fundamental difference is that random.choices() will (eventually) draw elements at the same position (always sample from the entire sequence, so, once drawn, the elements are replaced - with replacement), while random.sample() will not (once elements are picked, they are removed from the population to sample, so, once drawn the elements are not replaced - without replacement). Note that here replaced (replacement) should be understood as placed back (placement back) and not as a synonym of substituted (and substitution). To better understand it, let's consider the following example: import random random.seed(0) ll = list(range(10)) print(random.sample(ll, 10)) # [6, 9, 0, 2, 4, 3, 5, 1, 8, 7] print(random.choices(ll, k=10)) # [5, 9, 5, 2, 7, 6, 2, 9, 9, 8] As you can see, random.sample() does not produce repeating elements, while random.choices() does. In your example, both methods have results with repeating values because you have repeating values in the original sequence, but, in the case of random.sample() those repeating values must come from different positions of the original input. Eventually, you cannot sample() more than the size of the input sequence, while this is not an issue with choices(): # print(random.sample(ll, 20)) # ValueError: Sample larger than population or is negative print(random.choices(ll, k=20)) # [9, 3, 7, 8, 6, 4, 1, 4, 6, 9, 9, 4, 8, 2, 8, 5, 0, 7, 3, 8] A more generic and theoretical discussion of the sampling process can be found on Wikipedia. | 49 | 63 |
59,815,797 | 2020-1-20 | https://stackoverflow.com/questions/59815797/how-to-save-plotly-express-plot-into-a-html-or-static-image-file | However, I feel saving the figure with plotly.express is pretty tricky. How to save plotly.express or plotly plot into a individual html or static image file? Anyone can help? | Adding to @vestland 's answer about saving to HTML, another way to do it according to the documentation would be: import plotly.express as px # a sample scatter plot figure created fig = px.scatter(x=range(10), y=range(10)) fig.write_html("path/to/file.html") You can read about it further (controlling size of the HTML file) here: Interactive HTML Export in Python | 60 | 11 |
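The answer above covers the HTML half of the question; for the static-image half, a small sketch (assuming an image-export engine such as kaleido is installed, e.g. pip install kaleido):

```python
import plotly.express as px

fig = px.scatter(x=range(10), y=range(10))

fig.write_html("figure.html")    # interactive, self-contained HTML file
fig.write_image("figure.png")    # static image; requires an export engine such as kaleido
```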
59,760,328 | 2020-1-15 | https://stackoverflow.com/questions/59760328/how-does-torch-distributed-barrier-work | I've read all the documentations I could find about torch.distributed.barrier(), but still having trouble understanding how it's being used in this script and would really appreciate some help. So the official doc of torch.distributed.barrier says it "Synchronizes all processes.This collective blocks processes until the whole group enters this function, if async_op is False, or if async work handle is called on wait()." It's used in two places in the script: First place if args.local_rank not in [-1, 0] and not evaluate: torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache ... (preprocesses the data and save the preprocessed data) if args.local_rank == 0 and not evaluate: torch.distributed.barrier() Second place if args.local_rank not in [-1, 0]: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab ... (loads the model and the vocabulary) if args.local_rank == 0: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab I'm having trouble relating the comment in the code to the functionality of this function stated in the official doc. How does it make sure only the first process executes the code between the two calls of torch.distributed.barrier() and why it only checks whether the local rank is 0 before the second call? Thanks in advance! | First you need to understand the ranks. To be brief: in a multiprocessing context we typically assume that rank 0 is the first process or base process. The other processes are then ranked differently, e.g. 1, 2, 3, totalling four processes in total. Some operations are not necessary to be done in parallel or you just need one process to do some preprocessing or caching so that the other processes can use that data. In your example, if the first if statement is entered by the non-base processes (rank 1, 2, 3), they will block (or "wait") because they run into the barrier. They wait there, because barrier() blocks until all processes have reached a barrier, but the base process has not reached a barrier yet. So at this point the non-base processes (1, 2, 3) are blocked, but the base process (0) continues. The base process will do some operations (preprocess and cache data, in this case) until it reaches the second if-statement. There, the base process will run into a barrier. At this point, all processes have stopped at a barrier, meaning that all current barriers can be lifted and all processes can continue. Because the base process prepared the data, the other processes can now use that data. Perhaps the most important thing to understand is: when a process encounters a barrier it will block the position of the barrier is not important (not all processes have to enter the same if-statement, for instance) a process is blocked by a barrier until all processes have encountered a barrier, upon which those barriers are lifted for all processes | 21 | 52 |
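A stripped-down sketch of the pattern the answer above describes, assuming dist.init_process_group() has already been called; the cache-writing body is only a stand-in for the real preprocessing:

```python
import torch.distributed as dist

def build_cache_once(path="cache.txt"):
    if dist.get_rank() != 0:
        dist.barrier()                  # non-base ranks block here first

    if dist.get_rank() == 0:
        with open(path, "w") as f:      # stand-in for the expensive preprocessing
            f.write("preprocessed")
        dist.barrier()                  # rank 0 arriving lifts the barrier for everyone

    # From here on, every rank can safely read `path`.
```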
59,845,407 | 2020-1-21 | https://stackoverflow.com/questions/59845407/plotly-express-vs-altair-vega-lite-for-interactive-plots | Recently I am learning both Plotly express and Altair/Vega-Lite for interactive plotting. Both of them are quite impressive and I am wondering what their strengths and weaknesses are. Especially for creating interactive plots, are there any big differences between them and when is one more suitable than the other? | Trying to not get into personal preferences and too many details, here are some of the main similarities and differences between the two as far I am aware. Design principles Both Plotly express and Altair are high level declarative libraries, which means you express yourself in terms of data and relationships (like in seaborn, holoviews, and ggplot) rather than in terms of lower level plotting mechanics (like in matplotlib and bokeh). This requires less typing and lets your focus be on the data, but you also have less control of exact details in the plot. Both are interactive plotting packages based on underlying javascript libraries. Plotly express sits on top of plotly.py which is a Python wrapper for plotly.js whereas Altair is a wrapper around VegaLite.js which in turn is based on Vega.js. Both plotly.js and Vega are based on the D3 visualization library, which is the standard js viz library. Syntax One of the more fundamental differences is in the syntax. Plotly's syntax is more focused on having individual functions for each plot and then that function takes several parameters to control its behavior. For example, the violinplot function has a parameter for whether there should be a strip plot included as well. Altair focuses on having a graphical grammar where you compose charts from individual graphical grammar units just as you compose sentences from words. For example, if I wanted to combine two charts in Altair I would create them individually and add the them together via the layer operator (this is possible to an extend in Plotly also but not always straightforward with Plotly express). So Altair's syntactical principles are very similar to ggplot, whereas Plotly express is more (but not quite) like seaborn in its syntax. Interactivity Both are very capable and can create multipanel layouts of plots that are linked together via interactions, such as filtering or hover events that update the other plots. All interactivity in the core libraries themselves is client-side (happens in your browser, and still present when exporting a notebook to HTML). Server-side interactivity (requires a running Python server) can be achieved by pairing with an external dashboarding solution, which allows you to trigger a custom function to execute on a selection of points in a plot. For Plotly, this is their own solution Dash and for Altair this has recently been added in the Panel dashboarding library (and it might be implemented for Streamlit in the future). Altair is the only visualization package that I know of that has an interaction grammar, which allows you to compose interactions between widgets and plots according to similar principles as when creating the plots via the grammar of graphics. This makes the experience of creating plots consistent and can allow for increased creativity and flexibility when designing interactions. Plotly has support for animations in an intuitive way, and this can be a great if your data is a time series or similar. 
Appearance Please check out the Altair and Plotly express galleries to decide which aesthetics you prefer. Many of the defaults (background color, mark sizes, axis number, etc) are of course changeable (individually or via themes), but you will still get a good general idea for how your plots will look from spending time in the galleries. One notable difference is that Altair will keep plot elements and spacing constant while resizing the plot size to fit e.g. more categorical entries, whereas Plotly will modify spacing and the size of the elements in a plot to fit an overall plot size. For faceted subplots, Altair will keep each subplot a constant size and expand the total size of the chart as more are added, whereas Plotly will fit the subplots into the overall size of the plot and make each plot smaller as more are added. You can adjust both libraries to create plots of the size you want, but this is how they behave out of the box. Extras Plotly currently supports many more types of graphs and has some special functionality targeted at, for example, biological plots and image analysis. Plotly can accelerate performance with WebGL for certain types of plots, whereas Altair's performance can be scaled with VegaFusion. Both can also work with Datashader to some extent, but not as seamlessly as when using it with Bokeh/Holoviews. Plotly was created by a company that offers enterprise support for some of its products. Vegalite was developed by the same research group that developed D3. Both are open source. | 43 | 79 |
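To make the Syntax point above concrete, a tiny Altair sketch with made-up data showing the layer operator:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2]})   # made-up data

points = alt.Chart(df).mark_point().encode(x="x", y="y")
line = alt.Chart(df).mark_line().encode(x="x", y="y")

chart = points + line    # `+` layers charts; `|` and `&` concatenate them
```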
59,809,829 | 2020-1-19 | https://stackoverflow.com/questions/59809829/swap-shift-enter-and-enter-in-python-interactive-window-vscode | In the interactive window in vscode you press shift-enter to run the code you just typed and enter to go to the next line. Can I swap this? | The current answers explain how to make "enter" execute the command but they don't give you the other half: make shift+enter insert a new line. Here is the complete solution (I aggregated multiple solutions from various answers to similar questions to credit to everyone else who helped). Bonus: I come from MATLAB so I also made "F9" execute selection in the ipython window. Open keybindings.json (ctrl+shift+p, open keyboard shortcuts (JSON) Add the following: [ { "key": "enter", "command": "interactive.execute", "when": "!suggestWidgetVisible && resourceScheme == 'vscode-interactive'" }, { "key": "shift+enter", "command": "", "when": "!suggestWidgetVisible && resourceScheme == 'vscode-interactive'" }, { "key": "f9", "command": "jupyter.execSelectionInteractive", "when": "editorTextFocus && !findInputFocussed && !replaceInputFocussed && editorLangId == 'python'" } ] | 10 | 4 |
59,785,890 | 2020-1-17 | https://stackoverflow.com/questions/59785890/how-to-get-the-location-of-all-text-present-in-an-image-using-opencv | I have this image that contains text (numbers and alphabets) in it. I want to get the location of all the text and numbers present in this image. Also I want to extract all the text as well. How do I get the coordinates as well as the all the text (numbers and alphabets) in my image? For eg 10B, 44, 16, 38, 22B etc | Here's a potential approach using morphological operations to filter out non-text contours. The idea is: Obtain binary image. Load image, grayscale, then Otsu's threshold Remove horizontal and vertical lines. Create horizontal and vertical kernels using cv2.getStructuringElement() then remove lines with cv2.drawContours() Remove diagonal lines, circle objects, and curved contours. Filter using contour area cv2.contourArea() and contour approximation cv2.approxPolyDP() to isolate non-text contours Extract text ROIs and OCR. Find contours and filter for ROIs then OCR using Pytesseract. Removed horizontal lines highlighted in green Removed vertical lines Removed assorted non-text contours (diagonal lines, circular objects, and curves) Detected text regions import cv2 import numpy as np import pytesseract pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe" # Load image, grayscale, Otsu's threshold image = cv2.imread('1.jpg') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] clean = thresh.copy() # Remove horizontal lines horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1)) detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2) cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(clean, [c], -1, 0, 3) # Remove vertical lines vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,30)) detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2) cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(clean, [c], -1, 0, 3) cnts = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: # Remove diagonal lines area = cv2.contourArea(c) if area < 100: cv2.drawContours(clean, [c], -1, 0, 3) # Remove circle objects elif area > 1000: cv2.drawContours(clean, [c], -1, 0, -1) # Remove curve stuff peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) x,y,w,h = cv2.boundingRect(c) if len(approx) == 4: cv2.rectangle(clean, (x, y), (x + w, y + h), 0, -1) open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2)) opening = cv2.morphologyEx(clean, cv2.MORPH_OPEN, open_kernel, iterations=2) close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,2)) close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, close_kernel, iterations=4) cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x,y,w,h = cv2.boundingRect(c) area = cv2.contourArea(c) if area > 500: ROI = image[y:y+h, x:x+w] ROI = cv2.GaussianBlur(ROI, (3,3), 0) data = pytesseract.image_to_string(ROI, lang='eng',config='--psm 6') if data.isalnum(): cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2) 
print(data) cv2.imwrite('image.png', image) cv2.imwrite('clean.png', clean) cv2.imwrite('close.png', close) cv2.imwrite('opening.png', opening) cv2.waitKey() | 9 | 13 |
59,762,996 | 2020-1-16 | https://stackoverflow.com/questions/59762996/how-to-fix-attributeerror-partially-initialized-module | I am trying to run my script but keep getting this error: File ".\checkmypass.py", line 1, in <module> import requests line 3, in <module> response = requests.get(url) AttributeError: partially initialized module 'requests' has no attribute 'get' (most likely due to a circular import) How can I fix it? | Make sure the name of the file is not the same as the module you are importing – this will make Python think there is a circular dependency. Also check the URL and the package you are using. "Most likely due to a circular import" refers to a file (module) which has a dependency on something else and is trying to be imported while it's already been imported. Once it's correct, you should have something like this: import requests r = requests.get("http://google.com") print(r.status_code) # 200 | 136 | 115 |
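A quick diagnostic for the shadowing case described above (assuming the real requests library is installed): the printed path should point into site-packages, not into your own project.

```python
import requests

# If this prints a path inside your project rather than site-packages, a local
# file named requests.py is shadowing the library and causing the circular import.
print(requests.__file__)
```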
59,768,704 | 2020-1-16 | https://stackoverflow.com/questions/59768704/is-there-a-shortcut-in-vscode-to-execute-current-line-or-selection-in-debug-repl | I am developing with Python and commonly running code in an integrated terminal with Shift + Enter. However, when debugging the process seems to be more complicated. I need to copy the code, move focus to debug REPL (Ctrl + Shift + Y), paste, run and move focus back to the editor. Is there any easier way to do this? | If you use the vscode's integrated debugging you can set a shortcut for sending selection to debug Repl. I use this on my keybindings.json config file: { "key": "shift+alt+d", "command": "editor.debug.action.selectionToRepl" } The difference from the "workbench.action.terminal.runSelectedText" command is that you actually have to select the line to send to the debug Repl, does not work for just putting the cursor on the line and hitting the shortcut. | 26 | 26 |
59,773,675 | 2020-1-16 | https://stackoverflow.com/questions/59773675/why-am-i-getting-the-mysql-server-has-gone-away-exception-in-django | I'm working with Django 2.2.6. The same system that runs my django project also has a background service running, listening on a unix socket for requests. In Django Admin, if a user hits a button, Django sends a request on the unix socket, and the background service does something. My background service has full access to Django's ORM. It imports models from my project's models.py, and can query the database without any issues. The problem is that if I leave my django, and my background service running overnight, login to Django Admin, and hit the button, my background service throws an exception: django.db.utils.OperationalError: (2006, 'MySQL server has gone away') It appears that this is because the MySQL database has a timeout period, called wait_timeout. If a connection to the database isn't active for that long, MySQL will disconnect it. Unfortunately, Django doesn't notice, and tries to use it, throwing the error. Fortunately, Django has its own built-in CONN_MAX_AGE variable for each database defined in settings.py. If a database connection is older than the CONN_MAX_AGE, it shuts it down before a request and starts up a new one. Looking at my MySQL database: > show session variables where variable_name = "wait_timeout"; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | wait_timeout | 28800 | +---------------+-------+ Looking at my Django's CONN_MAX_AGE variable: # ./manage.py shell >>> from django.conf import settings >>> settings.DATABASES['default']['CONN_MAX_AGE'] 0 Note: the 'default' database is the only one I have defined in my settings.py Also note that both my MySQL wait_timeout, and my Django's CONN_MAX_AGE are default values - I haven't changed them. According to the Django docs here, a CONN_MAX_AGE value of 0 means: close database connections at the end of each request If django is meant to close the database connection after every request, why than am I running into this error? Why isn't it closing old connections the second I'm done running my query, and starting a new connection when I do a new query, hours later? Edit: Right now, my solution is to have my background service do a heartbeat. Once per hour seems to work fine. The heartbeat is just a simple, low-resource-consumption MySQL command like MyDjangoModel.objects.exists(). As long as it does a MySQL query to refresh the MySQL timeout, it works. Having this does add some complexity to my background service, as in one case, my otherwise single-threaded background service needed a background thread whose only job was to do heartbeats. If there's a simpler solution to this, or at least an explanation for why this happens, I'd like to hear it. | I had exactly the same issue than yours. I implemented a monitoring script using watchdogs library, and, by the end of "wait_timeout", MySQL error would be raised. After a few tries with "django.db.close_old_connections()" function, it still did not work, but I was attempting to close old connections every defined time interval, which was not working. I changed the close command to run only before the call of my custom management command (which is the command that will interact with db and used to crash with MySQL error) and it started to work. 
Apparently, from this page, the reason this happens is that the "close_old_connections" function is hooked only to HTTP request signals, so it will not be fired in custom scripts. Django's documentation doesn't mention that, and I honestly understood things the same way you did. So what you can try is to call close_old_connections() before interacting with the db: from django.db import close_old_connections close_old_connections() do_something_with_db() | 12 | 8 |
59,770,742 | 2020-1-16 | https://stackoverflow.com/questions/59770742/adding-the-line-of-identity-to-a-scatter-plot-using-altair | I have created a basic scatter plot to compare two variables using altair. I expect the variables to be strongly correlated and the points should end up on or close to the line of identity. How can I add the line of identity to the plot? I would like it to be a line similar to those created by mark_rule, but extending diagonally instead of vertically or horizontally. Here is as far as I have gotten: import altair as alt import numpy as np import pandas as pd norm = np.random.multivariate_normal([0, 0], [[2, 1.8],[1.8, 2]], 100) df = pd.DataFrame(norm, columns=['var1', 'var2']) chart = alt.Chart(df, width=500, height=500).mark_circle(size=100).encode( alt.X('var1'), alt.Y('var2'), ).interactive() line = alt.Chart( pd.DataFrame({'var1': [-4, 4], 'var2': [-4, 4]})).mark_line().encode( alt.X('var1'), alt.Y('var2'), ).interactive() chart + line The problems with this example is that the line doesn't extend forever when zooming (like a rule mark) and that the plot gets automatically scaled to the line endings instead of only the points. | It's not perfect but you could make the line longer and set the scale domain. import altair as alt import numpy as np import pandas as pd norm = np.random.multivariate_normal([0, 0], [[2, 1.8],[1.8, 2]], 100) df = pd.DataFrame(norm, columns=['var1', 'var2']) chart = alt.Chart(df, width=500, height=500).mark_circle(size=100).encode( alt.X('var1', scale=alt.Scale(domain=[-4,4])), alt.Y('var2', scale=alt.Scale(domain=[-4,4])), ).interactive() line = alt.Chart( pd.DataFrame({'var1': [-100, 100], 'var2': [-100, 100]})).mark_line().encode( alt.X('var1'), alt.Y('var2'), ).interactive() chart + line | 7 | 1 |
59,764,018 | 2020-1-16 | https://stackoverflow.com/questions/59764018/aws-lambda-in-python-import-parent-package-directory-in-lambda-function-handler | I have a directory structure like the following in my serverless application(simplest app to avoid clutter) which I created using AWS SAM with Python 3.8 as the runtime: ├── common │ └── a.py ├── hello_world │ ├── __init__.py │ ├── app.py │ └── requirements.txt └── template.yaml I would like to import common/a.py module inside the Lambda handler - hello_world/app.py. Now I know that I can import it normally in Python by adding the path to PYTHONPATH or sys.path, but it doesn't work when the code is run in Lambda inside a Docker container. When invoked, the Lambda handler function is run inside /var/task directory and the folder structure is not regarded. I tried inserting /var/task/common, /var/common, /var and even /common to sys.path programmatically like this: import sys sys.path.insert(0, '/var/task/common') from common.a import Alpha but I still get the error: ModuleNotFoundError: No module named 'common' I am aware of Lambda Layers but given my scenario, I would like to directly reference the common code in multiple Lambda functions without the need of uploading to a layer. I want to have something like the serverless-python-requirements plugin in the Serverless framework but in AWS SAM. So my question is, how should I add this path to common to PYTHONPATH or sys.path? Or is there an alternative workaround for this like [serverless-python-requirements][3] to directly import a module in a parent folder without using Lambda Layers? | I didn't find what I was looking for but I ended up with a solution to create a single Lambda function in the root which handles all the different API calls within the function. Yes my Lambda function is integrated with API Gateway, and I can get the API method and API path using event["httpMethod"] and event ["httpPath"] respectively. I can then put all the packages under the root and import them between each other. For example, say I have 2 API paths /items and /employees that need to be handled and both of them need to handle both GET and POST methods, the following code suffices: if event["path"] == '/items': if event["httpMethod"] == 'GET': ... elif event["httpMethod"] == 'POST': ... elif event["path"] == '/employees': if event["httpMethod"] == 'GET': ... if event["httpMethod"] == 'POST': ... So now I can have as much packages under this Lambda function. For example, the following is how repository looks like now: ├── application │ └── *.py ├── persistence │ └── *.py ├── models │ └── *.py └── rootLambdaFunction.py └── requirements.txt This way, I can import packages at will wherever I want within the given structure. | 18 | 3 |
59,789,689 | 2020-1-17 | https://stackoverflow.com/questions/59789689/spark-dag-differs-with-withcolumn-vs-select | Context In a recent SO-post, I discovered that using withColumn may improve the DAG when dealing with stacked/chain column expressions in conjunction with distinct windows specifications. However, in this example, withColumn actually makes the DAG worse and differs to the outcome of using select instead. Reproducible example First, some test data (PySpark 2.4.4 standalone): import pandas as pd import numpy as np from pyspark.sql import SparkSession, Window from pyspark.sql import functions as F spark = SparkSession.builder.getOrCreate() dfp = pd.DataFrame( { "col1": np.random.randint(0, 5, size=100), "col2": np.random.randint(0, 5, size=100), "col3": np.random.randint(0, 5, size=100), "col4": np.random.randint(0, 5, size=100), "col5": np.random.randint(0, 5, size=100), } ) df = spark.createDataFrame(dfp) df.show(5) +----+----+----+----+----+ |col1|col2|col3|col4|col5| +----+----+----+----+----+ | 0| 3| 2| 2| 2| | 1| 3| 3| 2| 4| | 0| 0| 3| 3| 2| | 3| 0| 1| 4| 4| | 4| 0| 3| 3| 3| +----+----+----+----+----+ only showing top 5 rows The example is simple. In contains 2 window specifications and 4 independent column expressions based on them: w1 = Window.partitionBy("col1").orderBy("col2") w2 = Window.partitionBy("col3").orderBy("col4") col_w1_1 = F.max("col5").over(w1).alias("col_w1_1") col_w1_2 = F.sum("col5").over(w1).alias("col_w1_2") col_w2_1 = F.max("col5").over(w2).alias("col_w2_1") col_w2_2 = F.sum("col5").over(w2).alias("col_w2_2") expr = [col_w1_1, col_w1_2, col_w2_1, col_w2_2] withColumn - 4 shuffles If withColumn is used with alternating window specs, the DAG creates unnecessary shuffles: df.withColumn("col_w1_1", col_w1_1)\ .withColumn("col_w2_1", col_w2_1)\ .withColumn("col_w1_2", col_w1_2)\ .withColumn("col_w2_2", col_w2_2)\ .explain() == Physical Plan == Window [sum(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_2#147L], [col3#90L], [col4#91L ASC NULLS FIRST] +- *(4) Sort [col3#90L ASC NULLS FIRST, col4#91L ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(col3#90L, 200) +- Window [sum(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_2#143L], [col1#88L], [col2#89L ASC NULLS FIRST] +- *(3) Sort [col1#88L ASC NULLS FIRST, col2#89L ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(col1#88L, 200) +- Window [max(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_1#145L], [col3#90L], [col4#91L ASC NULLS FIRST] +- *(2) Sort [col3#90L ASC NULLS FIRST, col4#91L ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(col3#90L, 200) +- Window [max(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_1#141L], [col1#88L], [col2#89L ASC NULLS FIRST] +- *(1) Sort [col1#88L ASC NULLS FIRST, col2#89L ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(col1#88L, 200) +- Scan ExistingRDD[col1#88L,col2#89L,col3#90L,col4#91L,col5#92L] select - 2 shuffles If all columns are passed with select, the DAG is correct. 
df.select("*", *expr).explain() == Physical Plan == Window [max(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_1#119L, sum(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_2#121L], [col3#90L], [col4#91L ASC NULLS FIRST] +- *(2) Sort [col3#90L ASC NULLS FIRST, col4#91L ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(col3#90L, 200) +- Window [max(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_1#115L, sum(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_2#117L], [col1#88L], [col2#89L ASC NULLS FIRST] +- *(1) Sort [col1#88L ASC NULLS FIRST, col2#89L ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(col1#88L, 200) +- Scan ExistingRDD[col1#88L,col2#89L,col3#90L,col4#91L,col5#92L] Question There is some existing information about why one should avoid withColumn, however they are mainly concerned with calling withColumn a lot of times and they do not address the issue of deviating DAGs (see here and here). Does anyone have an idea why the DAG differs between withColumn and select? Spark's optimization algorithms should apply in any case and should not be dependent on different ways to express the exact same thing. Thanks in advance. | This looks like a consequence of the the internal projection caused by withColumn. It's documented here in the Spark docs The official recommendation is to do as Jay recommended and instead do a select when dealing with multiple columns | 19 | 6 |
59,809,495 | 2020-1-19 | https://stackoverflow.com/questions/59809495/how-to-install-tensorflow-with-python-3-8 | Whenever I try to install TensorFlow with pip on Python 3.8, I get the error that TensorFlow is not found. I have realized later on that it is not supported by Python 3.8. How can I install TensorFlow on Python 3.8? | As of May 7, 2020, according to Tensorflow's Installation page with pip, Python 3.8 is now supported. Python 3.8 support requires TensorFlow 2.2 or later. You should be able to install it normally via pip. Prior to May 2020: As you mentioned, it is currently not supported by Python 3.8, but is by Python 3.7. You want to have virtualenv installed. You also need Python 3.7. Then you can just start a virtualenv with -p python3.7 and install it using pip like you did before: virtualenv --system-site-packages -p python3.7 DEST_DIR source ./DEST_DIR/bin/activate pip install --upgrade pip pip install --upgrade tensorflow | 13 | 15 |
59,821,618 | 2020-1-20 | https://stackoverflow.com/questions/59821618/how-to-use-yapf-or-black-in-vscode | I installed yapf using: conda install yapf and add next lines in my .vscode/settings.json file: { //"python.linting.pylintEnabled": true, //"python.linting.pycodestyleEnabled": false, //"python.linting.flake8Enabled": true, "python.formatting.provider": "yapf", "python.formatting.yapfArgs": [ " — style", "{based_on_style: pep8, indent_width: 4}" ], "python.linting.enabled": true, } But I can't understand how to use it - it doesn't show any error in a bad-formatted script: import pandas as pd class MyClass(object): def __init__(self, some_value: int): self.value = some_value def one_more_function(self, another_value): print(another_value) myObject = MyClass(45) myObject.one_more_function(2) my__object2 = MyClass(324) print('ok') def some_foo(): """ """ pass | The problem was in wrong settings. To use yapf, black or autopep8 you need: Install yapf / black / autopep8 (pip install black) Configure .vscode/settings.json in the next way: part of the file: { "python.linting.enabled": true, "python.linting.pylintPath": "pylint", "editor.formatOnSave": true, "python.formatting.provider": "yapf", // or "black" here "python.linting.pylintEnabled": true, } Key option - "editor.formatOnSave": true, this mean yapf formats your document every time you save it. | 27 | 39 |
59,838,238 | 2020-1-21 | https://stackoverflow.com/questions/59838238/importerror-cannot-import-name-gi-from-partially-initialized-module-gi-mo | Looks like I have broken my python installation when I wanted to switch to python 3.8. Using Ubuntu 18.04. Trying to use the gi, gives the following error: $ python Python 3.8.1 (default, Dec 31 2019, 18:42:42) [GCC 7.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from gi.repository import GLib, Gio Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3/dist-packages/gi/__init__.py", line 42, in <module> from . import _gi ImportError: cannot import name '_gi' from partially initialized module 'gi' (most likely due to a circular import) (/usr/lib/python3/dist-packages/gi/__init__.py) Tried running update-alternatives for python, but it tells me there is only one python alternative configured (3.8). Tried to reinstall python3-gi and python3.8. Still the same problem | I had the same issue. I linked python3 to python3.6, for me it was pointing to 3.8. That solved the issue. cd /usr/bin/ rm python3 ln -s python3.6 python3 Thats all. Now my system started working fine. | 43 | 52 |
59,823,283 | 2020-1-20 | https://stackoverflow.com/questions/59823283/could-not-load-dynamic-library-cudart64-101-dll-on-tensorflow-cpu-only-install | I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error? | Tensorflow 2.1+ What's going on? With the new Tensorflow 2.1 release, the default tensorflow pip package contains both CPU and GPU versions of TF. In previous TF versions, not finding the CUDA libraries would emit an error and raise an exception, while now the library dynamically searches for the correct CUDA version and, if it doesn't find it, emits the warning (The W in the beginning stands for warnings, errors have an E (or F for fatal errors) and falls back to CPU-only mode. In fact, this is also written in the log as an info message right after the warning (do note that if you have a higher minimum log level that the default, you might not see info messages). The full log is (emphasis mine): 2020-01-20 12:27:44.554767: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-01-20 12:27:44.554964: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Should I worry? How do I fix it? If you don't have a CUDA-enabled GPU on your machine, or if you don't care about not having GPU acceleration, no need to worry. If, on the other hand, you installed tensorflow and wanted GPU acceleration, check your CUDA installation (TF 2.1 requires CUDA 10.1, not 10.2 or 10.0). If you just want to get rid of the warning, you can adapt TF's logging level to suppress warnings, but that might be overkill, as it will silence all warnings. Tensorflow 1.X or 2.0: Your CUDA setup is broken, ensure you have the correct version installed. | 162 | 148 |
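If you only want to quiet the log line, one common approach (a sketch, not tied to a specific TF release) is to raise the C++ log level before importing TensorFlow:

```python
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"   # hide INFO and WARNING messages; must be set before the import

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))   # an empty list means CPU-only mode
```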
59,768,651 | 2020-1-16 | https://stackoverflow.com/questions/59768651/how-to-use-nox-with-poetry | I want to use nox in my project managed with poetry. What is not going well is that installing dev dependency in nox session. I have the noxfile.py as shown below: import nox from nox.sessions import Session from pathlib import Path __dir__ = Path(__file__).parent.absolute() @nox.session(python=PYTHON) def test(session: Session): session.install(str(__dir__)) # I want to use dev dependency here session.run("pytest") How can I install dev dependency in nox session? | Currently, session.install doesn't support poetry and install just runs pip in the shell. You can activate poetry with a more general method session.run. Example: @nox.session(python=False) def tests(session): session.run('poetry', 'shell') session.run('poetry', 'install') session.run('pytest') When you set up session, you can do everything by your own disabling creation of python virtualenv (python=False) and activating poetry's one with poetry shell. | 10 | 6 |
59,838,433 | 2020-1-21 | https://stackoverflow.com/questions/59838433/how-does-waitress-handle-concurrent-tasks | I'm trying to build a python webserver using Django and Waitress, but I'd like to know how Waitress handles concurrent requests, and when blocking may occur. While the Waitress documentation mentions that multiple worker threads are available, it doesn't provide a lot of information on how they are implemented and how the python GIL affects them (emphasis my own): When a channel determines the client has sent at least one full valid HTTP request, it schedules a "task" with a "thread dispatcher". The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel's output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available. There doesn't seem to be much information on Stackoverflow either. From the question "Is Gunicorn's gthread async worker analogous to Waitress?": Waitress has a master async thread that buffers requests, and enqueues each request to one of its sync worker threads when the request I/O is finished. These statements don't address the GIL (at least from my understanding) and it'd be great if someone could elaborate more on how worker threads work for Waitress. Thanks! | Here's how the event-driven asynchronous servers generally work: Start a process and listen to incoming requests. Utilizing the event notification API of the operating system makes it very easy to serve thousands of clients from single thread/process. Since there's only one process managing all the connections, you don't want to perform any slow (or blocking) tasks in this process. Because then it will block the program for every client. To perform blocking tasks, the server delegates the tasks to "workers". Workers can be threads (running in the same process) or separate processes (or subprocesses). Now the main process can keep on serving clients while workers perform the blocking tasks. How does Waitress handle concurrent tasks? Pretty much the same way I just described above. And for workers it creates threads, not processes. how the python GIL affects them Waitress uses threads for workers. So, yes they are affected by GIL in that they aren't truly concurrent though they seem to be. "Asynchronous" is the correct term. Threads in Python run inside a single process, on a single CPU core, and don't run in parallel. A thread acquires the GIL for a very small amount of time and executes its code and then the GIL is acquired by another thread. But since the GIL is released on network I/O, the parent process will always acquire the GIL whenever there's a network event (such as an incoming request) and this way you can stay assured that the GIL will not affect the network bound operations (like receiving requests or sending response). On the other hand, Python processes are actually concurrent: they can run in parallel on multiple cores. But Waitress doesn't use processes. Should you be worried? If you're just doing small blocking tasks like database read/writes and serving only a few hundred users per second, then using threads isn't really that bad. For serving a large volume of users or doing long running blocking tasks, you can look into using external task queues like Celery. 
This will be much better than spawning and managing processes yourself. | 22 | 16 |
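A minimal runnable sketch of the worker-pool knob discussed above; Flask stands in for any WSGI app (a Django project would be passed the same way via its WSGI callable):

```python
from flask import Flask            # any WSGI app works; Flask keeps the sketch short
from waitress import serve

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # `threads` is the fixed worker pool described above (default 4); the single
    # async master thread queues requests for these workers when all are busy.
    serve(app, host="127.0.0.1", port=8080, threads=4)
```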
59,815,698 | 2020-1-20 | https://stackoverflow.com/questions/59815698/mutagens-save-does-not-set-or-change-cover-art-for-mp3-files | I am trying to use Mutagen for changing ID3 (version 2.3) cover art for a bunch of MP3 files in the following way: from mutagen.mp3 import MP3 from mutagen.id3 import APIC file = MP3(filename) with open('Label.jpg', 'rb') as albumart: file.tags['APIC'] = APIC( encoding=3, mime='image/jpeg', type=3, desc=u'Cover', data=albumart.read() ) file.save(v2_version=3) However, the file (or at least the APIC tag) remains unchanged, as checked by reading the tag back. In the system file explorer, the file does show an updated Date modified, however. How can I get Mutagen to update the cover art correctly? | I needed to set the cover to the "APIC:" tag, instead of the "APIC" tag (which I guess is how IDv2.3 is specified). | 7 | 0 |
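A sketch of that one-key fix applied to the question's snippet (the file names are the question's own placeholders); mutagen's own keys for picture frames look like "APIC:&lt;description&gt;", and the only change from the question's code is the trailing colon, as in the accepted answer:

```python
from mutagen.mp3 import MP3
from mutagen.id3 import APIC

audio = MP3("song.mp3")                    # placeholder file name
with open("Label.jpg", "rb") as albumart:
    audio.tags["APIC:"] = APIC(            # trailing colon, per the accepted answer
        encoding=3,
        mime="image/jpeg",
        type=3,                            # 3 = front cover
        desc=u"Cover",
        data=albumart.read(),
    )
audio.save(v2_version=3)
```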
59,762,414 | 2020-1-16 | https://stackoverflow.com/questions/59762414/how-to-use-multiprocessing-to-drop-duplicates-in-a-very-big-list | Let's say I have a huge list containing random numbers for example L = [random.randrange(0,25000000000) for _ in range(1000000000)] I need to get rid of the duplicates in this list I wrote this code for lists containing a smaller number of elements def remove_duplicates(list_to_deduplicate): seen = set() result=[] for i in list_to_deduplicate: if i not in seen: result.append(i) seen.add(i) return result In the code above I create a set so I can memorize what numbers have already appeared in the list I'm working on if the number is not in the set then I add it to the result list I need to return and save it in the set so it won't be added again in the result list Now for 1000000 number in a list all is good I can get a result fast but for numbers superior to let's say 1000000000 problems arise I need to use the different cores on my machine to try and break up the problem and then combine the results from multiple processes My first guess was to make a set accessible to all processes but many complications will arise How can a process read while maybe another one is adding to the set and I don't even know if it is possible to share a set between processes I know we can use a Queue or a pipe but I'm not sure on how to use it Can someone give me an advice on what is the best way to solve this problem I am open to any new idea | I'm skeptic even your greatest list is big enough so that multiprocessing would improve timings. Using numpy and multithreading is probably your best chance. Multiprocessing introduces quite some overhead and increases memory consumption like @Frank Merrow rightly mentioned earlier. That's not the case (to that extend) for multithreading, though. It's important to not mix these terms up because processes and threads are not the same. Threads within the same process share their memory, distinct processes do not. The problem with going multi-core in Python is the GIL, which doesn't allow multiple threads (in the same process) to execute Python bytecode in parallel. Some C-extensions like numpy can release the GIL, this enables profiting from multi-core parallelism with multithreading. Here's your chance to get some speed up on top of a big improvement just by using numpy. from multiprocessing.dummy import Pool # .dummy uses threads import numpy as np r = np.random.RandomState(42).randint(0, 25000000000, 100_000_000) n_threads = 8 result = np.unique(np.concatenate( Pool(n_threads).map(np.unique, np.array_split(r, n_threads))) ).tolist() Use numpy and a thread-pool, split up the array, make the sub-arrays unique in separate threads, then concatenate the sub-arrays and make the recombined array once more unique again. The final dropping of duplicates for the recombined array is necessary because within the sub-arrays only local duplicates can be identified. For low entropy data (many duplicates) using pandas.unique instead of numpy.unique can be much faster. Unlike numpy.unique it also preserves order of appearance. Note that using a thread-pool like above makes only sense if the numpy-function is not already multi-threaded under the hood by calling into low-level math libraries. So, always test to see if it actually improves performance and don't take it for granted. 
Tested with 100M random generated integers in the range: High entropy: 0 - 25_000_000_000 (199560 duplicates) Low entropy: 0 - 1000 Code import time import timeit from multiprocessing.dummy import Pool # .dummy uses threads import numpy as np import pandas as pd def time_stmt(stmt, title=None): t = timeit.repeat( stmt=stmt, timer=time.perf_counter_ns, repeat=3, number=1, globals=globals() ) print(f"\t{title or stmt}") print(f"\t\t{min(t) / 1e9:.2f} s") if __name__ == '__main__': n_threads = 8 # machine with 8 cores (4 physical cores) stmt_np_unique_pool = \ """ np.unique(np.concatenate( Pool(n_threads).map(np.unique, np.array_split(r, n_threads))) ).tolist() """ stmt_pd_unique_pool = \ """ pd.unique(np.concatenate( Pool(n_threads).map(pd.unique, np.array_split(r, n_threads))) ).tolist() """ # ------------------------------------------------------------------------- print(f"\nhigh entropy (few duplicates) {'-' * 30}\n") r = np.random.RandomState(42).randint(0, 25000000000, 100_000_000) r = list(r) time_stmt("list(set(r))") r = np.asarray(r) # numpy.unique time_stmt("np.unique(r).tolist()") # pandas.unique time_stmt("pd.unique(r).tolist()") # numpy.unique & Pool time_stmt(stmt_np_unique_pool, "numpy.unique() & Pool") # pandas.unique & Pool time_stmt(stmt_pd_unique_pool, "pandas.unique() & Pool") # --- print(f"\nlow entropy (many duplicates) {'-' * 30}\n") r = np.random.RandomState(42).randint(0, 1000, 100_000_000) r = list(r) time_stmt("list(set(r))") r = np.asarray(r) # numpy.unique time_stmt("np.unique(r).tolist()") # pandas.unique time_stmt("pd.unique(r).tolist()") # numpy.unique & Pool time_stmt(stmt_np_unique_pool, "numpy.unique() & Pool") # pandas.unique() & Pool time_stmt(stmt_pd_unique_pool, "pandas.unique() & Pool") Like you can see in the timings below, just using numpy without multithreading already accounts for the biggest performance improvement. Also note pandas.unique() being faster than numpy.unique() (only) for many duplicates. high entropy (few duplicates) ------------------------------ list(set(r)) 32.76 s np.unique(r).tolist() 12.32 s pd.unique(r).tolist() 23.01 s numpy.unique() & Pool 9.75 s pandas.unique() & Pool 28.91 s low entropy (many duplicates) ------------------------------ list(set(r)) 5.66 s np.unique(r).tolist() 4.59 s pd.unique(r).tolist() 0.75 s numpy.unique() & Pool 1.17 s pandas.unique() & Pool 0.19 s | 7 | 7 |
59,817,473 | 2020-1-20 | https://stackoverflow.com/questions/59817473/sort-a-list-from-an-index-to-another-index | Suppose I have a list [2, 4, 1, 3, 5]. I want to sort the list just from index 1 to the end, which gives me [2, 1, 3, 4, 5] How can I do it in Python? (No extra spaces would be appreciated) | TL;DR: Use sorted with a slicing assignment to keep the original list object without creating a new one: l = [2, 4, 1, 3, 5] l[1:] = sorted(l[1:]) print(l) Output: [2, 1, 3, 4, 5] Longer Answer: After the list is created, we will make a slicing assignment: l[1:] = Now you might be wondering what does [1:], it is slicing the list and starts from the second index, so the first index will be dropped. Python's indexing starts from zero, : means get everything after the index before, but if it was [1:3] it will only get values that are in between the indexes 1 and 3, let's say your list is: l = [1, 2, 3, 4, 5] If you use: print(l[1:]) It will result in: [2, 3, 4, 5] And if you use: print(l[1:3]) It will result in: [2, 3] About slicing, read more here if you want to. And after slicing we have an equal sign =, that just simply changes what's before the = sign to what's after the = sign, so in this case, we use l[1:], and that gives [2, 3, 4, 5], it will change that to whatever is after the = sign. If you use: l[1:] = [100, 200, 300, 400] print(l) It will result in: [1, 100, 200, 300, 400] To learn more about it check out this. After that, we got sorted, which is default builtin function, it simple sorts the list from small to big, let's say we have the below list: l = [3, 2, 1, 4] If you use: print(sorted(l)) It will result in: [1, 2, 3, 4] To learn more about it check this. After that we come back to our first topic about slicing, with l[1:], but from here you know that it isn't only used for assignments, you can apply functions to it and deal with it, like here we use sorted. | 19 | 18 |
59,843,346 | 2020-1-21 | https://stackoverflow.com/questions/59843346/dict-pop-versus-dict-get-on-the-default-return-value | I'm trying to figure out what is the reason for having None as the default value for dict.get but no default value (without specifying the default value) for dict.pop {}.get('my_key') # output: None {}.pop('my_key') # output: KeyError: 'my_key' I was thinking that the reason for not having implicit default value for dict.pop is because you may have keys with value None so, in order to not get confused if your key is in the dictionary or not, an implicit default value for dict.pop doesn't make so much sense. But then again this explanation should be valid also for dict.get and isn't: {'my_key': None}.get('my_key') # output: None # but doesn't tell you if the key is truly in the dictionary or not | My take on this is to allow a dict to be used with different contexts. With dict.get(key) we always get a value even if the key is not present in dict. The default value can be provided. No exception is raised. The dict is not changed. With dict.pop(key) we get a value only when the key is present in dict, otherwise an exception is raised. We may avoid the exception by providing a default value. The dict is changed. To test for the presence of key we use key in dict. For dict.pop, this gives a similar interface to what list.pop, set.pop and deque.pop provides. In short a good example of the "principle of least surprise" and the Zen of Python (import this) :) | 9 | 2 |
59,802,468 | 2020-1-18 | https://stackoverflow.com/questions/59802468/post-install-script-with-python-poetry | Post-install script with Python setuptools Exactly this question, but with Poetry and no Setuptools. I want to run print('Installation finished, doing other things...') when my package is installed. With Setuptools you could just modify setup.py, but in Poetry there specifically is no setup.py. What I actually want to do is generate a default .mypackage_config file and place it somewhere useful. I don't see how to do this without arbitrary code, but Poetry does not allow arbitrary code for installation. Is there any way to do this? | It's not currently possible (and probably won't ever be) The entire idea of poetry is that a package can be installed without running any arbitrary Python code. Because of that, custom post-install scripts will probably never exist (from the author of poetry, the link you gave in your question). What you could do instead You could use setuptools, and modify the setup.py script, but it's probably easier to remove the need for a post-install script by removing the need for a default config file. Most Python tools, such as black, tend to assume default config parameters, unless there are settings in a config file (such as a pyproject.toml file) that overrides them, e.g.: [tool.black] # specifies this next section is the config for black line-length = 88 # changes some configuration parameters from the defaults target-version = ['py36', 'py37'] Jupyter has a command jupyterhub --generate-config to generate a default config file (source). If your package is a library, not a command-line tool, i.e. you use it by import-ing it in your Python code, I'd recommend just passing in the configuration as arguments to the constructor/functions, since that's the standard way of passing config to a library. As an example, look at PySerial's API. You can configure the library by passing args to the constructor serial.Serial, e.g.: import serial with serial.Serial( '/dev/ttyUSB0', # first config param, the port 19200, # second config param, the baudrate timeout=5, # sets the timeout config param ) as ser: | 12 | 12 |
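A sketch of the "defaults plus optional pyproject.toml overrides" pattern suggested above; the section name mypackage and the option names are hypothetical, and tomllib assumes Python 3.11+ (older versions can use the tomli package instead):

```python
from pathlib import Path
import tomllib   # Python 3.11+; on older versions, `import tomli as tomllib`

DEFAULTS = {"line_length": 88, "verbose": False}   # hypothetical options

def load_config(path="pyproject.toml", tool="mypackage"):
    config = dict(DEFAULTS)                        # start from built-in defaults
    p = Path(path)
    if p.exists():
        with p.open("rb") as f:                    # tomllib requires a binary file
            config.update(tomllib.load(f).get("tool", {}).get(tool, {}))
    return config                                  # no post-install step needed
```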
59,846,065 | 2020-1-21 | https://stackoverflow.com/questions/59846065/read-the-docs-build-fails-with-cannot-import-name-packagefinder-from-pip-in | The build of Sphinx docs on read-the-docs fails with the following error (complete log below): ImportError: cannot import name 'PackageFinder' from 'pip._internal.index' (/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/index/__init__.py) Did I do something wrong or is this a bug in read-the-docs? A local build of Sphinx docs runs fine. Complete error log on read-the-docs: Read the Docs build information Build id: 10299638 Project: cascade-python Version: latest Commit: a7d50bf781bd8076b10dd7024db4ccb628016c27 Date: 2020-01-21T17:03:12.876711Z State: finished Success: False [rtd-command-info] start-time: 2020-01-21T17:03:13.203354Z, end-time: 2020-01-21T17:03:13.215400Z, duration: 0, exit-code: 0 git remote set-url origin https://github.com/brunorijsman/cascade-python.git [rtd-command-info] start-time: 2020-01-21T17:03:13.276220Z, end-time: 2020-01-21T17:03:13.630658Z, duration: 0, exit-code: 0 git fetch origin --force --tags --prune --prune-tags --depth 50 From https://github.com/brunorijsman/cascade-python 2a28505..a7d50bf master -> origin/master [rtd-command-info] start-time: 2020-01-21T17:03:13.824496Z, end-time: 2020-01-21T17:03:13.876904Z, duration: 0, exit-code: 0 git checkout --force origin/master Previous HEAD position was 2a28505 Fix lint HEAD is now at a7d50bf Trigger docs build [rtd-command-info] start-time: 2020-01-21T17:03:13.941290Z, end-time: 2020-01-21T17:03:13.951085Z, duration: 0, exit-code: 0 git clean -d -f -f [rtd-command-info] start-time: 2020-01-21T17:03:16.657644Z, end-time: 2020-01-21T17:03:22.489740Z, duration: 5, exit-code: 0 python3.7 -mvirtualenv --no-site-packages --no-download /home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest Using base prefix '/home/docs/.pyenv/versions/3.7.3' New python executable in /home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/bin/python3.7 Not overwriting existing python script /home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/bin/python (you must use /home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/bin/python3.7) Installing setuptools, pip, wheel... done. 
[rtd-command-info] start-time: 2020-01-21T17:03:22.562608Z, end-time: 2020-01-21T17:03:23.258281Z, duration: 0, exit-code: 1 /home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/bin/python -m pip install --upgrade --cache-dir /home/docs/checkouts/readthedocs.org/user_builds/cascade-python/.cache/pip pip Traceback (most recent call last): File "/home/docs/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/docs/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/__main__.py", line 16, in <module> from pip._internal import main as _main # isort:skip # noqa File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/__init__.py", line 40, in <module> from pip._internal.cli.autocompletion import autocomplete File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py", line 8, in <module> from pip._internal.cli.main_parser import create_main_parser File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py", line 12, in <module> from pip._internal.commands import ( File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/commands/__init__.py", line 6, in <module> from pip._internal.commands.completion import CompletionCommand File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/commands/completion.py", line 6, in <module> from pip._internal.cli.base_command import Command File "/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 25, in <module> from pip._internal.index import PackageFinder ImportError: cannot import name 'PackageFinder' from 'pip._internal.index' (/home/docs/checkouts/readthedocs.org/user_builds/cascade-python/envs/latest/lib/python3.7/site-packages/pip/_internal/index/__init__.py) | The issue and the fix are described in read-the-docs issue #6554 (https://github.com/readthedocs/readthedocs.org/issues/6554): Currently all builds are failing because the automatic upgrade (since #4823 ) to pip 20.0 was buggy (see pypa/pip#7620 ). There's now a 20.0.1 release which seems to have fixed the problem for others ... but how can I force my readthedocs to also upgrade to the .1 version? The fix is to wipe out the build environment as follows (this is taken from https://docs.readthedocs.io/en/stable/guides/wipe-environment.html): Log in to read-the-docs Go to Versions Click on the Edit button of the version you want to wipe on the right side of the page Go to the bottom of the page and click the wipe link, next to the “Save” button Now you can re-build the version with a fresh build environment! This fix worked for me (but as of 26-Jan-2020 you have to wipe out the environment for every build -- see comment from Grimmy below). | 25 | 27 |
59,774,722 | 2020-1-16 | https://stackoverflow.com/questions/59774722/why-is-time-sleep-accuracy-influenced-by-chrome | I've noticed some strange behaviour that may or may not be specific to my system. (lenovo t430 running windows 8) With this script: import time now = time.time() while True: then = now now = time.time() dif = now - then print(dif) time.sleep(0.01) I get the following output (what I would consider nominal) with a browser open. However without a browser open I observe a severe per loop latency. Obviously this is counter-intuitive as I think anyone would expect better performance when you have fewer concurrant processes. Any insights or simple replication of these results would be appreciated. EDIT: Interestingly I observe similar latency with this code: import time now = time.time() def newSleep(mark,duration): count = 0 while time.time()-mark < duration: count+=1 print(count) while True: then = now now = time.time() dif = now - then print(dif) #time.sleep(0.01) newSleep(now,0.01) While it does provide additional insight - that is some instances of latent loops are due to lack of processor availability (noted by a count of 0 being printed)- I still notice the 15ms behavior where the printed count will be as high as 70k... and 10ms behavior with counts around 40k. | I extra fired up Windows 7 to replicate your findings and I can confirm it. It's a Windows thing with the type of timer used and a default resolution of 15.6 ms (minimum 0.5 ms). Applications can alter the current resolution (WinAPI function: timeBeginPeriod) and Chrome does so. This function affects a global Windows setting. Windows uses the lowest value (that is, highest resolution) requested by any process. Setting a higher resolution can improve the accuracy of time-out intervals in wait functions. However, it can also reduce overall system performance, because the thread scheduler switches tasks more often. High resolutions can also prevent the CPU power management system from entering power-saving modes. Setting a higher resolution does not improve the accuracy of the high-resolution performance counter. An article from 2014 in Forbes is covering a bug in Chrome which would set the resolution permanently to 1 ms no matter what current load would require - a problem because it's a system-wide effect with impact on energy consumption. From that article: In an OS like Windows, events are often set to run at intervals. To save power, the processor sleeps when nothing needs attention, and wakes at predefined intervals. This interval is what Chrome adjusts in Windows, so reducing it to 1.000ms means that the system is waking far more often than at 15.625ms. In fact, at 1.000ms the processor is waking 1000 times per second. The default, of 15.625ms means the processor wakes just 64 times per second to check on events that need attention. Microsoft itself says that tick rates of 1.000ms might increase power consumption by "as much as 25 per cent". You can get the default resolution from Python with time.get_clock_info(). namespace = time.get_clock_info('time') namespace.adjustable # True namespace.implementation # 'GetSystemTimeAsFileTime()' namespace.monotonic # False namespace.resolution # 0.015600099999999999 You can get the actual resolution from cmd with the ClockRes applet. | 24 | 15 |
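If you want the higher timer resolution without relying on Chrome, the WinAPI call mentioned in the answer (timeBeginPeriod) can be made directly from Python via ctypes. This is a Windows-only sketch that simply repeats the question's sleep loop under a 1 ms timer request; remember that the setting is system-wide and has the power-consumption cost described above.

import ctypes
import time

winmm = ctypes.WinDLL("winmm")
winmm.timeBeginPeriod(1)   # request 1 ms timer resolution (affects the whole system)
try:
    now = time.time()
    for _ in range(100):
        then, now = now, time.time()
        print(now - then)
        time.sleep(0.01)
finally:
    winmm.timeEndPeriod(1)  # always undo the request when done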
59,841,876 | 2020-1-21 | https://stackoverflow.com/questions/59841876/why-define-create-foo-in-a-django-models-manager-instead-of-overriding-create | Reading the Django docs, it advices to make a custom creation method for a model named Foo by defining it as create_foo in the manager: class BookManager(models.Manager): def create_book(self, title): book = self.create(title=title) # do something with the book return book class Book(models.Model): title = models.CharField(max_length=100) objects = BookManager() book = Book.objects.create_book("Pride and Prejudice") My question is that why is the previous one preferred to simply overriding the base class's create method: class BookManager(models.Manager): def create(self, title): book = self.model(title=title) # do something with the book book.save() return book class Book(models.Model): title = models.CharField(max_length=100) objects = BookManager() book = Book.objects.create("Pride and Prejudice") Imo it seems that only overriding create will prevent anyone from accidentally using it to make a illformed model instance, since create_foo can always be bypassed completely: class BookManager(models.Manager): def create_book(self, title): book = self.create(title=title, should_not_be_set_manually="critical text") return book class Book(models.Model): title = models.CharField(max_length=100) should_not_be_set_manually = models.CharField(max_length=100) objects = BookManager() # Can make an illformed Book!! book = Book.objects.create(title="Some title", should_not_be_set_manually="bad value") Is there any advantage in doing it like the docs suggest, or is actually overriding create just objectively better? | Yes, obviously, you can do that. But if you look closer to the example you are quoting from documentation, it is not about whether you should override create or not, it is about If you do so, however, take care not to change the calling signature as any change may prevent the model instance from being saved. preserving the calling signature. Because interfaces available for you may also be used by django internally. If you modify them, things may not break for you but for Django. In this example, they are not suggesting this for create but model constructor. Secondly, even standard interface for create is only taking keyword arguments def create(self, **kwargs): But if you modify it to take positional arguments, def create(self, title): it will break wherever it is used inside Django or in standard way. So you should extend existing functionality not modify and most probably break it. | 10 | 12 |
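To make the "extend, don't change the calling signature" point concrete, here is a sketch of an override that keeps the standard keyword-only convention of Manager.create and delegates to it, so any internal callers are unaffected; the title default is purely illustrative.

from django.db import models

class BookManager(models.Manager):
    def create(self, **kwargs):
        # Same **kwargs signature as the stock Manager.create
        kwargs.setdefault('title', 'Untitled')  # extra behaviour, just an example
        book = super().create(**kwargs)
        # do something with the book
        return book

class Book(models.Model):
    title = models.CharField(max_length=100)
    objects = BookManager()

Written this way, Book.objects.create(title="Pride and Prejudice") still works exactly as the default manager would, plus whatever extra behaviour you add.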
59,765,486 | 2020-1-16 | https://stackoverflow.com/questions/59765486/vscode-remote-jupyter-notebook-open-an-existing-notebook-in-a-specific-folder | I can connect to a remote Jupyter Notebook server with a token from VSCode through the "Python: Specify Jupyter server URI" command from the Command Palette. However, I didn't find a way to do 2 things: Open an existing Notebook on the remote Jupyter Notebook server. Specify a folder to connect to, where my existing notebook resides in the remote server. Is there a way of doing it? | Currently, VSCode doesn't support this functionality. See this issue: https://github.com/microsoft/vscode-python/issues/8161 | 7 | 5 |
59,847,074 | 2020-1-21 | https://stackoverflow.com/questions/59847074/unmelt-only-part-of-a-column-from-pandas-dataframe | I have the following example dataframe: df = pd.DataFrame(data = {'RecordID' : [1,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5], 'DisplayLabel' : ['Source','Test','Value 1','Value 2','Value3','Source','Test','Value 1','Value 2','Source','Test','Value 1','Value 2','Source','Test','Value 1','Value 2','Source','Test','Value 1','Value 2'], 'Value' : ['Web','Logic','S','I','Complete','Person','Voice','>20','P','Mail','OCR','A','I','Dictation','Understandable','S','I','Web','Logic','R','S']}) which creates this dataframe: +-------+----------+---------------+----------------+ | Index | RecordID | Display Label | Value | +-------+----------+---------------+----------------+ | 0 | 1 | Source | Web | | 1 | 1 | Test | Logic | | 2 | 1 | Value 1 | S | | 3 | 1 | Value 2 | I | | 4 | 1 | Value 3 | Complete | | 5 | 2 | Source | Person | | 6 | 2 | Test | Voice | | 7 | 2 | Value 1 | >20 | | 8 | 2 | Value 2 | P | | 9 | 3 | Source | Mail | | 10 | 3 | Test | OCR | | 11 | 3 | Value 1 | A | | 12 | 3 | Value 2 | I | | 13 | 4 | Source | Dictation | | 14 | 4 | Test | Understandable | | 15 | 4 | Value 1 | S | | 16 | 4 | Value 2 | I | | 17 | 5 | Source | Web | | 18 | 5 | Test | Logic | | 19 | 5 | Value 1 | R | | 20 | 5 | Value 2 | S | +-------+----------+---------------+----------------+ I am trying to "unmelt" though not exactly the source and test columns into new dataframe Columns such that it will look like this: +-------+----------+-----------+----------------+---------------+----------+ | Index | RecordID | Source | Test | Result | Value | +-------+----------+-----------+----------------+---------------+----------+ | 0 | 1 | Web | Logic | Value 1 | S | | 1 | 1 | Web | Logic | Value 2 | I | | 2 | 1 | Web | Logic | Value 3 | Complete | | 3 | 2 | Person | Voice | Value 1 | >20 | | 4 | 2 | Person | Voice | Value 2 | P | | 5 | 3 | Mail | OCR | Value 1 | A | | 6 | 3 | Mail | OCR | Value 2 | I | | 7 | 4 | Dictation | Understandable | Value 1 | S | | 8 | 4 | Dictation | Understandable | Value 2 | I | | 9 | 5 | Web | Logic | Value 1 | R | | 10 | 5 | Web | Logic | Value 2 | S | +-------+----------+-----------+----------------+---------------+----------+ It's my understanding that pivot and melt will do the entire DisplayLabel column and not just some of the values. Any help would be greatly appreciated as I have read the Pandas Melt and the Pandas Pivot as well as some references on stackoverflow and I can't seem to figure out a way to do this quickly. Thanks! 
| We can achieve your result by applying logic and pivotting, we split your data by checking if DisplayLabel contains Value and then we join them back together: mask = df['DisplayLabel'].str.contains('Value') df2 = df[~mask].pivot(index='RecordID', columns='DisplayLabel', values='Value') dfpiv = ( df[mask].rename(columns={'DisplayLabel':'Result'}) .set_index('RecordID') .join(df2) .reset_index() ) RecordID Result Value Source Test 0 1 Value 1 S Web Logic 1 1 Value 2 I Web Logic 2 1 Value3 Complete Web Logic 3 2 Value 1 >20 Person Voice 4 2 Value 2 P Person Voice 5 3 Value 1 A Mail OCR 6 3 Value 2 I Mail OCR 7 4 Value 1 S Dictation Understandable 8 4 Value 2 I Dictation Understandable 9 5 Value 1 R Web Logic 10 5 Value 2 S Web Logic If you want the exact column order as your example, use DataFrame.reindex: dfpiv.reindex(columns=['RecordID', 'Source', 'Test', 'Result', 'Value']) RecordID Source Test Result Value 0 1 Web Logic Value 1 S 1 1 Web Logic Value 2 I 2 1 Web Logic Value3 Complete 3 2 Person Voice Value 1 >20 4 2 Person Voice Value 2 P 5 3 Mail OCR Value 1 A 6 3 Mail OCR Value 2 I 7 4 Dictation Understandable Value 1 S 8 4 Dictation Understandable Value 2 I 9 5 Web Logic Value 1 R 10 5 Web Logic Value 2 S In detail - step by step: # mask all rows where "Value" is in column DisplayLabel mask = df['DisplayLabel'].str.contains('Value') 0 False 1 False 2 True 3 True 4 True 5 False 6 False 7 True 8 True 9 False 10 False 11 True 12 True 13 False 14 False 15 True 16 True 17 False 18 False 19 True 20 True Name: DisplayLabel, dtype: bool # select all rows which do NOT have "Value" in DisplayLabel df[~mask] RecordID DisplayLabel Value 0 1 Source Web 1 1 Test Logic 5 2 Source Person 6 2 Test Voice 9 3 Source Mail 10 3 Test OCR 13 4 Source Dictation 14 4 Test Understandable 17 5 Source Web 18 5 Test Logic # pivot the values in DisplayLabel to columns df2 = df[~mask].pivot(index='RecordID', columns='DisplayLabel', values='Value') DisplayLabel Source Test RecordID 1 Web Logic 2 Person Voice 3 Mail OCR 4 Dictation Understandable 5 Web Logic df[mask].rename(columns={'DisplayLabel':'Result'}) # rename the column DisplayLabel to Result .set_index('RecordID') # set RecordId as index so we can join df2 .join(df2) # join df2 back to our dataframe based RecordId .reset_index() # reset index so we get RecordId back as column RecordID Result Value Source Test 0 1 Value 1 S Web Logic 1 1 Value 2 I Web Logic 2 1 Value3 Complete Web Logic 3 2 Value 1 >20 Person Voice 4 2 Value 2 P Person Voice 5 3 Value 1 A Mail OCR 6 3 Value 2 I Mail OCR 7 4 Value 1 S Dictation Understandable 8 4 Value 2 I Dictation Understandable 9 5 Value 1 R Web Logic 10 5 Value 2 S Web Logic | 11 | 7 |
59,842,469 | 2020-1-21 | https://stackoverflow.com/questions/59842469/luigi-is-there-a-way-to-pass-false-to-a-bool-parameter-from-the-command-line | I have a Luigi task with a boolean parameter that is set to True by default: class MyLuigiTask(luigi.Task): my_bool_param = luigi.BoolParameter(default=True) When I run this task from terminal, I sometimes want to pass that parameter as False, but get the following result: $ MyLuigiTask --my_bool_param False error: unrecognized arguments: False Same obviously for false and 0... I understand that I can make the default False and then use the flag --my_bool_param if I want to make it True, but I much prefer to have the default True. Is there any way to do this, and still pass False from terminal? | Found the solution in Luigi docs: class MyLuigiTask(luigi.Task): my_bool_param = luigi.BoolParameter( default=True, parsing=luigi.BoolParameter.EXPLICIT_PARSING) def run(self): print(self.my_bool_param) Here EXPLICIT_PARSING tell Luigi that adding the flag --my_bool_param false in the terminal call to MyLuigiTask, will be parsed as store_false. Now we can have: $ MyLuigiTask --my_bool_param false False | 7 | 7 |
59,839,782 | 2020-1-21 | https://stackoverflow.com/questions/59839782/confusion-matrix-font-size | I have a Confusion Matrix with really small sized numbers but I can't find a way to change them. from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, rf_predictions) ax = plt.subplot() sns.set(font_scale=3.0) #edited as suggested sns.heatmap(cm, annot=True, ax=ax, cmap="Blues", fmt="g"); # annot=True to annotate cells # labels, title and ticks ax.set_xlabel('Predicted labels'); ax.set_ylabel('Observed labels'); ax.set_title('Confusion Matrix'); ax.xaxis.set_ticklabels(['False', 'True']); ax.yaxis.set_ticklabels(['Flase', 'True']); plt.show() thats the code I am using and the pic I get looks like: I would not mind changing the numbers of the classification by hand but I dont really want to do it for the labels aswell. EDIT: Figures are bigger now but the labels stay very small Cheers | Use sns.set to change the font size of the heatmap values. You can specify the font size of the labels and the title as a dictionary in ax.set_xlabel, ax.set_ylabel and ax.set_title, and the font size of the tick labels with ax.tick_params. from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, rf_predictions) ax = plt.subplot() sns.set(font_scale=3.0) # Adjust to fit sns.heatmap(cm, annot=True, ax=ax, cmap="Blues", fmt="g"); # Labels, title and ticks label_font = {'size':'18'} # Adjust to fit ax.set_xlabel('Predicted labels', fontdict=label_font); ax.set_ylabel('Observed labels', fontdict=label_font); title_font = {'size':'21'} # Adjust to fit ax.set_title('Confusion Matrix', fontdict=title_font); ax.tick_params(axis='both', which='major', labelsize=10) # Adjust to fit ax.xaxis.set_ticklabels(['False', 'True']); ax.yaxis.set_ticklabels(['False', 'True']); plt.show() | 9 | 10 |
59,833,435 | 2020-1-21 | https://stackoverflow.com/questions/59833435/zsh-command-not-found-conda-after-upgrading-to-catalina-and-even-after-reinstal | I recently updated my MacOS to Catalina, and now I have the infamous "zsh command not found: conda" when I enter "conda" in my terminal. I've read a number of solutions, and the easiest for me to try was to reinstall Anaconda in my home directory (specifically, the 2019.10 version of the installer installs in Users/myname/opt/anaconda3), as suggested by the folks at Anaconda here. Well, I did just that and it did not solve the problem. What am I missing? | From the Anaconda install docs: In order to initialize after the installation process is done, first run source <path to conda>/bin/activate and then run conda init. However, If you are on macOS Catalina, the new default shell is zsh. You will instead need to run source <path to conda>/bin/activate followed by conda init zsh. | 7 | 23 |
59,832,252 | 2020-1-20 | https://stackoverflow.com/questions/59832252/taking-the-maximum-values-of-each-row-in-a-tensor-pytorch | Suppose I have a tensor of the form [[-5, 0, -1], [3, 100, 87], [17, -34, 2], [45, 1, 25]] I want to find the maximum value in each row and return a rank 1 tensor as follows: [0, 100, 17, 45] How would I do this in PyTorch? | You can use the torch.max() function. So you can do something like x = torch.Tensor([[-5, 0, -1], [3, 100, 87], [17, -34, 2], [45, 1, 25]]) out, inds = torch.max(x,dim=1) and this will return the maximum values across each row (dimension 1). It will return max values with their indices. | 8 | 11 |
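For completeness, a short check of what the two return values contain for the tensor in the question (the expected outputs in the comments are computed by hand):

import torch

x = torch.Tensor([[-5, 0, -1], [3, 100, 87], [17, -34, 2], [45, 1, 25]])
out, inds = torch.max(x, dim=1)
print(out)   # tensor([  0., 100.,  17.,  45.])  <- the row maxima
print(inds)  # tensor([1, 1, 0, 0])              <- column index of each maximum
values = x.max(dim=1).values   # equivalent spelling if you only need the values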
59,829,077 | 2020-1-20 | https://stackoverflow.com/questions/59829077/how-to-display-r-squared-value-on-my-graph-in-python | I am a Python beginner so this may be more obvious than what I'm thinking. I'm using Matplotlib to graphically present my predicted data vs actual data via a neural network. I am able to calculate r-squared, and plot my data, but now I want to combine the value on the graph itself, which changes with every new run. My NN uses at least 4 different inputs, and gives one output. This is my end code for that: y_predicted = model.predict(X_test) This is how i calculate R2: # Using sklearn from sklearn.metrics import r2_score print r2_score(y_test, y_predicted) and this is my graph: fig, ax = plt.subplots() ax.scatter(y_test, y_predicted) ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4) ax.set_xlabel('Actual') ax.set_ylabel('Predicted') #regression line y_test, y_predicted = y_test.reshape(-1,1), y_predicted.reshape(-1,1) ax.plot(y_test, LinearRegression().fit(y_test, y_predicted).predict(y_test)) plt.show() It gives something like the graph attached, and the R2 varies everytime I change the epochs, or number of layers, or type of data etc. The red is my line of regression, which I will label later. Since R2 is a function I can't simply use the legend or text code. I would also like to display MSE. Can anyone help me out? Graph | If I understand correctly, you want to show R2 in the graph. You can add it to the graph title: ax.set_title('R2: ' + str(r2_score(y_test, y_predicted))) before plt.show() | 7 | 4 |
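If you prefer the numbers inside the axes rather than in the title, a sketch along these lines should work with the variables from the question, and it also adds the MSE that was asked about; the 0.05/0.95 placement is arbitrary.

from sklearn.metrics import mean_squared_error, r2_score

r2 = r2_score(y_test, y_predicted)
mse = mean_squared_error(y_test, y_predicted)
ax.text(0.05, 0.95,
        'R2 = {:.3f}\nMSE = {:.3f}'.format(r2, mse),
        transform=ax.transAxes,      # position in axes coordinates, not data coordinates
        verticalalignment='top')
plt.show()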
59,826,571 | 2020-1-20 | https://stackoverflow.com/questions/59826571/pandas-dataframe-copydeep-true-doesnt-actually-create-deep-copy | I've been experimenting for a while with pd.Series and pd.DataFrame and faced some strange problem. Let's say I have the following pd.DataFrame: df = pd.DataFrame({'col':[[1,2,3]]}) Notice, that this dataframe includes column containing list. I want to modify this dataframe's copy and return its modified version so that the initial one will remain unchanged. For the sake of simplicity, let's say I want to add integer '4' in its cell. I've tried the following code: def modify(df): dfc = df.copy(deep=True) dfc['col'].iloc[0].append(4) return dfc modify(df) print(df) The problem is that, besides the new copy dfc, the initial DataFrame df is also modified. Why? What should I do to prevent initial dataframes from modifying? My pandas version is 0.25.0 | From the docs here, in the Notes section: When deep=True, data is copied but actual Python objects will not be copied recursively, only the reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively copies object data (see examples below). This is referenced again in this issue on GitHub, where the devs state that: embedding mutable objects inside a. DataFrame is an antipattern So this function is working as the devs intend - mutable objects such as lists should not be embedded in DataFrames. I couldn't find a way to get copy.deepcopy to work as intended on a DataFrame, but I did find a fairly awful workaround using pickle: import pandas as pd import pickle df = pd.DataFrame({'col':[[1,2,3]]}) def modify(df): dfc = pickle.loads(pickle.dumps(df)) print(dfc['col'].iloc[0] is df['col'].iloc[0]) #Check if we've succeeded in deepcopying dfc['col'].iloc[0].append(4) print(dfc) return dfc modify(df) print(df) Output: False col 0 [1, 2, 3, 4] col 0 [1, 2, 3] | 7 | 8 |
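If you do keep lists inside a DataFrame despite the antipattern warning, a lighter workaround than pickling, for simple cases like lists of plain values, is to copy the frame and then rebuild the offending column element by element (deeply nested objects would still share inner references):

dfc = df.copy()
dfc['col'] = dfc['col'].apply(list)   # each cell gets a brand-new list object
dfc['col'].iloc[0].append(4)          # no longer touches the original df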
59,825,672 | 2020-1-20 | https://stackoverflow.com/questions/59825672/pandas-overwrite-values-in-multiple-columns-at-once-based-on-condition-of-values | I have such DataFrame: df = pd.DataFrame(data={ 'col0': [11, 22,1, 5] 'col1': ['aa:a:aaa', 'a:a', 'a', 'a:aa:a:aaa'], 'col2': ["foo", "foo", "foobar", "bar"], 'col3': [True, False, True, False], 'col4': ['elo', 'foo', 'bar', 'dupa']}) I want to get length of the list after split on ":" in col1, then I want to overwrite the values if length > 2 OR not overwrite the values if length <= 2. Ideally, in one line as fast as possible. Currently, I try but it returns ValueError. df[['col1', 'col2', 'col3']] = df.loc[df['col1'].str.split(":").apply(len) > 2], ("", "", False), df[['col1', 'col2', 'col3']]) EDIT: condition on col1. EDIT2: thank you for all the great and quickly provided answers. amazing! EDIT3: timing on 10^6 rows: @ansev 3.2657s @jezrael 0.8922s @anky_91 1.9511s | Use Series.str.count, add 1, compare by Series.gt and assign list to filtered columns in list: df.loc[df['col1'].str.count(":").add(1).gt(2), ['col1','col2','col3']] = ["", "", False] print (df) col0 col1 col2 col3 col4 0 11 False elo 1 22 a:a foo False foo 2 1 a foobar True bar 3 5 False dupa | 13 | 8 |
59,822,973 | 2020-1-20 | https://stackoverflow.com/questions/59822973/keep-duplicates-by-key-in-a-list-of-dictionaries | I have a list of dictionaries, and I would like to obtain those that have the same value in a key: my_list_of_dicts = [{ 'id': 3, 'name': 'John' },{ 'id': 5, 'name': 'Peter' },{ 'id': 2, 'name': 'Peter' },{ 'id': 6, 'name': 'Mariah' },{ 'id': 7, 'name': 'John' },{ 'id': 1, 'name': 'Louis' } ] I want to keep those items that have the same 'name', so, I would like to obtain something like: duplicates: [{ 'id': 3, 'name': 'John' },{ 'id': 5, 'name': 'Peter' },{ 'id': 2, 'name': 'Peter' }, { 'id': 7, 'name': 'John' } ] I'm trying (not successfully): duplicates = [item for item in my_list_of_dicts if len(my_list_of_dicts.get('name', None)) > 1] I have clear my problem with this code, but not able to do the right sentence | Another concise way using collections.Counter: from collections import Counter my_list_of_dicts = [{ 'id': 3, 'name': 'John' },{ 'id': 5, 'name': 'Peter' },{ 'id': 2, 'name': 'Peter' },{ 'id': 6, 'name': 'Mariah' },{ 'id': 7, 'name': 'John' },{ 'id': 1, 'name': 'Louis' } ] c = Counter(x['name'] for x in my_list_of_dicts) duplicates = [x for x in my_list_of_dicts if c[x['name']] > 1] | 7 | 10 |
59,820,159 | 2020-1-20 | https://stackoverflow.com/questions/59820159/identify-leading-and-trailing-nas-in-pandas-dataframe | Is there a way to identify leading and trailing NAs in a pandas.DataFrame? Currently I do the following but it seems not straightforward:
import pandas as pd
df = pd.DataFrame(dict(a=[0.1, 0.2, 0.2], b=[None, 0.1, None], c=[0.1, None, 0.1]))
lead_na = (df.isnull() == False).cumsum() == 0
trail_na = (df.iloc[::-1].isnull() == False).cumsum().iloc[::-1] == 0
trail_lead_nas = lead_na | trail_na
Any ideas how this could be expressed more efficiently?
Answer:
%timeit df.ffill().isna() | df.bfill().isna()
The slowest run took 29.24 times longer than the fastest. This could mean that an intermediate result is being cached.
31 ms ± 25.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit ((df.isnull() == False).cumsum() == 0) | ((df.iloc[::-1].isnull() == False).cumsum().iloc[::-1] == 0)
255 ms ± 66.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | How about this
df.ffill().isna() | df.bfill().isna()
Out[769]:
       a      b      c
0  False   True  False
1  False  False  False
2  False   True  False
df = pd.concat([df] * 1000, ignore_index=True)
In [134]: %%timeit
     ...: lead_na = (df.isnull() == False).cumsum() == 0
     ...: trail_na = (df.iloc[::-1].isnull() == False).cumsum().iloc[::-1] == 0
     ...: trail_lead_nas = lead_na | trail_na
     ...:
11.8 ms ± 105 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [135]: %%timeit
     ...: df.ffill().isna() | df.bfill().isna()
     ...:
2.1 ms ± 50 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 7 | 6
59,816,481 | 2020-1-20 | https://stackoverflow.com/questions/59816481/how-to-convert-pandas-dataframe-to-hierarchical-dictionary | I have the following pandas dataframe: df1 = pd.DataFrame({'date': [200101,200101,200101,200101,200102,200102,200102,200102],'blockcount': [1,1,2,2,1,1,2,2],'reactiontime': [350,400,200,250,100,300,450,400]}) I am trying to create a hierarchical dictionary, with the values of the embedded dictionary as lists, that looks like this: {200101: {1:[350, 400], 2:[200, 250]}, 200102: {1:[100, 300], 2:[450, 400]}} How would I do this? The closest I get is using this code: df1.set_index('date').groupby(level='date').apply(lambda x: x.set_index('blockcount').squeeze().to_dict()).to_dict() Which returns: {200101: {1: 400, 2: 250}, 200102: {1: 300, 2: 400}} | Here is another way using pivot_table: d = df1.pivot_table(index='blockcount',columns='date', values='reactiontime',aggfunc=list).to_dict() print(d) {200101: {1: [350, 400], 2: [200, 250]}, 200102: {1: [100, 300], 2: [450, 400]}} | 16 | 22 |
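An equivalent result without pivoting, in case the pivot_table syntax feels opaque, is a dictionary comprehension over a groupby, using the same df1 as above:

d = {date: grp.groupby('blockcount')['reactiontime'].apply(list).to_dict()
     for date, grp in df1.groupby('date')}
print(d)
# {200101: {1: [350, 400], 2: [200, 250]}, 200102: {1: [100, 300], 2: [450, 400]}}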
59,811,781 | 2020-1-19 | https://stackoverflow.com/questions/59811781/tf-function-valueerror-creating-variables-on-a-non-first-call-to-a-function-de | I would like to know why this function: @tf.function def train(self,TargetNet,epsilon): if len(self.experience['s']) < self.min_experiences: return 0 ids=np.random.randint(low=0,high=len(self.replay_buffer['s']),size=self.batch_size) states=np.asarray([self.experience['s'][i] for i in ids]) actions=np.asarray([self.experience['a'][i] for i in ids]) rewards=np.asarray([self.experience['r'][i] for i in ids]) next_states=np.asarray([self.experience['s1'][i] for i in ids]) dones = np.asarray([self.experience['done'][i] for i in ids]) q_next_actions=self.get_action(next_states,epsilon) q_value_next=TargetNet.predict(next_states) q_value_next=tf.gather_nd(q_value_next,tf.stack((tf.range(self.batch_size),q_next_actions),axis=1)) targets=tf.where(dones, rewards, rewards+self.gamma*q_value_next) with tf.GradientTape() as tape: estimates=tf.math.reduce_sum(self.predict(states)*tf.one_hot(actions,self.num_actions),axis=1) loss=tf.math.reduce_sum(tf.square(estimates - targets)) variables=self.model.trainable_variables gradients=tape.gradient(loss,variables) self.optimizer.apply_gradients(zip(gradients,variables)) gives ValueError: Creating variables on a non-first call to a function decorated with tf.function. Whereas this code which is very similiar: @tf.function def train(self, TargetNet): if len(self.experience['s']) < self.min_experiences: return 0 ids = np.random.randint(low=0, high=len(self.experience['s']), size=self.batch_size) states = np.asarray([self.experience['s'][i] for i in ids]) actions = np.asarray([self.experience['a'][i] for i in ids]) rewards = np.asarray([self.experience['r'][i] for i in ids]) states_next = np.asarray([self.experience['s2'][i] for i in ids]) dones = np.asarray([self.experience['done'][i] for i in ids]) value_next = np.max(TargetNet.predict(states_next), axis=1) actual_values = np.where(dones, rewards, rewards+self.gamma*value_next) with tf.GradientTape() as tape: selected_action_values = tf.math.reduce_sum( self.predict(states) * tf.one_hot(actions, self.num_actions), axis=1) loss = tf.math.reduce_sum(tf.square(actual_values - selected_action_values)) variables = self.model.trainable_variables gradients = tape.gradient(loss, variables) self.optimizer.apply_gradients(zip(gradients, variables)) Does not throw an error.Please help me understand why. EDIT:I removed the parameter epsilon from the function and it works.Is it because the @tf.function decorator is valid only for single argument functions? | Using tf.function you're converting the content of the decorated function: this means that TensorFlow will try to compile your eager code into its graph representation. The variables, however, are special objects. In fact, when you were using TensorFlow 1.x (graph mode), you were defining the variables only once and then using/updating them. In tensorflow 2.0, if you use pure eager execution, you can declare and re-use the same variable more than once since a tf.Variable - in eager mode - is just a plain Python object that gets destroyed as soon as the function ends and the variable, thus, goes out of scope. In order to make TensorFlow able to correctly convert a function that creates a state (thus, that uses Variables) you have to break the function scope, declaring the variables outside of the function. 
In short, if you have a function that works correctly in eager mode, like: def f(): a = tf.constant([[10,10],[11.,1.]]) x = tf.constant([[1.,0.],[0.,1.]]) b = tf.Variable(12.) y = tf.matmul(a, x) + b return y You have to change it's structure to something like: b = None @tf.function def f(): a = tf.constant([[10, 10], [11., 1.]]) x = tf.constant([[1., 0.], [0., 1.]]) global b if b is None: b = tf.Variable(12.) y = tf.matmul(a, x) + b print("PRINT: ", y) tf.print("TF-PRINT: ", y) return y f() in order to make it work correctly with the tf.function decorator. I covered this (and others) scenario in several blog posts: the first part analyzes this behavior in the section Handling states breaking the function scope (however I suggest to read it from the beginning and to read also part 2 and 3). | 11 | 11 |
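A common alternative to the module-level global shown above is to hang the variable on an object, so it is created at most once across traces. This is only a sketch of that pattern applied to the small example function, not to the DQN training code from the question:

import tensorflow as tf

class F(tf.Module):
    def __init__(self):
        self.b = None

    @tf.function
    def __call__(self):
        a = tf.constant([[10., 10.], [11., 1.]])
        x = tf.constant([[1., 0.], [0., 1.]])
        if self.b is None:
            self.b = tf.Variable(12.)   # created only on the first trace
        return tf.matmul(a, x) + self.b

f = F()
print(f())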
59,813,807 | 2020-1-19 | https://stackoverflow.com/questions/59813807/understanding-invalid-decimal-literal | 100_year = date.today().year - age + 100 ^ SyntaxError: invalid decimal literal I'm trying to understand what the problem is. | Python identifiers cannot start with a number. The caret points at year because the tokenizer reads 100_ as the start of a numeric literal (underscore is a valid thousands separator in Python >= 3.6, so 100_000 is a valid integer literal) and then hits year, which cannot continue a number, hence the "invalid decimal literal" message. Rename the variable so it starts with a letter or an underscore. | 13 | 27
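The fix is simply the rename mentioned in the answer; for example (the age value is a placeholder just so the snippet runs):

from datetime import date

age = 25  # placeholder value
year_100 = date.today().year - age + 100
print(year_100)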
59,810,276 | 2020-1-19 | https://stackoverflow.com/questions/59810276/why-is-my-poetry-virtualenv-using-the-system-python-instead-of-the-pyenv-python | I've recently installed both Pyenv and Poetry and want to create a new Python 3.8 project. I've set both the global and local versions of python to 3.8.1 using the appropriate Pyenv commands (pyenv global 3.8.1 for example). When I run pyenv version in my terminal the output is 3.8.1. as expected. Now, the problem is that when I create a new python project with Poetry (poetry new my-project), the generated pyproject.toml file creates a project with python 2.7: [tool.poetry] name = "my-project" version = "0.1.0" description = "" authors = ["user <[email protected]>"] [tool.poetry.dependencies] python = "^2.7" [tool.poetry.dev-dependencies] pytest = "^4.6" [build-system] requires = ["poetry>=0.12"] build-backend = "poetry.masonry.api" It seems that Poetry defaults back to the system version of Python. How do I change this so that it uses the version installed with Pyenv? Edit I'm using MacOS, which comes bundled with Python 2.7. I think that might be causing some of the issues here. I've reinstalled Python 3.8 again with Pyenv, but when I hit Poetry install I get the following error: The currently activated Python version 2.7.16 is not supported by the project (^3.8). Trying to find and use a compatible version. [NoCompatiblePythonVersionFound] Poetry was unable to find a compatible version. If you have one, you can explicitly use it via the "env use" command. Should I create an environment explicitly for the project using Pyenv or should the project be able to access the correct Python version after running pyenv local 3.8.1.? When I do the latter, nothing changes and I still get the same errors. | Alright, I figured the problem. A little embarrassingly, I had not run pyenv shell 3.8.1 before running any of the other commands. Everything works now. Thank you all for your efforts. | 105 | 41 |
59,780,302 | 2020-1-17 | https://stackoverflow.com/questions/59780302/pip3-install-pyqt5-user-fails | Errors are present when trying to install PyQt5 via pip3. The automated message wants me to add more detail, but I don't have any. All the detail is in the code. ➜ ~ pip3 install PyQt5 --user Collecting PyQt5 Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f2c97cfeb50>: Failed to establish a new connection: [Er rno -2] Name or service not known')': /simple/pyqt5/ Using cached https://files.pythonhosted.org/packages/3a/fb/eb51731f2dc7c22d8e1a63ba88fb702727b324c6352183a32f27f73b8116/PyQt5-5.14.1.tar.gz Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... error Complete output from command /usr/bin/python3 /usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp3yjy_ooq: Traceback (most recent call last): File "/usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py", line 64, in prepare_metadata_for_build_wheel hook = backend.prepare_metadata_for_build_wheel AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py", line 207, in <module> main() File "/usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py", line 197, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py", line 67, in prepare_metadata_for_build_wheel config_settings) File "/usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py", line 95, in _get_wheel_metadata_from_wheel whl_basename = backend.build_wheel(metadata_directory, config_settings) File "/tmp/pip-build-env-1ms8fm3e/overlay/lib64/python3.7/site-packages/sipbuild/api.py", line 51, in build_wheel project = AbstractProject.bootstrap('pep517') File "/tmp/pip-build-env-1ms8fm3e/overlay/lib64/python3.7/site-packages/sipbuild/abstract_project.py", line 82, in bootstrap project.setup(pyproject, tool, tool_description) File "/tmp/pip-build-env-1ms8fm3e/overlay/lib64/python3.7/site-packages/sipbuild/project.py", line 387, in setup self.apply_user_defaults(tool) File "project.py", line 62, in apply_user_defaults super().apply_user_defaults(tool) File "/tmp/pip-build-env-1ms8fm3e/overlay/lib/python3.7/site-packages/pyqtbuild/project.py", line 86, in apply_user_defaults super().apply_user_defaults(tool) File "/tmp/pip-build-env-1ms8fm3e/overlay/lib64/python3.7/site-packages/sipbuild/project.py", line 202, in apply_user_defaults self.builder.apply_user_defaults(tool) File "/tmp/pip-build-env-1ms8fm3e/overlay/lib/python3.7/site-packages/pyqtbuild/builder.py", line 68, in apply_user_defaults "specify a working qmake or add it to PATH") sipbuild.pyproject.PyProjectOptionException ---------------------------------------- Command "/usr/bin/python3 /usr/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp3yjy_ooq" failed with error code 1 in /tmp/pip-install-6w4gkbyk/PyQt5 Tried installing Qt directly, in order to get QMake in path, but to no avail. 
➜ ~ qmake --version QMake version 2.01a Using Qt version 4.8.7 in /usr/lib64 ➜ ~ OS information (Fedora 30) 5.2.11-200.fc30.x86_64 #1 SMP Thu Aug 29 12:43:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux More versions ➜ ~ pip3 --version pip 19.0.3 from /usr/lib/python3.7/site-packages/pip (python 3.7) ➜ ~ python3 --version Python 3.7.5 ➜ ~ | I had the same error installing tensorflow. Upgrading "setuptools" and "pip" to the latest minor version worked for me. | 8 | 12 |
59,799,635 | 2020-1-18 | https://stackoverflow.com/questions/59799635/keep-common-rows-within-every-group-of-a-pandas-dataframe | Given the following pandas data frame: | a b --+----- 0 | 1 A 1 | 2 A 2 | 3 A 3 | 4 A 4 | 1 B 5 | 2 B 6 | 3 B 7 | 1 C 8 | 3 C 9 | 4 C If you group it by column b I want to perform an action that keeps only the rows where they have column a in common. The result would be the following data frame: | a b --+----- 0 | 1 A 2 | 3 A 4 | 1 B 6 | 3 B 7 | 1 C 8 | 3 C Is there some built in method to do this? | You can try pivot_table with dropna here then filter using sreries.isin : s = df.pivot_table(index='a',columns='b',aggfunc=len).dropna().index df[df['a'].isin(s)] Similarly with crosstab: s = pd.crosstab(df['a'],df['b']) df[df['a'].isin(s[s.all(axis=1)].index)] a b 0 1 A 2 3 A 4 1 B 6 3 B 7 1 C 8 3 C | 7 | 9 |
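The same filter can be written with groupby/nunique, which some find easier to read: keep the a values that appear in every b group (df as in the question).

n_groups = df['b'].nunique()
keep = df.groupby('a')['b'].nunique()
result = df[df['a'].isin(keep[keep == n_groups].index)]
print(result)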
59,790,440 | 2020-1-17 | https://stackoverflow.com/questions/59790440/how-to-sort-a-list-of-sub-lists-by-the-contents-of-sub-lists-where-sub-lists-co | I have a list containing thousands of sub-lists. Each of these sub-lists contain a combination of mixed strings and boolean values, for example: lst1 = [['k', 'b', False], ['k', 'a', True], ['a', 'a', 'a'], ['a', 'b', 'a'], ['a', 'a' , False], ...] I want to sort this list in accordance with the contents of the sub-lists, like: lst2 = [['a', 'a', 'a'], ['a', 'a' , False], ['a', 'b', 'a'], ['k', 'a', True], ['k', 'b', False], ...] I've tried sorting it like this: lst2 = sorted([list(sorted(x)) for x in lst1]) print(lst2) This doesn't work because of the combination of boolean values with strings in some fields, so I get TypeError: '<' not supported between instances of 'bool' and 'str'. I've also tried a brute force method, creating every possible combination and then checking them to see if which are in the first list: col1 = ['a', 'b', 'c', d, e, f, g, h, i, j, k, ..., True, False] col2 = ['a', 'b', 'c', d, e, f, g, h, i, j, k, ..., True, False] col3 = ['a', 'b', 'c', d, e, f, g, h, i, j, k, ..., True, False] lst2 = list() for t1 in col1: for t2 in col2: for t3 in col3: test_sublist = [t1, t2, t3] if test_sublist in lst1: lst2.append(test_sublist) This way works well enough, because I'm able to automatically create sorted lists for each column, col 1, col 2, and col 3, but it takes way too long to run (more than 3 days). Is there a better solution for sorting mixed string/boolean lists like these? | These handle any lengths, not just length 3. And bools in any places, not just the last column. For keying, they turn each element of each sublist into a tuple. Solution 1: sorted(lst1, key=lambda s: [(e is False, e is True, e) for e in s]) Turns strings into (False, False, thestring) so they come first. Turns True into (False, True, True) so it comes next. Turns False into (True, False, False) so it comes last. Though I think of it the reverse way, as in "First deprioritize False, then deprioritize True". The general form is key=lambda x: (shall_come_last(x), x). Solution 2: sorted(lst1, key=lambda s: [((e is True) + 2 * (e is False), e) for e in s]) Turns strings into (0, thestring) so they come first. Turns True into (1, True) so it comes next. Turns False into (2, False) so it comes last. Solution 3: sorted(lst1, key=lambda s: [(0, e) if isinstance(e, str) else (2 - e,) for e in s]) Turns strings into (0, thestring) so they come first. Turns True into (1,) so it comes next. Turns False into (2,) so it comes last. | 7 | 3 |
59,789,037 | 2020-1-17 | https://stackoverflow.com/questions/59789037/get-highest-duration-from-a-list-of-strings | I have a list of durations like below ['5d', '20h', '1h', '7m', '14d', '1m'] where d stands for days, h stands for hours and m stands for minutes. I want to get the highest duration from this list(14d in this case). How can I get that from this list of strings? | Pure python solution. We could store mapping between our time extensions (m, h, d) and minutes (here time_map), to find highest duration. Here we're using max() with key argument to apply our mapping. inp = ['5d', '20h', '1h', '7m', '14d', '1m'] time_map = {'m': 1, 'h': 60, 'd': 24*60} print(max(inp, key=lambda x:int(x[:-1])*time_map[x[-1]])) # -> 14d | 9 | 13 |
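If you would rather not maintain the minutes mapping by hand, the same comparison can be keyed on datetime.timedelta objects instead; a sketch:

from datetime import timedelta

units = {'m': 'minutes', 'h': 'hours', 'd': 'days'}

def as_timedelta(s):
    # '14d' -> timedelta(days=14), '20h' -> timedelta(hours=20), ...
    return timedelta(**{units[s[-1]]: int(s[:-1])})

durations = ['5d', '20h', '1h', '7m', '14d', '1m']
print(max(durations, key=as_timedelta))  # -> 14d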
59,780,089 | 2020-1-17 | https://stackoverflow.com/questions/59780089/one-liner-to-assign-if-not-none | Is there a way to do an assignment only if the assigned value is not None, and otherwise do nothing? Of course we can do: x = get_value() if get_value() is not None but this will read the value twice. We can cache it to a local variable: v = get_value() x = v if v is not None but now we have made two statements for a simple thing. We could write a function: def return_if_not_none(v, default): if v is not None: return v else: return default And then do x = return_if_not_none(get_value(), x). But surely there is already a Python idiom to accomplish this, without accessing x or get_value() twice and without creating variables? Put in another way, let's say =?? is a Python operator similar to the C# null coalesce operator. Unlike the C# ??=, our fictional operator checks if the right hand side is None: x = 1 y = 2 z = None x =?? y print(x) # Prints "2" x =?? z print(x) # Still prints "2" Such a =?? operator would do exactly what my question is asking. | In python 3.8 you can do something like this if (v := get_value()) is not None: x = v Updated based on Ryan Haining solution, see in comments | 27 | 26 |
59,777,009 | 2020-1-16 | https://stackoverflow.com/questions/59777009/merging-two-dataframes-based-on-indexes-from-two-other-dataframes | I'm new to pandas have tried going through the docs and experiment with various examples, but this problem I'm tacking has really stumped me. I have the following two dataframes (DataA/DataB) which I would like to merge on a per global_index/item/values basis. DataA DataB row item_id valueA row item_id valueB 0 x A1 0 x B1 1 y A2 1 y B2 2 z A3 2 x B3 3 x A4 3 y B4 4 z A5 4 z B5 5 x A6 5 x B6 6 y A7 6 y B7 7 z A8 7 z B8 The list of items(item_ids) is finite and each of the two dataframes represent a the value of a trait (trait A, trait B) for an item at a given global_index value. The global_index could roughly be thought of as a unit of "time" The mapping between each data frame (DataA/DataB) and the global_index is done via the following two mapper DFs: DataA_mapper global_index start_row num_rows 0 0 3 1 3 2 3 5 3 DataB_mapper global_index start_row num_rows 0 0 2 2 2 3 4 5 3 Simply put for a given global_index (eg: 1) the mapper will define a list of rows into the respective DFs (DataA or DataB) that are associated with that global_index. For example, for a global_index value of 0: In DF DataA rows 0..2 are associated with global_index 0 In DF DataB rows 0..1 are associated with global_index 0 Another example, for a global_index value of 2: In DF DataB rows 2..4 are associated with global_index 2 In DF DataA there are no rows associated with global_index 2 The ranges [start_row,start_row + num_rows) presented do not overlap each other and represent a unique sequence/range of rows in their respective dataframes (DataA, DataB) In short no row in either DataA or DataB will be found in more than one range. I would like to merge the DFs so that I get the following dataframe: row global_index item_id valueA valueB 0 0 x A1 B1 1 0 y A2 B2 2 0 z A3 NaN 3 1 x A4 B1 4 1 z A5 NaN 5 2 x A4 B3 6 2 y A2 B4 7 2 z A5 NaN 8 3 x A6 B3 9 3 y A7 B4 10 3 z A8 B5 11 4 x A6 B6 12 4 y A7 B7 13 4 z A8 B8 In the final datafram any pair of global_index/item_id there will ever be either: a value for both valueA and valueB a value only for valueA a value only for valueB With the requirement being if there is only one value for a given global_index/item (eg: valueA but no valueB) for the last value of the missing one to be used. | First, you can create the 'global_index' column using the function pd.cut: for df, m in [(df_A, map_A), (df_B, map_B)]: bins = np.insert(m['num_rows'].cumsum().values, 0, 0) # create bins and add zero at the beginning df['global_index'] = pd.cut(df['row'], bins=bins, labels=m['global_index'], right=False) Next, you can use outer join to merge both data frames: df = df_A.merge(df_B, on=['global_index', 'item_id'], how='outer') And finally you can use functions groupby and ffill to fill missing values: for val in ['valueA', 'valueB']: df[val] = df.groupby('item_id')[val].ffill() Output: item_id global_index valueA valueB 0 x 0 A1 B1 1 y 0 A2 B2 2 z 0 A3 NaN 3 x 1 A4 B1 4 z 1 A5 NaN 5 x 3 A6 B1 6 y 3 A7 B2 7 z 3 A8 NaN 8 x 2 A6 B3 9 y 2 A7 B4 10 z 2 A8 B5 11 x 4 A6 B6 12 y 4 A7 B7 13 z 4 A8 B8 | 13 | 8 |
59,768,672 | 2020-1-16 | https://stackoverflow.com/questions/59768672/handling-pylint-warning-of-inconsistent-return-statement | I'm running PyLint on some code and I'm getting the warning of "Either all return statements in a function should return an expression or none of them should. (inconsistent-return-statements)." Here is the code I have: def determine_operand_count(opcode_form, opcode_byte): if opcode_form == OP_FORM.VARIABLE: if opcode_byte & 0b00100000 = 0b00100000: return OP_COUNT.VAR return OP_COUNT.OP2 if opcode_form == OP_FORM.SHORT: if opcode_byte & 0b00110000 == 0b00110000: return OP_COUNT.OP0 return OP_COUNT.OP1 if opcode_form == OP_FORM.LONG: return OP_COUNT.OP2 Here "OP_FORM" and "OP_COUNT" are Enums defined earlier in the code. To me, that code is very readable code and I guess I'm curious what the warning from PyLint is complaining about. In every condition that I have, an "OP_COUNT" type is returned. In fact, if any of these conditionals weren't returning an OP_COUNT, my code would be failing entirely. This seems to be a warning about my "return statements", suggesting that some aren't returning any sort of expression. But that's clearly not true (so far as I can see) as each return statement is returning something. So I'm guessing this has to do with implied returns? But to that point, in my original code I actually held "else" clauses for my internal if statements. But when I did that, PyLint gave me another warning: "Unnecessary 'else' after 'return' (no-else-return)." I did see the following: "How to fix inconsistent return statement in python?", but that didn't seem to reflect the situation in my code. So it's unclear to me how to satisfy PyLint in this case since the code clearly works and seems to be doing what the warning suggests I need to be doing. Given that, I suspect I'm missing something obvious but that I currently lack the intuition for spotting. Any help to get me spotting what I'm missing would be appreciated. | Pylint complains about what is happening when you reach the very end of the function. What should happen at the end of the function? (added a return and the warning goes away) def determine_operand_count(opcode_form, opcode_byte): if opcode_form == OP_FORM.VARIABLE: if opcode_byte & 0b00100000 == 0b00100000: return OP_COUNT.VAR return OP_COUNT.OP2 if opcode_form == OP_FORM.SHORT: if opcode_byte & 0b00110000 == 0b00110000: return OP_COUNT.OP0 return OP_COUNT.OP1 if opcode_form == OP_FORM.LONG: return OP_COUNT.OP2 return OP_COUNT.UNKNOWN This code should work, but in my experience is kind of fragile over time because of the indentation. An alternative is to write it as: def determine_operand_count_v2(opcode_form, opcode_byte): def variable_form(opcode_byte): if opcode_byte & 0b00100000 == 0b00100000: return OP_COUNT.VAR return OP_COUNT.OP2 def short_form(opcode_byte): if opcode_byte & 0b00110000 == 0b00110000: return OP_COUNT.OP0 return OP_COUNT.OP1 def long_form(_): return OP_COUNT.OP2 opfcn = {OP_FORM.VARIABLE: variable_form, OP_FORM.SHORT: short_form, OP_FORM.LONG: long_form} return opfcn[opcode_form](opcode_byte) | 9 | 9 |
59,769,492 | 2020-1-16 | https://stackoverflow.com/questions/59769492/spyder-4-is-not-displaying-plots-and-displays-message-like-this-uncheck-mute-i | I wrote this code. It should display the plots in the Spyder IDE.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
import scipy.signal
import scipy.stats
from sklearn.model_selection import train_test_split
train = pd.read_csv("train.csv", nrows=9000000,dtype={'acoustic_data':np.int16,'time_to_failure':np.float64})
train.rename({"acoustic_data": "signal", "time_to_failure": "quaketime"}, axis="columns", inplace=True)
#visualization(train,title="Acoustic data and time to failure: sampled data")
fig, ax = plt.subplots(2,1, figsize=(20,12))
ax[0].plot(train.index.values, train.quaketime.values, c="darkred")
ax[0].set_title("Quaketime of 10 Mio rows")
ax[0].set_xlabel("Index")
ax[0].set_ylabel("Quaketime in ms")
ax[1].plot(train.index.values, train.signal.values, c="mediumseagreen")
ax[1].set_title("Signal of 10 Mio rows")
ax[1].set_xlabel("Index")
ax[1].set_ylabel("Acoustic Signal")
Instead of the plot, Spyder displays the following message:
Figures now render in the Plots pane by default. To make them also appear inline in the Console, uncheck "Mute Inline Plotting" under the Plots pane options menu.
I have tested this code in Google Colab and it plots the required graph. How can I plot in the Spyder IDE? I am using Spyder 4.0.1. I did not find any "Mute Inline Plotting" option to uncheck | What you have is neither an error nor a warning; it is an instruction. Your figures are being rendered in the Plots pane, not in the console. If you also want them inline in the console, open the options menu of the Plots pane (the menu button in the pane's top-right corner) and uncheck "Mute Inline Plotting". Otherwise you can simply ignore the console message; it is only there for orientation in Spyder 4.0.1. | 15 | 53
59,768,259 | 2020-1-16 | https://stackoverflow.com/questions/59768259/filtering-dataframe-on-groups-where-count-of-element-is-different-than-1 | I'm working with a DataFrame having the following structure: import pandas as pd df = pd.DataFrame({'group':[1,1,1,2,2,2,2,3,3,3], 'brand':['A','B','X','C','D','X','X','E','F','X']}) print(df) group brand 0 1 A 1 1 B 2 1 X 3 2 C 4 2 D 5 2 X 6 2 X 7 3 E 8 3 F 9 3 X My goal is to view only the groups having exactly one brand X associated to them. Since group number 2 has two observations equal to brand X, it should be filtered out from the resulting DataFrame. The output should look like this: group brand 0 1 A 1 1 B 2 1 X 3 3 E 4 3 F 5 3 X I know I should do a groupby on the group column and then filter those groups having a count of X different than 1. The filtering part is where I struggle. Any help would be appreciated. | Use series.eq to check if brand is equal to X , then groupby and transform sum and filter groups in which X count is equal to 1: df[df['brand'].eq('X').groupby(df['group']).transform('sum').eq(1)] group brand 0 1 A 1 1 B 2 1 X 7 3 E 8 3 F 9 3 X | 11 | 10 |
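An alternative spelling with groupby().filter(), which keeps whole groups and may read closer to the problem statement (same df as above), though it is usually slower than the transform approach on frames with many groups:

out = df.groupby('group').filter(lambda g: g['brand'].eq('X').sum() == 1)
print(out)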
59,761,539 | 2020-1-16 | https://stackoverflow.com/questions/59761539/simple-way-to-do-multiple-dispatch-in-python-no-external-libraries-or-class-bu | I'm writing a throwaway script to compute some analytical solutions to a few simulations I'm running. I would like to implement a function in a way that, based on its inputs, will compute the right answer. So for instance, say I have the following math equation: tmax = (s1 - s2) / 2 = q * (a^2 / (a^2 - b^2)) It seems simple to me that I should be able to do something like: def tmax(s1, s2): return (s1 - s2) / 2 def tmax(a, b, q): return q * (a**2 / (a**2 - b**2)) I may have gotten to used to writing in julia, but I really don't want to complicate this script more than I need to. | In statically typed languages like C++, you can overload functions based on the input parameter types (and quantity) but that's not really possible in Python. There can only be one function of any given name. What you can do is to use the default argument feature to select one of two pathways within that function, something like: def tmax(p1, p2, p3 = None): # Two-argument variant has p3 as None. if p3 is None: return (p1 - p2) / 2 # Otherwise, we have three arguments. return (p1 * p1 / (p1 * p1 - p2 * p2)) * p3 If you're wondering why I've change the squaring operations from n ** 2 to n * n, it's because the latter is faster (or it was, at some point in the past, at least for small integral powers like 2 - this is probably still the case but you may want to confirm). A possible case where it may be faster to do g1 ** 2 rather than g1 * g1 is where g1 is a global rather than a local (it takes longer for the Python VM to LOAD_GLOBAL rather than LOAD_FAST). This is not the case with the code posted since the argument is inherently non-global. | 9 | 4 |
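Since the two formulas here happen to take a different number of arguments, another throwaway-friendly option is to dispatch on len(args); this avoids the None sentinel at the cost of losing named parameters:

def tmax(*args):
    if len(args) == 2:
        s1, s2 = args
        return (s1 - s2) / 2
    a, b, q = args
    return q * (a**2 / (a**2 - b**2))

print(tmax(3.0, 1.0))        # uses the (s1, s2) form
print(tmax(2.0, 1.0, 5.0))   # uses the (a, b, q) form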
59,755,609 | 2020-1-15 | https://stackoverflow.com/questions/59755609/selenium-common-exceptions-invalidargumentexception-message-invalid-argument-e | I have a list of URLs in a .txt file that I would like to run using selenium. Lets say that the file name is b.txt in it contains 2 urls (precisely formatted as below): https://www.google.com/,https://www.bing.com/, What I am trying to do is to make selenium run both urls (from the .txt file), however it seems that every time the code reaches the "driver.get" line, the code fails. url = open ('b.txt','r') url_rpt = url.read().split(",") options = Options() options.add_argument('--headless') options.add_argument('--disable-gpu') driver = webdriver.Chrome(chrome_options=options) for link in url_rpt: driver.get(link) driver.quit() The result that I get when I run the code is Traceback (most recent call last): File "C:/Users/ASUS/PycharmProjects/XXXX/Test.py", line 22, in <module> driver.get(link) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site- packages\selenium\webdriver\remote\webdriver.py", line 333, in get self.execute(Command.GET, {'url': url}) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site- packages\selenium\webdriver\remote\webdriver.py", line 321, in execute self.error_handler.check_response(response) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python38\lib\site- packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.InvalidArgumentException: Message: invalid argument (Session info: headless chrome=79.0.3945.117) Any suggestion on how to re-write the code? | This error message... Traceback (most recent call last): . driver.get(link) . self.execute(Command.GET, {'url': url}) . raise exception_class(message, screen, stacktrace) selenium.common.exceptions.InvalidArgumentException: Message: invalid argument (Session info: chrome=79.0.3945.117) ...implies that the url passed as an argument to get() was an argument was invalid. I was able to reproduce the same Traceback when the text file containing the list of urls contains a space character after the seperator of the last url. Possibly a space character was present at the fag end of b.txt as https://www.google.com/,https://www.bing.com/,. Debugging An ideal debugging approach would be to print the url_rpt which would have revealed the space character as follows: Code Block: url = open ('url_list.txt','r') url_rpt = url.read().split(",") print(url_rpt) Console Output: ['https://www.google.com/', 'https://www.bing.com/', ' '] Solution If you remove the space character from the end your own code would execute just perfecto: options = webdriver.ChromeOptions() options.add_argument("start-maximized") options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('useAutomationExtension', False) driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe') url = open ('url_list.txt','r') url_rpt = url.read().split(",") print(url_rpt) for link in url_rpt: driver.get(link) driver.quit() | 19 | 18 |
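A slightly more defensive way to read the URL file, which drops empty entries and stray whitespace (including the trailing-separator case described in the answer), looks like this; driver is assumed to be the headless Chrome instance created in the question:

with open('b.txt') as f:
    links = [u.strip() for u in f.read().split(',') if u.strip()]

for link in links:
    driver.get(link)
driver.quit()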
59,759,107 | 2020-1-15 | https://stackoverflow.com/questions/59759107/how-to-avoid-poor-performance-of-pandas-mean-with-datetime-columns | I have a pandas (version 0.25.3) DataFrame containing a datetime64 column. I'd like to calculate the mean of each column. import numpy as np import pandas as pd n = 1000000 df = pd.DataFrame({ "x": np.random.normal(0.0, 1.0, n), "d": pd.date_range(pd.datetime.today(), periods=n, freq="1H").tolist() }) Calculating the mean of individual columns is pretty much instantaneous. df["x"].mean() ## 1000 loops, best of 3: 1.35 ms per loop df["d"].mean() ## 100 loops, best of 3: 2.91 ms per loop However, when I use the DataFrame's .mean() method, it takes a really long time. %timeit df.mean() ## 1 loop, best of 3: 9.23 s per loop It isn't clear to me where the performance penalty comes from. What is the best way to avoid the slowdown? Should I convert the datetime64 column to a different type? Is using the DataFrame-level .mean() method considered bad form? | You could restrict it to the numeric values: df.mean(numeric_only=True) Then it runs very fast as well. Here is the text from the documentation: numeric_only : bool, default None Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. -- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html | 7 | 5 |
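An equivalent workaround, if you prefer to be explicit about which columns take part, is to select the numeric dtypes first; this sketch assumes the df built in the question:

import numpy as np

# Keep only numeric columns, then aggregate; the datetime column is excluded.
df.select_dtypes(include=[np.number]).mean()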
59,752,380 | 2020-1-15 | https://stackoverflow.com/questions/59752380/set-content-type-when-uploading-to-azure-blob-storage | I am uploading a static site using the Azure Blob storage client library. blob_service_client = BlobServiceClient.from_connection_string(az_string) blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name) print("\nUploading to Azure Storage as blob:\n\t" + local_file_name) with open('populated.html', "rb") as data: test = blob_client.upload_blob(data, overwrite=True) This is working but the HTML file is downloading instead of displaying. This is because the content type is wrong: Content-Type: application/octet-stream. Is there any way to set this using upload_blob? Update: To get this working, I needed this: my_content_settings = ContentSettings(content_type='text/html') blob_client.upload_blob(data, overwrite=True, content_settings=my_content_settings) | Looking at the code here, one of the parameters to this method is content_settings which is of type ContentSettings. You can define content_type there. | 17 | 12 |
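If you upload files of mixed types, the content type can be guessed from the file name with the standard-library mimetypes module before building the ContentSettings; a sketch (the local path here is illustrative):

import mimetypes
from azure.storage.blob import ContentSettings

file_path = 'populated.html'  # illustrative local file
guessed_type, _ = mimetypes.guess_type(file_path)
settings = ContentSettings(content_type=guessed_type or 'application/octet-stream')

with open(file_path, 'rb') as data:
    blob_client.upload_blob(data, overwrite=True, content_settings=settings)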
59,748,008 | 2020-1-15 | https://stackoverflow.com/questions/59748008/telegram-bot-api-is-the-chat-id-unique-for-each-user-contacting-the-bot | We are using python API for telegram bots and need to be able to identify the user. Is the chat_id unique for each user connecting the bot? Can we trust the chat_id to be consistent? e.g same chat_id will tell us that this is the same user, and each user connecting with the bot will have one chat_id that is consistent between sessions? Thanks | Is the chat_id unique for each user connecting the bot? Yes chat_id will always be unique for each user connecting to your bot. If the same user sends messages to different bots, they will always 'identify' themselves with their unique id. Keep in mind that getUpdates shows the users id, and the id from the chat. { "ok": true, "result": [ { "update_id": 1234567, "message": { "message_id": 751, "from": { "id": 12122121, <-- user.id "is_bot": false, "first_name": "Me", "last_name": "&", "username": "&&&&", "language_code": "en" }, "chat": { "id": -104235244275, <-- chat_id "title": "Some group", "type": "supergroup" }, "date": 1579999999, "text": "Hi!" } } ] } According to this post, that chat.id will not change, even if the group is converted to a supergroup Based on comment; small overvieuw of private/group chat example user_1 ---> bot_a in private chat { "message": { "from": { "id": 12345678 <-- id from user_1 }, "chat": { "id": 12345678, <-- send from private chat, so chat is equals to user_id } } } user_2 ---> bot_a in private chat { "message": { "from": { "id": 9876543 <-- id from user_2 }, "chat": { "id": 9876543, <-- send from private chat, so chat is equals to user_id } } } user_1 ---> bot_a in group chat { "message": { "from": { "id": 12345678 <-- id from user_1 }, "chat": { "id": 5646464, <-- send from group chat, so id is from groupchat } } } user_2 ---> bot_a in group chat { "message": { "from": { "id": 9876543 <-- id from user_2 }, "chat": { "id": 5646464, <-- send from group chat, so id is from groupchat } } } | 10 | 10 |
59,748,851 | 2020-1-15 | https://stackoverflow.com/questions/59748851/split-a-list-into-sublists-based-on-a-set-of-indexes-in-python | I have a list similar to below ['a','b','c','d','e','f','g','h','i','j'] and I would like to separate by a list of index [1,4] In this case, it will be [['a'],['b','c'],['d','e','f','g','h','i','j']] As [:1] =['a'] [1:4] = ['b','c'] [4:] = ['d','e','f','g','h','i','j'] Case 2: if the list of index is [0,6] It will be [[],['a','b','c','d','e'],['f','g','h','i','j']] As [:0] = [] [0:6] = ['a','b','c','d','e'] [6:] = ['f','g','h','i','j'] Case 3 if the index is [2,5,7] it will be [['a','b'],['c','d','e'],['h','i','j']] As [:2] =['a','b'] [2:5] = ['c','d','e'] [5:7] = ['f','g'] [7:] = ['h','i','j'] | Something along these lines: mylist = ['a','b','c','d','e','f','g','h','i','j'] myindex = [1,4] [mylist[s:e] for s, e in zip([0]+myindex, myindex+[None])] Output [['a'], ['b', 'c', 'd'], ['e', 'f', 'g', 'h', 'i', 'j']] | 7 | 8 |
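On Python 3.10+ the same padding trick can be written with itertools.pairwise; a small sketch (note this keeps every slice, including the [5:7] piece from case 3):

from itertools import pairwise

mylist = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
myindex = [2, 5, 7]

parts = [mylist[s:e] for s, e in pairwise([0, *myindex, None])]
# [['a', 'b'], ['c', 'd', 'e'], ['f', 'g'], ['h', 'i', 'j']]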
59,683,237 | 2020-1-10 | https://stackoverflow.com/questions/59683237/deep-copy-of-pandas-dataframes-and-dictionaries | I'm creating a small Pandas dataframe: df = pd.DataFrame(data={'colA': [["a", "b", "c"]]}) I take a deepcopy of that df. I'm not using the Pandas method but general Python, right? import copy df_copy = copy.deepcopy(df) A df_copy.head() gives the following: Then I put these values into a dictionary: mydict = df_copy.to_dict() That dictionary looks like this: Finally, I remove one item of the list: mydict['colA'][0].remove("b") I'm surprized that the values in df_copy are updated. I'm very confused that the values in the original dataframe are updated too! Both dataframes look like this now: I understand Pandas doesn't really do deepcopy, but this wasn't a Pandas method. My questions are: 1) how can I build a dictionary from a dataframe that doesn't update the dataframe? 2) how can I take a copy of a dataframe which would be completely independent? thanks for your help! Cheers, Nicolas | TLDR To get deepcopy: df_copy = pd.DataFrame( columns = df.columns, data = copy.deepcopy(df.values) ) Disclaimer Notice that putting mutable objects inside a DataFrame can be an antipattern so make sure you need it and understand what you are doing. Why your copy is not independent When applied on an object, copy.deepcopy is looked up for a _deepcopy_ method of that object, that is called in turn. It's added to avoid copying too much for objects. In the case of a DataFrame instance in version 0.20.0 and above - _deepcopy_ doesn`t work recursively. Similarly, if you will use DataFrame.copy(deep=True) deep copy will copy the data, but will not do so recursively. . How to solve the problem To take a truly deep copy of a DataFrame containing a list(or other python objects), so that it will be independent - you can use one of the methods below. df_copy = pd.DataFrame( columns = df.columns, data = copy.deepcopy(df.values) ) For a dictionary, you may use same trick: mydict = pd.DataFrame( columns = df.columns, data = copy.deepcopy(df_copy.values) ).to_dict() mydict['colA'][0].remove("b") There's also a standard hacky way of deep-copying python objects: import pickle df_copy = pickle.loads(pickle.dumps(df)) Feel free to ask for any clarifications, if needed. | 8 | 17 |
59,695,334 | 2020-1-11 | https://stackoverflow.com/questions/59695334/custom-color-palette-in-seaborn | I have a scatterplot that should show the changes in bond lengths depending on temperature. I wanted to give each temperature a specific color, but it doesn't seem to work - plot uses the default seaborn palette. Is there a way to map temperature to color, and make seaborn use it? import pandas as pd import matplotlib.pyplot as plt import seaborn as sns palette = ["#090364", "#091e75", "#093885", "#085396", "#086da6", "#0888b7", "#08a2c7", "#07bdd8", "#07d7e8", "#07f2f9", "#f9ac07", "#c77406", "#963b04", "#640303"] sns.set_style("whitegrid") sns.set_palette(palette) plot = sns.scatterplot(df.loc[:,'length'], df.loc[:,'type'], hue = df.loc[:,'temperature'], legend = False, s = 200) | I figured it out. You had to paste the number of colors into the palette: sns.set_style("whitegrid") plot = sns.scatterplot(df.loc[:,'length'], df.loc[:,'type'], hue = df.loc[:,'temperature'], palette=sns.color_palette(palette, len(palette)), legend = False, s = 200) | 14 | 23 |
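Seaborn also accepts a dict that maps each hue level to a colour, which makes the temperature-to-colour assignment explicit; a sketch assuming the temperatures line up one-to-one with the palette list:

temperatures = sorted(df['temperature'].unique())
colour_map = dict(zip(temperatures, palette))

plot = sns.scatterplot(x=df['length'], y=df['type'],
                       hue=df['temperature'],
                       palette=colour_map,
                       legend=False, s=200)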
59,693,359 | 2020-1-11 | https://stackoverflow.com/questions/59693359/when-to-use-iostr-iobytes-and-textio-binaryio-in-python-type-hinting | From the documentation, it says that: Generic type IO[AnyStr] and its subclasses TextIO(IO[str]) and BinaryIO(IO[bytes]) represent the types of I/O streams such as returned by open(). — Python Docs: typing.IO The docs did not specify when BinaryIO/TextIO shall be used over their counterparts IO[str] and IO[bytes]. Through a simple inspection of the Python Typeshed source, only 30 hits found when searching for BinaryIO, and 109 hits for IO[bytes]. I was trying to switch to BinaryIO from IO[bytes] for better compatibility with sphinx-autodoc-typehints, but the switch-over has broken many type checks as methods like tempfile.NamedTemporaryFile is typed as IO[bytes] instead of the other. Design-wise speaking, what are the correct situations to use each type of these IO type hints? | BinaryIO and TextIO directly subclass IO[bytes] and IO[str] respectively, and add on a few extra methods -- see the definitions in typeshed for the specifics. So if you need these extra methods, use BinaryIO/TextIO. Otherwise, it's probably best to use IO[...] for maximum flexibility. For example, if you annotate a method as accepting an IO[str], it's a bit easier for the end-user to provide an instance of that object. Though all this being said, the IO classes in general are kind of messy at present: they define a lot of methods that not all functions will actually need. So, the typeshed maintainers are actually considering breaking up the IO class into smaller Protocols. You could perhaps do the same, if you're so inclined. This approach is mostly useful if you want to define your own IO-like classes but don't want the burden of implementing the full typing.IO[...] API -- or if you're using some class that's almost IO-like, but not quite. All this being said, all three approaches -- using BinaryIO/TextIO, IO[...], or defining more compact custom Protocols -- are perfectly valid. If the sphinx extension doesn't seem to be able to handle one particular approach for some reason, that's probably a bug on their end. | 21 | 24 |
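A minimal sketch of the "smaller Protocol" option mentioned at the end, for a function that only ever calls read():

from typing import List, Protocol

class SupportsRead(Protocol):
    def read(self, size: int = -1) -> str: ...

def parse_lines(source: SupportsRead) -> List[str]:
    # Any object with a compatible read() matches, including open text files.
    return source.read().splitlines()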
59,661,042 | 2020-1-9 | https://stackoverflow.com/questions/59661042/what-do-single-star-and-slash-do-as-independent-parameters | In the following function definition, what do the * and / account for? def func(self, param1, param2, /, param3, *, param4, param5): print(param1, param2, param3, param4, param5) NOTE: Not to mistake with the single|double asterisks in *args | **kwargs (solved here) | The function parameter syntax(/) is to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments.(This is new in Python 3.8) Documentation specifies some of the use cases/benefits of positional-only parameters It allows pure Python functions to fully emulate behaviors of existing C coded functions. For example, the built-in pow() function does not accept keyword arguments: def pow(x, y, z=None, /): "Emulate the built in pow() function" r = x ** y return r if z is None else r%z Another use case is to preclude keyword arguments when the parameter name is not helpful. For example, the builtin len() function has the signature len(obj, /). This precludes awkward calls such as: len(obj='hello') # The "obj" keyword argument impairs readability A further benefit of marking a parameter as positional-only is that it allows the parameter name to be changed in the future without risk of breaking client code. For example, in the statistics module, the parameter name dist may be changed in the future. This was made possible with the following function specification: def quantiles(dist, /, *, n=4, method='exclusive') ... Where as * is used to force the caller to use named arguments. Django documentation contains a section which clearly explains a usecase of named arguments. Form fields no longer accept optional arguments as positional arguments To help prevent runtime errors due to incorrect ordering of form field arguments, optional arguments of built-in form fields are no longer accepted as positional arguments. For example: forms.IntegerField(25, 10) raises an exception and should be replaced with: forms.IntegerField(max_value=25, min_value=10) Suppose we have a method called func: def func(self, param1, param2, /, param3, *, param4, param5): print(param1, param2, param3, param4, param5) It must called with obj.func(10, 20, 30, param4=50, param5=60) OR obj.func(10, 20, param3=30, param4=50, param5=60) ie, param1, param2 must be specified positionally. param3 can be called either with positional or keyword argument. param4 and param5 must be called with keyword argument. DEMO: >>> class MyClass(object): ... def func(self, param1, param2, /, param3, *, param4, param5): ... return param1, param2, param3, param4, param5 ... >>> obj = MyClass() >>> >>> assert obj.func(10, 20, 30, param4=40, param5=50), obj.func( ... 10, 20, param3=30, param4=40, param5=50 ... ) | 101 | 128 |
59,650,243 | 2020-1-8 | https://stackoverflow.com/questions/59650243/communication-between-async-tasks-and-synchronous-threads-in-python | I am looking for the best solution for communication between async tasks and methods/functions that run in a thread pool executor from concurrent.futures. In previous synchronous projects, I would use the queue.Queue class. I assume that any method should be thread safe and therefore the asyncio.queue will not work. I have seen people extend the queue.Queue class to do something like: class async_queue(Queue): async def aput(self, item): self.put_nowait(item) async def aget(self): resp = await asyncio.get_event_loop().run_in_executor( None, self.get ) return resp Is there a better way? | I would recommend going the other way around: using the asyncio.Queue class to communicate between the two worlds. This has the advantage of not having to spend a slot in the thread pool on operations that take a long time to complete, such as a get(). Here is an example: class Queue: def __init__(self): self._loop = asyncio.get_running_loop() self._queue = asyncio.Queue() def sync_put_nowait(self, item): self._loop.call_soon(self._queue.put_nowait, item) def sync_put(self, item): asyncio.run_coroutine_threadsafe(self._queue.put(item), self._loop).result() def sync_get(self): return asyncio.run_coroutine_threadsafe(self._queue.get(item), self._loop).result() def async_put_nowait(self, item): self._queue.put_nowait(item) async def async_put(self, item): await self._queue.put(item) async def async_get(self): return await self._queue.get() The methods prefixed with sync_ are meant to be invoked by sync code (running outside the event loop thread). The ones prefixed with async_ are to be called by code running in the event loop thread, regardless of whether they are actually coroutines. (put_nowait, for example, is not a coroutine, but it still must be distinguished between a sync and an async version.) | 10 | 12 |
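A short usage sketch for the class above, with a plain thread producing and a coroutine consuming (note that sync_get as written references an undefined item and should call self._queue.get() with no argument):

import asyncio
import threading

async def main():
    queue = Queue()  # the class defined above

    def worker():
        for i in range(3):
            queue.sync_put(i)

    threading.Thread(target=worker).start()
    for _ in range(3):
        print(await queue.async_get())

asyncio.run(main())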
59,740,840 | 2020-1-14 | https://stackoverflow.com/questions/59740840/why-my-program-to-scrape-nse-website-gets-blocked-in-servers-but-works-in-local | This python code is running on the local computer but is not running on Digital Ocean Amazon AWS Google Collab Heroku and many other VPS. It shows different errors at different times. import requests headers = { 'authority': 'beta.nseindia.com', 'cache-control': 'max-age=0', 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36', 'sec-fetch-user': '?1', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'none', 'sec-fetch-mode': 'navigate', 'accept-encoding': 'gzip, deflate, br', 'accept-language': 'en-US,en;q=0.9,hi;q=0.8', } params = ( ('symbol', 'BANKNIFTY'), ) response = requests.get('https://nseindia.com/api/quote-derivative', headers=headers, params=params) #NB. Original query string below. It seems impossible to parse and #reproduce query strings 100% accurately so the one below is given #in case the reproduced version is not "correct". # response = requests.get('https://nseindia.com/api/quote-derivative?symbol=BANKNIFTY', headers=headers) Is there any mistake in the above code? What I am missing? I copied the header data from Chrome Developer Tools> Network in incognito mode used https://curl.trillworks.com/ site to generate the python code from the curl command. But the curl command is working fine and giving fine output- curl "https://nseindia.com/api/quote-derivative?symbol=BANKNIFTY" -H "authority: beta.nseindia.com" -H "cache-control: max-age=0" -H "dnt: 1" -H "upgrade-insecure-requests: 1" -H "user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36" -H "sec-fetch-user: ?1" -H "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" -H "sec-fetch-site: none" -H "sec-fetch-mode: navigate" -H "accept-encoding: gzip, deflate, br" -H "accept-language: en-US,en;q=0.9,hi;q=0.8" --compressed How come the curl command is working but the python generated out of the curl command is not? | Use the nsefetch() function as documented here https://unofficed.com/nse-python/documentation/nsefetch/ In case you want python-requests method from nsepython import * payload= nsefetch('https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?segmentLink=17&instrument=OPTIDX&symbol=BANKNIFTY') print(payload) This will work in all desktops and laptops. In case you want curl method from nsepythonserver import * payload= nsefetch('https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?segmentLink=17&instrument=OPTIDX&symbol=BANKNIFTY') print(payload) This will work in all servers. Like Linux servers. | 7 | 5 |
59,645,272 | 2020-1-8 | https://stackoverflow.com/questions/59645272/how-do-i-pass-an-async-function-to-a-thread-target-in-python | I have the following code: async some_callback(args): await some_function() and I need to give it to a Thread as a target: _thread = threading.Thread(target=some_callback, args=("some text")) _thread.start() The error that I get is "some_callback is never awaited". Any ideas how can I solve this problem? | You can do it by adding function between to execute async: import asyncio async def some_callback(args): await some_function() def between_callback(args): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.run_until_complete(some_callback(args)) loop.close() _thread = threading.Thread(target=between_callback, args=("some text")) _thread.start() | 71 | 80 |
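On Python 3.7+ the wrapper can be reduced to asyncio.run(), which creates and closes a fresh event loop in the worker thread; also note that args must be a tuple, so the string needs a trailing comma or it gets unpacked character by character. A sketch:

import asyncio
import threading

def between_callback(*args):
    # asyncio.run() sets up and tears down an event loop inside this thread.
    asyncio.run(some_callback(*args))

_thread = threading.Thread(target=between_callback, args=("some text",))
_thread.start()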
59,725,933 | 2020-1-14 | https://stackoverflow.com/questions/59725933/plot-fft-as-a-set-of-sine-waves-in-python | I saw someone do this in a presentation but I'm having a hard time reproducing what he was able to do. Here's a slide from his presentation: Pretty cool. He decomposed a dataset using FFT, then plotted the appropriate sine waves that the FFT specified. So in an effort to recreate what he did, I created a series of points that correspond to the combination of 2 sine waves: import matplotlib.pyplot as plt import numpy as np %matplotlib inline x = np.arange(0, 10, 0.01) x2 = np.arange(0, 20, 0.02) sin1 = np.sin(x) sin2 = np.sin(x2) x2 /= 2 sin3 = sin1 + sin2 plt.plot(x, sin3) plt.show() Now I want to decompose this wave (or rather, the wave that the points imply) back into the original 2 sine waves: # goal: sin3 -> sin1, sin2 # sin3 array([ 0.00000000e+00, 2.99985000e-02, ... 3.68998236e-01]) # sin1 array([ 0. , 0.00999983, 0.01999867, ... -0.53560333]) # sin2 array([ 0. , 0.01999867, 0.03998933, ... 0.90460157]) I start by importing numpy and getting the fft of sin3: import numpy as np fft3 = np.fft.fft(sin3) ok, that's about as far as I get. Now I've got an array with complex numbers: array([ 2.13316069e+02+0.00000000e+00j, 3.36520138e+02+4.05677438e+01j,...]) and if I naively plot it I see: plt.plot(fft3) plt.show() Ok, not sure what to do with that. I want to get from here to the datasets that look like sin1 and sin2: plt.plot(sin1) plt.show() plt.plot(sin2) plt.show() I understand the real and imaginary part of the complex numbers in the fft3 dataset, I'm just not sure what to do with them to derive sin1 and sin2 datasets from it. I know this has less to do with programming and more to do with math, but could anyone give me a hint here? EDIT: update on Mark Snyder's answer: Using Mark's code I was able to get what I expected and ended up with this method: def decompose_fft(data: list, threshold: float = 0.0): fft3 = np.fft.fft(data) x = np.arange(0, 10, 10 / len(data)) freqs = np.fft.fftfreq(len(x), .01) recomb = np.zeros((len(x),)) for i in range(len(fft3)): if abs(fft3[i]) / len(x) > threshold: sinewave = ( 1 / len(x) * ( fft3[i].real * np.cos(freqs[i] * 2 * np.pi * x) - fft3[i].imag * np.sin(freqs[i] * 2 * np.pi * x))) recomb += sinewave plt.plot(x, sinewave) plt.show() plt.plot(x, recomb, x, data) plt.show() later I'll make it return the recombined list of waves, but for now I'm getting an anomalie I don't quite understand. First of all I call it like this, simply passing in a dataset. decompose_fft(sin3, threshold=0.0) But looks great but I get this weird line at y=0.2 Does anyone know what this could be or what's causing it? EDIT: The above question has been answered by Mark in the comments, thanks! | The discrete Fourier transform gives you the coefficients of complex exponentials that, when summed together, produce the original discrete signal. In particular, the k'th Fourier coefficient gives you information about the amplitude of the sinusoid that has k cycles over the given number of samples. Note that since your sines don't have integer numbers of cycles in 1000 samples, you actually will not be able to retrieve your original sine waves using an FFT. Instead you'll get a blend of many different sinusoids, including a constant component of ~.4. 
You can plot the various component sinusoids and observe that their sum is the original signal using the following code: freqs = np.fft.fftfreq(len(x),.01) threshold = 0.0 recomb = np.zeros((len(x),)) for i in range(len(fft3)): if abs(fft3[i])/(len(x)) > threshold: recomb += 1/(len(x))*(fft3[i].real*np.cos(freqs[i]*2*np.pi*x)-fft3[i].imag*np.sin(freqs[i]*2*np.pi*x)) plt.plot(x,1/(len(x))*(fft3[i].real*np.cos(freqs[i]*2*np.pi*x)-fft3[i].imag*np.sin(freqs[i]*2*np.pi*x))) plt.show() plt.plot(x,recomb,x,sin3) plt.show() By changing threshold, you can also choose to exclude sinusoids of low power and see how that affects the final reconstruction. EDIT: There's a bit of a trap in the above code, although it's not wrong. It hides the inherent symmetry of the DFT for real signals, and plots each of the sinusoids twice at half of their true amplitude. This code is more performant and plots the sinusoids at their correct amplitude: import cmath freqs = np.fft.fftfreq(len(x),.01) threshold = 0.0 recomb = np.zeros((len(x),)) middle = len(x)//2 + 1 for i in range(middle): if abs(fft3[i])/(len(x)) > threshold: if i == 0: coeff = 2 else: coeff = 1 sinusoid = 1/(len(x)*coeff/2)*(abs(fft3[i])*np.cos(freqs[i]*2*np.pi*x+cmath.phase(fft3[i]))) recomb += sinusoid plt.plot(x,sinusoid) plt.show() plt.plot(x,recomb,x,sin3) plt.show() If in the general case you know that the signal was composed of some subset of sinusoids with frequencies that may not line up correctly with the length of the signal, you may be able to identify the frequencies by either zero-padding or extending your signal. You can learn more about that here. If the signals are completely arbitrary and you're just interested in looking at component sinusoids, there's no need for that. | 9 | 7 |
59,681,461 | 2020-1-10 | https://stackoverflow.com/questions/59681461/read-a-big-mbox-file-with-python | I'd like to read a big 3GB .mbox file coming from a Gmail backup. This works: import mailbox mbox = mailbox.mbox(r"D:\All mail Including Spam and Trash.mbox") for i, message in enumerate(mbox): print("from :",message['from']) print("subject:",message['subject']) if message.is_multipart(): content = ''.join(part.get_payload(decode=True) for part in message.get_payload()) else: content = message.get_payload(decode=True) print("content:",content) print("**************************************") if i == 10: break except it takes more than 40 seconds for the first 10 messages only. Is there a faster way to access to a big .mbox file with Python? | Here's a quick and dirty attempt to implement a generator to read in an mbox file message by message. I have opted to simply ditch the information from the From separator; I'm guessing maybe the real mailbox library might provide more information, and of course, this only supports reading, not searching or writing back to the input file. #!/usr/bin/env python3 import email from email.policy import default class MboxReader: def __init__(self, filename): self.handle = open(filename, 'rb') assert self.handle.readline().startswith(b'From ') def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_traceback): self.handle.close() def __iter__(self): return iter(self.__next__()) def __next__(self): lines = [] while True: line = self.handle.readline() if line == b'' or line.startswith(b'From '): yield email.message_from_bytes(b''.join(lines), policy=default) if line == b'': break lines = [] continue lines.append(line) Usage: with MboxReader(mboxfilename) as mbox: for message in mbox: print(message.as_string()) The policy=default argument (or any policy instead of default if you prefer, of course) selects the modern EmailMessage library which was introduced in Python 3.3 and became official in 3.6. If you need to support older Python versions from before America lost its mind and put an evil clown in the White House simpler times, you will want to omit it; but really, the new API is better in many ways. | 18 | 17 |
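To mirror the original loop that stops after the first ten messages, the generator above combines naturally with itertools.islice; a sketch:

from itertools import islice

with MboxReader(mboxfilename) as mbox:
    for message in islice(mbox, 10):
        print("from   :", message['from'])
        print("subject:", message['subject'])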
59,684,674 | 2020-1-10 | https://stackoverflow.com/questions/59684674/should-i-add-pythons-pyc-files-to-dockerignore | I've seen several examples of .dockerignore files for Python projects where *.pyc files and/or __pycache__ folders are ignored: **/__pycache__ *.pyc Since these files/folders are going to be recreated in the container anyway, I wonder if it's a good practice to do so. | Yes, it's a recommended practice. There are several reasons: Reduce the size of the resulting image In .dockerignore you specify files that won't go to the resulting image, it may be crucial when you're building the smallest image. Roughly speaking the size of bytecode files is equal to the size of actual files. Bytecode files aren't intended for distribution, that's why we usually put them into .gitignore as well. Cache related problems In earlier versions of Python 3.x there were several cached related issues: Python’s scheme for caching bytecode in .pyc files did not work well in environments with multiple Python interpreters. If one interpreter encountered a cached file created by another interpreter, it would recompile the source and overwrite the cached file, thus losing the benefits of caching. Since Python 3.2 all the cached files prefixed with interpreter version as mymodule.cpython-32.pyc and presented under __pychache__ directory. By the way, starting from Python 3.8 you can even control a directory where the cache will be stored. It may be useful when you're restricting write access to the directory but still want to get benefits of cache usage. Usually, the cache system works perfectly, but someday something may go wrong. It worth to note that the cached .pyc (lives in the same directory) file will be used instead of the .py file if the .py the file is missing. In practice, it's not a common occurrence, but if some stuff keeps up being "there", thinking about remove cache files is a good point. It may be important when you're experimenting with the cache system in Python or executing scripts in different environments. Security reasons Most likely that you don't even need to think about it, but cache files can contain some sort of sensitive information. Due to the current implementation, in .pyc files presented an absolute path to the actual files. There are situations when you don't want to share such information. It seems that interacting with bytecode files is a quite frequent necessity, for example, django-extensions have appropriate options compile_pyc and clean_pyc. | 27 | 21 |
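A small illustration of the Python 3.8 cache-directory control mentioned above, for the case where you still want bytecode caching but outside the source tree; a sketch:

import sys

# None unless a prefix has been configured for this interpreter.
print(sys.pycache_prefix)

# Typically set through the environment before Python starts, e.g.
#   PYTHONPYCACHEPREFIX=/tmp/pycache python app.py
# after which .pyc files land under /tmp/pycache instead of __pycache__.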
59,645,430 | 2020-1-8 | https://stackoverflow.com/questions/59645430/python-moving-file-on-sftp-server-to-another-folder | I wrote this script to save a file from an SFTP remote folder to a local folder. It then removes the file from the SFTP. I want to change it so it stops removing files and instead saves them to a backup folder on the SFTP. How do I do that in pysftp? I cant find any documentation regarding it... import pysftp cnopts = pysftp.CnOpts() cnopts.hostkeys = None myHostname = "123" myUsername = "456" myPassword = "789" with pysftp.Connection(host=myHostname, username=myUsername, password="789", cnopts=cnopts) as sftp: sftp.cwd("/Production/In/") directory_structure = sftp.listdir_attr() local_dir= "D:/" remote_dir = "/Production/" remote_backup_dir = "/Production/Backup/" for attr in directory_structure: if attr.filename.endswith(".xml"): file = attr.filename sftp.get(remote_dir + file, local_dir + file) print("Moved " + file + " to " + local_dir) sftp.remove(remote_dir + file) Don't worry about my no hostkey or the password in plain. I'm not keeping it that way once i get the script working :) | Use Connection.rename: sftp.rename(remote_dir + file, remote_backup_dir + file) Obligatory warnings: Do not set cnopts.hostkeys = None, unless you do not care about security. For the correct solution see Verify host key with pysftp. Do not use pysftp. It's dead. Use Paramiko. See pysftp vs. Paramiko. | 10 | 11 |
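Slotting that into the loop from the question (after the download succeeds) looks roughly like this; the backup directory must already exist on the server for the rename to succeed:

for attr in directory_structure:
    if attr.filename.endswith(".xml"):
        file = attr.filename
        sftp.get(remote_dir + file, local_dir + file)
        print("Moved " + file + " to " + local_dir)
        # Archive on the server instead of deleting.
        sftp.rename(remote_dir + file, remote_backup_dir + file)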
59,674,072 | 2020-1-10 | https://stackoverflow.com/questions/59674072/sklearns-class-stratifiedshufflesplit | I'm little confused about how does the class StratifiedShuffleSplit of Sklearn works. The code below is from Géron's book "Hands On Machine Learning", chapter 2, where he does a stratified sampling. from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing["income_cat"]): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] Especially, what is been doing in split.split? Thanks! | Since you did not provide a dataset, I use sklearn sample to answer this question. Prepare dataset # generate data import numpy as np from sklearn.model_selection import StratifiedShuffleSplit data = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]]) group_label = np.array([0, 0, 0, 1, 1, 1]) This generate a dataset data, which has 6 obseravations and 2 variables. group_label has 2 value, means group 0 and group 1. In this case, group 0 contains 3 samples, same is group 1. To be general, the group size are not need to be the same. Create a StratifiedShuffleSplit object instance sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0) sss.get_n_splits(data, group_label) Out: 5 In this step, you can create a instance of StratifiedShuffleSplit, you can tell the function how to split(At random_state = 0,split data 5 times,each time 50% of data will split to test set). However, it only split data when you call it in the next step. Call the instance, and split data. # the instance is actually a generater type(sss.split(data, group_label)) # split data for train_index, test_index in sss.split(data, group_label): print("n_split",,"TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] out: TRAIN: [5 2 3] TEST: [4 1 0] TRAIN: [5 1 4] TEST: [0 2 3] TRAIN: [5 0 2] TEST: [4 3 1] TRAIN: [4 1 0] TEST: [2 3 5] TRAIN: [0 5 1] TEST: [3 4 2] In this step, spliter you defined in the last step will generate 5 split of data one by one. For instance, in the first split, the original data is shuffled and sample 5,2,3 is selected as train set, this is also a stratified sampling by group_label; in the second split, the data is shuffled again and sample 5,1,4 is selected as train set; etc.. | 11 | 17 |
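For reference, the split loop above contains a stray comma in the print call and refers to X and y that were never defined; a runnable restatement using the data and group_label arrays built earlier:

sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0)
for n_split, (train_index, test_index) in enumerate(sss.split(data, group_label)):
    print("n_split", n_split, "TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = data[train_index], data[test_index]
    y_train, y_test = group_label[train_index], group_label[test_index]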
59,713,920 | 2020-1-13 | https://stackoverflow.com/questions/59713920/how-to-make-that-when-you-click-on-the-text-it-was-copied-pytelegrambotapi | I am writing a telegram to the bot. I ran into such a problem. I need the bot to send a message (text) when clicked on which it was copied (as a token from @BotFather) | If I understand you correctly, you wish to send a message, that when the user presses it, the text is automatically copied to the user's clipboard, just like the BotFather sends the API token? This is done by the MarkDown parse_mode; Send a message with &parse_mode=MarkDown and wrap the 'pressable' text in back-ticks (`): Hi. `Press me!`! https://api.telegram.org/bot<token>/sendMessage?chat_id=<id>&text=Hi! `Press me!`&parse_mode=MarkDown EDIT: Bases on OP's comment you're looking for a python-telegram-bot solution. From there documentation; bot.send_message( chat_id=chat_id, text="*bold* _italic_ `fixed width font` [link](http://google.com).", parse_mode=telegram.ParseMode.MARKDOWN ) | 16 | 31 |