eshangj committed
Commit 91e3af6 · verified · 1 Parent(s): c0fa40c

Delete stackoverflow_q_and_a_sample.csv

Files changed (1)
  1. stackoverflow_q_and_a_sample.csv +0 -30
stackoverflow_q_and_a_sample.csv DELETED
@@ -1,30 +0,0 @@
- link,question,accepted_answer
- https://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python,"What functionality does the yield keyword in Python provide? For example, I'm trying to understand this code1: def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild And this is the caller: result, candidates = [], [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result What happens when the method _get_child_candidates is called? Is a list returned? A single element? Is it called again? When will subsequent calls stop? 1. This piece of code was written by Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.","To understand what yield does, you must understand what generators are. And before you can understand generators, you must understand iterables. Iterables When you create a list, you can read its items one by one. Reading its items one by one is called iteration: >>> mylist = [1, 2, 3] >>> for i in mylist: ... print(i) 1 2 3 mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable: >>> mylist = [x*x for x in range(3)] >>> for i in mylist: ... print(i) 0 1 4 Everything you can use ""for... in..."" on is an iterable; lists, strings, files... These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values. Generators Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly: >>> mygenerator = (x*x for x in range(3)) >>> for i in mygenerator: ... print(i) 0 1 4 It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end after calculating 4, one by one. Yield yield is a keyword that is used like return, except the function will return a generator. >>> def create_generator(): ... mylist = range(3) ... for i in mylist: ... yield i*i ... >>> mygenerator = create_generator() # create a generator >>> print(mygenerator) # mygenerator is an object! <generator object create_generator at 0xb7555c34> >>> for i in mygenerator: ... print(i) 0 1 4 Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once. To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object, this is a bit tricky. Then, your code will continue from where it left off each time for uses the generator. Now the hard part: The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. 
This will continue until the generator is considered empty, which happens when the function runs without hitting yield. That can be because the loop has come to an end, or because you no longer satisfy an ""if/else"". Your code explained Generator: # Here you create the method of the node object that will return the generator def _get_child_candidates(self, distance, min_dist, max_dist): # Here is the code that will be called each time you use the generator object: # If there is still a child of the node object on its left # AND if the distance is ok, return the next child if self._leftchild and distance - max_dist < self._median: yield self._leftchild # If there is still a child of the node object on its right # AND if the distance is ok, return the next child if self._rightchild and distance + max_dist >= self._median: yield self._rightchild # If the function arrives here, the generator will be considered empty # There are no more than two values: the left and the right children Caller: # Create an empty list and a list with the current object reference result, candidates = list(), [self] # Loop on candidates (they contain only one element at the beginning) while candidates: # Get the last candidate and remove it from the list node = candidates.pop() # Get the distance between obj and the candidate distance = node._get_dist(obj) # If the distance is ok, then you can fill in the result if distance <= max_dist and distance >= min_dist: result.extend(node._values) # Add the children of the candidate to the candidate's list # so the loop will keep running until it has looked # at all the children of the children of the children, etc. of the candidate candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result This code contains several smart parts: The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node. The extend() method is a list object method that expects an iterable and adds its values to the list. Usually, we pass a list to it: >>> a = [1, 2] >>> b = [3, 4] >>> a.extend(b) >>> print(a) [1, 2, 3, 4] But in your code, it gets a generator, which is good because: You don't need to read the values twice. You may have a lot of children and you don't want them all stored in memory. And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question... You can stop here, or read a little bit to see an advanced use of a generator: Controlling a generator exhaustion >>> class Bank(): # Let's create a bank, building ATMs ... crisis = False ... def create_atm(self): ... while not self.crisis: ... 
yield ""$100"" >>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want >>> corner_street_atm = hsbc.create_atm() >>> print(corner_street_atm.next()) $100 >>> print(corner_street_atm.next()) $100 >>> print([corner_street_atm.next() for cash in range(5)]) ['$100', '$100', '$100', '$100', '$100'] >>> hsbc.crisis = True # Crisis is coming, no more money! >>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs >>> print(wall_street_atm.next()) <type 'exceptions.StopIteration'> >>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty >>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business >>> for cash in brand_new_atm: ... print cash $100 $100 $100 $100 $100 $100 $100 $100 $100 ... Note: For Python 3, use print(corner_street_atm.__next__()) or print(next(corner_street_atm)) It can be useful for various things like controlling access to a resource. Itertools, your best friend The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? Map / Zip without creating another list? Then just import itertools. An example? Let's see the possible orders of arrival for a four-horse race: >>> horses = [1, 2, 3, 4] >>> races = itertools.permutations(horses) >>> print(races) <itertools.permutations object at 0xb754f1dc> >>> print(list(itertools.permutations(horses))) [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), (1, 3, 4, 2), (1, 4, 2, 3), (1, 4, 3, 2), (2, 1, 3, 4), (2, 1, 4, 3), (2, 3, 1, 4), (2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1), (3, 1, 2, 4), (3, 1, 4, 2), (3, 2, 1, 4), (3, 2, 4, 1), (3, 4, 1, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 1, 3, 2), (4, 2, 1, 3), (4, 2, 3, 1), (4, 3, 1, 2), (4, 3, 2, 1)] Understanding the inner mechanisms of iteration Iteration is a process implying iterables (implementing the __iter__() method) and iterators (implementing the __next__() method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables. There is more about it in this article about how for loops work."
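The ATM session above is Python 2 (print statements and the .next() method). A minimal Python 3 sketch of the same Bank example, using the built-in next() as the note suggests:

    # Python 3 version of the Bank/ATM generator example above.
    class Bank:
        crisis = False
        def create_atm(self):
            while not self.crisis:
                yield "$100"

    hsbc = Bank()
    atm = hsbc.create_atm()
    print(next(atm))                      # $100 -- next(gen) replaces gen.next()
    print([next(atm) for _ in range(3)])  # ['$100', '$100', '$100']
    hsbc.crisis = True                    # the generator stops producing values
    for cash in atm:                      # a for loop absorbs the StopIteration
        print(cash)                       # prints nothing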
- https://stackoverflow.com/questions/419163/what-does-if-name-main-do,"What does this do, and why should one include the if statement? if __name__ == ""__main__"": print(""Hello, World!"") If you are trying to close a question where someone should be using this idiom and isn't, consider closing as a duplicate of Why is Python running my module when I import it, and how do I stop it? instead. For questions where someone simply hasn't called any functions, or incorrectly expects a function named main to be used as an entry point automatically, use Why doesn't the main() function run when I start a Python script? Where does the script start running?.","Short Answer It's boilerplate code that protects users from accidentally invoking the script when they didn't intend to. Here are some common problems when the guard is omitted from a script: If you import the guardless script in another script (e.g. import my_script_without_a_name_eq_main_guard), then the latter script will trigger the former to run at import time and using the second script's command line arguments. This is almost always a mistake. If you have a custom class in the guardless script and save it to a pickle file, then unpickling it in another script will trigger an import of the guardless script, with the same problems outlined in the previous bullet. Long Answer To better understand why and how this matters, we need to take a step back to understand how Python initializes scripts and how this interacts with its module import mechanism. Whenever the Python interpreter reads a source file, it does two things: it sets a few special variables like __name__, and then it executes all of the code found in the file. Let's see how this works and how it relates to your question about the __name__ checks we always see in Python scripts. Code Sample Let's use a slightly different code sample to explore how imports and scripts work. Suppose the following is in a file called foo.py. # Suppose this is foo.py. print(""before import"") import math print(""before function_a"") def function_a(): print(""Function A"") print(""before function_b"") def function_b(): print(""Function B {}"".format(math.sqrt(100))) print(""before __name__ guard"") if __name__ == '__main__': function_a() function_b() print(""after __name__ guard"") Special Variables When the Python interpreter reads a source file, it first defines a few special variables. In this case, we care about the __name__ variable. When Your Module Is the Main Program If you are running your module (the source file) as the main program, e.g. python foo.py the interpreter will assign the hard-coded string ""__main__"" to the __name__ variable, i.e. # It's as if the interpreter inserts this at the top # of your module when run as the main program. __name__ = ""__main__"" When Your Module Is Imported By Another On the other hand, suppose some other module is the main program and it imports your module. This means there's a statement like this in the main program, or in some other module the main program imports: # Suppose this is in some other main program. import foo The interpreter will search for your foo.py file (along with searching for a few other variants), and prior to executing that module, it will assign the name ""foo"" from the import statement to the __name__ variable, i.e. # It's as if the interpreter inserts this at the top # of your module when it's imported from another module. 
__name__ = ""foo"" Executing the Module's Code After the special variables are set up, the interpreter executes all the code in the module, one statement at a time. You may want to open another window on the side with the code sample so you can follow along with this explanation. Always It prints the string ""before import"" (without quotes). It loads the math module and assigns it to a variable called math. This is equivalent to replacing import math with the following (note that __import__ is a low-level function in Python that takes a string and triggers the actual import): # Find and load a module given its string name, ""math"", # then assign it to a local variable called math. math = __import__(""math"") It prints the string ""before function_a"". It executes the def block, creating a function object, then assigning that function object to a variable called function_a. It prints the string ""before function_b"". It executes the second def block, creating another function object, then assigning it to a variable called function_b. It prints the string ""before __name__ guard"". Only When Your Module Is the Main Program If your module is the main program, then it will see that __name__ was indeed set to ""__main__"" and it calls the two functions, printing the strings ""Function A"" and ""Function B 10.0"". Only When Your Module Is Imported by Another (instead) If your module is not the main program but was imported by another one, then __name__ will be ""foo"", not ""__main__"", and it'll skip the body of the if statement. Always It will print the string ""after __name__ guard"" in both situations. Summary In summary, here's what'd be printed in the two cases: # What gets printed if foo is the main program before import before function_a before function_b before __name__ guard Function A Function B 10.0 after __name__ guard # What gets printed if foo is imported as a regular module before import before function_a before function_b before __name__ guard after __name__ guard Why Does It Work This Way? You might naturally wonder why anybody would want this. Well, sometimes you want to write a .py file that can be both used by other programs and/or modules as a module, and can also be run as the main program itself. Examples: Your module is a library, but you want to have a script mode where it runs some unit tests or a demo. Your module is only used as a main program, but it has some unit tests, and the testing framework works by importing .py files like your script and running special test functions. You don't want it to try running the script just because it's importing the module. Your module is mostly used as a main program, but it also provides a programmer-friendly API for advanced users. Beyond those examples, it's elegant that running a script in Python is just setting up a few magic variables and importing the script. ""Running"" the script is a side effect of importing the script's module. Food for Thought Question: Can I have multiple __name__ checking blocks? Answer: it's strange to do so, but the language won't stop you. Suppose the following is in foo2.py. What happens if you say python foo2.py on the command-line? Why? # Suppose this is foo2.py. 
import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters def function_a(): print(""a1"") from foo2 import function_b print(""a2"") function_b() print(""a3"") def function_b(): print(""b"") print(""t1"") if __name__ == ""__main__"": print(""m1"") function_a() print(""m2"") print(""t2"") Now, figure out what will happen in foo3.py (having removed the __name__ check): # Suppose this is foo3.py. import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters def function_a(): print(""a1"") from foo3 import function_b print(""a2"") function_b() print(""a3"") def function_b(): print(""b"") print(""t1"") print(""m1"") function_a() print(""m2"") print(""t2"") What will this do when used as a script? When imported as a module? # Suppose this is in foo4.py __name__ = ""__main__"" def bar(): print(""bar"") print(""before __name__ guard"") if __name__ == ""__main__"": bar() print(""after __name__ guard"") "
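A minimal sketch of the guard in everyday use, paired with the common main() convention; the file name echo.py and the function names are illustrative, not from the question:

    # echo.py -- usable both as a library and as a script.
    import sys

    def echo(words):
        """Library function: importable with no side effects."""
        return " ".join(words)

    def main(argv):
        print(echo(argv[1:]))
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv))  # runs only under `python echo.py ...`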
- https://stackoverflow.com/questions/394809/does-python-have-a-ternary-conditional-operator,Is there a ternary conditional operator in Python?,"Yes, it was added in version 2.5. The expression syntax is: a if condition else b First condition is evaluated, then exactly one of either a or b is evaluated and returned based on the Boolean value of condition. If condition evaluates to True, then a is evaluated and returned but b is ignored, or else b is evaluated and returned but a is ignored. This allows short-circuiting because when condition is true only a is evaluated and b is not evaluated at all, but when condition is false only b is evaluated and a is not evaluated at all. For example: >>> 'true' if True else 'false' 'true' >>> 'true' if False else 'false' 'false' Note that conditionals are an expression, not a statement. This means you can't use statements such as pass, or assignments with = (or ""augmented"" assignments like +=), within a conditional expression: >>> pass if False else pass File ""<stdin>"", line 1 pass if False else pass ^ SyntaxError: invalid syntax >>> # Python parses this as `x = (1 if False else y) = 2` >>> # The `(1 if False else x)` part is actually valid, but >>> # it can't be on the left-hand side of `=`. >>> x = 1 if False else y = 2 File ""<stdin>"", line 1 SyntaxError: cannot assign to conditional expression >>> # If we parenthesize it instead... >>> (x = 1) if False else (y = 2) File ""<stdin>"", line 1 (x = 1) if False else (y = 2) ^ SyntaxError: invalid syntax (In 3.8 and above, the := ""walrus"" operator allows simple assignment of values as an expression, which is then compatible with this syntax. But please don't write code like that; it will quickly become very difficult to understand.) Similarly, because it is an expression, the else part is mandatory: # Invalid syntax: we didn't specify what the value should be if the # condition isn't met. It doesn't matter if we can verify that # ahead of time. a if True You can, however, use conditional expressions to assign a variable like so: x = a if True else b Or for example to return a value: # Of course we should just use the standard library `max`; # this is just for demonstration purposes. def my_max(a, b): return a if a > b else b Think of the conditional expression as switching between two values. We can use it when we are in a 'one value or another' situation, where we will do the same thing with the result, regardless of whether the condition is met. We use the expression to compute the value, and then do something with it. If you need to do something different depending on the condition, then use a normal if statement instead. Keep in mind that it's frowned upon by some Pythonistas for several reasons: The order of the arguments is different from those of the classic condition ? a : b ternary operator from many other languages (such as C, C++, Go, Perl, Ruby, Java, JavaScript, etc.), which may lead to bugs when people unfamiliar with Python's ""surprising"" behaviour use it (they may reverse the argument order). Some find it ""unwieldy"", since it goes contrary to the normal flow of thought (thinking of the condition first and then the effects). Stylistic reasons. (Although the 'inline if' can be really useful, and make your script more concise, it really does complicate your code) If you're having trouble remembering the order, then remember that when read aloud, you (almost) say what you mean. For example, x = 4 if b > 8 else 9 is read aloud as x will be 4 if b is greater than 8 otherwise 9.
Official documentation: Conditional expressions Is there an equivalent of C’s ”?:” ternary operator? "
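A short sketch of the expression in ordinary use; the function and variable names are illustrative:

    # Conditional expressions fit where both branches produce a value.
    def describe(n):
        parity = "even" if n % 2 == 0 else "odd"
        return f"{n} is {parity}"

    labels = ["big" if x > 10 else "small" for x in [3, 42, 7]]
    print(describe(5))  # 5 is odd
    print(labels)       # ['small', 'big', 'small']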
- https://stackoverflow.com/questions/100003/what-are-metaclasses-in-python,What are metaclasses? What are they used for?,"A metaclass is the class of a class. A class defines how an instance of the class (i.e. an object) behaves while a metaclass defines how a class behaves. A class is an instance of a metaclass. While in Python you can use arbitrary callables for metaclasses (like Jerub shows), the better approach is to make it an actual class itself. type is the usual metaclass in Python. type is itself a class, and it is its own type. You won't be able to recreate something like type purely in Python, but Python cheats a little. To create your own metaclass in Python you really just want to subclass type. A metaclass is most commonly used as a class-factory. When you create an object by calling the class, Python creates a new class (when it executes the 'class' statement) by calling the metaclass. Combined with the normal __init__ and __new__ methods, metaclasses therefore allow you to do 'extra things' when creating a class, like registering the new class with some registry or replace the class with something else entirely. When the class statement is executed, Python first executes the body of the class statement as a normal block of code. The resulting namespace (a dict) holds the attributes of the class-to-be. The metaclass is determined by looking at the baseclasses of the class-to-be (metaclasses are inherited), at the __metaclass__ attribute of the class-to-be (if any) or the __metaclass__ global variable. The metaclass is then called with the name, bases and attributes of the class to instantiate it. However, metaclasses actually define the type of a class, not just a factory for it, so you can do much more with them. You can, for instance, define normal methods on the metaclass. These metaclass-methods are like classmethods in that they can be called on the class without an instance, but they are also not like classmethods in that they cannot be called on an instance of the class. type.__subclasses__() is an example of a method on the type metaclass. You can also define the normal 'magic' methods, like __add__, __iter__ and __getattr__, to implement or change how the class behaves. Here's an aggregated example of the bits and pieces: def make_hook(f): """"""Decorator to turn 'foo' method into '__foo__'"""""" f.is_hook = 1 return f class MyType(type): def __new__(mcls, name, bases, attrs): if name.startswith('None'): return None # Go over attributes and see if they should be renamed. 
newattrs = {} for attrname, attrvalue in attrs.iteritems(): if getattr(attrvalue, 'is_hook', 0): newattrs['__%s__' % attrname] = attrvalue else: newattrs[attrname] = attrvalue return super(MyType, mcls).__new__(mcls, name, bases, newattrs) def __init__(self, name, bases, attrs): super(MyType, self).__init__(name, bases, attrs) # classregistry.register(self, self.interfaces) print ""Would register class %s now."" % self def __add__(self, other): class AutoClass(self, other): pass return AutoClass # Alternatively, to autogenerate the classname as well as the class: # return type(self.__name__ + other.__name__, (self, other), {}) def unregister(self): # classregistry.unregister(self) print ""Would unregister class %s now."" % self class MyObject: __metaclass__ = MyType class NoneSample(MyObject): pass # Will print ""NoneType None"" print type(NoneSample), repr(NoneSample) class Example(MyObject): def __init__(self, value): self.value = value @make_hook def add(self, other): return self.__class__(self.value + other.value) # Will unregister the class Example.unregister() inst = Example(10) # Will fail with an AttributeError #inst.unregister() print inst + inst class Sibling(MyObject): pass ExampleSibling = Example + Sibling # ExampleSibling is now a subclass of both Example and Sibling (with no # content of its own) although it will believe it's called 'AutoClass' print ExampleSibling print ExampleSibling.__mro__ "
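The example above is Python 2 (__metaclass__, iteritems, print statements). A hedged Python 3 sketch of the same renaming idea, keeping only the core of MyType and omitting the registration and __add__ extras:

    # Python 3: the metaclass is passed with the metaclass= keyword.
    def make_hook(f):
        """Decorator marking a method to be renamed 'foo' -> '__foo__'."""
        f.is_hook = True
        return f

    class MyType(type):
        def __new__(mcls, name, bases, attrs):
            newattrs = {}
            for attrname, attrvalue in attrs.items():  # .items(), not .iteritems()
                if getattr(attrvalue, "is_hook", False):
                    newattrs[f"__{attrname}__"] = attrvalue
                else:
                    newattrs[attrname] = attrvalue
            return super().__new__(mcls, name, bases, newattrs)

    class Example(metaclass=MyType):
        def __init__(self, value):
            self.value = value

        @make_hook
        def add(self, other):
            return self.__class__(self.value + other.value)

    total = Example(10) + Example(5)  # works: add() was renamed to __add__
    print(total.value)                # 15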
- https://stackoverflow.com/questions/38987/how-do-i-merge-two-dictionaries-in-a-single-expression-in-python,"I want to merge two dictionaries into a new dictionary. x = {'a': 1, 'b': 2} y = {'b': 3, 'c': 4} z = merge(x, y) >>> z {'a': 1, 'b': 3, 'c': 4} Whenever a key k is present in both dictionaries, only the value y[k] should be kept.","How can I merge two Python dictionaries in a single expression? For dictionaries x and y, their shallowly-merged dictionary z takes values from y, replacing those from x. In Python 3.9.0 or greater (released 17 October 2020, PEP-584, discussed here): z = x | y In Python 3.5 or greater: z = {**x, **y} In Python 2, (or 3.4 or lower) write a function: def merge_two_dicts(x, y): z = x.copy() # start with keys and values of x z.update(y) # modifies z with keys and values of y return z and now: z = merge_two_dicts(x, y) Explanation Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries: x = {'a': 1, 'b': 2} y = {'b': 3, 'c': 4} The desired result is to get a new dictionary (z) with the values merged, and the second dictionary's values overwriting those from the first. >>> z {'a': 1, 'b': 3, 'c': 4} A new syntax for this, proposed in PEP 448 and available as of Python 3.5, is z = {**x, **y} And it is indeed a single expression. Note that we can merge in with literal notation as well: z = {**x, 'foo': 1, 'bar': 2, **y} and now: >>> z {'a': 1, 'b': 3, 'foo': 1, 'bar': 2, 'c': 4} It is now showing as implemented in the release schedule for 3.5, PEP 478, and it has now made its way into the What's New in Python 3.5 document. However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process: z = x.copy() z.update(y) # which returns None since it mutates z In both approaches, y will come second and its values will replace x's values, thus b will point to 3 in our final result. Not yet on Python 3.5, but want a single expression If you are not yet on Python 3.5 or need to write backward-compatible code, and you want this in a single expression, the most performant while the correct approach is to put it in a function: def merge_two_dicts(x, y): """"""Given two dictionaries, merge them into a new dict as a shallow copy."""""" z = x.copy() z.update(y) return z and then you have a single expression: z = merge_two_dicts(x, y) You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number: def merge_dicts(*dict_args): """""" Given any number of dictionaries, shallow copy and merge into a new dict, precedence goes to key-value pairs in latter dictionaries. """""" result = {} for dictionary in dict_args: result.update(dictionary) return result This function will work in Python 2 and 3 for all dictionaries. e.g. given dictionaries a to g: z = merge_dicts(a, b, c, d, e, f, g) and key-value pairs in g will take precedence over dictionaries a to f, and so on. Critiques of Other Answers Don't use what you see in the formerly accepted answer: z = dict(x.items() + y.items()) In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. 
In Python 3, this will fail because you're adding two dict_items objects together, not two lists - >>> c = dict(a.items() + b.items()) Traceback (most recent call last): File ""<stdin>"", line 1, in <module> TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items' and you would have to explicitly create them as lists, e.g. z = dict(list(x.items()) + list(y.items())). This is a waste of resources and computation power. Similarly, taking the union of items() in Python 3 (viewitems() in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, since sets are semantically unordered, the behavior is undefined in regards to precedence. So don't do this: >>> c = dict(a.items() | b.items()) This example demonstrates what happens when values are unhashable: >>> x = {'a': []} >>> y = {'b': []} >>> dict(x.items() | y.items()) Traceback (most recent call last): File ""<stdin>"", line 1, in <module> TypeError: unhashable type: 'list' Here's an example where y should have precedence, but instead the value from x is retained due to the arbitrary order of sets: >>> x = {'a': 2} >>> y = {'a': 1} >>> dict(x.items() | y.items()) {'a': 2} Another hack you should not use: z = dict(x, **y) This uses the dict constructor and is very fast and memory-efficient (even slightly more so than our two-step process) but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic. Here's an example of the usage being remediated in django. Dictionaries are intended to take hashable keys (e.g. frozensets or tuples), but this method fails in Python 3 when keys are not strings. >>> c = dict(a, **b) Traceback (most recent call last): File ""<stdin>"", line 1, in <module> TypeError: keyword arguments must be strings From the mailing list, Guido van Rossum, the creator of the language, wrote: I am fine with declaring dict({}, **{1:3}) illegal, since after all it is abuse of the ** mechanism. and Apparently dict(x, **y) is going around as ""cool hack"" for ""call x.update(y) and return x"". Personally, I find it more despicable than cool. It is my understanding (as well as the understanding of the creator of the language) that the intended usage for dict(**y) is for creating dictionaries for readability purposes, e.g.: dict(a=1, b=10, c=11) instead of {'a': 1, 'b': 10, 'c': 11} Response to comments Despite what Guido says, dict(x, **y) is in line with the dict specification, which btw. works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a short-coming of dict. Nor is using the ** operator in this place an abuse of the mechanism, in fact, ** was designed precisely to pass dictionaries as keywords. Again, it doesn't work for 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforced it. dict broke this consistency in Python 2: >>> foo(**{('a', 'b'): None}) Traceback (most recent call last): File ""<stdin>"", line 1, in <module> TypeError: foo() keywords must be strings >>> dict(**{('a', 'b'): None}) {('a', 'b'): None} This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython). 
Thus it was fixed in Python 3, as this usage could be a breaking change. I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints. More comments: dict(x.items() + y.items()) is still the most readable solution for Python 2. Readability counts. My response: merge_two_dicts(x, y) actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated. {**x, **y} does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word ""merging"" these answers describe ""updating one dict with another"", and not merging. Yes. I must refer you back to the question, which is asking for a shallow merge of two dictionaries, with the first's values being overwritten by the second's - in a single expression. Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values. As keys must be hashable and are usually therefore immutable, it is pointless to copy them: from copy import deepcopy def dict_of_dicts_merge(x, y): z = {} overlapping_keys = x.keys() & y.keys() for key in overlapping_keys: z[key] = dict_of_dicts_merge(x[key], y[key]) for key in x.keys() - overlapping_keys: z[key] = deepcopy(x[key]) for key in y.keys() - overlapping_keys: z[key] = deepcopy(y[key]) return z Usage: >>> x = {'a':{1:{}}, 'b': {2:{}}} >>> y = {'b':{10:{}}, 'c': {11:{}}} >>> dict_of_dicts_merge(x, y) {'b': {2: {}, 10: {}}, 'a': {1: {}}, 'c': {11: {}}} Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at my answer to the canonical question on a ""Dictionaries of dictionaries merge"". Less Performant But Correct Ad-hocs These approaches are less performant, but they will provide correct behavior. They will be much less performant than copy and update or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they do respect the order of precedence (latter dictionaries have precedence) You can also chain the dictionaries manually inside a dict comprehension: {k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7 or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced): dict((k, v) for d in dicts for k, v in d.items()) # iteritems in Python 2 itertools.chain will chain the iterators over the key-value pairs in the correct order: from itertools import chain z = dict(chain(x.items(), y.items())) # iteritems in Python 2 Performance Analysis I'm only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.) 
from timeit import repeat from itertools import chain x = dict.fromkeys('abcdefg') y = dict.fromkeys('efghijk') def merge_two_dicts(x, y): z = x.copy() z.update(y) return z min(repeat(lambda: {**x, **y})) min(repeat(lambda: merge_two_dicts(x, y))) min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()})) min(repeat(lambda: dict(chain(x.items(), y.items())))) min(repeat(lambda: dict(item for d in (x, y) for item in d.items()))) In Python 3.8.1, NixOS: >>> min(repeat(lambda: {**x, **y})) 1.0804965235292912 >>> min(repeat(lambda: merge_two_dicts(x, y))) 1.636518670246005 >>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()})) 3.1779992282390594 >>> min(repeat(lambda: dict(chain(x.items(), y.items())))) 2.740647904574871 >>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items()))) 4.266070580109954 $ uname -a Linux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux Resources on Dictionaries My explanation of Python's dictionary implementation, updated for 3.6. Answer on how to add new keys to a dictionary Mapping two lists into a dictionary The official Python docs on dictionaries The Dictionary Even Mightier - talk by Brandon Rhodes at Pycon 2017 Modern Python Dictionaries, A Confluence of Great Ideas - talk by Raymond Hettinger at Pycon 2017 "
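A quick sketch of the 3.9+ operators from the top of the answer, using the question's x and y:

    # PEP 584 merge operators (Python 3.9+); values from y win on conflicts.
    x = {'a': 1, 'b': 2}
    y = {'b': 3, 'c': 4}

    z = x | y   # new dict; x and y are unchanged
    print(z)    # {'a': 1, 'b': 3, 'c': 4}

    x |= y      # in-place update, equivalent to x.update(y)
    print(x)    # {'a': 1, 'b': 3, 'c': 4}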
- https://stackoverflow.com/questions/89228/how-do-i-execute-a-program-or-call-a-system-command,How do I call an external command within Python as if I had typed it in a shell or command prompt?,"Use subprocess.run: import subprocess subprocess.run([""ls"", ""-l""]) Another common way is os.system but you shouldn't use it because it is unsafe if any parts of the command come from outside your program or can contain spaces or other special characters, also subprocess.run is generally more flexible (you can get the stdout, stderr, the ""real"" status code, better error handling, etc.). Even the documentation for os.system recommends using subprocess instead. On Python 3.4 and earlier, use subprocess.call instead of .run: subprocess.call([""ls"", ""-l""]) "
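A minimal sketch of capturing the command's output with subprocess.run (capture_output and text need Python 3.7+):

    import subprocess

    result = subprocess.run(
        ["ls", "-l"],
        capture_output=True,  # collect stdout/stderr instead of inheriting them
        text=True,            # decode bytes to str
        check=True,           # raise CalledProcessError on a nonzero exit status
    )
    print(result.stdout)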
- https://stackoverflow.com/questions/273192/how-do-i-create-a-directory-and-any-missing-parent-directories,"How do I create a directory at a given path, and also create any missing parent directories along that path? For example, the Bash command mkdir -p /path/to/nested/directory does this.","On Python ≥ 3.5, use pathlib.Path.mkdir: from pathlib import Path Path(""/my/directory"").mkdir(parents=True, exist_ok=True) For older versions of Python, I see two answers with good qualities, each with a small flaw, so I will give my take on it: Try os.path.exists, and consider os.makedirs for the creation. import os if not os.path.exists(directory): os.makedirs(directory) As noted in comments and elsewhere, there's a race condition – if the directory is created between the os.path.exists and the os.makedirs calls, the os.makedirs will fail with an OSError. Unfortunately, blanket-catching OSError and continuing is not foolproof, as it will ignore a failure to create the directory due to other factors, such as insufficient permissions, full disk, etc. One option would be to trap the OSError and examine the embedded error code (see Is there a cross-platform way of getting information from Python’s OSError): import os, errno try: os.makedirs(directory) except OSError as e: if e.errno != errno.EEXIST: raise Alternatively, there could be a second os.path.exists, but suppose another created the directory after the first check, then removed it before the second one – we could still be fooled. Depending on the application, the danger of concurrent operations may be more or less than the danger posed by other factors such as file permissions. The developer would have to know more about the particular application being developed and its expected environment before choosing an implementation. Modern versions of Python improve this code quite a bit, both by exposing FileExistsError (in 3.3+)... try: os.makedirs(""path/to/directory"") except FileExistsError: # directory already exists pass ...and by allowing a keyword argument to os.makedirs called exist_ok (in 3.2+). os.makedirs(""path/to/directory"", exist_ok=True) # succeeds even if directory exists. "
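If this comes up repeatedly, the pathlib one-liner can be wrapped in a tiny helper; ensure_dir is a hypothetical name, not a standard function:

    from pathlib import Path

    def ensure_dir(path):
        """Create path and any missing parents; return it as a Path."""
        p = Path(path)
        p.mkdir(parents=True, exist_ok=True)  # no error if it already exists
        return p

    logs = ensure_dir("/tmp/myapp/logs")  # safe to call repeatedly
    print(logs)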
- https://stackoverflow.com/questions/522563/how-to-access-the-index-value-in-a-for-loop,"How do I access the index while iterating over a sequence with a for loop? xs = [8, 23, 45] for x in xs: print(""item #{} = {}"".format(index, x)) Desired output: item #1 = 8 item #2 = 23 item #3 = 45 ","Use the built-in function enumerate(): for idx, x in enumerate(xs): print(idx, x) It is non-pythonic to manually index via for i in range(len(xs)): x = xs[i] or manually manage an additional state variable. Check out PEP 279 for more."
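Note that enumerate() also accepts a start argument, which produces the 1-based numbering shown in the question's desired output:

    xs = [8, 23, 45]
    for index, x in enumerate(xs, start=1):  # count from 1 instead of 0
        print("item #{} = {}".format(index, x))
    # item #1 = 8
    # item #2 = 23
    # item #3 = 45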
- https://stackoverflow.com/questions/952914/how-do-i-make-a-flat-list-out-of-a-list-of-lists,"I have a list of lists like [ [1, 2, 3], [4, 5, 6], [7], [8, 9] ] How can I flatten it to get [1, 2, 3, 4, 5, 6, 7, 8, 9]? If your list of lists comes from a nested list comprehension, the problem can be solved more simply/directly by fixing the comprehension; please see How can I get a flat result from a list comprehension instead of a nested list?. The most popular solutions here generally only flatten one ""level"" of the nested list. See Flatten an irregular (arbitrarily nested) list of lists for solutions that completely flatten a deeply nested structure (recursively, in general).","A list of lists named xss can be flattened using a nested list comprehension: flat_list = [ x for xs in xss for x in xs ] The above is equivalent to: flat_list = [] for xs in xss: for x in xs: flat_list.append(x) Here is the corresponding function: def flatten(xss): return [x for xs in xss for x in xs] This is the fastest method. As evidence, using the timeit module in the standard library, we see: $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' '[x for xs in xss for x in xs]' 10000 loops, best of 3: 143 usec per loop $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' 'sum(xss, [])' 1000 loops, best of 3: 969 usec per loop $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' 'reduce(lambda xs, ys: xs + ys, xss)' 1000 loops, best of 3: 1.1 msec per loop Explanation: the methods based on + (including the implied use in sum) are, of necessity, O(L**2) when there are L sublists -- as the intermediate result list keeps getting longer, at each step a new intermediate result list object gets allocated, and all the items in the previous intermediate result must be copied over (as well as a few new ones added at the end). So, for simplicity and without actual loss of generality, say you have L sublists of M items each: the first M items are copied back and forth L-1 times, the second M items L-2 times, and so on; total number of copies is M times the sum of x for x from 1 to L excluded, i.e., M * (L**2)/2. The list comprehension just generates one list, once, and copies each item over (from its original place of residence to the result list) also exactly once."
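For comparison, itertools.chain.from_iterable in the standard library performs the same single-level flattening, and stays lazy until the result is materialized:

    from itertools import chain

    xss = [[1, 2, 3], [4, 5, 6], [7], [8, 9]]
    flat_list = list(chain.from_iterable(xss))
    print(flat_list)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]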
- https://stackoverflow.com/questions/136097/what-is-the-difference-between-staticmethod-and-classmethod-in-python,What is the difference between a method decorated with @staticmethod and one decorated with @classmethod?,"Maybe a bit of example code will help: Notice the difference in the call signatures of foo, class_foo and static_foo: class A(object): def foo(self, x): print(f""executing foo({self}, {x})"") @classmethod def class_foo(cls, x): print(f""executing class_foo({cls}, {x})"") @staticmethod def static_foo(x): print(f""executing static_foo({x})"") a = A() Below is the usual way an object instance calls a method. The object instance, a, is implicitly passed as the first argument. a.foo(1) # executing foo(<__main__.A object at 0xb7dbef0c>, 1) With classmethods, the class of the object instance is implicitly passed as the first argument instead of self. a.class_foo(1) # executing class_foo(<class '__main__.A'>, 1) You can also call class_foo using the class. In fact, if you define something to be a classmethod, it is probably because you intend to call it from the class rather than from a class instance. A.foo(1) would have raised a TypeError, but A.class_foo(1) works just fine: A.class_foo(1) # executing class_foo(<class '__main__.A'>, 1) One use people have found for class methods is to create inheritable alternative constructors. With staticmethods, neither self (the object instance) nor cls (the class) is implicitly passed as the first argument. They behave like plain functions except that you can call them from an instance or the class: a.static_foo(1) # executing static_foo(1) A.static_foo('hi') # executing static_foo(hi) Staticmethods are used to group functions which have some logical connection with a class to the class. foo is just a function, but when you call a.foo you don't just get the function, you get a ""partially applied"" version of the function with the object instance a bound as the first argument to the function. foo expects 2 arguments, while a.foo only expects 1 argument. a is bound to foo. That is what is meant by the term ""bound"" below: print(a.foo) # <bound method A.foo of <__main__.A object at 0xb7d52f0c>> With a.class_foo, a is not bound to class_foo, rather the class A is bound to class_foo. print(a.class_foo) # <bound method type.class_foo of <class '__main__.A'>> Here, with a staticmethod, even though it is a method, a.static_foo just returns a good 'ole function with no arguments bound. static_foo expects 1 argument, and a.static_foo expects 1 argument too. print(a.static_foo) # <function static_foo at 0xb7d479cc> And of course the same thing happens when you call static_foo with the class A instead. print(A.static_foo) # <function static_foo at 0xb7d479cc> "
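A small sketch of the "inheritable alternative constructor" use of classmethods mentioned above; the Date and from_string names are illustrative:

    class Date:
        def __init__(self, year, month, day):
            self.year, self.month, self.day = year, month, day

        @classmethod
        def from_string(cls, s):
            # cls is Date here, but the subclass when called on a subclass,
            # so subclasses inherit a working constructor for free.
            year, month, day = map(int, s.split("-"))
            return cls(year, month, day)

    class LoggedDate(Date):
        pass

    d = LoggedDate.from_string("2009-01-06")
    print(type(d).__name__, d.year)  # LoggedDate 2009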
- https://stackoverflow.com/questions/509211/how-slicing-in-python-works,"How does Python's slice notation work? That is: when I write code like a[x:y:z], a[:], a[::2] etc., how can I understand which elements end up in the slice? See Why are slice and range upper-bound exclusive? to learn why xs[0:2] == [xs[0], xs[1]], not [..., xs[2]]. See Make a new list containing every Nth item in the original list for xs[::N]. See How does assignment work with list slices? to learn what xs[0:2] = [""a"", ""b""] does.","The syntax is: a[start:stop] # items start through stop-1 a[start:] # items start through the rest of the array a[:stop] # items from the beginning through stop-1 a[:] # a copy of the whole array There is also the step value, which can be used with any of the above: a[start:stop:step] # start through not past stop, by step The key point to remember is that the :stop value represents the first value that is not in the selected slice. So, the difference between stop and start is the number of elements selected (if step is 1, the default). The other feature is that start or stop may be a negative number, which means it counts from the end of the array instead of the beginning. So: a[-1] # last item in the array a[-2:] # last two items in the array a[:-2] # everything except the last two items Similarly, step may be a negative number: a[::-1] # all items in the array, reversed a[1::-1] # the first two items, reversed a[:-3:-1] # the last two items, reversed a[-3::-1] # everything except the last two items, reversed Python is kind to the programmer if there are fewer items than you ask for. For example, if you ask for a[:-2] and a only contains one element, you get an empty list instead of an error. Sometimes you would prefer the error, so you have to be aware that this may happen. Relationship with the slice object A slice object can represent a slicing operation, i.e.: a[start:stop:step] is equivalent to: a[slice(start, stop, step)] Slice objects also behave slightly differently depending on the number of arguments, similar to range(), i.e. both slice(stop) and slice(start, stop[, step]) are supported. To skip specifying a given argument, one might use None, so that e.g. a[start:] is equivalent to a[slice(start, None)] or a[::-1] is equivalent to a[slice(None, None, -1)]. While the :-based notation is very helpful for simple slicing, the explicit use of slice() objects simplifies the programmatic generation of slicing."
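A brief sketch of the slice() objects described above, used here to name fixed-width fields; the record layout is illustrative:

    record = "2009-01-06 15:08"
    DATE, CLOCK = slice(0, 10), slice(11, 16)

    print(record[DATE])   # 2009-01-06
    print(record[CLOCK])  # 15:08
    print(record[::-1])   # same as record[slice(None, None, -1)]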
- https://stackoverflow.com/questions/176918/how-to-find-the-index-for-a-given-item-in-a-list,"Given a list [""foo"", ""bar"", ""baz""] and an item in the list ""bar"", how do I get its index 1?",">>> [""foo"", ""bar"", ""baz""].index(""bar"") 1 See the documentation for the built-in .index() method of the list: list.index(x[, start[, end]]) Return zero-based index in the list of the first item whose value is equal to x. Raises a ValueError if there is no such item. The optional arguments start and end are interpreted as in the slice notation and are used to limit the search to a particular subsequence of the list. The returned index is computed relative to the beginning of the full sequence rather than the start argument. Caveats Linear time-complexity in list length An index call checks every element of the list in order, until it finds a match. If the list is long, and if there is no guarantee that the value will be near the beginning, this can slow down the code. This problem can only be completely avoided by using a different data structure. However, if the element is known to be within a certain part of the list, the start and end parameters can be used to narrow the search. For example: >>> import timeit >>> timeit.timeit('l.index(999_999)', setup='l = list(range(0, 1_000_000))', number=1000) 9.356267921015387 >>> timeit.timeit('l.index(999_999, 999_990, 1_000_000)', setup='l = list(range(0, 1_000_000))', number=1000) 0.0004404920036904514 The second call is orders of magnitude faster, because it only has to search through 10 elements, rather than all 1 million. Only the index of the first match is returned A call to index searches through the list in order until it finds a match, and stops there. If there could be more than one occurrence of the value, and all indices are needed, index cannot solve the problem: >>> [1, 1].index(1) # the `1` index is not found. 0 Instead, use a list comprehension or generator expression to do the search, with enumerate to get indices: >>> # A list comprehension gives a list of indices directly: >>> [i for i, e in enumerate([1, 2, 1]) if e == 1] [0, 2] >>> # A generator comprehension gives us an iterable object... >>> g = (i for i, e in enumerate([1, 2, 1]) if e == 1) >>> # which can be used in a `for` loop, or manually iterated with `next`: >>> next(g) 0 >>> next(g) 2 The list comprehension and generator expression techniques still work if there is only one match, and are more generalizable. Raises an exception if there is no match As noted in the documentation above, using .index will raise an exception if the searched-for value is not in the list: >>> [1, 1].index(2) Traceback (most recent call last): File ""<stdin>"", line 1, in <module> ValueError: 2 is not in list If this is a concern, either explicitly check first using item in my_list, or handle the exception with try/except as appropriate. The explicit check is simple and readable, but it must iterate the list a second time. See What is the EAFP principle in Python? for more guidance on this choice."
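If the ValueError is unwanted, index can be wrapped in a small helper; index_or_default is a hypothetical name:

    def index_or_default(items, value, default=-1):
        """Return the first index of value, or default if it is absent."""
        try:
            return items.index(value)
        except ValueError:
            return default

    print(index_or_default(["foo", "bar", "baz"], "bar"))  # 1
    print(index_or_default(["foo", "bar", "baz"], "qux"))  # -1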
- https://stackoverflow.com/questions/3294889/iterating-over-dictionaries-using-for-loops,"d = {'x': 1, 'y': 2, 'z': 3} for key in d: print(key, 'corresponds to', d[key]) How does Python recognize that it needs only to read the key from the dictionary? Is key a special keyword, or is it simply a variable?","key is just a variable name. for key in d: will simply loop over the keys in the dictionary, rather than the keys and values. To loop over both key and value you can use the following: For Python 3.x: for key, value in d.items(): For Python 2.x: for key, value in d.iteritems(): To test for yourself, change the word key to poop. In Python 3.x, iteritems() was replaced with simply items(), which returns a set-like view backed by the dict, like iteritems() but even better. This is also available in 2.7 as viewitems(). The operation items() will work for both 2 and 3, but in 2 it will return a list of the dictionary's (key, value) pairs, which will not reflect changes to the dict that happen after the items() call. If you want the 2.x behavior in 3.x, you can call list(d.items())."
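The two loop forms side by side, in Python 3:

    d = {'x': 1, 'y': 2, 'z': 3}

    for key in d:                 # keys only (what the question's loop does)
        print(key, 'corresponds to', d[key])

    for key, value in d.items():  # keys and values together, no extra lookup
        print(key, 'corresponds to', value)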
- https://stackoverflow.com/questions/16476924/how-can-i-iterate-over-rows-in-a-pandas-dataframe,"I have a pandas dataframe, df: c1 c2 0 10 100 1 11 110 2 12 120 How do I iterate over the rows of this dataframe? For every row, I want to access its elements (values in cells) by the name of the columns. For example: for row in df.rows: print(row['c1'], row['c2']) I found a similar question, which suggests using either of these: for date, row in df.T.iteritems(): for row in df.iterrows(): But I do not understand what the row object is and how I can work with it.","DataFrame.iterrows is a generator which yields both the index and row (as a Series): import pandas as pd df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]}) df = df.reset_index() # make sure indexes pair with number of rows for index, row in df.iterrows(): print(row['c1'], row['c2']) 10 100 11 110 12 120 Obligatory disclaimer from the documentation Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches: Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, … When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application. If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the enhancing performance section for some examples of this approach. Other answers in this thread delve into greater depth on alternatives to iter* functions if you are interested to learn more."
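Two of the alternatives the disclaimer recommends, sketched on the question's dataframe; the total column is illustrative:

    import pandas as pd

    df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]})

    # Vectorized: operate on whole columns at once (usually fastest).
    df['total'] = df['c1'] + df['c2']

    # itertuples: still row-by-row, but lighter than iterrows and with
    # attribute access on namedtuples.
    for row in df.itertuples(index=False):
        print(row.c1, row.c2, row.total)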
- https://stackoverflow.com/questions/423379/how-to-use-a-global-variable-in-a-function,"How do I create or use a global variable inside a function? How do I use a global variable that was defined in one function inside other functions? Failing to use the global keyword where appropriate often causes UnboundLocalError. The precise rules for this are explained at UnboundLocalError on local variable when reassigned after first use. Generally, please close other questions as a duplicate of that question when an explanation is sought, and this question when someone simply needs to know the global keyword.","You can use a global variable within other functions by declaring it as global within each function that assigns a value to it: globvar = 0 def set_globvar_to_one(): global globvar # Needed to modify global copy of globvar globvar = 1 def print_globvar(): print(globvar) # No need for global declaration to read value of globvar set_globvar_to_one() print_globvar() # Prints 1 Since it's unclear whether globvar = 1 is creating a local variable or changing a global variable, Python defaults to creating a local variable, and makes you explicitly choose the other behavior with the global keyword. See other answers if you want to share a global variable across modules."
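A minimal sketch of the UnboundLocalError the question mentions, next to the global fix:

    counter = 0

    def broken_increment():
        counter += 1  # calling this raises UnboundLocalError:
                      # the assignment makes counter local to the function

    def increment():
        global counter  # explicitly rebind the module-level name
        counter += 1

    increment()
    print(counter)  # 1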
- https://stackoverflow.com/questions/415511/how-do-i-get-the-current-time-in-python,How do I get the current time in Python?,"Use datetime: >>> import datetime >>> now = datetime.datetime.now() >>> now datetime.datetime(2009, 1, 6, 15, 8, 24, 78915) >>> print(now) 2009-01-06 15:08:24.789150 For just the clock time without the date: >>> now.time() datetime.time(15, 8, 24, 78915) >>> print(now.time()) 15:08:24.789150 To save typing, you can import the datetime object from the datetime module: >>> from datetime import datetime Then remove the prefix datetime. from all of the above."
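If a timezone-aware time is needed, datetime.now() accepts a tz argument (timezone.utc exists since Python 3.2):

    from datetime import datetime, timezone

    now_utc = datetime.now(timezone.utc)  # aware, unlike the naive now()
    print(now_utc.isoformat())
    print(now_utc.strftime("%Y-%m-%d %H:%M:%S"))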
- https://stackoverflow.com/questions/6470428/how-to-catch-multiple-exceptions-in-one-line-in-the-except-block,"I know that I can do: try: # do something that may fail except: # do this if ANYTHING goes wrong I can also do this: try: # do something that may fail except IDontLikeYouException: # say please except YouAreTooShortException: # stand on a ladder But if I want to do the same thing inside two different exceptions, the best I can think of right now is to do this: try: # do something that may fail except IDontLikeYouException: # say please except YouAreBeingMeanException: # say please Is there any way that I can do something like this (since the action to take in both exceptions is to say please): try: # do something that may fail except IDontLikeYouException, YouAreBeingMeanException: # say please Now this really won't work, as it matches the syntax for: try: # do something that may fail except Exception, e: # say please So, my effort to catch the two distinct exceptions doesn't exactly come through. Is there a way to do this?","From Python Documentation: An except clause may name multiple exceptions as a parenthesized tuple, for example except (IDontLikeYouException, YouAreBeingMeanException) as e: pass Or, for Python 2 only: except (IDontLikeYouException, YouAreBeingMeanException), e: pass Separating the exception from the variable with a comma will still work in Python 2.6 and 2.7, but is now deprecated and does not work in Python 3; now you should be using as."
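The Python 3 form as a runnable sketch; keeping the tuple in a constant (RETRYABLE is an illustrative name) avoids repeating it across except clauses:

    RETRYABLE = (ConnectionError, TimeoutError)  # both built-in since 3.3

    try:
        raise TimeoutError("too slow")
    except RETRYABLE as e:
        print("say please:", e)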
- https://stackoverflow.com/questions/123198/how-to-copy-files,How do I copy a file in Python?,"shutil has many methods you can use. One of which is: import shutil shutil.copyfile(src, dst) # 2nd option shutil.copy(src, dst) # dst can be a folder; use shutil.copy2() to preserve timestamp Copy the contents of the file named src to a file named dst. Both src and dst need to be the entire filename of the files, including path. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. With copy, src and dst are path names given as strs. Another shutil method to look at is shutil.copy2(). It's similar but preserves more metadata (e.g. time stamps). If you use os.path operations, use copy rather than copyfile. copyfile will only accept strings."
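A small sketch combining shutil.copy2 with pathlib; the file names are illustrative:

    import shutil
    from pathlib import Path

    src = Path("report.txt")
    dst = Path("backup") / "report.txt"
    dst.parent.mkdir(parents=True, exist_ok=True)  # make sure backup/ exists
    shutil.copy2(src, dst)  # like copy(), but also preserves timestamps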
20
- https://stackoverflow.com/questions/448271/what-is-init-py-for,What is __init__.py for in a Python source directory?,"It used to be a required part of a package (old, pre-3.3 ""regular package"", not newer 3.3+ ""namespace package""). Here's the documentation. Python defines two types of packages, regular packages and namespace packages. Regular packages are traditional packages as they existed in Python 3.2 and earlier. A regular package is typically implemented as a directory containing an __init__.py file. When a regular package is imported, this __init__.py file is implicitly executed, and the objects it defines are bound to names in the package’s namespace. The __init__.py file can contain the same Python code that any other module can contain, and Python will add some additional attributes to the module when it is imported. But just click the link, it contains an example, more information, and an explanation of namespace packages, the kind of packages without __init__.py."
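As a concrete illustration of a regular package, here is a hypothetical layout (all names invented, shown as comments since a directory tree is not itself runnable):

# Hypothetical layout for a regular package:
#
#   mypackage/
#       __init__.py      <- executed once on `import mypackage`
#       helpers.py
#
# If mypackage/__init__.py contains:
#     from .helpers import useful_function
#
# then callers can write:
#     import mypackage
#     mypackage.useful_function()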
21
- https://stackoverflow.com/questions/606191/convert-bytes-to-a-string-in-python-3,"I captured the standard output of an external program into a bytes object: >>> from subprocess import * >>> stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0] >>> stdout b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n' I want to convert that to a normal Python string, so that I can print it like this: >>> print(stdout) -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2 How do I convert the bytes object to a str with Python 3? See Best way to convert string to bytes in Python 3? for the other way around.","Decode the bytes object to produce a string: >>> b""abcde"".decode(""utf-8"") 'abcde' The above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in!"
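When the data may not be clean in the chosen encoding, errors="replace" substitutes undecodable bytes instead of raising; this option is an addition beyond the answer above, and the byte string is a contrived example.

raw = b"caf\xc3\xa9 plus a stray byte \xff"
print(raw.decode("utf-8", errors="replace"))  # the invalid 0xff byte becomes U+FFFD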
22
- https://stackoverflow.com/questions/1436703/what-is-the-difference-between-str-and-repr,What is the difference between __str__ and __repr__ in Python?,"Alex Martelli summarized well but, surprisingly, was too succinct. First, let me reiterate the main points in Alex’s post: The default implementation is useless (it’s hard to think of one which wouldn’t be, but yeah) __repr__ goal is to be unambiguous __str__ goal is to be readable Container’s __str__ uses contained objects’ __repr__ Default implementation is useless This is mostly a surprise because Python’s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like: return ""%s(%r)"" % (self.__class__, self.__dict__) Or in new f-string formatting: return f""{self.__class__!s}({self.__dict__!r})"" would have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__. This means, in simple terms: almost every object you implement should have a functional __repr__ that’s usable for understanding the object. Implementing __str__ is optional: do that if you need a “pretty print” functionality (for example, used by a report generator). The goal of __repr__ is to be unambiguous Let me come right out and say it — I do not believe in debuggers. I don’t really know how to use any debugger, and have never used one seriously. Furthermore, I believe that the big fault in debuggers is their basic nature — most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system. Python makes it easy to log: with maybe some project specific wrappers, all you need is a log(INFO, ""I am in the weird function and a is"", a, ""and b is"", b, ""but I got a null C — using default"", default_c) But you have to do the last step — make sure every object you implement has a useful repr, so code like that can just work. This is why the “eval” thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that’s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: ""MyClass(this=%r,that=%r)"" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments — but it is a useful form to express “this is everything you need to know about this instance”. Note: I used %r above, not %s. You always want to use repr() [or %r formatting character, equivalently] inside __repr__ implementation, or you’re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass(""3""). The goal of __str__ is to be readable Specifically, it is not intended to be unambiguous — notice that str(3)==str(""3""). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be ""2010/4/12 15:35:22"", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class — as long as it supports readability, it is an improvement. 
Container’s __str__ uses contained objects’ __repr__ This seems surprising, doesn’t it? It is a little, but how readable would it be if it used their __str__? [moshe is, 3, hello world, this is a list, oh I don't know, containing just 4 elements] Not very. Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you’re printing a list, just print(""["" + "", "".join(lst) + ""]"") (you can probably also figure out what to do about dictionaries). Summary Implement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability."
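A compact class showing the division of labour described above; the class name and fields are invented for the example (Python 3.6+ for the f-strings).

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # unambiguous, eval-like; !r applies repr() to each field
        return f"Point(x={self.x!r}, y={self.y!r})"

    def __str__(self):
        # readable, for end users
        return f"({self.x}, {self.y})"

p = Point(1, "2")
print(str(p))   # (1, 2)
print(repr(p))  # Point(x=1, y='2'), the quotes disambiguate the string
print([p])      # [Point(x=1, y='2')], containers use __repr__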
23
- https://stackoverflow.com/questions/17071871/how-do-i-select-rows-from-a-dataframe-based-on-column-values,"How can I select rows from a DataFrame based on values in some column in Pandas? In SQL, I would use: SELECT * FROM table WHERE column_name = some_value ","To select rows whose column value equals a scalar, some_value, use ==: df.loc[df['column_name'] == some_value] To select rows whose column value is in an iterable, some_values, use isin: df.loc[df['column_name'].isin(some_values)] Combine multiple conditions with &: df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)] Note the parentheses. Due to Python's operator precedence rules, & binds more tightly than <= and >=. Thus, the parentheses in the last example are necessary. Without the parentheses df['column_name'] >= A & df['column_name'] <= B is parsed as df['column_name'] >= (A & df['column_name']) <= B which results in a Truth value of a Series is ambiguous error. To select rows whose column value does not equal some_value, use !=: df.loc[df['column_name'] != some_value] The isin returns a boolean Series, so to select rows whose value is not in some_values, negate the boolean Series using ~: df = df.loc[~df['column_name'].isin(some_values)] # .loc is not in-place replacement For example, import pandas as pd import numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}) print(df) # A B C D # 0 foo one 0 0 # 1 bar one 1 2 # 2 foo two 2 4 # 3 bar three 3 6 # 4 foo two 4 8 # 5 bar two 5 10 # 6 foo one 6 12 # 7 foo three 7 14 print(df.loc[df['A'] == 'foo']) yields A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 If you have multiple values you want to include, put them in a list (or more generally, any iterable) and use isin: print(df.loc[df['B'].isin(['one','three'])]) yields A B C D 0 foo one 0 0 1 bar one 1 2 3 bar three 3 6 6 foo one 6 12 7 foo three 7 14 Note, however, that if you wish to do this many times, it is more efficient to make an index first, and then use df.loc: df = df.set_index(['B']) print(df.loc['one']) yields A C D B one foo 0 0 one bar 1 2 one foo 6 12 or, to include multiple values from the index use df.index.isin: df.loc[df.index.isin(['one','two'])] yields A C D B one foo 0 0 one bar 1 2 two foo 2 4 two foo 4 8 two bar 5 10 one foo 6 12 "
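As a small aside beyond the answer above, pandas also offers Series.between for the two-sided range condition, which sidesteps the parenthesization pitfall; the frame below is a toy example.

import pandas as pd

df = pd.DataFrame({"column_name": [1, 5, 10, 15]})
print(df.loc[df["column_name"].between(5, 10)])  # inclusive on both ends by default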
24
- https://stackoverflow.com/questions/3437059/does-python-have-a-string-contains-substring-method,"I'm looking for a string.contains or string.indexof method in Python. I want to do: if not somestring.contains(""blah""): continue ","Use the in operator: if ""blah"" not in somestring: continue Note: This is case-sensitive."
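Since the test is case-sensitive, a common extension is to lower-case both sides first; the string here is an arbitrary example.

somestring = "Blah blah blah"
if "BLAH".lower() in somestring.lower():  # normalise case on both sides
    print("found")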
25
- https://stackoverflow.com/questions/1024847/how-can-i-add-new-keys-to-a-dictionary,How do I add a new key to an existing dictionary? It doesn't have an .add() method.,"You create a new key/value pair on a dictionary by assigning a value to that key d = {'key': 'value'} print(d) # {'key': 'value'} d['mynewkey'] = 'mynewvalue' print(d) # {'key': 'value', 'mynewkey': 'mynewvalue'} If the key doesn't exist, it's added and points to that value. If it exists, the current value it points to is overwritten."
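Two related idioms worth knowing alongside plain assignment (keys and values here are arbitrary): update merges several pairs at once, and setdefault only inserts when the key is absent.

d = {"key": "value"}
d.update({"a": 1, "b": 2})      # add or overwrite several pairs at once
d.setdefault("key", "ignored")  # key exists, so the old value is kept
d.setdefault("c", 3)            # key is missing, so it is inserted
print(d)  # {'key': 'value', 'a': 1, 'b': 2, 'c': 3}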
26
- https://stackoverflow.com/questions/6996603/how-can-i-delete-a-file-or-folder-in-python,How can I delete a file or folder in Python?,"Use one of these methods: pathlib.Path.unlink() removes a file or symbolic link. pathlib.Path.rmdir() removes an empty directory. shutil.rmtree() deletes a directory and all its contents. On Python 3.3 and below, you can use these methods instead of the pathlib ones: os.remove() removes a file. os.unlink() removes a symbolic link. os.rmdir() removes an empty directory. "
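A small sketch combining these calls with the usual guards for a path that may not exist; the paths are placeholders, and missing_ok requires Python 3.8+.

from pathlib import Path
import shutil

Path("some_file.txt").unlink(missing_ok=True)  # no error if the file is absent (3.8+)

d = Path("some_dir")
if d.is_dir():
    shutil.rmtree(d)  # remove the directory and everything in it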
27
- https://stackoverflow.com/questions/1132941/least-astonishment-and-the-mutable-default-argument,"Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue: def foo(a=[]): a.append(5) return a Python novices would expect this function called with no parameter to always return a list with only one element: [5]. The result is instead very different, and very astonishing (for a novice): >>> foo() [5] >>> foo() [5, 5] >>> foo() [5, 5, 5] >>> foo() [5, 5, 5, 5] >>> foo() A manager of mine once had his first encounter with this feature, and called it ""a dramatic design flaw"" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?) Edit: Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further: def a(): print(""a executed"") return [] def b(x=a()): x.append(5) print(x) a executed >>> b() [5] >>> b() [5, 5] To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function, or ""together"" with it? Doing the binding inside the function would mean that x is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the def line would be ""hybrid"" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time. The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.","Actually, this is not a design flaw, and it is not because of internals or performance. It comes simply from the fact that functions in Python are first-class objects, and not only a piece of code. As soon as you think of it this way, then it completely makes sense: a function is an object being evaluated on its definition; default parameters are kind of ""member data"" and therefore their state may change from one call to the other - exactly as in any other object. In any case, the Effbot (Fredrik Lundh) has a very nice explanation of the reasons for this behavior in Default Parameter Values in Python. I found it very clear, and I really suggest reading it for a better knowledge of how function objects work."
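For completeness, the standard workaround implied by this discussion: use None as a sentinel and build the list inside the body, so each call gets a fresh one.

def foo(a=None):
    if a is None:  # the default was used, so build a new list for this call
        a = []
    a.append(5)
    return a

print(foo())  # [5]
print(foo())  # [5], not [5, 5]: no state is shared between calls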
28
- https://stackoverflow.com/questions/36901/what-does-double-star-asterisk-and-star-asterisk-do-for-parameters,"What do *args and **kwargs mean in these function definitions? def foo(x, y, *args): pass def bar(x, y, **kwargs): pass See What do ** (double star/asterisk) and * (star/asterisk) mean in a function call? for the complementary question about arguments.","The *args and **kwargs are common idioms to allow an arbitrary number of arguments to functions, as described in the section more on defining functions in the Python tutorial. The *args will give you all positional arguments as a tuple: def foo(*args): for a in args: print(a) foo(1) # 1 foo(1, 2, 3) # 1 # 2 # 3 The **kwargs will give you all keyword arguments as a dictionary: def bar(**kwargs): for a in kwargs: print(a, kwargs[a]) bar(name='one', age=27) # name one # age 27 Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments: def foo(kind, *args, bar=None, **kwargs): print(kind, args, bar, kwargs) foo(123, 'a', 'b', apple='red') # 123 ('a', 'b') None {'apple': 'red'} It is also possible to use this the other way around: def foo(a, b, c): print(a, b, c) obj = {'b':10, 'c':'lee'} foo(100, **obj) # 100 10 lee Another usage of the *l idiom is to unpack argument lists when calling a function. def foo(bar, lee): print(bar, lee) baz = [1, 2] foo(*baz) # 1 2 In Python 3 it is possible to use *l on the left side of an assignment (Extended Iterable Unpacking), though it gives a list instead of a tuple in this context: first, *rest = [1, 2, 3, 4] # first = 1 # rest = [2, 3, 4] Also Python 3 adds a new semantic (see PEP 3102): def func(arg1, arg2, arg3, *, kwarg1, kwarg2): pass Such a function accepts only 3 positional arguments, and everything after * can only be passed as keyword arguments. Note: A Python dict, semantically used for keyword argument passing, is arbitrarily ordered. However, in Python 3.6+, keyword arguments are guaranteed to remember insertion order. ""The order of elements in **kwargs now corresponds to the order in which keyword arguments were passed to the function."" - What’s New In Python 3.6. In fact, all dicts in CPython 3.6 will remember insertion order as an implementation detail, and this becomes standard in Python 3.7."
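One more common pattern building on the answer above: a wrapper that forwards whatever it receives to another function unchanged. The helper name log_call and the add function are invented for the sketch.

def log_call(func, *args, **kwargs):
    # hypothetical helper: report the call, then forward everything unchanged
    print(f"calling {func.__name__} with {args} and {kwargs}")
    return func(*args, **kwargs)

def add(a, b=0):
    return a + b

print(log_call(add, 1, b=2))  # calling add with (1,) and {'b': 2}, then 3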
29
- https://stackoverflow.com/questions/613183/how-do-i-sort-a-dictionary-by-value,"I have a dictionary of values read from two fields in a database: a string field and a numeric field. The string field is unique, so that is the key of the dictionary. I can sort on the keys, but how can I sort based on the values? Note: I have read Stack Overflow question here How do I sort a list of dictionaries by a value of the dictionary? and probably could change my code to have a list of dictionaries, but since I do not really need a list of dictionaries I wanted to know if there is a simpler solution to sort either in ascending or descending order.","Python 3.7+ or CPython 3.6 Dicts preserve insertion order in Python 3.7+. Same in CPython 3.6, but it's an implementation detail. >>> x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0} >>> {k: v for k, v in sorted(x.items(), key=lambda item: item[1])} {0: 0, 2: 1, 1: 2, 4: 3, 3: 4} or >>> dict(sorted(x.items(), key=lambda item: item[1])) {0: 0, 2: 1, 1: 2, 4: 3, 3: 4} Older Python It is not possible to sort a dictionary, only to get a representation of a dictionary that is sorted. Dictionaries are inherently orderless, but other types, such as lists and tuples, are not. So you need an ordered data type to represent sorted values, which will be a list—probably a list of tuples. For instance, import operator x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0} sorted_x = sorted(x.items(), key=operator.itemgetter(1)) sorted_x will be a list of tuples sorted by the second element in each tuple. dict(sorted_x) == x. And for those wishing to sort on keys instead of values: import operator x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0} sorted_x = sorted(x.items(), key=operator.itemgetter(0)) In Python 3, since tuple unpacking in lambda parameters is no longer allowed (PEP 3113), we can use x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0} sorted_x = sorted(x.items(), key=lambda kv: kv[1]) If you want the output as a dict, you can use collections.OrderedDict: import collections sorted_dict = collections.OrderedDict(sorted_x) 
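For descending order, both modern forms above accept reverse=True; this reuses the same toy dict as the answer.

x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0}
print(dict(sorted(x.items(), key=lambda item: item[1], reverse=True)))
# {3: 4, 4: 3, 1: 2, 2: 1, 0: 0}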
30
- https://stackoverflow.com/questions/4906977/how-can-i-access-environment-variables-in-python,How can I get the value of an environment variable in Python?,"Environment variables are accessed through os.environ: import os print(os.environ['HOME']) To see a list of all environment variables: print(os.environ) If a key is not present, attempting to access it will raise a KeyError. To avoid this: # Returns `None` if the key doesn't exist print(os.environ.get('KEY_THAT_MIGHT_EXIST')) # Returns `default_value` if the key doesn't exist print(os.environ.get('KEY_THAT_MIGHT_EXIST', default_value)) # Returns `default_value` if the key doesn't exist print(os.getenv('KEY_THAT_MIGHT_EXIST', default_value)) "
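Setting a variable works through the same mapping, though it only affects the current process and its children; the variable name below is an arbitrary example.

import os

os.environ["MY_APP_MODE"] = "debug"   # visible to this process and subprocesses
print(os.environ.get("MY_APP_MODE"))  # debug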