question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
60,739,889 | 2020-3-18 | https://stackoverflow.com/questions/60739889/bypass-mypys-module-has-no-attribute-on-dynamic-attribute-setting | I have the following code on a.py: class Tags(enum.Flag): NONE = 0 A = enum.auto() B = enum.auto() C = enum.auto() # Allow using tags.A instead of tags.Tags.A globals().update(Tags.__members__) But when I use it on other files, mypy (rightfully) fails to identify the attributes: import tags print(tags.A) # Module has no attribute "A" Is there any possible way to bypass this in Python 3.6? Known solutions (which are not good enough for my case): Use # type: ignore every time you use tags.A Use tags.Tags.A Use a module-level __getattr__ (works only from Python 3.7) | Personally, what I would do is modify my imports to do: from tags import Tags ..which would let me refer to the enum items by doing just Tags.A everywhere with minimal fuss. But if you really want to continue referring to these items from the module namespace instead, one approach is to define a __getattr__ function within tags.py: class Tags(enum.Flag): NONE = 0 A = enum.auto() B = enum.auto() C = enum.auto() def __getattr__(name: str) -> Tags: return Tags[name] At runtime, Python will try calling this function if you try accessing some attribute that wasn't directly defined in the tags module object. Mypy understands this convention and assumes that if __getattr__ is defined, the module is "incomplete". So, it'll use the return type as the type for any missing attributes you try accessing. You may still want to do globals().update(Tags.__members__) mostly as a performance optimization though, to skip having to actually call the __getattr__ function at runtime. This strategy only really works if tags.py only contains one enum though -- otherwise, you'd need to make the return type something like Union[Tags, MyOtherEnum] (which is unwieldy) or even just Any (which loses the benefits of using a type checker). 
This strategy also means mypy won't be able to do more sophisticated type inference and narrowing by taking into account the actual enum value. This is mostly only relevant if you're using Literal enums though. If these are concerns, you might have to resort to a more brute force approach, like so: from typing import TYPE_CHECKING class Tags(enum.Flag): NONE = 0 A = enum.auto() B = enum.auto() C = enum.auto() globals().update(Tags.__members__) if TYPE_CHECKING: NONE = Tags.NONE A = Tags.A B = Tags.B C = Tags.C The TYPE_CHECKING constant is always false at runtime, but is considered to be always true by type checkers. But if you're going to go to the expense of directly telling mypy about each of these variants, you might as well just skip trying to automatically update the globals and do this instead: class Tags(enum.Flag): NONE = 0 A = enum.auto() B = enum.auto() C = enum.auto() NONE = Tags.NONE A = Tags.A B = Tags.B C = Tags.C Obviously, having to repeat yourself twice like this is pretty suboptimal, but I don't think there's an easy way around it. You could perhaps mitigate this by creating a script that auto-generates tags.py for you, but that's also fairly suboptimal, just for different reasons. | 8 | 2 |
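A runnable sketch of the answer's module-level __getattr__ approach. The tags module is built dynamically here purely so the example is self-contained; in practice the code string below is simply the contents of tags.py:

```python
import sys
import types

# Stand-in for tags.py, created dynamically so this snippet is self-contained.
tags = types.ModuleType("tags")
exec(
    '''
import enum

class Tags(enum.Flag):
    NONE = 0
    A = enum.auto()
    B = enum.auto()
    C = enum.auto()

# Re-export members so tags.A resolves without the __getattr__ fallback.
globals().update(Tags.__members__)

def __getattr__(name):
    # PEP 562 (Python 3.7+): called only for attributes missing from the module.
    return Tags[name]
''',
    tags.__dict__,
)
sys.modules["tags"] = tags

print(tags.A is tags.Tags.A)   # True: found directly in the module globals
del tags.__dict__["B"]         # remove the re-exported member...
print(tags.B is tags.Tags.B)   # True: ...now served by module __getattr__
```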
60,742,389 | 2020-3-18 | https://stackoverflow.com/questions/60742389/split-item-to-rows-pandas | I have the data in dataframes like below. I want to split the item into the same number of rows >>> df idx a 0 3 1 5 2 4 From the above dataframe, I want the below as >>> df idx a 0 1 1 2 2 3 3 1 4 2 5 3 6 4 7 5 8 1 9 2 10 3 11 4 I tried several ways, but no success. | A fun way: df.a.map(range).explode()+1 # may add reset_index(), however, I think keeping the original index is good, and it helps us convert back. Out[158]: idx 0 1 0 2 0 3 1 1 1 2 1 3 1 4 1 5 2 1 2 2 2 3 2 4 Name: a, dtype: object | 6 | 5 |
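A self-contained version of the one-liner, with the column values taken from the question:

```python
import pandas as pd

df = pd.DataFrame({"a": [3, 5, 4]})

# map(range) turns each count into a range object, explode() expands each
# range into one row per element, and +1 shifts the 0-based values to 1-based.
out = df.a.map(range).explode() + 1

print(out.tolist())  # [1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 3, 4]
```

Appending .reset_index(drop=True) produces the fresh 0-11 index shown in the desired output.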
60,741,112 | 2020-3-18 | https://stackoverflow.com/questions/60741112/most-pythonic-way-to-provide-defaults-for-class-constructor | I am trying to stick to Google's style guide to strive for consistency from the beginning. I am currently creating a module and within this module I have a class. I want to provide some sensible default values for different standard use cases. However, I want to give the user the flexibility to override any of the defaults. What I am currently doing is I provide a module-scoped "constant" dictionary with the default values (for the different use cases) and in my class I give the parameters in the constructor precedence over the defaults. Finally, I want to make sure that we end up with valid values for the parameters. That's what I have done: MY_DEFAULTS = {"use_case_1": {"x": 1, "y": 2}, "use_case_2": {"x": 4, "y": 3}} class MyClass: def __init__(self, use_case = None, x = None, y = None): self.x = x self.y = y if use_case: if not self.x: self.x = MY_DEFAULTS[use_case]["x"] if not self.y: self.y = MY_DEFAULTS[use_case]["y"] assert self.x, "no valid values for 'x' provided" assert self.y, "no valid values for 'y' provided" def __str__(self): return "(%s, %s)" % (self.x, self.y) print(MyClass()) # AssertionError: no valid values for 'x' provided print(MyClass("use_case_1")) # (1, 2) print(MyClass("use_case_2", y = 10)) # (4, 10) Questions While technically working, I was wondering whether this is the most pythonic way of doing it? With more and more default values for my class the code becomes very repetitive; what could I do to simplify that? assert also seems to me not the best option, as it is rather a debugging statement than a validation check. 
I was toying with the @property decorator, where I would raise an Exception in case there are invalid parameters, but with the current pattern I want to allow x and y for a short moment to be not truthy to implement the precedence properly (that is, I only want to check the truthiness at the end of the constructor). Any hints on that? | In general, if there is more than one way to reasonably construct your object type, you can provide classmethods for alternate construction (dict.fromkeys is an excellent example of this). Note that this approach is more applicable if your use cases are finite and well defined statically. class MyClass: def __init__(self, x, y): self.x = x self.y = y @classmethod def make_use_case1(cls, x=1, y=2): return cls(x,y) @classmethod def make_use_case2(cls, x=4, y=3): return cls(x,y) def __str__(self): return "(%s, %s)" % (self.x, self.y) If the only variation in the use cases is default arguments then re-writing the list of positional arguments each time is a lot of overhead. Instead we can write one classmethod to take the use case and the optional overrides as keyword-only. class MyClass: DEFAULTS_PER_USE_CASE = { "use_case_1": {"x": 1, "y": 2}, "use_case_2": {"x": 4, "y": 3} } @classmethod def make_from_use_case(cls, usecase, **overrides): args = {**cls.DEFAULTS_PER_USE_CASE[usecase], **overrides} return cls(**args) def __init__(self, x,y): self.x = x self.y = y def __str__(self): return "(%s, %s)" % (self.x, self.y) x = MyClass.make_from_use_case("use_case_1", x=5) print(x) | 7 | 9 |
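A condensed, runnable sketch of the second approach from the answer:

```python
class MyClass:
    DEFAULTS_PER_USE_CASE = {
        "use_case_1": {"x": 1, "y": 2},
        "use_case_2": {"x": 4, "y": 3},
    }

    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def make_from_use_case(cls, use_case, **overrides):
        # Defaults first, then caller overrides win in the dict merge.
        return cls(**{**cls.DEFAULTS_PER_USE_CASE[use_case], **overrides})

    def __str__(self):
        return "(%s, %s)" % (self.x, self.y)

print(MyClass.make_from_use_case("use_case_2", y=10))  # (4, 10)
```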
60,740,528 | 2020-3-18 | https://stackoverflow.com/questions/60740528/how-to-count-the-number-of-dashes-between-any-two-alphabetical-characters | If we have a string of alphabetical characters and some dashes, and we want to count the number of dashes between any two alphabetic characters in this string, what is the easiest way to do this? Example: Input: a--bc---d-k output: 2031 This means that there are 2 dashes between a and b, 0 dashes between b and c, 3 dashes between c and d and 1 dash between d and k. What is a good way to find this output list in Python? | Solution with regex: import re x = 'a--bc---d-k' results = [ len(m) for m in re.findall('(?<=[a-z])-*(?=[a-z])', x) ] print(results) print(''.join(str(r) for r in results)) output: [2, 0, 3, 1] 2031 Solution with brute force loop logic: x = 'a--bc---d-k' count = 0 results = [] for c in x: if c == '-': count += 1 else: results.append(count) count = 0 results = results[1:] # cut off first length print(results) output: [2, 0, 3, 1] | 11 | 9 |
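A third variant, not from the answer: split on the letters themselves and measure the gaps (the first and last segments fall outside any letter pair, so they are trimmed):

```python
import re

x = 'a--bc---d-k'

# Splitting on the letters leaves the dash runs (possibly empty strings)
# between them; the edge segments belong to no letter pair.
gaps = re.split(r'[a-z]', x)[1:-1]
results = [len(g) for g in gaps]

print(results)                           # [2, 0, 3, 1]
print(''.join(str(r) for r in results))  # 2031
```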
60,736,401 | 2020-3-18 | https://stackoverflow.com/questions/60736401/reticulate-doesnt-print-to-console-in-real-time | Using Python's print() (or to the best of my knowledge, any other console output generating) function in loop structures and running the code via reticulate in R, the output is only printed after the execution has finished. For example, take the following loop which goes to sleep for 1.5 seconds after each iteration; the run numbers are all printed in one go after the loop is through. The same applies when saving the Python code to a separate .py file and then running reticulate::py_run_file(). library(reticulate) py_run_string(" import time for i in range(5): print(str(i)) time.sleep(1.5) # sleep for 1.5 sec ") Does anyone know where this behavior comes from and, if possible, how to circumvent it? | Apparently, you need to force Python to flush the standard output at times. You can do this by adding sys.stdout.flush() to your code: library(reticulate) py_run_string(" import time import sys for i in range(5): print(str(i)) time.sleep(1.5) # sleep for 1.5 sec sys.stdout.flush() ") A similar case with nohup is described over here. | 9 | 5 |
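On the Python side, the same effect is available without a separate sys.stdout.flush() line, since print() takes a flush argument (sketch with a shortened sleep):

```python
import time

for i in range(5):
    print(i, flush=True)  # flush the line as soon as it is printed
    time.sleep(0.1)       # shortened from 1.5 s for illustration
```

Running the interpreter with python -u (or setting PYTHONUNBUFFERED=1) disables output buffering process-wide, which can help when the Python code cannot be modified.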
60,732,859 | 2020-3-18 | https://stackoverflow.com/questions/60732859/sudoku-puzzle-with-boxes-containing-square-numbers | Two days ago, I was given a sudoku problem that I tried to solve with Python 3. I've been informed that a solution does exist, but I'm not certain whether multiple solutions exist. The problem is as follows: A 9x9 grid of sudoku is completely empty. It does however contain colored boxes, and inside these boxes, the sum of the numbers has to be a square number. Other than that, normal sudoku rules apply. The issue here is not solving a sudoku puzzle, but rather generating a viable puzzle that satisfies the rules of the colored boxes. My strategy Using numpy arrays, I have divided the grid into 81 indices, which can be rearranged to a 9x9 grid. import numpy as np print(np.array([i for i in range(81)]).reshape((9, 9))) -> [[ 0 1 2 3 4 5 6 7 8] [ 9 10 11 12 13 14 15 16 17] [18 19 20 21 22 23 24 25 26] [27 28 29 30 31 32 33 34 35] [36 37 38 39 40 41 42 43 44] [45 46 47 48 49 50 51 52 53] [54 55 56 57 58 59 60 61 62] [63 64 65 66 67 68 69 70 71] [72 73 74 75 76 77 78 79 80]] Here is a list containing all the blocks of indices. boxes = [[44, 43, 42, 53],[46, 47, 38],[61, 60],[69, 70],[71, 62], [0, 9, 18],[1, 10, 11, 20],[2, 3, 12],[4, 13, 14],[5, 6], [7, 8],[17, 26, 35],[21, 22, 23],[15, 16, 24, 25, 34], [27, 36, 37],[19, 28, 29],[45, 54],[55, 56],[63, 64, 65], [72, 73, 74],[57, 66, 75 ],[58, 59, 67, 68],[76, 77],[78, 79, 80]] As you can see from the picture, or from the array above, the boxes are arranged into blocks of 2, 3, 4, or 5 (8 twos, 12 threes, 3 fours, 1 five). I've also noticed that a box can contain multiple numbers without breaking any rules of sudoku, but only two of any one number are possible. Given that information, the biggest possible square would be 36, as 9+9+8+7+6 = 39, and thus no sum of a block could ever reach 49. 
To find out if the sum of a list is a square number, I've made the following function: def isSquare(array): if np.sum(array) in [i**2 for i in range(1,7)]: return True else: return False To find out if a list contains the correct amount of duplicates, that is, no number appearing more than twice and at most one number appearing twice, I've made the following function: def twice(array): counter = [0]*9 for i in range(len(array)): counter[array[i]-1]+=1 if 3 in counter: return False if counter.count(2)>1: return False return True Now, given the digits 1-9, there are a limited number of solutions for a list, if the list has to sum to a square number. Using itertools, I could find the solutions, dividing them into an array, where index 0 contains blocks of twos, index 1 contains blocks of threes, and so on. from itertools import combinations_with_replacement solutions = [] for k in range(2, 6): solutions.append([list(i) for i in combinations_with_replacement(np.arange(1, 10), k) if isSquare(i) and twice(i)]) However, any permutation of these lists is a viable solution to the "square problem". Using itertools again, the total amount of possible boxes (without the sudoku rules) sums to 8782. from itertools import permutations def find_squares(): solutions = [] for k in range(2, 6): solutions.append([list(i) for i in combinations_with_replacement(np.arange(1, 10), k) if isSquare(i) and twice(i)]) s = [] for item in solutions: d=[] for arr in item: for k in permutations(arr): d.append(list(k)) s.append(d) return s # 4-dimensional array, max 2 of each solutions = find_squares() total = sum([len(i) for i in solutions]) print(total) -> 8782 This should be enough to implement functionality that decides if a board is legal, that is, rows, columns and boxes contain only one each of the digits 1-9. 
My implementation: def legal_row(arr): for k in range(len(arr)): values = [] for i in range(len(arr[k])): if (arr[k][i] != 0): if (arr[k][i] in values): return False else: values.append(arr[k][i]) return True def legal_column(arr): return legal_row(np.array(arr, dtype=int).T) def legal_box(arr): return legal_row(arr.reshape(3,3,3,3).swapaxes(1,2).reshape(9,9)) def legal(arr): return (legal_row(arr) and legal_column(arr) and legal_box(arr)) Difficulties with runtime A straightforward approach would be to check every single combination of every single block. I have done this, and produced several viable problems; however, the complexity of my algorithm makes this take far too long. Instead, I tried to randomize some of the properties: the order of the blocks and the order of the solutions. Using this, I limited the number of tries, and checked if a solution was viable: attempts = 1000 correct = 0 possibleBoards = [] for i in range(1, attempts+1): board = np.zeros((9, 9), dtype=int) score = 0 shapes = boxes np.random.shuffle(shapes) for block in shapes: new_board = board new_1d = board.reshape(81) all_sols = solutions[len(block)-2] np.random.shuffle(all_sols) for sols in all_sols: #print(len(sols)) new_1d[block] = sols new_board = new_1d.reshape((9, 9)) if legal(new_board): board = new_board score+=1 break confirm = board.reshape(81) #solve(board) # Using my solve function, not important here # Note that without it, correct would always be 0 as the middle of the puzzle has no boxes confirm = board.reshape(81) if (i%1000==0 or i==1): print("Attempt",i) if 0 not in confirm: correct+=1 print(correct) possibleBoards.append(board) In the code above, the variable score refers to how many blocks the algorithm could find during an attempt. The variable correct refers to how many of the generated sudoku boards could be completed. 
If you are interested in how well it did in 700 attempts, here are some stats (this is a histogram, the x-axis represents the scores, and the y-axis represents how many of each score was present during these 700 attempts). What I need help with I am struggling to find a feasible way to find a solution to this problem that can actually run in a finite amount of time. I would greatly appreciate any tips regarding making some of my code faster or better, any ideas of a different approach to the problem, any solutions to the problem, or some useful tips about Python/Numpy relevant to this problem. | This is where I would use an SMT solver. They are a lot more powerful than people give credit for. If the best algorithm you can think of is essentially brute force, try a solver instead. Simply listing your constraints and running it gives your unique answer in a couple seconds: 278195436 695743128 134628975 549812763 386457291 721369854 913286547 862574319 457931682 The code used (and reference image for coordinates): import z3 letters = "ABCDEFGHI" numbers = "123456789" boxes = """ A1 A2 A3 B1 B2 C2 C3 C1 D1 D2 E1 E2 F2 F1 G1 H1 I1 G2 H2 G3 H3 H4 I2 I3 I4 B3 B4 C4 D3 E3 F3 A4 A5 B5 C5 B6 C6 G5 H5 I5 I6 A6 A7 B7 C7 D7 D8 D9 E7 E8 F7 F8 G7 H7 I7 I8 A8 B8 C8 G8 H8 A9 B9 C9 E9 F9 G9 H9 I9 """ positions = [letter + number for letter in letters for number in numbers] S = {pos: z3.Int(pos) for pos in positions} solver = z3.Solver() # Every symbol must be a number from 1-9. for symbol in S.values(): solver.add(z3.Or([symbol == i for i in range(1, 10)])) # Every row value must be unique. for row in numbers: solver.add(z3.Distinct([S[col + row] for col in letters])) # Every column value must be unique. for col in letters: solver.add(z3.Distinct([S[col + row] for row in numbers])) # Every block must contain every value. for i in range(3): for j in range(3): solver.add(z3.Distinct([S[letters[m + i * 3] + numbers[n + j * 3]] for m in range(3) for n in range(3)])) # Colored boxes. 
for box in boxes.split("\n"): box = box.strip() if not box: continue boxsum = z3.Sum([S[pos] for pos in box.split()]) solver.add(z3.Or([boxsum == 1, boxsum == 4, boxsum == 9, boxsum == 16, boxsum == 25, boxsum == 36])) # Print solutions. while solver.check() == z3.sat: model = solver.model() for row in numbers: print("".join(model.evaluate(S[col+row]).as_string() for col in letters)) print() # Prevent next solution from being equivalent. solver.add(z3.Or([S[col+row] != model.evaluate(S[col+row]) for col in letters for row in numbers])) | 9 | 6 |
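The question's square-sum and duplicate filters can be sanity-checked in isolation; the helpers below restate the logic of isSquare and twice, they are not the OP's exact code:

```python
from itertools import combinations_with_replacement

SQUARES = {i * i for i in range(1, 7)}  # 1, 4, 9, 16, 25, 36

def twice(combo):
    # Allow no digit more than twice, and at most one digit appearing twice.
    counts = [combo.count(v) for v in set(combo)]
    return max(counts) <= 2 and counts.count(2) <= 1

solutions = [
    c
    for k in range(2, 6)
    for c in combinations_with_replacement(range(1, 10), k)
    if sum(c) in SQUARES and twice(c)
]

print(len(solutions))  # number of base combinations before permutation
```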
60,732,924 | 2020-3-18 | https://stackoverflow.com/questions/60732924/how-to-specify-that-a-python-object-must-be-two-types-at-once | I'm using the Python typing module throughout my project, and I was wondering if there was a way to specify that a given object must be of two different types at once. This most obviously arises when you have specified two protocols, and expect a single object to fulfil both: class ReturnsNumbers(Protocol): def get_number(self) -> int: pass class ReturnsLetters(Protocol): def get_letter(self) -> str: pass def get_number_and_letter(x: <what should this be?>) -> None: print(x.get_number(), x.get_letter()) Thanks in advance! | Create a new type that inherits from all types you wish to combine, as well as Protocol. class ReturnsNumbersAndLetters(ReturnsNumbers, ReturnsLetters, Protocol): pass def get_number_and_letter(x: ReturnsNumbersAndLetters) -> None: print(x.get_number(), x.get_letter()) | 15 | 13 |
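A runnable sketch of the pattern; the Impl class is invented for illustration, and mypy accepts it structurally because it provides both methods:

```python
from typing import Protocol

class ReturnsNumbers(Protocol):
    def get_number(self) -> int: ...

class ReturnsLetters(Protocol):
    def get_letter(self) -> str: ...

# The combined protocol: a value must satisfy both parents to match it.
class ReturnsNumbersAndLetters(ReturnsNumbers, ReturnsLetters, Protocol):
    pass

class Impl:
    def get_number(self) -> int:
        return 42

    def get_letter(self) -> str:
        return "x"

def get_number_and_letter(x: ReturnsNumbersAndLetters) -> None:
    print(x.get_number(), x.get_letter())

get_number_and_letter(Impl())  # prints: 42 x
```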
60,669,969 | 2020-3-13 | https://stackoverflow.com/questions/60669969/why-is-mypy-complaining-about-list-comprehension-when-it-cant-be-annotated | Why does Mypy complain that it requires a type annotation for a list comprehension variable, when it is impossible to annotate such a variable using MyPy? Specifically, how can I resolve the following error: from enum import EnumMeta def spam( y: EnumMeta ): return [[x.value] for x in y] 🠜 Mypy: Need type annotation for 'x' cast doesn't work: return [[cast(Enum, x).value] for x in y] 🠜 Mypy: Need type annotation for 'x' Even though Mypy doesn't support annotations (x:Enum) in such a case I see the usage of the variable can be annotated using cast (see this post). However, cast(Enum, x) doesn't stop Mypy complaining that the variable isn't annotated in the first place. #type: doesn't work: return [[x.value] for x in y] # type: Enum 🠜 Mypy: Misplaced type annotation I also see that a for loop variable can be annotated using a comment, # type: (see this post). However, # type: Enum doesn't work for list comprehension's for. | In a list comprehension, the iterable must be cast instead of the elements. from typing import Iterable, cast from enum import EnumMeta, Enum def spam(y: EnumMeta): return [[x.value] for x in cast(Iterable[Enum], y)] This allows mypy to infer the type of x as well. In addition, at runtime it performs only 1 cast instead of n casts. If spam can digest any iterable that produces enums, it is easier to type hint this directly. from typing import Iterable from enum import Enum def spam(y: Iterable[Enum]): return [[x.value] for x in y] | 16 | 25 |
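cast() is a no-op at runtime, so the accepted fix only changes what mypy sees; a runnable check with an invented enum:

```python
from enum import Enum
from typing import Iterable, cast

class Color(Enum):
    RED = 1
    GREEN = 2

def spam(y):
    # cast() tells mypy that iterating y yields Enum members; at runtime
    # it simply returns y unchanged.
    return [[x.value] for x in cast(Iterable[Enum], y)]

print(spam(Color))  # [[1], [2]]
```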
60,645,256 | 2020-3-11 | https://stackoverflow.com/questions/60645256/how-do-you-get-batches-of-rows-from-spark-using-pyspark | I have a Spark RDD of over 6 billion rows of data that I want to use to train a deep learning model, using train_on_batch. I can't fit all the rows into memory so I would like to get 10K or so at a time to batch into chunks of 64 or 128 (depending on model size). I am currently using rdd.sample() but I don't think that guarantees I will get all rows. Is there a better method to partition the data to make it more manageable so that I can write a generator function for getting batches? My code is below: data_df = spark.read.parquet(PARQUET_FILE) print(f'RDD Count: {data_df.count()}') # 6B+ data_sample = data_df.sample(True, 0.0000015).take(6400) sample_df = data_sample.toPandas() def get_batch(): for row in sample_df.itertuples(): # TODO: put together a batch size of BATCH_SIZE yield row for i in range(10): print(next(get_batch())) | I don't believe Spark lets you offset or paginate your data. 
But you can add an index and then paginate over that. First: from pyspark.sql.functions import lit data_df = spark.read.parquet(PARQUET_FILE) count = data_df.count() chunk_size = 10000 # Just adding a column for the ids df_new_schema = data_df.withColumn('pres_id', lit(1)) # Adding the ids to the rdd rdd_with_index = data_df.rdd.zipWithIndex().map(lambda row_index: list(row_index[0]) + [row_index[1] + 1]) # Creating a dataframe with index df_with_index = spark.createDataFrame(rdd_with_index,schema=df_new_schema.schema) # Iterating into the chunks (page_num already advances in steps of chunk_size) for page_num in range(0, count+1, chunk_size): initial_page = page_num final_page = initial_page + chunk_size where_query = ('pres_id > {0} and pres_id <= {1}').format(initial_page,final_page) chunk_df = df_with_index.where(where_query).toPandas() train_on_batch(chunk_df) # <== Your function here This is not optimal; it will badly leverage Spark because of the use of a pandas dataframe, but it will solve your problem. Don't forget to drop the id if this affects your function. | 11 | 3 |
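The id-window arithmetic used by the answer's where_query can be sanity-checked without Spark (the helper name is invented):

```python
def chunk_ranges(count, chunk_size):
    # Yield (low, high] windows matching a filter of the form
    # 'pres_id > low and pres_id <= high' over 1-based ids.
    for start in range(0, count, chunk_size):
        yield start, min(start + chunk_size, count)

windows = list(chunk_ranges(25, 10))
print(windows)  # [(0, 10), (10, 20), (20, 25)]
```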
60,699,002 | 2020-3-16 | https://stackoverflow.com/questions/60699002/how-can-i-build-manually-c-extension-with-mingw-w64-python-and-pybind11 | My final goal is to compile Python C++ extension from my C++ code. Currently to get started I am following a simple example from the first steps of pybind11 documentation. My working environment is a Windows 7 Professional 64 bit, mingw-w64 (x86_64-8.1.0-posix-seh-rt_v6-rev0) and Anaconda3 with Python 3.7.4 64 bit. I have 2 files. The first one is a C++ file -- example.cpp #include <pybind11/pybind11.h> int add(int i, int j) { return i + j; } PYBIND11_MODULE(example, m) { m.doc() = "pybind11 example plugin"; // optional module docstring m.def("add", &add, "A function which adds two numbers"); } I compile the C++ file with the following command: C:/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/g++.exe -shared -std=c++11 -DMS_WIN64 -fPIC -ID:\Users\ADAS\anaconda3\Include -ID:\Users\ADAS\anaconda3\Library\include -ID:\Users\ADAS\anaconda3\pkgs\python-3.7.4-h5263a28_0\include -Wall -LD:\Users\ADAS\anaconda3\Lib -LD:\Users\ADAS\anaconda3\pkgs\python-3.7.4-h5263a28_0\libs example.cpp -o example.dll -lPython37 The compilation result is successful and I am getting example.dll file. At the next step I run the following Python code -- example.py: import example def main(): i, j = (1, 2) res = example.add(i, j) print("%d + %d = %d" % (i, j, res)) if __name__ == '__main__': main() And here I have got an issue. It seems the import example line does not give me any warnings or errors but the line res = example.add(i, j) gives me an error: AttributeError: module 'example' has no attribute 'add' Under Ubuntu 18.04 I successfully compiled and run in Python the example above but in my office I have only Windows 7. Questions: what is wrong in either my setup or command line? Is it possible to fix this problem without changing current C++ compiler (mingw-w64 version 8.1) under Windows? | It is unbelievable! 
The problem was just the file extension of the compiled file. As soon as I changed .dll to .pyd, the Python example (example.py) is running without any issue! So the new command line is: C:/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/g++.exe -shared -std=c++11 -DMS_WIN64 -fPIC -ID:\Users\ADAS\anaconda3\Include -ID:\Users\ADAS\anaconda3\Library\include -ID:\Users\ADAS\anaconda3\pkgs\python-3.7.4-h5263a28_0\include -Wall -LD:\Users\ADAS\anaconda3\Lib -LD:\Users\ADAS\anaconda3\pkgs\python-3.7.4-h5263a28_0\libs example.cpp -o example.pyd -lPython37 Because I did some experiments with the command line arguments, I am going to check again all the compiler arguments to make sure it gives the successful result. I will let you know, if some changes are still required. Update1: According to Python3 default settings, the full extension of the compiled C++ file under Windows has to be .cp37-win_amd64.pyd. We can get the extension by terminal command: python -c "from distutils import sysconfig; print(sysconfig.get_config_var('EXT_SUFFIX'))" That is an equivalent of the python3-config --extension-suffix from pybind11 documentation. The python3-config script is not implemented in Windows environment (at least in Anaconda3 distribution). Update2: Under Linux OS (tested in Ubuntu 20.04), the command line is like c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) example.cpp -o example$(python3-config --extension-suffix) | 6 | 9 |
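On newer Pythons, where distutils is deprecated, the same value is available from the stdlib sysconfig module directly:

```python
import sysconfig

# e.g. '.cp37-win_amd64.pyd' on Windows or '.cpython-37m-x86_64-linux-gnu.so'
# on Linux, depending on the interpreter.
suffix = sysconfig.get_config_var("EXT_SUFFIX")
print(suffix)
```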
60,669,256 | 2020-3-13 | https://stackoverflow.com/questions/60669256/how-do-you-create-a-logit-normal-distribution-in-python | Following this post, I tried to create a logit-normal distribution by creating the LogitNormal class: import numpy as np import matplotlib.pyplot as plt from scipy.special import logit from scipy.stats import norm, rv_continuous class LogitNormal(rv_continuous): def _pdf(self, x, **kwargs): return norm.pdf(logit(x), **kwargs)/(x*(1-x)) class OtherLogitNormal: def pdf(self, x, **kwargs): return norm.pdf(logit(x), **kwargs)/(x*(1-x)) fig, ax = plt.subplots() values = np.linspace(10e-10, 1-10e-10, 1000) sigma, mu = 1.78, 0 ax.plot( values, LogitNormal().pdf(values, loc=mu, scale=sigma), label='subclassed' ) ax.plot( values, OtherLogitNormal().pdf(values, loc=mu, scale=sigma), label='not subclassed' ) ax.legend() fig.show() However, the LogitNormal class does not produce the desired results. When I don't subclass rv_continuous it works. Why is that? I need the subclassing to work because I also need the other methods that come with it like rvs. Btw, the only reason I am creating my own logit-normal distribution in Python is because the only implementations of that distribution that I could find were from the PyMC3 package and from the TensorFlow package, both of which are pretty heavy / overkill if you only need them for that one function. I already tried PyMC3, but apparently it doesn't do well with scipy I think, it always crashed for me. But that's a whole different story. | Foreword I came across this problem this week and the only relevant issue I have found about it is this post. I have almost the same requirement as the OP: having a random variable for the Logit Normal distribution. But I also need: To be able to perform statistical tests as well; While being compliant with the scipy random variable interface. 
As @Jacques Gaudin pointed out, the interface for rv_continuous (see distribution architecture for details) does not ensure follow-up for loc and scale parameters when inheriting from this class. And this is somehow misleading and unfortunate. Implementing the __init__ method of course allows creating the missing binding, but the trade-off is: it breaks the pattern scipy is currently using to implement random variables (see an example of implementation for lognormal). So, I took time to dig into the scipy code and I have created an MCVE for this distribution. Although it is not totally complete (it mainly misses moments overrides) it fits the bill for both the OP's and my purposes while having satisfying accuracy and performance. MCVE An interface-compliant implementation of this random variable could be: class logitnorm_gen(stats.rv_continuous): def _argcheck(self, m, s): return (s > 0.) & (m > -np.inf) def _pdf(self, x, m, s): return stats.norm(loc=m, scale=s).pdf(special.logit(x))/(x*(1-x)) def _cdf(self, x, m, s): return stats.norm(loc=m, scale=s).cdf(special.logit(x)) def _rvs(self, m, s, size=None, random_state=None): return special.expit(m + s*random_state.standard_normal(size)) def fit(self, data, **kwargs): return stats.norm.fit(special.logit(data), **kwargs) logitnorm = logitnorm_gen(a=0.0, b=1.0, name="logitnorm") This implementation unlocks most of the scipy random variable potential. N = 1000 law = logitnorm(0.24, 1.31) # Defining a RV sample = law.rvs(size=N) # Sampling from RV params = logitnorm.fit(sample) # Infer parameters w/ MLE check = stats.kstest(sample, law.cdf) # Hypothesis testing bins = np.arange(0.0, 1.1, 0.1) # Bin boundaries expected = np.diff(law.cdf(bins)) # Expected bin counts As it relies on scipy normal distribution we may assume underlying functions have the same accuracy and performance than normal random variable object. 
But it might indeed be subject to float arithmetic inaccuracy, especially when dealing with highly skewed distributions at the support boundary. Tests To check how it performs, we draw some distributions of interest and check them. Let's create some fixtures: def generate_fixtures( locs=[-2.0, -1.0, 0.0, 0.5, 1.0, 2.0], scales=[0.32, 0.56, 1.00, 1.78, 3.16], sizes=[100, 1000, 10000], seeds=[789, 123456, 999999] ): for (loc, scale, size, seed) in itertools.product(locs, scales, sizes, seeds): yield {"parameters": {"loc": loc, "scale": scale}, "size": size, "random_state": seed} And perform checks on related distributions and samples: eps = 1e-8 x = np.linspace(0. + eps, 1. - eps, 10000) for fixture in generate_fixtures(): # Reference: parameters = fixture.pop("parameters") normal = stats.norm(**parameters) sample = special.expit(normal.rvs(**fixture)) # Logit Normal Law: law = logitnorm(m=parameters["loc"], s=parameters["scale"]) check = law.rvs(**fixture) # Fit: p = logitnorm.fit(sample) trial = logitnorm(*p) resample = trial.rvs(**fixture) # Hypothesis Tests: ks = stats.kstest(check, trial.cdf) bins = np.histogram(resample)[1] obs = np.diff(trial.cdf(bins))*fixture["size"] ref = np.diff(law.cdf(bins))*fixture["size"] chi2 = stats.chisquare(obs, ref, ddof=2) Some adjustments with n=1000, seed=789 (this sample is quite normal) are shown below: | 6 | 6 |
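A self-contained usage sketch of the answer's class (imports added; the fit override is omitted and the parameter values are arbitrary):

```python
import numpy as np
from scipy import special, stats

class logitnorm_gen(stats.rv_continuous):
    def _argcheck(self, m, s):
        # Scale must be positive; location may be any finite value.
        return (s > 0.0) & (m > -np.inf)

    def _pdf(self, x, m, s):
        return stats.norm(loc=m, scale=s).pdf(special.logit(x)) / (x * (1 - x))

    def _cdf(self, x, m, s):
        return stats.norm(loc=m, scale=s).cdf(special.logit(x))

    def _rvs(self, m, s, size=None, random_state=None):
        return special.expit(m + s * random_state.standard_normal(size))

logitnorm = logitnorm_gen(a=0.0, b=1.0, name="logitnorm")

sample = logitnorm.rvs(0.0, 1.0, size=1000, random_state=123)
print(0.0 < sample.min() and sample.max() < 1.0)  # support is (0, 1)
print(float(logitnorm.cdf(0.5, 0.0, 1.0)))        # 0.5, since logit(0.5) == 0
```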
60,667,001 | 2020-3-13 | https://stackoverflow.com/questions/60667001/how-to-find-corner-x-y-coordinate-points-on-image-python-opencv | This is a truck container image but from the top view. First, I need to find the rectangle and know each corner position. The goal is to know the dimension of the container. | Here's a simple approach: Obtain binary image. Load image, convert to grayscale, Gaussian blur, then Otsu's threshold. Find distorted bounding rectangle contour and corners. We find contours then filter using contour area to isolate the rectangular contour. Next we find the distorted bounding rectangle with cv2.minAreaRect() and the corners with cv2.boxPoints() Detected bounding rectangle -> Mask -> Detected corners Corner points (188, 351) (47, 348) (194, 32) (53, 29) Code import cv2 import numpy as np # Load image, grayscale, blur, Otsu's threshold image = cv2.imread('1.png') mask = np.zeros(image.shape[:2], dtype=np.uint8) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (5,5), 0) thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] # Find distorted bounding rect cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area > 5000: # Find distorted bounding rect rect = cv2.minAreaRect(c) corners = cv2.boxPoints(rect) corners = np.int0(corners) cv2.fillPoly(mask, [corners], (255,255,255)) # Draw corner points corners = corners.tolist() print(corners) for corner in corners: x, y = corner cv2.circle(image, (x, y), 5, (36,255,12), -1) cv2.imshow('thresh', thresh) cv2.imshow('image', image) cv2.imshow('mask', mask) cv2.waitKey() | 6 | 7 |
60,700,062 | 2020-3-16 | https://stackoverflow.com/questions/60700062/runtimeerror-occurs-in-pytorch-backward-function | I am trying to calculate the grad of a variable in PyTorch. However, there was a RuntimeError which tells me that the shape of output and grad must be the same. However, in my case, the shape of output and grad cannot be the same. Here is my code to reproduce: import numpy as np import torch from torch.autograd import Variable as V ne = 3 m, n = 79, 164 G = np.random.rand(m, n).astype(np.float64) w = np.random.rand(n, n).astype(np.float64) z = -np.random.rand(n).astype(np.float64) G = V(torch.from_numpy(G)) w = V(torch.from_numpy(w)) z = V(torch.from_numpy(z), requires_grad=True) e, v = torch.symeig(torch.diag(2 * z - torch.sum(w, dim=1)) + w, eigenvectors=True, upper=False) ssev = torch.sum(torch.pow(e[-ne:] * v[:, -ne:], 2), dim=1) out = torch.sum(torch.matmul(G, ssev.reshape((n, 1)))) out.backward(z) print(z.grad) The error message is: RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([164]) and output[0] has a shape of torch.Size([]) Similar calculation is allowed in TensorFlow and I can successfully get the gradient I want: import numpy as np import tensorflow as tf m, n = 79, 164 G = np.random.rand(m, n).astype(np.float64) w = np.random.rand(n, n).astype(np.float64) z = -np.random.rand(n).astype(np.float64) def tf_function(z, G, w, ne=3): e, v = tf.linalg.eigh(tf.linalg.diag(2 * z - tf.reduce_sum(w, 1)) + w) ssev = tf.reduce_sum(tf.square(e[-ne:] * v[:, -ne:]), 1) return tf.reduce_sum(tf.matmul(G, tf.expand_dims(ssev, 1))) z, G, w = [tf.convert_to_tensor(_, dtype=tf.float64) for _ in (z, G, w)] z = tf.Variable(z) with tf.GradientTape() as g: g.watch(z) out = tf_function(z, G, w) print(g.gradient(out, z).numpy()) My tensorflow version is 2.0 and my PyTorch version is 1.14.0. I am using Python3.6.9. 
In my opinion, calculating the gradients when the output and the variables have different shapes is very reasonable and I don't think I made any mistake. Can anyone help me with this problem? I really appreciate it! | First of all, you don't need to use numpy and then convert to Variable (which is deprecated by the way), you can just use G = torch.rand(m, n) etc. Second, when you write out.backward(z), you are passing z as the gradient of out, i.e. out.backward(gradient=z), probably due to the misconception that "out.backward(z) computes the gradient of z, i.e. dout/dz". Instead, this argument is meant to be gradient = d[f(out)]/dout for some function f (e.g. a loss function) and it's the tensor used to compute the vector-Jacobian product dout/dz * df/dout. Therefore, the reason you got the error is that your out (and its gradient df/dout) is a scalar (zero-dimensional tensor) and z is a tensor of size n, leading to a mismatch in shapes. To fix the problem, as you have already figured out by yourself, just replace out.backward(z) with out.backward(), which is equivalent to out.backward(gradient=torch.tensor(1.)), since in your case out is a scalar and f(out) = out, so d[f(out)]/dout = d(out)/d(out) = tensor(1.). If your out was a non-scalar tensor, then out.backward() would not work and instead you would have to use out.backward(torch.ones(out.shape)) (again assuming that f(out) = out). In any case, if you need to pass gradient to out.backward(), make sure that it has the same shape as out. | 6 | 5 |
60,716,016 | 2020-3-17 | https://stackoverflow.com/questions/60716016/how-to-get-the-latest-release-version-in-github-only-use-python-requests | Recently, I made an app and uploaded it to my GitHub releases page. I want to write a function that checks for updates by fetching the latest version (from the releases page). I tried using the requests module to crawl my releases page for the latest version. Here is a minimal example of my code: import requests from lxml import etree response = requests.get("https://github.com/v2ray/v2ray-core/releases") html = etree.HTML(response.text) Version = html.xpath("/html/body/div[4]/div/main/div[3]/div/div[2]/div[1]/div/div[2]/div[1]/div/div/a") print(Version) I think the XPath is right, because I used Chrome -> Copy -> Copy XPath. But it returns []; it can't find the latest version. | A direct way is to use the GitHub API; it is easy to do and doesn't need XPath. To get the latest release version of a repository, the URL should be: https://api.github.com/repos/{owner}/{repo}/releases/latest Here is an easy example only using the requests module: import requests response = requests.get("https://api.github.com/repos/v2ray/v2ray-core/releases/latest") print(response.json()["name"]) | 14 | 33 |
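For reference, the JSON returned by that endpoint also carries a tag_name field (the release's git tag), which is often more dependable than name since the display title can be left blank. A minimal offline sketch; the payload below is a made-up illustration of the response shape:

```python
import json

# Made-up excerpt of the JSON served by
# https://api.github.com/repos/{owner}/{repo}/releases/latest
payload = '{"name": "v4.22.1", "tag_name": "v4.22.1", "prerelease": false}'

release = json.loads(payload)
# "tag_name" is the git tag of the release; "name" is its display title.
print(release["tag_name"])
```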
60,685,749 | 2020-3-14 | https://stackoverflow.com/questions/60685749/python-plotly-how-to-add-an-image-to-a-3d-scatter-plot | I am trying to visualize multiple 2d trajectories (x, y) in a 3D scatter plot where the z axis is time. import numpy as np import pandas as pd import plotly.express as px # Sample data: 3 trajectories t = np.linspace(0, 10, 200) df = pd.concat([pd.DataFrame({'x': 900 * (1 + np.cos(t + 5 * i)), 'y': 400 * (1 + np.sin(t)), 't': t, 'id': f'id000{i}'}) for i in [0, 1, 2]]) # 3d scatter plot fig = px.scatter_3d(df, x='x', y='y', z='t', color='id', ) fig.update_traces(marker=dict(size=2)) fig.show() I have a .png file of a map with size: 2000x1000. The (x, y) coordinates of the trajectories correspond to the pixel locations of the map. I would like to see the image of the map on the "floor" of the 3d scatter plot. I have tried to add the image with this code: from scipy import misc img = misc.imread('images/map_bg.png') fig2 = px.imshow(img) fig.add_trace(fig2.data[0]) fig.show() But the result is an independent image in the background as a separate plot: And I want the image on the "floor" of the scatter plot, moving with the scatter plot if I rotate/zoom. Here is a mock: Additional note: There can be any number of trajectories and for my application, it is important that each trajectory is automatically plotted with a different color. I am using plotly.express, but I can use other plotly packages, as long as these requirements are met. | I've run into the same situation where I wanted to use an image as a bottom surface in a 3D scatterplot. With help from two posts here and here, I've been able to create the following 3d scatter plot: I've used plotly go in my example, so the result is a little bit different than the code from the OP.
import numpy as np import pandas as pd from PIL import Image import plotly.graph_objects as go from scipy import misc im = misc.face() im_x, im_y, im_layers = im.shape eight_bit_img = Image.fromarray(im).convert('P', palette='WEB', dither=None) dum_img = Image.fromarray(np.ones((3,3,3), dtype='uint8')).convert('P', palette='WEB') idx_to_color = np.array(dum_img.getpalette()).reshape((-1, 3)) colorscale=[[i/255.0, "rgb({}, {}, {})".format(*rgb)] for i, rgb in enumerate(idx_to_color)] # Sample data: 3 trajectories t = np.linspace(0, 10, 200) df = pd.concat([pd.DataFrame({'x': 400 * (1 + np.cos(t + 5 * i)), 'y': 400 * (1 + np.sin(t)), 't': t, 'id': f'id000{i}'}) for i in [0, 1, 2]]) # im = im.swapaxes(0, 1)[:, ::-1] colors=df['t'].to_list() # # 3d scatter plot x = np.linspace(0,im_x, im_x) y = np.linspace(0, im_y, im_y) z = np.zeros(im.shape[:2]) fig = go.Figure() fig.add_trace(go.Scatter3d( x=df['x'], y=df['y'], z=df['t'], marker=dict( color=colors, size=4, ) )) fig.add_trace(go.Surface(x=x, y=y, z=z, surfacecolor=eight_bit_img, cmin=0, cmax=255, colorscale=colorscale, showscale=False, lighting_diffuse=1, lighting_ambient=1, lighting_fresnel=1, lighting_roughness=1, lighting_specular=0.5, )) fig.update_layout( title="My 3D scatter plot", width=800, height=800, scene=dict(xaxis_visible=True, yaxis_visible=True, zaxis_visible=True, xaxis_title="X", yaxis_title="Y", zaxis_title="Z" , )) fig.show() | 7 | 4 |
60,699,202 | 2020-3-16 | https://stackoverflow.com/questions/60699202/python-frozen-dataclass-allow-changing-of-attribute-via-method | Suppose I have a dataclass: @dataclass(frozen=True) class Foo: id: str name: str I want this to be immutable (hence the frozen=True), such that foo.id = bar and foo.name = baz fail. But, I want to be able to strip the id, like so: foo = Foo(id=10, name="spam") foo.strip_id() foo -> Foo(id=None, name="spam") I have tried a few things, overriding setattr, but nothing worked. Is there an elegant solution to this? (I know I could write a method that returns a new frozen instance that is identical except that the id has been stripped, but that seems a bit hacky, and it would require me to do foo = foo.strip_id(), since foo.strip_id() would not actually change foo) Edit: Although some commenters seem to disagree, I think there is a legitimate distinction between 'fully mutable, do what you want with it', and 'immutable, except in this particular, tightly controlled way' | Well, you can do it by modifying the attribute using object.__setattr__(...)[1], but why??? Asking specifically for immutable and then making it mutable is... indecisive. But if you must: from dataclasses import dataclass @dataclass(frozen=True) class Foo: id: str name: str def strip_id(self): object.__setattr__(self, 'id', None) foo=Foo(10, 'bar') >>> foo Foo(id=10, name='bar') >>> foo.strip_id() >>> foo Foo(id=None, name='bar') Any way of doing this is probably going to seem hacky... because it requires doing things that are fundamentally the opposite of the design. If you're using this as a signal to other programmers that they should not modify the values, the way that is normally done in Python is by prefixing the variable name with a single underscore.
If you want to do that, while also making the values accessible, Python has a builtin called property, where (from the documentation) "typical use is to define a managed attribute": from dataclasses import dataclass @dataclass class Foo: _name: str = None @property def name(self): return self._name @name.setter def name(self, value): self._name = value @name.deleter def name(self): self._name = None Then you can use it like this: >>> f=Foo() >>> f.name = "bar" >>> f.name 'bar' >>> f._name 'bar' >>> del f.name >>> f.name >>> f._name The decorated methods hide the actual value of _name behind name to control how the user interacts with that value. You can use this to apply transformation rules or validation checks to data before it is stored or returned. This doesn't quite accomplish the same thing as using @dataclass(frozen=True), and if you try declaring it as frozen, you'll get an error. Mixing frozen dataclasses with the property decorator is not straightforward and I have not seen a satisfying solution that is concise and intuitive. @Arne posted this answer, and I found this thread on GitHub, but neither approach is very inspiring; if I came across such things in code that I had to maintain, I would not be very happy (but I would be confused, and probably pretty irritated). 1: Modified as per the answer by @Arne, who observed that the internal use of a dictionary as the data container is not guaranteed. | 10 | 10 |
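For completeness: the "return a new frozen instance" approach the question dismisses as hacky has first-class stdlib support via dataclasses.replace, which preserves the frozen semantics. A minimal sketch (field names mirror the question):

```python
import dataclasses
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Foo:
    id: Optional[int]
    name: str

    def strip_id(self) -> "Foo":
        # Builds and returns a new frozen instance; self is untouched.
        return dataclasses.replace(self, id=None)


foo = Foo(id=10, name="spam")
stripped = foo.strip_id()
print(stripped)  # Foo(id=None, name='spam')
print(foo)       # Foo(id=10, name='spam')
```

The caller still has to write foo = foo.strip_id(), but the immutability guarantee stays intact.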
60,725,232 | 2020-3-17 | https://stackoverflow.com/questions/60725232/what-files-directories-should-i-add-to-gitignore-when-using-poetry-the-python | I'm using a very new Python package manager called Poetry. It creates several files/directories upon creating a new project (environment), but I'm not sure which ones I should add to .gitignore for best practice. Say I create a new poetry project by doing this: $ poetry new foo_project $ cd foo_project $ poetry add numpy $ ls There are: tests (directory) foo_project (also a directory) pyproject.toml (a file that specifies installed packages) poetry.lock (a lock file of installed packages) README.rst (I don't know why README is created but it just shows up.) I usually add tests/, foo_project/, poetry.lock and README.rst because they seem to be dependent on the machine the project was created on. Also, I seem to be able to reproduce the environment only with pyproject.toml, so that's another reason I ignored all other files/directories. However, it's just my hunch and unfortunately, I can't find any official guide in the documentation on what I really should add to .gitignore. It just bugs me that I don't know for sure what I'm doing. Which ones should I add to my .gitignore? | I also moved to poetry quite recently. I would say you should not add any of tests/, foo_project/, poetry.lock or README.rst to your .gitignore. In other words, those files and folders should be in version control. My reasons are as follows: tests/ - your tests should not be machine dependent (unless this is a known limitation of your package) and providing the tests for others is how they test a) the installation has worked and b) any changes they make don't break past features, so a pull request becomes more robust. foo_project/ - this is where your python module goes! All your .py files should be inside this folder if you want poetry to be able to build and publish your package.
poetry.lock - see https://python-poetry.org/docs/basic-usage/ where it says: When Poetry has finished installing, it writes all of the packages and the exact versions of them that it downloaded to the poetry.lock file, locking the project to those specific versions. You should commit the poetry.lock file to your project repo so that all people working on the project are locked to the same versions of dependencies (more below). README.rst - although this one is perhaps more a personal thing, this is the file that becomes your package readme if you use poetry for publishing your package, e.g. to PyPi. Without it your package will have an empty readme. I have two readmes: one .md (for GitHub) and one .rst (for PyPi). I use the GitHub readme for developers/users and the PyPi one for pure users. | 29 | 25 |
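What typically does belong in a Poetry project's .gitignore is the local environment and build output rather than the files discussed above; an illustrative (not exhaustive) sketch:

```text
__pycache__/
*.py[cod]
.venv/          # only if the virtualenv is created in-project
dist/           # build artifacts from `poetry build`
.pytest_cache/
```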
60,707,607 | 2020-3-16 | https://stackoverflow.com/questions/60707607/weird-mro-result-when-inheriting-directly-from-typing-namedtuple | I am confused why FooBar.__mro__ doesn't show <class '__main__.Parent'> like the above two. I still don't know why after some digging into the CPython source code. from typing import NamedTuple from collections import namedtuple A = namedtuple('A', ['test']) class B(NamedTuple): test: str class Parent: pass class Foo(Parent, A): pass class Bar(Parent, B): pass class FooBar(Parent, NamedTuple): pass print(Foo.__mro__) # prints (<class '__main__.Foo'>, <class '__main__.Parent'>, <class '__main__.A'>, <class 'tuple'>, <class 'object'>) print(Bar.__mro__) # prints (<class '__main__.Bar'>, <class '__main__.Parent'>, <class '__main__.B'>, <class 'tuple'>, <class 'object'>) print(FooBar.__mro__) # prints (<class '__main__.FooBar'>, <class 'tuple'>, <class 'object'>) # expecting: (<class '__main__.FooBar'>, <class '__main__.Parent'>, <class 'tuple'>, <class 'object'>) | This is because typing.NamedTuple is not really a proper type. It is a class. But its singular purpose is to take advantage of meta-class magic to give you a convenient nice way to define named-tuple types. And named-tuples derive from tuple directly. Note, unlike most other classes, from typing import NamedTuple class Foo(NamedTuple): pass print(isinstance(Foo(), NamedTuple)) prints False. This is because in NamedTupleMeta essentially introspects __annotations__ in your class to eventually use it to return a class created by a call to collections.namedtuple: def _make_nmtuple(name, types): msg = "NamedTuple('Name', [(f0, t0), (f1, t1), ...]); each t must be a type" types = [(n, _type_check(t, msg)) for n, t in types] nm_tpl = collections.namedtuple(name, [n for n, t in types]) # Prior to PEP 526, only _field_types attribute was assigned. 
# Now __annotations__ are used and _field_types is deprecated (remove in 3.9) nm_tpl.__annotations__ = nm_tpl._field_types = dict(types) try: nm_tpl.__module__ = sys._getframe(2).f_globals.get('__name__', '__main__') except (AttributeError, ValueError): pass return nm_tpl class NamedTupleMeta(type): def __new__(cls, typename, bases, ns): if ns.get('_root', False): return super().__new__(cls, typename, bases, ns) types = ns.get('__annotations__', {}) nm_tpl = _make_nmtuple(typename, types.items()) ... return nm_tpl And of course, namedtuple essentially just creates a class which derives from tuple. Effectively, any other classes your named-tuple class derives from in the class definition statement are ignored, because this subverts the usual class machinery. It might feel wrong, in a lot of ways it is ugly, but practicality beats purity. And it is nice and practical to be able to write things like: class Foo(NamedTuple): bar: int baz: str | 9 | 11 |
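The central claim, that NamedTuple itself never appears in the MRO and the generated class derives from tuple directly, is easy to verify. A small sketch (note that on Python 3.9+ extra bases alongside NamedTuple are rejected with a TypeError, so the silently-dropped Parent behaviour above is specific to older versions):

```python
from typing import NamedTuple


class Point(NamedTuple):
    x: int
    y: int


# The generated class subclasses tuple directly; NamedTuple is absent.
print(Point.__mro__)

p = Point(1, 2)
print(isinstance(p, tuple))  # True
print(p._fields)             # ('x', 'y')
```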
60,657,926 | 2020-3-12 | https://stackoverflow.com/questions/60657926/drawing-labels-that-follow-their-edges-in-a-networkx-graph | Working with Networkx, I have several edges that need to be displayed in different ways. For that I use connectionstyle: some edges are straight lines, others are Arc3. The problem is that every edge has a label and the label doesn't follow the edges in these styles. I borrowed a graph as an example: #!/usr/bin/env python3 import networkx as nx import matplotlib.pyplot as plt # Graph data names = ['A', 'B', 'C', 'D', 'E'] positions = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (1, 1)] edges = [('A', 'B'), ('A', 'C'), ('A', 'D'), ('A', 'E'), ('D', 'A')] # Matplotlib figure plt.figure('My graph problem') # Create graph G = nx.MultiDiGraph(format='png', directed=True) for index, name in enumerate(names): G.add_node(name, pos=positions[index]) labels = {} for edge in edges: G.add_edge(edge[0], edge[1]) labels[(edge[0], edge[1])] = '{} -> {}'.format(edge[0], edge[1]) layout = dict((n, G.node[n]["pos"]) for n in G.nodes()) nx.draw(G, pos=layout, with_labels=True, node_size=300, connectionstyle='Arc3, rad=0.3') nx.draw_networkx_edge_labels(G, layout, edge_labels=labels, connectionstyle='Arc3, rad=0.3') # Here is the problem : the labels will not follow the edges plt.show() That can lead to problems, as this example image shows: we're not sure which edge a label belongs to. Is there a way to draw labels that follow their edges? Thanks | Yes, it is possible to draw labeled edges of networkx directed graphs, by using GraphViz. An example using the Python package graphviz, the Python package networkx, and the GraphViz program dot: """How to draw a NetworkX graph using GraphViz.
Requires: - The Python package `graphviz`: https://github.com/xflr6/graphviz `pip install graphviz` - The Python package `networkx`: https://github.com/networkx/networkx `pip install networkx` - The GraphViz program `dot` in the environment's path https://graphviz.org/download/ https://en.wikipedia.org/wiki/PATH_(variable) """ import graphviz as gv import networkx as nx def dump_example_directed_graph(): """Use GraphViz `dot` to layout a directed multigraph. Creates a file named 'example_directed_graph' that contains the rendered graph. """ g = example_directed_graph() h = networkx_to_graphviz(g) filename = 'example_directed_graph' fileformat = 'pdf' h.render(filename, format=fileformat, cleanup=True) # The argument `view=True` can be given to # the method `graphviz.dot.Digraph.render` # to open the rendered file with the # default viewer of the operating system def dump_example_undirected_graph(): """Use GraphViz `dot` to layout an undirected multigraph. Creates a file named `example_undirected_graph` that contains the rendered graph. """ g = example_undirected_graph() h = networkx_to_graphviz(g) filename = 'example_undirected_graph' fileformat = 'pdf' h.render(filename, format=fileformat, cleanup=True) def example_directed_graph(): """Return a sample directed graph as `networkx.MultiDiGraph`.""" g = nx.MultiDiGraph() g.add_node(1, label='A') g.add_node(2, label='B') g.add_edge(1, 2, label='AB-1') g.add_edge(1, 2, label='AB-2') g.add_edge(2, 1, label='BA') return g def example_undirected_graph(): """Return a sample undirected graph as `networkx.MultiGraph`.""" g = nx.MultiGraph() g.add_node(1, label='A') g.add_node(2, label='B') g.add_edge(1, 2, label='AB-1') g.add_edge(1, 2, label='AB-2') return g def networkx_to_graphviz(g): """Convert `networkx` graph `g` to `graphviz.Digraph`.
@type g: `networkx.Graph` or `networkx.DiGraph` @rtype: `graphviz.Digraph` """ if g.is_directed(): h = gv.Digraph() else: h = gv.Graph() for u, d in g.nodes(data=True): h.node(str(u), label=d['label']) for u, v, d in g.edges(data=True): h.edge(str(u), str(v), label=d['label']) return h if __name__ == '__main__': dump_example_directed_graph() dump_example_undirected_graph() Documentation of the: class graphviz.dot.Graph for representing undirected graphs class graphviz.dot.Digraph for representing directed graphs method graphviz.dot.Digraph.node for adding an annotated node to a graph method graphviz.dot.Digraph.edge for adding an annotated edge to a graph class networkx.MultiGraph for representing undirected graphs class networkx.MultiDiGraph for representing directed graphs The above code uses networkx == 2.5.1, graphviz == 0.16, and GraphViz version 2.40.1. Current possibilities using matplotlib It appears that currently networkx supports: unlabeled curved edges using the function networkx.drawing.nx_pylab.draw_networkx_edges with the argument connectionstyle, or labeled straight edges using the function networkx.drawing.nx_pylab.draw_networkx_edges with the argument edge_labels. So as of networkx <= 2.5.1, labeled curved edges cannot be drawn with matplotlib. As a result, for a directed graph with a pair of labeled edges that connect the same nodes (e.g., an edge 1 -> 2 and an edge 2 -> 1), the edges would be drawn in matplotlib to overlap, so not all edge labels will be visible. | 7 | 4 |
60,719,286 | 2020-3-17 | https://stackoverflow.com/questions/60719286/actions-for-creating-venv-in-python-and-clone-a-git-repo | I'm relatively new to all this and I have a problem with the order of the actions. Say that you created a directory, you want a python virtual environment for some project, and you want to clone a few git repos (say, from GitHub). You cd into that directory and create a virtual environment using the venv module (for python3). To do so you run the following command, python3 -m venv my_venv which will create in your directory a virtual environment called my_venv. To activate this environment you run the following. source ./my_venv/bin/activate If in addition inside that directory you have a requirements.txt file you can run, pip3 install -r ./requirements.txt to install your various dependencies and packages with pip3. Now this is where I'm getting confused. If you want to clone git repos, where exactly do you do that? Do you just run git clone in the same directory to create the repos, or do you need to cd into another folder? And for the cloned repos to pick up the python venv, is the above enough, or must the virtual environment be created after you have cloned the repos into your directory? | First of all, you need to understand what virtual environments are; once you understand what they are for, the order of actions will be clearer. Python applications will often use packages and modules that don’t come as part of the standard library. Applications will sometimes need a specific version of a library, because the application may require that a particular bug has been fixed or the application may be written using an obsolete version of the library’s interface. This means it may not be possible for one Python installation to meet the requirements of every application.
If application A needs version 1.0 of a particular module but application B needs version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will leave one application unable to run. The solution for this problem is to create a virtual environment, a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages. Different applications can then use different virtual environments. To resolve the earlier example of conflicting requirements, application A can have its own virtual environment with version 1.0 installed while application B has another virtual environment with version 2.0. If application B requires a library be upgraded to version 3.0, this will not affect application A’s environment. ※ Reference: 12. Virtual Environments and Packages Generally, the following order is the most appropriate. $ git clone <Project A> # Cloning project repository $ cd <Project A> # Enter the project directory $ python3 -m venv my_venv # If not created, creating virtualenv $ source ./my_venv/bin/activate # Activating virtualenv (my_venv)$ pip3 install -r ./requirements.txt # Installing dependencies (my_venv)$ deactivate # When you want to leave the virtual environment All installed dependencies at step 5 will be unavailable after you leave the virtual environment. | 14 | 35 |
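As a side note, the python3 -m venv step in the listing above can also be driven from a script via the stdlib venv module, which helps when automating the clone-then-create flow. A minimal sketch (the directory name is illustrative):

```python
import tempfile
import venv
from pathlib import Path

# Throwaway directory standing in for the freshly cloned repository.
project = Path(tempfile.mkdtemp())
env_dir = project / "my_venv"

# Equivalent of `python3 -m venv my_venv`; with_pip=False skips
# bootstrapping pip, keeping the call fast and offline-friendly.
venv.create(env_dir, with_pip=False)

# The environment records its configuration in pyvenv.cfg.
print((env_dir / "pyvenv.cfg").exists())  # True
```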
60,674,136 | 2020-3-13 | https://stackoverflow.com/questions/60674136/python-how-to-cancel-a-specific-task-spawned-by-a-nursery-in-python-trio | I have an async function that listens on a specific port. I want to run the function on a few ports at a time and, when the user wants to stop listening on a specific port, stop the function listening on that port. Previously I was using the asyncio library for this task and I tackled this problem by creating tasks with a unique id as their name. asyncio.create_task(Func(), name=UNIQUE_ID) Since trio uses nurseries to spawn tasks, I can see the running tasks by using nursery.child_tasks, but there is no way to name the tasks, nor a way to cancel a specific task on demand. TL;DR Since trio doesn't have a cancel() function that cancels a specific task, how can I manually cancel a task? | Easy. You create a cancel scope, return that from the task, and cancel this scope when required: async def my_task(task_status=trio.TASK_STATUS_IGNORED): with trio.CancelScope() as scope: task_status.started(scope) pass # do whatever async def main(): async with trio.open_nursery() as n: scope = await n.start(my_task) pass # do whatever scope.cancel() # cancels my_task() The magic part is await n.start(task), which waits until the task calls task_status.started(x) and returns the value of x (or None if you leave that empty). | 6 | 5 |
60,658,375 | 2020-3-12 | https://stackoverflow.com/questions/60658375/vscode-python-extension-how-can-i-disable-autocompletion-from-inserting-import | In VS Code's Python extension I sometimes find that autocompletion can include options for things that are not yet imported in the file I'm editing. When selecting one of those options imports sometimes get inserted at the top of the module without notification. While I can see the utility in this feature I don't really like the behavior since it does this silently and puts them in alphabetical order disregarding any other sorting I may choose. Is there a way to disable this feature? | Using Pylance (as of v2020.8.0), you can disable this by setting "python.analysis.autoImportCompletions": false https://github.com/microsoft/pylance-release/blob/master/CHANGELOG.md#202080-5-august-2020 | 15 | 17 |
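In a user or workspace settings.json, the setting from the answer above sits alongside any other entries; for example (the second setting shown is purely illustrative context):

```json
{
    "python.analysis.autoImportCompletions": false,
    "python.languageServer": "Pylance"
}
```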
60,716,529 | 2020-3-17 | https://stackoverflow.com/questions/60716529/download-file-using-fastapi | I see the functions for uploading in an API, but I don't see how to download. Am I missing something? I want to create an API for a file download site. Is there a different API I should be using? import os from typing import List from fastapi import FastAPI, Query app = FastAPI() PATH = "some/path" @app.get("/shows/") def get_items(q: List[str] = Query(None)): ''' Pass path to function. Returns folders and files. ''' results = {} query_items = {"q": q} entry = PATH + "/".join(query_items["q"]) + "/" dirs = os.listdir(entry) results["folders"] = [val for val in dirs if os.path.isdir(entry+val)] results["files"] = [val for val in dirs if os.path.isfile(entry+val)] results["path_vars"] = query_items["q"] return results Here is the sample bit of code for python to fetch files and dirs for a path; you can return the path as a list with a new entry in a loop to go deeper into a file tree. Passing a file name should trigger a download function, but I can't seem to get a download func going. | This worked for me: from starlette.responses import FileResponse return FileResponse(file_location, media_type='application/octet-stream', filename=file_name) This will download the file with the given filename. | 46 | 48 |
60,639,731 | 2020-3-11 | https://stackoverflow.com/questions/60639731/tensorboard-for-custom-training-loop-in-tensorflow-2 | I want to create a custom training loop in tensorflow 2 and use tensorboard for visualization. Here is an example I've created based on tensorflow documentation: import os import tensorflow as tf import datetime os.environ["CUDA_VISIBLE_DEVICES"] = "0" # which gpu to use mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) train_dataset = train_dataset.shuffle(60000).batch(64) test_dataset = test_dataset.batch(64) def create_model(): return tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28), name='Flatten_1'), tf.keras.layers.Dense(512, activation='relu', name='Dense_1'), tf.keras.layers.Dropout(0.2, name='Dropout_1'), tf.keras.layers.Dense(10, activation='softmax', name='Dense_2') ], name='Network') # Loss and optimizer loss_object = tf.keras.losses.SparseCategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam() # Define our metrics train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32) train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy') test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32) test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy') @tf.function def train_step(model, optimizer, x_train, y_train): with tf.GradientTape() as tape: predictions = model(x_train, training=True) loss = loss_object(y_train, predictions) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) train_loss(loss) train_accuracy(y_train, predictions) @tf.function def test_step(model, x_test, y_test): predictions = model(x_test) loss = loss_object(y_test, predictions) test_loss(loss)
test_accuracy(y_test, predictions) current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") train_log_dir = '/NAS/Dataset/logs/gradient_tape/' + current_time + '/train' test_log_dir = '/NAS/Dataset/logs/gradient_tape/' + current_time + '/test' train_summary_writer = tf.summary.create_file_writer(train_log_dir) test_summary_writer = tf.summary.create_file_writer(test_log_dir) model = create_model() # reset our model EPOCHS = 5 for epoch in range(EPOCHS): for (x_train, y_train) in train_dataset: train_step(model, optimizer, x_train, y_train) with train_summary_writer.as_default(): tf.summary.scalar('loss', train_loss.result(), step=epoch) tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch) for (x_test, y_test) in test_dataset: test_step(model, x_test, y_test) with test_summary_writer.as_default(): tf.summary.scalar('loss', test_loss.result(), step=epoch) tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch) template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}' print(template.format(epoch + 1, train_loss.result(), train_accuracy.result() * 100, test_loss.result(), test_accuracy.result() * 100)) # Reset metrics every epoch train_loss.reset_states() test_loss.reset_states() train_accuracy.reset_states() test_accuracy.reset_states() I am accessing tensorboard with the following command on terminal: tensorboard --logdir=..... The code above produces summaries for losses and metrics. My question is: How can I produce the graph of this process? I've tried to use the recommended commands from tensorflow: tf.summary.trace_on() and tf.summary.trace_export(), but I haven't managed to plot the graph. Maybe I am using them wrong. I would really appreciate any suggestion on how to do this.
| As answered here, I'm sure there's a better way, but a simple workaround is to just use the existing tensorboard callback logic: tb_callback = tf.keras.callbacks.TensorBoard(LOG_DIR) tb_callback.set_model(model) # Writes the graph to tensorboard summaries using an internal file writer | 11 | 6 |
60,646,028 | 2020-3-12 | https://stackoverflow.com/questions/60646028/numpy-point-cloud-to-image | I have a point cloud which looks something like this: The red dots are the points, the black dots are the red dots projected to the xy plane. Although it is not visible in the plot, each point also has a value, which is added to the given pixel when the point is moved to the xy plane. The points are represented by a numpy (np) array like so: points=np.array([[x0,y0,z0,v0],[x1,y1,z1,v1],...[xn,yn,zn,vn]]) The obvious way to put these points into some image would be through a simple loop, like so: image=np.zeros(img_size) for point in points: #each point = [x,y,z,v] image[tuple(point[0:2])] += point[3] Now this works fine, but it is very slow. So I was wondering if there is some way using vectorization, slicing and other clever numpy/python tricks of speeding it up, since in reality I have to this many times for large point clouds. I had come up with something using np.put: def points_to_image(xs, ys, vs, img_size): img = np.zeros(img_size) coords = np.stack((ys, xs)) #put the 2D coordinates into linear array coordinates abs_coords = np.ravel_multi_index(coords, img_size) np.put(img, abs_coords, ps) return img (in this case the points are pre-split into vectors containing the x, y and v components). While this works fine, it of course only puts the last point to each given pixel, i.e. it is not additive. Many thanks for your help! | Courtesy of @Paul Panzer: def points_to_image(xs, ys, ps, img_size): coords = np.stack((ys, xs)) abs_coords = np.ravel_multi_index(coords, img_size) img = np.bincount(abs_coords, weights=ps, minlength=img_size[0]*img_size[1]) img = img.reshape(img_size) On my machine, the loop version takes 0.4432s vs 0.0368s using vectorization. So a neat 12x speedup. ============ EDIT ============ Quick update: using torch... 
def points_to_image_torch(xs, ys, ps, sensor_size=(180, 240)): xt, yt, pt = torch.from_numpy(xs), torch.from_numpy(ys), torch.from_numpy(ps) img = torch.zeros(sensor_size) img.index_put_((yt, xt), pt, accumulate=True) return img I get all the way down to 0.00749. And that's still all happening on CPU, so 59x speedup vs python loop. I also had a go at running it on GPU, it doesn't seem to make a difference in speed, I guess with accumulate=True it's probably using some sort of atomics on the GPU that slows it all down. | 6 | 7 |
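A quick sanity check (a hypothetical example, not part of the original answer): the `bincount` version can be verified against the question's additive loop on random data to confirm they accumulate identically.

```python
import numpy as np

def points_to_image(xs, ys, ps, img_size):
    # Vectorized accumulation: map 2D coords to flat indices, then
    # sum the point values per pixel with a weighted bincount.
    coords = np.stack((ys, xs))
    abs_coords = np.ravel_multi_index(coords, img_size)
    img = np.bincount(abs_coords, weights=ps,
                      minlength=img_size[0] * img_size[1])
    return img.reshape(img_size)

rng = np.random.default_rng(0)
n, img_size = 1000, (8, 10)
xs = rng.integers(0, img_size[1], n)
ys = rng.integers(0, img_size[0], n)
ps = rng.random(n)

# Reference: the slow additive loop from the question.
ref = np.zeros(img_size)
for x, y, p in zip(xs, ys, ps):
    ref[y, x] += p

assert np.allclose(points_to_image(xs, ys, ps, img_size), ref)
```

The `allclose` comparison (rather than exact equality) allows for the tiny float differences that can come from summing the weights in a different order.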
60,638,344 | 2020-3-11 | https://stackoverflow.com/questions/60638344/quartiles-line-properties-in-seaborn-violinplot | trying to figure out how to modify the line properties (color, thickness, style etc) of the quartiles in a seaborn violinplot. Example code from their website: import seaborn as sns sns.set(style="whitegrid") tips = sns.load_dataset("tips") ax = sns.violinplot(x="day", y="total_bill", hue="sex", data=tips, palette="Set2", split=True,linestyle=':', scale="count", inner="quartile") The desired outcome would be to be able to change e.g. the color of the two parts of the violinplot individually as for example like this to improve readability: How can I do this? Thankful for any insights UPDATE: Based on the response by @kynnem the following can be used to change the median and quartile lines separately: import seaborn as sns sns.set(style="whitegrid") tips = sns.load_dataset("tips") ax = sns.violinplot(x="day", y="total_bill", hue="sex", data=tips, palette="Set2", split=True,linestyle=':', scale="count", inner="quartile") for l in ax.lines: l.set_linestyle('--') l.set_linewidth(0.6) l.set_color('red') l.set_alpha(0.8) for l in ax.lines[1::3]: l.set_linestyle('-') l.set_linewidth(1.2) l.set_color('black') l.set_alpha(0.8) Result: | You can access the lines from your ax variable using the following to set line type, color, and saturation: for l in ax.lines: l.set_linestyle('-') l.set_color('black') l.set_alpha(0.8) This creates a solid black line for all horizontal lines. If you can figure out which of the lines in ax correspond with your lines of interest, you can then specify different colors and styles as you wish | 11 | 8 |
60,699,058 | 2020-3-16 | https://stackoverflow.com/questions/60699058/python-asyncio-queue-not-showing-any-exceptions | If i run this code, it will hang without throwing ZeroDivisionError. If i move await asyncio.gather(*tasks, return_exceptions=True) above await queue.join(), it will finally throw ZeroDivisionError and stop. If i then comment out 1 / 0 and run, it will execute everything, but will hang in the end. Now the question is, how can i achive both: Being able to see unexpected exceptions as in the case 2 above, and... Actually stop when all task are done in the Queue . import asyncio import random import time async def worker(name, queue): while True: print('Get a "work item" out of the queue.') sleep_for = await queue.get() print('Sleep for the "sleep_for" seconds.') await asyncio.sleep(sleep_for) # Error on purpose 1 / 0 print('Notify the queue that the "work item" has been processed.') queue.task_done() print(f'{name} has slept for {sleep_for:.2f} seconds') async def main(): print('Create a queue that we will use to store our "workload".') queue = asyncio.Queue() print('Generate random timings and put them into the queue.') total_sleep_time = 0 for _ in range(20): sleep_for = random.uniform(0.05, 1.0) total_sleep_time += sleep_for queue.put_nowait(sleep_for) print('Create three worker tasks to process the queue concurrently.') tasks = [] for i in range(3): task = asyncio.create_task(worker(f'worker-{i}', queue)) tasks.append(task) print('Wait until the queue is fully processed.') started_at = time.monotonic() print('Joining queue') await queue.join() total_slept_for = time.monotonic() - started_at print('Cancel our worker tasks.') for task in tasks: task.cancel() print('Wait until all worker tasks are cancelled.') await asyncio.gather(*tasks, return_exceptions=True) print('====') print(f'3 workers slept in parallel for {total_slept_for:.2f} seconds') print(f'total expected sleep time: {total_sleep_time:.2f} seconds') asyncio.run(main()) | There are 
several ways to approach this, but the central idea is that in asyncio, unlike in classic threading, it is straightforward to await multiple things at once. For example, you can await queue.join() and the worker tasks, whichever completes first. Since workers don't complete normally (you cancel them later), a worker completing means that it has raised. # convert queue.join() to a full-fledged task, so we can test # whether it's done queue_complete = asyncio.create_task(queue.join()) # wait for the queue to complete or one of the workers to exit await asyncio.wait([queue_complete, *tasks], return_when=asyncio.FIRST_COMPLETED) if not queue_complete.done(): # If the queue hasn't completed, it means one of the workers has # raised - find it and propagate the exception. You can also # use t.exception() to get the exception object. Canceling other # tasks is another possibility. for t in tasks: if t.done(): t.result() # this will raise | 6 | 7 |
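A complete, runnable sketch of the pattern from this answer (the worker body and the deliberately failing item are illustrative stand-ins for the question's `1 / 0` bug, not the original code):

```python
import asyncio

async def worker(queue: asyncio.Queue) -> None:
    while True:
        item = await queue.get()
        if item == 3:
            # stand-in for the question's deliberate 1 / 0 bug
            raise ZeroDivisionError("boom")
        queue.task_done()

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(5):
        queue.put_nowait(i)
    tasks = [asyncio.create_task(worker(queue)) for _ in range(2)]
    # Convert queue.join() into a task so we can race it against the workers.
    queue_complete = asyncio.create_task(queue.join())
    await asyncio.wait([queue_complete, *tasks],
                       return_when=asyncio.FIRST_COMPLETED)
    result = "queue fully processed"
    if not queue_complete.done():
        # A worker completed first, which means it raised.
        for t in tasks:
            if t.done() and t.exception() is not None:
                result = f"worker failed: {t.exception()!r}"
    for t in [queue_complete, *tasks]:
        t.cancel()
    await asyncio.gather(queue_complete, *tasks, return_exceptions=True)
    return result

outcome = asyncio.run(main())
print(outcome)
```

Because one item is never marked done, `queue.join()` can never finish, so the `wait` call returns as soon as the failing worker raises and the exception is surfaced instead of hanging.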
60,666,418 | 2020-3-13 | https://stackoverflow.com/questions/60666418/remove-authentication-and-permission-for-specific-url-path | I'm working with DRF and came across this issue. I have a third-party view which I'm importing in my urls.py file like this:
from some_package import some_view

urlpatterns = [
path('view/',some_view)
]
but the issue I'm facing is that since I have enabled default permission classes in my settings.py like this:
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.TokenAuthentication',
),
'DEFAULT_PERMISSION_CLASSES':(
'rest_framework.permissions.IsAuthenticated',
),
}
now when I call that view using the url, it gives me an authentication error as I'm not providing a token. Is there a way I can bypass the authentication error without having to make changes in the view directly? I know that we can remove the permission for that particular view, but for that I'll have to make changes to that some_view function code. I don't want to do that; let's say we don't have access to that function and can only pass data and receive a response. How can I bypass authentication without having to change that function's code? I tried searching but couldn't find what I'm looking for. I was assuming that there might be some way we can do that from urls.py, like specifying a parameter or something like that which makes that particular view bypass authentication without having to change the function's code, something like this:
from some_package import some_view

urlpatterns = [
path('view/',some_view,"some_parameter") #passing some_parameter from here or something like that
]
Is what I'm looking for possible?

Thanks in advance :) | So, the most appropriate way for third-party views is to use decorators by defining them inside your urls.py: Case 1 I assume that some_view is a class inherited from rest_framework.views.APIView: urls.py from django.urls import path from rest_framework.decorators import permission_classes, authentication_classes from rest_framework.permissions import AllowAny from some_package import some_view urlpatterns = [ path('', authentication_classes([])(permission_classes([AllowAny])(some_view)).as_view()) ] Case 2 I assume that some_view is a simple Django view function and you need to define it for GET method: urls.py from django.urls import path from rest_framework.decorators import api_view, permission_classes, authentication_classes from rest_framework.permissions import AllowAny from some_package import some_view urlpatterns = [ path('', api_view(['GET'])(authentication_classes([])(permission_classes([AllowAny])(some_view)))) ] Case 3 I assume that some_view is an api_view decorated DRF view function. This is the hardest and most probably the most impossible part because you have to undecorate the previous api_view decorator. If view function is decorated with api_view, then it is already converted into Django view function so neither permision_classes nor authentication_classes can be appended to class: | 8 | 6 |
60,720,072 | 2020-3-17 | https://stackoverflow.com/questions/60720072/5x5-sliding-puzzle-fast-low-move-solution | I am trying to find a way to programmatically solve a 24-piece sliding puzzle in a reasonable amount of time and moves. Here is an example of the solved state in the puzzle I am describing: I have already found that the IDA* algorithm works fairly well to accomplish this for a 15-puzzle (4x4 grid). The IDA* algorithm is able to find the lowest number of moves for any 4x4 sliding puzzle in a very reasonable amount of time. I ran an adaptation of this code to test 4x4 sliding puzzles and was able to significantly reduce runtime further by using PyPy. Unfortunately, when this code is adapted for 5x5 sliding puzzles it runs horribly slow. I ran it for over an hour and eventually just gave up on seeing it finish, whereas it ran for only a few seconds on 4x4 grids. I understand this is because the number of nodes that need to searched goes up exponentially as the grid increases. However, I am not looking to find the optimal solution to a 5x5 sliding puzzle, only a solution that is close to optimal. For example, if the optimal solution for a given puzzle was 120 moves, then I would be satisfied with any solution that is under 150 moves and can be found in a few minutes. Are there any specific algorithms that might accomplish this? | It as been proved that finding the fewest number of moves of n-Puzzle is NP-Complete, see Daniel Ratner and Manfred Warmuth, The (n2-1)-Puzzle and Related Relocation Problems, Journal of Symbolic Computation (1990) 10, 111-137. Interesting facts reviewed in Graham Kendall, A Survey of NP-Complete Puzzles, 2008: The 8-puzzle can be solved with A* algorithm; The 15-puzzle cannot be solved with A* algorithm but the IDA* algorithm can; Optimal solutions to the 24-puzzle cannot be generated in reasonable times using IDA* algorithm. Therefore stopping the computation to change the methodology was the correct things to do. 
It seems there is an available algorithm in polynomial time that can find sub-optimal solutions, see Ian Parberry, Solving the (n^2−1)-Puzzle with 8/3n^3 Expected Moves, Algorithms 2015, 8(3), 459-465. It may be what you are looking for. | 6 | 7 |
60,643,710 | 2020-3-11 | https://stackoverflow.com/questions/60643710/setuptools-know-in-advance-the-wheel-filename-of-a-native-library | Is there an easy way to know the filename of a Python wheel before running the setup script? I'm trying to generate a Bazel rule that builds a .whl for each Python version installed in the machine, the library contains native code so it needs to be compiled for each version separately. The thing with Bazel is that it requires to declare any outputs in advance, and what I'm observing is that each Python version generates a different filename without obvious consistency (different prefixes for malloc and unicode) 2.7 --> lib-0.0.0-cp27-cp27mu-linux_x86_64.whl 3.6m --> lib-0.0.0-cp36-cp36m-linux_x86_64.whl 3.8 --> lib-0.0.0-cp36-cp38-linux_x86_64.whl I know as a workaround I could zip the wheel to pass it around, but I was wondering if there is a cleaner way to do it. | Update See also a more detailed answer here. You can get the name by querying the bdist_wheel command, for that you don't even need to build anything or writing a setup.py script (but you need the metadata you pass to the setup function). Example: from distutils.core import Extension from setuptools.dist import Distribution fuzzlib = Extension('fuzzlib', ['fuzz.pyx']) # the files don't need to exist dist = Distribution(attrs={'name': 'so', 'version': '0.1.2', 'ext_modules': [fuzzlib]}) bdist_wheel_cmd = dist.get_command_obj('bdist_wheel') bdist_wheel_cmd.ensure_finalized() distname = bdist_wheel_cmd.wheel_dist_name tag = '-'.join(bdist_wheel_cmd.get_tag()) wheel_name = f'{distname}-{tag}.whl' print(wheel_name) will print you the desired name. Notice that attrs passed to Distribution should contain the same metadata you pass to the setup function, otherwise you will likely get a wrong tag. To reuse the metadata, in a setup.py script this could be combined like e.g. setup_kwargs = {'name': 'so', 'version': '0.1.2', ...} dist = Distribution(attrs=setup_kwargs) ... 
setup(**setup_kwargs) | 7 | 4 |
60,719,220 | 2020-3-17 | https://stackoverflow.com/questions/60719220/airflow-running-python-files-fails-due-to-python-cant-open-file | I have a folder tree like this in my project project dags python_scripts libraries docker-compose.yml Dockerfile docker_resources I create an airflow service in a docker container with: dockerfile #Base image FROM puckel/docker-airflow:1.10.1 #Impersonate USER root #Los automatically thrown to the I/O strem and not buffered. ENV PYTHONUNBUFFERED 1 ENV AIRFLOW_HOME=/usr/local/airflow ENV PYTHONPATH "${PYTHONPATH}:/libraries" WORKDIR / #Add docker source files to the docker machine ADD ./docker_resources ./docker_resources #Install libraries and dependencies RUN apt-get update && apt-get install -y vim RUN pip install --user psycopg2-binary RUN pip install -r docker_resources/requirements.pip Docker-compose.yml version: '3' services: postgres: image: postgres:9.6 container_name: "postgres" environment: - POSTGRES_USER=airflow - POSTGRES_PASSWORD=airflow - POSTGRES_DB=airflow ports: - "5432:5432" webserver: build: . restart: always depends_on: - postgres volumes: - ./dags:/usr/local/airflow/dags - ./libraries:/libraries - ./python_scripts:/python_scripts ports: - "8080:8080" command: webserver healthcheck: test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"] interval: 30s timeout: 30s retries: 3 scheduler: build: . restart: always depends_on: - postgres volumes: - ./dags:/usr/local/airflow/dags - ./logs:/usr/local/airflow/logs ports: - "8793:8793" command: scheduler healthcheck: test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-scheduler.pid ]"] interval: 30s timeout: 30s retries: 3 My dag folder has a tutorial with: from datetime import timedelta # The DAG object; we'll need this to instantiate a DAG from airflow import DAG # Operators; we need this to operate! 
from airflow.operators.bash_operator import BashOperator from airflow.utils.dates import days_ago # These args will get passed on to each operator # You can override them on a per-task basis during operator initialization default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': days_ago(2), 'email': ['[email protected] '], 'email_on_failure': False, 'email_on_retry': False, 'retries': 0, 'retry_delay': timedelta(minutes=5), 'schedule_interval': '@daily', } dag = DAG( 'Tutorial', default_args=default_args, description='A simple tutorial DAG with production tables', catchup=False ) task_1 = BashOperator( task_id='my_task', bash_command='python /python_scripts/my_script.py', dag=dag, ) I tried changing bash_command='python /python_scripts/my_script.py', for: bash_command='python python_scripts/my_script.py', bash_command='python ~/../python_scripts/my_script.py', bash_command='python ~/python_scripts/my_script.py', And all of them fails. I tried them because BashOperator run the command in a tmp folder. If I get in the machine, and run ls command I find the file, under python_scripts. Even if I run python /python_scripts/my_script.py from /usr/local/airflowit works. The error is always: INFO - python: can't open file I searched and people solved the issue with absolute paths, but I can't fix it. Edit If in the dockerfile I add ADD ./ ./ below WORKDIR / and I delete these volumes from docker-compose.yml: 1. ./libraries:/libraries 2. ./python_scripts:/python_scripts The error is not file not found, is libraries not found. Import module error. Which is an improvement, but doesn't make sense cause PYTHONPATH is defined to have /libraries folder. Makes more sense the volumes that the ADD statement, because I need to have the changes applied into the code instantly into the docker. Edit 2: Volumes are mounted but no file is inside the container folders, this is why is not able to find the files. 
When running ADD ./ ./ the folder has the files, because that adds all the files inside the folder. Despite that, it doesn't work because the libraries are not found either. | Finally I solved the issue: I discarded all previous work and restarted the Dockerfile using an Ubuntu base image instead of the puckel/docker-airflow image, which is based on python:3.7-slim-buster. I don't use any user other than root now. | 7 | 0 |
60,704,532 | 2020-3-16 | https://stackoverflow.com/questions/60704532/python-telegram-bot-how-to-wait-for-user-answer-to-a-question-and-return-it | Context: I am using PyTelegramBotAPi or Python Telegram Bot I have a code I am running when a user starts the conversation. When the user starts the conversation I need to send him the first picture and a question if He saw something in the picture, the function needs to wait for the user input and return whether he saw it or not. After that, I will need to keep sending the picture in a loop and wait for the answer and run a bisection algorithm on it. What I have tried so far: I tried to use reply markup that waits for a response or an inline keyboard with handlers but I am stuck because my code is running without waiting for the user input. The code: @bot.message_handler(func=lambda msg: msg in ['Yes', 'No']) @bot.message_handler(commands=['start', 'help']) def main(message): """ This is my main function """ chat_id = message.chat.id try: reply_answer = message.reply_to_message.text except AttributeError: reply_answer = '0' # TODO : should wait for the answer asynchnonossly def tester(n, reply_answer): """ Displays the current candidate to the user and asks them to check if they see wildfire damages. """ print('call......') bisector.index = n bot.send_photo( chat_id=chat_id, photo=bisector.image.save_image(), caption=f"Did you see it Yes or No {bisector.date}", reply_markup=types.ForceReply(selective=True)) # I SHOUL WAIT FOR THE INPUT HERE AND RETURN THE USER INPUT return eval(reply_answer) culprit = bisect(bisector.count, lambda x: x, partial(tester, reply_answer=reply_answer) ) bisector.index = culprit bot.send_message(chat_id, f"Found! First apparition = {bisector.date}") bot.polling(none_stop=True) The algorithm I am running on the user input is something like this : def bisect(n, mapper, tester): """ Runs a bisection. 
- `n` is the number of elements to be bisected
- `mapper` is a callable that will transform an integer from "0" to "n" into a value that can be tested
- `tester` returns true if the value is within the "right" range
"""
if n < 1:
raise ValueError('Cannot bisect an empty array')
left = 0
right = n - 1
while left + 1 < right:
mid = int((left + right) / 2)
val = mapper(mid)
tester_values = tester(val) # Here is where I am using the output from Telegram bot
if tester_values:
right = mid
else:
left = mid
return mapper(right)
I hope I was clear explaining the problem, feel free to ask any clarification. If you know something that can point me in the right direction in order to solve this problem, let me know. | You should save your user info in a database. Basic fields would be: (id, first_name, last_name, username, menu) What is menu? Menu keeps the user's current state. When a user sends a message to your bot, you check the database to find out the user's current state. So if the user doesn't exist, you add them to your users table with menu set to MainMenu or WelcomeMenu or in your case PictureMenu. Now you're going to have a listener for the update function; let's assume each of these is a menu. @bot.message_handler(commands=['start', 'help']) so when the user sends start you're going to check the user's menu field inside the function. @bot.message_handler(commands=['start', 'help'])
def main(message):
    user = fetch_user_from_db(chat_id)
    if user.menu == "PictureMenu":
        if message.photo is not None:
            photo = message.photo[0].file_id
            photo_file = download_photo_from_telegram(photo)
            do_other_things()
            user.menu = "Picture2Menu";
            user.save();
        else:
            send_message("Please send a photo")
    if user.menu == "Picture2Menu":
        if message.photo is not None:
            photo = message.photo[0].file_id
            photo_file = download_photo_from_telegram(photo)
            do_other_things()
            user.menu = "Picture3Menu";
            user.save();
        else:
            send_message("Please send a photo")
    ...
I hope you got it. | 6 | 16 |
60,713,751 | 2020-3-16 | https://stackoverflow.com/questions/60713751/where-to-put-dockerignore | Consider the following typical python project structure: fooproject - main.py - src/ - test/ - logs/ - Dockerfile - .dockerignore - README.md The .dockerfile should prevent test/ and logs/ directories from being included in the docker image. test/ logs/ Contents of Dockerfile are FROM ubuntu16.04 COPY . /app/ WORKDIR /app USER root RUN pip install -r requirements.txt ENTRYPOINT ["main.py"] However, when running through PyCharm's automatic docker integration, the test and logs directories are both copied into the container. The PyCharm run command ends up as follows: 7eb643d9785b:python -u /opt/project/main.py I've tried a few things to no avail, such as making a duplicate copy of .dockerignore at the directory above the rest of the app. For example: COPY . /app/ COPY .dockerignore .dockerignore WORKDIR /app Wondering if it's possible that PyCharm's /opt/project is somehow interfering? So where exactly should .dockerignore be in a project like this? Update I shared the output of docker container inspect with BMitch, and he was able to find the solution. PyCharm was automatically mounting a volume in the container settings run config | The .dockerignore is used to control what files are included in the build context. This impacts the COPY and ADD commands in the Dockerfile, and ultimately the resulting image. When you run that image with a volume mount, e.g.: { "Type": "bind", "Source": "/home/adam/Desktop/Dev/ec2-data-analysis/grimlock", "Destination": "/opt/project", "Mode": "rw", "RW": true, "Propagation": "rprivate" }, That volume mount overrides the contents of the image for that container. All access to the path will go to your desktop directory rather than the image contents, and Linux bind mounts do not have a concept of the .dockerignore file. When you run this image without the volume mount, you should see different behavior. 
For anyone coming across this question in the future, the .dockerignore needs to be in the root of your build context. The build context is the directory you pass at the end of the build command, often a . or the current directory. If you enable BuildKit, it will first check for a Dockerfile.dockerignore where your Dockerfile path/name could be changed. And to test your .dockerignore, see this answer. | 19 | 24 |
60,722,876 | 2020-3-17 | https://stackoverflow.com/questions/60722876/how-likely-is-token-collision-with-python-secrets-library | How likely is it for a collision to occur with tokens generated using Python's secrets library (https://docs.python.org/3/library/secrets.html)? There doesn't seem to be any mention of their uniqueness. | The purpose of the secrets module, is for sourcing secret data, i.e. information that cannot be predicted or reverse engineered The secrets module provides access to the most secure source of randomness that your operating system provides. Typically, the OS will use several entropy sources to generate bits. e.g. process id, thread id, mouse / keyboard input timings, CPU counters, system time, etc. So long as the amount of entropy is sufficiently large for the amount of bits being generated (and again the OS uses many sources and accumulates entropy constantly), we should expect a uniform distribution of all values. So if you generate a 32 bit key, you should expect to see each of the 4294967296 values with similar frequency. To estimate how long before we expect a collision, we are essentially looking at the Birthday problem. In general, if the values are uniformly distributed, and the number of values is n, we should expect a collision to occur after generating about sqrt(n) values (though in reality it's a bit more). 
You can verify this with a quick benchmark program import secrets def find_collision(b): tokens = set() while True: token = secrets.token_bytes(b) if token in tokens: return len(tokens) tokens.add(token) b = 4 samples = 100 l = [find_collision(b) for i in range(samples)] avg = sum(l)/len(l) print(f'n = {2**(b*8)}, sqrt(n) = {2**(b*8/2)}') print(f'on average, with a {b} byte token, a collision occurs after {avg} values') n = 4294967296, sqrt(n) = 65536.0 on average, with a 4 byte token, a collision occurs after 75797.78 values secrets makes no aims or claims about uniqueness, as the goal is to generate high entropy, random bytes that should be impossible to predict or reverse engineer. To add constraints to the module to prevent duplicates would inherently make it more predictable. In addition, secrets feeds you these bytes in a stream under the assumption that you take what you need, and use it how you like, so preventing duplicates doesn't make much sense as an upstream responsibility. This is in contrast to something like a UUID, which has a fixed width and is intended to be used as a portable identifier and recognized datatype. | 6 | 9 |
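The empirical benchmark lines up with the closed-form birthday-problem estimate; a small sketch (not from the original answer) computes the expected number of draws before the first repeat:

```python
import math

def expected_draws_before_collision(n: int) -> float:
    # Birthday-problem approximation: drawing uniformly from n values,
    # the expected number of draws before the first repeat is
    # approximately sqrt(pi * n / 2), i.e. ~1.25 * sqrt(n).
    return math.sqrt(math.pi * n / 2)

# 4-byte tokens span 2**32 values, matching the benchmark above.
print(expected_draws_before_collision(2 ** 32))  # ~82137
```

That is the same order of magnitude as the ~76k average observed in the 100-sample benchmark.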
60,716,482 | 2020-3-17 | https://stackoverflow.com/questions/60716482/error-skipping-analyzing-flask-mysqldb-found-module-but-no-type-hints-or-lib | I am using Python 3.6 and flask. I used flask-mysqldb to connect to MySQL, but whenever I try to run mypy on my program I get this error: Skipping analyzing 'flask_mysqldb': found module but no type hints or library stubs. I tried running mypy with the flags ignore-missing-imports or follow-imports=skip. Then I was not getting the error. Why do I get this error? How can I fix this without adding any additional flags? | You are getting this error because mypy is not designed to try type checking every single module you try importing. This is mainly for three reasons: The module you're trying to import could be written in a way that it fails to type check. For example, if the module does something like my_list = [] in the global scope, mypy will ask for a type hint since it doesn't know what this list is supposed to contain. These sorts of errors are out of the control of people using the libraries, and so it would be annoying and disruptive to potentially spam them everywhere. Even if the library module you're trying to import type checks cleanly, if it's not using type hints, you won't gain much benefit from trying to use it with mypy. If the library is dynamically typed, mypy could silently accept code that actually does not type check at runtime. This can be surprising to people/silently doing the wrong thing is generally a bad idea. Instead, you get an explicit warning. Not all modules are written in Python -- some modules are actually C extensions. Mypy cannot analyze these modules and so must implement some mechanism for ignoring modules. If you do not get this error, this means one of five things: The type hints for your library already exist in Typeshed, which comes pre-bundled with mypy. Typeshed mostly contains type hints for the standard library and a handful of popular 3rd party libraries. 
The library is already using type hints and declared that it wants to be analyzed and type checked by mypy. This is done by including a special py.typed file within the package which makes it PEP 561 compatible. You've installed a 3rd party "stubs-only" package. This package can be installed alongside the library and lets people provide type hints without having to modify the library itself. For example, the django-stubs package contains type hints for the Django library, which is not PEP 561 compatible. You've configured mypy to use your own custom stubs. That is, you've basically created your own local "stubs-only" package and told mypy to use it. You've decided to suppress the error and live with the fact that the library is dynamically typed for now -- for example, by using the command line flags you found or by adding a # type: ignore to the import. For more details on how to deal with this error, see the mypy docs on dealing with missing type hints from 3rd party libraries. | 56 | 59 |
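If you go with the last option for a single library, a per-module override keeps the rest of the project strict instead of passing flags globally (a sketch of a mypy.ini; the module pattern matches the library from the question):

```ini
# mypy.ini — silence the missing-hints error for one untyped dependency
[mypy]
python_version = 3.6

[mypy-flask_mysqldb.*]
ignore_missing_imports = True
```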
60,727,103 | 2020-3-17 | https://stackoverflow.com/questions/60727103/what-are-the-caveats-of-inheriting-from-both-str-and-enum | What are the caveats (if any) of using a class that inherits from both str and Enum? This was listed as a possible way to solve the problem of Serialising an Enum member to JSON from enum import Enum class LogLevel(str, Enum): DEBUG = 'DEBUG' INFO = 'INFO' Of course the point is to use this class as an enum, with all its advantages | When inheriting from str, or any other type, the resulting enum members are also that type. This means: they have all the methods of that type they can be used as that type and, most importantly, they will compare with other instances of that type That last point is the most important: because LogLevel.DEBUG is a str it will compare with other strings -- which is good -- but will also compare with other str-based Enums -- which could be bad. Info regarding subclassing enum from the documentation | 10 | 13 |
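A small illustration of that last point (the second enum here is hypothetical, made up for the demo):

```python
from enum import Enum

class LogLevel(str, Enum):
    DEBUG = 'DEBUG'
    INFO = 'INFO'

# An unrelated str-based enum that happens to reuse a value.
class JobState(str, Enum):
    DEBUG = 'DEBUG'

# Good: str methods and comparisons with plain strings work.
assert LogLevel.DEBUG == 'DEBUG'
assert LogLevel.DEBUG.lower() == 'debug'

# The caveat: members of *different* str-based enums compare equal
# whenever their underlying strings match, which is rarely intended.
assert LogLevel.DEBUG == JobState.DEBUG
```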
60,729,170 | 2020-3-17 | https://stackoverflow.com/questions/60729170/python-opencv-converting-planar-yuv-420-image-to-rgb-yuv-array-format | I am trying to use OpenCV, version 4.1.0 through python to convert a planar YUV 4:2:0 image to RGB and am struggling to understand how to format the array to pass to the cvtColor function. I have all 3 channels as separate arrays and am trying to merge them for use with cv2.cvtColor. I am using cv2.cvtColor(yuv_array, cv2.COLOR_YUV420p2RGB). I understand that the yuv_array should be 1.5x as tall as the original image (that's what a yuv array from cvtColor using cv2.COLOR_RGB2YUV_YV12 looks like) and I should put the UV components into the bottom half of the yuv_array and the Y channel into the top part of the array. I cannot seem to figure out how the U and V channels should be formatted within the bottom of this array. I've tried interleaving them and just putting them both in there back-to-back. With both methods, I've tried putting U first then V and also the other way around. All methods lead to artifacts in the resulting image. 
Here is my code and an example image: import os import errno import numpy as np import cv2 fifo_names = ["/tmp/fifos/y_fifo", "/tmp/fifos/u_fifo", "/tmp/fifos/v_fifo"] #teardown; delete fifos import signal, sys def cleanup_exit(signal, frame): print ("cleaning up!") for fifo in fifo_names: os.remove(fifo) sys.exit(0) signal.signal(signal.SIGINT, cleanup_exit) signal.signal(signal.SIGTERM, cleanup_exit) #make fifos for fifo in fifo_names: try: os.mkfifo(fifo); except OSError as oe: if oe.errno == errno.EEXIST: os.remove(fifo) os.mkfifo(fifo) else: raise() #make individual np arrays to store Y,U, and V channels #we know the image size beforehand -- 640x360 pixels yuv_data = [] frame_size = [] fullsize = (360, 640) halfsize = (180, 320) for i in range(len(fifo_names)): if (i == 0): size = fullsize else: size = halfsize yuv_data.append(np.empty(size, dtype=np.uint8)); frame_size.append(size) #make array that holds all yuv data for display with cv2 all_yuv_data = np.empty((fullsize[0] + halfsize[0], fullsize[1]), dtype=np.uint8) #continuously read yuv images from fifos print("waiting for fifo to be written to...") while True: for i in range(len(fifo_names)): fifo = fifo_names[i] with open(fifo, 'rb') as f: print("FIFO %s opened" % (fifo)) all_data = b'' while True: data = f.read() print("read from %s, len: %d" % (fifo,len(data))) if len(data) == 0: #then the fifo has been closed break else: all_data += data yuv_data[i] = np.frombuffer(all_data, dtype=np.uint8).reshape(frame_size[i]) #stick all yuv data in one buffer, interleaving columns all_yuv_data[0:fullsize[0],0:fullsize[1]] = yuv_data[0] all_yuv_data[fullsize[0]:,0:fullsize[1]:2] = yuv_data[1] all_yuv_data[fullsize[0]:,1:fullsize[1]:2] = yuv_data[2] #show each yuv channel individually cv2.imshow('y', yuv_data[0]) cv2.imshow('u', yuv_data[1]) cv2.imshow('v', yuv_data[2]) #convert yuv to rgb and display it rgb = cv2.cvtColor(all_yuv_data, cv2.COLOR_YUV420p2RGB); cv2.imshow('rgb', rgb) cv2.waitKey(1) The above code is 
trying to interleave the U and V information column-wise. I have also tried using the following to place the U and V channel information into the all_yuv_data array: #try back-to-back all_yuv_data[0:fullsize[0],0:fullsize[1]] = yuv_data[0] all_yuv_data[fullsize[0]:,0:halfsize[1]] = yuv_data[1] all_yuv_data[fullsize[0]:,halfsize[1]:] = yuv_data[2] The image is a frame of video obtained with libav from another program. The frame is of format AV_PIX_FMT_YUV420P, described as "planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples)". Here are the yuv channels for a sample image shown in grayscale: Y Channel: U Channel: V Channel: and the corresponding RGB conversion (this was from using the above interleaving method, similar artifacts are seen when using the 'back-to-back' method): RGB Image With Artifacts: How should I be placing the u and v channel information in all_yuv_data? Edited by Mark Setchell after this point I believe the expected result is: | In case the YUV standard matches the OpenCV COLOR_YUV2BGR_I420 conversion formula, you may read the frame as one chunk, and reshape it to height*1.5 rows apply conversion. The following code sample: Builds an input in YUV420 format, and write it to memory stream (instead of fifo). Read frame from stream and convert it to BGR using COLOR_YUV2BGR_I420. Colors are incorrect... Repeat the process by reading Y, U and V, resizing U, and V and using COLOR_YCrCb2BGR conversion. Note: OpenCV works in BGR color format (not RGB). Here is the code: import cv2 import numpy as np import io # Building the input: ############################################################################### img = cv2.imread('GrandKingdom.jpg') #yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV) #y, u, v = cv2.split(yuv) # Convert BGR to YCrCb (YCrCb apply YCrCb JPEG (or YCC), "full range", # where Y range is [0, 255], and U, V range is [0, 255] (this is the default JPEG format color space format). 
yvu = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb) y, v, u = cv2.split(yvu) # Downsample U and V (apply 420 format). u = cv2.resize(u, (u.shape[1]//2, u.shape[0]//2)) v = cv2.resize(v, (v.shape[1]//2, v.shape[0]//2)) # Open In-memory bytes streams (instead of using fifo) f = io.BytesIO() # Write Y, U and V to the "streams". f.write(y.tobytes()) f.write(u.tobytes()) f.write(v.tobytes()) f.seek(0) ############################################################################### # Read YUV420 (I420 planar format) and convert to BGR ############################################################################### data = f.read(y.size*3//2) # Read one frame (number of bytes is width*height*1.5). # Reshape data to numpy array with height*1.5 rows yuv_data = np.frombuffer(data, np.uint8).reshape(y.shape[0]*3//2, y.shape[1]) # Convert YUV to BGR bgr = cv2.cvtColor(yuv_data, cv2.COLOR_YUV2BGR_I420); # How to How should I be placing the u and v channel information in all_yuv_data? # ------------------------------------------------------------------------------- # Example: place the channels one after the other (for a single frame) f.seek(0) y0 = f.read(y.size) u0 = f.read(y.size//4) v0 = f.read(y.size//4) yuv_data = y0 + u0 + v0 yuv_data = np.frombuffer(yuv_data, np.uint8).reshape(y.shape[0]*3//2, y.shape[1]) bgr = cv2.cvtColor(yuv_data, cv2.COLOR_YUV2BGR_I420); ############################################################################### # Display result: cv2.imshow("bgr incorrect colors", bgr) ############################################################################### f.seek(0) y = np.frombuffer(f.read(y.size), dtype=np.uint8).reshape((y.shape[0], y.shape[1])) # Read Y color channel and reshape to height x width numpy array u = np.frombuffer(f.read(y.size//4), dtype=np.uint8).reshape((y.shape[0]//2, y.shape[1]//2)) # Read U color channel and reshape to height x width numpy array v = np.frombuffer(f.read(y.size//4), dtype=np.uint8).reshape((y.shape[0]//2, y.shape[1]//2)) # Read 
V color channel and reshape to height x width numpy array # Resize u and v color channels to be the same size as y u = cv2.resize(u, (y.shape[1], y.shape[0])) v = cv2.resize(v, (y.shape[1], y.shape[0])) yvu = cv2.merge((y, v, u)) # Stack planes to 3D matrix (use Y,V,U ordering) bgr = cv2.cvtColor(yvu, cv2.COLOR_YCrCb2BGR) ############################################################################### # Display result: cv2.imshow("bgr", bgr) cv2.waitKey(0) cv2.destroyAllWindows() Result: | 6 | 10 |
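Independent of OpenCV, the key step in the answer -- reading one I420 frame as a single chunk and reshaping it to height * 1.5 rows -- can be sketched with NumPy alone; the tiny 4x4 frame here is made up for illustration:

```python
import numpy as np

h, w = 4, 4                                   # tiny, even-sized test frame
y = np.arange(h * w, dtype=np.uint8)          # full-resolution luma plane
u = np.arange(h * w // 4, dtype=np.uint8)     # quarter-size chroma planes
v = np.arange(h * w // 4, dtype=np.uint8)

# I420 memory layout: whole Y plane, then whole U plane, then whole V plane
buf = y.tobytes() + u.tobytes() + v.tobytes()
assert len(buf) == h * w * 3 // 2             # 12 bits per pixel

# One frame reshapes to height * 1.5 rows -- the shape that
# cv2.cvtColor(..., cv2.COLOR_YUV2BGR_I420) expects as input.
i420 = np.frombuffer(buf, np.uint8).reshape(h * 3 // 2, w)
assert i420.shape == (6, 4)
```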
60,728,640 | 2020-3-17 | https://stackoverflow.com/questions/60728640/modify-global-variable-from-async-function-in-python | I'm making a Discord bot in Python using discord.py. I'd like to set/modify a global variable from an async thread. message = "" @bot.command() async def test(ctx, msg): message = msg However this doesn't work. How can I achieve something that does this? | As I said in the comment, you have to use the keyword global in the functions wherever you are modifying the global variable. If you are just reading it in a function, then you don't need it. message = "" @bot.command() async def test(ctx, msg): global message message = msg | 11 | 24 |
60,722,360 | 2020-3-17 | https://stackoverflow.com/questions/60722360/python-how-to-split-and-re-join-first-and-last-item-in-series-of-strings | I have a Series of strings, the series looks like; Series "1, 2, 6, 7, 6" "1, 3, 7, 9, 9" "1, 1, 3, 5, 6" "1, 2, 7, 7, 8" "1, 4, 6, 8, 9" "1" I want to remove all elements apart from the first and last, so the output would look like; Series "1, 6" "1, 9" "1, 6" "1, 8" "1, 9" "1" To use Split(), do I need to loop over each element in the series? I've tried this but can't get the output I'm looking for. | You can use split and rsplit to get the various parts: result = [f"{x.split(',', 1)[0]},{x.rsplit(',', 1)[1]}" if x.find(',') > 0 else x for x in strings] If strings is a pd.Series object then you can convert it back to a series: result = pd.Series(result, index=strings.index) | 6 | 5 |
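The accepted one-liner can be checked quickly on the sample data from the question; note that rsplit keeps the space after the last comma, which matches the desired "1, 6" output:

```python
strings = ["1, 2, 6, 7, 6", "1, 3, 7, 9, 9", "1"]

# keep only the first and last comma-separated items; pass single items through
result = [
    f"{x.split(',', 1)[0]},{x.rsplit(',', 1)[1]}" if x.find(',') > 0 else x
    for x in strings
]

assert result == ["1, 6", "1, 9", "1"]
```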
60,708,680 | 2020-3-16 | https://stackoverflow.com/questions/60708680/how-to-change-the-marker-symbol-of-errorbar-limits-in-matplotlib | just a quick question, where I couldn't find anything helpful in the plt.errorbar documentation I want to plot values with error bars: import matplotlib.pyplot as plt plt.errorbar(1, 0.25, yerr=0.1, uplims=True, lolims=True, fmt='o') plt.show() But I would like to have error bars with a simple horizontal line instead of arrows at the ends. But there is no "capmarker" or similar option in the plt.errorbar() function | Remove the uplims=True and lolims=True; both limits are plotted by default, without any ending arrows: import matplotlib.pyplot as plt plt.errorbar(1, 0.25, yerr=0.1, fmt='o') plt.show() EDIT: Increase the capsize to add caps to the end of the error bars, and increase the capthick to make the caps thicker: plt.errorbar(1, 0.25, yerr=0.1, fmt='o', capsize=3) plt.errorbar(1, 0.25, yerr=0.1, fmt='o', capsize=3, capthick=3) | 6 | 13 |
60,650,325 | 2020-3-12 | https://stackoverflow.com/questions/60650325/install-python2-on-a-mac-after-python-2-support-has-ended-on-homebrew | I would like to install a few packages from a github project and one of the dependencies is python@2. Prior to Jan 1 2020, it was possible to install python@2 using Homebrew: $ brew install python@2 However, Python 2 support has ended from Homebrew. Is there anyway to install python@2 on a Mac now that Python 2 support has ended? Until the code in this project is ported to Python 3, unfortunately I'm stuck with getting it to work with Python 2 (and dependencies which use Python 2), which is the reason I would like to install python@2 as a temporary solution. | I was able to find the exact same question -- Brew - reinstalling python@2. Unfortunately I can't raise the duplicate flag again so I'll duplicate the answer: brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/86a44a0a552c673a05f11018459c9f5faae3becc/Formula/[email protected] | 10 | 10 |
60,721,826 | 2020-3-17 | https://stackoverflow.com/questions/60721826/how-can-i-generate-n-random-values-from-a-bimodal-distribution-in-python | I tried generating and combining two unimodal distributions but think there's something wrong in my code. N=400 mu, sigma = 100, 5 mu2, sigma2 = 10, 40 X1 = np.random.normal(mu, sigma, N) X2 = np.random.normal(mu2, sigma2, N) w = np.random.normal(0.5, 1, N) X = w*X1 + (1-w)*X2 X = X.reshape(-1,2) When I plot X I don't get a bimodal distribution | It's unclear where your problem is; it's also unclear what the purpose of the variable w is, and it's unclear how you judge you get an incorrect result, since we don't see the plot code, or any other code to confirm or reject a binomial distribution. That is, your example is too incomplete to exactly answer your question. But I can make an educated guess. If I do the following below: import numpy as np import matplotlib.pyplot as plt N=400 mu, sigma = 100, 5 mu2, sigma2 = 10, 40 X1 = np.random.normal(mu, sigma, N) X2 = np.random.normal(mu2, sigma2, N) X = np.concatenate([X1, X2]) plt.hist(X) and that yields the following figure: | 6 | 16 |
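An alternative to concatenation that keeps exactly N samples is to *select* one component per sample -- the usual way to draw from a mixture, and a direct fix for the question's bug of averaging w*X1 + (1-w)*X2. A sketch using the question's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400
mu, sigma = 100, 5
mu2, sigma2 = 10, 40

X1 = rng.normal(mu, sigma, N)
X2 = rng.normal(mu2, sigma2, N)

# Pick a component for each sample (50/50 mixture weights) instead of
# blending the two draws together.
pick = rng.random(N) < 0.5
X = np.where(pick, X1, X2)

assert X.shape == (N,)
assert np.all((X == X1) | (X == X2))   # every sample is from one component
```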
60,720,213 | 2020-3-17 | https://stackoverflow.com/questions/60720213/pandas-get-rows-by-comparing-two-columns-of-dataframe-to-list-of-tuples | Say I have a pandas DataFrame with four columns: A,B,C,D. my_df = pd.DataFrame({'A': [0,1,4,9], 'B': [1,7,5,7],'C':[1,1,1,1],'D':[2,2,2,2]}) I also have a list of tuples: my_tuples = [(0,1),(4,5),(9,9)] I want to keep only the rows of the dataframe where the value of (my_df['A'],my_df['B']) is equal to one of the tuples in my_tuples. In this example, this would be row#0 and row#2. Is there a good way to do this? I'd appreciate any help. | Use DataFrame.merge with a DataFrame created from the tuples; with no on parameter, the default join key is the intersection of all columns in both DataFrames, here A and B: df = my_df.merge(pd.DataFrame(my_tuples, columns=['A','B'])) print (df) A B C D 0 0 1 1 2 1 4 5 1 2 Or: df = my_df[my_df.set_index(['A','B']).index.isin(my_tuples)] print (df) A B C D 0 0 1 1 2 2 4 5 1 2 | 11 | 10 |
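Both approaches from the answer can be verified on the question's data (requires pandas); note that the isin variant preserves the original index labels while merge resets them:

```python
import pandas as pd

my_df = pd.DataFrame({'A': [0, 1, 4, 9], 'B': [1, 7, 5, 7],
                      'C': [1, 1, 1, 1], 'D': [2, 2, 2, 2]})
my_tuples = [(0, 1), (4, 5), (9, 9)]

# merge: inner join on the shared columns A and B
merged = my_df.merge(pd.DataFrame(my_tuples, columns=['A', 'B']))

# isin: build an (A, B) MultiIndex and test membership against the tuples
masked = my_df[my_df.set_index(['A', 'B']).index.isin(my_tuples)]

assert merged[['A', 'B']].values.tolist() == [[0, 1], [4, 5]]
assert masked.index.tolist() == [0, 2]     # row#0 and row#2, as asked
```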
60,698,056 | 2020-3-15 | https://stackoverflow.com/questions/60698056/parquet-int96-timestamp-conversion-to-datetime-date-via-python | TL;DR I'd like to convert an int96 value such as ACIE4NxJAAAKhSUA into a readable timestamp format like 2020-03-02 14:34:22 or whatever that could be normally interpreted...I mostly use Python so I'm looking to build a function that does this conversion. If there's another function that can do the reverse -- even better. Background I'm using parquet-tools to convert a raw parquet file (with snappy compression) to raw JSON via this command: C:\Research> java -jar parquet-tools-1.8.2.jar cat --json original-file.snappy.parquet > parquet-output.json Inside the JSON, I'm seeing these values as the timestamps: {... "_id":"101836","timestamp":"ACIE4NxJAAAKhSUA"} I've determined that the timestamp value of "ACIE4NxJAAAKhSUA" is really int96 (this is also confirmed with reading the schema of the parquet file too.... message spark_schema { ...(stuff)... optional binary _id (UTF8); optional int96 timestamp; } I think this is also known as Impala Timestamp as well (at least that's what I've gathered) Further Issue Research I've been searching everywhere for some function or info regarding how to "read" the int96 value (into Python -- I'd like to keep it in that language since I'm most familiar with it) and output the timestamp -- I've found nothing. Here are a few articles I've already looked into (related to this subject): ParquetWriter research in SO here Casting int96 via golang in SO here NOTE: this has a function that I could explore but I'm not sure how to dive in too deep Regarding the deprecated int96 timestamp Please don't ask me to stop using an old/deprecated timestamp format within a parquet file, I'm well aware of that with the research I've done so far. I'm a receiver of the file/data -- I can't change the format used on creation.
If there's another way to control the initial JSON output to deliver a "non int96" value -- I'd be interested in that as well. Thanks so much for your help SO community! | parquet-tools will not be able to change format type from INT96 to INT64. What you are observing in json output is a String representation of the timestamp stored in INT96 TimestampType. You will need spark to re-write this parquet with timestamp in INT64 TimestampType and then the json output will produce a timestamp (in the format you desire). You will need to set a specific config in Spark - spark-shell --conf spark.sql.parquet.outputTimestampType=TIMESTAMP_MICROS 2020-03-16 11:37:50 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). Spark context Web UI available at http://192.168.0.20:4040 Spark context available as 'sc' (master = local[*], app id = local-1584383875924). Spark session available as 'spark'. Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 2.4.0 /_/ Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_91) Type in expressions to have them evaluated. Type :help for more information. val sourceDf = spark.read.parquet("original-file.snappy.parquet") 2020-03-16 11:38:31 WARN Utils:66 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf. sourceDf: org.apache.spark.sql.DataFrame = [application: struct<name: string, upgrades: struct<value: double> ... 3 more fields>, timestamp: timestamp ... 
16 more fields] scala> sourceDf.repartition(1).write.parquet("Downloads/output") Parquet-tools will show the correct TimestampType parquet-tools schema Downloads/output/part-00000-edba239b-e696-4b4e-8fd3-c7cca9eea6bf-c000.snappy.parquet message spark_schema { ... optional binary _id (UTF8); optional int64 timestamp (TIMESTAMP_MICROS); ... } And the json dump gives - parquet-tools cat --json Downloads/output/part-00000-edba239b-e696-4b4e-8fd3-c7cca9eea6bf-c000.snappy.parquet {..."_id":"101836", "timestamp":1583973827000000} The timestamp recorded is in nano seconds. Hope this helps! | 6 | 6 |
60,708,789 | 2020-3-16 | https://stackoverflow.com/questions/60708789/exceptions-vs-errors-in-python | I come from Java where Exceptions and Errors are quite different things and they both derive from something called Throwable. In Java normally you should never try to catch an Error. In Python though it seems the distinction is blurred. So far after reading some docs and checking the hierarchy I have the following questions: There are syntax errors which of course cause your program not to be able to start at all. Right? "Errors detected during execution are called exceptions and are not unconditionally fatal" (per the tutorial). What does "fatal" mean here? Also, some objects like AttributeError are (by the above definition) actually exceptions even though they contain Error in their names, is that conclusion correct? Some classes derive from Exception but contain Error in their name. Isn't this confusing? But even so it means that Error in the name is in no way special, it's still an Exception. Or not... ? "All built-in, non-system-exiting exceptions are derived from [Exception]" (quote from here) So which ones are system-exiting exceptions and which ones are not? It is not immediately clear. All user-defined exceptions should also be derived from Exception. So basically as a beginner do I need to worry about anything else but Exception? Seems like not. Warnings also derive from Exception. So are warnings fatal or system-exiting or none of these? Where does the AssertionError fit into all of this? Is it fatal or system exiting? How does one know or specify that some Exception class represents fatal or system-exiting exception? | Yes. SyntaxError isn't catchable except in cases of dynamically executed code (via eval/exec), because it occurs before the code is actually running. "Fatal" means "program dies regardless of what the code says"; that doesn't happen with exceptions in Python, they're all catchable. 
os._exit can forcibly kill the process, but it does so by bypassing the exception mechanism. There is no difference between exceptions and errors, so the nomenclature doesn't matter. System-exiting exceptions derive from BaseException, but not Exception. But they can be caught just like any other exception. Warnings behave differently based on the warnings filter, and deriving from Exception means they're not in the "system-exiting" category. AssertionError is just another Exception child class, so it's not "system exiting". It's just tied to the assert statement, which has special semantics. Things deriving from BaseException but not Exception (e.g. SystemExit, KeyboardInterrupt) are "not reasonable to catch" (or if you do catch them, it should almost always be to log/perform cleanup and rethrow them), everything else (derived from Exception as well) is "conditionally reasonable to catch". There is no other distinction. To be clear, "system-exiting" is just a way of saying "things which except Exception: won't catch"; if no except blocks are involved, all exceptions (aside from warnings, which as noted, behave differently based on the warnings filter) are "system-exiting". | 41 | 42 |
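The "system-exiting" distinction in the answer is just the BaseException/Exception split, which can be observed directly:

```python
# SystemExit and KeyboardInterrupt sit outside the Exception branch...
assert not issubclass(SystemExit, Exception)
assert issubclass(SystemExit, BaseException)

# ...so a bare `except Exception` lets them fly past, while
# `except BaseException` still catches them.
caught = None
try:
    raise SystemExit("leaving")
except Exception:
    caught = "Exception"        # not reached
except BaseException:
    caught = "BaseException"

assert caught == "BaseException"

# Warnings, by contrast, derive from Exception, and AssertionError is an
# ordinary Exception subclass tied to the assert statement.
assert issubclass(Warning, Exception)
assert issubclass(AssertionError, Exception)
```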
60,700,187 | 2020-3-16 | https://stackoverflow.com/questions/60700187/how-can-i-pass-a-callback-to-re-sub-but-still-inserting-match-captures | Consider: text = "abcdef" pattern = "(b|e)cd(b|e)" repl = [r"\1bla\2", r"\1blabla\2"] text = re.sub(pattern, lambda m: random.choice(repl), text) I want to replace matches randomly with entries of a list repl. But when using lambda m: random.choice(repl) as a callback, it doesn't replace \1, \2 etc. with its captures any more, returning "\1bla\2" as plain text. I've tried to look up re.py on how they do it internally, so I might be able to call the same internal function, but it doesn't seem trivial. The example above returns a\1bla\2f or a\1blabla\2f while abblaef or abblablaef are valid options in my case. Note that I'm using a function, because, in case of several matches like text = "abcdef abcdef", it should randomly choose a replacement from repl for every match – instead of using the same replacement for all matches. | If you pass a function you lose the automatic escaping of backreferences. You just get the match object and have to do the work. So you could: Pick a string in the regex rather than passing a function: text = "abcdef" pattern = "(b|e)cd(b|e)" repl = [r"\1bla\2", r"\1blabla\2"] re.sub(pattern, random.choice(repl), text) # 'abblaef' or 'abblablaef' Or write a function that processes the match object and allows more complex processing. You can take advantage of expand to use back references: text = "abcdef abcdef" pattern = "(b|e)cd(b|e)" def repl(m): repl = [r"\1bla\2", r"\1blabla\2"] return m.expand(random.choice(repl)) re.sub(pattern, repl, text) # 'abblaef abblablaef' and variations You can, or course, put that function into a lambda: repl = [r"\1bla\2", r"\1blabla\2"] re.sub(pattern, lambda m: m.expand(random.choice(repl)), text) | 7 | 8 |
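A quick check of the expand-based callback on the question's input; since the replacement is chosen randomly, the assertion only verifies the result is one of the two valid renderings:

```python
import random
import re

text = "abcdef"
pattern = "(b|e)cd(b|e)"
repl = [r"\1bla\2", r"\1blabla\2"]

# m.expand() substitutes the back-references into the chosen template
result = re.sub(pattern, lambda m: m.expand(random.choice(repl)), text)

# Back-references are expanded, so no literal "\1" survives in the output
assert result in {"abblaef", "abblablaef"}
```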
60,705,542 | 2020-3-16 | https://stackoverflow.com/questions/60705542/python-typeerror-expected-str-bytes-or-os-pathlike-object-not-io-textiowrapp | I am trying to convert a pipe-delimited text file to a CSV file, and then iterate through and print the CSV file. Here is my code: with open("...somefile.txt", "r") as text_file: text_reader = csv.reader(text_file, delimiter='|') with open("...somefile.csv", 'w') as csv_file: csv_writer = csv.writer(csv_file, delimiter=',') csv_writer.writerows(text_reader) with open (csv_file, 'r') as f: reader = csv.reader (f, delimiter=',') for row in reader: print(row) However, I am getting this error message: ----> 9 with open (csv_file, 'r') as f: 10 reader = csv.reader (f, delimiter=',') 11 for row in reader: TypeError: expected str, bytes or os.PathLike object, not _io.TextIOWrapper Can anyone explain what this means? Also, if I were to make this into a function, how could I take in a file name as input and then change the file to add a .csv extension when converting to a csv file? Thanks | You're passing open an already opened file, rather than the path of the file you created. Replace: with open (csv_file, 'r') as f: with with open ("...somefile.csv", 'r') as f: To change the extension in a function: import csv from pathlib import Path def txt_to_csv(fname): new_name = f'{Path(fname).stem}.csv' with open(fname, "r") as text_file: text_reader = csv.reader(text_file, delimiter='|') with open(new_name, 'w') as csv_file: csv_writer = csv.writer(csv_file, delimiter=',') csv_writer.writerows(text_reader) with open (new_name, 'r') as f: reader = csv.reader (f, delimiter=',') for row in reader: print(row) | 8 | 6 |
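Put together as a function -- with the extension swap the question asks about, and returning the new *path string* so it can safely be passed back to open() -- the conversion might look like this; the temp-file round trip below just exercises it:

```python
import csv
import os
import tempfile

def txt_to_csv(fname):
    # swap whatever extension the input has for .csv
    new_name = os.path.splitext(fname)[0] + ".csv"
    with open(fname, newline="") as text_file, \
         open(new_name, "w", newline="") as csv_file:
        csv.writer(csv_file).writerows(csv.reader(text_file, delimiter="|"))
    return new_name   # a str path, not an open file object

# round-trip check on a throwaway pipe-delimited file
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "somefile.txt")
    with open(src, "w", newline="") as f:
        f.write("a|b|c\n1|2|3\n")
    out = txt_to_csv(src)
    with open(out, newline="") as f:        # path string -> no TypeError
        rows = list(csv.reader(f))

assert out.endswith("somefile.csv")
assert rows == [["a", "b", "c"], ["1", "2", "3"]]
```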
60,690,327 | 2020-3-15 | https://stackoverflow.com/questions/60690327/typeerror-keyword-argument-not-understood-inputs | The following code is for disease detection with a CNN model using Tensorflow and Keras. For some reason, I keep getting an error. This is a TypeError with parameter 'inputs'. I don't understand why this error is being raised. Here is my code: from __future__ import absolute_import, division, print_function, unicode_literals import numpy as np # linear algebra import pandas as pd # data processing CSV file import tensorflow as tf from tensorflow.keras.layers import Dense, Flatten, Conv2D from tensorflow.keras import Model import cv2 import matplotlib.pyplot as plt import seaborn as sns # seaborn is a data visualization library for python graphs from PIL import Image import os #file path interacting with operating system thisFolder = os.path.dirname(os.path.realpath(__file__)) print(thisFolder) print(tf.__version__) infected = os.listdir(thisFolder + '/cell_images/cell_images/Parasitized/') uninfected = os.listdir(thisFolder +'/cell_images/cell_images/Uninfected/') data = [] labels = [] for i in infected: try: image = cv2.imread(thisFolder + "/cell_images/cell_images/Parasitized/"+i) image_array = Image.fromarray(image , 'RGB') resize_img = image_array.resize((50 , 50)) rotated45 = resize_img.rotate(45) rotated75 = resize_img.rotate(75) blur = cv2.blur(np.array(resize_img) ,(10, 10)) data.append(np.array(resize_img)) data.append(np.array(rotated45)) data.append(np.array(rotated75)) data.append(np.array(blur)) labels.append(1) labels.append(1) labels.append(1) labels.append(1) except AttributeError: print('') for u in uninfected: try: image = cv2.imread("../input/cell_images/cell_images/Uninfected/"+u) image_array = Image.fromarray(image , 'RGB') resize_img = image_array.resize((50 , 50)) rotated45 = resize_img.rotate(45) rotated75 = resize_img.rotate(75) data.append(np.array(resize_img)) data.append(np.array(rotated45)) 
data.append(np.array(rotated75)) labels.append(0) labels.append(0) labels.append(0) except AttributeError: print('') cells = np.array(data) labels = np.array(labels) np.save('Cells' , cells) np.save('Labels' , labels) print('Cells : {} | labels : {}'.format(cells.shape , labels.shape)) # plt.figure(1 , figsize = (15, 9)) # all graphs and displays n = 0 for i in range(49): n += 1 r = np.random.randint(0 , cells.shape[0] , 1) plt.subplot(7 , 7, n) plt.subplots_adjust(hspace = 0.5 , wspace = 0.5) plt.imshow(cells[r[0]]) plt.title('{} : {}'.format('Infected' if labels[r[0]] == 1 else 'Uninfected', labels[r[0]])) plt.xticks([]) , plt.yticks([]) plt.figure(1, figsize = (15 , 7)) plt.subplot(1 , 2 , 1) plt.imshow(cells[0]) plt.title('Infected Cell') plt.xticks([]) , plt.yticks([]) n = np.arange(cells.shape[0]) np.random.shuffle(n) cells = cells[n] labels = labels[n] cells = cells.astype(np.float32) labels = labels.astype(np.int32) cells = cells/255 from sklearn.model_selection import train_test_split train_x , x , train_y , y = train_test_split(cells , labels , test_size = 0.2 , random_state = 111) eval_x , test_x , eval_y , test_y = train_test_split(x , y , test_size = 0.5 , random_state = 111) plt.figure(1 , figsize = (15 ,5)) n = 0 for z , j in zip([train_y , eval_y , test_y] , ['train labels','eval labels','test labels']): n += 1 plt.subplot(1 , 3 , n) sns.countplot(x = z ) plt.title(j) # plt.show() print('train data shape {} ,eval data shape {} , test data shape {}'.format(train_x.shape, eval_x.shape , test_x.shape)) from tensorflow.python.framework import ops ops.reset_default_graph() def cnn_model_fn(features , labels , mode): input_layers = tf.reshape(features['x'] , [-1 , 50 , 50 ,3]) conv1 = tf.compat.v1.layers.Conv2D( inputs = input_layers , filters = 50 , kernel_size = [7 , 7], padding = 'same', activation = tf.nn.relu ) conv2 = tf.layers.conv2d( inputs = conv1, filters = 90, kernel_size = [3 , 3], padding = 'valid', activation = tf.nn.relu ) conv3 = 
tf.layers.conv2d( inputs = conv2 , filters = 10, kernel_size = [5 , 5], padding = 'same', activation = tf.nn.relu ) pool1 = tf.layers.max_pooling2d(inputs = conv3 , pool_size = [2 , 2] , strides = 2 ) conv4 = tf.layers.conv2d( inputs = pool1 , filters = 5, kernel_size = [3 , 3], padding = 'same', activation = tf.nn.relu ) pool2 = tf.layers.max_pooling2d(inputs = conv4 , pool_size = [2 , 2] , strides = 2 , padding = 'same') pool2_flatten = tf.layers.flatten(pool2) fc1 = tf.layers.dense( inputs = pool2_flatten, units = 2000, activation = tf.nn.relu ) fc2 = tf.layers.dense( inputs = fc1, units = 1000, activation = tf.nn.relu ) fc3 = tf.layers.dense( inputs = fc2 , units = 500 , activation = tf.nn.relu ) logits = tf.layers.dense( inputs = fc3 , units = 2 ) predictions = { 'classes': tf.argmax(input = logits , axis = 1), 'probabilities': tf.nn.softmax(logits , name = 'softmax_tensor') } if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode = mode , predictions = predictions) loss = tf.losses.sparse_softmax_cross_entropy(labels = labels , logits = logits) if mode == tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.001) train_op = optimizer.minimize(loss = loss , global_step = tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode = mode , loss = loss , train_op = train_op ) eval_metric_op = {'accuracy' : tf.metrics.accuracy(labels = labels , predictions = predictions['classes'])} logging_hook = tf.train.LoggingTensorHook( tensors = tensors_to_log , every_n_iter = 50 ) return tf.estimator.EstimatorSpec(mode = mode , loss = loss , eval_metric_ops = eval_metric_op) # Checkpoint saving training values malaria_detector = tf.estimator.Estimator(model_fn = cnn_model_fn , model_dir = '/tmp/modelchkpt') tensors_to_log = {'probabilities':'softmax_tensor'} logging_hook = tf.estimator.LoggingTensorHook( tensors = tensors_to_log , every_n_iter = 50 ) train_input_fn = 
tf.compat.v1.estimator.inputs.numpy_input_fn( x = {'x': train_x}, y = train_y, batch_size = 100 , num_epochs = None , shuffle = True ) malaria_detector.train(input_fn = train_input_fn , steps = 1 , hooks = [logging_hook]) malaria_detector.train(input_fn = train_input_fn , steps = 10000) eval_input_fn = tf.estimator.inputs.numpy_input_fn( x = {'x': eval_x}, y = eval_y , num_epochs = 1 , shuffle = False ) eval_results = malaria_detector.evaluate(input_fn = eval_input_fn) print(eval_results) pred_input_fn = tf.estimator.inputs.numpy_input_fn( x = {'x' : test_x}, y = test_y, num_epochs = 1, shuffle = False ) y_pred = malaria_detector.predict(input_fn = pred_input_fn) classes = [p['classes'] for p in y_pred] from sklearn.metrics import confusion_matrix , classification_report , accuracy_score print('{} \n{} \n{}'.format(confusion_matrix(test_y , classes) , classification_report(test_y , classes) , accuracy_score(test_y , classes))) plt.figure(1 , figsize = (15 , 9)) n = 0 for i in range(49): n += 1 r = np.random.randint( 0 , test_x.shape[0] , 1) plt.subplot(7 , 7 , n) plt.subplots_adjust(hspace = 0.5 , wspace = 0.5) plt.imshow(test_x[r[0]]) plt.title('true {} : pred {}'.format(test_y[r[0]] , classes[r[0]]) ) plt.xticks([]) , plt.yticks([]) plt.show() print("done") And here is the error: File "CNN.py", line 240, in <module> malaria_detector.train(input_fn = train_input_fn , steps = 1 , hooks = [logging_hook]) File "/usr/local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 374, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "/usr/local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1164, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "/usr/local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1194, in _train_model_default features, labels, ModeKeys.TRAIN, self.config) File 
"/usr/local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1152, in _call_model_fn model_fn_results = self._model_fn(features=features, **kwargs) File "CNN.py", line 136, in cnn_model_fn activation = tf.nn.relu File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/layers/convolutional.py", line 314, in __init__ name=name, **kwargs) File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/keras/layers/convolutional.py", line 527, in __init__ **kwargs) File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/keras/layers/convolutional.py", line 122, in __init__ **kwargs) File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/layers/base.py", line 213, in __init__ **kwargs) File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper result = method(self, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 186, in __init__ generic_utils.validate_kwargs(kwargs, allowed_kwargs) File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 718, in validate_kwargs raise TypeError(error_message, kwarg) TypeError: ('Keyword argument not understood:', 'inputs') How can I fix this TypeError? I have installed tensorflow 2.1 and upgraded keras. Not sure if that would relate to this error - this seems like a syntax error. Thanks! - Satya | The error you are asking about (TypeError: ('Keyword argument not understood:', 'inputs')) is caused because you are capitalizing the conv2d function in your first convolutional layer. 
Change the following: conv1 = tf.compat.v1.layers.Conv2D( inputs = input_layers , filters = 50 , kernel_size = [7 , 7], padding = 'same', activation = tf.nn.relu ) to: conv1 = tf.compat.v1.layers.conv2d( inputs = input_layers , filters = 50 , kernel_size = [7 , 7], padding = 'same', activation = tf.nn.relu ) and the error will go away. | 11 | 7 |
60,691,363 | 2020-3-15 | https://stackoverflow.com/questions/60691363/runtimeerrorfreeze-support-on-mac | I'm new on python. I want to learn how to parallel processing in python. I saw the following example: import multiprocessing as mp np.random.RandomState(100) arr = np.random.randint(0, 10, size=[20, 5]) data = arr.tolist() def howmany_within_range_rowonly(row, minimum=4, maximum=8): count = 0 for n in row: if minimum <= n <= maximum: count = count + 1 return count pool = mp.Pool(mp.cpu_count()) results = pool.map(howmany_within_range_rowonly, [row for row in data]) pool.close() print(results[:10]) but when I run it, this error happened: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. What should I do? | If you place everything in global scope inside this if __name__ == "__main__" block as follows, you should find that your program behaves as you expect: def howmany_within_range_rowonly(row, minimum=4, maximum=8): count = 0 for n in row: if minimum <= n <= maximum: count = count + 1 return count if __name__ == "__main__": np.random.RandomState(100) arr = np.random.randint(0, 10, size=[20, 5]) data = arr.tolist() pool = mp.Pool(mp.cpu_count()) results = pool.map(howmany_within_range_rowonly, [row for row in data]) pool.close() print(results[:10]) Without this protection, if your current module was imported from a different module, your multiprocessing code would be executed. This could occur within a non-main process spawned in another Pool and spawning processes from sub-processes is not allowed, hence we protect against this problem. | 15 | 19 |
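The accepted answer above can be reduced to a runnable sketch (a trimmed variant of the question's code, with the NumPy random data replaced by a small literal list so the example is self-contained): the Pool is created only under the `if __name__ == "__main__":` guard, so a child process that re-imports the module never tries to spawn workers of its own.

```python
import multiprocessing as mp

def howmany_within_range_rowonly(row, minimum=4, maximum=8):
    # Count how many values in the row fall inside [minimum, maximum]
    return sum(minimum <= n <= maximum for n in row)

if __name__ == "__main__":
    data = [[1, 5, 8, 9], [4, 4, 4, 4], [10, 0, 6, 7]]
    # Pool creation happens only when the script is run directly,
    # never when the module is merely imported by a child process.
    with mp.Pool(2) as pool:
        results = pool.map(howmany_within_range_rowonly, data)
    print(results)  # [2, 4, 2]
```

On Linux the default `fork` start method masks the bug, which is why such scripts often break only when moved to macOS or Windows, where `spawn` is the default.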
60,681,437 | 2020-3-14 | https://stackoverflow.com/questions/60681437/plotly-express-bar-chart-colour-change | How can I change the colour of the plotly express bar-chart to green in the following code? import plotly.express as px import pandas as pd # prepare the dataframe df = pd.DataFrame(dict( x=[1, 2, 3], y=[1, 3, 2] )) # prepare the layout title = "A Bar Chart from Plotly Express" fig = px.bar(df, x='x', y='y', # data from df columns color= pd.Series('green', index=range(len(df))), # does not work title=title, labels={'x': 'Some X', 'y':'Some Y'}) fig.show() | It can be done: - with color_discrete_sequence =['green']*3, or, - with fig.update_traces(marker_color='green') after the bar is instantiated. Ref: Community response | 18 | 21 |
60,677,843 | 2020-3-13 | https://stackoverflow.com/questions/60677843/python-check-if-all-elements-in-a-list-are-nan | My code sometimes produces a list of nan's op_list = [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]. I want to know if all elements are nans. My code and present output: op_list = [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] print(np.isnan(op_list)) array([ True, True, True, True, True, True, True, True, True, True]) My expected output: True | You need all: np.isnan(op_list).all() # True For a solution using lists you can do: all(i != i for i in op_list) # True | 16 | 41 |
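Both idioms from the answer can be wrapped into a tiny helper; the pure-Python form works because NaN is the only float that compares unequal to itself, so no NumPy import is needed:

```python
def all_nan(values):
    # NaN != NaN, so `v != v` is True exactly for NaN entries
    return all(v != v for v in values)

nans = [float("nan")] * 10
mixed = [float("nan"), 1.0]

print(all_nan(nans))   # True
print(all_nan(mixed))  # False
```

With NumPy available, `np.isnan(op_list).all()` is the vectorized equivalent shown in the accepted answer.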
60,674,501 | 2020-3-13 | https://stackoverflow.com/questions/60674501/how-to-make-black-background-in-cv2-puttext-with-python-opencv | I have a project of opencv where on the frame I am displaying some text using cv2.putText(). Currently it looks like below: As you can see on the top left corner, the text is present but its not clearly visible. Is it possible to make background black so that the text will then appear good. Something like below image: Even if the black background covers till right side of the frame, that is also fine. Below is the code I am using for putting text on frame: cv2.putText(frame, "Data: N/A", (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1) cv2.putText(frame, "Room: C1", (5, 60), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1) Is there any prebuilt method/library available in opencv which can do this. Can anyone please suggest a good way? | There's no prebuilt method but a simple appraoch is to use cv2.rectangle + cv2.putText. All you need to do is to draw the black rectangle on the image followed by placing the text. You can adjust the x,y,w,h parameters depending on how large/small you want the rectangle. Here's an example: Input image: Result: import cv2 import numpy as np # Load image, define rectangle bounds image = cv2.imread('1.jpg') x,y,w,h = 0,0,175,75 # Draw black background rectangle cv2.rectangle(image, (x, x), (x + w, y + h), (0,0,0), -1) # Add text cv2.putText(image, "THICC flower", (x + int(w/10),y + int(h/2)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 2) # Display cv2.imshow('image', image) cv2.waitKey() | 24 | 11 |
60,675,832 | 2020-3-13 | https://stackoverflow.com/questions/60675832/how-to-see-current-cache-size-when-using-functools-lru-cache | I am doing performance/memory analysis on a certain method that is wrapped with the functools.lru_cache decorator. I want to see how to inspect the current size of my cache without doing some crazy inspect magic to get to the underlying cache. Does anyone know how to see the current cache size of method decorated with functools.lru_cache? | Digging around in the docs showed the answer is calling .cache_info() on the method. To help measure the effectiveness of the cache and tune the maxsize parameter, the wrapped function is instrumented with a cache_info() function that returns a named tuple showing hits, misses, maxsize and currsize. In a multi-threaded environment, the hits and misses are approximate. | 10 | 16 |
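The counters are easy to see on a toy function: after three distinct arguments and one repeat, `currsize` reports three cached entries.

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def square(x):
    return x * x

for x in [1, 2, 2, 3]:  # 1: miss, 2: miss, 2: hit, 3: miss
    square(x)

info = square.cache_info()
print(info)  # CacheInfo(hits=1, misses=3, maxsize=128, currsize=3)
# The current number of cached entries:
print(info.currsize)  # 3
```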
60,674,039 | 2020-3-13 | https://stackoverflow.com/questions/60674039/how-to-test-assert-whether-two-lists-of-dictionaries-where-a-dict-item-contains | I'm creating my first test script (yay!) I'm have a list of dictionaries where one of the keys is a list. I'd like the test to past if the list (in the dictionary) is in any order. I know you can use assertCountEqual to check list equality regardless of order, but can you do that for a list that contains a dictinary of lists? See below for example Will Succeed def test(self): output = [2,1] desired_output = [1,2] self.assertCountEqual(output, desired_output) Will Fail def test(self): desired_output = [{'count': 2, 'columns': ['col2', 'col5']}] output = [{'count': 2, 'columns': ['col5', 'col2']}] self.assertCountEqual(output, desired_output) Thanks | The assertCountEqual(first, second, msg=None): Test that sequence first contains the same elements as second, regardless of their order. When they don’t, an error message listing the differences between the sequences will be generated. Important: Calling assertCountEqual(first, second, msg=None) is equivalent to callingassertEqual(Counter(list(first)), Counter(list(second))). Note: A Counter is a dict subclass for counting hashable objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. For this to work the keys have to be hashable but unfortunately the dict is unhashable because its mutable. In order to perform the desired task you could use the frozenset. The frozenset builds an immutable unordered collection of unique elements. In order to have the test succeed you will have to build a dict whose values corresponding to its keys are immutable. We can use the recursive approach to build a dictionary that contains immutable values. 
Try this (UPDATE): def getHashableDict(dictionary): hashable_dict = {} for key, value in dictionary.items(): if isinstance(value, list): hashable_dict[key] = frozenset(value) elif isinstance(value, dict): hashable_dict[key] = getHashableDict(value) else: hashable_dict[key] = value return frozenset(hashable_dict.items()) def test(self): desired_output = [{'count': 2, 'columns': ['col2', 'col5']}] output = [{'count': 2, 'columns': ['col5', 'col2']}] output = [getHashableDict(item) for item in output] #--> create list of hashable types desired_output = [getHashableDict(item) for item in desired_output] self.assertCountEqual(output, desired_output) The test will now succeed. | 6 | 5 |
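The helper from the answer can be checked on its own: two dicts whose list values differ only in order collapse to the same frozenset, which is why `assertCountEqual` (i.e. `Counter` equality) then passes.

```python
from collections import Counter

def getHashableDict(dictionary):
    # Recursively replace mutable containers with hashable equivalents
    hashable_dict = {}
    for key, value in dictionary.items():
        if isinstance(value, list):
            hashable_dict[key] = frozenset(value)
        elif isinstance(value, dict):
            hashable_dict[key] = getHashableDict(value)
        else:
            hashable_dict[key] = value
    return frozenset(hashable_dict.items())

a = getHashableDict({'count': 2, 'columns': ['col2', 'col5']})
b = getHashableDict({'count': 2, 'columns': ['col5', 'col2']})

print(a == b)                        # True
print(Counter([a]) == Counter([b]))  # True
```

One caveat worth knowing: `frozenset` discards duplicates and ordering inside the lists, so `['x', 'x']` and `['x']` would also compare equal under this scheme — a trade-off the original answer makes as well.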
60,671,987 | 2020-3-13 | https://stackoverflow.com/questions/60671987/django-orm-equivalent-of-sql-not-in-exclude-and-q-objects-do-not-work | The Problem I'm trying to use the Django ORM to do the equivalent of a SQL NOT IN clause, providing a list of IDs in a subselect to bring back a set of records from the logging table. I can't figure out if this is possible. The Model class JobLog(models.Model): job_number = models.BigIntegerField(blank=True, null=True) name = models.TextField(blank=True, null=True) username = models.TextField(blank=True, null=True) event = models.TextField(blank=True, null=True) time = models.DateTimeField(blank=True, null=True) What I've Tried My first attempt was to use exclude, but this does NOT to negate the entire Subquery, rather than the desired NOT IN: query = ( JobLog.objects.values( "username", "job_number", "name", "time", ) .filter(time__gte=start, time__lte=end, event="delivered") .exclude( job_number__in=models.Subquery( JobLog.objects.values_list("job_number", flat=True).filter( time__gte=start, time__lte=end, event="finished", ) ) ) ) Unfortunately, this yields this SQL: SELECT "view_job_log"."username", "view_job_log"."group", "view_job_log"."job_number", "view_job_log"."name", "view_job_log"."time" FROM "view_job_log" WHERE ( "view_job_log"."event" = 'delivered' AND "view_job_log"."time" >= '2020-03-12T11:22:28.300590+00:00'::timestamptz AND "view_job_log"."time" <= '2020-03-13T11:22:28.300600+00:00'::timestamptz AND NOT ( "view_job_log"."job_number" IN ( SELECT U0."job_number" FROM "view_job_log" U0 WHERE ( U0."event" = 'finished' AND U0."time" >= '2020-03-12T11:22:28.300590+00:00'::timestamptz AND U0."time" <= '2020-03-13T11:22:28.300600+00:00'::timestamptz ) ) AND "view_job_log"."job_number" IS NOT NULL ) ) What I need is for the third AND clause to be AND "view_job_log"."job_number" NOT IN instead of the AND NOT (. 
I've also tried doing the sub-select as it's own query first, with an exclude, as suggested here: Django equivalent of SQL not in However, this yields the same problematic result. Then I tried a Q object, which yields a similar query: query = ( JobLog.objects.values( "username", "subscriber_code", "job_number", "name", "time", ) .filter( ~models.Q(job_number__in=models.Subquery( JobLog.objects.values_list("job_number", flat=True).filter( time__gte=start, time__lte=end, event="finished", ) )), time__gte=start, time__lte=end, event="delivered", ) ) This attempt with the Q object yields the following SQL, again, without the NOT IN: SELECT "view_job_log"."username", "view_job_log"."group", "view_job_log"."job_number", "view_job_log"."name", "view_job_log"."time" FROM "view_job_log" WHERE ( NOT ( "view_job_log"."job_number" IN ( SELECT U0."job_number" FROM "view_job_log" U0 WHERE ( U0."event" = 'finished' AND U0."time" >= '2020-03-12T11:33:28.098653+00:00'::timestamptz AND U0."time" <= '2020-03-13T11:33:28.098678+00:00'::timestamptz ) ) AND "view_job_log"."job_number" IS NOT NULL ) AND "view_job_log"."event" = 'delivered' AND "view_job_log"."time" >= '2020-03-12T11:33:28.098653+00:00'::timestamptz AND "view_job_log"."time" <= '2020-03-13T11:33:28.098678+00:00'::timestamptz ) Is there any way to get Django's ORM to do something equivalent to AND job_number NOT IN (12345, 12346, 12347)? Or am I going to have to drop to raw SQL to accomplish this? Thanks in advance for reading this entire wall-of-text question. Explicit is better than implicit. 
:) | I think the easiest way to do this would be to define a custom lookup, similar to this one or the in lookup from django.db.models.lookups import In as LookupIn class NotIn(LookupIn): lookup_name = "notin" def get_rhs_op(self, connection, rhs): return "NOT IN %s" % rhs Field.register_lookup(NotIn) or class NotIn(models.Lookup): lookup_name = "notin" def as_sql(self, compiler, connection): lhs, params = self.process_lhs(compiler, connection) rhs, rhs_params = self.process_rhs(compiler, connection) params.extend(rhs_params) return "%s NOT IN %s" % (lhs, rhs), params then use it in your query: query = ( JobLog.objects.values( "username", "job_number", "name", "time", ) .filter(time__gte=start, time__lte=end, event="delivered") .filter( job_number__notin=models.Subquery( JobLog.objects.values_list("job_number", flat=True).filter( time__gte=start, time__lte=end, event="finished", ) ) ) ) this generates the SQL: SELECT "people_joblog"."username", "people_joblog"."job_number", "people_joblog"."name", "people_joblog"."time" FROM "people_joblog" WHERE ("people_joblog"."event" = delivered AND "people_joblog"."time" >= 2020 - 03 - 13 15:24:34.691222 + 00:00 AND "people_joblog"."time" <= 2020 - 03 - 13 15:24:41.678069 + 00:00 AND "people_joblog"."job_number" NOT IN ( SELECT U0. "job_number" FROM "people_joblog" U0 WHERE (U0. "event" = finished AND U0. "time" >= 2020 - 03 - 13 15:24:34.691222 + 00:00 AND U0. "time" <= 2020 - 03 - 13 15:24:41.678069 + 00:00))) | 9 | 9 |
60,664,637 | 2020-3-13 | https://stackoverflow.com/questions/60664637/sslerror-using-boto | We are using a proxy + profile when using the aws s3 commands to browse our buckets in CLI. export HTTPS_PROXY=https://ourproxyhost.com:3128 aws s3 ls s3://our_bucket/.../ --profile dev And we can work with our buckets and objects fine. Because I need to write Python code for this, I translated this using boto3: # python 2.7.12 import boto3 # v1.5.18 from botocore.config import Config # v1.8.32 s3 = boto3.Session(profile_name='dev').resource('s3', config=Config(proxies={'https': 'ourproxyhost.com:3128'})).meta.client obj = s3.get_object(Bucket='our_bucket', Key='dir1/dir2/.../file') What I get is this: botocore.vendored.requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) Why is this working in CLI, but not in Python? | botocore.vendored.requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) The error above in most cases it's usually related to the CA bundle being used for S3 connections. Possible Resolution Steps: 1. Turn off SSL certification validation : s3 = boto3.client('s3', verify=False) As mentioned in this boto3 documentation, this option turns off validation of SSL certificates but SSL protocol will still be used (unless use_ssl is False) for communication. 2. Check if you have AWS_CA_BUNDLE env var set?: echo $AWS_CA_BUNDLE or export | grep AWS_CA_BUNDLE 3. Check if you have certifi installed in your python env?: pip list | grep certifi Depending on the output of the above command, you could be using a version of certifi (which is not a dependency of boto3) that has a broken certificate validation when communicating with s3 endpoints. You will need to upgrade your OpenSSL version or pin certifi to a stable version as shown below : sudo pip uninstall certifi sudo pip install certifi==2015.04.28 Hope this helps! | 9 | 13 |
60,667,865 | 2020-3-13 | https://stackoverflow.com/questions/60667865/get-column-names-with-distinct-value-greater-than-specified-values-python | Dataframe X: A B C D V1 V2 V3 V4 V1 V3 V4 V5 V1 V4 V5 V5 V1 V5 V9 V5 V1 V2 V3 V4 V1 V10 V11 V12 V1 V10 V6 V8 V1 V12 V7 V8 Here Col A has 1 unique value, Col B has 6 unique values, Col C has 7 unique values, Col D has 4 unique values. I need a list of all columns where unique values > 4 say. X.columns[(X.nunique() > 4).any()] I expect to get only col B and Col C here, but I get all columns. How to achieve desired output. | You are really close, only remove .any for boolean mask: c = X.columns[(X.nunique() > 4)] print (c) Index(['B', 'C'], dtype='object') If need select columns use DataFrame.loc: df = X.loc[:, (X.nunique() > 4)] print (df) B C 0 V2 V3 1 V3 V4 2 V4 V5 3 V5 V9 4 V2 V3 5 V10 V11 6 V10 V6 7 V12 V7 | 6 | 7 |
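Rebuilding the sample frame from the answer shows the effect of dropping `.any()`: `X.nunique() > 4` is a per-column boolean mask, which is exactly what both the column index and the `.loc` selection need.

```python
import pandas as pd

X = pd.DataFrame({'A': ['V1'] * 8,
                  'B': ['V2', 'V3', 'V4', 'V5', 'V2', 'V10', 'V10', 'V12'],
                  'C': ['V3', 'V4', 'V5', 'V9', 'V3', 'V11', 'V6', 'V7'],
                  'D': ['V4', 'V4', 'V5', 'V5', 'V4', 'V12', 'V8', 'V8']})

# Columns with more than 4 distinct values -- note: no .any()
cols = X.columns[X.nunique() > 4]
print(list(cols))  # ['B', 'C']

# Or select the matching columns directly
subset = X.loc[:, X.nunique() > 4]
```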
60,662,784 | 2020-3-12 | https://stackoverflow.com/questions/60662784/csv-standard-multiple-tables | I am working on a Python project that does some analysis on CSV files. I know there is no well-defined standard for CSV files, but as far as I understand the definition (https://www.rfc-editor.org/rfc/rfc4180#page-2), I think that a CSV file should not contain more than one table. Is this thinking correct, or did I misunderstand the definition? How often do you see more than one table in CSVs? | You are correct. There is no universally accepted standard. The definition is written to suggest that each file contains one table, and this is by far the most common practice. There's technically nothing stopping you from having more than one table, using a format you decide on, implement, and keep consistent. For instance, you could parse the file yourself and use a line with 5 hyphens to designate a separate table. However, I wouldn't recommend this. It goes against common practice, and you will eliminate the possibility of using existing CSV libraries to help you. | 10 | 9 |
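If you did adopt an ad-hoc convention anyway — say the five-hyphen separator line the answer mentions — a small stdlib-only reader could split the text before handing each chunk to the csv module. The `-----` marker here is purely illustrative, not part of any standard, and this naive split would break if a quoted field happened to contain the marker.

```python
import csv
import io

def read_multi_table(text, sep="-----"):
    # Split the raw text on the separator line, then parse each chunk as CSV
    tables = []
    for chunk in text.split(sep):
        rows = list(csv.reader(io.StringIO(chunk.strip())))
        if rows:
            tables.append(rows)
    return tables

raw = "a,b\n1,2\n-----\nx,y,z\n3,4,5\n"
tables = read_multi_table(raw)
print(len(tables))  # 2
print(tables[0])    # [['a', 'b'], ['1', '2']]
```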
60,657,399 | 2020-3-12 | https://stackoverflow.com/questions/60657399/what-does-abc-class-do | I wrote a class that inherits another class: class ValueSum(SubQuery): output = IntegerField() And pycharm is showing the following warning: Class ValueSum must implement all abstract methods Then I alt+enter to add ABC to superclass. And my warning is gone. I have several questions: Should I always do this when writing a sub-class? What is the difference between manually implementing all the methods vs just using ABC? Does ABC add something to my code? | SubQuery is an abstract base class (per the abc module) with one or more abstract methods that you did not override. By adding ABC to the list of base classes, you defined ValueSum itself to be an abstract base class. That means you aren't forced to override the methods, but it also means you cannot instantiate ValueSum itself. PyCharm is warning you ahead of time that you need to implement the abstract methods inherited from SubQuery; if you don't, you would get an error from Python when you actually tried to instantiate ValueSum. As to what inheriting from ABC does, the answer is... not much. It's a convenience for setting the metaclass. The following are equivalent: class Foo(metaclass=abc.ABCMeta): ... and class Foo(abc.ABC): ... The metaclass modifies __new__ so that every attempt to create an instance of your class checks that the class has implemented all methods decorated with @abstractmethod in a parent class. | 11 | 10 |
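The behaviour can be seen with a stand-in base class (the abstract method name `as_sql` is invented for illustration; the question's real `SubQuery` interface isn't shown). Listing `ABC` among the subclass's bases marks it as deliberately abstract — which is what silences the IDE warning — but any class with an unoverridden abstract method already refuses to instantiate.

```python
import abc

class SubQuery(abc.ABC):
    @abc.abstractmethod
    def as_sql(self):
        """Must be provided by concrete subclasses."""

# Still abstract: as_sql is not overridden, so instantiation fails.
class ValueSum(SubQuery, abc.ABC):
    pass

class ConcreteSum(SubQuery):
    def as_sql(self):
        return "SUM(value)"

try:
    ValueSum()
except TypeError as exc:
    print("still abstract:", exc)

print(ConcreteSum().as_sql())  # SUM(value)
```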
60,655,270 | 2020-3-12 | https://stackoverflow.com/questions/60655270/mypy-fails-to-infer-enums-being-created-from-a-list-variable | Enums can be created by taking in a list of possible members, which I'm doing like so: # example_issue.py import enum yummy_foods = ["ham", "cheese"] foods = enum.Enum("Foods", yummy_foods) cheese = foods.cheese This looks OK, and runs fine, but mypy returns example_issue.py:4: error: Enum() expects a string, tuple, list or dict literal as the second argument example_issue.py:5: error: "Type[foods]" has no attribute "cheese" Found 2 errors in 1 file (checked 1 source file) What is mypy doing here, and why can't it follow that foods can take any value in yummy_foods? | Using a variable yummy_foods is too dynamic for mypy's static type checking, see this GitHub issue. If you change your code to generate the Enum as: foods = enum.Enum("Foods", ["ham", "cheese"]) mypy will then be able to figure out which attributes exist on the enumeration. | 7 | 6 |
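At runtime both spellings build the same enumeration — the restriction is purely on mypy's side. With the literal member list, attribute access, lookup by name, and iteration all behave as expected:

```python
import enum

# mypy can follow this form: the member list is a literal
Foods = enum.Enum("Foods", ["ham", "cheese"])

# Functional-API members are numbered from 1 in definition order
print(Foods.cheese.name)   # cheese
print(Foods["ham"].value)  # 1
print([m.name for m in Foods])  # ['ham', 'cheese']
```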
60,654,781 | 2020-3-12 | https://stackoverflow.com/questions/60654781/typeerror-cannot-perform-rand-with-a-dtyped-float64-array-and-scalar-of-ty | I ran a command in python pandas as follows: q1_fisher_r[(q1_fisher_r['TP53']==1) & q1_fisher_r[(q1_fisher_r['TumorST'].str.contains(':1:'))]] I got following error: TypeError: Cannot perform 'rand_' with a dtyped [float64] array and scalar of type [bool] the solution i tried using this: error link. changed the code accordingly as: q1_fisher_r[(q1_fisher_r['TumorST'].str.contains(':1:')) & (q1_fisher_r[(q1_fisher_r['TP53']==1)])] But still I got the same error as TypeError: Cannot perform 'rand_' with a dtyped [float64] array and scalar of type [bool] | For filtering by multiple conditions chain them by & and filter by boolean indexing: q1_fisher_r[(q1_fisher_r['TP53']==1) & q1_fisher_r['TumorST'].str.contains(':1:')] ^^^^ ^^^^ first condition second condition Problem is this code returned filtered data, so cannot chain by condition: q1_fisher_r[(q1_fisher_r['TumorST'].str.contains(':1:'))] Similar problem: q1_fisher_r[(q1_fisher_r['TP53']==1)] Sample: q1_fisher_r = pd.DataFrame({'TP53':[1,1,2,1], 'TumorST':['5:1:','9:1:','5:1:','6:1']}) print (q1_fisher_r) TP53 TumorST 0 1 5:1: 1 1 9:1: 2 2 5:1: 3 1 6:1 df = q1_fisher_r[(q1_fisher_r['TP53']==1) & q1_fisher_r['TumorST'].str.contains(':1:')] print (df) TP53 TumorST 0 1 5:1: 1 1 9:1: | 52 | 41 |
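The answer's fix, reproduced on its sample data: each comparison sits in parentheses (since `&` binds tighter than `==`) and the two boolean Series are combined into one mask — the original error came from placing an already-filtered DataFrame inside the condition instead of a boolean Series.

```python
import pandas as pd

q1_fisher_r = pd.DataFrame({'TP53': [1, 1, 2, 1],
                            'TumorST': ['5:1:', '9:1:', '5:1:', '6:1']})

# Each condition in parentheses, chained with & into a single boolean mask
mask = (q1_fisher_r['TP53'] == 1) & q1_fisher_r['TumorST'].str.contains(':1:')
result = q1_fisher_r[mask]
print(result)  # rows 0 and 1 survive the filter
```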
60,635,118 | 2020-3-11 | https://stackoverflow.com/questions/60635118/drf-how-to-change-the-value-of-the-model-fields-before-saving-to-the-database | If I need to change some field values before saving to the database as I think models method clear() is suitable. But I can't call him despite all my efforts. For example fields email I need set to lowercase and fields nda I need set as null models.py class Vendors(models.Model): nda = models.DateField(blank=True, null=True) parent = models.OneToOneField('Vendors', models.DO_NOTHING, blank=True, null=True) def clean(self): if self.nda == "": self.nda = None class VendorContacts(models.Model): .... vendor = models.ForeignKey('Vendors', related_name='contacts', on_delete=models.CASCADE) email = models.CharField(max_length=80, blank=True, null=True, unique=True) def clean(self): if self.email: self.email = self.email.lower() serializer.py class VendorContactSerializer(serializers.ModelSerializer): class Meta: model = VendorContacts fields = ( ... 'email',) class VendorsSerializer(serializers.ModelSerializer): contacts = VendorContactSerializer(many=True) class Meta: model = Vendors fields = (... 
'nda', 'contacts', ) def create(self, validated_data): contact_data = validated_data.pop('contacts') vendor = Vendors.objects.create(**validated_data) for data in contact_data: VendorContacts.objects.create(vendor=vendor, **data) return vendor views.py class VendorsCreateView(APIView): """Create new vendor instances from form""" permission_classes = (permissions.AllowAny,) serializer_class = VendorsSerializer def post(self, request, *args, **kwargs): serializer = VendorsSerializer(data=request.data) try: serializer.is_valid(raise_exception=True) serializer.save() except ValidationError: return Response({"errors": (serializer.errors,)}, status=status.HTTP_400_BAD_REQUEST) else: return Response(request.data, status=status.HTTP_200_OK) As I learned from the documentation Django Rest Framework serializers do not call the Model.clean when validating model serializers In dealing with this problem, I found two ways to solve it. 1. using the custom method at serializer. For my case, it looks like class VendorsSerializer(serializers.ModelSerializer): contacts = VendorContactSerializer(many=True) class Meta: model = Vendors fields = (... 'nda', 'contacts', ) def create(self, validated_data): contact_data = validated_data.pop('contacts') vendor = Vendors.objects.create(**validated_data) for data in contact_data: VendorContacts.objects.create(vendor=vendor, **data) return vendor def validate(self, attrs): instance = Vendors(**attrs) instance.clean() return attrs Using full_clean() method. For me, it looks like class VendorsSerializer(serializers.ModelSerializer): contacts = VendorContactSerializer(many=True) class Meta: model = Vendors fields = (... 'nda', 'contacts', ) def create(self, validated_data): contact_data = validated_data.pop('contacts') vendor = Vendors(**validated_data) vendor.full_clean() vendor.save() for data in contact_data: VendorContacts.objects.create(vendor=vendor, **data) return vendor But in both cases, the clean() method is not called. 
I really don't understand what I'm doing wrong. | For DRF you can change your serializer before save as below... First of all, you should check that serializer is valid or not, and if it is valid then change the required object of the serializer and then save that serializer. if serializer.is_valid(): serializer.object.user_id = 15 # For example serializer.save() UPD! views.py class VendorsCreateView(APIView): """Create new vendor instances from form""" permission_classes = (permissions.AllowAny,) serializer_class = VendorsSerializer def post(self, request, *args, **kwargs): data = request.data if data['nda'] == '': data['nda'] = None for contact in data['contacts']: if contact['email']: print(contact['email']) contact['email'] = contact['email'].lower() serializer = VendorsSerializer(data=request.data) try: serializer.is_valid(raise_exception=True) serializer.save() except ValidationError: return Response({"errors": (serializer.errors,)}, status=status.HTTP_400_BAD_REQUEST) | 7 | 5 |
60,632,275 | 2020-3-11 | https://stackoverflow.com/questions/60632275/python-typing-is-there-a-way-to-avoid-importing-of-optional-type-if-its-none | Let's say we have a function definition like this: def f(*, model: Optional[Type[pydantic.BaseModel]] = None) So the function doesn't require pydantic to be installed until you pass something as a model. Now let's say we want to pack the function into pypi package. And my question is if there's a way to avoid bringing pydantic into the package dependencies only the sake of type checking? | I tried to follow dspenser's advice, but I found mypy still giving me Name 'pydantic' is not defined error. Then I found this chapter in the docs and it seems to be working in my case too: from typing import TYPE_CHECKING if TYPE_CHECKING: import pydantic You can use normal clases (instead of string literals) with __future__.annotations (python 3.8.1): from __future__ import annotations from typing import TYPE_CHECKING, Optional, Type if TYPE_CHECKING: import pydantic def f(*, model: Optional[Type[pydantic.BaseModel]] = None): pass If for some reason you can't use __future__.annotations, e.g. you're on python < 3.7, use typing with string literals from dspenser's solution. | 6 | 9 |
60,646,431 | 2020-3-12 | https://stackoverflow.com/questions/60646431/how-to-use-multiprocessing-manager-value-to-store-a-sum | I want to accumulate a sum using multiprocessing.Pool. Here's how I tried: import multiprocessing def add_to_value(addend, value): value.value += addend with multiprocessing.Manager() as manager: value = manager.Value(float, 0.0) with multiprocessing.Pool(2) as pool: pool.starmap(add_to_value, [(float(i), value) for i in range(100)]) print(value.value) This gives incorrect and even inconsistent results. For instance, one time it gives 2982.0 and another it gives 2927.0. The correct output is 4950.0, and I do get this when I use only one process in the call to Pool, rather than 2. I'm using Python 3.7.5. | The multiprocessing documentation (under multiprocessing.Value) is quite explicit about this: Operations like += which involve a read and write are not atomic. So if, for instance, you want to atomically increment a shared value it is insufficient to just do counter.value += 1. In short, you need to grab a lock to be able to do this. You can do that with: def add_to_value(addend, value, lock): with lock: value.value += addend if __name__ == '__main__': with multiprocessing.Manager() as manager: lock = manager.Lock() value = manager.Value(float, 0.0) with multiprocessing.Pool(2) as pool: pool.starmap(add_to_value, [(float(i), value, lock) for i in range(100)]) print(value.value) This will correctly output 4950.0. But note that this approach will be quite expensive due to the need for locking. Most probably, it will take more time to finish than if you have a single process doing the operation. NOTE: I'm also adding an if __name__ == '__main__': guard which is actually required when using a start method other than fork. The default on both Windows and Mac OS is spawn, so that's really needed to make this code portable to either of those platforms. 
Start methods spawn and forkserver are also available on Linux/Unix, so in some situations this is also needed there. Multiprocessing will be more efficient when you're able to offload a job to workers that they can complete on their own, for example calculate partial sums and then add them together in the main process. If possible, consider rethinking your approach to fit that model. | 6 | 7 |
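The closing suggestion of the answer — offload self-contained work and combine in the parent — might look like the sketch below: each worker sums its own chunk with no shared `Value` and no `Lock`, and the main process adds the partial sums. The chunk size of 25 is an arbitrary choice for illustration.

```python
import multiprocessing

def chunk_sum(chunk):
    # Each worker reduces its own chunk; no shared state to lock
    return sum(chunk)

if __name__ == "__main__":
    data = [float(i) for i in range(100)]
    chunks = [data[i:i + 25] for i in range(0, len(data), 25)]
    with multiprocessing.Pool(2) as pool:
        total = sum(pool.map(chunk_sum, chunks))
    print(total)  # 4950.0
```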
60,650,432 | 2020-3-12 | https://stackoverflow.com/questions/60650432/merging-csv-files-with-different-headers-with-pandas-in-python | I'm trying to map a dataset to a blank CSV file with different headers, so I'm essentially trying to map data from one CSV file which has different headers to a new CSV with different amount of headers and called different things, the reason this question is different is since the column names aren't the same but there are no overlapping columns either. And I can't overwrite the data file with new headers since the data file has other columns with irrelevant data, I'm certain I'm overcomplicating this. I've seen this example code but how do I change this since this example is using a common header to join the data. a = pd.read_csv("a.csv") b = pd.read_csv("b.csv") #a.csv = ID TITLE #b.csv = ID NAME b = b.dropna(axis=1) merged = a.merge(b, on='title') merged.to_csv("output.csv", index=False) Sample Data a.csv (blank format file, the format must match this file): Headers: TOWN NAME LOCATION HEIGHT STAR b.csv: Headers: COUNTRY WEIGHT NAME AGE MEASUREMENT Data: UK, 150lbs, John, 6, 6ft Expected output file: Headers: TOWN NAME LOCATION HEIGHT STAR Data: (Blank) John, UK, 6ft (Blank) | From your example, it looks like you need to do some column renaming in addition to the merge. This is easiest done before the merge itself. # Read the csv files dfA = pd.read_csv("a.csv") dfB = pd.read_csv("b.csv") # Rename the columns of b.csv that should match the ones in a.csv dfB = dfB.rename(columns={'MEASUREMENT': 'HEIGHT', 'COUNTRY': 'LOCATION'}) # Merge on all common columns df = pd.merge(dfA, dfB, on=list(set(dfA.columns) & set(dfB.columns)), how='outer') # Only keep the columns that exists in a.csv df = df[dfA.columns] # Save to a new csv df.to_csv("output.csv", index=False) This should give you what you are after. | 8 | 2 |
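The rename-then-merge pattern from the answer, shown with inline frames instead of files: the blank a.csv becomes an empty DataFrame carrying only the target column layout, and b.csv's single sample row is inlined. The rename mapping is the one from the answer.

```python
import pandas as pd

# Target layout (the "blank format file") and the source data
dfA = pd.DataFrame(columns=['TOWN', 'NAME', 'LOCATION', 'HEIGHT', 'STAR'])
dfB = pd.DataFrame({'COUNTRY': ['UK'], 'WEIGHT': ['150lbs'],
                    'NAME': ['John'], 'AGE': [6], 'MEASUREMENT': ['6ft']})

# Align b's column names with a's before merging
dfB = dfB.rename(columns={'MEASUREMENT': 'HEIGHT', 'COUNTRY': 'LOCATION'})

# Merge on all columns the two frames now share
common = list(set(dfA.columns) & set(dfB.columns))
df = pd.merge(dfA, dfB, on=common, how='outer')

# Keep only the target layout; unmatched columns stay NaN
df = df[dfA.columns]
print(df)
```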
60,637,120 | 2020-3-11 | https://stackoverflow.com/questions/60637120/detect-circles-in-opencv | I have a problem with choosing right parameters for HoughCircles function. I try to detect circles from video. This circles are made by me, and has almost the same dimension. Problem is that camera is in move. When I change maxRadius it still detect bigger circles somehow (see the right picture). I also tried to change param1, param2 but still no success. gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) blurred = cv2.medianBlur(gray, 25)#cv2.bilateralFilter(gray,10,50,50) minDist = 100 param1 = 500 param2 = 200#smaller value-> more false circles minRadius = 5 maxRadius = 10 circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, minDist, param1, param2, minRadius, maxRadius) if circles is not None: circles = np.uint16(np.around(circles)) for i in circles[0,:]: cv2.circle(blurred,(i[0], i[1]), i[2], (0, 255, 0), 2) Maybe Im using wrong function? | The main problem in your code is 5th argument to HoughCircles function. According to documentation the argument list is: cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) → circles That means the 5th argument applies circles (it gives an option getting the output by reference, instead of using the returned value). Because you are not passing circles argument, you must pass named arguments for all arguments after the 4th argument (like param1=param1, param2=param2....). Parameter tuning issues: Reduce the value of param1. param1 is the higher threshold passed to the Canny. In your case value should be about 30. Reduce the value of param2 The documentation not so clear, but setting the value around 50 works. Increase maxRadius value - radius 10 is much smaller than the radius of your circles. 
Here is the code: import numpy as np import cv2 img = cv2.imread('circles.png') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) blurred = cv2.medianBlur(gray, 25) #cv2.bilateralFilter(gray,10,50,50) minDist = 100 param1 = 30 #500 param2 = 50 #200 #smaller value-> more false circles minRadius = 5 maxRadius = 100 #10 # docstring of HoughCircles: HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius) if circles is not None: circles = np.uint16(np.around(circles)) for i in circles[0,:]: cv2.circle(img, (i[0], i[1]), i[2], (0, 255, 0), 2) # Show result for testing: cv2.imshow('img', img) cv2.waitKey(0) cv2.destroyAllWindows() Result: | 15 | 12 |
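The positional-binding pitfall the answer describes is easy to see without OpenCV at all. `hough_circles` below is a hypothetical stand-in that merely mirrors `cv2.HoughCircles`' Python signature:

```python
# Hypothetical stand-in that mirrors cv2.HoughCircles' Python signature
# (OpenCV itself is not needed to see the binding problem).
def hough_circles(image, method, dp, minDist, circles=None,
                  param1=100, param2=100, minRadius=0, maxRadius=0):
    return {"circles": circles, "param1": param1, "param2": param2,
            "minRadius": minRadius, "maxRadius": maxRadius}

# Passing the tuning values positionally binds them to the wrong names:
wrong = hough_circles("img", "grad", 1, 100, 30, 50, 5, 100)
# 30 landed in `circles`, 50 in `param1`, 5 in `param2`, 100 in `minRadius`.

# Naming the arguments puts every value where it was intended:
right = hough_circles("img", "grad", 1, 100, param1=30, param2=50,
                      minRadius=5, maxRadius=100)
```

This is exactly why the question's original call silently ran with the wrong thresholds and radii.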
60,644,482 | 2020-3-11 | https://stackoverflow.com/questions/60644482/easy-way-to-copy-output-from-jupyter-to-paste-in-excel | I'm trying to find a simple way to change the standard output that I get in Jupyter in order to copy and paste tables ouputs in Excel. For example a simple value_counts() give this kind of output in Jupyter lab/notebook: 0 1030971 1 8766 Is there a simple way to mark, copy and paste keeping the format? Even better, could the table have borders? PD. I do not need to export results to excel, I only need to copy a few tables of all my output. Thanks in advance! | Assuming your DataFrame is named "df", run this line and you'll save the data to your clipboard, letting you paste it where you need to. df.to_clipboard(excel=True) | 7 | 15 |
60,642,466 | 2020-3-11 | https://stackoverflow.com/questions/60642466/how-do-i-select-where-in-values-with-tuples-in-python-sqlite3 | I have an SQLite database with three columns, and I'm trying to use parameter substitution for tuples to SELECT rows. This is my table: conn = sqlite3.connect("SomeDb.sqlite3") conn.execute(""" CREATE TABLE RoadSegmentDistribution( Source INTEGER, Destination INTEGER, Distribution TEXT ) """) I know how to substitute with non-tuples, but I cannot figure out how to do it with tuples. Based on this answer, I thought I simply had to substitute each and every value across the list of tuples: for e in conn.execute(""" SELECT * FROM RoadSegmentDistribution WHERE ( Source, Destination ) IN (VALUES (?,?), (?,?), (?,?), (?,?), (?,?)) """, [(1, 2),(2, 3),(4, 5),(6, 7),(8, 9)] ): print(e) but then I get the error ProgrammingError: Incorrect number of bindings supplied. The current statement uses 10, and there are 5 supplied. Obviously this means that I only need one question mark per tuple, right?: for e in conn.execute(""" SELECT * FROM RoadSegmentDistribution WHERE ( Source, Destination ) IN (VALUES (?), (?), (?), (?), (?)) """, [(1, 2),(2, 3),(4, 5),(6, 7),(8, 9)] ): print(e) But then I get this error: OperationalError: sub-select returns 1 columns - expected 2 I cannot insert the values manually like in the linked answer, since I don't know what the list parameter contains. This means that I need to do some kind of ",".join() based on the length of the list, but I'll figure that out once I know how to do substitution with a fixed-length list. How would I do this? | Using the str.join method is indeed a good way to achieve this, given the lack of native support for container-based placeholders in SQL engines: values = [(1, 2), (2, 3), (4, 5), (6, 7), (8, 9)] for e in conn.execute(f""" SELECT * FROM RoadSegmentDistribution WHERE ( Source, Destination ) IN (VALUES {','.join(f'({",".join("?" 
* len(t))})' for t in values)}) """, [i for t in values for i in t] ): print(e) where, with the given values: f""" SELECT * FROM RoadSegmentDistribution WHERE ( Source, Destination ) IN (VALUES {','.join(f'({",".join("?" * len(t))})' for t in values)}) """ would expand into: SELECT * FROM RoadSegmentDistribution WHERE ( Source, Destination ) IN (VALUES (?,?),(?,?),(?,?),(?,?),(?,?)) | 7 | 6 |
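The placeholder-building approach can be checked against an in-memory SQLite database. This is a minimal sketch assuming every tuple is a (Source, Destination) pair; note that row-value `IN (VALUES ...)` requires SQLite 3.15 or newer.

```python
import sqlite3

# Hypothetical sample data; every tuple is a (Source, Destination) pair.
values = [(1, 2), (2, 3), (4, 5)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE RoadSegmentDistribution("
             "Source INTEGER, Destination INTEGER, Distribution TEXT)")
conn.executemany("INSERT INTO RoadSegmentDistribution VALUES (?, ?, ?)",
                 [(1, 2, "a"), (4, 5, "b"), (9, 9, "c")])

# One "(?,?)" group per tuple, then flatten the tuples into the parameters.
placeholders = ",".join(["(?,?)"] * len(values))
flat_params = [n for pair in values for n in pair]
rows = conn.execute(
    "SELECT * FROM RoadSegmentDistribution"
    " WHERE (Source, Destination) IN (VALUES {})".format(placeholders),
    flat_params,
).fetchall()
```

Only the (1, 2) and (4, 5) rows match, since (9, 9) is not in the parameter list.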
60,643,344 | 2020-3-11 | https://stackoverflow.com/questions/60643344/can-a-function-be-a-python-dataclass-member | I'm trying to keep some functions together and tried a Python dataclass for this. I could not come up with or find how to assign a type to a function within a dataclass. In example below I used a dummy type int, but what I should use correctly instead of int? from dataclasses import dataclass inc = lambda x : x+1 @dataclass class Holder: func: int # but I need a better type signature here h = Holder(inc) assert h.func(1) == 2 | You should use the Callable type from typing import Callable @dataclass class Holder: func: Callable[[int], int] | 13 | 17 |
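A quick way to confirm the annotation works as intended - both a named function and a lambda satisfy `Callable[[int], int]`, and calling the attribute does not bind `self` because it lives in the instance dict:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Holder:
    func: Callable[[int], int]  # a callable taking an int, returning an int


def inc(x: int) -> int:
    return x + 1


h = Holder(inc)
doubler = Holder(lambda x: x * 2)
```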
60,641,215 | 2020-3-11 | https://stackoverflow.com/questions/60641215/one-to-many-relationship-django | I am coding up a dictionary using Django. I want a word to have multiple definitions, if necessary. This would be a one-to-many relationship, but Django does not seem to have a OneToManyField. This is a snippet of my code: class Definition(models.Model): definition = models.CharField(max_length=64) class Word(models.Model): word = models.CharField(max_length=64, unique=True) definitions = models.ForeignKey(Definition, on_delete=models.CASCADE, related_name="word") I would like to do word.definitions and get back all the definitions of that word. Also, deleting a word should delete all definitions of that word. Finally, a_definition.word should give me the word associated with that definition. | You have to use a ForeignKey on the Definition class. Definition will have a relation to Word (referenced by name as the string 'Word', since the Word class is defined later in the file): from django.db import models class Definition(models.Model): definition = models.CharField(max_length=64) word = models.ForeignKey('Word', on_delete=models.CASCADE) class Word(models.Model): word = models.CharField(max_length=64, unique=True) And you can query it like this: from .models import Word, Definition word = Word.objects.get(word = 'test') #get Word object definitions = Definition.objects.filter(word = word) #get all Definition objects related to word object above for definition in definitions: #print all definitions related to word print('%s -> %s' % (word.word, definition.definition)) | 6 | 7 |
60,638,356 | 2020-3-11 | https://stackoverflow.com/questions/60638356/difference-between-pip-install-and-pip-install-e | I have created a package in python, and now I would like to install it as a regular package. What is the difference between just using pip3 install . and pip3 install -e . ? The reason why I asked, is because with pip3 install . the package, although installed was not seen by the system. While in the second way it was working fine | The -e flag tells pip to install in editable mode: -e,--editable <path/url> Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url. https://manpages.debian.org/stretch/python-pip/pip.1 So what is editable mode or setuptools "develop mode" ? This command allows you to deploy your project’s source for use in one or more “staging areas” where it will be available for importing. This deployment is done in such a way that changes to the project source are immediately available in the staging area(s), without needing to run a build or install step after each change. The develop command works by creating an .egg-link file (named for the project) in the given staging area. If the staging area is Python’s site-packages directory, it also updates an easy-install.pth file so that the project is on sys.path by default for all programs run using that Python installation. The develop command also installs wrapper scripts in the staging area (or a separate directory, as specified) that will ensure the project’s dependencies are available on sys.path before running the project’s source scripts. And, it ensures that any missing project dependencies are available in the staging area, by downloading and installing them if necessary. 
Last, but not least, the develop command invokes the build_ext -i command to ensure any C extensions in the project have been built and are up-to-date, and the egg_info command to ensure the project’s metadata is updated (so that the runtime and wrappers know what the project’s dependencies are). If you make any changes to the project’s setup script or C extensions, you should rerun the develop command against all relevant staging areas to keep the project’s scripts, metadata and extensions up-to-date. or, tldr; Deploy your project in “development mode”, such that it’s available on sys.path, yet can still be edited directly from its source checkout. https://setuptools.readthedocs.io/en/latest/setuptools.html#develop-deploy-the-project-source-in-development-mode | 7 | 8 |
60,636,444 | 2020-3-11 | https://stackoverflow.com/questions/60636444/what-is-the-difference-between-x-test-x-train-y-test-y-train-in-sklearn | I'm learning sklearn and I didn't understand very good the difference and why use 4 outputs with the function train_test_split(). In the Documentation, I found some examples but it wasn't sufficient to end my doubts. Does the code use the X_train to predict the X_test or use the X_train to predict the y_test? What is the difference between train and test? Do I use train to predict the test or something similar? I'm very confused about it. I will let below the example provided in the Documentation. >>> import numpy as np >>> from sklearn.model_selection import train_test_split >>> X, y = np.arange(10).reshape((5, 2)), range(5) >>> X array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]) >>> list(y) [0, 1, 2, 3, 4] >>> X_train, X_test, y_train, y_test = train_test_split( ... X, y, test_size=0.33, random_state=42) ... >>> X_train array([[4, 5], [0, 1], [6, 7]]) >>> y_train [2, 0, 3] >>> X_test array([[2, 3], [8, 9]]) >>> y_test [1, 4] >>> train_test_split(y, shuffle=False) [[0, 1, 2], [3, 4]] | Below is a dummy pandas.DataFrame for example: import pandas as pd from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score, confusion_matrix, classification_report df = pd.DataFrame({'X1':[100,120,140,200,230,400,500,540,600,625], 'X2':[14,15,22,24,23,31,33,35,40,40], 'Y':[0,0,0,0,1,1,1,1,1,1]}) Here we have 3 columns, X1,X2,Y suppose X1 & X2 are your independent variables and 'Y' column is your dependent variable. X = df[['X1','X2']] y = df['Y'] With sklearn.model_selection.train_test_split you are creating 4 portions of data which will be used for fitting & predicting values. X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4,random_state=42) X_train, X_test, y_train, y_test Now 1). 
X_train - This includes your all independent variables,these will be used to train the model, also as we have specified the test_size = 0.4, this means 60% of observations from your complete data will be used to train/fit the model and rest 40% will be used to test the model. 2). X_test - This is remaining 40% portion of the independent variables from the data which will not be used in the training phase and will be used to make predictions to test the accuracy of the model. 3). y_train - This is your dependent variable which needs to be predicted by this model, this includes category labels against your independent variables, we need to specify our dependent variable while training/fitting the model. 4). y_test - This data has category labels for your test data, these labels will be used to test the accuracy between actual and predicted categories. Now you can fit a model on this data, let's fit sklearn.linear_model.LogisticRegression logreg = LogisticRegression() logreg.fit(X_train, y_train) #This is where the training is taking place y_pred_logreg = logreg.predict(X_test) #Making predictions to test the model on test data print('Logistic Regression Train accuracy %s' % logreg.score(X_train, y_train)) #Train accuracy #Logistic Regression Train accuracy 0.8333333333333334 print('Logistic Regression Test accuracy %s' % accuracy_score(y_pred_logreg, y_test)) #Test accuracy #Logistic Regression Test accuracy 0.5 print(confusion_matrix(y_test, y_pred_logreg)) #Confusion matrix print(classification_report(y_test, y_pred_logreg)) #Classification Report You can read more about metrics here Read more about data split here Hope this helps:) | 18 | 37 |
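The mechanics of the split itself can be sketched with the standard library alone - this illustrates the idea (shuffle reproducibly, then slice off the test portion), not sklearn's actual implementation:

```python
import random


def simple_train_test_split(X, y, test_size=0.4, seed=42):
    # Shuffle indices reproducibly, then slice off the test portion --
    # the same idea train_test_split implements (illustration only).
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    n_test = int(round(len(X) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return ([X[i] for i in train_idx], [X[i] for i in test_idx],
            [y[i] for i in train_idx], [y[i] for i in test_idx])


X = [[100, 14], [120, 15], [140, 22], [200, 24], [230, 23],
     [400, 31], [500, 33], [540, 35], [600, 40], [625, 40]]
y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
X_train, X_test, y_train, y_test = simple_train_test_split(X, y)
```

With `test_size=0.4` and 10 rows, this yields 6 training and 4 test rows, and every original row ends up in exactly one of the two portions.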
60,635,934 | 2020-3-11 | https://stackoverflow.com/questions/60635934/get-second-minimum-values-per-column-in-2d-array | How can I get the second minimum value from each column? I have this array: A = [[72 76 44 62 81 31] [54 36 82 71 40 45] [63 59 84 36 34 51] [58 53 59 22 77 64] [35 77 60 76 57 44]] I wish to have output like: A = [54 53 59 36 40 44] | Try this, in just one line: [sorted(i)[1] for i in zip(*A)] in action: In [12]: A = [[72, 76, 44, 62, 81, 31], ...: [54 ,36 ,82 ,71 ,40, 45], ...: [63 ,59, 84, 36, 34 ,51], ...: [58, 53, 59, 22, 77 ,64], ...: [35 ,77, 60, 76, 57, 44]] In [18]: [sorted(i)[1] for i in zip(*A)] Out[18]: [54, 53, 59, 36, 40, 44] zip(*A) will transpose your list of list so the columns become rows. and if you have duplicate value, for example: In [19]: A = [[72, 76, 44, 62, 81, 31], ...: [54 ,36 ,82 ,71 ,40, 45], ...: [63 ,59, 84, 36, 34 ,51], ...: [35, 53, 59, 22, 77 ,64], # 35 ...: [35 ,77, 50, 76, 57, 44],] # 35 If you need to skip both 35s, you can use set(): In [29]: [sorted(list(set(i)))[1] for i in zip(*A)] Out[29]: [54, 53, 50, 36, 40, 44] | 17 | 12 |
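If the data is already a NumPy array, the same per-column second minimum can be computed without a Python-level loop via `np.partition`, which guarantees that position k along the axis holds the k-th smallest value:

```python
import numpy as np

A = np.array([[72, 76, 44, 62, 81, 31],
              [54, 36, 82, 71, 40, 45],
              [63, 59, 84, 36, 34, 51],
              [58, 53, 59, 22, 77, 64],
              [35, 77, 60, 76, 57, 44]])

# Row 1 of the partitioned array holds each column's second-smallest value.
second_min = np.partition(A, 1, axis=0)[1]
```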
60,585,818 | 2020-3-8 | https://stackoverflow.com/questions/60585818/what-does-it-mean-to-inherit-global-site-packages-in-pycharm | When creating a new Python project, why would I want to select this option? If I don't select it, what functionality am I missing out on? Would I not be able to import certain Python modules? | It's just an option to pre-install some packages that you're using everytime, or if it doesn't bother you to have extra packages in your local python interpreted select it : all packages installed in the global python of your machine will be accessible for the interpreter you're going to create in the virtualenv. do not select it : the interpreter you're going to create in the virtualenv will just have the basic, like pip, and setuptools, then you can install just what you need Python global and venv : The global python, is the one in /usr/bin in Linux, or wherever in Windows, this is the main installation of the program, and you can add extra packages using pip When you're working on something, you may need only some packages, or specific version so not using the global Python. You can create a virtualenv, or pyenv, that will link a local python to the global one, for the main python functionnality, but the packages will be installed only in the virtualenv (and when using Pycharm, it can install for you the main package into the virtualenv you're creating) | 22 | 11 |
60,618,346 | 2020-3-10 | https://stackoverflow.com/questions/60618346/pytorch-autograd-gives-different-gradients-when-using-clamp-instead-of-torch-re | I'm still working on my understanding of the PyTorch autograd system. One thing I'm struggling at is to understand why .clamp(min=0) and nn.functional.relu() seem to have different backward passes. It's especially confusing as .clamp is used equivalently to relu in PyTorch tutorials, such as https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-nn. I found this when analysing the gradients of a simple fully connected net with one hidden layer and a relu activation (linear in the outputlayer). to my understanding the output of the following code should be just zeros. I hope someone can show me what I am missing. import torch dtype = torch.float x = torch.tensor([[3,2,1], [1,0,2], [4,1,2], [0,0,1]], dtype=dtype) y = torch.ones(4,4) w1_a = torch.tensor([[1,2], [0,1], [4,0]], dtype=dtype, requires_grad=True) w1_b = w1_a.clone().detach() w1_b.requires_grad = True w2_a = torch.tensor([[-1, 1], [-2, 3]], dtype=dtype, requires_grad=True) w2_b = w2_a.clone().detach() w2_b.requires_grad = True y_hat_a = torch.nn.functional.relu(x.mm(w1_a)).mm(w2_a) y_a = torch.ones_like(y_hat_a) y_hat_b = x.mm(w1_b).clamp(min=0).mm(w2_b) y_b = torch.ones_like(y_hat_b) loss_a = (y_hat_a - y_a).pow(2).sum() loss_b = (y_hat_b - y_b).pow(2).sum() loss_a.backward() loss_b.backward() print(w1_a.grad - w1_b.grad) print(w2_a.grad - w2_b.grad) # OUT: # tensor([[ 0., 0.], # [ 0., 0.], # [ 0., -38.]]) # tensor([[0., 0.], # [0., 0.]]) # | The reason is that relu and clamp produce different gradients at 0. For a scalar tensor x = 0: (relu(x) - 1.0).pow(2).backward() gives x.grad == 0 (x.clamp(min=0) - 1.0).pow(2).backward() gives x.grad == -2 This indicates that: relu chooses x == 0 --> grad = 0 clamp chooses x == 0 --> grad = 1 | 9 | 8 |
60,584,388 | 2020-3-8 | https://stackoverflow.com/questions/60584388/how-to-check-typevars-type-at-runtime | I have a generic class Graph(Generic[T]). Is there a function that returns the type arguments passed to the class Graph? >>> g = Graph[int]() >>> magic_func(g) <class 'int'> | Here is one way to achieve this which works from Python 3.6+ (tested it in 3.6, 3.7 and 3.8): from typing import TypeVar, Generic T = TypeVar('T') class Graph(Generic[T], object): def get_generic_type(self): print(self.__orig_class__.__args__[0]) if __name__=='__main__': g_int = Graph[int]() g_str = Graph[str]() g_int.get_generic_type() g_str.get_generic_type() Output: <class 'int'> <class 'str'> If you want to get the type inside __new__ or __init__ things get a little bit tricky, see the following post for more info: Generic[T] base class - how to get type of T from within instance? Edit The library pytypes seems to provide a method that allow to get __orig_class__ even from __init__, check the method get_orig_class available here: https://github.com/Stewori/pytypes/blob/master/pytypes/type_util.py | 9 | 7 |
60,536,472 | 2020-3-5 | https://stackoverflow.com/questions/60536472/building-python-and-openssl-from-source-but-ssl-module-fails | I'm trying to build Python and OpenSSL from source in a container. Both seem to build correctly, but Python does not successfully create the _ssl module. I've found a few guides online that say to un-comment and lines from Python-3.X.X/Modules/Setup and add the --openssldir=/usr/local/ssl flag to the ./configure step for OpenSSL. I do these in my dockerfile. This has had the effect that, during the ./configure output for Python, I see the following line. checking for X509_VERIFY_PARAM_set1_host in libssl... yes Yet I receive the following errors: [91m*** WARNING: renaming "_ssl" since importing it failed: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: version `OPENSSL_1_1_1' not found (required by build/lib.linux-x86_64-3.8/_ssl.cpython-38-x86_64-linux-gnu.so) [0m[91m*** WARNING: renaming "_hashlib" since importing it failed: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: version `OPENSSL_1_1_1' not found (required by build/lib.linux-x86_64-3.8/_hashlib.cpython-38-x86_64-linux-gnu.so) [0m Python build finished successfully! ... Following modules built successfully but were removed because they could not be imported: _hashlib _ssl Could not build the ssl module! Python requires an OpenSSL 1.0.2 or 1.1 compatible libssl with X509_VERIFY_PARAM_set1_host(). LibreSSL 2.6.4 and earlier do not provide the necessary APIs, https://github.com/libressl-portable/portable/issues/381 If ./configure finds X509..., why am I still getting the hashlib and ssl errors? 
The full Dockerfile, FWIW: FROM jenkins/jenkins:lts USER root RUN apt-get update && apt-get install -y apt-utils gcc make zlib1g-dev \ build-essential libffi-dev checkinstall libsqlite3-dev RUN wget https://www.openssl.org/source/openssl-1.1.1d.tar.gz && \ tar xzf openssl-1.1.1d.tar.gz && \ cd openssl-1.1.1d && \ ./config -Wl,--enable-new-dtags,-rpath,'$(LIBRPATH)' --prefix=/usr/local/ssl --openssldir=/usr/local/ssl && \ make && \ make test && \ make install RUN wget -q https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tgz && \ tar -xzf Python-3.8.2.tgz && \ cd Python-3.8.2 && \ ./configure && \ make && \ make install USER jenkins | The following worked for me on Amazon's EC2, with the default CentOS 7 system. First, the openssl libraries on CentOS 7 are too old (Python 3.9+ wants openssl 1.1.1+ and the version available is 1.0.x). Install the newer ones: sudo yum install openssl11-devel Note: since writing this answer, Amazon has end-of-life'd the openssl11-devel package and updated openssl-devel to 3.0.8. 3.0.8 is more than enough for Python, so now you can just do yum install openssl-devel. Unfortunately, CentOS doesn't actually put the SSL libraries anywhere that Python can find them. With some trial and error, I found that this makes ./configure happy: export OPENSSL_LIBS=/usr/lib64/libssl.so ./configure \ --with-openssl=/usr \ --with-openssl-rpath=/usr/lib64 \ --enable-optimizations Explanation: Python doesn't look in lib64 by default, which is where openssl11 is installed. --with-openssl takes a path that ./configure appends include/openssl/ssl.h to, so make sure that is there for your system! When you run ./configure, you're looking for a line near the end of the output like: checking for stdlib extension module _ssl... yes If you see missing instead of yes, search config.log for openssl, which should give you some guidance about where it's screwing up. Hopefully this saves someone else the many hours I spent figuring this out. | 7 | 15 |
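Once a build succeeds, one quick sanity check is to ask the new interpreter which OpenSSL its `_ssl` module was actually linked against:

```python
import ssl

# Which OpenSSL did this interpreter's _ssl module link against?
print(ssl.OPENSSL_VERSION)       # a human-readable version string
print(ssl.OPENSSL_VERSION_INFO)  # a tuple like (1, 1, 1, ...) or (3, 0, ...)
linked_111_or_newer = ssl.OPENSSL_VERSION_INFO >= (1, 1, 1)
```

If `import ssl` succeeds here and the tuple is at least (1, 1, 1), the interpreter picked up the intended libssl rather than the system's older copy.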
60,567,732 | 2020-3-6 | https://stackoverflow.com/questions/60567732/poetry-force-install-when-versions-are-incompatible | Poetry has a very good version solver, too good sometimes :) I'm trying to use poetry in a project that uses two incompatible packages. However they are incompatible only by declaration as one of them is no longer developed, but otherwise they work together just fine. With pip I'm able to install these in one environment (with an error printed) and it works. Poetry will declare that the dependencies versions can't be resolved and refuse to install anything. Is there a way to force poetry to install these incompatible dependencies? Thank you! | No. Alternative solutions might be: contacting the offending package's maintainers and asking for a fix + release forking the package and releasing a fix yourself vendoring the package in your source code - there is no need to install it if it's already there, and many of the usual downsides to vendoring disappear if the project in question is not maintained anymore installing the package by hand post poetry install with an installer that has the option to ignore the dependency resolver, like pip (similar to what you're already doing) | 21 | 11 |
60,615,531 | 2020-3-10 | https://stackoverflow.com/questions/60615531/creating-dynamic-classes-in-sqlalchemy | We have 1 table with a large amount of data and DBA's partitioned it based on a particular parameter. This means I ended up with Employee_TX, Employee_NY kind of table names. Earlier the models.py was simple as in -- class Employee(Base): __tablename__ = 'Employee' name = Column... state = Column... Now, I don't want to create 50 new classes for the newly partitioned tables as anyways my columns are the same. Is there a pattern where I can create a single class and then use it in query dynamically? session.query(<Tablename>).filter().all() Maybe some kind of Factory pattern or something is what I'm looking for. So far I've tried by running a loop as for state in ['CA', 'TX', 'NY']: class Employee(Base): __qualname__ = __tablename__ = 'Employee_{}'.format(state) name = Column... state = Column... but this doesn't work and I get a warning as - SAWarning: This declarative base already contains a class with the same class name and module name as app_models.employee, and will be replaced in the string-lookup table. Also it can't find the generated class when I do from app_models import Employee_TX This is a flask app with PostgreSQL as a backend and sqlalchemy is used as an ORM | Got it by creating a custom function like - def get_model(state): DynamicBase = declarative_base(class_registry=dict()) class MyModel(DynamicBase): __tablename__ = 'Employee_{}'.format(state) name = Column... state = Column... return MyModel And then from my services.py, I just call with get_model(TX) | 7 | 7 |
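The factory idea generalizes beyond SQLAlchemy. This SQLAlchemy-free sketch uses `type()` and adds a cache - which matters in the real ORM case, since re-creating a mapped class (and a fresh declarative base) on every call is wasteful:

```python
# SQLAlchemy-free sketch of the same per-state factory using type().
# The attribute here stands in for the real Column definitions.
_model_cache = {}


def get_model(state):
    if state not in _model_cache:
        name = "Employee_{}".format(state)
        _model_cache[state] = type(name, (object,), {"__tablename__": name})
    return _model_cache[state]
```

Repeated calls with the same state return the same class object, while different states get distinct classes.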
60,580,332 | 2020-3-7 | https://stackoverflow.com/questions/60580332/poetry-virtual-environment-already-activated | Running the following poetry shell returns the following error /home/harshagoli/.poetry/lib/poetry/_vendor/py2.7/subprocess32.py:149: RuntimeWarning: The _posixsubprocess module is not being used. Child process reliability may suffer if your program uses threads. "program uses threads.", RuntimeWarning) The currently activated Python version 2.7.17 is not supported by the project (^3.7). Trying to find and use a compatible version. Using python3 (3.7.5) Virtual environment already activated: /home/harshagoli/.cache/pypoetry/virtualenvs/my-project-0wt3KWFj-py3.7 How can I get past this error? Why doesn't this command work? | poetry shell is a really buggy command, and this is often talked about among the maintainers. A workaround for this specific issue is to activate the shell manually. It might be worth aliasing the following source $(poetry env info --path)/bin/activate so you need to paste this into your .bash_aliases or .bashrc alias acpoet='source $(poetry env info --path)/bin/activate' (single quotes, so the command substitution runs when the alias is used rather than when the file is sourced) Now you can run acpoet to activate your poetry env (don't forget to source your file to enable the command) | 41 | 54 |
60,532,973 | 2020-3-4 | https://stackoverflow.com/questions/60532973/how-do-i-get-a-is-safe-url-function-to-use-with-flask-and-how-does-it-work | Flask-Login recommends having an is_safe_url() function after user login: Here is a link to the part of the documentation that discusses this: https://flask-login.readthedocs.io/en/latest/#login-example They link to this snippet but I don't understand how it implements is_safe_url(): https://palletsprojects.com/p/flask/ next = request.args.get('next') if not is_safe_url(next): return abort(400) This doesn't seem to come with Flask. I'm relatively new to coding. I want to understand: What exactly is happening when request gets the next argument? What does the is_safe_url() function do to ensure the URL is safe? Does the next URL need to be checked on login only? Or are there other places and times when it is important to include this security measure? And most importantly: is there a reliable is_safe_url() function that I can use with Flask? Edit: Added link to Flask-Login documentation and included snippet. | As mentioned in the comments, Flask-Login today had a dead link in the documentation (issue on GitHub). Please note the warning in the original flask snippets documentation: Snippets are unofficial and unmaintained. No Flask maintainer has curated or checked the snippets for security, correctness, or design. The snippet is from urllib.parse import urlparse, urljoin def is_safe_url(target): ref_url = urlparse(request.host_url) test_url = urlparse(urljoin(request.host_url, target)) return test_url.scheme in ('http', 'https') and \ ref_url.netloc == test_url.netloc Now to address your questions: What exactly is happening when request gets the next argument? Part of the code we are focusing on here is next = request.args.get('next') return redirect(next or url_for('dashboard')) which redirects user to dashboard (e.g. after successful login) by default. However, if user tried to reach for e.g. 
endpoint profile and wasn't logged in we would want to redirect him to the login page. After logging in default redirect would redirect user to dashboard and not to profile where he intended to go. To provide better user experience we can redirect user to his profile page by building URL /login?next=profile, which enables flask to redirect to profile instead of the default dashboard. Since user can abuse URLs we want to check if URL is safe, or abort otherwise. What does the is_safe_url() function do to ensure the URL is safe? The snippet in question is a function that ensures that a redirect target will lead to the same server. Does the next URL need to be checked on login only? Or are there other places and times when it is important to include this security measure? No, you should check all dangerous URLs. Example of safe URL redirect would be redirect(url_for('index')), since its hardcoded in your program. See examples of safe and dangerous URLs on Unvalidated Redirects and Forwards - OWASP cheatsheet. And most importantly: is there a reliable is_safe_url() function that I can use with Flask? There is Django's is_safe_url() bundled as a standalone package on pypi. | 19 | 26 |
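The snippet can be made framework-free by passing the host URL in explicitly - here `host_url` stands in for Flask's `request.host_url` - which also makes it easy to exercise against safe and dangerous targets:

```python
from urllib.parse import urljoin, urlparse


def is_safe_url(target, host_url):
    # Standalone version of the snippet: host_url stands in for Flask's
    # request.host_url, so no request context is needed.
    ref_url = urlparse(host_url)
    test_url = urlparse(urljoin(host_url, target))
    return (test_url.scheme in ("http", "https")
            and ref_url.netloc == test_url.netloc)


host = "http://example.com/"
```

Relative targets like `/dashboard` or `profile` resolve to the same host and pass; absolute URLs pointing elsewhere, and non-HTTP schemes like `javascript:`, are rejected.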
60,558,412 | 2020-3-6 | https://stackoverflow.com/questions/60558412/how-to-decode-a-video-memory-file-byte-string-and-step-through-it-frame-by-f | I am using python to do some basic image processing, and want to extend it to process a video frame by frame. I get the video as a blob from a server - .webm encoded - and have it in python as a byte string (b'\x1aE\xdf\xa3\xa3B\x86\x81\x01B\xf7\x81\x01B\xf2\x81\x04B\xf3\x81\x08B\x82\x88matroskaB\x87\x81\x04B\x85\x81\x02\x18S\x80g\x01\xff\xff\xff\xff\xff\xff\xff\x15I\xa9f\x99*\xd7\xb1\x83\x0fB@M\x80\x86ChromeWA\x86Chrome\x16T\xaek\xad\xae\xab\xd7\x81\x01s\xc5\x87\x04\xe8\xfc\x16\t^\x8c\x83\x81\x01\x86\x8fV_MPEG4/ISO/AVC\xe0\x88\xb0\x82\x02\x80\xba\x82\x01\xe0\x1fC\xb6u\x01\xff\xff\xff\xff\xff\xff ...). I know that there is cv.VideoCapture, which can do almost what I need. The problem is that I would have to first write the file to disk, and then load it again. It seems much cleaner to wrap the string, e.g., into an IOStream, and feed it to some function that does the decoding. Is there a clean way to do this in python, or is writing to disk and loading it again the way to go? | Two years after Rotem wrote his answer there is now a cleaner / easier way to do this using ImageIO. Note: Assuming ffmpeg is in your path, you can generate a test video to try this example using: ffmpeg -f lavfi -i testsrc=duration=10:size=1280x720:rate=30 testsrc.webm import imageio.v3 as iio from pathlib import Path webm_bytes = Path("testsrc.webm").read_bytes() # read all frames from the bytes string frames = iio.imread(webm_bytes, index=None, format_hint=".webm") frames.shape # Output: # (300, 720, 1280, 3) for frame in iio.imiter(webm_bytes, format_hint=".webm"): print(frame.shape) # Output: # (720, 1280, 3) # (720, 1280, 3) # (720, 1280, 3) # ... 
To use this you'll need the ffmpeg backend (which implements a solution similar to what Rotem proposed): pip install imageio[ffmpeg] In response to Rotem's comment a bit of explanation: The above snippet uses imageio==2.16.0. The v3 API is an upcoming user-facing API that streamlines reading and writing. The API is available since imageio==2.10.0, however, you will have to use import imageio as iio and use iio.v3.imiter and iio.v3.imread on versions older than 2.16.0. The ability to read video bytes has existed forever (>5 years and counting) but has (as I am just now realizing) never been documented directly ... so I will add a PR for that soon™ :) On older versions (tested on v2.9.0) of ImageIO (v2 API) you can still read video byte strings; however, this is slightly more verbose: import imageio as iio import numpy as np from pathlib import Path webm_bytes = Path("testsrc.webm").read_bytes() # read all frames from the bytes string frames = np.stack(iio.mimread(webm_bytes, format="FFMPEG", memtest=False)) # iterate over frames one by one reader = iio.get_reader(webm_bytes, format="FFMPEG") for frame in reader: print(frame.shape) reader.close() | 7 | 6 |
60,599,149 | 2020-3-9 | https://stackoverflow.com/questions/60599149/what-is-the-difference-between-numpy-randoms-generator-class-and-np-random-meth | I have been using numpy's random functionality for a while, by calling methods such as np.random.choice() or np.random.randint() etc. I just now found about the ability to create a default_rng object, or other Generator objects: from numpy.random import default_rng gen = default_rng() random_number = gen.integers(10) So far I would have always used np.random.randint(10) instead, and I am wondering what the difference between both ways is. The only benefit I can think of would be keeping track of multiple seeds, or wanting to use specific PRNGs, but maybe there are also differences for a more generic use-case? | numpy.random.* functions (including numpy.random.binomial) make use of a global pseudorandom number generator (PRNG) object which is shared across the application. On the other hand, default_rng() is a self-contained Generator object that doesn't rely on global state. If you don't care about reproducible "randomness" in your application, these two approaches are equivalent for the time being. Although NumPy's new RNG policy discourages the use of global state in general, it did not deprecate any numpy.random.* functions in version 1.17, although a future version of NumPy might. Note also that because numpy.random.* functions rely on a global PRNG object that isn't thread-safe, these functions can cause race conditions if your application uses multiple threads. (Generator objects are not thread-safe, either, but there are ways to generate pseudorandom numbers via multithreading, without the need to share PRNG objects across threads.) | 8 | 13 |
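The practical difference is easy to demonstrate: seeded `Generator` objects are reproducible and independent of the shared global state that `numpy.random.*` functions mutate:

```python
import numpy as np
from numpy.random import default_rng

# Two self-contained Generator objects seeded identically produce the same
# stream, independent of each other and of the global numpy.random state.
rng_a = default_rng(12345)
rng_b = default_rng(12345)

np.random.seed(0)          # the shared global state...
_ = np.random.randint(10)  # ...advances only the legacy global stream

draws_a = rng_a.integers(10, size=5)
draws_b = rng_b.integers(10, size=5)
```

Touching the global stream between the two generators has no effect on either of them, which is exactly the isolation the new RNG policy is after.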
60,593,604 | 2020-3-9 | https://stackoverflow.com/questions/60593604/importerror-attempted-relative-import-with-no-known-parent-package | I am learning to program with python and I am having issues with importing from a module in a package. I am using Visual Studio Code with Python 3.8.2 64 bit. My Project Directory
.vscode
├── ecommerce
│   ├── __init__.py
│   ├── database.py
│   ├── products.py
│   └── payments
│       ├── __init__.py
│       ├── authorizenet.py
│       └── paypal.py
├── __init__.py
└── main.py
in the ecommerce/products.py file I have:
#products.py
from .database import Database
p = Database(3,2)
So that I can import the Database class from the ecommerce/database.py file. But I get the error ImportError: Attempted relative import with no known parent package | It seems, from Python docs and experimenting, that relative imports (involving ., .. etc.) only work if the importing module has a __name__ other than __main__, and further, the __name__ of the importing module is pkg.module_name, i.e., it has to be imported from above in the directory hierarchy (to have a parent pkg as part of its __name__), OR the importing module is specified via module syntax that includes a parent pkg, as in python -m pkg.module, in which case its __name__ is still __main__, so it is being run as a script, yet relative imports will work. Here __package__ is set and used to find the parent package while __name__ is __main__; more here. [After all that, it appears that __package__ and sys.path are key to determining if/how relative imports work. __name__ indicates script or module (i.e., __main__ or module_name). __package__ indicates where in the package the relative imports occur with respect to, and the top of __package__ needs to be in sys.path.]
So, continuing with @AmitTendulkar 's example, if you run this as > python main.py or > python -m main or > python -m ecommerce.products from the project root directory, or enter interactive python from that root directory and import main, or import ecommerce.products the relative imports in products.py will work. But if you > python products.py or > python -m products from within ecommerce directory, or enter interactive python from that ecommerce directory and import products they will fail. It is helpful to add print("In module products __package__, __name__ ==", __package__, __name__) etc. in each file to debug. UPDATE: How imports work depend on sys.path and __package__, not on __name__. Issued from /home/jj, > python sub/mod.py has a sys.path, __package__ of /home/jj/sub, None -absolute imports of modules in sys.path work, relative imports fail. > python -m sub.mod has sys.path, __package__ of /home/jj, sub -absolute imports of modules in sys.path work, relative imports work relative to sys.path + __package__. It is more helpful to add import sys print("In module products sys.path[0], __package__ ==", sys.path[0], __package__) etc. in each file to debug. | 139 | 37 |
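The distinction above can be demonstrated end to end with a stdlib-only sketch. The ecommerce/database/products names mirror the question; the Database signature is invented here just for illustration:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Recreate a miniature version of the question's layout:
    # ecommerce/__init__.py, ecommerce/database.py, ecommerce/products.py
    pkg = os.path.join(tmp, "ecommerce")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "database.py"), "w") as f:
        f.write("class Database:\n"
                "    def __init__(self, rows, cols):\n"
                "        self.rows, self.cols = rows, cols\n")
    with open(os.path.join(pkg, "products.py"), "w") as f:
        f.write("from .database import Database\n"
                "print(Database(3, 2).rows)\n")

    # Module syntax from the package's parent: __package__ is 'ecommerce',
    # so the relative import resolves.
    ok = subprocess.run([sys.executable, "-m", "ecommerce.products"],
                        cwd=tmp, capture_output=True, text=True)

    # Same file run directly as a script: no parent package is known,
    # so the relative import raises ImportError.
    bad = subprocess.run([sys.executable, os.path.join(pkg, "products.py")],
                         capture_output=True, text=True)

print(ok.returncode, ok.stdout.strip())                       # -> 0 3
print(bad.returncode != 0, "relative import" in bad.stderr)   # -> True True
```

Running the same file both ways like this is often the quickest way to see whether __package__ is being set in your own project.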
60,599,812 | 2020-3-9 | https://stackoverflow.com/questions/60599812/how-can-i-customize-mplfinance-plot | I've made a python script to convert a csv file into a candlestick chart like this using mpl_finance, this is the script:
import matplotlib.pyplot as plt
from mpl_finance import candlestick_ohlc
import pandas as pd
import matplotlib.dates as mpl_dates

plt.style.use('ggplot')

# Extracting Data for plotting
data = pd.read_csv('CSV.csv')
ohlc = data.loc[:, ['Date', 'Open', 'High', 'Low', 'Close']]
ohlc['Date'] = pd.to_datetime(ohlc['Date'])
ohlc['Date'] = ohlc['Date'].apply(mpl_dates.date2num)
ohlc = ohlc.astype(float)

# Creating Subplots
fig, ax = plt.subplots()
plt.axis('off')
fig.patch.set_facecolor('black')

candlestick_ohlc(ax, ohlc.values, width=0.6, colorup='green', colordown='red', alpha=0.8)
plt.show()
Now I need to do the same thing but using mplfinance instead of mpl_finance and I've tried like this:
import mplfinance as mpf
# Load data file.
df = pd.read_csv('CSV.csv', index_col=0, parse_dates=True)

# Plot candlestick.
# Add volume.
# Add moving averages: 3,6,9.
# Save graph to *.png.
mpf.plot(df, type='candle', style='charles',
        title='',
        ylabel='',
        ylabel_lower='',
        volume=True,
        mav=(3,6,9),
        savefig='test-mplfiance.png')
And I have this result: So, now I need to change the background color from white to black, remove the grid and remove the axes, but I have no idea how to do it. Thanks to everyone who will spend time replying to me. [EDIT]: this is an old question I made when mpl_finance was at its first stage; now a lot of things have changed and this question is obsolete. | The best way to do this is to define your own style using mpf.make_mpf_style() rather than using the default mpf styles.
If using the external axes method in mplfinance, you can plot multiple charts as below:
# add your own style by passing in kwargs
s = mpf.make_mpf_style(base_mpf_style='charles', rc={'font.size': 6})
fig = mpf.figure(figsize=(10, 7), style=s)  # pass in the self defined style to the whole canvas
ax = fig.add_subplot(2,1,1)  # main candle stick chart subplot, you can also pass in the self defined style here only for this subplot
av = fig.add_subplot(2,1,2, sharex=ax)  # volume chart subplot
mpf.plot(price_data, type='candle', ax=ax, volume=av)
The default mpf styles are as below. I believe 'mike' and 'nightclouds' have dark backgrounds; not 100% sure about the others, you can choose to work on top of these two.
In [5]: mpf.available_styles()
Out[5]: ['binance', 'blueskies', 'brasil', 'charles', 'checkers', 'classic', 'default', 'mike', 'nightclouds', 'sas', 'starsandstripes', 'yahoo']
Link to visualize the default mplfinance styles
The arguments that can be passed in mpf.make_mpf_style() are as below; you can use base_mpf_style, facecolor, gridcolor, gridstyle, gridaxis, rc to customise your own style, and give it a name by using style_name. You can play around with these arguments to see how they turn out.
def _valid_make_mpf_style_kwargs(): vkwargs = { 'base_mpf_style': { 'Default' : None, 'Validator' : lambda value: value in _styles.keys() }, 'base_mpl_style': { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, # and is in plt.style.available 'marketcolors' : { 'Default' : None, # 'Validator' : lambda value: isinstance(value,dict) }, 'mavcolors' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,list) }, # TODO: all([mcolors.is_color_like(v) for v in value.values()]) 'facecolor' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, 'edgecolor' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, 'figcolor' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, 'gridcolor' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, 'gridstyle' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, 'gridaxis' : { 'Default' : None, 'Validator' : lambda value: value in [ 'vertical'[0:len(value)], 'horizontal'[0:len(value)], 'both'[0:len(value)] ] }, 'y_on_right' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,bool) }, 'rc' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,dict) }, 'style_name' : { 'Default' : None, 'Validator' : lambda value: isinstance(value,str) }, } _validate_vkwargs_dict(vkwargs) return vkwargs | 21 | 18 |
60,600,529 | 2020-3-9 | https://stackoverflow.com/questions/60600529/cannot-import-name-string-int-label-map-pb2 | My goal is to run the tensorflow object detection API and I followed the steps in the installation. I installed the tensorflow object detection API and protobuf. I have also added the path to protobuf. But the following error shoots up: ImportError: cannot import name 'string_int_label_map_pb2' Installed protobuf:
%%bash
cd models/research
protoc object_detection/protos/*.proto --python_out=.
A block of code containing the error import statements:
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util | Install protoc-3.11.4 from https://github.com/google/protobuf/releases and run protoc object_detection/protos/*.proto --python_out=. as mentioned in the installation instructions, then put the generated files in object_detection/protos | 8 | 13 |
60,616,430 | 2020-3-10 | https://stackoverflow.com/questions/60616430/mlflow-how-to-read-metrics-or-params-from-an-existing-run | I try to read metrics in this way:
data, info = mlflow.get_run(run_id)
print(data[1].metrics)
# example of output: {'loss': 0.01}
But it gets only the last value. Is it possible to manually read all steps of a particular metric? | I ran into this same problem and was able to get all of the values for the metric by using mlflow.tracking.MlflowClient().get_metric_history. This will return every value you logged using mlflow.log_metric(key, value). Quick example (untested)
import mlflow
trackingDir = 'file:///....'
registryDir = 'file:///...'
runID = 'my run id'
metricKey = 'loss'
client = mlflow.tracking.MlflowClient(
    tracking_uri=trackingDir,
    registry_uri=registryDir,
)
metrics = client.get_metric_history(runID, metricKey)
From the docs get_metric_history(run_id, key)[source] Return a list of metric objects corresponding to all values logged for a given metric. Parameters run_id – Unique identifier for run key – Metric name within the run Returns A list of mlflow.entities.Metric entities if logged, else empty list
from mlflow.tracking import MlflowClient

def print_metric_info(history):
    for m in history:
        print("name: {}".format(m.key))
        print("value: {}".format(m.value))
        print("step: {}".format(m.step))
        print("timestamp: {}".format(m.timestamp))
        print("--")

# Create a run under the default experiment (whose id is "0"). Since this is a low-level
# CRUD operation, the method will create a run. To end the run, you'll have
# to explicitly end it.
client = MlflowClient()
experiment_id = "0"
run = client.create_run(experiment_id)
print("run_id:{}".format(run.info.run_id))
print("--")

# Log a couple of metrics, update their initial value, and fetch each
# logged metric's history.
for k, v in [("m1", 1.5), ("m2", 2.5)]: client.log_metric(run.info.run_id, k, v, step=0) client.log_metric(run.info.run_id, k, v + 1, step=1) print_metric_info(client.get_metric_history(run.info.run_id, k)) client.set_terminated(run.info.run_id) | 7 | 12 |
60,607,824 | 2020-3-9 | https://stackoverflow.com/questions/60607824/pytorch-imagenet-dataset | I am unable to download the original ImageNet dataset from their official website. However, I found out that pytorch has ImageNet as one of its torchvision datasets. Q1. Is that the original ImageNet dataset? Q2. How do I get the classes for the dataset like it's being done in Cifar-10? classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] | The torchvision.datasets.ImageNet is just a class which allows you to work with the ImageNet dataset. You have to download the dataset yourself (e.g. from http://image-net.org/download-images) and pass the path to it as the root argument to the ImageNet class object. Note that the option to download it directly by passing the flag download=True is no longer possible:
if download is True:
    msg = ("The dataset is no longer publicly accessible. You need to "
           "download the archives externally and place them in the root "
           "directory.")
    raise RuntimeError(msg)
elif download is False:
    msg = ("The use of the download flag is deprecated, since the dataset "
           "is no longer publicly accessible.")
    warnings.warn(msg, RuntimeWarning)
(source) If you just need to get the class names and the corresponding indices without downloading the whole dataset (e.g. if you are using a pretrained model and want to map the predictions to labels), then you can download them e.g. from here or from this github gist. | 11 | 17 |
60,532,678 | 2020-3-4 | https://stackoverflow.com/questions/60532678/what-is-the-difference-between-miniconda-and-miniforge | The miniforge installer is a relatively new, community-led, minimal conda installer that (as it says in its readme) "can be directly compared to Miniconda, with the added feature that conda-forge is the default channel". It is unclear what is different between miniforge and Miniconda, or what the miniforge use case is. If miniforge is the same as Miniconda except it just uses the conda-forge channel by default, why create a whole different installer - why not just use miniconda and add conda-forge as the first channel to use in ~/.condarc? If miniforge is different from Miniconda, what is different about the two? | miniforge is the community (conda-forge) driven minimalistic conda installer. Subsequent package installations thus come from the conda-forge channel. miniconda is the Anaconda (company) driven minimalistic conda installer. Subsequent package installations come from the anaconda channels (default or otherwise). miniforge started a few months ago because miniconda doesn't support aarch64; very quickly the 'PyPy' people jumped on board, and in the meantime there are also miniforge versions for all Linux architectures, as well as MacOS. Soon there will also be a Windows variant (hopefully also for both CPython and PyPy). I guess that an ARMv7 (32Bit ARM) variant is also on the horizon (Raspbian) | 91 | 84 |
60,582,050 | 2020-3-7 | https://stackoverflow.com/questions/60582050/lightgbmerror-do-not-support-special-json-characters-in-feature-name-the-same | I have the following code: most_important = features_importance_chi(importance_score_tresh, df_user.drop(columns = 'CHURN'),churn) X = df_user.drop(columns = 'CHURN') churn[churn==2] = 1 y = churn # handle undersample problem X,y = handle_undersampe(X,y) # train the model X=X.loc[:,X.columns.isin(most_important)].values y=y.values parameters = { 'application': 'binary', 'objective': 'binary', 'metric': 'auc', 'is_unbalance': 'true', 'boosting': 'gbdt', 'num_leaves': 31, 'feature_fraction': 0.5, 'bagging_fraction': 0.5, 'bagging_freq': 20, 'learning_rate': 0.05, 'verbose': 0 } # split data x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y) train_data = lightgbm.Dataset(x_train, label=y_train) test_data = lightgbm.Dataset(x_test, label=y_test) model = lightgbm.train(parameters, train_data, valid_sets=[train_data, test_data], **feature_name=most_important,** num_boost_round=5000, early_stopping_rounds=100) and function which returns most_important parameter def features_importance_chi(importance_score_tresh, X, Y): model = ExtraTreesClassifier(n_estimators=10) model.fit(X,Y.values.ravel()) feature_list = pd.Series(model.feature_importances_, index=X.columns) feature_list = feature_list[feature_list > importance_score_tresh] feature_list = feature_list.index.values.tolist() return feature_list Funny thing is that this code in Spyder returns the following error LightGBMError: Do not support special JSON characters in feature name. but in jupyter works fine. I am able to print the list of most important features. Any idea what could be the reason for this error? | You know what, this message is often found on LGBMClassifier () models, i.e. LGBM. 
Simply drop this line in at the beginning, as soon as you load the data with pandas, whenever you have a problem with your column headers: import re df = df.rename(columns = lambda x:re.sub('[^A-Za-z0-9_]+', '', x)) | 12 | 41 |
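To illustrate what that lambda actually does to offending column names, here is a pure-stdlib sketch (no LightGBM or pandas needed; the column names are made up):

```python
import re

def clean(name):
    # Strip every character LightGBM rejects: anything outside A-Za-z0-9_
    return re.sub(r'[^A-Za-z0-9_]+', '', name)

cols = ['churn rate', 'income[$]', 'age>30', 'is_active']
print([clean(c) for c in cols])
# -> ['churnrate', 'income', 'age30', 'is_active']
```

One caveat: stripping characters can collapse two distinct names (say 'f 1' and 'f1') into the same column name, so it is worth checking for duplicates afterwards.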
60,580,113 | 2020-3-7 | https://stackoverflow.com/questions/60580113/change-python-version-to-3-x | According to poetry's docs, the proper way to setup a new project is with poetry new poetry-demo, however this creates a project based on the now deprecated python2.7 by creating the following toml file: [tool.poetry] name = "poetry-demo" version = "0.1.0" description = "" authors = ["Harsha Goli <[email protected]>"] [tool.poetry.dependencies] python = "^2.7" [tool.poetry.dev-dependencies] pytest = "^4.6" [build-system] requires = ["poetry>=0.12"] build-backend = "poetry.masonry.api" How can I update this to 3.7? Simply changing python = "^2.7" to python = "^3.7" results in the following error when poetry install is run: [SolverProblemError] The current project's Python requirement (2.7.17) is not compatible with some of the required packages Python requirement: - zipp requires Python >=3.6 Because no versions of pytest match >=4.6,<4.6.9 || >4.6.9,<5.0 and pytest (4.6.9) depends on importlib-metadata (>=0.12), pytest (>=4.6,<5.0) requires importlib-metadata (>=0.12). And because no versions of importlib-metadata match >=0.12,<1.5.0 || >1.5.0 and importlib-metadata (1.5.0) depends on zipp (>=0.5), pytest (>=4.6,<5.0) requires zipp (>=0.5). Because zipp (3.1.0) requires Python >=3.6 and no versions of zipp match >=0.5,<3.1.0 || >3.1.0, zipp is forbidden. Thus, pytest is forbidden. So, because poetry-demo depends on pytest (^4.6), version solving failed. | Interestingly, poetry is silently failing due to a missing package the tool itself relies on and continues to install a broken venv. Here's how you fix it. sudo apt install python3-venv poetry env remove python3 poetry install I had to remove pytest, and then reinstall with poetry add pytest. EDIT: I ran into this issue again when upgrading a project from python3.7 to python3.8 - for this instead of installing python3-venv, you'd want to install python3.8-venv instead | 118 | 13 |
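For reference, after those steps the relevant pyproject.toml sections would look along these lines (the pytest constraint here is illustrative; use whatever version poetry add pytest resolves for your interpreter):

```toml
[tool.poetry.dependencies]
python = "^3.7"

[tool.poetry.dev-dependencies]
pytest = "^5.2"
```

The key point is that the dev dependencies (here pytest) have to be re-resolved against the new interpreter constraint, which is what removing and re-adding pytest accomplishes.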
60,557,160 | 2020-3-6 | https://stackoverflow.com/questions/60557160/python3-8-fails-with-fatal-python-error-config-get-locale-encoding | OK, so somehow I have mangled my python3 installation under macOS Mojave and I'm not sure how. I've used macports for years to keep python up to date, but now that I installed python38 I cannot run python3 at all. I always get this: $ python3.8 Fatal Python error: config_get_locale_encoding: failed to get the locale encoding: nl_langinfo(CODESET) failed Python runtime state: preinitialized $ I uninstalled the macports version and reinstalled, same thing. Uninstalled and then installed fresh from python.org, same thing. python27 runs fine. python37 also runs fine. python38 won't even work if I use $ python3.8 -I, so it's not some site package weirdness. Here's the really weird bit: while I cannot run python38 from a shell (any shell, tried from bash), I can launch python38 from the GUI using IDLE.app. Oddly, on my other machine (my laptop), python38 installed with macports works just fine. I'm flummoxed and I don't flummox easily. Any ideas? | Try setting LANG with a locale: export LANG="en_US.UTF-8" | 22 | 58 |
60,624,139 | 2020-3-10 | https://stackoverflow.com/questions/60624139/when-i-do-flask-run-it-shows-error-modulenotfounderror-no-module-named-werk | The exact error I get is:
flask.cli.NoAppException: While importing "application", an ImportError was raised: Traceback (most recent call last):
File "/home/harshit/.local/lib/python3.6/site-packages/flask/cli.py", line 240, in locate_app __import__(module_name)
File "/home/harshit/Documents/project1/application.py", line 18, in <module> Session(app)
File "/home/harshit/.local/lib/python3.6/site-packages/flask_session/__init__.py", line 54, in __init__ self.init_app(app)
File "/home/harshit/.local/lib/python3.6/site-packages/flask_session/__init__.py", line 61, in init_app app.session_interface = self._get_interface(app)
File "/home/harshit/.local/lib/python3.6/site-packages/flask_session/__init__.py", line 93, in _get_interface config['SESSION_USE_SIGNER'], config['SESSION_PERMANENT'])
File "/home/harshit/.local/lib/python3.6/site-packages/flask_session/sessions.py", line 313, in __init__ from werkzeug.contrib.cache import FileSystemCache
ModuleNotFoundError: No module named 'werkzeug.contrib'
I am trying to import sessions from Flask | Werkzeug 1.0.0 has removed deprecated code, including all of werkzeug.contrib. You should use alternative libraries for new projects. werkzeug.contrib.session was extracted to secure-cookie. If an existing project you're using needs something from contrib, you'll need to downgrade to Werkzeug<1: pip3 install "Werkzeug<1" (quoted so the shell does not treat < as a redirection) | 20 | 23 |
60,537,977 | 2020-3-5 | https://stackoverflow.com/questions/60537977/critical-worker-timeout-in-logs-when-running-hello-cloud-run-with-python-f | Following the tutorial here I have the following 2 files: app.py from flask import Flask, request app = Flask(__name__) @app.route('/', methods=['GET']) def hello(): """Return a friendly HTTP greeting.""" who = request.args.get('who', 'World') return f'Hello {who}!\n' if __name__ == '__main__': # Used when running locally only. When deploying to Cloud Run, # a webserver process such as Gunicorn will serve the app. app.run(host='localhost', port=8080, debug=True) Dockerfile # Use an official lightweight Python image. # https://hub.docker.com/_/python FROM python:3.7-slim # Install production dependencies. RUN pip install Flask gunicorn # Copy local code to the container image. WORKDIR /app COPY . . # Service must listen to $PORT environment variable. # This default value facilitates local development. ENV PORT 8080 # Run the web service on container startup. Here we use the gunicorn # webserver, with one worker process and 8 threads. # For environments with multiple CPU cores, increase the number of workers # to be equal to the cores available. CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 app:app I then build and run them using Cloud Build and Cloud Run: PROJECT_ID=$(gcloud config get-value project) DOCKER_IMG="gcr.io/$PROJECT_ID/helloworld-python" gcloud builds submit --tag $DOCKER_IMG gcloud run deploy --image $DOCKER_IMG --platform managed The code appears to run fine, and I am able to access the app on the given URL. However the logs seem to indicate a critical error, and the workers keep restarting. Here is the log file from Cloud Run after starting up the app and making a few requests in my web browser: 2020-03-05T03:37:39.392Z Cloud Run CreateService helloworld-python ... 
2020-03-05T03:38:03.285477Z[2020-03-05 03:38:03 +0000] [1] [INFO] Starting gunicorn 20.0.4 2020-03-05T03:38:03.287294Z[2020-03-05 03:38:03 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) 2020-03-05T03:38:03.287362Z[2020-03-05 03:38:03 +0000] [1] [INFO] Using worker: threads 2020-03-05T03:38:03.318392Z[2020-03-05 03:38:03 +0000] [4] [INFO] Booting worker with pid: 4 2020-03-05T03:38:15.057898Z[2020-03-05 03:38:15 +0000] [1] [INFO] Starting gunicorn 20.0.4 2020-03-05T03:38:15.059571Z[2020-03-05 03:38:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) 2020-03-05T03:38:15.059609Z[2020-03-05 03:38:15 +0000] [1] [INFO] Using worker: threads 2020-03-05T03:38:15.099443Z[2020-03-05 03:38:15 +0000] [4] [INFO] Booting worker with pid: 4 2020-03-05T03:38:16.320286ZGET200 297 B 2.9 s Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/ 2020-03-05T03:38:16.489044ZGET404 508 B 6 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/favicon.ico 2020-03-05T03:38:21.575528ZGET200 288 B 6 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/ 2020-03-05T03:38:27.000761ZGET200 285 B 5 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/?who=me 2020-03-05T03:38:27.347258ZGET404 508 B 13 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/favicon.ico 2020-03-05T03:38:34.802266Z[2020-03-05 03:38:34 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:4) 2020-03-05T03:38:35.302340Z[2020-03-05 03:38:35 +0000] [4] [INFO] Worker exiting (pid: 4) 2020-03-05T03:38:48.803505Z[2020-03-05 03:38:48 +0000] [5] [INFO] Booting worker with pid: 5 2020-03-05T03:39:10.202062Z[2020-03-05 03:39:09 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:5) 2020-03-05T03:39:10.702339Z[2020-03-05 03:39:10 +0000] [5] [INFO] Worker exiting (pid: 5) 2020-03-05T03:39:18.801194Z[2020-03-05 03:39:18 +0000] [6] [INFO] Booting worker with pid: 6 Note the worker timeouts and reboots at the end of the logs. 
The fact that it's a CRITICAL error makes me think it shouldn't be happening. Is this expected behavior? Is this a side effect of the Cloud Run machinery starting and stopping my service as requests come and go? | Cloud Run has scaled down one of your instances, and the gunicorn arbiter is considering it stalled. You should add --timeout 0 to your gunicorn invocation to disable the worker timeout entirely; it's unnecessary for Cloud Run. | 30 | 42 |
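Applied to the Dockerfile from the question, the fix is confined to the CMD line; a sketch of just that line:

```dockerfile
# --timeout 0 disables gunicorn's stalled-worker watchdog, which otherwise
# fires when Cloud Run throttles an idle instance and makes a worker look hung.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 app:app
```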
60,623,869 | 2020-3-10 | https://stackoverflow.com/questions/60623869/gradcam-with-guided-backprop-for-transfer-learning-in-tensorflow-2-0 | I get an error using gradient visualization with transfer learning in TF 2.0. The gradient visualization works on a model that does not use transfer learning. When I run my code I get the error: assert str(id(x)) in tensor_dict, 'Could not compute output ' + str(x) AssertionError: Could not compute output Tensor("block5_conv3/Identity:0", shape=(None, 14, 14, 512), dtype=float32) When I run the code below it errors. I think there's an issue with the naming conventions or connecting inputs and outputs from the base model, vgg16, to the layers I'm adding. Really appreciate your help! """ Broken example when grad_model is created. """ !pip uninstall tensorflow !pip install tensorflow==2.0.0 import cv2 import numpy as np import tensorflow as tf from tensorflow.keras import layers import matplotlib.pyplot as plt IMAGE_PATH = '/content/cat.3.jpg' LAYER_NAME = 'block5_conv3' model_layer = 'vgg16' CAT_CLASS_INDEX = 281 imsize = (224,224,3) img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224)) plt.figure() plt.imshow(img) img = tf.io.read_file(IMAGE_PATH) img = tf.image.decode_jpeg(img) img = tf.cast(img, dtype=tf.float32) # img = tf.keras.preprocessing.image.img_to_array(img) img = tf.image.resize(img, (224,224)) img = tf.reshape(img, (1, 224,224,3)) input = layers.Input(shape=(imsize[0], imsize[1], imsize[2])) base_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(imsize[0], imsize[1], imsize[2])) # base_model.trainable = False flat = layers.Flatten() dropped = layers.Dropout(0.5) global_average_layer = tf.keras.layers.GlobalAveragePooling2D() fc1 = layers.Dense(16, activation='relu', name='dense_1') fc2 = layers.Dense(16, activation='relu', name='dense_2') fc3 = layers.Dense(128, activation='relu', name='dense_3') prediction = layers.Dense(2, activation='softmax', 
name='output') for layr in base_model.layers: if ('block5' in layr.name): layr.trainable = True else: layr.trainable = False x = base_model(input) x = global_average_layer(x) x = fc1(x) x = fc2(x) x = prediction(x) model = tf.keras.models.Model(inputs = input, outputs = x) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss='binary_crossentropy', metrics=['accuracy']) This portion of the code is where the error lies. I'm not sure what is the correct way to label inputs and outputs. # Create a graph that outputs target convolution and output grad_model = tf.keras.models.Model(inputs = [model.input, model.get_layer(model_layer).input], outputs=[model.get_layer(model_layer).get_layer(LAYER_NAME).output, model.output]) print(model.get_layer(model_layer).get_layer(LAYER_NAME).output) # Get the score for target class # Get the score for target class with tf.GradientTape() as tape: conv_outputs, predictions = grad_model(img) loss = predictions[:, 1] The section below is for plotting a heatmap of gradcam. 
print('Prediction shape:', predictions.get_shape()) # Extract filters and gradients output = conv_outputs[0] grads = tape.gradient(loss, conv_outputs)[0] # Apply guided backpropagation gate_f = tf.cast(output > 0, 'float32') gate_r = tf.cast(grads > 0, 'float32') guided_grads = gate_f * gate_r * grads # Average gradients spatially weights = tf.reduce_mean(guided_grads, axis=(0, 1)) # Build a ponderated map of filters according to gradients importance cam = np.ones(output.shape[0:2], dtype=np.float32) for index, w in enumerate(weights): cam += w * output[:, :, index] # Heatmap visualization cam = cv2.resize(cam.numpy(), (224, 224)) cam = np.maximum(cam, 0) heatmap = (cam - cam.min()) / (cam.max() - cam.min()) cam = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET) output_image = cv2.addWeighted(cv2.cvtColor(img.astype('uint8'), cv2.COLOR_RGB2BGR), 0.5, cam, 1, 0) plt.figure() plt.imshow(output_image) plt.show() I also asked this to the tensorflow team on github at https://github.com/tensorflow/tensorflow/issues/37680. | I figured it out. If you set up the model extending the vgg16 base model with your own layers, rather than inserting the base model into a new model like a layer, then it works. First set up the model and be sure to declare the input_tensor. inp = layers.Input(shape=(imsize[0], imsize[1], imsize[2])) base_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_tensor=inp, input_shape=(imsize[0], imsize[1], imsize[2])) This way we don't have to include a line like x=base_model(inp) to show what input we want to put in. That's already included in tf.keras.applications.VGG16(...). Instead of putting this vgg16 base model inside another model, it's easier to do gradcam by adding layers to the base model itself. I grab the output of the last layer of VGG16 (with the top removed), which is the pooling layer. 
block5_pool = base_model.get_layer('block5_pool') x = global_average_layer(block5_pool.output) x = fc1(x) x = prediction(x) model = tf.keras.models.Model(inputs = inp, outputs = x) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss='binary_crossentropy', metrics=['accuracy']) Now, I grab the layer for visualization, LAYER_NAME='block5_conv3'. # Create a graph that outputs target convolution and output grad_model = tf.keras.models.Model(inputs = [model.input], outputs=[model.output, model.get_layer(LAYER_NAME).output]) print(model.get_layer(LAYER_NAME).output) # Get the score for target class # Get the score for target class with tf.GradientTape() as tape: predictions, conv_outputs = grad_model(img) loss = predictions[:, 1] print('Prediction shape:', predictions.get_shape()) # Extract filters and gradients output = conv_outputs[0] grads = tape.gradient(loss, conv_outputs)[0] | 8 | 6 |
60,625,834 | 2020-3-10 | https://stackoverflow.com/questions/60625834/create-interactive-hierarchy-diagram-from-pandas-dictionary | I have data that shows the relationship for each employee with their managers(Person:Manager) - data = {'PersonX':'Person1', 'PersonY':'Person1', 'PersonZ':'Person 2', 'Person1':'Person100','Person2':'Person100' } I am trying to show a hierarchy chart from the above data in a clean looking chart and if I can filter that data in the visualization itself that is a Bonus. The data that I get can contain sometimes 5 people or sometimes the number of records is more than 5000. I have tried these approaches but they are no where close to generating any graphs that are interactive. Code - Try 1 - import pandas as pd import networkx as nx d = {'PersonX': 'Person1', 'PersonY': 'Person1', 'PersonZ': 'Person2', 'Person1': 'Person100', 'Person2': 'Person100'} df = pd.DataFrame(d.items(), columns=['Person', 'Manager']) G = nx.from_pandas_edgelist(df, source='Person', target='Manager') nx.draw(G, with_labels=True) plt.show() Try 2 - import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import LabelEncoder from scipy.cluster import hierarchy df2 = df.apply(LabelEncoder().fit_transform) df2.set_index('Manager', inplace=True) Z = hierarchy.linkage(df2, 'ward') hierarchy.dendrogram(hierarchy.linkage(df2, method='ward')) plt.show() Try 3 - print('strict digraph tree {') for row in d.items(): print(' {0} -> {1};'.format(*row)) print('}') And ran the test.py | dot -Tpng -otree.png | I went with the following code to create a graph that was interactive, this is a work in progress but I wanted to post this so that people can use this in case needed. 
import pandas as pd import dash import dash_html_components as html import dash_cytoscape as cyto from matplotlib import colors as mcolors from itertools import zip_longest from ast import literal_eval colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS) # Sort colors by hue, saturation, value and name. by_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name) for name, color in colors.items()) sorted_names = [name for hsv, name in by_hsv] app = dash.Dash(__name__) # colors = ['red', 'blue', 'green', 'yellow', 'pink'] # stylesheet for the web page generated default_stylesheet = [ { "selector": 'node', 'style': { "opacity": 0.9, 'height': 15, 'width': 15, 'background-color': '#222222', 'label': 'data(label)' } }, { "selector": 'edge', 'style': { "curve-style": "bezier", "opacity": 0.3, 'width': 2 } }, *[{ "selector": '.' + color, 'style': {'line-color': color} } for color in sorted_names] ] # Example data for illustration # My actual data was in the excel file with two columns Managers and Person managers = ['Person A', 'Person A', 'Person A', 'Person A', 'Person A', 'Person A', 'Person B', 'Person B', 'Person B', 'Person B', 'Person B', 'Person B', 'Person C', 'Person C', 'Person C', 'Person C', 'Person C', 'Person C', 'Person V', 'Person V', 'Person V', 'Person V', 'Person V'] person = ['Person D', 'Person E', 'Person F', 'Person G', 'Person H', 'Person I', 'Person J', 'Person K', 'Person L', 'Person M', 'Person N', 'Person O', 'Person P', 'Person Q', 'Person R', 'Person S', 'Person T', 'Person U', 'Person A', 'Person W', 'Person X', 'Person B', 'Person C'] # Creating a dataframe with the illustration data df = pd.DataFrame(list(zip(person, managers)), columns=['Person', 'Manager']) # Giving colors to each managers in the dataframe df['colors'] = df['Manager'].map(dict(zip_longest(list(set(managers)), sorted_names))) # Creating the nodes within the dataframe df['y_node_target'] = "{\"data\": {\"id\": \"" + df['Person'] + "\", 
\"label\":\""+df['Person']+"\"}, \"classes\": \"" + df['colors'] + "\"}" df['y_node'] = "{\"data\": {\"id\": \"" + df['Manager'] + "\", \"label\":\""+df['Manager']+"\"}, \"classes\": \"" + df['colors'] + "\"}" nodes = list(set(pd.concat([df['y_node'], df['y_node_target']]).to_list())) df['Edges'] = "{\'data\': {\'source\':\"" + df['Manager'] + "\", \'target\': \"" + df[ 'Person'] + "\"},\'classes\': \"" + df['colors'] + "\"}" # Converting the strings to dictionaries and assigning them to variables edges = list(set(df['Edges'].astype(str).to_list())) edges = list(map(literal_eval, edges)) nodes = list(map(literal_eval, nodes)) app.layout = html.Div([ cyto.Cytoscape( id='cytoscape', elements=edges + nodes, stylesheet=default_stylesheet, layout={ 'name': 'breadthfirst' }, style={'height': '95vh', 'width': '100%'} ) ]) if __name__ == '__main__': app.run_server(debug=True) Output was a webpage - | 8 | 4 |
60,562,759 | 2020-3-6 | https://stackoverflow.com/questions/60562759/incorrect-results-with-annotate-values-union-in-django | Jump to edit to see more real-life code example, that doesn't work after changing the query order Here are my models: class ModelA(models.Model): field_1a = models.CharField(max_length=32) field_2a = models.CharField(max_length=32) class ModelB(models.Model): field_1b = models.CharField(max_length=32) field_2b = models.CharField(max_length=32) Now, create 2 instances each: ModelA.objects.create(field_1a="1a1", field_2a="1a2") ModelA.objects.create(field_1a="2a1", field_2a="2a2") ModelB.objects.create(field_1b="1b1", field_2b="1b2") ModelB.objects.create(field_1b="2b1", field_2b="2b2") If I'll query for only one model with annotations, I get something like that: >>> ModelA.objects.all().annotate(field1=F("field_1a"), field2=F("field_2a")).values("field1", "field2") [{"field1": "1a1", "field2": "1a2"}, {"field1": "2a1", "field2": "2a2"}] This is correct behavior. The problem starts, when I want to get union of those two models: # model A first, with annotate query = ModelA.objects.all().annotate(field1=F("field_1a"), field2=F("field_2a")) # now union with model B, also annotated query = query.union(ModelB.objects.all().annotate(field1=F("field_1b"), field2=F("field_2b"))) # get only field1 and field2 query = query.values("field1", "field2") # the results are skewed: assert list(query) == [ {"field1": 1, "field2": "1a1"}, {"field1": 1, "field2": "1b1"}, {"field1": 2, "field2": "2a1"}, {"field1": 2, "field2": "2b1"}, ] The assert passes correctly, which means that the results are wrong. It seems like the values() didn't match the variable name, it just iterated over the object as on a tuple. The value of field1 is actually the object's ID, and field2 is field1. This is pretty easy to fix in such simple models, but my real models are quite complex, and they have a different number of fields. How do I union them correctly? 
EDIT Below you can find an extended example that fails regardless of the order of union() and values() - the models are slightly bigger now, and it seems that the different fields count somehow confuses Django: # models class ModelA(models.Model): field_1a = models.CharField(max_length=32) field_1aa = models.CharField(max_length=32, null=True) field_1aaa = models.CharField(max_length=32, null=True) field_2a = models.CharField(max_length=32) extra_a = models.CharField(max_length=32) class ModelB(models.Model): extra = models.CharField(max_length=32) field_1b = models.CharField(max_length=32) field_2b = models.CharField(max_length=32) # test ModelA.objects.create(field_1a="1a1", field_2a="1a2", extra_a="1extra") ModelA.objects.create(field_1a="2a1", field_2a="2a2", extra_a="2extra") ModelB.objects.create(field_1b="1b1", field_2b="1b2", extra="3extra") ModelB.objects.create(field_1b="2b1", field_2b="2b2", extra="4extra") values = ("field1", "field2", "extra") query = ( ModelA.objects.all() .annotate( field1=F("field_1a"), field2=F("field_2a"), extra=F("extra_a") ) .values(*values) ) query = query.union( ModelB.objects.all() .annotate(field1=F("field_1b"), field2=F("field_2b")) .values(*values) ) # outcome assert list(query) == [ {"field1": "1a1", "field2": "1a2", "extra": "1extra"}, {"field1": "2a1", "field2": "2a2", "extra": "2extra"}, {"field1": "3extra", "field2": "1b1", "extra": "1b2"}, {"field1": "4extra", "field2": "2b1", "extra": "2b2"}, ] | After some debugging and going through the source code, I have an idea why this is happening. What I am going to do is try to explain that why doing annotate + values results in displaying the id and what is the difference between the two cases above. To keep things simple, I will write also write the possible resulting sql query for each statement. 1. 
annotate first but get values on union query qs1 = ModelA.objects.all().annotate(field1=F("field_1a"), field2=F("field_2a")) When writing something like this, django will get all the fields + annotated fields, so the resulting sql query looks like: select id, field_1a, field_2a, field_1a as field1, field_2a as field2 from ModelA So, if we have a query which is the result of: qs = qs1.union(qs2) the resulting sql for django looks like: (select id, field_1a, field_2a, field_1a as field1, field_2a as field2 from ModelA) UNION (select id, field_1b, field_2b, field_1b as field1, field_2b as field2 from ModelB) Let's go deeper into how this sql is generated. When we do a union, a combinator and combined_queries is set on the qs.query and the resulting sql is generated by combining the sql of individual queries. So, in summary: qs.sql == qs1.sql UNION qs2.sql # in abstract sense When, we do qs.values('field1', 'field2'), the col_count in compiler is set to 2 which is the number of fields. As you can see that the union query above returns 5 columns but in the final return from compiler each row in the results is sliced using col_count. Now, this results with only 2 columns is passed back to ValuesIterable where it maps each name in the selected fields with the resulting columns. That is how it leads to the incorrect results. 2. 
annotate + values on individual queries and then perform union Now, let's see what happens when annotate is used with values directly qs1 = ModelA.objects.all().annotate(field1=F("field_1a"), field2=F("field_2a")).values('field1', 'field2') The resulting sql is: select field_1a as field1, field_2a as field2 from ModelA Now, when we do the union: qs = qs1.union(qs2) the sql is: (select field_1a as field1, field_2a as field2 from ModelA) UNION (select field_1b as field1, field_2b as field2 from ModelB) Now, when qs.values('field1', 'field2') executes, the number of columns returned from union query has 2 columns which is same as the col_count which is 2 and each field is matched with the individual columns producing the expected result. 3. Different field annotation count and ordering of fields In the OP, there is a scenario when even using .values before union doesn't produce correct results. The reason for that is that in the ModelB, there is no annotation for extra field. So, let's look at the queries generated for each model: ModelA.objects.all() .annotate( field1=F("field_1a"), field2=F("field_2a"), extra=F("extra_a") ) .values(*values) The SQL becomes: select field_1a as field1, field_2a as field2, extra_a as extra from ModelA For ModelB: ModelB.objects.all() .annotate(field1=F("field_1b"), field2=F("field_2b")) .values(*values) SQL: select extra, field_1b as field1, field_2b as field2 from ModelB and the union is: (select field_1a as field1, field_2a as field2, extra_a as extra from ModelA) UNION (select extra, field_1b as field1, field_2b as field2 from ModelB) Because annotated fields are listed after the real db fields, the extra of ModelB is mixed with field1 of ModelB. TO make sure that you get correct results, please make sure that the ordering of fields in generated SQL is always correct - with or without annotation. In this case, I will suggest to annotate extra on ModelB as well. | 10 | 5 |
60,590,442 | 2020-3-8 | https://stackoverflow.com/questions/60590442/abstract-dataclass-without-abstract-methods-in-python-prohibit-instantiation | Even if a class is inherited from ABC, it can still be instantiated unless it contains abstract methods. Having the code below, what is the best way to prevent an Identifier object from being created: Identifier(['get', 'Name'])? from abc import ABC from typing import List from dataclasses import dataclass @dataclass class Identifier(ABC): sub_tokens: List[str] @staticmethod def from_sub_tokens(sub_tokens): return SimpleIdentifier(sub_tokens) if len(sub_tokens) == 1 else CompoundIdentifier(sub_tokens) @dataclass class SimpleIdentifier(Identifier): pass @dataclass class CompoundIdentifier(Identifier): pass | You can create an AbstractDataclass class which guarantees this behaviour, and you can use this every time you have a situation like the one you described. @dataclass class AbstractDataclass(ABC): def __new__(cls, *args, **kwargs): if cls == AbstractDataclass or cls.__bases__[0] == AbstractDataclass: raise TypeError("Cannot instantiate abstract class.") return super().__new__(cls) So, if Identifier inherits from AbstractDataclass instead of from ABC directly, modifying the __post_init__ will not be needed. @dataclass class Identifier(AbstractDataclass): sub_tokens: List[str] @staticmethod def from_sub_tokens(sub_tokens): return SimpleIdentifier(sub_tokens) if len(sub_tokens) == 1 else CompoundIdentifier(sub_tokens) @dataclass class SimpleIdentifier(Identifier): pass @dataclass class CompoundIdentifier(Identifier): pass Instantiating Identifier will raise TypeError, but instantiating SimpleIdentifier or CompoundIdentifier will not. And the AbstractDataclass can be re-used in other parts of the code. | 16 | 24
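Assembled into one runnable script, the pattern from the answer behaves as described (only cosmetic changes, such as `is` comparisons, are mine):

```python
from abc import ABC
from dataclasses import dataclass
from typing import List


@dataclass
class AbstractDataclass(ABC):
    def __new__(cls, *args, **kwargs):
        # Block the abstract base itself and any direct child of it.
        if cls is AbstractDataclass or cls.__bases__[0] is AbstractDataclass:
            raise TypeError("Cannot instantiate abstract class.")
        return super().__new__(cls)


@dataclass
class Identifier(AbstractDataclass):
    sub_tokens: List[str]


@dataclass
class SimpleIdentifier(Identifier):
    pass


try:
    Identifier(["get", "Name"])
except TypeError as exc:
    print(exc)  # Cannot instantiate abstract class.

print(SimpleIdentifier(["name"]).sub_tokens)  # ['name']
```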
60,575,662 | 2020-3-7 | https://stackoverflow.com/questions/60575662/how-to-update-plotly-express-treemap-to-have-both-label-as-well-as-the-value-ins | Currently, plotly express treemap shows only label inside treemap. How to include the value alongside the label? | That's why I don't like express, there are too many limitations and to make these kinds of changes you have to access the trace either way. From my point of view it is better and more code-transparent to use plain plotly instead. That being said, you can access the textinfo attribute of the trace to do this. From the reference: Determines which trace information appear on the graph. Any combination of "label", "text", "value", "current path", "percent root", "percent entry", "percent parent" joined with a "+" OR "none". Taking an example from the site: df = px.data.tips() fig = px.treemap(df, path=['day', 'time', 'sex'], values='total_bill') # this is what I don't like, accessing traces like this fig.data[0].textinfo = 'label+text+value+current path' fig.layout.hovermode = False fig.show() Also take a look at the texttemplate attribute for formatting options. | 9 | 22 |
60,590,333 | 2020-3-8 | https://stackoverflow.com/questions/60590333/increasing-each-element-of-a-tensor-by-the-predecessor-in-tensorflow-2-0 | I'm new to tensorflow 2.0, and haven't done much except designing and training some artificial neural networks from boilerplate code. I'm trying to solve an exercise for newcomers into the new tensorflow. I created some code, but it doesn't work. Below is the problem definition: Assuming we have tensor M of rational numbers in shape of (a, b, c) and scalar p ∈ (0, 1) (memory factor), let’s create a function that will return tensor N in shape of (a, b, c). Each element of N tensors moving along axis c should be increased by the value of predecessor multiplied by p. Assuming we have tensor: T = [x1, x2, x3, x4] in shape of (1, 1, 4), we would like to get vector: [x1, x2+x1·p, x3+(x2+x1·p)·p, x4+(x3+(x2+x1·p)·p)*p] Solution should be created in Tensorflow 2.0 and should be focused on delivering the shortest execution time on CPU. Created graph should allow to efficiently calculate derivative both on tensor M and value p. This is the code I created till now: import tensorflow as tf @tf.function def vectorize_predec(t, p): last_elem = 0 result = [] for el in t: result.append(el + (p * last_elem)) last_elem = el + (p * last_elem) return result p = tf.Variable(0.5, dtype='double') m = tf.constant([[0, 1, 2, 3, 4], [1, 3, 5, 7, 10], [1, 1, 1, -1, 0]]) vectorize_predec(m, p) But it throws a TypeError. I looked around documentation, I've seen functions like cumsum and polyeval, but I'm not sure they fit my needs. To my understanding, I need to write my own customer function annotated with @tf.function. I'm also not sure how to handle 3-dimension tensors properly according to the problem definition (adding the predecessor should happen on the last ("c") axis). I've seen in documentation (here: https://www.tensorflow.org/tutorials/customization/performance) that there are ways to measure size of the produced graph. 
Although, I'm not sure how "graph" allows to efficiently calculate derivative both on tensor M and value p. ELI5 answers appreciated, or at least some materials I can read to educate myself better. Thanks a lot! | I'll give you a couple of different methods to implement that. I think the most obvious solution is to use tf.scan: import tensorflow as tf def apply_momentum_scan(m, p, axis=0): # Put axis first axis = tf.convert_to_tensor(axis, dtype=tf.int32) perm = tf.concat([[axis], tf.range(axis), tf.range(axis + 1, tf.rank(m))], axis=0) m_t = tf.transpose(m, perm) # Do computation res_t = tf.scan(lambda a, x: a * p + x, m_t) # Undo transpose perm_t = tf.concat([tf.range(1, axis + 1), [0], tf.range(axis + 1, tf.rank(m))], axis=0) return tf.transpose(res_t, perm_t) However, you can also implement this as a particular matrix product, if you build a matrix of exponential factors: import tensorflow as tf def apply_momentum_matmul(m, p, axis=0): # Put axis first and reshape m = tf.convert_to_tensor(m) p = tf.convert_to_tensor(p) axis = tf.convert_to_tensor(axis, dtype=tf.int32) perm = tf.concat([[axis], tf.range(axis), tf.range(axis + 1, tf.rank(m))], axis=0) m_t = tf.transpose(m, perm) shape_t = tf.shape(m_t) m_tr = tf.reshape(m_t, [shape_t[0], -1]) # Build factors matrix r = tf.range(tf.shape(m_tr)[0]) p_tr = tf.linalg.band_part(p ** tf.dtypes.cast(tf.expand_dims(r, 1) - r, p.dtype), -1, 0) # Do computation res_tr = p_tr @ m_tr # Reshape back and undo transpose res_t = tf.reshape(res_tr, shape_t) perm_t = tf.concat([tf.range(1, axis + 1), [0], tf.range(axis + 1, tf.rank(m))], axis=0) return tf.transpose(res_t, perm_t) This can also be rewritten to avoid the first transposing (which in TensorFlow is expensive) with tf.tensordot: import tensorflow as tf def apply_momentum_tensordot(m, p, axis=0): # Put axis first and reshape m = tf.convert_to_tensor(m) # Build factors matrix r = tf.range(tf.shape(m)[axis]) p_mat = tf.linalg.band_part(p ** tf.dtypes.cast(tf.expand_dims(r, 
1) - r, p.dtype), -1, 0) # Do computation res_t = tf.linalg.tensordot(m, p_mat, axes=[[axis], [1]]) # Transpose last_dim = tf.rank(res_t) - 1 perm_t = tf.concat([tf.range(axis), [last_dim], tf.range(axis, last_dim)], axis=0) return tf.transpose(res_t, perm_t) The three functions would be used in a similar way: import tensorflow as tf p = tf.Variable(0.5, dtype=tf.float32) m = tf.constant([[0, 1, 2, 3, 4], [1, 3, 5, 7, 10], [1, 1, 1, -1, 0]], tf.float32) # apply_momentum is one of the functions above print(apply_momentum(m, p, axis=0).numpy()) # [[ 0. 1. 2. 3. 4. ] # [ 1. 3.5 6. 8.5 12. ] # [ 1.5 2.75 4. 3.25 6. ]] print(apply_momentum(m, p, axis=1).numpy()) # [[ 0. 1. 2.5 4.25 6.125 ] # [ 1. 3.5 6.75 10.375 15.1875] # [ 1. 1.5 1.75 -0.125 -0.0625]] Using a matrix product is more asymptotically complex, but it can be faster than scanning. Here is a small benchmark: import tensorflow as tf import numpy as np # Make test data tf.random.set_seed(0) p = tf.constant(0.5, dtype=tf.float32) m = tf.random.uniform([100, 30, 50], dtype=tf.float32) # Axis 0 print(np.allclose(apply_momentum_scan(m, p, 0).numpy(), apply_momentum_matmul(m, p, 0).numpy())) # True print(np.allclose(apply_momentum_scan(m, p, 0).numpy(), apply_momentum_tensordot(m, p, 0).numpy())) # True %timeit apply_momentum_scan(m, p, 0) # 11.5 ms ± 610 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit apply_momentum_matmul(m, p, 0) # 1.36 ms ± 18.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit apply_momentum_tensordot(m, p, 0) # 1.62 ms ± 7.39 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # Axis 1 print(np.allclose(apply_momentum_scan(m, p, 1).numpy(), apply_momentum_matmul(m, p, 1).numpy())) # True print(np.allclose(apply_momentum_scan(m, p, 1).numpy(), apply_momentum_tensordot(m, p, 1).numpy())) # True %timeit apply_momentum_scan(m, p, 1) # 4.27 ms ± 60.4 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) %timeit apply_momentum_matmul(m, p, 1) # 1.27 ms ± 36.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit apply_momentum_tensordot(m, p, 1) # 1.2 ms ± 11.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # Axis 2 print(np.allclose(apply_momentum_scan(m, p, 2).numpy(), apply_momentum_matmul(m, p, 2).numpy())) # True print(np.allclose(apply_momentum_scan(m, p, 2).numpy(), apply_momentum_tensordot(m, p, 2).numpy())) # True %timeit apply_momentum_scan(m, p, 2) # 6.29 ms ± 64.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit apply_momentum_matmul(m, p, 2) # 1.41 ms ± 21.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit apply_momentum_tensordot(m, p, 2) # 1.05 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) So, matrix product seems to win. Let's see if this scales: import tensorflow as tf import numpy as np # Make test data tf.random.set_seed(0) p = tf.constant(0.5, dtype=tf.float32) m = tf.random.uniform([1000, 300, 500], dtype=tf.float32) # Axis 0 print(np.allclose(apply_momentum_scan(m, p, 0).numpy(), apply_momentum_matmul(m, p, 0).numpy())) # True print(np.allclose(apply_momentum_scan(m, p, 0).numpy(), apply_momentum_tensordot(m, p, 0).numpy())) # True %timeit apply_momentum_scan(m, p, 0) # 784 ms ± 6.78 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit apply_momentum_matmul(m, p, 0) # 1.13 s ± 76.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit apply_momentum_tensordot(m, p, 0) # 1.3 s ± 27 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # Axis 1 print(np.allclose(apply_momentum_scan(m, p, 1).numpy(), apply_momentum_matmul(m, p, 1).numpy())) # True print(np.allclose(apply_momentum_scan(m, p, 1).numpy(), apply_momentum_tensordot(m, p, 1).numpy())) # True %timeit apply_momentum_scan(m, p, 1) # 852 ms ± 12.7 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) %timeit apply_momentum_matmul(m, p, 1) # 659 ms ± 10.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit apply_momentum_tensordot(m, p, 1) # 741 ms ± 19.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # Axis 2 print(np.allclose(apply_momentum_scan(m, p, 2).numpy(), apply_momentum_matmul(m, p, 2).numpy())) # True print(np.allclose(apply_momentum_scan(m, p, 2).numpy(), apply_momentum_tensordot(m, p, 2).numpy())) # True %timeit apply_momentum_scan(m, p, 2) # 1.06 s ± 16.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit apply_momentum_matmul(m, p, 2) # 924 ms ± 17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit apply_momentum_tensordot(m, p, 2) # 483 ms ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Well, now it's not so clear anymore. Scanning is still not super fast, but matrix products are sometimes slower. As you can imagine if you go to even bigger tensors the complexity of matrix products will dominate the timings. So, if you want the fastest solution and know your tensors are not going to get huge, use one of the matrix product implementations. If you're fine with okay speed but want to make sure you don't run out of memory (matrix solution also takes much more) and timing is predictable, you can use the scanning solution. Note: Benchmarks above were carried out on CPU, results may vary significantly on GPU. | 7 | 5 |
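As a language-agnostic reference, the recurrence that all three TensorFlow implementations above compute is y[i] = x[i] + p·y[i−1] along the chosen axis. A plain-Python version of the per-vector case (no TensorFlow, purely for checking results against the question's expanded formula) looks like this:

```python
def apply_momentum_1d(xs, p):
    """y[i] = x[i] + p * y[i-1], with y[-1] = 0 (what tf.scan accumulates)."""
    out, acc = [], 0.0
    for x in xs:
        acc = x + p * acc
        out.append(acc)
    return out


# The question's shape-(1, 1, 4) example expands to
# [x1, x2 + x1*p, x3 + (x2 + x1*p)*p, x4 + (x3 + (x2 + x1*p)*p)*p]:
print(apply_momentum_1d([1.0, 2.0, 3.0, 4.0], 0.5))  # [1.0, 2.5, 4.25, 6.125]

# First row of the answer's axis=1 output for m = [[0, 1, 2, 3, 4], ...], p = 0.5:
print(apply_momentum_1d([0.0, 1.0, 2.0, 3.0, 4.0], 0.5))  # [0.0, 1.0, 2.5, 4.25, 6.125]
```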