question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
62,326,155 | 2020-6-11 | https://stackoverflow.com/questions/62326155/how-to-efficiently-get-count-for-item-in-list-of-lists-in-python | I have three lists as follows. mylist = [[5274919, ["my cat", "little dog", "fish", "rat"]], [5274920, ["my cat", "parrot", "little dog"]], [5274991, ["little dog", "fish", "duck"]]] myconcepts = ["my cat", "little dog"] hatedconcepts = ["rat", "parrot"] For each concept in myconcepts, I want to get the count of every other concept connected to it using mylist. Then remove the hatedconcepts from it. So, my output should look as follows. {"my cat": [("my cat", 2), ("little dog", 2), ("fish", 1)], "little dog": [("little dog", 3), ("my cat", 2), ("fish", 2), ("duck", 1)]} I was using this code to do it. import collections myoutput = [] for concept in myconcepts: mykeywords = [] for item in mylist: if concept in item[1]: for mykeyword in item[1]: if mykeyword in hatedconcepts: pass else: mykeywords.append(mykeyword) if len(mykeywords) > 0: sorted_keywords = collections.Counter(mykeywords).most_common() myoutput.append(tuple((concept, sorted_keywords))) print(myoutput) The output of the code is: [('my cat', [('my cat', 2), ('little dog', 2), ('fish', 1)]), ('little dog', [('little dog', 3), ('my cat', 2), ('fish', 2), ('duck', 1)])] However, now I have a huge mylist with a size of 3GB and nearly 9000 myconcepts. The hatedconcepts count is only 20. It looks like it takes about two weeks to run using my current code. The main reason for this could be that my current program is roughly O(n^3), which is not very efficient. Therefore, I am looking for ways to make my current program more efficient. I am fine with pythonic solutions that take even 5-6 days to run. Please let me know your thoughts. I have added a portion of mylist in: https://drive.google.com/file/d/1M3EhIRwwKwD3Kv4zDsmXaH1D73tx0eF3/view?usp=sharing just to give some idea of what it looks like. I am happy to provide more details if needed.
| I have tried to make it fast and avoided some repeated loops. Please check if this speeds things up. from itertools import chain from collections import Counter, defaultdict database = defaultdict(set) output = {} # created a map for different concepts, so we only search the indices where a certain concept is for index, (_, concepts) in enumerate(mylist): for concept in concepts: database[concept].add(index) for concept in myconcepts: search_indices = database[concept] all_counts = Counter(chain.from_iterable(mylist[i][1] for i in search_indices)) for hc in hatedconcepts: if hc in all_counts: all_counts.pop(hc) output[concept] = sorted(all_counts.items(), key=lambda x: x[1], reverse=True) | 17 | 2 |
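The inverted-index idea in the accepted answer can be restated as a self-contained, stdlib-only sketch (the data is taken from the question; the function name is illustrative):

```python
from collections import Counter, defaultdict
from itertools import chain

def cooccurrence_counts(records, targets, hated):
    # Build an inverted index: concept -> set of record indices containing it,
    # so each target only scans the records it actually appears in.
    index = defaultdict(set)
    for i, (_, concepts) in enumerate(records):
        for concept in concepts:
            index[concept].add(i)
    hated = set(hated)
    output = {}
    for concept in targets:
        counts = Counter(chain.from_iterable(records[i][1] for i in index[concept]))
        for h in hated:
            counts.pop(h, None)  # drop hated concepts if present
        output[concept] = counts.most_common()
    return output

mylist = [[5274919, ["my cat", "little dog", "fish", "rat"]],
          [5274920, ["my cat", "parrot", "little dog"]],
          [5274991, ["little dog", "fish", "duck"]]]
print(cooccurrence_counts(mylist, ["my cat", "little dog"], ["rat", "parrot"]))
```

On the question's sample data this reproduces the expected output; the index means each lookup touches only the matching records rather than all of mylist.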
62,325,417 | 2020-6-11 | https://stackoverflow.com/questions/62325417/inconsistent-behavior-when-inserting-a-set-into-cells-using-loc-in-pandas | It's a pretty simple example import pandas df = pandas.DataFrame() value_to_be_set = {'1'} df.loc[0, 'col1'] = value_to_be_set df['col2'] = None df.loc[0, 'col2'] = value_to_be_set print(df.head()) output col1 col2 0 1 {1} Why is the datatype different for both columns? Python 3.7.3 pandas version: 0.23.4 | In the first assignment, you create a new column from a set, said differently, from an iterable. You ask for a single element and provide an iterable of size one, so the content of the set is assigned to the single cell. You can try to use a set of 2 values to see that it would raise an error. In the second assignment, you update a cell in an existing column. Pandas has no reason to unpack anything here, so it assigns the set itself to the cell. To be honest, this explains what happens, but it is not a justification for the rationale behind the different behaviours... | 8 | 7 |
62,304,176 | 2020-6-10 | https://stackoverflow.com/questions/62304176/how-to-find-out-dataframe-to-numpy-did-not-create-a-copy | The pandas.DataFrame.to_numpy method has a copy argument with the following documentation: copy : bool, default False Whether to ensure that the returned value is not a view on another array. Note that copy=False does not ensure that to_numpy() is no-copy. Rather, copy=True ensures that a copy is made, even if not strictly necessary. Playing around a bit, it seems like calling to_numpy on data that is both contiguous in memory and not of mixed types keeps a view. But how do I check whether the resulting numpy array shares the memory with the data frame it was created from, without changing the data? Example of memory sharing: import pandas as pd import numpy as np # some data frame that I expect not to be copied frame = pd.DataFrame(np.arange(144).reshape(12,12)) array = frame.to_numpy() array[:] = 0 print(frame) # Prints: # 0 1 2 3 4 5 6 7 8 9 10 11 # 0 0 0 0 0 0 0 0 0 0 0 0 0 # 1 0 0 0 0 0 0 0 0 0 0 0 0 # 2 0 0 0 0 0 0 0 0 0 0 0 0 # 3 0 0 0 0 0 0 0 0 0 0 0 0 # 4 0 0 0 0 0 0 0 0 0 0 0 0 # 5 0 0 0 0 0 0 0 0 0 0 0 0 # 6 0 0 0 0 0 0 0 0 0 0 0 0 # 7 0 0 0 0 0 0 0 0 0 0 0 0 # 8 0 0 0 0 0 0 0 0 0 0 0 0 # 9 0 0 0 0 0 0 0 0 0 0 0 0 # 10 0 0 0 0 0 0 0 0 0 0 0 0 # 11 0 0 0 0 0 0 0 0 0 0 0 0 Example not sharing memory: import pandas as pd import numpy as np # some data frame that I expect to be copied types = [int, str, float] frame = pd.DataFrame({ i: [types[i%len(types)](value) for value in col] for i, col in enumerate(np.arange(144).reshape(12,12).T) }) array = frame.to_numpy() array[:] = 0 print(frame) # Prints: # 0 1 2 3 4 5 6 7 8 9 10 11 # 0 0 12 24.0 36 48 60.0 72 84 96.0 108 120 132.0 # 1 1 13 25.0 37 49 61.0 73 85 97.0 109 121 133.0 # 2 2 14 26.0 38 50 62.0 74 86 98.0 110 122 134.0 # 3 3 15 27.0 39 51 63.0 75 87 99.0 111 123 135.0 # 4 4 16 28.0 40 52 64.0 76 88 100.0 112 124 136.0 # 5 5 17 29.0 41 53 65.0 77 89 101.0 113 125 137.0 # 6 6 18
30.0 42 54 66.0 78 90 102.0 114 126 138.0 # 7 7 19 31.0 43 55 67.0 79 91 103.0 115 127 139.0 # 8 8 20 32.0 44 56 68.0 80 92 104.0 116 128 140.0 # 9 9 21 33.0 45 57 69.0 81 93 105.0 117 129 141.0 # 10 10 22 34.0 46 58 70.0 82 94 106.0 118 130 142.0 # 11 11 23 35.0 47 59 71.0 83 95 107.0 119 131 143.0 | There is numpy.shares_memory you can use: # Your first example print(np.shares_memory(array, frame)) # True, they are sharing memory # Your second example print(np.shares_memory(array2, frame2)) # False, they are not sharing memory There is also numpy.may_share_memory, which is faster but can only be used for making sure things do not share memory (because it only checks whether the bounds overlap), so strictly speaking does not answer the question. Read this for the differences. Take care using these numpy functions with pandas data-structures: np.shares_memory(frame, frame) returns True for the first example, but False for the second, probably because the __array__ method of the data frame in the second example creates a copy behind the scenes. | 10 | 6 |
62,319,228 | 2020-6-11 | https://stackoverflow.com/questions/62319228/number-of-instances-per-class-in-pytorch-dataset | I'm trying to make a simple image classifier using PyTorch. This is how I load the data into a dataset and dataLoader: batch_size = 64 validation_split = 0.2 data_dir = PROJECT_PATH+"/categorized_products" transform = transforms.Compose([transforms.Grayscale(), CustomToTensor()]) dataset = ImageFolder(data_dir, transform=transform) indices = list(range(len(dataset))) train_indices = indices[:int(len(indices)*0.8)] test_indices = indices[int(len(indices)*0.8):] train_sampler = SubsetRandomSampler(train_indices) test_sampler = SubsetRandomSampler(test_indices) train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler, num_workers=16) test_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=test_sampler, num_workers=16) I want to print out the number of images in each class in training and test data separately, something like this: In train data: shoes: 20 shirts: 14 In test data: shoes: 4 shirts: 3 I tried this: from collections import Counter print(dict(Counter(sample_tup[1] for sample_tup in dataset.imgs))) but I got this error: AttributeError: 'MyDataset' object has no attribute 'img' | You need to use .targets to access the labels of data i.e. print(dict(Counter(dataset.targets))) It'll print something like this (e.g. 
in MNIST dataset): {5: 5421, 0: 5923, 4: 5842, 1: 6742, 9: 5949, 2: 5958, 3: 6131, 6: 5918, 7: 6265, 8: 5851} Also, you can use .classes or .class_to_idx to get the mapping of label ids to classes: print(dataset.class_to_idx) {'0 - zero': 0, '1 - one': 1, '2 - two': 2, '3 - three': 3, '4 - four': 4, '5 - five': 5, '6 - six': 6, '7 - seven': 7, '8 - eight': 8, '9 - nine': 9} Edit: Method 1 From the comments, in order to get the class distribution of the training and testing sets separately, you can simply iterate over the subset as below: train_size = int(0.8 * len(dataset)) test_size = len(dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size]) # labels in training set train_classes = [label for _, label in train_dataset] Counter(train_classes) Counter({0: 4757, 1: 5363, 2: 4782, 3: 4874, 4: 4678, 5: 4321, 6: 4747, 7: 5024, 8: 4684, 9: 4770}) Edit (2): Method 2 Since you have a large dataset, and as you said it takes considerable time to iterate over the whole training set, there is another way: You can use .indices of the subset, which refers to the indices of the original dataset selected for the subset, i.e. train_classes = [dataset.targets[i] for i in train_dataset.indices] Counter(train_classes) # if that doesn't work: Counter(i.item() for i in train_classes) | 13 | 18 |
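Method 2 above boils down to plain index-based label counting; a framework-free sketch of the same idea (the function name and toy labels here are illustrative, not part of the torch API):

```python
from collections import Counter

def split_class_counts(labels, train_indices):
    # Count labels by looking them up per split index -- no iteration
    # over the samples themselves, mirroring "Method 2" above.
    train = Counter(labels[i] for i in train_indices)
    test_indices = set(range(len(labels))) - set(train_indices)
    test = Counter(labels[i] for i in test_indices)
    return train, test

labels = ["shoes", "shirts", "shoes", "shoes", "shirts"]
train, test = split_class_counts(labels, [0, 1, 2])
print(train, test)
```

With real PyTorch subsets, `labels` would be `dataset.targets` and `train_indices` would be `train_dataset.indices`, exactly as in the answer's snippet.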
62,315,295 | 2020-6-11 | https://stackoverflow.com/questions/62315295/convert-datetime-to-protobuf-timestamp-in-python | So I'm trying to prepare a message with Python that takes a Timestamp, but I'm having trouble converting a datetime to a protobuf Timestamp. Here's what I've tried so far: from google.protobuf.timestamp_pb2 import Timestamp import datetime now = datetime.datetime.now() timestamp = Timestamp() timestamp.FromDatetime(now) However, I'm getting an error AttributeError: 'Timestamp' object attribute 'seconds' is read-only How can I create a Timestamp from a datetime? | This code is working fine on my machine from google.protobuf.timestamp_pb2 import Timestamp import datetime now = datetime.datetime.now() timestamp = Timestamp() timestamp.FromDatetime(now) Output: seconds: 1591859232 nanos: 803377000 | 8 | 16 |
62,309,487 | 2020-6-10 | https://stackoverflow.com/questions/62309487/pybind11-init-with-lambda | I use pybind11 as a wrapper of my C++ code into a python library. It happens that there are arguments that I can't provide or sometimes I want to do a conversion/initialization that I know in the C++ side. It could be because the class is not known in python, for instance. How could that be done? The only "solution" I see so far would be to create an inherited proxy class in C++. Example: I want to define/bind a python class A: class A: def __init__(self, B b): ... With a C++ equivalent class: class A { A(C c, D d); } Is there some kind of lambda or an equivalent that I could create for the pybind11::init<>? | pybind11 lets you bind factory functions as init methods. So you would have to provide a function in c++ that took a B and return an A and then you could bind that as an init method for A. An example from the pybind11 docs class Example { private: Example(int); // private constructor public: // Factory function: static Example create(int a) { return Example(a); } }; py::class_<Example>(m, "Example") .def(py::init(&Example::create)); You should be able to bind in a free function as well (not just a static function), if you do not want to (or can't) change class A in c++. So it could look something like this (changed to return a unique_ptr, which pybind can just take ownership of vs a raw instance. But either should work) std::unique_ptr<A> createA(const B& arg) { // returns an instance of A that you made using B } py::class_<A>(m, "A") .def(py::init(&createA)); You obviously then have to also provide a binding for B in python. Docs are here and include even more examples, including how to do an init lambda as well: https://pybind11.readthedocs.io/en/stable/advanced/classes.html#custom-constructors | 9 | 10 |
62,299,740 | 2020-6-10 | https://stackoverflow.com/questions/62299740/how-do-i-detect-and-invoke-a-function-when-a-python-enum-member-is-accessed | I have an enum for which some of the members are deprecated: from enum import Enum class Foo(Enum): BAR = "bar" BAZ = "baz" # deprecated How do I get the following behavior: When somebody writes Foo.BAR, everything behaves normally When somebody writes Foo.BAZ, a DeprecationWarning is issued using warnings.warn("BAZ is deprecated", DeprecationWarning). Afterwards everything behaves normally. The same behavior should apply when members are accessed in other ways, e.g. Foo("baz") and Foo["BAZ"] should raise a DeprecationWarning. Things I have tried, but failed: Overwrite _missing_ and don't define BAZ. Does not work, because in the end I still need to return an existing member for a while (until our DB is cleaned of the deprecated value). But I can not dynamically add members to an enum. If I define it, _missing_ is not called. overwrite any of __getattr__, __getattribute__. These are called when accessing attributes of a member, e.g. Foo.BAZ.boo, not when accessing Foo.BAZ. I guess this could work if I could overwrite __getattr__ of EnumMeta and then make Enum use the child meta class. However, I don't see how that can be done either. overwrite __class_getitem__: Reserved for static typing and not called anyways. Abuse _generate_next_value_. This function is only called on class creation, so I would get a deprecation warning once when the class is created, regardless of whether the deprecated member is accessed or not. But that is not what I want. Look at this question. It does not solve my problem, as the goal there is filtering of deprecated members during iteration. TLDR: How can I detect and invoke a function when an enum member is accessed? I am working with python 3.8, so new features are fine. | This appears to be one of those times when subclassing EnumMeta is the right thing to do.
The new metaclass will run an _on_access method, if it exists, whenever a member is accessed: class OnAccess(EnumMeta): """ runs a user-specified function whenever member is accessed """ # def __getattribute__(cls, name): obj = super().__getattribute__(name) if isinstance(obj, Enum) and obj._on_access: obj._on_access() return obj # def __getitem__(cls, name): member = super().__getitem__(name) if member._on_access: member._on_access() return member # def __call__(cls, value, names=None, *, module=None, qualname=None, type=None, start=1): obj = super().__call__(value, names, module=module, qualname=qualname, type=type, start=start) if isinstance(obj, Enum) and obj._on_access: obj._on_access() return obj The new base Enum treats any extra arguments on member creation as arguments for a deprecate function, and sets the _on_access attribute to that function only if extra arguments are given: class DeprecatedEnum(Enum, metaclass=OnAccess): # def __new__(cls, value, *args): member = object.__new__(cls) member._value_ = value member._args = args member._on_access = member.deprecate if args else None return member # def deprecate(self): args = (self.name, ) + self._args import warnings warnings.warn( "member %r is deprecated; %s" % args, DeprecationWarning, stacklevel=3, ) And our example Enum with deprecated members: class Foo(DeprecatedEnum): BAR = "bar" BAZ = "baz", "use something else" And the warnings (from a test script): # no warning here list(Foo) # nor for non-deprecated members Foo.BAR # but direct use of deprecated members does generate warnings Foo.BAZ /home/ethan/test:74: DeprecationWarning: member 'BAZ' is deprecated; use something else Foo.BAZ Foo('baz') /home/ethan/test:75: DeprecationWarning: member 'BAZ' is deprecated; use something else Foo('baz') Foo['BAZ'] /home/ethan/test:76: DeprecationWarning: member 'BAZ' is deprecated; use something else Foo['BAZ'] And all the deprecated members in Foo: >>> print([m.name for m in Foo if m._args]) ['BAZ'] 
Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 8 | 13 |
62,301,268 | 2020-6-10 | https://stackoverflow.com/questions/62301268/whenever-i-try-to-install-torch-it-displays-killed | I just want to install pytorch, I ran this in the terminal: pip install torch And it displays: Collecting torch Killed What is the problem? | It says your free RAM is not enough to install the package, but there is a method you can still use. pip install torch --no-cache-dir | 36 | 112 |
62,267,544 | 2020-6-8 | https://stackoverflow.com/questions/62267544/generate-pydantic-model-from-a-dict | Is there a straightforward approach to generate a Pydantic model from a dictionary? Here is a sample of the data I have. { 'id': '424c015f-7170-4ac5-8f59-096b83fe5f5806082020', 'contacts': [{ 'displayName': 'Norma Fisher', 'id': '544aa395-0e63-4f9a-8cd4-767b3040146d' }], 'startTime': '2020-06-08T09:38:00+00:00' } Expecting a model similar to ... class NewModel(BaseModel): id: str contacts: list startTime: str | In Pydantic 2, you can use MyModel.model_validate(my_dict) to generate a model from a dictionary. According to the documentation, this is very similar to the __init__ method of the model, except it takes a dict rather than keyword arguments. If you're on Pydantic 1, the method is parse_obj instead. | 78 | 124 |
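If Pydantic isn't available, the dict-to-typed-object step can be approximated with the stdlib. This is a rough sketch only, with none of Pydantic's validation or coercion; the helper name and sample values are illustrative:

```python
from dataclasses import dataclass, fields

@dataclass
class NewModel:
    id: str
    contacts: list
    startTime: str

def from_dict(cls, data: dict):
    # Keep only keys that match declared fields; unlike Pydantic,
    # this performs no type validation or conversion.
    names = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in names})

m = from_dict(NewModel, {
    "id": "424c015f",
    "contacts": [{"displayName": "Norma Fisher"}],
    "startTime": "2020-06-08T09:38:00+00:00",
    "extra": "ignored",
})
```

Unknown keys are silently dropped here, whereas Pydantic's behaviour for extras is configurable.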
62,261,355 | 2020-6-8 | https://stackoverflow.com/questions/62261355/how-to-add-watermark-in-all-pages-of-pdf-files-with-python | I'm trying to add a watermark to every page of my PDF file. My PDF file has 58 pages but my output file gets only the last page. This is my code: from PyPDF2 import PdfFileReader, PdfFileWriter watermark_pdf = PdfFileReader("watermark.pdf") watermark_page = watermark_pdf.getPage(0) reader = PdfFileReader("original_document.pdf") for page in reader.pages: page.mergePage(watermark_page) output = PdfFileWriter() output.addPage(page) with open("watermarked_document.pdf", "wb") as fp: output.write(fp) Please tell me how to add the watermark to all pages. | You're rewriting your "merged" file for each page. Try something like from PyPDF2 import PdfFileMerger, PdfFileReader, PdfFileWriter pdf_file = "C:/Users/11359023/Desktop/deepfake_vee.pdf" watermark = "C:/Users/11359023/Desktop/simple.pdf" merged = "C:/Users/11359023/Desktop/merged.pdf" with open(pdf_file, "rb") as input_file, open(watermark, "rb") as watermark_file: input_pdf = PdfFileReader(input_file) watermark_pdf = PdfFileReader(watermark_file) watermark_page = watermark_pdf.getPage(0) output = PdfFileWriter() for i in range(input_pdf.getNumPages()): pdf_page = input_pdf.getPage(i) pdf_page.merge_page(watermark_page) output.addPage(pdf_page) with open(merged, "wb") as merged_file: output.write(merged_file) instead. Edit: mergePage() has been deprecated and removed in PyPDF2 3.0.0; use merge_page() instead. | 7 | 10 |
62,163,460 | 2020-6-3 | https://stackoverflow.com/questions/62163460/remove-a-legend-section-from-a-seaborn-plot | Using the 'tips' dataset as a toy model, I generate the following plot: import seaborn as sns import matplotlib.pyplot as plt tips = sns.load_dataset("tips") cmap = sns.cubehelix_palette(dark=.3, light=.8, as_cmap=True) g = sns.scatterplot(x="total_bill", y="sex", hue="smoker", size = 'tip',sizes=(320, 600), data=tips) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize=13) plt.show(g) This image is exactly what I need. However, I want to remove the size = 'tip' from the legend and only keep the smoker. Essentially, remove those black circles labeled 0.0 to 12.0. How do I ensure my legend has only one variable of my choosing? | I was able to find a fix by indexing the labels in the legend. import seaborn as sns import matplotlib.pyplot as plt tips = sns.load_dataset("tips") cmap = sns.cubehelix_palette(dark=.3, light=.8, as_cmap=True) ax = sns.scatterplot(x="total_bill", y="sex", hue="smoker", size='tip', sizes=(320, 600), data=tips) # extract the existing handles and labels h, l = ax.get_legend_handles_labels() # slice the appropriate section of l and h to include in the legend ax.legend(h[0:3], l[0:3], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize=13) plt.show() | 12 | 13 |
62,240,559 | 2020-6-7 | https://stackoverflow.com/questions/62240559/closing-open-positions-on-binance | I am using the Binance Python API (Python 3.x). When one uses the “create_order” functionality, it creates an order on the SPOT exchange with a STATUS of NEW. When it gets filled, the STATUS goes to FILLED. Also, when it is FILLED, my understanding is that a POSITION is being created (Long or Short). My question is as follows: What endpoint can I use to get a list of the Open Positions? Why do I want this? If a Position is on the SELL side, I would like to execute a BUY to close it. If a Position is on the BUY side, I would like to execute a SELL to close it. Can this be done? Any help, hints or advice would be ~greatly~ appreciated. TIA @michaeldel ETA: I am using this here: https://python-binance.readthedocs.io/en/latest/ For the Orders, I have been following: https://python-binance.readthedocs.io/en/latest/account.html?highlight=orders#orders Can you note what the equivalent would be under this (Python) API? I have been using "get_all_orders" with a focus on the "STATUS" being "FILLED". https://python-binance.readthedocs.io/en/latest/binance.html#binance.client.Client.get_all_orders I was looking for Open Positions (not Orders). If a BTCUSDT SELL Position has status=FILLED with an origQty of .20, I want to be able to reverse it with a BUY and a Quantity of .20 If a BTCUSDT BUY Position has status=FILLED and an origQty of .30, I want to be able to reverse it with a SELL and a Quantity of .30 Does this make sense? Is there a better way to do it? Am I missing something? Thanks for the input! | Also, when it is FILLED, my understanding is that a POSITION is being created (Long or Short) As far as I know, Binance does not provide semantics for positions (in terms of trading). Such abstractions are usually implemented for derivatives (e.g. futures) when it comes to currency markets, since buying and selling currencies to make a profit is not their only use. On Binance, and most other cryptocurrency exchanges, you are making spot transactions, i.e. giving some amount of a currency to receive some amount of another currency. Plain and simple. You may abstract positions yourself though, but that may involve much more work, especially considering heterogeneous chains of transactions (e.g. BTC -> ETH -> USDT -> BTC), partial fills, etc. | 7 | 2 |
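Since spot has no position endpoint, a "position" has to be reconstructed client-side from filled orders, as the answer suggests. A minimal sketch of that bookkeeping follows; the order dicts mimic fields that Binance order responses carry (symbol, side, origQty, status), but treat the exact shape — and the naive float netting, which ignores fees and partial fills — as assumptions:

```python
from collections import defaultdict

def net_positions(filled_orders):
    # Net quantity per symbol: BUY adds, SELL subtracts.
    net = defaultdict(float)
    for o in filled_orders:
        qty = float(o["origQty"])
        net[o["symbol"]] += qty if o["side"] == "BUY" else -qty
    return dict(net)

def closing_order(symbol, net_qty):
    # Reverse the net exposure: sell down a long, buy back a short.
    side = "SELL" if net_qty > 0 else "BUY"
    return {"symbol": symbol, "side": side, "quantity": abs(net_qty)}

orders = [
    {"symbol": "BTCUSDT", "side": "SELL", "origQty": "0.20", "status": "FILLED"},
    {"symbol": "BTCUSDT", "side": "BUY", "origQty": "0.50", "status": "FILLED"},
]
pos = net_positions(orders)
print(closing_order("BTCUSDT", pos["BTCUSDT"]))
```

In practice the input would come from something like client.get_all_orders filtered to status FILLED, and the result would feed a real order call.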
62,288,835 | 2020-6-9 | https://stackoverflow.com/questions/62288835/how-to-interpret-conda-package-conflicts | I am attempting to create a conda environment with 3 packages and a specific python version and get the following output: $ conda create -n testing_junk -y instrain awscli samtools python=3.8 Collecting package metadata (current_repodata.json): done Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: | Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed \ UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions Package ncurses conflicts for: python=3.8 -> ncurses[version='>=6.1,<6.2.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0'] awscli -> python[version='>=3.8,<3.9.0a0'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.2.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*'] instrain -> python[version='>=3.4'] -> ncurses[version='5.9.*|5.9|>=6.1,<6.2.0a0|>=6.2,<7.0a0|>=6.1,<7.0a0|>=6.0,<7.0a0|6.0.*'] python=3.8 -> readline[version='>=7.0,<8.0a0'] -> ncurses[version='5.9.*|>=6.0,<7.0a0|6.0.*'] samtools -> ncurses[version='5.9|5.9.*|>=5.9,<5.10.0a0|>=6.1,<6.2.0a0'] Package python conflicts for: awscli -> python[version='2.7.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.5,<3.6.0a0|3.4.*'] python=3.8 instrain -> biopython -> python[version='2.7.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3.6,<3.7.0a0|>=3.8,<3.9.0a0|>=3.7,<3.8.0a0|>=3.5,<3.6.0a0|3.4.*|>3|>=3.5|<3.0.0|>=3.6'] instrain -> python[version='>=3.4'] awscli -> python_abi=3.8[build=*_cp38] -> python[version='3.7.*|3.8.*'] Package ca-certificates conflicts for: samtools -> openssl[version='>=1.1.1a,<1.1.2a'] -> ca-certificates python=3.8 -> openssl[version='>=1.1.1g,<1.1.2a'] -> ca-certificates awscli -> 
python[version='>=2.7,<2.8.0a0'] -> ca-certificates Package setuptools conflicts for: python=3.8 -> pip -> setuptools instrain -> matplotlib-base -> setuptools[version='>=40.0'] Package libgcc-ng conflicts for: samtools -> ncurses[version='>=6.1,<6.2.0a0'] -> libgcc-ng[version='>=7.2.0'] samtools -> libgcc-ng[version='>=4.9|>=7.3.0'] Package pypy3.6 conflicts for: instrain -> numpy -> pypy3.6[version='7.3.0.*|7.3.1.*|>=7.3.1'] awscli -> python[version='>=3.6,<3.7.0a0'] -> pypy3.6[version='7.3.*|7.3.0.*|7.3.1.*'] Package bzip2 conflicts for: samtools -> bzip2[version='1.0.*|>=1.0.6,<2.0a0|>=1.0.8,<2.0a0'] instrain -> pysam -> bzip2[version='>=1.0.6,<2.0a0|>=1.0.8,<2.0a0'] awscli -> python[version='>=3.7,<3.8.0a0'] -> bzip2[version='>=1.0.6,<2.0a0|>=1.0.8,<2.0a0'] Package zlib conflicts for: samtools -> zlib[version='1.2.11.*|>=1.2.11,<1.3.0a0|1.2.8.*|1.2.8'] samtools -> curl[version='>=7.59.0,<8.0a0'] -> zlib[version='1.2.*|1.2.11'] Package samtools conflicts for: samtools instrain -> pysam -> samtools[version='1.3|1.3.1.*|1.3.1|1.5.*|1.6.*|1.7|1.7.*|1.9.*|>=1.4.1|>=1.4.1,<1.5|>=1.4,<1.5|>=1.3,<1.4|>=1.3'] Package openssl conflicts for: samtools -> curl[version='>=7.59.0,<8.0a0'] -> openssl[version='1.0.*|>=1.0.2o,<1.0.3a|>=1.0.2m,<1.0.3a'] samtools -> openssl[version='>=1.0.2p,<1.0.3a|>=1.0.2r,<1.0.3a|>=1.1.1a,<1.1.2a'] Package _libgcc_mutex conflicts for: samtools -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex[version='*|0.1',build='main|conda_forge'] python=3.8 -> libgcc-ng[version='>=7.5.0'] -> _libgcc_mutex[version='*|0.1',build='main|conda_forge']The following specifications were found to be incompatible with your CUDA driver: - feature:/linux-64::__cuda==10.2=0 - feature:|@/linux-64::__cuda==10.2=0 Your installed CUDA driver is: 10.2 I understand that there is something about the packages that conflict with each other, but I'm unable to interpret this output to understand what the problem is. 
For example, in looking at the first block of conflicts (related to ncurses), shouldn't version 6.1 satisfy all requirements listed? Additionally, for the block about package setuptools, I don't see any problem at all? Any insight into how to interpret these conflicts so that I can attempt to address them would be much appreciated. | Some Practical Advice @Quantum7's answer gives a fine literal interpretation of Conda's conflict reporting. However, I wanted to offer a more practical take, which is that this "feature" from Conda is too non-specific to be useful in most non-trivial environments. And sometimes it won't even include the underlying conflict. Don't waste your time with it! Conda's Conflict Reporting is Often Not Helpful On the face of it, Conda attempts to report all possible sources of conflict. That is, all sets of paths in the dependency graph that begin from the explicit specifications and end in the same package. This amounts to most of what is reported being innocuous and frankly distracting. For example, the zlib "conflicts": Package zlib conflicts for: samtools -> zlib[version='1.2.11.*|>=1.2.11,<1.3.0a0|1.2.8.*|1.2.8'] samtools -> curl[version='>=7.59.0,<8.0a0'] -> zlib[version='1.2.*|1.2.11'] Since samtools depends on zlib both directly and indirectly (mediated through curl), this comes up as two alternate paths that lead to constraints. The problem is that the intersection of the final constraints are not empty, such that there is nothing incompatible here. Furthermore, there are cases where none of what is reported is in conflict (e.g., this question or this one), which means parsing through the output could be a complete waste of time. Try Mamba Instead, if one is actually concerned with resolving conflicts, I find Mamba to be more effective to work with, both in speed and precision. 
# install mamba conda install -n base conda-forge::mamba # use 'mamba' just like 'conda' mamba create -n foo instrain awscli samtools python=3.8 Unfortunately, this example simply works now. However, there are other questions where Conda and Mamba unsatisfiability reporting is compared, e.g., this question. | 39 | 34 |
62,178,888 | 2020-6-3 | https://stackoverflow.com/questions/62178888/can-someone-explain-to-me-how-minmaxscaler-works | Why are we using MinMaxScaler() and what does it do? scaler = MinMaxScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) model = LogisticRegression() model.fit(X_train, y_train) y_pred = model.predict(X_test) | Core of the method A way to normalize the input features/variables is the Min-Max scaler. By doing so, all features will be transformed into the range [0,1], meaning that the minimum and maximum value of a feature/variable will be 0 and 1, respectively. Why normalize prior to model fitting? The main idea behind normalization/standardization is always the same. Variables that are measured at different scales do not contribute equally to the model fitting and the learned model function, and might end up creating a bias. Thus, to deal with this potential problem, feature-wise normalization such as MinMax scaling is usually used prior to model fitting. More here: https://towardsdatascience.com/everything-you-need-to-know-about-min-max-normalization-in-python-b79592732b79 | 13 | 34 |
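The transformation itself is just x' = (x - min) / (max - min) per feature. A pure-Python sketch of what fit/transform compute (ignoring sklearn's edge-case handling, e.g. constant columns, and working on a single column for brevity):

```python
def minmax_fit(column):
    # "fit": learn the column's min and max from training data only
    return min(column), max(column)

def minmax_transform(column, lo, hi):
    # Map values into [0, 1] using the *training* min/max, which is
    # why the scaler above is fit on X_train and merely applied to X_test.
    return [(x - lo) / (hi - lo) for x in column]

lo, hi = minmax_fit([10, 20, 40])            # fit on training data
print(minmax_transform([10, 25, 40], lo, hi))  # [0.0, 0.5, 1.0]
```

Fitting on the training data and reusing the same (lo, hi) for the test data mirrors the scaler.fit / scaler.transform split in the question's snippet.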
62,267,292 | 2020-6-8 | https://stackoverflow.com/questions/62267292/fastapi-pydantic-accept-arbitrary-post-request-body | I want to create a FastAPI endpoint that just accepts an arbitrary post request body and returns it. If I send {"foo" : "bar"}, I want to get {"foo" : "bar"} back. But I also want to be able to send {"foo1" : "bar1", "foo2" : "bar2"} and get that back. I tried: from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() @app.post("/") async def handle(request: BaseModel): return request But that returns an empty dictionary, no matter what I send it. Any ideas? | The accepted answer works as long as the input is wrapped in a dictionary, that is, it starts with a { and ends with a }. However, that does not cover all valid JSON inputs. For example, the following valid JSON inputs would fail: true / false 1.2 null "text" [1,2,3] In order to have a truly generic JSON input accepted by the endpoint, the following would work: from typing import Any, Dict, List, Union from fastapi import FastAPI app = FastAPI() @app.post("/") async def handle(request: Union[List,Dict,Any]=None): return request Just using Any did not work for some reason. When I used it, FastAPI was expecting the input from the query arguments, not from the request body. The =None makes it accept null and also an empty body. You can leave that part off and then the request body will be required to be not empty/null. If you are using Python 3.10 then you can get rid of the Union and write the definition as: async def handle(request: List | Dict | Any = None): | 17 | 10 |
62,279,710 | 2020-6-9 | https://stackoverflow.com/questions/62279710/fastapi-variable-query-parameters | I am writing a Fast API server that accepts requests, checks if users are authorized and then redirects them to another URL if successful. I need to carry over URL parameters, e.g. http://localhost:80/data/?param1=val1¶m2=val2 should redirect to http://some.other.api/?param1=val1¶m2=val2, thus keeping previously allotted parameters. The parameters are not controlled by me and could change at any moment. How can I achieve this? Code: from fastapi import FastAPI from starlette.responses import RedirectResponse app = FastAPI() @app.get("/data/") async def api_data(): params = '' # I need this value url = f'http://some.other.api/{params}' response = RedirectResponse(url=url) return response | In the docs they talk about using the Request directly, which then lead me to this: from fastapi import FastAPI, Request from starlette.responses import RedirectResponse app = FastAPI() @app.get("/data/") async def api_data(request: Request): params = request.query_params url = f'http://some.other.api/?{params}' response = RedirectResponse(url=url) return response | 30 | 39 |
62,164,400 | 2020-6-3 | https://stackoverflow.com/questions/62164400/how-to-access-private-github-repo-file-csv-in-python-using-pandas-or-requests | I had to switch my public Github repository to private and can no longer access files, not even with the access tokens that worked while the repo was public. I can access my private repo's CSV with curl: ''' curl -s https://{token}@raw.githubusercontent.com/username/repo/master/file.csv ''' However, I want to access this information in my python file. When the repo was public I could simply use: ''' url = 'https://raw.githubusercontent.com/username/repo/master/file.csv' df = pd.read_csv(url, error_bad_lines=False) ''' This no longer works now that the repo is private, and I cannot find a workaround to download this CSV in python instead of pulling from the terminal. If I try: ''' requests.get(https://{token}@raw.githubusercontent.com/username/repo/master/file.csv) ''' I get a 404 response, which is basically the same thing that is happening with the pd.read_csv(). If I click on the raw file I see that a temporary token is created and the URL is: ''' https://raw.githubusercontent.com/username/repo/master/file.csv?token=TEMPTOKEN ''' Is there a way to attach my permanent private access token so that I can always pull this data from github? | This is what ended up working for me - leaving it here if anyone runs into the same issue. Thanks for the help! import io import pandas import requests user='my_github_username' pao='my_pao' github_session = requests.Session() github_session.auth = (user, pao) # providing raw url to download csv from github csv_url = 'https://raw.githubusercontent.com/user/repo/master/csv_name.csv' download = github_session.get(csv_url).content downloaded_csv = pandas.read_csv(io.StringIO(download.decode('utf-8')), error_bad_lines=False) | 13 | 3
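A variant of the same idea that keeps the token out of the URL by sending it in an Authorization header (the repo coordinates below are placeholders, not real ones); the CSV-parsing half is factored into its own helper so it can be exercised without a network call:

```python
import io

import pandas as pd
import requests

def parse_csv_text(text: str) -> pd.DataFrame:
    # the same io.StringIO trick the answer uses, separated out as a helper
    return pd.read_csv(io.StringIO(text))

def fetch_private_csv(token: str, user: str, repo: str, path: str,
                      branch: str = "master") -> pd.DataFrame:
    # hypothetical coordinates; the token travels in a header, not the URL
    url = f"https://raw.githubusercontent.com/{user}/{repo}/{branch}/{path}"
    resp = requests.get(url, headers={"Authorization": f"token {token}"})
    resp.raise_for_status()  # surfaces the 404 you get with a bad/missing token
    return parse_csv_text(resp.text)
```

This avoids the token showing up in logs or shell history, which the `https://{token}@...` form does not.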
62,290,209 | 2020-6-9 | https://stackoverflow.com/questions/62290209/pandas-resample-with-start-date | I'd like to resample a pandas object using a specific date (or month) as the edge of the first bin. For instance, in the following snippet I'd like my first index value to be 2020-02-29 and I'd be happy specifying start=2 or start="2020-02-29". >>> dates = pd.date_range("2020-01-29", "2021-07-04") >>> s = pd.Series(range(len(dates)), index=dates) >>> s.resample('4M').count() 2020-01-31 3 2020-05-31 121 2020-09-30 122 2021-01-31 123 2021-05-31 120 2021-09-30 34 Freq: 4M, dtype: int64 So far this is the cleanest I can come up with uses pd.cut and groupby: >>> rule = "4M" >>> start = pd.Timestamp("2020-02-29") - pd.tseries.frequencies.to_offset(rule) >>> end = s.index.max() + pd.tseries.frequencies.to_offset(rule) >>> bins = pd.date_range(start, end, freq=rule) >>> gb = s.groupby(pd.cut(s.index, bins)).count() >>> gb.index = gb.index.categories.right >>> gb 2020-02-29 32 2020-06-30 122 2020-10-31 123 2021-02-28 120 2021-06-30 122 2021-10-31 4 dtype: int64 | My answer feels a little hacky, but uses resample and gives the desired output. Find the date one bin length (e.g. 4 months, or month ends specifically) before the specified date, append it to s, and then resample: rule = '4M' date = '02-29-2020' base_date = pd.to_datetime(date) - pd.tseries.frequencies.to_offset(rule) s.loc[base_date] = np.nan output = s.resample(rule=rule).count() output=output[output.index >= date] Result: 2020-02-29 32 2020-06-30 122 2020-10-31 123 2021-02-28 120 2021-06-30 122 2021-10-31 4 Freq: 4M, dtype: int64 I added output=output[output.index >= date] b/c otherwise you get an additional empty bin: 2019-10-31 0 2020-02-29 32 2020-06-30 122 2020-10-31 123 2021-02-28 120 2021-06-30 122 2021-10-31 4 Freq: 4M, dtype: int64 | 21 | 10 |
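Worth noting: since pandas 1.1, `resample` accepts an `origin` argument that anchors the bin edges directly, which removes the need for the sentinel-row hack in the accepted answer. A small sketch using a daily rule so the bins are easy to check by hand:

```python
import pandas as pd

s = pd.Series(range(10), index=pd.date_range("2020-01-01", periods=10, freq="D"))

# anchor 3-day bins at 2019-12-31 instead of the default first-observation edge;
# bins become [12-31, 01-03), [01-03, 01-06), [01-06, 01-09), [01-09, 01-12)
out = s.resample("3D", origin=pd.Timestamp("2019-12-31")).count()
print(out)
```

Without `origin` the first edge would sit at the first observation (2020-01-01) and the counts would come out `[3, 3, 3, 1]` instead.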
62,230,582 | 2020-6-6 | https://stackoverflow.com/questions/62230582/http-method-not-allowed-when-trying-to-capture-console | I am trying to capture the console log of Firefox using Selenium but I am getting an "HTTP method not allowed" error. This is how I am doing it currently: from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities # enable browser logging d = DesiredCapabilities.FIREFOX d['loggingPrefs'] = {'browser': 'ALL'} driver = webdriver.Firefox(capabilities=d) # load some site driver.get('url') # print messages for entry in driver.get_log('browser'): print(entry) The error I am getting is selenium.common.exceptions.WebDriverException: Message: HTTP method not allowed | get_log is not implemented by Firefox driver. See https://github.com/mozilla/geckodriver/issues/330 | 7 | 8
62,175,978 | 2020-6-3 | https://stackoverflow.com/questions/62175978/is-0-is-0-always-true-in-python | Python 3.8 (or CPython 3.8?) added the warning SyntaxWarning: "is" with a literal. Did you mean "=="? for the code 0 is 0. I understand the warning, and I know the difference between is and ==. However, I also know that CPython caches the object for small integers and shares it in other cases as well. (Out of curiosity, I just checked the code (header) again. Small ints are cached in tstate->interp->small_ints. 0 and 1 are even more special and are stored globally in _PyLong_Zero and _PyLong_One. All new creations of ints are via PyLong_FromLong and that one first checks if it is a small integer and cached.) Given this background, if you know you have an int object, you could say that the check x is 0 should be safe, right? Also, you could derive that 0 is 0 should always be True, right? Or is this an implementation detail of CPython and other interpreters do not follow this? Which interpreter does not follow this? Despite this more generic question (which I'm just curious about), consider this more specific (example) code: def sum1a(*args): y = 0 for x in args: if y is 0: y = x else: y = y + x return y Vs: def sum1b(*args): y = 0 for x in args: if y == 0: y = x else: y = y + x return y Vs: def sum1c(*args): y = None for x in args: if y is None: y = x else: y = y + x if y is None: return 0 return y Vs: def sum2(*args): y = 0 for x in args: y = y + x return y The reason I would sometimes prefer sum1* over sum2 is that depending on the library, sum1* can really be more efficient. E.g. if the argument is a Numpy/TensorFlow/PyTorch array, you really would save a (potentially costly) operation here. The reason I would prefer sum1a over sum1b is that sum1b would break on certain inputs. E.g. if the input is a Numpy array, this would not work. Of course, you could use sum1c instead of sum1a. However, sum1a is shorter. So this is nicer? 
If the answer to the original question is that this should always work, and if you agree that sum1a is the best option, then how would you get rid of the warning? Is there a simple workaround? In general, I can see that the warning can be useful. So I would not want to disable it completely. I just want to disable it for this specific statement. Maybe I could wrap it up in a function: def is_(a, b): return a is b And then just use if is_(y, 0): .... Does this work? Is that a good idea? | No, it isn't. Case in point: the Rust implementation for Python returns False: >>>>> 0 is 0 False and this is not wrong, though I expect this to change in future versions (it has!). is calls id whose only stipulation is that the id returned is unique and constant for a given object. Whether the source code representation for a number (0 here) maps to a distinct object or not is up for the implementation to define. | 9 | 9
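The CPython small-int cache the question describes can be observed directly. The ints below are constructed at runtime so the compiler can't fold identical literals into one constant (which would also trigger the SyntaxWarning); the specific results are CPython implementation details, exactly as the accepted answer warns, and other interpreters may differ:

```python
# int("...") forces object creation at runtime rather than compile time
small_a, small_b = int("256"), int("256")
big_a, big_b = int("257"), int("257")

print(small_a is small_b)  # True on CPython: values -5..256 come from the small-int cache
print(big_a is big_b)      # False on CPython: 257 is allocated fresh each time
```

This is why `y is 0` in `sum1a` happens to work on CPython but is not something the language guarantees.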
62,198,351 | 2020-6-4 | https://stackoverflow.com/questions/62198351/why-doesnt-pytorch-allow-inplace-operations-on-leaf-variables | So if I run this code in Pytorch: x = torch.ones(2,2, requires_grad=True) x.add_(1) I will get the error: RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. I understand that Pytorch does not allow inplace operations on leaf variables and I also know that there are ways to get around this restriction. What I don't understand is the philosophy behind this rule. Why is it wrong to change a leaf variable with inplace operations? | As I understand it, any time you do a non-traditional operation on a tensor that was initialized with requires_grad=True, Pytorch throws an error to make sure it was intentional. For example, you normally would only update a weight tensor using optimizer.step(). For another example, I ran into this issue when trying to update the values in a backprop-able tensor during network initialization. self.weight_layer = nn.Parameter(data=torch.zeros(seq_length), requires_grad=True) self.weight_layer[true_ids == 1] = -1.2 RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. The problem is that, because requires_grad=True, the network doesn't know that I'm still initializing the values. If this is what you are trying to do, wrapping the update in a torch.no_grad block is one solution: with torch.no_grad(): self.weight_layer = nn.Parameter(data=torch.zeros(seq_length), requires_grad=True) self.weight_layer[true_ids == 1] = -1.2 Otherwise, you could just set requires_grad=True after you finish initializing the Tensor: self.weight_layer = nn.Parameter(data=torch.zeros(seq_length)) self.weight_layer[true_ids == 1] = -1.2 self.weight_layer.requires_grad = True | 12 | 15
62,264,787 | 2020-6-8 | https://stackoverflow.com/questions/62264787/mypy-fastapi-response-model | I've been tasked with handling the update from Mypy 0.770 to 0.870 in our FastAPI project, and this has produced an error that I can't quite wrap my head around. My endpoint can return two different models based on some condition, and this was denoted as follows the endpont decorator: @router.get("/", response_model=Union[Model1, Model2]) Mypy 0.870 now complains about this, stating that Argument "response_model" to "get" of "APIRouter" has incompatible type "object"; expected "Optional[Type[Any]]" Setting it to single types, such as Model1 or even str removes the error. Any however, does not work. Now, looking into the get method, I see that the response_model argument is typed as Type[Any], which I assume must be a pointer. How I can define non-simple return models for my API, and make Mypy happy? edit: I tried to reproduce the problem in a smaller frame, but couldn't. The following code works fine: from typing import Any, Type, Union def test1(var, response_model: Type[Any]): print(f"Accepted Type[Any], {var}") def test2(var, response_model: Union[dict, set]): print(f"Accepted Union, {var}") def main(): test1('test1', response_model=Union[dict, set]) test2('test2', response_model=Union[dict, set]) if __name__ == '__main__': main() | This is a compatibility issue introduced in newer versions of mypy. There is an open issue on Github about this topic: https://github.com/tiangolo/fastapi/issues/2279 In the discussion they provide the following workarounds: Using a different approach to create a type alias: NewModel = TypeVar('NewModel',Model1,Model2) Creating a new Pydantic Model : class NewModel(BaseModel): __root__: Union[Model1, Model2] | 7 | 5 |
62,254,125 | 2020-6-8 | https://stackoverflow.com/questions/62254125/plot-multiple-distplot-in-seaborn-facetgrid | I have a dataframe which looks like below: df: RY MAJ_CAT Value 2016 Cause Unknown 0.00227 2016 Vegetation 0.04217 2016 Vegetation 0.04393 2016 Vegetation 0.07878 2016 Defective Equip 0.00137 2018 Cause Unknown 0.00484 2018 Defective Equip 0.01546 2020 Defective Equip 0.05169 2020 Defective Equip 0.00515 2020 Cause Unknown 0.00050 I want to plot the distribution of the value over the given years. So I used distplot of seaborn by using following code: year_2016 = df[df['RY']==2016] year_2018 = df[df['RY']==2018] year_2020 = df[df['RY']==2020] sns.distplot(year_2016['value'].values, hist=False,rug=True) sns.distplot(year_2018['value'].values, hist=False,rug=True) sns.distplot(year_2020['value'].values, hist=False,rug=True) In the next step I want to plot the same value distribution over the given year w.r.t MAJ_CAT. So I decided to use Facetgrid of seaborn, below is the code : g = sns.FacetGrid(df,col='MAJ_CAT') g = g.map(sns.distplot,df[df['RY']==2016]['value'].values, hist=False,rug=True)) g = g.map(sns.distplot,df[df['RY']==2018]['value'].values, hist=False,rug=True)) g = g.map(sns.distplot,df[df['RY']==2020]['value'].values, hist=False,rug=True)) However, when it ran the above command, it throws the following error: KeyError: "None of [Index([(0.00227, 0.04217, 0.043930000000000004, 0.07877999999999999, 0.00137, 0.0018800000000000002, 0.00202, 0.00627, 0.00101, 0.07167000000000001, 0.01965, 0.02775, 0.00298, 0.00337, 0.00088, 0.04049, 0.01957, 0.01012, 0.12065, 0.23699, 0.03639, 0.00137, 0.03244, 0.00441, 0.06748, 0.00035, 0.0066099999999999996, 0.00302, 0.015619999999999998, 0.01571, 0.0018399999999999998, 0.03425, 0.08046, 0.01695, 0.02416, 0.08975, 0.0018800000000000002, 0.14743, 0.06366000000000001, 0.04378, 0.043, 0.02997, 0.0001, 0.22799, 0.00611, 0.13960999999999998, 0.38871, 0.018430000000000002, 0.053239999999999996, 0.06702999999999999, 
0.14103, 0.022719999999999997, 0.011890000000000001, 0.00186, 0.00049, 0.13947, 0.0067, 0.00503, 0.00242, 0.00137, 0.00266, 0.38638, 0.24068, 0.0165, 0.54847, 1.02545, 0.01889, 0.32750999999999997, 0.22526, 0.24516, 0.12791, 0.00063, 0.0005200000000000001, 0.00921, 0.07665, 0.00116, 0.01042, 0.27046, 0.03501, 0.03159, 0.46748999999999996, 0.022090000000000002, 2.2972799999999998, 0.69021, 0.22529000000000002, 0.00147, 0.1102, 0.03234, 0.05799, 0.11744, 0.00896, 0.09556, 0.03202, 0.01347, 0.00923, 0.0034200000000000003, 0.041530000000000004, 0.04848, 0.00062, 0.0031100000000000004, ...)], dtype='object')] are in the [columns]" I am not sure where am I making the mistake. Could anyone please help me in fixing the issue? | setup the dataframe import pandas as pd import numpy as np import seaborn as sns # setup dataframe of synthetic data np.random.seed(365) data = {'RY': np.random.choice([2016, 2018, 2020], size=400), 'MAJ_CAT': np.random.choice(['Cause Unknown', 'Vegetation', 'Defective Equip'], size=400), 'Value': np.random.random(size=400) } df = pd.DataFrame(data) Updated Answer From seaborn v0.11 Use sns.displot with kind='kde' and rug=True Is a figure-level interface for drawing distribution plots onto a FacetGrid. Plotting all 'MAJ_CAT' together sns.displot(data=df, x='Value', hue='RY', kind='kde', palette='tab10', rug=True) Plotting 'MAJ_CAT' separately sns.displot(data=df, col='MAJ_CAT', x='Value', hue='RY', kind='kde', palette='tab10', rug=True) Original Answer In seaborn v0.11, distplot is deprecated distplot Consolidate the original code to generate the distplot for year in df.RY.unique(): values = df.Value[df.RY == year] sns.distplot(values, hist=False, rug=True) facetgrid properly configure the mapping and add hue to FacetGrid g = sns.FacetGrid(df, col='MAJ_CAT', hue='RY') p1 = g.map(sns.distplot, 'Value', hist=False, rug=True).add_legend() | 8 | 9 |
62,281,476 | 2020-6-9 | https://stackoverflow.com/questions/62281476/attributeerror-timedeltaproperties-object-has-no-attribute-minute | I have a dataframe that looks like this df [output]: date time 2020-02-28 00:30:45 2020-02-28 00:30:45 2020-03-09 00:21:06 2020-03-09 00:21:06 2020-03-09 00:21:06 with df.time.dtype [output]: dtype('<m8[ns]') I want to extract the minutes in the time variable with the following command df.time.dt.minute but instead, I have this error AttributeError: 'TimedeltaProperties' object has no attribute 'minute' Does someone know how to fix this problem? | your column 'time' is of dtype timedelta as the error tells you; you could use the total_seconds() method to convert to seconds and divide by 60 to get the minutes. If you want a full-featured datetime column, combine 'date' and 'time'. Then you can use .dt.minute. Ex: import pandas as pd df = pd.DataFrame({'time': pd.to_timedelta(['00:30:45','00:30:45','00:21:06','00:21:06','00:21:06']), 'date': pd.to_datetime(['2020-02-28','2020-02-28','2020-03-09','2020-03-09','2020-03-09'])}) # to get the "total minutes": df['minutes'] = df['time'].dt.total_seconds()/60 df['minutes'] # 0 30.75 # 1 30.75 # 2 21.10 # 3 21.10 # 4 21.10 # Name: minutes, dtype: float64 [pd.Timedelta docs] # to get a column of dtype datetime: df['DateTime'] = df['date'] + df['time'] # now you can do: df['DateTime'].dt.minute # 0 30 # 1 30 # 2 21 # 3 21 # 4 21 # Name: DateTime, dtype: int64 | 14 | 21 |
62,287,150 | 2020-6-9 | https://stackoverflow.com/questions/62287150/django-geodjango-read-coordinates-in-the-wrong-order | first of all thanks for your help. I'm making a form with Django which uses the OSMWidget to save coordinates (Polygons, Lines and Points) to a Geometry field in a PostgreSQL database. It works well, I can save the information in the database without any problem. And when I make a query with PgAdmin I can see the geometric fields data displayed in a Leaflet map correctly. . Here's some of what I have in my forms.py: from django import forms from django_select2 import forms as select2_forms from django.contrib.gis import forms as osmforms from django.forms import ModelForm from .models import Dataset class SessionForm(forms.ModelForm): at_choices = [(item.title, item.title) for item in Dataset.objects.all()] key_choices = [(item.keywords_d, item.keywords_d) for item in Dataset.objects.all()] uuid = forms.CharField(label='', max_length=10 , widget=forms.TextInput(attrs={'class': "form-control left-half"})) title = forms.CharField(label='Title', max_length=65536 , widget=forms.TextInput(attrs={'class': "form-control full-size-field"})) abstract = forms.CharField(label='Abstract', max_length=65536 , widget=forms.Textarea(attrs={'class': "form-control full-size-field", 'title': 'Your name'})) keywords_d = forms.MultipleChoiceField(label='Keywords', widget=select2_forms.Select2MultipleWidget(attrs={'class': "form-control left-half",'style': 'width:100%'}), choices=key_choices) activity_type = forms.MultipleChoiceField(label='Activity type', widget=select2_forms.Select2MultipleWidget(attrs={'class': "form-control right-half",'style': 'width:100%'}), choices=at_choices) related_site_we = forms.CharField(label='Related Site', max_length=256 , widget=forms.TextInput(attrs={'class': "form-control full-size-field"})) bounding_box = osmforms.GeometryCollectionField(label='Bounding Box', widget=osmforms.OSMWidget(attrs={'class': "form-control 
full-size-field",'map_width': 992, 'map_height': 500})) class Meta: model = Dataset fields = ['uuid','title','abstract','keywords_d','activity_type','related_site_we','bounding_box'] And this is part of the views.py: def editor(request): if request.method == 'GET': if request.GET['uuid'] != '0': session = Dataset.objects.get(uuid=request.GET['uuid']) form = SessionForm(instance=session) else: form = SessionForm() return render(request, 'form.html', {'form': form,}) Without going into too much detail, one of the purposes of the form is to partially fill it out so that others can edit it later. When editing the form, this loads the existing data in the database for that entry, along with the coordinates we have previously entered, and this is where the problem appears, as it seems to reverse the order of latitude and longitude, appearing this way: As I said, the coordinates are stored well, I think it's just a problem in the order of the coordinates when OSMWidget reads them. Is there any way to correct this? I've been reading documentation for hours, as well as reviewing other threads in StackOverFlow and other forums, and I can't find a solution to this. Thanks in advance | I had the same problem. In my case, it was due to incompatibility between Django and GDAL, as has been also mentionned here : if you are using GDAL 3, then be sure to use Django 3.1. Upgrading Django did correct both OSMWidget and OSMGeoAdmin for PointField. I'm note sure you have exactly the same configuration problem though, as multipolygons seemed unaffected on my app... Note : I wouldn't normally reproduce an existing ticket as an answer on SO, but I just spent 2 days figuring this out (and finding the right information) and it will help to have this more referenced. | 9 | 6 |
62,212,263 | 2020-6-5 | https://stackoverflow.com/questions/62212263/alembic-doesnt-recognize-false-default-value | While maintaining a SQLAlchemy data model and utilizing alembic for version control, the following code change I made resulted in an empty revision: some_column = Column(Boolean, nullable=False, default=False) While previously it was: some_column = Column(Boolean, nullable=False) So adding a default value produces no changes in alembic, i.e. generates an empty revision. I tried other values offered by SQLAlchemy like false() and expression.false() instead of False, but the result is the same (empty alembic revision). Also tried server_default instead of default. The database in question is PostgreSQL. By empty revision, of course I mean that alembic doesn't recognize any change being made in SQLAlchemy: def upgrade(): # ### commands auto generated by Alembic - please adjust! ### pass # ### end Alembic commands ### def downgrade(): # ### commands auto generated by Alembic - please adjust! ### pass # ### end Alembic commands ### Appreciate any help in this regard. | To do this automatically you have to turn on a setting to detect server default changes. In your env.py, for the context.configure calls (online and offline migrations, so in 2 places), add a compare_server_default=True kwarg. It is probably safer to just put in the alter_column yourself as well as definitely use server_default because default is just for python-side setting of the default(which is ok but sounds like not what you want). Quoted from https://alembic.sqlalchemy.org/en/latest/autogenerate.html#what-does-autogenerate-detect-and-what-does-it-not-detect Autogenerate can optionally detect: ... Change of server default. This will occur if you set the EnvironmentContext.configure.compare_server_default parameter to True, or to a custom callable function. This feature works well for simple cases but cannot always produce accurate results. ... | 14 | 13 |
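When autogenerate still misses the change — the answer quotes Alembic's own caveat that server-default comparison "cannot always produce accurate results" — the fallback is writing the `alter_column` migration by hand. A sketch with placeholder table/column names (this is a migration-file fragment, so it only runs inside Alembic's migration context):

```python
# hand-written migration sketch; "my_table" / "some_column" are placeholders
import sqlalchemy as sa
from alembic import op

def upgrade():
    op.alter_column(
        "my_table", "some_column",
        server_default=sa.text("false"),  # PostgreSQL boolean default
        existing_type=sa.Boolean(),
        existing_nullable=False,
    )

def downgrade():
    op.alter_column(
        "my_table", "some_column",
        server_default=None,  # drop the default again
        existing_type=sa.Boolean(),
        existing_nullable=False,
    )
```

The `existing_*` keyword arguments matter on backends that need the full column definition to emit the ALTER statement.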
62,178,926 | 2020-6-3 | https://stackoverflow.com/questions/62178926/setting-up-coc-nvim-for-python | I have installed coc.nvim and extension coc-python(:CocInstall coc-python) When I opened file I refused of linting and then get error: [coc.nvim] Jedi error: Traceback (most recent call last): File "completion.py", line 694, in <module> [coc.nvim] Jedi error: Traceback (most recent call last): [coc.nvim] Jedi error: import jedi ModuleNotFoundError: No module named 'jedi' I tried to reinstall extension and plugin but It doesn't help. | It's recommended to use https://github.com/fannheyward/coc-pyright if you're using Python 3, or use https://github.com/pappasam/coc-jedi if you're using Jedi. | 8 | 8 |
62,293,200 | 2020-6-9 | https://stackoverflow.com/questions/62293200/upload-images-to-instagram-using-python | I'm trying to do a simple Instagram python bot in order to upload images to my Instagram profile. I've already tried the most common libraries (InstagramAPI, instapy, insta-cly). While I was searching I found out that Instagram has changed something, making those libraries useless. Is there any library I can use? I know that I can use Selenium to make it work, but I'm wondering if there is any shortcut. Thank you! | try this library: instabot https://pypi.org/project/instabot/ example of code for uploading an image: from instabot import Bot bot = Bot() bot.login(username="instagram_username", password="your_password") file = open('path_to_your_image', 'r') bot.upload_photo(file, caption="your post caption") | 8 | 6
62,288,531 | 2020-6-9 | https://stackoverflow.com/questions/62288531/how-to-capture-inputs-and-outputs-of-a-child-process | I'm trying to make a program which takes an executable name as an argument, runs the executable and reports the inputs and outputs for that run. For example consider a child program named "circle". The following would be desired run for my program: $ python3 capture_io.py ./circle Enter radius of circle: 10 Area: 314.158997 [('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', 'Area: 314.158997\n')] I decided to use pexpect module for this job. It has a method called interact which lets the user interact with the child program as seen above. It also takes 2 optional parameters: output_filter and input_filter. From the documentation: The output_filter will be passed all the output from the child process. The input_filter will be passed all the keyboard input from the user. So this is the code I wrote: capture_io.py import sys import pexpect _stdios = [] def read(data): _stdios.append(("output", data.decode("utf8"))) return data def write(data): _stdios.append(("input", data.decode("utf8"))) return data def capture_io(argv): _stdios.clear() child = pexpect.spawn(argv) child.interact(input_filter=write, output_filter=read) child.wait() return _stdios if __name__ == '__main__': stdios_of_child = capture_io(sys.argv[1:]) print(stdios_of_child) circle.c #include <stdio.h> #include <stdlib.h> int main(int argc, char* argv[]) { float radius, area; printf("Enter radius of circle: "); scanf("%f", &radius); if (radius < 0) { fprintf(stderr, "Negative radius values are not allowed.\n"); exit(1); } area = 3.14159 * radius * radius; printf("Area: %f\n", area); return 0; } Which produces the following output: $ python3 capture_io.py ./circle Enter radius of circle: 10 Area: 314.158997 [('output', 'Enter radius of circle: '), ('input', '1'), ('output', '1'), ('input', '0'), ('output', '0'), ('input', '\r'), ('output', '\r\n'), ('output', 
'Area: 314.158997\r\n')] As you can observe from the output, input is processed character by character and also echoed back as output which creates such a mess. Is it possible to change this behaviour so that my input_filter will run only when Enter is pressed? Or more generally, what would be the best way to achieve my goal (with or without pexpect)? | Is it possible to change this behaviour so that my input_filter will run only when Enter is pressed? Yes, you can do it by inheriting from pexpect.spawn and overwriting the interact method. I will come to that soon. As VPfB pointed out in their answer, you can't use a pipe and I think it's worth to mentioning that this issue is also addressed in the pexpect's documentation. You said that: ... input is processed character by character and also echoed back as output ... If you examine the source code of the interact you can see this line: tty.setraw(self.STDIN_FILENO) This will set your terminal to raw mode: input is available character by character, ..., and all special processing of terminal input and output characters is disabled. That is why your input_filter function is running for every key press and it sees backspace or other special characters. If you could comment out this line, you would see something like this when you run your program: $ python3 test.py ./circle Enter radius of circle: 10 10 Area: 314.158997 [('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', '10\r\n'), ('output', 'Area: 314.158997\r\n')] And this would also let you edit the input (i. e. 12[Backspace]0 would give you same result). But as you can see, it still echoes the input. 
This can be disabled by setting a simple flag for child's terminal: mode = tty.tcgetattr(self.child_fd) mode[3] &= ~termios.ECHO tty.tcsetattr(self.child_fd, termios.TCSANOW, mode) Running with the latest changes: $ python3 test.py ./circle Enter radius of circle: 10 Area: 314.158997 [('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', 'Area: 314.158997\r\n')] Bingo! Now you can inherit from pexpect.spawn and override interact method with these changes or implement the same thing using the builtin pty module of Python: with pty: import os import pty import sys import termios import tty _stdios = [] def _read(fd): data = os.read(fd, 1024) _stdios.append(("output", data.decode("utf8"))) return data def _stdin_read(fd): data = os.read(fd, 1024) _stdios.append(("input", data.decode("utf8"))) return data def _spawn(argv): pid, master_fd = pty.fork() if pid == pty.CHILD: os.execlp(argv[0], *argv) mode = tty.tcgetattr(master_fd) mode[3] &= ~termios.ECHO tty.tcsetattr(master_fd, termios.TCSANOW, mode) try: pty._copy(master_fd, _read, _stdin_read) except OSError: pass os.close(master_fd) return os.waitpid(pid, 0)[1] def capture_io_and_return_code(argv): _stdios.clear() return_code = _spawn(argv) return _stdios, return_code >> 8 if __name__ == '__main__': stdios, ret = capture_io_and_return_code(sys.argv[1:]) print(stdios) with pexpect: import sys import termios import tty import pexpect _stdios = [] def read(data): _stdios.append(("output", data.decode("utf8"))) return data def write(data): _stdios.append(("input", data.decode("utf8"))) return data class CustomSpawn(pexpect.spawn): def interact(self, escape_character=chr(29), input_filter=None, output_filter=None): self.write_to_stdout(self.buffer) self.stdout.flush() self._buffer = self.buffer_type() mode = tty.tcgetattr(self.child_fd) mode[3] &= ~termios.ECHO tty.tcsetattr(self.child_fd, termios.TCSANOW, mode) if escape_character is not None and pexpect.PY3: escape_character = 
escape_character.encode('latin-1') self._spawn__interact_copy(escape_character, input_filter, output_filter) def capture_io_and_return_code(argv): _stdios.clear() child = CustomSpawn(argv) child.interact(input_filter=write, output_filter=read) child.wait() return _stdios, child.status >> 8 if __name__ == '__main__': stdios, ret = capture_io_and_return_code(sys.argv[1:]) print(stdios) | 9 | 0 |
62,238,064 | 2020-6-6 | https://stackoverflow.com/questions/62238064/how-to-use-scipy-optimize-linear-sum-assignment-in-tensorflow-or-keras | first time posting here ! If my question is lacking anything please tell me and I'll fix it ! Facebook recently released DETR, an object detection model using transformers ! The model is implemented with Pytorch and I'm trying to implement the loss function where Hungarian algorithm is involved but with Keras and Tensorflow as a custom loss function for Keras model. In the original implementation from Facebook, it's line 81-82 in https://github.com/facebookresearch/detr/blob/master/models/matcher.py In order to use numpy and classic python function, I used: def hungarian_loss(losses): row_ind, col_ind = linear_sum_assignment(losses) idx = [[i, j] for i, j in zip(row_ind, col_ind)] return idx # dist loss is a 5x5 matrix, and idx is 5x2 indexes idx = tf.py_function(func=hungarian_loss, inp=[dist_loss], Tout=tf.int32) min_val = tf.gather_nd(dist_loss, idx) return K.mean(min_val) But I got : tensorflow.python.framework.errors_impl.InvalidArgumentError: Inner dimensions of output shape must match inner dimensions of updates shape. Output: [5,5] updates: [5] Is it because I'm trying to use something that wasn't a tf.Tensor as loss ? | Does this work for you? See: https://www.tensorflow.org/api_docs/python/tf/numpy_function @tf.function def tf_linear_sum_assignment(cost_matrix): return tf.numpy_function(func=linear_sum_assignment,inp=[cost_matrix],Tout=[tf.int64,tf.int64]) | 10 | 7 |
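Independent of the TensorFlow wrapper, it helps to sanity-check what `linear_sum_assignment` itself returns, since those two index arrays are exactly what the wrapped call carries back as `Tout=[tf.int64, tf.int64]`. The cost matrix below is the classic small example:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
# one worker (row) per task (column), minimizing total cost
row_ind, col_ind = linear_sum_assignment(cost)
print(row_ind, col_ind)              # [0 1 2] [1 0 2]
print(cost[row_ind, col_ind].sum())  # minimal total cost: 1 + 2 + 2 = 5
```

In the DETR setting, `cost` would be the pairwise matching loss between predictions and ground-truth boxes, and `(row_ind, col_ind)` the bipartite matching used by the loss.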
62,182,687 | 2020-6-3 | https://stackoverflow.com/questions/62182687/custom-help-in-python-click | By default, click adds a --help option that outputs a standardised usage text based on the structure of the click commands: Usage: ... Options: ... Commands: ... ... How to override this behaviour to have a custom help output? What I am trying to do is to output a custom message using rich library. | The trick is to subclass click.Group and override its format_help method import io import click from rich.console import Console class RichGroup(click.Group): def format_help(self, ctx, formatter): sio = io.StringIO() console = Console(file=sio, force_terminal=True) console.print("Hello, [bold magenta]World[/bold magenta]!", ":vampire:") formatter.write(sio.getvalue()) @click.group(cls=RichGroup) def cli(): pass | 7 | 15
62,269,892 | 2020-6-8 | https://stackoverflow.com/questions/62269892/get-rid-of-white-border-around-option-menu | I'm trying to get rid of the white border around the OptionMenu. What I tried I changed the colour to red, but there is still a white border around it. Can anyone help? Here's the code: from tkinter import * import tkinter as tk from tkinter import ttk root = tk.Tk() root.geometry('500x500') var = StringVar() option = ttk.OptionMenu(root,var,'1','2','3') option["menu"].config(bg="red") option.pack() root.mainloop() Also, is there a way to change the colour of the OptionMenu trigger box (In the red Circle)? | As stated in the comments by @Mike-SMT, Have you considered writing your own option menu? This, to me, seems to be the only way to get an OptionMenu without having that irritating grey border. Here is my attempt at it: import tkinter as tk root = tk.Tk() root.geometry('500x500') class custom_option_menu(tk.Tk): def down(self, *menu_items): if self.button["text"] == "↓": self.frame.place(x = self.x + (len(self.first) * 13)/2, y = self.y + 50, anchor = "center") self.button.config(text = "↑") elif self.button["text"] == "↑": self.frame.place_forget() self.button.config(text = "↓") def __init__(self, master, first, bg, *menu_items): self.master = master self.first = first self.menu_items = menu_items self.bg = bg self.frame = tk.Frame(master, height = 100, width = 100) self.otherframe = tk.Frame(master, height = 10, width = len(first) * 13) self.label = tk.Label(self.otherframe, text = str(first)) self.button = tk.Button(self.otherframe, text = "↓", command = lambda: self.down(), relief= "flat") def save_var(event = "<Button-1>"): print(event.widget["text"]) for i in range(len(self.menu_items)): self.frame.config(bg = self.bg) self.option = tk.Button(self.frame, text = self.menu_items[i], relief = "flat", bg = self.bg, textvariable = int(i)) self.option.pack() self.option.bind("<Button-1>", save_var) def put(self, x, y): self.x = x self.y = y 
self.button.pack(side = "right") self.label.pack() self.frame.place(x = x + (len(self.first) * 13)/2, y = y + 50, anchor = "center") self.frame.place_forget() self.otherframe.place(x = x + (len(self.first) * 13)/2, y = y, anchor = "center") nice = custom_option_menu(root, "o000000000000000", "blue", "First", "second", "Third") nice.put(100, 200) root.mainloop() Sadly I couldn't get the default geometry managers to work for this, so I created .put(). It's just the x and y coordinates. The arguments to the class custom_option_menu go as follows: The first argument is the parent widget. The second argument is the text on the OptionMenu. The third argument is the background color for the options. The remaining arguments are the options. To open the menu, click the down arrow. I hope this is what you were looking for! | 11 | 6 |
62,167,179 | 2020-6-3 | https://stackoverflow.com/questions/62167179/how-do-i-annotate-the-type-of-a-parameter-of-an-abstractmethod-when-the-paramet | How do I annotate the type of a function parameter of an abstractmethod, when the parameter can have any type derived from a specific base type? Example: import abc import attr @attr.s(auto_attribs=True) class BaseConfig(abc.ABC): option_base: str @attr.s(auto_attribs=True) class ConfigA(BaseConfig): option_a: str @attr.s(auto_attribs=True) class ConfigB(BaseConfig): option_b: bool class Base(abc.ABC): @abc.abstractmethod def do_something(self, config: BaseConfig): pass class ClassA(Base): def do_something(self, config: ConfigA): # test.py:27: error: Argument 1 of "do_something" is incompatible with supertype "Base"; supertype defines the argument type as "BaseConfig" print("option_a: " + config.option_a) class ClassB(Base): def do_something(self, config: ConfigB): # test.py:33: error: Argument 1 of "do_something" is incompatible with supertype "Base"; supertype defines the argument type as "BaseConfig" print("option_b: " + str(config.option_b)) conf_a = ConfigA(option_a="value_a", option_base="value_base") conf_b = ConfigB(option_b=True, option_base="value_base") object_a = ClassA() object_b = ClassB() object_a.do_something(conf_a) object_b.do_something(conf_b) When parsing this with mypy I get test.py:27: error: Argument 1 of "do_something" is incompatible with supertype "Base"; supertype defines the argument type as "BaseConfig" test.py:33: error: Argument 1 of "do_something" is incompatible with supertype "Base"; supertype defines the argument type as "BaseConfig" How would I need to change the signature of Base.do_something() so mypy doesn't report any error, while still enforcing that the function parameter of the abstract method do_something is derived from BaseConfig? 
| TLDR: Make the baseclass Generic and parameterise the type of configuration: C = TypeVar('C', bound=BaseConfig) class Base(abc.ABC, Generic[C]): @abc.abstractmethod def do_something(self, config: C): pass The original class hierarchy declares that ClassA can be used anywhere Base is valid. When we assume some variable obj: Base, this leads to a conflict: We can assign obj = ClassA() since ClassA "is a" Base class. We can use obj.do_something(BaseConfig()) since obj "is a" Base instance. However, ClassA.do_something(config: ConfigA) says we cannot do both at the same time, contradicting the type equivalence. Instead, we need to distinguish between "Base that takes a ConfigA", "Base that takes a ConfigB" and so on. This is done by parameterising Base with a type-variable for the config. from typing import Generic, TypeVar C = TypeVar('C', bound=BaseConfig) # C "is some" BaseConfig type class Base(abc.ABC, Generic[C]): # class takes type variable ... @abc.abstractmethod def do_something(self, config: C): # ... and uses it in method signature pass This allows us to have both generic and concrete Base variants - for example, Base[ConfigA] is a "Base that takes a ConfigA". From this, the subclasses can be derived as taking the appropriate configuration: class ClassA(Base[ConfigA]): # set type variable to ConfigA def do_something(self, config: ConfigA): print("option_a: " + config.option_a) | 7 | 11 |
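A trimmed, runnable version of the accepted approach (using dataclasses instead of attrs purely to keep the sketch dependency-free) behaves as expected at runtime too:

```python
import abc
from dataclasses import dataclass
from typing import Generic, TypeVar


@dataclass
class BaseConfig:
    option_base: str


@dataclass
class ConfigA(BaseConfig):
    option_a: str


C = TypeVar('C', bound=BaseConfig)  # C "is some" BaseConfig subtype


class Base(abc.ABC, Generic[C]):
    @abc.abstractmethod
    def do_something(self, config: C) -> str:
        ...


class ClassA(Base[ConfigA]):  # pins C to ConfigA for this subclass
    def do_something(self, config: ConfigA) -> str:
        return "option_a: " + config.option_a


print(ClassA().do_something(ConfigA(option_base="value_base", option_a="value_a")))
# option_a: value_a
```

mypy accepts ClassA.do_something here because Base[ConfigA] fixes the type variable, so the override signature matches the parameterised supertype exactly.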
62,220,904 | 2020-6-5 | https://stackoverflow.com/questions/62220904/vs-code-python-installation-and-python-interpreter-not-recognized | I am getting this message in VS Code: "Python is not installed. Please download and install python before using the extension." There is also no *"Python Interpreter"* to select. When I click on it, it shows empty. I do have Python and the Python extension installed, and I do have virtual environments set up in the Anaconda Navigator, but for some reason I am not able to use them. I tried many things, like reinstalling Python, Anaconda, and VS Code and also the Python extension for VS Code, but it's not solving the issue. What could be the reason? I have attached a screenshot of the VS Code as well. Please click here to see the screenshot Thanks for your help. | I tried many methods but none worked. So then I removed the extension "Anaconda Extension Pack by Microsoft" and it solved the issue. So anyone facing the same issue might try uninstalling this extension. | 10 | 2
62,193,187 | 2020-6-4 | https://stackoverflow.com/questions/62193187/django-shell-plus-how-to-access-jupyter-notebook-in-docker-container | I am trying to access a Jupyter Notebook created with the shell_plus command from django-extensions in a Docker container. docker-compose -f local.yml run --rm django python manage.py shell_plus --notebook My configuration is based on the answers of @RobM and @Mark Chackerian to this Stack Overflow question. I.e. I installed and configured a custom kernel and my Django apps config file has the constant NOTEBOOK_ARGUMENTS set to: NOTEBOOK_ARGUMENTS = [ '--ip', '0.0.0.0', '--port', '8888', '--allow-root', '--no-browser', ] I can see the container starting successfully in the logs: [I 12:58:54.877 NotebookApp] The Jupyter Notebook is running at: [I 12:58:54.877 NotebookApp] http://10d56bab37fc:8888/?token=b2678617ff4dcac7245d236b6302e57ba83a71cb6ea558c6 [I 12:58:54.877 NotebookApp] or http://127.0.0.1:8888/?token=b2678617ff4dcac7245d236b6302e57ba83a71cb6ea558c6 But I can't open the url. I have forwarded the port 8888 in my docker-compose, tried to use localhost instead of 127.0.0.1 and also tried to use the containers IP w/o success. It feels like I am missing the obvious here … Any help is appreciated. 
| For the sake of records as of 2020, I managed to have a working django setup with Postgresql in docker-compose: development.py (settings.py) INSTALLED_APPS += [ "django_extensions", ] SHELL_PLUS = "ipython" SHELL_PLUS_PRINT_SQL = True NOTEBOOK_ARGUMENTS = [ "--ip", "0.0.0.0", "--port", "8888", "--allow-root", "--no-browser", ] IPYTHON_ARGUMENTS = [ "--ext", "django_extensions.management.notebook_extension", "--debug", ] IPYTHON_KERNEL_DISPLAY_NAME = "Django Shell-Plus" SHELL_PLUS_POST_IMPORTS = [ # extra things to import in notebook ("module1.submodule", ("func1", "func2", "class1", "etc")), ("module2.submodule", ("func1", "func2", "class1", "etc")) ] os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true" # only use in development requirements.txt django-extensions jupyter notebook Werkzeug # needed for runserver_plus ... docker-compose.yml version: "3" services: db: image: postgres:13 environment: - POSTGRES_HOST_AUTH_METHOD=trust restart: always ports: - "5432:5432" volumes: - postgres_data:/var/lib/postgresql/data/ web: build: . environment: - DJANGO_SETTINGS_MODULE=settings.development command: - scripts/startup.sh volumes: - ... ports: - "8000:8000" # webserver - "8888:8888" # ipython notebook depends_on: - db volumes: postgres_data: From your host terminal run this command: docker-compose exec web python manage.py shell_plus --notebook Finally navigate to http://localhost:8888/?token=<xxxx> in the web browser of host. | 8 | 23 |
62,271,614 | 2020-6-8 | https://stackoverflow.com/questions/62271614/what-does-typeerror-init-missing-1-required-positional-argument-get-res | I'm following the graphql python tutorial at https://www.howtographql.com/graphql-python/4-authentication/. It worked fine for the first 3 sections, but in the Authentication section I've run into this problem. I am learning python, don't know Django or graphql, so it's a lot to digest all at once, but it was going ok until now. Also not sure what relevant bits to include here. I followed all the instructions. When I go to my local project site at localhost:8000/graphql/, I get TypeError at /graphql/ __init__() missing 1 required positional argument: 'get_response' Here is the relevant snippet of my settings.py: MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', ] GRAPHENE = { 'SCHEMA': 'hackernews.schema.schema', 'MIDDLEWARE': ['graphql_jwt.middleware.JSONWebTokenMiddleware', ], } AUTHENTICATION_BACKENDS = [ 'graphql_jwt.backends.JSONWebTokenBackend', 'django.contrib.auth.backends.ModelBackend', ] I also did import graphql_jwt in my main schema.py Here is some kind of stack trace Environment: Request Method: GET Request URL: http://localhost:8000/graphql/ Django Version: 2.1.4 Python Version: 3.7.4 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'graphene_django', 'links'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 
'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware'] Traceback: File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\django\core\handlers\exception.py" in inner 34. response = get_response(request) File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\django\core\handlers\base.py" in _get_response 126. response = self.process_exception_by_middleware(e, request) File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\django\core\handlers\base.py" in _get_response 124. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\django\views\decorators\csrf.py" in wrapped_view 54. return view_func(*args, **kwargs) File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\django\views\generic\base.py" in view 62. self = cls(**initkwargs) File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\graphene_django\views.py" in __init__ 88. self.middleware = list(instantiate_middleware(middleware)) File "C:\Users\e79909\projects\python\graphql-python\venv\lib\site-packages\graphene_django\views.py" in instantiate_middleware 48. yield middleware() Exception Type: TypeError at /graphql/ Exception Value: __init__() missing 1 required positional argument: 'get_response' | Ok, I just found it. GRAPHENE = { 'SCHEMA': 'hackernews.schema.schema', 'MIDDLEWARES': ['graphql_jwt.middleware.JSONWebTokenMiddleware'], } Notice the S. It needs to be 'MIDDLEWARES', not 'MIDDLEWARE'. 
Found the solution on this GitHub issue Also, according to this comment on the same issue, you should add 'graphql_jwt.middleware.JSONWebTokenMiddleware' to the MIDDLEWARE list (the one with all the Django middleware). MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'graphql_jwt.middleware.JSONWebTokenMiddleware', ### <---Add this line 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] | 11 | 17 |
62,188,851 | 2020-6-4 | https://stackoverflow.com/questions/62188851/social-auth-app-django-refresh-access-token | I use social-auth-app-django for my django website. Login all works, but after the token expires. I cant access the google's user data anymore. I found how to refresh the token, but it gives File "/mnt/s/github/nascentapp/app/booking/management/commands/sendmail.py", line 17, in handle new_token = self.get_token(user=booking_user, provider='google-oauth2') File "/mnt/s/github/nascentapp/app/booking/management/commands/sendmail.py", line 28, in get_token social.refresh_token(strategy) File "/home/sander/.local/share/virtualenvs/app-YMrBBUv3/lib/python3.6/site-packages/social_core/storage.py", line 58, in refresh_token response = backend.refresh_token(token, *args, **kwargs) File "/home/sander/.local/share/virtualenvs/app-YMrBBUv3/lib/python3.6/site-packages/social_core/backends/oauth.py", line 438, in refresh_token request = self.request(url, **request_args) File "/home/sander/.local/share/virtualenvs/app-YMrBBUv3/lib/python3.6/site-packages/social_core/backends/base.py", line 234, in request response.raise_for_status() File "/home/sander/.local/share/virtualenvs/app-YMrBBUv3/lib/python3.6/site-packages/requests/models.py", line 941, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://accounts.google.com/o/oauth2/token Here is some of my code def get_token(self, user, provider): social = user.social_auth.get(provider=provider) print('This is social of user: ', social) if (social.extra_data['auth_time'] + social.extra_data['expires']) <= int(time.time()): print('\n Token is out of date \n') strategy = load_strategy() social.refresh_token(strategy) return social.extra_data['access_token'] in my settings file: AUTHENTICATION_BACKENDS = ( 'social_core.backends.open_id.OpenIdAuth', # for Google authentication 'social_core.backends.google.GoogleOpenId', # for Google 
authentication 'social_core.backends.google.GoogleOAuth2', # for Google authentication 'django.contrib.auth.backends.ModelBackend', ) SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = os.environ.get('DJANGO_SOCIAL_AUTH_GOOGLE_OAUTH2_KEY') # Paste Client Key SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = os.environ.get('DJANGO_SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET') # Paste Secret Key SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [ 'https://www.googleapis.com/auth/calendar.readonly', 'https://www.googleapis.com/auth/calendar.events' ] | Fixed it by adding this: SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = { 'access_type': 'offline', 'approval_prompt': 'auto' } If the user is already registered, you need to force the prompt the first time (otherwise you don't get the refresh token): /login/google-oauth2?approval_prompt=force | 8 | 5
62,280,161 | 2020-6-9 | https://stackoverflow.com/questions/62280161/saving-keras-models-with-custom-layers | I am trying to save a Keras model in a H5 file. The Keras model has a custom layer. When I try to restore the model, I get the following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-5-0fbff9b56a9d> in <module>() 1 model.save('model.h5') 2 del model ----> 3 model = tf.keras.models.load_model('model.h5') 8 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name) 319 cls = get_registered_object(class_name, custom_objects, module_objects) 320 if cls is None: --> 321 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name) 322 323 cls_config = config['config'] ValueError: Unknown layer: CustomLayer Could you please tell me how I am supposed to save and load weights of all the custom Keras layers too? (Also, there was no warning when saving, will it be possible to load models from H5 files which I have already saved but can't load back now?) Here is the minimal working code sample (MCVE) for this error, as well as the full expanded message: Google Colab Notebook Just for completeness, this is the code I used to make my custom layer. get_config and from_config are both working fine. 
class CustomLayer(tf.keras.layers.Layer): def __init__(self, k, name=None): super(CustomLayer, self).__init__(name=name) self.k = k def get_config(self): return {'k': self.k} def call(self, input): return tf.multiply(input, 2) model = tf.keras.models.Sequential([ tf.keras.Input(name='input_layer', shape=(10,)), CustomLayer(10, name='custom_layer'), tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer') ]) model.save('model.h5') model = tf.keras.models.load_model('model.h5') | Correction number 1 is to use Custom_Objects while loading the Saved Model i.e., replace the code, new_model = tf.keras.models.load_model('model.h5') with new_model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) Since we are using Custom Layers to build the Model and before Saving it, we should use Custom Objects while Loading it. Correction number 2 is to add **kwargs in the __init__ function of the Custom Layer like def __init__(self, k, name=None, **kwargs): super(CustomLayer, self).__init__(name=name) self.k = k super(CustomLayer, self).__init__(**kwargs) Complete working code is shown below: import tensorflow as tf class CustomLayer(tf.keras.layers.Layer): def __init__(self, k, name=None, **kwargs): super(CustomLayer, self).__init__(name=name) self.k = k super(CustomLayer, self).__init__(**kwargs) def get_config(self): config = super(CustomLayer, self).get_config() config.update({"k": self.k}) return config def call(self, input): return tf.multiply(input, 2) model = tf.keras.models.Sequential([ tf.keras.Input(name='input_layer', shape=(10,)), CustomLayer(10, name='custom_layer'), tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer') ]) tf.keras.models.save_model(model, 'model.h5') new_model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) print(new_model.summary()) Output of the above code is shown below: WARNING:tensorflow:No training configuration found in the save file, so the model 
was *not* compiled. Compile it manually. Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= custom_layer_1 (CustomLayer) (None, 10) 0 _________________________________________________________________ output_layer (Dense) (None, 1) 11 ================================================================= Total params: 11 Trainable params: 11 Non-trainable params: 0 Hope this helps. Happy Learning! | 24 | 24 |
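The save/load round-trip behind that "Unknown layer" error can be mimicked without TensorFlow. The sketch below is not the Keras API - save/load and the JSON layout are illustrative stand-ins - but it shows why get_config must cover every constructor argument and why load_model needs custom_objects to resolve the stored class name back to a class:

```python
import json


class CustomLayer:
    def __init__(self, k, name=None):
        self.k = k
        self.name = name

    def get_config(self):
        # every __init__ argument must be serialised, or reloading breaks
        return {"k": self.k, "name": self.name}


def save(layer):
    # the saved file stores the class name next to the config dict
    return json.dumps({"class_name": type(layer).__name__,
                       "config": layer.get_config()})


def load(blob, custom_objects):
    data = json.loads(blob)
    # without a custom_objects entry there is no class to instantiate:
    # this lookup failure is the analogue of Keras' "Unknown layer" error
    cls = custom_objects[data["class_name"]]
    return cls(**data["config"])


restored = load(save(CustomLayer(10, name="custom_layer")),
                custom_objects={"CustomLayer": CustomLayer})
print(restored.k, restored.name)  # 10 custom_layer
```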
62,288,567 | 2020-6-9 | https://stackoverflow.com/questions/62288567/time-travel-debugging-in-python-what-tools-are-suggested-to-use | I was recently wondering about Time Travel Debugging in relation to Python. I found information about tools like: RevPDB - unfortunately the last recorded activity is from 2016 timetravelpdb - unfortunately the last recorded activity is in 2015 Since the projects were updated so long ago, I was wondering whether the tools used for TTD have changed since then. I am counting on constructive discussion and advice & suggestions on what to use now. It is all about sharing the knowledge. | General Overview of TTD Research At this very moment, the available solutions are those listed in the description of the question, plus PyTrace. As far as RevPDB and timetravelpdb are concerned, I haven't tested these solutions in any way, as the last activity in these projects was registered a few years ago, so I assumed that in case of problems contact with support would be difficult. How to start working with it? To start with, it is worth using the interactive demo to learn about the basic functionalities and the way it works: PyTrace Interactive Demo If you need more information, check this site: PyTrace Official Site I am impressed with this project and I'm going to start using it in my daily coding routine, so I will post my thoughts and tips about it later. Stay tuned! | 10 | 11
62,268,459 | 2020-6-8 | https://stackoverflow.com/questions/62268459/accuracy-with-tf-idf-and-non-tf-idf-features | I run a Random Forest algorithm with TF-IDF and non-TF-IDF features. In total the features are around 130k in number (after a feature selection conducted on the TF-IDF features) and the observations of the training set are around 120k in number. Around 500 of them are the non-TF-IDF features. The issue is that the accuracy of the Random Forest on the same test set etc with - only the non-TF-IDF features is 87% - the TF-IDF and non-TF-IDF features is 76% This significant aggravation of the accuracy raises some questions in my mind. The relevant piece of code of mine with the training of the models is the following: drop_columns = ['labels', 'complete_text_1', 'complete_text_2'] # Split to predictors and targets X_train = df.drop(columns=drop_columns).values y_train = df['labels'].values # Instantiate, train and transform with tf-idf models vectorizer_1 = TfidfVectorizer(analyzer="word", ngram_range=(1,2), vocabulary=tf_idf_feature_names_selected) X_train_tf_idf_1 = vectorizer_1.fit_transform(df['complete_text_1']) vectorizer_2 = TfidfVectorizer(analyzer="word", ngram_range=(1,2), vocabulary=tf_idf_feature_names_selected) X_train_tf_idf_2 = vectorizer_2.fit_transform(df['complete_text_2']) # Covert the general features to sparse array X_train = np.array(X_train, dtype=float) X_train = csr_matrix(X_train) # Concatenate the general features and tf-idf features array X_train_all = hstack([X_train, X_train_tf_idf_1, X_train_tf_idf_2]) # Instantiate and train the model rf_classifier = RandomForestClassifier(n_estimators=150, random_state=0, class_weight='balanced', n_jobs=os.cpu_count()-1) rf_classifier.fit(X_train_all, y_train) Personally, I have not seen any bug in my code (this piece above and in general). The hypothesis which I have formulated to explain this decrease in accuracy is the following. 
The number of non-TF-IDF features is only 500 (out of the 130k features in total). This gives some chance that the non-TF-IDF features are not picked that much at each split by the trees of the random forest (e.g. because of max_features etc). So if the non-TF-IDF features do actually matter, then this will create problems because they are not taken enough into account. Related to this, when I check the feature importances of the random forest after training it, I see the importances of the non-TF-IDF features being very, very low (although I am not sure how reliable an indicator the feature importances are, especially with TF-IDF features included). Can you explain the decrease in accuracy of my classifier differently? In any case, what would you suggest doing? Some other ideas for combining the TF-IDF and non-TF-IDF features are the following. One option would be to have two separate (random forest) models - one for the TF-IDF features and one for the non-TF-IDF features. Then the results of these two models would be combined either by (weighted) voting or meta-classification. | Your view that 130K features is way too much for the random forest sounds right. You didn't mention how many examples you have in your dataset, and that would be crucial to the choice of the possible next steps. Here are a few ideas off the top of my head. If the number of datapoints is large enough, you may want to train some transformation for the TF-IDF features - e.g. you might want to train a small-dimensional embedding of these TF-IDF features into, say, 64-dimensional space and then e.g. a small NN on top of that (even a linear model maybe). After you have embeddings you could use them as transforms to generate 64 additional features for each example to replace the TF-IDF features for RandomForest training. Or alternatively just replace the whole random forest with a NN of such architecture that e.g. 
TF-IDFs are all combined into a few neurons via fully-connected layers and later concatenated with other features (pretty much the same as embeddings, but as a part of the NN). If you don't have enough data to train a large NN, maybe you can try to train a GBDT ensemble instead of a random forest. It probably should do a much better job at picking the good features compared to a random forest, which is likely to be affected a lot by a lot of noisy, useless features. Also, you can first train some crude version and then do a feature selection based on that (again, I would expect it to do a more reasonable job compared to a random forest). | 9 | 2
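For the two-separate-models idea mentioned in the question, the "(weighted) voting" combination step can be sketched with plain numpy (the weights and probability matrices here are illustrative toy values):

```python
import numpy as np


def weighted_vote(proba_a, proba_b, w_a=0.5, w_b=0.5):
    """Blend predict_proba-style matrices from two classifiers and
    pick the class with the highest blended probability."""
    blended = w_a * np.asarray(proba_a) + w_b * np.asarray(proba_b)
    return blended.argmax(axis=1)


# toy class-probability outputs for 3 samples / 2 classes
p_tfidf_model = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_other_model = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
print(weighted_vote(p_tfidf_model, p_other_model, w_a=0.3, w_b=0.7))  # [0 0 1]
```

The weights would normally be tuned on a validation set, e.g. in proportion to each model's standalone accuracy.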
62,210,221 | 2020-6-5 | https://stackoverflow.com/questions/62210221/walk-forward-with-validation-window-for-time-series-data-cross-validation | I'm looking to perform walk forward validation on my time-series data. Extensive document exists on how to perform rolling window: or expanding window But this validation does not correspond to what will be in my production system: I want to daily retrain a model that will make prediction 14 days in the future. So I would only add one day of data to my previous training period (where the other methods add on the following training folds an entire set of data of length test_size; 14 days in my case). Therefore, I would like to validate my model with a sliding window: My question is that I can't come across a Python library that would do the work. TimeSeriesSplit from sklearn has no option of that kind. Basically I want to provide : test_size, n_fold, min_train_size and if n_fold > (n_samples - min_train_size) % test_size then next training_set draw data from the previous fold test_set | Here is my solution that allows the user to specify the testing horizon and the minimum sample of data for training: from sklearn.model_selection import TimeSeriesSplit from sklearn.utils import indexable from sklearn.utils.validation import _num_samples class TimeSeriesSplitCustom(TimeSeriesSplit): def __init__(self, n_splits=5, max_train_size=None, test_size=1, min_train_size=1): super().__init__(n_splits=n_splits, max_train_size=max_train_size) self.test_size = test_size self.min_train_size = min_train_size def overlapping_split(self, X, y=None, groups=None): min_train_size = self.min_train_size test_size = self.test_size n_splits = self.n_splits n_samples = _num_samples(X) if (n_samples - min_train_size) / test_size >= n_splits: print('(n_samples - min_train_size) / test_size >= n_splits') print('default TimeSeriesSplit.split() used') yield from super().split(X) else: shift = int(np.floor( (n_samples - test_size - min_train_size) / 
(n_splits - 1))) start_test = n_samples - (n_splits * shift + test_size - shift) test_starts = range(start_test, n_samples - test_size + 1, shift) if start_test < min_train_size: raise ValueError( ("The start of the testing : {0} is smaller" " than the minimum training samples: {1}.").format(start_test, min_train_size)) indices = np.arange(n_samples) for test_start in test_starts: if self.max_train_size and self.max_train_size < test_start: yield (indices[test_start - self.max_train_size:test_start], indices[test_start:test_start + test_size]) else: yield (indices[:test_start], indices[test_start:test_start + test_size]) And with the visualisation: import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Patch from ModelEvaluation import TimeSeriesSplitCustom np.random.seed(1338) cmap_data = plt.cm.Paired cmap_cv = plt.cm.coolwarm n_splits = 13 # Generate the class/group data n_points = 100 X = np.random.randn(100, 10) percentiles_classes = [.1, .3, .6] y = np.hstack([[ii] * int(100 * perc) for ii, perc in enumerate(percentiles_classes)]) # Evenly spaced groups repeated once groups = np.hstack([[ii] * 10 for ii in range(10)]) fig, ax = plt.subplots() cv = TimeSeriesSplitCustom(n_splits=n_splits, test_size=20, min_train_size=12) plot_cv_indices(cv, X, y, groups, ax, n_splits) plt.show() (To have the same result, make sure to change the for ii, (tr, tt) in enumerate(**cv.overlapping_split**(X=X, y=y, groups=group)): in the plot_cv_indices function. Cheers! | 11 | 2 |
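The index arithmetic of overlapping_split can be sanity-checked in isolation with a dependency-free sketch (sliding_splits is a hypothetical helper mirroring the shift/start_test computation above, without the max_train_size handling):

```python
from math import floor


def sliding_splits(n_samples, n_splits, test_size, min_train_size):
    """Yield (train_indices, test_indices) pairs whose test windows
    overlap when there are too few samples for disjoint folds."""
    shift = floor((n_samples - test_size - min_train_size) / (n_splits - 1))
    start_test = n_samples - (n_splits * shift + test_size - shift)
    if start_test < min_train_size:
        raise ValueError("first test fold starts before min_train_size")
    for test_start in range(start_test, n_samples - test_size + 1, shift):
        yield (list(range(test_start)),
               list(range(test_start, test_start + test_size)))


for train, test in sliding_splits(10, n_splits=3, test_size=3, min_train_size=4):
    print(len(train), test)
# 5 [5, 6, 7]
# 6 [6, 7, 8]
# 7 [7, 8, 9]
```

Each fold grows the training set by shift samples while keeping a fixed-length test horizon, which is the behaviour requested in the question.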
62,229,579 | 2020-6-6 | https://stackoverflow.com/questions/62229579/google-colab-how-to-show-value-of-assignments | I am working on this python notebook in Google Colab: https://github.com/AllenDowney/ModSimPy/blob/master/notebooks/chap01.ipynb I had to change the configuration line because the one stated in the original was erroring out: # Configure Jupyter to display the assigned value after an assignment # Line commented below because errors out # %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # Edit solution given below %config InteractiveShell.ast_node_interactivity='last_expr' However, I think the original statement was meant to show values of assignments (if I'm not mistaken), so that when I run the following cell in the notebook, I should see an output: meter = UNITS.meter second = UNITS.second a = 9.8 * meter / second**2 If so, how can I make the notebook on Google Colab show output of assignments? | Google Colab has not yet been upgraded to the latest IPython version- if you explicitly upgrade with !pip install -U ipython then last_expr_or_assign will work. | 8 | 2 |
62,293,077 | 2020-6-9 | https://stackoverflow.com/questions/62293077/why-is-pils-image-fromarray-distorting-my-image-color | I am generating thumbnails for mp4 videos using the following code: import cv2 as cv from PIL import Image vidcap = cv.VideoCapture(videoPath) vidcap.set(cv.CAP_PROP_POS_MSEC, millisecond) #Turn video frame into numpy ndarray success, image = vidcap.read() cv.imwrite('fromImage.jpg', image) #line to be replaced The thumbnail generated from a high budget, professionally shot video looks like this: Unfortunately in my application context, I will not be able to write the image frame directly to a file. Instead I must convert the image array generated by cv into a PIL image and then go from there. It looks something like this: # Turn numpy ndarray int PIL image img = Image.fromarray(image) img.save('fromArray.jpg') #Saving it for stackoverflow But the outputted thumbnail from the same mp4 video is completely distorted as it seems to have swapped red and blue and looks like this: Who or what is the culprit in this image distortion? | https://note.nkmk.me/en/python-opencv-bgr-rgb-cvtcolor/ imageRGB = cv.cvtColor(image, cv.COLOR_BGR2RGB) img = Image.fromarray(imageRGB) img.save('fromArray.jpg') | 11 | 22 |
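Under the hood the fix is just a channel reordering - OpenCV stores pixels as BGR while PIL expects RGB - and for a plain BGR-to-RGB swap the same effect can be had with a numpy slice (illustrative; cv.cvtColor remains the canonical way):

```python
import numpy as np

# a 1x2 "image" in OpenCV's BGR order: one pure-blue pixel, one pure-red pixel
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

rgb = bgr[..., ::-1]  # reverse the last (channel) axis: BGR -> RGB
print(rgb[0, 0].tolist(), rgb[0, 1].tolist())  # [0, 0, 255] [255, 0, 0]
```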
62,288,898 | 2020-6-9 | https://stackoverflow.com/questions/62288898/matplotlib-values-for-the-xx-small-x-small-small-medium-large-x-large-xx | The matplotlibrc sample file states that: ## The font.size property is the default font size for text, given in pts. ## 10 pt is the standard value. ## ## Note that font.size controls default text sizes. To configure ## special text sizes tick labels, axes, labels, title, etc, see the rc ## settings for axes and ticks. Special text sizes can be defined ## relative to font.size, using the following values: xx-small, x-small, ## small, medium, large, x-large, xx-large, larger, or smaller What are the equivalent values in pts for those "special text sizes"? | You can also compute the absolute font sizes yourself very easily import matplotlib as mpl import matplotlib.pyplot as plt fig, ax = plt.subplots() t = ax.text(0.5, 0.5, 'Text') fonts = ['xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large', 'larger', 'smaller'] for font in fonts: t.set_fontsize(font) print (font, round(t.get_fontsize(), 2)) plt.close() Output xx-small 5.79 x-small 6.94 small 8.33 medium 10.0 large 12.0 x-large 14.4 xx-large 17.28 larger 12.0 smaller 8.33 | 13 | 19 |
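Those numbers are not arbitrary: each named size is a power of 1.2 relative to the default font.size of 10 pt (with 'larger'/'smaller' one step up/down from the current size), so they can be reproduced without matplotlib:

```python
base = 10.0  # matplotlib's default font.size in points
step = 1.2   # one named size up or down multiplies by this factor
scalings = {
    'xx-small': step ** -3, 'x-small': step ** -2, 'small': step ** -1,
    'medium': 1.0, 'large': step, 'x-large': step ** 2, 'xx-large': step ** 3,
    'larger': step, 'smaller': step ** -1,
}
for name, scale in scalings.items():
    print(name, round(base * scale, 2))  # matches the values printed above
```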
62,246,742 | 2020-6-7 | https://stackoverflow.com/questions/62246742/how-to-add-a-different-model-form-to-modelformset-factory | With respect to these models: class Projects(models.Model): projectDescription = models.CharField(max_length=50,blank=True,null = True,) status = models.IntegerField(choices = Status_CHOICES, default = 4) projectOwner = models.ForeignKey(staff, on_delete=models.CASCADE, blank=True,null = True,) class Updates(models.Model): project = models.ForeignKey(Projects, on_delete=models.CASCADE) update = models.CharField(max_length=50,blank=True,null = True,) updateDate = models.DateTimeField(default = timezone.now, editable = False) addedBy = models.CharField(max_length=35,blank=True,) I want to create a view that displays forms for all of the current projects. This is easy using a modelformset_factory. But how can I also add an additional form to each of these project form instances so that an Update to the project (foreign key) can be made? Ideally the user makes changes to various projects, adds an update to one or several projects, then submits the form to save all the changes. What I have below seems to be very close, but it is saving an update to each project with the value of whatever I typed into the last form. Which seems to be because the form is not unique. I went down the road of using prefix's which didn't seem to get me anywhere either. help! 
update form class updateForm(ModelForm): def __init__(self, *args, **kwargs): super(updateForm, self).__init__(*args, **kwargs) class Meta: model = Updates fields = ('update',) view: def MDSprojects(request): projects = Projects.objects.filter(dept = 'Assistive Technology') projectFormset = modelformset_factory(Projects, form=EditProjectForm, extra = 0) if request.method == 'POST': formsetA = projectFormset(request.POST,request.FILES) if formsetA.is_valid(): for f in formsetA: formA = f.save(commit = False) id = formA.id formA.save() formsetB = updateForm(request.POST,request.FILES) if formsetB.is_valid(): formB = formsetB.save(commit = False) project = Projects.objects.get(id = id) formB.project = project formB.save() else: print(formsetB.errors) return redirect('MDS_Projects') else: print(formsetA.errors) else: formsetA = projectFormset(queryset = projects) formsetB = updateForm() return render(request,'MDSprojectsB.html',{'formset':formsetA,'formsetB':formsetB,}) Template: <form method="POST" enctype= multipart/form-data> {{ formset.management_form }} {% csrf_token %} {%for form in formset%} {{ form.management_form }} {% for hidden in form.hidden_fields %} {{ hidden }} {% endfor %} <div class="card border-primary mb-3" style="max-width: 85rem;"> <div class="card-header text-primary"> <strong>{{form.instance.projectDescription}}</strong> </div> <div class="card-body text-primary"> Project Name: {{form.projectDescription}}</div> <div class="card-body text-primary"> Status: {{form.status}}</div> <div class="card-body text-primary"> Status: {{form.projectOwner}}</div> <div class="card-body text-primary"> Status: {{form.priority}}</div> <div> {{ formsetB.management_form }} {% for hidden in formsetB.hidden_fields %} {{ hidden }} {% endfor %} {{formsetB}} </div> </div> {% endfor %} <button class="btn btn-success btn" type="submit">Save Changes</button> </form> EDIT: Perhaps another way of explaining my question is: If I had the same models as above: class 
Projects(models.Model): projectDescription = models.CharField(max_length=50,blank=True,null = True,) status = models.IntegerField(choices = Status_CHOICES, default = 4) projectOwner = models.ForeignKey(staff, on_delete=models.CASCADE, blank=True,null = True,) class Updates(models.Model): project = models.ForeignKey(Projects, on_delete=models.CASCADE) update = models.CharField(max_length=50,blank=True,null = True,) updateDate = models.DateTimeField(default = timezone.now, editable = False) addedBy = models.CharField(max_length=35,blank=True,) I want to have a view that renders all of the current projects as individual forms so that the user can make changes to the project model in one view. Additionally I want the user to be able to add an update to each project as well. So for example, if I had 3 projects, I would expect to see 3 forms on the page. Each form would contain the fields to modify this project (so these form fields would be pre-populated with the current values). Then lastly (the part I'm stuck on) would be an additional empty field to add an update to the project. | Solution I What you have done is almost correct. You have initiated an UpdateForm but you treated it as if it was a formset. However it's a Form instance. If you alter your code as below you may achieve your goal. models.py class Project(models.Model): description = models.CharField(max_length=50, blank=True, null=True) status = models.IntegerField(choices=STATUS_CHOICES, default=4) owner = models.ForeignKey(staff, on_delete=models.CASCADE, blank=True, null=True) ... 
class Update(models.Model): project = models.ForeignKey(Project, on_delete=models.CASCADE) notes = models.CharField(max_length=50, blank=True, null=True) update_date = models.DateTimeField(default=timezone.now, editable=False) added_by = models.CharField(max_length=35, blank=True) forms.py class CreateUpdateForm(ModelForm): class Meta: model = Update fields = ('notes') class EditProjectForm(ModelForm) class Meta: model = Project fields = ('__all__') views.py def mds_projects(request): project_formset = modelformset_factory(Project, form=EditProjectForm, extra=0) if request.method == 'POST': formset = project_formset(request.POST,request.FILES) if formset.is_valid(): for f in formset: project = f.save(commit = False) update_form = CreateUpdateForm(request.POST,request.FILES) if update_form.is_valid(): update = update_form.save(commit = False) update.project = project project.save() update.save() else: print(update_form.errors) return redirect('MDS_Projects') else: print(formset.errors) else: projects = Project.objects.filter(dept='Assistive Technology') formset = project_formset(queryset=projects) update_form = updateForm() return render(request,'MDSprojectsB.html',{'formset':formset, 'update_form':update_form}) MDSprojectsB.html <form method="POST" enctype= multipart/form-data> {{ formset.management_form }} {% csrf_token %} {%for form in formset%} {{ form.management_form }} {% for hidden in form.hidden_fields %} {{ hidden }} {% endfor %} <div class="card border-primary mb-3" style="max-width: 85rem;"> <div class="card-header text-primary"> <strong>{{form.instance.description}}</strong> </div> <div class="card-body text-primary"> Project Name: {{form.description}}</div> <div class="card-body text-primary"> Status: {{form.status}}</div> <div class="card-body text-primary"> Owner: {{form.owner}}</div> <div class="card-body text-primary"> Priority: {{form.priority}}</div> <div> {{ update_form }} </div> </div> {% endfor %} <button class="btn btn-success btn" 
type="submit">Save Changes</button> </form> Solution II You can also use one form for both models. Since you only want update notes for the Update model you can add additional form field to update project form thus you would not need an additional update form. forms.py class EditProjectForm(ModelForm) update_notes = forms.CharField(max_length=50, help_text='50 characters max.') class Meta: model = Project fields = ('__all__') views.py def mds_projects(request): project_formset = modelformset_factory(Project, form=EditProjectForm, extra=0) if request.method == 'POST': formset = project_formset(request.POST,request.FILES) if formset.is_valid(): for f in formset: project = f.save(commit = False) update = Update.objects.create(notes=f.cleaned_data['update_notes'], project=project) project.save() return redirect('MDS_Projects') else: print(formset.errors) else: projects = Project.objects.filter(dept='Assistive Technology') formset = project_formset(queryset=projects) return render(request,'MDSprojectsB.html',{'formset':formset}) MDSprojectsB.html <form method="POST" enctype= multipart/form-data> {{ formset.management_form }} {% csrf_token %} {%for form in formset%} {{ form.management_form }} {% for hidden in form.hidden_fields %} {{ hidden }} {% endfor %} <div class="card border-primary mb-3" style="max-width: 85rem;"> <div class="card-header text-primary"> <strong>{{form.instance.description}}</strong> </div> <div class="card-body text-primary"> Project Name: {{form.description}}</div> <div class="card-body text-primary"> Status: {{form.status}}</div> <div class="card-body text-primary"> Owner: {{form.owner}}</div> <div class="card-body text-primary"> Priority: {{form.priority}}</div> <div class="card-body text-primary"> Update: {{form.update_notes}}</div> </div> {% endfor %} <button class="btn btn-success btn" type="submit">Save Changes</button> </form> P.S. The code you provided has some violation of both python and django naming conventions. 
Please try to follow these conventions; I have fixed some of them in my code. For example, you should not name your models plural, and you should use snake_case (not camelCase) naming for functions and variables. | 8 | 7 |
62,287,001 | 2020-6-9 | https://stackoverflow.com/questions/62287001/how-to-overlay-two-plots-in-same-figure-in-plotly-create-pareto-chart-in-plotl | I was trying to plot barplot and scatterplot in the same plot in plotly, but it shows only scatterplot. How to show both the plots? data import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from matplotlib.ticker import PercentFormatter import plotly import plotly.offline as py import plotly.graph_objs as go import plotly.figure_factory as ff import plotly.tools as tls from plotly.subplots import make_subplots from plotly.offline import plot, iplot, init_notebook_mode init_notebook_mode(connected=False) df = pd.DataFrame({ 'price': [ 4.0, 17.0, 7.0, 7.0, 2.0, 1.0, 1.0], 'item': ['apple', 'banana', 'carrot', 'plum', 'orange', 'date', 'cherry']}) df = df.sort_values(num,ascending=False) df['cumulative_sum'] = df[num].cumsum() df['cumulative_perc'] = 100*df['cumulative_sum']/df[num].sum() df['demarcation'] = 80 num = 'price' cat = 'item' title = 'Pareto Chart' Code trace1 = go.Bar( x=df[cat], y=df[num], name=num, marker=dict( color='rgb(34,163,192)' ) ) trace2 = go.Scatter( x=df[cat], y=df['cumulative_perc'], name='Cumulative Percentage', yaxis='y2', ) data = [trace1,trace2] fig = dict(data=data) iplot(fig) Output Required show both barchart and scatterplot barchart y-ticks on left y-axis scatterplot y-ticks on right y-axis xticklabels rotate 90 degree | Try this: import plotly.graph_objects as go from plotly.subplots import make_subplots trace1 = go.Bar( x=df[cat], y=df[num], name=num, marker=dict( color='rgb(34,163,192)' ) ) trace2 = go.Scatter( x=df[cat], y=df['cumulative_perc'], name='Cumulative Percentage', yaxis='y2' ) fig = make_subplots(specs=[[{"secondary_y": True}]]) fig.add_trace(trace1) fig.add_trace(trace2,secondary_y=True) fig['layout'].update(height = 600, width = 800, title = title,xaxis=dict( tickangle=-90 )) iplot(fig) Gives, | 22 | 35 |
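Note that the question's snippet references num before it is defined; the data-preparation part, reordered, is just a descending sort plus a normalized cumulative sum, and can be checked in pure pandas without any plotting:

```python
import pandas as pd

df = pd.DataFrame({
    'price': [4.0, 17.0, 7.0, 7.0, 2.0, 1.0, 1.0],
    'item': ['apple', 'banana', 'carrot', 'plum', 'orange', 'date', 'cherry']})

num = 'price'
df = df.sort_values(num, ascending=False)
df['cumulative_sum'] = df[num].cumsum()
df['cumulative_perc'] = 100 * df['cumulative_sum'] / df[num].sum()

print(df[['item', num, 'cumulative_perc']])
```

The last cumulative percentage is always 100, which is what the secondary y-axis trace plots.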
62,286,965 | 2020-6-9 | https://stackoverflow.com/questions/62286965/is-there-a-difference-between-f-and-f-in-python-string-formatting | To format strings in Python 3.6+, I usually use the lowercase "f" option to include variables. For example: response = requests.get(f'{base_url}/{endpoint}?fields={field_list}') I've recently seen one of my coworkers who always uses capital "F", instead. Like this: response = requests.get(F'{base_url}/{endpoint}?fields={field_list}') Is there a difference between the lowercase "f" and capital "F"? And if yes, when would you use each? | As explained in the PEP 498, chapter Specification, both are accepted, and should not differ. In source code, f-strings are string literals that are prefixed by the letter 'f' or 'F'. Everywhere this PEP uses 'f', 'F' may also be used. | 13 | 13 |
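The equivalence is easy to verify: both prefixes produce ordinary str objects with identical formatting behavior (the URL parts below are hypothetical placeholder values):

```python
base_url, endpoint = 'https://api.example.com', 'users'  # hypothetical values

lower = f'{base_url}/{endpoint}'
upper = F'{base_url}/{endpoint}'

print(lower == upper)       # True
print(type(lower) is str)   # True
```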
62,281,179 | 2020-6-9 | https://stackoverflow.com/questions/62281179/how-to-adjust-scale-ranges-in-altair | I'm having trouble getting all of the axes onto the same scale when using altair to make a group of plots like so: class_list = ['c-CS-m','c-CS-s','c-SC-m','c-SC-s','t-CS-m','t-CS-s','t-SC-m','t-SC-s'] list_of_plots = [] for class_name in class_list: list_of_plots.append(alt.Chart(data[data['class'] == class_name]).mark_bar().encode( x = alt.X('DYRK1A', bin = True, scale=alt.Scale()), y = 'count()').resolve_scale( y='independent' )) list_of_plots[0] & list_of_plots[1] | list_of_plots[2] & list_of_plots[3] | list_of_plots[4] & list_of_plots[5] | list_of_plots[6] & list_of_plots[7] I'd like to have the x axis run from 0.0 to 1.4 and the y axis run from 0 to 120 so that all eight plots I'm producing are on the same scale! I've tried to use domain, inside the currently empty Scale() call but it seems to result in the visualisations that have x axis data from say 0.0 to 0.3 being super squished up and I can't understand why? For context, I'm trying to plot continuous values for protein expression levels. The 8 plots are for different classes of mice that have been exposed to different conditions. The data is available at this link if that helps: https://archive.ics.uci.edu/ml/datasets/Mice+Protein+Expression Please let me know if I need to provide some more info in order for you to help me! | First of all, it looks like you're trying to create a wrapped facet chart. Rather than doing that manually with concatenation, it's better to use a wrapped facet encoding. Second, when you specify resolve_scale(y='independent'), you're specifying that the y-scales should not match between subcharts. If instead you want all scales to be shared, you can use resolve_scale(y='shared'), or equivalently just leave that out, as it is the default. To specify explicit axis domains, use alt.Scale(domain=[min, max]). 
Put together, it might look something like this: alt.Chart(data).mark_bar().encode( x = alt.X('DYRK1A', bin = True, scale=alt.Scale(domain=[0, 1.4])), y = alt.Y('count()', scale=alt.Scale(domain=[0, 120])), facet = alt.Facet('class:N', columns=4), ) | 16 | 32 |
62,274,412 | 2020-6-9 | https://stackoverflow.com/questions/62274412/cv2-approxpolydp-cv2-arclength-how-these-works | How do these function works? I am using Python3.7 and OpenCv 4.2.0. Thanks in Advance. approx = cv2.approxPolyDP(cnt, 0.01*cv2.arcLength(cnt, True), True) | If you are looking for a example snippet, below is one: import cv2 import imutils # edged is the edge detected image cnts = cv2.findContours(edged, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:5] # loop over the contours for c in cnts: # approximate the contour peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) # if our approximated contour has four points, then we # can assume that we have found our screen if len(approx) == 4: screenCnt = approx break In the above snippet, first it finds the contours from a edge detected image, then it sorts the contours to find the five largest contours. Finally it loop over the contours and used cv2.approxPolyDP function to smooth and approximate the quadrilateral. cv2.approxPolyDP works for the cases where there are sharp edges in the contours like a document boundary. | 15 | 16 |
62,265,366 | 2020-6-8 | https://stackoverflow.com/questions/62265366/does-mypy-only-type-check-a-function-if-it-declares-a-return-type | The following file: from typing import List class A: def __init__(self, myStr): self.chars: List[int] = list(myStr) def toString(self): return "".join(self.chars) typechecks (note: chars should be List[str] not List[int]): ➜ Python python3 -m mypy temp.py => Success: no issues found in 1 source file, but the following one, where I declare the return type of toString, does: from typing import List class A: def __init__(self, myStr): self.chars: List[int] = list(myStr) def toString(self) -> str: return "".join(self.chars) ➜ Python python3 -m mypy temp.py temp.py:9: error: Argument 1 to "join" of "str" has incompatible type "List[int]"; expected "Iterable[str]" Found 1 error in 1 file (checked 1 source file) Could anyone explain mypy's behaviour in this case? Why do I need to add the return type to the function to force mypy to correctly diagnose the problem? (it already has all necessary information: that chars is List[int] and join accepts Iterable[str]) | This behavior of mypy is by design. Mypy assumes that if a function signature is missing type hints, the user did not want that function to be type checked yet and so skips analyzing that function body. This behavior is intended to make progressively adding type hints when working on large codebases easier: you end up being warned only about functions you've had a chance to examine and migrate instead of being hit with a wall of warnings up front. If you don't like this behavior and would prefer that mypy tries type checking function bodies no matter what, pass in the --check-untyped-defs command line flag (or config file option). Alternatively, if you'd like mypy to warn you when you forget to add a type signature, use the --disallow-untyped-defs and --disallow-incomplete-defs flags. The --strict flag may also enable all three of these flags, among other ones. 
You can run mypy --help to double-check this for yourself. | 12 | 20 |
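If you prefer keeping these flags out of the command line, the equivalent settings can live in mypy's config file instead; a minimal mypy.ini sketch:

```ini
[mypy]
check_untyped_defs = True
disallow_untyped_defs = True
disallow_incomplete_defs = True
```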
62,269,086 | 2020-6-8 | https://stackoverflow.com/questions/62269086/preserve-column-order-when-using-pivot | I am trying to do a simple pivot on my dataframe, with one column as the index, one column as the columns, and one column as the values. Here is a screenshot of the code and the result: As you can see, it is just one simple line of code. You can also notice that once the table is pivoted, the columns get placed into alphabetical order, which is not acceptable for the purposes of this program. This is supposed to be a general program for many use cases, so I can't reindex it by the column names like some other posts have suggested. Depending on the use case, it may have anywhere from 400 to 500 columns (later in the script these get put into smaller dataframes), but this one only has 32. How can I fix this to keep the original sorting of the columns when I pivot without using the specific column names? I want to keep it general so it will work for most use cases. | You can reorder the columns after the pivoting by selecting a column list in the original order. This column list is obtained from the original dataframe in generic way by selecting the column names for the first line. Example: df = pd.DataFrame({'equipment name':['r1', 'r1', 'r2', 'r2'], 'name': ['col2', 'col1', 'col2', 'col1'], 'value': [1,2,3,4]}) #alphabetic order pd.pivot(df, 'equipment name', 'name', 'value') #name col1 col2 #equipment name #r1 2 1 #r2 4 3 #original order cols = df[df['equipment name']==df['equipment name'][0]].name.tolist() pd.pivot(df, 'equipment name', 'name', 'value')[cols] #name col2 col1 #equipment name #r1 1 2 #r2 3 4 | 7 | 8 |
62,264,277 | 2020-6-8 | https://stackoverflow.com/questions/62264277/get-infinity-when-dividing-by-zero | Is it possible to assign say infinity to something divided by 0 instead of it throwing ZeroDivisionError? Like a function that assigns infinity to something/0. | If you just want a float that represents infinity, you can issue float('inf') or float('-inf'). Standard Python floats and ints will give you a ZeroDivisionError, but you can use numpy datatypes. >>> import numpy as np >>> np.float64(15)/0 inf Without numpy, write a function: def my_div(dividend, divisor): try: return dividend/divisor except ZeroDivisionError: if dividend == 0: raise ValueError('0/0 is undefined') # instead of raising an error, an alternative # is to return float('nan') as the result of 0/0 if dividend > 0: return float('inf') else: return float('-inf') | 9 | 6 |
62,253,289 | 2020-6-8 | https://stackoverflow.com/questions/62253289/valueerror-data-cardinality-is-ambiguous | I'm trying to train LSTM network on data taken from a DataFrame. Here's the code: x_lstm=x.to_numpy().reshape(1,x.shape[0],x.shape[1]) model = keras.models.Sequential([ keras.layers.LSTM(x.shape[1], return_sequences=True, input_shape=(x_lstm.shape[1],x_lstm.shape[2])), keras.layers.LSTM(NORMAL_LAYER_SIZE, return_sequences=True), keras.layers.LSTM(NORMAL_LAYER_SIZE), keras.layers.Dense(y.shape[1]) ]) optimizer=keras.optimizers.Adadelta() model.compile(loss="mse", optimizer=optimizer) for i in range(150): history = model.fit(x_lstm, y) save_model(model,'tmp.rnn') This fails with ValueError: Data cardinality is ambiguous: x sizes: 1 y sizes: 99 Please provide data which shares the same first dimension. When I change model to model = keras.models.Sequential([ keras.layers.LSTM(x.shape[1], return_sequences=True, input_shape=x_lstm.shape), keras.layers.LSTM(NORMAL_LAYER_SIZE, return_sequences=True), keras.layers.LSTM(NORMAL_LAYER_SIZE), keras.layers.Dense(y.shape[1]) ]) it fails with following error: Input 0 of layer lstm_9 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 1, 99, 1200] How do I get this to work? x has shape of (99, 1200) (99 items with 1200 features each, this is just sample a larger dataset), y has shape (99, 1) | As the Error suggests, the First Dimension of X and y is different. First Dimension indicates the Batch Size and it should be same. Please ensure that Y also has the shape, (1, something). 
I could reproduce your error with the Code shown below: from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, LSTM import tensorflow as tf import numpy as np # define sequences sequences = [ [1, 2, 3, 4], [1, 2, 3], [1] ] # pad sequence padded = pad_sequences(sequences) X = np.expand_dims(padded, axis = 0) print(X.shape) # (1, 3, 4) y = np.array([1,0,1]) #y = y.reshape(1,-1) print(y.shape) # (3,) model = Sequential() model.add(LSTM(4, return_sequences=False, input_shape=(None, X.shape[2]))) model.add(Dense(1, activation='sigmoid')) model.compile ( loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.001)) model.fit(x = X, y = y) If we observe the Print Statements, Shape of X is (1, 3, 4) Shape of y is (3,) This Error can be fixed by uncommenting the Line, y = y.reshape(1,-1), which makes the First Dimension (Batch_Size) equal (1) for both X and y. Now, the working code is shown below, along with the Output: from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, LSTM import tensorflow as tf import numpy as np # define sequences sequences = [ [1, 2, 3, 4], [1, 2, 3], [1] ] # pad sequence padded = pad_sequences(sequences) X = np.expand_dims(padded, axis = 0) print('Shape of X is ', X.shape) # (1, 3, 4) y = np.array([1,0,1]) y = y.reshape(1,-1) print('Shape of y is', y.shape) # (1, 3) model = Sequential() model.add(LSTM(4, return_sequences=False, input_shape=(None, X.shape[2]))) model.add(Dense(1, activation='sigmoid')) model.compile ( loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.001)) model.fit(x = X, y = y) The Output of above code is : Shape of X is (1, 3, 4) Shape of y is (1, 3) 1/1 [==============================] - 0s 1ms/step - loss: 0.2588 <tensorflow.python.keras.callbacks.History at 0x7f5b0d78f4a8> Hope this helps. Happy Learning! 
| 18 | 15 |
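The shape fix itself can be checked without TensorFlow: only the first (batch) dimensions of X and y need to agree. A sketch of the answer's padding and reshaping, with the model left out:

```python
import numpy as np

# padded sequences, batched: shape (1, 3, 4)
padded = np.array([[1, 2, 3, 4],
                   [0, 1, 2, 3],
                   [0, 0, 0, 1]])
X = np.expand_dims(padded, axis=0)

y = np.array([1, 0, 1])
y = y.reshape(1, -1)  # (3,) -> (1, 3): batch dimensions now match

print(X.shape, y.shape)          # (1, 3, 4) (1, 3)
print(X.shape[0] == y.shape[0])  # True -- no cardinality error
```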
62,253,718 | 2020-6-8 | https://stackoverflow.com/questions/62253718/how-can-i-receive-file-in-python-telegram-bot | I have a problem about file messages in python telegram bot. How can I receive file and read that file ? Or save it. | You can: Register a handler that listens to Document get File object from the update (inside the listener using get_file) then simply call .download() to download the document Here a sample code to get you started: from telegram.ext import Updater, MessageHandler, Filters BOT_TOKEN = ' ... ' def downloader(update, context): context.bot.get_file(update.message.document).download() # writing to a custom file with open("custom/file.doc", 'wb') as f: context.bot.get_file(update.message.document).download(out=f) updater = Updater(BOT_TOKEN, use_context=True) updater.dispatcher.add_handler(MessageHandler(Filters.document, downloader)) updater.start_polling() updater.idle() | 10 | 11 |
62,256,014 | 2020-6-8 | https://stackoverflow.com/questions/62256014/does-python-forbid-two-similarly-looking-unicode-identifiers | I was playing around with Unicode identifiers and stumbled upon this: >>> 𝑓, x = 1, 2 >>> 𝑓, x (1, 2) >>> 𝑓, f = 1, 2 >>> 𝑓, f (2, 2) What's going on here? Why does Python replace the object referenced by 𝑓, but only sometimes? Where is that behavior described? | PEP 3131 -- Supporting Non-ASCII Identifiers says All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. You can use unicodedata to test the conversions: import unicodedata unicodedata.normalize('NFKC', '𝑓') # f which would indicate that '𝑓' gets converted to 'f' in parsing. Leading to the expected: 𝑓 = "Some String" print(f) # "Some String" | 87 | 88 |
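You can reproduce the normalization with the standard library alone; the escape \U0001d453 below is MATHEMATICAL ITALIC SMALL F, the glyph from the question:

```python
import unicodedata

italic_f = '\U0001d453'  # the character that looks like an italic f
normalized = unicodedata.normalize('NFKC', italic_f)

print(normalized)         # f
print(normalized == 'f')  # True -- the parser treats both spellings as one name
```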
62,245,218 | 2020-6-7 | https://stackoverflow.com/questions/62245218/python-pandas-reshape-dataframe | Given the following data frame: pd.DataFrame({"A":[1,2,3],"B":[4,5,6],"C":[6,7,8]}) A B C 0 1 4 6 1 2 5 7 2 3 6 8 3 11 14 16 4 12 15 17 5 13 16 18 I would like to reshape it so it would look like so: A B C A_1 B_1 C_1 A_2 B_2 C_2 0 1 4 6 2 5 7 3 6 8 1 11 14 16 12 15 17 13 16 18 So every 3 rows are grouped into 1 row How can I achieve this with pandas? | One idea is to create a MultiIndex with integer and modulo division and reshape by DataFrame.unstack: a = np.arange(len(df)) df.index = [a // 3, a % 3] df = df.unstack().sort_index(axis=1, level=1) df.columns = [f'{a}_{b}' for a, b in df.columns] print (df) A_0 B_0 C_0 A_1 B_1 C_1 A_2 B_2 C_2 0 1 4 6 2 5 7 3 6 8 1 11 14 16 12 15 17 13 16 18 For the reverse operation it is possible to use str.split with DataFrame.stack: a = np.arange(len(df)) df1 = (df.set_index(pd.MultiIndex.from_arrays([a // 3, a % 3])) .unstack().sort_index(axis=1, level=1)) df1.columns = [f'{a}_{b}' for a, b in df1.columns] print (df1) A_0 B_0 C_0 A_1 B_1 C_1 A_2 B_2 C_2 0 1 4 6 2 5 7 3 6 8 1 11 14 16 12 15 17 13 16 18 df1.columns = df1.columns.str.split('_', expand=True) df2 = df1.stack().reset_index(drop=True) print (df2) A B C 0 1 4 6 1 2 5 7 2 3 6 8 3 11 14 16 4 12 15 17 5 13 16 18 | 10 | 12 |
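The forward step of the answer, spelled out on the example data as a self-contained script (the f-string column join assumes Python 3.6+):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 11, 12, 13],
                   "B": [4, 5, 6, 14, 15, 16],
                   "C": [6, 7, 8, 16, 17, 18]})

# integer division picks the output row, modulo picks the slot within it
a = np.arange(len(df))
wide = (df.set_index(pd.MultiIndex.from_arrays([a // 3, a % 3]))
          .unstack()
          .sort_index(axis=1, level=1))
wide.columns = [f'{col}_{i}' for col, i in wide.columns]

print(wide)
```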
62,250,799 | 2020-6-7 | https://stackoverflow.com/questions/62250799/mean-of-non-diagonal-elements-of-each-row-numpy | I essentially have a confusion matrix of size n x n with all my diagonal elements being 1. For every row, I wish to calculate its mean, excluding the 1, i.e. excluding the diagonal value. Is there a simple way to do it in numpy? This is my current solution: mask = np.zeros(cs.shape, dtype=bool) np.fill_diagonal(mask, 1) print(np.ma.masked_array(cs, mask).mean(axis=1)) where cs is my n x n matrix The code seems convoluted, and I certainly feel that there's a much more elegant solution. | A concise one using summation - (cs.sum(1)-1)/(cs.shape[1]-1) For a general case of ignoring diagonal elements, use np.diag in place of 1 offset - (cs.sum(1)-np.diag(cs))/(cs.shape[1]-1) Another with mean - n = cs.shape[1] (cs.mean(1)-1./n)*(n/(n-1)) | 8 | 6 |
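Checking the summation identity on a small confusion matrix with a diagonal of ones, as in the question:

```python
import numpy as np

cs = np.array([[1.0, 0.5, 0.3],
               [0.2, 1.0, 0.4],
               [0.6, 0.8, 1.0]])

# mean of each row excluding the diagonal entry
off_diag_mean = (cs.sum(1) - np.diag(cs)) / (cs.shape[1] - 1)

print(off_diag_mean)  # approximately [0.4, 0.3, 0.7]
```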
62,248,185 | 2020-6-7 | https://stackoverflow.com/questions/62248185/pandas-combining-sparse-columns-in-dataframe | I am using Python, Pandas for data analysis. I have sparsely distributed data in different columns like following | id | col1a | col1b | col2a | col2b | col3a | col3b | |----|-------|-------|-------|-------|-------|-------| | 1 | 11 | 12 | NaN | NaN | NaN | NaN | | 2 | NaN | NaN | 21 | 86 | NaN | NaN | | 3 | 22 | 87 | NaN | NaN | NaN | NaN | | 4 | NaN | NaN | NaN | NaN | 545 | 32 | I want to combine this sparsely distributed data in different columns to tightly packed column like following. | id | group | cola | colb | |----|-------|-------|-------| | 1 | g1 | 11 | 12 | | 2 | g2 | 21 | 86 | | 3 | g1 | 22 | 87 | | 4 | g3 | 545 | 32 | What I have tried is doing following, but not able to do it properly df['cola']=np.nan df['colb']=np.nan df['cola'].fillna(df.col1a,inplace=True) df['colb'].fillna(df.col1b,inplace=True) df['cola'].fillna(df.col2a,inplace=True) df['colb'].fillna(df.col2b,inplace=True) df['cola'].fillna(df.col3a,inplace=True) df['colb'].fillna(df.col3b,inplace=True) But I think there must be more concise and efficient way way of doing this. How to do this in better way? | You can use df.stack() assuming 'id' is your index else set 'id' as index. Then use pd.pivot_table. df = df.stack().reset_index(name='val',level=1) df['group'] = 'g'+ df['level_1'].str.extract('col(\d+)') df['level_1'] = df['level_1'].str.replace('col(\d+)','') df.pivot_table(index=['id','group'],columns='level_1',values='val') level_1 cola colb id group 1 g1 11.0 12.0 2 g2 21.0 86.0 3 g1 22.0 87.0 4 g3 545.0 32.0 | 7 | 8 |
62,241,367 | 2020-6-7 | https://stackoverflow.com/questions/62241367/have-permissionerror-when-i-run-poetry-run-command | Environment Ubuntu 20.04 Python 3.7.3 Poetry 1.0.8 My Problem I installed poetry to manage packages, and I tried it with following simple project, . └── myproject ├── README.rst ├── myproject │ ├── __init__.py │ ├── main.py ├── myproject.egg-info │ ├── PKG-INFO │ ├── SOURCES.txt │ ├── dependency_links.txt │ ├── requires.txt │ └── top_level.txt ├── poetry.lock ├── pyproject.toml └── tests ├── __init__.py └── test_myproject.py To run main.py I tried $ poetry run myproject/main.py But I had an error, which says, [PermissionError] [Errno 13] Permission denied What I tried To run my code, I tried another way. $ poetry shell (myproject-x8XipcUE-py3.7)$ python myproject/main.py I had no error... What is the problem for my poetry run command? | My guess is that myproject/main.py isn't an executable (doesn't have the 'x') permission. That's why you can run it with python myproject/main.py, but can't run it as the main exe. To fix it, run chmod +x myproject/main.py, and then try poetry run again. Of course, you'll have to have a proper Shebang at the very top of main.py. Something like #!/usr/bin/env python (again - at the very beginning of the file). | 22 | 6 |
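The permission mechanics can be reproduced without poetry at all; this sketch writes a scratch main.py with a shebang and sets the execute bit from Python (the file name is just an example, and this assumes a POSIX system, as in the question's Ubuntu setup):

```python
import os
import stat

# a minimal script with the shebang main.py needs as its first line
with open('main.py', 'w') as f:
    f.write('#!/usr/bin/env python\nprint("hello")\n')

# equivalent of `chmod +x main.py`
mode = os.stat('main.py').st_mode
os.chmod('main.py', mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(os.access('main.py', os.X_OK))  # True -- poetry run can now execute it
```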
62,239,593 | 2020-6-7 | https://stackoverflow.com/questions/62239593/how-to-calculate-a-cumulative-product-of-a-list-using-list-comprehension | I'm trying my hand at converting the following loop to a comprehension. Problem is given an input_list = [1, 2, 3, 4, 5] return a list with each element as multiple of all elements till that index starting from left to right. Hence return list would be [1, 2, 6, 24, 120]. The normal loop I have (and it's working): l2r = list() for i in range(lst_len): if i == 0: l2r.append(lst_num[i]) else: l2r.append(lst_num[i] * l2r[i-1]) | Python 3.8+ solution: := Assignment Expressions lst = [1, 2, 3, 4, 5] curr = 1 out = [(curr:=curr*v) for v in lst] print(out) Prints: [1, 2, 6, 24, 120] Other solution (with itertools.accumulate): from itertools import accumulate out = [*accumulate(lst, lambda a, b: a*b)] print(out) | 11 | 17 |
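Both variants can be checked side by side; passing operator.mul to itertools.accumulate avoids spelling out the lambda (standard library only):

```python
from itertools import accumulate
from operator import mul

lst = [1, 2, 3, 4, 5]

# walrus form (Python 3.8+)
curr = 1
walrus = [(curr := curr * v) for v in lst]

# accumulate form
acc = list(accumulate(lst, mul))

print(walrus)  # [1, 2, 6, 24, 120]
print(acc)     # [1, 2, 6, 24, 120]
```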
62,234,909 | 2020-6-6 | https://stackoverflow.com/questions/62234909/layout-management-in-plotly-dash-app-how-to-position-html-div | I am creating a dash app, this is my code: # import required packages import dash import dash_table import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc import plotly.graph_objs as go import numpy as np import pandas as pd # define figure creation function def create_figure(): N = 100 x_min = 0 x_max = 10 y_min = 0 y_max = 10 blue = '#6683f3' orange = '#ff9266' grey = '#e0e1f5' black = '#212121' x = np.linspace(x_min, x_max, N) y = np.linspace(y_min, y_max, N) XX, YY = np.meshgrid(x, y) Z1 = XX*2*YY/10 Z2 = np.sin(XX)*YY**2 data = [go.Contour(z = Z1, name = 'Z1', contours_coloring = 'lines', line_width = 2, showscale = False, showlegend = True, colorscale = [[0, blue], [1, blue]], ncontours = 11, contours = dict(showlabels = True, labelformat = '.0f')), go.Contour(z = Z2, name = 'Z2', contours_coloring = 'lines', line_width = 2, showscale = False, showlegend = True, colorscale = [[0, orange], [1, orange]], ncontours = 21, contours = dict(showlabels = True, labelformat = '.0f'))] layout = go.Layout(plot_bgcolor = black, hovermode = 'x unified') figure = go.Figure(data = data, layout = layout) figure.update_xaxes(title_text = 'X', linewidth = 1, nticks = 11, gridwidth = 0.5, gridcolor = grey, tickformat = '.0f') figure.update_yaxes(title_text = 'Y', linewidth = 1, nticks = 11, gridwidth = 0.5, gridcolor = grey, tickformat = '.0f') figure.update_layout(legend = dict(itemsizing = 'constant')) return figure # define dataframe creation function def create_dataframe(): rows = 6 df = pd.DataFrame(columns = list('ABCDEFGHIJ')) data = np.random.random(size = (rows, len(df.columns))) for line in data: df = df.append(dict(zip(df.columns, line)), ignore_index=True) return df # call figure and dataframe functions figure = create_figure() df = create_dataframe() # page layout app = 
dash.Dash(external_stylesheets = [dbc.themes.BOOTSTRAP]) app.layout = html.Div([html.Div([dcc.RadioItems(id = 'radio-item-1', options = [dict(label = 'option A', value = 'A'), dict(label = 'option B', value = 'B'), dict(label = 'option C', value = 'C')], value = 'A'), html.P(id = 'text-1', children = 'Some quantity'), html.P(id = 'text-2', children = 'Some other quantity'), dcc.RadioItems(id = 'radio-item-2', options = [dict(label = 'option 1', value = '1'), dict(label = 'option 2', value = '2'), dict(label = 'option 3', value = '3')], value = '1')]), html.Div(dcc.Graph(id = 'main-graph', figure = figure, style = dict(height = 1000))), html.Div(dash_table.DataTable(id = 'main-table', columns = [{"name": i, "id": i} for i in df.columns], data = df.to_dict('records')))]) if __name__ == "__main__": app.run_server() The layout is basically this: while I would like to get this: How can I do? What options should I change? I tried to set style = dict(float = 'left') for the options' Div but so the graph overlaps the options and these are no longer visible. Moreover, is there a way to vertically align the radioItems' options? Version info: Python 3.7.0 dash 1.12.0 dash-bootstrap-components 0.10.1 dash-core-components 1.10.0 dash-html-components 1.0.3 dash-renderer 1.4.1 dash-table 4.7.0 plotly 4.7.0 | To stack multiple html.Div() horizontally, use style={'display': 'inline-block'}. To align the dcc.RadioItems() vertically, use labelStyle={'display': 'block'}. I included an updated version of your code below. 
# import required packages import dash import dash_table import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc import plotly.graph_objs as go import numpy as np import pandas as pd # define figure creation function def create_figure(): N = 100 x_min = 0 x_max = 10 y_min = 0 y_max = 10 blue = '#6683f3' orange = '#ff9266' grey = '#e0e1f5' black = '#212121' x = np.linspace(x_min, x_max, N) y = np.linspace(y_min, y_max, N) XX, YY = np.meshgrid(x, y) Z1 = XX*2*YY/10 Z2 = np.sin(XX)*YY**2 data = [go.Contour(z = Z1, name = 'Z1', contours_coloring = 'lines', line_width = 2, showscale = False, showlegend = True, colorscale = [[0, blue], [1, blue]], ncontours = 11, contours = dict(showlabels = True, labelformat = '.0f')), go.Contour(z = Z2, name = 'Z2', contours_coloring = 'lines', line_width = 2, showscale = False, showlegend = True, colorscale = [[0, orange], [1, orange]], ncontours = 21, contours = dict(showlabels = True, labelformat = '.0f'))] layout = go.Layout(plot_bgcolor = black, hovermode = 'x unified') figure = go.Figure(data = data, layout = layout) figure.update_xaxes(title_text = 'X', linewidth = 1, nticks = 11, gridwidth = 0.5, gridcolor = grey, tickformat = '.0f') figure.update_yaxes(title_text = 'Y', linewidth = 1, nticks = 11, gridwidth = 0.5, gridcolor = grey, tickformat = '.0f') figure.update_layout(legend = dict(itemsizing = 'constant'), margin = dict(t=0, b=0, l=0, r=0)) return figure # define dataframe creation function def create_dataframe(): rows = 6 df = pd.DataFrame(columns = list('ABCDEFGHIJ')) data = np.random.random(size = (rows, len(df.columns))) for line in data: df = df.append(dict(zip(df.columns, line)), ignore_index=True) return df # call figure and dataframe functions figure = create_figure() df = create_dataframe() # page layout app = dash.Dash(external_stylesheets = [dbc.themes.BOOTSTRAP]) app.layout = html.Div([ # first row html.Div(children=[ # first column of first row 
html.Div(children=[ dcc.RadioItems(id = 'radio-item-1', options = [dict(label = 'option A', value = 'A'), dict(label = 'option B', value = 'B'), dict(label = 'option C', value = 'C')], value = 'A', labelStyle={'display': 'block'}), html.P(id = 'text-1', children = 'First paragraph'), ], style={'display': 'inline-block', 'vertical-align': 'top', 'margin-left': '3vw', 'margin-top': '3vw'}), # second column of first row html.Div(children=[ dcc.RadioItems(id = 'radio-item-2', options = [dict(label = 'option 1', value = '1'), dict(label = 'option 2', value = '2'), dict(label = 'option 3', value = '3')], value = '1', labelStyle={'display': 'block'}), html.P(id='text-2', children='Second paragraph'), ], style={'display': 'inline-block', 'vertical-align': 'top', 'margin-left': '3vw', 'margin-top': '3vw'}), # third column of first row html.Div(children=[ html.Div(dcc.Graph(id = 'main-graph', figure = figure)), ], style={'display': 'inline-block', 'vertical-align': 'top', 'margin-left': '3vw', 'margin-top': '3vw'}), ], className='row'), # second row html.Div(children=[ html.Div(dash_table.DataTable(id = 'main-table', columns = [{"name": i, "id": i} for i in df.columns], data = df.to_dict('records'), style_table={'margin-left': '3vw', 'margin-top': '3vw'})), ], className='row'), ]) if __name__ == "__main__": app.run_server(debug=True) | 11 | 18 |
62,220,294 | 2020-6-5 | https://stackoverflow.com/questions/62220294/hoverinformation-for-shapes-in-plotly | I know there is the hovertemplate/hover_text/ option for traces (marker/line) but I cannot find such a thing for shapes. Is there a way to have a hover text pop up when moving over a shape? Maybe a workaround? Example: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Scatter( x=[1.5, 3], y=[2.5, 2.5], text=["Rectangle reference to the plot", "Rectangle reference to the axes"], mode="markers", )) fig.add_shape( # Rectangle reference to the plot type="rect", xref="paper", yref="paper", x0=0.25, y0=0, x1=0.5, y1=0.5, line=dict( color="LightSeaGreen", width=3, ), fillcolor="PaleTurquoise", ) When I hover over the two points, I get a hover-template with information. How can I get something similar for the shape? | I thought of a solution I am happy with. Simply draw a shape. You won't be able to see a hover text. However, if you add a trace with a fill on top of the shape, then set the trace to opacity=0 you will see the hover text from the trace pop up when moving over the shape. Again, thanks for your responses! import plotly.graph_objects as go # Draw shape (you won't be able to add a hover text for it) fig = go.Figure() fig.add_shape( type="rect", x0=0, y0=0, x1=4, y1=3, fillcolor='LightSkyBlue', line_color='Blue', name='Shape 1' ) # Adding a trace with a fill, setting opacity to 0 fig.add_trace( go.Scatter( x=[0,0,4,4,0], y=[0,3,3,0,0], fill="toself", mode='lines', name='', text='Custom text on top of shape', opacity=0 ) ) fig.show() | 10 | 3 |
62,230,507 | 2020-6-6 | https://stackoverflow.com/questions/62230507/multiple-columns-for-hue-parameter-in-seaborn-violinplot | I am working with tips data set, and here is the head of data set. total_bill tip sex smoker day time size 0 16.99 1.01 Female No Sun Dinner 2 1 10.34 1.66 Male No Sun Dinner 3 2 21.01 3.50 Male No Sun Dinner 3 3 23.68 3.31 Male No Sun Dinner 2 4 24.59 3.61 Female No Sun Dinner 4 My code is sns.violinplot(x='day',y='total_bill',data=tips, hue=['sex','smoker']) I want a violinplot of day with total_bill in which hue is sex and smoker, but I can not find any option to set multiple values of hue. Is there any way? | You could use a seaborn.catplot in order to use 'sex' as hue and 'smoker' as column for generating two side by side violinplot. Check this code: import seaborn as sns import matplotlib.pyplot as plt sns.set() tips = sns.load_dataset("tips") sns.catplot(x = "day", y = "total_bill", hue = "sex", col = "smoker", data = tips, kind = "violin", split = True) plt.show() which gives me this plot: | 20 | 7 |
62,230,148 | 2020-6-6 | https://stackoverflow.com/questions/62230148/python-telegram-bot-markdown | I am working on a Telegram Bot in Python but I struggle to use markdown correctly and I can not find any proper resources about the telegram markdown implementation. It gets even more complicated because of two different markdown "versions" (Markdown and Markdown_V2). And none of them is matching the behavior of the normal chat field (typing by hand). Test String: *Bold*, _italic_, *_bold and italic_*, **double bold**, __double italic__, __**double bold and double italic**__ parse_mode="Markdown": Bold, italic, _bold and italic_, double bold, double italic, double bold and double italic parse_mode="Markdown V2": Bold, italic, bold and italic, double bold, double italic, double bold and double italic in Chat: *Bold*, _italic_, *bold and italic*, double bold, double italic, **double bold and double italic** - How do I add bold and italic, and are there any other commands like underline and more? I need some explanation. Thanks. | Bots need a different markdown syntax. To send bold and italic text use: update.message.reply_text('*_bold and italic_*', parse_mode='MarkdownV2') from the official telegram website https://core.telegram.org/bots/api#markdownv2-style *bold \*text* _italic \*text_ __underline__ ~strikethrough~ *bold _italic bold ~italic bold strikethrough~ __underline italic bold___ bold* [inline URL](http://www.example.com/) [inline mention of a user](tg://user?id=123456789) `inline fixed-width code` ``` pre-formatted fixed-width code block ``` ```python pre-formatted fixed-width code block written in the Python programming language ``` I recommend to use only MarkdownV2 syntax, since Markdown is less powerful | 11 | 18 |
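One practical consequence of MarkdownV2 is that many ordinary characters must be backslash-escaped in plain text. A hedged helper sketch (the character set comes from the MarkdownV2 rules quoted above; the function name is my own, not part of any library):

```python
import re

# Characters Telegram's MarkdownV2 requires escaping outside entities:
# _ * [ ] ( ) ~ ` > # + - = | { } . !
_MD_V2_SPECIALS = r"([_*\[\]()~`>#+\-=|{}.!])"

def escape_markdown_v2(text: str) -> str:
    # Prefix every special character with a backslash
    return re.sub(_MD_V2_SPECIALS, r"\\\1", text)

print(escape_markdown_v2("Price: 3.50 (USD)"))  # Price: 3\.50 \(USD\)
```

Run user-supplied text through a helper like this before concatenating it into a MarkdownV2 message, otherwise the Bot API rejects the message with a parse error.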
62,220,246 | 2020-6-5 | https://stackoverflow.com/questions/62220246/how-to-create-a-facetgrid-stacked-barplot-using-seaborn | I am trying to plot a facet_grid with stacked bar charts inside. I would like to use Seaborn. Its barplot function does not include a stacked argument. I tried to use FacetGrid.map with a custom callable function. import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt def custom_stacked_barplot(col_day, col_time, col_total_bill, **kwargs): dict_df={} dict_df['day']=col_day dict_df['time']=col_time dict_df['total_bill']=col_total_bill df_data_graph=pd.DataFrame(dict_df) df = pd.crosstab(index=df_data_graph['time'], columns=tips['day'], values=tips['total_bill'], aggfunc=sum) df.plot.bar(stacked=True) tips=sns.load_dataset("tips") g = sns.FacetGrid(tips, col='size', row='smoker') g = g.map(custom_stacked_barplot, "day", 'time', 'total_bill') However I get an empty canvas and stacked bar charts separately. Empty canvas: Graph1 apart: Graph2:. How can I fix this issue? Thanks for the help! | The simplest code to achive that result is this: import seaborn as sns import matplotlib.pyplot as plt sns.set() tips=sns.load_dataset("tips") g = sns.FacetGrid(tips, col = 'size', row = 'smoker', hue = 'day') g = (g.map(sns.barplot, 'time', 'total_bill', ci = None).add_legend()) plt.show() which gives this result: | 8 | 8 |
62,221,654 | 2020-6-5 | https://stackoverflow.com/questions/62221654/how-to-get-coverage-reporting-when-testing-a-pytest-plugin | Context I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester as suggested in the pytest documentation when testing plugins. The testing and tox build, etc. works great. However, the coverage is reporting false misses with things like class definitions, imports, etc. This is because the code itself is being imported as part of pytest instantiation, and isn't getting "covered" until the testing actually starts. I've read pytest docs, pytest-cov and coverage docs, and tox docs, and tried several configurations, but to no avail. I've exhausted my pool of google keyword combinations that might lead me to a good solution. Repository layout pkg_root/ .tox/ py3/ lib/ python3.7/ site-pacakges/ plugin_module/ supporting_module.py plugin.py some_data.dat plugin_module/ supporting_module.py plugin.py some_data.dat tests/ conftest.py test_my_plugin.py tox.ini setup.py Some relevant snippets with commentary: tox.ini [pytest] addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html testpaths = tests This configuration gives me an error that no data was collected; no htmlcov is created in this case. If I just use --cov, I get (expected) very noisy coverage, which shows the functional hits and misses, but with the false misses reported above for imports, class definitions, etc. conftest.py pytest_plugins = ['pytester'] # Entire contents of file! test_my_plugin.py def test_a_thing(testdir): testdir.makepyfile( """ def test_that_fixture(my_fixture): assert my_fixture.foo == 'bar' """ ) result = testdir.runpytest() result.assert_outcomes(passed=1) How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests? 
| Instead of using the pytest-cov plugin, use coverage to run pytest: coverage run -m pytest .... That way, coverage will be started before pytest. | 63 | 94 |
62,222,436 | 2020-6-5 | https://stackoverflow.com/questions/62222436/importerror-cannot-import-name-feature-from-setuptools | I want to install the requirements of this https://github.com/sraashis/deepdyn project, but when I run: pip install -r deepdyn/assets/requirements.txt I receive the following error in the terminal: ERROR: Command errored out with exit status 1: command: /home/masoud/anaconda3/envs/tfgpu/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xzvdvhgj/MarkupSafe/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xzvdvhgj/MarkupSafe/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-v7ebj7ab cwd: /tmp/pip-install-xzvdvhgj/MarkupSafe/ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-xzvdvhgj/MarkupSafe/setup.py", line 6, in <module> from setuptools import setup, Extension, Feature ImportError: cannot import name 'Feature' from 'setuptools' (/home/masoud/anaconda3/envs/tfgpu/lib/python3.7/site-packages/setuptools/__init__.py) ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. | The bug was fixed in version 1.1 but deepdyn requires version 1.0. This is probably a bug in deepdyn and should be reported. Or may be deepdyn requires some older version of setuptools. Again, ask the authors about it. | 14 | 5 |
62,223,424 | 2020-6-5 | https://stackoverflow.com/questions/62223424/simplequeue-vs-queue-in-python-what-is-the-advantage-of-using-simplequeue | The queue — A synchronized queue class simply states that there are fewer functions allowed with SimpleQueue. I need very basic queue functionality for a multithreading application, would it help in any way to use SimpleQueue? | queue.SimpleQueue handles more than threadsafe concurrency. It handles reentrancy - it is safe to call queue.SimpleQueue.put in precarious situations where it might be interrupting other work in the same thread. For example, you can safely call it from __del__ methods, weakref callbacks, or signal module signal handlers. If you need that, use queue.SimpleQueue. | 33 | 19 |
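A quick sketch of the practical API difference: SimpleQueue keeps put/get/empty/qsize but deliberately drops Queue's bounded capacity and join()/task_done() tracking:

```python
import queue

sq = queue.SimpleQueue()
sq.put("job")
print(sq.get())    # job
print(sq.empty())  # True

# Queue adds maxsize and join()/task_done(); SimpleQueue omits them
print(hasattr(sq, "task_done"))             # False
print(hasattr(queue.Queue(), "task_done"))  # True
```

If your threads never need to wait for all work to finish (queue.join()) and you don't need a size bound, SimpleQueue is sufficient.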
62,220,855 | 2020-6-5 | https://stackoverflow.com/questions/62220855/tensorflow-removing-jfif | I am quite new to tensorflow, and I would like to know clearly what the below command does. import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import os num_skipped = 0 for folder_name in ("Cat", "Dog"): print("folder_name:",folder_name) #folder_name: Cat folder_path = os.path.join("Dataset/PetImages", folder_name) print("folder_path:",folder_path) #folder_path: Dataset/PetImages/Cat for fname in os.listdir(folder_path): print("fname:",fname) #fname: 5961.jpg fpath = os.path.join(folder_path, fname) print("fpath:", fpath) #fpath: Dataset/PetImages/Cat/10591.jpg try: fobj = open(fpath, "rb") is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10) finally: fobj.close() if not is_jfif: num_skipped += 1 # Delete corrupted image os.remove(fpath) print("Deleted %d images" % num_skipped) Keras website comment on the above code: When working with lots of real-world image data, corrupted images are a common occurrence. Let's filter out badly-encoded images that do not feature the string "JFIF" in their header. I want to know specifically what the below command does and how it does it: is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10) I checked the API but wasn't clearly able to understand it. A better explanation will be of much help. Thanks | Wikipedia explains that JPG files contain the string "JFIF" at the beginning of the file, encoded as bytes: So: tf.compat.as_bytes("JFIF") converts the string "JFIF" to bytes. You could also just use b"JFIF", though maybe the TensorFlow implementation has some optimization I don't know about. fobj.peek(10) theoretically returns the first 10 bytes of the file, but in practice it often returns the entire file. is_jfif then just checks if the converted "JFIF" string is in the result of fobj.peek. | 7 | 10 |
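The check itself needs no TensorFlow; this sketch applies the same substring test to in-memory byte strings (the two headers below are illustrative fragments, not complete image files):

```python
def looks_like_jfif(header: bytes) -> bool:
    # Same test as the TF snippet: is b"JFIF" within the first 10 bytes?
    return b"JFIF" in header[:10]

# JPEG/JFIF files start with the SOI marker, an APP0 marker, then "JFIF\x00"
jpeg_header = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01"
png_header = b"\x89PNG\r\n\x1a\n"

print(looks_like_jfif(jpeg_header))  # True
print(looks_like_jfif(png_header))   # False
```

Note that slicing to 10 bytes makes the intent explicit, whereas fobj.peek(10) may return more than 10 bytes (as the answer points out), which would also match "JFIF" appearing later in the buffer.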
62,220,197 | 2020-6-5 | https://stackoverflow.com/questions/62220197/how-to-catch-an-exception-message-in-python | I want something of the form try: # code except *, error_message: print(error_message) i.e I want to have a generic except block that catches all types of exceptions and prints an error message. Eg. "ZeroDivisionError: division by zero". Is it possible in python? If I do the following I can catch all exceptions, but I won't get the error message. try: # code except: print("Exception occurred") | Try this: except Exception as e: print(str(e)) | 8 | 14 |
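To reproduce the exact "ZeroDivisionError: division by zero" form from the question, prepend the exception class name, since str(e) alone is only the message:

```python
try:
    1 / 0
except Exception as e:
    # str(e) is just the message; add type(e).__name__ for the full form
    msg = f"{type(e).__name__}: {e}"
    print(msg)  # ZeroDivisionError: division by zero
```

A bare `except:` also catches SystemExit and KeyboardInterrupt, so `except Exception as e:` is usually the better generic handler.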
62,215,910 | 2020-6-5 | https://stackoverflow.com/questions/62215910/how-to-get-the-centroids-in-dbscan-sklearn | I am using DBSCAN for clustering. However, now I want to pick a point from each cluster that represents it, but I realized that DBSCAN does not have centroids as in kmeans. However, I observed that DBSCAN has something called core points. I am thinking if it is possible to use these core points or any other alternative to obtain a representative point from each cluster. I have mentioned below the code that I have used. import numpy as np from math import pi from sklearn.cluster import DBSCAN #points containing time value in minutes points = [100, 200, 600, 659, 700] def convert_to_radian(x): return((x / (24 * 60)) * 2 * pi) rad_function = np.vectorize(convert_to_radian) points_rad = rad_function(points) #generate distance matrix from each point dist = points_rad[None,:] - points_rad[:, None] #Assign shortest distances from each point dist[((dist > pi) & (dist <= (2*pi)))] = dist[((dist > pi) & (dist <= (2*pi)))] -(2*pi) dist[((dist > (-2*pi)) & (dist <= (-1*pi)))] = dist[((dist > (-2*pi)) & (dist <= (-1*pi)))] + (2*pi) dist = abs(dist) #check dist print(dist) #using default values, set metric to 'precomputed' db = DBSCAN(eps=((100 / (24*60)) * 2 * pi ), min_samples = 2, metric='precomputed') #check db print(db) db.fit(dist) #get labels labels = db.labels_ #get number of clusters no_clusters = len(set(labels)) - (1 if -1 in labels else 0) print('No of clusters:', no_clusters) print('Cluster 0 : ', np.nonzero(labels == 0)[0]) print('Cluster 1 : ', np.nonzero(labels == 1)[0]) print(db.core_sample_indices_) I am happy to provide more details if needed. | Why don't you estimate the centroids of the resulted estimated clusters? 
points_of_cluster_0 = dist[labels==0,:] centroid_of_cluster_0 = np.mean(points_of_cluster_0, axis=0) print(centroid_of_cluster_0) points_of_cluster_1 = dist[labels==1,:] centroid_of_cluster_1 = np.mean(points_of_cluster_1, axis=0) print(centroid_of_cluster_1) | 11 | 7 |
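The centroid computation in the answer is just a column-wise mean over the points of one cluster; here is the same idea in plain Python, without NumPy (the sample points are made up for illustration):

```python
def centroid(points):
    # Column-wise mean over a list of equal-length coordinate tuples,
    # equivalent to np.mean(points, axis=0)
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

cluster_0 = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
print(centroid(cluster_0))  # (3.0, 4.0)
```

With DBSCAN output, you would select the rows where labels == k, then pass them to a mean like this; unlike k-means, the centroid is derived after the fact and need not coincide with any core point.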
62,213,171 | 2020-6-5 | https://stackoverflow.com/questions/62213171/why-can-i-not-assign-cls-hash-id | I would have hoped this works (in Python 3.6), class A: __hash__ = id A().__hash__() but I get TypeError: id() takes exactly one argument (0 given) Surprisingly, def my_id(self): return id(self) class A: __hash__ = my_id A().__hash__() works as hoped. | id is of type builtin_function_or_method (it's a function that's built into the runtime - ), which for practical reasons (optimisation mainly) doesn't implement the descriptor protocol as a python function would, so A().__hash__ resolves to the id function itself, not to a method object wrapping the function. You'll observe the same behaviour with most builtin functions FWIW: >>> type(len) <class 'builtin_function_or_method'> >>> len.__get__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'builtin_function_or_method' object has no attribute '__get__' >>> >>> type(all) <class 'builtin_function_or_method'> >>> all.__get__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'builtin_function_or_method' object has no attribute '__get__' >>> type(abs) <class 'builtin_function_or_method'> >>> abs.__get__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'builtin_function_or_method' object has no attribute '__get__' >>> type(isinstance) <class 'builtin_function_or_method'> >>> isinstance.__get__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'builtin_function_or_method' object has no attribute '__get__' etc... | 7 | 9 |
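The difference is easy to verify directly: Python-level functions implement __get__ (the descriptor protocol), so class-attribute lookup binds them as methods, while CPython builtins like id do not:

```python
def my_id(self):
    return id(self)

class A:
    __hash__ = my_id   # plain function: bound to the instance via __get__

class B:
    __hash__ = id      # builtin: looked up as-is, never bound

a = A()
print(a.__hash__() == id(a))      # True
print(hasattr(my_id, "__get__"))  # True  (functions are descriptors)
print(hasattr(id, "__get__"))     # False (builtins are not)

try:
    B().__hash__()                # id() is called with zero arguments
except TypeError as e:
    print("TypeError:", e)
```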
62,212,417 | 2020-6-5 | https://stackoverflow.com/questions/62212417/python-string-format-with-negative-sign-for-negative-number-but-space-for-posit | Is there a format code to format -2.34 as '-2.3', but +2.34 as ' 2.3' (notice the leading space)? Basically show the negative sign but leave a space for positive sign. | Use " " (a space) to insert a space before positive numbers and a minus sign before negative numbers: txt = "The temperature is between {: } and {: } degrees celsius." print(txt.format(-3, 7)) answer : The temperature is between -3 and 7 degrees celsius. | 11 | 8 |
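A minimal sketch combining the ' ' (space) sign option with one-decimal precision, matching the question's -2.3 / " 2.3" example:

```python
for value in (-2.34, 2.34):
    # The space flag reserves a leading space where a '+' would go
    print(f"{value: .1f}")
# -2.3
#  2.3
```

The same spec works with str.format ("{: .1f}".format(x)) and old-style formatting ("% .1f" % x).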
62,183,202 | 2020-6-3 | https://stackoverflow.com/questions/62183202/cannot-read-properly-data-of-null-dash | When I run the below code it loads the web, but then it leaves an error message, it is because in some section there is no data, can a conditional help? I uploaded the code but I do not know where to act, i am new on Dash and my knowledge in javascript is limited. My main file import dash from dash.dependencies import Output, Input import dash_core_components as dcc import dash_html_components as html import plotly import random import plotly.graph_objs as go from collections import deque import sqlite3 import pandas as pd #popular topics: google, olympics, trump, gun, usa app = dash.Dash(__name__) app.layout = html.Div( [ html.H2('Live Twitter Sentiment'), dcc.Input(id='sentiment_term', value='trump', type='text'), dcc.Graph(id='live-graph', animate=False), dcc.Interval( id='graph-update', interval=1*1000 ), ] ) @app.callback(Output('live-graph', 'figure'), [Input(component_id='sentiment_term', component_property='value')]) def update_graph_scatter(sentiment_term): try: conn = sqlite3.connect('twitter.db') c = conn.cursor() df = pd.read_sql("SELECT * FROM sentiment WHERE tweet LIKE ? ORDER BY unix DESC LIMIT 1000", conn, params=('%' + sentiment_term + '%',)) df.sort_values('unix', inplace=True) df['sentiment_smoothed'] = df['sentiment'].rolling(int(len(df)/5)).mean() df.dropna(inplace=True) X = df.unix.values[-100:] Y = df.sentiment_smoothed.values[-100:] data = plotly.graph_objs.Scatter( x=X, y=Y, name='Scatter', mode= 'lines+markers' ) return {'data': [data],'layout' : go.Layout(xaxis=dict(range=[min(X),max(X)]), yaxis=dict(range=[min(Y),max(Y)]), title='Term: {}'.format(sentiment_term))} except Exception as e: with open('errors.txt','a') as f: f.write(str(e)) f.write('\n') if __name__ == '__main__': app.run_server(debug=True) | Sometimes Dash can struggle if the prop being updated by a callback hasn't been initialized. 
In this case, the figure prop of the dcc.Graph was never declared. Setting an explicit empty value, such as figure={}, is often enough to resolve this sort of error. | 7 | 13 |
62,201,325 | 2020-6-4 | https://stackoverflow.com/questions/62201325/how-to-count-the-occurrence-of-values-in-one-pandas-dataframe-if-the-values-to-c | I have a (really big) pandas Dataframe df: country age gender Brazil 10 F USA 20 F Brazil 10 F USA 20 M Brazil 10 M USA 20 M I have another pandas Dataframe freq: age gender counting 10 F 0 10 M 0 20 F 0 I wanna count the pair of values in freq when they occur in df: age gender counting 10 F 2 10 M 1 20 F 1 I'm using this code, but it takes too long: for row in df.itertuples(index=False): freq.loc[np.all(freq['age','gender']==row[2:3],axis=1),'counting'] += 1 Is there a faster way to do that? Please note: I have to use freq because not all combinations (as for instance 20 and M) are desired some columns in df may not be used counting counts how many times both values appear in each row freq may have more than 2 values to check for (this is just an small example) | you can do it with inner merge to filter the combinations in df you don't want, then groupby age and gender and count the column counting. just reset_index to fit your expected output. freq = (df.merge(freq, on=['age', 'gender'], how='inner') .groupby(['age','gender'])['counting'].size() .reset_index()) print (freq) age gender counting 0 10 F 2 1 10 M 1 2 20 F 1 Depending on the number of combinations you don't want, it could be faster to groupby on df before doing the merge like: freq = (df.groupby(['age','gender']).size() .rename('counting').reset_index() .merge(freq[['age','gender']]) ) | 9 | 10 |
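The merge-then-groupby answer boils down to: count each row's (age, gender) pair, but only for pairs listed in freq. The same logic in plain Python with collections.Counter, using the question's data (no pandas required):

```python
from collections import Counter

# (age, gender) pairs from df, row by row
df_rows = [(10, "F"), (20, "F"), (10, "F"), (20, "M"), (10, "M"), (20, "M")]
# the combinations present in freq (the "inner join" filter)
wanted = {(10, "F"), (10, "M"), (20, "F")}

# Equivalent of the inner merge followed by groupby().size()
counting = Counter(pair for pair in df_rows if pair in wanted)
print(counting[(10, "F")], counting[(10, "M")], counting[(20, "F")])  # 2 1 1
```

This is a single pass over the rows, which is also why the pandas merge/groupby approach beats the original per-row .loc update: the work is vectorized rather than repeated once per row.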
62,188,158 | 2020-6-4 | https://stackoverflow.com/questions/62188158/clarification-for-it-should-be-possible-to-change-the-value-of-1-from-the-cpyt | See this link: https://docs.python.org/3/c-api/long.html#c.PyLong_FromLong The current implementation keeps an array of integer objects for all integers between -5 and 256; when you create an int in that range, you actually just get back a reference to the existing object. So, it should be possible to change the value of 1. I suspect the behavior of Python, in this case, is undefined. :-) What do the bold lines mean in this context? | It means that integers in Python are actual objects with a "value"-field to hold the integer's value. In Java, you could express Python's integers like so (leaving out a lot of details, of course): class PyInteger { private int value; public PyInteger(int val) { this.value = val; } public PyInteger __add__(PyInteger other) { return new PyInteger(this.value + other.value); } } In order to not have hundreds of Python integers with the same value around, it caches some integers, along the lines of: PyInteger[] cache = { new PyInteger(0), new PyInteger(1), new PyInteger(2), ... } However, what would happen if you did something like this (let's ignore that value is private for a moment): PyInteger one = cache[1]; // the PyInteger representing 1 one.value = 3; Suddenly, every time you used 1 in your program, you would actually get back 3, because the object representing 1 has an effective value of 3. Indeed, you can do that in Python! That is: it is possible to change the effective numeric value of an integer in Python. There is an answer in this reddit post.
I copy it here for completeness, though (original credits go to Veedrac): import ctypes def deref(addr, typ): return ctypes.cast(addr, ctypes.POINTER(typ)) deref(id(29), ctypes.c_int)[6] = 100 #>>> 29 #>>> 100 29 ** 0.5 #>>> 10.0 The Python specification itself does not say anything about how integers are to be stored or represented internally. It also does not say which integers should be cached, or that any should be cached at all. In short: there is nothing in the Python specifications defining what should happen if you do something silly like this ;-). We could even go slightly further... In reality, the field value above is actually an array of integers, emulating an arbitrarily large integer value (for a 64-bit integer, you just combine two 32-bit fields, etc). However, when integers start to get large and outgrow a standard 32-bit integer, caching is no longer a viable option. Even if you used a dictionary, comparing integer arrays for equality would be too much of an overhead with too little gain. You can actually check this yourself by using is to compare identities: >>> 3 * 4 is 12 True >>> 300 * 400 is 120000 False >>> 300 * 400 == 120000 True In a typical Python system, there is exactly one object representing the number 12. 120000, on the other hand, is hardly ever cached. So, above, 300 * 400 yields a new object representing 120000, which is different from the object created for the number on the right hand side. Why is this relevant? If you change the value of a small number like 1 or 29, it will affect all calculations that use that number. You will most likely seriously break your system (until you restart). But if you change the value of a large integer, the effects will be minimal. Changing the value of 12 to 13 means that 3 * 4 will yield 13. Changing the value of 120000 to 130000 has much less effect and 300 * 400 will still yield (a new) 120000 and not 130000.
As soon as you take other Python implementations into the picture, things can get even harder to predict. MicroPython, for instance, does not have objects for small numbers, but emulates them on the fly, and PyPy might well just optimise your changes away. Bottom line: the exact behaviour of numbers that you tinker with is truly undefined and depends on several factors and the exact implementation. Answer to a question in the comments: What is the significance of 6 in Veedrac's code above? All objects in Python share a common memory layout. The first field is a reference counter that tells you how many other objects are currently referring to this object. The second field is a reference to the object's class or type. Since integers do not have a fixed size, the third field is the size of the data part (you can find the relevant definitions here (general objects) and here (integers/longs)): struct longObject { native_int ref_counter; // offset: +0 / +0 PyObject* type; // offset: +1 / +2 native_int size; // offset: +2 / +4 unsigned short value[]; // offset: +3 / +6 } On a 32-bit system, native_int and PyObject* both occupy 32 bits, whereas on a 64-bit system they occupy 64 bits, naturally. So, if we access the data as 32 bits (using ctypes.c_int) on a 64-bit system, the actual value of the integer is to be found at offset +6. If you change the type to ctypes.c_long, on the other hand, the offset is +3. Because id(x) in CPython returns the memory address of x, you can actually check this yourself. Based on the above deref function, let's do: >>> deref(id(29), ctypes.c_long)[3] 29 >>> deref(id(29), ctypes.c_long)[1] 10277248 >>> id(int) # memory address of class "int" 10277248 | 29 | 43 |
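The small-int cache discussed above is easy to observe without any unsafe ctypes tricks. This is CPython-specific behaviour (other implementations may differ); constructing the values at runtime with int() avoids the per-code-object constant de-duplication that would otherwise skew the identity test:

```python
# int() at runtime defeats compile-time constant folding
a, b = int("256"), int("256")
print(a is b)   # True: 256 sits in CPython's small-int cache (-5..256)

c, d = int("257"), int("257")
print(c is d)   # False: 257 is created fresh each time
print(c == d)   # True: equal values, distinct objects
```

This is exactly the 12-versus-120000 distinction from the answer, shown at the cache boundary.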
62,195,181 | 2020-6-4 | https://stackoverflow.com/questions/62195181/how-to-pass-variable-to-json-for-python | I am new to working with JSON, so sorry in advance for the stupid question. I want to write JSON with a variable in the value field. It looks like this: def print_json(user_name): opened_json = open('way/to/json/file') tmp = json.load(opened_json) res = tmp(['path_to_folder'](user_name)) print(res) def main(user_name): print_json(user_name) main('user') This is the JSON: {"path_to_folder": "/Users/" + user_name + "/my_folder/"} Expected output: /Users/user/my_folder/ Please tell me if a solution exists. Thanks in advance! EDIT: My problem is that I can't add the variable to the JSON correctly: it is marked red, as wrong syntax, when I try to concatenate. | What you want isn't directly possible in JSON, because it doesn't support "templating". One solution would be to use a templating language such as Jinja to write a JSON template, then load this file without the json library and fill in the values using Jinja, and finally use json.loads to load a dictionary from your rendered string. Your json-like file could look something like this: {"path_to_folder": "/Users/{{ user_name }}/my_folder/"} Your Python code: import json from jinja2 import Environment, FileSystemLoader env = Environment( FileSystemLoader("path/to/template") ) template = env.get_template("template_filename.json") def print_json(username): return json.loads( template.render(user_name=username) ) ... In fact, if this is a simple one-time thing, it might even be better to use Python's built-in templating. I would recommend old-style formatting in the case of JSON, because otherwise you'll have to escape a lot of braces: JSON file: {"path_to_folder": "/Users/%(user_name)s/my_folder/"} "Rendering": with open("path/to/json") as f: rendered = json.loads(f.read() % {"user_name": username}) | 7 | 6 |
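A self-contained sketch of the answer's old-style-formatting option, templating from a string instead of a file so it runs as-is (the template text mirrors the question's example):

```python
import json

# %()s placeholders avoid clashing with JSON's own braces
template = '{"path_to_folder": "/Users/%(user_name)s/my_folder/"}'

def render_path(user_name):
    # Fill in the placeholder first, then parse the result as JSON
    data = json.loads(template % {"user_name": user_name})
    return data["path_to_folder"]

print(render_path("user"))  # /Users/user/my_folder/
```

The key point from the answer: the substitution happens on the raw text before json.loads ever sees it, which is why the file itself stays valid-looking JSON with no concatenation syntax inside.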
62,186,218 | 2020-6-4 | https://stackoverflow.com/questions/62186218/python-multiprocessing-attributeerror-cant-pickle-local-object | I have a method inside a class to return a func whose parameters may change. The Interface function accepts two parameters, f and its args. I want to use mp.Pool to accelerate it. However, it returns an error. from multiprocessing import Pool # from multiprocess import Pool # from pathos.multiprocessing import ProcessingPool as Pool import pickle import dill class Temp: def __init__(self, a): self.a = a def test(self): def test1(x): return self.a + x return test1 def InterfaceFunc(f, x): mypool = Pool(4) return list(mypool.map(f, x)) if __name__ == "__main__": t1 = Temp(1).test() x = [1, 2, 3, 1, 2] res1 = list(map(t1, x)) print(res1) res2 = InterfaceFunc(t1, x) it raises the same error: AttributeError: Can't pickle local object 'Temp.test.<locals>.test1' I have tried 3 methods: What can multiprocessing and dill do together? Replace pickle in Python multiprocessing lib Python Multiprocessing Pool Map: AttributeError: Can't pickle local object Methods 1 and 2: from multiprocess import Pool from pathos.multiprocessing import ProcessingPool as Pool They raise an error: File "E:\Users\ll\Anaconda3\lib\site-packages\dill\_dill.py", line 577, in _load_type return _reverse_typemap[name] KeyError: 'ClassType' Method 3 needs the code to change; however, I can't simply move the func out of the class because I need f to be a parameter for the Interface. Do you have any suggestions? I'm an inexperienced newcomer. | Python can't pickle the closure, but all you really need is something that you can call that retains state.
The __call__ method makes a class instance callable, so use that from multiprocessing import Pool class TempTest1: def __init__(self, a): self.a = a def __call__(self, x): return self.a + x class Temp: def __init__(self, a): self.a = a def test(self): return TempTest1(self.a) def InterfaceFunc(f, x): mypool = Pool(4) return list(mypool.map(f, x)) if __name__ == "__main__": t1 = Temp(1).test() x = [1, 2, 3, 1, 2] res1 = list(map(t1, x)) print(res1) res2 = InterfaceFunc(t1, x) print(res2) | 8 | 10 |
62,183,821 | 2020-6-3 | https://stackoverflow.com/questions/62183821/what-is-the-unit-in-python-lru-cache | According to the documentation the default value for lru_cache from functools is 128. But no unit is defined. Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments. Since a dictionary is used to cache results, the positional and keyword arguments to the function must be hashable. Distinct argument patterns may be considered to be distinct calls with separate cache entries. For example, f(a=1, b=2) and f(b=2, a=1) differ in their keyword argument order and may have two separate cache entries. If user_function is specified, it must be a callable. This allows the lru_cache decorator to be applied directly to a user function, leaving the maxsize at its default value of 128. My question is there any unit like bits, bytes, megabytes attached to this or is this an arbitrary number that has no simple relationship with the used memory? | Short answer: It is the number of elements that are stored in the cache. We can look up the source code of the lru_cache [GitHub]. The code is rather complicated, but in a nutshell, line 619 already gives a clue: full = (cache_len() >= maxsize) This specifies that the cache is full given that the cache_len() is greater than or equal to the maxsize. The cache_len is a function that returns the number of records in the dictionary, as we can see in the source code: cache = {} hits = misses = 0 full = False cache_get = cache.get # bound method to lookup a key or return None cache_len = cache.__len__ # get cache size without calling len() The logic also each time branches when it adds a new record, in case the cache is full, it will "kick out" one of the elements: if key in cache: # Getting here means that this same key was added to the # cache while the lock was released. 
Since the link # update is already done, we need only return the # computed result and update the count of misses. pass elif full: # Use the old root to store the new key and result. oldroot = root oldroot[KEY] = key oldroot[RESULT] = result # Empty the oldest link and make it the new root. # Keep a reference to the old key and old result to # prevent their ref counts from going to zero during the # update. That will prevent potentially arbitrary object # clean-up code (i.e. __del__) from running while we're # still adjusting the links. root = oldroot[NEXT] oldkey = root[KEY] oldresult = root[RESULT] root[KEY] = root[RESULT] = None # Now update the cache dictionary. del cache[oldkey] # Save the potentially reentrant cache[key] assignment # for last, after the root and links have been put in # a consistent state. cache[key] = oldroot else: # Put result in a new link at the front of the queue. last = root[PREV] link = [last, root, key, result] last[NEXT] = root[PREV] = cache[key] = link # Use the cache_len bound method instead of the len() function # which could potentially be wrapped in an lru_cache itself. full = (cache_len() >= maxsize) | 16 | 15 |
62,169,315 | 2020-6-3 | https://stackoverflow.com/questions/62169315/runtimeerror-unable-to-create-link-name-already-exists-keras | When I save my model I get the following error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-40-853303da8647> in <module>() 7 8 ----> 9 model.save(outdir+'model.h5') 10 11 5 frames /usr/local/lib/python3.6/dist-packages/h5py/_hl/group.py in __setitem__(self, name, obj) 371 372 if isinstance(obj, HLObject): --> 373 h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl) 374 375 elif isinstance(obj, SoftLink): h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/h5o.pyx in h5py.h5o.link() RuntimeError: Unable to create link (name already exists) This does not happen when I use built-in layers to build my model or others user defined layers. This error arises only when I use this particular user defined layer: class MergeTwo(keras.layers.Layer): def __init__(self, nout, **kwargs): super(MergeTwo, self).__init__(**kwargs) self.nout = nout self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros', trainable=True) self.beta = self.add_weight(shape=(self.nout,), initializer='zeros', trainable=True) def call(self, inputs): A, B = inputs result = keras.layers.add([self.alpha*A ,self.beta*B]) result = keras.activations.tanh(result) return result def get_config(self): config = super(MergeTwo, self).get_config() config['nout'] = self.nout return config I read the Docs but nothing worked, I cannot figure out why. 
I am using Google Colab and Tensorflow version 2.2.0 | I think the problem is that both of your weight variables internally have the same name, which should not happen. You can give them names with the name parameter to add_weight: self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros', trainable=True, name="alpha") self.beta = self.add_weight(shape=(self.nout,), initializer='zeros', trainable=True, name="beta") This should work around the problem. | 8 | 12 |
62,172,931 | 2020-6-3 | https://stackoverflow.com/questions/62172931/cannot-unpack-non-iterable-int-object-when-using-python-dicitonary | I have the following command below: import pandas as pd import numpy as np from scipy import stats np.random.seed(12345) standarderrors1992 = stats.sem(np.random.normal(32000,200000,3650)) standarderrors1993 = stats.sem(np.random.normal(43000,100000,3650)) standarderrors1994 = stats.sem(np.random.normal(43500,140000,3650)) standarderrors1995 = stats.sem(np.random.normal(48000,70000,3650)) mean1992 = np.random.normal(32000,200000,3650).mean() mean1993 = np.random.normal(43000,100000,3650).mean() mean1994 = np.random.normal(43500,140000,3650).mean() mean1995 = np.random.normal(48000,70000,3650).mean() Here, I have found both the mean and standard error for a set of randomly chosen values. limit = 3000 dict = {mean1992:standarderrors1992,mean1993:standarderrors1993,mean1994:standarderrors1994,mean1995:standarderrors1995} for key,value in dict: if limit > (key+(1.96*value)): colour = 1 elif limit < (key+(1.96*value)): colour = 0 elif (limit !> (key+(1.96*value))) && (limit !< (key-(1.96*value))): colour = ((key+(1.96*value))-limit)/((key+(1.96*value))-(key-(1.96*value))) Here, I am trying to put the values corresponding to the means and standard errors into a dictionary so that I can loop through both of them. Ideally, I want to assign a particular value to the variable 'colour' depending on the values for the mean and standard error of a particular year. i.e. mean and SE for 1992 However, I keep getting the error: TypeError: cannot unpack non-iterable int object Could anyone let me know where I'm going wrong? | You need to iterate over dict.items() for this to work. for key,value in dict.items(): # do stuff here I would advise against naming your variables dict, which shadows the built-in dict function, though :)
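For contrast, a minimal reproduction of the error and of the fix (the numeric keys below just stand in for the means in the question):

```python
d = {32000: 5.0, 43000: 3.0}  # scalar-keyed dict, like the one in the question

# Iterating a dict directly yields only its keys:
assert list(d) == [32000, 43000]

# So `for key, value in d:` tries to unpack each scalar key and fails:
err = None
try:
    for key, value in d:
        pass
except TypeError as exc:
    err = exc
print(err)  # → cannot unpack non-iterable int object

# .items() yields (key, value) pairs, which unpack cleanly:
pairs = [(key, value) for key, value in d.items()]
```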
62,170,394 | 2020-6-3 | https://stackoverflow.com/questions/62170394/how-to-get-maximum-and-minimum-of-a-list-in-column | Given that, I have a dataframe as below: import pandas as pd import numpy as np dict = { "A": [[1,2,3,4],[3],[2,8,4],[5,8]] } dt = pd.DataFrame(dict) I wish to have the Maximum and minimum of each row in column B. My favorite output is: A B 0 [1, 2, 3, 4] [1,4] 1 [3] [3,3] 2 [2, 8, 4] [2,8] 3 [5, 8] [5,8] What I already tried is the below code which does not work: dt["B"] =[np.min(dt.A), np.max(dt.A)] | Like this: In [1592]: dt['B'] = dt.A.apply(lambda x: [min(x), max(x)]) In [1593]: dt Out[1593]: A B 0 [1, 2, 3, 4] [1, 4] 1 [3] [3, 3] 2 [2, 8, 4] [2, 8] 3 [5, 8] [5, 8] As suggested by @Ch3steR, using map since it's faster: dt['B'] = dt.A.map(lambda x: [min(x), max(x)]) | 10 | 12 |
62,166,719 | 2020-6-3 | https://stackoverflow.com/questions/62166719/padding-same-conversion-to-pytorch-padding | I'm trying to convert the following Keras model code to pytorch, but am having problems dealing with padding='same'. model = Sequential() model.add(Conv2D(64, (3, 3), input_shape=img_size)) model.add(BatchNormalization(axis=1)) model.add(Activation('relu')) model.add(Dropout(0.3)) model.add(Conv2D(64, (3, 3), padding='same')) model.add(BatchNormalization(axis=1)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same')) Which produces the following summary: Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 30, 30, 64) 1792 _________________________________________________________________ batch_normalization_1 (Batch (None, 30, 30, 64) 120 _________________________________________________________________ activation_1 (Activation) (None, 30, 30, 64) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 30, 30, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 30, 30, 64) 36928 _________________________________________________________________ batch_normalization_2 (Batch (None, 30, 30, 64) 120 _________________________________________________________________ activation_2 (Activation) (None, 30, 30, 64) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 15, 15, 64) 0 ================================================================= Total params: 38,960 Trainable params: 38,840 Non-trainable params: 120 Right now, I would write: self.features = nn.Sequential( nn.Conv2d(3, 64, kernel_size=3, bias=False), nn.BatchNorm2d(64), nn.ReLU(inplace=True), nn.Dropout(0.3), nn.Conv2d(64, 64, kernel_size=3, padding = ? 
bias=False), nn.BatchNorm2d(64), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2, padding = ?), ) Where padding should have numerical value. I was wondering if there is an easier way to calculate this since we're using padding='same'. Also, the next line of the Keras model looks like: model.add(Conv2D(128, (3, 3), padding='same')) So I really need to brush up on how to calculate padding, especially after stride too. From a rough eye only, is the padding 2? | W:input volume size F:kernel size S:stride P:amount of padding size of output volume = (W-F+2P)/S+1 e.g. input:7x7, kernel:3x3, stride:1, pad:0 output size = (7-3+2*0)/1+1 = 5 =>5x5 | 8 | 6 |
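The formula above is easy to wrap in a helper to sanity-check 'same' padding choices; for the 3x3 layers in this question it suggests padding=1, not 2 (the helper names are made up — this mirrors the arithmetic, not any Keras/PyTorch API):

```python
def conv_output_size(w, f, s, p):
    """Output edge length: (W - F + 2P) / S + 1, using floor division."""
    return (w - f + 2 * p) // s + 1

def same_padding(f):
    """Padding that preserves size at stride 1 (odd kernel sizes)."""
    return (f - 1) // 2

print(conv_output_size(7, 3, 1, 0))                 # → 5, the worked example
print(conv_output_size(30, 3, 1, same_padding(3)))  # → 30, 'same' for the 3x3 conv
print(conv_output_size(30, 3, 2, 1))                # → 15, the 3x3/stride-2 pool
```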
62,040,724 | 2020-5-27 | https://stackoverflow.com/questions/62040724/warning-the-lock-file-is-not-up-to-date-with-the-latest-changes-in-pyproject-to | When I am using a poetry command with Python 3.7, in my case: poetry export -f requirements.txt I am getting the following error: Warning: The lock file is not up to date with the latest changes in pyproject.toml. You may be getting outdated dependencies. Run update to update them. So far clear, but if I run poetry update it upgrades my dependencies, which is not what I want at this time for my project. If I run poetry lock instead, it still upgrades dependencies. How can I work around this? | UPDATE V2.0.0 (release date: 4 Jan 2025): The --no-update option no longer exists (see V2 docs). The same documentation says: By default, packages that have already been added to the lock file before will not be updated. So now just use: poetry lock or downgrade your poetry version to use the solution below: Former answer (before Poetry V2.0.0 release): This is a known issue in Poetry. The issue is resolved; use: poetry lock --no-update. Old answer: There is a current workaround with the following commands: poetry add pathlib2 poetry remove pathlib2 Where pathlib2 is any library you don't already depend on and that has no dependencies of its own, hence pathlib2. Using these commands will rewrite the lockfile hashes and resolve the file conflict without upgrading any of the other packages used in the project. | 28 | 49 |
62,044,541 | 2020-5-27 | https://stackoverflow.com/questions/62044541/change-pytest-working-directory-to-test-case-directory | I have the following pytest directory structure: system_tests/ ├── conftest ├── pytest.ini │ ├── suite_1/ │ └── test_A.py │ └── suite_2/ └── sub_suite_2a/ └── test_B.py When each test method runs, a number of third-party libraries/processes generate artifacts in the current working directory. When pytest is executed from the sub_suite folder (using CLI or IDE "play" button), the files are generated in the sub_suite folder, where I want them to be. However, when pytest is run from the system_tests folder to run all tests, all artifacts are created in the system_tests folder, which is not what I want. Is there an easy way to force pytest to always use the test class folder as the working directory so I get the same results regardless of how or where I run a test from? | EDIT: Improved Solution Using monkeypatch as suggested by @Kound removes the boilerplate code to restore the cwd. You can also enable autouse to automatically apply this fixture to all test functions. Add the following fixture to conftest.py to change the cwd for all tests: @pytest.fixture(autouse=True) def change_test_dir(request, monkeypatch): monkeypatch.chdir(request.fspath.dirname) request is a built-in pytest fixture fspath is the LocalPath to the test module being executed dirname is the directory of the test module Any processes that are kicked off by the test will use the test case folder as their working directory and copy their logs, outputs, etc. there, regardless of where the test suite was executed. 
Original Solution The following function-level fixture will change to the test case directory, run the test (yield), then change back to the calling directory to avoid side-effects, as suggested by @hi2meuk: @pytest.fixture def change_test_dir(request): os.chdir(request.fspath.dirname) yield os.chdir(request.config.invocation_params.dir) request.config.invocation_params.dir - the folder from which pytest was executed request.config.rootdir - pytest root, doesn't change based on where you run pytest. Not used here, but could be useful. | 25 | 35 |
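Outside pytest, the same change-and-restore pattern can be written as a plain context manager (a generic sketch, not part of the pytest API):

```python
import os
from contextlib import contextmanager

@contextmanager
def working_directory(path):
    """Temporarily change the working directory, restoring it on exit."""
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(prev)  # restore even if the body raises
```

Anything run inside `with working_directory(some_dir): ...` then drops its artifacts into some_dir, regardless of where the process was started.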
62,069,596 | 2020-5-28 | https://stackoverflow.com/questions/62069596/configuring-isort-and-autoflake-with-project-toml | I have a series of tools running locally and on Jenkins to check and format my Python code: autoflake isort black I use pyproject.toml file to configure black, isort with .isort.cfg and autoflake with command line parameters because I haven't found any support to configure it with a configuration file. Is there way to configure also isort and autoflake with pyproject.toml? I would like to have all tools configured with just a single file. | isort configuration can be found at https://pycqa.github.io/isort/docs/configuration/options.html In general, config params are separated by underscores. The example below will provide configuration that makes black and isort compatible, as discussed here https://copdips.com/2020/04/making-isort-compatible-with-black.html [tool.isort] multi_line_output = 3 line_length = 88 include_trailing_comma = true [tool.black] line_length = 88 | 9 | 15 |
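Worth noting: since isort 5, the black-compatibility settings above can also be collapsed into a single option (shown here as a config fragment only):

```toml
[tool.isort]
profile = "black"
```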
62,099,939 | 2020-5-30 | https://stackoverflow.com/questions/62099939/solving-linear-equations-on-the-gpu-with-numpy-and-pytorch | I am trying to solve a lot of linear equations as fast as possible. To find out the fastest way I benchmarked NumPy and PyTorch, each on the CPU and on my GeForce 1080 GPU (using Numba for NumPy). The results really confused me. This is the code I used with Python 3.8: import timeit import torch import numpy from numba import njit def solve_numpy_cpu(dim: int = 5): a = numpy.random.rand(dim, dim) b = numpy.random.rand(dim) for _ in range(1000): numpy.linalg.solve(a, b) def solve_numpy_njit_a(dim: int = 5): njit(solve_numpy_cpu, dim=dim) @njit def solve_numpy_njit_b(dim: int = 5): a = numpy.random.rand(dim, dim) b = numpy.random.rand(dim) for _ in range(1000): numpy.linalg.solve(a, b) def solve_torch_cpu(dim: int = 5): a = torch.rand(dim, dim) b = torch.rand(dim, 1) for _ in range(1000): torch.solve(b, a) def solve_torch_gpu(dim: int = 5): torch.set_default_tensor_type("torch.cuda.FloatTensor") solve_torch_cpu(dim=dim) def main(): for f in (solve_numpy_cpu, solve_torch_cpu, solve_torch_gpu, solve_numpy_njit_a, solve_numpy_njit_b): time = timeit.timeit(f, number=1) print(f"{f.__name__:<20s}: {time:f}") if __name__ == "__main__": main() And these are the results: solve_numpy_cpu : 0.007275 solve_torch_cpu : 0.012244 solve_torch_gpu : 5.239126 solve_numpy_njit_a : 0.000158 solve_numpy_njit_b : 1.273660 The slowest is CUDA accelerated PyTorch. I verified that PyTorch is using my GPU with import torch torch.cuda.is_available() torch.cuda.get_device_name(0) returning True 'GeForce GTX 1080' I can get behind that, on the CPU, PyTorch is slower than NumPy. What I cannot understand is why PyTorch on the GPU is so much slower. Not that important but actually even more confusing is that Numba's njit decorator makes performance orders of magnitude slower, until you don't use the @ decorator syntax anymore. Is it my setup? 
Occasionally I get a weird message about the windows page / swap file not being big enough. In case I've taken a completely obscure path to solving linear equations on the GPU, I'd be happy to be directed into another direction. Edit So, I focussed on Numba and changed my benchmarking a bit. As suggested by @max9111 I rewrote the functions to receive input and produce output because, in the end, that's what anyone would want to use them for. Now, I also perform a first compile run for the Numba accelerated function so the subsequent timing is fairer. Finally, I checked the performance against matrix size and plotted the results. TL/DR: Up to matrix sizes of 500x500, Numba acceleration doesn't really make a difference for numpy.linalg.solve. Here is the code: import time from typing import Tuple import numpy from matplotlib import pyplot from numba import jit @jit(nopython=True) def solve_numpy_njit(a: numpy.ndarray, b: numpy.ndarray) -> numpy.ndarray: parameters = numpy.linalg.solve(a, b) return parameters def solve_numpy(a: numpy.ndarray, b: numpy.ndarray) -> numpy.ndarray: parameters = numpy.linalg.solve(a, b) return parameters def get_data(dim: int) -> Tuple[numpy.ndarray, numpy.ndarray]: a = numpy.random.random((dim, dim)) b = numpy.random.random(dim) return a, b def main(): a, b = get_data(10) # compile numba function p = solve_numpy_njit(a, b) matrix_size = [(x + 1) * 10 for x in range(50)] non_accelerated = [] accelerated = [] results = non_accelerated, accelerated for j, each_matrix_size in enumerate(matrix_size): for m, f in enumerate((solve_numpy, solve_numpy_njit)): average_time = -1. 
for k in range(5): time_start = time.time() for i in range(100): a, b = get_data(each_matrix_size) p = f(a, b) d_t = time.time() - time_start print(f"{each_matrix_size:d} {f.__name__:<30s}: {d_t:f}") average_time = (average_time * k + d_t) / (k + 1) results[m].append(average_time) pyplot.plot(matrix_size, non_accelerated, label="not numba") pyplot.plot(matrix_size, accelerated, label="numba") pyplot.legend() pyplot.show() if __name__ == "__main__": main() And these are the results (runtime against matrix edge length): Edit 2 Seeing that Numba doesn't make much of a difference in my case, I came back to benchmarking PyTorch. And indeed, it appears to be roughly 4x faster than Numpy without even using a CUDA device. Here is the code I used: import time from typing import Tuple import numpy import torch from matplotlib import pyplot def solve_numpy(a: numpy.ndarray, b: numpy.ndarray) -> numpy.ndarray: parameters = numpy.linalg.solve(a, b) return parameters def get_data(dim: int) -> Tuple[numpy.ndarray, numpy.ndarray]: a = numpy.random.random((dim, dim)) b = numpy.random.random(dim) return a, b def get_data_torch(dim: int) -> Tuple[torch.tensor, torch.tensor]: a = torch.rand(dim, dim) b = torch.rand(dim, 1) return a, b def solve_torch(a: torch.tensor, b: torch.tensor) -> torch.tensor: parameters, _ = torch.solve(b, a) return parameters def experiment_numpy(matrix_size: int, repetitions: int = 100): for i in range(repetitions): a, b = get_data(matrix_size) p = solve_numpy(a, b) def experiment_pytorch(matrix_size: int, repetitions: int = 100): for i in range(repetitions): a, b = get_data_torch(matrix_size) p = solve_torch(a, b) def main(): matrix_size = [x for x in range(5, 505, 5)] experiments = experiment_numpy, experiment_pytorch results = tuple([] for _ in experiments) for i, each_experiment in enumerate(experiments): for j, each_matrix_size in enumerate(matrix_size): time_start = time.time() each_experiment(each_matrix_size, repetitions=100) d_t = time.time() - 
time_start print(f"{each_matrix_size:d} {each_experiment.__name__:<30s}: {d_t:f}") results[i].append(d_t) for each_experiment, each_result in zip(experiments, results): pyplot.plot(matrix_size, each_result, label=each_experiment.__name__) pyplot.legend() pyplot.show() if __name__ == "__main__": main() And here's the result (runtime against matrix edge length): So for now, I'll be sticking with torch.solve. However, the original question remains: How can I exploit my GPU to solve linear equations even faster? | Your analysis is correct on several fronts, but there are a couple of nuances that might help clarify your results and improve GPU performance: 1. CPU vs GPU Performance In general, GPU operations have an overhead cost associated with transferring data between the CPU and GPU memory. Therefore, the benefits of GPU acceleration often become apparent with larger data sets, where the benefits of parallelization outweigh this overhead. This overhead cost is likely why the GPU computations are slower for small matrices. To exploit your GPU for solving linear equations, you should focus on larger matrices. 2. Torch solve vs torch.linalg.solve The torch.solve function has been deprecated since PyTorch 1.7.0. You might get better performance and more accurate results with torch.linalg.solve. 3. Numba's njit Performance Numba's @njit decorator accelerates Python functions by generating optimized machine code using the LLVM compiler infrastructure at import time. When you use the @njit decorator, Numba compiles the function in no-Python mode which may lead to slower performance if the function cannot be fully optimized. The first run will also include a "compilation" step, which can make it appear much slower if it is included in the timing. 4. Using CUDA Memory Efficiently The line torch.set_default_tensor_type("torch.cuda.FloatTensor") in your solve_torch_gpu function sets the default tensor type to CUDA tensors. Every tensor created afterward will be a CUDA tensor. 
This might lead to unnecessary usage of GPU memory and slow down the calculations. If you create your tensors directly on GPU when you need them (using .to(device) where device is your CUDA device), it will be more efficient and might improve your computation time. Here's a revised version of your function that uses torch.linalg.solve and directly creates tensors on the GPU: def solve_torch_gpu_optimized(dim: int = 5): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") a = torch.rand(dim, dim, device=device) b = torch.rand(dim, 1, device=device) for _ in range(1000): torch.linalg.solve(a, b) | 12 | 3 |
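One more point the benchmarks above do not cover: both NumPy and PyTorch can solve a whole batch of systems in a single call, which is often a bigger win than the choice of device when the systems are small. A NumPy-only sketch (sizes are arbitrary):

```python
import numpy as np

def solve_batch(n_systems=1000, dim=5, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.random((n_systems, dim, dim))
    b = rng.random((n_systems, dim, 1))
    # np.linalg.solve broadcasts over the leading axis, so all systems
    # are dispatched in one call -- no Python-level loop overhead.
    x = np.linalg.solve(a, b)
    return a, b, x

a, b, x = solve_batch()
print(x.shape)  # → (1000, 5, 1)
```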
62,117,400 | 2020-5-31 | https://stackoverflow.com/questions/62117400/hashing-plaid-request-body-webhook | I am trying to verify a webhook sent from Plaid's API. Every webhook request is sent with a 'plaid-verification' header which is a JSON Web Token. The steps required to validate are: Extract JWT from request header signed_jwt = eyJhbGciOiJFUzI1NiIsImtpZCI6IjZjNTUxNmUxLTkyZGMtNDc5ZS1hOGZmLTVhNTE5OTJlMDAwMSIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE1OTA4ODcwMDEsInJlcXVlc3RfYm9keV9zaGEyNTYiOiJiNjNhMDdiNTQ3YjAwZjk5MjU0N2Y2YmJjOGQ5YWNjNjFhOGNjZWUxMzhiYzgyZjQ0YTZiYWEwOTY4M2E1ZDBmIn0.OOKvIihgqCj7Qrb2bmz7T3t7uK-0JyjiEqL2s1kWeJBM4MMmjaHKK8GmU_z91QolBWMzvPgs718EElY-rE3cwQ Extract JWT header value without validating the signature, which looks like this: { "alg": "ES256", "kid": "6c5516e1-92dc-479e-a8ff-5a51992e0001", "typ": "JWT" } Extract the kid and POST to /webhook_verification_key/get POST /webhook_verification_key/get { "client_id": "MY_CLIENT_ID" "secret": "MY_SECRET_ID" "key_id": "6c5516e1-92dc-479e-a8ff-5a51992e0001" } The response is: { "key": { "alg": "ES256", "created_at": 1560466143, "crv": "P-256", "expired_at": null, "kid": "6c5516e1-92dc-479e-a8ff-5a51992e0001", "kty": "EC", "use": "sig", "x": "35lvC8uz2QrWpQJ3TUH8t9o9DURMp7ydU518RKDl20k", "y": "I8BuXB2bvxelzJAd7OKhd-ZwjCst05Fx47Mb_0ugros" }, "request_id": "HvfCtrDLG1ihcp7" } Interpret key as a JSON Web Key, validate that the signature of the JSON Web Key is valid, and extract the payload (using jose python library) claims = jwt.decode(signed_jwt, key, algorithms=['ES256']) claims = { "iat": 1590887001, "request_body_sha256": "b63a07b547b00f992547f6bbc8d9acc61a8ccee138bc82f44a6baa09683a5d0f" } Compute the SHA-256 of the request body and ensure that it matches claims['request_body_sha256']: Body is in a file body.json { "error": null, "item_id": "yxQbxDjnD8hr69pKbQpbcKeVn3GL9QuyA7NV3", "new_transactions": 390, "webhook_code": "HISTORICAL_UPDATE", "webhook_type": "TRANSACTIONS" } Compute SHA-256 of body.json f = 
open('body.json') body = json.load(f) f.close() m = hashlib.sha256() m.update(json.dumps(body).encode()) body_hash = m.hexdigest() print(body_hash) body_hash = 'efbb5274864518f7eb3834125d9bcdb95fb03066d3d1bed3ebcc6163d8dc3579' The body hash in the example above does not equal the body hash received from Plaid. There's 2 possible problems here: Plaid isn't sending the correct body hash (unlikely) The hash method I'm using to hash the body is not the same as Plaid's method Is there something I'm missing here? Perhaps the request body is encoded differently on my end? I'm using Node.js and Express in production but I made a Python script to follow the method Plaid outlined here, but I'm still not getting the correct hash. I'm honestly out of ideas. | It seems to be a problem with whitespace. If you modify body.json to have 2 spaces per ‘tab’ on each new line, it will generate the right hash. | 7 | 10 |
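The pitfall here generalizes: the digest must be computed over the exact bytes received, because a parse/re-serialize round trip normalizes whitespace. A small demonstration (the payload is made up):

```python
import hashlib
import json

raw = b'{\n  "item_id": "abc",\n  "new_transactions": 390\n}'

# Hash of the bytes as received on the wire:
h_raw = hashlib.sha256(raw).hexdigest()

# Hash after a parse/re-serialize round trip -- same data, different bytes:
h_round_trip = hashlib.sha256(json.dumps(json.loads(raw)).encode()).hexdigest()

print(h_raw == h_round_trip)  # → False: whitespace changed, so the digest changed
```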
62,162,970 | 2020-6-2 | https://stackoverflow.com/questions/62162970/programmatically-determine-pip-user-install-location-scripts-directory | As explained in pip's documentation a user can install packages in his personal account using pip install --user <pkg>. How can I programmatically determine the user install location for scripts installed like this? I am talking about the directory that should be added to the PATH so that installed packages can be invoked from command line. For example, in Windows when installing pip install -U pylint --user I get the following warning because I don't have 'C:\Users\myusername\AppData\Roaming\Python\Python37\Scripts' in my PATH: ... Installing collected packages: wrapt, six, typed-ast, lazy-object-proxy, astroid, mccabe, isort, colorama, toml, pylint Running setup.py install for wrapt ... done WARNING: The script isort.exe is installed in 'C:\Users\myusername\AppData\Roaming\Python\Python37\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. WARNING: The scripts epylint.exe, pylint.exe, pyreverse.exe and symilar.exe are installed in 'C:\Users\myusername\AppData\Roaming\Python\Python37\Scripts' which is not on PATH. Is there some python code I can use to determine that location programmatically (that will work on Windows/Linux/Darwin/etc)? Something like: def get_user_install_scripts_dir(): ... # would return 'C:\Users\myusername\AppData\Roaming\Python\Python37\Scripts' # on Windows with Python 3.7.x, '/home/myusername/.local/bin' in Linux, etc return platform_scripts_dir As a fallback, is there some command I can run to obtain this location? 
Something like (but for the script location not the site's base directory): PS C:\Users\myusername\> python -m site --user-base C:\Users\myusername\AppData\Roaming\Python $ python -m site --user-base /home/myusername/.local | I believe the following should give the expected result import os import sysconfig user_scripts_path = sysconfig.get_path('scripts', f'{os.name}_user') print(user_scripts_path) Command-line: python -c 'import os,sysconfig;print(sysconfig.get_path("scripts",f"{os.name}_user"))' Since pip 21.3 released on 2021-10-11, pip itself uses sysconfig to compute paths. References: https://docs.python.org/3/library/sysconfig.html#installation-paths https://docs.python.org/3/library/os.html#os.name https://discuss.python.org/t/proper-way-to-determine-install-location-of-console-scripts/7188 pip 21.3 "Deprecations and removals" github.com/pypa/pip/pull/10358 "Switch install scheme backend to sysconfig" | 11 | 14 |
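For completeness, the other per-user install paths can be listed the same way (output differs per platform and Python version):

```python
import os
import sysconfig

scheme = f"{os.name}_user"  # "posix_user" on Linux/macOS, "nt_user" on Windows

# Every install path defined for the per-user scheme, not just "scripts":
user_paths = {name: sysconfig.get_path(name, scheme)
              for name in sysconfig.get_path_names()}
print(user_paths["scripts"])
```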
62,134,556 | 2020-6-1 | https://stackoverflow.com/questions/62134556/how-to-detect-android-os-from-a-python-script | I am running a python script in a termux environment on an Android device and I would like to be able to detect that the OS is Android. The traditional approaches don't work: >>> import platform >>> import sys >>> print(platform.system()) 'Linux' >>> print(sys.platform) 'linux' >>> print(platform.release()) '4.14.117-perf+' >>> print(platform.platform()) 'Linux-4.14.117-perf+-aarch64-with-libc' What other ootb options are available? An apparently useful option is platform.machine() which returns armv8 — this is more than just 'Linux' yet it's just the architecture, and not the OS, and it might return a false positive for example on a raspberry pi or other arm-based systems. | There is a simpler way that doesn't depend on external utilities and just uses the sys module. Here is the code: import sys is_android: bool = hasattr(sys, 'getandroidapilevel') Here are its pros and cons: @@Pros@@ + Does not depend on environment values + Does not depend on third-party modules + Simple one-liner (2 technically) @@Cons@@ - Version restriction (Supports CPython 3.7+ or equivalent) - Implementation-dependent (while CPython implements this I don't know about others) | 7 | 1 |
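A belt-and-braces variant combines the sys check from the answer with an environment hint (the helper name is made up, and the ANDROID_ROOT check is only a heuristic — Android's init normally sets it, but any process could):

```python
import os
import sys

def probably_android():
    # sys.getandroidapilevel exists only on Android builds of CPython
    # (3.7+); the ANDROID_ROOT variable is an extra, heuristic signal.
    return hasattr(sys, "getandroidapilevel") or "ANDROID_ROOT" in os.environ

print(probably_android())
```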
62,160,411 | 2020-6-2 | https://stackoverflow.com/questions/62160411/pythons-new-functools-cached-property-bug-or-limitation | Since Python 3.8, functools has a cached_property. I've been using a similar lazyprop decorator based on Beazley's cookbook (code below), but when I replace by the builtin, I get problems. Here's one of them. When I use the decorator within the class definition, using the @ operator, it doesn't complain. But if I use it with setattr, I get: TypeError: Cannot use cached_property instance without calling __set_name__ on it. Beazley's simple version works fine though. from functools import cached_property class lazyprop: """Based on code from David Beazley's "Python Cookbook".""" def __init__(self, func): self.__doc__ = getattr(func, '__doc__') self.func = func def __get__(self, instance, cls): if instance is None: return self else: value = instance.__dict__[self.func.__name__] = self.func(instance) return value class D: def __init__(self, greeting='Hello'): self.greeting = greeting def greet(self): return self.greeting + " world!" # Beazley's version works... D.greet = lazyprop(greet) assert D().greet == "Hello world!" # ... but the builtin version will fail D.greet = cached_property(greet) # D().greet # this will fail | TL;DR The cache is the instance dict itself, and the name of the property is needed as the key. The chosen design imposes the limitation, but (IMO) it's a good compromise. lazyprop is not thread-safe, or at least may call self.func more than is strictly necessary in a multi-threaded environment. To start, it is documented that __set_name__ is only called if the assignment occurs in the context of type creation. The note in the documentation (adapted to your example) shows what you need to do: call __set_name__ yourself. class D: def __init__(self, greeting='Hello'): self.greeting = greeting def greet(self): return self.greeting + " world!" 
D.greet = cached_property(greet) D.greet.__set_name__(D, 'greet') # sets D.greet.attrname = "greet" for later use assert D().greet == "Hello world!" Why is the attribute name needed? cached_property.__set__ is not defined, so given d = D(), d.greet will first look for an instance attribute named greet in d.__dict__. If it is not found, then D.greet.__get__(d, D) will be called. That function basically does three things: it calls the original greet function if needed to compute the value, then saves it to a new instance attribute with the same name, and then returns the computed value. "Wait", you ask, "what do you mean, 'if needed'? Didn't you just say D.greet.__get__ is only called if the instance attribute doesn't already exist?" Yes, but in a multithreaded environment, you don't know if another thread might also be executing D.greet.__get__ at the same time. In order to prevent a race condition, __get__ goes through the following steps (you can follow along in the code if you like): Check for an instance attribute with the same name, in case it was created in another thread after the current call started. If not, try to get a lock so we can create the instance attribute ourselves. Once you get the lock, look for the instance attribute again, in case someone created it while we waited for the lock. If the instance attribute still does not exist, we can safely create it ourselves. Finally, whether we pulled a value from the instance attribute or called the original greet function ourselves, we can return the value. With all this in mind, I would call this a limitation rather than a bug, but a limitation that is easily worked around. | 7 | 6 |
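A self-contained check that the manual `__set_name__` call really enables caching; the class and the call counter here are illustrative, not part of the original question:

```python
from functools import cached_property

class Greeter:
    def __init__(self, greeting="Hello"):
        self.greeting = greeting
        self.calls = 0

def greet(self):
    self.calls += 1
    return self.greeting + " world!"

# Attach the property after class creation, then register its name manually,
# so __get__ knows which instance-dict key to use as the cache.
Greeter.greet = cached_property(greet)
Greeter.greet.__set_name__(Greeter, "greet")

g = Greeter()
assert g.greet == "Hello world!"
assert g.greet == "Hello world!"  # second access served straight from g.__dict__
assert g.calls == 1               # the wrapped function ran only once
```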
62,086,013 | 2020-5-29 | https://stackoverflow.com/questions/62086013/download-file-folder-from-public-aws-s3-with-python-no-credentials | I have opened up public access to an S3 bucket and I need to download files / folders with files from the bucket using python. The trick is that I do not want to supply credentials (which boto3 apparently requires). Is it even possible? | You can use GetObject from the S3 REST API, together with the Requests library in Python. If you grant READ access to the anonymous user, you can return the object without using an authorization header. Example of such an S3 REST call: > GET /example-object HTTP/1.1 > Host: example-bucket.s3.<Region>.amazonaws.com Python (rough example): import requests url = 'https://example-bucket.s3.<Region>.amazonaws.com/example-object' r = requests.get(url) Requests GetObject | 8 | 3 |
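The same idea works without any third-party dependency: build the virtual-hosted-style URL and fetch it with the standard library. The bucket, region, and key below are placeholders:

```python
from urllib.parse import quote

def public_object_url(bucket: str, region: str, key: str) -> str:
    """Virtual-hosted-style URL for an object in a public-read bucket."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

url = public_object_url("example-bucket", "us-east-1", "example-object")
print(url)
# urllib.request.urlopen(url).read() would then fetch the object with no
# credentials, provided the bucket policy allows anonymous READ.
```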
62,045,387 | 2020-5-27 | https://stackoverflow.com/questions/62045387/how-to-suppress-coroutine-was-never-awaited-warning | All search results on "coroutine was never awaited" are for people who were either trying to fire-and-forget or actually did forget to await. This is not my case. I want to use a coroutine the same way I often use generators: I'm creating it here while I have all the variables handy, but I'm not sure yet whether I'll ever need that to be run. Something like: options = { 'a': async_func_1(..., ...), 'b': async_func_2(), 'c': async_func_3(...), } and elsewhere: appropriate_option = figure_the_option_out(...) result = await options[appropriate_option] | deceze's comment that you should not create the coroutine object until you are ready to await it is probably the most ideal solution. But if that isn't practical, you can use weakref.finalize() to call the coroutine object's close() method just before it is garbage-collected. >python -m asyncio asyncio REPL 3.9.5 (default, May 18 2021, 14:42:02) [MSC v.1916 64 bit (AMD64)] on win32 Use "await" directly instead of "asyncio.run()". Type "help", "copyright", "credits" or "license" for more information. >>> import asyncio >>> import weakref >>> async def coro(x): ... print(x) ... >>> coro_obj = coro('Hello') >>> del coro_obj <console>:1: RuntimeWarning: coroutine 'coro' was never awaited RuntimeWarning: Enable tracemalloc to get the object allocation traceback >>> coro_obj = coro('Hello') >>> _ = weakref.finalize(coro_obj, coro_obj.close) >>> del coro_obj >>> | 13 | 6 |
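If weakref feels heavyweight, the same effect can be had by closing the losing coroutine objects explicitly once the choice is made; closing a coroutine prevents its destructor from ever emitting the "never awaited" warning. The option functions here are made up for illustration:

```python
import asyncio

async def async_func_1():
    return "one"

async def async_func_2():
    return "two"

def run_choice(key):
    options = {"a": async_func_1(), "b": async_func_2()}
    chosen = options.pop(key)
    # Explicitly close the coroutine objects we decided not to await, so their
    # finalizers never emit the "coroutine ... was never awaited" warning.
    for unused in options.values():
        unused.close()
    return asyncio.run(chosen)

print(run_choice("a"))  # one
```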
62,067,400 | 2020-5-28 | https://stackoverflow.com/questions/62067400/understanding-accumulated-gradients-in-pytorch | I am trying to comprehend inner workings of the gradient accumulation in PyTorch. My question is somewhat related to these two: Why do we need to call zero_grad() in PyTorch? Why do we need to explicitly call zero_grad()? Comments to the accepted answer to the second question suggest that accumulated gradients can be used if a minibatch is too large to perform a gradient update in a single forward pass, and thus has to be split into multiple sub-batches. Consider the following toy example: import numpy as np import torch class ExampleLinear(torch.nn.Module): def __init__(self): super().__init__() # Initialize the weight at 1 self.weight = torch.nn.Parameter(torch.Tensor([1]).float(), requires_grad=True) def forward(self, x): return self.weight * x if __name__ == "__main__": # Example 1 model = ExampleLinear() # Generate some data x = torch.from_numpy(np.array([4, 2])).float() y = 2 * x optimizer = torch.optim.SGD(model.parameters(), lr=0.01) y_hat = model(x) # forward pass loss = (y - y_hat) ** 2 loss = loss.mean() # MSE loss loss.backward() # backward pass optimizer.step() # weight update print(model.weight.grad) # tensor([-20.]) print(model.weight) # tensor([1.2000] Which is exactly the result one would expect. Now assume that we want to process the dataset sample-by-sample utilizing gradient accumulation: # Example 2: MSE sample-by-sample model2 = ExampleLinear() optimizer = torch.optim.SGD(model2.parameters(), lr=0.01) # Compute loss sample-by-sample, then average it over all samples loss = [] for k in range(len(y)): y_hat = model2(x[k]) loss.append((y[k] - y_hat) ** 2) loss = sum(loss) / len(y) loss.backward() # backward pass optimizer.step() # weight update print(model2.weight.grad) # tensor([-20.]) print(model2.weight) # tensor([1.2000] Again as expected, the gradient is calculated when the .backward() method is called. 
Finally to my question: what exactly happens 'under the hood'? My understanding is that the computational graph is dynamically updated going from <PowBackward> to <AddBackward> <DivBackward> operations for the loss variable, and that no information about the data used for each forward pass is retained anywhere except for the loss tensor which can be updated until the backward pass. Are there any caveats to the reasoning in the above paragraph? Lastly, are there any best practices to follow when using gradient accumulation (i.e. can the approach I use in Example 2 backfire somehow)? | You are not actually accumulating gradients. Just leaving off optimizer.zero_grad() has no effect if you have a single .backward() call, as the gradients are already zero to begin with (technically None but they will be automatically initialised to zero). The only difference between your two versions, is how you calculate the final loss. The for loop of the second example does the same calculations as PyTorch does in the first example, but you do them individually, and PyTorch cannot optimise (parallelise and vectorise) your for loop, which makes an especially staggering difference on GPUs, granted that the tensors aren't tiny. Before getting to gradient accumulation, let's start with your question: Finally to my question: what exactly happens 'under the hood'? Every operation on tensors is tracked in a computational graph if and only if one of the operands is already part of a computational graph. When you set requires_grad=True of a tensor, it creates a computational graph with a single vertex, the tensor itself, which will remain a leaf in the graph. Any operation with that tensor will create a new vertex, which is the result of the operation, hence there is an edge from the operands to it, tracking the operation that was performed. 
a = torch.tensor(2.0, requires_grad=True) b = torch.tensor(4.0) c = a + b # => tensor(6., grad_fn=<AddBackward0>) a.requires_grad # => True a.is_leaf # => True b.requires_grad # => False b.is_leaf # => True c.requires_grad # => True c.is_leaf # => False Every intermediate tensor automatically requires gradients and has a grad_fn, which is the function to calculate the partial derivatives with respect to its inputs. Thanks to the chain rule, we can traverse the whole graph in reverse order to calculate the derivatives with respect to every single leaf, which are the parameters we want to optimise. That's the idea of backpropagation, also known as reverse mode differentiation. For more details I recommend reading Calculus on Computational Graphs: Backpropagation. PyTorch uses that exact idea, when you call loss.backward() it traverses the graph in reverse order, starting from loss, and calculates the derivatives for each vertex. Whenever a leaf is reached, the calculated derivative for that tensor is stored in its .grad attribute. In your first example, that would lead to: MeanBackward -> PowBackward -> SubBackward -> MulBackward` The second example is almost identical, except that you calculate the mean manually, and instead of having a single path for the loss, you have multiple paths for each element of the loss calculation. To clarify, the single path also calculates the derivatives of each element, but internally, which again opens up the possibilities for some optimisations. # Example 1 loss = (y - y_hat) ** 2 # => tensor([16., 4.], grad_fn=<PowBackward0>) # Example 2 loss = [] for k in range(len(y)): y_hat = model2(x[k]) loss.append((y[k] - y_hat) ** 2) loss # => [tensor([16.], grad_fn=<PowBackward0>), tensor([4.], grad_fn=<PowBackward0>)] In either case a single graph is created that is backpropagated exactly once, that's the reason it's not considered gradient accumulation. 
Gradient Accumulation Gradient accumulation refers to the situation, where multiple backwards passes are performed before updating the parameters. The goal is to have the same model parameters for multiple inputs (batches) and then update the model's parameters based on all these batches, instead of performing an update after every single batch. Let's revisit your example. x has size [2], that's the size of our entire dataset. For some reason, we need to calculate the gradients based on the whole dataset. That is naturally the case when using a batch size of 2, since we would have the whole dataset at once. But what happens if we can only have batches of size 1? We could run them individually and update the model after each batch as usual, but then we don't calculate the gradients over the whole dataset. What we need to do, is run each sample individually with the same model parameters and calculate the gradients without updating the model. Now you might be thinking, isn't that what you did in the second version? Almost, but not quite, and there is a crucial problem in your version, namely that you are using the same amount of memory as in the first version, because you have the same calculations and therefore the same number of values in the computational graph. How do we free memory? We need to get rid of the tensors of the previous batch and also the computational graph, because that uses a lot of memory to keep track of everything that's necessary for the backpropagation. The computational graph is automatically destroyed when .backward() is called (unless retain_graph=True is specified). 
def calculate_loss(x: torch.Tensor) -> torch.Tensor: y = 2 * x y_hat = model(x) loss = (y - y_hat) ** 2 return loss.mean() # With multiple batches of size 1 batches = [torch.tensor([4.0]), torch.tensor([2.0])] optimizer.zero_grad() for i, batch in enumerate(batches): # The loss needs to be scaled, because the mean should be taken across the whole # dataset, which requires the loss to be divided by the number of batches. loss = calculate_loss(batch) / len(batches) loss.backward() print(f"Batch size 1 (batch {i}) - grad: {model.weight.grad}") print(f"Batch size 1 (batch {i}) - weight: {model.weight}") # Updating the model only after all batches optimizer.step() print(f"Batch size 1 (final) - grad: {model.weight.grad}") print(f"Batch size 1 (final) - weight: {model.weight}") Output (I removed the Parameter containing messages for readability): Batch size 1 (batch 0) - grad: tensor([-16.]) Batch size 1 (batch 0) - weight: tensor([1.], requires_grad=True) Batch size 1 (batch 1) - grad: tensor([-20.]) Batch size 1 (batch 1) - weight: tensor([1.], requires_grad=True) Batch size 1 (final) - grad: tensor([-20.]) Batch size 1 (final) - weight: tensor([1.2000], requires_grad=True) As you can see, the model kept the same parameter for all batches, while the gradients were accumulated, and there is a single update at the end. Note that the loss needs to be scaled per batch, in order to have the same significance over the whole dataset as if you used a single batch. While in this example, the whole dataset is used before performing the update, you can easily change that to update the parameters after a certain number of batches, but you have to remember to zero out the gradients after an optimiser step was taken.
The general recipe would be: accumulation_steps = 10 for i, batch in enumerate(batches): # Scale the loss to the mean of the accumulated batch size loss = calculate_loss(batch) / accumulation_steps loss.backward() if (i + 1) % accumulation_steps == 0: optimizer.step() # Reset gradients, for the next accumulated batches optimizer.zero_grad() You can find that recipe and more techniques for working with large batch sizes in HuggingFace - Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups. | 33 | 64 |
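The loss-scaling rule in the recipe can be sanity-checked without PyTorch: per-sample squared errors, each scaled by 1/len(data) and summed, must reproduce the full-batch mean. A plain-Python sketch mirroring the toy example (x values 4 and 2, target y = 2x, weight w = 1):

```python
# Toy dataset (x, y) and a single "weight" w, mirroring the example above.
data = [(4.0, 8.0), (2.0, 4.0)]
w = 1.0

def mse(batch):
    return sum((y - w * x) ** 2 for x, y in batch) / len(batch)

full_batch_loss = mse(data)  # mean over the whole dataset at once

# Per-sample losses, each scaled by 1/len(data) and summed,
# exactly as in the accumulation loop of the recipe.
accumulated_loss = sum(mse([sample]) / len(data) for sample in data)

assert abs(full_batch_loss - accumulated_loss) < 1e-12
print(full_batch_loss)  # 10.0
```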
62,058,120 | 2020-5-28 | https://stackoverflow.com/questions/62058120/how-to-use-python-black-formatter-under-a-project-for-python-3-5-managed-by-poe | I created a python project "foo" with Poetry. This is the content of pyproject.toml: [tool.poetry] name = "bar" version = "0.1.0" description = "" [tool.poetry.dependencies] python = ">=3.5" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry>=0.12"] build-backend = "poetry.masonry.api" This package is compatible with Python3.5. I want to black formatter, which is not compatible with Python3.5. I think there is no problem if I use Python>=3.6 for development, but I cannot install black formatter: $ poetry add black --dev [SolverProblemError] The current project's Python requirement (>=3.5) is not compatible with some of the required packages Python requirement: - black requires Python >=3.6 Because no versions of black match >19.10b0,<20.0 and black (19.10b0) requires Python >=3.6, black is forbidden. So, because bar depends on black (^19.10b0), version solving failed. So I installed black directly with pip: $ poetry run pip install black This way doesn't sit well with me. I want to install black by poetry. How should I do? (I don't want to modify the dependency to python>=3.6) | Seems a bit late but actually you can do what you want even if black supports only Python >=3.6.2 In your pyproject.toml you can define a restricted dependcy as documented in https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies [tool.poetry.dependencies] python = ">=3.5" [tool.poetry.dev-dependencies] black = {version = "^21.7b0", python = ">=3.6.2"} Poetry won't complain and you won't have any problems since it is a dev dependency. | 9 | 7 |
62,114,945 | 2020-5-31 | https://stackoverflow.com/questions/62114945/attributeerror-parsedrequirement-object-has-no-attribute-req | I have docker file with one layer as RUN python setup.py develop I am using a mutli-stage build with three stages and this is the stage one all the stages have the same base image, though I don't think this is a problem with dockerfile but seems to be a problem with python and the way it is executed working on the base image python:3.7-slim I am building this dockerfile on Travis CI with this below Version info on Travis: docker version Client: Version: 17.09.0-ce API version: 1.32 Go version: go1.8.3 Git commit: afdb6d4 Built: Tue Sep 26 22:42:38 2017 OS/Arch: linux/amd64 Server: Version: 17.09.0-ce API version: 1.32 (minimum version 1.12) Go version: go1.8.3 Git commit: afdb6d4 Built: Tue Sep 26 22:41:20 2017 OS/Arch: linux/amd64 Experimental: false I get this below error as AttributeError: 'ParsedRequirement' object has no attribute 'req' Surprisingly I am able to have this working on My mac machine with docker version 19.03.2 Here is my setup.py file import os import shutil import inspect import platform from setuptools import setup import setuptools try: from pip.req import parse_requirements except ImportError: from pip._internal.req import parse_requirements EMAIL_CONF = 'email.conf' DL_CONF = 'dl.conf' LINUX_CONFDIR = os.path.expanduser('~') + '/.config/bassa/' WIN_CONFDIR = os.path.expanduser('~') + '/%app_data%/bassa/' OSX_CONFDIR = os.path.expanduser('~') + '/.config/bassa/' # Utility function to read the README file. 
def read(file_name): return open(os.path.join(os.path.dirname(__file__), file_name)).read() base_dir = os.path.dirname(os.path.abspath(__file__)) requirements_path = os.path.join(base_dir, 'requirements.txt') install_reqs = parse_requirements(requirements_path, session=False) requirements = [str(ir.req) for ir in install_reqs] ### Set configs ### if platform.system() == 'Linux': configdir = LINUX_CONFDIR elif platform.system() == 'Windows': configdir = WIN_CONFDIR elif platform.system() == 'Darwin': configdir = OSX_CONFDIR if not os.path.exists(configdir): os.makedirs(configdir) email_conf_location = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) + "/" + EMAIL_CONF dl_conf_location = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) + "/" + DL_CONF shutil.copyfile(email_conf_location, configdir + EMAIL_CONF) shutil.copyfile(dl_conf_location, configdir + DL_CONF) ###/ Set configs ### setup( ... ) Please help me with this issue. | Update: * Please note that updating pip and pip-tools is not supported in my case. Then the workaround in my answer will help. * If updating pip and pip-tools to compatible version is supported then refer to Gnnr's answer or Heapify's answer I got the fix finally \o/ install_reqs = parse_requirements(requirements_path, session=False) At first, I have inspected what install_reqs was on Travis by simply logging it and found that it was a list of ParsedRequirement objects. I also found that this class is defined in req_file.py. I have gone to check the source code for req_file.py here on GitHub. I found that there was no such attribute called req but instead it is requirement. So there were two versions of parse_requirements function so I handled this using a try and except block. # Generator must be converted to list, or we will only have one chance to read each element, meaning that the first requirement will be skipped. 
install_reqs = list(install_reqs) try: requirements = [str(ir.req) for ir in install_reqs] except AttributeError: requirements = [str(ir.requirement) for ir in install_reqs] Now it is compatible with both versions \o/ | 14 | 23 |
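Since parse_requirements lives in pip's private API and keeps changing, a dependency-free fallback for simple requirements.txt files is another option. This sketch is deliberately minimal: it skips comments, blank lines, and pip options, and does not handle line continuations or environment markers:

```python
def parse_simple_requirements(text):
    """Very small stand-in for pip's internal parser: keeps one requirement
    per line, skipping blank lines, comments, and pip options like '-r'."""
    reqs = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if line and not line.startswith("-"):
            reqs.append(line)
    return reqs

sample = """# pinned deps
flask==1.1.2

requests>=2.0  # http client
-r other.txt
"""
print(parse_simple_requirements(sample))  # ['flask==1.1.2', 'requests>=2.0']
```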
62,150,659 | 2020-6-2 | https://stackoverflow.com/questions/62150659/how-to-convert-a-tensor-of-booleans-to-ints-in-pytorch | Suppose, we have a tensor t = torch.tensor([True, False, True, False]) How do we convert it to an integer tensor with values [1, 0, 1, 0]? | The solution is just a single line of code. To convert a tensor t with values [True, False, True, False] to an integer tensor, just do the following. t = torch.tensor([True, False, True, False]) t_integer = t.long() print(t_integer) [1, 0, 1, 0] | 18 | 29 |
62,137,479 | 2020-6-1 | https://stackoverflow.com/questions/62137479/plotly-dash-dcc-radioitems-vertical-alignment | I would like to align vertically all options of a dash_core_components.RadioItems. According to the dash documentation, the default behavior should include a vertical alignment of the RadioItems options. If you wanted to align the options horizontally, you would have to specify: labelStyle={'display': 'inline-block'} On the contrary, as default behavior I get a horizontal alignment and I don't know what to specify as the display item to get a vertical alignment of the RadioItems options. Here is my attempt so far: import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output app = dash.Dash() app.layout = html.Div([dcc.RadioItems(id = 'input-radio-button', options = [dict(label = 'A', value = 'A'), dict(label = 'B', value = 'B')], value = 'A'), html.P(id = 'output-text')]) @app.callback(Output('output-text', 'children'), [Input('input-radio-button', 'value')]) def update_graph(value): return f'The selected value is {value}' if __name__ == "__main__": app.run_server() What I get: I would like to get a result like this (image manually edited): I found this reference where this problem is mentioned. There, it is proposed to solve it by referring to an external stylesheet. I would like, if possible, to avoid this workaround and solve it by specifying the correct option within the definition of the RadioItems element. Version info: Python 3.7.0 dash 1.12.0 plotly 4.7.0 | You can pass the labelStyle={'display': 'block'} property to dcc.RadioItems() in order to vertically align the different options, but I suggest that you follow the recommendation in the Dash Community Forum, which is to always link the Dash CSS file bWLwgP.css. | 9 | 10 |
62,161,001 | 2020-6-2 | https://stackoverflow.com/questions/62161001/python-newline-n-not-working-in-jupyter-notebooks | I'm trying to display the tuples of a postgreSQL table neatly in my Jupyter Notebook, but the newline \n escape character doesn't seem to work here (it works for my python scripts w/ same code outside of jupyter). I'm trying to run: cur.execute('SELECT * FROM Cars') '\n '.join(str(x) for x in cur.fetchall()) But my output still contains the '\n' characters themselves: "(1, 'Toyota', 'Supra', 2020, 'Sport', 'Gas', 49995.0, 7, 280.0, datetime.date(2020, 5, 27), 'Loan', 50150.0, 300.0, 987654321, 333356789)\n (4, 'Chevrolet', 'Corvette', 2020, 'Sport', 'Gas', 55999.0, 4, 280.0, datetime.date(2020, 5, 27), 'Loan', 58999.0, 300.0, 987444321, 333356789)\n (2, 'Toyota', '4Runner', 2018, 'Sport', 'Gas', 40599.0, 13266, 280.0, datetime.date(2020, 5, 27), 'Loan', 58999.0, 300.0, 987334321, 333356789)" Any ideas as to what I need to do or add? | When you don't wrap the expression in a print statement, "\n" is displayed as the literal characters "\n" instead of a newline, both in a Jupyter notebook and in the Python shell: in: 'A\nB' out: 'A\nB' in: print('A\nB') out: A B The solution you need is: print('\n '.join(str(x) for x in cur.fetchall())) | 9 | 18 |
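The underlying difference is between repr() (what the notebook or REPL echoes for a bare expression) and str() (what print() uses). A minimal reproduction:

```python
s = "A\nB"

# Echoing a bare expression shows repr(), which escapes the newline:
assert repr(s) == "'A\\nB'"

# print() uses str(), so the newline is actually rendered:
print(s)
# A
# B

assert s.split("\n") == ["A", "B"]
```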
62,151,238 | 2020-6-2 | https://stackoverflow.com/questions/62151238/how-to-set-the-jinja-environment-variable-in-flask | I have a page with the following Code Structure: Python Code: from flask import Flask,render_template app=Flask(__name__) @app.route('/') def home(): return render_template("first.html") @app.route('/second') def next(): return render_template("second.html") if __name__=="__main__": app.run(debug=True) HTML Code for second.html: {% extends first.html %} <dosomething> </dosomething> Now I want to set the Environment of Jinja as follows: Environment: import jinja2 JINJA_ENV=jinja2.Environment(block_start_string='#%',block_end_string="#%",variable_start_string='{',variable_end_string='}') I want these changes to be reflected in Flask. Should this variable be initialized with Flask in some way or left as such? | It's not publicly documented, but a Flask() object has a .jinja_options dict that will be used to build the Jinja Environment. Just make sure to set it ASAP. Source: https://github.com/pallets/flask/blob/bbb273bb761461ab329f03ff2d9002f6cb81e2a4/src/flask/app.py#L272 https://github.com/pallets/flask/blob/bbb273bb761461ab329f03ff2d9002f6cb81e2a4/src/flask/app.py#L652 Example: from flask import Flask from jinja2 import ChainableUndefined app = Flask(__name__) app.jinja_options["undefined"] = ChainableUndefined | 8 | 9 |