Dataset columns (type and value range or string-length range):
question_id: int64, values 59.5M to 79.4M
creation_date: string, lengths 8 to 10
link: string, lengths 60 to 163
question: string, lengths 53 to 28.9k
accepted_answer: string, lengths 26 to 29.3k
question_vote: int64, values 1 to 410
answer_vote: int64, values -9 to 482
60,618,896
2020-3-10
https://stackoverflow.com/questions/60618896/how-to-run-tensorflow-inference-for-multiple-models-on-gpu-in-parallel
Do you know any elegant way to do inference in 2 Python processes with 1 GPU in TensorFlow? Suppose I have 2 processes, the first one classifying cats/dogs, the second one classifying birds/planes; each process runs a different TensorFlow model on the GPU. These 2 models will be given images from different cameras continuously. Usually, TensorFlow will occupy all the memory of the entire GPU, so when you start another process, it will crash saying OUT OF MEMORY or failed convolution CUDA or something along that line. Is there a tutorial/article/sample code that shows how to load 2 models in different processes and run both in parallel? This is also very useful in case you are running model inference while doing some heavy graphics work, e.g. playing games; I also want to know how running the model affects the game. I've tried using a Python Thread and it works, but each model predicts 2 times slower (and you know that Python threads don't utilize multiple CPU cores). I want to use a Python Process but it's not working. If you have a few sample lines of code that work I would appreciate that very much. I've attached my current Thread code as well:
OK, I think I've found the solution now. I use TensorFlow 2, and there are essentially 2 methods to manage the GPU memory usage: set memory growth to true, or set a memory limit to some number. Both methods work; ignore all the warning messages about out-of-memory issues. I still don't know exactly what they mean, but the model keeps running and that's what I care about. I measured the exact time the model takes to run and it's a lot better than running on the CPU. If I run both processes at the same time, the speed drops a bit, but it's still a lot better than running on the CPU. With the memory growth approach, my GPU has 3GB, so the first process tries to allocate everything and then the 2nd process reports out of memory, yet it still works. With the memory limit approach, I set the limit to some number, e.g. 1024 MB, and both processes work. So what is the right minimum number that you can set? I tried reducing the memory limit until I found that my model works fine with a 64 MB limit. The prediction speed is still the same as when I set the memory limit to 1024 MB. When I set the memory limit to 32 MB, I noticed a 50% speed drop. When I set it to 16 MB, the model refuses to run because it does not have enough memory to store the image tensor. This means that my model requires a minimum of 64 MB, which is very little considering that I have 3GB to spare. This also allows me to run the model while playing some video games. Conclusion: I chose to use the memory limit approach with a 64 MB limit. You can check how to use the memory limit here: https://www.tensorflow.org/guide/gpu I suggest you try changing the memory limit to see the minimum your model needs; you will see a speed drop, or the model will refuse to run, when the memory is not enough.
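To make the two options concrete, here is a minimal sketch of how each one is set in TensorFlow 2, following the GPU guide linked above. The two calls are alternatives, so one is commented out; the 64 MB figure is just the limit this answer settled on, and the configuration must run in each process before the GPU is first used:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Option 1: let this process grow its GPU memory use on demand
    # tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option 2 (what this answer settled on): cap this process at 64 MB
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=64)])

Each of the two inference processes would run this configuration with its own limit and then load its model as usual.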
13
3
60,626,517
2020-3-10
https://stackoverflow.com/questions/60626517/use-walrus-operator-in-python-3-7
Why are future imports limited to only certain functionality? Is there no way to get the walrus operator in Python 3.7? I thought this would work, but it doesn't: from __future__ import walrus It doesn't work because walrus isn't in the list of supported features: __future__.all_feature_names ['nested_scopes', 'generators', 'division', 'absolute_import', 'with_statement', 'print_function', 'unicode_literals', 'barry_as_FLUFL', 'generator_stop', 'annotations'] Are there any other alternatives short of using python 3.8?
If the version of Python you're using doesn't contain an implementation of a feature, then you cannot use that feature; writing from __future__ import ... cannot cause that feature to be implemented in the version of Python you have installed. The purpose of __future__ imports is to allow an "opt-in" period for new features which could break existing programs. For example, when the / operator's behaviour on integers was changed so that 3/2 was 1.5 instead of 1 (i.e. floor division), this would have broken a lot of code if it was just changed overnight. So both behaviours were implemented in the next few versions of Python, and if you were using one of those newer versions then you could choose the new behaviour with from __future__ import division. But you were only able to do so because the version of Python you were using did implement the new behaviour. The walrus operator was introduced in Python 3.8, so if you are using a version prior to 3.8 then it does not contain an implementation of that operator, so you cannot use it. There was no need to use __future__ to make the walrus operator "opt-in", since the introduction of a new operator with new syntax could not have broken any existing code.
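For completeness, a tiny illustration (not from the answer) of what the operator does on 3.8+ and the equivalent spelling on 3.7, which is simply to keep the assignment on its own line:

import re

# Python 3.8+: bind and test in a single expression
if (match := re.search(r"\d+", "abc 123")) is not None:
    print(match.group())

# Python 3.7 and earlier: assign first, then test
match = re.search(r"\d+", "abc 123")
if match is not None:
    print(match.group())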
19
33
60,616,802
2020-3-10
https://stackoverflow.com/questions/60616802/how-to-type-hint-a-generic-numeric-type-in-python
Forgive me if this question has been asked before but I could not find any related answer. Consider a function that takes a numerical type as input parameter: def foo(a): return ((a+1)*2)**4; This works with integers, floats and complex numbers. Is there a basic type so that I can do a type hinting (of a real existing type/base class), such as: def foo(a: numeric): return ((a+1)*2)**4; Furthermore I need to use this in a collection type parameter, such as: from typing import Collection; def foo(_in: Collection[numeric]): return ((_in[0]+_in[1])*2)**4;
PEP 3141 added abstract base classes for numbers, so you could use: from numbers import Number def foo(a: Number) -> Number: ...
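As a short sketch of how that applies to both signatures from the question (Sequence is used for the collection variant here because the body indexes into its argument; Number itself is an abstract base class, so isinstance checks also work against it):

from numbers import Number
from typing import Sequence

def foo(a: Number) -> Number:
    return ((a + 1) * 2) ** 4

def bar(values: Sequence[Number]) -> Number:
    return ((values[0] + values[1]) * 2) ** 4

print(foo(2), foo(2.5), foo(1 + 2j))   # works for int, float and complex
print(bar([1, 2.5]))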
68
85
60,581,677
2020-3-7
https://stackoverflow.com/questions/60581677/experimental-list-devices-attribute-missing-in-tensorflow-core-api-v2-config
I am using tensorflow 2.1 on Windows 10. On running model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='valid', activation='relu', data_format="channels_first", input_shape=input_shape)) in Spyder, I get this error: AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'experimental_list_devices' How can I solve this error?
I found the answer here - https://github.com/keras-team/keras/issues/13684. I had the same issue with load_model() from keras under Anaconda: AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'experimental_list_devices' I found the source of the problem in ...\anaconda3\envs\tf_env\Lib\site-packages\keras\backend\tensorflow_backend.py. In line 506 I changed the line _LOCAL_DEVICES = tf.config.experimental_list_devices() to devices = tf.config.list_logical_devices() _LOCAL_DEVICES = [x.name for x in devices] and it works.
6
16
60,610,280
2020-3-10
https://stackoverflow.com/questions/60610280/bertforsequenceclassification-vs-bertformultiplechoice-for-sentence-multi-class
I'm working on a text classification problem (e.g. sentiment analysis), where I need to classify a text string into one of five classes. I just started using the Huggingface Transformer package and BERT with PyTorch. What I need is a classifier with a softmax layer on top so that I can do 5-way classification. Confusingly, there seem to be two relevant options in the Transformer package: BertForSequenceClassification and BertForMultipleChoice. Which one should I use for my 5-way classification task? What are the appropriate use cases for them? The documentation for BertForSequenceClassification doesn't mention softmax at all, although it does mention cross-entropy. I am not sure if this class is only for 2-class classification (i.e. logistic regression). Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). The documentation for BertForMultipleChoice mentions softmax, but the way the labels are described, it sound like this class is for multi-label classification (that is, a binary classification for multiple labels). Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. Thank you for any help.
The answer to this lies in the (admittedly very brief) description of what the tasks are about: [BertForMultipleChoice] [...], e.g. for RocStories/SWAG tasks. When looking at the paper for SWAG, it seems that the task is actually learning to choose from varying options. This is in contrast to your "classical" classification task, in which the "choices" (i.e., classes) do not vary across your samples, which is exactly what BertForSequenceClassification is for. Both variants can in fact handle an arbitrary number of classes (in the case of BertForSequenceClassification) or choices (for BertForMultipleChoice), by changing the num_labels parameter in the config. But since it seems like you are dealing with a case of "classical classification", I suggest using the BertForSequenceClassification model. Briefly addressing the missing softmax in BertForSequenceClassification: since classification tasks compute the loss across classes independently of the sample (unlike multiple choice, where your distribution changes), this allows you to use cross-entropy loss, which factors the softmax into the backpropagation step for increased numerical stability.
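A minimal sketch of the suggested route, with num_labels set to 5 for the 5-way task (method names follow the Hugging Face API of that era, so exact signatures may differ between transformers versions):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

# Encode one sentence and get one logit per class
inputs = tokenizer.encode_plus("the movie was great", return_tensors="pt")
outputs = model(**inputs)
logits = outputs[0]                        # shape (1, 5)
probs = torch.softmax(logits, dim=-1)      # apply softmax yourself, as explained above
predicted_class = int(probs.argmax(dim=-1))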
19
17
60,517,286
2020-3-4
https://stackoverflow.com/questions/60517286/rolling-apply-function-must-be-real-number-not-nonetype
I'm trying to use rolling with an apply function to print each window, but I get an error that says: File "pandas/_libs/window.pyx", line 1649, in pandas._libs.window.roll_generic TypeError: must be real number, not NoneType My code is the following: def print_window(window): print(window) print('==================') def example(): df = pd.read_csv('window_example.csv') df.rolling(5).apply(print_window) My data looks like this:
number  sum  mean
1       1    1
2       3    1.5
3       6    2
4       10   2.5
5       15   3
6       20   4
How should I solve this error? I didn't find similar questions about this error. Thanks!
This behavior appeared in pandas 1.0.0. The function passed to apply is now expected to return a single value for each window, which becomes the value written to the corresponding column. See https://pandas.pydata.org/pandas-docs/version/1.0.0/reference/api/pandas.core.window.rolling.Rolling.apply.html#pandas.core.window.rolling.Rolling.apply A workaround for your code would be: def print_window(window): print(window) print('==================') return 0 def example(): df = pd.read_csv('window_example.csv') df.rolling(5).apply(print_window)
8
7
60,604,046
2020-3-9
https://stackoverflow.com/questions/60604046/i-was-wondering-what-is-meant-by-class-state-in-oop-particularly-in-python
Today, someone asked me about static methods: is it right that a static method can't access or modify class state?
General OO answer: the state of an object is the values of its attributes. For example, given class Point: def __init__(self, x, y): self.x = x self.y = y p = Point(42, 43) then the state of p is {"x": 42, "y": 43}. To modify an object's state, a method needs to have access to this object. For ordinary methods, this is provided by the self parameter. Now, Python's classes are objects too (instances of the type class), so Python has "classmethods", which can be invoked on either an instance or the class itself, but receive the class object itself instead of an instance. Those classmethods can then modify the class's state (class attributes, which are shared by all instances of the class). A staticmethod gets neither the instance nor the class, so indeed it can modify neither the class's nor the instance's state.
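To make the classmethod/staticmethod distinction concrete, here is a small made-up example (not from the original answer):

class Counter:
    created = 0                  # class attribute: state shared by the class itself

    def __init__(self):
        Counter.created += 1
        self.value = 0           # instance attribute: per-object state

    @classmethod
    def reset_created(cls):
        cls.created = 0          # receives the class, so it can change class state

    @staticmethod
    def double(x):
        return x * 2             # receives neither self nor cls: no access to any state

Counter(); Counter()
print(Counter.created)   # 2
Counter.reset_created()
print(Counter.created)   # 0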
7
8
60,613,233
2020-3-10
https://stackoverflow.com/questions/60613233/what-does-frozen-distribution-mean-in-scipy
In the scipy documentation, a 'frozen pdf', etc., is sometimes mentioned, but I don't know what it means. Is it a statistical concept or scipy terminology?
I agree that the docs are somewhat unclear on the issue. It seems that a frozen distribution simply fixes the distribution's parameters (such as loc and scale) for the programmer's convenience. I am unaware of the term "frozen distribution" outside of SciPy. SciPy's frozen distribution is perhaps best described here: Passing the loc and scale keywords time and again can become quite bothersome. The concept of freezing a RV is used to solve such problems. rv = gamma(1, scale=2.) By using rv we no longer have to include the scale or the shape parameters anymore. Thus, distributions can be used in one of two ways, either by passing all distribution parameters to each method call (such as we did earlier) or by freezing the parameters for the instance of the distribution. Let us check this: rv.mean(), rv.std() gives (2.0, 2.0), which is indeed what we should get. In the scipy tutorial page, we see the following line: (We explain the meaning of a frozen distribution below). The only mention of a frozen distribution after that point is the following: The main additional methods of the not frozen distribution are related to the estimation of distribution parameters: fit: maximum likelihood estimation of distribution parameters, including location and scale; fit_loc_scale: estimation of location and scale when shape parameters are given; nnlf: negative log likelihood function; expect: calculate the expectation of a function against the pdf or pmf.
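A small sketch contrasting the two usage styles the quoted documentation describes, using scipy.stats.norm instead of gamma:

from scipy import stats

# Pass loc/scale on every call:
print(stats.norm.pdf(0.5, loc=0, scale=2))
print(stats.norm.cdf(0.5, loc=0, scale=2))

# Or "freeze" the distribution once; the resulting object remembers the parameters:
rv = stats.norm(loc=0, scale=2)
print(rv.pdf(0.5))
print(rv.cdf(0.5))
print(rv.mean(), rv.std())   # (0.0, 2.0)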
9
9
60,607,436
2020-3-9
https://stackoverflow.com/questions/60607436/split-django-models-py-into-multiple-files-inside-folder-django-3-0-4
I'm trying to split the models.py into multiple files inside a folder. what is the proper way to do that ? all the methods in the internet from 8 years ago and it's not working now. UPDATE 1: test1 __init__.py admin.py apps.py tests.py views.py migrations models __init__.py comment.py like.py post.py profile.py inside init.py from django.db import models from django.conf import settings from .like import Like from .post import Post from .profile import Profile from .comment import Comment inside comment.py class Comment(models.Model): commented_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) for_post = models.ForeignKey(Post, on_delete=models.CASCADE) inside like.py class Like(models.Model): liked_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) post = models.ForeignKey(Post, on_delete=models.CASCADE) inside post.py class Post(models.Model): posted_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) inside profile.py class Profile(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) when I try to makemigrations: python manage.py makemigrations test1 I got this error: (sandbox-qFsmxchL) λ python manage.py makemigrations test1 Traceback (most recent call last): File "manage.py", line 21, in <module> main() File "manage.py", line 17, in main execute_from_command_line(sys.argv) File "C:\Users\USER\.virtualenvs\sandbox-qFsmxchL\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line utility.execute() File "C:\Users\USER\.virtualenvs\sandbox-qFsmxchL\lib\site-packages\django\core\management\__init__.py", line 377, in execute django.setup() File "C:\Users\USER\.virtualenvs\sandbox-qFsmxchL\lib\site-packages\django\__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "C:\Users\USER\.virtualenvs\sandbox-qFsmxchL\lib\site-packages\django\apps\registry.py", line 114, in populate app_config.import_models() File "C:\Users\USER\.virtualenvs\sandbox-qFsmxchL\lib\site-packages\django\apps\config.py", line 211, in import_models self.models_module = import_module(models_module_name) File "c:\users\USER\appdata\local\programs\python\python38-32\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\USER\Desktop\sandbox\test1\models\__init__.py", line 4, in <module> from .like import Like File "C:\Users\USER\Desktop\sandbox\test1\models\like.py", line 2, in <module> class Like(models.Model): NameError: name 'models' is not defined
You can do that by putting them into a models folder, like: models/ - __init__.py - model_1.py - model_2.py and __init__.py should import all models contained in the other files: from .model_1 import Model1 from .model_2 import Model2 It is up to you how to split them, depending on whether you have a lot of models and whether those are strictly related to each other. Edit: inside __init__.py from .post import Post from .like import Like from .profile import Profile from .comment import Comment inside comment.py from django.db import models from django.conf import settings from test1.models import Post class Comment(models.Model): commented_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) for_post = models.ForeignKey(Post, on_delete=models.CASCADE) inside like.py from django.db import models from django.conf import settings from test1.models import Post class Like(models.Model): liked_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) post = models.ForeignKey(Post, on_delete=models.CASCADE) inside post.py from django.db import models from django.conf import settings class Post(models.Model): posted_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) inside profile.py from django.db import models from django.conf import settings class Profile(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
7
20
60,561,959
2020-3-6
https://stackoverflow.com/questions/60561959/is-returning-a-value-other-than-self-in-enter-an-anti-pattern
Following this related question, while there are always examples of some library using a language feature in a unique way, I was wondering whether returning a value other than self in an __enter__ method should be considered an anti-pattern. The main reason why this seems to me like a bad idea is that it makes wrapping context managers problematic. For example, in Java (also possible in C#), one can wrap an AutoCloseable class in another class which will take care of cleaning up after the inner class, like in the following code snippet: try (BufferedReader reader = new BufferedReader(new FileReader("src/main/resources/input.txt"))) { return readAllLines(reader); } Here, BufferedReader wraps FileReader, and calls FileReader's close() method inside its own close() method. However, if this was Python, and FileReader would've returned an object other than self in its __enter__ method, this would make such an arrangement significantly more complicated. The following issues would have to be addressed by the writer of BufferedReader: When I need to use FileReader for my own methods, do I use FileReader directly or the object returned by its __enter__ method? What methods are even supported by the returned object? In my __exit__ method, do I need to close only the FileReader object, or the object returned in the __enter__ method? What happens if __enter__ actually returns a different object on its call? Do I now need to keep a collection of all of the different objects returned by it in case someone calls __enter__ several times on me? How do I know which one to use when I need to use on of these objects? And the list goes on. One semi-successful solution to all of these problems would be to simply avoid having one context manager class clean up after another context manager class. In my example, that would mean that we would need two nested with blocks - one for the FileReader, and one for the BufferedReader. However, this makes us write more boilerplate code, and seems significantly less elegant. All in all, these issues lead me to believe that while Python does allow us to return something other than self in the __enter__ method, this behavior should simply be avoided. Is there some official or semi-official remarks about these issues? How should a responsible Python developer write code that addresses these issues?
TLDR: Returning something other than self from __enter__ is perfectly fine and not bad practice. The introducing PEP 343 and Context Manager specification expressly list this as desired use cases. An example of a context manager that returns a related object is the one returned by decimal.localcontext(). These managers set the active decimal context to a copy of the original decimal context and then return the copy. This allows changes to be made to the current decimal context in the body of the with statement without affecting code outside the with statement. The standard library has several examples of returning something other than self from __enter__. Notably, much of contextlib matches this pattern. contextlib.contextmanager produces context managers which cannot return self, because there is no such thing. contextlib.closing wraps a thing and returns it on __enter__. contextlib.nullcontext returns a pre-defined constant threading.Lock returns a boolean decimal.localcontext returns a copy of its argument The context manager protocol makes it clear what is the context manager, and who is responsible for cleanup. Most importantly, the return value of __enter__ is inconsequential for the protocol. A rough paraphrasing of the protocol is this: When something runs cm.__enter__, it is responsible for running cm.__exit__. Notably, whatever code does that has access to cm (the context manager itself); the result of cm.__enter__ is not needed to call cm.__exit__. In other words, a code that takes (and runs) a ContextManager must run it completely. Any other code does not have to care whether its value comes from a ContextManager or not. # entering a context manager requires closing it… def managing(cm: ContextManager): value = cm.__enter__() # must clean up `cm` after this point try: yield from unmanaged(value) except BaseException as exc: if not cm.__exit__(type(exc), exc, exc.__traceback__): raise else: cm.__exit__(None, None, None) # …other code does not need to know where its values come from def unmanaged(smth: Any): yield smth When context managers wrap others, the same rules apply: If the outer context manager calls the inner one's __enter__, it must call its __exit__ as well. If the outer context manager already has the entered inner context manager, it is not responsible for cleanup. In some cases it is in fact bad practice to return self from __enter__. Returning self from __enter__ should only be done if self is fully initialised beforehand; if __enter__ runs any initialisation code, a separate object should be returned. class BadContextManager: """ Anti Pattern: Context manager is in inconsistent state before ``__enter__`` """ def __init__(self, path): self.path = path self._file = None # BAD: initialisation not complete def read(self, n: int): return self._file.read(n) # fails before the context is entered! def __enter__(self) -> 'BadContextManager': self._file = open(self.path) return self # BAD: self was not valid before def __exit__(self, exc_type, exc_val, tb): self._file.close() class GoodContext: def __init__(self, path): self.path = path self._file = None # GOOD: Inconsistent state not visible/used def __enter__(self) -> TextIO: if self._file is not None: raise RuntimeError(f'{self.__class__.__name__} is not re-entrant') self._file = open(self.path) return self._file # GOOD: value was not accessible before def __exit__(self, exc_type, exc_val, tb): self._file.close() Notably, even though GoodContext returns a different object, it is still responsible to clean up. 
Another context manager wrapping GoodContext does not need to close the return value, it just has to call GoodContext.__exit__.
7
7
60,601,412
2020-3-9
https://stackoverflow.com/questions/60601412/tensorflow-2-0-how-to-transform-from-mapdataset-after-reading-from-tfrecord-t
I've stored my training and validation data in two separate TFRecord files, in which I store 4 values: signal A (float32, shape (150,)), signal B (float32, shape (150,)), label (scalar int64), id (string). My parsing function for reading is: def _parse_data_function(sample_proto): raw_signal_description = { 'label': tf.io.FixedLenFeature([], tf.int64), 'id': tf.io.FixedLenFeature([], tf.string), } for key, item in SIGNALS.items(): raw_signal_description[key] = tf.io.FixedLenFeature(item, tf.float32) # Parse the input tf.Example proto using the dictionary above. return tf.io.parse_single_example(sample_proto, raw_signal_description) where SIGNALS is a dictionary mapping signal name -> signal shape. Then, I read the raw datasets: training_raw = tf.data.TFRecordDataset(<path to training>, compression_type='GZIP') val_raw = tf.data.TFRecordDataset(<path to validation>, compression_type='GZIP') and use map to parse the values: training_data = training_raw.map(_parse_data_function) val_data = val_raw.map(_parse_data_function) Displaying the header of training_data or val_data, I get: <MapDataset shapes: {Signal A: (150,), Signal B: (150,), id: (), label: ()}, types: {Signal A: tf.float32, Signal B: tf.float32, id: tf.string, label: tf.int64}> which is pretty much as expected. I also checked some of the values for consistency and they seemed to be correct. Now, to my issue: how do I get from the MapDataset, with its dictionary-like structure, to something that can be given as input to the model? The input to my model is the pair (Signal A, label), though in the future I will use Signal B as well. The simplest way to me seemed to be to create a generator over the elements that I want. Something like: def data_generator(mapdataset): for sample in mapdataset: yield (sample['Signal A'], sample['label']) However, with this approach I lose some of the conveniences of Datasets, such as batching, and it is also not clear how to use the same approach for the validation_data parameter of model.fit. Ideally, I would only convert between the map representation and the Dataset representation so that it iterates over pairs of Signal A tensors and labels. EDIT: My end product should be something with a header akin to: <TensorSliceDataset shapes: ((150,), ()), types: (tf.float32, tf.int64)> But not necessarily TensorSliceDataset
You can simply do this in the parse function. For example: def _parse_data_function(sample_proto): raw_signal_description = { 'label': tf.io.FixedLenFeature([], tf.int64), 'id': tf.io.FixedLenFeature([], tf.string), } for key, item in SIGNALS.items(): raw_signal_description[key] = tf.io.FixedLenFeature(item, tf.float32) # Parse the input tf.Example proto using the dictionary above. parsed = tf.io.parse_single_example(sample_proto, raw_signal_description) return parsed['Signal A'], parsed['label'] If you map this function over the TFRecordDataset, you will have a dataset of tuples (signal_a, label) instead of a dataset of dictionaries. You should be able to put this into model.fit directly.
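Sketching the final step the last sentence refers to: once map returns (signal, label) tuples, the usual Dataset conveniences such as batching and shuffling apply directly, and the datasets can be handed to Keras as-is. The batch size, shuffle buffer and epoch count below are placeholders:

training_data = training_raw.map(_parse_data_function).shuffle(1000).batch(32)
val_data = val_raw.map(_parse_data_function).batch(32)

model.fit(training_data, validation_data=val_data, epochs=10)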
9
5
60,596,102
2020-3-9
https://stackoverflow.com/questions/60596102/selected-kde-bandwidth-is-0-cannot-estimate-density
import pandas as pd import seaborn as sns ser_test = pd.Series([1,0,1,4,6,0,6,5,1,3,2,5,1]) sns.kdeplot(ser_test, cumulative=True) The above code generates the following CDF graph: But when the elements of the series are modified to: ser_test = pd.Series([1,0,1,1,6,0,6,1,1,0,2,1,1]) sns.kdeplot(ser_test, cumulative=True) I get the following error: ValueError: could not convert string to float: 'scott' RuntimeError: Selected KDE bandwidth is 0. Cannot estimate density. What does this error mean and how can I resolve it to generate a CDF (even if it is very skewed). Edit: I am using seaborn version 0.9.0 The complete trace is below: ValueError: could not convert string to float: 'scott' During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) <ipython-input-93-7cee594b4526> in <module> 1 ser_test = pd.Series([1,0,1,1,6,0,6,1,1,0,2,1,1]) ----> 2 sns.kdeplot(ser_test, cumulative=True) ~/.local/lib/python3.5/site-packages/seaborn/distributions.py in kdeplot(data, data2, shade, vertical, kernel, bw, gridsize, cut, clip, legend, cumulative, shade_lowest, cbar, cbar_ax, cbar_kws, ax, **kwargs) 689 ax = _univariate_kdeplot(data, shade, vertical, kernel, bw, 690 gridsize, cut, clip, legend, ax, --> 691 cumulative=cumulative, **kwargs) 692 693 return ax ~/.local/lib/python3.5/site-packages/seaborn/distributions.py in _univariate_kdeplot(data, shade, vertical, kernel, bw, gridsize, cut, clip, legend, ax, cumulative, **kwargs) 281 x, y = _statsmodels_univariate_kde(data, kernel, bw, 282 gridsize, cut, clip, --> 283 cumulative=cumulative) 284 else: 285 # Fall back to scipy if missing statsmodels ~/.local/lib/python3.5/site-packages/seaborn/distributions.py in _statsmodels_univariate_kde(data, kernel, bw, gridsize, cut, clip, cumulative) 353 fft = kernel == "gau" 354 kde = smnp.KDEUnivariate(data) --> 355 kde.fit(kernel, bw, fft, gridsize=gridsize, cut=cut, clip=clip) 356 if cumulative: 357 grid, y = kde.support, kde.cdf ~/.local/lib/python3.5/site-packages/statsmodels/nonparametric/kde.py in fit(self, kernel, bw, fft, weights, gridsize, adjust, cut, clip) 138 density, grid, bw = kdensityfft(endog, kernel=kernel, bw=bw, 139 adjust=adjust, weights=weights, gridsize=gridsize, --> 140 clip=clip, cut=cut) 141 else: 142 density, grid, bw = kdensity(endog, kernel=kernel, bw=bw, ~/.local/lib/python3.5/site-packages/statsmodels/nonparametric/kde.py in kdensityfft(X, kernel, bw, weights, gridsize, adjust, clip, cut, retgrid) 451 bw = float(bw) 452 except: --> 453 bw = bandwidths.select_bandwidth(X, bw, kern) # will cross-val fit this pattern? 454 bw *= adjust 455 ~/.local/lib/python3.5/site-packages/statsmodels/nonparametric/bandwidths.py in select_bandwidth(x, bw, kernel) 172 # eventually this can fall back on another selection criterion. 173 err = "Selected KDE bandwidth is 0. Cannot estimate density." --> 174 raise RuntimeError(err) 175 else: 176 return bandwidth RuntimeError: Selected KDE bandwidth is 0. Cannot estimate density.
What's going on here is that Seaborn (or rather, the library it relies on to calculate the KDE - scipy or statsmodels) isn't managing to figure out the "bandwidth", a scaling parameter used in the calculation. You can pass it manually. I played with a few values and found 1.5 gave a graph at the same scale as your previous: sns.kdeplot(ser_test, cumulative=True, bw=1.5) See also here. Worth installing statsmodels if you don't have it.
13
8
60,599,469
2020-3-9
https://stackoverflow.com/questions/60599469/typing-restrict-to-a-list-of-strings
This is Python 3.7 I have a dataclass like this: @dataclass class Action: action: str But action is actually restricted to the values "bla" and "foo". Is there a sensible way to express this?
You could use an Enum: from dataclasses import dataclass from enum import Enum class ActionType(Enum): BLA = 'bla' FOO = 'foo' @dataclass class Action: action: ActionType >>> a = Action(action=ActionType.FOO) >>> a.action.name 'FOO' >>> a.action.value 'foo'
11
10
60,598,837
2020-3-9
https://stackoverflow.com/questions/60598837/html-to-image-using-python
Here is a variable html_str; it is a string that contains HTML tags and body content. I created a .html file from this string using the code below in Python: html_file = open("filename.html", "w") html_file.write(html_str) html_file.close() Now I have an HTML file named "filename.html". I want to convert that "filename.html" to an image, named filename.jpg, with the exact contents of the HTML file. Please help me.
You can do this by using imgkit import imgkit imgkit.from_file('test.html', 'out.jpg') Or you can also use htmlcsstoimage Api # pip3 install requests import requests HCTI_API_ENDPOINT = "https://hcti.io/v1/image" HCTI_API_USER_ID = 'your-user-id' HCTI_API_KEY = 'your-api-key' data = { 'html': "<div class='box'>Hello, world!</div>", 'css': ".box { color: white; background-color: #0f79b9; padding: 10px; font-family: Roboto }", 'google_fonts': "Roboto" } image = requests.post(url = HCTI_API_ENDPOINT, data = data, auth=(HCTI_API_USER_ID, HCTI_API_KEY)) print("Your image URL is: %s"%image.json()['url']) # https://hcti.io/v1/image/7ed741b8-f012-431e-8282-7eedb9910b32
32
36
60,588,385
2020-3-8
https://stackoverflow.com/questions/60588385/plotly-how-to-group-data-and-specify-colors-using-go-box-instead-of-px-box
The question: Using plotly express you can group data and assign different colors using color=<group> in px.box(). But how can you do the same thing using plotly.graph_objects and go.box() Some details: Plotly Express is nice but sometimes we need more than the basics. So I tried to use Plotly Go instead but then I can't figure out how to box plots with boxes in groups without adding a go.Box for each group manually as in the documentation. Here is the code that I took from the documentation for Plotly Express: import plotly.express as px df = px.data.tips() fig = px.box(df, x="time", y="total_bill", color="smoker", notched=True, # used notched shape title="Box plot of total bill", hover_data=["day"] # add day column to hover data ) fig.show() How can you achieve the same thing in Plotly Go? Because the color property is not recognised as valid. import plotly.graph_objects as go df = px.data.tips() fig = go.Figure(go.Box( x=df.time, y=df.total_bill, color="smoker", notched=True, # used notched shape )) fig.show() Moreover, how can you define the colors for the boxes? Using marker_color only works with one color (can't give a list) in Plotly Go and sets all boxes to that color and it is not a valid property for Plotly Express. I tried using colorscale and that doesn't work either.
Let's jump straight to the answer and shed some light on the details afterwards. In order to set the colors for your go.box figures you'll have to split the dataset in the groups you want to study, and assign a color to each subcategory using line=dict(color=<color>). The code snippet below will show you how you can use plotlys built-in colorcycle to get the same result you would using plotly express without specifying each color for each category. You'll also have to set boxmode='group' for the figure layout to prevent the boxes from being displayed on top of eachother. Plot 1 - Using go.box: Code 1 - Using go.box: # imports import plotly.graph_objects as go import plotly.express as px fig=go.Figure() for i, smokes in enumerate(df['smoker'].unique()): df_plot=df[df['smoker']==smokes] #print(df_plot.head()) fig.add_trace(go.Box(x=df_plot['time'], y=df_plot['total_bill'], notched=True, line=dict(color=colors[i]), name='smoker=' + smokes)) fig.update_layout(boxmode='group', xaxis_tickangle=0) fig.show() Now for the... how can you define the colors for the boxes? ...part. The color of the boxes are defined by the fillcolor which defaults to a half-transparent variant of the line color. In the above example you can set a transparent green to all boxes using fillcolor='rgba(0,255,0,0.5)': Plot 2: fillcolor='rgba(0,255,0,0.5)' Or you can reference different colors of the same color cycle as you're using for the line colors using an offset version of the colors list like fillcolor=colors[i+4] Plot 3: fillcolor=colors[i+4] The absolutely simplest thing to do to set line and fillcolor would be to just set line=dict(color='black') and fillcolor='yellow' for all groups: Plot 4: Back to the basics Complete code: # imports import plotly.express as px import plotly.graph_objects as go # data df = px.data.tips() # plotly setup fig=go.Figure() # a plotly trace for each subcategory for i, smokes in enumerate(df['smoker'].unique()): df_plot=df[df['smoker']==smokes] fig.add_trace(go.Box(x=df_plot['time'], y=df_plot['total_bill'], notched=True, line=dict(color='black'), #line=dict(color=colors[i]), fillcolor='yellow', #fillcolor=colors[i+4], name='smoker=' + smokes)) # figure layout adjustments fig.update_layout(boxmode='group', xaxis_tickangle=0) fig.show() Some details about it all: How can you achieve the same thing in Plotly Go? Because the color property is not recognised as valid. If you study the documentation for go.box, you'll quickly discover that go.box has no color method, while px.box has got this: color: str or int or Series or array-like Either a name of a column in `data_frame`, or a pandas Series or array_like object. Values from this column or array_like are used to assign color to marks. In other words, what color in px.Box does for you, is to split up the dataset in, for example, unique groups in a dataset of a long format such as px.data.tips() When it comes to go.box there is no such method and you'll just have to accept the ValueError: ValueError: Invalid property specified for object of type plotly.graph_objs.Box: 'color'
8
11
60,593,624
2020-3-9
https://stackoverflow.com/questions/60593624/modify-trained-model-architecture-and-continue-training-keras
I want to train a model in a sequential manner. That is I want to train the model initially with a simple architecture and once it is trained, I want to add a couple of layers and continue training. Is it possible to do this in Keras? If so, how? I tried to modify the model architecture. But until I compile, the changes are not effective. Once I compile, all the weights are re-initialized and I lose all the trained information. All the questions in web and SO I found are either about loading a pre-trained model and continuing training or modifying the architecture of pre-trained model and then only test it. I didn't find anything related to my question. Any pointers are also highly appreciated. PS: I'm using Keras in tensorflow 2.0 package.
Without knowing the details of your model, the following snippet might help: from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, Input # Train your initial model def get_initial_model(): ... return model model = get_initial_model() model.fit(...) model.save_weights('initial_model_weights.h5') # Use Model API to create another model, built on your initial model initial_model = get_initial_model() initial_model.load_weights('initial_model_weights.h5') nn_input = Input(...) x = initial_model(nn_input) x = Dense(...)(x) # This is the additional layer, connected to your initial model nn_output = Dense(...)(x) # Combine your model full_model = Model(inputs=nn_input, outputs=nn_output) # Compile and train as usual full_model.compile(...) full_model.fit(...) Basically, you train your initial model, save it. And reload it again, and wrap it together with your additional layers using the Model API. If you are not familiar with Model API, you can check out the Keras documentation here (afaik the API remains the same for Tensorflow.Keras 2.0). Note that you need to check if your initial model's final layer's output shape is compatible with the additional layers (e.g. you might want to remove the final Dense layer from your initial model if you are just doing feature extraction).
7
4
60,571,301
2020-3-6
https://stackoverflow.com/questions/60571301/run-localhost-server-in-google-colab-notebook
I am trying to implement Tacotron speech synthesis with TensorFlow in Google Colab using code from a repo on GitHub. Below is my code; it works well up to the step of using a localhost server. How can I run a localhost server in a notebook in Google Colab? My code: !pip install tensorflow==1.3.0 import tensorflow as tf print("You are using Tensorflow",tf.__version__) !git clone https://github.com/keithito/tacotron.git cd tacotron pip install -r requirements.txt !curl https://data.keithito.com/data/speech/tacotron-20180906.tar.gz | tar xzC /tmp !python demo_server.py --checkpoint /tmp/tacotron-20180906/model.ckpt #requires localhost Unfortunately, running in local mode from Google Colab will not help me, because to do that I would need to download the data to my machine, and it is too large. Below is my last output; here I am supposed to open localhost:8888 to complete the work, so, as I mentioned before, is there any way to run localhost in Google Colaboratory?
You can do this by using tools like ngrok or remote.it. They give you a URL that you can open in any browser to reach your web server running on port 8888. Example 1: tunneling TensorBoard running on port 6006: !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip !unzip ngrok-stable-linux-amd64.zip get_ipython().system_raw('tensorboard --logdir /content/trainingdata/objectdetection/ckpt_output/trainingImatges/ --host 0.0.0.0 --port 6006 &') get_ipython().system_raw('./ngrok http 6006 &') ! curl -s http://localhost:4040/api/tunnels | python3 -c \ "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])" Running this installs ngrok on Colab and produces a link like http://c11e1b53.ngrok.io/ Documentation for ngrok
17
10
60,584,948
2020-3-8
https://stackoverflow.com/questions/60584948/pythonic-way-of-doing-composition-aliases
What is the most pythonic and correct way of doing composition aliases? Here's a hypothetical scenario: class House: def cleanup(self, arg1, arg2, kwarg1=False): # do something class Person: def __init__(self, house): self.house = house # aliases house.cleanup # 1. self.cleanup_house = self.house.cleanup # 2. def cleanup_house(self, arg1, arg2, kwarg1=False): return self.house.cleanup(arg1=arg1, arg2=arg2, kwarg1=kwarg1) AFAIK with #1 my tested editors understand these just as fine as #2 - auto completion, doc strings etc. Are there any down-sides to #1 approach? Which way is more correct from python's point of view? To expand on method #1 unsettable and type hinted variant would be immune to all of the issues pointed out in the comments: class House: def cleanup(self, arg1, arg2, kwarg1=False): """clean house is nice to live in!""" pass class Person: def __init__(self, house: House): self._house = house # aliases self.cleanup_house = self.house.cleanup @property def house(self): return self._house
There are a number of problems with the first method: The alias won't update when the attribute it refers to changes unless you jump through extra hoops. You could, for example, make house a property with a setter, but that is non-trivial work for something that shouldn't require it. See the end of this answer for a sample implementation. cleanup_house will not be inheritable. A function object defined in a class is a non-data descriptor that can be inherited and overridden, as well as be bound to an instance. An instance attribute like in the first approach is not present in the class at all. The fact that it is a bound method is incidental. A child class will not be able to access super().cleanup_house, for a concrete example. person.cleanup_house.__name__ != 'cleanup_house'. This is not something you check often, but when you do, you would expect the function name to be cleanup. The good news is that you don't have to repeat signatures multiple times to use approach #2. Python offers the very convenient splat (*)/splatty-splat (**) notation for delegating all argument checking to the method being wrapped: def cleanup_house(self, *args, **kwargs): return self.house.cleanup(*args, **kwargs) And that's it. All regular and default arguments are passed through as-is. This is the reason that #2 is by far the more pythonic approach. I have no idea how it will interact with editors that support type hints unless you copy the method signature. One thing that may be a problem is that cleanup_house.__doc__ is not the same as house.cleanup.__doc__. This could potentially merit a conversion of house to a property, whose setter assigns cleanup_house.__doc__. To address issue 1. (but not 2. or 3.), you can implement house as a property with a setter. The idea is to update the aliases whenever the house attribute changes. This is not a good idea in general, but here is an alternative implementation to what you have in the question that will likely work a bit better: class House: def cleanup(self, arg1, arg2, kwarg1=False): """clean house is nice to live in!""" pass class Person: def __init__(self, house: House): self.house = house # use the property here @property def house(self): return self._house @house.setter def house(self, value): self._house = house self.cleanup_house = self._house.cleanup
8
8
60,582,073
2020-3-7
https://stackoverflow.com/questions/60582073/better-way-to-check-multiple-columns-with-the-same-condition-in-pandas
I got the output, but I'm trying to find a more efficient way to do this: (df['budget'] == 0).sum(), (df['revenue'] == 0).sum(), (df['budget_adj'] == 0).sum(), (df['revenue_adj'] == 0).sum() The output is (5674, 5993, 5676, 5993).
You can compare the columns in bulk and sum these up column-wise: (df[['budget', 'revenue', 'budget_adj', 'revenue_adj']] == 0).sum(axis=0)
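A quick illustration on made-up data, to show the shape of the result (a Series with one zero-count per column):

import pandas as pd

df = pd.DataFrame({'budget':      [0, 10, 0],
                   'revenue':     [5, 0, 0],
                   'budget_adj':  [0, 1, 2],
                   'revenue_adj': [0, 0, 0]})

print((df[['budget', 'revenue', 'budget_adj', 'revenue_adj']] == 0).sum(axis=0))
# budget         2
# revenue        2
# budget_adj     1
# revenue_adj    3
# dtype: int64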
8
3
60,549,865
2020-3-5
https://stackoverflow.com/questions/60549865/what-causes-a-to-overallocate
Apparently list(a) doesn't overallocate, [x for x in a] overallocates at some points, and [*a] overallocates all the time? Here are sizes n from 0 to 12 and the resulting sizes in bytes for the three methods:
n    list(a)   [x for x in a]   [*a]
0    56        56               56
1    64        88               88
2    72        88               96
3    80        88               104
4    88        88               112
5    96        120              120
6    104       120              128
7    112       120              136
8    120       120              152
9    128       184              184
10   136       184              192
11   144       184              200
12   152       184              208
Computed like this, reproducible at repl.it, using Python 3.8: from sys import getsizeof for n in range(13): a = [None] * n print(n, getsizeof(list(a)), getsizeof([x for x in a]), getsizeof([*a])) So: How does this work? How does [*a] overallocate? Actually, what mechanism does it use to create the result list from the given input? Does it use an iterator over a and use something like list.append? Where is the source code? (Colab with data and code that produced the images.) Plots omitted: zooming in to smaller n, and zooming out to larger n.
[*a] is internally doing the C equivalent of: Make a new, empty list Call newlist.extend(a) Returns list. So if you expand your test to: from sys import getsizeof for n in range(13): a = [None] * n l = [] l.extend(a) print(n, getsizeof(list(a)), getsizeof([x for x in a]), getsizeof([*a]), getsizeof(l)) Try it online! you'll see the results for getsizeof([*a]) and l = []; l.extend(a); getsizeof(l) are the same. This is usually the right thing to do; when extending you're usually expecting to add more later, and similarly for generalized unpacking, it's assumed that multiple things will be added one after the other. [*a] is not the normal case; Python assumes there are multiple items or iterables being added to the list ([*a, b, c, *d]), so overallocation saves work in the common case. By contrast, a list constructed from a single, presized iterable (with list()) may not grow or shrink during use, and overallocating is premature until proven otherwise; Python recently fixed a bug that made the constructor overallocate even for inputs with known size. As for list comprehensions, they're effectively equivalent to repeated appends, so you're seeing the final result of the normal overallocation growth pattern when adding an element at a time. To be clear, none of this is a language guarantee. It's just how CPython implements it. The Python language spec is generally unconcerned with specific growth patterns in list (aside from guaranteeing amortized O(1) appends and pops from the end). As noted in the comments, the specific implementation changes again in 3.9; while it won't affect [*a], it could affect other cases where what used to be "build a temporary tuple of individual items and then extend with the tuple" now becomes multiple applications of LIST_APPEND, which can change when the overallocation occurs and what numbers go into the calculation.
149
87
60,574,862
2020-3-7
https://stackoverflow.com/questions/60574862/calculating-pairwise-euclidean-distance-between-all-the-rows-of-a-dataframe
How can I calculate the Euclidean distance between all the rows of a dataframe? I am trying this code, but it is not working: zero_data = data distance = lambda column1, column2: pd.np.linalg.norm(column1 - column2) result = zero_data.apply(lambda col1: zero_data.apply(lambda col2: distance(col1, col2))) result.head() This is what my (44062 by 278) dataframe looks like:
To compute the Eucledian distance between two rows i and j of a dataframe df: np.linalg.norm(df.loc[i] - df.loc[j]) To compute it between consecutive rows, i.e. 0 and 1, 1 and 2, 2 and 3, ... np.linalg.norm(df.diff(axis=0).drop(0), axis=1) If you want to compute it between all the rows, i.e. 0 and 1, 0 and 2, ..., 1 and 1, 1 and 2 ..., then you have to loop through all the combinations of i and j (keep in mind that for 44062 rows there are 970707891 such combinations so using a for-loop will be very slow): import itertools for i, j in itertools.combinations(df.index, 2): d_ij = np.linalg.norm(df.loc[i] - df.loc[j]) Edit: Instead, you can use scipy.spatial.distance.cdist which computes distance between each pair of two collections of inputs: from scipy.spatial.distance import cdist cdist(df, df, 'euclid') This will return you a symmetric (44062 by 44062) matrix of Euclidian distances between all the rows of your dataframe. The problem is that you need a lot of memory for it to work (at least 8*44062**2 bytes of memory, i.e. ~16GB). So a better option is to use pdist from scipy.spatial.distance import pdist pdist(df.values, 'euclid') which will return an array (of size 970707891) of all the pairwise Euclidean distances between the rows of df. P.s. Don't forget to ignore the 'Actual_data' column in the computations of distances. E.g. you can do the following: data = df.drop('Actual_Data', axis=1).values and then cdist(data, data, 'euclid') or pdist(data, 'euclid'). You can also create another dataframe with distances like this: data = df.drop('Actual_Data', axis=1).values d = pd.DataFrame(itertools.combinations(df.index, 2), columns=['i','j']) d['dist'] = pdist(data, 'euclid') i j dist 0 0 1 ... 1 0 2 ... 2 0 3 ... 3 0 4 ... ...
8
9
60,560,093
2020-3-6
https://stackoverflow.com/questions/60560093/monkey-patching-class-with-inherited-classes-in-python
After reading the answers to the question about monkey-patching classes in Python I tried to apply the advised solution to the following case. Imagine that we have a module a.py class A(object): def foo(self): print(1) class AA(A): pass and let us try to monkey patch it as follows. It works when we monkey patch class A: >>> import a >>> class B(object): ... def foo(self): ... print(3) ... >>> a.A = B >>> x = a.A() >>> x.foo() 3 But if we try the inherited class, it turns to be not patched: >>> y = a.AA() >>> y.foo() 1 Is there any way to monkey patch the class with all its inherited classes? EDIT For now, the best solution for me is as follows: >>> class AB(B, a.AA): ... pass ... >>> a.AA = AB >>> x = a.AA() >>> x.foo() 3 Any complex structure of a.AA will be inherited and the only difference between AB and a.AA will be the foo() method. In this way, we don't modify any internal class attributes (like __base__ or __dict__). The only remaining drawback is that we need to do that for each of the inherited classes. Is it the best way to do this?
You need to explicitly overwrite the tuple of base classes in a.AA, though I don't recommend modifying classes like this. >>> import a >>> class B: ... def foo(self): ... print(2) ... >>> a.AA.__bases__ = (B,) >>> a.AA().foo() 2 This will also be reflected in a.A.__subclasses__() (although I am not entirely sure as to how that works; the fact that it is a method suggests that it computes this somehow at runtime, rather than simply returning a value that was modified by the original definition of AA). It appears that the bases classes in a class statement are simply remembered, rather than used, until some operation needs them (e.g. during attribute lookup). There may be some other subtle corner cases that aren't handled as smoothly: caveat programmator.
8
5
60,571,675
2020-3-6
https://stackoverflow.com/questions/60571675/setting-dynamic-folder-and-report-name-in-pytest
I have a problem with setting report name and folder with it dynamically in Python's pytest. For example: I've run all pytest's tests @ 2020-03-06 21:50 so I'd like to have my report stored in folder 20200306 with name report_2150.html. I want it to be automated and triggered right after the tests are finished. I'm working in VS Code and I'm aiming to share my work with colleagues with no automation experience so I'm aiming to use it as "click test to start". My project structure: webtools/ |── .vscode/ |──── settings.json |── drivers/ |── pages/ |── reports/ |── tests/ |──── __init__.py |──── config.json |──── conftest.py |──── test_1.py |──── test_2.py |── setup.py Code samples: settings.json { "python.linting.pylintEnabled": false, "python.linting.flake8Enabled": true, "python.linting.enabled": true, "python.pythonPath": "C:\\Users\\user\\envs\\webtools\\Scripts\\python.exe", "python.testing.pytestArgs": [ "tests", "--self-contained-html", "--html=./reports/tmp_report.html" ], "python.testing.unittestEnabled": false, "python.testing.nosetestsEnabled": false, "python.testing.pytestEnabled": true, "python.testing.unittestArgs": [ "-v", "-s", "./tests", "-p", "test_*.py" ] } config.json { "browser": "chrome", "wait_time": 10 } conftest.py import json import pytest from datetime import datetime import time import shutil import os from selenium import webdriver from selenium.webdriver import Chrome CONFIG_PATH = 'tests/config.json' DEFAULT_WAIT_TIME = 10 SUPPORTED_BROWSERS = ['chrome', 'explorer'] @pytest.fixture(scope='session') def config(): # Read the JSON config file and returns it as a parsed dict with open(CONFIG_PATH) as config_file: data = json.load(config_file) return data @pytest.fixture(scope='session') def config_browser(config): # Validate and return the browser choice from the config data if 'browser' not in config: raise Exception('The config file does not contain "browser"') elif config['browser'] not in SUPPORTED_BROWSERS: raise Exception(f'"{config["browser"]}" is not a supported browser') return config['browser'] @pytest.fixture(scope='session') def config_wait_time(config): # Validate and return the wait time from the config data return config['wait_time'] if 'wait_time' in config else DEFAULT_WAIT_TIME @pytest.fixture def browser(config_browser, config_wait_time): # Initialize WebDriver if config_browser == 'chrome': driver = webdriver.Chrome(r"./drivers/chromedriver.exe") elif config_browser == 'explorer': driver = webdriver.Ie(r"./drivers/IEDriverServer.exe") else: raise Exception(f'"{config_browser}" is not a supported browser') # Wait implicitly for elements to be ready before attempting interactions driver.implicitly_wait(config_wait_time) # Maximize window for test driver.maximize_window() # Return the driver object at the end of setup yield driver # For cleanup, quit the driver driver.quit() @pytest.fixture(scope='session') def cleanup_report(): timestamp = datetime.now().strftime('%Y%m%d_%H%M%S') os.chdir("./reports") os.mkdir(timestamp) yield shutil.move("./tmp_report.html", "./%s/test_report.html" % timestamp) In current situation the report is created as tmp_report.html in the reports folder, but I don't know how I can force running cleanup_report() after all tests are completed and tmp_report.html is present and complete in folder. For checking if complete I assume I'd have to verify if all html tags have their closing (or at least <html> one). Can somebody help me with that? If you need some further code portions I'll provide them as soon as possible. 
Thank you in advance!
You can customize the plugin options in a custom impl of the pytest_configure hook. Put this example code in a conftest.py file in your project root dir: from datetime import datetime from pathlib import Path import pytest @pytest.hookimpl(tryfirst=True) def pytest_configure(config): # set custom options only if none are provided from command line if not config.option.htmlpath: now = datetime.now() # create report target dir reports_dir = Path('reports', now.strftime('%Y%m%d')) reports_dir.mkdir(parents=True, exist_ok=True) # custom report file report = reports_dir / f"report_{now.strftime('%H%M')}.html" # adjust plugin options config.option.htmlpath = report config.option.self_contained_html = True If you want to completely ignore what's passed from command line, remove the if not config.option.htmlpath: condition. If you want to stick with your current impl, notice that on fixtures teardown, pytest-html hasn't written the report yet. Move the code from cleanup_report to a custom impl of the pytest_sessionfinish hook to ensure pytest-html has already written the default report file: @pytest.hookimpl(trylast=True) def pytest_sessionfinish(session, exitstatus): shutil.move(...)
9
11
60,571,475
2020-3-6
https://stackoverflow.com/questions/60571475/install-or-suggest-to-missing-imported-python-modules-on-vs-code-like-pycharm
When we import a module that isn't currently installed in the Python environment we're using, PyCharm suggests 'install missing module'; if you click install, it installs it automatically... Is there any plugin for VS Code that does that, or something like that? For example, I want to import emoji and, like PyCharm does, have the editor suggest installing the missing module so I won't have to run pip install manually. Is there a plugin that does this for VS Code? Thank you
I think the feature you're requesting is already in progress for the vscode-python extension: https://github.com/microsoft/vscode-python/issues/8062 I suggest following that issue to see when it's ready for production.
9
5
60,567,679
2020-3-6
https://stackoverflow.com/questions/60567679/save-keras-model-weights-directly-to-bytes-memory
Keras allows for saving entire models or just model weights (see thread). When saving the weights, they must be saved to a file, eg: model = keras_model() model.save_weights('/tmp/model.h5') Instead of writing to file, I'd like to just save the bytes into memory. Something like model.dump_weights() Tensorflow doesn't seem to have this, so as a workaround I'm writing to disk and then reading into memory: temp = '/tmp/weights.h5' model.save_weights(temp) with open(temp, 'rb') as f: weightbytes = f.read() Any way to avoid this roundabout?
Thanks @ddoGas for pointing out the model.get_weights() method, which returns a list of weights that can then be serialized. Just some context for why I am not saving the model in the conventional way: we are working with model wrapper classes that associate a model with custom behavior. For example, before prediction occurs special validation is needed: class CNN: ... def predict(): self.do_special_validation() self.model.predict() Hence, we're serializing the CNN class, not just the underlying model. This is the solution used to pickle the entire object. (pickle.dumps(CNN()) fails directly, otherwise we'd just use that.) import pickle def serialize(cnn): return pickle.dumps({ "weights": cnn.model.get_weights(), "cnnclass": cnn.__class__ }) def deserialize(cnn_bytes): loaded = pickle.loads(cnn_bytes) weights, cnnclass = loaded['weights'], loaded['cnnclass'] cnninstance = cnnclass() cnninstance.model.set_weights(weights) return cnninstance Works well, thanks! PS: note the use of cnn.__class__ because I don't want to bind this to the CNN class directly; it works in general for any class that has a .model attribute.
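A quick round trip with the helpers above, just to show the intended usage (CNN here is the wrapper class sketched in this answer, not a library class):
cnn = CNN()                   # wrapper that builds its Keras model internally
blob = serialize(cnn)         # bytes: the weights plus the wrapper class
restored = deserialize(blob)  # fresh wrapper instance with the same weights loaded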
7
0
60,564,570
2020-3-6
https://stackoverflow.com/questions/60564570/how-to-subset-list-elements-that-lie-between-two-missing-values
With a list containing some missing values such as this: [10, 11, 12,np.nan, 14, np.nan, 16, 17, np.nan, 19, np.nan] How can you subset the values that are positioned between two missing (nan) values? I know how to do it with a for loop : # imports import numpy as np # input lst=[10,11,12,np.nan, 14, np.nan, 16, 17, np.nan, 19, np.nan] # define an empty list and build on that in a For Loop subset=[] for i, elem in enumerate(lst): if np.isnan(lst[i-1]) and np.isnan(lst[i+1]): subset.extend([elem]) print(subset) # output # [14, 19] Any suggestions on how to do this in a less cumbersome way?
Use list comprehension import numpy as np lst=[10,11,12,np.nan, 14, np.nan, 16, 17, np.nan, np.nan, np.nan] subset = [elem for i, elem in enumerate(lst) if i and i < len(lst)-1 and np.isnan(lst[i-1]) and np.isnan(lst[i+1]) and not np.isnan(elem)] print(subset) Corrected the mistakes that were pointed out by other contributors. This should work for all the cases now.
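If the data is already (or can cheaply become) a NumPy array, a vectorized alternative to the comprehension is to combine shifted boolean masks; this is a sketch using the question's original list:
import numpy as np

lst = [10, 11, 12, np.nan, 14, np.nan, 16, 17, np.nan, 19, np.nan]
arr = np.asarray(lst, dtype=float)
m = np.isnan(arr)

# keep interior elements that are not NaN but whose left and right neighbours both are
keep = ~m[1:-1] & m[:-2] & m[2:]
subset = arr[1:-1][keep].tolist()
print(subset)  # [14.0, 19.0] (floats, because the array has to hold NaN)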
7
6
60,554,339
2020-3-5
https://stackoverflow.com/questions/60554339/find-distance-to-nearest-zero-in-numpy-array
Let's say I have a NumPy array: x = np.array([0, 1, 2, 0, 4, 5, 6, 7, 0, 0]) At each index, I want to find the distance to nearest zero value. If the position is a zero itself then return zero as a distance. Afterward, we are only interested in distances to the nearest zero that is to the right of the current position. The super naive approach would be something like: out = np.full(x.shape[0], x.shape[0]-1) for i in range(x.shape[0]): j = 0 while i + j < x.shape[0]: if x[i+j] == 0: break j += 1 out[i] = j And the output would be: array([0, 2, 1, 0, 4, 3, 2, 1, 0, 0]) I'm noticing a countdown/decrement pattern in the output in between the zeros. So, I might be able to do use the locations of the zeros (i.e., zero_indices = np.argwhere(x == 0).flatten()) What is the fastest way to get the desired output in linear time?
Approach #1 : Searchsorted to the rescue for linear-time in a vectorized manner (before numba guys come in)! mask_z = x==0 idx_z = np.flatnonzero(mask_z) idx_nz = np.flatnonzero(~mask_z) # Cover for the case when there's no 0 left to the right # (for same results as with posted loop-based solution) if x[-1]!=0: idx_z = np.r_[idx_z,len(x)] out = np.zeros(len(x), dtype=int) idx = np.searchsorted(idx_z, idx_nz) out[~mask_z] = idx_z[idx] - idx_nz Approach #2 : Another with some cumsum - mask_z = x==0 idx_z = np.flatnonzero(mask_z) # Cover for the case when there's no 0 left to the right if x[-1]!=0: idx_z = np.r_[idx_z,len(x)] out = idx_z[np.r_[False,mask_z[:-1]].cumsum()] - np.arange(len(x)) Alternatively, last step of cumsum could be replaced by repeat functionality - r = np.r_[idx_z[0]+1,np.diff(idx_z)] out = np.repeat(idx_z,r)[:len(x)] - np.arange(len(x)) Approach #3 : Another with mostly just cumsum - mask_z = x==0 idx_z = np.flatnonzero(mask_z) pp = np.full(len(x), -1) pp[idx_z[:-1]] = np.diff(idx_z) - 1 if idx_z[0]==0: pp[0] = idx_z[1] else: pp[0] = idx_z[0] out = pp.cumsum() # Handle boundary case and assigns 0s at original 0s places out[idx_z[-1]:] = np.arange(len(x)-idx_z[-1],0,-1) out[mask_z] = 0
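For completeness, the linear-time idea can also be written as a plain reverse scan and, assuming numba is installed, jit-compiled; this sketch reproduces the posted loop's semantics (the distance falls back to n - i when no zero lies to the right):
import numpy as np
from numba import njit  # assumption: numba is available; drop the decorator otherwise

@njit
def dist_to_next_zero(x):
    n = x.shape[0]
    out = np.empty(n, np.int64)
    nxt = n  # index of the nearest zero to the right; n if none seen yet
    for i in range(n - 1, -1, -1):
        if x[i] == 0:
            nxt = i
        out[i] = nxt - i
    return out

x = np.array([0, 1, 2, 0, 4, 5, 6, 7, 0, 0])
print(dist_to_next_zero(x))  # [0 2 1 0 4 3 2 1 0 0]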
14
10
60,553,723
2020-3-5
https://stackoverflow.com/questions/60553723/why-do-queryset0-and-queryset-first-return-different-records
I discovered today that I can access elements in a queryset by referencing them with an index, i.e. queryset[n]. However, immediately after I discovered that queryset[0] does not return the same record as queryset.first(). Why is this, and is one of those more "correct"? (I know that .first() is faster, but other than that) Python 3.7.4 django 1.11.20
There is a small semantical difference between qs[0] and qs.first(). If you did not specify an order in the queryset yourself, then Django will order the queryset itself by primary key before fetching the first element. Furthermore .first() will return None if the queryset is empty. Whereas qs[0] will raise an IndexError. The claim that .first() is faster however is not True. In fact if you use qs[n], then Django will, behind the curtains fetch the record by slicing with qs[n:n+1], hence it will, given the database backend supports this, make a query with LIMIT 1 OFFSET n, and thus fetch one record, just like .first() will do. If the queryset is already retrieved, it will furthermore make no extra queries at all, since the data is already cached. You can see the implementation on GitHub: def first(self): """ Returns the first object of a query, returns None if no match is found. """ objects = list((self if self.ordered else self.order_by('pk'))[:1]) if objects: return objects[0] return None As you can see, if the queryset is already ordered (self.ordered is True, then we take self[:1], check if there is a record, and if so return it. If not we return None. The code for retrieving an item at a specific index is more cryptic. It essentially will set the limits from k to k+1, materialize the item, and return the first item, as we can see in the source code [GitHub]: def __getitem__(self, k): """ Retrieves an item or slice from the set of results. """ if not isinstance(k, (slice,) + six.integer_types): raise TypeError assert ((not isinstance(k, slice) and (k >= 0)) or (isinstance(k, slice) and (k.start is None or k.start >= 0) and (k.stop is None or k.stop >= 0))), \ "Negative indexing is not supported." if self._result_cache is not None: return self._result_cache[k] if isinstance(k, slice): qs = self._clone() if k.start is not None: start = int(k.start) else: start = None if k.stop is not None: stop = int(k.stop) else: stop = None qs.query.set_limits(start, stop) return list(qs)[::k.step] if k.step else qs qs = self._clone() qs.query.set_limits(k, k + 1) return list(qs)[0]
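To make the semantic difference concrete, a small illustration (Author is a hypothetical model, not taken from the question):
qs = Author.objects.filter(name__startswith='A')  # no explicit ordering

qs.first()   # implicitly ordered by pk, returns None if nothing matches
qs[0]        # LIMIT 1 on the query as-is, raises IndexError if nothing matches

qs.order_by('name').first()  # already ordered, so no extra pk ordering is added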
7
9
60,534,999
2020-3-4
https://stackoverflow.com/questions/60534999/how-to-solve-spanish-lemmatization-problems-with-spacy
When trying to lemmatize a Spanish csv with more than 60,000 words, spaCy does not lemmatize certain words correctly; I understand that the model is not 100% accurate. However, I have not found any other solution, since NLTK does not provide a Spanish core. A friend tried to ask this question on the Spanish Stack Overflow; however, that community is quite small compared with this one, and we got no answers about it. code: nlp = spacy.load('es_core_news_sm') def lemmatizer(text): doc = nlp(text) return ' '.join([word.lemma_ for word in doc]) df['column'] = df['column'].apply(lambda x: lemmatizer(x)) I tried to lemmatize certain words that came out wrong to prove that spaCy is not doing it correctly: text = 'personas, ideas, cosas' # translation: persons, ideas, things print(lemmatizer(text)) # Current output: personar , ideo , coser # translation: personify, ideo, sew # The expected output should be: persona, idea, cosa # translation: person, idea, thing
Unlike the English lemmatizer, spaCy's Spanish lemmatizer does not use PoS information at all. It relies on a lookup list of inflected forms and lemmas (e.g., ideo → idear, ideas → idear, idea → idear, ideamos → idear, etc.). It will just output the first match in the list, regardless of its PoS. I actually developed spaCy's new rule-based lemmatizer for Spanish, which takes PoS and morphological information (such as tense, gender, number) into account. These fine-grained rules make it a lot more accurate than the current lookup lemmatizer. It will be released soon! Meanwhile, you can maybe use Stanford CoreNLP or FreeLing.
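Until that rule-based lemmatizer is released, one possible stopgap, sketched here on the assumption that the stanza package (Stanford NLP's Python library) is an acceptable stand-in for Stanford CoreNLP, would be:
import stanza

stanza.download('es')  # one-time model download
nlp = stanza.Pipeline(lang='es', processors='tokenize,mwt,pos,lemma')

doc = nlp('personas, ideas, cosas')
print(' '.join(word.lemma for sent in doc.sentences for word in sent.words))
# expected output along the lines of: persona , idea , cosa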
10
21
60,551,227
2020-3-5
https://stackoverflow.com/questions/60551227/how-to-check-if-a-python-object-is-a-numpy-ndarray
I have a function that takes an array as input and does some computation on it. The input array may or may not be a numpy ndarray (may be a list, pandas object, etc). In the function, I convert the input array (regardless of its type) to a numpy ndarray. But this step may be computationally expensive for large arrays, especially if the function is called multiple times in a for loop. Hence, I want to convert the input array to numpy ndarray ONLY if it is not already a numpy ndarray. How can I do this? import numpy as np def myfunc(array): # Check if array is not already numpy ndarray # Not correct way, this is where I need help if type(array) != 'numpy.ndarray': array = np.array(array) # The computation on array # Do something with array new_array = other_func(array) return new_array
It is simpler to use asarray: def myfunc(arr): arr = np.asarray(arr) # The computation on array # Do something with array new_array = other_func(arr) return new_array If arr is already an array, asarray does not make a copy, so there's no penalty to passing it through asarray. Let numpy do the testing for you. numpy functions often pass their inputs through asarray (or a variant) just to make sure the type is what they expect.
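A quick check of the no-copy behaviour (plain NumPy, should run as-is):
import numpy as np

a = np.arange(5)
b = np.asarray(a)
print(b is a)              # True: an existing ndarray is passed through untouched

c = np.asarray([1, 2, 3])  # a plain list is converted to a new ndarray
print(type(c))             # <class 'numpy.ndarray'>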
8
4
60,536,592
2020-3-5
https://stackoverflow.com/questions/60536592/how-do-you-model-something-over-time-in-python
I'm looking for a data type to help me model resource availability over fluid time. We're open from 9 til 6 and can handle 5 parallel jobs. In my imaginary programming land, I've just initialised an object with that range with a value of 3 across the board. We have appointments on the books, each with start and end times. I need to punch each of those out of the day That leaves me with a graph of sorts where the availability goes up and down, but ultimately allowing me to quickly find time ranges where there is remaining availability. I've come at this problem from many directions but always come back to the fundamental problem of not knowing a data type to model something as simple as an integer over time. I could convert my appointments into time series events (eg appointment arrives means -1 availability, appointment leaves means +1) but I still don't know how to manipulate that data so that I can distil out periods where the availability is greater than zero. Somebody's left a close-vote citing lack of focus, but my goal here seems pretty singular so I'll try to explain the problem graphically. I'm trying to infer the periods of time where the number of active jobs falls below a given capacity. Turning a range of known parallel capacity (eg 3 between 9-6) and a list of jobs with variable start/ends, into a list of time ranges of available time.
My approach would be to build the time series, but include the availability object with a value set to the availability in that period. availability: [ { "start": 09:00, "end": 12:00, "value": 4 }, { "start": 12:00, "end": 13:00, "value": 3 } ] data: [ { "start": 10:00, "end": 10:30, } ] Build the time series indexing on start/end times, with the value as the value. A start time for availability is +value, end time -value. While for an event, it'd be -1 or +1 as you said. "09:00" 4 "10:00" -1 "10:30" 1 "12:00" -4 "12:00" 3 "13:00" -3 Then group by index, sum and cumulative sum. getting: "09:00" 4 "10:00" 3 "10:30" 4 "12:00" 3 "13:00" 0 Example code in pandas: import numpy as np import pandas as pd data = [ { "start": "10:00", "end": "10:30", } ] breakpoints = [ { "start": "00:00", "end": "09:00", "value": 0 }, { "start": "09:00", "end": "12:00", "value": 4 }, { "start": "12:00", "end": "12:30", "value": 4 }, { "start": "12:30", "end": "13:00", "value": 3 }, { "start": "13:00", "end": "00:00", "value": 0 } ] df = pd.DataFrame(data, columns=['start', 'end']) print(df.head(5)) starts = pd.DataFrame(data, columns=['start']) starts["value"] = -1 starts = starts.set_index("start") ends = pd.DataFrame(data, columns=['end']) ends["value"] = 1 ends = ends.set_index("end") breakpointsStarts = pd.DataFrame(breakpoints, columns=['start', 'value']).set_index("start") breakpointsEnds = pd.DataFrame(breakpoints, columns=['end', 'value']) breakpointsEnds["value"] = breakpointsEnds["value"].transform(lambda x: -x) breakpointsEnds = breakpointsEnds.set_index("end") countsDf = pd.concat([starts, ends, breakpointsEnds, breakpointsStarts]).sort_index() countsDf = countsDf.groupby(countsDf.index).sum().cumsum() print(countsDf) # Periods that are available df = countsDf df["available"] = df["value"] > 0 # Indexes where the value of available changes # Alternatively swap out available for the value. time_changes = df["available"].diff()[df["available"].diff() != 0].index.values newDf = pd.DataFrame(time_changes, columns= ["start"]) # Setting the end column to the value of the next start newDf['end'] = newDf.transform(np.roll, shift=-1) print(newDf) # Join this back in to get the actual value of available mergedDf = newDf.merge(df, left_on="start", right_index=True) print(mergedDf) returning at the end: start end value available 0 00:00 09:00 0 False 1 09:00 13:00 4 True 2 13:00 00:00 0 False
8
8
60,521,925
2020-3-4
https://stackoverflow.com/questions/60521925/how-to-detect-the-horizontal-and-vertical-lines-of-a-table-and-eliminate-the-noi
I am trying to get the horizontal and vertical lines of the table in an image in order to extract the texts in cells. Here's a picture I use: I use the code below to extract the vertical and horizontal lines: img = cv2.imread(img_for_box_extraction_path, 0) # Read the image (thresh, img_bin) = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) # Thresholding the image img_bin = 255-img_bin # Invert the image cv2.imwrite("Image_bin_2.jpg",img_bin) # Defining a kernel length kernel_length = np.array(img).shape[1]//140 # A verticle kernel of (1 X kernel_length), which will detect all the verticle lines from the image. verticle_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, kernel_length)) # A horizontal kernel of (kernel_length X 1), which will help to detect all the horizontal line from the image. hori_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_length, 1)) # A kernel of (3 X 3) ones. kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)) # Morphological operation to detect verticle lines from an image img_temp1 = cv2.erode(img_bin, verticle_kernel, iterations=3) verticle_lines_img = cv2.dilate(img_temp1, verticle_kernel, iterations=3) cv2.imwrite("verticle_lines_2.jpg",verticle_lines_img) # Morphological operation to detect horizontal lines from an image img_temp2 = cv2.erode(img_bin, hori_kernel, iterations=3) horizontal_lines_img = cv2.dilate(img_temp2, hori_kernel, iterations=3) cv2.imwrite("horizontal_lines_2.jpg",horizontal_lines_img) The pictures below are the horizontal lines and vertical lines: I use the code below to add two image together # Weighting parameters, this will decide the quantity of an image to be added to make a new image. alpha = 0.5 beta = 1.0 - alpha # This function helps to add two image with specific weight parameter to get a third image as summation of two image. img_final_bin = cv2.addWeighted(verticle_lines_img, alpha, horizontal_lines_img, beta, 0.0) img_final_bin = cv2.erode(~img_final_bin, kernel, iterations=2) (thresh, img_final_bin) = cv2.threshold(img_final_bin, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) # For Debugging # Enable this line to see verticle and horizontal lines in the image which is used to find boxes cv2.imwrite("img_final_bin_2.jpg",img_final_bin) However, I get a picture like this: How do I remove the noise and get a better result? Thanks in advance.
Here's a simple method: Binary image Detected horizontal Detected vertical Combined masks Lines to be removed in green Result import cv2 import numpy as np # Load image, grayscale, Gaussian blur, Otsu's threshold image = cv2.imread('1.jpg') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (3,3), 0) thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Detect horizontal lines horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1)) horizontal_mask = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=1) # Detect vertical lines vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50)) vertical_mask = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=1) # Combine masks and remove lines table_mask = cv2.bitwise_or(horizontal_mask, vertical_mask) image[np.where(table_mask==255)] = [255,255,255] cv2.imshow('thresh', thresh) cv2.imshow('horizontal_mask', horizontal_mask) cv2.imshow('vertical_mask', vertical_mask) cv2.imshow('table_mask', table_mask) cv2.imshow('image', image) cv2.waitKey()
10
14
60,537,557
2020-3-5
https://stackoverflow.com/questions/60537557/how-to-make-a-simple-python-rest-server-and-client
I'm attempting to make the simplest possible REST API server and client, with both the server and client being written in Python and running on the same computer. From this tutorial: https://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask I'm using this for the server: # server.py from flask import Flask, jsonify app = Flask(__name__) tasks = [ { 'id': 1, 'title': u'Buy groceries', 'description': u'Milk, Cheese, Pizza, Fruit, Tylenol', 'done': False }, { 'id': 2, 'title': u'Learn Python', 'description': u'Need to find a good Python tutorial on the web', 'done': False } ] @app.route('/todo/api/v1.0/tasks', methods=['GET']) def get_tasks(): return jsonify({'tasks': tasks}) if __name__ == '__main__': app.run(debug=True) If I run this from the command line: curl -i http://localhost:5000/todo/api/v1.0/tasks I get this: HTTP/1.0 200 OK Content-Type: application/json Content-Length: 317 Server: Werkzeug/0.16.0 Python/3.6.9 Date: Thu, 05 Mar 2020 02:45:59 GMT { "tasks": [ { "description": "Milk, Cheese, Pizza, Fruit, Tylenol", "done": false, "id": 1, "title": "Buy groceries" }, { "description": "Need to find a good Python tutorial on the web", "done": false, "id": 2, "title": "Learn Python" } ] } Great, now my question is, how can I write a Python script using requests to obtain the same information? I suspect this is the proper idea: # client.py import requests url = 'http://todo/api/v1.0/tasks' response = requests.get(url, # what goes here ?? ) print('response = ' + str(response)) However as you can see from my comment, I'm not sure how to set up the parameters for requests.get. I attempted to use this SO post: Making a request to a RESTful API using python as a guideline however it's not clear how to adjust the formatting per the message change. Can provide a brief description of how to set up params to pass into requests.get and suggest the necessary changes to get the client example above working? Thanks! --- Edit --- Something else I can mention is that I got the client to work using Postman pretty easily, I'm just not sure how to set up the syntax in Python: --- Edit --- Based on icedwater's response below, this is complete, working code for the client: # client.py import requests import json url = 'http://localhost:5000/todo/api/v1.0/tasks' response = requests.get(url) print(str(response)) print('') print(json.dumps(response.json(), indent=4)) result: <Response [200]> { "tasks": [ { "description": "Milk, Cheese, Pizza, Fruit, Tylenol", "done": false, "id": 1, "title": "Buy groceries" }, { "description": "Need to find a good Python tutorial on the web", "done": false, "id": 2, "title": "Learn Python" } ] }
From help(requests.get): Help on function get in module requests.api: get(url, params=None, **kwargs) Sends a GET request. :param url: URL for the new :class:`Request` object. :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`. :param \*\*kwargs: Optional arguments that ``request`` takes. :return: :class:`Response <Response>` object :rtype: requests.Response so I would say requests.get(url) would be enough to get a response. Then look at either the json() method or the text/content attributes of the response, depending on what the API is expected to return. So for the case of a JSON response, the following code should be enough: import requests import json url = "https://postman-echo.com/get?testprop=testval" response = requests.get(url) print(json.dumps(response.json(), indent=4)) Try the above code with an actual test API.
10
5
60,535,139
2020-3-4
https://stackoverflow.com/questions/60535139/using-tuple-as-a-key-in-a-dictionary-in-javascript
In python I have a Random dictionary where I use tuple as a key and each is mapped to some value. Sample Random_Dict = { (4, 2): 1, (2, 1): 3, (2, 0): 7, (1, 0): 8 } example in above key: (4,2) value: 1 I am attempting to replicate this in Javascript world This is what I came up with const randomKeys = [[4, 2], [2, 1], [2, 0], [1, 0] ] const randomMap = {} randomMap[randomKeys[0]] = 1 randomMap[randomKeys[1]] = 3 randomMap[randomKeys[2]] = 7 randomMap[randomKeys[3]] = 8 randomMap[[1, 2]] = 3 I am wondering if this is the most effective way. I almost wonder if i should do something like holding two numbers in one variable so that way i can have a dictionary in JS that maps 1:1. Looking for suggestions and solutions that are better
You can use a Map to map sets of 2 arbitrary values. In the following snippet the keys can be 'tuples' (1), or any other data type, and the values can be as well: const values = [ [ [4, 2], 1], [ [2, 1], 3], [ [2, 0], 7], [ [1, 0], 9], ]; const map = new Map(values); // Get the number corresponding a specific 'tuple' console.log( map.get(values[0][0]) // should log 1 ); // Another try: console.log( map.get(values[2][0]) // should log 7 ); Note that the key equality check is done by reference, not by value equivalence. So the following logs undefined for the above example, although the given 'key' is also an array of the shape [4, 2] just like one of the Map keys: console.log(map.get([4, 2])); (1) Tuples don't technically exist in Javascript. The closest thing is an array with 2 values, as I used in my example.
8
5
60,528,954
2020-3-4
https://stackoverflow.com/questions/60528954/pandas-read-csv-and-set-index-column
I have a problem when I read a .csv and set 'Column A' as the index column. df = pd.read_csv(index_col = 'Column A') print(df.columns) However, I cannot access 'Column A' anymore. I still want to use it as a column to access its data. Can anyone help?
I found this is very straightforward: just setting index as a column. df['index1'] = df.index
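A slightly fuller sketch (the file name 'data.csv' is only a placeholder): either copy the index back into a column as the answer above does, or keep the column from the start with drop=False:
import pandas as pd

# Option 1: read with index_col, then copy the index back into a column
df = pd.read_csv('data.csv', index_col='Column A')
df['Column A'] = df.index

# Option 2: read normally and set the index without dropping the column
df2 = pd.read_csv('data.csv')
df2 = df2.set_index('Column A', drop=False)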
8
5
60,528,792
2020-3-4
https://stackoverflow.com/questions/60528792/how-to-combine-javascript-react-frontend-and-python-backend
I'm not quite sure if my question is a duplicate, but I wasn't able to find anything that helps in my case. Set Up I've built a frontend webpage which contains a couple of services, for example showing some timeseries and other information about my system. The website is built with the React framework, so it uses JavaScript in general. Now I want to do some calculations on the timeseries, for example calculating the similarity and other features of my sensor data. For that I'm using Python, which offers a lot of libraries I've used for a long time and that are easy to use. What I'm looking for: I'm looking for a very simple way to call my backend timeseries-analysis Python script from the React GUI, passing some variables like the length of the series. I also want to process the returned values and save the current values needed for normalization (like max, min) for further calculations. So the procedure would look like the following: 1) Type value in React frontend input box 2) React/JavaScript calls the Python script / initializes a class and passes variables to it 3) Python calculates similarity of sensor data 4) Python returns similarity values to the frontend and saves classes for later calls 5) React displays the returned values 6) React/JavaScript calls the Python script 7) Python compares the latest data to past data and refreshes thresholds (like max, min) 8) Python calculates similarity of sensor data 9) continue.. Thanks for your help!
You can expose your Python scripts through a REST API that your React frontend calls. The database connection is handled by this API, and the response is sent back to your frontend. See Flask (very simple for small projects) or even Django to build Python APIs.
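A minimal sketch of what that could look like with Flask (the endpoint name, port and payload fields are made up for illustration):
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/similarity', methods=['POST'])
def similarity():
    payload = request.get_json()
    length = payload.get('length')
    # ... call your existing timeseries/similarity code here ...
    return jsonify({'similarity': 0.87, 'length': length})

if __name__ == '__main__':
    app.run(port=5000)
The React side would then POST to this endpoint with fetch or axios and render the JSON it receives; state such as normalization thresholds can be kept in the Python process or a small database between calls.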
14
11
60,524,565
2020-3-4
https://stackoverflow.com/questions/60524565/global-variable-imported-from-a-module-does-not-update-why
I'm having trouble understanding why importing a global variable from another module works as expected when using import, but when using from x import * the global variable doesn't appear to update within its own module. Imagine I have 2 files, one.py: def change(value): global x x = value x = "start" and two.py: from one import * print x change("updated") print x I'd expect: start updated But I get... start start If I import the module normally it works as expected import one print one.x one.change("updated") print one.x Result... start updated Given that I can't change one.py's use of global variables (not my fault), and two.py is meant to be a sort of wrapper around one.py, I'd really like to avoid using the one. namespace throughout two.py for the sake of one stubborn variable. If it's not possible, a novice-level explanation of what's going on might help me avoid getting stuck like this again. I understand that one.x is getting updated, but two.x isn't respecting the updates, and I don't know why.
You can think of a package as a dict. Every function and variable in a package is listed as a key in that dict which you can view using globals(). When you import an object from another package, you copy a reference to the object into your own package under a name (usually the same, different if you import <var> as <name>). By setting the value in the other package, you overwrite the key in that package with a new value, but you leave your reference pointing to the old one. An analogy with dicts We can demonstrate the process using dicts as an analogy: # Our 'packages' one = {'x': 'start'} two = {} # Equivalent of our import two['x'] = one['x'] # Variable updated in `one' one['x'] = 'updated' # ... and accessed in `two` print(two['x']) # Still 'start' We would not expect the two dict to be updated just because we overwrote a value in one. What you can do about it You can modify the object as long as you don't break the pointer by overwriting the variable. For example if x was a dict, you could change a value inside the dict and the two variables would still point to the same object. Alternatively you could attach a variables to the function like this: def change(value): change.x = value This does the same work by ensuring we are mutating the same object. A better answer yet might be to wrap both items in an object if they need to travel together: class Changer: x = 'start' @classmethod def change(cls, value): cls.x = value At this point however, we could modify the value directly as an attribute: Changer.x = 'updated' Which might be the simplest.
16
14
60,499,745
2020-3-3
https://stackoverflow.com/questions/60499745/does-pandas-use-hashing-for-a-single-indexed-dataframe-and-binary-searching-for
I have always been under impression that Pandas uses hashing when indexing the rows in a dataframe such that the operations like df.loc[some_label] is O(1). However, I just realized today that this is not the case, at least for multi-indexed dataframe. As pointed out in the document, "Indexing will work even if the data are not sorted, but will be rather inefficient (and show a PerformanceWarning)". Some articles I found seem to suggest, for multiindex dataframe, Pandas is using binary-search based indexing if you have called sort_index() on the dataframe; otherwise, it just linearly scans the rows. My question are Does single-indexed dataframe use hash-based indexing or not? If not for question 1, does it use binary-search when sort_index() has been called, and linear scan otherwise, like in the case of multi-indexed dataframe? If yes for question 1, why Pandas choose to not use hash-based indexing for multi-index as well?
Does a single-indexed DataFrame use hash-based indexing? No, Pandas does not use hash-based indexing for single-indexed DataFrames. Instead, it relies on array-based lookups or binary search when the index is sorted. If the index is unsorted, Pandas performs a linear scan, which is less efficient. If the DataFrame is sorted using sort_index(), Pandas can leverage a binary search to achieve faster lookups. Without sorting, lookups default to a linear scan. Hash-based indexing is more challenging for multi-indexes due to the hierarchical nature of the index. Instead, Pandas relies on binary search (for sorted indexes) or linear scan (for unsorted indexes) because these methods handle hierarchical indexing efficiently. Hash-based indexing would introduce additional overhead and complexity when working with multiple levels.
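To see the effect of sort_index() mentioned in the question in practice, a small illustrative sketch (the sizes are arbitrary):
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([range(1000), range(100)])
df = pd.DataFrame({'v': np.arange(len(idx))}, index=idx).sample(frac=1)  # shuffle rows

# df.loc[500]               # partial lookup on the unsorted index triggers a PerformanceWarning

df_sorted = df.sort_index()  # lexsort the index once...
df_sorted.loc[500]           # ...then lookups can use the sorted structure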
11
1
60,421,475
2020-2-26
https://stackoverflow.com/questions/60421475/dcgan-debugging-getting-just-garbage
Introduction: I am trying to get a CDCGAN (Conditional Deep Convolutional Generative Adversarial Network) to work on the MNIST dataset which should be fairly easy considering that the library (PyTorch) I am using has a tutorial on its website. But I can't seem to get It working it just produces garbage or the model collapses or both. What I tried: making the model Conditional semi-supervised learning using batch norm using dropout on each layer besides the input/output layer on the generator and discriminator label smoothing to combat overconfidence adding noise to the images (I guess you call this instance noise) to get a better data distribution use leaky relu to avoid vanishing gradients using a replay buffer to combat forgetting of learned stuff and overfitting playing with hyperparameters comparing it to the model from PyTorch tutorial basically what I did besides some things like Embedding layer ect. Images my Model generated: Hyperparameters: batch_size=50, learning_rate_discrimiantor=0.0001, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, droupout=0.5 batch_size=50, learning_rate_discriminator=0.0003, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0 Images Pytorch tutorial Model generated: Code for the pytorch tutorial dcgan model As comparison here are the images from the DCGAN from the pytorch turoial: My Code: import torch import torch.nn as nn import torchvision from torchvision import transforms, datasets import torch.nn.functional as F from torch import optim as optim from torch.utils.tensorboard import SummaryWriter import numpy as np import os import time class Discriminator(torch.nn.Module): def __init__(self, ndf=16, dropout_value=0.5): # ndf feature map discriminator super().__init__() self.ndf = ndf self.droupout_value = dropout_value self.condi = nn.Sequential( nn.Linear(in_features=10, out_features=64 * 64) ) self.hidden0 = nn.Sequential( nn.Conv2d(in_channels=2, out_channels=self.ndf, kernel_size=4, stride=2, padding=1, bias=False), nn.LeakyReLU(0.2), ) self.hidden1 = nn.Sequential( nn.Conv2d(in_channels=self.ndf, out_channels=self.ndf * 2, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ndf * 2), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden2 = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 2, out_channels=self.ndf * 4, kernel_size=4, stride=2, padding=1, bias=False), #nn.BatchNorm2d(self.ndf * 4), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden3 = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 4, out_channels=self.ndf * 8, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ndf * 8), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.out = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 8, out_channels=1, kernel_size=4, stride=1, padding=0, bias=False), torch.nn.Sigmoid() ) def forward(self, x, y): y = self.condi(y.view(-1, 10)) y = y.view(-1, 1, 64, 64) x = torch.cat((x, y), dim=1) x = self.hidden0(x) x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x class Generator(torch.nn.Module): def __init__(self, n_features=100, ngf=16, c_channels=1, dropout_value=0.5): # ngf feature map of generator super().__init__() self.ngf = ngf self.n_features = n_features self.c_channels = c_channels self.droupout_value = dropout_value self.hidden0 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.n_features + 10, out_channels=self.ngf * 8, kernel_size=4, stride=1, padding=0, bias=False), nn.BatchNorm2d(self.ngf * 8), 
nn.LeakyReLU(0.2) ) self.hidden1 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 8, out_channels=self.ngf * 4, kernel_size=4, stride=2, padding=1, bias=False), #nn.BatchNorm2d(self.ngf * 4), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden2 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 4, out_channels=self.ngf * 2, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ngf * 2), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden3 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 2, out_channels=self.ngf, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ngf), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.out = nn.Sequential( # "out_channels=1" because gray scale nn.ConvTranspose2d(in_channels=self.ngf, out_channels=1, kernel_size=4, stride=2, padding=1, bias=False), nn.Tanh() ) def forward(self, x, y): x_cond = torch.cat((x, y), dim=1) # Combine flatten image with conditional input (class labels) x = self.hidden0(x_cond) # Image goes into a "ConvTranspose2d" layer x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x class Logger: def __init__(self, model_name, model1, model2, m1_optimizer, m2_optimizer, model_parameter, train_loader): self.out_dir = "data" self.model_name = model_name self.train_loader = train_loader self.model1 = model1 self.model2 = model2 self.model_parameter = model_parameter self.m1_optimizer = m1_optimizer self.m2_optimizer = m2_optimizer # Exclude Epochs of the model name. This make sense e.g. when we stop a training progress and continue later on. self.experiment_name = '_'.join("{!s}={!r}".format(k, v) for (k, v) in model_parameter.items())\ .replace("Epochs" + "=" + str(model_parameter["Epochs"]), "") self.d_error = 0 self.g_error = 0 self.tb = SummaryWriter(log_dir=str(self.out_dir + "/log/" + self.model_name + "/runs/" + self.experiment_name)) self.path_image = os.path.join(os.getcwd(), f'{self.out_dir}/log/{self.model_name}/images/{self.experiment_name}') self.path_model = os.path.join(os.getcwd(), f'{self.out_dir}/log/{self.model_name}/model/{self.experiment_name}') try: os.makedirs(self.path_image) except Exception as e: print("WARNING: ", str(e)) try: os.makedirs(self.path_model) except Exception as e: print("WARNING: ", str(e)) def log_graph(self, model1_input, model2_input, model1_label, model2_label): self.tb.add_graph(self.model1, input_to_model=(model1_input, model1_label)) self.tb.add_graph(self.model2, input_to_model=(model2_input, model2_label)) def log(self, num_epoch, d_error, g_error): self.d_error = d_error self.g_error = g_error self.tb.add_scalar("Discriminator Train Error", self.d_error, num_epoch) self.tb.add_scalar("Generator Train Error", self.g_error, num_epoch) def log_image(self, images, epoch, batch_num): grid = torchvision.utils.make_grid(images) torchvision.utils.save_image(grid, f'{self.path_image}\\Epoch_{epoch}_batch_{batch_num}.png') self.tb.add_image("Generator Image", grid) def log_histogramm(self): for name, param in self.model2.named_parameters(): self.tb.add_histogram(name, param, self.model_parameter["Epochs"]) self.tb.add_histogram(f'gen_{name}.grad', param.grad, self.model_parameter["Epochs"]) for name, param in self.model1.named_parameters(): self.tb.add_histogram(name, param, self.model_parameter["Epochs"]) self.tb.add_histogram(f'dis_{name}.grad', param.grad, self.model_parameter["Epochs"]) def log_model(self, num_epoch): torch.save({ "epoch": num_epoch, 
"model_generator_state_dict": self.model1.state_dict(), "model_discriminator_state_dict": self.model2.state_dict(), "optimizer_generator_state_dict": self.m1_optimizer.state_dict(), "optimizer_discriminator_state_dict": self.m2_optimizer.state_dict(), }, str(self.path_model + f'\\{time.time()}_epoch{num_epoch}.pth')) def close(self, logger, images, num_epoch, d_error, g_error): logger.log_model(num_epoch) logger.log_histogramm() logger.log(num_epoch, d_error, g_error) self.tb.close() def display_stats(self, epoch, batch_num, dis_error, gen_error): print(f'Epoch: [{epoch}/{self.model_parameter["Epochs"]}] ' f'Batch: [{batch_num}/{len(self.train_loader)}] ' f'Loss_D: {dis_error.data.cpu()}, ' f'Loss_G: {gen_error.data.cpu()}') def get_MNIST_dataset(num_workers_loader, model_parameter, out_dir="data"): compose = transforms.Compose([ transforms.Resize((64, 64)), transforms.CenterCrop((64, 64)), transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.5], std=[0.5]) ]) dataset = datasets.MNIST( root=out_dir, train=True, download=True, transform=compose ) train_loader = torch.utils.data.DataLoader(dataset, batch_size=model_parameter["batch_size"], num_workers=num_workers_loader, shuffle=model_parameter["shuffle"]) return dataset, train_loader def train_discriminator(p_optimizer, p_noise, p_images, p_fake_target, p_real_target, p_images_labels, p_fake_labels, device): p_optimizer.zero_grad() # 1.1 Train on real data pred_dis_real = discriminator(p_images, p_images_labels) error_real = loss(pred_dis_real, p_real_target) error_real.backward() # 1.2 Train on fake data fake_data = generator(p_noise, p_fake_labels).detach() fake_data = add_noise_to_image(fake_data, device) pred_dis_fake = discriminator(fake_data, p_fake_labels) error_fake = loss(pred_dis_fake, p_fake_target) error_fake.backward() p_optimizer.step() return error_fake + error_real def train_generator(p_optimizer, p_noise, p_real_target, p_fake_labels, device): p_optimizer.zero_grad() fake_images = generator(p_noise, p_fake_labels) fake_images = add_noise_to_image(fake_images, device) pred_dis_fake = discriminator(fake_images, p_fake_labels) error_fake = loss(pred_dis_fake, p_real_target) # because """ We use "p_real_target" instead of "p_fake_target" because we want to maximize that the discriminator is wrong. """ error_fake.backward() p_optimizer.step() return fake_images, pred_dis_fake, error_fake # TODO change to a Truncated normal distribution def get_noise(batch_size, n_features=100): return torch.FloatTensor(batch_size, n_features, 1, 1).uniform_(-1, 1) # We flip label of real and fate data. 
Better gradient flow I have told def get_real_data_target(batch_size): return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.0, 0.2) def get_fake_data_target(batch_size): return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.8, 1.1) def image_to_vector(images): return torch.flatten(images, start_dim=1, end_dim=-1) def vector_to_image(images): return images.view(images.size(0), 1, 28, 28) def get_rand_labels(batch_size): return torch.randint(low=0, high=9, size=(batch_size,)) def load_model(model_load_path): if model_load_path: checkpoint = torch.load(model_load_path) discriminator.load_state_dict(checkpoint["model_discriminator_state_dict"]) generator.load_state_dict(checkpoint["model_generator_state_dict"]) dis_opti.load_state_dict(checkpoint["optimizer_discriminator_state_dict"]) gen_opti.load_state_dict(checkpoint["optimizer_generator_state_dict"]) return checkpoint["epoch"] else: return 0 def init_model_optimizer(model_parameter, device): # Initialize the Models discriminator = Discriminator(ndf=model_parameter["ndf"], dropout_value=model_parameter["dropout"]).to(device) generator = Generator(ngf=model_parameter["ngf"], dropout_value=model_parameter["dropout"]).to(device) # train dis_opti = optim.Adam(discriminator.parameters(), lr=model_parameter["learning_rate_dis"], betas=(0.5, 0.999)) gen_opti = optim.Adam(generator.parameters(), lr=model_parameter["learning_rate_gen"], betas=(0.5, 0.999)) return discriminator, generator, dis_opti, gen_opti def get_hot_vector_encode(labels, device): return torch.eye(10)[labels].view(-1, 10, 1, 1).to(device) def add_noise_to_image(images, device, level_of_noise=0.1): return images[0].to(device) + (level_of_noise) * torch.randn(images.shape).to(device) if __name__ == "__main__": # Hyperparameter model_parameter = { "batch_size": 500, "learning_rate_dis": 0.0002, "learning_rate_gen": 0.0002, "shuffle": False, "Epochs": 10, "ndf": 64, "ngf": 64, "dropout": 0.5 } # Parameter r_frequent = 10 # How many samples we save for replay per batch (batch_size / r_frequent). model_name = "CDCGAN" # The name of you model e.g. 
"Gan" num_workers_loader = 1 # How many workers should load the data sample_save_size = 16 # How many numbers your saved imaged should show device = "cuda" # Which device should be used to train the neural network model_load_path = "" # If set load model instead of training from new num_epoch_log = 1 # How frequent you want to log/ torch.manual_seed(43) # Sets a seed for torch for reproducibility dataset_train, train_loader = get_MNIST_dataset(num_workers_loader, model_parameter) # Get dataset # Initialize the Models and optimizer discriminator, generator, dis_opti, gen_opti = init_model_optimizer(model_parameter, device) # Init model/Optimizer start_epoch = load_model(model_load_path) # when we want to load a model # Init Logger logger = Logger(model_name, generator, discriminator, gen_opti, dis_opti, model_parameter, train_loader) loss = nn.BCELoss() images, labels = next(iter(train_loader)) # For logging # For testing # pred = generator(get_noise(model_parameter["batch_size"]).to(device), get_hot_vector_encode(get_rand_labels(model_parameter["batch_size"]), device)) # dis = discriminator(images.to(device), get_hot_vector_encode(labels, device)) logger.log_graph(get_noise(model_parameter["batch_size"]).to(device), images.to(device), get_hot_vector_encode(get_rand_labels(model_parameter["batch_size"]), device), get_hot_vector_encode(labels, device)) # Array to store exp_replay = torch.tensor([]).to(device) for num_epoch in range(start_epoch, model_parameter["Epochs"]): for batch_num, data_loader in enumerate(train_loader): images, labels = data_loader images = add_noise_to_image(images, device) # Add noise to the images # 1. Train Discriminator dis_error = train_discriminator( dis_opti, get_noise(model_parameter["batch_size"]).to(device), images.to(device), get_fake_data_target(model_parameter["batch_size"]).to(device), get_real_data_target(model_parameter["batch_size"]).to(device), get_hot_vector_encode(labels, device), get_hot_vector_encode( get_rand_labels(model_parameter["batch_size"]), device), device ) # 2. 
Train Generator fake_image, pred_dis_fake, gen_error = train_generator( gen_opti, get_noise(model_parameter["batch_size"]).to(device), get_real_data_target(model_parameter["batch_size"]).to(device), get_hot_vector_encode( get_rand_labels(model_parameter["batch_size"]), device), device ) # Store a random point for experience replay perm = torch.randperm(fake_image.size(0)) r_idx = perm[:max(1, int(model_parameter["batch_size"] / r_frequent))] r_samples = add_noise_to_image(fake_image[r_idx], device) exp_replay = torch.cat((exp_replay, r_samples), 0).detach() if exp_replay.size(0) >= model_parameter["batch_size"]: # Train on experienced data dis_opti.zero_grad() r_label = get_hot_vector_encode(torch.zeros(exp_replay.size(0)).numpy(), device) pred_dis_real = discriminator(exp_replay, r_label) error_real = loss(pred_dis_real, get_fake_data_target(exp_replay.size(0)).to(device)) error_real.backward() dis_opti.step() print(f'Epoch: [{num_epoch}/{model_parameter["Epochs"]}] ' f'Batch: Replay/Experience batch ' f'Loss_D: {error_real.data.cpu()}, ' ) exp_replay = torch.tensor([]).to(device) logger.display_stats(epoch=num_epoch, batch_num=batch_num, dis_error=dis_error, gen_error=gen_error) if batch_num % 100 == 0: logger.log_image(fake_image[:sample_save_size], num_epoch, batch_num) logger.log(num_epoch, dis_error, gen_error) if num_epoch % num_epoch_log == 0: logger.log_model(num_epoch) logger.log_histogramm() logger.close(logger, fake_image[:sample_save_size], num_epoch, dis_error, gen_error) First link to my Code (Pastebin) Second link to my Code (0bin) Conclusion: Since I implemented all these things (e.g. label smoothing) which are considered beneficial to a GAN/DCGAN. And my Model still performs worse than the Tutorial DCGAN from PyTorch I think I might have a bug in my code but I can't seem to find it. Reproducibility: You should be able to just copy the code and run it if you have the libraries that I imported installed to look for yourself if you can find anything. I appreciate any feedback.
So I solved this issue a while ago, but forgot to post an answer on stack overflow. So I will simply post my code here which should work probably pretty good. Some disclaimer: I am not quite sure if it works since I did this a year ago its for 128x128px Images MNIST It's not a vanilla GAN I used various optimization techniques If you want to use it you need to change various details, such as the training dataset Resources: Multi-Scale Gradients Instance Noise Various tricks I used More tricks import torch from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torchvision import torchvision.transforms as transforms from torch.utils.data import DataLoader import pytorch_lightning as pl from pytorch_lightning import loggers from numpy.random import choice import os from pathlib import Path import shutil from collections import OrderedDict # custom weights initialization called on netG and netD def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) # randomly flip some labels def noisy_labels(y, p_flip=0.05): # # flip labels with 5% probability # determine the number of labels to flip n_select = int(p_flip * y.shape[0]) # choose labels to flip flip_ix = choice([i for i in range(y.shape[0])], size=n_select) # invert the labels in place y[flip_ix] = 1 - y[flip_ix] return y class AddGaussianNoise(object): def __init__(self, mean=0.0, std=0.1): self.std = std self.mean = mean def __call__(self, tensor): tensor = tensor.cuda() return tensor + (torch.randn(tensor.size()) * self.std + self.mean).cuda() def __repr__(self): return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std) def resize2d(img, size): return (F.adaptive_avg_pool2d(img, size).data).cuda() def get_valid_labels(img): return ((0.8 - 1.1) * torch.rand(img.shape[0], 1, 1, 1) + 1.1).cuda() # soft labels def get_unvalid_labels(img): return (noisy_labels((0.0 - 0.3) * torch.rand(img.shape[0], 1, 1, 1) + 0.3)).cuda() # soft labels class Generator(pl.LightningModule): def __init__(self, ngf, nc, latent_dim): super(Generator, self).__init__() self.ngf = ngf self.latent_dim = latent_dim self.nc = nc self.fc0 = nn.Sequential( # input is Z, going into a convolution nn.utils.spectral_norm(nn.ConvTranspose2d(latent_dim, ngf * 16, 4, 1, 0, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 16) ) self.fc1 = nn.Sequential( # state size. (ngf*8) x 4 x 4 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 16, ngf * 8, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 8) ) self.fc2 = nn.Sequential( # state size. (ngf*4) x 8 x 8 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 4) ) self.fc3 = nn.Sequential( # state size. (ngf*2) x 16 x 16 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 2) ) self.fc4 = nn.Sequential( # state size. (ngf) x 32 x 32 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf) ) self.fc5 = nn.Sequential( # state size. (nc) x 64 x 64 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False)), nn.Tanh() ) # state size. 
(nc) x 128 x 128 # For Multi-Scale Gradient # Converting the intermediate layers into images self.fc0_r = nn.Conv2d(ngf * 16, self.nc, 1) self.fc1_r = nn.Conv2d(ngf * 8, self.nc, 1) self.fc2_r = nn.Conv2d(ngf * 4, self.nc, 1) self.fc3_r = nn.Conv2d(ngf * 2, self.nc, 1) self.fc4_r = nn.Conv2d(ngf, self.nc, 1) def forward(self, input): x_0 = self.fc0(input) x_1 = self.fc1(x_0) x_2 = self.fc2(x_1) x_3 = self.fc3(x_2) x_4 = self.fc4(x_3) x_5 = self.fc5(x_4) # For Multi-Scale Gradient # Converting the intermediate layers into images x_0_r = self.fc0_r(x_0) x_1_r = self.fc1_r(x_1) x_2_r = self.fc2_r(x_2) x_3_r = self.fc3_r(x_3) x_4_r = self.fc4_r(x_4) return x_5, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r class Discriminator(pl.LightningModule): def __init__(self, ndf, nc): super(Discriminator, self).__init__() self.nc = nc self.ndf = ndf self.fc0 = nn.Sequential( # input is (nc) x 128 x 128 nn.utils.spectral_norm(nn.Conv2d(nc, ndf, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True) ) self.fc1 = nn.Sequential( # state size. (ndf) x 64 x 64 nn.utils.spectral_norm(nn.Conv2d(ndf + nc, ndf * 2, 4, 2, 1, bias=False)), # "+ nc" because of multi scale gradient nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 2) ) self.fc2 = nn.Sequential( # state size. (ndf*2) x 32 x 32 nn.utils.spectral_norm(nn.Conv2d(ndf * 2 + nc, ndf * 4, 4, 2, 1, bias=False)), # "+ nc" because of multi scale gradient nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 4) ) self.fc3 = nn.Sequential( # state size. (ndf*4) x 16 x 16e nn.utils.spectral_norm(nn.Conv2d(ndf * 4 + nc, ndf * 8, 4, 2, 1, bias=False)), # "+ nc" because of multi scale gradient nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 8), ) self.fc4 = nn.Sequential( # state size. (ndf*8) x 8 x 8 nn.utils.spectral_norm(nn.Conv2d(ndf * 8 + nc, ndf * 16, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 16) ) self.fc5 = nn.Sequential( # state size. (ndf*8) x 4 x 4 nn.utils.spectral_norm(nn.Conv2d(ndf * 16 + nc, 1, 4, 1, 0, bias=False)), nn.Sigmoid() ) # state size. 1 x 1 x 1 def forward(self, input, detach_or_not): # When we train i ncombination with generator we use multi scale gradient. 
x, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r = input if detach_or_not: x = x.detach() x_0 = self.fc0(x) x_0 = torch.cat((x_0, x_4_r), dim=1) # Concat Multi-Scale Gradient x_1 = self.fc1(x_0) x_1 = torch.cat((x_1, x_3_r), dim=1) # Concat Multi-Scale Gradient x_2 = self.fc2(x_1) x_2 = torch.cat((x_2, x_2_r), dim=1) # Concat Multi-Scale Gradient x_3 = self.fc3(x_2) x_3 = torch.cat((x_3, x_1_r), dim=1) # Concat Multi-Scale Gradient x_4 = self.fc4(x_3) x_4 = torch.cat((x_4, x_0_r), dim=1) # Concat Multi-Scale Gradient x_5 = self.fc5(x_4) return x_5 class DCGAN(pl.LightningModule): def __init__(self, hparams, checkpoint_folder, experiment_name): super().__init__() self.hparams = hparams self.checkpoint_folder = checkpoint_folder self.experiment_name = experiment_name # networks self.generator = Generator(ngf=hparams.ngf, nc=hparams.nc, latent_dim=hparams.latent_dim) self.discriminator = Discriminator(ndf=hparams.ndf, nc=hparams.nc) self.generator.apply(weights_init) self.discriminator.apply(weights_init) # cache for generated images self.generated_imgs = None self.last_imgs = None # For experience replay self.exp_replay_dis = torch.tensor([]) def forward(self, z): return self.generator(z) def adversarial_loss(self, y_hat, y): return F.binary_cross_entropy(y_hat, y) def training_step(self, batch, batch_nb, optimizer_idx): # For adding Instance noise for more visit: https://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/ std_gaussian = max(0, self.hparams.level_of_noise - ( (self.hparams.level_of_noise * 2) * (self.current_epoch / self.hparams.epochs))) AddGaussianNoiseInst = AddGaussianNoise(std=std_gaussian) # the noise decays over time imgs, _ = batch imgs = AddGaussianNoiseInst(imgs) # Adding instance noise to real images self.last_imgs = imgs # train generator if optimizer_idx == 0: # sample noise z = torch.randn(imgs.shape[0], self.hparams.latent_dim, 1, 1).cuda() # generate images self.generated_imgs = self(z) # ground truth result (ie: all fake) g_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, False), get_valid_labels(self.generated_imgs[0])) # adversarial loss is binary cross-entropy; [0] is the image of the last layer tqdm_dict = {'g_loss': g_loss} log = {'g_loss': g_loss, "std_gaussian": std_gaussian} output = OrderedDict({ 'loss': g_loss, 'progress_bar': tqdm_dict, 'log': log }) return output # train discriminator if optimizer_idx == 1: # Measure discriminator's ability to classify real from generated samples # how well can it label as real? 
real_loss = self.adversarial_loss( self.discriminator([imgs, resize2d(imgs, 4), resize2d(imgs, 8), resize2d(imgs, 16), resize2d(imgs, 32), resize2d(imgs, 64)], False), get_valid_labels(imgs)) fake_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, True), get_unvalid_labels( self.generated_imgs[0])) # how well can it label as fake?; [0] is the image of the last layer # discriminator loss is the average of these d_loss = (real_loss + fake_loss) / 2 tqdm_dict = {'d_loss': d_loss} log = {'d_loss': d_loss, "std_gaussian": std_gaussian} output = OrderedDict({ 'loss': d_loss, 'progress_bar': tqdm_dict, 'log': log }) return output def configure_optimizers(self): lr_gen = self.hparams.lr_gen lr_dis = self.hparams.lr_dis b1 = self.hparams.b1 b2 = self.hparams.b2 opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr_gen, betas=(b1, b2)) opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr_dis, betas=(b1, b2)) return [opt_g, opt_d], [] def backward(self, trainer, loss, optimizer, optimizer_idx: int) -> None: loss.backward(retain_graph=True) def train_dataloader(self): # transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)), # transforms.ToTensor(), # transforms.Normalize([0.5], [0.5])]) # dataset = torchvision.datasets.MNIST(os.getcwd(), train=False, download=True, transform=transform) # return DataLoader(dataset, batch_size=self.hparams.batch_size) # transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)), # transforms.ToTensor(), # transforms.Normalize([0.5], [0.5]) # ]) # train_dataset = torchvision.datasets.ImageFolder( # root="./drive/My Drive/datasets/flower_dataset/", # # root="./drive/My Drive/datasets/ghibli_dataset_small_overfit/", # transform=transform # ) # return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True, # batch_size=self.hparams.batch_size) transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]) ]) train_dataset = torchvision.datasets.ImageFolder( root="ghibli_dataset_small_overfit/", transform=transform ) return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True, batch_size=self.hparams.batch_size) def on_epoch_end(self): z = torch.randn(4, self.hparams.latent_dim, 1, 1).cuda() # match gpu device (or keep as cpu) if self.on_gpu: z = z.cuda(self.last_imgs.device.index) # log sampled images sample_imgs = self.generator(z)[0] torchvision.utils.save_image(sample_imgs, f'generated_images_epoch{self.current_epoch}.png') # save model if self.current_epoch % self.hparams.save_model_every_epoch == 0: trainer.save_checkpoint( self.checkpoint_folder + "/" + self.experiment_name + "_epoch_" + str(self.current_epoch) + ".ckpt") from argparse import Namespace args = { 'batch_size': 128, # batch size 'lr_gen': 0.0003, # TTUR;learnin rate of both networks; tested value: 0.0002 'lr_dis': 0.0003, # TTUR;learnin rate of both networks; tested value: 0.0002 'b1': 0.5, # Momentum for adam; tested value(dcgan paper): 0.5 'b2': 0.999, # Momentum for adam; tested value(dcgan paper): 0.999 'latent_dim': 256, # tested value which worked(in V4_1): 100 'nc': 3, # number of color channels 'ndf': 8, # number of discriminator features 'ngf': 8, # number of generator features 'epochs': 4, # the maxima lamount of epochs the algorith should run 'save_model_every_epoch': 1, # how often we save our model 'image_size': 128, # size of 
the image 'num_workers': 3, 'level_of_noise': 0.1, # how much instance noise we introduce(std; tested value: 0.15 and 0.1 'experience_save_per_batch': 1, # this value should be very low; tested value which works: 1 'experience_batch_size': 50 # this value shouldnt be too high; tested value which works: 50 } hparams = Namespace(**args) # Parameters experiment_name = "DCGAN_6_2_MNIST_128px" dataset_name = "mnist" checkpoint_folder = "DCGAN/" tags = ["DCGAN", "128x128"] dirpath = Path(checkpoint_folder) # defining net net = DCGAN(hparams, checkpoint_folder, experiment_name) torch.autograd.set_detect_anomaly(True) trainer = pl.Trainer( # resume_from_checkpoint="DCGAN_V4_2_GHIBLI_epoch_999.ckpt", max_epochs=args["epochs"], gpus=1 ) trainer.fit(net)
31
2
60,455,830
2020-2-28
https://stackoverflow.com/questions/60455830/can-you-have-an-async-handler-in-lambda-python-3-6
I've made Lambda functions before but not in Python. I know in Javascript Lambda supports the handler function being asynchronous, but I get an error if I try it in Python. Here is the code I am trying to test: async def handler(event, context): print(str(event)) return { 'message' : 'OK' } And this is the error I get: An error occurred during JSON serialization of response: <coroutine object handler at 0x7f63a2d20308> is not JSON serializable Traceback (most recent call last): File "/var/lang/lib/python3.6/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/var/lang/lib/python3.6/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/var/lang/lib/python3.6/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/var/runtime/awslambda/bootstrap.py", line 149, in decimal_serializer raise TypeError(repr(o) + " is not JSON serializable") TypeError: <coroutine object handler at 0x7f63a2d20308> is not JSON serializable /var/runtime/awslambda/bootstrap.py:312: RuntimeWarning: coroutine 'handler' was never awaited errortype, result, fatal = report_fault(invokeid, e) EDIT 2021: Since this question seems to be gaining traction, I assume people are coming here trying to figure out how to get async to work with AWS Lambda as I was. The bad news is that even now more than a year later, there still isn't any support by AWS to have an asynchronous handler in a Python-based Lambda function. (I have no idea why, as NodeJS-based Lambda functions can handle it perfectly fine.) The good news is that since Python 3.7, there is a simple workaround in the form of asyncio.run: import asyncio def lambda_handler(event, context): # Use asyncio.run to synchronously "await" an async function result = asyncio.run(async_handler(event, context)) return { 'statusCode': 200, 'body': result } async def async_handler(event, context): # Put your asynchronous code here await asyncio.sleep(1) return 'Success' Note: The selected answer says that using asyncio.run is not the proper way of starting an asynchronous task in Lambda. In general, they are correct because if some other resource in your Lambda code creates an event loop (a database/HTTP client, etc.), it's wasteful to create another loop and it's better to operate on the existing loop using asyncio.get_event_loop. However, if an event loop does not yet exist when your code begins running, asyncio.run becomes the only (simple) course of action.
Not at all. Async Python handlers are not supported by AWS Lambda. If you need to use async/await functionality in your AWS Lambda, you have to define an async function in your code (either in Lambda files or a Lambda Layer) and call asyncio.get_event_loop().run_until_complete(your_async_handler()) inside your regular sync Lambda handler: import asyncio import aioboto3 # To reduce execution time for subsequent invocations, # open a reusable resource in a global scope dynamodb = aioboto3.Session().resource('dynamodb') async def async_handler(event, context): # Put your asynchronous code here table = await dynamodb.Table('test') await table.put_item( Item={'pk': 'test1', 'col1': 'some_data'}, ) return {'statusCode': 200, 'body': '{"ok": true}'} # Point to this function as a handler in the Lambda configuration def lambda_handler(event, context): loop = asyncio.get_event_loop() # DynamoDB resource defined above is attached to this loop: # if you use asyncio.run instead # you will encounter "Event loop closed" exception return loop.run_until_complete(async_handler(event, context)) Please note that asyncio.run (introduced in Python 3.7) is not a proper way to call an async handler in AWS Lambda execution environment since Lambda tries to reuse the execution context for subsequent invocations. The problem here is that asyncio.run creates a new EventLoop and closes the previous one. If you have opened any resources or created coroutines attached to the closed EventLoop from previous Lambda invocation you will get «Event loop closed» error. asyncio.get_event_loop().run_until_complete allows you to reuse the same loop. See related StackOverflow question. AWS Lambda documentation misleads its readers a little by introducing synchronous and asynchronous invocations. Do not mix it up with sync/async Python functions. Synchronous refers to invoking AWS Lambda with further waiting for the result (blocking operation). The function is called immediately and you get the response as soon as possible. Whereas using an asynchronous invocation you ask Lambda to schedule the function execution and do not wait for the response at all. When the time comes, Lambda still will call the handler function synchronously.
51
66
60,439,570
2020-2-27
https://stackoverflow.com/questions/60439570/pytorch-runtimeerror-shape-16-400-is-invalid-for-input-of-size-9600
I'm trying to build a CNN but I get this error: ---> 52 x = x.view(x.size(0), 5 * 5 * 16) RuntimeError: shape '[16, 400]' is invalid for input of size 9600 It's not clear for me what the inputs of the 'x.view' line should be. Also, I don't really understand how many times I should have this 'x.view' function in my code. Is it only once, after the 3 convolutional layers and 2 linear layers? Or is it 5 times, one after every layer? Here's my CNN code: import torch.nn.functional as F # Convolutional neural network class ConvNet(nn.Module): def __init__(self, num_classes=10): super(ConvNet, self).__init__() self.conv1 = nn.Conv2d( in_channels=3, out_channels=16, kernel_size=3) self.conv2 = nn.Conv2d( in_channels=16, out_channels=24, kernel_size=4) self.conv3 = nn.Conv2d( in_channels=24, out_channels=32, kernel_size=4) self.dropout = nn.Dropout2d(p=0.3) self.pool = nn.MaxPool2d(2) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(512, 10) self.final = nn.Softmax(dim=1) def forward(self, x): print('shape 0 ' + str(x.shape)) x = F.max_pool2d(F.relu(self.conv1(x)), 2) x = self.dropout(x) print('shape 1 ' + str(x.shape)) x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = self.dropout(x) print('shape 2 ' + str(x.shape)) # x = F.max_pool2d(F.relu(self.conv3(x)), 2) # x = self.dropout(x) x = F.interpolate(x, size=(5, 5)) x = x.view(x.size(0), 5 * 5 * 16) x = self.fc1(x) return x net = ConvNet() Can someone help me understand the problem? The output of x.shape is: shape 0 torch.Size([16, 3, 256, 256]) shape 1 torch.Size([16, 16, 127, 127]) shape 2 torch.Size([16, 24, 62, 62]) Thanks.
This means that the product of the channel and spatial dimensions is not 5*5*16: just before the view call the tensor has shape [16, 24, 5, 5] (your "shape 2" printout followed by the interpolation to 5x5), i.e. 24*5*5 = 600 values per sample, and 16 * 600 = 9600 is exactly the "input of size 9600" from the error. To flatten the tensor regardless of its exact size, replace x = x.view(x.size(0), 5 * 5 * 16) with: x = x.view(x.size(0), -1)
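For completeness, here is a minimal sketch of the fixed forward pass. It keeps the interpolation to 5x5 from the question; the 24*5*5 input size for fc1 is inferred from the printed shapes, so treat it as an assumption about the intended architecture rather than the author's exact model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3)
        self.conv2 = nn.Conv2d(16, 24, kernel_size=4)
        self.dropout = nn.Dropout2d(p=0.3)
        # 24 channels * 5 * 5 spatial positions after the interpolation below
        self.fc1 = nn.Linear(24 * 5 * 5, 120)

    def forward(self, x):
        x = self.dropout(F.max_pool2d(F.relu(self.conv1(x)), 2))
        x = self.dropout(F.max_pool2d(F.relu(self.conv2(x)), 2))
        x = F.interpolate(x, size=(5, 5))
        x = x.view(x.size(0), -1)  # flattens to [batch, 600] for any batch size
        return self.fc1(x)

net = ConvNet()
print(net(torch.randn(16, 3, 256, 256)).shape)  # torch.Size([16, 120])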
10
7
60,410,426
2020-2-26
https://stackoverflow.com/questions/60410426/prevent-f-string-from-converting-float-into-scientific-notation
I was struck by this default behavior of f-strings in python 3.7.2: >> number = 0.0000001 >> string = f"Number: {number}" >> print(string) Number: 1e-07 What I expected was: Number: 0.0000001 This is very annoying especially for creation of filenames. How can I disable this automatic conversion into the scientific notation? And why is it enabled in the first place? Basically the opposite of this question. Edit: I would like to avoid setting a fixed length for the float via {number:.8f} since my numbers have different lengths and I don't want to have any trailing zeros. I want to use the f-strings to make filenames automatically, like this: filename = f"number_{number:.10f}_other_number_{other_number:.10f}.json" I am looking for a simple modifier that can disable the automatic scientific notation while keeping the original precision of the float.
After checking other sources, I found numpy.format_float_positional which works very nicely here: import numpy as np number = 0.0000001 string = f"Number: {np.format_float_positional(number)}"
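A short illustration of how this plays with the filename use case from the question; the expected outputs in the comments assume numpy's default shortest round-trip formatting:

import numpy as np

number = 0.0000001
other_number = 0.5

print(np.format_float_positional(number))        # 0.0000001
print(np.format_float_positional(other_number))  # 0.5

# building a filename without scientific notation
filename = (
    f"number_{np.format_float_positional(number)}"
    f"_other_number_{np.format_float_positional(other_number)}.json"
)
print(filename)  # number_0.0000001_other_number_0.5.json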
10
1
60,424,390
2020-2-27
https://stackoverflow.com/questions/60424390/is-there-a-way-to-kill-uvicorn-cleanly
Is there a way to kill uvicorn cleanly? I.e., I can type ^C at it, if it is running in the foreground on a terminal. This causes the uvicorn process to die and all of the worker processes to be cleaned up. (I.e., they go away.) On the other hand, if uvicorn is running in the background without a terminal, then I can't figure out a way to kill it cleanly. It seems to ignore SIGTERM, SIGINT, and SIGHUP. I can kill it with SIGKILL (i.e. -9), but then the worker processes remain alive, and I have to track all the worker processes down and kill them too. This is not ideal. I am using uvicorn with CPython 3.7.4, uvicorn version 0.11.2, and FastAPI 0.46.0 on Red Hat Enterprise Linux Server 7.3 (Maipo).
That's because you're running uvicorn as your only server. uvicorn is not a process manager and, as such, it does not manage its workers' life cycle. That's why they recommend running uvicorn with gunicorn+UvicornWorker for production. That said, you can kill the spawned workers and trigger their shutdown using the command below: $ kill $(pgrep -P $uvicorn_pid) The reason this works, while a kill on the parent pid alone does not, is that when you ^C something, the signal is transmitted to all of its spawned processes attached to the stdin.
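For reference, the gunicorn setup mentioned above typically looks like this (the module and app names are placeholders for your own FastAPI app): $ gunicorn -k uvicorn.workers.UvicornWorker -w 4 main:app With gunicorn acting as the process manager, a plain SIGTERM to the master process shuts the workers down cleanly.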
42
23
60,414,753
2020-2-26
https://stackoverflow.com/questions/60414753/how-to-install-githttps-from-setup-py-using-install-requires
I have a project in which I have to install from git+https: I can make it to work in this way: virtualenv -p python3.5 bla . bla/bin/activate pip install numpy # must have numpy before the following pkg... pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' However, I want to use it in a setup.py file in install_requires: from setuptools import setup setup(install_requires='git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI', setup_requires='numpy') and then, pip install -e . from the dir containing the setup.py This doesn't work due to parse error: Complete output (1 lines): error in bla_bla setup command: 'install_requires' must be a string or list of strings containing valid project/version requireme nt specifiers; Invalid requirement, parse error at "'+https:/'" ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. The error doesn't occur if I install using pip install -r requires.txt (assuming I have the same string in that file) and not when using direct pip install git+...... How to fix this parsing error? What I've tried so far: wrapping the string with " / """ / ' / ''' adding 'r' before the string
install_requires must be a string or a list of strings with names and, optionally, URLs to get the package from: install_requires=[ 'pycocotools @ git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' ] See https://pip.pypa.io/en/stable/reference/requirement-specifiers/ and https://www.python.org/dev/peps/pep-0440/#direct-references This requires installing with pip (including pip install .) and doesn't work with python setup.py install.
13
27
60,430,112
2020-2-27
https://stackoverflow.com/questions/60430112/single-sourcing-package-version-for-setup-cfg-python-projects
For traditional Python projects with a setup.py, there are various ways of ensuring that the version string does not have to be repeated throughout the code base. See PyPA's guide on "Single-sourcing the package version" for a list of recommendations. Many are trying to move away from setup.py to setup.cfg (probably under the influence of PEP517 and PEP518; setup.py was mostly used declaratively anyway, and when there was logic in setup.py, it was probably for the worse.) This means that most the suggestions won't work anymore since setup.cfg cannot contain "code". How can I single-source the package version for Python projects that use setup.cfg?
There are a couple of ways to do this (see below for the project structure used in these examples): 1. setup.cfg [metadata] version = 1.2.3.dev4 src/my_top_level_package/__init__.py import importlib.metadata __version__ = importlib.metadata.version('MyProject') 2. setup.cfg [metadata] version = file: VERSION.txt VERSION.txt 1.2.3.dev4 src/my_top_level_package/__init__.py import importlib.metadata __version__ = importlib.metadata.version('MyProject') 3. setup.cfg [metadata] version = attr: my_top_level_package.__version__ src/my_top_level_package/__init__.py __version__ = '1.2.3.dev4' And more... There are probably other ways to do this, by playing with different combinations. References: https://setuptools.readthedocs.io/en/latest/userguide/declarative_config.html https://docs.python.org/3/library/importlib.metadata.html The structure assumed in the previous examples is as follows... MyProject ├── setup.cfg ├── setup.py └── src └── my_top_level_package └── __init__.py setup.py #!/usr/bin/env python3 import setuptools if __name__ == '__main__': setuptools.setup( # see 'setup.cfg' ) setup.cfg [metadata] name = MyProject # See above for the value of 'version = ...' [options] package_dir = = src packages = find: [options.packages.find] where = src $ cd path/to/MyProject $ python3 setup.py --version 1.2.3.dev4 $ python3 -m pip install . # ... $ python3 -c 'import my_top_level_package; print(my_top_level_package.__version__)' 1.2.3.dev4 $ python3 -V Python 3.6.9 $ python3 -m pip list Package Version ------------- ---------- MyProject 1.2.3.dev4 pip 20.0.2 pkg-resources 0.0.0 setuptools 45.2.0 wheel 0.34.2 zipp 3.0.0
26
37
60,517,190
2020-3-3
https://stackoverflow.com/questions/60517190/are-poetry-lock-files-os-independent
I am creating a poetry.lock file on my Mac. Then, I am using it to build a Docker image based on Debian. My question is the following: is there any guarantee that the exact packages will be found by the Debian image? I might be mistaken, but I remember packages might not exist in every version for every OS. That seems reasonable when thinking about OS-specific packages (Windows vs Unix). But how about Debian and Mac (both Unix based)? Should I expect at least the majority of the packages to exist in the same version?
Broadly, they are portable to any OS, yes. But you must understand this has "nothing" to do with poetry nor pipenv nor pip but with the fact that each package version can be distributed for multiple platforms. It is pip that will pick a particular wheel matching the compatible platform tags for the system issuing the install (see the output of pip debug). For most pure Python packages: only a wheel and a source distribution (tarball) can be found in the indexes, and pip will elect the wheel unless you tell it otherwise. So the install will be pretty much the same across different OSes. Only the Python interpreter will be different. For packages featuring C (or whatever) extensions: the package will have been precompiled for a series of platforms and pip will pick the best match for your system and Python version. If no prebuilt wheel is compatible with your system, pip is forced to fall back to the source distribution and compile it, at which point you will need whatever build toolchain that package requires (a C compiler, CPython headers and whatnot). The main purpose of a lockfile is a reproducible install This is achieved through Version pinning: if no platform tag was compatible and the source distribution failed to build, pip would try to pick the next greatest version compatible with the requirement version range. In the presence of a lockfile, that range is exactly one version. Thus the installation would fail with no chance of picking another version. Artifact hashing: the downloaded artifacts are hashed and checked against the saved hashes in the lockfile. This simply prevents a rogue package index from tampering with the source code or the metadata of formerly published wheels or tarballs. Code from binary extensions will be equivalent yet inherently different across platforms It is reasonable to trust that the binary extensions will be equivalent and will come from the very same source code, executing the same behaviour with only potential platform-specific bugs. Yet, strictly speaking, the compiled code will usually be different. If you end up compiling the packages: for different platforms, the compilation will definitely be different for a given platform but different hardware, the compilation will most usually still be different for similar systems, the compilation toolchain can be different or be configured to perform different optimizations for the same system, the compilation might not be reproducible If you end up downloading a prebuilt wheel: you only trust the package maintainers to have built all public wheels for different platforms out of the same source code version prebuilt binary extensions will be different for each platform tag set, and might present different bugs or issues This is rarely the case for pure Python packages: theoretically possible, but maintainers would rather build a single wheel with compatibility boilerplate than craft different code bases for each Python version, even for Python-2&3 with two actual py{2,3}-none-any public wheels.
Example watchdog = [ {file = "watchdog-2.1.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:9693f35162dc6208d10b10ddf0458cc09ad70c30ba689d9206e02cd836ce28a3"}, {file = "watchdog-2.1.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:aba5c812f8ee8a3ff3be51887ca2d55fb8e268439ed44110d3846e4229eb0e8b"}, {file = "watchdog-2.1.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4ae38bf8ba6f39d5b83f78661273216e7db5b00f08be7592062cb1fc8b8ba542"}, {file = "watchdog-2.1.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:ad6f1796e37db2223d2a3f302f586f74c72c630b48a9872c1e7ae8e92e0ab669"}, {file = "watchdog-2.1.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:922a69fa533cb0c793b483becaaa0845f655151e7256ec73630a1b2e9ebcb660"}, {file = "watchdog-2.1.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:b2fcf9402fde2672545b139694284dc3b665fd1be660d73eca6805197ef776a3"}, {file = "watchdog-2.1.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3386b367e950a11b0568062b70cc026c6f645428a698d33d39e013aaeda4cc04"}, {file = "watchdog-2.1.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8f1c00aa35f504197561060ca4c21d3cc079ba29cf6dd2fe61024c70160c990b"}, {file = "watchdog-2.1.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b52b88021b9541a60531142b0a451baca08d28b74a723d0c99b13c8c8d48d604"}, {file = "watchdog-2.1.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8047da932432aa32c515ec1447ea79ce578d0559362ca3605f8e9568f844e3c6"}, {file = "watchdog-2.1.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e92c2d33858c8f560671b448205a268096e17870dcf60a9bb3ac7bfbafb7f5f9"}, {file = "watchdog-2.1.6-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b7d336912853d7b77f9b2c24eeed6a5065d0a0cc0d3b6a5a45ad6d1d05fb8cd8"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_aarch64.whl", hash = "sha256:cca7741c0fcc765568350cb139e92b7f9f3c9a08c4f32591d18ab0a6ac9e71b6"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_armv7l.whl", hash = "sha256:25fb5240b195d17de949588628fdf93032ebf163524ef08933db0ea1f99bd685"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_i686.whl", hash = "sha256:be9be735f827820a06340dff2ddea1fb7234561fa5e6300a62fe7f54d40546a0"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_ppc64.whl", hash = "sha256:d0d19fb2441947b58fbf91336638c2b9f4cc98e05e1045404d7a4cb7cddc7a65"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:3becdb380d8916c873ad512f1701f8a92ce79ec6978ffde92919fd18d41da7fb"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_s390x.whl", hash = "sha256:ae67501c95606072aafa865b6ed47343ac6484472a2f95490ba151f6347acfc2"}, {file = "watchdog-2.1.6-py3-none-manylinux2014_x86_64.whl", hash = "sha256:e0f30db709c939cabf64a6dc5babb276e6d823fd84464ab916f9b9ba5623ca15"}, {file = "watchdog-2.1.6-py3-none-win32.whl", hash = "sha256:e02794ac791662a5eafc6ffeaf9bcc149035a0e48eb0a9d40a8feb4622605a3d"}, {file = "watchdog-2.1.6-py3-none-win_amd64.whl", hash = "sha256:bd9ba4f332cf57b2c1f698be0728c020399ef3040577cde2939f2e045b39c1e5"}, {file = "watchdog-2.1.6-py3-none-win_ia64.whl", hash = "sha256:a0f1c7edf116a12f7245be06120b1852275f9506a7d90227648b250755a03923"}, {file = "watchdog-2.1.6.tar.gz", hash = "sha256:a36e75df6c767cbf46f61a91c70b3ba71811dfa0aca4a324d9407a06a8b7a2e7"}, ] weasyprint = [ {file = "weasyprint-54.1-py3-none-any.whl", hash = "sha256:27c078ded67a43c9a05c349eda01ea327805d48e5c3ca3b704f57eb82bd78592"}, {file = "weasyprint-54.1.tar.gz", hash = "sha256:fa57db862e06bd01c5e7d82dad399b3b9952a39827023c17bee9b1c061ff1bbd"}, ] In this fragment of a poetry.lock 
file you can see the hashes for watchdog==2.1.6 and weasyprint==54.1. For weasyprint, the wheel tagged py3-none-any will probably be the wheel actually downloaded on any system. Yet, for watchdog, the eligible wheel will depend both on your OS and your hardware, aside from your Python version! Any x86_64 linux box will pick watchdog-2.1.6-py3-none-manylinux2014_x86_64.whl An arm64 MacOS box with CPython-3.8 will pick watchdog-2.1.6-cp38-cp38-macosx_11_0_arm64.whl An arm64 MacOS box with PyPy-3.7 will be forced to build watchdog-2.1.6.tar.gz, which could lead to the whole installation failing if it can't achieve that. A future MacOS with CPython-3.11 will also need to build watchdog-2.1.6.tar.gz You can also check those hashes on PyPI's website https://pypi.org/project/watchdog/2.1.6/#files
16
11
60,509,425
2020-3-3
https://stackoverflow.com/questions/60509425/how-to-use-repeat-function-when-building-data-in-keras
I am training a binary classifier on a dataset of cats and dogs: Total Dataset: 10000 images Training Dataset: 8000 images Validation/Test Dataset: 2000 images The Jupyter notebook code: # Part 2 - Fitting the CNN to the images train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) test_datagen = ImageDataGenerator(rescale = 1./255) training_set = train_datagen.flow_from_directory('dataset/training_set', target_size = (64, 64), batch_size = 32, class_mode = 'binary') test_set = test_datagen.flow_from_directory('dataset/test_set', target_size = (64, 64), batch_size = 32, class_mode = 'binary') history = model.fit_generator(training_set, steps_per_epoch=8000, epochs=25, validation_data=test_set, validation_steps=2000) I trained it on a CPU without a problem but when I run on GPU it throws me this error: Found 8000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. WARNING:tensorflow:From <ipython-input-8-140743827a71>:23: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators. WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] Train for 8000 steps, validate for 2000 steps Epoch 1/25 250/8000 [..............................] - ETA: 21:50 - loss: 7.6246 - accuracy: 0.5000 WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 200000 batches). You may need to use the repeat() function when building your dataset. 250/8000 [..............................] - ETA: 21:52 - loss: 7.6246 - accuracy: 0.5000 I would like to know how to use the repeat() function in keras using Tensorflow 2.0?
Your problem stems from the fact that the parameters steps_per_epoch and validation_steps need to be equal to the total number of data points divided by the batch_size. Your code would work in Keras 1.X, prior to August 2017. Change your model.fit() function to: history = model.fit_generator(training_set, steps_per_epoch=int(8000/batch_size), epochs=25, validation_data=test_set, validation_steps=int(2000/batch_size)) As of TensorFlow 2.1, fit_generator() is being deprecated. You can use .fit() method also on generators. TensorFlow >= 2.1 code: history = model.fit(training_set.repeat(), steps_per_epoch=int(8000/batch_size), epochs=25, validation_data=test_set.repeat(), validation_steps=int(2000/batch_size)) Notice that int(8000/batch_size) is equivalent to 8000 // batch_size (integer division)
14
24
60,448,666
2020-2-28
https://stackoverflow.com/questions/60448666/whats-the-use-of-the-del-method-in-python
From Python documentation: It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits. As far as I understand, there is also no way to guarantee an object stops existing before the interpreter exits, since it's up to the garbage collector to decide if and when an object is deleted. So what's the point of having this method at all? You can write cleanup code inside it, but there's no guarantee it will ever be executed. I know you can solve this using try-finally or with clauses, but I still wonder what would be a meaningful use case of the __del__() method.
After reading all of these answers—none of which satisfactorily answered all of my questions/doubts—and rereading Python documentation, I've come to a conclusion of my own. This the summary of my thoughts on the matter. Implementation-agnostic The passage you quoted from the __del__ method documentation says: It is not guaranteed that the __del__() methods are called for objects that still exist when the interpreter exits. But not only is it not guaranteed that __del__() is called for objects being destroyed during interpreter exit, it is not even guaranteed that objects are garbage collected at all, even during normal execution—from the "Data model" section of the Python Language Reference: Objects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether — it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable. Thus, replying to your question: So what's the point of having this method at all? You can write cleanup code inside it, but there's no guarantee it will ever be executed. From an implementation-agnostic perspective, are there any uses for the __del__ method, as a fundamental component of one's code that can be relied on? No. None at all. It is essentially useless from this perspective. From a practical point of view, though, as other answers have pointed out, you can use __del__ as a last-resort mechanism to (try to) ensure that any necessary cleanup is performed before the object is destroyed, e.g. releasing resources, if the user forgot to explicitly call a close method. This is not so much a fail-safe as it is a "it doesn't hurt to add an extra safety mechanism even if it's not guaranteed to work"—and in fact, most Python implementations will catch that most of the time. But it's nothing to be relied on. Implementation-specific That being said, if you know that your program will run on a specific set of Python implementations, then you can rely on the implementation details of garbage collection—for instance, if you use CPython, you can "rely on" the fact that, during normal execution (i.e. outside of interpreter exit), if the reference count of a non-cyclically-referenced object reaches zero, it will be garbage collected and its __del__ method will be called, as other answers have pointed out. From the same subsection as above: CPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage, which collects most objects as soon as they become unreachable, but is not guaranteed to collect garbage containing circular references. But still, this is really precarious and something to not be really relied on, since as mentioned it is only guaranteed for objects that are not part of a cyclic reference graph. Also: Other implementations act differently and CPython may change. Do not depend on immediate finalization of objects when they become unreachable (so you should always close files explicitly). Bottom line From a purist point of view, the __del__ method is completely useless. From a slightly less purist point of view, it is still almost useless. From a practical point of view, it might be useful as a complementary—but never essential—feature of your code.
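To make the practical point concrete, here is a small illustrative sketch; the behaviour shown for del relies on CPython-specific reference counting, and the class is made up for the example:

class TempResource:
    """Wraps something that should be released explicitly via close()."""

    def __init__(self, name):
        self.name = name
        self.closed = False

    def close(self):
        if not self.closed:
            print(f"releasing {self.name}")
            self.closed = True

    def __del__(self):
        # Last-resort cleanup if the user forgot to call close().
        # Not guaranteed to run on every implementation, nor at interpreter exit.
        self.close()

r = TempResource("demo")
del r  # in CPython the refcount drops to zero here and __del__ runs immediately

# The reliable alternative is explicit cleanup, e.g. a context manager:
import contextlib
with contextlib.closing(TempResource("demo2")):
    pass  # close() is guaranteed to run when the block exits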
16
6
60,513,146
2020-3-3
https://stackoverflow.com/questions/60513146/keras-iterator-with-augmented-images-and-other-features
Say you have a dataset that has images and some data in a .csv for each image. Your goal is to create a NN that has a convolution branch and another one (in my case an MLP). Now, there are plenty of guides (one here, another one) on how to create the network, that's not the problem. The issue here is how do I create an iterator in the form of [[convolution_input, other_features], target] when the convolution_input is from a Keras ImageDataGenerator flow that adds augmented images. More specifically, when the nth image (that may be an augmented one or not) is fed to the NN, I want its original features inside other_features. I found a few attempts (here and here, the second one looked promising but I wasn't able to figure out how to handle augmented images) in doing exactly that but they do not seem to take into account the possible dataset manipulation that the Keras generator does.
Let's say, you have a CSV, such that your images and the other features are in the file. Where id represents the image name, and followed by the features, and followed by your target, (class for classification, number for regression) | id | feat1 | feat2 | feat3 | class | |---------------------|-------|-------|-------|-------| | 1_face_IMG_NAME.jpg | 1 | 0 | 1 | A | | 3_face_IMG_NAME.jpg | 1 | 0 | 1 | B | | 2_face_IMG_NAME.jpg | 1 | 0 | 1 | A | | ... | ... | ... | ... | ... | First, let us define a data generator, and later we can override it. Let us read the data from the CSV in a pandas data frame and use keras's flow_from_dataframe to read from the data frame. df = pandas.read_csv("dummycsv.csv") datagen = ImageDataGenerator(rescale=1/255.) generator = datagen.flow_from_dataframe( df, directory="out/", x_col="id", y_col=df.columns[1:], class_mode="raw", batch_size=1) You can always add your augmentation in ImageDataGenerator. Things to note in the above code in flow_from_dataframe is x_col = the image name y_col = typically columns with the class name, but let us override it later by first providing all the other columns in the CSV. i.e. feat_1, feat_2.... till class_label class_mode = raw, suggests the generator to return all the values in y as is. Now let us override/inherit the above generator and create a new one, such that it returns [img, otherfeatures], [target] Here is the code with comments as explanations: def my_custom_generator(): # to keep track of complete epoch count = 0 while True: if count == len(df.index): # if the count is matching with the length of df, # the one pass is completed, so reset the generator generator.reset() break count += 1 # get the data from the generator data = generator.next() # the data looks like this [[img,img] , [other_cols,other_cols]] based on the batch size imgs = [] cols = [] targets = [] # iterate the data and append the necessary columns in the corresponding arrays for k in range(batch_size): # the first array contains all images imgs.append(data[0][k]) # the second array contains all features with last column as class, so [:-1] cols.append(data[1][k][:-1]) # the last column in the second array from data is the class targets.append(data[1][k][-1]) # this will yield the result as you expect. yield [imgs,cols], targets Create a similar function for your validation generator. Use train_test_split to split your data frame if you need it and create 2 generators and override them. Pass the function in model.fit_generator like this model.fit_generator(my_custom_generator(),.....other params)
10
5
60,486,649
2020-3-2
https://stackoverflow.com/questions/60486649/pyspark-how-can-i-suppress-run-output-in-pyspark-cell-when-importing-variables
I am using multiple notebooks in PySpark and import variables across these notebooks using %run path. Every time I run the command, all variables that I displayed in the original notebook are being displayed again in the current notebook (the notebook in which I %run). But I do not want them to be displayed in the current notebook. I only want to be able to work with the imported variables. How do I suppress the output being displayed every time? Note, I am not sure if it matters, but I am working in Databricks. Thank you! Command example: %run /Users/myemail/Nodebook
You can use the "Hide Result" option in the upper right toggle of the cell:
11
12
60,436,768
2020-2-27
https://stackoverflow.com/questions/60436768/create-a-gzip-file-like-object-for-unit-testing
I want to test a Python function that reads a gzip file and extracts something from the file (using pytest). import gzip def my_function(file_path): output = [] with gzip.open(file_path, 'rt') as f: for line in f: output.append('something from line') return output Can I create a gzip file like object that I can pass to my_function? The object should have defined content and should work with gzip.open() I know that I can create a temporary gzip file in a fixture but this depends on the filesystem and other properties of the environment. Creating a file-like object from code would be more portable.
You can use the io and gzip libraries to create in-memory file objects. Example: import io, gzip def inmem(): stream = io.BytesIO() with gzip.open(stream, 'wb') as f: f.write(b'spam\neggs\n') stream.seek(0) return stream
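Since gzip.open accepts an existing file object as well as a path, the returned stream can be passed straight to the function under test. A minimal pytest-style usage sketch, assuming my_function from the question is importable:

def test_my_function():
    stream = inmem()
    # 'spam\neggs\n' contains two lines, so two items are extracted
    assert my_function(stream) == ['something from line', 'something from line']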
7
7
60,411,012
2020-2-26
https://stackoverflow.com/questions/60411012/running-apache-beam-python-pipelines-in-kubernetes
This question might seem like a duplicate of this. I am trying to run Apache Beam python pipeline using flink on an offline instance of Kubernetes. However, since I have user code with external dependencies, I am using the Python SDK harness as an External Service - which is causing errors (described below). The kubernetes manifest I use to launch the beam python SDK: apiVersion: apps/v1 kind: Deployment metadata: name: beam-sdk spec: replicas: 1 selector: matchLabels: app: beam component: python-beam-sdk template: metadata: labels: app: beam component: python-beam-sdk spec: hostNetwork: True containers: - name: python-beam-sdk image: apachebeam/python3.7_sdk:latest imagePullPolicy: "Never" command: ["/opt/apache/beam/boot", "--worker_pool"] ports: - containerPort: 50000 name: yay apiVersion: v1 kind: Service metadata: name: beam-python-service spec: type: NodePort ports: - name: yay port: 50000 targetPort: 50000 selector: app: beam component: python-beam-sdk When I launch my pipeline with the following options: beam_options = PipelineOptions([ "--runner=FlinkRunner", "--flink_version=1.9", "--flink_master=10.101.28.28:8081", "--environment_type=EXTERNAL", "--environment_config=10.97.176.105:50000", "--setup_file=./setup.py" ]) I get the following error message (within the python sdk service): NAME READY STATUS RESTARTS AGE beam-sdk-666779599c-w65g5 1/1 Running 1 4d20h flink-jobmanager-74d444cccf-m4g8k 1/1 Running 1 4d20h flink-taskmanager-5487cc9bc9-fsbts 1/1 Running 2 4d20h flink-taskmanager-5487cc9bc9-zmnv7 1/1 Running 2 4d20h (base) [~]$ sudo kubectl logs -f beam-sdk-666779599c-w65g5 2020/02/26 07:56:44 Starting worker pool 1: python -m apache_beam.runners.worker.worker_pool_main --service_port=50000 --container_executable=/opt/apache/beam/boot Starting worker with command ['/opt/apache/beam/boot', '--id=1-1', '--logging_endpoint=localhost:39283', '--artifact_endpoint=localhost:41533', '--provision_endpoint=localhost:42233', '--control_endpoint=localhost:44977'] 2020/02/26 09:09:07 Initializing python harness: /opt/apache/beam/boot --id=1-1 --logging_endpoint=localhost:39283 --artifact_endpoint=localhost:41533 --provision_endpoint=localhost:42233 --control_endpoint=localhost:44977 2020/02/26 09:11:07 Failed to obtain provisioning information: failed to dial server at localhost:42233 caused by: context deadline exceeded I have no idea what the logging- or artifact endpoint (etc.) is. And by inspecting the source code it seems like that the endpoints has been hard-coded to be located at localhost.
(You said in a comment that the answer to the referenced post is valid, so I'll just address the specific error you ran into in case someone else hits it.) Your understanding is correct; the logging, artifact, etc. endpoints are essentially hardcoded to use localhost. These endpoints are meant to be only used internally by Beam and are not configurable. So the Beam worker is implicitly assumed to be on the same host as the Flink task manager. Typically, this is accomplished by making the Beam worker pool a sidecar of the Flink task manager pod, rather than a separate service.
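A rough sketch of that sidecar layout, based on the manifests in the question (the image, ports and labels are assumptions carried over from there, not a verified configuration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: flink:1.9
        args: ["taskmanager"]
      - name: beam-worker-pool   # sidecar shares localhost with the taskmanager
        image: apachebeam/python3.7_sdk:latest
        command: ["/opt/apache/beam/boot", "--worker_pool"]
        ports:
        - containerPort: 50000

With this layout, --environment_config=localhost:50000 points at the sidecar, and the hardcoded localhost endpoints resolve inside the same pod.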
7
2
60,452,488
2020-2-28
https://stackoverflow.com/questions/60452488/why-cant-arguments-be-passed-explicitly-as-x-and-y-in-pyplot
I am using the matplotlib.pyplot module, imported as plt, for plots. In the plt.plot() statement, if I pass the arguments as "x=array1, y=array2", I get "TypeError: plot got an unexpected keyword argument 'x'". The code gets executed correctly if I simply pass "array1 and array2", without explicitly saying they correspond to the x and y axes. Why is that?
If you look at the function definition, https://github.com/matplotlib/matplotlib/blob/9a24fb724331f50baf0da4d17188860357d328a9/lib/matplotlib/axes/_axes.py#L72, you can see that the x and y data are collected positionally into *args rather than being named parameters, so they cannot be passed as the keyword arguments x= and y=. See Python args and kwargs: Demystified for example. def plot(self, *args, scalex=True, scaley=True, data=None, **kwargs): """ Plot y versus x as lines and/or markers. Call signatures:: plot([x], y, [fmt], *, data=None, **kwargs) plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)
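A small demonstration of the difference; the keyword form is commented out because it raises the error from the question:

import numpy as np
import matplotlib.pyplot as plt

array1 = np.arange(10)
array2 = array1 ** 2

plt.plot(array1, array2)        # works: data is passed positionally
# plt.plot(x=array1, y=array2)  # fails: 'x' and 'y' are not named parameters of plot()
plt.show()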
12
5
60,414,234
2020-2-26
https://stackoverflow.com/questions/60414234/setting-pre-hook-for-docker-compose-file
I am running a dockerized Django app and I am looking for a way to run (a) directive(s) every time before I build a docker container. More concretely, I would like to run docker-compose -f production.yml run --rm django python manage.py check --deploy each time before I either build or up the production.yml file, and stop the build process if any errors occur. Like a pre-hook. I know I could achieve this with a bash script, yet I was wondering if there is a way of doing this inside the docker-compose file. I can't find anything in the docker documentation (except events, but I don't understand if they serve for what I want to achieve) about it and I assume that this is not possible. Yet, maybe it is in fact possible or maybe there is a hacky workaround? Thanks in advance for any tips.
Currently, this is not possible. There have been multiple requests to add such functionality, but the maintainers do not consider this a good idea. See: https://github.com/docker/compose/issues/468 https://github.com/docker/compose/issues/1341 https://github.com/docker/compose/issues/6736
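So the bash-script wrapper mentioned in the question remains the usual workaround. A minimal sketch (the script name is just a placeholder):

#!/usr/bin/env bash
# deploy.sh - run the Django deploy checks, then build only if they pass
set -euo pipefail

docker-compose -f production.yml run --rm django python manage.py check --deploy
docker-compose -f production.yml build "$@"

set -e (part of set -euo pipefail) makes the script abort before the build step if the check command exits with a non-zero status.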
13
11
60,432,137
2020-2-27
https://stackoverflow.com/questions/60432137/jupyter-notebook-memory-management
I am currently working on a jupyter notebook in kaggle. After performing the desired transformations on my numpy array, I pickled it so that it can be stored on disk. The reason I did that is so that I can free up the memory being consumed by the large array. The memory consumed after pickling the array was about 8.7 gb. I decided to run this code snippet provided by @jan-glx here , to find out what variables were consuming my memory: import sys def sizeof_fmt(num, suffix='B'): ''' by Fred Cirera, https://stackoverflow.com/a/1094933/1870254, modified''' for unit in ['','Ki','Mi','Gi','Ti','Pi','Ei','Zi']: if abs(num) < 1024.0: return "%3.1f %s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f %s%s" % (num, 'Yi', suffix) for name, size in sorted(((name, sys.getsizeof(value)) for name, value in locals().items()), key= lambda x: -x[1])[:10]: print("{:>30}: {:>8}".format(name, sizeof_fmt(size))) After performing this step I noticed that the size of my array was 3.3 gb, and the size of all the other variables summed together was about 0.1 gb. I decided to delete the array and see if that would fix the problem, by performing the following: del my_array gc.collect() After doing this, the memory consumption decreased from 8.7 gb to 5.4 gb. Which in theory makes sense, but still didn't explain what the rest of the memory was being consumed by. I decided to continue anyways and reset all my variables to see whether this would free up the memory or not with: %reset As expected it freed up the memory of the variables that were printed out in the function above, and I was still left with 5.3 gb of memory in use. One thing to note is that I noticed a memory spike when pickling the file itself, so a summary of the process would be something like this: performed operations on array -> memory consumption increased from about 1.9 gb to 5.6 gb pickled file -> memory consumption increased from 5.6 gb to about 8.7 gb Memory spikes suddenly while file is being pickled to 15.2 gb then drops back to 8.7 gb. deleted array -> memory consumption decreased from 8.7 gb to 5.4 gb performed reset -> memory consumption decreased from 5.4 gb to 5.3 gb Please note that the above is loosely based of monitoring the memory on kaggle and may be inaccurate. I have also checked this question but it was not helpful for my case. Would this be considered a memory leak? If so, what do I do in this case? EDIT 1: After some further digging, I noticed that there are others facing this problem. This problem stems from the pickling process, and that pickling creates a copy in memory but, for some reason, does not release it. Is there a way to release the memory after the pickling process is complete. EDIT 2: When deleting the pickled file from disk, using: !rm my_array It ended up freeing the disk space and freeing up space on memory as well. I don't know whether the above tidbit would be of use or not, but I decided to include it anyways as every bit of info might help.
There is one basic drawback that you should be aware of: the CPython interpreter can actually barely free memory and return it to the OS. For most workloads, you can assume that memory is not freed during the lifetime of the interpreter's process. However, the interpreter can re-use the memory internally. So looking at the memory consumption of the CPython process from the operating system's perspective really does not help at all. A rather common work-around is to run memory-intensive jobs in a sub-process / worker process (via multiprocessing for instance) and "only" return the result to the main process. Once the worker dies, the memory is actually freed. Second, using sys.getsizeof on ndarrays can be impressively misleading. Use the ndarray.nbytes property instead and be aware that this may also be misleading when dealing with views. Besides, I am not entirely sure why you "pickle" numpy arrays. There are better tools for this job. Just to name two: h5py (a classic, based on HDF5) and zarr. Both libraries allow you to work with ndarray-like objects directly on disk (with compression) - essentially eliminating the pickling step. Besides, zarr also allows you to create compressed ndarray-compatible data structures in memory. Most ufuncs from numpy, scipy & friends will happily accept them as input parameters.
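A minimal sketch of the worker-process pattern described above; the array size and file name are arbitrary, the point is that the child process, not the notebook kernel, holds the large allocation:

import multiprocessing as mp
import numpy as np

def heavy_job(path):
    # the big array lives only inside this worker process
    arr = np.random.rand(20_000, 1_000)
    np.save(path, arr)   # persist the result to disk instead of returning it
    return arr.nbytes    # hand back only a tiny summary value

if __name__ == '__main__':
    with mp.Pool(processes=1) as pool:
        nbytes = pool.apply(heavy_job, ('my_array.npy',))
    # once the worker exits, its memory is returned to the OS
    print(f"worker held about {nbytes / 1e9:.2f} GB")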
10
5
60,496,204
2020-3-2
https://stackoverflow.com/questions/60496204/webdriverwait-for-multiple-conditions-or-logical-evaluation
Using python, the method WebDriverWait is used to wait for 1 element to be present on the webpage. How can this method be used without multiple try/except? Is there an OR option for multiple cases using this method? https://selenium-python.readthedocs.io/waits.html
Without using multiple try/except blocks, you can induce WebDriverWait for two elements with OR semantics using either of the following solutions: Using CSS_SELECTOR: element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".element_A_class, .element_B_class"))) Using XPATH through lambda: element = WebDriverWait(driver,20).until(lambda driver: driver.find_element(By.XPATH,"element_A_xpath") or driver.find_element(By.XPATH,"element_B_xpath")) Reference You can find a couple of relevant discussions in: selenium two xpath tests in one Python / Selenium: Logic Operators in WebDriverWait Expected Conditions
9
13
60,440,292
2020-2-27
https://stackoverflow.com/questions/60440292/runtimeerror-expected-scalar-type-long-but-found-float
I can't get the dtypes to match, either the loss wants long or the model wants float if I change my tensors to long. The shape of the tensors are 42000, 1, 28, 28 and 42000. I'm not sure where I can change what dtypes are required for the model or loss. I'm not sure if dataloader is required, using Variable didn't work either. dataloaders_train = torch.utils.data.DataLoader(Xt_train, batch_size=64) dataloaders_test = torch.utils.data.DataLoader(Yt_train, batch_size=64) class Network(nn.Module): def __init__(self): super().__init__() self.hidden = nn.Linear(42000, 256) self.output = nn.Linear(256, 10) self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x model = Network() input_size = 784 hidden_sizes = [28, 64] output_size = 10 model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) print(model) criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.003) epochs = 5 for e in range(epochs): running_loss = 0 for images, labels in zip(dataloaders_train, dataloaders_test): images = images.view(images.shape[0], -1) #images, labels = Variable(images), Variable(labels) print(images.dtype) print(labels.dtype) optimizer.zero_grad() output = model(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Training loss: {running_loss}") Which gives RuntimeError Traceback (most recent call last) <ipython-input-128-68109c274f8f> in <module> 11 12 output = model(images) ---> 13 loss = criterion(output, labels) 14 loss.backward() 15 optimizer.step() /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 202 203 def forward(self, input, target): --> 204 return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) 205 206 /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1836 .format(input.size(0), target.size(0))) 1837 if dim == 2: -> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1839 elif dim == 4: 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: expected scalar type Long but found Float
LongTensor is synonymous with integer. PyTorch won't accept a FloatTensor as categorical target, so it's telling you to cast your tensor to LongTensor. This is how you should change your target dtype: Yt_train = Yt_train.type(torch.LongTensor) This is very well documented on the PyTorch website, you definitely won't regret spending a minute or two reading this page. PyTorch essentially defines nine CPU tensor types and nine GPU tensor types: ╔══════════════════════════╦═══════════════════════════════╦════════════════════╦═════════════════════════╗ ║ Data type ║ dtype ║ CPU tensor ║ GPU tensor ║ ╠══════════════════════════╬═══════════════════════════════╬════════════════════╬═════════════════════════╣ ║ 32-bit floating point ║ torch.float32 or torch.float ║ torch.FloatTensor ║ torch.cuda.FloatTensor ║ ║ 64-bit floating point ║ torch.float64 or torch.double ║ torch.DoubleTensor ║ torch.cuda.DoubleTensor ║ ║ 16-bit floating point ║ torch.float16 or torch.half ║ torch.HalfTensor ║ torch.cuda.HalfTensor ║ ║ 8-bit integer (unsigned) ║ torch.uint8 ║ torch.ByteTensor ║ torch.cuda.ByteTensor ║ ║ 8-bit integer (signed) ║ torch.int8 ║ torch.CharTensor ║ torch.cuda.CharTensor ║ ║ 16-bit integer (signed) ║ torch.int16 or torch.short ║ torch.ShortTensor ║ torch.cuda.ShortTensor ║ ║ 32-bit integer (signed) ║ torch.int32 or torch.int ║ torch.IntTensor ║ torch.cuda.IntTensor ║ ║ 64-bit integer (signed) ║ torch.int64 or torch.long ║ torch.LongTensor ║ torch.cuda.LongTensor ║ ║ Boolean ║ torch.bool ║ torch.BoolTensor ║ torch.cuda.BoolTensor ║ ╚══════════════════════════╩═══════════════════════════════╩════════════════════╩═════════════════════════╝
54
100
60,462,840
2020-2-29
https://stackoverflow.com/questions/60462840/ffmpeg-delay-in-decoding-h264
I am taking raw RGB frames, encoding them to h264, then decoding them back to raw RGB frames. [RGB frame] ------ encoder ------> [h264 stream] ------ decoder ------> [RGB frame] ^ ^ ^ ^ encoder_write encoder_read decoder_write decoder_read I would like to retrieve the decoded frames as soon as possible. However, it seems that there is always a one-frame delay no matter how long one waits.¹ In this example, I feed the encoder a frame every 2 seconds: $ python demo.py 2>/dev/null time=0 frames=1 encoder_write time=2 frames=2 encoder_write time=2 frames=1 decoder_read <-- decoded output is delayed by extra frame time=4 frames=3 encoder_write time=4 frames=2 decoder_read time=6 frames=4 encoder_write time=6 frames=3 decoder_read ... What I want instead: $ python demo.py 2>/dev/null time=0 frames=1 encoder_write time=0 frames=1 decoder_read <-- decode immediately after encode time=2 frames=2 encoder_write time=2 frames=2 decoder_read time=4 frames=3 encoder_write time=4 frames=3 decoder_read time=6 frames=4 encoder_write time=6 frames=4 decoder_read ... The encoder and decoder ffmpeg processes are run with the following arguments: encoder: ffmpeg -f rawvideo -pix_fmt rgb24 -s 224x224 -i pipe: \ -f h264 -tune zerolatency pipe: decoder: ffmpeg -probesize 32 -flags low_delay \ -f h264 -i pipe: \ -f rawvideo -pix_fmt rgb24 -s 224x224 pipe: Complete reproducible example below. No external video files needed. Just copy, paste, and run python demo.py 2>/dev/null! import subprocess from queue import Queue from threading import Thread from time import sleep, time import numpy as np WIDTH = 224 HEIGHT = 224 NUM_FRAMES = 256 def t(epoch=time()): return int(time() - epoch) def make_frames(num_frames): x = np.arange(WIDTH, dtype=np.uint8) x = np.broadcast_to(x, (num_frames, HEIGHT, WIDTH)) x = x[..., np.newaxis].repeat(3, axis=-1) x[..., 1] = x[:, :, ::-1, 1] scale = np.arange(1, len(x) + 1, dtype=np.uint8) scale = scale[:, np.newaxis, np.newaxis, np.newaxis] x *= scale return x def encoder_write(writer): """Feeds encoder frames to encode""" frames = make_frames(num_frames=NUM_FRAMES) for i, frame in enumerate(frames): writer.write(frame.tobytes()) writer.flush() print(f"time={t()} frames={i + 1:<3} encoder_write") sleep(2) writer.close() def encoder_read(reader, queue): """Puts chunks of encoded bytes into queue""" while chunk := reader.read1(): queue.put(chunk) # print(f"time={t()} chunk={len(chunk):<4} encoder_read") queue.put(None) def decoder_write(writer, queue): """Feeds decoder bytes to decode""" while chunk := queue.get(): writer.write(chunk) writer.flush() # print(f"time={t()} chunk={len(chunk):<4} decoder_write") writer.close() def decoder_read(reader): """Retrieves decoded frames""" buffer = b"" frame_len = HEIGHT * WIDTH * 3 targets = make_frames(num_frames=NUM_FRAMES) i = 0 while chunk := reader.read1(): buffer += chunk while len(buffer) >= frame_len: frame = np.frombuffer(buffer[:frame_len], dtype=np.uint8) frame = frame.reshape(HEIGHT, WIDTH, 3) psnr = 10 * np.log10(255**2 / np.mean((frame - targets[i])**2)) buffer = buffer[frame_len:] i += 1 print(f"time={t()} frames={i:<3} decoder_read psnr={psnr:.1f}") cmd = ( "ffmpeg " "-f rawvideo -pix_fmt rgb24 -s 224x224 " "-i pipe: " "-f h264 " "-tune zerolatency " "pipe:" ) encoder_process = subprocess.Popen( cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE ) cmd = ( "ffmpeg " "-probesize 32 " "-flags low_delay " "-f h264 " "-i pipe: " "-f rawvideo -pix_fmt rgb24 -s 224x224 " "pipe:" ) decoder_process = subprocess.Popen( cmd.split(), 
stdin=subprocess.PIPE, stdout=subprocess.PIPE ) queue = Queue() threads = [ Thread(target=encoder_write, args=(encoder_process.stdin,),), Thread(target=encoder_read, args=(encoder_process.stdout, queue),), Thread(target=decoder_write, args=(decoder_process.stdin, queue),), Thread(target=decoder_read, args=(decoder_process.stdout,),), ] for thread in threads: thread.start() ¹ I did some testing and it seems the decoder is waiting for the next frame's NAL header 00 00 00 01 41 88 (in hex) before it decodes the current frame. One would hope that the prefix 00 00 00 01 would be enough, but it also waits for the next two bytes! ² Prior revision of question.
Add -probesize 32 to your decoder arguments. Set decoder command to: cmd = "ffmpeg -probesize 32 -f h264 -i pipe: -f rawvideo -pix_fmt rgb24 -s 224x224 pipe:" I found the solution here: How to minimize the delay in a live streaming with FFmpeg. According to FFmpeg StreamingGuide: Also setting -probesize and -analyzeduration to low values may help your stream start up more quickly. After adding -probesize 32 argument, I am getting 9 lines of Decoder written 862 bytes... instead of about 120 lines. Update: I could not find a solution, but I managed to form a simple demonstration of the problem. Instead of using two sub-processes and 4 threads, the code sample uses one sub-process and no Python threads. The sample uses the following "filter graph": _________ ______________ _________ | BMP | | | | BMP | | encoded | demuxer | encoded data | muxer | encoded | | frames | ---------> | packets | -------> | frames | |_________| |______________| |_________| input PIPE output PIPE See: Stream copy chapter I figure out that for "pushing" the first frame from the input to the output, we need to write at least additional 4112 bytes from the beginning of the second frame. Here is the code sample: import cv2 import numpy as np import subprocess as sp width, height, n_frames, fps = 256, 256, 10, 1 # 10 frames, resolution 256x256, and 1 fps def make_bmp_frame_as_bytes(i): """ Build synthetic image for testing, encode as BMP and convert to bytes sequence """ p = width//50 img = np.full((height, width, 3), 60, np.uint8) cv2.putText(img, str(i+1), (width//2-p*10*len(str(i+1)), height//2+p*10), cv2.FONT_HERSHEY_DUPLEX, p, (255, 30, 30), p*2) # Blue number # BMP Encode img into bmp_img _, bmp_img = cv2.imencode(".BMP", img) bmp_img_bytes = bmp_img.tobytes() return bmp_img_bytes # BMP in, BMP out: process = sp.Popen(f'ffmpeg -debug_ts -probesize 32 -f bmp_pipe -framerate {fps} -an -sn -dn -i pipe: -f image2pipe -codec copy -an -sn -dn pipe:', stdin=sp.PIPE, stdout=sp.PIPE) # Build image (number -1) before the loop. bmp_img_bytes = make_bmp_frame_as_bytes(-1) # Write one BMP encoded image before the loop. process.stdin.write(bmp_img_bytes) process.stdin.flush() for i in range(n_frames): # Build image (number i) before the loop. bmp_img_bytes = make_bmp_frame_as_bytes(i) # Write 4112 first bytes of the BMP encoded image. # Writing 4112 "push" forward the previous image (writing less than 4112 bytes hals on the first frame). process.stdin.write(bmp_img_bytes[0:4112]) process.stdin.flush() # Read output BMP encoded image from stdout PIPE. buffer = process.stdout.read(width*height*3 + 54) # BMP header is 54 bytes buffer = np.frombuffer(buffer, np.uint8) frame = cv2.imdecode(buffer, cv2.IMREAD_COLOR) # Decode BMP image (using OpenCV). # Display the image cv2.imshow('frame', frame) cv2.waitKey(1000) # Write the next bytes of the BMP encoded image (from byte 4112 to the end). process.stdin.write(bmp_img_bytes[4112:]) process.stdin.flush() process.stdin.close() buffer = process.stdout.read(width*height*3 + 54) # Read last image process.stdout.close() # Wait for sub-process to finish process.wait() cv2.destroyAllWindows() I have no idea why 4112 bytes. I used FFmpeg version 4.2.2, statically linked (ffmpeg.exe) under Windows 10. I didn't check if 4112 bytes is persistent for other versions / platforms. I suspect the "latency issue" is inherent to FFmpeg Demuxers. I could not find any argument/flag to prevent the issue. The rawvideo demuxer is the only demuxer (I found) that didn't add latency. 
I hope the simpler sample code helps finding a solution to the latency issue... Update: H.264 stream example: The sample uses the following "filter graph": _________ ______________ _________ | H.264 | | | | | | encoded | demuxer | encoded data | decoder | decoded | | frames | ---------> | packets | ---------> | frames | |_________| |______________| |_________| input PIPE output PIPE The code sample writes AUD NAL unit after writing each encoded frame. The AUD (Access Unit Delimiter) is an optional NAL unit that comes at the beginning the encoded frame. Apparently, writing AUD after the writing the encoded frame "pushes" the encoded frames from the demuxer to the decoder. Here is a code sample: import cv2 import numpy as np import subprocess as sp import json width, height, n_frames, fps = 256, 256, 100, 1 # 100 frames, resolution 256x256, and 1 fps def make_raw_frame_as_bytes(i): """ Build synthetic "raw BGR" image for testing, convert the image to bytes sequence """ p = width//60 img = np.full((height, width, 3), 60, np.uint8) cv2.putText(img, str(i+1), (width//2-p*10*len(str(i+1)), height//2+p*10), cv2.FONT_HERSHEY_DUPLEX, p, (255, 30, 30), p*2) # Blue number raw_img_bytes = img.tobytes() return raw_img_bytes # Build input file input.264 (AVC encoded elementary stream) ################################################################################ process = sp.Popen(f'ffmpeg -y -video_size {width}x{height} -pixel_format bgr24 -f rawvideo -r {fps} -an -sn -dn -i pipe: -f h264 -g 1 -pix_fmt yuv444p -crf 10 -tune zerolatency -an -sn -dn input.264', stdin=sp.PIPE) #-x264-params aud=1 #Adds [ 0, 0, 0, 1, 9, 16 ] to the beginning of each encoded frame aud_bytes = b'\x00\x00\x00\x01\t\x10' #Access Unit Delimiter #process = sp.Popen(f'ffmpeg -y -video_size {width}x{height} -pixel_format bgr24 -f rawvideo -r {fps} -an -sn -dn -i pipe: -f h264 -g 1 -pix_fmt yuv444p -crf 10 -tune zerolatency -x264-params aud=1 -an -sn -dn input.264', stdin=sp.PIPE) for i in range(n_frames): raw_img_bytes = make_raw_frame_as_bytes(i) process.stdin.write(raw_img_bytes) # Write raw video frame to input stream of ffmpeg sub-process. process.stdin.close() process.wait() ################################################################################ # Execute FFprobe and create JSON file (showing pkt_pos and pkt_size for every encoded frame): sp.run('ffprobe -print_format json -show_frames input.264', stdout=open('input_probe.json', 'w')) # Read FFprobe output to dictionary p with open('input_probe.json') as f: p = json.load(f)['frames'] # Input PIPE: H.264 encoded video, output PIPE: decoded video frames in raw BGR video format process = sp.Popen(f'ffmpeg -probesize 32 -flags low_delay -f h264 -framerate {fps} -an -sn -dn -i pipe: -f rawvideo -s {width}x{height} -pix_fmt bgr24 -an -sn -dn pipe:', stdin=sp.PIPE, stdout=sp.PIPE) f = open('input.264', 'rb') process.stdin.write(aud_bytes) # Write AUD NAL unit before the first encoded frame. for i in range(n_frames-1): # Read H.264 encoded video frame h264_frame_bytes = f.read(int(p[i]['pkt_size'])) process.stdin.write(h264_frame_bytes) process.stdin.write(aud_bytes) # Write AUD NAL unit after the encoded frame. process.stdin.flush() # Read decoded video frame (in raw video format) from stdout PIPE. 
buffer = process.stdout.read(width*height*3) frame = np.frombuffer(buffer, np.uint8).reshape(height, width, 3) # Display the decoded video frame cv2.imshow('frame', frame) cv2.waitKey(1) # Write last encoded frame h264_frame_bytes = f.read(int(p[n_frames-1]['pkt_size'])) process.stdin.write(h264_frame_bytes) f.close() process.stdin.close() buffer = process.stdout.read(width*height*3) # Read the last video frame process.stdout.close() # Wait for sub-process to finish process.wait() cv2.destroyAllWindows() Update: The reason for the extra frame delay is that h264 elementary stream doesn't have an "end of frame" signal, and there is no "payload size" field in the NAL unit header. The only way to detect when a frame ends is to see where the next one begins. See: Detect ending of frame in H.264 video stream. And How to know the number of NAL unit in H.264 stream which represent a picture. For avoiding the wait for the beginning of the next frame, you must use a "transport stream" layer or video container format. Transport streams and few container formats allow "end of frame" detection by the receiver (demuxer). I tried using MPEG-2 transport stream, but it added a delay of one more frame. [I didn't try RTSP protocol, because it's not working with pipes]. Using Flash Video (FLV) container reduces the delay to a single frame. The FLV container has a "Payload Size" field in the packet header that allows the demuxer to avoid waiting for the next frame. Commands for using FLV container and H.264 codec: cmd = ( "ffmpeg " "-f rawvideo -pix_fmt rgb24 -s 224x224 " "-i pipe: " "-vcodec libx264 " "-f flv " "-tune zerolatency " "pipe:" ) encoder_process = subprocess.Popen( cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE ) cmd = ( "ffmpeg " "-probesize 32 " "-flags low_delay " "-f flv " "-vcodec h264 " "-i pipe: " "-f rawvideo -pix_fmt rgb24 -s 224x224 " "pipe:" ) decoder_process = subprocess.Popen( cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE ) In the commands above, FFmpeg uses the FLV muxer for the encoder process and FLV demuxer for the decoder process. Output result: time=0 frames=1 encoder_write time=0 frames=1 decoder_read psnr=49.0 time=2 frames=2 encoder_write time=2 frames=2 decoder_read psnr=48.3 time=4 frames=3 encoder_write time=4 frames=3 decoder_read psnr=45.8 time=6 frames=4 encoder_write time=6 frames=4 decoder_read psnr=46.7 As you can see, there is no extra frame delay. Other containers that have also worked are: AVI and MKV.
7
10
60,439,489
2020-2-27
https://stackoverflow.com/questions/60439489/django-run-tasks-possibly-in-the-far-future
Suppose I have a model Event. I want to send a notification (email, push, whatever) to all invited users once the event has elapsed. Something along the lines of: class Event(models.Model): start = models.DateTimeField(...) end = models.DateTimeField(...) invited = models.ManyToManyField(model=User) def onEventElapsed(self): for user in self.invited: my_notification_backend.sendMessage(target=user, message="Event has elapsed") Now, of course, the crucial part is to invoke onEventElapsed whenever timezone.now() >= event.end. Keep in mind, end could be months away from the current date. I have thought about two basic ways of doing this: Use a periodic cron job (say, every five minutes or so) which checks if any events have elapsed within the last five minutes and executes my method. Use celery and schedule onEventElapsed using the eta parameter to be run in the future (within the models save method). Considering option 1, a potential solution could be django-celery-beat. However, it seems a bit odd to run a task at a fixed interval for sending notifications. In addition I came up with a (potential) issue that would (probably) result in a not-so elegant solution: Check every five minutes for events that have elapsed in the previous five minutes? seems shaky, maybe some events are missed (or others get their notifications send twice?). Potential workaroung: add a boolean field to the model that is set to True once notifications have been sent. Then again, option 2 also has its problems: Manually take care of the situation when an event start/end datetime is moved. When using celery, one would have to store the taskID (easy, ofc) and revoke the task once the dates have changed and issue a new task. But I have read, that celery has (design-specific) problems when dealing with tasks that are run in the future: Open Issue on github. I realize how this happens and why it is everything but trivial to solve. Now, I have come across some libraries which could potentially solve my problem: celery_longterm_scheduler (But does this mean I cannot use celery as I would have before, because of the differend Scheduler class? This also ties into the possible usage of django-celery-beat... Using any of the two frameworks, is it still possible to queue jobs (that are just a bit longer-running but not months away?) django-apscheduler, uses apscheduler. However, I was unable to find any information on how it would handle tasks that are run in the far future. Is there a fundemantal flaw with the way I am approaching this? Im glad for any inputs you might have. Notice: I know this is likely to be somehwat opinion based, however, maybe there is a very basic thing that I have missed, regardless of what could be considered by some as ugly or elegant.
We're doing something like this in the company I work for, and the solution is quite simple. Have a cron / celery beat task that runs every hour to check if any notifications need to be sent. Then send those notifications and mark them as done. This way, even if your notification time is years ahead, it will still be sent. Using ETA is NOT the way to go for a very long wait time; your cache / AMQP broker might lose the data. You can reduce your interval depending on your needs, but make sure the runs don't overlap. If one hour is too coarse a time difference, then what you can do is run a scheduler every hour. The logic would be: run a task (let's call it the scheduler task) hourly that gets all notifications that need to be sent in the next hour (via celery beat), then schedule those notifications via apply_async(eta) - this does the actual sending. Using that methodology gets you the best of both worlds (eta and beat). A minimal sketch of that hybrid approach is below.
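A rough sketch of the hourly scheduler with Celery - the Notification model and its send_at / sent fields are assumptions for illustration only, not something from your code:

from datetime import timedelta
from celery import shared_task
from django.utils import timezone

@shared_task
def schedule_upcoming_notifications():
    # Runs every hour via celery beat; picks up anything due within the next hour.
    now = timezone.now()
    upcoming = Notification.objects.filter(sent=False, send_at__lte=now + timedelta(hours=1))
    for notification in upcoming:
        # apply_async with eta now only ever has to wait at most one hour.
        send_notification.apply_async(args=[notification.id], eta=notification.send_at)

@shared_task
def send_notification(notification_id):
    notification = Notification.objects.get(id=notification_id)
    if not notification.sent:  # guard against duplicates if scheduler runs ever overlap
        # my_notification_backend is your own backend from the question
        my_notification_backend.sendMessage(target=notification.user, message="Event has elapsed")
        notification.sent = True
        notification.save(update_fields=["sent"])

The beat entry itself would be something along the lines of CELERY_BEAT_SCHEDULE = {'scheduler': {'task': 'app.tasks.schedule_upcoming_notifications', 'schedule': crontab(minute=0)}} - adjust the task path to your project layout.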
13
9
60,488,824
2020-3-2
https://stackoverflow.com/questions/60488824/having-a-hard-time-getting-rabbitmq-server-started-and-wonder-why-keep-getting-t
I keep getting this error when I start Rabbitmq and wonder what's wrong? BOOT FAILED =========== Error description: init:do_boot/3 line 817 init:start_em/1 line 1109 rabbit:start_it/1 line 474 rabbit:broker_start/1 line 350 rabbit:start_loaded_apps/2 line 600 app_utils:manage_applications/6 line 126 lists:foldl/3 line 1263 rabbit:'-handle_app_error/1-fun-0-'/3 line 723 throw:{could_not_start,ra, {ra, {{shutdown, {failed_to_start_child,ra_system_sup, {shutdown, {failed_to_start_child,ra_log_sup, {shutdown, {failed_to_start_child,ra_log_wal_sup, {shutdown, {failed_to_start_child,ra_log_wal, {{case_clause,{ok,<<0,0,0,0,0>>}}, [{ra_log_wal,open_existing,1, [{file,"src/ra_log_wal.erl"},{line,646}]}, {ra_log_wal,'-recover_wal/2-lc$^0/1-0-',1, [{file,"src/ra_log_wal.erl"},{line,265}]}, {ra_log_wal,recover_wal,2, [{file,"src/ra_log_wal.erl"},{line,268}]}, {ra_log_wal,init,1, [{file,"src/ra_log_wal.erl"},{line,214}]}, {gen_batch_server,init_it,6, [{file,"src/gen_batch_server.erl"},{line,133}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,249}]}]}}}}}}}}}, {ra_app,start,[normal,[]]}}}} Log file(s) (may contain more information): C:/Users/AIMLExpert/AppData/Roaming/RabbitMQ/log/[email protected] C:/Users/AIMLExpert/AppData/Roaming/RabbitMQ/log/rabbit@DESKTOP-N0Q3S7C_upgrade.log {"init terminating in do_boot",{could_not_start,ra,{ra,{{shutdown,{failed_to_start_child,ra_system_sup,{shutdown,{failed_to_start_child,ra_log_sup,{shutdown,{failed_to_start_child,ra_log_wal_sup,{shutdown,{failed_to_start_child,ra_log_wal,{{case_clause,{ok,<<0,0,0,0,0>>}},[{ra_log_wal,open_existing,1,[{file,"src/ra_log_wal.erl"},{line,646}]},{ra_log_wal,'-recover_wal/2-lc$^0/1-0-',1,[{file,"src/ra_log_wal.erl"},{line,265}]},{ra_log_wal,recover_wal,2,[{file,"src/ra_log_wal.erl"},{line,268}]},{ra_log_wal,init,1,[{file,"src/ra_log_wal.erl"},{line,214}]},{gen_batch_server,init_it,6,[{file,"src/gen_batch_server.erl"},{line,133}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}}}}}}}}},{ra_app,start,[normal,[]]}}}}} init terminating in do_boot ({could_not_start,ra,{ra,{{shutdown,{_}},{ra_app,start,[_]}}}}) Crash dump is being written to: C:\Users\AIMLExpert\AppData\Roaming\RabbitMQ\log\erl_crash.dump...done whats causing this Error? I have tried restarting the command line but am still getting the error.
I had the same issue on Arch Linux due to a crash dump. The logs said it had a problem recovering a WAL file that was 0 bytes in size. After removing the WAL file the service started. Locate the WAL in /var/lib/rabbitmq/mnesia with find /var/lib/rabbitmq/ -name "*.wal" and remove it. Restart the service afterwards.
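On a systemd-based Linux box the full sequence would look roughly like this (service name and data directory can differ per distribution; on Windows, as in the logs above, look for the *.wal file under %APPDATA%\RabbitMQ instead):

sudo systemctl stop rabbitmq-server
# locate the zero-length WAL file left behind by the crash
sudo find /var/lib/rabbitmq/ -name "*.wal"
# remove it (back it up first if unsure)
sudo find /var/lib/rabbitmq/ -name "*.wal" -delete
sudo systemctl start rabbitmq-server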
9
17
60,473,359
2020-3-1
https://stackoverflow.com/questions/60473359/scapy-get-set-frequency-or-channel-of-a-packet
I have been trying to capture WiFi packets on Linux and see the frequency/channel at which each packet was captured. I tried Wireshark and had no luck and no help. However, using sample packets from Wireshark.org, I can see the frequency/channel. So now I'm experimenting with Scapy. I wanted to figure out the frequency/channel of a sniffed packet, but still no luck. Is there a way to do this with Scapy? P.S. If there is a better tool than Scapy, or Python, I'd appreciate comments
I found out that RadioTap headers are not part of any Dot11 protocol but are merely added by the network interface. The reason I got the RadioTap headers on sample packets from Wireshark.org and not from my live Wireshark capture is that some network adapters add RadioTap while others do not, and the network adapter of my laptop does not add RadioTap headers. I checked this with a new external WiFi adapter and it did add the RadioTap headers. If the adapter does not inject the additional information as it captures frames, then no RadioTap headers will be added. So to my main question: how to get/set the frequency of a packet. I expected Scapy to have this option but it doesn't, and it shouldn't. The reason is that the frequency depends on what is set on the network adapter. So what I did was to set the frequency/channel of my WiFi adapter to a different one. My external WiFi adapter can work on various channels, so I changed each and confirmed it with the RadioTap header. There are simple Linux commands/tools that helped me check the supported channels of my WiFi interface and switch to a particular channel. To capture/send packets at a certain frequency or channel, you need to change the working channel of your interface and set the sniffer/sender interface in Scapy to that interface. EDIT - Other problems I faced and solutions: If you are on Linux and you want to change the working channel of your interface, you need to disable network-manager for that interface. To do this, first add the following snippet to /etc/network/interfaces auto $iface iface $iface inet dhcp wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf replace $iface with your interface name. This will let you control the interface yourself. Then add the following lines to /etc/wpa_supplicant/wpa_supplicant.conf ctrl_interface=/var/run/wpa_supplicant network={ ssid="Your_AP_SSID" psk="Your_Passphrase" freq_list=2412 2437 2462 } Note that 2412 2437 2462 are the frequencies (channels 1, 6, 11 in this case) for your interface to choose from. You can edit them to the desired frequencies. Source. But first you have to check that your interface supports these frequencies. To check that: iwlist channel Finally, after everything is done: sendp(Ether()/IP(dst="1.2.3.4",ttl=(1,4)), iface="wlp3s0") This will send packets at the frequency that wlp3s0 is set to.
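To read the frequency of each captured frame from Python once the adapter does supply RadioTap, a minimal sketch looks like this. It assumes a recent Scapy (2.4.3+, where the parsed field is exposed as ChannelFrequency) and an interface already in monitor mode, here called wlan0mon:

from scapy.all import sniff
from scapy.layers.dot11 import RadioTap

def show_freq(pkt):
    # ChannelFrequency is only populated when the adapter includes the channel field
    if pkt.haslayer(RadioTap) and pkt[RadioTap].ChannelFrequency is not None:
        print(f"freq: {pkt[RadioTap].ChannelFrequency} MHz")

sniff(iface="wlan0mon", prn=show_freq, count=20)

On older Scapy versions the channel bytes sit in RadioTap's undissected payload and have to be unpacked manually, so check which version you are running before relying on that field name.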
11
8
60,512,207
2020-3-3
https://stackoverflow.com/questions/60512207/partitionby-overwrite-strategy-in-an-azure-datalake-using-pyspark-in-databrick
I have a simple ETL process in an Azure environment blob storage > datafactory > datalake raw > databricks > datalake curated > datwarehouse(main ETL). the datasets for this project are not very big (~1 million rows 20 columns give or take) however I would like to keep them partitioned properly in my datalake as Parquet files. currently I run some simple logic to figure where in my lake each file should sit based off business calendars. the files vaguely looks like this Year Week Data 2019 01 XXX 2019 02 XXX I then partition a given file into the following format replacing data that exists and creating new folders for new data. curated --- dataset -- Year 2019 - Week 01 - file.pq + metadata - Week 02 - file.pq + metadata - Week 03 - file.pq + datadata #(pre existing file) the metadata are success and commits that are auto generated. to this end i use the following query in Pyspark 2.4.3 pyspark_dataframe.write.mode('overwrite')\ .partitionBy('Year','Week').parquet('\curated\dataset') now if I use this command on it's own, it will overwrite any existing data in the target partition so Week 03 will be lost. using spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic") seems to stop the issue and only over write the target files but I wonder if this is the best way to handle files in my data lake? also I've found it hard to find any documentation on the above feature. my first instinct was to loop over a single parquet and write each partition manually, which although gives me greater control, but looping will be slow. my next thought would be to write each partition to a /tmp folder and move each parquet file and then replace files / create files as need be using the query from above. then purge the /tmp folder whilst creating some sort of metadata log. Is there a better way/method to this? any guidance would be much appreciated. the end goal here is to have a clean and safe area for all 'Curated' data whilst having a log of parquet files I can read into a DataWarehouse for further ETL.
I saw that you are using databricks in the azure stack. I think the most viable and recommended method for you to use would be to make use of the new delta lake project in databricks: It provides options for various upserts, merges and acid transactions to object stores like s3 or azure data lake storage. It basically provides the management, safety, isolation and upserts/merges provided by data warehouses to datalakes. For one pipeline apple actually replaced its data warehouses to be run solely on delta databricks because of its functionality and flexibility. For your use case and many others who use parquet, it is just a simple change of replacing 'parquet' with 'delta', in order to use its functionality (if you have databricks). Delta is basically a natural evolution of parquet and databricks has done a great job by providing added functionality and as well as open sourcing it. For your case, I would suggest you try the replaceWhere option provided in delta. Before making this targeted update, the target table has to be of format delta Instead of this: dataset.repartition(1).write.mode('overwrite')\ .partitionBy('Year','Week').parquet('\curataed\dataset') From https://docs.databricks.com/delta/delta-batch.html: 'You can selectively overwrite only the data that matches predicates over partition columns' You could try this: dataset.write.repartition(1)\ .format("delta")\ .mode("overwrite")\ .partitionBy('Year','Week')\ .option("replaceWhere", "Year == '2019' AND Week >='01' AND Week <='02'")\ #to avoid overwriting Week3 .save("\curataed\dataset") Also, if you wish to bring partitions to 1, why dont you use coalesce(1) as it will avoid a full shuffle. From https://mungingdata.com/delta-lake/updating-partitions-with-replacewhere/: 'replaceWhere is particularly useful when you have to run a computationally expensive algorithm, but only on certain partitions' Therefore, I personally think that using replacewhere to manually specify your overwrite will be more targeted and computationally efficient then to just rely on: spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic") Databricks provides optimizations on delta tables make it a faster, and much more efficient option to parquet( hence a natural evolution) by bin packing and z-ordering: From Link:https://docs.databricks.com/spark/latest/spark-sql/language-manual/optimize.html WHERE(binpacking) 'Optimize the subset of rows matching the given partition predicate. Only filters involving partition key attributes are supported.' ZORDER BY 'Colocate column information in the same set of files. Co-locality is used by Delta Lake data-skipping algorithms to dramatically reduce the amount of data that needs to be read'. Faster query execution with indexing, statistics, and auto-caching support Data reliability with rich schema validation and transactional guarantees Simplified data pipeline with flexible UPSERT support and unified Structured Streaming + batch processing on a single data source You could also check out the complete documentation of the open source project: https://docs.delta.io/latest/index.html .. I also want to say that I do not work for databricks/delta lake. I have just seen their improvements and functionality benefit me in my work. UPDATE: The gist of the question is "replacing data that exists and creating new folders for new data" and to do it in highly scalable and effective manner. 
Using dynamic partition overwrite in parquet does the job however I feel like the natural evolution to that method is to use delta table merge operations which were basically created to 'integrate data from Spark DataFrames into the Delta Lake'. They provide you with extra functionality and optimizations in merging your data based on how would want that to happen and keep a log of all actions on a table so you can rollback versions if needed. Delta lake python api(for merge): https://docs.delta.io/latest/api/python/index.html#delta.tables.DeltaMergeBuilder databricks optimization: https://kb.databricks.com/delta/delta-merge-into.html#discussion Using a single merge operation you can specify the condition merge on, in this case it could be a combination of the year and week and id, and then if the records match(meaning they exist in your spark dataframe and delta table, week1 and week2), update them with the data in your spark dataframe and leave other records unchanged: #you can also add additional condition if the records match, but not required .whenMatchedUpdateAll(condition=None) For some cases, if nothing matches then you might want to insert and create new rows and partitions, for that you can use: .whenNotMatchedInsertAll(condition=None) You can use .converttodelta operation https://docs.delta.io/latest/api/python/index.html#delta.tables.DeltaTable.convertToDelta, to convert your parquet table to a delta table so that you can perform delta operations on it using the api. 'You can now convert a Parquet table in place to a Delta Lake table without rewriting any of the data. This is great for converting very large Parquet tables which would be costly to rewrite as a Delta table. Furthermore, this process is reversible' Your merge case(replacing data where it exists and creating new records when it does not exist) could go like this: (have not tested, refer to examples + api for syntax) %python deltaTable = DeltaTable.convertToDelta(spark, "parquet.`\curataed\dataset`") deltaTable.alias("target").merge(dataset, "target.Year= dataset.Year AND target.Week = dataset.Week") \ .whenMatchedUpdateAll()\ .whenNotMatchedInsertAll()\ .execute() If the delta table is partitioned correctly(year,week) and you used whenmatched clause correctly, these operations will be highly optimized and could take seconds in your case. It also provides you with consistency, atomicity and data integrity with option to rollback. Some more functionality provided is that you can specify the set of columns to update if the match is made, (if you only need to update certain columns). You can also enable spark.conf.set("spark.databricks.optimizer.dynamicPartitionPruning","true"), so that delta uses minimal targeted partitions to carry out the merge(update,delete,create). Overall, I think using this approach is a very new and innovative way of carrying out targeted updates as it gives you more control over it while keeping ops highly efficient. Using parquet with dynamic partitionoverwrite mode will also work fine however, delta lake features bring data quality to your data lake that is unmatched. 
My recommendation: I would say for now, use dynamic partition overwrite mode for parquet files to do your updates, and you could experiment and try to use the delta merge on just one table with the Databricks optimization spark.conf.set("spark.databricks.optimizer.dynamicPartitionPruning","true") and .whenMatchedUpdateAll(), and compare the performance of both (your files are small, so I do not think it will be a big difference). The Databricks partition pruning optimization for merges article came out in February, so it is really new and could possibly be a game changer for the overhead that delta merge operations incur (as under the hood they just create new files, but partition pruning could speed it up). Merge examples in Python, Scala, SQL: https://docs.databricks.com/delta/delta-update.html#merge-examples https://databricks.com/blog/2019/10/03/simple-reliable-upserts-and-deletes-on-delta-lake-tables-using-python-apis.html
8
16
60,416,350
2020-2-26
https://stackoverflow.com/questions/60416350/chrome-80-how-to-decode-cookies
I had a working script for opening and decrypting Google Chrome cookies which looked like: decrypted = win32crypt.CryptUnprotectData(enctypted_cookie_value, None, None, None, 0) It seems that after update 80 it is no longer a valid solution. According to this blog post https://blog.nirsoft.net/2020/02/19/tools-update-new-encryption-chrome-chromium-version-80/ it seems that I need to CryptUnprotectData on encrypted_key from the Local State file, then somehow decrypt the cookie using the decrypted key. For the first part I got my encrypted_key path = r'%LocalAppData%\Google\Chrome\User Data\Local State' path = os.path.expandvars(path) with open(path, 'r') as file: encrypted_key = json.loads(file.read())['os_crypt']['encrypted_key'] encrypted_key = bytearray(encrypted_key, 'utf-8') Then I tried to decrypt it decrypted_key = win32crypt.CryptUnprotectData(encrypted_key, None, None, None, 0) And got an exception: pywintypes.error: (13, 'CryptProtectData', 'The data is invalid.') and I can't figure out how to fix it. Also, for the second part of the encryption, it seems that I should use pycryptodome, something like this snippet: cipher = AES.new(encrypted_key, AES.MODE_GCM, nonce=nonce) plaintext = cipher.decrypt(data) But I can't figure out where I should get the nonce value. Can someone explain how to decrypt Chrome cookies correctly?
Since Chrome version 80 and higher, cookies are encrypted using AES-256 in GCM mode. The applied key is encrypted using DPAPI. The details are described here, section Chrome v80.0 and higher. The encrypted key starts with the ASCII encoding of DPAPI (i.e. 0x4450415049) and is Base64 encoded, i.e. the key must first be Base64 decoded and the first 5 bytes must be removed. Afterwards a decryption with win32crypt.CryptUnprotectData is possible. The decryption returns a tuple whose second value contains the decrypted key: import os import json import base64 import win32crypt from Crypto.Cipher import AES path = r'%LocalAppData%\Google\Chrome\User Data\Local State' path = os.path.expandvars(path) with open(path, 'r') as file: encrypted_key = json.loads(file.read())['os_crypt']['encrypted_key'] encrypted_key = base64.b64decode(encrypted_key) # Base64 decoding encrypted_key = encrypted_key[5:] # Remove DPAPI decrypted_key = win32crypt.CryptUnprotectData(encrypted_key, None, None, None, 0)[1] # Decrypt key The encryption of the cookies is performed with AES-256 in GCM mode. This is authenticated encryption, which guarantees confidentiality and authenticity/integrity. During encryption an authentication tag is generated, which is used for integrity verification during decryption. The GCM mode is based on the CTR mode and uses an IV (nonce). In addition to the 32 bytes key, the nonce and the authentication tag are required for decryption. The encrypted data start with the ASCII encoding of v10 (i.e. 0x763130), followed by the 12 bytes nonce, the actual ciphertext and finally the 16 bytes authentication tag. The individual components can be separated as follows: data = bytes.fromhex('763130...') # the encrypted cookie nonce = data[3:3+12] ciphertext = data[3+12:-16] tag = data[-16:] whereby data contains the encrypted data. The decryption itself is done using PyCryptodome with: cipher = AES.new(decrypted_key, AES.MODE_GCM, nonce=nonce) plaintext = cipher.decrypt_and_verify(ciphertext, tag) # the decrypted cookie Note: Generally, there are also cookies stored that have been saved with Chrome versions below v80 and are therefore DPAPI encrypted. DPAPI encrypted cookies can be recognized by the fact that they start with the sequence 0x01000000D08C9DDF0115D1118C7A00C04FC297EB, here and here, section About DPAPI. These cookies can of course not be decrypted as described above, but with the former procedure for DPAPI encrypted cookies. Tools to view cookies in unencrypted or encrypted form are ChromeCookiesView or DB Browser for SQLite, respectively.
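To pull the encrypted cookie values out of Chrome's SQLite database in the first place, something like the following can be used. The Cookies file path is an assumption for a default Chrome 80 profile on Windows and may differ on your machine; the file is copied first because a running Chrome keeps it locked:

import os
import shutil
import sqlite3

cookies_path = os.path.expandvars(r'%LocalAppData%\Google\Chrome\User Data\Default\Cookies')
shutil.copy2(cookies_path, 'Cookies.db')  # work on a copy to avoid the lock held by Chrome

conn = sqlite3.connect('Cookies.db')
cursor = conn.cursor()
cursor.execute('SELECT host_key, name, encrypted_value FROM cookies')
for host_key, name, encrypted_value in cursor.fetchall():
    if encrypted_value[:3] == b'v10':  # AES-GCM encrypted (Chrome 80+)
        nonce = encrypted_value[3:3+12]
        ciphertext = encrypted_value[3+12:-16]
        tag = encrypted_value[-16:]
        # feed nonce/ciphertext/tag into the decrypt_and_verify call shown above
conn.close()

The decrypted_key obtained from the Local State step is then used with each nonce/ciphertext/tag triple exactly as in the AES-GCM decryption above.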
14
31
60,497,516
2020-3-2
https://stackoverflow.com/questions/60497516/django-add-comment-section-on-posts-feed
I want to share a project that currently can create user and each user can create N posts The source is available on github and I has two models users and post and the template layers Currently the feed for each post has a button that send an commenting the post I want to change that to put the comments of the post and not send and email each user should be able to comment a post and the comment should remain {% block container %} <body id="bg" img style="zoom: 85%; background-position: center center; background-attachment: fixed;background-repeat:no-repeat;padding:5px; background-image: url('{% static "/back.png"%}') ";> <div style="background-image: url({% static 'static/img/back.png' %});"> <div class="row" style="align:center"> {% for post in posts %} <div class="col-sm-12 col-md-8 offset-md-4 mt-5 p-0 post-container,width:50%;"> <div class="card" style="width: 32rem;width:50%;"> <div class="card-body"> <div class="media pt-3 pl-3 pb-1"> <a href="{% url " users:detail" post.user.username%}"> <img alt="{{ post.user.username }}" class="mr-3 rounded-circle" height="35" src="{{ post.profile.picture.url }}"> </a> <h3 class="card-title">{{ post.title }}</h3> </div> <p class="card-text">{{ post.desc }}</p> </div> </div> <img alt="{{ post.title }}" src="{{ post.photo.url }}" style="width: 50%; heigth:60%"> <div class="media-body"> <b><p style="margin-top: 5px;">@{{ post.user.username }} - <small>{{ post.created }}</small> &nbsp;&nbsp; <a href="" style="color: #000; font-size: 20px;"> <i class="far fa-heart"></i> </a> <br> </p></b> </div> <!-- COMENT SECTION THAT I WANT TO IMPLEMENT MY FEATURE--> <form action="{% url 'posts:comment_new' %}" enctype="multipart/form-data" method="POST"> {% csrf_token %} <input class="form-control {% if form.title.errors %}is-invalid{% endif %}" name="title" size="16" type="hidden" value="{{post.title}}" > <input class="form-control {% if form.title.errors %}is-invalid{% endif %}" name="first_name " size="16" type="hidden" value="{{user.first_name}}" > <input class="form-control {% if form.title.errors %}is-invalid{% endif %}" name="last_name " size="16" type="hidden" value="{{user.last_name}}" > <textarea class="form-control" cols="50" name="comment" rows="5" style="width:50%;" value="{{ comments.comment }}"></textarea> <button class="btn btn-outline-info btn-lg" style="width:35%; display:block;margin:auto;" type="submit"> Publish </button> </form> </div> <br> {% endfor %} </div> </div> {% endblock %} As I said I want to replace this form function call to create a comment section instead sending a email with the comment < form action = "{% url 'posts:comment_new' %}"> def comment_new(request): if request.method == 'POST': message = request.POST['comment'] subject = request.POST['title'] user = request.POST['first_name'] last_name = request.POST['last_name'] # lastname = request.POST['lastname'] send_mail("[MAIL] " + subject, user + " " + last_name + " said " + message + " on http://url.com:8000", '[email protected]', ['[email protected]'], fail_silently=False) posts = Post.objects.all().order_by('-created') return render(request, os.path.join(BASE_DIR, 'templates', 'posts', 'feed.html'), {'posts': posts}) I think this maybe create a comment with user and post id with the comment detail def comment_new(request): if request.method == 'POST': message = request.POST['comment'] subject = request.POST['title'] user = request.POST['first_name'] last_name = request.POST['last_name'] #lastname = request.POST['lastname'] form = PostForm(request.POST, request.FILES) 
form.save() One options its create a comment class Comment(models.Model): """ #id= models.AutoField(max_length=1000, blank=True) # post = models.ForeignKey(Post, related_name='',on_delete=models.CASCADE,default=0) """ #comment = models.ForeignKey('posts.Post', related_name='posts_rel', to_field="comments", db_column="comments", # on_delete=models.CASCADE, null=True, default=1, blank=True) post = models.IntegerField(blank=True,null=True,unique=True) user = models.ForeignKey(User, on_delete=models.CASCADE,null=True) username = models.CharField(blank=True, null=True, unique=True ,max_length=200) comment = models.CharField(max_length=254, blank=True, null=True) and then the form class CommentForm(forms.ModelForm): class Meta: """form settings""" model = Comment fields = ('user','username','post','comment',) finally with the function I'm able to persist but not able to render form = CommentForm(request.POST, request.FILES) # print formset.errors if form.is_valid(): form.save() but I can't find the way to render the object on the html file please feel free to suggest any solution or better create a pull request on the public git hub repo
In the book Django 2 by Example we can find a step by step guide to create a comment system, wherein the users will be able to comment on posts. In order to do it, is as simple as the following four steps Create a model to save the comments Create a form to submit comments and validate the input data Add a view that processes the form and saves the new comment to the database Edit the post detail template to display the list of comments and the form to add a new comment Create a model to save the comments In your models.py file for the application, add the following code class Comment(models.Model): post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='comments') name = models.CharField(max_length=80) email = models.EmailField() body = models.TextField() created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) active = models.BooleanField(default=True) class Meta: ordering = ('created',) def __str__(self): return 'Comment by {} on {}'.format(self.name, self.post) The new Comment model you just created is not yet synchronized into the database. Run the following command to generate a new migration that reflects the creation of the new model: python manage.py makemigrations APPNAME and python manage.py migrate After this, the new table exists in the database. Now, open the admin.py file of the blog application, import the Comment model, and add the following ModelAdmin class: from .models import Post, Comment @admin.register(Comment) class CommentAdmin(admin.ModelAdmin): list_display = ('name', 'email', 'post', 'created', 'active') list_filter = ('active', 'created', 'updated') search_fields = ('name', 'email', 'body') Create a form to submit comments and validate the input data Edit the forms.py file of your blog application and add the following lines: from .models import Comment class CommentForm(forms.ModelForm): class Meta: model = Comment fields = ('name', 'email', 'body') Add a view that processes the form and saves the new comment to the database Edit the views.py file, add imports for the Comment model and the CommentForm form, and modify the post detail view to make it look like the following: from .models import Post, Comment from .forms import EmailPostForm, CommentForm def post_detail(request, year, month, day, post): post = get_object_or_404(Post, slug=post, status='published', publish__year=year, publish__month=month, publish__day=day) # List of active comments for this post comments = post.comments.filter(active=True) new_comment = None if request.method == 'POST': # A comment was posted comment_form = CommentForm(data=request.POST) if comment_form.is_valid(): # Create Comment object but don't save to database yet new_comment = comment_form.save(commit=False) # Assign the current post to the comment new_comment.post = post # Save the comment to the database new_comment.save() else: comment_form = CommentForm() return render(request, 'blog/post/detail.html', {'post': post, 'comments': comments, 'new_comment': new_comment, 'comment_form': comment_form}) Edit the post detail template to display the list of comments and the form to add a new comment At this point we have created the functionality to manage comments for a post. 
Now, we will need to adapt our post/detail.html template to do the following things: - Display the list of comments - Display a form for users to add a new comment Append the following lines to the post/detail.html template for the list of comments: {% for comment in comments %} <div class="comment"> <p class="info"> Comment {{ forloop.counter }} by {{ comment.name }} {{ comment.created }} </p> {{ comment.body|linebreaks }} </div> {% empty %} <p>There are no comments yet.</p> {% endfor %} Then, for the other point, add the following lines: {% if new_comment %} <h2>Your comment has been added.</h2> {% else %} <h2>Add a new comment</h2> <form action="." method="post"> {{ comment_form.as_p }} {% csrf_token %} <p><input type="submit" value="Add comment"></p> </form> {% endif %}
7
12
60,515,935
2020-3-3
https://stackoverflow.com/questions/60515935/visual-studio-code-does-not-attach-debugger-to-multi-processes-in-python-using-p
Hi, I am trying to debug multiple processes in Python. Below is a portion of my code where I run multiple processes using Pool: pool = Pool(num_half_logical_cpus) pool_result_dict = pool.starmap(process_batches, lstListSets) However, I can't hit any breakpoints. Can anyone guide me on how to hit the breakpoints I set up? Thanks!
Adding this option to launch.json will let you debug multiple processes: "subProcess": true, It will then hit the breakpoint, and you will be able to select which process you want to step through (F10). There will be a list of processes in the lower-left sub-window.
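For reference, a minimal launch.json configuration with that option enabled could look like this (the name and program values are just placeholders for your own setup):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "subProcess": true
        }
    ]
}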
8
22
60,422,693
2020-2-26
https://stackoverflow.com/questions/60422693/weird-indexing-using-numpy
I have a variable, x, that is of the shape (2,2,50,100). I also have an array, y, that equals np.array([0,10,20]). A weird thing happens when I index x[0,:,:,y]. x = np.full((2,2,50,100),np.nan) y = np.array([0,10,20]) print(x.shape) (2,2,50,100) print(x[:,:,:,y].shape) (2,2,50,3) print(x[0,:,:,:].shape) (2,50,100) print(x[0,:,:,y].shape) (3,2,50) Why does the last one output (3,2,50) and not (2,50,3)?
This is how numpy uses advanced indexing to broadcast array shapes. When you pass a 0 for the first index, and y for the last index, numpy will broadcast the 0 to be the same shape as y. The following equivalence holds: x[0,:,:,y] == x[(0, 0, 0),:,:,y]. here is an example import numpy as np x = np.arange(120).reshape(2,3,4,5) y = np.array([0,2,4]) np.equal(x[0,:,:,y], x[(0, 0, 0),:,:,y]).all() # returns: True Now, because you are effectively passing in two sets of indices, you are using the advanced indexing API to form (in this case) pairs of indices. x[(0, 0, 0),:,:,y]) # equivalent to [ x[0,:,:,y[0]], x[0,:,:,y[1]], x[0,:,:,y[2]] ] # equivalent to rows = np.array([0, 0, 0]) cols = y x[rows,:,:,cols] # equivalent to [ x[r,:,:,c] for r, c in zip(rows, columns) ] Which has a first dimension that same as the length of y. This is what you are seeing. As an example, look at an array with 4 dimensions which are described in the next chunk: x = np.arange(120).reshape(2,3,4,5) y = np.array([0,2,4]) # x looks like: array([[[[ 0, 1, 2, 3, 4], -+ =+ [ 5, 6, 7, 8, 9], Sheet1 | [ 10, 11, 12, 13, 14], | | [ 15, 16, 17, 18, 19]], -+ | Workbook1 [[ 20, 21, 22, 23, 24], -+ | [ 25, 26, 27, 28, 29], Sheet2 | [ 30, 31, 32, 33, 34], | | [ 35, 36, 37, 38, 39]], -+ | | [[ 40, 41, 42, 43, 44], -+ | [ 45, 46, 47, 48, 49], Sheet3 | [ 50, 51, 52, 53, 54], | | [ 55, 56, 57, 58, 59]]], -+ =+ [[[ 60, 61, 62, 63, 64], [ 65, 66, 67, 68, 69], [ 70, 71, 72, 73, 74], [ 75, 76, 77, 78, 79]], [[ 80, 81, 82, 83, 84], [ 85, 86, 87, 88, 89], [ 90, 91, 92, 93, 94], [ 95, 96, 97, 98, 99]], [[100, 101, 102, 103, 104], [105, 106, 107, 108, 109], [110, 111, 112, 113, 114], [115, 116, 117, 118, 119]]]]) x has a really easy to understand sequential form that we can now use to show what is happening... The first dimension is like having 2 Excel Workbooks, the second dimension is like having 3 sheets in each workbook, the third dimension is like having 4 rows per sheet, and the last dimension is 5 values for each row (or columns per sheet). Looking at it this way, asking for x[0,:,:,0], is the saying: "in the first workbook, for each sheet, for each row, give me the first value/column." x[0,:,:,y[0]] # returns: array([[ 0, 5, 10, 15], [20, 25, 30, 35], [40, 45, 50, 55]]) # this is in the same as the first element in: x[(0,0,0),:,:,y] But now with advanced indexing, we can think of x[(0,0,0),:,:,y] as "in the first workbook, for each sheet, for each row, give me the yth value/column. Ok, now do it for each value of y" x[(0,0,0),:,:,y] # returns: array([[[ 0, 5, 10, 15], [20, 25, 30, 35], [40, 45, 50, 55]], [[ 2, 7, 12, 17], [22, 27, 32, 37], [42, 47, 52, 57]], [[ 4, 9, 14, 19], [24, 29, 34, 39], [44, 49, 54, 59]]]) Where it gets crazy is that numpy will broadcast to match the outer dimensions of index array. So if you want to do that same operation as above, but for BOTH "Excel workbooks", you don't have to loop and concatenate. You can just pass an array to the first dimension, but it MUST have a compatible shape. Passing an integer gets broadcast to y.shape == (3,). If you want to pass an array as the first index, only the last dimension of the array has to be compatible with y.shape. I.e., the last dimension of the first index must either be 3 or 1. 
ix = np.array([[0], [1]]) x[ix,:,:,y].shape # each row of ix is broadcast to length 3: (2, 3, 3, 4) ix = np.array([[0,0,0], [1,1,1]]) x[ix,:,:,y].shape # this is identical to above: (2, 3, 3, 4) ix = np.array([[0], [1], [0], [1], [0]]) x[ix,:,:,y].shape # ix is broadcast so each row of ix has 3 columns, the length of y (5, 3, 3, 4) Found a short explanation in the docs: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#combining-advanced-and-basic-indexing Edit: From the original question, to get a one-liner of your desired subslicing, you can use x[0][:,:,y]: x[0][:,:,y].shape # returns (2, 50, 3) However, if you are trying to assign to those subslices, you have to be very careful that you are looking at a shared memory view of the original array. Otherwise the assignment won't be to the original array, but a copy. Shared memory only occurs when you are use an integer or slice to subset your array, i.e. x[:,0:3,:,:] or x[0,:,:,1:-1]. np.shares_memory(x, x[0]) # returns: True np.shares_memory(x, x[:,:,:,y]) # returns: False In both your original question and my example y is neither an int or a slice, so will always end up assigning to a copy of the original. BUT! Because your array for y can be expressed as a slice, you CAN actually get an assignable view of your array via: x[0,:,:,0:21:10].shape # returns: (2, 50, 3) np.shares_memory(x, x[0,:,:,0:21:10]) # returns: True # actually assigns to the original array x[0,:,:,0:21:10] = 100 Here we use the slice 0:21:10 to grab every index that would be in range(0,21,10). We have to use 21 and not 20 because the stop-point is excluded from the slice, just like in the range function. So basically, if you can construct a slice that fits your subslicing criteria, you can do assignment.
31
24
60,516,438
2020-3-3
https://stackoverflow.com/questions/60516438/install-python-modules-in-azure-functions
I am learning how to use Azure functions and using my web scraping script in it. It uses BeautifulSoup (bs4) and pymysql modules. It works fine when I tried it locally in the virtual environment as per this MS guide: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function-azure-cli?pivots=programming-language-python&tabs=cmd%2Cbrowser#run-the-function-locally But when I create the function App and publish the script to it, Azure Functions logs give me this error: Failure Exception: ModuleNotFoundError: No module named 'pymysql'. It must happen when attempting to import it. I really don't know how to proceed, where should I specify what modules it needs to install?
You need to check if you have generated the requirements.txt, which lists all of the modules your function depends on. When you deploy the function to Azure, it will install the modules from the requirements.txt automatically. You can generate the module information in the requirements.txt file with the command below, run locally: pip freeze > requirements.txt And then deploy the function to Azure by running the publish command: func azure functionapp publish hurypyfunapp --build remote For more information about deploying a Python function from local to Azure, please refer to this tutorial. By the way, if you use the consumption plan for your Python function, "Kudu" is not available. If you want to use "Kudu", you need to create an App Service plan for it rather than a consumption plan. Hope it helps~
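For this particular script, the generated requirements.txt would need to contain at least the scraping and database packages, roughly like this (the version numbers are only examples - use whatever pip freeze produces in your working virtual environment):

beautifulsoup4==4.8.2
pymysql==0.9.3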
8
13
60,515,794
2020-3-3
https://stackoverflow.com/questions/60515794/mocking-instance-attributes
Please help me understand why the following doesn't work. In particular - instance attributes of a tested class are not visible to Python's unittest.Mock. In the example below bar instance attribute is not accessible. The error returned is: AttributeError: <class 'temp.Foo'> does not have the attribute 'bar' import unittest from unittest.mock import patch class Foo: def __init__(self): super().__init__(self) self.bar = some_external_function_returning_list() def do_someting(self): calculate(self.bar) class TestFoo(unittest.TestCase): @patch('temp.Foo.bar') def test_do_something(self, patched_bar): patched_bar.return_value = ['list_elem1', 'list_elem2']
Patching is used to modify name or attribute lookup. In this case, there is no bar attribute of the class temp.Foo. If the intent is to patch the instance variable, you either need an existing instance to modify def test(self): f = Foo() with patch.object(f, 'bar', 3): self.assertEqual(f.bar, 3) or you may want to patch the function call that initializes the instance attribute in the first place (note that the patch target must include the module in which the function is looked up, temp here): def test(self): with patch('temp.some_external_function_returning_list', return_value=3): f = Foo() self.assertEqual(f.bar, 3)
9
15
60,513,468
2020-3-3
https://stackoverflow.com/questions/60513468/what-is-the-time-complexity-of-searching-in-dict-if-very-long-strings-are-used-a
I read from python3 document, that python use hash table for dict(). So the search time complexity should be O(1) with O(N) as the worst case. However, recently as I took a course, the teacher says that happens only when you use int as the key. If you use a string of length L as keys the search time complexity is O(L). I write a code snippet to test out his honesty import random import string from time import time import matplotlib.pyplot as plt def randomString(stringLength=10): """Generate a random string of fixed length """ letters = string.ascii_lowercase return ''.join(random.choice(letters) for i in range(stringLength)) def test(L): #L: int length of keys N = 1000 # number of keys d = dict() for i in range(N): d[randomString(L)] = None tic = time() for key in d.keys(): d[key] toc = time() - tic tic = time() for key in d.keys(): pass t_idle = time() - tic t_total = toc - t_idle return t_total L = [i * 10000 for i in range(5, 15)] ans = [test(l) for l in L] plt.figure() plt.plot(L, ans) plt.show() The result is very interesting. As you can see, the x-axis is the length of the strings used as keys and the y-axis is the total time to query all 1000 keys in the dictionary. Can anyone explain this result? Please be gentle on me. As you can see, if I ask this basic question, that means I don't have the ability to read python source code or equivalently complex insider document.
Since a dictionary is a hashtable, and looking up a key in a hashtable requires computing the key's hash, then the time complexity of looking up the key in the dictionary cannot be less than the time complexity of the hash function. In current versions of CPython, a string of length L takes O(L) time to compute the hash of if it's the first time you've hashed that particular string object, and O(1) time if the hash for that string object has already been computed (since the hash is stored): >>> from timeit import timeit >>> s = 'b' * (10**9) # string of length 1 billion >>> timeit(lambda: hash(s), number=1) 0.48574538500002973 # half a second >>> timeit(lambda: hash(s), number=1) 5.301000044255488e-06 # 5 microseconds So that's also how long it takes when you look up the key in a dictionary: >>> s = 'c' * (10**9) # string of length 1 billion >>> d = dict() >>> timeit(lambda: s in d, number=1) 0.48521506899999167 # half a second >>> timeit(lambda: s in d, number=1) 4.491000026973779e-06 # 5 microseconds You also need to be aware that a key in a dictionary is not looked up only by its hash: when the hashes match, it still needs to test that the key you looked up is equal to the key used in the dictionary, in case the hash matching is a false positive. Testing equality of strings takes O(L) time in the worst case: >>> s1 = 'a'*(10**9) >>> s2 = 'a'*(10**9) >>> timeit(lambda: s1 == s2, number=1) 0.2006020820001595 So for a key of length L and a dictionary of length n: If the key is not present in the dictionary, and its hash has already been cached, then it takes O(1) average time to confirm it is absent. If the key is not present and its hash has not been cached, then it takes O(L) average time because of computing the hash. If the key is present, it takes O(L) average time to confirm it is present whether or not the hash needs to be computed, because of the equality test. The worst case is always O(nL) because if every hash collides and the strings are all equal except in the last places, then a slow equality test has to be done n times.
6
10
60,512,830
2020-3-3
https://stackoverflow.com/questions/60512830/yield-inside-a-recursive-procedure
Let's say I have a Python list representing ranges for some variables: conditions = [['i', (1, 5)], ['j', (1, 2)]] This represents that variable i ranges from 1 to 5, and inside that loop variable j ranges from 1 to 2. I want a dictionary for each possible combination: {'i': 1, 'j': 1} {'i': 1, 'j': 2} {'i': 2, 'j': 1} {'i': 2, 'j': 2} {'i': 3, 'j': 1} {'i': 3, 'j': 2} {'i': 4, 'j': 1} {'i': 4, 'j': 2} {'i': 5, 'j': 1} {'i': 5, 'j': 2} The reason is that I want to iterate over them. But because the whole space is too big, I do not want to generate all of them, store them and then iterate over that list of dictionaries. I thought about using the following recursive procedure, but I need some help with the yield part. Where should it be? How do I avoid nested generators? def iteration(conditions, currentCondition, valuedIndices): if currentCondition == len(conditions): yield valuedIndices else: cond = conditions[currentCondition] index = cond[0] lim1 = cond[1][0] lim2 = cond[1][1] for ix in range(lim1, lim2 + 1): valuedIndices[index] = ix yield iteration(conditions, currentCondition + 1, valuedIndices) Now I would like to be able to do: for valued_indices in iteration(conditions, 0, {}): ...
This is a case where it might be easier to take a step back and start fresh. Let's start by getting the keys and the intervals separate, using a well-known trick involving zip: >>> keys, intervals = list(zip(*conditions)) >>> keys ('i', 'j') >>> intervals ((1, 5), (1, 2)) (The correspondence between the two preserves the original pairing; intervals[i] is the interval for the variable keys[i] for all i.) Now, let's create proper range objects from those intervals >>> intervals = [range(x, y+1) for x, y in intervals] >>> list(intervals[0]) [1, 2, 3, 4, 5] >>> list(intervals[1]) [1, 2] We can compute the product of these range objects >>> for v in product(*intervals): ... print(v) ... (1, 1) (1, 2) (2, 1) (2, 2) (3, 1) (3, 2) (4, 1) (4, 2) (5, 1) (5, 2) which you should recognize as the values to use for each dict. You can zip each of those values with the keys to create an appropriate set of arguments for the dict command. For example: >>> dict(zip(keys, (1,1))) {'i': 1, 'j': 1} Putting this all together, we can iterate over the product to produce each dict in turn, yielding it. def iteration(conditions): keys, intervals = zip(*conditions) intervals = [range(x, y+1) for x, y in intervals] yield from (dict(zip(keys, v)) for v in product(*intervals))
9
6
60,510,815
2020-3-3
https://stackoverflow.com/questions/60510815/how-does-np-ndarray-tobytes-work-for-dtype-object
I encountered a strange behavior of np.ndarray.tobytes() that makes me doubt that it is working deterministically, at least for arrays of dtype=object. import numpy as np print(np.array([1,[2]]).dtype) # => object print(np.array([1,[2]]).tobytes()) # => b'0h\xa3\t\x01\x00\x00\x00H{!-\x01\x00\x00\x00' print(np.array([1,[2]]).tobytes()) # => b'0h\xa3\t\x01\x00\x00\x00\x88\x9d)-\x01\x00\x00\x00' In the sample code, a list of mixed python objects ([1, [2]]) is first converted to a numpy array, and then transformed to a byte sequence using tobytes(). Why do the resulting byte-representations differ for repeated instantiations of the same data? The documentation just states that it converts an ndarray to raw python bytes, but it does not refer to any limitations. So far, I observed this just for dtype=object. Numeric arrays always yield the same byte sequence: np.random.seed(42); print(np.random.rand(3).tobytes()) # b'\xecQ_\x1ew\xf8\xd7?T\xd6\xbbh@l\xee?Qg\x1e\x8f~l\xe7?' np.random.seed(42); print(np.random.rand(3).tobytes()) # b'\xecQ_\x1ew\xf8\xd7?T\xd6\xbbh@l\xee?Qg\x1e\x8f~l\xe7?' Have I missed an elementar thing about python's/numpy's memory architecture? I tested with numpy version 1.17.2 on a Mac. Context: I encountered this problem when trying to compute a hash for arbitrary data structures. I hoped that I can rely on the basic serialization capabilities of tobytes(), but this appears to be a wrong premise. I know that pickle is the standard for serialization in python, but since I don't require portability and my data structures only contain numbers, I first sought help with numpy.
An array of dtype object stores pointers to the objects it contains. In CPython, this corresponds to the id. Every time you create a new list, it will be allocated at a new memory address. However, small integers are interned, so 1 will reference the same integer object every time. You can see exactly how this works by checking the IDs of some sample objects: >>> x = np.array([1, [2]]) >>> x.tobytes() b'\x90\x91\x04a\xfb\x7f\x00\x00\xc8[J\xaa+\x02\x00\x00' >>> id(x[0]) 140717641208208 >>> id(1) # Small integers are interned 140717641208208 >>> id(x[0]).to_bytes(8, 'little') # Checks out as the first 8 bytes b'\x90\x91\x04a\xfb\x7f\x00\x00' >>> id(x[1]).to_bytes(8, 'little') # Checks out as the last 8 bytes b'\xc8[J\xaa+\x02\x00\x00' As you can see, it is quite deterministic, but serializes information that is essentially useless to you. The operation is the same for numeric arrays as for object arrays: it returns a view or copy of the underlying buffer. The contents of the buffer is what is throwing you off. Since you mentioned that you are computing hashes, keep in mind that there is a reason that python lists are unhashable. You can have lists that are equal at one time and different at another. Using IDs is generally not a good idea for an effective hash.
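For the hashing use case that motivated the question, a sketch of a deterministic hash for numeric (non-object) arrays could look like this - it folds in the dtype and shape as well as the raw buffer, since two different arrays can share the same byte content:

import hashlib
import numpy as np

def array_hash(a: np.ndarray) -> str:
    a = np.ascontiguousarray(a)        # normalize memory layout before taking the buffer
    h = hashlib.sha256()
    h.update(str(a.dtype).encode())    # distinguish e.g. int32 from float32 with equal bytes
    h.update(str(a.shape).encode())    # distinguish (2, 3) from (3, 2)
    h.update(a.tobytes())
    return h.hexdigest()

x = np.arange(12, dtype=np.float64).reshape(3, 4)
print(array_hash(x))  # identical on every run, unlike object arrays

For object arrays (nested lists, mixed types) there is no way around serializing the contained Python objects themselves, e.g. with pickle or a canonical repr, before hashing.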
7
3
60,509,336
2020-3-3
https://stackoverflow.com/questions/60509336/fatal-error-python-h-no-such-file-or-directory-python-levenshtein-install
Firstly, I'm working on an Amazon EC2 instance, Amazon linux version 2 AMI using Python 3.7. I'm trying to install the python-Levenshtein package using the command: pip3 install python-Levenshtein --user and I'm getting a rather huge error, with the key parts being; gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python3.7m -c Levenshtein/_levenshtein.c -o build/temp.linux-x86_64-3.7/Levenshtein/_levenshtein.o Levenshtein/_levenshtein.c:99:10: fatal error: Python.h: No such file or directory #include <Python.h> ^~~~~~~~~~ compilation terminated. error: command 'gcc' failed with exit status 1 and: Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-m87wdfsg/python-Levenshtein/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-w3meudfd-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-m87wdfsg/python-Levenshtein/ I've tried many, many solutions, the main one being this: error: command 'gcc' failed with exit status 1 while installing eventlet Edit Tried this... 1) sudo yum install python-devel (no change) 2) reinstalling GCC (no change) 3) spinning up a clean EC2 instance (same error) 4) pip install python-dev-tools (now gives me the error repeated 2x) 5) attempting to find Python.h using 'locate "Python.h"' (nothing) 6) sudo yum list python37-devel (error: no matching packages)
Thanks to Charles, the answer is as follows: sudo yum install python3-devel
8
18
60,501,332
2020-3-3
https://stackoverflow.com/questions/60501332/aws-textract-unsupporteddocumentexception-pdf
I'm using boto3 (aws sdk for python) to analyze a document (a pdf) to get the form key:value pairs. import boto3 def process_text_analysis(bucket, document): # Get the document from S3 s3_connection = boto3.resource('s3') s3_object = s3_connection.Object(bucket, document) s3_response = s3_object.get() # Analyze the document client = boto3.client('textract') response = client.analyze_document(Document={'S3Object': {'Bucket': bucket, 'Name': document}}, FeatureTypes=["FORMS"]) process_text_analysis('francismorgan-01', '709 Privado M SURESTE.pdf') I have followed the documentation for AWS using Analyze Document and when I run my function I get the error. botocore.errorfactory.UnsupportedDocumentException: An error occurred (UnsupportedDocumentException) when calling the AnalyzeDocument operation: Request has unsupported document format Am I missing something?
AnalyzeDocument is a synchronous API that only supports PNG or JPG images. Since you want to work with PDF files, you'll need to use the Amazon Textract asynchronous API, e.g. StartDocumentAnalysis or StartDocumentTextDetection.
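For illustration only, a minimal polling sketch with boto3 could look like the following; the sleep interval is arbitrary, and in production you would normally let Textract notify an SNS topic instead of polling:

import time
import boto3

textract = boto3.client('textract')

# Start an asynchronous analysis job for the PDF stored in S3
start = textract.start_document_analysis(
    DocumentLocation={'S3Object': {'Bucket': 'francismorgan-01', 'Name': '709 Privado M SURESTE.pdf'}},
    FeatureTypes=['FORMS'])
job_id = start['JobId']

# Poll until the job finishes (for large documents the results are paginated via NextToken)
while True:
    result = textract.get_document_analysis(JobId=job_id)
    if result['JobStatus'] in ('SUCCEEDED', 'FAILED'):
        break
    time.sleep(5)

if result['JobStatus'] == 'SUCCEEDED':
    blocks = result['Blocks']  # the KEY_VALUE_SET blocks hold the form key:value data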
6
12
60,506,508
2020-3-3
https://stackoverflow.com/questions/60506508/get-file-size-creation-date-and-modification-date-in-python
I need to get file info (path, size, dates, etc) and save it in a txt but I don't know where or how to do it. This is what I have: ruta = "FolderPath" os.listdir(path=ruta) miArchivo = open("TxtPath","w") def getListOfFiles(ruta): listOfFile = os.listdir(ruta) allFiles = list() for entry in listOfFile: fullPath = os.path.join(ruta, entry) if os.path.isdir(fullPath): allFiles = allFiles + getListOfFiles(fullPath) else: allFiles.append(fullPath) return allFiles listOfFiles = getListOfFiles(ruta) for elem in listOfFiles: print(elem) print("\n") miArchivo.write("%s\n" % (elem)) miArchivo.close() The output is (only path, no other info): What I want is: V:\1111111\222222222\333333333\444444444\5555555555\66666666\Folder\File name -- size -- modification date and so on
https://docs.python.org/2.7/library/os.path.html#module-os.path os.path.getsize(path) # size in bytes os.path.getctime(path) # time of last metadata change; it's a bit OS specific. Here's a rewrite of your program. I did this: Reformatted with autopep8 for better readability. (That's something you can install to prettify your code. But IDEs such as PyCharm Community Edition can help you to do the same, in addition to helping you with code completion and a GUI debugger.) Made your getListOfFiles() return a list of tuples. There are three elements in each one: the filename, the size, and the timestamp of the file, which appears to be what's known as an epoch time (time in seconds since 1970; you will have to go through the python documentation on dates and times). The tuples are written to your text file in a .csv style format (but note there are modules to do the same in a much better way). Rewritten code: import os def getListOfFiles(ruta): listOfFile = os.listdir(ruta) allFiles = list() for entry in listOfFile: fullPath = os.path.join(ruta, entry) if os.path.isdir(fullPath): allFiles = allFiles + getListOfFiles(fullPath) else: print('getting size of fullPath: ' + fullPath) size = os.path.getsize(fullPath) ctime = os.path.getctime(fullPath) item = (fullPath, size, ctime) allFiles.append(item) return allFiles ruta = "FolderPath" miArchivo = open("TxtPath", "w") listOfFiles = getListOfFiles(ruta) for elem in listOfFiles: miArchivo.write("%s,%s,%s\n" % (elem[0], elem[1], elem[2])) miArchivo.close() Now it does this. my-MBP:verynew macbookuser$ python verynew.py; cat TxtPath getting size of fullPath: FolderPath/dir2/file2 getting size of fullPath: FolderPath/dir2/file1 getting size of fullPath: FolderPath/dir1/file1 FolderPath/dir2/file2,3,1583242888.4 FolderPath/dir2/file1,1,1583242490.17 FolderPath/dir1/file1,1,1583242490.17 my-MBP:verynew macbookuser$
8
5
60,497,198
2020-3-2
https://stackoverflow.com/questions/60497198/python-jupyter-notebook-in-vscode-does-not-use-the-right-environment
The situation I use Anaconda 3 on Windows 10. I have a Visual Studio Code workspace (my_workspace) than contains a Jupyter notebook (my_notebook.ipynb). VSCode has the Python extension installed. The file my_workspace/settings.json contains: { "python.pythonPath": "C:\\Users\\Me\\Anaconda3\\envs\\my_env\\python.exe" } my_env is an existing Anaconda environment. I can activate it and work with it in a shell, and if I run jupyter lab in such a shell, the code inside the notebooks can import my_env's packages as expected. If I open my_workspace in VSCode, then my_notebook.ipynb in a tab, my_env is also mentioned in VSCode's status bar ("Python 3.7.6 64-bit ('my_env': conda)"), and my_env is automatically activated when I open a PowerShell prompt in VSCode's console (I ran conda init once). The problem When the notebook is opened in VSCode, the Jupyter kernel seems to use the base environment's Python interpreter instead of the one in my_env. When importing a package installed in my_env, but not in base, I get this error: >>> import keras Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'keras' This happens for all packages, not just keras. In the notebook tab in VSCode, if I click on the interpreter's name in the top-right corner, then choose the correct interpreter (the one in my_env), then the notebook runs correctly in my_env. But I have to do this every time I re-open VSCode. How to make the default kernel respect the environment chosen in settings.json?
I think there is no parameter right now to control that in the settings.json. I had similar problems with the environments in which the notebook is launched and I was able to fix this by modifying the kernelspec section in the IPython notebook. Basically, open the notebook as a JSON file and remove the kernelspec section. When the notebook is launched from vscode, that part will be filled with the default python environment kernel for the workspace. In my case, it is filled with the pipenv environment.
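If you prefer not to edit the JSON by hand, a small sketch along these lines could remove that section; the notebook filename is a placeholder:

import json

path = 'my_notebook.ipynb'  # placeholder: path to your notebook
with open(path) as f:
    nb = json.load(f)

# The kernel spec lives under the notebook's metadata; drop it if present
nb.get('metadata', {}).pop('kernelspec', None)

with open(path, 'w') as f:
    json.dump(nb, f, indent=1)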
18
19
60,497,344
2020-3-2
https://stackoverflow.com/questions/60497344/how-to-use-python-c-on-windows
https://docs.python.org/3/using/cmdline.html This is the options documentation, but it doesn't give me any useful guidance. I want to execute code in this way: python -c "def hello():\n print('hello world')" The error message: PS C:\Users\Administrator> python -c "def hello():\n print('hello world')" File "<string>", line 1 def hello():\n print('hello world') ^ SyntaxError: unexpected character after line continuation character It works on Linux, but not on Windows. Hope you can give a fully fixed command. Another question connected with this one: \n problem when using javascript to exec "python -c "
Try backtick instead of backslash. Error: PS C:\Users\me> python -c "def hello():\n print('hello world')" File "<string>", line 1 def hello():\n print('hello world') ^ SyntaxError: unexpected character after line continuation character PS C:\Users\me> Ok: PS C:\Users\me> python -c "def hello():`n print('hello world')" PS C:\Users\me> Useful: PS C:\Users\me> python -c "def hello():`n print('hello world')`nhello()" hello world PS C:\Users\me> Just echoing to see it: PS C:\Users\me> echo "def hello():`n print('hello world')`nhello()" def hello(): print('hello world') hello() PS C:\Users\me> See PowerTip: New Lines with PowerShell
7
4
60,458,581
2020-2-28
https://stackoverflow.com/questions/60458581/find-entries-that-do-not-match-between-columns-and-iterate-through-columns
I have two datasets that I need to validate against. All records should match. I am having trouble in determining how to iterate through each different column. import pandas as pd import numpy as np df = pd.DataFrame([['charlie', 'charlie', 'beta', 'cappa'], ['charlie', 'charlie', 'beta', 'delta'], ['charlie', 'charlie', 'beta', 'beta']], columns=['A_1', 'A_2','B_1','B_2']) df.head() Out[83]: A_1 A_2 B_1 B_2 0 charlie charlie beta cappa 1 charlie charlie beta delta 2 charlie charlie beta beta For example, in the above code, I want to compare A_1 to A_2, and B_1 to B_2, to return a new column, A_check and B_check respectively, that return True if A_1 matches A_2 as the A_Check for instance. Something like this: df['B_check'] = np.where((df['B_1'] == df['B_2']), 'True', 'False') df_subset = df[df['B_check']=='False'] But iterable across any given column names, where columns that need to be checked against will always have the same name before the underscore and always have 1 or 2 after the underscore. Ultimately, the actual task has multiple data frames with varying columns to check, as well as varying numbers of columns to check. The output I am ultimately going for is a data frame that shows all the records that were false for any particular column check.
With a bit more comprehensive regex: from itertools import groupby import re for k, cols in groupby(sorted(df.columns), lambda x: x[:-2] if re.match(".+_(1|2)$", x) else None): cols=list(cols) if(len(cols)==2 and k): df[f"{k}_check"]=df[cols[0]].eq(df[cols[1]]) It will pair together only columns whose names end with _1 and _2, regardless of what comes before that in their names, calculating _check only if both _1 and _2 are present (assuming you don't have 2 columns with the same name). For the sample data: A_1 A_2 B_1 B_2 A_check B_check 0 charlie charlie beta cappa True False 1 charlie charlie beta delta True False 2 charlie charlie beta beta True True
7
7
60,495,296
2020-3-2
https://stackoverflow.com/questions/60495296/multiple-inheritance-with-kwargs
Problem I came across this code in Object Oriented Programming by Dusty Phillips (simplified for brevity) and I unsure about a specific part of this definition. class A: def __init__(self, a, **kwargs): super().__init__(**kwargs) self.a = a class B: def __init__(self, b, **kwargs): super().__init__(**kwargs) self.b = b class C(A, B): def __init__(self, c, **kwargs): super().__init__(**kwargs) self.c = c Questions Since the method resolution order is (__main__.C, __main__.A, __main__.B, object), could class B be defined in the following way instead? class B: def __init__(self, b): self.b = b Isn't super().__init__(**kwargs) in class B redundant, since any surplus kwargs passed to C will be passed to object, raising? TypeError: object.__init__() takes exactly one argument (the instance to initialize) Is this a safeguard for if C was defined as class C(B, A) instead of class C(A, B)?
Since the method resolution order is (__main__.C, __main__.A, __main__.B, object), could class B be defined in the following way instead? No, because then this would fail: class D: def __init__(self, d, **kwargs): self.d = d super().__init__(**kwargs) class E(C, D): def __init__(self, e, **kwargs): self.e = e super().__init__(**kwargs) The MRO of E is (E, C, A, B, D, object), so B must call super().__init__ otherwise D.__init__ won't be called. Isn't super().__init__(**kwargs) in class B redundant, since any surplus kwargs passed to C will be passed to object, raising? No, because the surplus kwargs will go to D.__init__ in the above example. But even without that, it is not redundant to raise an error when you call a constructor with too many arguments; it is desirable to have an error message informing you about your incorrect code, rather than for the mistake to go undetected. Is this a safeguard for if C was defined as class C(B, A) instead of class C(A, B)? In some sense, sure; but really it's a safeguard for B occurring in any class hierarchy, so long as the other classes in the hierarchy follow the same rule of calling super().__init__(**kwargs).
11
3
60,492,462
2020-3-2
https://stackoverflow.com/questions/60492462/mfcc-python-completely-different-result-from-librosa-vs-python-speech-features
I'm trying to do extract MFCC features from audio (.wav file) and I have tried python_speech_features and librosa but they are giving completely different results: audio, sr = librosa.load(file, sr=None) # librosa hop_length = int(sr/100) n_fft = int(sr/40) features_librosa = librosa.feature.mfcc(audio, sr, n_mfcc=13, hop_length=hop_length, n_fft=n_fft) # psf features_psf = mfcc(audio, sr, numcep=13, winlen=0.025, winstep=0.01) Below are the plots: librosa: python_speech_features: Did I pass any parameters wrong for those two methods? Why there's such a huge difference here? Update: I have also tried tensorflow.signal implementation, and here's the result: The plot itself matches closer to the one from librosa, but the scale is closer to python_speech_features. (Note that here I calculated 80 mel bins and took the first 13; if I do the calculation with only 13 bins, the result looks quite different as well). Code below: stfts = tf.signal.stft(audio, frame_length=n_fft, frame_step=hop_length, fft_length=512) spectrograms = tf.abs(stfts) num_spectrogram_bins = stfts.shape[-1] lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80 linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix( num_mel_bins, num_spectrogram_bins, sr, lower_edge_hertz, upper_edge_hertz) mel_spectrograms = tf.tensordot(spectrograms, linear_to_mel_weight_matrix, 1) mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(linear_to_mel_weight_matrix.shape[-1:])) log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6) features_tf = tf.signal.mfccs_from_log_mel_spectrograms(log_mel_spectrograms)[..., :13] features_tf = np.array(features_tf).T I think my question is: which output is closer to what MFCC actually looks like?
There are at least two factors at play here that explain why you get different results: There is no single definition of the mel scale. Librosa implement two ways: Slaney and HTK. Other packages might and will use different definitions, leading to different results. That being said, overall picture should be similar. That leads us to the second issue... python_speech_features by default puts energy as first (index zero) coefficient (appendEnergy is True by default), meaning that when you ask for e.g. 13 MFCC, you effectively get 12 + 1. In other words, you were not comparing 13 librosa vs 13 python_speech_features coefficients, but rather 13 vs 12. The energy can be of different magnitude and therefore produce quite different picture due to the different colour scale. I will now demonstrate how both modules can produce similar results: import librosa import python_speech_features import matplotlib.pyplot as plt from scipy.signal.windows import hann import seaborn as sns n_mfcc = 13 n_mels = 40 n_fft = 512 hop_length = 160 fmin = 0 fmax = None sr = 16000 y, sr = librosa.load(librosa.util.example_audio_file(), sr=sr, duration=5,offset=30) mfcc_librosa = librosa.feature.mfcc(y=y, sr=sr, n_fft=n_fft, n_mfcc=n_mfcc, n_mels=n_mels, hop_length=hop_length, fmin=fmin, fmax=fmax, htk=False) mfcc_speech = python_speech_features.mfcc(signal=y, samplerate=sr, winlen=n_fft / sr, winstep=hop_length / sr, numcep=n_mfcc, nfilt=n_mels, nfft=n_fft, lowfreq=fmin, highfreq=fmax, preemph=0.0, ceplifter=0, appendEnergy=False, winfunc=hann) As you can see the scale is different, but overall picture looks really similar. Note that I had to make sure that a number of parameters passed to the modules is the same.
13
23
60,491,544
2020-3-2
https://stackoverflow.com/questions/60491544/pandas-rolling-std-yields-inconsistent-results-and-differs-from-values-std
Using pandas v1.0.1 and numpy 1.18.1, I want to calculate the rolling mean and std with different window sizes on a time series. In the data I am working with, the values can be constant for some subsequent points such that - depending on the window size - the rolling mean might be equal to all the values in the window and the corresponding std is expected to be 0. However, I see a different behavior using the same df depending on the window size. MWE: for window in [3,5]: values = [1234.0, 4567.0, 6800.0, 6810.0, 6821.0, 6820.0, 6820.0, 6820.0, 6820.0, 6820.0, 6820.0] df = pd.DataFrame(values, columns=['values']) df.loc[:, 'mean'] = df.rolling(window, min_periods=1).mean() df.loc[:, 'std'] = df.rolling(window, min_periods=1).std(ddof=0) print(df.info()) print(f'window: {window}') print(df) print('non-rolling result:', df['values'].iloc[len(df.index)-window:].values.std()) print('') Output: window: 3 values mean std 0 1234.0 1234.000000 0.000000 1 4567.0 2900.500000 1666.500000 2 6800.0 4200.333333 2287.053757 3 6810.0 6059.000000 1055.011216 4 6821.0 6810.333333 8.576454 5 6820.0 6817.000000 4.966555 6 6820.0 6820.333333 0.471405 7 6820.0 6820.000000 0.000000 8 6820.0 6820.000000 0.000000 9 6820.0 6820.000000 0.000000 10 6820.0 6820.000000 0.000000 non-rolling result: 0.0 window: 5 values mean std 0 1234.0 1234.000000 0.000000 1 4567.0 2900.500000 1666.500000 2 6800.0 4200.333333 2287.053757 3 6810.0 4852.750000 2280.329732 4 6821.0 5246.400000 2186.267193 5 6820.0 6363.600000 898.332366 6 6820.0 6814.200000 8.158431 7 6820.0 6818.200000 4.118252 8 6820.0 6820.200000 0.400000 9 6820.0 6820.000000 0.000021 10 6820.0 6820.000000 0.000021 non-rolling result: 0.0 As expected, the std is 0 for idx 7,8,9,10 using a window size of 3. For a window size of 5, I would expect idx 9 and 10 to yield 0. However, the result is different from 0. If I calculate the std 'manually' for the last window of each window size (using idxs 8,9,10 and 6,7,8,9,10, respectively), I get the expected result of 0 for both cases. Does anybody have an idea what could be the issue here? Any numerical caveats?
It seems that the implementation of std() in pd.rolling prefers high performance over numerical accuracy. However, you can apply the NumPy version of the standard deviation: df.loc[:, 'std'] = df.rolling(window, min_periods=1).apply(np.std) Result: values std 0 1234.0 0.000000 1 4567.0 1666.500000 2 6800.0 2287.053757 3 6810.0 2280.329732 4 6821.0 2186.267193 5 6820.0 898.332366 6 6820.0 8.158431 7 6820.0 4.118252 8 6820.0 0.400000 9 6820.0 0.000000 10 6820.0 0.000000 Now the precision is better.
7
6
60,492,839
2020-3-2
https://stackoverflow.com/questions/60492839/how-to-compare-sentence-similarities-using-embeddings-from-bert
I am using the HuggingFace Transformers package to access pretrained models. As my use case needs functionality for both English and Arabic, I am using the bert-base-multilingual-cased pretrained model. I need to be able to compare the similarity of sentences using something such as cosine similarity. To use this, I first need to get an embedding vector for each sentence, and can then compute the cosine similarity. Firstly, what is the best way to extract the semantic embedding from the BERT model? Would taking the last hidden state of the model after being fed the sentence suffice? import torch from transformers import BertModel, BertTokenizer model_class = BertModel tokenizer_class = BertTokenizer pretrained_weights = 'bert-base-multilingual-cased' tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) sentence = 'this is a test sentence' input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)]) with torch.no_grad(): output_tuple = model(input_ids) last_hidden_states = output_tuple[0] print(last_hidden_states.size(), last_hidden_states) Secondly, if this is a sufficient way to get embeddings from my sentence, I now have another problem where the embedding vectors have different lengths depending on the length of the original sentence. The shapes output are [1, n, vocab_size], where n can have any value. In order to compute two vectors' cosine similarity, they need to be the same length. How can I do this here? Could something as naive as first summing across axis=1 still work? What other options do I have?
You can use the [CLS] token as a representation for the entire sequence. This token is typically prepended to your sentence during the preprocessing step. It is the token that is typically used for classification tasks (see figure 2 and paragraph 3.2 in the BERT paper), and it is the very first token of the embedding. Alternatively, you can take the average vector of the sequence (like you say, over the first(?) axis), which can yield better results according to the huggingface documentation (3rd tip). Note that BERT was not designed for sentence similarity using the cosine distance, though in my experience it does yield decent results.
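For illustration, reusing the model setup from the question, a rough sketch of both options followed by a cosine similarity could look like this (just one way to do it, not an official recipe):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained('bert-base-multilingual-cased')

def embed(sentence):
    input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]     # shape [1, seq_len, hidden_size]
    cls_vector = last_hidden_states[0, 0]            # the [CLS] token
    mean_vector = last_hidden_states[0].mean(dim=0)  # average over all tokens
    return cls_vector, mean_vector

v1, _ = embed('this is a test sentence')
v2, _ = embed('this is another test sentence')
similarity = torch.nn.functional.cosine_similarity(v1, v2, dim=0)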
30
16
60,490,169
2020-3-2
https://stackoverflow.com/questions/60490169/maximum-recursion-level-reached-converting-pandas-dataframe-to-json
I have a pandas dataframe which contains thousands of rows, and a few columns. I am getting an error when trying to convert it to a json file. This is the code to convert: sessionAttendance.to_json('SessionAttendance.json') This is the error I'm getting: OverflowError: Maximum recursion level reached _id wondeID session updatedAt 0 123456789101112131415161 AA1234567891 AM 2019-06-21 08:05:50.845 1 123456789101112131415162 AA1234567892 AM 2019-06-21 08:05:50.845 2 123456789101112131415163 AA1234567893 AM 2019-06-21 08:05:50.845 3 123456789101112131415164 AA1234567894 AM 2019-06-21 08:05:50.845 [234195 rows x 4 columns]
It seems to be related to the way Mongo formats its _id fields which are not correctly processed by the json module. A workaround is to set default_handler=str to force the json formatter to use a string representation for any unwanted type: sessionAttendance.to_json('SessionAttendance.json', default_handler=str) Disclaimer: credit should be given to that other SO post
10
18
60,484,383
2020-3-2
https://stackoverflow.com/questions/60484383/typeerror-scalar-value-for-argument-color-is-not-numeric-when-using-opencv
I'm new to OpenCV. Some weird things happened when I drew a circle. It didn't work when I tried to pass c2 to the circle function, but it worked well when I pass c1 to the color argument. But c1 == c2. Here is my code: import cv2 import numpy as np canvas = np.zeros((300, 300, 3), dtype='uint8') for _ in range(1): r = np.random.randint(0, 200) center = np.random.randint(0, 300, size=(2, )) color = np.random.randint(0, 255, size=(3, )) c1 = tuple(color.tolist()) c2 = tuple(color) print('c1 == c2 : {} '.format(c1 == c2)) cv2.circle(canvas, tuple(center), r, c2, thickness=-1) cv2.imshow('Canvas', canvas) cv2.waitKey(0) When I use c2, the console printed: TypeError: Scalar value for argument 'color' is not numeric but why it happened when c1 == c2? Thanks.
Convert data type int64 to int. ndarray.tolist(): data items are converted to the nearest compatible builtin Python type, via the item function. Ex. import cv2 import numpy as np canvas = np.zeros((300, 300, 3), dtype='uint8') for _ in range(1): r = np.random.randint(0, 200) center = np.random.randint(0, 300, size=(2, )) color = np.random.randint(0, 255, size=(3, )) # convert data type int64 to int color = (int(color[0]), int(color[1]), int(color[2])) cv2.circle(canvas, tuple(center), r, tuple(color), thickness=-1) cv2.imshow('Canvas', canvas) cv2.waitKey(0)
17
21
60,480,777
2020-3-1
https://stackoverflow.com/questions/60480777/plotting-already-calculated-confusion-matrix-using-python
How can I plot in Python a Confusion Matrix similar to the one shown here for already given values of the Confusion Matrix? In the code they use the method sklearn.metrics.plot_confusion_matrix which computes the Confusion Matrix based on the ground truth and the predictions. But in my case, I have already calculated my Confusion Matrix. So for example, my Confusion Matrix is (values in percentages): [[0.612, 0.388] [0.228, 0.772]]
If you check the source for sklearn.metrics.plot_confusion_matrix, you can see how the data is processed to create the plot. Then you can reuse the constructor ConfusionMatrixDisplay and plot your own confusion matrix. import matplotlib.pyplot as plt import numpy as np from sklearn.metrics import ConfusionMatrixDisplay cm = np.array([[0.612, 0.388], [0.228, 0.772]]) # your confusion matrix ls = [0, 1] # your y labels disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=ls) disp.plot() plt.show()
7
6
60,480,686
2020-3-1
https://stackoverflow.com/questions/60480686/pytorch-model-summary-forward-func-has-more-than-one-argument
I am using torch summary: from torchsummary import summary I want to pass more than one argument when printing the model summary, but the examples mentioned here (Model summary in pytorch) take only one argument, e.g.: model = Network().to(device) summary(model,(1,28,28)) The reason is that the forward function takes two arguments as input, e.g.: def forward(self, img1, img2): How do I pass two arguments here?
You can use the example given here: pytorch summary multiple inputs summary(model, [(1, 16, 16), (1, 28, 28)])
11
22
60,478,373
2020-3-1
https://stackoverflow.com/questions/60478373/rearranging-columns-with-pandas-is-there-an-equivalent-to-dplyrs-select-e
I'm trying to rearrange columns in a DataFrame, by putting a few columns first, and then all the others after. With R's dplyr, this would look like: library(dplyr) df = tibble(col1 = c("a", "b", "c"), id = c(1, 2, 3), col2 = c(2, 4, 6), date = c("1 Feb", "2 Feb", "3 Feb")) df2 = select(df, id, date, everything()) Easy. With Python's pandas, here's what I've tried: import pandas as pd df = pd.DataFrame({ "col1": ["a", "b", "c"], "id": [1, 2, 3], "col2": [2, 4, 6], "date": ["1 Feb", "2 Feb", "3 Feb"] }) # using sets cols = df.columns.tolist() cols_1st = {"id", "date"} cols = set(cols) - cols_1st cols = list(cols_1st) + list(cols) # wrong column order df2 = df[cols] # using lists cols = df.columns.tolist() cols_1st = ["id", "date"] cols = [c for c in cols if c not in cols_1st] cols = cols_1st + cols # right column order, but is there a better way? df3 = df[cols] The pandas way is more tedious, but I'm fairly new to this. Is there a better way?
You can use df.drop: >>> df = pd.DataFrame({ "col1": ["a", "b", "c"], "id": [1, 2, 3], "col2": [2, 4, 6], "date": ["1 Feb", "2 Feb", "3 Feb"] }) >>> df col1 id col2 date 0 a 1 2 1 Feb 1 b 2 4 2 Feb 2 c 3 6 3 Feb >>> cols_1st = ["id", "date"] >>> df[cols_1st + list(df.drop(cols_1st, 1))] id date col1 col2 0 1 1 Feb a 2 1 2 2 Feb b 4 2 3 3 Feb c 6
9
5
60,441,473
2020-2-27
https://stackoverflow.com/questions/60441473/creating-a-workitem-in-azure-devops-via-python
Trying to create a new work item in VSTS via Python API access, and I can't find anywhere in the documentation how to create a new work item in Python. I'm sure it's fairly simple, but I can't seem to find it in the documentation. https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work%20items/create?view=azure-devops-rest-5.1
Please refer to the official Azure DevOps Python API docs. It contains Python APIs for interacting with and managing Azure DevOps. These APIs power the Azure DevOps Extension for Azure CLI. To learn more about the Azure DevOps Extension for Azure CLI, visit the Microsoft/azure-devops-cli-extension repo. Here is some example code for creating a work item in Python.
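As an illustrative sketch only — the organization URL, personal access token, project name, and the exact versioned import path for JsonPatchOperation are assumptions that may need adjusting for your setup:

from azure.devops.connection import Connection
from azure.devops.v5_1.work_item_tracking.models import JsonPatchOperation  # import path may differ by package version
from msrest.authentication import BasicAuthentication

credentials = BasicAuthentication('', 'your-personal-access-token')   # placeholder PAT
connection = Connection(base_url='https://dev.azure.com/your-org', creds=credentials)
wit_client = connection.clients.get_work_item_tracking_client()

# A work item is created from a JSON patch document that sets its fields
patch_document = [
    JsonPatchOperation(op='add', path='/fields/System.Title', value='Sample task created via the API'),
]
work_item = wit_client.create_work_item(document=patch_document, project='YourProject', type='Task')
print(work_item.id)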
6
6
60,468,385
2020-2-29
https://stackoverflow.com/questions/60468385/is-there-cudnnlstm-or-cudnngru-alternative-in-tensorflow-2-0
The CuDNNGRU in TensorFlow 1.0 is really fast. But when I shifted to TensorFlow 2.0 i am unable to find CuDNNGRU. Simple GRU is really slow in TensorFlow 2.0. Is there any way to use CuDNNGRU in TensorFlow 2.0?
The importable implementations have been deprecated - instead, LSTM and GRU will default to CuDNNLSTM and CuDNNGRU if all conditions are met: activation = 'tanh' recurrent_activation = 'sigmoid' recurrent_dropout = 0 unroll = False use_bias = True Inputs, if masked, are strictly right-padded reset_after = True (GRU only) Also ensure TensorFlow uses GPU: import tensorflow as tf from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Update: there appears to be a problem w/ TF 2.0.0 when running on Colab in getting CuDNN to work; try !pip install tensorflow==2.1.0 instead.
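As a sketch, a GRU layer built with the defaults below should be eligible for the fused CuDNN kernel when a GPU is visible; the layer sizes and input shape are arbitrary:

import tensorflow as tf

model = tf.keras.Sequential([
    # These are the default settings, spelled out to show the CuDNN conditions
    tf.keras.layers.GRU(128,
                        activation='tanh',
                        recurrent_activation='sigmoid',
                        recurrent_dropout=0,
                        unroll=False,
                        use_bias=True,
                        reset_after=True,
                        input_shape=(None, 32)),
    tf.keras.layers.Dense(10),
])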
12
24
60,466,436
2020-2-29
https://stackoverflow.com/questions/60466436/why-is-a-insert0-0-much-slower-than-a00-0
Using a list's insert function is much slower than achieving the same effect using slice assignment: > python -m timeit -n 100000 -s "a=[]" "a.insert(0,0)" 100000 loops, best of 5: 19.2 usec per loop > python -m timeit -n 100000 -s "a=[]" "a[0:0]=[0]" 100000 loops, best of 5: 6.78 usec per loop (Note that a=[] is only the setup, so a starts empty but then grows to 100,000 elements.) At first I thought maybe it's the attribute lookup or function call overhead or so, but inserting near the end shows that that's negligible: > python -m timeit -n 100000 -s "a=[]" "a.insert(-1,0)" 100000 loops, best of 5: 79.1 nsec per loop Why is the presumably simpler dedicated "insert single element" function so much slower? I can also reproduce it at repl.it: from timeit import repeat for _ in range(3): for stmt in 'a.insert(0,0)', 'a[0:0]=[0]', 'a.insert(-1,0)': t = min(repeat(stmt, 'a=[]', number=10**5)) print('%.6f' % t, stmt) print() # Example output: # # 4.803514 a.insert(0,0) # 1.807832 a[0:0]=[0] # 0.012533 a.insert(-1,0) # # 4.967313 a.insert(0,0) # 1.821665 a[0:0]=[0] # 0.012738 a.insert(-1,0) # # 5.694100 a.insert(0,0) # 1.899940 a[0:0]=[0] # 0.012664 a.insert(-1,0) I use Python 3.8.1 32-bit on Windows 10 64-bit. repl.it uses Python 3.8.1 64-bit on Linux 64-bit.
I think it's probably just that they forgot to use memmove in list.insert. If you take a look at the code list.insert uses to shift elements, you can see it's just a manual loop: for (i = n; --i >= where; ) items[i+1] = items[i]; while list.__setitem__ on the slice assignment path uses memmove: memmove(&item[ihigh+d], &item[ihigh], (k - ihigh)*sizeof(PyObject *)); memmove typically has a lot of optimization put into it, such as taking advantage of SSE/AVX instructions.
75
71
60,459,218
2020-2-28
https://stackoverflow.com/questions/60459218/pandas-passing-list-likes-to-loc-or-with-any-missing-labels-is-no-longer-su
For some reason train_test_split, despite lengths being identical and indexes look the same, triggers this error. from sklearn.model_selection import KFold data = {'col1':[30.5,45,1,99,6,5,4,2,5,7,7,3], 'col2':[99.5, 98, 95, 90,1,5,6,7,4,4,3,3],'col3':[23, 23.6, 3, 90,1,9,60,9,7,2,2,1]} df = pd.DataFrame(data) train, test = train_test_split(df, test_size=0.10) X = train[['col1', 'col2']] y2 = train['col3'] X = np.array(X) kf = KFold(n_splits=3, shuffle=True) for train_index, test_index in kf.split(X): X_train, y_train = X[train_index], y[train_index] y is a pandas Series (same length as x). x was a dataframe with about 20 numerical columns casted to numpy array. For some reason train_test_split triggers the error despite the lengths being identical. If i dont call train_test_split it works fine. the last line triggering the error due to trying to index numpy array this way: y[train_ind]
I've tried to create a scenario for your situation. I've created following dataframe: col1 col2 col3 0 1 2 1 1 3 4 0 2 5 6 1 3 7 8 0 4 9 10 1 5 11 12 0 6 13 14 1 7 15 16 0 8 17 18 1 9 19 20 0 10 21 22 1 11 23 24 0 12 25 26 1 13 27 28 0 14 29 30 1 I set col1 and col2 for X and col3 for y. After this I've converted X to numpy array as following. Only difference is I've used shuffle in KFold. X = df[['col1', 'col2']] y = df['col3'] X = np.array(X) kf = KFold(n_splits=3, shuffle=True) for train_index, test_index in kf.split(X): X_train, y_train = X[train_index], y[train_index] And it worked well. So please check my code and your code and clarify it if there is something I missed. Update I assume y2 is y. So y type is still Series, you need to use .iloc for it. Following code worked well. data = {'col1':[30.5,45,1,99,6,5,4,2,5,7,7,3], 'col2':[99.5, 98, 95, 90,1,5,6,7,4,4,3,3],'col3':[23, 23.6, 3, 90,1,9,60,9,7,2,2,1]} df = pd.DataFrame(data) train, test = train_test_split(df, test_size=0.10) X = train[['col1', 'col2']] y = train['col3'] X = np.array(X) kf = KFold(n_splits=3, shuffle=True) for train_index, test_index in kf.split(X): X_train, y_train = X[train_index], y.iloc[train_index]
10
12
60,459,641
2020-2-28
https://stackoverflow.com/questions/60459641/how-do-you-get-mypy-to-recognize-a-newer-version-of-python
I just updated my project to Python 3.7 and I'm seeing this error when I run mypy on the project: error: "Type[datetime]" has no attribute "fromisoformat" datetime does have a function fromisoformat in Python 3.7, but not in previous versions of Python. Why is mypy reporting this error, and how can I get it to analyze Python 3.7 correctly? Things I've tried so far: Deleting .mypy_cache (which has a suspicious looking subfolder titled 3.6) Reinstalling mypy with pip install --upgrade --force-reinstall mypy To reproduce: Create a python 3.6 project install mypy 0.761 (latest) in the project venv scan the project with mypy (mypy .) update the project to python 3.7 add a file with this code in it: from datetime import datetime datetime.fromisoformat('2011-11-04 00:05:23.283') scan the project again (mypy .) [UPDATE: this actually works fine. It was rerunning my precommit hooks without reinstalling pre-commit on the new Python version venv that was causing the problems.]
You are running mypy under an older version of Python. mypy defaults to the version of Python that is used to run it. You have two options: You can change the Python language version with the --python-version command-line option: This flag will make mypy type check your code as if it were run under Python version X.Y. Without this option, mypy will default to using whatever version of Python is running mypy. I'd put this in the project mypy configuration file; the equivalent of the command-line switch is named python_version; put it in the global [mypy] section: [mypy] python_version = 3.7 Install mypy into the virtualenv of your project, so that it uses the exact same Python version. Note that if you see this issue (and didn't accidentally set --python-version, on the command-line or in a configuration file, you are certainly not running mypy from your project venv.
13
6
60,451,472
2020-2-28
https://stackoverflow.com/questions/60451472/how-to-make-a-dict-from-an-enum
How to make a dict from an enum? from enum import Enum class Shake(Enum): VANILLA = "vanilla" CHOCOLATE = "choc" COOKIES = "cookie" MINT = "mint" dct = {} for i in Shake: dct[i]=i.value print(dct) Output: {<Shake.VANILLA: 'vanilla'>: 'vanilla', <Shake.CHOCOLATE: 'choc'>: 'choc', <Shake.COOKIES: 'cookie'>: 'cookie', <Shake.MINT: 'mint'>: 'mint'} But I want the key to be VANILLA and not <Shake.VANILLA: 'vanilla'>
You can just use a dictionary comprehension: from enum import Enum class Shake(Enum): VANILLA = "vanilla" CHOCOLATE = "choc" COOKIES = "cookie" MINT = "mint" dct = {i.name: i.value for i in Shake} print(dct) OUTPUT {'VANILLA': 'vanilla', 'CHOCOLATE': 'choc', 'COOKIES': 'cookie', 'MINT': 'mint'}
28
41
60,432,969
2020-2-27
https://stackoverflow.com/questions/60432969/create-python-c-extension-using-macos-10-15-catalina-that-is-backwards-compati
How can I create a Python C extension wheel for MacOS that is backwards compatible (MacOS 10.9+) using MacOS 10.15? This is what I have so far: export MACOSX_DEPLOYMENT_TARGET=10.9 python -m pip wheel . -w wheels --no-deps python -m pip install delocate for whl in wheels/*.whl; do delocate-wheel -w wheels_fixed -v "$whl" done Unfortunately, pip wheel generates a file myapp-0.0.1-cp37-cp37m-macosx_10_15_x86_64.whl, and unlike auditwheel on Linux, delocate-wheel does not modify the name of the wheel. As a result, if I upload it on PyPI using twine, only users with MacOS 10.15 are able to install it using pip. I guess I could manually rename it to myapp-0.0.1-cp37-cp37m-macosx_10_9_x86_64.whl, but this does not sound right to me. For the builds I am just using the GitHub Actions MacOS virtual machines. Thank you. PS: The compiler used for the build is GCC9
I found the solution to my problem and I will post the answer here in case someone else has the same problem. In order to fix the problem I had to also set export MACOSX_DEPLOYMENT_TARGET=10.9 before I install python using pyenv. Now pip wheel creates my wheel with the tag macosx_10_9_x86_64. Thank you. PS: When installing python via pyenv, python is compiled from source, and somehow it takes into account the flag MACOSX_DEPLOYMENT_TARGET.
11
6
60,423,697
2020-2-26
https://stackoverflow.com/questions/60423697/why-is-performance-so-much-better-with-zarr-than-parquet-when-using-dask
When I run essentially the same calculations with dask against zarr data and parquet data, the zarr-based calculations are significantly faster. Why? Is it maybe because I did something wrong when I created the parquet files? I've replicated the issue with fake data (see below) in a jupyter notebook to illustrate the kind of behavior I'm seeing. I'd appreciate any insight anyone has into why the zarr-based calculation is orders of magnitude faster than the parquet-based calculation. The data I'm working with in real life is earth science model data. The particular data parameters are not important, but each parameter can be thought of as an array with latitude, longitude, and time dimensions. To generate zarr files, I simply write out the multi-dimensional structure of my parameter and its dimensions. To generate parquet, I first "flatten" the 3-D parameter array into a 1-D array, which becomes a single column in my data frame. I then add latitude, longitude, and time columns before writing the data frame out as parquet. This cell has all the imports needed for the rest of the code: import pandas as pd import numpy as np import xarray as xr import dask import dask.array as da import intake from textwrap import dedent This cell generates the fake data files, which total a bit more than 3 Gigabytes in size: def build_data(lat_resolution, lon_resolution, ntimes): """Build a fake geographical dataset with ntimes time steps and resolution lat_resolution x lon_resolution""" lats = np.linspace(-90.0+lat_resolution/2, 90.0-lat_resolution/2, np.round(180/lat_resolution)) lons = np.linspace(-180.0+lon_resolution/2, 180-lon_resolution/2, np.round(360/lon_resolution)) times = np.arange(start=1,stop=ntimes+1) data = np.random.randn(len(lats),len(lons),len(times)) return lats,lons,times,data def create_zarr_from_data_set(lats,lons,times,data,zarr_dir): """Write zarr from a data set corresponding to the data passed in.""" dar = xr.DataArray(data, dims=('lat','lon','time'), coords={'lat':lats,'lon':lons,'time':times}, name="data") ds = xr.Dataset({'data':dar, 'lat':('lat',lats), 'lon':('lon',lons), 'time':('time',times)}) ds.to_zarr(zarr_dir) def create_parquet_from_data_frame(lats,lons,times,data,parquet_file): """Write a parquet file from a dataframe corresponding to the data passed in.""" total_points = len(lats)*len(lons)*len(times) # Flatten the data array data_flat = np.reshape(data,(total_points,1)) # use meshgrid to create the corresponding latitude, longitude, and time # columns mesh = np.meshgrid(lats,lons,times,indexing='ij') lats_flat = np.reshape(mesh[0],(total_points,1)) lons_flat = np.reshape(mesh[1],(total_points,1)) times_flat = np.reshape(mesh[2],(total_points,1)) df = pd.DataFrame(data = np.concatenate((lats_flat, lons_flat, times_flat, data_flat),axis=1), columns = ["lat","lon","time","data"]) df.to_parquet(parquet_file,engine="fastparquet") def create_fake_data_files(): """Create zarr and parquet files with fake data""" zarr_dir = "zarr" parquet_file = "data.parquet" lats,lons,times,data = build_data(0.1,0.1,31) create_zarr_from_data_set(lats,lons,times,data,zarr_dir) create_parquet_from_data_frame(lats,lons,times,data,parquet_file) with open("data_catalog.yaml",'w') as f: catalog_str = dedent("""\ sources: zarr: args: urlpath: "./{}" description: "data in zarr format" driver: intake_xarray.xzarr.ZarrSource metadata: {{}} parquet: args: urlpath: "./{}" description: "data in parquet format" driver: parquet """.format(zarr_dir,parquet_file)) f.write(catalog_str) ## # Generate the fake data ## create_fake_data_files() I ran several different kinds of calculations against the parquet and zarr files, but for simplicity in this example, I'll just pull a single parameter value out at a particular time, latitude, and longitude. This cell builds the zarr and parquet directed acyclic graphs (DAGs) for the calculation: # pick some arbitrary point to pull out of the data lat_value = -0.05 lon_value = 10.95 time_value = 5 # open the data cat = intake.open_catalog("data_catalog.yaml") data_zarr = cat.zarr.to_dask() data_df = cat.parquet.to_dask() # build the DAG for getting a single point out of the zarr data time_subset = data_zarr.where(data_zarr.time==time_value,drop=True) lat_condition = da.logical_and(time_subset.lat < lat_value + 1e-9, time_subset.lat > lat_value - 1e-9) lon_condition = da.logical_and(time_subset.lon < lon_value + 1e-9, time_subset.lon > lon_value - 1e-9) geo_condition = da.logical_and(lat_condition,lon_condition) zarr_subset = time_subset.where(geo_condition,drop=True) # build the DAG for getting a single point out of the parquet data parquet_subset = data_df[(data_df.lat > lat_value - 1e-9) & (data_df.lat < lat_value + 1e-9) & (data_df.lon > lon_value - 1e-9) & (data_df.lon < lon_value + 1e-9) & (data_df.time == time_value)] When I run time against the compute for each of the DAGs, I get wildly different times. The zarr-based subset takes less than a second. The parquet-based subset takes 15-30 seconds. This cell does the zarr-based calculation: %%time zarr_point = zarr_subset.compute() Zarr-based calculation time: CPU times: user 6.19 ms, sys: 5.49 ms, total: 11.7 ms Wall time: 12.8 ms This cell does the parquet-based calculation: %%time parquet_point = parquet_subset.compute() Parquet-based calculation time: CPU times: user 18.2 s, sys: 28.1 s, total: 46.2 s Wall time: 29.3 s As you can see, the zarr-based calculation is much, much faster. Why?
Glad to see fastparquet, zarr and intake used in the same question! TL;DR here is: use the right data model appropriate for your task. Also, it's worth pointing out that the zarr dataset is 1.5GB, blosc/lz4 compressed in 512 chunks, and the parquet dataset 1.8GB, snappy compressed in 5 chunks, where the compression are both the defaults. The random data does not compress well, the coordinates do. zarr is an array-oriented format, and can be chunked on any dimension, which means that, to read a single point, you only need the metadata (which is very brief text) and the one chunk which contains it - which needs to be uncompressed in this case. The indexing of the data chunks is implicit. parquet is a column-oriented format. To find a specific point, you may be able to ignore some chunks based in the min/max column metadata for each chunk, depending on how the coordinate columns are organised, and then load the column chunk for the random data and decompress. You would need custom logic to be able to select chunks for loading on multiple columns simultaneously, which Dask does not currently implement (and would not be possible without carefully reordering your data). The metadata for parquet is much larger than for zarr, but both insignificant in this case - if you had many variables or more coordinates, this might become an extra issue for parquet. In this case random access will be much faster for zarr, but reading all of the data is not radically different, since both must load all the bytes on disc and uncompress into floats, and in both cases the coordinates data loads quickly. However, the in-memory representation of the uncompressed dataframe is much larger than for the uncompressed array, since instead of a 1D small array for each coordinate, you now have arrays for each coordinate with the same number of points as the random data; plus, again, to find a particular point is done by indexing the small arrays to get the right coordinate in the array case, and by comparing to every single lat/lon value of every single point for the dataframe case.
12
17
60,435,907
2020-2-27
https://stackoverflow.com/questions/60435907/pyspark-merge-multiple-columns-into-a-json-column
I asked the question a while back for python, but now I need to do the same thing in PySpark. I have a dataframe (df) like so: |cust_id|address |store_id|email |sales_channel|category| ------------------------------------------------------------------- |1234567|123 Main St|10SjtT |[email protected]|ecom |direct | |4567345|345 Main St|10SjtT |[email protected]|instore |direct | |1569457|876 Main St|51FstT |[email protected]|ecom |direct | and I would like to combine the last 4 fields into one metadata field that is a json like so: |cust_id|address |metadata | ------------------------------------------------------------------------------------------------------------------- |1234567|123 Main St|{'store_id':'10SjtT', 'email':'[email protected]','sales_channel':'ecom', 'category':'direct'} | |4567345|345 Main St|{'store_id':'10SjtT', 'email':'[email protected]','sales_channel':'instore', 'category':'direct'}| |1569457|876 Main St|{'store_id':'51FstT', 'email':'[email protected]','sales_channel':'ecom', 'category':'direct'} | Here's the code I used to do this in python: cols = [ 'store_id', 'store_category', 'sales_channel', 'email' ] df1 = df.copy() df1['metadata'] = df1[cols].to_dict(orient='records') df1 = df1.drop(columns=cols) but I would like to translate this to PySpark code to work with a spark dataframe; I do NOT want to use pandas in Spark.
Use to_json function to create json object! Example: from pyspark.sql.functions import * #sample data df=spark.createDataFrame([('1234567','123 Main St','10SjtT','[email protected]','ecom','direct')],['cust_id','address','store_id','email','sales_channel','category']) df.select("cust_id","address",to_json(struct("store_id","category","sales_channel","email")).alias("metadata")).show(10,False) #result +-------+-----------+----------------------------------------------------------------------------------------+ |cust_id|address |metadata | +-------+-----------+----------------------------------------------------------------------------------------+ |1234567|123 Main St|{"store_id":"10SjtT","category":"direct","sales_channel":"ecom","email":"[email protected]"}| +-------+-----------+----------------------------------------------------------------------------------------+ to_json by passing list of columns: ll=['store_id','email','sales_channel','category'] df.withColumn("metadata", to_json(struct([x for x in ll]))).drop(*ll).show() #result +-------+-----------+----------------------------------------------------------------------------------------+ |cust_id|address |metadata | +-------+-----------+----------------------------------------------------------------------------------------+ |1234567|123 Main St|{"store_id":"10SjtT","email":"[email protected]","sales_channel":"ecom","category":"direct"}| +-------+-----------+----------------------------------------------------------------------------------------+
12
32
60,435,406
2020-2-27
https://stackoverflow.com/questions/60435406/which-exception-should-be-raised-when-a-required-environment-variable-is-missing
The title pretty much sums it up already. I have a piece of code that calls os.getenv to get both a URL as well as a token in order to connect to a service. The code lives in a module and will only be imported from there, i.e. it's not a script. It's not a huge issue at all, since I really only need to crash and display the message saying that there are unset values, but it got me thinking about which of Python's built-in exceptions would be the best fit. I found the EnvironmentError, but that seems to function as base class from which IOError and other OS related exceptions inherit. Would it be as simple as a ValueError, as it's really just a value that's missing? Thanks!
Most built-in concrete exception classes are for specific use cases, and this one does not really fit any of them except RuntimeError. I would advise you to use a custom Exception subclass instead.
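As a sketch, a minimal custom exception could look like this; the variable names are just placeholders:

import os

class MissingEnvironmentVariable(Exception):
    """Raised when a required environment variable is not set."""

def require_env(name):
    value = os.getenv(name)
    if value is None:
        raise MissingEnvironmentVariable(f"required environment variable {name!r} is not set")
    return value

service_url = require_env("SERVICE_URL")      # placeholder names
service_token = require_env("SERVICE_TOKEN")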
34
7
60,434,664
2020-2-27
https://stackoverflow.com/questions/60434664/automatically-determine-header-row-when-reading-csv-in-pandas
I am trying to collect data from different .csv files, that share the same column names. However, some csv files have their headers located in different rows. Is there a way to determine the header row dynamically based on the first row that contains "most" values (the actual header names)? I tried the following: def process_file(file, path, col_source, col_target): global df_master print(file) df = pd.read_csv(path + file, encoding = "ISO-8859-1", header=None) df = df.dropna(thresh=2) ## Drop the rows that contain less than 2 non-NaN values. E.g. metadata df.columns = df.iloc[0,:].values df = df.drop(df.index[0]) However, when using pandas.read_csv(), it seems like the very first value determines the size of the actual dataframe as I receive the following error message: pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 4, saw 162 As you can see in this case the header row would have been located in row 4. When adding error_bad_lines=False to read_csv, only the metadata will be read into the dataframe. The files can have either the structure of: a "Normal" File: row1 col1 col2 col3 col4 col5 row2 val1 val1 val1 val1 val1 row3 val2 val2 val2 val2 val2 row4 or a structure with meta data before header: row1 metadata1 row2 metadata2 row3 col1 col2 col3 col4 col5 row4 val1 val1 val1 val1 val1 Any help much appreciated!
IMHO the simplest way is to forget pandas for a while: you open the file as a text file for reading, and you start parsing it line by line, guessing whether each line is a metadata header, the true header line, or a data line. A simple way is to concatenate all the lines starting from the true header line into a single string (let us call it buffer), and then use pd.read_csv(io.StringIO(buffer), ...)
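A rough sketch of that idea; the "enough non-empty fields" heuristic, the separator, and the encoding are assumptions you may want to tune:

import io
import pandas as pd

def read_csv_skipping_metadata(path, min_fields=3, sep=',', encoding='ISO-8859-1'):
    with open(path, encoding=encoding) as f:
        lines = f.readlines()
    start = 0
    for i, line in enumerate(lines):
        # assume the header is the first line with "enough" non-empty fields
        if len([v for v in line.strip().split(sep) if v]) >= min_fields:
            start = i
            break
    buffer = ''.join(lines[start:])  # everything from the header line onwards
    return pd.read_csv(io.StringIO(buffer), sep=sep)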
9
3
60,434,320
2020-2-27
https://stackoverflow.com/questions/60434320/shapley-for-logistic-regression
Does shapley support logistic regression models? Running the following code I get: logmodel = LogisticRegression() logmodel.fit(X_train,y_train) predictions = logmodel.predict(X_test) explainer = shap.TreeExplainer(logmodel) Exception: Model type not yet supported by TreeExplainer: <class 'sklearn.linear_model.logistic.LogisticRegression'> P.S. You are supposed to use a different explainer for different models
Shap is model agnostic by definition. It looks like you have just chosen an explainer that doesn't suit your model type. I suggest looking at KernelExplainer which as described by the creators here is An implementation of Kernel SHAP, a model agnostic method to estimate SHAP values for any model. Because it makes not assumptions about the model type, KernelExplainer is slower than the other model type specific algorithms. The documentation for Shap is mostly solid and has some decent examples.
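For example, a sketch with KernelExplainer reusing the fitted logmodel from the question; the background sample of 100 rows is an arbitrary choice to keep the computation tractable:

import shap

# Summarise the training data so KernelExplainer stays fast enough
background = shap.sample(X_train, 100)

explainer = shap.KernelExplainer(logmodel.predict_proba, background)
shap_values = explainer.shap_values(X_test)   # one array of SHAP values per class
shap.summary_plot(shap_values, X_test)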
7
6
60,406,272
2020-2-26
https://stackoverflow.com/questions/60406272/how-to-have-persistent-storage-for-a-pypi-package
I have a pypi package called collectiondbf which connects to an API with a user entered API key. It is used in a directory to download files like so: python -m collectiondbf [myargumentshere..] I know this should be basic knowledge, but I'm really stuck on the question: How can I save the keys users give me in a meaningful way so that they do not have to enter them every time? I would like to use the following solution using a config.json file, but how would I know the location of this file if my package will be moving directories? Here is how I would like to user it but obviously it won't work since the working directory will change import json if user_inputed_keys: with open('config.json', 'w') as f: json.dump({'api_key': api_key}, f)
Most common operating systems have the concept of an application directory that belongs to every user who has an account on the system. This directory allows said user to create and read, for example, config files and settings. So, all you need to do is make a list of all distros that you want to support, find out where they like to put user application files, and have a big old if..elif..else chain to open the appropriate directory. Or use appdirs, which does exactly that already: from pathlib import Path import json import appdirs CONFIG_DIR = Path(appdirs.user_config_dir(appname='collectiondbf')) # magic CONFIG_DIR.mkdir(parents=True, exist_ok=True) config = CONFIG_DIR / 'config.json' if not config.exists(): with config.open('w') as f: json.dump(get_key_from_user(), f) with config.open('r') as f: keys = json.load(f) # now 'keys' can safely be imported from this module
7
5
60,408,901
2020-2-26
https://stackoverflow.com/questions/60408901/sklearn-utils-compute-class-weight-function-for-large-dataset
I am training a tensorflow keras sequential model on around 20+ GB of text-based categorical data in a postgres db, and I need to give class weights to the model. Here is what I am doing. class_weights = sklearn.utils.class_weight.compute_class_weight('balanced', classes, y) model.fit(x, y, epochs=100, batch_size=32, class_weight=class_weights, validation_split=0.2, callbacks=[early_stopping]) Since I can't load the whole thing in memory, I figured I can use the fit_generator method in the keras model. However, how can I calculate the class weights on this data? sklearn does not provide any special function for this; is it the right tool for this? I thought of doing it on multiple random samples, but is there a better approach where the whole data can be used?
You can use the generators and also you can compute the class weights. Let's say you have your generator like this train_generator = train_datagen.flow_from_directory( 'train_directory', target_size=(224, 224), batch_size=32, class_mode = "categorical" ) and the class weights for the training set can be computed like this class_weights = class_weight.compute_class_weight( 'balanced', np.unique(train_generator.classes), train_generator.classes) [EDIT 1] Since you mentioned postgres sql in the comments, I am adding the prototype answer here. First fetch the count for each class using a separate query from postgres sql and use it to compute the class weights. You can compute it manually. The basic logic is that the class with the smallest count gets the value 1, and the rest of the classes get a value < 1 based on their count relative to the smallest class. For example, if you have 3 classes A, B, C with counts 100, 200, 150, then the class weights become {A:1, B:0.5, C:0.66}. Let's compute it manually after fetching the values from postgres sql. [Query] cur.execute("SELECT class, count(*) FROM table GROUP BY class ORDER BY 2") rows = cur.fetchall() The above query will return rows of tuples (class name, count for that class) ordered from the least count to the highest. Then the code below will create the class weights dictionary class_weights = {} for row in rows: class_weights[row[0]]=rows[0][1]/row[1] # dividing the least count by the current count to get the weight, # so that the least count becomes 1, # and the other counts become < 1
10
5