question_id: int64 (59.5M to 79.4M)
creation_date: string (8 to 10 characters)
link: string (60 to 163 characters)
question: string (53 to 28.9k characters)
accepted_answer: string (26 to 29.3k characters)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
59,617,184
2020-1-6
https://stackoverflow.com/questions/59617184/why-are-some-python-exceptions-lower-case
In Python, exceptions are classes and cased as such. For example: OSError. However, there are some exceptions, such as those in the socket module, that are named in lower-case. For example: socket.timeout, socket.error. Why is this?
According to the docs, socket.error is "A deprecated alias of OSError. Changed in version 3.3: Following PEP 3151, this class was made an alias of OSError." PEP 3151 explains the naming split: "while standard exception types live in the root namespace, they are visually distinguished by the fact that they use the CamelCase convention, while almost all other builtins use lowercase naming (except True, False, None, Ellipsis and NotImplemented)". In other words, the lower-case names in socket are legacy builtin-style names that are now kept only as aliases (or subclasses) of the CamelCase OSError hierarchy.
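A quick interactive check (a minimal sketch, assuming Python 3.3 or later) confirms that the lower-case names are just aliases or subclasses of the CamelCase OSError hierarchy:

import socket

print(socket.error is OSError)              # True: socket.error is only an alias
print(issubclass(socket.timeout, OSError))  # True: socket.timeout stays in the OSError family
try:
    raise socket.error("boom")
except OSError as exc:                      # caught via the CamelCase name
    print(type(exc).__name__)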
9
7
59,572,706
2020-1-3
https://stackoverflow.com/questions/59572706/django-collectstatic-not-working-on-production-with-s3-but-same-settings-work-l
I've been moving around some settings to make more defined local and production environments, and I must have messed something up. Below are the majority of relevant settings. If I move the production.py settings (which just contains AWS-related settings at the moment) to base.py, I can update S3 from my local machine just fine. Similarly, if I keep those AWS settings in base.py and push to production, S3 updates appropriately. In addition, if I print something from production.py, it does print. However, if I make production.py my "local" settings on manage.py, or when I push to Heroku with the settings as seen below, S3 is not updating. What about my settings is incorrect? (Well, I'm sure a few things, but specifically causing S3 not to update?) Here's some relevant code: __init__.py (in the directory with base, local, and production) from cobev.settings.base import * base.py INSTALLED_APPS = [ ... 'whitenoise.runserver_nostatic', 'django.contrib.staticfiles', ... 'storages', ] ... STATIC_URL = '/static/' STATICFILES_DIRS = [os.path.join(BASE_DIR, "global_static"), os.path.join(BASE_DIR, "media", ) ] MEDIA_ROOT = os.path.join(BASE_DIR, 'media/') MEDIA_URL = '/media/' local.py # local_settings.py from .base import * ... STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') production.py from .base import * # AWS Settings AWS_ACCESS_KEY_ID = config('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = config('AWS_SECRET_ACCESS_KEY') AWS_STORAGE_BUCKET_NAME = 'cobev' AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME AWS_S3_OBJECT_PARAMETERS = { 'CacheControl': 'max-age=86400', } AWS_LOCATION = 'static' AWS_DEFAULT_ACL = 'public-read' STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage' DEFAULT_FILE_STORAGE = 'cobev.storage_backends.MediaStorage' STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION) ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/' # End AWS wsgi.py import os from django.core.wsgi import get_wsgi_application os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cobev.settings.production") application = get_wsgi_application() from whitenoise.django import DjangoWhiteNoise application = DjangoWhiteNoise(application) manage.py #!/usr/bin/env python import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cobev.settings.local") try: from django.core.management import execute_from_command_line except ImportError as exc: raise ImportError( "Couldn't import Django. Are you sure it's installed and " "available on your PYTHONPATH environment variable? Did you " "forget to activate a virtual environment?" ) from exc execute_from_command_line(sys.argv)
As discovered in the comments on the question: you update S3 by running collectstatic, but that is a management command invoked through manage.py, which sets cobev.settings.local as the settings module, not the cobev.settings.production module used by wsgi.py. I think you should select the settings file the normal Django way, via the DJANGO_SETTINGS_MODULE environment variable; you should be able to set it in whatever production environment you are running.
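To illustrate why the environment variable works with the existing manage.py (a minimal, generic sketch, not project-specific code): os.environ.setdefault never overwrites a value that is already set, so exporting DJANGO_SETTINGS_MODULE=cobev.settings.production in the production environment makes management commands such as collectstatic use the AWS settings, while local runs still fall back to cobev.settings.local.

import os

# Pretend the production environment (e.g. a Heroku config var) set the variable:
os.environ["DJANGO_SETTINGS_MODULE"] = "cobev.settings.production"

# This is what manage.py does; setdefault leaves the existing value untouched:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cobev.settings.local")

print(os.environ["DJANGO_SETTINGS_MODULE"])  # -> cobev.settings.production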
7
2
59,596,261
2020-1-5
https://stackoverflow.com/questions/59596261/return-or-yield-from-a-function-that-calls-a-generator
I have a generator generator and also a convenience method to it - generate_all. def generator(some_list): for i in some_list: yield do_something(i) def generate_all(): some_list = get_the_list() return generator(some_list) # <-- Is this supposed to be return or yield? Should generate_all return or yield? I want the users of both methods to use it the same, i.e. for x in generate_all() should be equal to some_list = get_the_list() for x in generate(some_list)
Generators are lazy-evaluating so return or yield will behave differently when you're debugging your code or if an exception is thrown. With return any exception that happens in your generator won't know anything about generate_all, that's because when generator is really executed you have already left the generate_all function. With yield in there it will have generate_all in the traceback. def generator(some_list): for i in some_list: raise Exception('exception happened :-)') yield i def generate_all(): some_list = [1,2,3] return generator(some_list) for item in generate_all(): ... Exception Traceback (most recent call last) <ipython-input-3-b19085eab3e1> in <module> 8 return generator(some_list) 9 ---> 10 for item in generate_all(): 11 ... <ipython-input-3-b19085eab3e1> in generator(some_list) 1 def generator(some_list): 2 for i in some_list: ----> 3 raise Exception('exception happened :-)') 4 yield i 5 Exception: exception happened :-) And if it's using yield from: def generate_all(): some_list = [1,2,3] yield from generator(some_list) for item in generate_all(): ... Exception Traceback (most recent call last) <ipython-input-4-be322887df35> in <module> 8 yield from generator(some_list) 9 ---> 10 for item in generate_all(): 11 ... <ipython-input-4-be322887df35> in generate_all() 6 def generate_all(): 7 some_list = [1,2,3] ----> 8 yield from generator(some_list) 9 10 for item in generate_all(): <ipython-input-4-be322887df35> in generator(some_list) 1 def generator(some_list): 2 for i in some_list: ----> 3 raise Exception('exception happened :-)') 4 yield i 5 Exception: exception happened :-) However this comes at the cost of performance. The additional generator layer does have some overhead. So return will be generally a bit faster than yield from ... (or for item in ...: yield item). In most cases this won't matter much, because whatever you do in the generator typically dominates the run-time so that the additional layer won't be noticeable. However yield has some additional advantages: You aren't restricted to a single iterable, you can also easily yield additional items: def generator(some_list): for i in some_list: yield i def generate_all(): some_list = [1,2,3] yield 'start' yield from generator(some_list) yield 'end' for item in generate_all(): print(item) start 1 2 3 end In your case the operations are quite simple and I don't know if it's even necessary to create multiple functions for this, one could easily just use the built-in map or a generator expression instead: map(do_something, get_the_list()) # map (do_something(i) for i in get_the_list()) # generator expression Both should be identical (except for some differences when exceptions happen) to use. And if they need a more descriptive name, then you could still wrap them in one function. There are multiple helpers that wrap very common operations on iterables built-in and further ones can be found in the built-in itertools module. In such simple cases I would simply resort to these and only for non-trivial cases write your own generators. But I assume your real code is more complicated so that may not be applicable but I thought it wouldn't be a complete answer without mentioning alternatives.
33
15
59,600,000
2020-1-5
https://stackoverflow.com/questions/59600000/none-propagation-in-python-chained-attribute-access
Is there a null propagation operator ("null-aware member access" operator) in Python so I could write something like var = object?.children?.grandchildren?.property as in C#, VB.NET and TypeScript, instead of var = None if not myobject\ or not myobject.children\ or not myobject.children.grandchildren\ else myobject.children.grandchildren.property
No. There is a PEP proposing the addition of such operators but it has not (yet) been accepted. In particular, one of the operators proposed in PEP 505 is The "None-aware attribute access" operator ?. ("maybe dot") evaluates the complete expression if the left hand side evaluates to a value that is not None
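Until something like PEP 505 lands, the chain can be emulated with a small helper (a sketch, not a standard-library feature; the helper name maybe_attr is made up for illustration):

def maybe_attr(obj, *names):
    """Follow a chain of attributes, returning None as soon as any link is None."""
    for name in names:
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj

# var = myobject?.children?.grandchildren?.property would become:
# var = maybe_attr(myobject, "children", "grandchildren", "property")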
14
31
59,587,183
2020-1-4
https://stackoverflow.com/questions/59587183/python-dynamodb-expressionattributevalues-contains-invalid-key-syntax-error-ke
Trying to do an update_item which is supposed to create new attributes if it doesn't find existing ones (according to the documentation), but I am getting a syntax error. I have been wracking my brain all day trying to figure out why I am getting this and I can't seem to get past it. Thank you for any help. Error I am getting: ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: ExpressionAttributeValues contains invalid key: Syntax error; key: "var4" My code: dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('contacts') table.update_item( Key={'email': emailID}, UpdateExpression=SET last_name = :var0, address_1_state = :var1, email_2 = :var2, phone = :var3, phone_2 = :var4 ExpressionAttributeValues={ 'var0': 'Metzger', 'var1': 'CA', 'var2': 'none', 'var3': '949 302-9072', 'var4': '818-222-2311' } )
Just change that section like the following (the keys in ExpressionAttributeValues must start with a colon so they match the :varN placeholders used in the UpdateExpression): ExpressionAttributeValues={ ':var0': 'Metzger', ':var1': 'CA', ':var2': 'none', ':var3': '949 302-9072', ':var4': '818-222-2311' } Hope the code will work then :)
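For completeness, a sketch of the corrected call (using the emailID variable from the question's code): the UpdateExpression has to be passed as a string, and every placeholder key in ExpressionAttributeValues needs the leading colon so it matches the :varN tokens in that expression.

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('contacts')

table.update_item(
    Key={'email': emailID},  # emailID as defined elsewhere in the question's code
    UpdateExpression=(
        "SET last_name = :var0, address_1_state = :var1, "
        "email_2 = :var2, phone = :var3, phone_2 = :var4"
    ),
    ExpressionAttributeValues={
        ':var0': 'Metzger',
        ':var1': 'CA',
        ':var2': 'none',
        ':var3': '949 302-9072',
        ':var4': '818-222-2311',
    },
)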
8
28
59,587,631
2020-1-4
https://stackoverflow.com/questions/59587631/use-center-diverging-colormap-in-a-pandas-dataframe-heatmap-display
I would like to use a diverging colormap to color the background of a pandas dataframe. The aspect that makes this trickier than one would think is the centering. In the example below, a red to blue colormap is used, but the middle of the colormap isn't used for values around zero. How to create a centered background color display where zero is white, all negatives are a red hue, and all positives are a blue hue? import pandas as pd import numpy as np import seaborn as sns np.random.seed(24) df = pd.DataFrame() df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4)*10, columns=list('ABCD'))], axis=1) df.iloc[0, 2] = 0.0 cm = sns.diverging_palette(5, 250, as_cmap=True) df.style.background_gradient(cmap=cm).set_precision(2) The zero in the above display has a red hue and the closest to white background is used for a negative number.
import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from matplotlib import colors np.random.seed(24) df = pd.DataFrame() df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4)*10, columns=list('ABCD'))], axis=1) df.iloc[0, 2] = 0.0 cm = sns.diverging_palette(5, 250, as_cmap=True) def background_gradient(s, m, M, cmap='PuBu', low=0, high=0): rng = M - m norm = colors.Normalize(m - (rng * low), M + (rng * high)) normed = norm(s.values) c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(normed)] return ['background-color: %s' % color for color in c] even_range = np.max([np.abs(df.values.min()), np.abs(df.values.max())]) df.style.apply(background_gradient, cmap=cm, m=-even_range, M=even_range).set_precision(2)
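If a recent pandas is available, the same centering can be expressed without a custom styling function by passing vmin/vmax to Styler.background_gradient (these arguments exist in pandas 1.0+; treat this as a version-dependent alternative). This reuses df, cm and np from the snippet above:

even_range = np.max([np.abs(df.values.min()), np.abs(df.values.max())])
df.style.background_gradient(cmap=cm, vmin=-even_range, vmax=even_range).set_precision(2)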
7
5
59,594,516
2020-1-4
https://stackoverflow.com/questions/59594516/how-to-sample-from-pandas-dataframe-while-keeping-row-order
Given any DataFrame 2-dimensional, you can call eg. df.sample(frac=0.3) to retrieve a sample. But this sample will have completely shuffled row order. Is there a simple way to get a subsample that preserves the row order?
What we can do instead is use df.sample(), and then sort the resultant index by the original row order. Appending the sort_index() call does the trick. Here's my code: df = pd.DataFrame(np.random.randn(100, 10)) result = df.sample(frac=0.3).sort_index() You can even get it in ascending order. Documentation here.
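A small optional extension of the same idea, fixing random_state for reproducibility and verifying that the original row order survives:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 10))
result = df.sample(frac=0.3, random_state=0).sort_index()
print(result.index.is_monotonic_increasing)  # True: rows keep their original order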
10
9
59,589,494
2020-1-4
https://stackoverflow.com/questions/59589494/how-do-i-change-the-index-values-of-a-pandas-series
How can I change the index values of a Pandas Series from the regular integer value that they default to, to values within a list that I have? e.g. x = pd.Series([421, 122, 275, 847, 175]) index_values = ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05'] How do I get the dates in the index_values list to be the indexes in the Series that I've created?
You can assign index values by list: x.index = index_values print(x) 2014-01-01 421 2014-01-02 122 2014-01-03 275 2014-01-04 847 2014-01-05 175 dtype: int64
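If the goal is a real date index rather than plain strings, the same assignment works with pd.to_datetime (a small optional extension of the answer above):

import pandas as pd

x = pd.Series([421, 122, 275, 847, 175])
index_values = ['2014-01-01', '2014-01-02', '2014-01-03',
                '2014-01-04', '2014-01-05']
x.index = pd.to_datetime(index_values)
print(x.index.dtype)  # datetime64[ns]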
40
16
59,582,142
2020-1-3
https://stackoverflow.com/questions/59582142/import-cannot-import-name-resolveinfo-from-graphql-error-when-using-newest
I am having some issues with my Django app since updating my dependencies. Here are my installed apps: INSTALLED_APPS = [ 'graphene_django', 'rest_framework', 'corsheaders', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'dojo_manager.dojo', ] and my requirements.txt: aniso8601==8.0.0 asgiref==3.2.3 Django==3.0.2 django-cors-headers==3.2.0 django-filter==2.2.0 django-graphql-jwt==0.3.0 djangorestframework==3.11.0 djangorestframework-jwt==1.11.0 graphene==2.1.8 graphene-django==2.8.0 graphene-django-extras==0.4.8 graphql-core==3.0.1 graphql-relay==3.0.0 pip-upgrade-outdated==1.5 pipupgrade==1.5.2 promise==2.3 PyJWT==1.7.1 python-dateutil==2.8.1 pytz==2019.3 Rx==3.0.1 singledispatch==3.4.0.3 six==1.13.0 sqlparse==0.3.0 I am getting ImportError: cannot import name 'ResolveInfo' from 'graphql' (E:\Ben\GitHub-Repos\dojo-manager\env\lib\site-packages\graphql\__init__.py) I am aware of https://github.com/graphql-python/graphene-django/issues/737 and https://github.com/graphql-python/graphene/issues/546, none of which seem to solve it in my case. Any help greatly appreciated.
Ok, I was able to fix it by downgrading graphql-core==3.0.1 to graphql-core<3 (and all of its dependencies). I must have missed the errors when running pip install -r requirements.txt.
10
8
59,588,886
2020-1-4
https://stackoverflow.com/questions/59588886/why-isnt-pip-installing-the-latest-version-of-a-package-even-when-a-newer-vers
I was trying to upgrade to the latest version of a package I had installed with pip, but for some reason it won't get the latest version. I've tried uninstalling the package in question, or even reinstalling pip entirely, but it still refuses to get the latest version from PyPI. When I try to pin the package version (e.g. pip install package==0.10.0) it says that it "Could not find a version that satisfies the requirement package==0.10.0 (from versions: ...)" pip search package even acknowledges that the installed version isn't the latest, labeling the two versions for me. I've seen other questions with external files or local versions, but I've tried the respective solutions (--allow-external doesn't exist anymore, and --no-cache-dir doesn't help) and I'm still stuck on the older version.
I was trying to upgrade Quart. Maybe other packages have something else going on. In this particular case, Quart had dropped support for Python 3.6 (the version I had installed) and only supported 3.7 or later. (This was a fairly recent change to the project, so I just didn't see the news.) However, when attempting to install a package only supported by a later Python, pip doesn't really explain why it couldn't find a version to satisfy the requirement - instead, it just lists all the versions that should work with the current Python, without indicating that more exist and just can't be installed. The only real options to fix are: Update your Python to meet the package's requirements Ask/help the maintainer to backport the package to the version you have.
8
7
59,583,726
2020-1-3
https://stackoverflow.com/questions/59583726/django-importerror-cannot-import-name-python-2-unicode-compatible
I'm building a website and I was trying to create a custom user-to-user messaging system so I installed django-messages and maybe a few other things, and suddenly when I tried to run my server I get the following error : Watching for file changes with StatReloader Exception in thread django-main-thread: Traceback (most recent call last): File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner self.run() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\commands\runserver.py", line 109, in inner_run autoreload.raise_last_exception() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\autoreload.py", line 76, in raise_last_exception raise _exception[1] File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\__init__.py", line 357, in execute autoreload.check_errors(django.setup)() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\apps\registry.py", line 114, in populate app_config.import_models() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\apps\config.py", line 211, in import_models self.models_module = import_module(models_module_name) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django_messages\models.py", line 9, in <module> from django.utils.encoding import python_2_unicode_compatible ImportError: cannot import name 'python_2_unicode_compatible' from 'django.utils.encoding' (C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\encoding.py) It sounds like chinese to me, I don't understand a single line of this error, can someone help me ? I've made a few researchs, it looks like it's related to a package name six but I was not able to find a solution. I can provide you my code if it's needed but I don't know which file you need so feel free to ask for a file in the comments and I will edit my post accordingly. Thanks in advance !
You are using Django 3, where all the Python 2 compatibility APIs that used to be bundled with Django were removed. django-messages still depends on these, and is trying and failing to import them. You either need to downgrade to Django 2.2, or wait for django-messages to be updated for Django 3 support. This applies for any library in which you get such errors - it means the library is not compatible yet with Django 3.
14
17
59,585,624
2020-1-3
https://stackoverflow.com/questions/59585624/vectorize-a-for-loop-in-numpy-to-calculate-duct-tape-overlaping
I'm creating an application with python to calculate duct-tape overlapping (modeling a dispenser applies a product on a rotating drum). I have a program that works correctly, but is really slow. I'm looking for a solution to optimize a for loop used to fill a numpy array. Could someone help me vectorize the code below? import numpy as np import matplotlib.pyplot as plt # Some parameters width = 264 bbddiam = 940 accuracy = 4 #2 points per pixel drum = np.zeros(accuracy**2 * width * bbddiam).reshape((bbddiam * accuracy , width * accuracy)) # The "slow" function def line_mask(drum, coef, intercept, upper=True, accuracy=accuracy): """Masks a half of the array""" to_return = np.zeros(drum.shape) for index, v in np.ndenumerate(to_return): if upper == True: if index[0] * coef + intercept > index[1]: to_return[index] = 1 else: if index[0] * coef + intercept <= index[1]: to_return[index] = 1 return to_return def get_band(drum, coef, intercept, bandwidth): """Calculate a ribbon path on the drum""" to_return = np.zeros(drum.shape) t1 = line_mask(drum, coef, intercept + bandwidth / 2, upper=True) t2 = line_mask(drum, coef, intercept - bandwidth / 2, upper=False) to_return = t1 + t2 return np.where(to_return == 2, 1, 0) single_band = get_band(drum, 1 / 10, 130, bandwidth=15) # Visualize the result ! plt.imshow(single_band) plt.show() Numba does wonders for my code, reducing runtime from 5.8s to 86ms (special thanks to @Maarten-vd-Sande): from numba import jit @jit(nopython=True, parallel=True) def line_mask(drum, coef, intercept, upper=True, accuracy=accuracy): ... A better solution with numpy is still welcome ;-)
There is no need for any looping at all here. You have effectively two different line_mask functions. Neither needs to be looped explicitly, but you would probably get a significant speedup just from rewriting it with a pair of for loops in an if and else, rather than an if and else in a for loop, which gets evaluated many many times. The really numpythonic thing to do is to properly vectorize your code to operate on entire arrays without any loops. Here is a vectorized version of line_mask: def line_mask(drum, coef, intercept, upper=True, accuracy=accuracy): """Masks a half of the array""" r = np.arange(drum.shape[0]).reshape(-1, 1) c = np.arange(drum.shape[1]).reshape(1, -1) comp = c.__lt__ if upper else c.__ge__ return comp(r * coef + intercept) Setting up the shapes of r and c to be (m, 1) and (1, n) so that the result is (m, n) is called broadcasting, and is the staple of vectorization in numpy. The result of the updated line_mask is a boolean mask (as the name implies) rather than a float array. This makes it smaller, and hopefully bypasses float operations entirely. You can now rewrite get_band to use masking instead of addition: def get_band(drum, coef, intercept, bandwidth): """Calculate a ribbon path on the drum""" t1 = line_mask(drum, coef, intercept + bandwidth / 2, upper=True) t2 = line_mask(drum, coef, intercept - bandwidth / 2, upper=False) return t1 & t2 The remainder of the program should stay the same, since these functions preserve all the interfaces. If you want, you can rewrite most of your program in three (still somewhat legible) lines: coeff = 1/10 intercept = 130 bandwidth = 15 r, c = np.ogrid[:drum.shape[0], :drum.shape[1]] check = r * coeff + intercept single_band = ((check + bandwidth / 2 > c) & (check - bandwidth / 2 <= c))
7
8
59,582,663
2020-1-3
https://stackoverflow.com/questions/59582663/cnn-pytorch-error-input-type-torch-cuda-bytetensor-and-weight-type-torch-cu
I'm receiving the error, Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same Following is my code, device = torch.device('cuda:0') trainData = torchvision.datasets.FashionMNIST('/content/', train=True, transform=None, target_transform=None, download=True) testData = torchvision.datasets.FashionMNIST('/content/', train=False, transform=None, target_transform=None, download=True) class Net(nn.Module): def __init__(self): super().__init__() ''' Network Structure: input > (1)Conv2D > (2)MaxPool2D > (3)Conv2D > (4)MaxPool2D > (5)Conv2D > (6)MaxPool2D > (7)Linear > (8)LinearOut ''' # Creating the convulutional Layers self.conv1 = nn.Conv2d(in_channels=CHANNELS, out_channels=32, kernel_size=3) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3) self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3) self.flatten = None # Creating a Random dummy sample to get the Flattened Dimensions x = torch.randn(CHANNELS, DIM, DIM).view(-1, CHANNELS, DIM, DIM) x = self.convs(x) # Creating the Linear Layers self.fc1 = nn.Linear(self.flatten, 512) self.fc2 = nn.Linear(512, CLASSES) def convs(self, x): # Creating the MaxPooling Layers x = F.max_pool2d(F.relu(self.conv1(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), kernel_size=(2, 2)) if not self.flatten: self.flatten = x[0].shape[0] * x[0].shape[1] * x[0].shape[2] return x # FORWARD PASS def forward(self, x): x = self.convs(x) x = x.view(-1, self.flatten) sm = F.relu(self.fc1(x)) x = F.softmax(self.fc2(sm), dim=1) return x, sm x_train, y_train = training_set x_train, y_train = x_train.to(device), y_train.to(device) optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE) loss_func = nn.MSELoss() loss_log = [] for epoch in range(EPOCHS): for i in tqdm(range(0, len(x_train), BATCH_SIZE)): x_batch = x_train[i:i+BATCH_SIZE].view(-1, CHANNELS, DIM, DIM).to(device) y_batch = y_train[i:i+BATCH_SIZE].to(device) net.zero_grad() output, sm = net(x_batch) loss = loss_func(output, y_batch.float()) loss.backward() optimizer.step() loss_log.append(loss) # print(f"Epoch : {epoch} || Loss : {loss}") return loss_log train_set = (trainData.train_data, trainData.train_labels) test_set = (testData.test_data, testData.test_labels) EPOCHS = 5 LEARNING_RATE = 0.001 BATCH_SIZE = 32 net = Net().to(device) loss_log = train(net, train_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) And this is the Error that I'm getting, RuntimeError Traceback (most recent call last) <ipython-input-8-0db1a1b4e37d> in <module>() 5 net = Net().to(device) 6 ----> 7 loss_log = train(net, train_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) 6 frames <ipython-input-6-7de4a78e3736> in train(net, training_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) 13 14 net.zero_grad() ---> 15 output, sm = net(x_batch) 16 loss = loss_func(output, y_batch.float()) 17 loss.backward() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) <ipython-input-5-4fddc427892a> in forward(self, x) 41 # FORWARD PASS 42 def forward(self, x): ---> 43 x = self.convs(x) 44 x = x.view(-1, self.flatten) 45 sm = F.relu(self.fc1(x)) <ipython-input-5-4fddc427892a> in convs(self, x) 31 32 # Creating the MaxPooling Layers ---> 33 x = F.max_pool2d(F.relu(self.conv1(x)), 
kernel_size=(2, 2)) 34 x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) 35 x = F.max_pool2d(F.relu(self.conv3(x)), kernel_size=(2, 2)) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input) 343 344 def forward(self, input): --> 345 return self.conv2d_forward(input, self.weight) 346 347 class Conv3d(_ConvNd): /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight) 340 _pair(0), self.dilation, self.groups) 341 return F.conv2d(input, weight, self.bias, self.stride, --> 342 self.padding, self.dilation, self.groups) 343 344 def forward(self, input): RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same I double-checked that my Neural Net and my Inputs both are in GPU. I'm still getting this error and I don't understand why! Somebody, please help me to get out of this error.
Cast your input x_batch to float. Use x_batch = x_batch.float() before you pass it through your model.
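The raw FashionMNIST tensors (trainData.train_data) are uint8, which is why the model receives a ByteTensor while the conv weights are FloatTensors. A small standalone illustration of the mismatch and the fix (a CPU-only sketch; scaling by 255 is optional but usual):

import torch

x = torch.zeros(32, 1, 28, 28, dtype=torch.uint8)  # same dtype as FashionMNIST's raw data
conv = torch.nn.Conv2d(1, 32, kernel_size=3)

# conv(x) would raise the "Input type ... and weight type ... should be the same" error.
out = conv(x.float() / 255.0)  # cast to float (and scale into [0, 1]) first
print(out.dtype)  # torch.float32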
16
29
59,582,503
2020-1-3
https://stackoverflow.com/questions/59582503/matplotlib-how-to-draw-vertical-line-between-two-y-points
I have 2 y points for each x point. I can draw the plot with this code: import matplotlib.pyplot as plt x = [0, 2, 4, 6] y = [(1, 5), (1, 3), (2, 4), (2, 7)] plt.plot(x, [i for (i,j) in y], 'rs', markersize = 4) plt.plot(x, [j for (i,j) in y], 'bo', markersize = 4) plt.xlim(xmin=-3, xmax=10) plt.ylim(ymin=-1, ymax=10) plt.xlabel('ID') plt.ylabel('Class') plt.show() This is the output: How can I draw a thin line connecting each y point pair? Desired output is:
Just add plt.plot((x, x), ([i for (i, j) in y], [j for (i, j) in y]), c='black'). Passing (x, x) as the x data and the two y lists as the y data draws one thin vertical segment per point pair.
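Put together with the snippet from the question, a minimal runnable sketch:

import matplotlib.pyplot as plt

x = [0, 2, 4, 6]
y = [(1, 5), (1, 3), (2, 4), (2, 7)]

lower = [i for (i, j) in y]
upper = [j for (i, j) in y]

plt.plot((x, x), (lower, upper), c='black', linewidth=0.8)  # thin connecting lines
plt.plot(x, lower, 'rs', markersize=4)
plt.plot(x, upper, 'bo', markersize=4)
plt.xlim(xmin=-3, xmax=10)
plt.ylim(ymin=-1, ymax=10)
plt.xlabel('ID')
plt.ylabel('Class')
plt.show()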
7
12
59,563,025
2020-1-2
https://stackoverflow.com/questions/59563025/how-to-reset-tensorboard-when-it-tries-to-reuse-a-killed-windows-pid
Apologies if two days' frustration leaks through... Problem: can't reliably run Tensorboard in jupyter notebook (actually, in Jupyter Lab) with %tensorboard --logdir {logdir} and if I kill the tensorboard process and start again in the notebook it says it is reusing the dead process and port, but the process is dead and netstat -ano | findstr :6006` shows nothing, so the port looks closed too. Question: How in the name of $deity do I get tensorboard to restart from scratch and forget what it thinks it knows about processes, ports etc.? If I could do that I could hack away at residual path etc. issues... Known issues already addressed (I think): need to escape backslashes in Python string to get proper path and other OS gremlins; avoid spaces in path, ensure correct capitalisation... Environment: Win 64-bit Home with Anaconda and Tensforflow-GPU 2 installed via conda install - TF is working and writes data to the specified path given via the call back tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1) # logdir is the full path But I'm damned if I can start Tensorboard reliably within the notebook. I found that if I started an Anaconda command window and invoked tensorboard from there tensorboard started ok... (TF2GPU_Anaconda) C:\Users\Julian>tensorboard --logdir "a:\tensorboard\20200102-112749" 2020-01-02 11:53:58.478848: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all TensorBoard 2.0.0 at http://localhost:6006/ (Press CTRL+C to quit) It was accessibly in Chrome at localhost:6006 as stated (specifically http://localhost:6006/#scalars&run=20200102-112749%5Ctrain) (i'll ignore the other problems with tensorboard such as refresh failures on scalars, odd message on graph, etc.) and %tensorboard --logdir {logdir} then shows tensorboard in the notebook and in the separate chrome tab. However! whilst tensorboard reports in the notebook that it is reusing the old dead PID it is in fact on a completely different new PID What have I been doing wrong, and how do I reset tensorboard completely? PS the last (successful!) invocation was in fact with %tensorboard --logdir {makeWindowsCmdPath('A:\\tensorboard\\20200102-112749')} where makeWindowsCmdPath is defined as def makeWindowsCmdPath(path): return '\"' + str(path) + '\"' UPDATE 2020-01-03 A MWE of eventual success has been uploaded in a comment at Github in response to an issue that includes the PID referencing errors of tensorboard
Hey—sorry to hear that you’re running into issues. It’s entirely plausible that everything that you describe is both accurate and my fault. :-) How in the name of $deity do I get tensorboard to restart from scratch and forget what it thinks it knows about processes, ports etc.? If I could do that I could hack away at residual path etc. issues... There is a directory called .tensorboard-info in your temp directory that maintains a best-effort registry of the TensorBoard jobs that we think are running. When TensorBoard launches (in any manner, including with %tensorboard), it writes an “info file” to that directory, and when you use %tensorboard we first check to see if a “compatible instance” (same working directory and CLI args) is still running, and if so reuse it instead. When a TensorBoard instance shuts down cleanly, it removes its own info file. The idea is that as long as TensorBoard is shut down cleanly we should always have an accurate record of which processes are live, and since this registry is in a temp directory any errors due to hard shutdowns will be short-lived. But this is where I erred: coming from the POSIX world and not being very familiar with Windows application development, I didn’t realize that the Windows temp directory is not actually automatically deleted, ever. Therefore, any bookkeeping errors persist indefinitely. So, the answer to your question is, “remove the .tensorboard-info directory located under tempfile.gettempdir()” (preferably when you don’t have any actively running TensorBoard instances). There are ways that we can plausibly work around this in TensorBoard core: see https://github.com/tensorflow/tensorboard/issues/2483 for a start, and I’ve also considered amortized approaches like letting each TensorBoard instance perform some cleanup of other instances at start time. We haven’t yet gotten around to implementing these. Let me know if this is helpful or if it fails to address your question.
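A small sketch of that cleanup, to be run while no TensorBoard instances are active:

import os
import shutil
import tempfile

info_dir = os.path.join(tempfile.gettempdir(), ".tensorboard-info")
if os.path.isdir(info_dir):
    shutil.rmtree(info_dir)
    print("Removed", info_dir)
else:
    print("Nothing to clean at", info_dir)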
16
26
59,581,746
2020-1-3
https://stackoverflow.com/questions/59581746/why-does-vs-code-autopep8-format-2-white-lines
print("Hello") def world(): print("Hello") world() Gets corrected to: print("Hello") def world(): print("Hello") world() I have tried to: Reinstall Virtual Studio Code Reinstall Python 3.8 Computer Reboot Using other formatters like Black and yapf but got the same result
Because autopep8 follows PEP 8, which requires two blank lines around top-level functions: "Surround top-level function and class definitions with two blank lines."
7
5
59,576,397
2020-1-3
https://stackoverflow.com/questions/59576397/python-kernel-dies-on-jupyter-notebook-with-tensorflow-2
I installed tensorflow 2 on my mac using conda according these instructions: conda create -n tf2 tensorflow Then I installed ipykernel to add this new environment to my jupyter notebook kernels as follows: conda activate tf2 conda install ipykernel python -m ipykernel install --user --name=tf2 That seemed to work well, I am able to see my tf2 environment on my jupyter notebook kernels. Then I tried to run the simple MNIST example to check if all was working properly and I when I execute this line of code: model.fit(x_train, y_train, epochs=5) The kernel of my jupyter notebook dies without more information. I executed the same code on my terminal via python mnist_test.py and also via ipython (command by command) and I don't have any issues, which let's me assume that my tensorflow 2 is correctly installed on my conda environment. Any ideas on what went wrong during the install? Versions: python==3.7.5 tensorboard==2.0.0 tensorflow==2.0.0 tensorflow-estimator==2.0.0 ipykernel==5.1.3 ipython==7.10.2 jupyter==1.0.0 jupyter-client==5.3.4 jupyter-console==5.2.0 jupyter-core==4.6.1 Here I put the complete script as well as the STDOUT of the execution: import tensorflow as tf import matplotlib.pyplot as plt import seaborn as sns mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 nn_model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) nn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) nn_model.fit(x_train, y_train, epochs=5) nn_model.evaluate(x_test, y_test, verbose=2) (tf2) ➜ tensorflow2 python mnist_test.py 2020-01-03 10:46:10.854619: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags. 2020-01-03 10:46:10.854860: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance. Train on 60000 samples Epoch 1/5 60000/60000 [==============================] - 6s 102us/sample - loss: 0.3018 - accuracy: 0.9140 Epoch 2/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.1437 - accuracy: 0.9571 Epoch 3/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.1054 - accuracy: 0.9679 Epoch 4/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.0868 - accuracy: 0.9729 Epoch 5/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.0739 - accuracy: 0.9772 10000/1 - 1s - loss: 0.0359 - accuracy: 0.9782 (tf2) ➜ tensorflow2
After trying different things I run jupyter notebook on debug mode by using the command: jupyter notebook --debug Then after executing the commands on my notebook I got the error message: OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/. And following this discussion, installing nomkl on the virtual environment worked for me. conda install nomkl
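For a quick test, the error message itself also describes an environment-variable workaround (explicitly labelled unsafe and unsupported there, so prefer the nomkl fix for real work); it has to be set before TensorFlow is imported:

import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"  # unsafe workaround quoted from the OMP error message

import tensorflow as tf  # import only after setting the variable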
11
16
59,567,172
2020-1-2
https://stackoverflow.com/questions/59567172/multiple-assignments-via-walrus-operator
I've attempted to make multiple assignments with the walrus operator, and seen questions on StackOverflow such as this which also fail to assign multiple variables using a walrus operator, and am just wondering what a successful multiple assignment would look like, or whether it is not possible. The purpose of doing so is to add support for detecting all assigned variable names in my library mvdef (specifically, within the find_assigned_args function in the mvdef.src.ast_util module). From running ast.parse I can see that the := operator produces an ast.NamedExpr node, and this has a .target attribute which is an ast.Name object, from which I can obtain the assigned name from the object's .id attribute. If I had to guess, I'd presume that if it were to be at all possible, the .target attribute would be a list of ast.Name objects instead of a single ast.Name object, however the fact that I can't seem to get an example of this makes me wonder whether it is impossible, at least for the time being (in which case I can simplify my code and not just guess at what an implementation should be). If anyone knows which specific part of the Python source to look at to tell me if this is possible or not, that'd be helpful, thanks! P.S. - from looking at the test cases in Lib/test/test_parser.py provided in the initial commit (via), there don't seem to be examples of multiple assignments with the walrus operator, so I'm going to assume for now it's not possible (but please chime in if I'm wrong!) def test_named_expressions(self): self.check_suite("(a := 1)") self.check_suite("(a := a)") self.check_suite("if (match := pattern.search(data)) is None: pass") self.check_suite("[y := f(x), y**2, y**3]") self.check_suite("filtered_data = [y for x in data if (y := f(x)) is None]") self.check_suite("(y := f(x))") self.check_suite("y0 = (y1 := f(x))") self.check_suite("foo(x=(y := f(x)))") self.check_suite("def foo(answer=(p := 42)): pass") self.check_suite("def foo(answer: (p := 42) = 5): pass") self.check_suite("lambda: (x := 1)") self.check_suite("(x := lambda: 1)") self.check_suite("(x := lambda: (y := 1))") # not in PEP self.check_suite("lambda line: (m := re.match(pattern, line)) and m.group(1)") self.check_suite("x = (y := 0)") self.check_suite("(z:=(y:=(x:=0)))") self.check_suite("(info := (name, phone, *rest))") self.check_suite("(x:=1,2)") self.check_suite("(total := total + tax)") self.check_suite("len(lines := f.readlines())") self.check_suite("foo(x := 3, cat='vector')") self.check_suite("foo(cat=(category := 'vector'))") self.check_suite("if any(len(longline := l) >= 100 for l in lines): print(longline)") self.check_suite( "if env_base := os.environ.get('PYTHONUSERBASE', None): return env_base" ) self.check_suite( "if self._is_special and (ans := self._check_nans(context=context)): return ans" ) self.check_suite("foo(b := 2, a=1)") self.check_suite("foo(b := 2, a=1)") self.check_suite("foo((b := 2), a=1)") self.check_suite("foo(c=(b := 2), a=1)")
Iterable packing and unpacking is one difference between = and :=, with only the former supporting them. As found in PEP-572: # Equivalent needs extra parentheses loc = x, y # Use (loc := (x, y)) info = name, phone, *rest # Use (info := (name, phone, *rest)) # No equivalent px, py, pz = position name, phone, email, *other_info = contact
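Relating this back to the ast question: the target of an ast.NamedExpr is always a single ast.Name; in forms like (info := (name, phone, *rest)) the tuple sits on the value side, never the target side. A quick check:

import ast

node = ast.parse("(info := (name, phone, *rest))", mode="eval").body
print(type(node).__name__)         # NamedExpr
print(type(node.target).__name__)  # Name
print(node.target.id)              # info
print(type(node.value).__name__)   # Tuple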
22
28
59,562,997
2020-1-2
https://stackoverflow.com/questions/59562997/how-to-parse-and-read-id-field-from-and-to-a-pydantic-model
I am trying to parse MongoDB data to a pydantic schema but fail to read its _id field, which seems to just disappear from the schema. The issue is definitely related to the underscore in front of the object attribute. I can't change the _id field name since that would imply not parsing the field at all. Please find below the code I use (using int instead of ObjectId for the sake of simplification): from pydantic import BaseModel class User_1(BaseModel): _id: int data_1 = {"_id": 1} parsed_1 = User_1(**data_1) print(parsed_1.schema()) class User_2(BaseModel): id: int data_2 = {"id": 1} parsed_2 = User_2(**data_2) print(parsed_2.schema()) User_1 is parsed successfully since its _id field is required but can't be read afterwards. User_2 works in the above example but fails if attached to Mongo, which doesn't provide any id field, only _id. Output of the code above reads as follows: User_1 {'title': 'User_1', 'type': 'object', 'properties': {}} User_2 {'title': 'User_2', 'type': 'object', 'properties': {'id': {'title': 'Id', 'type': 'integer'}}, 'required': ['id']}
You need to use an alias for that field name: from pydantic import BaseModel, Field class User_1(BaseModel): id: int = Field(..., alias='_id') See the docs here.
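A slightly fuller sketch (assuming the pydantic v1 API): allow_population_by_field_name lets the model accept either key, and .dict(by_alias=True) round-trips back to the Mongo-style _id.

from pydantic import BaseModel, Field

class User(BaseModel):
    id: int = Field(..., alias='_id')

    class Config:
        allow_population_by_field_name = True

user = User.parse_obj({"_id": 1})  # as returned by MongoDB
print(user.id)                     # 1
print(user.dict(by_alias=True))    # {'_id': 1}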
24
36
59,559,941
2020-1-2
https://stackoverflow.com/questions/59559941/how-to-round-decimal-places-in-a-dash-table
I have the following python code: import dash import dash_html_components as html import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/dougmellon/rockies_dash/master/rockies_2019.csv') def generate_table(dataframe, max_rows=10): return html.Table( # Header [html.Tr([html.Th(col) for col in dataframe.columns])] + # Body [html.Tr([ html.Td(dataframe.iloc[i][col]) for col in dataframe.columns ]) for i in range(min(len(dataframe), max_rows))] ) external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) app.layout = html.Div(children=[ html.H4('Batting Stats (2019)'), generate_table(df) ]) if __name__ == '__main__': app.run_server(debug=True) which is pulling data from this csv file (github): When I run the following code, python app.py It displays data with greater than three decimals - which isn't in my csv file. I have tried three or four times to reenter the data manually and re-upload the CSV to github but for some reason there is still data with greater than three decimals. Does anyone have any suggestions as to how I could possibly fix this issue?
I would check this answer about formatting the dataframe, which might help: How to display pandas DataFrame of floats using a format string for columns? Or simply try: pd.options.display.float_format = '${:.2f}'.format I also read in one of the Dash DataTable forums that you can format the data in the pandas dataframe itself and the table will display it as-is. For example, to display a percent value: table_df['col_name']=table_df['col_name'].map('{:,.2f}%'.format) To display a float type without the decimal part: table_df['col_name']=table_df['col_name'].map("{:,.0f}".format)
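Applied to the dataframe from the question, a minimal sketch that rounds every float column before the frame is handed to generate_table (which columns need it is an assumption; adjust to whichever columns show too many decimals):

import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/dougmellon/rockies_dash/master/rockies_2019.csv')

for col in df.select_dtypes(include='float').columns:
    df[col] = df[col].map('{:,.3f}'.format)  # stored as strings, displayed with 3 decimals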
8
6
59,554,493
2020-1-1
https://stackoverflow.com/questions/59554493/unable-to-fire-a-docker-build-for-django-and-mysql
I am building an application with Djnago and MySql. I want to use docker for the deployment of my application. I have prepared a requirement.txt, docker-compose.yml and a Dockerfile docker-compose.yml version: "3" services: law-application: restart: always build: context: . ports: - "8000:8000" volumes: - ./app:/app command: > sh -c "python manage.py runserver 0.0.0.0:8000" depends_on: - mysql_db mysql_db: image: mysql:latest command: mysqld --default-authentication-plugin=mysql_native_password volumes: - "./mysql:/var/lib/mysql" ports: - "3306:3306" restart: always environment: - MYSQL_ROOT_PASSWORD=root - MYSQL_DATABASE=root - MYSQL_USER=root - MYSQL_PASSWORD=root requirements.txt django>=2.1.3,<2.2.0 djangorestframework==3.11.0 mysqlclient==1.4.6 Dockerfile FROM python:3.7-alpine MAINTAINER Intersources Inc. ENV PYTHONUNBUFFERED 1 COPY ./requirements.txt /requirements.txt RUN pip install -r /requirements.txt RUN apt-get update RUN apt-get install python3-dev default-libmysqlclient-dev -y RUN mkdir /app WORKDIR /app COPY ./app /app RUN adduser -D jeet USER jeet settings.py DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'my-app-db', 'USER': 'root', 'PASSWORD': 'root', 'HOST': 'mysql_db', 'PORT': 3307, } } I have been trying to run the command docker build . to build an image from docker file but I get this error. Looks like there is some issue with the MySql connector. I have tried searching for the solution but couldn't found any thing to fix this. I am able to build the image if I remove the mysql_db service from the docker-compose.yml. Sending build context to Docker daemon 444.7MB Step 1/12 : FROM python:3.7-alpine ---> 6c7f85a86cca Step 2/12 : MAINTAINER Intersources Inc. ---> Using cache ---> 03b6fa5764d4 Step 3/12 : ENV PYTHONUNBUFFERED 1 ---> Using cache ---> 22ecd91dcb55 Step 4/12 : COPY ./requirements.txt /requirements.txt ---> Using cache ---> e58c16108f20 Step 5/12 : RUN pip install -r /requirements.txt ---> Running in 8f3eb8240fce Collecting django<2.2.0,>=2.1.3 Downloading https://files.pythonhosted.org/packages/ff/82/55a696532518aa47666b45480b579a221638ab29d60d33ce71fcbd3cef9a/Django-2.1.15-py3-none-any.whl (7.3MB) Collecting djangorestframework==3.11.0 Downloading https://files.pythonhosted.org/packages/be/5b/9bbde4395a1074d528d6d9e0cc161d3b99bd9d0b2b558ca919ffaa2e0068/djangorestframework-3.11.0-py3-none-any.whl (911kB) Collecting mysqlclient==1.4.6 Downloading https://files.pythonhosted.org/packages/d0/97/7326248ac8d5049968bf4ec708a5d3d4806e412a42e74160d7f266a3e03a/mysqlclient-1.4.6.tar.gz (85kB) ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-e9kt1otq/mysqlclient/setup.py'"'"'; __file__='"'"'/tmp/pip-install-e9kt1otq/mysqlclient/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-e9kt1otq/mysqlclient/pip-egg-info cwd: /tmp/pip-install-e9kt1otq/mysqlclient/ Complete output (12 lines): /bin/sh: mysql_config: not found /bin/sh: mariadb_config: not found /bin/sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-e9kt1otq/mysqlclient/setup.py", line 16, in <module> metadata, options = get_config() File "/tmp/pip-install-e9kt1otq/mysqlclient/setup_posix.py", line 61, in get_config libs = mysql_config("libs") File 
"/tmp/pip-install-e9kt1otq/mysqlclient/setup_posix.py", line 29, in mysql_config raise EnvironmentError("%s not found" % (_mysql_config_path,)) OSError: mysql_config not found ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. The command '/bin/sh -c pip install -r /requirements.txt' returned a non-zero code: 1
Run pip install -r requirements.txt after the apt-get install step, because building mysqlclient requires libmysqlclient-dev. Also, you're using the apt package manager with an Alpine base Linux image, which is incompatible (Alpine uses apk). I recommend taking python:3.7-slim, a Debian-based image which supports apt. FROM python:3.7-slim MAINTAINER Intersources Inc. ENV PYTHONUNBUFFERED 1 RUN apt-get update RUN apt-get install python3-dev default-libmysqlclient-dev gcc -y COPY ./requirements.txt /requirements.txt RUN pip install -r /requirements.txt RUN mkdir /app WORKDIR /app COPY ./app /app RUN adduser -D jeet USER jeet If you do need Alpine, modify the Dockerfile like this: FROM python:3.7-alpine MAINTAINER Intersources Inc. RUN apk update RUN apk add musl-dev mariadb-dev gcc RUN pip install mysqlclient RUN mkdir /app WORKDIR /app COPY ./app /app RUN adduser -D jeet USER jeet
7
19