question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
60,054,076 | 2020-2-4 | https://stackoverflow.com/questions/60054076/is-dataclass-a-good-fit-to-replace-a-dictionary | I use dictionaries as data structure a lot in my code. Instead of returning several value as Tuple like Python permits it : def do_smth(): [...] return val1, val2, val3 I prefer to use a dictionary with the advantage to have named keys. But with complex nested dictionary it's hard to navigate inside it. When I was coding with JS several years ago I liked dictionary too because I could call sub part like thing.stuff.foo and the IDE helped me with the structure. I just discover the new DataClass in python and I'm not sure about the reason of this except to replace a dictionary ? For what I have read a DataClass cannot have function inside and the initialization of its arguments is simplified. I would like to have comments about this, how do you use a DataClass, or about dictionary in python. | Dataclasses are more of a replacement for NamedTuples, then dictionaries. Whilst NamedTuples are designed to be immutable, dataclasses can offer that functionality by setting frozen=True in the decorator, but provide much more flexibility overall. If you are into type hints in your Python code, they really come into play. The other advantage is like you said - complex nested dictionaries. You can define Dataclasses as your types, and represent them within Dataclasses in a clear and concise way. Consider the following: @dataclass class City: code: str population: int @dataclass class Country: code: str currency: str cities: List[City] @dataclass class Locations: countries: List[Country] You can then write functions where you annotate the function param with dataclass name as a type hint and access it's attributes (similar to passing in a dictionary and accessing it's keys), or alternatively construct the dataclass and output it i.e. def get_locations(....) -> Locations: .... It makes the code very readable as opposed a large complicated dictionary. You can also set defaults, which is not something that is (edit: WAS prior to 3.7) not allowed in NamedTuples but is allowed in dictionaries. @dataclass class Stock: quantity: int = 0 You can also control whether you want the dataclass to be ordered etc in the decorator just like whether want it to be frozen, whereas normal dictionaries are not ordered (edit: WAS prior to 3.7). See here for more information You get all the benefits of object comparison if you want them i.e. __eq__() etc. They also by default come with __init__ and __repr__ so you don't have to type out those methods manually like with normal classes. There is also substantially more control over fields, allowing metadata etc. And lastly you can convert it into a dictionary at the end by importing from dataclasses import dataclass asdict Update (Aug 2023): Thanks for the comments! Have edited to clarify those features from 3.7 that I misrepresented. Also wanted to add some further information whilst I'm here: For what I have read a DataClass cannot have function inside and the initialization of its arguments is simplified. Just a note... You can bind methods to a dataclass and by default __init__ is constructed for you but I believe this can be disabled using @dataclass(init=False) which will give the ability to construct the object and then modify the attribute (my_var = MyClass(); my_var.my_field = 42. However I have found the __post_init__ method very handy, and there is the ability to suspend a specific attribute from automatically initialising to give more control i.e. 
from the docs @dataclass class C: a: float b: float c: float = field(init=False) def __post_init__(self): self.c = self.a + self.b Another useful aspect to the __post_init__ is to make assertions of the value. Type checking on init is performed only to evaluate whether any Class Variables are defined, as they are excluded as fields but can be leveraged by internal methods i.e. from typing import ClassVar @dataclass class Lamp: valid_sockets: ClassVar[set] = { 'edison_screw', 'bayonet' } valid_min_wattage: ClassVar[int] = 40 valid_max_wattage: ClassVar[int] = 200 height_cm: int socket: str wattage: int def __post_init__(self) -> None: assert self._is_valid_wattage(), f'Lamp requires {self.valid_min_wattage}-{self.valid_max_wattage}W bulb' assert self._is_valid_socket(), f'Bulb must be one of {self.valid_sockets}' def _is_valid_socket(self) -> bool: return self.socket.lower() in self.valid_sockets def _is_valid_wattage(self) -> bool: return (self.wattage > self.valid_min_wattage) and ( self.wattage < self.valid_max_wattage) In [27]: l = Lamp(50, 'bayonet', 80) In [28]: print(repr(l)) Lamp(height_cm=50, socket='bayonet', wattage=80) In [29]: l = Lamp(50, 'bayonet', 300) --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) Cell In [29], line 1 ----> 1 l = Lamp(50, 'bayonet', 300) File <string>:6, in __init__(self, height_cm, socket, wattage) Cell In [25], line 11, in Lamp.__post_init__(self) 10 def __post_init__(self) -> None: ---> 11 assert self._is_valid_wattage(), f'Lamp requires {self.valid_min_wattage}-{self.valid_max_wattage}W bulb' 12 assert self._is_valid_socket(), f'Bulb must be one of {self.valid_sockets}' AssertionError: Lamp requires 40-200W bulb | 15 | 24 |
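The answer above mentions converting a dataclass back into a dictionary with `asdict`, but never shows it. Below is a minimal illustrative sketch (not part of the original answer, sample data invented) that reuses the nested City/Country/Locations classes, shows the dotted attribute access the question asks about, and round-trips the structure through `asdict`.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class City:
    code: str
    population: int

@dataclass
class Country:
    code: str
    currency: str
    cities: List[City]

@dataclass
class Locations:
    countries: List[Country]

# Invented sample data, just to exercise the nested structure
locations = Locations(countries=[
    Country(code="GB", currency="GBP", cities=[City(code="LDN", population=8_900_000)]),
])

# Dotted attribute access, similar to the thing.stuff.foo style mentioned in the question
print(locations.countries[0].cities[0].code)  # LDN

# asdict() recursively converts the dataclass tree back into plain dicts and lists
print(asdict(locations))
```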
60,018,578 | 2020-2-1 | https://stackoverflow.com/questions/60018578/what-does-model-eval-do-in-pytorch | When should I use .eval()? I understand it is supposed to allow me to "evaluate my model". How do I turn it back off for training? Example training code using .eval(). | model.eval() is a kind of switch for some specific layers/parts of the model that behave differently during training and inference (evaluating) time. For example, Dropouts Layers, BatchNorm Layers etc. You need to turn them off during model evaluation, and .eval() will do it for you. In addition, the common practice for evaluating/validation is using torch.no_grad() in pair with model.eval() to turn off gradients computation: # evaluate model: model.eval() with torch.no_grad(): ... out_data = model(data) ... BUT, don't forget to turn back to training mode after eval step: # training step ... model.train() ... | 292 | 399 |
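As a small supplement to the answer above (my own sketch, not from the original post): `.eval()` and `.train()` simply flip each module's `training` flag, which is how you "turn it back off" before the next training epoch.

```python
import torch.nn as nn

# A toy model containing layers whose behaviour depends on train/eval mode
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.BatchNorm1d(4))

model.eval()
print(model.training)     # False
print(model[1].training)  # False -- .eval() propagates to every submodule (the Dropout here)

model.train()             # switch back before the next training epoch
print(model.training)     # True
```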
59,986,413 | 2020-1-30 | https://stackoverflow.com/questions/59986413/achieving-multiple-inheritance-using-python-dataclasses | I'm trying to use the new python dataclasses to create some mix-in classes (already as I write this I think it sounds like a rash idea), and I'm having some issues. Behold the example below: from dataclasses import dataclass @dataclass class NamedObj: name: str def __post_init__(self): print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj: number: int = 0 def __post_init__(self): print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") If I then try: nandn = NamedAndNumbered('n_and_n') print(nandn.name) print(nandn.number) I get NumberedObj __post_init__ NamedAndNumbered __post_init__ n_and_n 1 Suggesting it has run __post_init__ for NamedObj, but not for NumberedObj. What I would like is to have NamedAndNumbered run __post_init__ for both of its mix-in classes, Named and Numbered. One might think that it could be done if NamedAndNumbered had a __post_init__ like this: def __post_init__(self): super(NamedObj, self).__post_init__() super(NumberedObj, self).__post_init__() print("NamedAndNumbered __post_init__") But this just gives me an error AttributeError: 'super' object has no attribute '__post_init__' when I try to call NamedObj.__post_init__(). At this point I'm not entirely sure if this is a bug/feature with dataclasses or something to do with my probably-flawed understanding of Python's approach to inheritance. Could anyone lend a hand? | This: def __post_init__(self): super(NamedObj, self).__post_init__() super(NumberedObj, self).__post_init__() print("NamedAndNumbered __post_init__") doesn't do what you think it does. super(cls, obj) will return a proxy to the class after cls in type(obj).__mro__ - so, in your case, to object. And the whole point of cooperative super() calls is to avoid having to explicitely call each of the parents. The way cooperative super() calls are intended to work is, well, by being "cooperative" - IOW, everyone in the mro is supposed to relay the call to the next class (actually, the super name is a rather sad choice, as it's not about calling "the super class", but about "calling the next class in the mro"). IOW, you want each of your "composable" dataclasses (which are not mixins - mixins only have behaviour) to relay the call, so you can compose them in any order. A first naive implementation would look like: @dataclass class NamedObj: name: str def __post_init__(self): super().__post_init__() print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj: number: int = 0 def __post_init__(self): super().__post_init__() print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") BUT this doesn't work, since for the last class in the mro (here NamedObj), the next class in the mro is the builtin object class, which doesn't have a __post_init__ method. 
The solution is simple: just add a base class that defines this method as a noop, and make all your composable dataclasses inherit from it: class Base(object): def __post_init__(self): # just intercept the __post_init__ calls so they # aren't relayed to `object` pass @dataclass class NamedObj(Base): name: str def __post_init__(self): super().__post_init__() print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj(Base): number: int = 0 def __post_init__(self): super().__post_init__() print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") | 21 | 27 |
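Assuming the `Base`/`NamedObj`/`NumberedObj`/`NamedAndNumbered` definitions from the answer above are in scope, this short usage sketch (added here for illustration) confirms that both parent `__post_init__` methods now run, because the cooperative `super()` calls simply walk the MRO.

```python
nandn = NamedAndNumbered("n_and_n")
# NamedObj __post_init__
# NumberedObj __post_init__
# NamedAndNumbered __post_init__

print(nandn.name)    # Name: n_and_n
print(nandn.number)  # 1

# The chain of super() calls follows this method resolution order:
print([cls.__name__ for cls in NamedAndNumbered.__mro__])
# ['NamedAndNumbered', 'NumberedObj', 'NamedObj', 'Base', 'object']
```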
60,050,586 | 2020-2-4 | https://stackoverflow.com/questions/60050586/pytorch-change-the-learning-rate-based-on-number-of-epochs | When I set the learning rate and find the accuracy cannot increase after training few epochs optimizer = optim.Adam(model.parameters(), lr = 1e-4) n_epochs = 10 for i in range(n_epochs): // some training here If I want to use a step decay: reduce the learning rate by a factor of 10 every 5 epochs, how can I do so? | You can use learning rate scheduler torch.optim.lr_scheduler.StepLR import torch.optim.lr_scheduler.StepLR scheduler = StepLR(optimizer, step_size=5, gamma=0.1) Decays the learning rate of each parameter group by gamma every step_size epochs see docs here Example from docs # Assuming optimizer uses lr = 0.05 for all groups # lr = 0.05 if epoch < 30 # lr = 0.005 if 30 <= epoch < 60 # lr = 0.0005 if 60 <= epoch < 90 # ... scheduler = StepLR(optimizer, step_size=30, gamma=0.1) for epoch in range(100): train(...) validate(...) scheduler.step() Example: import torch import torch.optim as optim optimizer = optim.SGD([torch.rand((2,2), requires_grad=True)], lr=0.1) scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) for epoch in range(1, 21): scheduler.step() print('Epoch-{0} lr: {1}'.format(epoch, optimizer.param_groups[0]['lr'])) if epoch % 5 == 0:print() Epoch-1 lr: 0.1 Epoch-2 lr: 0.1 Epoch-3 lr: 0.1 Epoch-4 lr: 0.1 Epoch-5 lr: 0.1 Epoch-6 lr: 0.010000000000000002 Epoch-7 lr: 0.010000000000000002 Epoch-8 lr: 0.010000000000000002 Epoch-9 lr: 0.010000000000000002 Epoch-10 lr: 0.010000000000000002 Epoch-11 lr: 0.0010000000000000002 Epoch-12 lr: 0.0010000000000000002 Epoch-13 lr: 0.0010000000000000002 Epoch-14 lr: 0.0010000000000000002 Epoch-15 lr: 0.0010000000000000002 Epoch-16 lr: 0.00010000000000000003 Epoch-17 lr: 0.00010000000000000003 Epoch-18 lr: 0.00010000000000000003 Epoch-19 lr: 0.00010000000000000003 Epoch-20 lr: 0.00010000000000000003 More on How to adjust Learning Rate - torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs. | 28 | 53 |
59,985,035 | 2020-1-30 | https://stackoverflow.com/questions/59985035/does-there-exist-any-alternative-of-logspace-in-julia-v1-3-1 | Background and Existing Solutions I am porting some Python code into Julia (v1.3.1), and I have run into an issue with trying to reproduce the code into as easily readable code in Julia. In Python (using numpy), we have created a 101-element logarithmically spaced sequence from 0.001 to 1000: >>> X = numpy.logspace( -3, 3, 101 ) array([1.00000000e-03, 1.14815362e-03, 1.31825674e-03, ..., 1.00000000e+03]) Implementing this in Julia with PyCall would of course work like this: julia> using PyCall julia> numpy = pyimport("numpy") julia> X_python = numpy.logspace( -3, 3, 101 ) 101-element Array{Float64,1}: 0.001 0.0011481536214968829 0.0013182567385564075 ⋮ 1000.0 But I want to implement this in pure Julia for my current project. Not finding the same function from the Julia documentation, after some searching I came across an older documentation entry for logspace here. I then came across this Github pull request for deprecating logspace into its definition, so currently it seems that this is the way to create the logarithmically spaced sequence: julia> X_julia = 10 .^ range( -3, 3, length = 101 ) 101-element Array{Float64,1}: 0.001 0.0011481536214968829 0.0013182567385564075 ⋮ 1000.0 TL;DR / The Actual Question julia> LinRange(1e-3, 1e3, 101) 101-element LinRange{Float64}: 0.001,10.001,20.001,…,1000.0 Since there currently exists a simple and easy-to-read function, LinRange, for creating linear sequences (as seen above), does there exist a similar function, something like LogRange, for logarithmic sequences? I am going for simplicity and improved readability in this project, so while broadcasting a range into the exponent of 10 works from the mathematical point of view, something like LogRange(1e-3, 1e3, 101) would be easier for a beginner or part-time programmer to understand. EDIT: When the limits of the sequence are integer exponents of 10, the code is fairly clear, but when the limits are floats, the difference in readability between LogRange() and 10 .^ () becomes more apparent: julia> 10 .^ range( log10(1.56e-2), log10(3.62e4), length = 101 ) julia> LogRange( 1.56e-2, 3.62e4, 101 ) | You can just use range For Log-spaced values ∈ [1.0e-10, 1.0e10]: 10 .^ range(-10, stop=10, length=101) | 7 | 1 |
59,962,902 | 2020-1-29 | https://stackoverflow.com/questions/59962902/max-retries-exceeded-with-url-caused-by-proxyerror | I wanted to get a proxy list from this web page, https://free-proxy-list.net/, but I am stuck on this error and don't know how to fix it. requests.exceptions.ProxyError: HTTPSConnectionPool(host='free-proxy-list.net', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x00000278BFFA1EB0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))) By the way, this is my related code: import urllib import requests from bs4 import BeautifulSoup from fake_useragent import UserAgent ua = UserAgent(cache=False) header = { "User-Agent": str(ua.msie) } proxy = { "https": "http://95.66.151.101:8080" } urls = "https://free-proxy-list.net/" res = requests.get(urls, proxies=proxy) soup = BeautifulSoup(res.text,'lxml') I also tried to scrape other websites, but I realized that this is not the way. | You're using https as the key in the proxies dict when your proxy is an http proxy. Proxies should always follow this format: for an http proxy, {"http": "Http Proxy"}; for an https proxy, {"https": "Https Proxy"}; and for the User-Agent, {"User-Agent": "Opera/9.80 (X11; Linux x86_64; U; de) Presto/2.2.15 Version/10.00"}. Example: import requests requests.get("https://example.com", proxies={"http":"http://95.66.151.101:8080"}, headers={"User-Agent": "Opera/9.80 (X11; Linux x86_64; U; de) Presto/2.2.15 Version/10.00"}) The fake_useragent module you imported (from fake_useragent import UserAgent) is irrelevant and unnecessary here. Extra: the error could also have happened because the proxy isn't valid or responded improperly. If you are looking for free lists of proxies, consider checking out these sources: https://pastebin.com/raw/VJwVkqRT https://proxyscrape.com/free-proxy-list https://www.freeproxylists.net/ | 10 | 5 |
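As a hedged addition (not part of the accepted answer): a dead free proxy produces exactly the traceback from the question, so it helps to map both URL schemes to the proxy and catch `ProxyError` explicitly. The proxy address below is the one quoted in the question and is only illustrative.

```python
import requests

# Both schemes tunneled through the same HTTP proxy (address taken from the question)
proxies = {
    "http": "http://95.66.151.101:8080",
    "https": "http://95.66.151.101:8080",
}
headers = {"User-Agent": "Mozilla/5.0"}

try:
    res = requests.get("https://free-proxy-list.net/", proxies=proxies,
                       headers=headers, timeout=10)
    res.raise_for_status()
    print(res.status_code)
except requests.exceptions.ProxyError:
    print("Proxy is dead or refusing connections -- try another one from the list")
```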
59,979,467 | 2020-1-30 | https://stackoverflow.com/questions/59979467/accessing-microsoft-sharepoint-files-and-data-using-python | I am using Microsoft sharepoint. I have an url, by using that url I need to get total data like photos,videos,folders,subfolders,files,posts etc... and I need to store those data in database(Sql server). I am using python. So,Please anyone suggest me how to do this and I am beginner for accessing sharepoint and working this sort of things. | Here's the starter code for connecting to share point through Python and accessing the list of files, folders and individual file contents of Sharepoint as well. You can build on top of this to suit your needs. Please note that this method works for public Sharepoint sites that are accessible through internet. For Organisation restricted Sharepoint sites that are hosted on a Company's intranet, I haven't tested this code out. You will have to modify the link to the Sharepoint file a bit since you cannot directly access a Sharepoint file in Python using the URL address of that file which is copied from the web browser. from office365.runtime.auth.authentication_context import AuthenticationContext from office365.sharepoint.client_context import ClientContext from office365.sharepoint.files.file import File ####inputs######## # This will be the URL that points to your sharepoint site. # Make sure you change only the parts of the link that start with "Your" url_shrpt = 'https://YourOrganisation.sharepoint.com/sites/YourSharepointSiteName' username_shrpt = 'YourUsername' password_shrpt = 'YourPassword' folder_url_shrpt = '/sites/YourSharepointSiteName/Shared%20Documents/YourSharepointFolderName/' ####################### ###Authentication###For authenticating into your sharepoint site### ctx_auth = AuthenticationContext(url_shrpt) if ctx_auth.acquire_token_for_user(username_shrpt, password_shrpt): ctx = ClientContext(url_shrpt, ctx_auth) web = ctx.web ctx.load(web) ctx.execute_query() print('Authenticated into sharepoint as: ',web.properties['Title']) else: print(ctx_auth.get_last_error()) ############################ ####Function for extracting the file names of a folder in sharepoint### ###If you want to extract the folder names instead of file names, you have to change "sub_folders = folder.files" to "sub_folders = folder.folders" in the below function global print_folder_contents def print_folder_contents(ctx, folder_url): try: folder = ctx.web.get_folder_by_server_relative_url(folder_url) fold_names = [] sub_folders = folder.files #Replace files with folders for getting list of folders ctx.load(sub_folders) ctx.execute_query() for s_folder in sub_folders: fold_names.append(s_folder.properties["Name"]) return fold_names except Exception as e: print('Problem printing out library contents: ', e) ###################################################### # Call the function by giving your folder URL as input filelist_shrpt=print_folder_contents(ctx,folder_url_shrpt) #Print the list of files present in the folder print(filelist_shrpt) Now that we are able to retrieve and print the list of files present in a particular folder in Sharepoint, below is the code to access the file contents of a particular file and save it to local disk having known the file name and path in Sharepoint. #Specify the URL of the sharepoint file. 
Remember to change only the the parts of the link that start with "Your" file_url_shrpt = '/sites/YourSharepointSiteName/Shared%20Documents/YourSharepointFolderName/YourSharepointFileName' #Load the sharepoint file content to "response" variable response = File.open_binary(ctx, file_url_shrpt) #Save the file to your offline path with open("Your_Offline_File_Path", 'wb') as output_file: output_file.write(response.content) You can refer to the following links for connecting to SQL server and storing the contents in tables: Connecting to Microsoft SQL server using Python https://datatofish.com/how-to-connect-python-to-sql-server-using-pyodbc/ | 15 | 22 |
60,017,052 | 2020-2-1 | https://stackoverflow.com/questions/60017052/decompose-for-time-series-valueerror-you-must-specify-a-period-or-x-must-be | I have some problems executing an additive model right. I have the following data frame: And when I run this code: import statsmodels as sm import statsmodels.api as sm decomposition = sm.tsa.seasonal_decompose(df, model = 'additive') fig = decomposition.plot() matplotlib.rcParams['figure.figsize'] = [9.0,5.0] I got that message: ValueError: You must specify a period or x must be a pandas object with a >DatetimeIndex with a freq not set to None What should I do in order to get that example: The screen above I took from this place | Having the same ValueError, this is just the result of some testing and little research on my own, without the claim to be complete or professional about it. Please comment or answer whoever finds something wrong. Of course, your data should be in the right order of the index values, which you would assure with df.sort_index(inplace=True), as you state it in your answer. This is not wrong as such, though the error message is not about the sort order, and I have checked this: the error does not go away in my case when I sort the index of a huge dataset I have at hand. It is true, I also have to sort the df.index, but the decompose() can handle unsorted data as well where items jump here and there in time: then you simply get a lot of blue lines from left to the right and back, until the whole graph is full of it. What is more, usually, the sorting is already in the right order anyway. In my case, sorting does not help fixing the error. Thus I also doubt that index sorting has fixed the error in your case, because: what does the error actually say? ValueError: You must specify: [either] a period or x must be a pandas object with a DatetimeIndex with a freq not set to None Before all, in case you have a list column so that your time series is nested up to now, see Convert pandas df with data in a "list column" into a time series in long format. Use three columns: [list of data] + [timestamp] + [duration] for details how to unnest a list column. This would be needed for both 1.) and 2.). Details of 1.: "You must specify [either] a period ..." Definition of period "period, int, optional" from https://www.statsmodels.org/stable/generated/statsmodels.tsa.seasonal.seasonal_decompose.html: Period of the series. Must be used if x is not a pandas object or if the index of x does not have a frequency. Overrides default periodicity of x if x is a pandas object with a timeseries index. The period parameter that is set with an integer means the number of cycles which you expect to be in the data. If you have a df with 1000 rows with a list column in it (call it df_nested), and each list with for example 100 elements, then you will have 100 elements per cycle. It is probably smart taking period = len(df_nested) (= number of cycles) in order to get the best split of seasonality and trend. If your elements per cycle vary over time, other values may be better. I am not sure about how to rightly set the parameter, therefore the question statsmodels seasonal_decompose(): What is the right “period of the series” in the context of a list column (constant vs. varying number of items) on Cross Validated which is not yet answered. The "period" parameter of option 1.) has a big advantage over option 2.). 
Though it uses the time index (DatetimeIndex) for its x-axis, it does not require an item to hit the frequency exactly, in contrast to option 2.). Instead, it just joins together whatever is in a row, with the advantage that you do not need to fill any gaps: the last value of the previous event is just joined with the next value of the following event, whether it is already in the next second or on the next day. What is the max possible "period" value? In case you have a list column (call the df "df_nested" again), you should first unnest the list column to a normal column. The max period is len(df_unnested)/2. Example1: 20 items in x (x is the amount of all items of df_unnested) can maximally have a period = 10. Example2: Having the 20 items and taking period=20 instead, this throws the following error: ValueError: x must have 2 complete cycles requires 40 observations. x only has 20 observation(s) Another side-note: To get rid of the error in question, period = 1 should already take it away, but for time series analysis, "=1" does not reveal anything new, every cycle is just 1 item then, the trend is the same as the original data, the seasonality is 0, and the residuals are always 0. #### Example borrowed from Convert pandas df with data in a "list column" into a time series in long format. Use three columns: [list of data] + [timestamp] + [duration] df_test = pd.DataFrame({'timestamp': [1462352000000000000, 1462352100000000000, 1462352200000000000, 1462352300000000000], 'listData': [[1,2,1,9], [2,2,3,0], [1,3,3,0], [1,1,3,9]], 'duration_sec': [3.0, 3.0, 3.0, 3.0]}) tdi = pd.DatetimeIndex(df_test.timestamp) df_test.set_index(tdi, inplace=True) df_test.drop(columns='timestamp', inplace=True) df_test.index.name = 'datetimeindex' df_test = df_test.explode('listData') sizes = df_test.groupby(level=0)['listData'].transform('size').sub(1) duration = df_test['duration_sec'].div(sizes) df_test.index += pd.to_timedelta(df_test.groupby(level=0).cumcount() * duration, unit='s') The resulting df_test['listData'] looks as follows: 2016-05-04 08:53:20 1 2016-05-04 08:53:21 2 2016-05-04 08:53:22 1 2016-05-04 08:53:23 9 2016-05-04 08:55:00 2 2016-05-04 08:55:01 2 2016-05-04 08:55:02 3 2016-05-04 08:55:03 0 2016-05-04 08:56:40 1 2016-05-04 08:56:41 3 2016-05-04 08:56:42 3 2016-05-04 08:56:43 0 2016-05-04 08:58:20 1 2016-05-04 08:58:21 1 2016-05-04 08:58:22 3 2016-05-04 08:58:23 9 Now have a look at different period's integer values. period = 1: result_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=1) plt.rcParams.update({'figure.figsize': (5,5)}) result_add.plot().suptitle('Additive Decompose', fontsize=22) plt.show() period = 2: result_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=2) plt.rcParams.update({'figure.figsize': (5,5)}) result_add.plot().suptitle('Additive Decompose', fontsize=22) plt.show() If you take a quarter of all items as one cycle which is 4 (out of 16 items) here. period = 4: result_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=int(len(df_test)/4)) plt.rcParams.update({'figure.figsize': (5,5)}) result_add.plot().suptitle('Additive Decompose', fontsize=22) plt.show() Or if you take the max possible size of a cycle which is 8 (out of 16 items) here. 
period = 8: result_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=int(len(df_test)/2)) plt.rcParams.update({'figure.figsize': (5,5)}) result_add.plot().suptitle('Additive Decompose', fontsize=22) plt.show() Have a look at how the y-axes change their scale. #### You will increase the period integer according to your needs. The max in your case of the question: sm.tsa.seasonal_decompose(df, model = 'additive', period = int(len(df)/2)) Details of 2.: "... or x must be a pandas object with a DatetimeIndex with a freq not set to None" To get x to be a DatetimeIndex with a freq not set to None, you need to assign the freq of the DatetimeIndex using .asfreq('?') with ? being your choice among a wide range of offset aliases from https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases. In your case, this option 2. is the better suited as you seem to have a list without gaps. Your monthly data then should probably be introduced as "month start frequency" --> "MS" as offset alias: sm.tsa.seasonal_decompose(df.asfreq('MS'), model = 'additive') See How to set frequency with pd.to_datetime()? for more details, also about how you would deal with gaps. If you have data that is highly scattered in time so that you have too many gaps to fill or if gaps in time are nothing important, option 1 of using "period" is probably the better choice. In my example case of df_test, option 2. is not good. The data is totally scattered in time, and if I take a second as the frequency, you get this: Output of df_test.asfreq('s') (=frequency in seconds): 2016-05-04 08:53:20 1 2016-05-04 08:53:21 2 2016-05-04 08:53:22 1 2016-05-04 08:53:23 9 2016-05-04 08:53:24 NaN ... 2016-05-04 08:58:19 NaN 2016-05-04 08:58:20 1 2016-05-04 08:58:21 1 2016-05-04 08:58:22 3 2016-05-04 08:58:23 9 Freq: S, Name: listData, Length: 304, dtype: object You see here that although my data is only 16 rows, introducing a frequency in seconds forces the df to be 304 rows only to reach out from "08:53:20" till "08:58:23", 288 gaps are caused here. What is more, here you have to hit the exact time. If you have 0.1 or even 0.12314 seconds as your real frequency instead, you will not hit most of the items with your index. Here an example with min as the offset alias, df_test.asfreq('min'): 2016-05-04 08:53:20 1 2016-05-04 08:54:20 NaN 2016-05-04 08:55:20 NaN 2016-05-04 08:56:20 NaN 2016-05-04 08:57:20 NaN 2016-05-04 08:58:20 1 We see that only the first and the last minute are filled at all, the rest is not hit. Taking the day as as the offset alias, df_test.asfreq('d'): 2016-05-04 08:53:20 1 We see that you get only the first row as the resulting df, since there is only one day covered. It will give you the first item found, the rest is dropped. The end of it all Putting together all of this, in your case, take option 2., while in my example case of df_test, option 1 is needed. | 34 | 48 |
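To make option 2 concrete, here is a minimal sketch with invented monthly data (not from the original post): once the index carries an explicit `MS` frequency, `seasonal_decompose` needs no `period` argument.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Invented monthly series: a linear trend plus a yearly seasonal component
idx = pd.date_range("2016-01-01", periods=48, freq="MS")
values = np.arange(48) + 10 * np.sin(2 * np.pi * idx.month / 12)
series = pd.Series(values, index=idx)

# Option 2: the index has freq='MS', so no `period` argument is required
result = seasonal_decompose(series.asfreq("MS"), model="additive")
result.plot()
plt.show()
```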
60,024,262 | 2020-2-2 | https://stackoverflow.com/questions/60024262/error-converting-object-string-to-int32-typeerror-object-cannot-be-converted | I get the following error while trying to convert an object (string) column in Pandas to Int32, the integer type that allows for NA values. df.column = df.column.astype('Int32') TypeError: object cannot be converted to an IntegerDtype I'm using pandas version: 0.25.3 | It's a known bug, as explained here. The workaround is to convert the column to float first and then to Int32. Make sure you strip whitespace from your column before you do the conversion: df.column = df.column.str.strip() Then do the conversion: df.column = df.column.astype('float') # first convert to float before int df.column = df.column.astype('Int32') or, more simply: df.column = df.column.astype('float').astype('Int32') # or Int64 | 24 | 41 |
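Below is a self-contained sketch of that workaround with invented data, assuming a reasonably recent pandas with nullable integer support; `pd.to_numeric(..., errors='coerce')` is an alternative first step if some strings are not numeric at all.

```python
import pandas as pd

# Invented column: padded strings plus a missing value
df = pd.DataFrame({"column": [" 1", "2 ", None, " 30 "]})

df["column"] = df["column"].str.strip()
df["column"] = df["column"].astype("float").astype("Int32")

print(df["column"].dtype)     # Int32
print(df["column"].tolist())  # [1, 2, <NA>, 30]
```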
59,977,052 | 2020-1-29 | https://stackoverflow.com/questions/59977052/shooting-a-bullet-in-pygame-in-the-direction-of-mouse | I just cant figure out why my bullet is not working. I made a bullet class and here it is: class Bullet: def __init__(self): self.x = player.x self.y = player.y self.height = 7 self.width = 2 self.bullet = pygame.Surface((self.width, self.height)) self.bullet.fill((255, 255, 255)) Now I added several functions in my game class and here is the new code: class Game: def __init__(self): self.bullets = [] def shoot_bullet(self): if self.bullets: for bullet in self.bullets: rise = mouse.y - player.y run = mouse.x - player.x angle = math.atan2(rise, run) bullet.x += math.cos(angle) bullet.y += math.sin(angle) pygame.transform.rotate(bullet.bullet, -math.degrees(angle)) D.blit(bullet.bullet, (bullet.x, bullet.y)) def generate_bullet(self): if mouse.is_pressed(): self.bullets.append(Bullet()) What I was expecting the code to do was a Bullet() would get added to game.bullets every time I pressed the mouse button, then game.shoot_bullet would calculate the angle between the player and the mouse and shoot the bullet accordingly in the direction of the mouse. However, the result is a complete mess and the bullets actually don't rotate and don't move. They get generated and move weirdly to the left of the screen. I am not sure if I have messed up something or the method I have used is completely wrong. | First of all pygame.transform.rotate does not transform the object itself, but creates a new rotated surface and returns it. If you want to fire a bullet in a certain direction, the direction is defined the moment the bullet is fired, but it does not change continuously. When the bullet is fired, set the starting position of the bullet and calculate the direction vector to the mouse position: self.pos = (x, y) mx, my = pygame.mouse.get_pos() self.dir = (mx - x, my - y) The direction vector should not depend on the distance to the mouse, but it has to be a Unit vector. Normalize the vector by dividing by the Euclidean distance length = math.hypot(*self.dir) if length == 0.0: self.dir = (0, -1) else: self.dir = (self.dir[0]/length, self.dir[1]/length) Compute the angle of the vector and rotate the bullet. In general, the angle of a vector can be computed by atan2(y, x). 
The y-axis needs to be reversed (atan2(-y, x)) as the y-axis generally points up, but in the PyGame coordinate system the y-axis points down (see How to know the angle between two points?): angle = math.degrees(math.atan2(-self.dir[1], self.dir[0])) self.bullet = pygame.Surface((7, 2)).convert_alpha() self.bullet.fill((255, 255, 255)) self.bullet = pygame.transform.rotate(self.bullet, angle) To update the position of the bullet, it is sufficient to scale the direction (by a velocity) and add it to the position of the bullet: self.pos = (self.pos[0]+self.dir[0]*self.speed, self.pos[1]+self.dir[1]*self.speed) To draw the rotated bullet in the correct position, take the bounding rectangle of the rotated bullet and set the center point with self.pos (see How do I rotate an image around its center using PyGame?): bullet_rect = self.bullet.get_rect(center = self.pos) surf.blit(self.bullet, bullet_rect) See also Shoot bullets towards target or mouse Minimal example: repl.it/@Rabbid76/PyGame-FireBulletInDirectionOfMouse import pygame import math pygame.init() window = pygame.display.set_mode((500, 500)) clock = pygame.time.Clock() class Bullet: def __init__(self, x, y): self.pos = (x, y) mx, my = pygame.mouse.get_pos() self.dir = (mx - x, my - y) length = math.hypot(*self.dir) if length == 0.0: self.dir = (0, -1) else: self.dir = (self.dir[0]/length, self.dir[1]/length) angle = math.degrees(math.atan2(-self.dir[1], self.dir[0])) self.bullet = pygame.Surface((7, 2)).convert_alpha() self.bullet.fill((255, 255, 255)) self.bullet = pygame.transform.rotate(self.bullet, angle) self.speed = 2 def update(self): self.pos = (self.pos[0]+self.dir[0]*self.speed, self.pos[1]+self.dir[1]*self.speed) def draw(self, surf): bullet_rect = self.bullet.get_rect(center = self.pos) surf.blit(self.bullet, bullet_rect) bullets = [] pos = (250, 250) run = True while run: clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.MOUSEBUTTONDOWN: bullets.append(Bullet(*pos)) for bullet in bullets[:]: bullet.update() if not window.get_rect().collidepoint(bullet.pos): bullets.remove(bullet) window.fill(0) pygame.draw.circle(window, (0, 255, 0), pos, 10) for bullet in bullets: bullet.draw(window) pygame.display.flip() | 7 | 14 |
60,003,444 | 2020-1-31 | https://stackoverflow.com/questions/60003444/typeddict-when-keys-have-invalid-names | If I have a key in a dictionary with an invalid identifier, such as A(2). How can I create a TypedDict with this field? E.g from typing import TypedDict class RandomAlphabet(TypedDict): A(2): str is not valid Python code, resulting in the error: SyntaxError: illegal target for annotation The same problem is with reserved keywords: class RandomAlphabet(TypedDict): return: str throws: SyntaxError: invalid syntax | According to PEP 589 you can use alternative syntax to create a TypedDict as follows: Movie = TypedDict('Movie', {'name': str, 'year': int}) So, in your case, you could write: from typing import TypedDict RandomAlphabet = TypedDict('RandomAlphabet', {'A(2)': str}) or for the second example: RandomAlphabet = TypedDict('RandomAlphabet', {'return': str}) PEP 589 warns, though: This syntax doesn't support inheritance, however, and there is no way to have both required and non-required fields in a single type. The motivation for this is keeping the backwards compatible syntax as simple as possible while covering the most common use cases. | 48 | 65 |
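A short usage sketch (added for illustration): because 'A(2)' and 'return' can never be attribute names, values in the resulting TypedDict are read and written with ordinary subscription.

```python
from typing import TypedDict

RandomAlphabet = TypedDict('RandomAlphabet', {'A(2)': str, 'return': str})

entry: RandomAlphabet = {'A(2)': 'alpha', 'return': 'omega'}
entry['return'] = 'zeta'  # keys that are not valid identifiers still work as dict keys
print(entry['A(2)'], entry['return'])
```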
60,031,112 | 2020-2-2 | https://stackoverflow.com/questions/60031112/how-do-i-make-a-pdf-searchable-for-a-flask-search-application | I have been doing research for a very important personal project. I would like to create a Flask Search Application that allows me to search for content across 100 Plus PDF files. I have found Some information around A ElasticSearch Lib that works well with flask. #!/usr/bin/env python3 #-*- coding: utf-8 -*- # import libraries to help read and create PDF import PyPDF2 from fpdf import FPDF import base64 import json from flask import Flask, jsonify, request, render_template, json from datetime import datetime import pandas as pd # import the Elasticsearch low-level client library from elasticsearch import Elasticsearch # create a new client instance of Elasticsearch elastic_client = Elasticsearch(hosts=["localhost"]) es = Elasticsearch("http://localhost:9200/") app = Flask(__name__) # create a new PDF object with FPDF pdf = FPDF() # use an iterator to create 10 pages for page in range(10): pdf.add_page() pdf.set_font("Arial", size=14) pdf.cell(150, 12, txt="Object Rocket ROCKS!!", ln=1, align="C") # output all of the data to a new PDF file pdf.output("object_rocket.pdf") ''' read_pdf = PyPDF2.PdfFileReader("object_rocket.pdf") page = read_pdf.getPage(0) page_mode = read_pdf.getPageMode() page_text = page.extractText() print (type(page_text)) ''' #with open(path, 'rb') as file: # get the PDF path and read the file file = "Sheet3.pdf" read_pdf = PyPDF2.PdfFileReader(file, strict=False) #print (read_pdf) # get the read object's meta info pdf_meta = read_pdf.getDocumentInfo() # get the page numbers num = read_pdf.getNumPages() print ("PDF pages:", num) # create a dictionary object for page data all_pages = {} # put meta data into a dict key all_pages["meta"] = {} # Use 'iteritems()` instead of 'items()' for Python 2 for meta, value in pdf_meta.items(): print (meta, value) all_pages["meta"][meta] = value # iterate the page numbers for page in range(num): data = read_pdf.getPage(page) #page_mode = read_pdf.getPageMode() # extract the page's text page_text = data.extractText() # put the text data into the dict all_pages[page] = page_text # create a JSON string from the dictionary json_data = json.dumps(all_pages) #print ("\nJSON:", json_data) # convert JSON string to bytes-like obj bytes_string = bytes(json_data, 'utf-8') #print ("\nbytes_string:", bytes_string) # convert bytes to base64 encoded string encoded_pdf = base64.b64encode(bytes_string) encoded_pdf = str(encoded_pdf) #print ("\nbase64:", encoded_pdf) # put the PDF data into a dictionary body to pass to the API request body_doc = {"data": encoded_pdf} # call the index() method to index the data result = elastic_client.index(index="pdf", doc_type="_doc", id="42", body=body_doc) # print the returned sresults #print ("\nindex result:", result['result']) # make another Elasticsearch API request to get the indexed PDF result = elastic_client.get(index="pdf", doc_type='_doc', id=42) # print the data to terminal result_data = result["_source"]["data"] #print ("\nresult_data:", result_data, '-- type:', type(result_data)) # decode the base64 data (use to [:] to slice off # the 'b and ' in the string) decoded_pdf = base64.b64decode(result_data[2:-1]).decode("utf-8") #print ("\ndecoded_pdf:", decoded_pdf) # take decoded string and make into JSON object json_dict = json.loads(decoded_pdf) #print ("\njson_str:", json_dict, "\n\ntype:", type(json_dict)) result2 = elastic_client.index(index="pdftext", doc_type="_doc", 
id="42", body=json_dict) # create new FPDF object pdf = FPDF() # build the new PDF from the Elasticsearch dictionary # Use 'iteritems()` instead of 'items()' for Python 2 """ for page, value in json_data: if page != "meta": # create new page pdf.add_page() pdf.set_font("Arial", size=14) # add content to page output = value + " -- Page: " + str(int(page)+1) pdf.cell(150, 12, txt=output, ln=1, align="C") else: # create the meta data for the new PDF for meta, meta_val in json_dict["meta"].items(): if "title" in meta.lower(): pdf.set_title(meta_val) elif "producer" in meta.lower() or "creator" in meta.lower(): pdf.set_creator(meta_val) """ # output the PDF object's data to a PDF file #pdf.output("object_rocket_from_elaticsearch.pdf" ) @app.route('/', methods=['GET']) def index(): return jsonify(json_dict) @app.route('/<id>', methods=['GET']) def index_by_id(id): return jsonify(json_dict[id]) """ @app.route('/insert_data', methods=['PUT']) def insert_data(): slug = request.form['slug'] title = request.form['title'] content = request.form['content'] body = { 'slug': slug, 'title': title, 'content': content, 'timestamp': datetime.now() } result = es.index(index='contents', doc_type='title', id=slug, body=body) return jsonify(result) """ app.run(port=5003, debug=True) ------Progress------ I now have a working solution with no front-end search capability: # Load_single_PDF_BY_PAGE_TO_index.py #!/usr/bin/env python3 #-*- coding: utf-8 -*- # import libraries to help read and create PDF import PyPDF2 from fpdf import FPDF import base64 from flask import Flask, jsonify, request, render_template, json from datetime import datetime import pandas as pd # import the Elasticsearch low-level client library from elasticsearch import Elasticsearch # create a new client instance of Elasticsearch elastic_client = Elasticsearch(hosts=["localhost"]) es = Elasticsearch("http://localhost:9200/") app = Flask(__name__) #with open(path, 'rb') as file: # get the PDF path and read the file file = "Sheet3.pdf" read_pdf = PyPDF2.PdfFileReader(file, strict=False) #print (read_pdf) # get the read object's meta info pdf_meta = read_pdf.getDocumentInfo() # get the page numbers num = read_pdf.getNumPages() print ("PDF pages:", num) # create a dictionary object for page data all_pages = {} # put meta data into a dict key all_pages["meta"] = {} # Use 'iteritems()` instead of 'items()' for Python 2 for meta, value in pdf_meta.items(): print (meta, value) all_pages["meta"][meta] = value x = 44 # iterate the page numbers for page in range(num): data = read_pdf.getPage(page) #page_mode = read_pdf.getPageMode() # extract the page's text page_text = data.extractText() # put the text data into the dict all_pages[page] = page_text body_doc2 = {"data": page_text} result3 = elastic_client.index(index="pdfclearn", doc_type="_doc", id=x, body=body_doc2) x += 1 The above code loads a single pdf into elasticsearch by page. 
from flask import Flask, jsonify, request,render_template from elasticsearch import Elasticsearch from datetime import datetime es = Elasticsearch("http://localhost:9200/") app = Flask(__name__) @app.route('/pdf', methods=['GET']) def index(): results = es.get(index='pdfclearn', doc_type='_doc', id='44') return jsonify(results['_source']) @app.route('/pdf/<id>', methods=['GET']) def index_by_id(id): results = es.get(index='pdfclearn', doc_type='_doc', id=id) return jsonify(results['_source']) @app.route('/search/<keyword>', methods=['POST','GET']) def search(keyword): keyword = keyword body = { "query": { "multi_match": { "query": keyword, "fields": ["data"] } } } res = es.search(index="pdfclearn", doc_type="_doc", body=body) return jsonify(res['hits']['hits']) @app.route("/searhbar") def searhbar(): return render_template("index.html") @app.route("/searhbar/<string:box>") def process(box): query = request.args.get('query') if box == 'names': keyword = box body = { "query": { "multi_match": { "query": keyword, "fields": ["data"] } } } res = es.search(index="pdfclearn", doc_type="_doc", body=body) return jsonify(res['hits']['hits']) app.run(port=5003, debug=True) In the above code we can search across all Pages for a keyword or phrase. curl http://127.0.0.1:5003/search/test //it works!! I Found a blog about how to dave PDF files as a Base64 index in ElasticSearch. I have seen DocuSign's API do this for document templating. However, I dont understand How to Jsonify the Base64 PDF in a way thats searchable for ElasticSearch. curl "http://localhost:9200/pdftext/_doc/42" curl -X POST "http://localhost:9200/pdf/_search?q=*" I can retrieve the Base64 of a 700 Page document. But I think what I need is to Index and retrieve Each Page of the Document. Blogs I Have Studied that got me part the way: https://kb.objectrocket.com/elasticsearch/how-to-index-a-pdf-file-as-an-elasticsearch-index-267 https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvi-full-text-search endgame: https://towardsdatascience.com/create-a-full-search-engine-via-flask-elasticsearch-javascript-d3js-and-bootstrap-275f9dc6efe1 I will continue to study Elastic Search and Base64 Encoding and decoding. But I would like some help getting to my goal. Any Detailed example would be much appreciated. | So now Amazon has a solution for my use case. It's called AWS Textract. If you create a free AWS account, and download the Cli and Python sdk, you can use the following code: import boto3 # Document documentName = "test2-28.png" # Read document content with open(documentName, 'rb') as document: imageBytes = document.read() # Amazon Textract clientls textract = boto3.client('textract') # Call Amazon Textract response = textract.detect_document_text(Document={'Bytes': imageBytes}) # print(response) # Print detected text for item in response["Blocks"]: if item["BlockType"] == "LINE": print('\033[94m' + item["Text"] + '\033[0m') Make sure to convert your PDF pages to Images first. ML works off Images. I used .png files for each page. Next I will need to loop through a folder with all pages as images in it. I will also need to save to a CSV file output or DB for future analysis. | 8 | 0 |
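Following up on the answer's closing note about looping through a folder of page images: here is a hypothetical sketch (the folder name and output layout are my own invention) that calls the same detect_document_text API once per page and collects the text for later CSV or database storage.

```python
import os
import csv
import boto3

textract = boto3.client("textract")
pages_dir = "pages"  # hypothetical folder containing one PNG per PDF page

rows = []
for name in sorted(os.listdir(pages_dir)):
    if not name.lower().endswith(".png"):
        continue
    with open(os.path.join(pages_dir, name), "rb") as f:
        response = textract.detect_document_text(Document={"Bytes": f.read()})
    # Keep only LINE blocks, matching the answer's own filtering
    lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
    rows.append({"page": name, "text": " ".join(lines)})

# Persist for later analysis or indexing
with open("pages_text.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["page", "text"])
    writer.writeheader()
    writer.writerows(rows)
```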
60,058,588 | 2020-2-4 | https://stackoverflow.com/questions/60058588/tensorflow-2-0-tf-random-set-seed-not-working-since-i-am-getting-different-resul | I am using tf.random.set_seed to assure the reproducibility of my experiments but getting different results in terms of loss after training my model multiple times. I am monitoring the learning curve of each experiment using Tensorboard, but I am getting different values of loss and accuracy. | Providing the solution here (Answer Section), even though it is present in the Comment Section, for the benefit of the community. To reproduce same results, you can create function as below and pass seeds directly to the layers as mentioned Daniel in the comments. def reset_random_seeds(): os.environ['PYTHONHASHSEED']=str(2) tf.random.set_seed(2) np.random.seed(2) random.seed(2) Please refer complete code in below, which reproduced same results import os ####*IMPORANT*: Have to do this line *before* importing tensorflow os.environ['PYTHONHASHSEED']=str(2) import tensorflow as tf import tensorflow.keras as keras import tensorflow.keras.layers import random import pandas as pd import numpy as np def reset_random_seeds(): os.environ['PYTHONHASHSEED']=str(2) tf.random.set_seed(2) np.random.seed(2) random.seed(2) #make some random data reset_random_seeds() NUM_ROWS = 1000 NUM_FEATURES = 10 random_data = np.random.normal(size=(NUM_ROWS, NUM_FEATURES)) df = pd.DataFrame(data=random_data, columns=['x_' + str(ii) for ii in range(NUM_FEATURES)]) y = df.sum(axis=1) + np.random.normal(size=(NUM_ROWS)) def run(x, y): reset_random_seeds() model = keras.Sequential([ keras.layers.Dense(40, input_dim=df.shape[1], activation='relu'), keras.layers.Dense(20, activation='relu'), keras.layers.Dense(10, activation='relu'), keras.layers.Dense(1, activation='linear') ]) NUM_EPOCHS = 100 model.compile(optimizer='adam', loss='mean_squared_error') model.fit(x, y, epochs=NUM_EPOCHS, verbose=0) predictions = model.predict(x).flatten() loss = model.evaluate(x, y) #This prints out the loss by side-effect #With Tensorflow 2.0 this is now reproducible! run(df, y) run(df, y) run(df, y) Output: 32/32 [==============================] - 0s 2ms/step - loss: 0.5633 32/32 [==============================] - 0s 2ms/step - loss: 0.5633 32/32 [==============================] - 0s 2ms/step - loss: 0.5633 | 9 | 14 |
60,060,301 | 2020-2-4 | https://stackoverflow.com/questions/60060301/typeerror-cannot-cast-array-data-from-dtypeint64-to-dtypeint32-accordin | I'm trying to plot a regplot using seaborn and i'm not unable to plot it and facing TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe' . My data has 731 rows and 16 column - >>> bike_df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 731 entries, 0 to 730 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 instant 731 non-null int64 1 dteday 731 non-null object 2 season 731 non-null int64 3 yr 731 non-null int64 4 mnth 731 non-null int64 5 holiday 731 non-null int64 6 weekday 731 non-null int64 7 workingday 731 non-null int64 8 weathersit 731 non-null int64 9 temp 731 non-null float64 10 atemp 731 non-null float64 11 hum 731 non-null float64 12 windspeed 731 non-null float64 13 casual 731 non-null int64 14 registered 731 non-null int64 15 cnt 731 non-null int64 dtypes: float64(4), int64(11), object(1) memory usage: 88.6+ KB Here is a snippet of the data And when i'm trying to plot regplot using seaborn - >>> sns.regplot(x="casual", y="cnt", data=bike_df); --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-54-68533af96906> in <module> ----> 1 sns.regplot(x="casual", y="cnt", data=bike_df); ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\regression.py in regplot(x, y, data, x_estimator, x_bins, x_ci, scatter, fit_reg, ci, n_boot, units, seed, order, logistic, lowess, robust, logx, x_partial, y_partial, truncate, dropna, x_jitter, y_jitter, label, color, marker, scatter_kws, line_kws, ax) 816 scatter_kws["marker"] = marker 817 line_kws = {} if line_kws is None else copy.copy(line_kws) --> 818 plotter.plot(ax, scatter_kws, line_kws) 819 return ax 820 ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\regression.py in plot(self, ax, scatter_kws, line_kws) 363 364 if self.fit_reg: --> 365 self.lineplot(ax, line_kws) 366 367 # Label the axes ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\regression.py in lineplot(self, ax, kws) 406 """Draw the model.""" 407 # Fit the regression model --> 408 grid, yhat, err_bands = self.fit_regression(ax) 409 edges = grid[0], grid[-1] 410 ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\regression.py in fit_regression(self, ax, x_range, grid) 214 yhat, yhat_boots = self.fit_logx(grid) 215 else: --> 216 yhat, yhat_boots = self.fit_fast(grid) 217 218 # Compute the confidence interval at each grid point ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\regression.py in fit_fast(self, grid) 239 n_boot=self.n_boot, 240 units=self.units, --> 241 seed=self.seed).T 242 yhat_boots = grid.dot(beta_boots).T 243 return yhat, yhat_boots ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\algorithms.py in bootstrap(*args, **kwargs) 83 for i in range(int(n_boot)): 84 resampler = integers(0, n, n) ---> 85 sample = [a.take(resampler, axis=0) for a in args] 86 boot_dist.append(f(*sample, **func_kwargs)) 87 return np.array(boot_dist) ~\AppData\Local\Continuum\anaconda3\envs\rstudio\lib\site-packages\seaborn\algorithms.py in <listcomp>(.0) 83 for i in range(int(n_boot)): 84 resampler = integers(0, n, n) ---> 85 sample = [a.take(resampler, axis=0) for a in args] 86 boot_dist.append(f(*sample, **func_kwargs)) 
87 return np.array(boot_dist) TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe' I tried changing the datatypes for all the rows using astype, like below - >>> bike_df['cnt'] = bike_df['cnt'].astype(np.int32) but this did not help and I got the same error again while plotting. Any suggestions are appreciated. Thanks in advance. | Update: this bug is solved in Seaborn version 0.10.1 (April 2020). I encountered the same problem. It is issue 1950 on Seaborn's GitHub, related to running a 32-bit version of NumPy, and it will be solved in the next release. To work around the problem, I changed line 84 of my local version of Seaborn's algorithm.py: resampler = integers(0, n, n, dtype=np.int_) This happened with: numpy version: 1.18.1 seaborn version: 0.10.0 | 11 | 17 |
60,032,983 | 2020-2-3 | https://stackoverflow.com/questions/60032983/record-voice-with-recorder-js-and-upload-it-to-python-flask-server-but-wav-file | I would like to realize this. A user speaks to a web browser. A web browser (Google Chrome) record user's voice as WAV file(Recorder.js) and send it to a python-flask server. I realized this with the help of addpipe's simple recorder.js sample. https://github.com/addpipe/simple-recorderjs-demo This sample uses php server, so I changed the original app.js. original app.js xhr.open("POST","uplod.php",true); my app.js xhr.open("POST","/",true); I checked my web app locally, then everything looked perfect. I use Windows 10, WSL, Debian 10/buster, python3.7.6, Google Chrome. Here is Terminal's record. 127.0.0.1 - - [03/Feb/2020 11:53:17] "GET / HTTP/1.1" 200 - ./file.wav exists 127.0.0.1 - - [03/Feb/2020 11:53:32] "POST / HTTP/1.1" 200 - However, when I checked uploaded "file.wav" with ffprobe command, it was broken. libavutil 56. 22.100 / 56. 22.100 libavcodec 58. 35.100 / 58. 35.100 libavformat 58. 20.100 / 58. 20.100 libavdevice 58. 5.100 / 58. 5.100 libavfilter 7. 40.101 / 7. 40.101 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 3.100 / 5. 3.100 libswresample 3. 3.100 / 3. 3.100 libpostproc 55. 3.100 / 55. 3.100 file.wav: Invalid data found when processing input This is screenshot of my app. When I push "save to disk" button, I can download WAV file locally. If I check downloaded WAV file, it is not broken. libavutil 56. 22.100 / 56. 22.100 libavcodec 58. 35.100 / 58. 35.100 libavformat 58. 20.100 / 58. 20.100 libavdevice 58. 5.100 / 58. 5.100 libavfilter 7. 40.101 / 7. 40.101 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 3.100 / 5. 3.100 libswresample 3. 3.100 / 3. 3.100 libpostproc 55. 3.100 / 55. 3.100 Input #0, wav, from '/mnt/c/Users/w0obe/Downloads/2020-02-03T02_53_29.366Z.wav': Duration: 00:00:07.34, bitrate: 768 kb/s Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 1 channels, s16, 768 kb/s My directory structure is here. . ├── file.wav(uploaded WAV file) ├── main.py ├── Pipfile ├── Pipfile.lock ├── static │ └── js │ └── app.js └── templates └── index.html This is main.py. #!/usr/bin/env python # -*- coding: utf-8 -*- from flask import Flask from flask import request from flask import render_template import os app = Flask(__name__) @app.route("/", methods=['POST', 'GET']) def index(): if request.method == "POST": f = open('./file.wav', 'wb') f.write(request.get_data("audio_data")) f.close() if os.path.isfile('./file.wav'): print("./file.wav exists") return render_template('index.html', request="POST") else: return render_template("index.html") if __name__ == "__main__": app.run() This is index.html. <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Simple Recorder.js demo with record, stop and pause - addpipe.com</title> <meta name="viewport" content="width=device-width, initial-scale=1.0"> </head> <body> <h1>Simple Recorder.js demo</h1> <div id="controls"> <button id="recordButton">Record</button> <button id="pauseButton" disabled>Pause</button> <button id="stopButton" disabled>Stop</button> </div> <div id="formats">Format: start recording to see sample rate</div> <p><strong>Recordings:</strong></p> <ol id="recordingsList"></ol> <!-- inserting these scripts at the end to be able to use all the elements in the DOM --> <script src="https://cdn.rawgit.com/mattdiamond/Recorderjs/08e7abd9/dist/recorder.js"></script> <script src="/static/js/app.js"></script> </body> </html> This is app.js. 
//webkitURL is deprecated but nevertheless URL = window.URL || window.webkitURL; var gumStream; //stream from getUserMedia() var rec; //Recorder.js object var input; //MediaStreamAudioSourceNode we'll be recording // shim for AudioContext when it's not avb. var AudioContext = window.AudioContext || window.webkitAudioContext; var audioContext //audio context to help us record var recordButton = document.getElementById("recordButton"); var stopButton = document.getElementById("stopButton"); var pauseButton = document.getElementById("pauseButton"); //add events to those 2 buttons recordButton.addEventListener("click", startRecording); stopButton.addEventListener("click", stopRecording); pauseButton.addEventListener("click", pauseRecording); function startRecording() { console.log("recordButton clicked"); /* Simple constraints object, for more advanced audio features see https://addpipe.com/blog/audio-constraints-getusermedia/ */ var constraints = { audio: true, video:false } /* Disable the record button until we get a success or fail from getUserMedia() */ recordButton.disabled = true; stopButton.disabled = false; pauseButton.disabled = false /* We're using the standard promise based getUserMedia() https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia */ navigator.mediaDevices.getUserMedia(constraints).then(function(stream) { console.log("getUserMedia() success, stream created, initializing Recorder.js ..."); /* create an audio context after getUserMedia is called sampleRate might change after getUserMedia is called, like it does on macOS when recording through AirPods the sampleRate defaults to the one set in your OS for your playback device */ audioContext = new AudioContext(); //update the format document.getElementById("formats").innerHTML="Format: 1 channel pcm @ "+audioContext.sampleRate/1000+"kHz" /* assign to gumStream for later use */ gumStream = stream; /* use the stream */ input = audioContext.createMediaStreamSource(stream); /* Create the Recorder object and configure to record mono sound (1 channel) Recording 2 channels will double the file size */ rec = new Recorder(input,{numChannels:1}) //start the recording process rec.record() console.log("Recording started"); }).catch(function(err) { //enable the record button if getUserMedia() fails recordButton.disabled = false; stopButton.disabled = true; pauseButton.disabled = true }); } function pauseRecording(){ console.log("pauseButton clicked rec.recording=",rec.recording ); if (rec.recording){ //pause rec.stop(); pauseButton.innerHTML="Resume"; }else{ //resume rec.record() pauseButton.innerHTML="Pause"; } } function stopRecording() { console.log("stopButton clicked"); //disable the stop button, enable the record too allow for new recordings stopButton.disabled = true; recordButton.disabled = false; pauseButton.disabled = true; //reset button just in case the recording is stopped while paused pauseButton.innerHTML="Pause"; //tell the recorder to stop the recording rec.stop(); //stop microphone access gumStream.getAudioTracks()[0].stop(); //create the wav blob and pass it on to createDownloadLink rec.exportWAV(createDownloadLink); } function createDownloadLink(blob) { var url = URL.createObjectURL(blob); var au = document.createElement('audio'); var li = document.createElement('li'); var link = document.createElement('a'); //name of .wav file to use during upload and download (without extendion) var filename = new Date().toISOString(); //add controls to the <audio> element au.controls = true; au.src = url; //save 
to disk link link.href = url; link.download = filename+".wav"; //download forces the browser to donwload the file using the filename link.innerHTML = "Save to disk"; //add the new audio element to li li.appendChild(au); //add the filename to the li li.appendChild(document.createTextNode(filename+".wav ")) //add the save to disk link to li li.appendChild(link); //upload link var upload = document.createElement('a'); upload.href="#"; upload.innerHTML = "Upload"; upload.addEventListener("click", function(event){ var xhr=new XMLHttpRequest(); xhr.onload=function(e) { if(this.readyState === 4) { console.log("Server returned: ",e.target.responseText); } }; var fd=new FormData(); fd.append("audio_data",blob, filename); xhr.open("POST","/",true); xhr.send(fd); }) li.appendChild(document.createTextNode (" "))//add a space in between li.appendChild(upload)//add the upload link to li //add the li element to the ol recordingsList.appendChild(li); } How can I upload WAV file to flask server? Could you give me any information or suggestion? Thank you in advance. Sincerely, Kazu | I have this problem and it takes me 2 days for finding the solution :)) . In flask server you can use request.files['audio_data'] to get wav audio file. You can pass and use it as an audio variable too. Hope this can help you | 10 | 12 |
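A minimal sketch of the Flask side the answer above describes, assuming the browser posts the blob as multipart/form-data under the field name "audio_data" (which is what the FormData code in app.js does); the route and save path are illustrative.
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def upload_audio():
    wav_file = request.files["audio_data"]  # a FileStorage object, not raw body bytes
    wav_file.save("file.wav")               # keeps the WAV header intact
    return "ok"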
60,022,388 | 2020-2-2 | https://stackoverflow.com/questions/60022388/pytorch-runtimeerror-reduce-failed-to-synchronize-cudaerrorassert-device-sid | I am running into the following error when trying to train this on this dataset. Since this is the configuration published in the paper, I am assuming I am doing something incredibly wrong. This error arrives on a different image every time I try to run training. C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1741, in <module> main() File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1735, in main globals = debugger.run(setup['file'], None, None, is_module) File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1135, in run pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Noam/Code/vision_course/hopenet/deep-head-pose/code/original_code_augmented/train_hopenet_with_validation_holdout.py", line 187, in <module> loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont) File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\loss.py", line 431, in forward return F.mse_loss(input, target, reduction=self.reduction) File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\functional.py", line 2204, in mse_loss ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered Any ideas? | This kind of error generally occurs when using NLLLoss or CrossEntropyLoss, and when your dataset has negative labels (or labels greater than the number of classes). That is also the exact error you are getting Assertion t >= 0 && t < n_classes failed. This won't occur for MSELoss, but OP mentions that there is a CrossEntropyLoss somewhere and thus the error occurs (the program crashes asynchronously on some other line). The solution is to clean the dataset and ensure that t >= 0 && t < n_classes is satisfied (where t represents the label). Also, ensure that your network output is in the range 0 to 1 in case you use NLLLoss or BCELoss (then you require softmax or sigmoid activation respectively). Note that this is not required for CrossEntropyLoss or BCEWithLogitsLoss because they implement the activation function inside the loss function. (Thanks to @PouyaB for pointing out). | 17 | 24 |
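A small sanity-check sketch along the lines of the answer above (names are placeholders, not from the thread): verify that every label lies in [0, n_classes) before the loss is computed, so the failure surfaces with a readable message instead of an asynchronous CUDA assert.
import torch

def check_labels(labels: torch.Tensor, n_classes: int) -> None:
    bad = (labels < 0) | (labels >= n_classes)
    if bad.any():
        raise ValueError(
            f"{int(bad.sum())} labels fall outside [0, {n_classes}): "
            f"{labels[bad].unique().tolist()}"
        )

check_labels(torch.tensor([0, 1, 2]), n_classes=3)   # passes silently
# check_labels(torch.tensor([0, 5]), n_classes=3)    # would raise: 5 is out of range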
59,978,301 | 2020-1-30 | https://stackoverflow.com/questions/59978301/how-to-use-deep-learning-models-for-time-series-forecasting | I have signals recorded from machines (m1, m2, so on) for 28 days. (Note: each signal in each day is 360 length long). machine_num, day1, day2, ..., day28 m1, [12, 10, 5, 6, ...], [78, 85, 32, 12, ...], ..., [12, 12, 12, 12, ...] m2, [2, 0, 5, 6, ...], [8, 5, 32, 12, ...], ..., [1, 1, 12, 12, ...] ... m2000, [1, 1, 5, 6, ...], [79, 86, 3, 1, ...], ..., [1, 1, 12, 12, ...] I want to predict the signal sequence of each machine for next 3 days. i.e. in day29, day30, day31. However, I don't have values for days 29, 30 and 31. So, my plan was as follows using LSTM model. The first step is to get signals for day 1 and asked to predict signals for day 2, then in the next step get signals for days 1, 2 and asked to predict signals for day 3, etc, so when I reach day 28, the network has all the signals up to 28 and is asked to predict the signals for day 29, etc. I tried to do a univariant LSTM model as follows. # univariate lstm example from numpy import array from keras.models import Sequential from keras.layers import LSTM from keras.layers import Dense # define dataset X = array([[10, 20, 30], [20, 30, 40], [30, 40, 50], [40, 50, 60]]) y = array([40, 50, 60, 70]) # reshape from [samples, timesteps] into [samples, timesteps, features] X = X.reshape((X.shape[0], X.shape[1], 1)) # define model model = Sequential() model.add(LSTM(50, activation='relu', input_shape=(3, 1))) model.add(Dense(1)) model.compile(optimizer='adam', loss='mse') # fit model model.fit(X, y, epochs=1000, verbose=0) # demonstrate prediction x_input = array([50, 60, 70]) x_input = x_input.reshape((1, 3, 1)) yhat = model.predict(x_input, verbose=0) print(yhat) However, this example is very simple since it does not have long sequences like mine. For example, my data for m1 would look as follows. m1 = [[12, 10, 5, 6, ...], [78, 85, 32, 12, ...], ..., [12, 12, 12, 12, ...]] Moreover, I need the prediction of day 29, 30, 31. In that case, I am unsure how to change this example to cater my needs. I want to sepcifically know if the direction I have chosen is correct. If so, how to do it. I am happy to provide more details if needed. EDIT: I have mentioned the model.summary(). | Model and shapes Since these are sequences in sequences, you need to use your data in a different format. Although you could just go like (machines, days, 360) and simply treat the 360 as features (that could work up to some point), for a robust model (then maybe there is a speed problem) you'd need to treat both things as sequences. Then I'd go with data like (machines, days, 360, 1) and two levels of recurrency. Our models input_shape then would be (None, 360, 1) Model case 1 - Only day recurrency Data shape: (machines, days, 360) Apply some normalization to the data. 
Here, an example, but models can be flexible as you can add more layers, try convolutions, etc: inputs = Input((None, 360)) #(m, d, 360) outs = LSTM(some_units, return_sequences=False, stateful=depends_on_training_approach)(inputs) #(m, some_units) outs = Dense(360, activation=depends_on_your_normalization)(outs) #(m, 360) outs = Reshape((1,360)) #(m, 1, 360) #this reshape is not necessary if using the "shifted" approach - see time windows below #it would then be (m, d, 360) model = Model(inputs, outs) Depending on the complexity of the intra-daily sequences, they could get well predicted with this, but if they evolve in a complex way, then the next model would be a little better. Always remember that you can create more layers and explore things to increase the capability of this model, this is only a tiny example Model case 2 - Two level recurrency Data shape: (machines, days, 360, 1) Apply some normalization to the data. There are so many many ways to experiment on how to do this, but here is a simple one. inputs = Input((None, 360, 1)) #(m, d, 360, 1) #branch 1 inner_average = TimeDistributed( Bidirectional( LSTM(units1, return_sequences=True, stateful=False), merge_mode='ave' ) )(inputs) #(m, d, 360, units1) inner_average = Lambda(lambda x: K.mean(x, axis=1))(inner_average) #(m, 360, units1) #branch 2 inner_seq = TimeDistributed( LSTM(some_units, return_sequences=False, stateful=False) )(inputs) #may be Bidirectional too #shape (m, d, some_units) outer_seq = LSTM(other_units, return_sequences = False, stateful=depends_on_training_approach)(inner_seq) #(m, other_units) outer_seq = Dense(few_units * 360, activation = 'tanh')(outer_seq) #(m, few_units * 360) #activation = same as inner_average outer_seq = Reshape((360,few_units))(outer_seq) #(m, 360, few_units) #join branches outputs = Concatenate()([inner_average, outer_seq]) #(m, 360, units1+few_units) outputs = LSTM(units, return_sequences=True, stateful= False)(outputs) #(m, 360,units) outputs = Dense(1, activation=depends_on_your_normalization)(outputs) #(m, 360, 1) outputs = Reshape((1,360))(outputs) #(m, 1, 360) for training purposes model = Model(inputs, outputs) This is one attempt, I made an average of the days, but I could have made, instead of inner_average, something like: #branch 1 daily_minutes = Permute((2,1,3))(inputs) #(m, 360, d, 1) daily_minutes = TimeDistributed( LSTM(units1, return_sequences=False, stateful=depends_on_training_approach) )(daily_minutes) #(m, 360, units1) Many other ways of exploring the data are possible, this is a highly creative field. You could, for instance, use the daily_minutes approach right after the inner_average excluding the K.mean lambda layer.... you got the idea. Time windows approach Your approach sounds nice. Give one step to predict the next, give two steps to predic the third, give three steps to predict the fourth. The models above are suited to this approach. Keep in mind that very short inputs may be useless and may make your model worse. (Try to imagine how many steps would be reasonably enough for you to start predicting the next ones) Preprocess your data and divide it in groups: group with length = 4 (for instance) group with length = 5 ... group with length = 28 You will need a manual training loop where in each epoch you feed each of these groups (you can't feed different lenghts all together). 
Another approach is, give all steps, make the model predict a shifted sequence like: inputs = original_inputs[:, :-1] #exclude last training day outputs = original_inputs[:, 1:] #exclude first training day For making the models above suited to this approach, you need return_sequences=True in every LSTM that uses the day dimension as steps (not the inner_seq). (The inner_average method will fail, and you will have to resort to the daily_minutes approach with return_sequences=True and another Permute((2,1,3)) right after. Shapes would be: branch1 : (m, d, 360, units1) branch2 : (m, d, 360, few_units) - needs to adjust the Reshape for this The reshapes using 1 timestep will be unnecessary, the days dimension will replace the 1. You may need to use Lambda layers to reshape considering the batch size and variable number of days (if details are needed, please tell me) Training and predicting (Sorry for not having the time for detailing it now) You then can follow the approaches mentioned here and here too, more complete with a few links. (Take care with the output shapes, though, in your question, we are always keeping the time step dimension, even though it may be 1) The important points are: If you opt for stateful=False: this means easy training with fit (as long as you didn't use the "different lengths" approach); this also means you will need to build a new model with stateful=True, copy the weights of the trained model; then you do the manual step by step prediction If you opt for stateful=True from the beginning: this necessarily means manual training loop (using train_on_batch for instance); this necessarily means you will need model.reset_states() whenever you are going to present a batch whose sequences are not sequels of the last batch (every batch if your batches contain whole sequences). don't need to build a new model to manually predict, but manual prediction remains the same | 7 | 4 |
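A rough sketch of the manual day-by-day forecasting loop described at the end of the answer above, assuming a trained stateful model whose input and output shapes are (batch, 1, 360) and a history array shaped (1, 28, 360); all names are illustrative.
import numpy as np

def forecast_next_days(model, history, n_days=3):
    model.reset_states()
    # replay the known days so the recurrent state reflects the full history
    for day in range(history.shape[1]):
        last_pred = model.predict(history[:, day:day + 1, :])  # (1, 1, 360)
    preds = []
    # feed each prediction back in to roll forward to days 29, 30, 31
    for _ in range(n_days):
        preds.append(last_pred[0, 0])
        last_pred = model.predict(last_pred)
    return np.stack(preds)  # (n_days, 360)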
60,042,568 | 2020-2-3 | https://stackoverflow.com/questions/60042568/this-application-failed-to-start-because-no-qt-platform-plugin-could-be-initiali | I am stuck trying to run a very simple Python script, getting this error: qt.qpa.plugin: Could not find the Qt platform plugin "cocoa" in "" This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. zsh: abort python3 mypuppy1.py The script code is: import cv2 img = cv2.imread('00-puppy.jpg') while True: cv2.imshow('Puppy',img) if cv2.waitKey(1) & 0xFF == 27: break cv2.destroyAllWindows() However, this notebook code works in JupyterLab: import cv2 img = cv2.imread('00-puppy.jpg') cv2.imshow('Puppy', img) cv2.waitKey() I am on macOS, using Anaconda and JupyterLab. I would appreciate any help with this issue. Thanks! | For me, it worked to use an opencv-python version prior to the 4.2 release that just came out. The new version (4.2.0.32), released on Feb 2, 2020, seems to have caused this breaking change and probably expects to find Qt at a specific location (the Users/ directory), as pointed out by other answers. You can either try manually installing Qt from qt.io as suggested, making sure you get a .qt directory under your Users directory, or you can use version 4.1.2.30, which works like a charm without doing anything else. It works for opencv-contrib-python too. | 56 | 21 |
60,023,381 | 2020-2-2 | https://stackoverflow.com/questions/60023381/securityerror-failed-to-establish-secure-connection-to-eof-occurred-in-violati | I am attempting to connect to Neo4j but I keep getting this error. I tried from neo4j.v1 import GraphDatabase driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("neo4j", "12345")) but I get this error when I try to connect SecurityError: Failed to establish secure connection to 'EOF occurred in violation of protocol (_ssl.c:841)' I can connect to the browser when I type http://localhost:7474/browser/ Here is the full error log: --------------------------------------------------------------------------- SSLEOFError Traceback (most recent call last) ~\AppData\Roaming\Python\Python36\site-packages\neobolt\direct.py in _secure(s, host, ssl_context, **config) 853 try: --> 854 s = ssl_context.wrap_socket(s, server_hostname=host if HAS_SNI and host else None) 855 except SSLError as cause: c:\program files\python36\lib\ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session) 406 server_hostname=server_hostname, --> 407 _context=self, _session=session) 408 c:\program files\python36\lib\ssl.py in init(self, sock, keyfile, certfile, server_side, cert_reqs, ssl_version, ca_certs, do_handshake_on_connect, family, type, proto, fileno, suppress_ragged_eofs, npn_protocols, ciphers, server_hostname, _context, _session) 813 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") --> 814 self.do_handshake() 815 c:\program files\python36\lib\ssl.py in do_handshake(self, block) 1067 self.settimeout(None) -> 1068 self._sslobj.do_handshake() 1069 finally: c:\program files\python36\lib\ssl.py in do_handshake(self) 688 """Start the SSL/TLS handshake.""" --> 689 self._sslobj.do_handshake() 690 if self.context.check_hostname: SSLEOFError: EOF occurred in violation of protocol (_ssl.c:841) The above exception was the direct cause of the following exception: SecurityError Traceback (most recent call last) in 1 ----> 2 driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("neo4j", "12345")) ~\AppData\Roaming\Python\Python36\site-packages\neo4j__init__.py in driver(cls, uri, **config) 118 :class:.Driver subclass instance directly. 
119 """ --> 120 return Driver(uri, **config) 121 122 ~\AppData\Roaming\Python\Python36\site-packages\neo4j__init__.py in new(cls, uri, **config) 159 for subclass in Driver.subclasses(): 160 if parsed_scheme in subclass.uri_schemes: --> 161 return subclass(uri, **config) 162 raise ValueError("URI scheme %r not supported" % parsed.scheme) 163 ~\AppData\Roaming\Python\Python36\site-packages\neo4j__init__.py in new(cls, uri, **config) 233 234 pool = ConnectionPool(connector, instance.address, **config) --> 235 pool.release(pool.acquire()) 236 instance._pool = pool 237 instance._max_retry_time = config.get("max_retry_time", default_config["max_retry_time"]) ~\AppData\Roaming\Python\Python36\site-packages\neobolt\direct.py in acquire(self, access_mode) 713 714 def acquire(self, access_mode=None): --> 715 return self.acquire_direct(self.address) 716 717 ~\AppData\Roaming\Python\Python36\site-packages\neobolt\direct.py in acquire_direct(self, address) 606 if can_create_new_connection: 607 try: --> 608 connection = self.connector(address, error_handler=self.connection_error_handler) 609 except ServiceUnavailable: 610 self.remove(address) ~\AppData\Roaming\Python\Python36\site-packages\neo4j__init__.py in connector(address, **kwargs) 230 231 def connector(address, **kwargs): --> 232 return connect(address, **dict(config, **kwargs)) 233 234 pool = ConnectionPool(connector, instance.address, **config) ~\AppData\Roaming\Python\Python36\site-packages\neobolt\direct.py in connect(address, **config) 970 raise ServiceUnavailable("Failed to resolve addresses for %s" % address) 971 else: --> 972 raise last_error ~\AppData\Roaming\Python\Python36\site-packages\neobolt\direct.py in connect(address, **config) 961 host = address[0] 962 s = _connect(resolved_address, **config) --> 963 s, der_encoded_server_certificate = _secure(s, host, security_plan.ssl_context, **config) 964 connection = _handshake(s, address, der_encoded_server_certificate, **config) 965 except Exception as error: ~\AppData\Roaming\Python\Python36\site-packages\neobolt\direct.py in _secure(s, host, ssl_context, **config) 857 error = SecurityError("Failed to establish secure connection to {!r}".format(cause.args[1])) 858 error.cause = cause --> 859 raise error 860 else: 861 # Check that the server provides a certificate SecurityError: Failed to establish secure connection to 'EOF occurred in violation of protocol (_ssl.c:841)' | I had the same problem with the Object Graph Mapper Neomodel (connecting to neo4j v4). Adding the second line solved it: config.DATABASE_URL = 'bolt://neo4j:password123@localhost:7687' config.ENCRYPTED_CONNECTION = False | 9 | 6 |
60,022,462 | 2020-2-2 | https://stackoverflow.com/questions/60022462/how-to-suppress-specific-warning-in-tensorflow-python | I have a model that, based on certain conditions, has some unconnected gradients, and this is exactly what I want. But Tensorflow is printing out a Warning every time it encounters the unconnected gradient. WARNING:tensorflow:Gradients do not exist for variables Is there any way to only suppress this specific warning? I don't want to blindly suppress all warnings since there might be unexpected (and potentially useful) warnings in the future as I'm still working on my model. | Kinda hacky way: gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients([ (grad, var) for (grad, var) in zip(gradients, model.trainable_variables) if grad is not None ]) | 10 | 6 |
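An alternative sketch, assuming TensorFlow 2.x where this warning goes through tf.get_logger(): attach a logging filter that drops only this message, so the unconnected gradients are left untouched and other warnings still come through.
import logging
import tensorflow as tf

class DropUnconnectedGradientWarning(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # keep every record except the specific "Gradients do not exist" warning
        return "Gradients do not exist for variables" not in record.getMessage()

tf.get_logger().addFilter(DropUnconnectedGradientWarning())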
60,054,350 | 2020-2-4 | https://stackoverflow.com/questions/60054350/django-use-a-property-as-a-foreign-key | The database of my app is populated and kept syncd with external data sources. I have an abstract model from which all the models of my Django 2.2 app derives, defined as follow: class CommonModel(models.Model): # Auto-generated by Django, but included in this example for clarity. # id = models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID') ORIGIN_SOURCEA = '1' ORIGIN_SOURCEB = '2' ORIGIN_CHOICES = [ (ORIGIN_SOURCEA, 'Source A'), (ORIGIN_SOURCEB, 'Source B'), ] object_origin = models.IntegerField(choices=ORIGIN_CHOICES) object_id = models.IntegerField() class A(CommonModel): some_stuff = models.CharField() class B(CommonModel): other_stuff = models.IntegerField() to_a_fk = models.ForeignKey("myapp.A", on_delete=models.CASCADE) class C(CommonModel): more_stuff = models.CharField() b_m2m = models.ManyToManyField("myapp.B") The object_id field can't be set as unique since each data source I use in my app may have an object with an object_id = 1. Hence the need to track down the origin of the object, by the field object_origin. Unfortunately, Django's ORM doesn't support more-than-one columns foreign keys. Problem Whilst keeping the auto-generated primary key in the database (id), I would like to make my foreign key and many-to-many relations happen on both object_id and object_origin fields instead of the primary key id. What I've tried I thought about doing something like this: class CommonModel(models.Model): # Auto-generated by Django, but included in this example for clarity. # id = models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID') ORIGIN_SOURCEA = '1' ORIGIN_SOURCEB = '2' ORIGIN_CHOICES = [ (ORIGIN_SOURCEA, 'Source A'), (ORIGIN_SOURCEB, 'Source B'), ] object_origin = models.IntegerField(choices=ORIGIN_CHOICES) object_id = models.IntegerField() def _get_composed_object_origin_id(self): return f"{self.object_origin}:{self.object_id}" composed_object_origin_id = property(_get_composed_object_origin_id) class A(CommonModel): some_stuff = models.CharField() class B(CommonModel): other_stuff = models.IntegerField() to_a_fk = models.ForeignKey("myapp.A", to_field="composed_object_origin_id", on_delete=models.CASCADE) But Django complains about it: myapp.B.to_a_fk: (fields.E312) The to_field 'composed_object_origin_id' doesn't exist on the related model 'myapp.A'. And it sounds legit, Django excepted the filed given to to_field to be a database field. But there is no need to add a new field to my CommonModel since composed_object_type_id is built uisng two non-nullable fields... | You mentioned in your comment in the other answer that object_id is not unique but it is unique in combination with object_type, so could you use a unique_together in the metaclass? i.e class CommonModel(models.Model): object_type = models.IntegerField() object_id = models.IntegerField() class Meta: unique_together = ( ("object_type", "object_id"), ) | 8 | 6 |
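An illustrative follow-up sketch (my own, not part of the answer): once (object_origin, object_id) is unique together, a model can point at an A row through two plain integer columns and resolve it with a property, in place of a composite ForeignKey; it assumes the question's CommonModel and A are importable.
from django.db import models

class B(CommonModel):
    other_stuff = models.IntegerField()
    a_origin = models.IntegerField()
    a_object_id = models.IntegerField()

    @property
    def to_a(self):
        # manual lookup on the unique (object_origin, object_id) pair
        return A.objects.get(object_origin=self.a_origin, object_id=self.a_object_id)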
59,979,354 | 2020-1-30 | https://stackoverflow.com/questions/59979354/what-is-the-difference-between-numpy-fft-fft-and-numpy-fft-fftfreq | I am analysing time series data and would like to extract the 5 main frequency components and use them as features for training a machine learning model. My dataset is 921 x 10080. Each row is a time series and there are 921 of them in total. While exploring possible ways to do this, I came across various functions including numpy.fft.fft, numpy.fft.fftfreq, and DFT. My question is, what do these functions do to the dataset and what is the difference between these functions? For Numpy.fft.fft, Numpy docs state: Compute the one-dimensional discrete Fourier Transform. This function computes the one-dimensional n-point discrete Fourier Transform (DFT) with the efficient Fast Fourier Transform (FFT) algorithm [CT]. While for numpy.fft.fftfreq: numpy.fft.fftfreq(n, d=1.0) Return the Discrete Fourier Transform sample frequencies. The returned float array f contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. But this doesn't really talk to me probably because I don't have background knowledge for signal processing. Which function should I use for my case, i.e. extracting the first 5 main frequency and amplitude components for each row of the dataset? Update: Using fft returned result below. My intention was to obtain the first 5 frequency and amplitude values for each time series, but are they the frequency components? Here's the code: def get_fft_values(y_values, T, N, f_s): f_values = np.linspace(0.0, 1.0/(2.0*T), N//2) fft_values_ = rfft(y_values) fft_values = 2.0/N * np.abs(fft_values_[0:N//2]) return f_values[0:5], fft_values[0:5] #f_values - frequency(length = 5040) ; fft_values - amplitude (length = 5040) t_n = 1 N = 10080 T = t_n / N f_s = 1/T result = pd.DataFrame(df.apply(lambda x: get_fft_values(x, T, N, f_s), axis =1)) result and output 0 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [52.91299603174603, 1.2744877093061115, 2.47064631896607, 1.4657299825335832, 1.9362280837538701]) 1 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [57.50430555555556, 4.126212552498241, 2.045294347349226, 0.7878668631936439, 2.6093502232989976]) 2 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [52.05765873015873, 0.7214089616631307, 1.8547819994826562, 1.3859749465142301, 1.1848485830307878]) 3 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [53.68928571428572, 0.44281647644149114, 0.3880646059685434, 2.3932194091895043, 0.22048418335196407]) 4 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [52.049007936507934, 0.08026717757664162, 1.122163085234073, 1.2300320578011028, 0.01109727616896663]) ... ... 
916 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [74.39303571428572, 2.7956204803382096, 1.788360577194303, 0.8660509272194551, 0.530400826933975]) 917 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [51.88751984126984, 1.5768804453161231, 0.9932384706239461, 0.7803585797514547, 1.6151532436755451]) 918 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [52.16263888888889, 1.8672674706267687, 0.9955183554654834, 1.0993971449470716, 1.6476405255363171]) 919 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [59.22579365079365, 2.1082518972190183, 3.686245044113031, 1.6247500816133893, 1.9790245755039324]) 920 ([0.0, 1.000198452073824, 2.000396904147648, 3.0005953562214724, 4.000793808295296], [59.32333333333333, 4.374568790482763, 1.3313693716184536, 0.21391538068483704, 1.414774377287436]) | First one needs to understand that there are time domain and frequency domain representations of signals. The graphic below shows a few common fundamental signal types and their time domain and frequency domain representations. Pay close attention to the sine curve which I will use to illustrate the difference between fft and fftfreq. The Fourier transformation is the portal between your time domain and frequency domain representation. Hence numpy.fft.fft() - returns the fourier transform. this will have both real and imaginary parts. The real and imaginary parts, on their own, are not particularly useful, unless you are interested in symmetry properties around the data window's center (even vs. odd). numpy.fft.fftfreq - returns a float array of the frequency bin centers in cycles per unit of the sample spacing. The numpy.fft.fft() method is a way to get the right frequency that allows you to separate the fft properly. This is best illustrated with an example: import numpy as np import matplotlib.pyplot as plt #fs is sampling frequency fs = 100.0 time = np.linspace(0,10,int(10*fs),endpoint=False) #wave is the sum of sine wave(1Hz) and cosine wave(10 Hz) wave = np.sin(np.pi*time)+ np.cos(np.pi*time) #wave = np.exp(2j * np.pi * time ) plt.plot(time, wave) plt.xlim(0,10) plt.xlabel("time (second)") plt.title('Original Signal in Time Domain') plt.show() # Compute the one-dimensional discrete Fourier Transform. fft_wave = np.fft.fft(wave) # Compute the Discrete Fourier Transform sample frequencies. fft_fre = np.fft.fftfreq(n=wave.size, d=1/fs) plt.subplot(211) plt.plot(fft_fre, fft_wave.real, label="Real part") plt.xlim(-50,50) plt.ylim(-600,600) plt.legend(loc=1) plt.title("FFT in Frequency Domain") plt.subplot(212) plt.plot(fft_fre, fft_wave.imag,label="Imaginary part") plt.legend(loc=1) plt.xlim(-50,50) plt.ylim(-600,600) plt.xlabel("frequency (Hz)") plt.show() | 11 | 16 |
60,032,540 | 2020-2-3 | https://stackoverflow.com/questions/60032540/opencv-cv2-imshow-is-not-working-because-of-the-qt | I am having a problem where I cannot use cv2.imshow() because of the following error message: qt.qpa.plugin: Could not find the Qt platform plugin "cocoa" in "" This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. The last MacBook I was using initially did not have Qt, so I have no idea how I should deal with this. Any ideas? | I had the same issue after updating opencv-python to 4.2.0.32. Uninstalling opencv-python and installing a lower version (e.g. pip install opencv-python==4.1.0.25) solves this issue. | 10 | 43 |
60,047,685 | 2020-2-3 | https://stackoverflow.com/questions/60047685/is-it-bad-practice-to-include-non-validating-methods-in-a-pydantic-model | I'm using pydantic 1.3 to validate models for an API I am writing. Is it common/good practice to include arbitrary methods in a class that inherits from pydantic.BaseModel? I need some helper methods associated with the objects and I am trying to decide whether I need a "handler" class. These models are being converted to JSON and sent to a restful service that I am also writing. My model looks like this: class Foo(pydantic.BaseModel): name: str bar: int baz: int Is it poor practice to do something like: class Foo(pydantic.BaseModel): name: str bar: int baz: int def add_one(self): self.bar += 1 It makes some sense to me, but I can't find an example of anyone doing this. | Yes, it's fine. We should probably document it. The only problem comes when you have a field name which conflicts with the method, but that's not a problem if you know what your data looks like. Also, it's possible to over object orient your code, but you're a long way from that. | 75 | 80 |
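A self-contained sketch of the pattern being discussed, showing that a helper method on a pydantic model behaves as expected (field mutation is allowed by default in pydantic 1.x):
import pydantic

class Foo(pydantic.BaseModel):
    name: str
    bar: int
    baz: int

    def add_one(self) -> None:
        self.bar += 1

foo = Foo(name="spam", bar=1, baz=2)
foo.add_one()
print(foo.bar)  # 2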
60,052,453 | 2020-2-4 | https://stackoverflow.com/questions/60052453/videowriter-outputs-corrupted-video-file | This is my code to save web_cam streaming. It is working but the problem with output video file. import numpy as np import cv2 cap = cv2.VideoCapture(0) # Define the codec and create VideoWriter object #fourcc = cv2.cv.CV_FOURCC(*'DIVX') #out = cv2.VideoWriter('output.avi',fourcc, 20.0, (640,480)) out = cv2.VideoWriter('output.avi', -1, 20.0, (640,480)) while(cap.isOpened()): ret, frame = cap.read() if ret==True: frame = cv2.flip(frame,0) # write the flipped frame out.write(frame) cv2.imshow('frame',frame) if cv2.waitKey(1) & 0xFF == ord('q'): break else: break # Release everything if job is finished cap.release() out.release() cv2.destroyAllWindows() | The output file is corrupted because of the wrong frame rate and frame resolution. Using this code : out = cv2.VideoWriter('output.avi', -1, 20.0, (640,480)) We set the fps/frame rate per second 20. Which was not correct. Also, the frame width and height was wrong. I solved by getting fps, width, height from the captured web_cam profile. cap = cv2.VideoCapture(0) #web-cam capture fps = cap.get(cv2.CAP_PROP_FPS) width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) # float height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) # float out = cv2.VideoWriter('output.avi', -1,fps, (int(width), int(height))) | 9 | 4 |
60,050,507 | 2020-2-4 | https://stackoverflow.com/questions/60050507/reading-prometheus-metric-using-python | I am trying to read Prometheus metrics (the CPU and memory values) of a pod in Kubernetes. I have Prometheus installed and everything is up using localhost 'http://localhost:9090/'. I used the following code to read the CPU and memory of a pod, but I get an error at results = response.json()['data']['result'] : No JSON object could be decoded. Can anyone help please? import datetime import time import requests PROMETHEUS = 'http://localhost:9090/' end_of_month = datetime.datetime.today().replace(day=1).date() last_day = end_of_month - datetime.timedelta(days=1) duration = '[' + str(last_day.day) + 'd]' response = requests.get(PROMETHEUS + '/metrics', params={ 'query': 'sum by (job)(increase(process_cpu_seconds_total' + duration + '))', 'time': time.mktime(end_of_month.timetuple())}) results = response.json()['data']['result'] print('{:%B %Y}:'.format(last_day)) for result in results: print(' {metric}: {value[1]}'.format(**result)) | The code looks correct; however, the query endpoint in your request is wrong. The correct format is: response = requests.get(PROMETHEUS + '/api/v1/query', params={'query': 'container_cpu_user_seconds_total'}) You can change "container_cpu_user_seconds_total" to any query that you want to read. Good luck. | 9 | 9 |
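A minimal end-to-end sketch of the /api/v1/query endpoint the answer points to; the PromQL expression and the pod label are examples and depend on what your Prometheus instance actually scrapes.
import requests

PROMETHEUS = "http://localhost:9090"

response = requests.get(
    PROMETHEUS + "/api/v1/query",
    params={"query": "sum by (pod)(rate(container_cpu_usage_seconds_total[5m]))"},
)
response.raise_for_status()
for result in response.json()["data"]["result"]:
    print(result["metric"].get("pod", "<no pod label>"), "=", result["value"][1])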
60,055,151 | 2020-2-4 | https://stackoverflow.com/questions/60055151/how-to-trigger-an-airflow-dag-run-from-within-a-python-script | Using apache airflow, I created some DAGS, some of which do not run on a schedule. I'm trying to find a way that I can trigger a run for a specific DAG from within a Python script. Is this possible? How can I do? EDIT --- The python script will be running from a different project from the project where all my DAGS are located | You have a variety of options when it comes to triggering Airflow DAG runs. Using Python The airflow python package provides a local client you can use for triggering a dag from within a python script. For example: from airflow.api.client.local_client import Client c = Client(None, None) c.trigger_dag(dag_id='test_dag_id', run_id='test_run_id', conf={}) Using the Airflow CLI You can trigger dags in airflow manually using the Airflow CLI. More info on how to use the CLI to trigger DAGs can be found here. Using the Airflow REST API You can also use the Airflow REST api to trigger DAG runs. More info on that here. The first option from within python might work for you best (it's also how I've personally done it in the past). But you could theoretically use a subprocess to interact with the CLI from python, or a library like requests to interact with the REST API from within Python. | 10 | 20 |
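A hedged sketch of the REST option for Airflow 1.10.x, where the experimental endpoint POST /api/experimental/dags/<dag_id>/dag_runs triggers a run; the host, DAG id and conf payload are placeholders, and the experimental API must be enabled (and authenticated) in your Airflow config.
import requests

resp = requests.post(
    "http://localhost:8080/api/experimental/dags/test_dag_id/dag_runs",
    json={"conf": {}},  # optional run configuration passed to the DAG
)
resp.raise_for_status()
print(resp.json())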
60,058,762 | 2020-2-4 | https://stackoverflow.com/questions/60058762/fastest-way-for-boolean-matrix-computations | I have a boolean matrix with 1.5E6 rows and 20E3 columns, similar to this example: M = [[ True, True, False, True, ...], [False, True, True, True, ...], [False, False, False, False, ...], [False, True, False, False, ...], ... [ True, True, False, False, ...] ] Also, I have another matrix N ( 1.5E6 rows, 1 column): N = [[ True], [False], [ True], [ True], ... [ True] ] What I need to do, is to go through each column pair from matrix M (1&1, 1&2, 1&3, 1&N, 2&1, 2&2 etc) combined by the AND operator, and count how many overlaps there are between the result and matrix N. My Python/Numpy code would look like this: for i in range(M.shape[1]): for j in range(M.shape[1]): result = M[:,i] & M[:,j] # Combine the columns with AND operator count = np.sum(result & N.ravel()) # Counts the True occurrences ... # Save the count for the i and j pair The problem is, going through 20E3 x 20E3 combinations with two for loops is computationally expensive (takes around 5-10 days to compute). A better option I tried is comparing each column to the whole matrix M: for i in range(M.shape[1]): result = M[:,i]*M.shape[1] & M # np.tile or np.repeat is used to horizontally repeat the column counts = np.sum(result & N*M.shape[1], axis=0) ... # Save the counts This reduces overhead and calculation time to around 10%, but it's still taking 1 day or so to compute. My question would be :what is the fastest way (non Python maybe?) to make these calculations (basically just AND and SUM)? I was thinking about low level languages, GPU processing, quantum computing etc.. but I don't know much about any of these so any advice regarding the direction is appreciated! Additional thoughts: Currently thinking if there is a fast way using the dot product (as Davikar proposed) for computing triplets of combinations: def compute(M, N): out = np.zeros((M.shape[1], M.shape[1], M.shape[1]), np.int32) for i in range(M.shape[1]): for j in range(M.shape[1]): for k in range(M.shape[1]): result = M[:, i] & M[:, j] & M[:, k] out[i, j, k] = np.sum(result & N.ravel()) return out | Simply use np.einsum to get all the counts - np.einsum('ij,ik,i->jk',M,M.astype(int),N.ravel()) Feel free to play around with optimize flag with np.einsum. Also, feel free to play around with different dtypes conversion. To leverage GPU, we can use tensorflow package that also supports einsum. Faster alternatives with np.dot : (M&N).T.dot(M.astype(int)) (M&N).T.dot(M.astype(np.float32)) Timings - In [110]: np.random.seed(0) ...: M = np.random.rand(500,300)>0.5 ...: N = np.random.rand(500,1)>0.5 In [111]: %timeit np.einsum('ij,ik,i->jk',M,M.astype(int),N.ravel()) ...: %timeit (M&N).T.dot(M.astype(int)) ...: %timeit (M&N).T.dot(M.astype(np.float32)) 227 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) 66.8 ms ± 198 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 3.26 ms ± 753 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) And take it a bit further with float32 conversions for both of the boolean arrays - In [122]: %%timeit ...: p1 = (M&N).astype(np.float32) ...: p2 = M.astype(np.float32) ...: out = p1.T.dot(p2) 2.7 ms ± 34.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 7 | 9 |
60,046,243 | 2020-2-3 | https://stackoverflow.com/questions/60046243/how-to-make-a-distplot-for-each-column-in-a-pandas-dataframe | I 'm using Seaborn in a Jupyter notebook to plot histograms like this: import numpy as np import pandas as pd from pandas import DataFrame import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline df = pd.read_csv('CTG.csv', sep=',') sns.distplot(df['LBE']) I have an array of columns with values that I want to plot histogram for and I tried plotting a histogram for each of them: continous = ['b', 'e', 'LBE', 'LB', 'AC'] for column in continous: sns.distplot(df[column]) And I get this result - only one plot with (presumably) all histograms: My desired result is multiple histograms that looks like this (one for each variable): How can I do this? | Insert plt.figure() before each call to sns.distplot() . Here's an example with plt.figure(): Here's an example without plt.figure(): Complete code: # imports import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [6, 2] %matplotlib inline # sample time series data np.random.seed(123) df = pd.DataFrame(np.random.randint(-10,12,size=(300, 4)), columns=list('ABCD')) datelist = pd.date_range(pd.datetime(2014, 7, 1).strftime('%Y-%m-%d'), periods=300).tolist() df['dates'] = datelist df = df.set_index(['dates']) df.index = pd.to_datetime(df.index) df.iloc[0]=0 df=df.cumsum() # create distplots for column in df.columns: plt.figure() # <==================== here! sns.distplot(df[column]) | 8 | 13 |
60,029,614 | 2020-2-2 | https://stackoverflow.com/questions/60029614/esp32-cam-stream-in-opencv-python | I am using AI Thinker ESP32-CAM with stream url http://192.168.8.100:81/stream. I have tried this and other techniques but nothing worked for me import numpy as np cap = cv2.VideoCapture("rtsp://192.168.8.100:81/stream") while(True): ret, frame = cap.read() cv2.imshow('frame',frame) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() I've tried http as well instead of rstp but no results at all | I have used this arduino code #include <esp32cam.h> #include <WebServer.h> #include <WiFi.h> const char* WIFI_SSID = "ZONG MBB-E5573-AE26"; const char* WIFI_PASS = "58688303"; WebServer server(80); static auto loRes = esp32cam::Resolution::find(320, 240); static auto hiRes = esp32cam::Resolution::find(800, 600); void handleBmp() { if (!esp32cam::Camera.changeResolution(loRes)) { Serial.println("SET-LO-RES FAIL"); } auto frame = esp32cam::capture(); if (frame == nullptr) { Serial.println("CAPTURE FAIL"); server.send(503, "", ""); return; } Serial.printf("CAPTURE OK %dx%d %db\n", frame->getWidth(), frame->getHeight(), static_cast<int>(frame->size())); if (!frame->toBmp()) { Serial.println("CONVERT FAIL"); server.send(503, "", ""); return; } Serial.printf("CONVERT OK %dx%d %db\n", frame->getWidth(), frame->getHeight(), static_cast<int>(frame->size())); server.setContentLength(frame->size()); server.send(200, "image/bmp"); WiFiClient client = server.client(); frame->writeTo(client); } void serveJpg() { auto frame = esp32cam::capture(); if (frame == nullptr) { Serial.println("CAPTURE FAIL"); server.send(503, "", ""); return; } Serial.printf("CAPTURE OK %dx%d %db\n", frame->getWidth(), frame->getHeight(), static_cast<int>(frame->size())); server.setContentLength(frame->size()); server.send(200, "image/jpeg"); WiFiClient client = server.client(); frame->writeTo(client); } void handleJpgLo() { if (!esp32cam::Camera.changeResolution(loRes)) { Serial.println("SET-LO-RES FAIL"); } serveJpg(); } void handleJpgHi() { if (!esp32cam::Camera.changeResolution(hiRes)) { Serial.println("SET-HI-RES FAIL"); } serveJpg(); } void handleJpg() { server.sendHeader("Location", "/cam-hi.jpg"); server.send(302, "", ""); } void handleMjpeg() { if (!esp32cam::Camera.changeResolution(hiRes)) { Serial.println("SET-HI-RES FAIL"); } Serial.println("STREAM BEGIN"); WiFiClient client = server.client(); auto startTime = millis(); int res = esp32cam::Camera.streamMjpeg(client); if (res <= 0) { Serial.printf("STREAM ERROR %d\n", res); return; } auto duration = millis() - startTime; Serial.printf("STREAM END %dfrm %0.2ffps\n", res, 1000.0 * res / duration); } void setup() { Serial.begin(115200); Serial.println(); { using namespace esp32cam; Config cfg; cfg.setPins(pins::AiThinker); cfg.setResolution(hiRes); cfg.setBufferCount(2); cfg.setJpeg(80); bool ok = Camera.begin(cfg); Serial.println(ok ? 
"CAMERA OK" : "CAMERA FAIL"); } WiFi.persistent(false); WiFi.mode(WIFI_STA); WiFi.begin(WIFI_SSID, WIFI_PASS); while (WiFi.status() != WL_CONNECTED) { delay(500); } Serial.print("http://"); Serial.println(WiFi.localIP()); Serial.println(" /cam.bmp"); Serial.println(" /cam-lo.jpg"); Serial.println(" /cam-hi.jpg"); Serial.println(" /cam.mjpeg"); server.on("/cam.bmp", handleBmp); server.on("/cam-lo.jpg", handleJpgLo); server.on("/cam-hi.jpg", handleJpgHi); server.on("/cam.jpg", handleJpg); server.on("/cam.mjpeg", handleMjpeg); server.begin(); } void loop() { server.handleClient(); } And OpenCV code import urllib import cv2 import numpy as np url='http://192.168.8.100/cam-hi.jpg' while True: imgResp=urllib.request.urlopen(url) imgNp=np.array(bytearray(imgResp.read()),dtype=np.uint8) img=cv2.imdecode(imgNp,-1) # all the opencv processing is done here cv2.imshow('test',img) if ord('q')==cv2.waitKey(10): exit(0) | 8 | 12 |
60,049,059 | 2020-2-4 | https://stackoverflow.com/questions/60049059/python-linear-regression-typeerror-invalid-type-promotion | i am trying to run linear regression and i am having issues with data type i think. I have tested line by line and everything works until i reach last line where i get the issue TypeError: invalid Type promotion. Based on my research i think it is due to date format. Here is my code: import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression data=pd.read_excel('C:\\Users\\Proximo\\PycharmProjects\Counts\\venv\\Counts.xlsx') data['DATE'] = pd.to_datetime(data['DATE']) data.plot(x = 'DATE', y = 'COUNT', style = 'o') plt.title('Corona Spread Over the Time') plt.xlabel('Date') plt.ylabel('Count') plt.show() X=data['DATE'].values.reshape(-1,1) y=data['COUNT'].values.reshape(-1,1) X_train,X_test,Y_train,Y_test=train_test_split(X,y,test_size=.2,random_state=0) regressor = LinearRegression() regressor.fit(X_train,Y_train) y_pre = regressor.predict(X_test) When i run it this is the full error i get: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-21-c9e943251026> in <module> ----> 1 y_pre = regressor.predict(X_test) 2 c:\users\slavi\pycharmprojects\coronavirus\venv\lib\site-packages\sklearn\linear_model\_base.py in predict(self, X) 223 Returns predicted values. 224 """ --> 225 return self._decision_function(X) 226 227 _preprocess_data = staticmethod(_preprocess_data) c:\users\slavi\pycharmprojects\coronavirus\venv\lib\site-packages\sklearn\linear_model\_base.py in _decision_function(self, X) 207 X = check_array(X, accept_sparse=['csr', 'csc', 'coo']) 208 return safe_sparse_dot(X, self.coef_.T, --> 209 dense_output=True) + self.intercept_ 210 211 def predict(self, X): c:\users\Proximo\pycharmprojects\Count\venv\lib\site-packages\sklearn\utils\extmath.py in safe_sparse_dot(a, b, dense_output) 149 ret = np.dot(a, b) 150 else: --> 151 ret = a @ b 152 153 if (sparse.issparse(a) and sparse.issparse(b) TypeError: invalid type promotion My date format which looks like this: array([['2020-01-20T00:00:00.000000000'], ['2020-01-21T00:00:00.000000000'], ['2020-01-22T00:00:00.000000000'], ['2020-01-23T00:00:00.000000000'], ['2020-01-24T00:00:00.000000000'], ['2020-01-25T00:00:00.000000000'], ['2020-01-26T00:00:00.000000000'], ['2020-01-27T00:00:00.000000000'], ['2020-01-28T00:00:00.000000000'], ['2020-01-29T00:00:00.000000000'], ['2020-01-30T00:00:00.000000000'], ['2020-01-31T00:00:00.000000000'], ['2020-02-01T00:00:00.000000000'], ['2020-02-02T00:00:00.000000000']], dtype='datetime64[ns]') Any suggestion on how to resolve this issue? | I think linear regression not work for date type data.You need to convert it to numerical data. for example import numpy as np import pandas as pd import datetime as dt X_test = pd.DataFrame(np.array([ ['2020-01-24T00:00:00.000000000'], ['2020-01-25T00:00:00.000000000'], ['2020-01-26T00:00:00.000000000'], ['2020-01-27T00:00:00.000000000'], ['2020-01-28T00:00:00.000000000'], ['2020-01-29T00:00:00.000000000'], ['2020-01-30T00:00:00.000000000'], ['2020-01-31T00:00:00.000000000'], ['2020-02-01T00:00:00.000000000'], ['2020-02-02T00:00:00.000000000']], dtype='datetime64[ns]')) X_test.columns = ["Date"] X_test['Date'] = pd.to_datetime(X_test['Date']) X_test['Date']=X_test['Date'].map(dt.datetime.toordinal) Try this approach.this should work. 
Note - it is better to convert the training set dates to numeric and train on that data. | 18 | 31 |
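A sketch of that full flow on a toy DataFrame (column names follow the question; the data itself is made up): convert both the training and prediction dates to ordinals, fit, then predict for future dates.
import datetime as dt
import pandas as pd
from sklearn.linear_model import LinearRegression

# toy stand-in for the question's DataFrame
data = pd.DataFrame({
    "DATE": pd.date_range("2020-01-20", periods=14, freq="D"),
    "COUNT": range(14),
})
data["DATE_ORD"] = data["DATE"].map(dt.datetime.toordinal)

model = LinearRegression().fit(data[["DATE_ORD"]], data["COUNT"])
future = pd.to_datetime(["2020-02-05", "2020-02-06"]).map(dt.datetime.toordinal)
print(model.predict(pd.DataFrame({"DATE_ORD": future})))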
60,032,087 | 2020-2-3 | https://stackoverflow.com/questions/60032087/why-my-package-cant-be-installed-with-pipx | I've got a Python project aws-ssm-tools that uses setup.py for packaging. It comes with 3 scripts: ssm-tunnel, ssm-session and ssm-copy. It can be installed with pip install aws-ssm-tools and puts the scripts into ~/.local/bin/. However, when I try to install it with pipx it fails: ~ $ pipx install aws-ssm-tools No apps associated with package aws-ssm-tools. Try again with '--include-deps' to include apps of dependent packages, which are listed above. If you are attempting to install a library, pipx should not be used. Consider using pip or a similar tool instead. I have the scripts specified in setup.py: SCRIPTS = [ 'ssm-session', 'ssm-copy', 'ssm-tunnel', 'ssm-tunnel-updown.dns-example', ] # ... setup( name="aws-ssm-tools", version=VERSION, packages=find_packages(), scripts=SCRIPTS+[ 'ssm-tunnel-agent' ], # ... ) What else do I have to do to make pipx happy? | As stated in the pipx documentation chapter "How pipx works", section "Developing for pipx", the project requires setuptools entry_points. According to your question, the target project uses scripts; they are similar in purpose to entry points, but pipx does not look for those and does not expose them. | 11 | 8 |
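A sketch of the change the answer implies: expose the scripts as setuptools console_scripts entry points so pipx can discover them. The module and function paths ("ssm_tools.session:main" and so on) are hypothetical; the real package may lay its code out differently.
from setuptools import setup, find_packages

setup(
    name="aws-ssm-tools",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            "ssm-session=ssm_tools.session:main",   # hypothetical module:function paths
            "ssm-copy=ssm_tools.copy:main",
            "ssm-tunnel=ssm_tools.tunnel:main",
        ],
    },
)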
60,048,149 | 2020-2-3 | https://stackoverflow.com/questions/60048149/how-to-convert-png-to-jpg-in-python | I'm trying to compare two images, one a .png and the other a .jpg. So I need to convert the .png file to a .jpg to get closer values for SSIM. Below is the code that I've tried, but I'm getting this error: AttributeError: 'tuple' object has no attribute 'dtype' image2 = imread(thisPath + caption) image2 = io.imsave("jpgtest.jpg", (76, 59)) image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY) image2 = resize(image2, (76, 59)) imshow("is it a jpg", image2) cv2.waitKey() | Before demonstrating how to convert an image from .png to .jpg format, I want to point out that you should be consistent on the library that you use. Currently, you're mixing scikit-image with opencv. It's best to choose one library and stick with it instead of reading in an image with scikit and then converting to grayscale with opencv. To convert a .png to .jpg image using OpenCV, you can use cv2.imwrite. Note with .jpg or .jpeg format, to maintain the highest quality, you must specify the quality value from [0..100] (default value is 95). Simply do this: import cv2 # Load .png image image = cv2.imread('image.png') # Save .jpg image cv2.imwrite('image.jpg', image, [int(cv2.IMWRITE_JPEG_QUALITY), 100]) | 10 | 24 |
60,047,837 | 2020-2-3 | https://stackoverflow.com/questions/60047837/django-singleton-model-to-store-user-settings | I am building a web app with Django. I would like to be able to store final user settings for my app. For now I am using a model Settings that contains all what I need for the app. It seems ok but I don't think that's the right way to do it because I could have a second row in my database and I don't want it. I though about using a file but it is not as easy to manage as a table in a database. Could someone help me with that please ? Thank you ! :) Louison | If I understand your question correctly, you're interested in managing settings for your django app through an editable, easy to update interface. A popular existing approach to this is implemented by django-constance. It can use your database or redis to store the settings, and it makes them editable through the django admin. You can look at the documentation here. Ultimately, the approach of using a database table to store configuration settings is fine. The issue of having one row can feel odd though, so django-constance's approach is to use a key/value design where each individual configuration key has its own row in the table. | 7 | 9 |
60,044,157 | 2020-2-3 | https://stackoverflow.com/questions/60044157/python-ffmpeg-subprocess-broken-pipe | The following script reads a video with OpenCV, applies a transformation to each frame and attempts to write it with ffmpeg. My problem is, that I don't get ffmpeg working with the subprocess module. I always get the error BrokenPipeError: [Errno 32] Broken pipe in the line where I try to write to stdin. Why is that, what am I doing wrong? # Open input video with OpenCV video_in = cv.VideoCapture(src_video_path) frame_width = int(video_in.get(cv.CAP_PROP_FRAME_WIDTH)) frame_height = int(video_in.get(cv.CAP_PROP_FRAME_HEIGHT)) fps = video_in.get(cv.CAP_PROP_FPS) frame_count = int(video_in.get(cv.CAP_PROP_FRAME_COUNT)) bitrate = bitrate * 4096 * 2160 / (frame_width * frame_height) # Process video in ffmpeg pipe # See http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/ command = ['ffmpeg', '-loglevel', 'error', '-y', # Input '-f', 'rawvideo', '-vcodec', 'rawvideo' '-pix_fmt', 'bgr24', '-s', str(frame_width) + 'x' + str(frame_height), '-r', str(fps), # Output '-i', '-', '-an', '-vcodec', 'h264', '-r', str(fps), '-b:v', str(bitrate) + 'M', '-pix_fmt', 'bgr24', dst_video_path ] pipe = sp.Popen(command, stdin=sp.PIPE) for i_frame in range(frame_count): ret, frame = video_in.read() if ret: warped_frame = cv.warpPerspective(frame, homography, (frame_width, frame_height)) pipe.stdin.write(warped_frame.astype(np.uint8).tobytes()) else: print('Stopped early.') break print('Done!') | There is a missing comma after '-vcodec', 'rawvideo'!!! Took me about an hour to notice... You should also close stdin and wait before print('Done!'): pipe.stdin.close() pipe.wait() | 9 | 9 |
60,038,362 | 2020-2-3 | https://stackoverflow.com/questions/60038362/how-to-prevent-xss-attacks-in-django-rest-api-charfields | I'm currently working on an app using Django 2.2 with djangorestframework 3.9.2. I am aware that Django itself provides protection against SQL Injection or in in the context of displaying content in django templates (XSS), but I've noticed that while I use Django REST API, all the CharFields in my models are not sanitized automatically. Note: this question does not apply to django templates. E.g. having a direct messages model (message/models.py): class Message(models.Model): sender = models.ForeignKey(...) receiver = models.ForeignKey(...) message = models.CharField(max_length=1200) timestamp = models.DateTimeField(...) is_read = models.BooleanField(default=False) Actually does not prevent anyone from sending a message with content <script>alert("Hello there");</script>. It will be saved in database and returned by the REST API as is, allowing to remotely run any JS script (basically a Cross Site Scripting). Is this an expected behavior? How can this be prevented? | You can use escape() method inside serializer's validation: from django.utils.html import escape class MySerializer: def validate_myfield(self, value): return escape(value) | 7 | 16 |
60,033,397 | 2020-2-3 | https://stackoverflow.com/questions/60033397/moviewriter-ffmpeg-unavailable-trying-to-use-class-matplotlib-animation-pillo | Is there any way to use a moving plot without ffmpeg? import matplotlib.pyplot as plt import matplotlib.animation as animation from IPython.display import HTML fig, ax = plt.subplots(figsize=(15, 8)) animator = animation.FuncAnimation(fig, draw_barchart, frames=range(1968, 2019)) HTML(animator.to_jshtml()) animator.save('dynamic_images.mp4') My code is above; I am getting ValueError: unknown file extension: '.mp4'. I tried installing ffmpeg with conda install -c conda-forge ffmpeg but ended up with an SSL issue. Is there any way to use a moving plot without ffmpeg? As the error suggests, is there any way to use 'matplotlib.animation.PillowWriter'? Disclaimer: I went through the link https://www.wikihow.com/Install-FFmpeg-on-Windows but the URL is blocked by the IT team | You can save the animated plot as a .gif using the celluloid library: from matplotlib import pyplot as plt from celluloid import Camera import numpy as np # create figure object fig = plt.figure() # load axis box ax = plt.axes() # set axis limit ax.set_ylim(0, 1) ax.set_xlim(0, 10) camera = Camera(fig) for i in range(10): ax.scatter(i, np.random.random()) plt.pause(0.1) camera.snap() animation = camera.animate() animation.save('animation.gif', writer='PillowWriter', fps=2) Output: an animated .gif of the scatter points (animation.gif). | 12 | 8 |
60,034,429 | 2020-2-3 | https://stackoverflow.com/questions/60034429/importerror-cannot-import-name-serial-from-serial-unknown-location | Whenever I execute the code below it gives me the following error: ImportError: cannot import name 'Serial' from 'serial' (unknown location) Code: from serial import Serial arduinodata = Serial('com4',9600) print("Enter n to ON LED and f to OFF LED") while 1: input_data = raw_input() print ("You Entered"+input_data) if (input_data == 'n'): arduinodata.write(b'1') print("LED ON") if (input_data == 'f'): arduinodata.write(b'0') print("LED OFF") I have installed all the required Python modules, like pyserial, pyfirmata, etc., but it continuously gives me this error. | Most likely a missing __init__.py file, or the module, or the file/sub-directory for the module (serial), is on a different layer than the executable file. Hope that helps :). | 11 | 1 |
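A small diagnostic sketch related to the entry above: an import error ending in "(unknown location)" usually means Python is picking up some other serial module (often a local serial.py file or an empty serial/ folder) instead of the installed pyserial package, and printing the module's origin makes that visible. This is illustrative troubleshooting, not part of the accepted answer.

```python
import serial

# Where is the 'serial' module actually coming from?
# For pyserial this should point into site-packages/serial/__init__.py;
# a path inside your own project folder means a local file is shadowing it.
print(getattr(serial, "__file__", "no __file__ (possibly a namespace package)"))

# pyserial exposes Serial and a __version__; a shadowing module usually has neither.
print(hasattr(serial, "Serial"), getattr(serial, "__version__", "unknown"))
```

If the printed path points at your own script or folder, renaming that file or folder (and removing any stale .pyc files) typically resolves the error.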
60,029,873 | 2020-2-2 | https://stackoverflow.com/questions/60029873/pandas-to-json-redundant-backslashes | I have a '.csv' file containing data about movies and I'm trying to reformat it as a JSON file to use it in MongoDB. So I loaded that csv file to a pandas DataFrame and then used to_json method to write it back. here is how one row in DataFrame looks like: In [43]: result.iloc[0] Out[43]: title Avatar release_date 2009 cast [{"cast_id": 242, "character": "Jake Sully", "... crew [{"credit_id": "52fe48009251416c750aca23", "de... Name: 0, dtype: object but when pandas writes it back, it becomes like this: { "title":"Avatar", "release_date":"2009", "cast":"[{\"cast_id\": 242, \"character\": \"Jake Sully\", \"credit_id\": \"5602a8a7c3a3685532001c9a\", \"gender\": 2,...]", "crew":"[{\"credit_id\": \"52fe48009251416c750aca23\", \"department\": \"Editing\", \"gender\": 0, \"id\": 1721,...]" } As you can see, 'cast' ans 'crew' are lists and they have tons of redundant backslashes. These backslashes appear in MongoDB collections and make it impossible to extract data from these two fields. How can I solve this problem other than replacing \" with "? P.S.1: this is how I save the DataFrame as JSON: result.to_json('result.json', orient='records', lines=True) UPDATE 1: Apparently pandas is doing just fine and the problem is caused by the original csv files. here is how they look like: movie_id,title,cast,crew 19995,Avatar,"[{""cast_id"": 242, ""character"": ""Jake Sully"", ""credit_id"": ""5602a8a7c3a3685532001c9a"", ""gender"": 2, ""id"": 65731, ""name"": ""Sam Worthington"", ""order"": 0}, {""cast_id"": 3, ""character"": ""Neytiri"", ""credit_id"": ""52fe48009251416c750ac9cb"", ""gender"": 1, ""id"": 8691, ""name"": ""Zoe Saldana"", ""order"": 1}, {""cast_id"": 25, ""character"": ""Dr. Grace Augustine"", ""credit_id"": ""52fe48009251416c750aca39"", ""gender"": 1, ""id"": 10205, ""name"": ""Sigourney Weaver"", ""order"": 2}, {""cast_id"": 4, ""character"": ""Col. Quaritch"", ""credit_id"": ""52fe48009251416c750ac9cf"", ""gender"": 2, ""id"": 32747, ""name"": ""Stephen Lang"", ""order"": 3},...]" I tried to replace "" with " (and I really wanted to avoid this hack): sed -i 's/\"\"/\"/g' And of course it caused problems in some lines of data when reading it as csv again: ParserError: Error tokenizing data. C error: Expected 1501 fields in line 4, saw 1513 So we can conclude it's not safe to do such blind replacement. Any idea? P.S.2: I'm using kaggle's 5000 movie dataset: https://www.kaggle.com/carolzhangdc/imdb-5000-movie-dataset | Pandas is escaping the " character because it thinks the values in the json columns are text. To get the desired behaviour, simply parse the values in the json column as json. let the file data.csv have the following content (with quotes escaped). # data.csv movie_id,title,cast 19995,Avatar,"[{""cast_id"": 242, ""character"": ""Jake Sully"", ""credit_id"": ""5602a8a7c3a3685532001c9a"", ""gender"": 2, ""id"": 65731, ""name"": ""Sam Worthington"", ""order"": 0}, {""cast_id"": 3, ""character"": ""Neytiri"", ""credit_id"": ""52fe48009251416c750ac9cb"", ""gender"": 1, ""id"": 8691, ""name"": ""Zoe Saldana"", ""order"": 1}, {""cast_id"": 25, ""character"": ""Dr. Grace Augustine"", ""credit_id"": ""52fe48009251416c750aca39"", ""gender"": 1, ""id"": 10205, ""name"": ""Sigourney Weaver"", ""order"": 2}, {""cast_id"": 4, ""character"": ""Col. 
Quaritch"", ""credit_id"": ""52fe48009251416c750ac9cf"", ""gender"": 2, ""id"": 32747, ""name"": ""Stephen Lang"", ""order"": 3}]" read this into a dataframe, then apply the json.loads function & write out to a file as json. df = pd.read_csv('data.csv') df.cast = df.cast.apply(json.loads) df.to_json('data.json', orient='records', lines=True) The output is a properly formatted json (extra newlines added by me) # data.json {"movie_id":19995, "title":"Avatar", "cast":[{"cast_id":242,"character":"Jake Sully","credit_id":"5602a8a7c3a3685532001c9a","gender":2,"id":65731,"name":"Sam Worthington","order":0}, {"cast_id":3,"character":"Neytiri","credit_id":"52fe48009251416c750ac9cb","gender":1,"id":8691,"name":"Zoe Saldana","order":1}, {"cast_id":25,"character":"Dr. Grace Augustine","credit_id":"52fe48009251416c750aca39","gender":1,"id":10205,"name":"Sigourney Weaver","order":2}, {"cast_id":4,"character":"Col. Quaritch","credit_id":"52fe48009251416c750ac9cf","gender":2,"id":32747,"name":"Stephen Lang","order":3}] } | 8 | 6 |
60,018,903 | 2020-2-1 | https://stackoverflow.com/questions/60018903/how-to-replace-all-pixels-of-a-certain-rgb-value-with-another-rgb-value-in-openc | I need to be able to replace all pixels that have a certain RGB value with another color in OpenCV. I’ve tried some of the solutions but none of them worked for me. What is the best way to achieve this? | TLDR; Make all green pixels white with Numpy: import numpy as np pixels[np.all(pixels == (0, 255, 0), axis=-1)] = (255,255,255) I have made some examples of other ways of changing colours here. First I'll cover exact, specific RGB values like you asked in your question, using this image. It has three big blocks of exactly red, exactly green and exactly blue on the left and three gradual transitions between those colours on the right: Here's the initial answer as above again: #!/usr/bin/env python3 import cv2 import numpy as np # Load image im = cv2.imread('image.png') # Make all perfectly green pixels white im[np.all(im == (0, 255, 0), axis=-1)] = (255,255,255) # Save result cv2.imwrite('result1.png',im) This time I define the colour names for extra readability and maintainability. The final line is the important point: # Define some colours for readability - these are in OpenCV **BGR** order - reverse them for PIL red = [0,0,255] green = [0,255,0] blue = [255,0,0] white = [255,255,255] black = [0,0,0] # Make all perfectly green pixels white im[np.all(im == green, axis=-1)] = white Same result. This time I make a re-usable mask of red pixels which I can use in subsequent operations. The final line with the assignment im[Rmask] = black is now particularly easy to read : # Define some colours for readability - these are in OpenCV **BGR** order - reverse them for PIL red = [0,0,255] green = [0,255,0] blue = [255,0,0] white = [255,255,255] black = [0,0,0] # Make mask of all perfectly red pixels Rmask = np.all(im == red, axis=-1) # Make all red pixels black im[Rmask] = black This time I combine a mask of red and blue pixels so you can see the power of masks. The final line is the important point: # Define some colours for readability - these are in OpenCV **BGR** order - reverse them for PIL red = [0,0,255] green = [0,255,0] blue = [255,0,0] white = [255,255,255] black = [0,0,0] # Make mask of all perfectly red pixels and all perfectly blue pixels Rmask = np.all(im == red, axis=-1) Bmask = np.all(im == blue, axis=-1) # Make all red or blue pixels black im[Rmask | Bmask] = black And this time I make all non-red pixels into black - hopefully you are appreciating the power of masks now. The final line is the important point: # Define some colours for readability - these are in OpenCV **BGR** order - reverse them for PIL red = [0,0,255] green = [0,255,0] blue = [255,0,0] white = [255,255,255] black = [0,0,0] # Make mask of all perfectly red pixels Rmask = np.all(im == red, axis=-1) # Make all non-red pixels black im[~Rmask] = black Up till now, we have only made some selection of pixels into a single new colour. What if we want to make some pixels one colour and all other pixels a different colour in a single pass? 
The final line is the important point: # Define some colours for readability - these are in OpenCV **BGR** order - reverse them for PIL red = [0,0,255] green = [0,255,0] blue = [255,0,0] white = [255,255,255] black = [0,0,0] # Make mask of all perfectly red pixels Rmask = np.all(im == red, axis=-1) # Make all red pixels white AND at same time everything else black im = np.where(np.all(im == red, axis=-1, keepdims=True), white, black) If you want to affect a whole range of colours, rather than a specific RGB value, have a look here and here. Keywords: Image processing, Python, prime, change colour, change color, prime. | 14 | 31 |
60,029,027 | 2020-2-2 | https://stackoverflow.com/questions/60029027/decay-parameter-of-adam-optimizer-in-keras | I think that the Adam optimizer is designed such that it automatically adjusts the learning rate. But there is an option to explicitly mention the decay in the Adam parameter options in Keras. I want to clarify the effect of decay on the Adam optimizer in Keras. If we compile the model using decay, say 0.01, on lr = 0.001, and then fit the model running for 50 epochs, then does the learning rate get reduced by a factor of 0.01 after each epoch? Is there any way we can specify that the learning rate should decay only after running for a certain number of epochs? In pytorch there is a different implementation called AdamW, which is not present in the standard keras library. Is this the same as varying the decay after every epoch as mentioned above? Thanks in advance for the reply. | From the source code, decay adjusts lr per iteration according to lr = lr * (1. / (1. + decay * iterations)) # simplified This is epoch-independent. iterations is incremented by 1 on each batch fit (e.g. each time train_on_batch is called, or however many batches are in x for model.fit(x) - usually len(x) // batch_size batches). To implement what you've described, you can use a callback as below: from keras.callbacks import LearningRateScheduler def decay_schedule(epoch, lr): # decay by 0.1 every 5 epochs; use `% 1` to decay after each epoch if (epoch % 5 == 0) and (epoch != 0): lr = lr * 0.1 return lr lr_scheduler = LearningRateScheduler(decay_schedule) model.fit(x, y, epochs=50, callbacks=[lr_scheduler]) The LearningRateScheduler takes a function as an argument, and the function is fed the epoch index and lr at the beginning of each epoch by .fit. It then updates lr according to that function - so on the next epoch, the function is fed the updated lr. Also, there is a Keras implementation of AdamW, NadamW, and SGDW, by me - Keras AdamW. Clarification: the very first call to .fit() invokes on_epoch_begin with epoch = 0 - if we don't wish lr to be decayed immediately, we should add an epoch != 0 check in decay_schedule. Then, epoch denotes how many epochs have already passed - so when epoch = 5, the decay is applied. | 10 | 10 |
60,019,006 | 2020-2-1 | https://stackoverflow.com/questions/60019006/can-we-plot-image-data-in-altair | I am trying to plot image data in altair, specifically trying to replicate face recognition example in this link from Jake VDP's book - https://jakevdp.github.io/PythonDataScienceHandbook/05.07-support-vector-machines.html. Any one had luck plotting image data in altair? | Altair features an image mark that can be used if you want to plot images that are available at a URL; for example: import altair as alt import pandas as pd source = pd.DataFrame.from_records([ {"x": 0.5, "y": 0.5, "img": "https://vega.github.io/vega-datasets/data/ffox.png"}, {"x": 1.5, "y": 1.5, "img": "https://vega.github.io/vega-datasets/data/gimp.png"}, {"x": 2.5, "y": 2.5, "img": "https://vega.github.io/vega-datasets/data/7zip.png"} ]) alt.Chart(source).mark_image( width=50, height=50 ).encode( x='x', y='y', url='img' ) Altair is not as well suited to displaying 2-dimensional data arrays as images, because the grammar is primarily designed to work with structured tabular data. However, it is possible to do using a combination of flatten transforms and window transforms. Here is an example using the data from the page you linked to: import altair as alt import pandas as pd from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=60) data = pd.DataFrame({ 'image': list(faces.images[:12]) # list of 2D arrays }) alt.Chart(data).transform_window( index='count()' # number each of the images ).transform_flatten( ['image'] # extract rows from each image ).transform_window( row='count()', # number the rows... groupby=['index'] # ...within each image ).transform_flatten( ['image'] # extract the values from each row ).transform_window( column='count()', # number the columns... groupby=['index', 'row'] # ...within each row & image ).mark_rect().encode( alt.X('column:O', axis=None), alt.Y('row:O', axis=None), alt.Color('image:Q', scale=alt.Scale(scheme=alt.SchemeParams('greys', extent=[1, 0])), legend=None ), alt.Facet('index:N', columns=4) ).properties( width=100, height=120 ) | 14 | 18 |
60,015,319 | 2020-2-1 | https://stackoverflow.com/questions/60015319/is-it-necessary-to-call-super-init-explicitly-in-python | I came from Java, where we can avoid calling the superclass's zero-argument constructor; the call to it is generated implicitly by the compiler. I read this post about super() and now I am wondering whether it is really necessary to do something like this explicitly: class A(object): def __init__(self): print("world") class B(A): def __init__(self): print("hello") super().__init__() # Do we get some undefined behavior if we do not call it explicitly? | If you override the __init__ method of the superclass, then the __init__ method of the subclass needs to explicitly call it if that is the intended behavior, yes. Your mental model of __init__ is incorrect; it is not the constructor method, it is a hook which the constructor method calls to let you customize object initialization easily. (The actual constructor is called __new__ but you don't need to know this, and will probably never need to interact with it directly, let alone change it.) | 13 | 21 |
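A short illustration of the behaviour described in the answer above: nothing fails when a subclass skips super().__init__(), but any state the base class's __init__ would have set up is simply missing. The class names here are illustrative only.

```python
class A:
    def __init__(self):
        self.greeting = "world"

class B(A):
    def __init__(self):
        # Base __init__ is NOT called, so self.greeting is never set.
        self.name = "hello"

class C(A):
    def __init__(self):
        super().__init__()   # base attributes get initialised
        self.name = "hello"

print(C().greeting)              # -> world
print(B().name)                  # -> hello
print(hasattr(B(), "greeting"))  # -> False: no error, just missing state
```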
60,012,168 | 2020-1-31 | https://stackoverflow.com/questions/60012168/how-to-change-the-time-of-a-pandas-datetime-column-to-midnight | Using Pandas 1.0.0, how can I change the time of a datetime dataframe column to midnight in one line of code? e.g.: from START_DATETIME 2017-02-13 09:13:33 2017-03-11 23:11:35 2017-03-12 00:44:32 ... to START_DATETIME 2017-02-13 00:00:00 2017-03-11 00:00:00 2017-03-12 00:00:00 ... My attempt: df['START_DATETIME'] = df['START_DATETIME'].apply(lambda x: pd.Timestamp(x).replace(hour=0, minute=0, second=0)) but this produces START_DATETIME 2017-02-13 2017-03-11 2017-03-12 ... | Your method already converted the datetime values correctly to midnight, i.e. their time is 00:00:00. Pandas just intelligently doesn't show the time part because it is redundant to display the same 00:00:00 time for every value. After you assign the result back to START_DATETIME, printing a cell will show it: print(df.loc[0, 'START_DATETIME']) Output: 2017-02-13 00:00:00 Besides, to convert the time to 00:00:00 you should use dt.normalize or dt.floor: df['START_DATETIME'] = pd.to_datetime(df['START_DATETIME']).dt.normalize() or df['START_DATETIME'] = pd.to_datetime(df['START_DATETIME']).dt.floor('D') If you want to force pandas to show 00:00:00 in the series output, you need to convert START_DATETIME to str after converting: pd.to_datetime(df['START_DATETIME']).dt.floor('D').dt.strftime('%Y-%m-%d %H:%M:%S') Out[513]: 0 2017-02-13 00:00:00 1 2017-03-11 00:00:00 2 2017-03-12 00:00:00 Name: START_DATETIME, dtype: object | 7 | 13 |
60,013,721 | 2020-2-1 | https://stackoverflow.com/questions/60013721/how-to-see-complete-rows-in-google-colab | I am using Google Colab python 3.x and I have a Dataframe as below. I would like to see all cells on each row and column. How can I do this? I tried pd.set_option('display.max_columns', 3000) but it didn't work. # importing pandas as pd import pandas as pd # dictionary of lists dict = {'name':["a1", "b2", "c2", "d3"], 'degree': ["We explained to customer how correct fees (100) were charged. Account balance was too low", "customer was late in paying fees and we have to charge fine", "customer's credit score was too low and we have to charge higher interest rate", "customer complained a lot and didnt listen to our explanation. I had to escalate the call"], 'score':[90, 40, 80, 98]} # creating a dataframe from a dictionary df = pd.DataFrame(dict) print (df) name degree score 0 a1 We explained to customer how correct fees (100... 90 1 b2 customer was late in paying fees and we have t... 40 2 c2 customer's credit score was too low and we hav... 80 3 d3 customer complained a lot and didnt listen to ... 98 | use pd.set_option('max_colwidth', <width>) for column width & pd.set_option('max_rows', <rows>) for number of rows. see https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html [] pd.set_option('max_rows', 99999) [] pd.set_option('max_colwidth', 400) [] pd.describe_option('max_colwidth') display.max_colwidth : int The maximum width in characters of a column in the repr of a pandas data structure. When the column overflows, a "..." placeholder is embedded in the output. [default: 50] [currently: 400] [] df = pd.DataFrame(d) [] df name degree score 0 a1 We explained to customer how correct fees (100) were charged. Account balance was too low 90 1 b2 customer was late in paying fees and we have to charge fine 40 2 c2 customer's credit score was too low and we have to charge higher interest rate 80 3 d3 customer complained a lot and didnt listen to our explanation. I had to escalate the call 98 | 7 | 17 |
60,008,773 | 2020-1-31 | https://stackoverflow.com/questions/60008773/what-is-the-numpy-equivalent-of-random-sample | I want to randomly choose 2 elements out of a list. >>> import random >>> random.sample(["foo", "bar", "baz", "quux"], 2) ['quux', 'bar'] But I want to use a numpy.random.Generator to do it, rather than using Python's global random number generator. Is there a built-in or easy way to do this? >>> import numpy as np >>> gen = np.random.default_rng() >>> ??? [edit] the point is to make use of gen which allows you to seed it for reproducibility. I realize the same can hypothetically be accomplished by re-seeding global generators, but I specifically want to use gen, a local generator, rather than relying on global generators. | If you really want to do it from the numpy.random.Generator: import numpy as np gen = np.random.default_rng() gen.choice(["foo", "bar", "baz", "quux"], 2, replace=False) Note that np.random.choice selects with replacement by default (i.e. each item can be sampled multiple times), so turn this off if you want an equivalent method to random.sample (credit: @ayhan). | 7 | 7 |
60,007,062 | 2020-1-31 | https://stackoverflow.com/questions/60007062/how-do-i-calculate-the-levenshtein-distance-between-two-pandas-dataframe-columns | I'm trying to calculate the Levenshtein distance between two Pandas columns but I'm getting stuck. Here is the library I'm using. Here is a minimal, reproducible example: import pandas as pd from textdistance import levenshtein attempts = [['passw0rd', 'pasw0rd'], ['passwrd', 'psword'], ['psw0rd', 'passwor']] df=pd.DataFrame(attempts, columns=['password', 'attempt']) password attempt 0 passw0rd pasw0rd 1 passwrd psword 2 psw0rd passwor My poor attempt: df.apply(lambda x: levenshtein.distance(*zip(x['password'] + x['attempt'])), axis=1) This is how the function works. It takes two strings as arguments: levenshtein.distance('helloworld', 'heloworl') Out[1]: 2 | Maybe I'm missing something; is there a reason you don't like the lambda expression? This works for me: import pandas as pd from textdistance import levenshtein attempts = [['passw0rd', 'pasw0rd'], ['passwrd', 'psword'], ['psw0rd', 'passwor'], ['helloworld', 'heloworl']] df=pd.DataFrame(attempts, columns=['password', 'attempt']) df.apply(lambda x: levenshtein.distance(x['password'], x['attempt']), axis=1) out: 0 1 1 3 2 4 3 2 dtype: int64 | 10 | 12 |
60,000,802 | 2020-1-31 | https://stackoverflow.com/questions/60000802/how-can-i-see-all-installed-python-modules-in-jupyter-lab-like-pip-freeze-with | I'm looking for a way to get a list of all installed/importable python modules from a within a Jupyterlab notebook. From the command line, I can get the list by running py -3 -m pip freeze (or) pip freeze In the Jupyterlab console, running pip freeze returns The following command must be run outside of the IPython shell: $ pip freeze The Python package manager (pip) can only be used from outside of IPython. Please reissue the `pip` command in a separate terminal or command prompt. See the Python documentation for more information on how to install packages: https://docs.python.org/3/installing/ For older versions of pip, it was possible to import pip and get a list from within a notebook. The command was help('modules') This now gives a warning and returns nothing. c:\python37\lib\site-packages\IPython\kernel\__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated since IPython 4.0.You should import from ipykernel or jupyter_client instead. "You should import from ipykernel or jupyter_client instead.", ShimWarning) 10 year old stackoverflow solutions like How can I get a list of locally installed Python modules? also no longer work. Is there a proper way of doing this (without using a subprocess hack or running pip as an external program like "!pip") | import pip._internal.operations.freeze _ = pip._internal.operations.freeze.get_installed_distributions() print(sorted(["%s==%s" % (i.key, i.version) for i in _])[:10]) ['absl-py==0.7.1', 'aiml==0.9.2', 'aio-utils==0.0.1', 'aiocache==0.10.1', 'aiocontextvars==0.2.2', 'aiocqhttp==0.6.7', 'aiodns==2.0.0', 'aiofiles==0.4.0', 'aiohttp-proxy==0.1.1', 'aiohttp==3.6.2'] This works in Win10 with Python 3.6 & 3.7 (ipython, pip.version: '20.0.1') at least. I took a look at the source code in Lib\site-packages\pip. | 9 | 3 |
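Since the answer above relies on pip's private _internal API (which can change between pip releases), here is an alternative sketch using pkg_resources from setuptools, which is importable in most Jupyter environments without shelling out; the exact package list printed will of course differ per machine.

```python
import pkg_resources

# Equivalent of `pip freeze`, built from the active environment's metadata.
installed = sorted(
    f"{dist.project_name}=={dist.version}"
    for dist in pkg_resources.working_set
)
print(len(installed), "packages installed")
print(installed[:10])
```

On Python 3.8+ the standard library's importlib.metadata.distributions() provides the same information without depending on setuptools.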
60,003,006 | 2020-1-31 | https://stackoverflow.com/questions/60003006/could-validation-data-be-a-generator-in-tensorflow-keras-2-0 | In official documents of tensorflow.keras, validation_data could be: tuple (x_val, y_val) of Numpy arrays or tensors tuple (x_val, y_val, val_sample_weights) of Numpy arrays dataset For the first two cases, batch_size must be provided. For the last case, validation_steps could be provided. It does not mention if generator could act as validation_data. So I want to know if validation_data could be a datagenerator? like the following codes: net.fit_generator(train_it.generator(), epoch_iterations * batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=val_it.generator(), nb_val_samples=3, callbacks=[checker, tb, stopper, saver]) Update: In the official documents of keras, the same contents, but another sentense is added: dataset or a dataset iterator Considering that dataset For the first two cases, batch_size must be provided. For the last case, validation_steps could be provided. I think there should be 3 cases. Keras' documents are correct. So I will post an issue in tensorflow.keras to update the documents. | Yes it can, that's strange that it is not in the doc but is it working exactly like the x argument, you can also use a keras.Sequence or a generator. In my project I often use keras.Sequence that acts like a generator Minimum working example that shows that it works : import numpy as np from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Flatten def generator(batch_size): # Create empty arrays to contain batch of features and labels batch_features = np.zeros((batch_size, 1000)) batch_labels = np.zeros((batch_size,1)) while True: for i in range(batch_size): yield batch_features, batch_labels model = Sequential() model.add(Dense(125, input_shape=(1000,), activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) train_generator = generator(64) validation_generator = generator(64) model.fit(train_generator, validation_data=validation_generator, validation_steps=100, epochs=100, steps_per_epoch=100) 100/100 [==============================] - 1s 13ms/step - loss: 0.6689 - accuracy: 1.0000 - val_loss: 0.6448 - val_accuracy: 1.0000 Epoch 2/100 100/100 [==============================] - 0s 4ms/step - loss: 0.6223 - accuracy: 1.0000 - val_loss: 0.6000 - val_accuracy: 1.0000 Epoch 3/100 100/100 [==============================] - 0s 4ms/step - loss: 0.5792 - accuracy: 1.0000 - val_loss: 0.5586 - val_accuracy: 1.0000 Epoch 4/100 100/100 [==============================] - 0s 4ms/step - loss: 0.5393 - accuracy: 1.0000 - val_loss: 0.5203 - val_accuracy: 1.0000 | 13 | 15 |
59,996,493 | 2020-1-31 | https://stackoverflow.com/questions/59996493/does-await-always-give-other-tasks-a-chance-to-execute | I'd like to know what guarantees python gives around when a event loop will switch tasks. As I understand it async / await are significantly different from threads in that the event loop does not switch task based on time slicing, meaning that unless the task yields (await), it will carry on indefinitely. This can actually be useful because it is easier to manage critical sections under asyncio than with threading. What I'm less clear about is something like the following: async def caller(): while True: await callee() async def callee(): pass In this example caller is repeatedly doing await. So technically it is yielding. But I'm not clear on whether this will allow other tasks on the event loop to execute because it only yields to callee and that is never yielding. That is if I awaited callee inside a "critical section" even though I know it won't block, am I at risk of something else unexpected happening? | You are right to be wary. caller yields from callee, and yields to the event loop. Then the event loop decides which task to resume. Other tasks may (hopefully) be squeezed in between the calls to callee. callee needs to await an actual blocking Awaitable such as asyncio.Future or asyncio.sleep(), not a coroutine, otherwise the control will not be returned to the event loop until caller returns. For example, the following code will finish the caller2 task before it starts working on the caller1 task. Because callee2 is essentially a sync function without awaiting a blocking I/O operations, therefore, no suspension point is created and caller2 will resume immediately after each call to callee2. import asyncio import time async def caller1(): for i in range(5): await callee1() async def callee1(): await asyncio.sleep(1) print(f"called at {time.strftime('%X')}") async def caller2(): for i in range(5): await callee2() async def callee2(): time.sleep(1) print(f"sync called at {time.strftime('%X')}") async def main(): task1 = asyncio.create_task(caller1()) task2 = asyncio.create_task(caller2()) await task1 await task2 asyncio.run(main()) Result: sync called at 19:23:39 sync called at 19:23:40 sync called at 19:23:41 sync called at 19:23:42 sync called at 19:23:43 called at 19:23:43 called at 19:23:44 called at 19:23:45 called at 19:23:46 called at 19:23:47 But if callee2 awaits as the following, the task switching will happen even if it awaits asyncio.sleep(0), and the tasks will run concurrently. async def callee2(): await asyncio.sleep(1) print('sync called') Result: called at 19:22:52 sync called at 19:22:52 called at 19:22:53 sync called at 19:22:53 called at 19:22:54 sync called at 19:22:54 called at 19:22:55 sync called at 19:22:55 called at 19:22:56 sync called at 19:22:56 This behavior is not necessarily intuitive, but it makes sense considering that asyncio was made to handle I/O operations and networking concurrently, not the usual synchronous python codes. Another thing to note is: This still works if the callee awaits a coroutine that, in turn, awaits a asyncio.Future, asyncio.sleep(), or another coroutine that await one of those things down the chain. The flow control will be returned to the event loop when the blocking Awaitable is awaited. So the following also works. async def callee2(): await inner_callee() print(f"sync called at {time.strftime('%X')}") async def inner_callee(): await asyncio.sleep(1) | 16 | 19 |
60,000,179 | 2020-1-31 | https://stackoverflow.com/questions/60000179/sphinx-insert-argument-documentation-from-parent-method | I have some classes that inherit from each other. All classes contain the same method (let us call it mymethod), whereby the children overwrite the base class method. I want to generate a documentation for mymethod in all classes using sphinx. Suppose mymethod takes an argument myargument. This argument has the same type and meaning for both the base method as well as the inherited method. To minimize redundancies, I would like to write the documentation for myargument only for the base class and insert the documentation in the child method's documentation. That is, I do not want to only put a simple reference to the base class but rather dynamically insert the text when I generate the documentation. Can this be done? How? Below please find some code illustrating the problem. class BaseClass def mymethod(myargument): """This does something Params ------ myargument : int Description of the argument """ [...] class MyClass1(BaseClass): def mymethod(myargument): """This does something Params ------ [here I would like to insert in the description of ``myargument`` from ``BaseClass.mymethod``] """ BaseClass.mymethod(myargument) [...] class MyClass2(BaseClass): def mymethod(myargument, argument2): """This does something Params ------ [here I would like to insert in the description of ``myargument`` in ``BaseClass.mymethod``] argument2 : int Description of the additional argument """ BaseClass.mymethod(argument) [...] | Probably not ideal, but maybe you could use a decorator to extend the docstring. For example: class extend_docstring: def __init__(self, method): self.doc = method.__doc__ def __call__(self, function): if self.doc is not None: doc = function.__doc__ function.__doc__ = self.doc if doc is not None: function.__doc__ += doc return function class BaseClass: def mymethod(myargument): """This does something Params ------ myargument : int Description of the argument """ [...] class MyClass1(BaseClass): @extend_docstring(BaseClass.mymethod) def mymethod(myargument): BaseClass.mymethod(myargument) [...] class MyClass2(BaseClass): @extend_docstring(MyClass1.mymethod) def mymethod(myargument, argument2): """argument2 : int Description of the additional argument """ BaseClass.mymethod(argument) [...] print('---BaseClass.mymethod---') print(BaseClass.mymethod.__doc__) print('---MyClass1.mymethod---') print(MyClass1.mymethod.__doc__) print('---MyClass2.mymethod---') print(MyClass2.mymethod.__doc__) Result: ---BaseClass.mymethod--- This does something Params ------ myargument : int Description of the argument ---MyClass1.mymethod--- This does something Params ------ myargument : int Description of the argument ---MyClass2.mymethod--- This does something Params ------ myargument : int Description of the argument argument2 : int Description of the additional argument The override method could be resolved dynamically if you make the decorator a descriptor and search for it into __get__ but that means the decorator is no longer stackable as it doesn't return the real function. | 8 | 2 |
59,975,604 | 2020-1-29 | https://stackoverflow.com/questions/59975604/how-to-inverse-a-dft-with-magnitude-with-opencv-python | I'm new to all of this, I would like to get a magnitude spectrum from an image and then rebuild the image from a modified magnitude spectrum.. But for now i'am getting a very dark reconstitution. import numpy as np import cv2 from matplotlib import pyplot as plt img = cv2.imread('IMG.jpg',0) dft = cv2.dft(np.float32(img),flags = cv2.DFT_COMPLEX_OUTPUT) dft_shift = np.fft.fftshift(dft) m, a = np.log(cv2.cartToPolar(dft_shift[:,:,0],dft_shift[:,:,1])) # do somthing with m x, y = cv2.polarToCart(np.exp(m), a) back = cv2.merge([x, y]) f_ishift = np.fft.ifftshift(back) img_back = cv2.idft(f_ishift) img_back = cv2.magnitude(img_back[:,:,0],img_back[:,:,1]) plt.subplot(131),plt.imshow(img, cmap = 'gray') plt.title('Input Image'), plt.xticks([]), plt.yticks([]) plt.subplot(132),plt.imshow(m, cmap = 'gray') plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([]) plt.subplot(133),plt.imshow(img_back, cmap = 'gray') plt.title('result'), plt.xticks([]), plt.yticks([]) plt.show() the result Can you guys help me figure out why is this so dark. Thank in advance :) EDIT I tryed to normalise the image, but it's not working. I'm still having a very dark image. import numpy as np import cv2 from matplotlib import pyplot as plt img = cv2.imread('IMG.jpg',0) dft = cv2.dft(np.float32(img),flags = cv2.DFT_COMPLEX_OUTPUT) dft_shift = np.fft.fftshift(dft) m, a = np.log1p(cv2.cartToPolar(dft_shift[:,:,0],dft_shift[:,:,1])) # modify m, then use the modify m to reconstruct x, y = cv2.polarToCart(np.expm1(m), a) back = cv2.merge([x, y]) f_ishift = np.fft.ifftshift(back) img_back = cv2.idft(f_ishift, flags=cv2.DFT_SCALE) img_back = cv2.magnitude(img_back[:,:,0],img_back[:,:,1]) min, max = np.amin(img, (0,1)), np.amax(img, (0,1)) print(min,max) # re-normalize to 8-bits min, max = np.amin(img_back, (0,1)), np.amax(img_back, (0,1)) print(min,max) img_back = cv2.normalize(img_back, None, alpha=0, beta=252, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) min, max = np.amin(img_back, (0,1)), np.amax(img_back, (0,1)) print(min,max) plt.subplot(131),plt.imshow(img, cmap = 'gray') plt.title('Input Image'), plt.xticks([]), plt.yticks([]) plt.subplot(132),plt.imshow(m, cmap = 'gray') plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([]) plt.subplot(133),plt.imshow(img_back, cmap = 'gray') plt.title('result'), plt.xticks([]), plt.yticks([]) plt.show() cv2.waitKey(0) cv2.destroyAllWindows() output: 0 252 0.36347726 5867.449 0 252 I would like to modify the magnitude spectrum and used the modify version to reconstruct the image. | If you need to modify the magnitude by raising it to a power near 1 (called coefficient rooting or alpha rooting), then it is just a simple modification of my code above using Python/OpenCV. Simply add cv2.pow(mag, 1.1) before converting the magnitude and phase back to real and imaginary components. 
Input: import numpy as np import cv2 # read input as grayscale img = cv2.imread('lena.png', 0) # convert image to floats and do dft saving as complex output dft = cv2.dft(np.float32(img), flags = cv2.DFT_COMPLEX_OUTPUT) # apply shift of origin from upper left corner to center of image dft_shift = np.fft.fftshift(dft) # extract magnitude and phase images mag, phase = cv2.cartToPolar(dft_shift[:,:,0], dft_shift[:,:,1]) # get spectrum for viewing only spec = np.log(mag) / 30 # NEW CODE HERE: raise mag to some power near 1 # values larger than 1 increase contrast; values smaller than 1 decrease contrast mag = cv2.pow(mag, 1.1) # convert magnitude and phase into cartesian real and imaginary components real, imag = cv2.polarToCart(mag, phase) # combine cartesian components into one complex image back = cv2.merge([real, imag]) # shift origin from center to upper left corner back_ishift = np.fft.ifftshift(back) # do idft saving as complex output img_back = cv2.idft(back_ishift) # combine complex components into original image again img_back = cv2.magnitude(img_back[:,:,0], img_back[:,:,1]) # re-normalize to 8-bits min, max = np.amin(img_back, (0,1)), np.amax(img_back, (0,1)) print(min,max) img_back = cv2.normalize(img_back, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) cv2.imshow("ORIGINAL", img) cv2.imshow("MAG", mag) cv2.imshow("PHASE", phase) cv2.imshow("SPECTRUM", spec) cv2.imshow("REAL", real) cv2.imshow("IMAGINARY", imag) cv2.imshow("COEF ROOT", img_back) cv2.waitKey(0) cv2.destroyAllWindows() # write result to disk cv2.imwrite("lena_grayscale_opencv.png", img) cv2.imwrite("lena_grayscale_coefroot_opencv.png", img_back) Original Grayscale: Coefficient Rooting Result: Here is an animation showing the differences (created using ImageMagick): | 7 | 8 |
59,979,760 | 2020-1-30 | https://stackoverflow.com/questions/59979760/how-to-detect-all-rectangular-boxes-python-opencv-without-missing-anything | I'm trying to detect all the rectangles from the relational database. But some of the boxes are not being detected by my script. Please help me to do that. Thank you. The Image: My Code: #!/usr/bin/python import cv2 import numpy as np im = cv2.imread("table.png") image = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(image,0,255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] edge = cv2.Canny(thresh,30,200) cont = cv2.findContours(edge,cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[0] for j,i in enumerate(cont): x,y,w,h = cv2.boundingRect(i) if (w*h>900): cv2.drawContours(image,[i],0,(0,0,255),3) cv2.imshow("Image",image) cv2.waitKey(0) OUTPUT: | Here's an simple approach using thresholding + morphological operations. Obtain binary image. Load image, convert to grayscale, then adaptive threshold Fill rectangular contours. Find contours and fill the contours to create filled rectangular blocks. Perform morph open. We create a rectangular structuring element and morph open to remove the lines Draw rectangle. Find contours and draw bounding rectangles. Here's each step visualized: Using this screenshotted image (contains more border since the provided image has the rectangles too close to the border). You could add a border to the input image instead of screenshotting for more border area. Take a look at add border to image Binary image Filled rectangular contours Morph open Result Code import cv2 # Load iamge, grayscale, adaptive threshold image = cv2.imread('1.png') result = image.copy() gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,51,9) # Fill rectangular contours cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(thresh, [c], -1, (255,255,255), -1) # Morph open kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9)) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=4) # Draw rectangles cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3) cv2.imshow('thresh', thresh) cv2.imshow('opening', opening) cv2.imshow('image', image) cv2.waitKey() Note: Depending on the image, you may have to modify the kernel size. For instance, it may be necessary to increase the kernel from (5, 5) to say (11, 11). In addition, you could increase or decrease the number of iterations when performing cv2.morphologyEx(). There is a trade-off when increasing or decreasing the kernel size as you may remove more or less of the lines. Again, it all varies depending on the input image. | 10 | 14 |
59,968,630 | 2020-1-29 | https://stackoverflow.com/questions/59968630/tensorflow-one-custom-metric-for-multioutput-models | I can't find the info in the documentation so I am asking here. I have a multioutput model with 3 different outputs: model = tf.keras.Model(inputs=[input], outputs=[output1, output2, output3]) The predicted labels for validation are constructed from these 3 outputs to form only one, it's a post-processing step. The dataset used for training is a dataset of those 3 intermediary outputs, for validation I evaluate on a dataset of labels instead of the 3 kind of intermediary data. I would like to evaluate my model using a custom metric that handle the post processing and comparaison with the ground truth. My question is, in the code of the custom metric, will y_pred be a list of the 3 outputs of the model? class MyCustomMetric(tf.keras.metrics.Metric): def __init__(self, name='my_custom_metric', **kwargs): super(MyCustomMetric, self).__init__(name=name, **kwargs) def update_state(self, y_true, y_pred, sample_weight=None): # ? is y_pred a list [batch_output_1, batch_output_2, batch_output_3] ? def result(self): pass # one single metric handling the 3 outputs? model.compile(optimizer=tf.compat.v1.train.RMSPropOptimizer(0.01), loss=tf.keras.losses.categorical_crossentropy, metrics=[MyCustomMetric()]) | With your given model definition, this is a standard multi-output Model. model = tf.keras.Model(inputs=[input], outputs=[output_1, output_2, output_3]) In general, all (custom) Metrics as well as (custom) Losses will be called on every output separately (as y_pred)! Within the loss/metric function you will only see one output together with the one corresponding target tensor. By passing a list of loss functions (length == number of outputs of your model) you can specifiy which loss will be used for which output: model.compile(optimizer=Adam(), loss=[loss_for_output_1, loss_for_output_2, loss_for_output_3], loss_weights=[1, 4, 8]) The total loss (which is the objective function to minimize) will be the additive combination of all losses multiplied with the given loss weights. It is almost the same for the metrics! Here you can pass (as for the loss) a list (lenght == number of outputs) of metrics and tell Keras which metric to use for which of your model outputs. model.compile(optimizer=Adam(), loss='mse', metrics=[metrics_for_output_1, metrics_for_output2, metrics_for_output3]) Here metrics_for_output_X can be either a function or a list of functions, which all be called with the one corresponding output_X as y_pred. This is explained in detail in the documentation of Multi-Output Models in Keras. They also show examples for using dictionarys (to map loss/metric functions to a specific output) instead of lists. https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models Further information: If I understand you correctly you want to train your model using a loss function comparing the three model outputs with three ground truth values and want to do some sort of performance evaluation by comparing a derived value from the three model outputs and a single ground truth value. Usually the model gets trained on the same objective it is evaluated on, otherwise you might get poorer results when evaluating your model! Anyways... for evaluating your model on a single label I suggest you either: 1. (The clean solution) Rewrite your model and incorporate the post-processing steps. 
Add all the necessary operations (as layers) and map those to an auxiliary output. For training your model you can set the loss_weight of the auxiliary output to zero. Merge your Datasets so you can feed your model the model input, the intermediate target outputs as well as the labels. As explained above you can define now a metric comparing the auxiliary model output with the given target labels. 2. Or you train your model and derive the metric e.g. in a custom Callback by calculating your post-processing steps on the three outputs of model.predict(input). This will make it necessary to write custom summaries if you want to track those values in your tensorboard! That's why I would not recommend this solution. | 8 | 6 |
59,974,146 | 2020-1-29 | https://stackoverflow.com/questions/59974146/installing-an-old-version-of-scikit-learn | Problem Statment I'm trying to run some old python code that requires scikit-learn 18.0 but the current version I have installed is 0.22 and so I'm getting a warning/invalid data when I run the code. What I've Tried I tried installing the specific version both in the terminal: python -m pip install scikit-learn==0.18 and in conda and none of that has worked. All I can install is v 0.22. Help? Thanks. Error In Terminal ERROR: Failed building wheel for scikit-learn Running setup.py clean for scikit-learn Failed to build scikit-learn Installing collected packages: scikit-learn Found existing installation: scikit-learn 0.22.1 Uninstalling scikit-learn-0.22.1: Successfully uninstalled scikit-learn-0.22.1 Running setup.py install for scikit-learn ... error ERROR: Command errored out with exit status 1: Error through conda environment: PackagesNotFoundError: The following packages are not available from current channels: - scikit-learn==0.18 this was after creating and activating the new environment | Tackling your issues one at a time: python -m pip install scikit-learn==0.18 fails This is probably due to the fact that scikit-learn==0.18, if you check on pypi only has whl files for python 3.5 and 2.7 for windows, therefore pip downloads the source distribution and then fails in compiling it, probably because it doesn't work with newer python versions The following packages are not available from current channels This happens, because scikit-learn==18.0 simply does not exist in the default conda channels. On my win64 machine, the oldesst version that I can install is 19.0 (You can check by typing conda search scikit-learn in the cmd), so unfortunately no way to install using the default conda channels. There is a channel called free (found through the anaconda website) that has scikit-learn 18.0, so you could install with: conda install -c free scikit-learn To also make sure that the python version is compatible, I would just create a fitting environment: conda create -n py35 -c free scikit-learn=0.18.0 | 9 | 12 |
59,981,999 | 2020-1-30 | https://stackoverflow.com/questions/59981999/find-monday-of-current-week-in-python | I am trying to get the timestamp of monday at 00:00 of the current week in python. I know that for a specific date, the timestamp can be found using baseTime = int(datetime.datetime.timestamp(datetime.datetime(2020,1,1))) However, I want my program to automatically find out, based on the date, which date monday of the current week was, and then get the timestamp. That is to say, it would return different dates this week and next week, meaning different timestamps. I know that the current date can be found using import datetime today = datetime.date.today() Thanks in advance | I am trying to get the timestamp of monday at 00:00 of the current week in python You could use timedelta method from datetime package. from datetime import datetime, timedelta now = datetime.now() monday = now - timedelta(days = now.weekday()) print(monday) Output 2020-01-27 08:47:01 | 16 | 50 |
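The accepted answer above returns Monday with the current wall-clock time; since the question asked for the Unix timestamp of Monday at 00:00, here is a small sketch extending that answer by zeroing out the time-of-day before calling timestamp() (the system's local timezone is assumed).

```python
from datetime import datetime, timedelta

now = datetime.now()
monday = now - timedelta(days=now.weekday())
# Drop the time-of-day component to land exactly on Monday 00:00:00.
monday_midnight = monday.replace(hour=0, minute=0, second=0, microsecond=0)

base_time = int(monday_midnight.timestamp())
print(monday_midnight, base_time)
```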
59,978,162 | 2020-1-30 | https://stackoverflow.com/questions/59978162/how-to-run-gunicorn-while-still-using-websocket | So I am using Docker for this Python chat app project. I originally had python manage.py runserver 0.0.0.0:8000 as my command in docker-compose. I found that I should switch to gunicorn if I want to deploy my app on the web (like Heroku). The tutorial I found says to simply change the command in docker-compose to gunicorn myproject.wsgi -b 0.0.0.0:8000. I did that, and all the websocket connections broke. Sends fail because the WebSocket is still in the CONNECTING state, and after a while the handshake fails with a status code of 404. All the setup was the same as before, except that one line. Just wondering what else I need to change to make websockets work with gunicorn? Thanks EDIT: after some digging on the internet it seems that gunicorn wasn't meant to be run with websockets (the WSGI/ASGI difference, I suppose?). If anyone could tell me something I could use instead of gunicorn as a web server it would be extremely appreciated, or if there's any way I can run gunicorn with my django channels still working? Thanks!! | When using ASGI, for asynchronous servers (websockets), you should use an asynchronous server like Daphne or Uvicorn; the Django documentation has examples on how to deploy both of them. If you want to use uvicorn directly you could do something like: uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000 You can also run uvicorn through gunicorn using the worker class: gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 | 9 | 13 |
59,977,900 | 2020-1-30 | https://stackoverflow.com/questions/59977900/how-to-plot-with-pyplot-from-a-script-file-in-google-colab | I am trying to show a plot to the notebook from a python script, but all I get is a text output showing me the type() output of the figure.I have something like this: This is my script (a very simplified version of my actual script, but same concept). import matplotlib.pyplot as plt x=[1,2,3,4,5,6,5,3,2,4,2,3,4,2] plt.plot(x) plt.show() I have also tried setting the backend to notebook, but I get the same result. If I plot some data from the console (writing python directly into the code textbox, the plot works fine). I have searched for the answer in a lot of forums, and even the official Google notebook to plotting in Colab says that the plot should work fine from within a script. | Instead of calling !python plot.py Use this instead %run plot.py It will show the plot normally. | 11 | 20 |
59,976,480 | 2020-1-29 | https://stackoverflow.com/questions/59976480/docker-errno-111-connect-call-failed-127-0-0-1-6379 | I am trying to follow the tutorial here https://channels.readthedocs.io/en/latest/tutorial/part_2.html and check if channel layer can communicate with Redis. The only different thing I'm doing is that I'm using docker-compose and running the entire thing on a docker container, and that seems to be messing up with everything. This is the error message I'm getting when I try to run async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'}) Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python3.7/site-packages/asgiref/sync.py", line 116, in __call__ return call_result.result() File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result return self.__get_result() File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result raise self._exception File "/usr/local/lib/python3.7/site-packages/asgiref/sync.py", line 156, in main_wrap result = await self.awaitable(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 293, in send async with self.connection(index) as connection: File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 820, in __aenter__ self.conn = await self.pool.pop() File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 70, in pop conns.append(await aioredis.create_redis(**self.host, loop=loop)) File "/usr/local/lib/python3.7/site-packages/aioredis/commands/__init__.py", line 175, in create_redis loop=loop) File "/usr/local/lib/python3.7/site-packages/aioredis/connection.py", line 113, in create_connection timeout) File "/usr/local/lib/python3.7/asyncio/tasks.py", line 414, in wait_for return await fut File "/usr/local/lib/python3.7/site-packages/aioredis/stream.py", line 24, in open_connection lambda: protocol, host, port, **kwds) File "/usr/local/lib/python3.7/asyncio/base_events.py", line 958, in create_connection raise exceptions[0] File "/usr/local/lib/python3.7/asyncio/base_events.py", line 945, in create_connection await self.sock_connect(sock, address) File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 473, in sock_connect return await fut File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 503, in _sock_connect_cb raise OSError(err, f'Connect call failed {address}') ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 6379) I've checked a few post and saw that many suggested this is because Redis isn't running. I know that Redis exist since docker ps shows that CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2ccab2cfc570 test_web "python manage.py ru…" 7 minutes ago Up 7 minutes 0.0.0.0:8000->8000/tcp test_web_1 6da398f093fc redis:2.8 "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:6379->6379/tcp relaxed_aryabhata Any idea what I can do right now? I'm really new to these | Try changing 127.0.0.1:6379 to redis:6379. Although Redis is running, your python container isn't able to communicate with it; this is because it's trying to connect to 127.0.0.1:6379, but from the container's perspective, there's nothing running there. This can be a bit frustrating to debug, but it's a bit easier if you keep in mind that containers get their own network namespace. As a result, python's localhost != redis's localhost != your host machine's localhost. 
Luckily, it's easy to connect containers that are sharing the same bridge, and by default, docker-compose creates a single bridge network and connects all your containers to them, providing the necessary DNS to allow them to discover one another. As a result, container-to-container communication works simply by using the service name. As a note, it's possible to run containers in the same namespace, and to run in them in the namespace of the host, via the --net=container:<container-id> or --net=host flag. This is especially useful for running debugging tools in a container and attaching them to the network namespace of either another container or the host, e.g. using netshoot to see what ports are listening within the container (exposed or not), docker run --rm -it --net container:test_web_1 nicolaka/netshoot netstat -tulpn. | 15 | 29 |
59,967,429 | 2020-1-29 | https://stackoverflow.com/questions/59967429/convert-all-columns-from-int64-to-int32 | We all know the question: Change data type of columns in Pandas, where it is explained really nicely how to change the data type of a column. But what if I have a dataframe df with the following df.dtypes: A object B int64 C int32 D object E int64 F float32 How could I change this so that all int64 types are converted to int32 types, without explicitly mentioning the column names? So the desired outcome is: A object B int32 C int32 D object E int32 F float32 | You can create a dictionary from all columns with int64 dtype using DataFrame.select_dtypes and convert them to int32 with DataFrame.astype (though I'm not sure whether it fails for big integer numbers): import pandas as pd import numpy as np df = pd.DataFrame({ 'A':list('abcdef'), 'B':[4,5,4,5,5,4], 'C':[7,8,9,4,2,3], 'D':[1,3,5,7,1,0], 'E':[5,3,6,9,2,4], 'F':list('aaabbb') }) d = dict.fromkeys(df.select_dtypes(np.int64).columns, np.int32) df = df.astype(d) print (df.dtypes) A object B int32 C int32 D int32 E int32 F object dtype: object | 11 | 17 |
59,882,884 | 2020-1-23 | https://stackoverflow.com/questions/59882884/vscode-doesnt-show-poetry-virtualenvs-in-select-interpreter-option | I need help. VSCode will NEVER find poetry virtualenv interpreter no matter what I try. Installed poetry Python package manager using a standard $ curl method as explained in the official documentation. Started a project by $ poetry new finance-essentials_37-64, installed poetry environment with $ poetry install. So now I can see that I indeed have a virtual environment by: Jaepil@Jaepil-PC MINGW64 /e/VSCodeProjects/finance_essentials_37-64 $ poetry env list >> finance-essentials-37-64-SCQrHB_N-py3.7 (Activated) and this virtualenv is installed at: C:\Users\Jaepil\AppData\Local\pypoetry\Cache\virtualenvs, which has finance-essentials-37-64-SCQrHB_N-py3.7 directory. However, VSCode is unable to find this virtualenv in its 'select interpreter' command. I only see a bunch of Anaconda and Pipenv environments but not the poetry environment's interpreter that I've just made. I also added "python.venvPath": "~/.cache/pypoetry/virtualenvs", to my settings.json as suggested in here, but to no avail. Still doesn't work. I also tried an absolute path, by adding "python.venvPath": "C:\\Users\\Jaepil\\AppData\\Local\\pypoetry\\Cache\\virtualenvs", to the same settings, but it also doesn't work. VSCode settings reference states that it has python.poetryPath as a default but it doesn't seem to work either. Should I change the default value "poetry" in this case? python.poetryPath "poetry" Specifies the location of the Poetry dependency manager executable, if installed. The default value "poetry" assumes the executable is in the current path. The Python extension uses this setting to install packages when Poetry is available and there's a poetry.lock file in the workspace folder. I'm on Windows 10 pro 64bit & Has Python 3.7.6 installed on the system. PS C:\Users\Jaepil> python Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] on win32 | You just need to type in your shell: poetry config virtualenvs.in-project true The virtualenv will be created inside the project path and vscode will recognize. Consider adding this to your .bashrc or .zshrc. If you already have created your project, you need to re-create the virtualenv to make it appear in the correct place: poetry env list # shows the name of the current environment poetry env remove <current environment> poetry install # will create a new environment using your updated configuration | 193 | 482 |
59,874,373 | 2020-1-23 | https://stackoverflow.com/questions/59874373/type-hint-for-a-list-of-possible-values | I have a function which can take a fixed list of values: e.g. def func(mode="a"): if mode not in ["a", "b"]: raise AttributeError("not ok") is there a way to type hint it can only be one of these two values? | I think you want a literal type: def func(mode: Literal["a", "b"] = "a"): if mode not in ["a", "b"]: raise AttributeError("not ok") This was introduced in Python 3.8, via PEP 586. | 18 | 35 |
59,875,983 | 2020-1-23 | https://stackoverflow.com/questions/59875983/why-is-caplog-text-empty-even-though-the-function-im-testing-is-logging | I'm trying to use pytest to test if my function is logging the expected text, such as addressed this question (the pyunit equivalent would be assertLogs). Following the pytest logging documentation, I am passing the caplog fixture to the tester. The documentation states: Lastly all the logs sent to the logger during the test run are made available on the fixture in the form of both the logging.LogRecord instances and the final log text. The module I'm testing is: import logging logger = logging.getLogger(__name__) def foo(): logger.info("Quinoa") The tester is: def test_foo(caplog): from mwe16 import foo foo() assert "Quinoa" in caplog.text I would expect this test to pass. However, running the test with pytest test_mwe16.py shows a test failure due to caplog.text being empty: ============================= test session starts ============================== platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.0 rootdir: /tmp plugins: mock-1.12.1, cov-2.8.1 collected 1 item test_mwe16.py F [100%] =================================== FAILURES =================================== ___________________________________ test_foo ___________________________________ caplog = <_pytest.logging.LogCaptureFixture object at 0x7fa86853e8d0> def test_foo(caplog): from mwe16 import foo foo() > assert "Quinoa" in caplog.text E AssertionError: assert 'Quinoa' in '' E + where '' = <_pytest.logging.LogCaptureFixture object at 0x7fa86853e8d0>.text test_mwe16.py:4: AssertionError ============================== 1 failed in 0.06s =============================== Why is caplog.text empty despite foo() sending text to a logger? How do I use pytest such that caplog.text does capture the logged text, or otherwise verify that the text is being logged? | The documentation is unclear here. From trial and error, and notwithstanding the "all the logs sent to the logger during the test run are made available" text, it still only captures logs with certain log levels. To actually capture all logs, one needs to set the log level for captured log messages using caplog.set_level or the caplog.at_level context manager, so that the test module becomes: import logging def test_foo(caplog): from mwe16 import foo with caplog.at_level(logging.DEBUG): foo() assert "Quinoa" in caplog.text Now, the test passes: ============================= test session starts ============================== platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.0 rootdir: /tmp plugins: mock-1.12.1, cov-2.8.1 collected 1 item test_mwe16.py . [100%] ============================== 1 passed in 0.04s =============================== | 34 | 33 |
59,955,854 | 2020-1-28 | https://stackoverflow.com/questions/59955854/what-is-md5-md5-and-why-is-hashlib-md5-so-much-slower | Found this undocumented _md5 when getting frustrated with the slow stdlib hashlib.md5 implementation. On a macbook: >>> timeit hashlib.md5(b"hello world") 597 ns ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) >>> timeit _md5.md5(b"hello world") 224 ns ± 3.18 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) >>> _md5 <module '_md5' from '/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_md5.cpython-37m-darwin.so'> On a Windows box: >>> timeit hashlib.md5(b"stonk overflow") 328 ns ± 21.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) >>> timeit _md5.md5(b"stonk overflow") 110 ns ± 12.5 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) >>> _md5 <module '_md5' (built-in)> On a Linux box: >>> timeit hashlib.md5(b"https://adventofcode.com/2016/day/5") 259 ns ± 1.33 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) >>> timeit _md5.md5(b"https://adventofcode.com/2016/day/5") 102 ns ± 0.0576 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) >>> _md5 <module '_md5' from '/usr/local/lib/python3.8/lib-dynload/_md5.cpython-38-x86_64-linux-gnu.so'> For hashing short messages, it's way faster. For long messages, similar performance. Why is it hidden away in an underscore extension module, and why isn't this faster implementation used by default in hashlib? What is the _md5 module and why doesn't it have public API? | Until Python 2.5, hashes and digests were implemented in their own modules (e.g. [Python 2.Docs]: md5 - MD5 message digest algorithm). Starting with v2.5, [Python 2.6.Docs]: hashlib - Secure hashes and message digests was added. Its purpose was to: Offer an unified access method to the hashes / digests (via their name) Switch (by default) to an external cryptography provider (it seems the logical step to delegate to some entity specialized in that field, as maintaining all those algorithms could be an overkill). At that time OpenSSL was the best choice: mature enough, known and compatible (there were a bunch of similar Java providers, but those were pretty useless) As a side effect of #2., the Python implementations were hidden from the public API (renamed them: _md5, _sha1, _sha256, _sha512, and the latter ones added: _blake2, _sha3), as redundancy often creates confusions. But, another side effect was _hashlib.so dependency on OpenSSL's libcrypto*.so (this is Nix (at least Linux) specific, on Win, a static libeay32.lib was linked in _hashlib.pyd, and also _ssl.pyd (which I consider lame), till v3.7+, where OpenSSL .dlls are part of the Python installation). Probably on 90%+ of the machines things were smooth, as OpenSSL was / is installed by default, but for those where it isn't, many things might get broken because for example hashlib is imported by many modules (one such example is random which itself gets imported by lots of others), so trivial pieces of code that are not related at all to cryptography (at least not at 1st sight) will stop working. That's why the old implementations are kept (but again, they are only fallbacks as OpenSSL versions are / should be better maintained). 
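A quick way to double-check which implementation you are actually getting (a minimal sketch of my own, assuming a standard CPython build where the OpenSSL-backed _hashlib module is importable):
import hashlib
import _md5
print(hashlib.md5)   # usually resolves to the OpenSSL-backed constructor from _hashlib
print(_md5.md5)      # the bundled fallback implementation
# both should produce identical digests
assert hashlib.md5(b"A").hexdigest() == _md5.md5(b"A").hexdigest()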
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q059955854]> ~/sopr.sh ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [064bit-prompt]> python3 -c "import sys, hashlib as hl, _md5, ssl;print(\"{0:}\n{1:}\n{2:}\n{3:}\".format(sys.version, _md5, hl._hashlib, ssl.OPENSSL_VERSION))" 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] <module '_md5' (built-in)> <module '_hashlib' from '/usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so'> OpenSSL 1.0.2g 1 Mar 2016 [064bit-prompt]> [064bit-prompt]> ldd /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so linux-vdso.so.1 => (0x00007fffa7d0b000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f50d9e4d000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f50d9a83000) libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f50d963e000) /lib64/ld-linux-x86-64.so.2 (0x00007f50da271000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f50d943a000) [064bit-prompt]> [064bit-prompt]> openssl version -a OpenSSL 1.0.2g 1 Mar 2016 built on: reproducible build, date unspecified platform: debian-amd64 options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx) compiler: cc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM OPENSSLDIR: "/usr/lib/ssl" [064bit-prompt]> [064bit-prompt]> python3 -c "import _md5, hashlib as hl;print(_md5.md5(b\"A\").hexdigest(), hl.md5(b\"A\").hexdigest())" 7fc56270e7a70fa81a5935b72eacbe29 7fc56270e7a70fa81a5935b72eacbe29 According to [Python 3.Docs]: hashlib.algorithms_guaranteed: A set containing the names of the hash algorithms guaranteed to be supported by this module on all platforms. Note that ‘md5’ is in this list despite some upstream vendors offering an odd “FIPS compliant” Python build that excludes it. 
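To illustrate the quote above, a small check of my own (the exact output will vary with the Python / OpenSSL combination on the machine):
import hashlib
print(sorted(hashlib.algorithms_guaranteed))   # names that always work, backed by the builtin fallbacks
print(sorted(hashlib.algorithms_available))    # superset: also includes whatever the linked OpenSSL exposes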
Below it's an example of a custom Python 2.7 installation (that I built quite a while ago, worth mentioning that it dynamically links to OpenSSL .dlls): [cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q059955854]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> "F:\Install\pc064\HPE\OPSWpython\2.7.10__00\python.exe" -c "import sys, ssl;print(\"{0:}\n{1:}\".format(sys.version, ssl.OPENSSL_VERSION))" 2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)] OpenSSL 1.0.2j-fips 26 Sep 2016 [prompt]> "F:\Install\pc064\HPE\OPSWpython\2.7.10__00\python.exe" -c "import hashlib as hl;print(hl.md5(\"A\").hexdigest())" 7fc56270e7a70fa81a5935b72eacbe29 [prompt]> "F:\Install\pc064\HPE\OPSWpython\2.7.10__00\python.exe" -c "import ssl;ssl.FIPS_mode_set(True);import hashlib as hl;print(hl.md5(\"A\").hexdigest())" Traceback (most recent call last): File "<string>", line 1, in <module> ValueError: error:060A80A3:digital envelope routines:FIPS_DIGESTINIT:disabled for fips As for the speed question I can only speculate: Python implementation was (obviously) written specifically for Python, meaning it is "more optimized" (yes, this is grammatically incorrect) for Python than a generic version, and also resides in python*.so (or the python executable itself) OpenSSL implementation resides in libcrypto*.so, and it's being accessed by the wrapper _hashlib.so, which does the back and forth conversions between Python types (PyObject*) and the OpenSSL ones (EVP_MD_CTX*) Considering the above, it would make sense that the former is (slightly) faster (at least for small messages, where the overhead (function call and other Python underlying operations) takes a significant percentage of the total time compared to the hashing itself). There are also other factors to be considered (e.g. whether OpenSSL assembler speedups were used). Update #0 Below are some benchmarks of my own. 
code00.py: #!/usr/bin/env python import sys import timeit from hashlib import md5 as md5_openssl from _md5 import md5 as md5_builtin MD5S = ( md5_openssl, md5_builtin, ) def main(*argv): base_text = b"A" number = 1000000 print("timeit attempts number: {:d}".format(number)) #x = [] #y = {} for count in range(0, 16): factor = 2 ** count text = base_text * factor globals_dict = {"text": text} #x.append(factor) print("\nUsing a {:8d} (2 ** {:2d}) bytes message".format(len(text), count)) for func in MD5S: globals_dict["md5"] = func t = timeit.timeit(stmt="md5(text)", globals=globals_dict, number=number) print(" {:12s} took: {:11.6f} seconds".format(func.__name__, t)) #y.setdefault(func.__name__, []).append(t) #print(x, y) if __name__ == "__main__": print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform)) rc = main(*sys.argv[1:]) print("\nDone.\n") sys.exit(rc) Output: Win 10 pc064 (running on a Dell Precision 5510 laptop): [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.07.06_test0\Scripts\python.exe" ./code00.py Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] 64bit on win32 timeit attempts number: 1000000 Using a 1 (2 ** 0) bytes message openssl_md5 took: 0.449134 seconds md5 took: 0.120021 seconds Using a 2 (2 ** 1) bytes message openssl_md5 took: 0.460399 seconds md5 took: 0.118555 seconds Using a 4 (2 ** 2) bytes message openssl_md5 took: 0.451850 seconds md5 took: 0.121166 seconds Using a 8 (2 ** 3) bytes message openssl_md5 took: 0.438398 seconds md5 took: 0.118127 seconds Using a 16 (2 ** 4) bytes message openssl_md5 took: 0.454653 seconds md5 took: 0.122818 seconds Using a 32 (2 ** 5) bytes message openssl_md5 took: 0.450776 seconds md5 took: 0.118594 seconds Using a 64 (2 ** 6) bytes message openssl_md5 took: 0.555761 seconds md5 took: 0.278812 seconds Using a 128 (2 ** 7) bytes message openssl_md5 took: 0.681296 seconds md5 took: 0.455921 seconds Using a 256 (2 ** 8) bytes message openssl_md5 took: 0.895952 seconds md5 took: 0.807457 seconds Using a 512 (2 ** 9) bytes message openssl_md5 took: 1.401584 seconds md5 took: 1.499279 seconds Using a 1024 (2 ** 10) bytes message openssl_md5 took: 2.360966 seconds md5 took: 2.878650 seconds Using a 2048 (2 ** 11) bytes message openssl_md5 took: 4.383245 seconds md5 took: 5.655477 seconds Using a 4096 (2 ** 12) bytes message openssl_md5 took: 8.264774 seconds md5 took: 10.920909 seconds Using a 8192 (2 ** 13) bytes message openssl_md5 took: 15.521947 seconds md5 took: 21.895179 seconds Using a 16384 (2 ** 14) bytes message openssl_md5 took: 29.947287 seconds md5 took: 43.198639 seconds Using a 32768 (2 ** 15) bytes message openssl_md5 took: 59.123447 seconds md5 took: 86.453821 seconds Done. 
Ubuntu 16 pc064 (VM running in VirtualBox on the above machine): [064bit-prompt]> python3 ./code00.py Python 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] 64bit on linux timeit attempts number: 1000000 Using a 1 (2 ** 0) bytes message openssl_md5 took: 0.246166 seconds md5 took: 0.130589 seconds Using a 2 (2 ** 1) bytes message openssl_md5 took: 0.251019 seconds md5 took: 0.127750 seconds Using a 4 (2 ** 2) bytes message openssl_md5 took: 0.257018 seconds md5 took: 0.123116 seconds Using a 8 (2 ** 3) bytes message openssl_md5 took: 0.245399 seconds md5 took: 0.128267 seconds Using a 16 (2 ** 4) bytes message openssl_md5 took: 0.251832 seconds md5 took: 0.136373 seconds Using a 32 (2 ** 5) bytes message openssl_md5 took: 0.248410 seconds md5 took: 0.140708 seconds Using a 64 (2 ** 6) bytes message openssl_md5 took: 0.361016 seconds md5 took: 0.267021 seconds Using a 128 (2 ** 7) bytes message openssl_md5 took: 0.478735 seconds md5 took: 0.413986 seconds Using a 256 (2 ** 8) bytes message openssl_md5 took: 0.707602 seconds md5 took: 0.695042 seconds Using a 512 (2 ** 9) bytes message openssl_md5 took: 1.216832 seconds md5 took: 1.268570 seconds Using a 1024 (2 ** 10) bytes message openssl_md5 took: 2.122014 seconds md5 took: 2.429623 seconds Using a 2048 (2 ** 11) bytes message openssl_md5 took: 4.158188 seconds md5 took: 4.847686 seconds Using a 4096 (2 ** 12) bytes message openssl_md5 took: 7.839173 seconds md5 took: 9.242224 seconds Using a 8192 (2 ** 13) bytes message openssl_md5 took: 15.282232 seconds md5 took: 18.368874 seconds Using a 16384 (2 ** 14) bytes message openssl_md5 took: 30.681912 seconds md5 took: 36.755073 seconds Using a 32768 (2 ** 15) bytes message openssl_md5 took: 60.230543 seconds md5 took: 73.237356 seconds Done. Ubuntu 22 pc064 (dual-boot on the same machine): [064bit prompt]> python ./code00.py Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] 064bit on linux timeit attempts number: 1000000 Using a 1 (2 ** 0) bytes message openssl_md5 took: 0.258825 seconds md5 took: 0.092418 seconds Using a 2 (2 ** 1) bytes message openssl_md5 took: 0.265123 seconds md5 took: 0.095969 seconds Using a 4 (2 ** 2) bytes message openssl_md5 took: 0.273572 seconds md5 took: 0.098485 seconds Using a 8 (2 ** 3) bytes message openssl_md5 took: 0.267524 seconds md5 took: 0.102606 seconds Using a 16 (2 ** 4) bytes message openssl_md5 took: 0.295750 seconds md5 took: 0.102688 seconds Using a 32 (2 ** 5) bytes message openssl_md5 took: 0.266704 seconds md5 took: 0.095375 seconds Using a 64 (2 ** 6) bytes message openssl_md5 took: 0.350251 seconds md5 took: 0.209725 seconds Using a 128 (2 ** 7) bytes message openssl_md5 took: 0.559193 seconds md5 took: 0.362671 seconds Using a 256 (2 ** 8) bytes message openssl_md5 took: 0.685720 seconds md5 took: 0.589242 seconds Using a 512 (2 ** 9) bytes message openssl_md5 took: 1.100991 seconds md5 took: 1.081601 seconds Using a 1024 (2 ** 10) bytes message openssl_md5 took: 2.069975 seconds md5 took: 2.176450 seconds Using a 2048 (2 ** 11) bytes message openssl_md5 took: 3.742486 seconds md5 took: 4.197531 seconds Using a 4096 (2 ** 12) bytes message openssl_md5 took: 7.186287 seconds md5 took: 8.270421 seconds Using a 8192 (2 ** 13) bytes message openssl_md5 took: 13.889762 seconds md5 took: 16.225811 seconds Using a 16384 (2 ** 14) bytes message openssl_md5 took: 27.422105 seconds md5 took: 32.898019 seconds Using a 32768 (2 ** 15) bytes message openssl_md5 took: 54.010482 seconds md5 took: 64.579159 seconds Done. 
The results seem to be quite different from yours. In my case: Starting somewhere in [~512B .. ~1KiB] sized messages, the OpenSSL implementation seems to perform better than the builtin one. I know that there are too few results to claim a pattern, but both implementations seem to be linearly proportional (in terms of time) with message size (but the builtin slope seems to be a bit steeper - meaning it will perform worse in the long run). As a conclusion, if all your messages are small and the builtin implementation works best for you, then use it. Update #1 Graphical representation (I had to reduce the timeit iterations number by an order of magnitude, as it would take much too long for large messages): and zooming in on the area where the 2 graphs intersect: | 11 | 9 |
59,953,431 | 2020-1-28 | https://stackoverflow.com/questions/59953431/how-to-change-plotly-figure-size | I made the following scatter plot with Plotly: import plotly import plotly.plotly as py from plotly.graph_objs import Scatter import plotly.graph_objs as go trace1 = go.Scatter( x=x1_tsne, # x-coordinates of trace y=y1_tsne, # y-coordinates of trace mode="markers ", # scatter mode (more in UG section 1) text=label3, opacity=1, textposition="top center", marker=dict(size=25, color=color_4, symbol=marker_list_2, line=dict(width=0.5)), textfont=dict( color="black", size=18, # can change the size of font here family="Times New Roman", ), ) layout = { "xaxis": { "showticklabels": False, "showgrid": False, "zeroline": False, "linecolor": "black", "linewidth": 2, "mirror": True, "autorange": False, "range": [-40, 40], }, "yaxis": { "showticklabels": False, "showgrid": False, "zeroline": False, "linecolor": "black", "linewidth": 2, "mirror": True, "autorange": False, "range": [-40, 40], }, } data = [trace1] fig = go.Figure(data=data, layout=layout) py.iplot(fig) I want the height, width, and the markers to be like the plot shown below (made in Matplotlib): I tried tuning the range, and using Autorange, but they did not help. EDIT: The following code worked for me: trace1 = go.Scatter( x=x1_tsne, # x-coordinates of trace y=y1_tsne, # y-coordinates of trace mode="markers +text ", # scatter mode (more in UG section 1) text=label3, opacity=1, textposition="top center", marker=dict(size=25, color=color_4, symbol=marker_list_2, line=dict(width=0.5)), textfont=dict( color="black", size=18, # can change the size of font here family="Times New Roman", ), ) data = [trace1] layout = go.Layout( autosize=False, width=1000, height=1000, xaxis=go.layout.XAxis(linecolor="black", linewidth=1, mirror=True), yaxis=go.layout.YAxis(linecolor="black", linewidth=1, mirror=True), margin=go.layout.Margin(l=50, r=50, b=100, t=100, pad=4), ) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename="size-margins") | Consider using: fig.update_layout( autosize=False, width=800, height=800, ) ...and then eventually reduce the size of your marker. Full code: import plotly.graph_objs as go trace1 = go.Scatter( x=x1_tsne, # x-coordinates of trace y=y1_tsne, # y-coordinates of trace mode="markers +text ", # scatter mode (more in UG section 1) text=label3, opacity=1, textposition="top center", marker=dict(size=12, color=color_4, symbol=marker_list_2, line=dict(width=0.5)), textfont=dict( color="black", size=18, # can change the size of font here family="Times New Roman", ), ) data = [trace1] layout = go.Layout( autosize=False, width=1000, height=1000, xaxis=go.layout.XAxis(linecolor="black", linewidth=1, mirror=True), yaxis=go.layout.YAxis(linecolor="black", linewidth=1, mirror=True), margin=go.layout.Margin(l=50, r=50, b=100, t=100, pad=4), ) fig = go.Figure(data=data, layout=layout) | 69 | 108 |
59,899,134 | 2020-1-24 | https://stackoverflow.com/questions/59899134/how-to-change-individual-entries-in-xarray-dataarray-with-sel | I have data inside an xarray.DataArray that I want to manipulate; however, I do not manage to change individual entries in the DataArray. Example: import numpy as np import xarray as xr data = np.random.rand(2,2) times = [1998,1999] locations = ['It','Be'] A = xr.DataArray(data, coords = [times, locations], dims = ['time', 'space']) this gives me a DataArray. Now I want to set the entry for (1998,'It') manually to 5, but the following does not work: A.sel(time = 1998, space = 'It').values = 5 nor does this work: A.sel(time = 1998, space = 'It').values = array(5) the data remains as it is. However, strangely the following works out well: A.sel(time = 1998).values[0] = 5 Could you please explain the logic behind this? | Xarray's assignment does not allow you to assign values to arrays using sel or isel. This is described in the documentation here. For your application, you probably want to use the .loc property: A.loc[dict(time=1998, space='It')] = 5 It is also possible to use DataArray.where to replace values. | 6 | 12 |
59,893,782 | 2020-1-24 | https://stackoverflow.com/questions/59893782/how-to-exit-cleanly-from-flask-and-waitress-running-as-a-windows-pywin32-servi | I have managed to cobble together a working demo of a pywin32 Windows service running Flask inside the Pylons waitress WSGI server (below). A nice self contained solution is the idea... I have spent hours reviewing and testing ways of making waitress exit cleanly (like this and this), but the best I can do so far is a kind of suicidal SIGINT which makes Windows complain "the pipe has been ended" when stopping through the Services control panel, but at least it stops :-/ I guess the pythonservice.exe which pywin32 starts, should not terminate, just the waitress treads? To be honest I'm still uncertain if this is a question about waitress, pywin32, or maybe it's just plain Python. I do have the feeling the answer is right in front of me, but right now I'm completely stumped. import os import random import signal import socket from flask import Flask, escape, request import servicemanager import win32event import win32service import win32serviceutil from waitress import serve app = Flask(__name__) @app.route('/') def hello(): random.seed() x = random.randint(1, 1000000) name = request.args.get("name", "World") return 'Hello, %s! - %s - %s' % (escape(name), x, os.getpid()) # based on https://www.thepythoncorner.com/2018/08/how-to-create-a-windows-service-in-python/ class SMWinservice(win32serviceutil.ServiceFramework): '''Base class to create winservice in Python''' _svc_name_ = 'WaitressService' _svc_display_name_ = 'Waitress server' _svc_description_ = 'Python waitress WSGI service' @classmethod def parse_command_line(cls): ''' ClassMethod to parse the command line ''' win32serviceutil.HandleCommandLine(cls) def __init__(self, args): ''' Constructor of the winservice ''' win32serviceutil.ServiceFramework.__init__(self, args) self.hWaitStop = win32event.CreateEvent(None, 0, 0, None) socket.setdefaulttimeout(60) def SvcStop(self): ''' Called when the service is asked to stop ''' self.stop() servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STOPPED, (self._svc_name_, '')) self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.hWaitStop) def SvcDoRun(self): ''' Called when the service is asked to start ''' self.start() servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_, '')) self.main() def start(self): pass def stop(self): print 'sigint' os.kill(os.getpid(), signal.SIGINT) def main(self): print 'serve' serve(app, listen='*:5000') if __name__ == '__main__': SMWinservice.parse_command_line() | I have found a solution using a sub-thread that seems to work. I am not quite sure if this may have possible unintended consequences yet... I believe the updated version below, "injecting" a SystemExit into the waitress thread is as good as it gets. I think thee original kills the thread hard, but this one prints "thread done" indicating a graceful shutdown. Corrections or improvements welcome! 
import ctypes import os import random import socket import threading from flask import Flask, escape, request import servicemanager import win32event import win32service import win32serviceutil from waitress import serve app = Flask(__name__) # waitress thread exit based on: # https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/ @app.route('/') def hello(): random.seed() x = random.randint(1, 1000000) name = request.args.get("name", "World") return 'Hello, %s! - %s - %s' % (escape(name), x, os.getpid()) class ServerThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) def run(self): print('thread start\n') serve(app, listen='*:5000') # blocking print('thread done\n') def get_id(self): # returns id of the respective thread if hasattr(self, '_thread_id'): return self._thread_id for id, thread in threading._active.items(): if thread is self: return id def exit(self): thread_id = self.get_id() res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit)) if res > 1: ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0) print('Exception raise failure') class SMWinservice(win32serviceutil.ServiceFramework): _svc_name_ = 'WaitressService' _svc_display_name_ = 'Waitress server' _svc_description_ = 'Python waitress WSGI service' @classmethod def parse_command_line(cls): win32serviceutil.HandleCommandLine(cls) def __init__(self, args): win32serviceutil.ServiceFramework.__init__(self, args) self.stopEvt = win32event.CreateEvent(None, 0, 0, None) socket.setdefaulttimeout(60) def SvcStop(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STOPPED, (self._svc_name_, '')) self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.stopEvt) def SvcDoRun(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_, '')) self.main() def main(self): print('main start') self.server = ServerThread() self.server.start() print('waiting on win32event') win32event.WaitForSingleObject(self.stopEvt, win32event.INFINITE) self.server.exit() # raise SystemExit in inner thread print('waiting on thread') self.server.join() print('main done') if __name__ == '__main__': SMWinservice.parse_command_line() | 13 | 10 |
59,854,439 | 2020-1-22 | https://stackoverflow.com/questions/59854439/how-to-create-requirement-txt-without-all-package-versions | Right now my requirement.txt contains following package list: asgiref==3.2.3 beautifulsoup4==4.8.2 certifi==2019.11.28 chardet==3.0.4 Click==7.0 Django==3.0.2 idna==2.8 pytz==2019.3 requests==2.22.0 six==1.14.0 soupsieve==1.9.5 sqlparse==0.3.0 urllib3==1.25.7 I just want the package name only, so that pip3 always installs the latest version. | Use sed to remove version info from your requirements.txt file. e.g. sed 's/==.*$//' requirements.txt will give output asgiref beautifulsoup4 certifi chardet Click Django idna pytz requests six soupsieve sqlparse urllib3 and this can be piped into pip to do the install sed 's/==.*$//' requirements.txt | xargs pip install | 11 | 9 |
59,890,977 | 2020-1-24 | https://stackoverflow.com/questions/59890977/f-string-multiple-format-specifiers | Is it possible to use multiple format specifiers in a Python f-string? For example, let's say we want to round numbers to two decimal places and also specify a width for print. Individually it looks like this: In [1]: values = [12.1093, 13.95123] In [2]: for v in values: print(f'{v:.2}') 1.2e+01 1.4e+01 In [3]: for v in values: print(f'{v:<10} value') 12.1093 value 13.95123 value But, is it possible to combine both? I tried: for v in values: print(f'{v:.2,<10} value') But I got Invalid format specifier error. | Depending on the result you want, you can combine them normally, for example: for v in values: print(f"{v:<10.2} value") #1.2e+01 value #1.4e+01 value However, this does not seem to be the result you're looking for. To force fixed-point notation with two decimal places you need to add f: for v in values: print(f"{v:<10.2f} value") #12.11 value #13.95 value You can read more on format specifications here. | 15 | 9 |
59,953,611 | 2020-1-28 | https://stackoverflow.com/questions/59953611/how-can-i-get-a-tqdm-progress-apply-bar-in-pandas-operations-in-vs-code-notebook | I am trying to display a progress bar when I perform "vector" progress_apply operations on pandas dataframes, in MS Visual Studio Code. In VS Code with the Python extension enabled, I tried in a cell import pandas as pd from tqdm import tqdm_notebook, tqdm_pandas tqdm_notebook().pandas() df = pd.DataFrame({'a' : ['foo', 'bar'], 'b' : ['spam', 'eggs']}) df.progress_apply(lambda row: row['a'] + row['b'], axis = 1) And the result is not OK (edit: this may actually render fine on more recent versions of VS Code). How can I visualize the progress bar when I run pandas progress_apply in vscode? | Revisiting this in 2022 (VS Code 1.63.2), the code below will work fine in VS code, and may be more appealing visually than the other solution I previously had for this: import pandas as pd from tqdm.notebook import tqdm tqdm.pandas() df = pd.DataFrame({'a' : ['foo', 'bar'], 'b' : ['spam', 'eggs']}) df.progress_apply(lambda row: row['a'] + row['b'], axis = 1) | 11 | 7 |
59,930,590 | 2020-1-27 | https://stackoverflow.com/questions/59930590/prettier-vscode-extension-not-support-django-template-tags-tag | The Prettier Visual Studio Code extension does not support Django template tags {% %}. How can I fix this? Do I have to disable Prettier for HTML files, or is there another solution? See this GitHub issue too: No Django template tags support | February 2022 Based on @Al Mahdi's comment: Prettier does not support prettier.disableLanguages option anymore. Therefore, to ignore certain files you have to create a .prettierignore file akin a .gitignore file (for people who use Git). The file lives in the root folder of your project. Source of my examples below. To ignore one file you could type in a particular filename: # Ignoring just one file my_cool_html_file.html Or you could use a blanket statement: # Ignoring all html files *.html There is also the pragma option (<!--prettier-ignore-->) which lets you ignore bits and pieces of code from particular files. Suppose in your my_cool_html_file.html you want to not have Prettier format some lines in it, you could: <!-- prettier-ignore --> <div class="x" >hello world</div > <!-- prettier-ignore-attribute --> <div (mousedown)=" onStart ( ) " (mouseup)=" onEnd ( ) " ></div> <!-- prettier-ignore-attribute (mouseup) --> <div (mousedown)="onStart()" (mouseup)=" onEnd ( ) " ></div> July 2020 (old answer) You can do two things: Disable Prettier on HTML files by adding this command in the 'settings.json' file: "prettier.disableLanguages": ["html"] This will ensure, if you have enabled it, VS Code's inherent HTML formatting. OR You can install a Django extension like this one. However, the problem with this extension is that it disables VS Codes inherent HTML intellisense (which I personally like). Hope this helps. | 16 | 19 |
59,938,578 | 2020-1-27 | https://stackoverflow.com/questions/59938578/pybind11-running-the-test-cases | I'm trying to learn pybind11 and the first Google result is this page, where you should be guided towards compiling and running some test cases. From this page, I have installed bybind11 by: pip3 install pybind11 and I have installed: sudo apt install python3-dev cmake as instructed in the original page. But I don't know how to go to the next step which is to mkdir build ... and the rest of the steps to compile the test cases. I suppose this should be inside the pybind11 installation folder installed via pip3. my environment is: Ubuntu 18.04.3 LTS bionic Python3 3.6.9 pip 20.0.2 and my questions are: where is the path to the presumed test cases where I can follow the rest of the tutorial from is this the correct/best way to install pybind11? if not what is the recommended method of installation? P.S.1. using pip3 show pybind11 I realized that I have version 2.4.3 installed and the installation folder is /usr/<userName>/.local/lib/python3.6/sitepackages. However, inside the pybind11 folder there are no test cases as far as I can see. P.S.2. From here I installed via sudo apt install python-pybind11 and from here using dpkg --listfiles python-pybind11 I found the installation folder at /usr/lib/python2.7/dist-packages/. Not only there are no test cases in this folder either, but this is also python2.7 which I don't want to use! | You need to install pybind11 as instructed here by cloning the GitHub repository: python3 -m pip install pytest numpy scipy sudo apt install -y cmake python3-dev libeigen3-dev libboost-dev git git clone https://github.com/pybind/pybind11.git cd pybind11 cmake -DDOWNLOAD_CATCH=1 mkdir build cd build cmake .. sudo make install cd .. Then you can run the tests by going to the folder cd tests. Next, follow steps in the tutorial, starting with mkdir build. P.S. You may also need to make sure your Python packages are up to date, following the instructions here. | 8 | 6 |
59,882,714 | 2020-1-23 | https://stackoverflow.com/questions/59882714/python-generating-a-list-of-dates-between-two-dates | I want to generate a list of dates between two dates and store them in a list in string format. This list is useful to compare with other dates I have. My code is given below: from datetime import date, timedelta sdate = date(2019,3,22) # start date edate = date(2019,4,9) # end date def dates_bwn_twodates(start_date, end_date): for n in range(int ((end_date - start_date).days)): yield start_date + timedelta(n) print(dates_bwn_twodates(sdate,edate)) My present output: <generator object dates_bwn_twodates at 0x000002A8E7929410> My expected output: ['2019-03-22',.....,'2019-04-08'] Something wrong in my code. | You can use pandas.date_range() for this: import pandas pandas.date_range(sdate,edate-timedelta(days=1),freq='d') DatetimeIndex(['2019-03-22', '2019-03-23', '2019-03-24', '2019-03-25', '2019-03-26', '2019-03-27', '2019-03-28', '2019-03-29', '2019-03-30', '2019-03-31', '2019-04-01', '2019-04-02', '2019-04-03', '2019-04-04', '2019-04-05', '2019-04-06', '2019-04-07', '2019-04-08'], dtype='datetime64[ns]', freq='D') | 97 | 146 |
59,870,193 | 2020-1-23 | https://stackoverflow.com/questions/59870193/is-there-a-function-in-pyspark-dataframe-that-is-similar-to-pandas-io-json-json | I would like to perform operation similar to pandas.io.json.json_normalize is pyspark dataframe. Is there an equivalent function in spark? https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.json.json_normalize.html | Spark has a similar function explode() but it is not entirely identical. Here is how explode works at a very high level. >>> from pyspark.sql.functions import explode, col >>> data = {'A': [1, 2]} >>> df = spark.createDataFrame(data) >>> df.show() +------+ | A| +------+ |[1, 2]| +------+ >>> df.select(explode(col('A')).alias('normalized')).show() +----------+ |normalized| +----------+ | 1| | 2| +----------+ On the other hand you could convert the Spark DataFrame to a Pandas DataFrame using: spark_df.toPandas() --> leverage json_normalize() and then revert back to a Spark DataFrame. To revert back to a Spark DataFrame you would use spark.createDataFrame(pandas_df). Please note that this back and forth solution is not ideal as calling toPandas(), results in all records of the DataFrame to be collected (.collect()) to the driver and could result in memory errors when working with larger datasets. The link below provides more insight on using toPandas(): DF.topandas() throwing error in pyspark Hope this helps and good luck! | 8 | 4 |
59,905,761 | 2020-1-25 | https://stackoverflow.com/questions/59905761/split-train-data-to-train-and-validation-by-using-tensorflow-datasets-load-tf-2 | I'm trying to run the following Colab project, but when I want to split the training data into validation and train parts I get this error: KeyError: "Invalid split train[:70%]. Available splits are: ['train']" I use the following code: (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', split=['train[:70%]', 'train[70%:]'], with_info=True, as_supervised=True, ) How I can fix this error? | According to the Tensorflow Dataset docs the approach you presented is now supported. Splitting is possible by passing split parameter to tfds.load like so split="test[:70%]". (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', split=['train[:70%]', 'train[70%:]'], with_info=True, as_supervised=True, ) With the above code the training_set has 2569 entries, while validation_set has 1101. Thank you Saman for the comment on API deprecation: In previous Tensorflow version it was possible to use tfds.Split API which is now deprecated: (training_set, validation_set), dataset_info = tfds.load( 'tf_flowers', split=[ tfds.Split.TRAIN.subsplit(tfds.percent[:70]), tfds.Split.TRAIN.subsplit(tfds.percent[70:]) ], with_info=True, as_supervised=True, ) | 10 | 9 |
59,868,527 | 2020-1-22 | https://stackoverflow.com/questions/59868527/how-can-i-upload-a-pil-image-object-to-a-discord-chat-without-saving-the-image | I'm trying to send a PIL Image object to a discord chat (I don't want to save the file though) I have a function that gathers images from the internet, joins them together vertically and then return a PIL Image object. The code below creates a file image from the PIL Image object on my local machine and then sends it to a Discord chat. I don't want to constantly be recreating and saving the file image on my machine. How can I just send the PIL Image object instead of having to save the image every time I send a request? from PIL import Image from io import BytesIO import requests import discord # Initializes Discord Client client = discord.Client() # List of market indexes indexes = [ 'https://finviz.com/image.ashx?dow', 'https://finviz.com/image.ashx?nasdaq', 'https://finviz.com/image.ashx?sp500' ] # Returns a vertical image of market indexes def create_image(): im = [] for index in indexes: response = requests.get(index) im.append(Image.open(BytesIO(response.content))) dst = Image.new('RGB', (im[0].width, im[0].height + im[1].height + im[2].height)) dst.paste(im[0], (0, 0)) dst.paste(im[1], (0, im[0].height)) dst.paste(im[2], (0, im[0].height + im[1].height)) return dst # Prints when bot is online @client.event async def on_ready(): print('{0.user} is online'.format(client)) # Uploads vertical image of market indexes when requested @client.event async def on_message(message): if message.content.startswith('^index'): create_image().save('index.png') await message.channel.send(file=discord.File('index.png')) SOLUTION: @client.event async def on_message(message): if message.content.startswith('^index'): with BytesIO() as image_binary: create_image().save(image_binary, 'PNG') image_binary.seek(0) await message.channel.send(file=discord.File(fp=image_binary, filename='image.png')) | Posting my solution as a separate answer. Thanks Ceres for the recommendation. @client.event async def on_message(message): if message.content.startswith('^index'): with BytesIO() as image_binary: create_image().save(image_binary, 'PNG') image_binary.seek(0) await message.channel.send(file=discord.File(fp=image_binary, filename='image.png')) | 9 | 4 |
59,897,093 | 2020-1-24 | https://stackoverflow.com/questions/59897093/get-all-keys-and-its-hierarchy-in-h5-file-using-python-library-h5py | Is there any way I can recursively get all keys in h5 file using python library h5py? I tried using the code below import h5py h5_data = h5py.File(h5_file_location, 'r') print(h5_data.keys()) but it only print the top level keys of the h5 file. | Some of the keys returned by keys() on a Group may be Datasets some may be sub Groups. In order to find all keys you need to recurse the Groups. Here is a simple script to do that: import h5py def allkeys(obj): "Recursively find all keys in an h5py.Group." keys = (obj.name,) if isinstance(obj, h5py.Group): for key, value in obj.items(): if isinstance(value, h5py.Group): keys = keys + allkeys(value) else: keys = keys + (value.name,) return keys h5 = h5py.File('/dev/null', 'w') h5.create_group('g1') h5.create_group('g2') h5.create_dataset('d1', (10,), 'i') h5.create_dataset('d2', (10, 10,), 'f') h5['g1'].create_group('g1') h5['g1'].create_dataset('d1', (10,), 'i') h5['g1'].create_dataset('d2', (10,), 'f') h5['g1/g1'].attrs['a'] = 'b' print(allkeys(h5)) Gives: ('/', '/d1', '/d2', '/g1', '/g1/d1', '/g1/d2', '/g1/g1', '/g2') | 10 | 10 |
59,868,987 | 2020-1-22 | https://stackoverflow.com/questions/59868987/saving-multiple-plots-into-a-single-html | I recently discovered plotly and find it really good for graphing, now I have a problem which I want to save multiple plot into a single html, how to do it please? *I want to save multiple plot, i.e fig, fig1, fig 2 and so on, NOT one subplot which has multiple plot in it, because I found that the plot within subplot is too small. | In the Plotly API there is a function to_html which returns HTML of the figure. Moreover, you can set option param full_html=False which will give you just DIV containing figure. You can just write multiple figures to one HTML by appending DIVs containing figures: with open('p_graph.html', 'a') as f: f.write(fig1.to_html(full_html=False, include_plotlyjs='cdn')) f.write(fig2.to_html(full_html=False, include_plotlyjs='cdn')) f.write(fig3.to_html(full_html=False, include_plotlyjs='cdn')) https://plot.ly/python-api-reference/generated/plotly.io.to_html.html You can also use Beautiful Soup to do DOM manipulation and insert DIV exactly where you need it in the HTML. https://beautiful-soup-4.readthedocs.io/en/latest/#append | 43 | 101 |
59,893,850 | 2020-1-24 | https://stackoverflow.com/questions/59893850/how-to-accumulate-gradients-in-tensorflow-2-0 | I'm training a model with tensorflow 2.0. The images in my training set are of different resolutions. The Model I've built can handle variable resolutions (conv layers followed by global averaging). My training set is very small and I want to use full training set in a single batch. Since my images are of different resolutions, I can't use model.fit(). So, I'm planning to pass each sample through the network individually, accumulate the errors/gradients and then apply one optimizer step. I'm able to compute loss values, but I don't know how to accumulate the losses/gradients. How can I accumulate the losses/gradients and then apply a single optimizer step? Code: for i in range(num_epochs): print(f'Epoch: {i + 1}') total_loss = 0 for j in tqdm(range(num_samples)): sample = samples[j] with tf.GradientTape as tape: prediction = self.model(sample) loss_value = self.loss_function(y_true=labels[j], y_pred=prediction) gradients = tape.gradient(loss_value, self.model.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables)) total_loss += loss_value epoch_loss = total_loss / num_samples print(f'Epoch loss: {epoch_loss}') | If I understand correctly from this statement: How can I accumulate the losses/gradients and then apply a single optimizer step? @Nagabhushan is trying to accumulate gradients and then apply the optimization on the (mean) accumulated gradient. The answer provided by @TensorflowSupport does not answers it. In order to perform the optimization only once, and accumulate the gradient from several tapes, you can do the following: for i in range(num_epochs): print(f'Epoch: {i + 1}') total_loss = 0 # get trainable variables train_vars = self.model.trainable_variables # Create empty gradient list (not a tf.Variable list) accum_gradient = [tf.zeros_like(this_var) for this_var in train_vars] for j in tqdm(range(num_samples)): sample = samples[j] with tf.GradientTape as tape: prediction = self.model(sample) loss_value = self.loss_function(y_true=labels[j], y_pred=prediction) total_loss += loss_value # get gradients of this tape gradients = tape.gradient(loss_value, train_vars) # Accumulate the gradients accum_gradient = [(acum_grad+grad) for acum_grad, grad in zip(accum_gradient, gradients)] # Now, after executing all the tapes you needed, we apply the optimization step # (but first we take the average of the gradients) accum_gradient = [this_grad/num_samples for this_grad in accum_gradient] # apply optimization step self.optimizer.apply_gradients(zip(accum_gradient,train_vars)) epoch_loss = total_loss / num_samples print(f'Epoch loss: {epoch_loss}') Using tf.Variable() should be avoided inside the training loop, since it will produce errors when trying to execute the code as a graph. If you use tf.Variable() inside your training function and then decorate it with "@tf.function" or apply "tf.function(my_train_fcn)" to obtain a graph function (i.e. for improved performance), the execution will rise an error. This happens because the tracing of the tf.Variable function results in a different behaviour than the observed in eager execution (re-utilization or creation, respectively). You can find more info on this in the tensorflow help page. | 8 | 9 |
59,887,436 | 2020-1-23 | https://stackoverflow.com/questions/59887436/importerror-cannot-import-name-packagefinder | after updating everything in conda, pip can't install anything conda update -n base conda conda update --all when install or upgrade anything, this error is show $ pip install --upgrade HDF5 Traceback (most recent call last): File "C:\ProgramData\Anaconda3\Scripts\pip-script.py", line 10, in <module> sys.exit(main()) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_internal\main.py", line 45, in main command = create_command(cmd_name, isolated=("--isolated" in cmd_args)) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_internal\commands\__init__.py", line 96, in create_command module = importlib.import_module(module_path) File "C:\ProgramData\Anaconda3\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_internal\commands\install.py", line 23, in <module> from pip._internal.cli.req_command import RequirementCommand File "C:\ProgramData\Anaconda3\lib\site-packages\pip\_internal\cli\req_command.py", line 17, in <module> from pip._internal.index import PackageFinder ImportError: cannot import name 'PackageFinder' any help please. thank you. | It seems that this works. Reinstall the latest version of pip: $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python get-pip.py When you’re done, delete the installation script: $ rm get-pip.py | 26 | 41 |
59,920,760 | 2020-1-26 | https://stackoverflow.com/questions/59920760/python-graphql-gql-client-authentication | I'm having a hard time using GraphQL with Python, since the suggested library, gql, is completely undocumented. However, I found out that to provide the API URL I need to pass a RequestsHTTPTransport object to Client like this: client = Client(transport=RequestsHTTPTransport(url='https://some.api.com/v3/graphql')) but how do I provide credentials like the Bearer key? PS: I noticed that RequestsHTTPTransport also accepts an auth param which is described as: :param auth: Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth However, I still cannot find out how to create this tuple or callable to work with a Bearer key :( Thanks in advance | You can add it in the headers. from gql import Client from gql.transport.requests import RequestsHTTPTransport reqHeaders = { 'x-api-key' : API_KEY, 'Authorization': 'Bearer ' + TOKEN_KEY # This is the key } _transport = RequestsHTTPTransport( url=API_ENDPOINT, headers = reqHeaders, use_json=True, ) client = Client( transport = _transport, fetch_schema_from_transport=True, ) | 8 | 7 |
59,938,619 | 2020-1-27 | https://stackoverflow.com/questions/59938619/does-the-django-address-module-provide-a-way-to-seed-the-initial-country-data | I'm using Django 2.0, Python 3.7, and MySql 5. I recently installed the django_address module. I noticed when I ran my initial migration based on my models.py file ... from django.db import models from address.models import AddressField from phonenumber_field.modelfields import PhoneNumberField class CoopType(models.Model): name = models.CharField(max_length=200, null=False) class Meta: unique_together = ("name",) class Coop(models.Model): type = models.ForeignKey(CoopType, on_delete=None) address = AddressField(on_delete=models.CASCADE) enabled = models.BooleanField(default=True, null=False) phone = PhoneNumberField(null=True) email = models.EmailField(null=True) web_site = models.TextField() It created some address tables, including ... mysql> show create table address_country; +-----------------+---------------------------------------------------+ | Table | Create Table | +-----------------+---------------------------------------------------+ | address_country | CREATE TABLE `address_country` ( | | | `id` int(11) NOT NULL AUTO_INCREMENT, | | | `name` varchar(40) COLLATE utf8_bin NOT NULL, | | | `code` varchar(2) COLLATE utf8_bin NOT NULL, | | | PRIMARY KEY (`id`), | | | UNIQUE KEY `name` (`name`) | | | ) ENGINE=InnoDB | | | DEFAULT CHARSET=utf8 COLLATE=utf8_bin | +-----------------+---------------------------------------------------+ However, this table has no data in it. Is there a way to obtain seed data for the table generated by the module or do I need to dig it up on my own? | I'd suggest you write a simple management command that imports data from pycountry into your address models (approach borrowed from here). pycountry is a wrapper around the ISO standard list of countries - i.e., it's about as canonical a list of countries as you're going to get. A management command to populate all the countries into your Country model would look something like this: import pycountry from django.core.management.base import BaseCommand, CommandError from address.models import Country class Command(BaseCommand): help = "Populates address.Country with data from pycountry." def handle(self, *args, **options): countries = [ Country( code=country.alpha_2, name=country.name[:40], # NOTE - concat to 40 chars because of limit on the model field ) for country in pycountry.countries ] Country.objects.bulk_create(countries) self.stdout.write("Successfully added %s countries." % len(countries)) This will populate your model with the ISO list of countries. One caveat here is that the address.Country.name field is limited to 40 characters (which seems like a questionable design decision to me, along with the decision not to make country codes unique - ISO 2-letter codes definitely are unique), so the script above truncates the name to fit. If this is a problem for you I suggest you set up your own address models, borrowing from django-address and raising the character limit. | 7 | 0 |
59,920,770 | 2020-1-26 | https://stackoverflow.com/questions/59920770/get-the-nearest-distance-with-two-geodataframe-in-pandas | Here is my first geodatframe : !pip install geopandas import pandas as pd import geopandas city1 = [{'City':"Buenos Aires","Country":"Argentina","Latitude":-34.58,"Longitude":-58.66}, {'City':"Brasilia","Country":"Brazil","Latitude":-15.78 ,"Longitude":-70.66}, {'City':"Santiago","Country":"Chile ","Latitude":-33.45 ,"Longitude":-70.66 }] city2 = [{'City':"Bogota","Country":"Colombia ","Latitude":4.60 ,"Longitude":-74.08}, {'City':"Caracas","Country":"Venezuela","Latitude":10.48 ,"Longitude":-66.86}] city1df = pd.DataFrame(city1) city2df = pd.DataFrame(city2) gcity1df = geopandas.GeoDataFrame( city1df, geometry=geopandas.points_from_xy(city1df.Longitude, city1df.Latitude)) gcity2df = geopandas.GeoDataFrame( city2df, geometry=geopandas.points_from_xy(city2df.Longitude, city2df.Latitude)) City1 City Country Latitude Longitude geometry 0 Buenos Aires Argentina -34.58 -58.66 POINT (-58.66000 -34.58000) 1 Brasilia Brazil -15.78 -47.91 POINT (-47.91000 -15.78000) 2 Santiago Chile -33.45 -70.66 POINT (-70.66000 -33.45000) and my second geodataframe : City2 : City Country Latitude Longitude geometry 1 Bogota Colombia 4.60 -74.08 POINT (-74.08000 4.60000) 2 Caracas Venezuela 10.48 -66.86 POINT (-66.86000 10.48000) i would like third dataframe with the nearest city from city1 to city2 with the distance like : City Country Latitude Longitude geometry Nearest Distance 0 Buenos Aires Argentina -34.58 -58.66 POINT (-58.66000 -34.58000) Bogota 111 Km Here is my actual solution using geodjango and dict (but it's way too long) : from django.contrib.gis.geos import GEOSGeometry result = [] dict_result = {} for city01 in city1 : dist = 99999999 pnt = GEOSGeometry('SRID=4326;POINT( '+str(city01["Latitude"])+' '+str(city01['Longitude'])+')') for city02 in city2: pnt2 = GEOSGeometry('SRID=4326;POINT('+str(city02['Latitude'])+' '+str(city02['Longitude'])+')') distance_test = pnt.distance(pnt2) * 100 if distance_test < dist : dist = distance_test result.append(dist) dict_result[city01['City']] = city02['City'] Here are my tryings : from shapely.ops import nearest_points # unary union of the gpd2 geomtries pts3 = gcity2df.geometry.unary_union def Euclidean_Dist(df1, df2, cols=['x_coord','y_coord']): return np.linalg.norm(df1[cols].values - df2[cols].values, axis=1) def near(point, pts=pts3): # find the nearest point and return the corresponding Place value nearest = gcity2df.geometry == nearest_points(point, pts)[1] return gcity2df[nearest].City gcity1df['Nearest'] = gcity1df.apply(lambda row: near(row.geometry), axis=1) gcity1df here : City Country Latitude Longitude geometry Nearest 0 Buenos Aires Argentina -34.58 -58.66 POINT (-58.66000 -34.58000) Bogota 1 Brasilia Brazil -15.78 -70.66 POINT (-70.66000 -15.78000) Bogota 2 Santiago Chile -33.45 -70.66 POINT (-70.66000 -33.45000) Bogota Regards | Firstly, I merge two data frames by cross join. And then, I found distance between two points using map in python. I use map, because most of the time it is much faster than apply, itertuples, iterrows etc. (Reference: https://stackoverflow.com/a/52674448/8205554) Lastly, I group by data frame and fetch minimum values of distance. 
Here are libraries, import pandas as pd import geopandas import geopy.distance from math import radians, cos, sin, asin, sqrt Here are used functions, def dist1(p1, p2): lon1, lat1, lon2, lat2 = map(radians, [p1.x, p1.y, p2.x, p2.y]) dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) return c * 6373 def dist2(p1, p2): lon1, lat1, lon2, lat2 = map(radians, [p1[0], p1[1], p2[0], p2[1]]) dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) return c * 6373 def dist3(p1, p2): x = p1.y, p1.x y = p2.y, p2.x return geopy.distance.geodesic(x, y).km def dist4(p1, p2): x = p1[1], p1[0] y = p2[1], p2[0] return geopy.distance.geodesic(x, y).km And data, city1 = [ { 'City': 'Buenos Aires', 'Country': 'Argentina', 'Latitude': -34.58, 'Longitude': -58.66 }, { 'City': 'Brasilia', 'Country': 'Brazil', 'Latitude': -15.78, 'Longitude': -70.66 }, { 'City': 'Santiago', 'Country': 'Chile ', 'Latitude': -33.45, 'Longitude': -70.66 } ] city2 = [ { 'City': 'Bogota', 'Country': 'Colombia ', 'Latitude': 4.6, 'Longitude': -74.08 }, { 'City': 'Caracas', 'Country': 'Venezuela', 'Latitude': 10.48, 'Longitude': -66.86 } ] city1df = pd.DataFrame(city1) city2df = pd.DataFrame(city2) Cross join with geopandas data frames, gcity1df = geopandas.GeoDataFrame( city1df, geometry=geopandas.points_from_xy(city1df.Longitude, city1df.Latitude) ) gcity2df = geopandas.GeoDataFrame( city2df, geometry=geopandas.points_from_xy(city2df.Longitude, city2df.Latitude) ) # cross join geopandas gcity1df['key'] = 1 gcity2df['key'] = 1 merged = gcity1df.merge(gcity2df, on='key') math functions and geopandas, # 6.64 ms ± 588 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %%timeit # find distance merged['dist'] = list(map(dist1, merged['geometry_x'], merged['geometry_y'])) mapping = { 'City_x': 'City', 'Country_x': 'Country', 'Latitude_x': 'Latitude', 'Longitude_x': 'Longitude', 'geometry_x': 'geometry', 'City_y': 'Nearest', 'dist': 'Distance' } nearest = merged.loc[merged.groupby(['City_x', 'Country_x'])['dist'].idxmin()] nearest.rename(columns=mapping)[list(mapping.values())] City Country Latitude Longitude geometry \ 2 Brasilia Brazil -15.78 -70.66 POINT (-70.66000 -15.78000) 0 Buenos Aires Argentina -34.58 -58.66 POINT (-58.66000 -34.58000) 4 Santiago Chile -33.45 -70.66 POINT (-70.66000 -33.45000) Nearest Distance 2 Bogota 2297.922808 0 Bogota 4648.004515 4 Bogota 4247.586882 geopy and geopandas, # 9.99 ms ± 764 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %%timeit # find distance merged['dist'] = list(map(dist3, merged['geometry_x'], merged['geometry_y'])) mapping = { 'City_x': 'City', 'Country_x': 'Country', 'Latitude_x': 'Latitude', 'Longitude_x': 'Longitude', 'geometry_x': 'geometry', 'City_y': 'Nearest', 'dist': 'Distance' } nearest = merged.loc[merged.groupby(['City_x', 'Country_x'])['dist'].idxmin()] nearest.rename(columns=mapping)[list(mapping.values())] City Country Latitude Longitude geometry \ 2 Brasilia Brazil -15.78 -70.66 POINT (-70.66000 -15.78000) 0 Buenos Aires Argentina -34.58 -58.66 POINT (-58.66000 -34.58000) 4 Santiago Chile -33.45 -70.66 POINT (-70.66000 -33.45000) Nearest Distance 2 Bogota 2285.239605 0 Bogota 4628.641817 4 Bogota 4226.710978 If you want to use pandas instead of geopandas, # cross join pandas city1df['key'] = 1 city2df['key'] = 1 merged = city1df.merge(city2df, on='key') With math functions, # 8.65 ms ± 2.21 ms per loop (mean ± std. dev. 
of 7 runs, 100 loops each) %%timeit # find distance merged['dist'] = list( map( dist2, merged[['Longitude_x', 'Latitude_x']].values, merged[['Longitude_y', 'Latitude_y']].values ) ) mapping = { 'City_x': 'City', 'Country_x': 'Country', 'Latitude_x': 'Latitude', 'Longitude_x': 'Longitude', 'City_y': 'Nearest', 'dist': 'Distance' } nearest = merged.loc[merged.groupby(['City_x', 'Country_x'])['dist'].idxmin()] nearest.rename(columns=mapping)[list(mapping.values())] City Country Latitude Longitude Nearest Distance 2 Brasilia Brazil -15.78 -70.66 Bogota 2297.922808 0 Buenos Aires Argentina -34.58 -58.66 Bogota 4648.004515 4 Santiago Chile -33.45 -70.66 Bogota 4247.586882 With geopy, # 9.8 ms ± 807 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %%timeit # find distance merged['dist'] = list( map( dist4, merged[['Longitude_x', 'Latitude_x']].values, merged[['Longitude_y', 'Latitude_y']].values ) ) mapping = { 'City_x': 'City', 'Country_x': 'Country', 'Latitude_x': 'Latitude', 'Longitude_x': 'Longitude', 'City_y': 'Nearest', 'dist': 'Distance' } nearest = merged.loc[merged.groupby(['City_x', 'Country_x'])['dist'].idxmin()] nearest.rename(columns=mapping)[list(mapping.values())] City Country Latitude Longitude Nearest Distance 2 Brasilia Brazil -15.78 -70.66 Bogota 2285.239605 0 Buenos Aires Argentina -34.58 -58.66 Bogota 4628.641817 4 Santiago Chile -33.45 -70.66 Bogota 4226.710978 | 14 | 13 |
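A note on scaling: the cross-join approach in the answer above builds len(city1) x len(city2) rows, which gets expensive for large frames. A rough sketch of an alternative using scikit-learn's BallTree with the haversine metric (this assumes scikit-learn is installed; the column names are taken from the question, and 6371 km is used as the mean Earth radius):

import numpy as np
import pandas as pd
from sklearn.neighbors import BallTree

city1df = pd.DataFrame([
    {'City': 'Buenos Aires', 'Country': 'Argentina', 'Latitude': -34.58, 'Longitude': -58.66},
    {'City': 'Brasilia', 'Country': 'Brazil', 'Latitude': -15.78, 'Longitude': -70.66},
    {'City': 'Santiago', 'Country': 'Chile', 'Latitude': -33.45, 'Longitude': -70.66},
])
city2df = pd.DataFrame([
    {'City': 'Bogota', 'Country': 'Colombia', 'Latitude': 4.60, 'Longitude': -74.08},
    {'City': 'Caracas', 'Country': 'Venezuela', 'Latitude': 10.48, 'Longitude': -66.86},
])

# The haversine metric expects [lat, lon] pairs expressed in radians.
tree = BallTree(np.radians(city2df[['Latitude', 'Longitude']].values), metric='haversine')
dist, idx = tree.query(np.radians(city1df[['Latitude', 'Longitude']].values), k=1)

city1df['Nearest'] = city2df['City'].iloc[idx.ravel()].values
city1df['Distance_km'] = dist.ravel() * 6371  # angular distance times mean Earth radius
print(city1df[['City', 'Nearest', 'Distance_km']])

The tree query avoids materialising the full pairwise product, so it stays usable when both tables grow.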
59,894,984 | 2020-1-24 | https://stackoverflow.com/questions/59894984/torch-installation-results-in-not-supported-wheel-on-this-platform | Tried running pip3 install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html first, taken from the PyTorch website, which resulted in No matching distribution found for torch===1.4.0 and Could not find a version that satisfies the requirement torch===1.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) Finally downloaded the .whl file from the downloads page and tried installing locally like so 'C:\Users\Raf\AppData\Local\Programs\Python\Python38\Scripts\pip.exe' install torch-1.4.0+cpu-cp37-cp37m-win_amd64.whl after which I get torch-1.4.0+cpu-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform. using 64-bit Python 3.8 on 64-bit Windows | You are using 64-bit Python 3.8, but you downloaded the cp37 wheel, which is for Python 3.7. There is currently no whl file available for Python 3.8. So either install from source (probably not recommended), install a different Python version, or create a virtual environment with Python 3.7. Update There is now: https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp38-cp38-win_amd64.whl | 6 | 8 |
59,856,694 | 2020-1-22 | https://stackoverflow.com/questions/59856694/how-to-get-around-slow-groupby-for-a-sparse-matrix | I have a large matrix (~200 million rows) describing a list of actions that occurred every day (there are ~10000 possible actions). My final goal is to create a co-occurrence matrix showing which actions happen during the same days. Here is an example dataset: data = {'date': ['01', '01', '01', '02','02','03'], 'action': [100, 101, 989855552, 100, 989855552, 777]} df = pd.DataFrame(data, columns = ['date','action']) I tried to create a sparse matrix with pd.get_dummies, but unravelling the matrix and using groupby on it is extremely slow, taking 6 minutes for just 5000 rows. # Create a sparse matrix of dummies dum = pd.get_dummies(df['action'], sparse = True) df = df.drop(['action'], axis = 1) df = pd.concat([df, dum], axis = 1) # Use groupby to get a single row for each date, showing whether each action occurred. # The groupby command here is the bottleneck. cols = list(df.columns) del cols[0] df = df.groupby('date')[cols].max() # Create a co-occurrence matrix by using dot-product of sparse matrices cooc = df.T.dot(df) I've also tried: getting the dummies in non-sparse format; using groupby for aggregation; going to sparse format before matrix multiplication. But I fail in step 1, since there is not enough RAM to create such a large matrix. I would greatly appreciate your help. | I came up with an answer using only sparse matrices based on this post. The code is fast, taking about 10 seconds for 10 million rows (my previous code took 6 minutes for 5000 rows and was not scalable). The time and memory savings come from working with sparse matrices until the very last step when it is necessary to unravel the (already small) co-occurrence matrix before export. ## Get unique values for date and action date_c = CategoricalDtype(sorted(df.date.unique()), ordered=True) action_c = CategoricalDtype(sorted(df.action.unique()), ordered=True) ## Add an auxiliary variable df['count'] = 1 ## Define a sparse matrix row = df.date.astype(date_c).cat.codes col = df.action.astype(action_c).cat.codes sparse_matrix = csr_matrix((df['count'], (row, col)), shape=(date_c.categories.size, action_c.categories.size)) ## Compute dot product with sparse matrix cooc_sparse = sparse_matrix.T.dot(sparse_matrix) ## Unravel co-occurrence matrix into dense shape cooc = pd.DataFrame(cooc_sparse.todense(), index = action_c.categories, columns = action_c.categories) | 7 | 3 |
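The accepted answer's snippet relies on two imports it does not show (CategoricalDtype and csr_matrix). A self-contained version of the same idea, using the toy data from the question and assuming pandas, numpy, and scipy are installed:

import numpy as np
import pandas as pd
from pandas.api.types import CategoricalDtype
from scipy.sparse import csr_matrix

data = {'date': ['01', '01', '01', '02', '02', '03'],
        'action': [100, 101, 989855552, 100, 989855552, 777]}
df = pd.DataFrame(data, columns=['date', 'action'])

# Categorical codes give compact integer row/column indices.
date_c = CategoricalDtype(sorted(df.date.unique()), ordered=True)
action_c = CategoricalDtype(sorted(df.action.unique()), ordered=True)
row = df.date.astype(date_c).cat.codes
col = df.action.astype(action_c).cat.codes

# Sparse date-by-action incidence matrix, then a sparse dot product.
sparse_matrix = csr_matrix((np.ones(len(df), dtype=int), (row, col)),
                           shape=(date_c.categories.size, action_c.categories.size))
cooc_sparse = sparse_matrix.T.dot(sparse_matrix)

# Only the final, small action-by-action matrix is densified.
cooc = pd.DataFrame(cooc_sparse.todense(),
                    index=action_c.categories, columns=action_c.categories)
print(cooc)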
59,956,479 | 2020-1-28 | https://stackoverflow.com/questions/59956479/python-warning-plotly-graph-objs-line-is-deprecated | Although everything works fine, I would like to know whether there is a way to fix what is provoking this warning: plotly.graph_objs.Line is deprecated. Please replace it with one of the following more specific types plotly.graph_objs.scatter.Line plotly.graph_objs.layout.shape.Line etc. | How to fix warnings about deprecated functions may vary, depending on the packages in use. In my particular case, I was using the "Line" function from the "plotly" package. This function was being called from another package. In the latter package there was a .py file (I used "IDLE (Python 3.8 64-bit)" to edit it) that had the following code line: from plotly.graph_objs import Line Then, I first tried replacing that line with the suggestions provided by the warning. After a couple of tries, I ended up using the code line: from plotly.graph_objs.scatter.marker import Line My final script works fine as it did before, but this time with no warnings at all. Note: In my case, the packages were installed in "C:\Users\NIP\AppData\Roaming\Python\Python38\site-packages" I hope this helps. | 8 | 2 |
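As a rough illustration of the fix (the exact Line class to import depends on where the old go.Line object was being passed; scatter.Line is one of the specific types the warning lists), styling a scatter trace would look something like this:

import plotly.graph_objs as go
from plotly.graph_objs.scatter import Line  # instead of the deprecated go.Line

trace = go.Scatter(
    x=[0, 1, 2],
    y=[3, 1, 2],
    line=Line(color='royalblue', width=2),  # styles the trace line
)
go.Figure(data=[trace]).show()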
59,937,482 | 2020-1-27 | https://stackoverflow.com/questions/59937482/100-classifier-accuracy-after-using-train-test-split | I'm working on the mushroom classification data set (found here: https://www.kaggle.com/uciml/mushroom-classification). I'm trying to split my data into training and testing sets for my models, however if i use the train_test_split method my models always achieve 100% accuracy. This is not the case when i split my data manually. x = data.copy() y = x['class'] del x['class'] x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33) model = xgb.XGBClassifier() model.fit(x_train, y_train) predictions = model.predict(x_test) print(confusion_matrix(y_test, predictions)) print(accuracy_score(y_test, predictions)) This produces: [[1299 0] [ 0 1382]] 1.0 If I split the data manually I get a more reasonable result. x = data.copy() y = x['class'] del x['class'] x_train = x[0:5443] x_test = x[5444:] y_train = y[0:5443] y_test = y[5444:] model = xgb.XGBClassifier() model.fit(x_train, y_train) predictions = model.predict(x_test) print(confusion_matrix(y_test, predictions)) print(accuracy_score(y_test, predictions)) Result: [[2007 0] [ 336 337]] 0.8746268656716418 What could be causing this behaviour? Edit: As per request I'm including shapes of slices. train_test_split: x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33) print(x_train.shape) print(y_train.shape) print(x_test.shape) print(y_test.shape) Result: (5443, 64) (5443,) (2681, 64) (2681,) Manual split: x_train = x[0:5443] x_test = x[5444:] y_train = y[0:5443] y_test = y[5444:] print(x_train.shape) print(y_train.shape) print(x_test.shape) print(y_test.shape) Result: (5443, 64) (5443,) (2680, 64) (2680,) I've tried defining my own split function and the resulting split also results in 100% classifier accuracy. Here's the code for the split def split_data(dataFrame, testRatio): dataCopy = dataFrame.copy() testCount = int(len(dataFrame)*testRatio) dataCopy = dataCopy.sample(frac = 1) y = dataCopy['class'] del dataCopy['class'] return dataCopy[testCount:], dataCopy[0:testCount], y[testCount:], y[0:testCount] | You got lucky there on your train_test_split. The split you are doing manually may be having the most unseen data, which is doing better validation than the train_test_split which internally shuffled the data to split it. For better validation use K-fold cross validation, which will allow to verify the model accuracy with each of the different parts in your data as your test and rest part as train. | 8 | 3 |
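A minimal sketch of the K-fold suggestion from the answer, assuming data is the raw mushroom dataframe from the question and that xgboost and scikit-learn are installed (newer xgboost versions want numeric labels, hence the LabelEncoder):

import pandas as pd
import xgboost as xgb
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder

x = pd.get_dummies(data.drop(columns=['class']))   # same one-hot encoding as in the question
y = LabelEncoder().fit_transform(data['class'])

model = xgb.XGBClassifier()
# Every row is used for testing exactly once across the 5 folds.
scores = cross_val_score(model, x, y, cv=5, scoring='accuracy')
print(scores)
print(scores.mean(), scores.std())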
59,944,653 | 2020-1-28 | https://stackoverflow.com/questions/59944653/foreign-key-to-the-same-table-in-python-peewee | I am using the peewee ORM for SQLite in Python. I would like to create a table Item with a field parent_id that will be a foreign key to Item: from peewee import * db = SqliteDatabase("data.db") class Item(Model): id = AutoField() parent_id = ForeignKeyField(Item, null = True) class Meta: database = db db.create_tables([Item]) However, there is an error because of the circular foreign key: NameError: free variable 'Item' referenced before assignment in enclosing scope For this case, there is DeferredForeignKey in peewee: from peewee import * db = SqliteDatabase("data.db") class Item(Model): id = AutoField() parent_id = DeferredForeignKey("Item", null = True) class Meta: database = db db.create_tables([Item]) Item._schema.create_foreign_key(Item.parent_id) Unfortunately, there is no ADD CONSTRAINT in SQLite, so another error appears: peewee.OperationalError: near "CONSTRAINT": syntax error Is there any way to create a circular foreign key in SQLite using peewee, or do I have to use a plain integer instead of a foreign key, or native SQL instead of the ORM? | This is documented very clearly: http://docs.peewee-orm.com/en/latest/peewee/models.html#self-referential-foreign-keys You just put 'self' as the identifier: class Item(Model): id = AutoField() parent = ForeignKeyField('self', backref='children', null=True) class Meta: database = db You do not need to mess with any deferred keys or anything. | 7 | 10 |
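For completeness, a small usage sketch of the self-referential model from the answer; the rows created here are made up purely for illustration:

from peewee import AutoField, ForeignKeyField, Model, SqliteDatabase

db = SqliteDatabase("data.db")

class Item(Model):
    id = AutoField()
    parent = ForeignKeyField('self', backref='children', null=True)

    class Meta:
        database = db

db.connect()
db.create_tables([Item])

root = Item.create()               # parent stays NULL
child = Item.create(parent=root)   # the foreign key points back at the same table

# backref='children' adds the reverse accessor on the parent side.
for item in root.children:
    print(item.id, item.parent_id)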
59,957,089 | 2020-1-28 | https://stackoverflow.com/questions/59957089/how-to-send-one-gcode-command-over-usb | I am trying to write a simple python script that sends a gcode command to my wanhao D9 motherboard printer, running Marlin. I am running the python script on a raspberry pi that is connected to the printer via USB. import serial ser = serial.Serial("/dev/ttyUSB0", 115200) ser.write("G28\n") I have read over 20 forum pages with simular problems and tried their answers such as changing the baud rate to 250000 and the following changes to the write function parameter: ser.write("G28\r\n") ser.write(b'G28\r\n') ser.write(b'G28\n') ser.write(b'G28') ser.write("G28") I have tried all these combinations, and I have also added: time.sleep(5) along with the relevant import statement for the time module at the top of my file. I added this line of code between my ser declaration and my ser.write function call. I have also tried adding: ser.close() to see if that would make a difference but it has not, as I know this is best practice anyway. No matter what combination of this code I used, when I run my python script my printer seems to restart (the screen changes from the home page to the opening wanhao logo and back to the home page) I look forward to any help anyone can give me in regards to my code and what I may be doing wrong. | Adding an extra sleep period after my command fixed my issue. I can also now read back the initial set up feedback from the printer. My final code without this is: import serial import time ser = serial.Serial('/dev/ttyUSB0', 115200) time.sleep(2) ser.write("G28\r\n") time.sleep(1) ser.close() Thank you, to the users in this post for guiding me in the right direction Full examples of using pySerial package This code works for me, I hope someone else finds it useful in the future too. | 8 | 6 |
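One caveat, assuming Python 3 and pyserial 3.x: write() expects bytes there, so the string literals in the answer's snippet would raise a TypeError on Python 3. A byte-string variant of the same idea:

import serial
import time

ser = serial.Serial('/dev/ttyUSB0', 115200)
time.sleep(2)                    # let the board finish the reset triggered by opening the port
ser.write(b"G28\r\n")            # bytes, not str, on Python 3
time.sleep(1)
print(ser.read(ser.in_waiting))  # read back whatever the firmware has queued, if anything
ser.close()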
59,956,496 | 2020-1-28 | https://stackoverflow.com/questions/59956496/f-strings-formatter-including-for-loop-or-if-conditions | How can I insert for loops or if expressions inside an f-string? I thought initially of doing something like this for if expressions: f'{a:{"s" if CONDITION else "??"}}' What I would like to do though is something like: Example 1 f'{key: value\n for key, value in dict.items()}' result: if dict = {'a': 1, 'b': 2} a: 1 b: 2 or Example 2 c = 'hello' f'{c} {name if name else "unknown"}' result: if name exists, e.g. name = 'Mike' hello Mike otherwise hello unknown Can this be done and if yes how? | Both ternaries ("if expressions") and comprehensions ("for expressions") are allowed inside f-strings. However, they must be part of expressions that evaluate to strings. For example, key: value is a dict pair, and f"{key}: {value}" is required to produce a string. >>> dct = {'a': 1, 'b': 2} >>> newline = "\n" # \escapes are not allowed inside f-strings >>> print(f'{newline.join(f"{key}: {value}" for key, value in dct.items())}') a: 1 b: 2 Note that if the entire f-string is a single format expression, it is simpler to just evaluate the expression directly. >>> print("\n".join(f"{key}: {value}" for key, value in dct.items()))) a: 1 b: 2 Expressions inside format strings still follow their regular semantics. For example, a ternary may test whether an existing name is true. It will fail if the name is not defined. >>> c, name = "Hello", "" >>> f'{c} {name if name else "unknown"}' 'Hello unknown' >>> del name >>> f'{c} {name if name else "unknown"}' NameError: name 'name' is not defined | 9 | 25 |
59,956,670 | 2020-1-28 | https://stackoverflow.com/questions/59956670/parsing-city-of-origin-destination-city-from-a-string | I have a pandas dataframe where one column is a bunch of strings with certain travel details. My goal is to parse each string to extract the city of origin and destination city (I would like to ultimately have two new columns titled 'origin' and 'destination'). The data: df_col = [ 'new york to venice, italy for usd271', 'return flights from brussels to bangkok with etihad from €407', 'from los angeles to guadalajara, mexico for usd191', 'fly to australia new zealand from paris from €422 return including 2 checked bags' ] This should result in: Origin: New York, USA; Destination: Venice, Italy Origin: Brussels, BEL; Destination: Bangkok, Thailand Origin: Los Angeles, USA; Destination: Guadalajara, Mexico Origin: Paris, France; Destination: Australia / New Zealand (this is a complicated case given two countries) Thus far I have tried: A variety of NLTK methods, but what has gotten me closest is using the nltk.pos_tag method to tag each word in the string. The result is a list of tuples with each word and associated tag. Here's an example... [('Fly', 'NNP'), ('to', 'TO'), ('Australia', 'NNP'), ('&', 'CC'), ('New', 'NNP'), ('Zealand', 'NNP'), ('from', 'IN'), ('Paris', 'NNP'), ('from', 'IN'), ('€422', 'NNP'), ('return', 'NN'), ('including', 'VBG'), ('2', 'CD'), ('checked', 'VBD'), ('bags', 'NNS'), ('!', '.')] I am stuck at this stage and am unsure how to best implement this. Can anyone point me in the right direction, please? Thanks. | TL;DR Pretty much impossible at first glance, unless you have access to some API that contains pretty sophisticated components. In Long From first look, it seems like you're asking to solve a natural language problem magically. But lets break it down and scope it to a point where something is buildable. First, to identify countries and cities, you need data that enumerates them, so lets try: https://www.google.com/search?q=list+of+countries+and+cities+in+the+world+json And top of the search results, we find https://datahub.io/core/world-cities that leads to the world-cities.json file. Now we load them into sets of countries and cities. import requests import json cities_url = "https://pkgstore.datahub.io/core/world-cities/world-cities_json/data/5b3dd46ad10990bca47b04b4739a02ba/world-cities_json.json" cities_json = json.loads(requests.get(cities_url).content.decode('utf8')) countries = set([city['country'] for city in cities_json]) cities = set([city['name'] for city in cities_json]) Now given data, lets try to build component ONE: Task: Detect if any substring in the texts matches a city/country. Tool: https://github.com/vi3k6i5/flashtext (a fast string search/match) Metric: No. of correctly identified cities/countries in string Lets put them together. 
import requests import json from flashtext import KeywordProcessor cities_url = "https://pkgstore.datahub.io/core/world-cities/world-cities_json/data/5b3dd46ad10990bca47b04b4739a02ba/world-cities_json.json" cities_json = json.loads(requests.get(cities_url).content.decode('utf8')) countries = set([city['country'] for city in cities_json]) cities = set([city['name'] for city in cities_json]) keyword_processor = KeywordProcessor(case_sensitive=False) keyword_processor.add_keywords_from_list(sorted(countries)) keyword_processor.add_keywords_from_list(sorted(cities)) texts = ['new york to venice, italy for usd271', 'return flights from brussels to bangkok with etihad from €407', 'from los angeles to guadalajara, mexico for usd191', 'fly to australia new zealand from paris from €422 return including 2 checked bags'] keyword_processor.extract_keywords(texts[0]) [out]: ['York', 'Venice', 'Italy'] Hey, what went wrong?! Doing due diligence, first hunch is that "new york" is not in the data, >>> "New York" in cities False What the?! #$%^&* For sanity sake, we check these: >>> len(countries) 244 >>> len(cities) 21940 Yes, you cannot just trust a single data source, so lets try to fetch all data sources. From https://www.google.com/search?q=list+of+countries+and+cities+in+the+world+json, you find another link https://github.com/dr5hn/countries-states-cities-database Lets munge this... import requests import json cities_url = "https://pkgstore.datahub.io/core/world-cities/world-cities_json/data/5b3dd46ad10990bca47b04b4739a02ba/world-cities_json.json" cities1_json = json.loads(requests.get(cities_url).content.decode('utf8')) countries1 = set([city['country'] for city in cities1_json]) cities1 = set([city['name'] for city in cities1_json]) dr5hn_cities_url = "https://raw.githubusercontent.com/dr5hn/countries-states-cities-database/master/cities.json" dr5hn_countries_url = "https://raw.githubusercontent.com/dr5hn/countries-states-cities-database/master/countries.json" cities2_json = json.loads(requests.get(dr5hn_cities_url).content.decode('utf8')) countries2_json = json.loads(requests.get(dr5hn_countries_url).content.decode('utf8')) countries2 = set([c['name'] for c in countries2_json]) cities2 = set([c['name'] for c in cities2_json]) countries = countries2.union(countries1) cities = cities2.union(cities1) And now that we are neurotic, we do sanity checks. >>> len(countries) 282 >>> len(cities) 127793 Wow, that's a lot more cities than previously. Lets try the flashtext code again. from flashtext import KeywordProcessor keyword_processor = KeywordProcessor(case_sensitive=False) keyword_processor.add_keywords_from_list(sorted(countries)) keyword_processor.add_keywords_from_list(sorted(cities)) texts = ['new york to venice, italy for usd271', 'return flights from brussels to bangkok with etihad from €407', 'from los angeles to guadalajara, mexico for usd191', 'fly to australia new zealand from paris from €422 return including 2 checked bags'] keyword_processor.extract_keywords(texts[0]) [out]: ['York', 'Venice', 'Italy'] Seriously?! There is no New York?! $%^&* Okay, for more sanity checks, lets just look for "york" in the list of cities. 
>>> [c for c in cities if 'york' in c.lower()] ['Yorklyn', 'West York', 'West New York', 'Yorktown Heights', 'East Riding of Yorkshire', 'Yorke Peninsula', 'Yorke Hill', 'Yorktown', 'Jefferson Valley-Yorktown', 'New York Mills', 'City of York', 'Yorkville', 'Yorkton', 'New York County', 'East York', 'East New York', 'York Castle', 'York County', 'Yorketown', 'New York City', 'York Beach', 'Yorkshire', 'North Yorkshire', 'Yorkeys Knob', 'York', 'York Town', 'York Harbor', 'North York'] Eureka! It's because it's call "New York City" and not "New York"! You: What kind of prank is this?! Linguist: Welcome to the world of natural language processing, where natural language is a social construct subjective to communal and idiolectal variant. You: Cut the crap, tell me how to solve this. NLP Practitioner (A real one that works on noisy user-generate texts): You just have to add to the list. But before that, check your metric given the list you already have. For every texts in your sample "test set", you should provide some truth labels to make sure you can "measure your metric". from itertools import zip_longest from flashtext import KeywordProcessor keyword_processor = KeywordProcessor(case_sensitive=False) keyword_processor.add_keywords_from_list(sorted(countries)) keyword_processor.add_keywords_from_list(sorted(cities)) texts_labels = [('new york to venice, italy for usd271', ('New York', 'Venice', 'Italy')), ('return flights from brussels to bangkok with etihad from €407', ('Brussels', 'Bangkok')), ('from los angeles to guadalajara, mexico for usd191', ('Los Angeles', 'Guadalajara')), ('fly to australia new zealand from paris from €422 return including 2 checked bags', ('Australia', 'New Zealand', 'Paris'))] # No. of correctly extracted terms. true_positives = 0 false_positives = 0 total_truth = 0 for text, label in texts_labels: extracted = keyword_processor.extract_keywords(text) # We're making some assumptions here that the order of # extracted and the truth must be the same. true_positives += sum(1 for e, l in zip_longest(extracted, label) if e == l) false_positives += sum(1 for e, l in zip_longest(extracted, label) if e != l) total_truth += len(label) # Just visualization candies. print(text) print(extracted) print(label) print() Actually, it doesn't look that bad. We get an accuracy of 90%: >>> true_positives / total_truth 0.9 But I %^&*(-ing want 100% extraction!! Alright, alright, so look at the "only" error that the above approach is making, it's simply that "New York" isn't in the list of cities. You: Why don't we just add "New York" to the list of cities, i.e. keyword_processor.add_keyword('New York') print(texts[0]) print(keyword_processor.extract_keywords(texts[0])) [out]: ['New York', 'Venice', 'Italy'] You: See, I did it!!! Now I deserve a beer. Linguist: How about 'I live in Marawi'? >>> keyword_processor.extract_keywords('I live in Marawi') [] NLP Practitioner (chiming in): How about 'I live in Jeju'? >>> keyword_processor.extract_keywords('I live in Jeju') [] A Raymond Hettinger fan (from farway): "There must be a better way!" Yes, there is what if we just try something silly like adding keywords of cities that ends with "City" into our keyword_processor? for c in cities: if 'city' in c.lower() and c.endswith('City') and c[:-5] not in cities: if c[:-5].strip(): keyword_processor.add_keyword(c[:-5]) print(c[:-5]) It works! 
Now lets retry our regression test examples: from itertools import zip_longest from flashtext import KeywordProcessor keyword_processor = KeywordProcessor(case_sensitive=False) keyword_processor.add_keywords_from_list(sorted(countries)) keyword_processor.add_keywords_from_list(sorted(cities)) for c in cities: if 'city' in c.lower() and c.endswith('City') and c[:-5] not in cities: if c[:-5].strip(): keyword_processor.add_keyword(c[:-5]) texts_labels = [('new york to venice, italy for usd271', ('New York', 'Venice', 'Italy')), ('return flights from brussels to bangkok with etihad from €407', ('Brussels', 'Bangkok')), ('from los angeles to guadalajara, mexico for usd191', ('Los Angeles', 'Guadalajara')), ('fly to australia new zealand from paris from €422 return including 2 checked bags', ('Australia', 'New Zealand', 'Paris')), ('I live in Florida', ('Florida')), ('I live in Marawi', ('Marawi')), ('I live in jeju', ('Jeju'))] # No. of correctly extracted terms. true_positives = 0 false_positives = 0 total_truth = 0 for text, label in texts_labels: extracted = keyword_processor.extract_keywords(text) # We're making some assumptions here that the order of # extracted and the truth must be the same. true_positives += sum(1 for e, l in zip_longest(extracted, label) if e == l) false_positives += sum(1 for e, l in zip_longest(extracted, label) if e != l) total_truth += len(label) # Just visualization candies. print(text) print(extracted) print(label) print() [out]: new york to venice, italy for usd271 ['New York', 'Venice', 'Italy'] ('New York', 'Venice', 'Italy') return flights from brussels to bangkok with etihad from €407 ['Brussels', 'Bangkok'] ('Brussels', 'Bangkok') from los angeles to guadalajara, mexico for usd191 ['Los Angeles', 'Guadalajara', 'Mexico'] ('Los Angeles', 'Guadalajara') fly to australia new zealand from paris from €422 return including 2 checked bags ['Australia', 'New Zealand', 'Paris'] ('Australia', 'New Zealand', 'Paris') I live in Florida ['Florida'] Florida I live in Marawi ['Marawi'] Marawi I live in jeju ['Jeju'] Jeju 100% Yeah, NLP-bunga !!! But seriously, this is only the tip of the problem. What happens if you have a sentence like this: >>> keyword_processor.extract_keywords('Adam flew to Bangkok from Singapore and then to China') ['Adam', 'Bangkok', 'Singapore', 'China'] WHY is Adam extracted as a city?! Then you do some more neurotic checks: >>> 'Adam' in cities Adam Congratulations, you've jumped into another NLP rabbit hole of polysemy where the same word has different meaning, in this case, Adam most probably refer to a person in the sentence but it is also coincidentally the name of a city (according to the data you've pulled from). I see what you did there... Even if we ignore this polysemy nonsense, you are still not giving me the desired output: [in]: ['new york to venice, italy for usd271', 'return flights from brussels to bangkok with etihad from €407', 'from los angeles to guadalajara, mexico for usd191', 'fly to australia new zealand from paris from €422 return including 2 checked bags' ] [out]: Origin: New York, USA; Destination: Venice, Italy Origin: Brussels, BEL; Destination: Bangkok, Thailand Origin: Los Angeles, USA; Destination: Guadalajara, Mexico Origin: Paris, France; Destination: Australia / New Zealand (this is a complicated case given two countries) Linguist: Even with the assumption that the preposition (e.g. 
from, to) preceding the city gives you the "origin" / "destination" tag, how are you going to handle the case of "multi-leg" flights, e.g. >>> keyword_processor.extract_keywords('Adam flew to Bangkok from Singapore and then to China') What's the desired output of this sentence: > Adam flew to Bangkok from Singapore and then to China Perhaps like this? What is the specification? How (un-)structured is your input text? > Origin: Singapore > Departure: Bangkok > Departure: China Lets try to build component TWO to detect prepositions. Lets take that assumption you have and try some hacks to the same flashtext methods. What if we add to and from to the list? from itertools import zip_longest from flashtext import KeywordProcessor keyword_processor = KeywordProcessor(case_sensitive=False) keyword_processor.add_keywords_from_list(sorted(countries)) keyword_processor.add_keywords_from_list(sorted(cities)) for c in cities: if 'city' in c.lower() and c.endswith('City') and c[:-5] not in cities: if c[:-5].strip(): keyword_processor.add_keyword(c[:-5]) keyword_processor.add_keyword('to') keyword_processor.add_keyword('from') texts = ['new york to venice, italy for usd271', 'return flights from brussels to bangkok with etihad from €407', 'from los angeles to guadalajara, mexico for usd191', 'fly to australia new zealand from paris from €422 return including 2 checked bags'] for text in texts: extracted = keyword_processor.extract_keywords(text) print(text) print(extracted) print() [out]: new york to venice, italy for usd271 ['New York', 'to', 'Venice', 'Italy'] return flights from brussels to bangkok with etihad from €407 ['from', 'Brussels', 'to', 'Bangkok', 'from'] from los angeles to guadalajara, mexico for usd191 ['from', 'Los Angeles', 'to', 'Guadalajara', 'Mexico'] fly to australia new zealand from paris from €422 return including 2 checked bags ['to', 'Australia', 'New Zealand', 'from', 'Paris', 'from'] Heh, that's pretty crappy rule to use to/from, What if the "from" is referring the price of the ticket? What if there's no "to/from" preceding the country/city? Okay, lets work with the above output and see what we do about the problem 1. Maybe check if the term after the from is city, if not, remove the to/from? from itertools import zip_longest from flashtext import KeywordProcessor keyword_processor = KeywordProcessor(case_sensitive=False) keyword_processor.add_keywords_from_list(sorted(countries)) keyword_processor.add_keywords_from_list(sorted(cities)) for c in cities: if 'city' in c.lower() and c.endswith('City') and c[:-5] not in cities: if c[:-5].strip(): keyword_processor.add_keyword(c[:-5]) keyword_processor.add_keyword('to') keyword_processor.add_keyword('from') texts = ['new york to venice, italy for usd271', 'return flights from brussels to bangkok with etihad from €407', 'from los angeles to guadalajara, mexico for usd191', 'fly to australia new zealand from paris from €422 return including 2 checked bags'] for text in texts: extracted = keyword_processor.extract_keywords(text) print(text) new_extracted = [] extracted_next = extracted[1:] for e_i, e_iplus1 in zip_longest(extracted, extracted_next): if e_i == 'from' and e_iplus1 not in cities and e_iplus1 not in countries: print(e_i, e_iplus1) continue elif e_i == 'from' and e_iplus1 == None: # last word in the list. continue else: new_extracted.append(e_i) print(new_extracted) print() That seems to do the trick and remove the from that doesn't precede a city/country. 
[out]: new york to venice, italy for usd271 ['New York', 'to', 'Venice', 'Italy'] return flights from brussels to bangkok with etihad from €407 from None ['from', 'Brussels', 'to', 'Bangkok'] from los angeles to guadalajara, mexico for usd191 ['from', 'Los Angeles', 'to', 'Guadalajara', 'Mexico'] fly to australia new zealand from paris from €422 return including 2 checked bags from None ['to', 'Australia', 'New Zealand', 'from', 'Paris'] But the "from New York" still isn't solve!! Linguist: Think carefully, should ambiguity be resolved by making an informed decision to make ambiguous phrase obvious? If so, what is the "information" in the informed decision? Should it follow a certain template first to detect the information before filling in the ambiguity? You: I'm losing my patience with you... You're bringing me in circles and circles, where's that AI that can understand human language that I keep hearing from the news and Google and Facebook and all?! You: What you gave me are rule based and where's the AI in all these? NLP Practitioner: Didn't you wanted 100%? Writing "business logics" or rule-based systems would be the only way to really achieve that "100%" given a specific data set without any preset data set that one can use for "training an AI". You: What do you mean by training an AI? Why can't I just use Google or Facebook or Amazon or Microsoft or even IBM's AI? NLP Practitioner: Let me introduce you to https://learning.oreilly.com/library/view/data-science-from/9781492041122/ https://allennlp.org/tutorials https://www.aclweb.org/anthology/ Welcome to the world of Computational Linguistics and NLP! In Short Yes, there's no real ready-made magical solution and if you want to use an "AI" or machine learning algorithm, most probably you would need a lot more training data like the texts_labels pairs shown in the above example. | 33 | 159 |
59,957,962 | 2020-1-28 | https://stackoverflow.com/questions/59957962/difference-between-conda-install-with-c-anaconda-and-without-it | I am new to Python and I am trying to install new packages in Anaconda. I am using the Anaconda prompt on Windows 10. Can you please explain what the difference is between conda install with -c anaconda and without it? For example conda install -c anaconda mysqlclient and conda install mysqlclient. Which is better to use when, and why? | When you use the -c option, you are specifying the channel from which to get the package. Without it, conda falls back to its default channels (maintained by Anaconda), which largely overlap with the anaconda channel, so the two commands usually behave similarly. To use packages built locally, you would use -c local. Here is a link for more info: Docs explaining usage of conda install | 6 | 4 |
59,957,171 | 2020-1-28 | https://stackoverflow.com/questions/59957171/removing-non-ascii-and-special-character-in-pyspark-dataframe-column | I am reading data from csv files which has about 50 columns, few of the columns(4 to 5) contain text data with non-ASCII characters and special characters. df = spark.read.csv(path, header=True, schema=availSchema) I am trying to remove all the non-Ascii and special characters and keep only English characters, and I tried to do it as below df = df['textcolumn'].str.encode('ascii', 'ignore').str.decode('ascii') There are no spaces in my column name. I receive an error TypeError: 'Column' object is not callable --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <command-1486957561378215> in <module> ----> 1 InvFilteredDF = InvFilteredDF['SearchResultDescription'].str.encode('ascii', 'ignore').str.decode('ascii') TypeError: 'Column' object is not callable Is there an alternative to accomplish this, appreciate any help with this. | This should work. First creating a temporary example dataframe: df = spark.createDataFrame([ (0, "This is Spark"), (1, "I wish Java could use case classes"), (2, "Data science is cool"), (3, "This is aSA") ], ["id", "words"]) df.show() Output +---+--------------------+ | id| words| +---+--------------------+ | 0| This is Spark| | 1|I wish Java could...| | 2|Data science is ...| | 3| This is aSA| +---+--------------------+ Now to write a UDF because those functions that you use cannot be directly performed on a column type and you will get the Column object not callable error Solution from pyspark.sql.functions import udf def ascii_ignore(x): return x.encode('ascii', 'ignore').decode('ascii') ascii_udf = udf(ascii_ignore) df.withColumn("foo", ascii_udf('words')).show() Output +---+--------------------+--------------------+ | id| words| foo| +---+--------------------+--------------------+ | 0| This is Spark| This is Spark| | 1|I wish Java could...|I wish Java could...| | 2|Data science is ...|Data science is ...| | 3| This is aSA| This is aSA| +---+--------------------+--------------------+ | 6 | 11 |
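Where a plain regex is enough, a non-UDF variant is also possible and usually faster, since the work stays inside the JVM. A sketch, reusing the df and the column name textcolumn from the question:

from pyspark.sql.functions import regexp_replace

# Drop every character outside the printable ASCII range.
df_clean = df.withColumn('textcolumn',
                         regexp_replace('textcolumn', r'[^\x00-\x7F]', ''))
df_clean.show()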
59,907,079 | 2020-1-25 | https://stackoverflow.com/questions/59907079/visual-studio-codes-debugger-pipenv | I would like to use Visual Studio Code's debugger to debug my python code, but exception occurs. I use Windows 10, WSL, Debian, Python 3.7.6. Exception has occurred: ModuleNotFoundError No module named 'flask' File "/home/kazu/test/main.py", line 2, in <module> from flask import Flask This is python debugger console's record. pyenv shell 3.7.6 /home/kazu/.pyenv/versions/3.7.6/bin/python /home/kazu/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/ptvsd_launcher.py --default --client --host localhost --port 52440 /home/kazu/test/main.py kazu@D:~/test$ pyenv shell 3.7.6 kazu@D~/test$ /home/kazu/.pyenv/versions/3.7.6/bin/python /home/kazu/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/ptvsd_launcher.py --default --client --host localhost --port 52440 /home/kazu/test/main.py However, I have already installed flask using pipenv. When I don't use debugger, there isn't module error. This is my main.py from __future__ import unicode_literals from flask import Flask from flask import render_template from flask import request from flask import send_file import os import youtube_dl app = Flask(__name__) @app.route("/", methods=['POST', 'GET']) def index(): if request.method == "POST": if os.path.exists("/tmp/output.mp4"): os.remove("/tmp/output.mp4") else: print("Can not delete the file as it doesn't exists") url = request.form['url'] ydl_opts = {'outtmpl': '/tmp/output.mp4', 'format':'bestvideo[ext=mp4]'} with youtube_dl.YoutubeDL(ydl_opts) as ydl: ydl.download([url]) return send_file("/tmp/output.mp4",as_attachment=True) else: return render_template("index.html") if __name__ == "__main__": app.run() I searched the Internet and found that I should put my .venv folder in the project directory. So, I operated this command. export PIPENV_VENV_IN_PROJECT=1 and now my directory structure is this. . ├── main.py ├── Pipfile ├── Pipfile.lock ├── .venv └── templates └── index.html However, I get same error message. Then, I searched the Internet again and this time I set vs code's python venv path, but I got the same error message. Could you give me any information or suggestion? Thank you in advance. Sincerely, Kazu | If you look in the bottom-left corner of your screen you will notice you are currently running against a pyenv install of Python and not a pipenv virtual environment. If you click on the interpreter name and select the appropriate environment where you installed flask it should fix your issue. | 18 | 31 |
59,955,751 | 2020-1-28 | https://stackoverflow.com/questions/59955751/abcmeta-object-is-not-subscriptable-when-trying-to-annotate-a-hash-variable | The following dataclass: from abc import ABC from collections.abc import Mapping from dataclasses import dataclass, field @dataclass(eq=True, order=True, frozen=True) class Expression(Node, ABC): def node(self): raise NotImplementedError is used as a base class for: @dataclass(eq=True, frozen=True) class HashLiteral(Expression): pairs: Mapping[Expression, Expression] ... Node is defined as: @dataclass(eq=True, frozen=True) class Node: def __str__(self) -> str: raise NotImplementedError When trying to use the HashLiteral class I get the error: pairs: Mapping[Expression, Expression] TypeError: 'ABCMeta' object is not subscriptable What is wrong with my annotation of pairs above? | You should use typing.Mapping instead of collections.abc.Mapping. typing contains many generic versions of various types, which are designed to be used in type hints. According to the mypy documentation, there are some differences between the typing classes and the collections.abc classes, but they're unclear on exactly what those differences are. | 38 | 83 |
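Putting the fix together with the classes from the question (on Python 3.9+ the collections.abc version has become subscriptable too, but typing.Mapping works everywhere):

from abc import ABC
from dataclasses import dataclass
from typing import Mapping

@dataclass(eq=True, frozen=True)
class Node:
    def __str__(self) -> str:
        raise NotImplementedError

@dataclass(eq=True, order=True, frozen=True)
class Expression(Node, ABC):
    def node(self):
        raise NotImplementedError

@dataclass(eq=True, frozen=True)
class HashLiteral(Expression):
    pairs: Mapping[Expression, Expression]   # typing.Mapping is subscriptable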
59,953,127 | 2020-1-28 | https://stackoverflow.com/questions/59953127/tensorflow-2-1-0-has-no-attribute-random-normal | I'm trying to get Uber's Ludwig to run. I get an error about there being no attribute 'random_normal'. I can reproduce the error in Python with these commands. >>> import tensorflow as tf >>> tf.reduce_sum(tf.random_normal([1000,1000])) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'tensorflow' has no attribute 'random_normal' >>> print(tf.__version__) 2.1.0 >>> print(sys.version) 3.7.5 (default, Oct 25 2019, 15:51:11) [GCC 7.3.0] How can I get past this error? | It was moved to tf.random.normal (along with all the other tf.random_* functions) | 28 | 48 |
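In other words, a TF 2.x version of the failing snippet would be (same shape as in the question):

import tensorflow as tf

# tf.random_normal (TF 1.x) became tf.random.normal in TF 2.x.
total = tf.reduce_sum(tf.random.normal([1000, 1000]))
print(total)

If old 1.x code needs to keep running unchanged, tf.compat.v1.random_normal should also still be available in TF 2.1.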
59,951,983 | 2020-1-28 | https://stackoverflow.com/questions/59951983/enum-missing-function-not-silencing-valueerror | I'm trying to set up an Enum that will return None if the value is not found. The documentation mentions a function _missing_, but does not explain any of the details regarding the function: _missing_ – a lookup function used when a value is not found; may be overridden After some looking around, it seems this is a classmethod with the signature cls, value, so I tried to set it up, and its not working. >>> class G(enum.Enum): ... @classmethod ... def _missing_(cls, value): ... return None ... a = 1 ... >>> G(1) <G.a: 1> >>> G(2) Traceback (most recent call last): ... ValueError: 2 is not a valid G >>> G['b'] KeyError: 'b' >>> G.b AttributeError: b A google search suggests that _missing_ only catches the ValueError in the call case, so the KeyError and TypeError do not surprise me, but I do not know why G(2) is raising ValueError instead of returning None. | The 2 main things that the documentation is missing regarding the _missing_ function is the signature in the question, and the fact that the return type MUST be a member of the Enum. If None is returned, then the error simply isn't silenced. This behaviour is only seen through source inspection or a different error message: >>> class G(enum.Enum): ... @classmethod ... def _missing_(cls, value): ... return "a truthy value" # I suspected that the error may have been caused by a falsey return ... a = 1 ... >>> G(2) ValueError: 2 is not a valid G During handling of the above exception, another exception occured: Traceback (most recent call last): ... TypeError: error in G._missing_: returned 'a truthy value' instead of None or a valid member So the only way to handle this case is to have a sentinal G.none, G.null, G.missing or whatever value is most suitable. | 9 | 9 |
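So one workable pattern is to reserve an explicit sentinel member and have _missing_ return it (returning None simply keeps the ValueError). A minimal sketch:

import enum

class G(enum.Enum):
    unknown = 0
    a = 1

    @classmethod
    def _missing_(cls, value):
        # Must return a member; returning None leaves the ValueError in place.
        return cls.unknown

print(G(1))    # G.a
print(G(2))    # G.unknown instead of a ValueError
print(G(999))  # G.unknown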
59,903,051 | 2020-1-24 | https://stackoverflow.com/questions/59903051/sphinxs-autodocs-automodule-having-apparently-no-effect | I am running Sphinx on a rst file containing automodule but it does not seem to have any effect. Here are the details: I have a Python project with a file agent.py containing a class Agent in it. I also have a subdirectory apidoc with a file agent.rst in it (generated by sphinx-apidoc): agent module ============ .. automodule:: agent :members: :undoc-members: :show-inheritance: I run sphinx with sphinx-build -b html apidoc apidoc/_build with the project's directory as the current working directory. To make sure the Python files are found, I've included the following in apidoc/conf.py: import os import sys sys.path.insert(0, os.path.abspath('.')) It runs without errors but when I open the resulting HTML file it only shows "agent module" and everything is blank. Why isn't it showing the class Agent and its members? Update: the original problem was likely caused by the fact that I had not included sphinx.ext.autodoc in conf.py. Now that I did, though, I get warnings like: WARNING: invalid signature for automodule ('My Project.agent') WARNING: don't know which module to import for autodocumenting 'My Project.agent' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name) WARNING: autodoc: failed to import module 'agent'; the following exception was raised: No module named 'agent' | I'll try answering by putting the "canonical" approach side-by-side with your case. The usual "getting started approach" follows these steps: create a doc directory in your project directory (it's from this directory the commands in the following steps are executed). sphinx-quickstart (choosing separate source from build). sphinx-apidoc -o ./source .. make html This would yield the following structure: C:\Project | | agent.py | |---docs | | make.bat | | Makefile | | | |---build | | | |---source | | conf.py | | agent.rst | | index.rst | | modules.rst In your conf.py you'd add (after step 2): sys.path.insert(0, os.path.abspath(os.path.join('..', '..'))) and in index.rst you'd link modules.rst: Welcome to Project's documentation! ================================ .. toctree:: :maxdepth: 2 :caption: Contents: modules Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` Now compare the above with what you have - from what you shared in your question: C:\Project | | agent.py | |---apidoc | | agent.rst | | conf.py | | | |-- _build You ran: sphinx-build -b html apidoc apidoc/_build and in your conf.py: sys.path.insert(0, os.path.abspath('.')) Your error stacktrace says it couldn't find the module agent. That's probably because you didn't go 1 level down in your conf.py (it's pointing to the path with .rst, not the path with .py), this should work: sys.path.insert(0, os.path.abspath('..')). Also, if you didn't manually edit/connect your modules.rst in your index.rst you are likely to only see that module. You may care to notice the signatures of the sphinx commands at play: sphinx-apidoc [OPTIONS] -o <OUTPUT_PATH> <MODULE_PATH> sphinx-build [options] <sourcedir> <outputdir> [filenames …] <sourcedir> refers to where .rst are, and <MODULE_PATH> to where .py are. <OUTPUT_PATH> to where .rst are placed, and <outputdir> to where .html are placed. Please also notice, you mentioned: "the project's directory as the current working directory." 
I've seen "working directory" mentioned in sphinx threads on stackoverflow, interchangeably as both the Project base directory, or the docs directory. However, if you search the Sphinx documentation for "working directory" you'll find no mention of it. Finally, there is an advantage to using the file/directory structure of the "getting started approach". It basically "puts you on the same page" with most threads on the Sphinx tag, and that way alleviates the mental work of mapping the cases to different directory/file structures. I hope this helps. | 7 | 17 |
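Tying this back to the update in the question: with the docs/source layout above, the autodoc-relevant part of conf.py would look roughly like this (paths are relative to where conf.py lives):

# docs/source/conf.py
import os
import sys

# conf.py sits in docs/source/, so the directory containing agent.py
# is two levels up.
sys.path.insert(0, os.path.abspath(os.path.join('..', '..')))

extensions = [
    'sphinx.ext.autodoc',   # needed for the automodule directive to be recognized
]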
59,944,182 | 2020-1-28 | https://stackoverflow.com/questions/59944182/how-to-create-a-visualization-for-events-along-a-timeline | I'm building a visualization with Python. There I'd like to visualize fuel stops and the fuel costs of my car. Furthermore, car washes and their costs should be visualized as well as repairs. The fuel costs and laundry costs should have a higher bar depending on the costs. I created the visualization below to describe the concepts. How to create such a visualization with matplotlib? This is the visualization being built: | Yes, this kind of visualization is perfectly possible with matplotlib. To store the data, numpy arrays are usually very handy. Here is some code to get you started: import matplotlib.pyplot as plt import numpy as np refuel_km = np.array([0, 505.4, 1070, 1690]) refuel_cost = np.array([40.1, 50, 63, 55]) carwash_km = np.array([302.0, 605.4, 901, 1331, 1788.2]) carwash_cost = np.array([35.0, 40.0, 35.0, 35.0, 35.0]) repair_km = np.array([788.0, 1605.4]) repair_cost = np.array([135.0, 74.5]) fig, ax = plt.subplots(figsize=(12,3)) plt.scatter(refuel_km, np.full_like(refuel_km, 0), marker='o', s=100, color='lime', edgecolors='black', zorder=3, label='refuel') plt.bar(refuel_km, refuel_cost, bottom=15, color='lime', ec='black', width=20, label='refuel cost') plt.scatter(carwash_km, np.full_like(carwash_km, 0), marker='d', s=100, color='tomato', edgecolors='black', zorder=3, label='car wash') plt.bar(carwash_km, -carwash_cost, bottom=-15, color='tomato', ec='black', width=20, label='car wash cost') plt.scatter(repair_km, np.full_like(repair_km, 0), marker='^', s=100, color='lightblue', edgecolors='black', zorder=3, label='car repair') #plt.bar(repair_km, -repair_cost, bottom=-15, color='lightblue', ec='black', width=20) ax.spines['bottom'].set_position('zero') ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.spines['left'].set_color('none') ax.tick_params(axis='x', length=20) ax.set_yticks([]) # turn off the yticks _, xmax = ax.get_xlim() ymin, ymax = ax.get_ylim() ax.set_xlim(-15, xmax) ax.set_ylim(ymin, ymax+25) # make room for the legend ax.text(xmax, -5, "km", ha='right', va='top', size=14) plt.legend(ncol=5, loc='upper left') plt.tight_layout() plt.show() | 7 | 16 |
59,920,126 | 2020-1-26 | https://stackoverflow.com/questions/59920126/rest-api-in-python-with-fastapi-and-pydantic-read-only-property-in-model | Assume a REST API which defines a POST method on a resource /foos to create a new Foo. When creating a Foo the name of the Foo is an input parameter (present in the request body). When the server creates a Foo it assigns it an ID. This ID is returned together with the name in the REST response. I am looking for something similar to readOnly in OpenAPI. The input JSON should look like this: { "name": "bar" } The output JSON should look like this: { "id": 123, "name": "bar" } Is there a way to reuse the same pydantic model? Or is it necessary to use two different models? class FooIn(BaseModel): name: str class Foo(BaseModel): id: int name: str I cannot find any mentions of "read only", "read-only", or "readonly" in the pydantic documentation or in the Field class code. Googling I found a post which mentions id: int = Schema(..., readonly=True) But that seems to have no effect in my use case. | It is fine to have multiple models. You can use inheritance to reduce code repetition: from pydantic import BaseModel # Properties to receive via API create/update class Foo(BaseModel): name: str # Properties to return via API class FooDB(Foo): id: int The documentation, which is excellent by the way, goes into this in more depth. Here is a real user model example taken from the official full stack project generator. You can see how there are multiple models to define the user schema depending on the context. | 7 | 6 |
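A sketch of how the two models from the answer might then be wired into an endpoint; the in-memory id counter is just a stand-in for whatever the database assigns:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Foo(BaseModel):          # properties received in the request body
    name: str

class FooDB(Foo):              # properties returned to the client
    id: int

_next_id = 0                   # stand-in for a real persistence layer

@app.post("/foos", response_model=FooDB)
def create_foo(foo_in: Foo) -> FooDB:
    global _next_id
    _next_id += 1
    return FooDB(id=_next_id, name=foo_in.name)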