question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
68,584,962 | 2021-7-30 | https://stackoverflow.com/questions/68584962/separate-panels-in-mplfinance | I would like to separate my panels because the titles are superimposed: the ADX title ends up in the RSI panel. I tried tight_layout=True but the result is still the same. My code: ap0 = [ mpf.make_addplot(df['sma_200'],color='#FF0000', panel=2), mpf.make_addplot(df['sma_50'],color='#ffa500', panel=2), mpf.make_addplot(df['sma_20'],color='#00FF00', panel=2), mpf.make_addplot(df['rsi'],color='#ffa500', panel=0, title="RSI"), mpf.make_addplot(df['hline_30'], panel=0), mpf.make_addplot(df['hline_70'], panel=0), mpf.make_addplot(df['adx'],color='#0000FF', panel=1, secondary_y=False, title="ADX"), mpf.make_addplot(df['-di'],color='#FF0000', panel=1, secondary_y=False), mpf.make_addplot(df['+di'],color='#32cd32', panel=1, secondary_y=False), mpf.make_addplot(df['hline_25'], panel=1, secondary_y=False) ] fig, axlist = mpf.plot( df, panel_ratios=(.05, .05, .2, .05), type="hollow_candle", yscale='log', volume=True, title="{} - {}".format(ticker, interval), style=mpf_style, figsize=(12.8, 10), returnfig=True, closefig=True, addplot=ap0, main_panel=2, volume_panel=3, num_panels=4, ) | Mplfinance panels do not provide a setting for adjusting the spacing between panels, but you can place the plots into your own matplotlib subplots and control the spacing there. Alternatively, use the y-axis label instead of the title, which mplfinance supports directly. Note that ax4 is created first below because the other axes share its x-axis. import yfinance as yf import mplfinance as mpf import matplotlib.pyplot as plt df = yf.download("AAPL", start="2021-01-01", end="2021-07-01") df['sma_20'] = df['Close'].rolling(20).mean() df['sma_50'] = df['Close'].rolling(50).mean() fig = mpf.figure(style='yahoo',figsize=(7,8)) ax4 = fig.add_subplot(4,1,4) ax1 = fig.add_subplot(4,1,1, sharex=ax4) ax2 = fig.add_subplot(4,1,2, sharex=ax4) ax3 = fig.add_subplot(4,1,3, sharex=ax4) ap0 = [ mpf.make_addplot(df['sma_20'], color='#00FF00', panel=0, title='sma_20', ax=ax1), mpf.make_addplot(df['sma_50'], color='#FFa500', panel=1, title='sma_50', ax=ax2) ] mpf.plot(df,ax=ax3, addplot=ap0, volume=ax4, ) fig.subplots_adjust(hspace=0.2) ax1.tick_params(labelbottom=False) ax2.tick_params(labelbottom=False) ax3.tick_params(labelbottom=False) ax3.yaxis.set_label_position('left') ax3.yaxis.tick_left() Another solution ap0 = [ mpf.make_addplot(df['sma_20'], color='#00FF00', panel=0, ylabel='sma_20'), mpf.make_addplot(df['sma_50'], color='#FFa500', panel=1, ylabel='sma_50') ] @Daniel Goldfarb's solution ap0 = [ mpf.make_addplot(df['sma_20'], color='#00FF00', panel=0, title='sma_20', ylim=(110,140)), mpf.make_addplot(df['sma_50'], color='#FFa500', panel=1, title='sma_50') ] mpf.plot( df, panel_ratios=(2, 1, 3, 1), type="hollow_candle", yscale='log', volume=True, style='yahoo', figsize=(12.8, 10), addplot=ap0, main_panel=2, volume_panel=3, num_panels=4, ) | 8 | 10
68,586,561 | 2021-7-30 | https://stackoverflow.com/questions/68586561/sklearn-multi-class-problem-and-reporting-sensitivity-and-specificity | I have a three-class problem and I'm able to report precision and recall for each class with the below code: from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) which gives me the precision and recall nicely for each of the 3 classes in a table format. My question is how can I now get sensitivity and specificity for each of the 3 classes? I looked at sklearn.metrics and I didn't find anything for reporting sensitivity and specificity. | If we check the help page for classification report: Note that in binary classification, recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”. So we can convert the predictions into a binary problem for every class (one-vs-rest), and then use the recall results from precision_recall_fscore_support. Using an example: from sklearn.metrics import classification_report y_true = [0, 1, 2, 2, 2] y_pred = [0, 0, 2, 2, 1] target_names = ['class 0', 'class 1', 'class 2'] print(classification_report(y_true, y_pred, target_names=target_names)) Looks like: precision recall f1-score support class 0 0.50 1.00 0.67 1 class 1 0.00 0.00 0.00 1 class 2 1.00 0.67 0.80 3 accuracy 0.60 5 macro avg 0.50 0.56 0.49 5 weighted avg 0.70 0.60 0.61 5 Using sklearn: with average=None the recall array is ordered by label ([False, True]), so recall[1] is the recall of the positive class (sensitivity) and recall[0] is the recall of the negative class (specificity). import numpy as np import pandas as pd from sklearn.metrics import precision_recall_fscore_support res = [] for l in [0,1,2]: prec,recall,_,_ = precision_recall_fscore_support(np.array(y_true)==l, np.array(y_pred)==l, pos_label=True,average=None) res.append([l,recall[1],recall[0]]) put the results into a dataframe: pd.DataFrame(res,columns = ['class','sensitivity','specificity']) class sensitivity specificity 0 0 1.000000 0.75 1 1 0.000000 0.75 2 2 0.666667 1.00 | 4 | 8
68,587,852 | 2021-7-30 | https://stackoverflow.com/questions/68587852/how-to-iterate-over-items-in-dictionary-using-while-loop | I know how to iterate over the items in a dictionary using a for loop, but what I need to know is how to iterate over the items in a dictionary using a while loop. Is that possible? This is how I tried it with a for loop. user_info = { "username" : "Hansana123", "password" : "1234", "user_id" : 3456, "reg_date" : "Nov 19" } for key, value in user_info.items(): print(key, "=", value) | You can iterate over the items of a dictionary using iter and next with a while loop. This is almost the same process as how a for loop would perform the iteration in the background on any iterable. https://docs.python.org/3/library/functions.html https://docs.python.org/3/library/stdtypes.html#iterator-types Code: user_info = { "username" : "Hansana123", "password" : "1234", "user_id" : 3456, "reg_date" : "Nov 19" } print("Using for loop...") for key, value in user_info.items(): print(key, "=", value) print() print("Using while loop...") it_dict = iter(user_info.items()) while key_value := next(it_dict, None): print(key_value[0], "=", key_value[1]) Output: Using for loop... username = Hansana123 password = 1234 user_id = 3456 reg_date = Nov 19 Using while loop... username = Hansana123 password = 1234 user_id = 3456 reg_date = Nov 19 | 4 | 10
68,584,302 | 2021-7-30 | https://stackoverflow.com/questions/68584302/modulenotfounderror-no-module-named-when-trying-to-make-unit-tests-in-pyt | Problem I am trying to make unit tests for my project, but I am having issues figuring out where my error is with regard to absolute-importing the module to test. I am using Visual Studio Code as my IDE. My directories look like this: project_folder + code_files - reader.py - __init__.py + tests - test_reader.py - data.txt - __init__.py My "test_reader.py" file looks like this: import unittest from code_files.reader import Reader class Test_LineCounter(unittest.TestCase): def test_empty_file_long(self): self.assertEqual(Reader.line_counter(self, "data.txt"), 4) if __name__ == "__main__": unittest.main() However, I keep getting the error: ModuleNotFoundError: No module named 'code_files' Attempted Solutions To be completely honest, when I was going into this project I was not very strong at working with directories, packages, or modules. I have read many articles online and have read many forum posts on Stack Overflow and other websites trying to figure out what may be causing this error and how I can resolve it. Through this, I feel like I have gained a pretty good understanding of how Python handles packages and modules. Yet I still can't seem to get this problem figured out. I have tried putting the "code_files" package into a different directory to see if that would work: project_folder + app + code_files - reader.py - __init__.py And then imported by: from app.code_files.reader import Reader But then I got the error: ModuleNotFoundError: No module named 'app' The only time I managed to get "test_reader" to work was when I had my directory structured as follows: project_folder + code_files - reader.py - __init__.py + test_reader.py + data.txt + __init__.py Final Comments Any help would be greatly appreciated! I feel like there is a very simple solution to this that I just keep overlooking. I know very similar questions have been asked on here before, but all of the determined solutions I have seen are what I have already tried or am currently doing. So I apologize if this is a repetitive question. Thanks! | Reason: This happens because the project_folder folder is not on sys.path. A list of strings that specifies the search path for modules. Initialized from the environment variable PYTHONPATH, plus an installation-dependent default. As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. This explains the problem you are seeing: when test_reader.py is under the tests folder, the tests folder is added to sys.path; when test_reader.py is directly under the project_folder folder, project_folder is added to sys.path, and then the Python interpreter can find the code_files package. Solution: How do you run the test? If you run the test through the Test Panel you will not run into the module problem, because it automatically adds the workspace folder to sys.path. So it's recommended to run the test this way. Tips: If you want to inspect sys.path, you can add pprint(sys.path) after the self.assertEqual(Reader.line_counter(self, "data.txt"), 4) line, but you can only see the result in the Python Test Log channel of the OUTPUT panel. If you debug (F5) the test, you can add this in the launch.json file: "env": {"PYTHONPATH":"${workspaceFolder}"}, If you run the test with a command in the terminal, you should modify sys.path in the test file: sys.path.append("the path to the project_folder folder") | 10 | 8
68,576,519 | 2021-7-29 | https://stackoverflow.com/questions/68576519/what-is-the-type-hint-for-pytests-caplog-fixture | I am using the caplog fixture that comes with pytest. I am using mypy for my type checking, and would like to know what the correct type hint for caplog is. For example: def test_validate_regs(caplog: Any) -> None: validate_regs(df, logger) assert caplog.text == "", "No logs should have been made." In this example, I have it set to Any, but I am wondering if there is a more specific type hint I can use. I've tried reading the docs on caplog, as well as searching the pytest code in github to see what the caplog fixture returns, but couldn't find anything more than the following. But using a str type just gave me an error, saying that str type doesn't have the attribute text, which makes sense. When I print the type of caplog I get _pytest.logging.LogCaptureFixture although I'm not sure how I'd import this from _pytest and make use of it. | as of pytest 6.2.0 you should use pytest.LogCaptureFixture prior to that you needed to import a private name which is not recommended (we frequently change the internals inside the _pytest namespace without notice and with no promises of forward or backward compatibility) disclaimer: I'm a pytest core dev | 24 | 32 |
68,583,870 | 2021-7-29 | https://stackoverflow.com/questions/68583870/checking-whether-a-function-is-decorated | I am trying to build a control structure in a class method that takes a function as input and behaves differently depending on whether that function is decorated or not. Any ideas on how you would go about building a function is_decorated that behaves as follows: def dec(fun): # do decoration def func(data): # do stuff @dec def func2(data): # do other stuff def is_decorated(func): # return True if func has decorator, otherwise False is_decorated(func) # False is_decorated(func2) # True | Yes, it's relatively easy because functions can have arbitrary attributes added to them, so the decorator function can add one when it does its thing: def dec(fun): def wrapped(*args, **kwargs): return fun(*args, **kwargs) wrapped.i_am_wrapped = True return wrapped def func(data): ... # do stuff @dec def func2(data): ... # do other stuff def is_decorated(func): return getattr(func, 'i_am_wrapped', False) print(is_decorated(func)) # -> False print(is_decorated(func2)) # -> True | 7 | 7
68,581,187 | 2021-7-29 | https://stackoverflow.com/questions/68581187/playing-sound-in-google-colab | Hi, so I have code which gives the answer in the form of speech. I am using this code: audio.save("audio.wav") sound_file = '/content/audio.wav' Audio(sound_file, autoplay=True) The code plays the file fine, but only if I run it in a separate cell. If I put this code in the middle of my code, it doesn't work. Any ideas? | Wrap it in display(): from IPython.display import Audio, display display(Audio(sound_file, autoplay=True)) | 9 | 17
68,577,198 | 2021-7-29 | https://stackoverflow.com/questions/68577198/pytorch-summary-fails-with-huggingface-model | I want a summary of a PyTorch model downloaded from huggingface. Am I doing something wrong here? from torchinfo import summary from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) summary(model, input_size=(16, 512)) Gives the error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 257 if isinstance(x, (list, tuple)): --> 258 _ = model.to(device)(*x, **kwargs) 259 elif isinstance(x, dict): 11 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1530 output_hidden_states=output_hidden_states, -> 1531 return_dict=return_dict, 1532 ) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 988 inputs_embeds=inputs_embeds, --> 989 past_key_values_length=past_key_values_length, 990 ) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 214 if inputs_embeds is None: --> 215 inputs_embeds = self.word_embeddings(input_ids) 216 token_type_embeddings = self.token_type_embeddings(token_type_ids) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input) 159 input, self.weight, self.padding_idx, self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding) The above exception was the direct cause of 
the following exception: RuntimeError Traceback (most recent call last) <ipython-input-8-4f70d4e6fa82> in <module>() 5 else: 6 # Can't get this working ----> 7 summary(model, input_size=(16, 512)) #, device='cpu') 8 #print(model) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs) 190 ) 191 summary_list = forward_pass( --> 192 model, x, batch_dim, cache_forward_pass, device, **kwargs 193 ) 194 formatting = FormattingOptions(depth, verbose, col_names, col_width, row_settings) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 268 "Failed to run torchinfo. See above stack traces for more details. " 269 f"Executed layers up to: {executed_layers}" --> 270 ) from e 271 finally: 272 if hooks is not None: RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: [] | There's a bug [also reported] in torchinfo library [torchinfo.py] in the last line shown. When dtypes is None, it is by default creating torch.float tensors whereas forward method of bert model uses torch.nn.embedding which expects only int/long tensors. def process_input( input_data: Optional[INPUT_DATA_TYPE], input_size: Optional[INPUT_SIZE_TYPE], batch_dim: Optional[int], device: Union[torch.device, str], dtypes: Optional[List[torch.dtype]] = None, ) -> Tuple[CORRECTED_INPUT_DATA_TYPE, Any]: """Reads sample input data to get the input size.""" if input_size is not None: if dtypes is None: dtypes = [torch.float] * len(input_size) If you try modifying the line to the following, it works fine. dtypes = [torch.int] * len(input_size) EDIT (Direct solution w/o changing their internal code): from torchinfo import summary from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) summary(model, input_size=(2, 512), dtypes=['torch.IntTensor']) Alternate: For a simple summary, you could use print(model) instead of summary function. | 7 | 8 |
68,578,226 | 2021-7-29 | https://stackoverflow.com/questions/68578226/how-to-map-two-dataframe-with-output-of-overlapping-items-in-new-columns | I have two dataframes: data = { 'values': ['Cricket', 'Soccer', 'Football', 'Tennis', 'Badminton', 'Chess'], 'gems': ['A1K, A2M, JA3, AN4', 'B1, A1, Bn2, B3', 'CD1, A1', 'KWS, KQM', 'JP, CVK', 'KF, GF'] } df1 = pd.DataFrame(data) df1 values gems 0 Cricket A1K, A2M, JA3, AN4 1 Soccer B1, A1, Bn2, B3 2 Football CD1, A1 3 Tennis KWS, KQM 4 Badminton JP, CVK 5 Chess KF, GF second dataframe data2 = { '1C': ['B1', 'K1', 'A1K', 'J1', 'A4'], '02C': ['Bn2', 'B3', 'JK', 'ZZ', 'ko'], '34C': ['KF', 'CD1', 'B3','ji', 'HU'] } df2 = pd.DataFrame(data2) df2 1C 02C 34C 0 B1 Bn2 KF 1 K1 B3 CD1 2 A1K JK B3 3 J1 ZZ ji 4 A4 ko HU I want check items in df1['gems'] in each column of df2 and represent their counts and overlapping items. The expected output is: values gems 1C 1CGroup 02C 02CGroup 34C 34CGroup 0 Cricket A1K, A2M, JA3, AN4 1 A1K 0 NA 0 NA 1 Soccer B1, A1, Bn2, B3 1 Bn2 2 Bn2, B3 1 B3 2 Football CD1, A1 0 NA 0 NA 1 CD1 3 Tennis KWS, KQM 0 NA 0 NA 0 NA 4 Badminton JP, CVK 0 NA 0 NA 0 NA 5 Chess KF, GF 0 NA 0 NA 1 KF | first str.split and explode the column gems and reset_index to keep the original index. Then for each column of df2, merge with the exploded gems, groupby the original index and do both the count and the aggregation as you want with join. pd.concat the merges for each column and join to your original df1. fillna the count columns with 0 as shown in the expected output. # one row per gem used in the merge df_ = df1['gems'].str.split(', ').explode().reset_index() res = ( df1.join( #can join to df1 as we keep the original index value pd.concat([df_.merge(df2[[col]], left_on='gems', right_on=col) .groupby('index') # original index in df1 [col].agg(**{col: 'count', # do each aggregation f'{col}Group':lambda x: ', '.join(x)}) for col in df2.columns], # do it for each column of df2 axis=1)) .fillna({col:0 for col in df2.columns}) #fill the count columns with 0 ) print(res) values gems 1C 1CGroup 02C 02CGroup 34C 34CGroup 0 Cricket A1K, A2M, JA3, AN4 1.0 A1K 0.0 NaN 0.0 NaN 1 Soccer B1, A1, Bn2, B3 1.0 B1 2.0 Bn2, B3 1.0 B3 2 Football CD1, A1 0.0 NaN 0.0 NaN 1.0 CD1 3 Tennis KWS, KQM 0.0 NaN 0.0 NaN 0.0 NaN 4 Badminton JP, CVK 0.0 NaN 0.0 NaN 0.0 NaN 5 Chess KF, GF 0.0 NaN 0.0 NaN 1.0 KF | 5 | 6 |
68,575,716 | 2021-7-29 | https://stackoverflow.com/questions/68575716/docker-error-response-from-daemon-failed-to-create-shim | On a fresh ubunto i installed docker and when i run the image, i got following error docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "--gpus": executable file not found in $PATH: unknown. ERRO[0000] error waiting for container: context canceled This is how i make system ready sudo apt updat sudo apt full-upgrade sudo apt autoremove #install gpu drivers via software and updates sudo apt install nvidia-cuda-toolkit sudo apt install docker.io #here is docker file FROM python:3 WORKDIR /workspace COPY test.py /workspace RUN pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html CMD ["python", "./test.py"] #here is test.py file import torch print('testing') print(torch.cuda.get_device_name(0)) print(torch.cuda.is_available()) print(torch.cuda.current_device()) print(torch.cuda.device_count()) print('gpu') nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:07:16_PDT_2019 Cuda compilation tools, release 10.1, V10.1.243 nvidia-smi -----------------------------------------------------------------------------+ | NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A | | N/A 64C P0 24W / N/A | 513MiB / 6069MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ uname -r 5.8.0-63-generic lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.2 LTS Release: 20.04 Codename: focal docker version Client: lient: Version: 20.10.2 API version: 1.41 Go version: go1.13.8 Git commit: 20.10.2-0ubuntu1~20.04.3 Built: Fri Jul 23 21:06:26 2021 OS/Arch: linux/amd64 Context: default Experimental: true Server: Engine: Version: 20.10.2 API version: 1.41 (minimum version 1.12) Go version: go1.13.8 Git commit: 20.10.2-0ubuntu1~20.04.3 Built: Fri Jul 23 19:35:35 2021 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.5.2-0ubuntu1~20.04.2 GitCommit: runc: Version: 1.0.0~rc95-0ubuntu1~20.04.2 GitCommit: docker-init: Version: 0.19.0 GitCommit: This is how i build image and run it sudo docker build -t test . sudo docker run test --gpus all | This command is incorrectly ordered: sudo docker run test --gpus all The docker run command takes the syntax: docker ${args_to_docker} run ${args_to_run} image_name ${cmd_override} The --gpus is a flag to the run command, and not a command you want to run inside your container. So you'd reorder as: sudo docker run --gpus all test | 23 | 15 |
68,571,553 | 2021-7-29 | https://stackoverflow.com/questions/68571553/how-to-use-a-faker-value-as-part-of-another-field-with-factoryboy | I'm using FactoryBoy and Faker to generate some models for unit tests. Generating data for fields is easy enough, but how do I generate a string that incorporates a value produced from a Faker provider? import factory import MyModel class MyFactory(factory.django.DjangoModelFactory): class Meta: model = MyModel # my_ip will be a temporary variable that is not returned by the factory. exclude = ("my_ip",) my_ip = factory.Faker("ipv4_private") my_string = f"String with IP address [{my_ip}]" Using MyFactory then creates objects with my_string set to something like: String with IP address [<factory.faker.Faker object at 0x7fc656aa6358>] How can I get my_string to contain something like: String with the IP address [192.168.23.112] How do I resolve this to a value, rather than just getting the object? I've tried wrapping it in str() with no luck. Do I need to use LazyAttribute or LazyFunction or something from FactoryBoy? | With some trial and error I figured out this does require using LazyAttribute, so that the my_string attribute is calculated after the rest of the object is generated. However, I then discovered this is essentially a duplicate of: In Factory Boy, how to join strings created with Faker? If anyone is wondering, the way to do this is: my_string = factory.LazyAttribute(lambda o: f"String with IP address [{o.my_ip}]") | 5 | 6
68,561,708 | 2021-7-28 | https://stackoverflow.com/questions/68561708/how-to-remove-noise-around-numbers-using-opencv | I'm trying to use Tesseract-OCR to get the readings on the images below but am having issues getting consistent results with the spotted background. I have the below configuration for my pytesseract: CONFIG = f"--psm 6 -c tessedit_char_whitelist=01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄabcdefghijklmnopqrstuvwxyzåäö.,-" I have also tried the below image pre-processing with some good, but still not perfect, results: blur = cv2.blur(img,(4,4)) (T, threshInv) = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) What I want is to consistently be able to identify the numbers and the decimal separator. What image pre-processing could help in getting consistent results on images like the ones below? | That was a challenge but I think I have an interesting approach: Pattern-matching If you zoom in, you realize that the pattern in the back only has 4 possible dots: a single full pixel, a double full pixel and a double pixel with a medium left or right. So what I did was grab these 4 patterns from the image with 17.160.000,00 and got to work. Save these to load again; I just grabbed them on the fly img = cv2.imread('C:/Users/***/17.jpg', cv2.IMREAD_GRAYSCALE) pattern_1 = img[2:5,1:5] pattern_2 = img[6:9,5:9] pattern_3 = img[6:9,11:15] pattern_4 = img[9:12,22:26] # just to show it carries over to other pics ;) img = cv2.imread('C:/Users/****/6.jpg', cv2.IMREAD_GRAYSCALE) Actual Pattern Matching Next we match all the patterns and threshold to find all occurrences; I used 0.7 but you can play around with it a little. These patterns take off some pixels on the side and only match a single pixel on the left, so we pad twice (one with an extra) to hit both for the first 3 patterns. The last one is the single pixel so it doesn't need it res_1 = cv2.matchTemplate(img,pattern_1,cv2.TM_CCOEFF_NORMED ) thresh_1 = cv2.threshold(res_1,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8) pat_thresh_1 = np.pad(thresh_1,((1,1),(1,2)),'constant') pat_thresh_15 = np.pad(thresh_1,((1,1),(2,1)), 'constant') res_2 = cv2.matchTemplate(img,pattern_2,cv2.TM_CCOEFF_NORMED ) thresh_2 = cv2.threshold(res_2,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8) pat_thresh_2 = np.pad(thresh_2,((1,1),(1,2)),'constant') pat_thresh_25 = np.pad(thresh_2,((1,1),(2,1)), 'constant') res_3 = cv2.matchTemplate(img,pattern_3,cv2.TM_CCOEFF_NORMED ) thresh_3 = cv2.threshold(res_3,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8) pat_thresh_3 = np.pad(thresh_3,((1,1),(1,2)),'constant') pat_thresh_35 = np.pad(thresh_3,((1,1),(2,1)), 'constant') res_4 = cv2.matchTemplate(img,pattern_4,cv2.TM_CCOEFF_NORMED ) thresh_4 = cv2.threshold(res_4,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8) pat_thresh_4 = np.pad(thresh_4,((1,1),(1,2)),'constant') Editing the Image Now the only thing left to do is remove all the matches from the image. Since we have a mostly white background we just set them to 255 to blend in. img[pat_thresh_1==1] = 255 img[pat_thresh_15==1] = 255 img[pat_thresh_2==1] = 255 img[pat_thresh_25==1] = 255 img[pat_thresh_3==1] = 255 img[pat_thresh_35==1] = 255 img[pat_thresh_4==1] = 255 Output Edit: Take a look at Abstract's answer as well for refining this output and tesseract finetuning | 4 | 5
68,569,239 | 2021-7-29 | https://stackoverflow.com/questions/68569239/what-is-the-difference-between-abstractclassmetaclass-abcmeta-and-class-abstra | I have seen two ways for defining Abstract Classes in Python. This one: from abc import ABCMeta, abstractmethod class AbstactClass(metaclass = ABCMeta): And this one: from abc import ABC, abstractmethod class AbstractClass2(ABC): What are there differences between them and what are the respective usage scenarios? | There is no actual functional difference. The ABC class is simply a convenience class to help make the code look less confusing to those who don't know the idea of a metaclass well, as the documentation states: A helper class that has ABCMeta as its metaclass. With this class, an abstract base class can be created by simply deriving from ABC avoiding sometimes confusing metaclass usage which is even clearer if you look at the the implementation of abc.py, which is nothing more than an empty class that specifies ABCMeta as its metaclass, just so that its descendants can inherit the type: class ABC(metaclass=ABCMeta): """Helper class that provides a standard way to create an ABC using inheritance. """ __slots__ = () | 30 | 29 |
68,566,490 | 2021-7-28 | https://stackoverflow.com/questions/68566490/should-i-use-utf8-or-utf-8-sig-when-opening-a-file-to-read-in-python | I have alway used 'utf8' to read in a file: with open(filename, 'r', encoding='utf8') as f, open(filename2, 'r', encoding='utf8') as f2: for line in f: line = line.strip() columns = line.split(' ') for line in f2: line = line.strip() columns = line.split(' ') However, the code above introduced an additional '\ufeff' code at the line of for 'f2': columns = line.split(' ') Now the columns[0] contains this character, while 'line' doesn't have this character. Why is that? Then I switched to 'utf-8-sig', and the problem is gone. However, the first file reading 'f' and the 'columns' doesn't have this issue at all even with 'encoding=utf8' only. Both are plain text files. So I have two questions regarding: I am using Python3 and when reading a file, should I always use 'utf-8-sig' to be safe? Why doesn't 'line' contain this additional code, but 'columns' contains it? | UTF-8-encoded files can be written with a signature indicating it is UTF-8. This signature code is called the "byte order mark" (or BOM) and has the Unicode code point value U+FEFF. If the file containing a BOM is viewed in a hex editor the file will start with the hexadecimal bytes EF BB BF. When viewed in a text editor with a non UTF-8 encoding they often appear as  but that depends on the encoding. The 'utf-8-sig' codec can read UTF-8-encoded files written with and without the starting BOM signature and will remove it if present. Use 'utf-8-sig' for writing a file only if you want a UTF-8 BOM written at the start of the file. Some (usually Windows) programs, such as Excel when reading text files, expect a BOM if the file contains UTF-8, and assume a localized encoding otherwise. Other programs may not expect a BOM and could read it as an extra character, so the choice is yours. So for your two questions: I am using Python3 and when reading a file, should I always use 'utf-8-sig' to be safe? Yes, it will remove the BOM if present. Why doesn't 'line' contain this additional code, but 'columns' contains it? line.strip() doesn't remove \ufeff so I can't reproduce your claim. If a UTF-8 w/ BOM-encoded file is opened with utf8 the first character should be \ufeff. Are you using print to display the line? \ufeff is a whitespace character if printed: >>> line = '\ufeffabc' >>> line '\ufeffabc' >>> print(line) abc >>> print(line.strip()) abc >>> line.strip() '\ufeffabc' | 4 | 8 |
68,561,211 | 2021-7-28 | https://stackoverflow.com/questions/68561211/python-set-timeout-on-popen-stdout-readline | I'd like to be able to set timeout on subprocess stdout and return an empty string if exceeded timeout. Here's my attempt to do so using asyncio. However, it failed on using file.stdout.readline() in asyncio.wait_for. Any idea how to fix this ? import threading import select import subprocess import queue import time import asyncio class collector(): @staticmethod async def readline(file, timeout=3): try: line = await asyncio.wait_for(file.stdout.readline(), timeout) except asyncio.TimeoutError: return "" else: return line @staticmethod async def background(command): f = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) while True: line = await collector.readline(file=f, timeout=3) asyncio.run(collector.background("tail -f /tmp/2222")) And here's callstack : File "/tmp/script.py", line 13, in readline line = await asyncio.wait_for(file.stdout.readline(), timeout) File "/usr/local/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 462, in wait_for fut = ensure_future(fut, loop=loop) File "/usr/local/Cellar/[email protected]/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 679, in ensure_future raise TypeError('An asyncio.Future, a coroutine or an awaitable is ' TypeError: An asyncio.Future, a coroutine or an awaitable is required | The subprocess library only provides synchronous functions. These cannot be directly used by asyncio, and manually wrapping them is inefficient. asyncio already ships with its own subprocess backend. Its process representation works similar to subprocess.Popen but allows to cooperatively wait for operations. import asyncio.subprocess async def background(*command): # create subprocess via asyncio proc = await asyncio.create_subprocess_exec( *command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE ) while True: line = await readline(proc.stdout, timeout=0.01) print("read", len(line), "characters") # read from an async stream vvvvvvvvvvvvvvvvvv instead of a file-like object async def readline(stream: asyncio.StreamReader, timeout: float): try: # stream.readline is a coroutine vvvvvvvvvvvv return await asyncio.wait_for(stream.readline(), timeout=timeout) except asyncio.TimeoutError: return "" asyncio.run(background("cat", "/dev/random")) | 5 | 5 |
68,555,085 | 2021-7-28 | https://stackoverflow.com/questions/68555085/how-can-i-chunk-through-a-csv-using-arrow | What I am trying to do I am using PyArrow to read some CSVs and convert them to Parquet. Some of the files I read have plenty of columns and have a high memory footprint (enough to crash the machine running the job). I am trying to chunk through the file while reading the CSV in a similar way to how Pandas read_csv with chunksize works. For example this is how the chunking code would work in pandas: chunks = pandas.read_csv(data, chunksize=100, iterator=True) # Iterate through chunks for chunk in chunks: do_stuff(chunk) I want to port a similar functionality to Arrow What I have tried to do I noticed that Arrow has ReadOptions which include a block_size parameter, and I thought maybe I could use it like: # Reading in-memory csv file arrow_table = arrow_csv.read_csv( input_file=input_buffer, read_options=arrow_csv.ReadOptions( use_threads=True, block_size=4096 ) ) # Iterate through batches for batch in arrow_table.to_batches(): do_stuff(batch) As this (block_size) does not seem to return an iterator, I am under the impression that this will still make Arrow read the entire table in memory and thus recreate my problem. Lastly, I am aware that I can first read the csv using Pandas and chunk through it then convert to Arrow tables. But I am trying to avoid using Pandas and only use Arrow. I am happy to provide additional information if needed | The function you are looking for is pyarrow.csv.open_csv which returns a pyarrow.csv.CSVStreamingReader. The size of the batches will be controlled by the block_size option you noticed. For a complete example: import pyarrow as pa import pyarrow.parquet as pq import pyarrow.csv in_path = '/home/pace/dev/benchmarks-proj/benchmarks/data/nyctaxi_2010-01.csv.gz' out_path = '/home/pace/dev/benchmarks-proj/benchmarks/data/temp/iterative.parquet' convert_options = pyarrow.csv.ConvertOptions() convert_options.column_types = { 'rate_code': pa.utf8(), 'store_and_fwd_flag': pa.utf8() } writer = None with pyarrow.csv.open_csv(in_path, convert_options=convert_options) as reader: for next_chunk in reader: if next_chunk is None: break if writer is None: writer = pq.ParquetWriter(out_path, next_chunk.schema) next_table = pa.Table.from_batches([next_chunk]) writer.write_table(next_table) writer.close() This example also highlights one of the challenges the streaming CSV reader introduces. It needs to return batches with consistent data types. However, when parsing CSV you typically need to infer the data type. In my example data the first few MB of the file have integral values for the rate_code column. Somewhere in the middle of the batch there is a non-integer value (* in this case) for that column. To work around this issue you can specify the types for columns up front as I am doing here. | 9 | 16 |
68,561,198 | 2021-7-28 | https://stackoverflow.com/questions/68561198/how-to-make-a-matplotlib-screen-fullscreen-without-hiding-the-taskbar | Let's say I have the following chunk of code: import matplotlib.pyplot as plt manager = plt.get_current_fig_manager() manager.full_screen_toggle() plt.show() The screen is set to be in fullscreen mode. My issue is that the taskbar is being hidden. How do I adjust the fullscreen mode so that it doesn't hide my taskbar? | I usually use mng = plt.get_current_fig_manager() mng.frame.Maximize(True) before the call to plt.show(), and I get a maximized window. This works for the 'wx' backend only. Or try this, wm = plt.get_current_fig_manager() wm.window.state('zoomed') | 6 | 6 |
68,559,870 | 2021-7-28 | https://stackoverflow.com/questions/68559870/why-is-the-class-attribute-being-modified | In my book it says Use class attributes to define properties that should have the same value for every class instance. Use instance attributes for properties that vary from one instance to another. But then, in an example, the species class attribute is modified for miles as shown below: class Dog: species = "Canis familiaris" miles = Dog() snoopy = Dog() miles.species = "Felis silvestris" That means that the species class attribute is not for the entire Dog class. Have I misunderstood something or is it simply normal to customise class attributes? | Why is the class attribute being modified? It isn't. miles is an instance of the Dog class, so assigning to miles.species creates an instance attribute. That means that the species class attribute is not for the entire Dog class. In the code you have shown, there are two things called species. An attribute of the class Dog ("Canis familiaris"), which is unchanged for the whole program. An attribute of the instance miles ("Felis silvestris"), which only applies to miles, but not to the whole Dog class, nor to the snoopy instance. Since snoopy doesn't have its own species instance attribute, accessing snoopy.species will fall back to Dog.species. | 4 | 3 |
68,554,782 | 2021-7-28 | https://stackoverflow.com/questions/68554782/modulenotfounderror-no-module-named-tkinter-on-macos | Tkinter doesn't work, it throws an error. Installation: % pip3 install tk My code: #!/usr/bin/env python3 import tkinter as tk The error: Traceback (most recent call last): File "/Users/arghadip/Library/Application Support/CodeRunner/Unsaved/Untitled.py", line 4, in <module> import tkinter as tk File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/tkinter/__init__.py", line 37, in <module> import _tkinter # If this fails your Python may not be configured for Tk ModuleNotFoundError: No module named '_tkinter' | For Python3 tkinter can be simply installed by, brew install python-tk pip sometimes wont work successfully on my Mac, especially with the High Sierra OS version. Brew can be used to install all kinds of software packages in mac. | 18 | 55 |
68,554,094 | 2021-7-28 | https://stackoverflow.com/questions/68554094/remove-the-trailing-comma-when-format-string-with-tuples | I'm doing string formatting with tuples: a = (1,2,3) s = f"something in {a}" print(s) 'something in (1, 2, 3)' Everything is fine until I encounter a single-element tuple, which gives: a = (1,) s = f"something in {a}" 'something in (1,)' what I actually want is: 'something in (1)' How do I make tuple string formatting behaves consistently and remove the trailing comma? | You could use your own formatting logic, e.g. a = (1,2,3) s = ','.join([str(x) for x in a]) print(s) # 1,2,3 a = (1,) s = ','.join([str(x) for x in a]) print(s) # 1 | 5 | 5 |
68,549,918 | 2021-7-27 | https://stackoverflow.com/questions/68549918/how-to-send-messages-on-threads | I currently have code for a command that takes in a channel ID followed by some text message as input. The code then finds the channel and sends the message on it. However, Discord has just released a new thread feature, and currently, no update has been made to the official Discord API docs regarding how bots can interact with threads. So, how can a bot send messages to threads? Please leave down answers as new information is released by Discord. Here's the code I was talking about before: @bot.command() async def text(ctx, channel_id, *, msg): channel = bot.get_channel(int(channel_id)) try: await channel.send(ctx.message.attachments[0].url) except IndexError: pass await channel.trigger_typing() await channel.send(msg) | Since discord.py 2.0, threads are supported with discord.Thread. get_thread method 1. Using the guild @bot.command() async def test(ctx): thread = ctx.guild.get_thread(thread_id) There is also the method get_channel_or_thread which returns either a channel or a thread. channel_or_thread = ctx.guild.get_channel_or_thread(channel_id) 2. Using the text channel @bot.command() async def test(ctx): thread = ctx.channel.get_thread(thread_id) threads list 1. Using the guild @bot.command() async def test(ctx): threads = ctx.guild.threads There is also the coroutine active_threads which returns all active threads which the bot can see active_threads = await ctx.guild.active_threads() 2. Using the text channel @bot.command() async def test(ctx): threads = ctx.channel.threads There is also the async iterator archived_threads that iterates over all archived threads in this text channel async for thread in ctx.channel.archived_threads(): # ... | 4 | 4 |
68,500,166 | 2021-7-23 | https://stackoverflow.com/questions/68500166/does-select-for-update-work-with-the-update-method-in-django | The documentation for Django 2.2, which I'm using, gives the following example usage for select_for_update: from django.db import transaction entries = Entry.objects.select_for_update().filter(author=request.user) with transaction.atomic(): for entry in entries: ... Using this approach, one would presumably mutate the model instances assigned to entry and call save on these. There are cases where I'd prefer the alternative approach below, but I'm unsure whether it would work (or even make sense) with select_for_update. with transaction.atomic(): Entry.objects.select_for_update().filter(author=request.user).update(foo="bar", wobble="wibble") The documentation states that the lock is created when the queryset is evaluated, so I doubt the update method would work. As far as I'm aware update just performs an UPDATE ... WHERE query, with no SELECT before it. However, I would appreciate it if someone more experienced with this aspect of the Django ORM could confirm this. A secondary question is whether a lock even adds any protection against race conditions if one makes a single UPDATE query against the locked rows. (I've entered this train of thought because I'm refactoring code that uses a lock when updating the values of two columns of a single row.) | As far as I'm aware update just performs an UPDATE ... WHERE query, with no SELECT before it Yes, that's correct. You could confirm this by looking at the actual queries made. Using the canonical django tutorial "polls" app as an example: with transaction.atomic(): qs = polls.models.Question.objects.select_for_update().all() qs.update(question_text='test') print(connection.queries) # {'sql': 'UPDATE "polls_question" SET "question_text" = \'test\'', 'time': '0.008'} So, as you expect, there is no SELECT. Though, ensuring the lock is acquired would be as simple as doing anything to cause the queryset to be evaluated. with transaction.atomic(): qs = polls.models.Question.objects.select_for_update().all() list(qs) # cause evaluation, locking the selected rows qs.update(question_text='test') print(connection.queries) #[... # {'sql': 'SELECT "polls_question"."id", "polls_question"."question_text", "polls_question"."pub_date" FROM "polls_question" FOR UPDATE', 'time': '0.003'}, # {'sql': 'UPDATE "polls_question" SET "question_text" = \'test\'', 'time': '0.001'} #] A secondary question is whether a lock even adds any protection against race conditions if one makes a single UPDATE query against the locked rows In general, yes. Whether it is necessary in a particular situation depends what kind of race condition you're worried about. The lock will prevent race conditions where another transaction may try to update the same row, for example. Race conditions can be avoided without locks, too, depending on the nature of the update/race condition. Sometimes a transaction is sufficient, sometimes it's not. You may also use expressions which are evaluated server-side on the db to prevent race conditions (e.g. using Django's F() expressions). There are also other considerations, like your db dialect, isolation levels, and more. Additional reference on race condition thoughts: PostgreSQL anti-patterns: read-modify-write cycles (archive) | 14 | 16 |
68,487,529 | 2021-7-22 | https://stackoverflow.com/questions/68487529/how-to-ensure-python-prints-utf-8-and-not-utf-16-le-when-piped-in-powershell | I want to print text as UTF-8 when piped (to, for example, a file), so on Python 3.7.3 on Windows 10 via PowerShell, I'm doing this: import sys if not sys.stdout.isatty(): sys.stdout.reconfigure(encoding='utf-8') print("Mamma mia.") When run as encodingtest.py > test.txt, test.txt then turns out to be this: 00000000 FF FE 4D 00 61 00 6D 00 6D 00 61 00 20 00 6D 00 ÿþM.a.m.m.a. .m. 00000010 69 00 61 00 2E 00 0D 00 0A 00 i.a....... Mysteriously enough, it starts with FF FE, which is the byte-order marker for UTF-16-LE – and null bytes are printed between the characters (as UTF-16 would have it)! However, when I run it via CMD rather than PowerShell, it prints UTF-8 just fine. How do I get Python to print UTF-8 even when piped via PowerShell? I could run encodingtest.py | Out-File -Encoding UTF8 test.txt instead, but is there a way to ensure the output encoding program-side? | Update: PowerShell (Core) v7.4+ now does support raw byte handling with external programs - see this answer. The following therefore applies only to Windows PowerShell and PowerShell (Core) 7.3- PowerShell fundamentally doesn't support processing raw output (a stream of bytes) from external programs: It invariably decodes such output as text, using the character encoding stored in [Console]::OutputEncoding See this answer for more information. Once decoded, it uses its default character encoding for file-output operations such as > (effectively an alias for the Out-File cmdlet), which for > are: Windows PowerShell (up to v5.1): "Unicode", i.e. UTF-16LE (which is what you're seeing) PowerShell (Core, v6+): BOM-less UTF-8 (now applied consistently across all cmdlets, unlike in Windows PowerShell). In other words: Even use of just > involves a character decoding and re-encoding cycle, with no relationship between the original and the resulting encoding. Therefore: (Temporarily) set [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new() Pipe the output from your Python script call to Out-File - or, preferably, if the input is known to be strings already (always true for external-program calls) - Set-Content with Encoding utf8. Caveat: In Windows PowerShell, you'll invariably get a UTF-8 file with a BOM (see this answer for a workaround). In PowerShell (Core), you'll get one without a BOM (as you would by default), but can opt to create one with -Encoding utf8BOM. To put it all together (saving and restoring the original [Console]::OutputEncoding not shown): [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new() encodingtest.py | Set-Content -Encoding utf8 test.txt Modifying [Console]::OutputEncoding isn't necessary if you've switched to UTF-8 system-wide, as described in this answer, but note that this Windows 10 feature is still in beta as of this writing and has far-reaching consequences. Alternatively, call via cmd.exe, which does pass the raw bytes through to a file with >: cmd /c 'encodingtest.py > test.txt' This technique (which analogously applies to Unix-like platforms via /bin/sh -c) is the general workaround for the lack of raw byte processing (see below). Background information: Lack of support for raw byte streams in PowerShell's pipeline: PowerShell's pipeline is object-based, which means that it is instances of .NET types that flow through it. This evolution of the traditional, binary-only pipeline is the key to PowerShell's power and versatility. 
Everything in PowerShell is mediated via pipelines, including use of the redirection operator >, with ... > foo.txt in effect being syntactic sugar for ... | Out-File foo.txt For PowerShell-native commands, which invariably output .NET objects, some form of encoding is necessary in order to write these objects to a file in a meaningful way (unless the objects are strings already, raw byte representations wouldn't make any sense), so text representations based on PowerShell's for-display output formatting systems are used (which, incidentally, is the reason why > with non-string input is generally unsuited to producing files for later programmatic processing). For external programs, PowerShell has chosen to only ever communicate with them via text (strings), which on receiving output involves the inevitable decoding of the raw bytes received into .NET strings, as described above. See this answer for more information. This lack of support for raw byte streams is problematic: Unless you call the underlying .NET APIs directly to explicitly handle byte streams (which would be quite cumbersome), the cycle of decoding and re-encoding as text: can alter the data, interfering not only with sending byte stream to files, but also with piping data between/to external programs; see this answer for an example. can significantly degrade performance. Historically, when PowerShell was a Windows-only shell, this wasn't much of a problem, because the Windows world didn't have many capable CLIs (command-line interfaces (utilities)) worth calling, so staying within the realm of PowerShell was usually sufficient (performance problems notwithstanding). In an increasingly cross-platform world, however, and especially on Unix-like platforms, capable CLIs abound and are sometimes indispensable for high-performance operations. Therefore, PowerShell should support raw byte streams at least on demand, and situationally even automatically when detecting that data is being piped between two external programs. See GitHub issue #1908 and GitHub issue #5974. | 9 | 13 |
68,476,576 | 2021-7-21 | https://stackoverflow.com/questions/68476576/python-match-case-switch-performance | I was expecting the Python match/case to have equal time access to each case, but seems like I was wrong. Any good explanation why? Lets use the following example: def match_case(decimal): match decimal: case '0': return "000" case '1': return "001" case '2': return "010" case '3': return "011" case '4': return "100" case '5': return "101" case '6': return "110" case '7': return "111" case _: return "NA" And define a quick tool to measure the time: import time def measure_time(funcion): def measured_function(*args, **kwargs): init = time.time() c = funcion(*args, **kwargs) print(f"Input: {args[1]} Time: {time.time() - init}") return c return measured_function @measure_time def repeat(function, input): return [function(input) for i in range(10000000)] If we run each 10000000 times each case, the times are the following: for i in range(8): repeat(match_case, str(i)) # Input: 0 Time: 2.458001136779785 # Input: 1 Time: 2.36093807220459 # Input: 2 Time: 2.6832823753356934 # Input: 3 Time: 2.9995620250701904 # Input: 4 Time: 3.5054492950439453 # Input: 5 Time: 3.815168857574463 # Input: 6 Time: 4.164452791213989 # Input: 7 Time: 4.857251167297363 Just wondering why the access times are different. Isn't this optimised with perhaps a lookup table?. Note that I'm not interested in other ways of having equals access times (i.e. with dictionaries). | The match/case statement introduced in PEP 622 offers a more elegant and readable approach to pattern matching compared to traditional if-elif-else chains. Its primary advantage lies in its ability to streamline complex conditional logic. Traditional approach def is_tuple(node): if isinstance(node, Node) and node.children == [LParen(), RParen()]: return True return (isinstance(node, Node) and len(node.children) == 3 and isinstance(node.children[0], Leaf) and isinstance(node.children[1], Node) and isinstance(node.children[2], Leaf) and node.children[0].value == "(" and node.children[2].value == ")") Using match/case def is_tuple(node: Node) -> bool: match node: case Node(children=[LParen(), RParen()]): return True case Node(children=[Leaf(value="("), Node(), Leaf(value=")")]): return True case _: return False While it may be equivalent to a dict lookup in the most primitive cases, in general it is not so. Case patterns are designed to look like normal python code but actually they conceal isinstance and len calls and don't execute what you'd expect to be executed when you see code like Node(). Essentially this is equivalent to a chain of if ... elif ... else statements. Note that unlike for the previously proposed switch statement, the pre-computed dispatch dictionary semantics does not apply here. | 15 | 9 |
68,533,094 | 2021-7-26 | https://stackoverflow.com/questions/68533094/how-do-i-access-mounted-secrets-when-using-google-cloud-run | I have two questions: Why can't I mount two cloud secrets in the same directory? I have attempted to mount two secrets, FIREBASE_AUTH_SERVICE_ACCOUNT and PURCHASE_VALIDATION_SERVICE_ACCOUNT, in the directory: flask_app/src/services/firebase/service_accounts/ However, I get this error when attempting to do this: spec.template.spec.containers[0].volume_mounts[1].mount_path, Duplicate volume mount paths are forbidden Why is this? How do I access a mounted secret using python? I'm really not sure how to do this as I couldn't find any documentation on how to actually access the secret itself. This is the only thing I found. I am using python just for context. Would the secret be mounted as a .txt, and is that mount path the folder that it is stored in or does it also specify the file name? | With Cloud Run and Secret Manager you can load a secret in two ways: Load a secret into an environment variable, use --set-secrets=ENV_VAR_NAME=secretName:version Load a secret into a file, use --set-secrets=/path/to/file=secretName:version Therefore, you can read a secret the same way you read: An environment variable (something like os.getenv()) A file (something like open('/path/to/file','r')) So, your first question about the directory is not clear. If you mount 2 secrets in 2 files in the same directory, no problem! If it doesn't solve your question, please clarify. | 6 | 12
68,545,064 | 2021-7-27 | https://stackoverflow.com/questions/68545064/python-commands-to-build-distribution-setup-py-build-vs-python-m-build | I'm learning about Python packaging, and according to this guide, the command to build a python distribution package seems to be python3 -m build. But I aslo found that there is a command line interface for setup.py file from setuptools: $ python setup.py --help-commands Standard commands: build build everything needed to install sdist create a source distribution (tarball, zip file, etc.) bdist create a built (binary) distribution bdist_dumb create a "dumb" built distribution bdist_rpm create an RPM distribution ... It seems that python setup.py build, sdist or bdist can aslo build distribution, but I didn't find detailed intructions for these commands, the setuptools command reference lacks explanation for build sdist bdist. So I'm a bit confused, what is the difference between python setup.py build and python -m build, or between python setup.py sdist and python -m build --sdist? Is the python setup.py command deprecated hence the lack of full documentation? When should I use python -m build or python setup.py build? Any help would be appreciated. Update: The doc of build module says “build is roughly the equivalent of setup.py sdist bdist_wheel but with PEP 517 support, allowing use with projects that don’t use setuptools”. So should I always prefer build module rather than running python setup.py manually?. Is there still a use-case for setup.py build? | Citing Why you shouldn't invoke setup.py directly by Paul Ganssle. The setuptools project has stopped maintaining all direct invocations of setup.py years ago, and distutils is deprecated. There are undoubtedly many ways that your setup.py-based system is broken today, even if it's not failing loudly or obviously. Direct invocations of setup.py cannot bootstrap their own dependencies, and so some CLI is necessary for dependency management. The setuptools project no longer wants to provide any public CLI, and will be actively removing the existing interface (though the time scale for this is long). PEP 517, 518 and other standards-based packaging are the future of the Python ecosystem and a lot of progress has been made on making this upgrade seamless. Here's a summary table of the setup.py way and the newer recommended method: setup.py New command setup.py sdist python -m build (with build) setup.py bdist_wheel python -m build (with build) setup.py test pytest (usually via tox or nox) setup.py install pip install setup.py develop pip install -e setup.py upload twine upload (with twine) setup.py check twine check (this doesn't do all the same checks but it's a start) Custom commands tox and nox environments. | 25 | 20 |
68,517,139 | 2021-7-25 | https://stackoverflow.com/questions/68517139/matplotlib-appears-to-not-use-rcparams-when-plotting-particularly-for-text-e-g | I have an issue that matplotlib appears to not be following the rcparams. This occurs particularly for text: annotations, axis labels, titles, etc. I will note that I am not running Seaborn at the same time (Seaborn is known to interfere with some matplotlib settings). I am using Python 3.7.10, matplotlib 3.3.4, and seaborn 0.11.1 For instance, here is a simple plot statement. plt.plot([1,2,3], [1,2,3]) plt.title("Simple plot") Notice that the plot title, as well as the numeric tick labels, all turn out bold. If you look at examples online for matplotlib, they all have non-bold as the default. If I display the contents of plt.rcParams, I get the following (see bottom of post). Notice that the relevant items, e.g., 'figure.titleweight': 'normal', 'font.weight': 'normal' are 'normal' rather than 'bold'. Is there some other local setting it is using? I'm trying to get this to work without providing additional settings. The odd thing is that the default font face (DejaVu Sans), seems to not be responsive to changes in the font weight, such that if I enter the command plt.title("Simple plot", fontdict={'fontweight': 'normal', 'weight': 'normal', 'family': 'sans-serif'}) I get the same result. But if I choose serif font, I get plt.title("Simple plot", fontdict={'fontweight': 'normal', 'weight': 'normal', 'family': 'serif'}) Now, I want to get the result in sans-serif, but not bold, without additional parameters. I also checked my matplotlibrc file, and there is nothing there that seems to override these defaults. I also tried running plt.rcdefaults(), but it didn't help. Where are these bold settings coming from? 
RcParams({'_internal.classic_mode': False, 'agg.path.chunksize': 0, 'animation.avconv_args': [], 'animation.avconv_path': 'avconv', 'animation.bitrate': -1, 'animation.codec': 'h264', 'animation.convert_args': [], 'animation.convert_path': 'convert', 'animation.embed_limit': 20.0, 'animation.ffmpeg_args': [], 'animation.ffmpeg_path': 'ffmpeg', 'animation.frame_format': 'png', 'animation.html': 'none', 'animation.html_args': [], 'animation.writer': 'ffmpeg', 'axes.autolimit_mode': 'data', 'axes.axisbelow': 'line', 'axes.edgecolor': 'black', 'axes.facecolor': 'white', 'axes.formatter.limits': [-5, 6], 'axes.formatter.min_exponent': 0, 'axes.formatter.offset_threshold': 4, 'axes.formatter.use_locale': False, 'axes.formatter.use_mathtext': False, 'axes.formatter.useoffset': True, 'axes.grid': False, 'axes.grid.axis': 'both', 'axes.grid.which': 'major', 'axes.labelcolor': 'black', 'axes.labelpad': 4.0, 'axes.labelsize': 'medium', 'axes.labelweight': 'normal', 'axes.linewidth': 0.8, 'axes.prop_cycle': cycler('color', ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']), 'axes.spines.bottom': True, 'axes.spines.left': True, 'axes.spines.right': True, 'axes.spines.top': True, 'axes.titlecolor': 'auto', 'axes.titlelocation': 'center', 'axes.titlepad': 6.0, 'axes.titlesize': 'large', 'axes.titleweight': 'normal', 'axes.titley': None, 'axes.unicode_minus': True, 'axes.xmargin': 0.05, 'axes.ymargin': 0.05, 'axes3d.grid': True, 'backend': 'Qt5Agg', 'backend_fallback': True, 'boxplot.bootstrap': None, 'boxplot.boxprops.color': 'black', 'boxplot.boxprops.linestyle': '-', 'boxplot.boxprops.linewidth': 1.0, 'boxplot.capprops.color': 'black', 'boxplot.capprops.linestyle': '-', 'boxplot.capprops.linewidth': 1.0, 'boxplot.flierprops.color': 'black', 'boxplot.flierprops.linestyle': 'none', 'boxplot.flierprops.linewidth': 1.0, 'boxplot.flierprops.marker': 'o', 'boxplot.flierprops.markeredgecolor': 'black', 'boxplot.flierprops.markeredgewidth': 1.0, 'boxplot.flierprops.markerfacecolor': 'none', 'boxplot.flierprops.markersize': 6.0, 'boxplot.meanline': False, 'boxplot.meanprops.color': 'C2', 'boxplot.meanprops.linestyle': '--', 'boxplot.meanprops.linewidth': 1.0, 'boxplot.meanprops.marker': '^', 'boxplot.meanprops.markeredgecolor': 'C2', 'boxplot.meanprops.markerfacecolor': 'C2', 'boxplot.meanprops.markersize': 6.0, 'boxplot.medianprops.color': 'C1', 'boxplot.medianprops.linestyle': '-', 'boxplot.medianprops.linewidth': 1.0, 'boxplot.notch': False, 'boxplot.patchartist': False, 'boxplot.showbox': True, 'boxplot.showcaps': True, 'boxplot.showfliers': True, 'boxplot.showmeans': False, 'boxplot.vertical': True, 'boxplot.whiskerprops.color': 'black', 'boxplot.whiskerprops.linestyle': '-', 'boxplot.whiskerprops.linewidth': 1.0, 'boxplot.whiskers': 1.5, 'contour.corner_mask': True, 'contour.linewidth': None, 'contour.negative_linestyle': 'dashed', 'date.autoformatter.day': '%Y-%m-%d', 'date.autoformatter.hour': '%m-%d %H', 'date.autoformatter.microsecond': '%M:%S.%f', 'date.autoformatter.minute': '%d %H:%M', 'date.autoformatter.month': '%Y-%m', 'date.autoformatter.second': '%H:%M:%S', 'date.autoformatter.year': '%Y', 'date.epoch': '1970-01-01T00:00:00', 'docstring.hardcopy': False, 'errorbar.capsize': 0.0, 'figure.autolayout': False, 'figure.constrained_layout.h_pad': 0.04167, 'figure.constrained_layout.hspace': 0.02, 'figure.constrained_layout.use': False, 'figure.constrained_layout.w_pad': 0.04167, 'figure.constrained_layout.wspace': 0.02, 'figure.dpi': 
100.0, 'figure.edgecolor': 'white', 'figure.facecolor': 'white', 'figure.figsize': [6.4, 4.8], 'figure.frameon': True, 'figure.max_open_warning': 20, 'figure.raise_window': True, 'figure.subplot.bottom': 0.11, 'figure.subplot.hspace': 0.2, 'figure.subplot.left': 0.125, 'figure.subplot.right': 0.9, 'figure.subplot.top': 0.88, 'figure.subplot.wspace': 0.2, 'figure.titlesize': 'large', 'figure.titleweight': 'normal', 'font.cursive': ['Apple Chancery', 'Textile', 'Zapf Chancery', 'Sand', 'Script MT', 'Felipa', 'cursive'], 'font.family': ['sans-serif'], 'font.fantasy': ['Comic Neue', 'Comic Sans MS', 'Chicago', 'Charcoal', 'ImpactWestern', 'Humor Sans', 'xkcd', 'fantasy'], 'font.monospace': ['DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Computer Modern Typewriter', 'Andale Mono', 'Nimbus Mono L', 'Courier New', 'Courier', 'Fixed', 'Terminal', 'monospace'], 'font.sans-serif': ['DejaVu Sans', 'Bitstream Vera Sans', 'Computer Modern Sans Serif', 'Lucida Grande', 'Verdana', 'Geneva', 'Lucid', 'Arial', 'Helvetica', 'Avant Garde', 'sans-serif'], 'font.serif': ['DejaVu Serif', 'Bitstream Vera Serif', 'Computer Modern Roman', 'New Century Schoolbook', 'Century Schoolbook L', 'Utopia', 'ITC Bookman', 'Bookman', 'Nimbus Roman No9 L', 'Times New Roman', 'Times', 'Palatino', 'Charter', 'serif'], 'font.size': 10.0, 'font.stretch': 'normal', 'font.style': 'normal', 'font.variant': 'normal', 'font.weight': 'normal', 'grid.alpha': 1.0, 'grid.color': '#b0b0b0', 'grid.linestyle': '-', 'grid.linewidth': 0.8, 'hatch.color': 'black', 'hatch.linewidth': 1.0, 'hist.bins': 10, 'image.aspect': 'equal', 'image.cmap': 'viridis', 'image.composite_image': True, 'image.interpolation': 'antialiased', 'image.lut': 256, 'image.origin': 'upper', 'image.resample': True, 'interactive': True, 'keymap.all_axes': ['a'], 'keymap.back': ['left', 'c', 'backspace', 'MouseButton.BACK'], 'keymap.copy': ['ctrl+c', 'cmd+c'], 'keymap.forward': ['right', 'v', 'MouseButton.FORWARD'], 'keymap.fullscreen': ['f', 'ctrl+f'], 'keymap.grid': ['g'], 'keymap.grid_minor': ['G'], 'keymap.help': ['f1'], 'keymap.home': ['h', 'r', 'home'], 'keymap.pan': ['p'], 'keymap.quit': ['ctrl+w', 'cmd+w', 'q'], 'keymap.quit_all': [], 'keymap.save': ['s', 'ctrl+s'], 'keymap.xscale': ['k', 'L'], 'keymap.yscale': ['l'], 'keymap.zoom': ['o'], 'legend.borderaxespad': 0.5, 'legend.borderpad': 0.4, 'legend.columnspacing': 2.0, 'legend.edgecolor': '0.8', 'legend.facecolor': 'inherit', 'legend.fancybox': True, 'legend.fontsize': 'medium', 'legend.framealpha': 0.8, 'legend.frameon': True, 'legend.handleheight': 0.7, 'legend.handlelength': 2.0, 'legend.handletextpad': 0.8, 'legend.labelspacing': 0.5, 'legend.loc': 'best', 'legend.markerscale': 1.0, 'legend.numpoints': 1, 'legend.scatterpoints': 1, 'legend.shadow': False, 'legend.title_fontsize': None, 'lines.antialiased': True, 'lines.color': 'C0', 'lines.dash_capstyle': 'butt', 'lines.dash_joinstyle': 'round', 'lines.dashdot_pattern': [6.4, 1.6, 1.0, 1.6], 'lines.dashed_pattern': [3.7, 1.6], 'lines.dotted_pattern': [1.0, 1.65], 'lines.linestyle': '-', 'lines.linewidth': 1.5, 'lines.marker': 'None', 'lines.markeredgecolor': 'auto', 'lines.markeredgewidth': 1.0, 'lines.markerfacecolor': 'auto', 'lines.markersize': 6.0, 'lines.scale_dashes': True, 'lines.solid_capstyle': 'projecting', 'lines.solid_joinstyle': 'round', 'markers.fillstyle': 'full', 'mathtext.bf': 'sans:bold', 'mathtext.cal': 'cursive', 'mathtext.default': 'it', 'mathtext.fallback': 'cm', 'mathtext.fallback_to_cm': None, 'mathtext.fontset': 'dejavusans', 
'mathtext.it': 'sans:italic', 'mathtext.rm': 'sans', 'mathtext.sf': 'sans', 'mathtext.tt': 'monospace', 'mpl_toolkits.legacy_colorbar': True, 'patch.antialiased': True, 'patch.edgecolor': 'black', 'patch.facecolor': 'C0', 'patch.force_edgecolor': False, 'patch.linewidth': 1.0, 'path.effects': [], 'path.simplify': True, 'path.simplify_threshold': 0.111111111111, 'path.sketch': None, 'path.snap': True, 'pcolor.shading': 'flat', 'pdf.compression': 6, 'pdf.fonttype': 3, 'pdf.inheritcolor': False, 'pdf.use14corefonts': False, 'pgf.preamble': '', 'pgf.rcfonts': True, 'pgf.texsystem': 'xelatex', 'polaraxes.grid': True, 'ps.distiller.res': 6000, 'ps.fonttype': 3, 'ps.papersize': 'letter', 'ps.useafm': False, 'ps.usedistiller': None, 'savefig.bbox': None, 'savefig.directory': 'C:/Users/002060756/Downloads', 'savefig.dpi': 'figure', 'savefig.edgecolor': 'auto', 'savefig.facecolor': 'auto', 'savefig.format': 'png', 'savefig.jpeg_quality': 95, 'savefig.orientation': 'portrait', 'savefig.pad_inches': 0.1, 'savefig.transparent': False, 'scatter.edgecolors': 'face', 'scatter.marker': 'o', 'svg.fonttype': 'path', 'svg.hashsalt': None, 'svg.image_inline': True, 'text.antialiased': True, 'text.color': 'black', 'text.hinting': 'force_autohint', 'text.hinting_factor': 8, 'text.kerning_factor': 0, 'text.latex.preamble': '', 'text.latex.preview': False, 'text.usetex': False, 'timezone': 'UTC', 'tk.window_focus': False, 'toolbar': 'toolbar2', 'webagg.address': '127.0.0.1', 'webagg.open_in_browser': True, 'webagg.port': 8988, 'webagg.port_retries': 50, 'xaxis.labellocation': 'center', 'xtick.alignment': 'center', 'xtick.bottom': True, 'xtick.color': 'black', 'xtick.direction': 'out', 'xtick.labelbottom': True, 'xtick.labelsize': 'medium', 'xtick.labeltop': False, 'xtick.major.bottom': True, 'xtick.major.pad': 3.5, 'xtick.major.size': 3.5, 'xtick.major.top': True, 'xtick.major.width': 0.8, 'xtick.minor.bottom': True, 'xtick.minor.pad': 3.4, 'xtick.minor.size': 2.0, 'xtick.minor.top': True, 'xtick.minor.visible': False, 'xtick.minor.width': 0.6, 'xtick.top': False, 'yaxis.labellocation': 'center', 'ytick.alignment': 'center_baseline', 'ytick.color': 'black', 'ytick.direction': 'out', 'ytick.labelleft': True, 'ytick.labelright': False, 'ytick.labelsize': 'medium', 'ytick.left': True, 'ytick.major.left': True, 'ytick.major.pad': 3.5, 'ytick.major.right': True, 'ytick.major.size': 3.5, 'ytick.major.width': 0.8, 'ytick.minor.left': True, 'ytick.minor.pad': 3.4, 'ytick.minor.right': True, 'ytick.minor.size': 2.0, 'ytick.minor.visible': False, 'ytick.minor.width': 0.6, 'ytick.right': False}) | This is a known issue documented in font-manager.py. KNOWN ISSUES documentation font variant is untested font stretch is incomplete font size is incomplete default font algorithm needs improvement and testing setWeights function needs improvement 'light' is an invalid weight value, remove it. There seems to be an issue with font-manager.py "scoring" the default 'sans-serif' font-family on your system. The safest way to fix this is to. Back up the cache This is so that you can revert any modifications to the cache if there is an issue. It’s a JSON file on your system with a name here: fm_path = Path( mpl.get_cachedir(), f"fontlist-v{FontManager.__version__}.json") Modify the cache Option 1 Manually update the JSON file so that the default font weight that is selected for 'sans-serif' is correct. Option 2 Allow the FontManager to rebuild the cache. 
del matplotlib.font_manager.weight_dict['sans-serif'] matplotlib.font_manager._rebuild() References https://github.com/matplotlib/matplotlib/issues/5574 https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/font_manager.py Matplotlib: Times New Roman appears bold | 6 | 1 |
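A minimal sketch for locating the font cache mentioned in the answer, so it can be backed up or deleted and rebuilt on the next import; the fontlist-v*.json file name pattern is taken from the answer's fm_path snippet.

```python
# Minimal sketch: locate matplotlib's font cache so it can be backed up or removed.
# Deleting it forces matplotlib to rebuild the cache on the next import.
import glob
import os
import matplotlib

cache_dir = matplotlib.get_cachedir()
for path in glob.glob(os.path.join(cache_dir, "fontlist-v*.json")):
    print("font cache:", path)
    # os.remove(path)  # uncomment to delete and force a rebuild
```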
68,523,339 | 2021-7-26 | https://stackoverflow.com/questions/68523339/is-bisect-bisect-different-from-bisect-bisect-right-in-python | Based on everything that I have seen bisect.bisect and bisect.bisect_right in Python seem to do the same thing. Is there any difference that accounts for the difference in name, or do they have identical behavior and merely a different name? Obviously bisect.bisect_left is different from both of them, but both bisect and bisect_right seem to always return the rightmost position where insertion of the element would maintain a sorted order. | They are identical: >>> import bisect >>> bisect.bisect is bisect.bisect_right True In case you were curious, bisect_right is the original function and bisect is the alias: >>> bisect.bisect.__name__ 'bisect_right' | 4 | 14 |
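A quick demonstration of the identity, and of how both differ from bisect_left when the value already occurs in the list:

```python
import bisect

data = [1, 2, 2, 2, 3]
print(bisect.bisect(data, 2))        # 4 -- insertion point to the right of the 2s
print(bisect.bisect_right(data, 2))  # 4 -- same function under another name
print(bisect.bisect_left(data, 2))   # 1 -- insertion point to the left of the 2s
```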
68,532,627 | 2021-7-26 | https://stackoverflow.com/questions/68532627/fastapi-returns-404-when-accessing-url-in-the-browser | I am learning fastapi and created the following sample application: from fastapi import FastAPI import uvicorn app = FastAPI() @app.get("/hello") async def hello_world(): return {"message": "hello_world"} if __name__== "__main__": uvicorn.run(app, host="127.0.0.1", port=8080) The server starts fine, but when I test the URL in the browser I am getting the following error: {"detail": "Not Found"} and this error in the log: "GET / HTTP /" 404 Not Found I noticed another weird problem: when I make some error, it's not detecting the error and is still starting the server. For example, if I change the function like the following: @app.get("/hello") async def hello_world(): print (sample) return {"message": "hello_world"} It should have thrown the error NameError: "sample" not defined, but it is still starting the server. Any suggestions will be helpful. | Always add the trailing slash after the path; I had the same issue, and it took me hours to debug. | 4 | 7 |
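Note that the question's log line shows a request to / while the only handler is registered at /hello. A small, hedged sketch for listing which paths the app actually serves (so you know which URLs will not 404, e.g. /hello or the auto-generated /docs page):

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/hello")
async def hello_world():
    return {"message": "hello_world"}

# Print every registered path, including FastAPI's built-in /docs and /openapi.json.
for route in app.routes:
    print(route.path)
```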
68,490,691 | 2021-7-22 | https://stackoverflow.com/questions/68490691/faster-way-to-look-for-a-value-in-pandas-dataframe | I'm trying to "translate" some of my R scripts to Python, but I notice that working with data frames in Python is tremendously slower than doing it in R, e.g. extracting cells according to some conditions. I've done a little investigation; this is how much time it takes to look for a specific value in Python: import pandas as pd from timeit import default_timer as timer code = 145896 # real df is way bigger df = pd.DataFrame(data={ 'code1': [145896, 800175, 633974, 774521, 416109], 'code2': [100, 800, 600, 700, 400], 'code3': [1, 8, 6, 7, 4]} ) start = timer() for _ in range(100000): desired = df.loc[df['code1']==code, 'code2'][0] print(timer() - start) # 19.866242500000226 (sec) and in R: code <- 145896 df <- data.frame("code1" = c(145896, 800175, 633974, 774521, 416109), "code2" = c(100, 800, 600, 700, 400), "code3" = c(1, 8, 6, 7, 4)) start <- Sys.time() for (i in 1:100000) { desired <- df[df$code1 == code, "code2"] } print(Sys.time() - start) # Time difference of 1.140949 secs I'm relatively new to Python, and I'm probably missing something. Is there some way to speed up this process? Maybe the whole idea of transferring this script to Python is pointless? In other operations Python is faster (namely working with strings), and it would be very inconvenient to jump between two or more scripts once working with data frames is required. Any help on this, please? UPDATE Real script block iterates over rows of the initial data frame (which is fairly large, 500-1500k rows) and creates a new one with rows containing the value from the original column "code1", the codes that correspond to it from another data frame, and many other values that are newly created. I believe I can clarify it with the picture: Later in the script I will need to search for specific values in loops based on different conditions too. So the speed of search is essential. | Since you are looking to select a single value from a DataFrame, there are a few things you can do to improve performance. Use .item() instead of [0], which gives a small but decent improvement, especially for smaller DataFrames. It's wasteful to mask the entire DataFrame just to then select a known Series. Instead mask only the Series and select the value. Though you might think "oh this is chained -- the forbidden ][", it's only chained assignment which is worrisome, not chained selection. Use numpy. Pandas has a lot of overhead due to the indexing and alignment. But you just want to select a single value from a rectangular data structure, so dropping down to numpy will be faster. Below are illustrations of the timing for different ways to select the data [each with its own method below]. Using numpy is by far the fastest, especially for a smaller DataFrame like in your sample. For those, it will be more than 20x faster than your original way to select data, which, looking at your initial comparisons with R, should make it slightly faster than selecting data in R. As the DataFrames get larger the relative performance of the numpy solution isn't as good, but it's still the fastest method (shown here).
import perfplot import pandas as pd import numpy as np def DataFrame_Slice(df, code=0): return df.loc[df['code1'] == code, 'code2'].iloc[0] def DataFrame_Slice_Item(df, code=0): return df.loc[df['code1'] == code, 'code2'].item() def Series_Slice_Item(df, code=0): return df['code2'][df['code1'] == code].item() def with_numpy(df, code=0): return df['code2'].to_numpy()[df['code1'].to_numpy() == code].item() perfplot.show( setup=lambda N: pd.DataFrame({'code1': range(N), 'code2': range(50, N+50), 'code3': range(100, N+100)}), kernels=[ lambda df: DataFrame_Slice(df), lambda df: DataFrame_Slice_Item(df), lambda df: Series_Slice_Item(df), lambda df: with_numpy(df) ], labels=['DataFrame_Slice', 'DataFrame_Slice_Item', 'Series_Slice_Item', 'with_numpy'], n_range=[2 ** k for k in range(1, 21)], equality_check=np.allclose, relative_to=3, xlabel='len(df)' ) | 9 | 18 |
68,507,277 | 2021-7-24 | https://stackoverflow.com/questions/68507277/connection-error-between-two-devices-importerror-libmariadb-so-3-cannot-open | I have two devices that I use as MySql server and Django server. My system, which works on my development device, becomes inoperable when it switches to other devices. Settings on 192.168.3.60: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'bildirdi_hurmalar', 'USER': 'gungelir_tirmalar', 'PASSWORD': 'EhuEhu_1793', 'HOST': '192.168.3.65', 'PORT': '3306', } } Apache2 and wsgi on mainside. Error when I run Migrate or runserver: (env) pi@raspberrypi:~/bilidrdi.com $ python manage.py runserver Watching for file changes with StatReloader Exception in thread django-main-thread: Traceback (most recent call last): File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/MySQLdb/__init__.py", line 18, in <module> from . import _mysql ImportError: libmariadb.so.3: cannot open shared object file: No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 865, in run self._target(*self._args, **self._kwargs) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 110, in inner_run autoreload.raise_last_exception() File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception raise _exception[1] File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute autoreload.check_errors(django.setup)() File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/apps/registry.py", line 114, in populate app_config.import_models() File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/apps/config.py", line 301, in import_models self.models_module = import_module(models_module_name) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/contrib/auth/models.py", line 3, in <module> from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/contrib/auth/base_user.py", line 48, in <module> class AbstractBaseUser(models.Model): File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/db/models/base.py", line 122, in __new__ new_class.add_to_class('_meta', Options(meta, app_label)) File 
"/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/db/models/base.py", line 326, in add_to_class value.contribute_to_class(cls, name) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/db/models/options.py", line 207, in contribute_to_class self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/utils/connection.py", line 15, in __getattr__ return getattr(self._connections[self._alias], item) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/utils/connection.py", line 62, in __getitem__ conn = self.create_connection(alias) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/db/utils.py", line 204, in create_connection backend = load_backend(db['ENGINE']) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/db/utils.py", line 111, in load_backend return import_module('%s.base' % backend_name) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 15, in <module> import MySQLdb as Database File "/home/pi/bilidrdi.com/env/lib/python3.7/site-packages/MySQLdb/__init__.py", line 24, in <module> version_info, _mysql.version_info, _mysql.__file__ NameError: name '_mysql' is not defined Requirements: asgiref==3.4.1 Django==3.2.5 django-bootstrap-form==3.4 django-post-office==3.5.3 jsonfield==3.1.0 mysql-client==0.0.1 mysqlclient==2.0.3 nano==0.10.0 Pillow==8.3.1 pkg-resources==0.0.0 pytz==2021.1 sqlparse==0.4.1 typing-extensions==3.10.0.0 192.1683.65 is dedicated server on debian. | Try to install libmariadbclient-dev: sudo apt install libmariadbclient-dev | 5 | 2 |
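Once the client library is installed, a quick check (run inside the project's virtualenv) that the Python driver and its C library now load, so the same ImportError will no longer break Django's MySQL backend:

```python
# Quick sanity check that the MySQL driver (mysqlclient) and libmariadb now load.
import MySQLdb

print(MySQLdb.version_info)
```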
68,469,643 | 2021-7-21 | https://stackoverflow.com/questions/68469643/docker-build-time-secrets-with-layer-caching | I have a Dockerfile that does a pip install of a package from AWS CodeArtifact. The install requires an auth token, so my current approach is to generate the dynamic/secret repo url in a build script and pass it into Docker as a build arg, which leads to lines like this in my Dockerfile: ARG CORE_REPO_URL ARG CORE_VERSION RUN pip install -i $CORE_REPO_URL mylib_core==$CORE_VERSION The use of ARGs in a RUN command causes that layer to never be cached, and therefore this part gets rebuilt every time even if the library version did not change. Is there a better way to do this such that the layer cache would be used unless the CORE_VERSION changed? Maybe I should be installing the AWS tool chain in the image so the dynamic repo url can be generated in there in an earlier step (using the same command every time so it wouldn't require an ARG and would hopefully cache the layer)? One downside of this is having to put AWS credentials in the image. I could maybe involve docker secrets to avoid that if that's the only solution though. | Figured it out for my use case: a package in CodeArtifact (CA), in a multi-account setup where you assume a role to get where you need. Hopefully this will help someone else who finds this. Docker Build using an Assumed Role Profile. Remember to build with BuildKit. # syntax = docker/dockerfile:experimental # This needs to go at the top of the file or it will break ^ # installing requirements # NB: you can probably just take this image; it's real small, and depending on :latest isn't the best, but this is only a proof of concept. FROM amazon/aws-cli:latest AS dependencies ARG PYTHONLIBS ARG PROFILE ENV AWS_DEFAULT_PROFILE=$PROFILE COPY ./requirements.txt . # NB I had this here as I was assuming a role; you may not need it. COPY ./config /root/.aws/config RUN yum install -y pip RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws codeartifact login --tool pip --repository //rest of command RUN pip install -r requirements.txt --target ${PYTHONLIBS} | 5 | 1 |
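A hedged sketch of the "build script" approach the question describes: generate a short-lived CodeArtifact index URL outside the image and pass it in as CORE_REPO_URL. The domain, account and repository names are placeholders, and the final URL assembly should be double-checked against your repository's actual pip endpoint.

```python
# Hedged sketch: mint a short-lived CodeArtifact pip index URL in the build script.
# Placeholder domain/account/repository values; requires valid AWS credentials via boto3.
import boto3

client = boto3.client("codeartifact")
token = client.get_authorization_token(
    domain="my-domain", domainOwner="111122223333", durationSeconds=1800
)["authorizationToken"]
endpoint = client.get_repository_endpoint(
    domain="my-domain", domainOwner="111122223333",
    repository="my-repo", format="pypi",
)["repositoryEndpoint"]
index_url = endpoint.replace("https://", f"https://aws:{token}@") + "simple/"
print(index_url)  # pass as --build-arg CORE_REPO_URL=... (or, better, as a BuildKit secret)
```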
68,490,194 | 2021-7-22 | https://stackoverflow.com/questions/68490194/defining-a-python-enum-in-a-c-extension-am-i-doing-this-right | I'm working on a Python C extension and I would like to expose a custom enum (as in: a class inheriting from enum.Enum) that would be entirely defined in C. It turned out to not be a trivial task and the regular mechanism for inheritance using .tp_base doesn't work - most likely due to the Enum's meta class not being pulled in. Basically I'm trying to do this: import enum class FooBar(enum.Enum): FOO = 1 BAR = 2 in C. After a lot of digging into cpython's internals, this is what I came up with, wrapped in an example buildable module: #include <Python.h> PyDoc_STRVAR(module_doc, "C extension module defining a class inheriting from enum.Enum."); static PyModuleDef module_def = { PyModuleDef_HEAD_INIT, .m_name = "pycenum", .m_doc = module_doc, .m_size = -1, }; struct enum_descr { const char *name; long value; }; static const struct enum_descr foobar_descr[] = { { .name = "FOO", .value = 1, }, { .name = "BAR", .value = 2, }, { } }; static PyObject *make_bases(PyObject *enum_mod) { PyObject *enum_type, *bases; enum_type = PyObject_GetAttrString(enum_mod, "Enum"); if (!enum_type) return NULL; bases = PyTuple_Pack(1, enum_type); /* Steals reference. */ if (!bases) Py_DECREF(enum_type); return bases; } static PyObject *make_classdict(PyObject *enum_mod, PyObject *bases) { PyObject *enum_meta_type, *classdict; enum_meta_type = PyObject_GetAttrString(enum_mod, "EnumMeta"); if (!enum_meta_type) return NULL; classdict = PyObject_CallMethod(enum_meta_type, "__prepare__", "sO", "FooBarEnum", bases); Py_DECREF(enum_meta_type); return classdict; } static int fill_classdict(PyObject *classdict, PyObject *modname, const struct enum_descr *descr) { const struct enum_descr *entry; PyObject *key, *val; int ret; key = PyUnicode_FromString("__module__"); if (!key) return -1; ret = PyObject_SetItem(classdict, key, modname); Py_DECREF(key); if (ret < 0) return -1; for (entry = descr; entry->name; entry++) { key = PyUnicode_FromString(entry->name); if (!key) return -1; val = PyLong_FromLong(entry->value); if (!val) { Py_DECREF(key); return -1; } ret = PyObject_SetItem(classdict, key, val); Py_DECREF(key); Py_DECREF(val); if (ret < 0) return -1; } return 0; } static PyObject *make_new_type(PyObject *classdict, PyObject *bases, const char *enum_name) { PyObject *name, *args, *new_type; int ret; name = PyUnicode_FromString(enum_name); if (!name) return NULL; args = PyTuple_Pack(3, name, bases, classdict); if (!args) { Py_DECREF(name); return NULL; } Py_INCREF(bases); Py_INCREF(classdict); /* * Reference to name was stolen by PyTuple_Pack(), no need to * increase it here. 
*/ new_type = PyObject_CallObject((PyObject *)&PyType_Type, args); Py_DECREF(args); if (!new_type) return NULL; ret = PyType_Ready((PyTypeObject *)new_type); if (ret < 0) { Py_DECREF(new_type); return NULL; } return new_type; } static PyObject *make_enum_type(PyObject *modname, const char *enum_name, const struct enum_descr *descr) { PyObject *enum_mod, *bases, *classdict, *new_type; int ret; enum_mod = PyImport_ImportModule("enum"); if (!enum_mod) return NULL; bases = make_bases(enum_mod); if (!bases) { Py_DECREF(enum_mod); return NULL; } classdict = make_classdict(enum_mod, bases); if (!classdict) { Py_DECREF(bases); Py_DECREF(enum_mod); return NULL; } ret = fill_classdict(classdict, modname, descr); if (ret < 0) { Py_DECREF(bases); Py_DECREF(enum_mod); Py_DECREF(classdict); return NULL; } new_type = make_new_type(classdict, bases, enum_name); Py_DECREF(bases); Py_DECREF(enum_mod); Py_DECREF(classdict); return new_type; } PyMODINIT_FUNC PyInit_pycenum(void) { PyObject *module, *modname, *sub_enum_type; int ret; module = PyModule_Create(&module_def); if (!module) return NULL; ret = PyModule_AddStringConstant(module, "__version__", "0.0.1"); if (ret < 0) { Py_DECREF(module); return NULL; } modname = PyModule_GetNameObject(module); if (!modname) { Py_DECREF(module); return NULL; } sub_enum_type = make_enum_type(modname, "FooBar", foobar_descr); Py_DECREF(modname); if (!sub_enum_type) { Py_DECREF(module); return NULL; } ret = PyModule_AddObject(module, "FooBar", sub_enum_type); if (ret < 0) { Py_DECREF(sub_enum_type); Py_DECREF(module); return NULL; } return module; } Basically I'm calling the EnumMeta's __prepare__ method directly to create a correct classdict and then I call the PyType_Type object too to create the sub-type. This works and AFAICT results in a class that behaves exactly as expected, but... am I doing this right? Any feedback is appreciated. | The metaclass in Enum is tricky, yes. But you can see here that you can create an enum (in Python) like: FooBar = enum.Enum('FooBar', dict(FOO=1, BAR=2)) So you can use this technique to easily create an enum class with the Python C API by doing something like: PyObject *key, *val, *name, *attrs, *args, *modname, *kwargs, *enum_type, *sub_enum_type; attrs = PyDict_New(); key = PyUnicode_FromString("FOO"); val = PyLong_FromLong(1); PyObject_SetItem(attrs, key, val); Py_DECREF(key); Py_DECREF(val); key = PyUnicode_FromString("BAR"); val = PyLong_FromLong(2); PyObject_SetItem(attrs, key, val); Py_DECREF(key); Py_DECREF(val); name = PyUnicode_FromString("FooBar"); args = PyTuple_Pack(2, name, attrs); Py_DECREF(attrs); Py_DECREF(name); // the module name might need to be passed as keyword argument kwargs = PyDict_New(); key = PyUnicode_FromString("module"); modname = PyModule_GetNameObject(module); PyObject_SetItem(kwargs, key, modname); Py_DECREF(key); Py_DECREF(modname); enum_type = PyObject_GetAttrString(enum_mod, "Enum"); sub_enum_type = PyObject_Call(enum_type, args, kwargs); Py_DECREF(enum_type); Py_DECREF(args); Py_DECREF(kwargs); return sub_enum_type; | 14 | 15 |
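Assuming the extension builds as pycenum (the module name used in the question), a short check from Python that the resulting class really behaves like an enum.Enum subclass:

```python
# Hypothetical usage check for the extension above; module name taken from the question.
import enum
import pycenum

assert issubclass(pycenum.FooBar, enum.Enum)
assert pycenum.FooBar.FOO.value == 1
assert pycenum.FooBar(2) is pycenum.FooBar.BAR
print(list(pycenum.FooBar))
```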
68,523,752 | 2021-7-26 | https://stackoverflow.com/questions/68523752/python-module-asyncio-has-no-attribute-to-thread | From python's asyncio examples: import asyncio import time def blocking_io(): print(f"start blocking_io at {time.strftime('%X')}") # Note that time.sleep() can be replaced with any blocking # IO-bound operation, such as file operations. time.sleep(1) print(f"blocking_io complete at {time.strftime('%X')}") async def main(): print(f"started main at {time.strftime('%X')}") await asyncio.gather( asyncio.to_thread(blocking_io), asyncio.sleep(1)) print(f"finished main at {time.strftime('%X')}") asyncio.run(main()) # Expected output: # # started main at 19:50:53 # start blocking_io at 19:50:53 # blocking_io complete at 19:50:54 # finished main at 19:50:54 It is outputting the next error: asyncio.to_thread(blocking_io), AttributeError: module 'asyncio' has no attribute 'to_thread' Has this feature been deprecated? What would be an alternative for threading with asyncio? | to_thread is only available in python 3.9+, if you are working with python 3.8 or an older version, you can copy the source code of it: async def to_thread(func, /, *args, **kwargs): loop = asyncio.get_running_loop() ctx = contextvars.copy_context() func_call = functools.partial(ctx.run, func, *args, **kwargs) return await loop.run_in_executor(None, func_call) This method copies the context to the thread(to use the current value of your set ContextVars) If you don't need that and just want a one liner: await asyncio.get_running_loop().run_in_executor(None, blocking_io, arg1, arg2) More info about run_in_executor | 9 | 19 |
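The backported snippet above also needs its imports (asyncio, contextvars, functools). A self-contained sketch that drops the backport into the question's example on Python 3.8:

```python
import asyncio
import contextvars
import functools
import time

async def to_thread(func, /, *args, **kwargs):
    # Backport of asyncio.to_thread for Python 3.8 and earlier, as in the answer above.
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()
    func_call = functools.partial(ctx.run, func, *args, **kwargs)
    return await loop.run_in_executor(None, func_call)

def blocking_io():
    time.sleep(1)  # stands in for any blocking IO-bound operation

async def main():
    await asyncio.gather(to_thread(blocking_io), asyncio.sleep(1))

asyncio.run(main())
```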
68,486,056 | 2021-7-22 | https://stackoverflow.com/questions/68486056/different-behavior-while-reading-dataframe-from-parquet-using-cli-versus-executa | Please consider following program as Minimal Reproducible Example -MRE: import pandas as pd import pyarrow from pyarrow import parquet def foo(): print(pyarrow.__file__) print('version:',pyarrow.cpp_version) print('-----------------------------------------------------') df = pd.DataFrame({'A': [1,2,3], 'B':['dummy']*3}) print('Orignal DataFrame:\n', df) print('-----------------------------------------------------') _table = pyarrow.Table.from_pandas(df) parquet.write_table(_table, 'foo') _table = parquet.read_table('foo', columns=[]) #passing empty list to columns arg df = _table.to_pandas() print('After reading from file with columns=[]:\n', df) print('-----------------------------------------------------') print('Not passing [] to columns parameter') _table = parquet.read_table('foo') #Not passing any list df = _table.to_pandas() print(df) print('-----------------------------------------------------') x = input('press any key to exit: ') if __name__=='__main__': foo() When I run it from console/IDE, it reads the entire data for columns=[]: (env) D:\foo>python foo.py D:\foo\env\lib\site-packages\pyarrow\__init__.py version: 3.0.0 ----------------------------------------------------- Orignal DataFrame: A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- After reading from file with columns=[]: A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- Not passing [] to columns parameter A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- press any key to exit: But When I run it from executable created using Pyinstaller, it reads no data for columns=[]: E:\foo\dist\foo\pyarrow\__init__.pyc version: 3.0.0 ----------------------------------------------------- Orignal DataFrame: A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- After reading from file with columns=[]: Empty DataFrame Columns: [] Index: [0, 1, 2] ----------------------------------------------------- Not passing [] to columns parameter A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- press any key to exit: As you can see passing columns=[] gives empty dataframe in executable file but this behavior is not there while running the python file directly, and I'm not sure why there is this two different behavior for the same code in the same environment. Looking at docstring of parquet.read_table in source code at GitHub: columns: list If not None, only these columns will be read from the file. A column name may be a prefix of a nested field, e.g. 'a' will select 'a.b', 'a.c', and 'a.d.e'. The read_table further calls dataset.read that calls _dataset.to_table which returns call to self.scanner which then returns call to static method from_dataset of Scanner class. Everywhere, None has been used as default value to columns parameter, if None and [] are directly converted to Boolean in python, both of them will indeed be False, but if [] is checked against None, then it will be False, but it is nowhere mentioned should it fetch all the columns for columns=[] because it evaluates to be False for Boolean value, or should it read no columns at all since the list is empty. 
But why the behavior is different while running it from the Command line/IDE, than to running it from the executable created using Pyinstaller for the same version of Pyarrow? The environment I'm on: Python Version: 3.7.6 Pyinstaller Verson: 4.2 Pyarrow Version: 3.0.0 Windows 10 64 bit OS Here is the spec file for your reference if you want to give it a try (You need to change pathex parameter): foo.spec # -*- mode: python ; coding: utf-8 -*- import sys ; sys.setrecursionlimit(sys.getrecursionlimit() * 5) block_cipher = None a = Analysis(['foo.py'], pathex=['D:\\foo'], binaries=[], datas=[], hiddenimports=[], hookspath=[], runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE(pyz, a.scripts, [], exclude_binaries=True, name='foo', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True ) coll = COLLECT(exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, upx_exclude=[], name='foo') | Credit to @U12-Forward for assisting me in debugging the issue. After a bit of research and debugging, and exploring the library program files, I found that pyarrow uses _ParquetDatasetV2 and ParquetDataset functions which are essentially two different functions that reads the data from parquet file, _ParquetDatasetV2 is used as legacy_mode, even though these functions are defined in pyarrow.parquet module, they are coming from Dataset module of pyarrow, which was missing in the executable created using Pyinstaller. When I added pyarrow.Dataset as hidden imports and created the build, the exe was raising ModuleNotFoundError on execution due to several missing dependencies used by Dataset module. In order to resolve it, I added all .py files from the environment to the hidden imports and created the build again, finally it worked, by it worked, what I mean is I was able to observe the same behavior in both the environments. 
The modified spec file looks like this after modification: # -*- mode: python ; coding: utf-8 -*- import sys ; sys.setrecursionlimit(sys.getrecursionlimit() * 5) block_cipher = None a = Analysis(['foo.py'], pathex=['D:\\foo'], binaries=[], datas=[], hiddenimports=['pyarrow.benchmark', 'pyarrow.cffi', 'pyarrow.compat', 'pyarrow.compute', 'pyarrow.csv', 'pyarrow.cuda', 'pyarrow.dataset', 'pyarrow.feather', 'pyarrow.filesystem', 'pyarrow.flight', 'pyarrow.fs', 'pyarrow.hdfs', 'pyarrow.ipc', 'pyarrow.json', 'pyarrow.jvm', 'pyarrow.orc', 'pyarrow.pandas_compat', 'pyarrow.parquet', 'pyarrow.plasma', 'pyarrow.serialization', 'pyarrow.types', 'pyarrow.util', 'pyarrow._generated_version', 'pyarrow.__init__'], hookspath=[], runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE(pyz, a.scripts, [], exclude_binaries=True, name='foo', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True ) coll = COLLECT(exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, upx_exclude=[], name='foo') Also, to create the build, I included the path of the virtual environment using --paths argument: pyinstaller --path D:\foo\env\Lib\site-packages foo.spec Here is execution after following above mentioned steps: E:\foo\dist\foo\pyarrow\__init__.pyc version: 3.0.0 ----------------------------------------------------- Orignal DataFrame: A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- After reading from file with columns=[]: A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- Not passing [] to columns parameter A B 0 1 dummy 1 2 dummy 2 3 dummy ----------------------------------------------------- press any key to exit: It is true that it is nowhere mentioned the desired behavior for columns=[], but looking at ARROW-13436 opened in pyarrow by @Pace, it seems that the desired behavior for columns=[] is to read no data columns at all, but its not an official conformation, so it is possibly a bug in pyarrow 3.0.0 itself. | 10 | 3 |
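Instead of hand-listing every pyarrow submodule, PyInstaller's hook utilities can collect them automatically; a hedged sketch of what the hiddenimports assignment in the .spec file could look like:

```python
# Hedged alternative to listing pyarrow submodules by hand in the .spec file.
from PyInstaller.utils.hooks import collect_submodules

hiddenimports = collect_submodules("pyarrow")
```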
68,507,862 | 2021-7-24 | https://stackoverflow.com/questions/68507862/how-to-solve-condahttperror-http-000-connection-failed-error-in-wsl | I have enabled WSL in my Windows 10 and installed Ubuntu 20.04 LTS from Microsoft store. To use meep software, I am following the installation process on my Windows 10. Unfortunately, when I am running below command, conda create -n mp -c conda-forge pymeep I am getting an error like, Collecting package metadata (current_repodata.json): failed CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/linux-64/current_repodata.json> Elapsed: - An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way. 'https://conda.anaconda.org/conda-forge/linux-64' I have tried to disable ssl with below command, conda config --set ssl_verify no But no luck. Now, How to resolve this issue? here is the --verbose output of the command, tasmia@TASMIA-PC:~/study/softwares$ conda create -n mp -c conda-forge pymeep -v Collecting package metadata (current_repodata.json): ...working... Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46d7220>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/r/noarch/current_repodata.json Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46d7be0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /conda-forge/linux-64/current_repodata.json Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46d74c0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /conda-forge/noarch/current_repodata.json Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46d7fd0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/r/linux-64/current_repodata.json Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46ef7c0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/main/noarch/current_repodata.json Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46fd4f0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/main/linux-64/current_repodata.json Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46ef3d0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/r/noarch/current_repodata.json Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46ef550>: Failed to establish a new connection: [Errno -2] Name or service not 
known')': /conda-forge/linux-64/current_repodata.json Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46efb20>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/r/linux-64/current_repodata.json Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46ef970>: Failed to establish a new connection: [Errno -2] Name or service not known')': /conda-forge/noarch/current_repodata.json Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46efd00>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/main/noarch/current_repodata.json Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46fd6d0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/main/linux-64/current_repodata.json Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46fda30>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/r/noarch/current_repodata.json Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46fdc10>: Failed to establish a new connection: [Errno -2] Name or service not known')': /conda-forge/linux-64/current_repodata.json Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46fdd90>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/r/linux-64/current_repodata.json Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e46fdf10>: Failed to establish a new connection: [Errno -2] Name or service not known')': /conda-forge/noarch/current_repodata.json Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e464c0d0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/main/noarch/current_repodata.json Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e464c250>: Failed to establish a new connection: [Errno -2] Name or service not known')': /pkgs/main/linux-64/current_repodata.json failed Traceback (most recent call last): File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connection.py", line 169, in _new_conn conn = connection.create_connection( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/util/connection.py", line 73, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File 
"/home/tasmia/miniconda/lib/python3.9/socket.py", line 953, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno -2] Name or service not known During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen httplib_response = self._make_request( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 382, in _make_request self._validate_conn(conn) File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn conn.connect() File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connection.py", line 353, in connect conn = self._new_conn() File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connection.py", line 181, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f02e464c700>: Failed to establish a new connection: [Errno -2] Name or service not known During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/tasmia/miniconda/lib/python3.9/site-packages/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 783, in urlopen return self.urlopen( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 783, in urlopen return self.urlopen( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 783, in urlopen return self.urlopen( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen retries = retries.increment( File "/home/tasmia/miniconda/lib/python3.9/site-packages/urllib3/util/retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='conda.anaconda.org', port=443): Max retries exceeded with url: /conda-forge/linux-64/current_repodata.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e464c700>: Failed to establish a new connection: [Errno -2] Name or service not known')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 701, in fetch_repodata_remote_request resp = session.get(join_url(url, filename), headers=headers, proxies=session.proxies, File "/home/tasmia/miniconda/lib/python3.9/site-packages/requests/sessions.py", line 555, in get return self.request('GET', url, **kwargs) File "/home/tasmia/miniconda/lib/python3.9/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/home/tasmia/miniconda/lib/python3.9/site-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/home/tasmia/miniconda/lib/python3.9/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='conda.anaconda.org', port=443): Max retries exceeded with url: /conda-forge/linux-64/current_repodata.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f02e464c700>: Failed to establish a new 
connection: [Errno -2] Name or service not known')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/exceptions.py", line 1079, in __call__ return func(*args, **kwargs) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/cli/main.py", line 84, in _main exit_code = do_call(args, p) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/cli/conda_argparse.py", line 83, in do_call return getattr(module, func_name)(args, parser) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/cli/main_create.py", line 41, in execute install(args, parser, 'create') File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/cli/install.py", line 261, in install unlink_link_transaction = solver.solve_for_transaction( File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/solve.py", line 114, in solve_for_transaction unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier, File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/solve.py", line 157, in solve_for_diff final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned, File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/solve.py", line 262, in solve_final_state ssc = self._collect_all_metadata(ssc) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/common/io.py", line 88, in decorated return f(*args, **kwds) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/solve.py", line 425, in _collect_all_metadata index, r = self._prepare(prepared_specs) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/solve.py", line 1020, in _prepare reduced_index = get_reduced_index(self.prefix, self.channels, File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/index.py", line 288, in get_reduced_index new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs, File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 140, in query_all result = tuple(concat(executor.map(subdir_query, channel_urls))) File "/home/tasmia/miniconda/lib/python3.9/concurrent/futures/_base.py", line 608, in result_iterator yield fs.pop().result() File "/home/tasmia/miniconda/lib/python3.9/concurrent/futures/_base.py", line 445, in result return self.__get_result() File "/home/tasmia/miniconda/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result raise self._exception File "/home/tasmia/miniconda/lib/python3.9/concurrent/futures/thread.py", line 52, in run result = self.fn(*self.args, **self.kwargs) File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 132, in <lambda> subdir_query = lambda url: tuple(SubdirData(Channel(url), repodata_fn=repodata_fn).query( File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 145, in query self.load() File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 210, in load _internal_state = self._load() File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 375, in _load raw_repodata_str = fetch_repodata_remote_request( File "/home/tasmia/miniconda/lib/python3.9/site-packages/conda/core/subdir_data.py", line 806, in fetch_repodata_remote_request raise CondaHTTPError(help_message, conda.exceptions.CondaHTTPError: HTTP 000 CONNECTION FAILED for url 
<https://conda.anaconda.org/conda-forge/linux-64/current_repodata.json> Elapsed: - An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way. 'https://conda.anaconda.org/conda-forge/linux-64' | Maybe this discussion here helps: https://github.com/conda/conda/issues/9948 In summary three fixes are suggested there: Install an older version (4.7.12) of conda / miniconda see here Change file- & directory-permissions of your miniconda-installation (chmod -R 777 ~/.miniconda3) see here Restart wsl (wsl --shutdown) see here | 7 | 12 |
68,463,220 | 2021-7-21 | https://stackoverflow.com/questions/68463220/pandas-importing-error-importerror-cannot-import-name-dtypearg-from-pandas | When I try to import pandas, it throws an error. I cannot import pandas. I re-installed pandas, but it keeps on throwing the same error. I tried running it in a local prompt and in a Jupyter notebook. I think it may conflict with the pip version, so I removed the package from pip. Currently I just have the conda version, but still the same error. What can I do? Traceback (most recent call last): File "havatahmin.py", line 1, in <module> import pandas as pd File "C:\Anaconda\envs\ED\lib\site-packages\pandas\__init__.py", line 144, in <module> from pandas.io.api import ( File "C:\Anaconda\envs\ED\lib\site-packages\pandas\io\api.py", line 8, in <module> from pandas.io.excel import ExcelFile, ExcelWriter, read_excel File "C:\Anaconda\envs\ED\lib\site-packages\pandas\io\excel\__init__.py", line 1, in <module> from pandas.io.excel._base import ExcelFile, ExcelWriter, read_excel File "C:\Anaconda\envs\ED\lib\site-packages\pandas\io\excel\_base.py", line 33, in <module> from pandas.io.parsers import TextParser File "C:\Anaconda\envs\ED\lib\site-packages\pandas\io\parsers\__init__.py", line 1, in <module> from pandas.io.parsers.readers import ( File "C:\Anaconda\envs\ED\lib\site-packages\pandas\io\parsers\readers.py", line 17, in <module> from pandas._typing import ( ImportError: cannot import name 'DtypeArg' from 'pandas._typing' (C:\Anaconda\envs\ED\lib\site-packages\pandas\_typing.py) | I confirm, it is a reproducible bug in pandas==1.3.1. A workaround is to downgrade it to some earlier version, e.g. pip install pandas==1.3.0. The workaround can be tested in build 20210717 of our python (3.8) CUDA-enabled containers: docker run -d --rm --name ml-gpu-py38-cuda112-cust -p 8888:8888 -v /home/mir:/home/jovyan mirekphd/ml-gpu-py38-cuda112-cust:20210717 && docker logs -f ml-gpu-py38-cuda112-cust Has it already been reported to the pandas devs on GitHub? Update: The issue still persists, so I've provided a reproducible example to Pandas devs in #42506. | 29 | 22 |
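A small diagnostic, since the import itself is what fails here: check which pandas version is actually installed without importing it (importlib.metadata requires Python 3.8+), then pin to an unaffected release if it reports 1.3.1.

```python
# Check the installed pandas version without triggering the failing import.
from importlib.metadata import version  # Python 3.8+

print(version("pandas"))  # the regression was reported against 1.3.1
```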
68,495,481 | 2021-7-23 | https://stackoverflow.com/questions/68495481/how-to-map-function-directly-over-list-of-lists | I have built a pixel classifier for images, and for each pixel in the image, I want to define to which pre-defined color cluster it belongs. It works, but at some 5 minutes per image, I think I am doing something unpythonic that can for sure be optimized. How can we map the function directly over the list of lists? #First I convert my image to a list #Below list represents a true image size list1=[[255, 114, 70], [120, 89, 15], [247, 190, 6], [41, 38, 37], [102, 102, 10], [255,255,255]]*3583180 Then we define the clusters to map the colors to and the function to do so (which is taken from the PIL library) #Define colors of interest #Colors of interest RED=[255, 114, 70] DARK_YELLOW=[120, 89, 15] LIGHT_YELLOW=[247, 190, 6] BLACK=[41, 38, 37] GREY=[102, 102, 10] WHITE=[255,255,255] Colors=[RED, DARK_YELLOW, LIGHT_YELLOW, GREY, BLACK, WHITE] #Function to find closes cluster by root and squareroot distance of RGB def distance(c1, c2): (r1,g1,b1) = c1 (r2,g2,b2) = c2 return math.sqrt((r1 - r2)**2 + (g1 - g2) ** 2 + (b1 - b2) **2) What remains is to match every color, and make a new list with matched indexes from the original Colors: Filt_lab=[] #Match colors and make new list with indexed colors for pixel in tqdm(list1): closest_colors = sorted(Colors, key=lambda color: distance(color, pixel)) closest_color = closest_colors[0] for num, clust in enumerate(Colors): if list(clust) == list(closest_color): Filt_lab.append(num) Running a single image takes approximately 5 minutes, which is OK, but likely there is a method in which this time can be greatly reduced? 36%|███▌ | 7691707/21499080 [01:50<03:18, 69721.86it/s] Expected outcome of Filt_lab: [0, 1, 2, 4, 3, 5]*3583180 | You can use the Numba's JIT to speed up the code by a large margin. The idea is to build classified_pixels on the fly by iterating over the colours for each pixel. The colours are stored in a Numpy array where the index is the colour key. The whole computation can run in parallel. This avoid many temporary arrays to be created and written/read in memory and a lot of memory to be allocated. Moreover, the data types can be adapted so that the resulting array is smaller in memory (so written/read faster). 
Here is the final script: import numpy as np import numba as nb @nb.njit('int32[:,::1](int32[:,:,::1], int32[:,::1])', parallel=True) def classify(image, colors): classified_pixels = np.empty((image.shape[0], image.shape[1]), dtype=np.int32) for i in nb.prange(image.shape[0]): for j in range(image.shape[1]): minId = -1 minValue = 256*256 # The initial value is the maximum possible value ir, ig, ib = image[i, j] # Find the color index with the minimum difference for k in range(len(colors)): cr, cg, cb = colors[k] total = (ir-cr)**2 + (ig-cg)**2 + (ib-cb)**2 if total < minValue: minValue = total minId = k classified_pixels[i, j] = minId return classified_pixels # Representative image np.random.seed(42) imarray = np.random.rand(3650,2000,3) * 255 image = imarray.astype(np.int32) # Colors of interest RED = [255, 0, 0] DARK_YELLOW = [120, 89, 15] LIGHT_YELLOW = [247, 190, 6] BLACK = [41, 38, 37] GREY = [102, 102, 10] WHITE = [255, 255, 255] # Build a Numpy array rather than a dict colors = np.array([RED, DARK_YELLOW, LIGHT_YELLOW, GREY, BLACK, WHITE], dtype=np.int32) # Actual classification classified_pixels = classify(image, colors) # Convert array to list cl_pixel_list = classified_pixels.reshape(classified_pixels.shape[0] * classified_pixels.shape[1]).tolist() # Print print(cl_pixel_list[0:10]) This implementation takes about 0.19 second on my 6-core machine. It is about 15 times faster than the last provided answer so far and more than thousand times faster than the initial implementation. Note that about half the time is spent in tolist() since classify function is very fast. | 8 | 10 |
68,529,610 | 2021-7-26 | https://stackoverflow.com/questions/68529610/how-to-set-the-size-of-pic-video-rendered-by-manim | The default size of a picture or video rendered by manim is 1920*1080, but I want to resize it to, for example, 2000*2000, and I don't know how to do that. I browsed the manim docs but haven't found anything useful. Can you help me with it? I tried to modify frame_height, frame_width and frame_size, but nothing happened. https://docs.manim.community/en/stable/reference/manim._config.utils.ManimConfig.html#manim._config.utils.ManimConfig.frame_height | You can use the -r flag and set e.g. -r 2000,2000. I wrote a whole tutorial on how to change the pixel size here: https://flyingframes.readthedocs.io/en/latest/ch5.html | 6 | 12 |
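For completeness, a minimal sketch of doing the same thing programmatically in Manim Community; the config attribute names (pixel_width / pixel_height) are assumed from the ManimCE config object and should be equivalent to passing -r 2000,2000 on the command line.

from manim import Scene, Square, config

config.pixel_width = 2000    # assumed config attributes, same effect as -r 2000,2000
config.pixel_height = 2000

class Demo(Scene):
    def construct(self):
        self.add(Square())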
68,549,442 | 2021-7-27 | https://stackoverflow.com/questions/68549442/how-to-run-mediapipes-pose-landmark-detection-on-a-gpu | I am able to run MediaPipe's Pose Landmark detection on my Windows 10 computer by following this tutorial here: https://google.github.io/mediapipe/solutions/pose.html#python-solution-api, but I'm not sure how I can run this example using a GPU. I know that it is quite fast to run on CPU, but I want to use the model with model_complexity=2 since its most accurate, but this makes it slow on my CPU (around 5 FPS). I have GPU, so if I can run on a GPU it will speed things up a lot. I found these following resources. https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection It mentions GPU in these links but I'm not sure how I can utilize these modules. If someone could provide a link or a quick explanation on how to run MediaPipe's Pose Landmark detection on a GPU, I would greatly appreciate it. | TensorFlow Lite GPU delegate is majorly designed for mobile phone accelerations. See also https://www.tensorflow.org/lite/performance/gpu. Experimentally, the OpenCL backend in TFLite GPU delegate can be supported through Linux platforms. However, we have not verified it on Windows yet. See also https://github.com/tensorflow/tensorflow/issues/40325#issuecomment-642143623. | 7 | 2 |
68,477,792 | 2021-7-22 | https://stackoverflow.com/questions/68477792/pandas-boxplot-contains-content-of-plot-saved-before | I'm plotting some columns of a datafame into a boxplot. Sofar, no problem. As seen below I wrote some stuff and it works. BUT: the second plot contains the plot of the first plot, too. So as you can see I tried it with "= None" or "del value", but it does not work. Putting the plot function outside also don't solves the problem. Whats wrong with my code? Here is an executable example import pandas as pd d1 = {'ff_opt_time': [10, 20, 11, 5, 15 , 13, 19, 25 ], 'ff_count_opt': [30, 40, 45, 29, 35,38,32,41]} df1 = pd.DataFrame(data=d1) d2 = {'ff_opt_time': [1, 2, 1, 5, 1 , 1, 4, 5 ], 'ff_count_opt': [3, 4, 4, 9, 5,3, 2,4]} df2 = pd.DataFrame(data=d2) def evaluate2(df1, df2): def plot(df, output ): boxplot = df.boxplot(rot=45,fontsize=5) fig = boxplot.get_figure() fig.savefig(output + ".pdf") df_ot = pd.DataFrame(columns=['opt_time1' , 'opt_time2']) df_ot['opt_time1'] = df1['ff_opt_time'] df_ot['opt_time2'] = df2['ff_opt_time'] plot(df_ot, "bp_opt_time") df_op = pd.DataFrame(columns=['count_opt1' , 'count_opt2']) df_op['count_opt1'] = df1['ff_count_opt'] df_op['count_opt2'] = df2['ff_count_opt'] plot(df_op, "bp_count_opt_perm") evaluate2(df1, df2) Here is another executable example. I even used other variable names. import pandas as pd d1 = {'ff_opt_time': [10, 20, 11, 5, 15 , 13, 19, 25 ], 'ff_count_opt': [30, 40, 45, 29, 35,38,32,41]} df1 = pd.DataFrame(data=d1) d2 = {'ff_opt_time': [1, 2, 1, 5, 1 , 1, 4, 5 ], 'ff_count_opt': [3, 4, 4, 9, 5,3, 2,4]} df2 = pd.DataFrame(data=d2) def evaluate2(df1, df2): df_ot = pd.DataFrame(columns=['opt_time1' , 'opt_time2']) df_ot['opt_time1'] = df1['ff_opt_time'] df_ot['opt_time2'] = df2['ff_opt_time'] boxplot1 = df_ot.boxplot(rot=45,fontsize=5) fig1 = boxplot1.get_figure() fig1.savefig( "bp_opt_time.pdf") df_op = pd.DataFrame(columns=['count_opt1' , 'count_opt2']) df_op['count_opt1'] = df1['ff_count_opt'] df_op['count_opt2'] = df2['ff_count_opt'] boxplot2 = df_op.boxplot(rot=45,fontsize=5) fig2 = boxplot2.get_figure() fig2.savefig( "bp_count_opt_perm.pdf") evaluate2(df1, df2) | I can see from your code that boxplots: boxplot1 & boxplot2 are in the same graph. What you need to do is instruct that there is going to be two plots. 
This can be achieved either by Create two sub plots using pyplot in matplotlib, this code does the trick fig1, ax1 = plt.subplots() with ax1 specifying boxplot to put in that axes and fig2 specifying boxplot figure Dissolve evaluate2 function and execute the boxplot separately in different cell in the jupyter notebook Solution 1 : Two subplots using pyplot import pandas as pd import matplotlib.pyplot as plt d1 = {'ff_opt_time': [10, 20, 11, 5, 15 , 13, 19, 25 ], 'ff_count_opt': [30, 40, 45, 29, 35,38,32,41]} df1 = pd.DataFrame(data=d1) d2 = {'ff_opt_time': [1, 2, 1, 5, 1 , 1, 4, 5 ], 'ff_count_opt': [3, 4, 4, 9, 5,3, 2,4]} df2 = pd.DataFrame(data=d2) def evaluate2(df1, df2): df_ot = pd.DataFrame(columns=['opt_time1' , 'opt_time2']) df_ot['opt_time1'] = df1['ff_opt_time'] df_ot['opt_time2'] = df2['ff_opt_time'] fig1, ax1 = plt.subplots() boxplot1 = df_ot.boxplot(rot=45,fontsize=5) ax1=boxplot1 fig1 = boxplot1.get_figure() fig1.savefig( "bp_opt_time.pdf") df_op = pd.DataFrame(columns=['count_opt1' , 'count_opt2']) df_op['count_opt1'] = df1['ff_count_opt'] df_op['count_opt2'] = df2['ff_count_opt'] fig2, ax2 = plt.subplots() boxplot2 = df_op.boxplot(rot=45,fontsize=5) fig2 = boxplot2.get_figure() ax2=boxplot2 fig2.savefig( "bp_count_opt_perm.pdf") plt.show() evaluate2(df1, df2) Solution 2: Executing boxplot in different cell Update based on comments : clearing plots Two ways you can clear the plot, plot itself using clf() matplotlib.pyplot.clf() function to clear the current Figure’s state without closing it clear axes using cla() matplotlib.pyplot.cla() function clears the current Axes state without closing the Axes. Simply call plt.clf() function after calling fig.save Read this documentation on how to clear a plot in Python using matplotlib | 4 | 4 |
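Applied to the asker's own helper, a minimal sketch of the clearing advice above: call plt.clf() after saving so the next boxplot starts on an empty figure.

import matplotlib.pyplot as plt

def plot(df, output):
    boxplot = df.boxplot(rot=45, fontsize=5)
    fig = boxplot.get_figure()
    fig.savefig(output + ".pdf")
    plt.clf()   # clear the current figure so later plots do not pile up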
68,542,054 | 2021-7-27 | https://stackoverflow.com/questions/68542054/fastapi-add-long-tasks-to-buffer-and-process-them-one-by-one-while-maintaining | I am trying to set up a FastAPI server that will take as input some biological data, and run some processing on them. Since the processing takes up all the server's resources, queries should be processed sequentially. However, the server should stay responsive and add further requests in a buffer. I've been trying to use the BackgroundTasks module for this, but after sending the second query, the response gets delayed while the task is running. Any help appreciated, and thanks in advance. import os import sys import time from dataclasses import dataclass from fastapi import FastAPI, Request, BackgroundTasks EXPERIMENTS_BASE_DIR = "/experiments/" QUERY_BUFFER = {} app = FastAPI() @dataclass class Query(): query_name: str query_sequence: str experiment_id: str = None status: str = "pending" def __post_init__(self): self.experiment_id = str(time.time()) self.experiment_dir = os.path.join(EXPERIMENTS_BASE_DIR, self.experiment_id) os.makedirs(self.experiment_dir, exist_ok=False) def run(self): self.status = "running" # perform some long task using the query sequence and get a return code # self.status = "finished" return 0 # or another code depending on the final output @app.post("/") async def root(request: Request, background_tasks: BackgroundTasks): query_data = await request.body() query_data = query_data.decode("utf-8") query_data = dict(str(x).split("=") for x in query_data.split("&")) query = Query(**query_data) QUERY_BUFFER[query.experiment_id] = query background_tasks.add_task(process, query) return {"Query created": query, "Query ID": query.experiment_id, "Backlog Length": len(QUERY_BUFFER)} async def process(query): """ Process query and generate data""" ret_code = await query.run() del QUERY_BUFFER[query.experiment_id] print(f'Query {query.experiment_id} processing finished with return code {ret_code}.') @app.get("/backlog/") def return_backlog(): return {f"Currently {len(QUERY_BUFFER)} jobs in the backlog."} | EDIT: The original answer was influenced by testing with httpx.AsyncClient (as flagged might be the case in the original caveat). The test client causes background tasks to block that do not block without the test client. As such, there's a simpler solution provided you don't want to test it with httpx.AsyncClient. The new solution uses uvicorn and then I tested this manually with Postman instead. This solution uses a function as the background task (process) so that it runs outside the main thread. It then schedules a job to run aprocess which will run in the main thread when the event loop gets a chance. The aprocess coroutine is able to then await the run coroutine of your Query as before. Additionally, I've added a time.sleep(10) to the process function to illustrate that even long running non-IO tasks will not prevent your original HTTP session from sending a response back to the client (although this will only work if it is something that releases the GIL. If it's CPU bound though you might want a separate process altogether by using multiprocessing or a separate service). Finally, I've replaced the prints with logging so that they work along with the uvicorn logging. 
import asyncio import os import sys import time from dataclasses import dataclass from fastapi import FastAPI, Request, BackgroundTasks import logging logging.basicConfig(level=logging.INFO, format="%(levelname)-9s %(asctime)s - %(name)s - %(message)s") LOGGER = logging.getLogger(__name__) EXPERIMENTS_BASE_DIR = "/experiments/" QUERY_BUFFER = {} app = FastAPI() loop = asyncio.get_event_loop() @dataclass class Query(): query_name: str query_sequence: str experiment_id: str = None status: str = "pending" def __post_init__(self): self.experiment_id = str(time.time()) self.experiment_dir = os.path.join(EXPERIMENTS_BASE_DIR, self.experiment_id) # os.makedirs(self.experiment_dir, exist_ok=False) # Commented out for testing async def run(self): self.status = "running" await asyncio.sleep(5) # simulate long running query # perform some long task using the query sequence and get a return code # self.status = "finished" return 0 # or another code depending on the final output @app.post("/") async def root(request: Request, background_tasks: BackgroundTasks): query_data = await request.body() query_data = query_data.decode("utf-8") query_data = dict(str(x).split("=") for x in query_data.split("&")) query = Query(**query_data) QUERY_BUFFER[query.experiment_id] = query background_tasks.add_task(process, query) LOGGER.info(f'root - added task') return {"Query created": query, "Query ID": query.experiment_id, "Backlog Length": len(QUERY_BUFFER)} def process(query): """ Schedule processing of query, and then run some long running non-IO job without blocking the app""" asyncio.run_coroutine_threadsafe(aprocess(query), loop) LOGGER.info(f"process - {query.experiment_id} - Submitted query job. Now run non-IO work for 10 seconds...") time.sleep(10) # simulate long running non-IO work, does not block app as this is in another thread - provided it is not cpu bound. LOGGER.info(f'process - {query.experiment_id} - wake up!') async def aprocess(query): """ Process query and generate data """ ret_code = await query.run() del QUERY_BUFFER[query.experiment_id] LOGGER.info(f'aprocess - Query {query.experiment_id} processing finished with return code {ret_code}.') @app.get("/backlog/") def return_backlog(): return {f"return_backlog - Currently {len(QUERY_BUFFER)} jobs in the backlog."} if __name__ == "__main__": import uvicorn uvicorn.run("scratch_26:app", host="127.0.0.1", port=8000) ORIGINAL ANSWER: *A caveat on this answer - I've tried testing this with `httpx.AsyncClient`, which might account for different behavior compared to deploying behind guvicorn.* From what I can tell (and I am very open to correction on this), BackgroundTasks actually need to complete prior to an HTTP response being sent. This is not what the Starlette docs or the FastAPI docs say, but it appears to be the case, at least while using the httpx AsyncClient. Whether you add a a coroutine (which is executed in the main thread) or a function (which gets executed in it's own side thread) that HTTP response is blocked from being sent until the background task is complete. If you want to await a long running (asyncio friendly) task, you can get around this problem by using a wrapper function. The wrapper function adds the real task (a coroutine, since it will be using await) to the event loop and then returns. Since this is very fast, the fact that it "blocks" no longer matters (assuming a few milliseconds doesn't matter). 
The real task then gets executed in turn (but after the initial HTTP response has been sent), and although it's on the main thread, the asyncio part of the function will not block. You could try this: @app.post("/") async def root(request: Request, background_tasks: BackgroundTasks): ... background_tasks.add_task(process_wrapper, query) ... async def process_wrapper(query): loop = asyncio.get_event_loop() loop.create_task(process(query)) async def process(query): """ Process query and generate data""" ret_code = await query.run() del QUERY_BUFFER[query.experiment_id] print(f'Query {query.experiment_id} processing finished with return code {ret_code}.') Note also that you'll also need to make your run() function a coroutine by adding the async keyword since you're expecting to await it from your process() function. Here's a full working example that uses httpx.AsyncClient to test it. I've added the fmt_duration helper function to show the lapsed time for illustrative purposes. I've also commented out the code that creates directories, and simulated a 2 second query duration in the run() function. import asyncio import os import sys import time from dataclasses import dataclass from fastapi import FastAPI, Request, BackgroundTasks from httpx import AsyncClient EXPERIMENTS_BASE_DIR = "/experiments/" QUERY_BUFFER = {} app = FastAPI() start_ts = time.time() @dataclass class Query(): query_name: str query_sequence: str experiment_id: str = None status: str = "pending" def __post_init__(self): self.experiment_id = str(time.time()) self.experiment_dir = os.path.join(EXPERIMENTS_BASE_DIR, self.experiment_id) # os.makedirs(self.experiment_dir, exist_ok=False) # Commented out for testing async def run(self): self.status = "running" await asyncio.sleep(2) # simulate long running query # perform some long task using the query sequence and get a return code # self.status = "finished" return 0 # or another code depending on the final output @app.post("/") async def root(request: Request, background_tasks: BackgroundTasks): query_data = await request.body() query_data = query_data.decode("utf-8") query_data = dict(str(x).split("=") for x in query_data.split("&")) query = Query(**query_data) QUERY_BUFFER[query.experiment_id] = query background_tasks.add_task(process_wrapper, query) print(f'{fmt_duration()} - root - added task') return {"Query created": query, "Query ID": query.experiment_id, "Backlog Length": len(QUERY_BUFFER)} async def process_wrapper(query): loop = asyncio.get_event_loop() loop.create_task(process(query)) async def process(query): """ Process query and generate data""" ret_code = await query.run() del QUERY_BUFFER[query.experiment_id] print(f'{fmt_duration()} - process - Query {query.experiment_id} processing finished with return code {ret_code}.') @app.get("/backlog/") def return_backlog(): return {f"{fmt_duration()} - return_backlog - Currently {len(QUERY_BUFFER)} jobs in the backlog."} async def test_me(): async with AsyncClient(app=app, base_url="http://example") as ac: res = await ac.post("/", content="query_name=foo&query_sequence=42") print(f"{fmt_duration()} - [{res.status_code}] - {res.content.decode('utf8')}") res = await ac.post("/", content="query_name=bar&query_sequence=43") print(f"{fmt_duration()} - [{res.status_code}] - {res.content.decode('utf8')}") content = "" while not content.endswith('0 jobs in the backlog."]'): await asyncio.sleep(1) backlog_results = await ac.get("/backlog") content = backlog_results.content.decode("utf8") print(f"{fmt_duration()} - test_me - 
content: {content}") def fmt_duration(): return f"Progress time: {time.time() - start_ts:.3f}s" loop = asyncio.get_event_loop() print(f'starting loop...') loop.run_until_complete(test_me()) duration = time.time() - start_ts print(f'Finished. Duration: {duration:.3f} seconds.') in my local environment if I run the above I get this output: starting loop... Progress time: 0.005s - root - added task Progress time: 0.006s - [200] - {"Query created":{"query_name":"foo","query_sequence":"42","experiment_id":"1627489235.9300923","status":"pending","experiment_dir":"/experiments/1627489235.9300923"},"Query ID":"1627489235.9300923","Backlog Length":1} Progress time: 0.007s - root - added task Progress time: 0.009s - [200] - {"Query created":{"query_name":"bar","query_sequence":"43","experiment_id":"1627489235.932097","status":"pending","experiment_dir":"/experiments/1627489235.932097"},"Query ID":"1627489235.932097","Backlog Length":2} Progress time: 1.016s - test_me - content: ["Progress time: 1.015s - return_backlog - Currently 2 jobs in the backlog."] Progress time: 2.008s - process - Query 1627489235.9300923 processing finished with return code 0. Progress time: 2.008s - process - Query 1627489235.932097 processing finished with return code 0. Progress time: 2.041s - test_me - content: ["Progress time: 2.041s - return_backlog - Currently 0 jobs in the backlog."] Finished. Duration: 2.041 seconds. I also tried making process_wrapper a function so that Starlette executes it in a new thread. This works the same way, just use run_coroutine_threadsafe instead of create_task i.e. def process_wrapper(query): loop = asyncio.get_event_loop() asyncio.run_coroutine_threadsafe(process(query), loop) If there is some other way to get a background task to run without blocking the HTTP response I'd love to find out how, but absent that this wrapper solution should work. | 10 | 6 |
68,491,834 | 2021-7-22 | https://stackoverflow.com/questions/68491834/handle-client-side-cancellation-in-grpc-python-asyncio | Question first, context below. How can I perform some server-side action (eg, cleanup) based on a cancellation of an RPC from the client with an async gRPC python server? In my microservice, I have an asyncio gRPC server whose main RPCs are bidirectional streams. On the client side (which is also using asyncio), when I cancel something, it raises an asyncio.CancelledError which is caught and not reraised by the grpc core: https://github.com/grpc/grpc/blob/master/src/python/grpcio/grpc/_cython/_cygrpc/aio/server.pyx.pxi#L679 except asyncio.CancelledError: _LOGGER.debug('RPC cancelled for servicer method [%s]', _decode(rpc_state.method())) So I cannot rely on catching the asyncio.CancelledError in my own code, because it's caught beforehand and not reraised. The shared context is supposed to contain information as to whether the RPC was canceled on the client side, by calling .cancel() from the RPC call and being able to see if it was canceled by calling .cancelled(): https://grpc.github.io/grpc/python/grpc_asyncio.html#shared-context abstract cancel() Cancels the RPC. Idempotent and has no effect if the RPC has already terminated. Returns A bool indicates if the cancellation is performed or not. Return type bool abstract cancelled() Return True if the RPC is cancelled. The RPC is cancelled when the cancellation was requested with cancel(). Returns A bool indicates whether the RPC is cancelled or not. Return type bool However, this shared context is not attached to the context variable given to the RPC on the server side by the gRPC generated code. (I cannot run context.cancelled() or context.add_done_callback; they're not present) So, again, the question: How can I perform some server-side action (eg, cleanup) based on a cancellation of an RPC from the client with an async gRPC python server? | thanks for the post. We are aware of this issue, and adding support for those two methods is on our roadmap. For a short term solution, you can use try-catch and decorators. The client-side-cancellation is observed as an asyncio.CancelledError in method handler. Here is a modified helloworld example: Server code: class Greeter(helloworld_pb2_grpc.GreeterServicer): async def SayHello( self, request: helloworld_pb2.HelloRequest, context: grpc.aio.ServicerContext) -> helloworld_pb2.HelloReply: try: await asyncio.sleep(4) except asyncio.CancelledError: print('RPC cancelled') raise return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name) Client code: async def run() -> None: async with grpc.aio.insecure_channel('localhost:50051') as channel: stub = helloworld_pb2_grpc.GreeterStub(channel) call = stub.SayHello(helloworld_pb2.HelloRequest(name='you')) await asyncio.sleep(2) call.cancel() print(await call.code()) | 4 | 6 |
68,543,704 | 2021-7-27 | https://stackoverflow.com/questions/68543704/why-do-i-get-method-describe-failed-401-unauthorized | Let me explain my problem, I am trying to access different channels in a DVR system. I have successfully gotten access to a single camera (channel 1) by using opencv as such: public_link = 'rtsp://test:[email protected]/cam/realmonitor' cap = cv2.VideoCapture(public_link, cv2.CAP_FFMPEG) The problem is I can't access the other channels with these parameters: public_link = 'rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0' cap = cv2.VideoCapture(public_link, cv2.CAP_FFMPEG) I've tried the following links: rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0 rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=1 rtsp://192.168.1.48/cam/realmonitor?channel=3&subtype=0&authbasic=dGVzdDp0ZXN0 I get this following error: [rtsp @ 00000201ce582cc0] method DESCRIBE failed: 401 Unauthorized I've noticed that even if I test with this URL (rtsp://test:[email protected]/blablabla) it works just fine! (ONLY Channel #1) but when I insert the symbol '=' into the URL string, I get the above error. It's really frustrating, Any sort of help would be much appreciated. PS: the user 'test' has admin privileges in the system. I've tried to run the test with plain ffmpeg command like such: ffmpeg -loglevel debug -i "rtsp://test:[email protected]/cam/monitor?channel=3&subtype=0" ./folder/output.m3u8 I get the following error: PS C:\Users\cjhou> ffmpeg -loglevel debug -i "rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0" .\folder\output.m3u8 ffmpeg version 4.4-full_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers built with gcc 10.2.0 (Rev6, Built by MSYS2 project) configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libglslang --enable-vulkan --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint libavutil 56. 70.100 / 56. 70.100 libavcodec 58.134.100 / 58.134.100 libavformat 58. 76.100 / 58. 76.100 libavdevice 58. 13.100 / 58. 13.100 libavfilter 7.110.100 / 7.110.100 libswscale 5. 9.100 / 5. 9.100 libswresample 3. 9.100 / 3. 9.100 libpostproc 55. 9.100 / 55. 9.100 Splitting the commandline. Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'. Reading option '-i' ... 
matched as input url with argument 'rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0'. Reading option '.\folder\output.m3u8' ... matched as output url. Finished splitting the commandline. Parsing a group of options: global. Applying option loglevel (set logging level) with argument debug. Successfully parsed a group of options. Parsing a group of options: input url rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0. Successfully parsed a group of options. Opening an input file: rtsp://houssem:[email protected]/cam/realmonitor?channel=3&subtype=0. [tcp @ 000001882b592240] No default whitelist set [tcp @ 000001882b592240] Original list of addresses: [tcp @ 000001882b592240] Address 192.168.1.48 port 554 [tcp @ 000001882b592240] Interleaved list of addresses: [tcp @ 000001882b592240] Address 192.168.1.48 port 554 [tcp @ 000001882b592240] Starting connection attempt to 192.168.1.48 port 554 [tcp @ 000001882b592240] Successfully connected to 192.168.1.48 port 554 [rtsp @ 000001882b58f080] method DESCRIBE failed: 401 Unauthorized [rtsp @ 000001882b58f080] Cseq: 3 Server: Rtsp Server 960*576*30*4096 WWW-Authenticate: Digest realm="Surveillance Server", nonce="44976150" rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0: Server returned 401 Unauthorized (authorization failed) With this command ffplay "rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0", I get this output: PS C:\Users\cjhou> ffplay "rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0" ffplay version 4.4-full_build-www.gyan.dev Copyright (c) 2003-2021 the FFmpeg developers built with gcc 10.2.0 (Rev6, Built by MSYS2 project) configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libglslang --enable-vulkan --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint libavutil 56. 70.100 / 56. 70.100 libavcodec 58.134.100 / 58.134.100 libavformat 58. 76.100 / 58. 76.100 libavdevice 58. 13.100 / 58. 13.100 libavfilter 7.110.100 / 7.110.100 libswscale 5. 9.100 / 5. 9.100 libswresample 3. 9.100 / 3. 9.100 libpostproc 55. 9.100 / 55. 
9.100 [rtsp @ 000001d413d2f640] method DESCRIBE failed: 401 Unauthorized rtsp://test:[email protected]/cam/realmonitor?channel=3&subtype=0: Server returned 401 Unauthorized (authorization failed) nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0 By the way, I am using this device: Device Name: Digital Video Record Model Number: 16-CHANNEL Software Version: XVR_HI3521A_16_v6.1.52.1 Date: Dec 19 2016 14:36:39 Hope this helps! | Well, after a lot of research into the DVR I am using, it turns out it is a Longse-type DVR. I don't know the exact model number, but knowing it was a Longse device was enough: the problem was simply that I was using the wrong URL. For this DVR the camera URLs should follow the format rtsp://[username]:[password]@[IP_ADDRESS]:[PORT]/[channelID][SubTypeID]. With that, I got access to all of the cameras connected to the DVR, for example rtsp://test:[email protected]:554/30 (channel #3, subtype 0) or rtsp://test:[email protected]:554/61 (channel #6, subtype 1). The "iSpyConnect Agent" software was a huge help for extracting all available URLs for a given model (I entered the model as Longse: Unlisted). | 4 | 5 |
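A minimal OpenCV sketch using the corrected URL scheme from the answer; USER and PASS are placeholders, the IP and port come from the question's setup, and /30 means channel 3, subtype 0.

import cv2

cap = cv2.VideoCapture('rtsp://USER:PASS@192.168.1.48:554/30', cv2.CAP_FFMPEG)
ok, frame = cap.read()
print(ok, None if frame is None else frame.shape)
cap.release()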
68,530,363 | 2021-7-26 | https://stackoverflow.com/questions/68530363/opentelemetry-python-how-to-instanciate-a-new-span-as-a-child-span-for-a-given | My goal is to perform tracing of the whole process of my application through several component. I am using GCP and Pub/Sub message queue to communicate information between components (developped in Python). I am currently trying to keep the same root trace between component A and component B by creating a new span as a child span of my root trace. Here is a small diagram: Component A ---> Pub/Sub message ---> component B (create the root trace) (contain information) (new span for root trace) I have a given trace_id and span_id of my parent that I can transmit through Pub/Sub but I can't figure out how to declare a new span as a child of this last. All I managed to do is to link a new trace to the parent one but it is not the behavior I am looking for. Has someone already tried to do something like that ? Regards, | It's called trace context propagation and there are multiple formats such w3c trace context, jaeger, b3 etc... https://github.com/open-telemetry/opentelemetry-specification/blob/b46bcab5fb709381f1fd52096a19541370c7d1b3/specification/context/api-propagators.md#propagators-distribution. You will have to use one of the propagator's inject/extract methods for this. Here is the simple example using W3CTraceContext propagator. from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import (BatchSpanProcessor, ConsoleSpanExporter) from opentelemetry.trace.propagation.tracecontext import \ TraceContextTextMapPropagator trace.set_tracer_provider(TracerProvider()) trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(ConsoleSpanExporter())) tracer = trace.get_tracer(__name__) prop = TraceContextTextMapPropagator() carrier = {} # Injecting the context into carrier and send it over with tracer.start_as_current_span("first-span") as span: prop.inject(carrier=carrier) print("Carrier after injecting span context", carrier) # Extracting the remote context from carrier and starting a new span under same trace. ctx = prop.extract(carrier=carrier) with tracer.start_as_current_span("next-span", context=ctx): pass | 5 | 11 |
68,551,032 | 2021-7-27 | https://stackoverflow.com/questions/68551032/is-there-a-way-to-use-torch-nn-dataparallel-with-cpu | I'm trying to change some PyTorch code so that it can run on the CPU. The model was trained with torch.nn.DataParallel() so when I load the pre-trained model and try using it I must use nn.DataParallel() which I am currently doing like this: device = torch.device("cuda:0") net = nn.DataParallel(net, device_ids=[0]) net.load_state_dict(torch.load(PATH)) net.to(device) However after I switched my torch device to cpu like this: device = torch.device('cpu') net = nn.DataParallel(net, device_ids=[0]) net.load_state_dict(torch.load(PATH)) net.to(device) I got this error: File "C:\My\Program\win-py362-venv\lib\site-packages\torch\nn\parallel\data_parallel.py", line 156, in forward "them on device: {}".format(self.src_device_obj, t.device)) RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu I'm assuming that it's still looking for CUDA because that's what device_ids is set to but is there a way to make it use the CPU? This post from the PyTorch repo makes me think that I can but it doesn't explain how. If not is there any other way to use a model trained with DataParallel on your CPU? | When you use torch.nn.DataParallel() it implements data parallelism at the module level. According to the doc: The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module. So even though you are doing .to(torch.device('cpu')) it is still expecting to pass the data to a GPU. However since DataParallel is a container you can bypass it and get just the original module by doing this: net = net.module.to(device) Now it will access the original module you defined before you applied the DataParallel container. | 5 | 10 |
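A slightly fuller sketch of the accepted fix applied to the asker's loading code; MyModel and PATH are placeholders for the original architecture and checkpoint path.

import torch
import torch.nn as nn

device = torch.device('cpu')
net = MyModel()                      # placeholder: the original network definition
net = nn.DataParallel(net)           # wrap so the 'module.'-prefixed keys still match
net.load_state_dict(torch.load(PATH, map_location=device))
net = net.module.to(device)          # unwrap the container; the plain module runs on CPU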
68,552,109 | 2021-7-27 | https://stackoverflow.com/questions/68552109/can-mark-rule-be-extended-outside-the-chart-with-altair | Is there a way to make a rule mark longer without disrupting the axes of a chart? If I have this: random.seed(0) df = pd.DataFrame({'x':[i for i in range(1,21)],'y':random.sample(range(1,50), 20)}) chart = alt.Chart(df).mark_area().encode(x='x',y='y') ruler = alt.Chart(pd.DataFrame({'x':[5]})).mark_rule().encode(x='x') chart+ruler But I want this : | You can set an explicit y-domain and then set clip=False inside mark_rule, but you also need to define the y-range of the rule since the default is to stretch over the entire plot: import altair as alt import pandas as pd import random random.seed(0) df = pd.DataFrame({'x':[i for i in range(1,21)],'y':random.sample(range(1,50), 20)}) chart = alt.Chart(df).mark_area().encode(x='x', y=alt.Y('y', scale=alt.Scale(domain=(0, 50)))) ruler = alt.Chart(pd.DataFrame({'x':[5], 'y': [-10], 'y2': [50]})).mark_rule(clip=False, fill='black').encode(x='x', y='y', y2='y2') chart+ruler | 4 | 2 |
68,551,327 | 2021-7-27 | https://stackoverflow.com/questions/68551327/split-column-of-pandas-dataframe-based-on-multiple-characters | I have a pandas dataframe which looks like this : Un_ID P_ID segment 0 Q8TDU6 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) 1 P63092 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) 2 Q8TDU6 7cfm 1( 22- 41), 2( 51- 72), 3( 86- 108) I want to split the third column'segment' into three columns i.e TM,starting,ending Un_ID P_ID segment TM starting ending 0 Q8TDU6 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) TM1 16 41 1 P63092 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) TM1 16 41 2 Q8TDU6 7cfm 1( 22- 41), 2( 51- 72), 3( 86- 108) TM1 22 41 0 Q8TDU6 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) TM2 51 73 1 P63092 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) TM2 51 73 2 Q8TDU6 7cfm 1( 22- 41), 2( 51- 72), 3( 86- 108) TM2 51 72 0 Q8TDU6 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) TM3 86 108 1 P63092 7bw0 1( 16- 41), 2( 51- 73), 3( 86- 108) TM3 86 108 2 Q8TDU6 7cfm 1( 22- 41), 2( 51- 72), 3( 86- 108) TM3 86 108 I tried following code df[['TM','starting','ending']] = df.segment.apply(lambda x: pd.Series(str(x).split(",")) But,I am not sure how to manipulate the above code to get the dataframe as I want.. | Try: import re r = re.compile(r"(\d+)\(\s*(\d+)-\s*(\d+)\)") df["segment"] = df["segment"].apply(lambda x: r.findall(x)) df = df.explode("segment") df[["TM", "starting", "ending"]] = df.pop("segment").apply(pd.Series) df = df.sort_values(by="TM") df["TM"] = "TM" + df["TM"].astype(str) print(df) Prints: Un_ID P_ID TM starting ending 0 Q8TDU6 7bw0 TM1 16 41 1 P63092 7bw0 TM1 16 41 2 Q8TDU6 7cfm TM1 22 41 0 Q8TDU6 7bw0 TM2 51 73 1 P63092 7bw0 TM2 51 73 2 Q8TDU6 7cfm TM2 51 72 0 Q8TDU6 7bw0 TM3 86 108 1 P63092 7bw0 TM3 86 108 2 Q8TDU6 7cfm TM3 86 108 | 4 | 4 |
68,536,546 | 2021-7-26 | https://stackoverflow.com/questions/68536546/using-pipelines-with-a-local-model | I am trying to use a simple pipeline offline. I am only allowed to download files directly from the web. I went to https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main and downloaded all the files into a local folder C:\\Users\\me\\mymodel. However, when I try to load the model I get a strange error: from transformers import pipeline classifier = pipeline(task= 'sentiment-analysis', model= "C:\\Users\\me\\mymodel", tokenizer = "C:\\Users\\me\\mymodel") ValueError: unable to parse C:\Users\me\mymodel\modelcard.json as a URL or as a local path What is the issue here? | The solution was slightly indirect: load the model on a computer with internet access, save it with save_pretrained(), transfer the resulting folder to the offline machine, and point the pipeline call at that path. The folder will contain all the expected files. | 5 | 2 |
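A minimal sketch of the steps in the answer (first part on the online machine, second part offline); the local folder path is just the example from the question.

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

name = "distilbert-base-uncased-finetuned-sst-2-english"
local_dir = r"C:\Users\me\mymodel"

# Online machine: download once, then serialize everything the pipeline needs.
AutoModelForSequenceClassification.from_pretrained(name).save_pretrained(local_dir)
AutoTokenizer.from_pretrained(name).save_pretrained(local_dir)

# Offline machine: point the pipeline at the saved folder.
classifier = pipeline("sentiment-analysis", model=local_dir, tokenizer=local_dir)
print(classifier("I love this!"))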
68,520,738 | 2021-7-25 | https://stackoverflow.com/questions/68520738/identify-index-of-all-elements-in-a-list-comparing-with-another-list | For instance I have a list A: A = [100, 200, 300, 200, 400, 500, 600, 400, 700, 200, 500, 800] And I have list B: B = [100, 200, 200, 500, 600, 200, 500] I need to identify the index of elements in B with comparison to A I have tried: list_index = [A.index(i) for i in B] It returns: [0, 1, 1, 5, 6, 1, 5] But what I need is: [0, 1, 3, 5, 6, 9, 10] How can I solve it? | You can iterate through the enumeration of A to keep track of the indices and yield the values where they match: A = [100,200,300,200,400,500,600,400,700,200,500,800] B = [100,200,200,500,600,200,500] def get_indices(A, B): a_it = enumerate(A) for n in B: for i, an in a_it: if n == an: yield i break list(get_indices(A, B)) # [0, 1, 3, 5, 6, 9, 10] This avoids using index() multiple times. | 16 | 15 |
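An alternative sketch (not the accepted answer) that precomputes the positions of every value in A, so B is matched without rescanning A for each element.

from collections import defaultdict, deque

A = [100, 200, 300, 200, 400, 500, 600, 400, 700, 200, 500, 800]
B = [100, 200, 200, 500, 600, 200, 500]

positions = defaultdict(deque)
for i, value in enumerate(A):
    positions[value].append(i)          # indices of each value, in order of appearance

result = [positions[value].popleft() for value in B]
print(result)   # [0, 1, 3, 5, 6, 9, 10]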
68,501,158 | 2021-7-23 | https://stackoverflow.com/questions/68501158/python-fetching-urllib3-request-headers | We are injecting tracing information into request headers of all the http request calls in our API client library which is implemented based on urllib3 def _init_jaeger_tracer(): '''Jaeger tracer initialization''' config = Config( config={ 'sampler': { 'type': 'const', 'param': 1, }, }, service_name="session" ) return config.new_tracer() class APIObject(rest.APIObject): '''Class for injecting traces into urllib3 request headers''' def __init__(self, configuration): print("RESTClientObject child class called####") self._tracer = None super().__init__(configuration) self._tracer = _init_jaeger_tracer() # pylint: disable=W0221 def request(self, method, url, *args, **kwargs): lower_method = method.lower() with self._tracer.start_active_span('requests.{}'.format(lower_method)) as scope: span = scope.span span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT) span.set_tag(tags.COMPONENT, 'request') span.set_tag(tags.HTTP_METHOD, lower_method) span.set_tag(tags.HTTP_URL, url) headers = kwargs.setdefault('headers', {}) self._tracer.inject(span.context, Format.HTTP_HEADERS, headers) return r After such implementation, urllib3 request headers look like this, headers = { 'Content-Type': 'application/json', 'User-Agent': 'API-Generator/1.0.0/python', # ** 'uber-trace-id': '30cef3e816482516:1a4fed2c4863b2f6:0:1' # ** } Now we have to obtain trace-id from the injected request header. In python "requests" library we have "response.request.headers" attribute to return request headers In urllib3 i could not find a way return request headers for further processing Could any of you share some thoughts Now from this request headers, we need to fetch the 'uber-trace-id' and its value for further building URL. | The documentation of urllib3.response.HTTPResponse says it's: Backwards-compatible with http.client.HTTPResponse [...] That's stdlib class which has getheader method described as: Return the value of the header name, or default if there is no header matching name. If there is more than one header with the name name, return all of the values joined by ', '. If default is any iterable other than a single string, its elements are similarly returned joined by commas. Hence in your case it's something like this. import urllib3 http = urllib3.PoolManager() response = http.request('GET', 'https://python.org/') response.getheader('uber-trace-id', 'UNKNOWN') Update urllib3 doesn't have HTTP request object representation. An HTTP request is represented in the formal arguments of RequestMethods.request . Hence, you need to store the tracing identifier yourself in your request override after it was injected by jaeger_client. Storing it as an attribute of the response object is probably a good idea: from jaeger_client.constants import TRACE_ID_HEADER class APIObject(rest.APIObject): def request(self, *args, **kwargs) -> urllib3.response.BaseHTTPResponse: ... r._request_tracing_id = headers[TRACE_ID_HEADER] return r | 6 | 1 |
68,532,800 | 2021-7-26 | https://stackoverflow.com/questions/68532800/pandas-read-excel-parsing-excel-datetime-field-correctly | I have the following sample data stored in Excel file CLAIM CODE1 AGE DATE 7538 359 71 28/11/2019 7538 359 71 28/11/2019 540 428 73 16/10/2019 540 428 73 16/10/2019 605 1670 40 04/12/2019 740 134 55 24/12/2019 When importing to my Jupyter Notebook using the pandas.read_excel API, the date field does not get properly formated: excel = pd.read_excel('Libro.xlsx') Then I am getting the DATE field different as I have it formatted in the excel file. What argument should I apply to read_excel in order to display the DATE column formatted as I have it in the excel file? .info() method, outputs the column as int64 I already tried using the pd.to_datetime function, but I am getting strange results: Find the sample excel file I am using for my project in the following link sample_raw_data Here is some code that can be used to reproduce the DataFrame that is read in from excel: excel = pd.DataFrame({ 'CLAIM': {0: 7538, 1: 7538, 2: 540, 3: 540, 4: 4605, 5: 1740, 6: 7605}, 'CODE1': {0: 359, 1: 359, 2: 428, 3: 428, 4: 1670, 5: 134, 6: 415}, 'AGE': {0: 71, 1: 71, 2: 73, 3: 73, 4: 40, 5: 55, 6: 56}, 'DATE': {0: 43797, 1: 43797, 2: 43754, 3: 43754, 4: 43803, 5: 43823, 6: 43818} }) | To convert this Excel Date into datetime64[ns] use to_datetime to get unit in days with offset from origin '1899-12-30': excel = pd.read_excel('Libro.xlsx') excel['DATE'] = pd.to_datetime(excel['DATE'], unit='d', origin='1899-12-30') excel: CLAIM CODE1 AGE DATE 0 7538 359 71 2019-11-28 1 7538 359 71 2019-11-28 2 540 428 73 2019-10-16 3 540 428 73 2019-10-16 4 4605 1670 40 2019-12-04 5 1740 134 55 2019-12-24 6 7605 415 56 2019-12-19 info: # Column Non-Null Count Dtype --- ------ -------------- ----- 0 CLAIM 7 non-null int64 1 CODE1 7 non-null int64 2 AGE 7 non-null int64 3 DATE 7 non-null datetime64[ns] See Why is 1899-12-30 the zero date in Access / SQL Server instead of 12/31? for more information about why this is the base date. A converter for DATE can also be used with read_excel: excel = pd.read_excel( 'Libro.xlsx', converters={ 'DATE': lambda x: pd.to_datetime(x, unit='d', origin='1899-12-30') } ) info: # Column Non-Null Count Dtype --- ------ -------------- ----- 0 CLAIM 7 non-null int64 1 CODE1 7 non-null int64 2 AGE 7 non-null int64 3 DATE 7 non-null datetime64[ns] | 5 | 14 |
68,521,944 | 2021-7-25 | https://stackoverflow.com/questions/68521944/split-a-multifasta-file-to-files-with-the-same-number-of-accesion-numbers | I have a file that has thousands of accession numbers: and looks like this.. >NC_033829.1 Kallithea virus isolate DrosEU46_Kharkiv_2014, complete genome AGTCAGCAACGTCGATGTGGCGTACAATTTCTTGATTACATTTTTGTTCCTAACAAAATGTTGATATACT >NC_020414.2 Escherichia phage UAB_Phi78, complete genome TAGGCGTGTGTCAGGTCTCTCGGCCTCGGCCTCGCCGGGATGTCCCCATAGGGTGCCTGTGGGCGCTAGG If want to split this to multiple files with one accession number each then I can use the following code awk -F '|' '/^>/ {F=sprintf("%s.fasta",$2); print > F;next;} {print >> F;}' < yourfile.fa I have a file with thousands of accession numbers (aka >NC_*) and want to split it such as each files contains ~ 5000 accession numbers. Since I am new to awk/bash/python i struggle to find a neat solution Any idea or comment are appreciated | It wasn't clear from your question that an "accession number" is unique per input block (don't assume the people reading your question know anything about your domain - it's all just lines of text to us). It would have been clearer if you had phrased your question to just say you want 5000 new-line-separated blocks per output file rather than 5000 accession numbers. Having seen the answer you posted, it's now clear that this is what you should be using: awk -v RS= -v ORS='\n\n' ' (NR%5000) == 1 { close(out); out="myseq"(++n_seq)".fa" } { print > out } ' my_sequences.fa | 5 | 3 |
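A plain-Python alternative to the awk one-liner, splitting on '>' headers and writing roughly 5000 records per output file; the file names mirror the awk answer and are only examples.

chunk = 5000
records, current = [], []
with open("my_sequences.fa") as fh:
    for line in fh:
        if line.startswith(">") and current:
            records.append(current)
            current = []
        current.append(line)
    if current:
        records.append(current)

for n, start in enumerate(range(0, len(records), chunk), 1):
    with open(f"myseq{n}.fa", "w") as out:
        for rec in records[start:start + chunk]:
            out.writelines(rec)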
68,536,339 | 2021-7-26 | https://stackoverflow.com/questions/68536339/numpy-returns-unexpected-results-of-analytical-function | When I try to compute d_j(x), defined below, the algorithm based on Numpy results in unexpected values. I believe it has something to do with numerical precision, but I'm not sure how to solve this. The function is: where and The code fails when j>10. For example, when j=16, the function d_j(x) returns wrong values from around x=1, while the expected result is a smooth, almost periodic curve. Graph for 0<x<1.5: The code is: #%% import numpy as np import matplotlib.pyplot as plt #%% L = 1.5 # Length [m] def eta(j): if j == 1: return 1.875 / L if j > 1: return (j - 0.5) * np.pi / L def D(etaj): etajL = etaj * L return (np.cos(etajL) + np.cosh(etajL)) / (np.sin(etajL) - np.sinh(etajL)) def d(x, etaj): etajx = etaj * x return np.sin(etajx) - np.sinh(etajx) + D(etaj) * (np.cos(etajx) - np.cosh(etajx)) #%% aux = np.linspace(0, L, 2000) plt.plot(aux, d(aux, eta(16))) plt.show() | TL;DR: The problem comes from numerical instabilities. First of all, here is a simplified code on which the exact same problem appear (with different values): x = np.arange(0, 50, 0.1) plt.plot(np.sin(x) - np.sinh(x) - np.cos(x) + np.cosh(x)) plt.show() Here is another example where the problem does not appear: x = np.arange(0, 50, 0.1) plt.plot((np.sin(x) - np.cos(x)) + (np.cosh(x) - np.sinh(x))) plt.show() While the two code are mathematically equivalent with real numbers, they are not equivalent because of fixed-size floating-point precision. Indeed, np.sinh(x) and np.cosh(x) result both in huge values when x is big compared to np.sin(x) and np.cos(x). Unfortunately, when two fixed-size floating-point numbers are added together, there is a loss of precision. The loss of precision can be huge (if not critical) when the order of magnitude of the added numbers are very different. For example, in Python and on a mainstream platform (so with IEEE-754 64-bit floating-point numbers), 0.1 + 1e20 == 1e20 is true due to the limited precision of the number representation. Thus (0.1 + 1e20) - 1e20 == 0.0 is also true, while 0.1 + (1e20 - 1e20) == 0.0 is not true (the resulting value is 0.1). The floating-point addition is neither associative nor commutative. In this specific case, the accuracy can reach a threshold where there is not significant number anymore in the result. For more information about floating-point precision, please read this post. The point is you should be very careful when you subtract floating-point numbers. A good solution is to put parenthesis so that added/subtracted values have the same order of magnitude. Variable-sized and higher precision help a bit too. However, the best solution is to analyses the numerical stability of your algorithm. For example, studying condition number of the numerical operations used in your algorithm is a good start. Here, a relatively good solution is just to use the second code instead of the first. | 4 | 7 |
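One further rewrite worth noting on top of the regrouping advice: since cosh(x) - sinh(x) equals exp(-x) exactly, that pair can be computed without any cancellation at all. A small sketch based on the answer's example:

import numpy as np

x = np.arange(0, 50, 0.1)
naive  = np.sin(x) - np.cos(x) + (np.cosh(x) - np.sinh(x))
stable = np.sin(x) - np.cos(x) + np.exp(-x)   # algebraically identical, numerically safe
print(np.max(np.abs(naive - stable)))         # blows up once cosh/sinh dwarf float64 precision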
68,536,392 | 2021-7-26 | https://stackoverflow.com/questions/68536392/why-does-pytorch-autograd-need-a-scalar | I am working through "Deep Learning for Coders with fastai & Pytorch". Chapter 4 introduces the autograd function from the PyTorch library on a trivial example. x = tensor([3.,4.,10.]).requires_grad_() def f(q): return sum(q**2) y = f(x) y.backward() My question boils down to this: the result of y = f(x) is tensor(125., grad_fn=AddBackward0), but what does that even mean? Why would I sum the values of three completely different inputs? I get that using .backward() in this case is shorthand for .backward(tensor[1.,1.,1.]) in this scenario, but I don't see how summing 3 unrelated numbers in a list helps get the gradient for anything. What am I not understanding? I'm not looking for a grad-level explanation here. The subtitle for the book I'm using is AI Applications Without a Ph.D. My experience with gradients is from school is that I should be getting a FUNCTION back, but I understand that isn't the case with Autograd. A graph of this short example would be helpful, but the ones I see online usually include too many parameters or weights and biases to be useful, my mind gets lost in the paths. | TLDR; the derivative of a sum of functions is the sum of their derivatives Let x be your input vector made of x_i (where i in [0,n]), y = x**2 and L = sum(y_i). You are looking to compute dL/dx, a vector of the same size as x whose components are the dL/dx_j (where j in [0,n]). For j in [0,n], dL/dx_j is simply dy_j/dx_j (derivative of the sum is the sum of derivates and only one of them is different to zero), which is d(x_j**2)/dx_j, i.e. 2*x_j. Therefore, dL/dx = [2*x_j where j in [0,n]]. This is the result you get in x.grad when either computing the gradient of x as: y = f(x) y.backward() or the gradient of each components of x separately: y = x**2 y.backward(torch.ones_like(x)) | 5 | 4 |
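A small check, using the book's own tensor, that the gradient really is 2*x as derived above:

import torch

x = torch.tensor([3., 4., 10.], requires_grad=True)
y = (x ** 2).sum()          # L = sum(x_i**2), the scalar produced by f(x)
y.backward()
print(x.grad)                               # tensor([ 6.,  8., 20.])
print(torch.equal(x.grad, 2 * x.detach()))  # True: dL/dx_j = 2*x_j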
68,533,000 | 2021-7-26 | https://stackoverflow.com/questions/68533000/how-to-attach-a-managed-policy-to-a-an-iam-role-using-the-cdk | I'm trying to create a service role for AWS CodeBuild. I can create a role like this: from aws_cdk import aws_iam as iam role = iam.Role( self, 'CodebuildServiceRole', assumed_by=iam.ServicePrincipal('codebuild.amazonaws.com'), max_session_duration=cdk.Duration.hours(1), ) Now I need to attach the Amazon-provided AWSCodeBuildAdminAccess policy to the role. How can I do this using the CDK? | You can get access to the policy like this: AWSCodeBuildAdminAccess = iam.ManagedPolicy.from_aws_managed_policy_name('AWSCodeBuildAdminAccess') And attach it to your role like this: role.add_managed_policy(AWSCodeBuildAdminAccess) | 4 | 1 |
68,531,077 | 2021-7-26 | https://stackoverflow.com/questions/68531077/python-pandas-how-to-combine-or-merge-two-difrent-size-dataframes-based-on-date | I like to merge or combine two dataframes of different size df1 and df2, based on a range of dates, for example: df1: Date Open High Low 2021-07-01 8.43 8.44 8.22 2021-07-02 8.36 8.4 8.28 2021-07-06 8.22 8.23 8.06 2021-07-07 8.1 8.19 7.98 2021-07-08 8.07 8.1 7.91 2021-07-09 7.97 8.11 7.92 2021-07-12 8 8.2 8 2021-07-13 8.15 8.18 8.06 2021-07-14 8.18 8.27 8.12 2021-07-15 8.21 8.26 8.06 2021-07-16 8.12 8.23 8.07 df2: Day of month Revenue Earnings 01 45000 4000 07 43500 5000 12 44350 6000 15 39050 7000 results should be something like this: combination: Date Open High Low Earnings 2021-07-01 8.43 8.44 8.22 4000 2021-07-02 8.36 8.4 8.28 4000 2021-07-06 8.22 8.23 8.06 4000 2021-07-07 8.1 8.19 7.98 5000 2021-07-08 8.07 8.1 7.91 5000 2021-07-09 7.97 8.11 7.92 5000 2021-07-12 8 8.2 8 6000 2021-07-13 8.15 8.18 8.06 6000 2021-07-14 8.18 8.27 8.12 6000 2021-07-15 8.21 8.26 8.06 7000 2021-07-16 8.12 8.23 8.07 7000 The Earnings column is merged based on a range of date, how can I do this in python pandas? | Try merge_asof #df1.date=pd.to_datetime(df1.date) df1['Day of month'] = df1.Date.dt.day out = pd.merge_asof(df1, df2, on ='Day of month', direction = 'backward') out Out[213]: Date Open High Low Day of month Revenue Earnings 0 2021-07-01 8.43 8.44 8.22 1 45000 4000 1 2021-07-02 8.36 8.40 8.28 2 45000 4000 2 2021-07-06 8.22 8.23 8.06 6 45000 4000 3 2021-07-07 8.10 8.19 7.98 7 43500 5000 4 2021-07-08 8.07 8.10 7.91 8 43500 5000 5 2021-07-09 7.97 8.11 7.92 9 43500 5000 6 2021-07-12 8.00 8.20 8.00 12 44350 6000 7 2021-07-13 8.15 8.18 8.06 13 44350 6000 8 2021-07-14 8.18 8.27 8.12 14 44350 6000 9 2021-07-15 8.21 8.26 8.06 15 39050 7000 10 2021-07-16 8.12 8.23 8.07 16 39050 7000 | 5 | 2 |
68,530,492 | 2021-7-26 | https://stackoverflow.com/questions/68530492/create-hierarchy-column-in-pandas | I have got a dataframe like this: part part_parent 0 part1 NaN 1 part2 part1 2 part3 part2 3 part4 part3 4 part5 part2 I need to add an additional column hierarchy like this: part part_parent hierarchy 0 part1 NaN part1 1 part2 part1 part1/part2/ 2 part3 part2 part1/part2/part3/ 3 part4 part3 part1/part2/part3/part4 4 part5 part2 part1/part2/part5 Dict to create input/output dataframes: from numpy import nan df1 = pd.DataFrame({'part': {0: 'part1', 1: 'part2', 2: 'part3', 3: 'part4', 4: 'part5'}, 'part_parent': {0: nan, 1: 'part1', 2: 'part2', 3: 'part3', 4: 'part2'}}) df2 = pd.DataFrame({'part': {0: 'part1', 1: 'part2', 2: 'part3', 3: 'part4', 4: 'part5'}, 'part_parent': {0: nan, 1: 'part1', 2: 'part2', 3: 'part3', 4: 'part2'}, 'hierarchy': {0: 'part1', 1: 'part1/part2/', 2: 'part1/part2/part3/', 3: 'part1/part2/part3/part4', 4: 'part1/part2/part5'}}) NOTE: I've seen a couple of threads related to NetworkX to solve this issue but I'm not able to do so. Any help is appreciated. | Here is a solution using networkx. It treats nan as the root node, and finds the shortest path to each node based on that. import networkx as nx def find_path(net, source, target): # Adjust this as needed (in case multiple paths are present) # or error handling in case a path doesn't exist path = nx.shortest_path(net, source, target) return "/".join(list(path)[1:]) net = nx.from_pandas_edgelist(df1, "part", "part_parent") df1["hierarchy"] = [find_path(net, nan, node) for node in df1["part"]] part part_parent hierarchy 0 part1 NaN part1 1 part2 part1 part1/part2 2 part3 part2 part1/part2/part3 3 part4 part3 part1/part2/part3/part4 4 part5 part2 part1/part2/part5 The formatting of the path is contrived for this example, if more robust error-handling or multiple path formatting is needed, the path finder will have to be adjusted. | 5 | 4 |
68,522,656 | 2021-7-25 | https://stackoverflow.com/questions/68522656/convert-pandas-dataframe-to-a-dictionary-with-first-column-as-key | I have a Pandas Dataframe : A || B || C x1 x [x,y] x2 a [b,c,d] and I am trying to make a dictionary to that looks like: {x1: {B : x, c : [x,y]}, x2: {B: a, C:[b,c,d}} I have tried the to_dict function but that changes the entire dataframe into a dictionary. I am kind of lost on how to iterate onto the first column and make it the key and the rest of the df a dictionary as the value of that key. | Try: x = df.set_index("A").to_dict("index") print(x) Prints: {'x1': {'B': 'x', 'C': ['x', 'y']}, 'x2': {'B': 'a', 'C': ['b', 'c', 'd']}} | 6 | 8 |
68,521,514 | 2021-7-25 | https://stackoverflow.com/questions/68521514/how-to-select-another-type-of-font-with-qfont | I am trying to assign different types of text fonts to my application with PyQt5, but I don't know how to assign a different one to the standard one, for example in my application I could only assign it 'Roboto', but if I want to change to Roboto-MediumItalic, I don't know how to specify that font type to it, i'm newbie to python and pyqt5 QFontDatabase.addApplicationFont("Static/fonts/Roboto-Light.ttf") label2.setFont(QFont('Roboto',12)) folders | You have to use the styles and QFontDatabase to use Roboto-MediumItalic. You can also set the italic weight style through QFont. import os import sys from pathlib import Path from PyQt5.QtCore import Qt, QDir from PyQt5.QtGui import QFont, QFontDatabase from PyQt5.QtWidgets import QApplication, QLabel CURRENT_DIRECTORY = Path(__file__).resolve().parent def load_fonts_from_dir(directory): families = set() for fi in QDir(directory).entryInfoList(["*.ttf"]): _id = QFontDatabase.addApplicationFont(fi.absoluteFilePath()) families |= set(QFontDatabase.applicationFontFamilies(_id)) return families def main(): app = QApplication(sys.argv) font_dir = CURRENT_DIRECTORY / "Static" / "fonts" families = load_fonts_from_dir(os.fspath(font_dir)) print(families) db = QFontDatabase() styles = db.styles("Roboto") print(styles) font = db.font("Roboto", "Medium Italic", 12) # OR # font = QFont("Roboto", pointSize=12, weight=QFont.Medium, italic=True) label = QLabel(alignment=Qt.AlignCenter) label.setFont(font) label.setText("Hello world!!") label.resize(640, 480) label.show() sys.exit(app.exec_()) if __name__ == "__main__": main() | 5 | 5 |
68,481,660 | 2021-7-22 | https://stackoverflow.com/questions/68481660/django-admin-page-not-found-in-custom-view | I encountered very annoying problem. I have created my own AdminSite like this: from django.contrib import admin from django.template.response import TemplateResponse from django.urls import path class MyAdminSite(admin.AdminSite): def get_urls(self): urls = super().get_urls() my_urls = [ path('statistics/', self.admin_view(self.statistics), name='statistics'), ] return urls + my_urls def statistics(self, request): context = dict( self.each_context(request), ) return TemplateResponse(request, 'admin/statistics.html', context) I have created my own AdminConfig and assigned it into my INSTALLED_APPS, created html file, then in my root urls added it like this: urlpatterns = [ url(r'^admin/', admin.site.urls), ] I have logged in into my admin page and when I'm trying to open localhost:8000/admin/statistics I receive this: Page not found (404) Request URL: http://localhost:8000/admin/statistics/ Why this is happening? Did I miss something? Update I have added print on my get_urls and it showed this.(I removed unnecessary urls): [ <URLPattern '' [name='index']>, <URLPattern 'login/' [name='login']>, <URLPattern 'logout/' [name='logout']>, <URLPattern 'password_change/' [name='password_change']>, <URLPattern 'password_change/done/' [name='password_change_done']>, <URLPattern 'autocomplete/' [name='autocomplete']>, <URLPattern 'jsi18n/' [name='jsi18n']>, <URLPattern 'r/<int:content_type_id>/<path:object_id>/' [name='view_on_site']>, <URLResolver <URLPattern list> (None:None) 'auth/group/'>, <URLPattern '(?P<url>.*)$'>, <URLPattern 'statistics/' [name='statistics']> ] Using python manage.py show_urls | grep statistics shows me this: /admin/statistics/ project.admin.statistics admin:statistics | Well. I'm going to answer to my own question, in order to help other people. The solution of this problem was to switch returning url addition like this: return my_urls + urls my_urls comes first and the other urls. Why this is happening? Because urls' last path contains some kind of big wildcard url that just overwrites my own. That is why I needed to add first my new urls. Thanks for @lucascavalcante | 7 | 7 |
68,500,704 | 2021-7-23 | https://stackoverflow.com/questions/68500704/why-should-i-use-normalised-units-in-numerical-integration | I was simulating the solar system (Sun, Earth and Moon). When I first started working on the project, I used the base units: meters for distance, seconds for time, and metres per second for velocity. Because I was dealing with the solar system, the numbers were pretty big, for example the distance between the Earth and Sun is 150·10⁹ m. When I numerically integrated the system with scipy.solve_ivp, the results were completely wrong. Here is an example of Earth and Moon trajectories. But then I got a suggestion from a friend that I should use standardised units: astronomical unit (AU) for distance and years for time. And the simulation started working flawlessly! My question is: Why is this generally valid advice for problems such as mine? (Mind that this is not about my specific problem, which was already solved, but rather why the solution worked.) | Most, if not all, integration modules work best out of the box if: your dynamical variables have the same order of magnitude; that order of magnitude is 1; the smallest time scale of your dynamics also has the order of magnitude 1. This typically fails for astronomical simulations, where the orders of magnitude vary and values as well as time scales are often large in typical units. The reason for the above behaviour of integrators is that they use step-size adaption, i.e., the integration step is adjusted to keep the estimated error at a defined level. The step-size adaption in turn is governed by a lot of parameters like absolute tolerance, relative tolerance, minimum time step, etc. You can usually tweak these parameters, but if you don’t, there need to be some default values, and these default values are chosen with the above setup in mind. Digression You might ask yourself: Can these parameters not be chosen more dynamically? As a developer and maintainer of an integration module, I would roughly expect that introducing such automatisms has the following consequences: About twenty in a thousand users will not run into problems like yours. About fifty in a thousand users (including the above) miss an opportunity to learn rudimentary knowledge about how integrators work and about reading documentation. About one in a thousand users will run into a horrible problem with the automatisms that is much more difficult to solve than the above. I need to introduce new parameters governing the automatisms that are even harder to grasp for the average user. I spend a lot of time devising and implementing the automatisms. | 4 | 8 |
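A minimal sketch of the rescaling idea from the answer, applied to a two-body Earth-Sun problem with solve_ivp; the constants and initial conditions are illustrative assumptions, not taken from the original post:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Work in astronomical units and years so positions, velocities and the
# characteristic time scale (one orbit ~ 1) are all of order 1.
AU = 1.496e11                         # metres per astronomical unit
YEAR = 3.156e7                        # seconds per year
GM_SUN = 1.327e20 * YEAR**2 / AU**3   # m^3/s^2 converted to AU^3/yr^2 (~ 4*pi^2)

def rhs(t, state):
    # state = [x, y, vx, vy] of the Earth around a fixed Sun, in AU and AU/yr
    r, v = state[:2], state[2:]
    a = -GM_SUN * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

y0 = [1.0, 0.0, 0.0, 2 * np.pi]       # roughly circular orbit at 1 AU
sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-9, atol=1e-9)
```

With everything of order 1, the default tolerances and initial step of the integrator behave sensibly without any tuning.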
68,513,540 | 2021-7-24 | https://stackoverflow.com/questions/68513540/how-to-install-and-run-virtualenv-on-macos-correctly | Hi I'm a beginner of python, I don't remember when and how I installed python3.8 on my Macbook air, only knew the installed path: % which python /usr/bin/python % which python3 /usr/local/bin/python3 The pip command cannot not be found but pip3 is ok. Today I want to install virtaulenv: % sudo -H pip3 install virtualenv WARNING: Ignoring invalid distribution - (/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages) Successfully installed virtualenv-20.6.0 I can use "pip3 show virtualenv" to know the info: % pip3 show virtualenv Name: virtualenv Version: 20.6.0 Summary: Virtual Python Environment builder Home-page: https://virtualenv.pypa.io/ Author: Bernat Gabor Author-email: [email protected] License: MIT Location: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages Requires: six, platformdirs, backports.entry-points-selectable, distlib, filelock Required-by: But when I use "virtualenv" I got command not found message then I "pip3 uninstall" it. I searched for this and got a tip to use "easy_install" to install virtualenv. After installed I can execute the command, but got some error message: % virtualenv Traceback (most recent call last): File "/usr/local/bin/virtualenv", line 6, in <module> from pkg_resources import load_entry_point File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3241, in <module> @_call_aside File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3225, in _call_aside f(*args, **kwargs) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3254, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 583, in _build_master ws.require(__requires__) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 900, in require needed = self.resolve(parse_requirements(requirements)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 786, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'importlib-resources>=1.0' distribution was not found and is required by virtualenv It seems doesn't work and I try the "sudo pip3 uninstall virtualenv" to uninstall it successfully, but it seems not really be removed: % which virtualenv /usr/local/bin/virtualenv I have no idea about this, could you help me? I just want to run virtualenv normally and create flask project ... PS. I can "sudo pip3 install flask" and "pip3 show flask" but still cannot run flask cammand (command not found), what should I do? Thanks a lot! | try being explicit in the version of python you are using and install using -m pip instead python3 -m pip install virtualenv python3 -m virtualenv venv # create a new venv in ./venv source ./venv/bin/activate # activate your new venv often times the pip/pip3 just isnt pointing at the same python version you think you are using... by using this technique you are sure to be using the correct python and pip | 11 | 20 |
68,506,950 | 2021-7-24 | https://stackoverflow.com/questions/68506950/can-you-combine-the-addition-assignment-operator-with-the-walrus-operator | This is the code I write right now: a = 1 if (a := a + 1) == 2: print(a) I am wondering if something like this exists: a = 1 if (a +:= 1) == 2: print(a) | PEP 572 defined the new walrus operator. The section discussing differences between assignment statements and expressions explicitly states: Augmented assignment is not supported: total += tax # Equivalent: (total := total + tax) In the section explaining why = is still necessary with :=, we find: The two forms have different flexibilities. The := operator can be used inside a larger expression; the = statement can be augmented to += and its friends, can be chained, and can assign to attributes and subscripts. This strongly implies that there is no intention of supporting a merge of walrus and in-place operators of any kind. | 10 | 12 |
68,512,089 | 2021-7-24 | https://stackoverflow.com/questions/68512089/statements-must-be-separated-by-newlines-or-semicolons | I'm literally using the same code as the official Betfair Developer example, the only difference is that I'm putting the APP_KEY_HERE and SESSION_TOKEN data. But unlike the site, Visual Studio Code is giving me an error and a crash in the terminal. Terminal response: line 11 print json.dumps(json.loads(response.text), indent=3) ^ SyntaxError: invalid syntax https://docs.developer.betfair.com/display/1smk3cen4v3lu3yomq5qye0ni/Getting+Started What am I missing and what do I need to change to solve this problem? | In python 3.x, you have to enclose the arguments in (). print(json.dumps(json.loads(response.text), indent=3)) | 15 | 32 |
68,500,213 | 2021-7-23 | https://stackoverflow.com/questions/68500213/some-numbers-are-automatically-formatted-using-rich-console-how-to-prevent-this | The following code from rich.console import Console console = Console() console.print("ciao-16S-123") will print the number 123 highlighted (in blue, in my terminal). This happens on many strings with numbers, what could be the problem that causes this unwanted formatting, and how to prevent it? | As per Rich documentation, "Rich can apply styles to patterns in text which you print() or log(). With the default settings, Rich will highlight things such as numbers, strings, collections, booleans, None, and a few more exotic patterns such as file paths, URLs and UUIDs." You can disable it like this: console.print("ciao-16S-123", highlight=False) You can also define a custom highlighter better suited for your needs. | 11 | 13 |
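A minimal sketch of the custom-highlighter route mentioned in the answer; the pattern name and style below are illustrative choices, not Rich defaults:

```python
from rich.console import Console
from rich.highlighter import RegexHighlighter
from rich.theme import Theme

class IdHighlighter(RegexHighlighter):
    """Only highlight identifiers like 'ciao-16S-123'; leave bare numbers alone."""
    base_style = "custom."
    highlights = [r"(?P<id>\w+-\w+-\w+)"]

console = Console(
    highlighter=IdHighlighter(),
    theme=Theme({"custom.id": "bold cyan"}),
)
console.print("ciao-16S-123")
```

This replaces the default highlighter entirely, so only the patterns you list get styled.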
68,507,229 | 2021-7-24 | https://stackoverflow.com/questions/68507229/how-can-i-remove-www-from-original-url-through-urllib-parse-in-python | Original URL ▶ https://www.exeam.org/index.html I want to extract exeam.org/ or exeam.org from the original URL. To do this, I used urllib, the most powerful parser in Python that I know, but unfortunately urllib (url.scheme, url.netloc ...) couldn't give me the type of format I wanted. | To extract the domain name from a URL using urllib: from urllib.parse import urlparse surl = "https://www.exam.org/index.html" urlparsed = urlparse(surl) # network location from parsed url print(urlparsed.netloc) # ParseResult Object print(urlparsed) This will give you www.exam.org, but you want to further decompose this to the registered domain if you are after just the exam.org part. So besides doing simple splits, which could be sufficient, you could also use a library such as tldextract, which knows how to parse subdomains, suffixes and more: from tldextract import extract ext = extract(surl) print(ext.registered_domain) This will produce: exam.org | 5 | 7 |
68,506,555 | 2021-7-24 | https://stackoverflow.com/questions/68506555/pandas-if-else-condition-on-multiple-columns | I have a df as: df: col1 col2 col3 col4 col5 0 1.36 4.31 7.66 2 2 1 2.62 3.30 2.48 2 1 2 5.19 3.58 1.62 0 2 3 2.06 3.16 3.50 1 1 4 2.19 2.98 3.38 1 1 I want col6 to return 1 when (col4 > 1 and col5 > 1) else 0 and col7 to return 1 when (col4 > 1 and col5 > 1 and col4 + col5 > 2) else 0 I am trying df.loc[df['col4'] > 0, df['col5'] > 0, 'col6'] = '1' however I am getting the error: File "pandas\_libs\index.pyx", line 269, in pandas._libs.index.IndexEngine.get_indexer File "pandas\_libs\hashtable_class_helper.pxi", line 5247, in pandas._libs.hashtable.PyObjectHashTable.lookup TypeError: unhashable type: 'Series' How can I perform this operation? | You can simply use bitwise operators: df['col6'] = ((df["col4"]>1) & (df["col5"]>1))*1 df['col7'] = ((df["col4"]>1) & (df["col5"]>1) & (df['col4']+df['col5']>2))*1 >>> df col1 col2 col3 col4 col5 col6 col7 0 1.36 4.31 7.66 2 2 1 1 1 2.62 3.30 2.48 2 1 0 0 2 5.19 3.58 1.62 0 2 0 0 3 2.06 3.16 3.50 1 1 0 0 4 2.19 2.98 3.38 1 1 0 0 | 4 | 1 |
68,505,320 | 2021-7-23 | https://stackoverflow.com/questions/68505320/what-do-empty-curly-braces-mean-in-a-string | So I have been browsing through online sites to read a file line by line and I come to this part of this code: print("Line {}: {}".format(linecount, line)) I am quite confused as to what is happening here. I know that it is printing something, but it shows: "Line{}" I do not understand what this means. I know that you could write this: foo = "hi" print(f"{foo} bob") But I don't get why there are empty brackets. | Empty braces are equivalent to numeric braces numbered from 0: >>> '{}: {}'.format(1,2) '1: 2' >>> '{0}: {1}'.format(1,2) '1: 2' Just a shortcut. But if you use numerals you can control the order: >>> '{1}: {0}'.format(1,2) '2: 1' Or the number of times something is used: >>> '{0}: {0}, {1}: {1}'.format(1,2) '1: 1, 2: 2' Which you cannot do with empty braces. | 4 | 4 |
68,505,216 | 2021-7-23 | https://stackoverflow.com/questions/68505216/modulenotfounderror-no-module-named-app-routes | So I'm learning fastapi right now and I was trying to separate my project into multiple files but when I do I get this error. ModuleNotFoundError: No module named 'app.routes' I have read This multiple times and I'm pretty sure I did everything right can anyone tell me what I did wrong? app │ main.py │ __init__.py │ └───routes auth.py __init__.py main.py from fastapi import FastAPI from app.routes import auth app = FastAPI() app.include_router(auth.router) auth.py from fastapi import APIRouter router = APIRouter() @router.get("/test") async def test(): return {"test": "test"} I ran uvicorn main:app --reload | Your uvicorn command is slightly off. From whatever directory is above app run -- uvicorn app.main:app --reload | 9 | 9 |
68,504,268 | 2021-7-23 | https://stackoverflow.com/questions/68504268/how-to-type-hint-a-tuple-of-callables-when-the-default-is-empty | I'm type hinting like this: some_kwarg: Tuple[Callable] = () but mypy raises error: Incompatible default for argument "some_kwarg" (default has type "Tuple[]", argument has type "Tuple[Callable[..., Any]]") I wouldn't want to put a dummy callable in the default kwarg so what's the right thing to do? | You are type annotating it to accept a tuple of size exactly 1. Use: Tuple[Callable, ...] To indicate a homogeneous tuple of any size. | 4 | 8 |
68,503,708 | 2021-7-23 | https://stackoverflow.com/questions/68503708/convert-a-list-into-a-dict-where-each-key-is-nested-under-the-next-one | I want to convert this list: [1,2,3,4,5] Into this dict: { 1 : { 2 : { 3 : { 4 : 5 }}}} This doesn't sound too complicated but I'm stumped when it's time to assign a value to a key deeper than the surface. I have a recursive function for finding how deep my dictionary goes but I don't know how to tell my algorithm to "add the new key here". | You are looking for a recursive function that builds a dictionary with the first list element as a key and the transformed rest of the list as the value: l = [1, 2, 3, 4, 5] def l2d(l): if len(l) < 2: # Not good raise Exception("The list is too short") if len(l) == 2: # Base case return {l[0]: l[1]} # Recursive case return {l[0]: l2d(l[1:])} l2d(l) # {1: {2: {3: {4: 5}}}} Another interesting approach is to use functools.reduce: from functools import reduce reduce(lambda tail,head: {head: tail}, reversed(l)) # {1: {2: {3: {4: 5}}}} It progressively applies a dictionary construction function to the first element of the list and the rest of it. The list is reversed first, so the construction naturally starts at the end. If the list is too short, the function returns its first element, which may or may not be desirable. The "reduce" solution is MUCH FASTER, by about two orders of magnitude. The bottom line: avoid recursion. | 4 | 6 |
68,498,945 | 2021-7-23 | https://stackoverflow.com/questions/68498945/parsing-yaml-file-with-in-python | How can we parse a file which contains multiple configs and which are separated by --- in python. I've config file which looks like File name temp.yaml %YAML 1.2 --- name: first cmp: - Some: first top: top_rate: 16000 audio_device: "pulse" --- name: second components: - name: second parameters: always_on: true timeout: 200000 When I read it with import yaml with open('./temp.yaml', 'r') as f: temp = yaml.load(f) I am getting following error temp = yaml.load(f) Traceback (most recent call last): File "temp.py", line 4, in <module> temp = yaml.load(f) File "/home/pranjald/.local/lib/python3.6/site-packages/yaml/__init__.py", line 114, in load return loader.get_single_data() File "/home/pranjald/.local/lib/python3.6/site-packages/yaml/constructor.py", line 41, in get_single_data node = self.get_single_node() File "/home/pranjald/.local/lib/python3.6/site-packages/yaml/composer.py", line 43, in get_single_node event.start_mark) yaml.composer.ComposerError: expected a single document in the stream in "./temp.yaml", line 3, column 1 but found another document in "./temp.yaml", line 10, column 1 | Your input is composed of multiple YAML documents. For that you will need yaml.load_all() or better yet yaml.safe_load_all(). (The latter will not construct arbitrary Python objects outside of data-like structures such as list/dict.) import yaml with open('temp.yaml') as f: temp = yaml.safe_load_all(f) As hinted at by the error message, yaml.load() is strict about accepting only a single YAML document. Note that safe_load_all() returns a generator of Python objects which you'll need to iterate over. >>> gen = yaml.safe_load_all(f) >>> next(gen) {'name': 'first', 'cmp': [{'Some': 'first', 'top': {'top_rate': 16000, 'audio_device': 'pulse'}}]} >>> next(gen) {'name': 'second', 'components': [{'name': 'second', 'parameters': {'always_on': True, 'timeout': 200000}}]} | 5 | 4 |
68,497,930 | 2021-7-23 | https://stackoverflow.com/questions/68497930/attributeerror-dlsymrtld-default-attachdebuggertracing-symbol-not-found | I'm trying to use the debugger in vs code (mac os big sur) with no success. I'm on the m1 macbook air. vs code insiders version. I've tried with all these python interpreters: 3.9.6 - /opt/homebrew/bin/python3 3.9.1 - /usr/local/bin/python3 3.8.2 - /usr/bin/python3 2.7.16 - /usr/bin/python I've tried with this launch.json file: { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" } ] } This is the error that I always get: /usr/bin/env /usr/local/bin/python3 /Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/launcher 50295 -- /Users/arch0n/Desktop/py-w3schools/2.py Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module> cli.main() File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main run() File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 267, in run_file start_debugging(target) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 257, in start_debugging debugpy.connect(options.address, access_token=options.adapter_access_token) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/common/compat.py", line 208, in kwonly_f return f(*args, **kwargs) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/__init__.py", line 135, in connect return api.connect(address, access_token=access_token) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/api.py", line 143, in debug log.reraise_exception("{0}() failed:", func.__name__, level="info") File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/api.py", line 141, in debug return func(address, settrace_kwargs, **kwargs) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/api.py", line 276, in connect _settrace(host=host, port=port, client_access_token=access_token, **settrace_kwargs) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/../debugpy/server/api.py", line 47, in _settrace return pydevd.settrace(*args, **kwargs) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/_vendored/pydevd/pydevd.py", line 2696, in settrace _locked_settrace( File 
"/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/_vendored/pydevd/pydevd.py", line 2819, in _locked_settrace py_db.enable_tracing(py_db.trace_dispatch, apply_to_all_threads=True) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/_vendored/pydevd/pydevd.py", line 1039, in enable_tracing pydevd_tracing.SetTrace(thread_trace_func) File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/_vendored/pydevd/pydevd_tracing.py", line 83, in SetTrace if set_trace_to_threads(tracing_func, thread_idents=[thread.get_ident()], create_dummy_thread=False) == 0: File "/Users/arch0n/.vscode-insiders/extensions/ms-python.python-2021.7.1053846006/pythonFiles/lib/python/debugpy/_vendored/pydevd/pydevd_tracing.py", line 349, in set_trace_to_threads result = lib.AttachDebuggerTracing( File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ctypes/__init__.py", line 387, in __getattr__ func = self.__getitem__(name) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ctypes/__init__.py", line 392, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: dlsym(RTLD_DEFAULT, AttachDebuggerTracing): symbol not found arch0n is my user directory. | I fixed the same issue I was having by rolling back to the prior version of ms-python.python extension. The problem seems to be in the extension and has yet to be resolved. | 5 | 5 |
68,499,904 | 2021-7-23 | https://stackoverflow.com/questions/68499904/how-to-define-a-python-protocol-that-is-callable-with-any-number-of-keyword-argu | How do I define a Python protocol for a type that is: Callable with any number of keyword arguments of any type that returns a value of a specified type This is my attempt: from typing import Any, Protocol, TypeVar T = TypeVar("T", covariant=True) class Operation(Protocol[T]): def __call__(self, **kwargs: Any) -> T: pass # some example functions that should be a structural sub-type of "Operation[str]" def sumint(*, x: Any, y: Any) -> str: return f"{x} + {y} = {x + y}" def greet(*, name: Any = "World") -> str: return f"Hello {name}" # an example function that takes an "Operation[str]" as an argument def apply_operation(operation: Operation[str], **kwargs: Any) -> str: return operation(**kwargs) if __name__ == "__main__": print(apply_operation(sumint, x=2, y=2)) # prints: 2 + 2 = 4 print(apply_operation(greet, name="Stack")) # prints: Hello Stack However, mypy produces the error: example.py:26: error: Argument 1 to "apply_operation" has incompatible type "Callable[[NamedArg(Any, 'x'), NamedArg(Any, 'y')], str]"; expected "Operation[str]" example.py:28: error: Argument 1 to "apply_operation" has incompatible type "Callable[[DefaultNamedArg(Any, 'name')], str]"; expected "Operation[str]" Found 2 errors in 1 file (checked 1 source file) What am I doing wrong? How do I make mypy happy? | I can't answer your question about exactly why MyPy isn't happy — but here's a different approach that MyPy does seem to be happy with: from typing import Any, Callable, TypeVar T = TypeVar("T", covariant=True) Operation = Callable[..., T] # some example functions that should be a structural sub-type of "Operation[str]" def sumint(*, x: int = 1, y: int = 2) -> str: return f"{x} + {y} = {x + y}" def greet(*, name: str = "World") -> str: return f"Hello {name}" # an example function that takes an "Operation[str]" as an argument def apply_operation(operation: Operation[str], **kwargs: Any) -> str: return operation(**kwargs) if __name__ == "__main__": print(apply_operation(sumint, x=2, y=2)) # prints: 2 + 2 = 4 print(apply_operation(greet, name="Stack")) # prints: Hello Stack | 5 | 3 |
68,492,454 | 2021-7-22 | https://stackoverflow.com/questions/68492454/add-new-column-to-numpy-array-as-a-function-of-the-rows | I have a 2D Numpy Array, and I want to apply a function to each of the rows and form a new column (the new first column) with the results. For example, let M = np.array([[1,0,1], [0,0,1]]) and I want to apply the sum function on each row and get array([[2,1,0,1], [1,0,0,1]]) So the first column is [2,1], the sum of the first row and the second row. | You can generally append arrays to each other using np.concatenate when they have similar dimensionality. You can guarantee that sum will retain dimensionality regardless of axis using the keepdims argument: np.concatenate((M.sum(axis=1, keepdims=True), M), axis=1) This is equivalent to np.concatenate((M.sum(1)[:, None], M), axis=1) | 4 | 5 |
68,490,745 | 2021-7-22 | https://stackoverflow.com/questions/68490745/how-to-display-the-full-text-of-a-column-in-pandas | I have a data frame that contains a column with long texts. To demonstrate how it looks (note the ellipses "..." where text should continue): id text group 123 My name is Benji and I ... 2 The above text is actually longer than that phrase. For example it could be: My name is Benji and I am living in Kansas. The actual text is much longer than this. When I try to subset the text column only, it only shows the partial text with the dots "...". I need to make sure full text is shown for text summarization later. But I'm not sure how to show the full text when selecting the text column. My df['text'] output looks something like this: 1 My name is Benji and I ... 2 He went to the creek and ... How do I show the full text and without the index number? | You can convert to a list and join with newlines ("\n"): import pandas as pd text = """The bullet pierced the window shattering it before missing Danny's head by mere millimeters. Being unacquainted with the chief raccoon was harming his prospects for promotion. There were white out conditions in the town; subsequently, the roads were impassable. The hawk didn’t understand why the ground squirrels didn’t want to be his friend. Nobody loves a pig wearing lipstick.""" df = pd.DataFrame({"id": list(range(5)), "text": text.splitlines()}) Original output: print(df["text"]) Yields: 0 The bullet pierced the window shattering it be... 1 Being unacquainted with the chief raccoon was ... 2 There were white out conditions in the town; s... 3 The hawk didn’t understand why the ground squi... 4 Nobody loves a pig wearing lipstick. Desired output: print("\n".join(df["text"].to_list())) Yields: The bullet pierced the window shattering it before missing Danny's head by mere millimeters. Being unacquainted with the chief raccoon was harming his prospects for promotion. There were white out conditions in the town; subsequently, the roads were impassable. The hawk didn’t understand why the ground squirrels didn’t want to be his friend. Nobody loves a pig wearing lipstick. | 7 | 2 |
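If you would rather keep the pandas display instead of joining into one string, an alternative (not part of the accepted answer) is to raise the column-width limit and print the column without its index; a sketch assuming df is the frame from the question:

```python
import pandas as pd

# Let pandas show full cell contents instead of truncating with "..."
pd.set_option("display.max_colwidth", None)

# Print the column without the index numbers
print(df["text"].to_string(index=False))
```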
68,489,765 | 2021-7-22 | https://stackoverflow.com/questions/68489765/what-is-the-correct-way-to-calculate-the-norm-1-norm-and-2-norm-of-vectors-in | I have a matrix: t = torch.rand(2,3) print(t) >>>tensor([[0.5164, 0.3651, 0.0882], [0.4488, 0.9824, 0.4067]]) I'm following this introduction to norms and want to try it in PyTorch. It seems like the: norm of a vector is "the size or length of a vector is a nonnegative number that describes the extent of the vector in space, and is sometimes referred to as the vector’s magnitude or the norm" 1-Norm is "the sum of the absolute vector values, where the absolute value of a scalar uses the notation |a1|. In effect, the norm is a calculation of the Manhattan distance from the origin of the vector space." 2-Norm is "the distance of the vector coordinate from the origin of the vector space. The L2 norm is calculated as the square root of the sum of the squared vector values." I currently only know of this: print(torch.linalg.norm(t, dim=1)) >>>tensor([0.6385, 1.1541]) But I can't figure out which one of the three (norm, 1-norm, 2-norm) from here it is calculating, and how to calculate the rest | To compute the 0-, 1-, and 2-norm you can either use torch.linalg.norm, providing the ord argument (0, 1, and 2 respectively). Or directly on the tensor: Tensor.norm, with the p argument. Here are the three variants: manually computed, with torch.linalg.norm, and with Tensor.norm. 0-norm >>> x.norm(dim=1, p=0) >>> torch.linalg.norm(x, dim=1, ord=0) >>> x.ne(0).sum(dim=1) 1-norm >>> x.norm(dim=1, p=1) >>> torch.linalg.norm(x, dim=1, ord=1) >>> x.abs().sum(dim=1) 2-norm >>> x.norm(dim=1, p=2) >>> torch.linalg.norm(x, dim=1, ord=2) >>> x.pow(2).sum(dim=1).sqrt() | 5 | 17 |
68,487,888 | 2021-7-22 | https://stackoverflow.com/questions/68487888/groupby-two-columns-sum-count-and-display-output-values-in-separate-column-pa | I have a dataset, df, where I would like to groupby two columns, take the sum and count of another column as well as list the strings in a separate column Data id date pwr type aa q321 10 hey aa q321 1 hello aa q425 20 hi aa q425 20 no bb q122 2 ok bb q122 1 cool bb q422 5 sure bb q422 5 sure bb q422 5 ok Desired id date pwr count type aa q321 11 2 hey hello aa q425 40 2 hi no bb q122 3 2 ok cool bb q422 15 3 sure sure ok Doing g = df.groupby(['id', 'date'])['pwr'].sum().reset_index() g['count'] = g['id'].map(df['id'].value_counts()) This works ok, except, I am not sure how to display the string output of column 'type' Any suggestion is appreciated. | You can use .GroupBy.transform() to set the values for columns pwr and count. Then .set_index() on the 4 columns except type to get a layout similar to the desired output: df['pwr'] = df.groupby(['id', 'date'])['pwr'].transform('sum') df['count'] = df.groupby(['id', 'date'])['pwr'].transform('count') df.set_index(['id', 'date', 'pwr', 'count']) Output: type id date pwr count aa q321 11 2 hey 2 hello q425 40 2 hi 2 no bb q122 3 2 ok 2 cool q422 15 3 sure 3 sure 3 ok | 5 | 3 |
68,476,886 | 2021-7-21 | https://stackoverflow.com/questions/68476886/what-is-the-correct-folder-structure-to-use-for-a-python-project-using-pytest | I try to organize my Python projects using a folder structure. When I need to make tests I use something like the following. . |-- src | |-- b.py | `-- main.py `-- tests `-- test_main.py There is just one big problem with this approach. Pytest won't run if main.py is importing b.py. So far I've tried placing empty __init__.py files inside the src and tests folders, both independently and together, but none of those seems to work. It seems to me this is a pretty standard project, but I haven't been able to find a solution online. Should I use a different folder structure? Is there any recommended way to use pytest with this kind of project? These are the contents of the files: # b.py def triplicate(x): return x * 3 # main.py from b import triplicate def duplicate(x): return x * 2 # test_main.py from src.main import duplicate def test_duplicate(): assert duplicate(2) == 4 And this is the error I get when running pytest: ==================================================================================================== ERRORS ==================================================================================================== _____________________________________________________________________________________ ERROR collecting tests/test_main.py ______________________________________________________________________________________ ImportError while importing test module 'C:\Users\edwar\test_pytest\tests\test_main.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: c:\python39\lib\importlib\__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests\test_main.py:1: in <module> from src.main import duplicate src\main.py:1: in <module> from b import triplicate E ModuleNotFoundError: No module named 'b' =========================================================================================== short test summary info ============================================================================================ ERROR tests/test_main.py !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! =============================================================================================== 1 error in 0.15s =============================================================================================== | Python uses the 'environment variable' PYTHONPATH to look for sources to import code from. By default, the directory you execute a python program from is automatically included, but you want to include something like this when you test: PYTHONPATH=$PYTHONPATH:../src python test_main.py This is if you're executing a test from the source directory. On *nix, entries in PYTHONPATH are separated by colons. Tools like IntelliJ (PyCharm) will let you add this as a value in your test invocation. Alternatively you can use export PYTHONPATH=.... (Note this is for a *nix environment, your mileage on windows may vary.) The upshot is that every directory in PYTHONPATH will be loaded and Python will attempt to use it as a 'root' for modules you try to import. Your basic directory structure is the most idiomatic. See this answer for more on configuring PYTHONPATH correctly. See this doc for more about how the PYTHONPATH is modified and used 'under the hood'.
See this answer for options to include the src directory when running pytest tests. See this blog post about using autoenv (a Python library) to enable the usage of .env files to manage this for you (at least within a virtualenv setup - a good idea generally). setup.py is also idiomatic for including many modules, and may provide a more convenient path for the situation you're handling. | 18 | 4 |
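A common alternative to exporting PYTHONPATH by hand is a small conftest.py at the project root (next to the src/ and tests/ folders from the question); this is a sketch of that well-known pattern, not something taken from the linked answers:

```python
# conftest.py (placed in the project root, alongside src/ and tests/)
import os
import sys

# Make the src/ directory importable so "from b import triplicate" works
# when pytest collects tests/test_main.py.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "src"))
```

pytest imports root-level conftest.py files before collecting tests, so the path tweak is applied automatically on every run.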
68,483,090 | 2021-7-22 | https://stackoverflow.com/questions/68483090/adding-level-2-index-as-a-sum-of-other-indexes-with-a-condition | I have a df: df = pd.DataFrame.from_dict({('group', ''): {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A', 6: 'A', 7: 'A', 8: 'A', 9: 'B', 10: 'B', 11: 'B', 12: 'B', 13: 'B', 14: 'B', 15: 'B', 16: 'B', 17: 'B', 18: 'all', 19: 'all'}, ('category', ''): {0: 'Amazon', 1: 'Apple', 2: 'Facebook', 3: 'Google', 4: 'Netflix', 5: 'Tesla', 6: 'Total', 7: 'Uber', 8: 'total', 9: 'Amazon', 10: 'Apple', 11: 'Facebook', 12: 'Google', 13: 'Netflix', 14: 'Tesla', 15: 'Total', 16: 'Uber', 17: 'total', 18: 'Total', 19: 'total'}, (pd.Timestamp('2020-06-29 00:00:00'), 'last_sales'): {0: 195.0, 1: 61.0, 2: 106.0, 3: 61.0, 4: 37.0, 5: 13.0, 6: 954.0, 7: 4.0, 8: 477.0, 9: 50.0, 10: 50.0, 11: 75.0, 12: 43.0, 13: 17.0, 14: 14.0, 15: 504.0, 16: 3.0, 17: 252.0, 18: 2916.0, 19: 2916.0}, (pd.Timestamp('2020-06-29 00:00:00'), 'sales'): {0: 1268.85, 1: 18274.385000000002, 2: 19722.65, 3: 55547.255, 4: 15323.800000000001, 5: 1688.6749999999997, 6: 227463.23, 7: 1906.0, 8: 113731.615, 9: 3219.6499999999996, 10: 15852.060000000001, 11: 17743.7, 12: 37795.15, 13: 5918.5, 14: 1708.75, 15: 166349.64, 16: 937.01, 17: 83174.82, 18: 787625.7400000001, 19: 787625.7400000001}, (pd.Timestamp('2020-06-29 00:00:00'), 'difference'): {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 0.0, 17: 0.0, 18: 0.0, 19: 0.0}, (pd.Timestamp('2020-07-06 00:00:00'), 'last_sales'): {0: 26.0, 1: 39.0, 2: 79.0, 3: 49.0, 4: 10.0, 5: 10.0, 6: 436.0, 7: 5.0, 8: 218.0, 9: 89.0, 10: 34.0, 11: 133.0, 12: 66.0, 13: 21.0, 14: 20.0, 15: 732.0, 16: 3.0, 17: 366.0, 18: 2336.0, 19: 2336.0}, (pd.Timestamp('2020-07-06 00:00:00'), 'sales'): {0: 3978.15, 1: 12138.96, 2: 19084.175, 3: 40033.46000000001, 4: 4280.15, 5: 1495.1, 6: 165548.29, 7: 1764.15, 8: 82774.145, 9: 8314.92, 10: 12776.649999999996, 11: 28048.075, 12: 55104.21000000002, 13: 6962.844999999999, 14: 3053.2000000000003, 15: 231049.11000000002, 16: 1264.655, 17: 115524.55500000001, 18: 793194.8000000002, 19: 793194.8000000002}, (pd.Timestamp('2020-07-06 00:00:00'), 'difference'): {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 0.0, 17: 0.0, 18: 0.0, 19: 0.0}, (pd.Timestamp('2021-06-28 00:00:00'), 'last_sales'): {0: 96.0, 1: 56.0, 2: 106.0, 3: 44.0, 4: 34.0, 5: 13.0, 6: 716.0, 7: 9.0, 8: 358.0, 9: 101.0, 10: 22.0, 11: 120.0, 12: 40.0, 13: 13.0, 14: 8.0, 15: 610.0, 16: 1.0, 17: 305.0, 18: 2652.0, 19: 2652.0}, (pd.Timestamp('2021-06-28 00:00:00'), 'sales'): {0: 5194.95, 1: 19102.219999999994, 2: 22796.420000000002, 3: 30853.115, 4: 11461.25, 5: 992.6, 6: 188143.41, 7: 3671.15, 8: 94071.705, 9: 6022.299999999998, 10: 7373.6, 11: 33514.0, 12: 35943.45, 13: 4749.000000000001, 14: 902.01, 15: 177707.32, 16: 349.3, 17: 88853.66, 18: 731701.46, 19: 731701.46}, (pd.Timestamp('2021-06-28 00:00:00'), 'difference'): {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 0.0, 17: 0.0, 18: 0.0, 19: 0.0}, (pd.Timestamp('2021-07-07 00:00:00'), 'last_sales'): {0: 45.0, 1: 47.0, 2: 87.0, 3: 45.0, 4: 13.0, 5: 8.0, 6: 494.0, 7: 2.0, 8: 247.0, 9: 81.0, 10: 36.0, 11: 143.0, 12: 56.0, 13: 9.0, 14: 9.0, 15: 670.0, 16: 1.0, 17: 335.0, 18: 2328.0, 19: 2328.0}, (pd.Timestamp('2021-07-07 00:00:00'), 'sales'): {0: 7556.414999999998, 1: 14985.05, 
2: 16790.899999999998, 3: 36202.729999999996, 4: 4024.97, 5: 1034.45, 6: 163960.32999999996, 7: 1385.65, 8: 81980.16499999998, 9: 5600.544999999999, 10: 11209.92, 11: 32832.61, 12: 42137.44500000001, 13: 3885.1499999999996, 14: 1191.5, 15: 194912.34000000003, 16: 599.0, 17: 97456.17000000001, 18: 717745.3400000001, 19: 717745.3400000001}, (pd.Timestamp('2021-07-07 00:00:00'), 'difference'): {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 0.0, 17: 0.0, 18: 0.0, 19: 0.0}}).set_index(['group','category']) I am trying to create a level 2 index called combined which would be the sum of sales & last_sales of all categories except for Facebook and total / Total. So that the df would look like this: I tried doing it with .loc but with no success: s = df_out.stack(0) s['combined'] = 0 s.loc[(slice(None),[x for x in s.loc[(slice(None),:) if x != 'Facebook']].sum() | Solution Drop all in level=0, similarly drop the other unwanted level values in level=1 Calculate the sum on level=0 to aggregate the frame Create Multindex to add the additional level combined in aggregated frame Append and sort the index to maintain the order s = df.drop('all').drop(['Facebook', 'total', 'Total'], level=1).sum(level=0) s.index = pd.MultiIndex.from_product([s.index, ['combined']]) df_out = df.append(s).sort_index() Result 2020-06-29 00:00:00 2020-07-06 00:00:00 2021-06-28 00:00:00 2021-07-07 00:00:00 last_sales sales difference last_sales sales difference last_sales sales difference last_sales sales difference group category A Amazon 195.0 1268.850 0.0 26.0 3978.150 0.0 96.0 5194.950 0.0 45.0 7556.415 0.0 Apple 61.0 18274.385 0.0 39.0 12138.960 0.0 56.0 19102.220 0.0 47.0 14985.050 0.0 Facebook 106.0 19722.650 0.0 79.0 19084.175 0.0 106.0 22796.420 0.0 87.0 16790.900 0.0 Google 61.0 55547.255 0.0 49.0 40033.460 0.0 44.0 30853.115 0.0 45.0 36202.730 0.0 Netflix 37.0 15323.800 0.0 10.0 4280.150 0.0 34.0 11461.250 0.0 13.0 4024.970 0.0 Tesla 13.0 1688.675 0.0 10.0 1495.100 0.0 13.0 992.600 0.0 8.0 1034.450 0.0 Total 954.0 227463.230 0.0 436.0 165548.290 0.0 716.0 188143.410 0.0 494.0 163960.330 0.0 Uber 4.0 1906.000 0.0 5.0 1764.150 0.0 9.0 3671.150 0.0 2.0 1385.650 0.0 combined 371.0 94008.965 0.0 139.0 63689.970 0.0 252.0 71275.285 0.0 160.0 65189.265 0.0 total 477.0 113731.615 0.0 218.0 82774.145 0.0 358.0 94071.705 0.0 247.0 81980.165 0.0 B Amazon 50.0 3219.650 0.0 89.0 8314.920 0.0 101.0 6022.300 0.0 81.0 5600.545 0.0 Apple 50.0 15852.060 0.0 34.0 12776.650 0.0 22.0 7373.600 0.0 36.0 11209.920 0.0 Facebook 75.0 17743.700 0.0 133.0 28048.075 0.0 120.0 33514.000 0.0 143.0 32832.610 0.0 Google 43.0 37795.150 0.0 66.0 55104.210 0.0 40.0 35943.450 0.0 56.0 42137.445 0.0 Netflix 17.0 5918.500 0.0 21.0 6962.845 0.0 13.0 4749.000 0.0 9.0 3885.150 0.0 Tesla 14.0 1708.750 0.0 20.0 3053.200 0.0 8.0 902.010 0.0 9.0 1191.500 0.0 Total 504.0 166349.640 0.0 732.0 231049.110 0.0 610.0 177707.320 0.0 670.0 194912.340 0.0 Uber 3.0 937.010 0.0 3.0 1264.655 0.0 1.0 349.300 0.0 1.0 599.000 0.0 combined 177.0 65431.120 0.0 233.0 87476.480 0.0 185.0 55339.660 0.0 192.0 64623.560 0.0 total 252.0 83174.820 0.0 366.0 115524.555 0.0 305.0 88853.660 0.0 335.0 97456.170 0.0 all Total 2916.0 787625.740 0.0 2336.0 793194.800 0.0 2652.0 731701.460 0.0 2328.0 717745.340 0.0 total 2916.0 787625.740 0.0 2336.0 793194.800 0.0 2652.0 731701.460 0.0 2328.0 717745.340 0.0 | 5 | 4 |
68,478,097 | 2021-7-22 | https://stackoverflow.com/questions/68478097/excel-file-format-cannot-be-determined-you-must-specify-an-engine-manually | I am not sure why I am getting this error although sometimes my code works fine! Excel file format cannot be determined, you must specify an engine manually. Here below is my code with steps: 1- list of columns of customers Id: customer_id = ["ID","customer_id","consumer_number","cus_id","client_ID"] 2- The code to find all xlsx files in a folder and read them: l = [] #use a list and concat later, faster than append in the loop for f in glob.glob("./*.xlsx"): df = pd.read_excel(f).reindex(columns=customer_id).dropna(how='all', axis=1) df.columns = ["ID"] # to have only one column once concat l.append(df) all_data = pd.concat(l, ignore_index=True) # concat all data I added the engine openpyxl df = pd.read_excel(f, engine="openpyxl").reindex(columns = customer_id).dropna(how='all', axis=1) Now I got a different error: BadZipFile: File is not a zip file pandas version: 1.3.0 python version: python3.9 os: MacOS Is there a better way to read all xlsx files from a folder? | Found it. When an Excel file is open, for example in MS Excel, a hidden temporary file is created in the same directory: ~$datasheet.xlsx So, when I run the code to read all the files from the folder, it gives me the error: Excel file format cannot be determined, you must specify an engine manually. When all files are closed and there are no hidden temporary files ~$filename.xlsx in the directory, the code works perfectly. | 50 | 47 |
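Building on that finding, a small guard in the loop from the question can skip the ~$ lock files so the script also works while a workbook is open (a sketch; customer_id is the list defined in the question):

```python
import glob
import os

import pandas as pd

l = []
for f in glob.glob("./*.xlsx"):
    # Skip Excel's hidden lock/temp files such as "~$datasheet.xlsx"
    if os.path.basename(f).startswith("~$"):
        continue
    df = (
        pd.read_excel(f, engine="openpyxl")
        .reindex(columns=customer_id)
        .dropna(how="all", axis=1)
    )
    df.columns = ["ID"]  # to have only one column once concatenated
    l.append(df)

all_data = pd.concat(l, ignore_index=True)
```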
68,477,345 | 2021-7-21 | https://stackoverflow.com/questions/68477345/cpu-only-pytorch-is-crashing-with-error-assertionerror-torch-not-compiled-with | I'm trying to run the code from this repository and I need to use Pytorch 1.4.0. I've installed the CPU only version of pytorch with pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html. I ran the program by doing py -m train_Kfold_CV --device 0 --fold_id 10 --np_data_dir "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\prepare_datasets\edf_20_npz" but I'm getting this error: File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\train_Kfold_CV.py", line 94, in <module> main(config, fold_id) File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\train_Kfold_CV.py", line 65, in main trainer.train() File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\base\base_trainer.py", line 66, in train result, epoch_outs, epoch_trgs = self._train_epoch(epoch, self.epochs) File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\trainer\trainer.py", line 49, in _train_epoch loss = self.criterion(output, target, self.class_weights) File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\model\loss.py", line 6, in weighted_CrossEntropyLoss cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).cuda()) File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\cuda\__init__.py", line 196, in _lazy_init _check_driver() File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\cuda\__init__.py", line 94, in _check_driver raise AssertionError("Torch not compiled with CUDA enabled") AssertionError: Torch not compiled with CUDA enabled I've changed the number of GPU in the config to 0 and tried adding device = torch.device('cpu') at the begining of the program, but it's not doing anything. How can I fix this error? I'm using windows 10 with python 3.7.9 if it helps Thanks | You are using CPU only pytorch, but your code has statement like cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).cuda()) which is trying to move the tensor to GPU. To fix it, remove all the .cuda() operations. | 4 | 5 |
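A minimal device-agnostic sketch that achieves the same effect as removing the .cuda() calls one by one; classes_weights stands in for whatever array the original loss.py builds:

```python
import torch
import torch.nn as nn

# Falls back to CPU automatically when no GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

weights = torch.tensor(classes_weights).to(device)   # instead of .cuda()
criterion = nn.CrossEntropyLoss(weight=weights)
```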
68,473,604 | 2021-7-21 | https://stackoverflow.com/questions/68473604/does-pytorch-support-complex-numbers | Minimum (not) working example kernel = Conv2d(in_channels=1, out_channels=1, kernel_size=(3, 2)) data = torch.rand(1, 1, 100, 100).type(torch.complex64) kernel(data) yields RuntimeError: "unfolded2d_copy" not implemented for 'ComplexDouble' for 64 and 128 bit complex numbers, while for 32 bit, I get RuntimeError: "copy_" not implemented for 'ComplexHalf'. Am I missing something, or is pytorch missing support for complex numbers?? note: I'm on macbook, using cpu only. | Currently (@ latest stable version - 1.9.0) Pytorch is missing support for such operations on complex tensors (which are a beta feature). See this feature request at Native implementation of convolution for complex numbers Splitting into convolution on real & imaginary parts separately, though not ideal, is the way to go for now. | 4 | 2 |
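A minimal sketch of the real/imaginary split suggested in the answer, representing a complex kernel K = Kr + i*Ki with two ordinary Conv2d layers (the module name and sizes are illustrative):

```python
import torch
from torch import nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real-valued convolutions.

    For input x = xr + i*xi and kernel K = Kr + i*Ki:
    K * x = (Kr*xr - Ki*xi) + i*(Kr*xi + Ki*xr)
    """
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.conv_r = nn.Conv2d(in_channels, out_channels, kernel_size)
        self.conv_i = nn.Conv2d(in_channels, out_channels, kernel_size)

    def forward(self, x):
        real = self.conv_r(x.real) - self.conv_i(x.imag)
        imag = self.conv_r(x.imag) + self.conv_i(x.real)
        return torch.complex(real, imag)

kernel = ComplexConv2d(1, 1, (3, 2))
data = torch.rand(1, 1, 100, 100).type(torch.complex64)
out = kernel(data)  # complex64 output, no "unfolded2d_copy" error
```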
68,471,886 | 2021-7-21 | https://stackoverflow.com/questions/68471886/how-to-add-permissions-to-a-lambda-function-using-the-cdk | I have a Lambda function that utilizes the AWS Python SDK to manage AWS CodeCommit repositories. I create the Lambda function using the CDK like so: from aws_cdk import aws_lambda as _lambda from aws_cdk.aws_lambda_python import PythonFunction service = PythonFunction( self, 'Svc', entry='./path/to', index='file.py', runtime=_lambda.Runtime.PYTHON_3_8, handler='handler', ) After deployment, I run the Lambda function, the following error occurs which is sent to CloudWatch logs: An error occurred (AccessDeniedException) when calling the GetRepository operation: User: arn:aws:iam::XXXXXXXXXXXX:user/XXXX is not authorized to perform: codecommit:GetRepository on resource: arn:aws:codecommit:us-east-1:XXXXXXXXXXXX:XXXX How can I allow the Lambda function to call codecommit:GetRepository on any repository in my account? | Create an IAM Policy Statement and add it to your Function's role policy: from aws_cdk import aws_iam as iam service.add_to_role_policy(iam.PolicyStatement( effect=iam.Effect.ALLOW, actions=[ 'codecommit:*', ], resources=[ 'arn:aws:codecommit:us-east-1:XXXXXXXXXXXX:*', ], )) | 5 | 10 |
68,472,236 | 2021-7-21 | https://stackoverflow.com/questions/68472236/type-hint-for-callable-that-takes-kwargs | I want to do something like from typing import Callable def a(foo: Callable[[int], None]): foo(b=5) This code works, but gives a warning Unexpected argument. Defining as def a(foo: Callable[[int], None]): foo(5) works with no warnings as expected. How can I pass in an expected argument as a kwarg into a function with the type checker not being angry at me? | The Callable docs say There is no syntax to indicate optional or keyword arguments; such function types are rarely used as callback types. However they also say Callable[..., ReturnType] (literal ellipsis) can be used to type hint a callable taking any number of arguments and returning ReturnType Applying here, that would be def a(foo: Callable[..., None]): You will lose the int annotation, but it's either that, living with the warning or explicitly surpassing it. | 21 | 15 |
68,471,392 | 2021-7-21 | https://stackoverflow.com/questions/68471392/can-i-inform-mypy-that-an-expression-will-not-return-an-optional | I have the following code: def extract_table_date(bucket_path: str) -> str: event_date = re.search(r"date=([^/]+)", bucket_path) return event_date.group(1)[0:10].replace("-", "") mypy throws error on the last line: Item "None" of "Optional[Match[str]]" has no attribute "group" I think I can solve that by assigning a type to event_date, and I can: from typing import Match def extract_table_date(bucket_path: str) -> str: event_date: Match = re.search(r"date=([^/]+)", bucket_path) return event_date.group(1)[0:10].replace("-", "") but mypy now throws another error on the first line of the function: Incompatible types in assignment (expression has type "Optional[Match[Any]]", variable has type "Match[Any]") I don't really know how to inform mypy that the result won't be optional but nonetheless I followed the advice at Optional types and the None type by adding an assert: from typing import Match def extract_table_date(bucket_path: str) -> str: assert bucket_path is not None event_date: Match = re.search(r"date=([^/]+)", bucket_path) return event_date.group(1)[0:10].replace("-", "") but mypy still raises the same error. I try to fix by changing the type defined for event_date: from typing import Match, optional, Any def extract_table_date(bucket_path: str) -> str: assert bucket_path is not None event_date: Optional[Match[Any]] = re.search(r"date=([^/]+)", bucket_path) return event_date.group(1)[0:10].replace("-", "") but (as expected) I'm now back to almost the same original error: Item "None" of "Optional[Match[Any]]" has no attribute "group" Any advice on how to fix this? | The thing that's Optional is event_date, because re.search is not guaranteed to return a match. mypy is warning you that this will raise an AttributeError if that's the case. You can tell it "no, I'm very confident that will not be the case" by doing an assert to that effect: def extract_table_date(bucket_path: str) -> str: event_date = re.search(r"date=([^/]+)", bucket_path) assert event_date is not None return event_date.group(1)[0:10].replace("-", "") If you're wrong, this code will still raise an exception (AssertionError, because your assert will fail), but mypy will no longer error because there is now no way for event_date to be None when you access its group attribute. Note that there is no need to assert on bucket_path because it's already explicitly typed as str. | 12 | 20 |
68,467,015 | 2021-7-21 | https://stackoverflow.com/questions/68467015/how-to-remove-rows-that-contain-nan-in-both-1st-and-3rd-columns | When dataframe is like this, a b c d 0 1.0 NaN 3.0 NaN 1 NaN 6.0 NaN 8.0 2 9.0 NaN NaN NaN 3 13.0 NaN 15.0 16.0 I want to remove rows that contain NaN in both b and d columns. So I want the result to be like this. a b c d 1 NaN 6.0 NaN 8.0 3 13.0 NaN 15.0 16.0 In this situation I can't use df.dropna(thresh=2) because I don't want to erase row 1, and if I use df.dropna(subset=['b', 'd']) then row 3 will be removed too. What should I do now? | dropna has an additional parameter, how: how{‘any’, ‘all’}, default ‘any’ Determine if row or column is removed from DataFrame, when we have at least one NA or all NA. ‘any’ : If any NA values are present, drop that row or column. ‘all’ : If all values are NA, drop that row or column. If you set it to all, it will only drop the lines that are filled with NaN. In your case df.dropna(subset=['b', 'd'], how="all") would work. | 4 | 6 |
68,464,926 | 2021-7-21 | https://stackoverflow.com/questions/68464926/how-to-test-django-querysets-are-equal-using-pytest-django | What's the best / most readable way to assert two querysets are equal? I've come up with a few solutions: # option 1 assert sorted(qs1.values_list("pk", flat=True)) == sorted(qs2.values_list("pk", flat=True)) # option 2 (need to assert length first because set might remove duplicates) assert len(qs1) == len(qs2) assert set(qs1) == set(qs2) I know Django has a method django.test.TransactionTestCase.assertQuerysetEqual. Does pytest-django have something similar? I don't see it in the documentation. | It's there in the starting lines of the link that you suggested: Assertions All of Django’s TestCase Assertions are available in pytest_django.asserts, e.g. from pytest_django.asserts import assertTemplateUsed Similarly you can use, from pytest_django.asserts import assertQuerysetEqual | 4 | 7 |
68,462,920 | 2021-7-21 | https://stackoverflow.com/questions/68462920/oserror-python-library-not-found-libpython3-9mu-so-1-0-libpython3-9m-so-etc | I am trying to create an executable from a python script, using pyinstaller, and am getting the error seen in the subject line. The particulars: python - version 3.9.2 pyinstaller - version 4 I am running on Debian Linux I invoke pyinstaller as: pyinstaller --onefile pythonfile.py When I looked to see what libpython*.so files were resident, I saw libpython3.7*.so, and the error shows I need to install libpython3.9*.so files. I have tried: pip3 install PyInstaller (to install pyinstaller) apt-get install python3-dev (as recommended in the pyinstaller error msg) apt-get install python-dev (as recommended in the pyinstaller error msg) apt-get upgrade apt-get update but I still get the error. How can I get the correct libpython*.so files loaded (i.e., 3.9)? TIA. | You need to generate the shared library using: env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.9.2 I'm not sure whether 3.9.2 works here; if not, try 3.9.0. The official documentation is linked here. | 4 | 12
68,436,658 | 2021-7-19 | https://stackoverflow.com/questions/68436658/mypy-says-request-json-returns-optionalany-how-do-i-solve | I am trying to understand mypy a little better. For the following line of code: request_body: dict = {} request_body = request.get_json() mypy returns an error: error: Incompatible types in assignment (expression has type "Optional[Any]", variable has type "Dict[Any, Any]") What is the correct fix for this? | As you can see in the following code, taken from /wekzeug/wrappers/request.py, the function get_json doesn't always return a dictionary. I would suggest removing the type hinting from the variable, as it can be None or a dictionary. def get_json( self, force: bool = False, silent: bool = False, cache: bool = True ) -> t.Optional[t.Any]: """Parse :attr:`data` as JSON. If the mimetype does not indicate JSON (:mimetype:`application/json`, see :meth:`is_json`), this returns ``None``. If parsing fails, :meth:`on_json_loading_failed` is called and its return value is used as the return value. :param force: Ignore the mimetype and always try to parse JSON. :param silent: Silence parsing errors and return ``None`` instead. :param cache: Store the parsed JSON to return for subsequent calls. """ if cache and self._cached_json[silent] is not Ellipsis: return self._cached_json[silent] if not (force or self.is_json): return None data = self.get_data(cache=cache) try: rv = self.json_module.loads(data) except ValueError as e: if silent: rv = None if cache: normal_rv, _ = self._cached_json self._cached_json = (normal_rv, rv) else: rv = self.on_json_loading_failed(e) if cache: _, silent_rv = self._cached_json self._cached_json = (rv, silent_rv) else: if cache: self._cached_json = (rv, rv) return rv This line specifically causes the method to return None: except ValueError as e: if silent: rv = None | 6 | 3 |
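A hedged sketch of one way to keep a plain dict annotation while satisfying mypy; the empty-dict fallback and the wrapper function are assumptions about the desired behaviour, not part of the answer:

from typing import Optional

def get_request_body(request) -> dict:
    # get_json() is typed Optional because it can return None (e.g. non-JSON mimetype)
    parsed: Optional[dict] = request.get_json()
    if parsed is None:
        return {}
    return parsed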
68,415,049 | 2021-7-16 | https://stackoverflow.com/questions/68415049/annotate-a-tuple-with-variable-number-of-items-and-first-item-is-of-different-ty | A few valid values for a tuple that I'm trying to annotate: ("foo", 1, 2) ("bar", 11) ("baz", 42, 31, 20, 0, -700, 44444, 12345, 1, 2, 3, 4, 5, 6, 7, 8, 9) I was expecting this to work: my_tuple: Tuple[str, int, ...] # doesn't work! ... but that throws error: Unexpected '...' Any way to annotate this structure? | In Python 3.11 I can now do the following: my_tuple: tuple[str, *tuple[int, ...]] = ("foo", 1, 2) This was added in PEP 646 For Python pre-3.11 (courtesy of @FMeinicke) typing_extensions.Unpack can be used instead [1,2,3]. from typing_extensions import Unpack my_tuple: tuple[str, Unpack[tuple[int, ...]]] = ("foo", 1, 2) though, note that there might still be issues with generic typed aliases pre-3.11, see https://github.com/python/typing_extensions/issues/103. | 7 | 3 |
68,381,971 | 2021-7-14 | https://stackoverflow.com/questions/68381971/how-to-use-postgresqls-stored-procedures-or-functions-in-django-project | I am working on one Django project. And I decided to write logic code in PostgreSQL instead of writing in Python. So, I created a stored procedure in PostgreSQL. For example, a stored procedure looks like this: create or replace procedure close_credit(id_loan int) language plpgsql as $$ begin update public.loan_loan set sum = 0 where id = id_loan; commit; end;$$ Then in settings.py, I made the following changes: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'pawnshop', 'USER': 'admin', 'PASSWORD': password.database_password, 'HOST': 'localhost', 'PORT': '', } } So the question is, How can I call this stored procedure in views.py? p.s. Maybe it sounds like a dumb question, but I really couldn't find any solution in Django. | I'd recommend storing the procedure definition in a migration file. For example, in the directory myapp/migrations/sql.py: from django.db import migrations SQL = """ CREATE PROCEDURE close_credit(id_loan int) language plpgsql AS $$ BEGIN UPDATE public.loan_loan SET sum = 0 WHERE id = id_loan; COMMIT; END; $$ """ class Migration(migrations.Migration): dependencies = [ ('myapp', '0001_initial'), ] operations = [migrations.RunSQL(SQL)] Note: you will need to replace myapp with the name of your application, and you will need to include only the most recent migration file for your app as a dependency. Now you can install the procedure using python3 manage.py migrate. Once your procedure is defined in the database, you can call it using cursor.callproc: from django.db import connection def close_credit(id_loan): with connection.cursor() as cursor: cursor.callproc('close_credit', [id_loan]) All that being said, if your procedure is really as trivial as the example you provided, it would be better to use the ORM: Loan.objects.filter(id=id_loan).update(sum=0) | 10 | 16 |
68,461,626 | 2021-7-20 | https://stackoverflow.com/questions/68461626/how-to-fix-unterminated-expression-in-f-string-missing-close-brace-in-python | I want to use f string formatting instead of print. However, I get these errors: Unterminated expression in f-string; missing close brace Expected ')' var="ab-c" f"{var.replace("-","")}text123" I tried to use single quote f'' and also double brackets but neither of them worked. Any idea about how to fix this? | Before Python 3.12: For f"{var.replace("-","")}text123", Python parses f"{var.replace(" as a complete string, which you can see has an opening { and opening (, but then the string is terminated. It first expected a ) and eventually a }, hence the error you see. To fix it, Python allows ' or " to enclose a string, so use one for the f-string and the other for inside the string: f"{var.replace('-','')}text123" or: f'{var.replace("-","")}text123' Triple quotes can also be used if internally you have both ' and " f'''{var.replace("-",'')}text123''' or: f"""{var.replace("-",'')}text123""" As of Python 3.12: Some f-string limitations have been lifted and the OP's original code now works: Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> var="ab-c" >>> f"{var.replace("-","")}text123" 'abctext123' See What's New in Python 3.12, PEP 701: Syntactic formalization of f-strings. | 15 | 36 |
68,396,962 | 2021-7-15 | https://stackoverflow.com/questions/68396962/how-to-split-strings-in-c-like-in-python | So in Python you can split strings like this: string = "Hello world!" str1 , str2 = string.split(" ") print(str1);print(str2) and it prints: Hello world! How can I do the same in C++? This wasn't useful: Parse (split) a string in C++ using string delimiter (standard C++). I need them split so I can access them separately, e.g. print just str1. | If your delimiter is always a whitespace (" ") and you will not tokenize the string with other characters (e.g. s.split(',')), you can use a string stream: #include <iostream> #include <string> #include <sstream> int main() { std::string my_string = " Hello world! "; std::string str1, str2; std::stringstream s(my_string); s>>str1>>str2; std::cout<<str1<<std::endl; std::cout<<str2<<std::endl; return 0; } Keep in mind that this code is only suggested for whitespace tokens and might not scale well if you have many tokens. Output: Hello world! | 9 | 9
68,375,767 | 2021-7-14 | https://stackoverflow.com/questions/68375767/how-can-i-use-databricks-utils-functions-in-pycharm-i-cant-find-appropriate-pi | PyCharm IDE. I want to use dbutils.widgets.get() in a module and then import this module into Databricks. I already tried pip install databricks-client, pip install databricks-utils and pip install DBUtils | The dbutils module is available only as part of the databricks-connect package. Its documentation contains a detailed description of how to set up PyCharm to work with it, and it also covers how to use dbutils. You may need to define the following wrapper to be able to use dbutils locally and on Databricks: def get_dbutils(spark): from pyspark.dbutils import DBUtils return DBUtils(spark) get_dbutils(spark).fs.cp('file:/home/user/data.csv', 'dbfs:/uploads') | 8 | 7
68,417,319 | 2021-7-17 | https://stackoverflow.com/questions/68417319/initialize-python-dataclass-from-dictionary | Let's say I want to initialize the below dataclass from dataclasses import dataclass @dataclass class Req: id: int description: str I can of course do it in the following way: data = make_request() # gives me a dict with id and description as well as some other keys. # {"id": 123, "description": "hello", "data_a": "", ...} req = Req(data["id"], data["description"]) But, is it possible for me to do it with dictionary unpacking, given that the keys I need is always a subset of the dictionary? req = Req(**data) # TypeError: __init__() got an unexpected keyword argument 'data_a' | Here's a solution that can be used generically for any class. It simply filters the input dictionary to exclude keys that aren't field names of the class with init==True: from dataclasses import dataclass, fields @dataclass class Req: id: int description: str def classFromArgs(className, argDict): fieldSet = {f.name for f in fields(className) if f.init} filteredArgDict = {k : v for k, v in argDict.items() if k in fieldSet} return className(**filteredArgDict) data = {"id": 123, "description": "hello", "data_a": ""} req = classFromArgs(Req, data) print(req) Output: Req(id=123, description='hello') UPDATE: Here's a variation on the strategy above which creates a utility class that caches dataclasses.fields for each dataclass that uses it (prompted by a comment by @rv.kvetch expressing performance concerns around duplicate processing of dataclasses.fields by multiple invocations for the same dataclass). from dataclasses import dataclass, fields class DataClassUnpack: classFieldCache = {} @classmethod def instantiate(cls, classToInstantiate, argDict): if classToInstantiate not in cls.classFieldCache: cls.classFieldCache[classToInstantiate] = {f.name for f in fields(classToInstantiate) if f.init} fieldSet = cls.classFieldCache[classToInstantiate] filteredArgDict = {k : v for k, v in argDict.items() if k in fieldSet} return classToInstantiate(**filteredArgDict) @dataclass class Req: id: int description: str req = DataClassUnpack.instantiate(Req, {"id": 123, "description": "hello", "data_a": ""}) print(req) req = DataClassUnpack.instantiate(Req, {"id": 456, "description": "goodbye", "data_a": "my", "data_b": "friend"}) print(req) @dataclass class Req2: id: int description: str data_a: str req2 = DataClassUnpack.instantiate(Req2, {"id": 123, "description": "hello", "data_a": "world"}) print(req2) print("\nHere's a peek at the internals of DataClassUnpack:") print(DataClassUnpack.classFieldCache) Output: Req(id=123, description='hello') Req(id=456, description='goodbye') Req2(id=123, description='hello', data_a='world') Here's a peek at the internals of DataClassUnpack: {<class '__main__.Req'>: {'description', 'id'}, <class '__main__.Req2'>: {'description', 'data_a', 'id'}} | 27 | 17 |
68,434,953 | 2021-7-19 | https://stackoverflow.com/questions/68434953/how-to-force-translate-i18n-in-some-specific-text-variable-in-vuejs | In normal context, we just attach translation property to a variable like : this.name = this.$t('language.name'); But I want to specific it in a specific language sometime( ex: in French). Can we do something like this in vue.js ? this.name = this.$t('language.name', locale: fr); | Using the old package kazupon/vue-i18n, the following should be possible: $t(key, locale) When using the successor-package intlify/vue-i18n-next, the answer depends on if you are using Vue I18n's Legacy API or the newer Composition API: Using Legacy API as described in the normal setup guide the usages of the t() function are stated here. This means you can still use the following call to translate a key to a specific locale (e.g. 'fr'): $t(key, locale) Example: $t('message.key', 'fr') Using the Composition API by calling createI18n() with the option legacy: false (as described here), the usages of the t() function are different as stated here. You can no longer pass the locale-string as the second parameter, but the locale can be handed over inside a TranslateOptions object. Unfortunately, there is no t(key,TranslateOptions) variant, but only the following variants: $t(key, plural, TranslateOptions) $t(key, defaultMsg, TranslateOptions) $t(key, interpolations, TranslateOptions) So the easiest solution would be e.g.: $t('message.key', 1, { locale: 'fr' }) | 7 | 7 |
68,417,682 | 2021-7-17 | https://stackoverflow.com/questions/68417682/qt-and-opencv-app-not-working-in-virtual-environment | I created a GUI app using pyqt5 and opencv. The app works fine without activating the virtual env but when I activate the virtual env and run the app it shows this error: QObject::moveToThread: Current thread (0x125b2f0) is not the object's thread (0x189e780). Cannot move to target thread (0x125b2f0) qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/deepak/Desktop/SampleApp/lib/python3.9/site-packages/cv2/qt/plugins" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl. Aborted I tried running an example pyqt5 code (without importing opencv) and another code (only using opencv) both worked fine in the virtual env. Operating System: Parrot OS 4.11 Python Version: 3.9.2 | The problem is that the version of Qt with which opencv was compiled is not similar to the one used by PyQt5 causing a conflict. A possible solution is to indicate to use the Qt plugins used by PyQt5. import os from pathlib import Path import PyQt5 from PyQt5.QtWidgets import QWidget # others imports import cv2 os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = os.fspath( Path(PyQt5.__file__).resolve().parent / "Qt5" / "plugins" ) # ... For PySide2: import os from pathlib import Path import PySide2 from PySide2.QtWidgets import QWidget # others imports import cv2 os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = os.fspath( Path(PySide2.__file__).resolve().parent / "Qt" / "plugins" ) # ... Update: A better option is to use QLibraryInfo to get the plugins folder path: import os from PyQt5.QtCore import QLibraryInfo # from PySide2.QtCore import QLibraryInfo import cv2 os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = QLibraryInfo.location( QLibraryInfo.PluginsPath ) | 6 | 18 |
68,446,642 | 2021-7-19 | https://stackoverflow.com/questions/68446642/how-do-i-get-pylance-to-ignore-the-possibility-of-none | I love Pylance type checking. However, If I have a variable var: Union[None, T], where T implements foo, pylance will throw an error at: var.foo() since type None doesn't implement foo. Is there any way to resolve this? A way to tell Pylance "This variable is None sometimes but in this case I'm 100% sure it will be assigned | There are many ways of forcing a type-checker to accept this. Use assert: from typing import Union def do_something(var: Union[T, None]): assert var is not None var.foo() Raise some other exception: from typing import Union def do_something(var: Union[T, None]): if var is None: raise RuntimeError("NO") var.foo() Use an if statement: from typing import Union def do_something(var: Union[T, None]): if var is not None: var.foo() Use typing.cast, a function that does nothing at runtime but forces a type-checker to accept that a variable is of a certain type: from typing import Union, cast def do_something(var: Union[T, None]): var = cast(T, var) var.foo() Switch off the type-checker for that line: from typing import Union def do_something(var: Union[T, None]): var.foo() # type: ignore Note also that, while it makes no difference to how your type annotation is interpreted by a type-checker (the two are semantically identical), you can also write typing.Union[T, None] as typing.Optional[T], which is arguably slightly nicer syntax. In Python >=3.10 (or earlier if you have from __future__ import annotations at the top of your code), you can even write Union types with the | operator, i.e. T | None. | 53 | 83 |