question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
65,229,864 | 2020-12-10 | https://stackoverflow.com/questions/65229864/geopandas-polygon-to-line | I am new to geopandas and would like to plot only the outline of a polygon, similar to the function ST_Boundary() in PostGIS. I have a GeoDataFrame states containing a polygon for each state: states = counties.dissolve(by='STATEFP') When I subset by one state, I am able to plot that state: states.loc[states.index.isin(['06'])]['geometry'] I am only interested in the outline, but it is not clear from the documentation how to convert a polygon to line geometry. Is there a useful method in geopandas or another spatial library that might help in converting a polygon to a linestring? | You can get the boundary as states.boundary Alternatively, if you want only the exterior boundary, you can get it as states.exterior Both give you a new GeoSeries with line geometry. | 8 | 9 |
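A tiny self-contained sketch of the .boundary approach from the answer, with a toy square standing in for the dissolved state polygons (in the question, the real states frame comes from counties.dissolve(by='STATEFP')):

```python
import geopandas as gpd
from shapely.geometry import Polygon

# A toy stand-in for the dissolved `states` frame from the question.
states = gpd.GeoDataFrame(
    {"STATEFP": ["06"]},
    geometry=[Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])],
).set_index("STATEFP")

outlines = states.boundary             # GeoSeries of line geometry, like ST_Boundary()
print(outlines.geom_type)              # LineString
outlines.loc[["06"]].plot()            # plot only the outline of state '06'
```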
65,231,632 | 2020-12-10 | https://stackoverflow.com/questions/65231632/importerror-cannot-import-name-bigquery-from-google-cloud-unknown-location | I'm trying to deploy a Cloud Function and it keeps returning the following error after importing bigquery from google.cloud: ImportError: cannot import name 'bigquery' from 'google.cloud' (unknown location) I've tried installing the newest versions and removing and reinstalling, but the error persists. Any idea? | Try adding something like google-cloud-bigquery==2.3.1 to the requirements.txt of your Google Cloud Function | 9 | 11 |
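What the answer's suggestion looks like in practice; the pinned version is just the example from the answer (any release available on PyPI should work), and the function needs to be redeployed after adding it:

```text
# requirements.txt placed next to main.py of the Cloud Function
google-cloud-bigquery==2.3.1
```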
65,230,997 | 2020-12-10 | https://stackoverflow.com/questions/65230997/when-i-use-fastapi-and-pydantic-to-build-post-api-appear-a-typeerror-object-of | I use FastAPi and Pydantic to model the requests and responses to an POST API. I defined three class: from pydantic import BaseModel, Field from typing import List, Optional, Dict class RolesSchema(BaseModel): roles_id: List[str] class HRSchema(BaseModel): pk: int user_id: str worker_id: str worker_name: str worker_email: str schedulable: bool roles: RolesSchema state: dict class CreateHR(BaseModel): user_id: str worker_id: str worker_name: str worker_email: str schedulable: bool roles: RolesSchema And My API's program: @router.post("/humanResource", response_model=HRSchema) async def create_humanResource(create: CreateHR): query = HumanResourceModel.insert().values( user_id=create.user_id, worker_id=create.worker_id, worker_name=create.worker_name, worker_email=create.worker_email, schedulable=create.schedulable, roles=create.roles ) last_record_id = await database.execute(query) return {"status": "Successfully Created!"} Input data format is json: { "user_id": "123", "worker_id": "010", "worker_name": "Amos", "worker_email": "[email protected]", "schedulable": true, "roles": {"roles_id": ["001"]} } When I executed, I got TypeError: Object of type RolesSchema is not JSON serializable. How can I fix the program to normal operation? | Try to use roles=create.roles.dict() for creating query instead of roles=create.roles | 8 | 1 |
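A small self-contained check of why the suggested fix works, reusing the RolesSchema model from the question: calling .dict() turns the Pydantic model into a plain dict that serializes cleanly (this is the Pydantic v1 API; in Pydantic v2 the equivalent is .model_dump()):

```python
import json
from typing import List
from pydantic import BaseModel

class RolesSchema(BaseModel):
    roles_id: List[str]

roles = RolesSchema(roles_id=["001"])

# json.dumps(roles) would raise: Object of type RolesSchema is not JSON serializable
print(json.dumps(roles.dict()))   # {"roles_id": ["001"]} - a plain dict serializes fine
```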
65,226,693 | 2020-12-10 | https://stackoverflow.com/questions/65226693/the-conflict-is-caused-by-the-user-requested-tensorboard-2-1-0-tensorflow-1-15 | I am trying to install a package VIBE from a git repo and inistally I was installing its dependencies. The code is located here: https://github.com/mkocabas/VIBE how should I fix this? Here's the error I got: (vibe-env) mona@mona:~/research/VIBE$ pip install -r requirements.txt Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: torchvision==0.5.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 19)) (0.5.0) Collecting git+https://github.com/mattloper/chumpy.git (from -r requirements.txt (line 24)) Cloning https://github.com/mattloper/chumpy.git to /tmp/pip-req-build-vdh2h3jw Collecting git+https://github.com/mkocabas/yolov3-pytorch.git (from -r requirements.txt (line 25)) Cloning https://github.com/mkocabas/yolov3-pytorch.git to /tmp/pip-req-build-ay_gkil2 Collecting git+https://github.com/mkocabas/multi-person-tracker.git (from -r requirements.txt (line 26)) Cloning https://github.com/mkocabas/multi-person-tracker.git to /tmp/pip-req-build-l9jgk1qb Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Collecting filterpy==1.4.5 Using cached filterpy-1.4.5-py3-none-any.whl Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting gdown==3.6.4 Downloading gdown-3.6.4.tar.gz (5.2 kB) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Collecting h5py==2.10.0 Using cached h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting joblib==0.14.1 Downloading joblib-0.14.1-py2.py3-none-any.whl (294 kB) |ββββββββββββββββββββββββββββββββ| 294 kB 5.6 MB/s Collecting llvmlite==0.32.1 Downloading llvmlite-0.32.1-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB) |ββββββββββββββββββββββββββββββββ| 20.2 MB 14.1 MB/s Collecting matplotlib==3.1.3 Using cached matplotlib-3.1.3-cp37-cp37m-manylinux1_x86_64.whl (13.1 MB) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting numba==0.47.0 Downloading numba-0.47.0-cp37-cp37m-manylinux1_x86_64.whl (3.7 MB) |ββββββββββββββββββββββββββββββββ| 3.7 MB 33.0 MB/s Requirement already satisfied: setuptools in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from numba==0.47.0->-r requirements.txt (line 6)) (51.0.0.post20201207) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting opencv-python==4.1.2.30 Downloading opencv_python-4.1.2.30-cp37-cp37m-manylinux1_x86_64.whl (28.3 MB) |ββββββββββββββββββββββββββββββββ| 28.3 MB 29.4 MB/s Requirement 
already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting pillow==6.2.1 Downloading Pillow-6.2.1-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB) |ββββββββββββββββββββββββββββββββ| 2.1 MB 107.9 MB/s Collecting progress==1.5 Downloading progress-1.5.tar.gz (5.8 kB) Collecting pyrender==0.1.36 Downloading pyrender-0.1.36-py3-none-any.whl (1.2 MB) |ββββββββββββββββββββββββββββββββ| 1.2 MB 23.0 MB/s Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Collecting PyYAML==5.3.1 Using cached PyYAML-5.3.1-cp37-cp37m-linux_x86_64.whl Collecting scikit-image==0.16.2 Downloading scikit_image-0.16.2-cp37-cp37m-manylinux1_x86_64.whl (26.5 MB) |ββββββββββββββββββββββββββββββββ| 26.5 MB 25.7 MB/s Collecting scikit-video==1.1.11 Using cached scikit_video-1.1.11-py2.py3-none-any.whl (2.3 MB) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting scipy==1.4.1 Using cached scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl (26.1 MB) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting smplx==0.1.13 Downloading smplx-0.1.13-py3-none-any.whl (26 kB) Requirement already satisfied: torch>=1.0.1.post2 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from smplx==0.1.13->-r requirements.txt (line 7)) (1.4.0) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Collecting tensorboard==2.1.0 Downloading tensorboard-2.1.0-py3-none-any.whl (3.8 MB) |ββββββββββββββββββββββββββββββββ| 3.8 MB 29.3 MB/s Requirement already satisfied: setuptools in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from numba==0.47.0->-r requirements.txt (line 6)) (51.0.0.post20201207) Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Requirement already satisfied: wheel>=0.26 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from tensorboard==2.1.0->-r requirements.txt (line 18)) (0.36.1) Collecting tensorflow==1.15.4 Downloading tensorflow-1.15.4-cp37-cp37m-manylinux2010_x86_64.whl (110.5 MB) |ββββββββββββββββββββββββββββββββ| 110.5 MB 22 kB/s Requirement already satisfied: numpy==1.17.5 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.17.5) Requirement already satisfied: six>=1.11.0 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from chumpy==0.70->-r requirements.txt (line 24)) (1.15.0) Requirement already satisfied: wheel>=0.26 in /home/mona/anaconda3/envs/vibe-env/lib/python3.7/site-packages (from tensorboard==2.1.0->-r requirements.txt (line 18)) (0.36.1) INFO: pip is looking at multiple versions of tensorboard to determine which version is compatible with other 
requirements. This could take a while. INFO: pip is looking at multiple versions of smplx to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of scipy to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of scikit-video to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of scikit-image to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of pyyaml to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of pyrender to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of progress to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of pillow to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of opencv-python to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of numpy to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of numba to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of multi-person-tracker to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of matplotlib to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of llvmlite to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of joblib to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of h5py to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of gdown to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of filterpy to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of chumpy to determine which version is compatible with other requirements. This could take a while. ERROR: Cannot install -r requirements.txt (line 17) and tensorboard==2.1.0 because these package versions have conflicting dependencies. The conflict is caused by: The user requested tensorboard==2.1.0 tensorflow 1.15.4 depends on tensorboard<1.16.0 and >=1.15.0 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies (vibe-env) mona@mona:~/research/VIBE$ python Python 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import torch >>> torch.__version__ '1.4.0' Here are all the commands I ran before this: (base) mona@mona:~/research/VIBE$ export CONDA_ENV_NAME=vibe-env (base) mona@mona:~/research/VIBE$ conda create -n $CONDA_ENV_NAME python=3.7 (base) mona@mona:~/research/VIBE$ eval "$(conda shell.bash hook)" (base) mona@mona:~/research/VIBE$ conda activate $CONDA_ENV_NAME (vibe-env) mona@mona:~/research/VIBE$ pip install numpy==1.17.5 torch==1.4.0 torchvision==0.5.0 (vibe-env) mona@mona:~/research/VIBE$ pip install git+https://github.com/giacaglia/pytube.git --upgrade | The key here is this: The conflict is caused by: The user requested tensorboard==2.1.0 tensorflow 1.15.4 depends on tensorboard<1.16.0 and >=1.15.0 This is due to the fact that there is a conflict in requirements.txt of https://github.com/mkocabas/VIBE since it requires tensorboard==2.1.0 and tensorflow==1.15.4. However, according to the error message, this version of tensorflow only works with tensorboard 1.15.0 - 1.15.x. If you read the error closely you will see that pip itself suggests how to resolve this: To fix this you could try to: loosen the range of package versions you've specified remove package versions to allow pip attempt to solve the dependency conflict | 6 | 3 |
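One concrete way to apply pip's first suggestion for the project above: loosen the tensorboard pin so it falls inside the range that tensorflow 1.15.4 requires. The exact edit to VIBE's requirements.txt is an assumption; the version range itself comes straight from the error message above.

```text
# requirements.txt (only the two lines involved in the conflict)
tensorflow==1.15.4
tensorboard>=1.15.0,<1.16.0   # was tensorboard==2.1.0; range taken from pip's error
```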
65,224,767 | 2020-12-9 | https://stackoverflow.com/questions/65224767/python-abstract-property-cant-instantiate-abstract-class-with-abstract-me | I'm trying to create a base class with a number of abstract python properties, in python 3.7. I tried it one way (see 'start' below) using the @property, @abstractmethod, @property.setter annotations. This worked but it doesn't raise an exception if the subclass doesn't implement a setter. That's the point of using @abstract to me, so that's no good. So I tried doing it another way (see 'end' below) using two @abstractmethod methods and a 'property()', which is not abstract itself but uses those methods. This approach generates an error when instantiating the subclass: # {TypeError}Can't instantiate abstract class FirstStep with abstract methods end I'm clearly implementing the abstract methods, so I don't understand what it means. The 'end' property is not marked @abstract, but if I comment it out, it does run (but I don't get my property). I also added that test non-abstract method 'test_elapsed_time' to demonstrate I have the class structure and abstraction right (it works). Any chance I'm doing something dumb, or is there some special behavior around property() that's causing this? class ParentTask(Task): def get_first_step(self): # {TypeError}Can't instantiate abstract class FirstStep with abstract methods end return FirstStep(self) class Step(ABC): # __metaclass__ = ABCMeta def __init__(self, task): self.task = task # First approach. Works, but no warnings if don't implement setter in subclass @property @abstractmethod def start(self): pass @start.setter @abstractmethod def start(self, value): pass # Second approach. "This method for 'end' may look slight messier, but raises errors if not implemented. @abstractmethod def get_end(self): pass @abstractmethod def set_end(self, value): pass end = property(get_end, set_end) def test_elapsed_time(self): return self.get_end() - self.start class FirstStep(Step): @property def start(self): return self.task.start_dt # No warnings if this is commented out. @start.setter def start(self, value): self.task.start_dt = value def get_end(self): return self.task.end_dt def set_end(self, value): self.task.end_dt = value | I suspect this is a bug in the interaction of abstract methods and properties. In your base class, the following things happen, in order: You define an abstract method named start. You create a new property that uses the abstract method from 1) as its getter. The name start now refers to this property, with the only reference to the original name now held by Self.start.fget. Python saves a temporary reference to start.setter, because the name start is about to be bound to yet another object. You create a second abstract method named start The reference from 3) is given the abstract method from 4) to define a new property to replace the once currently bound to the name start. This property has as its getter the method from 1 and as its setter the method from 4). Now start refers to this property; start.fget refers to the method from 1); start.fset refers to the method from 4). At this point, you have a property, whose component functions are abstract methods. The property itself was not decorated as abstract, but the definition of property.__isabstractmethod__ marks it as such because all its component methods are abstract. 
More importantly, you have the following entries in Step.__abstractmethods__: start, the property end, the property set_end, the setter for end get_end, the getter for end Note that the component functions for the start property are missing, because __abstractmethods__ stores names of, not references to, things that need to be overridden. Using property and the resulting property's setter method as decorators repeatedly replace what the name start refers to. Now, in your child class, you define a new property named start, shadowing the inherited property, which has no setter and a concrete method as its getter. At this point, it doesn't matter if you provide a setter for this property or not, because as far as the abc machinery is concerned, you have provided everything it asked for: A concrete method for the name start Concrete methods for the names get_end and set_end Implicitly a concrete definition for the name end, because all of the underlying functions for the property end have been provided concrete definitions. | 6 | 4 |
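A short check that reproduces the answer's claim about what the abc machinery actually records; it assumes the Step and FirstStep classes from the question are already defined in the session:

```python
# What abc still considers abstract on each class.
print(Step.__abstractmethods__)
# expected: frozenset({'start', 'end', 'get_end', 'set_end'})  (order may vary)

print(FirstStep.__abstractmethods__)
# expected: frozenset({'end'})  -> this is why instantiating FirstStep fails
```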
65,222,324 | 2020-12-9 | https://stackoverflow.com/questions/65222324/how-to-sample-from-dataframe-based-on-percentile-of-a-column | Given a dataset like this: import pandas as pd rows = [{'key': 'ABC', 'freq': 100}, {'key': 'DEF', 'freq': 60}, {'key': 'GHI', 'freq': 50}, {'key': 'JKL', 'freq': 40}, {'key': 'MNO', 'freq': 13}, {'key': 'PQR', 'freq': 11}, {'key': 'STU', 'freq': 10}, {'key': 'VWX', 'freq': 10}, {'key': 'YZZ', 'freq': 3}, {'key': 'WHYQ', 'freq': 3}, {'key': 'HOWEE', 'freq': 2}, {'key': 'DUH', 'freq': 1}, {'key': 'HAHA', 'freq': 1}] df = pd.DataFrame(rows) df['percent'] = df['freq'] / sum(df['freq']) [out]: key freq percent 0 ABC 100 0.328947 1 DEF 60 0.197368 2 GHI 50 0.164474 3 JKL 40 0.131579 4 MNO 13 0.042763 5 PQR 11 0.036184 6 STU 10 0.032895 7 VWX 10 0.032895 8 YZZ 3 0.009868 9 WHYQ 3 0.009868 10 HOWEE 2 0.006579 11 DUH 1 0.003289 12 HAHA 1 0.003289 The goal is to select 1 example from the top 50-100 percentile of the frequency, select 2 examples from the 10-50 percentile, and select 4 examples from the < 10 percentile. In this case, the answers that fit are: Pick 1 from ['ABC', 'DEF'] Pick 2 from ['GHI', 'JKL', 'MNO', 'PQR'] Pick 4 from ['VWX', 'STU', 'YZZ', 'WHYQ', 'HOWEE', 'HAHA', 'DUH'] I've tried this: import random import pandas as pd rows = [{'key': 'ABC', 'freq': 100}, {'key': 'DEF', 'freq': 60}, {'key': 'GHI', 'freq': 50}, {'key': 'JKL', 'freq': 40}, {'key': 'MNO', 'freq': 13}, {'key': 'PQR', 'freq': 11}, {'key': 'STU', 'freq': 10}, {'key': 'VWX', 'freq': 10}, {'key': 'YZZ', 'freq': 3}, {'key': 'WHYQ', 'freq': 3}, {'key': 'HOWEE', 'freq': 2}, {'key': 'DUH', 'freq': 1}, {'key': 'HAHA', 'freq': 1}] df = pd.DataFrame(rows) df['percent'] = df['freq'] / sum(df['freq']) bin_50_100 = [] bin_10_50 = [] bin_10 = [] total_percent = 1.0 for idx, row in df.sort_values(by=['freq', 'key'], ascending=False).iterrows(): if total_percent > 0.5: bin_50_100.append(row['key']) elif 0.1 < total_percent < 0.5: bin_10_50.append(row['key']) else: bin_10.append(row['key']) total_percent -= row['percent'] print(random.sample(bin_50_100, 1)) print(random.sample(bin_10_50, 2)) print(random.sample(bin_10, 4)) [out]: ['DEF'] ['MNO', 'PQR'] ['HOWEE', 'WHYQ', 'HAHA', 'DUH'] But is there a simpler way to solve the problem? | Let's try: bins = [0, 0.1, 0.5, 1] samples = [3,3,1] df['sample'] = pd.cut(df.percent[::-1].cumsum(), # accumulate percentage bins=[0, 0.1, 0.5, 1], # bins labels=False # num samples ).astype(int) df.groupby('sample').apply(lambda x: x.sample(n=samples[x['sample'].iloc[0]])) Output: key freq percent sample sample 1 0 ABC 100 0.328947 1 2 2 GHI 50 0.164474 2 5 PQR 11 0.036184 2 4 7 VWX 10 0.032895 4 6 STU 10 0.032895 4 12 HAHA 1 0.003289 4 10 HOWEE 2 0.006579 4 | 8 | 10 |
65,219,970 | 2020-12-9 | https://stackoverflow.com/questions/65219970/how-to-design-a-neural-network-to-predict-arrays-from-arrays | I am trying to design a neural network to predict an array of the smooth underlying function from a dataset array with gaussian noise included. I have created a training and data set of 10000 arrays combined. Now I am trying to predict the array values for the actual function but it seems to fail and the accuracy isn't good either. Can someone guide me how to further improve my model to get better accuracy and be able to predict good data. My code used is below: for generating test and training data: noisy_data = [] pure_data =[] time = np.arange(1,100) for i in tqdm(range(10000)): array = [] noise = np.random.normal(0,1/10,99) for j in range(1,100): array.append( np.log(j)) array = np.array(array) pure_data.append(array) noisy_data.append(array+noise) pure_data=np.array(pure_data) noisy_data=np.array(noisy_data) print(noisy_data.shape) print(pure_data.shape) training_size=6000 x_train = noisy_data[:training_size] y_train = pure_data[:training_size] x_test = noisy_data[training_size:] y_test = pure_data[training_size:] print(x_train.shape) My model: model = tf.keras.models.Sequential() model.add(tf.keras.layers.Flatten(input_shape=(99,))) model.add(tf.keras.layers.Dense(768, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(768, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(99, activation=tf.nn.softmax)) model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy']) model.fit(x_train, y_train, epochs = 20) Outcome of bad accuracy: Epoch 1/20 125/125 [==============================] - 2s 16ms/step - loss: 947533.1875 - accuracy: 0.0000e+00 Epoch 2/20 125/125 [==============================] - 2s 15ms/step - loss: 9756863.0000 - accuracy: 0.0000e+00 Epoch 3/20 125/125 [==============================] - 2s 16ms/step - loss: 30837548.0000 - accuracy: 0.0000e+00 Epoch 4/20 125/125 [==============================] - 2s 15ms/step - loss: 63707028.0000 - accuracy: 0.0000e+00 Epoch 5/20 125/125 [==============================] - 2s 16ms/step - loss: 107545128.0000 - accuracy: 0.0000e+00 Epoch 6/20 125/125 [==============================] - 1s 12ms/step - loss: 161612192.0000 - accuracy: 0.0000e+00 Epoch 7/20 125/125 [==============================] - 1s 12ms/step - loss: 225245360.0000 - accuracy: 0.0000e+00 Epoch 8/20 125/125 [==============================] - 1s 12ms/step - loss: 297850816.0000 - accuracy: 0.0000e+00 Epoch 9/20 125/125 [==============================] - 1s 12ms/step - loss: 378894176.0000 - accuracy: 0.0000e+00 Epoch 10/20 125/125 [==============================] - 1s 12ms/step - loss: 467893216.0000 - accuracy: 0.0000e+00 Epoch 11/20 125/125 [==============================] - 2s 17ms/step - loss: 564412672.0000 - accuracy: 0.0000e+00 Epoch 12/20 125/125 [==============================] - 2s 15ms/step - loss: 668056384.0000 - accuracy: 0.0000e+00 Epoch 13/20 125/125 [==============================] - 2s 13ms/step - loss: 778468480.0000 - accuracy: 0.0000e+00 Epoch 14/20 125/125 [==============================] - 2s 18ms/step - loss: 895323840.0000 - accuracy: 0.0000e+00 Epoch 15/20 125/125 [==============================] - 2s 13ms/step - loss: 1018332672.0000 - accuracy: 0.0000e+00 Epoch 16/20 125/125 [==============================] - 1s 11ms/step - loss: 1147227136.0000 - accuracy: 0.0000e+00 Epoch 17/20 125/125 [==============================] - 2s 12ms/step - loss: 1281768448.0000 - accuracy: 
0.0000e+00 Epoch 18/20 125/125 [==============================] - 2s 14ms/step - loss: 1421732608.0000 - accuracy: 0.0000e+00 Epoch 19/20 125/125 [==============================] - 1s 11ms/step - loss: 1566927744.0000 - accuracy: 0.0000e+00 Epoch 20/20 125/125 [==============================] - 1s 10ms/step - loss: 1717172480.0000 - accuracy: 0.0000e+00 and the prediction code I use: model.predict([noisy_data[0]]) This throws back the error: WARNING:tensorflow:Model was constructed with shape (None, 99) for input Tensor("flatten_5_input:0", shape=(None, 99), dtype=float32), but it was called on an input with incompatible shape (None, 1). ValueError: Input 0 of layer dense_15 is incompatible with the layer: expected axis -1 of input shape to have value 99 but received input with shape [None, 1] | Looking at your y data: y_train[0] array([0. , 0.69314718, 1.09861229, 1.38629436, 1.60943791, 1.79175947, 1.94591015, 2.07944154, 2.19722458, 2.30258509, 2.39789527, 2.48490665, 2.56494936, 2.63905733, 2.7080502 , 2.77258872, 2.83321334, 2.89037176, 2.94443898, 2.99573227, 3.04452244, 3.09104245, 3.13549422, 3.17805383, 3.21887582, 3.25809654, 3.29583687, 3.33220451, 3.36729583, 3.40119738, 3.4339872 , 3.4657359 , 3.49650756, 3.52636052, 3.55534806, 3.58351894, 3.61091791, 3.63758616, 3.66356165, 3.68887945, 3.71357207, 3.73766962, 3.76120012, 3.78418963, 3.80666249, 3.8286414 , 3.8501476 , 3.87120101, 3.8918203 , 3.91202301, 3.93182563, 3.95124372, 3.97029191, 3.98898405, 4.00733319, 4.02535169, 4.04305127, 4.06044301, 4.07753744, 4.09434456, 4.11087386, 4.12713439, 4.14313473, 4.15888308, 4.17438727, 4.18965474, 4.20469262, 4.21950771, 4.2341065 , 4.24849524, 4.26267988, 4.27666612, 4.29045944, 4.30406509, 4.31748811, 4.33073334, 4.34380542, 4.35670883, 4.36944785, 4.38202663, 4.39444915, 4.40671925, 4.41884061, 4.4308168 , 4.44265126, 4.4543473 , 4.46590812, 4.47733681, 4.48863637, 4.49980967, 4.51085951, 4.52178858, 4.53259949, 4.54329478, 4.55387689, 4.56434819, 4.57471098, 4.58496748, 4.59511985]) it would seem that you are in a regression setting, and not a classification one. So, you need to change the last layer of your model to model.add(tf.keras.layers.Dense(99)) # default linear activation and compile it as model.compile(optimizer = 'adam', loss = 'mse') (notice that accuracy is meaningless in regression problems). With these changes, fitting your model for 5 epochs gives now reasonable loss values: model.fit(x_train, y_train, epochs = 5) Epoch 1/5 188/188 [==============================] - 0s 2ms/step - loss: 0.2120 Epoch 2/5 188/188 [==============================] - 0s 2ms/step - loss: 4.0999e-04 Epoch 3/5 188/188 [==============================] - 0s 2ms/step - loss: 4.1783e-04 Epoch 4/5 188/188 [==============================] - 0s 2ms/step - loss: 4.2255e-04 Epoch 5/5 188/188 [==============================] - 0s 2ms/step - loss: 4.9760e-04 and it certainly seems you don't need 20 epochs. 
For predicting single values, you need to reshape them as follows: model.predict(np.array(noisy_data[0]).reshape(1,-1)) # result: array([[-0.02887887, 0.67635924, 1.1042297 , 1.4030693 , 1.5970025 , 1.8026372 , 1.9588575 , 2.0648997 , 2.202754 , 2.3088624 , 2.400107 , 2.4935524 , 2.560785 , 2.658005 , 2.714249 , 2.7735658 , 2.8429594 , 2.8860366 , 2.9135942 , 2.991392 , 3.0119512 , 3.1059306 , 3.1467025 , 3.1484323 , 3.2273414 , 3.2722526 , 3.2814353 , 3.3600745 , 3.3591018 , 3.3908122 , 3.4431438 , 3.4897916 , 3.5229044 , 3.542718 , 3.5617661 , 3.5660467 , 3.622283 , 3.614976 , 3.6565022 , 3.6963918 , 3.7061958 , 3.7615037 , 3.7564514 , 3.7682133 , 3.8250954 , 3.831929 , 3.86098 , 3.8959084 , 3.8967183 , 3.9016035 , 3.9568343 , 3.9597993 , 4.0028276 , 3.9931173 , 3.9887471 , 4.0221996 , 4.021959 , 4.048805 , 4.069759 , 4.104507 , 4.1473804 , 4.167117 , 4.1388593 , 4.148655 , 4.175832 , 4.1865892 , 4.2039223 , 4.2558513 , 4.237947 , 4.257041 , 4.2507076 , 4.2826586 , 4.2916007 , 4.2920256 , 4.304987 , 4.3153067 , 4.3575797 , 4.347109 , 4.3662906 , 4.396843 , 4.36556 , 4.3965526 , 4.421436 , 4.433974 , 4.424191 , 4.4379086 , 4.442377 , 4.4937015 , 4.468969 , 4.506153 , 4.515915 , 4.524729 , 4.53225 , 4.5434146 , 4.561402 , 4.582401 , 4.5856013 , 4.544302 , 4.6128435 ]], dtype=float32) | 7 | 3 |
65,219,786 | 2020-12-9 | https://stackoverflow.com/questions/65219786/how-to-run-a-server-in-python | How to run a server in python? I already have tried: python -m SimpleHTTPServer python -m HTTPServer but its says to me: invalid syntax Can someone help me? Thanks! | You can use this command in cmd or terminal python -m SimpleHTTPServer <port_number> # Python 2.x Python 3.x python3 -m http.server # Python 3x By default, this will run the contents of the directory on a local web server, on port 8000. You can go to this server by going to the URL localhost:8000 in your web browser. | 22 | 36 |
65,216,850 | 2020-12-9 | https://stackoverflow.com/questions/65216850/list-of-lists-into-a-python-rich-table | Given the below, how can i get the animal, age and gender into each of the table cells please? Currently all the data ends up in one cell. Thanks from rich.console import Console from rich.table import Table list = [['Cat', '7', 'Female'], ['Dog', '0.5', 'Male'], ['Guinea Pig', '5', 'Male']] table1 = Table(show_header=True, header_style='bold') table1.add_column('Animal') table1.add_column('Age') table1.add_column('Gender') for row in zip(*list): table1.add_row(' '.join(row)) console.print(table1) | Just use * to unpack the tuple and it should work fine. for row in zip(*list): table1.add_row(*row) Note that table1.add_row(*('Cat', 'Dog', 'Guinea Pig')) is equivalent to table1.add_row('Cat', 'Dog', 'Guinea Pig') While previously your approach was equivalent to table1.add_row('Cat Dog Guinea Pig') | 6 | 9 |
65,216,794 | 2020-12-9 | https://stackoverflow.com/questions/65216794/importerror-when-importing-metric-from-sklearn | When I am trying to import a metric from sklearn, I get the following error: from sklearn.metrics import mean_absolute_percentage_error ImportError: cannot import name 'mean_absolute_percentage_error' from 'sklearn.metrics' /Users/carter/opt/anaconda3/lib/python3.8/site-packages/sklearn/metrics/__init__.py) I have used conda update all, and reinstalled scikit-learn to no avail. Any other reasons this might happen and solutions? | The function mean_absolute_percentage_error is new in scikit-learn version 0.24 as noted in the documentation. As of December 2020, the latest version of scikit-learn available from Anaconda is v0.23.2, so that's why you're not able to import mean_absolute_percentage_error. You could try installing the latest version from source instead, or implement the function you need yourself. The source is available here if you'd like to take a look. | 18 | 15 |
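If upgrading scikit-learn is not an option, the metric is simple enough to implement yourself, as the answer suggests. A minimal sketch: it returns a fraction (not multiplied by 100) and assumes no true value is exactly zero, since that would divide by zero.

```python
import numpy as np

def mean_absolute_percentage_error(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Mean of |(true - pred) / true|; assumes y_true contains no zeros.
    return np.mean(np.abs((y_true - y_pred) / y_true))

print(mean_absolute_percentage_error([3.0, 2.0, 4.0], [2.5, 2.0, 5.0]))  # ~0.139
```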
65,213,809 | 2020-12-9 | https://stackoverflow.com/questions/65213809/pydantic-does-not-validate-when-assigning-a-number-to-a-string | When assigning an incorrect attribute to a Pydantic model field, no validation error occurs. from pydantic import BaseModel class pyUser(BaseModel): username: str class Config: validate_all = True validate_assignment = True person = pyUser(username=1234) person.username >>>1234 try_again = pyUser() pydantic.error_wrappers.ValidationError: [ErrorWrapper(exc=MissingError(), loc=('username',))] <class '__main__.pyUser'> How can I get pydantic to validate on assignment? | It is expected behaviour according to the documentation: str strings are accepted as-is, int, float and Decimal are coerced using str(v) You can use the StrictStr, StrictInt, StrictFloat, and StrictBool types to prevent coercion from compatible types. from pydantic import BaseModel, StrictStr class pyUser(BaseModel): username: StrictStr class Config: validate_all = True validate_assignment = True person = pyUser(username=1234) # ValidationError `str type expected` print(person.username) | 7 | 11 |
65,209,934 | 2020-12-9 | https://stackoverflow.com/questions/65209934/pydantic-enum-field-does-not-get-converted-to-string | I am trying to restrict one field in a class to an enum. However, when I try to get a dictionary out of class, it doesn't get converted to string. Instead it retains the enum. I checked pydantic documentation, but couldn't find anything relevant to my problem. This code is representative of what I actually need. from enum import Enum from pydantic import BaseModel class S(str, Enum): am = 'am' pm = 'pm' class K(BaseModel): k: S z: str a = K(k='am', z='rrrr') print(a.dict()) # {'k': <S.am: 'am'>, 'z': 'rrrr'} I'm trying to get the .dict() method to return {'k': 'am', 'z': 'rrrr'} | You need to use use_enum_values option of model config: use_enum_values whether to populate models with the value property of enums, rather than the raw enum. This may be useful if you want to serialise model.dict() later (default: False) from enum import Enum from pydantic import BaseModel class S(str, Enum): am='am' pm='pm' class K(BaseModel): k:S z:str class Config: use_enum_values = True # <-- a = K(k='am', z='rrrr') print(a.dict()) | 101 | 181 |
65,138,643 | 2020-12-4 | https://stackoverflow.com/questions/65138643/examples-or-explanations-of-pytorch-dataloaders | I am fairly new to Pytorch (and have never done advanced coding). I am trying to learn the basics of deep learning using the d2l.ai textbook but am having trouble with understanding the logic behind the code for dataloaders. I read the torch.utils.data docs and am not sure what the DataLoader class is meant for, and when for example I am supposed to use the torch.utils.data.TensorDataset class in combination with it. For example, d2l defines a function: def load_array(data_arrays, batch_size, is_train=True): """Construct a PyTorch data iterator.""" dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train) I assume this is supposed to return an iterable that iterates over different batches. However, I don't understand what the data.TensorDataset part does (seems like there are a lot of options listed on the docs page). Also, the documents say that there are two types of datasets: iterable and map style. When describing the former type, it says "This type of datasets is particularly suitable for cases where random reads are expensive or even improbable, and where the batch size depends on the fetched data." What does it mean for "a random read to be expensive or improbable" and for the batch_size to depend on the fetched data? Can anyone give an example of this? If there is any source where a CompSci noob like me can learn these basics, I'd really appreciate tips! Thanks very much! | I'll give you an example of how to use dataloaders and will explain the steps: Dataloaders are iterables over the dataset. So when you iterate over it, it will return B randomly from the dataset collected samples (including the data-sample and the target/label), where B is the batch-size. To create such a dataloader you will first need a class which inherits from the Dataset Pytorch class. There is a standard implementation of this class in pytorch which should be TensorDataset. But the standard way is to create an own one. Here is an example for image classification: import torch from PIL import Image class YourImageDataset(torch.utils.data.Dataset): def __init__(self, image_folder): self.image_folder = image_folder self.images = os.listdir(image_folder) # get sample def __getitem__(self, idx): image_file = self.images[idx] image = Image.open((self.image_folder + image_file)) image = np.array(image) # normalize image image = image / 255 # convert to tensor image = torch.Tensor(image).reshape(3, 512, 512) # get the label, in this case the label was noted in the name of the image file, ie: 1_image_28457.png where 1 is the label and the number at the end is just the id or something target = int(image_file.split("_")[0]) target = torch.Tensor(target) return image, target def __len__(self): return len(self.images) To get an example image you can call the class and pass some random index into the getitem function. It will then return the tensor of the image matrix and the tensor of the label at that index. For example: dataset = YourImageDataset("/path/to/image/folder") data, sample = dataset.__getitem__(0) # get data at index 0 Alright, so now you have created the class which preprocesses and returns ONE sample and its label. Now we have to create the datalaoder, which "wraps" around this class and then can return whole batches of samples from your dataset class. 
Lets create three dataloaders, one which iterates over the train set, one for the test set and one for the validation set: dataset = YourImageDataset("/path/to/image/folder") # lets split the dataset into three parts (train 70%, test 15%, validation 15%) test_size = 0.15 val_size = 0.15 test_amount, val_amount = int(dataset.__len__() * test_size), int(dataset.__len__() * val_size) # this function will automatically randomly split your dataset but you could also implement the split yourself train_set, val_set, test_set = torch.utils.data.random_split(dataset, [ (dataset.__len__() - (test_amount + val_amount)), test_amount, val_amount ]) # B is your batch-size, ie. 128 train_dataloader = torch.utils.data.DataLoader( train_set, batch_size=B, shuffle=True, ) val_dataloader = torch.utils.data.DataLoader( val_set, batch_size=B, shuffle=True, ) test_dataloader = torch.utils.data.DataLoader( test_set, batch_size=B, shuffle=True, ) Now you have created your dataloaders and are ready to train! For example like this: for epoch in range(epochs): for images, targets in train_dataloader: # now 'images' is a batch containing B samples (shape: B x image_height x image_width) # and 'targets' is a batch containing B targets (of the images in 'images' with the same index) optimizer.zero_grad() images, targets = images.cuda(), targets.cuda() predictions = model.train()(images) . . . Normally you would create an own file for the "YourImageDataset" class and then import to the file in which you want to create the dataloaders. I hope I could make clear what the role of the dataloader and the Dataset class is and how to use them! I don't know much about iter-style datasets but from what I understood: The method I showed you above, is the map-style. You use that, if your dataset is stored in a .csv, .json or whatever kind of file. So you can iterate through all rows or entries of the dataset. Iter-style will take you dataset or a part of the dataset and will convert in to an iterable. For example, if your dataset is a list, this is what an iterable of the list would look like: dataset = [1,2,3,4] dataset = iter(dataset) print(next(a)) print(next(a)) print(next(a)) print(next(a)) # output: # >>> 1 # >>> 2 # >>> 3 # >>> 4 So the next will give you the next item of the list. Using this together with a Pytorch Dataloader is probably more efficient and faster. Normally the map-dataloader is fast enough and common to use, but the documentation supposed that when you are loading data-batches from a database (which can be slower) then iter-style dataset would be more efficient. This explanation of iter-style is a bit vague but I hope it makes you understand what I understood. I would recommend you to use the map-style first, as I explained it in my original answer. | 6 | 14 |
65,120,501 | 2020-12-3 | https://stackoverflow.com/questions/65120501/typing-any-in-python-3-9-and-pep-585-type-hinting-generics-in-standard-collect | I am trying to understand if the typing package is still needed? If in Python 3.8 I do: from typing import Any, Dict my_dict = Dict[str, Any] Now in Python 3.9 via PEP 585 it's now preferred to use the built in types for collections hence: from typing import Any my_dict = dict[str, Any] Do I still need to use the typing.Any or is there a built in type to replace it which I can not find? | The use of the Any remains the same. PEP 585 applies only to standard collections. This PEP proposes to enable support for the generics syntax in all standard collections currently available in the typing module. Starting with Python 3.9, the following collections become generic and importing those from typing is deprecated: tuple (typing.Tuple) list (typing.List) dict (typing.Dict) set (typing.Set) frozenset (typing.FrozenSet) type (typing.Type) collections.deque collections.defaultdict collections.OrderedDict collections.Counter collections.ChainMap collections.abc.Awaitable collections.abc.Coroutine collections.abc.AsyncIterable collections.abc.AsyncIterator collections.abc.AsyncGenerator collections.abc.Iterable collections.abc.Iterator collections.abc.Generator collections.abc.Reversible collections.abc.Container collections.abc.Collection collections.abc.Callable collections.abc.Set (typing.AbstractSet) collections.abc.MutableSet collections.abc.Mapping collections.abc.MutableMapping collections.abc.Sequence collections.abc.MutableSequence collections.abc.ByteString collections.abc.MappingView collections.abc.KeysView collections.abc.ItemsView collections.abc.ValuesView contextlib.AbstractContextManager (typing.ContextManager) contextlib.AbstractAsyncContextManager (typing.AsyncContextManager) re.Pattern (typing.Pattern, typing.re.Pattern) re.Match (typing.Match, typing.re.Match) | 14 | 17 |
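A short illustration of the split described in the answer, as it looks on Python 3.9+: the container generics come from the built-ins per PEP 585, while Any (which is not a collection) still has to be imported from typing:

```python
from typing import Any   # still needed; Any has no built-in replacement

# Python 3.9+ built-in generics per PEP 585, no typing.Dict / typing.List required.
my_dict: dict[str, Any] = {"answer": 42, "name": "spam"}
my_list: list[int] = [1, 2, 3]

print(my_dict, my_list)
```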
65,172,029 | 2020-12-6 | https://stackoverflow.com/questions/65172029/why-do-i-get-str-object-is-not-callable | I did this plot and it worked. The day after, I ran it again and got this error: TypeError: 'str' object is not callable. plt.plot(A2, A15, color="black", label="TRC - P1", marker="o") plt.xlabel("Amostras") plt.ylabel("TRC (%)") plt.title("TRC da P1") plt.yticks([0, 10, 20, 30, 40, 50, 60, 70]) plt.xticks(rotation=60) Below is the error I got. | I would guess that you defined somewhere in your notebook something like plt.xlabel = "something". This could also have happened before you ran the code shown. Try closing the notebook and restarting your kernel. After restarting, run your code again and everything should be fine. If you are using a Jupyter notebook, click the Run tab and select "Restart Kernel and Run All Cells". | 14 | 52 |
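A two-line reproduction of the failure mode described in the answer (assuming the usual matplotlib import); once the name plt.xlabel is rebound to a string, every later call fails until the kernel is restarted:

```python
import matplotlib.pyplot as plt

plt.xlabel = "Amostras"    # accidental assignment rebinds the function to a str
plt.xlabel("Amostras")     # TypeError: 'str' object is not callable
```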
65,101,442 | 2020-12-2 | https://stackoverflow.com/questions/65101442/formatter-black-is-not-working-on-my-vscode-but-why | I have started using Python and Django and I am very new in this field. And, this is my first time to ask a question here...I do apologise in advance if there is a known solution to this issue... When I installed and set VSCode formatter 'black' (after setting linter as flake8), the tutorial video tutor's side shows up pop-up like 'formatter autopep8 is not installed. install?'. & Mine did not show up that message. So what I did was... manually input 'pipenv install flack --dev --pre' on terminal. manually input "python.formatting.provider": "black", to 'settings.json' on '.vscode' folder. Setting(VSCode) -> flake8, Python > Linting: Flake8 Enabled (Also modified in: workspace), (ticked the box) Whether to lint Python files using flake8 The bottom code is from settings.json (on vscode folder). { "python.linting.pylintEnabled": false, "python.linting.flake8Enabled": true, "python.linting.enabled": true, "python.formatting.provider": "black", # input manually "python.linting.flake8Args": ["--max-line-length=88"] # input manually } I found a 'black formatter' document. https://github.com/psf/black & it stated... python -m black {source_file_or_directory} & I get the following error message. Usage: __main__.py [OPTIONS] [SRC]... Try '__main__.py -h' for help. Error: Invalid value for '[SRC]...': Path '{source_file_or_directory}' does not exist. Yes, honestly, I am not sure which source_file_or_directory I should set...but above all now I am afraid whether I am on the right track or not. Can I hear your advice? At least some direction to go, please. Thanks.. | Update 2023-09-15: Now VSCode has a Microsoft oficial Black Formatter extension. It will probably solve your problems. Original answer: I use Black from inside VSCode and it rocks. It frees mental cycles that you would spend deciding how to format your code. It's best to use it from your favorite editor. Just run from the command line if you need to format a lot of files at once. First, check if you have this in your VSCode settings.json (open it with Ctrl-P + settings): "python.formatting.provider": "black", "editor.formatOnSave": true, Remember that there may be 2 setting.json files: one in your home dir, and one in your project (.vscode/settings.json). The one inside the project prevails. That said, these kind of problems usually are about using a python interpreter where black isn't installed. I recommend the use of virtual environments, but first check your python interpreter on the status bar: If you didn't explicitly select an interpreter, do it now clicking on the Python version in your status bar. You can also do it with Ctrl-P + "Python: Select Interpreter". The status bar should change after selecting it. Now open a new terminal. Since you selected your interpreter, your virtual environment should be automatically activated by VSCode. Run python using your interpreter path and try to import black: $ python Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import black >>> Failed import? Problem solved. Just install black using the interpreter from the venv: python -m pip install black. You also can install using Conda, but in my experience VSCode works better with pip. Still not working? 
Click in the "OUTPUT" tab sibling of the TERMINAL and try to get more info at the "Log" output (if you use the newer Black plugun it may be called "Black Formatter"). Select it in the pull down menu: | 80 | 121 |
65,140,310 | 2020-12-4 | https://stackoverflow.com/questions/65140310/is-there-a-way-to-release-the-gil-for-pure-functions-using-pure-python | I think I must be missing something; this seems so right, but I can't see a way to do this. Say you have a pure function in Python: from math import sin, cos def f(t): x = 16 * sin(t) ** 3 y = 13 * cos(t) - 5 * cos(2*t) - 2 * cos(3*t) - cos(4*t) return (x, y) is there some built-in functionality or library that provides a wrapper of some sort that can release the GIL during the function's execution? In my mind I am thinking of something along the lines of from math import sin, cos from somelib import pure @pure def f(t): x = 16 * sin(t) ** 3 y = 13 * cos(t) - 5 * cos(2*t) - 2 * cos(3*t) - cos(4*t) return (x, y) Why do I think this might be useful? Because multithreading, which is currently only attractive for I/O-bound programs, would become attractive for such functions once they become long-running. Doing something like from math import sin, cos from somelib import pure from asyncio import run, gather, create_task @pure # releases GIL for f async def f(t): x = 16 * sin(t) ** 3 y = 13 * cos(t) - 5 * cos(2 * t) - 2 * cos(3 * t) - cos(4 * t) return (x, y) async def main(): step_size = 0.1 result = await gather(*[create_task(f(t / step_size)) for t in range(0, round(10 / step_size))]) return result if __name__ == "__main__": results = run(main()) print(results) Of course, multiprocessing offers Pool.map which can do something very similar. However, if the function returns a non-primitive / complex type then the worker has to serialize it and the main process HAS to deserialize and create a new object, creating a necessary copy. With threads, the child thread passes a pointer and the main thread simply takes ownership of the object. Much faster (and cleaner?). To tie this to a practical problem I encountered a few weeks ago: I was doing a reinforcement learning project, which involved building an AI for a chess-like game. For this, I was simulating the AI playing against itself for > 100,000 games; each time returning the resulting sequence of board states (a numpy array). Generating these games runs in a loop, and I use this data to create a stronger version of the AI each time. Here, re-creating ("malloc") the sequence of states for each game in the main process was the bottleneck. I experimented with re-using existing objects, which is a bad idea for many reasons, but that didn't yield much improvement. Edit: This question differs from How to run functions in parallel? , because I am not just looking for any way to run code in parallel (I know this can be achieved in various ways, e.g. via multiprocessing). I am looking for a way to let the interpreter know that nothing bad will happen when this function gets executed in a parallel thread. | Is there a way to release the GIL for pure functions using pure python? In short, the answer is no, because those functions aren't pure on the level on which the GIL operates. GIL serves not just to protect objects from being updated concurrently by Python code, its primary purpose is to prevent the CPython interpreter from performing a data race (which is undefined behavior, i.e. forbidden in the C memory model, in which CPython executes) while accessing and updating global and shared data. This includes Python-visible singletons such as None, True, and False, but also all globals like modules, shared dicts, and caches. 
Then there is their metadata such as reference counts and type objects, as well as shared data used internally by the implementation. Consider the provided pure function: def f(t): x = 16 * sin(t) ** 3 y = 13 * cos(t) - 5 * cos(2*t) - 2 * cos(3*t) - cos(4*t) return (x, y) The dis tool reveals the operations that the interpreter performs when executing the function: >>> dis.dis(f) 2 0 LOAD_CONST 1 (16) 2 LOAD_GLOBAL 0 (sin) 4 LOAD_FAST 0 (t) 6 CALL_FUNCTION 1 8 LOAD_CONST 2 (3) 10 BINARY_POWER 12 BINARY_MULTIPLY 14 STORE_FAST 1 (x) ... To run the code, the interpreter must access the global symbols sin and cos in order to call them. It accesses the integers 2, 3, 4, 5, 13, and 16, which are all cached and therefore also global. In case of an error, it looks up the exception classes in order to instantiate the appropriate exceptions. Even when these global accesses don't modify the objects, they still involve writes because they must update the reference counts. None of that can be done safely from multiple threads without synchronization. While it is conceivably possible to modify the Python interpreter to implement truly pure functions that don't access global state, it would require significant modifications to the internals, affecting compatibility with existing C extensions, including the vastly popular scientific ones. This last point is the principal reason why removing the GIL has proven to be so difficult. | 14 | 17 |
65,141,291 | 2020-12-4 | https://stackoverflow.com/questions/65141291/get-a-list-of-all-available-fonts-in-pil | I'm trying to find out what fonts are available to be used in PIL with the font = ImageFont.load() and/or ImageFont.truetype() function. I want to create a list from which I can sample a random font to be used. I haven't found anything in the documentation so far, unfortunately. | I have so far not found a solution with PIL but matplotlib has a function to get all the available fonts from the system: system_fonts = matplotlib.font_manager.findSystemFonts(fontpaths=None, fontext='ttf') The font can then be loaded using fnt = ImageFont.truetype(font, 60) | 9 | 12 |
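Putting the answer together with the original goal of sampling a random font; a small sketch that assumes at least one system TTF font is found and that it is loadable by PIL:

```python
import random
from matplotlib import font_manager
from PIL import ImageFont

system_fonts = font_manager.findSystemFonts(fontpaths=None, fontext="ttf")
font_path = random.choice(system_fonts)      # sample one font file at random
fnt = ImageFont.truetype(font_path, 60)
print(font_path)
```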
65,184,937 | 2020-12-7 | https://stackoverflow.com/questions/65184937/fatal-python-error-init-fs-encoding-failed-to-get-the-python-codec-of-the-file | I am trying to start my uwsgi server in my virtual environment, but after I added plugin python3 option I get this error every time: !!! Python Home is not a directory: /home/env3/educ !!! Set PythonHome to /home/env3/educ Python path configuration: PYTHONHOME = '/home/env3/educ' PYTHONPATH = (not set) program name = '/home/env3/educ/bin/python' isolated = 0 environment = 1 user site = 1 import site = 1 sys._base_executable = '/home/env3/educ/bin/python' sys.base_prefix = '/home/env3/educ' sys.base_exec_prefix = '/home/env3/educ' sys.executable = '/home/env3/educ/bin/python' sys.prefix = '/home/env3/educ' sys.exec_prefix = '/home/env3/educ' sys.path = [ '/home/env3/educ/lib/python38.zip', '/home/env3/educ/lib/python3.8', '/home/env3/educ/lib/python3.8/lib-dynload', ] Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' Current thread 0x00007efe89db8780 (most recent call first): <no Python frame> Also I tried to create new virtual environment using python3 -m venv env and moved project files to it, but still the same error. Here is my uwsgi.ini file: [uwsgi] base = /home/env3/educ projectname = educ plugins = python3 master = true virtualenv = /home/env3/%(projectname) pythonpath = %(base) env = DJANGO_SETTINGS_MODULE=%(projectname).settings.pro module = %(projectname).wsgi:application socket = /tmp/%(projectname).sock chmod-socket = 666 I use Python 3.8.5 I am trying to use Django + uWSGI + nginx + Postgresql. | I see your PYTHONHOME is set to PYTHONHOME = '/home/env3/educ'. Try to check if it is really there. The solution for me was to remove the PYTHONHOME environment variable. For you, it can be just that, or setting that variable to another value. This worked on Windows, and would work on Linux for sure. If someone tries this on Linux, please post a comment here ! A CPython developer confirmed the solution here. : This is not a Python bug, this is a symptom of setting PYTHONHOME and/or PYTHONPATH when theyβre not needed. In nearly all cases you donβt need to set either of them; In the case of PYTHONHOME itβs almost always a mistake to set. | 57 | 44 |
65,142,024 | 2020-12-4 | https://stackoverflow.com/questions/65142024/when-set-name-is-useful-in-python | I saw a tweet from Raymond Hettinger yesterday. He used __set_name__. When I define __set_name__ method for my Class, the name becomes the instance's name. The owner became Foo, which is also expected but I couldn't figure out when and how this is useful. class Bar: def __set_name__(self, owner, name): print(f'{self} was named {name} by {owner}') class Foo: x = Bar() y = Bar() That prints <__main__.Bar object at 0x7f48f2968820> was named x by <class '__main__.Foo'> <__main__.Bar object at 0x7f48f2968c70> was named y by <class '__main__.Foo'> | As @juanpa provided a brief explanation, it is used to know the variable's name and class. One of its use cases is for logging. When you want to log the variable's name. This example was in descriptor's HowTo. import logging logging.basicConfig(level=logging.INFO) class LoggedAccess: def __set_name__(self, owner, name): self.public_name = name self.private_name = '_' + name def __get__(self, obj, objtype=None): value = getattr(obj, self.private_name) logging.info('Accessing %r giving %r', self.public_name, value) return value def __set__(self, obj, value): logging.info('Updating %r to %r', self.public_name, value) setattr(obj, self.private_name, value) class Person: name = LoggedAccess() # First descriptor instance age = LoggedAccess() # Second descriptor instance def __init__(self, name, age): self.name = name # Calls the first descriptor self.age = age # Calls the second descriptor def birthday(self): self.age += 1 | 14 | 12 |
65,167,879 | 2020-12-6 | https://stackoverflow.com/questions/65167879/python-y-should-be-a-1d-array-got-an-array-of-shape-instead | Let's consider data : import numpy as np from sklearn.linear_model import LogisticRegression x=np.linspace(0,2*np.pi,80) x = x.reshape(-1,1) y = np.sin(x)+np.random.normal(0,0.4,80) y[y<1/2] = 0 y[y>1/2] = 1 clf=LogisticRegression(solver="saga", max_iter = 1000) I want to fit logistic regression where y is dependent variable, and x is independent variable. But while I'm using : clf.fit(x,y) I see error 'y should be a 1d array, got an array of shape (80, 80) instead'. I tried to reshape data by using y=y.reshape(-1,1) But I end up with array of length 6400! (How come?) Could you please give me a hand with performing this regression ? | Change the order of your operations: First geneate x and y as 1-D arrays: x = np.linspace(0, 2*np.pi, 8) y = np.sin(x) + np.random.normal(0, 0.4, 8) Then (after y was generated) reshape x: x = x.reshape(-1, 1) Edit following a comment as of 2022-02-20 The source of the problem in the original code is that; x = np.linspace(0,2*np.pi,80) - generates a 1-D array. x = x.reshape(-1,1) - reshapes it into a 2-D array, with one column and as many rows as needed. y = np.sin(x) + np.random.normal(0,0.4,80) - operates on a columnar array and a 1-D array (treated here as a single row array). the effect is that y is a 2-D array (80 * 80). then the attempt to reshape y gives a single column array with 6400 rows. The proper solution is that both x and y should be initially 1-D (single row) arrays and my code does just this. Then both arrays can be reshaped. | 7 | 8 |
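Putting the answer together, a complete corrected sketch with the original 80 points (the binarization step is written slightly differently here; the key point is that x is reshaped only after y has been built from the 1-D array):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generate x and y as 1-D arrays first, so y stays 1-D
x = np.linspace(0, 2 * np.pi, 80)
y = np.sin(x) + np.random.normal(0, 0.4, 80)
y = (y > 1 / 2).astype(int)        # binarize the target

# Only now reshape the features into a single column
X = x.reshape(-1, 1)

clf = LogisticRegression(solver="saga", max_iter=1000)
clf.fit(X, y)                      # X is (80, 1), y is (80,)
```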
65,122,957 | 2020-12-3 | https://stackoverflow.com/questions/65122957/resolving-new-pip-backtracking-runtime-issue | The new pip dependency resolver that was released with version 20.3 takes an inappropriately long time to install a package. On our CI pipeline yesterday, a docker build that used to take ~10 minutes timed out after 1h of pip installation messages like this (almost for every library that is installed by any dependency there is a similar log output): INFO: pip is looking at multiple versions of setuptools to determine which version is compatible with other requirements. This could take a while. Downloading setuptools-50.0.0-py3-none-any.whl (783 kB) Downloading setuptools-49.6.0-py3-none-any.whl (803 kB) Downloading setuptools-49.5.0-py3-none-any.whl (803 kB) Downloading setuptools-49.4.0-py3-none-any.whl (803 kB) Downloading setuptools-49.3.2-py3-none-any.whl (790 kB) INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking Downloading setuptools-49.3.1-py3-none-any.whl (790 kB) Downloading setuptools-49.3.0-py3-none-any.whl (790 kB) Downloading setuptools-49.2.1-py3-none-any.whl (789 kB) Downloading setuptools-49.2.0-py3-none-any.whl (789 kB) Downloading setuptools-49.1.3-py3-none-any.whl (789 kB) Downloading setuptools-49.1.2-py3-none-any.whl (789 kB) Downloading setuptools-49.1.1-py3-none-any.whl (789 kB) Downloading setuptools-49.1.0-py3-none-any.whl (789 kB) Downloading setuptools-49.0.1-py3-none-any.whl (789 kB) Downloading setuptools-49.0.0-py3-none-any.whl (789 kB) Downloading setuptools-48.0.0-py3-none-any.whl (786 kB) Downloading setuptools-47.3.2-py3-none-any.whl (582 kB) Downloading setuptools-47.3.1-py3-none-any.whl (582 kB) Downloading setuptools-47.3.0-py3-none-any.whl (583 kB) Downloading setuptools-47.2.0-py3-none-any.whl (583 kB) Downloading setuptools-47.1.1-py3-none-any.whl (583 kB) Downloading setuptools-47.1.0-py3-none-any.whl (583 kB) Downloading setuptools-47.0.0-py3-none-any.whl (583 kB) Downloading setuptools-46.4.0-py3-none-any.whl (583 kB) Downloading setuptools-46.3.1-py3-none-any.whl (582 kB) Downloading setuptools-46.3.0-py3-none-any.whl (582 kB) Downloading setuptools-46.2.0-py3-none-any.whl (582 kB) Downloading setuptools-46.1.3-py3-none-any.whl (582 kB) Downloading setuptools-46.1.2-py3-none-any.whl (582 kB) Downloading setuptools-46.1.1-py3-none-any.whl (582 kB) Downloading setuptools-46.1.0-py3-none-any.whl (582 kB) Downloading setuptools-46.0.0-py3-none-any.whl (582 kB) Downloading setuptools-45.3.0-py3-none-any.whl (585 kB) Downloading setuptools-45.2.0-py3-none-any.whl (584 kB) Downloading setuptools-45.1.0-py3-none-any.whl (583 kB) Downloading setuptools-45.0.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-44.1.1-py2.py3-none-any.whl (583 kB) Downloading setuptools-44.1.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-44.0.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-43.0.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-42.0.2-py2.py3-none-any.whl (583 kB) Downloading setuptools-42.0.1-py2.py3-none-any.whl (582 kB) Downloading setuptools-42.0.0-py2.py3-none-any.whl (582 kB) Downloading setuptools-41.6.0-py2.py3-none-any.whl (582 kB) Downloading setuptools-41.5.1-py2.py3-none-any.whl (581 kB) Downloading setuptools-41.5.0-py2.py3-none-any.whl (581 kB) Downloading 
setuptools-41.4.0-py2.py3-none-any.whl (580 kB) Downloading setuptools-41.3.0-py2.py3-none-any.whl (580 kB) Downloading setuptools-41.2.0-py2.py3-none-any.whl (576 kB) Downloading setuptools-41.1.0-py2.py3-none-any.whl (576 kB) Downloading setuptools-41.0.1-py2.py3-none-any.whl (575 kB) Downloading setuptools-41.0.0-py2.py3-none-any.whl (575 kB) Downloading setuptools-40.9.0-py2.py3-none-any.whl (575 kB) Downloading setuptools-40.8.0-py2.py3-none-any.whl (575 kB) Downloading setuptools-40.7.3-py2.py3-none-any.whl (574 kB) Downloading setuptools-40.7.2-py2.py3-none-any.whl (574 kB) Downloading setuptools-40.7.1-py2.py3-none-any.whl (574 kB) Downloading setuptools-40.7.0-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.3-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.2-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.1-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.0-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.5.0-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.3-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.2-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.1-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.0-py2.py3-none-any.whl (568 kB) Downloading setuptools-40.3.0-py2.py3-none-any.whl (568 kB) I am quite confused whether we are using the new pip resolver correctly, especially since - Substantial improvements in new resolver for performance, output and error messages, avoiding infinite loops, and support for constraints files. The behavior seen is described as backtracking in the release notes. I understand why it is there. It specifies that I can use a constraint file (looks like a requirements.txt) that fixes the version of the dependencies to reduce the runtime using pip install -c constraints.txt setup.py. What is the best way to produce this constraints file? Currently, the best way I can think of is running pip install setup.py locally in a new virtual environment, then using pip freeze > constraints.txt. However, this still takes a lot of time for the local install (it's been stuck for about 10 minutes now). The notes do mention that This means the βworkβ is done once during development process, and so will save users this work during deployment. With the old dependency resolver, I was able to install this package in less than a minute locally. What is the recommended process here? Edit: I just found out that some of the dependencies are pointing directly to out internal gitlab server. If I instead install directly from our internal package registry, it works in a couple of minutes again. | Latest update (2022-02) There seems to be major update in pip just few days old (version 22.0, release notes + relevant issue on github). I haven't tested it in more detail but it really seems to me that they optimized installation order calculation in complex case in such way that it resolves many issues we all encountered earlier. But I will need more time to check it. Anyway, the rest of this answer is still valid and smart requirements pinning suitable for particular project is a good practice imo. Since I encountered similar issue I agree this is quite annoying. Backtracking might be useful feature but you don't want to wait hours to complete with uncertain success. I found several option that might help: Use the old resolver (--use-deprecated=legacy-resolver) proposed in the answer by @Daniel Davee, but this is more like temporary solution than a proper one. 
Skip resolving dependencies with --no-deps option. I would not recommend this generally but in some cases you can have a working set of packages versions although there are some conflicts. Reduce the number of versions pip will try to backtrack and be more strict on package dependencies. This means instead of putting e.g. numpy in my requirements.txt, I could try numpy >= 1.18.0 or be even more strict with numpy == 1.18.0. The strictness might help a lot. Check the following sources: Fixing conflicts Github pip discussion Reducing backtracking I still do not have a proper answer that would always help but the best practice for requirements.txt seems to "pin" package versions. I found pip-tools that could help you manage this even with constrains.txt (but I am in an experimental phase so I can not tell you more). Update (2021-04) It seems author of the question was able to fix the issue (something with custom gitlab server) but I would like to extend this answer since it might be useful for others. After reading and trying I ended up with pinning all my package versions to a specific one. This really should be the correct way. Although everything can still work without it, there might be cases where if you don't pin your dependencies, your package manager will silently install a new version (when it's released) with possible bugs or incompatibility (this happens to me with dask last this year). There are several tools which might help you, I would recommend one of these approaches: Easiest one with pipreqs pipreqs is a library which generates pip requirements.txt file based on imports of any project you can start by pip install pipreqs and runnning just pipreqs in your project root (or eventually with --force flag if your requirements already exists) it will easily create requirements.txt with pinned versions based on imports in your project and versions taken from your environment then you can at any time create new environment based on this requirements.txt This is really simple tool (you even do not need to write your requirements.txt). It does not allow you to create something complex (might not be a good choice for bigger projects), last week I found one strange behavior (see this) but generally I'm happy with this tool as it usually works perfectly. Using pip-tools There are several other tools commonly used like pip-tools, Pipenv or Poetry. You can read more in Faster Docker builds with pipenv, poetry, or pip-tools or Python Application Dependency Management in 2018 (older but seems still valid to me). And it still seems to me that the best option (although it depends on your project/use case) is pip-tools. You can (this is one option, see more in docs): create requirements.in (the same format as requirements.txt, it's up to you whether you pin some package dependency or not) then you can use it by pip install pip-tools and running pip-compile requirements.in this will generate new requirements.txt file where all versions are pinned, it's clear, what is the origin (Optionally) you can run it with --generate-hashes option then you can (as with pipreqs) at any time create new environment based on this requirements.txt pip-tools offer you --upgrade option to upgrade the final reqs supports layered requirements (e.g. having dev and prod versions) there is integration with pre-commit offers pip-sync tool to update your environment based on requirements.txt There are few more stuff you can do with it and I really love the integration with pre-commit. 
This allows you to use the same requirements as before (just with .in suffix) and add pre-commit hook that automatically updates requirements.txt (so you will never experience having different local environment from the generated requirements.txt which might easily happen when you run something manually). | 185 | 130 |
65,152,998 | 2020-12-5 | https://stackoverflow.com/questions/65152998/pause-a-ffmpeg-encoding-in-a-python-popen-subprocess-on-windows | I am trying to pause an encode of FFmpeg while it is in a non-shell subprocess (This is important to how it plays into a larger program). This can be done by presssing the "Pause / Break" key on the keyboard by itself, and I am trying to send that to Popen. The command itself must be cross platform compatible, so I cannot wrap it in any way, but I can send signals or run functions that are platform specific as needed. I looked at how to send a "Ctrl+Break" to a subprocess via pid or handler and it suggested to send a signal, but that raised a "ValueError: Unsupported signal: 21" from subprocess import Popen, PIPE import signal if __name__ == '__main__': command = "ffmpeg.exe -y -i example_video.mkv -map 0:v -c:v libx265 -preset slow -crf 18 output.mkv" proc = Popen(command, stdin=PIPE, shell=False) try: proc.send_signal(signal.SIGBREAK) finally: proc.wait() Then attempted to use GenerateConsoleCtrlEvent to create a Ctrl+Break event as described here https://learn.microsoft.com/en-us/windows/console/generateconsolectrlevent from subprocess import Popen, PIPE import ctypes if __name__ == '__main__': command = "ffmpeg.exe -y -i example_video.mkv -map 0:v -c:v libx265 -preset slow -crf 18 output.mkv" proc = Popen(command, stdin=PIPE, shell=False) try: ctypes.windll.kernel32.GenerateConsoleCtrlEvent(1, proc.pid) finally: proc.wait() I have tried psutil pause feature, but it keeps the CPU load really high even when "paused". Even though it wouldn't work with the program overall, I have at least tried setting creationflags=CREATE_NEW_PROCESS_GROUP which makes the SIGBREAK not error, but also not pause it. For the Ctrl-Break event will entirely stop the encode instead of pausing it. | Linux/Unix solution: import subprocess, os, signal # ... # Start the task: proc = subprocess.Popen(..., start_new_session=True) # ... def send_signal_to_task(pid, signal): #gpid = os.getpgid(pid) # WARNING This does not work gpid = pid # But this does! print(f"Sending {signal} to process group {gpid}...") os.killpg(gpid, signal) # ... # Pause and resume: send_signal_to_task(proc.pid, signal.SIGSTOP) send_signal_to_task(proc.pid, signal.SIGCONT) Notice start_new_session=True in Popen call and using of os.killpg instead of os.kill in send_signal_to_task function. The reason for this is the same as as reason for why you have large CPU usage even in paused state as you reported. ffmpeg spawns a number of child processes. start_new_session=True will create a new process group and os.killpg sends signal to all processes in group. Cross-platform solution is supposed to be provided by psutil module but you probably should research whether it supports process groups as the variant above: >>> import psutil >>> psutil.pids() [1, 2, 3, 4, 5, 6, 7, 46, 48, 50, 51, 178, 182, 222, 223, 224, 268, 1215, 1216, 1220, 1221, 1243, 1244, 1301, 1601, 2237, 2355, 2637, 2774, 3932, 4176, 4177, 4185, 4187, 4189, 4225, 4243, 4245, 4263, 4282, 4306, 4311, 4312, 4313, 4314, 4337, 4339, 4357, 4358, 4363, 4383, 4395, 4408, 4433, 4443, 4445, 4446, 5167, 5234, 5235, 5252, 5318, 5424, 5644, 6987, 7054, 7055, 7071] >>> p = psutil.Process(7055) >>> p.suspend() >>> p.resume() Reference: https://pypi.org/project/psutil/ Some pointers on emulating process groups in Windows: Popen waiting for child process even when the immediate child has terminated | 6 | 2 |
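Regarding the psutil attempt mentioned in the question (high CPU while "paused"), one hedged guess is that only the parent process was suspended while other processes in the tree kept running. Below is a sketch that suspends/resumes the whole tree with psutil; `set_tree_state` is a helper name made up here, not part of any library.

```python
# Cross-platform variant of the process-group idea, using psutil (third-party):
# suspend/resume the ffmpeg process *and* any children it may have spawned.
import subprocess
import psutil

proc = subprocess.Popen(
    ["ffmpeg", "-y", "-i", "example_video.mkv", "-map", "0:v",
     "-c:v", "libx265", "-preset", "slow", "-crf", "18", "output.mkv"]
)

def set_tree_state(pid, pause=True):
    parent = psutil.Process(pid)
    procs = [parent] + parent.children(recursive=True)
    for p in procs:
        if pause:
            p.suspend()
        else:
            p.resume()

set_tree_state(proc.pid, pause=True)   # pause the whole tree
set_tree_state(proc.pid, pause=False)  # resume it
```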
65,182,608 | 2020-12-7 | https://stackoverflow.com/questions/65182608/how-to-define-a-type-for-a-function-arguments-and-return-type-with-a-predefine | I want to define a function signature (arguments and return type) based on a predefined type. Let's say I have this type: safeSyntaxReadType = Callable[[tk.Tk, Notebook, str], Optional[dict]] which means safeSyntaxReadType is a function that receives 3 arguments (from types as listed above), and it can return a dict or may not return anything. Now let's say I use a function safeReadJsonFile whose signature is: def safeReadJsonFile(root = None, notebook = None, path = ''): I want to assign the type safeSyntaxReadType to the function safeReadJsonFile in the signature, maybe something like: def safeReadJsonFile:safeSyntaxReadType(root = None, notebook = None, path = ''): But this syntax doesn't work. What is the right syntax for such type assigning? I can do it this way: def safeReadJsonFile(root:tk.Tk = None, notebook:Notebook = None, path:str = '') -> Optional[dict]: but I want to avoid that. After reading a lot (all the typing docs, and some of PEP544), I found that there is no such syntax for easily assigning a type to a whole function at the definition (the closest is @typing.overload and it's not exactly what we need here). But as a possible workaround I implemented a decorator function which can help with easily assigning a type: def func_type(function_type): def decorator(function): def typed_function(*args, **kwargs): return function(*args, **kwargs) typed_function: function_type # type assign return typed_function return decorator The usage is: greet_person_type = Callable[[str, int], str] def greet_person(name, age): return "Hello, " + name + " !\nYou're " + str(age) + " years old!" greet_person = func_type(greet_person_type)(greet_person) greet_person(10, 10) # WHALA! typeerror as expected in `name`: Expected type 'str', got 'int' instead Now, I need help: for some reason, the typechecker (pycharm) doesn't hint the typing if use decorated syntax which supposed to be equilavent: @func_type(greet_person_type) def greet_person(name, age): return "Hello, " + name + " !\nYou're " + str(age) + " years old!" greet_person(10, 10) # no type error. why? I think the decorated style does not work because decoration does not change the original function greet_person so the typing from the returned decorated function doesn't affect when inting the original greet_person function. How can I make the decorated solution work? | Simply assign the function to a new name representing the specific callable type. Greetable = Callable[[str, int], str] def any_greet_person(name, age): ... typed_greet_person: Greetable = any_greet_person reveal_type(any_greet_person) reveal_type(typed_greet_person) Keep in mind that the object defined as any_greet_person is of a specific type, which you cannot simply erase after creating it. In order to create the callable with a specific type, one can copy it from a template object (the abstract types Callable and Protocol do not work with Type[C]). 
This can be done with a decorator: from typing import TypeVar, Callable C = TypeVar('C', bound=Callable) # parameterize over all callables def copy_signature(template: C) -> Callable[[C], C]: """Decorator to copy the static signature between functions""" def apply_signature(target: C) -> C: # copy runtime inspectable metadata as well target.__annotations__ = template.__annotations__ return target return apply_signature This also encodes that only functions compatible with the copied signature are valid targets. # signature template def greetable(name: str, age: int) -> str: ... @copy_signature(greetable) def any_greet_person(name, age): ... @copy_signature(greetable) # error: Argument 1 has incompatible type ... def not_greet_person(age, bar): ... print(any_greet_person.__annotations__) # {'name': <class 'str'>, 'age': <class 'int'>, 'return': <class 'str'>} if TYPE_CHECKING: reveal_type(any_greet_person) # note: Revealed type is 'def (name: builtins.str, age: builtins.int) -> builtins.str' | 10 | 8 |
65,107,269 | 2020-12-2 | https://stackoverflow.com/questions/65107269/python-dictionary-with-generic-keys-and-callablet-values | I have some named tuples: JOIN = NamedTuple("JOIN", []) EXIT = NamedTuple("EXIT", []) I also have functions to handle each type of tuple with: def handleJoin(t: JOIN) -> bool: pass def handleExit(t: EXIT) -> bool: pass What I want to do is create a dictionary handleTuple so I can call it like so: t: Union[JOIN, EXIT] = #an instance of JOIN or EXIT result: bool result = handleTuple[type(t)](t) What I cannot figure out is how to define said dictionary. I tried defining a generic T: T = TypeVar("T", JOIN, EXIT) handleTuple: Dict[T, Callable[[T], bool] However I get an error saying "Type variable T is unbound" which I do not understand. The closest I have got so far is: handleTuple: Dict[Type[Union[JOIN, EXIT]], bool] handleTuple = { JOIN: True EXIT: False } This works fine for calling it the way I want, however I cannot figure out the Callable part in the definition so I can include my functions. How can I do this | TypeVars are only meaningful in aliases, classes and functions. One can define a Protocol for the lookup: T = TypeVar("T", JOIN, EXIT, contravariant=True) class Handler(Protocol): def __getitem__(self, item: Type[T]) -> Callable[[T], bool]: ... handleTuple = cast(Handler, {JOIN: handleJoin, EXIT: handleExit}) The special method self.__getitem__(item) corresponds to self[item]. Thus, the protocol defines that accessing handleTuple[item] with item: Type[T] evaluates to some Callable[[T], bool]. The cast is currently needed for type checkers such as MyPy to understand that the dict is a valid implementation of this protocol. Since the code effectively implements a single dispatch, defining a functools.singledispatch function provides the behaviour out of the box: @singledispatch def handle(t) -> bool: raise NotImplementedError @handle.register def handleJoin(t: JOIN) -> bool: pass @handle.register def handleExit(t: EXIT) -> bool: pass t: Union[JOIN, EXIT] result: bool result = handle(t) | 6 | 4 |
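A quick usage sketch of the Protocol-based lookup from the answer above; it assumes JOIN, EXIT, handleJoin, handleExit, Handler and the handleTuple cast are defined exactly as in the question and answer.

```python
# Assumes JOIN, EXIT, handleJoin, handleExit, Handler and handleTuple
# are defined as in the question/answer above.
from typing import Union

t: Union[JOIN, EXIT] = JOIN()
result: bool = handleTuple[type(t)](t)   # looks up and calls handleJoin at runtime
```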
65,113,967 | 2020-12-2 | https://stackoverflow.com/questions/65113967/why-is-nothing-drawn-in-pygame-at-all | i have started a new project in python using pygame and for the background i want the bottom half filled with gray and the top black. i have used rect drawing in projects before but for some reason it seems to be broken? i don't know what i am doing wrong. the weirdest thing is that the result is different every time i run the program. sometimes there is only a black screen and sometimes a gray rectangle covers part of the screen, but never half of the screen. import pygame, sys from pygame.locals import * pygame.init() DISPLAY=pygame.display.set_mode((800,800)) pygame.display.set_caption("thing") pygame.draw.rect(DISPLAY, (200,200,200), pygame.Rect(0,400,800,400)) while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() | You need to update the display. You are actually drawing on a Surface object. If you draw on the Surface associated to the PyGame display, this is not immediately visible in the display. The changes become visibel, when the display is updated with either pygame.display.update() or pygame.display.flip(). See pygame.display.flip(): This will update the contents of the entire display. While pygame.display.flip() will update the contents of the entire display, pygame.display.update() allows updating only a portion of the screen to updated, instead of the entire area. pygame.display.update() is an optimized version of pygame.display.flip() for software displays, but doesn't work for hardware accelerated displays. The typical PyGame application loop has to: handle the events by calling either pygame.event.pump() or pygame.event.get(). update the game states and positions of objects dependent on the input events and time (respectively frames) clear the entire display or draw the background draw the entire scene (draw all the objects) update the display by calling either pygame.display.update() or pygame.display.flip() limit frames per second to limit CPU usage with pygame.time.Clock.tick import pygame from pygame.locals import * pygame.init() DISPLAY = pygame.display.set_mode((800,800)) pygame.display.set_caption("thing") clock = pygame.time.Clock() run = True while run: # handle events for event in pygame.event.get(): if event.type == QUIT: run = False # clear display DISPLAY.fill(0) # draw scene pygame.draw.rect(DISPLAY, (200,200,200), pygame.Rect(0,400,800,400)) # update display pygame.display.flip() # limit frames per second clock.tick(60) pygame.quit() exit() repl.it/@Rabbid76/PyGame-MinimalApplicationLoop See also Event and application loop | 6 | 4 |
65,110,798 | 2020-12-2 | https://stackoverflow.com/questions/65110798/feature-importance-in-a-binary-classification-and-extracting-shap-values-for-one | Suppose we have a binary classification problem, we have two classes of 1s and 0s as our target. I aim to use a tree classifier to predict 1s and 0s given the features. Further, I can use SHAP values to rank the feature importance that are predictive of 1s and 0s. Until now everything is good! Now suppose that I want to know importance of features that are predictive of 1s only, what is the recommended approach there? I can split my data into two parts (nominally: df_tot = df_zeros + df_ones) and use df_ones in my classifier and then extract the SHAP values for that, however doing so the target would only have 1s and so the model does not really learn to classify anything. So I am wondering how does one approach such problem? | Let's prepare some binary classification data: from seaborn import load_dataset from sklearn.model_selection import train_test_split from lightgbm import LGBMClassifier import shap titanic = load_dataset("titanic") X = titanic.drop(["survived","alive","adult_male","who",'deck'],1) y = titanic["survived"] features = X.columns cat_features = [] for cat in X.select_dtypes(exclude="number"): cat_features.append(cat) # think about meaningful ordering instead X[cat] = X[cat].astype("category").cat.codes.astype("category") X_train, X_val, y_train, y_val = train_test_split(X,y,train_size=.8, random_state=42) clf = LGBMClassifier(max_depth=3, n_estimators=1000, objective="binary") clf.fit(X_train,y_train, eval_set=(X_val,y_val), early_stopping_rounds=100, verbose=100) To answer your question, to extract shap values on a per class basis one may subset them by class label: explainer = shap.TreeExplainer(clf) shap_values = explainer.shap_values(X_train) sv = np.array(shap_values) y = clf.predict(X_train).astype("bool") # shap values for survival sv_survive = sv[:,y,:] # shap values for dying sv_die = sv[:,~y,:] However a more interesting question what you can do with these values. In general, one can gain valuable insights by looking at summary_plot (for the whole dataset): shap.summary_plot(shap_values[1], X_train.astype("float")) Interpretation (globally): sex, pclass and age were most influential features in determining outcome being a male, less affluent, and older decreased chances of survival Top 3 global most influential features can be extracted as follows: idx = np.abs(sv[1,:,:]).mean(0).argsort() features[idx[:-4:-1]] # Index(['sex', 'pclass', 'age'], dtype='object') If you want to analyze on a per class basis, you may do this separately for survivors (sv[1,y,:]): # top3 features for probability of survival idx = sv[1,y,:].mean(0).argsort() features[idx[:-4:-1]] # Index(['sex', 'pclass', 'age'], dtype='object') The same for those who did not survive (sv[0,~y,:]): # top3 features for probability of dieing idx = sv[0,~y,:].mean(0).argsort() features[idx[:3]] # Index(['alone', 'embark_town', 'parch'], dtype='object') Note, we are using mean shap values here and saying we are interested in biggest values for survivors and lowest values for those who are not (lowest values, close to 0, may also mean having no constant, one-directional influence at all). Using mean on abs may also make sense, but the interpretation will be most influential, regardless of direction. 
To make an educated choice either one prefers means or means of abs' one has to be aware of the following facts: shap values could be both positive and negative shap values are symmetrical, and increasing/decreasing probability of one class decreases/increases probability of the other by the same amount (due to pβ = 1 - pβ) Proof: #shap values sv = np.array(shap_values) #base values ev = np.array(explainer.expected_value) sv_died, sv_survived = sv[:,0,:] # + constant print(sv_died, sv_survived, sep="\n") # [-0.73585563 1.24520748 0.70440429 -0.15443337 -0.01855845 -0.08430467 0.02916375 -0.04846619 0. -0.01035171] # [ 0.73585563 -1.24520748 -0.70440429 0.15443337 0.01855845 0.08430467 -0.02916375 0.04846619 0. 0.01035171] Most probably you'll find out sex and age played the most influential role both for survivors and not; hence, rather than analyzing most influential features per class, it would be more interesting to see what made two passengers of the same sex and age one survive and the other not (hint: find such cases in the dataset, feed one as background, and analyze shap values for the other, or, try analyzing one class vs the other as background). You may do further analysis with dependence_plot (on a global or per class basis): shap.dependence_plot("sex", shap_values[1], X_train) Interpretation (globally): males had lower probability of survival (lower shap values) pclass (affluence) was the next most influential factor: higher pclass (less affluence) decreased chance of survival for female and vice versa for males | 9 | 12 |
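Since the answer weighs signed means against means of absolute values, here is a small sketch contrasting the two rankings; it reuses the `sv` array and `features` index built in the answer above.

```python
import numpy as np

# Reuses `sv` (stacked shap values) and `features` (X.columns) from the answer above.
# Signed mean: direction matters (positive pushes towards the "survived" class).
signed_rank = features[np.argsort(sv[1].mean(axis=0))[::-1]]

# Mean of absolute values: magnitude of influence, regardless of direction.
abs_rank = features[np.argsort(np.abs(sv[1]).mean(axis=0))[::-1]]

print("by signed mean:", list(signed_rank[:3]))
print("by mean |shap|:", list(abs_rank[:3]))
```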
65,124,833 | 2020-12-3 | https://stackoverflow.com/questions/65124833/how-to-combine-scatter-and-line-plots-using-plotly-express | Plotly Express has an intuitive way to provide pre-formatted plotly plots with minimal lines of code; sort of how Seaborn does it for matplotlib. It is possible to add traces of plots on Plotly to get a scatter plot on an existing line plot. However, I couldn't find such a functionality in Plotly Express. Is it possible to combine a scatter and line graph in Plotly Express? | You can use: fig3 = go.Figure(data=fig1.data + fig2.data) Where fig1 and fig2 are built using px.line() and px.scatter(), respectively. And fig3 is, as you can see, built using plotly.graph_objects. Some details: One approach that I use alot is building two figures fig1 and fig2 using plotly.express and then combine them using their data attributes together with a go.Figure / plotly.graph_objects object like this: import plotly.express as px import plotly.graph_objects as go df = px.data.iris() fig1 = px.line(df, x="sepal_width", y="sepal_length") fig1.update_traces(line=dict(color = 'rgba(50,50,50,0.2)')) fig2 = px.scatter(df, x="sepal_width", y="sepal_length", color="species") fig3 = go.Figure(data=fig1.data + fig2.data) fig3.show() Plot: | 35 | 76 |
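As an alternative to building a third go.Figure (not what the answer above does, just a variant that should give the same combined plot): append one figure's traces onto the other with add_traces.

```python
import plotly.express as px

df = px.data.iris()
fig = px.line(df, x="sepal_width", y="sepal_length")
fig.add_traces(
    px.scatter(df, x="sepal_width", y="sepal_length", color="species").data
)
fig.show()
```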
65,184,355 | 2020-12-7 | https://stackoverflow.com/questions/65184355/error-403-access-denied-from-google-authentication-web-api-despite-google-acc | I'm using the default code provided from Google, and I don't quite understand why it's not working. The code outputs the prompt "Please visit this URL to authorize this application: [Google login URL]." When attempting to log in with the account designated as owner of the script under the google developers console I get an "Error 403: access_denied" error with the message "The developer hasnβt given you access to this app. Itβs currently being tested and it hasnβt been verified by Google. If you think you should have access, contact the developer [the email I just tried to log in with]." from __future__ import print_function import pickle import os.path from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request # If modifying these scopes, delete the file token.pickle. SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly'] # The ID and range of a sample spreadsheet. SAMPLE_SPREADSHEET_ID = '1vrZpCGW58qCCEfVXoJYlwlulraIlfWI2SmFXa1iPtuU' SAMPLE_RANGE_NAME = 'Class Data!A2:E' def main(): """Shows basic usage of the Sheets API. Prints values from a sample spreadsheet. """ creds = None # The file token.pickle stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'Google_API_Key.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.pickle', 'wb') as token: pickle.dump(creds, token) service = build('sheets', 'v4', credentials=creds) # Call the Sheets API sheet = service.spreadsheets() result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID, range=SAMPLE_RANGE_NAME).execute() values = result.get('values', []) if not values: print('No data found.') else: print('Name, Major:') for row in values: # Print columns A and E, which correspond to indices 0 and 4. print('%s, %s' % (row[0], row[4])) def PrintToSheets(): main() | The error message you are getting is related to the fact that your application has not been verified yet. As mentioned in that link all applications that access google APIs using sensitive scopes need to go though googles verification process. Normal you are given a grace period of 100 users accessing your application before your application will be locked down and you wont be able to authorize anymore users until you verify your application it sounds like you may have hit that point. Your only options would be to go though the verification process or to create a whole new project on Google developer console and use that client id instead as the one you are using is currently locked for additional users. Update Change A change has been implemented on Google Developer console. You must now authorize users / testers to access your application before it has gone through the verification process. 
If you go to the Google developer console for your project, under the OAuth consent screen you will find a section for adding test users. You can add test users there; however, you cannot remove them, and you can only add 100 of them, so use it wisely. | 60 | 26 |
65,130,080 | 2020-12-3 | https://stackoverflow.com/questions/65130080/attributeerror-running-django-site-on-mac-11-0-1 | I'm getting an error running a django site locally that was working fine before I updated my Mac OS to 11.0.1. I'm thinking this update is the cause of the problem since nothing else was really changed between when it was working and now. 10:15:05 worker.1 | Traceback (most recent call last): 10:15:05 worker.1 | File "/usr/local/bin/celery", line 5, in <module> 10:15:05 worker.1 | from celery.__main__ import main 10:15:05 worker.1 | File "/usr/local/lib/python2.7/site-packages/celery/__init__.py", line 133, in <module> 10:15:05 worker.1 | from celery import five # noqa 10:15:05 worker.1 | File "/usr/local/lib/python2.7/site-packages/celery/five.py", line 20, in <module> 10:15:05 worker.1 | from kombu.five import monotonic 10:15:05 worker.1 | File "/usr/local/lib/python2.7/site-packages/kombu/five.py", line 56, in <module> 10:15:05 worker.1 | absolute_to_nanoseconds = CoreServices.AbsoluteToNanoseconds 10:15:05 worker.1 | File "/usr/local/Cellar/python@2/2.7.17_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypes/__init__.py", line 379, in __getattr__ 10:15:05 worker.1 | func = self.__getitem__(name) 10:15:05 worker.1 | File "/usr/local/Cellar/python@2/2.7.17_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypes/__init__.py", line 384, in __getitem__ 10:15:05 worker.1 | func = self._FuncPtr((name_or_ordinal, self)) 10:15:05 worker.1 | AttributeError: dlsym(RTLD_DEFAULT, AbsoluteToNanoseconds): symbol not found Here is my brew config HOMEBREW_VERSION: 2.6.0 ORIGIN: https://github.com/Homebrew/brew HEAD: 1d5e354cc2ff048bd7161d95b3fa7f91dc9dd081 Last commit: 2 days ago Core tap ORIGIN: https://github.com/Homebrew/homebrew-core Core tap HEAD: fdb83fcfb482e5ed1f1c3c442a85b99223fcabeb Core tap last commit: 27 hours ago Core tap branch: master HOMEBREW_PREFIX: /usr/local HOMEBREW_CASK_OPTS: [] HOMEBREW_DISPLAY: /private/tmp/com.apple.launchd.rZ1F30XomO/org.macosforge.xquartz:0 HOMEBREW_MAKE_JOBS: 8 Homebrew Ruby: 2.6.3 => /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/bin/ruby CPU: octa-core 64-bit icelake Clang: 12.0 build 1200 Git: 2.24.3 => /Applications/Xcode-beta.app/Contents/Developer/usr/bin/git Curl: 7.64.1 => /usr/bin/curl Java: 14.0.2, 1.8.0_265 macOS: 11.0.1-x86_64 CLT: 12.3.0.0.1.1605054730 Xcode: 12.3 => /Applications/Xcode-beta.app/Contents/Developer XQuartz: 2.7.11 => /opt/X11 Typically I'll run the site with a virtualenv running python 2.7.15, I was getting the same error with that. I reinstalled python with pyenv and remade the virtualenv but the same error appeared. I'm running Django 1.10.8 with Kombu 3.0.37 | Ok this is a dirty workaround for Big Sur compatibility: https://developer.apple.com/documentation/macos-release-notes/macos-big-sur-11_0_1-release-notes New in macOS Big Sur 11.0.1, the system ships with a built-in dynamic linker cache of all system-provided libraries. As part of this change, copies of dynamic libraries are no longer present on the filesystem. Code that attempts to check for dynamic library presence by looking for a file at a path or enumerating a directory will fail. Instead, check for library presence by attempting to dlopen() the path, which will correctly check for the library in the cache. 
(62986286) so in order to find these libraries I just put the static paths in the find_library function at <path to your Python 2 installation>/lib/python2.7/ctypes/util.py just below os.name == "posix" and sys.platform == "darwin": if name == 'CoreServices': return '/System/Library/Frameworks/CoreServices.framework/CoreServices' elif name == 'libSystem.dylib': return '/usr/lib/libSystem.dylib' at the end it would look like this: if os.name == "posix" and sys.platform == "darwin": from ctypes.macholib.dyld import dyld_find as _dyld_find def find_library(name): if name == 'CoreServices': return '/System/Library/Frameworks/CoreServices.framework/CoreServices' elif name == 'libSystem.dylib': return '/usr/lib/libSystem.dylib' possible = ['@executable_path/../lib/lib%s.dylib' % name, 'lib%s.dylib' % name, '%s.dylib' % name, '%s.framework/%s' % (name, name)] for name in possible: try: return _dyld_find(name) except ValueError: continue return None | 7 | 26 |
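As an alternative to editing the standard library file, the same two hard-coded Big Sur paths could, in principle, be injected by monkeypatching ctypes.util.find_library at interpreter start, before kombu/celery are imported (e.g. at the very top of manage.py or wsgi.py). Whether this runs early enough depends on your entry point, so treat it purely as a sketch.

```python
# Patch find_library before kombu/celery are imported.
import ctypes.util

_orig_find_library = ctypes.util.find_library

def _patched_find_library(name):
    # Same hard-coded Big Sur paths as the stdlib edit above.
    paths = {
        "CoreServices": "/System/Library/Frameworks/CoreServices.framework/CoreServices",
        "libSystem.dylib": "/usr/lib/libSystem.dylib",
    }
    return paths.get(name) or _orig_find_library(name)

ctypes.util.find_library = _patched_find_library
```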
65,205,506 | 2020-12-8 | https://stackoverflow.com/questions/65205506/lstm-autoencoder-problems | TLDR: Autoencoder underfits timeseries reconstruction and just predicts average value. Question Set-up: Here is a summary of my attempt at a sequence-to-sequence autoencoder. This image was taken from this paper: https://arxiv.org/pdf/1607.00148.pdf Encoder: Standard LSTM layer. Input sequence is encoded in the final hidden state. Decoder: LSTM Cell (I think!). Reconstruct the sequence one element at a time, starting with the last element x[N]. Decoder algorithm is as follows for a sequence of length N: Get Decoder initial hidden state hs[N]: Just use encoder final hidden state. Reconstruct last element in the sequence: x[N]= w.dot(hs[N]) + b. Same pattern for other elements: x[i]= w.dot(hs[i]) + b use x[i] and hs[i] as inputs to LSTMCell to get x[i-1] and hs[i-1] Minimum Working Example: Here is my implementation, starting with the encoder: class SeqEncoderLSTM(nn.Module): def __init__(self, n_features, latent_size): super(SeqEncoderLSTM, self).__init__() self.lstm = nn.LSTM( n_features, latent_size, batch_first=True) def forward(self, x): _, hs = self.lstm(x) return hs Decoder class: class SeqDecoderLSTM(nn.Module): def __init__(self, emb_size, n_features): super(SeqDecoderLSTM, self).__init__() self.cell = nn.LSTMCell(n_features, emb_size) self.dense = nn.Linear(emb_size, n_features) def forward(self, hs_0, seq_len): x = torch.tensor([]) # Final hidden and cell state from encoder hs_i, cs_i = hs_0 # reconstruct first element with encoder output x_i = self.dense(hs_i) x = torch.cat([x, x_i]) # reconstruct remaining elements for i in range(1, seq_len): hs_i, cs_i = self.cell(x_i, (hs_i, cs_i)) x_i = self.dense(hs_i) x = torch.cat([x, x_i]) return x Bringing the two together: class LSTMEncoderDecoder(nn.Module): def __init__(self, n_features, emb_size): super(LSTMEncoderDecoder, self).__init__() self.n_features = n_features self.hidden_size = emb_size self.encoder = SeqEncoderLSTM(n_features, emb_size) self.decoder = SeqDecoderLSTM(emb_size, n_features) def forward(self, x): seq_len = x.shape[1] hs = self.encoder(x) hs = tuple([h.squeeze(0) for h in hs]) out = self.decoder(hs, seq_len) return out.unsqueeze(0) And here's my training function: def train_encoder(model, epochs, trainload, testload=None, criterion=nn.MSELoss(), optimizer=optim.Adam, lr=1e-6, reverse=False): device = 'cuda' if torch.cuda.is_available() else 'cpu' print(f'Training model on {device}') model = model.to(device) opt = optimizer(model.parameters(), lr) train_loss = [] valid_loss = [] for e in tqdm(range(epochs)): running_tl = 0 running_vl = 0 for x in trainload: x = x.to(device).float() opt.zero_grad() x_hat = model(x) if reverse: x = torch.flip(x, [1]) loss = criterion(x_hat, x) loss.backward() opt.step() running_tl += loss.item() if testload is not None: model.eval() with torch.no_grad(): for x in testload: x = x.to(device).float() loss = criterion(model(x), x) running_vl += loss.item() valid_loss.append(running_vl / len(testload)) model.train() train_loss.append(running_tl / len(trainload)) return train_loss, valid_loss Data: Large dataset of events scraped from the news (ICEWS). Various categories exist that describe each event. I initially one-hot encoded these variables, expanding the data to 274 dimensions. However, in order to debug the model, I've cut it down to a single sequence that is 14 timesteps long and only contains 5 variables. 
Here is the sequence I'm trying to overfit: tensor([[0.5122, 0.0360, 0.7027, 0.0721, 0.1892], [0.5177, 0.0833, 0.6574, 0.1204, 0.1389], [0.4643, 0.0364, 0.6242, 0.1576, 0.1818], [0.4375, 0.0133, 0.5733, 0.1867, 0.2267], [0.4838, 0.0625, 0.6042, 0.1771, 0.1562], [0.4804, 0.0175, 0.6798, 0.1053, 0.1974], [0.5030, 0.0445, 0.6712, 0.1438, 0.1404], [0.4987, 0.0490, 0.6699, 0.1536, 0.1275], [0.4898, 0.0388, 0.6704, 0.1330, 0.1579], [0.4711, 0.0390, 0.5877, 0.1532, 0.2201], [0.4627, 0.0484, 0.5269, 0.1882, 0.2366], [0.5043, 0.0807, 0.6646, 0.1429, 0.1118], [0.4852, 0.0606, 0.6364, 0.1515, 0.1515], [0.5279, 0.0629, 0.6886, 0.1514, 0.0971]], dtype=torch.float64) And here is the custom Dataset class: class TimeseriesDataSet(Dataset): def __init__(self, data, window, n_features, overlap=0): super().__init__() if isinstance(data, (np.ndarray)): data = torch.tensor(data) elif isinstance(data, (pd.Series, pd.DataFrame)): data = torch.tensor(data.copy().to_numpy()) else: raise TypeError(f"Data should be ndarray, series or dataframe. Found {type(data)}.") self.n_features = n_features self.seqs = torch.split(data, window) def __len__(self): return len(self.seqs) def __getitem__(self, idx): try: return self.seqs[idx].view(-1, self.n_features) except TypeError: raise TypeError("Dataset only accepts integer index/slices, not lists/arrays.") Problem: The model only learns the average, no matter how complex I make the model or now long I train it. Predicted/Reconstruction: Actual: My research: This problem is identical to the one discussed in this question: LSTM autoencoder always returns the average of the input sequence The problem in that case ended up being that the objective function was averaging the target timeseries before calculating loss. This was due to some broadcasting errors because the author didn't have the right sized inputs to the objective function. In my case, I do not see this being the issue. I have checked and double checked that all of my dimensions/sizes line up. I am at a loss. Other Things I've Tried I've tried this with varied sequence lengths from 7 timesteps to 100 time steps. I've tried with varied number of variables in the time series. I've tried with univariate all the way to all 274 variables that the data contains. I've tried with various reduction parameters on the nn.MSELoss module. The paper calls for sum, but I've tried both sum and mean. No difference. The paper calls for reconstructing the sequence in reverse order (see graphic above). I have tried this method using the flipud on the original input (after training but before calculating loss). This makes no difference. I tried making the model more complex by adding an extra LSTM layer in the encoder. I've tried playing with the latent space. I've tried from 50% of the input number of features to 150%. I've tried overfitting a single sequence (provided in the Data section above). Question: What is causing my model to predict the average and how do I fix it? | Okay, after some debugging I think I know the reasons. 
TLDR You try to predict next timestep value instead of difference between current timestep and the previous one Your hidden_features number is too small making the model unable to fit even a single sample Analysis Code used Let's start with the code (model is the same): import seaborn as sns import matplotlib.pyplot as plt def get_data(subtract: bool = False): # (1, 14, 5) input_tensor = torch.tensor( [ [0.5122, 0.0360, 0.7027, 0.0721, 0.1892], [0.5177, 0.0833, 0.6574, 0.1204, 0.1389], [0.4643, 0.0364, 0.6242, 0.1576, 0.1818], [0.4375, 0.0133, 0.5733, 0.1867, 0.2267], [0.4838, 0.0625, 0.6042, 0.1771, 0.1562], [0.4804, 0.0175, 0.6798, 0.1053, 0.1974], [0.5030, 0.0445, 0.6712, 0.1438, 0.1404], [0.4987, 0.0490, 0.6699, 0.1536, 0.1275], [0.4898, 0.0388, 0.6704, 0.1330, 0.1579], [0.4711, 0.0390, 0.5877, 0.1532, 0.2201], [0.4627, 0.0484, 0.5269, 0.1882, 0.2366], [0.5043, 0.0807, 0.6646, 0.1429, 0.1118], [0.4852, 0.0606, 0.6364, 0.1515, 0.1515], [0.5279, 0.0629, 0.6886, 0.1514, 0.0971], ] ).unsqueeze(0) if subtract: initial_values = input_tensor[:, 0, :] input_tensor -= torch.roll(input_tensor, 1, 1) input_tensor[:, 0, :] = initial_values return input_tensor if __name__ == "__main__": torch.manual_seed(0) HIDDEN_SIZE = 10 SUBTRACT = False input_tensor = get_data(SUBTRACT) model = LSTMEncoderDecoder(input_tensor.shape[-1], HIDDEN_SIZE) optimizer = torch.optim.Adam(model.parameters()) criterion = torch.nn.MSELoss() for i in range(1000): outputs = model(input_tensor) loss = criterion(outputs, input_tensor) loss.backward() optimizer.step() optimizer.zero_grad() print(f"{i}: {loss}") if loss < 1e-4: break # Plotting sns.lineplot(data=outputs.detach().numpy().squeeze()) sns.lineplot(data=input_tensor.detach().numpy().squeeze()) plt.show() What it does: get_data either works on the data your provided if subtract=False or (if subtract=True) it subtracts value of the previous timestep from the current timestep Rest of the code optimizes the model until 1e-4 loss reached (so we can compare how model's capacity and it's increase helps and what happens when we use the difference of timesteps instead of timesteps) We will only vary HIDDEN_SIZE and SUBTRACT parameters! NO SUBTRACT, SMALL MODEL HIDDEN_SIZE=5 SUBTRACT=False In this case we get a straight line. Model is unable to fit and grasp the phenomena presented in the data (hence flat lines you mentioned). 1000 iterations limit reached SUBTRACT, SMALL MODEL HIDDEN_SIZE=5 SUBTRACT=True Targets are now far from flat lines, but model is unable to fit due to too small capacity. 1000 iterations limit reached NO SUBTRACT, LARGER MODEL HIDDEN_SIZE=100 SUBTRACT=False It got a lot better and our target was hit after 942 steps. No more flat lines, model capacity seems quite fine (for this single example!) SUBTRACT, LARGER MODEL HIDDEN_SIZE=100 SUBTRACT=True Although the graph does not look that pretty, we got to desired loss after only 215 iterations. Finally Usually use difference of timesteps instead of timesteps (or some other transformation, see here for more info about that). In other cases, neural network will try to simply... copy output from the previous step (as that's the easiest thing to do). Some minima will be found this way and going out of it will require more capacity. 
When you use the difference between timesteps there is no way to "extrapolate" the trend from previous timestep; neural network has to learn how the function actually varies Use larger model (for the whole dataset you should try something like 300 I think), but you can simply tune that one. Don't use flipud. Use bidirectional LSTMs, in this way you can get info from forward and backward pass of LSTM (not to confuse with backprop!). This also should boost your score Questions Okay, question 1: You are saying that for variable x in the time series, I should train the model to learn x[i] - x[i-1] rather than the value of x[i]? Am I correctly interpreting? Yes, exactly. Difference removes the urge of the neural network to base it's predictions on the past timestep too much (by simply getting last value and maybe changing it a little) Question 2: You said my calculations for zero bottleneck were incorrect. But, for example, let's say I'm using a simple dense network as an auto encoder. Getting the right bottleneck indeed depends on the data. But if you make the bottleneck the same size as the input, you get the identity function. Yes, assuming that there is no non-linearity involved which makes the thing harder (see here for similar case). In case of LSTMs there are non-linearites, that's one point. Another one is that we are accumulating timesteps into single encoder state. So essentially we would have to accumulate timesteps identities into a single hidden and cell states which is highly unlikely. One last point, depending on the length of sequence, LSTMs are prone to forgetting some of the least relevant information (that's what they were designed to do, not only to remember everything), hence even more unlikely. Is num_features * num_timesteps not a bottle neck of the same size as the input, and therefore shouldn't it facilitate the model learning the identity? It is, but it assumes you have num_timesteps for each data point, which is rarely the case, might be here. About the identity and why it is hard to do with non-linearities for the network it was answered above. One last point, about identity functions; if they were actually easy to learn, ResNets architectures would be unlikely to succeed. Network could converge to identity and make "small fixes" to the output without it, which is not the case. I'm curious about the statement : "always use difference of timesteps instead of timesteps" It seem to have some normalizing effect by bringing all the features closer together but I don't understand why this is key ? Having a larger model seemed to be the solution and the substract is just helping. Key here was, indeed, increasing model capacity. Subtraction trick depends on the data really. Let's imagine an extreme situation: We have 100 timesteps, single feature Initial timestep value is 10000 Other timestep values vary by 1 at most What the neural network would do (what is the easiest here)? It would, probably, discard this 1 or smaller change as noise and just predict 1000 for all of them (especially if some regularization is in place), as being off by 1/1000 is not much. What if we subtract? Whole neural network loss is in the [0, 1] margin for each timestep instead of [0, 1001], hence it is more severe to be wrong. And yes, it is connected to normalization in some sense come to think about it. | 13 | 8 |
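The answer recommends bidirectional LSTMs but does not show one; below is a hedged sketch of how the encoder could be adapted (class and attribute names are made up here). The forward and backward final states are concatenated and projected back to latent_size, so the existing LSTMCell decoder can stay as-is; the squeeze step in LSTMEncoderDecoder would no longer be needed, since this encoder already returns (batch, latent_size) tensors.

```python
import torch
import torch.nn as nn

class BiSeqEncoderLSTM(nn.Module):
    """Bidirectional variant of the encoder (sketch; names invented here)."""
    def __init__(self, n_features, latent_size):
        super().__init__()
        self.lstm = nn.LSTM(n_features, latent_size,
                            batch_first=True, bidirectional=True)
        # Project concatenated fwd+bwd states back to latent_size,
        # so the LSTMCell decoder can be reused unchanged.
        self.proj_h = nn.Linear(2 * latent_size, latent_size)
        self.proj_c = nn.Linear(2 * latent_size, latent_size)

    def forward(self, x):
        _, (h, c) = self.lstm(x)             # h, c: (2, batch, latent_size)
        h = torch.cat([h[0], h[1]], dim=-1)  # (batch, 2 * latent_size)
        c = torch.cat([c[0], c[1]], dim=-1)
        return self.proj_h(h), self.proj_c(c)
```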
65,184,035 | 2020-12-7 | https://stackoverflow.com/questions/65184035/alembic-ignore-specific-tables | I'm using alembic to manage database migrations as per user defined sqlalchemy models. My challenge is that I'd like for alembic to ignore any creation, deletion, or changes to a specific set of tables. Note: My Q is similar to this question Ignoring a model when using alembic autogenerate but is different in that I want to control alembic from outside the model definition. Here's a sample table I want to ignore: from sqlalchemy import MetaData from sqlalchemy.ext.declarative import declarative_base Base = declarative_base(metadata=MetaData()) class Ignore1(Base): """ Signed in to the account... """ __tablename__ = 'ignore_1' __table_args__ = { 'info':{'skip_autogenerate':True} } id = Column(Integer, primary_key=True) foo = Column(String(20), nullable=True) Example code (which does not solve my issue): In alembic/env.py # Ideally this is stored in my actual database, but for now, let's assume we have a list... IGNORE_TABLES = ['ignore_1', 'ignore_2'] def include_object(object, name, type_, reflected, compare_to): """ Should you include this table or not? """ if type_ == 'table' and (name in IGNORE_TABLES or object.info.get("skip_autogenerate", False)): return False elif type_ == "column" and object.info.get("skip_autogenerate", False): return False return True # Then add to config context.configure( ... include_object=include_object, ... ) | I found a solution to my problem! My error was in the instantiation of my context object in env.py def run_migrations_offline(): ... context.configure( url=url, target_metadata=target_metadata, include_object=include_object, literal_binds=True, dialect_opts={"paramstyle": "named"}, ) with context.begin_transaction(): context.run_migrations() I wasn't applying this change to context for online migrations: def run_migrations_online(): ... connectable = engine_from_config( config.get_section(config.config_ini_section), prefix="sqlalchemy.", poolclass=pool.NullPool, ) with connectable.connect() as connection: context.configure( connection=connection, target_metadata=target_metadata, # THE FOLLOWING LINE WAS MISSING FROM MY ORIGINAL CODE include_object=include_object, # <----------------------- THIS! ) ... Hopefully anyone else encountering this issue and experiencing similar turmoil can read through my question & following solution and recognize that despair is but a small dumb tweak from salvation. | 24 | 23 |
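One way to avoid exactly this offline/online drift in the future is to build the shared context.configure() arguments once and reuse them in both functions. A sketch of that env.py structure, reusing the usual names from the answer above (url, connectable, target_metadata, include_object are assumed to be defined as before):

```python
from alembic import context

# Build the shared configure() arguments once, reuse them in both code paths.
common_configure_kwargs = dict(
    target_metadata=target_metadata,
    include_object=include_object,
)

def run_migrations_offline():
    context.configure(url=url, literal_binds=True, **common_configure_kwargs)
    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online():
    with connectable.connect() as connection:
        context.configure(connection=connection, **common_configure_kwargs)
        with context.begin_transaction():
            context.run_migrations()
```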
65,133,937 | 2020-12-3 | https://stackoverflow.com/questions/65133937/mock-streaming-api-in-python-for-unit-test | I have an async function that calls a streaming api. What is the best way to write unit test for this function? The api response has to be mocked. I tried with aiounittest and used mock from unittest. But this calls the actual api instead of getting the mocked response. Also tried with pytest.mark.asyncio annotation, but this kept giving me the error - coroutine was never awaited. I have verified that pytest-asyncio has been installed. I am using VS Code and Python 3.6.6 Here is the relevant code snippet: async def method1(): response = requests.get(url=url, params=params, stream=True) for data in response.iter_lines(): # processing logic here yield data Pasting some of the tests I tried. def mocked_get(*args, **kwargs): #implementation of mock class TestClass (unittest.TestCase): @patch("requests.get", side_effect=mocked_get) async def test_method (self, mock_requests): resp = [] async for data in method1: resp.append (data) #Also tried await method1 assert resp Also tried with class TestClass (aiounittest.AsyncTestCase): | Use asynctest instead of aiounittest. Replace unittest.TestCase with asynctest.TestCase. Replace from unittest.mock import patch with from asynctest.mock import patch. async for data in method1: should be async for data in method1():. import asynctest from asynctest.mock import patch class TestClass(asynctest.TestCase): @patch("requests.get", side_effect=mocked_get) async def test_method(self, mock_requests): resp = [] async for data in method1(): resp.append(data) assert resp | 6 | 4 |
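The question leaves mocked_get as a placeholder ("#implementation of mock"). Purely for illustration, a hypothetical implementation could fake the streaming response like this; the payload bytes are invented.

```python
from unittest.mock import Mock

def mocked_get(*args, **kwargs):
    """Hypothetical stand-in for requests.get returning a fake streaming response."""
    fake_response = Mock()
    # iter_lines() on a real streaming response yields raw lines; fake a few here.
    fake_response.iter_lines.return_value = [b'{"event": 1}', b'{"event": 2}']
    fake_response.status_code = 200
    return fake_response
```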
65,108,407 | 2020-12-2 | https://stackoverflow.com/questions/65108407/understanding-featurehasher-collisions-and-vector-size-trade-off | I'm preprocessing my data before implementing a machine learning model. Some of the features are with high cardinality, like country and language. Since encoding those features as one-hot-vector can produce sparse data, I've decided to look into the hashing trick and used python's category_encoders like so: from category_encoders.hashing import HashingEncoder ce_hash = HashingEncoder(cols = ['country']) encoded = ce_hash.fit_transform(df.country) encoded['country'] = df.country encoded.head() When looking at the result, I can see the collisions col_0 col_1 col_2 col_3 col_4 col_5 col_6 col_7 country 0 0 0 1 0 0 0 0 0 US <ββ 1 0 1 0 0 0 0 0 0 CA. β US and SE collides 2 0 0 1 0 0 0 0 0 SE <ββ 3 0 0 0 0 0 0 1 0 JP Further investigation lead me to this Kaggle article. The example of Hashing there include both X and y. What is the purpose of y, does it help to fight the collision problem? Should I add more columns to the encoder and encode more than one feature together (for example country and language)? How to encode such categories using the hashing trick? Update: Based on the comments I got from @CoMartel, Iv'e looked at Sklearn FeatureHasher and written the following code to hash the country column: from sklearn.feature_extraction import FeatureHasher h = FeatureHasher(n_features=10,input_type='string') f = h.transform(df.country) df1 = pd.DataFrame(f.toarray()) df1['country'] = df.country df1.head() And got the following output: 0 1 2 3 4 5 6 7 8 9 country 0 -1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.0 0.0 US 1 -1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.0 0.0 US 2 -1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.0 0.0 US 3 0.0 -1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 CA 4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 -1.0 0.0 SE 5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 JP 6 -1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 AU 7 -1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 AU 8 -1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 DK 9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 -1.0 0.0 SE Is that the way to use the library in order to encode high categorical values? Why are some values negative? How would you choose the "right" n_features value? How can I check the collisions ratio? | Is that the way to use the library in order to encode high categorical values? Yes. There is nothing wrong with your implementation. You can think about the hashing trick as a "reduced size one-hot encoding with a small risk of collision, that you won't need to use if you can tolerate the original feature dimension". This idea was first introduced by Kilian Weinberger. You can find in their paper the whole analysis of the algorithm theoretically and practically/empirically. Why are some values negative? To avoid collision, a signed hash function is used. That is, the strings are hashed by using the usual hash function first (e.g. a string is converted to its corresponding numerical value by summing ASCII value of each char, then modulo n_feature to get an index in (0, n_features]). Then another single-bit output hash function is used. The latter produces +1 or -1 by definition, where it's added to the index resulted from the first hashing function. 
Pseudo code (it looks like Python, though):

def hash_trick(features, n_features):
    res = np.zeros(n_features)
    for f in features:
        h = usual_hash_function(f)                # just the usual hashing
        index = h % n_features                    # the modulo gives the index where f is placed in res
        if single_bit_hash_function(f) == 1:      # to reduce collision
            res[index] += 1
        else:
            res[index] -= 1                       # <--- this is what makes values become negative
    return res

How would you choose the "right" n_features value? As a rule of thumb, and as you can guess, if we hash more distinct features than n_features can hold (i.e. n_features + 1 distinct values), a collision is certain to happen. Hence, the best-case scenario is when each feature is mapped to a unique hash value -- hopefully. In this case, logically speaking, n_features should be at least equal to the actual number of features/categories (in your particular case, the number of different countries). Nevertheless, please remember that this is the "best" case scenario, which is not the case "mathematically speaking". Hence, the higher the better of course, but how high? See next. How can I check the collisions ratio? If we ignore the second single-bit hash function, the problem is reduced to something called the "Birthday problem for Hashing". This is a big topic. For a comprehensive introduction to this problem, I recommend you read this, and for some detailed math, I recommend this answer. In a nutshell, what you need to know is that the probability of no collisions is exp(-1/2) = 60.65%, which means there is approximately a 39.35% chance of at least one collision happening. So, as a rule of thumb, if we have X countries, there is about a 40% chance of at least one collision if the hash function output range (i.e. the n_features parameter) is X^2. In other words, there is a 40% chance of collision if the number of countries in your example = square_root(n_features). Each time you double n_features, the chance of a collision is roughly halved. (Personally, if it is not for security purposes but just a plain conversion from string to numbers, it is not worth going too high.) Side-note for curious readers: for a large enough hash function output size (e.g. 256 bits), the chance that an attacker can find (or exploit) a collision is negligible (from a security perspective). Regarding the y parameter, as you've already got in a comment, it is just for compatibility purposes and is not used (scikit-learn has this along with many other implementations). | 11 | 5 |
65,208,376 | 2020-12-8 | https://stackoverflow.com/questions/65208376/find-path-of-python-installed-by-homebrew | I have installed python 3.8.6 with homebrew on macOS. But when I check with which -a python3 I only get paths of 3.9 and 3.8.2. Is there a way to find the paths of all versions installed by homebrew? Or maybe more general question, how can I find the path of 3.8.6? | Use brew info <packagename> it's probably in one of (and referenced in both) /usr/local/Cellar/[email protected]/3.8.6_2 /usr/local/opt/[email protected]/libexec/bin See also Apple SE Where can I find the installed package path via brew If it's not there, it's plausible the version detection is wrong and only searches for the first two version digits (as conflicting versions may clobber each other). In this case, follow the warning and reinstall the specific version you want. | 7 | 8 |
65,179,646 | 2020-12-7 | https://stackoverflow.com/questions/65179646/how-do-i-parse-a-chemical-formula-using-a-regular-expression | I have a list patterns: patterns=['H', 'He', 'Li', 'Be', 'B', 'C', 'N', 'O', 'F', 'Ne', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'Ar', 'K', 'Ca', 'Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn', 'Ga', 'Ge', 'As', 'Se', 'Br', 'Kr', 'Rb', 'Sr', 'Y', 'Zr', 'Nb', 'Mo', 'Tc', 'Ru', 'Rh', 'Pd', 'Ag', 'Cd', 'In', 'Sn', 'Sb', 'Te', 'I', 'Xe', 'Cs', 'Ba', 'La', 'Ce', 'Pr', 'Nd', 'Pm', 'Sm', 'Eu', 'Gd', 'Tb', 'Dy', 'Ho', 'Er', 'Tm', 'Yb', 'Lu', 'Hf', 'Ta', 'W', 'Re', 'Os', 'Ir', 'Pt', 'Au', 'Hg', 'Tl', 'Pb', 'Bi', 'Po', 'At', 'Rn'] and I have big dataframe with strings, for example: str0='Mg0.97Fe0.03B2' str1='Tl0.5Hg0.5Ba2Ca2Cu3O8' I am trying this: keyss=list(filter(None,regex.split("[^a-zA-Z]*",somestring))) values=list(filter(None,regex.split("[^0-9.0-9]*",somestring))) Sometimes, this works: str3='Hg0.75SrBa2Ca2Cu3O8' keyss=list(filter(None,regex.split("[^a-zA-Z]*",str3))) values=list(filter(None,regex.split("[^0-9.0-9]*",str3)) ['Ba', 'Fe', 'Co', 'Mn', 'As'] ['1', '1.832', '0.15', '0.018', '2'] However, if I have a string like this: str3='Hg0.75SrBa2Ca2Cu3O8' keyss=list(filter(None,regex.split("[^a-zA-Z]*",str3))) values=list(filter(None,regex.split("[^0-9.0-9]*",str3))) ['Hg', 'SrBa', 'Ca', 'Cu', 'O']!=['Hg', 'Sr','Ba', 'Ca', 'Cu', 'O'] ['0.75', '2', '2', '3', '8']!=['0.75', '1','2', '2', '3', '8'] or this str4='NbSn3' keyss=list(filter(None,regex.split("[^a-zA-Z]*",str4))) values=list(filter(None,regex.split("[^0-9.0-9]*",str4))) ['NbSn']!=['Nb','Sn'] ['3']!=['1','3'] str4='Pb1.4Sr4Y1.2Ca0.8Cu4.6O' ... My code is not working correctly. How I can fix it? | Use import pandas as pd patterns=['H', 'He', 'Li', 'Be', 'B', 'C', 'N', 'O', 'F', 'Ne', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'Ar', 'K', 'Ca', 'Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn', 'Ga', 'Ge', 'As', 'Se', 'Br', 'Kr', 'Rb', 'Sr', 'Y', 'Zr', 'Nb', 'Mo', 'Tc', 'Ru', 'Rh', 'Pd', 'Ag', 'Cd', 'In', 'Sn', 'Sb', 'Te', 'I', 'Xe', 'Cs', 'Ba', 'La', 'Ce', 'Pr', 'Nd', 'Pm', 'Sm', 'Eu', 'Gd', 'Tb', 'Dy', 'Ho', 'Er', 'Tm', 'Yb', 'Lu', 'Hf', 'Ta', 'W', 'Re', 'Os', 'Ir', 'Pt', 'Au', 'Hg', 'Tl', 'Pb', 'Bi', 'Po', 'At', 'Rn'] rx = fr'({"|".join(sorted(patterns, key=len,reverse=True))})(\d+(?:\.\d+)?)?' df = pd.DataFrame({'formulas' : ['Mg0.97Fe0.03B2', 'Tl0.5Hg0.5Ba2Ca2Cu3O8', 'Hg0.75SrBa2Ca2Cu3O8', 'NbSn3']}) df['result'] = df['formulas'].str.findall(rx) df['result'] = df['result'].apply(lambda m: [(x,y) if y else (x,1) for x,y in m]) Results >>> df formulas result 0 Mg0.97Fe0.03B2 [(Mg, 0.97), (Fe, 0.03), (B, 2)] 1 Tl0.5Hg0.5Ba2Ca2Cu3O8 [(Tl, 0.5), (Hg, 0.5), (Ba, 2), (Ca, 2), (Cu, 3), (O, 8)] 2 Hg0.75SrBa2Ca2Cu3O8 [(Hg, 0.75), (Sr, 1), (Ba, 2), (Ca, 2), (Cu, 3), (O, 8)] 3 NbSn3 [(Nb, 1), (Sn, 3)] | 7 | 0 |
65,193,998 | 2020-12-8 | https://stackoverflow.com/questions/65193998/syntaxerror-invalid-syntax-to-repo-init-in-the-aosp-code | I have tried to repo init the source code Ubuntu build machine and it is successfully able to clone the code. repo init -u [email protected]:xxx/xx_manifest.git -b xxx Now I am trying repo init the source code in VM Ubuntu machine. In between getting the error like below: Traceback (most recent call last): File "/xxx/.repo/repo/main.py", line 56, in <module> from subcmds.version import Version File "/xxx/.repo/repo/subcmds/__init__.py", line 38, in <module> ['%s' % name]) File "/xxx/.repo/repo/subcmds/upload.py", line 27, in <module> from hooks import RepoHook File "/xxx/.repo/repo/hooks.py", line 472 file=sys.stderr) ^ SyntaxError: invalid syntax python version is same in build machine and vm machine 2.7.17. | try these commands curl https://storage.googleapis.com/git-repo-downloads/repo-1 > ~/bin/repo chmod a+x ~/bin/repo python3 ~/bin/repo init -u git@.... | 30 | 49 |
65,198,998 | 2020-12-8 | https://stackoverflow.com/questions/65198998/sphinx-warning-autosummary-stub-file-not-found-for-the-methods-of-the-class-c | I have an open source package with lots of classes over different submodules. All classes have methods fit and transform, and inherit fit_transform from sklearn. All classes have docstrings that follow numpydoc with subheadings Parameters, Attributes, Notes, See Also, and Methods, where I list fit, transform and fit_transform. I copy an example of a class: class DropFeatures(BaseEstimator, TransformerMixin): """ Some description. Parameters ---------- features_to_drop : str or list, default=None Variable(s) to be dropped from the dataframe Methods ------- fit transform fit_transform """ def __init__(self, features_to_drop: List[Union[str, int]]): some init parameters def fit(self, X: pd.DataFrame, y: pd.Series = None): """ This transformer does not learn any parameter. Verifies that the input X is a pandas dataframe, and that the variables to drop exist in the training dataframe. Parameters ---------- X : pandas dataframe of shape = [n_samples, n_features] The input dataframe y : pandas Series, default = None y is not needed for this transformer. You can pass y or None. Returns ------- self """ some functionality return self def transform(self, X: pd.DataFrame): """ Drop the variable or list of variables from the dataframe. Parameters ---------- X : pandas dataframe The input dataframe from which features will be dropped Returns ------- X_transformed : pandas dataframe, shape = [n_samples, n_features - len(features_to_drop)] The transformed dataframe with the remaining subset of variables. """ some more functionality return X In the conf.py for Sphinx I include: extensions = [ "sphinx.ext.autodoc", # Core Sphinx library for auto html doc generation from docstrings "sphinx.ext.autosummary", # Create neat summary tables for modules/classes/methods etc "sphinx.ext.intersphinx", # Link to other project's documentation (see mapping below) "sphinx_autodoc_typehints", # Automatically document param types (less noise in class signature) "numpydoc", "sphinx.ext.linkcode", ] numpydoc_show_class_members = False # generate autosummary even if no references autosummary_generate = True autosummary_imported_members = True When I build the documents using sphinx-build -b html docs build, the docs are built perfectly fine, but I get 3 warnings per class, one for each of the methods, that says: warning: autosummary: stub file not found for the methods of the class. check your autosummary_generate settings I've exhausted all my searching resources, and I am ready to give up. Would someone know either how to prevent that warning or how to make sphinx not print it to the console? I attach a copy of the error and I can provide a link to the PR to the repo if needed | Ok, after 3 days, I nailed it. The secret is add a short description to the methods in the docstrings after the heading "Methods" instead of leaving them empty as I did. So: class DropFeatures(BaseEstimator, TransformerMixin): Some description. Parameters ---------- features_to_drop : str or list, default=None Variable(s) to be dropped from the dataframe Methods ------- fit: some description transform: some description fit_transform: some description | 13 | 13 |
65,199,011 | 2020-12-8 | https://stackoverflow.com/questions/65199011/is-there-a-way-to-check-similarity-between-two-full-sentences-in-python | I am making a project like this one here: https://www.youtube.com/watch?v=dovB8uSUUXE&feature=youtu.be but i am facing trouble because i need to check the similarity between the sentences for example: if the user said: 'the person wear red T-shirt' instead of 'the boy wear red T-shirt' I want a method to check the similarity between these two sentences without having to check the similarity between each word is there a way to do this in python? I am trying to find a way to check the similarity between two sentences. | Most of there libraries below should be good choice for semantic similarity comparison. You can skip direct word comparison by generating word, or sentence vectors using pretrained models from these libraries. Sentence similarity with Spacy Required models must be loaded first. For using en_core_web_md use python -m spacy download en_core_web_md to download. For using en_core_web_lg use python -m spacy download en_core_web_lg. The large model is around ~830mb as writing and quite slow, so medium one can be a good choice. https://spacy.io/usage/vectors-similarity/ Code: import spacy nlp = spacy.load("en_core_web_lg") #nlp = spacy.load("en_core_web_md") doc1 = nlp(u'the person wear red T-shirt') doc2 = nlp(u'this person is walking') doc3 = nlp(u'the boy wear red T-shirt') print(doc1.similarity(doc2)) print(doc1.similarity(doc3)) print(doc2.similarity(doc3)) Output: 0.7003971105290047 0.9671912343259517 0.6121211244876517 Sentence similarity with Sentence Transformers https://github.com/UKPLab/sentence-transformers https://www.sbert.net/docs/usage/semantic_textual_similarity.html Install with pip install -U sentence-transformers. This one generates sentence embedding. Code: from sentence_transformers import SentenceTransformer model = SentenceTransformer('distilbert-base-nli-mean-tokens') sentences = [ 'the person wear red T-shirt', 'this person is walking', 'the boy wear red T-shirt' ] sentence_embeddings = model.encode(sentences) for sentence, embedding in zip(sentences, sentence_embeddings): print("Sentence:", sentence) print("Embedding:", embedding) print("") Output: Sentence: the person wear red T-shirt Embedding: [ 1.31643847e-01 -4.20616418e-01 ... 8.13076794e-01 -4.64620918e-01] Sentence: this person is walking Embedding: [-3.52878094e-01 -5.04286848e-02 ... -2.36091137e-01 -6.77282438e-02] Sentence: the boy wear red T-shirt Embedding: [-2.36365378e-01 -8.49713564e-01 ... 1.06414437e+00 -2.70157874e-01] Now embedding vector can be used to calculate various similarity metrics. 
Code: from sentence_transformers import SentenceTransformer, util print(util.pytorch_cos_sim(sentence_embeddings[0], sentence_embeddings[1])) print(util.pytorch_cos_sim(sentence_embeddings[0], sentence_embeddings[2])) print(util.pytorch_cos_sim(sentence_embeddings[1], sentence_embeddings[2])) Output: tensor([[0.4644]]) tensor([[0.9070]]) tensor([[0.3276]]) Same thing with scipy and pytorch, Code: from scipy.spatial import distance print(1 - distance.cosine(sentence_embeddings[0], sentence_embeddings[1])) print(1 - distance.cosine(sentence_embeddings[0], sentence_embeddings[2])) print(1 - distance.cosine(sentence_embeddings[1], sentence_embeddings[2])) Output: 0.4643629193305969 0.9069876074790955 0.3275738060474396 Code: import torch.nn cos = torch.nn.CosineSimilarity(dim=0, eps=1e-6) b = torch.from_numpy(sentence_embeddings) print(cos(b[0], b[1])) print(cos(b[0], b[2])) print(cos(b[1], b[2])) Output: tensor(0.4644) tensor(0.9070) tensor(0.3276) Sentence similarity with TFHub Universal Sentence Encoder https://tfhub.dev/google/universal-sentence-encoder/4 https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb Model is very large for this one around 1GB and seems slower than others. This also generates embeddings for sentences. Code: import tensorflow_hub as hub embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4") embeddings = embed([ "the person wear red T-shirt", "this person is walking", "the boy wear red T-shirt" ]) print(embeddings) Output: tf.Tensor( [[ 0.063188 0.07063895 -0.05998802 ... -0.01409875 0.01863449 0.01505797] [-0.06786212 0.01993554 0.03236153 ... 0.05772103 0.01787272 0.01740014] [ 0.05379306 0.07613157 -0.05256693 ... -0.01256405 0.0213196 -0.00262441]], shape=(3, 512), dtype=float32) Code: from scipy.spatial import distance print(1 - distance.cosine(embeddings[0], embeddings[1])) print(1 - distance.cosine(embeddings[0], embeddings[2])) print(1 - distance.cosine(embeddings[1], embeddings[2])) Output: 0.15320375561714172 0.8592830896377563 0.09080004692077637 Other Sentence Embedding Libraries https://github.com/facebookresearch/InferSent https://github.com/Tiiiger/bert_score This illustration shows the method, Resources How to compute the similarity between two text documents? https://en.wikipedia.org/wiki/Cosine_similarity#Angular_distance_and_similarity https://towardsdatascience.com/word-distance-between-word-embeddings-cc3e9cf1d632 https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.cosine.html https://www.tensorflow.org/api_docs/python/tf/keras/losses/CosineSimilarity https://nlp.town/blog/sentence-similarity/ | 28 | 76 |
65,100,974 | 2020-12-2 | https://stackoverflow.com/questions/65100974/how-do-i-properly-import-python-modules-in-a-multi-directory-project | I have a python project with a basic setup that looks like this: imptest.py utils/something.py utils/other.py Here's what's in the scripts: imptest.py #!./venv/bin/python import utils.something as something import utils.other as other def main(): """ Main function. """ something.do_something() other.do_other() if __name__ == "__main__": main() something.py #import other def do_something(): print("I am doing something") def main(): """ Main function """ do_something() #other.do_other() if __name__ == "__main__": main() other.py def do_other(): print("do other thing!") def main(): """ Main function """ do_other() if __name__ == "__main__": main() imptest.py is the main file that runs and calls the utils functions occasionally for some things. And as you can see, I have commented out some lines in "something.py" where I am importing "other" module for testing. But when I want to test certain functions in something.py, I have to run the file something.py and uncomment the import line. This feels like a bit of a clunky way of doing this. If I leave the import other uncommented and run imptest.py, I get this error: Traceback (most recent call last): File "imptest.py", line 5, in <module> import utils.something as something File "...../projects/imptest/utils/something.py", line 3, in <module> import other ModuleNotFoundError: No module named 'other' What's a better way of doing this? | The problem here is the path, Consider this directory structure main - utils/something.py - utils/other.py imptest.py When you try to import other using relative path in to something.py, then you would do something like from . import other. This would work when you execute $ python something.py but would fail when you run $ python imptest.py because in the second scenario it searches for main/other.py which doesn't exist. So inorder to fix this issue, I would suggest that you write unit tests for something.py & other.py and run them using $ python -m (mod) command. ( I highly recommend this approach ) But.... if you really what your existing code to work without much modification then you can add these 2 lines in something.py file ( this works, but I don't recommend this approach ) import sys, os sys.path.append(os.getcwd()) # Adding path to this module folder into sys path import utils.other as other def do_something(): print("I am doing something") def main(): """ Main function """ do_something() other.do_other() if __name__ == "__main__": main() Here are some references to get better understanding: Unit testing in python Absolute vs Relative Imports in python | 7 | 6 |
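To make the recommended python -m route concrete, a minimal sketch of the relative-import variant of something.py (assuming you run the command from the directory that contains utils/ and that utils/ has an __init__.py):

# utils/something.py
from . import other

def do_something():
    print("I am doing something")
    other.do_other()

if __name__ == "__main__":
    do_something()

Run it as a module from the parent directory with python -m utils.something (no .py suffix); executing it this way makes Python treat utils as a package, so the relative import resolves, while imptest.py can keep importing utils.something unchanged.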
65,196,818 | 2020-12-8 | https://stackoverflow.com/questions/65196818/unable-to-access-docker-container-socket-hang-up-error | I have successfully built and started the docker container and it is running perfectly, but when I try to access it [End point url 0.0.0.0:6001] I am getting a "socket hang up" error GET http://0.0.0.0:6001/ Error: socket hang up Request Headers User-Agent: PostmanRuntime/7.26.8 Accept: */* Postman-Token: <token> Host: 0.0.0.0:6001 Accept-Encoding: gzip, deflate, br Connection: keep-alive Earlier it was working fine, but after I removed the containers and images and rebuilt them I started getting this error. I am using Postman to make the GET request and I also tried a web browser. Can anyone tell me what the problem is? --Update-- Docker File Creating containers # Create Virtual Network $ sudo docker network create network1 # Using custom network as there are multiple containers # which communicate with each other # Create Containers $ sudo docker build -t form_ocr:latest . $ sudo docker run -d -p 6001:5000 --net network1 --name form_ocr form_ocr netstat command output $ netstat -nltp ... tcp6 0 0 :::6001 :::* LISTEN - docker container inspect output $ sudo docker container inspect <container-id> output docker ps output $ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 835e8cb11eee form_ocr "python3 app.py" 16 hours ago Up 40 seconds 0.0.0.0:6001->5000/tcp form_ocr | Try localhost:6001 (or 127.0.0.1:6001) instead of 0.0.0.0:6001; 0.0.0.0 is the address the server binds to, not one you browse to. You can also try any of your machine's local IP addresses; you can find them with ifconfig on Linux or ipconfig on Windows. | 14 | 0 |
65,194,694 | 2020-12-8 | https://stackoverflow.com/questions/65194694/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-8 | I installed the version chromedriver 88 as requested but my version chrome is 87.0.4280.88 that is the last version (outside beta) while I am also asked to download version 88 of chrome Here is the error : selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 88 Current browser version is 87.0.4280.88 with binary path C:\Program Files\Google\Chrome\Application\chrome.exe How can I resolve this problem? | Your ChromeDriver version and your installed version of Chrome need to match up. You are using ChromeDriver for Chrome version 87. Keep both version same. Check your Chrome version (Help -> About) and then find the correct ChromeDriver release. You could instead use webdriver-manager which can handle this for you. Chrome is 87.0.4280.88 ChromeDriver Version 87 download from here https://chromedriver.storage.googleapis.com/index.html?path=87.0.4280.88/ | 18 | 24 |
65,186,969 | 2020-12-7 | https://stackoverflow.com/questions/65186969/simplifying-straight-line-movements-in-a-list-of-step-by-step-x-y-coordinates | In my game, I have a list of tuples (x,y) : solution = [(36, 37), (36, 36), (36, 35), (37, 35), (38, 35), (38, 34), (38, 33), (38, 32)] This list describes the movements the player should do to move from point (36, 37) to point (38, 32). I want to simplify this list to the following : opti = [(36, 37), (36, 35), (38, 35), (38, 32)] This means I want to reduce any series of steps where x is fixed (or y is fixed) to only the first and the last step. I'm struggling to figure out an algorithm to do this. I've been trying for more than two hours and here is what I'm currently trying to work on: solution = [(36, 37), (36, 36), (36, 35), (37, 35), (38, 35), (38, 34), (38, 33), (38, 32)] opti = [solution[0]] for i in range(len(solution)): if opti[-1][0] == solution[i][0]: pass elif opti[-1][1] == solution[i][1]: pass else: opti.append(solution[i]) In the end opti is equal to [(36, 37), (37, 35), (38, 34)] which is not what I want.. Can someone point me to the right way to do this? | You can try this: compare the previous and next location with the current location when iterating over the list(solution) to check if all the points are in the same line, pass if they are in the same line else append to the final (opti) list. solution = [(36, 37), (36, 36), (36, 35), (37, 35), (38, 35), (38, 34), (38, 33), (38, 32)] opti = [solution[0]] for i in range(1, len(solution) -1 ): if solution[i-1][0] == solution[i][0] and solution[i][0] == solution[i+1][0]: pass elif solution[i-1][1] == solution[i][1] and solution[i][1] == solution[i+1][1]: pass else: opti.append(solution[i]) opti.append(solution[-1]) print(opti) output: [(36, 37), (36, 35), (38, 35), (38, 32)] I hope this helps, feel free to reach out in case of any doubt. | 6 | 1 |
65,106,184 | 2020-12-2 | https://stackoverflow.com/questions/65106184/adding-a-dynamic-email-backend-in-django | I would like to let my users decide their email backend on their own. That is why I have created the email relevant keys (host, port, username...) on the users and now I try to work this (see below) backend into my Django project. Working with the docs and the source code, my first attempt was to extend the default EmailBackend by my custom "UserBackend" which overrides the __init__ function like this: class UserBackend(EmailBackend): def __init__(self, user_id, host=None, port=None, username=None ...): user = User.objects.get(id=user_id) super().init(host=user.email_host, port=user.email_port ...) As this method is called (I tried to send_mail from the shell) it gets no user_id. How can I approach this differently or how would I extend my attempts to do this? I wouldn't want to rewrite Djangos mail system entirely, as it works in itself. | send_email has a parameter called connection (link to docs) which seems to fit perfectly. You can get a connection by calling get_connection (link to docs) with the user's parameters. connection = get_connection(host=user.email_host, port=user.email_port, ...) send_email(connection=connection, ...) If you'd like to support multiple backend types, get_connection also supports it. | 7 | 4 |
65,174,575 | 2020-12-7 | https://stackoverflow.com/questions/65174575/typeerror-not-supported-between-instances-of-nonetype-and-float | I am following a YouTube tutorial and I wrote this code from the tutorial import numpy as np import pandas as pd from scipy.stats import percentileofscore as score my_columns = [ 'Ticker', 'Price', 'Number of Shares to Buy', 'One-Year Price Return', 'One-Year Percentile Return', 'Six-Month Price Return', 'Six-Month Percentile Return', 'Three-Month Price Return', 'Three-Month Percentile Return', 'One-Month Price Return', 'One-Month Percentile Return' ] final_df = pd.DataFrame(columns = my_columns) # populate final_df here.... pd.set_option('display.max_columns', None) print(final_df[:1]) time_periods = ['One-Year', 'Six-Month', 'Three-Month', 'One-Month'] for row in final_df.index: for time_period in time_periods: change_col = f'{time_period} Price Return' print(type(final_df[change_col])) percentile_col = f'{time_period} Percentile Return' print(final_df.loc[row, change_col]) final_df.loc[row, percentile_col] = score(final_df[change_col], final_df.loc[row, change_col]) print(final_df) It prints my data frame as | Ticker | Price | Number of Shares to Buy | One-Year Price Return | One-Year Percentile Return | Six-Month Price Return | Six-Month Percentile Return | Three-Month Price Return | Three-Month Percentile Return | One-Month Price Return | One-Month Percentile Return | |--------|---------|-------------------------|------------------------|----------------------------|------------------------|-----------------------------|--------------------------|-------------------------------|-------------------------|------------------------------| | A | 120.38 | N/A | 0.437579 | N/A | 0.280969 | N/A | 0.198355 | N/A | 0.0455988 | N/A | But when I call the score function I get this error <class 'pandas.core.series.Series'> 0.4320217937551543 Traceback (most recent call last): File "program.py", line 72, in <module> final_df.loc[row, percentile_col] = score(final_df[change_col], final_df.loc[row, change_col]) File "/Users/abhisheksrivastava/Library/Python/3.7/lib/python/site-packages/scipy/stats/stats.py", line 2017, in percentileofscore left = np.count_nonzero(a < score) TypeError: '<' not supported between instances of 'NoneType' and 'float' What is going wrong? I see the same code work in the YouTube video. 
I have next to none experience with Python Edit: I also tried print(type(final_df['One-Year Price Return'])) print(type(final_df['Six-Month Price Return'])) print(type(final_df['Three-Month Price Return'])) print(type(final_df['One-Month Price Return'])) for row in final_df.index: final_df.loc[row, 'One-Year Percentile Return'] = score(final_df['One-Year Price Return'], final_df.loc[row, 'One-Year Price Return']) final_df.loc[row, 'Six-Month Percentile Return'] = score(final_df['Six-Month Price Return'], final_df.loc[row, 'Six-Month Price Return']) final_df.loc[row, 'Three-Month Percentile Return'] = score(final_df['Three-Month Price Return'], final_df.loc[row, 'Three-Month Price Return']) final_df.loc[row, 'One-Month Percentile Return'] = score(final_df['One-Month Price Return'], final_df.loc[row, 'One-Month Price Return']) print(final_df) but it still gets the same error <class 'pandas.core.series.Series'> <class 'pandas.core.series.Series'> <class 'pandas.core.series.Series'> <class 'pandas.core.series.Series'> <class 'pandas.core.series.Series'> Traceback (most recent call last): File "program.py", line 71, in <module> final_df.loc[row, 'One-Year Percentile Return'] = score(final_df['One-Year Price Return'], final_df.loc[row, 'OneYear Price Return']) File "/Users/abhisheksrivastava/Library/Python/3.7/lib/python/site-packages/scipy/stats/stats.py", line 2017, in percentileofscore left = np.count_nonzero(a < score) TypeError: '<' not supported between instances of 'NoneType' and 'float' | I'm working through this tutorial as well. I looked deeper into the data in the four '___ Price Return' columns. Looking at my batch API call, there's four rows that have the value 'None' instead of a float which is why the 'NoneError' appears, as the percentileofscore function is trying to calculate the percentiles using 'None' which isn't a float. To work around this API error, I manually changed the None values to 0 which calculated the Percentiles, with the code below... time_periods = [ 'One-Year', 'Six-Month', 'Three-Month', 'One-Month' ] for row in hqm_dataframe.index: for time_period in time_periods: if hqm_dataframe.loc[row, f'{time_period} Price Return'] == None: hqm_dataframe.loc[row, f'{time_period} Price Return'] = 0 | 11 | 12 |
65,182,169 | 2020-12-7 | https://stackoverflow.com/questions/65182169/create-new-column-with-data-that-has-same-column | I have DataFrame similat to this. How to add new column with names of rows that have same value in one of the column? For example: Have this: name building a blue b white c blue d red e blue f red How to get this? name building in_building_with a blue [c, e] b white [] c blue [a, e] d red [f] e blue [a, c] f red [d] | This is approach(worst) I can only think of : r = df.groupby('building')['name'].agg(dict) df['in_building_with'] = df.apply(lambda x: [r[x['building']][i] for i in (r[x['building']].keys()-[x.name])], axis=1) df: name building in_building_with 0 a blue [c, e] 1 b white [] 2 c blue [a, e] 3 d red [f] 4 e blue [a, c] 5 f red [d] Approach: Make a dictionary which will give your indices where the building occurs. building blue {0: 'a', 2: 'c', 4: 'e'} red {3: 'd', 5: 'f'} white {1: 'b'} dtype: object subtract the index of the current building from the list since you are looking at the element other than it to get the indices of appearance. r[x['building']].keys()-[x.name] Get the values at those indices and make them into a list. | 7 | 4 |
65,181,817 | 2020-12-7 | https://stackoverflow.com/questions/65181817/how-to-iterate-over-multiple-lists-of-different-lengths-but-repeat-the-last-val | In my Python 3 script, I am trying to make a combination of three numbers from three different lists based on inputs. If the lists are the same size, there is no issue with zip. However, I want to be able to input a single number for a specific list and the script to repeat that number until the longest list is finished. This can be done with zip_longest. However, with fillvalue it is not possible to have separate fill values for separate lists. Taking this simple script as an example: from itertools import zip_longest list1=[1] list2=[4, 5, 6, 7, 8, 9] list3=[2] for l1, l2, l3 in zip_longest(list1, list2, list3): print(l1, l2, l3) This is the actual result: # 1 4 2 # None 5 None # None 6 None # None 7 None # None 8 None # None 9 None And this would be the result that I want: # 1 4 2 # 1 5 2 # 1 6 2 # 1 7 2 # 1 8 2 # 1 9 2 I already managed to do this specific task by manually creating different for loops and asking if a list is a constant or not, but zip_longest is so close to exactly what I need that I wonder if I am missing something obvious. | You could make use of logical or operator to use the last element of the shorter lists: from itertools import zip_longest list1 = [1] list2 = ["a", "b", "c", "d", "e", "f"] list3 = [2] for l1, l2, l3 in zip_longest(list1, list2, list3): print(l1 or list1[-1], l2, l3 or list3[-1]) Out: 1 a 2 1 b 2 1 c 2 1 d 2 1 e 2 1 f 2 | 10 | 10 |
65,180,527 | 2020-12-7 | https://stackoverflow.com/questions/65180527/how-to-update-a-value-in-the-nested-column-of-struct-using-pyspark | I try to do very simple - update a value of a nested column;however, I cannot figure out how Environment: Apache Spark 2.4.5 Databricks 6.4 Python 3.7 dataDF = [ (('Jon','','Smith'),'1580-01-06','M',3000) ] schema = StructType([ StructField('name', StructType([ StructField('firstname', StringType(), True), StructField('middlename', StringType(), True), StructField('lastname', StringType(), True) ])), StructField('dob', StringType(), True), StructField('gender', StringType(), True), StructField('gender', IntegerType(), True) ]) df = spark.createDataFrame(data = dataDF, schema = schema) df = df.withColumn("name.firstname", lit('John')) df.printSchema() df.show() #Results #I get a new column instead of update root |-- name: struct (nullable = true) | |-- firstname: string (nullable = true) | |-- middlename: string (nullable = true) | |-- lastname: string (nullable = true) |-- dob: string (nullable = true) |-- gender: string (nullable = true) |-- gender: integer (nullable = true) |-- name.firstname: string (nullable = false) +--------------+----------+------+------+--------------+ | name| dob|gender|gender|name.firstname| +--------------+----------+------+------+--------------+ |[Jon, , Smith]|1580-01-06| M| 3000| John| +--------------+----------+------+------+--------------+ | Need to wrangle with the column a bit as below: import pyspark.sql.functions as F df2 = df.select('*', 'name.*') \ .withColumn('firstname', F.lit('newname')) \ .withColumn('name', F.struct(*[F.col(col) for col in df.select('name.*').columns])) \ .drop(*df.select('name.*').columns) df2.show() +------------------+----------+------+------+ | name| dob|gender|gender| +------------------+----------+------+------+ |[newname, , Smith]|1580-01-06| M| 3000| +------------------+----------+------+------+ | 9 | 7 |
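Not applicable to the Spark 2.4.5 setup in the question, but worth noting as a hedged aside: PySpark 3.1+ adds Column.withField, which reduces the whole operation to a single line.

from pyspark.sql import functions as F

# PySpark >= 3.1 only
df2 = df.withColumn("name", F.col("name").withField("firstname", F.lit("John")))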
65,175,454 | 2020-12-7 | https://stackoverflow.com/questions/65175454/how-to-delete-multiple-files-and-specific-pattern-in-s3-boto3 | Can Python delete specific multiple files in S3? I want to delete multiple files with specific extensions. This script removes all files. These are the various specific files that I want to delete: XXX.tar.gz XXX.txt ** Current code: ** (all files deleted) import boto3 accesskey = "123" secretkey = "123" region = "ap-northeast-1" s3 = boto3.resource('s3', aws_access_key_id=accesskey, aws_secret_access_key=secretkey, region_name=region) bucket = s3.Bucket('test') files = [os.key for os in bucket.objects.filter(Prefix="myfolder/test/")] tar_files = [file for file in files if file.endswith('tar.gz')] #print(f'All files: {files}') #print(f'CSV files: {csv_files}') objects_to_delete = s3.meta.client.list_objects(Bucket="test", Prefix="myfolder/test/") delete_keys = {'Objects': []} delete_keys['Objects'] = [{'Key': tar_files} for tar_files in [obj['Key'] for obj in objects_to_delete.get('Contents', [])]] s3.meta.client.delete_objects(Bucket="test", Delete=delete_keys) If anyone knows, please let me know. | Presuming that you want to delete *.tar.gz and *.txt files from the given bucket and prefix, this would work: import boto3 s3_resource = boto3.resource('s3') bucket = s3_resource.Bucket('my-bucket') objects = bucket.objects.filter(Prefix = 'myfolder/') objects_to_delete = [{'Key': o.key} for o in objects if o.key.endswith('.tar.gz') or o.key.endswith('.txt')] if len(objects_to_delete): s3_resource.meta.client.delete_objects(Bucket='my-bucket', Delete={'Objects': objects_to_delete}) | 6 | 12 |
65,103,114 | 2020-12-2 | https://stackoverflow.com/questions/65103114/most-efficient-way-of-adding-elements-given-the-index-list-in-numpy | Assume we have a numpy array A with shape (N, ) and a matrix D with shape (M, 3) which has data and another matrix I with shape (M, 3) which has corresponding index of each data element in D. How can we construct A given D and I such that the repeated element indexes are added? Example: ############# A[I] := D ################################### A = [0.5, 0.6] # Final Reduced Data Vector D = [[0.1, 0.1 0.2], [0.2, 0.4, 0.1]] # Data I = [[0, 1, 0], [0, 1, 1]] # Indices For example: A[0] = D[0][0] + D[0][2] + D[1][0] # 0.5 = 0.1 + 0.2 + 0.2 Since in index matrix we have: I[0][0] = I[0][2] = I[1][0] = 0 Target is to avoid looping over all elements to be efficient for large N, M (10^6-10^9). | I doubt you can get much faster than np.bincount - and notice how the official documentation provides this exact usecase # Your example A = [0.5, 0.6] D = [[0.1, 0.1, 0.2], [0.2, 0.4, 0.1]] I = [[0, 1, 0], [0, 1, 1]] # Solution import numpy as np D, I = np.array(D).flatten(), np.array(I).flatten() print(np.bincount(I, D)) #[0.5 0.6] | 7 | 6 |
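Another idiom for the same scatter-add is np.add.at, which (unlike a plain A[I] += D) accumulates correctly over repeated indices; it is usually slower than bincount, but reads closer to the original A[I] := D notation.

import numpy as np

D = np.array([[0.1, 0.1, 0.2], [0.2, 0.4, 0.1]])
I = np.array([[0, 1, 0], [0, 1, 1]])

A = np.zeros(2)
np.add.at(A, I.ravel(), D.ravel())   # unbuffered in-place add
print(A)                             # [0.5 0.6]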
65,171,183 | 2020-12-6 | https://stackoverflow.com/questions/65171183/how-to-run-headless-microsoft-edge-with-selenium-in-python | With Chrome you can add options when creating the driver. You just do options = Options() options.headless = True driver = webdriver.Chrome(PATH\TO\DRIVER, options=options) But for some reason when trying to do the same with Microsoft Edge options = Options() options.headless = True driver = webdriver.Edge(PATH\TO\DRIVER, options=options) I get this error below: TypeError: __init__() got an unexpected keyword argument 'options' For some reason Edge's driver doesn't accept any other parameters than the file path. Is there any way to run Edge headless and add more options just like in Chrome? | options = EdgeOptions() options.use_chromium = True options.add_argument("headless") options.add_argument("disable-gpu") Try the code above; you have to enable Chromium to enable headless mode: https://learn.microsoft.com/en-us/microsoft-edge/webdriver-chromium/?tabs=python This works only for the new Chromium-based Edge, not for Edge Legacy; in legacy versions headless is not supported. Full code: from msedge.selenium_tools import EdgeOptions from msedge.selenium_tools import Edge # make Edge headless edge_options = EdgeOptions() edge_options.use_chromium = True # if we miss this line, we can't make Edge headless # A little different from Chrome because we don't need two lines before 'headless' and 'disable-gpu' edge_options.add_argument('headless') edge_options.add_argument('disable-gpu') driver = Edge(executable_path='youredgedriverpath', options=edge_options) | 9 | 9 |
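If upgrading is an option, Selenium 4 supports the Chromium-based Edge directly, so the extra msedge-selenium-tools package is no longer needed; a sketch assuming Selenium 4 and a matching msedgedriver on PATH:

from selenium import webdriver
from selenium.webdriver.edge.options import Options

options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
driver = webdriver.Edge(options=options)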
65,167,576 | 2020-12-6 | https://stackoverflow.com/questions/65167576/python-decimal-addition-and-subtraction-not-giving-exact-result | Python (3.8) code: #!/usr/bin/env python3 from decimal import Decimal from decimal import getcontext x = Decimal('0.6666666666666666666666666667') y = x; print(getcontext().prec) print(y) print(y == x) y += x; y += x; y += x; y -= x; y -= x; y -= x; print(y) print(y == x) Python output: 28 0.6666666666666666666666666667 True 0.6666666666666666666666666663 False Java code: import java.math.BigDecimal; public class A { public static void main(String[] args) { BigDecimal x = new BigDecimal("0.6666666666666666666666666667"); BigDecimal y = new BigDecimal("0.6666666666666666666666666667"); System.out.println(x.precision()); System.out.println(y.precision()); System.out.println(y); System.out.println(y.equals(x)); y = y.add(x); y = y.add(x); y = y.add(x); y = y.subtract(x); y = y.subtract(x); y = y.subtract(x); System.out.println(y); System.out.println(y.equals(x)); } } Java output: 28 28 0.6666666666666666666666666667 true 0.6666666666666666666666666667 true What would be the way to achieve arbitrary precision in Python? By setting a very large prec? | From Python documentation: The decimal module incorporates a notion of significant places so that 1.30 + 1.20 is 2.50. Moreover, the following also need to be considered: The context precision does not affect how many digits are stored. That is determined exclusively by the number of digits in value. For example, Decimal('3.00000') records all five zeros even if the context precision is only three. Context precision and rounding only come into play during arithmetic operations. Therefore: import decimal from decimal import Decimal decimal.getcontext().prec = 4 a = Decimal('1.22222') #1.22222 #what you put in is what you get even though the prec was set to 4 print(a) b = Decimal('0.22222') #0.22222 #Same reasoning as above print(b) a += 0; b += 0 #a will be printed as 1.222 (4 significant figures) #b will be printed as 0.2222 (Leading zeroes are not significant!) print('\n', a, '\n', b, sep='') | 6 | 3 |
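To answer the closing question directly: yes, raising prec so that every intermediate result fits makes the round trip exact. A quick sketch (the input has 28 significant digits, so 50 is comfortably enough here):

from decimal import Decimal, getcontext

getcontext().prec = 50
x = Decimal('0.6666666666666666666666666667')
y = x
y += x; y += x; y += x
y -= x; y -= x; y -= x
print(y == x)   # True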
65,160,277 | 2020-12-5 | https://stackoverflow.com/questions/65160277/spacy-tokenizer-with-only-whitespace-rule | I would like to know if the spacy tokenizer could tokenize words only using the "space" rule. For example: sentence= "(c/o Oxford University )" Normally, using the following configuration of spacy: nlp = spacy.load("en_core_news_sm") doc = nlp(sentence) for token in doc: print(token) the result would be: ( c / o Oxford University ) Instead, I would like an output like the following (using spacy): (c/o Oxford University ) Is it possible to obtain a result like this using spacy? | Let's change nlp.tokenizer with a custom Tokenizer with token_match regex: import re import spacy from spacy.tokenizer import Tokenizer nlp = spacy.load('en_core_web_sm') text = "This is it's" print("Before:", [tok for tok in nlp(text)]) nlp.tokenizer = Tokenizer(nlp.vocab, token_match=re.compile(r'\S+').match) print("After :", [tok for tok in nlp(text)]) Before: [This, is, it, 's] After : [This, is, it's] You can further adjust Tokenizer by adding custom suffix, prefix, and infix rules. An alternative, more fine grained way would be to find out why it's token is split like it is with nlp.tokenizer.explain(): import spacy from spacy.tokenizer import Tokenizer nlp = spacy.load('en_core_web_sm') text = "This is it's. I'm fine" nlp.tokenizer.explain(text) You'll find out that split is due to SPECIAL rules: [('TOKEN', 'This'), ('TOKEN', 'is'), ('SPECIAL-1', 'it'), ('SPECIAL-2', "'s"), ('SUFFIX', '.'), ('SPECIAL-1', 'I'), ('SPECIAL-2', "'m"), ('TOKEN', 'fine')] that could be updated to remove "it's" from exceptions like: exceptions = nlp.Defaults.tokenizer_exceptions filtered_exceptions = {k:v for k,v in exceptions.items() if k!="it's"} nlp.tokenizer = Tokenizer(nlp.vocab, rules = filtered_exceptions) [tok for tok in nlp(text)] [This, is, it's., I, 'm, fine] or remove split on apostrophe altogether: filtered_exceptions = {k:v for k,v in exceptions.items() if "'" not in k} nlp.tokenizer = Tokenizer(nlp.vocab, rules = filtered_exceptions) [tok for tok in nlp(text)] [This, is, it's., I'm, fine] Note the dot attached to the token, which is due to the suffix rules not specified. | 5 | 11 |
65,163,947 | 2020-12-6 | https://stackoverflow.com/questions/65163947/iterate-over-a-list-based-on-list-with-set-of-iteration-steps | I want to iterate a given list based on a variable number of iterations stored in another list and a constant number of skips stored in as an integer. Let's say I have 3 things - l - a list that I need to iterate on (or filter) w - a list that tells me how many items to iterate before taking a break k - an integer that tells me how many elements to skip between each set of iterations. To rephrase, w tells how many iterations to take, and after each set of iterations, k tells how many elements to skip. So, if w = [4,3,1] and k = 2. Then on a given list (of length 14), I want to iterate the first 4 elements, then skip 2, then next 3 elements, then skip 2, then next 1 element, then skip 2. Another example, #Lets say this is my original list l = [6,2,2,5,2,5,1,7,9,4] w = [2,2,1,1] k = 1 Based on w and k, I want to iterate as - 6 -> Keep # w says keep 2 elements 2 -> Keep 2 -> Skip # k says skip 1 5 -> Keep # w says keep 2 elements 2 -> Keep 5 -> Skip # k says skip 1 1 -> Keep # w says keep 1 element 7 -> Skip # k says skip 1 9 -> Keep # w says keep 1 element 4 -> Skip # k says skip 1 I tried finding something from itertools, numpy, a combination of nested loops, but I just can't seem to wrap my head around how to even iterate over this. Apologies for not providing any attempt, but I don't know where to start. I dont necessarily need a full solution, just a few hints/suggestions would do. | This works: l = [6,2,2,5,2,5,1,7,9,4] w = [2,2,1,1] k = 1 def take(xs, runs, skip_size): ixs = iter(xs) for run_size in runs: for _ in range(run_size ): yield next(ixs) for _ in range(skip_size): next(ixs) result = list(take(l, w, k)) print(result) Result: [6, 2, 5, 2, 1, 9] The function is what's called a generator, yielding one part of the result at a time, which is why it's combined into a list with list(take(l, w, k)). Inside the function, the list xs that is passed in is wrapped in an iterator, to be able to take one item at a time with next(). runs defines how many items to take and yield, skip_size defines how many items to skip to skip after each 'run'. As a bonus, here's a fun one-liner - if you can figure out why it works, I think you know enough about the problem to move on :) [y for i, y in zip([x for xs in [[1] * aw + [0] * k for aw in w] for x in xs], l) if i] | 9 | 6 |
65,157,725 | 2020-12-5 | https://stackoverflow.com/questions/65157725/using-selenium-inside-gitlab-ci-cd | I've desperetaly tried to set a pytest pipeline CI/CD for my personal projet hosted by gitlab. I tried to set up a simple project with two basic files: file test_core.py, witout any other dependencies for the sake of simplicity: # coding: utf-8 # !/usr/bin/python3 import pytest from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.firefox.options import Options def test_basic_headless_selenium_example(): """Test selenium installation by opening python website. (inspired by https://selenium-python.readthedocs.io/getting-started.html) """ opts = Options() opts.headless = True driver = webdriver.Firefox(options=opts) driver.get("http://www.python.org") driver.close() File .gitlab-ci.yml, for CI/CD automatic tests: stages: - tests pytest:python3.7: image: python:3.7 stage: tests services: - selenium/standalone-firefox:latest script: # - apt-get update && apt-get upgrade --assume-yes - wget -O ~/FirefoxSetup.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64" - tar xjf ~/FirefoxSetup.tar.bz2 -C /opt/ - ln -s /opt/firefox/firefox /usr/lib/firefox - export PATH=$PATH:/opt/firefox/ - wget -O ~/geckodriver.tar.gz "https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz" - tar -zxvf ~/geckodriver.tar.gz -C /opt/ - export PATH=$PATH:/opt/ - pip install selenium pytest - pytest On my laptop, the pytestcommand works fine 100% of time. When I push a commit to gitlab, I deseperately get errors: > raise exception_class(message, screen, stacktrace) E selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status 255 /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py:242: WebDriverException =========================== short test summary info ============================ FAILED test_selenium.py::test_basic_headless_selenium_example - selenium.comm... ============================== 1 failed in 1.29s =============================== Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1 I've created a simple project: https://gitlab.com/OlivierLuG/selenium_firefox that reproduce this example. The failed pipeline can be directely found here : https://gitlab.com/OlivierLuG/selenium_firefox/-/pipelines/225711127 Does anybody have a clue how to fix this error ? | I've finally managed to ping gitlab CI on green with the below .gitlab-ci.yml file. Note that I'm not a fan of yaml language. To make the file shorter, I've used a shared block of code, named install_firefox_geckodriver. Then, I've configured 2 jobs with python 3.7 and 3.8, that call this block. The keys to make this kind of test to work are: _ run in headless mode (this was already the case for me) _ install firefox and geckodriver with command lines _ install firefox dependencies _ use gitlab selenium service Here is my yaml file. 
The sucessful pipeline can be found here : https://gitlab.com/OlivierLuG/selenium_firefox/-/pipelines/225756742 stages: - tests .install_firefox_geckodriver: &install_firefox_geckodriver - apt-get update && apt-get upgrade --assume-yes - apt-get install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils --assume-yes - wget -nv -O ~/FirefoxSetup.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64" - tar xjf ~/FirefoxSetup.tar.bz2 -C /opt/ - ln -s /opt/firefox/firefox /usr/lib/firefox - export PATH=$PATH:/opt/firefox/ - wget -nv -O ~/geckodriver.tar.gz "https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz" - tar -zxvf ~/geckodriver.tar.gz -C /opt/ - export PATH=$PATH:/opt/ pytest:python3.7: image: python:3.7 stage: tests services: - selenium/standalone-firefox:latest script: - *install_firefox_geckodriver - pip install selenium pytest - pytest pytest:python3.8: image: python:3.8 stage: tests services: - selenium/standalone-firefox:latest script: - *install_firefox_geckodriver - pip install selenium pytest - pytest | 7 | 5 |
65,159,773 | 2020-12-5 | https://stackoverflow.com/questions/65159773/set-column-width-in-pandas-dataframe | I searched for this topic and found a solution, but it doesn't work for me. The code I am working on (part of it) looks like this: pd.set_option('max_colwidth', 1000) df = pd.DataFrame(list) As for the line setting the max colwidth: I tried putting it before the df line and also after the df line, but the output is still the same. Any ideas? | You can try setting the other display options as well, like: pd.set_option('display.max_columns', 1000, 'display.width', 1000, 'display.max_rows', 1000) | 6 | 4 |
65,158,620 | 2020-12-5 | https://stackoverflow.com/questions/65158620/official-repository-of-unicode-character-names | There are a few ways to get the list of all Unicode characters' names: for example using Python module unicodedata, as explained in List of unicode character names, or using the website: https://unicode.org/charts/charindex.html but here it's incomplete, and you have to open and parse PDF to find the names. But what is the official source / repository of all Unicode character names? (such that if a new character is added, the list is updated, so I'm looking for the initial source for these names, in a machine readable format). I'm looking for a list with just code point and name, in CSV or any other format: code character name ... 0102 LATIN CAPITAL LETTER A WITH BREVE 0103 LATIN SMALL LETTER A WITH BREVE ... | The official source for the actual character data (which includes the character names and many, many other details) is the Unicode Character Database. The latest version of the data files can be accessed via http://www.unicode.org/Public/UCD/latest/. Names specifically can be found in the files NamesList.txt. The format of that file is described here. This is the list in CSV format: https://www.unicode.org/Public/UCD/latest/ucd/UnicodeData.txt | 6 | 10 |
65,156,028 | 2020-12-5 | https://stackoverflow.com/questions/65156028/set-print-flush-true-to-default | I am aware that you can flush after a print statement by setting flush=True like so: print("Hello World!", flush=True) However, for cases where you are doing many prints, it is cumbersome to manually set each print to flush=True. Is there a way to set the default to flush=True for Python 3.x? I am thinking of something similar to the print options numpy gives using numpy.set_printoptions. | You can use partial: from functools import partial print_flushed = partial(print, flush=True) print_flushed("Hello world!") From the documentation: The partial() is used for partial function application which βfreezesβ some portion of a functionβs arguments and/or keywords resulting in a new object with a simplified signature. | 7 | 8 |
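If editing call sites is not an option at all, two heavier-handed alternatives: run the interpreter with python -u (or set PYTHONUNBUFFERED=1) to unbuffer stdout globally, or monkey-patch the builtin so that every existing print call flushes. A sketch of the latter:

import builtins
import functools

# partial() captures the original builtins.print before the rebinding below
builtins.print = functools.partial(print, flush=True)

print("Hello World!")   # flushes without any extra argument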
65,147,823 | 2020-12-4 | https://stackoverflow.com/questions/65147823/python-asyncio-task-exception-was-never-retrieved | Description: (simplified) I have 2 tasks. Within each task I have 3 coroutines. 2 coroutines from the first task fail. (simulated) While processing task results, I am getting one "Task exception was never retrieved" message. I believe this is because exception of only one of the two failed coroutines in that task was processed. How do I process exceptions of both coroutines within task and/or avoid the "Task exception was never retrieved" message? Code: (simplified) import asyncio async def download(data): filename = "*" if data in ["b", "c"] else data # simulated failure with open(filename, "w") as f: f.write(data) async def coro(data_list): coroutines = [download(data) for data in data_list] for coroutine in asyncio.as_completed(coroutines): await coroutine async def main(): task1 = asyncio.create_task(coro(["a", "b", "c"])) task2 = asyncio.create_task(coro(["d", "e", "f"])) results = await asyncio.gather(task1, task2, return_exceptions=True) for _ in results: pass asyncio.run(main()) Output: (simplified) Task exception was never retrieved future: <Task finished coro=<download() done, defined at D:/myscript.py:2> exception=OSError(22, 'Invalid argument')> Traceback (most recent call last): File "D:/myscript.py", line 4, in download with open(filename, "w") as f: OSError: [Errno 22] Invalid argument: '*' | If you want to collect exceptions instead of raising them, you can use asyncio.gather(return_exceptions=True) in coro as well. For example: import asyncio async def download(data): if data in ['b', 'c']: 1/0 # simulate error return 42 # success async def coro(data_list): coroutines = [download(data) for data in data_list] return await asyncio.gather(*coroutines, return_exceptions=True) async def main(): task1 = asyncio.create_task(coro(["a", "b", "c"])) task2 = asyncio.create_task(coro(["d", "e", "f"])) return await asyncio.gather(task1, task2, return_exceptions=True) print(asyncio.run(main())) This will print: [[42, ZeroDivisionError('division by zero'), ZeroDivisionError('division by zero')], [42, 42, 42]] | 14 | 10 |
65,115,092 | 2020-12-2 | https://stackoverflow.com/questions/65115092/occasional-deadlock-in-multiprocessing-pool | I have N independent tasks that are executed in a multiprocessing.Pool of size os.cpu_count() (8 in my case), with maxtasksperchild=1 (i.e. a fresh worker process is created for each new task). The main script can be simplified to: import subprocess as sp import multiprocessing as mp def do_work(task: dict) -> dict: res = {} # ... work ... for i in range(5): out = sp.run(cmd, stdout=sp.PIPE, stderr=sp.PIPE, check=False, timeout=60) res[i] = out.stdout.decode('utf-8') # ... some more work ... return res if __name__ == '__main__': tasks = load_tasks_from_file(...) # list of dicts logger = mp.get_logger() results = [] with mp.Pool(processes=os.cpu_count(), maxtasksperchild=1) as pool: for i, res in enumerate(pool.imap_unordered(do_work, tasks), start=1): results.append(res) logger.info('PROGRESS: %3d/%3d', i, len(tasks)) dump_results_to_file(results) The pool sometimes gets stuck. The traceback when I do a KeyboardInterrupt is here. It indicates that the pool won't fetch new tasks and/or worker processes are stuck in a queue / pipe recv() call. I was unable to reproduce this deterministically, varying different configs of my experiments. There's a chance that if I run the same code again, it'll finish gracefully. Further observations: Python 3.7.9 on x64 Linux start method for multiprocessing is fork (using spawn does not solve the issue) strace reveals that the processes are stuck in a futex wait; gdb's backtrace also shows: do_futex_wait.constprop disabling logging / explicit flushing does not help there's no bug in how a task is defined (i.e. they are all loadable). Update: It seems that deadlock occurs even with a pool of size = 1. strace reports that the process is blocked on trying to acquire some lock located at 0x564c5dbcd000: futex(0x564c5dbcd000, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY and gdb confirms: (gdb) bt #0 0x00007fcb16f5d014 in do_futex_wait.constprop () from /usr/lib/libpthread.so.0 #1 0x00007fcb16f5d118 in __new_sem_wait_slow.constprop.0 () from /usr/lib/libpthread.so.0 #2 0x0000564c5cec4ad9 in PyThread_acquire_lock_timed (lock=0x564c5dbcd000, microseconds=-1, intr_flag=0) at /tmp/build/80754af9/python_1598874792229/work/Python/thread_pthread.h:372 #3 0x0000564c5ce4d9e2 in _enter_buffered_busy (self=self@entry=0x7fcafe1e7e90) at /tmp/build/80754af9/python_1598874792229/work/Modules/_io/bufferedio.c:282 #4 0x0000564c5cf50a7e in _io_BufferedWriter_write_impl.isra.2 (self=0x7fcafe1e7e90) at /tmp/build/80754af9/python_1598874792229/work/Modules/_io/bufferedio.c:1929 #5 _io_BufferedWriter_write (self=0x7fcafe1e7e90, arg=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Modules/_io/clinic/bufferedio.c.h:396 | The deadlock occurred due to high memory usage in workers, thus triggering the OOM killer which abruptly terminated the worker subprocesses, leaving the pool in a messy state. This script reproduces my original problem. For the time being I am considering switching to a ProcessPoolExecutor which will throw a BrokenProcessPool exception when an abrupt worker termination occurs. References: https://bugs.python.org/issue22393#msg315684 https://stackoverflow.com/a/24896362 | 9 | 10 |
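A rough sketch of the ProcessPoolExecutor switch mentioned above; unlike multiprocessing.Pool, it raises BrokenProcessPool instead of hanging when a worker dies abruptly (for example, when the OOM killer takes it out). do_work and tasks are the ones from the question.

import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def run_all(tasks):
    results = []
    try:
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            for res in pool.map(do_work, tasks):
                results.append(res)
    except BrokenProcessPool:
        # a worker process was terminated abruptly; the pool is no longer usable
        raise
    return results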
65,150,961 | 2020-12-4 | https://stackoverflow.com/questions/65150961/in-defined-for-generator | Why is the in operator defined for generators? >>> def foo(): ... yield 42 ... >>> >>> f = foo() >>> 10 in f False What are the possible use cases? I know that range(...) objects have a __contains__ function defined so that we can do stuff like this: >>> r = range(10) >>> 4 in r True >>> r.__contains__ <method-wrapper '__contains__' of range object at 0x7f82bd51cc00> But f above doesn't have a __contains__ method. | "What are the possible use cases?" To check if the generator will produce some value. Dunder methods serve as hooks for the particular syntax they are associated with. __contains__ isn't some kind of one-to-one mapping to x in y. The language ultimately defines the semantics of these operators. From the documentation of membership testing, we see there are several ways for x in y to be evaluated, depending on various properties of the objects involved. I've highlighted the relevant one for generator objects, which do not define a __contains__ but are iterable, i.e., they define an __iter__ method: The operators in and not in test for membership. x in s evaluates to True if x is a member of s, and False otherwise. x not in s returns the negation of x in s. All built-in sequences and set types support this as well as dictionary, for which in tests whether the dictionary has a given key. For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression x in y is equivalent to any(x is e or x == e for e in y). For the string and bytes types, x in y is True if and only if x is a substring of y. An equivalent test is y.find(x) != -1. Empty strings are always considered to be a substring of any other string, so "" in "abc" will return True. For user-defined classes which define the __contains__() method, x in y returns True if y.__contains__(x) returns a true value, and False otherwise. For user-defined classes which do not define __contains__() but do define __iter__(), x in y is True if some value z, for which the expression x is z or x == z is true, is produced while iterating over y. If an exception is raised during the iteration, it is as if in raised that exception. Lastly, the old-style iteration protocol is tried: if a class defines __getitem__(), x in y is True if and only if there is a non-negative integer index i such that x is y[i] or x == y[i], and no lower integer index raises the IndexError exception. (If any other exception is raised, it is as if in raised that exception). The operator not in is defined to have the inverse truth value of in. To summarize, x in y will be defined for objects that: Are strings or bytes, and it is defined as a substring relationship. types that define __contains__ types that are iterators, i.e. that define __iter__ the old-style iteration protocol (relies on __getitem__) Generators fall into 3. As a broader point, you really shouldn't use the dunder methods directly, unless you really understand what they are doing. Even then, it may be best to avoid it. It usually isn't worth trying to be credible or succinct by using something to the effect of: x.__lt__(y) Instead of: x < y You should at least understand that this might happen: >>> (1).__lt__(3.) NotImplemented >>> And if you are just naively doing stuff like filter((1).__lt__, iterable) then you've probably got a bug. | 6 | 4 |
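A small follow-up illustration (my own, not from the answer) of what this fallback to iteration means in practice: the membership test consumes the generator up to and including the first match, so repeating the test can give a different result.

```python
def numbers():
    yield from range(5)

g = numbers()
print(3 in g)    # True: iteration consumed 0, 1, 2 and 3
print(3 in g)    # False: only 4 was left, and this check consumed it too
print(list(g))   # []: the generator is now exhausted
```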
65,146,320 | 2020-12-4 | https://stackoverflow.com/questions/65146320/pandas-how-to-get-rows-with-consecutive-dates-and-sales-more-than-1000 | I have a data frame called df: Date Sales 01/01/2020 812 02/01/2020 981 03/01/2020 923 04/01/2020 1033 05/01/2020 988 ... ... How can I get the first occurrence of 7 consecutive days with sales above 1000? This is what I am doing to find the rows where sales is above 1000: In [221]: df.loc[df["sales"] >= 1000] Out [221]: Date Sales 04/01/2020 1033 08/01/2020 1008 09/01/2020 1091 17/01/2020 1080 18/01/2020 1121 19/01/2020 1098 ... ... | You can assign a unique identifier per consecutive days, group by them, and return the first value per group (with a previous filter of values > 1000): df = df.query('Sales > 1000').copy() df['grp_date'] = df.Date.diff().dt.days.fillna(1).ne(1).cumsum() df.groupby('grp_date').head(7).reset_index(drop=True) where you can change the value of head parameter to the first n rows from consecutive days. Note: you may need to use pd.to_datetime(df.Date, format='%d/%m/%Y') to convert dates from strings to pandas datetime, and sort them. | 6 | 5 |
65,147,132 | 2020-12-4 | https://stackoverflow.com/questions/65147132/how-to-set-the-hue-order-in-seaborn-plots | I have a Pandas dataset named titanic I am plotting a bar chart as described in the Seaborn official documentation, using the following code: import seaborn as sns titanic = sns.load_dataset("titanic") sns.catplot(x="sex", y="survived", hue="class", kind="bar", data=titanic) This produces the following plot: As you can see, the hue is represented by the class. How can I manually choose the hue order so that I can reverse the current one? | In order to manually select the hue order of a Seaborn plot, you have to define the desired order as a list and then pass it to the plot function as the argument hue_order . The following code would work: import seaborn as sns titanic = sns.load_dataset("titanic") hue_order = ['Third', 'Second', 'First'] sns.catplot(x="sex", y="survived", hue="class", data=titanic, hue_order=hue_order, kind="bar") | 29 | 60 |
65,114,261 | 2020-12-2 | https://stackoverflow.com/questions/65114261/is-there-a-way-to-deploy-a-fastapi-app-on-cpanel | I'm having trouble deploying a FastAPI app on cpanel with Passenger | You might be able to run your FastAPI app using a2wsgi: In your passenger_wsgi.py: from a2wsgi import ASGIMiddleware from main import app # Import your FastAPI app. application = ASGIMiddleware(app) | 9 | 10 |
65,139,977 | 2020-12-4 | https://stackoverflow.com/questions/65139977/how-is-x-42-x-lambda-x-parsed | I was surprised that this assertion fails: x = 42 x = lambda: x assert x() == 42 It seems that x ends up recursively referring to itself, so that x(), x()(), etc. are all functions. What is the rule used to parse this, and where is this documented? By the way (not unexpectedly given the above), the original value of x has no references left after the lambda definition: class X: def __del__(self): print('deleting') x = X() x = lambda: x # 'deleting' is printed here | The variable x is created by the first assignment, and rebound with the second assignment. Since the x in the lambda isn't evaluated until the lambda is called, calling it will evaluate to the most recently assigned value. Note that this is not dynamic scoping - if it were dynamic, the following would print "99", but it prints "<function ...": x = 42 x = lambda: x def test(f): x = 99 print(f()) test(x) | 49 | 45 |
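A common workaround, not part of the answer above but a standard idiom, is to capture the current value at definition time with a default argument, so the lambda no longer looks the name up when it is called:

```python
x = 42
x = lambda x=x: x   # the default value is evaluated right now, capturing 42
assert x() == 42    # the original assertion now passes
```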
65,135,205 | 2020-12-3 | https://stackoverflow.com/questions/65135205/how-to-set-a-protobuf-timestamp-field-in-python | I am exploring the use of protocol buffers and would like to use the new Timestamp data type which is in protobuf3. Here is my .proto file: syntax = "proto3"; package shoppingbasket; import "google/protobuf/timestamp.proto"; message TransactionItem { optional string product = 1; optional int32 quantity = 2; optional double price = 3; optional double discount = 4; } message Basket { optional string basket = 1; optional google.protobuf.Timestamp tstamp = 2; optional string customer = 3; optional string store = 4; optional string channel = 5; repeated TransactionItem transactionItems = 6; } message Baskets { repeated Basket baskets = 1; } After generating python classes from this .proto file I'm attempting to create some objects using the generated classes. Here's the code: import shoppingbasket_pb2 from google.protobuf.timestamp_pb2 import Timestamp baskets = shoppingbasket_pb2.Baskets() basket1 = baskets.baskets.add() basket1.basket = "001" basket1.tstamp = Timestamp().GetCurrentTime() which fails with error: AttributeError: Assignment not allowed to composite field "tstamp" in protocol message object. Can anyone explain to me why this isn't working as I am nonplussed. | See Timestamp. I think you want: basket1.tstamp.GetCurrentTime() | 6 | 7 |
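For completeness, a short sketch of the two usual ways to populate the composite field, assuming the generated shoppingbasket_pb2 module from the question is importable: mutate the field's own Timestamp in place, or build one separately and copy it in (plain assignment is what triggers the original error).

```python
from datetime import datetime
from google.protobuf.timestamp_pb2 import Timestamp
import shoppingbasket_pb2  # generated from the .proto in the question

baskets = shoppingbasket_pb2.Baskets()
basket1 = baskets.baskets.add()

# Option 1: mutate the message's own Timestamp sub-message in place.
basket1.tstamp.GetCurrentTime()

# Option 2: build a Timestamp separately and copy it into the composite field.
ts = Timestamp()
ts.FromDatetime(datetime.utcnow())
basket1.tstamp.CopyFrom(ts)
```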
65,120,743 | 2020-12-3 | https://stackoverflow.com/questions/65120743/many-rg-commands-started-by-vscode-that-consume-99-of-cpus | I'm working inside a very big github repo, say its structure is like project-root βββ project-1 β βββ subproject-a β βββ subproject-others βββ project-2 βββ subproject-b βββ subproject-others There are many projects, each contains many subprojects. I'm just working on one of the subprojects (e.g. subproject-a). When I opened vscode inside the subproject (it's a python subproject), I noticed that it launches many rg commands like below, and my CPU usage goes above 99%. I wonder what these rg commands are about? Are they just searching for stuffs inside the subproject, or the whole git repo, which contains tens of thousands of files? Why do they consume so many resources? How could I avoid that, please? /Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar.unpacked/vscode-ripgrep/bin/rg --files --hidden --case-sensitive -g **/*.go/** -g **/*.go -g !**/.git -g !**/.svn -g !**/.hg -g !**/CVS -g !**/.DS_Store -g !**/.classpath -g !**/.factorypath -g !**/.project -g !**/.settings -g !**/node_modules -g !**/bower_components -g !**/*.code-search --no-ignore-parent --follow --quiet --no-config --no-ignore-global | It turns out that there are four symlink folders with over 700k files in them. These folders are usually ignored in /project-root/.gitginore. So rg by default would ignore searching in them. But here because of --no-ignore-parent --follow flags, they are being searched nonetheless. I added these folders to /project-root/project-1/subproject-a/.gitignore again, and now these rg commands don't take so much cpu resource anymore. | 11 | 5 |
65,108,382 | 2020-12-2 | https://stackoverflow.com/questions/65108382/visual-studio-code-color-not-working-when-using-python-types | I am using the new python syntax to describe what types my methods return e.g.,: def method(unpacked_message: dict) -> dict: This seems to break the vscode color scheme Expected colors: Environment and vs code extensions: Python 3.6.9 on ubuntu ms-python.python v2020.11.371526539 tht13.python: Python for VS code v0.2.3 magicstack.magicpython: MagicPython v1.1.0 The code runs flawlessly. Am I doing something wrong ? | Based on the information you provided, I reproduced the problem you described. Reason: The Syntax Highlighting style provided by the extension "Python for VSCode" is different from the extension "Python". Solution: Please disable the extension "Python for VSCode". before: after: | 18 | 33 |
65,127,212 | 2020-12-3 | https://stackoverflow.com/questions/65127212/python3-nat-hole-punching | I know this topic is not new. There is various information out there although, the robust solution is not presented (at least I did not found). I have a P2P daemon written in python3 and the last element on the pie is to connect two clients behind the NAT via TCP. My references for this topic: https://bford.info/pub/net/p2pnat/ How to make 2 clients connect each other directly, after having both connected a meeting-point server? Problems with TCP hole punching What I have done so far: SERVER: #!/usr/bin/env python3 import threading import socket MY_AS_SERVER_PORT = 9001 TIMEOUT = 120.0 BUFFER_SIZE = 4096 def get_my_local_ip(): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: # doesn't even have to be reachable s.connect(('10.255.255.255', 1)) IP = s.getsockname()[0] except Exception: IP = '127.0.0.1' finally: s.close() return bytes(IP, encoding='utf-8') def wait_for_msg(new_connection, client_address): while True: try: packet = new_connection.recv(BUFFER_SIZE) if packet: msg_from_client = packet.decode('utf-8') client_connected_from_ip = client_address[0] client_connected_from_port = client_address[1] print("We have a client. Client advertised his local IP as:", msg_from_client) print(f"Although, our connection is from: [{client_connected_from_ip}]:{client_connected_from_port}") msg_back = bytes("SERVER registered your data. Your local IP is: " + str(msg_from_client) + " You are connecting to the server FROM: " + str(client_connected_from_ip) + ":" + str(client_connected_from_port), encoding='utf-8') new_connection.sendall(msg_back) break except ConnectionResetError: break except OSError: break def server(): sock = socket.socket() sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) sock.bind((get_my_local_ip().decode('utf-8'), MY_AS_SERVER_PORT)) sock.listen(8) sock.settimeout(TIMEOUT) while True: try: new_connection, client_address = sock.accept() if new_connection: threading.Thread(target=wait_for_msg, args=(new_connection,client_address,)).start() # print("connected!") # print("") # print(new_connection) # print("") # print(client_address) msg = bytes("Greetings! This message came from SERVER as message back!", encoding='utf-8') new_connection.sendall(msg) except socket.timeout: pass if __name__ == '__main__': server() CLIENT: #!/usr/bin/python3 import sys import socket import time import threading SERVER_IP = '1.2.3.4' SERVER_PORT = 9001 # We don't want to establish a connection with a static port. Let the OS pick a random empty one. #MY_AS_CLIENT_PORT = 8510 TIMEOUT = 3 BUFFER_SIZE = 4096 def get_my_local_ip(): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: # doesn't even have to be reachable s.connect(('10.255.255.255', 1)) IP = s.getsockname()[0] except Exception: IP = '127.0.0.1' finally: s.close() return bytes(IP, encoding='utf-8') def constantly_try_to_connect(sock): while True: try: sock.connect((SERVER_IP, SERVER_PORT)) except ConnectionRefusedError: print(f"Can't connect to the SERVER IP [{SERVER_IP}]:{SERVER_PORT} - does the server alive? Sleeping for a while...") time.sleep(1) except OSError: #print("Already connected to the server. 
Kill current session to reconnect...") pass def client(): sock = socket.socket() sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) #sock.bind((get_my_local_ip().decode('utf-8'), MY_AS_CLIENT_PORT)) sock.settimeout(TIMEOUT) threading.Thread(target=constantly_try_to_connect, args=(sock,)).start() while True: try: packet = sock.recv(BUFFER_SIZE) if packet: print(packet) sock.sendall(get_my_local_ip()) except OSError: pass if __name__ == '__main__': client() Now the current code results: ./tcphole_server.py We have a client. Client advertised his local IP as: 10.10.10.50 Although, our connection is from: [89.22.11.50]:32928 We have a client. Client advertised his local IP as: 192.168.1.20 Although, our connection is from: [78.88.77.66]:51928 ./tcphole_client1.py b'Greetings! This message came from SERVER as message back!' b'SERVER registered your data. Your local IP is: 192.168.1.20 You are connecting to the server FROM: 89.22.11.50:32928' ./tcphole_client2.py b'Greetings! This message came from SERVER as message back!' b'SERVER registered your data. Your local IP is: 10.10.10.50 You are connecting to the server FROM: 78.88.77.66:51928' As you can see the server has all information to connect two clients. We can send details about the other peer individually through the current server-client connection. Now two questions remain in my head: Assuming the SERVER sends information about CLIENT 1 and CLIENT 2 for each of the peers. And now the CLIENTS starts connecting like [89.22.11.50]:32928 <> [78.88.77.66]:51928 Does the SERVER should close the current connections with the CLIENTS? How the CLIENT Router behaves? I assume it expecting the same EXTERNAL SERVER SRC IP [1.2.3.4], instead gets one of the CLIENTS EXT IP for instance [89.22.11.50] or [78.88.77.66]? This is messier than I thought. Any help to move forward appreciated. Hope this would help other Devs/DevOps too. | Finally found the expected behavior! Don't want to give too much code here but I hope after this you will understand the basics of how to implement it. Best to have a separate file in each of the client's folder - nearby ./tcphole_client1.py and ./tcphole_client2.py. We need to connect fast after we initiated sessions with the SERVER. Now for instance: ./tcphole_client_connector1.py 32928 51928 ./tcphole_client_connector2.py 51928 32928 Remember? We need to connect to the same ports as we initiated with SERVER: [89.22.11.50]:32928 <> [78.88.77.66]:51928 The first port is needed to bind the socket (OUR). With the second port, we are trying to connect to the CLIENT. The other CLIENT doing the same procedure except it binds to his port and connects to yours bound port. If the ROUTER still has an active connection - SUCCESS. | 7 | 5 |
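A rough sketch of the final step described in this answer: bind to the same local port that was used for the rendezvous session, then keep trying to connect to the peer's public endpoint until the NAT mappings let the connection through. The addresses and ports are placeholders taken from the example output, SO_REUSEPORT is Linux-specific (as in the question's own client code), and this is my own illustration of the technique, not code from the thread.

```python
import socket
import time

MY_LOCAL_PORT = 32928                 # the source port used for the rendezvous session
PEER_ADDR = ("78.88.77.66", 51928)    # peer's public IP:port as reported by the server

def punch():
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # Linux, as in the question
        s.bind(("", MY_LOCAL_PORT))
        s.settimeout(3)
        try:
            s.connect(PEER_ADDR)      # succeeds once both NATs have outgoing mappings
            s.settimeout(None)
            return s
        except OSError:
            s.close()
            time.sleep(0.5)           # retry until the peer's NAT lets us through

if __name__ == "__main__":
    conn = punch()
    print("connected to peer:", conn.getpeername())
```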
65,131,391 | 2020-12-3 | https://stackoverflow.com/questions/65131391/what-exactly-is-kerass-categoricalcrossentropy-doing | I am porting a keras model over to torch and I'm having trouble replicating the exact behavior of keras/tensorflow's 'categorical_crossentropy' after a softmax layer. I have some workarounds for this problem, so I'm only interested in understanding what exactly tensorflow calculates when calculating categorical cross entropy. As a toy problem, I set up labels and predicted vectors >>> import tensorflow as tf >>> from tensorflow.keras import backend as K >>> import numpy as np >>> true = np.array([[0.0, 1.0], [1.0, 0.0]]) >>> pred = np.array([[0.0, 1.0], [0.0, 1.0]]) And calculate the Categorical Cross Entropy with: >>> loss = tf.keras.losses.CategoricalCrossentropy() >>> print(loss(pred, true).eval(session=K.get_session())) 8.05904769897461 This differs from the analytical result >>> loss_analytical = -1*K.sum(true*K.log(pred))/pred.shape[0] >>> print(loss_analytical.eval(session=K.get_session())) nan I dug into the source code for keras/tf's cross entropy (see Softmax Cross Entropy implementation in Tensorflow Github Source Code) and found the c function at https://github.com/tensorflow/tensorflow/blob/c903b4607821a03c36c17b0befa2535c7dd0e066/tensorflow/compiler/tf2xla/kernels/softmax_op.cc line 116. In that function, there is a comment: // sum(-labels * // ((logits - max_logits) - log(sum(exp(logits - max_logits))))) // along classes // (The subtraction broadcasts along the batch dimension.) And implementing that, I tried: >>> max_logits = K.max(pred, axis=0) >>> max_logits = max_logits >>> xent = K.sum(-true * ((pred - max_logits) - K.log(K.sum(K.exp(pred - max_logits)))))/pred.shape[0] >>> print(xent.eval(session=K.get_session())) 1.3862943611198906 I also tried to print the trace for xent.eval(session=K.get_session()), but the trace is ~95000 lines long. So it begs the question: what exactly is keras/tf doing when calculating 'categorical_crossentropy'? It makes sense that it doesn't return nan, that would cause training issues, but where does 8 come from? | The problem is that you are using hard 0s and 1s in your predictions. This leads to nan in your calculation since log(0) is undefined (or infinite). What is not really documented is that the Keras cross-entropy automatically "safeguards" against this by clipping the values to be inside the range [eps, 1-eps]. This means that, in your example, Keras gives you a different result because it flat out replaces the predictions by other values. If you replace your predictions by soft values, you should be able to reproduce the results. This makes sense anyway, since your networks will usually return such values via a softmax activation; hard 0/1 only happens in the case of numerical underflow. If you want to check this for yourself, the clipping happens here. This function is eventually called by the CategoricalCrossentropy function. epsilon is defined elsewhere, but it seems to be 0.0000001 -- try your manual calculation with pred = np.clip(pred, 0.0000001, 1-0.0000001) and you should see the result 8.059047875479163. | 8 | 7 |
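The clipping described in this answer can be reproduced by hand with a few lines of NumPy, using the question's arrays and the epsilon value mentioned above:

```python
import numpy as np

true = np.array([[0.0, 1.0], [1.0, 0.0]])
pred = np.array([[0.0, 1.0], [0.0, 1.0]])

eps = 1e-7                                  # Keras backend epsilon
pred_clipped = np.clip(pred, eps, 1 - eps)  # what the loss does before taking the log
loss = -np.sum(true * np.log(pred_clipped)) / pred.shape[0]
print(loss)  # ~8.059, matching CategoricalCrossentropy instead of nan
```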
65,119,003 | 2020-12-3 | https://stackoverflow.com/questions/65119003/binning-pandas-value-counts | I have a Pandas Series produced by df.column.value_counts().sort_index(). | N Months | Count | |------|------| | 0 | 15 | | 1 | 9 | | 2 | 78 | | 3 | 151 | | 4 | 412 | | 5 | 181 | | 6 | 543 | | 7 | 175 | | 8 | 409 | | 9 | 594 | | 10 | 137 | | 11 | 202 | | 12 | 170 | | 13 | 446 | | 14 | 29 | | 15 | 39 | | 16 | 44 | | 17 | 253 | | 18 | 17 | | 19 | 34 | | 20 | 18 | | 21 | 37 | | 22 | 147 | | 23 | 12 | | 24 | 31 | | 25 | 15 | | 26 | 117 | | 27 | 8 | | 28 | 38 | | 29 | 23 | | 30 | 198 | | 31 | 29 | | 32 | 122 | | 33 | 50 | | 34 | 60 | | 35 | 357 | | 36 | 329 | | 37 | 457 | | 38 | 609 | | 39 | 4744 | | 40 | 1120 | | 41 | 591 | | 42 | 328 | | 43 | 148 | | 44 | 46 | | 45 | 10 | | 46 | 1 | | 47 | 1 | | 48 | 7 | | 50 | 2 | my desired output is | bin | Total | |-------|--------| | 0-13 | 3522 | | 14-26 | 793 | | 27-50 | 9278 | I tried df.column.value_counts(bins=3).sort_index() but got | bin | Total | |---------------------------------|-------| | (-0.051000000000000004, 16.667] | 3634 | | (16.667, 33.333] | 1149 | | (33.333, 50.0] | 8810 | I can get the correct result with a = df.column.value_counts().sort_index()[:14].sum() b = df.column.value_counts().sort_index()[14:27].sum() c = df.column.value_counts().sort_index()[28:].sum() print(a, b, c) Output: 3522 793 9270 But I am wondering if there is a pandas method that can do what I want. Any advice is very welcome. :-) | You can use pd.cut: pd.cut(df['N Months'], [0,13, 26, 50], include_lowest=True).value_counts() Update you should be able to pass custom bin to value_counts: df['N Months'].value_counts(bins = [0,13, 26, 50]) Output: N Months (-0.001, 13.0] 3522 (13.0, 26.0] 793 (26.0, 50.0] 9278 Name: Count, dtype: int64 | 6 | 21 |
65,113,251 | 2020-12-2 | https://stackoverflow.com/questions/65113251/why-are-sqlite3-shortcut-functions-called-nonstandard | I've been using sqlite3 in Python, where execute() creates some ambiguity. When I use: import sqlite3 A = sqlite3.connect('a') A.execute('command to be executed') help(A.execute) I got the output of help() as: ..... ..... Executes a SQL statement. Non-standard. But when I execute like this: import sqlite3 A = sqlite3.connect('a').cursor() A.execute('command to be executed') help(A.execute) I got the output of help() as: ..... ..... Executes a SQL statement. My doubt is: what does Non-standard refer to? Even the Python documentation uses this wording for execute(), executemany(), and executescript() on connection objects. I've even searched the web about nonstandard shortcuts in Python, but I didn't find any relevant information. Can anyone help me with this? | The "nonstandard" function is the execute method of the sqlite3.Connection class: This is a nonstandard shortcut that creates a cursor object by calling the cursor() method, calls the cursor's execute() method with the parameters given, and returns the cursor. "Standard" refers to PEP 249 -- Python Database API Specification v2.0 which the sqlite3 module follows. It does not specify an execute method for the Connection class, but the sqlite3 module provides it anyway, that's why it is called "nonstandard". PEP 249 only specifies the execute method of the Cursor class, which the sqlite3 module implements, of course. | 6 | 6 |
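A tiny sketch of what the nonstandard shortcut expands to, per the documentation quoted in the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Nonstandard shortcut on the Connection: it creates a cursor behind the scenes.
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")

# PEP 249-style equivalent: create the cursor explicitly, then execute on it.
cur = conn.cursor()
cur.execute("SELECT x FROM t")
print(cur.fetchall())  # [(1,)]
```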
65,112,585 | 2020-12-2 | https://stackoverflow.com/questions/65112585/pip-installation-stuck-in-infinite-loop-if-unresolvable-conflicts-in-dependencie | Pip installation is stuck in an infinite loop if there are unresolvable conflicts in dependencies. To reproduce, pip==20.3.0 and: pip install pyarrow==2.0.0 azureml-defaults==1.18.0 | Workarounds: Local environment: Downgrade pip to < 20.3 Conda environment created from yaml: This will be seen only if conda-forge is highest priority channel, anaconda channel doesn't have pip 20.3 (as of now). To mitigate the issue please explicitly specify pip<20.3 (!=20.3 or =20.2.4 pin to other version) as a conda dependency in the conda specification file AzureML experimentation: Follow the case above to make sure pinned pip resulted as a conda dependency in the environment object, either from yml file or programmatically | 15 | 13 |
65,111,601 | 2020-12-2 | https://stackoverflow.com/questions/65111601/what-is-the-difference-between-async-with-lock-and-with-await-lock | I have seen two ways of acquiring the asyncio Lock: async def main(lock): async with lock: async.sleep(100) and async def main(lock): with await lock: async.sleep(100) What is the difference between them? | The second form with await lock is deprecated since Python 3.7 and is removed in Python 3.9. Running it with Python 3.7 gives this warning: DeprecationWarning: 'with await lock' is deprecated use 'async with lock' instead Sources (scroll to the bottom): https://docs.python.org/3.7/library/asyncio-sync.html https://docs.python.org/3.9/library/asyncio-sync.html | 9 | 7 |
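A minimal runnable example of the supported form (my own sketch, not from the answer):

```python
import asyncio

async def worker(name, lock):
    async with lock:                 # the only supported form on Python 3.9+
        print(f"{name} holds the lock")
        await asyncio.sleep(0.1)

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(worker("a", lock), worker("b", lock))

asyncio.run(main())
```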
65,107,933 | 2020-12-2 | https://stackoverflow.com/questions/65107933/pytorch-model-training-cpu-memory-leak-issue | When I trained my PyTorch model on a GPU device, my Python script was killed out of the blue. Diving into the OS log files, I found the script was killed by the OOM killer because my CPU ran out of memory. It's very strange that I trained my model on a GPU device but ran out of CPU memory. Snapshot of OOM killer log file In order to debug this issue, I installed the Python memory profiler. Viewing the log file from the memory profiler, I found that when the column-wise -= operation occurred, my CPU memory gradually increased until the OOM killer killed my program. Snapshot of Python memory profiler It's very strange. I tried many ways to solve this issue. Finally, I found that if I detach the tensor before the assignment operation, it solves the issue. Amazingly it works, but I don't clearly understand why. Here is my original function code. def GeneralizedNabla(self, image): pad_size = 2 affinity = torch.zeros(image.shape[0], self.window_size**2, self.h, self.w).to(self.device) h = self.h+pad_size w = self.w+pad_size #pad = nn.ZeroPad2d(pad_size) image_pad = self.pad(image) for i in range(0, self.window_size**2): affinity[:, i, :, :] = image[:, :, :].detach() # initialization dy = int(i/5)-2 dx = int(i % 5)-2 h_start = pad_size+dy h_end = h+dy # if 0 <= dy else h+dy w_start = pad_size+dx w_end = w+dx # if 0 <= dx else w+dx affinity[:, i, :, :] -= image_pad[:, h_start:h_end, w_start:w_end].detach() self.Nabla=affinity return If anyone has any ideas, I will appreciate it very much, thank you. | Previously, when you did not use .detach() on your tensor, you were also accumulating the computation graph, and as you went on, you kept accumulating more and more until you ended up exhausting your memory to the point it crashed. When you do a detach(), you are effectively getting the data without the previously entangled history that's needed for computing the gradients. | 6 | 4 |
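The graph accumulation this answer describes can be reproduced with a much smaller sketch (my own illustration, not the poster's code): keeping results that still require grad keeps their entire history alive, while .detach() stores only the values.

```python
import torch

w = torch.randn(1000, 1000, requires_grad=True)
history = []

for step in range(100):
    out = (w * 2).sum()
    # history.append(out)           # keeps every step's graph alive, memory keeps growing
    history.append(out.detach())    # stores only the value, so each graph can be freed
```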
65,102,969 | 2020-12-2 | https://stackoverflow.com/questions/65102969/invalid-syntax-jose-py | I was trying to use the jose library for authentication in one of my Flask apps, using the import statement as follows: from jose import jwt But it throws the following error: Traceback (most recent call last): File "F:/XXX_XXX/xxxx-services-web/src/auth.py", line 6, in <module> from jose import jwt File "F:\Users\XXXX_XXXXX\AppData\Local\Programs\Python\Python37\lib\site-packages\jose.py", line 546 print decrypt(deserialize_compact(jwt), {'k':key}, ^ SyntaxError: invalid syntax Is this library outdated? | Installing python-jose instead of jose fixed my problem. https://pypi.org/project/python-jose/ | 22 | 44 |
65,102,579 | 2020-12-2 | https://stackoverflow.com/questions/65102579/send-and-receive-file-using-python-fastapi-and-requests | I'm trying to upload a file to a FastAPI server using requests. I've boiled the problem down to its simplest components. The client using requests: import requests files = {'file': ('foo.txt', open('./foo.txt', 'rb'))} response = requests.post('http://127.0.0.1:8000/file', files=files) print(response) print(response.json()) The server using fastapi: from fastapi import FastAPI, File, UploadFile import uvicorn app = FastAPI() @app.post('/file') def _file_upload(my_file: UploadFile = File(...)): print(my_file) if __name__ == "__main__": uvicorn.run("main:app", host="0.0.0.0", port=8000, log_level="debug") Packages installed: fastapi python-multipart uvicorn requests Client Output: <Response [422]> {'detail': [{'loc': ['query', 'my_file'], 'msg': 'field required', 'type': 'value_error.missing'}]} Server Output: INFO: 127.0.0.1:37520 - "POST /file HTTP/1.1" 422 Unprocessable Entity What am I missing here? | FastAPI expects the file in the my_file field, but you are sending it in the file field. It should be: import requests url = "http://127.0.0.1:8000/file" files = {'my_file': open('README.md', 'rb')} res = requests.post(url, files=files) Also, you don't need a tuple to manage the upload file (we're dealing with a simple upload, right?) | 17 | 27 |
65,102,013 | 2020-12-2 | https://stackoverflow.com/questions/65102013/add-line-break-after-every-20-characters-and-save-result-as-a-new-string | I have a string variable input = "A very very long string from user input", how can I loop through the string and add a line break \n after 20 characters then save the formatted string in a variable new_input? So far am only able to get the first 20 characters as such input[0:20] but how do you do this through the entire string and add line breaks at that point? | You probably want to do something like this inp = "A very very long string from user input" new_input = "" for i, letter in enumerate(inp): if i % 20 == 0: new_input += '\n' new_input += letter # this is just because at the beginning too a `\n` character gets added new_input = new_input[1:] | 5 | 6 |
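An equivalent slice-based one-liner (an alternative sketch, not from the answer) that produces exact 20-character chunks:

```python
inp = "A very very long string from user input"
new_input = "\n".join(inp[i:i + 20] for i in range(0, len(inp), 20))
print(new_input)
```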
65,093,883 | 2020-12-1 | https://stackoverflow.com/questions/65093883/how-do-i-turn-off-the-evaluating-plt-show-did-not-finish-after-3-00s-seconds | I often debug my Python code by plotting NumPy arrays in the vscode debugger. Often I spend more than 3s looking at a plot. When I do, vscode prints the extremely long warning below. It's very annoying because I then have to scroll up a lot all the time to see previous debugging outputs. Where is this PYDEVD_WARN_EVALUATION_TIMEOUT variable? How do I turn this off? I included the warning below for completeness, thanks a lot for your help! Evaluating: plt.show() did not finish after 3.00s seconds. This may mean a number of things: This evaluation is really slow and this is expected. In this case it's possible to silence this error by raising the timeout, setting the PYDEVD_WARN_EVALUATION_TIMEOUT environment variable to a bigger value. The evaluation may need other threads running while it's running: In this case, it's possible to set the PYDEVD_UNBLOCK_THREADS_TIMEOUT environment variable so that if after a given timeout an evaluation doesn't finish, other threads are unblocked or you can manually resume all threads. Alternatively, it's also possible to skip breaking on a particular thread by setting a pydev_do_not_trace = True attribute in the related threading.Thread instance (if some thread should always be running and no breakpoints are expected to be hit in it). The evaluation is deadlocked: In this case you may set the PYDEVD_THREAD_DUMP_ON_WARN_EVALUATION_TIMEOUT environment variable to true so that a thread dump is shown along with this message and optionally, set the PYDEVD_INTERRUPT_THREAD_TIMEOUT to some value so that the debugger tries to interrupt the evaluation (if possible) when this happens. | I found a way to adapt the launch.json which takes care of this problem. { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "debugpy", "request": "launch", "program": "${file}", "env": {"PYTHONPATH": "${workspaceRoot}", "PYDEVD_WARN_EVALUATION_TIMEOUT": "500"}, "cwd": "${workspaceFolder}", "console": "integratedTerminal" } ] } | 21 | 32 |
65,034,898 | 2020-11-27 | https://stackoverflow.com/questions/65034898/pip-install-does-not-find-package-but-pip-search-does | I want to install the hdbcli package (SAP HANA connector). When I search with pip, the package is found, but when I want to install it, pip can't find the package. Specifying the exact version also yields no results. pip install hdbcli==2.6.61 How do I solve this? > pip search hbdcli hdbcli (2.6.61) - SAP HANA Python Client > pip install hdbcli ERROR: Could not find a version that satisfies the requirement hdbcli (from versions: none) ERROR: No matching distribution found for hdbcli | This usually means pip could not find any distribution of that project that would be compatible with your Python environment: Python implementation (CPython, or PyPy, etc.) Python interpreter major and minor version (3.10, or 3.11, etc.) operating system (Windows, or Linux, etc.) CPU bitness (64 bits or 32 bits) version of glibc and other libraries (that's why pip on Alpine Linux might ignore Linux distributions that require glibc) This project does not seem to have published any source distribution (sdist) ever. So it has to be a compatible wheel. Are you by chance on Python 3.9? As far as I can tell there are no wheel distributions for Python 3.9. Use path/to/pythonX.Y -m pip debug --verbose to get a list of "Compatible tags". Then compare this list with the list of available wheel distributions for that project. | 12 | 12 |
65,082,448 | 2020-11-30 | https://stackoverflow.com/questions/65082448/specify-pandas-index-name-in-the-constructor | Can I specify a pandas DataFrame index name in the constructor? Said otherwise, I would like to do the following: df = pd.DataFrame({"a":[1,2],"b":[3,4]}) df.rename_axis(index='myindex', inplace=True) with a single line of code (by calling only the constructor) | You can pass an index to the DataFrame constructor with the given name that you want. import pandas as pd df = pd.DataFrame({"a":[1,2],"b":[3,4]}, index=pd.Index([], name='myIndex')) df a b myIndex 0 1 3 1 2 4 | 13 | 13 |
65,031,764 | 2020-11-27 | https://stackoverflow.com/questions/65031764/posewarping-how-to-vectorize-this-for-loop-z-buffer | I'm trying to warp a frame from view1 to view2 using ground truth depth map, pose information, and camera matrix. I've been able to remove most of the for-loops and vectorize it, except one for-loop. When warping, multiple pixels in view1 may get mapped to a single location in view2, due to occlusions. In this case, I need to pick the pixel with the lowest depth value (foreground object). I'm not able to vectorize this part of the code. Any help to vectorize this for loop is appreciated. Context:I'm trying to warp an image into a new view, given ground truth pose, depth, and camera matrix. After computing warped locations, I'm rounding them off. Any suggestions to implement inverse bilinear interpolation are also welcome. My images are of full HD resolution. Hence it is taking a lot of time to warp the frames to the new view. If I can vectorize, I'm planning to convert the code to TensorFlow or PyTorch and run it on a GPU. Any other suggestions to speed up warping, or existing implementations are also welcome. Code: def warp_frame_04(frame1: numpy.ndarray, depth: numpy.ndarray, intrinsic: numpy.ndarray, transformation1: numpy.ndarray, transformation2: numpy.ndarray, convert_to_uint: bool = True, verbose_log: bool = True): """ Vectorized Forward warping. Nearest Neighbor. Offset requirement of warp_frame_03() overcome. mask: 1 if pixel found, 0 if no pixel found Drawback: Nearest neighbor, collision resolving not vectorized """ height, width, _ = frame1.shape assert depth.shape == (height, width) transformation = numpy.matmul(transformation2, numpy.linalg.inv(transformation1)) y1d = numpy.array(range(height)) x1d = numpy.array(range(width)) x2d, y2d = numpy.meshgrid(x1d, y1d) ones_2d = numpy.ones(shape=(height, width)) ones_4d = ones_2d[:, :, None, None] pos_vectors_homo = numpy.stack([x2d, y2d, ones_2d], axis=2)[:, :, :, None] intrinsic_inv = numpy.linalg.inv(intrinsic) intrinsic_4d = intrinsic[None, None] intrinsic_inv_4d = intrinsic_inv[None, None] depth_4d = depth[:, :, None, None] trans_4d = transformation[None, None] unnormalized_pos = numpy.matmul(intrinsic_inv_4d, pos_vectors_homo) world_points = depth_4d * unnormalized_pos world_points_homo = numpy.concatenate([world_points, ones_4d], axis=2) trans_world_homo = numpy.matmul(trans_4d, world_points_homo) trans_world = trans_world_homo[:, :, :3] trans_norm_points = numpy.matmul(intrinsic_4d, trans_world) trans_pos = trans_norm_points[:, :, :2, 0] / trans_norm_points[:, :, 2:3, 0] trans_pos_int = numpy.round(trans_pos).astype('int') # Solve occlusions a = trans_pos_int.reshape(-1, 2) d = depth.ravel() b = numpy.unique(a, axis=0, return_index=True, return_counts=True) collision_indices = b[1][b[2] >= 2] # Unique indices which are involved in collision for c1 in tqdm(collision_indices, disable=not verbose_log): cl = a[c1].copy() # Collision Location ci = numpy.where((a[:, 0] == cl[0]) & (a[:, 1] == cl[1]))[0] # Colliding Indices: Indices colliding for cl cci = ci[numpy.argmin(d[ci])] # Closest Collision Index: Index of the nearest point among ci a[ci] = [-1, -1] a[cci] = cl trans_pos_solved = a.reshape(height, width, 2) # Offset both axes by 1 and set any out of frame motion to edge. 
Then crop 1-pixel thick edge trans_pos_offset = trans_pos_solved + 1 trans_pos_offset[:, :, 0] = numpy.clip(trans_pos_offset[:, :, 0], a_min=0, a_max=width + 1) trans_pos_offset[:, :, 1] = numpy.clip(trans_pos_offset[:, :, 1], a_min=0, a_max=height + 1) warped_image = numpy.ones(shape=(height + 2, width + 2, 3)) * numpy.nan warped_image[trans_pos_offset[:, :, 1], trans_pos_offset[:, :, 0]] = frame1 cropped_warped_image = warped_image[1:-1, 1:-1] mask = numpy.isfinite(cropped_warped_image) cropped_warped_image[~mask] = 0 if convert_to_uint: final_warped_image = cropped_warped_image.astype('uint8') else: final_warped_image = cropped_warped_image mask = mask[:, :, 0] return final_warped_image, mask Code Explanation I'm using the equations[1,2] to get pixel locations in view2 Once I have the pixel locations, I need to figure out if there are any occlusions and if so, I have to pick the foreground pixels. `b = numpy.unique(a, axis=0, return_index=True, return_counts=True)` gives me unique locations. If multiple pixels from view1 map to a single pixel in view2 (collision), `return_counts` will give a value greater than 1. `collision_indices = b[1][b[2] >= 2]` gives indices which are involved in collision. Note that this gives only one index per collision. For each of such collision points, `ci = numpy.where((a[:, 0] == cl[0]) & (a[:, 1] == cl[1]))[0]` provides indices of all pixels from view1 which map to the same point in view2. `cci = ci[numpy.argmin(d[ci])]` gives the pixel index with lowest depth value. `a[ci] = [-1, -1]` and `a[cci] = cl` maps all other background pixels to location (-1,-1) which is out of frame and hence will be ignored. [1] https://i.sstatic.net/s1D9t.png [2] https://dsp.stackexchange.com/q/69890/32876 | I implemented this as follows. Instead of picking the nearest point (min), I used soft-min, i.e. I took the weighted average of all the colliding points where I made sure that a small difference in depth leads to a large difference in weights and that the nearest depth has the highest weight. I implemented the sum (in soft-min) using np.add.at as suggested here. I was able to further port it to PyTorch using torch.Tensor.index_put_ as suggested here. Finally, I replaced the rounding-off (nearest neighbour interpolation) with bilinear splatting (inverse bilinear interpolation). Both numpy and torch implementations are available here. | 8 | 1 |
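A stripped-down sketch of the soft z-buffering this answer describes: scatter depth-weighted colours to the warped integer locations with np.add.at and normalise, so that collisions are blended with the nearest point dominating. The function name and the exponential weighting are my own illustrative choices, and out-of-frame points are simply clamped to the border here; the full NumPy and PyTorch implementations are in the repository the answer links to.

```python
import numpy as np

def splat_soft_min(frame1, depth, trans_pos_int, height, width):
    """Scatter frame1 pixels to trans_pos_int, blending collisions by nearness."""
    x = np.clip(trans_pos_int[..., 0].reshape(-1), 0, width - 1)
    y = np.clip(trans_pos_int[..., 1].reshape(-1), 0, height - 1)
    d = depth.reshape(-1)
    weights = np.exp(-d / (d.mean() + 1e-8))      # nearer points get larger weights
    acc = np.zeros((height, width, 3))
    norm = np.zeros((height, width))
    np.add.at(acc, (y, x), frame1.reshape(-1, 3) * weights[:, None])
    np.add.at(norm, (y, x), weights)
    mask = norm > 0
    acc[mask] /= norm[mask][:, None]              # weighted average at each target pixel
    return acc, mask
```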
65,037,641 | 2020-11-27 | https://stackoverflow.com/questions/65037641/plotly-how-to-add-multiple-y-axes | I have data with 5 different columns and their value varies from each other. Actual gen Storage Solar Gen Total Gen Frequency 1464 1838 1804 18266 51 2330 2262 518 4900 51 2195 923 919 8732 49 2036 1249 1316 3438 48 2910 534 1212 4271 47 857 2452 1272 6466 50 2331 990 2729 14083 51 2604 767 2730 19037 47 993 2606 705 17314 51 2542 213 548 10584 52 2030 942 304 11578 52 562 414 2870 840 52 1111 1323 337 19612 49 1863 2498 1992 18941 48 1575 2262 1576 3322 48 1223 657 661 10292 47 1850 1920 2986 10130 48 2786 1119 933 2680 52 2333 1245 1909 14116 48 1606 2934 1547 13767 51 So in from this data I want to plot a graph with 3 y-axis. One for the frequency, second for the Total Gen and third is for Actual gen, Storage and Solar Gen. Frequency should be on the secondary Y-axis(Right side) and the Rest of them should be on the left side. For frequency as you can see the values are very random between 47 to 52 that's why it should be on the right side, like this: For Total Gen value are very high as compared to others as they are from 100-20000 so that's I can't merge them with others, something like this: Here I want: Y-axis title 1 = Actual gen, Storage, and Solar gen Y-axis title 2 = Total gen Y-axis title 3 = Frequency My approach: import logging import pandas as pd import plotly.graph_objs as go import plotly.offline as pyo import xlwings as xw from plotly.subplots import make_subplots app = xw.App(visible=False) try: wb = app.books.open('2020 10 08 0000 (Float).xlsx') sheet = wb.sheets[0] actual_gen = sheet.range('A2:A21').value frequency = sheet.range('E2:E21').value storage = sheet.range('B2:B21').value total_gen = sheet.range('D2:D21').value solar_gen = sheet.range('C2:C21').value except Exception as e: logging.exception("Something awful happened!") print(e) finally: app.quit() app.kill() # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # Add traces fig.add_trace( go.Scatter(y=storage, name="BESS(KW)"), ) fig.add_trace( go.Scatter(y=actual_gen, name="Act(KW)"), ) fig.add_trace( go.Scatter(y=solar_gen, name="Solar Gen") ) fig.add_trace( go.Scatter(x=x_values, y=total_gen, name="Total Gen",yaxis = 'y2') ) fig.add_trace( go.Scatter(y=frequency, name="Frequency",yaxis = 'y1'), ) fig.update_layout( title_text = '8th oct BESS', yaxis2=dict(title="BESS(KW)",titlefont=dict(color="red"), tickfont=dict(color="red")), yaxis3=dict(title="Actual Gen(KW)",titlefont=dict(color="orange"),tickfont=dict(color="orange"), anchor="free", overlaying="y2", side="left"), yaxis4=dict(title="Solar Gen(KW)",titlefont=dict(color="pink"),tickfont=dict(color="pink"), anchor="x2",overlaying="y2", side="left"), yaxis5=dict(title="Total Gen(KW)",titlefont=dict(color="cyan"),tickfont=dict(color="cyan"), anchor="free",overlaying="y2", side="left"), yaxis6=dict(title="Frequency",titlefont=dict(color="purple"),tickfont=dict(color="purple"), anchor="free",overlaying="y2", side="right")) fig.show() Can someone please help? | Here is an example of how multi-level y-axes can be created. Essentially, the keys to this are: Create a key in the layout dict, for each axis, then assign a trace to the that axis. Set the xaxis domain to be narrower than [0, 1] (for example [0.2, 1]), thus pushing the left edge of the graph to the right, making room for the multi-level y-axis. A link to the official Plotly docs on the subject. 
To make reading the data easier for this demonstration, I have taken the liberty of storing your dataset as a CSV file, rather than Excel - then used the pandas.read_csv() function to load the dataset into a pandas.DataFrame, which is then passed into the plotting functions as data columns. Example: Read the dataset: df = pd.read_csv('energy.csv') Sample plotting code: import plotly.io as pio layout = {'title': '8th Oct BESS'} traces = [] traces.append({'y': df['storage'], 'name': 'Storage'}) traces.append({'y': df['actual_gen'], 'name': 'Actual Gen'}) traces.append({'y': df['solar_gen'], 'name': 'Solar Gen'}) traces.append({'y': df['total_gen'], 'name': 'Total Gen', 'yaxis': 'y2'}) traces.append({'y': df['frequency'], 'name': 'Frequency', 'yaxis': 'y3'}) layout['xaxis'] = {'domain': [0.12, 0.95]} layout['yaxis1'] = {'title': 'Actual Gen, Storage, Solar Gen', 'titlefont': {'color': 'orange'}, 'tickfont': {'color': 'orange'}} layout['yaxis2'] = {'title': 'Total Gen', 'side': 'left', 'overlaying': 'y', 'anchor': 'free', 'titlefont': {'color': 'red'}, 'tickfont': {'color': 'red'}} layout['yaxis3'] = {'title': 'Frequency', 'side': 'right', 'overlaying': 'y', 'anchor': 'x', 'titlefont': {'color': 'purple'}, 'tickfont': {'color': 'purple'}} pio.show({'data': traces, 'layout': layout}) Graph: Given the nature of these traces, they overlay each other heavily, which could make graph interpretation difficult. A couple of options are available: Change the range parameter for each y-axis so the axis only occupies a portion of the graph. For example, if a dataset ranges from 0-5, set the corresponding yaxis range parameter to [-15, 5], which will push that trace near the top of the graph. Consider using subplots, where like-traces can be grouped ... or each trace can have it's own graph. Here are Plotly's docs on subplots. Comments (TL;DR): The example code shown here uses the lower-level Plotly API, rather than a convenience wrapper such as graph_objects or express. The reason is that I (personally) feel it's helpful to users to show what is occurring 'under the hood', rather than masking the underlying code logic with a convenience wrapper. This way, when the user needs to modify a finer detail of the graph, they will have a better understanding of the lists and dicts which Plotly is constructing for the underlying graphing engine (orca). | 9 | 8 |
65,044,870 | 2020-11-27 | https://stackoverflow.com/questions/65044870/how-to-extract-info-within-a-shadow-root-open-using-selenium-python | I have the following url related to an online store https://www.tiendasjumbo.co/buscar?q=mani and I can't extract the product label and other fields: from selenium import webdriver import time from random import randint driver = webdriver.Firefox(executable_path= "C:\Program Files (x86)\geckodriver.exe") driver.implicitly_wait(10) time.sleep(4) url = "https://www.tiendasjumbo.co/buscar?q=mani" driver.maximize_window() driver.get(url) driver.find_element_by_xpath('//h1[@class="impulse-title"]') What am I doing wrong? I also tried switching to the iframes, but there was no way to achieve my goal. Any help is welcome. | The products within the website https://www.tiendasjumbo.co/buscar?q=mani are located within a #shadow-root (open). Solution To extract the product label you have to use shadowRoot.querySelector() and you can use the following Locator Strategy: Code Block: driver.get('https://www.tiendasjumbo.co/buscar?q=mani') item = driver.execute_script("return document.querySelector('impulse-search').shadowRoot.querySelector('div.group-name-brand h1.impulse-title span.formatted-text')") print(item.text) Console Output: La especial mezcla de nueces, maní, almendras y marañones x 450 g References You can find a couple of relevant discussions in: Unable to locate the Sign In element within #shadow-root (open) using Selenium and Python How to locate the First name field within shadow-root (open) within the website https://www.virustotal.com using Selenium and Python Microsoft Edge and Google Chrome version 96 Chrome v96 has changed the shadow root return values for Selenium. Some helpful links: Java - full example on GitHub Shadow DOM in Selenium Python - full example on GitHub Shadow DOM and Selenium with Chromium 96 C# - full example on GitHub Shadow DOM in Ruby Selenium Ruby - full example on GitHub | 8 | 13 |
65,010,639 | 2020-11-25 | https://stackoverflow.com/questions/65010639/in-a-coockiecutter-template-add-folder-only-if-choice-variable-has-a-given-valu | I am creating a cookiecutter template and would like to add a folder (and the files it contains) only if a variable has a given value. For example cookiecutter.json: { "project_slug":"project_folder" "i_want_this_folder":['y','n'] } and my template structure looks like: template βββ {{ cookiecutter.project_slug }} βββ config.ini βββ data β βββ data.csv βββ {% if cookiecutter.i_want_this_folder == 'y' %}my_folder{% endif %} βββ some_files However, when running cookiecutter template and choose 'n' I get an error Error: "~/project_folder" directory already exists Is my syntax for the folder name correct? | I was facing the same issue having the option to add or no folders with different contents (all folders can exist at the same time). The structure of the project is the following: βββ {{cookiecutter.project_slug}} β β β βββ folder_1_to_add_or_no β β βββ file1.py β β βββ file2.py β β βββ file3.txt β β β βββ folder_2_to_add_or_no β β βββ image.png β β βββ data.csv β β βββ file.txt β β β βββ folder_3_to_add_or_no β βββ file1.py β βββ some_dir β βββ hooks β βββ post_gen_project.py β βββ cookiecutter.json where the cookiecutter.json contains the following { "project_owner": "some-name", "project_slug": "some-project", "add_folder_one": ["yes", "no"], "add_folder_two": ["yes", "no"], "add_folder_three": ["yes", "no"], } as each directory folder_X_to_add_or_no contains different files, the trick is to remove those folders that the answer is "no", you can do this through a hook. Inside the post_gen_project.py file # post_gen_project.py import os import shutil from pathlib import Path # Current path path = Path(os.getcwd()) # Source path parent_path = path.parent.absolute() def remove(filepath): if os.path.isfile(filepath): os.remove(filepath) elif os.path.isdir(filepath): shutil.rmtree(filepath) folders_to_add = [ 'folder_one', 'folder_two', 'folder_three' ] for folder in folders_to_add: # Check if user wants the folder cookiecutter_var = '{{cookiecutter.' + f'{folder}' + '}}' add_folder = cookiecutter_var == 'yes' # User does not want folder so remove it if not add_folder: folder_path = os.path.join( parent_path, '{{cookiecutter.project_slug}}', 'folder' ) remove(folder_path) Now the folders the user choose not to add will be removed. Select add_folder_one: 1 - yes 2 - no Choose from 1, 2 [1]: References This answer is based on briancapello answer on this github issue | 6 | 3 |
65,026,852 | 2020-11-26 | https://stackoverflow.com/questions/65026852/set-default-value-for-selectbox | I am new to streamlit. I tried to set a default value for sidebar.selectbox. The code is below. I appreciate the help! Thank you in advance. st.sidebar.header('Settings') fichier = st.sidebar.selectbox('Dataset', ('djia', 'msci', 'nyse_n', 'nyse_o', 'sp500', 'tse')) window_ANTICOR = st.sidebar.selectbox('Window ANTICOR', ['<select>',3, 5, 10, 15, 20, 30]) if window_ANTICOR == '<select>': window_ANTICOR == 30 window_OLMAR = st.sidebar.selectbox('Window OLMAR', ['<select>',3, 5, 10, 15, 20, 30]) if window_OLMAR == '<select>': window_OLMAR == 5 eps_OLMAR = st.sidebar.selectbox('Eps OLMAR', ['<select>', 3, 5, 10, 15, 20, 30]) if eps_OLMAR == '<select>': eps_OLMAR == 10 eps_PAMR = st.sidebar.selectbox('Eps PAMR', ['<select>',0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]) if eps_PAMR == '<select>': eps_PAMR == 0.5 variant = st.sidebar.selectbox('Variant PAMR', (0, 1, 2)) if variant == '<select>': eps_PAMR == 0 | Use the index keyword of the selectbox widget. Pass the index of the value in the options list that you want to be the default choice. E.g. if you wanted to set the default choice of the selectbox labeled 'Window ANTICOR' to 30 (which you appear to be trying to do), you could simply do this: values = ['<select>',3, 5, 10, 15, 20, 30] default_ix = values.index(30) window_ANTICOR = st.sidebar.selectbox('Window ANTICOR', values, index=default_ix) Source: https://docs.streamlit.io/library/api-reference/widgets/st.selectbox | 17 | 25 |
64,997,553 | 2020-11-25 | https://stackoverflow.com/questions/64997553/python-requires-ipykernel-to-be-installed | I encounter an issue when I use the Jupyter Notebook in VS code. The screen shows "Python 3.7.8 requires ipykernel to be installed". I followed the pop-up to install ipykernel. It still does not work. The screenshot is attached. It bothers me a lot. Could anyone help me with it? Tons of thanks. | I had the same issue and spent the whole day trying to resolve it. What worked for me was installing the Jupyter dependencies for anaconda: > conda install jupyter I installed this in my base environment. After this VSCode worked without any errors. | 64 | 28 |
65,074,811 | 2020-11-30 | https://stackoverflow.com/questions/65074811/how-to-open-ipynb-file-in-spyder | without using iSpyder DOS shell commands, how can an .ipynb (Jupyter Notebook) be opened directly into Spyder on Windows? Even the online Jupyter Notebook site prompts for a relative directory path where the file is stored. Why isn't there something that just loads the Notebook how it's supposed to look without typing a bunch of directory commands, and why does Spyder's RUN button become greyed out when it loads the .ipynb file? I have no idea what the .ipynb file format is compared to regular .py files Opening lines of the .ipynb when loaded in Spyder are: { "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { This does not look like python code whatsoever | You may check out https://github.com/spyder-ide/spyder-notebook Once you install this, you can open native .ipynb files in spyder From the website: Spyder plugin to use Jupyter notebooks inside Spyder. Currently it supports basic functionality such as creating new notebooks, opening any notebook in your filesystem and saving notebooks at any location. You can also use Spyder's file switcher to easily switch between notebooks and open an IPython console connected to the kernel of a notebook to inspect its variables in the Variable Explorer. | 15 | 11 |
65,095,614 | 2020-12-1 | https://stackoverflow.com/questions/65095614/macbook-m1-and-python-libraries | Is new macbook m1 suitable for Data Science? Do Data Science python libraries such as pandas, numpy, sklearn etc work on the macbook m1 (Apple Silicon) chip and how fast compared to the previous generation intel based macbooks? | This GitHub repository has lots of useful information about the Apple M1 chip and data science in Python https://github.com/neurolabusc/AppleSiliconForNeuroimaging. I have included representative quotes below. This repository focuses on software for brain imaging analysis, but the takeaways are broad. Updated on 27 September 2021. TL;DR Unless you are a developer, I would strongly discourage scientists from purchasing an Apple Silicon computer in the short term. Productive work will require core tools to be ported. In the longer term, this architecture could have a profound impact on science. In particular if Apple develops servers that exploit the remarkable power efficiency of their CPUs (competing with AWS Graviton) and leverage the Metal language and GPUs for compute tasks (competing with NVidia's Tesla products and CUDA language). Limitations facing Apple Silicon The infrastructure scientists depend on is not yet available for this architecture. Here are some of the short term limitations: Native R can use the unstable R-devel 4.1. However, RStudio will require gdb. Julia does not yet natively support Apple Silicon. Python natively supports Apple Silicon. However, some modules have issues or are slow. See the NiBabel section below. Scientific modules of Python, R, and Julia require a Fortran compiler, which is currently only available in experimental form. While Apple's C Clang compiler generates fast native code, many scientific tools will need to wait until gcc and gFortran compilers are available. Tools like VirtualBox, VMware Fusion, Boot Camp and Parallels do not yet support Apple Silicon. Many users rely on these tools for using Windows and Linux programs on their macOS computers. Docker can support Apple Silicon. However, attempts to run Intel-based containers on Apple Silicon machines can crash as QEMU sometimes fails to run the container. These containers are popular with many neuroimaging tools. Homebrew 3.0 supports Apple Silicon. However, many homebrew components do not support Apple Silicon. MATLAB is used by many scientific tools, including SPM. While Matlab works in translation, it is not yet available natively (and mex files will need to be recompiled). FSL and AFNI do not yet natively support this architecture. While code may work in translation, creating some native tools must wait for compilers and libraries to be updated. This will likely require months. The current generation M1 only has four high performance cores. Most neuroimaging pipelines combine sequential tasks that only require a single core (where the M1 excels) as well as parallel tasks. Those parallel tasks could exploit a CPU with more cores (as shown in the pigz and niimath tests below). Bear in mind that this mixture of serial and parallel code faces Amdahls law, with diminishing returns for extra cores. The current generation M1 has a maximum of 16 Gb of RAM. Neuroimaging datasets often have large memory demands (especially multi-band accelerated functional, resting-state and diffusion datasets). In general, the M1 and Intel-based Macs have identical OpenGL compatibility, with the M1 providing better performance than previous integrated solutions. 
However, there are corner cases that may break OpenGL tools. Here I describe four limitations. First, OpenGL geometry shaders are not supported (there is no Metal equivalent). Second, the new retina displays support wide color with 16 bitsPerSample that can cause issues for code that assumes 32-bit RGBA textures (such as the text in this Apple example code). Third, textures can be handled differently. Fourth, use of the GL_INT_2_10_10_10_REV data type will cripple performance (tested on macOS 11.2). This is unfortunate, as Apple advocated for this datatype once upon a time. In this case, code must be changed to use the less compact GL_HALF_FLOAT, which is natively supported by the M1 GPU. This impacts neuroimaging scientists visualizing DTI tractography where GPU resources can be overwhelmed. | 24 | 45 |
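A small check that complements the compatibility notes above: whether a given Python interpreter is running natively on Apple Silicon or under Rosetta 2 translation can normally be read from the platform module (a native build reports arm64, while an Intel build running under Rosetta reports x86_64). This is only a quick sketch, not a full compatibility test:

    import platform
    import sys

    # A native Apple Silicon interpreter normally reports "arm64";
    # an Intel build running under Rosetta 2 reports "x86_64".
    print("machine:", platform.machine())
    print("processor:", platform.processor())
    print("python build:", sys.version)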
65,031,238 | 2020-11-27 | https://stackoverflow.com/questions/65031238/why-doesnt-os-systemcls-clear-the-recent-output | I am always using system('cls') in C language before using Dev-C++. Now I am studying Python and using Pycharm 2020.2.3. I tried to use os.system('cls'). Here is my program: import os print("clear screen") n = int(input("")) if n == 1: os.system('cls') There is no error in my program but it is not clearing the recent output. This is the output of my program: What seems to be the problem? Why is it not clearing the recent output? | PyCharm displays the output of your running module using the output console. In order for your terminal commands under os.system() to work, you need to emulate your terminal inside the output console. Select 'Edit Configurations' from the 'Run' menu. Under the 'Execution' section, select 'Emulate terminal in output console' JetBrains' Sergey Karpov adds: Our Run window doesn't support some of the things that one can do in the terminal. One of them is clearing the output. When trying to clear the output in a 'non-terminal' output window, PyCharm even shows that the TERM environment variable is not set. Setting that variable manually may help in some cases, but other terminal-specific things are still missing, which is why we have an option to emulate the terminal in the Run window. | 8 | 8 |
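For reference, a portable version of the snippet from that question; as the answer explains, it only has a visible effect when the output console actually emulates a terminal:

    import os

    def clear_screen():
        # "cls" exists only on Windows; other platforms use "clear".
        os.system("cls" if os.name == "nt" else "clear")

    print("clear screen")
    n = int(input(""))
    if n == 1:
        clear_screen()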
65,002,585 | 2020-11-25 | https://stackoverflow.com/questions/65002585/connection-object-has-no-attribute-sftp-live-when-pysftp-connection-fails | I'd like to catch nicely the error when "No hostkey for host *** is found" and give an appropriate message to the end user. I tried this: import pysftp, paramiko try: with pysftp.Connection('1.2.3.4', username='root', password='') as sftp: sftp.listdir() except paramiko.ssh_exception.SSHException as e: print('SSH error, you need to add the public key of your remote in your local known_hosts file first.', e) but unfortunately the output is not very nice: SSH error, you need to add the public key of your remote in your local known_hosts file first. No hostkey for host 1.2.3.4 found. Exception ignored in: <function Connection.__del__ at 0x00000000036B6D38> Traceback (most recent call last): File "C:\Python37\lib\site-packages\pysftp\__init__.py", line 1013, in __del__ self.close() File "C:\Python37\lib\site-packages\pysftp\__init__.py", line 784, in close if self._sftp_live: AttributeError: 'Connection' object has no attribute '_sftp_live' How to nicely avoid these last lines / this "exception ignored" with a try: except:? | The analysis by @reverse_engineer is correct. However: It seems that an additional attribute, self._transport, also is defined too late. The problem can be temporarily corrected until a permanent fix comes by subclassing the pysftp.Connection class as follows: import pysftp import paramiko class My_Connection(pysftp.Connection): def __init__(self, *args, **kwargs): self._sftp_live = False self._transport = None super().__init__(*args, **kwargs) try: with My_Connection('1.2.3.4', username='root', password='') as sftp: l = sftp.listdir() print(l) except paramiko.ssh_exception.SSHException as e: print('SSH error, you need to add the public key of your remote in your local known_hosts file first.', e) Update I could not duplicate this error on my desktop. However, I see in the source for pysftp in the code where it initializes its _cnopts attribute with self._cnopts = cnopts or CnOpts() where cnopts is a keyword parameter to the pysftp.Connection constructor and there is a possibilty of the CnOpts constructor throwing a HostKeysException exception if no host keys are found resulting in the _cnopts attribute not being set. Try the following updated code and let me know if it works: import pysftp import paramiko class My_Connection(pysftp.Connection): def __init__(self, *args, **kwargs): try: if kwargs.get('cnopts') is None: kwargs['cnopts'] = pysftp.CnOpts() except pysftp.HostKeysException as e: self._init_error = True raise paramiko.ssh_exception.SSHException(str(e)) else: self._init_error = False self._sftp_live = False self._transport = None super().__init__(*args, **kwargs) def __del__(self): if not self._init_error: self.close() try: with My_Connection('1.2.3.4', username='root', password='') as sftp: l = sftp.listdir() print(l) except paramiko.ssh_exception.SSHException as e: print('SSH error, you need to add the public key of your remote in your local known_hosts file first.', e) | 18 | 12 |
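A compact usage sketch of the same idea without subclassing, based on the answer's analysis: building CnOpts before the Connection means that a missing known_hosts file fails early, before any half-initialised Connection object exists for __del__ to trip over. If the file exists but lacks this particular host, the answer's subclass (or a newer pysftp release) is still needed to silence the __del__ noise. The host and credentials below are the placeholders from the question:

    import pysftp
    import paramiko

    try:
        cnopts = pysftp.CnOpts()  # raises HostKeysException if no usable known_hosts is found
        with pysftp.Connection('1.2.3.4', username='root', password='',
                               cnopts=cnopts) as sftp:
            print(sftp.listdir())
    except (pysftp.HostKeysException, paramiko.ssh_exception.SSHException) as e:
        print('SSH error, you need to add the public key of your remote '
              'in your local known_hosts file first.', e)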
65,064,740 | 2020-11-29 | https://stackoverflow.com/questions/65064740/error-when-trying-to-use-conda-on-visual-studio-code-conda-the-term-conda | I am trying to build a basic machine learning algorithm, and to do so I am using the Anaconda interpreter for Python. However, even though Visual Studio Code appears to have recognized Conda as the interpreter, and I have the Anaconda 3 shell working as a separate application, I cannot get Conda to work in Visual Studio Code. Whenever I try to check for Conda, I get the following error: conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + conda activate base + ~~~~~ + CategoryInfo : ObjectNotFound: (conda:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException I have tried the fixes linked here: 'Conda' is not recognized as internal or external command However, they did not work for me. I tried setting Conda to my path, yet I still got the same error. | Try the following: Run Anaconda/Miniconda Activate the environment there: conda activate your-env Start Visual Studio Code from the Anaconda/Miniconda terminal: code Then Visual Studio Code should recognize conda: | 9 | 21 |
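As a quick, non-authoritative diagnostic to go with that answer, you can check from Python whether the shell that launched it can see a conda executable at all; the PATH seen by Visual Studio Code's integrated terminal is usually the culprit:

    import shutil
    import subprocess

    conda_path = shutil.which("conda")
    if conda_path is None:
        print("conda is not on the PATH of this shell/interpreter")
    else:
        print("conda found at:", conda_path)
        result = subprocess.run([conda_path, "--version"],
                                capture_output=True, text=True)
        print(result.stdout.strip() or result.stderr.strip())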
65,012,601 | 2020-11-25 | https://stackoverflow.com/questions/65012601/attributeerror-simpleimputer-object-has-no-attribute-validate-data-in-pyca | I am using PyCaret and get an error. AttributeError: 'SimpleImputer' object has no attribute '_validate_data' Trying to create a basic instance. # Create a basic PyCaret instance import pycaret from pycaret.regression import * mlb_pycaret = setup(data = pycaret_df, target = 'pts', train_size = 0.8, numeric_features = ['home', 'first_time_pitcher'], session_id = 123) All my variables are numeric (I coerced two of them, which are boolean). My target variable is label and this is by default. I also installed PyCaret, imported its regression, and re-installed scikit learn, imported SimpleImputer as from sklearn.impute import SimpleImputer OBP_avg Numeric SLG_avg Numeric SB_avg Numeric RBI_avg Numeric R_avg Numeric home Numeric first_time_pitcher Numeric park_ratio_OBP Numeric park_ratio_SLG Numeric SO_avg_p Numeric pts_500_parkadj_p Numeric pts_500_parkadj Numeric SLG_avg_parkadj Numeric OPS_avg_parkadj Numeric SLG_avg_parkadj_p Numeric OPS_avg_parkadj_p Numeric pts_BxP Numeric SLG_BxP Numeric OPS_BxP Numeric whip_SO_BxP Numeric whip_SO_B Numeric whip_SO_B_parkadj Numeric order Numeric ops x pts_500 order15 Numeric ops x pts_500 parkadj Numeric ops23 x pts_500 Numeric ops x pts_500 orderadj Numeric whip_p Numeric whip_SO_p Numeric whip_SO_parkadj_p Numeric whip_parkadj_p Numeric pts Label My traceback is the following: | The problem here is with the imputation. The default per pycaret documentation is 'simple' but in this case, you need to make that imputation_type='iterative' for it to work. | 8 | 13 |
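Applied to the setup() call from the question, the fix from the answer would look roughly like this (pycaret_df and the column names are taken from the question and assumed to exist in your session):

    from pycaret.regression import setup

    mlb_pycaret = setup(
        data=pycaret_df,                 # the dataframe from the question
        target='pts',
        train_size=0.8,
        numeric_features=['home', 'first_time_pitcher'],
        imputation_type='iterative',     # the change suggested in the answer
        session_id=123,
    )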
65,011,660 | 2020-11-25 | https://stackoverflow.com/questions/65011660/how-can-i-get-the-title-of-the-currently-playing-media-in-windows-10-with-python | Whenever audio is playing in windows 10, whether it is from Spotify, Firefox, or a game. When you turn the volume, windows has a thing in the corner that says the song artist, title, and what app is playing like the photo below (sometimes it only says what app is playing sound if a game is playing the sound) I want to somehow get that data with python. My end goal, is to mute an application if it is playing something I don't like, such as an advertisement. | I am getting the titles of the windows to get the song information. Usually, the application name is displayed in the title, but when it is playing a song, the song name is shown. Here is a function that returns a list of all the window titles. from __future__ import print_function import ctypes def get_titles(): EnumWindows = ctypes.windll.user32.EnumWindows EnumWindowsProc = ctypes.WINFUNCTYPE(ctypes.c_bool, ctypes.POINTER(ctypes.c_int), ctypes.POINTER(ctypes.c_int)) GetWindowText = ctypes.windll.user32.GetWindowTextW GetWindowTextLength = ctypes.windll.user32.GetWindowTextLengthW IsWindowVisible = ctypes.windll.user32.IsWindowVisible titles = [] def foreach_window(hwnd, lParam): if IsWindowVisible(hwnd): length = GetWindowTextLength(hwnd) buff = ctypes.create_unicode_buffer(length + 1) GetWindowText(hwnd, buff, length + 1) titles.append(buff.value) return True EnumWindows(EnumWindowsProc(foreach_window), 0) return titles | 15 | 2 |
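A short usage sketch for that helper (Windows only; it assumes the get_titles() function from the answer is already defined, and the substring checks are illustrative guesses rather than a reliable media detector):

    # Dump the non-empty window titles and pick out likely now-playing windows.
    titles = [t for t in get_titles() if t.strip()]
    for title in titles:
        print(title)

    # Illustrative filter: Spotify, for example, switches its window title
    # to "Artist - Song" while a track is playing.
    candidates = [t for t in titles if " - " in t]
    print("possible now-playing titles:", candidates)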
65,064,137 | 2020-11-29 | https://stackoverflow.com/questions/65064137/geopandas-how-to-plot-countries-cities | I would need to plot some data on a geographic plot. Specifically, I would like to highlight countries and states where data comes from. My dataset is Year Country State/City 0 2009 BGR Sofia 1 2018 BHS New Providence 2 2002 BLZ NaN 3 2000 CAN California 4 2002 CAN Ontario ... ... ... ... 250 2001 USA Ohio 251 1998 USA New York 252 1995 USA Virginia 253 2011 USA NaN 254 2019 USA New York To create the geographic plot, I have been using geopandas as follows: import geopandas as gpd shapefile = 'path/ne_110m_admin_0_countries/ne_110m_admin_0_countries.shp' gdf = gpd.read_file(shapefile)[['ADMIN', 'ADM0_A3', 'geometry']] gdf.columns = ['country', 'country_code', 'geometry'] Then I have merged the two datasets: merged = gdf.merge(df, left_on = 'country_code', right_on = 'Country') and converted data to json: import json merged_json = json.loads(merged.to_json()) #Convert to String like object. json_data = json.dumps(merged_json) Finally, I have tried to create the chart as follows: from bokeh.io import output_notebook, show, output_file from bokeh.plotting import figure from bokeh.models import GeoJSONDataSource, LinearColorMapper, ColorBar from bokeh.palettes import brewer geosource = GeoJSONDataSource(geojson = json_data) #Define a sequential multi-hue color palette. palette = brewer['YlGnBu'][8] palette = palette[::-1] color_mapper = LinearColorMapper(palette = palette, low = 0, high = 40) tick_labels = {'0': '0%', '5': '5%', '10':'10%', '15':'15%', '20':'20%', '25':'25%', '30':'30%','35':'35%', '40': '>40%'} color_bar = ColorBar(color_mapper=color_mapper, label_standoff=8,width = 500, height = 20, border_line_color=None,location = (0,0), orientation = 'horizontal', major_label_overrides = tick_labels) p = figure(title = 'Creation year across countries', plot_height = 600 , plot_width = 950, toolbar_location = None) p.xgrid.grid_line_color = None p.ygrid.grid_line_color = None p.patches('xs','ys', source = geosource,fill_color = {'field' :'per_cent_year', 'transform' : color_mapper}, line_color = 'black', line_width = 0.25, fill_alpha = 1) p.add_layout(color_bar, 'below') output_notebook() #Display figure. show(p) When I run it, it says BokehJS 1.0.2 successfully loaded. but it does not display anything. My expected output would be one map where the colour is based on the number of appearance of a country (e.g. USA=5 would be the darker) and another one based on State/City (New York would be the darker). Is there anything wrong in the code above? (happy to share more data/info, if required) | From the code you've posted I can't see anything wrong with the plotting, so I assume that the issue might be somewhere in your data aggregation or merging. Here is a solution that starts by generating data which should be similar to yours, then counts the number of times a country appears in the data as a proportion of the size of the dataset, as this is the required metric. 
We'll focus on just using a few countries as an example: from random import choices import pandas as pd import numpy as np def generate_data(): k = 100 countries_of_interest = ['USA','ARG','BRA','GBR','ESP','RUS'] countries = choices(countries_of_interest, k=k) start_yr = 2010 end_yr = 2021 return pd.DataFrame({'Country':countries, 'Year':np.random.randint(start_yr, end_yr, k)}, index=range(len(countries))) def aggregate_data(df): data = df.groupby('Country').agg('count')*100.0/len(df) data = data.reset_index().rename(columns={'Year':'proportion_of_dataset'}) return data df = generate_data() # Country Year # 0 USA 2017 # 1 GBR 2014 # 2 USA 2013 # 3 BRA 2016 # 4 BRA 2018 # .. ... ... # 95 ESP 2014 # 96 USA 2015 # 97 RUS 2019 # 98 RUS 2012 # 99 RUS 2011 # # [100 rows x 2 columns] data = aggregate_data(df) # Country proportion_of_dataset # 0 ARG 20.0 # 1 BRA 17.0 # 2 ESP 14.0 # 3 GBR 14.0 # 4 RUS 19.0 # 5 USA 16.0 Now load the country border shapefile using geopandas, and rename columns: import geopandas as gpd shapefile = 'path_to_shapfile_folder/ne_110m_admin_0_countries/ne_110m_admin_0_countries.shp' gdf = gpd.read_file(shapefile)[['ADMIN', 'ADM0_A3', 'geometry']] gdf.columns = ['country', 'country_code', 'geometry'] gdf.head() # country country_code \ # 0 Fiji FJI # 1 United Republic of Tanzania TZA # 2 Western Sahara SAH # 3 Canada CAN # 4 United States of America USA # # geometry # 0 MULTIPOLYGON (((180.00000 -16.06713, 180.00000... # 1 POLYGON ((33.90371 -0.95000, 34.07262 -1.05982... # 2 POLYGON ((-8.66559 27.65643, -8.66512 27.58948... # 3 MULTIPOLYGON (((-122.84000 49.00000, -122.9742... # 4 MULTIPOLYGON (((-122.84000 49.00000, -120.0000... Now we want to merge the country polygon dataframe with our aggregated data. Note: we want to do a left join (on the full country polygon dataframe) so that we include all countries, even ones we don't have data for. Also note that we are adding missing values for these countries by filling NaNs with zeros: merged = gdf.merge(data, left_on = 'country_code', right_on = 'Country', how='left') merged['proportion_of_dataset'] = merged['proportion_of_dataset'].fillna(0) Using your code to create the geojson: import json merged_json = json.loads(merged.to_json()) json_data = json.dumps(merged_json) Finally, we'll put your plotting code in a function, and pass in as arguments the geojson, column to plot, and the plot title: from bokeh.io import output_notebook, show, output_file from bokeh.plotting import figure from bokeh.models import GeoJSONDataSource, LinearColorMapper, ColorBar from bokeh.palettes import brewer def plot_map(json_data,plot_col,title): geosource = GeoJSONDataSource(geojson = json_data) #Define a sequential multi-hue color palette. 
palette = brewer['YlGnBu'][8] palette = palette[::-1] color_mapper = LinearColorMapper(palette = palette, low = 0, high = 40) tick_labels = {'0': '0%', '5': '5%', '10':'10%', '15':'15%', '20':'20%', '25':'25%', '30':'30%','35':'35%', '40': '>40%'} color_bar = ColorBar(color_mapper=color_mapper, label_standoff=8,width = 500, height = 20, border_line_color=None,location = (0,0), orientation = 'horizontal', major_label_overrides = tick_labels) p = figure(title = title, plot_height = 600 , plot_width = 950, toolbar_location = None) p.xgrid.grid_line_color = None p.ygrid.grid_line_color = None p.patches('xs','ys', source = geosource,fill_color = {'field' :plot_col, 'transform' : color_mapper}, line_color = 'black', line_width = 0.25, fill_alpha = 1) p.add_layout(color_bar, 'below') output_notebook() #Display figure. show(p) Now all we have to do is call the plotting function, passing in the required parameters: plot_map(json_data,'proportion_of_dataset','Dataset countries of origin') | 11 | 6 |
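One small, optional addition to that answer: if the map still renders blank inside a notebook, writing the figure to a standalone HTML file is an easy way to confirm whether the figure itself is fine. This sketch assumes plot_map() has been adjusted to return the figure p instead of calling show(p):

    from bokeh.io import output_file, save

    # Assumes plot_map() returns the figure rather than showing it.
    p = plot_map(json_data, 'proportion_of_dataset', 'Dataset countries of origin')
    output_file("choropleth.html")
    save(p)   # writes a standalone HTML file you can open in any browser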
65,080,685 | 2020-11-30 | https://stackoverflow.com/questions/65080685/usb-usb-device-handle-win-cc1020-failed-to-read-descriptor-from-node-connectio | We recently upgraded our Windows 10 test environment with ChromeDriver v87.0.4280.20 and Chrome v87.0.4280.66 (Official Build) (64-bit) and after the up-gradation even the minimal program is producing this ERROR log: [9848:10684:1201/013233.169:ERROR:device_event_log_impl.cc(211)] [01:32:33.170] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) Minimum Code Block: from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument("start-maximized") driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe') driver.get('https://www.google.com/') Console Output: DevTools listening on ws://127.0.0.1:64170/devtools/browser/2fb4bb93-79ab-4131-9e4a-3b65c08dbffb [9848:10684:1201/013233.169:ERROR:device_event_log_impl.cc(211)] [01:32:33.170] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9848:10684:1201/013233.172:ERROR:device_event_log_impl.cc(211)] [01:32:33.173] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) Anyone facing the same? Was there any change in ChromeDriver/Chrome v87 with respect to ChromeDriver/Chrome v86? Any clues will be helpful. | After going through quite a few discussions, documentations and Chromium issues here are the details related to the surfacing of the log message: [9848:10684:1201/013233.169:ERROR:device_event_log_impl.cc(211)] [01:32:33.170] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) Details It all started with the reporting of chromium issue Remove WebUSB's dependency on libusb on Windows as: For Linux (probably Mac as well), both WebUSB notification and communication works correctly (after allowing user access to the device in udev rules). For Windows, it seems that libusb only works with a non-standard WinUsb driver (https://github.com/libusb/libusb/issues/255). When the hardware is inserted and the VID/PID is unknown to the system, windows 10 correctly loads it's CDC driver for the CDC part and the WinUSB driver (version 10) for the WebUSB part (no red flags). However, it seems that chrome never finds the device until I manually force an older WinUSB driver (version 6 - probably modified also) on the interface. 
The solution was implemented in a step-wise manner as follows: Start supporting some transfers in the new Windows USB backend Fix bulk/interrupt transfers in the new Windows USB backend [usb] Read BOS descriptors from the hub driver on Windows [usb] Collect all composite devices paths during enumeration on Windows [usb] Remove out parameters in UsbServiceWin helper functions [usb] Support composite devices in the new Windows backend [usb] Detect USB functions as Windows enumerates them [usb] Support composite devices with multiple functions [usb] Hold interface requests until Windows enumerates functions [usb] Add direction parameter to ClearHalt [usb] Count references to a WINUSB_INTERFACE_HANDLE [usb] Implement blocking operations in the Windows backend These changes ensured that the new backend was ready to be tested and was available through Chrome Canary and chrome-dev-channel which you can access manually through: chrome://flags#enable-new-usb-backend More change requests were submitted as follows: [usb] Mark calls to SetupDiGetDeviceProperty as potentially blocking: According to hang reports this function performs an RPC call which may take some time to complete. Mark calls with a base::ScopedBlockingCall so that the thread pool knows this task may be busy for a while. variations: Enable NewUsbBackend in field trial testing config: This flag was experimental, as the beta channel used this change configuration as the default for testing. As the experimental launch of the new backend appeared to be stable, this configuration was finally enabled by default so that the change rolls out to all users of Chrome 87 through usb: Enable new Windows USB backend by default. Revision / Commit The idea was that once this configuration becomes the default for a few milestones, the Chromium team will start removing the Windows-specific code from the old back-end and remove the flag. Road Ahead The Chromium team has already merged the revision/commit to Extend new-usb-backend flag expiration within Chrome v90 which will be available soon. Update As per @ReillyGrant's [Committer, WebDriver for Google Chrome] comment: ..." it would be good to reduce the log level for these messages so they don't appear on the console by default but we haven't landed code to do that yet"... References You can find a couple of relevant detailed discussions in: Failed to read descriptor from node connection: A device attached to the system is not functioning error using ChromeDriver Selenium on Windows OS Failed to read descriptor from node connection: A device attached to the system is not functioning error using ChromeDriver Chrome through Selenium | 16 | 20 |
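While the root cause is tracked in Chromium as described above, a commonly suggested way to keep the message out of the test console in the meantime is to exclude ChromeDriver's enable-logging switch. This only hides the log line; it does not change the USB enumeration behaviour:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("start-maximized")
    # Hides chromedriver's device/USB log lines; the underlying behaviour is unchanged.
    options.add_experimental_option("excludeSwitches", ["enable-logging"])

    driver = webdriver.Chrome(options=options,
                              executable_path=r'C:\WebDrivers\chromedriver.exe')
    driver.get('https://www.google.com/')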
65,083,808 | 2020-12-1 | https://stackoverflow.com/questions/65083808/bump2version-to-increment-pre-release-while-removing-post-release-segment | How would I use bump2version (with regards to its invocation and/or its configuration) to increment: 1.0.0.a2.post0 # post-release of a pre-release a2 to 1.0.0.a3 # pre-release a3 Reproducible example: $ python3 -m pip install 'bump2version==1.0.*' __init__.py: __version__ = "1.0.0.a2.post0" setup.cfg: [bumpversion] current_version = 1.0.0.a2.post0 parse = ^ (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+) # minimum major.minor.patch, 1.2.3 (?: \. (?P<prerel>a|alpha|b|beta|d|dev|rc) # pre-release segment (?P<prerelversion>\d+) # pre-release version num )? (?:\.post(?P<post>\d+))? # post-release serialize = {major}.{minor}.{patch}.{prerel}{prerelversion}.post{post} {major}.{minor}.{patch}.{prerel}{prerelversion} {major}.{minor}.{patch}.post{post} {major}.{minor}.{patch} [bumpversion:file:__init__.py] [bumpversion:part:prerel] optional_value = dev values = dev d alpha a beta b rc Examples of valid versions from this scheme, which takes some but not all rules from PEP 440: 1.2.3 # (1) final 1.2.3.dev0 # (2) prerelease 1.2.3.a0 1.2.3.alpha0 1.2.3.b0 1.2.3.beta0 1.2.3.rc0 1.2.3.rc3.post0 # (3) postrelease (of a pre-release version) 1.2.3.post0 # (4) postrelease (of a final version) I've tried, for example, bump2version --verbose prerelversion or alternatively with --new-version=1.0.0.a3 explicitly. Both of those attempts retain the .post0 rather than dropping it. Note: I asked this as a usage question issue in the bump2version repo a few weeks back with no luck. | We had to wrestle with that (and, back in the days, with the original bumpversion, not the nicer bump2version). With the config below, you can use bumpversion pre to go from 1.0.0.a2.post0 to 1.0.0.a3. Explanation: Since pre and post each have both a string prefix and a number, I believe it is necessary to split them accordingly. For example, the pre part could be split into a prekind (the string) and a pre (the number). Then, the nice thing is that you can increment prekind ('dev' to 'alpha' to 'beta' etc.) independently of the issue number (a sequence, as usual). Below, I've put both a complete configuration and an example with a number of invocations in sequence to show the various mutations that are possible. I'm sure the setup below can be further customized, but hopefully it will put you and others landing here on the right track. cat > .bumpversion.cfg << "EOF" [bumpversion] current_version = 1.0.0.a2.post0 files = __init__.py commit = False parse = ^ (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+) (\.(?P<prekind>a|alpha|b|beta|d|dev|rc) (?P<pre>\d+) # pre-release version num )? (\.(?P<postkind>post)(?P<post>\d+))? # post-release serialize = {major}.{minor}.{patch}.{prekind}{pre}.{postkind}{post} {major}.{minor}.{patch}.{prekind}{pre} {major}.{minor}.{patch}.{postkind}{post} {major}.{minor}.{patch} [bumpversion:part:prekind] optional_value = _ values = _ dev d alpha a beta b rc [bumpversion:part:postkind] optional_value = _ values = _ post EOF echo '__version__ = "1.0.0.a2.post0"' > __init__.py Tests: These perform a sequence of bumpversion operations to demonstrate some of the mutations that are possible. And of course, you can use --new-version=... to forcefully set a new version. 
for op in \ start post post pre pre prekind prekind pre postkind post prekind minor \ postkind post pre postkind prekind postkind post major prekind postkind post; do if [[ $op == 'start' ]]; then printf "starting from: %s\n" $(perl -ne 'print "$1\n" if /"(.*)"/' __init__.py) else bumpversion $op printf "%10s --> %s\n" $op $(perl -ne 'print "$1\n" if /"(.*)"/' __init__.py) fi done Output (commented): starting from: 1.0.0.a2.post0 post --> 1.0.0.a2.post1 # no issue incrementing post post --> 1.0.0.a2.post2 pre --> 1.0.0.a3 # can move to the next 'pre'release pre --> 1.0.0.a4 prekind --> 1.0.0.beta0 # can upgrade the kind of prerelease prekind --> 1.0.0.b0 pre --> 1.0.0.b1 # and keep incrementing postkind --> 1.0.0.b1.post0 # bring a post component again post --> 1.0.0.b1.post1 # and incrementing prekind --> 1.0.0.rc0 # upgrade pre kind directly minor --> 1.1.0 # patch/minor/major cut the optional parts postkind --> 1.1.0.post0 # but we can bring a post component (without pre) post --> 1.1.0.post1 pre --> 1.1.0 # BAD & silent: cannot increment a missing part postkind --> 1.1.0.post0 prekind --> 1.1.0.dev0 # default: pre part starts at 'dev' postkind --> 1.1.0.dev0.post0 # can add post part to a pre part post --> 1.1.0.dev0.post1 # etc... major --> 2.0.0 prekind --> 2.0.0.dev0 postkind --> 2.0.0.dev0.post0 post --> 2.0.0.dev0.post1 | 7 | 5 |
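As a sanity check on the parse pattern used in the configuration above, the same regular expression can be exercised on its own in Python; the version strings below are taken from the examples in the question and the test output:

    import re

    PARSE = re.compile(r"""
        ^
        (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)
        (\.(?P<prekind>a|alpha|b|beta|d|dev|rc)
           (?P<pre>\d+)
        )?
        (\.(?P<postkind>post)(?P<post>\d+))?
    """, re.VERBOSE)

    for version in ["1.0.0.a2.post0", "1.0.0.a3", "1.1.0", "2.0.0.dev0.post1"]:
        match = PARSE.match(version)
        print(version, "->", match.groupdict() if match else "no match")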
65,062,014 | 2020-11-29 | https://stackoverflow.com/questions/65062014/how-to-make-a-module-reload-in-python-after-the-script-is-compiled | The basic idea involved: I am trying to make an application where students can write code related to a specific problem(say to check if the number is even) The code given by the student is then checked by the application by comparing the output given by the user's code with the correct output given by the correct code which is already present in the application. The basic version of the project I am working on: An application in which you can write a python script (in tkinter text box). The contents of the text box are first stored in a test_it.py file. This file is then imported (on the click of a button) by the application. The function present in test_it.py is then called to get the output of the code(by the user). The problem: Since I am "importing" the contents of test_it.py , therefore, during the runtime of the application the user can test his script only once. The reason is that python will import the test_it.py file only once. So even after saving the new script of the user in test_it.py , it wont be available to the application. The solution: Reload test_it.py every time when the button to test the script is clicked. The actual problem: While this works perfectly when I run the application from the script, this method fails to work for the compiled/executable version(.exe) of the file (which is expected since during compilation all the imported modules would be compiled too and so modifying them later will not work) The question: I want my test_it.py file to be reloaded even after compiling the application. If you would like to see the working version of the application to test it yourself. You will find it here. | Even for the bundled application imports work the standard way. That means whenever an import is encountered, the interpreter will try to find the corresponding module. You can make your test_it.py module discoverable by appending the containing directory to sys.path. The import test_it should be dynamic, e.g. inside a function, so that it won't be discovered by PyInstaller (so that PyInstaller won't make an attempt to bundle it with the application). Consider the following example script, where the app data is stored inside a temporary directory which hosts the test_it.py module: import importlib import os import sys import tempfile def main(): with tempfile.TemporaryDirectory() as td: f_name = os.path.join(td, 'test_it.py') with open(f_name, 'w') as fh: # write the code fh.write('foo = 1') sys.path.append(td) # make available for import import test_it print(f'{test_it.foo=}') with open(f_name, 'w') as fh: # update the code fh.write('foo = 2') importlib.reload(test_it) print(f'{test_it.foo=}') main() | 6 | 1 |
65,028,943 | 2020-11-26 | https://stackoverflow.com/questions/65028943/google-or-tools-tsp-spanning-multiple-days-with-start-stop-times | I am using Google OR-Tools to optimize the routing of a single vehicle over the span of a several day. I am trying to: Be able to specify the number of days over which to optimize routing. Be able to specify the start location and end location for each day. Be able to specify the start time and end time for each day. I have a set of 40 locations. For each day I want to include in my range of days for optimization, I prepend the start and end location to the matrix. So if I want to optimize for one day, I would have a total of 42 locations in my matrix. If I want to optimize for two days, I would have a total of 44 locations in my matrix. And so on. The pattern would like like this: 1 Day: Matrix = [[start day 1], [end day 1], [location], [location], ... ] 2 Days: Matrix = [[start day 1], [end day 1], [start day 2], [end day 2], [location], [location], ... ] 3 Days: Matrix = [[start day 1], [end day 1], [start day 2], [end day 2], [start day 3], [end day 3], [location], [location], ... ] I want to allow locations to be dropped in order to achieve a feasible solution, as well as only allow locations to be visited during their specified time windows, both of which I believe I have successfully implemented. My current implementation is available here, as well as on GitHub. I warmly welcome any suggestions, guidance, or support. Thank you! Source: (Seeded with data for a two-day solution) from ortools.constraint_solver import pywrapcp from ortools.constraint_solver import routing_enums_pb2 Matrix = [[0,1706.7,526.4,0,1497,1136.3,1445.8,1728.3,864.4,1362.3,1443.2,1410,805.1,1031.5,781.1,1003.1,364.6,482.6,279.8,768.6,461.4,752.5,972.6,771.7,698.3,901.6,1086.9,994.2,416.7,737.5,1171.7,881.6,1052.1,1164.3,868.7,409.7,498.6,685.7,1693.3,1875.3,302.7,1297.1,1663.4,1427.8],[1614.3,0,1382.2,1614.3,2192.7,1851.9,2686.5,2995.8,2216.9,1545.3,1599.2,1550.6,1796.5,1685.3,1597.7,1518.5,1779.9,2033.2,1891.8,2295.5,2009.6,2279.4,2054.5,2040.4,1235.2,1249.6,1290.2,2433.5,1943.6,1074.2,1128.2,1229.6,1192.7,823.3,1106.1,1440.2,1528.1,2100.8,2357.4,2757.6,1586.5,1676.3,1929.1,1681.3],[534.8,1446.5,0,534.8,1885.4,1473.2,1834.2,2011.5,1252.8,1549,1561.5,1622,1094.3,1131.5,895.5,1103.1,656.3,871,738.7,1051.8,765.9,1035.7,1160.9,960,798.3,1047.8,1205.2,1277.4,699.9,476.7,877.2,999.9,1264.1,891.6,534.2,196.5,356.2,874,1881.6,2063.6,342.8,1448.6,1851.7,1566.8],[0,1706.7,526.4,0,1497,1136.3,1445.8,1728.3,864.4,1362.3,1443.2,1410,805.1,1031.5,781.1,1003.1,364.6,482.6,279.8,768.6,461.4,752.5,972.6,771.7,698.3,901.6,1086.9,994.2,416.7,737.5,1171.7,881.6,1052.1,1164.3,868.7,409.7,498.6,685.7,1693.3,1875.3,302.7,1297.1,1663.4,1427.8],[1418.2,2192.4,1835.2,1418.2,0,557.7,608.1,2149,1325.3,1031.6,1615.8,1986.8,1123.8,961,1040,1085.5,1439.8,1204.8,1545.6,1530.2,1879.6,1769.1,2234.4,2150.1,1315.1,1094.9,1339.3,1874.4,1834.9,1623.6,2265.6,1275.3,1628.9,1677.3,1770.8,1718.5,1891.8,2052.1,2750.3,2966.2,1662.4,2682,2788.7,2832.4],[1096.3,1832,1437.4,1096.3,568.8,0,1062.6,1896.2,966.1,651.1,1255.4,1626.4,687,521.4,600.4,725.1,1052.8,882.9,1223.7,1209.4,1557.7,1448.3,1918,1829.3,954.7,734.5,978.9,1558,1513,1200,1863.6,914.9,1268.5,1316.9,1331.2,1396.6,1569.9,1731.3,2433.9,2649.8,1340.5,2360.1,2472.3,2510.5],[1379.1,2741.6,1796.1,1379.1,651.6,1106.9,0,1834.1,1198.5,1580.8,2140,2536,1144.1,1439.4,1262.4,1496.3,1400.7,1165.7,1495.8,1231.5,1599.7,1470.2,1919.5,1851.2,1682.6,1644.1,1888.5,1559.5,1729,1826
.7,2442.7,1824.5,2178.1,2226.5,1957.9,1679.4,1852.7,1753.4,2435.4,2651.3,1623.3,2433,2473.8,2693.1],[1799.2,3111,2046.1,1799.2,2171.9,2026.5,1868,0,1242.3,2252.5,2836.7,3064,1667.2,1962.5,1785.5,2019.4,1922.9,1692.7,1687.3,1240.7,1524.3,1185.1,1224.4,1507.6,2205.7,2328.9,2545.2,895.3,1653.6,2349.8,2362.4,2407.3,2706.1,2776.6,2431.9,1925.8,1948.6,1639.8,1415.3,1631.2,1854.1,1757.2,1778.7,2017.3],[805.5,2218.4,1222.5,805.5,1296.7,925.9,1181.5,1136.6,0,1151.9,1736.1,2046.8,566.6,861.9,684.9,918.8,827.1,592.1,781,449.8,818,688.7,1158.4,1069.7,1105.1,1228.3,1444.6,798.4,947.3,1249.2,1869.1,1306.7,1688.9,1676,1380.4,1105.8,1242.3,971.7,1674.3,1890.2,1049.7,1651.3,1712.7,1911.4],[1265.8,1550.9,1515,1265.8,1025.9,685.1,1519.7,2065.7,1135.6,0,811.6,1195.2,809.5,522.1,695.6,621.5,1148,1052.4,1380.7,1378.9,1727.2,1617.8,2087.5,1998.8,878.6,658.4,559.3,1727.5,1682.5,1187.1,1829.1,639.4,837.3,1117.1,1384.6,1518.6,1691.9,1900.8,2603.4,2819.3,1462.5,2394.8,2641.8,2513],[1419.8,1573.3,1501.6,1419.8,1678.7,1342.9,2162.9,2723.5,1793.4,840.4,0,972.9,1426.5,1232.7,1183.3,1027.1,1409.9,1631.2,1556.1,2036.7,1839.6,2109.4,2234.6,2033.7,865.2,922.3,542.6,2351.1,1773.6,1213.4,1855.4,622.7,746.7,1139.5,1395.4,1540.2,1713.5,1947.7,2955.3,3137.3,1484.1,2421.1,2803.8,2539.3],[1434.8,1610.4,1634.2,1434.8,2103.8,1763,2597.6,3058.1,2128,1292.3,1016.9,0,1707.6,1596.4,1508.8,1429.6,1670.5,1853.7,1712.3,2140.5,1854.6,2124.4,2249.6,2048.7,1146.3,1160.7,1008,2366.1,1788.6,1323.9,1902.6,982.4,579.5,1176.6,1521.4,1555.2,1728.5,1962.7,2970.3,3152.3,1499.1,2468.3,2840.9,2586.5],[745.9,1760.2,1028.4,745.9,1104,685.5,1195.2,1544.9,614.8,819.7,1383.1,1588.6,0,378.9,226.7,460.6,679.1,427.3,873.3,858.1,1207.3,1097,1566.7,1478,646.9,770.1,986.4,1206.7,1162.6,791,1454.6,848.5,1230.7,1217.8,922.2,1046.2,1219.5,1380,2082.6,2298.5,990.1,2009.7,2121,2138.5],[933.5,1687.2,1052,933.5,912.5,494,1406.3,1784.3,854.2,495.6,1198.2,1481.6,328.9,0,215,309.1,667.4,659.1,900.1,1097.5,1289.3,1336.4,1777.9,1577,670.5,589.7,834.1,1446.1,1267.5,814.6,1478.2,770.1,1123.7,1172.1,945.8,1090.6,1263.9,1491,2322,2537.9,1034.5,2043.9,2360.4,2162.1],[729,1645.1,913.3,729,1034.6,616.1,1294.4,1644.1,714,715.2,1209.3,1473.5,293.6,274.4,0,286.8,462.9,447.9,695.6,957.3,1084.8,1196.2,1573.4,1372.5,531.8,596.3,812.6,1305.9,1063,675.9,1339.5,733.4,1115.6,1102.7,807.1,934.4,1107.7,1286.5,2181.8,2397.7,878.3,1897.9,2220.2,2023.4],[995.2,1489.7,1077,995.2,1008,667.2,1501.8,1901.4,971.3,591.3,994.8,1284.1,550.9,369.2,287.1,0,750,735,982.7,1214.6,1371.9,1453.5,1810,1609.1,479.2,392.2,598.1,1563.2,1349,839.6,1503.2,572.6,926.2,974.6,970.8,1115.6,1288.9,1523.1,2439.1,2655,1059.5,2068.9,2477.5,2187.1],[350.3,1818.6,639.3,350.3,1473.6,1055.1,1475.4,1824.1,894,1154.2,1439.7,1530.2,732.6,713.4,463,749.8,0,471.9,283.8,863.5,673,885.7,1161.6,960.7,676.5,939.6,1083.4,1127.4,651.2,849.4,1285.9,878.1,1172.3,1276.2,980.6,522.6,695.9,874.7,1882.3,2064.3,466.5,1486.1,1852.4,1636.5],[437.4,2042,854.4,437.4,1269.3,908.6,1218.1,1566.8,636.7,1134.6,1649.8,1745.3,433.7,696.7,440.5,727.3,459,0,564.8,880,898.8,1118.9,1376.7,1175.8,971.8,1036.8,1253.1,1228.6,854.1,1072.8,1501,1173.4,1387.4,1499.6,1204,737.7,911,1089.8,2097.4,2279.4,681.6,1701.2,2067.5,1851.6],[279.5,1914.6,746.3,279.5,1579.2,1218.5,1506.8,1619,804.6,1401,1593.4,1617.9,887.3,960.2,709.8,996.6,291.3,564.8,0,621,410.3,643.2,1112.5,911.6,830.2,1093.3,1237.1,884.9,367.4,945.4,1392.9,1031.8,1260,1372.2,1076.6,629.6,693.7,765.7,1833.2,2015.2,536.1,1410.3,1803.3,1528.5],[800.2,2348.7,1050.6,800.2,1567.8,1294.7,1285.3,1226.6,510.5,1520.7,21
01.7,2068.5,935.4,1230.7,1053.7,1287.6,874.2,932.8,638.6,0,512.4,442.6,1064.1,823.6,1356.8,1560.1,1745.4,684.3,658.1,1378.9,1670,1540.1,1710.6,1793.8,1436.4,930.3,953.1,725.6,1719.6,1901.6,858.6,1405.2,1689.7,1665.3],[407.8,2014.3,716.2,407.8,1813.5,1472.5,1531,1409.4,756.2,1698.5,1767.3,1734.1,1141.3,1274,1023.6,1310.4,605.1,818.8,323.7,450,0,433.6,881.3,680.4,1022.4,1225.7,1411,675.3,210.4,1044.5,1335.6,1205.7,1376.2,1459.4,1102,595.9,618.7,526.2,1602,1784,524.2,1205.8,1572.1,1453.5],[855.1,2395.3,1102,855.1,1798.4,1525.3,1515.9,1173.4,741.1,1751.3,2153.1,2119.9,1166,1461.3,1284.3,1518.2,978.8,1191.5,743.2,477.1,580.2,0,701.1,460.6,1408.2,1611.5,1796.8,439.3,709.5,1430.3,1646.7,1591.5,1762,1845.2,1487.8,981.7,1004.5,548.2,1474.6,1656.6,910,1057.4,1444.7,1317.5],[902.1,2028.4,1133.3,902.1,2345.6,1984.9,2055.1,1189.6,1266.5,2169.2,2184.4,2151.2,1653.7,1772.7,1536.7,1744.3,1151.1,1331.2,1061.6,1047,850.5,702.5,0,407.3,1439.5,1642.8,1828.1,510.1,784.2,1461.6,1279.8,1622.8,1793.3,1876.5,1519.1,1013,1115.7,539.5,904,1086,956.9,636.9,874.1,897],[746.1,2102,977.3,746.1,2147.9,1828.9,1865.4,1456.3,1090.6,2013.2,2028.4,1995.2,1497.7,1616.7,1380.7,1588.3,995.1,1175.2,905.6,826.6,694.5,448.6,347,0,1283.5,1486.8,1672.1,792.2,628.2,1305.6,1353.4,1466.8,1637.3,1720.5,1363.1,857,934.6,301.2,1170.7,1352.7,800.9,764.1,1140.8,1024.2],[714,1295.8,795.8,714,1312.1,971.3,1721.5,2071.2,1141.1,885.7,898.2,1089.2,720.7,733.5,521.9,485.6,704.1,969.8,850.3,1384.4,1133.8,1403.6,1528.8,1327.9,0,369,541.9,1645.3,1067.8,558.4,1222,336.6,731.3,780.7,689.6,834.4,1007.7,1241.9,2249.5,2431.5,778.3,1787.7,2198.3,1905.9],[927.4,1285.1,1044.2,927.4,1067,726.2,1560.8,2152.6,1222.5,650.3,929.6,1079.5,812.8,559.6,549,392.8,952.5,996.9,1098.7,1465.8,1347.2,1617,1742.2,1541.3,402.3,0,573.3,1814.4,1281.2,716.3,1358.3,368,721.6,770,913.8,1047.8,1221.1,1455.3,2462.9,2644.9,991.7,1924,2334.6,2042.2],[1066.9,1288.4,1148.7,1066.9,1353.6,1012.8,1847.4,2393.4,1463.3,510.3,522.8,932.7,1073.6,866.8,816.7,660.5,1057,1264.6,1203.2,1706.6,1486.7,1756.5,1881.7,1680.8,512.3,569.4,0,1998.2,1420.7,860.5,1502.5,269.8,571.2,854.6,1042.5,1187.3,1360.6,1594.8,2602.4,2784.4,1131.2,2068.2,2478.8,2186.4],[1033.9,2497.8,1280.8,1033.9,1937.1,1628.8,1633.2,906.4,844.6,1854.8,2331.9,2298.7,1269.5,1564.8,1387.8,1621.7,1157.6,1295,922,655.9,759,419.8,553.1,800.8,1587,1790.3,1975.6,0,888.3,1609.1,1749.2,1770.3,1940.8,2024,1666.6,1160.5,1183.3,888.4,1207.6,1389.6,1088.8,1106.3,1177.7,1366.4],[434.2,1998.9,700.8,434.2,1859.6,1498.9,1676.6,1555,901.8,1724.9,1751.9,1718.7,1167.7,1300.4,1050,1311.8,631.5,845.2,350.1,595.3,261.3,579.2,855.8,654.9,1007,1210.3,1395.6,820.9,0,1029.1,1320.2,1190.3,1360.8,1444,1086.6,580.5,603.3,509,1576.5,1758.5,508.8,1180.3,1546.6,1438.1],[660.4,1093.1,449.6,660.4,1614.9,1241.5,1863.4,2213.1,1283,1198.2,1213.4,1273.2,862.6,899.8,663.8,871.4,877.7,1079.3,937.9,1362.9,1077,1346.8,1472,1271.1,566.6,671.8,857.1,1588.5,1011,0,841.6,651.8,915.3,538.2,276.4,507.6,681.4,1185.1,2192.7,2374.7,653.9,1407.3,1817.9,1525.5],[1074.3,1158.2,863.3,1074.3,2242.2,1861.6,2474.8,2235.8,1893.4,1797.9,1840.7,1803.2,1482.7,1519.9,1283.9,1491.5,1296.9,1511.6,1279.7,1623.1,1337.2,1590,1294.5,1280.4,1186.7,1299.1,1484.4,1673.5,1271.2,855.8,0,1279.1,1445.3,943.7,772.3,813.5,690.4,1340.8,1733.2,2031,828.6,894.3,1304.9,1012.5],[846.2,1208.6,928,846.2,1291.8,951,1785.6,2203.4,1273.3,610,622.5,915.8,852.9,784.4,654.1,617.6,836.3,1102,982.5,1516.6,1266,1535.8,1661,1460.1,291.6,348.7,266.2,1777.5,1200,639.8,1281.8,0,557.9,655.1,821.8,966.6,1139.9,1374.1,2381.7,2563.7,91
0.5,1847.5,2258.1,1965.7],[976.5,1152.1,1175.9,976.5,1645.5,1304.7,2139.3,2599.8,1669.7,830.6,750.3,565.6,1249.3,1138.1,1050.5,971.3,1212.2,1395.4,1254,1682.2,1396.3,1666.1,1791.3,1590.4,688,702.4,575.5,1907.8,1330.3,865.6,1444.3,561.4,0,718.3,1063.1,1096.9,1270.2,1504.4,2512,2694,1040.8,2010,2382.6,2128.2],[1080,818.6,846.2,1080,1669.1,1328.3,2162.9,2623.4,1693.3,1087.6,1141.5,1092.9,1272.9,1161.7,1074.1,994.9,1256.3,1498.9,1357.5,1759.5,1473.6,1743.4,1868.6,1667.7,711.6,726,832.5,1985.1,1407.6,538.2,972.4,665.1,732.4,0,585.3,904.2,1078,1581.7,2377,2674.8,1050.5,1538.1,1948.7,1656.3],[703.6,1085.4,451.8,703.6,1703.2,1284.7,1906.6,2256.3,1326.2,1351.3,1366.5,1426.3,905.8,943,707,914.6,920.9,1122.5,981.1,1365.1,1079.2,1349,1474.2,1273.3,609.8,824.9,1010.2,1590.7,1013.2,250.1,779.8,804.9,1068.4,586.4,0,509.8,683.6,1187.3,2184.4,2376.9,656.1,1345.5,1756.1,1463.7],[436.6,1556.1,230.2,436.6,1787.2,1426.5,1736,1903.2,1154.6,1561.8,1577,1543.8,1095.3,1165.3,929.3,1136.9,558.1,772.8,640.5,943.5,657.6,927.4,1052.6,851.7,832.1,1035.4,1220.7,1169.1,591.6,586.3,829,1015.4,1185.9,1001.2,643.8,0,238.5,765.7,1773.3,1955.3,220.9,1377.1,1743.4,1500.4],[492.4,1695,348.7,492.4,1946,1585.3,1894.8,1999.7,1313.4,1720.6,1735.8,1702.6,1254.1,1324.1,1088.1,1295.7,716.9,931.6,697.8,1040,754.1,1023.9,1142,918.1,990.9,1194.2,1379.5,1265.6,688.1,732.7,827.2,1174.2,1344.7,1147.6,790.2,227.3,0,855.1,1862.7,2044.7,241.1,1237.1,1647.7,1355.3],[614.6,2143.9,845.8,614.6,2002.1,1697.4,1719.6,1598,944.8,1881.7,1896.9,1863.7,1366.2,1485.2,1249.2,1456.8,863.6,1043.7,713.4,680.8,484.2,584.9,551.3,310.8,1152,1355.3,1540.6,863.9,436,1174.1,1406.6,1335.3,1505.8,1589,1231.6,725.5,828.2,0,1334.4,1516.4,669.4,900.9,1304.5,1161],[1677.7,2333.7,1908.9,1677.7,2805.5,2497.2,2501.6,1345.9,1713,2723.2,2960,2926.8,2137.9,2433.2,2256.2,2490.1,1926.7,2106.8,1837.2,1717.5,1626.1,1481.4,945.3,1182.9,2215.1,2418.4,2603.7,1191.6,1559.8,2209.4,1673.1,2398.4,2568.9,2297.3,2125.9,1788.6,1891.3,1315.1,0,630,1732.5,874,604.7,1053.3],[1884,2890.3,2115.2,1884,3030.5,2722.2,2726.6,1570.9,1938,2948.2,3166.3,3133.1,2362.9,2658.2,2481.2,2715.1,2133,2313.1,2043.5,1923.8,1832.4,1687.7,1151.6,1389.2,2421.4,2624.7,2810,1397.9,1766.1,2443.5,2156,2604.7,2775.2,2753.2,2501,1994.9,2097.6,1521.4,732.6,0,1938.8,1430.6,1161.3,1609.9],[310.5,1654.5,356.4,310.5,1718.8,1358.1,1667.6,1834.8,1086.2,1493.4,1508.6,1475.4,1026.9,1096.9,860.9,1068.5,489.7,704.4,538.1,875.1,589.2,859,984.2,783.3,763.7,967,1152.3,1100.7,523.2,684.7,901.1,947,1117.5,1099.6,742.2,195,244.2,697.3,1704.9,1886.9,0,1308.7,1675,1452.6],[1233.7,1726.5,1422.2,1233.7,2677.2,2316.5,2414.6,1702.1,1639.8,2402.6,2445.4,2407.9,1985.3,2104.3,1868.3,2075.9,1482.7,1662.8,1393.2,1375.8,1182.1,997.8,690.8,688.2,1771.1,1903.8,2089.1,1095.2,1115.8,1460.5,951.2,1883.8,2050,1548.4,1377,1335.7,1205.8,837.3,948.4,1348.6,1285.8,0,520.1,358.8],[1656.1,2007.2,1844.6,1656.1,2935.8,2627.5,2631.9,1766.4,1843.3,2775.1,2829,2780.4,2268.2,2526.7,2290.7,2498.3,1905.1,2085.2,1815.6,1730,1604.5,1420.2,957.8,1110.6,2193.5,2326.2,2511.5,1204.1,1538.2,1882.9,1346.6,2306.2,2422.5,1970.8,1799.4,1758.1,1628.2,1259.7,675.4,1075.6,1708.2,547.5,0,687.3],[1388.7,1750.6,1537.5,1388.7,2838.1,2477.4,2671.7,1959.2,1896.9,2517.9,2560.7,2523.2,2146.2,2239.9,2003.9,2211.5,1637.7,1823.7,1532.5,1632.9,1439.2,1254.9,947.9,945.3,1906.7,2019.1,2204.4,1352.3,1372.9,1575.8,1066.5,1999.1,2165.3,1663.7,1492.3,1451,1321.1,1094.4,1101.4,1501.6,1401.1,346.4,664.6,0]] # Day 1 - Start at 8:00am at Location 1 (index 0) # Day 1 - End at 4:00pm at Location 2 (index 
1) # Day 2 - Start at 6:00am at Location 3 (index 2) # Day 2 - End at 6:00pm at Location 4 (index 3) Windows = [[28800, 28800], [57600, 57600], [21600, 21600], [64800, 64800], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400], [0, 86400]] Durations = [0, 0, 0, 0, 0, 0, 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, 900, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 3600, 3600, 3600, 3600, 3600] Penalties = [576460752303423487, 576460752303423487, 576460752303423487, 576460752303423487, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000] NUM_DAYS = 2 START_NODES = [] for node in range(0, NUM_DAYS): START_NODES.append(node * 2) END_NODES = [] for node in range(0, NUM_DAYS): END_NODES.append(node * 2 + 1) REGULAR_NODES = [] for node in range(NUM_DAYS * 2, len(Matrix)): REGULAR_NODES.append(node) def transit_callback(from_index, to_index): # Returns the travel time between the two nodes. # Convert from routing variable Index to time matrix NodeIndex. from_node = manager.IndexToNode(from_index) to_node = manager.IndexToNode(to_index) # prevent movement from start nodes to start nodes if from_node in START_NODES: if to_node in START_NODES: return 576460752303423487 # prevent movement from start nodes to end nodes if from_node in START_NODES: if to_node in END_NODES: return 576460752303423487 # prevent movement from end nodes to end nodes if from_node in END_NODES: if to_node in END_NODES: return 576460752303423487 # prevent movement from end nodes to non start nodes if from_node in END_NODES: if to_node in START_NODES: return 0 else: return 576460752303423487 return Matrix[from_node][to_node] def time_callback(from_index, to_index): # Returns the travel time plus service time between the two nodes. # Convert from routing variable Index to time matrix NodeIndex. from_node = manager.IndexToNode(from_index) to_node = manager.IndexToNode(to_index) if from_node in END_NODES: Reset = Windows[from_node][1] else: Reset = 0 return Matrix[from_node][to_node] + Durations[from_node] - Reset # Create the routing index manager. manager = pywrapcp.RoutingIndexManager(len(Matrix), 1, [0], [1]) # Create Routing Model. routing = pywrapcp.RoutingModel(manager) # Register the Transit Callback. transit_callback_index = routing.RegisterTransitCallback(transit_callback) # Set the arc cost evaluator for all vehicles routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index) # Register the Time Callback. time_callback_index = routing.RegisterTransitCallback(time_callback) # Add Time Windows constraint. routing.AddDimension( time_callback_index, 86400, # An upper bound for slack (the wait times at the locations). 86400, # An upper bound for the total time over each vehicle's route. 
False, 'Time') time_dimension = routing.GetDimensionOrDie('Time') # Get rid of slack for all regular nodes # for node in range(len(START_NODES) + len(END_NODES), len(Matrix)): # index = manager.NodeToIndex(node) # time_dimension.SlackVar(index).SetValue(0) # Get rid of slack for all start nodes # for node in START_NODES: # index = manager.NodeToIndex(node) # time_dimension.SlackVar(index).SetValue(0) # Allow all regular nodes to be droppable. for node in range(len(START_NODES) + len(END_NODES), len(Matrix)): routing.AddDisjunction([manager.NodeToIndex(node)], Penalties[node]) # Add time window constraints for all regular nodes. for location_index, time_window in enumerate(Windows): if location_index in REGULAR_NODES: index = manager.NodeToIndex(location_index) time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1]) # TODO! - I think this needs to be handled differently for each day # Add time window constraints for start node. index = routing.Start(0) time_dimension.CumulVar(index).SetRange(Windows[0][0],Windows[0][1]) index = routing.End(0) time_dimension.CumulVar(index).SetRange(Windows[1][0],Windows[1][1]) # Setting first solution heuristic. search_parameters = pywrapcp.DefaultRoutingSearchParameters() search_parameters.first_solution_strategy = (routing_enums_pb2.FirstSolutionStrategy.PARALLEL_CHEAPEST_INSERTION) # Setting local search metaheuristics: search_parameters.local_search_metaheuristic = (routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH) search_parameters.time_limit.seconds = 15 search_parameters.log_search = False # Solve the problem. solution = routing.SolveWithParameters(search_parameters) if not solution: print("no solution found") else: print("solution found. Objective value is ",solution.ObjectiveValue()) # Print the results result = { 'Dropped': [], 'Scheduled': [] } # Return the dropped locations for index in range(routing.Size()): if routing.IsStart(index) or routing.IsEnd(index): continue node = manager.IndexToNode(index) if node in END_NODES or node in START_NODES: continue if solution.Value(routing.NextVar(index)) == index: result['Dropped'].append(node) # Return the scheduled locations time = 0 index = routing.Start(0) while not routing.IsEnd(index): time = time_dimension.CumulVar(index) result['Scheduled'].append([manager.IndexToNode(index), solution.Min(time), solution.Max(time)]) index = solution.Value(routing.NextVar(index)) time = time_dimension.CumulVar(index) result['Scheduled'].append([manager.IndexToNode(index), solution.Min(time), solution.Max(time)]) print('Dropped') print(result['Dropped']) print('Scheduled') for line in result['Scheduled']: print(line) Output: solution found. 
Objective value is 22468 Dropped [] Scheduled [0, 28800, 28800] [28, 29216, 35021] [20, 31277, 37082] [21, 33510, 39315] [19, 35787, 41592] [8, 37197, 43002] [6, 39278, 45083] [4, 40829, 46634] [5, 41386, 47191] [9, 42037, 47842] [26, 43496, 49301] [10, 45818, 51623] [11, 47690, 53495] [32, 49169, 54974] [31, 51530, 57335] [24, 53621, 59426] [25, 55790, 61595] [15, 57982, 63787] [13, 59251, 65056] [14, 60366, 66171] [12, 61559, 67364] [17, 62886, 68691] [16, 64245, 70050] [18, 65428, 71233] [3, 66607, 72412] [2, 2334, 8139] [35, 2530, 8335] [36, 4568, 10373] [40, 6609, 12414] [37, 10906, 16711] [23, 13016, 18821] [22, 15163, 20968] [27, 17473, 23278] [7, 20179, 25984] [39, 22710, 28515] [38, 27042, 32847] [42, 29446, 35251] [41, 33593, 39398] [43, 37551, 43356] [30, 42217, 48022] [34, 44789, 50594] [29, 46839, 52644] [33, 49177, 54982] [1, 57600, 57600] | You may have something like that: python plop.py Objective: 93780 droped: [] Route for vehicle 0: 0 [21600;21600] -> 38 [21902;57722] -> 33 [23897;59717] -> 34 [25935;61755] -> 28 [28562;64382] -> 41 [31374;67194] -> 39 [33520;69340] -> 40 [35840;71660] -> 36 [38315;74135] -> 37 [40745;76565] -> 5 [44115;79935] -> 25 [46810;82630] -> 20 [49163;84983] -> 21 [51370;790+1day] -> 35 [53471;2891+1day] -> 26 [55707;5127+1day] -> 18 [57768;7188+1day] -> 19 [60001;9421+1day] -> 17 [62278;11698+1day] -> 6 [64588;14008+1day] -> 4 [67569;16989+1day] -> 2 [70020;19440+1day] -> 3 [72377;21797+1day] -> 7 [74828;24248+1day] -> 24 [77187;26607+1day] -> 8 [79509;28929+1day] -> 9 [82281;31701+1day] -> 30 [84660;34080+1day] -> 31 [778+1day;36598+1day] -> 32 [3163+1day;38983+1day] -> 27 [5213+1day;41033+1day] -> 22 [7579+1day;43399+1day] -> 29 [9715+1day;45535+1day] -> 23 [11863+1day;47683+1day] -> 13 [14055+1day;49875+1day] -> 11 [16224+1day;52044+1day] -> 12 [18239+1day;54059+1day] -> 10 [20332+1day;56152+1day] -> 15 [22559+1day;58379+1day] -> 14 [24818+1day;60638+1day] -> 16 [26901+1day;62721+1day] -> 1 [64800+1day;64800+1day] %diff -u plop.py plop_final.py --- plop.py 2020-12-01 17:48:15.187255138 +0100 +++ plop_final.py 2020-12-01 17:47:41.033692899 +0100 @@ -7,6 +7,7 @@ Penalties = [576460752303423487, 576460752303423487, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000] Slack_Max = 86400 Capacity = 86400 +OneDay = 86400 # The inputs to RoutingIndexManager are: # 1. The number of rows of the time matrix, which is the number of locations (including the depot). @@ -36,7 +37,7 @@ routing.AddDimension( transit_callback_index, Slack_Max, # An upper bound for slack (the wait times at the locations). - Capacity, # An upper bound for the total time over each vehicle's route. + 2*Capacity, # An upper bound for the total time over each vehicle's route. False, # Determine whether the cumulative variable is set to zero at the start of the vehicle's route. 
'Time') time_dimension = routing.GetDimensionOrDie('Time') @@ -50,13 +51,14 @@ if location_idx == 0 or location_idx == 1: continue index = manager.NodeToIndex(location_idx) - time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1]) + time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1]+OneDay) + time_dimension.CumulVar(index).RemoveInterval(time_window[1], time_window[0]+OneDay) # Add time window constraints for each vehicle start node. index = routing.Start(0) time_dimension.CumulVar(index).SetRange(Windows[0][0],Windows[0][1]) index = routing.End(0) -time_dimension.CumulVar(index).SetRange(Windows[1][0],Windows[1][1]) +time_dimension.CumulVar(index).SetRange(Windows[1][0]+OneDay,Windows[1][1]+OneDay) # Instantiate route start and end times to produce feasible times. routing.AddVariableMinimizedByFinalizer(time_dimension.CumulVar(routing.Start(0))) @@ -73,28 +75,24 @@ # Solve the problem. solution = routing.SolveWithParameters(search_parameters) - -# Print the results -result = { - 'Dropped': [], - 'Scheduled': [] -} +print(f"Objective: {solution.ObjectiveValue()}") # Return the dropped locations +dropped = [] for node in range(routing.Size()): if routing.IsStart(node) or routing.IsEnd(node): continue if solution.Value(routing.NextVar(node)) == node: - result['Dropped'].append(manager.IndexToNode(node)) + dropped.append(manager.IndexToNode(node)) +print(f"droped: {dropped}") # Return the scheduled locations -time = 0 index = routing.Start(0) +plan_output = 'Route for vehicle 0:\n' while not routing.IsEnd(index): time = time_dimension.CumulVar(index) - result['Scheduled'].append([manager.IndexToNode(index), solution.Min(time),solution.Max(time)]) + tw_min = solution.Min(time) + if tw_min > OneDay: + tw_min = f"{tw_min%OneDay}+1day" + tw_max = solution.Max(time) + if tw_max > OneDay: + tw_max = f"{tw_max%OneDay}+1day" + + plan_output += f'{manager.IndexToNode(index)} [{tw_min};{tw_max}] -> ' index = solution.Value(routing.NextVar(index)) time = time_dimension.CumulVar(index) -result['Scheduled'].append([manager.IndexToNode(index), solution.Min(time),solution.Max(time)]) - -print(result) +tw_min = solution.Min(time) +tw_max = solution.Max(time) +if tw_min > OneDay: + tw_min = f"{tw_min%OneDay}+1day" +tw_max = solution.Max(time) +if tw_max > OneDay: + tw_max = f"{tw_max%OneDay}+1day" +plan_output += f'{manager.IndexToNode(index)} [{tw_min};{tw_max}]' +print(plan_output) My changes: Add const OneDay = 86400 Change Vehicle horizon to two day i.e. Capacity+OneDay You can remove a range of value using RemoveInterval() method. ref: https://github.com/google/or-tools/blob/84c35244c9bdd635609af703bbf3053c16c487c0/ortools/constraint_solver/constraint_solver.h#L3980-L3988 So the idea is SetRange to [FirstTW start; LastTW end] then remove [FirstTW end; LastTW start] Then I rewrite the output part (cleaner to me by removing your "unused struct result for this snippet") ps: If you have question, please join our or-tools discord (link in the README on github) ;) Step 2 Currently: your TW for location is [0;86400] while your vehicle is working from 21600 to 64800. Your end TW is [64800, 64800] instead I would use [21600;64800] i.e. finish ASAP instead of dispatching visit until 6pm ? 
So let's hack your TW data as follow: Windows = [[21600, 21600], [21600, 64800], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400], [21600, 86400]] Then you'll get the following result: python plop_final.py Objective: 93780 droped: [] Route for vehicle 0: 0 [21600;21600] -> 38 [21902;23641] -> 33 [23897;25636] -> 34 [25935;27674] -> 28 [28562;30301] -> 41 [31374;33113] -> 39 [33520;35259] -> 40 [35840;37579] -> 36 [38315;40054] -> 37 [40745;42484] -> 5 [44115;45854] -> 25 [46810;48549] -> 20 [49163;50902] -> 21 [51370;53109] -> 35 [53471;55210] -> 26 [55707;57446] -> 18 [57768;59507] -> 19 [60001;61740] -> 17 [62278;64017] -> 6 [64588;66327] -> 4 [67569;69308] -> 2 [70020;71759] -> 3 [72377;74116] -> 7 [74828;76567] -> 24 [77187;78926] -> 8 [79509;81248] -> 9 [82281;84020] -> 30 [84660;86399] -> 31 [21601+1day;21601+1day] -> 32 [23986+1day;23986+1day] -> 27 [26036+1day;26036+1day] -> 22 [28402+1day;28402+1day] -> 29 [30538+1day;30538+1day] -> 23 [32686+1day;32686+1day] -> 13 [34878+1day;34878+1day] -> 11 [37047+1day;37047+1day] -> 12 [39062+1day;39062+1day] -> 10 [41155+1day;41155+1day] -> 15 [43382+1day;43382+1day] -> 14 [45641+1day;45641+1day] -> 16 [47724+1day;47724+1day] -> 1 [49803+1day;49803+1day] note: You can find my fork here: https://github.com/Mizux/tsp_multiple_days | 9 | 2 |
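The heart of the multi-day trick in the diff above, shown in isolation for a single node (manager, time_dimension, Windows and location_idx are the objects from the script above): widen each window across both days, then remove the overnight span so the visit can only fall inside the window on day one or on day two.

    ONE_DAY = 86400

    index = manager.NodeToIndex(location_idx)
    start, end = Windows[location_idx]
    # Allow the visit on either day...
    time_dimension.CumulVar(index).SetRange(start, end + ONE_DAY)
    # ...but carve out the gap between day-1 closing and day-2 opening.
    time_dimension.CumulVar(index).RemoveInterval(end, start + ONE_DAY)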
65,084,044 | 2020-12-1 | https://stackoverflow.com/questions/65084044/pytest-fixture-not-found-pytest-bdd | I have the following step definitions, which result in an error because the @given fixture is not found, even though it is defined in target_fixture: import pytest from pytest_bdd import scenario, given, when, then, parsers from admin import Admin @scenario('../features/Admin.feature', 'register a new user') def test_admin(): pass @given('I\'m logged in as an admin at <host_name> with email <admin_email> and password <admin_password>', target_fixture="admin_login") def admin_login(host_name, admin_email, admin_password): admin = Admin(admin_email, admin_password) admin.login(host_name) # assert admin.status_code == 200 return admin @when('I call the register method for host <host_name> with email <user_email> and password <user_password> and firstName <first_name> and last name <last_name>') def test_register(admin_login, host_name, user_email, first_name, last_name): admin_login.register(host_name, user_email, first_name, last_name) assert admin_login.status_code == 200 @then('the user will be able to log in to <host_name> with email <user_email> and password <user_password>') def test_login(admin_login): print(admin_login) assert 3 == 3 This results in the error: platform darwin -- Python 3.8.5, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 rootdir: /Users/davidjoseph/work/ plugins: bdd-4.0.1 collected 3 items tests/step_defs/test_admin.py EEF [100%] ======================================================================== ERRORS ========================================================================= ____________________________________________________________ ERROR at setup of test_register ____________________________________________________________ file /Users/davidjoseph/work/tests/step_defs/test_admin.py, line 18 @when('I call the register method for host <host_name> with email <user_email> and password <user_password> and firstName <first_name> and last name <last_name>') def test_register(admin_login, host_name, user_email, first_name, last_name): E fixture 'admin_login' not found > available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestbdd_given_I'm logged in as an admin at <host_name> with email <admin_email> and password <admin_password>, pytestbdd_given_trace, pytestbdd_then_the user will be able to log in to <host_name> with email <user_email> and password <user_password>, pytestbdd_then_trace, pytestbdd_when_I call the register method for host <host_name> with email <user_email> and password <user_password> and firstName <first_name> and last name <last_name>, pytestbdd_when_trace, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory > use 'pytest --fixtures [testpath]' for help on them. Can anyone tell me why admin_login is being recognized as a fixture? | Either downgrade to pytest-bdd<4 where this behaviour is still accepted, or rename the steps by removing the test_ prefix to prevent pytest from recognizing them as separate tests. @when(...) def register(admin_login, ...): ... @then(...) def login(admin_login): ... should work. | 6 | 6 |