Dataset columns:
content: stringlengths 85 to 101k
title: stringlengths 0 to 150
question: stringlengths 15 to 48k
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: stringlengths 35 to 137
Q: Get non-float values from specific column in pandas dataframe I want to get in a new dataframe the rows of an original dataframe where there is a non-real (i.e. string) value in a specific column. import pandas as pd import numpy as np test = {'a':[1,2,3], 'b':[4,5,'x'], 'c':['f','g','h']} df_test = pd.DataFrame(test) print(df_test) I want to get the third row where the value in 'b' column is not numeric (it is 'x'). A: The complication is that Pandas forces column elements to have the same type (object for mixed str and int) so simple selection is not possible. Hence I think it is necessary to iterate over the column of interest to select the row(s) and then extract that/those. mask = [] for j in df_test['b']: if isinstance(j, str): mask.append(True) else: mask.append(False) print(df_test[mask]) which produces a b c 2 3 x h A: You'll need to perform some type of list comprehension or element-wise apply and build a boolean mask for this type of problem. You can use any of the following approaches (you should see similar performance for all). isinstance .apply mask = df_test['b'].apply(isinstance, args=(str, )) print(df_test.loc[mask]) a b c 2 3 x h isinstance list comprehension mask = [isinstance(v, str) for v in df_test['b']] print(df_test.loc[mask]) a b c 2 3 x h coerce to numeric and find nans mask = pd.to_numeric(df_test['b'], errors='coerce').isna() print(df_test.loc[mask]) a b c 2 3 x h
Get non-float values from specific column in pandas dataframe
I want to get in a new dataframe the rows of an original dataframe where there is a non-real (i.e. string) value in a specific column. import pandas as pd import numpy as np test = {'a':[1,2,3], 'b':[4,5,'x'], 'c':['f','g','h']} df_test = pd.DataFrame(test) print(df_test) I want to get the third row where the value in 'b' column is not numeric (it is 'x').
[ "The complication is that Pandas forces column elements to have the same type (object for mixed str and int) so simple selection is not possible. Hence I think it is necessary to iterate over the column of interest to select the row(s) and then extract that/those.\nmask = []\nfor j in df_test['b']:\n if isinstance(j, str):\n mask.append(True)\n else:\n mask.append(False)\n \nprint(df_test[mask])\n\nwhich produces\n a b c\n2 3 x h\n\n", "You'll need to perform some type of list comprehension or element-wise apply and build a boolean mask for this type of problem. You can use any of the following approaches (you should see similar performance for all).\nisinstance .apply\nmask = df_test['b'].apply(isinstance, args=(str, ))\n\nprint(df_test.loc[mask])\n a b c\n2 3 x h\n\nisinstance list comprehension\nmask = [isinstance(v, str) for v in df_test['b']]\n\nprint(df_test.loc[mask])\n a b c\n2 3 x h\n\ncoerce to numeric and find nans\nmask = pd.to_numeric(df_test['b'], errors='coerce').isna()\n\nprint(df_test.loc[mask])\n a b c\n2 3 x h\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074658386_dataframe_pandas_python.txt
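A runnable condensation of the coercion approach from the answers above. The sample frame mirrors the question's df_test; pd.to_numeric with errors='coerce' turns every non-numeric entry into NaN, so the NaN mask marks the rows to extract.

import pandas as pd

df_test = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 'x'], 'c': ['f', 'g', 'h']})

# Non-numeric entries in 'b' become NaN after coercion
mask = pd.to_numeric(df_test['b'], errors='coerce').isna()

# Rows of the original frame whose 'b' value is not numeric
print(df_test.loc[mask])  # -> row 2: a=3, b='x', c='h'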
Q: Django LoginView cannot authenticate the user I have a LoginView and a registration form. The registration form works properly and users are saved to the database, but LoginView reports an incorrect login or password when they try to log in, even though the credentials are correct. Why can this be? CustomUser from models.py class CustomUser(AbstractUser): objects = UserManager() username = models.CharField( unique=True, max_length=150, error_messages={ 'unique': "Пользователь с таким ником уже есть"},) email = models.EmailField( 'почта', unique=True, blank=False) password = models.CharField( 'пароль', max_length=128) first_name = models.CharField( 'имя', max_length=30, blank=True) second_name = models.CharField( 'фамилия', max_length=30, blank=True) birthday = models.DateField( 'день рождения', blank=True, null=True) USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['username'] class Meta: verbose_name = 'user' verbose_name_plural = 'users' urls.py path('login/', LoginView.as_view( template_name='user/login.html'), name='login'), register def registrations(request): form = RegistrationForm(request.POST or None) if request.method == 'POST' and form.is_valid(): CustomUser.objects.create(**form.cleaned_data).save() messages.success(request, 'Вы успешно зарегестрировались') context = { 'form': form, } return render(request, 'user/reg.html', context) I need LoginView to work properly. A: Did you check that passwords are correctly hashed after the registration? If you are not using the forms in django.contrib.auth you need to manually create and verify passwords: See https://docs.djangoproject.com/en/4.1/topics/auth/default/ https://docs.djangoproject.com/en/4.1/topics/auth/passwords/#module-django.contrib.auth.hashers
Django LoginView cannot authenticate the user
I have a LoginView and a registration form. The registration form works properly and users are saved to the database, but LoginView reports an incorrect login or password when they try to log in, even though the credentials are correct. Why can this be? CustomUser from models.py class CustomUser(AbstractUser): objects = UserManager() username = models.CharField( unique=True, max_length=150, error_messages={ 'unique': "Пользователь с таким ником уже есть"},) email = models.EmailField( 'почта', unique=True, blank=False) password = models.CharField( 'пароль', max_length=128) first_name = models.CharField( 'имя', max_length=30, blank=True) second_name = models.CharField( 'фамилия', max_length=30, blank=True) birthday = models.DateField( 'день рождения', blank=True, null=True) USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['username'] class Meta: verbose_name = 'user' verbose_name_plural = 'users' urls.py path('login/', LoginView.as_view( template_name='user/login.html'), name='login'), register def registrations(request): form = RegistrationForm(request.POST or None) if request.method == 'POST' and form.is_valid(): CustomUser.objects.create(**form.cleaned_data).save() messages.success(request, 'Вы успешно зарегестрировались') context = { 'form': form, } return render(request, 'user/reg.html', context) I need LoginView to work properly.
[ "Did you check that password are correctly hashed after the registration?\nIf you are not using the forms in django.contrib.auth you need to manually create and verify password:\nSee\n\nhttps://docs.djangoproject.com/en/4.1/topics/auth/default/\nhttps://docs.djangoproject.com/en/4.1/topics/auth/passwords/#module-django.contrib.auth.hashers\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "python", "python_3.x" ]
stackoverflow_0074659278_django_django_models_django_rest_framework_python_python_3.x.txt
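A minimal sketch of the fix the answer points at: hash the password instead of saving the raw form value. set_password is the standard django.contrib.auth API; RegistrationForm and the field names are assumptions carried over from the question's code.

def registrations(request):
    form = RegistrationForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        data = form.cleaned_data
        user = CustomUser(username=data['username'], email=data['email'])
        user.set_password(data['password'])  # stores a hash, not the raw password
        user.save()
        messages.success(request, 'Registration successful')
    return render(request, 'user/reg.html', {'form': form})

With the password hashed this way, LoginView's authentication backend can match what the user types against the stored hash.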
Q: Using ray tune `tune.run` with pytorch returns different optimal hyperparameters combination I've initialized two identical ANN with PyTorch (both as structure and initial parameters), and I've noticed that the hyperparameters setting with Ray Tune, returns different results for the two ANN, even if I didn't have any random initialization. Someone could explain what I'm doing wrong? I'll attach the code: ANN Initialization: class Featrues_model(nn.Module): def __init__(self, n_inputs, dim_hidden, n_outputs): super().__init__() self.fc1 = nn.Linear(n_inputs, dim_hidden) self.fc2 = nn.Linear(dim_hidden, n_outputs) def forward(self, X): X = self.fc1(X) X = self.fc2(X) return X features_model_v1 = Featrues_model(len(list_input_variables),5,6) features_model_v2 = Featrues_model(len(list_input_variables),5,6) features_model_v2.load_state_dict(features_model_v1.state_dict()) Hyperpamameters setting config = { "lr": tune.choice([1e-2, 1e-5]), "weight_decay": tune.choice([1e-2, 1e-5]), "batch_size": tune.choice([16,64]), "epochs": tune.choice([10,50]) } Train & Validation Dataframe trainset = df_final.copy() test_abs = int(len(trainset) * 0.8) train_subset, val_subset = random_split( trainset, [test_abs, len(trainset) - test_abs] ) df_train = df_final.iloc[train_subset.indices] df_val = df_final.iloc[val_subset.indices] Train function design def setting_model(config, df_train, df_val, model): criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=config["lr"], weight_decay=config["weight_decay"]) BATCH_SIZE = config["batch_size"] for epoch in range(config["epochs"]): train_epoch_loss = 0 train_epoch_acc = 0 step = 0 for i in tqdm(range(0, df_train.shape[0], BATCH_SIZE)): batch_X = np.array( df_train[list_input_variables].iloc[i:i+BATCH_SIZE] ) batch_X = torch.Tensor([x for x in batch_X]) batch_Y = np.array( df_train[list_output_variables].iloc[i:i+BATCH_SIZE] ) batch_Y = torch.Tensor([int(y) for y in batch_Y]) batch_Y = batch_Y.type(torch.int64) optimizer.zero_grad() outputs = model.forward(batch_X) train_loss = criterion(outputs, batch_Y) train_acc = multi_acc(outputs, batch_Y) train_loss.backward() optimizer.step() train_epoch_loss += train_loss.item() train_epoch_acc += train_acc.item() step += 1 # print statistics print(f"Epochs: {epoch}") print(f"Train Loss: {train_epoch_loss/len(df_train)}") print(f"Train Acc: {train_epoch_acc/step}") print("\n") # Validation loss with torch.no_grad(): X_val = np.array( df_val[list_input_variables] ) X_val = torch.Tensor([x for x in X_val]) Y_val = np.array( df_val[list_output_variables] ) Y_val = torch.Tensor([int(y) for y in Y_val]) Y_val = Y_val.type(torch.int64) outputs = model.forward(X_val) _, predicted = torch.max(outputs.data, 1) total = Y_val.size(0) correct = (predicted == Y_val).sum().item() loss = criterion(outputs, Y_val) tune.report(loss=(loss.numpy()), accuracy=correct / total) print(f"Validation Loss: {loss.numpy()/len(df_val)}") print(f"Validation Acc: {correct / total:.3f}") print("Finished Training") Hyperparameters Tune result_v1 = tune.run( partial(setting_model, df_train=df_train, df_val=df_val, model=features_model_v1), config=config, fail_fast="raise", ) result_v2 = tune.run( partial(setting_model, df_train=df_train, df_val=df_val, model=features_model_v2), config=config, fail_fast="raise" ) Output result_v1.get_best_config() {'lr': 1e-05, 'weight_decay': 1e-05, 'epochs': 1} result_v2.get_best_config() {'lr': 0.01, 'weight_decay': 1e-05, 'epochs': 1} A: The issue is the use of torch.random under the hood. 
Since you are not directly providing a weight matrix for your layers, PyTorch initializes it for you. Luckily, you can have a reproducible experiment by setting torch.manual_seed(x) # where x is an integer One should use only a few random seeds, otherwise you might overfit on the random seed. See the lottery ticket hypothesis at https://arxiv.org/abs/1803.03635
Using ray tune `tune.run` with pytorch returns different optimal hyperparameters combination
I've initialized two identical ANN with PyTorch (both as structure and initial parameters), and I've noticed that the hyperparameters setting with Ray Tune, returns different results for the two ANN, even if I didn't have any random initialization. Someone could explain what I'm doing wrong? I'll attach the code: ANN Initialization: class Featrues_model(nn.Module): def __init__(self, n_inputs, dim_hidden, n_outputs): super().__init__() self.fc1 = nn.Linear(n_inputs, dim_hidden) self.fc2 = nn.Linear(dim_hidden, n_outputs) def forward(self, X): X = self.fc1(X) X = self.fc2(X) return X features_model_v1 = Featrues_model(len(list_input_variables),5,6) features_model_v2 = Featrues_model(len(list_input_variables),5,6) features_model_v2.load_state_dict(features_model_v1.state_dict()) Hyperpamameters setting config = { "lr": tune.choice([1e-2, 1e-5]), "weight_decay": tune.choice([1e-2, 1e-5]), "batch_size": tune.choice([16,64]), "epochs": tune.choice([10,50]) } Train & Validation Dataframe trainset = df_final.copy() test_abs = int(len(trainset) * 0.8) train_subset, val_subset = random_split( trainset, [test_abs, len(trainset) - test_abs] ) df_train = df_final.iloc[train_subset.indices] df_val = df_final.iloc[val_subset.indices] Train function design def setting_model(config, df_train, df_val, model): criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=config["lr"], weight_decay=config["weight_decay"]) BATCH_SIZE = config["batch_size"] for epoch in range(config["epochs"]): train_epoch_loss = 0 train_epoch_acc = 0 step = 0 for i in tqdm(range(0, df_train.shape[0], BATCH_SIZE)): batch_X = np.array( df_train[list_input_variables].iloc[i:i+BATCH_SIZE] ) batch_X = torch.Tensor([x for x in batch_X]) batch_Y = np.array( df_train[list_output_variables].iloc[i:i+BATCH_SIZE] ) batch_Y = torch.Tensor([int(y) for y in batch_Y]) batch_Y = batch_Y.type(torch.int64) optimizer.zero_grad() outputs = model.forward(batch_X) train_loss = criterion(outputs, batch_Y) train_acc = multi_acc(outputs, batch_Y) train_loss.backward() optimizer.step() train_epoch_loss += train_loss.item() train_epoch_acc += train_acc.item() step += 1 # print statistics print(f"Epochs: {epoch}") print(f"Train Loss: {train_epoch_loss/len(df_train)}") print(f"Train Acc: {train_epoch_acc/step}") print("\n") # Validation loss with torch.no_grad(): X_val = np.array( df_val[list_input_variables] ) X_val = torch.Tensor([x for x in X_val]) Y_val = np.array( df_val[list_output_variables] ) Y_val = torch.Tensor([int(y) for y in Y_val]) Y_val = Y_val.type(torch.int64) outputs = model.forward(X_val) _, predicted = torch.max(outputs.data, 1) total = Y_val.size(0) correct = (predicted == Y_val).sum().item() loss = criterion(outputs, Y_val) tune.report(loss=(loss.numpy()), accuracy=correct / total) print(f"Validation Loss: {loss.numpy()/len(df_val)}") print(f"Validation Acc: {correct / total:.3f}") print("Finished Training") Hyperparameters Tune result_v1 = tune.run( partial(setting_model, df_train=df_train, df_val=df_val, model=features_model_v1), config=config, fail_fast="raise", ) result_v2 = tune.run( partial(setting_model, df_train=df_train, df_val=df_val, model=features_model_v2), config=config, fail_fast="raise" ) Output result_v1.get_best_config() {'lr': 1e-05, 'weight_decay': 1e-05, 'epochs': 1} result_v2.get_best_config() {'lr': 0.01, 'weight_decay': 1e-05, 'epochs': 1}
[ "The issue is the use of torch.random under the hood. Since you are not directly providing a weight matrix for your layers, pytorch initializes it for you. Luckily, you can have a reproducible experiment by setting\ntorch.manual_seed(x) # where x is an integer\n\nOne should use only a few random seeds, otherwise you might overfit on the random seed. See lottery ticket hypothesis at https://arxiv.org/abs/1803.03635)\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "hyperparameters", "python", "pytorch", "ray_tune" ]
stackoverflow_0074656124_deep_learning_hyperparameters_python_pytorch_ray_tune.txt
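A small sketch of the answer's advice; torch.manual_seed is the documented API, and the check just confirms that two builds under the same seed start from identical weights.

import torch
import torch.nn as nn

torch.manual_seed(0)          # fix the global RNG before building the model
m1 = nn.Linear(4, 2)

torch.manual_seed(0)          # reset to the same seed
m2 = nn.Linear(4, 2)

# Both layers now start from identical weights
print(torch.equal(m1.weight, m2.weight))  # True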
Q: How do I remove an item from an array based on the difference between two items I'm trying to remove outliers from a dataset, where an outlier is if the difference between one item and the next one is larger than 3 * the uncertainty on the item def remove_outliers(data): for i in data: x = np.where(abs(i[1] - (i+1)[1]) > 3( * data[:,2])) data_outliers_removed = np.delete(data, x, axis =1) return data_outliers_removed is the function which I tried to use, however it either deletes no values or all values when I've played around with it. A: i would maybe do something like this by working with a new empty array. def remove_outliers(dataset): filtered_dataset = [] for index, item in enumerate(dataset): if index == 0: filtered_dataset.append(item) else: if abs(item[0] - dataset[index - 1][0]) <= 3 * dataset[index - 1][1]: filtered_dataset.append(item) return filtered_dataset Of course the same can be achieved easily with numpy. Hope that helps A: Iterating over a numpy array is usually a code-smell, since you reject numpy's super-fast indexing and slicing abilities for python's slow loops. I'm assuming data is a numpy array since you've used it like one. Your criterion for an outlier is: if the difference between one item and the next one is larger than 3 * the uncertainty on the item From your usage, it appears the "items" are in the data[:, 1] column, and the uncertainties are in the data[:, 2] column. The difference between an item and the next one is easy to obtain using np.diff, so our condition becomes: np.diff(data[:, 1]) > 3 * data[:-1, 2] I skipped the last uncertainty by doing data[:-1, 2] because the last uncertainty doesn't matter -- the last item doesn't have a "next element". I'm going to consider that it is an outlier and filter it out, but I've also shown how to filter it in if you want. We will use boolean indexing to filter out the rows we don't want in our array: def remove_outliers(data): select_mask = np.zeros(data[:, 1].shape, dtype=bool) # Make an array of Falses # Since the default value of the mask is False, items are considered outliers # and therefore filtered out unless we calculate the value for the mask # If you want to consider the opposite, do `np.ones(...)` # Only calculate the value for the mask for the first through the second-last item select_mask[:-1] = np.diff(data[:, 1]) > 3 * data[:-1, 2] # Select only those rows where select_mask is True # And select all columns filtered_data = data[select_mask, :] return filtered_data
How do I remove an item from an array based on the difference between two items
I'm trying to remove outliers from a dataset, where an outlier is if the difference between one item and the next one is larger than 3 * the uncertainty on the item def remove_outliers(data): for i in data: x = np.where(abs(i[1] - (i+1)[1]) > 3( * data[:,2])) data_outliers_removed = np.delete(data, x, axis =1) return data_outliers_removed is the function which I tried to use, however it either deletes no values or all values when I've played around with it.
[ "i would maybe do something like this by working with a new empty array.\ndef remove_outliers(dataset):\nfiltered_dataset = []\nfor index, item in enumerate(dataset):\n if index == 0:\n filtered_dataset.append(item)\n else:\n if abs(item[0] - dataset[index - 1][0]) <= 3 * dataset[index - 1][1]:\n filtered_dataset.append(item)\nreturn filtered_dataset\n\nOf course the same can be achieved easily with numpy.\nHope that helps\n", "Iterating over a numpy array is usually a code-smell, since you reject numpy's super-fast indexing and slicing abilities for python's slow loops. I'm assuming data is a numpy array since you've used it like one.\nYour criterion for an outlier is:\n\nif the difference between one item and the next one is larger than 3 * the uncertainty on the item\n\nFrom your usage, it appears the \"items\" are in the data[:, 1] column, and the uncertainties are in the data[:, 2] column.\nThe difference between an item and the next one is easy to obtain using np.diff, so our condition becomes:\nnp.diff(data[:, 1]) > 3 * data[:-1, 2]\n\nI skipped the last uncertainty by doing data[:-1, 2] because the last uncertainty doesn't matter -- the last item doesn't have a \"next element\". I'm going to consider that it is an outlier and filter it out, but I've also shown how to filter it in if you want.\nWe will use boolean indexing to filter out the rows we don't want in our array:\ndef remove_outliers(data):\n select_mask = np.zeros(data[:, 1].shape, dtype=bool) # Make an array of Falses\n # Since the default value of the mask is False, items are considered outliers\n # and therefore filtered out unless we calculate the value for the mask\n # If you want to consider the opposite, do `np.ones(...)`\n\n # Only calculate the value for the mask for the first through the second-last item\n select_mask[:-1] = np.diff(data[:, 1]) > 3 * data[:-1, 2]\n\n # Select only those rows where select_mask is True\n # And select all columns\n filtered_data = data[select_mask, :] \n return filtered_data\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074656777_arrays_numpy_python.txt
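A quick demonstration of the second answer's masking mechanics on made-up data, using its column layout (values in column 1, uncertainties in column 2). Note that as written the mask selects the flagged rows; invert it with ~select_mask if you want to keep everything else instead.

import numpy as np

# columns: [id, value, uncertainty]
data = np.array([[0, 1.0, 0.1],
                 [1, 1.1, 0.1],
                 [2, 9.0, 0.1],
                 [3, 9.1, 0.1]])

select_mask = np.zeros(data.shape[0], dtype=bool)
select_mask[:-1] = np.diff(data[:, 1]) > 3 * data[:-1, 2]

# Row 1 is flagged: the jump from 1.1 to 9.0 exceeds 3 * 0.1
print(data[select_mask, :])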
Q: Why won't my square move right? I'm trying all different methods to move it in turtle module it works for up and down but not left and right? # Game creation import turtle wn = turtle.Screen() wn.title("Pong") wn.bgcolor("Black") wn.setup(width=800, height=800) wn.tracer(0) # paddle a paddle_a = turtle.Turtle() paddle_a.speed(0) paddle_a.shape("square") paddle_a.color("white") paddle_a.penup() paddle_a.goto(0, 0) # Functions def paddle_a_right(): turtle.forward(100) wn.onkeypress(paddle_a_right, 'd') while True: wn.update() Want the square to move to the right or left using 'a' or 'd' I don't know very much about turtle, I just want to program a simple game. A: There are three major issues with your code. First, you need to call wn.listen() to allow the window to receive keyboard input. Second, you do turtle.forward(100) when you mean paddle_a.forward(100). Finally, since you did tracer(0), you now need to call wn.update() anytime a change is made that you want your user to see. Here's a simplified example: from turtle import Screen, Turtle def paddle_right(): paddle.forward(10) screen.update() screen = Screen() screen.title("Pong") screen.bgcolor("Black") screen.setup(width=800, height=800) screen.tracer(0) paddle = Turtle() paddle.shape("square") paddle.color("white") paddle.penup() screen.onkeypress(paddle_right, 'd') screen.listen() screen.update() screen.mainloop()
Why won't my square move right? I'm trying all different methods to move it in turtle module it works for up and down but not left and right?
# Game creation import turtle wn = turtle.Screen() wn.title("Pong") wn.bgcolor("Black") wn.setup(width=800, height=800) wn.tracer(0) # paddle a paddle_a = turtle.Turtle() paddle_a.speed(0) paddle_a.shape("square") paddle_a.color("white") paddle_a.penup() paddle_a.goto(0, 0) # Functions def paddle_a_right(): turtle.forward(100) wn.onkeypress(paddle_a_right, 'd') while True: wn.update() Want the square to move to the right or left using 'a' or 'd' I don't know very much about turtle, I just want to program a simple game.
[ "There are three major issues with your code. First, you need to call wn.listen() to allow the window to receive keyboard input. Second, you do turtle.forward(100) when you mean paddle_a.forward(100). Finally, since you did tracer(0), you now need to call wn.update() anytime a change is made that you want your user to see.\nHere's a simplified example:\nfrom turtle import Screen, Turtle\n\ndef paddle_right():\n paddle.forward(10)\n screen.update()\n\nscreen = Screen()\nscreen.title(\"Pong\")\nscreen.bgcolor(\"Black\")\nscreen.setup(width=800, height=800)\nscreen.tracer(0)\n\npaddle = Turtle()\npaddle.shape(\"square\")\npaddle.color(\"white\")\npaddle.penup()\n\nscreen.onkeypress(paddle_right, 'd')\nscreen.listen()\nscreen.update()\nscreen.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074649562_python_python_turtle_turtle_graphics.txt
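The answer binds only 'd'; since the question asks for both keys, the symmetric left-movement handler is a short extension, assuming the same screen and paddle objects from the simplified example.

def paddle_left():
    paddle.backward(10)   # negative motion along the paddle's heading
    screen.update()

screen.onkeypress(paddle_left, 'a')   # register alongside the existing 'd' binding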
Q: Discord.py Showing User Badges I am trying to do a command that shows a user's badges. This is my code: @bot.command(pass_context=True) async def test(ctx, user: discord.Member): test = discord.Embed(title=f"{user.name} User's Badges", description=f"{user.public_flags}", color=0xff0000 ) await ctx.channel.send(embed=test) And the bot is responding like this <PublicUserFlags value=64> I want it to respond like this Hype Squad ... How do I do that? A: You could do str(user.public_flags.all()) to obtain a string value of all the badges an user has. Although this is an improvement, your output will still be something like: [<UserFlags.hypesquad_brilliance: 128>]. But the advantage here is that the words hypesquad and brilliance are clearly indicated in the string. Now, all you have to do is to remove [<UserFlags., _ and : 128>] from the string. Here is a way to re-define your code: @client.command(pass_context=True) async def test(ctx, user: discord.Member): # Remove unnecessary characters hypesquad_class = str(user.public_flags.all()).replace('[<UserFlags.', '').replace('>]', '').replace('_', ' ').replace( ':', '').title() # Remove digits from string hypesquad_class = ''.join([i for i in hypesquad_class if not i.isdigit()]) # Output test = discord.Embed(title=f"{user.name} User's Badges", description=f"{hypesquad_class}", color=0xff0000) await ctx.channel.send(embed=test) A: user.public_flags is not the way to access the user's profile. From the documentation, you need to use user.profile() to get attributes like premium, staff, hypesquad. Since discord.py 1.7 it is impossible to get info from the user's profile using await user.profile(). In the documentation it states that this functionality is deprecated. If you try it you get an error Forbidden: 403 Forbidden (error code: 20001): Bots cannot use this endpoint A: here a lil correction from the code from @GGBerry @client.command(pass_context=True) async def test(ctx, user: discord.Member): userFlags = user.public_flags.all() for flag in userFlags: print(flag.name) user.public_flags.all() returns a list, that can be iterated. in the list are flag object from the type discord.UserFlag. This object contains all sorts of badges. Here is the documentation for the UserFlags: https://discordpy.readthedocs.io/en/stable/api.html?highlight=userflag#discord.UserFlags Greetings, DasMoorhuhn
Discord.py Showing User Badges
I am trying to do a command that shows a user's badges. This is my code: @bot.command(pass_context=True) async def test(ctx, user: discord.Member): test = discord.Embed(title=f"{user.name} User's Badges", description=f"{user.public_flags}", color=0xff0000 ) await ctx.channel.send(embed=test) And the bot is responding like this <PublicUserFlags value=64> I want it to respond like this Hype Squad ... How do I do that?
[ "You could do str(user.public_flags.all()) to obtain a string value of all the badges an user has. Although this is an improvement, your output will still be something like: [<UserFlags.hypesquad_brilliance: 128>]. But the advantage here is that the words hypesquad and brilliance are clearly indicated in the string. Now, all you have to do is to remove [<UserFlags., _ and : 128>] from the string.\nHere is a way to re-define your code:\[email protected](pass_context=True)\nasync def test(ctx, user: discord.Member):\n # Remove unnecessary characters\n hypesquad_class = str(user.public_flags.all()).replace('[<UserFlags.', '').replace('>]', '').replace('_',\n ' ').replace(\n ':', '').title()\n\n # Remove digits from string\n hypesquad_class = ''.join([i for i in hypesquad_class if not i.isdigit()])\n\n # Output\n test = discord.Embed(title=f\"{user.name} User's Badges\", description=f\"{hypesquad_class}\", color=0xff0000)\n await ctx.channel.send(embed=test)\n\n", "user.public_flags is not the way to access the user's profile.\nFrom the documentation, you need to use user.profile() to get attributes like premium,\nstaff, hypesquad.\nSince discord.py 1.7 it is impossible to get info from the user's profile using await user.profile(). In the documentation it states that this functionality is deprecated. If you try it you get an error Forbidden: 403 Forbidden (error code: 20001): Bots cannot use this endpoint\n", "here a lil correction from the code from @GGBerry\[email protected](pass_context=True)\nasync def test(ctx, user: discord.Member):\n userFlags = user.public_flags.all()\n for flag in userFlags:\n print(flag.name)\n\nuser.public_flags.all() returns a list, that can be iterated. in the list are flag object from the type discord.UserFlag. This object contains all sorts of badges. Here is the documentation for the UserFlags: https://discordpy.readthedocs.io/en/stable/api.html?highlight=userflag#discord.UserFlags\nGreetings, DasMoorhuhn\n" ]
[ 1, 0, 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0066951118_discord_discord.py_python.txt
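Building on the last answer, a sketch that formats the UserFlag names into the readable badge list the question asks for; the underscore-to-space, title-case formatting is an assumption about the desired output, and bot is the question's command object.

@bot.command(pass_context=True)
async def badges(ctx, user: discord.Member):
    # e.g. UserFlags.hypesquad_brilliance -> "Hypesquad Brilliance"
    names = [flag.name.replace('_', ' ').title() for flag in user.public_flags.all()]
    embed = discord.Embed(title=f"{user.name} User's Badges",
                          description=', '.join(names) or 'No badges',
                          color=0xff0000)
    await ctx.channel.send(embed=embed)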
Q: Highlighting multiple hex_tiles by hovering in bokeh I try to visualize my data in a hex map. For this I use python bokeh and the corresponding hex_tile function in the figure class. My data belongs to one of 8 different classes, each having a different color. The image below shows the current visualization: I would like to add the possibility to change the color of the element (and ideally all its class members) when the mouse hovers over it. I know, that it is somewhat possible, as bokeh themselves provide the following example: https://docs.bokeh.org/en/latest/docs/gallery/hexbin.html However, I do not know how to implement this myself (as this seems to be a feature for the hexbin function and not the simple hex_tile function) Currently I provide my data in a ColumnDataSource: source = ColumnDataSource(data=dict( r=x_row, q=y_col, color=colors_array, ipc_class=ipc_array )) where "ipc_class" describes one of the 8 classes the element belongs to. For the mouse hover tooltip I used the following code: TOOLTIPS = [ ("index", "$index"), ("(r,q)", "(@r, @q)"), ("ipc_class", "@ipc_class") ] and then I visualized everything with: p = figure(plot_width=1600, plot_height=1000, title="Ipc to Hexes with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154', tooltips=TOOLTIPS) p.grid.visible = False p.hex_tile('q', 'r', source=source, fill_color='color') I would like the visualization to add a function, where hovering over one element will result in one of the following: 1. Highlight the current element by changing its color 2. Highlight multiple elements of the same class when one is hovered over by changing its color 3. Change the color of the outer line of the hex_tile element (or complete class) when the element is hovered over Which of these features is possible with bokeh and how would I go about it? EDIT: After trying to reimplement the suggestion by Tony, all elements will turn pink as soon as my mouse hits the graph and the color won´t turn back. My code looks like this: source = ColumnDataSource(data=dict( x=x_row, y=y_col, color=colors_array, ipc_class=ipc_array )) p = figure(plot_width=800, plot_height=800, title="Ipc to Square with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154') p.grid.visible = False p.hex_tile('x', 'y', source=source, fill_color='color') ################################### code = ''' for (i in cb_data.renderer.data_source.data['color']) cb_data.renderer.data_source.data['color'][i] = colors[i]; if (cb_data.index.indices != null) { hovered_index = cb_data.index.indices[0]; hovered_color = cb_data.renderer.data_source.data['color'][hovered_index]; for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) { if (cb_data.renderer.data_source.data['color'][i] == hovered_color) cb_data.renderer.data_source.data['color'][i] = 'pink'; } } cb_data.renderer.data_source.change.emit(); ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(args=dict(colors=colors), code=code) hover = HoverTool(tooltips=TOOLTIPS, callback=callback) p.add_tools(hover) ######################################## output_file("hexbin.html") show(p) basically, I removed the tooltips from the figure function and put them down to the hover tool. As I already have red in my graph, I replaced the hover color to "pink". As I am not quite sure what each line in the "code" variable is supposed to do, I am quite helpless with this. 
I think one mistake may be that my ColumnDataSource looks somewhat different from Tony's, and I do not know what was done to "classify" the first and third elements together, as well as the second and fourth. For me, it would be perfect if the classification were done by the "ipc_class" variable.
callback) plot.add_tools(hover) show(plot) Result: A: Another approach is to update cb_data.index.indices to include all those indices that have ipc_class in common, and add hover_color="pink" to hex_tile. So in the CustomJS code one would loop the ipc_class column and get the indices that match the ipc_class of the currently hovered item. In this setup there is not need to update the color column in the data source. Code below tested used Bokeh version 3.0.2. from bokeh.plotting import figure, show, output_file from bokeh.models import ColumnDataSource, CustomJS, HoverTool colors_array = ["green", "green", "blue", "blue"] x_row = [0, 1, 2, 3] y_col = [1, 1, 1, 1] ipc_array = ['A', 'B', 'A', 'B'] source = ColumnDataSource(data = dict( x = x_row, y = y_col, color = colors_array, ipc_class = ipc_array )) plot = figure( width = 800, height = 800, title = "Ipc to Square with colors", match_aspect = True, tools = "wheel_zoom,reset,pan", background_fill_color = '#440154' ) plot.grid.visible = False plot.hex_tile( 'x', 'y', source = source, fill_color = 'color', hover_color = 'pink' # Added! ) code = ''' const hovered_index = cb_data.index.indices; const src_data = cb_data.renderer.data_source.data; if (hovered_index.length > 0) { const hovered_ipc_class = src_data['ipc_class'][hovered_index]; var idx_common_ipc_class = hovered_index; for (let i = 0; i < src_data['ipc_class'].length; i++) { if (i === hovered_index[0]) { continue; } if (src_data['ipc_class'][i] === hovered_ipc_class) { idx_common_ipc_class.push(i); } } cb_data.index.indices = idx_common_ipc_class; cb_data.renderer.data_source.change.emit(); } ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(code = code) hover = HoverTool( tooltips = TOOLTIPS, callback = callback ) plot.add_tools(hover) output_file("hexbin.html") show(p)
Highlighting multiple hex_tiles by hovering in bokeh
I try to visualize my data in a hex map. For this I use python bokeh and the corresponding hex_tile function in the figure class. My data belongs to one of 8 different classes, each having a different color. The image below shows the current visualization: I would like to add the possibility to change the color of the element (and ideally all its class members) when the mouse hovers over it. I know, that it is somewhat possible, as bokeh themselves provide the following example: https://docs.bokeh.org/en/latest/docs/gallery/hexbin.html However, I do not know how to implement this myself (as this seems to be a feature for the hexbin function and not the simple hex_tile function) Currently I provide my data in a ColumnDataSource: source = ColumnDataSource(data=dict( r=x_row, q=y_col, color=colors_array, ipc_class=ipc_array )) where "ipc_class" describes one of the 8 classes the element belongs to. For the mouse hover tooltip I used the following code: TOOLTIPS = [ ("index", "$index"), ("(r,q)", "(@r, @q)"), ("ipc_class", "@ipc_class") ] and then I visualized everything with: p = figure(plot_width=1600, plot_height=1000, title="Ipc to Hexes with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154', tooltips=TOOLTIPS) p.grid.visible = False p.hex_tile('q', 'r', source=source, fill_color='color') I would like the visualization to add a function, where hovering over one element will result in one of the following: 1. Highlight the current element by changing its color 2. Highlight multiple elements of the same class when one is hovered over by changing its color 3. Change the color of the outer line of the hex_tile element (or complete class) when the element is hovered over Which of these features is possible with bokeh and how would I go about it? EDIT: After trying to reimplement the suggestion by Tony, all elements will turn pink as soon as my mouse hits the graph and the color won´t turn back. My code looks like this: source = ColumnDataSource(data=dict( x=x_row, y=y_col, color=colors_array, ipc_class=ipc_array )) p = figure(plot_width=800, plot_height=800, title="Ipc to Square with colors", match_aspect=True, tools="wheel_zoom,reset,pan", background_fill_color='#440154') p.grid.visible = False p.hex_tile('x', 'y', source=source, fill_color='color') ################################### code = ''' for (i in cb_data.renderer.data_source.data['color']) cb_data.renderer.data_source.data['color'][i] = colors[i]; if (cb_data.index.indices != null) { hovered_index = cb_data.index.indices[0]; hovered_color = cb_data.renderer.data_source.data['color'][hovered_index]; for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) { if (cb_data.renderer.data_source.data['color'][i] == hovered_color) cb_data.renderer.data_source.data['color'][i] = 'pink'; } } cb_data.renderer.data_source.change.emit(); ''' TOOLTIPS = [ ("index", "$index"), ("(x,y)", "(@x, @y)"), ("ipc_class", "@ipc_class") ] callback = CustomJS(args=dict(colors=colors), code=code) hover = HoverTool(tooltips=TOOLTIPS, callback=callback) p.add_tools(hover) ######################################## output_file("hexbin.html") show(p) basically, I removed the tooltips from the figure function and put them down to the hover tool. As I already have red in my graph, I replaced the hover color to "pink". As I am not quite sure what each line in the "code" variable is supposed to do, I am quite helpless with this. 
I think one mistake may be that my ColumnDataSource looks somewhat different from Tony's, and I do not know what was done to "classify" the first and third elements together, as well as the second and fourth. For me, it would be perfect if the classification were done by the "ipc_class" variable.
[ "Following the discussion from previous post here comes the solution targeted for the OP code (Bokeh v1.1.0). What I did is:\n1) Added a HoverTool\n2) Added a JS callback to the HoverTool which:\n\nResets the hex colors to the original ones (colors_array passed in the callback)\nInspects the index of currently hovered hex (hovered_index)\nGets the ip_class of currently hovered hex (hovered_ip_class)\nWalks through the data_source.data['ip_class'] and finds all hexagons with the same ip_class as the hovered one and sets a new color for it (pink)\nSend source.change.emit() signal to the BokehJS to update the model\n\n\nThe code:\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import ColumnDataSource, CustomJS, HoverTool\n\ncolors_array = [\"green\", \"green\", \"blue\", \"blue\"]\nx_row = [0, 1, 2, 3]\ny_col = [1, 1, 1, 1]\nipc_array = ['A', 'B', 'A', 'B']\n\nsource = ColumnDataSource(data = dict(\n x = x_row,\n y = y_col,\n color = colors_array,\n ipc_class = ipc_array\n))\n\np = figure(plot_width = 800, plot_height = 800, title = \"Ipc to Square with colors\", match_aspect = True,\n tools = \"wheel_zoom,reset,pan\", background_fill_color = '#440154')\np.grid.visible = False\np.hex_tile('x', 'y', source = source, fill_color = 'color')\n\n###################################\ncode = ''' \nfor (let i in cb_data.renderer.data_source.data['color'])\n cb_data.renderer.data_source.data['color'][i] = colors[i];\n\nif (cb_data.index.indices != null) {\n const hovered_index = cb_data.index.indices[0];\n const hovered_ipc_class = cb_data.renderer.data_source.data['ipc_class'][hovered_index];\n for (let i = 0; i < cb_data.renderer.data_source.data['ipc_class'].length; i++) {\n if (cb_data.renderer.data_source.data['ipc_class'][i] == hovered_ipc_class)\n cb_data.renderer.data_source.data['color'][i] = 'pink';\n }\n}\ncb_data.renderer.data_source.change.emit();\n'''\n\nTOOLTIPS = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"(@x, @y)\"),\n (\"ipc_class\", \"@ipc_class\")\n]\n\ncallback = CustomJS(args = dict(ipc_array = ipc_array, colors = colors_array), code = code)\nhover = HoverTool(tooltips = TOOLTIPS, callback = callback)\np.add_tools(hover)\n########################################\n\noutput_file(\"hexbin.html\")\n\nshow(p)\n\nResult:\n\n", "Maybe something like this to start with (Bokeh v1.1.0):\nfrom bokeh.plotting import figure, show\nfrom bokeh.models import ColumnDataSource, CustomJS, HoverTool\n\ncolors = [\"green\", \"blue\", \"green\", \"blue\"]\nsource = ColumnDataSource(dict(r = [0, 1, 2, 3], q = [1, 1, 1, 1], color = colors))\nplot = figure(plot_width = 300, plot_height = 300, match_aspect = True)\nplot.hex_tile('r', 'q', fill_color = 'color', source = source)\n\ncode = ''' \nfor (i in cb_data.renderer.data_source.data['color'])\n cb_data.renderer.data_source.data['color'][i] = colors[i];\n\nif (cb_data.index.indices != null) {\n hovered_index = cb_data.index.indices[0];\n hovered_color = cb_data.renderer.data_source.data['color'][hovered_index];\n for (i = 0; i < cb_data.renderer.data_source.data['color'].length; i++) {\n if (cb_data.renderer.data_source.data['color'][i] == hovered_color)\n cb_data.renderer.data_source.data['color'][i] = 'red';\n }\n}\ncb_data.renderer.data_source.change.emit();\n'''\ncallback = CustomJS(args = dict(colors = colors), code = code)\nhover = HoverTool(tooltips = [('R', '@r')], callback = callback)\nplot.add_tools(hover)\nshow(plot)\n\nResult:\n\n", "Another approach is to update cb_data.index.indices to include all those indices 
that have ipc_class in common, and add hover_color=\"pink\" to hex_tile. So in the CustomJS code one would loop the ipc_class column and get the indices that match the ipc_class of the currently hovered item.\nIn this setup there is not need to update the color column in the data source.\nCode below tested used Bokeh version 3.0.2.\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import ColumnDataSource, CustomJS, HoverTool\n\ncolors_array = [\"green\", \"green\", \"blue\", \"blue\"]\nx_row = [0, 1, 2, 3]\ny_col = [1, 1, 1, 1]\nipc_array = ['A', 'B', 'A', 'B']\n\nsource = ColumnDataSource(data = dict(\n x = x_row,\n y = y_col,\n color = colors_array,\n ipc_class = ipc_array\n))\n\nplot = figure(\n width = 800,\n height = 800, \n title = \"Ipc to Square with colors\",\n match_aspect = True,\n tools = \"wheel_zoom,reset,pan\",\n background_fill_color = '#440154'\n)\nplot.grid.visible = False\nplot.hex_tile(\n 'x', 'y',\n source = source,\n fill_color = 'color',\n hover_color = 'pink' # Added!\n)\n\ncode = '''\n const hovered_index = cb_data.index.indices;\n const src_data = cb_data.renderer.data_source.data;\n if (hovered_index.length > 0) {\n const hovered_ipc_class = src_data['ipc_class'][hovered_index];\n var idx_common_ipc_class = hovered_index;\n for (let i = 0; i < src_data['ipc_class'].length; i++) {\n if (i === hovered_index[0]) {\n continue;\n }\n if (src_data['ipc_class'][i] === hovered_ipc_class) {\n idx_common_ipc_class.push(i);\n }\n }\n cb_data.index.indices = idx_common_ipc_class;\n cb_data.renderer.data_source.change.emit();\n }\n'''\n\nTOOLTIPS = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"(@x, @y)\"),\n (\"ipc_class\", \"@ipc_class\")\n]\n\ncallback = CustomJS(code = code)\nhover = HoverTool(\n tooltips = TOOLTIPS,\n callback = callback\n)\nplot.add_tools(hover)\n\noutput_file(\"hexbin.html\")\nshow(p)\n\n" ]
[ 7, 2, 0 ]
[]
[]
[ "bokeh", "python" ]
stackoverflow_0055947149_bokeh_python.txt
Q: setuptools pyproject.toml equivalent to `python setup.py clean --all` I'm migrating from setup.py to pyproject.toml. The commands to install my package appear to be the same, but I can't find what the pyproject.toml command for cleaning up build artifacts is. What is the equivalent to python setup.py clean --all? A: The distutils command clean is not needed for a pyproject.toml based build. Modern tools invoking PEP517/PEP518 hooks, such as build, create a temporary directory or a cache directory to store intermediate files while building, rather than littering the project directory with a build subdirectory. Anyway, it was not really an exciting command in the first place and rm -rf build does the same job. A: I ran into this same issue when I was migrating. What wim answered seems to be mostly true. If you do as the setuptools documentation says and use python -m build then the build directory will not be created, but a dist will. However if you do pip install . a build directory will be left behind even if you are using a pyproject.toml file. This can cause issues if you change your package structure or rename files as sometimes the old version that is in the build directory will be installed instead of your current changes. Personally I run pip install . && rm -rf build or pip install . && rmdir /s /q build for Windows. This could be expanded to remove any other unwanted artifacts.
setuptools pyproject.toml equivalent to `python setup.py clean --all`
I'm migrating from setup.py to pyproject.toml. The commands to install my package appear to be the same, but I can't find what the pyproject.toml command for cleaning up build artifacts is. What is the equivalent to python setup.py clean --all?
[ "The distutils command clean is not needed for a pyproject.toml based build. Modern tools invoking PEP517/PEP518 hooks, such as build, create a temporary directory or a cache directory to store intermediate files while building, rather than littering the project directory with a build subdirectory.\nAnyway, it was not really an exciting command in the first place and rm -rf build does the same job.\n", "I ran into this same issue when I was migrating. What wim answered seems to be mostly true. If you do as the setuptools documentation says and use python -m build then the build directory will not be created, but a dist will. However if you do pip install . a build directory will be left behind even if you are using a pyproject.toml file. This can cause issues if you change your package structure or rename files as sometimes the old version that is in the build directory will be installed instead of your current changes. Personally I run pip install . && rm -rf build or pip install . && rmdir /s /q build for Windows. This could be expanded to remove any other unwanted artifacts.\n" ]
[ 4, 1 ]
[]
[]
[ "pyproject.toml", "python", "setuptools" ]
stackoverflow_0072468946_pyproject.toml_python_setuptools.txt
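If you want the cleanup step in Python rather than a shell command (so it behaves the same on Windows and Unix), a small sketch; build, dist, and *.egg-info are the usual setuptools artifacts, so trim the list to whatever your project actually produces.

import pathlib
import shutil

# Remove the usual setuptools build artifacts, ignoring ones that don't exist
for artifact in ('build', 'dist'):
    shutil.rmtree(artifact, ignore_errors=True)
for egg_info in pathlib.Path('.').glob('*.egg-info'):
    shutil.rmtree(egg_info, ignore_errors=True)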
Q: imaplib STORE failed - Mailbox has read-only access. Cannot Delete Yahoo Email trying to delete emails in my yahoo account using imaplib. i'm new to python, figured out most of the code but unable to find anything that works relating to this error. imap = imaplib.IMAP4_SSL(imap_server) imap.login(email_address, password) imap.select("Learn", readonly=False) con = imaplib.IMAP4_SSL('imap.mail.yahoo.com',993) con.login(email_address, password) con.select('Learn',readonly=False) imap.select('"Learn"', "(UNSEEN)") for i in '1': typ, msg_data = imap.fetch('1', '(RFC822)') for response_part in msg_data: if isinstance(response_part, tuple): msg = email.message_from_bytes(response_part[1]) for header in [ 'from' ]: print('%-8s: %s' % (header.upper(), msg[header])) imap.store(i, "+FLAGS", "\\Deleted") #tried commented codes below and same error #imap.expunge() #result, data = imap.uid('STORE', str(i) , '+FLAGS', '(\\Deleted)') #imap.uid('STORE', i, '+X-GM-LABELS', '\\Trash') con.close() con.logout() i get the error below STORE command error: BAD [b'[CANNOT] STORE failed - Mailbox has read-only access'] any help would be greatly appreciated A: imap.select('"Learn"', "(UNSEEN)") Select does not take a search criterion. The second parameter is “readonly”, so this is the same as: imap.select('"Learn"', readonly="(UNSEEN)") Which as a non-empty string is the same as: imap.select('"Learn"', readonly=True) Which is why you can’t make any changes to that mailbox. Delete the second parameter: imap.select('"Learn"') You appear to be wanting to do a search for unseen messages. Use search for this.
imaplib STORE failed - Mailbox has read-only access. Cannot Delete Yahoo Email
trying to delete emails in my yahoo account using imaplib. i'm new to python, figured out most of the code but unable to find anything that works relating to this error. imap = imaplib.IMAP4_SSL(imap_server) imap.login(email_address, password) imap.select("Learn", readonly=False) con = imaplib.IMAP4_SSL('imap.mail.yahoo.com',993) con.login(email_address, password) con.select('Learn',readonly=False) imap.select('"Learn"', "(UNSEEN)") for i in '1': typ, msg_data = imap.fetch('1', '(RFC822)') for response_part in msg_data: if isinstance(response_part, tuple): msg = email.message_from_bytes(response_part[1]) for header in [ 'from' ]: print('%-8s: %s' % (header.upper(), msg[header])) imap.store(i, "+FLAGS", "\\Deleted") #tried commented codes below and same error #imap.expunge() #result, data = imap.uid('STORE', str(i) , '+FLAGS', '(\\Deleted)') #imap.uid('STORE', i, '+X-GM-LABELS', '\\Trash') con.close() con.logout() i get the error below STORE command error: BAD [b'[CANNOT] STORE failed - Mailbox has read-only access'] any help would be greatly appreciated
[ "imap.select('\"Learn\"', \"(UNSEEN)\")\n\nSelect does not take a search criterion. The second parameter is “readonly”, so this is the same as:\nimap.select('\"Learn\"', readonly=\"(UNSEEN)\")\n\nWhich as a non-empty string is the same as:\nimap.select('\"Learn\"', readonly=True)\n\nWhich is why you can’t make any changes to that mailbox. Delete the second parameter:\nimap.select('\"Learn\"')\n\nYou appear to be wanting to do a search for unseen messages. Use search for this.\n" ]
[ 0 ]
[]
[]
[ "email", "imap", "imaplib", "python" ]
stackoverflow_0074650452_email_imap_imaplib_python.txt
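Putting the answer together: select the mailbox without the stray second argument, then do the unseen filtering with search, which is the imaplib command that actually takes criteria. A sketch with placeholder credentials; whether you expunge immediately is up to you.

import imaplib

email_address = 'you@yahoo.com'   # placeholder
password = 'app-password'         # placeholder

imap = imaplib.IMAP4_SSL('imap.mail.yahoo.com', 993)
imap.login(email_address, password)
imap.select('"Learn"')                    # writable: no second positional arg
typ, data = imap.search(None, 'UNSEEN')   # search, not select, takes criteria
for num in data[0].split():
    imap.store(num, '+FLAGS', '\\Deleted')
imap.expunge()                            # permanently remove flagged messages
imap.close()
imap.logout()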
Q: CondaError: Downloaded bytes did not match Content-Length while trying to download cudnn using conda This is the error I'm having in the anaconda prompt, after executing the command: conda install cudnn==7.6.5 Error: CondaError: Downloaded bytes did not match Content-Length url: https://repo.anaconda.com/pkgs/main/win-64/cudnn-7.6.5-cuda10.1_0.conda target_path: C:\Users\User\anaconda3\pkgs\cudnn-7.6.5-cuda10.1_0.conda Content-Length: 187807360 downloaded bytes: 165935561 Note that I am not installing it in a new environment but in the base. I would appreciate any help!! A: Download the package manually (e.g. with curl), then install it from the local file: conda install --offline your_download_package A: Try going to the target_path and uninstalling cudnn, or just install cudnn there. It worked for me when I installed cudatoolkit.
CondaError: Downloaded bytes did not match Content-Length while trying to download cudnn using conda
This is the error I'm having in the anaconda prompt, after executing the command: conda install cudnn==7.6.5 Error: CondaError: Downloaded bytes did not match Content-Length url: https://repo.anaconda.com/pkgs/main/win-64/cudnn-7.6.5-cuda10.1_0.conda target_path: C:\Users\User\anaconda3\pkgs\cudnn-7.6.5-cuda10.1_0.conda Content-Length: 187807360 downloaded bytes: 165935561 Note that I am not installing it in a new environment but in the base. I would appreciate any help!!
[ "\nusing curl download package\nconda install --offline your_download_package\n\n", "try go to target_path and uninstall cudnn or just install cudnn in it. It works for me when i install cudatoolkit.\n" ]
[ 0, 0 ]
[]
[]
[ "anaconda", "python" ]
stackoverflow_0065130985_anaconda_python.txt
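The error means the transfer was cut short, so one way to act on the first answer from Python is to fetch the file yourself, verify the size against the server's Content-Length, and only then hand it to conda install --offline. A sketch assuming the third-party requests library; the URL is the one from the error message.

import requests

url = 'https://repo.anaconda.com/pkgs/main/win-64/cudnn-7.6.5-cuda10.1_0.conda'
r = requests.get(url, timeout=600)
r.raise_for_status()

expected = r.headers.get('Content-Length')
if expected is not None and len(r.content) != int(expected):
    raise RuntimeError('download truncated, retry')

with open('cudnn-7.6.5-cuda10.1_0.conda', 'wb') as f:
    f.write(r.content)
# then: conda install --offline cudnn-7.6.5-cuda10.1_0.conda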
Q: matplotlib (equal unit length): with 'equal' aspect ratio z-axis is not equal to x- and y- When I set up an equal aspect ratio for a 3d graph, the z-axis does not change to 'equal'. So this: fig = pylab.figure() mesFig = fig.gca(projection='3d', adjustable='box') mesFig.axis('equal') mesFig.plot(xC, yC, zC, 'r.') mesFig.plot(xO, yO, zO, 'b.') pyplot.show() Gives me the following: Where obviously the unit length of z-axis is not equal to x- and y- units. How can I make the unit length of all three axes equal? All the solutions I found did not work. A: I like the above solutions, but they do have the drawback that you need to keep track of the ranges and means over all your data. This could be cumbersome if you have multiple data sets that will be plotted together. To fix this, I made use of the ax.get_[xyz]lim3d() methods and put the whole thing into a standalone function that can be called just once before you call plt.show(). Here is the new version: from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np def set_axes_equal(ax): '''Make axes of 3D plot have equal scale so that spheres appear as spheres, cubes as cubes, etc.. This is one possible solution to Matplotlib's ax.set_aspect('equal') and ax.axis('equal') not working for 3D. Input ax: a matplotlib axis, e.g., as output from plt.gca(). ''' x_limits = ax.get_xlim3d() y_limits = ax.get_ylim3d() z_limits = ax.get_zlim3d() x_range = abs(x_limits[1] - x_limits[0]) x_middle = np.mean(x_limits) y_range = abs(y_limits[1] - y_limits[0]) y_middle = np.mean(y_limits) z_range = abs(z_limits[1] - z_limits[0]) z_middle = np.mean(z_limits) # The plot bounding box is a sphere in the sense of the infinity # norm, hence I call half the max range the plot radius. plot_radius = 0.5*max([x_range, y_range, z_range]) ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius]) ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius]) ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius]) fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) set_axes_equal(ax) plt.show() A: I believe matplotlib does not yet set correctly equal axis in 3D... But I found a trick some times ago (I don't remember where) that I've adapted using it. The concept is to create a fake cubic bounding box around your data. 
You can test it with the following code: from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) # Create cubic bounding box to simulate equal aspect ratio max_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max() Xb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][0].flatten() + 0.5*(X.max()+X.min()) Yb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][1].flatten() + 0.5*(Y.max()+Y.min()) Zb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][2].flatten() + 0.5*(Z.max()+Z.min()) # Comment or uncomment following both lines to test the fake bounding box: for xb, yb, zb in zip(Xb, Yb, Zb): ax.plot([xb], [yb], [zb], 'w') plt.grid() plt.show() z data are about an order of magnitude larger than x and y, but even with equal axis option, matplotlib autoscale z axis: But if you add the bounding box, you obtain a correct scaling: A: Simple fix! I've managed to get this working in version 3.3.1. It looks like this issue has perhaps been resolved in PR#17172; You can use the ax.set_box_aspect([1,1,1]) function to ensure the aspect is correct (see the notes for the set_aspect function). When used in conjunction with the bounding box function(s) provided by @karlo and/or @Matee Ulhaq, the plots now look correct in 3D! Minimum Working Example import matplotlib.pyplot as plt import mpl_toolkits.mplot3d import numpy as np # Functions from @Mateen Ulhaq and @karlo def set_axes_equal(ax: plt.Axes): """Set 3D plot axes to equal scale. Make axes of 3D plot have equal scale so that spheres appear as spheres and cubes as cubes. Required since `ax.axis('equal')` and `ax.set_aspect('equal')` don't work on 3D. """ limits = np.array([ ax.get_xlim3d(), ax.get_ylim3d(), ax.get_zlim3d(), ]) origin = np.mean(limits, axis=1) radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0])) _set_axes_radius(ax, origin, radius) def _set_axes_radius(ax, origin, radius): x, y, z = origin ax.set_xlim3d([x - radius, x + radius]) ax.set_ylim3d([y - radius, y + radius]) ax.set_zlim3d([z - radius, z + radius]) # Generate and plot a unit sphere u = np.linspace(0, 2*np.pi, 100) v = np.linspace(0, np.pi, 100) x = np.outer(np.cos(u), np.sin(v)) # np.outer() -> outer vector product y = np.outer(np.sin(u), np.sin(v)) z = np.outer(np.ones(np.size(u)), np.cos(v)) fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.plot_surface(x, y, z) ax.set_box_aspect([1,1,1]) # IMPORTANT - this is the new, key line # ax.set_proj_type('ortho') # OPTIONAL - default is perspective (shown in image above) set_axes_equal(ax) # IMPORTANT - this is also required plt.show() A: I simplified Remy F's solution by using the set_x/y/zlim functions. 
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect('equal')

X = np.random.rand(100)*10+5
Y = np.random.rand(100)*5+2.5
Z = np.random.rand(100)*50+25

scat = ax.scatter(X, Y, Z)

max_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max() / 2.0

mid_x = (X.max()+X.min()) * 0.5
mid_y = (Y.max()+Y.min()) * 0.5
mid_z = (Z.max()+Z.min()) * 0.5
ax.set_xlim(mid_x - max_range, mid_x + max_range)
ax.set_ylim(mid_y - max_range, mid_y + max_range)
ax.set_zlim(mid_z - max_range, mid_z + max_range)

plt.show()

A: As of matplotlib 3.3.0, Axes3D.set_box_aspect seems to be the recommended approach.
import numpy as np

xs, ys, zs = <your data>
ax = <your axes>

# Option 1: aspect ratio is 1:1:1 in data space
ax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs)))

# Option 2: aspect ratio 1:1:1 in view space
ax.set_box_aspect((1, 1, 1))

A: Adapted from @karlo's answer to make things even cleaner:
def set_axes_equal(ax: plt.Axes):
    """Set 3D plot axes to equal scale.

    Make axes of 3D plot have equal scale so that spheres appear as
    spheres and cubes as cubes. Required since `ax.axis('equal')`
    and `ax.set_aspect('equal')` don't work on 3D.
    """
    limits = np.array([
        ax.get_xlim3d(),
        ax.get_ylim3d(),
        ax.get_zlim3d(),
    ])
    origin = np.mean(limits, axis=1)
    radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0]))
    _set_axes_radius(ax, origin, radius)

def _set_axes_radius(ax, origin, radius):
    x, y, z = origin
    ax.set_xlim3d([x - radius, x + radius])
    ax.set_ylim3d([y - radius, y + radius])
    ax.set_zlim3d([z - radius, z + radius])

Usage:
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect('equal')  # important!

# ...draw here...

set_axes_equal(ax)  # important!
plt.show()

EDIT: This answer does not work on more recent versions of Matplotlib due to the changes merged in pull-request #13474, which is tracked in issue #17172 and issue #1077. As a temporary workaround to this, one can remove the newly added lines in lib/matplotlib/axes/_base.py:
 class _AxesBase(martist.Artist):
     ...

     def set_aspect(self, aspect, adjustable=None, anchor=None, share=False):
         ...

+        if (not cbook._str_equal(aspect, 'auto')) and self.name == '3d':
+            raise NotImplementedError(
+                'It is not currently possible to manually set the aspect '
+                'on 3D axes')

A: EDIT: user2525140's code should work perfectly fine, although this answer supposedly attempted to fix a non-existent error. The answer below is just a duplicate (alternative) implementation:
def set_aspect_equal_3d(ax):
    """Fix equal aspect bug for 3D plots."""

    xlim = ax.get_xlim3d()
    ylim = ax.get_ylim3d()
    zlim = ax.get_zlim3d()

    from numpy import mean
    xmean = mean(xlim)
    ymean = mean(ylim)
    zmean = mean(zlim)

    plot_radius = max([abs(lim - mean_)
                       for lims, mean_ in ((xlim, xmean),
                                           (ylim, ymean),
                                           (zlim, zmean))
                       for lim in lims])

    ax.set_xlim3d([xmean - plot_radius, xmean + plot_radius])
    ax.set_ylim3d([ymean - plot_radius, ymean + plot_radius])
    ax.set_zlim3d([zmean - plot_radius, zmean + plot_radius])

A: As of matplotlib 3.6.0, this feature has been added with the command ax.set_aspect('equal'). Other options are 'equalxy', 'equalxz', and 'equalyz', to set only two directions to equal aspect ratios. This changes the data limits, example below. In the upcoming 3.7.0, you will be able to change the plot box aspect ratios rather than the data limits via the command ax.set_aspect('equal', adjustable='box').
To get the original behavior, use adjustable='datalim'.

A: I think this feature has been added to matplotlib since these answers have been posted. In case anyone is still searching for a solution, this is how I do it:
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=plt.figaspect(1)*2)
ax = fig.add_subplot(projection='3d', proj_type='ortho')

X = np.random.rand(100)
Y = np.random.rand(100)
Z = np.random.rand(100)

ax.scatter(X, Y, Z, color='b')

The key bit of code is figsize=plt.figaspect(1) which sets the aspect ratio of the figure to 1 by 1. The *2 after figaspect(1) scales the figure by a factor of two. You can set this scaling factor to whatever you want.
NOTE: This only works for figures with one plot.

A: For the time being, ax.set_aspect('equal') raises an error (version 3.5.1 with Anaconda).
ax.set_aspect('auto',adjustable='datalim') did not give a convincing solution either.
A lean work-around with ax.set_box_aspect((asx,asy,asz)) and asx, asy, asz = np.ptp(X), np.ptp(Y), np.ptp(Z) seems to be feasible (see my code snippet).
Let's hope that version 3.7 with the features @Scott mentioned will be successful soon.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

#---- generate data
nn = 100
X = np.random.randn(nn)*20 + 0
Y = np.random.randn(nn)*50 + 30
Z = np.random.randn(nn)*10 + -5

#---- check aspect ratio
asx, asy, asz = np.ptp(X), np.ptp(Y), np.ptp(Z)

fig = plt.figure(figsize=(15,15))
ax = fig.add_subplot(projection='3d')

#---- set box aspect ratio
ax.set_box_aspect((asx,asy,asz))
scat = ax.scatter(X, Y, Z, c=X+Y+Z, s=500, alpha=0.8)

ax.set_xlabel('X-axis'); ax.set_ylabel('Y-axis'); ax.set_zlabel('Z-axis')
plt.show()
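For quick verification on matplotlib >= 3.6, here is a minimal sketch of the built-in ax.set_aspect('equal') approach described above; the data ranges are illustrative and deliberately unequal so that unequal axis units are easy to spot:
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data with very different spans per axis
X = np.random.rand(100)*10 + 5    # spans ~10 units
Y = np.random.rand(100)*5 + 2.5   # spans ~5 units
Z = np.random.rand(100)*50 + 25   # spans ~50 units

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(X, Y, Z)

# Requires matplotlib >= 3.6; older versions raise NotImplementedError here
ax.set_aspect('equal')
plt.show()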
matplotlib (equal unit length): with 'equal' aspect ratio z-axis is not equal to x- and y-
When I set up an equal aspect ratio for a 3d graph, the z-axis does not change to 'equal'. So this:
fig = pylab.figure()
mesFig = fig.gca(projection='3d', adjustable='box')
mesFig.axis('equal')
mesFig.plot(xC, yC, zC, 'r.')
mesFig.plot(xO, yO, zO, 'b.')
pyplot.show()

Gives me the following, where obviously the unit length of the z-axis is not equal to the x- and y- units. How can I make the unit length of all three axes equal? None of the solutions I found worked.
[ "I like the above solutions, but they do have the drawback that you need to keep track of the ranges and means over all your data. This could be cumbersome if you have multiple data sets that will be plotted together. To fix this, I made use of the ax.get_[xyz]lim3d() methods and put the whole thing into a standalone function that can be called just once before you call plt.show(). Here is the new version:\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef set_axes_equal(ax):\n '''Make axes of 3D plot have equal scale so that spheres appear as spheres,\n cubes as cubes, etc.. This is one possible solution to Matplotlib's\n ax.set_aspect('equal') and ax.axis('equal') not working for 3D.\n\n Input\n ax: a matplotlib axis, e.g., as output from plt.gca().\n '''\n\n x_limits = ax.get_xlim3d()\n y_limits = ax.get_ylim3d()\n z_limits = ax.get_zlim3d()\n\n x_range = abs(x_limits[1] - x_limits[0])\n x_middle = np.mean(x_limits)\n y_range = abs(y_limits[1] - y_limits[0])\n y_middle = np.mean(y_limits)\n z_range = abs(z_limits[1] - z_limits[0])\n z_middle = np.mean(z_limits)\n\n # The plot bounding box is a sphere in the sense of the infinity\n # norm, hence I call half the max range the plot radius.\n plot_radius = 0.5*max([x_range, y_range, z_range])\n\n ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius])\n ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius])\n ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius])\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.set_aspect('equal')\n\nX = np.random.rand(100)*10+5\nY = np.random.rand(100)*5+2.5\nZ = np.random.rand(100)*50+25\n\nscat = ax.scatter(X, Y, Z)\n\nset_axes_equal(ax)\nplt.show()\n\n", "I believe matplotlib does not yet set correctly equal axis in 3D... But I found a trick some times ago (I don't remember where) that I've adapted using it. The concept is to create a fake cubic bounding box around your data.\nYou can test it with the following code:\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.set_aspect('equal')\n\nX = np.random.rand(100)*10+5\nY = np.random.rand(100)*5+2.5\nZ = np.random.rand(100)*50+25\n\nscat = ax.scatter(X, Y, Z)\n\n# Create cubic bounding box to simulate equal aspect ratio\nmax_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max()\nXb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][0].flatten() + 0.5*(X.max()+X.min())\nYb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][1].flatten() + 0.5*(Y.max()+Y.min())\nZb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][2].flatten() + 0.5*(Z.max()+Z.min())\n# Comment or uncomment following both lines to test the fake bounding box:\nfor xb, yb, zb in zip(Xb, Yb, Zb):\n ax.plot([xb], [yb], [zb], 'w')\n\nplt.grid()\nplt.show()\n\nz data are about an order of magnitude larger than x and y, but even with equal axis option, matplotlib autoscale z axis:\n\nBut if you add the bounding box, you obtain a correct scaling:\n\n", "Simple fix!\nI've managed to get this working in version 3.3.1.\nIt looks like this issue has perhaps been resolved in PR#17172; You can use the ax.set_box_aspect([1,1,1]) function to ensure the aspect is correct (see the notes for the set_aspect function). 
When used in conjunction with the bounding box function(s) provided by @karlo and/or @Mateen Ulhaq, the plots now look correct in 3D!\n\nMinimum Working Example\nimport matplotlib.pyplot as plt\nimport mpl_toolkits.mplot3d\nimport numpy as np\n\n# Functions from @Mateen Ulhaq and @karlo\ndef set_axes_equal(ax: plt.Axes):\n    \"\"\"Set 3D plot axes to equal scale.\n\n    Make axes of 3D plot have equal scale so that spheres appear as\n    spheres and cubes as cubes. Required since `ax.axis('equal')`\n    and `ax.set_aspect('equal')` don't work on 3D.\n    \"\"\"\n    limits = np.array([\n        ax.get_xlim3d(),\n        ax.get_ylim3d(),\n        ax.get_zlim3d(),\n    ])\n    origin = np.mean(limits, axis=1)\n    radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0]))\n    _set_axes_radius(ax, origin, radius)\n\ndef _set_axes_radius(ax, origin, radius):\n    x, y, z = origin\n    ax.set_xlim3d([x - radius, x + radius])\n    ax.set_ylim3d([y - radius, y + radius])\n    ax.set_zlim3d([z - radius, z + radius])\n\n# Generate and plot a unit sphere\nu = np.linspace(0, 2*np.pi, 100)\nv = np.linspace(0, np.pi, 100)\nx = np.outer(np.cos(u), np.sin(v))  # np.outer() -> outer vector product\ny = np.outer(np.sin(u), np.sin(v))\nz = np.outer(np.ones(np.size(u)), np.cos(v))\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.plot_surface(x, y, z)\n\nax.set_box_aspect([1,1,1])  # IMPORTANT - this is the new, key line\n# ax.set_proj_type('ortho')  # OPTIONAL - default is perspective (shown in image above)\nset_axes_equal(ax)  # IMPORTANT - this is also required\nplt.show()\n\n", "I simplified Remy F's solution by using the set_x/y/zlim functions.\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.set_aspect('equal')\n\nX = np.random.rand(100)*10+5\nY = np.random.rand(100)*5+2.5\nZ = np.random.rand(100)*50+25\n\nscat = ax.scatter(X, Y, Z)\n\nmax_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max() / 2.0\n\nmid_x = (X.max()+X.min()) * 0.5\nmid_y = (Y.max()+Y.min()) * 0.5\nmid_z = (Z.max()+Z.min()) * 0.5\nax.set_xlim(mid_x - max_range, mid_x + max_range)\nax.set_ylim(mid_y - max_range, mid_y + max_range)\nax.set_zlim(mid_z - max_range, mid_z + max_range)\n\nplt.show()\n\n\n", "As of matplotlib 3.3.0, Axes3D.set_box_aspect seems to be the recommended approach.\nimport numpy as np\n\nxs, ys, zs = <your data>\nax = <your axes>\n\n# Option 1: aspect ratio is 1:1:1 in data space\nax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs)))\n\n# Option 2: aspect ratio 1:1:1 in view space\nax.set_box_aspect((1, 1, 1))\n\n", "Adapted from @karlo's answer to make things even cleaner:\ndef set_axes_equal(ax: plt.Axes):\n    \"\"\"Set 3D plot axes to equal scale.\n\n    Make axes of 3D plot have equal scale so that spheres appear as\n    spheres and cubes as cubes.
Required since `ax.axis('equal')`\n    and `ax.set_aspect('equal')` don't work on 3D.\n    \"\"\"\n    limits = np.array([\n        ax.get_xlim3d(),\n        ax.get_ylim3d(),\n        ax.get_zlim3d(),\n    ])\n    origin = np.mean(limits, axis=1)\n    radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0]))\n    _set_axes_radius(ax, origin, radius)\n\ndef _set_axes_radius(ax, origin, radius):\n    x, y, z = origin\n    ax.set_xlim3d([x - radius, x + radius])\n    ax.set_ylim3d([y - radius, y + radius])\n    ax.set_zlim3d([z - radius, z + radius])\n\nUsage:\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.set_aspect('equal')  # important!\n\n# ...draw here...\n\nset_axes_equal(ax)  # important!\nplt.show()\n\n\nEDIT: This answer does not work on more recent versions of Matplotlib due to the changes merged in pull-request #13474, which is tracked in issue #17172 and issue #1077. As a temporary workaround to this, one can remove the newly added lines in lib/matplotlib/axes/_base.py:\n class _AxesBase(martist.Artist):\n     ...\n\n     def set_aspect(self, aspect, adjustable=None, anchor=None, share=False):\n         ...\n\n+        if (not cbook._str_equal(aspect, 'auto')) and self.name == '3d':\n+            raise NotImplementedError(\n+                'It is not currently possible to manually set the aspect '\n+                'on 3D axes')\n\n", "EDIT: user2525140's code should work perfectly fine, although this answer supposedly attempted to fix a non-existent error. The answer below is just a duplicate (alternative) implementation:\ndef set_aspect_equal_3d(ax):\n    \"\"\"Fix equal aspect bug for 3D plots.\"\"\"\n\n    xlim = ax.get_xlim3d()\n    ylim = ax.get_ylim3d()\n    zlim = ax.get_zlim3d()\n\n    from numpy import mean\n    xmean = mean(xlim)\n    ymean = mean(ylim)\n    zmean = mean(zlim)\n\n    plot_radius = max([abs(lim - mean_)\n                       for lims, mean_ in ((xlim, xmean),\n                                           (ylim, ymean),\n                                           (zlim, zmean))\n                       for lim in lims])\n\n    ax.set_xlim3d([xmean - plot_radius, xmean + plot_radius])\n    ax.set_ylim3d([ymean - plot_radius, ymean + plot_radius])\n    ax.set_zlim3d([zmean - plot_radius, zmean + plot_radius])\n\n", "As of matplotlib 3.6.0, this feature has been added with the command\nax.set_aspect('equal'). Other options are 'equalxy', 'equalxz', and 'equalyz', to set only two directions to equal aspect ratios. This changes the data limits, example below.\nIn the upcoming 3.7.0, you will be able to change the plot box aspect ratios rather than the data limits via the command ax.set_aspect('equal', adjustable='box'). To get the original behavior, use adjustable='datalim'.\n\n", "I think this feature has been added to matplotlib since these answers have been posted. In case anyone is still searching for a solution, this is how I do it:\nimport matplotlib.pyplot as plt \nimport numpy as np\n \nfig = plt.figure(figsize=plt.figaspect(1)*2)\nax = fig.add_subplot(projection='3d', proj_type='ortho')\n \nX = np.random.rand(100)\nY = np.random.rand(100)\nZ = np.random.rand(100)\n \nax.scatter(X, Y, Z, color='b')\n\nThe key bit of code is figsize=plt.figaspect(1) which sets the aspect ratio of the figure to 1 by 1. The *2 after figaspect(1) scales the figure by a factor of two.
You can set this scaling factor to whatever you want.\nNOTE: This only works for figures with one plot.\n\n", "\nFor the time being, ax.set_aspect('equal') raises an error (version 3.5.1 with Anaconda).\n\nax.set_aspect('auto',adjustable='datalim') did not give a convincing solution either.\n\nA lean work-around with ax.set_box_aspect((asx,asy,asz)) and asx, asy, asz = np.ptp(X), np.ptp(Y), np.ptp(Z) seems to be feasible (see my code snippet)\n\nLet's hope that version 3.7 with the features @Scott mentioned will be successful soon.\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n#---- generate data\nnn = 100\nX = np.random.randn(nn)*20 + 0\nY = np.random.randn(nn)*50 + 30\nZ = np.random.randn(nn)*10 + -5\n\n#---- check aspect ratio\nasx, asy, asz = np.ptp(X), np.ptp(Y), np.ptp(Z)\n\nfig = plt.figure(figsize=(15,15))\nax = fig.add_subplot(projection='3d')\n\n#---- set box aspect ratio\nax.set_box_aspect((asx,asy,asz))\nscat = ax.scatter(X, Y, Z, c=X+Y+Z, s=500, alpha=0.8)\n\nax.set_xlabel('X-axis'); ax.set_ylabel('Y-axis'); ax.set_zlabel('Z-axis')\nplt.show()\n\n\n\n\n" ]
[ 85, 79, 58, 57, 25, 23, 7, 2, 1, 0 ]
[]
[]
[ "aspect_ratio", "axis", "graph", "matplotlib", "python" ]
stackoverflow_0013685386_aspect_ratio_axis_graph_matplotlib_python.txt
Q: Updated StatsForecast Library shows error 'forecasts' is not defined in Python I was trying to replicate this code for stat forecasting in python, I came across an odd error "name 'forecasts' is not defined" which is quite strange as I was able to replicate the code without any errors before. I believe this was resolved in the latest update of this library StatsForecast but I still run across to the same error. Can you please help me out here. The code I am replicating is from this : https://towardsdatascience.com/time-series-forecasting-with-statistical-models-f08dcd1d24d1 A similar question was earlier asked for the same error, and the solution was updated but this error till comes up after the new solution as well, attached is the link to the question: Error in Data frame definition while Multiple TS Stat Forecasting in Python import random from itertools import product from IPython.display import display, Markdown from multiprocessing import cpu_count import matplotlib.pyplot as plt import numpy as np import pandas as pd from statsforecast import StatsForecast from nixtlats.data.datasets.m4 import M4, M4Info from statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB from statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters from statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage from statsforecast.models import MSTL from statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta from statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta df = pd.read_excel ('C:/X/X/X/Data.xlsx',sheet_name='Transpose') df.rename(columns = {'Row Labels':'Key'}, inplace=True) df['Key'] = df['Key'].astype(str) df = pd.melt(df,id_vars='Key',value_vars=list(df.columns[1:]),var_name ='ds') df.columns = df.columns.str.replace('Key', 'unique_id') df.columns = df.columns.str.replace('value', 'y') df["ds"] = pd.to_datetime(df["ds"],format='%Y-%m-%d') df=df[["ds","unique_id","y"]] df['unique_id'] = df['unique_id'].astype('object') df = df.set_index('unique_id') df.reset_index() seasonality = 30 #Monthly data models = [ ADIDA, CrostonClassic(), CrostonSBA(), CrostonOptimized(), IMAPA, (TSB,0.3,0.2), MSTL, Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta, AutoARIMA, AutoETS, AutoCES, AutoTheta, HistoricAverage, Naive, RandomWalkWithDrift, (SeasonalNaive, seasonality), (SeasonalExponentialSmoothing, seasonality, 0.2), (SeasonalWindowAverage, seasonality, 2 * seasonality), (WindowAverage, 2 * seasonality) ] fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=cpu_count()) %time forecasts = fcst.forecast(9) forecasts.reset_index() forecasts = forecasts.round(0) forecasts.to_excel("C:/X/X/X/Forecast_Output.xlsx",sheet_name='Sheet1') The dataset I am working with is given below: {'Row Labels': {0: 'XYZ-912750', 1: 'XYZ-461356', 2: 'XYZ-150591', 3: 'XYZ-627885', 4: 'XYZ-582638', 5: 'XYZ-631691', 6: 'XYZ-409952', 7: 'XYZ-245245', 8: 'XYZ-230662', 9: 'XYZ-533388', 10: 'XYZ-248225', 11: 'XYZ-582912', 12: 'XYZ-486079', 13: 'XYZ-867685', 14: 'XYZ-873555', 15: 'XYZ-375397', 16: 'XYZ-428066', 17: 'XYZ-774244', 18: 'XYZ-602796', 19: 'XYZ-267306', 20: 'XYZ-576156', 21: 'XYZ-775994', 22: 'XYZ-226742', 23: 'XYZ-641711', 24: 'XYZ-928543', 25: 'XYZ-217200', 26: 'XYZ-971921', 27: 'XYZ-141388', 28: 'XYZ-848360', 29: 'XYZ-864999', 30: 
'XYZ-821384', 31: 'XYZ-516339', 32: 'XYZ-462488', 33: 'XYZ-140964', 34: 'XYZ-225559', 35: 'XYZ-916534', 36: 'XYZ-389683', 37: 'XYZ-247803', 38: 'XYZ-718639', 39: 'XYZ-512944', 40: 'XYZ-727601', 41: 'XYZ-315757', 42: 'XYZ-764867', 43: 'XYZ-918344', 44: 'XYZ-430939', 45: 'XYZ-204784', 46: 'XYZ-415285', 47: 'XYZ-272089', 48: 'XYZ-812045', 49: 'XYZ-889257', 50: 'XYZ-275863', 51: 'XYZ-519930', 52: 'XYZ-102141', 53: 'XYZ-324473', 54: 'XYZ-999148', 55: 'XYZ-514915', 56: 'XYZ-932751', 57: 'XYZ-669878', 58: 'XYZ-233459', 59: 'XYZ-289984', 60: 'XYZ-150061', 61: 'XYZ-355028', 62: 'XYZ-881803', 63: 'XYZ-721426', 64: 'XYZ-522174', 65: 'XYZ-790172', 66: 'XYZ-744677', 67: 'XYZ-617017', 68: 'XYZ-982812', 69: 'XYZ-940695', 70: 'XYZ-119041', 71: 'XYZ-313844', 72: 'XYZ-868117', 73: 'XYZ-791717', 74: 'XYZ-100742', 75: 'XYZ-259687', 76: 'XYZ-688842', 77: 'XYZ-247326', 78: 'XYZ-360939', 79: 'XYZ-185017', 80: 'XYZ-244773', 81: 'XYZ-289058', 82: 'XYZ-477846', 83: 'XYZ-305072', 84: 'XYZ-828236', 85: 'XYZ-668927', 86: 'XYZ-616913', 87: 'XYZ-874876', 88: 'XYZ-371693', 89: 'XYZ-951238', 90: 'XYZ-371675', 91: 'XYZ-736997', 92: 'XYZ-922244', 93: 'XYZ-883225', 94: 'XYZ-267555', 95: 'XYZ-704013', 96: 'XYZ-874917', 97: 'XYZ-567402', 98: 'XYZ-167338', 99: 'XYZ-592671', 100: 'XYZ-130168', 101: 'XYZ-492522', 102: 'XYZ-696211', 103: 'XYZ-310469', 104: 'XYZ-973277', 105: 'XYZ-841356', 106: 'XYZ-389440', 107: 'XYZ-613876', 108: 'XYZ-662850', 109: 'XYZ-800625', 110: 'XYZ-500125', 111: 'XYZ-539949', 112: 'XYZ-576121', 113: 'XYZ-339006', 114: 'XYZ-247314', 115: 'XYZ-129049', 116: 'XYZ-980653', 117: 'XYZ-678520', 118: 'XYZ-584841', 119: 'XYZ-396755', 120: 'XYZ-409502', 121: 'XYZ-824561', 122: 'XYZ-825996', 123: 'XYZ-820540', 124: 'XYZ-264710', 125: 'XYZ-241176', 126: 'XYZ-491386', 127: 'XYZ-914132', 128: 'XYZ-496194', 129: 'XYZ-941615', 130: 'XYZ-765328', 131: 'XYZ-540602', 132: 'XYZ-222660', 133: 'XYZ-324367', 134: 'XYZ-583764', 135: 'XYZ-248478', 136: 'XYZ-379180', 137: 'XYZ-628462', 138: 'XYZ-454262'}, '2021-03-01': {0: 0, 1: 951, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 487, 9: 501, 10: 0, 11: 0, 12: 0, 13: 0, 14: 715, 15: 726, 16: 235, 17: 340, 18: 0, 19: 0, 20: 0, 21: 960, 22: 127, 23: 92, 24: 0, 25: 0, 26: 170, 27: 0, 28: 0, 29: 0, 30: 0, 31: 133, 32: 0, 33: 0, 34: 105, 35: 168, 36: 0, 37: 500, 38: 0, 39: 0, 40: 61, 41: 0, 42: 212, 43: 101, 44: 0, 45: 0, 46: 0, 47: 83, 48: 185, 49: 0, 50: 131, 51: 67, 52: 0, 53: 141, 54: 0, 55: 140, 56: 0, 57: 0, 58: 180, 59: 0, 60: 0, 61: 99, 62: 63, 63: 0, 64: 0, 65: 1590, 66: 0, 67: 0, 68: 15, 69: 113, 70: 0, 71: 0, 72: 0, 73: 54, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 108, 80: 0, 81: 62, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 29, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 69, 98: 0, 99: 0, 100: 0, 101: 62, 102: 30, 103: 42, 104: 0, 105: 0, 106: 0, 107: 67, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 52, 116: 36, 117: 0, 118: 110, 119: 0, 120: 44, 121: 0, 122: 102, 123: 0, 124: 71, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 77, 132: 56, 133: 0, 134: 0, 135: 103, 136: 0, 137: 0, 138: 53}, '2021-04-01': {0: 0, 1: 553, 2: 0, 3: 0, 4: 18, 5: 0, 6: 0, 7: 0, 8: 313, 9: 1100, 10: 0, 11: 0, 12: 0, 13: 0, 14: 336, 15: 856, 16: 216, 17: 415, 18: 0, 19: 0, 20: 0, 21: 1363, 22: 148, 23: 171, 24: 0, 25: 0, 26: 260, 27: 0, 28: 0, 29: 0, 30: 0, 31: 229, 32: 0, 33: 0, 34: 286, 35: 215, 36: 0, 37: 381, 38: 0, 39: 0, 40: 171, 41: 0, 42: 261, 43: 211, 44: 0, 45: 0, 46: 0, 47: 94, 48: 167, 49: 0, 50: 171, 51: 111, 52: 0, 53: 229, 54: 0, 55: 104, 56: 0, 57: 0, 
58: 158, 59: 0, 60: 0, 61: 142, 62: 156, 63: 0, 64: 0, 65: 1152, 66: 0, 67: 0, 68: 19, 69: 160, 70: 0, 71: 0, 72: 0, 73: 50, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 146, 80: 0, 81: 25, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 69, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 49, 98: 0, 99: 0, 100: 0, 101: 22, 102: 46, 103: 48, 104: 0, 105: 0, 106: 0, 107: 60, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 24, 116: 51, 117: 0, 118: 112, 119: 0, 120: 73, 121: 0, 122: 155, 123: 0, 124: 57, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 59, 132: 62, 133: 0, 134: 0, 135: 132, 136: 0, 137: 0, 138: 70}, '2021-05-01': {0: 0, 1: 439, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 119, 9: 735, 10: 0, 11: 0, 12: 0, 13: 0, 14: 183, 15: 70, 16: 79, 17: 244, 18: 0, 19: 0, 20: 0, 21: 2842, 22: 30, 23: 76, 24: 0, 25: 0, 26: 95, 27: 0, 28: 0, 29: 0, 30: 0, 31: 38, 32: 0, 33: 0, 34: 197, 35: 114, 36: 0, 37: 140, 38: 0, 39: 0, 40: 91, 41: 0, 42: 82, 43: 83, 44: 0, 45: 0, 46: 0, 47: 35, 48: 126, 49: 0, 50: 83, 51: 101, 52: 0, 53: 94, 54: 0, 55: 100, 56: 0, 57: 0, 58: 89, 59: 0, 60: 0, 61: 94, 62: 112, 63: 0, 64: 0, 65: 1903, 66: 0, 67: 0, 68: 61, 69: 91, 70: 0, 71: 0, 72: 0, 73: 30, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 116, 80: 0, 81: 12, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 56, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 0, 98: 0, 99: 0, 100: 0, 101: 20, 102: 42, 103: 35, 104: 0, 105: 0, 106: 0, 107: 59, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 0, 116: 27, 117: 0, 118: 45, 119: 0, 120: 49, 121: 0, 122: 129, 123: 0, 124: 58, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 41, 132: 41, 133: 0, 134: 0, 135: 61, 136: 0, 137: 0, 138: 38}, '2021-06-01': {0: 0, 1: 390, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 221, 9: 816, 10: 0, 11: 0, 12: 0, 13: 0, 14: 109, 15: 255, 16: 126, 17: 161, 18: 0, 19: 0, 20: 0, 21: 959, 22: 52, 23: 119, 24: 0, 25: 0, 26: 261, 27: 0, 28: 0, 29: 0, 30: 0, 31: 142, 32: 0, 33: 0, 34: 203, 35: 42, 36: 0, 37: 133, 38: 0, 39: 0, 40: 113, 41: 0, 42: 118, 43: 62, 44: 0, 45: 0, 46: 0, 47: 48, 48: 112, 49: 0, 50: 75, 51: 105, 52: 0, 53: 107, 54: 0, 55: 102, 56: 0, 57: 0, 58: 77, 59: 0, 60: 0, 61: 81, 62: 94, 63: 0, 64: 0, 65: 764, 66: 0, 67: 0, 68: 47, 69: 116, 70: 0, 71: 0, 72: 0, 73: 19, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 148, 80: 0, 81: 20, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 46, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 33, 98: 0, 99: 0, 100: 0, 101: 39, 102: 52, 103: 47, 104: 0, 105: 0, 106: 0, 107: 56, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 62, 116: 41, 117: 0, 118: 51, 119: 0, 120: 59, 121: 0, 122: 73, 123: 0, 124: 34, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 17, 132: 42, 133: 0, 134: 0, 135: 74, 136: 0, 137: 0, 138: 58}, '2021-07-01': {0: 0, 1: 349, 2: 0, 3: 0, 4: 11, 5: 0, 6: 0, 7: 0, 8: 222, 9: 418, 10: 0, 11: 0, 12: 0, 13: 0, 14: 104, 15: 57, 16: 92, 17: 118, 18: 0, 19: 0, 20: 0, 21: 2040, 22: 80, 23: 50, 24: 0, 25: 0, 26: 147, 27: 0, 28: 0, 29: 0, 30: 0, 31: 22, 32: 0, 33: 0, 34: 117, 35: 88, 36: 0, 37: 146, 38: 0, 39: 0, 40: 65, 41: 0, 42: 117, 43: 65, 44: 0, 45: 0, 46: 0, 47: 33, 48: 36, 49: 0, 50: 51, 51: 50, 52: 0, 53: 66, 54: 0, 55: 51, 56: 0, 57: 0, 58: 100, 59: 0, 60: 0, 61: 63, 62: 55, 63: 0, 64: 0, 65: 847, 66: 0, 67: 0, 68: 32, 69: 68, 70: 0, 71: 0, 72: 0, 73: 42, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 72, 80: 0, 81: 27, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 47, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 36, 
98: 0, 99: 0, 100: 0, 101: 25, 102: 29, 103: 39, 104: 0, 105: 0, 106: 0, 107: 40, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 37, 116: 41, 117: 0, 118: 29, 119: 0, 120: 54, 121: 0, 122: 75, 123: 0, 124: 41, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 28, 133: 0, 134: 0, 135: 46, 136: 0, 137: 0, 138: 24}, '2021-08-01': {0: 0, 1: 402, 2: 0, 3: 0, 4: 14, 5: 0, 6: 0, 7: 0, 8: 138, 9: 373, 10: 0, 11: 0, 12: 0, 13: 0, 14: 133, 15: 107, 16: 69, 17: 116, 18: 0, 19: 0, 20: 0, 21: 1554, 22: 80, 23: 65, 24: 0, 25: 0, 26: 123, 27: 0, 28: 0, 29: 0, 30: 0, 31: 23, 32: 0, 33: 0, 34: 95, 35: 49, 36: 0, 37: 146, 38: 0, 39: 0, 40: 50, 41: 0, 42: 90, 43: 57, 44: 0, 45: 0, 46: 0, 47: 19, 48: 46, 49: 0, 50: 38, 51: 20, 52: 0, 53: 91, 54: 0, 55: 69, 56: 0, 57: 0, 58: 57, 59: 0, 60: 0, 61: 53, 62: 48, 63: 0, 64: 0, 65: 934, 66: 0, 67: 0, 68: 19, 69: 66, 70: 0, 71: 0, 72: 0, 73: 75, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 86, 80: 0, 81: 33, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 32, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 46, 98: 0, 99: 0, 100: 0, 101: 22, 102: 31, 103: 63, 104: 0, 105: 0, 106: 0, 107: 41, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 42, 116: 42, 117: 0, 118: 30, 119: 0, 120: 32, 121: 0, 122: 70, 123: 0, 124: 40, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 21, 133: 0, 134: 0, 135: 83, 136: 0, 137: 0, 138: 20}, '2021-09-01': {0: 0, 1: 560, 2: 55, 3: 496, 4: 11, 5: 0, 6: 0, 7: 0, 8: 77, 9: 309, 10: 45, 11: 257, 12: 0, 13: 0, 14: 87, 15: 179, 16: 61, 17: 79, 18: 65, 19: 144, 20: 307, 21: 840, 22: 52, 23: 41, 24: 108, 25: 156, 26: 113, 27: 0, 28: 30, 29: 27, 30: 0, 31: 59, 32: 0, 33: 0, 34: 66, 35: 53, 36: 70, 37: 42, 38: 0, 39: 26, 40: 38, 41: 0, 42: 50, 43: 11, 44: 209, 45: 56, 46: 52, 47: 18, 48: 47, 49: 0, 50: 58, 51: 32, 52: 0, 53: 76, 54: 0, 55: 45, 56: 0, 57: 63, 58: 95, 59: 0, 60: 0, 61: 33, 62: 45, 63: 0, 64: 96, 65: 249, 66: 0, 67: 0, 68: 0, 69: 73, 70: 0, 71: 30, 72: 0, 73: 41, 74: 0, 75: 0, 76: 37, 77: 22, 78: 0, 79: 68, 80: 18, 81: 47, 82: 0, 83: 0, 84: 0, 85: 79, 86: 0, 87: 75, 88: 40, 89: 39, 90: 35, 91: 0, 92: 0, 93: 0, 94: 40, 95: 0, 96: 0, 97: 44, 98: 30, 99: 46, 100: 0, 101: 33, 102: 40, 103: 31, 104: 0, 105: 17, 106: 15, 107: 32, 108: 15, 109: 0, 110: 58, 111: 63, 112: 0, 113: 0, 114: 0, 115: 42, 116: 35, 117: 19, 118: 55, 119: 0, 120: 25, 121: 0, 122: 47, 123: 0, 124: 37, 125: 16, 126: 24, 127: 124, 128: 67, 129: 0, 130: 0, 131: 28, 132: 20, 133: 0, 134: 0, 135: 34, 136: 0, 137: 26, 138: 28}, '2021-10-01': {0: 122, 1: 720, 2: 129, 3: 1135, 4: 11, 5: 0, 6: 0, 7: 85, 8: 122, 9: 280, 10: 100, 11: 159, 12: 0, 13: 0, 14: 87, 15: 115, 16: 40, 17: 32, 18: 236, 19: 176, 20: 322, 21: 334, 22: 113, 23: 49, 24: 133, 25: 119, 26: 136, 27: 0, 28: 74, 29: 56, 30: 38, 31: 83, 32: 0, 33: 0, 34: 65, 35: 88, 36: 75, 37: 68, 38: 52, 39: 36, 40: 44, 41: 11, 42: 40, 43: 13, 44: 198, 45: 244, 46: 130, 47: 23, 48: 44, 49: 0, 50: 62, 51: 49, 52: 0, 53: 92, 54: 0, 55: 14, 56: 0, 57: 83, 58: 58, 59: 0, 60: 0, 61: 44, 62: 42, 63: 39, 64: 37, 65: 132, 66: 0, 67: 0, 68: 49, 69: 57, 70: 0, 71: 40, 72: 112, 73: 28, 74: 102, 75: 0, 76: 56, 77: 17, 78: 22, 79: 37, 80: 48, 81: 0, 82: 14, 83: 13, 84: 48, 85: 84, 86: 0, 87: 104, 88: 81, 89: 34, 90: 49, 91: 0, 92: 0, 93: 42, 94: 101, 95: 41, 96: 11, 97: 74, 98: 35, 99: 45, 100: 73, 101: 19, 102: 38, 103: 26, 104: 0, 105: 26, 106: 26, 107: 43, 108: 93, 109: 0, 110: 74, 111: 70, 112: 35, 113: 25, 114: 0, 115: 55, 116: 28, 117: 0, 118: 58, 119: 0, 120: 26, 121: 0, 122: 13, 
123: 0, 124: 50, 125: 16, 126: 39, 127: 74, 128: 42, 129: 29, 130: 0, 131: 24, 132: 26, 133: 0, 134: 0, 135: 125, 136: 0, 137: 37, 138: 20}, '2021-11-01': {0: 1331, 1: 1810, 2: 274, 3: 899, 4: 0, 5: 0, 6: 30, 7: 606, 8: 138, 9: 1735, 10: 209, 11: 468, 12: 0, 13: 0, 14: 327, 15: 1394, 16: 73, 17: 187, 18: 1259, 19: 355, 20: 374, 21: 2079, 22: 500, 23: 168, 24: 305, 25: 80, 26: 256, 27: 0, 28: 340, 29: 143, 30: 380, 31: 273, 32: 79, 33: 0, 34: 143, 35: 137, 36: 200, 37: 336, 38: 166, 39: 235, 40: 97, 41: 202, 42: 75, 43: 130, 44: 650, 45: 675, 46: 326, 47: 46, 48: 105, 49: 0, 50: 195, 51: 135, 52: 93, 53: 229, 54: 0, 55: 93, 56: 0, 57: 188, 58: 89, 59: 46, 60: 123, 61: 101, 62: 89, 63: 64, 64: 208, 65: 325, 66: 0, 67: 0, 68: 211, 69: 90, 70: 0, 71: 111, 72: 218, 73: 42, 74: 139, 75: 16, 76: 94, 77: 148, 78: 45, 79: 92, 80: 100, 81: 16, 82: 31, 83: 123, 84: 87, 85: 142, 86: 0, 87: 444, 88: 123, 89: 105, 90: 63, 91: 0, 92: 16, 93: 149, 94: 240, 95: 114, 96: 99, 97: 128, 98: 128, 99: 104, 100: 196, 101: 32, 102: 41, 103: 55, 104: 0, 105: 67, 106: 97, 107: 56, 108: 40, 109: 14, 110: 194, 111: 290, 112: 151, 113: 154, 114: 11, 115: 105, 116: 54, 117: 30, 118: 148, 119: 0, 120: 71, 121: 0, 122: 39, 123: 0, 124: 118, 125: 207, 126: 58, 127: 131, 128: 93, 129: 30, 130: 0, 131: 90, 132: 43, 133: 0, 134: 0, 135: 40, 136: 0, 137: 58, 138: 29}, '2021-12-01': {0: 1901, 1: 2469, 2: 298, 3: 1760, 4: 14, 5: 0, 6: 573, 7: 1444, 8: 126, 9: 1568, 10: 220, 11: 497, 12: 0, 13: 71, 14: 248, 15: 1670, 16: 77, 17: 93, 18: 910, 19: 362, 20: 698, 21: 1044, 22: 651, 23: 156, 24: 208, 25: 185, 26: 314, 27: 0, 28: 356, 29: 205, 30: 570, 31: 186, 32: 25, 33: 0, 34: 117, 35: 90, 36: 385, 37: 228, 38: 410, 39: 270, 40: 63, 41: 228, 42: 50, 43: 53, 44: 450, 45: 896, 46: 431, 47: 74, 48: 62, 49: 0, 50: 678, 51: 123, 52: 204, 53: 225, 54: 100, 55: 13, 56: 88, 57: 302, 58: 81, 59: 111, 60: 141, 61: 98, 62: 57, 63: 73, 64: 334, 65: 422, 66: 49, 67: 0, 68: 600, 69: 86, 70: 55, 71: 162, 72: 138, 73: 50, 74: 296, 75: 30, 76: 153, 77: 186, 78: 68, 79: 39, 80: 173, 81: 0, 82: 276, 83: 192, 84: 66, 85: 116, 86: 89, 87: 385, 88: 209, 89: 121, 90: 68, 91: 22, 92: 52, 93: 262, 94: 261, 95: 70, 96: 85, 97: 298, 98: 170, 99: 126, 100: 145, 101: 17, 102: 53, 103: 56, 104: 0, 105: 97, 106: 114, 107: 72, 108: 42, 109: 22, 110: 211, 111: 370, 112: 175, 113: 111, 114: 27, 115: 62, 116: 104, 117: 118, 118: 248, 119: 0, 120: 58, 121: 20, 122: 52, 123: 20, 124: 97, 125: 119, 126: 107, 127: 108, 128: 79, 129: 42, 130: 0, 131: 281, 132: 83, 133: 57, 134: 61, 135: 50, 136: 50, 137: 22, 138: 37}, '2022-01-01': {0: 938, 1: 1501, 2: 377, 3: 1455, 4: 17, 5: 0, 6: 815, 7: 562, 8: 534, 9: 628, 10: 178, 11: 332, 12: 0, 13: 177, 14: 311, 15: 614, 16: 50, 17: 121, 18: 343, 19: 314, 20: 356, 21: 587, 22: 498, 23: 67, 24: 222, 25: 230, 26: 210, 27: 0, 28: 237, 29: 131, 30: 222, 31: 74, 32: 12, 33: 0, 34: 79, 35: 53, 36: 397, 37: 351, 38: 253, 39: 269, 40: 63, 41: 211, 42: 53, 43: 163, 44: 209, 45: 287, 46: 364, 47: 59, 48: 49, 49: 0, 50: 290, 51: 55, 52: 113, 53: 76, 54: 85, 55: 83, 56: 190, 57: 166, 58: 72, 59: 108, 60: 119, 61: 121, 62: 25, 63: 46, 64: 163, 65: 204, 66: 76, 67: 0, 68: 250, 69: 76, 70: 148, 71: 161, 72: 97, 73: 44, 74: 150, 75: 34, 76: 144, 77: 189, 78: 73, 79: 27, 80: 109, 81: 0, 82: 90, 83: 185, 84: 48, 85: 110, 86: 198, 87: 216, 88: 139, 89: 59, 90: 34, 91: 45, 92: 116, 93: 187, 94: 164, 95: 34, 96: 80, 97: 45, 98: 78, 99: 82, 100: 54, 101: 14, 102: 28, 103: 31, 104: 48, 105: 52, 106: 97, 107: 29, 108: 56, 109: 33, 110: 84, 111: 212, 112: 
111, 113: 128, 114: 18, 115: 81, 116: 32, 117: 115, 118: 192, 119: 0, 120: 36, 121: 194, 122: 17, 123: 55, 124: 98, 125: 104, 126: 83, 127: 101, 128: 54, 129: 36, 130: 0, 131: 156, 132: 33, 133: 104, 134: 101, 135: 31, 136: 46, 137: 66, 138: 20}, '2022-02-01': {0: 612, 1: 912, 2: 325, 3: 892, 4: 11, 5: 0, 6: 706, 7: 310, 8: 439, 9: 563, 10: 134, 11: 140, 12: 0, 13: 153, 14: 281, 15: 399, 16: 49, 17: 90, 18: 204, 19: 231, 20: 100, 21: 318, 22: 255, 23: 63, 24: 309, 25: 181, 26: 205, 27: 0, 28: 121, 29: 84, 30: 117, 31: 80, 32: 143, 33: 0, 34: 65, 35: 64, 36: 227, 37: 271, 38: 133, 39: 290, 40: 47, 41: 156, 42: 0, 43: 176, 44: 153, 45: 244, 46: 300, 47: 14, 48: 30, 49: 0, 50: 126, 51: 46, 52: 81, 53: 69, 54: 165, 55: 48, 56: 79, 57: 91, 58: 31, 59: 95, 60: 138, 61: 87, 62: 34, 63: 39, 64: 101, 65: 111, 66: 19, 67: 0, 68: 15, 69: 26, 70: 0, 71: 88, 72: 81, 73: 53, 74: 135, 75: 62, 76: 92, 77: 141, 78: 57, 79: 32, 80: 71, 81: 34, 82: 357, 83: 92, 84: 50, 85: 82, 86: 97, 87: 128, 88: 75, 89: 54, 90: 23, 91: 28, 92: 57, 93: 108, 94: 138, 95: 48, 96: 79, 97: 109, 98: 52, 99: 54, 100: 73, 101: 27, 102: 20, 103: 26, 104: 86, 105: 48, 106: 54, 107: 27, 108: 39, 109: 61, 110: 67, 111: 110, 112: 127, 113: 147, 114: 0, 115: 60, 116: 23, 117: 68, 118: 101, 119: 23, 120: 25, 121: 93, 122: 35, 123: 25, 124: 52, 125: 72, 126: 50, 127: 84, 128: 78, 129: 43, 130: 0, 131: 82, 132: 34, 133: 84, 134: 13, 135: 13, 136: 37, 137: 69, 138: 13}, '2022-03-01': {0: 573, 1: 775, 2: 267, 3: 870, 4: 19, 5: 0, 6: 494, 7: 254, 8: 402, 9: 657, 10: 180, 11: 144, 12: 0, 13: 266, 14: 240, 15: 394, 16: 106, 17: 142, 18: 216, 19: 211, 20: 113, 21: 245, 22: 152, 23: 88, 24: 225, 25: 168, 26: 177, 27: 0, 28: 92, 29: 70, 30: 98, 31: 124, 32: 103, 33: 0, 34: 85, 35: 86, 36: 189, 37: 184, 38: 108, 39: 0, 40: 69, 41: 125, 42: 26, 43: 128, 44: 119, 45: 226, 46: 251, 47: 26, 48: 58, 49: 0, 50: 109, 51: 67, 52: 70, 53: 55, 54: 157, 55: 49, 56: 51, 57: 89, 58: 43, 59: 69, 60: 136, 61: 92, 62: 79, 63: 54, 64: 59, 65: 64, 66: 35, 67: 0, 68: 239, 69: 48, 70: 101, 71: 91, 72: 53, 73: 65, 74: 147, 75: 38, 76: 70, 77: 107, 78: 41, 79: 32, 80: 51, 81: 39, 82: 130, 83: 123, 84: 44, 85: 60, 86: 177, 87: 99, 88: 75, 89: 35, 90: 21, 91: 25, 92: 77, 93: 88, 94: 86, 95: 88, 96: 52, 97: 45, 98: 42, 99: 52, 100: 121, 101: 28, 102: 22, 103: 26, 104: 104, 105: 39, 106: 48, 107: 45, 108: 42, 109: 35, 110: 74, 111: 101, 112: 101, 113: 120, 114: 22, 115: 58, 116: 23, 117: 53, 118: 70, 119: 45, 120: 30, 121: 69, 122: 44, 123: 37, 124: 33, 125: 49, 126: 49, 127: 58, 128: 55, 129: 33, 130: 0, 131: 58, 132: 30, 133: 42, 134: 43, 135: 23, 136: 31, 137: 83, 138: 22}, '2022-04-01': {0: 356, 1: 595, 2: 231, 3: 444, 4: 0, 5: 0, 6: 220, 7: 145, 8: 185, 9: 394, 10: 140, 11: 112, 12: 0, 13: 104, 14: 139, 15: 236, 16: 102, 17: 121, 18: 77, 19: 174, 20: 108, 21: 133, 22: 105, 23: 53, 24: 195, 25: 114, 26: 155, 27: 11, 28: 88, 29: 40, 30: 102, 31: 91, 32: 142, 33: 0, 34: 66, 35: 36, 36: 90, 37: 114, 38: 64, 39: 262, 40: 46, 41: 87, 42: 47, 43: 87, 44: 64, 45: 93, 46: 114, 47: 15, 48: 95, 49: 0, 50: 85, 51: 40, 52: 30, 53: 51, 54: 81, 55: 38, 56: 66, 57: 52, 58: 43, 59: 59, 60: 121, 61: 53, 62: 44, 63: 22, 64: 59, 65: 64, 66: 47, 67: 0, 68: 194, 69: 26, 70: 59, 71: 37, 72: 47, 73: 51, 74: 146, 75: 36, 76: 43, 77: 120, 78: 37, 79: 16, 80: 52, 81: 22, 82: 151, 83: 51, 84: 35, 85: 52, 86: 71, 87: 32, 88: 39, 89: 20, 90: 25, 91: 25, 92: 48, 93: 44, 94: 35, 95: 40, 96: 30, 97: 41, 98: 24, 99: 45, 100: 44, 101: 17, 102: 15, 103: 19, 104: 39, 105: 32, 106: 45, 107: 35, 108: 21, 
109: 16, 110: 34, 111: 44, 112: 46, 113: 29, 114: 20, 115: 51, 116: 17, 117: 45, 118: 52, 119: 31, 120: 29, 121: 34, 122: 21, 123: 16, 124: 26, 125: 39, 126: 22, 127: 45, 128: 48, 129: 20, 130: 0, 131: 35, 132: 18, 133: 39, 134: 22, 135: 30, 136: 71, 137: 15, 138: 11}, '2022-05-01': {0: 383, 1: 326, 2: 108, 3: 397, 4: 0, 5: 0, 6: 110, 7: 83, 8: 142, 9: 240, 10: 137, 11: 70, 12: 0, 13: 142, 14: 110, 15: 203, 16: 111, 17: 265, 18: 52, 19: 109, 20: 57, 21: 85, 22: 73, 23: 202, 24: 102, 25: 50, 26: 178, 27: 42, 28: 55, 29: 26, 30: 53, 31: 173, 32: 76, 33: 0, 34: 207, 35: 87, 36: 29, 37: 79, 38: 27, 39: 102, 40: 115, 41: 33, 42: 102, 43: 65, 44: 42, 45: 47, 46: 92, 47: 25, 48: 93, 49: 0, 50: 42, 51: 80, 52: 20, 53: 105, 54: 52, 55: 70, 56: 46, 57: 31, 58: 86, 59: 39, 60: 32, 61: 33, 62: 103, 63: 16, 64: 49, 65: 24, 66: 22, 67: 0, 68: 161, 69: 78, 70: 31, 71: 36, 72: 28, 73: 73, 74: 57, 75: 21, 76: 30, 77: 39, 78: 22, 79: 70, 80: 24, 81: 55, 82: 134, 83: 25, 84: 16, 85: 28, 86: 24, 87: 28, 88: 31, 89: 17, 90: 60, 91: 30, 92: 32, 93: 49, 94: 20, 95: 13, 96: 12, 97: 31, 98: 20, 99: 25, 100: 21, 101: 33, 102: 29, 103: 36, 104: 23, 105: 26, 106: 26, 107: 31, 108: 30, 109: 15, 110: 22, 111: 20, 112: 32, 113: 27, 114: 39, 115: 18, 116: 40, 117: 31, 118: 21, 119: 24, 120: 52, 121: 22, 122: 62, 123: 37, 124: 16, 125: 19, 126: 17, 127: 23, 128: 17, 129: 15, 130: 0, 131: 22, 132: 32, 133: 24, 134: 20, 135: 21, 136: 13, 137: 23, 138: 25}, '2022-06-01': {0: 613, 1: 1944, 2: 1826, 3: 494, 4: 0, 5: 244, 6: 928, 7: 798, 8: 219, 9: 1529, 10: 1029, 11: 526, 12: 122, 13: 195, 14: 173, 15: 1261, 16: 87, 17: 243, 18: 1179, 19: 217, 20: 464, 21: 952, 22: 353, 23: 148, 24: 166, 25: 187, 26: 134, 27: 124, 28: 321, 29: 221, 30: 193, 31: 224, 32: 75, 33: 0, 34: 277, 35: 77, 36: 253, 37: 174, 38: 343, 39: 283, 40: 73, 41: 295, 42: 108, 43: 138, 44: 102, 45: 1364, 46: 467, 47: 28, 48: 87, 49: 16, 50: 145, 51: 88, 52: 128, 53: 60, 54: 80, 55: 81, 56: 40, 57: 206, 58: 61, 59: 166, 60: 144, 61: 71, 62: 78, 63: 39, 64: 331, 65: 116, 66: 25, 67: 13, 68: 62, 69: 37, 70: 24, 71: 311, 72: 106, 73: 50, 74: 257, 75: 22, 76: 56, 77: 128, 78: 100, 79: 55, 80: 139, 81: 70, 82: 140, 83: 20, 84: 53, 85: 33, 86: 38, 87: 167, 88: 218, 89: 20, 90: 34, 91: 19, 92: 25, 93: 199, 94: 122, 95: 24, 96: 28, 97: 36, 98: 69, 99: 146, 100: 33, 101: 14, 102: 21, 103: 27, 104: 28, 105: 78, 106: 62, 107: 30, 108: 47, 109: 20, 110: 78, 111: 48, 112: 35, 113: 21, 114: 17, 115: 49, 116: 61, 117: 92, 118: 26, 119: 16, 120: 47, 121: 36, 122: 54, 123: 43, 124: 23, 125: 40, 126: 22, 127: 121, 128: 145, 129: 12, 130: 18, 131: 31, 132: 31, 133: 17, 134: 23, 135: 23, 136: 19, 137: 24, 138: 24}, '2022-07-01': {0: 349, 1: 283, 2: 163, 3: 318, 4: 67, 5: 328, 6: 121, 7: 96, 8: 205, 9: 219, 10: 89, 11: 60, 12: 153, 13: 68, 14: 135, 15: 181, 16: 53, 17: 94, 18: 65, 19: 96, 20: 67, 21: 57, 22: 67, 23: 59, 24: 134, 25: 94, 26: 78, 27: 142, 28: 33, 29: 29, 30: 45, 31: 64, 32: 65, 33: 76, 34: 81, 35: 55, 36: 44, 37: 83, 38: 15, 39: 46, 40: 84, 41: 45, 42: 56, 43: 54, 44: 50, 45: 48, 46: 90, 47: 17, 48: 56, 49: 27, 50: 66, 51: 37, 52: 34, 53: 63, 54: 58, 55: 27, 56: 45, 57: 74, 58: 51, 59: 61, 60: 80, 61: 45, 62: 65, 63: 34, 64: 27, 65: 30, 66: 18, 67: 35, 68: 47, 69: 31, 70: 24, 71: 40, 72: 18, 73: 30, 74: 44, 75: 26, 76: 31, 77: 32, 78: 29, 79: 29, 80: 45, 81: 14, 82: 54, 83: 31, 84: 37, 85: 24, 86: 32, 87: 20, 88: 40, 89: 32, 90: 22, 91: 17, 92: 30, 93: 29, 94: 20, 95: 52, 96: 34, 97: 25, 98: 26, 99: 28, 100: 72, 101: 17, 102: 15, 103: 22, 104: 28, 105: 24, 106: 28, 
107: 19, 108: 25, 109: 25, 110: 38, 111: 19, 112: 27, 113: 26, 114: 15, 115: 22, 116: 28, 117: 24, 118: 33, 119: 13, 120: 57, 121: 40, 122: 22, 123: 14, 124: 18, 125: 23, 126: 20, 127: 38, 128: 20, 129: 14, 130: 36, 131: 24, 132: 18, 133: 39, 134: 14, 135: 40, 136: 16, 137: 21, 138: 13}, '2022-08-01': {0: 857, 1: 500, 2: 362, 3: 334, 4: 308, 5: 296, 6: 289, 7: 266, 8: 244, 9: 223, 10: 206, 11: 192, 12: 180, 13: 169, 14: 160, 15: 159, 16: 140, 17: 134, 18: 134, 19: 128, 20: 127, 21: 126, 22: 123, 23: 116, 24: 112, 25: 111, 26: 108, 27: 102, 28: 99, 29: 94, 30: 94, 31: 89, 32: 88, 33: 88, 34: 87, 35: 85, 36: 83, 37: 79, 38: 78, 39: 77, 40: 77, 41: 77, 42: 76, 43: 75, 44: 75, 45: 74, 46: 72, 47: 65, 48: 65, 49: 65, 50: 64, 51: 64, 52: 64, 53: 62, 54: 62, 55: 61, 56: 61, 57: 61, 58: 60, 59: 60, 60: 58, 61: 55, 62: 54, 63: 54, 64: 54, 65: 54, 66: 53, 67: 53, 68: 52, 69: 50, 70: 49, 71: 49, 72: 49, 73: 48, 74: 48, 75: 48, 76: 47, 77: 47, 78: 46, 79: 44, 80: 44, 81: 43, 82: 43, 83: 43, 84: 42, 85: 42, 86: 41, 87: 41, 88: 41, 89: 40, 90: 39, 91: 39, 92: 39, 93: 39, 94: 38, 95: 37, 96: 37, 97: 36, 98: 36, 99: 36, 100: 36, 101: 35, 102: 35, 103: 35, 104: 35, 105: 35, 106: 35, 107: 34, 108: 34, 109: 34, 110: 32, 111: 32, 112: 32, 113: 32, 114: 31, 115: 31, 116: 30, 117: 30, 118: 30, 119: 30, 120: 29, 121: 29, 122: 28, 123: 28, 124: 28, 125: 28, 126: 28, 127: 28, 128: 28, 129: 28, 130: 28, 131: 27, 132: 27, 133: 27, 134: 27, 135: 27, 136: 27, 137: 27, 138: 26}} A: You have to instantiate the models since they are classes. The code would be, from statsforecast import StatsForecast from statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB from statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters from statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage from statsforecast.models import MSTL from statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta from statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta seasonality = 12 #Monthly data models = [ ADIDA(), CrostonClassic(), CrostonSBA(), CrostonOptimized(), IMAPA(), TSB(0.3,0.2), Theta(season_length=seasonality), OptimizedTheta(season_length=seasonality), DynamicTheta(season_length=seasonality), DynamicOptimizedTheta(season_length=seasonality), AutoARIMA(season_length=seasonality), AutoCES(season_length=seasonality), AutoTheta(season_length=seasonality), HistoricAverage(), Naive(), RandomWalkWithDrift(), SeasonalNaive(season_length=seasonality), SeasonalExponentialSmoothing(season_length=seasonality, alpha=0.2), ] fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=-1, fallback_model=SeasonalNaive(season_length=seasonality)) %time forecasts = fcst.forecast(9) forecasts.reset_index() forecasts = forecasts.round(0) Here's a colab link fixing the error: https://colab.research.google.com/drive/1vwIImCoKzGvePbgFKidauV8sXimAvO48?usp=sharing
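To make the class-vs-instance point concrete: the NameError on forecasts is a downstream symptom. Because the models list contains classes rather than instances, the %time forecasts = fcst.forecast(9) line raises before the assignment completes, so the name forecasts is never bound and the later forecasts.reset_index() fails. Below is a minimal sketch on synthetic data (the series values, id, and horizon are illustrative; the constructor and forecast signatures follow the usage shown above):
import numpy as np
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import Naive, SeasonalNaive

# Long format expected by StatsForecast: one row per (unique_id, ds) pair
df = pd.DataFrame({
    'unique_id': ['XYZ-912750'] * 18,
    'ds': pd.date_range('2021-03-01', periods=18, freq='MS'),
    'y': np.random.randint(0, 500, size=18),
}).set_index('unique_id')

# models = [Naive, SeasonalNaive]                     # classes -> raises, so 'forecasts' is never defined
models = [Naive(), SeasonalNaive(season_length=12)]   # instances -> works

fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=1)
forecasts = fcst.forecast(9)
print(forecasts.head())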
Updated StatsForecast Library shows error 'forecasts' is not defined in Python
I was trying to replicate this code for stat forecasting in python, I came across an odd error "name 'forecasts' is not defined" which is quite strange as I was able to replicate the code without any errors before. I believe this was resolved in the latest update of this library StatsForecast but I still run across to the same error. Can you please help me out here. The code I am replicating is from this : https://towardsdatascience.com/time-series-forecasting-with-statistical-models-f08dcd1d24d1 A similar question was earlier asked for the same error, and the solution was updated but this error till comes up after the new solution as well, attached is the link to the question: Error in Data frame definition while Multiple TS Stat Forecasting in Python import random from itertools import product from IPython.display import display, Markdown from multiprocessing import cpu_count import matplotlib.pyplot as plt import numpy as np import pandas as pd from statsforecast import StatsForecast from nixtlats.data.datasets.m4 import M4, M4Info from statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB from statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters from statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage from statsforecast.models import MSTL from statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta from statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta df = pd.read_excel ('C:/X/X/X/Data.xlsx',sheet_name='Transpose') df.rename(columns = {'Row Labels':'Key'}, inplace=True) df['Key'] = df['Key'].astype(str) df = pd.melt(df,id_vars='Key',value_vars=list(df.columns[1:]),var_name ='ds') df.columns = df.columns.str.replace('Key', 'unique_id') df.columns = df.columns.str.replace('value', 'y') df["ds"] = pd.to_datetime(df["ds"],format='%Y-%m-%d') df=df[["ds","unique_id","y"]] df['unique_id'] = df['unique_id'].astype('object') df = df.set_index('unique_id') df.reset_index() seasonality = 30 #Monthly data models = [ ADIDA, CrostonClassic(), CrostonSBA(), CrostonOptimized(), IMAPA, (TSB,0.3,0.2), MSTL, Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta, AutoARIMA, AutoETS, AutoCES, AutoTheta, HistoricAverage, Naive, RandomWalkWithDrift, (SeasonalNaive, seasonality), (SeasonalExponentialSmoothing, seasonality, 0.2), (SeasonalWindowAverage, seasonality, 2 * seasonality), (WindowAverage, 2 * seasonality) ] fcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=cpu_count()) %time forecasts = fcst.forecast(9) forecasts.reset_index() forecasts = forecasts.round(0) forecasts.to_excel("C:/X/X/X/Forecast_Output.xlsx",sheet_name='Sheet1') The dataset I am working with is given below: {'Row Labels': {0: 'XYZ-912750', 1: 'XYZ-461356', 2: 'XYZ-150591', 3: 'XYZ-627885', 4: 'XYZ-582638', 5: 'XYZ-631691', 6: 'XYZ-409952', 7: 'XYZ-245245', 8: 'XYZ-230662', 9: 'XYZ-533388', 10: 'XYZ-248225', 11: 'XYZ-582912', 12: 'XYZ-486079', 13: 'XYZ-867685', 14: 'XYZ-873555', 15: 'XYZ-375397', 16: 'XYZ-428066', 17: 'XYZ-774244', 18: 'XYZ-602796', 19: 'XYZ-267306', 20: 'XYZ-576156', 21: 'XYZ-775994', 22: 'XYZ-226742', 23: 'XYZ-641711', 24: 'XYZ-928543', 25: 'XYZ-217200', 26: 'XYZ-971921', 27: 'XYZ-141388', 28: 'XYZ-848360', 29: 'XYZ-864999', 30: 'XYZ-821384', 31: 'XYZ-516339', 32: 'XYZ-462488', 33: 'XYZ-140964', 34: 'XYZ-225559', 35: 
'XYZ-916534', 36: 'XYZ-389683', 37: 'XYZ-247803', 38: 'XYZ-718639', 39: 'XYZ-512944', 40: 'XYZ-727601', 41: 'XYZ-315757', 42: 'XYZ-764867', 43: 'XYZ-918344', 44: 'XYZ-430939', 45: 'XYZ-204784', 46: 'XYZ-415285', 47: 'XYZ-272089', 48: 'XYZ-812045', 49: 'XYZ-889257', 50: 'XYZ-275863', 51: 'XYZ-519930', 52: 'XYZ-102141', 53: 'XYZ-324473', 54: 'XYZ-999148', 55: 'XYZ-514915', 56: 'XYZ-932751', 57: 'XYZ-669878', 58: 'XYZ-233459', 59: 'XYZ-289984', 60: 'XYZ-150061', 61: 'XYZ-355028', 62: 'XYZ-881803', 63: 'XYZ-721426', 64: 'XYZ-522174', 65: 'XYZ-790172', 66: 'XYZ-744677', 67: 'XYZ-617017', 68: 'XYZ-982812', 69: 'XYZ-940695', 70: 'XYZ-119041', 71: 'XYZ-313844', 72: 'XYZ-868117', 73: 'XYZ-791717', 74: 'XYZ-100742', 75: 'XYZ-259687', 76: 'XYZ-688842', 77: 'XYZ-247326', 78: 'XYZ-360939', 79: 'XYZ-185017', 80: 'XYZ-244773', 81: 'XYZ-289058', 82: 'XYZ-477846', 83: 'XYZ-305072', 84: 'XYZ-828236', 85: 'XYZ-668927', 86: 'XYZ-616913', 87: 'XYZ-874876', 88: 'XYZ-371693', 89: 'XYZ-951238', 90: 'XYZ-371675', 91: 'XYZ-736997', 92: 'XYZ-922244', 93: 'XYZ-883225', 94: 'XYZ-267555', 95: 'XYZ-704013', 96: 'XYZ-874917', 97: 'XYZ-567402', 98: 'XYZ-167338', 99: 'XYZ-592671', 100: 'XYZ-130168', 101: 'XYZ-492522', 102: 'XYZ-696211', 103: 'XYZ-310469', 104: 'XYZ-973277', 105: 'XYZ-841356', 106: 'XYZ-389440', 107: 'XYZ-613876', 108: 'XYZ-662850', 109: 'XYZ-800625', 110: 'XYZ-500125', 111: 'XYZ-539949', 112: 'XYZ-576121', 113: 'XYZ-339006', 114: 'XYZ-247314', 115: 'XYZ-129049', 116: 'XYZ-980653', 117: 'XYZ-678520', 118: 'XYZ-584841', 119: 'XYZ-396755', 120: 'XYZ-409502', 121: 'XYZ-824561', 122: 'XYZ-825996', 123: 'XYZ-820540', 124: 'XYZ-264710', 125: 'XYZ-241176', 126: 'XYZ-491386', 127: 'XYZ-914132', 128: 'XYZ-496194', 129: 'XYZ-941615', 130: 'XYZ-765328', 131: 'XYZ-540602', 132: 'XYZ-222660', 133: 'XYZ-324367', 134: 'XYZ-583764', 135: 'XYZ-248478', 136: 'XYZ-379180', 137: 'XYZ-628462', 138: 'XYZ-454262'}, '2021-03-01': {0: 0, 1: 951, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 487, 9: 501, 10: 0, 11: 0, 12: 0, 13: 0, 14: 715, 15: 726, 16: 235, 17: 340, 18: 0, 19: 0, 20: 0, 21: 960, 22: 127, 23: 92, 24: 0, 25: 0, 26: 170, 27: 0, 28: 0, 29: 0, 30: 0, 31: 133, 32: 0, 33: 0, 34: 105, 35: 168, 36: 0, 37: 500, 38: 0, 39: 0, 40: 61, 41: 0, 42: 212, 43: 101, 44: 0, 45: 0, 46: 0, 47: 83, 48: 185, 49: 0, 50: 131, 51: 67, 52: 0, 53: 141, 54: 0, 55: 140, 56: 0, 57: 0, 58: 180, 59: 0, 60: 0, 61: 99, 62: 63, 63: 0, 64: 0, 65: 1590, 66: 0, 67: 0, 68: 15, 69: 113, 70: 0, 71: 0, 72: 0, 73: 54, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 108, 80: 0, 81: 62, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 29, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 69, 98: 0, 99: 0, 100: 0, 101: 62, 102: 30, 103: 42, 104: 0, 105: 0, 106: 0, 107: 67, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 52, 116: 36, 117: 0, 118: 110, 119: 0, 120: 44, 121: 0, 122: 102, 123: 0, 124: 71, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 77, 132: 56, 133: 0, 134: 0, 135: 103, 136: 0, 137: 0, 138: 53}, '2021-04-01': {0: 0, 1: 553, 2: 0, 3: 0, 4: 18, 5: 0, 6: 0, 7: 0, 8: 313, 9: 1100, 10: 0, 11: 0, 12: 0, 13: 0, 14: 336, 15: 856, 16: 216, 17: 415, 18: 0, 19: 0, 20: 0, 21: 1363, 22: 148, 23: 171, 24: 0, 25: 0, 26: 260, 27: 0, 28: 0, 29: 0, 30: 0, 31: 229, 32: 0, 33: 0, 34: 286, 35: 215, 36: 0, 37: 381, 38: 0, 39: 0, 40: 171, 41: 0, 42: 261, 43: 211, 44: 0, 45: 0, 46: 0, 47: 94, 48: 167, 49: 0, 50: 171, 51: 111, 52: 0, 53: 229, 54: 0, 55: 104, 56: 0, 57: 0, 58: 158, 59: 0, 60: 0, 61: 142, 62: 156, 63: 0, 64: 0, 65: 1152, 66: 0, 67: 0, 68: 19, 
69: 160, 70: 0, 71: 0, 72: 0, 73: 50, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 146, 80: 0, 81: 25, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 69, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 49, 98: 0, 99: 0, 100: 0, 101: 22, 102: 46, 103: 48, 104: 0, 105: 0, 106: 0, 107: 60, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 24, 116: 51, 117: 0, 118: 112, 119: 0, 120: 73, 121: 0, 122: 155, 123: 0, 124: 57, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 59, 132: 62, 133: 0, 134: 0, 135: 132, 136: 0, 137: 0, 138: 70}, '2021-05-01': {0: 0, 1: 439, 2: 0, 3: 0, 4: 13, 5: 0, 6: 0, 7: 0, 8: 119, 9: 735, 10: 0, 11: 0, 12: 0, 13: 0, 14: 183, 15: 70, 16: 79, 17: 244, 18: 0, 19: 0, 20: 0, 21: 2842, 22: 30, 23: 76, 24: 0, 25: 0, 26: 95, 27: 0, 28: 0, 29: 0, 30: 0, 31: 38, 32: 0, 33: 0, 34: 197, 35: 114, 36: 0, 37: 140, 38: 0, 39: 0, 40: 91, 41: 0, 42: 82, 43: 83, 44: 0, 45: 0, 46: 0, 47: 35, 48: 126, 49: 0, 50: 83, 51: 101, 52: 0, 53: 94, 54: 0, 55: 100, 56: 0, 57: 0, 58: 89, 59: 0, 60: 0, 61: 94, 62: 112, 63: 0, 64: 0, 65: 1903, 66: 0, 67: 0, 68: 61, 69: 91, 70: 0, 71: 0, 72: 0, 73: 30, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 116, 80: 0, 81: 12, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 56, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 0, 98: 0, 99: 0, 100: 0, 101: 20, 102: 42, 103: 35, 104: 0, 105: 0, 106: 0, 107: 59, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 0, 116: 27, 117: 0, 118: 45, 119: 0, 120: 49, 121: 0, 122: 129, 123: 0, 124: 58, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 41, 132: 41, 133: 0, 134: 0, 135: 61, 136: 0, 137: 0, 138: 38}, '2021-06-01': {0: 0, 1: 390, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 221, 9: 816, 10: 0, 11: 0, 12: 0, 13: 0, 14: 109, 15: 255, 16: 126, 17: 161, 18: 0, 19: 0, 20: 0, 21: 959, 22: 52, 23: 119, 24: 0, 25: 0, 26: 261, 27: 0, 28: 0, 29: 0, 30: 0, 31: 142, 32: 0, 33: 0, 34: 203, 35: 42, 36: 0, 37: 133, 38: 0, 39: 0, 40: 113, 41: 0, 42: 118, 43: 62, 44: 0, 45: 0, 46: 0, 47: 48, 48: 112, 49: 0, 50: 75, 51: 105, 52: 0, 53: 107, 54: 0, 55: 102, 56: 0, 57: 0, 58: 77, 59: 0, 60: 0, 61: 81, 62: 94, 63: 0, 64: 0, 65: 764, 66: 0, 67: 0, 68: 47, 69: 116, 70: 0, 71: 0, 72: 0, 73: 19, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 148, 80: 0, 81: 20, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 46, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 33, 98: 0, 99: 0, 100: 0, 101: 39, 102: 52, 103: 47, 104: 0, 105: 0, 106: 0, 107: 56, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 62, 116: 41, 117: 0, 118: 51, 119: 0, 120: 59, 121: 0, 122: 73, 123: 0, 124: 34, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 17, 132: 42, 133: 0, 134: 0, 135: 74, 136: 0, 137: 0, 138: 58}, '2021-07-01': {0: 0, 1: 349, 2: 0, 3: 0, 4: 11, 5: 0, 6: 0, 7: 0, 8: 222, 9: 418, 10: 0, 11: 0, 12: 0, 13: 0, 14: 104, 15: 57, 16: 92, 17: 118, 18: 0, 19: 0, 20: 0, 21: 2040, 22: 80, 23: 50, 24: 0, 25: 0, 26: 147, 27: 0, 28: 0, 29: 0, 30: 0, 31: 22, 32: 0, 33: 0, 34: 117, 35: 88, 36: 0, 37: 146, 38: 0, 39: 0, 40: 65, 41: 0, 42: 117, 43: 65, 44: 0, 45: 0, 46: 0, 47: 33, 48: 36, 49: 0, 50: 51, 51: 50, 52: 0, 53: 66, 54: 0, 55: 51, 56: 0, 57: 0, 58: 100, 59: 0, 60: 0, 61: 63, 62: 55, 63: 0, 64: 0, 65: 847, 66: 0, 67: 0, 68: 32, 69: 68, 70: 0, 71: 0, 72: 0, 73: 42, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 72, 80: 0, 81: 27, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 47, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 36, 98: 0, 99: 0, 100: 0, 101: 25, 102: 29, 103: 39, 104: 0, 105: 0, 106: 0, 107: 40, 108: 
0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 37, 116: 41, 117: 0, 118: 29, 119: 0, 120: 54, 121: 0, 122: 75, 123: 0, 124: 41, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 28, 133: 0, 134: 0, 135: 46, 136: 0, 137: 0, 138: 24}, '2021-08-01': {0: 0, 1: 402, 2: 0, 3: 0, 4: 14, 5: 0, 6: 0, 7: 0, 8: 138, 9: 373, 10: 0, 11: 0, 12: 0, 13: 0, 14: 133, 15: 107, 16: 69, 17: 116, 18: 0, 19: 0, 20: 0, 21: 1554, 22: 80, 23: 65, 24: 0, 25: 0, 26: 123, 27: 0, 28: 0, 29: 0, 30: 0, 31: 23, 32: 0, 33: 0, 34: 95, 35: 49, 36: 0, 37: 146, 38: 0, 39: 0, 40: 50, 41: 0, 42: 90, 43: 57, 44: 0, 45: 0, 46: 0, 47: 19, 48: 46, 49: 0, 50: 38, 51: 20, 52: 0, 53: 91, 54: 0, 55: 69, 56: 0, 57: 0, 58: 57, 59: 0, 60: 0, 61: 53, 62: 48, 63: 0, 64: 0, 65: 934, 66: 0, 67: 0, 68: 19, 69: 66, 70: 0, 71: 0, 72: 0, 73: 75, 74: 0, 75: 0, 76: 0, 77: 0, 78: 0, 79: 86, 80: 0, 81: 33, 82: 0, 83: 0, 84: 0, 85: 0, 86: 0, 87: 0, 88: 0, 89: 0, 90: 32, 91: 0, 92: 0, 93: 0, 94: 0, 95: 0, 96: 0, 97: 46, 98: 0, 99: 0, 100: 0, 101: 22, 102: 31, 103: 63, 104: 0, 105: 0, 106: 0, 107: 41, 108: 0, 109: 0, 110: 0, 111: 0, 112: 0, 113: 0, 114: 0, 115: 42, 116: 42, 117: 0, 118: 30, 119: 0, 120: 32, 121: 0, 122: 70, 123: 0, 124: 40, 125: 0, 126: 0, 127: 0, 128: 0, 129: 0, 130: 0, 131: 12, 132: 21, 133: 0, 134: 0, 135: 83, 136: 0, 137: 0, 138: 20}, '2021-09-01': {0: 0, 1: 560, 2: 55, 3: 496, 4: 11, 5: 0, 6: 0, 7: 0, 8: 77, 9: 309, 10: 45, 11: 257, 12: 0, 13: 0, 14: 87, 15: 179, 16: 61, 17: 79, 18: 65, 19: 144, 20: 307, 21: 840, 22: 52, 23: 41, 24: 108, 25: 156, 26: 113, 27: 0, 28: 30, 29: 27, 30: 0, 31: 59, 32: 0, 33: 0, 34: 66, 35: 53, 36: 70, 37: 42, 38: 0, 39: 26, 40: 38, 41: 0, 42: 50, 43: 11, 44: 209, 45: 56, 46: 52, 47: 18, 48: 47, 49: 0, 50: 58, 51: 32, 52: 0, 53: 76, 54: 0, 55: 45, 56: 0, 57: 63, 58: 95, 59: 0, 60: 0, 61: 33, 62: 45, 63: 0, 64: 96, 65: 249, 66: 0, 67: 0, 68: 0, 69: 73, 70: 0, 71: 30, 72: 0, 73: 41, 74: 0, 75: 0, 76: 37, 77: 22, 78: 0, 79: 68, 80: 18, 81: 47, 82: 0, 83: 0, 84: 0, 85: 79, 86: 0, 87: 75, 88: 40, 89: 39, 90: 35, 91: 0, 92: 0, 93: 0, 94: 40, 95: 0, 96: 0, 97: 44, 98: 30, 99: 46, 100: 0, 101: 33, 102: 40, 103: 31, 104: 0, 105: 17, 106: 15, 107: 32, 108: 15, 109: 0, 110: 58, 111: 63, 112: 0, 113: 0, 114: 0, 115: 42, 116: 35, 117: 19, 118: 55, 119: 0, 120: 25, 121: 0, 122: 47, 123: 0, 124: 37, 125: 16, 126: 24, 127: 124, 128: 67, 129: 0, 130: 0, 131: 28, 132: 20, 133: 0, 134: 0, 135: 34, 136: 0, 137: 26, 138: 28}, '2021-10-01': {0: 122, 1: 720, 2: 129, 3: 1135, 4: 11, 5: 0, 6: 0, 7: 85, 8: 122, 9: 280, 10: 100, 11: 159, 12: 0, 13: 0, 14: 87, 15: 115, 16: 40, 17: 32, 18: 236, 19: 176, 20: 322, 21: 334, 22: 113, 23: 49, 24: 133, 25: 119, 26: 136, 27: 0, 28: 74, 29: 56, 30: 38, 31: 83, 32: 0, 33: 0, 34: 65, 35: 88, 36: 75, 37: 68, 38: 52, 39: 36, 40: 44, 41: 11, 42: 40, 43: 13, 44: 198, 45: 244, 46: 130, 47: 23, 48: 44, 49: 0, 50: 62, 51: 49, 52: 0, 53: 92, 54: 0, 55: 14, 56: 0, 57: 83, 58: 58, 59: 0, 60: 0, 61: 44, 62: 42, 63: 39, 64: 37, 65: 132, 66: 0, 67: 0, 68: 49, 69: 57, 70: 0, 71: 40, 72: 112, 73: 28, 74: 102, 75: 0, 76: 56, 77: 17, 78: 22, 79: 37, 80: 48, 81: 0, 82: 14, 83: 13, 84: 48, 85: 84, 86: 0, 87: 104, 88: 81, 89: 34, 90: 49, 91: 0, 92: 0, 93: 42, 94: 101, 95: 41, 96: 11, 97: 74, 98: 35, 99: 45, 100: 73, 101: 19, 102: 38, 103: 26, 104: 0, 105: 26, 106: 26, 107: 43, 108: 93, 109: 0, 110: 74, 111: 70, 112: 35, 113: 25, 114: 0, 115: 55, 116: 28, 117: 0, 118: 58, 119: 0, 120: 26, 121: 0, 122: 13, 123: 0, 124: 50, 125: 16, 126: 39, 127: 74, 128: 42, 129: 29, 130: 0, 131: 24, 132: 26, 
133: 0, 134: 0, 135: 125, 136: 0, 137: 37, 138: 20}, '2021-11-01': {0: 1331, 1: 1810, 2: 274, 3: 899, 4: 0, 5: 0, 6: 30, 7: 606, 8: 138, 9: 1735, 10: 209, 11: 468, 12: 0, 13: 0, 14: 327, 15: 1394, 16: 73, 17: 187, 18: 1259, 19: 355, 20: 374, 21: 2079, 22: 500, 23: 168, 24: 305, 25: 80, 26: 256, 27: 0, 28: 340, 29: 143, 30: 380, 31: 273, 32: 79, 33: 0, 34: 143, 35: 137, 36: 200, 37: 336, 38: 166, 39: 235, 40: 97, 41: 202, 42: 75, 43: 130, 44: 650, 45: 675, 46: 326, 47: 46, 48: 105, 49: 0, 50: 195, 51: 135, 52: 93, 53: 229, 54: 0, 55: 93, 56: 0, 57: 188, 58: 89, 59: 46, 60: 123, 61: 101, 62: 89, 63: 64, 64: 208, 65: 325, 66: 0, 67: 0, 68: 211, 69: 90, 70: 0, 71: 111, 72: 218, 73: 42, 74: 139, 75: 16, 76: 94, 77: 148, 78: 45, 79: 92, 80: 100, 81: 16, 82: 31, 83: 123, 84: 87, 85: 142, 86: 0, 87: 444, 88: 123, 89: 105, 90: 63, 91: 0, 92: 16, 93: 149, 94: 240, 95: 114, 96: 99, 97: 128, 98: 128, 99: 104, 100: 196, 101: 32, 102: 41, 103: 55, 104: 0, 105: 67, 106: 97, 107: 56, 108: 40, 109: 14, 110: 194, 111: 290, 112: 151, 113: 154, 114: 11, 115: 105, 116: 54, 117: 30, 118: 148, 119: 0, 120: 71, 121: 0, 122: 39, 123: 0, 124: 118, 125: 207, 126: 58, 127: 131, 128: 93, 129: 30, 130: 0, 131: 90, 132: 43, 133: 0, 134: 0, 135: 40, 136: 0, 137: 58, 138: 29}, '2021-12-01': {0: 1901, 1: 2469, 2: 298, 3: 1760, 4: 14, 5: 0, 6: 573, 7: 1444, 8: 126, 9: 1568, 10: 220, 11: 497, 12: 0, 13: 71, 14: 248, 15: 1670, 16: 77, 17: 93, 18: 910, 19: 362, 20: 698, 21: 1044, 22: 651, 23: 156, 24: 208, 25: 185, 26: 314, 27: 0, 28: 356, 29: 205, 30: 570, 31: 186, 32: 25, 33: 0, 34: 117, 35: 90, 36: 385, 37: 228, 38: 410, 39: 270, 40: 63, 41: 228, 42: 50, 43: 53, 44: 450, 45: 896, 46: 431, 47: 74, 48: 62, 49: 0, 50: 678, 51: 123, 52: 204, 53: 225, 54: 100, 55: 13, 56: 88, 57: 302, 58: 81, 59: 111, 60: 141, 61: 98, 62: 57, 63: 73, 64: 334, 65: 422, 66: 49, 67: 0, 68: 600, 69: 86, 70: 55, 71: 162, 72: 138, 73: 50, 74: 296, 75: 30, 76: 153, 77: 186, 78: 68, 79: 39, 80: 173, 81: 0, 82: 276, 83: 192, 84: 66, 85: 116, 86: 89, 87: 385, 88: 209, 89: 121, 90: 68, 91: 22, 92: 52, 93: 262, 94: 261, 95: 70, 96: 85, 97: 298, 98: 170, 99: 126, 100: 145, 101: 17, 102: 53, 103: 56, 104: 0, 105: 97, 106: 114, 107: 72, 108: 42, 109: 22, 110: 211, 111: 370, 112: 175, 113: 111, 114: 27, 115: 62, 116: 104, 117: 118, 118: 248, 119: 0, 120: 58, 121: 20, 122: 52, 123: 20, 124: 97, 125: 119, 126: 107, 127: 108, 128: 79, 129: 42, 130: 0, 131: 281, 132: 83, 133: 57, 134: 61, 135: 50, 136: 50, 137: 22, 138: 37}, '2022-01-01': {0: 938, 1: 1501, 2: 377, 3: 1455, 4: 17, 5: 0, 6: 815, 7: 562, 8: 534, 9: 628, 10: 178, 11: 332, 12: 0, 13: 177, 14: 311, 15: 614, 16: 50, 17: 121, 18: 343, 19: 314, 20: 356, 21: 587, 22: 498, 23: 67, 24: 222, 25: 230, 26: 210, 27: 0, 28: 237, 29: 131, 30: 222, 31: 74, 32: 12, 33: 0, 34: 79, 35: 53, 36: 397, 37: 351, 38: 253, 39: 269, 40: 63, 41: 211, 42: 53, 43: 163, 44: 209, 45: 287, 46: 364, 47: 59, 48: 49, 49: 0, 50: 290, 51: 55, 52: 113, 53: 76, 54: 85, 55: 83, 56: 190, 57: 166, 58: 72, 59: 108, 60: 119, 61: 121, 62: 25, 63: 46, 64: 163, 65: 204, 66: 76, 67: 0, 68: 250, 69: 76, 70: 148, 71: 161, 72: 97, 73: 44, 74: 150, 75: 34, 76: 144, 77: 189, 78: 73, 79: 27, 80: 109, 81: 0, 82: 90, 83: 185, 84: 48, 85: 110, 86: 198, 87: 216, 88: 139, 89: 59, 90: 34, 91: 45, 92: 116, 93: 187, 94: 164, 95: 34, 96: 80, 97: 45, 98: 78, 99: 82, 100: 54, 101: 14, 102: 28, 103: 31, 104: 48, 105: 52, 106: 97, 107: 29, 108: 56, 109: 33, 110: 84, 111: 212, 112: 111, 113: 128, 114: 18, 115: 81, 116: 32, 117: 115, 118: 192, 119: 0, 120: 36, 121: 194, 
122: 17, 123: 55, 124: 98, 125: 104, 126: 83, 127: 101, 128: 54, 129: 36, 130: 0, 131: 156, 132: 33, 133: 104, 134: 101, 135: 31, 136: 46, 137: 66, 138: 20}, '2022-02-01': {0: 612, 1: 912, 2: 325, 3: 892, 4: 11, 5: 0, 6: 706, 7: 310, 8: 439, 9: 563, 10: 134, 11: 140, 12: 0, 13: 153, 14: 281, 15: 399, 16: 49, 17: 90, 18: 204, 19: 231, 20: 100, 21: 318, 22: 255, 23: 63, 24: 309, 25: 181, 26: 205, 27: 0, 28: 121, 29: 84, 30: 117, 31: 80, 32: 143, 33: 0, 34: 65, 35: 64, 36: 227, 37: 271, 38: 133, 39: 290, 40: 47, 41: 156, 42: 0, 43: 176, 44: 153, 45: 244, 46: 300, 47: 14, 48: 30, 49: 0, 50: 126, 51: 46, 52: 81, 53: 69, 54: 165, 55: 48, 56: 79, 57: 91, 58: 31, 59: 95, 60: 138, 61: 87, 62: 34, 63: 39, 64: 101, 65: 111, 66: 19, 67: 0, 68: 15, 69: 26, 70: 0, 71: 88, 72: 81, 73: 53, 74: 135, 75: 62, 76: 92, 77: 141, 78: 57, 79: 32, 80: 71, 81: 34, 82: 357, 83: 92, 84: 50, 85: 82, 86: 97, 87: 128, 88: 75, 89: 54, 90: 23, 91: 28, 92: 57, 93: 108, 94: 138, 95: 48, 96: 79, 97: 109, 98: 52, 99: 54, 100: 73, 101: 27, 102: 20, 103: 26, 104: 86, 105: 48, 106: 54, 107: 27, 108: 39, 109: 61, 110: 67, 111: 110, 112: 127, 113: 147, 114: 0, 115: 60, 116: 23, 117: 68, 118: 101, 119: 23, 120: 25, 121: 93, 122: 35, 123: 25, 124: 52, 125: 72, 126: 50, 127: 84, 128: 78, 129: 43, 130: 0, 131: 82, 132: 34, 133: 84, 134: 13, 135: 13, 136: 37, 137: 69, 138: 13}, '2022-03-01': {0: 573, 1: 775, 2: 267, 3: 870, 4: 19, 5: 0, 6: 494, 7: 254, 8: 402, 9: 657, 10: 180, 11: 144, 12: 0, 13: 266, 14: 240, 15: 394, 16: 106, 17: 142, 18: 216, 19: 211, 20: 113, 21: 245, 22: 152, 23: 88, 24: 225, 25: 168, 26: 177, 27: 0, 28: 92, 29: 70, 30: 98, 31: 124, 32: 103, 33: 0, 34: 85, 35: 86, 36: 189, 37: 184, 38: 108, 39: 0, 40: 69, 41: 125, 42: 26, 43: 128, 44: 119, 45: 226, 46: 251, 47: 26, 48: 58, 49: 0, 50: 109, 51: 67, 52: 70, 53: 55, 54: 157, 55: 49, 56: 51, 57: 89, 58: 43, 59: 69, 60: 136, 61: 92, 62: 79, 63: 54, 64: 59, 65: 64, 66: 35, 67: 0, 68: 239, 69: 48, 70: 101, 71: 91, 72: 53, 73: 65, 74: 147, 75: 38, 76: 70, 77: 107, 78: 41, 79: 32, 80: 51, 81: 39, 82: 130, 83: 123, 84: 44, 85: 60, 86: 177, 87: 99, 88: 75, 89: 35, 90: 21, 91: 25, 92: 77, 93: 88, 94: 86, 95: 88, 96: 52, 97: 45, 98: 42, 99: 52, 100: 121, 101: 28, 102: 22, 103: 26, 104: 104, 105: 39, 106: 48, 107: 45, 108: 42, 109: 35, 110: 74, 111: 101, 112: 101, 113: 120, 114: 22, 115: 58, 116: 23, 117: 53, 118: 70, 119: 45, 120: 30, 121: 69, 122: 44, 123: 37, 124: 33, 125: 49, 126: 49, 127: 58, 128: 55, 129: 33, 130: 0, 131: 58, 132: 30, 133: 42, 134: 43, 135: 23, 136: 31, 137: 83, 138: 22}, '2022-04-01': {0: 356, 1: 595, 2: 231, 3: 444, 4: 0, 5: 0, 6: 220, 7: 145, 8: 185, 9: 394, 10: 140, 11: 112, 12: 0, 13: 104, 14: 139, 15: 236, 16: 102, 17: 121, 18: 77, 19: 174, 20: 108, 21: 133, 22: 105, 23: 53, 24: 195, 25: 114, 26: 155, 27: 11, 28: 88, 29: 40, 30: 102, 31: 91, 32: 142, 33: 0, 34: 66, 35: 36, 36: 90, 37: 114, 38: 64, 39: 262, 40: 46, 41: 87, 42: 47, 43: 87, 44: 64, 45: 93, 46: 114, 47: 15, 48: 95, 49: 0, 50: 85, 51: 40, 52: 30, 53: 51, 54: 81, 55: 38, 56: 66, 57: 52, 58: 43, 59: 59, 60: 121, 61: 53, 62: 44, 63: 22, 64: 59, 65: 64, 66: 47, 67: 0, 68: 194, 69: 26, 70: 59, 71: 37, 72: 47, 73: 51, 74: 146, 75: 36, 76: 43, 77: 120, 78: 37, 79: 16, 80: 52, 81: 22, 82: 151, 83: 51, 84: 35, 85: 52, 86: 71, 87: 32, 88: 39, 89: 20, 90: 25, 91: 25, 92: 48, 93: 44, 94: 35, 95: 40, 96: 30, 97: 41, 98: 24, 99: 45, 100: 44, 101: 17, 102: 15, 103: 19, 104: 39, 105: 32, 106: 45, 107: 35, 108: 21, 109: 16, 110: 34, 111: 44, 112: 46, 113: 29, 114: 20, 115: 51, 116: 17, 117: 45, 118: 52, 
119: 31, 120: 29, 121: 34, 122: 21, 123: 16, 124: 26, 125: 39, 126: 22, 127: 45, 128: 48, 129: 20, 130: 0, 131: 35, 132: 18, 133: 39, 134: 22, 135: 30, 136: 71, 137: 15, 138: 11}, '2022-05-01': {0: 383, 1: 326, 2: 108, 3: 397, 4: 0, 5: 0, 6: 110, 7: 83, 8: 142, 9: 240, 10: 137, 11: 70, 12: 0, 13: 142, 14: 110, 15: 203, 16: 111, 17: 265, 18: 52, 19: 109, 20: 57, 21: 85, 22: 73, 23: 202, 24: 102, 25: 50, 26: 178, 27: 42, 28: 55, 29: 26, 30: 53, 31: 173, 32: 76, 33: 0, 34: 207, 35: 87, 36: 29, 37: 79, 38: 27, 39: 102, 40: 115, 41: 33, 42: 102, 43: 65, 44: 42, 45: 47, 46: 92, 47: 25, 48: 93, 49: 0, 50: 42, 51: 80, 52: 20, 53: 105, 54: 52, 55: 70, 56: 46, 57: 31, 58: 86, 59: 39, 60: 32, 61: 33, 62: 103, 63: 16, 64: 49, 65: 24, 66: 22, 67: 0, 68: 161, 69: 78, 70: 31, 71: 36, 72: 28, 73: 73, 74: 57, 75: 21, 76: 30, 77: 39, 78: 22, 79: 70, 80: 24, 81: 55, 82: 134, 83: 25, 84: 16, 85: 28, 86: 24, 87: 28, 88: 31, 89: 17, 90: 60, 91: 30, 92: 32, 93: 49, 94: 20, 95: 13, 96: 12, 97: 31, 98: 20, 99: 25, 100: 21, 101: 33, 102: 29, 103: 36, 104: 23, 105: 26, 106: 26, 107: 31, 108: 30, 109: 15, 110: 22, 111: 20, 112: 32, 113: 27, 114: 39, 115: 18, 116: 40, 117: 31, 118: 21, 119: 24, 120: 52, 121: 22, 122: 62, 123: 37, 124: 16, 125: 19, 126: 17, 127: 23, 128: 17, 129: 15, 130: 0, 131: 22, 132: 32, 133: 24, 134: 20, 135: 21, 136: 13, 137: 23, 138: 25}, '2022-06-01': {0: 613, 1: 1944, 2: 1826, 3: 494, 4: 0, 5: 244, 6: 928, 7: 798, 8: 219, 9: 1529, 10: 1029, 11: 526, 12: 122, 13: 195, 14: 173, 15: 1261, 16: 87, 17: 243, 18: 1179, 19: 217, 20: 464, 21: 952, 22: 353, 23: 148, 24: 166, 25: 187, 26: 134, 27: 124, 28: 321, 29: 221, 30: 193, 31: 224, 32: 75, 33: 0, 34: 277, 35: 77, 36: 253, 37: 174, 38: 343, 39: 283, 40: 73, 41: 295, 42: 108, 43: 138, 44: 102, 45: 1364, 46: 467, 47: 28, 48: 87, 49: 16, 50: 145, 51: 88, 52: 128, 53: 60, 54: 80, 55: 81, 56: 40, 57: 206, 58: 61, 59: 166, 60: 144, 61: 71, 62: 78, 63: 39, 64: 331, 65: 116, 66: 25, 67: 13, 68: 62, 69: 37, 70: 24, 71: 311, 72: 106, 73: 50, 74: 257, 75: 22, 76: 56, 77: 128, 78: 100, 79: 55, 80: 139, 81: 70, 82: 140, 83: 20, 84: 53, 85: 33, 86: 38, 87: 167, 88: 218, 89: 20, 90: 34, 91: 19, 92: 25, 93: 199, 94: 122, 95: 24, 96: 28, 97: 36, 98: 69, 99: 146, 100: 33, 101: 14, 102: 21, 103: 27, 104: 28, 105: 78, 106: 62, 107: 30, 108: 47, 109: 20, 110: 78, 111: 48, 112: 35, 113: 21, 114: 17, 115: 49, 116: 61, 117: 92, 118: 26, 119: 16, 120: 47, 121: 36, 122: 54, 123: 43, 124: 23, 125: 40, 126: 22, 127: 121, 128: 145, 129: 12, 130: 18, 131: 31, 132: 31, 133: 17, 134: 23, 135: 23, 136: 19, 137: 24, 138: 24}, '2022-07-01': {0: 349, 1: 283, 2: 163, 3: 318, 4: 67, 5: 328, 6: 121, 7: 96, 8: 205, 9: 219, 10: 89, 11: 60, 12: 153, 13: 68, 14: 135, 15: 181, 16: 53, 17: 94, 18: 65, 19: 96, 20: 67, 21: 57, 22: 67, 23: 59, 24: 134, 25: 94, 26: 78, 27: 142, 28: 33, 29: 29, 30: 45, 31: 64, 32: 65, 33: 76, 34: 81, 35: 55, 36: 44, 37: 83, 38: 15, 39: 46, 40: 84, 41: 45, 42: 56, 43: 54, 44: 50, 45: 48, 46: 90, 47: 17, 48: 56, 49: 27, 50: 66, 51: 37, 52: 34, 53: 63, 54: 58, 55: 27, 56: 45, 57: 74, 58: 51, 59: 61, 60: 80, 61: 45, 62: 65, 63: 34, 64: 27, 65: 30, 66: 18, 67: 35, 68: 47, 69: 31, 70: 24, 71: 40, 72: 18, 73: 30, 74: 44, 75: 26, 76: 31, 77: 32, 78: 29, 79: 29, 80: 45, 81: 14, 82: 54, 83: 31, 84: 37, 85: 24, 86: 32, 87: 20, 88: 40, 89: 32, 90: 22, 91: 17, 92: 30, 93: 29, 94: 20, 95: 52, 96: 34, 97: 25, 98: 26, 99: 28, 100: 72, 101: 17, 102: 15, 103: 22, 104: 28, 105: 24, 106: 28, 107: 19, 108: 25, 109: 25, 110: 38, 111: 19, 112: 27, 113: 26, 114: 15, 115: 22, 116: 28, 
117: 24, 118: 33, 119: 13, 120: 57, 121: 40, 122: 22, 123: 14, 124: 18, 125: 23, 126: 20, 127: 38, 128: 20, 129: 14, 130: 36, 131: 24, 132: 18, 133: 39, 134: 14, 135: 40, 136: 16, 137: 21, 138: 13}, '2022-08-01': {0: 857, 1: 500, 2: 362, 3: 334, 4: 308, 5: 296, 6: 289, 7: 266, 8: 244, 9: 223, 10: 206, 11: 192, 12: 180, 13: 169, 14: 160, 15: 159, 16: 140, 17: 134, 18: 134, 19: 128, 20: 127, 21: 126, 22: 123, 23: 116, 24: 112, 25: 111, 26: 108, 27: 102, 28: 99, 29: 94, 30: 94, 31: 89, 32: 88, 33: 88, 34: 87, 35: 85, 36: 83, 37: 79, 38: 78, 39: 77, 40: 77, 41: 77, 42: 76, 43: 75, 44: 75, 45: 74, 46: 72, 47: 65, 48: 65, 49: 65, 50: 64, 51: 64, 52: 64, 53: 62, 54: 62, 55: 61, 56: 61, 57: 61, 58: 60, 59: 60, 60: 58, 61: 55, 62: 54, 63: 54, 64: 54, 65: 54, 66: 53, 67: 53, 68: 52, 69: 50, 70: 49, 71: 49, 72: 49, 73: 48, 74: 48, 75: 48, 76: 47, 77: 47, 78: 46, 79: 44, 80: 44, 81: 43, 82: 43, 83: 43, 84: 42, 85: 42, 86: 41, 87: 41, 88: 41, 89: 40, 90: 39, 91: 39, 92: 39, 93: 39, 94: 38, 95: 37, 96: 37, 97: 36, 98: 36, 99: 36, 100: 36, 101: 35, 102: 35, 103: 35, 104: 35, 105: 35, 106: 35, 107: 34, 108: 34, 109: 34, 110: 32, 111: 32, 112: 32, 113: 32, 114: 31, 115: 31, 116: 30, 117: 30, 118: 30, 119: 30, 120: 29, 121: 29, 122: 28, 123: 28, 124: 28, 125: 28, 126: 28, 127: 28, 128: 28, 129: 28, 130: 28, 131: 27, 132: 27, 133: 27, 134: 27, 135: 27, 136: 27, 137: 27, 138: 26}}
[ "You have to instantiate the models since they are classes.\nThe code would be,\nfrom statsforecast import StatsForecast\n\nfrom statsforecast.models import CrostonClassic, CrostonSBA, CrostonOptimized, ADIDA, IMAPA, TSB\nfrom statsforecast.models import SimpleExponentialSmoothing, SimpleExponentialSmoothingOptimized, SeasonalExponentialSmoothing, SeasonalExponentialSmoothingOptimized, Holt, HoltWinters\nfrom statsforecast.models import HistoricAverage, Naive, RandomWalkWithDrift, SeasonalNaive, WindowAverage, SeasonalWindowAverage\nfrom statsforecast.models import MSTL\nfrom statsforecast.models import Theta, OptimizedTheta, DynamicTheta, DynamicOptimizedTheta\nfrom statsforecast.models import AutoARIMA, AutoETS, AutoCES, AutoTheta\n\nseasonality = 12 #Monthly data\n\nmodels = [\n ADIDA(),\n CrostonClassic(),\n CrostonSBA(),\n CrostonOptimized(),\n IMAPA(),\n TSB(0.3,0.2),\n Theta(season_length=seasonality),\n OptimizedTheta(season_length=seasonality),\n DynamicTheta(season_length=seasonality),\n DynamicOptimizedTheta(season_length=seasonality),\n AutoARIMA(season_length=seasonality),\n AutoCES(season_length=seasonality),\n AutoTheta(season_length=seasonality),\n HistoricAverage(),\n Naive(),\n RandomWalkWithDrift(),\n SeasonalNaive(season_length=seasonality),\n SeasonalExponentialSmoothing(season_length=seasonality, alpha=0.2),\n]\n\nfcst = StatsForecast(df=df, models=models, freq='MS', n_jobs=-1, \n fallback_model=SeasonalNaive(season_length=seasonality))\n%time forecasts = fcst.forecast(9)\nforecasts.reset_index()\n\nforecasts = forecasts.round(0)\n\nHere's a colab link fixing the error: https://colab.research.google.com/drive/1vwIImCoKzGvePbgFKidauV8sXimAvO48?usp=sharing\n" ]
[ 0 ]
[]
[]
[ "forecasting", "python", "python_3.x", "time_series" ]
stackoverflow_0074657616_forecasting_python_python_3.x_time_series.txt
Q: Python monkeypatch.setattr() with pytest fixture at module scope First of all, the relevant portion of my project directory looks like: └── my_package ├── my_subpackage │ ├── my_module.py | └── other_module.py └── tests └── my_subpackage └── unit_test.py I am writing some tests in unit_test.py that require mocking of an external resource at the module level. I would like to use a pytest fixture with module level scope and pytest monkeypatch to acomplish this. Here is a snippet of what I have tried in unit_test.py: import unittest.mock as mock import pytest from my_package.my_subpackage.my_module import MyClass @pytest.fixture(scope='function') def external_access(monkeypatch): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeypatch.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something) def test_get_something(external_access): instance = MyClass() instance.get_something() assert instance.data == 'Mock was used.' Everything works just fine. But when I try to change line 8 from @pytest.fixture(scope='function') to @pytest.fixture(scope='module'), I get the following error. ScopeMismatch: You tried to access the 'function' scoped fixture 'monkeypatch' with a 'module' scoped request object, involved factories my_package\tests\unit_test.py:7: def external_access(monkeypatch) ..\..\Anaconda3\envs\py37\lib\site-packages\_pytest\monkeypatch.py:20: def monkeypatch() Does anyone know how to monkeypatch with module level scope? In case anyone wants to know, this is what the two modules look like as well. my_module.py from my_package.my_subpackage.other_module import ExternalAccess class MyClass(object): def __init__(self): self.external_access = ExternalAccess() self.data = None def get_something(self): self.data = self.external_access.get_something() other_module.py class ExternalAccess(object): def get_something(self): return 'Call to external resource.' A: I found this issue which guided the way. I needed to make a few changes to the solution for module level scope. unit_test.py now looks like this: import unittest.mock as mock import pytest from my_package.my_subpackage.my_module import MyClass @pytest.fixture(scope='module') def monkeymodule(): from _pytest.monkeypatch import MonkeyPatch mpatch = MonkeyPatch() yield mpatch mpatch.undo() @pytest.fixture(scope='module') def external_access(monkeymodule): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeymodule.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something) def test_get_something(external_access): instance = MyClass() instance.get_something() assert instance.data == 'Mock was used.' A: This has gotten simpler as of pytest 6.2, thanks to the pytest.MonkeyPatch class and context-manager (https://docs.pytest.org/en/6.2.x/reference.html#pytest.MonkeyPatch). Building off Rich's answer, the monkeymodule fixture can now be written as follows: @pytest.fixture(scope='module') def monkeymodule(): with pytest.MonkeyPatch.context() as mp: yield mp @pytest.fixture(scope='function') def external_access(monkeymodule): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeymodule.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something)
Python monkeypatch.setattr() with pytest fixture at module scope
First of all, the relevant portion of my project directory looks like: └── my_package ├── my_subpackage │ ├── my_module.py | └── other_module.py └── tests └── my_subpackage └── unit_test.py I am writing some tests in unit_test.py that require mocking of an external resource at the module level. I would like to use a pytest fixture with module level scope and pytest monkeypatch to acomplish this. Here is a snippet of what I have tried in unit_test.py: import unittest.mock as mock import pytest from my_package.my_subpackage.my_module import MyClass @pytest.fixture(scope='function') def external_access(monkeypatch): external_access = mock.MagicMock() external_access.get_something = mock.MagicMock( return_value='Mock was used.') monkeypatch.setattr( 'my_package.my_subpackage.my_module.ExternalAccess.get_something', external_access.get_something) def test_get_something(external_access): instance = MyClass() instance.get_something() assert instance.data == 'Mock was used.' Everything works just fine. But when I try to change line 8 from @pytest.fixture(scope='function') to @pytest.fixture(scope='module'), I get the following error. ScopeMismatch: You tried to access the 'function' scoped fixture 'monkeypatch' with a 'module' scoped request object, involved factories my_package\tests\unit_test.py:7: def external_access(monkeypatch) ..\..\Anaconda3\envs\py37\lib\site-packages\_pytest\monkeypatch.py:20: def monkeypatch() Does anyone know how to monkeypatch with module level scope? In case anyone wants to know, this is what the two modules look like as well. my_module.py from my_package.my_subpackage.other_module import ExternalAccess class MyClass(object): def __init__(self): self.external_access = ExternalAccess() self.data = None def get_something(self): self.data = self.external_access.get_something() other_module.py class ExternalAccess(object): def get_something(self): return 'Call to external resource.'
[ "I found this issue which guided the way. I needed to make a few changes to the solution for module level scope. unit_test.py now looks like this:\nimport unittest.mock as mock\n\nimport pytest\n\nfrom my_package.my_subpackage.my_module import MyClass\n\n\[email protected](scope='module')\ndef monkeymodule():\n from _pytest.monkeypatch import MonkeyPatch\n mpatch = MonkeyPatch()\n yield mpatch\n mpatch.undo()\n\[email protected](scope='module')\ndef external_access(monkeymodule):\n external_access = mock.MagicMock()\n external_access.get_something = mock.MagicMock(\n return_value='Mock was used.')\n monkeymodule.setattr(\n 'my_package.my_subpackage.my_module.ExternalAccess.get_something',\n external_access.get_something)\n\n\ndef test_get_something(external_access):\n instance = MyClass()\n instance.get_something()\n assert instance.data == 'Mock was used.'\n\n", "This has gotten simpler as of pytest 6.2, thanks to the pytest.MonkeyPatch class and context-manager (https://docs.pytest.org/en/6.2.x/reference.html#pytest.MonkeyPatch). Building off Rich's answer, the monkeymodule fixture can now be written as follows:\[email protected](scope='module')\ndef monkeymodule():\n with pytest.MonkeyPatch.context() as mp:\n yield mp\n\[email protected](scope='function')\ndef external_access(monkeymodule):\n external_access = mock.MagicMock()\n external_access.get_something = mock.MagicMock(\n return_value='Mock was used.')\n monkeymodule.setattr(\n 'my_package.my_subpackage.my_module.ExternalAccess.get_something',\n external_access.get_something)\n\n" ]
[ 17, 0 ]
[]
[]
[ "fixtures", "pytest", "python", "scope" ]
stackoverflow_0053963822_fixtures_pytest_python_scope.txt
Q: How do I implement recursion in this Python program def recurse( aList ): matches = [ match for match in action if "A" in match ] uses = " ".join(matches) return f"Answer: { aList.index( uses )" This is the non-recursive method. I just couldn't figure out how to implement recursion with regard to lists. The output should be Answer: n uses. Can anybody help? A: Recursion is a bad fit for this problem in Python, because lists aren't really recursive data structures. But you could write the following: def recurse(aList): if not aList: return 0 return ("A" in aList[0]) + recurse(aList[1:]) Nothing in an empty list, by definition, contains "A". Otherwise, determine whether "A" is in the first element of the list, and add 1 or 0 as appropriate (remember, bools are ints in Python) to the number of matches in the rest of the list. The recursive function should only deal with the count itself; let the caller of the recursive function put the count into a string: print(f"Answer: {recurse(aList)} uses")
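A quick usage sketch for the answer's recursive counter (the sample list below is a made-up illustration, not from the original post):

def recurse(aList):
    if not aList:
        return 0
    # bools are ints in Python, so a match adds 1 and a miss adds 0
    return ("A" in aList[0]) + recurse(aList[1:])

items = ["Apple", "banana", "Acorn"]  # hypothetical input
print(f"Answer: {recurse(items)} uses")  # -> Answer: 2 uses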
How do I implement recursion in this Python program
def recurse( aList ): matches = [ match for match in action if "A" in match ] uses = " ".join(matches) return f"Answer: { aList.index( uses )" This is the non-recursive method. I just couldn't figure out how to implement recursion with regard to lists. The output should be Answer: n uses. Can anybody help?
[ "Recursion is a bad fit for this problem in Python, because lists aren't really recursive data structures. But you could write the following:\ndef recurse(aList):\n    if not aList:\n        return 0\n    return (\"A\" in aList[0]) + recurse(aList[1:])\n\nNothing in an empty list, by definition, contains \"A\". Otherwise, determine whether \"A\" is in the first element of the list, and add 1 or 0 as appropriate (remember, bools are ints in Python) to the number of matches in the rest of the list.\nThe recursive function should only deal with the count itself; let the caller of the recursive function put the count into a string:\nprint(f\"Answer: {recurse(aList)} uses\")\n\n" ]
[ 1 ]
[]
[]
[ "list", "python", "python_3.x", "recursion" ]
stackoverflow_0074659546_list_python_python_3.x_recursion.txt
Q: Cant properly orginize self method in a class TypeError:create_bool(): incompatible function arguments. The following argument types are supported: Return Error when Im tried to make a class. When I tried as here https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md#python-solution-api. everething is perfect There are some problem with self. method. But I could not undestand where exactly import cv2 import mediapipe as mp import time class FaceMeshDetector: def __init__(self, static_mode=False, maxFaces=2, minDetectionCon=0.5, minTrackCon=0.5): self.static_mode = static_mode self.maxFaces = maxFaces self.minDetectionCon = minDetectionCon self.minTrackCon = minTrackCon self.mpDraw = mp.solutions.drawing_utils self.mpFaceMesh = mp.solutions.face_mesh self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.minDetectionCon, self.minTrackCon) self.drawSpec = self.mpDraw.DrawingSpec(thickness=1, circle_radius=1) def findFaceMesh(self, img, draw=True): self.imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) self.results = self.faceMesh.process(self.imgRGB) faces = [] if self.results.multi_face_landmarks: for faceLms in self.results.multi_face_landmarks: if draw: self.mpDraw.draw_landmarks(img, faceLms, self.mpFaceMesh.FACEMESH_CONTOURS, self.drawSpec, self.drawSpec) face = [] for id, lm in enumerate(faceLms.landmark): # print(lm) ih, iw, ic = img.shape x, y = int(lm.x * iw), int(lm.y * ih) # cv2.putText(img, str(id), (x, y), cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 255, 0), 1) # print(id, x, y) face.append([x, y]) faces.append(face) return img, faces def main(): cap = cv2.VideoCapture(0) pTime = 0 detector = FaceMeshDetector() while True: success, img = cap.read() img, faces = detector.findFaceMesh(img) if len(faces) != 0: print(faces[0]) cTime = time.time() fps = 1 / (cTime - pTime) pTime = cTime cv2.putText(img, f'FPS: {int(fps)}', (20, 70), cv2.FONT_HERSHEY_PLAIN, 3, (0, 255, 0), 3) cv2.imshow("Image", img) cv2.waitKey(1) if __name__ == '__main__': main() Full traceback Traceback (most recent call last): File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 59, in main() File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 44, in main detector = FaceMeshDetector() File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 16, in init self.minTrackCon) File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solutions\face_mesh.py", line 107, in init outputs=['multi_face_landmarks']) File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 291, in init for name, data in (side_inputs or {}).items() File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 291, in for name, data in (side_inputs or {}).items() File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 592, in make_packet return getattr(packet_creator, 'create' + packet_data_type.value)(data) TypeError: create_bool(): incompatible function arguments. The following argument types are supported: 1. (arg0: bool) -> mediapipe.python._framework_bindings.packet.Packet Invoked with: 0.5 A: You have a parameter in the wrong place. Use named parameters or add a value for "refine_landmarks". 
See signature of FaceMesh: def __init__(self, static_image_mode=False, max_num_faces=1, refine_landmarks=False, min_detection_confidence=0.5, min_tracking_confidence=0.5): Or add the missing parameter: Change self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.minDetectionCon, self.minTrackCon) to self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, False, self.minDetectionCon, self.minTrackCon) A: Finally done and work def __init__(self, static_image_mode=False, max_num_faces=2, refine_landmarks=False, min_detection_confidence=0.5, min_tracking_confidence=0.5): self.static_mode = static_image_mode self.maxFaces = max_num_faces self.refine_landmarks = refine_landmarks self.minDetectionCon = min_detection_confidence self.minTrackCon = min_tracking_confidence self.mpDraw = mp.solutions.drawing_utils self.mpFaceMesh = mp.solutions.face_mesh self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.refine_landmarks, self.minDetectionCon, self.minTrackCon) self.drawSpec = self.mpDraw.DrawingSpec(thickness=1, circle_radius=1)
Can't properly organize self method in a class: TypeError: create_bool(): incompatible function arguments. The following argument types are supported:
Return Error when Im tried to make a class. When I tried as here https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md#python-solution-api. everething is perfect There are some problem with self. method. But I could not undestand where exactly import cv2 import mediapipe as mp import time class FaceMeshDetector: def __init__(self, static_mode=False, maxFaces=2, minDetectionCon=0.5, minTrackCon=0.5): self.static_mode = static_mode self.maxFaces = maxFaces self.minDetectionCon = minDetectionCon self.minTrackCon = minTrackCon self.mpDraw = mp.solutions.drawing_utils self.mpFaceMesh = mp.solutions.face_mesh self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.minDetectionCon, self.minTrackCon) self.drawSpec = self.mpDraw.DrawingSpec(thickness=1, circle_radius=1) def findFaceMesh(self, img, draw=True): self.imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) self.results = self.faceMesh.process(self.imgRGB) faces = [] if self.results.multi_face_landmarks: for faceLms in self.results.multi_face_landmarks: if draw: self.mpDraw.draw_landmarks(img, faceLms, self.mpFaceMesh.FACEMESH_CONTOURS, self.drawSpec, self.drawSpec) face = [] for id, lm in enumerate(faceLms.landmark): # print(lm) ih, iw, ic = img.shape x, y = int(lm.x * iw), int(lm.y * ih) # cv2.putText(img, str(id), (x, y), cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 255, 0), 1) # print(id, x, y) face.append([x, y]) faces.append(face) return img, faces def main(): cap = cv2.VideoCapture(0) pTime = 0 detector = FaceMeshDetector() while True: success, img = cap.read() img, faces = detector.findFaceMesh(img) if len(faces) != 0: print(faces[0]) cTime = time.time() fps = 1 / (cTime - pTime) pTime = cTime cv2.putText(img, f'FPS: {int(fps)}', (20, 70), cv2.FONT_HERSHEY_PLAIN, 3, (0, 255, 0), 3) cv2.imshow("Image", img) cv2.waitKey(1) if __name__ == '__main__': main() Full traceback Traceback (most recent call last): File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 59, in main() File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 44, in main detector = FaceMeshDetector() File "C:\Users\Roman\PycharmProjects\pythonProject\FaceMeshModule.py", line 16, in init self.minTrackCon) File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solutions\face_mesh.py", line 107, in init outputs=['multi_face_landmarks']) File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 291, in init for name, data in (side_inputs or {}).items() File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 291, in for name, data in (side_inputs or {}).items() File "C:\Users\Roman\PycharmProjects\pythonProject\venv\lib\site-packages\mediapipe\python\solution_base.py", line 592, in make_packet return getattr(packet_creator, 'create' + packet_data_type.value)(data) TypeError: create_bool(): incompatible function arguments. The following argument types are supported: 1. (arg0: bool) -> mediapipe.python._framework_bindings.packet.Packet Invoked with: 0.5
[ "You have a parameter in the wrong place.\nUse named parameters or add a value for \"refine_landmarks\".\nSee signature of FaceMesh:\ndef __init__(self,\n static_image_mode=False,\n max_num_faces=1,\n refine_landmarks=False,\n min_detection_confidence=0.5,\n min_tracking_confidence=0.5):\n\nOr add the missing parameter:\nChange\n\nself.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, self.minDetectionCon, self.minTrackCon)\n\nto\n\nself.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, self.maxFaces, False, self.minDetectionCon, self.minTrackCon)\n\n", "Finally done and work\ndef __init__(self,\n static_image_mode=False,\n max_num_faces=2,\n refine_landmarks=False,\n min_detection_confidence=0.5,\n min_tracking_confidence=0.5):\n self.static_mode = static_image_mode\n self.maxFaces = max_num_faces\n self.refine_landmarks = refine_landmarks\n self.minDetectionCon = min_detection_confidence\n self.minTrackCon = min_tracking_confidence\n \n self.mpDraw = mp.solutions.drawing_utils\n self.mpFaceMesh = mp.solutions.face_mesh\n self.faceMesh = self.mpFaceMesh.FaceMesh(self.static_mode, \n self.maxFaces, \n self.refine_landmarks, \n self.minDetectionCon,\n self.minTrackCon)\n self.drawSpec = self.mpDraw.DrawingSpec(thickness=1, circle_radius=1)\n\n" ]
[ 0, 0 ]
[]
[]
[ "mediapipe", "python", "self" ]
stackoverflow_0074658820_mediapipe_python_self.txt
Q: ImportError with tkinter So I found a tutorial about working with GUIs in Python tkinter, then tried to learn it from W3Schools and copied the sample code: from tkinter import * from tkinter .ttk import * root = Tk() label = Label(root, text="Hello world Tkinter GUI Example ") label.pack() root.mainloop() So, I googled how to install tkinter on Ubuntu. I used: $ sudo apt-get install python-tk python3-tk tk-dev $ sudo apt-get install python-tk $ pip install tk It seemed to be successful, but I was wrong. I get this error. Ubuntu 22.04.1 LTS A: Generally what you want is import tkinter as tk # 'as tk' isn't required, but it's common practice from tkinter import ttk # though you aren't using any ttk widgets at the moment... I know star imports have a certain appeal, but they can lead to namespace pollution, which is a huge headache! For example, let's say I've done the following: from tkinter import * from tkinter.ttk import * root = Tk() label = Label(root, text='Hello!') Is Label a tkinter widget or a ttk widget? Conversely... import tkinter as tk from tkinter import ttk root = tk.Tk() label = ttk.Label(root) Here, it's clear that label is a ttk widget. Now everything is namespaced appropriately, and everyone's happy! A: I believe you only need to use: from tkinter import * I'd say get rid of the: from tkinter .ttk import * A: I think you can just remove the line: from tkinter .ttk import * I don't think you need that line to run this code.
ImportError with tkinter
So I found a tutorial about working with GUIs in Python tkinter, then tried to learn it from W3Schools and copied the sample code: from tkinter import * from tkinter .ttk import * root = Tk() label = Label(root, text="Hello world Tkinter GUI Example ") label.pack() root.mainloop() So, I googled how to install tkinter on Ubuntu. I used: $ sudo apt-get install python-tk python3-tk tk-dev $ sudo apt-get install python-tk $ pip install tk It seemed to be successful, but I was wrong. I get this error. Ubuntu 22.04.1 LTS
[ "Generally what you want is\nimport tkinter as tk # 'as tk' isn't required, but it's common practice\nfrom tkinter import ttk # though you aren't using any ttk widgets at the moment...\n\nI know star imports have a certain appeal, but they can lead to namespace pollution which is a huge headache!\nFor example lets say I've done the following:\nfrom tkinter import *\nfrom tkinter.ttk import *\n\nroot = Tk()\nlabel = Label(root, text='Hello!')\n\nis Label a tkinter widget or a ttk widget?\nConversely...\nimport tkinter as tk\nfrom tkinter import ttk\n\nroot = tk.Tk()\nlabel = ttk.Label(root)\n\nHere, it's clear that label is a ttk widget.\nNow everything is namespaced appropriately, and everyone's happy!\n", "I believe you only need to use: from tkinter import *\nI'd say get rid of the: from tkinter .ttk import *\n", "I think you can just remove the line: from tkinter .ttk import *\nI don't think you need that line to run this code.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "python_3.x", "tkinter" ]
stackoverflow_0074659714_python_python_3.x_tkinter.txt
Q: Index Enumeration doesn't seem to be operating properly. Where am I messing up? I'm trying to figure out how to enumerate an index properly into specified cells on an Excel spreadsheet using Python. Following a tutorial video, I thought I had it figured out, but it doesn't seem to be pulling each index value and parsing it to each individual cell as intended. Instead, it's taking only the first entry and applying it to all specified cells and ignoring the second and third entry. Can someone help me understand where I'm messing up on this? Thank you kindly. Code: from openpyxl import Workbook, load_workbook from openpyxl.utils import get_column_letter wb = load_workbook('PythonNetwork.xlsx') ws = wb['Systems'] print(ws) # Shows each designated cell as well as its cell value. for row in ws['A2':'A4']: for cell in row: print(cell, cell.value) new_data = ["192.168.1.4", "192.168.1.5", "192.168.1.6"] # Enters new data from created index. for row in ws['A2':'A4']: for index, cell in enumerate(row): cell.value = new_data[index] # Shows each designated cell value for comparison to previously printed information. for row in ws['A2':'A4']: for cell in row: print(cell.value) Output: <Worksheet "Systems"> <Cell 'Systems'.A2> 192.168.1.1 <Cell 'Systems'.A3> 192.168.1.2 <Cell 'Systems'.A4> 192.168.1.3 192.168.1.4 192.168.1.4 192.168.1.4 I tried changing the values in the index from having quotes to simple integers without quotes to see if it made any difference. It does not. For example I replaced each IP address in the index with 10, 20, etc as shown below: new_data = [10, 20, 30] The output was the same result as each cell reported back as 10 10 10 instead of 10 20 30. A: Unfortunately, this manner of accessing a range of cells is always a bit clumsy. If you look at what accessing the range returns: >>> ws['A2':'A4'] ((<Cell 'Systems'.A2>,), (<Cell 'Systems'.A3>,), (<Cell 'Systems'.A4>,)) it's a tuple of tuples, where each inner tuple is a single cell. So, in your for loop, what you're calling a row is a tuple of a single cell. It's not a row exactly, but your code to print those cells works anyway. When you try to change the values, though, the enumeration index is always 0, since each tuple has a single cell, so you're always assigning the value of new_data[0]. Instead, you can do something like this: for index, cell in enumerate(ws['A2':'A4']): cell[0].value = new_data[index] Each cell is actually a tuple of a cell, so you have to reference the 0th element.
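As a side note beyond the original answer: openpyxl's iter_rows offers an arguably cleaner way around the tuple-of-tuples indexing. This is a hedged sketch reusing the question's sheet and range names:

from openpyxl import load_workbook

wb = load_workbook('PythonNetwork.xlsx')
ws = wb['Systems']

new_data = ["192.168.1.4", "192.168.1.5", "192.168.1.6"]

# iter_rows yields one tuple of cells per row; with max_col=1 each tuple
# holds exactly one cell, which we unpack directly.
for (cell,), value in zip(ws.iter_rows(min_row=2, max_row=4, max_col=1), new_data):
    cell.value = value

wb.save('PythonNetwork.xlsx')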
Index Enumeration doesn't seem to be operating properly. Where am I messing up?
I'm trying to figure out how to enumerate an index properly into specified cells on an Excel spreadsheet using Python. Following a tutorial video, I thought I had it figured out, but it doesn't seem to be pulling each index value and parsing it to each individual cell as intended. Instead, it's taking only the first entry and applying it to all specified cells and ignoring the second and third entry. Can someone help me understand where I'm messing up on this? Thank you kindly. Code: from openpyxl import Workbook, load_workbook from openpyxl.utils import get_column_letter wb = load_workbook('PythonNetwork.xlsx') ws = wb['Systems'] print(ws) # Shows each designated cell as well as its cell value. for row in ws['A2':'A4']: for cell in row: print(cell, cell.value) new_data = ["192.168.1.4", "192.168.1.5", "192.168.1.6"] # Enters new data from created index. for row in ws['A2':'A4']: for index, cell in enumerate(row): cell.value = new_data[index] # Shows each designated cell value for comparison to previously printed information. for row in ws['A2':'A4']: for cell in row: print(cell.value) Output: <Worksheet "Systems"> <Cell 'Systems'.A2> 192.168.1.1 <Cell 'Systems'.A3> 192.168.1.2 <Cell 'Systems'.A4> 192.168.1.3 192.168.1.4 192.168.1.4 192.168.1.4 I tried changing the values in the index from having quotes to simple integers without quotes to see if it made any difference. It does not. For example I replaced each IP address in the index with 10, 20, etc as shown below: new_data = [10, 20, 30] The output was the same result as each cell reported back as 10 10 10 instead of 10 20 30.
[ "Unfortunately, this manner of accessing a range of cells is always a bit clumsy. If you look at what accessing the range returns:\n>>> ws['A2':'A4']\n((<Cell 'Systems'.A2>,), (<Cell 'Systems'.A3>,), (<Cell 'Systems'.A4>,))\n\nit's a tuple of tuples, where each inner tuple is a single cell. So, in your for loop, what you're calling a row is a tuple of a single cell. It's not a row exactly, but your code to print those cells works anyway.\nWhen you try to change the values, though, the enumeration index is always 0, since each tuple has a single cell, so you're always assigning the value of new_data[0].\nInstead, you can do something like this:\nfor index, cell in enumerate(ws['A2':'A4']):\n cell[0].value = new_data[index]\n\nEach cell is actually a tuple of a cell, so you have to reference the 0th element.\n" ]
[ 0 ]
[]
[]
[ "enumeration", "excel", "indexing", "openpyxl", "python" ]
stackoverflow_0074659526_enumeration_excel_indexing_openpyxl_python.txt
Q: How to save XGBoost/LightGBM model to PostgreSQL database in Python for subsequent inference in Java? I'm restricted to a PostgreSQL as 'model storage' for the models itself or respective components (coefficients, ..). Obviously, PostgreSQL is far from being a fully-fledged model storage, so I can't rule out that I have to implement the whole model training process in Java [...]. I couldn't find a solution that involves a PostgreSQL database as intermediate storage for the models. Writing files directly to the disk/other storages isn't really an option for me. I considered calling Python code from within the Java application but I don't know whether this would be an efficient solution for subsequent inference tasks and beyond [...]. Are there ways to serialize PMML or other formats that can be loaded via Java implementations of the algorithms? Or ways to use the model definitions/parameters directly for reproducing the model [...]? A: Using PostgreSQL as dummy model storage: Train a model in Python. Establish PostgreSQL connection, dump your model in Pickle data format to the "models" table. Obviously, the data type of the main column should be BLOB. Anytime you want to use the model for some application, unpickle it from the "models" table. The "models" table may have extra columns for storing the model in alternative data formats such as PMML. Assuming you've used correct Python-to-PMML conversion tools, you can assume that the Pickle representation and the PMML representation of the same model will be functionally identical (ie. making the same prediction when given the same input). Using PMML in Java/JVM applications is easy.
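A minimal hedged sketch of the pickle-to-PostgreSQL flow the answer describes; the table name, columns, and connection string are assumptions for illustration (it presumes psycopg2 and a models(name text, payload bytea) table already exist):

import pickle
import psycopg2

def save_model(conn, name, model):
    # Serialize the trained model and store it in a bytea column
    payload = pickle.dumps(model)
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO models (name, payload) VALUES (%s, %s)",
            (name, psycopg2.Binary(payload)),
        )
    conn.commit()

def load_model(conn, name):
    # Fetch the blob back and unpickle it into a live model object
    with conn.cursor() as cur:
        cur.execute("SELECT payload FROM models WHERE name = %s", (name,))
        (payload,) = cur.fetchone()
    return pickle.loads(bytes(payload))

conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical DSN
# save_model(conn, "lgbm_v1", trained_model)
# model = load_model(conn, "lgbm_v1")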
How to save XGBoost/LightGBM model to PostgreSQL database in Python for subsequent inference in Java?
I'm restricted to a PostgreSQL as 'model storage' for the models itself or respective components (coefficients, ..). Obviously, PostgreSQL is far from being a fully-fledged model storage, so I can't rule out that I have to implement the whole model training process in Java [...]. I couldn't find a solution that involves a PostgreSQL database as intermediate storage for the models. Writing files directly to the disk/other storages isn't really an option for me. I considered calling Python code from within the Java application but I don't know whether this would be an efficient solution for subsequent inference tasks and beyond [...]. Are there ways to serialize PMML or other formats that can be loaded via Java implementations of the algorithms? Or ways to use the model definitions/parameters directly for reproducing the model [...]?
[ "Using PostgreSQL as dummy model storage:\n\nTrain a model in Python.\nEstablish PostgreSQL connection, dump your model in Pickle data format to the \"models\" table. Obviously, the data type of the main column should be BLOB.\nAnytime you want to use the model for some application, unpickle it from the \"models\" table.\n\nThe \"models\" table may have extra columns for storing the model in alternative data formats such as PMML. Assuming you've used correct Python-to-PMML conversion tools, you can assume that the Pickle representation and the PMML representation of the same model will be functionally identical (ie. making the same prediction when given the same input). Using PMML in Java/JVM applications is easy.\n" ]
[ 0 ]
[]
[]
[ "java", "lightgbm", "machine_learning", "python", "xgboost" ]
stackoverflow_0074656521_java_lightgbm_machine_learning_python_xgboost.txt
Q: Solve a math operation in a string without using the eval function (Python) Solve a math operation in a string based on operator priority without using the eval function. For example, (3*(72/2)+2-1(32%2)) should be solved without using eval. I couldn't get the parenthesized operations to take priority. A: No need to reinvent the wheel; there is a very straightforward way to do this by using the PCPP package, which is a C/C++ preprocessor for Python: from pcpp import Evaluator eval = Evaluator() result = eval("(3*(72/2)+2-(32%2))") print(result.value()) Note that for this case I had to manually remove the 1 in -1(32%2)), as it makes no sense and makes the library (or any other processor) crash. PCPP also allows you to add custom variables, flags and functions to evaluate an expression, which is very useful. Output: 110
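An alternative worth noting alongside the PCPP approach (not from the original answer): a minimal sketch using Python's standard-library ast module to evaluate arithmetic safely. The operator set shown is an assumption and only covers +, -, *, /, and %:

import ast, operator

# Map AST operator nodes to their arithmetic functions
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Mod: operator.mod}

def safe_eval(expr):
    # Parse the string into an AST and walk it, allowing only
    # numeric constants, binary operators from OPS, and unary minus.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("Disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("(3*(72/2)+2-(32%2))"))  # 110.0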
Solve a math operation in a string without using the eval function (Python)
Solve a math operation in a string based on operator priority without using the eval function. For example, (3*(72/2)+2-1(32%2)) should be solved without using eval. I couldn't get the parenthesized operations to take priority.
[ "No need to reinvent the wheel, there is a very straightforward way to do this by using the PCPP package, which is a C/C++ preprocessor for Python:\nfrom pcpp import Evaluator\n\neval = Evaluator()\nresult = eval(\"(3*(72/2)+2-(32%2))\")\nprint(result.value())\n\nNote that for this case I had to manually remove the 1 in -1(32%2)) as it makes no sense and make the library or any other processor to crash.\nPCPP also allows you to add custom variables, flags and functions to evaluate an expression, which is very useful.\nOutput:\n110\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074659443_python.txt
Q: How to convert numpy.ndarray image to discord.File? I found a similar question, but about PIL: How can I upload a PIL Image object to a Discord chat without saving the image?, and using it results in AttributeError: 'numpy.ndarray' object has no attribute 'save', which is surely because I use OpenCV and not PIL. The question is how to convert this numpy.ndarray to discord.File (using binary or otherwise)? A: In case anybody else gets this problem, here is a function that takes a cv2 image (which is basically a numpy.ndarray) and returns a discord.File: def cv2discordfile(img): img_encode = cv2.imencode('.png', img)[1] data_encode = np.array(img_encode) byte_encode = data_encode.tobytes() byteImage = BytesIO(byte_encode) image = discord.File(byteImage, filename='image.png') return image
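A brief hedged usage sketch for the helper above, with the imports the answer leaves implicit (the channel/command context is hypothetical; assumes discord.py and OpenCV):

from io import BytesIO

import cv2
import discord

def cv2discordfile(img):
    # Encode the OpenCV image (a numpy.ndarray) to PNG bytes in memory
    ok, img_encode = cv2.imencode('.png', img)
    byte_image = BytesIO(img_encode.tobytes())
    return discord.File(byte_image, filename='image.png')

# Inside some command or event handler (illustrative):
# await channel.send(file=cv2discordfile(frame))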
How to convert numpy.ndarray image to discord.File?
I found a similar question, but about PIL: How can I upload a PIL Image object to a Discord chat without saving the image?, and using it results in AttributeError: 'numpy.ndarray' object has no attribute 'save', which is surely because I use OpenCV and not PIL. The question is how to convert this numpy.ndarray to discord.File (using binary or otherwise)?
[ "In case anybody else gets this problem, here is a function that takes a cv2 image (which is basically a numpy.ndarray) and returns a discord.File:\ndef cv2discordfile(img):\n    img_encode = cv2.imencode('.png', img)[1]\n    data_encode = np.array(img_encode)\n    byte_encode = data_encode.tobytes()\n    byteImage = BytesIO(byte_encode)\n    image = discord.File(byteImage, filename='image.png')\n    return image\n\n" ]
[ 0 ]
[]
[]
[ "discord.py", "numpy_ndarray", "python" ]
stackoverflow_0074657948_discord.py_numpy_ndarray_python.txt
Q: How to optimize hyper-parameters of a PPO for a gym environment training I would like to use an optimization algorithm (HyperOptSearch) with ray.tune. On the official documentation, they use this syntax: tuner = tune.Tuner( objective, tune_config=tune.TuneConfig( metric="mean_loss", mode="min", search_alg=algo, num_samples=num_samples, ), param_space=search_config, ) results = tuner.fit() where objective is a function to minimize (or maximize) defined as: def evaluate(step, width, height): time.sleep(0.1) return (0.1 + width * step / 100) ** (-1) + height * 0.1 def objective(config): for step in range(config["steps"]): score = evaluate(step, config["width"], config["height"]) session.report({"iterations": step, "mean_loss": score}) I would like to use this syntax, but with an 'evaluate' function evaluating the episode_reward_mean of my gym environment, which is a LunarLander-v2 env. I recently used this config: config = { "env": "LunarLander-v2", "sgd_minibatch_size": 5000, "num_sgd_iter": 50, "lr": 5e-5, "lambda": 0.8, "vf_loss_coeff": 0.7, "kl_target": 0.01, "kl_coeff": 0.6, "entropy_coeff": 0.001, "clip_param": 0.38, "train_batch_size": 25000, # "monitor": True, # "model": {"free_log_std": True}, "num_workers": 1, "num_gpus": 0, # "batch_mode": "complete_episodes" }, and this syntax to train the model: analysis = tune.Tuner( "PPO", # AI algorithm used tune_config=tune.TuneConfig( metric="episode_reward_mean", mode="max", search_alg=HyperOptSearch(metric="episode_reward_mean", mode="max"), # num_samples will repeat the entire config 10 times. num_samples=10, ), param_space=config, # local_dir="res_LunarLander" ) results = analysis.fit() What could I do to solve my problem? I used to train my model without using any optimization algorithm. I would like to use one to improve my parameters. A: You need to modify your config to make use of Tune search space distributions (https://docs.ray.io/en/latest/tune/tutorials/tune-search-spaces.html), which will let you specify lower and upper bounds for possible values in your search space. Without them (as it is in your case), you will only have constant values and thus identical configurations for each trial. For example, if you want "lr" to be sampled from a logarithmic distribution between 5e-6 and 5e-4, you'd specify it as "lr": tune.loguniform(5e-6, 5e-4).
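To make the answer concrete, here is a minimal hedged sketch of how the constant entries in the config could be turned into Tune search-space distributions; the bounds and choices below are illustrative assumptions, not values from the original post:

from ray import tune

# Only parameters wrapped in a distribution are actually searched;
# plain constants stay fixed across trials.
config = {
    "env": "LunarLander-v2",
    "lr": tune.loguniform(5e-6, 5e-4),          # log-scale sampling for learning rate
    "lambda": tune.uniform(0.7, 1.0),           # GAE lambda
    "clip_param": tune.uniform(0.1, 0.4),
    "entropy_coeff": tune.loguniform(1e-4, 1e-2),
    "sgd_minibatch_size": tune.choice([1000, 2500, 5000]),
    "train_batch_size": 25000,                  # left constant on purpose
    "num_workers": 1,
    "num_gpus": 0,
}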
How to optimize hyper-parameters of a PPO for a gym environment training
I would like to use an optimization algorithm (HyperOptSearch) with ray.tune. On the official documentation, they use this syntax: tuner = tune.Tuner( objective, tune_config=tune.TuneConfig( metric="mean_loss", mode="min", search_alg=algo, num_samples=num_samples, ), param_space=search_config, ) results = tuner.fit() where objective is a function to minimize (or maximize) defined as: def evaluate(step, width, height): time.sleep(0.1) return (0.1 + width * step / 100) ** (-1) + height * 0.1 def objective(config): for step in range(config["steps"]): score = evaluate(step, config["width"], config["height"]) session.report({"iterations": step, "mean_loss": score}) I would like to use this syntax, but with an 'evaluate' function evaluating the episode_reward_mean of my gym environment, which is a LunarLander-v2 env. I recently used this config: config = { "env": "LunarLander-v2", "sgd_minibatch_size": 5000, "num_sgd_iter": 50, "lr": 5e-5, "lambda": 0.8, "vf_loss_coeff": 0.7, "kl_target": 0.01, "kl_coeff": 0.6, "entropy_coeff": 0.001, "clip_param": 0.38, "train_batch_size": 25000, # "monitor": True, # "model": {"free_log_std": True}, "num_workers": 1, "num_gpus": 0, # "batch_mode": "complete_episodes" }, and this syntax to train the model: analysis = tune.Tuner( "PPO", # AI algorithm used tune_config=tune.TuneConfig( metric="episode_reward_mean", mode="max", search_alg=HyperOptSearch(metric="episode_reward_mean", mode="max"), # num_samples will repeat the entire config 10 times. num_samples=10, ), param_space=config, # local_dir="res_LunarLander" ) results = analysis.fit() What could I do to solve my problem? I used to train my model without using any optimization algorithm. I would like to use one to improve my parameters.
[ "You need to modify your config to make use of Tune search space distributions (https://docs.ray.io/en/latest/tune/tutorials/tune-search-spaces.html), which will let you specify lower and upper bounds for possible values in your search space. Without them (as it is in your case), you will only have constant values and thus identical configurations for each trial.\nFor example, if you want \"lr\" to be sampled from a logarithmic distribution between 5e-6 and 5e-4, you'd specify it as \"lr\": tune.loguniform(5e-6, 5e-4).\n" ]
[ 0 ]
[]
[]
[ "hyperparameters", "openai_gym", "python", "ray", "reinforcement_learning" ]
stackoverflow_0074635036_hyperparameters_openai_gym_python_ray_reinforcement_learning.txt
Q: (Stochastic) Gradient Descent implementation in Python I am trying to do (preferably Stochastic) Gradient Descent to minimize a custom loss function. I tried using the scikit-learn SGDRegressor class. However, SGDRegressor doesn't seem to allow me to minimize a custom loss function without data, and if I can use a custom loss function, I can only use it for regression, to fit data with the fit() method. Is there a way to use the scikit-learn implementation or any other Python implementation of stochastic gradient descent to minimize a custom function without data? A: Implementation of Basic Gradient Descent Now that you know how the basic gradient descent works, you can implement it in Python. You’ll use only plain Python and NumPy, which enables you to write concise code when working with arrays (or vectors) and gain a performance boost. This is a basic implementation of the algorithm that starts with an arbitrary point, start, iteratively moves it toward the minimum, and returns a point that is hopefully at or near the minimum: def gradient_descent(gradient, start, learn_rate, n_iter): vector = start for _ in range(n_iter): diff = -learn_rate * gradient(vector) vector += diff return vector gradient_descent() takes four arguments: gradient is the function or any Python callable object that takes a vector and returns the gradient of the function you’re trying to minimize. start is the point where the algorithm starts its search, given as a sequence (tuple, list, NumPy array, and so on) or scalar (in the case of a one-dimensional problem). learn_rate is the learning rate that controls the magnitude of the vector update. n_iter is the number of iterations. This function does exactly what’s described above: it takes a starting point (line 2), iteratively updates it according to the learning rate and the value of the gradient (lines 3 to 5), and finally returns the last position found. Before you apply gradient_descent(), you can add another termination criterion: import numpy as np def gradient_descent( gradient, start, learn_rate, n_iter=50, tolerance=1e-06): vector = start for _ in range(n_iter): diff = -learn_rate * gradient(vector) if np.all(np.abs(diff) <= tolerance): break vector += diff return vector You now have the additional parameter tolerance (line 4), which specifies the minimal allowed movement in each iteration. You’ve also defined the default values for tolerance and n_iter, so you don’t have to specify them each time you call gradient_descent(). Lines 9 and 10 enable gradient_descent() to stop iterating and return the result before n_iter is reached if the vector update in the current iteration is less than or equal to tolerance. This often happens near the minimum, where gradients are usually very small. Unfortunately, it can also happen near a local minimum or a saddle point. Line 9 uses the convenient NumPy functions numpy.all() and numpy.abs() to compare the absolute values of diff and tolerance in a single statement. That’s why you import numpy on line 1. Now that you have the first version of gradient_descent(), it’s time to test your function. You’ll start with a small example and find the minimum of the function C(v) = v². This function has only one independent variable (v), and its gradient is the derivative 2v. It’s a differentiable convex function, and the analytical way to find its minimum is straightforward. However, in practice, analytical differentiation can be difficult or even impossible and is often approximated with numerical methods.
You need only one statement to test your gradient descent implementation: >>> gradient_descent( ... gradient=lambda v: 2 * v, start=10.0, learn_rate=0.2) 2.210739197207331e-06 You use the lambda function lambda v: 2 * v to provide the gradient of v². You start from the value 10.0 and set the learning rate to 0.2. You get a result that’s very close to zero, which is the correct minimum. A figure in the original post (not reproduced here) shows the movement of the solution through the iterations. You start from the rightmost green dot (v = 10) and move toward the minimum (v = 0). The updates are larger at first because the value of the gradient (and slope) is higher. As you approach the minimum, they become lower. Improvement of the Code You can make gradient_descent() more robust, comprehensive, and better-looking without modifying its core functionality: import numpy as np def gradient_descent( gradient, x, y, start, learn_rate=0.1, n_iter=50, tolerance=1e-06, dtype="float64"): # Checking if the gradient is callable if not callable(gradient): raise TypeError("'gradient' must be callable") # Setting up the data type for NumPy arrays dtype_ = np.dtype(dtype) # Converting x and y to NumPy arrays x, y = np.array(x, dtype=dtype_), np.array(y, dtype=dtype_) if x.shape[0] != y.shape[0]: raise ValueError("'x' and 'y' lengths do not match") # Initializing the values of the variables vector = np.array(start, dtype=dtype_) # Setting up and checking the learning rate learn_rate = np.array(learn_rate, dtype=dtype_) if np.any(learn_rate <= 0): raise ValueError("'learn_rate' must be greater than zero") # Setting up and checking the maximal number of iterations n_iter = int(n_iter) if n_iter <= 0: raise ValueError("'n_iter' must be greater than zero") # Setting up and checking the tolerance tolerance = np.array(tolerance, dtype=dtype_) if np.any(tolerance <= 0): raise ValueError("'tolerance' must be greater than zero") # Performing the gradient descent loop for _ in range(n_iter): # Recalculating the difference diff = -learn_rate * np.array(gradient(x, y, vector), dtype_) # Checking if the absolute difference is small enough if np.all(np.abs(diff) <= tolerance): break # Updating the values of the variables vector += diff return vector if vector.shape else vector.item() A: Yes, you can use scikit-learn's SGDRegressor class to minimize a custom loss function without data. The SGDRegressor class allows you to specify a custom loss function using the loss parameter. For example, suppose you have a custom loss function called custom_loss_function that you want to minimize using stochastic gradient descent. You can do this using the following code: from sklearn.linear_model import SGDRegressor # Define your custom loss function def custom_loss_function(y_true, y_pred): # Your custom loss function implementation goes here pass # Create an SGDRegressor object with the custom loss function sgd_regressor = SGDRegressor(loss=custom_loss_function) # Use the fit() method to minimize the custom loss function without data sgd_regressor.fit(X=None, y=None) In this code, the SGDRegressor object is created with the custom_loss_function as the loss function. Then, the fit() method is used to minimize the custom loss function without data. Note that the X and y arguments to the fit() method are set to None because we are not using any data. Please note that the custom_loss_function should be implemented according to the scikit-learn loss function API.
This means that the custom_loss_function should take two arguments: y_true and y_pred, and should return a scalar value representing the loss. You can find more details about the loss function API in the scikit-learn documentation: https://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator
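TensorFlow's tf.keras.optimizers.SGD is one such optimizer. The following is a minimal sketch (assuming TensorFlow 2.x is installed); the quadratic loss, the learning rate, and the iteration count are placeholders to swap for your own custom loss:
import tensorflow as tf

# The quantity to optimize and the custom loss as a zero-argument callable.
v = tf.Variable(10.0)
loss_fn = lambda: v ** 2  # replace with your own differentiable loss

opt = tf.keras.optimizers.SGD(learning_rate=0.2)
for _ in range(50):
    opt.minimize(loss_fn, var_list=[v])  # one (stochastic) gradient step

print(v.numpy())  # close to 0.0, the minimizer of v**2

For true stochastic behavior, draw whatever randomness your loss involves inside loss_fn, so that each call evaluates a noisy estimate of the objective.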
(Stochastic) Gradient Descent implementation in Python
I am trying to do (preferably Stochastic) Gradient Descent to minimize a custom loss function. I tried using scikit learn SGDRegressor class. However, SGDRegressor doesn't seem to allow me to minimize a custom loss function without data, and if I can use custom loss function, I can only use it as regression to fit data with fit() method. Is there a way to use scikit implementation or any other Python implementation of stochastic gradient descent to minimize a custom function without data?
[ "Implementation of Basic Gradient Descent\nNow that you know how the basic gradient descent works, you can implement it in Python. You’ll use only plain Python and NumPy, which enables you to write concise code when working with arrays (or vectors) and gain a performance boost.\nThis is a basic implementation of the algorithm that starts with an arbitrary point, start, iteratively moves it toward the minimum, and returns a point that is hopefully at or near the minimum:\ndef gradient_descent(gradient, start, learn_rate, n_iter):\n vector = start\n for _ in range(n_iter):\n diff = -learn_rate * gradient(vector)\n vector += diff\n return vector\n\ngradient_descent() takes four arguments:\ngradient is the function or any Python callable object that takes a vector and returns the gradient of the function you’re trying to minimize.\nstart is the point where the algorithm starts its search, given as a sequence (tuple, list, NumPy array, and so on) or scalar (in the case of a one-dimensional problem).\nlearn_rate is the learning rate that controls the magnitude of the vector update.\nn_iter is the number of iterations.\nThis function does exactly what’s described above: it takes a starting point (line 2), iteratively updates it according to the learning rate and the value of the gradient (lines 3 to 5), and finally returns the last position found.\nBefore you apply gradient_descent(), you can add another termination criterion:\nimport numpy as np\n\ndef gradient_descent(\n gradient, start, learn_rate, n_iter=50, tolerance=1e-06):\n vector = start\n for _ in range(n_iter):\n diff = -learn_rate * gradient(vector)\n if np.all(np.abs(diff) <= tolerance):\n break\n vector += diff\n return vector\n\nYou now have the additional parameter tolerance (line 4), which specifies the minimal allowed movement in each iteration. You’ve also defined the default values for tolerance and n_iter, so you don’t have to specify them each time you call gradient_descent().\nLines 9 and 10 enable gradient_descent() to stop iterating and return the result before n_iter is reached if the vector update in the current iteration is less than or equal to tolerance. This often happens near the minimum, where gradients are usually very small. Unfortunately, it can also happen near a local minimum or a saddle point.\nLine 9 uses the convenient NumPy functions numpy.all() and numpy.abs() to compare the absolute values of diff and tolerance in a single statement. That’s why you import numpy on line 1.\nNow that you have the first version of gradient_descent(), it’s time to test your function. You’ll start with a small example and find the minimum of the function = ².\nThis function has only one independent variable (), and its gradient is the derivative 2. It’s a differentiable convex function, and the analytical way to find its minimum is straightforward. However, in practice, analytical differentiation can be difficult or even impossible and is often approximated with numerical methods.\nYou need only one statement to test your gradient descent implementation:\n>>> gradient_descent(\n... gradient=lambda v: 2 * v, start=10.0, learn_rate=0.2)\n2.210739197207331e-06\n\nYou use the lambda function lambda v: 2 * v to provide the gradient of ². You start from the value 10.0 and set the learning rate to 0.2. 
You get a result that’s very close to zero, which is the correct minimum.\nThe figure below shows the movement of the solution through the iterations:\n[figure not preserved in the source: the iterates moving from x = 10 down toward x = 0]\nYou start from the rightmost green dot (x = 10) and move toward the minimum (x = 0). The updates are larger at first because the value of the gradient (and slope) is higher. As you approach the minimum, they become lower.\nImprovement of the Code\nYou can make gradient_descent() more robust, comprehensive, and better-looking without modifying its core functionality:\nimport numpy as np\n\ndef gradient_descent(\n    gradient, x, y, start, learn_rate=0.1, n_iter=50, tolerance=1e-06,\n    dtype=\"float64\"):\n    # Checking if the gradient is callable\n    if not callable(gradient):\n        raise TypeError(\"'gradient' must be callable\")\n\n    # Setting up the data type for NumPy arrays\n    dtype_ = np.dtype(dtype)\n\n    # Converting x and y to NumPy arrays\n    x, y = np.array(x, dtype=dtype_), np.array(y, dtype=dtype_)\n    if x.shape[0] != y.shape[0]:\n        raise ValueError(\"'x' and 'y' lengths do not match\")\n\n    # Initializing the values of the variables\n    vector = np.array(start, dtype=dtype_)\n\n    # Setting up and checking the learning rate\n    learn_rate = np.array(learn_rate, dtype=dtype_)\n    if np.any(learn_rate <= 0):\n        raise ValueError(\"'learn_rate' must be greater than zero\")\n\n    # Setting up and checking the maximal number of iterations\n    n_iter = int(n_iter)\n    if n_iter <= 0:\n        raise ValueError(\"'n_iter' must be greater than zero\")\n\n    # Setting up and checking the tolerance\n    tolerance = np.array(tolerance, dtype=dtype_)\n    if np.any(tolerance <= 0):\n        raise ValueError(\"'tolerance' must be greater than zero\")\n\n    # Performing the gradient descent loop\n    for _ in range(n_iter):\n        # Recalculating the difference\n        diff = -learn_rate * np.array(gradient(x, y, vector), dtype_)\n\n        # Checking if the absolute difference is small enough\n        if np.all(np.abs(diff) <= tolerance):\n            break\n\n        # Updating the values of the variables\n        vector += diff\n\n    return vector if vector.shape else vector.item()\n\n", "scikit-learn's SGDRegressor cannot do this. Its loss parameter accepts only a fixed set of predefined strings ('squared_error', 'huber', 'epsilon_insensitive', 'squared_epsilon_insensitive'), not an arbitrary Python callable, and its fit() method always requires real X and y arrays, so there is no data-free mode:\nfrom sklearn.linear_model import SGDRegressor\n\n# loss must be one of the predefined names; passing a callable raises an error\nsgd_regressor = SGDRegressor(loss=\"huber\")\n\n# fit() needs actual data; calling it with X=None, y=None fails\n# sgd_regressor.fit(X, y)\n\nTo minimize a custom function without data, implement the update loop yourself (see the first answer) or use a framework optimizer that accepts an arbitrary differentiable callable. A minimal sketch with TensorFlow 2.x's tf.keras.optimizers.SGD (the quadratic loss, learning rate, and iteration count are placeholders for your own):
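import tensorflow as tf\n\nv = tf.Variable(10.0)\nloss_fn = lambda: v ** 2  # replace with your own differentiable loss\n\nopt = tf.keras.optimizers.SGD(learning_rate=0.2)\nfor _ in range(50):\n    opt.minimize(loss_fn, var_list=[v])  # one (stochastic) gradient step\n\nprint(v.numpy())  # close to 0.0, the minimizer of v**2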
For true stochastic behavior, draw whatever randomness your loss involves inside loss_fn, so that each call evaluates a noisy estimate of the objective.\n" ]
[ 1, 0 ]
[]
[]
[ "gradient_descent", "keras", "python", "scikit_learn", "tensorflow" ]
stackoverflow_0074631492_gradient_descent_keras_python_scikit_learn_tensorflow.txt
Q: function that takes two parameters of string type which are fractions with the same denominator and returns a sum expression and the sum result For example:
>>> a_b = '1/3'
>>> c_b = '5/3'
>>> get_fractions(a_b, c_b)
'1/3 + 5/3 = 6/3'

I'm trying to solve this but it won't work:
def get_fractions(a_b: str, c_b: str) -> str:
    calculate = int(a_b) + int(c_b)
    return calculate

A: First you will have to get the numerator and denominator of each argument by splitting on '/'. Then convert the numerators from strings to integers and add them. Lastly, format the requested expression string from the original inputs, the numerator sum, and either argument's denominator (they are equal by assumption).
def get_fractions(a_b: str, c_b: str) -> str:
    a_n, a_d = a_b.split('/')
    c_n, c_d = c_b.split('/')
    n_sum = int(a_n) + int(c_n)
    out = f'{a_b} + {c_b} = {n_sum}/{a_d}'
    return out

Output
1/3 + 5/3 = 6/3

A: def get_fractions(a_b: str, c_b: str) -> str:
    a_n, a_d = a_b.split('/')
    c_n, c_d = c_b.split('/')
    n_sum = int(c_n) + int(a_n)
    out = f'{n_sum} / {a_d}'
    return out

a_b = '1/3'
c_b = '5/3'
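For comparison, the standard library's fractions module can do the arithmetic too, though note that Fraction normalizes its results (1/3 + 5/3 becomes 2, not 6/3), so it cannot reproduce the unreduced '6/3' on its own. A sketch:
from fractions import Fraction

def get_fractions(a_b: str, c_b: str) -> str:
    total = Fraction(a_b) + Fraction(c_b)  # Fraction parses 'n/d' strings directly
    return f'{a_b} + {c_b} = {total}'

print(get_fractions('1/3', '5/3'))  # 1/3 + 5/3 = 2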
function that takes two parameters of string type which are fractions with the same denominator and returns a sum expression and the sum result
For example: >>> a_b = '1/3' >>> c_b = '5/3' >>> get_fractions(a_b, c_b) '1/3 + 5/3 = 6/3' I'm trying to solve this but it won't work: def get_fractions(a_b: str, c_b: str) -> str: calculate = int(a_b) + int(c_b) return calculate
[ "First you will have to get the nominator and denominator for each argument. After that you convert the nominator of each argument from string to integer and add them. Then lastly convert the sum of nominators to str and concatenate it with '/' and any of the argument denominator.\ndef get_fractions(a_b: str, c_b: str) -> str:\n a_b = a_b.split('/')\n a_n, a_d = a_b[0], a_b[1]\n c_b = c_b.split('/')\n c_n, c_d = c_b[0], c_b[1]\n n_sum = int(c_n) + int(a_n)\n out = f'{n_sum} / {a_d}'\n return out\n\nOutput\n6 / 3\n\n", "def get_fractions(a_b: str, c_b: str) -> str:\na_b = a_b.split('/')\na_n, a_d = a_b[0], a_b[1]\nc_b = c_b.split('/')\nc_n, c_d = c_b[0], c_b[1]\nn_sum = int(c_n) + int(az_n)\nout = f'{n_sum} / {a_d}'\nreturn out\n\na_b = '1/3'\nc_b = '5/3'\n\n" ]
[ 1, 0 ]
[]
[]
[ "fractions", "integer", "python", "python_3.x", "string" ]
stackoverflow_0074235217_fractions_integer_python_python_3.x_string.txt
Q: Create multiple objects in one form Django I am trying to create a form in Django that can create one Student object with two Contact objects in the same form. The second Contact object must be optional to fill in (not required). Schematic view of the objects created in the single form: Contact 1 Student < Contact 2 (not required) I have the following models in models.py: class User(AbstractUser): is_student = models.BooleanField(default=False) is_teacher = models.BooleanField(default=False) class Student(models.Model): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) year = models.ForeignKey(Year, on_delete=models.SET_NULL, null=True) school = models.ForeignKey(School, on_delete=models.SET_NULL, null=True) student_email = models.EmailField() # named student_email because email conflicts with user email account_status = models.CharField(max_length=1, choices=ACCOUNT_STATUS_CHOICES) phone_number = models.CharField(max_length=50) homework_coach = models.ForeignKey(Teacher, on_delete=models.SET_NULL, null=True, blank=True, default='') user = models.OneToOneField(User, on_delete=models.CASCADE, null=True) plannings = models.ForeignKey(Planning, on_delete=models.SET_NULL, null=True) def __str__(self): return f"{self.first_name} {self.last_name}" class Contact(models.Model): student = models.ForeignKey(Student, on_delete=models.CASCADE) contact_first_name = models.CharField(max_length=50) contact_last_name = models.CharField(max_length=50) contact_phone_number = models.CharField(max_length=50) contact_email = models.EmailField() contact_street = models.CharField(max_length=100) contact_street_number = models.CharField(max_length=10) contact_zipcode = models.CharField(max_length=30) contact_city = models.CharField(max_length=100) def __str__(self): return f"{self.contact_first_name} {self.contact_last_name}" In forms.py, I have created two forms to register students and contacts. A student is also connected to a User object for login and authentication, but this is not relevant. Hence, when a user is created, the user is defined as the user. 
from django import forms from django.contrib.auth.models import User from django.contrib.auth.forms import UserCreationForm from django.db import transaction from .models import Student, Teacher, User, Year, School, Location, Contact class StudentSignUpForm(UserCreationForm): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) #student first_name = forms.CharField(max_length=50, required=True) last_name = forms.CharField(max_length=50, required=True) year = forms.ModelChoiceField(queryset=Year.objects.all(), required=False) school = forms.ModelChoiceField(queryset=School.objects.all(), required=False) # not required for new schools / years that are not yet in the database student_email = forms.EmailField(required=True) account_status = forms.ChoiceField(choices=ACCOUNT_STATUS_CHOICES) phone_number = forms.CharField(max_length=50, required=True) homework_coach = forms.ModelChoiceField(queryset=Teacher.objects.all(), required=False) class Meta(UserCreationForm.Meta): model = User fields = ( 'username', 'first_name', 'last_name', 'year', 'school', 'student_email', 'account_status', 'phone_number', 'homework_coach', 'password1', 'password2', ) @transaction.atomic def save( self, first_name, last_name, year, school, student_email, account_status, phone_number, homework_coach, ): user = super().save(commit=False) user.is_student = True user.save() Student.objects.create( # create student object user=user, first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach ) return user class ContactForm(forms.ModelForm): contact_first_name = forms.CharField(max_length=50, required=True) contact_last_name = forms.CharField(max_length=50, required=True) contact_phone_number = forms.CharField(max_length=50, required=False) contact_email = forms.EmailField(required=False) # not required because some students might not know contact information contact_street = forms.CharField(max_length=100, required=False) contact_street_number = forms.CharField(max_length=10, required=False) contact_zipcode = forms.CharField(max_length=10, required=False) contact_city = forms.CharField(max_length=100, required=False) class Meta: model = Contact fields = '__all__' In views.py, I have created a view that saves the data (so far only student data, not contact data). 
class StudentSignUpView(CreateView): model = User form_class = StudentSignUpForm template_name = 'registration/signup_form.html' def get_context_data(self, **kwargs): kwargs['user_type'] = 'student' return super().get_context_data(**kwargs) def form_valid(self, form): # student first_name = form.cleaned_data.get('first_name') last_name = form.cleaned_data.get('last_name') year = form.cleaned_data.get('year') school = form.cleaned_data.get('school') student_email = form.cleaned_data.get('student_email') account_status = form.cleaned_data.get('account_status') phone_number = form.cleaned_data.get('phone_number') homework_coach = form.cleaned_data.get('email') user = form.save( # student first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach, ) login(self.request, user) return redirect('home') And in registration/signup_form.html, the template is as follows: {% block content %} {% load crispy_forms_tags %} <form method="POST" enctype="multipart/form-data"> {{ formset.management_data }} {% csrf_token %} {{formset|crispy}} <input type="submit" value="Submit"> </form> {% endblock %} Urls.py: from .views import StudentSignUpView urlpatterns = [ path('', views.home, name='home'), path('signup/student/', StudentSignupView.as_view(), name='student_signup'), ] How can I create one view that has one form that creates 1 Student object and 2 Contact objects (of which the 2nd Contact is not required)? Things I have tried: Using formsets to create multiple contacts at once, but I only managed to create multiple Contacts and could not manage to add Students to that formset. I added this to views.py: def formset_view(request): context={} # creating the formset ContactFormSet = formset_factory(ContactForm, extra = 2) formset = ContactFormSet() # print formset data if it is valid if formset.is_valid(): for form in formset: print(form.cleaned_data) context['formset']=formset return render(request, 'registration/signup_form.html', context) Urls.py: urlpatterns = [ path('', views.home, name='home'), path('signup/student/', views.formset_view, name='student_signup'), ] But I only managed to create multiple Contacts and was not able to add a Student object through that form. I tried creating a ModelFormSet to add fields for the Student object, but that did not work either. A: What I'd try: I don't understand your StudentSignUpForm magic. However, if it's effectively the same as a ModelForm: class StudentSignUpForm(forms.ModelForm): class Meta: model = Student fields = ('first_name', 'last_name', ...) then just add non-model fields contact1_first_name = forms.CharField(max_length=50, required=True) contact1_last_name = forms.CharField(max_length=50, required=True) contact1_phone_number = forms.CharField(max_length=50, required=False) ... contact2_first_name = forms.CharField(max_length=50, required=False) # optional second contact ... contact2_zipcode = forms.CharField(max_length=10, required=False) contact2_city = forms.CharField(max_length=100, required=False) And then put everything together in form_valid: @transaction.atomic def form_valid( self, form): student = form.save() contact1 = Contact( student = student, contact_first_name = form.cleaned_data['contact1_first_name'], contact_last_name = ... 
    )
    contact1.save()

    if (form.cleaned_data['contact2_first_name'] and
        form.cleaned_data['contact2_last_name']  # blank if omitted
        ):

        contact2 = Contact(
            student=student,
            contact_first_name = form.cleaned_data['contact2_first_name'],
            ...
            )
        contact2.save()

    return HttpResponseRedirect( ...)

If you want to do further validation beyond what's easy in a form definition, you can. (You may well want to check that if contact2_first_name is specified, contact2_last_name must also be specified).
def form_valid( self, form):

    # extra validations, add errors on fail

    n=0
    if form.cleaned_data['contact2_first_name']:
        n+=1
    if form.cleaned_data['contact2_last_name']:
        n+=1

    if n==1:
        form.add_error('contact2_first_name',
            'Must provide first and last names for contact2, or omit both for no second contact')
        form.add_error('contact2_last_name',
            'Must provide first and last names for contact2, or omit both for no second contact')
    contact2_provided = (n != 0)
    ...

    if not form.is_valid():
        return self.form_invalid(form)

    with transaction.atomic():
        student = form.save()
        contact1 = ( ... # as before

        if contact2_provided:
            contact2 = ( ...
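An alternative worth knowing (my addition, not from the answer above): Django's inline formsets are built for exactly this parent-plus-N-children pattern, and extra forms that are left blank simply validate as empty and are not saved, which covers the optional second contact. A sketch, assuming the Student/Contact models and ContactForm from the question; the view wiring is illustrative only:
from django.forms import inlineformset_factory

ContactFormSet = inlineformset_factory(
    Student, Contact, form=ContactForm, extra=2, max_num=2, can_delete=False
)

# In the view, after the student instance has been saved:
formset = ContactFormSet(request.POST, instance=student)
if formset.is_valid():
    formset.save()  # creates only the contact forms that were actually filled in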
Create multiple objects in one form Django
I am trying to create a form in Django that can create one Student object with two Contact objects in the same form. The second Contact object must be optional to fill in (not required). Schematic view of the objects created in the single form: Contact 1 Student < Contact 2 (not required) I have the following models in models.py: class User(AbstractUser): is_student = models.BooleanField(default=False) is_teacher = models.BooleanField(default=False) class Student(models.Model): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) year = models.ForeignKey(Year, on_delete=models.SET_NULL, null=True) school = models.ForeignKey(School, on_delete=models.SET_NULL, null=True) student_email = models.EmailField() # named student_email because email conflicts with user email account_status = models.CharField(max_length=1, choices=ACCOUNT_STATUS_CHOICES) phone_number = models.CharField(max_length=50) homework_coach = models.ForeignKey(Teacher, on_delete=models.SET_NULL, null=True, blank=True, default='') user = models.OneToOneField(User, on_delete=models.CASCADE, null=True) plannings = models.ForeignKey(Planning, on_delete=models.SET_NULL, null=True) def __str__(self): return f"{self.first_name} {self.last_name}" class Contact(models.Model): student = models.ForeignKey(Student, on_delete=models.CASCADE) contact_first_name = models.CharField(max_length=50) contact_last_name = models.CharField(max_length=50) contact_phone_number = models.CharField(max_length=50) contact_email = models.EmailField() contact_street = models.CharField(max_length=100) contact_street_number = models.CharField(max_length=10) contact_zipcode = models.CharField(max_length=30) contact_city = models.CharField(max_length=100) def __str__(self): return f"{self.contact_first_name} {self.contact_last_name}" In forms.py, I have created two forms to register students and contacts. A student is also connected to a User object for login and authentication, but this is not relevant. Hence, when a user is created, the user is defined as the user. 
from django import forms from django.contrib.auth.models import User from django.contrib.auth.forms import UserCreationForm from django.db import transaction from .models import Student, Teacher, User, Year, School, Location, Contact class StudentSignUpForm(UserCreationForm): ACCOUNT_STATUS_CHOICES = ( ('A', 'Active'), ('S', 'Suspended'), ('D', 'Deactivated'), ) #student first_name = forms.CharField(max_length=50, required=True) last_name = forms.CharField(max_length=50, required=True) year = forms.ModelChoiceField(queryset=Year.objects.all(), required=False) school = forms.ModelChoiceField(queryset=School.objects.all(), required=False) # not required for new schools / years that are not yet in the database student_email = forms.EmailField(required=True) account_status = forms.ChoiceField(choices=ACCOUNT_STATUS_CHOICES) phone_number = forms.CharField(max_length=50, required=True) homework_coach = forms.ModelChoiceField(queryset=Teacher.objects.all(), required=False) class Meta(UserCreationForm.Meta): model = User fields = ( 'username', 'first_name', 'last_name', 'year', 'school', 'student_email', 'account_status', 'phone_number', 'homework_coach', 'password1', 'password2', ) @transaction.atomic def save( self, first_name, last_name, year, school, student_email, account_status, phone_number, homework_coach, ): user = super().save(commit=False) user.is_student = True user.save() Student.objects.create( # create student object user=user, first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach ) return user class ContactForm(forms.ModelForm): contact_first_name = forms.CharField(max_length=50, required=True) contact_last_name = forms.CharField(max_length=50, required=True) contact_phone_number = forms.CharField(max_length=50, required=False) contact_email = forms.EmailField(required=False) # not required because some students might not know contact information contact_street = forms.CharField(max_length=100, required=False) contact_street_number = forms.CharField(max_length=10, required=False) contact_zipcode = forms.CharField(max_length=10, required=False) contact_city = forms.CharField(max_length=100, required=False) class Meta: model = Contact fields = '__all__' In views.py, I have created a view that saves the data (so far only student data, not contact data). 
class StudentSignUpView(CreateView): model = User form_class = StudentSignUpForm template_name = 'registration/signup_form.html' def get_context_data(self, **kwargs): kwargs['user_type'] = 'student' return super().get_context_data(**kwargs) def form_valid(self, form): # student first_name = form.cleaned_data.get('first_name') last_name = form.cleaned_data.get('last_name') year = form.cleaned_data.get('year') school = form.cleaned_data.get('school') student_email = form.cleaned_data.get('student_email') account_status = form.cleaned_data.get('account_status') phone_number = form.cleaned_data.get('phone_number') homework_coach = form.cleaned_data.get('email') user = form.save( # student first_name=first_name, last_name=last_name, year=year, school=school, student_email=student_email, account_status=account_status, phone_number=phone_number, homework_coach=homework_coach, ) login(self.request, user) return redirect('home') And in registration/signup_form.html, the template is as follows: {% block content %} {% load crispy_forms_tags %} <form method="POST" enctype="multipart/form-data"> {{ formset.management_data }} {% csrf_token %} {{formset|crispy}} <input type="submit" value="Submit"> </form> {% endblock %} Urls.py: from .views import StudentSignUpView urlpatterns = [ path('', views.home, name='home'), path('signup/student/', StudentSignupView.as_view(), name='student_signup'), ] How can I create one view that has one form that creates 1 Student object and 2 Contact objects (of which the 2nd Contact is not required)? Things I have tried: Using formsets to create multiple contacts at once, but I only managed to create multiple Contacts and could not manage to add Students to that formset. I added this to views.py: def formset_view(request): context={} # creating the formset ContactFormSet = formset_factory(ContactForm, extra = 2) formset = ContactFormSet() # print formset data if it is valid if formset.is_valid(): for form in formset: print(form.cleaned_data) context['formset']=formset return render(request, 'registration/signup_form.html', context) Urls.py: urlpatterns = [ path('', views.home, name='home'), path('signup/student/', views.formset_view, name='student_signup'), ] But I only managed to create multiple Contacts and was not able to add a Student object through that form. I tried creating a ModelFormSet to add fields for the Student object, but that did not work either.
[ "What I'd try:\nI don't understand your StudentSignUpForm magic. However, if it's effectively the same as a ModelForm:\nclass StudentSignUpForm(forms.Modelform):\n class Meta:\n model = Student\n fields = ('first_name', 'last_name', ...)\n\nthen just add non-model fields\n contact1_first_name = forms.CharField(max_length=50, required=True)\n contact1_last_name = forms.CharField(max_length=50, required=True)\n contact1_phone_number = forms.CharField(max_length=50, required=False)\n ...\n contact2_first_name = forms.CharField(max_length=50, required=True)\n ...\n contact2_zipcode = forms.CharField(max_length=10, required=False)\n contact2_city = forms.CharField(max_length=100, required=False)\n\nAnd then put everything together in form_valid:\[email protected]\ndef form_valid( self, form):\n student = form.save()\n\n contact1 = Contact(\n student = student,\n contact_first_name = form.cleaned_data['contact1_first_name'],\n contact_last_name = ...\n )\n contact1.save()\n\n if (form.cleaned_data['contact2_first_name'] and \n form.cleaned_data['contact2_last_name'] # blank if omitted\n ):\n\n contact2 = Contact(\n student=student,\n contact_first_name = form.cleaned_data['contact2_first_name'],\n ...\n )\n contact2.save()\n\n return HttpResponseRedirect( ...)\n\nIf you want to do further validation beyond what's easy in a form definition you can. (You may well want to check that if conatct2_first_name is specified, contact2_last_name must also be specified).\ndef form_valid( self, form):\n\n # extra validations, add errors on fail\n\n n=0\n if form.cleaned_data['contact2_first_name']:\n n+=1\n if form.cleaned_data['contact2_last_name']:\n n+=1\n\n if n==1:\n form.add_error('contact2_first_name',\n 'Must provide first and last names for contact2, or omit both for no second contact') \n form.add_error('contact2_last_name',\n 'Must provide first and last names for contact2, or omit both for no second contact') \n contact2_provided = (n != 0)\n ...\n\n if not form.is_valid():\n return self.form_invalid( self, form)\n\n with transaction.atomic():\n student = form.save()\n contact1 = ( ... # as before\n\n if contact2_provided:\n contact2 = ( ...\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_forms", "django_models", "formset", "python" ]
stackoverflow_0074655177_django_django_forms_django_models_formset_python.txt
Q: __init__.py issue in django test I have an issue with running a test in my Django project, using the command python manage.py test. It shows:
user:~/workspace/connector$ docker-compose run --rm app sh -c "python manage.py test"
Creating connector_app_run ... done
Found 0 test(s).
System check identified no issues (0 silenced).

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

I was debugging it and I know that it's probably an "__init__.py" file. If I'm deleting the file __init__.py from app.app (I have read somewhere that it can help) then I'm receiving an error:
======================================================================
ERROR: app.tests.test_secrets (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: app.tests.test_secrets
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/unittest/loader.py", line 436, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/local/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name
    __import__(name)
  File "/app/app/tests/test_secrets.py", line 12, in <module>
    from app.app import secrets
ModuleNotFoundError: No module named 'app.app'

why did this error occur? PyCharm resolves the imports normally, and as far as I know, from version 3.4 it's not obligatory to put __init__.py into folders to make a package.
This is the github link: https://github.com/MrHarvvey/connector.git
Can you explain what I'm doing wrong here?
A: So as per your project file structure, I changed from app.app import secrets to from app import secrets and then found test cases are also failing, so I fixed them also, you can review the changes here:
https://github.com/MrHarvvey/connector/pull/1
Please let me know if you wanted something else.
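One quick way to reproduce such failures outside the test runner (a throwaway check of my own, run from the directory containing manage.py):
import importlib

# If this raises ModuleNotFoundError, python manage.py test will fail the same
# way, because Django's test discovery imports test modules by dotted path.
mod = importlib.import_module("app.tests.test_secrets")
print(mod.__file__)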
__init__.py issue in django test
I have an issue with running a test in my Django project, using the command python manage.py test. It shows: user:~/workspace/connector$ docker-compose run --rm app sh -c "python manage.py test" Creating connector_app_run ... done Found 0 test(s). System check identified no issues (0 silenced). ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I was debugging it and I know that it's probably an "__init__.py" file. If I'm deleting the file __init__.py from app.app (I have read somewhere that it can help) then I'm receiving an error: ====================================================================== ERROR: app.tests.test_secrets (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: app.tests.test_secrets Traceback (most recent call last): File "/usr/local/lib/python3.9/unittest/loader.py", line 436, in _find_test_path module = self._get_module_from_name(name) File "/usr/local/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name __import__(name) File "/app/app/tests/test_secrets.py", line 12, in <module> from app.app import secrets ModuleNotFoundError: No module named 'app.app' why did this error occur? PyCharm resolves the imports normally, and as far as I know, from version 3.4 it's not obligatory to put __init__.py into folders to make a package. This is the github link: https://github.com/MrHarvvey/connector.git Can you explain what I'm doing wrong here?
[ "So as per your project file structure, I changed from app.app import secrets to from app import secrets and then found test cases are also failing, so I fixed them also, you can review the changes here:\nhttps://github.com/MrHarvvey/connector/pull/1\nPlease let me know you if you wanted something else.\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074658535_django_python.txt
Q: Select dropdown option with Selenium (Python) I'm kinda new at Selenium, so I proposed myself a project. I'm trying to get as much information as I can from this URL https://statusinvest.com.br/acoes/proventos/ibovespa Until that time I was able to do everything, EXCEPT change the default option at the "Filtro por Índice". I would like to change it from "Ibovespa" to "--GERAL--" but it has been harder than I would expect! I tried via classical XPath (find then click) and by the Select() class in Selenium, but it appears to be beyond my knowledge and I'm totally stuck... Anyone has any tip on how to accomplish it? Thanks!
A: So a very simple way to change the input option would be to do:
from selenium.webdriver.common.by import By

select_obj = driver.find_element(By.CLASS_NAME, 'select-wrapper') # object that contains all of the elements for first input selector
select_obj.find_element(By.TAG_NAME, 'input').click() # click the input object to bring up the options
select_obj.find_elements(By.TAG_NAME, 'li')[0].click() # click the first option
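A slightly more defensive variant of the same idea, since the options list is rendered only after the click. This is a sketch of my own: the class and tag names are taken from the answer above, and the 10-second timeout is an arbitrary choice:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
select_obj = wait.until(
    EC.presence_of_element_located((By.CLASS_NAME, 'select-wrapper')))
select_obj.find_element(By.TAG_NAME, 'input').click()
# wait until the dropdown options have been rendered, then pick the first one
wait.until(EC.element_to_be_clickable((By.TAG_NAME, 'li')))
select_obj.find_elements(By.TAG_NAME, 'li')[0].click()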
Select dropdown option with Selenium (Python)
I'm kinda new at Selenium, so I proposed myself a project. I'm trying to get as much information as I can from this URL https://statusinvest.com.br/acoes/proventos/ibovespa Until that time I was able to do everything, EXCEPT change the default option at the "Filtro por Índice". I would like to change it from "Ibovespa" to "--GERAL--" but it has been harder than I would expect! I tried via classical XPath (find then click) and by the Select() class in Selenium, but it appears to be beyond my knowledge and I'm totally stuck... Anyone has any tip on how to accomplish it? Thanks!
[ "So a very simple way to change the input option would be to do:\nfrom selenium.webdriver.common.by import By\n\nselect_obj = driver.find_element(By.CLASS_NAME, 'select-wrapper') # object that contains all of the elements for first input selector\nselect_obj.find_element(By.TAG_NAME, 'input').click() # click the input object to bring up the options\nselect_obj.find_elements(By.TAG_NAME, 'li')[0].click() # click the first option\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074659548_python_selenium.txt
Q: How to convert python list to JSON array? If I have python list like
pyList=['[email protected]','[email protected]']

And I want it to convert it to json array and add {} around every object, it should be like that :
arrayJson=[{"email":"[email protected]"},{"email":"[email protected]"}]

any idea how to do that ?
A: You can achieve this by using built-in json module
import json

arrayJson = json.dumps([{"email": item} for item in pyList])

A: Try to Google this kind of stuff first. :)
import json

array = [1, 2, 3]
jsonArray = json.dumps(array)

By the way, the result you asked for can not be achieved with the list you provided.
You need to use python dictionaries to get json objects. The conversion is like below
Python -> JSON
list -> array
dictionary -> object

And here is the link to the docs
https://docs.python.org/3/library/json.html
A: pip install jsonwhatever.
You should try it, you can put anything on it
from jsonwhatever import jsonwhatever as jw

pyList=['[email protected]','[email protected]']

jsonwe = jw.JsonWhatEver()

mytr = jsonwe.jsonwhatever('my_custom_list', pyList)

print(mytr)
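To sanity-check the first answer's output, you can round-trip it through json.loads; the dumped text parses back into the list-of-objects shape the question asks for. The addresses below are generic placeholders of my own:
import json

pyList = ['[email protected]', '[email protected]']
arrayJson = json.dumps([{"email": item} for item in pyList])
print(arrayJson)                          # [{"email": "[email protected]"}, {"email": "[email protected]"}]
print(json.loads(arrayJson)[0]["email"])  # [email protected]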
How to convert python list to JSON array?
If I have python list like pyList=['[email protected]','[email protected]'] And I want it to convert it to json array and add {} around every object, it should be like that : arrayJson=[{"email":"[email protected]"},{"email":"[email protected]"}] any idea how to do that ?
[ "You can achieve this by using built-in json module\nimport json\n\narrayJson = json.dumps([{\"email\": item} for item in pyList])\n\n", "Try to Google this kind of stuff first. :)\nimport json\n\narray = [1, 2, 3]\njsonArray = json.dumps(array)\n\nBy the way, the result you asked for can not be achieved with the list you provided.\nYou need to use python dictionaries to get json objects. The conversion is like below\nPython -> JSON\nlist -> array\ndictionary -> object\n\nAnd here is the link to the docs\nhttps://docs.python.org/3/library/json.html\n", "pip install jsonwhatever.\nYou should try it, you can put anything on it\nfrom jsonwhatever import jsonwhatever as jw\n\npyList=['[email protected]','[email protected]']\n\njsonwe = jw.JsonWhatEver()\n\nmytr = jsonwe.jsonwhatever('my_custom_list', pyList)\n\nprint(mytr)\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "arraylist", "arrays", "django", "json", "python" ]
stackoverflow_0071979765_arraylist_arrays_django_json_python.txt
Q: Create a DataFrame with data from a class I want to create a DataFrame to which I want to import data from a class. I mean, I type t1 = Transaction("20221128", "C1", 14) and I want a DataFrame to show data like: Column 1: Date Column 2: Concept Column 3: Amount The code where I want to implement this is: class Transactions: num_of_transactions = 0 amount = 0 def __init__(self, date, concept, amount): self.date = date self.concept = concept self.amount = amount Transaction.add_transaction() Transaction.add_money(self) @classmethod def number_of_transactions(cls): return cls.num_of_transactions @classmethod def add_transaction(cls): cls.num_of_transactions += 1 @classmethod def amount_of_money(cls): return cls.amount @classmethod def add_money(cls, self): cls.amount += self.amount t1 = Transaction("20221128", "C1", 14) t2 = Transaction("20221129", "C2", 30) t3 = Transaction("20221130", "3", 14) I tried: def DataFrame(self): df = pd.DataFrame(self.date self.concept, self.amount) But looking at pandas documentation, I have seen it is not a valid way. Any help on that? Thank you! A: In order to create a new data frame, you have to provide the rows and the columns name. You have to change the code as the following: def DataFrame(self): df = pd.DataFrame(data=[[self.date, self.concept, self.amount]], columns=['Date','Concept','Amount']) A: You can create a DataFrame from a list of Transaction objects by first creating a list of dictionaries, where each dictionary represents a row in the DataFrame and has keys that correspond to the columns. Here's one way to do it: import pandas as pd # Create a list of Transaction objects transactions = [t1, t2, t3] # Create a list of dictionaries, where each dictionary represents a row in the DataFrame data = [] for t in transactions: row = {"Date": t.date, "Concept": t.concept, "Amount": t.amount} data.append(row) # Create a DataFrame from the list of dictionaries, specifying the columns in the desired order df = pd.DataFrame(data, columns=["Date", "Concept", "Amount"]) # Print the DataFrame print(df) This should produce a DataFrame that looks like this: | | Date | Concept | Amount | |---:|:---------|:----------|:---------| | 0 | 20221128 | C1 | 14 | | 1 | 20221129 | C2 | 30 | | 2 | 20221130 | 3 | 14 | The above code assumes that the Transaction class is defined as you have shown in your question, with the __init__ method and the class variables and methods that you have included. Note that I have replaced Transaction with Transactions in the class definition to match the name of the class, and I have also changed the self parameter of the add_money method to transaction, to avoid confusion with the self parameter of the instance methods. The DataFrame function is not part of the class definition, but is defined as a separate function that takes a list of Transaction objects as its argument. You can also add a class method to the Transactions class that returns a DataFrame representing all the instances of the class. To do this, you can add a class variable transactions_list that keeps track of all the instances of the class, and a class method to_dataframe that converts transactions_list to a DataFrame. 
Here's one way to implement it:
import pandas as pd

class Transactions:

    num_of_transactions = 0
    amount = 0
    transactions_list = [] # Class variable to store all instances of the class

    def __init__(self, date, concept, amount):
        self.date = date
        self.concept = concept
        self.amount = amount
        # Add the instance to the transactions_list
        self.transactions_list.append(self)
        Transactions.add_transaction()
        Transactions.add_money(self)

    @classmethod
    def number_of_transactions(cls):
        return cls.num_of_transactions

    @classmethod
    def add_transaction(cls):
        cls.num_of_transactions += 1

    @classmethod
    def amount_of_money(cls):
        return cls.amount

    @classmethod
    def add_money(cls, transaction):
        cls.amount += transaction.amount

    @classmethod
    def to_dataframe(cls):
        # Create a list of dictionaries representing each transaction
        transactions_list = [{'Date': t.date, 'Concept': t.concept, 'Amount': t.amount} for t in cls.transactions_list]

        # Create a DataFrame from the list of dictionaries
        df = pd.DataFrame(transactions_list)

        return df

# Create some transactions
t1 = Transactions("20221128", "C1", 14)
t2 = Transactions("20221129", "C2", 30)
t3 = Transactions("20221130", "3", 14)

You can then call the class method to_dataframe to get a DataFrame representing all the transactions:
df = Transactions.to_dataframe()

This should create a DataFrame df with columns 'Date', 'Concept', and 'Amount' and rows corresponding to each transaction.
A: for the example you provided we can do some modifications in the class so we could get a dataframe easily:
class Transaction:

    num_of_transactions = 0
    amount = 0
    transactions = [] # <----- class attribute added

    def __init__(self, date, concept, amount):
        self.date = date
        self.concept = concept
        self.amount = amount
        Transaction.add_transaction()
        Transaction.add_money(self)
        Transaction.transactions.append(self) # <----- append added

now we can get a dataframe like this:
pd.DataFrame([t.__dict__ for t in Transaction.transactions])

>>>
'''
 date concept amount
0 20221128 C1 14
1 20221129 C2 30
2 20221130 3 14
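One more option (my addition, not from the answers above): if you'd rather not keep a registry on the class at all, pandas can build the frame straight from whatever instances you already have on hand; vars(t) is equivalent to t.__dict__:
import pandas as pd

df = pd.DataFrame([vars(t) for t in (t1, t2, t3)])  # one row per instance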
Create a DataFrame with data from a class
I want to create a DataFrame to which I want to import data from a class. I mean, I type t1 = Transaction("20221128", "C1", 14) and I want a DataFrame to show data like: Column 1: Date Column 2: Concept Column 3: Amount The code where I want to implement this is: class Transactions: num_of_transactions = 0 amount = 0 def __init__(self, date, concept, amount): self.date = date self.concept = concept self.amount = amount Transaction.add_transaction() Transaction.add_money(self) @classmethod def number_of_transactions(cls): return cls.num_of_transactions @classmethod def add_transaction(cls): cls.num_of_transactions += 1 @classmethod def amount_of_money(cls): return cls.amount @classmethod def add_money(cls, self): cls.amount += self.amount t1 = Transaction("20221128", "C1", 14) t2 = Transaction("20221129", "C2", 30) t3 = Transaction("20221130", "3", 14) I tried: def DataFrame(self): df = pd.DataFrame(self.date self.concept, self.amount) But looking at pandas documentation, I have seen it is not a valid way. Any help on that? Thank you!
[ "In order to create a new data frame, you have to provide the rows and the columns name.\nYou have to change the code as the following:\ndef DataFrame(self):\n df = pd.DataFrame(data=[[self.date, self.concept, self.amount]], columns=['Date','Concept','Amount'])\n\n", "You can create a DataFrame from a list of Transaction objects by first creating a list of dictionaries, where each dictionary represents a row in the DataFrame and has keys that correspond to the columns. Here's one way to do it:\nimport pandas as pd\n\n# Create a list of Transaction objects\ntransactions = [t1, t2, t3]\n\n# Create a list of dictionaries, where each dictionary represents a row in the DataFrame\ndata = []\nfor t in transactions:\n row = {\"Date\": t.date, \"Concept\": t.concept, \"Amount\": t.amount}\n data.append(row)\n\n# Create a DataFrame from the list of dictionaries, specifying the columns in the desired order\ndf = pd.DataFrame(data, columns=[\"Date\", \"Concept\", \"Amount\"])\n\n# Print the DataFrame\nprint(df)\n\nThis should produce a DataFrame that looks like this:\n| | Date | Concept | Amount |\n|---:|:---------|:----------|:---------|\n| 0 | 20221128 | C1 | 14 |\n| 1 | 20221129 | C2 | 30 |\n| 2 | 20221130 | 3 | 14 |\n\nThe above code assumes that the Transaction class is defined as you have shown in your question, with the __init__ method and the class variables and methods that you have included. Note that I have replaced Transaction with Transactions in the class definition to match the name of the class, and I have also changed the self parameter of the add_money method to transaction, to avoid confusion with the self parameter of the instance methods. The DataFrame function is not part of the class definition, but is defined as a separate function that takes a list of Transaction objects as its argument.\nYou can also add a class method to the Transactions class that returns a DataFrame representing all the instances of the class. 
To do this, you can add a class variable transactions_list that keeps track of all the instances of the class, and a class method to_dataframe that converts transactions_list to a DataFrame.\nHere's one way to implement it:\nimport pandas as pd\n\nclass Transactions:\n\n    num_of_transactions = 0\n    amount = 0\n    transactions_list = [] # Class variable to store all instances of the class\n\n    def __init__(self, date, concept, amount):\n        self.date = date\n        self.concept = concept\n        self.amount = amount\n        # Add the instance to the transactions_list\n        self.transactions_list.append(self)\n        Transactions.add_transaction()\n        Transactions.add_money(self)\n\n    @classmethod\n    def number_of_transactions(cls):\n        return cls.num_of_transactions\n\n    @classmethod\n    def add_transaction(cls):\n        cls.num_of_transactions += 1\n\n    @classmethod\n    def amount_of_money(cls):\n        return cls.amount\n\n    @classmethod\n    def add_money(cls, transaction):\n        cls.amount += transaction.amount\n\n    @classmethod\n    def to_dataframe(cls):\n        # Create a list of dictionaries representing each transaction\n        transactions_list = [{'Date': t.date, 'Concept': t.concept, 'Amount': t.amount} for t in cls.transactions_list]\n\n        # Create a DataFrame from the list of dictionaries\n        df = pd.DataFrame(transactions_list)\n\n        return df\n\n# Create some transactions\nt1 = Transactions(\"20221128\", \"C1\", 14)\nt2 = Transactions(\"20221129\", \"C2\", 30)\nt3 = Transactions(\"20221130\", \"3\", 14)\n\nYou can then call the class method to_dataframe to get a DataFrame representing all the transactions:\ndf = Transactions.to_dataframe()\n\nThis should create a DataFrame df with columns 'Date', 'Concept', and 'Amount' and rows corresponding to each transaction.\n", "for the example you provided we can do some modifications in the class so we could get a dataframe easily:\nclass Transaction:\n\n    num_of_transactions = 0\n    amount = 0\n    transactions = [] # <----- class attribute added\n\n    def __init__(self, date, concept, amount):\n        self.date = date\n        self.concept = concept\n        self.amount = amount\n        Transaction.add_transaction()\n        Transaction.add_money(self)\n        Transaction.transactions.append(self) # <----- append added\n\nnow we can get a dataframe like this:\npd.DataFrame([t.__dict__ for t in Transaction.transactions])\n\n>>>\n'''\n date concept amount\n0 20221128 C1 14\n1 20221129 C2 30\n2 20221130 3 14\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074659144_dataframe_pandas_python_python_3.x.txt
Q: how to connect an s3 bucket w/ airflow I have an airflow task where I try and load a file into an s3 bucket. I have airflow running on an EC2 instance. I'm running AF version 2.4.3
I have done
pip install 'apache-airflow[amazon]'

I start up my AF server, log in and go to the Admin section to add a connection.
I open a new connection and I don't have an option for s3. My only Amazon options are:
Amazon Elastic MapReduce
Amazon Redshift
Amazon Web services.

what else am I missing?
A: You need to define the AWS connection under "Amazon Web Services Connection";
for more details see here
A: You should define the connection within your DAG.
You should also use a secure settings.ini file to save your secrets, and then call those variables from your DAG.
See this answer for a complete guide: Airflow s3 connection using UI
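If you prefer not to click through the UI, the connection can also be supplied outside it and used from the Amazon provider's S3 hook. A sketch of my own; the connection id, file path, and bucket name are placeholders:
# The connection can come from the UI, the CLI, or an environment variable
# such as AIRFLOW_CONN_AWS_DEFAULT="aws://<access_key>:<secret_key>@".
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

hook = S3Hook(aws_conn_id="aws_default")
hook.load_file(filename="/tmp/report.csv", key="report.csv",
               bucket_name="my-bucket", replace=True)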
how to connect an s3 bucket w/ airflow
I have an airflow task where I try and load a file into an s3 bucket. I have airflow running on an EC2 instance. I'm running AF version 2.4.3 I have done pip install 'apache-airflow[amazon]' I start up my AF server, log in and go to the Admin section to add a connection. I open a new connection and I don't have an option for s3. My only Amazon options are: Amazon Elastic MapReduce Amazon Redshift Amazon Web services. what else am I missing?
[ "You need to define aws connection under \"Amazon Web Services Connection\"\nfor more details see here\n", "You should define the connection within your DAG.\nYou should also use a secure settings.ini file to save your secrets, and then call those variables from your DAG.\nSee this answer for a complete guide: Airflow s3 connection using UI\n" ]
[ 1, 0 ]
[]
[]
[ "airflow", "amazon_s3", "amazon_web_services", "python" ]
stackoverflow_0074631434_airflow_amazon_s3_amazon_web_services_python.txt
Q: How can I define some initial values that are variable in formula How can I define some initial values (that are variable) in formulas I should write code to predict and also optimize prediction of data series But it has many formulas as a gray box and in these formulas, I should define some initial values (as variables)
A: Default Arguments:
def student(firstname, lastname ='Mark', standard ='Fifth'):

    print(firstname, lastname, 'studies in', standard, 'Standard')

We need to keep the following points in mind while calling functions:
In the case of passing the keyword arguments, the order of arguments is not important.
There should be only one value for one parameter.
The passed keyword name should match with the actual keyword name.
In the case of calling a function containing non-keyword arguments, the order is important.
Example #1: Calling functions without keyword arguments
def student(firstname, lastname ='Mark', standard ='Fifth'):
    print(firstname, lastname, 'studies in', standard, 'Standard')

# 1 positional argument
student('John')

# 3 positional arguments 
student('John', 'Gates', 'Seventh') 

# 2 positional arguments
student('John', 'Gates') 
student('John', 'Seventh')

Output:
John Mark studies in Fifth Standard
John Gates studies in Seventh Standard
John Gates studies in Fifth Standard
John Seventh studies in Fifth Standard

Example #2: Calling functions with keyword arguments
def student(firstname, lastname ='Mark', standard ='Fifth'):
    print(firstname, lastname, 'studies in', standard, 'Standard')

# 1 keyword argument
student(firstname ='John') 

# 2 keyword arguments 
student(firstname ='John', standard ='Seventh')

# 2 keyword arguments
student(lastname ='Gates', firstname ='John') 

Output:
John Mark studies in Fifth Standard
John Mark studies in Seventh Standard
John Gates studies in Fifth Standard

Example #3: Some Invalid function calls
def student(firstname, lastname ='Mark', standard ='Fifth'):
    print(firstname, lastname, 'studies in', standard, 'Standard')

# required argument missing
student() 

# non keyword argument after a keyword argument 
student(firstname ='John', 'Seventh')

# unknown keyword argument
student(subject ='Maths') 

The above code will throw an error because:
In the first call, value is not passed for parameter firstname which is the required parameter.
In the second call, there is a non-keyword argument after a keyword argument.
In the third call, the passing keyword argument is not matched with the actual keyword name arguments.
Example #4: Mutable default argument values, using a Python dictionary
Here itemName is the name of the item and quantity is the number of such items to add.
def addItemToDictionary(itemName, quantity, itemList = {}):
    itemList[itemName] = quantity
    return itemList


print(addItemToDictionary('notebook', 4))
print(addItemToDictionary('pencil', 1))
print(addItemToDictionary('eraser', 1))

Output
{'notebook': 4}
{'notebook': 4, 'pencil': 1}
{'notebook': 4, 'pencil': 1, 'eraser': 1}

Note how the dictionary keeps the entries from earlier calls: a default value is evaluated only once, when the function is defined, so every call that omits itemList shares the same dict object (see the sketch below for the usual way to avoid this).
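A common fix for that shared-default behavior, sketched with the same function:
def addItemToDictionary(itemName, quantity, itemList=None):
    if itemList is None:   # build a fresh dict for each call
        itemList = {}
    itemList[itemName] = quantity
    return itemList


print(addItemToDictionary('notebook', 4))  # {'notebook': 4}
print(addItemToDictionary('pencil', 1))    # {'pencil': 1} -- no carry-over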
How can I define some initial values that are variable in formula
How can I define some initial values (that are variable) in formulas I should write code to predict and also optimize prediction of data series But it has many formulas as a gray box and in these formulas, I should define some initial values (as variables)
[ "Default Arguments:\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\nWe need to keep the following points in mind while calling functions:\nIn the case of passing the keyword arguments, the order of arguments is important.\nThere should be only one value for one parameter.\nThe passed keyword name should match with the actual keyword name.\nIn the case of calling a function containing non-keyword arguments, the order is important.\nExample #1: Calling functions without keyword arguments\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\n# 1 positional argument\nstudent('John')\n\n# 3 positional arguments \nstudent('John', 'Gates', 'Seventh') \n\n# 2 positional arguments\nstudent('John', 'Gates') \nstudent('John', 'Seventh')\n\nOutput:\nJohn Mark studies in Fifth Standard\nJohn Gates studies in Seventh Standard\nJohn Gates studies in Fifth Standard\nJohn Seventh studies in Fifth Standard\n\nExample #2: Calling functions with keyword arguments\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\n# 1 keyword argument\nstudent(firstname ='John') \n\n# 2 keyword arguments \nstudent(firstname ='John', standard ='Seventh')\n\n# 2 keyword arguments\nstudent(lastname ='Gates', firstname ='John') \n \n\nOutput:\nJohn Mark studies in Fifth Standard\nJohn Mark studies in Seventh Standard\nJohn Gates studies in Fifth Standard\n \n\nExample #3: Some Invalid function calls\ndef student(firstname, lastname ='Mark', standard ='Fifth'):\n print(firstname, lastname, 'studies in', standard, 'Standard')\n\n# required argument missing\nstudent() \n\n# non keyword argument after a keyword argument \nstudent(firstname ='John', 'Seventh')\n\n# unknown keyword argument\nstudent(subject ='Maths') \n\nThe above code will throw an error because:\nIn the first call, value is not passed for parameter firstname which is the required parameter.\nIn the second call, there is a non-keyword argument after a keyword argument.\nIn the third call, the passing keyword argument is not matched with the actual keyword name arguments.\nExample using dictionary\nmutable default argument values example using python dictionary\nitemName is the name of item and quantity is the number of such\nitems are there\ndef addItemToDictionary(itemName, quantity, itemList = {}):\n itemList[itemName] = quantity\n return itemList\n\n\nprint(addItemToDictionary('notebook', 4))\nprint(addItemToDictionary('pencil', 1))\nprint(addItemToDictionary('eraser', 1))\n\nOutput\n{'notebook': 4}\n{'notebook': 4, 'pencil': 1}\n{'notebook': 4, 'pencil': 1, 'eraser': 1}\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074659988_python.txt
Q: Ignore first element with XPATH I need to get only text from the following structure, however, ignoring the first element, which would be the <span>SIGNIFICADO: </span> tag
<p class="p1">
    <span>SIGNIFICADO: </span>
    <strong>
        <a href="www.site.com">Text Link</a>
    </strong>
    Some text
    Some text
    Some text
</p>

Currently I do it like this:
p1=driver.find_element(By.XPATH,'//p[@class="p1"]').text

And if I put this xpath:
//p[@class="p1"]/text()

Text that is inside the <a> tag is ignored. How can I get all the text except the first one that is inside <span> ??
A: To omit text of and before the first element, you can use
//p[@class="p1"]//text()[preceding::*]

This selects all text nodes that have at least one preceding element (here: the <span>). A disadvantage is that this also discards any text between the <p> element and the <span> element.
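One caveat worth adding (my note, not the answerer's): Selenium's find_element(s) can only return element nodes, so an XPath ending in text() cannot be passed to it directly. You can evaluate it over the rendered HTML instead; a sketch assuming lxml is installed:
from lxml import html

tree = html.fromstring(driver.page_source)
parts = tree.xpath('//p[@class="p1"]//text()[preceding::*]')  # returns strings
print(' '.join(t.strip() for t in parts if t.strip()))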
Ignore first element with XPATH
I need to get only the text from the following structure, ignoring the first element, which is the <span>SIGNIFICADO: </span> tag: <p class="p1"> <span>SIGNIFICADO: </span> <strong> <a href="www.site.com">Text Link</a> </strong> Some text Some text Some text </p> Currently I do it like this: p1=driver.find_element(By.XPATH,'//p[@class="p1"]').text And if I use this XPath instead: //p[@class="p1"]/text() the text that is inside the <a> tag is ignored. How can I get all the text except the first part, the one inside <span>?
[ "To omit text of and before the first element, you can use\n//p[@class=\"p1\"]//text()[preceding::*]\n\nThis selects all text nodes that have at least one preceding element(here: <span>. Disadvantage is that this also discards text between the <p> element and the <span> element.\n" ]
[ 0 ]
[]
[]
[ "python", "xpath" ]
stackoverflow_0074656703_python_xpath.txt
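A minimal sketch of applying the answer's expression from Python. It uses lxml rather than Selenium, since Selenium returns only elements while //text() selects bare text nodes; the HTML snippet is the one from the question:

from lxml import html

snippet = '<p class="p1"><span>SIGNIFICADO: </span><strong><a href="www.site.com">Text Link</a></strong> Some text Some text Some text</p>'
tree = html.fromstring(snippet)

# lxml returns the selected text() nodes as plain strings
parts = tree.xpath('//p[@class="p1"]//text()[preceding::*]')
print(' '.join(t.strip() for t in parts if t.strip()))
# Text Link Some text Some text Some text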
Q: Convert a dictionary into a list by enumerating? I have a list created from a csv file that is a list of dictionaries. I need to convert the list to a dictionary. I have a working solution below, but the problem is that it labels each dictionary row sequentially, for example name0 and name1. I have searched around online and it seems that enumerate assigns an integer to each row, hence the naming sequence, but for this program to work I would need to convert the data while leaving the name of each row blank. data = [] check_correct_args() try: with open(sys.argv[1]) as csvfile: #with open('Address_Book.csv', "r") as csvfile: reader = csv.DictReader(csvfile) # reader = csv.reader(csvfile) for row in reader: data.append(row) #return data except FileNotFoundError: sys.exit("Couldn't read csv file") dictionary = {f'name{i}':v for i, v in enumerate(data)} I'm looking for something along the lines of (in pseudocode): dictionary = {f'':v for i, v in enumerate(data)} A: To convert a dictionary into a list by enumerating in Python, you can use the items() method to get the key-value pairs in the dictionary, and then use a for loop to iterate over the pairs and create a new list. Here is an example: # Define a dictionary my_dict = {'apple': 1, 'banana': 2, 'cherry': 3} # Create an empty list my_list = [] # Iterate over the key-value pairs in the dictionary for key, value in my_dict.items(): # Create a tuple with the key and value, and append it to the list my_list.append((key, value)) # Print the resulting list print(my_list) In this example, the dictionary my_dict contains three key-value pairs, and the for loop iterates over these pairs and creates a list of tuples containing the keys and values. The output of this code would be: [('apple', 1), ('banana', 2), ('cherry', 3)] You can also use a list comprehension to convert a dictionary into a list of tuples, like this: # Define a dictionary my_dict = {'apple': 1, 'banana': 2, 'cherry': 3} # Use a list comprehension to create a list of tuples my_list = [(key, value) for key, value in my_dict.items()] # Print the resulting list print(my_list) This code does the same thing as the previous example, but it uses a list comprehension to create the list of tuples in a more concise way. The output would be the same as above.
Convert a dictionary into a list by enumerating?
I have a list created from a csv file that is a list of dictionaries. I need to convert the list to a dictionary. I have a working solution below, but the problem is that it labels each dictionary row sequentially, for example name0 and name1. I have searched around online and it seems that enumerate assigns an integer to each row, hence the naming sequence, but for this program to work I would need to convert the data while leaving the name of each row blank. data = [] check_correct_args() try: with open(sys.argv[1]) as csvfile: #with open('Address_Book.csv', "r") as csvfile: reader = csv.DictReader(csvfile) # reader = csv.reader(csvfile) for row in reader: data.append(row) #return data except FileNotFoundError: sys.exit("Couldn't read csv file") dictionary = {f'name{i}':v for i, v in enumerate(data)} I'm looking for something along the lines of (in pseudocode): dictionary = {f'':v for i, v in enumerate(data)}
[ "To convert a dictionary into a list by enumerating in Python, you can use the items() method to get a list of the key-value pairs in the dictionary, and then use a for loop to enumerate over the pairs and create a new list.\nHere is an example:\n# Define a dictionary\nmy_dict = {'apple': 1, 'banana': 2, 'cherry': 3}\n\n# Create an empty list\nmy_list = []\n\n# Enumerate over the key-value pairs in the dictionary\nfor key, value in my_dict.items():\n # Create a tuple with the key and value, and append it to the list\n my_list.append((key, value))\n\n# Print the resulting list\nprint(my_list)\n\n\nIn this example, the dictionary my_dict contains three key-value pairs, and the for loop enumerates over these pairs and creates a list of tuples containing the keys and values. The output of this code would be:\n[('apple', 1), ('banana', 2), ('cherry', 3)]\n\n\nYou can also use a list comprehension to convert a dictionary into a list of tuples, like this:\n# Define a dictionary\nmy_dict = {'apple': 1, 'banana': 2, 'cherry': 3}\n\n# Use a list comprehension to create a list of tuples\nmy_list = [(key, value) for key, value in my_dict.items()]\n\n# Print the resulting list\nprint(my_list)\n\n\nThis code does the same thing as the previous example, but it uses a list comprehension to create the list of tuples in a more concise way. The output would be the same as above.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074659808_python_python_3.x.txt
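One point about the question that the answer above does not cover: dictionary keys must be unique, so the rows from csv.DictReader cannot all be stored under a blank name. A minimal sketch with made-up row data:

data = [{'name': 'Ann', 'city': 'Oslo'},
        {'name': 'Bo', 'city': 'Lima'}]  # shape of what csv.DictReader yields

# Writing every row under the key '' keeps only the last row
collapsed = {'': row for row in data}
print(collapsed)  # {'': {'name': 'Bo', 'city': 'Lima'}}

# dict(enumerate(data)) keeps every row without inventing name0, name1, ...
print(dict(enumerate(data)))
# {0: {'name': 'Ann', 'city': 'Oslo'}, 1: {'name': 'Bo', 'city': 'Lima'}}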
Q: How do I crop a python array to maximum size with only non-zero values (largest non-zero rectangle) I have a numpy array of pixel data, something like 0 0 0 0 0 0 0 0 1 3 4 6 1 0 0 2 3 5 2 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 I would like to get a new array which excludes any outer rows/columns with zeroes, so I just end up with only the non-zero values (that works for any given array) i.e. 1 3 4 6 1 2 3 5 2 1 So far all I've managed to get is 1 3 4 6 1 2 3 5 2 1 1 0 0 1 0 using np.argwhere to find the "min" and "max" non-zero values, but this still includes rows/columns with zero and non-zero values in. My actual array: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0 0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0 0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0 0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0 0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0 0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0 0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0 0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0 0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0 0 0 1891 1852 1926 1803 1863 1814 1849 1857 
1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0 0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0 0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0 0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0 0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0 0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0 0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0 0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0 0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0 0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0 0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0 0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0 0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 
1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0 0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0 0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0 0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0 0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0 0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0 0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0 0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0 0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0 0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0 0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0 0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0 0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 
1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0 0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0 0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0 0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0 0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0 0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0 0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0 0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0 0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 A: Welcome to StackOverflow! Input: [[ 0 0 0 ... 0 0 0] [ 0 0 0 ... 0 0 0] [ 0 0 1872 ... 1765 0 0] ... [ 0 0 1850 ... 1800 0 0] [ 0 0 0 ... 0 0 0] [ 0 0 0 ... 
0 0 0]] Input array.npy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0 0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0 0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0 0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0 0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0 0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0 0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0 0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0 0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0 0 0 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0 0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0 0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 
1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0 0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0 0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0 0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0 0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0 0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0 0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0 0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0 0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0 0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0 0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0 0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0 0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 
1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0 0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0 0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0 0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0 0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0 0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0 0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0 0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0 0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0 0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0 0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0 0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0 0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 
1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0 0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0 0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0 0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0 0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0 0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0 0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Solution 1: np_input = np.load('array.npy') # Remove all zeros from column np_input = np_input[:, (np_input != 0).any(axis=0)] # Remove all zeros from row np_input = np_input[(np_input != 0).any(axis=1)] # converting to list of lists np_input = np_input.tolist() # Remove sub list that contains a zero np_input = [x for x in np_input if 0 not in x] # Convert pixles_input to numpy array final_np = np.array(np_input) print(final_np) Solution 2: np_input = np.load('array.npy') final_np = np.array([x for x in np_input[:, (np_input != 0).any(axis=0)][(np_input != 0).any(axis=1)].tolist() if 0 not in x]) print(final_np) Output: [[1872 1803 1731 ... 1709 1774 1765] [1937 1746 1790 ... 1685 1814 1756] [1754 1895 1806 ... 1817 1885 1792] ... [1861 1895 1819 ... 1861 1867 1844] [1822 1867 1806 ... 1786 1919 1887] [1850 1926 1855 ... 
1861 1761 1800]] Output array.npy 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 
1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 
1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 
1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800
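The three mask steps in the answer embedded above can also be written as a single numpy-only function. A minimal sketch that vectorizes the final list-comprehension step, demonstrated on the small example matrix from the question:

import numpy as np

def crop_nonzero(arr):
    arr = arr[:, arr.any(axis=0)]        # keep columns with at least one non-zero
    arr = arr[arr.any(axis=1)]           # keep rows with at least one non-zero
    return arr[(arr != 0).all(axis=1)]   # drop rows that still contain a zero

small = np.array([[0, 0, 0, 0, 0, 0, 0],
                  [0, 1, 3, 4, 6, 1, 0],
                  [0, 2, 3, 5, 2, 1, 0],
                  [0, 1, 0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0, 0, 0]])
print(crop_nonzero(small))
# [[1 3 4 6 1]
#  [2 3 5 2 1]]

Like Solution 1 above, this discards whole rows that still contain a zero, so it reproduces the answer's output rather than computing the strict largest non-zero rectangle.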
How do I crop a python array to maximum size with only non-zero values (largest non-zero rectangle)
I have a numpy array of pixel data, something like 0 0 0 0 0 0 0 0 1 3 4 6 1 0 0 2 3 5 2 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 I would like to get a new array which excludes any outer rows/columns with zeroes, so I just end up with only the non-zero values (that works for any given array) i.e. 1 3 4 6 1 2 3 5 2 1 So far all I've managed to get is 1 3 4 6 1 2 3 5 2 1 1 0 0 1 0 using np.argwhere to find the "min" and "max" non-zero values, but this still includes rows/columns with zero and non-zero values in. My actual array: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0 0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0 0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0 0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0 0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0 0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0 0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0 0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0 0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0 0 0 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 
1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0 0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0 0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0 0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0 0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0 0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0 0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0 0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0 0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0 0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0 0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0 0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0 0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 
1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0 0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0 0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0 0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0 0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0 0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0 0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0 0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0 0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0 0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0 0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0 0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0 0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 
1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0 0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0 0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0 0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0 0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0 0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0 0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0 0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0 0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[ "Welcome to StackOverflow!\nInput:\n[[ 0 0 0 ... 0 0 0]\n [ 0 0 0 ... 0 0 0]\n [ 0 0 1872 ... 1765 0 0]\n ...\n [ 0 0 1850 ... 1800 0 0]\n [ 0 0 0 ... 0 0 0]\n [ 0 0 0 ... 0 0 0]]\n\nInput array.npy\n0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n0 0 1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765 0 0\n0 0 1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756 0 0\n0 0 1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792 0 0\n0 0 1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817 0 0\n0 0 1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754 0 0\n0 0 1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864 0 0\n0 0 1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824 0 0\n0 0 1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876 0 0\n0 0 1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754 0 0\n0 0 1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928 0 0\n0 0 1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 
1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798 0 0\n0 0 1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875 0 0\n0 0 1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825 0 0\n0 0 1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813 0 0\n0 0 1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857 0 0\n0 0 1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857 0 0\n0 0 1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818 0 0\n0 0 1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898 0 0\n0 0 1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822 0 0\n0 0 1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834 0 0\n0 0 1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819 0 0\n0 0 1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907 0 0\n0 0 1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 
1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861 0 0\n0 0 1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881 0 0\n0 0 1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873 0 0\n0 0 1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842 0 0\n0 0 1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888 0 0\n0 0 1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873 0 0\n0 0 1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796 0 0\n0 0 1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755 0 0\n0 0 1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816 0 0\n0 0 1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855 0 0\n0 0 1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787 0 0\n0 0 1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842 0 0\n0 0 1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 
1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738 0 0\n0 0 1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829 0 0\n0 0 1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776 0 0\n0 0 1819 1896 1911 1936 1887 1847 1874 1894 1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721 0 0\n0 0 1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844 0 0\n0 0 1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887 0 0\n0 0 1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800 0 0\n0 0 0 1782 0 0 0 0 1879 0 0 0 0 1884 0 0 0 0 0 0 0 1893 0 1932 1909 1938 0 0 0 0 0 1928 0 0 1816 0 0 1921 1887 0 0 0 0 1876 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n0 0 0 1907 0 0 0 0 1944 0 0 0 0 1954 0 0 0 0 0 0 0 1930 0 1875 1882 1912 0 0 0 0 0 1890 0 0 1875 0 0 1873 1872 0 0 0 0 1897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n\n\nSolution 1:\nnp_input = np.load('array.npy')\n\n# Remove all zeros from column\nnp_input = np_input[:, (np_input != 0).any(axis=0)]\n\n# Remove all zeros from row\nnp_input = np_input[(np_input != 0).any(axis=1)]\n\n# converting to list of lists\nnp_input = np_input.tolist()\n\n# Remove sub list that contains a zero\nnp_input = [x for x in np_input if 0 not in x]\n\n# Convert pixles_input to numpy array\nfinal_np = np.array(np_input)\n\nprint(final_np)\n\nSolution 2:\nnp_input = np.load('array.npy')\nfinal_np = np.array([x for x in np_input[:, (np_input != 0).any(axis=0)][(np_input != 0).any(axis=1)].tolist() if 0 not in x])\nprint(final_np)\n\n\nOutput:\n[[1872 1803 1731 ... 1709 1774 1765]\n [1937 1746 1790 ... 1685 1814 1756]\n [1754 1895 1806 ... 1817 1885 1792]\n ...\n [1861 1895 1819 ... 1861 1867 1844]\n [1822 1867 1806 ... 1786 1919 1887]\n [1850 1926 1855 ... 
1861 1761 1800]]\n\nOutput array.npy\n1872 1803 1731 1766 1816 1843 1706 1768 1815 1741 1846 1857 1731 1745 1842 1720 1769 1853 1764 1776 1816 1773 1793 1767 1830 1791 1835 1823 1762 1832 1763 1762 1779 1901 1872 1819 1862 1802 1726 1788 1847 1785 1796 1773 1800 1742 1873 1830 1869 1832 1809 1861 1702 1808 1709 1774 1765\n1937 1746 1790 1750 1862 1898 1770 1727 1868 1895 1761 1800 1814 1826 1836 1774 1847 1868 1837 1746 1809 1869 1818 1760 1940 1844 1845 1833 1815 1872 1773 1816 1769 1860 1841 1856 1857 1779 1779 1822 1781 1778 1858 1727 1816 1835 1835 1864 1793 1781 1908 1820 1803 1838 1685 1814 1756\n1754 1895 1806 1818 1829 1733 1865 1903 1764 1850 1847 1913 1856 1757 1782 1826 1818 1875 1843 1777 1716 1825 1761 1842 1843 1925 1791 1879 1887 1873 1789 1769 1805 1915 1825 1829 1817 1840 1882 1762 1840 1878 1830 1862 1789 1884 1798 1802 1847 1875 1825 1773 1803 1850 1817 1885 1792\n1773 1830 1797 1878 1758 1897 1813 1836 1835 1960 1841 1807 1788 1799 1839 1834 1792 1855 1785 1912 1824 1845 1831 1902 1879 1869 1793 1901 1801 1881 1871 1786 1851 1879 1822 1829 1951 1873 1778 1769 1941 1805 1826 1892 1869 1783 1895 1799 1800 1973 1829 1869 1903 1858 1806 1837 1817\n1828 1858 1793 1833 1894 1832 1763 1892 1786 1893 1883 1846 1828 1821 1875 1864 1778 1863 1832 1801 1798 1871 1753 1899 1892 1901 1907 1877 1756 1865 1899 1874 1841 1775 1838 1817 1864 1798 1843 1803 1853 1878 1831 1855 1803 1816 1885 1818 1882 1859 1790 1892 1826 1906 1842 1831 1754\n1811 1831 1837 1828 1792 1768 1818 1797 1766 1924 1849 1921 1881 1795 1883 1954 1811 1804 2006 1849 1841 1808 1867 1918 1755 1765 1881 1852 1930 1848 1807 1876 1776 1790 1849 1855 1942 1871 1908 1822 1810 1794 1889 1780 1857 1879 1845 1858 1901 1839 1744 1743 1811 1853 1841 1854 1864\n1880 1888 1874 1878 1888 1868 1852 1887 1875 1874 1892 1828 1842 1822 1789 1870 1829 1841 1864 1859 1846 1776 1799 1875 1875 1811 1873 1837 1921 1917 1777 1840 1872 1816 1878 1890 1821 1925 1810 1945 1884 1845 1859 1843 1806 1894 1886 1886 1885 1931 1761 1819 1889 1765 1891 1896 1824\n1856 1827 1826 1882 1786 1852 1820 1880 1912 1795 1854 1868 1899 1855 1886 1894 1891 1907 1907 1713 1800 1922 1831 1814 1894 1851 1927 1879 1881 1884 1932 1904 1807 1839 1851 1885 1889 1913 1878 1754 1930 1905 1915 1825 1901 1870 1839 1867 1897 1862 1843 1836 1774 1764 1838 1829 1876\n1858 1840 1897 1884 1861 1910 1860 1879 1882 1860 1831 1828 1846 1820 1889 1830 1852 1880 1842 1917 1872 1839 1820 1888 1871 1838 1817 1939 1905 1890 1832 1925 1780 1862 1793 1887 1836 1846 1852 1939 1922 1874 1865 1890 1864 1863 1918 1819 1861 1851 1854 1886 1898 1888 1796 1917 1754\n1891 1852 1926 1803 1863 1814 1849 1857 1870 1882 1979 1786 1880 1820 1812 1863 1922 1916 1851 1879 1827 1859 1913 1843 1852 1823 1812 1891 1932 1887 1883 1975 1769 1831 1859 1954 1780 1829 1853 1754 1832 1733 1886 1800 1808 1879 1821 1934 1897 1822 1941 1863 1818 1826 1883 1894 1928\n1829 1820 1899 1869 1864 1863 1895 1923 1839 1804 1884 1835 1859 1872 1825 1841 1817 1817 1832 1882 1878 1854 1867 1917 1843 1928 1949 1859 1929 1938 1826 1808 1823 1872 1865 1811 1908 1848 1861 1926 1799 1825 1799 1859 1957 1848 1863 1846 1806 1934 1845 1899 1827 1881 1836 1806 1798\n1794 1914 1880 1892 1849 1862 1819 1927 1873 1886 1857 1907 1840 1897 1857 1867 1925 1972 1871 1975 1854 1843 1856 1872 1875 1927 1819 1905 1948 1881 1904 1832 1863 1854 1811 1869 1797 1946 1805 1779 1824 1919 1886 1817 1845 1844 1909 1885 1900 1826 1867 1817 1833 1870 1888 1879 1875\n1930 1857 1851 1862 1907 1924 1838 1833 1858 1847 1892 1788 1902 1786 1880 1818 1896 
1938 1953 1952 1903 1723 1867 1955 1859 1869 1890 1830 1864 1837 1806 1827 1872 1868 1907 1977 1878 1895 1786 1892 1897 1872 1927 1807 1854 1865 1911 1957 1816 1833 1904 1897 1764 1895 1854 1800 1825\n1889 1837 1887 1885 1865 1863 1779 1883 1815 1807 1856 1788 1857 1842 1812 1838 1949 1887 1909 1843 1848 1901 1812 1890 1882 1873 1835 1870 1855 1846 1811 1899 1855 1826 1916 1781 1887 1882 1887 1826 1848 1855 1804 1859 1827 1802 1884 1920 1920 1876 1839 1835 1822 1868 1844 1796 1813\n1845 1883 1857 1790 1738 1915 1963 1899 1878 1890 1813 1779 1836 1832 1895 1863 1874 1899 1946 1851 1967 1816 1860 1860 1793 1852 1917 1904 1879 1911 1747 1939 1938 1849 1917 1894 1845 1895 1877 1903 1870 1868 1878 1857 1921 1858 1843 1800 1930 1820 1752 1827 1885 1927 1902 1842 1857\n1916 1898 1929 1884 1981 1866 1940 1978 1848 1903 1935 1843 1817 1944 1871 1862 1917 1876 1920 1921 1789 1881 1938 1793 1906 1912 1854 1904 1855 1901 1877 1814 1894 1907 1894 1828 1839 1980 1805 1878 1861 1808 1885 1854 1958 1863 1756 1922 1898 1808 1822 1864 1916 1855 1919 1896 1857\n1961 1800 1897 1857 1791 1823 1925 1827 1894 1911 1836 1826 1888 1854 1753 1841 1900 1859 1807 1910 1902 1908 1902 1920 1901 1951 1944 1920 1897 1889 1880 1873 1836 1886 1930 1856 1984 1935 1834 1926 1868 1932 1876 1891 1796 1814 1807 1824 1852 1888 1870 1911 1834 1845 1854 1863 1818\n1885 1947 1836 1886 1803 1982 1901 1939 1930 1876 1832 1888 1886 1855 1845 1910 1877 1836 1910 1888 1904 1905 1859 1899 1834 1879 1893 1861 1896 1931 1855 1890 1964 1939 1798 1894 1844 1913 1906 1920 1873 1807 1875 1837 1900 1904 1919 1845 1895 1844 1793 1855 1926 1786 1917 1834 1898\n1863 1856 1776 1925 1943 1875 1903 1858 1878 1865 1877 1821 1892 1914 1907 1863 1779 1879 1939 1893 1867 1846 1940 1910 1927 1920 1920 1934 1788 1851 1937 1943 1906 1853 1954 1910 1892 1857 1878 1853 1887 1876 1915 1819 1820 1933 1813 1848 1867 1866 1949 1905 1832 1876 1786 1918 1822\n1897 1880 1904 1942 1886 1894 1887 1946 1881 1855 1924 1866 1905 1846 1960 1854 1878 1979 1908 1933 1868 1920 1938 1805 1882 1879 1850 1862 1889 1872 1900 1903 1856 1862 1862 1959 1886 1856 1910 1912 1847 1939 1884 1885 1798 1885 1825 1903 1837 1900 1825 1837 1845 1807 1890 1843 1834\n1879 1896 1898 1980 1844 1889 2013 1938 1950 1877 1849 1916 1879 1871 1946 1916 1890 1945 1942 1934 1914 1821 1902 1938 1878 1906 1823 1927 1912 1948 1932 1927 1859 1819 1933 1927 1915 1789 1970 1930 1931 1831 1856 1890 1831 1852 1863 1884 1821 1842 1861 1843 1751 1872 1790 1852 1819\n1884 1974 1825 1888 1932 1843 1911 1899 1905 1845 1847 1920 1883 1934 1879 1869 1792 2024 1882 1944 1850 1913 1899 1799 1899 1927 1849 1935 1880 1874 1888 1881 1870 1829 1908 1841 1957 1892 2001 1999 1941 1959 1917 1913 1893 1849 1908 1853 1928 1868 1784 1881 1871 1844 1754 1849 1907\n1890 1898 1845 1922 1950 1938 1868 1915 1907 1858 1825 1867 1933 1921 1933 1820 1865 1851 1947 1903 1869 1871 1837 1941 1892 1833 1817 1856 1863 1884 1909 1875 1904 1943 1916 2001 1887 1858 1837 1875 1846 1824 1913 1831 1891 1901 1818 1908 1921 1864 1898 1869 1829 1733 1815 1824 1861\n1902 1934 1894 1839 1894 1869 1962 1809 1891 1865 1957 1950 1926 1861 1954 1876 1782 1883 1959 1852 1849 1891 1887 1756 1861 1905 1894 1913 1831 1828 1906 1875 1981 1887 1990 1922 1825 1995 1831 1852 1864 1922 1878 1895 1897 1819 1851 1873 1799 1901 1810 1880 1922 1875 1858 1841 1881\n1852 1867 1940 1858 1867 1888 1863 1839 1851 1885 1875 1928 1903 1913 1858 1838 1819 1818 1744 1850 1856 1884 1861 1846 1896 1891 1894 1946 1911 1888 1865 1849 1777 1893 2010 1931 1832 1901 1817 1900 1869 
1863 1825 1848 1885 1893 1875 1843 1884 1819 1950 1899 1926 1837 1819 1876 1873\n1872 1871 1884 1844 1847 1935 1859 1858 1894 1866 1930 1741 1919 1854 1855 1866 1833 1860 1875 1852 1976 1835 1811 1994 1897 1833 1891 1904 1938 1906 1802 1875 1861 1835 1939 1870 1877 1972 1949 1880 1881 1795 1792 1764 1945 1978 1875 1887 1861 1890 1832 1794 1873 1919 1797 1876 1842\n1897 1884 1845 1842 1878 1918 1835 1866 1868 1858 1908 1900 1868 1756 1841 1746 1842 1891 1852 1889 1869 1886 1802 1902 1859 1935 1978 1880 1918 1865 1779 1889 1824 1781 1902 1890 1836 1833 1908 1865 1916 1916 1902 1796 1878 1858 1825 1914 1921 1829 1848 1862 1863 1847 1847 1831 1888\n1856 1933 1882 1948 1882 2003 1938 1901 1856 1755 1834 1868 1861 1768 1863 1841 1814 1896 1859 1871 1860 1908 1912 1893 1896 1968 1863 1938 1920 1828 1952 1854 1867 1913 1764 1893 1876 1892 1901 1813 1890 1916 1915 1887 1836 1812 1798 1846 1867 1846 1866 1787 1915 1898 1911 1717 1873\n1877 1885 1868 1858 1932 1949 1835 1849 1898 1867 1911 1902 1926 1859 1818 1941 1836 1816 1940 1908 1886 1818 1899 1948 1870 1845 1887 1925 1891 1823 1885 1844 1795 1886 1879 1865 1841 1830 1902 1946 1803 1889 1893 1856 1816 1853 1813 1851 1897 1852 1827 1918 1834 1859 1738 1808 1796\n1838 1839 1997 1844 1855 1867 1953 1898 1876 1865 1882 1808 1857 1856 1850 1832 1892 1802 1858 1882 1896 1925 1840 1905 1895 1838 1865 1922 1904 1843 1958 1890 1907 1796 1858 1871 1906 1815 1888 1870 1902 1717 1868 1823 1888 1905 1821 1812 1928 1867 1787 1826 1821 1905 1839 1747 1755\n1870 1868 1899 1915 1873 1841 1938 1918 1897 1902 1846 1887 1750 1868 1841 1828 1928 1852 1876 1905 1859 1838 1931 1871 1920 1779 1836 1897 1863 1937 1895 1934 1940 1872 1890 1893 1852 1874 1860 1857 1874 1903 1826 1873 1877 1833 1922 1847 1832 1874 1914 1829 1846 1863 1829 1913 1816\n1887 1888 1924 1880 1818 1878 1842 1908 1947 1914 1848 1867 1868 1891 1874 1872 1900 1828 1905 1865 1925 1965 1868 1893 1864 1869 1868 1867 1863 1946 1822 1883 1863 1817 1948 1846 1843 1826 1832 1793 1825 1802 2014 1967 1832 1895 1848 1833 1914 1817 1898 1798 1910 1865 1862 1856 1855\n1914 1862 1828 1924 1897 1984 1931 1925 1896 1895 1908 1933 1889 1813 1836 1921 1855 1841 1935 1917 1897 1890 1880 1904 1851 1937 1936 1920 1856 1798 1810 1819 1871 1855 1905 1832 1941 1844 1827 1855 1901 1846 1826 1762 1870 1899 1873 1853 1902 1839 1884 1841 1838 1816 1846 1860 1787\n1869 1874 1867 1894 1865 1951 1865 1887 1857 1900 1839 1874 1877 1876 1845 1897 1881 1952 1832 1855 1855 1949 1889 1942 1844 1881 1937 1892 1779 1841 1893 1902 1814 1791 1858 1870 1874 1856 1814 1744 1799 1831 1839 1717 1878 1815 1846 1864 1832 1927 1808 1859 1818 1848 1828 1803 1842\n1871 1884 1842 1834 1873 1884 1950 1911 1992 1847 1847 1834 1849 1809 1822 1927 1925 1835 1857 1891 1848 1833 1843 1939 1858 1871 1975 1816 1874 1915 1835 1918 1906 1902 1849 1863 1909 1798 1842 1910 1791 1843 1781 1832 1898 1889 1884 1853 1883 1855 1975 1767 1826 1761 1879 1814 1738\n1886 1909 1873 1850 1908 1894 1907 1872 1837 1773 1847 1926 1884 1882 1831 1832 1942 1897 1844 1950 1886 1978 1947 1815 1843 1785 1886 1914 1911 1883 1824 1873 1934 1943 1831 1906 1813 1820 1831 1870 1824 1875 1866 1913 1800 1818 1930 1860 1808 1884 1834 1921 1717 1812 1816 1947 1829\n1860 1893 1883 1843 1923 1853 1834 1858 1922 1944 1942 1839 1813 1852 1889 1945 1902 1977 1929 1881 1850 1967 1844 1877 1970 1850 1941 1897 1814 1894 1841 1837 1821 1866 1777 1805 1851 1889 1838 1843 1853 1776 1907 1909 1846 1781 1775 1876 1941 1851 1849 1854 1813 1885 1912 1887 1776\n1819 1896 1911 1936 1887 1847 1874 1894 
1855 1869 1843 1864 1921 1883 1875 1926 1866 1923 1886 1889 1844 1896 2002 1944 1909 1858 1927 1870 1882 1886 1899 1894 1809 1904 1786 1920 1908 1888 1901 1859 1857 1793 1880 1828 1809 1839 1905 1893 1849 1920 1837 1868 1910 1850 1873 1900 1721\n1861 1895 1819 1865 1741 1797 1832 1849 1901 1869 1870 1811 1786 1910 1936 1961 1907 1899 1949 1863 1845 1885 1881 1831 1884 1937 1860 1906 1873 1838 1859 1898 1924 1863 1902 1881 1851 1880 1945 1851 1929 1846 1843 1879 1774 1826 1788 1871 1918 1780 1825 1853 1782 1852 1861 1867 1844\n1822 1867 1806 1745 1942 1836 1841 1861 1787 1867 1947 1906 1826 1822 1935 1787 1879 1920 1830 1928 1879 1837 1921 1923 1855 1932 1844 1841 1917 1928 1865 1915 1873 1839 1846 1910 1896 1903 1911 1838 1857 1905 1870 1811 1899 1874 1860 1822 1935 1757 1862 1807 1856 1868 1786 1919 1887\n1850 1926 1855 1766 1858 1815 1894 1861 1911 1910 1846 1861 1857 1800 1837 1784 1912 1937 1916 1942 1929 1866 1905 1916 1923 1922 1899 1838 1910 1872 1778 1849 1863 1868 1870 1828 1880 1793 1889 1937 1857 1888 1882 1946 1841 1838 1800 1819 1874 1918 1879 1895 1874 1884 1861 1761 1800\n\n" ]
[ 0 ]
[ "If we go by your assumption that there likely won't be any zeros in the middle of the array, we can figure out if a row contains any zeros using any(axis=1) (or axis=0 for columns), and if a row contains all zeros using all\ndata = np.array([[0, 0, 0, 0, 0, 0, 0],\n [0, 1, 3, 4, 6, 1, 0],\n [0, 2, 3, 5, 2, 1, 0],\n [0, 1, 0, 0, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 0]])\n\nTo start, we want to delete those rows and columns that are all zeros.\ndelete_rows = (data == 0).all(axis=1)\ndelete_cols = (data == 0).all(axis=0)\n\nFor now, let's set those rows to -999 (since your data is pixel data, -999 is an invalid value that you never expect to see) so that data == 0 for the future steps isn't confused by these \"border\" rows/cols\ndata[delete_rows, :] = -999\ndata[:, delete_cols] = -999\n\nNext, let's find any rows that contain any zeros and are next to a row that's going to be deleted (previous or next row is in delete_rows):\nzero_rows = (data == 0).any(axis=1)\n\nd_r = np.zeros(zero_rows.shape, dtype=bool)\nd_r[1:] = d_r[1:] | delete_rows[:-1]\nd_r[:-1] = d_r[:-1] | delete_rows[1:]\n\ndelete_rows = delete_rows | (zero_rows & d_r)\ndata[delete_rows, :] = -999\n\nWe can repeat this until there are no more changes to delete_rows. I.e.:\ndel_count = sum(delete_rows)\nprev_del_count = del_count + 1\n\nwhile del_count != prev_del_count:\n zero_rows = (data == 0).any(axis=1)\n\n d_r = np.zeros(zero_rows.shape, dtype=bool)\n d_r[1:] = d_r[1:] | delete_rows[:-1]\n d_r[:-1] = d_r[:-1] | delete_rows[1:]\n\n delete_rows = delete_rows | (zero_rows & d_r)\n prev_del_count, del_count = del_count, sum(delete_rows)\n data[delete_rows, :] = -999\n\nThen, we can do the same for columns:\ndel_count = sum(delete_cols)\nprev_del_count = del_count + 1\n\nwhile del_count != prev_del_count:\n zero_cols = (data == 0).any(axis=0)\n\n d_c = np.zeros(zero_cols.shape, dtype=bool)\n d_c[1:] = d_c[1:] | delete_cols[:-1]\n d_c[:-1] = d_c[:-1] | delete_cols[1:]\n\n delete_cols = delete_cols | (zero_cols & d_c)\n prev_del_count, del_count = del_count, sum(delete_cols)\n data[:, delete_cols] = -999\n\nNow, we have:\ndelete_rows = np.array([ True, False, False, True, True])\ndelete_cols = np.array([ True, False, False, False, False, False, True])\n\nAnd we can filter out the required rows and cols:\nfiltered_data = data[~delete_rows, :][:, ~delete_cols]\n\nwhich gives:\narray([[1, 3, 4, 6, 1],\n [2, 3, 5, 2, 1]])\n\n" ]
[ -1 ]
[ "numpy", "numpy_ndarray", "python", "python_3.x" ]
stackoverflow_0074655756_numpy_numpy_ndarray_python_python_3.x.txt
Q: how to print month on calendar with list comprehension in python I basically need to print a calendar for a month using list comprehension. I can't figure out how to make this work, so if anyone can help it'd be greatly appreciated. I'm not great with list comprehension, so I'm not sure where to even start with this. A: You can use the calendar library to display a calendar in a variety of formats
>>> calendar.monthcalendar(2022, 12)
[[0, 0, 0, 1, 2, 3, 4],
 [5, 6, 7, 8, 9, 10, 11],
 [12, 13, 14, 15, 16, 17, 18],
 [19, 20, 21, 22, 23, 24, 25],
 [26, 27, 28, 29, 30, 31, 0]]

>>> calendar.TextCalendar().prmonth(2022, 12)
   December 2022
Mo Tu We Th Fr Sa Su
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31

To produce the format in your example you can use a list comprehension to convert the nested list of int to your padded strings
>>> [['{:02d}'.format(day) if day != 0 else '' for day in week] for week in calendar.monthcalendar(2022, 12)]
[['', '', '', '01', '02', '03', '04'],
 ['05', '06', '07', '08', '09', '10', '11'],
 ['12', '13', '14', '15', '16', '17', '18'],
 ['19', '20', '21', '22', '23', '24', '25'],
 ['26', '27', '28', '29', '30', '31', '']]
how to print month on calendar with list comprehension in python
I basically need to print a calendar for a month using list comprehension. I can't figure out how to make this work, so if anyone can help it'd be greatly appreciated. I'm not great with list comprehension, so I'm not sure where to even start with this.
[ "You can use the calendar library to display a calendar in a variety of formats\n>>> calendar.monthcalendar(2022, 12)\n[[0, 0, 0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9, 10, 11],\n [12, 13, 14, 15, 16, 17, 18],\n [19, 20, 21, 22, 23, 24, 25],\n [26, 27, 28, 29, 30, 31, 0]]\n\n>>> calendar.TextCalendar().prmonth(2022, 12)\n December 2022\nMo Tu We Th Fr Sa Su\n 1 2 3 4\n 5 6 7 8 9 10 11\n12 13 14 15 16 17 18\n19 20 21 22 23 24 25\n26 27 28 29 30 31\n\nTo produce the format in your example you can use a list comprehension to convert the nested list of int to your padded strings\n>>> [['{:02d}'.format(day) if day != 0 else '' for day in week] for week in calendar.monthcalendar(2022, 12)]\n[['', '', '', '01', '02', '03', '04'],\n ['05', '06', '07', '08', '09', '10', '11'],\n ['12', '13', '14', '15', '16', '17', '18'],\n ['19', '20', '21', '22', '23', '24', '25'],\n ['26', '27', '28', '29', '30', '31', '']]\n\n" ]
[ 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0074659961_list_comprehension_python.txt
Q: How to change the font of Axis label in Pyqtgraph I have a custom font and I am able to set it in the title of the graph, but I need help setting the axis label font (left and bottom axis labels). I am able to set the font on the title of the graph like this:
graphWidget = pyqtgraph.PlotWidget()
graph = graphWidget.getPlotItem()
graph.titleLabel.item.setFont(font)

I would like to know if there's a similar way to set the font for the axis labels. A: To set custom QFont to axis label, you have to setFont for label of each axis. Here is a short example, which changes font family to Times for title, bottom and left axis.
import sys

import pyqtgraph
from PyQt5.QtGui import QFont
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)

# Define your font
my_font = QFont("Times", 10, QFont.Bold)

graphWidget = pyqtgraph.PlotWidget()
graphWidget.setTitle("My plot")

# Set label for both axes
graphWidget.setLabel('bottom', "My x axis label")
graphWidget.setLabel('left', "My y axis label")

# Set your custom font for both axes
graphWidget.getAxis("bottom").label.setFont(my_font)
graphWidget.getAxis("left").label.setFont(my_font)

graph = graphWidget.getPlotItem()
# Set font for plot title
graph.titleLabel.item.setFont(my_font)

graphWidget.show()
app.exec()
How to change the font of Axis label in Pyqtgraph
I have a custom font and I am able to set it in the title of the graph, but I need help setting the axis label font (left and bottom axis labels). I am able to set the font on the title of the graph like this:
graphWidget = pyqtgraph.PlotWidget()
graph = graphWidget.getPlotItem()
graph.titleLabel.item.setFont(font)

I would like to know if there's a similar way to set the font for the axis labels.
[ "To set custom QFont to axis label, you have to setFont for label of each axis.\nHere is a short example, which changes font family to Times for title, bottom and left axis.\nimport sys\n\nimport pyqtgraph\nfrom PyQt5.QtGui import QFont\nfrom PyQt5.QtWidgets import QApplication\n\napp = QApplication(sys.argv)\n\n# Define your font\nmy_font = QFont(\"Times\", 10, QFont.Bold)\n\ngraphWidget = pyqtgraph.PlotWidget()\ngraphWidget.setTitle(\"My plot\")\n\n# Set label for both axes\ngraphWidget.setLabel('bottom', \"My x axis label\")\ngraphWidget.setLabel('left', \"My y axis label\")\n\n# Set your custom font for both axes\ngraphWidget.getAxis(\"bottom\").label.setFont(my_font)\ngraphWidget.getAxis(\"left\").label.setFont(my_font)\n\ngraph = graphWidget.getPlotItem()\n# Set font for plot title\ngraph.titleLabel.item.setFont(my_font)\n\ngraphWidget.show()\napp.exec()\n\n" ]
[ 0 ]
[]
[]
[ "pyqt", "pyqt5", "pyqtgraph", "pyside2", "python" ]
stackoverflow_0074628737_pyqt_pyqt5_pyqtgraph_pyside2_python.txt
Q: Unable to list files in google drive using python I'm not sure if this has to do with my code or something on the Google side; I'm able to push files to Drive, but for some reason I cannot list the file/folder metadata inside a folder. Here is the code I'm using:
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'creds.json'
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)

topFolderId = '0AAYXadsMHp8IUk9PVA'

items = []
pageToken = ""
while pageToken is not None:
    response = service.files().list(q="'" + topFolderId + "' in parents", pageSize=1000, pageToken=pageToken, fields="nextPageToken, files(id, name)").execute()
    items.extend(response.get('files', []))
    pageToken = response.get('nextPageToken')

Any ideas here? I don't think it's permissions-related, as I'm able to put files in Drive, just not list them. A: I think you forgot .execute()
try:
    service = build('drive', 'v3', credentials=creds)

    # Call the Drive v3 API
    results = service.files().list(q="'" + topFolderId + "' in parents",
                                   pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])

    if not items:
        print('No files found.')
        return
    print('Files:')
    for item in items:
        print(u'{0} ({1})'.format(item['name'], item['id']))
except HttpError as error:
    # TODO(developer) - Handle errors from drive API.
    print(f'An error occurred: {error}')
Unable to list files in google drive using python
I'm not sure if this has to do with my code or something on the Google side; I'm able to push files to Drive, but for some reason I cannot list the file/folder metadata inside a folder. Here is the code I'm using:
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'creds.json'
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)

topFolderId = '0AAYXadsMHp8IUk9PVA'

items = []
pageToken = ""
while pageToken is not None:
    response = service.files().list(q="'" + topFolderId + "' in parents", pageSize=1000, pageToken=pageToken, fields="nextPageToken, files(id, name)").execute()
    items.extend(response.get('files', []))
    pageToken = response.get('nextPageToken')

Any ideas here? I don't think it's permissions-related, as I'm able to put files in Drive, just not list them.
[ "I think you forgot .execute()\ntry:\n service = build('drive', 'v3', credentials=creds)\n\n # Call the Drive v3 API\n results = service.files().list(q=\"'\" + topFolderId + \"' in parents\",\n pageSize=10, fields=\"nextPageToken, files(id, name)\").execute()\n items = results.get('files', [])\n\n if not items:\n print('No files found.')\n return\n print('Files:')\n for item in items:\n print(u'{0} ({1})'.format(item['name'], item['id']))\nexcept HttpError as error:\n # TODO(developer) - Handle errors from drive API.\n print(f'An error occurred: {error}')\n\n" ]
[ 1 ]
[]
[]
[ "google_api", "google_api_python_client", "google_drive_api", "python", "python_3.x" ]
stackoverflow_0074659022_google_api_google_api_python_client_google_drive_api_python_python_3.x.txt
Q: How could I find specific texts in one column of another dataset? Python I have 2 datasets. One contains a column of company names, and the other contains a column of news headlines. The aim I want to achieve is to find all the news whose headline contains one of the companies from the other dataset. Basically the two datasets are like this, and I want to select the news with specific company names. I have tried to use a for loop to achieve my goal, but I think it takes too much time, and I think pandas or some other library can do this in an easier way. I am a starter in Python. A: If I understand correctly you should have 2 data sets with different columns; first, you need to loop through the dataset that contains the company name to search in the headline, then you could use obj.find("search") to find matches in both datasets. Also if every query is stored in a CSV format you could use the split() function to get only the column you want to use. A: Supposing that you have saved your company names in a pd.Series called company and headlines and texts in a pd.DataFrame called df, this will be what you are looking for:
# it will add a column called "company" to your initial df
for org, headline in zip(company, df['headline']):
    if org in headline:
        df.loc[df['headline'] == headline, 'company'] = org

You should pay attention to lower and upper case letters, as this will only find the corresponding company if the exact same word appears in the headline.
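Note: a vectorized sketch of the same idea, assuming the company names sit in a pd.Series called company and the headlines in df['headline'] (both names are illustrative, not from the original post). pandas' str.contains with a joined regex pattern avoids the Python-level loop:
import re
import pandas as pd

# hypothetical example data standing in for the two datasets
company = pd.Series(["Apple", "Tesla"])
df = pd.DataFrame({"headline": ["Apple releases new phone",
                                "Weather today",
                                "Tesla stock rises"]})

# build one regex that matches any company name (escaped in case a name contains regex characters)
pattern = "|".join(re.escape(name) for name in company)

# keep only the rows whose headline mentions at least one company
matches = df[df["headline"].str.contains(pattern, na=False)]
print(matches)

This scans each headline once against all names, which is usually much faster than a nested Python loop.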
How could I find specific texts in one column of another dataset? Python
I have 2 datasets. One contains a column of company names, and the other contains a column of news headlines. The aim I want to achieve is to find all the news whose headline contains one of the companies from the other dataset. Basically the two datasets are like this, and I want to select the news with specific company names. I have tried to use a for loop to achieve my goal, but I think it takes too much time, and I think pandas or some other library can do this in an easier way. I am a starter in Python.
[ "If I understand correctly you should have 2 data sets with different columns, first, you need to loop through the dataset that contains the company name to search in the headline, then you could use obj. find(“search”) to find matches in both datasets.\nAlso if every query is stored in a CSV format you could use the split() function to get the only column you wanna use\n", "Supposing that you have saved your company names in a pd.Series called company and headlines and texts in a pd.DataFrame called df, this will be what you are looking for:\n# it will add a column called \"company\" to your initial df\nfor org, headline in zip(company, df['headline']):\n if org in headline:\n df.loc[df['headline'] == headline, 'company'] = org\n\nYou should pay attention to lower and upper case letters, as this will only find the corresponding company if the exact same word appears in the headline.\n" ]
[ 0, 0 ]
[]
[]
[ "dataset", "pandas", "python", "python_3.x" ]
stackoverflow_0074659809_dataset_pandas_python_python_3.x.txt
Q: How to define custom check according to my rules and how to implement Django I am using Python 3.10, Django 4.1.2, djangorestframework==3.14.0 (frontend separate).
In an order, the received products field is empty by default. As we receive the order, we must remove these elements from the ordered field and transfer them to the received ones.
received products must contain only products from the requested Products. After submitting a request with the amount of received products, these particular products should be removed from the requested Products and added to recived_products.
I have two ideas for a theoretical implementation:
1. Using PATCH on the received products field and the elements in it
2. A separate method
I have this code:
class Orders(models.Model):
    delivery_model_choices = (("Pickup", "Pickup"), ("Delivery", "Delivery"))
    order_status_choices = (("Draft", "Draft"), ("Open", "Open"), ("Partially Received", "Partially Received"), ("Received", "Received"), ("Cancelled", "Cancelled"))
    costumer = models.ManyToManyField(Costumers)
    products = models.ManyToManyField(Products)
    recived_products = ???
    date_create = models.DateTimeField(auto_now_add=True)
    delivery = models.CharField(max_length=40, choices=delivery_model_choices)
    delivery_date = models.DateField()
    order_status = models.CharField(max_length=40, choices=order_status_choices)
    total_price = models.CharField(max_length=10)

Please give me a correct example of this implementation; I'm still new to development. A: I will not write the complete code, but you can try this logic -
Define a Create method for the viewset or views (whatever you use)
def create(self, request, format=None):
    # request.data is the data that you receive
    # all_product_recieved = all products that you have received
    # recived_products = all_product_recieved - ordered product
    # custom_data = create a new dictionary with valid data
    # then ...
    serializer = self.get_serializer(data=custom_data)
    if serializer.is_valid():
        serializer.save()
        return Response()
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

Hope this helps.
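Note: as a minimal sketch of one way the recived_products field could be modeled, a second ManyToManyField to the same Products model with a distinct related_name would work, plus a helper that moves items between the two relations. The related_name values and the receive method are illustrative assumptions, not part of the original code:
from django.db import models

class Orders(models.Model):
    # ... other fields as in the question; Products is the model from the question ...
    products = models.ManyToManyField(Products, related_name="ordered_in")
    recived_products = models.ManyToManyField(Products, blank=True, related_name="received_in")

    def receive(self, received):
        # move each received product from the ordered set to the received set
        for product in received:
            if self.products.filter(pk=product.pk).exists():
                self.products.remove(product)
                self.recived_products.add(product)
        # update the status depending on whether anything is still outstanding
        self.order_status = "Received" if not self.products.exists() else "Partially Received"
        self.save()

A view (the create method above, or a PATCH handler) could then call order.receive(...) with the products posted by the client.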
How to define custom check according to my rules and how to implement Django
I am using Python 3.10, Django 4.1.2, djangorestframework==3.14.0 (frontend separate).
In an order, the received products field is empty by default. As we receive the order, we must remove these elements from the ordered field and transfer them to the received ones.
received products must contain only products from the requested Products. After submitting a request with the amount of received products, these particular products should be removed from the requested Products and added to recived_products.
I have two ideas for a theoretical implementation:
1. Using PATCH on the received products field and the elements in it
2. A separate method
I have this code:
class Orders(models.Model):
    delivery_model_choices = (("Pickup", "Pickup"), ("Delivery", "Delivery"))
    order_status_choices = (("Draft", "Draft"), ("Open", "Open"), ("Partially Received", "Partially Received"), ("Received", "Received"), ("Cancelled", "Cancelled"))
    costumer = models.ManyToManyField(Costumers)
    products = models.ManyToManyField(Products)
    recived_products = ???
    date_create = models.DateTimeField(auto_now_add=True)
    delivery = models.CharField(max_length=40, choices=delivery_model_choices)
    delivery_date = models.DateField()
    order_status = models.CharField(max_length=40, choices=order_status_choices)
    total_price = models.CharField(max_length=10)

Please give me a correct example of this implementation; I'm still new to development.
[ "I will not write the complete code, but you can try this logic -\nDefine a Create method for the viewset or views (whatever you use)\ndef create(self, request, format=None):\n request.data is the data that you receive\n all_product_recieved = all products that you have received\n recived_products = all_product_recieved - ordered product\n custom_data = create a new dictionary with valid data\n then ...\n serializer = self.get_serializer(data=custom_data)\n if serializer.is_valid():\n serializer.save()\n return Response()\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\nHope this helps.\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "python" ]
stackoverflow_0074657076_django_django_models_django_rest_framework_python.txt
Q: Terminal can't find version of python despite it being installed I'm trying to install packages on multiple versions of Python. I'm currently running 3.8.8 and 3.11.0. Following this post, Install a module using pip for specific python version, I called
python3.11 -m pip install pandas

which results in

  File "<stdin>", line 1
    python3.11 -m pip install pandas
SyntaxError: invalid syntax

This seems to indicate an issue with Python, so I double-checked that python3.11 is installed. Running python3.11 in isolation seems to work. I don't understand why the install command isn't working. A: If you're using Linux try just
python3 --version

On Windows you may need to add the path to the folder with the installed Python to the PATH variable.
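Note: the File "<stdin>" line in the traceback suggests the command was typed inside the Python interpreter itself, where shell commands are a SyntaxError. A sketch of running it from the operating-system shell instead (the py launcher line assumes Windows; treat the exact interpreter names as assumptions about your setup):
>>> exit()                            # leave the Python REPL first

python3.11 -m pip install pandas      # then run this in the Linux/macOS shell
py -3.11 -m pip install pandas        # or this on Windows, via the py launcher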
Terminal can't find version of python despite it being installed
I'm trying to install packages on multiple versions of Python. I'm currently running 3.8.8 and 3.11.0. Following this post, Install a module using pip for specific python version, I called
python3.11 -m pip install pandas

which results in

  File "<stdin>", line 1
    python3.11 -m pip install pandas
SyntaxError: invalid syntax

This seems to indicate an issue with Python, so I double-checked that python3.11 is installed. Running python3.11 in isolation seems to work. I don't understand why the install command isn't working.
[ "If you’re using Linux try just\npython3 —-version\n\nIn Windows you may need to add path to folder with installed Python to PATH variable.\n" ]
[ 0 ]
[ "Check your environment variables, you could try removing the variables pointing to the 3.8 version until you get the packages you want installed.\nYou could also try navigating to that python 3.11 installation directly, and executing the python shell from there, then run the command.\n" ]
[ -1 ]
[ "module", "python", "version" ]
stackoverflow_0074660181_module_python_version.txt
Q: Use of int function in Python This code is working fine, but I am confused about why I only have to change age into an integer and not months, weeks or days. If I simply write age = 25, then it does not give any error.
age = input("What is your current age? ")

Years_remaining = Years_remaining = (90 - int(age))

months = Years_remaining * 12
weeks = Years_remaining * 52
days = Years_remaining * 365

print (f"you have {days} days, {weeks} weeks, and {months} months left")
A: This is why:
age is a str, as this is what the input function returns; therefore, you have to cast it to int to subtract it from 90 and store the result in Years_remaining.
At this point, Years_remaining is an int, so months does not need any cast, as both of its operands are now int (Years_remaining and 12).
If, for example, you cast age to float, then months, weeks, etc. would be float as well.
Does this make sense to you?
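Note: a quick REPL check illustrating the point (the value 25 here is made up): input always returns a str, and once the result of 90 - int(age) is an int, the later multiplications stay int without further casts:
>>> age = input("What is your current age? ")   # user types 25
>>> type(age)
<class 'str'>
>>> years_remaining = 90 - int(age)             # int(age) makes the subtraction valid
>>> type(years_remaining)
<class 'int'>
>>> type(years_remaining * 12)                  # int * int is still int
<class 'int'>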
Use of int function in Python
This code is working fine, but I am confused about why I only have to change age into an integer and not months, weeks or days. If I simply write age = 25, then it does not give any error.
age = input("What is your current age? ")

Years_remaining = Years_remaining = (90 - int(age))

months = Years_remaining * 12
weeks = Years_remaining * 52
days = Years_remaining * 365

print (f"you have {days} days, {weeks} weeks, and {months} months left")
[ "This is why:\nage is str as this is what the input method returns, therefore, you have to cast to int to subtract it to 90 and store it in Years_remaining.\nAt this point, Years_remaining is an int, so months does not need any cast as both of its operands are now int (Years_remaining and 12).\nIf for example, you would cast age to float, then months, weeks, etc would be a float as well.\nDoes this make sense to you?\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074660044_python_python_3.x.txt
Q: How to get the ID of an element with class name with BS4 I have a site where there are multiple li elements whose IDs I need, but I only have the class name. I also need the IDs to be put into a list.
The html:
<ul class="price-list">
  <li class="price-box" id="200"></li>
  <li class="price-box" id="300"></li>
  <li class="price-box" id="400"></li>
</ul>

I have tried the following, but to no avail:
list = []
div = soup.find("ul", {"class": "price-list"})
for size in div:
    id = soup.find_all("li", {"class": "price-box"})['id']
    list.append(id)
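Note: as a sketch of one way this could work (variable names illustrative, parsing the HTML from the question itself): find_all returns the matching tags, and each tag's attributes are dict-like, so the IDs can be collected with a list comprehension:
from bs4 import BeautifulSoup

html = """
<ul class="price-list">
  <li class="price-box" id="200"></li>
  <li class="price-box" id="300"></li>
  <li class="price-box" id="400"></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

# each Tag is dict-like, so li["id"] reads the id attribute
ids = [li["id"] for li in soup.find_all("li", {"class": "price-box"})]
print(ids)  # ['200', '300', '400']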
How to get the ID of an element with class name with BS4
I have a site where there are multiple li elements whose IDs I need, but I only have the class name. I also need the IDs to be put into a list.
The html:
<ul class="price-list">
  <li class="price-box" id="200"></li>
  <li class="price-box" id="300"></li>
  <li class="price-box" id="400"></li>
</ul>

I have tried the following, but to no avail:
list = []
div = soup.find("ul", {"class": "price-list"})
for size in div:
    id = soup.find_all("li", {"class": "price-box"})['id']
    list.append(id)
[]
[]
[ "import requests\nimport bs4\n\nresult = requests.get(\"url\")\nsoup = bs4.BeautifulSoup(result.text,\"html.parser\")\nclass_name = \"the class name\"\ndivs = soup.find_all(\"div\", {'class':class_name})\n# this will give you a list of divs with the class name\n# if you want to find the first div the soup find or you are looking for a unique class name\ndiv = soup.find(\"div\", {\"class\": class_name})\n# also you can do it like this\ndiv = soup.find(\"div\", class_= class_name)\n\n" ]
[ -1 ]
[ "beautifulsoup", "python", "python_requests" ]
stackoverflow_0074660222_beautifulsoup_python_python_requests.txt
Q: How to create widgets based on lists in kivy? Does anyone know whether it is possible in Kivy to create buttons based on list items? I have a list of category names, and the number of items can change based on the user's previous input. So does anyone know whether, and how, it is possible to create buttons dynamically, and maybe also link these buttons to a new page?
It should work like this:
List: ["Fruits", "Dessert", "Main"] -> creates buttons Fruits, Dessert and Main -> each button opens a new page, so FruitsButton -> FruitsPage / DessertButton -> DessertPage, etc. A: That is a very general question. Here is an idea to get you started. A widget is needed that can hold the buttons, and you can also bind each button in advance to a specific function using partial.
from functools import partial

# inside a method of your widget/screen class:
def switch_page(self, _this_button, course: str = "") -> None:
    print(f"Set {course} {_this_button.text}")
    # code for switching page here


for _course in ("fruits", "desserts", "main"):
    _button: Button = Button(text=f"{_course}")
    _this_callback = partial(self.switch_page, course=_course)
    _button.bind(on_press=_this_callback)
    # this container could be a box layout or grid layout, etc.
    self.your_container_widget.add_widget(_button)
How to create widgets based on lists in kivy?
Does anyone know whether it is possible in Kivy to create buttons based on list items? I have a list of category names, and the number of items can change based on the user's previous input. So does anyone know whether, and how, it is possible to create buttons dynamically, and maybe also link these buttons to a new page?
It should work like this:
List: ["Fruits", "Dessert", "Main"] -> creates buttons Fruits, Dessert and Main -> each button opens a new page, so FruitsButton -> FruitsPage / DessertButton -> DessertPage, etc.
[ "that is an very general question. here is an idea to get you started. A widget is needed that can hold the buttons and you also can bind each button in advance to a specific function using partial\nfrom functools import partial\ndef switch_page(self, _this_button, course: str = \"\") -> None:\n print(f\"Set {course} {_this_button.text}\")\n # code for switching page here\n \n \nfor _course in (\"fruits\", \"deserts\", \"main\"):\n _button: Button = Button(text=f\"{_course}\")\n # \n _this_callback = partial(self.switch_page, course=_course)\n _button.bind(on_press=_this_callback)\n # this container could be a box layout or grid layout, etc\n self.your_container_widget.add_widget(_button)\n\n" ]
[ 0 ]
[]
[]
[ "button", "kivy", "list", "python" ]
stackoverflow_0074657481_button_kivy_list_python.txt
Q: Python. View the "import" name of a library Some Python libraries are listed under one name in pip, but imported under a different name in the interpreter. pycryptodome is a good example. In pip list, you see "pycryptodome". In a Python program, you have to call "import Crypto". "import pycryptodome" gives an error that the module doesn't exist. Some libraries I've imported are giving me "module not found" errors. I want to see if they're imported under a different name from what appears in pip. Where can I find that data? For reference, "pip show <package>" and "pip inspect <package>" don't seem to have this information. A: Usually in /lib/site-packages in your Python folder. (At least, on Windows.) You can use sys.path to find out what directories are searched for modules.
In the standard Python interpreter, you can type "help('modules')". At the command line, you can use pydoc modules. In a script, call pkgutil.iter_modules().
pydoc modules

works
A: Here's how you can get the import name:
Open the folder where your packages are installed (specified in Location in pip show pycryptodome). Then open the package dist-info folder (pycryptodome-3.16.0.dist-info). Open top_level.txt; you can see the name you are looking for.
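Note: as a sketch of a programmatic route (assuming Python 3.10+, where this function exists in the standard library), importlib.metadata.packages_distributions maps importable top-level names to the distribution names pip knows:
from importlib.metadata import packages_distributions

# maps top-level import names to the pip distribution(s) that provide them
mapping = packages_distributions()
print(mapping.get("Crypto"))  # e.g. ['pycryptodome'] if pycryptodome is installed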
Python. View the "import" name of a library
Some Python libraries are listed under one name in pip, but imported under a different name in the interpreter. pycryptodome is a good example. In pip list, you see "pycryptodome". In a Python program, you have to call "import Crypto". "import pycryptodome" gives an error that the module doesn't exist.
Some libraries I've imported are giving me "module not found" errors. I want to see if they're imported under a different name from what appears in pip. Where can I find that data?
For reference, "pip show <package>" and "pip inspect <package>" don't seem to have this information.
[ "Usually in /lib/site-packages in your Python folder. (At least, on Windows.) You can use sys. path to find out what directories are searched for modules.\nIn the standard Python interpreter, you can type \" help('modules') \". At the command-line, you can use pydoc modules . In a script, call pkgutil. iter_modules()\npydoc modules \n\nworks\n", "Here's how you can get the import name:\nOpen the folder that your packages are installed ( Specified in Location in pip show pycryptodome). Then open the package dist-info folder (pycryptodome-3.16.0.dist-info). open top_level.txt. you can see the name you are looking for.\n" ]
[ 0, 0 ]
[ "go https://pypi.org/project/pycryptodome\ndownload the tar file version you downloaded using pip and see the top-level view to see import names\n" ]
[ -1 ]
[ "pip", "python" ]
stackoverflow_0074659686_pip_python.txt
Q: I want to filter a dataframe that contains all the days of year 2021 and 2022 such that I only have the data that belongs to 2021? I only want to print the data for 2021. A: can you try this:
df['time'] = pd.to_datetime(df['time'])
df = df[df['time'].dt.year == 2021]
I want to filter a dataframe that contains all the days of year 2021 and 2022 such that I only have the data that belongs to 2021?
I only want to print the data for 2021.
[ "can you try this:\ndf['time'] = pd.to_datetime(df['time'])\ndf = df[df['time'].dt.year == 2021]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074660177_dataframe_pandas_python.txt
Q: Why is my api response not inserted into postgres
dictionary = testEns(idSession)
columns = dictionary.keys()
for i in dictionary.values():
    sql2='''insert into PERSONS(person_id , person_name) VALUES{};'''.format(i)
    cursor.execute(sql2)

The function testEns(idSession) contains the result of an API call that returns an XML response that has been transformed into a dictionary. I'm trying to insert the response into a table that has been created in a Postgres database, but here is the error I'm getting. Any idea why, and what am I missing?
psycopg2.errors.SyntaxError: syntax error at or near "{"
LINE 1: ...nsert into PERSONS(person_id, person_name) VALUES{'category...

After I changed VALUES{id, name} to VALUES(id, name) I have this error:
psycopg2.errors.UndefinedColumn: column "id" does not exist
LINE 1: ...sert into PERSONS(person_id , person_name) VALUES(id, name)

even though my table PERSONS is created in pgAdmin with the columns id and name. A: Your SQL statement looks off, I think you want something like:
sql2='''insert into PERSONS (person_id , person_name) VALUES (%s, %s);'''
cursor.execute(sql2, (i.person_id, i.person_name))

Assuming the property names in i here.
Why is my api response not inserted into postgres
dictionary = testEns(idSession)
columns = dictionary.keys()
for i in dictionary.values():
    sql2='''insert into PERSONS(person_id , person_name) VALUES{};'''.format(i)
    cursor.execute(sql2)

The function testEns(idSession) contains the result of an API call that returns an XML response that has been transformed into a dictionary. I'm trying to insert the response into a table that has been created in a Postgres database, but here is the error I'm getting. Any idea why, and what am I missing?
psycopg2.errors.SyntaxError: syntax error at or near "{"
LINE 1: ...nsert into PERSONS(person_id, person_name) VALUES{'category...

After I changed VALUES{id, name} to VALUES(id, name) I have this error:
psycopg2.errors.UndefinedColumn: column "id" does not exist
LINE 1: ...sert into PERSONS(person_id , person_name) VALUES(id, name)

even though my table PERSONS is created in pgAdmin with the columns id and name.
[ "Your SQL statement looks off, I think you want something like:\nsql2='''insert into PERSONS (person_id , person_name) VALUES (%s, %s);'''\ncursor.execute(sql2, (i.person_id, i.person_name))\n\nAssuming the property names in i here.\n" ]
[ 0 ]
[ "Your line\nsql2 = '''insert into PERSONS(person_id , person_name) VALUES{};'''.format(i)\n\nShould be fixed\nsql2 = '''INSERT INTO PERSONS (person_id, person_name) VALUES (value1, value2, ...)'''.format(i)\n\n" ]
[ -1 ]
[ "postgresql", "python" ]
stackoverflow_0074660328_postgresql_python.txt
Q: Unable to install the jupyter module with pip I would like to be able to install the python module jupyter with pip but I get an error in my terminal when I try 'pip install jupyter' which returns this: error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [9 lines of output] C:\Users\nunes\AppData\Local\Temp\pip-build-env-dofs9qdx\overlay\Lib\site-packages\setuptools\_distutils\dist.py:265: UserWarning: Unknown distribution option: 'cffi_modules' warnings.warn(msg) running egg_info writing pyzmq.egg-info\PKG-INFO writing dependency_links to pyzmq.egg-info\dependency_links.txt writing requirements to pyzmq.egg-info\requires.txt writing top-level names to pyzmq.egg-info\top_level.txt running configure error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I have installed Microsoft Visual Studio Build Tools as indicated but still get the same error. If anyone has an idea, I'd appreciate it; thank you in advance for your help. A: Try pip install jupyterlab. Refer to this for more info: https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html I also see that you are having an error: Microsoft Visual C++ 14.0 or greater is required You can also try installing/upgrading Microsoft Visual C++ A: I still have the same error with pip install jupyterlab. I had already checked; I have the latest version of the Microsoft Visual C++ Redistributable
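Before reaching for the compiler, it is often enough to upgrade the packaging toolchain so pip can pick a prebuilt pyzmq wheel instead of building it from source; a hedged sketch (run with the same interpreter you use for pip; if no wheel exists for your Python version this will fail fast rather than half-building):
import subprocess
import sys

# Newer pip/setuptools/wheel are much better at finding binary wheels
subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade",
                       "pip", "setuptools", "wheel"])

# Refuse source builds so a missing compiler cannot break the install midway
subprocess.check_call([sys.executable, "-m", "pip", "install",
                       "--only-binary", ":all:", "jupyter"])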
Unable to install the jupyter module with pip
I would like to be able to install the python module jupyter with pip but I get an error in my terminal when I try 'pip install jupyter' which returns this: error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [9 lines of output] C:\Users\nunes\AppData\Local\Temp\pip-build-env-dofs9qdx\overlay\Lib\site-packages\setuptools\_distutils\dist.py:265: UserWarning: Unknown distribution option: 'cffi_modules' warnings.warn(msg) running egg_info writing pyzmq.egg-info\PKG-INFO writing dependency_links to pyzmq.egg-info\dependency_links.txt writing requirements to pyzmq.egg-info\requires.txt writing top-level names to pyzmq.egg-info\top_level.txt running configure error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I have installed Microsoft Visual Studio Build Tools as indicated but still get the same error. If anyone has an idea, I'd appreciate it; thank you in advance for your help.
[ "Try pip install jupyterlab.\nRefer for more info: https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html\nI also see that you are having an error: Microsoft Visual C++ 14.0 or greater is required\nYou can also try installing/upgrading Microsoft Visual C++\n", "I still have the same error with pip install jupyterlab. I had already checked, I have the last version of Microsoft Visual Redistribuate C++\n" ]
[ 0, 0 ]
[]
[]
[ "jupyter", "pip", "python" ]
stackoverflow_0074659543_jupyter_pip_python.txt
Q: Receiving OSError: [Errno 8] Exec format error in app running in Docker Container I have a React/Flask app running within a Docker container. There is no issue with me building the project using docker-compose, and running the app itself in the container. Where I am running into issues is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try to encrypt through the app running in Docker, I am given the following error message: OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe' I know the following command works in the CLI if invoked outside of the Docker Container (still invoked at the same directory level as it would in app): ./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr I am using the following Python code to invoke the command in the API route which is where it fails: proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True) The Dockerfile for my backend is pasted below: FROM python:3 WORKDIR /app ENV FLASK_APP=main.py COPY ./requirements.txt . RUN pip install -r requirements.txt COPY . . CMD ["python", "main.py"] I have tried to tackle the issue a few different ways: By default my Docker Container was built with Architecture of ARM64. I read that the OS Error was caused by Architecture not being AMD64, so I rebuilt the container with AMD64 and it gave me the same error. In case this was a permissions error, I ran chmod +rwx on encrypt.exe through the Dockerfile when building the container. Pretty sure it has nothing to do with permissions especially as it still failed. I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile. At the end of the day I know I am failing when using subprocess.Popen, so I am positive I must be missing something when invoking the script using Python, or there is a configuration in my Docker Container that is preventing this functionality. My machine is a MacBook Pro, which the script runs fine on. The script has also successfully been utilized on a machine running Linux. Any chance folks have seen similar issues arise with this error? Thanks in advance! A: So thanks to David Maze's comment on this, I followed the lead that maybe the executable I wanted to run needed to be built within the Dockerfile. I destroyed my original container, added in a step to run the Makefile that generates the executable, and finally ran the program through the app running in the Docker container. This did the trick! Not sure as to why the executable needed to be compiled within the Docker container, but running 'make' on the Makefile within the Dockerfile did the trick.
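As a side note on the subprocess call itself: this does not fix the Exec format error (the missing in-container build step was the real cause), but passing the arguments as a list avoids shell parsing entirely and makes failures easier to read; a minimal sketch (profile_name stands in for profile.profile_name from the question):
import subprocess

profile_name = "profile"  # stand-in for profile.profile_name

# List-form arguments: no shell involved, so quoting cannot break the command
result = subprocess.run(
    ["./app/Crypto/encrypt.exe",
     "-k", "./app/Crypto/secretkey",
     "-i", f"./{profile_name}.txt",
     "-o", f"./{profile_name}.encr"],
    capture_output=True, text=True,
)
print(result.returncode, result.stderr)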
Receiving OSError: [Errno 8] Exec format error in app running in Docker Container
I have a React/Flask app running within a Docker container. There is no issue with me building the project using docker-compose, and running the app itself in the container. Where I am running into issues is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try to encrypt through the app running in Docker, I am given the following error message: OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe' I know the following command works in the CLI if invoked outside of the Docker Container (still invoked at the same directory level as it would in app): ./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr I am using the following Python code to invoke the command in the API route which is where it fails: proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True) The Dockerfile for my backend is pasted below: FROM python:3 WORKDIR /app ENV FLASK_APP=main.py COPY ./requirements.txt . RUN pip install -r requirements.txt COPY . . CMD ["python", "main.py"] I have tried to tackle the issue a few different ways: By default my Docker Container was built with Architecture of ARM64. I read that the OS Error was caused by Architecture not being AMD64, so I rebuilt the container with AMD64 and it gave me the same error. In case this was a permissions error, I ran chmod +rwx on encrypt.exe through the Dockerfile when building the container. Pretty sure it has nothing to do with permissions especially as it still failed. I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile. At the end of the day I know I am failing when using subprocess.Popen, so I am positive I must be missing something when invoking the script using Python, or there is a configuration in my Docker Container that is preventing this functionality. My machine is a MacBook Pro, which the script runs fine on. The script has also successfully been utilized on a machine running Linux. Any chance folks have seen similar issues arise with this error? Thanks in advance!
[ "So thanks to David Maze's comment on this, I followed the lead that maybe the executable I wanted to run needed to be built within the Dockerfile. I destroyed my original container, added in a step to run the Makefile that generates the executable, and finally ran the program through the app running in the Docker container. This did the trick! Not sure as to why the executable needed to be compiled within the Docker container, but running 'make' on the Makefile within the Dockerfile did the trick.\n" ]
[ 0 ]
[]
[]
[ "docker", "python" ]
stackoverflow_0074605372_docker_python.txt
Q: How do we handle input validation in Python? I am new to Python. I am wondering how we handle input validation using try/except. I have the code below; could you provide some suggestions? try: validate_input(date, value, region) raise IllegalArgumentError("Invalid input") except IllegalArgumentError as error: print("Invalid input occur:", error) class IllegalArgumentError(ValueError): pass def validate_input(date, value, region): if ((date is not None and date != "") and (value is not None and value != "") and (region is not None and region != "")): return True else: raise IllegalArgumentError("Invalid lambda event parameters") A: Suggestions First of all, you cannot call a function before it is actually defined. Also, except is meant to run code when the normal execution flow is interrupted by an error, so raising another error inside the except like this is illogical You need to use a base exception to create a custom exception I will provide two ways I would have done this class IllegalArgumentError(Exception): def __init__(self, err): self.err = err super().__init__(self.err) def validate_input(date, value, region): if ((date is not None and date != "") and (value is not None and value != "") and (region is not None and region != "")): return True else: raise IllegalArgumentError("Invalid lambda event parameters") date = input() value = input() region = input() validate_input(date,value,region) import sys def validate_input(date, value, region): if ((date is not None and date != "") and (value is not None and value != "") and (region is not None and region != "")): return True else: raise ValueError try: date = input() value = input() region = input() validate_input(date, value, region) except ValueError: sys.exit("Invalid lambda event parameters") Hope this helps!
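For reference, the emptiness checks in the answer can be compressed with all(); a minimal self-contained sketch of the same logic (the sample argument values are made up):
class IllegalArgumentError(ValueError):
    """Raised when a required field is missing or empty."""

def validate_input(date, value, region):
    # Treat both None and the empty string as invalid for every field
    if not all(v not in (None, "") for v in (date, value, region)):
        raise IllegalArgumentError("Invalid lambda event parameters")
    return True

try:
    validate_input("2022-12-01", "42", "")
except IllegalArgumentError as error:
    print("Invalid input occurred:", error)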
How do we handle input validation in Python?
I am new to Python. I am wondering how we handle input validation using try/except. I have the code below; could you provide some suggestions? try: validate_input(date, value, region) raise IllegalArgumentError("Invalid input") except IllegalArgumentError as error: print("Invalid input occur:", error) class IllegalArgumentError(ValueError): pass def validate_input(date, value, region): if ((date is not None and date != "") and (value is not None and value != "") and (region is not None and region != "")): return True else: raise IllegalArgumentError("Invalid lambda event parameters")
[ "Suggestions\n\nFirst of all, you cannot call a function before it being actually created.\nAlso, except is used to provide a set of code if the execution flow of code is interrupted with an error, so the raising of another error inside the except is illogical\nYou need to use a base exception to create a custom exception\n\nWill provide two ways on how I would have done this\n\n\n\nclass IllegalArgumentError(Exception):\n def __init__(self, err):\n self.err = err\n super().__init__(self.err)\n \ndef validate_input(date, value, region):\n if ((date is not None and date != \"\") and (value is not None and value != \"\") and\n (region is not None and region != \"\")):\n return True\n else:\n raise IllegalArgumentError(\"Invalid lambda event parameters\")\ndate = input()\nvalue = input()\nregion = input()\nvalidate_input(date,value,region)\n\n\n\n\nimport sys\ndef validate_input(date, value, region):\n if ((date is not None and date != \"\") and (value is not None and value != \"\") and\n (region is not None and region != \"\")):\n return True\n else:\n raise ValueError\n \ntry:\n date = input()\n value = input()\n region = input()\n validate_input(date, value, region)\nexcept ValueError:\n sys.exit(\"Invalid lambda event parameters\")\n\nHope this helps!\n" ]
[ 0 ]
[]
[]
[ "python", "try_catch", "validation" ]
stackoverflow_0074659981_python_try_catch_validation.txt
Q: Is there a matplotlib function in Python for forcing all subplots inside different figures to have the same x and y axis length? I'm testing out different ways of displaying figures. I have one figure which is made up of 12 subplots split into two columns. Something like... fig, ax = plt.subplots(6, 2, figsize= (20,26)) I have other code which splits the 12 subplots into 3 different figures based on categorical data. Something like figA, ax = plt.subplots(5, 1, figsize= (10,23)) figB, ax = plt.subplots(3, 1, figsize= (10,17)) fig2, ax = plt.subplots(4, 1, figsize= (10,20)) Is there a way to ensure all the subplots in every figure have the same x and y axis length? A: Answer turns out to be simple. Use a variable that can be scaled by the number of plots in the figure. So, a figure with more plots will have a higher figsize yet equal plot sizes. Something like... ps = 5 #indicates plot size figA, ax = plt.subplots(5, 1, figsize= (10, 5*ps)) figB, ax = plt.subplots(3, 1, figsize= (10, 3*ps)) fig2, ax = plt.subplots(4, 1, figsize= (10, 4*ps)) A: I had a similar problem, try avg(len(x)) as the multiplier. It scales suitably for all lengths.
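To make the scaling idea reusable across figures, a small helper works; a sketch (the per-plot height of 5 is an arbitrary choice, as in the answer above):
import matplotlib.pyplot as plt

def uniform_subplots(n_rows, plot_height=5, width=10):
    # Same per-subplot height everywhere, so axes match across figures
    return plt.subplots(n_rows, 1, figsize=(width, n_rows * plot_height))

figA, axA = uniform_subplots(5)
figB, axB = uniform_subplots(3)
fig2, ax2 = uniform_subplots(4)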
Is there a matplotlib function in Python for forcing all subplots inside different figures to have the same x and y axis length?
I'm testing out different ways of displaying figures. I have one figure which is made up of 12 subplots split into two columns. Something like... fig, ax = plt.subplots(6, 2, figsize= (20,26)) I have other code which splits the 12 subplots into 3 different figures based on categorical data. Something like figA, ax = plt.subplots(5, 1, figsize= (10,23)) figB, ax = plt.subplots(3, 1, figsize= (10,17)) fig2, ax = plt.subplots(4, 1, figsize= (10,20)) Is there a way to ensure all the subplots in every figure have the same x and y axis length?
[ "Answer turns out to be simple. Use a variable that can be scaled by the number of plots in the figure. So, a figure with more plots will have a higher figsize yet equal plot sizes. Something like...\nps = 5 #indicates plot size\nfigA, ax = plt.subplots(5, 1, figsize= (10, 5*ps))\nfigB, ax = plt.subplots(3, 1, figsize= (10, 3*ps))\nfig2, ax = plt.subplots(4, 1, figsize= (10, 4*ps))\n\n", "I had a similar problem, try avg(len(x)) as the multiplier. It scales suitably for all lengths.\n" ]
[ 0, 0 ]
[]
[]
[ "figure", "matplotlib", "plot", "python", "subplot" ]
stackoverflow_0074382240_figure_matplotlib_plot_python_subplot.txt
Q: Looking to create a graph based off the average of two columns in my dataset Ultimately I am very new to Data Analysis and am in the middle of a project that is due very soon. Of the data here: [screenshot of the dataset] I would like to have the Station areas grouped up, and the Time_Diff averaged out for each area. There are 35000+ entries in this dataset, hence why I want to group it up into the totals so the graph will work. Such as: Tallaght: 13:46 Blanchardstown: 14:35 etc. I have attempted to graph them, but my results only returned the total count of the time_diff column, so the areas with more entries got the higher value. The Time_Diff column I made by converting the 'text' value times into datetime using pandas, then subtracting the IA from the TOC to retrieve the time difference. My dataset: https://data.gov.ie/dataset/fire-brigade-and-ambulance?package_type=dataset Brownie points if you can figure out how I can remove the 0 days entry from the output. I believe this was a result of me converting the 'text' to datetime. A: subset.groupby('Station Area')['Time_Diff'].mean()
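A hedged sketch tying the answer together with the "0 days" cleanup mentioned at the end (it uses a made-up stand-in for the real dataset and assumes Time_Diff is already a timedelta column, as described in the question):
import pandas as pd

# Toy stand-in for the real dataset
df = pd.DataFrame({
    "Station Area": ["Tallaght", "Tallaght", "Blanchardstown"],
    "Time_Diff": pd.to_timedelta(["00:13:46", "00:00:00", "00:14:35"]),
})

# Drop the spurious "0 days" rows before averaging
subset = df[df["Time_Diff"] > pd.Timedelta(0)]
avg_per_area = subset.groupby("Station Area")["Time_Diff"].mean()
print(avg_per_area)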
Looking to create a graph based off the average of two columns in my dataset
Ultimately I am very new to Data Analysis and am in the middle of a project that is due very soon. Of the data here: [screenshot of the dataset] I would like to have the Station areas grouped up, and the Time_Diff averaged out for each area. There are 35000+ entries in this dataset, hence why I want to group it up into the totals so the graph will work. Such as: Tallaght: 13:46 Blanchardstown: 14:35 etc. I have attempted to graph them, but my results only returned the total count of the time_diff column, so the areas with more entries got the higher value. The Time_Diff column I made by converting the 'text' value times into datetime using pandas, then subtracting the IA from the TOC to retrieve the time difference. My dataset: https://data.gov.ie/dataset/fire-brigade-and-ambulance?package_type=dataset Brownie points if you can figure out how I can remove the 0 days entry from the output. I believe this was a result of me converting the 'text' to datetime.
[ "subset.groupby('Station Area')['Time_Diff'].mean()\n" ]
[ 0 ]
[]
[]
[ "data_analysis", "dataset", "graph", "jupyter_notebook", "python" ]
stackoverflow_0074659331_data_analysis_dataset_graph_jupyter_notebook_python.txt
Q: How do I replace every NaN value in every column with the minimum value of that column in pandas? I have a dataframe and I want to replace every NaN value in every column with the min() of the column, how do I do that? A: To replace all NaN values in a dataframe with the minimum value of the respective column, you can use the pandas DataFrame.fillna() method in combination with the DataFrame.min() method. For example, suppose you have a dataframe df with the following values: col1 col2 0 NaN 1 1 NaN 3 2 5.0 2 3 6.0 NaN 4 NaN 4 To replace all NaN values with the minimum value of each column, you can use the following code: df.fillna(df.min()) This will return a new dataframe with the NaN values replaced by the minimum value of each column: col1 col2 0 5.0 1 1 5.0 3 2 5.0 2 3 6.0 1 4 5.0 4 Note that by default the fillna() method returns a new dataframe and does not modify the original. If you want to save the changes to the original dataframe, you can use the inplace parameter like this: df.fillna(df.min(), inplace=True) This will replace the NaN values in the original dataframe df and return None.
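If the frame also contains non-numeric columns, restricting the fill to numeric ones avoids surprises; a short self-contained sketch of that variation:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "col1": [np.nan, np.nan, 5.0, 6.0, np.nan],
    "col2": [1, 3, 2, np.nan, 4],
    "label": ["a", "b", "c", "d", "e"],  # non-numeric column left untouched
})

num_cols = df.select_dtypes("number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].min())
print(df)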
How do I replace every NaN value in every column with the minimum value of that column in pandas?
I have a dataframe and I want to replace every NaN value in every column with the min() of the column, how do I do that?
[ "To replace all NaN values in a dataframe with the minimum value of the respective column, you can use the pandas DataFrame.fillna() method in combination with the DataFrame.min() method.\nFor example, suppose you have a dataframe df with the following values:\n col1 col2\n0 NaN 1\n1 NaN 3\n2 5.0 2\n3 6.0 NaN\n4 NaN 4\n\n\nTo replace all NaN values with the minimum value of each column, you can use the following code:\ndf.fillna(df.min())\n\nThis will return a new dataframe with the NaN values replaced by the minimum value of each column:\n col1 col2\n0 5.0 1\n1 5.0 3\n2 5.0 2\n3 6.0 1\n4 5.0 4\n\n\nNote that the fillna() method will only replace NaN values in the original dataframe. If you want to save the changes to the original dataframe, you can use the inplace parameter like this:\ndf.fillna(df.min(), inplace=True)\n\nThis will replace the NaN values in the original dataframe df and return None.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074660412_dataframe_pandas_python.txt
Q: How to concatenate a series to a pandas dataframe in python? I would like to iterate through a dataframe's rows and concatenate each row to a different dataframe, basically building up a different dataframe with some rows. For example: IPCSection and IPCClass Dataframes allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): pdList = [finalpatentclasses, pd.DataFrame(secrow), pd.DataFrame(clrow)] finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True) display(finalpatentclasses) The output is: I want the nan values to disappear and move all the data under the correct columns. I tried axis = 1 but it messes up the column names. Append does not work either; all values are placed diagonally in the table, with nan values as well. A: The problem with the current implementation is that pd.concat is being called with axis=0 and ignore_index=True, resulting in the values from secrow and clrow being concatenated vertically and the original indices being ignored. This causes the values to be misaligned with the columns of the final dataframe, as shown in the output. To solve this problem, you can create a new dataframe that has the same columns as the final dataframe, and then assign the values from secrow and clrow to the appropriate columns in the new dataframe. After that, you can append the new dataframe to the final dataframe using the pd.concat function with axis=0, as before. Here is a modified version of the code that should produce the desired output: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): # Create a new dataframe with the same columns as the final dataframe newrow = pd.DataFrame(columns=allcolumns) # Assign the values from secrow and clrow to the appropriate columns in the new dataframe newrow[IPCSection.columns] = secrow.values newrow[IPCClass.columns] = clrow.values # Append the new dataframe to the final dataframe finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0) display(finalpatentclasses) This should result in a final dataframe that has the values from secrow and clrow concatenated horizontally under the correct columns, with no nan values.
UPDATED SCRIPT: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): print("Condition met") pdList = [finalpatentclasses, secrow.to_frame().transpose(), clrow.to_frame().transpose()] finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True) display(finalpatentclasses) Final Update (Efficient for larger datasets): allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0) finalpatentclasses_list = [] for secrow in IPCSection.itertuples(index=False): for clrow in IPCClass.itertuples(index=False): if secrow[0] in clrow[0]: row = list(secrow) + list(clrow) finalpatentclasses_list.append(row) finalpatentclasses = pd.DataFrame(finalpatentclasses_list, columns=allcolumns) display(finalpatentclasses) Note how secrow and clrow are now namedtuples instead of Series, and need to be converted to lists using the list() function before concatenating them with the + operator. Also, the index=False argument is passed to itertuples() to skip the index column in the output. A: Alright, I have figured it out. The idea is that you create a new-row DataFrame and concatenate all the data in a list; from there you can add it to the dataframe and then concat with the final dataframe. Here is the code: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): newrow = pd.DataFrame(columns=allcolumns) values = np.concatenate((secrow.values, clrow.values), axis=0) newrow.loc[len(newrow.index)] = values finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0) finalpatentclasses.reset_index(drop=False, inplace=True) display(finalpatentclasses) Update: the code below is more efficient: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns, IPCSubClass.columns, IPCGroup.columns), axis = 0) newList = [] for secrow in IPCSection.itertuples(): for clrow in IPCClass.itertuples(): if (secrow[1] in clrow[1]): values = ([secrow[1], secrow[2], clrow[1], clrow[2]]) new_row = {IPCSection.columns[0]: [secrow[1]], IPCSection.columns[1]: [secrow[2]], IPCClass.columns[0]: [clrow[1]], IPCClass.columns[1]: [clrow[2]]} newList.append(values) finalpatentclasses = pd.DataFrame(newList, columns=allcolumns) display(finalpatentclasses)
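For larger inputs, the nested loops can also be replaced by a cross join plus a substring filter; a hedged sketch with toy stand-ins for the two frames (assumes pandas >= 1.2 for how='cross'; the column names here are made up):
import pandas as pd

# Toy stand-ins for IPCSection and IPCClass
IPCSection = pd.DataFrame({"Section": ["A", "B"], "SectionTitle": ["Human necessities", "Operations"]})
IPCClass = pd.DataFrame({"Class": ["A01", "B62"], "ClassTitle": ["Agriculture", "Vehicles"]})

cross = IPCSection.merge(IPCClass, how="cross")  # every section paired with every class
mask = [sec in cls for sec, cls in zip(cross["Section"], cross["Class"])]
finalpatentclasses = cross[mask].reset_index(drop=True)
print(finalpatentclasses)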
How to concatenate a series to a pandas dataframe in python?
I would like to iterate through a dataframe's rows and concatenate each row to a different dataframe, basically building up a different dataframe with some rows. For example: IPCSection and IPCClass Dataframes allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): pdList = [finalpatentclasses, pd.DataFrame(secrow), pd.DataFrame(clrow)] finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True) display(finalpatentclasses) The output is: I want the nan values to disappear and move all the data under the correct columns. I tried axis = 1 but it messes up the column names. Append does not work either; all values are placed diagonally in the table, with nan values as well.
[ "The problem with the current implementation is that pd.concat is being called with axis=0 and ignore_index=True, resulting in the values from secrow and clrow being concatenated vertically and the original indices being ignored. This causes the values to be misaligned with the columns of the final dataframe, as shown in the output.\nTo solve this problem, you can create a new dataframe that has the same columns as the final dataframe, and then assign the values from secrow and clrow to the appropriate columns in the new dataframe. After that, you can append the new dataframe to the final dataframe using the pd.concat function with axis=0, as before.\nHere is a modified version of the code that should produce the desired output:\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0)\nfinalpatentclasses = pd.DataFrame(columns=allcolumns)\nfor isec, secrow in IPCSection.iterrows():\n for icl, clrow in IPCClass.iterrows():\n if (secrow[0] in clrow[0]):\n # Create a new dataframe with the same columns as the final dataframe\n newrow = pd.DataFrame(columns=allcolumns)\n # Assign the values from secrow and clrow to the appropriate columns in the new dataframe\n newrow[IPCSection.columns] = secrow.values\n newrow[IPCClass.columns] = clrow.values\n # Append the new dataframe to the final dataframe\n finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0)\ndisplay(finalpatentclasses)\n\nThis should result in a final dataframe that has the values from secrow and clrow concatenated horizontally under the correct columns, with no nan values.\nUPDATED SCRIPT:\n allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0)\nfinalpatentclasses = pd.DataFrame(columns=allcolumns)\nfor isec, secrow in IPCSection.iterrows():\n for icl, clrow in IPCClass.iterrows():\n if (secrow[0] in clrow[0]):\n print(\"Condition met\")\n pdList = [finalpatentclasses, secrow.to_frame().transpose(), clrow.to_frame().transpose()]\n finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True)\ndisplay(finalpatentclasses)\n\nFinal Update (Efficient for larger datasets):\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0)\nfinalpatentclasses_list = []\nfor secrow in IPCSection.itertuples(index=False):\n for clrow in IPCClass.itertuples(index=False):\n if secrow[0] in clrow[0]:\n row = list(secrow) + list(clrow)\n finalpatentclasses_list.append(row)\nfinalpatentclasses = pd.DataFrame(finalpatentclasses_list, columns=allcolumns)\ndisplay(finalpatentclasses)\n\nNote how secrow and clrow are now namedtuples instead of Series, and need to be converted to lists using the list() function before concatenating them with the + operator. Also, the index=False argument is passed to itertuples() to skip the index column in the output.\n", "Alright, I have figured it out. 
The idea is that you create a new-row DataFrame and concatenate all the data in a list; from there you can add it to the dataframe and then concat with the final dataframe.\nHere is the code:\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0)\nfinalpatentclasses = pd.DataFrame(columns=allcolumns)\nfor isec, secrow in IPCSection.iterrows():\n for icl, clrow in IPCClass.iterrows():\n newrow = pd.DataFrame(columns=allcolumns)\n values = np.concatenate((secrow.values, clrow.values), axis=0)\n newrow.loc[len(newrow.index)] = values \n finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0)\nfinalpatentclasses.reset_index(drop=False, inplace=True)\ndisplay(finalpatentclasses)\n\nUpdate: the code below is more efficient:\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns, IPCSubClass.columns, IPCGroup.columns), axis = 0)\nnewList = []\nfor secrow in IPCSection.itertuples():\n for clrow in IPCClass.itertuples():\n if (secrow[1] in clrow[1]):\n values = ([secrow[1], secrow[2], clrow[1], clrow[2]])\n new_row = {IPCSection.columns[0]: [secrow[1]], IPCSection.columns[1]: [secrow[2]],\n IPCClass.columns[0]: [clrow[1]], IPCClass.columns[1]: [clrow[2]]}\n newList.append(values)\nfinalpatentclasses = pd.DataFrame(newList, columns=allcolumns)\ndisplay(finalpatentclasses)\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "loops", "python" ]
stackoverflow_0074659968_dataframe_loops_python.txt
Q: Error: type(NoneType) has no len() attribute I was trying to solve a problem on leetcode but I keep running into an error that I don’t understand class Solution(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: longest = max(curr, longest, key=len) curr = curr[curr.index(s[i])+1:].append(s[i]) else: curr.append(s[i]) char.add(s[i]) return max(curr, longest, key=len) This is the code. The error refers to the fact that when I call the function max(), one of curr or longest has no len(). Aren’t they both lists? I looked up the solution, but it uses a slightly different method. A: As already mentioned by @mark-ransom in the comments, the problem is that when you set longest = max..., it ceases to be a list. I will propose a different way of solving this # lengthOfLongestSubstring # can be optimized to O(n) using sliding window technique class Solution2(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ if len(s) == 0: return 0 if len(s) == 1: return 1 max_len = 0 for i in range(len(s)): for j in range(i+1, len(s)+1): if len(set(s[i:j])) == len(s[i:j]): max_len = max(max_len, len(s[i:j])) else: break return max_len class SolutionWithProblem(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: longest = max(curr, longest, key=len) # problem here curr = curr[curr.index(s[i])+1:].append(s[i]) else: curr.append(s[i]) char.add(s[i]) return max(curr, longest, key=len) # call the function s = Solution2() print(s.lengthOfLongestSubstring("abcabcbb")) print(s.lengthOfLongestSubstring("bbbbb")) print(s.lengthOfLongestSubstring("pwwkew")) print(s.lengthOfLongestSubstring(" ")) print(s.lengthOfLongestSubstring("dvdf")) print(s.lengthOfLongestSubstring("anviaj")) print(s.lengthOfLongestSubstring("abba")) This might not be the best solution. Update to your solution class Solution3(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: if len(curr) > len(longest): longest = curr curr = [] char = set() char.add(s[i]) curr.append(s[i]) if len(curr) > len(longest): longest = curr return len(longest)
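The core trap is that list.append() returns None, so chaining it after a slice rebinds the name to None; a minimal sketch of the failing line and the safe pattern:
curr = ["a", "b", "c"]
ch = "b"

# Wrong: .append() returns None, so curr becomes None here
# curr = curr[curr.index(ch) + 1:].append(ch)

# Right: slice first, then append as a separate statement
curr = curr[curr.index(ch) + 1:]
curr.append(ch)
print(curr)  # ['c', 'b']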
Error: type(NoneType) has no len() attribute
I was trying to solve a problem on leetcode but I keep running into an error that I don’t understand class Solution(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: longest = max(curr, longest, key=len) curr = curr[curr.index(s[i])+1:].append(s[i]) else: curr.append(s[i]) char.add(s[i]) return max(curr, longest, key=len) This is the code. The error refers to the fact that when I call the function max(), one of curr or longest has no len(). Aren’t they both lists? I looked up the solution, but it uses a slightly different method.
[ "As already mentioned in the comment problem When you set longest = max... it ceases to be a list by @mark-ransom already. I will purpose different way of solving this\n# lengthOfLongestSubstring\n# can be optimized to O(n) using sliding window technique\nclass Solution2(object):\n def lengthOfLongestSubstring(self, s):\n \"\"\"\n :type s: str\n :rtype: int\n \"\"\"\n if len(s) == 0:\n return 0\n if len(s) == 1:\n return 1\n max_len = 0\n for i in range(len(s)):\n for j in range(i+1, len(s)+1):\n if len(set(s[i:j])) == len(s[i:j]):\n max_len = max(max_len, len(s[i:j]))\n else:\n break\n return max_len\n\n\nclass SolutionWithProblem(object):\n def lengthOfLongestSubstring(self, s):\n \"\"\"\n :type s: str\n :rtype: int\n \"\"\"\n char = set()\n longest = []\n curr = []\n for i in range(len(s)):\n if s[i] in char:\n longest = max(curr, longest, key=len) # problem here\n curr = curr[curr.index(s[i])+1:].append(s[i])\n else:\n curr.append(s[i])\n char.add(s[i])\n return max(curr, longest, key=len)\n\n# call the function\ns = Solution2()\nprint(s.lengthOfLongestSubstring(\"abcabcbb\"))\nprint(s.lengthOfLongestSubstring(\"bbbbb\"))\nprint(s.lengthOfLongestSubstring(\"pwwkew\"))\nprint(s.lengthOfLongestSubstring(\" \"))\nprint(s.lengthOfLongestSubstring(\"dvdf\"))\nprint(s.lengthOfLongestSubstring(\"anviaj\"))\nprint(s.lengthOfLongestSubstring(\"abba\"))\n\nThis might not be the best solution. Update to your solution\nclass Solution3(object):\n def lengthOfLongestSubstring(self, s):\n \"\"\"\n :type s: str\n :rtype: int\n \"\"\"\n char = set()\n longest = []\n curr = []\n for i in range(len(s)):\n if s[i] in char:\n if len(curr) > len(longest):\n longest = curr\n curr = []\n char = set()\n char.add(s[i])\n curr.append(s[i])\n if len(curr) > len(longest):\n longest = curr\n return len(longest)\n\n" ]
[ 0 ]
[]
[]
[ "algorithm", "python", "python_3.x" ]
stackoverflow_0074660122_algorithm_python_python_3.x.txt
Q: How to set input time limit for user in game? I was wondering how I can make a program with an input window of MAXIMUM 5 seconds (e.g. he can send input after 2 seconds) in Python. I decided to do a SIMPLE game where you basically have to rewrite a word in under 5 seconds. I know how to create input and make it wait EXACTLY 5 SECONDS, but what I want to achieve is to set the maximum time of input to 5 seconds, so if a user types an answer in let's say 2 seconds he will go to the next word. Could you tell me how to achieve my goal? Thanks in advance! for word in ["banana","earth","turtle","manchester","coctail","chicken"]: # User gets maximum of 5 seconds to write the word, # if he does it before 5 seconds pass, he goes to the next word (does not have to wait exactly 5 seconds, he # can send input in e.g. 2 seconds) # if he does not do it in 5 seconds he loses the game and it is finished user_input = input(f"Type word '{word}': ") #IF the word is correct go to next iteration if(user_input==word): continue #If the word is incorrect finish the game else: print("You lost") break I tried to do it with threading.Timer() but it doesn't work import threading class NoTime(Exception): pass def count_time(): raise NoTime for word in ["banana","earth","turtle","manchester","coctail","chicken"]: try: #Create timer which raises exception after 5 seconds timer = threading.Timer(5,count_time) timer.start() user_input = input(f"Type word '{word}': ") #if timer hasn't lasted 5 seconds then destroy it in order to prevent unwanted exception timer.cancel() if user_input==word: print("Correct") else: print("Incorrect, you LOSE!") break except NoTime: print("You run out of time, you lose") break The error I get Traceback (most recent call last): File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1394, in run self.function(*self.args, **self.kwargs) File "C:\Users\papit\OneDrive\Pulpit\Programming\Python Bro Course\Math\second\threading_training.py", line 7, in count_time raise NoTime NoTime
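For reference, a hedged, Unix-only sketch of a timed prompt built on select (it reads from stdin with a timeout, so it will not work in IDLE or a plain Windows console; the word list is shortened for brevity):
import select
import sys

def timed_input(prompt, timeout=5.0):
    print(prompt, end="", flush=True)
    # Wait up to `timeout` seconds for stdin to become readable
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        return sys.stdin.readline().rstrip("\n")
    return None  # timed out

for word in ["banana", "earth", "turtle"]:
    answer = timed_input(f"Type word '{word}': ")
    if answer != word:  # wrong word or timeout (None) both end the game
        print("You lost")
        break
    print("Correct")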
How to set input time limit for user in game?
I was wondering how I can make a program with an input window of MAXIMUM 5 seconds (e.g. he can send input after 2 seconds) in Python. I decided to do a SIMPLE game where you basically have to rewrite a word in under 5 seconds. I know how to create input and make it wait EXACTLY 5 SECONDS, but what I want to achieve is to set the maximum time of input to 5 seconds, so if a user types an answer in let's say 2 seconds he will go to the next word. Could you tell me how to achieve my goal? Thanks in advance! for word in ["banana","earth","turtle","manchester","coctail","chicken"]: # User gets maximum of 5 seconds to write the word, # if he does it before 5 seconds pass, he goes to the next word (does not have to wait exactly 5 seconds, he # can send input in e.g. 2 seconds) # if he does not do it in 5 seconds he loses the game and it is finished user_input = input(f"Type word '{word}': ") #IF the word is correct go to next iteration if(user_input==word): continue #If the word is incorrect finish the game else: print("You lost") break I tried to do it with threading.Timer() but it doesn't work import threading class NoTime(Exception): pass def count_time(): raise NoTime for word in ["banana","earth","turtle","manchester","coctail","chicken"]: try: #Create timer which raises exception after 5 seconds timer = threading.Timer(5,count_time) timer.start() user_input = input(f"Type word '{word}': ") #if timer hasn't lasted 5 seconds then destroy it in order to prevent unwanted exception timer.cancel() if user_input==word: print("Correct") else: print("Incorrect, you LOSE!") break except NoTime: print("You run out of time, you lose") break The error I get Traceback (most recent call last): File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1394, in run self.function(*self.args, **self.kwargs) File "C:\Users\papit\OneDrive\Pulpit\Programming\Python Bro Course\Math\second\threading_training.py", line 7, in count_time raise NoTime NoTime
[]
[]
[ "=======\nTo create a program with a maximum input time of 5 seconds in Python, you can use the time and select modules to implement a timeout for the input operation. The time module provides functions for working with time, such as measuring elapsed time, and the select module provides functions for waiting for input from multiple sources, including a timeout.\nTo create a program with a maximum input time of 5 seconds, you can use the following steps:\nImport the time and select modules at the beginning of your program:\nimport time\nimport select\nUse the time.time() function to get the current time at the start of the input operation:\nstart_time = time.time()\nUse the select.select() function to wait for input from the user, with a timeout of 5 seconds:\ntimeout = 5 # Set the timeout to 5 seconds\ninput_ready, _, _ = select.select([sys.stdin], [], [], timeout)\nIf the input_ready variable is not empty, which indicates that input was received from the user within the timeout, read the input from the user using the input() function:\nif input_ready:\nuser_input = input()\nIf the input_ready variable is empty, which indicates that the timeout expired without receiving input from the user, handle the timeout by either displaying an error message or taking another action, as appropriate for your program:\nelse:\n# Handle the timeout, e.g. by displaying an error message or taking another action\nUse the time.time() function to get the current time at the end of the input operation, and calculate the elapsed time by subtracting the start time from the end time:\nend_time = time.time()\nelapsed_time = end_time - start_time\nUse the elapsed time to determine whether the user's input was within the maximum time limit, and take appropriate action, such as displaying the input or moving on to the next word in your game:\nif elapsed_time <= timeout:\n# The user's input was within the time limit, so display it or take another action\nelse:\n# The user's input was not within the time limit, so handle the timeout\nOverall, to create a program with a maximum input time of 5 seconds in Python, you can use the time and select modules to implement a timeout for the input operation, and to handle the timeout if the user does not provide input within the maximum time limit. This allows you to ensure that the user's input is received within the specified time limit, and to take appropriate action based on the elapsed time.\n", "import threading\n\ndef lost():\n print(\"You run out of time, you lose\")\n\n\nfor word in [\"banana\", \"earth\", \"turtle\", \"manchester\", \"coctail\", \"chicken\"]:\n\n timer = threading.Timer(5, lost)\n timer.start()\n\n user_input = input(f\"Type word '{word}': \")\n timer.cancel()\n\n if user_input == word:\n print(\"Correct\")\n else:\n print(\"Incorrect, you LOSE!\")\n\n" ]
[ -1, -1 ]
[ "python", "python_3.x" ]
stackoverflow_0074660327_python_python_3.x.txt
Q: Plotting GIF on Folium Map I have a GIF of 24 seconds showing the temperature of -119.564209,38.503915,-114.060059,41.211203 region in the square (heat map) for 24 hrs. Is there a way I can plot this GIF on the Folium map (or any other interactive map in python) by giving the mentioned coordinates? A: Here's how I solved it using folium. from folium import raster_layers m = folium.Map(location=[100, 100], zoom_start=2, tiles='OpenStreetMap') raster_layers.ImageOverlay('temp.gif', [[-119.564209,38.503915],[-114.060059,41.211203]], opacity=0.8, ).add_to(m) folium.LayerControl().add_to(m) m
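One caution on the bounds: folium's ImageOverlay expects [[south, west], [north, east]] latitude/longitude pairs, and the numbers in the question look like longitude/latitude; a hedged sketch with the axes ordered that way (map center is an assumption, roughly the middle of the box):
import folium
from folium import raster_layers

# Bounds as [[lat_min, lon_min], [lat_max, lon_max]]
bounds = [[38.503915, -119.564209], [41.211203, -114.060059]]

m = folium.Map(location=[39.85, -116.8], zoom_start=6, tiles="OpenStreetMap")
raster_layers.ImageOverlay("temp.gif", bounds, opacity=0.8).add_to(m)
folium.LayerControl().add_to(m)
m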
Plotting GIF on Folium Map
I have a GIF of 24 seconds showing the temperature of -119.564209,38.503915,-114.060059,41.211203 region in the square (heat map) for 24 hrs. Is there a way I can plot this GIF on the Folium map (or any other interactive map in python) by giving the mentioned coordinates?
[ "Here's how I solved it using folium.\nfrom folium import raster_layers\nm = folium.Map(location=[100, 100], zoom_start=2, tiles='OpenStreetMap')\n\nraster_layers.ImageOverlay('temp.gif',\n [[-119.564209,38.503915],[-114.060059,41.211203]],\n opacity=0.8,\n ).add_to(m)\n\nfolium.LayerControl().add_to(m)\nm\n\n" ]
[ 0 ]
[]
[]
[ "folium", "gis", "python" ]
stackoverflow_0074657170_folium_gis_python.txt
Q: How to explicitly add role to a user in discord bot I'm relatively new to programming and am trying to code a bot for a server I'm in. I'd ideally like to assign a user to a specific role based on them sending a message containing 'gm' or 'good morning'. Right now, the bot can read the message and send a reply. But I'm a bit lost trying to figure out how to actually add the role to a user once the 'gm' message is read. @client.event async def on_ready(): print(f'We have logged in as {client.user}') async def addRole(user : discord.Member, role : discord.Role = BagChaser): if role in user.roles: return else: await user.add_roles(role) @client.event async def on_message(message): if message.author == client.user: return msg = message.content.lower() words_list = ['gm', 'good morning'] if any(word in msg for word in words_list): # await addRole(message.author, BagChaser) await message.channel.send(f'Lets get this bag, {message.author}') await message.author.add_roles(BagChaser) The commented line and the last line were some ideas of how to add the role 'BagChaser' to the author of the message. I tried setting the role parameter in the addRole function to BagChaser since that will never change, but this seems incorrect. The role is already made in my server, but I'm not sure how I can make the bot aware of that role in the code. Any help would be greatly appreciated! I tried explicitly calling out my role but I can't get it recognized. A: You need a role object, and to do that, you need a guild object, which you can get with message.author.guild. From this, you can get the Role object: role = message.author.guild.get_role(ROLE_ID) Note that you need to get the role ID yourself. The easiest method to do so is to go into Discord and enable Developer settings, then right click the role on someone's profile and click "Copy ID". Once you have this role object, you can just apply it with await message.author.add_roles(role). Complete code: role_id = ... author = message.author role = author.guild.get_role(role_id) await author.add_roles(role) Make sure your bot has the Manage Roles permission
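If you would rather look the role up by name than by ID, discord.utils.get works too; a hedged sketch of the handler (the role name and the intents setup are assumptions, written against discord.py 2.x):
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text in discord.py 2.x
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if any(word in message.content.lower() for word in ("gm", "good morning")):
        # Look the role up by name in this guild (assumed role name)
        role = discord.utils.get(message.guild.roles, name="BagChaser")
        if role is not None and role not in message.author.roles:
            await message.author.add_roles(role)
            await message.channel.send(f"Lets get this bag, {message.author.mention}")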
How to explicitly add role to a user in discord bot
I'm relatively new to programming and am trying to code a bot for a server I'm in. I'd ideally like to assign a user to a specific role based on them sending a message containing 'gm' or 'good morning'. Right now, the bot can read the message and send a reply. But I'm a bit lost trying to figure out how to actually add the role to a user once the 'gm' message is read. @client.event async def on_ready(): print(f'We have logged in as {client.user}') async def addRole(user : discord.Member, role : discord.Role = BagChaser): if role in user.roles: return else: await user.add_roles(role) @client.event async def on_message(message): if message.author == client.user: return msg = message.content.lower() words_list = ['gm', 'good morning'] if any(word in msg for word in words_list): # await addRole(message.author, BagChaser) await message.channel.send(f'Lets get this bag, {message.author}') await message.author.add_roles(BagChaser) The commented line and the last line were some ideas of how to add the role 'BagChaser' to the author of the message. I tried setting the role parameter in the addRole function to BagChaser since that will never change, but this seems incorrect. The role is already made in my server, but I'm not sure how I can make the bot aware of that role in the code. Any help would be greatly appreciated! I tried explicitly calling out my role but I can't get it recognized.
[ "You need a role object, and to do that, you need a guild object, which you can get with message.author.guild.\nFrom this, you can get the Role object:\nrole = await message.author.guild.get_role(ROLE_ID)\n\nNote that you need to get the role ID yourself. The easiest method to do so is to go into Discord and enable Developer settings, then right click the role on someone's profile and click \"Copy ID\". Once you have this role object, you can just apply it with message.author.add_roles(role).\nComplete code:\nrole_id = ...\n\nauthor = message.author;\nrole = await author.guild.get_role(role_id)\nawait author.add_roles(role)\n\nMake sure your bot has the Manage Roles permission\n" ]
[ 0 ]
[]
[]
[ "bots", "discord", "python" ]
stackoverflow_0074660454_bots_discord_python.txt
Q: pip install . creates only the dist-info not the package I am trying to make a python package which I want to install using pip install . locally. The package name is listed in pip freeze but import <package> results in an error No module named <package>. Also the site-packages folder does only contain a dist-info folder. find_packages() is able to find packages. What am I missing? import io import os import sys from shutil import rmtree from setuptools import find_packages, setup, Command # Package meta-data. NAME = '<package>' DESCRIPTION = 'description' URL = '' EMAIL = 'email' AUTHOR = 'name' # What packages are required for this module to be executed? REQUIRED = [ # 'requests', 'maya', 'records', ] # The rest you shouldn't have to touch too much :) # ------------------------------------------------ # Except, perhaps the License and Trove Classifiers! # If you do change the License, remember to change the Trove Classifier for that! here = os.path.abspath(os.path.dirname(__file__)) # Where the magic happens: setup( name=NAME, #version=about['__version__'], description=DESCRIPTION, # long_description=long_description, author=AUTHOR, author_email=EMAIL, url=URL, packages=find_packages(), # If your package is a single module, use this instead of 'packages': # py_modules=['mypackage'], # entry_points={ # 'console_scripts': ['mycli=mymodule:cli'], # }, install_requires=REQUIRED, include_package_data=True, license='MIT', classifiers=[ # Trove classifiers # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers 'License :: OSI Approved :: MIT License', 'Programming Language :: Python', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy' ], ) A: Since the question has become quite popular, here are the diagnosis steps to go through when you're missing files after installation. Imagine having an example project with the following structure: root ├── spam │ ├── __init__.py │ ├── data.txt │ ├── eggs.py │ └── fizz │ ├── __init__.py │ └── buzz.py ├── bacon.py └── setup.py Now I run pip install ., check that the package is installed: $ pip list Package Version ---------- ------- mypkg 0.1 pip 19.0.1 setuptools 40.6.3 wheel 0.32.3 but see neither spam, nor spam/eggs.py nor bacon.py nor spam/fizz/buzz.py in the list of files belonging to the installed package: $ pip show -f mypkg Name: mypkg Version: 0.1 ... Files: mypkg-0.1.dist-info/DESCRIPTION.rst mypkg-0.1.dist-info/INSTALLER mypkg-0.1.dist-info/METADATA mypkg-0.1.dist-info/RECORD mypkg-0.1.dist-info/WHEEL mypkg-0.1.dist-info/metadata.json mypkg-0.1.dist-info/top_level.txt So what to do now? Diagnose by inspecting the wheel build log Unless told not to do so, pip will always try to build a wheel file and install your package from it. We can inspect the log for the wheel build process if reinstalling in the verbose mode. First step is to uninstall the package: $ pip uninstall -y mypkg ... then install it again, but now with an additional argument: $ pip install . -vvv ... Now if I inspect the log: $ pip install . 
-vvv | grep 'adding' adding 'mypkg-0.1.dist-info/METADATA' adding 'mypkg-0.1.dist-info/WHEEL' adding 'mypkg-0.1.dist-info/top_level.txt' adding 'mypkg-0.1.dist-info/RECORD' I notice that no files from the spam directory or bacon.py are mentioned anywhere. This means they were simply not included in the wheel file and hence not installed by pip. The most common error sources are: Missing packages: check the packages argument Verify you have passed the packages argument to the setup function. Check that you have mentioned all of the packages that should be installed. Subpackages will not be collected automatically if only the parent package is mentioned! For example, in the setup script from setuptools import setup setup( name='mypkg', version='0.1', packages=['spam'] ) spam will be installed, but not spam.fizz because it is a package itself and must be mentioned explicitly. Fixing it: from setuptools import setup setup( name='mypkg', version='0.1', packages=['spam', 'spam.fizz'] ) If you have lots of packages, use setuptools.find_packages to automate the process: from setuptools import find_packages, setup setup( name='mypkg', version='0.1', packages=find_packages() # will return a list ['spam', 'spam.fizz'] ) In case you are missing a module: Missing modules: check the py_modules argument In the above examples, I will be missing bacon.py after installation since it doesn't belong to any package. I have to provide its module name in the separate argument py_modules: from setuptools import find_packages, setup setup( name='mypkg', version='0.1', packages=find_packages(), py_modules=['bacon'] ) Missing data files: check the package_data argument I have all the source code files in place now, but the data.txt file is still not installed. Data files located under package directories should be added via the package_data argument. Fixing the above setup script: from setuptools import find_packages, setup setup( name='mypkg', version='0.1', packages=find_packages(), package_data={'spam': ['data.txt']}, py_modules=['bacon'] ) Don't be tempted to use the data_files argument. Place the data files under a package and configure package_data instead. After fixing the setup script, verify the package files are in place after installation If I now reinstall the package, I will notice all of the files are added to the wheel: $ pip install . -vvv | grep 'adding' adding 'bacon.py' adding 'spam/__init__.py' adding 'spam/data.txt' adding 'spam/eggs.py' adding 'spam/fizz/__init__.py' adding 'spam/fizz/buzz.py' adding 'mypkg-0.1.dist-info/METADATA' adding 'mypkg-0.1.dist-info/WHEEL' adding 'mypkg-0.1.dist-info/top_level.txt' adding 'mypkg-0.1.dist-info/RECORD' They will also be visible in the list of files belonging to mypkg: $ pip show -f mypkg Name: mypkg Version: 0.1 ... Files: __pycache__/bacon.cpython-36.pyc bacon.py mypkg-0.1.dist-info/INSTALLER mypkg-0.1.dist-info/METADATA mypkg-0.1.dist-info/RECORD mypkg-0.1.dist-info/WHEEL mypkg-0.1.dist-info/top_level.txt spam/__init__.py spam/__pycache__/__init__.cpython-36.pyc spam/__pycache__/eggs.cpython-36.pyc spam/data.txt spam/eggs.py spam/fizz/__init__.py spam/fizz/__pycache__/__init__.cpython-36.pyc spam/fizz/__pycache__/buzz.cpython-36.pyc spam/fizz/buzz.py A: For me, I noticed something weird if you do this: # Not in the setup.py directory python /path/to/folder/setup.py bdist_wheel It will only install the .dist-info folder in your site-packages folder when you install the wheel. 
However, if you do this: cd /path/to/folder \ && python setup.py bdist_wheel The wheel will include all your files. A: If you are on Windows 10+, one way you could make sure that you had all the correct installations was to click start in the bottom left-hand corner and search cmd.exe and right-click on "Command Prompt" (Make sure you choose "Run as Administrator"). Type "cd path to your Python 3.X installation". You can find this path in File Explorer (go to the folder where Python is installed) and then at the top. Copy this, and put it in where I wrote above path to your Python 3.X installation. Once you do that and click enter, type "python -m pip install package" (package signifies the package you would like to install). Your Python program should now work perfectly. A: I had the same problem, and updating setuptools helped: python3 -m pip install --upgrade pip setuptools wheel After that, reinstall the package, and it should work fine :) A: Make certain that your src files are in example_package_YOUR_USERNAME_HERE (this is the example package name that is used in the docs) and not in src. Errantly putting the files in src can have the effect described in the question. Reference: https://packaging.python.org/en/latest/tutorials/packaging-projects/ The package should be set up like this: packaging_tutorial/ └── src/ └── example_package_YOUR_USERNAME_HERE/ ├── __init__.py └── example.py
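For the src layout described in the last answer, setuptools also needs to be told where the packages live; a minimal sketch of the relevant setup() arguments:
from setuptools import find_packages, setup

setup(
    name="example_package_YOUR_USERNAME_HERE",
    version="0.1",
    package_dir={"": "src"},              # packages live under src/
    packages=find_packages(where="src"),  # so search there, not the project root
)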
pip install . creates only the dist-info not the package
I am trying to make a python package which I want to install using pip install . locally. The package name is listed in pip freeze but import <package> results in an error No module named <package>. Also the site-packages folder does only contain a dist-info folder. find_packages() is able to find packages. What am I missing? import io import os import sys from shutil import rmtree from setuptools import find_packages, setup, Command # Package meta-data. NAME = '<package>' DESCRIPTION = 'description' URL = '' EMAIL = 'email' AUTHOR = 'name' # What packages are required for this module to be executed? REQUIRED = [ # 'requests', 'maya', 'records', ] # The rest you shouldn't have to touch too much :) # ------------------------------------------------ # Except, perhaps the License and Trove Classifiers! # If you do change the License, remember to change the Trove Classifier for that! here = os.path.abspath(os.path.dirname(__file__)) # Where the magic happens: setup( name=NAME, #version=about['__version__'], description=DESCRIPTION, # long_description=long_description, author=AUTHOR, author_email=EMAIL, url=URL, packages=find_packages(), # If your package is a single module, use this instead of 'packages': # py_modules=['mypackage'], # entry_points={ # 'console_scripts': ['mycli=mymodule:cli'], # }, install_requires=REQUIRED, include_package_data=True, license='MIT', classifiers=[ # Trove classifiers # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers 'License :: OSI Approved :: MIT License', 'Programming Language :: Python', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy' ], )
[ "Since the question has become quite popular, here are the diagnosis steps to go through when you're missing files after installation. Imagine having an example project with the following structure:\nroot\n├── spam\n│ ├── __init__.py\n│ ├── data.txt\n│ ├── eggs.py\n│ └── fizz\n│ ├── __init__.py\n│ └── buzz.py\n├── bacon.py\n└── setup.py\n\nNow I run pip install ., check that the package is installed:\n$ pip list\nPackage Version\n---------- -------\nmypkg 0.1 \npip 19.0.1 \nsetuptools 40.6.3 \nwheel 0.32.3 \n\nbut see neither spam, nor spam/eggs.py nor bacon.py nor spam/fizz/buzz.py in the list of files belonging to the installed package:\n$ pip show -f mypkg\nName: mypkg\nVersion: 0.1\n...\nFiles:\n mypkg-0.1.dist-info/DESCRIPTION.rst\n mypkg-0.1.dist-info/INSTALLER\n mypkg-0.1.dist-info/METADATA\n mypkg-0.1.dist-info/RECORD\n mypkg-0.1.dist-info/WHEEL\n mypkg-0.1.dist-info/metadata.json\n mypkg-0.1.dist-info/top_level.txt\n\nSo what to do now?\nDiagnose by inspecting the wheel build log\nUnless told not to do so, pip will always try to build a wheel file and install your package from it. We can inspect the log for the wheel build process if reinstalling in the verbose mode. First step is to uninstall the package:\n$ pip uninstall -y mypkg\n...\n\nthen install it again, but now with an additional argument:\n$ pip install . -vvv\n...\n\nNow if I inspect the log:\n$ pip install . -vvv | grep 'adding'\n adding 'mypkg-0.1.dist-info/METADATA'\n adding 'mypkg-0.1.dist-info/WHEEL'\n adding 'mypkg-0.1.dist-info/top_level.txt'\n adding 'mypkg-0.1.dist-info/RECORD'\n\nI notice that no files from the spam directory or bacon.py are mentioned anywhere. This means they were simply not included in the wheel file and hence not installed by pip. The most common error sources are:\nMissing packages: check the packages argument\nVerify you have passed the packages argument to the setup function. Check that you have mentioned all of the packages that should be installed. Subpackages will not be collected automatically if only the parent package is mentioned! For example, in the setup script\nfrom setuptools import setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=['spam']\n)\n\nspam will be installed, but not spam.fizz because it is a package itself and must be mentioned explicitly. Fixing it:\nfrom setuptools import setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=['spam', 'spam.fizz']\n)\n\nIf you have lots of packages, use setuptools.find_packages to automate the process:\nfrom setuptools import find_packages, setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=find_packages() # will return a list ['spam', 'spam.fizz']\n)\n\nIn case you are missing a module:\nMissing modules: check the py_modules argument\nIn the above examples, I will be missing bacon.py after installation since it doesn't belong to any package. I have to provide its module name in the separate argument py_modules:\nfrom setuptools import find_packages, setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=find_packages(),\n py_modules=['bacon']\n)\n\nMissing data files: check the package_data argument\nI have all the source code files in place now, but the data.txt file is still not installed. Data files located under package directories should be added via the package_data argument. 
Fixing the above setup script:\nfrom setuptools import find_packages, setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=find_packages(),\n package_data={'spam': ['data.txt']},\n py_modules=['bacon']\n)\n\nDon't be tempted to use the data_files argument. Place the data files under a package and configure package_data instead.\nAfter fixing the setup script, verify the package files are in place after installation\nIf I now reinstall the package, I will notice all of the files are added to the wheel:\n$ pip install . -vvv | grep 'adding'\n adding 'bacon.py'\n adding 'spam/__init__.py'\n adding 'spam/data.txt'\n adding 'spam/eggs.py'\n adding 'spam/fizz/__init__.py'\n adding 'spam/fizz/buzz.py'\n adding 'mypkg-0.1.dist-info/METADATA'\n adding 'mypkg-0.1.dist-info/WHEEL'\n adding 'mypkg-0.1.dist-info/top_level.txt'\n adding 'mypkg-0.1.dist-info/RECORD'\n\nThey will also be visible in the list of files belonging to mypkg:\n$ pip show -f mypkg\nName: mypkg\nVersion: 0.1\n...\nFiles:\n __pycache__/bacon.cpython-36.pyc\n bacon.py\n mypkg-0.1.dist-info/INSTALLER\n mypkg-0.1.dist-info/METADATA\n mypkg-0.1.dist-info/RECORD\n mypkg-0.1.dist-info/WHEEL\n mypkg-0.1.dist-info/top_level.txt\n spam/__init__.py\n spam/__pycache__/__init__.cpython-36.pyc\n spam/__pycache__/eggs.cpython-36.pyc\n spam/data.txt\n spam/eggs.py\n spam/fizz/__init__.py\n spam/fizz/__pycache__/__init__.cpython-36.pyc\n spam/fizz/__pycache__/buzz.cpython-36.pyc\n spam/fizz/buzz.py\n\n", "For me, I noticed something weird if you do this:\n# Not in the setup.py directory\npython /path/to/folder/setup.py bdist_wheel\n\nIt will only install the .dist-info folder in your site-packages folder when you install the wheel.\nHowever, if you do this:\ncd /path/to/folder \\\n&& python setup.py bdist_wheel\n\nThe wheel will include all your files.\n", "If you are on Windows 10+, one way you could make sure that you had all the correct installations was to click start in the bottom left-hand corner and search cmd.exe and right-click on \"Command Prompt\" (Make sure you choose \"Run as Administrator\"). Type \"cd path to your Python 3.X installation\". You can find this path in File Explorer (go to the folder where Python is installed) and then at the top. Copy this, and put it in where I wrote above path to your Python 3.X installation. Once you do that and click enter, type \"python -m pip install package\" (package signifies the package you would like to install). Your Python program should now work perfectly. \n", "I had the same problem, and updating setuptools helped:\npython3 -m pip install --upgrade pip setuptools wheel\n\nAfter that, reinstall the package, and it should work fine :)\n", "Make certain that your src files are in example_package_YOUR_USERNAME_HERE (this is the example package name that is used in the docs) and not in src. Errantly putting the files in src can have the effect described in the question.\nReference: https://packaging.python.org/en/latest/tutorials/packaging-projects/\nThe package should be set up like this:\npackaging_tutorial/\n└── src/\n └── example_package_YOUR_USERNAME_HERE/\n ├── __init__.py\n └── example.py\n\n" ]
[ 127, 1, 0, 0, 0 ]
[]
[]
[ "package", "pip", "python", "setup.py", "setuptools" ]
stackoverflow_0050585246_package_pip_python_setup.py_setuptools.txt
Q: Update treemodel in real time PySide How can I make it so when I click the Randomize button, for the selected treeview items, the treeview updates to show the changes to data, while maintaining the expanding items states and the users selection? Is this accomplished by subclasses the StandardItemModel or ProxyModel class? Help is much appreciated as I'm not sure how to resolve this issue. It's a very simple example demonstrating the issue. When clicking Randmoize, all it's doing is randomly assigning a new string (name) to each coaches position on the selected Team. import os import sys import random from PySide2 import QtGui, QtWidgets, QtCore class Team(object): def __init__(self, name='', nameA='', nameB='', nameC='', nameD=''): super(Team, self).__init__() self.name = name self.headCoach = nameA self.assistantCoach = nameB self.offensiveCoach = nameC self.defensiveCoach = nameD def randomize(self): names = ['doug', 'adam', 'seth', 'emily', 'kevin', 'mike', 'sarah', 'cassy', 'courtney', 'henry'] cnt = len(names)-1 self.headCoach = names[random.randint(0, cnt)] self.assistantCoach = names[random.randint(0, cnt)] self.offensiveCoach = names[random.randint(0, cnt)] self.defensiveCoach = names[random.randint(0, cnt)] print('TRADED PLAYERS') TEAMS = [ Team('Cowboys', 'doug', 'adam', 'seth', 'emily'), Team('Packers'), Team('Lakers', 'kevin', 'mike', 'sarah', 'cassy'), Team('Yankees', 'courtney', 'henry'), Team('Gators'), ] class MainDialog(QtWidgets.QMainWindow): def __init__(self, parent=None): super(MainDialog, self).__init__(parent) self.resize(600,400) self.button = QtWidgets.QPushButton('Randomize') self.itemModel = QtGui.QStandardItemModel() self.proxyModel = QtCore.QSortFilterProxyModel() self.proxyModel.setSourceModel(self.itemModel) self.proxyModel.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setDynamicSortFilter(True) self.proxyModel.setFilterKeyColumn(0) self.treeView = QtWidgets.QTreeView() self.treeView.setModel(self.proxyModel) self.treeView.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers) self.treeView.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) self.treeView.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) self.treeView.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows) self.treeView.setAlternatingRowColors(True) self.treeView.setSortingEnabled(True) self.treeView.setUniformRowHeights(False) self.treeView.header().setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents) self.treeView.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) self.selectionModel = self.treeView.selectionModel() # layout self.mainLayout = QtWidgets.QVBoxLayout() self.mainLayout.addWidget(self.treeView) self.mainLayout.addWidget(self.button) self.mainWidget = QtWidgets.QWidget() self.mainWidget.setLayout(self.mainLayout) self.setCentralWidget(self.mainWidget) # connections self.selectionModel.selectionChanged.connect(self.updateControls) self.button.clicked.connect(self.randomizeTeams) # begin self.populateModel() self.updateControls() def randomizeTeams(self): for proxyIndex in self.selectionModel.selectedRows(): sourceIndex = self.proxyModel.mapToSource(proxyIndex) item = self.itemModel.itemFromIndex(sourceIndex) team = item.data(QtCore.Qt.UserRole) team.randomize() # UPDATE UI... 
def updateControls(self): self.button.setEnabled(self.selectionModel.hasSelection()) def populateModel(self): self.itemModel.clear() self.itemModel.setHorizontalHeaderLabels(['Position', 'Name']) # add teams for ts in TEAMS: col1 = QtGui.QStandardItem(ts.name) col1.setData(ts, QtCore.Qt.UserRole) # add coaches childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.headCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.assistantCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Offensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.offensiveCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Defensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.defensiveCoach) col1.appendRow([childCol1, childCol2]) self.itemModel.appendRow([col1]) self.itemModel.setSortRole(QtCore.Qt.DisplayRole) self.itemModel.sort(0, QtCore.Qt.AscendingOrder) self.proxyModel.sort(0, QtCore.Qt.AscendingOrder) def main(): app = QtWidgets.QApplication(sys.argv) window = MainDialog() window.show() app.exec_() if __name__ == '__main__': pass main() A: Your Team class should be a subclass of QStandardItem, which will be the top-level parent in the model. This class should create its own child items (as you are currently doing in the for-loop of populateModel), and its randomize method should directly reset the item-data of those children. This will ensure the changes are immediately reflected in the model. So - it's really just a matter of taking the code you already have and refactoring it accordingly. For example, something like this should work: TEAMS = { 'Cowboys': ('doug', 'adam', 'seth', 'emily'), 'Packers': (), 'Lakers': ('kevin', 'mike', 'sarah', 'cassy'), 'Yankees': ('courtney', 'henry'), 'Gators': (), } class Team(QtGui.QStandardItem): def __init__(self, name): super(Team, self).__init__(name) for coach in ('Head', 'Assistant', 'Offensive', 'Defensive'): childCol1 = QtGui.QStandardItem(f'{coach} Coach') childCol2 = QtGui.QStandardItem() self.appendRow([childCol1, childCol2]) def populate(self, head='', assistant='', offensive='', defensive=''): self.child(0, 1).setText(head) self.child(1, 1).setText(assistant) self.child(2, 1).setText(offensive) self.child(3, 1).setText(defensive) def randomize(self, names): self.populate(*random.sample(names, 4)) class MainDialog(QtWidgets.QMainWindow): ... def randomizeTeams(self): for proxyIndex in self.selectionModel.selectedRows(): sourceIndex = self.proxyModel.mapToSource(proxyIndex) item = self.itemModel.itemFromIndex(sourceIndex) if not isinstance(item, Team): item = item.parent() item.randomize(self._coaches) def populateModel(self): self.itemModel.clear() self.itemModel.setHorizontalHeaderLabels(['Position', 'Name']) self._coaches = [] # add teams for name, coaches in TEAMS.items(): team = Team(name) team.populate(*coaches) self._coaches.extend(coaches) self.itemModel.appendRow([team]) self.itemModel.setSortRole(QtCore.Qt.DisplayRole) self.itemModel.sort(0, QtCore.Qt.AscendingOrder) self.proxyModel.sort(0, QtCore.Qt.AscendingOrder)
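A usage note on the approach above, with illustrative names (model stands in for the QStandardItemModel instance): because setting an item's data emits dataChanged, mutating the child items in place is exactly what lets the view repaint without losing selection or expansion state. A minimal sketch:
item = model.item(0, 0).child(0, 1)  # hypothetical address: first team, first coach row, 'Name' column
item.setText("new coach name")       # the view updates this cell in place; nothing is repopulated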
Update treemodel in real time PySide
How can I make it so when I click the Randomize button, for the selected treeview items, the treeview updates to show the changes to data, while maintaining the expanding items states and the users selection? Is this accomplished by subclasses the StandardItemModel or ProxyModel class? Help is much appreciated as I'm not sure how to resolve this issue. It's a very simple example demonstrating the issue. When clicking Randmoize, all it's doing is randomly assigning a new string (name) to each coaches position on the selected Team. import os import sys import random from PySide2 import QtGui, QtWidgets, QtCore class Team(object): def __init__(self, name='', nameA='', nameB='', nameC='', nameD=''): super(Team, self).__init__() self.name = name self.headCoach = nameA self.assistantCoach = nameB self.offensiveCoach = nameC self.defensiveCoach = nameD def randomize(self): names = ['doug', 'adam', 'seth', 'emily', 'kevin', 'mike', 'sarah', 'cassy', 'courtney', 'henry'] cnt = len(names)-1 self.headCoach = names[random.randint(0, cnt)] self.assistantCoach = names[random.randint(0, cnt)] self.offensiveCoach = names[random.randint(0, cnt)] self.defensiveCoach = names[random.randint(0, cnt)] print('TRADED PLAYERS') TEAMS = [ Team('Cowboys', 'doug', 'adam', 'seth', 'emily'), Team('Packers'), Team('Lakers', 'kevin', 'mike', 'sarah', 'cassy'), Team('Yankees', 'courtney', 'henry'), Team('Gators'), ] class MainDialog(QtWidgets.QMainWindow): def __init__(self, parent=None): super(MainDialog, self).__init__(parent) self.resize(600,400) self.button = QtWidgets.QPushButton('Randomize') self.itemModel = QtGui.QStandardItemModel() self.proxyModel = QtCore.QSortFilterProxyModel() self.proxyModel.setSourceModel(self.itemModel) self.proxyModel.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setDynamicSortFilter(True) self.proxyModel.setFilterKeyColumn(0) self.treeView = QtWidgets.QTreeView() self.treeView.setModel(self.proxyModel) self.treeView.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers) self.treeView.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) self.treeView.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) self.treeView.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows) self.treeView.setAlternatingRowColors(True) self.treeView.setSortingEnabled(True) self.treeView.setUniformRowHeights(False) self.treeView.header().setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents) self.treeView.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) self.selectionModel = self.treeView.selectionModel() # layout self.mainLayout = QtWidgets.QVBoxLayout() self.mainLayout.addWidget(self.treeView) self.mainLayout.addWidget(self.button) self.mainWidget = QtWidgets.QWidget() self.mainWidget.setLayout(self.mainLayout) self.setCentralWidget(self.mainWidget) # connections self.selectionModel.selectionChanged.connect(self.updateControls) self.button.clicked.connect(self.randomizeTeams) # begin self.populateModel() self.updateControls() def randomizeTeams(self): for proxyIndex in self.selectionModel.selectedRows(): sourceIndex = self.proxyModel.mapToSource(proxyIndex) item = self.itemModel.itemFromIndex(sourceIndex) team = item.data(QtCore.Qt.UserRole) team.randomize() # UPDATE UI... 
def updateControls(self): self.button.setEnabled(self.selectionModel.hasSelection()) def populateModel(self): self.itemModel.clear() self.itemModel.setHorizontalHeaderLabels(['Position', 'Name']) # add teams for ts in TEAMS: col1 = QtGui.QStandardItem(ts.name) col1.setData(ts, QtCore.Qt.UserRole) # add coaches childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.headCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.assistantCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Offensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.offensiveCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Defensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.defensiveCoach) col1.appendRow([childCol1, childCol2]) self.itemModel.appendRow([col1]) self.itemModel.setSortRole(QtCore.Qt.DisplayRole) self.itemModel.sort(0, QtCore.Qt.AscendingOrder) self.proxyModel.sort(0, QtCore.Qt.AscendingOrder) def main(): app = QtWidgets.QApplication(sys.argv) window = MainDialog() window.show() app.exec_() if __name__ == '__main__': pass main()
[ "Your Team class should be a subclass of QStandardItem, which will be the top-level parent in the model. This class should create its own child items (as you are currently doing in the for-loop of populateModel), and its randomize method should directly reset the item-data of those children. This will ensure the changes are immediately reflected in the model.\nSo - it's really just a matter of taking the code you already have and refactoring it accordingly. For example, something like this should work:\nTEAMS = {\n 'Cowboys': ('doug', 'adam', 'seth', 'emily'),\n 'Packers': (),\n 'Lakers': ('kevin', 'mike', 'sarah', 'cassy'),\n 'Yankees': ('courtney', 'henry'),\n 'Gators': (),\n }\n\nclass Team(QtGui.QStandardItem):\n def __init__(self, name):\n super(Team, self).__init__(name)\n for coach in ('Head', 'Assistant', 'Offensive', 'Defensive'):\n childCol1 = QtGui.QStandardItem(f'{coach} Coach')\n childCol2 = QtGui.QStandardItem()\n self.appendRow([childCol1, childCol2])\n\n def populate(self, head='', assistant='', offensive='', defensive=''):\n self.child(0, 1).setText(head)\n self.child(1, 1).setText(assistant)\n self.child(2, 1).setText(offensive)\n self.child(3, 1).setText(defensive)\n\n def randomize(self, names):\n self.populate(*random.sample(names, 4))\n\nclass MainDialog(QtWidgets.QMainWindow):\n ...\n def randomizeTeams(self):\n for proxyIndex in self.selectionModel.selectedRows():\n sourceIndex = self.proxyModel.mapToSource(proxyIndex)\n item = self.itemModel.itemFromIndex(sourceIndex)\n if not isinstance(item, Team):\n item = item.parent()\n item.randomize(self._coaches)\n\n def populateModel(self):\n self.itemModel.clear()\n self.itemModel.setHorizontalHeaderLabels(['Position', 'Name'])\n self._coaches = []\n # add teams\n for name, coaches in TEAMS.items():\n team = Team(name)\n team.populate(*coaches)\n self._coaches.extend(coaches)\n self.itemModel.appendRow([team])\n self.itemModel.setSortRole(QtCore.Qt.DisplayRole)\n self.itemModel.sort(0, QtCore.Qt.AscendingOrder)\n self.proxyModel.sort(0, QtCore.Qt.AscendingOrder)\n\n" ]
[ 0 ]
[]
[]
[ "pyside", "python", "qstandarditemmodel" ]
stackoverflow_0074646878_pyside_python_qstandarditemmodel.txt
Q: Python: What does this error message mean and why are we getting it? We are trying to create a new Excel file with nested data using Python code. Here is the code for reference: `import glob import pandas as pd import re import openpyxl dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1") line_numbers = [4, 7] print("Heey, we read") dp_max = dp.groupby(['Subject', 'Date & Time', 'Trees Again', 'DifficultyLevel', 'Block', 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', 'Update'], sort=False).max() dp_max = dp_max[["Total Training Time"]] print("This worked. Good start. Yaaaay.s") dp_max.to_excel('unpredictable_grouped_max_heregoesnothing.xlsx', index=True) print("This worked. Yaaaay.s") dp['Signal_Detection2'] = dp.loc[:, 'Signal_Detection'] dp_count = dp.groupby(['Subject', 'Signal_Detection'], sort=False).count()[["Signal_Detection2"]] dp_count.to_excel('unpredictable_grouped_signal_count_heregoesnothing.xlsx', index=True) Unexpected exception formatting exception. Falling back to standard exception Output exceeds the size limit. Open the full output data in a text editor Traceback (most recent call last): File "C:\Users\mxa210135\AppData\Roaming\Python\Python38\site-packages\IPython\core\interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-9-853a8bf5b14e>", line 5, in <module> dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1")` The code above is what we had tried and it had worked previously. We only added the 'Trees Again' variable and 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', and lastly 'Update'. Please let me know if more information is needed and I will happily provide it. We tried splitting the large file in half and run the code on both, but it did not work and gave us the same error message. A: The error message indicates that an exception occurred while trying to read an Excel file using the pd.read_excel function. The most likely cause of the error is that the file "UnpredictableDataMerge.xlsx" was not found in the current working directory. You can check the current working directory by running the following code: import os print(os.getcwd()) This will print the current working directory, which is the directory where Python is looking for the file. Make sure that the file "UnpredictableDataMerge.xlsx" is located in the current working directory, or provide the full path to the file in the pd.read_excel function. Another possible cause of the error are: the file is open in another program, such as Excel, and is locked for editing. In that case, you can try closing the file in Excel, and then run the code again. The sheet "Sheet1" does not exist in the Excel file. Make sure that the sheet name is spelled correctly, and that it exists in the file. You can also try omitting the sheet_name parameter to read the first sheet in the file by default. There is not enough memory available to read the Excel file. Make sure that you have enough free memory to read the file. You can try closing other programs to free up memory, or increasing the amount of memory available to your Python environment. If neither of these suggestions solves the problem, please provide the full error message and traceback, as well as the version of pandas and openpyxl that you are using. You can check the version of pandas by running pd.__version__ and the version of openpyxl by running openpyxl.__version__. 
Also, please provide more information about the file "UnpredictableDataMerge.xlsx", such as its size, the number of rows and columns, and the type of data it contains. This will help to narrow down the possible causes of the error.

A: I have run into this before with large data sets. Try installing lxml, as openpyxl will auto-detect whether the library is installed. Be sure to install it for the interpreter you are actually using:
py -m pip install lxml

Alternatively:

Convert the data to a CSV file and use pd.read_csv()
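If you stay with Excel, a minimal sketch combining the checks above: fail fast on a missing file and name the engine explicitly (the file and sheet names are taken from the question):
import os
import pandas as pd

path = "UnpredictableDataMerge.xlsx"
if not os.path.exists(path):
    raise FileNotFoundError(f"{path} not found in {os.getcwd()}")
# openpyxl is the engine pandas uses for .xlsx files; naming it makes the dependency explicit
dp = pd.read_excel(path, sheet_name="Sheet1", engine="openpyxl")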
Python: What does this error message mean and why are we getting it?
We are trying to create a new Excel file with nested data using Python code. Here is the code for reference: `import glob import pandas as pd import re import openpyxl dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1") line_numbers = [4, 7] print("Heey, we read") dp_max = dp.groupby(['Subject', 'Date & Time', 'Trees Again', 'DifficultyLevel', 'Block', 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', 'Update'], sort=False).max() dp_max = dp_max[["Total Training Time"]] print("This worked. Good start. Yaaaay.s") dp_max.to_excel('unpredictable_grouped_max_heregoesnothing.xlsx', index=True) print("This worked. Yaaaay.s") dp['Signal_Detection2'] = dp.loc[:, 'Signal_Detection'] dp_count = dp.groupby(['Subject', 'Signal_Detection'], sort=False).count()[["Signal_Detection2"]] dp_count.to_excel('unpredictable_grouped_signal_count_heregoesnothing.xlsx', index=True) Unexpected exception formatting exception. Falling back to standard exception Output exceeds the size limit. Open the full output data in a text editor Traceback (most recent call last): File "C:\Users\mxa210135\AppData\Roaming\Python\Python38\site-packages\IPython\core\interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-9-853a8bf5b14e>", line 5, in <module> dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1")` The code above is what we had tried and it had worked previously. We only added the 'Trees Again' variable and 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', and lastly 'Update'. Please let me know if more information is needed and I will happily provide it. We tried splitting the large file in half and run the code on both, but it did not work and gave us the same error message.
[ "The error message indicates that an exception occurred while trying to read an Excel file using the pd.read_excel function. The most likely cause of the error is that the file \"UnpredictableDataMerge.xlsx\" was not found in the current working directory.\nYou can check the current working directory by running the following code:\nimport os\nprint(os.getcwd())\n\nThis will print the current working directory, which is the directory where Python is looking for the file. Make sure that the file \"UnpredictableDataMerge.xlsx\" is located in the current working directory, or provide the full path to the file in the pd.read_excel function.\nAnother possible cause of the error are:\n\nthe file is open in another program, such as Excel, and is locked for editing. In that case, you can try closing the file in Excel, and then run the code again.\nThe sheet \"Sheet1\" does not exist in the Excel file. Make sure that the sheet name is spelled correctly, and that it exists in the file. You can also try omitting the sheet_name parameter to read the first sheet in the file by default.\nThere is not enough memory available to read the Excel file. Make sure that you have enough free memory to read the file. You can try closing other programs to free up memory, or increasing the amount of memory available to your Python environment.\n\nIf neither of these suggestions solves the problem, please provide the full error message and traceback, as well as the version of pandas and openpyxl that you are using. You can check the version of pandas by running pd.__version__ and the version of openpyxl by running openpyxl.__version__. Also, please provide more information about the file \"UnpredictableDataMerge.xlsx\", such as its size, the number of rows and columns, and the type of data it contains, as well as the versions of pandas and openpyxl that you are using. This will help to narrow down the possible causes of the error.\n", "I have ran into this before with large data sets. Try installing lxml as openpyxl will auto detect if the library is installed. Be sure to install it to the correct interpreter you are using.\npy -m pip install lxml\n\nAlternatively:\n\nConvert data to CSV file and use pd.read_csv()\n\n" ]
[ 0, 0 ]
[]
[]
[ "data_analysis", "database", "excel", "output", "python" ]
stackoverflow_0074659693_data_analysis_database_excel_output_python.txt
Q: TypeError: 'NoneType' object is not iterable in Python What does TypeError: 'NoneType' object is not iterable mean? Example: for row in data: # Gives TypeError! print(row) A: It means the value of data is None. A: Explanation of error: 'NoneType' object is not iterable In python2, NoneType is the type of None. In Python3 NoneType is the class of None, for example: >>> print(type(None)) #Python2 <type 'NoneType'> #In Python2 the type of None is the 'NoneType' type. >>> print(type(None)) #Python3 <class 'NoneType'> #In Python3, the type of None is the 'NoneType' class. Iterating over a variable that has value None fails: for a in None: print("k") #TypeError: 'NoneType' object is not iterable Python methods return NoneType if they don't return a value: def foo(): print("k") a, b = foo() #TypeError: 'NoneType' object is not iterable You need to check your looping constructs for NoneType like this: a = None print(a is None) #prints True print(a is not None) #prints False print(a == None) #prints True print(a != None) #prints False print(isinstance(a, object)) #prints True print(isinstance(a, str)) #prints False Guido says only use is to check for None because is is more robust to identity checking. Don't use equality operations because those can spit bubble-up implementationitis of their own. Python's Coding Style Guidelines - PEP-008 NoneTypes are Sneaky, and can sneak in from lambdas: import sys b = lambda x : sys.stdout.write("k") for a in b(10): pass #TypeError: 'NoneType' object is not iterable NoneType is not a valid keyword: a = NoneType #NameError: name 'NoneType' is not defined Concatenation of None and a string: bar = "something" foo = None print foo + bar #TypeError: cannot concatenate 'str' and 'NoneType' objects What's going on here? Python's interpreter converted your code to pyc bytecode. The Python virtual machine processed the bytecode, it encountered a looping construct which said iterate over a variable containing None. The operation was performed by invoking the __iter__ method on the None. None has no __iter__ method defined, so Python's virtual machine tells you what it sees: that NoneType has no __iter__ method. This is why Python's duck-typing ideology is considered bad. The programmer does something completely reasonable with a variable and at runtime it gets contaminated by None, the python virtual machine attempts to soldier on, and pukes up a bunch of unrelated nonsense all over the carpet. Java or C++ doesn't have these problems because such a program wouldn't be allowed to compile since you haven't defined what to do when None occurs. Python gives the programmer lots of rope to hang himself by allowing you to do lots of things that should cannot be expected to work under exceptional circumstances. Python is a yes-man, saying yes-sir when it out to be stopping you from harming yourself, like Java and C++ does. A: Code: for row in data: Error message: TypeError: 'NoneType' object is not iterable Which object is it complaining about? Choice of two, row and data. In for row in data, which needs to be iterable? Only data. What's the problem with data? Its type is NoneType. Only None has type NoneType. So data is None. You can verify this in an IDE, or by inserting e.g. print "data is", repr(data) before the for statement, and re-running. Think about what you need to do next: How should "no data" be represented? Do we write an empty file? Do we raise an exception or log a warning or keep silent? 
A: Another thing that can produce this error is when you are setting something equal to the return from a function, but forgot to actually return anything. Example: def foo(dict_of_dicts): for key, row in dict_of_dicts.items(): for key, inner_row in row.items(): Do SomeThing #Whoops, forgot to return all my stuff return1, return2, return3 = foo(dict_of_dicts) This is a little bit of a tough error to spot because the error can also be produced if the row variable happens to be None on one of the iterations. The way to spot it is that the trace fails on the last line and not inside the function. If your only returning one variable from a function, I am not sure if the error would be produced... I suspect error "'NoneType' object is not iterable in Python" in this case is actually implying "Hey, I'm trying to iterate over the return values to assign them to these three variables in order but I'm only getting None to iterate over" A: It means that the data variable is passing None (which is type NoneType), its equivalent for nothing. So it can't be iterable as a list, as you are trying to do. A: You're calling write_file with arguments like this: write_file(foo, bar) But you haven't defined 'foo' correctly, or you have a typo in your code so that it's creating a new empty variable and passing it in. A: For me it was a case of having my Groovy hat on instead of the Python 3 one. Forgot the return keyword at the end of a def function. Had not been coding Python 3 in earnest for a couple of months. Was thinking last statement evaluated in routine was being returned per the Groovy (or Rust) way. Took a few iterations, looking at the stack trace, inserting try: ... except TypeError: ... block debugging/stepping thru code to figure out what was wrong. The solution for the message certainly did not make the error jump out at me. A: It also depends on Python version you are using. Seeing different error message thrown in python 3.6 and python 3.8 as following which was the issue in my case Python 3.6 (a,b) = None Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'NoneType' object is not iterable Python 3.8 (a,b) = None Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: cannot unpack non-iterable NoneType object A: This means that the value of data is None. A: because using for loop while the result it is just one value not a set of value pola.py @app.route("/search") def search(): title='search' search_name = request.form.get('search') search_item = User.query.filter_by(id=search_name).first() return render_template('search.html', title=title, search_item=search_item ) search.html (wrong) {% for p in search %} {{ p }} search.html (correct) <td>{{ search_item }}</td> A: i had this error with pandas in databricks. The solution for this error was install the library in the cluster enter image description here A: It means data is None, which is not an iterable. Adding an or []* prevents the exception and doesn't print anything: for row in data or []: # no more TypeError! print(row) * credits to some earlier comments; please beware that raising an exception may be a desired behavior too and/or an indicator of improper data setting.
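Tying the answers together, a minimal sketch of the guard most of them recommend: check for None explicitly before iterating, so the failure is reported where the bad value originates rather than deep inside the loop:
if data is None:
    raise ValueError("expected an iterable, got None")
for row in data:
    print(row)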
TypeError: 'NoneType' object is not iterable in Python
What does TypeError: 'NoneType' object is not iterable mean? Example: for row in data: # Gives TypeError! print(row)
[ "It means the value of data is None.\n", "Explanation of error: 'NoneType' object is not iterable\nIn python2, NoneType is the type of None. In Python3 NoneType is the class of None, for example:\n>>> print(type(None)) #Python2\n<type 'NoneType'> #In Python2 the type of None is the 'NoneType' type.\n\n>>> print(type(None)) #Python3\n<class 'NoneType'> #In Python3, the type of None is the 'NoneType' class.\n\nIterating over a variable that has value None fails:\nfor a in None:\n print(\"k\") #TypeError: 'NoneType' object is not iterable\n\nPython methods return NoneType if they don't return a value:\ndef foo():\n print(\"k\")\na, b = foo() #TypeError: 'NoneType' object is not iterable\n\nYou need to check your looping constructs for NoneType like this:\na = None \nprint(a is None) #prints True\nprint(a is not None) #prints False\nprint(a == None) #prints True\nprint(a != None) #prints False\nprint(isinstance(a, object)) #prints True\nprint(isinstance(a, str)) #prints False\n\nGuido says only use is to check for None because is is more robust to identity checking. Don't use equality operations because those can spit bubble-up implementationitis of their own. Python's Coding Style Guidelines - PEP-008\nNoneTypes are Sneaky, and can sneak in from lambdas:\nimport sys\nb = lambda x : sys.stdout.write(\"k\") \nfor a in b(10): \n pass #TypeError: 'NoneType' object is not iterable \n\nNoneType is not a valid keyword:\na = NoneType #NameError: name 'NoneType' is not defined\n\nConcatenation of None and a string:\nbar = \"something\"\nfoo = None\nprint foo + bar #TypeError: cannot concatenate 'str' and 'NoneType' objects\n\nWhat's going on here?\nPython's interpreter converted your code to pyc bytecode. The Python virtual machine processed the bytecode, it encountered a looping construct which said iterate over a variable containing None. The operation was performed by invoking the __iter__ method on the None. \nNone has no __iter__ method defined, so Python's virtual machine tells you what it sees: that NoneType has no __iter__ method. \nThis is why Python's duck-typing ideology is considered bad. The programmer does something completely reasonable with a variable and at runtime it gets contaminated by None, the python virtual machine attempts to soldier on, and pukes up a bunch of unrelated nonsense all over the carpet. \nJava or C++ doesn't have these problems because such a program wouldn't be allowed to compile since you haven't defined what to do when None occurs. Python gives the programmer lots of rope to hang himself by allowing you to do lots of things that should cannot be expected to work under exceptional circumstances. Python is a yes-man, saying yes-sir when it out to be stopping you from harming yourself, like Java and C++ does.\n", "Code: for row in data:\nError message: TypeError: 'NoneType' object is not iterable\nWhich object is it complaining about? Choice of two, row and data.\nIn for row in data, which needs to be iterable? Only data.\nWhat's the problem with data? Its type is NoneType. Only None has type NoneType. So data is None.\nYou can verify this in an IDE, or by inserting e.g. print \"data is\", repr(data) before the for statement, and re-running.\nThink about what you need to do next: \nHow should \"no data\" be represented? Do we write an empty file? 
Do we raise an exception or log a warning or keep silent?\n", "Another thing that can produce this error is when you are setting something equal to the return from a function, but forgot to actually return anything.\nExample:\ndef foo(dict_of_dicts):\n for key, row in dict_of_dicts.items():\n for key, inner_row in row.items():\n Do SomeThing\n #Whoops, forgot to return all my stuff\n\nreturn1, return2, return3 = foo(dict_of_dicts)\n\nThis is a little bit of a tough error to spot because the error can also be produced if the row variable happens to be None on one of the iterations. The way to spot it is that the trace fails on the last line and not inside the function.\nIf your only returning one variable from a function, I am not sure if the error would be produced... I suspect error \"'NoneType' object is not iterable in Python\" in this case is actually implying \"Hey, I'm trying to iterate over the return values to assign them to these three variables in order but I'm only getting None to iterate over\"\n", "It means that the data variable is passing None (which is type NoneType), its equivalent for nothing. So it can't be iterable as a list, as you are trying to do.\n", "You're calling write_file with arguments like this:\nwrite_file(foo, bar)\n\nBut you haven't defined 'foo' correctly, or you have a typo in your code so that it's creating a new empty variable and passing it in.\n", "For me it was a case of having my Groovy hat on instead of the Python 3 one.\nForgot the return keyword at the end of a def function.\nHad not been coding Python 3 in earnest for a couple of months. Was thinking last statement evaluated in routine was being returned per the Groovy (or Rust) way.\nTook a few iterations, looking at the stack trace, inserting try: ... except TypeError: ... block debugging/stepping thru code to figure out what was wrong.\nThe solution for the message certainly did not make the error jump out at me.\n", "It also depends on Python version you are using. Seeing different error message thrown in python 3.6 and python 3.8 as following which was the issue in my case\n\nPython 3.6\n\n\n(a,b) = None\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: 'NoneType' object is not iterable\n\n\n\nPython 3.8\n\n\n(a,b) = None\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: cannot unpack non-iterable NoneType object\n\n\n", "This means that the value of data is None.\n", "because using for loop while the result it is just one value not a set of value\n\npola.py\n\[email protected](\"/search\")\ndef search():\n title='search'\n search_name = request.form.get('search')\n \n search_item = User.query.filter_by(id=search_name).first()\n\n return render_template('search.html', title=title, search_item=search_item ) \n\n\nsearch.html (wrong)\n\n{% for p in search %}\n{{ p }}\n\n\n\nsearch.html (correct)\n\n\n<td>{{ search_item }}</td>\n\n", "i had this error with pandas in databricks.\nThe solution for this error was install the library in the cluster\nenter image description here\n", "It means data is None, which is not an iterable. Adding an or []* prevents the exception and doesn't print anything:\nfor row in data or []: # no more TypeError!\n print(row)\n\n* credits to some earlier comments; please beware that raising an exception may be a desired behavior too and/or an indicator of improper data setting.\n" ]
[ 261, 114, 63, 20, 8, 7, 2, 1, 0, 0, 0, 0 ]
[ "Just continue the loop when you get None Exception,\nexample:\n a = None\n if a is None:\n continue\n else:\n print(\"do something\")\n\nThis can be any iterable coming from DB or an excel file.\n" ]
[ -3 ]
[ "nonetype", "python" ]
stackoverflow_0003887381_nonetype_python.txt
Q: how to remove buttons off of a message discord
@client.command()
async def test(ctx):
    message = await ctx.send("**TEST**\n**IS THIS WORKING?**")
    await asyncio.sleep(3)
    button = Button(style = discord.ButtonStyle.green, emoji = "◀", custom_id = "button")
    view = View()
    view.add_item(button)
    async def button_callback(interaction):
        await message.edit(content="**edited message and removed buttons!**")
    button.callback = button_callback
    await ctx.send("test", view=view)

So far, when it edits the message to "edited message and removed buttons", the buttons aren't removed. How can I make it so the buttons are removed when the message is edited?

A: Make sure you set view=None when you edit your message (or if you only want a few buttons removed, create a new view without those buttons and set view to that).
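A minimal sketch of the fix applied to the callback from the question; Message.edit accepts view=None, which drops the button row along with the content change:
async def button_callback(interaction):
    # view=None removes all components attached to the message
    await message.edit(content="**edited message and removed buttons!**", view=None)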
how to remove buttons off of a message discord
@client.command()
async def test(ctx):
    message = await ctx.send("**TEST**\n**IS THIS WORKING?**")
    await asyncio.sleep(3)
    button = Button(style = discord.ButtonStyle.green, emoji = "◀", custom_id = "button")
    view = View()
    view.add_item(button)
    async def button_callback(interaction):
        await message.edit(content="**edited message and removed buttons!**")
    button.callback = button_callback
    await ctx.send("test", view=view)

So far, when it edits the message to "edited message and removed buttons", the buttons aren't removed. How can I make it so the buttons are removed when the message is edited?
[ "Make sure you set view=None when you edit your message (or if you only want a few buttons removed, create a new view without those buttons and set view to that).\n" ]
[ 0 ]
[]
[]
[ "discord", "pycord", "python" ]
stackoverflow_0074612394_discord_pycord_python.txt
Q: In Python why is my "for entry in csv_compare:" loop iterating only once and getting stuck on the last input I'm trying to compare 2 csv files and then put the common entries in a 3rd csv to write to file. For some reason it iterates the whole loop for row in csv_input but the entry in csv_compare loop iterates only once and stops on the last entry. I want to compare every row entry with every entry entry. import csv finalCSV = {} with open('input.csv', newline='') as csvfile, open('compare.csv', newline='') as keyCSVFile, open('output.csv', 'w' ,newline='') as OutputCSV: csv_input = csv.reader(csvfile) csv_compare = csv.reader(keyCSVFile) csv_output = csv.writer(OutputCSV) csv_output.writerow(next(csv_input)) for row in csv_input: for entry in csv_compare: print(row[0] + ' ' + entry[0]) if row[0] == entry[0]: csv_output.writerow(row) break print('wait...') A: When you break the inner loop and start the next iteration of the outer loop, csv_compare doesn't reset to the beginning. It picks up where you left off. Once you have exhausted the iterator, that's it. You would need to reset the iterator at the top of each iteration of the outer loop, which is most easily done by simply opening the file there. with open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV: csv_input = csv.reader(csvfile) csv_output = csv.writer(OutputCSV) csv_output.writerow(next(csv_input)) for row in csv_input: with open('compare.csv', newline='') as keyCSVFile: csv_compare = csv.reader(keyCSVFile) for entry in csv_compare: if row[0] == entry[0]: csv_output.writerow(row) break A: I suggest to read the first column from csv_compare to list or a set and then use only single for-loop: import csv finalCSV = {} with open("input.csv", newline="") as csvfile, open( "compare.csv", newline="" ) as keyCSVFile, open("output.csv", "w", newline="") as OutputCSV: csv_input = csv.reader(csvfile) csv_compare = csv.reader(keyCSVFile) csv_output = csv.writer(OutputCSV) csv_output.writerow(next(csv_input)) compare = {entry[0] for entry in csv_compare} # <--- read csv_compare to a set for row in csv_input: if row[0] in compare: # <--- use `in` operator csv_output.writerow(row) A: You could skip the inner loop completely. You add rows from input.csv when the first column matches any of the first column values in compare.csv. So put those values in a set for easy lookup. import csv with open('compare.csv', newline='') as keyCSVFile: key_set = {row[0] for row in csv.reader(keyCSVFile)} with open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV: csv_input = csv.reader(csvfile) csv_output = csv.writer(OutputCSV) csv_output.writerow(next(csv_input)) csv_output.writerows(row for row in csv_input if row[0] in key_set) del key_set print('wait...')
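If you would rather not build a set up front, another sketch keeps compare.csv open and rewinds the file object on each outer iteration instead of reopening it (this must run inside the with block from the first answer, where keyCSVFile is still open):
for row in csv_input:
    keyCSVFile.seek(0)                    # rewind to the start of compare.csv
    for entry in csv.reader(keyCSVFile):  # fresh reader over the rewound file
        if row[0] == entry[0]:
            csv_output.writerow(row)
            break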
In Python why is my "for entry in csv_compare:" loop iterating only once and getting stuck on the last input
I'm trying to compare 2 csv files and then put the common entries in a 3rd csv to write to file. For some reason the whole outer loop (for row in csv_input) iterates, but the inner loop over csv_compare iterates only once and then stays stuck on its last entry. I want to compare every input row with every compare entry.
import csv

finalCSV = {}
with open('input.csv', newline='') as csvfile, open('compare.csv', newline='') as keyCSVFile, open('output.csv', 'w' ,newline='') as OutputCSV:
    csv_input = csv.reader(csvfile)
    csv_compare = csv.reader(keyCSVFile)
    csv_output = csv.writer(OutputCSV)
    csv_output.writerow(next(csv_input))

    for row in csv_input:
        for entry in csv_compare:
            print(row[0] + ' ' + entry[0])
            if row[0] == entry[0]:
                csv_output.writerow(row)
                break

print('wait...')
[ "When you break the inner loop and start the next iteration of the outer loop, csv_compare doesn't reset to the beginning. It picks up where you left off. Once you have exhausted the iterator, that's it.\nYou would need to reset the iterator at the top of each iteration of the outer loop, which is most easily done by simply opening the file there.\nwith open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV:\n csv_input = csv.reader(csvfile)\n csv_output = csv.writer(OutputCSV)\n csv_output.writerow(next(csv_input))\n\n for row in csv_input:\n with open('compare.csv', newline='') as keyCSVFile:\n csv_compare = csv.reader(keyCSVFile)\n for entry in csv_compare:\n if row[0] == entry[0]:\n csv_output.writerow(row)\n break\n\n", "I suggest to read the first column from csv_compare to list or a set and then use only single for-loop:\nimport csv\n\nfinalCSV = {}\nwith open(\"input.csv\", newline=\"\") as csvfile, open(\n \"compare.csv\", newline=\"\"\n) as keyCSVFile, open(\"output.csv\", \"w\", newline=\"\") as OutputCSV:\n csv_input = csv.reader(csvfile)\n csv_compare = csv.reader(keyCSVFile)\n csv_output = csv.writer(OutputCSV)\n csv_output.writerow(next(csv_input))\n\n compare = {entry[0] for entry in csv_compare} # <--- read csv_compare to a set\n\n for row in csv_input:\n if row[0] in compare: # <--- use `in` operator\n csv_output.writerow(row)\n\n", "You could skip the inner loop completely. You add rows from input.csv when the first column matches any of the first column values in compare.csv. So put those values in a set for easy lookup.\nimport csv\n\nwith open('compare.csv', newline='') as keyCSVFile:\n key_set = {row[0] for row in csv.reader(keyCSVFile)}\n\nwith open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV:\n csv_input = csv.reader(csvfile)\n csv_output = csv.writer(OutputCSV)\n csv_output.writerow(next(csv_input))\n csv_output.writerows(row for row in csv_input if row[0] in key_set)\n\ndel key_set\nprint('wait...')\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "csv", "for_loop", "python" ]
stackoverflow_0074660417_csv_for_loop_python.txt
Q: How can I return a list
I wanted to write a Discord command scraper in Python for the raffles available on https://releases.footshop.com/ and I have almost finished it, but when I want to return a list of sizes (and stock as well) it returns the error "IndexError: list index out of range" and I can't figure out what to do. Thanks for your help!
I tried the code below, but it returns an error (I tried other things too, but I don't remember them; I had been trying to find a solution by myself for some time). Here is the code:
def searchsizefootshop(size):
    item = searchfootshop(size)
    if len(item['sizeSets']['Men']['sizes']) !=0:
        size1 = int(item['sizeSets']['Men']['sizes'][X])
        size = item['sizeSets']['Men']['sizes'][X]['eur']
        count = 0
        while count <= len(size1):
            print(size[count])
            count += 1

searchsizefootshop('hGZrRYMB3xHSyCfZ4BFw')

[scraper]
[Footshop API]

A: len(size1) returns the length of the list, not the index of the last item. To loop through a list using len and index access, you need to compare it to len(size1) - 1 or just use count < len(size1).
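A sketch of the simpler pattern, assuming size ends up holding the list of EU sizes: iterate it directly and the index bookkeeping (and the off-by-one) disappears entirely:
for s in size:
    print(s)  # each size in order, no counter or len() needed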
How can I return a list
I wanted to write a Discord command scraper in Python for the raffles available on https://releases.footshop.com/ and I have almost finished it, but when I want to return a list of sizes (and stock as well) it returns the error "IndexError: list index out of range" and I can't figure out what to do. Thanks for your help!
I tried the code below, but it returns an error (I tried other things too, but I don't remember them; I had been trying to find a solution by myself for some time). Here is the code:
def searchsizefootshop(size):
    item = searchfootshop(size)
    if len(item['sizeSets']['Men']['sizes']) !=0:
        size1 = int(item['sizeSets']['Men']['sizes'][X])
        size = item['sizeSets']['Men']['sizes'][X]['eur']
        count = 0
        while count <= len(size1):
            print(size[count])
            count += 1

searchsizefootshop('hGZrRYMB3xHSyCfZ4BFw')

[scraper]
[Footshop API]
[ "len(size1) returns the length of the list, not the index of the last item. To loop through a list using len and index access, you need to compare it to len(size1) - 1 or just use count < len(size1).\n" ]
[ 0 ]
[]
[]
[ "discord", "list", "python" ]
stackoverflow_0074605388_discord_list_python.txt
Q: How to use `ListCtrl` on wxpython
How can I append a row and its corresponding data to a ListCtrl? I've just learned how to use TreeCtrl (relatively easier than ListCtrl); it shows a clear way of matching a single GUI object with its data, but ListCtrl does not.

How can I append or insert a single row with its corresponding data?
How can I access a row and its data?
How can I manipulate them (editing data/rows, deleting data/rows)?

Can you give a summary of these? Thank you. I know my question is simple and I could get some of this from the docs, but I've read them and still have no clue.

A: The wxPython docs give little help here, so here are some quick tips; I added explanations in the comments:
# create new list control
listctrl = wx.dataview.DataViewListCtrl( my_panel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.dataview.DV_SINGLE )

# setup listctrl columns
listctrl.AppendTextColumn('first name', width=220) # normal text column
listctrl.AppendBitmapColumn('my images', 0, width=35) # you can add images in this col
listctrl.AppendProgressColumn('Progress', align=wx.ALIGN_CENTER) # a progress bar

listctrl.SetRowHeight(30) # define all rows height

# add data; note myList is a list or tuple containing the exact type of data for each column, same length as the number of columns
listctrl.AppendItem(myList)

# to modify an entry (a single cell located at row x col)
listctrl.SetValue(myNewValue, row, column)

A: this is what works for me:
import wx

il_icons = wx.ImageList(16, 16, mask=True, initialCount=2)
il_icons.Add(wx.Bitmap('icon01.png'))
il_icons.Add(wx.Bitmap('icon02.png'))

lc_list = wx.ListCtrl(self, wx.ID_ANY, style=wx.LC_REPORT | wx.LC_SINGLE_SEL | wx.LC_EDIT_LABELS | wx.LC_VRULES, name='lc_list')
lc_list.AssignImageList(il_icons, which=wx.IMAGE_LIST_SMALL)
lc_list.AppendColumn('col01', format=wx.LIST_FORMAT_LEFT, width=64)
lc_list.AppendColumn('col02', format=wx.LIST_FORMAT_RIGHT, width=64)
lc_list.Append(('item01',100))
lc_list.Append(('item02',200))
lc_list.SetItemColumnImage(0,0,0)
lc_list.SetItemColumnImage(1,0,1)

lc_list.Bind(wx.EVT_LIST_ITEM_SELECTED, OnItemSelected)

lc_list.Show(True)
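For the editing and deleting part of the question, a minimal sketch using standard wx.ListCtrl calls (the row/column indices and the new value are illustrative):
lc_list.SetItem(0, 1, '150')      # edit: give row 0, column 1 a new value
lc_list.DeleteItem(1)             # delete: remove row 1 entirely
count = lc_list.GetItemCount()    # access: how many rows the control holds
text = lc_list.GetItemText(0, 1)  # access: read back row 0, column 1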
How to use `ListCtrl` on wxpython
How can I append a row and its corresponding data to a ListCtrl? I've just learned how to use TreeCtrl (relatively easier than ListCtrl); it shows a clear way of matching a single GUI object with its data, but ListCtrl does not.

How can I append or insert a single row with its corresponding data?
How can I access a row and its data?
How can I manipulate them (editing data/rows, deleting data/rows)?

Can you give a summary of these? Thank you. I know my question is simple and I could get some of this from the docs, but I've read them and still have no clue.
[ "I know that wxPython docs are retarded and gives no much help, here is some quick tips below,\ni added explanations in comments:\n# create new list control\nlistctrl = wx.dataview.DataViewListCtrl( my_panel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.dataview.DV_SINGLE )\n\n# setup listctrl columns\nlistctrl.AppendTextColumn('first name', width=220) # normal text column\nlistctrl.AppendBitmapColumn('my images', 0, width=35) # you can add images in this col\nlistctrl.AppendProgressColumn('Progress', align=wx.ALIGN_CENTER) # a progress bar\n\nlistctrl.SetRowHeight(30) # define all rows height\n\n# add data, note myList is a list or tuple contains the exact type of data for each columns and same length as col numbers\nlistctrl.AppendItem(myList)\n\n# to modify an entry \"a single cell located at row x col\"\nlistctrl.SetValue(myNewValue, row, column)\n\n", "this is what works for me:\nimport wx\n\nil_icons = wx.ImageList(16, 16, mask=True, initialCount=2)\nil_icons.Add(wx.Bitmap('icon01.png'))\nil_icons.Add(wx.Bitmap('icon02.png'))\n\nlc_list = wx.ListCtrl(self, wx.ID_ANY, style=wx.LC_REPORT | wx.LC_SINGLE_SEL | wx.LC_EDIT_LABELS | wx.LC_VRULES, name='lc_list')\nlc_list.AssignImageList(il_icons, which=wx.IMAGE_LIST_SMALL)\nlc_list.AppendColumn('col01', format=wx.LIST_FORMAT_LEFT, width=64)\nlc_list.AppendColumn('col02', format=wx.LIST_FORMAT_RIGHT, width=64)\nlc_list.Append(('item01',100))\nlc_list.Append(('item02',200))\nlc_list.SetItemColumnImage(0,0,0)\nlc_list.SetItemColumnImage(1,0,1)\n\nlc_list.Bind(wx.EVT_LIST_ITEM_SELECTED, OnItemSelected)\n\nlc_list.Show(True)\n\n" ]
[ 1, 0 ]
[]
[]
[ "listctrl", "python", "wxpython" ]
stackoverflow_0055789154_listctrl_python_wxpython.txt
Q: assistance needed please, float is not iterable issues
life_max = -5
life_min = 999
country_max = ""
country_min = ""

answer = int(input("Which year would you like to enter? "))

with open ("life.csv") as f:
    next(f)
    for line in f:
        parts = line.split(",")
        life = float(parts[3])
        year = int(parts[2])
        country = parts[0].strip()
        code = parts[1].strip()

        if life > life_max:
            life_max = life
            country_max = country
        if life < life_min:
            life_min = life
            country_min = country

average = range(sum(life)) / range(len(life))
print(f"The average is {average}")

print(f"The country with the worst life expectancy is {country_min} at {life_min} years.")
print(f"The country with the best life expectancy is {country_max} at {life_max} years.")

I'm having some trouble finding the average life expectancy for a specified year; it returns a "'float' object is not iterable" error and I'm pretty lost.
assistance needed please, float is not iterable issues
life_max = -5
life_min = 999
country_max = ""
country_min = ""

answer = int(input("Which year would you like to enter? "))

with open ("life.csv") as f:
    next(f)
    for line in f:
        parts = line.split(",")
        life = float(parts[3])
        year = int(parts[2])
        country = parts[0].strip()
        code = parts[1].strip()

        if life > life_max:
            life_max = life
            country_max = country
        if life < life_min:
            life_min = life
            country_min = country

average = range(sum(life)) / range(len(life))
print(f"The average is {average}")

print(f"The country with the worst life expectancy is {country_min} at {life_min} years.")
print(f"The country with the best life expectancy is {country_max} at {life_max} years.")

I'm having some trouble finding the average life expectancy for a specified year; it returns a "'float' object is not iterable" error and I'm pretty lost.
[]
[]
[ "sum needs a list of values to add to each other ;)\n", "A bit confusing answering this without your input and what line throws the error, but I'm guessing it's due to the 'sum(life)' - life seems to be a float while sum expects an iterable\n" ]
[ -2, -2 ]
[ "python", "python_3.x" ]
stackoverflow_0074660613_python_python_3.x.txt
Q: Python, replace a word in a string from a list and iterate over it I have a simple string and a list: string = "the secret key is A" list = ["123","234","345"] I need to replace one item ("A") combining that item with another item from the list ("A123") as many times as the number of items in the list. Basically the result I would like to achieve is: "the secret key is A123" "the secret key is A234" "the secret key is A345" I know I need to use a for loop but I fail in joining together the items. A: Please don't clobber reserved keywords. s = "the secret key is A" lst = ["123","234","345"] item = 'A' newlst = [s.replace(item, f'{item}{tok}') for tok in lst] >>> newlst ['the secret key is A123', 'the secret key is A234', 'the secret key is A345'] Edit As rightly noted by @JohnnyMopp, the above will over-enthusiastically replace any occurrence of the item in a string such as 'And the secret key is A'. We can specify that only words matching the item should be replaced, using regex: import re s = 'And the secret key is A, I repeat: A.' lst = ['123', '234', '345'] item = 'A' newlst = [re.sub(fr'\b{item}\b', f'{item}{e}', s) for e in lst] >>> newlst ['And the secret key is A123, I repeat: A123.', 'And the secret key is A234, I repeat: A234.', 'And the secret key is A345, I repeat: A345.'] A: You can use str.replace. st = "the secret key is A" lst = ["123","234","345"] key_rep = "A" for l in lst: print(st.replace(key_rep, key_rep+l)) # Or as list_comprehension # [st.replace(key_rep, key_rep+l) for l in lst] Output: the secret key is A123 the secret key is A234 the secret key is A345 A: If I understood you correctly you can try this. string = "the secret key is A" lst = ["123", "234", "345"] res = list(map(lambda x: string + x, lst)) #You can print it in any way you want, here are some examples: print(*res) [print(i for i in res)] #...
Python, replace a word in a string from a list and iterate over it
I have a simple string and a list:

string = "the secret key is A"
list = ["123","234","345"]

I need to replace one item ("A") combining that item with another item from the list ("A123") as many times as the number of items in the list. Basically the result I would like to achieve is:

"the secret key is A123"
"the secret key is A234"
"the secret key is A345"

I know I need to use a for loop but I fail in joining together the items.
[ "Please don't clobber reserved keywords.\ns = \"the secret key is A\"\nlst = [\"123\",\"234\",\"345\"]\n\nitem = 'A'\nnewlst = [s.replace(item, f'{item}{tok}') for tok in lst]\n\n>>> newlst\n['the secret key is A123', 'the secret key is A234', 'the secret key is A345']\n\nEdit\nAs rightly noted by @JohnnyMopp, the above will over-enthusiastically replace any occurrence of the item in a string such as 'And the secret key is A'. We can specify that only words matching the item should be replaced, using regex:\nimport re\n\ns = 'And the secret key is A, I repeat: A.'\nlst = ['123', '234', '345']\n\nitem = 'A'\nnewlst = [re.sub(fr'\\b{item}\\b', f'{item}{e}', s) for e in lst]\n\n>>> newlst\n['And the secret key is A123, I repeat: A123.',\n 'And the secret key is A234, I repeat: A234.',\n 'And the secret key is A345, I repeat: A345.']\n\n", "You can use str.replace.\nst = \"the secret key is A\"\n\nlst = [\"123\",\"234\",\"345\"]\n\nkey_rep = \"A\"\n\nfor l in lst:\n print(st.replace(key_rep, key_rep+l))\n\n# Or as list_comprehension\n# [st.replace(key_rep, key_rep+l) for l in lst]\n\nOutput:\nthe secret key is A123\nthe secret key is A234\nthe secret key is A345\n\n", "If I understood you correctly you can try this.\nstring = \"the secret key is A\"\nlst = [\"123\", \"234\", \"345\"]\n\nres = list(map(lambda x: string + x, lst))\n\n#You can print it in any way you want, here are some examples:\nprint(*res)\n[print(i for i in res)]\n#...\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "list", "python", "replace" ]
stackoverflow_0074660234_list_python_replace.txt
Q: Buttons don't do their intended command at click
I'm making a turtle race game: a few turtles are assigned random speeds and one of them wins. However, just for fun I'm trying to add a few things to the game, for example a button to exit the game and a button to restart the race. I have made only the exit button for now and gave it the command to exit the game. The button works, but not at the right time.
The problem is that I have one piece of code that draws the canvas (the background, which is just turtle drawing), another piece of code that places the buttons and tells them what to do when clicked, and then a piece of code that assigns random speeds to the turtles.
This is the buttons code. (The "Try Again" button command is not finished yet.)

screen = Screen()
screen.setup(width=600, height=400)

def exit_game():
    exit()

canvas = screen.getcanvas()
button = Button(canvas.master, text="Exit Game", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue")
button.pack()
button.place(x=150, y=530)

canvas2 = screen.getcanvas()
button2 = Button(canvas2.master, text="Try Again", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue" )
button2.pack()
button2.place(x=50, y=530)

And here is the code that assigns random step sizes to the turtles:

for movement in range (230):
    red.forward(randint(1,8))
    blue.forward(randint(1,8))
    purple.forward(randint(1,8))
    orange.forward(randint(1,8))

The problem is that while the turtles are moving I can press the button, but it does not run the command. Only after the movement loop has run 230 times does it exit the game. So basically my code just keeps feeding speeds to the turtles and forgets about the button commands. Is there a way to override this somehow and make my button exit the game whenever it is clicked? I also tried to put the button into an infinite loop, but it did not work (maybe I did it wrong).
import turtle
import time
from random import randint
from tkinter import *
from turtle import Screen, Turtle
import tkinter
import tkinter as tk

# Window Customization
Window = turtle.Screen()
Window.title('Turtle Race Game')

#Complete back canvas for the game
def back_canvas():
    # Main drawing turtle
    pen = turtle.Turtle()
    pen.speed(0)

    # far left -640; far right 633
    #top 330; bottom -320

    # Landscape making
    #Making the ground
    pen.hideturtle()
    pen.color("sienna")
    pen.penup()
    pen.left(90)
    pen.setpos(-640, -320)
    pen.pendown()
    pen.begin_fill()
    pen.color("sienna")
    for i in range(2):
        pen.forward(162.5)
        pen.right(90)
        pen.forward(1272)
        pen.right(90)
    pen.end_fill()

    #Making Racing Area
    for i in range(2):
        pen.forward(162.5)
        pen.color("lime")
        pen.begin_fill()
        for i in range(2):
            pen.forward(162.5)
            pen.right(90)
            pen.forward(1272)
            pen.right(90)
        pen.end_fill()

    #Making Top Area
    pen.color("dodgerblue")
    pen.begin_fill()
    pen.forward(162.5)
    for i in range(2):
        pen.forward(162.5)
        pen.right(90)
        pen.forward(1272)
        pen.right(90)
    pen.end_fill()
    pen.penup()

    # Writing "Turtle Race Game"
    pen.color('lime')
    pen.setpos(-170,250)
    pen.color("black")
    pen.write("Turtle Race Game",pen, font=("Arial", 27, 'normal'))

    # Making the first finishline
    pen.setpos(500,143)
    pen.right(180)
    for i in range(7):
        pen.color('black')
        pen.begin_fill()
        pen.left(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(180)
        pen.forward(20)
        pen.end_fill()
        pen.color('white')
        pen.begin_fill()
        pen.left(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(180)
        pen.forward(20)
        pen.end_fill()

    # Making the second finishline
    pen.setpos(520,143)
    for i in range(7):
        pen.color('white')
        pen.begin_fill()
        pen.left(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(180)
        pen.forward(20)
        pen.end_fill()
        pen.color('black')
        pen.begin_fill()
        pen.left(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(90)
        pen.forward(20)
        pen.right(180)
        pen.forward(20)
        pen.end_fill()

    # placing main pen to right place to say who won
    pen.setpos(520,180)

# Making all the turtles
def race():
    # Making the turtles, turtle 1
    red = turtle.Turtle()
    red.speed(0)
    red.shape("turtle")
    red.penup()
    red.color("red")
    red.setpos(-550, 90)
    red.pendown()

    # Making the turtles, turtle 2
    blue = turtle.Turtle()
    blue.shape("turtle")
    blue.speed(0)
    blue.penup()
    blue.color("blue")
    blue.setpos(-550,30)
    blue.pendown()

    # Making the turtles, turtle 3
    purple = turtle.Turtle()
    purple.speed(0)
    purple.shape("turtle")
    purple.penup()
    purple.color("purple")
    purple.setpos(-550,-30)
    purple.pendown()

    # Making the turtles, turtle 4
    orange = turtle.Turtle()
    orange.speed(0)
    orange.shape("turtle")
    orange.penup()
    orange.color("orange")
    orange.setpos(-550,-90)
    orange.pendown()

    race_step_count = 230
    if race_step_count:
        red.forward(randint(1,8))
        blue.forward(randint(1,8))
        purple.forward(randint(1,8))
        orange.forward(randint(1,8))
        race_step_count -= 1
        next_step = Window.after(100, race) # call this function again after 100mS
    else: # no more steps - the race is over!
        Window.after_cancel(next_step) # stop calling the race function

def main_game():
    run = True
    screen = Screen()
    screen.setup(width=600, height=400)

    def exit_game():
        exit()

    canvas = screen.getcanvas()
    button = Button(canvas.master, text="Exit Game",command = exit_game ,width= 10, height = 4, fg = "white", bg = "dodgerblue")
    button.place(x=150, y=530)

    canvas2 = screen.getcanvas()
    button2 = Button(canvas2.master, text="Try Again",command = exit_game, width= 10, height = 4,fg = "white", bg = "dodgerblue" )
    button2.place(x=50, y=530)

#Complete back canvas for the game
back_canvas()

# Making all the turtles
race()

main_game()

# Making my button do something when being clicked
# Making the turtles stop when hitting the finish line
time.sleep(1)

#Writing who won
def who_won():
    for i in range(1):
        if blue.xcor() > red.xcor() and blue.xcor() > purple.xcor() and blue.xcor() > orange.xcor():
            time.sleep(1)
            pen.write('Blue won!', align = "center", font =("Arial", 25, "bold"))
        elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor():
            time.sleep(1)
            pen.write('Red won!', align = "center", font =("Arial", 25, "bold"))
        elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor():
            time.sleep(1)
            pen.write('Purple won!', align = "center", font =("Arial", 25, "bold"))
        elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor():
            time.sleep(1)
            pen.write('Orange won!', align = "center", font =("Arial", 25, "bold"))
        else:
            continue

# Window doesnt close on its own
Window.mainloop()
Buttons don't do their intended command at click
I'm making a turtle race game: a few turtles are assigned random speeds and one of them wins. However, just for fun I'm trying to add a few things to the game, for example a button to exit the game and a button to restart the race. I have made only the exit button for now and gave it the command to exit the game. The button works, but not at the right time.
The problem is that I have one piece of code that draws the canvas (the background, which is just turtle drawing), another piece of code that places the buttons and tells them what to do when clicked, and then a piece of code that assigns random speeds to the turtles.
This is the buttons code. (The "Try Again" button command is not finished yet.)

screen = Screen()
screen.setup(width=600, height=400)

def exit_game():
    exit()

canvas = screen.getcanvas()
button = Button(canvas.master, text="Exit Game", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue")
button.pack()
button.place(x=150, y=530)

canvas2 = screen.getcanvas()
button2 = Button(canvas2.master, text="Try Again", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue" )
button2.pack()
button2.place(x=50, y=530)

And here is the code that assigns random step sizes to the turtles:

for movement in range (230):
    red.forward(randint(1,8))
    blue.forward(randint(1,8))
    purple.forward(randint(1,8))
    orange.forward(randint(1,8))

The problem is that while the turtles are moving I can press the button, but it does not run the command. Only after the movement loop has run 230 times does it exit the game. So basically my code just keeps feeding speeds to the turtles and forgets about the button commands. Is there a way to override this somehow and make my button exit the game whenever it is clicked? I also tried to put the button into an infinite loop, but it did not work (maybe I did it wrong).
import turtle import time from random import randint from tkinter import * from turtle import Screen, Turtle import tkinter import tkinter as tk # Window Customization Window = turtle.Screen() Window.title('Turtle Race Game') #Complete back canvas for the game def back_canvas(): # Main drawing turtle pen = turtle.Turtle() pen.speed(0) # far left -640; far right 633 #top 330; bottom -320 # Landscape making #Making the ground pen.hideturtle() pen.color("sienna") pen.penup() pen.left(90) pen.setpos(-640, -320) pen.pendown() pen.begin_fill() pen.color("sienna") for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() #Making Racing Area for i in range(2): pen.forward(162.5) pen.color("lime") pen.begin_fill() for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() #Making Top Area pen.color("dodgerblue") pen.begin_fill() pen.forward(162.5) for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() pen.penup() # Writing "Turtle Race Game" pen.color('lime') pen.setpos(-170,250) pen.color("black") pen.write("Turtle Race Game",pen, font=("Arial", 27, 'normal')) # Making the first finishline pen.setpos(500,143) pen.right(180) for i in range(7): pen.color('black') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() pen.color('white') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() # Making the second finishline pen.setpos(520,143) for i in range(7): pen.color('white') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() pen.color('black') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() # placing main pen to right place to say who won pen.setpos(520,180) # Making all the turtles def race(): # Making the turtles, turtle 1 red = turtle.Turtle() red.speed(0) red.shape("turtle") red.penup() red.color("red") red.setpos(-550, 90) red.pendown() # Making the turtles, turtle 2 blue = turtle.Turtle() blue.shape("turtle") blue.speed(0) blue.penup() blue.color("blue") blue.setpos(-550,30) blue.pendown() # Making the turtles, turtle 3 purple = turtle.Turtle() purple.speed(0) purple.shape("turtle") purple.penup() purple.color("purple") purple.setpos(-550,-30) purple.pendown() # Making the turtles, turtle 4 orange = turtle.Turtle() orange.speed(0) orange.shape("turtle") orange.penup() orange.color("orange") orange.setpos(-550,-90) orange.pendown() race_step_count = 230 if race_step_count: red.forward(randint(1,8)) blue.forward(randint(1,8)) purple.forward(randint(1,8)) orange.forward(randint(1,8)) race_step_count -= 1 next_step = Window.after(100, race) # call this function again after 100mS else: # no more steps - the race is over! 
Window.after_cancel(next_step) # stop calling the race function def main_game(): run = True screen = Screen() screen.setup(width=600, height=400) def exit_game(): exit() canvas = screen.getcanvas() button = Button(canvas.master, text="Exit Game",command = exit_game ,width= 10, height = 4, fg = "white", bg = "dodgerblue") button.place(x=150, y=530) canvas2 = screen.getcanvas() button2 = Button(canvas2.master, text="Try Again",command = exit_game, width= 10, height = 4,fg = "white", bg = "dodgerblue" ) button2.place(x=50, y=530) #Complete back canvas for the game back_canvas() # Making all the turtles race() main_game() # Making my button do something when being clicked # Making the turtles stop when hitting the finish line time.sleep(1) #Writing who won def who_won(): for i in range(1): if blue.xcor() > red.xcor() and blue.xcor() > purple.xcor() and blue.xcor() > orange.xcor(): time.sleep(1) pen.write('Blue won!', align = "center", font =("Arial", 25, "bold")) elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor(): time.sleep(1) pen.write('Red won!', align = "center", font =("Arial", 25, "bold")) elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor(): time.sleep(1) pen.write('Purple won!', align = "center", font =("Arial", 25, "bold")) elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor(): time.sleep(1) pen.write('Orange won!', align = "center", font =("Arial", 25, "bold")) else: continue # Window doesnt close on its own Window.mainloop()
[ "What's happing is that your application is getting \"hung up\" on the long-running movement loop. Tkinter is registering your button press, but it can't do anything about it until it's done with the for loop. A quick solution to this is to define a function that handles the movements, and uses tkinter.after() to call it periodically until the \"race\" is over, since the built-in after method allows the UI's event loop to continue uninterrupted.\n# I don't know what your imports look like, so this is a boilerplate example\nimport tkinter as tk\n\nroot = tk.Tk() # this is whatever you're calling 'mainloop()' on right now\nrace_step_count = 230 # define how 'long' the race is\n\n\ndef race():\n global race_step_count\n if race_step_count:\n red.forward(randint(1,8))\n blue.forward(randint(1,8))\n purple.forward(randint(1,8))\n orange.forward(randint(1,8))\n race_step_count -= 1\n next_step = root.after(100, race) # call this function again after 100mS\n else: # no more steps - the race is over!\n root.after_cancel(next_step) # stop calling the race function\n\nTo start the race, just call the function when you're ready: race()\n", "Looking at your code, I'm surprised it runs. Running your code, I find it doesn't. It bombs out with:\nAttributeError: '_Screen' object has no attribute 'after'\n\nTurtle works in two modes, standalone and embeded in a larger tkinter program. You're trying to embed a standalone turtle program. Below, I've taken apart and reassembled your turtle program to be embedded in tkinter and fully implement the functionality you describe. (It has a tkinter \"Exit Game\" button.)\nfrom random import randint\nfrom turtle import TurtleScreen, RawTurtle\nimport tkinter as tk\nimport sys\n\ndef back_canvas():\n # Landscape making\n # Making the ground\n\n pen.color('sienna')\n\n pen.penup()\n pen.setpos(-640, -162.5)\n pen.pendown()\n\n pen.begin_fill()\n\n for _ in range(2):\n pen.forward(1280)\n pen.right(90)\n pen.forward(162.5)\n pen.right(90)\n\n pen.end_fill()\n\n # Making Racing Area\n\n pen.color('lime')\n pen.begin_fill()\n\n for _ in range(2):\n pen.forward(1280)\n pen.left(90)\n pen.forward(325)\n pen.left(90)\n\n pen.end_fill()\n\n # Making Top Area\n\n pen.color('dodgerblue')\n pen.begin_fill()\n pen.left(90)\n pen.forward(325)\n\n for _ in range(2):\n pen.forward(162.5)\n pen.right(90)\n pen.forward(1280)\n pen.right(90)\n\n pen.end_fill()\n pen.penup()\n\n # Writing \"Turtle Race Game\"\n pen.color('lime')\n pen.setpos(0, 250)\n pen.color('black')\n pen.write(\"Turtle Race Game\", align='center', font=('Arial', 27, 'normal'))\n\n # Making the first finish line\n pen.right(90)\n pen.setpos(500, 143)\n\n def flag():\n pen.color('black')\n pen.begin_fill()\n\n for _ in range(4):\n pen.forward(20)\n pen.right(90)\n\n pen.end_fill()\n pen.forward(20)\n\n pen.color('white')\n pen.begin_fill()\n\n for _ in range(4):\n pen.forward(20)\n pen.right(90)\n\n pen.end_fill()\n pen.forward(20)\n\n for _ in range(7):\n flag()\n\n pen.right(90)\n pen.forward(40)\n pen.right(90)\n\n flag()\n\n pen.right(180)\n\n # placing main pen to right place to say who won\n pen.setpos(520, 180)\n\nrace_step_count = 230\n\ndef race():\n global race_step_count\n\n if race_step_count > 0:\n red.forward(randint(1, 8))\n blue.forward(randint(1, 8))\n purple.forward(randint(1, 8))\n orange.forward(randint(1, 8))\n\n race_step_count -= 1\n screen.ontimer(race, 100) # call this function again after 100mS\n else:\n who_won()\n\ndef who_won():\n if blue.xcor() > red.xcor() and blue.xcor() > 
purple.xcor() and blue.xcor() > orange.xcor():\n pen.write(\"Blue won!\", align='center', font=('Arial', 25, 'bold'))\n elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor():\n pen.write(\"Red won!\", align='center', font=('Arial', 25, 'bold'))\n elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor():\n pen.write(\"Purple won!\", align='center', font=('Arial', 25, 'bold'))\n elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor():\n pen.write(\"Orange won!\", align='center', font=('Arial', 25, 'bold'))\n\nmaster = tk.Tk()\nmaster.title(\"Turtle Race Game\")\n\ncanvas = tk.Canvas(master, width=1280, height=650)\ncanvas.pack()\n\nscreen = TurtleScreen(canvas)\n\ntk.Button(master, text=\"Exit Game\", command=sys.exit, width=0, height=4, fg='gold', bg='dodgerblue').pack()\n\n# Main drawing turtle\npen = RawTurtle(screen)\npen.hideturtle()\npen.speed('fastest')\n\nback_canvas()\n\nred = RawTurtle(screen)\nred.speed('fastest')\nred.shape('turtle')\nred.penup()\n\nred.color('red')\nred.setpos(-550, 90)\n\nblue = red.clone()\nblue.color('blue')\nblue.setpos(-550, 30)\n\npurple = red.clone()\npurple.color('purple')\npurple.setpos(-550, -30)\n\norange = red.clone()\norange.color('orange')\norange.setpos(-550, -90)\n\nrace()\n\nscreen.mainloop()\n\nWhenever you import the same library multiple ways, you're probably in trouble. (When you import mulitple libraries multiple ways, you're definitely in trouble.)\n" ]
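One caveat on the first answer's sketch: next_step is only assigned inside the if branch, so the else branch can hit an unassigned name on the final call. In practice the else can simply do nothing, because not rescheduling already stops the loop. A minimal corrected version of the same after() pattern (red, blue, purple, orange assumed to exist as in the question):

import tkinter as tk
from random import randint

root = tk.Tk()
race_step_count = 230

def race():
    global race_step_count
    if race_step_count > 0:
        red.forward(randint(1, 8))
        blue.forward(randint(1, 8))
        purple.forward(randint(1, 8))
        orange.forward(randint(1, 8))
        race_step_count -= 1
        root.after(100, race)   # reschedule; skipping this line ends the race
    # else: nothing to cancel - the callback simply stops rescheduling itself

race()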
[ 1, 0 ]
[]
[]
[ "python", "python_turtle", "tkinter" ]
stackoverflow_0074646592_python_python_turtle_tkinter.txt
Q: How to put a dictionary in a JSON? I'm working with a REST API, and I need to return a JSON with my values ​​to it. However, I need the items of the payload variable to show all the items inside the cart_item. I have this: payload = { "items": [], } I tried this, but I don't know how I would put this inside the items of the payload: for cart_item in cart_items: item = [ { "reference_id": f"{cart_item.sku}", "name": f"{cart_item.product.name}", "quantity": cart_item.quantity, "unit_amount": cart_item.product.price }, ] I need you to get back to me: payload = { "items": [ { "reference_id": "SKU49FS20DD", "name": "Produto 1", "quantity": 1, "unit_amount": 130 }, { "reference_id": "SKU42920SSD", "name": "Produto 2", "quantity": 1, "unit_amount": 100 } ], } response = requests.request( "POST", url, headers=headers, json=payload ) I don't know if I would need to pass what's in JSON to the dictionary to change and then send it to JSON again. A: Instead of trying to create one item at a time, just populate payload['items'] directly, using a comprehension: payload['items'] = [ { 'reference_id': cart_item.sku, 'name': cart_item.product.name, 'quantity': cart_item.quantity, 'unit_amount': cart_item.product.price } for cart_item in cart_items ] Another possible improvement is about requests. Instead of using requests.requests('POST' ...), you can use requests.post(...). And finally, if the API really needs json to have a valid JSON string, use json.dumps to convert it. Putting all together: import requests import json payload['items'] = [ { 'reference_id': cart_item.sku, 'name': cart_item.product.name, 'quantity': cart_item.quantity, 'unit_amount': cart_item.product.price } for cart_item in cart_items ] response = requests.post( url, headers=headers, json=json.dumps(payload) ) Even though I'm almost a hundred percent sure requests.post() will do the right thing if you just pass the payload as is in json=payload. A: You're just missing the "append()" method on a list, and the conversion from Python list & dict to a JSON string: from json import dumps items_dict = [] for cart_item in cart_items: items_dict.append({ "reference_id": f"{cart_item.sku}", "name": f"{cart_item.product.name}", "quantity": cart_item.quantity, "unit_amount": cart_item.product.price }) payload = { 'items': items_dict } # And if you want a JSON string as output print(dumps(payload)) But you don't need a string to the "json" argument in requests.post, so you can keep your response = requests.request( "POST", url, headers=headers, json=payload )
How to put a dictionary in a JSON?
I'm working with a REST API, and I need to send it a JSON with my values. However, I need the items of the payload variable to contain all the items from cart_items. I have this:

payload = {
    "items": [],
}

I tried this, but I don't know how to put these dicts inside the payload's items:

for cart_item in cart_items:
    item = [
        {
            "reference_id": f"{cart_item.sku}",
            "name": f"{cart_item.product.name}",
            "quantity": cart_item.quantity,
            "unit_amount": cart_item.product.price
        },
    ]

I need the payload to end up like this:

payload = {
    "items": [
        {
            "reference_id": "SKU49FS20DD",
            "name": "Produto 1",
            "quantity": 1,
            "unit_amount": 130
        },
        {
            "reference_id": "SKU42920SSD",
            "name": "Produto 2",
            "quantity": 1,
            "unit_amount": 100
        }
    ],
}

response = requests.request(
    "POST",
    url,
    headers=headers,
    json=payload
)

I don't know if I would need to convert the JSON to a dictionary, change it, and then convert it back to JSON before sending.
[ "Instead of trying to create one item at a time, just populate payload['items'] directly, using a comprehension:\npayload['items'] = [\n {\n 'reference_id': cart_item.sku,\n 'name': cart_item.product.name,\n 'quantity': cart_item.quantity,\n 'unit_amount': cart_item.product.price \n }\n for cart_item in cart_items\n]\n\nAnother possible improvement is about requests. Instead of using requests.requests('POST' ...), you can use requests.post(...).\nAnd finally, if the API really needs json to have a valid JSON string, use json.dumps to convert it.\nPutting all together:\nimport requests\nimport json\n\npayload['items'] = [\n {\n 'reference_id': cart_item.sku,\n 'name': cart_item.product.name,\n 'quantity': cart_item.quantity,\n 'unit_amount': cart_item.product.price \n }\n for cart_item in cart_items\n]\n\nresponse = requests.post(\n url,\n headers=headers,\n json=json.dumps(payload)\n)\n\nEven though I'm almost a hundred percent sure requests.post() will do the right thing if you just pass the payload as is in json=payload.\n", "You're just missing the \"append()\" method on a list, and the conversion from Python list & dict to a JSON string:\n from json import dumps\n\n items_dict = []\n for cart_item in cart_items:\n items_dict.append({\n \"reference_id\": f\"{cart_item.sku}\",\n \"name\": f\"{cart_item.product.name}\",\n \"quantity\": cart_item.quantity,\n \"unit_amount\": cart_item.product.price\n })\n\npayload = {\n 'items': items_dict\n}\n\n# And if you want a JSON string as output\nprint(dumps(payload))\n\nBut you don't need a string to the \"json\" argument in requests.post, so you can keep your\nresponse = requests.request(\n \"POST\",\n url, \n headers=headers,\n json=payload\n)\n\n" ]
[ 1, 0 ]
[]
[]
[ "django_rest_framework", "json", "python", "python_requests", "rest" ]
stackoverflow_0074660579_django_rest_framework_json_python_python_requests_rest.txt
Q: Change dates to quarters in JSON file Python I'm trying to convert the dates inside a JSON file to their respective quarter and year. My JSON file is formatted below: { "lastDate": { "0": "11/22/2022", "1": "10/28/2022", "2": "10/17/2022", "7": "07/03/2022", "8": "07/03/2022", "9": "06/03/2022", "18": "05/17/2022", "19": "05/08/2022", "22": "02/03/2022", "24": "02/04/2022" } } The current code I'm using is an attempt of using the pandas.Series.dt.quarter as seen below: import json import pandas as pd data = json.load(open("date_to_quarters.json")) df = data['lastDate'] pd.to_datetime(df['lastDate']) df['Quarter'] = df['Date'].dt.quarter open("date_to_quarters.json", "w").write( json.dumps(data, indent=4)) The issue I face is that my code isn't comprehending the object name "lastDate". My ideal output should have the dates ultimately replaced into their quarter, check below: { "lastDate": { "0": "Q42022", "1": "Q42022", "2": "Q42022", "7": "Q32022", "8": "Q32022", "9": "Q22022", "18": "Q22022", "19": "Q22022", "22": "Q12022", "24": "Q12022" } } A: You can use this bit of code instead: import json import pandas as pd data = json.load(open("date_to_quarters.json")) # convert json to df df = pd.DataFrame.from_dict(data, orient="columns") # convert last date to quarter df['lastDate'] = pd.to_datetime(df['lastDate']) df['lastDate'] = df['lastDate'].dt.to_period('Q') # change type of lastDate to string df['lastDate'] = df['lastDate'].astype(str) # write to json file df.to_json("date_to_quarters1.json", orient="columns", indent=4) json object is different than pd.DataFrame. You have to convert json to pd.DataFrame first using from_dict() function. A: Try: import json import pandas as pd with open("data.json", "r") as f_in: data = json.load(f_in) x = pd.to_datetime(list(data["lastDate"].values())) out = { "lastDate": dict( zip(data["lastDate"], (f"Q{q}{y}" for q, y in zip(x.quarter, x.year))) ) } print(out) Prints: { "lastDate": { "0": "Q42022", "1": "Q42022", "2": "Q42022", "7": "Q32022", "8": "Q32022", "9": "Q22022", "18": "Q22022", "19": "Q22022", "22": "Q12022", "24": "Q12022", } } To save out as Json: with open("out.json", "w") as f_out: json.dump(out, f_out, indent=4)
Change dates to quarters in JSON file Python
I'm trying to convert the dates inside a JSON file to their respective quarter and year. My JSON file is formatted below:

{
    "lastDate": {
        "0": "11/22/2022",
        "1": "10/28/2022",
        "2": "10/17/2022",
        "7": "07/03/2022",
        "8": "07/03/2022",
        "9": "06/03/2022",
        "18": "05/17/2022",
        "19": "05/08/2022",
        "22": "02/03/2022",
        "24": "02/04/2022"
    }
}

The current code I'm using is an attempt of using the pandas.Series.dt.quarter as seen below:

import json
import pandas as pd

data = json.load(open("date_to_quarters.json"))
df = data['lastDate']
pd.to_datetime(df['lastDate'])
df['Quarter'] = df['Date'].dt.quarter

open("date_to_quarters.json", "w").write(
    json.dumps(data, indent=4))

The issue I face is that my code isn't comprehending the object name "lastDate". My ideal output should have the dates ultimately replaced into their quarter, check below:

{
    "lastDate": {
        "0": "Q42022",
        "1": "Q42022",
        "2": "Q42022",
        "7": "Q32022",
        "8": "Q32022",
        "9": "Q22022",
        "18": "Q22022",
        "19": "Q22022",
        "22": "Q12022",
        "24": "Q12022"
    }
}
[ "You can use this bit of code instead:\nimport json\nimport pandas as pd\n\ndata = json.load(open(\"date_to_quarters.json\"))\n\n# convert json to df\ndf = pd.DataFrame.from_dict(data, orient=\"columns\")\n\n# convert last date to quarter\ndf['lastDate'] = pd.to_datetime(df['lastDate'])\ndf['lastDate'] = df['lastDate'].dt.to_period('Q')\n\n# change type of lastDate to string\ndf['lastDate'] = df['lastDate'].astype(str)\n\n# write to json file\ndf.to_json(\"date_to_quarters1.json\", orient=\"columns\", indent=4)\n\njson object is different than pd.DataFrame. You have to convert json to pd.DataFrame first using from_dict() function.\n", "Try:\nimport json\nimport pandas as pd\n\nwith open(\"data.json\", \"r\") as f_in:\n data = json.load(f_in)\n\nx = pd.to_datetime(list(data[\"lastDate\"].values()))\n\n\nout = {\n \"lastDate\": dict(\n zip(data[\"lastDate\"], (f\"Q{q}{y}\" for q, y in zip(x.quarter, x.year)))\n )\n}\nprint(out)\n\nPrints:\n{\n \"lastDate\": {\n \"0\": \"Q42022\",\n \"1\": \"Q42022\",\n \"2\": \"Q42022\",\n \"7\": \"Q32022\",\n \"8\": \"Q32022\",\n \"9\": \"Q22022\",\n \"18\": \"Q22022\",\n \"19\": \"Q22022\",\n \"22\": \"Q12022\",\n \"24\": \"Q12022\",\n }\n}\n\n\nTo save out as Json:\nwith open(\"out.json\", \"w\") as f_out:\n json.dump(out, f_out, indent=4)\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "json", "pandas", "python" ]
stackoverflow_0074660556_dataframe_json_pandas_python.txt
Q: How to use Gradio interface to auto submit the audio when recording is done? I am using the following Gradio sample code to transcribe my audio: from transformers import pipeline p = pipeline("automatic-speech-recognition") import gradio as gr def transcribe(audio): text = p(audio)["text"] return text gr.Interface( fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs="text").launch() However, the user has to start recording audio, stop recording audio, and the submit the audio. Can I auto submit the audio when the user presses stop recording audio? A: You can use auto-submit something like this should work #auto submit after 5 seconds gr.Interface( fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs="text", auto_submit=True, auto_submit_duration=5).launch() A: I found the solution. I am putting it here for other's reference. import gradio as gr from transformers import pipeline p = pipeline("automatic-speech-recognition") def transcribe(audio): text = p(audio)["text"] return text gr.Interface( fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs="text",live=True).launch() Adding live=True serves the purpose.
How to use Gradio interface to auto submit the audio when recording is done?
I am using the following Gradio sample code to transcribe my audio:

from transformers import pipeline

p = pipeline("automatic-speech-recognition")

import gradio as gr

def transcribe(audio):
    text = p(audio)["text"]
    return text

gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(source="microphone", type="filepath"),
    outputs="text").launch()

However, the user has to start recording audio, stop recording audio, and then submit the audio. Can I auto submit the audio when the user presses stop recording audio?
[ "You can use auto-submit something like this should work\n#auto submit after 5 seconds\ngr.Interface(\n fn=transcribe,\n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"),\n outputs=\"text\",\n auto_submit=True,\n auto_submit_duration=5).launch()\n\n", "I found the solution. I am putting it here for other's reference.\nimport gradio as gr\n\nfrom transformers import pipeline\n\np = pipeline(\"automatic-speech-recognition\")\n\ndef transcribe(audio):\n text = p(audio)[\"text\"]\n return text\n\ngr.Interface(\n fn=transcribe, \n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"), \n outputs=\"text\",live=True).launch()\n\nAdding live=True serves the purpose.\n" ]
[ 0, 0 ]
[]
[]
[ "gradio", "python" ]
stackoverflow_0074660611_gradio_python.txt
Q: Find if words from one sentence are found in corresponding row of another column also containing sentences (Pandas) I have dataframe that looks like this: email account_name 0 NaN weichert, realtors mnsota 1 jhawkins sterling group com sterling group 2 lbaltz baltzchevy com baltz chevrolet and I have this code that works as a solution but it takes forever on larger datasets and I know there has to be an easier way to solve it so just looking to see if anyone knows of a more concise/elegant way to do find a count of matching words between corresponding rows of both columns. Thanks test = prod_nb_wcomps_2.sample(3, random_state=10).reset_index(drop = True) test = test[['email','account_name']] print(test) lst = [] for i in test.index: if not isinstance(test['email'].iloc[i], float): for word in test['email'].iloc[i].split(' '): if not isinstance(test['account_name'].iloc[i], float): for word2 in test['account_name'].iloc[i].split(' '): if word in word2: lst.append({'index':i, 'bool_col': True}) else: lst.append({'index':i, 'bool_col': False}) df_dct = pd.DataFrame(lst) df_dct = df_dct.loc[df_dct['bool_col'] == True] df_dct['number of matches_per_row'] = df_dct.groupby('index')['bool_col'].transform('size') df_dct.set_index('index', inplace=True, drop=True) df_dct.drop(['bool_col'], inplace=True, axis =1) test_ = pd.merge(test, df_dct, left_index=True, right_index=True) test_ the resulting dataframe test_ looks like this A: This solves your query. import pandas as pd df = pd.DataFrame({'email': ['', 'jhawkins sterling group com', 'lbaltz baltzchevy com'], 'name': ['John', 'sterling group', 'Linda']}) for index, row in df.iterrows(): matches = sum([1 for x in row['email'].split() if x in row['name'].split()]) df.loc[index, 'matches'] = matches Output: email name matches 0 John 0.0 1 jhawkins sterling group com sterling group 2.0 2 lbaltz baltzchevy com Linda 0.0 A: You can use the apply method on your dataframe to apply a function to each row, which can simplify your code and make it more efficient. The apply method will apply the function you specify to each row of the dataframe, and the function should take a single row as input and return the desired result. In your case, you can define a function that takes a row as input, splits the email and account_name values in that row into words, and then counts the number of words that appear in both the email and account_name values. Here is an example of how you could define and use this function: def count_matching_words(row): email_words = row['email'].split(' ') account_name_words = row['account_name'].split(' ') return len(set(email_words).intersection(account_name_words)) test['number of matches_per_row'] = test.apply(count_matching_words, axis=1) This code will apply the count_matching_words function to each row of the test dataframe, and the result will be a new column in the dataframe that contains the number of matching words between the email and account_name values in each row. This should be much more efficient and concise than your current solution, and it should work well even on larger datasets.
Find if words from one sentence are found in corresponding row of another column also containing sentences (Pandas)
I have a dataframe that looks like this:

                         email                  account_name
0                          NaN     weichert, realtors mnsota
1  jhawkins sterling group com                sterling group
2        lbaltz baltzchevy com               baltz chevrolet

and I have this code that works as a solution, but it takes forever on larger datasets and I know there has to be an easier way to solve it, so just looking to see if anyone knows of a more concise/elegant way to find a count of matching words between corresponding rows of both columns. Thanks

test = prod_nb_wcomps_2.sample(3, random_state=10).reset_index(drop = True)
test = test[['email','account_name']]
print(test)

lst = []
for i in test.index:
    if not isinstance(test['email'].iloc[i], float):
        for word in test['email'].iloc[i].split(' '):
            if not isinstance(test['account_name'].iloc[i], float):
                for word2 in test['account_name'].iloc[i].split(' '):
                    if word in word2:
                        lst.append({'index':i, 'bool_col': True})
                    else:
                        lst.append({'index':i, 'bool_col': False})

df_dct = pd.DataFrame(lst)
df_dct = df_dct.loc[df_dct['bool_col'] == True]
df_dct['number of matches_per_row'] = df_dct.groupby('index')['bool_col'].transform('size')
df_dct.set_index('index', inplace=True, drop=True)
df_dct.drop(['bool_col'], inplace=True, axis =1)

test_ = pd.merge(test, df_dct, left_index=True, right_index=True)
test_

the resulting dataframe test_ looks like this
[ "This solves your query.\nimport pandas as pd\n\ndf = pd.DataFrame({'email': ['', 'jhawkins sterling group com', 'lbaltz baltzchevy com'], 'name': ['John', 'sterling group', 'Linda']})\n\nfor index, row in df.iterrows():\n matches = sum([1 for x in row['email'].split() if x in row['name'].split()])\n df.loc[index, 'matches'] = matches\n\nOutput:\n email name matches\n0 John 0.0\n1 jhawkins sterling group com sterling group 2.0\n2 lbaltz baltzchevy com Linda 0.0\n\n", "You can use the apply method on your dataframe to apply a function to each row, which can simplify your code and make it more efficient.\nThe apply method will apply the function you specify to each row of the dataframe, and the function should take a single row as input and return the desired result. In your case, you can define a function that takes a row as input, splits the email and account_name values in that row into words, and then counts the number of words that appear in both the email and account_name values. Here is an example of how you could define and use this function:\ndef count_matching_words(row):\n email_words = row['email'].split(' ')\n account_name_words = row['account_name'].split(' ')\n return len(set(email_words).intersection(account_name_words))\n\ntest['number of matches_per_row'] = test.apply(count_matching_words, axis=1)\n\nThis code will apply the count_matching_words function to each row of the test dataframe, and the result will be a new column in the dataframe that contains the number of matching words between the email and account_name values in each row. This should be much more efficient and concise than your current solution, and it should work well even on larger datasets.\n" ]
[ 2, 2 ]
[]
[]
[ "group_by", "pandas", "python" ]
stackoverflow_0074660484_group_by_pandas_python.txt
Q: How can I compare one column of a dataframe to multiple other columns using SequenceMatcher? I have a dataframe with 6 columns, the first two are an id and a name column, the remaining 4 are potential matches for the name column. id name match1 match2 match3 match4 id name match1 match2 match3 match4 1 NXP Semiconductors NaN NaN NaN NaN 2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital 3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN 4 Miami-Dade County County of Will County of Orange NaN NaN 5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia 6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN I'd like to use SequenceMatcher to compare the name column with each match column if there is a value and return the match value with the highest ratio, or closest match, in a new column at the end of the dataframe. So the output would be something like this: id name match1 match2 match3 match4 best match 1 NXP Semiconductors NaN NaN NaN NaN NaN 2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital Cincinnati Children's Hospital Medical Center 3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN The State Board of Administration of Florida 4 Miami-Dade County County of Will County of Orange NaN NaN County of Orange 5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia California Teacher's Association 6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN Waste Management I've gotten the data into the dataframe and have been able to compare one column to a single other column using the apply method: df['diff'] = df.apply(lambda x: diff.SequenceMatcher(None, x[0].strip(), x[1].strip()).ratio(), axis=1) However, I'm not sure how to loop over multiple columns in the same row. I also thought about trying to reformat my data so it that the method above would work, something like this: name match name1 match1 name1 match2 name1 match3 However, I was running into issues dealing with the NaN values. Open to suggestions on the best route to accomplish this. A: I ended up solving this using the second idea of reformatting the table. Using the melt function I was able to get a two column table of the name field with each possible match. From there I used the original lambda function to compare the two columns and output a ratio. From there it was relatively easy to go through and see the most likely matches, although it did require some manual effort. df = pd.read_csv('output.csv') df1 = df.melt(id_vars = ['id', 'name'], var_name = 'match').dropna().drop('match',1).sort_values('name') df1['diff'] = df1.apply(lambda x: diff.SequenceMatcher(None, x[1].strip(), x[2].strip()).ratio(), axis=1) df1.to_csv('comparison-output.csv', encoding='utf-8')
How can I compare one column of a dataframe to multiple other columns using SequenceMatcher?
I have a dataframe with 6 columns, the first two are an id and a name column, the remaining 4 are potential matches for the name column. id name match1 match2 match3 match4 id name match1 match2 match3 match4 1 NXP Semiconductors NaN NaN NaN NaN 2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital 3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN 4 Miami-Dade County County of Will County of Orange NaN NaN 5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia 6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN I'd like to use SequenceMatcher to compare the name column with each match column if there is a value and return the match value with the highest ratio, or closest match, in a new column at the end of the dataframe. So the output would be something like this: id name match1 match2 match3 match4 best match 1 NXP Semiconductors NaN NaN NaN NaN NaN 2 Cincinnati Children's Hospital Medical Center Montefiore Medical center Children's Hospital Los Angeles Cincinnati Children's Hospital Medical Center SSM Health SLU Hospital Cincinnati Children's Hospital Medical Center 3 Seminole Tribe of Florida The State Board of Administration of Florida NaN NaN NaN The State Board of Administration of Florida 4 Miami-Dade County County of Will County of Orange NaN NaN County of Orange 5 University of California California Teacher's Association Yale University University of Toronto University System of Georgia California Teacher's Association 6 Bon Appetit Management Waste Management Sculptor Capital NaN NaN Waste Management I've gotten the data into the dataframe and have been able to compare one column to a single other column using the apply method: df['diff'] = df.apply(lambda x: diff.SequenceMatcher(None, x[0].strip(), x[1].strip()).ratio(), axis=1) However, I'm not sure how to loop over multiple columns in the same row. I also thought about trying to reformat my data so it that the method above would work, something like this: name match name1 match1 name1 match2 name1 match3 However, I was running into issues dealing with the NaN values. Open to suggestions on the best route to accomplish this.
[ "I ended up solving this using the second idea of reformatting the table. Using the melt function I was able to get a two column table of the name field with each possible match. From there I used the original lambda function to compare the two columns and output a ratio. From there it was relatively easy to go through and see the most likely matches, although it did require some manual effort.\ndf = pd.read_csv('output.csv')\ndf1 = df.melt(id_vars = ['id', 'name'], var_name = 'match').dropna().drop('match',1).sort_values('name')\ndf1['diff'] = df1.apply(lambda x: diff.SequenceMatcher(None, x[1].strip(), x[2].strip()).ratio(), axis=1) \ndf1.to_csv('comparison-output.csv', encoding='utf-8')\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "sequencematcher" ]
stackoverflow_0074637083_pandas_python_sequencematcher.txt
Q: Determine if Two Strings Are Close i am trying to make a program that compares word1 strings with word2 string to occur only once class Solution: def closeStrings(self, word1: str, word2: str) -> bool: word1 = [x.strip() for x in word1] word2 = [x.strip() for x in word2] update = False for x in word1: if(x in word2): update = True if(type(x) is str): a = word1.index(x) b = word2.index(x) word1[a]='' word2[b]='' else: update = False else: update = False break return update print(Solution.closeStrings(Solution,word1='a',word2='aa')) Input word1 = 'a',word2 ='aa' Expected Output = False Actual Output = True A: print(Solution.closeStrings(Solution,word1='a',word2='aa')) You create a class in order to be able to create an instance of it. That way you don't need to pass Solution as the self parameter. word1 = [x.strip() for x in word1] It looks like you expect to remove spaces. But you'll get a list of strings with empty strings for the spaces. That's not what you want. See the output of print([x.strip() for x in "Hello world"]) Your algorithm is way too complicated. You can simply count the occurrences of each character in word2: class Solution: def closeStrings(self, word1: str, word2: str) -> bool: for x in word1: if word2.count(x) != word1.count(x): return False return True s = Solution() print(s.closeStrings(word1='a',word2='aa')) print(s.closeStrings(word1='abcb',word2='bcab')) A: Extending to other more solution answer by @Thomas Weller well explained by him class Solution: def closeStrings(self, word1: str, word2: str) -> bool: for i in word1: if i not in word2: return False for i in word2: if i not in word1: return False return True def closeStrings2(self, word1: str, word2: str) -> bool: if len(word1) != len(word2): return False if set(word1) != set(word2): return False return True def closeStrings3(self, word1: str, word2: str) -> bool: if len(word1) != len(word2): return False if sorted(word1) != sorted(word2): return False return True print(Solution().closeStrings(word1="cabbba", word2="abbccc")) print(Solution().closeStrings3(word1="cabbba", word2="aabbss")) print(Solution().closeStrings3(word1="cabbba", word2="aabbss"))
Determine if Two Strings Are Close
I am trying to make a program that compares the characters of word1 with word2, where each matched character may be used only once class Solution: def closeStrings(self, word1: str, word2: str) -> bool: word1 = [x.strip() for x in word1] word2 = [x.strip() for x in word2] update = False for x in word1: if(x in word2): update = True if(type(x) is str): a = word1.index(x) b = word2.index(x) word1[a]='' word2[b]='' else: update = False else: update = False break return update print(Solution.closeStrings(Solution,word1='a',word2='aa')) Input word1 = 'a',word2 ='aa' Expected Output = False Actual Output = True
[ "\nprint(Solution.closeStrings(Solution,word1='a',word2='aa'))\nYou create a class in order to be able to create an instance of it. That way you don't need to pass Solution as the self parameter.\n\nword1 = [x.strip() for x in word1]\nIt looks like you expect to remove spaces. But you'll get a list of strings with empty strings for the spaces. That's not what you want. See the output of\nprint([x.strip() for x in \"Hello world\"])\n\nYour algorithm is way too complicated.\nYou can simply count the occurrences of each character in word2:\n\n\nclass Solution:\n def closeStrings(self, word1: str, word2: str) -> bool:\n for x in word1:\n if word2.count(x) != word1.count(x): return False\n return True\n\n\ns = Solution()\nprint(s.closeStrings(word1='a',word2='aa'))\nprint(s.closeStrings(word1='abcb',word2='bcab'))\n\n", "Extending to other more solution answer by @Thomas Weller well explained by him\nclass Solution:\n def closeStrings(self, word1: str, word2: str) -> bool:\n for i in word1:\n if i not in word2:\n return False\n for i in word2:\n if i not in word1:\n return False\n return True\n\n def closeStrings2(self, word1: str, word2: str) -> bool:\n if len(word1) != len(word2):\n return False\n if set(word1) != set(word2):\n return False\n return True\n\n def closeStrings3(self, word1: str, word2: str) -> bool:\n if len(word1) != len(word2):\n return False\n if sorted(word1) != sorted(word2):\n return False\n return True\n\nprint(Solution().closeStrings(word1=\"cabbba\", word2=\"abbccc\"))\nprint(Solution().closeStrings3(word1=\"cabbba\", word2=\"aabbss\"))\nprint(Solution().closeStrings3(word1=\"cabbba\", word2=\"aabbss\"))\n\n\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "list", "python", "python_3.x" ]
stackoverflow_0074660641_list_python_python_3.x.txt
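A compact variant of the counting idea from the first answer above; collections.Counter compares all character frequencies at once. The standalone helper close_strings is a hypothetical name, not the original Solution method.

from collections import Counter

def close_strings(word1: str, word2: str) -> bool:
    # Equal multisets of characters: every character occurs the same
    # number of times in both words.
    return Counter(word1) == Counter(word2)

print(close_strings('a', 'aa'))       # False, as the question expects
print(close_strings('abcb', 'bcab'))  # True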
Q: Error while working on the site in Django This is a continuation of the previous question. When I continued to work on the site and when I wanted to test the site through "python manage.py runserver" in the C:\mysite\site\miniproject directory, the following error pops up: C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. new_class._meta.apps.register_model(new_class._meta.app_label, new_class) C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. new_class._meta.apps.register_model(new_class._meta.app_label, new_class) Watching for file changes with StatReloader Performing system checks... Exception in thread django-main-thread: Traceback (most recent call last): File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 17, in include urlconf_module, app_name = arg ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files\Python36\lib\threading.py", line 916, in _bootstrap_inner self.run() File "C:\Program Files\Python36\lib\threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "C:\Program Files\Python36\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "C:\Program Files\Python36\lib\site-packages\django\core\management\commands\runserver.py", line 118, in inner_run self.check(display_num_errors=True) File "C:\Program Files\Python36\lib\site-packages\django\core\management\base.py", line 423, in check databases=databases, File "C:\Program Files\Python36\lib\site-packages\django\core\checks\registry.py", line 76, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config return check_resolver(resolver) File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver return check_method() File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 416, in check for pattern in self.url_patterns: File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 602, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 595, in urlconf_module return import_module(self.urlconf_name) File "C:\Program Files\Python36\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 978, in _gcd_import File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen 
importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed File "C:\mysite\site\miniproject\miniproject\urls.py", line 20, in <module> url(r'^admin/', include(admin.site.urls)), File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 27, in include 'provide the namespace argument to include() instead.' % len(arg) django.core.exceptions.ImproperlyConfigured: Passing a 3-tuple to include() is not supported. Pass a 2-tuple containing the list of patterns and app_name, and provide the namespace argument to include() instead. Here is a link to the chapter where I worked: https://pocoz.gitbooks.io/django-v-primerah/content/sozdanie-shablonov-dlia-view.html , Most likely I made a mistake somewhere. Next, I will show you the contents of the files: base.html: {% load static files %} <!DOCTYPE html> <html> <head> <title>{% block title %}{% endblock %}</title> <link href="{% static "css/blog.css" %}" rel="stylesheet"> </head> <body> <div id="content"> {% block content %} {%endblock%} </div> <div id="sidebar"> <h2>My blog</h2> <p>This is my blog.</p> </div> </body> </html> list.html: {% extends "blog/base.html" %} {% block title %}My Blog{% endblock %} {% block content %} <h1>My Blog</h1> {% for post in posts %} <h2> <a href="{{ post.get_absolute_url }}">{{ post.title }}</a> </h2> <p class="date"> Published {{ post.publish }} by {{ post.author }} </p> {{ post.body|truncatewords:30|linebreaks }} {% endfor %} {%endblock%} detail.html: {% extends "blog/base.html" %} {% block title %}{{ post. title }}{% endblock %} {% block content %} <h1>{{post.title}}</h1> <p class="date"> Published {{ post.publish }} by {{ post.author }} </p> {{ post.body|linebreaks}} {%endblock%} C:\mysite\site\miniproject\blog\views.py: from django.shortcuts import render, get_object_or_404 from .models import Post def post_list(request): posts = Post.published.all() return render(request, 'blog/post/list.html', {'posts': posts}) def post_detail(request, year, month, day, post): post = get_object_or_404(Post, slug=post, status='published', publish_year=year, publish__month=month, publish_day=day) return render(request,'blog/post/detail.html', {'post': post}) # Create your views here. C:\mysite\site\miniproject\blog\urls.py: from django.conf.urls import url from. import views urlpatterns = [ # post views url(r'^$', views.post_list, name='post_list'), url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\ r'(?P<post>[-\w]+)/$', views.post_detail, name='post_detail'), ] C:\mysite\site\miniproject\miniproject\urls.py: """miniproject URL Configuration The `urlpatterns` list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/3.2/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2. Add a URL to urlpatterns: path('', views.home, name='home') class-based views 1. Add an import: from other_app.views import Home 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') Including another URLconf 1. Import the include() function: from django.urls import include, path 2. 
Add a URL to urlpatterns: path('blog/', include('blog.urls')) """ from django.conf.urls import include, url from django.contrib import admin urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^blog/', include('blog.urls', namespace='blog', app_name='blog')), ] C:\mysite\site\miniproject\blog\models.py: from django.db import models from django.utils import timezone from django.contrib.auth.models import User from django.shortcuts import reverse class Post(models.Model): def get_absolute_url(self): return reverse('blog:post_detail', args=[self.publish.year, self.publish.strftime('%m'), self.publish.strftime('%d'), self.slug]) class Post(models.Model): STATUS_CHOICES = ( ('draft', 'Draft'), ('published', 'Published'), ) title = models.CharField(max_length=250) slug = models.SlugField(max_length=250, unique_for_date='publish') author = models.ForeignKey(User, on_delete=models.CASCADE, related_name='blog_posts') body = models.TextField() publish = models.DateTimeField(default=timezone.now) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='draft') class Meta: ordering = ('-publish',) def __str__(self): return self.title # Create your models here. I updated the Python libraries and carefully checked everything, read the Django documentation and nothing helped; maybe I inserted the Python code incorrectly. A: The error lies in your urls file: include() no longer accepts app_name as a keyword argument url(r'^blog/', include('blog.urls', namespace='blog', app_name='blog')), Passing a 2-tuple of (urlconf, app_name) should fix the issue url(r'^blog/', include(('blog.urls', 'blog'), namespace='blog')), here is the implementation behind the include method def include(arg, namespace=None): app_name = None if isinstance(arg, tuple): # Callable returning a namespace hint. try: urlconf_module, app_name = arg except ValueError: if namespace: raise ImproperlyConfigured( "Cannot override the namespace for a dynamic module that " "provides a namespace." ) raise ImproperlyConfigured( "Passing a %d-tuple to include() is not supported. Pass a " "2-tuple containing the list of patterns and app_name, and " "provide the namespace argument to include() instead." % len(arg) ) else: # No namespace hint - use manually provided namespace. urlconf_module = arg ... A: Change the import line to from django.shortcuts import reverse.
Error while working on the site in Django
This is a continuation of the previous question. When I continued to work on the site and when I wanted to test the site through "python manage.py runserver" in the C:\mysite\site\miniproject directory, the following error pops up: C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. new_class._meta.apps.register_model(new_class._meta.app_label, new_class) C:\Program Files\Python36\lib\site-packages\django\db\models\base.py:321: RuntimeWarning: Model 'blog.post' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. new_class._meta.apps.register_model(new_class._meta.app_label, new_class) Watching for file changes with StatReloader Performing system checks... Exception in thread django-main-thread: Traceback (most recent call last): File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 17, in include urlconf_module, app_name = arg ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files\Python36\lib\threading.py", line 916, in _bootstrap_inner self.run() File "C:\Program Files\Python36\lib\threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "C:\Program Files\Python36\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "C:\Program Files\Python36\lib\site-packages\django\core\management\commands\runserver.py", line 118, in inner_run self.check(display_num_errors=True) File "C:\Program Files\Python36\lib\site-packages\django\core\management\base.py", line 423, in check databases=databases, File "C:\Program Files\Python36\lib\site-packages\django\core\checks\registry.py", line 76, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config return check_resolver(resolver) File "C:\Program Files\Python36\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver return check_method() File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 416, in check for pattern in self.url_patterns: File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 602, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "C:\Program Files\Python36\lib\site-packages\django\utils\functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Program Files\Python36\lib\site-packages\django\urls\resolvers.py", line 595, in urlconf_module return import_module(self.urlconf_name) File "C:\Program Files\Python36\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 978, in _gcd_import File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in 
exec_module File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed File "C:\mysite\site\miniproject\miniproject\urls.py", line 20, in <module> url(r'^admin/', include(admin.site.urls)), File "C:\Program Files\Python36\lib\site-packages\django\urls\conf.py", line 27, in include 'provide the namespace argument to include() instead.' % len(arg) django.core.exceptions.ImproperlyConfigured: Passing a 3-tuple to include() is not supported. Pass a 2-tuple containing the list of patterns and app_name, and provide the namespace argument to include() instead. Here is a link to the chapter where I worked: https://pocoz.gitbooks.io/django-v-primerah/content/sozdanie-shablonov-dlia-view.html , Most likely I made a mistake somewhere. Next, I will show you the contents of the files: base.html: {% load static files %} <!DOCTYPE html> <html> <head> <title>{% block title %}{% endblock %}</title> <link href="{% static "css/blog.css" %}" rel="stylesheet"> </head> <body> <div id="content"> {% block content %} {%endblock%} </div> <div id="sidebar"> <h2>My blog</h2> <p>This is my blog.</p> </div> </body> </html> list.html: {% extends "blog/base.html" %} {% block title %}My Blog{% endblock %} {% block content %} <h1>My Blog</h1> {% for post in posts %} <h2> <a href="{{ post.get_absolute_url }}">{{ post.title }}</a> </h2> <p class="date"> Published {{ post.publish }} by {{ post.author }} </p> {{ post.body|truncatewords:30|linebreaks }} {% endfor %} {%endblock%} detail.html: {% extends "blog/base.html" %} {% block title %}{{ post. title }}{% endblock %} {% block content %} <h1>{{post.title}}</h1> <p class="date"> Published {{ post.publish }} by {{ post.author }} </p> {{ post.body|linebreaks}} {%endblock%} C:\mysite\site\miniproject\blog\views.py: from django.shortcuts import render, get_object_or_404 from .models import Post def post_list(request): posts = Post.published.all() return render(request, 'blog/post/list.html', {'posts': posts}) def post_detail(request, year, month, day, post): post = get_object_or_404(Post, slug=post, status='published', publish_year=year, publish__month=month, publish_day=day) return render(request,'blog/post/detail.html', {'post': post}) # Create your views here. C:\mysite\site\miniproject\blog\urls.py: from django.conf.urls import url from. import views urlpatterns = [ # post views url(r'^$', views.post_list, name='post_list'), url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/'\ r'(?P<post>[-\w]+)/$', views.post_detail, name='post_detail'), ] C:\mysite\site\miniproject\miniproject\urls.py: """miniproject URL Configuration The `urlpatterns` list routes URLs to views. For more information please see: https://docs.djangoproject.com/en/3.2/topics/http/urls/ Examples: Function views 1. Add an import: from my_app import views 2. Add a URL to urlpatterns: path('', views.home, name='home') class-based views 1. Add an import: from other_app.views import Home 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') Including another URLconf 1. Import the include() function: from django.urls import include, path 2. 
Add a URL to urlpatterns: path('blog/', include('blog.urls')) """ from django.conf.urls import include, url from django.contrib import admin urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^blog/', include('blog.urls', namespace='blog', app_name='blog')), ] C:\mysite\site\miniproject\blog\models.py: from django.db import models from django.utils import timezone from django.contrib.auth.models import User from django.shortcuts import reverse class Post(models.Model): def get_absolute_url(self): return reverse('blog:post_detail', args=[self.publish.year, self.publish.strftime('%m'), self.publish.strftime('%d'), self.slug]) class Post(models.Model): STATUS_CHOICES = ( ('draft', 'Draft'), ('published', 'Published'), ) title = models.CharField(max_length=250) slug = models.SlugField(max_length=250, unique_for_date='publish') author = models.ForeignKey(User, on_delete=models.CASCADE, related_name='blog_posts') body = models.TextField() publish = models.DateTimeField(default=timezone.now) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='draft') class Meta: ordering = ('-publish',) def __str__(self): return self.title # Create your models here. I updated the Python libraries and carefully checked everything, read the Django documentation and nothing helped; maybe I inserted the Python code incorrectly.
[ "The error lies in your urls file\napp_name is passed as an arg instead of a kwarg\n url(r'^blog/', include('blog.urls',\n namespace='blog',\n app_name='blog')), \n\nThis should fix the issue\n url(r'^blog/', include('blog.urls', \"blog\", namespace='blog'),\n\nhere is the implemtation behind include method\ndef include(arg, namespace=None):\n app_name = None\n if isinstance(arg, tuple):\n # Callable returning a namespace hint.\n try:\n urlconf_module, app_name = arg\n except ValueError:\n if namespace:\n raise ImproperlyConfigured(\n \"Cannot override the namespace for a dynamic module that \"\n \"provides a namespace.\"\n )\n raise ImproperlyConfigured(\n \"Passing a %d-tuple to include() is not supported. Pass a \"\n \"2-tuple containing the list of patterns and app_name, and \"\n \"provide the namespace argument to include() instead.\" % len(arg)\n )\n else:\n # No namespace hint - use manually provided namespace.\n urlconf_module = arg\n ...\n\n", "Change the import line to from django.shortcuts import reverse.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "django_templates", "python", "python_3.x", "web" ]
stackoverflow_0074645823_django_django_templates_python_python_3.x_web.txt
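A minimal sketch of a corrected project urls.py that pulls together both fixes implied above (the admin 3-tuple error in the traceback and the blog include). Django 2.0+ path() syntax is an assumption here; you could equally set app_name = 'blog' inside blog/urls.py and call include('blog.urls').

from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    # admin.site.urls is passed directly; wrapping it in include() raises
    # the "Passing a 3-tuple to include()" error seen in the traceback.
    path('admin/', admin.site.urls),
    # The 2-tuple carries (urlconf, app_name); namespace stays a kwarg.
    path('blog/', include(('blog.urls', 'blog'), namespace='blog')),
]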
Q: Neo4j Python Driver Using Unwind with a list of dictionaries I'm trying to batch merge to create multiple nodes. Using the below code, def test_batches(tx,user_batch): result= tx.run(f"Unwind {user_batch} as user\ MERGE (n:User {{id: user.id, name: user.name, username: user.username }})") However, I am getting this error. Note I'm passing in a list of dictionaries. CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input '[': expected "+" or "-" (line 1, column 8 (offset: 7)) "Unwind [{'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}] as user MERGE (n:User {id: user.id, name: user.name, username: user.username })" ^} I have no idea why this is happening; any help is greatly appreciated. A: Below is working code using UNWIND for a list of dictionaries. Please note that it is recommended to pass the value as a parameter rather than working on the value string in the query. from neo4j import GraphDatabase uri = "neo4j://localhost:7687" driver = GraphDatabase.driver(uri, auth=("neo4j", "awesomepassword")) def test_batches(tx, user_batch): tx.run("UNWIND $user_batch as user \ MERGE (n:User {id: user.id, name: user.name, username: user.username})", user_batch=user_batch) with driver.session() as session: user_batch = [ {'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}] session.write_transaction(test_batches, user_batch) driver.close() sample result: A: You may need to adjust the syntax of the Cypher query to conform to the Neo4j Cypher query language specification. For example, the MERGE clause can use the ON CREATE and ON MATCH syntax to specify the actions that should be taken if the node already exists or not. Here is an example of how the Cypher query can be rewritten to use the ON CREATE and ON MATCH syntax: def test_batches(tx, user_batch): result = tx.run("""UNWIND $user_batch AS user MERGE (n:User {id: user.id, name: user.name, username: user.username}) ON CREATE SET n = user ON MATCH SET n += user""", user_batch=user_batch)
Neo4j Python Driver Using Unwind with a list of dictionaries
I'm trying to batch merge to create multiple nodes. Using the below code, def test_batches(tx,user_batch): result= tx.run(f"Unwind {user_batch} as user\ MERGE (n:User {{id: user.id, name: user.name, username: user.username }})") However, I am getting this error. Note I'm passing in a list of dictionaries. CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input '[': expected "+" or "-" (line 1, column 8 (offset: 7)) "Unwind [{'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}] as user MERGE (n:User {id: user.id, name: user.name, username: user.username })" ^} I have no idea why this is happening; any help is greatly appreciated.
[ "Below is a working code on using UNWIND for a list of dictionaries. Please note that is it recommended to pass the value as a parameter rather than working on the value string in query.\nfrom neo4j import GraphDatabase\n\nuri = \"neo4j://localhost:7687\"\ndriver = GraphDatabase.driver(uri, auth=(\"neo4j\", \"awesomepassword\"))\n\ndef test_batches(tx, user_batch):\n tx.run(\"UNWIND $user_batch as user \\\n MERGE (n:User {id: user.id, name: user.name, username: user.username})\", user_batch=user_batch)\n \nwith driver.session() as session:\n user_batch = [\n {'id': 1596859520977969156, 'name': 'Bigspuds', 'username': 'bigspuds777'}, \n {'id': 1596860505662144513, 'name': 'JOHN VIEIRA', 'username': 'JOHNVIE67080352'}, \n {'id': 1596860610905448449, 'name': 'biru nkumat', 'username': 'NkumatB'}, \n {'id': 1513497734711738374, 'name': 'elfiranda Hakim', 'username': 'Kidonk182'}, \n {'id': 1596836234860859392, 'name': 'Ecat Miao', 'username': 'sylvanasMa'}]\n session.write_transaction(test_batches, user_batch) \n\ndriver.close()\n\nsample result:\n\n", "You may need to adjust the syntax of the Cypher query to conform to the Neo4j Cypher query language specification. For example, the MERGE clause should use the ON CREATE and ON MATCH syntax to specify the actions that should be taken if the node already exists or not.\nHere is an example of how the Cypher query can be rewritten to use the ON CREATE and ON MATCH syntax:\ndef test_batches(tx,user_batch):\n result = tx.run(f\"UNWIND {user_batch} as user\n MERGE (n:User {{id: user.id, name: user.name, username: user.username }})\n ON CREATE SET n = user\n ON MATCH SET n += user\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "neo4j", "neo4j_python_driver", "python" ]
stackoverflow_0074659436_neo4j_neo4j_python_driver_python.txt
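For larger inputs, a hedged sketch of slicing the list into chunks before running the same parameterized UNWIND query; the chunk size and the plain session.run() call are illustrative choices, not something the answers above prescribe.

def run_in_batches(driver, users, batch_size=1000):
    query = ("UNWIND $user_batch AS user "
             "MERGE (n:User {id: user.id, name: user.name, username: user.username})")
    with driver.session() as session:
        # Keep each transaction small by sending fixed-size slices.
        for i in range(0, len(users), batch_size):
            session.run(query, user_batch=users[i:i + batch_size])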
Q: How to get reference table fields with django model query When I am trying to fetch the foreign key table using the Django model, I am unable to get the referenced table details. I have two models TblVersion and TblProject defined below class TblVersion(models.Model): version_id = models.AutoField(primary_key=True) project = models.ForeignKey(TblProject, models.DO_NOTHING) version_major = models.PositiveSmallIntegerField() version_minor = models.PositiveSmallIntegerField() class Meta: managed = False db_table = 'tbl_version' class TblProject(models.Model): project_id = models.AutoField(primary_key=True) project_name = models.CharField(max_length=32) class Meta: managed = False db_table = 'tbl_project' My current code implementation: result= TblVersion.objects.all().select_related() data = serializers.serialize('json', result) print(data) Code Result: `[{"model": "CCM_API.tblversion", "pk": 1, "fields": {"project": 1, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 2, "fields": {"project": 2, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 3, "fields": {"project": 2, "version_major": 1000, "version_minor": 2}}]` The code output lacks the foreign key fields (Project Name). I want a list of version numbers with their respective projects like this. | Version Id | Major Version | Minor Version | Project Id | Project Name| | -------- | -------- |-------- |-------- |-------- | | 1 | 1000 |1 | 1| PROJ_1 | | 2 | 1000 |1 | 2| PROJ_2 | | 3 | 1000 |2 | 1| PROJ_1 | A: The select_related method accepts the names of related fields that point to another model result = TblVersion.objects.all().select_related("project") Update To make those related fields serializable you can list the values as result = TblVersion.objects.all().select_related("project").values("version_id", ..., "project__project_id", "project__project_name")
How to get reference table fields with django model query
When I am trying to fetch the foreign key table using the Django model, I am unable to get the referenced table details. I have two models TblVersion and TblProject defined below class TblVersion(models.Model): version_id = models.AutoField(primary_key=True) project = models.ForeignKey(TblProject, models.DO_NOTHING) version_major = models.PositiveSmallIntegerField() version_minor = models.PositiveSmallIntegerField() class Meta: managed = False db_table = 'tbl_version' class TblProject(models.Model): project_id = models.AutoField(primary_key=True) project_name = models.CharField(max_length=32) class Meta: managed = False db_table = 'tbl_project' My current code implementation: result= TblVersion.objects.all().select_related() data = serializers.serialize('json', result) print(data) Code Result: `[{"model": "CCM_API.tblversion", "pk": 1, "fields": {"project": 1, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 2, "fields": {"project": 2, "version_major": 1000, "version_minor": 0}}, {"model": "CCM_API.tblversion", "pk": 3, "fields": {"project": 2, "version_major": 1000, "version_minor": 2}}]` The code output lacks the foreign key fields (Project Name). I want a list of version numbers with their respective projects like this. | Version Id | Major Version | Minor Version | Project Id | Project Name| | -------- | -------- |-------- |-------- |-------- | | 1 | 1000 |1 | 1| PROJ_1 | | 2 | 1000 |1 | 2| PROJ_2 | | 3 | 1000 |2 | 1| PROJ_1 |
[ "select_related method accepts an arg of fields that relates to an other model\nresult= TblVersion.objects.all().select_related(\"product\")\nUpdate\nTo add those related field to be serializable u can list the values as\nresult = TblVersion.objects.all().select_related(\"product\").values(\"id\", \"version_id\", ..., \"product__id\", \"product__name\")\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_orm", "django_views", "python" ]
stackoverflow_0074660813_django_django_models_django_orm_django_views_python.txt
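A sketch of getting the joined columns into JSON; serializers.serialize() only emits local fields, so this goes through values() instead. The field list follows the models in the question, and json.dumps() is an assumption about the desired output format.

import json

rows = list(
    TblVersion.objects
    .values('version_id', 'version_major', 'version_minor',
            'project__project_id', 'project__project_name')
)
# values() performs the SQL join itself, so select_related() is not required here.
print(json.dumps(rows))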
Q: Mock class in Python with decorator patch I would like to patch a class in Python in unit testing. The main code is this (mymath.py): class MyMath: def my_add(self, a, b): return a + b def add_three_and_two(): my_math = MyMath() return my_math.my_add(3, 2) The test class is this: import unittest from unittest.mock import patch import mymath class TestMyMath(unittest.TestCase): @patch('mymath.MyMath') def test_add_three_and_two(self, mymath_mock): mymath_mock.my_add.return_value = 5 result = mymath.add_three_and_two() mymath_mock.my_add.assert_called_once_with(3, 2) self.assertEqual(5, result) unittest.main() I am getting the following error: AssertionError: Expected 'my_add' to be called once. Called 0 times. The last assert would also fail: AssertionError: 5 != <MagicMock name='MyMath().my_add()' id='3006283127328'> I would expect that the above test passes. What did I do wrong? UPDATE: Restrictions: I would rather not change the tested part, if possible. (I am curious if it is even possible, and this is the point of the question.) If not possible, then I want the least amount of change in the part being tested. Especially I want to keep the my_add() function non-static. A: Your code is almost there, some small changes and you'll be okay: my_add should be a class method since self does not really play a role here. If my_add is an instance method, then it will be harder to trace the calls, since your test will track the instance signature, not the class signature Since you are patching, not stubbing, you should use the "real thing", except when mocking the return value. Here's what that looks like in your code: class MyMath: @classmethod def my_add(cls, a, b): return a + b def add_three_and_two(): return MyMath.my_add(3, 2) Now, the test: import unittest from unittest.mock import patch, MagicMock import mymath class TestMyMath(unittest.TestCase): @patch('mymath.MyMath') def test_add_three_and_two(self, mymath_mock): # Mock what `mymath` would return mymath_mock.my_add.return_value = 5 # We are patching, not stubbing, so use the real thing result = mymath.add_three_and_two() mymath.MyMath.my_add.assert_called_once_with(3, 2) self.assertEqual(5, result) unittest.main() This should now work. A: Instead of patching the entire class, just patch the function. class TestMyMath(unittest.TestCase): @patch.object(mymath.MyMath, 'my_add') def test_add_three_and_two(self, m): m.return_value = 5 result = mymath.add_three_and_two() m.assert_called_once_with(3, 2) self.assertEqual(5, result) I think the original problem is that my_math.my_add produces a new mock object every time it is used; you configured one Mock's return_value attribute, but then checked if another Mock instance was called. At the very least, using patch.object ensures you are disturbing your original code as little as possible.
Mock class in Python with decorator patch
I would like to patch a class in Python in unit testing. The main code is this (mymath.py): class MyMath: def my_add(self, a, b): return a + b def add_three_and_two(): my_math = MyMath() return my_math.my_add(3, 2) The test class is this: import unittest from unittest.mock import patch import mymath class TestMyMath(unittest.TestCase): @patch('mymath.MyMath') def test_add_three_and_two(self, mymath_mock): mymath_mock.my_add.return_value = 5 result = mymath.add_three_and_two() mymath_mock.my_add.assert_called_once_with(3, 2) self.assertEqual(5, result) unittest.main() I am getting the following error: AssertionError: Expected 'my_add' to be called once. Called 0 times. The last assert would also fail: AssertionError: 5 != <MagicMock name='MyMath().my_add()' id='3006283127328'> I would expect that the above test passes. What did I do wrong? UPDATE: Restrictions: I would rather not change the tested part, if possible. (I am curious if it is even possible, and this is the point of the question.) If not possible, then I want the least amount of change in the part being tested. Especially I want to keep the my_add() function non-static.
[ "Your code is almost there, some small changes and you'll be okay:\n\nmy_add should be a class method since self does not really play a role here.\nIf my_add is an instance method, then it will be harder to trace the calls, since your test will track the instance signature, not the class sig\nSince you are are patching, not stubbing, you should use the \"real thing\", except when mocking the return value.\n\nHere's what that looks like in your code:\nclass MyMath:\n\n @classmethod\n def my_add(cls, a, b):\n return a + b\n\ndef add_three_and_two():\n return MyMath.my_add(3, 2)\n\n\nNow, the test:\nimport unittest\nfrom unittest.mock import patch, MagicMock\nimport mymath\n\n\nclass TestMyMath(unittest.TestCase):\n\n @patch('mymath.MyMath')\n def test_add_three_and_two(self, mymath_mock):\n\n # Mock what `mymath` would return \n mymath_mock.my_add.return_value = 5\n\n # We are patching, not stubbing, so use the real thing\n result = mymath.add_three_and_two()\n mymath.MyMath.my_add.assert_called_once_with(3, 2)\n self.assertEqual(5, result)\n\n\nunittest.main()\n\nThis should now work.\n", "Instead of patching the entire class, just patch the function.\nclass TestMyMath(unittest.TestCase):\n @patch.object(mymath.MyMath, 'my_add')\n def test_add_three_and_two(self, m):\n m.return_value = 5\n\n result = mymath.add_three_and_two()\n\n m.assert_called_once_with(3, 2)\n self.assertEqual(5, result)\n\nI think the original problems is that my_math.my_add produces a new mock object every time it is used; you configured one Mock's return_value attribute, but then checked if another Mock instance was called. At the very least, using patch.object ensures you are disturbing your original code as little as possible.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "python_unittest", "python_unittest.mock" ]
stackoverflow_0074525368_python_python_unittest_python_unittest.mock.txt
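If you keep my_add() as an instance method (the asker's stated restriction), you can still patch the class and configure the instance the mocked constructor returns; mymath_mock.return_value is the standard unittest.mock attribute for that instance. A sketch, written as a method of the same TestCase as in the question:

@patch('mymath.MyMath')
def test_add_three_and_two(self, mymath_mock):
    # MyMath() inside add_three_and_two() returns this mock instance.
    instance = mymath_mock.return_value
    instance.my_add.return_value = 5

    result = mymath.add_three_and_two()

    instance.my_add.assert_called_once_with(3, 2)
    self.assertEqual(5, result)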
Q: How to use AWS Sagemaker with newer version of Huggingface Estimator? When trying to use Huggingface estimator on sagemaker, Run training on Amazon SageMaker e.g. # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.17', pytorch_version='1.10', py_version='py38', hyperparameters = hyperparameters ) When I tried to increase the version to transformers_version='4.24', it throws an error where the maximum version supported is 4.17. How to use AWS Sagemaker with newer version of Huggingface Estimator? There's a note on using newer version for inference on https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/9 but it looks like the way to use it for training with the Huggingface estimator is kind of complicated https://discuss.huggingface.co/t/huggingface-pytorch-versions-on-sagemaker/26315/5?u=alvations and it's not confirmed that the complicated steps can work. A: You can use the Pytorch estimator and in your source directory place a requirements.txt with Transformers added to it. This will ensure 2 things You can use a higher version of pytorch 1.12 (current) compared to 1.10.2 in the huggingface estimator. Install a newer version of the HuggingFace Transformers library. To achieve this you need to structure your source directory like this scripts /train.py /requirements.txt and pass the source_dir attribute to the pytorch estimator pt_estimator = PyTorch( entry_point="train.py", source_dir="scripts", role=sagemaker.get_execution_role(), A: @alvas, Amazon SageMaker is a managed service, which means AWS builds and operates the tooling for you, saving your time. In your case, the tooling of interest is an integration of a new version of HuggingFace Transformers library with SageMaker that should be developed, tested and deployed to production. So, this integration is naturally expected to be one or a few versions behind the upstream library. But as a benefit, you always get a version of Transformers that is proved to be stable and compatible with SageMaker. In your case, you want to try the latest version of Transformers in SageMaker, potentially sacrificing the stability and compatibility (v4.24 was released just less than a month ago). As you correctly mentioned, this workflow can be "kind of complicated" and "not confirmed that the complicated steps can work". @Arun Lokanatha suggested the easiest way to try the new version. Indeed, Transformers work with regular PyTorch estimator, but instead of high-level HuggingFace estimator API you now need to use the lower-level PyTorch estimator API. The above-mentioned requirements.txt will look like this: transformers==4.24.0 As a drawback, you need to do a little bit more work by yourself, e.g. to figure out what is the minimal version of PyTorch/CUDA libraries required etc. And you're responsible for testing, securing, and optimizing the integration as appropriate for production grade use, potentially losing some benefits from utilising SageMaker at its full capability. If you finally decide to use HuggingFace high-level estimator in production after my explanation, I recommend taking at least these actions: See the current list of supported versions in the latest version of the SageMaker Python SDK directly in its source code (as of today it's v4.17.0). Create or monitor an existing issue asking for support for a new version in SageMaker Python SDK, e.g. #3456 for support for Transformers v4.24.0.
I hope this answer is helpful. Ivan A: You can achieve this by Step-1 : Create a custom ECR Image with required hf version (https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi.html) Step-2 : Develop your Train.py Step-3 : Pass train.py and the new ecr image uri to sagemaker.estimator. (https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) A: To use a newer version of the HuggingFace Estimator on Amazon SageMaker, you can use the transformers_version parameter in the HuggingFace() constructor to specify the version of the HuggingFace library that you want to use. However, the maximum supported version may be limited by the version of the PyTorch library that is installed on the SageMaker instances that you are using for training. For example, if you try to use a newer version of the HuggingFace library than the one installed on the SageMaker instances, you may see an error similar to the following: ImportError: Unable to import 'transformers' To use a newer version of the HuggingFace library on SageMaker, you can do the following: Use the pytorch_version and py_version parameters in the HuggingFace() constructor to specify the version of PyTorch that you want to use. This will ensure that the correct version of PyTorch is installed on the SageMaker instances that you are using for training. Use the requirements.txt file in the source_dir parameter to specify any additional dependencies that are required by the newer version of the HuggingFace library. This will ensure that these dependencies are installed on the SageMaker instances along with the correct version of PyTorch. Here is an example of how you can use these parameters to specify the version of the HuggingFace library and its dependencies on SageMaker: # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.24', pytorch_version='1
How to use AWS Sagemaker with newer version of Huggingface Estimator?
When trying to use Huggingface estimator on sagemaker, Run training on Amazon SageMaker e.g. # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.17', pytorch_version='1.10', py_version='py38', hyperparameters = hyperparameters ) When I tried to increase the version to transformers_version='4.24', it throws an error where the maximum version supported is 4.17. How to use AWS Sagemaker with newer version of Huggingface Estimator? There's a note on using newer version for inference on https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/9 but it looks like the way to use it for training with the Huggingface estimator is kind of complicated https://discuss.huggingface.co/t/huggingface-pytorch-versions-on-sagemaker/26315/5?u=alvations and it's not confirmed that the complicated steps can work.
[ "You can use the Pytorch estimator and in your source directory place a requirements.txt with Transformers added to it. This will ensure 2 things\n\nYou can use higher version of pytorch 1.12 (current) compared to 1.10.2 in the huggingface estimator.\nInstall new version of HuggingFace Transformers library.\n\nTo achieve this you need to structure your source directory like this\nscripts\n/train.py\n/requirements.txt\nand pass the source_dir attribute to the pytorch estimator\npt_estimator = PyTorch(\nentry_point=\"train.py\",\nsource_dir=\"scripts\",\nrole=sagemaker.get_execution_role(),\n\n", "@alvas,\nAmazon SageMaker is a managed service, which means AWS builds and operates the tooling for you, saving your time. In your case, the tooling of interest is an integration of a new version of HuggingFace Transformers library with SageMaker that should be developed, tested and deployed to production. So, this integration is naturally expected to be one or few versions behind the upstream library. But as a benefit, you always get a version of Transformers that is proved to be stable and compatible with SageMaker.\nIn your case, you want to try the latest version of Transformers in SageMaker, potentially sacrificing the stability and compatibility (v4.24 was released just less than a month ago). As you correctly mentioned, this workflow can be \"kind of complicated\" and \"not confirmed that the complicated steps can work\". @Arun Lokanatha suggested the easiest way to try the new version. Indeed, Transformers work with regular PyTorch estimator, but instead of high-level HuggingFace estimator API you now need to use the lower-level PyTorch estimator API. The above-mentioned requirements.txt will look like this:\ntransformers==4.24.0\n\nAs a drawback, you need to do a little bit more work by yourself, e.g. to figure out what is the minimal version of PyTorch/CUDA libraries required etc. And you're responsible for testing, securing, and optimizing the integration as appropriate for production grade use, potentially loosing some benefits from utilising SageMaker at its full capability.\nIf you finally decide to use HuggingFace high-level estimator in production after my explanation, I recommend to take at least these actions:\n\nSee the current list of supported versions in the latest version SageMaker Python SDK directly in its source code (at of today it's v4.17.0).\nCreate or monitor an existing issue asking for a new version support in SageMaker Python SDK, e.g. #3456 for support for Transformers v4.24.0.\n\nI hope this answer is helpful.\nIvan\n", "You can achieve this by\nStep-1 : Create a custom ECR Image with required hf version (https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi.html)\nStep-2 : Develop your Train.py\nStep-3 : : Pass train.py and the new ecr image uri to sagemaker.estimator.\n(https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html)\n", "To use a newer version of the HuggingFace Estimator on Amazon SageMaker, you can use the transformers_version parameter in the HuggingFace() constructor to specify the version of the HuggingFace library that you want to use. 
However, the maximum supported version may be limited by the version of the PyTorch library that is installed on the SageMaker instances that you are using for training.\nFor example, if you try to use a newer version of the HuggingFace library than the one installed on the SageMaker instances, you may see an error similar to the following:\nImportError: Unable to import 'transformers'\n\nTo use a newer version of the HuggingFace library on SageMaker, you can do the following:\nUse the pytorch_version and py_version parameters in the HuggingFace() constructor to specify the version of PyTorch that you want to use. This will ensure that the correct version of PyTorch is installed on the SageMaker instances that you are using for training.\nUse the requirements.txt file in the source_dir parameter to specify any additional dependencies that are required by the newer version of the HuggingFace library. This will ensure that these dependencies are installed on the SageMaker instances along with the correct version of PyTorch.\nHere is an example of how you can use these parameters to specify the version of the HuggingFace library and its dependencies on SageMaker:\n# create the Estimator\nhuggingface_estimator = HuggingFace(\n entry_point='train.py',\n source_dir='./scripts',\n instance_type='ml.p3.2xlarge',\n instance_count=1,\n role=role,\n transformers_version='4.24',\n pytorch_version='1\n\n" ]
[ 2, 2, 0, 0 ]
[]
[]
[ "amazon_sagemaker", "docker", "huggingface", "python", "pytorch" ]
stackoverflow_0074548143_amazon_sagemaker_docker_huggingface_python_pytorch.txt
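A hedged sketch of the PyTorch-estimator route from the first two answers above; the framework/Python versions and instance settings are examples rather than confirmed supported combinations, and scripts/ is assumed to hold both train.py and the requirements.txt pinning transformers.

import sagemaker
from sagemaker.pytorch import PyTorch

pt_estimator = PyTorch(
    entry_point='train.py',
    source_dir='scripts',                 # contains train.py and requirements.txt
    role=sagemaker.get_execution_role(),
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    framework_version='1.12',             # example version, check SDK support
    py_version='py38',
    hyperparameters=hyperparameters,      # assumed defined as in the question
)
pt_estimator.fit()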
Q: Input from user to print out a certain instance variable in python I have created a class with programs: class Program: def __init__(self,channel,start, end, name, viewers, percentage): self.channel = channel self.start = start self.end = end self.name = name self.viewers = viewers Channel 1, start:16.00 end:17.45 viewers: 100 name: Matinee:The kiss on the cross Channel 1, start:17.45 end:17.50 viewers: 45 name: The stock market today Channel 2, start:16.45 end:17.50 viewers: 30 name: News Channel 4, start:17.25 end:17.50 viewers: 10 name: Home building Channel 5, start:15.45 end:16.50 viewers: 28 name: Reality I also have created a nested list with the programs: [[1,16:00, 17,45, 100, 'Matinee: The kiss on the cross'],[1,17:45, 17,50, 45,'The stock market today'],[2,16:45, 17,50, 30,'News'], [4,17:25, 17,50, 10,'Home building'],[5,15:45, 16,50, 28,'Reality'] Now we want the user to be able to write the name of a program: News The result should be: News 19.45-17.50 has 30 viewers I thought about how you could incorporate a method to keep the program from crashing if the input is invalid/ not an instance variable I have tried this: Check_input(): print('Enter the name of the desired program:') while True: #Continue asking for valid input. try: name = input('>') if name == #is an instance? return name else: print('Enter a program that is included in the schedule:') #input out of range except ValueError: print('Write a word!') #Word or letter as input print('Try again') I wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?) I also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the "memory" Do you have any suggestions how to combat the problem? All help is much appreciated! A: I wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?) Well if all your programs have a unique name then the easiest approach would probably be to store them in a dictionary instead of a nested list like: programs = { "News": Program("2", "16:45", "17:50", "News", "30", "60"), "Reality": <Initialize Program class object for this program>, ... } Then you could just use the get dictionary method (it allows you to return a specific value if the key does not exist) to see if the asked program exists: name = input('>') program = programs.get(name, None) if program: print(program) else: # raise an exception or handle however you prefer And if your programs don't have a unique name then you will have to iterate over the list. In which case I would probably return a list of all existing objects that have that name. A for loop would work just fine, but I would switch the nested list with a list of Program objects since you already have the class. I also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the "memory" Do you have any suggestions how to combat the problem.
I would say that the most elegant solution is to override the __str__ method of your Program class so that you can just call print(program) and write out the right output. For example: class Program: def __init__(self,channel,start, end, name, viewers, percentage): self.channel = channel self.start = start self.end = end self.name = name self.viewers = viewers def __str__(self): return self.name + " " + self.start + "-" + self.end + " has " + self.viewers + " viewers" should print out News 19.45-17.50 has 30 viewers when you call it like: program = programs.get(name, None) if program: print(program)
Input from user to print out a certain instance variable in python
I have created a class with programs: class Program: def __init__(self,channel,start, end, name, viewers, percentage): self.channel = channel self.start = start self.end = end self.name = name self.viewers = viewers Channel 1, start:16.00 end:17.45 viewers: 100 name: Matinee:The kiss on the cross Channel 1, start:17.45 end:17.50 viewers: 45 name: The stock market today Channel 2, start:16.45 end:17.50 viewers: 30 name: News Channel 4, start:17.25 end:17.50 viewers: 10 name: Home building Channel 5, start:15.45 end:16.50 viewers: 28 name: Reality I also have created a nested list with the programs: [[1,16:00, 17,45, 100, 'Matinee: The kiss on the cross'],[1,17:45, 17,50, 45,'The stock market today'],[2,16:45, 17,50, 30,'News'], [4,17:25, 17,50, 10,'Home building'],[5,15:45, 16,50, 28,'Reality'] Now we want the user to be able to write the name of a program: News The result should be: News 19.45-17.50 has 30 viewers I thought about how you could incorporate a method to keep the program from crashing if the input is invalid/ not an instance variable I have tried this: Check_input(): print('Enter the name of the desired program:') while True: #Continue asking for valid input. try: name = input('>') if name == #is an instance? return name else: print('Enter a program that is included in the schedule:') #input out of range except ValueError: print('Write a word!') #Word or letter as input print('Try again') I wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?) I also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the "memory" Do you have any suggestions how to combat the problem? All help is much appreciated!
[ "\nI wonder if I should separate all the program-names from the nested list and check if the user enters a name in the list as input? (Maybe by creating a for-loop to iterate over?)\n\nWell if all your programs have a unique name then the easiest approach would probably be to store them in a dictionary instead of a nested list like:\nprograms = {\n \"News\": Program(\"2\", \"16:45\", \"17:50\", \"News\", \"30\", \"60\"),\n \"Reality\": <Initialize Program class object for this program>,\n ...\n}\n\nThen you could just use the get dictionary method (it allows you to return a specific value if the key does not exist) to see if the asked program exists:\nname = input('>') \nprogram = programs.get(name, None)\nif program:\n print(program)\nelse:\n # raise an exception or handle however you prefer\n\nAnd if your programs don't have a unique name then you will have to iterate over the list. In which case I would probably return a list of all existing objects that have that name. A for loop would work just fine, but I would switch the nested list with a list of Program objects since you already have the class.\n\nI also have a question regarding how to print out the selected program when the user enters the correct name? I understand how to rearrange them into the correct order to create the sentence. However, I don't know how to access the correct program in the \"memory\" Do you have any suggestions how to combat the problem.\n\nI would say that the most elegant solution is to override the __str__ method of your Program class so that you can just call print(program) and write out the right output. For example:\nclass Program:\n def __init__(self,channel,start, end, name, viewers, percentage):\n self.channel = channel\n self.start = start\n self.end = end\n self.name = name\n self.viewers = viewers \n \n def __str__(self):\n return self.name + \" \" + self.start + \"-\" + self.end + \" has \" + self.viewers + \" viewers\"\n\nshould print out\n\nNews 19.45-17.50 has 30 viewers\n\nwhen you call it like:\nprogram = programs.get(name, None)\nif program:\n print(program)\n\n" ]
[ 1 ]
[]
[]
[ "class", "input", "list", "python", "try_except" ]
stackoverflow_0074660715_class_input_list_python_try_except.txt
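Tying the answer's dictionary lookup back to the question's input loop, a minimal sketch; programs is assumed to be the dict built in the answer above, and the try/except is dropped because input() already returns a string.

def check_input(programs):
    print('Enter the name of the desired program:')
    while True:
        name = input('>')
        program = programs.get(name)   # None when the show is not scheduled
        if program is not None:
            return program
        print('Enter a program that is included in the schedule:')

print(check_input(programs))           # prints via Program.__str__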
Q: I'm looking forward to install the hunspell package using pip, but it throws the following error: Collecting hunspell Using cached hunspell-0.5.5.tar.gz (34 kB) Building wheels for collected packages: hunspell Building wheel for hunspell (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\shikhar\AppData\Local\Temp\pip-wheel-5grngp_q' cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd Complete output (12 lines): running bdist_wheel running build running build_ext building 'hunspell' extension creating build creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC -IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' hunspell.cpp hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2 ERROR: Failed building wheel for hunspell Running setup.py clean for hunspell Failed to build hunspell Installing collected packages: hunspell Running setup.py install for hunspell ... 
error ERROR: Command errored out with exit status 1: command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell' cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd Complete output (12 lines): running install running build running build_ext building 'hunspell' extension creating build creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC -IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' hunspell.cpp hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2 ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell' Check the logs for full command output. A: I tried an older version and successfully installed. 
pip install hunspell==0.3.4 A: Collecting cyhunspell Using cached CyHunspell-1.3.4.tar.gz (2.7 MB) Preparing metadata (setup.py) ... done Requirement already satisfied: cacheman>=2.0.6 in c:\users\abdul rehman\appdata\local\programs\python\python310\lib\site-packages (from cyhunspell) (2.0.8) Requirement already satisfied: future>=0.16.0 in c:\users\abdul rehman\appdata\local\programs\python\python310\lib\site-packages (from cacheman>=2.0.6->cyhunspell) (0.18.2) Requirement already satisfied: psutil>=2.1.0 in c:\users\abdul rehman\appdata\roaming\python\python310\site-packages (from cacheman>=2.0.6->cyhunspell) (5.9.4) Requirement already satisfied: six>=1.10.0 in c:\users\abdul rehman\appdata\roaming\python\python310\site-packages (from cacheman>=2.0.6->cyhunspell) (1.16.0) Building wheels for collected packages: cyhunspell Building wheel for cyhunspell (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [40 lines of output] C:\Users\Abdul Rehman\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead warnings.warn( running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-cpython-310 creating build\lib.win-amd64-cpython-310\hunspell copying hunspell\platform.py -> build\lib.win-amd64-cpython-310\hunspell copying hunspell_init_.py -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.pxd -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\thread.pxd -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.pyx -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\thread.hpp -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.cpp -> build\lib.win-amd64-cpython-310\hunspell creating build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_AU.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_CA.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_GB.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_NZ.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_US.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_ZA.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\test.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_AU.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_CA.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_GB.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_NZ.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_US.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_ZA.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\test.dic -> build\lib.win-amd64-cpython-310\dictionaries creating build\lib.win-amd64-cpython-310\libs creating build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc11-x64.lib -> build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc11-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc14-x64.lib -> 
build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc14-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc running build_ext building 'hunspell.hunspell' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for cyhunspell Running setup.py clean for cyhunspell Failed to build cyhunspell Installing collected packages: cyhunspell Running setup.py install for cyhunspell ... error error: subprocess-exited-with-error × Running setup.py install for cyhunspell did not run successfully. │ exit code: 1 ╰─> [42 lines of output] C:\Users\Abdul Rehman\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead warnings.warn( running install C:\Users\Abdul Rehman\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build\lib.win-amd64-cpython-310 creating build\lib.win-amd64-cpython-310\hunspell copying hunspell\platform.py -> build\lib.win-amd64-cpython-310\hunspell copying hunspell_init_.py -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.pxd -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\thread.pxd -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.pyx -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\thread.hpp -> build\lib.win-amd64-cpython-310\hunspell copying hunspell\hunspell.cpp -> build\lib.win-amd64-cpython-310\hunspell creating build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_AU.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_CA.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_GB.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_NZ.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_US.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_ZA.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\test.aff -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_AU.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_CA.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_GB.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_NZ.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_US.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\en_ZA.dic -> build\lib.win-amd64-cpython-310\dictionaries copying dictionaries\test.dic -> build\lib.win-amd64-cpython-310\dictionaries creating build\lib.win-amd64-cpython-310\libs creating build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc11-x64.lib -> build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc11-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc14-x64.lib 
-> build\lib.win-amd64-cpython-310\libs\msvc copying libs\msvc\libhunspell-msvc14-x86.lib -> build\lib.win-amd64-cpython-310\libs\msvc running build_ext building 'hunspell.hunspell' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> cyhunspell note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.
I'm trying to install the hunspell package using pip, but it throws the following error:
Collecting hunspell Using cached hunspell-0.5.5.tar.gz (34 kB) Building wheels for collected packages: hunspell Building wheel for hunspell (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\shikhar\AppData\Local\Temp\pip-wheel-5grngp_q' cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd Complete output (12 lines): running bdist_wheel running build running build_ext building 'hunspell' extension creating build creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC -IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' hunspell.cpp hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2 ERROR: Failed building wheel for hunspell Running setup.py clean for hunspell Failed to build hunspell Installing collected packages: hunspell Running setup.py install for hunspell ... 
error ERROR: Command errored out with exit status 1: command: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell' cwd: C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd Complete output (12 lines): running install running build running build_ext building 'hunspell' extension creating build creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHUNSPELL_STATIC -IV:/hunspell-1.3.3/src/hunspell -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\include -IC:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt /EHsc /Tphunspell.cpp /Fobuild\temp.win-amd64-3.10\Release\hunspell.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' hunspell.cpp hunspell.cpp(20): fatal error C1083: Cannot open include file: 'hunspell.hxx': No such file or directory error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit code 2 ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"'; file='"'"'C:\Users\shikhar\AppData\Local\Temp\pip-install-gur9qi9n\hunspell_590196089ad44370bc048a58cf3d40dd\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\shikhar\AppData\Local\Temp\pip-record-clyqesxf\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\shikhar\AppData\Local\Programs\Python\Python310\Include\hunspell' Check the logs for full command output.
[ "I tried an older version and successfully installed.\npip install hunspell==0.3.4\n\n", "Collecting cyhunspell\nUsing cached CyHunspell-1.3.4.tar.gz (2.7 MB)\nPreparing metadata (setup.py) ... done\nRequirement already satisfied: cacheman>=2.0.6 in c:\\users\\abdul rehman\\appdata\\local\\programs\\python\\python310\\lib\\site-packages (from cyhunspell) (2.0.8)\nRequirement already satisfied: future>=0.16.0 in c:\\users\\abdul rehman\\appdata\\local\\programs\\python\\python310\\lib\\site-packages (from cacheman>=2.0.6->cyhunspell) (0.18.2)\nRequirement already satisfied: psutil>=2.1.0 in c:\\users\\abdul rehman\\appdata\\roaming\\python\\python310\\site-packages (from cacheman>=2.0.6->cyhunspell) (5.9.4)\nRequirement already satisfied: six>=1.10.0 in c:\\users\\abdul rehman\\appdata\\roaming\\python\\python310\\site-packages (from cacheman>=2.0.6->cyhunspell) (1.16.0)\nBuilding wheels for collected packages: cyhunspell\nBuilding wheel for cyhunspell (setup.py) ... error\nerror: subprocess-exited-with-error\n× python setup.py bdist_wheel did not run successfully.\n│ exit code: 1\n╰─> [40 lines of output]\nC:\\Users\\Abdul Rehman\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\setuptools\\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead\nwarnings.warn(\nrunning bdist_wheel\nrunning build\nrunning build_py\ncreating build\ncreating build\\lib.win-amd64-cpython-310\ncreating build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\platform.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell_init_.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pyx -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.hpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncreating build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncreating build\\lib.win-amd64-cpython-310\\libs\ncreating build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying 
libs\\msvc\\libhunspell-msvc11-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\nrunning build_ext\nbuilding 'hunspell.hunspell' extension\nerror: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https://visualstudio.microsoft.com/visual-cpp-build-tools/\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nERROR: Failed building wheel for cyhunspell\nRunning setup.py clean for cyhunspell\nFailed to build cyhunspell\nInstalling collected packages: cyhunspell\nRunning setup.py install for cyhunspell ... error\nerror: subprocess-exited-with-error\n× Running setup.py install for cyhunspell did not run successfully.\n│ exit code: 1\n╰─> [42 lines of output]\nC:\\Users\\Abdul Rehman\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\setuptools\\dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead\nwarnings.warn(\nrunning install\nC:\\Users\\Abdul Rehman\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\setuptools\\command\\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.\nwarnings.warn(\nrunning build\nrunning build_py\ncreating build\ncreating build\\lib.win-amd64-cpython-310\ncreating build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\platform.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell_init_.py -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.pxd -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.pyx -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpython-36m-x86_64-linux-gnu.so -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\thread.hpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncopying hunspell\\hunspell.cpp -> build\\lib.win-amd64-cpython-310\\hunspell\ncreating build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\test.aff -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_AU.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_CA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_GB.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_NZ.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_US.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying dictionaries\\en_ZA.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncopying 
dictionaries\\test.dic -> build\\lib.win-amd64-cpython-310\\dictionaries\ncreating build\\lib.win-amd64-cpython-310\\libs\ncreating build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc11-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x64.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\ncopying libs\\msvc\\libhunspell-msvc14-x86.lib -> build\\lib.win-amd64-cpython-310\\libs\\msvc\nrunning build_ext\nbuilding 'hunspell.hunspell' extension\nerror: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https://visualstudio.microsoft.com/visual-cpp-build-tools/\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: legacy-install-failure\n× Encountered error while trying to install package.\n╰─> cyhunspell\nnote: This is an issue with the package mentioned above, not pip.\nhint: See above for output from the failure.\n" ]
[ 0, 0 ]
[]
[]
[ "hunspell", "python" ]
stackoverflow_0071396413_hunspell_python.txt
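Whichever install route succeeds, a quick smoke test helps confirm the binding actually works. The sketch below assumes the pyhunspell-style API (hunspell.HunSpell) and uses placeholder dictionary paths -- point them at real en_US.dic/en_US.aff files:

import hunspell

# Placeholder paths: substitute the location of your .dic/.aff dictionary files
h = hunspell.HunSpell("en_US.dic", "en_US.aff")
print(h.spell("hello"))   # True for a correctly spelled word
print(h.suggest("helo"))  # list of suggestions, e.g. ['hello', 'help', ...]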
Q: How to convert this pseudocode to code in python I am new to coding and i can't figure out a way to convert this pseudocode to actual code in python especially the total number of dice part. I want to calculate total number of green dice in a dice stack of different colours. Y4 G6 R3 G2 W2 W1 where, Y4 = yellow dice with face value 4 R3 = red dice with face value 2 W2 = white dice with face value 2 G6 = green dice with face value 6 G2 = green dice with face value 2 W1 = white dice with face value 1 score = 0 if total number of green dice is 1 score = 2 if total number of green dice is 2 score = 5 if total number of green dice is 3 score = 10 if total number of green dice is 4 score = 15 if total number of green dice is 5 score = 20 if total number of green dice is 6 score = 30 return score A: Welcome to the community. I will help get you started and provide some references. To be honest however, when you post on here, you typically want to be as descriptive as possible. Don't be afraid to list the guidelines for your assignment or go into more exact detail on what problems you are having. I know it is probably confusing and can be hard to explain, but the more quality information you give to the professionals on here, the better they can understand what you are trying to accomplish and help you in the best way possible. With that said, I do not know exactly what techniques you are being asked to perform here or completely what you are trying to accomplish. However, based on the existing information, I will give you an idea of what techniques and implementations you may have to use. First, let's start on the section you commented you are struggling with, the total number of dice. Total number of dice should be represented as a variable such as the score variable at the top of your example (score = 0). I am unsure based on what you have what the total number of dice should be, so you will have to format that value in. What you need to focus on now though is using proper naming conventions for variables. A good naming convention usually follows a specific style whether it be underscore or camel case. It should also not be overly long, but be descriptive enough that you understand what the variable is. Most recommend a variable name to be between 8 to 20 characters long. Underscore example: new_member Camel case example: newMember From my experience, most Python programmers use underscores while Java and C++ programmers use camel case. With the underscore, you separate words such as "new" and "member" with an underscore and keep them lowercase in almost all situations. In camel case, the words are joined together but the first character in each word except for the very first one are capitalized. Here is probably the best guide to get you started and is followed by most organizations: https://peps.python.org/pep-0008/#naming-conventions Okay, now let's focus on the if statements or as some call them: conditional statements. Conditional/if statements can vary in format from one language to another, but Python conditional statements typically follow this structure: if variable1 condition variable2: As you can see, you start out with the word "if" and keep it lowercase. Having it uppercase will cause an error as compilers are mostly case sensitive. Following that you write a variable though later you will be able to use different things in these statements. Your variable would be what you decided to name the total number of dice. 
Here is an example: if new_member condition variable2: Next, you need to define the condition. Conditions usually are made up of equality operators (== and !=) and relational operators (< and >). If the condition is proven true, then whatever code you have in the block (the indented lines below your if statement) will execute. So in your first conditional statement, the number of dice would need to equal 1 for the score to be set to 2. If the statement is not true, the if statement does not execute. For example, let's say the total number of dice is 3. That means the if statement saying the total number of dice is 1 or 2 will not execute. However, the line that says that the total number of dice equals 3 will execute and assign score with the value of 10. Here is a resource on how these conditions work: https://www.programiz.com/python-programming/if-elif-else Now, because your pseudocode says "is" in its if statements, you will want to use == (== means is equal to) like this: if new_member == variable2: To finish off, you put the other variable you are comparing against. In your example variable2 would just be the number 1 for your first if statement. Typically, you want to use variables as writing in numbers or strings without first declaring them to a variable is bad practice. However, it is fine since you are learning. Here is the finished example: if new_member == 'High_spectre1408': print('Welcome to the community!') Lastly, I noticed you have return statements. These are typically used for classes or functions. Functions and classes are typically used to modify or output the values of variables and other things. They make code easier to read and make it so you don't have to write the same thing over and over again. You set up a function like this: def my_func(): The word 'def' is short for definition, meaning you are defining the function or what it is supposed to do. Next, you put the name of your function. It can be anything you want it to be, but it should follow proper naming conventions like variables. Like variables, they start with a lowercase letter. The reference to naming conventions I provided should give more insight on naming them too. However, whatever you do name them should be followed with by (). Here is another example: def newcomer_greeting(): Now what you can do in the () is include variables in your function that weren't part of them before. These are called parameters and can look like this: def newcomer_greeting(new_member): You can have no parameters, 1 parameter, or many parameters. With that said, let's look at an example of a definition: def newcomer_greeting(new_member): if new_member == 'High_spectre1408': print('Welcome to the community!') return Now to finish, you must call the function with the specified number of parameters in your main section. It will look like this: def newcomer_greeting(new_member): if new_member == 'High_spectre1408': print('Welcome to the community!') return new_member = 'High_spectre1408' newcomer_greeting(new_member) I was a little vague on functions compared to the last 2 concepts, but it is something considered more adept than conditional statements and variables. Instead, I will provide a reference that will save me the time of writing a book and hopefully give you more insight: https://www.programiz.com/python-programming/function Don't get discouraged and feel free to comment again if you are still having difficulties or need more examples. Best of luck!
How to convert this pseudocode to code in python
I am new to coding and I can't figure out a way to convert this pseudocode to actual code in Python, especially the total number of dice part. I want to calculate the total number of green dice in a dice stack of different colours.
Y4 G6 R3 G2 W2 W1
where,
Y4 = yellow dice with face value 4
R3 = red dice with face value 3
W2 = white dice with face value 2
G6 = green dice with face value 6
G2 = green dice with face value 2
W1 = white dice with face value 1

score = 0
if total number of green dice is 1
score = 2
if total number of green dice is 2
score = 5
if total number of green dice is 3
score = 10
if total number of green dice is 4
score = 15
if total number of green dice is 5
score = 20
if total number of green dice is 6
score = 30
return score
[ "Welcome to the community. I will help get you started and provide some references. To be honest however, when you post on here, you typically want to be as descriptive as possible. Don't be afraid to list the guidelines for your assignment or go into more exact detail on what problems you are having. I know it is probably confusing and can be hard to explain, but the more quality information you give to the professionals on here, the better they can understand what you are trying to accomplish and help you in the best way possible. With that said, I do not know exactly what techniques you are being asked to perform here or completely what you are trying to accomplish. However, based on the existing information, I will give you an idea of what techniques and implementations you may have to use.\nFirst, let's start on the section you commented you are struggling with, the total number of dice. Total number of dice should be represented as a variable such as the score variable at the top of your example (score = 0). I am unsure based on what you have what the total number of dice should be, so you will have to format that value in. What you need to focus on now though is using proper naming conventions for variables.\nA good naming convention usually follows a specific style whether it be underscore or camel case. It should also not be overly long, but be descriptive enough that you understand what the variable is. Most recommend a variable name to be between 8 to 20 characters long.\nUnderscore example: new_member\nCamel case example: newMember\n\nFrom my experience, most Python programmers use underscores while Java and C++ programmers use camel case. With the underscore, you separate words such as \"new\" and \"member\" with an underscore and keep them lowercase in almost all situations. In camel case, the words are joined together but the first character in each word except for the very first one are capitalized.\nHere is probably the best guide to get you started and is followed by most organizations: https://peps.python.org/pep-0008/#naming-conventions\nOkay, now let's focus on the if statements or as some call them: conditional statements. Conditional/if statements can vary in format from one language to another, but Python conditional statements typically follow this structure:\nif variable1 condition variable2:\n\nAs you can see, you start out with the word \"if\" and keep it lowercase. Having it uppercase will cause an error as compilers are mostly case sensitive. Following that you write a variable though later you will be able to use different things in these statements. Your variable would be what you decided to name the total number of dice. Here is an example:\nif new_member condition variable2:\n\nNext, you need to define the condition. Conditions usually are made up of equality operators (== and !=) and relational operators (< and >). If the condition is proven true, then whatever code you have in the block (the indented lines below your if statement) will execute. So in your first conditional statement, the number of dice would need to equal 1 for the score to be set to 2. If the statement is not true, the if statement does not execute. For example, let's say the total number of dice is 3. That means the if statement saying the total number of dice is 1 or 2 will not execute. 
However, the line that says that the total number of dice equals 3 will execute and assign score with the value of 10.\nHere is a resource on how these conditions work: https://www.programiz.com/python-programming/if-elif-else\nNow, because your pseudocode says \"is\" in its if statements, you will want to use == (== means is equal to) like this:\nif new_member == variable2:\n\nTo finish off, you put the other variable you are comparing against. In your example variable2 would just be the number 1 for your first if statement. Typically, you want to use variables as writing in numbers or strings without first declaring them to a variable is bad practice. However, it is fine since you are learning. Here is the finished example:\nif new_member == 'High_spectre1408':\n print('Welcome to the community!')\n\nLastly, I noticed you have return statements. These are typically used for classes or functions. Functions and classes are typically used to modify or output the values of variables and other things. They make code easier to read and make it so you don't have to write the same thing over and over again. You set up a function like this:\ndef my_func():\n\nThe word 'def' is short for definition, meaning you are defining the function or what it is supposed to do. Next, you put the name of your function. It can be anything you want it to be, but it should follow proper naming conventions like variables. Like variables, they start with a lowercase letter. The reference to naming conventions I provided should give more insight on naming them too. However, whatever you do name them should be followed with by (). Here is another example:\ndef newcomer_greeting():\n\nNow what you can do in the () is include variables in your function that weren't part of them before. These are called parameters and can look like this:\ndef newcomer_greeting(new_member):\n\nYou can have no parameters, 1 parameter, or many parameters. With that said, let's look at an example of a definition:\ndef newcomer_greeting(new_member):\n if new_member == 'High_spectre1408':\n print('Welcome to the community!')\n\n return\n\nNow to finish, you must call the function with the specified number of parameters in your main section. It will look like this:\ndef newcomer_greeting(new_member):\n if new_member == 'High_spectre1408':\n print('Welcome to the community!')\n\n return\n\nnew_member = 'High_spectre1408'\nnewcomer_greeting(new_member)\n\nI was a little vague on functions compared to the last 2 concepts, but it is something considered more adept than conditional statements and variables. Instead, I will provide a reference that will save me the time of writing a book and hopefully give you more insight: https://www.programiz.com/python-programming/function\nDon't get discouraged and feel free to comment again if you are still having difficulties or need more examples. Best of luck!\n" ]
[ 0 ]
[ "I think you should not be asking someone to do your code.\nI will try to help Python version if you mean this\nclass Dice(object):\n\n def __init__(self, dice_list):\n self.dice_list = dice_list\n\n def score(self):\n total_number_of_dice = 0\n for dice in self.dice_list:\n total_number_of_dice = total_number_of_dice + dice\n if total_number_of_dice == 1:\n return 2\n if total_number_of_dice == 2:\n return 5\n if total_number_of_dice == 3:\n return 10\n if total_number_of_dice == 4:\n return 15\n if total_number_of_dice == 5:\n return 20\n if total_number_of_dice == 6:\n return 30\n return 0\n\n" ]
[ -2 ]
[ "algorithm", "dice", "pseudocode", "python", "python_3.x" ]
stackoverflow_0074660492_algorithm_dice_pseudocode_python_python_3.x.txt
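A direct translation of the pseudocode, as a hedged sketch: the dice stack is the example from the question, and each die is assumed to be encoded as a colour letter followed by its face value.

def green_dice_score(dice_stack):
    # Score table taken straight from the pseudocode's if-ladder
    score_table = {1: 2, 2: 5, 3: 10, 4: 15, 5: 20, 6: 30}
    # Count only the green dice; face values are irrelevant for this score
    green_count = sum(1 for die in dice_stack if die.startswith("G"))
    return score_table.get(green_count, 0)

dice_stack = ["Y4", "G6", "R3", "G2", "W2", "W1"]
print(green_dice_score(dice_stack))  # two green dice (G6, G2) -> 5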
Q: How to create two columns of a dataframe from separate lines in a text file I have a text file where every other row either begins with "A" or "B" like this
A810 WE WILDWOOD DR
B20220901BROOKE
A6223 AMHERST BAY
B20221001SARAI

How can I read the text file and create a two column pandas dataframe where the line beginning with "A" is a column and likewise for the "B", on a single row. Like this
|A                  |B              |
|:------------------|:--------------|
|A810 WE WILDWOOD DR|B20220901BROOKE|
|:------------------|---------------|
|A6223 AMHERST BAY  |B20221001SARAI |
|:------------------|---------------|

A: You can approach this by using pandas.DataFrame.shift and pandas.DataFrame.join :
from io import StringIO 
import pandas as pd

s = """A810 WE WILDWOOD DR
B20220901BROOKE
A6223 AMHERST BAY
B20221001SARAI
"""

df = pd.read_csv(StringIO(s), header=None, names=["A"])
#in your case, df = pd.read_csv("path_of_your_txtfile", header=None, names=["A"])

out = (
 df
 .join(df.shift(-1).rename(columns= {"A": "B"}))
 .iloc[::2]
 .reset_index(drop=True)
 )

# Output :
print(out)
 A B
0 A810 WE WILDWOOD DR B20220901BROOKE
1 A6223 AMHERST BAY B20221001SARAI

A: What about using a pivot?
col = df[0].str.extract('(.)', expand=False)

out = (df
 .assign(col=col, idx=df.groupby(col).cumcount())
 .pivot(index='idx', columns='col', values=0)
 .rename_axis(index=None, columns=None)
)

Output:
 A B
0 A810 WE WILDWOOD DR B20220901BROOKE
1 A6223 AMHERST BAY B20221001SARAI

A: Another possible solution, which only works if the strings alternate, regularly, between A and B, as the OP states:
pd.DataFrame(df.values.reshape((-1, 2)), columns=list('AB'))

Output:
 A B
0 A810 WE WILDWOOD DR B20220901BROOKE
1 A6223 AMHERST BAY B20221001SARAI
How to create two columns of a dataframe from separate lines in a text file
I have a text file where every other row either begins with "A" or "B" like this
A810 WE WILDWOOD DR
B20220901BROOKE
A6223 AMHERST BAY
B20221001SARAI

How can I read the text file and create a two column pandas dataframe where the line beginning with "A" is a column and likewise for the "B", on a single row. Like this
|A                  |B              |
|:------------------|:--------------|
|A810 WE WILDWOOD DR|B20220901BROOKE|
|:------------------|---------------|
|A6223 AMHERST BAY  |B20221001SARAI |
|:------------------|---------------|
[ "You can approach this by using pandas.DataFrame.shift and pandas.DataFrame.join :\nfrom io import StringIO \nimport pandas as pd\n\ns = \"\"\"A810 WE WILDWOOD DR\nB20220901BROOKE\nA6223 AMHERST BAY\nB20221001SARAI\n\"\"\"\n\ndf = pd.read_csv(StringIO(s), header=None, names=[\"A\"])\n#in your case, df = pd.read_csv(\"path_of_your_txtfile\", header=None, names=[\"A\"])\n\nout = (\n df\n .join(df.shift(-1).rename(columns= {\"A\": \"B\"}))\n .iloc[::2]\n .reset_index(drop=True)\n )\n\n# Output :\nprint(out)\n A B\n0 A810 WE WILDWOOD DR B20220901BROOKE\n1 A6223 AMHERST BAY B20221001SARAI\n\n", "What about using a pivot?\ncol = df[0].str.extract('(.)', expand=False)\n\nout = (df\n .assign(col=col, idx=df.groupby(col).cumcount())\n .pivot(index='idx', columns='col', values=0)\n .rename_axis(index=None, columns=None)\n)\n\nOutput:\n A B\n0 A810 WE WILDWOOD DR B20220901BROOKE\n1 A6223 AMHERST BAY B20221001SARAI\n\n", "Another possible solution, which only works if the strings alternate, regularly, between A and B, as the OP states:\npd.DataFrame(df.values.reshape((-1, 2)), columns=list('AB'))\n\nOutput:\n A B\n0 A810 WE WILDWOOD DR B20220901BROOKE\n1 A6223 AMHERST BAY B20221001SARAI\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "pandas", "python", "text_files" ]
stackoverflow_0074660644_pandas_python_text_files.txt
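Reading straight from the file rather than an in-memory string, a minimal end-to-end sketch (the filename is a placeholder, and the lines are assumed to alternate strictly between A and B, as the OP states):

import pandas as pd

# "records.txt" is a placeholder for the actual file path
with open("records.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

# Pair every A line with the B line that follows it
df = pd.DataFrame(zip(lines[::2], lines[1::2]), columns=["A", "B"])
print(df)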
Q: Pip subprocess error: No matching distribution found for matlabengineforpython==R2020b So, I'm trying to import an environment.yml file from one windows laptop to a windows pc. I enter the following command (conda env create -f environment.yml), and get the following error (at the end of the code). The imports fail when they reach the matlabengine package. Not sure why this is. Any thoughts? Thanks. C:\Software\srv569>conda env create -f environment.yml Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 22.9.0 latest version: 22.11.0 Please update conda by running $ conda update -n base -c defaults conda Preparing transaction: done Verifying transaction: done Executing transaction: done Installing pip dependencies: \ Ran pip subprocess with arguments: ['C:\\Software\\srv569\\Anaconda3\\envs\\research_projects\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\\Software\\srv569\\condaenv.0oaiyh7x.requirements.txt'] Pip subprocess output: Collecting absl-py==1.0.0 Using cached absl_py-1.0.0-py3-none-any.whl (126 kB) Collecting ansi2html==1.7.0 Using cached ansi2html-1.7.0-py3-none-any.whl (15 kB) Collecting argon2-cffi==21.3.0 Using cached argon2_cffi-21.3.0-py3-none-any.whl (14 kB) Collecting argon2-cffi-bindings==21.2.0 Using cached argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl (30 kB) Collecting asttokens==2.0.5 Using cached asttokens-2.0.5-py2.py3-none-any.whl (20 kB) Collecting astunparse==1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting attrs==21.4.0 Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB) Collecting backcall==0.2.0 Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB) Collecting beautifulsoup4==4.11.1 Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB) Collecting bleach==5.0.0 Using cached bleach-5.0.0-py3-none-any.whl (160 kB) Collecting brotli==1.0.9 Using cached Brotli-1.0.9-cp38-cp38-win_amd64.whl (365 kB) Collecting cachetools==5.1.0 Using cached cachetools-5.1.0-py3-none-any.whl (9.2 kB) Collecting cffi==1.15.0 Using cached cffi-1.15.0-cp38-cp38-win_amd64.whl (179 kB) Collecting charset-normalizer==2.0.12 Using cached charset_normalizer-2.0.12-py3-none-any.whl (39 kB) Collecting click==8.1.3 Using cached click-8.1.3-py3-none-any.whl (96 kB) Collecting cycler==0.11.0 Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB) Collecting dash==2.4.1 Using cached dash-2.4.1-py3-none-any.whl (9.8 MB) Collecting dash-core-components==2.0.0 Using cached dash_core_components-2.0.0-py3-none-any.whl (3.8 kB) Collecting dash-html-components==2.0.0 Using cached dash_html_components-2.0.0-py3-none-any.whl (4.1 kB) Collecting dash-table==5.0.0 Using cached dash_table-5.0.0-py3-none-any.whl (3.9 kB) Collecting debugpy==1.6.0 Using cached debugpy-1.6.0-cp38-cp38-win_amd64.whl (4.3 MB) Collecting decorator==5.1.1 Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB) Collecting defusedxml==0.7.1 Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB) Collecting entrypoints==0.4 Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB) Collecting executing==0.8.3 Using cached executing-0.8.3-py2.py3-none-any.whl (16 kB) Collecting fastjsonschema==2.15.3 Using cached fastjsonschema-2.15.3-py3-none-any.whl (22 kB) Collecting flask==2.1.2 Using cached Flask-2.1.2-py3-none-any.whl (95 kB) Collecting flask-compress==1.12 Using cached Flask_Compress-1.12-py3-none-any.whl (7.9 kB) Collecting flatbuffers==1.12 Using cached 
flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting fonttools==4.33.3 Using cached fonttools-4.33.3-py3-none-any.whl (930 kB) Collecting gast==0.4.0 Using cached gast-0.4.0-py3-none-any.whl (9.8 kB) Collecting glob2==0.7 Using cached glob2-0.7.tar.gz (10 kB) Collecting google-auth==2.6.6 Using cached google_auth-2.6.6-py2.py3-none-any.whl (156 kB) Collecting google-auth-oauthlib==0.4.6 Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB) Collecting google-pasta==0.2.0 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Collecting grpcio==1.46.3 Using cached grpcio-1.46.3-cp38-cp38-win_amd64.whl (3.5 MB) Collecting h5py==3.7.0 Using cached h5py-3.7.0-cp38-cp38-win_amd64.whl (2.6 MB) Collecting idna==3.3 Using cached idna-3.3-py3-none-any.whl (61 kB) Collecting imageio==2.19.3 Using cached imageio-2.19.3-py3-none-any.whl (3.4 MB) Collecting importlib-metadata==4.11.4 Using cached importlib_metadata-4.11.4-py3-none-any.whl (18 kB) Collecting importlib-resources==5.7.1 Using cached importlib_resources-5.7.1-py3-none-any.whl (28 kB) Collecting ipykernel==6.13.0 Using cached ipykernel-6.13.0-py3-none-any.whl (131 kB) Collecting ipython==8.3.0 Using cached ipython-8.3.0-py3-none-any.whl (750 kB) Collecting ipython-genutils==0.2.0 Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB) Collecting ipywidgets==7.7.0 Using cached ipywidgets-7.7.0-py2.py3-none-any.whl (123 kB) Collecting itsdangerous==2.1.2 Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB) Collecting jedi==0.18.1 Using cached jedi-0.18.1-py2.py3-none-any.whl (1.6 MB) Collecting jinja2==3.1.2 Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB) Collecting jsonschema==4.5.1 Using cached jsonschema-4.5.1-py3-none-any.whl (72 kB) Collecting jupyter==1.0.0 Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB) Collecting jupyter-client==7.3.1 Using cached jupyter_client-7.3.1-py3-none-any.whl (130 kB) Collecting jupyter-console==6.4.3 Using cached jupyter_console-6.4.3-py3-none-any.whl (22 kB) Collecting jupyter-core==4.10.0 Using cached jupyter_core-4.10.0-py3-none-any.whl (87 kB) Collecting jupyter-dash==0.4.2 Using cached jupyter_dash-0.4.2-py3-none-any.whl (23 kB) Collecting jupyterlab-pygments==0.2.2 Using cached jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB) Collecting jupyterlab-widgets==1.1.0 Using cached jupyterlab_widgets-1.1.0-py3-none-any.whl (245 kB) Collecting keras==2.9.0 Using cached keras-2.9.0-py2.py3-none-any.whl (1.6 MB) Collecting keras-preprocessing==1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Collecting kiwisolver==1.4.2 Using cached kiwisolver-1.4.2-cp38-cp38-win_amd64.whl (55 kB) Collecting libclang==14.0.1 Using cached libclang-14.0.1-py2.py3-none-win_amd64.whl (14.2 MB) Collecting markdown==3.3.7 Using cached Markdown-3.3.7-py3-none-any.whl (97 kB) Collecting markupsafe==2.1.1 Using cached MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl (17 kB) Pip subprocess error: ERROR: Could not find a version that satisfies the requirement matlabengineforpython==R2020b (from versions: none) ERROR: No matching distribution found for matlabengineforpython==R2020b failed CondaEnvException: Pip failed A: Have you looked at condaENVexception. Try updating conda and pip versions to latest.
Pip subprocess error: No matching distribution found for matlabengineforpython==R2020b
So, I'm trying to import an environment.yml file from one windows laptop to a windows pc. I enter the following command (conda env create -f environment.yml), and get the following error (at the end of the code). The imports fail when they reach the matlabengine package. Not sure why this is. Any thoughts? Thanks. C:\Software\srv569>conda env create -f environment.yml Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 22.9.0 latest version: 22.11.0 Please update conda by running $ conda update -n base -c defaults conda Preparing transaction: done Verifying transaction: done Executing transaction: done Installing pip dependencies: \ Ran pip subprocess with arguments: ['C:\\Software\\srv569\\Anaconda3\\envs\\research_projects\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\\Software\\srv569\\condaenv.0oaiyh7x.requirements.txt'] Pip subprocess output: Collecting absl-py==1.0.0 Using cached absl_py-1.0.0-py3-none-any.whl (126 kB) Collecting ansi2html==1.7.0 Using cached ansi2html-1.7.0-py3-none-any.whl (15 kB) Collecting argon2-cffi==21.3.0 Using cached argon2_cffi-21.3.0-py3-none-any.whl (14 kB) Collecting argon2-cffi-bindings==21.2.0 Using cached argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl (30 kB) Collecting asttokens==2.0.5 Using cached asttokens-2.0.5-py2.py3-none-any.whl (20 kB) Collecting astunparse==1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting attrs==21.4.0 Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB) Collecting backcall==0.2.0 Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB) Collecting beautifulsoup4==4.11.1 Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB) Collecting bleach==5.0.0 Using cached bleach-5.0.0-py3-none-any.whl (160 kB) Collecting brotli==1.0.9 Using cached Brotli-1.0.9-cp38-cp38-win_amd64.whl (365 kB) Collecting cachetools==5.1.0 Using cached cachetools-5.1.0-py3-none-any.whl (9.2 kB) Collecting cffi==1.15.0 Using cached cffi-1.15.0-cp38-cp38-win_amd64.whl (179 kB) Collecting charset-normalizer==2.0.12 Using cached charset_normalizer-2.0.12-py3-none-any.whl (39 kB) Collecting click==8.1.3 Using cached click-8.1.3-py3-none-any.whl (96 kB) Collecting cycler==0.11.0 Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB) Collecting dash==2.4.1 Using cached dash-2.4.1-py3-none-any.whl (9.8 MB) Collecting dash-core-components==2.0.0 Using cached dash_core_components-2.0.0-py3-none-any.whl (3.8 kB) Collecting dash-html-components==2.0.0 Using cached dash_html_components-2.0.0-py3-none-any.whl (4.1 kB) Collecting dash-table==5.0.0 Using cached dash_table-5.0.0-py3-none-any.whl (3.9 kB) Collecting debugpy==1.6.0 Using cached debugpy-1.6.0-cp38-cp38-win_amd64.whl (4.3 MB) Collecting decorator==5.1.1 Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB) Collecting defusedxml==0.7.1 Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB) Collecting entrypoints==0.4 Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB) Collecting executing==0.8.3 Using cached executing-0.8.3-py2.py3-none-any.whl (16 kB) Collecting fastjsonschema==2.15.3 Using cached fastjsonschema-2.15.3-py3-none-any.whl (22 kB) Collecting flask==2.1.2 Using cached Flask-2.1.2-py3-none-any.whl (95 kB) Collecting flask-compress==1.12 Using cached Flask_Compress-1.12-py3-none-any.whl (7.9 kB) Collecting flatbuffers==1.12 Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting fonttools==4.33.3 Using cached fonttools-4.33.3-py3-none-any.whl 
(930 kB) Collecting gast==0.4.0 Using cached gast-0.4.0-py3-none-any.whl (9.8 kB) Collecting glob2==0.7 Using cached glob2-0.7.tar.gz (10 kB) Collecting google-auth==2.6.6 Using cached google_auth-2.6.6-py2.py3-none-any.whl (156 kB) Collecting google-auth-oauthlib==0.4.6 Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB) Collecting google-pasta==0.2.0 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Collecting grpcio==1.46.3 Using cached grpcio-1.46.3-cp38-cp38-win_amd64.whl (3.5 MB) Collecting h5py==3.7.0 Using cached h5py-3.7.0-cp38-cp38-win_amd64.whl (2.6 MB) Collecting idna==3.3 Using cached idna-3.3-py3-none-any.whl (61 kB) Collecting imageio==2.19.3 Using cached imageio-2.19.3-py3-none-any.whl (3.4 MB) Collecting importlib-metadata==4.11.4 Using cached importlib_metadata-4.11.4-py3-none-any.whl (18 kB) Collecting importlib-resources==5.7.1 Using cached importlib_resources-5.7.1-py3-none-any.whl (28 kB) Collecting ipykernel==6.13.0 Using cached ipykernel-6.13.0-py3-none-any.whl (131 kB) Collecting ipython==8.3.0 Using cached ipython-8.3.0-py3-none-any.whl (750 kB) Collecting ipython-genutils==0.2.0 Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB) Collecting ipywidgets==7.7.0 Using cached ipywidgets-7.7.0-py2.py3-none-any.whl (123 kB) Collecting itsdangerous==2.1.2 Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB) Collecting jedi==0.18.1 Using cached jedi-0.18.1-py2.py3-none-any.whl (1.6 MB) Collecting jinja2==3.1.2 Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB) Collecting jsonschema==4.5.1 Using cached jsonschema-4.5.1-py3-none-any.whl (72 kB) Collecting jupyter==1.0.0 Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB) Collecting jupyter-client==7.3.1 Using cached jupyter_client-7.3.1-py3-none-any.whl (130 kB) Collecting jupyter-console==6.4.3 Using cached jupyter_console-6.4.3-py3-none-any.whl (22 kB) Collecting jupyter-core==4.10.0 Using cached jupyter_core-4.10.0-py3-none-any.whl (87 kB) Collecting jupyter-dash==0.4.2 Using cached jupyter_dash-0.4.2-py3-none-any.whl (23 kB) Collecting jupyterlab-pygments==0.2.2 Using cached jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB) Collecting jupyterlab-widgets==1.1.0 Using cached jupyterlab_widgets-1.1.0-py3-none-any.whl (245 kB) Collecting keras==2.9.0 Using cached keras-2.9.0-py2.py3-none-any.whl (1.6 MB) Collecting keras-preprocessing==1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Collecting kiwisolver==1.4.2 Using cached kiwisolver-1.4.2-cp38-cp38-win_amd64.whl (55 kB) Collecting libclang==14.0.1 Using cached libclang-14.0.1-py2.py3-none-win_amd64.whl (14.2 MB) Collecting markdown==3.3.7 Using cached Markdown-3.3.7-py3-none-any.whl (97 kB) Collecting markupsafe==2.1.1 Using cached MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl (17 kB) Pip subprocess error: ERROR: Could not find a version that satisfies the requirement matlabengineforpython==R2020b (from versions: none) ERROR: No matching distribution found for matlabengineforpython==R2020b failed CondaEnvException: Pip failed
[ "Have you looked at condaENVexception.\nTry updating conda and pip versions to latest.\n" ]
[ 0 ]
[]
[]
[ "anaconda", "matlab", "pip", "python" ]
stackoverflow_0074660932_anaconda_matlab_pip_python.txt
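A note on the failure above: updating conda and pip is unlikely to resolve it, because matlabengineforpython==R2020b is not on PyPI at all; MATLAB R2020b ships its Python engine inside the MATLAB installation instead of publishing a wheel (pip-installable matlabengine packages only appeared with later MATLAB releases). A sketch of the usual workaround, assuming MATLAB R2020b is installed on the target PC, <matlabroot> stands for its install directory, and the environment is named research_projects as the log paths suggest:

# 1. delete the un-resolvable pin from the pip: section of environment.yml:
#      - matlabengineforpython==R2020b
# 2. recreate the environment, then install the engine from the local MATLAB copy:
conda env create -f environment.yml
conda activate research_projects
cd <matlabroot>\extern\engines\python
python setup.py install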
Q: h2o frame from pandas casting I am using h2o to perform predictive modeling from python. I have loaded some data from a csv using pandas, specifying some column types: dtype_dict = {'SIT_SSICCOMP':'object', 'SIT_CAPACC':'object', 'PTT_SSIRMPOL':'object', 'PTT_SPTCLVEI':'object', 'cap_pad':'object', 'SIT_SADNS_RESP_PERC':'object', 'SIT_GEOCODE':'object', 'SIT_TIPOFIRMA':'object', 'SIT_TPFRODESI':'object', 'SIT_CITTAACC':'object', 'SIT_INDIRACC':'object', 'SIT_NUMCIVACC':'object' } date_cols = ["SIT_SSIDTSIN","SIT_SSIDTDEN","PTT_SPTDTEFF","PTT_SPTDTSCA","SIT_DTANTIFRODE","PTT_DTELABOR"] columns_to_drop = ['SIT_TPFRODESI','SIT_CITTAACC', 'SIT_INDIRACC', 'SIT_NUMCIVACC', 'SIT_CAPACC', 'SIT_LONGITACC', 'SIT_LATITACC','cap_pad','SIT_DTANTIFRODE'] comp='mycomp' file_completo = os.path.join(dataDir,"db4modelrisk_"+comp+".csv") db4scoring = pd.read_csv(filepath_or_buffer=file_completo,sep=";", encoding='latin1', header=0,infer_datetime_format =True,na_values=[''], keep_default_na =False, parse_dates=date_cols,dtype=dtype_dict,nrows=500e3) db4scoring.drop(labels=columns_to_drop,axis=1,inplace =True) Then, after I set up a h2o cluster I import it in h2o using db4scoring_h2o = H2OFrame(db4scoring) and I convert categorical predictors in factor for example: db4scoring_h2o["SIT_SADTPROV"]=db4scoring_h2o["SIT_SADTPROV"].asfactor() db4scoring_h2o["PTT_SPTFRAZ"]=db4scoring_h2o["PTT_SPTFRAZ"].asfactor() When I check data types using db4scoring.dtypes I notice that they are properly set but when I import it in h2o I notice that h2oframe performs some unwanted conversions to enum (eg from float or from int). I wonder if is is a way to specify the variable format in H2OFrame. A: Yes, there is. See the H2OFrame doc here: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html#h2oframe You just need to use the column_types argument when you cast. Here's a short example: # imports import h2o import numpy as np import pandas as pd # create small random pandas df df = pd.DataFrame(np.random.randint(0,10,size=(10, 2)), columns=list('AB')) print(df) # A B #0 5 0 #1 1 3 #2 4 8 #3 3 9 # ... # start h2o, convert pandas frame to H2OFrame # use column_types dict to set data types h2o.init() h2o_df = h2o.H2OFrame(df, column_types={'A':'numeric', 'B':'enum'}) h2o_df.describe() # you should now see the desired data types # A B # type int enum # ... A: # Filter a dictionary to keep elements only whose keys are even newDict = filterTheDict(dictOfNames, lambda elem : elem[0] % 2 == 0) print('Filtered Dictionary : ') print(newDict)`enter code here`
h2o frame from pandas casting
I am using h2o to perform predictive modeling from python. I have loaded some data from a csv using pandas, specifying some column types: dtype_dict = {'SIT_SSICCOMP':'object', 'SIT_CAPACC':'object', 'PTT_SSIRMPOL':'object', 'PTT_SPTCLVEI':'object', 'cap_pad':'object', 'SIT_SADNS_RESP_PERC':'object', 'SIT_GEOCODE':'object', 'SIT_TIPOFIRMA':'object', 'SIT_TPFRODESI':'object', 'SIT_CITTAACC':'object', 'SIT_INDIRACC':'object', 'SIT_NUMCIVACC':'object' } date_cols = ["SIT_SSIDTSIN","SIT_SSIDTDEN","PTT_SPTDTEFF","PTT_SPTDTSCA","SIT_DTANTIFRODE","PTT_DTELABOR"] columns_to_drop = ['SIT_TPFRODESI','SIT_CITTAACC', 'SIT_INDIRACC', 'SIT_NUMCIVACC', 'SIT_CAPACC', 'SIT_LONGITACC', 'SIT_LATITACC','cap_pad','SIT_DTANTIFRODE'] comp='mycomp' file_completo = os.path.join(dataDir,"db4modelrisk_"+comp+".csv") db4scoring = pd.read_csv(filepath_or_buffer=file_completo,sep=";", encoding='latin1', header=0,infer_datetime_format =True,na_values=[''], keep_default_na =False, parse_dates=date_cols,dtype=dtype_dict,nrows=500e3) db4scoring.drop(labels=columns_to_drop,axis=1,inplace =True) Then, after I set up a h2o cluster I import it in h2o using db4scoring_h2o = H2OFrame(db4scoring) and I convert categorical predictors in factor for example: db4scoring_h2o["SIT_SADTPROV"]=db4scoring_h2o["SIT_SADTPROV"].asfactor() db4scoring_h2o["PTT_SPTFRAZ"]=db4scoring_h2o["PTT_SPTFRAZ"].asfactor() When I check data types using db4scoring.dtypes I notice that they are properly set but when I import it in h2o I notice that h2oframe performs some unwanted conversions to enum (eg from float or from int). I wonder if is is a way to specify the variable format in H2OFrame.
[ "Yes, there is. See the H2OFrame doc here: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html#h2oframe\nYou just need to use the column_types argument when you cast.\nHere's a short example:\n# imports\nimport h2o\nimport numpy as np\nimport pandas as pd\n\n# create small random pandas df\ndf = pd.DataFrame(np.random.randint(0,10,size=(10, 2)), \ncolumns=list('AB'))\nprint(df)\n\n# A B\n#0 5 0\n#1 1 3\n#2 4 8\n#3 3 9\n# ...\n\n# start h2o, convert pandas frame to H2OFrame\n# use column_types dict to set data types\nh2o.init()\nh2o_df = h2o.H2OFrame(df, column_types={'A':'numeric', 'B':'enum'})\nh2o_df.describe() # you should now see the desired data types \n\n# A B\n# type int enum\n# ... \n\n", "# Filter a dictionary to keep elements only whose keys are even\nnewDict = filterTheDict(dictOfNames, lambda elem : elem[0] % 2 == 0)\nprint('Filtered Dictionary : ')\nprint(newDict)`enter code here`\n\n" ]
[ 3, 0 ]
[]
[]
[ "casting", "h2o", "pandas", "python" ]
stackoverflow_0049823178_casting_h2o_pandas_python.txt
Q: How do I write the __init__ to assign 1 to the __value data attibute for the dice game? Write a class named Die that simulates rolling dice. The Die class should have one private data attribute named __value. It should also have the following methods: __init__ : The __init__ method should assign 1 to the __value data attribute. roll: The roll method should set the __value data attribute to a random number from 1 to 6. get_value: The get_value function should return the value of the __value data attribute. Write a program that creates a Die object then uses a loop to roll the die 5 times. Each time the die is rolled, display its value. import random(1,6) class Die: def __init__(self): self.number int(input("Enter a number 1-6":) def get_value(self): for n in number: def main(): in
How do I write the __init__ to assign 1 to the __value data attibute for the dice game?
Write a class named Die that simulates rolling dice. The Die class should have one private data attribute named __value. It should also have the following methods: __init__ : The __init__ method should assign 1 to the __value data attribute. roll: The roll method should set the __value data attribute to a random number from 1 to 6. get_value: The get_value function should return the value of the __value data attribute. Write a program that creates a Die object then uses a loop to roll the die 5 times. Each time the die is rolled, display its value. import random(1,6) class Die: def __init__(self): self.number int(input("Enter a number 1-6":) def get_value(self): for n in number: def main(): in
[]
[]
[ "As you created number variable using self.number,you can create value with self.__value.\nLike this: self.__value = randint(1,6).\nBe aware that you can create it outside of the __init__ method. But if you do that, the variable will be linked to the class instead of instances (so multiple call to new dice will have the same value).\n", "Hope this help if you mean to create a class to demonstrate rolling dice.\n# Write a class named Die with the following methods:\nclass Die:\n # __init__ method that initializes the die's value to 1\n def __init__(self):\n self.value = 1\n\n # roll method that generates a random number in the range 1 through 6, and assigns this value to the die's value attribute\n def roll(self):\n import random\n self.value = random.randint(1, 6)\n\n # get_value method that returns the die's value\n def get_value(self):\n return self.value\n\n# main function\ndef main():\n # Create an instance of the Die class, and assign it to a variable named die.\n die = Die()\n # Write a loop that rolls the die 5 times.\n for i in range(5):\n die.roll()\n print(die.get_value())\n\n" ]
[ -1, -1 ]
[ "class", "dice", "python" ]
stackoverflow_0074660989_class_dice_python.txt
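Neither non-answer above follows the assignment text exactly: the first rolls a random value inside __init__ (the spec says assign 1), and the second uses a public value attribute rather than the private __value. A minimal sketch that matches the spec as written:

import random

class Die:
    def __init__(self):
        # the spec asks __init__ to assign 1 to the private attribute
        self.__value = 1

    def roll(self):
        # set __value to a random number from 1 to 6
        self.__value = random.randint(1, 6)

    def get_value(self):
        # return the current value of __value
        return self.__value

def main():
    die = Die()
    for _ in range(5):  # roll the die 5 times, printing each result
        die.roll()
        print(die.get_value())

if __name__ == '__main__':
    main()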
Q: How to generate a random group of result from a list So, in an exercise for my school, 'cell' is a point that has its own coordinates x and y. In a previous question I had to generate a list of its neighbours, and now I have to randomly pick one of these neighbours; the result has to be a single value in the form (x,y). import random #Qst1 cell=(2,3) lgn, col = cell def voisines_PI(cell): n=[(lgn-1,col-1),(lgn-1,col+1),(lgn+1,col+1),(lgn+1,col-1)] return n print(voisines_PI(cell)) #Qst2 def voisine_PI_alea(cell): m= 0 b= len(voisines_PI(cell)) g= random.randint(m,b) return g print(voisine_PI_alea(voisines_PI(cell))) A: Like @JohnnyMopp said, the function already returns a list of neighbors, so you can use random.choice() to select a random element of the list, like so: def voisine_PI_alea(cell): return random.choice(voisines_PI(cell))
How to generate a random group of result from a list
So, in an exercise for my school, 'cell' is a point that has its own coordinates x and y. In a previous question I had to generate a list of its neighbours, and now I have to randomly pick one of these neighbours; the result has to be a single value in the form (x,y). import random #Qst1 cell=(2,3) lgn, col = cell def voisines_PI(cell): n=[(lgn-1,col-1),(lgn-1,col+1),(lgn+1,col+1),(lgn+1,col-1)] return n print(voisines_PI(cell)) #Qst2 def voisine_PI_alea(cell): m= 0 b= len(voisines_PI(cell)) g= random.randint(m,b) return g print(voisine_PI_alea(voisines_PI(cell)))
[ "Like @JohnnyMopp said, the function already returns a list of neighbors, so you can use random.choice() to select a random element of the list, like so:\ndef voisine_PI_alea(cell):\n return random.choice(voisines_PI(cell))\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074660869_python.txt
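For completeness, the accepted fix dropped into the asker's script. The original voisine_PI_alea returned a random integer rather than a neighbour, and randint's upper bound is inclusive, so it could also produce an out-of-range index; random.choice sidesteps both problems:

import random

cell = (2, 3)
lgn, col = cell

def voisines_PI(cell):
    # the four diagonal neighbours of the point (as in the asker's Qst1)
    return [(lgn-1, col-1), (lgn-1, col+1), (lgn+1, col+1), (lgn+1, col-1)]

def voisine_PI_alea(cell):
    # pick one neighbour at random; the result is a single (x, y) tuple
    return random.choice(voisines_PI(cell))

print(voisine_PI_alea(cell))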
Q: GUIZERO: I want to use a pushbutton to save what's in a textbox to file. How do I do that? I'm a super beginner with Python, so please be kind. I am creating an app that should take in user input from text boxes, and then when the user presses the submit button this is saved in a text file. I think the issue is that I'm not quite sure how to create the right function for the pushbutton command. I would really appreciate if someone can code a simple app showing how to do this. This is the code I have so far, but I get an error "TypeError: write() argument must be str, not TextBox". from guizero import * import os cwd = os.getcwd() # function for writing files def save_file(): with open(cwd+'/Desktop/File handling/newfile.txt','a') as f: f.write(userInput) app = App("testing") userInput = TextBox(app) submit_button = PushButton(app, command=save_file, text="submit") app.display() ` A: I figured it out. Thanks :) from guizero import * import os cwd = os.getcwd() # function for writing files def save_file(): with open(cwd+'/Desktop/File handling/newfile.txt','a') as f: f.write("First name:"+" "+first_name.value+"\n") #the app app = App("testing") #text box first_name = TextBox(app, width=30, grid=[2,4]) #submit button box = Box(app) submitbutton = PushButton(app, command=(save_file), text="Submit") app.display()
GUIZERO: I want to use a pushbutton to save what's in a textbox to file. How do I do that?
I'm a super beginner with Python, so please be kind. I am creating an app that should take in user input from text boxes, and then when the user presses the submit button this is saved in a text file. I think the issue is that I'm not quite sure how to create the right function for the pushbutton command. I would really appreciate if someone can code a simple app showing how to do this. This is the code I have so far, but I get an error "TypeError: write() argument must be str, not TextBox". from guizero import * import os cwd = os.getcwd() # function for writing files def save_file(): with open(cwd+'/Desktop/File handling/newfile.txt','a') as f: f.write(userInput) app = App("testing") userInput = TextBox(app) submit_button = PushButton(app, command=save_file, text="submit") app.display() `
[ "I figured it out. Thanks :)\n from guizero import *\nimport os\ncwd = os.getcwd()\n\n\n# function for writing files\ndef save_file():\n with open(cwd+'/Desktop/File handling/newfile.txt','a') as f:\n f.write(\"First name:\"+\" \"+first_name.value+\"\\n\")\n\n#the app\napp = App(\"testing\")\n\n\n#text box\nfirst_name = TextBox(app, width=30, grid=[2,4])\n\n#submit button\nbox = Box(app)\nsubmitbutton = PushButton(app, command=(save_file), text=\"Submit\")\n\napp.display()\n\n" ]
[ 0 ]
[]
[]
[ "guizero", "python", "textbox" ]
stackoverflow_0074648037_guizero_python_textbox.txt
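The essential fix in the self-answer is reading the widget's .value attribute (a str) instead of passing the TextBox object itself to write(), which is what raised the original TypeError. A minimal sketch of the same idea with the question's original variable names:

from guizero import App, TextBox, PushButton
import os

cwd = os.getcwd()

def save_file():
    # userInput is a TextBox widget; userInput.value is the text typed into it
    with open(cwd + '/Desktop/File handling/newfile.txt', 'a') as f:
        f.write(userInput.value + '\n')

app = App("testing")
userInput = TextBox(app)
submit_button = PushButton(app, command=save_file, text="submit")
app.display()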
Q: Failed to install wsgiref on Python 3 I have a problem installing wsgiref: $ python --version Python 3.6.0 :: Anaconda 4.3.1 (x86_64) $ pip --version pip 9.0.1 from /anaconda/lib/python3.6/site-packages (python 3.6) My requirement.txt file is shown below. numpy==1.8.1 scipy==0.14.0 pyzmq==14.3.1 pandas==0.14.0 Jinja2==2.7.3 MarkupSafe==0.23 backports.ssl-match-hostname==3.4.0.2 gnureadline==6.3.3 ipython==2.1.0 matplotlib==1.3.1 nose==1.3.3 openpyxl==1.8.6 patsy==0.2.1 pyparsing==2.0.2 python-dateutil==2.2 pytz==2014.4 scikit-learn==0.14.1 six==1.7.3 tornado==3.2.2 wsgiref==0.1.2 statsmodels==0.5.0 when I run pip install -r requirement.txt, I get this error Collecting wsgiref==0.1.2 (from -r requirements.txt (line 20)) Using cached wsgiref-0.1.2.zip Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/setup.py", line 5, in <module> import ez_setup File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ez_setup/__init__.py", line 170 print "Setuptools version",version,"or greater has been installed." ^ SyntaxError: Missing parentheses in call to 'print' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ I have tried to run pip install --upgrade setuptools and sudo easy_install -U setuptools but neither works. How can I solve this problem? A: wsgiref is already included as a standard library in Python 3. So if you are on Python 3, just go ahead and import wsgiref; that's it. A: According to this line SyntaxError: Missing parentheses in call to 'print', I think it needs Python 2.x to run the setup.py. Whether print needs parentheses is one of the syntax differences between Python 2 and Python 3. This is the solution from the Github issue: There are a few fixes that will get you running, in order of least work to most: Switch over to python2.7 for your will installs. Try to upgrade wsgiref with pip install --upgrade wsgiref, and see if the latest version works with your setup, and with will (if it doesn't, you'd notice the http/webhooks stuff not working). If you try 2) and it works, submit a PR here with the upgraded version in requirements.txt. (You can find out what versions you've got by using pip freeze). You can find more about the syntax difference here A: Solution: Flask-restful is deprecated; use flask-restx instead
Failed to install wsgiref on Python 3
I have a problem installing wsgiref: $ python --version Python 3.6.0 :: Anaconda 4.3.1 (x86_64) $ pip --version pip 9.0.1 from /anaconda/lib/python3.6/site-packages (python 3.6) My requirement.txt file is shown below. numpy==1.8.1 scipy==0.14.0 pyzmq==14.3.1 pandas==0.14.0 Jinja2==2.7.3 MarkupSafe==0.23 backports.ssl-match-hostname==3.4.0.2 gnureadline==6.3.3 ipython==2.1.0 matplotlib==1.3.1 nose==1.3.3 openpyxl==1.8.6 patsy==0.2.1 pyparsing==2.0.2 python-dateutil==2.2 pytz==2014.4 scikit-learn==0.14.1 six==1.7.3 tornado==3.2.2 wsgiref==0.1.2 statsmodels==0.5.0 when I run pip install -r requirement.txt, I get this error Collecting wsgiref==0.1.2 (from -r requirements.txt (line 20)) Using cached wsgiref-0.1.2.zip Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/setup.py", line 5, in <module> import ez_setup File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ez_setup/__init__.py", line 170 print "Setuptools version",version,"or greater has been installed." ^ SyntaxError: Missing parentheses in call to 'print' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ I have tried to run pip install --upgrade setuptools and sudo easy_install -U setuptools but neither works. How can I solve this problem?
[ "wsgiref is already been included as a standard library in Python 3...\nSo in case if you are trying with Python 3 just go ahead and import wsgiref thats it.\n", "According to this line SyntaxError: Missing parentheses in call to 'print', I think it needs Python 2.x to run the setup.py. Whether to use parentheses in print is the different syntax of Python 2 and Python 3.\nThis is the solution from the Github issue:\n\nThere are a few fixes that will get you running, in order of least work to most:\n\nSwitch over to python2.7 for your will installs.\n\nTry to upgrade wsgiref with pip install --upgrade wsgiref, and see if the latest version works with your setup, and with will (if it doesn't, you'd notice the http/webhooks stuff not working.\n\nIf you try 2) and it works, submit a PR here with the upgraded version in requirements.txt. (You can find out what versions you've got by using pip freeze).\n\n\n\nYou can find more about the syntax difference here\n", "Solution:\nFlask-restful is deprecated, use version flask-restx\n" ]
[ 30, 4, 0 ]
[]
[]
[ "pip", "python", "python_3.x", "wsgiref" ]
stackoverflow_0043026999_pip_python_python_3.x_wsgiref.txt
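Since wsgiref ships with Python 3's standard library, the practical fix is simply to delete the wsgiref==0.1.2 line from the requirements file; nothing needs to be installed for a minimal check like this to run on Python 3:

# no pip install required: wsgiref is in the Python 3 standard library
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from the stdlib wsgiref server\n']

with make_server('', 8000, app) as httpd:
    httpd.handle_request()  # serve one request, then exit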
Q: Paraphrase generation Can you please give some hints on how we can create utterances? For example, I have an input, say "I want my account details". The output should be like: Can I get my account details Please provide me my account details Can I get my account information A: The problem you are describing here is called paraphrasing: taking an input phrase and producing an output phrase with the same meaning. To get a taste of it, you can try an online paraphraser, like https://quillbot.com/. And one way to create your own paraphraser (if you really really want it) is to get a pretrained translation model, and translate your phrase into some other language and then back into the original language. If your translator generates multiple hypotheses, then you'll have multiple paraphrases. Another (simpler!) way is just to replace some of your words with their synonyms, obtained from WordNet, Wiktionary, or another linguistic resource. You can find more details in this question. A: Paraphrasing is the process of taking an input phrase and creating an output. To get a taste of it, you can try an online paraphraser, like click me
Paraphrase generation
Can you please give some hints on how we can create utterances? For example, I have an input, say "I want my account details". The output should be like: Can I get my account details Please provide me my account details Can I get my account information
[ "The problem you are describing here is called paraphrasing: taking an input phrase and producing an output phrase with the same meaning. \nTo get the taste of it, you can try an online paraphraser, like https://quillbot.com/. \nAnd one way to create your own paraphraser (if you really really want it) is to get a pretrained translation model, and translate your phrase into some other language and then back into the original language. If your translator generates multiple hypotheses, then you'll have multiple paraphrases. \nAnother (simpler!) way is just to replace some of your words with their synonyms, obtained from WordNet, Wiktionary, or another linguistic resource.\nSome more details you can find in this question. \n", "Paraphrasing is the process of taking an input phrase and creating an output.\nTo get the taste of it, you can try an online paraphraser, like\nclick me\n" ]
[ 0, 0 ]
[]
[]
[ "keras", "machine_learning", "nlp", "nltk", "python" ]
stackoverflow_0060712874_keras_machine_learning_nlp_nltk_python.txt
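A minimal sketch of the synonym-replacement idea from the first answer, using NLTK's WordNet interface. It assumes nltk is installed and the wordnet corpus has been downloaded, and it deliberately skips the POS filtering and fluency checks a real paraphraser would need:

import random
from nltk.corpus import wordnet  # requires a prior nltk.download('wordnet')

def naive_paraphrase(sentence):
    out = []
    for word in sentence.split():
        # collect every lemma of every synset of the word as a candidate synonym
        lemmas = {l.name().replace('_', ' ')
                  for s in wordnet.synsets(word) for l in s.lemmas()}
        lemmas.discard(word)
        out.append(random.choice(sorted(lemmas)) if lemmas else word)
    return ' '.join(out)

print(naive_paraphrase("I want my account details"))
# output varies per run and is crude, which is why the answers above
# suggest back-translation with a pretrained model for better quality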
Q: How to Create a Pandas DataFrame from multiple list of dictionaries I want to create a pandas dataframe using the two lists of dictionaries below: country_codes = [ { "id": 92, "name": "93", "position": 1, "description": "Afghanistan" }, { "id": 93, "name": "355", "position": 2, "description": "Albania" }, { "id": 94, "name": "213", "position": 3, "description": "Algeria" }, { "id": 95, "name": "1-684", "position": 4, "description": "American Samoa" } ] gender = [ { "id": 1, "name": "Female" }, { "id": 3, "name": "Male" } ] The dataframe should have two columns: Gender and Country Code. The values for gender will be from the gender variable while the value for country code will be from the country code variable. I have tried: df = pd.DataFrame(list( zip( gender, country_codes ) ), columns=[ "name" "description" ] ).rename({ "name": "Gender", "description": "Country" }) writer = pd.ExcelWriter('my_excel_file.xlsx', engine='xlsxwriter') df.to_excel(writer, sheet_name="sample_sheet", index=False) writer.save() But after running the script, the excel file was not populated. The expected output is to have the excel sheet (screenshot attached) populated with the data in those lists of dictionaries I declared above A: Use: df = pd.DataFrame({'gender': pd.DataFrame(gender)['name'], 'country': pd.DataFrame(country_codes)['description']}) Output: gender country 0 Female Afghanistan 1 Male Albania 2 NaN Algeria 3 NaN American Samoa
How to Create a Pandas DataFrame from multiple list of dictionaries
I want to create a pandas dataframe using the two lists of dictionaries below: country_codes = [ { "id": 92, "name": "93", "position": 1, "description": "Afghanistan" }, { "id": 93, "name": "355", "position": 2, "description": "Albania" }, { "id": 94, "name": "213", "position": 3, "description": "Algeria" }, { "id": 95, "name": "1-684", "position": 4, "description": "American Samoa" } ] gender = [ { "id": 1, "name": "Female" }, { "id": 3, "name": "Male" } ] The dataframe should have two columns: Gender and Country Code. The values for gender will be from the gender variable while the value for country code will be from the country code variable. I have tried: df = pd.DataFrame(list( zip( gender, country_codes ) ), columns=[ "name" "description" ] ).rename({ "name": "Gender", "description": "Country" }) writer = pd.ExcelWriter('my_excel_file.xlsx', engine='xlsxwriter') df.to_excel(writer, sheet_name="sample_sheet", index=False) writer.save() But after running the script, the excel file was not populated. The expected output is to have the excel sheet (screenshot attached) populated with the data in those lists of dictionaries I declared above
[ "Use:\ndf = pd.DataFrame({'gender': pd.DataFrame(gender)['name'],\n 'country': pd.DataFrame(country_codes)['description']})\n\nOutput:\n gender country\n0 Female Afghanistan\n1 Male Albania\n2 NaN Algeria\n3 NaN American Samoa\n\n" ]
[ 1 ]
[]
[]
[ "django", "pandas", "python", "xlsxwriter" ]
stackoverflow_0074660948_django_pandas_python_xlsxwriter.txt
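For reference, the accepted approach end-to-end, including the Excel write from the question. The answer relies on index alignment: the 2-row gender frame is padded with NaN against the 4-row country frame. This assumes the xlsxwriter package is installed for the ExcelWriter engine:

import pandas as pd

# country_codes and gender are the two lists of dicts from the question
gender_df = pd.DataFrame(gender)          # columns: id, name
country_df = pd.DataFrame(country_codes)  # columns: id, name, position, description

df = pd.DataFrame({'Gender': gender_df['name'],
                   'Country': country_df['description']})

with pd.ExcelWriter('my_excel_file.xlsx', engine='xlsxwriter') as writer:
    df.to_excel(writer, sheet_name='sample_sheet', index=False)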
Q: How to propagate the effect of model bias via MEAS update correctly to other variables in GEKKO EDIT: I am just checking if the issue not on the condensate side. I have a material balance optimisation problem that I have configured in GEKKO. I have reproduced my challenge on a smaller problem that I can share here. It pertains to the initial values for CV's that I have left undefined (defaulting to zero) during controller instantiation and then assigned via the MEAS attribute with FSTATUS=1 parameter before the first call to the solve() method. As expected the controller establishes a BIAS to account for the difference between MEAS and the initial controller state. It then correctly drives optimisation of the biased CV to the appropriate target. However, it then appears to continue to use the unbiased model values for the remaining to calculate other Intermediate streams and to use in Equations. The result is that the rest of the material balance shifts to a point that is not representing the actual plant operating point. Attached is a code snippet illustrating my challenge. The output is: PowerProduced.value [0.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0] PowerProduced.PRED [188.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0] Steam for Generation [1300.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0] The PRED values are realistic but the values for Steam for Generation reverts back to a explicit positional form rather than an incremental adjustment from the initial condition. I expected [1300, 1968, 1968, 1968 ...] for Steam for Generation How do I adjust the model configuration to account for this? # -*- coding: utf-8 -*- """ Created on Wed Nov 30 11:53:50 2022 @author: Jacques Strydom """ from gekko import GEKKO import numpy as np m=GEKKO(remote=False) m.time=np.linspace(0,9,10) #GLOBAL OPTIONS m.options.IMODE=6 #control mode,dynamic control, simultaneous m.options.NODES=2 #collocation nodes m.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT m.options.CV_TYPE=1 #2 = squared error from reference trajectory m.options.CTRL_UNITS=3 #control time steps units (3= HOURS) m.options.MV_DCOST_SLOPE=2 m.options.CTRL_TIME=1 #1=1 hour per time step m.options.REQCTRLMODE=3 #3= CONTRO m.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power m.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product m.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity') m.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity') m.Cycles_of_Concentration = m.Param(value=12,name='COC') m.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV m.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV m.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV m.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.PowerProduced =m.CV(name='PowerProduced') m.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown') m.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired') m.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product-m.OtherNetSteamUsers,name='StmforPower') m.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required-m.SodiumSoftner_Production,name='Condensate for BFW') m.Cond_SS_Ratio = m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required) 
m.Equation(m.PowerProduced==m.SteamforGeneration/m.StmToPowerRatio) m.Equation(m.BFW_Conductivity==(m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity+m.CondensateForBFW*m.Condensate_Conductivity)/m.BoilerFeedWater_Required) #MV SETTINGS m.SodiumSoftner_Production.STATUS=1 # Manipulate this m.SodiumSoftner_Production.FSTATUS=1 # MEASURE this m.SodiumSoftner_Production.COST=-1 # Higher is better m.Final_Product.STATUS=1 # Manipulate this m.Final_Product.FSTATUS=1 # Measure this m.Final_Product.COST=-20 # Higher is better m.Steam_Produced.STATUS=1 # Manipulate this m.Steam_Produced.FSTATUS=1 # MEASURE this m.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance m.OtherNetSteamUsers.FSTATUS=1 # MEASURE this m.BFW_Conductivity.STATUS=1 #Control this CV m.BFW_Conductivity.FSTATUS=1 #MEASURE this CV m.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation m.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation m.BFW_Conductivity.SPHI=140 #High limit for target range m.BFW_Conductivity.SPLO=110 #Low limit for target range m.PowerProduced.STATUS=1 #Control this CV m.PowerProduced.FSTATUS=1 #MEASURE this m.PowerProduced.COST=-2 #Higher is better m.PowerProduced.WSPHI=50 #Penalty for SPHI violation m.PowerProduced.WSPLO=50 #Penalty for SPLO violation m.PowerProduced.SPHI=355 #High limit for target range m.PowerProduced.SPLO=100 #Low limit for target range #Load measurements - realistic mass balance m.Final_Product.MEAS =1200 m.SodiumSoftner_Production.MEAS =2200 m.OtherNetSteamUsers.MEAS =800 m.Steam_Produced.MEAS =3900 m.BFW_Conductivity.MEAS =152 m.PowerProduced.MEAS =188 m.solve() #solve for first step print('PowerProduced.value',m.PowerProduced.value) print('PowerProduced.PRED',m.PowerProduced.PRED) print('Steam for Generation',m.SteamforGeneration.value) The process associated with the reduced problem is depicted here: A: Gekko uses the unbiased model values to solve the equations. The BIAS is only applied to that specific CV as an output correction. A state estimation algorithm such as a Kalman filter or Moving Horizon Estimator (MHE) is required to adjust parameters or initial conditions to correct for the difference between measured and model outputs. The bias method is commonly applied to model predictive control as a quick correction when a more complete state estimator is not available. See preprint or article on various estimation methods, including the bias method. Hedengren, J. D., Eaton, A. N., Overview of Estimation Methods for Industrial Dynamic Systems, Optimization and Engineering, Springer, Vol 18 (1), 2017, pp. 155-178, DOI: 10.1007/s11081-015-9295-9. To include the bias in the model, an external bias calculation is recommended with the creation of bias1 and bias2. m.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1 m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1 m.PowerProduced =m.CV(name='PowerProduced') Substitute the Var+bias to use the biased value of the variable. The denominator terms are also rearranged to the other side of the equation to improve the numerical solution potential by avoiding potential divide-by-zero. m.Equation(m.StmToPowerRatio*(m.PowerProduced+m.bias2)==m.SteamforGeneration) m.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity+m.bias1)==\ (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\ +m.CondensateForBFW*m.Condensate_Conductivity)) The equations now use the biased value. 
Here is the complete script: from gekko import GEKKO import numpy as np m=GEKKO(remote=False) m.time=np.linspace(0,9,10) #GLOBAL OPTIONS m.options.IMODE=6 #control mode,dynamic control, simultaneous m.options.NODES=2 #collocation nodes m.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT m.options.CV_TYPE=1 #2 = squared error from reference trajectory m.options.CTRL_UNITS=3 #control time steps units (3= HOURS) m.options.MV_DCOST_SLOPE=2 m.options.CTRL_TIME=1 #1=1 hour per time step m.options.REQCTRLMODE=3 #3= CONTRO m.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power m.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product m.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity') m.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity') m.Cycles_of_Concentration = m.Param(value=12,name='COC') m.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV m.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV m.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV m.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var m.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1 m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1 m.PowerProduced =m.CV(name='PowerProduced') m.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown') m.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired') m.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product\ -m.OtherNetSteamUsers,name='StmforPower') m.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required\ -m.SodiumSoftner_Production,name='Condensate for BFW') m.Cond_SS_Ratio = m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required) m.Equation(m.StmToPowerRatio*(m.PowerProduced-m.bias2)==m.SteamforGeneration) m.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity-m.bias1)==\ (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\ +m.CondensateForBFW*m.Condensate_Conductivity)) #MV SETTINGS m.SodiumSoftner_Production.STATUS=1 # Manipulate this m.SodiumSoftner_Production.FSTATUS=1 # MEASURE this m.SodiumSoftner_Production.COST=-1 # Higher is better m.Final_Product.STATUS=1 # Manipulate this m.Final_Product.FSTATUS=1 # Measure this m.Final_Product.COST=-20 # Higher is better m.Steam_Produced.STATUS=1 # Manipulate this m.Steam_Produced.FSTATUS=1 # MEASURE this m.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance m.OtherNetSteamUsers.FSTATUS=1 # MEASURE this m.BFW_Conductivity.STATUS=1 #Control this CV m.BFW_Conductivity.FSTATUS=0 #MEASURE this CV m.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation m.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation m.BFW_Conductivity.SPHI=140 #High limit for target range m.BFW_Conductivity.SPLO=110 #Low limit for target range m.PowerProduced.STATUS=1 #Control this CV m.PowerProduced.FSTATUS=0 #MEASURE this m.PowerProduced.COST=-2 #Higher is better m.PowerProduced.WSPHI=50 #Penalty for SPHI violation m.PowerProduced.WSPLO=50 #Penalty for SPLO violation m.PowerProduced.SPHI=355 #High limit for target range m.PowerProduced.SPLO=100 #Low limit for target range #Load measurements - realistic mass balance m.Final_Product.MEAS =1200 m.SodiumSoftner_Production.MEAS =2200 m.OtherNetSteamUsers.MEAS =800 m.Steam_Produced.MEAS =3900 #m.BFW_Conductivity.MEAS =152 m.bias1.MEAS =-152 #m.PowerProduced.MEAS =188 m.bias2.MEAS =-188 m.solve() #solve 
for first step print('PowerProduced.value',m.PowerProduced.value) print('PowerProduced.PRED',m.PowerProduced.PRED) print('Steam for Generation',m.SteamforGeneration.value) import matplotlib.pyplot as plt plt.subplot(2,1,1) plt.plot(m.time,m.SteamforGeneration.value,'r-',label='SteamforGeneration') plt.legend(); plt.grid() plt.subplot(2,1,2) plt.plot(m.time,m.Steam_Produced.value,'r-.',label='Steam_Produced') plt.plot(m.time,-1.5*np.array(m.Final_Product.value),'b--',label='Final_Product') plt.plot(m.time,-np.array(m.OtherNetSteamUsers.value),'k:',label='OtherNetSteamUsers') plt.xlabel('Time') plt.legend(); plt.grid() plt.show()
How to propagate the effect of model bias via MEAS update correctly to other variables in GEKKO
EDIT: I am just checking if the issue not on the condensate side. I have a material balance optimisation problem that I have configured in GEKKO. I have reproduced my challenge on a smaller problem that I can share here. It pertains to the initial values for CV's that I have left undefined (defaulting to zero) during controller instantiation and then assigned via the MEAS attribute with FSTATUS=1 parameter before the first call to the solve() method. As expected the controller establishes a BIAS to account for the difference between MEAS and the initial controller state. It then correctly drives optimisation of the biased CV to the appropriate target. However, it then appears to continue to use the unbiased model values for the remaining to calculate other Intermediate streams and to use in Equations. The result is that the rest of the material balance shifts to a point that is not representing the actual plant operating point. Attached is a code snippet illustrating my challenge. The output is: PowerProduced.value [0.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0] PowerProduced.PRED [188.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0] Steam for Generation [1300.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0] The PRED values are realistic but the values for Steam for Generation reverts back to a explicit positional form rather than an incremental adjustment from the initial condition. I expected [1300, 1968, 1968, 1968 ...] for Steam for Generation How do I adjust the model configuration to account for this? # -*- coding: utf-8 -*- """ Created on Wed Nov 30 11:53:50 2022 @author: Jacques Strydom """ from gekko import GEKKO import numpy as np m=GEKKO(remote=False) m.time=np.linspace(0,9,10) #GLOBAL OPTIONS m.options.IMODE=6 #control mode,dynamic control, simultaneous m.options.NODES=2 #collocation nodes m.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT m.options.CV_TYPE=1 #2 = squared error from reference trajectory m.options.CTRL_UNITS=3 #control time steps units (3= HOURS) m.options.MV_DCOST_SLOPE=2 m.options.CTRL_TIME=1 #1=1 hour per time step m.options.REQCTRLMODE=3 #3= CONTRO m.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power m.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product m.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity') m.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity') m.Cycles_of_Concentration = m.Param(value=12,name='COC') m.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV m.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV m.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV m.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.PowerProduced =m.CV(name='PowerProduced') m.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown') m.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired') m.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product-m.OtherNetSteamUsers,name='StmforPower') m.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required-m.SodiumSoftner_Production,name='Condensate for BFW') m.Cond_SS_Ratio = m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required) m.Equation(m.PowerProduced==m.SteamforGeneration/m.StmToPowerRatio) 
m.Equation(m.BFW_Conductivity==(m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity+m.CondensateForBFW*m.Condensate_Conductivity)/m.BoilerFeedWater_Required) #MV SETTINGS m.SodiumSoftner_Production.STATUS=1 # Manipulate this m.SodiumSoftner_Production.FSTATUS=1 # MEASURE this m.SodiumSoftner_Production.COST=-1 # Higher is better m.Final_Product.STATUS=1 # Manipulate this m.Final_Product.FSTATUS=1 # Measure this m.Final_Product.COST=-20 # Higher is better m.Steam_Produced.STATUS=1 # Manipulate this m.Steam_Produced.FSTATUS=1 # MEASURE this m.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance m.OtherNetSteamUsers.FSTATUS=1 # MEASURE this m.BFW_Conductivity.STATUS=1 #Control this CV m.BFW_Conductivity.FSTATUS=1 #MEASURE this CV m.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation m.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation m.BFW_Conductivity.SPHI=140 #High limit for target range m.BFW_Conductivity.SPLO=110 #Low limit for target range m.PowerProduced.STATUS=1 #Control this CV m.PowerProduced.FSTATUS=1 #MEASURE this m.PowerProduced.COST=-2 #Higher is better m.PowerProduced.WSPHI=50 #Penalty for SPHI violation m.PowerProduced.WSPLO=50 #Penalty for SPLO violation m.PowerProduced.SPHI=355 #High limit for target range m.PowerProduced.SPLO=100 #Low limit for target range #Load measurements - realistic mass balance m.Final_Product.MEAS =1200 m.SodiumSoftner_Production.MEAS =2200 m.OtherNetSteamUsers.MEAS =800 m.Steam_Produced.MEAS =3900 m.BFW_Conductivity.MEAS =152 m.PowerProduced.MEAS =188 m.solve() #solve for first step print('PowerProduced.value',m.PowerProduced.value) print('PowerProduced.PRED',m.PowerProduced.PRED) print('Steam for Generation',m.SteamforGeneration.value) The process associated with the reduced problem is depicted here:
[ "Gekko uses the unbiased model values to solve the equations. The BIAS is only applied to that specific CV as an output correction. A state estimation algorithm such as a Kalman filter or Moving Horizon Estimator (MHE) is required to adjust parameters or initial conditions to correct for the difference between measured and model outputs. The bias method is commonly applied to model predictive control as a quick correction when a more complete state estimator is not available. See preprint or article on various estimation methods, including the bias method.\n\nHedengren, J. D., Eaton, A. N., Overview of Estimation Methods for Industrial Dynamic Systems, Optimization and Engineering, Springer, Vol 18 (1), 2017, pp. 155-178, DOI: 10.1007/s11081-015-9295-9.\n\nTo include the bias in the model, an external bias calculation is recommended with the creation of bias1 and bias2.\nm.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1\nm.BFW_Conductivity =m.CV(name='BFW_Conducitivy')\n\nm.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1\nm.PowerProduced =m.CV(name='PowerProduced')\n\nSubstitute the Var+bias to use the biased value of the variable. The denominator terms are also rearranged to the other side of the equation to improve the numerical solution potential by avoiding potential divide-by-zero.\nm.Equation(m.StmToPowerRatio*(m.PowerProduced+m.bias2)==m.SteamforGeneration)\nm.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity+m.bias1)==\\ \n (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\\\n +m.CondensateForBFW*m.Condensate_Conductivity))\n\nThe equations now use the biased value.\n\nHere is the complete script:\nfrom gekko import GEKKO\nimport numpy as np\n\nm=GEKKO(remote=False)\nm.time=np.linspace(0,9,10)\n\n#GLOBAL OPTIONS\nm.options.IMODE=6 #control mode,dynamic control, simultaneous\nm.options.NODES=2 #collocation nodes\nm.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT\nm.options.CV_TYPE=1 #2 = squared error from reference trajectory\nm.options.CTRL_UNITS=3 #control time steps units (3= HOURS)\nm.options.MV_DCOST_SLOPE=2\nm.options.CTRL_TIME=1 #1=1 hour per time step\nm.options.REQCTRLMODE=3 #3= CONTRO\n\nm.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power\nm.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product\n\nm.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity')\nm.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity')\nm.Cycles_of_Concentration = m.Param(value=12,name='COC')\n \n\nm.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV\nm.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV\nm.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV\nm.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var\n\nm.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1\nm.BFW_Conductivity =m.CV(name='BFW_Conducitivy')\n\nm.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1\nm.PowerProduced =m.CV(name='PowerProduced')\n\nm.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown')\nm.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired')\nm.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product\\\n -m.OtherNetSteamUsers,name='StmforPower')\nm.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required\\\n -m.SodiumSoftner_Production,name='Condensate for BFW')\nm.Cond_SS_Ratio = 
m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required)\n\nm.Equation(m.StmToPowerRatio*(m.PowerProduced-m.bias2)==m.SteamforGeneration)\nm.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity-m.bias1)==\\\n (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\\\n +m.CondensateForBFW*m.Condensate_Conductivity))\n\n#MV SETTINGS\n\nm.SodiumSoftner_Production.STATUS=1 # Manipulate this\nm.SodiumSoftner_Production.FSTATUS=1 # MEASURE this\nm.SodiumSoftner_Production.COST=-1 # Higher is better\n\nm.Final_Product.STATUS=1 # Manipulate this\nm.Final_Product.FSTATUS=1 # Measure this\nm.Final_Product.COST=-20 # Higher is better\n\nm.Steam_Produced.STATUS=1 # Manipulate this\nm.Steam_Produced.FSTATUS=1 # MEASURE this\n\nm.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance\nm.OtherNetSteamUsers.FSTATUS=1 # MEASURE this\n\nm.BFW_Conductivity.STATUS=1 #Control this CV\nm.BFW_Conductivity.FSTATUS=0 #MEASURE this CV\nm.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation\nm.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation\nm.BFW_Conductivity.SPHI=140 #High limit for target range\nm.BFW_Conductivity.SPLO=110 #Low limit for target range\n\nm.PowerProduced.STATUS=1 #Control this CV\nm.PowerProduced.FSTATUS=0 #MEASURE this\nm.PowerProduced.COST=-2 #Higher is better\nm.PowerProduced.WSPHI=50 #Penalty for SPHI violation\nm.PowerProduced.WSPLO=50 #Penalty for SPLO violation\nm.PowerProduced.SPHI=355 #High limit for target range\nm.PowerProduced.SPLO=100 #Low limit for target range\n\n#Load measurements - realistic mass balance\nm.Final_Product.MEAS =1200\nm.SodiumSoftner_Production.MEAS =2200\nm.OtherNetSteamUsers.MEAS =800\nm.Steam_Produced.MEAS =3900\n#m.BFW_Conductivity.MEAS =152\nm.bias1.MEAS =-152\n#m.PowerProduced.MEAS =188\nm.bias2.MEAS =-188\n\nm.solve() #solve for first step\n\nprint('PowerProduced.value',m.PowerProduced.value)\nprint('PowerProduced.PRED',m.PowerProduced.PRED)\nprint('Steam for Generation',m.SteamforGeneration.value)\n\nimport matplotlib.pyplot as plt\nplt.subplot(2,1,1)\nplt.plot(m.time,m.SteamforGeneration.value,'r-',label='SteamforGeneration')\nplt.legend(); plt.grid()\nplt.subplot(2,1,2)\nplt.plot(m.time,m.Steam_Produced.value,'r-.',label='Steam_Produced')\nplt.plot(m.time,-1.5*np.array(m.Final_Product.value),'b--',label='Final_Product')\nplt.plot(m.time,-np.array(m.OtherNetSteamUsers.value),'k:',label='OtherNetSteamUsers')\nplt.xlabel('Time')\nplt.legend(); plt.grid()\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "gekko", "python" ]
stackoverflow_0074640886_gekko_python.txt
Q: drop table timing out using psycopg2 (on postgres) I can't drop a table that has dependencies using psycopg2 in python because it times out. (updating to remove irrelevant info, thank you to @Adrian Klaver for the assistance so far). I have two docker images, one running a postgres database, the other a python flask application making use of multiple psycopg2 calls to create tables, insert rows, select rows, and (unsuccessfully dropping a specific table). Things I have tried: used psycopg2 to select data, insert data used psycopg2 to drop some tables successfully tried (unsuccessfully) to drop a specific table 'davey1' (through psycopg2 I get the same timeout issue) looked at locks on the table SELECT * FROM pg_locks l JOIN pg_class t ON l.relation = t.oid AND t.relkind = 'r' WHERE t.relname = 'davey1'; looked at processes running select * from pg_stat_activity; Specifically the code I call the function (i have hard coded the table name for my testing): @site.route("/drop-table", methods=['GET','POST']) @login_required def drop_table(): form = DeleteTableForm() if request.method == "POST": tablename = form.tablename.data POSTGRES_USER= os.getenv('POSTGRES_USER') POSTGRES_PASSWORD= os.getenv('POSTGRES_PASSWORD') POSTGRES_DB = os.getenv('POSTGRES_DB') POSTGRES_HOST = os.getenv('POSTGRES_HOST') POSTGRES_PORT = os.getenv('POSTGRES_PORT') try: conn = psycopg2.connect(database=POSTGRES_DB, user=POSTGRES_USER,password=POSTGRES_PASSWORD,host=POSTGRES_HOST,port=POSTGRES_PORT) cursor = conn.cursor() sql_command = "DROP TABLE "+ str(tablename) cursor.execute(sql_command) conn.commit() cursor.close() conn.close() except Exception as e: flash("Unable to Drop table " + tablename +" it does not exist","error") app.logger.info("Error %s", str(e)) cursor.close() conn.close() return render_template("drop-table.html", form=form) Update 7/11 - I don't know why, but the problem is caused by either flask @login_required and/or accessing "current_user" (both functions are part of flask_login), in my code I import them as from flask_login import login_required,current_user. I have no idea why this is happening, and it really annoying. 
If I comment out the above @login_required decorator it works fine, logs look like this: 2022-11-07 09:36:45.854 UTC [55] LOG: statement: BEGIN 2022-11-07 09:36:45.854 UTC [55] LOG: statement: DROP TABLE davey1 2022-11-07 09:36:45.858 UTC [55] LOG: statement: COMMIT 2022-11-07 09:36:45.867 UTC [33] LOG: statement: BEGIN 2022-11-07 09:36:45.867 UTC [33] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:36:45.875 UTC [33] LOG: statement: ROLLBACK When I have the @login_required included in the code, the drop table times out and I receive this log: 2022-11-07 09:38:37.192 UTC [34] LOG: statement: BEGIN 2022-11-07 09:38:37.192 UTC [34] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:38:37.209 UTC [38] LOG: statement: BEGIN 2022-11-07 09:38:37.209 UTC [38] LOG: statement: DROP TABLE davey1 I have even tried putting a "time.sleep(10)" in my code to wait for rogue database transactions to rollback (which from the logs seems like the login_required is causing perhaps?!. I am lost on how to fix this, or even debug further. A: The issue was with only including the psycopg2-binary==2.9.5 module in my requirements file... I needed to also include psycopg2==2.9.5 I don't completely understand why, but this was the solution to the problem (I found this when deploying my docker image to AWS-ECS and seeing that my uwsgi process was crashing due to psycopg2) Thank you @AdrianKlaver for your assistance.
drop table timing out using psycopg2 (on postgres)
I can't drop a table that has dependencies using psycopg2 in python because it times out. (updating to remove irrelevant info, thank you to @Adrian Klaver for the assistance so far). I have two docker images, one running a postgres database, the other a python flask application making use of multiple psycopg2 calls to create tables, insert rows, select rows, and (unsuccessfully dropping a specific table). Things I have tried: used psycopg2 to select data, insert data used psycopg2 to drop some tables successfully tried (unsuccessfully) to drop a specific table 'davey1' (through psycopg2 I get the same timeout issue) looked at locks on the table SELECT * FROM pg_locks l JOIN pg_class t ON l.relation = t.oid AND t.relkind = 'r' WHERE t.relname = 'davey1'; looked at processes running select * from pg_stat_activity; Specifically the code I call the function (i have hard coded the table name for my testing): @site.route("/drop-table", methods=['GET','POST']) @login_required def drop_table(): form = DeleteTableForm() if request.method == "POST": tablename = form.tablename.data POSTGRES_USER= os.getenv('POSTGRES_USER') POSTGRES_PASSWORD= os.getenv('POSTGRES_PASSWORD') POSTGRES_DB = os.getenv('POSTGRES_DB') POSTGRES_HOST = os.getenv('POSTGRES_HOST') POSTGRES_PORT = os.getenv('POSTGRES_PORT') try: conn = psycopg2.connect(database=POSTGRES_DB, user=POSTGRES_USER,password=POSTGRES_PASSWORD,host=POSTGRES_HOST,port=POSTGRES_PORT) cursor = conn.cursor() sql_command = "DROP TABLE "+ str(tablename) cursor.execute(sql_command) conn.commit() cursor.close() conn.close() except Exception as e: flash("Unable to Drop table " + tablename +" it does not exist","error") app.logger.info("Error %s", str(e)) cursor.close() conn.close() return render_template("drop-table.html", form=form) Update 7/11 - I don't know why, but the problem is caused by either flask @login_required and/or accessing "current_user" (both functions are part of flask_login), in my code I import them as from flask_login import login_required,current_user. I have no idea why this is happening, and it really annoying. 
If I comment out the above @login_required decorator it works fine, logs look like this: 2022-11-07 09:36:45.854 UTC [55] LOG: statement: BEGIN 2022-11-07 09:36:45.854 UTC [55] LOG: statement: DROP TABLE davey1 2022-11-07 09:36:45.858 UTC [55] LOG: statement: COMMIT 2022-11-07 09:36:45.867 UTC [33] LOG: statement: BEGIN 2022-11-07 09:36:45.867 UTC [33] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:36:45.875 UTC [33] LOG: statement: ROLLBACK When I have the @login_required included in the code, the drop table times out and I receive this log: 2022-11-07 09:38:37.192 UTC [34] LOG: statement: BEGIN 2022-11-07 09:38:37.192 UTC [34] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:38:37.209 UTC [38] LOG: statement: BEGIN 2022-11-07 09:38:37.209 UTC [38] LOG: statement: DROP TABLE davey1 I have even tried putting a "time.sleep(10)" in my code to wait for rogue database transactions to rollback (which from the logs seems like the login_required is causing perhaps?!. I am lost on how to fix this, or even debug further.
[ "The issue was with only including the psycopg2-binary==2.9.5 module in my requirements file... I needed to also include psycopg2==2.9.5\nI don't completely understand why, but this was the solution to the problem (I found this when deploying my docker image to AWS-ECS and seeing that my uwsgi process was crashing due to psycopg2)\nThank you @AdrianKlaver for your assistance.\n" ]
[ 0 ]
[]
[]
[ "flask", "postgresql", "psycopg2", "python" ]
stackoverflow_0074323783_flask_postgresql_psycopg2_python.txt
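For reference, the requirements change described in the self-answer. psycopg2-binary and psycopg2 expose the same Python package; the -binary wheel just bundles a precompiled libpq. A plausible (unverified here) explanation for the fix is that the source build links against the image's own libraries and so avoids the incompatibility that was crashing the uwsgi worker; note the source build needs libpq headers and a C compiler inside the Docker image:

# requirements.txt
psycopg2-binary==2.9.5
psycopg2==2.9.5   # source build; requires libpq-dev (or equivalent) and gcc in the image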
Q: Cloud Run with Gunicorn Best-Practise I am currently working on a service that is supposed to provide an HTTP endpoint in Cloud Run and I don't have much experience. I am currently using flask + gunicorn and can also call the service. My main problem now is optimising for multiple simultaneous requests. Currently, the service in Cloud Run has 4GB of memory and 1 CPU allocated to it. When it is called once, the instance that is started directly consumes 3.7GB of memory and about 40-50% of the CPU (I use a neural network to embed my data). Currently, my settings are very basic: memory: 4096M CPU: 1 min-instances: 0 max-instances: 1 concurrency: 80 Workers: 1 (Gunicorn) Threads: 1 (Gunicorn) Timeout: 0 (Gunicorn, as recommended by Google) If I up the number of workers to two, I would need to up the Memory to 8GB. If I do that my service should be able to work on two requests simultaneously with one instance, if this 1 CPU allocated, has more than one core. But what happens, if there is a thrid request? I would like to think, that Cloud Run will start a second instance. Does the new instance gets also 1 CPU and 8GB of memory and if not, what is the best practise for me? A: One of the best practice is to let Cloud Run scale automatically instead of trying to optimize each instance. Using 1 worker is a good idea to limit the memory footprint and reduce the cold start. I recommend to play with the threads, typically to put it to 8 or 16 to leverage the concurrency parameter. If you put those value too low, Cloud Run internal load balancer will route the request to the instance, thinking it will be able to serve it, but if Gunicorn can't access new request, you will have issues. Tune your service with the correct parameter of CPU and memory, but also the thread and the concurrency to find the correct ones. Hey is a useful tool to stress your service and observe what's happens when you scale. A: The best practice so far is For environments with multiple CPU cores, increase the number of workers to be equal to the cores available. Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling. Adjust the number of workers and threads on a per-application basis. For example, try to use a number of workers equal to the cores available and make sure there is a performance improvement, then adjust the number of threads.i.e. CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
Cloud Run with Gunicorn Best-Practise
I am currently working on a service that is supposed to provide an HTTP endpoint in Cloud Run and I don't have much experience. I am currently using flask + gunicorn and can also call the service. My main problem now is optimising for multiple simultaneous requests. Currently, the service in Cloud Run has 4GB of memory and 1 CPU allocated to it. When it is called once, the instance that is started directly consumes 3.7GB of memory and about 40-50% of the CPU (I use a neural network to embed my data). Currently, my settings are very basic:
memory: 4096M
CPU: 1
min-instances: 0
max-instances: 1
concurrency: 80
Workers: 1 (Gunicorn)
Threads: 1 (Gunicorn)
Timeout: 0 (Gunicorn, as recommended by Google)

If I up the number of workers to two, I would need to up the memory to 8GB. If I do that, my service should be able to work on two requests simultaneously with one instance, if the 1 CPU allocated has more than one core. But what happens if there is a third request? I would like to think that Cloud Run will start a second instance. Does the new instance also get 1 CPU and 8GB of memory, and if not, what is the best practice for me?
[ "One of the best practice is to let Cloud Run scale automatically instead of trying to optimize each instance. Using 1 worker is a good idea to limit the memory footprint and reduce the cold start.\nI recommend to play with the threads, typically to put it to 8 or 16 to leverage the concurrency parameter.\nIf you put those value too low, Cloud Run internal load balancer will route the request to the instance, thinking it will be able to serve it, but if Gunicorn can't access new request, you will have issues.\nTune your service with the correct parameter of CPU and memory, but also the thread and the concurrency to find the correct ones. Hey is a useful tool to stress your service and observe what's happens when you scale.\n", "The best practice so far is For environments with multiple CPU cores, increase the number of workers to be equal to the cores available. Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling. Adjust the number of workers and threads on a per-application basis. For example, try to use a number of workers equal to the cores available and make sure there is a performance improvement, then adjust the number of threads.i.e.\nCMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app\n\n" ]
[ 0, 0 ]
[]
[]
[ "google_cloud_run", "gunicorn", "python" ]
stackoverflow_0071378905_google_cloud_run_gunicorn_python.txt
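To apply the first answer's advice in practice, here is a sketch of a matching deploy command. The service name, project, image tag, and region are placeholders, and the thread count mirrors the concurrency setting as the answer suggests; the flag names are standard gcloud run deploy options.

# Gunicorn: one worker, several threads, no worker timeout
CMD exec gunicorn --bind :$PORT --workers 1 --threads 16 --timeout 0 main:app

# Cloud Run: let concurrency match what one instance can actually serve
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service:latest \
  --memory 4Gi \
  --cpu 1 \
  --concurrency 16 \
  --min-instances 0 \
  --max-instances 10 \
  --region europe-west1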
Q: The video writer is not writing any video just an empty .mp4 file. Rest is working fine. What's the problem? import cv2 import os cam = cv2.VideoCapture(r"C:/Users/User/Desktop/aayfryxljh.mp4") detector= cv2.CascadeClassifier("haarcascade_frontalface_default.xml") result = cv2.VideoWriter('C:/Users/User/Desktop/new.mp4',cv2.VideoWriter_fourcc(*'mp4v'),30,(112,112)) while (True): # reading from frame ret, frame = cam.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = detector.detectMultiScale(gray, 1.3, 5) size=(frame.shape[1],frame.shape[0]) c = cv2.waitKey(1) # if video is still left continue creating images for (x, y, w, h) in faces: cropped = frame[y: y + h, x: x + w] cv2.imshow('frame', cropped) result.write(cropped) # Release all space and windows once donecam.release() result.release() Should save the cropped faces video.I want to save it in .mp4 format. It Just shows an empty .mp4 file, I can't understand the issue. The code executes without any error A: As I mentioned in the comments the while loop never finishes and so result.release() never gets called. It looks like the code needs a way to end the while loop. Perhaps: while (True): # reading from frame ret, frame = cam.read() ### ADDED CODE: if ret == False: break gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = detector.detectMultiScale(gray, 1.3, 5) size=(frame.shape[1],frame.shape[0]) ### CHANGED CODE if cv2.waitKey(1) & 0xFF == ord('q'): break # if video is still left continue creating images for (x, y, w, h) in faces: cropped = frame[y: y + h, x: x + w] cv2.imshow('frame', cropped) result.write(cropped) # Release all space and windows once donecam.release() ### ADDED CODE cam.release() result.release() ### ADDED CODE cv2.destroyAllWindows() See the example at: https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#saving-a-video
The video writer is not writing any video just an empty .mp4 file. Rest is working fine. What's the problem?
import cv2
import os

cam = cv2.VideoCapture(r"C:/Users/User/Desktop/aayfryxljh.mp4")
detector= cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

result = cv2.VideoWriter('C:/Users/User/Desktop/new.mp4',cv2.VideoWriter_fourcc(*'mp4v'),30,(112,112))
while (True):

    # reading from frame
    ret, frame = cam.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = detector.detectMultiScale(gray, 1.3, 5)
    size=(frame.shape[1],frame.shape[0])
    c = cv2.waitKey(1)
    # if video is still left continue creating images
    for (x, y, w, h) in faces:
        cropped = frame[y: y + h, x: x + w]
        cv2.imshow('frame', cropped)
        result.write(cropped)

# Release all space and windows once donecam.release()
result.release()

It should save the cropped-faces video. I want to save it in .mp4 format. It just shows an empty .mp4 file; I can't understand the issue. The code executes without any error.
[ "As I mentioned in the comments the while loop never finishes and so result.release() never gets called. It looks like the code needs a way to end the while loop. Perhaps:\nwhile (True):\n\n # reading from frame\n ret, frame = cam.read()\n\n ### ADDED CODE:\n if ret == False:\n break\n\n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n\n faces = detector.detectMultiScale(gray, 1.3, 5)\n size=(frame.shape[1],frame.shape[0])\n \n ### CHANGED CODE\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n # if video is still left continue creating images\n for (x, y, w, h) in faces:\n cropped = frame[y: y + h, x: x + w]\n cv2.imshow('frame', cropped)\n result.write(cropped)\n\n# Release all space and windows once donecam.release()\n\n### ADDED CODE\ncam.release()\nresult.release()\n\n### ADDED CODE\ncv2.destroyAllWindows()\n\nSee the example at: https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#saving-a-video\n" ]
[ 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074626256_opencv_python.txt
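One pitfall the answer does not mention: on most backends, cv2.VideoWriter silently drops frames whose dimensions differ from the size given at construction, which also yields an empty file. Since detected face crops vary in size, a sketch of the write step with a resize is worth trying; the (112, 112) target is taken from the question's VideoWriter call.

for (x, y, w, h) in faces:
    cropped = frame[y:y + h, x:x + w]
    # resize so every frame matches the size the VideoWriter was opened with
    cropped = cv2.resize(cropped, (112, 112))
    result.write(cropped)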
Q: Why does my file keep closing after the first loop in python I'm trying to read through a large file in which I have marked the start and end lines of each segment. I'm extracting a component of each segment using regex. What I don't understand is that after the first inner loop, my code seems to have closed the file and I don't get the desired output. Simplified code below with open("data_full", 'r') as file: for x in position: print(x) s = position[x]['start'] e = position[x]['end'] title = [] abs = [] mesh = [] ti_prev = False for i,line in enumerate(file.readlines()[s:e]): print(i) print(s,e) if re.search(r'(?<=TI\s{2}-\s).*', line) is not None and ti_prev is False: title.append(re.search(r'(?<=TI\s{2}-\s).*', line).group()) ti_prev = True line_mark = i if re.search(r'(?<=\s{6}).*',line) is not None and ti_prev is True and i == (line_mark+1): title.append(re.search(r'(?<=\s{6}).*',line).group()) else: pass data[x]['title']=title What I think has happened, is that after the first inner loop file.readlines() does not work since the file is closed. But I don't understand why, since it's within my with open loop. My alternative is to read the file for each segment (9k+ segments) and is not doing wonders to my performance. Any suggestions are welcomed with thanks ! A: Assuming your indentation is wrong in the description and not actually in your original code, readlines() moves the file pointer to the end so you can't read any more lines. You need to either reopen the file or .seek(0). See this for more info: Does fp.readlines() close a file?
Why does my file keep closing after the first loop in python
I'm trying to read through a large file in which I have marked the start and end lines of each segment. I'm extracting a component of each segment using regex. What I don't understand is that after the first inner loop, my code seems to have closed the file, and I don't get the desired output. Simplified code below:
with open("data_full", 'r') as file:
    for x in position:
        print(x)
        s = position[x]['start']
        e = position[x]['end']
        title = []
        abs = []
        mesh = []
        ti_prev = False
        for i,line in enumerate(file.readlines()[s:e]):
            print(i)
            print(s,e)
            if re.search(r'(?<=TI\s{2}-\s).*', line) is not None and ti_prev is False:
                title.append(re.search(r'(?<=TI\s{2}-\s).*', line).group())
                ti_prev = True
                line_mark = i
            if re.search(r'(?<=\s{6}).*',line) is not None and ti_prev is True and i == (line_mark+1):
                title.append(re.search(r'(?<=\s{6}).*',line).group())
            else:
                pass
        data[x]['title']=title

What I think has happened is that after the first inner loop, file.readlines() does not work since the file is closed. But I don't understand why, since it's within my with open block. My alternative is to re-read the file for each segment (9k+ segments), which is not doing wonders for my performance. Any suggestions are welcome, thanks!
[ "Assuming your indentation is wrong in the description and not actually in your original code, readlines() moves the file pointer to the end so you can't read any more lines.\nYou need to either reopen the file or .seek(0).\nSee this for more info: Does fp.readlines() close a file?\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074661143_python.txt
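Since every segment is sliced from the same file, another option (a sketch, assuming position maps each key to start/end line indices as in the question) is to call readlines() once, cache the list, and slice it per segment. This sidesteps the exhausted file object entirely and avoids re-reading the file 9k+ times.

with open("data_full", "r") as file:
    lines = file.readlines()   # read the whole file once

for x in position:
    s, e = position[x]['start'], position[x]['end']
    for i, line in enumerate(lines[s:e]):
        ...                    # same regex handling as in the question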
Q: How to run Docker with python and Java? I need both java and python in my docker container to run some code. This is my dockerfile: It works perpectly if I don't add the FROM openjdk:slim #get python FROM python:3.6-slim RUN pip install --trusted-host pypi.python.org flask #get openjdk FROM openjdk:slim COPY . /targetdir WORKDIR /targetdir # Make port 81 available to the world outside this container EXPOSE 81 CMD ["python", "test.py"] And the test.py app is in the same directory: from flask import Flask import os app = Flask(__name__) @app.route("/") def hello(): html = "<h3>Test:{test}</h3>" test = os.environ['JAVA_HOME'] return html.format(test = test) if __name__ == '__main__': app.run(debug=True,host='0.0.0.0',port=81) I'm getting this error: D:\MyApps\Docker Toolbox\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown. What exactly am I doing wrong here? I'm new to docker, perhaps I'm missing a step. Additional details My goal I have to run a python program that runs a Java file. The python library I'm using requires the path to JAVA_HOME. My issues: I do not know Java, so I cannot run the file properly. My entire code is in Python, except this Java bit The Python wrapper runs the file in a way I need it to run. A: An easier solution to the above issue is to use multi-stage docker containers where you can copy the content from one to another. In the above case you can have openjdk:slim as the base container and then use content from a python container to be copied over into this base container as follows: FROM openjdk:slim COPY --from=python:3.6 / / ... <normal instructions for python container continues> ... This feature is available as of Docker 17.05 and there are more things you can do using multi-stage build as in copying only the content you need from one to another. Reference documentation A: OK it took me a little while to figure it out. And my thanks go to this answer. I think my approach didn't work because I did not have a basic version of Linux. So it goes like this: Get Linux (I'm using Alpine because it's barebones) Get Java via the package manager Get Python, PIP OPTIONAL: find and set JAVA_HOME Find the path to JAVA_HOME. Perhaps there is a better way to do this, but I did this running the running the container, then I looked inside the container using docker exec -it [COINTAINER ID] bin/bash and found it. Set JAVA_HOME in dockerfile and build + run it all again Here is the final Dockerfile ( it should work with the python code in the question) : ### 1. Get Linux FROM alpine:3.7 ### 2. Get Java via the package manager RUN apk update \ && apk upgrade \ && apk add --no-cache bash \ && apk add --no-cache --virtual=build-dependencies unzip \ && apk add --no-cache curl \ && apk add --no-cache openjdk8-jre ### 3. Get Python, PIP RUN apk add --no-cache python3 \ && python3 -m ensurepip \ && pip3 install --upgrade pip setuptools \ && rm -r /usr/lib/python*/ensurepip && \ if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \ if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \ rm -r /root/.cache ### Get Flask for the app RUN pip install --trusted-host pypi.python.org flask #### #### OPTIONAL : 4. 
SET JAVA_HOME environment variable, uncomment the line below if you need it #ENV JAVA_HOME="/usr/lib/jvm/java-1.8-openjdk" #### EXPOSE 81 ADD test.py / CMD ["python", "test.py"] I'm new to Docker, so this may not be the best possible solution. I'm open to suggestions. UPDATE: COMMON ISUUES Difficulty using python packages As Joabe Lucena pointed out here, Alpine can have issues certain python packages. I recommend that you use a Linux distro that works best for you, e.g. centos. A: Another alternative is to simply use docker-java-python image from docker hub. https://hub.docker.com/r/rappdw/docker-java-python FROM rappdw/docker-java-python:openjdk1.8.0_171-python3.6.6 RUN java -version RUN python --version A: Oh, let me add my five cents. I took python slim as a base image. Then I found open-jdk-11 (Note, open-jdk-10 will fail because it is not supported) base image code!... And copy-pasted it into my docker file. Note, copy-paste driven development is cool... ONLY when you understand each line you use in your code!!! And here it is! <!-- language: shell --> FROM python:3.7.2-slim # Do your stuff, install python. # and now Jdk RUN rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get upgrade -y \ && apt-get install -y --no-install-recommends curl ca-certificates \ && rm -rf /var/lib/apt/lists/* ENV JAVA_VERSION jdk-11.0.2+7 COPY slim-java* /usr/local/bin/ RUN set -eux; \ ARCH="$(dpkg --print-architecture)"; \ case "${ARCH}" in \ ppc64el|ppc64le) \ ESUM='c18364a778b1b990e8e62d094377af48b000f9f6a64ec21baff6a032af06386d'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.1_13.tar.gz'; \ ;; \ s390x) \ ESUM='e39aacc270731dadcdc000aaaf709adae7a08113ccf5b4a045bc87fc13458d71'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11%2B28/OpenJDK11-jdk_s390x_linux_hotspot_11_28.tar.gz'; \ ;; \ amd64|x86_64) \ ESUM='d89304a971e5186e80b6a48a9415e49583b7a5a9315ba5552d373be7782fc528'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.2%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.2_7.tar.gz'; \ ;; \ aarch64|arm64) \ ESUM='b66121b9a0c2e7176373e670a499b9d55344bcb326f67140ad6d0dc24d13d3e2'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.1_13.tar.gz'; \ ;; \ *) \ echo "Unsupported arch: ${ARCH}"; \ exit 1; \ ;; \ esac; \ curl -Lso /tmp/openjdk.tar.gz ${BINARY_URL}; \ sha256sum /tmp/openjdk.tar.gz; \ mkdir -p /opt/java/openjdk; \ cd /opt/java/openjdk; \ echo "${ESUM} /tmp/openjdk.tar.gz" | sha256sum -c -; \ tar -xf /tmp/openjdk.tar.gz; \ jdir=$(dirname $(dirname $(find /opt/java/openjdk -name javac))); \ mv ${jdir}/* /opt/java/openjdk; \ export PATH="/opt/java/openjdk/bin:$PATH"; \ apt-get update; apt-get install -y --no-install-recommends binutils; \ /usr/local/bin/slim-java.sh /opt/java/openjdk; \ apt-get remove -y binutils; \ rm -rf /var/lib/apt/lists/*; \ rm -rf ${jdir} /tmp/openjdk.tar.gz; ENV JAVA_HOME=/opt/java/openjdk \ PATH="/opt/java/openjdk/bin:$PATH" ENV JAVA_TOOL_OPTIONS="-XX:+UseContainerSupport" Now references. https://github.com/AdoptOpenJDK/openjdk-docker/blob/master/11/jdk/ubuntu/Dockerfile.hotspot.releases.slim https://hub.docker.com/_/python/ https://hub.docker.com/r/adoptopenjdk/openjdk11/ I used them to answer this question, which may help you sometime. 
Running Python and Java in Docker A: I found Sunny Pal's answer very useful but I made the copy more specific and added the necessary environment variables and update-alternatives lines so that Java was accessible from the command line in the Python container. FROM python:3.9-slim COPY --from=openjdk:8-jre-slim /usr/local/openjdk-8 /usr/local/openjdk-8 ENV JAVA_HOME /usr/local/openjdk-8 RUN update-alternatives --install /usr/bin/java java /usr/local/openjdk-8/bin/java 1 ... A: I believe that by adding FROM openjdk:slim line, you tell docker to execute all of your subsequent commands in openjdk container (which does not have python) I would approach this by creating two separate containers for openjdk and python and specify individual sets of commands for them. Docker is made to modularize your solutions and mashing everything into one container is usually a bad practice. A: I tried pajamas's anwser which worked very well for creating this image. However, when trying to install packages like gensim, pandas or else, I faced some errors like: don't know how to compile Fortran code on platform 'posix'. I searched and tried this, this and that but none worked for me. So, based on pajamas's anwser I decided to convert his image from Alpine to Centos which worked very well. So here's a Dockerfile that might help someone who's may be struggling in this scenario like I was: # Get Linux FROM centos:7 # Install Java RUN yum update -y \ && yum install java-1.8.0-openjdk -y \ && yum clean all \ && rm -rf /var/cache/yum # Set JAVA_HOME environment var ENV JAVA_HOME="/usr/lib/jvm/jre-openjdk" # Install Python RUN yum install python3 -y \ && pip3 install --upgrade pip setuptools wheel \ && if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi \ && if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi \ && yum clean all \ && rm -rf /var/cache/yum CMD ["bash"] A: you should have one FROM in your dockerfile (unless you use multi-stage build for the docker) A: I think i found easiest way to mix java jdk 17 and python3. I is not working on python2 FROM openjdk:17.0.1-jdk-slim RUN apt-get update && \ apt-get install -y software-properties-common && \ apt-get install -y python3-pip Software Commons have python3 lightweight version. (3.9.1 version) U can also install some libraries like that. RUN python3 -m pip install --upgrade pip && \ python3 -m pip install numpy && \ python3 -m pip install opencv-python OR RUN apt-get update && \ apt-get install -y ffmpeg A: Easiest is to just start from a Python image and add the OpenJDK. Note that FROM openjdk has been deprecated and replaced with eclipse-temurin FROM python:3.10 ENV JAVA_HOME=/opt/java/openjdk COPY --from=eclipse-temurin:17-jre $JAVA_HOME $JAVA_HOME ENV PATH="${JAVA_HOME}/bin:${PATH}" RUN pip install --trusted-host pypi.python.org flask See How to use this Image - Using a different base Image section of https://hub.docker.com/_/eclipse-temurin for details.
How to run Docker with python and Java?
I need both java and python in my docker container to run some code. This is my dockerfile (it works perfectly if I don't add the FROM openjdk:slim line):
#get python
FROM python:3.6-slim
RUN pip install --trusted-host pypi.python.org flask

#get openjdk
FROM openjdk:slim

COPY . /targetdir
WORKDIR /targetdir

# Make port 81 available to the world outside this container
EXPOSE 81

CMD ["python", "test.py"]

And the test.py app is in the same directory:
from flask import Flask
import os
app = Flask(__name__)

@app.route("/")
def hello():
    html = "<h3>Test:{test}</h3>"
    test = os.environ['JAVA_HOME']
    return html.format(test = test)
if __name__ == '__main__':
    app.run(debug=True,host='0.0.0.0',port=81)

I'm getting this error:
D:\MyApps\Docker Toolbox\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown.

What exactly am I doing wrong here? I'm new to docker, perhaps I'm missing a step.
Additional details
My goal
I have to run a python program that runs a Java file. The python library I'm using requires the path to JAVA_HOME.
My issues:
I do not know Java, so I cannot run the file properly.
My entire code is in Python, except this Java bit
The Python wrapper runs the file in a way I need it to run.
[ "An easier solution to the above issue is to use multi-stage docker containers where you can copy the content from one to another. In the above case you can have openjdk:slim as the base container and then use content from a python container to be copied over into this base container as follows:\nFROM openjdk:slim\nCOPY --from=python:3.6 / /\n\n... \n\n<normal instructions for python container continues>\n\n...\n\n\nThis feature is available as of Docker 17.05 and there are more things you can do using multi-stage build as in copying only the content you need from one to another.\nReference documentation\n", "OK it took me a little while to figure it out. And my thanks go to this answer.\nI think my approach didn't work because I did not have a basic version of Linux.\nSo it goes like this:\n\nGet Linux (I'm using Alpine because it's barebones)\nGet Java via the package manager\nGet Python, PIP\n\nOPTIONAL: find and set JAVA_HOME\n\nFind the path to JAVA_HOME. Perhaps there is a better way to do this, but I did this running the running the container, then I looked inside the container using docker exec -it [COINTAINER ID] bin/bash and found it.\nSet JAVA_HOME in dockerfile and build + run it all again\n\nHere is the final Dockerfile ( it should work with the python code in the question) :\n### 1. Get Linux\nFROM alpine:3.7\n\n### 2. Get Java via the package manager\nRUN apk update \\\n&& apk upgrade \\\n&& apk add --no-cache bash \\\n&& apk add --no-cache --virtual=build-dependencies unzip \\\n&& apk add --no-cache curl \\\n&& apk add --no-cache openjdk8-jre\n\n### 3. Get Python, PIP\n\nRUN apk add --no-cache python3 \\\n&& python3 -m ensurepip \\\n&& pip3 install --upgrade pip setuptools \\\n&& rm -r /usr/lib/python*/ensurepip && \\\nif [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \\\nif [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \\\nrm -r /root/.cache\n\n### Get Flask for the app\nRUN pip install --trusted-host pypi.python.org flask\n\n####\n#### OPTIONAL : 4. SET JAVA_HOME environment variable, uncomment the line below if you need it\n\n#ENV JAVA_HOME=\"/usr/lib/jvm/java-1.8-openjdk\"\n\n####\n\nEXPOSE 81 \nADD test.py /\nCMD [\"python\", \"test.py\"]\n\nI'm new to Docker, so this may not be the best possible solution. I'm open to suggestions.\nUPDATE: COMMON ISUUES\n\nDifficulty using python packages\n\nAs Joabe Lucena pointed out here, Alpine can have issues certain python packages.\nI recommend that you use a Linux distro that works best for you, e.g. centos.\n", "Another alternative is to simply use docker-java-python image from docker hub. https://hub.docker.com/r/rappdw/docker-java-python\nFROM rappdw/docker-java-python:openjdk1.8.0_171-python3.6.6\nRUN java -version\nRUN python --version\n\n", "Oh, let me add my five cents. I took python slim as a base image. Then I found open-jdk-11 (Note, open-jdk-10 will fail because it is not supported) base image code!... And copy-pasted it into my docker file. \nNote, copy-paste driven development is cool... 
ONLY when you understand each line you use in your code!!!\nAnd here it is!\n<!-- language: shell -->\nFROM python:3.7.2-slim\n\n# Do your stuff, install python.\n\n# and now Jdk\nRUN rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get upgrade -y \\\n && apt-get install -y --no-install-recommends curl ca-certificates \\\n && rm -rf /var/lib/apt/lists/*\n\nENV JAVA_VERSION jdk-11.0.2+7\n\nCOPY slim-java* /usr/local/bin/\n\nRUN set -eux; \\\n ARCH=\"$(dpkg --print-architecture)\"; \\\n case \"${ARCH}\" in \\\n ppc64el|ppc64le) \\\n ESUM='c18364a778b1b990e8e62d094377af48b000f9f6a64ec21baff6a032af06386d'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.1_13.tar.gz'; \\\n ;; \\\n s390x) \\\n ESUM='e39aacc270731dadcdc000aaaf709adae7a08113ccf5b4a045bc87fc13458d71'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11%2B28/OpenJDK11-jdk_s390x_linux_hotspot_11_28.tar.gz'; \\\n ;; \\\n amd64|x86_64) \\\n ESUM='d89304a971e5186e80b6a48a9415e49583b7a5a9315ba5552d373be7782fc528'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.2%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.2_7.tar.gz'; \\\n ;; \\\n aarch64|arm64) \\\n ESUM='b66121b9a0c2e7176373e670a499b9d55344bcb326f67140ad6d0dc24d13d3e2'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.1_13.tar.gz'; \\\n ;; \\\n *) \\\n echo \"Unsupported arch: ${ARCH}\"; \\\n exit 1; \\\n ;; \\\n esac; \\\n curl -Lso /tmp/openjdk.tar.gz ${BINARY_URL}; \\\n sha256sum /tmp/openjdk.tar.gz; \\\n mkdir -p /opt/java/openjdk; \\\n cd /opt/java/openjdk; \\\n echo \"${ESUM} /tmp/openjdk.tar.gz\" | sha256sum -c -; \\\n tar -xf /tmp/openjdk.tar.gz; \\\n jdir=$(dirname $(dirname $(find /opt/java/openjdk -name javac))); \\\n mv ${jdir}/* /opt/java/openjdk; \\\n export PATH=\"/opt/java/openjdk/bin:$PATH\"; \\\n apt-get update; apt-get install -y --no-install-recommends binutils; \\\n /usr/local/bin/slim-java.sh /opt/java/openjdk; \\\n apt-get remove -y binutils; \\\n rm -rf /var/lib/apt/lists/*; \\\n rm -rf ${jdir} /tmp/openjdk.tar.gz;\n\nENV JAVA_HOME=/opt/java/openjdk \\\n PATH=\"/opt/java/openjdk/bin:$PATH\"\nENV JAVA_TOOL_OPTIONS=\"-XX:+UseContainerSupport\"\n\nNow references.\nhttps://github.com/AdoptOpenJDK/openjdk-docker/blob/master/11/jdk/ubuntu/Dockerfile.hotspot.releases.slim\nhttps://hub.docker.com/_/python/\nhttps://hub.docker.com/r/adoptopenjdk/openjdk11/\nI used them to answer this question, which may help you sometime.\nRunning Python and Java in Docker\n", "I found Sunny Pal's answer very useful but I made the copy more specific and added the necessary environment variables and update-alternatives lines so that Java was accessible from the command line in the Python container.\nFROM python:3.9-slim\nCOPY --from=openjdk:8-jre-slim /usr/local/openjdk-8 /usr/local/openjdk-8\n\nENV JAVA_HOME /usr/local/openjdk-8\n\nRUN update-alternatives --install /usr/bin/java java /usr/local/openjdk-8/bin/java 1\n...\n\n", "I believe that by adding FROM openjdk:slim line, you tell docker to execute all of your subsequent commands in openjdk container (which does not have python)\nI would approach this by creating two separate containers for openjdk and python and specify individual sets of commands for them.\nDocker is made to modularize your solutions and mashing everything into one 
container is usually a bad practice. \n", "I tried pajamas's answer, which worked very well for creating this image. However, when trying to install packages like gensim or pandas, I faced some errors like: don't know how to compile Fortran code on platform 'posix'. I searched and tried this, this and that but none worked for me.\nSo, based on pajamas's answer I decided to convert his image from Alpine to Centos, which worked very well. So here's a Dockerfile that might help someone who may be struggling with this scenario like I was:\n# Get Linux\nFROM centos:7\n\n# Install Java\nRUN yum update -y \\\n&& yum install java-1.8.0-openjdk -y \\\n&& yum clean all \\\n&& rm -rf /var/cache/yum\n\n# Set JAVA_HOME environment var\nENV JAVA_HOME=\"/usr/lib/jvm/jre-openjdk\"\n\n# Install Python\nRUN yum install python3 -y \\\n&& pip3 install --upgrade pip setuptools wheel \\\n&& if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi \\\n&& if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi \\\n&& yum clean all \\\n&& rm -rf /var/cache/yum\n\nCMD [\"bash\"]\n\n", "You should have only one FROM in your Dockerfile (unless you are using a multi-stage build); each FROM starts a new stage, and only the final stage ends up in the image. \n", "I think I found the easiest way to mix Java JDK 17 and Python 3. It does not work on Python 2.\nFROM openjdk:17.0.1-jdk-slim\n\n\nRUN apt-get update && \\\n apt-get install -y software-properties-common && \\\n apt-get install -y python3-pip\n\nInstalling software-properties-common pulls in a lightweight python3 (version 3.9.1 here).\nYou can also install libraries the same way:\nRUN python3 -m pip install --upgrade pip && \\\n python3 -m pip install numpy && \\\n python3 -m pip install opencv-python\n\nOR\nRUN apt-get update && \\\n apt-get install -y ffmpeg\n\n", "Easiest is to just start from a Python image and add the OpenJDK. Note that FROM openjdk has been deprecated and replaced with eclipse-temurin.\nFROM python:3.10\n\nENV JAVA_HOME=/opt/java/openjdk\nCOPY --from=eclipse-temurin:17-jre $JAVA_HOME $JAVA_HOME\nENV PATH=\"${JAVA_HOME}/bin:${PATH}\"\n\nRUN pip install --trusted-host pypi.python.org flask\n\nSee the How to use this Image - Using a different base Image section of https://hub.docker.com/_/eclipse-temurin for details.\n" ]
[ 34, 30, 6, 2, 2, 1, 1, 0, 0, 0 ]
[ "Instead of using FROM openjdk:slim you can separately install Java, please refer below example:\n# Install OpenJDK-8\nRUN apt-get update && \\\napt-get install -y openjdk-8-jdk && \\\napt-get install -y ant && \\\napt-get clean;\n\n# Fix certificate issues\nRUN apt-get update && \\\napt-get install ca-certificates-java && \\\napt-get clean && \\\nupdate-ca-certificates -f;\n# Setup JAVA_HOME -- useful for docker commandline\nENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/\nRUN export JAVA_HOME\n\n" ]
[ -1 ]
[ "docker", "java", "python", "python_3.x" ]
stackoverflow_0051121875_docker_java_python_python_3.x.txt
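Whichever Dockerfile above you pick, a quick smoke test confirms both runtimes ended up on PATH; the image tag is a placeholder.

docker build -t java-python .
docker run --rm java-python sh -c "java -version && python --version"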
Q: How to connect to remote and run Python from local like in SQL DB Management tools
So if we want to run SQL on a remote server, we can connect to it using JDBC connection strings. Is there something similar but for Python? I want to develop using my already-tuned IDE instead of the clunky IDEs, like Zeppelin, that exist for remote servers.
Do you know a secure way to achieve this? I know it's possible using SSH, but I don't think that is the best option security-wise.
And if there is no option, could I get a recommendation for a powerful IDE I can install on my clusters and expose through a web interface, maybe?
Thanks!
How to connect to remote and run Python from local like in SQL DB Management tools
So if we want to run SQL on a remote server, we can connect to it using JDBC connection strings. Is there something similar but for Python? I want to develop using my already-tuned IDE instead of the clunky IDEs, like Zeppelin, that exist for remote servers.
Do you know a secure way to achieve this? I know it's possible using SSH, but I don't think that is the best option security-wise.
And if there is no option, could I get a recommendation for a powerful IDE I can install on my clusters and expose through a web interface, maybe?
Thanks!
[]
[]
[ "I'm not aware of a way to execute python on a remote instance without establishing an ssh (linux) or winrm/prsp (windows) session first. I do know of a powerful IDE that can accomplish this pretty smoothly though.\nPycharm Professional has the ability to establish an ssh session out to a target environment, allows you to setup a virtual environment on that target, and then set that virtual environment as you codes interpreter. This will effectively allow you to develop you code on your personal computer, but execute against the target server. The interactive debug mode also works in your local IDE even though the code is running on the remote server.\nIt's important to note that this functionality is only available in the PyCharm Professional edition, so a license will need to be purchased in order to develop locally but execute remotely.\nHopefully this will meet your needs of connecting to and remotely executing python code.\nLinks to Pycharm documentation for remote development:\n\nhttps://www.jetbrains.com/help/pycharm/remote-development-overview.html\nhttps://www.jetbrains.com/help/pycharm/remote-development-starting-page.html\n\n" ]
[ -1 ]
[ "pycharm", "python", "remote_server", "visual_studio_code" ]
stackoverflow_0074660678_pycharm_python_remote_server_visual_studio_code.txt
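If an IDE is not an option, the closest Python analogue to a JDBC-style connection is an SSH session driven from code. A minimal sketch with paramiko follows; the host, username, key path, and remote script path are all placeholders, and key-based authentication is assumed.

import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote.example.com", username="me",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))

# run a script that already exists on the remote machine
stdin, stdout, stderr = client.exec_command("python3 /opt/jobs/train.py")
print(stdout.read().decode())
client.close()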
Q: Quit function in python programming
I have tried to use the quit() function in Python, but Spyder keeps telling me that "quit" is not defined
print("Welcome to my computer quiz")

playing = input("Do you want to play? ")

if (playing != "yes" ):
    quit()
    
print("Okay! Let's play :)")

The output keeps saying "name 'quit' is not defined". How can I solve this problem?
A: quit() and exit() are helpers injected by Python's site module, so they are not guaranteed to be defined in every environment; the NameError you are seeing means your Spyder runner is one of those cases. The reliable way to stop a script is sys.exit(), which is always available:
import sys

print("Welcome to my computer quiz")

playing = input("Do you want to play? ")

if (playing != "yes" ):
    sys.exit()
    
print("Okay! Let's play :)")

A: Invert the logic and play only if the user answers yes. The script will then quit naturally when it reaches the end of the file
print("Welcome to my computer quiz")

playing = input("Do you want to play? ")

if (playing == "yes" ):
    print("Okay! Let's play :)")
Quit function in python programming
I have tried to use the quit() function in Python, but Spyder keeps telling me that "quit" is not defined
print("Welcome to my computer quiz")

playing = input("Do you want to play? ")

if (playing != "yes" ):
    quit()
    
print("Okay! Let's play :)")

The output keeps saying "name 'quit' is not defined". How can I solve this problem?
[ "There is no such thing as quit() in python. Python rather has exit(). Simply replace your quit() to exit().\nprint(\"Welcome to my computer quiz\")\n\nplaying = input(\"Do you want to play? \")\n\nif (playing != \"yes\" ):\n exit()\n \nprint(\"Okay! Let's play :)\")\n\n", "Invert the logic and play if the user answers yes. The game will automatically quit when it reaches the end of the file\nprint(\"Welcome to my computer quiz\")\n\nplaying = input(\"Do you want to play? \")\n\nif (playing == \"yes\" ):\n print(\"Okay! Let's play :)\")\n\n" ]
[ 2, 0 ]
[]
[]
[ "python", "runtime_error" ]
stackoverflow_0074661123_python_runtime_error.txt
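A variant of the fix above that does not depend on the site-module helpers at all, so it behaves the same in Spyder, plain scripts, and frozen apps:

import sys

print("Welcome to my computer quiz")
playing = input("Do you want to play? ")
if playing != "yes":
    sys.exit()
print("Okay! Let's play :)")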
Q: Python write serial data to the second column of my .csv file Im reading from my serialport data, I can store this data to .csv file. But the problem is that I want to write my data to a second or third column. With code the data is stored in the first column: file = open('test.csv', 'w', encoding="utf",newline="") writer = csv.writer(file) while True: if serialInst.in_waiting: packet = (serialInst.readline()) packet = [str(packet.decode().rstrip())] #decode remove \r\n strip the newline writer.writerow(packet) output of the code .csv file: Column A Column B Data 1 Data 2 Data 3 Data 4 example desired output .csv file: Column A Column B Data1 data 2 Data3 Data 4 A: I've not use the csv.writer before, but a quick read of the docs, seems to indicate that you can only write one row at a time, but you are getting data one cell/value at a time. In your code example, you already have a file handle. Instead of writing one row at a time, you want to write one cell at a time. You'll need some extra variables to keep track of when to make a new line. file = open('test.csv', 'w', encoding="utf",newline="") writer = csv.writer(file) ncols = 2 # 2 columns total in this example, but it's easy to imagine you might want more one day col = 0 # use Python convention of zero based lists/arrays while True: if serialInst.in_waiting: packet = (serialInst.readline()) packet = [str(packet.decode().rstrip())] #decode remove \r\n strip the newline if col == ncols-1: # last column, leave out comma and add newline \n file.write(packet + '\n') col = 0 # reset col to first position else: file.write(packet + ',') col = col + 1 In this code, we're using the write method of a file object instead of using the csv module. See these docs for how to directly read and write from/to files.
Python write serial data to the second column of my .csv file
I'm reading data from my serial port. I can store this data in a .csv file, but the problem is that I want to write the data into the second or third column. With this code the data is stored in the first column:
file = open('test.csv', 'w', encoding="utf",newline="")
writer = csv.writer(file)

while True:
    if serialInst.in_waiting:
        packet = (serialInst.readline())
        packet = [str(packet.decode().rstrip())] #decode remove \r\n strip the newline
        writer.writerow(packet)

output of the code .csv file:

Column A    Column B
Data 1
Data 2
Data 3
Data 4

example desired output .csv file:

Column A    Column B
Data 1      Data 2
Data 3      Data 4
[ "I've not use the csv.writer before, but a quick read of the docs, seems to indicate that you can only write one row at a time, but you are getting data one cell/value at a time.\nIn your code example, you already have a file handle. Instead of writing one row at a time, you want to write one cell at a time. You'll need some extra variables to keep track of when to make a new line.\nfile = open('test.csv', 'w', encoding=\"utf\",newline=\"\")\nwriter = csv.writer(file)\n\nncols = 2 # 2 columns total in this example, but it's easy to imagine you might want more one day\ncol = 0 # use Python convention of zero based lists/arrays\n\nwhile True:\n if serialInst.in_waiting:\n packet = (serialInst.readline())\n packet = [str(packet.decode().rstrip())] #decode remove \\r\\n strip the newline\n if col == ncols-1:\n # last column, leave out comma and add newline \\n\n file.write(packet + '\\n')\n col = 0 # reset col to first position\n else:\n file.write(packet + ',')\n col = col + 1\n\nIn this code, we're using the write method of a file object instead of using the csv module. See these docs for how to directly read and write from/to files.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074657545_python.txt
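A sketch that keeps csv.writer (so quoting and escaping stay correct) by buffering values until a full row is collected; ncols is an assumption about how many columns you want, and serialInst is the already-open serial port from the question.

import csv

ncols = 2
row = []
with open('test.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f)
    while True:
        if serialInst.in_waiting:
            row.append(serialInst.readline().decode().rstrip())
            if len(row) == ncols:    # a full row is ready
                writer.writerow(row)
                row = []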
Q: Pandas JSON Normalize multiple columns in a dataframe So I have the following dataframe: The JSON blobs all look something like this: {"id":"dddd1", "random_number":"77777"} What I want my dataframe to look like is something like this: Basically what I need is to get a way to iterate and normalize all the JSON blob columns and put them back in the dataframe in the proper rows (0-99). I have tried the following: pd.json_normalize(data_frame.iloc[:, JSON_0,JSON_99]) I get the following error: IndexingError: Too many indexers I could go through and normalize each JSON_BLOB column individually however that is inefficient, I cant think of a proper way to do this via a Lambda function or for loop because of the JSON blob. The for loop I wrote gives me the same error: array=[] for app in data_frame.iloc[:, JSON_0,JSON_99]: data = { 'id': data['id'] } array.append(data) test= pd.DataFrame(array) IndexingError: Too many indexers Also some of the JSON_Blobs have NAN values Any suggestions would be great. A: Can you try this: normalized = pd.concat([df[i].apply(pd.Series) for i in df.iloc[:,2:]],axis=1) #2 is the position number of JSON_0. final = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1) if you want the column names as in the question: normalized = pd.concat([df[i].apply(pd.Series).rename(columns={'id':'id_from_{}'.format(i),'random_number':'random_number_from_{}'.format(i)}) for i in df.iloc[:,2:]],axis=1) final = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1)
Pandas JSON Normalize multiple columns in a dataframe
So I have the following dataframe:
The JSON blobs all look something like this:
{"id":"dddd1", "random_number":"77777"}

What I want my dataframe to look like is something like this:
Basically, what I need is a way to iterate over and normalize all the JSON blob columns and put them back in the dataframe in the proper rows (0-99).
I have tried the following:
pd.json_normalize(data_frame.iloc[:, JSON_0,JSON_99])

I get the following error:
IndexingError: Too many indexers

I could go through and normalize each JSON blob column individually, but that is inefficient. I can't think of a proper way to do this via a lambda function or for loop because of the JSON blobs. The for loop I wrote gives me the same error:
array=[]
for app in data_frame.iloc[:, JSON_0,JSON_99]:
    data = {
        'id': data['id']
        }
    array.append(data)
test= pd.DataFrame(array)

IndexingError: Too many indexers

Also, some of the JSON blobs have NaN values.
Any suggestions would be great.
[ "Can you try this:\nnormalized = pd.concat([df[i].apply(pd.Series) for i in df.iloc[:,2:]],axis=1) #2 is the position number of JSON_0.\nfinal = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1)\n\nif you want the column names as in the question:\nnormalized = pd.concat([df[i].apply(pd.Series).rename(columns={'id':'id_from_{}'.format(i),'random_number':'random_number_from_{}'.format(i)}) for i in df.iloc[:,2:]],axis=1)\nfinal = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1)\n\n" ]
[ 1 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074660865_json_pandas_python.txt
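For the NaN rows the question mentions, here is a sketch built on pd.json_normalize instead of apply(pd.Series); the JSON_ column prefix is taken from the question, the empty-dict fallback covers the NaN cells, and a default RangeIndex on df is assumed so the concat aligns row by row.

import pandas as pd

blob_cols = [c for c in df.columns if c.startswith('JSON_')]
expanded = pd.concat(
    [pd.json_normalize(df[c].apply(lambda v: v if isinstance(v, dict) else {}).tolist())
       .add_prefix(f'{c}_')               # e.g. JSON_0_id, JSON_0_random_number
     for c in blob_cols],
    axis=1,
)
final = pd.concat([df.drop(columns=blob_cols), expanded], axis=1)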